I want other users to see his reply, as they may also be using 1 GB SD cards to store their homebrew DOLs, emulators, etc. I'm not talking about actually launching GC games from a 1 GB SD card with Swiss.
I believe all past homebrew used only FAT when used with an SD card; none of them supported FAT32 IIRC, or it was at least recommended to only use FAT.
Using FAT limits you to 4 GB SD cards at most, yet sd-boot and Swiss both support SDHC via FAT32?
So do they have support for both FAT versions?
SDSC: 1 MB to 4 GB = FAT (FAT12/FAT16); 2 GB to 4 GB only with 64 KB clusters (not widely supported)
SDHC: 4 GB to 32 GB = FAT32
SDXC: 32 GB to 2 TB = exFAT per the SD spec (commonly reformatted to FAT32 for homebrew use)
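To make the table concrete, here's a rough sketch (made-up function, not anything taken from sd-boot or Swiss) of picking the filesystem you'd expect from a card's capacity:

[code]
/* Rough sketch only, not sd-boot/Swiss code: map an SD card's capacity
 * to the filesystem you'd normally expect on it, per the table above. */
#include <stdint.h>
#include <stdio.h>

static const char *expected_fs(uint64_t capacity_bytes)
{
    const uint64_t GB = 1024ULL * 1024 * 1024;

    if (capacity_bytes <= 2 * GB)  return "FAT12/FAT16 (SDSC)";
    if (capacity_bytes <= 4 * GB)  return "FAT16, 64 KB clusters (SDSC)";
    if (capacity_bytes <= 32 * GB) return "FAT32 (SDHC)";
    return "exFAT per spec, often reformatted FAT32 (SDXC)";
}

int main(void)
{
    const uint64_t sizes_gb[] = { 1, 2, 4, 16, 64 };
    for (int i = 0; i < 5; i++)
        printf("%llu GB card -> %s\n",
               (unsigned long long)sizes_gb[i],
               expected_fs(sizes_gb[i] << 30));
    return 0;
}
[/code]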
The FAT file system does not contain mechanisms which prevent newly written files from becoming scattered across the partition. Other file systems, like HPFS, use free space bitmaps that indicate used and available clusters, which can then be quickly looked up in order to find free contiguous areas (improved in exFAT). Another solution is the linkage of all free clusters into one or more lists (as is done in Unix file systems). Instead, the FAT has to be scanned as an array to find free clusters, which can lead to performance penalties with large disks.
In fact, computing free disk space on FAT is one of the most resource-intensive operations, as it requires reading the entire FAT linearly. A possible justification suggested by Microsoft's Raymond Chen for limiting the maximum size of FAT32 partitions created on Windows was the time required to perform a "DIR" operation, which always displays the free disk space as the last line. Displaying this line took longer and longer as the number of clusters increased.
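To put that in code, here's a toy sketch (an in-memory FAT32-style table, not real driver code): both finding a free cluster and totalling up free space force a linear walk of the whole table, while a bitmap can skip 64 used clusters per comparison.

[code]
/* Toy illustration only, not real filesystem code: a FAT-style cluster table
 * has to be walked entry by entry, both to find a free cluster and to total
 * up free space, while a free-space bitmap can skip whole words of used
 * clusters in one comparison. */
#include <stdint.h>
#include <stdio.h>

#define NCLUSTERS 4096
#define FAT_FREE  0x00000000u          /* FAT32 marks a free cluster with 0 */

/* Find the first free cluster: nothing to do but scan the array. */
static int32_t fat_find_free(const uint32_t *fat)
{
    for (uint32_t i = 2; i < NCLUSTERS; i++)   /* clusters 0 and 1 are reserved */
        if (fat[i] == FAT_FREE)
            return (int32_t)i;
    return -1;
}

/* Free disk space: every single entry has to be read (this is the cost
 * behind the slow "DIR" free-space line mentioned above). */
static uint32_t fat_count_free(const uint32_t *fat)
{
    uint32_t n = 0;
    for (uint32_t i = 2; i < NCLUSTERS; i++)
        if (fat[i] == FAT_FREE)
            n++;
    return n;
}

/* Bitmap alternative (HPFS/exFAT style): 64 fully used clusters are skipped
 * with a single comparison. */
static int32_t bitmap_find_free(const uint64_t *bmp)
{
    for (uint32_t w = 0; w < NCLUSTERS / 64; w++) {
        if (bmp[w] == UINT64_MAX)
            continue;
        for (int b = 0; b < 64; b++)
            if (!(bmp[w] & (1ULL << b)))
                return (int32_t)(w * 64 + b);
    }
    return -1;
}

int main(void)
{
    static uint32_t fat[NCLUSTERS];
    static uint64_t bmp[NCLUSTERS / 64];

    for (uint32_t i = 0; i < 3000; i++) {       /* pretend 3000 clusters are used */
        fat[i] = 0x0FFFFFF8u;
        bmp[i / 64] |= 1ULL << (i % 64);
    }

    printf("first free cluster: FAT scan %d, bitmap %d; free clusters: %u\n",
           (int)fat_find_free(fat), (int)bitmap_find_free(bmp),
           (unsigned)fat_count_free(fat));
    return 0;
}
[/code]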
The High Performance File System (HPFS) divides disk space into bands, each with its own free space bitmap, so that multiple files opened for simultaneous writing can be expanded in separate bands.
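A rough sketch of the banding idea (made-up structures, nothing from actual HPFS): each band keeps its own bitmap, so two files growing at the same time can draw clusters from different bands and stay contiguous instead of interleaving.

[code]
/* Rough sketch of the banding idea only -- made-up structures, not HPFS code.
 * Each band has its own bitmap, so files growing at the same time can be
 * placed in different bands and stay contiguous. */
#include <stdint.h>
#include <stdio.h>

#define NBANDS            4
#define CLUSTERS_PER_BAND 64

struct band {
    uint64_t bitmap;          /* one bit per cluster in this band */
};

static struct band bands[NBANDS];

/* Allocate one cluster, preferring the band a file is already using. */
static int alloc_cluster(int preferred_band)
{
    for (int n = 0; n < NBANDS; n++) {
        int bidx = (preferred_band + n) % NBANDS;
        uint64_t *bm = &bands[bidx].bitmap;
        if (*bm == UINT64_MAX)
            continue;                          /* band full, try the next one */
        for (int b = 0; b < CLUSTERS_PER_BAND; b++) {
            if (!(*bm & (1ULL << b))) {
                *bm |= 1ULL << b;
                return bidx * CLUSTERS_PER_BAND + b;
            }
        }
    }
    return -1;                                 /* disk full */
}

int main(void)
{
    /* Two files growing simultaneously, each anchored to a different band:
     * their clusters don't interleave. */
    for (int i = 0; i < 3; i++)
        printf("file A cluster %d, file B cluster %d\n",
               alloc_cluster(0), alloc_cluster(1));
    return 0;
}
[/code]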
Some of the perceived problems with fragmentation resulted from operating system and hardware limitations.
The single-tasking DOS and the traditionally single-tasking PC hard disk architecture (only one outstanding I/O request at a time, no DMA transfers) had no mechanisms that could alleviate fragmentation by asynchronously prefetching the next data while the application was processing the previous chunks.
Similarly, write-behind caching, where present, was often not enabled by default in Microsoft software, given the risk of data loss in case of a crash, a risk made worse by the lack of hardware protection between applications and the system.
Modern operating systems have introduced these optimizations for FAT partitions, but the optimizations can still produce unwanted artifacts in case of a system crash. A Windows NT system will allocate space to files on FAT in advance, selecting large contiguous areas, but in case of a crash, files that were being appended to will appear larger than they were ever written, with dozens of kilobytes of random data at the end.
With the large cluster sizes (16 KB or 32 KB) forced by larger FAT32 partitions, external fragmentation becomes somewhat less significant, and internal fragmentation, i.e. disk space wasted because files are rarely exact multiples of the cluster size, starts to become a problem as well, especially when there are a great many small files.
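A quick back-of-the-envelope sketch (the file count and sizes here are made up, just to show the arithmetic): 10,000 files averaging 2 KB each occupy one full cluster apiece, so on a 32 KB-cluster volume roughly 20 MB of data ends up taking over 300 MB of disk.

[code]
/* Back-of-the-envelope internal fragmentation: how much space small files
 * waste at a given cluster size. The file count and sizes are made up. */
#include <stdint.h>
#include <stdio.h>

static uint64_t on_disk_size(uint64_t file_size, uint64_t cluster_size)
{
    /* Every non-empty file occupies a whole number of clusters, rounded up. */
    uint64_t clusters = (file_size + cluster_size - 1) / cluster_size;
    return clusters * cluster_size;
}

int main(void)
{
    const uint64_t cluster_sizes[] = { 4096, 16384, 32768 };   /* 4, 16, 32 KB */
    const uint64_t nfiles   = 10000;                           /* hypothetical */
    const uint64_t avg_file = 2048;                            /* 2 KB average */

    for (int i = 0; i < 3; i++) {
        uint64_t cs   = cluster_sizes[i];
        uint64_t used = nfiles * on_disk_size(avg_file, cs);
        uint64_t data = nfiles * avg_file;
        printf("%2llu KB clusters: %llu MB on disk for %llu MB of data (%llu MB wasted)\n",
               (unsigned long long)(cs / 1024),
               (unsigned long long)(used >> 20),
               (unsigned long long)(data >> 20),
               (unsigned long long)((used - data) >> 20));
    }
    return 0;
}
[/code]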