Thread: Defrag: A myth?
2007-08-19, 01:22   Link #88
Jinto
Asuki-tan Kairin ↓
 
 
Join Date: Feb 2004
Location: Fürth (GER)
Age: 43
Quote:
Originally Posted by Tiberium Wolf View Post
And what's the 30 GB made of? All contiguous random data? Then there is no point in moving it. That's like filling a tiny hole in a sea of data. What you described would only be viable if you had tons of gaps and the blocks of data weren't that big (something looking like a bar code).
In that case you can use versioning file systems, the logical next step after journaling file systems. These systems usually use a copy-on-write strategy. Basically they provide one contiguous block of old data (the data history, containing permanent data and historic versions of data), which can easily be defragmented, and a second block holding recent revisions of parts of that data (the oldest revisions in this block are written out to the contiguous history block when the space is needed).
Such systems would allow for decent JIT (just-in-time) defragmentation. (They still show a somewhat natural fragmentation and therefore do not have the best performance, but JIT defragmentation would work far better on them than on normal file systems; more on that in the paragraph further below.)
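
To make the copy-on-write idea more concrete, here is a minimal Python sketch. The split into a contiguous "history" area and a small "revision" area, the capacity limit and the migration rule are my own illustrative assumptions, not the layout of any real file system:

Code:

class VersioningStore:
    """Toy copy-on-write store: new versions are appended, never overwritten."""

    def __init__(self, revision_capacity=4):
        self.history = []        # contiguous, append-only area of old data
        self.revisions = {}      # block_id -> list of versions, newest last
        self.revision_capacity = revision_capacity

    def write(self, block_id, data):
        # copy-on-write: append a new version instead of overwriting in place
        self.revisions.setdefault(block_id, []).append(data)
        if sum(len(v) for v in self.revisions.values()) > self.revision_capacity:
            self._migrate_oldest()

    def _migrate_oldest(self):
        # when the revision area fills up, push the oldest revision of the
        # fullest block into the contiguous history area (which only ever
        # grows at one end and is therefore easy to keep defragmented)
        block_id = max(self.revisions, key=lambda b: len(self.revisions[b]))
        self.history.append((block_id, self.revisions[block_id].pop(0)))

    def read(self, block_id):
        # newest revision wins; otherwise fall back to the history area
        if self.revisions.get(block_id):
            return self.revisions[block_id][-1]
        for bid, data in reversed(self.history):
            if bid == block_id:
                return data
        raise KeyError(block_id)

    # e.g. store = VersioningStore(); store.write("fileA", "v1"); store.write("fileA", "v2")
    # store.read("fileA") -> "v2", while "v1" eventually migrates into the history area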

The approach you mention (which I hope I am understanding correctly now) does not trigger defragmentation for a given area until a certain amount of fragmentation is reached. Once that limit is reached, the area is defragmented, for example during a period in which the disk is idle. This requires that the routine always knows the physical structure/layout of the disk. With a single medium-sized disk that is not a big problem, but the more disks there are, the more memory has to be spent on it; in my case roughly 20-30 MB per 160 GB disk, depending on fragmentation, number of files and disk size. The real problem is that only simple defragmentation is possible this way, because only small changes are enforced: meta information such as the folder structure cannot be used to keep data that belongs together semantically also physically together. As a result, single files end up spread into gaps all over the disk in the process (which is what I was addressing two or so posts above). That could only be avoided if the approach you mention behaved the way I suggested in my earlier post (but that would have other disadvantages). A rough sketch of the threshold-triggered idea follows below.
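
For illustration, a rough Python sketch of that threshold-triggered idea. The region size, the fragmentation estimate and the 10% threshold are assumptions of mine, purely to show the mechanism, including the limitation that such a routine only sees extents, not folder semantics:

Code:

FRAG_THRESHOLD = 0.10    # defragment a region once its estimate exceeds 10%

class RegionMap:
    """In-memory picture of how fragmented each region of one disk is."""

    def __init__(self, disk_size_blocks, region_size_blocks):
        self.fragmentation = [0.0] * (disk_size_blocks // region_size_blocks)

    def record_write(self, region_index, caused_fragment):
        # crude running estimate of the region's fragmentation
        old = self.fragmentation[region_index]
        self.fragmentation[region_index] = 0.95 * old + (0.05 if caused_fragment else 0.0)

    def regions_over_limit(self):
        return [i for i, f in enumerate(self.fragmentation) if f > FRAG_THRESHOLD]

def idle_defrag_pass(region_map, defragment_region, disk_is_idle):
    # defragment only the regions over the limit, and only while the disk is
    # idle; note that this sees extents, not folders, so it cannot keep
    # semantically related files physically together
    for region in region_map.regions_over_limit():
        if not disk_is_idle():
            break
        defragment_region(region)
        region_map.fragmentation[region] = 0.0

# e.g. one map for a hypothetical 160 GB disk, 4 KB blocks, 64 MB regions:
# rmap = RegionMap(disk_size_blocks=160 * 1024 * 1024 // 4,
#                  region_size_blocks=64 * 1024 // 4)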
__________________
Folding@Home, Team Animesuki