2007-08-18, 10:40 | Link #81 | ||
Senior Member
Join Date: Apr 2006
Location: Philippines
|
Quote:
Thanks for the information. |
||
2007-08-18, 13:53 | Link #82 | |
Senior Member
Join Date: Dec 2004
Location: Portugal
Age: 44
|
Quote:
For example: XXXXXXX__YYYYY_ZZZZZZ, where _ is free space. You would take that and compact it into XXXXXXXYYYYYZZZZZZ___. At least this way it won't get out of sequence order, but you move more data than if you were just filling those 3 spaces. You would do this for gaps up to a certain size. Whatever data is to the right, you would take and compact into a single block, assuming none of it is fragmented.
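A minimal sketch of that compaction idea, modeling the disk as a string of block characters with "_" as free space (the representation and function name are my own illustration, not any real defragmenter's API):

```python
def compact(disk):
    """Compact a toy disk image (string of block chars, '_' = free space).

    Used blocks keep their relative order; all free space collects
    at the right end, exactly as in the example above.
    """
    used = [b for b in disk if b != "_"]
    return "".join(used) + "_" * (len(disk) - len(used))

print(compact("XXXXXXX__YYYYY_ZZZZZZ"))  # XXXXXXXYYYYYZZZZZZ___
```

Note that this preserves order but, as the post says, rewrites everything to the right of the first gap rather than just filling the gap.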
__________________
|
|
2007-08-18, 14:47 | Link #83 |
Asuki-tan Kairin ↓
Join Date: Feb 2004
Location: Fürth (GER)
Age: 43
|
I see, that is even worse ^^'. It might be bad for dynamic files (like log files or the registry and so on). And it might cause much more traffic, since even small gaps can make it necessary to move several GBytes. Imagine you delete a little txt file and then your drive moves 30 GBytes of data just to fill the gap with the neighbouring data.
__________________
|
2007-08-18, 15:37 | Link #85 | |
Gregory House
IT Support
|
Quote:
__________________
|
|
2007-08-18, 16:05 | Link #86 | |
Asuki-tan Kairin ↓
Join Date: Feb 2004
Location: Fürth (GER)
Age: 43
|
Quote:
30Gbyte XXXXXXXX YY ZZZZZZZZZ 30Gbyte. Imagine the X and Z blocks are contiguous data and the file YY is deleted.
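To put rough numbers on the point above (all sizes hypothetical): if the layout must stay gap-free, deleting even a tiny file forces everything to its right to shift left by the size of the gap.

```python
# Toy arithmetic for the scenario above: a small file is deleted,
# and the contiguous Z block to its right must shift left to close
# the gap. The amount rewritten is the size of everything to the
# right of the gap, not the size of the gap itself.

GB = 1024 ** 3
deleted_file = 2 * 1024           # a 2 KB txt file is removed
data_right_of_gap = 30 * GB       # contiguous data after the gap

bytes_moved = data_right_of_gap   # every byte to the right is rewritten
print(f"freed {deleted_file} bytes, moved {bytes_moved // GB} GBytes")
```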
__________________
|
|
2007-08-18, 16:55 | Link #87 |
Senior Member
Join Date: Dec 2004
Location: Portugal
Age: 44
|
And what's the 30 GB made of? All contiguous random data? Then there is no point in moving that; that's like filling a tiny hole in a sea of data. What you said would only be viable if you had tons of gaps and the blocks of data weren't that big (something looking like a barcode).
__________________
|
2007-08-19, 01:22 | Link #88 | |
Asuki-tan Kairin ↓
Join Date: Feb 2004
Location: Fürth (GER)
Age: 43
|
Quote:
Such systems would allow for a decent JIT defragmentation. (They still have a somewhat natural fragmentation and therefore not the best performance, but JIT defragmentation would work far better on them than on normal file systems - see the next paragraph.)

The approach you mentioned (which I hope I am understanding correctly now) does not trigger defragmentation for a certain area before a certain amount of fragmentation is reached. Once that fragmentation limit is reached, the area is defragmented (for example during a period where the disk is idle). That would require that the routine always knows the physical structure/layout of the disk. With one medium-sized disk this is not a big problem, but the more disks there are, the more memory has to be spent on that - in my case approx. 20-30 MBytes per 160 GByte disk, depending on fragmentation, number of files and disk size.

The only problem with it is that only simple defragmentation would be possible, because only small changes are enforced: meta information like folder structures cannot be sufficiently utilized to keep data that belongs together semantically also physically together. Thus single files will be spread into gaps all over the disk in the process (which is what I was addressing two or so posts above). That could only be avoided if your approach behaved like I was suggesting in the earlier post (but that would have other disadvantages).
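The threshold idea described above can be sketched with a toy model (entirely my own illustration: the block-string representation, the FRAG_LIMIT value, and the alphabetical grouping of blocks are assumptions, not any real filesystem's behavior):

```python
FRAG_LIMIT = 0.25  # hypothetical fragmentation limit per region

def fragmentation_ratio(disk):
    """disk: string of block chars, '_' = free space.

    Counts how many extra extents exist beyond one contiguous run
    per file. A region where every file is contiguous scores 0.0.
    """
    extents = 0
    files = set()
    prev = None
    for b in disk:
        if b != "_" and b != prev:   # start of a new extent
            extents += 1
            files.add(b)
        prev = b
    return (extents - len(files)) / max(len(files), 1)

def maybe_defragment(disk):
    """Only rewrite the region once the limit is crossed (e.g. while
    the disk is idle); below the limit, leave it untouched."""
    if fragmentation_ratio(disk) > FRAG_LIMIT:
        used = sorted(b for b in disk if b != "_")
        # sorting groups each file's blocks together; a toy stand-in
        # for real placement policy, which would use metadata instead
        return "".join(used) + "_" * disk.count("_")
    return disk
```

For example, `maybe_defragment("XX_YY_XX")` rewrites the region because file X is split in two, while `maybe_defragment("XXXX__YY")` returns it unchanged since every file is already contiguous.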
__________________
|
|
2007-08-19, 06:35 | Link #89 | |
Mew Member
IT Support
Join Date: Aug 2007
Location: Ontario, Canada
Age: 39
|
Quote:
|
|