AnimeSuki Forums

Old 2007-08-18, 10:40   Link #81
toru310
Senior Member
 
Join Date: Apr 2006
Location: Philippines
Quote:
Also keep in mind that if the system attempts to compress 3 GB of scattered data (not scattered in the sense of fragmentation, but in the sense of it being thousands, or even millions, of different files) it'll probably become unusable for a considerable time (while it attempts the compression).
Sorry, I misread the statement. I thought it cleans up compressed files, but it actually compresses old files, which could really "screw up" my system. Thanks a lot; I was thinking of using disk cleanup, so it's a good thing you stopped me.

Quote:
(1) Compressing old files means that files that are rarely used become compressed. But in the process of compressing them they become more vulnerable to data inconsistency issues (ever had a broken zip file?)
That's why I would not recommend compressing old (rarely used) files.
When you mentioned broken zip files, that's exactly what I would like to avoid.

Thanks for the information..
Old 2007-08-18, 13:53   Link #82
Tiberium Wolf
Senior Member
 
 
Join Date: Dec 2004
Location: Portugal
Age: 34
Quote:
Originally Posted by Jinto Lin View Post
Actually you described a method of how you would utilize these gaps for stuffing in small files (which have to be taken out of somewhere to stuff them into the gaps). And that practice will most likely take many small files out of a certain file sequence (order).
Actually, I said that the files taken to fill the gap should be the ones right next to it.

For example:

XXXXXXX__YYYYY_ZZZZZZ

_ is free space.


You would then take them and compact the layout into:

XXXXXXXYYYYYZZZZZZ___

At least this way the files stay in sequence order, though you move more data than if you were just filling those three spaces. You would do this for gaps up to a certain size. Whatever data is to the right, you would take it and compact it into a single block, assuming none of it is fragmented.
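The compaction described above can be sketched in a few lines. This is a toy model, not a real defragmenter: the disk is treated as a string of cells, with '_' marking free space, and every data cell slides left while keeping its original order.

```python
# Toy sketch of order-preserving left-compaction.
# The disk is modeled as a string of cells; '_' marks free space.

def compact_left(disk):
    """Return a new layout with all data slid left, order preserved."""
    data = [c for c in disk if c != '_']   # data cells, in their original order
    free = len(disk) - len(data)           # total free space
    return ''.join(data) + '_' * free      # data first, gaps collected at the end

print(compact_left("XXXXXXX__YYYYY_ZZZZZZ"))
# -> XXXXXXXYYYYYZZZZZZ___
```

Note that every cell to the right of the first gap moves, which is exactly the cost concern raised in the following posts.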
Old 2007-08-18, 14:47   Link #83
Jinto
Asuki-tan Kairin ↓
 
 
Join Date: Feb 2004
Location: Fürth (GER)
Age: 33
I see, that is even worse ^^'. It might be bad for dynamic files (like log files, the registry, and so on). And it might cause much more traffic, since even small gaps can force moves of several gigabytes. Imagine you delete a little txt file and your drive then moves 30 GB of data just to fill the gap with the neighbouring data.
Old 2007-08-18, 15:19   Link #84
Tiberium Wolf
Senior Member
 
 
Join Date: Dec 2004
Location: Portugal
Age: 34
Oi oi! You are not actually going to move gigabytes of data. That's insane.
Old 2007-08-18, 15:37   Link #85
WanderingKnight
Gregory House
*IT Support
 
 
Join Date: Jun 2006
Location: Buenos Aires, Argentina
Age: 25
Quote:
Sorry, I misread the statement. I thought it cleans up compressed files, but it actually compresses old files, which could really "screw up" my system. Thanks a lot; I was thinking of using disk cleanup, so it's a good thing you stopped me.
What I meant was that the compression would take a REALLY long time. It probably won't touch your system files (even Microsoft isn't that stupid), but if you've got a Pentium 4 (IIRC) it will take a REALLY, REALLY long time, during which the processor will be pegged at 100% and you won't be able to do much else.
Old 2007-08-18, 16:05   Link #86
Jinto
Asuki-tan Kairin ↓
 
 
Join Date: Feb 2004
Location: Fürth (GER)
Age: 33
Quote:
Originally Posted by Tiberium Wolf View Post
Oi oi! You are not actually going to move gigabytes of data. That's insane.
Not? But how would your algorithm treat something like this:

30GbyteXXXXXXXXYYZZZZZZZZZ30Gbyte

Imagine the X and Z blocks to be contiguous data, and the file YY is deleted.
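A rough cost model makes the objection concrete. The sizes here are assumptions for illustration (two 30 GB contiguous blocks with a small file YY between them, roughly matching the layout above): with strict gap-filling by neighbouring data, deleting YY forces every byte to its right to shift left by YY's size.

```python
# Illustrative cost model: bytes that must move to close the gap left by
# deleting a file, if the gap is filled by shifting all data to its right.
# The layout and sizes are assumptions, not measurements.

GB = 1024 ** 3
layout = [("X", 30 * GB), ("YY", 64 * 1024), ("Z", 30 * GB)]

def bytes_moved_on_delete(layout, victim):
    """Total bytes shifted left when `victim` is deleted and the gap closed."""
    idx = next(i for i, (name, _) in enumerate(layout) if name == victim)
    return sum(size for _, size in layout[idx + 1:])  # everything to the right

print(bytes_moved_on_delete(layout, "YY") // GB)  # -> 30
```

Deleting a 64 KB file triggers a 30 GB move, which is the "insane" scenario under discussion.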
Old 2007-08-18, 16:55   Link #87
Tiberium Wolf
Senior Member
 
 
Join Date: Dec 2004
Location: Portugal
Age: 34
And what's the 30 GB made of? All contiguous random data? Then there is no point in moving that; that's like filling a tiny hole in a sea of data. What I said would only be viable if you had tons of gaps and the blocks of data weren't that big (something looking like a bar code).
Old 2007-08-19, 01:22   Link #88
Jinto
Asuki-tan Kairin ↓
 
 
Join Date: Feb 2004
Location: Fürth (GER)
Age: 33
Quote:
Originally Posted by Tiberium Wolf View Post
And what's the 30 GB made of? All contiguous random data? Then there is no point in moving that; that's like filling a tiny hole in a sea of data. What I said would only be viable if you had tons of gaps and the blocks of data weren't that big (something looking like a bar code).
In that case you can use versioning file systems, the logical next step after journaling file systems. These systems usually use a copy-on-write strategy. Basically they provide a contiguous block of old data (the data history, containing permanent data and historic versions of data), which can easily be defragmented, and a data block with revisions of parts of the historic data (the oldest revisions in this block are written to the contiguous history block when the space is needed).
Such systems would allow for decent JIT defragmentation. (They still have a somewhat natural fragmentation, and therefore not the best performance, but JIT defragmentation would work far better on them than on normal file systems; see the next paragraph.)
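A minimal sketch of the copy-on-write idea: writes never overwrite old data; each write appends a new version, and older versions remain readable. This is a toy in-memory store, and all class and method names are invented here, not any real file system's API.

```python
# Toy copy-on-write versioning store: each write appends a new version
# instead of overwriting, so the full history stays readable.
# All names are invented for illustration.

class VersioningStore:
    def __init__(self):
        self._versions = {}  # path -> list of historic contents, oldest first

    def write(self, path, data):
        """Append a new version; never overwrite old data (copy-on-write)."""
        self._versions.setdefault(path, []).append(data)

    def read(self, path, version=-1):
        """Read a version by index; -1 (the default) is the newest."""
        return self._versions[path][version]

s = VersioningStore()
s.write("a.txt", "v1")
s.write("a.txt", "v2")
print(s.read("a.txt"))     # -> v2
print(s.read("a.txt", 0))  # -> v1
```

A real versioning file system does this at the block level, which is what makes the history region append-only and easy to keep contiguous.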

The approach you mentioned (which I hope I am now understanding correctly) does not trigger defragmentation for a given area until a certain amount of fragmentation is reached. Once that limit is reached, the area is defragmented (for example, during periods when the disk is idle). That requires the routine to always know the physical structure/layout of the disk (with one medium-sized disk this is not a big problem, but the more disks there are, the more memory has to be spent on that; in my case approx. 20-30 MB per 160 GB disk, depending on fragmentation, number of files and disk size). The only problem with it is that only simple defragmentation is possible, because only small changes are enforced: meta information such as folder structure cannot be used to keep data that belongs together semantically also physically close together. Thus single files will be spread into gaps all over the disk in the process (which is what I was addressing two or so posts above). This could only be avoided if your approach behaved as I suggested in my earlier post (but that would have other disadvantages).
Old 2007-08-19, 06:35   Link #89
TakutoKun
Mew Member
*IT Support
 
Join Date: Aug 2007
Location: Ontario, Canada
Age: 29
Quote:
Originally Posted by Jinto Lin View Post
In that case you can use versioning file systems, the logical next step after journaling file systems. These systems usually use a copy-on-write strategy. Basically they provide a contiguous block of old data (the data history, containing permanent data and historic versions of data), which can easily be defragmented, and a data block with revisions of parts of the historic data (the oldest revisions in this block are written to the contiguous history block when the space is needed).
Such systems would allow for decent JIT defragmentation. (They still have a somewhat natural fragmentation, and therefore not the best performance, but JIT defragmentation would work far better on them than on normal file systems; see the next paragraph.)
Versioning file systems look pretty neat: shadow copying and taking snapshots of files as they work. They could be very good for server fault tolerance.
Old 2007-08-19, 10:39   Link #90
hobbes_fan
You could say.....
 
 
Join Date: Apr 2007
Actually, here's a somewhat strange question, related to my newest thread about NAS/RAID: do you defrag drives in a RAID config, or does the striping render this moot?
Old 2007-08-19, 11:03   Link #91
TakutoKun
Mew Member
*IT Support
 
Join Date: Aug 2007
Location: Ontario, Canada
Age: 29
Quote:
Originally Posted by hobbes_fan View Post
Actually, here's a somewhat strange question, related to my newest thread about NAS/RAID: do you defrag drives in a RAID config, or does the striping render this moot?
You still have to defrag a RAID array. I found an interesting article you might want to read about defragging RAID, and a program called Diskeeper that keeps fragmentation from undoing the benefits of RAID. Here is the website with the article: http://www.processor.com/editorial/a...=&bJumpTo=True . By the way, which RAID level have you decided to use? I do not suggest striping (RAID 0) unless you want to risk losing all of your data. If you can afford RAID 5, it would be a bit better.
Old 2007-08-19, 11:06   Link #92
grey_moon
Yummy, sweet and unyuu!!!
 
 
Join Date: Dec 2004
RAID sits at a lower level than the FS; basically, the OS shouldn't need to know it is a RAID device (apart from management purposes). Whether defragging helps depends on what FS you are going to install on it.