2007-08-12, 13:09 | Link #21 | |
Yummy, sweet and unyuu!!!
Join Date: Dec 2004
|
Quote:
*Edit* Hee hee, my reply was really off topic, so here is something to bring it back on track... Ext3 actually doesn't have any safe way to defragment in place apart from copying files off and back on. It is said that ext2/3 doesn't need defragmenting, but I have read of people suffering from slow ext3 drives who then checked their disks and found them heavily fragmented. Personally, in all the years I've used it I haven't experienced slowness from fragmented ext3 drives, but it all comes down to how the disk is used, and I suspect that the people suffering might have been filling their disks up too much.
__________________
Last edited by grey_moon; 2007-08-12 at 13:20. |
|
2007-08-12, 13:24 | Link #22 |
Obey the Darkly Cute ...
Author
Join Date: Dec 2005
Location: On the whole, I'd rather be in Kyoto ...
Age: 66
|
Not to derail... but my epiphany that Exchange was a disaster to administer came when I had to do an email "discovery" on a lawyer's system for a case. Gigabytes of mail... and the built-in search tools were worse than grep and really had no way to store the results (they just gave links into the innards of the Exchange datablob rather than the desired content pieces). It was a disaster from a confidentiality standpoint.
The Microsoft Partner solution? A multi-thousand-dollar third-party application to actually do data mining on the Exchange database. Extensive googling mostly turned up the same complaints from other administrators. Back on defrag: my understanding wasn't that ext3 is automagically defragmented; it's just another file system protocol and format. It's just that *nix has had background defraggers and file manglers in place for eons. Some file systems are simply more amenable to management than others.
__________________
|
2007-08-12, 13:30 | Link #23 |
Yummy, sweet and unyuu!!!
Join Date: Dec 2004
|
I believe that ext3 tries to allocate the file's space up front, so it makes a best guess at a contiguous file save. The only defragger tool I remember reading about for ext is for ext2, so you would have to convert the FS *shudder*
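To make the "best guess at contiguous" idea concrete, here is a toy first-fit block allocator in Python. This is purely an illustration (not ext3's actual algorithm): it hands out one contiguous extent when a big enough hole exists, and only splits a file across holes when it must.

```python
# Toy model of up-front allocation. A disk is a list of booleans
# (True = free block); a file is a list of (start, length) extents.

def first_fit(free, n):
    """Start index of the first run of n free blocks, or None."""
    run = 0
    for i, ok in enumerate(free):
        run = run + 1 if ok else 0
        if run == n:
            return i - n + 1
    return None

def allocate(free, n):
    """Allocate n blocks; return the file's extents as (start, length)."""
    start = first_fit(free, n)
    if start is not None:                  # contiguous: a single extent
        for b in range(start, start + n):
            free[b] = False
        return [(start, n)]
    extents, need, i = [], n, 0            # otherwise: split across holes
    while need and i < len(free):
        if free[i]:
            j = i
            while j < len(free) and free[j] and j - i < need:
                free[j] = False
                j += 1
            extents.append((i, j - i))
            need -= j - i
            i = j
        else:
            i += 1
    return extents

disk = [True] * 12
a = allocate(disk, 4)                      # [(0, 4)] -- contiguous
b = allocate(disk, 4)                      # [(4, 4)] -- contiguous
for blk in range(0, 4):                    # delete file a, leaving a hole
    disk[blk] = True
c = allocate(disk, 6)                      # [(0, 4), (8, 2)] -- fragmented
```

The last allocation shows the failure mode: 8 blocks are free, but no single hole holds 6, so the file ends up in two extents.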
__________________
|
2007-08-12, 13:43 | Link #24 |
Love Yourself
Join Date: Mar 2003
Location: Northeast USA
Age: 38
|
On the subject of fragmentation and partition types, does anyone know about HFS+ with journaling? I think that my Mac has the ability to defrag... but I also wonder, because lately my father has been losing settings and certain emails. I switched him over to Thunderbird, but the problem persisted. He has very little free space, and some people on his mailing list said that the problem was due to fragmentation and low disk space. Data loss due to that? Sounds like garbage to me, but then again I'm still relatively new to Mac systems. Any thoughts?
And of course, does anyone know about ZFS with regard to fragmentation? It probably hasn't really been discussed, since ZFS mainly targets server systems.
__________________
|
2007-08-12, 13:45 | Link #25 |
Inactive Member
Join Date: Dec 2005
|
The ext3 /home partition on my laptop is a little bit fragmented. I think it's because I access it from within Windows, as Thunderbird shares the Linux profile. The same goes for a few other things I'm working on (documents, source code, etc.).
For defragmenting the NTFS partition, I noticed that O&O Defrag seems to do it a little differently from the standard Windows tool. After a defrag, Windows boots a little faster, and the same goes for the startup speed of programs like Eclipse and Firefox. |
2007-08-12, 14:20 | Link #26 | ||
Asuki-tan Kairin ↓
Join Date: Feb 2004
Location: Fürth (GER)
Age: 43
|
Quote:
Quote:
The next thing one can do with this is add time stamps, and then one has a real history... a time-travel disk (very neat for certain undo tasks; better than those restore points in Windows).
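As a toy illustration of the timestamped-history idea (hypothetical names; a real snapshotting filesystem works at the block level, nothing like this): every write keeps its timestamp, and a read can ask for the state at any past moment.

```python
import bisect

class TimeTravelStore:
    """Toy versioned store: keeps every write, readable at any past time."""

    def __init__(self):
        self._times = {}   # path -> [t0, t1, ...] (ascending)
        self._data = {}    # path -> [content0, content1, ...]

    def write(self, t, path, content):
        # assumption: writes arrive in ascending time order
        self._times.setdefault(path, []).append(t)
        self._data.setdefault(path, []).append(content)

    def read(self, path, at=float("inf")):
        """Content of path as of time `at` (latest version with t <= at)."""
        times = self._times.get(path, [])
        i = bisect.bisect_right(times, at)
        return self._data[path][i - 1] if i else None

s = TimeTravelStore()
s.write(1, "/etc/motd", "hello")
s.write(5, "/etc/motd", "goodbye")
print(s.read("/etc/motd", at=3))   # hello  (the old version -- undo!)
print(s.read("/etc/motd"))         # goodbye (current state)
```

The "undo" in the post is just a read with an earlier timestamp; nothing is ever overwritten.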
__________________
|
||
2007-08-12, 16:28 | Link #27 |
Gregory House
IT Support
|
What is considered a high amount of fragmentation, anyway? I'd check my ext3 /home folder, but I'd have to enable the root account (in order to unmount it) and I'm too lazy to do it. From my research (read: a Google search), I found that most of the time ext3 doesn't go beyond 3% fragmentation, 5% at worst. And, if I remember correctly, my Windows drives used to have 10-20% fragmentation. I could be remembering wrong, though.
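For what it's worth, the "x% non-contiguous" figure that fsck prints is essentially the share of files stored in more than one extent. A minimal sketch of that calculation, with each file modelled as a list of (start, length) extents:

```python
# Fragmentation percentage as "files in more than one piece / all files".
# Toy model only; real tools read the filesystem's extent metadata.

def pct_fragmented(files):
    """files: list of files, each a list of (start, length) extents."""
    frag = sum(1 for extents in files if len(extents) > 1)
    return 100.0 * frag / len(files)

files = [
    [(0, 8)],                      # contiguous
    [(8, 4), (40, 4)],             # 2 fragments
    [(12, 16)],                    # contiguous
    [(30, 2), (50, 2), (90, 2)],   # 3 fragments
]
print(pct_fragmented(files))       # 50.0
```

Note this metric says nothing about *how badly* each file is split, which is why the per-file fragment counts mentioned later in the thread can look so much scarier.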
__________________
|
2007-08-12, 17:00 | Link #28 |
Senior Member
Join Date: Dec 2004
Location: Portugal
Age: 44
|
About that: a few months ago I had a 60 GB download HDD. You go to the Windows defrag and get a report; in the bar of blue and red stripes representing the files, you would see 90% red (fragmented). The average file had around 4000 fragments, and the bigger the file, the more fragments it had. All this just from downloading over IRC. Sometimes it would take 2 minutes to copy a 200 MB file to another HDD.
__________________
|
2007-08-12, 17:12 | Link #29 | |
Mew Member
IT Support
Join Date: Aug 2007
Location: Ontario, Canada
Age: 39
|
Quote:
|
|
2007-08-12, 22:18 | Link #30 |
Yummy, sweet and unyuu!!!
Join Date: Dec 2004
|
@Ledgem - I can't speak for the FS you are asking about, but the general rule of thumb I use for any FS is that its performance takes a nosedive once it approaches 90% of capacity. I believe that OpenSuSE doesn't let a normal user write past that limit and you have to be root to do so (<- I have never tested that ).
One of the main issues, I believe, when it comes to fragmentation and disk usage is this: as long as the initially written data is contiguous, the user will believe their disk is not badly fragmented, but as they run out of space, the data they write can't help but be fragmented. By sod's law, that is exactly the data they are actively using, so they will be accessing fragmented data. In a Windows situation, combine this with a system-managed swap file and they are in for a world of pain.
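The point above can be sketched numerically: on a nearly full disk the free space is a handful of small holes, so any sizeable new file must be split across them no matter how clever the allocator is. A toy model:

```python
# Toy model of a nearly full disk: free space exists only as small,
# scattered holes, so late writes are necessarily fragmented.

def holes(free):
    """List the (start, length) runs of free blocks."""
    runs, i = [], 0
    while i < len(free):
        if free[i]:
            j = i
            while j < len(free) and free[j]:
                j += 1
            runs.append((i, j - i))
            i = j
        else:
            i += 1
    return runs

# 100-block disk, ~90% full, free space left as four 3-block holes
free = [False] * 100
for start in (9, 27, 55, 81):
    for b in range(start, start + 3):
        free[b] = True

hole_list = holes(free)
print(hole_list)                     # [(9, 3), (27, 3), (55, 3), (81, 3)]
print(sum(n for _, n in hole_list))  # 12 free blocks, but largest hole is 3
```

A new 10-block file fits in total space but must be split across at least four extents, and every one of those extents is far from the others.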
__________________
|
2007-08-13, 00:34 | Link #31 | |
Asuki-tan Kairin ↓
Join Date: Feb 2004
Location: Fürth (GER)
Age: 43
|
Quote:
__________________
|
|
2007-08-13, 05:46 | Link #32 |
Mew Member
IT Support
Join Date: Aug 2007
Location: Ontario, Canada
Age: 39
|
Here is a wonderful, but lengthy, article comparing hard disk fragmentation against performance: http://www.diskeeper.com/defrag/impa...gmentation.asp . It seems the author also compared different levels of fragmentation and load times.
|
2007-08-13, 08:42 | Link #33 | |
Asuki-tan Kairin ↓
Join Date: Feb 2004
Location: Fürth (GER)
Age: 43
|
Quote:
Besides, the test is not very representative anyway. The tester always copied data continuously, not in small amounts over time with several defrag sessions in between. Of course that will lead to very good performance results for experiments with serial access of successively written data on defragmented drives, and very bad results for fragmented drives. But when files are read out of order (even if each file is perfectly contiguous and defragmented), this fragmentation slowdown will still occur. And since no defragmenter really knows which files belong together semantically, it might spread them far across the disk. Sometimes fragmentation within a file is less of a problem than two files sitting at totally different locations on the platter when they are read in succession.
Example: the installation routine for program x might:
- install some files in the program folder
- install some other files in a common folder
- install some other files in user space
- install some other files in certain Windows directories
When done in succession, these files will be located close together on the drive, which leads to short access times when gathering all the app data for a program run. Now, if a defragmenter decides to relocate 50% of these files to another region of the drive, the program will actually take much longer to load. And once the data is spread more or less randomly across the disk, it doesn't matter whether each single file is defragmented when the whole set of files is scattered instead. That is where this tester's tests lacked reality: it was a simulation of a fragmentation situation, but a very one-sided and theoretical one. Sometimes I wish people would use more structured thinking when creating test models and test cases.
edit: the best option is not to defragment a lot, but to separate transient from permanent data onto different drives. The drive with the transient data can be wiped from time to time. (Permanent data is e.g. OS data - except the page file - or permanently installed software.)
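The placement argument can be put into a toy cost model: treat head movement as the distance between the end of one read and the start of the next. Every file below is perfectly contiguous; only where the files sit relative to each other differs. (Illustrative numbers only.)

```python
# Toy seek-cost model: cost = total head movement when reading a set
# of (individually contiguous) files in a fixed order.

def seek_cost(layout, read_order):
    """layout: file -> (start, length) on a 1-D 'platter'."""
    cost, pos = 0, 0
    for f in read_order:
        start, length = layout[f]
        cost += abs(start - pos)   # seek to the file
        pos = start + length       # head ends just past the file
    return cost

order = ["exe", "dll", "config"]
# all three files installed in succession -> clustered on the disk
clustered = {"exe": (100, 10), "dll": (110, 5), "config": (115, 1)}
# same files, each still contiguous, but relocated far apart
scattered = {"exe": (100, 10), "dll": (9000, 5), "config": (500, 1)}

print(seek_cost(clustered, order))   # 100
print(seek_cost(scattered, order))   # 17495
```

Zero per-file fragmentation in both cases, yet the scattered layout costs two orders of magnitude more head travel, which is exactly the effect a naive single-file benchmark never measures.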
__________________
|
|
2007-08-13, 09:52 | Link #34 | |
Mew Member
IT Support
Join Date: Aug 2007
Location: Ontario, Canada
Age: 39
|
Quote:
|
|
2007-08-13, 11:29 | Link #35 | |
makes no files now
Join Date: May 2006
|
Quote:
I myself have my Windows set up this way, to some extent (due to certain Windows limitations). Windows system files and the page file are kept on partition C (along with Docs & Settings and a few other files/applications which specifically need to be on the same partition as the system files), and Program Files and other stuff on a second one. It is just one hard drive, but I can definitely see less fragmentation on the OS/pagefile partition than with my previous setup, which had everything on one partition...
__________________
|
|
2007-08-13, 11:32 | Link #36 | |||
Asuki-tan Kairin ↓
Join Date: Feb 2004
Location: Fürth (GER)
Age: 43
|
Quote:
Quote:
Quote:
Btw, programs which you have permanently installed (always) can be placed on the OS drive too. What I would definitely not place on the OS drive is the Documents and Settings folder (or whatever it's called).
__________________
Last edited by Jinto; 2007-08-13 at 11:42. |
|||
2007-08-13, 11:44 | Link #37 | |
Mew Member
IT Support
Join Date: Aug 2007
Location: Ontario, Canada
Age: 39
|
Quote:
|
|
2007-08-13, 23:26 | Link #38 | |
Yummy, sweet and unyuu!!!
Join Date: Dec 2004
|
Quote:
I see your point regarding the heads being used by another partition, but there is an increase in performance with a well set-up swap partition. The actual main benefit of the *nix model over the XP model is how it tries not to use swap unless it's needed.
__________________
|
|
2007-08-14, 07:26 | Link #39 | |
Mew Member
IT Support
Join Date: Aug 2007
Location: Ontario, Canada
Age: 39
|
Quote:
|
|
2007-08-14, 09:12 | Link #40 | |
Yummy, sweet and unyuu!!!
Join Date: Dec 2004
|
Quote:
__________________
|
|