AnimeSuki Forums


Go Back   AnimeSuki Forum > General > Tech Support

Old 2007-08-12, 13:09   Link #21
grey_moon
Yummy, sweet and unyuu!!!
 
 
Join Date: Dec 2004
Quote:
Originally Posted by jpwong View Post
Hehe, one of the OS teachers at my school was discussing that one day. When it turned out that the disk did fragment, MS's response at the time was "Back up your files to a tape drive, wipe the hard disk, and then restore the drive from your tape backup".
Wait till you see the restore process for Exchange: you have to restore an entire mail store and can't just recover one person's mailbox <- That might depend on the backup process, but the consultants said we needed to purchase the archive product to get some form of user-level backup.

*Edit*
Hee hee, my reply was really off topic, so here is something to bring it back on track....

Ext3 actually has no safe way to defrag apart from copying files off and back on. It is said that ext2/3 doesn't need defragmenting, but I have read of people suffering from slow ext3 drives, and when they checked their disks they were heavily fragmented. Personally, in all the years I've used it I haven't experienced slowness from fragmented ext3 drives, but it all comes down to how the disk is used, and I suspect the people suffering might have been filling their disks up too much.
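For what it's worth, the copy-off-and-back approach can be sketched in Python. This is only a sketch with an invented function name: nothing guarantees the fresh copy lands contiguously, that's up to the filesystem's allocator.

```python
import os
import shutil
import tempfile

def rewrite_contiguously(path):
    """Rewrite a file by copying it out and swapping the copy back in,
    giving the filesystem a chance to allocate it in one contiguous run.
    Sketch only: contiguity is not guaranteed, and this is not safe
    against concurrent writers."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    os.close(fd)
    try:
        shutil.copy2(path, tmp)  # copy data and metadata to the fresh file
        os.replace(tmp, path)    # atomically swap the copy into place
    finally:
        if os.path.exists(tmp):  # clean up only if the swap never happened
            os.remove(tmp)
```

Running this over a whole tree is essentially what "copy everything off and back on" does, one file at a time.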

Last edited by grey_moon; 2007-08-12 at 13:20.
Old 2007-08-12, 13:24   Link #22
Vexx
Obey the Darkly Cute ...
*Author
 
 
Join Date: Dec 2005
Location: On the whole, I'd rather be in Kyoto ...
Age: 57
Not to derail... but my epiphany that Exchange was a disaster to administrate came when I had to do an email "discovery" of a lawyer's system for a case. Gigabytes of mail.... and the built-in search tools were worse than grep and really had no way to store the results (it just gave links to the innards of the Exchange datablob rather than the desired content pieces). It was a disaster from a confidentiality standpoint.

The Microsoft Partner solution? A multi-thousand dollar third party application to actually do data mining of the Exchange database. Extensive googling mostly produced likewise complaints from other administrators.

Back on defrag: my understanding wasn't that ext3 is automagically defragged... it's just another filesystem protocol and format. It's just that *nix has had a background defragger and file mangler in place for eons. Some filesystems are just more amenable to management than others.
Old 2007-08-12, 13:30   Link #23
grey_moon
Yummy, sweet and unyuu!!!
 
 
Join Date: Dec 2004
Quote:
Originally Posted by Vexx View Post
My understanding wasn't that ext3 is automagically defragged... it's just another filesystem protocol and format. It's just that *nix has had a background defragger and file mangler in place for eons. Some filesystems are just more amenable to management than others.
I believe that ext3 tries to allocate the file space up front, so it makes a best guess at a contiguous file save. The only defragger tool I remember reading about for ext is for ext2, so you would have to convert the FS *shudder*
Old 2007-08-12, 13:43   Link #24
Ledgem
Love Yourself
 
 
Join Date: Mar 2003
Location: Northeast USA
Age: 29
On the subject of fragmentation and partition types, does anyone know about HFS+ with journaling? I think my Mac has the ability to defrag... but I also wonder, because lately my father has been experiencing loss of settings and certain email. I switched him over to Thunderbird but the problem persisted. He has very little free space, and some people on his mailing list said the problem was due to fragmentation and low disk space. Data loss due to that? Sounds like garbage to me, but then again I'm still relatively new to Mac systems. Any thoughts?

And of course, anyone know about ZFS with regard to fragmentation? Probably hasn't really been discussed, since ZFS really targets server systems.
Old 2007-08-12, 13:45   Link #25
Syaoran
Contemplating Naruto
 
 
Join Date: Dec 2005
The ext3 /home partition on my laptop is a little bit fragmented. I think it's because I access it from within Windows, as Thunderbird shares the Linux profile. The same goes for a few other things I'm working on (documents, source code, etc.).

For defragmenting the NTFS partition, I noticed that O&O Defrag seems to do it a little differently from the standard Windows tool. After a defrag, Windows boots a little faster, and the same goes for the startup speed of programs like Eclipse and Firefox.

Visit http://syaoran.miniville.fr/ or help them get a job (~_^)
Old 2007-08-12, 14:20   Link #26
Jinto
Asuki-tan Kairin ↓
 
 
Join Date: Feb 2004
Location: Fürth (GER)
Age: 34
Quote:
Originally Posted by matradley View Post
Fragmentation generally occurs when a file is written across multiple sectors. The more the information spans across sectors, and even into clusters, the greater the chance the file will be fragmented.
Afaik that depends on file size and prior fragmentation. It is not a general rule that files have to fragment when crossing sector boundaries.

Quote:
Originally Posted by matradley View Post
Journaling filesystems are great to work with. You lose some performance because of the journaling, but fault tolerance is much better in case of a failure. Free space is a concern with a journaling FS. I believe WinFS is supposed to be used in Windows Vienna (2009).
I know. What many don't know, though, is that journaling filesystems introduce a certain amount of fragmentation themselves. There is a part that holds the historic files... and then there is a part that holds the changes of/to files (pretty much like in an SVN). So basically a history even exists.

The next thing one can do with this is add timestamps, and then one has a real history... a time-travel disk (very neat for certain undo stuff - better than those restore points in Windows).
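That "time travel" idea can be sketched as a toy versioned store. This is purely conceptual and the class and method names are invented; real journaling filesystems replay their journal for crash recovery and do not keep full history like this.

```python
import time

class VersionedStore:
    """Toy journal-with-timestamps: every write is appended rather than
    overwritten, so any past state can be read back ("time travel").
    Illustrative only, nothing like a real journal's on-disk format."""

    def __init__(self):
        self._log = {}  # name -> list of (timestamp, content), in write order

    def write(self, name, content, ts=None):
        self._log.setdefault(name, []).append(
            (time.time() if ts is None else ts, content))

    def read(self, name, at=None):
        """Newest content at or before time `at` (default: latest)."""
        versions = self._log.get(name, [])
        if at is None:
            return versions[-1][1] if versions else None
        past = [c for t, c in versions if t <= at]
        return past[-1] if past else None

store = VersionedStore()
store.write("notes.txt", "first draft", ts=100)
store.write("notes.txt", "final draft", ts=200)
```

Asking for the state at time 150 returns "first draft": the undo-anything behaviour the post describes, at the cost of never reclaiming old data.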
Old 2007-08-12, 16:28   Link #27
WanderingKnight
Gregory House
*IT Support
 
 
Join Date: Jun 2006
Location: Buenos Aires, Argentina
Age: 25
What is considered a high amount of fragmentation, anyway? I'd check my ext3 /home partition, but I'd have to enable the root account (in order to unmount it) and I'm too lazy to do it. From research (read: a Google search), I found that most of the time ext3 doesn't go beyond 3% fragmentation, 5% at worst. And if I remember correctly, my Windows drives used to have 10-20% fragmentation. I could be remembering wrong, though.


Place them in a box until a quieter time | Lights down, you up and die.
Old 2007-08-12, 17:00   Link #28
Tiberium Wolf
Senior Member
 
 
Join Date: Dec 2004
Location: Portugal
Age: 34
Quote:
Originally Posted by WanderingKnight View Post
What is considered a high amount of fragmentation, anyways?
On that note: a few months ago I had a 60 GB download HDD. If you go to the Windows defragmenter and get a report, the bar showing blue and red stripes for the files was about 90% red (fragmented). The average file had around 4,000 fragments, and the bigger the file, the more fragments it had. All this just from downloading over IRC. Sometimes it would take 2 minutes to copy a 200 MB file to another HDD.
Old 2007-08-12, 17:12   Link #29
TakutoKun
Mew Member
*IT Support
 
Join Date: Aug 2007
Location: Ontario, Canada
Age: 29
Quote:
Originally Posted by WanderingKnight View Post
What is considered a high amount of fragmentation, anyway? I'd check my ext3 /home partition, but I'd have to enable the root account (in order to unmount it) and I'm too lazy to do it. From research (read: a Google search), I found that most of the time ext3 doesn't go beyond 3% fragmentation, 5% at worst. And if I remember correctly, my Windows drives used to have 10-20% fragmentation. I could be remembering wrong, though.
I did not find much on the maximum fragmentation of the ext3 FS. I have seen some users complain about ext3 systems with about 65% non-contiguous files, but people have noted that the average fragmentation of an ext3 system is around 3-3.5%.
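For reference, e2fsck reports this figure in a summary line along the lines of "11/128016 files (9.1% non-contiguous)" (exact wording may vary by version); the arithmetic behind the percentage is just:

```python
def noncontig_pct(noncontiguous_files, total_files):
    """Percentage of files stored in more than one contiguous run,
    as summarized by e2fsck's non-contiguous figure."""
    if total_files == 0:
        return 0.0
    return 100.0 * noncontiguous_files / total_files

# the ~3.5% typical case vs. the 65% horror story above
typical = noncontig_pct(35, 1000)
horror = noncontig_pct(650, 1000)
```

Note this counts fragmented files, not fragments: one file in 4,000 pieces and one file in two pieces each add the same amount to the percentage.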
Old 2007-08-12, 22:18   Link #30
grey_moon
Yummy, sweet and unyuu!!!
 
 
Join Date: Dec 2004
@Ledgem - I can't speak for the FS you are asking about, but the general rule of thumb I use for any FS is that its performance takes a nosedive once it approaches 90% of capacity. I believe OpenSuSE doesn't let a normal user write past that limit and you have to be root to do so (<- I have never tested that).

One of the main issues with fragmentation and disk usage, I believe, is that as long as the initially written data is contiguous, the user will believe their disk is not badly fragmented; but as they run out of space, the data they write can't help but be fragmented. By Sod's law, that is exactly the data they are actively using, so they will constantly be accessing fragmented data. In a Windows situation, combine this with a system-managed swap file and they are in for a world of pain.
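That rule of thumb is easy to keep an eye on from a script; a minimal sketch (the 90% threshold is the rule of thumb above, not any official filesystem limit):

```python
import shutil

FULLNESS_WARN = 0.90  # rule-of-thumb threshold, not an official limit

def usage_fraction(total, used):
    """Fraction of capacity in use."""
    return used / total if total else 0.0

def check_disk(path="/"):
    """Warn when a filesystem nears the point where new writes
    are almost guaranteed to land fragmented."""
    du = shutil.disk_usage(path)
    frac = usage_fraction(du.total, du.used)
    if frac >= FULLNESS_WARN:
        print(f"{path}: {frac:.0%} full - expect fragmented writes")
    return frac
```

Dropping `check_disk()` into a cron job would give the early warning that the users in the posts above never got.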
Old 2007-08-13, 00:34   Link #31
Jinto
Asuki-tan Kairin ↓
 
 
Join Date: Feb 2004
Location: Fürth (GER)
Age: 34
Quote:
Originally Posted by WanderingKnight View Post
What is considered a high amount of fragmentation, anyway? I'd check my ext3 /home partition, but I'd have to enable the root account (in order to unmount it) and I'm too lazy to do it. From research (read: a Google search), I found that most of the time ext3 doesn't go beyond 3% fragmentation, 5% at worst. And if I remember correctly, my Windows drives used to have 10-20% fragmentation. I could be remembering wrong, though.
When I said it only works well with lots of free space, that was not so much because of additional fragmentation... but because of performance.
Old 2007-08-13, 05:46   Link #32
TakutoKun
Mew Member
*IT Support
 
Join Date: Aug 2007
Location: Ontario, Canada
Age: 29
Here is a wonderful, but lengthy, article examining the impact of hard disk fragmentation on performance - http://www.diskeeper.com/defrag/impa...gmentation.asp . The author also compared different levels of fragmentation and load times.
Old 2007-08-13, 08:42   Link #33
Jinto
Asuki-tan Kairin ↓
 
 
Join Date: Feb 2004
Location: Fürth (GER)
Age: 34
Quote:
Originally Posted by matradley View Post
Here is a wonderful, but lengthy, article examining the impact of hard disk fragmentation on performance - http://www.diskeeper.com/defrag/impa...gmentation.asp . The author also compared different levels of fragmentation and load times.
I wonder if the fragmentation results for installing software would be better if the test had been performed with more than just 256MB of RAM (and Windows XP as the OS).

Besides, the test is not very representative anyway. The tester always copied data in one continuous go rather than in small amounts over time with several defrag sessions in between. Of course that will lead to very good performance results for experiments with serial access of successively written data on defragmented drives, and very bad performance results for fragmented drives.

But when files are out of reading order (even if each file is perfectly contiguous and defragmented), the same kind of slowdown will occur.
And since no defragmenter really knows which files belong together semantically, it might spread them far across the disk. Sometimes fragmentation within a file is less of a problem than two files read in succession sitting at totally different locations on the platter.

Example:

installation routine for program x:

install some files in program folder
install some other files in common folder
install some other files in user space
install some other files in certain windows directories

When done in succession, these files will be located close together on the drive. That leads to small access times when gathering all the app data for a program run. Now, if a defragmenter decides to relocate 50% of these files to another region of the drive, the program will actually take much longer to load.

And once the data is spread rather randomly all over the disk, it doesn't matter whether each single file is defragmented when the whole set of files is scattered.
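A toy model of head travel makes the point. The block positions below are made up, and real seek time is not linear in distance, but the ordering holds:

```python
def total_seek_distance(layout):
    """Sum of head movements when visiting block positions in order;
    a crude stand-in for seek cost."""
    pos = 0
    dist = 0
    for p in layout:
        dist += abs(p - pos)  # head travels from current to next position
        pos = p
    return dist

# four files an installer wrote back-to-back near block 1000...
clustered = [1000, 1010, 1020, 1030]
# ...versus the same files after being relocated far apart
scattered = [1000, 500000, 2000, 750000]
```

Each file in the scattered layout is perfectly unfragmented, yet reading them in succession costs orders of magnitude more head travel than the clustered layout.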

And that is where this tester's tests lacked realism. It was a simulation of a fragmentation situation, but a very one-sided and theoretical one.

Sometimes I wished people would use more structured thinking in the creation phase of test models and test cases.

edit:

the best option is not to defragment a lot, but to separate transient from permanent data onto different drives. The drive with the transient data can be wiped from time to time. (Permanent data is e.g. OS data, except the page file, or permanently installed software.)
Old 2007-08-13, 09:52   Link #34
TakutoKun
Mew Member
*IT Support
 
Join Date: Aug 2007
Location: Ontario, Canada
Age: 29
Quote:
Originally Posted by Jinto Lin View Post
edit:

the best option is not to defragment a lot, but to separate transient from permanent data onto different drives. The drive with the transient data can be wiped from time to time. (Permanent data is e.g. OS data, except the page file, or permanently installed software.)
That's a great plan. Separating the OS/applications from the pagefile would increase overall performance, and performance would definitely increase by utilizing another hard disk. Overall, the study of fragmentation seems to be relatively difficult; it changes depending on how the hard disk itself is used. One of Windows' big performance issues is the installation/removal of programs, where registry entries and files are left behind after the uninstall. Combine this with the frequent security patches that are released, and it is no wonder many Windows users have to reload their OS once in a while.
Old 2007-08-13, 11:29   Link #35
martino
makes no files now
 
 
Join Date: May 2006
Quote:
Originally Posted by matradley View Post
That's a great plan. Separating the OS/applications from the pagefile would increase overall performance, and performance would definitely increase by utilizing another hard disk.
Isn't that basically what *nix OSes do? IIRC the space on the drive is partitioned, and each part (swap, user files, etc.) is put on a different partition, or not? (I haven't had Linux installed on my system for ages now, so I barely remember how it was.)

I have my Windows set up this way myself, to some extent (due to certain Windows limitations). Windows system files and the page file are kept on partition C (along with Doc&Set and a few other files/applications which specifically need to be on the same partition as the system files), with Program Files and other stuff on a second one. It is just one hard drive, but I can definitely see less fragmentation on the OS/pagefile partition than with my previous setup, which had all of them on one partition...
"Light and shadow don't battle each other, because they're two sides of the same coin"
Old 2007-08-13, 11:32   Link #36
Jinto
Asuki-tan Kairin ↓
 
 
Join Date: Feb 2004
Location: Fürth (GER)
Age: 34
Quote:
Originally Posted by matradley View Post
That's a great plan. Separating the OS/applications from the pagefile would increase overall performance, and performance would definitely increase by utilizing another hard disk. Overall, the study of fragmentation seems to be relatively difficult; it changes depending on how the hard disk itself is used. One of Windows' big performance issues is the installation/removal of programs, where registry entries and files are left behind after the uninstall. Combine this with the frequent security patches that are released, and it is no wonder many Windows users have to reload their OS once in a while.
Well, that is very true. I get the feeling some of my friends don't feel quite right unless they wipe their Windows systems at least once a year.

Quote:
Originally Posted by martino View Post
Isn't that basically what *nix OSes do? IIRC the space on the drive is partitioned, and each part (swap, user files, etc.) is put on a different partition, or not? (I haven't had Linux installed on my system for ages now, so I barely remember how it was.)
It depends on the distribution one is using. Swap on its own separate drive is not the default (though I have one). The separation of user space and system space is usually not done with different drives or partitions, but it would be no problem to arrange things like this, since only the home folder is meant for user data and it is easy to locate this folder on another drive.

Quote:
Originally Posted by martino View Post
I have my Windows set up this way myself, to some extent (due to certain Windows limitations). Windows system files and the page file are kept on partition C (along with Doc&Set and a few other files/applications which specifically need to be on the same partition as the system files), with Program Files and other stuff on a second one. It is just one hard drive, but I can definitely see less fragmentation on the OS/pagefile partition than with my previous setup, which had all of them on one partition...
Do you use a fixed-size page file? If yes, one can keep it on the OS drive; otherwise I would reconsider this idea.
Btw, programs which you have permanently installed can be placed on the OS drive too. What I would definitely not place on the OS drive is the Documents and Settings folder (or whatever it's called).

Last edited by Jinto; 2007-08-13 at 11:42.
Old 2007-08-13, 11:44   Link #37
TakutoKun
Mew Member
*IT Support
 
Join Date: Aug 2007
Location: Ontario, Canada
Age: 29
Quote:
Originally Posted by martino View Post
Isn't that basically what *nix OSes do? IIRC the space on the drive is partitioned, and each part (swap, user files, etc.) is put on a different partition, or not? (I haven't had Linux installed on my system for ages now, so I barely remember how it was.)

I have my Windows set up this way myself, to some extent (due to certain Windows limitations). Windows system files and the page file are kept on partition C (along with Doc&Set and a few other files/applications which specifically need to be on the same partition as the system files), with Program Files and other stuff on a second one. It is just one hard drive, but I can definitely see less fragmentation on the OS/pagefile partition than with my previous setup, which had all of them on one partition...
The Linux/Unix root model is very well established. However, having your swap on the same hard disk as your main partition will not yield increased performance; you will still get the usual thrashing with a swap file on the same hard disk/partition as your main information.
Old 2007-08-13, 23:26   Link #38
grey_moon
Yummy, sweet and unyuu!!!
 
 
Join Date: Dec 2004
Quote:
Originally Posted by matradley View Post
The Linux/Unix root model is very well established. However, having your swap on the same hard disk as your main partition will not yield increased performance; you will still get the usual thrashing with a swap file on the same hard disk/partition as your main information.
On a single disk, the recommendation is to put swap on the outer edge of the disk, where it has faster slurp times. Since it is on its own swap-type partition, it won't suffer from the fragmentation problems that a swap file on a data/app drive would.

I see your point regarding the heads being used by another partition, but there is an increase in performance with a well-set-up swap partition. The real main benefit of the *nix model over the XP model is how it tries not to use swap unless needed.
Old 2007-08-14, 07:26   Link #39
TakutoKun
Mew Member
*IT Support
 
Join Date: Aug 2007
Location: Ontario, Canada
Age: 29
Quote:
Originally Posted by grey_moon View Post
On a single disk, the recommendation is to put swap on the outer edge of the disk, where it has faster slurp times. Since it is on its own swap-type partition, it won't suffer from the fragmentation problems that a swap file on a data/app drive would.

I see your point regarding the heads being used by another partition, but there is an increase in performance with a well-set-up swap partition. The real main benefit of the *nix model over the XP model is how it tries not to use swap unless needed.
The swap only gets used heavily if you have a low amount of RAM, and most people nowadays have enough RAM to reduce overall swap use. Overall, I must admit that Linux in general performs well because it is not overly resource-hungry - unless you want it to be.
Old 2007-08-14, 09:12   Link #40
grey_moon
Yummy, sweet and unyuu!!!
 
 
Join Date: Dec 2004
Quote:
Originally Posted by matradley View Post
The swap only gets used heavily if you have a low amount of RAM, and most people nowadays have enough RAM to reduce overall swap use. Overall, I must admit that Linux in general performs well because it is not overly resource-hungry - unless you want it to be.
I was thinking of my home server with 3GB: when I had XP on it, the swap file was already in use right after booting. Now, with Linux, the swap only gets used when I push it with virtual machines. The point I was trying to make is that XP, imho, likes to use the swap file.