In one of the computer magazines I read (c't) there was a big report about disk fragmentation and its impact on performance. They also showed that when you write two files at the same time (e.g. two downloads) to an NTFS partition, the two files can end up using alternating clusters on the hard disk.
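That interleaving effect is easy to model. Here's a toy sketch of a naive next-free-cluster allocator (my own illustration, not NTFS's actual algorithm) with two files appending alternately, like two simultaneous downloads:

```python
def simulate_concurrent_writes(n_clusters):
    """Naive allocator: every write request takes the next free cluster.
    Two files appending alternately end up interleaved cluster by cluster."""
    disk = []
    writers = ["A", "B"]
    for i in range(n_clusters):
        disk.append(writers[i % 2])  # requests from A and B arrive alternately
    return disk

def count_fragments(disk, name):
    """Count the contiguous runs of clusters belonging to one file."""
    runs, prev = 0, None
    for cluster in disk:
        if cluster == name and prev != name:
            runs += 1
        prev = cluster
    return runs

disk = simulate_concurrent_writes(8)
print(disk)                        # ['A', 'B', 'A', 'B', ...]
print(count_fragments(disk, "A"))  # 4 fragments for only 4 clusters of data
```

In this worst case every cluster of each file is its own fragment, which matches what the magazine described.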
mm177 - 07 10 05 - 06:10
...and the amiga standard file system is more than 17 years older ;)
Murmel - 07 10 05 - 14:09
I switched to XP not too long ago, so I started using NTFS on my drives because people said it's less likely to fragment (I think Microsoft said it first, and then everyone just preached it throughout the Internet :p).
What I found was that files are more likely to fragment. Even files copied to a completely empty drive will fragment ... I really don't get why it chooses to fragment the files.
meanie - 08 10 05 - 14:48
The service pack directory is compressed using file-system compression. When a non-compressed file is converted to compressed, this worst case can occur (it's compressed in place in 64K chunks). This is not an issue when creating files compressed (e.g. copying them into a directory marked for compression).
Just run the built-in defragmenter occasionally, or buy Diskeeper for more flexible and automatic defragmentation.
CharlesWilson - 10 10 05 - 07:58
I don't want to defragment often. I want a filesystem that doesn't suck at block allocation. Many Un*x filesystems have novel and much more effective approaches to allocation. I believe XFS reserves space for writes immediately, but doesn't assign specific blocks until the data is actually flushed to disk, meaning that the dynamic write cache acts as a sort of defragmenter.
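The delayed-allocation idea can be sketched like this (a toy model of the general technique, not actual XFS code):

```python
class DelayedAllocFS:
    """Toy model of delayed allocation: space is reserved logically at
    write() time, but physical clusters are chosen only at flush().
    Each file then gets one contiguous extent, no matter how the
    incoming writes were interleaved."""
    def __init__(self):
        self.next_free = 0
        self.pending = {}  # file name -> clusters buffered in the write cache

    def write(self, name, n_clusters):
        # No physical placement yet; just grow the file's pending size.
        self.pending[name] = self.pending.get(name, 0) + n_clusters

    def flush(self):
        # Now that each file's total size is known, give each one
        # a single contiguous run of clusters.
        extents = {}
        for name, size in self.pending.items():
            extents[name] = (self.next_free, size)  # (start cluster, length)
            self.next_free += size
        self.pending.clear()
        return extents

fs = DelayedAllocFS()
for _ in range(4):       # interleaved writes from two downloads
    fs.write("A", 1)
    fs.write("B", 1)
print(fs.flush())        # each file still lands in a single extent
```

Deferring placement until flush is exactly what defeats the interleaving problem described earlier in the thread.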
Because of the poor underlying implementation of the defragmentation API, nearly any defragmenter in Windows NT/2000/XP is dog slow, even on FAT32. I used to be a huge fan of Norton SpeedDisk in Windows 95/98 days because it was very good at avoiding seeks, and thus was ridiculously fast — I could defragment a 2GB partition in about five minutes. And yet, it was quite safe (I had a machine that was unstable under high load at the time and spontaneously rebooted on several defrag attempts, and never lost anything on that partition). Sadly, Microsoft kept cutting the legs out from under Symantec, and eventually they had to give up on making a fast defragmenter.
Phaeron - 11 10 05 - 02:26
By default, Windows XP updates the last access time on NTFS drives, i.e. a file's last access time is updated when it is read and/or written. Use "fsutil" from the command line to check the last-access-time behavior on your system:
>fsutil behavior query disablelastaccess
To disable last access time updates, use:
>fsutil behavior set disablelastaccess 1
Try it and see whether it helps
paydar - 12 10 05 - 05:16
I don't see how that would help. I have it off already, so XP doesn't keep updating files' access times. And fragmentation doesn't seem to have much to do with compression ... though I guess a file will always fragment if you compress it.
Here are the most fragmented files on my NTFS drive (not compressed), and I really should defragment the drive (but it just scares me to do it now):
fragments: 488 (512 fragments would be 1 cluster per fragment :/)
along with some other files of a couple hundred fragments each.
Here are the most fragmented files on my other FAT32 drive, used in pretty much the same way as the NTFS drive and with the same cluster size:
meanie - 12 10 05 - 14:50
Yes, NTFS, FAT32 & FAT fragment. So? Just defragment them, regularly.
Use the XP defragger, it's good.
But it does not defrag the page file.
Set a regular time to defragment when you don't use the PC, maybe dinner time.
A fragmented page file can slow everything down.
It must be defragged sometimes, as must the MFT (Master File Table) if it outgrows its allocated space and fragments.
See how to defrag the page file at The Elder Geek:
froglet - 12 10 05 - 18:00
There is a free tool to defragment the page file and other critical system files. Look here: http://www.sysinternals.com/Utilities/Pa..
DesDZ - 13 10 05 - 07:28
Quote: "It's fairly well known that the NT file system (NTFS) is very bad at avoiding fragmentation, partly due to its allocation strategy of intentionally placing tiny gaps between files — which is good if those files expand, but bad if they don't."
The VoptXP defragmenter has an option to pack files "tight" - I think that's exactly about this. Also, several of the 'big boys' of defragmentation show similar behaviour (at least according to their graphs), but Diskeeper (latest versions) does not - it leaves some gaps. According to its help, this is a good thing / the contrary is impossible. Go figure. :)
Amiga - yes, the standard file system was a disaster, but there were free/paid alternatives which rocked, with journaling and everything, when NTFS was still in diapers (but I didn't follow the PC world much back then). :)
Grof Luigi - 13 10 05 - 20:37
In addition to PageDefrag, Sysinternals has another free utility called Contig (http://www.sysinternals.com/Utilities/Co..) that can defrag just a single file or directory. I made an edit in the registry so that a right-click in Explorer runs Contig on the selected item.
Run this on your input A/V files before/after processing, and you're quickly dealing with files that are totally contiguous.
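For anyone wanting to try the same thing, the registry edit might look roughly like this (a hypothetical sketch - the verb name "Contig", the menu text, and the path to contig.exe are my own placeholders, not the commenter's actual setup):

```
Windows Registry Editor Version 5.00

; Adds a context-menu entry for all file types that runs Contig
; on the selected file.
[HKEY_CLASSES_ROOT\*\shell\Contig]
@="Make Contiguous"

[HKEY_CLASSES_ROOT\*\shell\Contig\command]
@="\"C:\\Tools\\contig.exe\" \"%1\""
```

Adjust the path to wherever you unpacked contig.exe before importing the .reg file.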
hi - 13 10 05 - 21:32
A few days ago I was looking in VirtualDub for the option to enable multiprocessing, but I didn't find it anywhere... I remember that option in an older version, but it disappeared...
That option would be very useful for PCs with a Pentium 4 HT, or for dual-processor Athlon or Xeon machines...
In my tests with a 2.8 GHz Pentium 4 HT, CPU usage did not surpass 60%... the encoding speed was the same as my AMD Duron 1.3..........
Would it be possible to put that option back???
id - 14 10 05 - 13:03
The best and fastest way to defrag any FAT or NTFS file system is to use Norton Ghost.
Ghost does a wonderful job of packing files toward the beginning of a drive. Use the partition-to-image option, then the partition-from-image option in the menu. Ghost 8.0 is just one executable file, in both DOS and Windows versions (with different executables, that makes two in total :).
Also, for NT, Win2k and XP there is a Linux filesystem driver that makes it possible to work with files bigger than 4 GB when you don't have any NTFS partition but do have a Linux partition, and you can enable/disable it instantly with just one click.
Enjoy these beauties...
Burak Ural - 14 10 05 - 17:29
No official versions of VirtualDub have had such an option, but some builds of VirtualDubMod might have. I think the VDubMod team removed it because it had too many problems, but I haven't tried it or looked at the code so I can't say for sure. Implementing effective multiprocessing is a lot of work to handle all the corner cases properly, and I can't say when I might have a working implementation that uses multiple CPUs more effectively than current builds.
Phaeron - 15 10 05 - 03:05
An XFS or Reiser4 driver for NT would be a most excellent thing. I'd love to let O&O go. (I believe they're one of the few that rewrote their own version of the defrag API.)
"In one of the computer magazines I read (c’t) was a huge report about disk fragmentation and the impact on performance. They also showed that when you write two files at the same time (e.g. 2 downloads) on a NTFS partition these two files can use every other cluster on the harddisk."
I actually had a chance to test this while ripping a DVD, and I got a pattern of about 100 video clusters to 20 audio clusters. I don't know if it's due to the huge cache on my hard drive, the software, or NTFS, though. It could have to do with the rate the data was coming in - much faster than a download, even in PIO mode.
foxyshadis (link) - 15 10 05 - 07:17
...and last but not least, the Amiga standard file system is not used anymore, whereas NTFS... ;-)
Elwood (link) - 16 10 05 - 05:09
Guess I'm not alone.
Fragments  File size  Most fragmented files
162        648 KB     WINDOWS\system32\dllcache [Directory]
172        1 KB       WINDOWS\system32\config\system.LOG [Excess Allocation]
328        44 KB      WINDOWS\system32\config\software.LOG [Excess Allocation]
But I reckon this takes more cake. ;-) (4 KB clusters)
pencil_ethics - 21 10 05 - 01:09
I think it's wrong to blame NTFS on its own for fragmentation. I believe it's the OS that makes the decisions that eventually do or do not cause fragmentation. NTFS is just a way of structuring (meta)data on disk; it's not the format but the driver - ntfs.sys - that's at fault.
It doesn't need another file system to prevent fragmentation; it needs a bugfix for this driver. But, as I always say, MS doesn't fix bugs. They only fix security issues.
Thany - 23 10 05 - 13:09
It is partially true that the allocation behavior is a property of the filesystem driver and not of the on-disk structures, but NTFS lacks any on-disk organization that would assist in better allocation (such as cylinder grouping), and at least externally there is no distinction between NTFS and ntfs.sys because there is no specification that is distinct from the implementation. As far as the world is concerned, ntfs.sys defines NTFS.
Phaeron - 23 10 05 - 15:37
An interesting article here
Although they do themselves a great disservice by linking to about.com, which talks about the "New Technology Filing System"...
Secret - 26 10 05 - 10:53
That's why I keep a working Win98 with its Norton Speed Disk in a separate folder on a dual-boot setup. I use it for making full backups (you can't back up system files under XP) and defrag with Speed Disk. My only NTFS partition is located from 130 GB to the end of the disk, because Win98 can't handle sectors above 137 GB and it would be too risky to allow any write there (it would map to sector N - 137 GB, destroying the beginning of the HD). So using NTFS for everything above 137 GB makes sure Win98 will never write there.
isidro - 28 10 05 - 03:24
An article on the topic said: "Even though NTFS is more resistant to fragmentation than FAT, it can and does still fragment. The reason NTFS is less prone to fragmentation is that it makes intelligent choices about where to store file data on the disk. NTFS reserves space for the expansion of the Master File Table, reducing fragmentation of its structures. In contrast to FAT's first-come, first-served method, NTFS's method of writing files minimizes, but does not eliminate, the problem of file fragmentation on NTFS volumes." I can't help but agree, what with average file sizes becoming so large nowadays. Our systems get fragmented pretty fast, and I think it does affect performance; we've taken to running a third-party tool with some neat scheduling features, and it's doing a good job. I can't imagine defragging all the systems manually with the built-in tool, which one has to admit is pretty slow.
jade - 10 11 05 - 12:57
55,747 fragments, 8,174 MB filesize ~ RedSplat1 Hard Disk3.vhd
And I defragmented 3 days ago - and this file *was* defragmented at the time:
283 frags, 28,172 KB ~ EudoraMailbox.mbx
Intelligent choices my bum...
Roger "Merch" Merchberger - 15 11 05 - 13:39
Try JkDefrag - it is open-source, free and fast (gee, I sound like a Linux commercial).
Igor (link) - 17 02 08 - 21:31
I know this is really old, but you might just be the man to make a point of it.
I saw similar effects. More specifically, I used a tool that also allows you to inspect the drive contents by cluster.
I saw some very long files (500 MB, text) which were not scattered all over the disk, but semi-contiguous.
What I mean is: alternating bits of file and free space, not interleaved with pieces of other files.
I think the clue is that this file was compressed after a disk cleanup with 'compress old files' activated. Apparently NTFS compression first divides such a file into a large number of small pieces and doesn't save them consecutively (which would leave one big hole after the file), but stores each compressed piece at about the same location where it found the uncompressed source. This results in many, many small compressed file pieces, with equally many (minus one) unused gaps in between.
Now this file is fragmented; just imagine what happens to a file that gets stored in that free space.....
Do you think this is what happened to you?
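If that reading is right, the effect is easy to model. Here is a toy simulation of in-place chunked compression (my own sketch of the behavior as described above, not actual NTFS internals; NTFS compression units are 64K, i.e. 16 clusters at a 4K cluster size):

```python
def compress_in_place(file_clusters, ratio=0.5, chunk=16):
    """Model a file compressed in fixed-size chunks, each chunk shrinking
    in place.  Every chunk stays where it was, so the compressed data is
    followed by a hole, and the file becomes one fragment per chunk."""
    extents = []
    pos = 0
    for start in range(0, file_clusters, chunk):
        size = min(chunk, file_clusters - start)
        kept = max(1, int(size * ratio))  # compressed size of this chunk
        extents.append((pos, kept))       # chunk keeps its original position...
        pos += size                       # ...but its tail becomes free space
    return extents

ext = compress_in_place(64)
print(ext)       # [(0, 8), (16, 8), (32, 8), (48, 8)]
print(len(ext))  # 4 fragments, with a free gap after each one
```

A contiguous 64-cluster file turns into one fragment per compression unit, with a hole between each pair - exactly the "many small pieces with equally many gaps" pattern described above.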
PaulvdH - 07 08 08 - 09:53
This is old all right, but it still drives me crazy.
I just deleted everything on my 400 GB destination disk under XP Pro, created 3 compressed folders, and am copying 300 GB worth of large files using xcopy /v. I have only copied a dozen files so far and they are all fragmented according to XP's defrag (from 20 to 500 fragments), and I am already being advised to defrag!! As you say, this takes the cake!! I don't suppose Vista or 7 is any better, since I haven't read that NTFS or disk management was changed. And don't get me started on OSes that install/overwrite files in the system directories when any random software is installed. Sheesh!!!
chris - 16 04 09 - 19:01
Not to defend NTFS's allocator, but compressed files are a special case in which the size of a span of the file on disk depends on its contents. In particular, rewriting a portion of a compressed file in-place has a high chance of changing its on-disk length, which will then force the filesystem to fragment the file.
Phaeron - 17 04 09 - 15:21
Those who think Unix file systems don't fragment - unfortunately, they do. A fragmented Unix/Linux partition also has a performance impact. Unfortunately, it is much more difficult to find a defragmentation tool for Unix, or a tool that will show you how fragmented a Unix/Linux partition is. All modern file systems still have a fragmentation problem. However, in Vista / Windows 7, defragmentation happens in the background (by default), and in addition this process automatically arranges files in consecutive order for system and application startup.
As for FAT32/NTFS performance, there may be a slight difference, but I would still recommend NTFS as it is much more reliable than FAT32. I'm using NTFS for high-def video editing / audio recording and it's not been a problem on 7200 RPM disks.
Malcolm - 21 08 09 - 17:45
A 1 TB HDD with 3x333 GB NTFS partitions on Vista.
It is outrageous!!!
Almost all files copied onto an empty (and freshly formatted) NTFS partition are fragmented. I use the hard drive for backup storage; almost all the files are about 1 GB, and 90% of them are fragmented after being copied onto an EMPTY DRIVE. Outrageous :(
Stefan - 25 01 10 - 11:42
"Those who think Unix file systems don't fragment - unfortunately they do."
The level of fragmentation on modern Linux filesystems is so low that one cannot even notice it!
I have been using Linux/UNIX for many years, and even when my HD was almost full, I could not get fragmentation above 0.4%. Yes, I checked, because when I copied a few files onto my huge NTFS partition a few days earlier, Auslogics showed fragmentation going up to almost 10%, and I wanted to see what would happen on ext4 if I tried to fill it up a bit. Again, almost nothing happened. As far as I know, the only way to get it fragmented is to create files that grow huge over time. I imagine if you have a few VirtualBox images that grow dynamically, that could make the FS a bit fragmented, at least in theory.
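For what it's worth, extent-based filesystems like ext4 also let applications help by reserving contiguous space up front. A small sketch (assumes a Unix build of Python; `os.posix_fallocate` is only available on Unix-like systems):

```python
import os
import tempfile

# Ask the filesystem to reserve space for the whole file before writing
# it, so the allocator can try to pick one contiguous extent up front
# instead of growing the file piecemeal.
with tempfile.NamedTemporaryFile() as f:
    os.posix_fallocate(f.fileno(), 0, 1 << 20)  # reserve 1 MiB
    print(os.fstat(f.fileno()).st_size)         # 1048576
```

This is the same idea as delayed allocation, just driven from user space: tell the filesystem the final size before the first byte lands.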
Tom - 08 04 12 - 21:42