I've become increasingly frustrated with the speed of computers lately, or rather, lack thereof. After thinking about it, I came up with three reasons why I think computers have gotten slow. It's a bit long. Should I call this a rant? Oh well, let's just start with the first one:
#3: Multithreading

Multithreading, overall, has been good for computer usability. I'm glad that I no longer have to choose between running my editor and my program, or shell out from my editor and remember that I've done so (ah, BRIEF). While multithreading is good for the CPU, though, it isn't so good for the hard disk. In fact, it makes hard disk performance suck, because seeks are expensive and even a moderate amount of seeking causes throughput to plummet. The problem is that once you add multithreading to the mix, you have multiple threads submitting disk requests, which causes the disk head to thrash all over the place. This is especially fun when you boot your computer and all of the garbage that every program has placed in the startup folder decides to load at the same time. It's gotten worse with faster CPUs, because now programmers think they can speed things up by doing tasks in the "background," only to increase the contention on the one poor disk, which ends up lowering overall throughput. (*cough*Intellisense*cough*)
My fondest memory of the effect of multithreading on disk performance was from my Amiga days, where attempting to read two files at the same time would cause the floppy disk drive to seek for every single sector read. As you can imagine, that was very loud and very slow. (Gronk, gronk, gronk, gronk, gronk....) Newer operating systems do buffering and read-ahead, but you can still see some pretty huge performance drops in VirtualDub if you are doing a high bandwidth disk operation and have something else hitting the disk at the same time. Supposedly this might be better in Vista, as the Vista I/O manager doesn't need to split I/Os into 64K chunks, but I haven't tested that.
#2: Virtual memory (paging)
Ah, virtual memory. Let's use disk space to emulate extra memory... oh wait, the disk is at least three orders of magnitude slower. Ouch.
Contrary to popular opinion, virtual memory can actually help performance. In a good operating system, any unused memory is used for disk cache, which means that enabling virtual memory can improve performance by allowing rarely used data to be swapped out and letting the disk cache grow. In the days of Windows 95 there was a big difference between having a 500K cache and a 2MB cache... or no cache at all. Nowadays, though, this feels counterproductive: Windows will frequently swap out applications to grow the disk cache from 800MB to 1GB, which barely increases the cache hit rate at all and just incurs huge delays when application data has to be swapped back in. It's especially silly to have every single application on your system incrementally swapped out just because you did a dir /s/b/a-d c:.
In the old Win9x days, you could set a system.ini parameter to clamp the size of the disk cache to fix this problem; with Windows NT platforms, it seems you have to pay $30 for some dorky program that calls undocumented APIs periodically. I wouldn't laugh, Linux folks -- I've heard that your VM is scarcely better than Windows, and the VMs of both OSes are trounced by the one in FreeBSD. Of course, I can count the number of people I know that run FreeBSD on one hand.
For better or worse, virtual memory has made out-of-memory problems almost a non-issue. Sure, you can still hit one if you exhaust address space or get absolutely outrageous with your memory requests, and the system will start running reaaaalllllyyyy slooooowwwww when you overcommit by a lot, but usually the user has ample warning as the system gradually slows down. Virtual memory has also let software vendors be a bit loose with their system requirements -- you're a moron if you think a program will run well with 256MB just because that's what it says on the box -- but at least the program will work, somewhat, until you can get more RAM.
That isn't the worst problem, though. Care to guess what my #1 is?
New stable release is out.
If you are using the 64-bit (X64) version of VirtualDub 1.7.5, you definitely need this version, because I fixed a critical bug that was causing crashes in the display drivers whenever VirtualDub happened to display video -- I messed up in an init call and misaligned the stack by four bytes, which was mostly OK except when the display driver attempted to use SSE2, where it crashed with an alignment fault. My access to an X64 system is sometimes spotty, so I didn't fully test the new code. Whoops.
1.7.6 was also delayed for about a week in order to shake out bugs in the new input driver plugin API -- if you are attempting to write or test such a driver, you also really want this version. I'd like to extend a special thanks to everyone who participated in the input driver thread on the forums and provided feedback on the SDK and bug reports, including fccHandler, Moitah, tateu, and pintcat. Thanks to everyone's fast feedback, I was able to push out ten very fast iterations on 1.7.6 to squash a bunch of bugs and I got to see some exciting input format plugins in development. I won't steal anyone's thunder here, but there's some really cool file format support coming, and now that they're plugins, people won't have to try to sync to my versions.
Incidentally, if anyone wants to write an input plugin, the SDK still isn't finished, but here is the current version:
Found this in VirtualDub's render path while debugging some issues with the input driver API. Do you see the bugs? I spotted the first one immediately, but almost posted a new test release to the forums before spotting the second.
    frameInfo.mRawFrame = stream_frame;
    frameInfo.mTimelineFrame = target_frame,
    frameInfo.mDisplayFrame = display_frame;
    frameInfo.mTimelineFrame = timeline_frame;