Ben Rudiak-Gould's Huffyuv is a fairly common lossless video codec on Windows. Its main advantages are fast compression speeds, lossless support for both YCbCr and RGB video, and of course, it's free. Because it's lossless, the compression ratios that can be obtained are limited, but generally you can get around 1.6:1 to 2.2:1 on average, and that's a big help when you can use it during real-time video capture.
There is, however, a bear trap in its settings.
If you bring up the configuration dialog for the video codec (Video > Compression > Huffyuv, Configure in VirtualDub), there are two combo boxes at the top of the dialog that set the prediction modes for YUY2 and RGB video. The prediction mode changes the way Huffyuv attempts to detect patterns in the video, and thus the speed and effectiveness of the compression. Most of the modes are fast for both compression and decompression, but you should be aware of the Predict Median mode on the YUY2 side. I've made the mistake of picking this mode before: while it's fast for compression, it's unexpectedly slow for decompression, because the Predict Median predictor goes through a non-vectorized, scalar code path on the decompression side. This won't affect your video, of course, and if it turns out to be too slow for your post-processing work, you can always recompress with another Huffyuv predictor thanks to the lossless nature of the codec.
I should note that I only have experience with the last official version of Huffyuv, 2.1.1. There have been unofficial updates to Huffyuv that add workarounds for application compatibility issues and YV12 support, but I don't know whether those include any speed improvements to the predictor. I looked into speeding up the median predictor at one point but was unable to do so; I suspect it's a hard problem, because the predictor shares a lot in common with the Paeth predictor in PNG, which is also inherently serial and hard to parallelize. I also looked into rewriting the bitstream parser in C++, but I tried using compiler intrinsics and of course ended up hanging both the VC8 and VC9 compilers.
The MultimediaWiki has a description of the Huffyuv compression algorithm: http://wiki.multimedia.cx/index.php?title=HuffYUV. Unfortunately, it seems to be missing a few critical details, most notably the exact nature of the VLC bitstream: 32-bit words in little endian format, codewords placed starting from the MSB, Huffman codes allocated longest codeword first, and with a maximum codeword length of 31 bits.
Incidentally, just like with Avisynth, I've noticed that people like to butcher the name of this codec for some reason. It's Huffyuv, first letter capitalized, rest in lowercase. It's been that way since 1.0.0 and appears that way in the documentation, dialogs, and source code. Yet, for some reason, people keep misnaming this codec HuffYUV.
A slight annoyance with my new laptop is that, without a docking station, it doesn't have a DVI input. That means I have to use the analog VGA output instead. Not a huge problem, since my 24" LCD is surprisingly crisp with analog video even at 1920x1200, but occasionally I do have to give its auto-adjust a hand as the desktop in my current Vista 64 partition is both fairly empty and black. The way I do this is to launch Paint, set it to black-and-white mode, and fill the screen with a 50% checkerboard.
While going through the menus to switch to black-and-white mode, I noticed that the Vista version seems to have a new Custom item. Cool, I thought, finally I can set a zoom factor higher than 8x. Well, not quite:
Apparently I can choose a custom zoom level from one of seven predefined zoom factors....
Occasionally, whenever I visit my parents, I end up fixing something on their computer. Yeah, I know what you're thinking, but it's a bit different. My dad's not the most technical guy in the world, but he tends to wait to ask me for help until after he's tried swapping the video card and reinstalling Windows, so the problems I get to look at are a bit more annoying than just clicking a checkbox somewhere. Hardware, he can usually figure out, but it's software that gives him real problems -- specifically, the casino games he likes to play sometimes don't work when he reinstalls them on different machines.
The first time, the problem was that the game was hanging on launch on a second machine. I had thought, bah, it's just an outdated video driver. Sure enough, the installed video driver was ancient, and I updated it, but still no go. Eventually I ended up running the game under ntsd.exe and discovered that it was stuck in a QueryPerformanceCounter() loop. It seems that the programmers who wrote the game assumed that the 64-bit LARGE_INTEGER returned by QueryPerformanceCounter() always counted up at somewhere between 1-4MHz, and hardcoded that assumption in such a way that the game hung in a speed regulation loop when run on an AMD Athlon. I ended up just hacking out a branch instruction in the EXE to fix it.
The second time, different game: I checked the video driver again, no go. The symptom was a silent crash to the desktop shortly after the game launched -- pretty user-unfriendly, but not unusual for a game. I ran it again under NTSD and got a call stack with a module called ixvpd.dll on it. No clues from a web search. I checked the game vendor's website, and all I found was a lame "codec patch" that apparently "fixes" a codec compatibility issue by uninstalling ffdshow and avisplitter. (Sigh.) ixvpd.dll didn't have a version description block, but a strings search did reveal that it was a decoder of some sort, and renaming it did fix the problem. It turned out to be a poorly written DirectShow-based video codec that came with the software package for a webcam that my dad no longer uses, so I uninstalled that package, which fixed the problem properly.
If you think about it, with issues like these, it's no wonder that average people just throw their hands up in the air and say their computer's broken. It's hard enough to tell what's going on if you are a programmer, much less if you're not.
A couple of days ago I found myself writing a breadth-first search routine in C++ for a particular puzzle (Klotski, to be specific). Since I was using Visual C++, I needed a set for the board database, and decided to use its stdext::hash_set.
See, Visual C++'s Standard Template Library (STL) is based on the implementation from Dinkumware, whose hash_set has the following prototype:
template <class Key,
          class Traits = hash_compare<Key, less<Key> >,
          class Allocator = allocator<Key> >
class hash_set;
Notice two things about this hash set implementation: it takes a less-than predicate, and it doesn't take an equality predicate. What the frick kind of hash table requires a less-than predicate??? And we're not talking about a predicate for the hash code here, but a less-than comparison operator for the whole key. It defeats one of the main advantages of a hash container, since IMO it's often easier to write hash and equality predicates instead of a less-than predicate, as long as you aren't dealing with floats. I've never seen a hashed container that required this -- not in SGI-STL/STLport, not in Java, not in the .NET Framework. As you might expect, this means that VC8's hash_set is a pain in the butt to use and is slow. Thankfully, C++ TR1's unordered_set follows the SGI-STL/STLport path rather than Dinkumware's.
I haven't really had a need for a full-blown hash_map or hash_set in VirtualDub, but I'm thinking that if I do, I'll be better off writing my own....