§ ¶VirtualDub 1.6.18 and 1.7.2 released
Two new releases of VirtualDub today.
1.6.18 (stable) has a couple of fixes from the 1.7.x series backported to it: one is a fix for AVI compatibility issues when alignment is enabled (mostly in capture mode), and another is a fix for a crash related to the Cancel button in the video filter list dialog.
1.7.2 (experimental) has a slew of fixes and a couple of changes. One is that throttling is now supported, so you can force VirtualDub to use less than 100% of the CPU during processing. Another is that this is the first version with an input driver plugin API, which allows new container formats to be added for import, much like video filters can be added with new .vdf files. The API is still very experimental at this point, but if anyone wants to experiment with it: [VirtualDub plugin SDK 0.5]. Please send along any comments on the API. Again, it is still experimental and is subject to breaking changes.
Full change lists after the jump.
§ ¶Dithering
Dithering, according to Wikipedia, is a way of randomizing quantization error... but in a nutshell, when used for images, it's basically a way of simulating a higher-depth display than you've got. This is done by bumping pixels up or down in brightness, so that partially lit patterns of dots average out, over an area, to the desired color values.
There are lots of ways to dither, with varying levels of quality. When it comes to real-time dithering, though, the trick is doing it quickly while still doing it well. So how do you dither quickly?
It's not that hard, actually.
The main characteristics we're looking for in a dithering algorithm are:
- Black maps to black.
- White maps to white.
- The average value of a group of dithered pixels approximates the average of the original color values.
- Dots are spaced as far apart as possible.
Ordered dithering satisfies these, and is fairly easy to implement. The main idea is to start with a simple dot order in a 2x2 grid:
0 2
3 1
You can then apply this recursively to 4x4 and larger sizes:
 0  8  2 10
12  4 14  6
 3 11  1  9
15  7 13  5
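In code, the recursive construction looks something like this -- a quick illustrative sketch, not anything lifted from VirtualDub's source. Each step tiles the previous matrix 2x2 and offsets the four tiles by the base pattern:

#include <vector>

// Build an NxN ordered dither (Bayer) matrix, N a power of two, by tiling
// the (N/2)x(N/2) matrix 2x2 and offsetting the tiles by the base pattern.
std::vector<int> MakeBayerMatrix(int n) {
    if (n == 1)
        return std::vector<int>(1, 0);

    const int h = n / 2;
    const std::vector<int> half = MakeBayerMatrix(h);
    static const int kBase[2][2] = { { 0, 2 }, { 3, 1 } };

    std::vector<int> m(n * n);
    for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x)
            m[y*n + x] = 4 * half[(y % h)*h + (x % h)] + kBase[y / h][x / h];

    return m;
}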
We then add a multiple of this pattern to the source image in order to gradually bump the image pixels between levels. For instance, if a color value is 4/16ths of the way from level 5 to level 6, it will light up four out of the 16 pixels in the 4x4 dither matrix. So one way to convert an 8-bit grayscale image to 4-bit is to add the above matrix with saturation (clamping at 255), and then shift down by 4. Repeat two more times for green and blue if you have a color image. For a 15-bit (555) target image, shift the matrix down by one bit first, since there are 8 source levels for each target level instead of 16. A 2x2 matrix adds two bits (four levels) to the effective depth, and a 4x4 matrix adds four bits (sixteen levels). You're not likely to need an 8x8 matrix for extra levels, although you might need it for better quality (more on that later).
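To make that concrete, here's roughly what the 8-bit grayscale to 4-bit conversion looks like in plain C++ -- the function name and buffer layout are just assumptions for the sketch:

#include <stdint.h>

// 4x4 ordered dither matrix from above.
static const uint8_t kDither4x4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 },
};

// Add the matrix entry with saturation at 255, then shift down by 4 bits.
void DitherGray8To4(const uint8_t *src, uint8_t *dst, int width, int height) {
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int v = src[y*width + x] + kDither4x4[y & 3][x & 3];
            if (v > 255)
                v = 255;                            // saturating add
            dst[y*width + x] = (uint8_t)(v >> 4);   // 8 bits -> 4 bits
        }
    }
}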
If you're working with hardware that has saturation arithmetic, this may be the best way to go, as it's very fast. Heck, in MMX code, it's probably just one PADDUSB instruction. It has a couple of flaws, though. First, we're adding all non-negative values, so our 4x4 dither matrix has a bias of +7.5. We can fix this by rebiasing the dither matrix, although that has the disadvantage of requiring both saturating add and subtract. The other problem is that we're increasing the contrast of the image slightly -- when dithering, an input value of 240/255 maps to 15/15, for a 6% increase in contrast. This can be fixed by scaling down the source data by 240/255, mapping input white to "just barely all white" on the output.
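As a sketch of the contrast fix (reusing the kDither4x4 table from the sketch above -- again, not actual VirtualDub code), scaling first means the clamp isn't even needed, since 240 plus the largest dither value of 15 is exactly 255:

// Scale the source by 240/255 so that input white still maps to output
// white after the 0..15 dither offsets are added.
void DitherGray8To4Scaled(const uint8_t *src, uint8_t *dst, int width, int height) {
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int v = (src[y*width + x] * 240 + 127) / 255;  // 0..255 -> 0..240
            v += kDither4x4[y & 3][x & 3];                 // max 240 + 15 = 255
            dst[y*width + x] = (uint8_t)(v >> 4);
        }
    }
}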
Eww, multiplies, you say? Not so fast.