The DXGI_FORMAT_R8G8_B8G8_UNORM texture format
A pet peeve of mine is underspecification, where the documentation for an API or file format doesn't tell you enough specifics about what it is describing. Instead, you get some ambiguous description of a feature that looks like it's informative, but doesn't actually tell you what you need to know.
Take, for example, this description of a texture format from Microsoft's DXGI_FORMAT enumeration (from http://msdn.microsoft.com/en-us/library/windows/desktop/bb173059%28v=vs.85%29.aspx):
A four-component, 32-bit unsigned-normalized integer format. 
I'm pretty sure this wasn't reviewed by anyone who would actually try to use said format.
Well, at least we know it's 32-bit, and the unsigned-normalized (UNORM) part described later tells us that the components are decoded to the 0-1 range. From the name we can make an educated guess that the components are 8-bit and somehow relate to red, green, and blue, and that something strange is going on with green, but that's about it. Maybe the footnote will tell us more?
3. A resource using a sub-sampled format (such as DXGI_FORMAT_R8G8_B8G8) must have a size that is a multiple of 2 in the x dimension.
That's good to know, but we still don't know enough to actually use it.
Allow me to fill in the missing information:
This format encodes pairs of RGB pixels in a 32-bit packed format, where the red and blue components are encoded at half the horizontal resolution of the green component. The byte ordering in memory is red, green0, blue, green1, where the green0 and green1 pixels are the left and right pixels of each horizontal pair, respectively. The RGB color of each pixel is the shared red and blue values of the pair combined with the corresponding green pixel. Alpha is 1.0.
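To make the byte layout concrete, here is a minimal Python sketch (my own illustration, not part of any Direct3D API) that unpacks one 32-bit R8G8_B8G8_UNORM word into the pair of RGB pixels it encodes:

```python
def decode_r8g8_b8g8(word_bytes):
    """Decode one 32-bit R8G8_B8G8_UNORM word (4 bytes in memory
    order red, green0, blue, green1) into the two RGB pixels it
    encodes. Red and blue are shared across the horizontal pair;
    each pixel gets its own green sample."""
    r, g0, b, g1 = word_bytes
    left  = (r, g0, b)   # left pixel of the pair uses green0
    right = (r, g1, b)   # right pixel uses green1
    return left, right
```

For example, the bytes `10 20 30 40` decode to a left pixel of (0x10, 0x20, 0x30) and a right pixel of (0x10, 0x40, 0x30), with red and blue shared.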
Although the resource size must be an even number of pixels in the X dimension, it may have mipmaps that are not an even number of pixels across, similarly to how mipmapped DXTn/BCn textures are handled.
Apparently, this format was intended to allow efficient decoding of UYVY-format video, which encodes video in the YCbCr colorspace with 4:2:2 chroma subsampling. R8G8_B8G8_UNORM could be used to render UYVY video with an appropriate color matrix transform, were it not for two deficiencies. First, the red and blue components are upsampled using point sampling; bilinear interpolation can be enabled, but it simply interpolates the result after the red and blue samples have been doubled up. Second, the shared red and blue components are positioned in the center of the pair of green samples, which is at odds with modern video formats that align the chroma sample with the left luma sample (co-siting). The result is that R8G8_B8G8_UNORM gives low-quality rendering of UYVY and should not be used in a production video renderer. It might have been somewhat useful for MPEG-1 rendering, except that MPEG-1 is 4:2:0 rather than 4:2:2, and if you have hardware that supports Direct3D 10/11 you might as well use three R8 textures instead.
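The correspondence is direct: UYVY stores each macropixel as U, Y0, V, Y1 in memory, so U (Cb) lands in the red channel, V (Cr) in blue, and the two luma samples in green0/green1. A rough Python sketch of the decode, using BT.601 video-range coefficients as an example (the exact coefficients and rounding are my own simplification, not what any particular driver does):

```python
def uyvy_to_rgb_pair(u, y0, v, y1):
    """Decode one UYVY macropixel (memory order U, Y0, V, Y1) into
    two RGB pixels, mimicking how R8G8_B8G8_UNORM maps the bytes:
    U -> red channel, V -> blue channel, Y0/Y1 -> green0/green1."""
    def ycbcr_to_rgb(y, cb, cr):
        # BT.601 conversion for video-range (16-235 / 16-240) input.
        c, d, e = y - 16, cb - 128, cr - 128
        r = 1.164 * c + 1.596 * e
        g = 1.164 * c - 0.392 * d - 0.813 * e
        b = 1.164 * c + 2.017 * d
        clamp = lambda x: max(0, min(255, round(x)))
        return clamp(r), clamp(g), clamp(b)
    # Point-sampled chroma: both pixels of the pair share (u, v),
    # which is exactly the quality problem described above.
    return ycbcr_to_rgb(y0, u, v), ycbcr_to_rgb(y1, u, v)
```

Feeding in neutral chroma (128) with Y = 16 and Y = 235 yields black and white respectively, while both pixels of a pair always share one chroma sample.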
The G8R8_G8B8_UNORM format is a byte-swapped version that is analogous for YUY2-encoded video, and is equally useless.
You should bill Microsoft for your time as a tech writer.
Z.T. - 08 08 12 - 17:30
That's (kind of) a Bayer grid: http://en.wikipedia.org/wiki/Bayer_filte..
But I guess that's not actually why they're using it. It probably has more to do with a poor man's means of compressing RGB data for memory benefits (usage/bandwidth).
kurosu - 08 08 12 - 20:40
Well, point sampling r/b is a nice way of not worrying about their intended placement. [/irony]
Seems pretty useless as you point out.
Klaus Post - 08 08 12 - 21:26
Technically, it's possible to point sample chroma and still have correct placement, but it'd require that chroma be sampled separately from luma. It looks like the blocks are just being decompressed into the texture cache prior to interpolation like some cards did for DXT.
Phaeron - 09 08 12 - 13:53
I can't resist... And if you don't have Direct3D 10/11 just use OpenGL and three luminance textures :-).
If you don't care much about the exact conversion coefficients you can get that to work on stuff as old as GeForce3 (not (all) the GeForce 4 though, if you remember that mess...).
Reimar - 21 08 12 - 19:47
This texture format seems like one of the "built-in" D3D formats that games use to get a cheap speed boost. I wonder which software tools that do GPU decoding are using this, maybe only on older cards?
Dragorth - 31 08 12 - 03:49
I'd be surprised if games were using it this way, particularly since support for it is not guaranteed in D3D9. DXT1 at 4bpp will give better quality for RGB and if you need only two channels there's A8L8, V8U8, or G8R8 at equivalent 16bpp.
The issue with using three luma textures is that you have to do work on the CPU in order to deinterleave the components. VirtualDub's D3D11 renderer uploads UYVY/YUY2 data into a half-width R8G8B8A8 texture, does one shader pass to convert it to RGB, then does another pass to scale it. This allows the texture upload to proceed at memcpy speeds or even zero-copy if the decompressor can write directly to it. The DXGI 1.2 capture code I have in the 1.10.3 test release does similar tricks for download in that it renders a few extra quads into the readback texture so the result can be memcpy'd out directly in YV12 layout and format.
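The half-width upload trick can be sketched as follows; the names and structure are my own illustration of the idea, not VirtualDub's actual shader code. Each half-width RGBA8 texel holds one UYVY macropixel, and output pixel x fetches texel x // 2 and selects Y0 or Y1 by parity:

```python
def expand_halfwidth_uyvy(texels, out_width):
    """Sketch of the first shader pass: 'texels' is one row of a
    half-width RGBA8 texture where each texel is a (U, Y0, V, Y1)
    macropixel. Returns (y, u, v) per output pixel; the real shader
    would also apply the color matrix in the same pass."""
    out = []
    for x in range(out_width):
        u, y0, v, y1 = texels[x // 2]   # two output pixels per texel
        y = y0 if x % 2 == 0 else y1    # select luma by pixel parity
        out.append((y, u, v))
    return out
```

The point is that the upload itself is a straight memcpy of the UYVY buffer; all the deinterleaving work happens on the GPU.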
Phaeron - 31 08 12 - 17:37
This format is obviously meant for Bayer sensor data (i.e., for image-editing applications that deal with RAW sensor data, such as from CR2, NEF, and RAF files). As red and blue are less important to our visual system, camera sensors do not sample them at the same density as green.
Interpolation is always used and the results depend on quality of demosaicing/interpolation method used. One of the common methods is Adaptive homogeneity-directed (AHD). Now I am curious if that kind of stuff (demosaicing, color matrix transformation, white balance, etc) could all be implemented in a shader so you can view RAW files with great speed and without need for converting them to JPEG (and thus RGB) first.
Igor Levicki (link) - 04 10 12 - 01:23
I doubt that the formats were originally added for de-Bayering, particularly given the lack of specificity or control over the interpolation. They appear to derive from the Direct3D 9 D3DFMT_G8R8_G8B8 and D3DFMT_R8G8_R8B8 formats, which are specifically documented as being similar to UYVY and YUY2.
At this point, it's not so much a question of whether algorithms can be implemented in a shader as whether it would be advantageous to do so. With D3D9 you can run just about any floating-point algorithm with enough passes, and with compute-shader-capable hardware you can do integer algorithms too.
Image viewers are a whole other issue... frankly, a lot of them aren't much more sophisticated than decode+blit, without much thought given to acceleration, prefetching, or color management. Heck, the one in Windows 7 is so lame it uses point sampling for its zoom.
Phaeron - 07 10 12 - 08:05