Fun with animated GIFs
For no particular reason, I sat down this weekend and wrote animated GIF import/export routines. Animated GIFs aren't that common anymore -- at least, for non-obnoxious uses -- but I don't have any tools to manipulate them and hadn't ever worked with the format. Besides, the last time I'd experimented with LZW compression was in high school, where I got around 10 bytes/sec on a 7MHz 68000. I was pretty sure I could improve on that since I actually know about hash tables this time.
It's now apparent to me why the GIF image format became so popular: it's much simpler and easier to implement than either JPEG or PNG. JPEG is difficult because of the inverse discrete cosine transform (IDCT) and marker stuffing in the bitstream; PNG's Deflate compression uses two nested layers of Huffman trees in addition to sliding window compression. The LZW compression used by GIF, on the other hand, can be implemented in a reasonably performant fashion with about a page of code for the encoder and decoder each. All told, I'd say it only took about a day each to get the GIF decoder and encoder working, which is astonishing when you consider that GIF is still competitive with PNG for some types of images. It helps that the original Compuserve GIF specification is well-written and precise, although the PNG specification is even better in these respects.
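To give a feel for how little code the core of LZW takes, here's a minimal sketch in Python. It's plain LZW over a byte string with a dict serving as the hash table mentioned above; real GIF-LZW adds variable-width codes, a Clear code, and an End-of-Information code, none of which are shown here.

```python
# Minimal LZW round trip -- a sketch of the core algorithm only, not
# GIF's exact variant (no variable code widths, no Clear/EOI codes).

def lzw_compress(data: bytes) -> list[int]:
    # Dict lookup is the "hash table" trick: O(1) per prefix lookup
    # instead of a linear search through the string table.
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    out = []
    w = b""
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = next_code
            next_code += 1
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    result = bytearray()
    w = table[codes[0]]
    result += w
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        elif code == next_code:
            entry = w + w[:1]  # the classic KwKwK special case
        else:
            raise ValueError("bad LZW code")
        result += entry
        table[next_code] = w + entry[:1]
        next_code += 1
        w = entry
    return bytes(result)
```

Repetitive input compresses immediately: `lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")` emits fewer codes than there are input bytes, and decompressing gives the original back.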
It also helps that I cut a bunch of corners in the initial implementations, but it works on the test images so far....
Now, any export code must of course be appropriately tested:
01/15/2007 04:54 PM 170,706,854 SHUFFLE! OP.gif
640x360, 2183 frames, 24 fps. And no, I'm not uploading it.
Internet Explorer did manage to open the 163MB animated GIF, but froze for about a minute at 100% CPU during the load. It then only managed to play the video at about 8 fps, and even then only got through about a quarter of it. That's disappointing, but reasonable.
Mozilla Firefox also had difficulty opening the GIF file. A lot of difficulty, in fact:
(FYI, my laptop has 1.2GB of memory.)
Amazingly enough, although it swapped a lot, it did manage to play the entire video, at full frame rate, and without crashing. So, if you want to play your favorite anime openings in a horrendously inefficiently encoded fashion and in 256 color glory, Firefox is clearly better than Internet Explorer. I might try writing an animated GIF larger than 4GB next, just to see what happens.
I predict the world would implode.
Seriously though, it seems as if they're extracting the images into memory raw, maybe even converting them to 24/32-bit RGB for playback. Not really the most efficient approach memory-wise, but it probably saves a bit of CPU time.
Blight - 15 01 07 - 22:34
Have you tried IrfanView or something old-school like SEA by Photodex?
mikesum32 (link) - 16 01 07 - 02:41
Nice! This can be useful for forum avatars and signature images. Looking forward to it being implemented in VD :)
Spinal (link) - 16 01 07 - 13:47
sorry, last post meant to say output animated GIFs (it already outputs AVIs) *duh*
roseman - 16 01 07 - 15:42
Someday I will be qualified to operate this comment form...
Did this blog entry mean that someday VirtualDub might gain the ability to import animated .GIFs, and/or output animated .GIFs?
roseman - 16 01 07 - 15:44
A clear case of 'Because I can' x-D
-SPM-Mad - 17 01 07 - 11:06
FYI, neither Firefox nor Internet Explorer were able to open a 2.6GB animated GIF. :)
Phaeron - 17 01 07 - 23:34
Phaeron: You've got too much time on your hands if you make a 2.6GB GIF ;)
P. Simons - 18 01 07 - 09:03
Yes, Firefox stores each frame uncompressed in memory; trouble may happen with much smaller files, for example a 100fps 500x500 animation where only a few pixels change each frame. I have a 600K GIF that makes it eat 80 megs of RAM...
By the way, a "Video GIF" may look much better than most people realize:
Long story short, each frame may include its own palette, and changing the palette doesn't destroy what is already decoded. This allows the use of a frame-optimized palette. Otherwise you can go the cheap way (RGB 332) or the very expensive one (generate optimal palette for all the frames) which might not look so great anyway.
This might also help justify Firefox's decision of storing uncompressed RGB, but it's optimizing for the corner case.
Michel - 18 01 07 - 11:39
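Michel's "cheap way" is a fixed RGB 3-3-2 palette: keep the top 3 bits of red and green and the top 2 bits of blue, so every frame shares one 256-entry palette and no per-frame quantization is needed. A sketch of that mapping (the helper names are mine, not anyone's actual code):

```python
# Sketch of the fixed RGB 3-3-2 palette approach: 3 bits red, 3 bits
# green, 2 bits blue packed into a single 8-bit palette index.

def rgb332_index(r: int, g: int, b: int) -> int:
    # Keep the top 3/3/2 bits of each 8-bit channel.
    return (r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6)

def rgb332_palette() -> list[tuple[int, int, int]]:
    # Reconstruct a representative color for each of the 256 indices,
    # replicating the high bits downward so values span 0..255.
    pal = []
    for i in range(256):
        r = i & 0xE0
        g = (i & 0x1C) << 3
        b = (i & 0x03) << 6
        pal.append((r | r >> 3 | r >> 6,
                    g | g >> 3 | g >> 6,
                    b | b >> 2 | b >> 4 | b >> 6))
    return pal
```

Black and white map to indices 0 and 255 respectively, and the palette round-trips them exactly; everything in between is where the banding that Michel alludes to comes from.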
No, the world won't implode, but you will start getting null pointers when you try to malloc() or new anything. In Windows (32-bit Windows, at least), the process virtual address space is 2 GB wide -- once you hit the top, Windows won't let you allocate any more memory. I found this out the hard way while playing with photo overlays in Google Earth a while back.
Lee - 19 01 07 - 20:33
Actually, you can increase the address space of programs on 32-bit Windows to 3GB by setting a flag in the EXE header (editbin.exe /LARGEADDRESSAWARE) and adding the /3GB switch in boot.ini.
ender - 21 01 07 - 06:49
That still wouldn't permit a single memory allocation larger than 2GB. The reason is that the OS has libraries and system memory blocks that are fixed right below the 2GB point, so you can never have a contiguous allocation cross the 2GB boundary even with /3GB enabled. In practice, the location of system DLLs in Windows XP SP2 makes it hazardous to attempt to contiguously allocate even 512MB-1GB of address space.
Also, with a 32-bit ptrdiff_t, it isn't correct for a C implementation to allow you to malloc() that much memory. Even if you could allocate that much memory, you'd have to go straight to the Win32 APIs.
Phaeron - 21 01 07 - 16:36
Will it be in the next version?!
me - 22 01 07 - 06:37
Yes. If you want to try it, head over to the Testing/Bug Reports forum and try 1.7.1 test-3, which has the animated GIF import/export code.
Phaeron - 22 01 07 - 23:14
Hi all, I am really interested in bitmap file formats because I am trying to develop a new one in my spare time. My objective is to create a new lossless file format able to compress an image more than PNG. I am currently within about 20KB of PNG (using PNGOUT) on the Lenna test image, and working on new ideas to do better than this.
I'd like to know your opinion on the PNG file format and on a possible new format that is much simpler to code, since it's based on neighbour differences and LZW (whose patents expired in the last two years) -- no sin & cos, no floats, no Huffman at this time. Thank you all for feedback. Bye
LiloLilo - 30 01 07 - 15:24
Actually, the C standard lets you allocate objects larger than ptrdiff_t can hold. It's just undefined to subtract 2 pointers that are farther apart than that.
asdf - 01 02 07 - 01:32
Hmmm, you're right. Unfortunate, that. I wonder if that's to support 16-bit x86 near/far/huge silliness.
Phaeron - 01 02 07 - 01:57
Nah, it's more of a speed/size vs. correctness tradeoff with the burden placed on C programmers as opposed to implementers. You have to keep in mind that C is kind of lame in that it doesn't let you specify the range of an integer type and it just gives you whatever handful of types an implementation uses. So size_t is just some unsigned type large enough to allocate an object and ptrdiff_t is just some signed integer that basically only has a minimum range constraint. There is no relationship whatsoever between size_t and ptrdiff_t.
And the lame part about this is that one of the committee members told me that the reasoning behind this was that subtracting 2 pointers is rare and that in practice you can get away with just casting the result to a size_t (which assumes a. the implementer uses 2's complement b. size_t and ptrdiff_t have exactly the same number of value bits c. the C compiler doesn't care that you're doing something undefined).
The funny thing is how C++ compounds this embarrassment even further:
* Subtracting 2 pointers isn't rare in the STL.
* The really sneaky way that this all works in C is because pointer addition is the only arithmetic operation that doesn't do any integer promotions, so both ptrdiff_t and size_t can be added to a pointer (or equivalently using array index notation) without problems.
* The original STL draft was specified using distance_type everywhere.
* During standardization, containers were "cleaned up" to use size_type in places (size_type is only used in containers; iterators, and hence the algorithms, know nothing about it).
* Somewhere in the standard, you'll find that it's undefined if you pass an iterator range to any STL function where the distance between them is larger than distance_type's range.
There are 3 really funny (but sad) things about all this:
* std::advance was templated to take an arbitrary type as opposed to using the iterator's distance_type (too bad using it on a value larger than distance_type's range is actually undefined). Iterators were not templated to do this, they all use distance_type.
* Some implementations of std::vector will do a sanity check to make sure the requested size doesn't overflow vector::max_size() but no implementation actually uses PTRDIFF_MAX for max_size(), they all use SIZE_MAX. The ones that don't even bother to do a sanity check have a problem with vector<bool> being able to overflow SIZE_MAX if they didn't bother to make size_type larger than size_t.
* The C++ standard committee was clever enough to specialize std::less on pointers to allow comparing 2 pointers that don't belong to the same array (as opposed to leaving it undefined like the < operator) but they weren't clever enough to template std::distance to do something similar. Yay, C++ provides a portable way to determine if 2 arrays overlap in O(1) time as opposed to O(N) time. Boo, C++ doesn't provide a way to determine the distance between 2 pointers in O(1) time as opposed to O(ceil(SIZE_MAX/PTRDIFF_MAX)) time.
fn1. Check out what happens if you "allocate an object larger than SIZE_MAX":http://www.open-std.org/jtc1/sc22/wg14/w.. if you don't believe how insane the C committee is.
asdf - 01 02 07 - 09:16
MPlayer's GIF demuxer just got polished a bit and should be able to play any GIFs thrown at it. Please try SVN MPlayer on your giant GIF files and report any problems :)
compn (link) - 02 02 07 - 09:46
What about Opera???
capma - 03 02 07 - 13:00
Does the encoder optimize the output by only tracking the difference between frames? This helps a lot with filesize problems.
Blah - 04 02 07 - 04:29
Yes, it does -- it computes the delta frame, computes a bounding rect around the changed pixels, and then encodes the rect. There are some interesting optimizations that can be done with the GIF delta modes, though.
Phaeron - 04 02 07 - 04:57
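The delta step Phaeron describes can be sketched in a few lines: compare two frames of palette indices and find the bounding rectangle of the changed pixels, so only that sub-rectangle needs to be re-encoded. This is a sketch under my own assumptions (frames as 2D lists, a made-up helper name), not VirtualDub's actual code:

```python
# Sketch of inter-frame delta optimization: compute the bounding
# rectangle (x, y, width, height) of all pixels that differ between
# the previous and current frames of palette indices.

def changed_rect(prev, cur):
    xs, ys = [], []
    for y, (prev_row, cur_row) in enumerate(zip(prev, cur)):
        for x, (p, c) in enumerate(zip(prev_row, cur_row)):
            if p != c:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # identical frames; an encoder can emit a tiny dummy rect
    return (min(xs), min(ys),
            max(xs) - min(xs) + 1,
            max(ys) - min(ys) + 1)
```

For a mostly static animation the changed rect is a small fraction of the frame, which is exactly why delta encoding shrinks files so dramatically compared to storing every frame whole.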
There's one problem with loading GIF files in VirtualDub: it doesn't preserve the information about the display time of each frame. Instead, it plays all frames at the same constant FPS, which in some cases is not what the user wants, because it breaks the original movie. Is it possible to retain the times so the sequence displays correctly? (and to save it into AVI correctly, not with a normalized FPS)
SasQ (link) - 14 02 12 - 14:45