Fun with animated GIFs
For no particular reason, I sat down this weekend and wrote animated GIF import/export routines. Animated GIFs aren't that common anymore -- at least, for non-obnoxious uses -- but I don't have any tools to manipulate them and hadn't ever worked with the format. Besides, the last time I'd experimented with LZW compression was in high school, where I got around 10 bytes/sec on a 7MHz 68000. I was pretty sure I could improve on that since I actually know about hash tables this time.
It's now apparent to me why the GIF image format became so popular: it's much simpler and easier to implement than either JPEG or PNG. JPEG is difficult because of the inverse discrete cosine transform (IDCT) and marker stuffing in the bitstream; PNG's Deflate compression uses two nested layers of Huffman trees in addition to sliding window compression. The LZW compression used by GIF, on the other hand, can be implemented in a reasonably performant fashion with about a page of code each for the encoder and decoder. All told, I'd say it only took about a day each to get the GIF decoder and encoder working, which is astonishing when you consider that GIF is still competitive with PNG for some types of images. It helps that the original CompuServe GIF specification is well-written and precise, although the PNG specification is even better in these respects.
It also helps that I cut a bunch of corners in the initial implementations, but it works on the test images so far....
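For the curious, the core of the LZW dictionary trick fits in a few lines. This is a simplified sketch, not my actual code: it emits whole codes into an array instead of packing variable-width codes into the GIF bitstream, and it omits the clear/EOF codes. The dictionary-as-(prefix, byte) structure is the part a hash table speeds up; a linear search keeps it obvious here:

```c
#define MAX_CODES 4096

/* Dictionary: code i (i >= 256) stands for the string named by code
   dict_prefix[i] followed by the byte dict_byte[i]. Codes 0-255 are
   the single literal bytes. */
static int dict_prefix[MAX_CODES];
static unsigned char dict_byte[MAX_CODES];
static int dict_size;

/* Find the code for (prefix, c), or -1 if it isn't in the dictionary.
   A real encoder replaces this linear scan with a hash table. */
static int dict_find(int prefix, unsigned char c) {
    for (int i = 256; i < dict_size; i++)
        if (dict_prefix[i] == prefix && dict_byte[i] == c)
            return i;
    return -1;
}

/* Compress src[0..n) into out[]; returns the number of codes emitted.
   Simplified: no clear/EOF codes and no variable-width bit packing. */
static int lzw_compress(const unsigned char *src, int n, int *out) {
    int nout = 0;
    int cur = src[0];            /* current string, as a dictionary code */
    dict_size = 256;
    for (int i = 1; i < n; i++) {
        int ext = dict_find(cur, src[i]);
        if (ext >= 0) {
            cur = ext;           /* string+byte already known: extend it */
        } else {
            out[nout++] = cur;   /* emit the longest known string */
            if (dict_size < MAX_CODES) {
                dict_prefix[dict_size] = cur;   /* learn string+byte */
                dict_byte[dict_size] = src[i];
                dict_size++;
            }
            cur = src[i];
        }
    }
    out[nout++] = cur;
    return nout;
}
```

A real GIF encoder hashes the (prefix, byte) lookup and emits a clear code to reset the dictionary once it fills, which is where most of the performance tuning goes.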
Now, any export code must of course be appropriately tested:
01/15/2007 04:54 PM 170,706,854 SHUFFLE! OP.gif
640x360, 2183 frames, 24 fps. And no, I'm not uploading it.
Internet Explorer did manage to open the 163MB animated GIF, but froze for about a minute at 100% CPU during the load. It then only managed to play the video at about 8 fps, and only got through about a quarter of it. That's disappointing, but reasonable.
Mozilla Firefox also had difficulty opening the GIF file. A lot of difficulty, in fact:
(FYI, my laptop has 1.2GB of memory.)
Amazingly enough, although it swapped a lot, it did manage to play the entire video, at full frame rate, and without crashing. So, if you want to play your favorite anime openings in a horrendously inefficiently encoded fashion and in 256 color glory, Firefox is clearly better than Internet Explorer. I might try writing an animated GIF larger than 4GB next, just to see what happens.
I predict the world would implode.
Seriously though, It seems as if they're extracting the images into memory raw, maybe even converting them to 24/32bit RGB for playback. Not really the most efficiant memory wise, but probably saves a bit of CPU time.
Blight - 15 01 07 - 22:34
Have you tried Irfanview or something old school like SEA by Photodex ?
mikesum32 (link) - 16 01 07 - 02:41
Nice! This can be useful for forum avatars and signature images. Looking forward to it being implemented in VD :)
Spinal (link) - 16 01 07 - 13:47
sorry, last post meant to say output animated gif's (it already outputs .avi's) *duh*
roseman - 16 01 07 - 15:42
Someday I will be qualified to operate this comment form...
Did this blog entry mean that someday VirtualDub might inherit the ability to import and/or output animated .GIFs?
roseman - 16 01 07 - 15:44
A clear case of 'Because I can' x-D
-SPM-Mad - 17 01 07 - 11:06
FYI, neither Firefox nor Internet Explorer were able to open a 2.6GB animated GIF. :)
Phaeron - 17 01 07 - 23:34
Phaeron: You got too much time on your hands if you make a 2.6GB GIF ;)
P. Simons - 18 01 07 - 09:03
Yes, Firefox stores each frame uncompressed in memory; trouble may happen with much smaller files, for example a 100fps 500x500 animation where only a few pixels change each frame. I have a 600K GIF that makes it eat 80 megs of RAM...
By the way, a "Video GIF" may look much better than most people realize:
Long story short, each frame may include its own palette, and changing the palette doesn't destroy what is already decoded. This allows the use of a frame-optimized palette. Otherwise you can go the cheap way (RGB 332) or the very expensive one (generate optimal palette for all the frames) which might not look so great anyway.
This might also help justify Firefox's decision of storing uncompressed RGB, but it's optimizing for the corner case.
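The "cheap way" Michel mentions can be made concrete. Here's a hypothetical sketch of an RGB 3-3-2 fixed palette: the index packs the top 3 bits of red and green and the top 2 bits of blue, so every 24-bit color maps to one of 256 fixed entries with no per-frame analysis at all:

```c
/* Map a 24-bit color to an RGB 3-3-2 palette index: top 3 bits of red,
   top 3 bits of green, top 2 bits of blue. */
static unsigned char rgb332_index(unsigned char r, unsigned char g, unsigned char b) {
    return (unsigned char)((r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6));
}

/* Reconstruct the palette entry for an index, scaling so that 0 stays 0
   and the maximum channel value maps back to 255. */
static void rgb332_entry(unsigned char idx,
                         unsigned char *r, unsigned char *g, unsigned char *b) {
    *r = (unsigned char)(((idx >> 5) & 7) * 255 / 7);
    *g = (unsigned char)(((idx >> 2) & 7) * 255 / 7);
    *b = (unsigned char)((idx & 3) * 255 / 3);
}
```

Blue gets only 4 levels because the eye is least sensitive to it; the price is visible banding, which is why a per-frame optimized palette looks so much better.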
Michel - 18 01 07 - 11:39
No, the world won't implode, but you will start getting null pointers when you try to malloc() or new anything. In Windows (32-bit Windows, at least), the process virtual address space is 2 GB wide -- once you hit the top, Windows won't let you allocate any more memory. I found this out the hard way while playing with photo overlays in Google Earth a while back.
Lee - 19 01 07 - 20:33
Actually, you can increase the address space of programs on 32bit Windows to 3GB by setting a flag in EXE header (editbin.exe /LARGEADDRESSAWARE), and adding /3GB switch in boot.ini.
ender - 21 01 07 - 06:49
That still wouldn't permit a single memory allocation larger than 2GB. The reason is that the OS has libraries and system memory blocks that are fixed right below the 2GB point, so you can never have a contiguous allocation cross the 2GB boundary even with /3GB enabled. In practice, the location of system DLLs in Windows XP SP2 makes it hazardous to attempt to contiguously allocate even 512MB-1GB of address space.
Also, with a 32-bit ptrdiff_t, it isn't correct for a C implementation to allow you to malloc() that much memory. Even if you could allocate that much memory, you'd have to go straight to the Win32 APIs.
Phaeron - 21 01 07 - 16:36
Will it be in the next version?!
me - 22 01 07 - 06:37
Yes. If you want to try it, head over to the Testing/Bug Reports forum and try 1.7.1 test-3, which has the animated GIF import/export code.
Phaeron - 22 01 07 - 23:14
Hi all, I'm really interested in bitmap file formats because I'm trying to develop a new one in my spare time. My objective is to create a new lossless file format able to compress an image better than PNG. I'm currently within 20KB of PNG (using PNGOUT) on the Lenna test image, and am working on new ideas to do better than this.
I'd like to know your opinion on the PNG file format and on a possible (I hope) new file format that is much simpler to code because it's based on neighbour differences and LZW (whose patents expired in the last 2 years) -- no sin & cos, no floats, no Huffman at this time. Thank you all for feedback. Bye
LiloLilo - 30 01 07 - 15:24
Actually, the C standard lets you allocate objects larger than ptrdiff_t can hold. It's just undefined to subtract 2 pointers that are farther apart than that.
asdf - 01 02 07 - 01:32
Hmmm, you're right. Unfortunate, that. I wonder if that's to support 16-bit x86 near/far/huge silliness.
Phaeron - 01 02 07 - 01:57
Nah, it's more of a speed/size vs. correctness tradeoff with the burden placed on C programmers as opposed to implementers. You have to keep in mind that C is kind of lame in that it doesn't let you specify the range of an integer type and it just gives you whatever handful of types an implementation uses. So size_t is just some unsigned type large enough to allocate an object and ptrdiff_t is just some signed integer that basically only has a minimum range constraint. There is no relationship whatsoever between size_t and ptrdiff_t.
And the lame part about this is that one of the committee members told me that the reasoning behind this was that subtracting 2 pointers is rare and that in practice you can get away with just casting the result to a size_t (which assumes a. the implementer uses 2's complement b. size_t and ptrdiff_t have exactly the same number of value bits c. the C compiler doesn't care that you're doing something undefined).
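The "cast the result to size_t" workaround looks like this in practice (the helper name is made up; as noted above, the subtraction itself is still formally undefined when the distance exceeds PTRDIFF_MAX, and the cast only recovers the value on typical two's-complement targets):

```c
#include <stddef.h>

/* byte_distance() illustrates the committee's suggested escape hatch:
   the difference is known to be non-negative, so the (possibly
   out-of-range) signed result is immediately cast back to size_t.
   This relies on two's complement and on size_t/ptrdiff_t sharing
   the same value bits -- neither of which the standard guarantees. */
static size_t byte_distance(const char *begin, const char *end) {
    return (size_t)(end - begin);
}
```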
The funny thing is how C++ compounds this embarrassment even further:
* Subtracting 2 pointers isn't rare in the STL.
* The really sneaky way that this all works in C is because pointer addition is the only arithmetic operation that doesn't do any integer promotions, so both ptrdiff_t and size_t can be added to a pointer (or equivalently using array index notation) without problems.
* The original STL draft was specified using distance_type everywhere.
* During standardization, containers were "cleaned up" to use size_type in places (size_type is only used in containers, iterators (and hence the algorithms) know nothing about them).
* Somewhere in the standard, you'll find that it's undefined if you pass an iterator range to any STL function where the distance between them is larger than distance_type's range.
There are 3 really funny (but sad) things about all this:
* std::advance was templated to take an arbitrary type as opposed to using the iterator's distance_type (too bad using it on a value larger than distance_type's range is actually undefined). Iterators were not templated to do this, they all use distance_type.
* Some implementations of std::vector will do a sanity check to make sure the requested size doesn't overflow vector::max_size() but no implementation actually uses PTRDIFF_MAX for max_size(), they all use SIZE_MAX. The ones that don't even bother to do a sanity check have a problem with vector<bool> being able to overflow SIZE_MAX if they didn't bother to make size_type larger than size_t.
* The C++ standard committee was clever enough to specialize std::less on pointers to allow comparing 2 pointers that don't belong to the same array (as opposed to leaving it undefined like the < operator), but they weren't clever enough to template std::distance to do something similar. Yay, C++ provides a portable way to determine if 2 arrays overlap in O(1) time as opposed to O(N) time. Boo, C++ doesn't provide a way to determine the distance between 2 pointers in O(1) time as opposed to O(ceil(SIZE_MAX/PTRDIFF_MAX)) time.
Check out what happens if you allocate an object larger than SIZE_MAX (http://www.open-std.org/jtc1/sc22/wg14/w..) if you don't believe how insane the C committee is.
asdf - 01 02 07 - 09:16
MPlayer's GIF demuxer just got polished a bit and should be able to play any GIFs thrown at it. Please try SVN MPlayer on your giant GIF files and report any problems :)
compn (link) - 02 02 07 - 09:46
What about Opera???
capma - 03 02 07 - 13:00
Does the encoder optimize the output by only tracking the difference between frames? This helps a lot with filesize problems.
Blah - 04 02 07 - 04:29
Yes, it does -- it computes the delta frame, computes a bounding rect around the changed pixels, and then encodes the rect. There are some interesting optimizations that can be done with the GIF delta modes, though.
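The delta-frame step described above can be sketched as follows; this is an illustrative version, not VirtualDub's actual implementation. It scans two paletted frames and returns the tightest rectangle covering the changed pixels, which is what the encoder would emit as the next image block:

```c
/* Bounding rectangle of the pixels that differ between two 8-bit
   paletted frames; x1/y1 are exclusive. */
typedef struct { int x0, y0, x1, y1; } Rect;

/* Returns 1 and fills *r if any pixel changed, 0 if the frames match.
   A GIF encoder would then encode just this sub-rectangle instead of
   the full frame, leaving the rest of the canvas untouched. */
static int delta_rect(const unsigned char *prev, const unsigned char *cur,
                      int w, int h, Rect *r) {
    int x0 = w, y0 = h, x1 = -1, y1 = -1;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (prev[y * w + x] != cur[y * w + x]) {
                if (x < x0) x0 = x;
                if (x > x1) x1 = x;
                if (y < y0) y0 = y;
                if (y > y1) y1 = y;
            }
    if (x1 < 0) return 0;                /* identical frames */
    r->x0 = x0; r->y0 = y0; r->x1 = x1 + 1; r->y1 = y1 + 1;
    return 1;
}
```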
Phaeron - 04 02 07 - 04:57
There's one problem with loading GIF files in VirtualDub: it doesn't preserve the information about the display time of each frame. Instead, it plays all frames at the same constant FPS, which in some cases is not what the user wants, because it breaks the original movie. Is it possible to retain the times to display the sequence in the correct way (and save it into AVI correctly, not with a normalized FPS)?
SasQ (link) - 14 02 12 - 14:45
With regards to what Michel said back in 2007:
It seems like VirtualDub uses the little-used per-frame palette capability of the GIF format to some extent -- its output is higher quality than one would expect for GIFs; it seems to calculate the optimal palette per GIF frame for new opaque pixels. (If I open a VirtualDub-generated GIF in GIMP, I notice that GIMP recognizes that it has to load the file into RGB space, since GIMP's editing workspace can't work with per-frame palettes.) If web browsers allowed 0ms-delay frames (see this comparison from 2012 for the minimums at the time: http://nullsleep.tumblr.com/post/1652451..), it would be possible for a GIF encoder to output multiple no-delay frames to build a true-color image for every given video frame input (rendering is quick enough these days that it wouldn't delay the GIF's progression much in real milliseconds). But something that can still be done, even if an encoder must keep delays between frames, is to distribute into subsequent GIF frames (or insert one or more extra GIF frames, if the frame spacing of the video is above some minimum like 20ms) some pixels in new palettes, to smooth out colors that weren't accurately rendered by the previous frame's palette -- particularly if the next frame doesn't have many other changes going on anyway.
It would be a much more complicated process. As the GIF is built up, the encoder would maintain a list of pixels that, when last painted to a GIF frame, were rendered to a color more than some colorspace distance from the original video pixel's color. Then, when free intervals arise (the interval length sufficient to insert an extra frame could be configurable in the GIF output settings), or on subsequent frames where not much else is being painted, it would take the opportunity to repaint them with a more optimal palette.
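The bookkeeping could start with something like this (all names hypothetical): after quantizing each frame, flag the pixels whose rendered color landed farther than a threshold from the true color, so a later low-activity frame can repaint them under a better palette:

```c
/* A minimal sketch of the error-tracking pass. Names and the simple
   squared-RGB metric are assumptions, not anyone's actual code. */
typedef struct { unsigned char r, g, b; } RGB;

/* Squared Euclidean distance in RGB space. */
static int color_dist2(RGB a, RGB b) {
    int dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return dr * dr + dg * dg + db * db;
}

/* Set flags[i] = 1 for each pixel whose quantized color is off by more
   than threshold2 (a squared distance); returns how many were flagged.
   A later frame with spare palette room would repaint flagged pixels. */
static int flag_bad_pixels(const RGB *truth, const RGB *quantized,
                           int n, int threshold2, unsigned char *flags) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        flags[i] = (unsigned char)(color_dist2(truth[i], quantized[i]) > threshold2);
        count += flags[i];
    }
    return count;
}
```

A real implementation would also weigh the channels perceptually and decay flags once a pixel is repainted or overwritten by new content.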
Hopefully that was clearly stated. No one else ever seems to have made a GIF encoder that does this automatically (at least as far as I've found). I guess it's a long shot that Phaeron would be interested in taking that on at this point, given his stated lack of free time and the fact that this thread hasn't been updated in years.
In these days of HTML5 video, there may be less impetus for such a thing, but in some sense it seems a shame that no one ever really pushed animated GIF output as far as it could technically go.
The GIF export may have been a bit of fun for Phaeron, but to this day I still use VirtualDub for making GIFs. Nothing I've seen with a GUI makes GIFs as high quality or as conveniently from video files or image sequences.
Jacob - 26 10 16 - 11:17