Animation tools & tips
This is not a technical document, but rather a list of resources for the creation and playback of animation on a computer.
The most valuable resource is knowledge and experience. Drawing is an art; the physics of motion of solid and of flexible objects is a science. Animation has been described as an "exacting art": the blending of artistic creation with the physical groundings and laws of weight and momentum. Both are needed to bring life into objects in a credible way.
The main activity of CompuPhase is contract programming. CompuPhase develops several products that allow animations to be embedded into programs, be it games, screen savers or desktop applications. Obviously, we use our own tools to give us an edge over the competition in our pursuit of contracts. If you are looking for developers, rather than for a programmer's library or for a collection of tips and tricks, please contact us. If you are based in the Netherlands (or if you can read Dutch), read our Dutch pages for more information.
- Jump to the section with software and resources to create animations on a computer.
- Jump to the section with tools to embed animations in a computer program.
- Jump to the section with the tips.
- Looking for animation file formats? The Wotsit's Format pages are a good source.
If experience is the most valuable resource, then the most valuable tip must be to study the work of others. Look at cartoons, especially those by Disney. Or better yet, read a book; I highly recommend:
White, Tony; "The Animator's Workbook"; Phaidon Press Ltd.; 1986;
A workbook to teach traditional animation, from idea to a film reel. The book focuses on making the key drawings and the inbetweens, and it has special chapters on lip synchronization and realism in animations.
Blair, Preston; "Cartoon Animation"; Walter Foster Pub; 1995;
A "learn by example" book. Less technical than Tony White's book but a large amount of sketches and drawings.
If you want to try your hand at traditional animation, Chromacolour International sells the tools, paper and cel for traditional animation. Chromacolour also has a range of products for scanning or drawing animations on the computer.
More focused on achieving maximum effect with minimum effort than the book by Tony White is Jim Blinn's list of animation tricks. Note that Tony White's book covers, and elaborates on, many of these "tricks" in a more structured manner. In addition, the book covers subjects that are absent from Jim Blinn's list.
A nice site with interviews, tips, "how-to" documents and template forms targeted to traditional animation is Animation Meat!; many documents are available as PDF (Adobe Acrobat) for easy on-line viewing or high-resolution printing.
Most books on (traditional) animation start with a chapter that presents the flow of a complete animated movie project, starting with brain-storming and storyboarding and ending (hopefully) with the applause after seeing the finished movie. Many animated movies are "inked" and "painted" on a computer today, but the (line) drawings, as well as the filming, are typically done in the traditional way. In our projects, the drawing and compositing are usually done on a computer too, because the end result is intended for a computer display. Storyboarding, however, we still do on paper. It is still more convenient to sketch on paper than it is to sketch on a graphic tablet and, more importantly, it is more convenient to discuss the story with the project group with all sketches on a large table (or pinned on a wall) than to cram the group behind a 17" monitor.
We started storyboarding using the form that the "Animation Meat!" web site provides; it allows you to make three sketches on one sheet of paper (in "landscape" mode). Today, we use a different storyboard sheet, which we found more practical. With the thought that what serves us may serve you, we have made our storyboard form available to you as a free download (Adobe Acrobat "PDF" format). If you would desire this file in Adobe Illustrator format, please contact us.
The storyboard form is intended to be printed in portrait mode and then be cut in half. This gives you two cards that hold one storyboard panel each. The area for the panel is based on an "A" paper size that is commonly used in Europe, with an aspect ratio of 1:1.4. There are thin "crop marks" in this "A"-format rectangle for both television (aspect ratio 1:1.33, or 3:4) and a typical movie format (aspect ratio 1:1.85) that can also be used for HDTV. You get the television format by drawing vertical lines between the marks on the top and the bottom edges of the panel, and the movie format by drawing horizontal lines between the marks on the left and the right edges.
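The geometry behind the crop marks is simple to work out. The sketch below (the 140 mm x 100 mm panel size is an assumption for illustration, not the real dimensions of our form) computes how far from the panel's edges the marks fall for a given target aspect ratio:

```python
# Where crop marks fall on an "A"-format storyboard panel (aspect 1.4:1,
# width:height). A narrower target aspect trims the sides; a wider one
# trims the top and bottom. All dimensions are illustrative.

def crop_margins(panel_w, panel_h, target_aspect):
    """Return (horizontal_margin, vertical_margin) needed on each side to
    crop the panel to target_aspect (width/height), keeping it centred."""
    panel_aspect = panel_w / panel_h
    if target_aspect < panel_aspect:
        # Narrower than the panel: vertical lines between top/bottom marks.
        frame_w = panel_h * target_aspect
        return ((panel_w - frame_w) / 2, 0.0)
    else:
        # Wider than the panel: horizontal lines between left/right marks.
        frame_h = panel_w / target_aspect
        return (0.0, (panel_h - frame_h) / 2)

# A hypothetical 140 mm x 100 mm panel (aspect 1.4:1):
print(crop_margins(140.0, 100.0, 4 / 3))   # television: trim the sides
print(crop_margins(140.0, 100.0, 1.85))    # movie/HDTV: trim top and bottom
```

For the television format the marks sit roughly 3.3 mm in from the left and right edges of this hypothetical panel; for the movie format, about 12.2 mm in from the top and bottom.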
During discussions, it frequently occurs that panels get inserted, removed or replaced. One of the main problems that we had with the storyboard layout from "Animation Meat!" was that it puts three panels on one sheet, making the replacement of a single drawing a bit of a burden. Apart from just having one panel per sheet, we wanted our storyboard sheet to have a flexible panel numbering system. Replacing sheets is easy: you just give the new sheets the same panel numbers as the old sheets. We try not to renumber panels, because documents or memos sometimes refer to specific panels.
When removing sheets, we note in the "next" field of the sheet preceding the first removed sheet the panel number of the sheet that follows the last removed sheet. For example, if in the range 1 .. 10, sheets 4 and 5 are removed, we write the number "6" in the "next" field of sheet 3. In a similar manner, on sheet 6, we note in the "previous" field that the sheet that precedes it is number "3".
Insertion works in a similar manner as removal, in addition to giving "sub-numbers" to the inserted sheets. If in a range 1 .. 10, we want to insert three sheets between sheets 7 and 8, those three sheets are numbered 7.1, 7.2 and 7.3. Then, the "next" field on sheet 7 gets the number 7.1 and the "previous" field of sheet 8 is marked 7.3.
By the way, we leave the "next" and "previous" fields empty if the numbering is "logical" (in a loose sense of the word).
When a storyboard is finished, we put it in an A5 binder (A5 is the format of one sheet; it is half of A4, which is the typical paper size for letters in Europe). This gives us a booklet of the storyboard that we can browse through quickly. In the sheet format we made sure that the most important information of the sheet is away from the booklet's binding (the booklet is bound on the left edge). Actually, we make use of this property well before the storyboard is carved in stone. The storyboard is pinned on a wall or on a large board when planning things like timing and music, or when verifying the flow of the storyline. To save space, we pin the sheets in a way that the left part of each sheet is overlaid by the sheet on its left. Obviously, it is now important that the space reserved for "effects and notes" does not contain data that is important for the timing, music or storyline. We use it for hints for the animator, zoom/pan instructions, or whatever pops up during discussions, but not to describe the action or the dialogue.
Animation has many faces, as it covers any change of appearance or any visual effect that is time-based. It includes motion (change of position), time-varying changes in shape, colour (palette), transparency and even changes of the rendering technique.
Two typical computer animation methods are "frame" animation and sprite animation. Frame animation is animation inside a rectangular frame. It is similar to cartoon movies: a sequence of frames that follow each other at a fast rate, fast enough to convey fluent motion.
Frame animation is an "internal" animation method. It is typically pre-compiled and non-interactive. The frame is typically rectangular and non-transparent. Frame animation with transparency information is also referred to as "cel" animation. In traditional animation, a cel is a sheet of transparent acetate on which a single object (or character) is drawn.
Sprite animation, in its simplest form, is a two-dimensional graphic object that moves across the display. Sprites often can have transparent areas. By using a mask or a transparent colour, sprites are not restricted to rectangular shapes.
Sprite animation lends itself well to be interactive: the position of each sprite is controlled by the user or by an application program (or by both). It is called "external" animation.
That said, nearly all sprite animation libraries and tool kits allow some form of internal animation. That is, in addition to moving a sprite around, the library allows you to change the appearance of the sprite, usually by attaching a new image to the sprite.
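The colour-key idea behind non-rectangular sprites fits in a few lines. The sketch below uses a reserved palette index as the transparent colour (the index value and the pixel data are illustrative, not tied to any particular library):

```python
# Minimal colour-key blit: pixels that match the transparent colour leave
# the background untouched, so the sprite is not restricted to a rectangle.
TRANSPARENT = 0  # assumed palette index reserved for "see-through" pixels

def blit_sprite(background, sprite, x, y, key=TRANSPARENT):
    """Copy `sprite` (a list of rows of palette indices) onto `background`
    at position (x, y), skipping every pixel that has the colour key."""
    for sy, row in enumerate(sprite):
        for sx, pixel in enumerate(row):
            if pixel != key:
                background[y + sy][x + sx] = pixel

# A 4x3 background filled with colour 9, and a small arrow-like sprite:
bg = [[9] * 4 for _ in range(3)]
arrow = [[0, 1, 0],
         [1, 1, 1]]
blit_sprite(bg, arrow, 1, 0)
# Only the non-key pixels of the arrow land on the background.
```

A mask-based sprite works the same way, except that the "is this pixel transparent?" test reads a separate mask image instead of comparing against a key colour.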
Referring to the CompuPhase products: EGI is a frame animation engine/compiler, with some extensions towards sprite animation, and AniSprite is a sprite animation library with (the common) support for multiple images per sprite. We think that by combining both products, you can achieve very mobile sprites, or synchronized cartoon-like animations.
In casual parlance, I refer to animated objects (sprites or movies) as "animobs". In the next paragraphs, animob stands just for "animated object", regardless of the way that this object is animated.
In games and in many multimedia applications, the animations should adapt themselves to the environment, the program status or the user activity. That is, animation should be interactive.
Referring to the CompuPhase products, EGI has had support for dynamically switchable segments since version 1.0. Each segment is a small movie, or a "clip" as an animator would call it. EGI and AniSprite were designed with interactivity in mind.
To make the animations more event driven, one can embed a script, a small executable program, in every animob. Every time an animob touches another animob or when an animob gets clicked, the script is activated. The script then decides how to react to the event (if at all). The script file itself is written by the animator or by a programmer; it has a simplified, C-like syntax.
We have developed this technology, and some of it is already available in one of our products (EGI 3.0 and above). We are actively developing this idea further. If you have any requirement for a tool or product that features high-quality, interactive animation, we invite you to contact us. We have a Dutch page with more information, and you can e-mail us directly.
- Pro Motion is a low-cost, feature-packed paint program that is especially designed to create frame-based animations. It supports FLIC files, Animated GIF and a proprietary animation file format.
One of the strong points of Pro Motion is the handling and creation of images and animations with 256 colours. It offers full control over the palette.
Pro Motion can be extended with plug-ins to support more file formats or to add image processing filters. A good example is the PluginMaker plug-in, which modifies frames in Pro Motion under control of a user-written program. A plug-in that adds support for the EGI extended FLIC format is also available here.
- Plastic Animation Paper is a professional, easy-to-use cel drawing program (to create frame-based animations). It is especially good at creating cleaned-up "inked" drawings from initial "rough sketches". Its light box (onion skinning) is convenient to use.
- Already mentioned above, Chromacolour International is a good source for hardware and software for both traditional and computer-based animation.
- Paint Shop Pro by Corel Corporation is a nice program for drawing and image processing. Notable for Paint Shop Pro is that it can save the alpha channel in a separate file in the .BMP format (although the extension is .MSK). These (alpha channel) masks can be used with AniSprite.
- TVPaint by CiS is a full featured paint and animation program.
- Video editing programs, like Adobe Premiere, can be put to good use for animation as well. Next to the transition effects and the general image processing filters, these programs also provide support for sequencing and synchronization.
- For frame based animation:
- Video for Windows by Microsoft Corporation
- Smacker by RAD Game Tools
- For sprite animation:
I have included here only tips that are not in the book by Tony White or in the list by Jim Blinn. Please refer to those sources for more tips and techniques. And please, please, contact me if you have a nice trick or tip that is missing from any of these lists.
Some of these tips are aimed at the artists who create the drawings, some at the programmers who build a program around them. However, from my point of view, most of the tips presented here bridge the gap between design and implementation.
I was once asked to make an animation of a first-person view of a car driving on a highway and passing several traffic signs. A kind of a "fly by" animation, in fact. The symbols on those signs should be selectable at run-time.
My solution was to create the entire animation with 240 colours, instead of the usual 256. This gave me 16 free palette entries for special purposes. In each traffic sign, I painted one or more "symbols", superimposed and using these special colours. At run-time, the software mapped the special palette entries to white (the background colour of the traffic sign) or to black, depending on what sign to display.
The win was that I did not have to render the "fly by" animations separately for every sign that I might have to display; I rendered just one fly by animation with a multi-purpose sign.
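The run-time side of this trick amounts to rewriting a handful of palette entries before playback. The sketch below illustrates the idea; the reserved-index layout (entries 240 to 255, one per symbol) and the colours follow the description above, while the function name and data layout are my own for illustration:

```python
# Sketch of the run-time palette trick: the animation uses 240 "real"
# colours, and the last 16 palette entries are reserved, one per optional
# sign symbol. Before playback, each reserved entry is mapped to black
# (symbol visible) or to the sign's white background (symbol invisible).
WHITE = (255, 255, 255)   # background colour of the traffic sign
BLACK = (0, 0, 0)

def select_symbols(palette, visible):
    """palette: a list of 256 (r, g, b) tuples; visible: a set of symbol
    numbers 0..15 naming which reserved entries should show as black."""
    for i in range(16):
        palette[240 + i] = BLACK if i in visible else WHITE
    return palette

palette = [(128, 128, 128)] * 256   # dummy palette for the example
select_symbols(palette, {3})        # show only symbol number 3
```

The animation frames themselves never change; one rendered fly-by serves every sign.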
A film reel runs at 24 frames per second (fps). To reduce the number of drawings to make, traditional cartoon animators often shoot "on twos", which gives an effective rate of 12 fps.
On the computer, there is no restriction of a hardware based frame rate. Unless you plan to transfer the animation to video, you are free to choose any frame rate. In my experience, 15 fps gives a good balance between fluency and the number of images you have to draw or render. (A lot depends on the images, though. Sometimes, one can go as low as 10 frames per second and still have fluent motion.)
Without shadows, a character may seem to "float" above the ground level. Even rough, approximate shadows work better than no shadows at all. Since shadows give a visual clue of where a character or object touches the ground (or the surface of a table top, for example), shadows also help in adding depth to perspective drawings.
A fascinating example of how shadows can even suggest motion in a stationary object (and a deeper discussion of shadows) can be found at Daniel Kersten's lab.
As a side note, shadows are especially helpful when drawing in an axonometric or an isometric perspective. Axonometric projections are well beyond the scope of this "tip", and isometric projections are a subset of axonometric projections. These (parallel) perspectives were popularized by several contemporary games, though the projections themselves are quite old.
Our sprite library AniSprite supports "luma masks" that can provide subtle, and more realistic, shadows. The frame animation library EGI can generate these masks from multiple transparent colours (as of version 3.0). If your animation kit supports alpha blending, you can also use that to create realistic shadows.
Photo and video cameras show an amount of blur when they capture moving objects. The amount of blur can be small or large, depending on the magnitude and the speed of the motion and on the exposure time. The "motion blur" that is inherent to photo and video cameras also gives a clue about the speed and the direction of the motion. The human eye has a small latency, a kind of "afterglow" effect, that also blurs moving objects. This property of the human eye combines well with the camera blur, and leads to the rule of thumb that a movie shot at 20 fps conveys fluid motion.
Computer-generated images have no exposure time associated with them. Consequently, (rapidly) moving objects generated by computers can appear too crisp, and you will also need a much higher frame rate, say 30 fps, to keep the animation fluid (instead of stepwise).
Motion blur, if properly applied, can result in a good perceived sharpness of a moving object, while reducing the minimum frame rate at which the motion is fluid.
Motion blur is doable for frame animation, but often too costly for sprite animation. A low-cost substitute, that at least "blurs" the edges of the sprite with the background, is to use a mask with softened edges and to use alpha-blending to combine the sprite with the background. In addition to reducing the jaggedness of the edges of a sprite, this trick also has the effect of "blurring" the motion around the edges.
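The blend itself is a per-channel weighted average of sprite and background, driven by the mask value at each pixel. A minimal sketch (names are my own; any alpha-capable library does the equivalent internally):

```python
# Alpha-blend with a feathered (soft-edged) mask: the output pixel is a
# weighted mix of sprite and background. alpha runs from 0 (fully
# transparent) to 255 (fully opaque); feathered edge pixels get
# in-between values, which smooths both the edge and the motion.

def blend(sprite_px, background_px, alpha):
    """Per-channel linear blend: out = (a * s + (255 - a) * b) / 255."""
    return tuple((alpha * s + (255 - alpha) * b) // 255
                 for s, b in zip(sprite_px, background_px))

# A half-transparent edge pixel mixes the two colours roughly evenly:
print(blend((255, 0, 0), (0, 0, 255), 128))
```

In a real sprite engine this blend runs only over the sprite's bounding rectangle, with the alpha value read from the feathered mask.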
AniSprite is a sprite library for Microsoft Windows that supports alpha-blending.
Also available is a paper on algorithms to feather a mask for anti-aliased sprites.
Lip synchronizing costs a lot of effort, at least when done in the traditional way that is explained in Tony White's book. For high quality animation, the way to handle lip synchronization is still to carefully draw the character's mouth in the correct shape for every phoneme.
If your 2D character is not going to be able to display too much detail, making a perfect lip sync is rather pointless. What you can do is cheat: draw only a small number of basic mouth shapes: "A", "E", "I", "O", "U", "TH", "M", "F" and so on. Just enough to cover the major mouth forms. You will need a matrix of frames to allow for transitions between each form from any other form. So the fewer basic shapes you have to start with, the easier it is.
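The reason fewer shapes pay off so quickly is that the transition matrix grows quadratically. A quick back-of-the-envelope sketch (the counting here assumes one in-between per ordered pair of shapes, or per unordered pair if your transitions can be played in reverse):

```python
# The cost of the mouth-shape transition matrix grows quadratically with
# the number of basic shapes: every shape needs an in-between to every
# other shape.

def transition_count(n_shapes, reversible=False):
    """Number of in-betweens needed between n_shapes basic mouth shapes.
    If transitions can be played backwards, each pair is drawn once."""
    pairs = n_shapes * (n_shapes - 1)
    return pairs // 2 if reversible else pairs

print(transition_count(8))                  # 8 shapes: 56 in-betweens
print(transition_count(5))                  # 5 shapes: only 20
print(transition_count(8, reversible=True))  # reversible: 28
```

Dropping from eight basic shapes to five cuts the in-between work by more than half.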
An animated character usually has a few repeatable movements (loops), like walking. After having drawn all frames for one full step, you can repeat those frames for the next step. The character also has non-repeatable movements, such as standing up from a seated position. In addition to drawing the frames for the non-repeatable movement, you may also have to draw a few frames for the transition from the standing position into the walking loop.
In EGI parlance, repeatable movements are "loops" and non-repeatable movements are "transitions" (because they move from one pose to another pose).
In my experience, linking poses and loops to one another (via transitions) is easier if you carefully plan at what frame each loop ends. After you have drawn a complete loop, you can select any frame as the starting point and ending point. It has proven advantageous for me to draw the loops first and then to check what transitions are needed. At that time, I select the starting frame for each loop.
Or rather, I choose the ending frame for each loop. The ending frame for a loop is also the starting frame for a transition from that loop to something else. Therefore, I make it a key-frame. Having a key-frame start a transition is a plus. That the loop does not start with a key-frame (it ends with one) is only a minor inconvenience, because loops seldom start out of nothing; that is, a loop is mostly introduced by a transition, and only rarely does a loop start immediately from a pose or from another loop (in my animations, at least).
There have always been a large number of drawings to be made for animation, and this will likely remain so. But traditional animators have also been very inventive in finding ways to reduce the number of drawings that they needed to make. Drawing on cel (acetate foil) is taken for granted now, but it is nothing more than a technique to reuse the same drawings over different backgrounds.
When drawing a cartoon character on a computer, I suggest that you draw the facial expression (the eyes, mouth, nose) on a separate layer. So one layer will have the complete cartoon character with an empty face and the other layer fills in the face. This allows you to quickly change the facial expression of the character without drawing the entire body again.
The reason why traditional animators did not take this further is that you can layer only a limited number of cels on the exposure sheet; about six layers is considered the maximum. On a computer, however, a virtual cel is perfectly transparent and there is no limit on the number of layers.
This technique becomes even more interesting when you are creating dynamic animations on the computer (for example, in computer games).
Using a single timer as the time base for all time-related events, such as animations, is not as obvious as it may sound. If you play a MIDI track as background music to the animation, you probably already have two timers: the MIDI sequencer uses an internal timer to push MIDI events to the synthesizer and most sequencers do not make their timer available to external programs.
Features of animation tool kits may also tempt you in the wrong direction; some tool kits (especially the multi-threaded ones) allow you to put a timer on every animob ("animated object"). If the timers are truly independent, they are prone to "drift" slowly away from one another. For example, if you set two timers at a 50 ms interval (20 fps), but one timer really runs at 50.01 ms and the other at 49.99, the timers will quickly run out of sync. To keep the timers synchronized, you must appoint one of the timers as the master and add code to the other timer to constantly monitor the master and adjust itself. This process is called "chasing" a timer.
Chasing works rather well, if implemented properly, but it does require that you set the second timer (the "slave" timer) at a much higher frequency than the desired frame rate. The high frequency timer and the chasing code that is executed at every tick affects overall system performance, which is why you would not want to have dozens of timers chasing a master timer.
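The arithmetic of drift, and the essential difference between a free-running slave and a chasing one, can be sketched in a few lines (the function names and numbers are illustrative; a real implementation reads a hardware or multimedia timer instead of taking the master's time as a parameter):

```python
# Why independent timers drift, and how a "chasing" slave avoids it.
# Intervals are in milliseconds.

def drift_after(ticks, master_ms=50.00, slave_ms=50.01):
    """Accumulated offset between two free-running timers after `ticks`
    ticks. The per-tick error is tiny, but it adds up linearly."""
    return ticks * abs(slave_ms - master_ms)

# With the 50.01 ms vs 49.99 ms timers from the example above, the error
# is 0.02 ms per tick: after 2500 ticks (about two minutes at 20 fps)
# the timers are a whole 50 ms frame apart.
print(drift_after(2500, 50.01, 49.99))

def chase(master_time_ms, frame_interval_ms=50.0):
    """A chasing slave does not count its own ticks. On every (fast) tick
    it re-derives the current frame number from the master's clock, so
    its error never accumulates."""
    return int(master_time_ms // frame_interval_ms)
```

The price of chasing is visible in the second function: it must be called much more often than once per frame, or the slave will jump frames in bursts instead of stepping smoothly.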
Returning to the earlier example of synchronizing animation to MIDI, the technical paper Synchronizing animation to MIDI discusses animation and MIDI tool kits for Microsoft Windows that can share a single timer: the MIDI timer also clocks the animation. This technique avoids timer chasing altogether.
Perspective drawing creates an illusion of depth in an image. However, not all images lend themselves well to clearly identify "vanishing points", or even a horizon. To enhance the illusion of depth, a common trick is to paint objects on the foreground in full and strong (saturated) colours and paint objects in the background in pastel (or unsaturated) colours.
Photographic and video cameras focus at a particular distance. The object that is in focus is pictured sharply; objects in front of and behind it are blurred, and the more so as their distance from the in-focus object increases. In your drawings you can play a similar trick by blurring the background objects.
In hand-drawn animation, it is very common to animate an action, then slow into a pose and hold the drawing of that pose for several frames, then move into action again. The animation stays alive even with the use of held drawings. But in computer animation, as soon as you go into a held pose, the action dies immediately. You are prone to make this "mistake" when you come from traditional animation.
To combat this, use a "moving hold". Instead of having every part of the character stop, have some part continue to move slightly, like an arm or a head. Even the slightest movement will keep your character alive.