Multimedia: Definitions

Multimedia is a woven combination of digitally manipulated text, photographs, graphic art, sound, animation, and video elements.

Interactive multimedia: When you allow an end user (also known as the viewer of a multimedia project) to control what elements are delivered and when, it is called interactive multimedia.

Hypermedia: When you provide a structure of linked elements through which the user can navigate, interactive multimedia becomes hypermedia.

Multimedia developers: Although the definition of multimedia is a simple one, making it work can be complicated. Not only do we need to understand how to make each multimedia element stand up and dance, but we also need to know how to use multimedia computer tools and technologies to weave them together. The people who weave multimedia into meaningful tapestries are called multimedia developers.

Multimedia project: The software vehicle, the messages, and the content presented on a computer, television screen, PDA (personal digital assistant), or mobile phone together constitute a multimedia project.

Multimedia title: If the project is to be shipped or sold to consumers or end users, typically delivered as a download on the Internet but also on a CD-ROM or DVD in a box or sleeve, with or without instructions, it is a multimedia title.

Internet and multimedia: Your project may also be a page or site on the World Wide Web, where you can weave the elements of multimedia into documents with HTML (Hypertext Markup Language), DHTML (Dynamic Hypertext Markup Language), or XML (eXtensible Markup Language) and play rich media files created in programs such as Adobe's Flash or LiveMotion or Apple's QuickTime by installing plug-ins into a browser application such as Internet Explorer, Safari, Google Chrome, or Firefox. Browsers are software programs or tools for viewing content on the Web.
Linear and Nonlinear Multimedia Projects

A multimedia project need not be interactive to be called multimedia. Users can sit back and watch it just as they do a movie or the television; in such cases the project is linear. When users are given navigational control and can wander through the content at will, multimedia becomes nonlinear and user-interactive, and it is a powerful personal gateway to information.

Points to Note While Designing an Interactive Multimedia Title

Determining how a user will interact with and navigate through the content of a project requires great attention to the message, the scripting or storyboarding, the artwork, and the programming. You can break an entire project with a badly designed interface. You can also lose the message in a project with inadequate or inaccurate content.

Authoring Tools

Multimedia elements are typically sewn together into a project using authoring tools. These software tools are designed to manage individual multimedia elements and provide user interaction. Integrated multimedia combines source documents such as montages, graphics, video cuts, and sounds into a final presentation. In addition to providing a method for users to interact with the project, most authoring tools also offer facilities for creating and editing text and images, and controls for playing back separate audio and video files that have been created with editing tools designed for these media.

GUI (Graphical User Interface), sometimes pronounced "gooey," is a type of user interface that allows users to interact with electronic devices through images rather than text commands. GUIs can be found in computers, hand-held devices such as MP3 players, portable media players, and gaming devices, household appliances, and office equipment.
Multimedia Hardware

Multimedia devices can be broadly divided into 1. Digital media devices 2. Analog media devices 3. General-purpose devices 4. Synchronization devices 5. Interaction devices 6. Multimedia platforms.

Digital media devices are those digital devices that are involved in creating multimedia content by capturing, processing, and presenting it. Digital media devices fall into three categories: 1. Capturing devices 2. Processing devices 3. Presenting devices. Capturing devices include, for example, a MIDI keyboard, an image scanner, a 3-D digitizer, a video frame grabber, or simply a video capture card. Processing devices are those devices that process the given input into a multimedia presentation; examples are audio and video encoders and decoders, digital video and effects processors, and 3-D graphics cards. Presentation devices are those devices which are involved in delivering the output of multimedia; some examples are the display buffer and the MIDI synthesizer.

Analog media devices are analog devices that are involved in creating multimedia content. Analog media devices fall under three categories: sourcing devices, synchronization devices, and filtering devices. Sourcing devices include those devices which act as input for the filtering devices, and syncing devices are those which work to process the sourced input. Filtering devices act as output devices among analog multimedia devices. Microphones, video cameras, videotape players, and audio tape players are some examples of sourcing devices, in that these devices act as a source. Analog video effects filters and processors and audio mixers are some examples of analog media filtering devices. Speakers, video displays, audio tape recorders, and audio digital-to-analog converters are some examples of synchronization devices.

General-Purpose Devices

Devices that handle arbitrary media are called general-purpose devices: magnetic disks, CD-ROM, DVD-ROM, Blu-ray, etc.
Interaction Devices

These are the devices which are used to capture user interaction. Some of these devices are the keyboard, mouse, joystick, electronic pen, 3-D tracker, gloves, helmets, and eye-tracker goggles.

Synchronization Devices

Simultaneous presentation and manipulation of multiple media requires hardware assistance to maintain proper timing; this is handled by synchronization devices. The following are some examples of synchronization devices:
Genlocks or Vision Mixers:
A vision mixer (also called a video switcher, video mixer, or production switcher) is a device used to select between several different video sources and, in some cases, to composite (mix) video sources together to create special effects. This is similar to what a mixing console does for audio.

Sync Generators:

A video signal generator is a type of signal generator that can combine different video signals into one video signal. For example, some sync generators may composite PAL and NTSC video signals into a single signal. Some sync generators can also insert a television station's logo or other fixed graphics into a live television signal.

Multimedia Platform Compliance:

Specially configured PCs and workstations: a personal computer with a color display, speaker and microphone, CD-ROM drive, audio and video digitizers, network access, etc.
Multimedia and Its Uses

Multimedia in Business

Multimedia is used in business in presentations, training, marketing, advertising, product demos, simulations, databases, catalogs, instant messaging, and networked communications. Voice mail and video conferencing are provided on many local and wide area networks (LANs and WANs) using distributed networks and Internet protocols.

Multimedia is enjoying widespread use in training programs. Flight attendants learn to manage international terrorism and security through simulation. Drug enforcement agencies of the UN are trained using interactive videos and photographs to recognize likely hiding places on airplanes and ships. Medical doctors and veterinarians can practice surgery methods via simulation prior to actual surgery. Mechanics learn to repair engines. Salespeople learn about product lines and leave behind software to train their customers. Fighter pilots practice full-terrain sorties before spooling up for the real thing. Increasingly easy-to-use authoring programs and media production tools even let workers on assembly lines create their own training programs for use by their peers.

Multimedia around the office has also become more commonplace. Image capture hardware is used for building employee ID and badging databases, scanning medical insurance cards, video annotation, and real-time teleconferencing. Presentation documents attached to e-mail and video conferencing are widely available. Laptop computers and high-resolution projectors are commonplace for multimedia presentations on the road. Mobile phones and personal digital assistants (PDAs) utilizing Bluetooth and Wi-Fi communications technology make communication and the pursuit of business more efficient.
As companies and businesses catch on to the power of multimedia, the cost of installing multimedia capability decreases, meaning that more applications can be developed both in-house and by third parties, which allows businesses to run more smoothly and effectively. These advancements in technology are changing the very way business is transacted, affirming that the use of multimedia offers a significant contribution to the bottom line while also enhancing the public image of the business as an investor in technology.
Multimedia in Schools

Multimedia will provoke radical changes in the teaching process during the coming decades, particularly as smart students discover they can go beyond the limits of traditional teaching methods. There is, indeed, a move away from the transmission or passive-learner model of learning to the experiential-learning or active-learner model. In some instances, teachers may become more like guides and mentors, or facilitators of learning, leading students along a learning path, rather than filling the more traditional role of being the primary providers of information and understanding. The students, not the teachers, become the core of the teaching and learning process. E-learning is a sensitive and highly politicized subject among educators, so educational software is often positioned as "enriching" the learning process, not as a potential substitute for traditional teacher-based methods.

An interesting use of multimedia in schools also involves the students themselves. Students can put together interactive magazines and newsletters, make original art using image-manipulation software tools, and interview students, townspeople, coaches, and teachers. They can even make video clips with cameras and mobile phones for local use or uploading to YouTube. They can also design and run web sites. ITV (interactive TV) is widely used among campuses to join students from different locations into one class with one teacher. Remote trucks containing computers, generators, and a satellite dish can be dispatched to areas where people want to learn but have no computers or schools near them. In the online version of school, students can enroll at schools all over the world and interact with particular teachers and other students; classes can be accessed at the convenience of the student's lifestyle, while the teacher may be relaxing on a beach and communicating via a wireless system.
Some universities offer classes to students who do not wish to spend gas money, fight traffic, and compete for parking space; they even provide training to professors so they can learn how best to present their classes online.
Multimedia at Home

From gardening, cooking, home design, remodeling, and repair to genealogy software, multimedia has entered the home. Eventually, most multimedia projects will reach the home via television sets or monitors with built-in interactive user inputs, either on old-fashioned color TVs or on new high-definition sets. The multimedia viewed on these sets will likely arrive on a pay-for-use basis along the data highway. Today, home consumers of multimedia own either a computer with an attached CD-ROM, DVD, or Blu-ray drive or a set-top player that hooks up to the television, such as a Nintendo Wii, Xbox, or Sony PlayStation machine. There is increasing convergence, or melding, of computer-based multimedia with entertainment and games-based media. Nintendo alone has sold over 118 million game players worldwide along with more than 750 million games. Users with TiVo technology (www.tivo.com) can store 80 hours of television viewing and gaming on a stand-alone hard disk. Live Internet pay-for-play gaming with multiple players has also become popular, bringing multimedia to homes on the broadband Internet, often in combination with CD-ROMs or DVDs inserted into the user's machine. Microsoft's Internet Gaming Zone and Sony's Station web site boast more than a million registered users each; Microsoft claims to be the most successful, with tens of thousands of people logged on and playing every evening.

Multimedia in Public Places

In hotels, railway stations, shopping malls, museums, libraries, and grocery stores, multimedia is already available at stand-alone terminals or kiosks, providing information and help for customers. Multimedia is also piped to wireless devices such as cell phones and PDAs using Wi-Fi and Bluetooth connectivity. Such installations reduce demand on traditional information booths and personnel, add value, and are available around the clock, even in the middle of the night, when live help is off duty.
The way we live is changing as multimedia penetrates our day‐to‐day experience and our culture.
Virtual Reality

Virtual reality (VR) is a term that applies to computer-simulated environments that can simulate physical presence in places in the real world, as well as in imaginary worlds. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations also include additional sensory information, such as sound through speakers or headphones. Some advanced haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications. Virtual reality also covers remote communication environments which provide virtual presence of users. The simulated environment can be similar to the real world in order to create a lifelike experience, as in simulations for pilot or combat training, or it can differ significantly from reality, as in VR games. In practice, it is currently very difficult to create a high-fidelity virtual reality experience, due largely to technical limitations on processing power, image resolution, and communication bandwidth; however, the technology's proponents hope that such limitations will be overcome as processor, imaging, and data communication technologies become more powerful and cost-effective over time. Virtual reality is often used to describe a wide variety of applications commonly associated with immersive, highly visual, 3-D environments.

At the convergence of technology and creative invention in multimedia is virtual reality. Goggles, helmets, special gloves, and bizarre human interfaces attempt to place you "inside" a lifelike experience. Take a step forward, and the view gets closer; turn your head, and the view rotates. Reach out and grab an object; your hand moves in front of you. Maybe the object explodes in a 90-decibel crescendo as you wrap your fingers around it.
Or it slips out from your grip, falls to the floor, and hurriedly escapes through a mouse hole at the bottom of the wall. As you move about, each motion or action requires the computer to recalculate the position, angle, size, and shape of all the objects that make up your view, and many thousands of computations must occur as fast as 30 times per second to seem smooth. On the World Wide Web, standards have been developed for transmitting virtual reality worlds or scenes in VRML (Virtual Reality Modeling Language) documents (with the filename extension .wrl). Intel and software makers such as Adobe have announced support for new 3-D technologies.

Virtual Reality and Its Uses

Using high-speed dedicated computers, multi-million-dollar flight simulators built by Singer, RediFusion, and others have led the way in commercial applications of VR. Pilots of F-16s, Boeing 777s, and Rockwell space shuttles have made many simulated dry runs before doing the real thing. At the Maine Maritime Academy and other merchant marine officer training schools, computer-controlled simulators teach the intricate loading and unloading of oil tankers and container ships. Virtual reality is an extension of multimedia, and it uses the basic multimedia elements of imagery, sound, and animation. When it requires instrumented feedback from a wired-up person, VR is perhaps interactive multimedia at its fullest extension.
Principles of Animation

Animation is possible because of a biological phenomenon known as persistence of vision and a psychological phenomenon called phi. An object seen by the human eye remains chemically mapped on the eye's retina for a brief time after viewing (persistence). This, combined with the human mind's need to conceptually complete a perceived action (phi), makes it possible for a series of images that are changed very slightly and very rapidly, one after the other, to seemingly blend together into a visual illusion of movement. Digital television video builds at 24, 30, or 60 entire frames or pictures every second, depending upon settings. The speed with which each frame is replaced by the next one makes the images appear to blend smoothly into movement. Movies on film are typically shot at a shutter rate of 24 frames per second, but using projection tricks (the projector's shutter flashes light through each image twice), the flicker rate is increased to 48 times per second, and the human eye thus sees a motion picture. On some film projectors, each frame is shown three times before the pull-down claw moves to the next frame, for a total of 72 flickers per second, which helps to eliminate the flicker effect: the more interruptions per second, the more continuous the beam of light appears. Quickly changing the viewed image is the principle of an animatic, a flip-book, or a zoetrope. To make an object travel across the screen while it changes its shape, just change the shape and also move, or translate, it a few pixels for each frame. Then, when you play the frames back at a faster speed, the changes blend together and you have motion and animation.

Terminologies

On the computer, paint is most often filled or drawn with tools using features such as gradients and anti-aliasing.
The word inks, in computer animation terminology, usually means special methods for computing color values, providing edge detection, and layering so that images can blend or otherwise mix their colors to produce special transparencies, inversions, and effects. You can also set your own frame rates on the computer. 2-D cel-based animation files, for example, allow you to specify how long each frame is to be displayed and how many times the animation should loop before stopping. 3-D animations output as digital video files can be set to run at 15, 24, or 30 frames per second. However, the rate at which changes are computed and screens are actually refreshed will depend on the speed and power of your user's display platform and hardware (that is, the graphics card, the monitor, and the system configuration), especially for animations, such as path animations, that are being generated by the computer on the fly. Although your animations will probably not push the limits of a monitor's scan rate (about 60 to 70 frames per second), animation does put raw computing horsepower to task. If you cannot compute all your changes and display them as a new frame on your monitor within, say, 1/15th of a second, then the animation may appear jerky and slow. Luckily, when the files include audio, the software maintains the continuity of the audio at all costs, preferring to drop visual frames or hold a single frame for several seconds while the audio plays.

Types of Animation

When you create an animation, organize its execution into a series of logical steps. First, gather up in your mind all the activities you wish to provide in the animation. If it is complicated, you may wish to create a written script with a list of activities and required objects and then create a storyboard to visualize the animation. Choose the animation tool best suited for the job, and then build and tweak your sequences.
This may include creating objects, planning their movements, texturing their surfaces, adding lights, experimenting with lighting effects, and positioning the camera or point of view. Allow plenty of time for this phase when you are experimenting and testing. Finally, post-process your animation, adding special renderings and sound effects.
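The timing arithmetic in the sections above (film flicker rates, and the 1/15th-of-a-second budget per frame) can be sketched in a few lines. This is an illustrative calculation only; the function names are made up for the example and do not come from any animation toolkit.

```python
# Timing arithmetic from the sections above. A 24 fps film projector that
# flashes each frame twice produces 48 flickers per second; three flashes
# per frame produce 72. Separately, an animation targeting a given frame
# rate must compute and display each frame within 1/fps seconds, or
# playback appears jerky (or, with audio, visual frames get dropped).

FILM_FPS = 24  # frames shot per second on film

def flicker_rate(fps, flashes_per_frame):
    """Light interruptions per second seen by the viewer."""
    return fps * flashes_per_frame

def frame_budget_ms(fps):
    """Milliseconds available to compute and display one frame."""
    return 1000.0 / fps

two_blade = flicker_rate(FILM_FPS, 2)    # 48 flickers per second
three_blade = flicker_rate(FILM_FPS, 3)  # 72 flickers per second
budget_15fps = frame_budget_ms(15)       # about 66.7 ms per frame
```

If the rendering of one frame takes longer than `frame_budget_ms` allows, the playback software must either slow down or skip frames, which is exactly the jerkiness the text describes.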
2D ANIMATION

In cel-based 2-D animation, each frame of an animation is provided by the animator, and the frames are then composited into a single file of images to be played in sequence. Adobe Flash, Ulead's GIF Animator, and Alchemy's GIF Construction Set Pro are some examples of 2-D animation software. Using appropriate software and techniques, you can animate visual images in many ways. In 2-D space, the visual changes that bring an image alive occur on the flat x and y axes of the screen. A blinking word, a color-cycling logo (where the colors of an image are rapidly altered according to a formula), a cel animation, or a button or tab that changes state on mouse rollover to let a user know it is active are some examples of 2-D animation. These are simple and static, not changing their position on the screen. Path animation in 2-D space increases the complexity of an animation and provides motion, changing the location of an image along a predetermined path (position) during a specified amount of time (speed).

3D ANIMATION

In 3-D animation, most of your effort may be spent in creating the models of individual objects and designing the characteristics of their shapes and surfaces. It is the software that then computes the movement of the objects within the 3-D space and renders each frame, in the end stitching them together in a digital output file or container such as an AVI or QuickTime movie. In 3-D animation, software creates a virtual realm in three dimensions, and changes such as positioning, rotation, and scaling of the given 3-D models are calculated along all three axes (x, y, and z) for each frame. 3-D animation involves creating a 3-D model of the given character or object, creating a camera for the point of view, creating a light source for illumination, and texturing or coloring the given 3-D model with scanned materials.
It also involves changing the position, rotation, or scale of the created 3-D model for animation, either through path animation or through keyframing.

Keyframing

Keyframing is an animation technique in which every frame of the computer animation is directly modified or manipulated by the creator. This method is similar to the drawing of traditional animation and is chosen by artists who wish to have complete control over the animation. Such animations are typically rendered frame by frame by high-end 3-D animation programs such as NewTek's LightWave or Autodesk's Maya. Today, computers have taken the handwork out of the animation and rendering process, and commercial films such as Shrek, Coraline, Toy Story, and Avatar have utilized the power of computers.

Morphing

Morphing is a popular effect in which one image transforms into another. Morphing applications and other modeling tools that offer this effect can transition not only between still images but often between moving images as well. Some products that offer morphing features are Black Belt's Easy Morph and WinImages and Human Software's Squizz. In one example, the morphed images were built at a rate of eight frames per second, with each transition taking a total of four seconds (32 separate images for each transition), and the number of key points was held to a minimum to shorten rendering time. Setting key points is crucial for a smooth transition between two images: the point you set in the start image will move to the corresponding point in the end image. This is important for things like eyes and noses, which you want to end up in about the same place (even if they look different) after the transition. The more key points, the smoother the morph.
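The frame count in the morph example above is a simple multiplication; a minimal sketch of the arithmetic (the function name is illustrative, not taken from any morphing product):

```python
# Frame count for a morph transition: frames per second multiplied by the
# length of the transition in seconds. The passage's example, 8 fps over a
# 4-second transition, yields 32 separate in-between images.

def morph_frame_count(fps, transition_seconds):
    """Number of intermediate images the morphing tool must render."""
    return fps * transition_seconds

frames_per_transition = morph_frame_count(8, 4)   # 32 images
```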
Animation Techniques

Cel Animation

The animation technique made famous by Disney uses a series of progressively different graphics, or cels, on each frame of movie film (which plays at 24 frames per second). A minute of animation may thus require as many as 1,440 separate frames, and each frame may be composed of many layers of cels. The term cel derives from the clear celluloid sheets that were used for drawing each frame, which have been replaced today by layers of digital imagery. Cels of famous animated cartoons have become sought-after, suitable-for-framing collector's items. Cel animation artwork begins with keyframes (the first and last frame of an action). For example, when an animated figure of a woman walks across the screen, she balances the weight of her entire body on one foot and then the other in a series of falls and recoveries, with the opposite foot and leg catching up to support the body. Thus the first keyframe to portray a single step might be the woman pitching her body weight forward off the left foot and leg, while her center of gravity shifts forward; the feet are close together, and she appears to be falling. The last keyframe might be the right foot and leg catching the body's fall, with the center of gravity now centered between the outstretched stride and the left and right feet positioned far apart.

Stop Motion

Stop motion, also known as stop action, is an animation technique used to make a physically manipulated object appear to move on its own. The object is moved in small increments between individually photographed frames, creating the illusion of movement when the series of frames is played as a continuous sequence. Clay figures are often used in stop motion for their ease of repositioning. Motion animation using clay is called clay animation or claymation.

Tweening

The series of frames in between the keyframes are drawn in a process called tweening.
Tweening is an action that requires calculating the number of frames between keyframes and the path the action takes, and then actually sketching with pencil the series of progressively different outlines. As tweening progresses, the action sequence is checked by flipping through the frames. The penciled frames are assembled and then actually filmed as a pencil test to check smoothness, continuity, and timing. When the pencil frames are satisfactory, they are permanently inked, photocopied onto cels, and given to artists who use acrylic colors to paint the details for each cel. The cels for each frame of our example of a walking woman (which may consist of a text title, a background, a foreground, and characters, with perhaps separate cels for a left arm, a right arm, legs, shoes, a body, and facial features) are carefully registered and stacked. It is this composite that becomes the final photographed single frame in an animated movie.

Motion Capture

To replicate natural motion, traditional cel animators often utilized "motion capture" by photographing a woman walking, a horse trotting, or a cat jumping to help visualize timings and movements. Today, animators use reflective sensors applied to a person, animal, or other object whose motion is to be captured. Cameras and computers convert the precise locations of the sensors into x, y, z coordinates, and the data is rendered into 3-D surfaces moving over time.

Kinematics

Kinematics is the study of the movement and motion of structures that have joints, such as a walking man. Animating a walking step is tricky: you need to calculate the position, rotation, velocity, and acceleration of all the joints and articulated parts involved; knees bend, hips flex, shoulders swing, and the head bobs.
Some 3-D modeling programs provide preassembled, adjustable human models (male, female, infant, teenage, and superhero) in many poses, such as "walking" or "thinking." You can also pose such figures in 3-D and then scale and manipulate individual body parts. Surface textures can then be applied to create muscle-bound hulks or smooth chrome androids.

Inverse kinematics, available in high-end 3-D programs such as LightWave and Maya, is the process by which you link objects such as hands to arms and define their relationships and limits (for example, elbows cannot bend backward). Once those relationships and parameters have been set, you can then drag these parts around and let the computer calculate the result.
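Tweening and kinematics can be illustrated together in a small sketch: linearly tween a joint angle between two keyframes, then use forward kinematics to find where a two-segment limb ends up (inverse kinematics, as described above, solves the reverse problem: finding the angles that put the limb tip at a desired spot). This is an illustrative toy with made-up segment lengths, not code from LightWave or Maya.

```python
import math

def tween(start, end, num_inbetweens):
    """Linearly interpolated in-between values for the frames between
    two keyframes (the classic tweening calculation)."""
    steps = num_inbetweens + 1  # number of intervals between the keyframes
    return [start + (end - start) * i / steps
            for i in range(1, num_inbetweens + 1)]

def limb_endpoint(hip_deg, knee_deg, thigh=0.45, shin=0.42):
    """Forward kinematics for a two-segment leg: given joint angles
    (measured from straight down), return the (x, y) of the foot."""
    a1 = math.radians(hip_deg)
    a2 = math.radians(hip_deg + knee_deg)  # knee angle is relative to thigh
    x = thigh * math.sin(a1) + shin * math.sin(a2)
    y = -(thigh * math.cos(a1) + shin * math.cos(a2))
    return x, y

# Tween the hip angle from 0 to 20 degrees with three in-betweens...
hip_angles = tween(0.0, 20.0, 3)          # [5.0, 10.0, 15.0]
# ...and place the foot for each tweened frame (knee kept straight here).
foot_positions = [limb_endpoint(a, 0.0) for a in hip_angles]
```

An inverse-kinematics solver would invert `limb_endpoint`, searching for `hip_deg` and `knee_deg` that reach a target (x, y) while respecting joint limits such as "elbows cannot bend backward."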
Video

Of all the multimedia elements, video places the highest performance demand on your computer or device and on its memory and storage. Consider that a high-quality color still image on a computer screen could require a megabyte or more of storage memory. Multiply this by 30 (the number of times per second that the picture is replaced to provide the appearance of motion) and you would need at least 30 megabytes of storage to play your video for one second, more than 1.8 gigabytes of storage for a minute, and 108 gigabytes or more for an hour. Just moving all this picture data from computer memory to the screen at that rate would challenge the processing capability of a supercomputer. Some of the best multimedia technologies and research efforts have dealt with compressing digital video image data into manageable streams of information. Compression (and decompression), using special software called a codec, allows a massive amount of imagery to be squeezed into a comparatively small data file, which can still deliver a good viewing experience on the intended viewing platform during playback.

Analog Video

In an analog system, the output of the CCD is processed by the camera into three channels of color information and synchronization pulses (sync), and the signals are recorded onto magnetic tape. There are several video standards for managing analog CCD output, each dealing with the amount of separation between the components: the greater the separation of the color information, the higher the quality of the image, and the more expensive the equipment. If each channel of color information is transmitted as a separate signal on its own conductor, the signal output is called component video (separate red, green, and blue channels, or a luminance (Y) track with R-Y and B-Y chrominance tracks in Betacam), which is the preferred method for higher-quality and professional video work.
Lower in quality is the signal that makes up Separate Video (S-Video), which uses two channels that carry luminance and chrominance information. The least separation, and thus the lowest quality, for a video signal is composite video, in which all the signals are mixed together and carried on a single cable as a composite of the three color channels and the sync signal. The composite signal yields less precise color definition, which cannot be manipulated or color-corrected as much as S-Video or component video signals. The analog video and audio signals are written to tape by a spinning recording head that changes the local magnetic properties of the tape's surface in a series of long diagonal stripes. Because the head is canted, or tilted, at a slight angle compared with the path of the tape, it follows a helical (spiral) path, which is why this is called helical scan recording. Audio is recorded on a separate straight-line track at the top of the videotape, although with some recording systems (notably for ¾-inch tape and for ½-inch tape with high-fidelity audio), sound is recorded helically between the video tracks. At the bottom of the tape is a control track containing the pulses used to regulate speed.
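The storage arithmetic at the start of this section (roughly 1 MB per uncompressed frame, replaced 30 times per second) can be checked directly. The figures below are the passage's own approximations, not measurements of any particular format:

```python
# Uncompressed video storage, using the passage's figures: about 1 MB per
# full-screen frame, 30 frames per second.

FRAME_MB = 1   # megabytes per uncompressed frame (approximate)
FPS = 30       # frames per second

per_second_mb = FRAME_MB * FPS             # 30 MB for one second
per_minute_gb = per_second_mb * 60 / 1000  # 1.8 GB for one minute
per_hour_gb = per_minute_gb * 60           # 108 GB for one hour
```

These raw numbers are why the codecs discussed in the next section are indispensable for practical digital video.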
Digital Video

In digital systems, the output of the CCD is digitized by the camera into a sequence of single frames, and the video and audio data are compressed before being written to a tape or digitally stored to disc or flash memory in one of several proprietary file formats. Digital video data formats are defined largely by the codecs used for compressing and decompressing the video (and audio) data. In 1995, Apple's FireWire technology was standardized as IEEE 1394, and Sony quickly adopted it for much of its digital camera line under the name i.LINK. FireWire and i.LINK (and USB 2) cable connections allow a completely digital process, from the camera's CCD to the hard disk of a computer. Camcorders store the video and sound data on an onboard digital tape, writable mini-DVD, mini hard disk, or flash memory.

Codecs, Formats, and Compression

Digital Video Containers (Formats)

A digital video architecture is made up of an algorithm for compressing and encoding video and audio, a digital video container (a file format in which the compressed digital video data is placed), and a player, which is software that can recognize and play back those files. Common containers for video are Ogg (.ogg, Theora for video, Vorbis for audio), Flash Video (.flv), MPEG (.mp4), QuickTime (.mov), Windows Media Format (.wmv), WebM (.webm), and RealMedia (.rm). Containers may include data compressed by a choice of codecs, and media players may recognize and play back more than one video file container format. Container formats may also include metadata, which contains important information about the tracks contained in them, and even additional media besides audio and video. The QuickTime container, for example, allows inclusion of text tracks, chapter markers, transitions, and even interactive sprites.

Codecs (Compression)

To digitize and store a 10-second clip of full-motion video in your computer requires the transfer of an enormous amount of data in a very short amount of time.
Reproducing just one frame of digital component video at 24 bits requires almost 1 MB of computer data, which means 30 seconds of full-screen, uncompressed video will fill a gigabyte hard disk. Full-size, full-motion uncompressed video requires that the computer deliver data at about 30 MB per second. This overwhelming technological bottleneck is overcome using digital video compression schemes, or codecs (coders/decoders). A codec is the algorithm used to compress a video for delivery and then decode it in real time for fast playback. Different codecs are optimized for different methods of delivery (for example, from a hard drive, from a DVD, or over the Web). Codecs such as Theora and H.264 compress digital video information at rates that range from 50:1 to 200:1. Some codecs store only the image data that changes from frame to frame instead of the data that makes up each and every individual frame. Other codecs use computation-intensive methods to predict what pixels will change from frame to frame and store the predictions to be deconstructed during playback. There are also codecs, colloquially called lossy codecs, in which image quality is sacrificed to significantly reduce file size.
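As a rough illustration of those compression ratios, the raw data rate can be divided by the ratio. This is back-of-the-envelope arithmetic using the passage's approximate figures, not the output of any real codec:

```python
# Effect of codec compression ratios on the raw video data rate. The
# passage gives ~30 MB/s uncompressed and codec ratios from 50:1 to 200:1.

RAW_RATE_MB_S = 30   # uncompressed full-motion video, approximate

def compressed_rate(raw_mb_s, ratio):
    """Approximate data rate after compression at ratio:1."""
    return raw_mb_s / ratio

low = compressed_rate(RAW_RATE_MB_S, 50)    # 0.6 MB/s at 50:1
high = compressed_rate(RAW_RATE_MB_S, 200)  # 0.15 MB/s at 200:1
```

At 200:1, the minute of video that would have needed about 1.8 GB uncompressed fits in roughly 9 MB, which is what makes Web and DVD delivery feasible.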