technograph | Volume 127: Summer 2012
Student Engineering Magazine at the University of Illinois

NOW PLAYING: NCSA lab brings science to the movies with advanced data visualizations (Page 8)

FINDING A VOICE: Scottish software company helps Roger Ebert get his voice back (Page 5)

OUT OF THIN AIR: A Ph.D. student makes adding objects to images easy and photorealistic (Page 11)

AT THE MOVIES: Art Theater undergoes historic changes to support digital projection (Page 14)



CONTENTS

3   A Note from the Editor
5   Lost & Found: Roger Ebert's synthetic voice
8   The Advanced Visualization Laboratory brings data to life for moviegoers
11  Out of Thin Air: easy, photorealistic object insertion
14  The Art Theater Goes Digital

A NOTE FROM THE EDITOR
Samantha Kiesel, Editor-in-chief

When opening this issue of the Technograph, you might notice something different. That's because the student engineering magazine at the University of Illinois is under new management. We at The Daily Illini are excited for the task ahead in bringing only the best science journalism to the students and faculty of the University.

We want Technograph to bring you stories that will immerse you in the cutting-edge science happening on campus right here, right now. But we know we need to stay true to the roots of a publication that predates even the Illini Media Company. Since 1885, this magazine has been a publication for engineers, by engineers.

That's where you come in. This year, you'll notice we have no official Technograph editor and only a few writers. We encourage all those who have a passion for technology and a love of writing to get involved. Email mewriting@dailyillini.com to start your experience with the Technograph.

Editor-in-chief: Samantha Kiesel
Managing editor for reporting: Nathaniel Lash
Managing editor for visuals: Shannon Lancor
Managing editor for online: Marty Malone
Designers: Charlie Tan Lim, Danny Weilandt
Writers: Thomas Thoren, Amanda Steelman, Danny Wicentowski, Megan Reilly
Copy editors: Kevin Dollear, Johnathan Hettinger, Laurie Shinbaum
Publisher: Lilyan Levant
Cover image: Advanced Visualization Laboratory

Web: readtechno.com
Email: mewriting@dailyillini.com
Phone: 1-217-337-8350
Mail: Technograph, 512 E. Green St., Champaign, IL 61820

An Illini Media Publication. Copyright 2012.


LOST & FOUND
BY THOMAS THOREN | STAFF WRITER
PHOTOS BY THOMPSON MCCLELLAN PHOTOGRAPHY

In 2006, Roger Ebert spoke his last words. But with the help of text-to-speech software, he's now getting them back. This is the story of how Ebert's voice was lost and found.


Roger Ebert spoke his last words in the summer of 2006, losing the ability to speak after fighting thyroid cancer for four years. But now, six years after much of his lower jaw was removed, advances in text-to-speech software have allowed the famed film critic to continue broadcasting his thoughts using a computer-generated voice.

Starting in 2009 with the help of CereProc, a company based out of Edinburgh, Scotland, Ebert's digital voice has continuously moved closer to sounding like the broadcaster's natural voice that so many cinephiles once heard pick apart movies.

"I was intrigued by the possibility of a computer-generated voice that sounded something like mine," Ebert said in an email. "CereProc is a leader in such technology."

Matthew Aylett, chief technical officer for CereProc, said the first step in recreating anyone's voice in a synthetic form is to obtain clean audio of the person's speech. Despite Ebert's spending countless hours in front of a television camera or behind a microphone in programs like "Ebert & Roeper," only a small fraction of this audio was available and usable.

"You'd be amazed. Someone who's been broadcast all their lives, you think there'd be hours and hours of audio, but actually a lot of it is thrown away," Aylett said. "You only have the final mixed version. So, for example, you might have a video of a TV show, but you don't have his speech separate from all the other audio. And of course if you have laughter or audience noise and whatever, then it's very hard to use the audio."

Ebert said most of the audio came from DVD commentary tracks, television appearances and various speeches. Many of these audio tracks were conversational in tone, which presented a challenge to CereProc because they would not be appropriate for other speaking situations.

"The TV commentaries weren't ideal, and certainly one of the big technical hurdles was just being able to knock that data into shape," Aylett said.

CereProc then transcribed the usable audio and began building Ebert's new synthetic voice. The company divided the audio into small segments, which could be combined to form his words. These phonemes came from many different sources, so CereProc had to tailor the clips so they would fit together well enough to form coherent words and sentences.

"Because the audio was recorded differently — in different times, different places — we had to try and normalize the audio from each one of these commentaries so that it was similar," Aylett said, referring to Ebert's DVD commentaries of classic films. "So there's a process where you look at the audio from say, 'Casablanca,' and then the audio from 'Citizen Kane,' and then you modify the audio so it matches more closely."

"(Ebert) doesn't just want to communicate — he wants to be able to use (his new voice) with the same sort of mastery that he used his own original voice. It was quite tricky." — Matthew Aylett, CereProc chief technical officer

In CereProc's latest version, sent to Ebert in the final week of March 2012, parametric synthesis was used to fix issues with stability and intelligibility. Parametric synthesis creates a statistical model of the different small sound bites cut from the speaker's audio and generates a completely new waveform, Aylett said. This is in contrast to using small sound bites to make new sentences.

"Very often when you're speaking conversationally, if someone doesn't quite understand you, you can emphasize and change your voice to make it a bit easier to understand," Aylett said. "But when you're using a synthetic system, it's quite frustrating if (the audio) is not really quite intelligible to start with."

Despite the fame garnered by CereProc thanks to its high-profile client, recreating synthetic voices for specific people is not the company's main function. It specializes in general speech assistance, such as producing voices for Android phone software, PCs and the OS X operating system for Apple computers.

Aylett said CereProc separates itself from many of its competitors by specializing in producing "voices with character."

"We really try and focus on producing synthesis, which conveys more than just the words," Aylett said.

As CereProc has worked on Ebert's voice, the company has had to adjust its involvement on the project based on the amount of usable audio available.

"We'd send the voice, and we'd get comments back, and we'd make some changes, and so on," Aylett said. "So it's varied a lot on how much effort has gone into it at different times."
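The distinction Aylett draws can be illustrated with a toy sketch: concatenative synthesis stitches stored clips together directly, while parametric synthesis fits a model to the recordings and generates a brand-new, uniformly scaled waveform. This is a simplified illustration, not CereProc's pipeline; the phoneme names, durations and "model" below are all invented for the example.

```python
# Toy contrast between concatenative and parametric synthesis.
# A simplified sketch, not CereProc's actual method; every name and
# number here is hypothetical.
import numpy as np

SAMPLE_RATE = 16000
UNIT_SECONDS = 0.12

def tone(freq_hz, amplitude=1.0):
    """Stand-in for one recorded speech unit: a short sine burst."""
    t = np.linspace(0.0, UNIT_SECONDS, int(SAMPLE_RATE * UNIT_SECONDS),
                    endpoint=False)
    return amplitude * np.sin(2.0 * np.pi * freq_hz * t)

# Units "cut" from different sources (DVD commentary, TV, speeches),
# so each arrives at a different loudness.
rng = np.random.default_rng(0)
unit_db = {name: tone(freq, amplitude=rng.uniform(0.3, 1.0))
           for name, freq in [("AH", 220.0), ("EE", 330.0), ("OH", 180.0)]}

def concatenative(units):
    """Stitch stored clips end to end; level jumps between sources are
    audible unless the audio is normalized first."""
    return np.concatenate([unit_db[u] for u in units])

def parametric(units):
    """Fit a tiny 'model' (one frequency per unit) and regenerate a
    completely new, uniformly scaled waveform from those parameters."""
    model = {"AH": 220.0, "EE": 330.0, "OH": 180.0}
    return np.concatenate([tone(model[u], amplitude=0.8) for u in units])

if __name__ == "__main__":
    utterance = ["OH", "AH", "EE"]
    stitched = concatenative(utterance)
    generated = parametric(utterance)
    print("concatenative peaks:",
          [round(float(np.max(np.abs(unit_db[u]))), 2) for u in utterance])
    print("parametric peak:", round(float(np.max(np.abs(generated))), 2))
```

In the stitched version the peak level jumps from unit to unit, which is the kind of mismatch CereProc had to normalize away; the generated version comes out at a single consistent level because it is produced from the model rather than from the raw clips.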

See EBERT, Page 7





EBERT FROM PAGE 5

Despite occasionally having to wait for additional data, Aylett said Ebert has made the project easier for the company's team.

"He's been great to work with," Aylett said. "He's very positive and interested in the technology and has been very good at coming back with suggestions and comments about the voice to help us get it right for him."

CereProc is tasked with the additional challenge of trying to recreate Ebert's broadcast voice.

"He doesn't just want to communicate — he wants to be able to use (his new voice) with the same sort of mastery that he used his own original voice," Aylett said. "So it was interesting trying to shape the voice for his needs as well. It was quite tricky."

This project is unprecedented because none of the audio was recorded with the intention of its being used to recreate a voice.

"It's a first in terms of producing a voice like this … from audio which wasn't recorded for the purpose," Aylett said. "No one's done that before."

CereProc advises its clients to create prerecorded voice banks with large samples of basic words and sounds necessary for their speech, Ebert said. He did not have time to prepare such a library, so CereProc explored innovative ways to recreate his voice.

"The use of this technology to clone people's voices is still very much on the cutting edge," Aylett said. "What I'd like to see in the future is to have an automatic system where anyone can record their voice for a bit, and then it would produce a synthesizer which sounds like them. We will get there actually, at some point. Quite soon."

Even though synthetic voices are quickly advancing toward this reality, Aylett said they will never be able to completely match a natural voice.


He said a natural speaker's timing and on-the-fly adjustments are not easily recreated when a synthetic voice user must type a response into a keyboard. Along with this, intonation is difficult to reproduce with digital voices, though one feature of Ebert's synthetic voice gives him some control over this.

Ebert's synthetic voice has come a long way over the past few years, but it still has room for improvement.

"I am not yet using it on a daily basis," Ebert said. "They're still in up to their elbows on it, and I'm impressed by their dedication."

He said he still primarily uses Alex, a preinstalled voice on the latest of Apple products, because it is the best voice currently available. Still, his involvement with the project has helped showcase the possibilities of voice synthesis to the world.

"In terms of making people aware that this technology exists and it can be refined and made better for people, I think it is something which is important for people to know about," Aylett said. "It was great to have someone as well known as Roger to sort of, in effect, test and highlight this approach, because it makes … the overall awareness of the ability to do this much greater."




Advanced Visualization Laboratory brings data to life for moviegoers

BY AMANDA STEELMAN | STAFF WRITER
IMAGE BY ADVANCED VISUALIZATION LABORATORY

The University of Illinois is home to a number of hidden gems on the forefront of scientific discovery. The Advanced Visualization Laboratory, or AVL, part of the National Center for Supercomputing Applications, is one of those gems.

This month, the AVL will be featured in the Ebertfest Film Festival for their work in the feature film "Tree of Life." AVL Director Donna Cox and senior research artist Robert Patterson will host the presentation "Tree of Life: Making Movies using Scientific Data" on April 28 at the Illini Union.

The presentation title alone sums up what makes the AVL so special. Even though they normally specialize in visualizations for planetariums, museums or IMAX documentaries ("Tree of Life" is their first feature film), the visualizations they make aren't just pretty pictures: They are elaborate simulations based on scientific data.

Such was the case with "Dynamic Earth," a full-length film made for planetarium domes. The AVL partnered with the National Center for Atmospheric Research in Boulder, Colo., to digitally recreate Hurricane Katrina based on data the Boulder researchers had collected. The film transports the viewer into the hurricane, traveling in toward the eye. The audience sees arrows mapping air flow and temperature changes throughout the hurricane, charting the hurricane's evolution throughout a 36-hour period as it moves toward New Orleans.

In their IMAX movie "Hubble 3D," AVL partnered with Johns Hopkins' Space Telescope Science Institute, creating the amazing experience of journeying through Orion's Nebula, dodging stars while flying toward Orion's star nursery. Once there, among the gases, audiences can watch how stellar winds dictate the way new stars and even entire solar systems are formed.

"AVL is highly unique in that our approach is to bring science to the people through cinematic representation of scientific data," Cox said.

But the AVL is still faced with the task of taking all this data and transforming it into the striking visualizations we see. The first step is getting the scientific data. The data the lab primarily deals with are particle positions, photographs and volumes.

Particle positions are basically 3-D data points plotted in 3-D space, used in situations similar to plotting star positions in outer space.

Photographic data is a bit trickier. Here they use real photos but take information from the photos and use it in different ways. In some cases, they sculpt a photo into a three-dimensional environment, like with Orion's Nebula in "Hubble 3D," or use the photos as a guide to create a new environment. In other cases, images can be broken down into smaller parts and actually pasted in 3-D space. Stuart Levy, a member of the AVL team, described it as "a mix of imagery, scientific guesswork and artistry."

The last data type is volumes, which are mathematical structures consisting of 3-D grids of information. One of these grids is called a vector field, which contains imaginary 1-unit arrows represented by x, y, z values. Usually there's another grid that can represent other types of data, such as position, density, temperature and speed. In the Hurricane Katrina visualization, vector fields and volumes were used to create the arrows that represent air flow, direction and temperature change.

"We create our pretty CG (computer-generated) arrows by essentially dropping a bunch of massless balls into the vector field, which pushes each ball in one direction or another, and then we trace the path of the ball," said AVL team member AJ Christensen. "The 'path through the vector field' is the CG arrow."

Once the data is in hand, the boundaries of the data need to be explored. In some cases, resolution boundaries dictate how close you can get to the data. With the Katrina visualization, they dictate how close the film can get to the volumes without running into them. In the space scenes, they affect how close filmmakers can zoom in on stars and other objects without losing quality. This is also important when it comes time to determine the camera paths needed when combining the different layers of data and images.

The next step is creating the actual visualizations. The team normally uses the software Autodesk Maya, but AVL also relies on software specially created here at the University.

Then comes the final task of rendering the visualizations. The way this is done depends on the visualizations' intended output; IMAX movies are rendered differently than planetarium dome films. It requires "interdisciplinary teamwork to create these stunning visualizations," Cox said. "We couldn't do it without being at the University of Illinois."

In the end, these visualizations make it further than just blockbuster hits. They provide an educational outlet through the films and documentaries they're featured in, and they are also a great tool for scientists themselves. Not only do they allow scientists to see their data in action, like in "Dynamic Earth," but they provide a great medium for scientists to share their data among each other. But regardless of whether they are meant for work or play, the visualizations created at AVL are brilliant all the same, bringing some much-needed science to cinema.
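The "massless ball" idea Christensen describes corresponds to a standard technique often called streamline tracing: seed points are pushed through the vector field step by step, and the recorded paths become the rendered arrows. The sketch below is a minimal 2-D illustration of that idea on an invented vortex field, not the AVL's code or the Katrina data set.

```python
# Minimal streamline-tracing sketch: drop "massless balls" into a
# vector field and follow them; each traced path is one CG arrow.
# The tiny 2-D grid field here is invented for illustration; AVL
# works with full 3-D volumes from real simulations.
import numpy as np

# A vector field stored on a grid: at each (i, j) cell, one arrow (u, v).
GRID_N = 32
ys, xs = np.mgrid[0:GRID_N, 0:GRID_N]
cx = cy = (GRID_N - 1) / 2.0
field_u = -(ys - cy) * 0.1          # a gentle vortex around the center
field_v = (xs - cx) * 0.1

def sample(x, y):
    """Nearest-cell lookup of the arrow at a continuous position."""
    i = int(np.clip(round(y), 0, GRID_N - 1))
    j = int(np.clip(round(x), 0, GRID_N - 1))
    return field_u[i, j], field_v[i, j]

def trace(x, y, steps=150, dt=0.5):
    """Advect one massless particle and record its path (the 'arrow')."""
    path = [(x, y)]
    for _ in range(steps):
        u, v = sample(x, y)
        x, y = x + dt * u, y + dt * v   # simple forward-Euler step
        path.append((x, y))
    return path

if __name__ == "__main__":
    for seed in [(20.0, 15.5), (25.0, 15.5), (15.5, 25.0)]:
        path = trace(*seed)
        print(f"seed {seed} -> {len(path)} points, ends near "
              f"({path[-1][0]:.1f}, {path[-1][1]:.1f})")
```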

TOP: Hurricane Katrina, or rather, the Advanced Visualization Laboratory’s simulation of the deadly tempest, gains strength in this still from the IMAX production “Dynamic Earth.” The different-colored arrows represent the changes in air flow, direction and temperature as the hurricane edges toward land.

ABOVE: The Advanced Visualization Laboratory team, from left to right: Alex Betts, Jeff Carpenter, Stuart Levy, Donna Cox, Bob Patterson and AJ Christensen. (Photo Courtesy Robin Scholz, Illinois Alumni Association)





Kevin Karsch just might be a magician. The application he programmed lets even the most inexperienced rookie summon and insert objects into photos out of thin air. He is a University Ph.D. student in computer science, but he might as well be called a...

MASTER OF MANIPULATION
BY DANNY WICENTOWSKI | STAFF WRITER

Kevin Karsch pulls up two photos on his MacBook. The first shows an unremarkable living room with light spilling through partially shaded windows. A plush Pokemon doll stares down from a bookshelf. In the foreground, an empty beer bottle stands on a coffee table.

With a few short keystrokes, Karsch switches to the other photo. It's a nearly identical shot of the room. There's just one added detail: a 7-foot-tall marble statue of an angel.

The angel isn't real. It's a 3-D model, and the statue's digital insertion took all of 10 minutes.

BEFORE/AFTER PHOTOS COURTESY OF KEVIN KARSCH

Karsch, a University Ph.D. student in computer science, has been working on an image-rendering program that can make picture-perfect manipulations that, when revealed, can be as unnerving as they are breathtaking. He's been working on this rendering tech since 2009, and it has garnered some serious investor attention. And already he's turned to the next step: video.

"The idea of inserting objects into digital content has been around for some time. But for our technology, the goal of it is to make things much more efficient and much easier for the user," Karsch said.

The underlying novelties of the system are its speed and how it handles light, Karsch explained. In the pictures above, the angel isn't gaining its photorealism from anything inherent in the 3-D model of the statue itself. Rather, it's the way light coming through the windows flows over the uneven surfaces of the statue's robe, the way the shadow's gradient lightens and darkens the angel's face, and the way all the lighting conforms to the angle of the light coming through the window.

The rendering program works by first processing the 2-D picture into something that can be represented as three-dimensional space.

See MANIPULATION, Page 12



MANIPULATION FROM PAGE 11

To accomplish this, the user draws lines over vanishing points, parallel lines and light sources within a 2-D image. The payoff for all this tagging and line-dragging is that the algorithm can recognize the depth and three-dimensional layout of the room. So when Karsch places the angel statue in the picture, the program can model the effect of the light hitting the statue and even the effect of the light reflected by the statue's color.

The statue looks impressive, to be sure, but for Karsch, the real leap forward is how relatively simple it was to produce. With a little instruction, even a rank amateur could produce the image in little more than 10 minutes.

"Photoshopping costs a lot of money to have the program, and it could take years to learn," Karsch said, comparing the results of his rendering software to that of the popular Adobe software.

The next step for Karsch and his colleagues is to use their algorithm to make the same kind of manipulations for video, and he explained that the program would allow artists to easily insert objects and effects into a scene without ever needing them to be physically present.
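The depth recovery described above starts from basic projective geometry: edges that are parallel in the real room converge to a vanishing point in the photo, and those points constrain the 3-D layout. Below is a minimal sketch of just that first step; the pixel coordinates are made up, and this is only the geometric intersection, not Karsch's full layout-estimation algorithm.

```python
# Sketch: find a vanishing point as the intersection of two lines the
# user has traced over edges that are parallel in the real room.
# Coordinates are hypothetical pixel positions, not from a real photo.
import numpy as np

def line_through(p1, p2):
    """Homogeneous line through two image points (cross-product trick)."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def vanishing_point(line_a, line_b):
    """Intersection of two homogeneous lines, back in pixel coordinates."""
    x, y, w = np.cross(line_a, line_b)
    if abs(w) < 1e-9:
        return None  # lines are parallel in the image; no finite VP
    return (x / w, y / w)

if __name__ == "__main__":
    # Two traced edges of, say, the coffee table and the floorboards.
    edge1 = line_through((100, 400), (300, 350))
    edge2 = line_through((120, 600), (340, 480))
    print("estimated vanishing point (pixels):", vanishing_point(edge1, edge2))
```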

Even legendary scenes like the historical footage mash-ups found in "Forrest Gump" could be easily replicated.

"That scene probably took one or more artists several hours, if not days, to produce," Karsch said. "Hopefully, with our technology, that can be done in just a few minutes or maybe an hour."

But with great ease and availability comes the question of its popular use, and Karsch thinks there may come a point where literally anybody can use this kind of technology. Karsch is unsure of what this could mean for the general idea of validity or realism in photography, but film critic Roger Ebert doesn't think the spread of accessible image manipulation software means much at this point. In fact, it's old news: "Years ago, backgrounds were matte paintings. Now they're CGI. Film has always employed trickery," Ebert wrote in an email, adding: "I think today's audiences are so savvy and cynical they assume they're looking at CGI — even if they're not."

Rather, Karsch's software is just one more link in a chain stretching back more than a hundred years to the days of the Lumière brothers and Georges Méliès (who recently experienced a revival as a supporting character in Martin Scorsese's "Hugo"). The irony is that the quest for this realism is accomplished by increasingly sophisticated tools of illusion, explained James Hay, a professor of media and cinema studies at the University.

"You could say that the history of media productivity, of film and TV, has been toward greater and greater realism," Hay said. "Technologies of visualization are increasingly about overcoming a perception that the earlier technology is unrealistic in some ways."

Karsch thinks the balance between technology and human agency is shifting. Most animation needs some kind of motion capture to ground the animation in believable movement. But with the technological tools of illusion growing more sophisticated, the balance is moving toward a day when no image we see could be construed as "real."

Whether we've already reached that day is an interesting question, but Hay suggested that we all keep in mind René Magritte's famous painting "The Treachery of Images," which shows a detailed picture of a pipe with the words "This is not a pipe" written underneath. Magritte's point with the iconic pipe was that an image can never be the real thing. But for innovators like Karsch, perhaps letting people believe the pipe is the real accomplishment.


HOW IT WORKS | Breaking down Karsch's image-manipulation system


1. The user first inputs a picture or movie to Karsch's system.
2. The system automatically estimates a rough outline of the room.
3. The user corrects any errors in the system, also indicating the room's light sources.
4. The system then automatically computes a full 3-D scene.
5. The user is then free to insert objects or animations into the scene.
6. The objects are finally placed in the original image, appearing naturally lit and casting shadows on other elements in the image.
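The six steps above can be arranged as a simple pipeline. The sketch below is only a structural outline with stubbed-in stages; the function names and data shapes are invented for illustration and are not Karsch's implementation.

```python
# Structural outline of the insertion workflow described above.
# Every function here is a stub standing in for a real stage; names
# and data shapes are hypothetical, not Karsch's code.
from dataclasses import dataclass, field

@dataclass
class Scene:
    image_path: str
    room_layout: list = field(default_factory=list)    # wall/floor planes
    light_sources: list = field(default_factory=list)  # user-marked lights
    inserted_objects: list = field(default_factory=list)

def estimate_layout(scene):
    """Step 2: automatically guess a rough box layout of the room."""
    scene.room_layout = ["floor", "ceiling", "left wall", "back wall"]

def apply_user_corrections(scene, corrections, lights):
    """Step 3: let the user fix layout errors and mark light sources."""
    scene.room_layout.extend(corrections)
    scene.light_sources.extend(lights)

def build_3d_scene(scene):
    """Step 4: lift the corrected 2-D annotations into a full 3-D model."""
    return {plane: "3-D geometry" for plane in scene.room_layout}

def insert_object(scene, name):
    """Step 5: place a 3-D model (e.g., the angel statue) in the scene."""
    scene.inserted_objects.append(name)

def composite(scene, model_3d):
    """Step 6: relight the inserted objects and cast their shadows back
    into the original photo."""
    return (f"{scene.image_path} + {scene.inserted_objects}, "
            f"lit by {scene.light_sources}, over {len(model_3d)} surfaces")

if __name__ == "__main__":
    scene = Scene("living_room.jpg")                            # Step 1
    estimate_layout(scene)                                      # Step 2
    apply_user_corrections(scene, ["right wall"], ["window"])   # Step 3
    model = build_3d_scene(scene)                               # Step 4
    insert_object(scene, "marble angel")                        # Step 5
    print(composite(scene, model))                              # Step 6
```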

Do you have a passion for science and technology? Do you want to write about one of the premier research institutions in the world? Then join the Technograph!

EDITOR: Assign and edit stories, oversee content and manage the Technograph desk for The Daily Illini.
WRITERS: Write! Write! Write!
CONTACT: mewriting@dailyillini.com



THE ART THEATER GOES DIGITAL
Theater updates decades-old projectors
BY MEGAN REILLY | STAFF WRITER

PHOTO COURTESY OF FLICKR USER OTTERMAN56

A favorite way for Champaign-Urbana residents to harken back to a bygone era is making drastic changes in the very near future. According to owner Sanford Hess, most of the projection technology in The Art Theater, located at 126 W. Church St. in Champaign, is the same that was used in the 1950s.

For each film shown at the theater, Hess needs the huge rolls of film on which the movie is recorded. While these rolls are several feet across and 50 to 60 pounds each, a closer look at a section of the film reveals that it is very similar to the film older cameras might use — filmmakers still need to develop movie film from negatives in a lab, and the result is dark, transparent and only 35 millimeters wide.

Why still show films with this older technology? For Hess, the quality of digital projection can never compare to traditional film. "If we showed a DVD at regular resolution, you would actually see the pixels," Hess said. While the industry standard has reached more than 2,000 pixels for theater projection, that still can't beat traditional film's infinite resolution and true colors.

Nevertheless, theaters are moving away from film projection across the nation. This plays into some of the advantages of digital projection, ranging from its being "idiot-proof" by looking and working like an iPod playlist to costing less for studios and theaters alike with regard to the actual prints.

But on top of that, film studios are requiring theaters to make the change. Major studios have announced that they will require digital projection by January, and smaller studios will do the same over the next couple of years. This is partially because of costs, but studios also like digital projection because it gives them tighter control over the films even after distribution.

Instead of receiving a large reel of film for each movie, theaters receive a hard drive and a separate flash drive to decrypt the files. Not only is the hard drive locked for a certain distribution date, but the flash drive sends a signal to the distributor to make sure the film has not been shown more than the theater's contract allows.
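The locking arrangement described above, where the encrypted drive only plays within a contracted window and screening count, can be sketched roughly as a validity check. The field names, dates and numbers below are hypothetical; real digital-cinema deliveries use a standardized key format rather than this simplified structure.

```python
# Rough sketch of the playback restrictions described above: encrypted
# content plus a separately delivered key that is only valid for the
# contracted screening window and count. All fields are hypothetical,
# not the actual digital-cinema key format.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DeliveryKey:
    film_title: str
    valid_from: datetime
    valid_until: datetime
    max_screenings: int

def may_screen(key, now, screenings_so_far):
    """True only if the key window is open and the contracted screening
    count has not been exhausted."""
    in_window = key.valid_from <= now <= key.valid_until
    under_cap = screenings_so_far < key.max_screenings
    return in_window and under_cap

if __name__ == "__main__":
    key = DeliveryKey("Example Feature",
                      valid_from=datetime(2012, 6, 1),
                      valid_until=datetime(2012, 6, 14),
                      max_screenings=28)
    print(may_screen(key, datetime(2012, 6, 5), screenings_so_far=10))   # True
    print(may_screen(key, datetime(2012, 7, 1), screenings_so_far=10))   # False
```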

The Art Theater is lucky. While the new standards might be good for filmmakers and distributors, the $60,000 price tag for theaters just to meet the most basic of equipment standards means that many small theaters similar to the Art are going out of business. After assessing that his theater would actually need closer to $100,000 to make the necessary changes, Hess decided to form the Art Theater Cooperative to fund the switch and run the theater afterward rather than see it close. By selling shares at $65 each, the theater has raised more than $65,000. Even with the changes, the theater will keep the old technology to continue showing archived films at 35mm for that classic moviegoing experience.





