LIDS all
Laboratory for Information and Decision Systems MIT

Editor: Jennifer Donovan
Design & Illustration: Michael Lewy
Writers:
Genevieve Wanucha
Katharine Stoel Gammon
Photography credits:
LIDS MIT 150 photos taken by Patsy Sampson, Liza Zvereva, Alessandro Colombo, and Jo Woon Chong. LIDS Commencement
Reception photos taken by Hoda Eydgahi. Student Conference photos taken by Yola Katsargyri. Portraits of Ketan Savla and Debbie Wright taken by Michael Lewy.
Massachusetts Institute of Technology
Laboratory for Information and Decision Systems
77 Massachusetts Avenue, Room 32-D608
Cambridge, Massachusetts 02139
http://lids.mit.edu/
send questions or comments to lidsmag@mit.edu
The Laboratory for Information and Decision Systems (LIDS) at MIT, established in 1940 as the Servomechanisms Laboratory, currently focuses on four main research areas: communication and networks, control and system theory, optimization, and statistical signal processing. These areas range from basic theoretical studies to a wide array of applications in the communication, computer, control, electronics, and aerospace industries. LIDS is truly an interdisciplinary lab, home to about 100 graduate students and post-doctoral associates from EECS, Aero-Astro, and the School of Management. The intellectual culture at LIDS encourages students, postdocs, and faculty to both develop the conceptual structures of the above system areas and apply these structures to important engineering problems.
We are about to complete another exciting year at LIDS. This was a particularly special year because of the MIT150 celebrations, which brought thousands of people, from alumni to members of the scientific and local communities, to campus to celebrate MIT’s 150th anniversary. LIDS participated enthusiastically in these festivities. At the open house on April 30th, LIDS faculty, students, and staff featured our work by engaging the visiting community with creative hands-on demos, illustrating the rich variety of the research conducted at the laboratory. The displays and activities included a demonstration of drivers’ responses to disruptions in transportation networks, control of distributed and mobile agents, the DARPA Grand Challenge autonomous vehicle, and a demonstration involving social behavior and optimized ranking of individual choices. It was exciting for all of us to have so many children and young adults spend a substantial amount of time at our booth watching, playing, and experimenting with these demos.
Capitalizing on the timing of the MIT150 celebrations and the leadership role that LIDS has established in defining research directions, we organized a Workshop on Information and Decision in Social Networks (WIDS) under the superb organization and leadership of LIDS visiting faculty member Prof. Vincent Blondel. The workshop was a great success, with more than 40 presentations and poster sessions and 250 attendees, bringing together the views and research vision of a very broad community including social scientists, computer scientists, and engineers. The feedback from the community was overwhelmingly positive, with requests to make this truly interdisciplinary workshop an annual event. LIDS will continue to define critical research directions in this area as well as in the area of general networked systems.
Many have asked me about my experience as acting director of LIDS. Without hesitation, I will say that I had a great experience and a very productive year thanks to the support I got from the LIDS community. I was absolutely delighted to have the opportunity to send out all the award announcements, to participate in the “Idea Forum” that was created by the students with the help of my assistant, Jennifer Donovan, to welcome Yury Polyanskiy to our faculty, to celebrate Jennifer’s Infinite Mile award, and to celebrate Debbie Wright, our Assistant Director for Administration, getting married. I enjoyed collaborating with my colleagues among the faculty, research scientists, and postdocs on various projects and initiatives. I particularly enjoyed the challenge of defining the scope of the “Systemic Risk” initiative and of preparing the proposal for a Science and Technology Center in this area in coordination with faculty from across the MIT campus. Finally, it was a great joy to see many of our students graduate, knowing that they will move on to wonderful and ambitious accomplishments.
As I move on to take on the position of EECS Associate Department Head, I look back fondly at the last several years as part of the leadership of LIDS, jointly with Alan Willsky and John Tsitsiklis. Alan was an excellent mentor and John was a great partner. I feel this experience has prepared me well to take on the new challenge. Looking even further back at my own career at MIT and LIDS, I am extremely grateful to Sanjoy Mitter for his continuing support and for sharing his wisdom with me since I was a junior faculty member. I am pleased to see that LIDS continues to provide that type of mentoring for our junior faculty.
This issue of LIDS-ALL offers a great selection of profiles of LIDS students, faculty, alumni, administrators, and friends. Their stories bring out not only the wonderful experiences they have had as members of the LIDS community but also their great contributions to the laboratory over the years. I sincerely hope you enjoy reading our magazine, and I look forward to hearing from you in the future.
Sincerely,
Munther Dahleh, Acting Director
By Genevieve Wanucha
It happens in the blink of an eye. A press of the ‘Enter’ key returns Google search results almost instantaneously. Video chats easily collapse the distance between conversing friends. For those born before the early nineties, the memory of those scratchy beep-beep-boop-boop sounds of a dial-up Internet connection now seems rather quaint.
Almost no one has participated in more of these advances in data communications than LIDS’s Dave Forney. From his graduate school days at MIT to his career in industry and parallel career in academia, he has been a key player in many of these dramatic advancements.
Dave acknowledges that few imagined the current speeds of communication technology. “When Google first came out, it seemed like a miracle even to technically sophisticated people. Ten years earlier, who would have predicted it?” For that to happen, Dave says, people would have had to imagine what would become possible if everything could be done a million times faster. The difficulty of predicting the power of an idea has been illustrated time and again throughout his career.
Dave recalls that one such example occurred right under his nose. Back in 1960, Dave’s future professor and advisor, MIT’s Bob Gallager, had introduced “low-density parity-check” (LDPC) codes in his PhD thesis.
Gallager was aiming for the Holy Grail of coding theorists: to find error-correcting codes that could approach the Shannon limit, with feasible complexity. In other words, he wanted to send messages over a given noisy channel efficiently, reliably, and as fast as possible. Gallager’s work was appreciated as a theoretical contribution, and was published in an MIT Press monograph, but it didn’t make any practical impact at the time. Indeed, Codex Corporation (for which Bob consulted, and Dave later worked) turned down the opportunity to exploit LDPC codes; they were simply too complicated for the available technology.
Thirty years later, in 1993, the coding world was rocked by the invention of “turbo codes,” which approached the Shannon limit closely with very reasonable complexity. Turbo codes use an iterative decoding method that successively refines a set of estimates of the likelihoods of the encoded bits. This looked a lot like Gallager’s iterative decoding methods for LDPC codes. Soon it was realized that turbo codes and LDPC codes were closely related, and in practice it turned out that LDPC codes worked even better. “And I kicked myself,” Dave says. “I should have thought of this, and so should many of my colleagues, but somehow Bob’s codes had been tagged as ‘impractical.’ Maybe this was true in the Sixties and Seventies, but we never re-examined them when technology had advanced in the Eighties and Nineties.” Today, Gallager’s codes achieve the closest approaches to the Shannon limit, and are used in most new data communications standards.
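For a flavor of what “iterative decoding” means in practice, here is a minimal sketch of a hard-decision “bit-flipping” decoder, a simplified cousin of the algorithms described above. The tiny parity-check matrix below is made up purely for illustration; real LDPC codes use very large, sparse matrices, and Gallager’s full method works with soft likelihoods rather than hard bits.

```python
import numpy as np

# Toy parity-check matrix (the small (7,4) Hamming code, used here only as an
# illustration; an actual LDPC code would use a huge, sparse H).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def bit_flip_decode(H, received, max_iters=20):
    """Iteratively flip the bit involved in the most unsatisfied parity checks."""
    word = received.copy()
    for _ in range(max_iters):
        syndrome = H @ word % 2          # which parity checks currently fail
        if not syndrome.any():           # all checks satisfied: decoding done
            return word
        unsat_per_bit = syndrome @ H     # failed checks touching each bit
        word[np.argmax(unsat_per_bit)] ^= 1
    return word                          # give up after max_iters

codeword = np.array([1, 1, 1, 0, 0, 0, 0])   # a valid codeword: H @ codeword = 0 (mod 2)
noisy = codeword.copy()
noisy[4] ^= 1                                # the channel flips one bit
print(bit_flip_decode(H, noisy))             # recovers [1 1 1 0 0 0 0]
```

Each pass recomputes which parity checks fail and flips the most suspicious bit, which is the “successive refinement” loop of iterative decoding in its simplest possible form.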
Dave experienced a surprise like this in his own career, as well. As an MIT doctoral student studying under Gallager and Jack Wozencraft, his aim also was to design coding systems with good performance and reasonable complexity. He invented a scheme called “concatenated codes,” for which error probability decreases exponentially while complexity increases only as a small power of the code length for all rates less than the Shannon limit. His 1965 thesis too was published as an MIT Press monograph. However, these codes were also viewed as impractical. Dave recalls an employment interview at Bell Labs in which he presented some of these long and complicated codes; he had the clear impression that he was perceived as “not a very practical guy.” But within a decade, concatenated codes became the standard in space communications. “I had no idea they would have such a big practical impact,” he says. In 1998, they were awarded an IEEE Information Theory (IT) Society Golden Jubilee Award for Technological Innovation.
Another instance of a surprise success came after Dave wrote a system theory paper while a visiting scientist at Stanford, published in the SIAM Journal on Control in 1975. After it was
accepted and Dave had become VP-R&D at Codex, he says that he virtually forgot about its existence: “I shot an arrow into the air; it fell to earth, I knew not where.” Ten years later, he received a phone call from a system theorist wanting to ask him a question about his “famous paper,” and Dave discovered that his paper had become a “citation classic.”
Dave’s primary career was in industry, first at Codex (1965-77), and then with Motorola (1977-99), after Motorola acquired Codex. Bob Gallager was one of the founding consultants for Codex, and suggested that Dave interview there after his graduation in 1965. At Codex, Bob and Dave became close colleagues. Codex’s business success in the 1970s was based on a series of high-speed modems that Dave designed, building on Bob’s basic work on quadrature amplitude modulation. These modems became international standards, and are still deeply embedded in all personal computers, in a tiny corner of the Intel chip that allows you to connect to the Internet through a phone line.
In those days, Dave remembers that professors were encouraged to consult with outside companies one day a week, and most of them did. Dave says that practice is rare now, perhaps because professors have gotten busier: “I’m sorry to see that’s changed.” Dave says that his own experience has been that some of the most interesting research problems come
out of trying to understand practically successful systems at a deep level.
Dave has been an Adjunct Professor in LIDS since 1996. He has received many awards and honors, including the IT Society Shannon Award, the IEEE Edison Medal, and membership in the NAE and NAS. He recently served as President of the IT Society, for the second time, and won the IEEE Donald Fink Prize Paper Award, for the second time. He continues to write research papers, and taught his graduate course on coding in Fall
2010. However, he says that his objective now is to be “more retired.” He remarried five years ago, and now works mainly from his home office, although he still maintains an office at MIT. He says that his current research is a purely intellectual investigation of the theory of codes on graphs and its connections to system theory, with no pretensions to practicality. But, if the past is any guide, could Dave be surprised once again by an unexpected payoff?
By Genevieve Wanucha
From “Likes” on Facebook to five-star movie ratings on Netflix and uploaded images in Google Photos, our world is virtually leaking raw data. Increasingly sophisticated technologies are exponentially increasing the amount of data we can collect. This data deluge poses certain problems for science and engineering, namely that we are producing more data than we can store and analyze. For LIDS alum Martin Wainwright, the real challenge is making all that data useful. “Data, on its own, is not really interesting,” he says. “Data contains information. Information is interesting.” It’s not the details on Facebook that advertisers need, but large patterns in user behavior. It’s not the individual cases of HIV, but the vectors of transmission that hold value for epidemiologists.
Martin spends his time thinking about how to convert raw data to meaningful information. He designs algorithms to extract structure from massive strings of zeros and ones. The needs for this kind of work are many. For example, image processing requires ways to compress pixels; Netflix uses algorithms that can quickly assess people’s eclectic movie choices and come up with tailored recommendations. But Martin works on the fundamentals of culling information from data. While these potential applications of his work drive the research, they don’t determine the conceptual questions he asks.
Martin works as a professor with a joint appointment between the Department of Electrical Engineering and Computer Science,
and Department of Statistics at UC Berkeley. These days, he’s spending his sabbatical here at LIDS. It’s not far from home, actually, as Martin earned his PhD here in 2002. The institution has left an indelible mark on his career, and naturally, it’s pulled him back.
“Once you’ve been here, it’s always in your blood,” he says. During this extended visit, he is working on a book about high dimensional data, teaching a special graduate topics class, and continuing his research with a group of his Berkeley students who followed him to MIT.
One of Martin’s favorite algorithms to emerge from his group is one that extracts all the voting records of US senators from a large database publicly available at Senate.gov. When Martin demonstrates it, the complexity of the math seems to disappear. At the click of a button on his laptop, the algorithm spits out the image of a big circle. Tiny red and blue lines span the circle’s diameter, like strings stretching across a dream catcher.
Martin explains that the lines, red for Republican, blue for Democrat, and yellow for Independent, show actual relationships between the senators.
This algorithm, without knowing the political affiliations of the senators, has “learned” about them. It turns out that there are many more connections between Democrats than Republicans. There is one highly connected node, reflecting many shared votes—it’s Lieberman in 2006, right before he switched from Democrat to Independent. Martin says that this kind of algorithm reveals changes in data over time, which would be useful in extracting other interesting information from any kind of social network.
Martin’s research could also apply to something called error control coding. When a satellite sends transmissions back to Earth, the data travels in a string of zeros and ones. During their journey, some of the data points will inevitably become corrupted; some zeros will become ones, for instance, and some digits will be erased. Engineers need to combat these errors of transmission by introducing redundancy, basically extra code. Back on Earth, the coding measure can help the corrupted code self-correct. This idea doesn’t just apply to sending information to and from the far-flung reaches of space, though. Supermarket barcodes also contain an error control code. If a package is scanned off-center and records a number wrong, the extra checks figure out the mistake.
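To make the idea of redundancy concrete, here is a minimal sketch (a toy illustration, not the codes actually flown on spacecraft or printed on barcodes): the simplest error-correcting code just repeats every bit three times, and a majority vote at the receiver repairs any single flipped bit in each group. Real systems use far more efficient constructions, but the principle of adding “extra code” so a message can fix itself is the same.

```python
def encode(bits):
    """Add redundancy: send each bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Self-correct: a majority vote over each group of three repairs
    any single flipped bit in that group."""
    return [int(sum(received[i:i + 3]) >= 2) for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] ^= 1                    # noise flips one bit in transit
print(decode(sent) == message)  # True: the error was corrected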
Martin has a multidisciplinary background, and jokes about being a dilettante. After receiving his undergraduate degree in mathematics, he earned a Masters in neuroscience from Harvard. The human brain may at first
seem a big jump from theoretical algorithms, but Martin explains that the statistical analysis inherent in neuroscience modeling drew him into Information and Decision Systems. He is still fascinated by the brain, “the most sophisticated object on Earth,” he calls it.
“Computer vision systems come nowhere near what a simple visual system can do in our brain…It’s interesting from an engineering standpoint because here is a system and we are not capable of engineering anything remotely close to it. At least not yet.” In the future, it’s a challenge he wants to tackle.
Martin says it might be possible to apply his current methods of high dimensional data analysis to neuroscience. As it becomes possible to record the activity of many neurons simultaneously, something like the voting records algorithm could detect interesting patterns in the ways cells electrically interact. Similar to amassing Senate votes from consecutive years, Martin explains, recordings from neurons would provide spikes of data over time. Theoretically, the right algorithm could infer neural connections and patterns, the kind of information that would revolutionize our understanding of the brain.
If you sit right outside Martin’s office, located centrally in the midst of other offices and cubicles in LIDS, you will hear streams of voices and the sounds of collaboration emerge from all directions. Martin likes this unique feature of LIDS. “People [here] like to study fundamental principles, but the applications they work on are really diverse—biological, social networks, aircraft control systems,” he says. “So there’s common ground, and everyone has a shared way of thinking about things, but there’s enough diversity as well that it makes for interesting interaction.”
Being able to just walk around the lab and have productive conversations with other researchers is a huge benefit, as Martin’s work is not always fun. Detecting order in the chaos of a data deluge is a “non-linear” experience. “You are struggling for a long time to formulate the right model, or to understand what the key ingredients of a problem are,” Martin says. “But what keeps you addicted to it is that moment when you are working very hard, feeling like you are on a treadmill making no progress, but then suddenly there is a flash of insight. That’s what keeps me coming back.”
By Genevieve Wanucha
It’s not every day that a lab develops a new science. But that’s exactly what’s happening at LIDS, where research scientist Ketan Savla is building the theoretical foundation of what he calls cyberphysical science. “Only in the last decade or so has wireless technology boomed,” he says, “so the idea of machines talking to each other is only now becoming reality.” Over the three years Ketan has spent at LIDS designing controls for complex, multiagent systems, he has developed a unique insight: we now have enough technology that researchers can begin to develop theories to use those tools right.
Ketan works with technologies in the form of robots and sensors, whole swarms of them, and he designs protocols by which these robots can sense the environment and talk to each other. To do so, he programs them with algorithms so they can take collective action to manipulate complex, large-scale systems of interest. These systems, such as electrical grids or transportation networks, are becoming more and more complex, which means that ultimately, robots of the future will need the ability to divide a task between each other, and then autonomously function within the system. They will need to manage pick-up and delivery services, Ketan suggests, and even move people doorstep to doorstep in congested cities.
Autonomous robot action has already begun to help humans manage complex tasks, such as forest fire monitoring. In the usual situation, cheap sensors are scattered throughout a forest. They measure the local temperature and
send a signal to a central command unit upon sensing a fire. But these unsophisticated devices too often send false fire alarms, triggered by heat from sunlight. Robots have started to solve this problem. These unmanned vehicles can travel to the location of an alarm to assess an actual fire, eliminating any unnecessary, expensive, or potentially hazardous human interventions.
Ketan’s work, however, aims to take this technology to its next phase by applying it to more complex situations. Traffic networks, for instance, are perhaps the best example of a potential application for Ketan’s control algorithms. Citywide traffic networks have no single human command center that directs traffic flow – performance is governed by the decisions of individual drivers. Drivers must react to accidents and other obstacles, but since they do not have the information to make the most informed decisions, their collective actions can bring traffic to a standstill.
Ketan thinks traffic-level sensing technologies could inform individual drivers of upcoming obstacles through in-vehicle or roadside devices. Of course, people will have to decide whether they want monitoring devices inside their vehicles, or stationed on the roadside. Ketan predicts these kinds of innovations would produce a win-win situation, decongesting roads and reducing all that fuel we waste standing in bumper-to-bumper traffic. But well before engineers can implement traffic flow monitoring, researchers like Ketan must first truly understand decision making in complex traffic dynamics.
For a couple of years now, Ketan has been mentoring a group of computer science Masters of Engineering students on this exact transportation problem. The students get material for their own software development, while contributing ideas to the control of traffic levels in cities. Directed by Ketan, they have developed an interactive virtual environment called Virtual City Testbed. The testbed allows people to log on and maneuver virtual cars along streets and around flaming traffic accidents just like drivers in a real traffic network. Eventually, the group will gather enough data on people’s decision patterns to build a theory as to how drivers make decisions in different scenarios.
Prior to joining LIDS, Ketan earned his PhD from the University of California, Santa Barbara, focusing on single-robot systems. As he worked, he noticed a huge surge in interest in the simultaneous control of multiple robots. The cross-disciplinary nature of this enterprise fascinated Ketan. “That’s what really started me off—the opportunity to…marry ideas from controls to operation systems and computer science,” he says. When it came time to choose a place for post-doc work, the diverse research methods of LIDS sealed his decision. “The environment here is bubbling with lots of new ideas from different disciplines,” he says. LIDS offered him the perfect combination of freedom and motivation to push beyond his doctoral research and branch out to make new connections between research areas.
As important as multi-disciplinary approaches have been to the new world of complex
multi-agent systems, simply merging ideas from different specialties is not enough to solve real challenges. New mathematical solutions may spring up one by one in different control systems labs, but how can researchers determine the best one to fill a knowledge gap or respond to a technological challenge? There is currently no efficient way. Ketan thinks a new cyberphysical science could provide a nuanced answer.
Ketan is working to devise a theoretically sound “fundamental benchmark,” which is a standard way to compare different solutions to find the one that pushes towards the natural limitations of the complex system. For example, there is a limit to the number of drivers a road network can handle. An optimal solution to a traffic flow problem will maximize the network’s capacity without overloading it. A benchmark will enable LIDS researchers to locate the simplest, best-performing algorithm among multiple possibilities, rather than shoot in the dark.
This kind of work is not practiced much because, simply, it’s difficult. “Defining those limits is very hard, and it takes much more work than just coming up with one solution,” Ketan says. “Here at LIDS, we do not just adopt a piecemeal approach, taking some ideas from this discipline and some ideas from that, and tie them up in some way, and solve the problem. No—I think what we are trying to do here is develop new science at the interface of traditional domains.” It’s rigorous work, and Ketan emphasizes that it takes years
of deep thinking to wrestle with a theoretical problem, but it’s worth it. The cyberphysical science emerging from LIDS offers the possibility of pinpointing real, implementable solutions to challenges as pressing as the safety of our highways and forests, and as practical as saving gas money.
By Genevieve Wanucha
Ermin Wei is one of the newest members of LIDS, yet she’s already had a taste of how her ideas in distributed optimization could change technology. Last year at the IEEE Conference on Decision and Control in Atlanta, she presented her MIT Master’s thesis on a fast distributed algorithm, basically a scheme that could make the Internet run much faster. A bunch of Internet engineers approached her after the talk, exclaiming, “This could change our TCP protocol!” Their enthusiasm took Ermin by surprise. “Working on a theory makes me feel a little far from reality from time to time. I didn’t know if people would actually use it, so it was a big shock.” Now a first-year PhD student, she is still hard at work on the new algorithm’s mathematics.
Ermin works on network utility maximization. With pencil, paper, and simulations, she figures out how to best allocate resources through a network. These calculations are relevant for any kind of network, whether it’s Internet wires, highway and road systems, or electrical grids. A simple example is railroads. You can only send so many trains over the tracks at once. The trains will travel on predetermined paths, but each may have a different priority. To achieve maximum utility, engineers need to know how to send the most trains through the system without overloading it and causing delays.
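To illustrate the flavor of the problem, here is a minimal sketch of the classical price-based (dual decomposition) approach to network utility maximization on a made-up three-flow, two-link network; it shows the general idea only, not Ermin’s fast algorithm. Each link posts a congestion price based only on its own load, and each flow picks its rate based only on the prices along its own path, exactly the kind of local, decentralized decision-making a large network requires.

```python
import numpy as np

# Made-up example: three flows share two links. routes[f, l] = 1 if flow f uses link l.
routes = np.array([[1, 0],       # flow 0 uses link 0 only
                   [0, 1],       # flow 1 uses link 1 only
                   [1, 1]])      # flow 2 uses both links
capacity = np.array([1.0, 2.0])  # link capacities
weights = np.array([1.0, 1.0, 1.0])  # utility of flow f is weights[f] * log(rate[f])

prices = np.ones(2)              # each link keeps its own congestion price
step = 0.01
for _ in range(20000):
    # Each flow sees only the prices on its own path (local information)...
    path_price = routes @ prices
    # ...and picks the rate maximizing w*log(x) - x*path_price, namely w / path_price.
    rates = weights / path_price
    # Each link sees only its own load and nudges its price up if overloaded.
    load = routes.T @ rates
    prices = np.maximum(1e-6, prices + step * (load - capacity))

print(np.round(rates, 3))              # near-optimal rates for the three flows
print(np.round(routes.T @ rates, 3))   # link loads settle near the capacities
```

No central planner ever sees the whole network; the prices alone coordinate the flows toward the utility-maximizing allocation.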
But scaled up to a network as complex as the Internet, this optimization problem grows daunting. Our on-line world is a large-scale network of networks that must handle huge amounts of random data flooding interconnected wires. Certain computers have security constraints, so there can be no central planner monitoring everything and directing data flow. Instead, computers only know some information about their neighbors and only use local information to talk to each other. Currently, the Internet uses a slow distributed algorithm dating back to 1997. Ermin and researchers like her want to create an algorithm for a faster Web of the future.
As an undergraduate at the University of Maryland, College Park, she started out in computer engineering. An insatiable appetite for challenging material and a desire to learn more drove her to add another major in finance and later a third in mathematics. Searching for an exciting project for the summer after freshman year, she joined the Intelligent Servosystems Laboratory in the Institute for Systems Research. There, with the unlikely partner of the big brown bat, Ermin discovered her love of applied math under the supervision of Professor P. S. Krishnaprasad from the Department of Electrical and Computer Engineering.
At the time, Krishnaprasad and his students were studying the dynamics of bat echolocation, the ingenious biological sensing system that allows bats to navigate and hunt using sound waves. Bats are nocturnal, so the lab members had to give them “jet lag training,” turning off the lights in the day and exposing them to
light at night. After the bats adjusted to a human schedule, Ermin set up nets in different shapes, and videotaped the flight paths of the bats as they zoomed towards an insect buzzing at the far end of the nets. She analyzed how bats skillfully, yet blindly, pilot themselves around the curved obstacles standing in their way.
Ermin joined one project to apply evolutionary game theory to the analysis of pursuit laws found in the brown bat’s sonar hunting. Did bats use classical strategy, and simply follow their prey? Or did the nocturnal creatures use motion camouflage, a method of capturing prey in the shortest possible time (given the prey follows a piecewise linear path), which would give them an evolutionary edge? Surprisingly, the experiments showed that bats do use motion camouflage. “We didn’t expect that nature would use a sophisticated time-optimal approach,” she remembers. It was a transitional moment for Ermin, a recognition that math is beautiful: applied to real-world problems, the result is more than an equation; it offers an intuitive way of seeing the world. “That got me into research,” she says.
Here at LIDS, Ermin works with her advisor, Professor Asu Ozdaglar of the Department of Electrical Engineering and Computer Science. Asu’s unconditional support has become especially important. Applied mathematics, though excitingly intuitive, comes with its own special challenges. “There is a common saying at LIDS,” Ermin says. “90% of the time, you are stuck on a proof, 1% of the time you make a breakthrough, and the other 9% of the time you are writing a paper. A student can get stressed out and disappointed!”
But sometimes when Ermin feels about to give up, talking with Asu gives her an immediate energy boost and a flow of new ideas.
“You feel the energy radiating out of her,” Ermin says of Asu, who has interests in game theory, social networking, and of course, distributed optimization. She cares a lot about her students, helping them with idea generation and technical writing, and advocating for them at conferences. Working with Asu has inspired Ermin to think of her own future as a professor and advisor. She has realized that an academic setting gives freedom for meaningful research. And most importantly, “a good advisor can have a huge impact on students,” she says.
The social, interactive atmosphere of LIDS also tempers the academic frustrations. Reflective of LIDS’s multidisciplinary nature, researchers have diverse interests and personal backgrounds. A short while ago, Ermin asked everyone to sign a birthday card for Asu, using his or her first language. “We ended up with 8 different languages,” she remembers. “But no English!”
Ermin’s desire to make an impact on real-world problems has her thinking of future research directions. She’s particularly attracted to the mathematical foundations of efficient energy systems, or smart grids. If we price energy according to demand, laundry machines, dishwashers, and air conditioners could run when overall demand, and therefore the price, is low. Electricity could be locally stored; we could draw on it when the price is high, and sell any extra on the open market. However, smarter energy use is a classic distributed optimization problem. Someone needs to figure out how to implement such a system on the enormous scale of entire cities. There is no question that Ermin will be an optimal person for the task.
By Katharine Stoel Gammon
While Sridevi Sarma was a post-doc in the prestigious neuroscience lab led by Emery Brown in the Brain and Cognitive Sciences department at MIT, she met a roadblock. She was trying to design a better way to build a controller for Deep Brain Stimulation (DBS), which, as an engineer, felt comfortable to her, but she was frantic to learn the techniques of her new discipline, neuroscience. The techniques of Dr. Brown’s lab had broken barriers in understanding the neurophysiology of animals and humans while they are conscious and mobile. However, these methods were not enough to help Sridevi achieve her goals. She sought out her old LIDS advisor, Munther Dahleh, who gave her a piece of vital advice. “He just said stop -- stop what you’re doing. Use your systems and control approach and don’t forget it. That’s the new thing that only you bring to the table.”
Sridevi said that those words have stuck with her ever since. “I was just following their bandwagon and hitting all the same roadblocks that neuroscientists hit in understanding DBS.” But once she started thinking in the framework of control systems, whether for an airplane or for brain-implanted electrodes, her perspective changed, and she was able to merge her education from Brain and Cognitive Sciences with what she learned at LIDS.
Sridevi’s path to brain-implant controller design started during the time she was doing her doctoral studies in LIDS. Though her graduate studies were focused on control theory, she
was taking courses in neuroscience on the side – “We had to do a minor in something, but I really wasn’t looking to do anything interdisciplinary,” she says.
A class project, though, led her to do a three-day case study on her aunt who had early-onset Parkinson’s disease. In those three days, she gained a completely different perspective on what it’s like to live with Parkinson’s disease, and what it’s like to be the spouse of someone with it. Sridevi saw an opening to put her training to use in finding a better way to help people with neurological disorders. “I was motivated to better understand what’s happening in the brain, to use my training in systems modeling and control to better understand the brain, and how to design better therapy.”
Now an assistant professor in the Biomedical Engineering Department and in the Institute for Computational Medicine at Johns Hopkins University, Sridevi works on optimizing a controller for DBS, a procedure in which an electrode is surgically implanted in the brain and sends electrical impulses to specific regions. The impulses can be targeted in such a way that they are able to alleviate some symptoms of movement disorders, including those induced by Parkinson’s disease.
“The way it’s done now is very heuristically,” said Sridevi. DBS will work if two things are done correctly – if the doctor implants the electrode in the correct place, and if the correct electrical signal is injected. While most people have a good understanding of the target, she says, getting the signal exactly right for each person is very difficult.
Most doctors who work with Deep Brain Stimulation do their best to adjust the instrumentation, but it’s essentially a shot in the dark. “After the patient recovers from surgery, the doctor literally tweaks the parameters of the stimulator,” she said, adding that while the pulse train of current that goes into the electrodes is constant, the intensity and frequency of the pulse can be tweaked. Sridevi says that doctors typically ask patients to walk around the room and perform certain motor tasks while the pulse’s parameters are changed. Not only is this process laborious, it’s also expensive and lengthy – it can take up to a year to figure out the best way to optimize DBS treatment for each patient.
Sridevi’s goal is to create a more intelligent system for both placing and controlling the electrode signals. Currently, once the parameters are correct, she says, the same high-power signal runs 24 hours a day, 7 days a week – until the batteries run out three to five years later. At this point another surgery and another round of tweaks are required. Plus, the electrical impulses can trigger other circuits in the brain, making patients anxious or depressed.
Sridevi thinks it’s possible to re-imagine the whole system. The normal brain operates at low power, and she thinks that there is a low-power alternative for Deep Brain Stimulation. She also thinks that the electrical signal can be designed to work smarter. “DBS should intelligently adapt to the patient’s state, on-medication/off-medication, awake and during sleep, across all stages of the disease,” she said. “We want to get an intelligent chip sitting in there, measuring appropriate activity, and responding accordingly so that the patient’s brain looks more like a healthy brain.”
She is now analyzing neural data from normal primates and primates induced with Parkinsonian symptoms via a neurotoxin, to build systems level models of healthy and diseased neural circuits with and without Deep Brain Stimulation. The systems approach and computational framework she brought from LIDS can be both exciting and challenging to neuroscience colleagues who have a set way of doing things. “LIDS gave me the systems perspective – the methodology and tools to apply to this field. I really have not run across anyone with my training looking at these problems,” she said.
Some parts of her new role are different from her time at LIDS, though. For one, there are patients involved. “Unlike LIDS, where you have definitions, a problem statement, and theorems, my metric here is that something I invent has to work in a patient. They just want a solution. When I talk about what we’re doing, it’s got to be results-oriented,” she said.
Sridevi hopes that within ten years, one of the approaches for Deep Brain Stimulation she’s
working on finds its way into clinical trials.
“One project is focused on Parkinson’s Disease, and the other on Epilepsy. I would feel great if any one of my approaches makes its way to clinical trials,” she said, adding that the goal is realistic, assuming everything falls into place today.
As for her aunt with Parkinson’s, Sridevi has kept in touch, sending her some papers and discussing Deep Brain Stimulation with her. “She does have DBS, and we’ve talked about what it’s doing in her brain. The thing with her is that she’s now in her 50s, and she has very late-stage Parkinson’s. She’s gone through just about every possible therapy including ablative surgery and bi-lateral DBS, but she hasn’t completely regained quality of life.”
LIDS has helped Sridevi in many ways in envisioning her research. “In particular, the technical training I got at LIDS was stellar. I can say I have always understood the technical details of all the research talks I go to.”
Munther Dahleh’s advice still sticks with her.
“Everything we’re doing is novel because of our approach. I think we are making lots of headway in both the Parkinson’s problem and also in detecting seizures in drug-resistant epileptics.” When she gets to bring a new approach to an old problem, Sridevi is putting her training to use for the good of many.
ORGANIZING COMMITTEE
Student Conference Chairs
Yola Katsargyri
Spyros Zoumpoulis
Committee Members
Elie Adam
Amir Ali Ahmadi
Giancarlo Baldan
Kimon Drakopoulos
Hoda Eydgahi
Rose Faghih
Mihalis Markakis
Mitra Osqui
James Saunderson
Yuan Shen
Mark Tobenkin
Prof. Daron Acemoglu
Amir Ali Ahmadi
Ross Anderson
Prof. Dimitris Bertsimas
Ozan Candogan
Güner Celik
Venkat Chandrasekaran
Prof. Munther Dahleh
Dr. John Dowdle
Kimon Drakopoulos
Hoda Eydgahi
Alborz Geramifard
Peter Jones
Prof. Ilya Kolmanovsky
Prof. Andrew Lo
Prof. Costis Maglaras
Mihalis Markakis
Mesrob Ohannessian
Mitra Osqui
Jagdish Ramakrishnan
James Saunderson
Parikshit Shah
Noah Stein
Jacob Steinhardt
Prof. Vahid Tarokh
Mark Tobenkin
Stavros Athans Valavanis
Prof. Martin Wainwright
Daniel Weller
Kuang Xu
Yunjian Xu
Tauhid Zaman
Yuan Zhong
By Genevieve Wanucha
When Manfred Morari looks back over the 35 years of his career, he will tell you that his graduate school years, spent under his advisor George Stephanopoulos at the University of Minnesota (now at MIT), still stand out. The most important thing he learned, he says, was how to formulate problems. “It was a continuous dialogue with my advisor to arrive at those problems worth solving.” Since then, he has solved many. The proof is in his expansive collection of published papers, which have placed him among the most highly cited researchers.
Manfred specializes in hybrid systems analysis and control. The techniques emerging from his research have applications to diverse industrial challenges, such as those that arise in automotive systems, biomedical engineering, efficient energy use, and active noise control. These days, though, Manfred works not only as a researcher but also as a leading administrator. He heads the Department of Information Technology and Electrical Engineering at the Swiss Federal Institute of Technology Zurich, known by its German acronym, ETH.
Manfred moved to ETH in 1994, coming from his professorship at the California Institute of Technology. It was ultimately a transition from chemical engineering to electrical engineering. He and his group found themselves tasked with taking methods developed for slow, complex chemical systems, and extending them to fast electrical processes. A new, fast variety of Model Predictive Control, a control algorithm for dynamic systems, emerged from this challenge. After a decade of study and computation, he says, “We have now learned how to apply those techniques to processes that are not reacting in tens of minutes, but in microseconds.”
Manfred sees his approach at ETH as a balance between three pillars: theory, computation, and application – a similar approach to the one taken by LIDS researchers. However, LIDS and ETH differ in the level of focus on applications. At ETH, active collaborations with companies such as Ford and Siemens allow new algorithms to be tested on real developing technologies, some of which have already reached the market. For example, Daimler and Ford use an electronic stability program, or traction control system, features of which were contributed by Manfred’s group.
Currently, Manfred is involved in the algorithmic design of efficient energy technologies for buildings. The goal is to reduce overall energy consumption and peak electricity load while improving occupant comfort. They’ve designed a system that monitors electricity prices and determines when it is most cost-effective to heat. But the weather also affects heating and cooling needs. So ETH’s design goes a step further: the control system takes weather forecasts and shifting electricity prices into account to calculate, for example, whether a building should use more electricity in the morning, afternoon, or night.
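The flavor of that calculation can be captured in a few lines. The sketch below is only an illustration with made-up numbers and a deliberately crude one-parameter thermal model, not ETH’s controller: given forecasts of electricity prices and outdoor temperature over the next 24 hours, it chooses a heating schedule that minimizes cost while keeping the indoor temperature inside a comfort band.

```python
import numpy as np
from scipy.optimize import linprog

H = 24                                                      # planning horizon: 24 hours
price = np.array([0.10] * 7 + [0.30] * 12 + [0.10] * 5)     # electricity price forecast (made up)
t_out = np.array([2.0] * 7 + [8.0] * 12 + [4.0] * 5)        # outdoor temperature forecast, deg C
a, b = 0.9, 0.5                # toy thermal model: T[k+1] = a*T[k] + (1-a)*t_out[k] + b*u[k]
T0, Tmin, Tmax, u_max = 20.0, 19.0, 23.0, 5.0

# Indoor temperature is an affine function of the heating inputs u[0..H-1]:
#   T[k+1] = base[k] + sum_j M[k, j] * u[j]
M = np.zeros((H, H))
base = np.zeros(H)
T_free = T0
for k in range(H):
    T_free = a * T_free + (1 - a) * t_out[k]   # temperature if no heating were used
    base[k] = T_free
    for j in range(k + 1):
        M[k, j] = b * a ** (k - j)

# Linear program: minimize total electricity cost subject to comfort bounds every hour.
A_ub = np.vstack([M, -M])
b_ub = np.concatenate([Tmax - base, base - Tmin])
res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(0, u_max)] * H, method="highs")
print(np.round(res.x, 2))   # hourly heating plan: most of the energy is bought in the cheap hours
```

With these made-up numbers the plan pre-heats the building before the price jumps; a real controller would re-solve such an optimization repeatedly as new forecasts arrive, which is the essence of the Model Predictive Control mentioned above.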
Manfred says that the application of his computations to buildings would simply not be possible without collaboration with industry. “It’s a fantastic opportunity that very few people in academia have to apply their research results on full scale to see what the benefits and difficulties will be,” he says. They suspect, however, that even though it’s easy to install new technology in new homes, it may take decades to implement it widely in old buildings.
Manfred is most excited about another project involving energy production. Many research groups worldwide are working on machinery that harvests wind energy through the controlled movement of high-altitude kites. By many estimates, kite systems have the potential to meet global energy demand at a cost lower than that of fossil fuels.
Outside of these projects, Manfred spends a great deal of time recruiting new faculty members to replace a large number of retiring professors. He enjoys seeing young people enter the field full of fresh ideas, plus the job keeps him in contact with MIT.
Pablo Parrilo has recently left ETH Zurich to join LIDS, while Vanessa Wood of MIT’s Electrical Engineering and Computer Science Department joined ETH’s faculty in January. Manfred says that he’ll be stopping over at LIDS again soon, and not just because he is a member of LIDS’ Advisory Committee. “If we can recruit another assistant professor, then I’ll be delighted.”
Overseeing this new generation has given Manfred a clear sense of how his field has evolved over the last 40 years. The changes are unmistakable: The world has become much more competitive and it is increasingly challenging to excel in the academic environment, Manfred observes. Part of the reason is the rapid growth of new areas like bioengineering and bioelectronics, a shift that’s happening so fast that it’s hard to keep up.
Researchers in Manfred’s departments, however, have stepped up to this biomedical challenge. One group is developing special electrodes that measure electrical signals and reactions in nerve cells. New imaging methods are in the works, too. Other developments unheard of ten years ago include a “microsyringe” that allows researchers to inject DNA strands into single cells without using a genetic vector, as well as a process that samples the interior of a cell without destroying it. “This range of areas to which we can contribute is not limited to traditional electrical systems,”
says Manfred. His work is about applying innovative theoretical ideas to broad contexts, medical or environmental. Wind harvesting technology and microsyringes may be worlds apart, but they are united by the control systems operating within them.
“When we observe the complex world around us, it is inevitable that we leave a good part of the possibilities unseen,” said Mesrob Ohannessian, a final-year Ph.D. student in LIDS. He introduces his research with a basic example: birdwatchers counting birds in a park. “How would they estimate their chances of seeing a new species of bird?” he asked, adding, “We, humans, intuitively know that the possibility is out there, but it is not immediately clear how to quantify it.”
Over the past five years, Mesrob has developed his research at LIDS around such questions of quantifiably predicting events that happen so rarely they’re nearly impossible to foresee. His own encounter with LIDS was arguably such an event. He first learned about the lab as a matter of proximity. He recalls taking a class in stochastic processes, detection and estimation – a course that continues to guide how he thinks about statistics and probability. “When the lecturer learned of my fascination about the topic, he told me he worked in LIDS, and recommended that I attend the lab seminars. I did so eagerly.”
At the time, Mesrob had graduated from the American University of Beirut and was a master’s student at MIT, designing and developing educational software that could be used to teach electromagnetism to freshmen. He was working in building 9, just a few short steps from building 35, the home of LIDS at the time. “So it was especially convenient to get to those seminars, and to also interact with people in the lab.” Over the course of these visits,
he was introduced to LIDS professors Sanjoy Mitter and Munther Dahleh, from whom he gained a deeper understanding of the lab’s research. Mesrob liked what he learned, and formally joined the lab in 2006, under their supervision.
He soon became interested in rare events, especially the question of when one can actually say something meaningful about them. The birdwatcher example may seem esoteric, but it has some very real consequences. Indeed, determining the probability of rare events has a storied mathematical background. With his coworker Jack Good, mathematical great Alan Turing developed an algorithm for this unseen species problem during World War II. This algorithm, which uses knowledge of almost-rare events to help predict truly rare ones, helped crack German Enigma ciphers. On a different front, the North Sea flood of 1953 spurred a number of intense government initiatives in Europe. “They wanted to understand another kind of rare event: when natural phenomena, such as wind speeds or wave heights, exceed historical highs,” Mesrob said.
His own motivation, however, was of a different sort: to help machines understand natural language. “When computers perform automatic speech recognition, it is crucial for them to estimate the number of words, or succession of words, that they have not seen or have seen very rarely during their training. If they do not take such ignorance into account, they will be overconfident and misinterpret,” he said. So he started his research by examining whether the
famous Good-Turing estimator was any better than simply answering “never” to the question of whether one would see a new outcome, that is a new bird species in the park or a new word in the text.
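The core of the Good-Turing idea fits in a few lines. Here is a minimal sketch with a made-up birdwatching log (an illustration of the basic estimator only, not Mesrob’s refined analysis): the probability that the next sighting is a never-before-seen species is estimated by the fraction of the sample made up of species seen exactly once.

```python
from collections import Counter

def good_turing_missing_mass(sample):
    """Good-Turing estimate of the chance that the next observation is a brand-new
    species: (number of species seen exactly once) / (total number of observations)."""
    counts = Counter(sample)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(sample)

# Birdwatching log: a few common species, plus five species spotted only once.
sightings = (["sparrow"] * 40 + ["robin"] * 25 + ["cardinal"] * 10
             + ["heron", "osprey", "warbler", "kinglet", "loon"])
print(good_turing_missing_mass(sightings))   # 5 singletons / 80 sightings = 0.0625
# The naive answer -- "we have never seen a new species, so the chance is zero" --
# corresponds to always estimating 0 instead.
```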
Mesrob says he soon made a critical observation. The Good-Turing estimator’s advantage depends on how outcomes look when they are arranged from most frequent to least frequent. If outcomes become less frequent slowly, then the Good-Turing estimator is preferable. But if they become less frequent quickly, then it is about as good to guess that an unobserved outcome will never occur. This property of “becoming less frequent slowly” is often called a “heavy tail,” because of the appearance of the distribution graph.
By making this distinction, Mesrob believes he has isolated a common principle underlying many other rare event problems. In fact, in the problem of exceeding historical highs, researchers had also identified heavy tails as a critical assumption to make inference possible. But no one had highlighted their importance in the problem of estimating the probability of new outcomes.
Therefore, Mesrob has been working on problems involving power laws, the special mathematical relationships that formalize the notion of heavy tails. Making such structural assumptions is necessary to infer things about the world, he explains. “This manifests itself in our everyday life, just as much as it does in almost every field of science and engineering
where measurements are made and estimates are needed.” With this new perspective, Mesrob is assembling a framework where we can not only analyze algorithms such as the Good-Turing estimator, but also go beyond them, by giving stronger and more accurate predictions.
Mesrob believes his work has many potential applications, like evaluating faults in power grids, estimating the probability that a physical system evolves beyond its desirable range of operation, or modeling rare changes in financial markets. However, one of the first places where he wants to test the usefulness of this new framework is in natural language modeling, which gave him his original impetus, and where such techniques are already heavily used. Researchers often use algorithms to patch over the gaps in rare data – called smoothing – in a relatively ad hoc way. “In the framework that I am developing, there is potential to perform such smoothing implicitly, in a principled and predictable way, and by doing so to boost speech recognition and machine translation performance,” he said. “Perhaps I am helping our artificial brethren get better at gauging risks and recognizing novelty, tasks that we do relatively effortlessly as humans.”
Outside of his research, Mesrob enjoys spending time in the outdoors – though bird watching is not one of his hobbies, he camps and hikes. He and his girlfriend have taken up vegetable gardening, too, which can be another problem to solve. “Growing from seed to healthy plants is challenging, rewarding, and
fun -- also fickle and frustrating sometimes,” he said, “but nothing beats home grown tomatoes!”
Mesrob has also taken advantage of the open environment LIDS fosters. This has been an important way the lab has helped promote and complement diverse parts of his education. He gives the example of a student-run seminar course in his area of interest, which he co-organized in Fall 2007. “We used this opportunity to develop mini-curricula that helped get us up to speed with recent research topics that were not yet organized into traditional courses,” said Mesrob. He also talks about being part of a summer study group with other LIDS students, where they taught each other topology. “These and other experiences, be they organized or not, are not unique to me, and LIDS accelerates its students’ education very effectively in this way.”
The lab has offered Mesrob support and inspiration, as well. “LIDS provides an environment where engineering and mathematics meet vigorously. The foundations of the lab, the questions the people ask, remain very deeply entrenched in engineering and practical concerns,” he said. He describes LIDS as a place where the culture insists that these questions be asked, and solved, with sound mathematics. “This is a skill that I think LIDS helps hone in everyone who passes through.” It isn’t difficult to predict, unlike the events he researches, that Mesrob will use this skill with great frequency, as he journeys onward to his future career.
What’s your official job title and function at LIDS?
I am the Assistant Director for Administration. This means that I have primary responsibility for all financial, personnel, grants management, space planning and other administrative matters. To be more specific, some of my duties include preparation and submission of government, industrial, foundation and other organization proposals; I also advise principal investigators on all aspects of sponsored research administration, which includes preparing financial analyses and forecasting for individual research groups. Additionally, I manage personnel functions
for all staff categories, including the creation of job descriptions, hiring, salary reviews, the visa application process, etc.; and I manage space, security, and facilities-related issues. So as you can see, my responsibilities are wide-ranging.
What were you doing before you came here?
Before coming to LIDS, I had the opportunity to work at MIT’s biggest academic department as well as one of its biggest labs. I was working in Electrical Engineering & Computer Science (EECS) and, prior to that, in a research lab, the Computer Science and Artificial Intelligence Laboratory (CSAIL).
How long have you been at LIDS?
I started in August of 2008, so it will be three years this summer. However, I’ve actually been at MIT since December 1999 when I relocated from New York City to Boston.
Do you ever get worn out? What do you like about working at LIDS?
It’s actually quite exciting because there is a mixture of everything. In fact it is the unexpected that adds to the excitement. I really enjoy everything here because LIDS has such warm ambience. But first and foremost I would say the most interesting and enjoyable aspect is the people. I have the best director who creates an environment of trust and excellence
that enables me to grow, an excellent staff that makes my job so much easier, and a group of extraordinary faculty and students to work with. What more can I ask for? Not too many people actually enjoy doing their jobs. So perhaps I am one of the fortunate ones -- I am doing what I like and it gives me satisfaction.
Do you have any interesting stories from your time at LIDS?
About a year after I started working in LIDS, I came into work one morning, and the director came into my office. With a very serious tone he told me that I was no longer the administrative officer of LIDS. I thought to myself, “Did I just get fired?” I thought that I must have done something wrong, as I was still fairly new on my job. Well, unexpectedly, he told me that I had just been promoted to Assistant Director for Administration. I was really surprised by the news.
What do you do when you’re not working?
I am involved in public service as a volunteer in Bible educational work. As one of Jehovah’s Witnesses, I devote several hours per week to assisting and teaching both older and younger individuals. It is very fulfilling to be able to help people. Yet, there is time for travel, something I also enjoy – and I’ve been able to visit several places: Asia, the Middle East, Europe, South America, Central America and Africa. Even locally, I like to just walk around the different areas and simply explore.
What’s your favorite part of Boston?
Without a doubt, it would be Back Bay. The area has a lot of history and I love the Victorian brownstone buildings… and of course Fenway Park is not too far, home of the Red Sox. Need I say more?
What do you like about MIT?
I like the architecture, the resources, the culture, and most importantly, the diverse set of people. Everyone here at MIT is driven, dedicated to pushing boundaries, constantly improving, and yet still laid back and enjoying life.
Is there something you’ve always wanted to do?
I was going to say travel around the world. However, I realize that it is all too easy to get caught up in the routine and bustle of life without stopping to think about what we really want out of life, what matters most to us. I was married just a few short weeks ago and thus began a new phase in my, or should I say our, lives. It is my goal to set priorities and strive to maintain a good balance between my spiritual life, my family life, and my secular life. As simple as this may sound, if this can be done, then one day my husband and I will travel around the world together while also doing many other things that we both enjoy.
Weekly colloquia & seminars are a highlight of the LIDS experience. Each talk, which features a visiting or internal invited speaker, provides the LIDS community an unparalleled opportunity to meet with and learn from scholars at the forefront of their fields.
The Stochastic Systems Group seminar schedule can be found at: http://ssg.mit.edu/cal/cal.shtml
Listed in order of appearance.
Ramesh Johari, Stanford, Management Science and Engineering
Vijay Subramanian, National University of Ireland Maynooth (NUIM), Hamilton Institute
Malcolm Smith, University of Cambridge (UK), Engineering
Balaji Prabhakar, Stanford, Electrical Engineering (Information Systems Laboratory)
Tom Luo, University of Minnesota-Twin Cities, Electrical and Computer Engineering
Jorge Cortes, University of California-San Diego, Mechanical and Aerospace Engineering
Anna Scaglione, University of California-Davis, Electrical and Computer Engineering
Jeremy Gunawardena, Harvard, Systems Biology
Vincent Blondel, University of Louvain (Belgium), Electronics and Applied Mathematics
David B. Shmoys, Cornell, Operations Research and Information Engineering
Eva Tardos, Cornell, Computer Science
Sundeep Rangan, Polytechnic Institute of NYU, Electrical and Computer Engineering
Martin Wainwright, University of California-Berkeley, Statistics and Electrical Engineering and Computer Sciences
Alessia Marigo, Rutgers-Camden, Mathematics
Benedetto Piccoli, Rutgers-Camden, Mathematics
Richard Braatz, MIT, Chemical Engineering
Joel Tropp, Caltech, Applied and Computational Mathematics
Urs Niesen, Bell Labs, Mathematics of Networks and Communications Research
Alexandre Megretski, MIT, Electrical Engineering and Computer Science
Congratulations to our members for the following achievements!
Former LIDS post-doc Animashree Anandkumar received the SIGMETRICS 2011 Best Paper Award for her paper “Topology Discovery of Sparse Random Graphs With Few Participants.”
The 2010 George Axelby Outstanding Paper Award was given to Prof. Munther Dahleh and Nuno Martins for their paper “Feedback Control in the Presence of Noisy Channels: ‘Bode-Like’ Fundamental Limitations of Performance.”
Administrative Assistant Jennifer Donovan was the recipient of the 2011 School of Engineering Infinite Mile Award for Excellence.
Kimon Drakopoulos received the second-place Ernst Guillemin Award for Best Electrical Engineering SM thesis. Kimon’s thesis was co-supervised by Prof. Asu Ozdaglar and Prof. John Tsitsiklis.
Prof. Emilio Frazzoli and Sertac Karaman received Willow Garage’s Best Open Source Code Award for RRT*, a software library implementing the algorithms introduced and analyzed in their paper “Incremental Sampling-based Algorithms for Optimal Motion Planning.”
Prof. Jon How and LIDS alum Prof. Han-Lim Choi won the award for the Best Application Paper published in Automatica over the last three years for their paper, “Continuous trajectory planning of mobile sensors for informative forecasting.”
Srikanth Jagabathula, jointly supervised by Prof. Devavrat Shah and Prof. Vivek Farias, received first prize in the MSOM student paper competition during the 2010 INFORMS annual meeting.
Prof. Patrick Jaillet was named the new co-holder of the Dugald C. Jackson Chair.
Sertac Karaman was awarded a 2011 NVIDIA Graduate Fellowship. Sertac also received the 2011 American Institute of Aeronautics and Astronautics Orville and Wilbur Wright Graduate Award. He is supervised by Prof. Emilio Frazzoli.
Prof. Asuman Ozdaglar was selected as a 2011 Kavli Fellow of the National Academy of Sciences.
Prof. Pablo Parrilo was invited to speak at the International Congress of Mathematicians (ICM) for the section on Control Theory and Optimization.
Prof. Devavrat Shah was announced as the recipient of the prestigious Erlang Prize at the 2010 INFORMS annual meeting.
Yuan Shen received the Marconi Society Young Scholar Award for his work on the fundamental limits of wideband cooperative localization. Yuan is supervised by Prof. Moe Win.
Jinwoo Shin received a 2010-2011 Sprowls Award. The Sprowls Awards are given every year for the best Ph.D. theses in Computer Science at MIT.
Prof. Eduardo Sontag was selected as the recipient of the 2011 IEEE Control Systems Award for fundamental contributions to nonlinear systems theory and nonlinear feedback control.
Prof. David Staelin was awarded the John Howard Dellinger Medal “for seminal contributions to the passive microwave remote sensing of planetary atmospheres and the development of remote sensing of the atmosphere and environment of the Earth from space” by the International Union of Radio Science (Union Radio-Scientifique Internationale).
Vincent Tan received the 2010-2011 EECS Jin-Au Kong Award for Best Electrical Engineering Ph.D. thesis. Vincent was supervised by Prof. Alan Willsky.
Lav Varshney received Honorable Mention for the Jin-Au Kong Award for Best Electrical Engineering Ph.D. thesis. Lav’s thesis was jointly supervised by Prof. Vivek Goyal and Prof. Sanjoy Mitter.
Prof. Alan Willsky, LIDS alum Dmitry Malioutov, and research scientist Mujdat Cetin, have been awarded the 2010 IEEE Signal Processing Society Best Paper Award.
Kuang Xu, supervised by Prof. John Tsitsiklis, was awarded the first-place Ernst Guillemin Award for Best Electrical Engineering SM thesis.
Questions or comments, please email us:lidsmag@mit.edu
Massachusetts Institute of Technology Laboratory for Information and Decision Systems
77 Massachusetts Avenue, Room 32-D608 Cambridge, Massachusetts 02139 http://lids.mit.edu/