Futures – Microsoft’s European Innovation Magazine – Issue n°1 | December 2007
“When the future historians look back on these years, they will examine us on two things: our innovation and the purposes to which we put it.”
Editor in Chief
Thaima Samman, Senior Director Corporate Affairs Europe, Microsoft
THE ‘RENAISSANCE’ OF COMPUTER SCIENCE What if Newton had published his theory of gravitation – and nobody noticed? Richard Hudson of Science Business discusses the frustrating position of the computer scientist today.
Editorial Board
Jan Muehlfeit, Chairman Europe, Microsoft
Dirk Delmartino, EU Communications Director, Microsoft
Andre Hagehülsmann, Innovation Coordinator, Microsoft
Rachel Thompson, Regional Director Europe, Middle East & Africa, APCO Worldwide
Joanna Meade, EU Corporate Communications, GPlus Europe
LIVESTATION – THE ‘SPARK OF LIVE’ TV ON YOUR COMPUTER European start-up Skinkers has pioneered a comprehensive solution for “push” communications which includes multiple communication channels and multiple devices – bringing live audio and video to the desktop.
External Contributors
Richard L. Hudson, Science Business
Nuala Moran, Science Business
Kate Lothian, Skinkers
Julian Hale, Freelance Journalist
Production
Quadrant Communications, B-9000 Gent
LCD THAT CAN SEE What if the computer screen you stare into every day was able to look back at you? A research team at Microsoft Research Cambridge is investigating that very idea, as a way to enable new modes of human-computer interaction.
Layout and Design sittingonacornflake.be, B-9040 Gent
Illustrations Leon Mussche
Printing Roels Printing, B-2500 Lier
Advertising Cisco, Econet, Intel, Microsoft, Randstad
Contact details Microsoft Corporate Affairs Europe, Troonstraat / Rue du Trône, 4, B-1000 Brussels, www.microsoft.com/emea, firstname.lastname@example.org
Circulation number / Frequency 2,000 copies / Quarterly publication
Disclaimer The content of this magazine, including news, quotes, data and other information, is provided by Microsoft and its third parties for your personal information only. Views imparted by third parties do not necessarily reflect the views of Microsoft Corporation.
Copyright Microsoft
Printed on recycled paper.
Further SCIENCE OF THINKING Training the next generation
A push to reform the way Europe does research
The researchers’ directive
MEDIA AND CONTENT MANAGEMENT The next phase of the IP TV revolution
Microsoft’s connected TV services platform
Overcoming information overload and ‘the crisis of choice’
Austrian start-up secures funding for video telephony project
A 2020 vision for global research libraries
HUMAN COMPUTER INTERACTION The Tablet PC and mathematics
Pushing the boundaries in graphics hardware
“What if…?” – breaking new ground in enterprise resource planning visualisation
Sophisticated graphic website and advert-tracking software set for 2008 launch
Virtualisation technology transforms old mining site in Portugal into a leading centre for scientific education and research
Innovation for social and economic empowerment
Preface

When the future historians look back on these years, they will examine us on two things: our innovation and the purposes to which we put it. In doing so, one of the themes on which they will remark is how quickly and how widely information and communications technology (ICT) became embedded in virtually all forms of innovative endeavour – whether technological, social, environmental, or in distribution and business models.
Today, ICT doesn’t simply distribute innovation: it enables the very process of innovation, across the spectrum of human activity – for scientists, engineers, doctors, teachers, librarians and NGOs, as well as business leaders and employees – and in every part of the world.

Microsoft has always invested heavily in innovation, and in particular in providing a predictable and very widely used platform for innovation by others. In Europe today, our local software ecosystem – independent software vendors, developers, resellers and other partners – comprises 37% of total ICT-industry employment and accounts for 57% of total ICT-industry tax revenues. Microsoft is also investing deeply and broadly in R&D in Europe, where there is a highly skilled and motivated talent pool and excellent industry and academic partners to collaborate with. Our R&D-related facilities in Europe employ more than 1,000 researchers and engineers, with an annual investment of €300 million, and cover the full spectrum of software development, from the earliest blue-sky concept to product implementation.
For Europe, alongside the key challenge of building an attractive policy environment for innovation, there is a second challenge: to communicate and celebrate Europe’s innovation achievements and its innovators; to explain how innovation happens and why it is exciting – and to inspire more. This publication is a contribution to that European effort. It covers innovation that is accelerating new kinds of science and computing; collaborative applied research partnerships to advance the European research agenda; product development in Europe, for Europe and the world; and support for European start-ups, SMEs and community NGOs in gaining access to innovations. Europe’s innovation story is indeed one that Microsoft is proud to be part of, now and in the future!

Jan Muehlfeit
Chairman Europe, Microsoft
What if Newton had published his theory of gravitation – and nobody noticed? That’s the frustrating position of the computer scientist today.
SCIENCE OF THINKING
THE ‘RENAISSANCE’ OF COMPUTER SCIENCE
AND HOW IT WILL CHANGE OUR WORLD By Nuala Moran, Science|Business
The rapid pace of industrialisation in India, China and elsewhere is pushing up the price of every commodity, from oil and copper, to wheat and water. But one resource – computer processing power – is abundant and falling in price. That has profound implications for every other field: released from the need to use a scarce resource sparingly, computer scientists are applying this power in ways that are transforming the wider world of science, commerce, business and policy. But so far, outside the world of computer science, awareness is low about where all of this is heading.
That gap, between the future potential of computer science and public awareness of it, was the topic of a high-level gathering of policy makers, academics and industry executives in Brussels on 19 September 2007. ‘The science of thinking: Europe’s next challenge,’ a symposium organised by R&D news service Science|Business and supported by Microsoft, explored the impact of new advances in fundamental computer science – and the appropriate policy responses. “The challenge is that the biggest change in computing itself is coming in the next four to five years, and to date there has been little preparation to deal with the majority of that change,” said Craig Mundie, chief research and strategy officer of Microsoft. Among those changes: computer processing speed won’t continue to increase at the same exponential rates seen in the past, so to drive significant increases in performance there will be a shift to increased numbers of processors – and with it, the need to address challenges regarding parallelism and concurrency.

“Computer processing power is abundant, falling in price – and being used to transform the wider world of science, commerce, business and policy.”
“Computer science is in a period of Renaissance,” said another symposium participant, Muffy Calder, professor and head of computer science at Glasgow University. In computer science, “we are being reborn – at the same time as other sciences are being reborn by computer science.”
Symposium “The science of thinking: Europe’s next challenge”, 19 September 2007
Worldwide, there needs to be more funding for basic research in computing and for interdisciplinary research between computing and other fields.
The prime cause of all this is well known: the smaller, faster, cheaper cycle of the global computer industry. Vast amounts of computer power make it possible to reflect and model the world around us in all its minute detail, from the exquisite machinations of a single cell to the baroque feedback loops that are driving climate change.

But the impact isn’t just about applying more and more computer firepower to manipulate and query bigger and bigger data sets. Nor is it merely to do with collecting, maintaining and sharing information. It is about a different way of handling complex questions, in which the concepts and tools of computer science provide the framework for problem-solving. The term ‘computational thinking’ has been coined to describe this new approach. Jeannette M. Wing, a Carnegie Mellon University professor who is currently assistant director of the US National Science Foundation’s computer programmes, has a vision of it becoming a fundamental skill, ranking alongside reading, writing and arithmetic. “Imagine every child thinking like a computer scientist,” she said.

In the case of systems biology, it means the ability to pull together the multiple abstractions that molecular biology has accumulated – the individual chemical pathways, protein structures, and receptors – and build holistic models of entire biological processes ‘in silico.’ Similarly, in astronomy, the sky becomes a vast database of star observations for modelling. In epidemiology, doctors can simulate the spread of disease and conduct experiments not possible in the real world. And the entire science of climate change simply wouldn’t exist without computer modelling and the ability to handle multiple abstractions. “You can pull together many different pictures, rather than having to focus on one,” said Malik Ghallab, CEO for science and technology at the French national computer lab, INRIA.
One of Wing’s favourite examples is a proposal from geophysicists to model processes from the earth’s core to its surface, and from the earth’s surface to the sun. “And they want all the models to interact,” said Wing. Boeing’s 777 was the first aircraft to be designed and tested without the use of a wind tunnel. “It relied completely on computational simulation and methods – which goes to show how much in engineering is predicted using computational methods,” said Wing.

Worldwide, the symposium participants agreed, there needs to be more funding for basic research in computing, and for interdisciplinary research between computing and other fields. An example of the latter is a new US$52 million programme of research at the National Science Foundation, called cyber-enabled discovery and innovation. But that’s just one of several US funding programmes for computer science. By contrast, the European Union spends roughly €150 million a year on all forms of fundamental computer science; to keep up with just the US civilian programmes would require at least a doubling of resources.

But money isn’t the only issue, notes Corrado Priami, CEO of a joint venture in systems biology between Microsoft Research and the University of Trento, Italy. “I would suggest more importance be given to better spending of the money, by selecting the areas in which Europe is the most competitive and which are starting up, so we can be in the lead.” Furthermore, he says, a new system is needed for reviewing grants. “Referees are getting in the way. If you try to do something on the borders of disciplines, you just get handed over from one to the other.”

The implications for education are broader still, as it means a change in the way all citizens are trained, not just scientists and engineers. At present, only 20% to 25% of undergraduates complete their computer science courses successfully, noted Jan Bierlant, dean of sciences at Belgium’s Katholieke Universiteit Leuven. Said Wing: “Introductory courses are not inspiring – especially to non-computer scientists – because they tend to be introductions to programming. The problem is we don’t know how to teach computer science to kids: there needs to be a research programme to investigate the pedagogy of computer science.”

Peter Buneman, professor of database systems at Edinburgh University, agreed. “We can’t just go into schools and say how important computational thinking is. We need to inspire. We need to find the computational thinking equivalents of the chemistry set and get those into schools.”

At present, the emphasis is on teaching computing as a tool rather than teaching the concepts that underlie it, said Martin Rem, director of ICTRegie, the Dutch government’s ICT research agency. “We as computer scientists have a responsibility to come up with good, teachable concepts for young children.”

Jeannette M. Wing, Professor at Carnegie Mellon University and Assistant Director of the US National Science Foundation’s computer programmes

Nuala Moran is senior editor of Science|Business, an R&D news and events service at www.sciencebusiness.net.

A LOOK IN THE CRYSTAL BALL: CRAIG MUNDIE ON THE FUTURE OF COMPUTING By Nuala Moran, Science|Business
The free lunch is over
For the past 20 years the computer industry has grown on the back of ever-increasing clock rates. In line with Moore’s law, coined by Intel co-founder Gordon Moore, advances in chip design have allowed performance to double every 18 months or so. “But the clock rate can’t go up any more,” said Craig Mundie, chief research and strategy officer at Microsoft, at a 19 September 2007 conference in Brussels on computer science. “We find ourselves increasingly unable to remove the heat generated by denser and denser microprocessors. Yesterday, Gordon [Moore] predicted the demise of his law in 2020.”

Parallelism has long been proposed as a way out of this bind, but few in the industry were prepared to invest in the field whilst processors were taking regular, massive leaps in capacity. “We are now at the point where if we want computing to support all the things it is capable of, we need to deal with the issue of parallelism,” said Mundie.

Now, said Mundie, it is up to the software community to rise to the challenge. In the immediate future this will necessitate grooming a cadre of programmers who are at ease with these architectures. It also calls for the development of newer, higher-level languages that can handle the complexity involved in parallel programming.

Craig Mundie, Chief Research and Strategy Officer, Microsoft
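As an illustrative aside (not from the article), the compounding implied by an 18-month doubling period can be sketched in a few lines of Python; the function name and the loop values are hypothetical, chosen only to show the arithmetic:

```python
# Illustrative sketch: cumulative speed-up if performance doubles
# every 18 months, as described for Moore's law in the article.

def speedup(years: float, doubling_months: float = 18.0) -> float:
    """Return the cumulative performance multiple after `years`."""
    doublings = (years * 12.0) / doubling_months
    return 2.0 ** doublings

for y in (5, 10, 20):
    print(f"after {y:2d} years: ~{speedup(y):,.0f}x")
```

Twenty years of such doubling compounds to a speed-up of roughly ten-thousand-fold, which is why the end of free clock-rate gains was seen as a rupture rather than a gradual slowdown.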
The move to a source of processing that is not only more powerful, but also far more flexible, has profound repercussions for all fields of science and commerce. For a start, believes Mundie, it will transform the economics of computing. It will be possible to build parallel arrays of systems to handle what today would be an impossible data-mining exercise. “This will be at the heart of breakthroughs in science and business. It will, in fact, be impossible to make breakthroughs without computing,” said Mundie.
Medicine will undergo its largest transition in decades, as it becomes a far more data-driven business. There will be a focus on prevention, not alleviation.

There are profound implications for computer science itself. Older fields of engineering have always evolved by what’s called ‘formal composition.’ Expertise is built up layer by layer, making it possible to attack larger and larger problems. For example, in civil engineering, design expertise is supplemented by knowledge of different or new materials, making it possible to build a longer bridge or a higher skyscraper.

“That’s not the case in computing,” said Mundie. “We haven’t mastered programming in the same way as formal composition.” What’s needed is a big advance in the formal methods of computer science. “This would move software from what is too much of an art form to a real engineering discipline,” said Mundie. He noted that one of the leading European centres researching formal methods is the French state computer lab, INRIA, with which Microsoft Research is collaborating.

Alas, funding for this kind of research is rare, Mundie noted. “Most governments are pulling back from basic research, and computer science was never regarded as basic. So there is a double whammy for basic research in computer science.” As the world’s largest spender on software research, Microsoft started to move in the direction of parallelism six years ago. Researchers at the company’s lab in Cambridge, UK are devising new languages and architectures, and creating new strategies for writing programmes. The first fruits should be on the market by 2012. “But it will take two product cycles to move this ecosystem forward,” said Mundie.
SYSTEMS BIOLOGY UNDER THE MICROSCOPE
By Nuala Moran, Science|Business
You can’t capture the thrill and excitement of a football match by describing the individual players. Similarly, it is not possible to understand biological processes and pathways by looking just at the component parts.
“Biology is a science of interactions and complexity. Looking at its individual components doesn’t tell you about the system as a whole,” says Corrado Priami, President and CEO of the Microsoft Research–University of Trento Centre for Computational and Systems Biology.

In the past 40 years the reductionist techniques of molecular biology have provided deep insights into thousands of individual actors – membranes, hormones, enzymes, genes, and their associated kinetics – that are involved in the functioning of organisms. But biological systems do not respond in a particular way because of one particular component or another. “They behave in a given way due to the interaction of components,” says Priami.

But understanding this requires a fundamental shift towards viewing biology as an information science. Seen from this perspective, computer science and systems biology share the same conceptual challenges. “They both need to handle complex systems that are inherently highly parallel,” says Priami. The prospect is that one discipline will feed off the other: understanding the parallelism of biology will be used to build better tools in computer science. The ultimate vision is to use living systems as computers: in effect, organisms are processing systems with all the essential properties of a highly efficient computer.

The focus of Priami’s own research is this convergence of life sciences and computer science. The aim is to develop new computational tools to enhance understanding of the evolutionary processes that are responsible for the large-scale properties and dynamics of biological systems. Concurrently, he is building a better understanding of how biological systems process information. This reverse engineering is underpinning the development of new, more powerful, more reliable programming languages that will be used to develop the software of the future.

Prof. Corrado Priami, President and CEO, Microsoft Research/University of Trento Center for Computational and Systems Biology

“We are trying to exploit computer science as an enabling technology to enhance life science at large, and capitalise on the new knowledge to enhance computer science,” says Priami. This, then, is the vision. At a practical level systems biology involves many different disciplines. “And once you have built a multidisciplinary team you need a common language – different sciences use different words to talk about the same thing,” said Priami. A further implication is that the basic model of research has to change. “It should be targeted and interdisciplinary – and you should make it iterative, not linear. Research also needs to be communicative – we need to disseminate the results in the broader community to help enhance the visibility of science and to facilitate decisions to invest money in research.”
EUROPE’S PLACE ON THE IT MAP Q&A with Dr. Andrew Herbert, Managing Director, Microsoft Research Cambridge, UK By Richard L. Hudson, Science|Business
In the global village, goes the standard economic theory, every region should have its own set of specialised skills to trade with the rest of the world – an inventory of talents and resources at which it excels and earns its keep. So what is Europe’s niche?

When it comes to computer science, Europe’s strengths are in its culture and traditions, believes Andrew Herbert, managing director of Microsoft Research’s European lab, in Cambridge, UK.

As head of one of Microsoft’s five worldwide research labs, the British computer scientist oversees a research staff of 100, and more than 250 inter-disciplinary research collaborations across Europe – for instance, in systems biology at the University of Trento, and in software security and information interaction with the French national computer lab, INRIA. As such, he has had to make his own mental map of which skills are on offer in Europe. Herewith, a glimpse of that map.

Q. What are Europe’s strengths in computer science?

A. Europe has a very strong tradition in some of the more theoretical aspects of computer science, and that’s particularly important when you’re thinking about the reliability of software. As we depend on software more and more for things in everyday life – transport, mobile phone systems, medical systems – I would like to be confident that the software works. Now, because computer science in Europe has never had the same level of funding as in the US, people tended to go more for theory. Also, mathematics has a stronger tradition in Europe than in the US. So when we are building these large, complex computer systems, we have the ability in Europe. Software reliability is also something that comes with the fact that a lot of the European IT industry has been centred on safety-critical things like aerospace: real-time safety and physical systems.

Two other things: Europe is a multi-cultural society, and there’s a very strong emphasis on design. One thinks of Italian fashion, Scandinavian furniture. There’s quite a lot of European strength in the field of human-computer interaction. I think of consumer electronics companies like Philips: very design-led. Then you think about European leadership in the mobile phone markets, led by companies like Nokia.

Another area of strength is machine learning and computer perception. That grows out of the mathematical tradition – the use of very advanced statistical techniques for image processing and handwriting recognition. Modern computers have the horsepower to run demanding algorithms that can achieve near-human levels of ‘perception’.

And there’s strength in the computational science and ‘e-science’ field. The US has been focused on connecting supercomputers. In Europe, we’ve been looking more at scientific collaboration, helping people work together. We’ve used computers as a collaboration technology, to overcome the fact that we’re very fragmented, that university departments are often small. The flagship for this is what CERN does with the European physics community – creating networked virtual organisations pulling groups together.

Q. What’s the obstacle to a stronger European computer-science effort?

A. There is a problem that computer science has in Europe: it is often perceived as a service, rather than as a discipline in its own right. People only bring in computer scientists when they want some programming done. The realisation that a computer scientist with a good background in computer science theory (what some call ‘computational thinking’) could work jointly with someone in biology, and produce something better than either could do on their own – that’s not well established. Computer scientists get frustrated about this. They are expected to do a lot of ‘training’ for other subjects, since many computer science departments grew out of university data processing departments. The contribution that computer scientists can make to basic science, engineering and technology is not so well understood – but if every computer scientist went on strike tomorrow, a lot of industries would say: ‘we’d better pay attention.’

Another of the challenges for Europe is how to make sure talented people who aren’t at the best-known centres also have the chance to excel. It’s easy to focus on the top 15 or 20 labs, but we should also be tracking and supporting the strongest individuals, and be a little less emotional about supporting the institutions. The job of the institutions is to attract the best individuals and not rest on their laurels.
TRAINING THE NEXT GENERATION By Richard L. Hudson, Science|Business
If Europe is to prosper in the global economy, it needs a lot more people like Fabian Suchanek. The 27-year-old German student is full of enthusiasm for his field, computer science. He talks animatedly about his current PhD research, into the database structure of online encyclopedia Wikipedia. And as for computer science itself, it’s a field in which “you can be creative. Many other sciences try to understand what exists. In computer science, I am creating a new thing that hasn’t been there before.”
Fabian is at the leading edge of a movement to train more computer scientists in Europe. Today, the EU has 3% of its workforce in ICT professions, compared to 4% in the US, according to the Organisation for Economic Co-operation and Development. And demand for programmers, systems analysts and theoreticians is growing worldwide. Without more ICT professionals, says an industry report commissioned by the European Commission last year, the EU could be left “to imitate rather than innovate in a competitive global economy”.

To avoid that fate, several new initiatives have been cropping up around Europe. Fabian, for instance, is part of a new training programme run jointly by Saarland University in Saarbrucken and the Max Planck Institute for Informatics. At the University of Southampton in England, the computer science department is trying to tempt young engineers into the field by offering a four-day programme that lets them play with supercomputers to design an aircraft and fly it by simulator. In Brussels an industry consortium, the e-Skills Industry Leadership Board (including Cisco Systems, Microsoft, Siemens and Hewlett-Packard), was launched in June to promote training. And in September 2007 the European Commission announced several new projects to coordinate EU and US university programmes – for instance, moving towards a common masters curriculum in computer science, and a bachelors in information management.

Gerhard Weikum is Fabian’s thesis advisor. He says that, whether in Europe or the US, “the supply does not match the demand” for computer scientists – and that matters for the economy and society broadly, because computing now pervades every field imaginable. “Computing and computer modelling is key to many issues – there are embedded systems in cars, trains, airplanes, factories. You do a lot of virtual engineering, simulation and modelling. When we think of global warming, how do we analyse and understand it? By the methodology of computer models and simulations. In the natural sciences, computation is now the third way of doing science: there’s experiment, there’s theory, and there’s computation.”

A look at the Max Planck initiative, at Saarland University, shows the potential of these new training efforts. It’s a graduate programme, taught in English, and structured on the American model of masters and doctoral degrees. Since the programme started in 2000, it has matriculated 199 PhD students. They can work under the tutelage of the Max Planck researchers but get their degrees from the university. The programme broadened a few years ago with the addition of another Max Planck institute, for software systems. The upshot: the International Max Planck Research School for Computer Science has become a magnet for foreign students – from Algeria, Bulgaria, China, India, and South Korea – who might otherwise have followed a more familiar path for ambitious international students: a move to the US.

It has also attracted business interest. In July 2007, Microsoft Research announced it would contribute up to €1 million to help fund exceptional PhD students at the Max Planck Research School for Computer Science. The partnership will enable students to gain valuable experience working in a leading academic institution and be in direct contact with a leading business research organisation, with the aim of helping to develop some of the world’s most talented computing and science researchers of the future. As Gerhard Weikum said, “It is vital for computer science to bridge both fundamental and applied research, in order to keep pushing the boundaries of science and innovation.” Each year, for the next three years, five PhD students will receive funding from the Microsoft Research PhD Scholarship Programme for their research projects. As part of the programme Microsoft will invite students to attend its annual Research Summer School in the UK, giving them the opportunity to showcase their projects to Microsoft researchers and local academics and to build contacts within the industry. The most promising students will have the possibility of an internship at the Microsoft Research laboratory in Cambridge, UK.

In Gerhard Weikum’s view, the ideal programme gives its students deep knowledge in their own field – but also the training ‘to look across the fence’ at other disciplines. “The goal is to develop people so that they become independent and open-minded scientific researchers.” He cites Fabian as an example – a student who had “a neat idea, off the beaten path” and earned the freedom to pursue it for his thesis.

The problem under study is familiar: when you search online you often get more than you bargained for. If you type into Google the term ‘Max Planck papers’ you’ll find thousands of references to papers by researchers at the Max Planck Institutes – and they swamp what you really want: archives of the physicist Max Planck, after whom the institutes are named. If you could qualify your search by specifying the type of data you want – say, biographical archives – you could get the right answer faster. It sounds simple, but the problem lies in defining a knowledge base that makes sense in many different fields – an efficient ‘ontology’. Fabian’s idea was to look at how people naturally organise data in Wikipedia, the online encyclopedia, and in WordNet, another online resource. From that, he and colleague Gjergji Kasneci have been constructing a knowledge base that can be built into future-generation search engines. The research was presented at an international conference, and it helped get him to Cambridge for his Microsoft internship – where he worked on social tagging, a Web 2.0-style bottom-up approach to ontological knowledge. “It was a new environment there,” he says. “I could work with lawyers, designers, programmers.” He doesn’t know yet what he’ll do on graduation in a year, but he says the breadth of the Max Planck programme has given him a taste for several worlds – industry, research institutes, and academia.
Richard L. Hudson is editor of Science|Business, an R&D news and events service at www.sciencebusiness.net.
A PUSH TO REFORM THE WAY EUROPE DOES RESEARCH By Richard L. Hudson, Science|Business
“Europe has a team of star players, but it is not a star team.” That frank assessment of Europe’s weaknesses and strengths in research was how Dr Janez Potocnik, the EU Science and Research Commissioner, opened a campaign earlier this year to reform how R&D is governed in Europe. His effort is the most sweeping look at EU research policy in years, and is expected to result in a series of new policy proposals from Brussels early in 2008 – proposals that could make a fundamental difference in how much Europe gets out of its research budget. At the moment, there is deep discontent in Brussels and most national capitals about the state of Europe’s research base.
Sure, the latest crop of Nobel prizes was a European triumph, with two Germans, a Frenchman and a Brit dominating the 2007 Nobels in physics, chemistry and medicine. But that’s an anomaly: so far this century Europe has won only 24% of the Nobels – down from the 33% average of 1950-1999, and 73% in the prior half-century. Only two European universities (Cambridge and Oxford) rank among the top 20 in the most-watched league table for international research universities. And while EU scientists produce more research papers than anyone else, they generally score below their American counterparts when it comes to how often the papers are cited by other scientists.
Part of the problem, Potocnik argues, is that Europe’s R&D efforts are badly organised. For starters, there’s duplicated effort: for instance, the EU counts 29 different nanotechnology-funding programmes across the 27-nation bloc, and 110 different national research grants for the study of one bacterium, campylobacter. There are inflexible rules for academic tenure, pensions and employment – rules that make it difficult for researchers to move between academic and industry labs, or across borders within the EU. There are dozens of good proposals for new scientific instruments – from synchrotrons to biobanks – that never get funded because the EU members can never agree to work on them together. To start fixing these and other problems,
in April this year the Slovenian economist – named research commissioner after negotiating his country’s accession to the EU – published a ‘Green Paper,’ the Brussels term for a document calling for public comment and suggestions on a problem. The document raises 30 policy questions and mentions scores of possible solutions, but deliberately avoids backing any of them in an effort to open the dialogue to as many researchers across Europe as possible. Indeed, the launch of the paper was a political act in itself: an attempt to go around the national R&D agencies and mobilise the EU-wide R&D community behind the idea of change. “How many more millions of euros are going to be spent on replicating research institutions and sexy areas of research?” the Commissioner exclaimed, while speaking at a
SCIENCE OF THINKING
© Fotodienst.cc/Oskar Goldberger
The EU Science and Research Commissioner’s 2007 Green Paper is the most sweeping look at EU research policy in years, and is expected to result in a series of new policy proposals from Brussels early in 2008.
Janez Potocnik, European Commissioner responsible for Science and Research
June 2007 conference on the subject organised by R&D news service Science|Business, and co-sponsored by Microsoft. “We simply don’t have the luxury of time.”
In his view, what’s needed is a Fifth Freedom of the European Union: that knowledge should be able to move as freely across EU borders as do the other four, more widely recognised freedoms of movement for goods, services, capital and people. “These are the realities that Europe is facing,” said the Commissioner: the Green Paper “is confronting the reality that Europe does not have freedom of movement of knowledge.” At the conference, the Commissioner got plenty of suggestions. Prof. I.T. Young, of TU Delft, argued for more meritocracy in EU research grants – “in the sense that it is having the creative ideas, and not the right connections, that count.”
Andrew Herbert, managing director of Microsoft Research Europe, spoke of the disparity of skill levels between young European and American computer scientists: “What can we do to make our PhDs more competitive?” Others argued for a greater policy focus on innovation clusters – communities of universities, corporate labs and suppliers that could become regional engines for innovation. There were calls for a new ‘scientific visa’ to make it easier for non-European scientists to move from lab to lab once they’re inside the EU. And there was abundant criticism of the EU’s R&D bureaucracy. The excess financial reporting, auditing and paperwork reflect “a sort of mistrust” of researchers, complained Vlastimil Ružicka, rector of the Institute for Chemical Technology in Prague. “Trust honest people and punish offenders,” he urged the Commissioner.
The outcome of this debate will be closely watched. The preliminary feedback, the Commissioner has said, is mixed. Among the 800-plus formal, written comments that the Commission had received by early autumn, many urged action. But many were also leery of Brussels playing a bigger role in R&D coordination; the old political tug-of-war between Brussels and the national capitals, seen in trade, agricultural and many other policy areas, is also very much alive in the research world. The Commission is due to publish concrete proposals at the beginning of 2008. That’s fortuitous timing for the Commissioner: it’s also when his native Slovenia will be setting the political agenda, holding the rotating presidency of the European Union.
Richard L. Hudson is editor of Science|Business, an R&D news and events service at www.sciencebusiness.net.
THE RESEARCHERS’ DIRECTIVE By Franco Frattini, European Commission
One of the key objectives of the European Commission, as first outlined in the Lisbon Agenda and reiterated by policy decisions since then, is to turn Europe into the world’s most competitive and dynamic knowledge-based society. The Commission has striven to facilitate and create opportunities in Europe which will lead to the attainment of this goal, not least in the area of scientific research. Enormous efforts have been made to improve the quality and amount of research and development currently taking place in European universities and private laboratories, through public-private partnerships and investment in universities in particular. However, we have recognised that one problem persists. If the European Union wants to be successful in its quest to become the innovation centre of the world, then it must rapidly increase the quality and quantity of researchers within the EU: not only to ensure the progress of science and innovation in Europe, but also as a crucial means to attract and sustain the necessary investment required.
Implementation of the Directive by member states is essential to the paradigm of ‘brain circulation’ and the development of the European Research Area.
In recognition of the need to attract talented researchers from all over the world, the Council adopted a Directive in October 2005 which sets out specific procedures for admitting third-country nationals for the purposes of scientific research. This Directive, which was established thanks to the close collaboration between my Directorate-General and DG Research, facilitates access of non-European researchers to the European Union and creates a specific residence permit for third-country researchers, which enables them to move freely within the Union for the purpose of scientific projects. The Directive is aimed at “cutting red tape” and diminishes significantly the burden of the administrative procedures involved. It represents a pioneering piece of legislation, the adoption of which is essential to the paradigm of ‘brain circulation’ and the development of the European Research Area, which my colleagues in the European Commission and I have been advocating so strongly and persistently. Moreover, the researchers’ visa system will provide an unparalleled opportunity for non-European and European workers to work together in helping Europe face the challenges of globalisation.
The date by which Member States were to adopt the necessary legislative and administrative procedures needed to transpose this Directive into their national laws has now passed. Unfortunately, the great majority of Member States have failed to do so in time. I hope that this situation will be remedied shortly. If we do not take action, Europe will never succeed in securing the human resources required to attain its objective of investing 3% of GDP in research and development and will, as a result, be surpassed in this crucial field. Furthermore, failing to implement this Directive means depriving European scientists of an invaluable opportunity to benefit from intellectual input from abroad and to exchange ideas with other leading research experts. We cannot let our scientists, and the rest of European society, down by failing to make the concrete commitments needed to make this possible. In closing, let me once more underline that ensuring the access and mobility of third-country researchers is essential and indispensable for the future of Europe, not merely as a means to improve and encourage scientific development and innovation, but also as a way to stimulate productivity and growth and to enable us to compete with other markets around the world. I therefore urge the remaining Member States to implement the Directive and thus allow us to come a step closer to fulfilling the Lisbon Goals and making Europe a more prosperous society.
Mr Franco Frattini is a vice-President of the European Commission, responsible for Justice, Freedom and Security.
© Fotodienst.cc/Oskar Goldberger
MEDIA AND CONTENT MANAGEMENT
THE NEXT PHASE OF THE IPTV REVOLUTION
Merely a concept at the start of the century, Internet Protocol TV (IPTV) has already completed the first stage of its growth, moving from an idea to a real service that has achieved broad market penetration across many European countries.
For consumers, IPTV provides connected and personalised experiences by integrating the TV with an intelligent two-way network. Forrester Research predicts that by 2017, one in four European fibre broadband subscribers will have IPTV, with penetration today already ranging from 13 percent in the UK to 33 percent in France.
For the telecommunications industry, IPTV offers the potential to generate considerable revenue streams by combining voice and internet with next-generation TV services. Telecommunications companies deliver these interactive TV services over their high-speed broadband networks. Typically, an operator will use a two-megabit line for triple-play services and some subset of it for the IPTV service. For high-definition formats, greater managed bandwidth, up to nine megabits per second (such as ADSL2+ networks deliver), is needed. Accordingly, making savings on bandwidth usage is very important to service providers, especially as they look to offer bandwidth-intensive services like HDTV, currently seen as the next ‘killer application’.
In this second phase of IPTV, the main challenges relate to the daily business of delivering IPTV, including cooperation with content and infrastructure providers, taking deployment to scale, and guaranteeing uninterrupted reception and robust picture quality.
Moreover, the IPTV sector is in the midst of a reality check as cable and satellite operators fight back. The impending arrival of ‘cable IPTV’ and ‘satellite IPTV’ introduces a significant new dynamic to this market as new competitors look to challenge the current players. The anticipated use of IP as the delivery mechanism for television on these networks redefines the potential for IPTV and expands its boundaries far beyond what has, until now, been considered a telco-centric activity.
The transmission of digital signals is also a critical aspect of IPTV. At present, a major debate is underway in Europe – and in the rest of the world – on the transmission methods available for digital signals. Currently, digital signals are routinely transmitted using terrestrial methods. European providers mostly work with the digital TV standard DVB-T (Digital Video Broadcasting – Terrestrial), which transmits digital TV signals via aerial antennas within their terrestrial networks. The approach, known as DTT (Digital Terrestrial TV) in the UK and Ireland, has counterparts in ATSC (Advanced Television Systems Committee) in the US and ISDB-T (Integrated Services Digital Broadcasting – Terrestrial) in Japan, and is slowly replacing analogue television systems. The challenge ahead is in integrating different protocols.
Another major point of discussion is content, its protection and conditional access. The major Hollywood studios are reluctant to release their content to supposedly ‘open’ networks – even if IPTV is anything but an open network. Consequently, a vigorous discussion on video formats and content protection is underway. Most IPTV platform architectures are therefore designed to support multiple video formats, including VC-1, MPEG-2 and H.264, giving service providers flexibility in terms of video formats, including other advanced codecs. The challenge ahead is in integrating different media types. To succeed, IPTV providers need to prove that they can create systems that will make converged, cross-platform service delivery a reality. In addition, companies have to prove that they can achieve significant penetration by reaching millions of subscribers rather than hundreds of thousands. This can be achieved by capitalising on the advantages IPTV offers: the availability of a broader range of TV channels, archives with different TV formats, access to movie databases and, in the future, the engagement of the customer through interactive services. Microsoft and its partners are at the forefront of the IPTV revolution, working on innovative solutions in partnership with operators to create compelling, interactive, connected TV services for consumers.
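The bandwidth arithmetic behind codec choices can be sketched roughly. The line rates and stream bitrates below are ballpark illustrative figures, not operator specifications:

```python
# Back-of-envelope sketch of why codec efficiency matters to IPTV
# operators. All figures are rough illustrative assumptions.
LINE_MBPS = {"adsl": 8, "adsl2+": 24}            # best-case downstream
STREAM_MBPS = {"sd_mpeg2": 4, "sd_h264": 2, "hd_h264": 9}

def streams_per_line(line: str, codec: str, reserved_mbps: float = 2.0) -> int:
    """How many simultaneous TV streams fit on a line after
    reserving bandwidth for voice and data ('triple play')."""
    budget = LINE_MBPS[line] - reserved_mbps
    return max(0, int(budget // STREAM_MBPS[codec]))

# Under these assumptions, a plain ADSL line cannot carry HD at all,
# while ADSL2+ manages a couple of HD streams -- hence the push for
# efficient codecs and bandwidth savings.
print(streams_per_line("adsl", "hd_h264"))     # 0
print(streams_per_line("adsl2+", "hd_h264"))   # 2
```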
Microsoft’s connected TV services platform
Peter Yves Ruland, Microsoft TV, presents Microsoft Mediaroom
Driven by computer and network technologies, TV is undergoing a paradigm shift that is not only changing TV itself but also how people consume media and make it an integral part of their daily lives.
Changing the meaning of TV
The commercial roll-out of IPTV brings a range of features that mark a significant shift in what TV means: high-definition live TV channels, advanced video on demand (VOD) and digital video recording (DVR).
And giving the consumer real control
Faced with an increased variety of new channels and new media, it is critical that the consumer experiences this variety as a benefit and not as a burden. This means empowering the consumer, so that:
- Technology aids in the selection of TV offerings based on the consumer’s preferences,
- Technology aids in aligning the TV offerings with the consumer’s schedule (allowing for interruption and restarting of live streams) rather than dictating the schedule, and
- Technology aids in integrating TV more seamlessly into the consumer’s environment, e.g. by enabling a gaming console and the PC to operate in conjunction with the TV.
Furthermore, broadband allows two-way communication, enabling consumers to use the back channel for requests and feedback, and thus more personalisation. These developments pose several challenges: IPTV providers need to acquire, manage and protect the content (which consumer subscribed to which service and has access to which content) and distribute it. Consequently, effective partnership is essential between content creators, service providers and technology platforms - both to drive an integrated telecommunication service and to enhance the consumer’s experience.
This is the vision behind Microsoft Mediaroom, a platform that combines all the components that telecommunications companies need in order to deploy a robust IPTV service, from content acquisition, distribution and protection, to on-demand video streaming, digital video recording, and service and subscriber management. For the consumer, Mediaroom enables both personalisation and mobility through, for example, remote digital video recording from a mobile phone or web-connected PC, and personal media sharing on the TV, by providing centralised access to digital photos and music stored on PCs in the home.
Today, Microsoft Mediaroom is supporting major customer relationships with Europe’s largest broadband service providers, such as BT in the United Kingdom and Deutsche Telekom in Germany. More than 18 service providers worldwide have selected the Microsoft Mediaroom platform for their digital TV offerings, and commercial deployments are currently underway with another eight providers.
European Microsoft Innovation Center (EMIC):
Overcoming information overload and ‘the crisis of choice’
Year by year, the flood of content is rising all around us: television channels, books, music and the Internet - where not only traditional media but millions of individuals are adding content. So, in this flood, how do you find the content that matters to you? How do you discover multimedia information and entertainment in ways that suit you personally? Isn’t there an easier way? Finding what interests you doesn’t have to be an accident. A Microsoft research team in Germany is addressing, within the research project MyMedia, the key social problem of information overload and what has been called ‘the crisis of choice’ - by jumping beyond traditional recommender systems, which are based on a single multimedia source. Instead, the European Microsoft Innovation Center research project provides recommendations that are integrated from many sources. As the user, you personalise the system simply by indicating that you like a particular video or audio-cast, and the system will find similar content and even learn what you like – the more you use it, the more it learns your preferences.
“We think personalisation is a very interesting research area with direct benefits for users. Each collaborator in the MyMedia project brings great experience and unique capabilities and we’re very excited to begin this project.”
Tim McGrath, MyMedia Project Coordinator, European Microsoft Innovation Center (EMIC)
The MyMedia Dynamic Personalisation Framework is a collaborative research project organised under the EU Research Framework Programme and involves the European Microsoft Innovation Center, the BBC, BT, Microgénesis, Telin and the Universities of Hildesheim and Eindhoven. The resulting system will allow easy integration of multiple content catalogues and recommender algorithms in a single system and provide technology for user-ranked content. The system will learn user preferences and enable the sharing of recommendation results with friends and family while observing privacy and security protocols. The technology will be evaluated on its effectiveness and user-friendliness in a variety of cultures and languages via scientific analysis tools and field trials in several European countries.
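The feedback loop described above can be illustrated with a minimal content-based sketch. This is invented, simplified code, not the MyMedia framework itself; the catalogues, tags and titles are all illustrative assumptions:

```python
# Minimal content-based recommender sketch: "like" feedback reinforces
# tag weights, and items drawn from several catalogues are ranked by
# the learned weights. All data and names here are invented.
from collections import defaultdict

CATALOGUES = {
    "broadcaster": [("Balearics dive documentary", {"diving", "travel"}),
                    ("Island nightlife special",   {"nightlife", "travel"})],
    "user_videos": [("Reef cam, Majorca",          {"diving", "user_generated"})],
}

class Recommender:
    def __init__(self) -> None:
        self.weights: dict[str, float] = defaultdict(float)

    def like(self, tags: set[str]) -> None:
        """Reinforce every tag of an item the user liked."""
        for tag in tags:
            self.weights[tag] += 1.0

    def rank(self) -> list[str]:
        """All items from all catalogues, best match first."""
        items = [item for cat in CATALOGUES.values() for item in cat]
        items.sort(key=lambda it: -sum(self.weights[t] for t in it[1]))
        return [title for title, _ in items]

rec = Recommender()
rec.like({"diving", "travel"})   # user liked the dive documentary
print(rec.rank()[0])             # dive-tagged content now ranks first
```

The real project adds what this sketch omits: merging heterogeneous catalogues, sharing preference sets under privacy constraints, and evaluating across languages and cultures.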
Here’s a scenario that illustrates what the European Microsoft Innovation Center (EMIC) and its partners will enable within the collaborative research project MyMedia: Carin is a scuba diver planning a diving vacation in the Balearic Islands. One night she notices a documentary on the Balearics in her television programme guide. She watches the show and uses her media system, which is based on the MyMedia framework, to express her preferences for more content like this in both English and Spanish. The system returns a set of choices that includes both professionally created content and user-created content, ranked according to her preferences – so content about local dive spots is ranked higher than content about local discos.
After a few days of viewing, Carin has a well-tuned content set that she decides to share with her diving community on their social networking site. The MyMedia components make it easy to share this specific set of preferences, but not her preferences on other topics. Juan, a member of Carin’s dive community, sees her posting and sends her his set of preferences about diving in the Balearics, which contains some new content on environmental issues which Carin decides to incorporate into her own preferences. One of the new sources ‘rents’ its content and Carin likes what she sees, so she keeps it for a rental fee. Another of the sources is free but uses her preferences to include advertisements relevant to her interests in scuba diving on Majorca. In the following weeks, Carin watches and listens to several more programmes about her focus area. On her return from her diving vacation, Carin uploads digital videos of her dives onto her favourite community video site. Using new metadata components from MyMedia, she easily tags and annotates her content. These metadata and tags are also automatically included in her set of preferences and the preferences she had previously uploaded to her dive community site. Others on the site who have subscribed to her ‘virtual channel’ through the preferences she shared with them can now see the content from her diving vacation.
THE ‘SPARK OF LIVE’ TV ON YOUR COMPUTER
Back in 2001, Matteo Berlucchi, an Italian entrepreneur and academic, was ploughing through his email inbox trying to sort out the important messages from the spam, when a simple idea struck him. Create a dedicated priority communication channel that could work outside of email and could be used for sending important and time-critical messages straight to the desktop.
From this simple idea, Matteo Berlucchi formed Skinkers with David Long, another entrepreneur. The idea has since expanded and Skinkers now offers a comprehensive solution for ‘push’ communications which includes multiple communication channels and multiple devices. With over 100 blue-chip customers and 80 employees, Skinkers has gone from strength to strength.
So where does Livestation fit into this story? How does live interactive TV on your computer fit with push communication technology? The answer lies in the concept of very high speed/volume push. Skinkers engineers identified the value of being able to push large amounts of information at very high speeds around corporate networks and the Internet. After thorough research it was decided that a ‘flavour’ of peer-to-peer technology was best suited to meet this challenge. The peer-to-peer concept is based on the principle that every computer connected to a network works together with its neighbours to share information: instead of requiring a large number of dedicated servers to distribute messages, content can be ‘pushed’ by neighbouring computers. The concept is simple but the technical challenges are immense, particularly when it comes to pushing high-volume ‘live’ data to large groups of users.
Skinkers found the best peer-to-peer technology at the Microsoft Research Lab in Cambridge. A team of researchers at Cambridge had been looking at ways to build peer-to-peer networks for high-bandwidth content streaming/distribution and had produced a technology named ‘Pastry’. Not only was this technology ideal for Skinkers, Microsoft had also recently formed a group, IP Ventures, whose remit was to find ways to commercialise the technologies created by Microsoft Research. IP Ventures became involved with Skinkers and discussions began to see if a marriage was possible.
In June 2006 the groundbreaking ‘technology-for-equity’ deal was struck between Microsoft and Skinkers, and development of the technology began. Once the technology was brought into Skinkers, a special lab was set up (internally referred to as ‘The Bakery’) to use the research code to develop commercial technology and solutions. Very quickly, the Skinkers engineers realised that the technology was not only ideal for pushing messages but was also opening the door to a truly revolutionary proposition: being able to stream live video - and therefore television - direct to any computer on an Internet Protocol network. Recognising the incredible potential of this idea, the decision was taken to develop a viable technology and take it to market. This was the birth of Livestation!
Livestation is a unique interactive, global broadcast radio and television platform that will allow broadcasters to distribute live audio and video to a potential audience of hundreds of millions of broadband-connected consumers. It provides a far more economically viable and scalable solution than conventional approaches to the problem of delivering live audio and video over the Internet to mass audiences. Livestation is aiming to become the de facto standard for delivering live radio and television over the Internet by delivering remarkable-quality audio and video using a simple software application.
With conventional streaming services, each stream is typically delivered from central servers or via a special content distribution network. Every additional user receives their own stream, which places enormous demands on the Internet infrastructure and ultimately limits the number of users that can be simultaneously supported. In a peer-to-peer network, each node functions as both a client and a server, sharing its data with other users. This helps spread the load to the edge of the network, so that capacity grows with demand.
Livestation is designed to deliver the very best user experience utilising Microsoft Silverlight. It is simple to use - just like watching television or listening to the radio! Users get a number of stations or channels to listen to or watch whenever they want.
So, from a simple idea of improving communications, a chain of events has led to Skinkers’ relationship with Microsoft and, from this excellent partnership, Livestation was born. The stage is now set for Livestation to become a global standard for listening to and watching live radio and television over the Internet.
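The scaling argument can be made concrete with a toy calculation. This is a simplified model with an assumed fan-out; it ignores churn, asymmetric uplinks and routing, which real systems such as Pastry must handle:

```python
# Toy model of peer-assisted distribution: the origin sends one
# stream to a seed peer, and every peer relays it to `fanout`
# neighbours, forming a distribution tree. The number of relay hops
# needed grows only logarithmically with audience size, while a pure
# client-server origin must serve one stream per viewer.
def relay_depth(viewers: int, fanout: int = 4) -> int:
    """Relay hops needed for one seeded stream to reach `viewers`
    peers when every peer forwards to `fanout` neighbours."""
    reached, frontier, depth = 0, 1, 0
    while reached < viewers:
        reached += frontier   # peers served at this depth
        frontier *= fanout    # each of them relays onward
        depth += 1
    return depth

# The origin's load stays at one seeded stream regardless of audience;
# only the tree depth (and hence latency) grows, and slowly.
for audience in (100, 10_000, 1_000_000):
    print(audience, "viewers:", relay_depth(audience), "relay hops")
```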
Livestation is currently in technical trials and is planning to launch globally in 2008. For further information visit: www.skinkers.com I www.livestation.com
To find out more about IP Ventures visit: http://www.microsoft.com/about/legal/intellectualproperty/ipventures/default.mspx
AUSTRIAN START-UP SECURES FUNDING FOR VIDEO TELEPHONY PROJECT Vienna-based start-up IQ Mobile found that obtaining funding to develop innovative solutions was a straightforward exercise. It was awarded 15 per cent of the costs for its video telephony project and now names Sony BMG as one of its customers.
© Fotodienst.cc/ Oskar Goldberger
Günter Schneider, Microsoft, and Harald Winkelhofer, Chief Executive, IQ Mobile
“It’s really important to help small companies grow their business, because it brings more jobs, higher turnover, and greater investment in other companies, and boosts the whole economy.” Harald Winkelhofer, Chief Executive, IQ Mobile. http://www.iq-mobile.at/en/index.htm
The European Commission identifies ICT as the biggest driver of growth for small to medium-sized enterprises (SMEs). But for many start-ups, lack of financial support can see the company fail before realising its innovative potential. Recognising this, and the importance of SMEs to the economy, the European Union (EU) established sources of funding to support these fledgling operations. But with business strategies to plan and operational processes to put in place, when do CEOs have the chance to find and apply for these grants? The answer is they often don’t, and as a result miss out on an opportunity to make their business more competitive. Harald Winkelhofer is one entrepreneur who avoided this situation when he set up one of Austria’s most exciting SMEs.
Building on more than 10 years’ experience in multimedia products for mobile phones, Winkelhofer founded IQ Mobile in June 2006. His vision centred on offering two sophisticated services to the mobile telephony market. Winkelhofer wanted IQ Mobile to become the first company in Austria to develop a video telephony platform for the mobile phone, and the first to support advertising on mobile phone portals such as Vodafone Live. In the first few months at IQ Mobile, Winkelhofer was heavily involved in preparing business, marketing, and sales plans, and had little time to look for funding opportunities. “From the outset, I wanted to apply for funding,” he says, “but during the start-up phase I didn’t have time to study complex application forms.” While reading the newspaper, the IQ Mobile founder came across a means to apply for funding that wouldn’t take his focus away from the business. The European Union Grants Advisor (EUGA) programme is an initiative supported by a number of community partners and industry leaders, such as Microsoft, Intel, and HP, to help increase SMEs’ awareness of, and access to, dedicated EU, national and regional funds. Winkelhofer contacted EUGA to arrange a consultation. During the initial contact, EUGA informed the IQ Mobile founder about an online competition sponsored by Microsoft Austria and news provider Pressetext. At the event, Winkelhofer scooped the competition’s first prize - €500 (U.S.$709). He says: “There were two great things to come out of that day for me—the €500 prize and an increased knowledge of funding for start-up companies.” Winkelhofer presented his ideas for IQ Mobile and the video telephony platform to EUGA experts in a series of consultations. “It was a very structured process,” he says. “We had three or four personal meetings where I told them what the company was doing, what my goals are, what technical background we need, and what the estimated costs were, and then they recommended the funding to apply for.”
The European Union Grants Advisor (EUGA) programme is an initiative to help increase SMEs’ awareness of, and access to, dedicated EU, national and regional funds.
IQ Mobile applied in September for a risk-related grant from an Austrian and European funding cooperative, because of the risks involved in working with such new technology. By the end of 2006, the funding body rewarded the innovative nature of the IQ Mobile project with a grant to cover 15 per cent of the project’s total costs. Winkelhofer says: “The consultation process was a good way for us to work, because I could focus on my daily business and let the expert recommend which funds to apply for - that’s why we’ll consult EUGA again.” Since IQ Mobile was founded, the Vienna-based company has grown rapidly, building up a base of 40 customers including big names such as Sony BMG and Nokia. IQ Mobile is the first company in Austria to provide a video telephony platform with interactive voice response. Impressed with the technology, Sony BMG signed up for mobile marketing and video telephony services on the first anniversary of IQ Mobile. The music and entertainment giant is set to use the IQ Mobile technology to bring previews of new music videos to its customers’ mobile phones. Winkelhofer says: “At the moment Sony is sending out previews to its online newsletter community, but soon they’ll be marketing a free preview of new music to be consumed on the mobile phone.” Over the last year, IQ Mobile has developed an extensive convergent platform (SMS, MMS, voice) with an innovative portal of mobile video solutions. Mobile marketing and advertising tools were also established. Winkelhofer plans to use EUGA services again and recognises that
future grants will also help the company to progress faster. “I could invest the money in a sales employee or further technical developments, so that will definitely support the business,” he says. While Winkelhofer wants the company to extend its reach, he is keen for it to remain a small, creative, high-quality business in the mobile sector. Winkelhofer shares EUGA’s view that innovative SMEs have an important role to play in the European economy. “It’s really important to help small companies grow their business,” he says, “because it brings more jobs, higher turnover, and greater investment in other companies, and boosts the whole economy.”
GLOBAL INNOVATION IN THE HEART OF EUROPE

Today, the efficient and innovative distribution and processing of information is at the core of all business activity – so the effects of new technologies on company growth opportunities, and as a result on investment decisions, are significant. For some years, says Erich Gebhardt, Director of Industry Engagement in Microsoft’s Unified Communications Group and Head of the Microsoft Development Center, Zurich, it has been predicted that the next major innovative leap in enterprise communications would be enabled by Voice-over-IP (VoIP), the transmission of voice applications via the Internet Protocol (IP) network. Until now, however, the technology has not entirely lived up to the expectations of users and technology suppliers. Now, says Gebhardt, the signs are growing that a significant tipping point has been reached: “because information technology - which for some time has been based on IP networks - is the nerve centre that manages the transmission of applications, voice and data; and now information technology and (tele)communications are merging.”

Unified communications with added value

Gebhardt says the key to turning this much-anticipated convergence from marketing buzzword into business reality is two-fold: open standards, such as SIP (Session Initiation Protocol), which have found widespread acceptance and will also shape the future of VoIP; and seamless integration into the unified communications platform components rather than use of an isolated VoIP application.
As Gebhardt points out, the benefits of this approach are obvious: Voice-over-IP is no longer ‘old products in new packaging’ – in other words, the adoption of well-known telephone functions using a PC and expensive IP telephones. On the contrary, voice functions are now seamlessly integrated into the familiar office environment. This makes the user interface a lot simpler, and gaining user acceptance should be child’s play. The variety of unified communications presence and identity functions is also available unconditionally to VoIP applications.
This is what Microsoft has done in entering the VoIP market for the first time, with its Unified Communications solution, launched in October 2007.
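SIP, the open standard mentioned above, is a plain-text, HTTP-like protocol for setting up sessions such as voice calls. As a rough illustrative sketch (all addresses, tags and branch values below are invented, and real SIP stacks carry many more headers), a minimal INVITE request might be assembled and inspected like this:

```python
# Build a minimal, illustrative SIP INVITE request (RFC 3261 style).
# All addresses, tags and branch values below are made up for the example.

def build_invite(caller: str, callee: str, call_id: str) -> str:
    """Assemble a bare-bones SIP INVITE as CRLF-separated header lines."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/TCP client.example.com;branch=z9hG4bK776asdhds",
        f"From: <sip:{caller}>;tag=1928301774",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

def parse_start_line(message: str) -> tuple:
    """Return (method, request_uri) from the first line of a SIP request."""
    method, uri, _version = message.split("\r\n", 1)[0].split(" ")
    return method, uri

msg = build_invite("alice@example.com", "bob@example.org", "a84b4c76e66710")
assert parse_start_line(msg) == ("INVITE", "sip:bob@example.org")
```

Because the messages are plain text and the standard is open, any SIP-compliant endpoint - a softphone, a gateway, a mobile client - can take part in the session, which is what makes the integration described above possible.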
The key application in Microsoft’s Unified Communications solution is the Microsoft Office Communications Server 2007. Its key VoIP components were developed in
Europe. Over the past two years, in the cramped rooms of an old Zurich apartment, not far from the banks of Lake Zurich, a close-knit group of software engineers has played a huge role in writing unified communications’ source code. The Zurich Development Center for Collaboration Technologies, opened by Microsoft in 2006, is the company’s fourth European software development centre and reports internally to the Unified Communications Group. Development work for the business segment is distributed across four sites: Microsoft headquarters in Redmond, the development centre in Zurich, and development centres in Hyderabad, the capital of the Indian state of Andhra Pradesh, and in the Chinese capital, Beijing.

From Lake Zurich to the world

The fact that Switzerland, and in particular Zurich, was given this role is anything but a coincidence - it was born from the Zurich start-up company media-streams, which was acquired by Microsoft in 2005. As Gebhardt explains, “In its VoIP development work, media-streams was already focused on the advanced SIP standard. Furthermore, from the outset the media-streams developers set the conceptual course for their work, with far-reaching effects: they decided to develop their VoIP solution as an integrated component of an existing infrastructure – or in other words the mail server. This laid the foundations for integrating the media-streams technology in Microsoft’s unified communications solution.”

The new solution means that certain weak points in previously available VoIP solutions for companies have been eradicated. The breakthroughs include not only the user interface advantages noted already, but also, for example, the fact that high-quality IP equipment is no longer needed - an area where companies are still making unnecessarily costly investments. The telephone will continue to be part of basic workplace equipment, but as a simple audio device connected to the PC via a USB hub, just like the mouse and keyboard.

“The most important advantage of unified communications is the added value that is created at the workplace and the benefits that information workers receive from the close networking of voice and collaborative applications,” Gebhardt says. “In doing so, the total openness of unified communications plays a huge role. As a result, any services and products that comply with the SIP standard can be integrated – with mobile applications set to play a particularly decisive role. And it goes without saying that Microsoft unified communications will also be taking no risks with security – possibly by using total encryption of voice transmission.”

After the launch is before the launch

Development work on the next product versions of Microsoft’s unified communications is already moving full steam ahead. While the developer teams in Beijing and Hyderabad are primarily working on the development of further clients (for Windows Mobile and Outlook Web Access), in Zurich they are also developing ‘presence-based call routing’. This allows user presence data, already available today, to be used even more extensively for call management and process design. In addition to developing source code, Zurich is also managing the setting up of the VoIP business in Europe, the Middle East and Africa. Because advanced technology only becomes an innovation if it can penetrate the market, Microsoft’s declared objective is that in three years’ time, 100 million people will be using their PCs to make phone calls. Gebhardt says this is a realistic target given the interest the application has already generated. “Highly promising pilot tests with the Office Communications Server 2007 are taking place, above all in corporations that operate globally, where vast savings in costs are possible. So at enterprise level, where VoIP has already been a subject of discussion for some time, it is now – thanks to the Microsoft technology – a hot topic,” he says.
Gebhardt adds that in cross-corporate VoIP business communications – already, thanks to federation, at an advanced stage in today’s version of unified communications - the next step in development will focus on another aspect. “By employing what is known as SIP trunking, the barriers between VoIP and traditional telephony will be overcome,” Gebhardt explains. This means that VoIP will finally be a step ahead in ‘quality of experience’, i.e., voice quality and functionality – which is already largely the case with LAN. The quality of voice connections between PCs is measured directly and continually enhanced during the call; and work is also being done on hosting for Office Communications Servers (OCS hosting). Above all, this will make VoIP even more appealing to small and medium-sized enterprises. As part of the Microsoft strategy of software plus services, Internet Service Providers (ISPs) can also offer their customers VoIP services based on Microsoft unified communications as leased models. Gebhardt emphasises that, beyond ISPs, it is the partners in general who will develop individual, vertical solutions on Microsoft’s open platforms and implement them in companies. Therefore, opening up unified communications to Independent Software Vendors and supporting the partner networks are also crucial focal points in building up the VoIP business. “Once Microsoft partners are making their own vast contribution to it, VoIP will indeed become a success factor in the field of unified communications.”
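The ‘presence-based call routing’ being developed in Zurich can be illustrated with a toy sketch. The presence states and routing rules below are hypothetical examples, not the actual Office Communications Server 2007 behaviour or API:

```python
# Illustrative sketch of presence-based call routing: an incoming call is
# handled differently depending on the callee's published presence state.
# The states and routing rules here are hypothetical, not the OCS 2007 API.

ROUTING_RULES = {
    "available":    "ring_desktop",    # ring the PC softphone
    "in_a_meeting": "voicemail",       # don't interrupt; capture a message
    "on_the_phone": "voicemail",
    "mobile":       "forward_mobile",  # push the call to a mobile client
    "offline":      "voicemail",
}

def route_call(presence: str) -> str:
    """Pick a call-handling action for the callee's presence state."""
    return ROUTING_RULES.get(presence, "voicemail")

assert route_call("available") == "ring_desktop"
assert route_call("mobile") == "forward_mobile"
assert route_call("unknown_state") == "voicemail"  # safe default
```

The point of the feature is exactly this kind of rule: presence data that the platform already publishes becomes an input to call management, rather than information shown only to human contacts.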
Global Research Library 2020 (GRL 2020) is an initiative of the University of Washington Libraries and Microsoft Corporation to bring together global leaders from different sectors to shape a roadmap for research libraries for the decades ahead.
A 2020 VISION FOR GLOBAL RESEARCH LIBRARIES The inaugural GRL2020 workshop, held in Woodinville, Washington, USA, from 30 September to 2 October 2007, drew experts from around the globe – including Europe, China, India, Japan, Canada, Australia and the United States. Setting the scene for the discussion, Betsy Wilson, Dean of University Libraries, University of Washington, and Tony Hey, Corporate Vice President for Technical Computing at Microsoft, described the changing landscape for libraries thus:
“The rapid dissemination of findings, the creation of new tools and platforms for information manipulation, and open access to research data have rendered the traditional institution-based approaches to providing access to information inadequate. In order for research libraries to play a central role in this increasingly multi-institutional and cross-sector environment, we must find new approaches for how they operate and add value to research and discovery on a global basis.”
MEDIA AND CONTENT MANAGEMENT
The ‘framing discussions’ at the outset of the workshop focused on a wide array of issues related to understanding and managing the output of global research. Many global problems - climate change, world-wide health threats, and international economic issues - require support for the research enterprise that transcends political boundaries, and demand new infrastructure and cooperative frameworks. Participants largely agreed on various core value propositions for the Global Research Library:

- The creation of public value is central to the mission of GRLs.
- Selection, sharing, and sustainability are longstanding components of library missions, and remain so as library assets transition from paper to digital formats.
- Innovation and knowledge creation rely on sustained availability of information (information drives discovery).
- Long-term curation of content is critical, and requires focused effort in the development of systems and standards to support it in the long digital future ahead.

The GRL2020 group also outlined critical impediments that must be addressed if the vision of the global research library of the future is to be realised:

- Funding for research and learning is fragmented and suffers from steep disparities globally.
- Intellectual property and copyright constraints increase friction in the information supply chain.
- Complexity of the stakeholder environment impairs interoperability and information flow.
- Cross-sector tensions and proprietary perspectives dilute resources and leadership.
- Infrastructure deficiencies, especially in developing countries, limit the scope and effectiveness of recognised solutions.
- Economic and technological sustainability are problems at all levels.
- Skills appropriate to the 21st century information world are scarce in a 20th century workforce.

It was widely acknowledged in the GRL2020 discussions that the global research library of the future will be an interoperable network of services, resources, and expertise designed to facilitate the process of research and the selecting, sharing, and sustaining of the outputs of research. Participants agreed that overlapping infrastructures must be integrated and managed within policy frameworks by staff with appropriate skills now uncommon in the field. Infrastructure was broadly interpreted to include telecommunications, protocol standards, computing, electronic publishing, repositories, discovery and delivery services, and the instructional services necessary to support the rapidly changing skills that these technologies demand. To the extent that common, interoperable components of such infrastructure can be agreed upon and shared, the costs of various dimensions of the enterprise can be reduced and efficiencies increased.
Disparate political, economic and cultural environments often confound collaboration, so the GRL2020 participants focused on identifying areas where collective leadership could have the greatest impact. They agreed that an important first step would include an advocacy document to help create a unified, coherent voice in support of the global research library. This work is already underway among a group of participants in follow-up to the workshop.
“Research libraries have a central role to play in the radical transformation taking place in research, scholarship, science and discovery. By bringing together this remarkable group of people for several days of intense dialogue, we have generated new ideas, energy, and momentum in the essential work of shaping the future direction of libraries in what will necessarily be a global and cross-sector information environment.”
The growing worldwide trend towards Open Access is an example of a kind of social interoperability that earned attention from the group. While acknowledging that this issue is dealt with more effectively by others, participants recognised that changes to the business models of scholarly publication and research are key to improving the effectiveness of research and learning in the developing world. The Web has reduced separation among communities, and research libraries need to exploit these trends by creating collaborative environments where public, private, and governmental agencies may find common purpose and mutual benefit. Microsoft Corporation’s generous support of the University of Washington’s leadership of this meeting is an example worthy of elaboration.
Betsy Wilson, Dean of University Libraries, University of Washington
LCD THAT CAN SEE
Microsoft researchers develop an LCD screen that can see as well as be seen, opening up new possibilities for touch-screen technologies.
HUMAN COMPUTER INTERACTION
Unlike the touch screens we regularly encounter at banks and airports, multi-touch systems can recognise and react to two or more touch points applied simultaneously to the computer screen. This allows new kinds of ‘gestures’ with which people can work with software, well beyond the clicking and dragging we are familiar with today. A very simple multi-touch gesture would be to zoom in or out of an on-screen object by touching its opposite corners and then moving the forefingers further apart to enlarge the view or closer together to shrink it. Easy and intuitive.
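The zoom gesture just described reduces to simple geometry: the scale factor is the ratio of the current distance between the two touch points to their distance when the gesture began. A minimal sketch (coordinates are arbitrary screen units; this is not code from the ThinSight project):

```python
import math

def distance(p, q) -> float:
    """Euclidean distance between two (x, y) touch points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_zoom_factor(start, current) -> float:
    """Scale factor implied by moving two fingers from `start` to `current`.

    `start` and `current` are each a pair of (x, y) touch points.
    A value > 1.0 means the fingers moved apart (zoom in); < 1.0 means
    they moved together (zoom out).
    """
    return distance(*current) / distance(*start)

# Fingers start 100 units apart and end 200 units apart: 2x zoom in.
factor = pinch_zoom_factor(((0, 0), (100, 0)), ((0, 0), (200, 0)))
assert factor == 2.0
```

In a real system this computation would run on every sensed frame, so the on-screen object scales smoothly as the fingers move.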
Steve Hodges, manager of the Sensors and Devices Group at Microsoft Research Cambridge, and Shahram Izadi, a researcher in the lab’s Socio-Digital Systems Group, have broken new ground through a novel approach to touch-screen technology, called ThinSight. By bringing multi-touch sensing to the world of thin computing devices such as Tablet PCs, laptops and handheld devices - ThinSight makes a new realm of human-computer interaction more practical and deployable in real-world settings.
The Genesis of ThinSight Over the span of a couple of years, Izadi and Hodges began to have regular conversations imagining a future in which computer displays not only rendered digital content visually, but also contained photo sensors embedded alongside those pixels which could actually see.
“We got very excited about the idea, and developed ThinSight as a way of exploring and prototyping those future possibilities,” Izadi says. Hodges and Izadi assembled a team to develop a prototype system based on infrared sensing. Instead of detecting user interaction using a camera, which requires physical distance in front of or behind the display, ThinSight sensors were applied directly to the back of an LCD panel. To create the prototype, the team cut out part of the lid of a standard laptop computer and attached the sensors directly to it. The resulting infrared images, captured at 11 frames per second, are processed using computer vision technology, enabling people to interact directly with the computer display using both hands, or multiple objects that the software can be programmed to recognise.
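The computer-vision step described above can be illustrated with a toy version: threshold a low-resolution infrared frame and report the centre of each bright blob as a touch point. The frame values and threshold are invented, and the actual ThinSight processing is more sophisticated:

```python
# Toy sketch of the processing described above: threshold a low-resolution
# infrared frame and find the centre of each bright blob (a fingertip
# pressed against the screen). The frame values here are invented.

def find_touch_points(frame, threshold=128):
    """Return (row, col) centroids of connected bright regions."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    points = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill this blob, collecting its pixel coordinates.
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*blob)
                points.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return points

# A 4x6 'infrared frame' with two bright fingertips.
frame = [
    [0,   0,   0, 0,   0, 0],
    [0, 200, 210, 0,   0, 0],
    [0,   0,   0, 0, 190, 0],
    [0,   0,   0, 0,   0, 0],
]
assert find_touch_points(frame) == [(1.0, 1.5), (2.0, 4.0)]
```

Running this over every captured frame yields the number and position of simultaneous touch points, which is exactly the data a multi-touch gesture recogniser needs.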
What if the computer screen you stare into every day was able to look back at you? A research team at Microsoft Research Cambridge is investigating that very idea, as a way to enable new modes of human-computer interaction.
“What’s exciting about ThinSight is that it is based on optical sensing, so it can perceive all kinds of objects as they get close to the screen, not just fingertips,” Izadi says. “The optical approach gives ThinSight the ability to sense outlines of infrared-reflective objects through the display. So it’s multi-touch plus the ability to support tangible interaction with objects.”

Imagine, for example, illustration software that allows you to paint with either your fingers or with a paintbrush, or both at the same time. And what if that programme could respond to objects placed against the screen, such as a stencil or an oak leaf? Other objects could be tagged with identifying markers that the software could recognise, for even greater interaction between the physical and virtual worlds.

“We believe that ThinSight provides a glimpse of a future where new display technologies such as organic LEDs (OLEDs) will cheaply incorporate optical sensing pixels alongside RGB pixels in a similar manner, resulting in the widespread adoption of thin form-factor multi-touch sensitive displays,” Izadi says. “Ultimately we will either see LCDs being manufactured with these sensing pixels or organic LEDs - or other new display technologies emerging with these capabilities embedded within them.” Toshiba and Sharp, he says, are already beginning to manufacture LCD displays with sensing pixels built into the same substrate as the LCD itself.

Imaging in the Infrared

One of the key differentiators of ThinSight is that it uses infrared light, which is invisible to the human eye and doesn’t interfere with the integrity of the visual display. The prototype created by Hodges’ and Izadi’s team relies on a device known as a retro-reflective optosensor.
This is a sensing element that contains two components: a light emitter and an optically isolated light detector. It is therefore capable of emitting
light and, at the same time, detecting the intensity of reflected light. If a reflective object is placed in front of the optosensor, some of the emitted light is reflected back and is therefore detected. By using a grid of retro-reflective optosensors distributed uniformly behind an LCD display, it is possible to detect any number of fingertips on the display surface. The raw data generated is essentially a low resolution greyscale image of what can be seen through the display in the infrared spectrum. By applying computer vision techniques to this image, it is possible to generate information about the number and position of multiple touch points. In addition to the detection of passive objects via their shape or some kind of barcode, it is also possible to embed a very small infrared transmitter into an object. In this way, the object can transmit a code representing its identity, its state, or some other information, and this data transmission can be picked up by the infrared detectors built into ThinSight. Indeed, ThinSight naturally supports bi-directional infrared data transfer with nearby electronic devices such as smartphones and PDAs. Data can be transmitted from the display to a device by modulating the infrared light emitted. Moreover, a device that emits a collimated beam of infrared light can be used as a pointer, either close to the display surface, as with a stylus, or from some distance. Such a pointing device could be used to
support new modes of interaction with a single display or with multiple displays.

From Prototype to…

Izadi is quick to point out that ThinSight is only a research prototype at this stage and, while the initial results are very promising, there is more work to be done before any detailed plans for the technology can be considered. “We have shown how this technique can be integrated with off-the-shelf LCD technology, making such interaction techniques more practical and deployable in real-world settings,” Izadi says. “And we have many ideas for refining the ThinSight hardware, firmware and PC software with which we plan to experiment. We would obviously like to expand the sensing area to cover the entire display, which we believe will be relatively straightforward given the scalable nature of the hardware. We also want to move to larger displays and experiment with more appealing form-factors such as tabletops.”
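The bi-directional infrared data transfer described earlier amounts to modulating the emitted light and sampling the detected intensity. A toy sketch of decoding an on-off keyed identity code from intensity samples (the threshold and the 8-bit framing are invented for illustration; they are not the ThinSight protocol):

```python
# Toy sketch of the infrared data-transfer idea described earlier: a tagged
# object switches its IR emitter on and off, and the sensors behind the
# display sample the received intensity. The threshold and the 8-bit
# framing here are invented for illustration.

def demodulate(samples, threshold=100):
    """Turn one intensity sample per bit period into a string of bits."""
    return "".join("1" if s >= threshold else "0" for s in samples)

def decode_id(bits: str) -> int:
    """Interpret an 8-bit frame as an integer object identifier."""
    assert len(bits) == 8, "expected one 8-bit frame"
    return int(bits, 2)

# Intensity samples as the tagged object flashes the pattern 01000001.
samples = [20, 180, 15, 10, 25, 12, 18, 170]
bits = demodulate(samples)
assert bits == "01000001"
assert decode_id(bits) == 65
```

The same principle works in both directions: the display can transmit by modulating its own emitters, and a collimated IR pointer is simply a very bright, steerable signal seen by the sensor grid.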
But although ThinSight’s approach looks very promising technically, how likely is multi-touch computing to take off? Izadi points to the world of music as an illustrative example. “If you look at pianos, guitars, mixers — all of these things require you to interact using two hands at the same time. Invariably people use two hands to do all sorts of things. It just happens that we’ve come to use one hand, with the mouse, when using a PC. ThinSight may play a role in changing that.”
About MSRC Microsoft Research Cambridge (MSRC) is one of the largest and most prolific computer science research laboratories in Europe with a global impact. Through fundamental, cross-disciplinary research, MSRC aims to push the boundaries of computing, challenge scientific convention and ultimately further scientific knowledge. MSRC aims to create technologies that improve the way the world works, plays, and lives.