
Futures – Microsoft’s European Innovation Magazine – Issue n°1 | December 2007

“When the future historians look back on these years, they will examine us on two things: our innovation and the purposes to which we put it.”



Editor in Chief Thaima Samman, Senior Director Corporate Affairs Europe, Microsoft

THE ‘RENAISSANCE’ OF COMPUTER SCIENCE What if Newton had published his theory of gravitation – and nobody noticed? Richard Hudson of Science Business discusses the frustrating position of the computer scientist today.

Editorial Board
Jan Muehlfeit, Chairman Europe, Microsoft
Dirk Delmartino, EU Communications Director, Microsoft
Andre Hagehülsmann, Innovation Coordinator, Microsoft
Rachel Thompson, Regional Director Europe, Middle East & Africa, APCO Worldwide
Joanna Meade, EU Corporate Communications, GPlus Europe

LIVESTATION – THE ‘SPARK OF LIVE’ TV ON YOUR COMPUTER European start-up Skinkers has pioneered a comprehensive solution for “push” communications which includes multiple communication channels and multiple devices – bringing live audio and video to the desktop.

External Contributors
Richard L. Hudson, Science Business
Nuala Moran, Science Business
Kate Lothian, Skinkers
Julian Hale, Freelance Journalist

Production
Quadrant Communications, B-9000 Gent

LCD THAT CAN SEE What if the computer screen you stare into every day was able to look back at you? A research team at Microsoft Research Cambridge is investigating that very idea, as a way to enable new modes of human-computer interaction.

Layout and Design B-9040 Gent
Illustrations Leon Mussche
Printing Roels Printing, B-2500 Lier
Advertising Cisco, Econet, Intel, Microsoft, Randstad
Contact details Microsoft Corporate Affairs Europe, Troonstraat / Rue du Trône 4, B-1000 Brussels
Circulation number / Frequency 2,000 copies / Quarterly publication
Disclaimer The content of this magazine, including news, quotes, data and other information, is provided by Microsoft and its third parties for your personal information only. Views imparted by third parties do not necessarily reflect the views of Microsoft Corporation.
Copyright Microsoft
Printed on recycled paper.

Further SCIENCE OF THINKING Training the next generation


A push to reform the way Europe does research


The researchers’ directive


MEDIA AND CONTENT MANAGEMENT The next phase of the IP TV revolution


Microsoft’s connected TV services platform


Overcoming information overload and ‘the crisis of choice’


Austrian start-up secures funding for video telephony project


Internet telephony


A 2020 vision for global research libraries


HUMAN COMPUTER INTERACTION The Tablet PC and mathematics


Pushing the boundaries in graphics hardware


“What if…?” – breaking new ground in enterprise resource planning visualisation


Sophisticated graphic website and advert-tracking software set for 2008 launch


Virtualisation technology transforms old mining site in Portugal into a leading centre for scientific education and research


Innovation for social and economic empowerment


Preface

When the future historians look back on these years, they will examine us on two things: our innovation and the purposes to which we put it. In doing so, one of the themes on which they will remark is how quickly and how widely information and communications technology (ICT) became embedded in virtually all forms of innovative endeavour – whether in technology, social practice, the environment, distribution or business models.

Today, ICT doesn’t simply distribute innovation: it enables the very process of innovation, across the spectrum of human activity – for scientists, engineers, doctors, teachers, librarians and NGOs, as well as business leaders and employees – and in every part of the world. Microsoft has always invested heavily in innovation, and in particular in providing a predictable and very widely used platform for innovation by others. In Europe today, our local software ecosystem – independent software vendors, developers, resellers and other partners – comprises 37% of the total ICT-industry employment and accounts for 57% of total ICT-industry tax revenues. Microsoft is also investing deeply and broadly in R&D in Europe, where there is a highly skilled and motivated talent pool and excellent industry and academic partners to collaborate with. Our R&D-related facilities in Europe employ more than 1,000 researchers and engineers, with an annual investment of €300 million, and cover the full spectrum of software development, from the earliest blue-sky concept to product implementation.

For Europe, alongside the key challenge of building an attractive policy environment for innovation, there is a second challenge: to communicate and celebrate Europe’s innovation achievements and its innovators; to explain how innovation happens and why it is exciting – and to inspire more. This publication is a contribution to that European effort. It showcases innovation that is accelerating new kinds of science and computing; collaborative applied-research partnerships that advance the European research agenda; product development in Europe, for Europe and the world; and support for European start-ups, SMEs and community NGOs in gaining access to innovations. Europe’s innovation story is indeed one that Microsoft is proud to be part of, now and in the future! Jan Muehlfeit, Chairman Europe, Microsoft

What if Newton had published his theory of gravitation – and nobody noticed? That’s the frustrating position of the computer scientist today.



THE SCIENCE OF THINKING – AND HOW IT WILL CHANGE OUR WORLD
By Nuala Moran, Science|Business

The rapid pace of industrialisation in India, China and elsewhere is pushing up the price of every commodity, from oil and copper, to wheat and water. But one resource – computer processing power – is abundant and falling in price. That has profound implications for every other field: released from the need to use a scarce resource sparingly, computer scientists are applying this power in ways that are transforming the wider world of science, commerce, business and policy. But so far, outside the world of computer science, awareness is low about where all of this is heading.

That gap, between the future potential of computer science and public awareness of it, was the topic of a high-level gathering of policy makers, academics and industry executives in Brussels on 19 September 2007. ‘The science of thinking: Europe’s next challenge,’ a symposium organised by R&D news service Science|Business and supported by Microsoft, explored the impact of new advances in fundamental computer science – and the appropriate policy responses. “The challenge is that the biggest change in computing itself is coming in the next four to five years, and to date there has been little preparation to deal with the majority of that change,” said Craig Mundie, chief research and strategy officer of Microsoft. Among those changes: computer processing speed won’t continue to increase at the same exponential rates seen in the past, so to drive significant increases in performance there will be a shift to increased numbers of processors – and with it, the need to address challenges regarding parallelism and concurrency.

“Computer processing power is abundant, falling in price – and being used to transform the wider world of science, commerce, business and policy.”


“Computer science is in a period of Renaissance,” said another symposium participant, Muffy Calder, professor and head of computer science at Glasgow University. In computer science, “we are being reborn – at the same time as other sciences are being reborn by computer science.”

Symposium “The science of thinking: Europe’s next challenge”, 19 September 2007

Worldwide, there needs to be more funding for basic research in computing and for interdisciplinary research between computing and other fields.

The prime cause of all this is well known: the smaller, faster, cheaper cycle of the global computer industry. Seemingly infinite amounts of computer power make it possible to reflect and model the world around us in all its minute detail, from the exquisite machinations of a single cell, to the baroque feedback loops that are driving climate change. But the impact isn’t just about applying more and more computer firepower to manipulate and query bigger and bigger data sets. Nor is it merely to do with collecting, maintaining and sharing information. It is about a different way of handling complex questions, in which the concepts and tools of computer science provide the framework for problem solving. The term ‘computational thinking’ has been coined to describe this new approach. Jeannette M. Wing, a Carnegie Mellon University professor who is currently assistant director of the US National Science Foundation’s computer programmes, has a vision of it becoming a fundamental skill, ranking alongside reading, writing and arithmetic. “Imagine every child thinking like a computer scientist,” she said. In the case of systems biology, it means the ability to pull together the multiple abstractions that molecular biology has accumulated – the individual chemical pathways, protein structures, and receptors – and build holistic models of entire biological processes ‘in silico’. Similarly, in astronomy, the sky becomes a vast database of star observations for modelling. In epidemiology, doctors can simulate the spread of disease and conduct experiments not possible in the real world. And the entire science of climate change simply wouldn’t exist without computer modelling and the ability to handle multiple abstractions. “You can pull together many different pictures, rather than having to focus on one,” said Malik Ghallab, CEO for science and technology at the French national computer lab, INRIA.
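The epidemiology example gives a feel for what such modelling involves. The sketch below is a minimal discrete-time SIR (susceptible–infected–recovered) model, a standard textbook construction offered purely as an illustration; the function names and parameter values are invented, not drawn from the symposium.

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One discrete step of the classic SIR epidemic model.

    s, i, r are the fractions of the population that are susceptible,
    infected and recovered; beta is the infection rate, gamma the
    recovery rate."""
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(days, i0=0.01):
    """Run the model from a small initial outbreak and record each day."""
    s, i, r = 1.0 - i0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(days):
        s, i, r = sir_step(s, i, r)
        history.append((s, i, r))
    return history
```

Running `simulate(365)` traces an epidemic that grows, peaks once too few susceptibles remain, and then declines: exactly the kind of experiment that could not be run in the real world.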
One of Wing’s favourite examples is a proposal from geophysicists to model processes from the earth’s core to its surface, and from the earth’s surface to the sun. “And they want all the models to interact,” said Wing. Boeing’s 777 was the first aircraft to be designed entirely on computer. “It relied completely on computational simulation and methods – which goes to show how much in engineering is predicted using computational methods,” said Wing.

Worldwide, the symposium participants agreed, there needs to be more funding for basic research in computing, and for interdisciplinary research between computing and other fields. An example of the latter is a new US$52 million programme of research at the US National Science Foundation, called Cyber-enabled Discovery and Innovation. But that’s just one of several US funding programmes for computer science. By contrast, the European Union spends roughly €150 million a year on all forms of fundamental computer science; to keep up with just the US civilian programmes would require at least a doubling of resources. But money isn’t the only issue, notes Corrado Priami, CEO of a joint venture in systems biology between Microsoft Research and the University of Trento, Italy. “I would suggest more importance be given to better spending of the money, by selecting the areas in which Europe is the most competitive and which are starting up, so we can be in the lead.” Furthermore, he says, a new system is needed for reviewing grants. “Referees are getting in the way. If you try to do something on the borders of disciplines, you just get handed over from one to the other.”

Jeannette M. Wing, Professor at Carnegie Mellon University and Assistant Director of the US National Science Foundation’s computer programmes

The implications for education are broader still: it means a change in the way all citizens are trained, not just scientists and engineers. At present, only 20% to 25% of undergraduates complete their computer science courses successfully, noted Jan Beirlant, dean of sciences at Belgium’s Katholieke Universiteit Leuven. Said Wing: “Introductory courses are not inspiring – especially to non-computer scientists – because they tend to be introductions to programming. The problem is we don’t know how to teach computer science to kids: there needs to be a research programme to investigate the pedagogy of computer science.” Peter Buneman, professor of database systems at Edinburgh University, agreed. “We can’t just go into schools and say how important computational thinking is. We need to inspire. We need to find the computational thinking equivalents of the chemistry set and get those into schools.”

At present, the emphasis is on teaching computing as a tool rather than teaching the concepts that underlie it, said Martin Rem, director of ICTRegie, the Dutch government’s ICT research agency. “We as computer scientists have a responsibility to come up with good, teachable concepts for young children.”

Nuala Moran is senior editor of Science|Business, an R&D news and events service.

A LOOK IN THE CRYSTAL BALL


The free lunch is over

For the past 20 years the computer industry has grown on the back of ever-increasing clock rates. In line with Moore’s law, formulated by Intel co-founder Gordon Moore, advances in chip design have allowed performance to double every 18 months or so. “But the clock rate can’t go up any more,” said Craig Mundie, chief research and strategy officer at Microsoft, at a 19 September 2007 conference in Brussels on computer science. “We find ourselves increasingly unable to remove the heat generated by denser and denser microprocessors. Yesterday, Gordon [Moore] predicted the demise of his law in 2020.” Parallelism has long been proposed as a way out of this bind, but few in the industry were prepared to invest in the field whilst processors were taking regular, massive leaps in capacity. “We are now at the point where if we want computing to support all the things it is capable of, we need to deal with the issue of parallelism,” said Mundie. Now, said Mundie, it is up to the software community to rise to the challenge. In the immediate future this will necessitate grooming a cadre of programmers who are at ease with these architectures. It also calls for the development of newer, higher-level languages that can handle the complexity involved in parallel programming.

Craig Mundie, Chief Research and Strategy Officer, Microsoft

The move to a source of processing that is not only more powerful, but also far more flexible, has profound repercussions for all fields of science and commerce. For a start, believes Mundie, it will transform the economics of computing. It will be possible to build parallel arrays of systems to handle what today would be an impossible data-mining exercise. “This will be at the heart of breakthroughs in science and business. It will, in fact, be impossible to make breakthroughs without computing,” said Mundie.
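The programming model behind such parallel arrays can be hinted at with a small sketch: the job is expressed as a map over independent chunks followed by a reduce, and a pool, not the programmer, decides how the chunks are scheduled. This is a generic illustration in Python with invented names, not Microsoft’s tooling; Python’s thread pool is used for simplicity, and because of Python’s global interpreter lock a process pool would be needed for a real CPU-bound speed-up.

```python
from concurrent.futures import ThreadPoolExecutor  # swap in ProcessPoolExecutor for CPU-bound work

def sum_of_squares(bounds):
    """Work over one independent chunk: sum of i*i for i in [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Split [0, n) into chunks, map them over a worker pool, reduce."""
    step = max(1, (n + workers - 1) // workers)
    chunks = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))
```

The point of the map-and-reduce shape is that nothing in the calling code changes when the number of workers, or the kind of pool, changes.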


Medicine will undergo its largest transition in decades, as it becomes a far more data-driven business. There will be a focus on prevention, not alleviation. There are profound implications for computer science itself. Older fields of engineering have always evolved by what’s called ‘formal composition.’ Expertise is built up layer by layer, making it possible to attack larger and larger problems. For example, in civil engineering, design expertise is supplemented by knowledge of different or new materials, making it possible to build a longer bridge or a higher skyscraper.

“That’s not the case in computing,” said Mundie. “We haven’t mastered programming in the same way as formal composition.” What’s needed is a big advance in the formal methods of computer science. “This would move software from what is too much of an art form to a real engineering discipline,” said Mundie. He noted that one of the leading European centres researching formal methods is the French state computer lab, INRIA, with which Microsoft Research is collaborating.

Alas, funding for this kind of research is rare, Mundie noted. “Most governments are pulling back from basic research, and computer science was never regarded as basic. So there is a double whammy for basic research in computer science.” As the world’s largest spender on software research, Microsoft started to move in the direction of parallelism six years ago. Researchers at the company’s lab in Cambridge, UK are devising new languages and architectures, and creating new strategies for writing programs. The first fruits should be on the market by 2012. “But it will take two product cycles to move this ecosystem forward,” said Mundie.


By Nuala Moran, Science|Business

You can’t capture the thrill and excitement of a football match by describing the individual players. Similarly, it is not possible to understand biological processes and pathways by looking just at the component parts.

“Biology is a science of interactions and complexity. Looking at its individual components doesn’t tell you about the system as a whole,” says Corrado Priami, President and CEO of the Microsoft Research/University of Trento Centre for Computational and Systems Biology.

In the past 40 years the reductionist techniques of molecular biology have provided deep insights into thousands of individual actors – membranes, hormones, enzymes, genes, and their associated kinetics – that are involved in the functioning of organisms. But biological systems do not respond in a particular way because of one particular component or another. “They behave in a given way due to the interaction of components,” says Priami.

But understanding this requires a fundamental shift towards viewing biology as an information science. Seen from this perspective, computer science and systems biology share the same conceptual challenges. “They both need to handle complex systems that are inherently highly parallel,” says Priami. The prospect is that one discipline will feed off the other: understanding the parallelism of biology will be used to build better tools in computer science. The ultimate vision is to use living systems as computers: in effect, organisms are processing systems with all the essential properties of a highly efficient computer.

The focus of Priami’s own research is this convergence of life sciences and computer science. The aim is to develop new computational tools to enhance understanding of the evolutionary processes that are responsible for the large-scale properties and dynamics of biological systems. Concurrently, he is building a better understanding of how biological systems process information. This reverse engineering is underpinning the development of new, more powerful, more reliable programming languages that will be used to develop the software of the future.

Prof. Corrado Priami, President and CEO, Microsoft Research/University of Trento Centre for Computational and Systems Biology

“We are trying to exploit computer science as an enabling technology to enhance life science at large, and capitalise on the new knowledge to enhance computer science,” says Priami. This, then, is the vision. At a practical level, systems biology involves many different disciplines. “And once you have built a multidisciplinary team you need a common language – different sciences use different words to talk about the same thing,” said Priami. A further implication is that the basic model of research has to change. “It should be targeted and interdisciplinary – and you should make it iterative, not linear.” Research also needs to be ‘communicative’: “We need to disseminate the results in the broader community to help enhance the visibility of science and to facilitate decisions to invest money in research.”



EUROPE’S PLACE ON THE IT MAP Q&A with Dr. Andrew Herbert, Managing Director, Microsoft Research Cambridge, UK By Richard L. Hudson, Science|Business

In the global village, goes the standard economic theory, every region should have its own set of specialised skills to trade with the rest of the world – an inventory of talents and resources at which it excels and earns its keep. So what is Europe’s niche?

When it comes to computer science, Europe’s strengths are in its culture and traditions, believes Andrew Herbert, managing director of Microsoft Research’s European lab, in Cambridge, UK.

As head of one of Microsoft’s five worldwide research labs, the British computer scientist oversees a research staff of 100, and more than 250 inter-disciplinary research collaborations across Europe – for instance, in systems biology at the University of Trento, and in software security and information interaction with the French national computer lab, INRIA. As such, he has had to make his own mental map of which skills are on offer in Europe. Herewith, a glimpse of that map.

Q. What are Europe’s strengths in computer science?

A. Europe has a very strong tradition in some of the more theoretical aspects of computer science, and that’s particularly important when you’re thinking about the reliability of software. As we depend on software more and more for things in everyday life – transport, mobile phone systems, medical systems – I would like to be confident that the software works. Now, because computer science in Europe has never had the same level of funding as in the US, people tended to go more for theory. Also, mathematics has a stronger tradition in Europe than in the US. So when we are building these large, complex computer systems, we have the ability in Europe. Software reliability is also something that comes with the fact that a lot of the European IT industry has been centred on safety-critical things like aerospace: real-time safety and physical systems.

Another area where there is strength is machine learning and computer perception. That grows out of the mathematical tradition – the use of very advanced statistical techniques for image processing and handwriting recognition. Modern computers have the horsepower to run demanding algorithms that can achieve near-human levels of ‘perception’.

Two other things: Europe is a multi-cultural society, and there’s a very strong emphasis on design. One thinks of Italian fashion, Scandinavian furniture. There’s quite a lot of European strength in the field of human-computer interaction. I think of consumer electronics companies like Philips: very design-led. Then you think about European leadership in the mobile phone markets, led by companies like Nokia.

And there’s strength in the computational science and ‘e-science’ field. The US has been focused on connecting supercomputers. In Europe, we’ve been looking more at scientific collaboration, helping people work together. We’ve used computers as a collaboration technology, to overcome the fact that we’re very fragmented, that university departments are often small. The flagship for this is what CERN does with the European physics community – creating networked virtual organisations pulling groups together.

Q. What’s the obstacle to a stronger European computer-science effort?

A. There is a problem that computer science has in Europe: it is often perceived as a service, rather than as a discipline in its own right. People only bring in computer scientists when they want some programming done. The realisation that a computer scientist with a good background in computer science theory (what some call ‘computational thinking’) could work jointly with someone in biology, and produce something better than either could do on their own – that’s not well established. Computer scientists get frustrated about this. They are expected to do a lot of ‘training’ for other subjects, since many computer science departments grew out of university data-processing departments. The contribution that computer scientists can make to basic science, engineering and technology is not so well understood – but if every computer scientist went on strike tomorrow, a lot of industries would say: ‘we’d better pay attention.’

Another of the challenges for Europe is how to make sure talented people who aren’t at the best-known centres also have the chance to excel. It’s easy to focus on the top 15 or 20 labs, but we should also be tracking and supporting the strongest individuals, and be a little less emotional about supporting the institutions. The job of the institutions is to attract the best individuals and not rest on their laurels.


TRAINING THE NEXT GENERATION By Richard L. Hudson, Science|Business

If Europe is to prosper in the global economy, it needs a lot more people like Fabian Suchanek. The 27-year-old German student is full of enthusiasm for his field, computer science. He talks animatedly about his current PhD research, into the database structure of online encyclopedia Wikipedia. And as for computer science itself, it’s a field in which “you can be creative. Many other sciences try to understand what exists. In computer science, I am creating a new thing that hasn’t been there before.”


Fabian is at the leading edge of a movement to train more computer scientists in Europe. Today, the EU has 3% of its workforce in ICT professions, compared to 4% in the US, according to the Organisation for Economic Co-operation and Development. And demand for programmers, systems analysts and theoreticians is growing world-wide. Without more ICT professionals, says an industry report commissioned by the European Commission last year, the EU could be left “to imitate rather than innovate in a competitive global economy”. To avoid that fate, several new initiatives have been cropping up around Europe. Fabian, for instance, is part of a new training programme run jointly by Saarland University in Saarbrücken and the Max Planck Institute for Informatics. At the University of Southampton in England, the computer science department is trying to tempt young engineers into the field by offering a four-day programme to let them play with supercomputers to design an aircraft and fly it by simulator. In Brussels an industry consortium, the e-Skills Industry Leadership Board (including Cisco Systems, Microsoft, Siemens and Hewlett-Packard), was launched in June to promote training. And in September 2007 the European Commission announced several new projects to coordinate EU and US university programmes – for instance, moving towards a common masters curriculum in computer science, and a bachelors in information management.

Gerhard Weikum is Fabian’s thesis advisor. He says that, whether in Europe or the US, “the supply does not match the demand” for computer scientists – and that matters for the economy and society broadly, because computing now pervades every field imaginable. “Computing and computer modelling is key to many issues – there are embedded systems in cars, trains, airplanes, factories. You do a lot of virtual engineering, simulation and modelling. When we think of global warming, how do we analyse and understand it? By the methodology of computer models and simulations. In the natural sciences, computation is now the third way of doing science: there’s experiment, there’s theory, and there’s computation.”

A look at the Max Planck initiative, at Saarland University, shows the potential of these new training efforts. It’s a graduate programme, taught in English, and structured on the American model of masters and doctoral degrees. Since the programme started in 2000, it has matriculated 199 PhD students. They can work under the tutelage of the Max Planck researchers but get their degrees from the university. The programme broadened a few years ago with the addition of another Max Planck institute, for software systems. The upshot: the International Max Planck Research School for Computer Science has become a magnet for foreign students – from Algeria, Bulgaria, China, India and South Korea – who might otherwise have followed a more familiar path for ambitious international students: a move to the US. It has also attracted business interest. In July 2007, Microsoft Research announced it would contribute up to €1 million to help fund exceptional PhD students at the Max Planck Research School for Computer Science. The partnership will enable students to gain valuable experience working in a leading academic institution and be in direct contact with a leading business research organisation, with the aim of helping to develop some of the world’s most talented computing and science researchers of the future. As Gerhard Weikum said, “It is vital for computer science to bridge both fundamental and applied research, in order to keep pushing the boundaries of science and innovation.” Each year, for the next three years, five PhD students will get funding from the Microsoft Research PhD Scholarship Programme for their research projects. As part of the programme Microsoft will invite students to attend its annual Research Summer School in the UK, giving them the opportunity to showcase their projects to Microsoft researchers and local academics and to build contacts within the industry.
The most promising students will have the possibility of an internship at the Microsoft Research laboratory in Cambridge, UK.

In Gerhard Weikum’s view, the ideal programme gives its students deep knowledge in their own field – but also the training ‘to look across the fence’ at other disciplines. “The goal is to develop people so that they become independent and open-minded scientific researchers.” He cites Fabian as an example – a student who had “a neat idea, off the beaten path” and earned the freedom to pursue it for his thesis. The problem under study is familiar: when you search online you often get more than you bargained for. If you type into Google the term ‘Max Planck papers’ you’ll find thousands of references to papers by researchers at the Max Planck Institutes – and they swamp what you really want: archives of the physicist, Max Planck, after whom the institutes are named. If you could qualify your search by specifying the type of data you want – say, biographical archives – you could get the right answer faster. It sounds simple, but the problem lies in defining a knowledge base that makes sense in many different fields – an efficient ‘ontology’. Fabian’s idea was to look at how people naturally organise data in Wikipedia, the online encyclopedia, and in WordNet, another online resource. From that, he and colleague Gjergji Kasneci have been constructing a knowledge base that can be built into future-generation search engines. The research was presented at an international conference, and it helped get him to Cambridge for his Microsoft internship – where he worked on social tagging, a Web 2.0-style bottom-up approach to ontological knowledge. “It was a new environment there,” he says. “I could work with lawyers, designers, programmers.” He doesn’t know yet what he’ll do on graduation in a year, but he says the breadth of the Max Planck programme has given him a taste for several worlds – industry, research institutes, and academia.
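The idea of qualifying a search by entity type can be miniaturised as follows. The ‘knowledge base’ here is a toy stand-in with invented entries and names; a real system would distil its entities and types from resources like Wikipedia and WordNet rather than hard-coding them.

```python
# Toy knowledge base: each entity carries the types it belongs to,
# the way category links in Wikipedia might be distilled into an ontology.
KNOWLEDGE_BASE = {
    "Max Planck": {"physicist", "person"},
    "Max Planck Institute for Informatics": {"research institute", "organisation"},
    "Max Planck Society": {"organisation"},
}

# Each (hypothetical) document is indexed against the entity it is about.
DOCUMENTS = [
    ("Biographical archive of Max Planck", "Max Planck"),
    ("Recent papers from the Max Planck Institute for Informatics",
     "Max Planck Institute for Informatics"),
    ("Annual report of the Max Planck Society", "Max Planck Society"),
]

def search(query, entity_type=None):
    """Keyword match over titles; optionally keep only documents whose
    entity has the requested type in the knowledge base."""
    hits = [(title, ent) for title, ent in DOCUMENTS
            if query.lower() in title.lower()]
    if entity_type is not None:
        hits = [(t, e) for t, e in hits
                if entity_type in KNOWLEDGE_BASE.get(e, set())]
    return [title for title, _ in hits]
```

With this sketch, `search("Max Planck")` returns all three documents, while `search("Max Planck", entity_type="person")` narrows the results to the physicist’s biographical archive – the behaviour the article describes for a type-aware search engine.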

Richard L. Hudson is editor of Science|Business, an R&D news and events service.


“Europe has a team of star players, but it is not a star team.” That frank assessment of Europe’s weaknesses and strengths in research was how Dr Janez Potocnik, the EU Science and Research Commissioner, opened a campaign earlier this year to reform how R&D is governed in Europe. His effort is the most sweeping look at EU research policy in years, and is expected to result in a series of new policy proposals from Brussels early in 2008 – proposals that could make a fundamental difference in how much Europe gets out of its research budget. At the moment, there is deep discontent in Brussels and most national capitals about the state of Europe’s research base.

Sure, the latest crop of Nobel prizes was a European triumph, with two Germans, a Frenchman and a Brit dominating the 2007 Nobels in physics, chemistry and medicine. But that’s an anomaly: so far this century Europe has won only 24% of the Nobels – down from the 33% average of 1950-1999, and 73% in the prior half-century. Only two European universities (Cambridge and Oxford) rank among the top 20 in the most-watched league table for international research universities. And while EU scientists produce more research papers than anyone else, they generally score below their American counterparts when it comes to how much the papers are cited by other scientists.

Part of the problem, Potocnik argues, is that Europe’s R&D efforts are badly organised.

For starters, there’s duplicated effort: for instance, the EU counts 29 different nanotechnology-funding programmes across the 27-nation bloc, and 110 different national research grants for the study of one bacterium, campylobacter. There are inflexible rules for academic tenure, pension and employment – rules that make it difficult for researchers to move between academic and industry labs, or across borders within the EU. There are dozens of good proposals for new scientific instruments – from synchrotrons to biobanks – that never get funded because the EU members can never agree to work on them together. To start fixing these and other problems,


in April this year the Slovenian economist – named research commissioner after negotiating his country’s accession to the EU – published a ‘Green Paper’, the Brussels term for a document calling for public comment and suggestions on a problem. The document raises 30 policy questions and mentions scores of possible solutions, but deliberately avoids backing any of them, in an effort to open the dialogue to as many researchers across Europe as possible. Indeed, the launch of the paper was a political act in itself: an attempt to go around the national R&D agencies and mobilise the EU-wide R&D community behind the idea of change. “How many more millions of euros are going to be spent on replicating research institutions and sexy areas of research?” the Commissioner exclaimed, speaking at a June 2007 conference on the subject organised by R&D news service Science|Business and co-sponsored by Microsoft. “We simply don’t have the luxury of time.”

© Goldberger

The EU Science and Research Commissioner’s 2007 Green Paper is the most sweeping look at EU research policy in years, and is expected to result in a series of new policy proposals from Brussels early in 2008.

Janez Potocnik, European Commissioner responsible for Science and Research

In his view, what’s needed is a Fifth Freedom of the European Union: that knowledge should be able to move as freely across EU borders as do the other four, more widely recognised freedoms of movement for goods, services, capital and people. “These are the realities that Europe is facing,” said the Commissioner: the Green Paper “is confronting the reality that Europe does not have freedom of movement of knowledge.” At the conference, the Commissioner got plenty of suggestions. Prof. I.T. Young, of TU Delft, argued for more meritocracy in EU research grants – “in the sense that it is having the creative ideas, and not the right connections, that count.”

Andrew Herbert, managing director of Microsoft Research Europe, spoke of the disparity in skill levels between young European and American computer scientists: “What can we do to make our PhDs more competitive?” Others argued for a greater policy focus on innovation clusters – communities of universities, corporate labs and suppliers that could become regional engines for innovation. There were calls for a new ‘scientific visa’ to make it easier for non-European scientists to move from lab to lab once they’re inside the EU. And there was abundant criticism of the EU’s R&D bureaucracy. The excess financial reporting, auditing and paperwork reflect “a sort of mistrust” of researchers, complained Vlastimil Ružicka, rector of the Institute for Chemical Technology in Prague. “Trust honest people and punish offenders,” he urged the Commissioner.


The outcome of this debate will be closely watched. The preliminary feedback, the Commissioner has said, is mixed. Among the 800-plus formal, written comments that the Commission had received by early autumn, many urged action. But many were also leery of Brussels playing a bigger role in R&D coordination; the old political tug-of-war between Brussels and the national capitals, seen in trade, agriculture and many other policy areas, is also very much alive in the research world. The Commission is due to publish concrete proposals at the beginning of 2008. That’s fortuitous timing for the Commissioner: it’s also when his native Slovenia will be setting the political agenda, holding the rotating presidency of the European Union.

Richard L. Hudson is editor of Science|Business, an R&D news and events service at

THE RESEARCHERS’ DIRECTIVE By Franco Frattini, European Commission

One of the key objectives of the European Commission, as first outlined in the Lisbon Agenda and reiterated by policy decisions since then, is to turn Europe into the world’s most competitive and dynamic knowledge-based society. The Commission has striven to facilitate and create opportunities in Europe which will lead to the attainment of this goal, not least in the area of scientific research. Enormous efforts have been made to improve the quality and amount of research and development currently taking place in European universities and private laboratories, through public-private partnerships and investment in universities in particular. However, we have recognised that one problem persists. If the European Union wants to be successful in its quest to become the innovation centre of the world, then it must rapidly increase the quality and quantity of researchers within the EU: not only to ensure the progress of science and innovation in Europe, but also as a crucial means to attract and sustain the investment required.

Implementation of the Directive by member states is essential to the paradigm of ‘brain circulation’ and the development of the European Research Area.

In recognition of the need to attract talented researchers from all over the world, the Council adopted a Directive in October 2005 which sets out specific procedures for admitting third-country nationals for the purposes of scientific research. This Directive, which was established thanks to the close collaboration between my Directorate-General and DG Research, facilitates access of non-European researchers to the European Union and creates a specific residence permit for third-country researchers, which enables them to move freely within the Union for the purpose of scientific projects. The Directive is aimed at “cutting red tape” and significantly diminishes the burden of the administrative procedures involved. It represents a pioneering piece of legislation, the adoption of which is essential to the paradigm of ‘brain circulation’ and the development of the European Research Area, which my colleagues in the European Commission and I have been advocating so strongly and persistently. Moreover, the researchers’ visa system will provide an unparalleled opportunity for non-European and European workers to work together in helping Europe face the challenges of globalisation.

The date by which Member States were to adopt the legislative and administrative procedures needed to transpose this Directive into their national laws has now passed. Unfortunately, the great majority of Member States have failed to do so in time. I hope that this situation will be remedied shortly. If we do not take action, Europe will never succeed in securing the human resources required to attain its objective of investing 3% of GDP in research and development, and will, as a result, be surpassed in this crucial field. Furthermore, failing to implement this Directive means depriving European scientists of an invaluable opportunity to benefit from intellectual input from abroad and to exchange ideas with other leading research experts. We cannot let our scientists, and the rest of European society, down by not making the concrete commitments needed to make this possible. In closing, let me once more underline that ensuring the access and mobility of third-country researchers is essential and indispensable for the future of Europe, not merely as a means to improve and encourage scientific development and innovation, but also as a way to stimulate productivity and growth and to enable us to compete with other markets around the world. I therefore urge the remaining Member States to implement the Directive and thus allow us to come a step closer to fulfilling the Lisbon Goals and making Europe a more prosperous society.

Mr Franco Frattini is a Vice-President of the European Commission, responsible for Justice, Freedom and Security.


© Goldberger



THE NEXT PHASE OF THE IPTV REVOLUTION

Merely a concept at the start of the century, Internet Protocol TV (IPTV) has already completed the first stage of its growth, moving from an idea to a real service that has now achieved broad market penetration across many European countries.

In this second phase of IPTV, the main challenges relate to the daily business of delivering IPTV, including cooperation with content and infrastructure providers, taking deployment to scale, and guaranteeing uninterrupted reception and robust picture quality.

For consumers, IPTV provides connected and personalised experiences by integrating the TV with an intelligent two-way network. Forrester Research predicts that by 2017 one in four European fibre broadband subscribers will have IPTV, with penetration today already ranging from 13 percent in the UK to 33 percent in France.

For the telecommunications industry, IPTV offers the potential to generate considerable revenue streams by combining voice and internet with next-generation TV services. Telecommunications companies deliver these interactive TV services over their high-speed broadband networks. Typically, an operator will use a two Mbps line for triple-play services and some subset of it for the IPTV service. For high-definition formats, greater managed bandwidth – up to nine Mbps, such as ADSL2+ networks deliver – is needed. Accordingly, making savings on bandwidth usage is very important to service providers, especially as they look to offer bandwidth-intensive services like HDTV, currently seen as the next ‘killer application’.

Moreover, the IPTV sector is in the midst of a reality check as cable and satellite operators fight back. The impending arrival of ‘cable IPTV’ and ‘satellite IPTV’ introduces a significant new dynamic to this market as new competitors look to challenge the current players. The anticipated use of IP as the delivery mechanism for television on these networks redefines the potential for IPTV and expands its boundaries far beyond what has, until now, been considered a telco-centric activity.

The transmission of digital signals is also a critical aspect of IPTV. At present, a major debate is underway in Europe – and in the rest of the world – on the transmission methods available for digital signals. Currently, digital signals are routinely transmitted using terrestrial methods. European providers mostly work with the digital TV standard DVB-T (Digital Video Broadcasting – Terrestrial), which transmits digital TV signals via aerial antennas within their terrestrial networks. This approach – known as DTT (Digital Terrestrial TV) in the UK and Ireland, with the counterpart standards ATSC (Advanced Television Systems Committee) in the US and ISDB-T (Integrated Services Digital Broadcasting – Terrestrial) in Japan – is slowly replacing analogue television systems. The challenge ahead is in integrating different protocols.

Another major point of discussion is content, its protection and conditional access. The major Hollywood studios are reluctant to release their content to supposedly ‘open’ networks – even if IPTV is anything but an open network. Consequently, a vigorous discussion on video formats and content protection is underway. Most IPTV platform architectures are therefore designed to support multiple video formats – including VC-1, MPEG-2 and H.264 – giving service providers flexibility in their choice of video formats, including other advanced codecs. The challenge ahead is in integrating different media types. To succeed, IPTV providers need to prove that they can create systems that will make converged, cross-platform service delivery a reality. In addition, companies have to prove that they can achieve significant penetration by reaching millions of subscribers rather than hundreds of thousands. This can be achieved by capitalising on the advantages IPTV offers: the availability of a broader range of TV channels, archives with different TV formats, access to movie databases and, in the future, the engagement of the customer through interactive services. Microsoft and its partners are at the forefront of the IPTV revolution, working on innovative solutions in partnership with operators to create compelling, interactive, connected TV services for consumers.
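The bandwidth pressure behind the codec discussion is easy to quantify. The sketch below uses illustrative per-stream bitrates (assumptions, not operator figures) to show why an advanced codec such as H.264 matters when the managed line tops out at roughly nine megabits per second:

```python
# Why codec efficiency matters to IPTV operators: the managed bandwidth
# freed by moving HD from MPEG-2 to H.264. The stream bitrates below are
# illustrative assumptions; real encoder settings vary by operator.

MPEG2_HD_KBPS = 15_000           # assumed MPEG-2 HD stream bitrate
H264_HD_KBPS = 8_000             # assumed H.264 HD stream bitrate
ADSL2PLUS_MANAGED_KBPS = 9_000   # the roughly nine-megabit managed line cited above

def fits(line_kbps, stream_kbps):
    """Does a single stream fit within the managed bandwidth of the line?"""
    return stream_kbps <= line_kbps

print(fits(ADSL2PLUS_MANAGED_KBPS, MPEG2_HD_KBPS))  # MPEG-2 HD does not fit
print(fits(ADSL2PLUS_MANAGED_KBPS, H264_HD_KBPS))   # H.264 HD does
```

Under these assumptions, the same physical line that cannot carry an MPEG-2 HD channel can carry an H.264 one, which is the commercial case for multi-codec platform support.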

Microsoft Mediaroom

Microsoft’s connected TV services platform
Peter Yves Ruland, Microsoft TV, presents Microsoft Mediaroom

Driven by computer and network technologies, TV is undergoing a paradigm shift that is not only changing TV itself but also how people consume media and make it an integral part of their daily lives.

Changing the meaning of TV

The commercial roll-out of IPTV brings a range of features that mark a significant shift in what TV means: high-definition live TV channels, advanced video on demand (VOD) and digital video recording (DVR).

And giving the consumer real control

Faced with an increased variety of new channels and new media, it is critical that the consumer experiences this variety as a benefit and not as a burden. This means empowering the consumer, so that:
- Technology aids in the selection of TV offerings based on the consumer’s preferences,
- Technology aids in aligning the TV offerings with the consumer’s schedule (allowing for interruption and restarting of live streams) rather than dictating the schedule, and
- Technology aids in integrating TV more seamlessly into the consumer’s environment, e.g. by enabling a gaming console and the PC to operate in conjunction with the TV.

Furthermore, broadband allows two-way communication, enabling consumers to use the backchannel for requests and feedback, and thus more personalisation. These developments pose several challenges: IPTV providers need to acquire, manage and protect the content (knowing which consumer subscribed to which service and has access to which content) and distribute it. Consequently, effective partnership is essential between content creators, service providers and technology platforms – both to drive an integrated telecommunications service and to enhance the consumer’s experience.

This is the vision behind Microsoft Mediaroom, a platform that combines all the components that telecommunications companies need in order to deploy a robust IPTV service, from content acquisition, distribution and protection, to on-demand video streaming, digital video recording, and service and subscriber management. For the consumer, Mediaroom enables both personalisation and mobility through, for example, remote digital video recording from a mobile phone or web-connected PC, and personal media sharing on the TV, by providing centralised access to digital photos and music stored on PCs in the home.


Today, Microsoft Mediaroom supports major customer relationships with Europe’s largest broadband service providers, such as BT in the United Kingdom and Deutsche Telekom in Germany. More than 18 service providers worldwide have selected the Microsoft Mediaroom platform for their digital TV offerings, and commercial deployments are currently underway with another eight providers.


European Microsoft Innovation Center (EMIC):

Overcoming information overload and ‘the crisis of choice’
Year by year, the flood of content is rising all around us: television channels, books, music and the Internet – where not only traditional media but millions of individuals are adding content. So, in this flood, how do you find the content that matters to you? How do you discover multimedia information and entertainment in ways that suit you personally? Isn’t there an easier way? Finding what interests you doesn’t have to be an accident. A Microsoft research team in Germany is addressing, within the research project MyMedia, the key social problem of information overload and what has been called ‘the crisis of choice’ – by jumping beyond traditional recommender systems, which are based on a single multimedia source. Instead, the European Microsoft Innovation Center research project provides recommendations that are integrated from many sources. As the user, you personalise the system simply by indicating that you like a particular video or audiocast, and the system will find similar content and even learn from you what you like – the more you use it, the more it learns your preferences.

“We think personalisation is a very interesting research area with direct benefits for users. Each collaborator in the MyMedia project brings great experience and unique capabilities and we’re very excited to begin this project.”

The MyMedia Dynamic Personalisation Framework is a collaborative research project organised under the EU Research Framework Programme and involves the European Microsoft Innovation Center, the BBC, BT, Microgénesis, Telin and the Universities of Hildesheim and Eindhoven. The resulting system will allow easy integration of multiple content catalogues and recommender algorithms in a single system and provide technology for user-ranked content. The system will learn user preferences and enable the sharing of recommendation results with friends and family while observing privacy and security protocols. The technology will be evaluated on its effectiveness and user-friendliness in a variety of cultures and languages via scientific analysis tools and field trials in several European countries.
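A tag-based preference learner of the kind described here can be sketched in a few lines. The catalogue, tags and scoring scheme below are invented for illustration; they are not the MyMedia framework’s actual algorithms or APIs.

```python
# Minimal sketch of tag-based preference learning over a merged catalogue.
# Item names, tags, and the scoring scheme are invented for illustration.
from collections import defaultdict

# A merged catalogue drawn from several content sources.
CATALOGUE = [
    {"title": "Balearics diving documentary", "source": "broadcaster", "tags": {"diving", "travel"}},
    {"title": "Local dive spots (user video)", "source": "community", "tags": {"diving"}},
    {"title": "Majorca nightlife guide", "source": "broadcaster", "tags": {"nightlife", "travel"}},
]

class Profile:
    """Learns a user's tag preferences from explicit 'like' feedback."""
    def __init__(self):
        self.weights = defaultdict(float)

    def like(self, item):
        # Each 'like' reinforces the tags attached to the item.
        for tag in item["tags"]:
            self.weights[tag] += 1.0

    def score(self, item):
        return sum(self.weights[t] for t in item["tags"])

    def recommend(self, catalogue):
        # The more the user interacts, the better the ranking reflects them.
        return sorted(catalogue, key=self.score, reverse=True)

profile = Profile()
profile.like(CATALOGUE[0])   # the user likes the diving documentary
ranked = profile.recommend(CATALOGUE)
print([item["title"] for item in ranked])
```

After a single ‘like’ of the diving documentary, content sharing its tags rises to the top of the ranking, which is the feedback loop the project describes scaled down to a toy.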

Tim McGrath, MyMedia Project Coordinator, European Microsoft Innovation Center (EMIC)

Here’s a scenario that illustrates what the European Microsoft Innovation Center (EMIC) and its partners will enable within the collaborative research project MyMedia: Carin is a scuba diver planning a diving vacation to the Balearic Islands. One night she notices a documentary on the Balearics in her television programme guide. She watches the show and uses her media system, which is based on the MyMedia framework, to express her preference for more content like this in both English and Spanish. The system returns a set of choices that includes both professionally created and user-created content, ranked according to her preferences – so content about local dive spots is ranked higher than content about local discos.

After a few days of viewing, Carin has a well-tuned content set that she decides to share with her diving community on their social networking site. The MyMedia components make it easy to share this specific set of preferences, but not her preferences on other topics. Juan, a member of Carin’s dive community, sees her posting and sends her his set of preferences about diving in the Balearics, which contains some new content on environmental issues that Carin decides to incorporate into her own preferences. One of the new sources ‘rents’ its content, and Carin likes what she sees, so she keeps it for a rental fee. Another of the sources is free but uses her preferences to include advertisements relevant to her interest in scuba diving on Majorca. In the following weeks, Carin watches and listens to several more programmes about her focus area. On her return from her diving vacation, Carin uploads digital videos of her dives to her favourite community video site. Using new metadata components from MyMedia, she easily tags and annotates her content. These metadata and tags are also automatically included in her set of preferences and the preferences she had previously uploaded to her dive community site. Others on the site who have subscribed to her ‘virtual channel’ through the preferences she shared with them can now see the content from her diving vacation.



THE ‘SPARK OF LIVE’ TV ON YOUR COMPUTER Back in 2001, Matteo Berlucchi, an Italian entrepreneur and academic, was ploughing through his email inbox trying to sort the important messages from the spam, when a simple idea struck him: create a dedicated priority communication channel that could work outside of email and could be used for sending important, time-critical messages straight to the desktop.


From this simple idea, Matteo Berlucchi formed Skinkers with David Long, another entrepreneur. The idea has since expanded, and Skinkers now offers a comprehensive solution for ‘push’ communications spanning multiple communication channels and multiple devices. With over 100 blue-chip customers and 80 employees, Skinkers has gone from strength to strength.

So where does Livestation fit into this story? How does live interactive TV on your computer fit with push communication technology? The answer lies in the concept of very high-speed, high-volume push. Skinkers engineers identified the value of being able to push large amounts of information at very high speeds around corporate networks and the Internet. After thorough research it was decided that a ‘flavour’ of peer-to-peer technology was best suited to meet this challenge. The peer-to-peer concept is based on the principle that every computer connected to a network works together with its neighbours to share information: instead of requiring a large number of dedicated servers to distribute messages, content can be ‘pushed’ by neighbouring computers. The concept is simple but the technical challenges are immense, particularly when it comes to pushing high volumes of ‘live’ data to large groups of users. Skinkers found the best peer-to-peer technology at the Microsoft Research lab in Cambridge. A team of researchers at Cambridge had been looking at ways to build peer-to-peer networks for high-bandwidth content streaming and distribution, and had produced a technology named ‘Pastry’. Not only was this technology ideal for Skinkers; Microsoft had also recently formed a group, IP Ventures, whose remit was to find ways to commercialise the technologies created by Microsoft Research. IP Ventures became involved with Skinkers and discussions began to see if a marriage was possible.

In June 2006 the groundbreaking ‘technology-for-equity’ deal was struck between Microsoft and Skinkers, and development of the technology began. Once the technology was brought into Skinkers, a special lab was set up (internally referred to as ‘The Bakery’) to use the research code to develop commercial technology and solutions. Very quickly, the Skinkers engineers realised that the technology was not only ideal for pushing messages; it also opened the door to a truly revolutionary proposition: being able to stream live video – and therefore television – direct to any computer on an Internet Protocol network. Recognising the incredible potential of this idea, the decision was taken to develop a viable technology and take it to market. This was the birth of Livestation!

Livestation is a unique interactive, global broadcast radio and television platform that will allow broadcasters to distribute live audio and video to a potential audience of hundreds of millions of broadband-connected consumers. It provides a far more economically viable and scalable solution to the problem of delivering live audio and video over the Internet to mass audiences than conventional approaches. Livestation is aiming to become the de facto standard for delivering live radio and television over the Internet, offering remarkable quality audio and video through a simple software application.

With conventional streaming services, each stream is typically delivered from central servers or via a special content distribution network. Every additional user receives their own stream, which places enormous demands on the Internet infrastructure and ultimately limits the number of users that can be simultaneously supported. In a peer-to-peer network, each node functions as both a client and a server, sharing its data with other users. This helps spread the load to the edge of the network, so that capacity grows with demand.

Livestation is designed to deliver the very best user experience utilising Microsoft Silverlight. It is simple to use – just like watching television or listening to the radio! Users get a number of stations or channels to listen to or watch whenever they want. So, from a simple idea of improving communications, a chain of events has led to Skinkers’ relationship with Microsoft and, from this excellent partnership, Livestation was born. The stage is now set for Livestation to become a global standard for listening to and watching live radio and television over the Internet.

Livestation is currently in technical trials and is planning to launch globally in 2008. For further information visit:

To find out more about IP Ventures visit:


AUSTRIAN START-UP SECURES FUNDING FOR VIDEO TELEPHONY PROJECT Vienna-based start-up IQ Mobile found that obtaining funding to develop innovative solutions can be a straightforward exercise. It was awarded 15 per cent of the costs of its video telephony project and now counts Sony BMG among its customers.

© Oskar Goldberger

Günter Schneider, Microsoft, and Harald Winkelhofer, Chief Executive, IQ Mobile



“It’s really important to help small companies grow their business, because it brings more jobs, higher turnover, and greater investment in other companies, and boosts the whole economy.” Harald Winkelhofer, Chief Executive, IQ Mobile.

The European Commission identifies ICT as the biggest driver of growth for small to medium-sized enterprises (SMEs). But for many start-ups, lack of financial support can see the company fail before realising its innovative potential. Recognising this, and the importance of SMEs to the economy, the European Union (EU) established sources of funding to support these fledgling operations. But with business strategies to plan and operational processes to put in place, when do CEOs have the chance to find and apply for these grants? The answer is they often don’t, and as a result miss out on an opportunity to make their business more competitive. Harald Winkelhofer is one entrepreneur who avoided this situation when he set up one of Austria’s most exciting SMEs.

Building on more than 10 years’ experience in multimedia products for mobile phones, Winkelhofer founded IQ Mobile in June 2006. His vision centred on offering two sophisticated services to the mobile telephony market. Winkelhofer wanted IQ Mobile to become the first company in Austria to develop a video telephony platform for the mobile phone, and the first to support advertising on mobile phone portals such as Vodafone Live. In the first few months at IQ Mobile, Winkelhofer was heavily involved in preparing business, marketing and sales plans, and had little time to look for funding opportunities. “From the outset, I wanted to apply for funding,” he says, “but during the start-up phase I didn’t have time to study complex application forms.” While reading the newspaper, the IQ Mobile founder came across a means to apply for funding that wouldn’t take his focus away from the business. The European Union Grants Advisor (EUGA) programme is an initiative supported by a number of community partners and industry leaders, such as Microsoft, Intel and HP, to help increase SMEs’ awareness of, and access to, dedicated EU, national and regional funds. Winkelhofer contacted EUGA to arrange a consultation. During the initial contact, EUGA informed the IQ Mobile founder about an online competition sponsored by Microsoft Austria and news provider Pressetext. At the event, Winkelhofer scooped the competition’s first prize – €500 (US$709). He says: “There were two great things to come out of that day for me – the €500 prize and an increased knowledge of funding for start-up companies.” Winkelhofer presented his ideas for IQ Mobile and the video telephony platform to EUGA experts in a series of consultations. “It was a very structured process,” he says. “We had three or four personal meetings where I told them what the company was doing, what my goals are, what technical background we need, and what the estimated costs were, and then they recommended the funding to apply for.”

The European Union Grants Advisor (EUGA) programme is an initiative to help increase SMEs’ awareness of, and access to, dedicated EU, national and regional funds.

IQ Mobile applied for a risk-related grant from an Austrian and European funding cooperative in September, because of the risks involved in working with such new technology. By the end of 2006, the funding body had rewarded the innovative nature of the IQ Mobile project with a grant covering 15 per cent of the project’s total costs. Winkelhofer says: “The consultation process was a good way for us to work, because I could focus on my daily business and let the expert recommend which funds to apply for – that’s why we’ll consult EUGA again.”

Since IQ Mobile was founded, the Vienna-based company has grown rapidly, building up a base of 40 customers including big names such as Sony BMG and Nokia. IQ Mobile is the first company in Austria to provide a video telephony platform with interactive voice response. Impressed with the technology, Sony BMG signed up for mobile marketing and video telephony services on the first anniversary of IQ Mobile. The music and entertainment giant is set to use the IQ Mobile technology to bring previews of new music videos to its customers’ mobile phones. Winkelhofer says: “At the moment Sony is sending out previews to its online newsletter community, but soon they’ll be marketing a free preview of new music to be consumed on the mobile phone.” Over the last year, IQ Mobile has developed an extensive convergent platform (SMS, MMS, voice) with an innovative portal of mobile video solutions, and has established mobile marketing and advertising tools. Winkelhofer plans to use EUGA services again, and recognises that future grants will also help the company to progress faster. “I could invest the money in a sales employee or further technical developments, so that will definitely support the business,” he says. While Winkelhofer wants the company to extend its reach, he is keen for it to remain a small, creative, high-quality business in the mobile sector. Winkelhofer shares EUGA’s view that innovative SMEs have an important role to play in the European economy. “It’s really important to help small companies grow their business,” he says, “because it brings more jobs, higher turnover, and greater investment in other companies, and boosts the whole economy.”


GLOBAL INNOVATION IN THE HEART OF EUROPE Today, the efficient and innovative distribution and processing of information is at the core of all business activity – so the effects of new technologies on company growth opportunities, and as a result, on investment decisions, are significant. For some years, says Erich Gebhardt, the Director of Industry Engagement in Microsoft’s Unified Communications Group and Head of the Microsoft Development Center, Zurich, it has been predicted that the next major innovative leap in enterprise communications would be enabled by Voice-over-IP (VoIP), the transmission of voice applications via the Internet Protocol (IP) network. But until now, the technology has not yet lived up entirely to the expectations of users and technology suppliers. Now, however, says Gebhardt, the signs are growing that a significant tipping point has been reached: “because information technology - which for some time has been based on IP networks - is the nerve centre that manages the transmission of applications, voice and data; and now information

technology and (tele)communications are merging.”

Unified communications with added value
Gebhardt says the key to turning this much-anticipated convergence from marketing buzzword into business reality is twofold: open standards, such as SIP (Session Initiation Protocol), which have found widespread acceptance and will also shape the future of VoIP; and seamless integration into the unified communications platform components rather than the use of an isolated VoIP application.
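To give a flavour of the open standard in question: SIP is a text-based signalling protocol, much like HTTP, in which a call begins with an INVITE request. The sketch below assembles a minimal INVITE in the style of RFC 3261; all the addresses, tags and branch values are made-up examples, not real endpoints.

```python
# Illustrative sketch of a minimal SIP INVITE request (RFC 3261 style).
# All addresses, tags and branch values below are made-up examples.

def build_invite(caller: str, callee: str, call_id: str) -> str:
    """Assemble the start line and mandatory headers of a SIP INVITE."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/TCP client.example.com;branch=z9hG4bK74bf9",
        "Max-Forwards: 70",
        f"From: <sip:{caller}>;tag=9fxced76sl",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    # SIP, like HTTP, terminates each header line with CRLF and ends
    # the header section with an extra blank line.
    return "\r\n".join(lines) + "\r\n\r\n"

request = build_invite("alice@example.com", "bob@example.org", "a84b4c76e66710")
print(request.splitlines()[0])  # INVITE sip:bob@example.org SIP/2.0
```

Because any SIP-speaking product can parse such a message, components from different vendors can negotiate calls with each other – which is precisely why the standard matters for the integration Gebhardt describes.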

As Gebhardt points out, the benefits of this approach are obvious: Voice-over-IP is no longer ‘old products in new packaging’ – in other words, the adoption of well-known telephone functions using a PC and expensive IP telephones. On the contrary, voice functions are now seamlessly integrated at a stroke into the familiar office environment. This makes the user interface a lot simpler, and gaining user acceptance should be child’s play. The variety of unified communications presence and identity functions is also available unconditionally to VoIP applications.

This is what Microsoft has done in entering the VoIP market for the first time, with its Unified Communications solution, launched in October 2007.

The key application in Microsoft’s Unified Communications solution is the Microsoft Office Communications Server 2007. Its key VoIP components were developed in Europe. Over the past two years, in the cramped rooms of an old Zurich apartment, not far from the banks of Lake Zurich, a close-knit group of software engineers has played a huge role in writing the unified communications source code. The Zurich Development Center for Collaboration Technologies, opened by Microsoft in 2006, is the company’s fourth European software development centre and reports internally to the Unified Communications Group. Development work for the business segment is distributed across four sites: Microsoft headquarters in Redmond, the development centre in Zurich, and development centres in Hyderabad, the capital of the Indian state of Andhra Pradesh, and in the Chinese capital, Beijing.

From Lake Zurich to the world
The fact that Switzerland, and in particular Zurich, was given this role is anything but a coincidence – it was born from the Zurich start-up media-streams, which was acquired by Microsoft in 2005. As Gebhardt explains, “In its VoIP development work, media-streams was already focused on the advanced SIP standard. Furthermore, from the outset the media-streams developers set the conceptual course for their work, with far-reaching effects: they decided to develop their VoIP solution as an integrated component of an existing infrastructure – in other words, the mail server. This laid the foundations for integrating the media-streams technology into Microsoft’s unified communications solution.” The new solution means that certain weak points in previously available VoIP solutions for companies have been eradicated. The breakthroughs include not only the user-interface advantages noted already, but also, for example, the fact that high-quality IP equipment is no longer needed – an area where companies are still making unnecessarily costly investments. The telephone will continue to be part of basic workplace equipment, but as a simple audio device connected to the PC via a USB hub, just like the mouse and keyboard.

“The most important advantage of unified communications is the added value that is created at the workplace and the benefits that information processors receive from the close networking of voice and collaborative applications,” Gebhardt says. “In doing so, the total openness of unified communications plays a huge role. As a result, any services and products that comply with the SIP standard can be integrated – with mobile applications set to play a particularly decisive role. And it goes without saying that Microsoft unified communications will also be taking no risks with security – possibly by using total encryption of voice transmission.”

After the launch is before the launch
Development work on the next product versions of Microsoft’s unified communications is already moving full steam ahead. While the developer teams in Beijing and Hyderabad are primarily working on the development of further clients (for Windows Mobile and Outlook Web Access), Zurich is also developing ‘presence-based call routing’. This allows user presence data, already available today, to be used even more extensively for call management and process design. In addition to developing source code, Zurich is also managing the setting up of the VoIP business in Europe, the Middle East and Africa. Because advanced technology only becomes an innovation if it can penetrate the market, Microsoft’s declared objective is that in three years’ time, 100 million people will be using their PCs to make phone calls. Gebhardt says this is a realistic target given the interest the application has already generated. “Highly promising pilot tests with the Office Communications Server 2007 are taking place, above all in corporations that operate globally, where vast savings in costs are possible. So at enterprise level, where VoIP has already been a subject of discussion for some time, it is now – thanks to the Microsoft technology – a hot topic,” he says.


Gebhardt adds that in cross-corporate VoIP business communications – already, thanks to federation, at an advanced stage in today’s version of unified communications – the next step in development will focus on another aspect. “By employing what is known as SIP trunking, the barriers between VoIP and traditional telephony will be overcome,” Gebhardt explains. This means that VoIP will finally be a step ahead in ‘quality of experience’, i.e. voice quality and functionality – which is already largely the case on the LAN. The quality of voice connections between PCs is measured directly and continually enhanced during the call; and work is also being done on hosting for Office Communications Servers (OCS hosting). Above all, this will make VoIP even more appealing to small and medium-sized enterprises.

As part of the Microsoft strategy of software plus services, Internet Service Providers (ISPs) can also offer their customers VoIP services based on Microsoft unified communications as leased models. Gebhardt emphasises that, beyond ISPs, it is partners in general who will develop individual, vertical solutions on Microsoft’s open platforms and implement them in companies. Therefore, opening up unified communications to Independent Software Vendors and the support of the partner networks are also crucial focal points in building up the VoIP business. “Once Microsoft partners are making their own vast contribution to it, VoIP will indeed become the success factor in the fields of unified communications.”
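The ‘presence-based call routing’ mentioned earlier can be pictured with a small sketch. The presence states and routing rules below are hypothetical illustrations of the idea – route an incoming call according to the callee’s current state – and not the actual logic of any Microsoft product.

```python
# Hypothetical sketch of presence-based call routing: an incoming call
# is handled differently depending on the callee's presence state.
# The states and rules here are illustrative only.

ROUTING_RULES = {
    "available":    "ring_desktop",       # ring the PC softphone
    "in_a_meeting": "send_to_voicemail",  # don't interrupt
    "on_the_road":  "forward_to_mobile",  # follow the user
    "offline":      "send_to_voicemail",
}

def route_call(presence: str) -> str:
    """Return the routing action for a callee in the given presence state."""
    # Unknown states fall back to voicemail rather than losing the call.
    return ROUTING_RULES.get(presence, "send_to_voicemail")

print(route_call("on_the_road"))  # forward_to_mobile
print(route_call("unknown"))      # send_to_voicemail
```

The appeal of the idea is that the presence data already exists in the unified communications platform; routing simply reuses it for call management.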

Global Research Library 2020 (GRL 2020) is an initiative of the University of Washington Libraries and Microsoft Corporation to bring together global leaders from different sectors to shape a roadmap for research libraries for the decades ahead.

A 2020 VISION FOR GLOBAL RESEARCH LIBRARIES
The inaugural GRL2020 workshop, held in Woodinville, Washington, USA from 30 September to 2 October 2007, drew experts from around the globe – including Europe, China, India, Japan, Canada, Australia and the United States. Setting the scene for the discussion, Betsy Wilson, Dean of University Libraries, University of Washington, and Tony Hey, Corporate Vice President for Technical Computing at Microsoft, described the changing landscape for libraries thus:

“The rapid dissemination of findings, the creation of new tools and platforms for information manipulation, and open access to research data have rendered the traditional institution-based approaches to providing access to information inadequate. In order for research libraries to play a central role in this increasingly multi-institutional and cross-sector environment, we must find new approaches for how they operate and add value to research and discovery on a global basis.”



The ‘framing discussions’ at the outset of the workshop focused on a wide array of issues related to understanding and managing the output of global research. Many global problems – climate change, world-wide health threats, and international economic issues – require support for the research enterprise that transcends political boundaries, and demand new infrastructure and cooperative frameworks. Participants largely agreed on various core value propositions for the Global Research Library:

• The creation of public value is central to the mission of GRLs.
• Innovation and knowledge creation rely on sustained availability of information (information drives discovery).
• Selection, sharing, and sustainability are longstanding components of library missions, and remain so as library assets transition from paper to digital formats.
• Long-term curation of content is critical, and requires focused effort in the development of systems and standards to support it in the long digital future ahead.

The GRL2020 group also outlined critical impediments that must be addressed if the vision of the global research library of the future is to be realised:

• Funding for research and learning is fragmented and suffers from steep disparities globally.
• Intellectual property and copyright constraints increase friction in the information supply chain.
• Complexity of the stakeholder environment impairs interoperability and information flow.
• Cross-sector tensions and proprietary perspectives dilute resources and leadership.
• Infrastructure deficiencies, especially in developing countries, limit the scope and effectiveness of recognised solutions.
• Economic and technological sustainability are problems at all levels.
• Skills appropriate to the 21st-century information world are scarce in a 20th-century workforce.

It was widely acknowledged in the GRL2020 discussions that the global research library of the future will be an interoperable network of services, resources, and expertise designed to facilitate the process of research and the selecting, sharing, and sustaining of the outputs of research. Participants agreed that overlapping infrastructures must be integrated and managed within policy frameworks by staff with appropriate skills now uncommon in the field. Infrastructure was broadly interpreted to include telecommunications, protocol standards, computing, electronic publishing, repositories, discovery and delivery services, and the instructional services necessary to support rapidly changing skills that support these technologies. To the extent that common, interoperable components of such infrastructure can be agreed upon and shared, costs of various dimensions of the enterprise can be reduced and efficiencies increased.

Disparate political, economic and cultural environments often confound collaboration, so the GRL2020 participants focused on identifying areas where collective leadership could have the greatest impact. They agreed that an important first step would be an advocacy document to help create a unified, coherent voice in support of the global research library. This work is already underway among a group of participants as follow-up to the workshop.

“Research libraries have a central role to play in the radical transformation taking place in research, scholarship, science and discovery. By bringing together this remarkable group of people for several days of intense dialogue, we have generated new ideas, energy, and momentum in the essential work of shaping the future direction of libraries in what will necessarily be a global and cross-sector information environment.”

The growing worldwide trend towards Open Access is an example of a kind of social interoperability that earned attention from the group. While acknowledging that this issue is dealt with more effectively by others, the group recognised the importance of changes to the business models of scholarly publication and research as a key to improving the effectiveness of research and learning in the developing world. The Web has reduced separation among communities, and research libraries need to exploit these trends by creating collaborative environments where public, private, and governmental agencies may find common purpose and mutual benefit. Microsoft Corporation’s generous support of the University of Washington’s leadership of this meeting is an example worthy of emulation.

Betsy Wilson, Dean of University Libraries, University of Washington



Microsoft researchers develop an LCD screen that can see as well as be seen, opening up new possibilities for touch-screen technologies.


Unlike the touch screens we regularly encounter at banks and airports, multi-touch systems can recognise and react to two or more touch points applied simultaneously to the computer screen. This allows new kinds of ‘gestures’ with which people can work with software, well beyond the clicking and dragging we are familiar with today. A very simple multi-touch gesture would be to zoom in or out of an on-screen object by touching its opposite corners and then moving the forefingers further apart to enlarge the view or closer together to shrink it. Easy and intuitive.
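The arithmetic behind such a two-finger zoom gesture is simple: the zoom factor is the ratio of the current distance between the two touch points to their distance when the gesture began. A minimal sketch, with arbitrary example coordinates:

```python
import math

def pinch_zoom_factor(start_a, start_b, cur_a, cur_b):
    """Zoom factor for a two-finger gesture: the ratio of the current
    fingertip distance to the distance at gesture start.
    A value above 1 enlarges the view; below 1 shrinks it."""
    d_start = math.dist(start_a, start_b)  # distance at gesture start
    d_now = math.dist(cur_a, cur_b)        # distance now
    return d_now / d_start

# Fingers start 100 px apart and spread to 200 px apart: 2x zoom.
print(pinch_zoom_factor((0, 0), (100, 0), (-50, 0), (150, 0)))  # 2.0
```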

Steve Hodges, manager of the Sensors and Devices Group at Microsoft Research Cambridge, and Shahram Izadi, a researcher in the lab’s Socio-Digital Systems Group, have broken new ground with a novel approach to touch-screen technology, called ThinSight. By bringing multi-touch sensing to the world of thin computing devices such as Tablet PCs, laptops and handheld devices, ThinSight makes a new realm of human-computer interaction more practical and deployable in real-world settings.

The Genesis of ThinSight Over the span of a couple of years, Izadi and Hodges began to have regular conversations imagining a future in which computer displays not only rendered digital content visually, but also contained photo sensors embedded alongside those pixels which could actually see.

“We got very excited about the idea, and developed ThinSight as a way of exploring and prototyping those future possibilities,” Izadi says. Hodges and Izadi assembled a team to develop a prototype system based on infrared sensing. Instead of detecting user interaction using a camera, which requires physical distance in front of or behind the display, ThinSight sensors were applied directly to the back of an LCD panel. To create the prototype, the team cut out part of the lid of a standard laptop computer and attached the sensors directly to it. The resulting infrared images, captured at 11 frames per second, are processed using computer vision technology, enabling people to interact directly with the computer display using both hands, or multiple objects that the software can be programmed to recognise.

What if the computer screen you stare into every day was able to look back at you? A research team at Microsoft Research Cambridge is investigating that very idea, as a way to enable new modes of human-computer interaction.



“What’s exciting about ThinSight is that it is based on optical sensing, so it can perceive all kinds of objects as they get close to the screen, not just fingertips,” Izadi says. “The optical approach gives ThinSight the ability to sense outlines of infrared-reflective objects through the display. So it’s multi-touch plus the ability to support tangible interaction with objects.”

Imagine, for example, illustration software that allows you to paint with either your fingers or with a paintbrush, or both at the same time. And what if that programme could respond to objects placed against the screen, such as a stencil or an oak leaf? Other objects could be tagged with identifying markers that the software could recognise for even greater interaction between the physical and virtual worlds.

“We believe that ThinSight provides a glimpse of a future where new display technologies such as organic LEDs (OLEDs) will cheaply incorporate optical sensing pixels alongside RGB pixels in a similar manner, resulting in the widespread adoption of thin form-factor multi-touch sensitive displays,” Izadi says. “Ultimately we will either see LCDs being manufactured with these sensing pixels or organic LEDs – or other new display technologies emerging with these capabilities embedded within them.” Toshiba and Sharp, he says, are already beginning to manufacture LCD displays with sensing pixels built into the same substrate as the LCD itself.

Imaging in the Infrared
One of the key differentiators of ThinSight is that it uses infrared light, which is invisible to the human eye and doesn’t interfere with the integrity of the visual display. The prototype created by Hodges’ and Izadi’s team relies on a device known as a retro-reflective optosensor.
This is a sensing element that contains two components: a light emitter and an optically isolated light detector. It is therefore capable of emitting light and, at the same time, detecting the intensity of reflected light. If a reflective object is placed in front of the optosensor, some of the emitted light is reflected back and is therefore detected. By using a grid of retro-reflective optosensors distributed uniformly behind an LCD display, it is possible to detect any number of fingertips on the display surface. The raw data generated is essentially a low-resolution greyscale image of what can be seen through the display in the infrared spectrum. By applying computer vision techniques to this image, it is possible to generate information about the number and position of multiple touch points.

In addition to the detection of passive objects via their shape or some kind of barcode, it is also possible to embed a very small infrared transmitter into an object. In this way, the object can transmit a code representing its identity, its state, or some other information, and this data transmission can be picked up by the infrared detectors built into ThinSight. Indeed, ThinSight naturally supports bi-directional infrared data transfer with nearby electronic devices such as smartphones and PDAs. Data can be transmitted from the display to a device by modulating the infrared light emitted. Moreover, a device that emits a collimated beam of infrared light can be used as a pointer, either close to the display surface, as with a stylus, or from some distance. Such a pointing device could be used to support new modes of interaction with a single display or with multiple displays.

From Prototype to…
Izadi is quick to point out that ThinSight is only a research prototype at this stage and, while the initial results are very promising, there is more work to be done before any detailed plans for the technology can be considered. “We have shown how this technique can be integrated with off-the-shelf LCD technology, making such interaction techniques more practical and deployable in real-world settings,” Izadi says. “And we have many ideas for refining the ThinSight hardware, firmware and PC software with which we plan to experiment. We would obviously like to expand the sensing area to cover the entire display, which we believe will be relatively straightforward given the scaleable nature of the hardware. We also want to move to larger displays and experiment with more appealing form-factors such as tabletops.”
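The touch-detection step described above – turning a low-resolution greyscale infrared image into a set of touch points – can be sketched generically: threshold the bright pixels, group them into connected blobs, and take each blob’s centroid as a fingertip position. This is a toy connected-components pass over a nested-list image, illustrating the general technique rather than ThinSight’s actual implementation:

```python
# Generic sketch of turning a low-resolution greyscale IR image into
# touch points: threshold bright pixels, group them into connected
# blobs, and report each blob's centroid. Not ThinSight's actual code.

def find_touch_points(image, threshold=128):
    """Return (row, col) centroids of connected bright regions."""
    rows, cols = len(image), len(image[0])
    seen = set()
    touches = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and (r, c) not in seen:
                # Flood-fill one connected blob of bright pixels.
                blob, stack = [], [(r, c)]
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                # The blob's centroid approximates one fingertip.
                touches.append((sum(y for y, _ in blob) / len(blob),
                                sum(x for _, x in blob) / len(blob)))
    return touches

# Two bright blobs in a 4x6 toy image -> two touch points.
img = [[0,   0, 0, 0,   0, 0],
       [200, 0, 0, 0, 210, 0],
       [190, 0, 0, 0, 205, 0],
       [0,   0, 0, 0,   0, 0]]
print(len(find_touch_points(img)))  # 2
```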

But although ThinSight’s approach looks very promising technically, how likely is multi-touch computing to take off? Izadi points to the world of music as an illustrative example. “If you look at pianos, guitars, mixers — all of these things require you to interact using two hands at the same time. Invariably people use two hands to do all sorts of things. It just happens that we’ve come to use one hand, with the mouse, when using a PC. ThinSight may play a role in changing that.”

About MSRC
Microsoft Research Cambridge (MSRC) is one of the largest and most prolific computer science research laboratories in Europe, with a global impact. Through fundamental, cross-disciplinary research, MSRC aims to push the boundaries of computing, challenge scientific convention and ultimately further scientific knowledge. MSRC aims to create technologies that improve the way the world works, plays, and lives.





Microsoft® gives Tablet PCs, teacher and student training, and software to schools. By providing state of the art educational tools, it helps teachers teach and children gain the skills they’ll need to reach their potential. Find out more at

© 2006 Microsoft Corporation. All rights reserved. Microsoft and “Your potential. Our passion.” are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.



The Tablet PC has broadened our concept of how we use a personal computer: a mobile computer with a screen that allows for input via a pen or fingertips instead of a keyboard or a mouse. With a Tablet PC, users can capture handwritten notes on screen, which can then be converted into text in the required languages.

Bodin Dresevic, Product Unit Manager, Microsoft Development Center Serbia

The way that a user enters data is much more intuitive, often much faster, and applicable in situations where a normal PC is unwieldy or impractical. Although handwritten text recognition has improved in recent years, the technology available whets the appetite of many people for more, especially for applications that make a Tablet PC a natural fit with some people’s work – such as scientists and engineers.

The Belgrade Development Centre
In September 2005, Microsoft set up a development centre in Belgrade; the focus of the MDCS – Microsoft Development Centre Serbia – is the development of code for Microsoft Tablet PC technology, the Microsoft Windows operating system built for mobile computing. Tablet PC handwriting recognisers already exist for several major Western European and East Asian languages. The development centre in Belgrade expands the number and quality of supported languages; more than half of the handwriting recognisers in the forthcoming Windows version will be ‘made in Belgrade’.

In 2005, Bodin Dresevic, a sixteen-year Microsoft veteran and at that time a development manager in the Tablet PC group in Redmond, decided to take the opportunity offered by the opening of eastern Europe. “I loved my job with Microsoft. Yet I missed living in Europe. I also wanted to help stem the ‘brain drain’ in Serbia-Montenegro,” Bodin says. The breakup of Yugoslavia and the resulting unrest had meant that there was a lack of opportunities in Belgrade for talented technology engineers. A significant proportion of the country’s highly skilled professional population had emigrated to countries offering better opportunities. Dresevic, who left Serbia more than 20 years ago, is one example of this mass exodus of talented and educated professionals. Consequently, the development of Serbia has suffered. “There is a large number of people from former Yugoslavia working for Microsoft in North America. The number of highly educated professionals from Serbia and other parts of former Yugoslavia residing elsewhere far exceeds the number of professionals that work within the country,” says Dresevic. “This is something that needs to be addressed for the future of my country.”

While talking over his predicament with Microsoft colleague Dejan Cvetkovic, the two expatriates came up with an exciting idea. “We realised that we could kill two birds with one stone: continue to contribute to the value of Microsoft products while leveraging the potential talent base located within Serbia-Montenegro and rebuilding an intellectual community,” says Dresevic.

Recognition of mathematical equations
The development centre in Belgrade will not only work on text recognisers but also conduct research on how to apply modern machine-learning techniques to the recognition of digital-ink 2-D structures – in other words, building recognisers for more complex handwritten structures, such as mathematical equations and chemical diagrams. The result will give Tablet PC users more flexibility to use a computer in more places and situations than previous systems have allowed. Handwriting is the most natural and efficient way of expressing a mathematical formula – much more so than keyboard input or input through traditional pull-down menus: a simple formula can be handwritten about five or six times faster than it can be entered using standard computer input methods. The so-called math input control recognises the handwritten mathematical expressions and translates them into a standardised textual markup language (MathML). These expressions can then be transferred to other computer programmes which apply the formulas. This would make working with mathematical expressions in everyday computer programmes such as Word or PowerPoint much easier. The biggest challenge with mathematical expressions is that, unlike linear text, they are 2-D structures. And whereas the correction of recognition errors in handwritten text can draw on semantics, the correction of mathematical expressions is far more complex. The mathematical expressions can also be presented through a sophisticated graphical user interface, enhanced by visual features that make it easier to capture and edit formulas, to instantly and seamlessly solve mathematical problems, and to perform other operations provided by computer applications. This breakthrough work on math recognition at MDCS opens the Tablet PC more widely to the people whose work produces breakthroughs for society – engineers, scientists and mathematicians. “I really believe we are working on a truly innovative and exciting technology that will revolutionise the way math is input into applications. Handwritten math input in combination with Tablet PC technology will be of great use in writing scientific papers, specifications and documents in general, converting handwritten math notes to typeset form for future reference, and end-to-end solutions of math problems inside computational engines, to name a few scenarios. The interest and reactions I have seen so far can only confirm that we are paving the way of the future.”
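To make the MathML target concrete: a recogniser that sees a handwritten fraction such as x/2 might emit markup along these lines. The sketch below simply hand-builds the markup for a fraction – it illustrates the standardised output format (W3C MathML, with `<mfrac>` for fractions, `<mi>` for identifiers and `<mn>` for numbers), not the recognition step itself.

```python
# Illustration of the MathML output format a math recogniser targets:
# hand-building the markup for a simple handwritten fraction.
# This shows only the target format, not the recognition itself.

def fraction_to_mathml(numerator: str, denominator: str) -> str:
    """Wrap an identifier numerator and numeric denominator in
    MathML <mfrac> markup."""
    return (
        "<math xmlns='http://www.w3.org/1998/Math/MathML'>"
        f"<mfrac><mi>{numerator}</mi><mn>{denominator}</mn></mfrac>"
        "</math>"
    )

# The handwritten fraction x/2 becomes structured, machine-readable markup.
print(fraction_to_mathml("x", "2"))
```

Because the markup is standardised, any MathML-aware application – an equation editor, a word processor, a computational engine – can consume the recogniser’s output.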

PUSHING THE BOUNDARIES IN GRAPHICS HARDWARE
The folks at RARE, one of the iconic pioneers in computer gaming, like to joke that the quiet surroundings of their headquarters, deep in the UK countryside in Warwickshire, are the reason for the company’s consistently strong record in innovation. “From the early days it was intentional to be outside the city,” says Nick Burton, a Senior Software Engineer at RARE. “Being somewhere quiet lets you concentrate on making games, and not on where to go at lunchtime!” RARE’s track record is also built on teamwork and putting the gaming experience first. “We don’t do technology for technology’s sake,” Nick says. “The technology has to facilitate what the concept artists and designers want to do. The best term is ‘cyclical’: the designers ask for something, the programmer usually sucks his teeth and goes away to think about it, and then delivers something close to what the designers want.”

Since 2002, RARE has been part of Microsoft Games Studios where they are the biggest studio by headcount. “We’ve always had close links with UK universities and now we have access to Microsoft Research and its network as well,” Nick points out. Working on core technology as well as broader demographic games such as Viva Piñata means that RARE is constantly pushing the boundaries. “We’re entering a


very exciting phase of computer programming right now,” says Mike Boulton, also a Senior Software Engineer at RARE. “In graphics we’re seeing how a pragmatic approach to fidelity improvement is actually driving us to generate new algorithms and new ways of doing things. So you don’t feel limited by the hardware as much as by your ability to come up with novel approaches. It’s an open problem but we’re chipping away at it.”


Opposite page: Viva Piñata. Above: Kameo. Below: Nick Burton (left) with Mike Boulton, both Senior Software Engineers at RARE.

“The focus is shifting to clever means of extracting greater fidelity without resorting to the brute force method of manually increasing asset complexity,” Mike continues. “Exploiting various types of coherency will also be important. When you look into the future, you are going to see machines with a bit more RAM, but a lot more processing power. There might also be a generalisation between the CPU and the graphics processing unit – that is going to be a very interesting time. With the Xbox360, we are just at this threshold with a number of threads on the CPU and a powerful GPU.” What this means is that as users walk through a game, they will find the fidelity of the light and shadow around them changing dynamically, or that something really does jump up at them. At the moment it is impossible to calculate these for every frame as it would just take too long. “We’re moving from the situation of calculating something every frame to a continual ‘moving point average’ calculation where assets, such as a background, will be stored in a more raw and verbose format. The machine will be constantly deciding and self-analysing what to build and re-build,” Mike explains.

“It is all about focusing the power of the system on what you see and what you appreciate. The surface exposed to the player needs to have as high a fidelity as possible and there will be a much more sophisticated algorithmic approach to this challenge.” Nick Burton says the other big development is that computers will decide where to spend their power – which will prove liberating for all developers. “It feels like we entered a renaissance with the Xbox360, allowing us to do what we wanted - so as the hardware challenges evaporate even further we can increasingly focus on creating the best assets and the best games.” Another challenge that RARE has embraced is broadening the appeal of games to new audiences, particularly those who are intimidated by status quo gamers.


“We want to push the boundaries technically but also demographically,” Nick says. “Viva Piñata appeals very broadly, but the hardcore audience still want to pick it up because it is a deeper experience: it looks good and it pushes the hardware.” As Mike says, “The ideal situation is to have a game that appeals on multiple levels. Experience shows that different gamers have different levels of expectation – so generating games that appeal to all these disparate groups is probably the biggest challenge of all.”

“WHAT IF …?”

BREAKING NEW GROUND IN ENTERPRISE RESOURCE PLANNING VISUALISATION
What if you could get assistance for your complicated business decisions by simply going through various What-If scenarios and having them visualised instantly in 3D, with key metrics and resulting outcomes calculated on the fly?


At Microsoft Development Center Copenhagen (MDCC), two teams have cooperated to make this vision come true.

Morten Holm-Petersen and his research team in Adaptive Business Processes joined hands with Christian Abeln and the Client Team. Through their joint efforts, extensive user research on Dynamics NAV users was married to intricate mathematical models, and the result was an interactive 3D visualisation of key metrics and scenarios in production planning. The resulting product, known as Shortage Tracking Visualisation, has been demonstrated in keynotes by Steve Ballmer and Bill Gates at Microsoft Convergence 2007 events, at MIX and at several internal Microsoft events, creating a buzz everywhere.

Real life visualisation
Christian Abeln, an electrical engineer by education, has worked with the visualisation of complex, interrelated data for years in European research projects with major industry research partners. He came to MDCC from his native Germany 2½ years ago, eyeing the chance to apply his work to a new industry. Earlier in his career, he helped to visualise chemical processing at oil-refinery plants – where safety is necessarily a top priority and mistakes can be a matter of life and death.

Next, Christian Abeln and the Client Team worked on the sophisticated calculation processes necessary for the digital control station, in cooperation with the Microsoft Research department at Microsoft’s headquarters near Seattle.

“Throughout product development, we worked very closely with the research group and constantly exchanged code and knowledge. This is a great example of cross-site innovation in Microsoft and of how our business software development team is helping to develop new products using Microsoft Research’s academic work,” said Christian Abeln.

“My ambition is that Dynamics products will be known for their excellence in data visualisation and for interacting and providing feedback in a way which imitates real life interactions as closely as possible. Business planners should excel at doing business, not at handling business applications,” he said.

The innovation process
But long before the applause, in the early stages of the innovation process, the developers from MDCC had to find a method. To jump-start the project, they teamed up with the company Identity Mine, experts in WPF technology – the foundation for building applications and high-fidelity experiences in Windows Vista. “It was great fun to work with the WPF developers. Some of the interactions proved too hard to implement, but when you see the 3D world moving on the screen, you get new ideas for animations and interactions that can help users achieve their goal,” says Morten Holm-Petersen, who holds a Master’s degree in human-computer interaction and has been with Microsoft for more than ten years.

In developing the key features of Shortage Tracking Visualisation, Morten Holm-Petersen used results from many customer visits to identify the so-called “pain points” of people who work as planners in manufacturing or wholesale companies. Based on their feedback, he constructed the scenarios that the application should support. “By entering a little simulation world, users are able to see the way all the data is related. And once they understand the relationships, they can change them right there in the same view. That avoids lots of expensive mistakes, and it helps planners make the best of any situation,” says Morten Holm-Petersen.

Morten Holm-Petersen (left) and Christian Abeln, Microsoft Development Center Copenhagen

Both Morten Holm-Petersen and Christian Abeln are convinced that strong visualisation features will be a key demand among enterprise resource planning (ERP) clients in the future.

About MDCC: Microsoft Development Center Copenhagen is Microsoft’s biggest development centre in Europe. More than 900 employees share a common goal: to develop the simple and flexible business applications known as the Microsoft Dynamics product suite – among them Dynamics NAV, Dynamics AX and Dynamics Mobile. Microsoft Dynamics allows Microsoft partners to optimise their customers’ business processes. MDCC draws on a unique corporate culture and international work environment that emphasises a high degree of staff involvement to create an open and innovative environment.
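The kind of what-if shortage calculation that such a visualisation sits on top of can be sketched in a few lines. This is a deliberate simplification for illustration: the function name and the simple period-by-period netting rule are assumptions, not the actual Dynamics NAV planning engine.

```python
# Hypothetical sketch of a what-if shortage check: given starting
# stock, incoming supply and outstanding demand per planning period,
# compute the running balance and flag periods with a shortage.

def shortage_scenario(start_stock, supply, demand):
    """Return (balance_per_period, shortage_periods) for one scenario."""
    balance, shortages = [], []
    stock = start_stock
    for period, (inflow, outflow) in enumerate(zip(supply, demand)):
        stock += inflow - outflow
        balance.append(stock)
        if stock < 0:
            shortages.append(period)
    return balance, shortages

# Baseline vs. a what-if scenario where a 40-unit order lands one
# period earlier - the kind of change a planner would try on screen.
baseline = shortage_scenario(100, supply=[0, 0, 40], demand=[60, 50, 20])
what_if  = shortage_scenario(100, supply=[0, 40, 0], demand=[60, 50, 20])
print(baseline)  # ([40, -10, 10], [1]) - shortage in period 1
print(what_if)   # ([40, 30, 10], [])  - shortage resolved
```

A 3D front end like Shortage Tracking Visualisation would recompute such balances on every user interaction and animate the result, so the planner sees the consequence of a change immediately.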


The phenomenal growth of online advertising is driving demand for smarter tools to track results: enabling advertisers to see clearly how many people have accessed the website in question, how many have bought something, and how much they have spent. A Microsoft product development group in Dublin, Ireland, is developing advanced visualisations for Microsoft’s new website and advertising analytics software.

Part of Microsoft’s European Development Centre in Dublin, the team, called Global Product Development – Europe, is developing key components of a product called Gatineau Analytics (a beta-release codename for the advertising analytics tools in Microsoft’s adCenter offering), the first of which are due to be launched in a beta test release in January 2008. After a further test release later in 2008, the final version of the product is set to be launched by the end of the year. The product will be web-based and free of charge: there will be no need for those accessing it to buy or install any software. Gatineau Analytics will also provide information such as where visitors come from and their demographics.

The new product is off to a flying start before it has even been officially launched. Microsoft says that it has had thousands of requests from people wanting to be invited to take part in the beta testing and describes the response as “unprecedented”.



Advanced data visualisation techniques
“It’s state-of-the-art technology in the sense that we’re using very advanced data visualisation techniques to convey complex data relationships to a very non-technical, business audience,” explained Group Programme Manager Dan Stevenson. “By leveraging Ajax, ASP.NET and other Web 2.0 technologies, we’re able to present these customers with really powerful visualisations that they can understand and act upon.”

Huge demand driving huge opportunity
“There is huge demand in online advertising and we see it as a huge market opportunity,” said Stevenson. “We’re developing a product that does more than any analytics tool in the market today with the aim of driving a bigger slice of advertising budgets to Microsoft.” One of his team members, Programme Manager Olivier Dabrowski, describes the two new tools as being “a sophisticated but user-friendly representation of complex datasets”. One of the components, an advertising campaign timeline, is targeted at helping advertisers evaluate their online ad campaigns, while the other components help both advertisers and web publishers to make sense of the way people are navigating through a website.

The goal of the advertising ‘campaign timeline’ visualisation is to provide marketing managers with a long-term, at-a-glance view of how much revenue is being driven from their web advertisements placed on Microsoft’s search engine, or even competing search and ad networks such as Google or Yahoo. The top half of this interactive data visualisation shows a high-level view of ad campaigns across both online and offline media (such as television and newspapers) while the bottom half allows advertisers to ‘drill down’ to compare advertisement and purchase activity across different ad campaigns in considerable detail, comparing trends in the number of page views and resulting revenue. “They will be able to associate the revenue generated with a website event, be that purchasing a physical product, downloading some content, or signing up to a newsletter,” said Stevenson. The second visualisation is a website traffic analytical tool that uses a technique known as a treemap to display hierarchical data. In this case, it shows how many people have visited each page on the website, presented as a set of differently-sized and coloured boxes to facilitate comparison across different pages and the entire site. “People don’t necessarily navigate through a website the way the website producer expects them to,” said Software Development Engineer Vincent Vergonjeanne. “For websites with hundreds of pages, it would be highly inefficient to look at a long list of data in a spreadsheet.” The third visualisation that the Dublinbased team is developing shows the most popular paths that visitors take through a site. “This tool presents the data clearly in visual form so that the user can quickly see which pages website visitors are clicking on, how they are getting from the home page to the ‘product page,’ for example, and whether they are using some pages at all,” said Programme Manager Reeves Little.
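The core of a treemap is simple: each page gets a rectangle whose area is proportional to its view count. The sketch below uses the most basic “slice-and-dice” layout to show the idea; production treemaps (including, presumably, Gatineau’s) typically use more sophisticated algorithms such as the squarified layout to keep rectangles closer to square. The page names and numbers are invented.

```python
# Minimal slice-and-dice treemap: divide a canvas into vertical strips
# whose widths are proportional to each page's view count, so strip
# area is proportional to traffic share.

def slice_treemap(counts, width, height):
    """Map {page: views} to {page: (x, strip_width, height)} rectangles."""
    total = sum(counts.values())
    rects, x = {}, 0.0
    for page, views in counts.items():
        w = width * views / total
        rects[page] = (x, w, height)
        x += w
    return rects

views = {"/home": 500, "/products": 300, "/checkout": 200}
print(slice_treemap(views, width=100.0, height=60.0))
# {'/home': (0.0, 50.0, 60.0), '/products': (50.0, 30.0, 60.0),
#  '/checkout': (80.0, 20.0, 60.0)}
```

For hierarchical site structures, the same rule is applied recursively: each section’s strip is subdivided among its child pages, which is what lets a treemap show a whole site in one screen.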


User-friendly tool to track performance of adverts
Stevenson is confident that the product will provide the right balance of usability and functionality for the target audience, mainly business analysts and marketing managers. “Usability studies have been carried out in Dublin and in the US. The upcoming beta testing will also gather valuable feedback that we can incorporate into the final version if necessary,” said Stevenson. “When a company is selling a given product on their website, our analytics software will help them to optimise their website structure and advertising campaigns to better serve their customers and their business. By analysing complex data which are presented in a more accessible way, it will, for example, help companies track how their web-generated advertisements are performing and determine which keywords bring the best return on investment.”

The Global Product Development – Europe team is based in Dublin, Ireland, and is an example of Microsoft’s increased emphasis on core development work outside the US. The group in Dublin was created to tap into the large pool of European software development talent, as Microsoft’s own studies show that Europe is second only to the US in the number of highly qualified software engineers it produces. “Ireland’s high standard of living, high-tech culture, and progressive immigration policies have also helped draw talent from Europe and around the world,” said Stevenson.

Julian Hale is a Brussels-based freelance journalist who writes about and produces TV reports on EU policy issues for a range of publications and broadcasters. This article is based on an interview with Global Product Development – Europe Group Programme Manager Dan Stevenson, Software Development Engineer Vincent Vergonjeanne, Programme Manager Olivier Dabrowski, and Programme Manager Reeves Little. The team is part of Microsoft’s European Development Centre in Dublin, Ireland.
For more information on Microsoft Global Product Development – Europe, please visit

The Cave-Hollowspace at Lousal

VIRTUALISATION TECHNOLOGY TRANSFORMS OLD MINING SITE IN PORTUGAL INTO A LEADING CENTRE FOR SCIENTIFIC EDUCATION AND RESEARCH
Thanks to a unique partnership between the public, private and academic sectors in Portugal and Brazil, the decommissioned pyrite mine in Lousal, in the Alentejo region of Portugal, has become the site of the first large-scale immersive Virtual Reality system ever installed in Portugal. Today, the Lousal Live Science Centre provides an interactive learning experience about mining, as well as facilities for academic and commercial research and product testing using the latest virtualisation technologies. The Lousal Live Science Centre is part of the Portuguese Live Science Centres Network, an initiative of the Portuguese Ministry of Science, Technology and Higher Education to promote interactive science and technology education and the dissemination of science and technology in Portugal.

The Lousal initiative is a joint collaboration between several organisations: in Portugal, Fundação Frederic Velge, the Ministry of Science, Technology and Higher Education, and several public universities (ISCTE, Technical University of Lisbon, Classical University of Lisbon); and in Brazil, Petrobrás and PUCRio de Janeiro, a public university.

The virtualisation infrastructure at Lousal, known as a CAVE-Hollowspace, is one of the first in Europe to emerge from Portuguese and Brazilian academic research based on Windows platforms and Barco projection systems, and is oriented to science-based entertainment, science research and R&D services for industry. The system is owned by Fundação Frederic Velge, a joint venture between SAPEC, the Belgian company that owns the Lousal mine, which was in operation until the late 1980s, and the Grândola City Hall in Portugal.

The physical configuration of the Lousal CAVE-Hollowspace includes six projection planes in a U-shaped layout: two projection planes in front (5.6 m x 2.7 m of projection surface), two on the floor (with the same dimensions), and one on each side (3.4 m x 2.7 m). The system is supported by a high-end graphics computing cluster running Microsoft Windows, which is able to generate a consolidated synthetic image of 8.3 million pixels in real time and in stereo, and to manage 3D scenes with millions of triangles. There are seven sound-speakers facing the audience and a subwoofer under the floor. For public exhibitions, a storyteller navigates the Virtual Environment for audiences of up to 14 people.

Fernando Fantasia, the CEO of Fundação Frederic Velge, explains: “The Centre’s first purpose is to show a cinematic story about the miner and his work. The story is called ‘Virtual Visit to the Mine’. Other applications are also planned: the project will offer the unique possibility for schools, universities and research laboratories to test and visualise their academic and research projects. The Portuguese high-tech industry may also benefit from using this infrastructure for digital visualisation and testing of research in areas such as plastics moulding, industrial product design, oil exploration and mining, and engineering.”

Luciano Pereira Soares, PhD in Computer Graphics, a Brazilian pioneer in the development of these types of facilities in South America, and now at Petrobrás and PUC-Rio de Janeiro in Brazil, thinks that “immersive virtual reality facilities enable people to go to places where it is not possible to be otherwise. Educators can travel to any age, studying how humanity was and will be. Engineers, designers and marketers can analyse their industrial products before they are produced. Miners can enter an oil reservoir and decide the best place to dig. The CAVE-Hollowspace at Lousal will offer the most modern technology in virtual reality to fully immerse users in any virtual environment. This solution, a best-practice case in the adoption of Windows platforms for this type of high-end computing infrastructure, has accurate image synthesis, high-quality sound and precise user tracking – key elements of virtual reality simulations. I believe that in a short time, many users in Portugal and all over the world will take advantage of this facility for their projects.”

The Cave-Hollowspace at Lousal

The Lousal pyrite Mine today (Alentejo, Portugal)

According to Rafael Bastos, ADETTI researcher and PhD student at ISCTE, Portugal, “The adoption of the Microsoft platform and development tools for the base system support has facilitated our in-house core developments for the CAVE-Hollowspace, namely the infrared-based user tracking sub-system, the software to manage both the 3D graphics and 3D sound information over the network, the multimodal user-computer interaction sub-system, our real-time image synthesis algorithms and our Virtual Reality content authoring sub-system. Our system will be easily disseminated in Portuguese and Brazilian academic research. Speech recognition is another effective interaction mechanism for immersive systems. Discrete interactions such as pausing and restarting the simulation or choosing an interaction device can easily be accomplished by a voice command. For this purpose, our project has adopted the Microsoft language pack in Portuguese, developed by MLDC, which greatly improves natural multimodal interaction.”
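The discrete voice interactions Bastos describes (pause, restart, choose a device) amount to a small command dispatch once the recogniser has returned a phrase. The sketch below shows only that mapping step, in Python for brevity; the command names and state fields are invented for illustration and it does not model the Microsoft speech engine or the MLDC language pack themselves.

```python
# Hypothetical dispatch table for discrete voice commands in a VR
# simulation: a recognised phrase triggers exactly one action on the
# simulation state.

def make_dispatcher():
    state = {"running": True, "device": "wand"}

    def pause():        state["running"] = False
    def restart():      state["running"] = True
    def use_gamepad():  state["device"] = "gamepad"

    commands = {"pause": pause, "restart": restart,
                "use gamepad": use_gamepad}

    def on_phrase(phrase):
        action = commands.get(phrase.lower().strip())
        if action:
            action()          # unknown phrases are silently ignored
        return dict(state)    # copy of the state after the command

    return on_phrase

handle = make_dispatcher()
print(handle("pause"))        # {'running': False, 'device': 'wand'}
print(handle("use gamepad"))  # {'running': False, 'device': 'gamepad'}
```

In a real system the recogniser would be constrained to exactly this small command vocabulary (a command-and-control grammar), which is what makes discrete voice interaction reliable in a noisy exhibition space.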

About MLDC The Microsoft Language Development Center (MLDC) in Portugal was founded in 2005. This Microsoft Development Center is one of the four established in Europe and the first, outside of the US, dedicated to local language development. MLDC’s unique characteristic is its long-term plan to bring key language component product development to Europe and other regions. MLDC acts as an expansion branch of the Redmond-based Microsoft product development group, responsible for speech in Microsoft, and benefits from the experience, technological background and support of this group. MLDC aims to support the language expansion policy of the Redmond-based product group, by creating a ‘best practice of local language development’ that can be applied in Europe and other regions.



INNOVATION FOR SOCIAL AND ECONOMIC EMPOWERMENT Innovation isn’t just what scientists and engineers do: it’s also a feature of many community-based organisations and social entrepreneurs. But the result is the same: new ways of doing things and new products and services that enable people to change their lives.

Often, the critical inputs that spark social innovation are seed-funding and partnership. Across Europe, Microsoft supports hundreds of community-based organisations, and the people they help, to realise their potential as innovators.

The Microsoft Unlimited Potential Community Technology Skills programme is a global community-based effort to extend IT skills and economic opportunities for young people and adults. Microsoft is committed to providing the training and tools to create expanded social and economic opportunities that can transform communities and help people realise their potential. The Community Technology Skills programme provides cash grants, software and a specialised curriculum to community technology centres in 102 countries. http://www.mi­

FOUNDATION HORIZONTI: Enabling digital inclusion for visually impaired people in Bulgaria Like many visually impaired people in his native Bulgaria, Guner Mehmed felt excluded from the digital age. Unable to access information, or communicate with friends on the Internet, Guner felt isolated and found it hard to find fulfilling employment.

Guner’s life changed when he joined an IT and business skills training course in Sofia, run by Foundation Horizonti with support from Microsoft and Sofia University’s Faculty of Philosophy. Using a new Bulgarian-language speech synthesiser known as Speech Lab, Guner learned to use the Internet for the first time. As he explains, “The main thing that has changed my life is that I can now access information from the Internet. I am able to check currency rates online, read electronic books and chat with friends.” Through Speech Lab, Guner has gained immense confidence and new skills, and is now using the Internet to find new employment.

Foundation Horizonti was founded in 1995 to help blind and visually impaired people in Bulgaria make the most of new technologies and to gain IT skills to improve their lives and employment prospects. There are 18,000 permanently blind people in Bulgaria and over 40,000 whose visual impairment prevents easy mobility. In 2004, Foundation Horizonti began working with Microsoft and the Bulgarian Association of Computer Linguistics (BACL) to develop and distribute a Bulgarian voice synthesiser that can act as an interface to software applications for the visually impaired. Microsoft also funded a community technology centre in Sofia, which trains members of the community and has trained more than 200 trainers to teach others IT skills using Speech Lab. Since the introduction of Speech Lab, Foundation Horizonti and 19 local branches of the Union of the Blind have made the technology freely available throughout Bulgaria.

Commenting on the impact of Speech Lab, Ismail Salih Ismail, a trainer and volunteer at Foundation Horizonti, explains: “Before the development of Speech Lab, we were constrained to use Russian synthesisers that were not appropriate for the Bulgarian language. With Speech Lab, access to Bulgarian websites has increased by 100 percent.” Another trainer, Ivaylo Marinov, adds: “The most important element for me is that the people I have trained are now able to look for work or create their own businesses. People are also able to make new friends over the Internet and access a range of services. In short, through Speech Lab people can improve their professional qualifications and become more integrated within society.” Foundation Horizonti chairman Husein Ismail says that “Speech Lab has made a great contribution to the visually impaired community in Bulgaria, giving us the chance to have a normal life and moderate our future independently. I think undertaking similar projects in other countries will be of great value for the people and society, and we are ready to share our knowledge and experience with them.”


ADIE IN FRANCE: Supporting unemployed and poor people to become entrepreneurs

In France, where unemployment is high, lack of finance and insufficient skills prevent many people from fulfilling their dream of running their own small business. Created in 1989, Adie (Association for the right to economic initiative) is an organisation that aims to support unemployed and poor people across France to create a small business through microfinance and skills training. In 2004, Adie and Microsoft formed a partnership to open IT training centres to provide training on IT and small business management. Over 22 IT training centres are now established across France, providing two-day courses at the end of which trainees are eligible to receive a refurbished computer. Often, the people who come to the course are scared of computers but are innovators and entrepreneurs at heart. Catherine Morel, 45, is one such individual. She had always wanted to be a seamstress and clothes designer but due to financial and family hardship had worked in temporary jobs. When she heard about the Adie programme, Catherine jumped at an opportunity that has changed her life. After presenting her project for approval, Catherine obtained an Adie microcredit and also took an IT and business skills course at the Toulouse IT training centre, opened by Adie and Microsoft.

Catherine had never used a computer before, but today is confident in her book-keeping and in managing orders from her clients, for whom she produces original, custom-made clothes. “I love to create,” says Catherine. “For me drawing and designing is a true passion. I design my own models from scratch, make patterns for mannequins, and my daughter shows off my designs with her friends at our neighbourhood events.” She has regained confidence in her abilities and now has the capability to provide for her family. “I love my new business and I could not have done it without the help of Adie and Microsoft. I am very grateful for the opportunity they have provided me,” she says.

Since Adie’s creation in 1989, 42,990 SMEs have been created through the Adie microcredit programme. With the introduction of the IT training centres, the positive influence of Adie is being extended even further. Maria Nowak, Adie’s founder, says the Adie-Microsoft partnership is particularly relevant: “It addresses three key issues of today’s society – to reduce the digital divide, support the unemployed to get back to work, and provide essential support to SMEs created by unemployed workers.”


Futures magazine Next edition in April 2008

CREATIVITY AND COLLABORATION How do creativity toolkits and virtual collaborative environments accelerate each other?

COMPUTING IN MY SURROUNDINGS If computing devices are everywhere, do they understand where they are? A look at tools such as Virtual Earth in action.

COMPLEXITY What do computer software systems have in common with systems we see in nature, and can we address both with the same concepts and tools? How can we make software do what we want it to do?

COMPUTER GAMING The next wave of innovation in reality and personalisation. Do the video game characters do what I want them to do?


The Microsoft® Imagine Cup is a global competition that helps young innovators in the EU and 90 other countries pursue their creativity, ideas, and dreams. It helps young people around the world become the high-tech inventors and skilled workers of tomorrow. Find out more at

© 2006 Microsoft Corporation.

Futures 01