Experience the force of Intergraph Geospatial 2013 at a road show near you.
THE FORCE THAT DRIVES SMARTER DECISIONS Welcome to Intergraph Geospatial 2013 WE ARE UNITED. Whether it’s by desktop, server, web, or cloud – our integrated geospatial portfolio delivers what you need, where you need it. Less hassle. Complete workflow. One partner. WE ARE MODERN. Our fresh and intuitive interfaces and automated technology transform the way you see and share your data. Our world has new challenges. Combat them with a smarter design.
GEOSPATIAL.INTERGRAPH.COM/2013 © 2012 Intergraph Corporation. All rights reserved. Intergraph is part of Hexagon. Intergraph and the Intergraph logo are registered trademarks of Intergraph Corporation or its subsidiaries in the United States and in other countries.
WE ARE DYNAMIC. Leverage our single integrated, dynamic environment for spatial modeling. Our core geospatial tools enable you to exploit the wealth of information found in data from any source.
Leica RCD30 Oblique Life from a Different Angle
Looking at Life from a Different Angle! The new Leica RCD30 Oblique camera system is specifically designed for high accuracy 3D urban mapping and 3D corridor mapping applications. Based on the leading Leica RCD30, the world’s first 60MP multispectral medium format camera, the Leica RCD30 Oblique boasts a number of unique photogrammetric design features that not only offer superior image quality and highest accuracy, but also highest flexibility. Photogrammetric Quality – A Measurable Difference for Urban Mapping! Please visit our website to learn more or contact us directly: +41 71 727 3443.
For more information visit http://di.leica-geosystems.com www.leica-geosystems.com
INTERGRAPH GEOSPATIAL 2013 TEAM GEO-FORCE
Experience the force that’s driving smarter decisions at a road show near you.
Streamline your workflow from beginning to end with unparalleled search functionality, exploitation capabilities, and product creation for commercial and defense industries. Discover your data with GXP Xplorer. Search multiple data stores across an enterprise with a single query to locate imagery, terrain, text documents, and video. Exploit data using SOCET GXP® to create geospatial intelligence products with advanced feature extraction tools, annotations, and 3-D visualization for planning, analysis, and publication. Our Geospatial eXploitation Products give you the power to deliver actionable intelligence, when it counts.
Imagery courtesy of DigitalGlobe
GXP XPLORER AND SOCET GXP. MAXIMIZE YOUR PRODUCTIVITY — FROM DISCOVERY TO EXPLOITATION.
business
Ola Rollen, Hexagon
Steve Berglund, Trimble Navigation
Jack Dangermond, Esri
Carl Bass, Autodesk
Greg Bentley, Bentley Systems
Jeffrey Tarr, DigitalGlobe
John O’Hara, Pitney Bowes Software
Kamal K Singh, Rolta Group
Matt O’Connell, GeoEye
Raymond O’Connor, Topcon Positioning Systems
M P Narayanan

technology
Stephen Lawler, Bing Maps, Microsoft
Ruedi Wagner, Leica Geosystems
Fraser Taylor, Carleton University, Canada
Don Carswell, Optech
David Erba, Stonex Europe
Johannes Riegl, RIEGL Laser Measurement Systems
Amar Hanspal, Autodesk
Steven Hagan, Oracle
Jeff Jonas, IBM & Chris Tucker, The MapStory Foundation
Disclaimer Geospatial World does not necessarily subscribe to the views expressed in the publication. All views expressed in this issue are those of the contributors. Geospatial World is not responsible for any loss to anyone due to the information provided.
PUBLICATIONS TEAM
Managing Editor: Prof. Arup Dasgupta
Business Editor: Bob M. Samborski
Editor — Building & Energy: Geoff Zeiss
Editor — Latin America (Honorary): Tania Maria Sausen
Sr. Associate Editor (Honorary): Dr. Hrishikesh Samant
Executive Editor: Bhanu Rekha
Deputy Executive Editor: Anusuya Datta
Product Manager: Harsha Vardhan Madiraju
Assistant Editors: Deepali Roy, Vaibhav Arora
Sub-Editor: Ridhima Kumar
Geospatial World Geospatial Media and Communications Pvt. Ltd. (formerly GIS Development Pvt. Ltd.) A - 145, Sector - 63, Noida, India Tel + 91-120-4612500 Fax +91-120-4612555 / 666
Price: INR 150/US$ 15
Owner, Publisher & Printer Sanjay Kumar Printed at M. P. Printers B - 220, Phase-II, Noida - 201 301, Gautam Budh Nagar (UP) India Publication Address A - 92, Sector - 52, Gautam Budh Nagar, Noida, India
All guarded by TSshield.
PowerTrac + RC-5 Advanced Auto-Tracking Technology
Windows® CE + MAGNET
Easy operation + Powerful features
Topcon Positioning Middle East and Africa FZE P.O.Box 371028, LIU J-11, Dubai Airport Free Zone, Dubai, UAE Phone : (+971)4-299-0203 Fax : (+971)4-299-0403 E-mail : email@example.com Website : www.topconpositioningmea.com
application
Ron Bisio, Trimble Navigation
John Graham, Intergraph
Bhupinder Singh, Bentley Systems
Arvind Thakur, NIIT Technologies
Joep van Beurden, CSR
Drs. Th A J Burmanje (Dorine), Kadaster, The Netherlands
Geoff Zeiss, Geospatial Media & Communications
Christof Hellmis, Nokia
BVR Mohan Reddy, Infotech Enterprises
Bill McKenzie, Thomson Reuters
Jay Pealman, IEEE
law & policy
Dawn Wright, Esri
Barbara Ryan, Group on Earth Observations
David Schell, Open Geospatial Consortium
Prof. Josef Strobl, University of Salzburg, Austria
Kevin Pomfret, Centre for Spatial Law & Policy
Siebe Riedstra, Ministry of Infrastructure and the Environment, The Netherlands
Mark Reichard, Open Geospatial Consortium
COLD WINDS. SLIPPERY SOLES. 30-METRE DROP. WE’RE TOTALLY COMFORTABLE IN PLACES LIKE THIS.
© 2012, Trimble Navigation Limited. All rights reserved. Trimble and the Globe & Triangle logo are trademarks of Trimble Navigation Limited, registered in the United States and in other countries. All other trademarks are the property of their respective owners. SUR-208-GSW (10/12)
editor speak
Prof. Arup Dasgupta Managing Editor firstname.lastname@example.org
The world we want in 2013!
It’s that time of the year when editors emulate astrologers and attempt to predict the future. However, as making predictions is a risky endeavour, I shall restrict myself to creating a wish list instead. What would we like to see in 2013? Well, inertia being what it is, it would be great if we could at least begin to move in these directions in 2013.

The one major activity that must pick up speed is societal application of geospatial technologies. The Rio+20 declaration has made this a political agenda. The tsunami in Japan and Hurricane Sandy have added an element of urgency. Not very far behind are the very basic needs of energy, food, water and shelter for a vast majority of the human race. There has to be a better connect between these needs, the technological solutions and the unique capabilities that geospatial systems can provide.

One solution that has become a part of life is meteorological and oceanographic satellites. They provide invaluable data for weather prediction, sea state and navigation. This data is shared, and there is no charge for it or for the analysed information, as it is considered to be for the ‘public good’. All meteorological and oceanographic satellites are scientific satellites and therefore publicly funded. Why then are we so obsessed with the commercialisation of remote sensing satellites when it comes to land? The events of 2012 have shown that the so-called commercial satellite services depend heavily on public funding. Countries use public funds to finance their land remote sensing programmes and sell the data, ostensibly to recover the running costs. Why then is direct and free access to data from such space programmes denied, particularly to non-governmental organisations and the research community? Why is meteorological data free but land remote sensing imagery not, given that both are for the ‘public good’ and supported by taxpayers’ contributions?

Governments play Big Brother by using remotely sensed and other data for their development programmes. These programmes provide ‘one-size-fits-all’ solutions and do not address the needs of individual beneficiaries. Modern ICT systems do allow the generation of personalised information.
The entire LBS market thrives on this philosophy, but when it comes to the social sector, this seems to have been ignored in spite of efforts like public participatory GIS. If urban dwellers can locate a desired restaurant using their mobile phone and the services of an LBS provider, why can’t this be done for a farmer looking for help on a drought-resistant crop variety? “The World We Want,” as outlined by the Rio+20 declaration, has to have open access to data, which developers can use to create individual user-centric applications accessible on mobile phones. This is a vast market waiting to be served. The technology for such applications and their delivery exists in full measure. High resolution imaging satellites, automated data analysis and mapping, the cloud and handheld devices are a few of the technological tools already available. Standards have been developed and are constantly being upgraded as new technologies emerge. However, barring a few examples, the efforts to harness these technologies remain within the realm of governments. SDIs must begin to serve the public directly as well as through mediated services implemented by the government and private service providers.
Such services will require convergence of systems at a much higher level. At the technical level, we still see a compartmentalisation of remote sensing, GIS, surveying and mapping. In reality, satellite remote sensing is also used in photogrammetry, which is a mapping activity. GPS, used for LBS, is also useful as a surveying tool. We need to move away from such compartmentalisation and consider all these technologies as parts of a geospatial ecosystem which need to be used in an integrated manner to create solutions. Similarly, geospatial systems are a part of a larger ICT ecosystem. We also need to see
integration with the mainstream management and governance systems. Management and governance are subject to regulatory environments which are unable to keep pace with the fast-changing world of ICT. Many governments still rely on denial to control what they consider to be inappropriate use of information. Recent events have shown how social media can be used to mobilise human resources. Long-term efforts like OpenStreetMap have shown the power of people to create maps. During disasters like Sandy, the social network has been very useful in creating a supporting web. The citizen can become a sensor and collect vast amounts of data very quickly, which is beyond the capacity of a government department.

One of the ways of achieving integration is through integrated capacity building. Geospatial systems and techniques should become a part of domain-oriented courses. This has happened in disciplines like geology and civil engineering. It needs to be extended to all other domains where geospatial technologies are used or can be used. In particular, it should be a part of civil services training programmes. It is heartening to see that rudimentary geospatial training is becoming a part of school education.

The Internet world has put the individual firmly centre stage in terms of interactive information gathering, storage and delivery. The year 2013 should see the emergence of the individual as the focus of geospatial information in a similar manner. The passive, amorphous and faceless ‘beneficiary’ or ‘user’ should be replaced by millions of personas, each with their unique characteristics, needs and capabilities. Interacting with them will be the major challenge for industry and governments. Are you ready?
Ola Rollén President & CEO Hexagon AB
Redefining the language of geospatial industry
A map, however exciting it may be, is just a map. All the conventions discuss maps a lot. Now the new buzzword is 3D, which just adds one more dimension to the same map. This industry needs to move beyond mapping. Unless you put a layer on top of the map that is interesting for a specific audience or user, it has no value. For example, a utility company is interested in finding solutions to outage problems. It wants to know when and where an outage happens so that it can dispatch maintenance people to restore electricity as soon as possible. By adding this layer of information, the map becomes a useful tool to get the electricity flowing again, so that the utility company can continue its business and its customers can continue to enjoy hot tea or watch their favourite TV programme. That’s the context you have to put into the map.

A map in isolation is quite useless. But when you add activity to it, it becomes commercially viable. If I create the world’s most accurate map and nothing more, I probably won’t find anyone willing to pay for it. At the end of the day, what good is a great technology if no one wants to use it? If a customer is not willing to pay for the technology, it’s probably not good. I think that is the ultimate acid test for any technology. One must have a vision, but that vision should not be just about technology. It has to be defined by a purpose and linked to a customer’s need. That is the difference Hexagon is bringing to most of the industries we are engaged in today. This technology can surely resolve many of the great challenges that we, as mankind, have to face in the next 20 years. But to be able to do that, we need to re-define the language, because we are selling this technology to people who are not experts. We are at the beginning of re-defining the language of the geospatial industry. And one cannot do it alone.

It is amusing to see industry players react to Hexagon’s acquisition of Intergraph. The acquisition triggered a lot of activity in the industry, when people realised that maybe it is not wrong after all to look at the workflow of the customer rather than one nitty-gritty detail of a total solution. The geospatial industry will see more such activity and eventually move from being a horizontal industry to a vertical one focussed on certain applications. Take the analogy of sports: one cannot be good at 100 m swimming, the 100 m sprint and everything else. One has to choose a specific discipline. Going forward, the geospatial industry has to focus on one or a few customer groups.

Lessons from Google

In many geospatial conferences, people ignore Google. It is the biggest geospatial company in the world, which the geospatial industry does not acknowledge. But the fact of the matter is that Google has delivered something to the world that this industry couldn’t deliver even though it possessed the technology all this while. Google has delivered what the consumer market needs and we should give credit to them. They have created a baseline for the industry with maps, virtual earth and augmented reality. However, they work in a different space. Hexagon is all about the professional market. We don’t have any major role to play in the consumer market.

[Figure: The evolving geospatial market. Blue represents the traditional “static GIS” industry (high accuracy, few updates) before Google came and took a chunk of this space; “dynamic GIS” — maintaining the accuracies available, which are good enough, while adding more updates — is the opportunity for the professional GIS community and the way forward for the industry.]

This industry has been delivering solutions that focus on better accuracy but limited updates. If accuracies and updates were plotted on a graph, the traditional geospatial industry could be represented by the blue colour in the histogram. In 2004-2005, Google came and took a chunk of this space. The industry hasn’t acknowledged it, but a big chunk of its historic market disappeared while it continued to focus on accuracies. In most cases, the achievable accuracies are good enough today. We need to maintain these accuracies but add more updates, creating fresher maps. We call this dynamic GIS, and that’s what our vision is all about.

I think the two technologies we really need to master are capturing data and converting it into useful information in the shortest possible time. The other end of the spectrum has customers who do not use laptops and computers. They use cell phones and tablets. And we need to distribute that data back to the users in a comprehensive way so that it can be used wherever the customers are and whenever they want it. It is all about speed.

The fusion approach

Thinking forward, we need to re-create the real world as a dynamic digital world by fusing various geospatial technologies with other modern technologies. We can then bring that information back to the real world, empowering a billion-plus people to discover the power of geography.
Steven W. Berglund President & CEO Trimble Navigation
Geospatial is a survivor instinct now
The current trends in the world economy are not conducive to a strong investment environment. This is causing caution around the world. The world economy may remain very uncertain for the next five years or even longer. However, I think this is also the most technologically dynamic period. Any company which is selling capability, and not capacity, should be able to do well in such a truncated economic environment. There is an inherent excitement about the possibilities and the subsequent impact the geospatial industry may have. Geospatial technology can rightly park itself in scenarios where there is an existing fleet of machines or instruments, and can sell productivity and add value to existing capabilities even in a difficult market. So the excitement comes from the standpoint that we are not necessarily tied to the economic cycle, because we bring value to our users, we bring substance to our world and we define our world continuously. The possibility to innovate in the next five years will
continue to be our source of excitement. From sensors to computing power, data storage to connectivity, visualisation and data interaction across all industries, there have been inventions along the way. The challenge, however, is to have a strong and lasting impact on applications and solutions. Both in terms of the economic scenario and in terms of technological innovations, the industry presents a lot of challenging opportunities, and we have to do well and do new things that we have never considered before.

Let the market define itself

There is a sense of letting the market define its own way. Just as the Internet defied all attempts to pre-define it, geospatial technology will defy all attempts to pre-define it. Look how popular Google Earth is. There is a need to see that democracy exists in terms of data storage and data access. Therefore, the ability of governments or data providers to control and manage data will become limited. So I think it’s best to watch the trends as opposed to saying, ‘here’s the solution’ 5-10 years down the line. We are either moving through an inflection point where the technology is moving into the mainstream, or we intellectually understand and appreciate the potential of the market.

Geospatial technology applies to a number of industries. Construction, real estate, energy, utilities, environment and agriculture are a few important ones. At the core of the enterprise market, there is a strong geospatial-centric database. Eventually, every piece of data and every enterprise solution for any industry is going to involve geospatial components. There is a possibility to redefine industries and to apply geospatial awareness in a productive way. However, it is still early for most industries. The core geospatial capability has the ability to transform multiple industries, and our role changes from industry to industry. It is very early and there is a lot of runway left.

Empowering field force

With connectivity, in-field computational power and the ability to access large databases, technology is empowering field workers and integrating them into the larger work processes of the enterprise. This process is not quite democratic, but there is a team of activities that we are contributors to. Through crowdsourcing, one can create a democratic GIS, but it will lack data integrity. However, the days of sourcing data only from a designated few are passé. There are going to be multiple sources of data, and one of them could be crowdsourced data. High-value engineering design companies cannot afford to have unreliable, poor-quality data in their systems. Such enterprises will still require qualified datasets for their mission-critical activities and need absolute data integrity. There is, however, no reason why there cannot be more qualitative data augmentation. So, it’s a question of
how you go about using multiple standards of integrity or quality coexisting in the same universe.

Geospatial – the survivor instinct

The value proposition of geospatial technology is not necessarily the lowest price. It starts with quality and reliability. It’s about having a mutually satisfying relationship with the user which goes beyond a single transaction. The value proposition is working with customers, working with users, to raise expectations. At the CEO level, there is limited understanding of geospatial technology in other industries. However, there is a lot of appreciation for the impact the technology has had on organisations. Geospatially-enabled solutions are being recognised and appreciated. For instance, in construction, contractors are realising that if they have to stay competitive in an industry with severe margin pressure and heavy competition, they need geospatial technology. Whether it is articulated as geospatial or not is an open question, but the appreciation for the technology in general is growing. It has become a survivor instinct now.
Jack Dangermond President Esri
It’s a ‘small’ world
Every company, including Esri, was at one time an SME (small & medium enterprise). Small businesses are a seed of innovation; they often are places where new ideas can be birthed and tried out. They are also often close to end-users and are best able to respond to their needs. Today, many of the companies that were start-ups in the 1960s and 1970s have become large geospatial companies with toolsets and partner programmes that enable small businesses to develop their solutions faster, without the risks those early start-ups faced. This has resulted in thousands of new businesses being created and extended, some becoming large companies in the geospatial marketplace. Esri’s experience and history provide a good pattern for understanding the functioning of SMEs. We have seen five kinds of companies that have been very successful in this industry.

Database creation companies

These include companies that do survey and mapping, data conversion and aerial photography. Having a strong COTS GIS software platform for the data creation process has contributed to their success. Initially, this was a high-risk business. Conversion vendors typically used a combination of their own software as well as CAD and surveying software to create and automate more intelligent map data, resulting in
quality issues, particularly when the data was completed for use in GIS. Gradually, most of them migrated to the GIS software platform and dramatically lowered the risk by creating ‘GIS-ready’ datasets. Many of them have added GIS services to their list of offerings.

Consultants

These are companies with GIS technical knowledge that help end-users understand their needs and design GIS systems (technology, databases, applications, workflows and organisational/governance structures) as well as work plans for their implementation. Here, the COTS platform has made this effort far easier and lowered the risk for end-users.

System integrators

These are organisations (small and big) which implement systems. They take databases, hardware, GIS software and other software applications and craft complete systems for agencies or businesses. These systems bring together all the components of the GIS and provide them as working systems for end-user organisations. Sometimes these companies build custom interfaces between systems. Many SMEs have been able to work with end-users in bringing all the various pieces of the GIS together, and provide services to manage these systems as well.
Application software developers

These companies leverage the core GIS platform to build end-user solutions for specific industries and workflows such as engineering, emergency management, cartography, asset management and hundreds of others. The main focus of these organisations is to build applications that respond to very specific end-user needs. Companies that build these industry-specific applications often wind up productising their development and selling it to others. They are most in need of developer-oriented technology with good documentation and software development kits. There is a huge marketplace for these solutions.

Web developers

These are interested in building simple maps and business location applications based on web services. This market is a fast-growing area and promises to be the largest and most pervasive.
Conclusion

Most successful entrepreneurs have fundamentally had a vision or a passion to make a difference. A number of them subsequently sold their businesses to become part of a large organisation that took their ideas to scale. One of the fundamental elements driving success is to have this passion to invent or create something. This passion is about more than simply financial success. It often involves making an impact on society or the world. The challenge is to reach out to the new web community that is less interested in the details of GIS technology and more interested in having simple mapping APIs that can be embedded and used to build their applications. A huge opportunity awaits the geospatial industry to leverage the work it has already done to help this new community. This is a new and exciting business for the geospatial industry and one that promises many rewards for SME participants.
Carl Bass President & CEO Autodesk
Designing a better world
The word “design” can be used in many different ways and with many different meanings. There’s an expansive view of design that people recognise when the word is used as a verb: whether I am building a space shuttle or a city, I say I am designing a space shuttle or designing a city. But when you talk about design as a noun, somehow the word gets squeezed into a very small space. Suddenly it’s about, say, graphic design or typeface design, which is a very limited view. The full view of design is a lot wider than that, especially when we talk about design as a way to make the world better. To me, design is a problem-solving skill that exists in your head and which can help you make things in the real world. Design is all about enhancing things, expanding their capabilities and using our own abilities to make this a better world. It is part of the process of making buildings, infrastructure, roads, dams and highways, but it’s also about giving shape to our basic urge to build all of these things.
The power and responsibility of the designer

Today, design is playing an increasingly important role in the world, especially as new technologies increase the power and availability of our design tools. This democratisation of design is giving more people the ability to address and solve problems, creating new opportunities and improving the world we live in. Unfortunately, not all designers take this responsibility seriously: thinking about the implications of designing something and asking themselves questions like, how do I design an eco-friendly city, or a more fuel-efficient car, or better solar panels. By asking and answering questions like these, designers can have a tremendous impact on the world.

Design choices can define, for example, the carbon footprint of a city. There’s a vast difference between New York, where most people do not own cars, and Los Angeles, where you can’t really live without a car. That is the result of two very different designs, not quite contemporaries but not that far apart in time. An incredibly different set of design decisions was made that allowed one city to develop in one way and the other to develop in another. Today, New York is much more environmentally responsible than Los Angeles just because of its public transit (rather than the car). Thinking
carefully about the impacts of design decisions like these is important for people designing new cities. Design can address the challenges presented by enormous social changes, like the move from rural to urban areas that we are seeing all over the world. This move is creating new pressures on resources and raising urgent questions about how all these new urban dwellers will get to work, where they will go to school and how they can get clean water — all of which can be addressed with better design.
Democratising design
The human ability to create and use tools has differentiated us from the other animals, influenced how we developed as a species and even today defines (and limits) what we can accomplish. And by tools, I don’t mean only software tools: when we created the stone hand axe 2.6 million years ago, it helped us to do things we could never do before; when we created the steam engine, we could do more; and now that we have new tools, we can go even further. In many ways, tools determine the boundaries of our capabilities, so it is the responsibility of the toolmaker to keep trying to expand those boundaries. We have also seen a casual use of design tools evolve that is quite different from the professional perspective; if I am developing a railroad from Beijing to Shanghai, that is a very different thing from, say, designing a park bench or a shoe rack. Democratisation has created a new awareness about design. To be clear, democratisation doesn’t mean everybody now does design — it means that design is accessible to anyone interested in pursuing it. As a toolmaker, one important thing we can do is make sure that our tools are easy
Image courtesy VTN Consulting
Geospatial tools are important for design, because design is usually related to a particular location on earth. A design has to be created in the context of where it will exist: where a building will be, where a dam will be and what will be the effect on the land and people in the area
to use, easy to discover and as broadly available as possible. We do for design what Google did for maps — democratise a technology. But just because you can Google and look at directions doesn’t make you a professional cartographer — it just means you have more information available than you used to. In the same way, in the world of design, we’re blurring the distinctions between “professional” and “non-professional” by giving people access to capabilities that previously only professionals had. But that doesn’t make everyone a professional designer — what it really does is make everyone a potential designer.
Designing in context: The power of localisation
Geospatial tools are important for design, because design is usually related to a particular location on earth. A design has to be created in the context of where it will exist: where a building will be, where the roads will run, where a dam will be and what will be the effect on the land and people in the area. Localisation still influences design to a large extent. Of course, these days you no longer have to travel to see a building, because the day it’s unveiled you can see it on the Internet. But I think the localisation preference still seems to be culturally rooted; just because I see something doesn’t mean everyone wants it, and vice versa.
Infinite computing for a better world
New technologies are changing the way professional designers go about their work. Infinite computing is all about computing getting cheaper, more scalable, more powerful and ubiquitous. It allows us to explore alternatives on a computer before we actually commit to construction in the real world. Today we do not need to speculate about how energy efficient or sustainable a city is; we can measure it, precisely, in real time. We can now build digital prototypes on a computer and get simulated answers, quickly and cheaply. Infinite computing makes this possible, and the tools are going to get even more powerful over the next couple of decades. Good design is a distinct human endeavour. It is a spark of creativity, nurtured by way of a disciplined process and equipped with remarkable tools – and it can help us imagine, design and create a better world.
Greg Bentley CEO Bentley Systems
The ‘G’ is everywhere
The world is transitioning from being a society that is largely agrarian to one that is industry oriented. I am afraid that urban migration, and how we prepare our cities for the increasing infrastructure and economic demands brought on by rapidly growing populations, will always be an issue. Therefore, the task at hand is to employ technology tools that help make cities livable, ‘green’ and economically productive as they continue to expand. In infrastructure, geospatial technology is not an end in itself but the means to an end. The end here is to improve our planet and the quality of life for billions of people around the globe who are not aware of geospatial technology because it is implicit in the way they use the apps on their communication devices, or when they get on the rail system and the signalling slows or speeds their journey. It is a good reminder to us that we are empowering and enabling. It is the users who are accomplishing and their constituents who are benefitting. The industry is advancing GIS for infrastructure, and we emphasise that because we think it
In infrastructure, geospatial technology is not an end in itself but the means to an end. The end here is to improve our planet and the quality of life for billions of people around the globe who are not aware of this technology
is infrastructure that directly contributes to the quality of life.
Factor of production
Modern technology and information mobility have made it possible to have a hands-on approach from design to construction and the operation of infrastructure, instead of a hand-off between design and construction and a hand-over between construction and operations. Look at the example of China, where, as a result of the Jiangxia 500-kV transformer project in Wuhan, rural citizens are getting electricity, in some cases for the first time. Doing that requires investment in generating capacity, transmission and distribution. Another example is in the area of hydropower. Hydropower projects are some of the most interesting and challenging projects because of their enormous scale. Technology helps users in China, India or elsewhere in the world to create better-performing and smarter hydropower plants; the result is that less coal is burned and the whole world benefits. So, even though we normally think of infrastructure in an urban context, the quality of life of those who live in rural areas is also improved significantly. The need is for a handshake between government and industry to make sure that information is made available. And software without information goes nowhere, particularly in the geospatial world. I do not think it is really important for us to argue about the geospatial industry and its destiny, because the more time we spend asking that question, the more opportunity we are losing. Moving ahead, we should be thinking of the industry as a factor of production in most economic activities. In Italy, a number of people do not pay their taxes, and I ask the question, ‘Can technology help find tax evaders?’ In Italy, when the aerial surveys were compared to the cadastre maps, it was found that there were 1.2 million structures that had never been permitted and, therefore, never taxed. Thus, the use of technology can help the world become a more resilient and fairer economy. Five years from now, our subject area will have expanded from what was once 2D mapping and more recently 3D geospatial modelling; I say we are now increasingly into performance simulation. The ‘G’ is everywhere, and we have learnt that geospatial is as implicit as it is important.
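At heart, the Italy example is a comparison between two registries: structures visible in the aerial survey versus parcels recorded in the cadastre. A minimal sketch of that comparison follows; all IDs are invented for illustration, and real systems match detected building footprints to parcels spatially rather than by shared identifiers.

```python
# Structure IDs detected in an aerial survey vs. those in the cadastre.
# All IDs are invented; production matching intersects detected building
# footprints with registered parcel geometries.
surveyed = {"bldg-001", "bldg-002", "bldg-003", "bldg-004", "bldg-005"}
registered = {"bldg-001", "bldg-003"}

# Structures that appear in imagery but were never permitted or taxed.
unregistered = surveyed - registered
print(sorted(unregistered))  # ['bldg-002', 'bldg-004', 'bldg-005']
```

Scaled up to national imagery and cadastre databases, this same set difference is what surfaced the 1.2 million unpermitted structures.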
It is not important for us to argue about the geospatial industry and its destiny, because the more we ask that question, the more opportunity we are losing. We should think of it as a factor of production in most economic activities
Jeffrey R. Tarr President & CEO DigitalGlobe
A new era of opportunities for geospatial intelligence
The year 2012 was a momentous one. The geospatial intelligence industry continued to grow, diversify and innovate at a dramatic pace. Amidst the change, four important trends emerged. Taken together, these trends highlight the challenges, but also give confidence that a new wave of opportunity for our industry and the customers we serve is just ahead.
Democratisation of geospatial intelligence: Free mapping services accessed through the major online portals have made satellite imagery ubiquitous in the lives of people worldwide. The rise of social media is having a similar impact on imagery analysis, with a rapidly growing number of end-users, in all walks of life, becoming creators and consumers of geospatial intelligence.
Defining trends
Rising competition: The proliferation of new sensors and new applications from both pre-existing and newly formed businesses created a more intense competitive environment. Imagery, mapping, mobility and geospatial applications are becoming central to the way people around the world live and work.
Maturation of cloud-based solutions: Cloud-based geoint has long held great promise for reducing adoption cost and complexity for enterprises, but 2012 was a tipping point. As more mature cloud-based platforms continue to be released, geoint is becoming far more accessible, allowing new groups of users to begin harnessing the power of geospatial intelligence across a wider range of industries.
Diversification mitigates risk, ensures that our necessary ongoing investment can occur for scale and technology leadership and enables the cost of those investments to be spread across a wide revenue base and not borne by just a few large customers
Proliferation of mobile access: More than ever before, users are accessing geospatial imagery, information and insight on-demand through tablets and smart phones. As more data moves to the cloud, the proliferation of mobile applications to access that data will continue to accelerate, creating a fertile environment for the next generation of geoint start-ups.
Core fundamentals
In this dynamic and potentially chaotic environment, it is critical for our industry to stay focused on a core set of fundamentals.
Intense focus on customer needs: Instead of researching and developing technology for technology’s sake, our industry must continue to identify what customers really need and want. This knowledge should be the most important driver of our decisions and strategies.
Move beyond just data, to information and insight: For a growing number of customers, the imagery or data by itself is no longer enough. We must transform that data into information and insight, helping customers answer their most important location-based questions and solve their most pressing challenges.
Deliver maximum value through an ecosystem of partners: We can deliver a higher level of value by engaging a diverse ecosystem of partners with deep customer experience in their respective industry verticals. We must then create innovative ways of delivering the resulting integrated information, on-demand, in an easy and intuitive way to the user.
Diversify revenue bases to benefit customers, shareholders and ourselves: Diversification is beneficial not only for every industry provider, but it is also critically valuable to each of our customers. Diversification mitigates risk, ensures that our necessary ongoing investment can occur for scale and technology leadership and enables the cost of those investments to be spread across a wide revenue base and not borne by just a few large customers. So, while 2012 reminded us that our industry is constantly changing, just like the planet we observe and analyse, it also began a new era of opportunity. The coming year will no doubt bring more changes. New sensors, new operators and new applications will make news on these pages in 2013. Those who successfully innovate with a focus on customer needs will thrive and in the process contribute to customer success and a better world.
John O’Hara President Pitney Bowes Software
Geocoding attributes for multinational expansion
Today’s market leaders are using geocoding capabilities to visualise relationships and see opportunities to streamline processes, reduce costs, manage risk and optimise customer communications. Many insurance companies and financial services organisations are using high-quality geocoding tools to increase the accuracy, immediacy and insight of enterprise location intelligence in order to differentiate themselves from competitors and delight their customers. Insurance and financial services firms, and companies in many other sectors, are now asking how they can expand this capability internationally. However, there are a host of challenges in ensuring that geocoding solutions travel well. From data cleansing to analytical issues, expansion into each new country must be handled carefully.
High accuracy and low false positives
There are still significant challenges to ensuring the accuracy and actionability of geographic insights. Many organisations have begun using reverse geocoding for a level of precision that goes beyond other geo-location solutions. Reverse geocoding takes latitude and longitude coordinates, turns them into an address and returns location results within milliseconds. The process also allows for scalability, enabling organisations to translate millions of records at the same time. End users are provided with pinpointed location data that can inform real-time marketing, logistics and other business decisions. Ultimately, successful reverse geocoding can assign each address to a specific building rooftop.
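A reverse geocoder is, in essence, a nearest-neighbour lookup over a large reference set of addressed points. The sketch below illustrates the idea at toy scale; the gazetteer entries and coordinates are invented, and a production engine would use a proper spatial index and geodesic distances to achieve millisecond latencies over millions of records.

```python
import math

# A tiny illustrative rooftop gazetteer (entries invented for this sketch).
ROOFTOPS = [
    (40.7580, -73.9855, "1560 Broadway, New York, NY"),
    (40.7484, -73.9857, "350 5th Ave, New York, NY"),
    (51.5007, -0.1246, "Westminster, London SW1A"),
]

def reverse_geocode(lat, lon, max_km=0.5):
    """Return the address of the nearest known rooftop, or None if nothing
    lies within max_km. This linear scan only illustrates the idea; real
    engines index the reference points (e.g. with an R-tree)."""
    best, best_km = None, max_km
    for rlat, rlon, addr in ROOFTOPS:
        # Equirectangular approximation -- adequate at city scale.
        dx = math.radians(lon - rlon) * math.cos(math.radians((lat + rlat) / 2))
        dy = math.radians(lat - rlat)
        km = 6371 * math.hypot(dx, dy)
        if km < best_km:
            best, best_km = addr, km
    return best

print(reverse_geocode(40.7481, -73.9860))  # a point a few metres from one rooftop
```

Batch translation of millions of records is then just this lookup applied record by record, which is why the workload parallelises so well.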
Operational location intelligence
Operational location intelligence helps businesses make informed decisions with each geocoding exercise. For example, high-quality geographic information is critical to the insurance industry, as risk is typically tightly coupled with location. Home policy premiums are influenced by location within natural hazard zones, while insurance companies seek diversification of both core products within the same regions and market distribution across different regions. Location data is necessary to complete the underwriting process and inform operational activities, such as balancing exposure to aggregate risk across different areas, catastrophe management, assessing theft and damage potentials and regulatory reporting to governmental agencies and commissions. This also helps insurers to better estimate potential losses and helps customers to purchase the correct amount of cover at the right price. Insurers can suffer major financial losses if risk assessment is not done accurately. Location intelligence offers them sharper accuracy, enabling them to price policies more competitively instead of taking large geographic areas and apportioning a small risk to every single property.
High throughput at high speed
Too many companies are still relying on postal codes, which lack true location precision and can seriously misinform business decisions.
Reverse geocoding is enabling organisations to return location results within milliseconds, providing end users with pinpointed location data that can inform real-time marketing, logistics and other business decisions
Organisations need to use longitude and latitude coordinates to gain a true view of their location data, allowing them to solve even the most complex of business problems. Geocoding capabilities are enabling organisations to verify location data and transform location insights into valuable business intelligence to reduce risk, increase customer satisfaction and streamline operations. For instance, financial institutions are able to use geocoding tools to ensure that decisions on tax assignments are based on accurate location data, allowing them to avoid costly errors. Many companies also need to process millions of location data records from multiple sources at high
Effective international geocoding needs to offer a consistent API to be used across countries, an extensive selection of databases and the flexibility to use third-party databases
rates. An efficient geocoding solution will be able to process high quantities of location data in a matter of seconds.
Single platform for all countries
More and more organisations are turning to international geocoding for insights to support their planning and operations worldwide. However, geocoding at an international level poses a range of new challenges: address formats vary, different languages are often used within one country, localities or points-of-interest may replace street names and key pieces of information that are not required for local delivery will be unavailable for use in geocoding.
Worse still, many companies are using unfit geocoding tools. This can have the side effect of creating more false positive returns, which wrongly identify the location of a site. Businesses could be making decisions based on false data that they believe has been verified, costing them time and money and jeopardising new ventures. With all of these challenges, effective international geocoding needs to offer a consistent API to be used across countries, an extensive selection of databases and the flexibility to use third-party databases. Further, it is important for decision makers to understand the accuracy of the available geocodes so that they have a sense of the level of confidence they can place in the data. Organisations need to ensure that this is conveyed through a results code that indicates the geographical accuracy of the data.
Deliver clear business insight
The growing volume of data related to the location of events and transactions can and should be analysed in ways that will contribute to business insight across a variety of operational and analytical dimensions. These can include influencing customer behaviour, analysing risk, identifying credible threats and evaluating tax dependencies. As the nature of business transactions is increasingly influenced by location, and more and more business users participate in studying the location component, organisations need to make sure that mapping capabilities are included in geocoding solutions. Clear graphical representation of location data is critical for general business users to understand the data and to act upon it. Companies are beginning to realise the need for a definite global strategy to manage these processes. With a little planning and commitment, considering the attributes above can help them map out business success across the world.
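The results-code idea described above can be made concrete with a small filter: downstream decisions accept a geocode only if its stated precision meets the minimum the business case demands. The code labels and their ranking here are invented purely for illustration; real geocoding engines return their own vendor-specific result codes.

```python
# Hypothetical precision codes, ordered from most to least precise.
# Real geocoders return vendor-specific result codes; these labels and
# this ranking are invented to illustrate the pattern.
PRECISION_RANK = {"ROOFTOP": 4, "STREET": 3, "POSTCODE": 2, "LOCALITY": 1}

def usable(result_code, minimum="STREET"):
    """Accept a geocode only if its stated precision meets the minimum
    the use case demands (e.g. flood pricing may require ROOFTOP).
    Unknown codes rank 0 and are rejected."""
    return PRECISION_RANK.get(result_code, 0) >= PRECISION_RANK[minimum]

print(usable("ROOFTOP"))                        # meets the STREET minimum
print(usable("POSTCODE"))                       # too coarse for STREET
print(usable("POSTCODE", minimum="LOCALITY"))   # fine for coarse analysis
```

Carrying the result code alongside every geocoded record is what lets an underwriter, say, price rooftop-accurate matches differently from postcode-level ones.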
Kamal K Singh Chairman & CEO Rolta Group
Powering the next frontier of information revolution
Sustaining the earth, in the face of fast-changing geopolitical balances, economic turbulence, technology thrusts and population dynamics, depends on our ability to maintain a delicate balance between human-driven planetary modification and declining thresholds for land, water, atmosphere and biological systems. As our lives become more and more complicated due to the competing demands of development and conservation, geospatial information and solutions are coming to our rescue and helping us manage our resources better. Today, we live in a world that has been dramatically transformed by technological innovations and advancements. In the last three decades, GIS, combined with modern information technology, communication, the Internet and advances in space technology, has significantly transformed our lives. The use of geospatial information is increasing rapidly. There is a growing recognition amongst both government and the private sector that an understanding of location and place is a vital component of effective decision making. Citizens with no recognised expertise in geospatial information, who are unlikely to even be familiar with the term, are also increasingly using and interacting with geospatial information, and in many cases contributing to its collection as well. Geographic information is becoming ubiquitous in almost every aspect of government and citizen lives.
There is a growing recognition amongst both government and the private sector that understanding of location and place is a vital component of effective decision making. Geographic information is becoming ubiquitous in almost every aspect of government and citizen lives
G-tech and economic growth
In today’s world, the use of geospatial technology is a very important catalyst for economic growth, one that can radically change models of governance and business on the whole. Timely, reliable and comparable information on social, demographic, economic and environmental conditions is a key input for policymakers making evidence-based planning and policy decisions. The importance of information to individuals and organisations, and therefore the need to manage it well, is growing rapidly. The advent of PCs, the Internet and mobile telephony has provided a wide choice in the collection, storage, processing, transmission and presentation of information in multiple formats to meet diverse information requirements, leading to an information revolution. Considering that all information pertains to a certain geography or location on earth, geospatial information plays a key role in powering the information revolution.
G-tech and information management
Man has been creating maps for thousands of years for navigation, territory identification and a host of other uses. From hot air balloons to aerial flights to satellites, man’s ingenuity in gathering geographic data has been commendable. With advances in sensors and platform technologies, processing capabilities, and mobile, cloud and ICT capabilities, geospatial information is getting more accurate, more dynamic and capable of being provided in real time. This rich geospatial information is being used to reveal new insights about the physical world, our relationships with it and amongst ourselves. The movement of geospatial technology to SaaS, PaaS or DaaS is becoming popular and profitable, with valuable geospatial content to serve. Today, more than ever, dynamic geospatial information is enabling governments to understand complex situations like economic trends, natural disasters, ocean levels, military action, or even demographic dynamics. With an increase in fully automated decision systems, geospatial computation is increasingly consumed by machines rather than humans. This power of geospatial analysis is providing a new way of managing and mainstreaming digital information, which has become a crucial enabling factor in the information revolution.
G-tech for public good
Campaigns to promote geospatial data as an essential public good, and the consequent support from free and open data policies, are increasing the ease of access to information, underscoring the value of geospatial information in increased economic activity. The emergence of volunteered geographic information, location-enabled social media and other actor networks is adding a new dimension to the information revolution. The biggest challenge is to integrate all the data, both spatial and non-spatial, residing in different sources into a manageable environment for a unified view in real time. Data fusion technology, i.e. geospatial fusion, can facilitate data sharing and integrate both spatial and non-spatial data from disparate sources to create a common intelligence picture. This approach is key to providing capabilities which establish a flexible, robust and solid foundation for information exchange, interaction and decision making in any organisation. Increasing integration of geospatial technology with enterprise information and communication technologies will now see the advent of a “geospatial web” in the coming years. However, the sheer existence of this data is not in itself a driver of value. One critical factor influencing the impact of the information revolution is the sophisticated analytics being applied to aggregate, analyse and manipulate the data being collected. Technological evolution will accelerate, with previously niche geospatial information technologies becoming mainstream, while mainstream technologies like cloud and SaaS will get absorbed into geospatial information. Data will be increasingly interconnected through the Web via capabilities such as linked data, and this will challenge standard methods. Technology will enable rapid distribution and absorption of information, and also accelerate responses to that data, to the extent that location devices will be pervasive. The emerging trend is towards the provision of 3D and even 4D geospatial information to meet global goals. Geospatial information, coupled with other big data components, is empowering and ushering the information revolution into a new era of effective decision-making, increased productivity and improved performance. The philosophy of big data analytics is changing long-standing ideas about the value of experience and the nature of expertise, unleashing innovation and opening up new avenues in public-private collaboration. This is spurring new business models and bringing a fundamental shift in how businesses are run. The geospatial industry is at the core of this fundamental shift and is ushering in a new frontier in the information revolution.
Increasing trends in the integration of geospatial technologies with enterprise information and communication technologies will now see the advent of a “geospatial web” in the coming years
Matt O’Connell President and CEO GeoEye
Collaborate, standardise, optimise to evolve
It is absolutely critical that the industry makes our imagery more accessible. A common theme that the industry needs to consider is standardisation of processes, platforms and systems
The geospatial industry has grown more than anyone could have predicted just 10 years ago. With many imaging satellites currently in orbit, more scheduled for launch and the introduction of a new breed of end-to-end geospatial solutions, our vibrant industry continues to demonstrate its ability to evolve.
More than pretty pictures
The geospatial industry had to evolve. We had to do more than just take pretty pictures and update maps that capture our physical geography. Today, we are incorporating other types of information in a geospatial context, including data related to culture, tribal boundaries, political affiliations and the many ways they inform us about what’s happening in the world. These new capabilities are enabling us to deliver new levels of situational awareness and deep analytic insights that equip warfighters and first responders to make life-or-death decisions. Over time, we believe that many of the breakthrough geospatial innovations that have changed the way our military operates and how we personally navigate the world will affect every facet of our society. We see enormous potential to apply these innovations to solve larger economic and social problems. So, how are we going to get there?
Enhanced accessibility
It is absolutely critical that the industry makes our imagery more accessible. A common theme the industry needs to consider is standardisation of processes, platforms and systems. The goal of the industry should be to optimise the satellite collection platforms — we have to become capital efficient. We can do that by designing collection systems that fulfill the broadest set of customer requirements. Also, future large-scale programmes should take advantage of current collection platforms that offer a “good enough” spec at a lower cost. The commercial industry is approaching a point of diminishing returns on investments in improving resolution and accuracy for the broader market. As we become capital efficient, we can use the savings from lower-cost programmes to invest in innovations that will help us build new solutions. We can then use these solutions to create new offerings that help us solve tough problems. We can see Web platforms being developed that provide online access to a growing range of high-resolution, time-sensitive premium geospatial content.
Imagery, the pixels themselves, remains the underpinning of the business, but today the focus is on being more than just a pixel provider, to providing services and information. Customers need insight drawn from the imagery coupled with other sources of information. This is a tremendous area for growth. The geospatial industry is uniquely positioned to help its customers reduce risk and manage limited resources. When responding to a problem, threat or opportunity, the ability to narrow the area to which you need to deploy or apply scarce resources is priceless.
Turning pixels into insight
The industry has successfully made the transition from simply collecting pixels to turning those pixels into insight. The industry will have challenges in the years ahead. One of the challenges is the fiscal constraints that many of the world’s governments face. A natural reaction to austerity is for individual enterprises to combine, as evident in the recent merger of GeoEye with DigitalGlobe. The US has had to reduce its budget, particularly its defence spending. The combined company will be able to deliver to the US government the imagery it needs, while substantially reducing the government’s spending. The industry also needs to collaborate more, especially across borders, through systems standardisation, data collection and problem solving. Collaboration must improve in areas where we have shared interests, whether that is in preventing piracy and conservation crimes, understanding climate change, responding to humanitarian disasters or fighting terrorism. All governments should adopt policies that will enable stronger, deeper partnerships and freer trading in imagery. We all need the same three things: the best imagery; easy access to that imagery anytime, anywhere; and tools to get more information from that imagery. We hope that as the industry continues to grow and evolve, every country will recognise the power of geospatial technology to help make better, more cost-effective and more efficient decisions that benefit our world.
Many breakthrough geospatial innovations that have changed the way our military operates and how we personally navigate the world will affect every facet of our society
Raymond O’Connor President Topcon Positioning Systems
In five years, automated machines will be a standard practice
Construction is really a manufacturing industry. Roads, buildings, bridges, highways, pipelines — all are manufactured. But compared to other industries such as automobiles, telephones, electronic goods or computers, the construction industry is very antiquated. If you walk into the production facilities of any of these industries, everything is automated, whereas construction and agriculture, which are the two largest manufacturing industries in the world, representing between USD 8 to 10 trillion a year, are the least automated of the manufacturing industries. Look at how other industries were automated in the 1960s and 1970s — measurement instruments and machines were integrated to automate whole processes. Look at how cars are built by robotic arms — it is all integration of positioning
Construction and agriculture are the two largest manufacturing industries in the world, representing between USD 8 to 10 trillion a year. They are also the least automated of the manufacturing industries
technology and machinery that goes into the process. Then think about the construction industry in the civil engineering world!
Automation to be a standard practice
When I joined Topcon in 1993, I was anxious to connect an instrument manufacturing company [which Topcon was] to the machinery in order to realise this vision of automating the construction industry. Of course, I didn’t think it would take 20 years. But today, with the connection of optical instruments, GPS and machinery manufacturers, there are very few civil engineering jobs in the developed countries that do not have integrated machine control technology. In many cases, this has led to dramatic gains in productivity and a 50 to 100 per cent improvement in cost savings. Until the last five-to-six years, most of the integration of machinery and measurement instruments was done in the aftermarket. Now, most major machinery manufacturers have started making this equipment available in-house. So we have seen a transition from pure aftermarket to
integration of specified products, software and solutions. When that happens, you know you have reached a certain critical mass and it has become a standard for the industry worldwide. In most developed countries this is an accepted practice, and as the cost of the technology comes down, it will increasingly become standard practice. The industry is readily embracing this technology and understands the importance of automated machines. We are going to see more and more machines — bulldozers, motor graders and excavators — being automated. In the next five years, every major machine supplied from the factory will have the option to come automated, as all major construction companies will use this technology — they will have no choice, given the changes in the marketplace and the savings that can be realised.

Cost saving and productivity gain

There are tremendous savings in terms of time, manpower and the environment. That translates into a more profitable business. If you can do the work in half the time, you are burning half the amount of fuel, too. You are dropping more money to your company's bottom line. Machine control drastically cuts the cost of fine grading in construction, or of sowing seeds and harvesting in agriculture. There is a 30-40 per cent gain in productivity. Some of our customers report a doubling of productivity; that means an average 50 per cent reduction in costs in the specific areas where this technology is being used. Consider the 3D system that goes on a bulldozer: it costs USD 60,000-70,000, and a company investing in that technology will get the investment back in as little as three months. Today, with the cost of machines, fuel, labour and everything else going up, how do you make money? Your
answer is to focus on productivity improvements, i.e. technological advances. Our industry is moving from a product-and-technology industry to a solutions-oriented one, where customers are demanding not only hardware but also integrated solutions and compatible software. A look at some of these machines will tell you why. They do all the scheduling; not only controlling the machinery but also connecting it to local offices and contractors, where they schedule what a machine is doing, which part of a project it is working on and how much work is being done. Integrated systems speed up machinery. Every total station that we produce is connected via telematics, so information about any change can be immediately available to engineering. That integration is an important part of making it a seamless solution. I see that part of the business not only continuing but growing rapidly — precision and seamlessness through execution and through expansion.

This is just the beginning

The industry is still in its infancy. When you look at all the construction going on all over
the world — highways, infrastructure, buildings, pipelines — the cost of constructing that infrastructure is a fraction of the cost of maintaining it over its lifetime. Until recently, we did not have a good way of capturing all the data so that the people responsible for managing that infrastructure could capture, handle and manage the data throughout the life of their projects. But the business is growing exponentially because the technology is getting better, and the ability to manage huge amounts of data and processes has improved in the last five years. The biggest and fastest-growing industry for us at the moment is agriculture, though the opportunities are endless in construction. In 2000, the agriculture industry's use of precision measurement equipment was worth probably less than USD 100 million; last year it was more than USD 1 billion. There are a lot of things we, as a company, can do to build the market, expand it more rapidly in more areas, expand our networks, standardise data formats and educate the emerging markets. You have to be smart enough to look into the future to make sure you continue to grow your business.
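The payback and market-growth figures quoted in this interview can be checked with some back-of-the-envelope arithmetic. The sketch below is illustrative only: the contractor's monthly earthwork spend and the 2000-2012 window are assumptions, not figures from the interview.

```python
# Back-of-the-envelope checks on the figures quoted in this piece.
# The monthly spend and the 12-year window are illustrative assumptions.

def payback_months(system_cost, monthly_spend, cost_reduction):
    """Months needed for the monthly savings to cover the system cost."""
    return system_cost / (monthly_spend * cost_reduction)

def implied_cagr(start_value, end_value, years):
    """Compound annual growth rate between two values."""
    return (end_value / start_value) ** (1 / years) - 1

# A USD 65,000 3D bulldozer system, a contractor spending an assumed
# USD 45,000/month on fine grading, and the ~50% cost reduction some
# customers report: payback in roughly three months.
print(round(payback_months(65_000, 45_000, 0.50), 1))  # → 2.9

# Precision equipment in agriculture: ~USD 100 million in 2000 to
# ~USD 1 billion "last year" (assumed 2012) implies ~21% annual growth.
print(f"{implied_cagr(100e6, 1e9, 12):.0%}")  # → 21%
```

Under these assumptions, the "investment back in three months" and "growing exponentially" claims are mutually consistent with the numbers given.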
Stephen Lawler Chief Technology Officer Bing Maps, Microsoft
Where is the future? The future is ‘Where’
JANUARY 2013

"Where" is the new "who." The "where" dimension is one of the most natural, powerful, insightful and intuitive ways to explore the rapidly growing world of data and services. Consumers and enterprises are overwhelmed by the amount of raw data available and are yearning for valuable insights into their work and personal lives. The ability to pivot on real-world dimensions like people, places, calendar and things can provide better answers for everyday tasks for individuals as well as deep understandings
for businesses, municipalities and governments. The Internet is rapidly evolving into a real-time, read/write medium of data and services. Device proliferation is assuring that this real-time, read/write capability is part of our lives through wearable computing devices and sensors, smartphones, tablets and a host of other Internet-enabled devices. Technologies like augmented reality and real-time sensors will connect the digital world with the real world. The "Internet of things" will breathe digital life into the world of physical things. Connectivity, searchability, relevance and usefulness depend on a contextual understanding of location and its relevant intersections. The last decade changed the computing landscape with the "who" dimension through personalisation and social connection. This created powerful context, relevance and relations for people both at home and at work, with companies like Amazon, Facebook, LinkedIn, Twitter and others leading the way. The current decade is poised to do the same for the "where" dimension across devices, data and services. Just as the "who" dimension filtered and increased relevancy for our digital world, the "where" dimension will do the same as our digital connectivity reaches out to touch the entire physical world and reasons with the volumes of user-generated data and real-time sensor information. This creates opportunity but also unparalleled potential noise, raising the need for a powerful reasoning principle like the "where" dimension. Traditionally, consumers performed simple mapping tasks while GIS professionals provided deeper understanding to corporations by wielding GIS analytical tools against private business data. Vast insight remains locked in tags, posts, tweets, reviews, micro-blogs and websites. Today, some of the larger companies in the Internet ecosystem have begun to move from traditional Web indexing to higher-level information organisation. They are using algorithmic extraction and big-data graphs to create and relate entities on the Web, organising them through a semantic taxonomy and enabling natural access to this knowledge via conversational understanding and other natural user interfaces. This allows people to find answers to many business questions, in addition to addressing consumer needs. A great deal of value will be created at the intersections of individuals' personal data, private business data and the read/write web. GIS derivative analysis and statistical data mining have long been encumbered by the inability to reach tail data, the absence of scale collaborative filtering, the lack of ample "oxygen" for statistical reasoning, and more that could be garnered from the Web if only there were a stronger relational notion of entities and a semantic model. On the reasoning side, traditional GIS relies on visual layering of data and mash-ups, leaving the burden of analysis and understanding upon the user. This, too, will change due to the increasing power of computational analysis and reasoning engines, and also because of the increasing volume of semantically indexed data.

Bringing the map to life

Extracting the digital identity of entities and all of their digital relations, attributions, properties and characteristics requires massive-scale computing resources and human curation. These entities need to be lashed to their physical existence or trajectory in a computable "living" 3D map canvas. The future "living" 3D map will encompass contributions from user-generated data, real-time broadcasting sensors and massively scalable machine-vision algorithms that digitise and make sense of the physical world through imagery. Imagery will range from high-quality professional satellite, aerial and streetside capture to consumer-grade camera phone pictures and video for much of the outdoors and interior spaces. GPS traces, check-ins, metadata, map edits, commercial data feeds and municipal public data sources will be integral to the comprehensiveness and freshness of the map. The future map will evolve at the world's course and speed. The map will have many views — your personal view decorated with your pertinent information, your enterprise view enhanced with your business data, and public views based on your authorisations and task purpose. Privacy and firewall controls will put the user, enterprise and municipality in control of private data. Task and situational tolerances will determine the appropriate level of verification of crowdsourced data. One of the most essential aspects of the future is developer innovation at all levels. Today, most developers are operating at the presentation level and there are not enough handles, levers, APIs and interfaces to contribute and consume services at the database, algorithmic or foundational building-block levels. The future "living" 3D map canvas must be extensible to developer innovations and services at every tier by every developer, not just GIS specialists. This real-world trellis and underlying data ontology provides the visual and data framework for augmenting your real-world view, framing your spatio-temporal exploration, powering your intelligent agents and analysing the world you live, work and play in. Democratisation of entity and attribution creation by users, algorithmic advancement towards digitisation and capture of the physical world, connectivity of the temporal state, a conversational understanding of user intent, a rich semantic organisation of the data beyond keyword matching, natural user interfaces and new augmented device experiences powered by the "where" dimension promise to change the future of your digital and physical landscape.
Ruedi Wagner Vice President – Imaging, Geospatial Solutions Division Leica Geosystems
Into the ‘Drone Age’?

“Stone Age. Bronze Age. Iron Age. We define entire epochs of humanity by the technology they use.” —Reed Hastings, Entrepreneur and Educator
Just like many of you, I recently attended Intergeo, which has grown into an event large enough to reliably highlight some of the trends in the geospatial industry. Having been a regular visitor for the past several years, it was hard not to notice the increased number of unmanned aerial vehicles (UAVs), or drones, exhibited at the show. Reed Hastings referred in his above quote to the Internet Age, but maybe Chris Anderson, the former editor-in-chief of Wired magazine, is right and we are about to enter the "Drone Age". One does not need to be a visionary to realise that UAVs have become an integral tool for military operations and public safety. Going back to this year's Intergeo, it is obvious that we are seeing a similar, albeit delayed, trend in commercial and civil mapping applications. At previous Intergeos, smaller start-ups with innovative processing methods or innovative UAV designs created a bit of a buzz. This year, the number of systems on display for outdoor mapping applications mushroomed. The multitude of designs and approaches on display is not unusual for emerging technologies that often find their way into the market more by trial and error than by strategic intent. Emerging-technology (and trade-show) dynamics also explain why a plethora of start-ups with astonishingly innovative approaches seem to have taken the lead over the more established companies — bringing innovative ideas to the market appears a whole lot easier when you start with a blank sheet. Nobody is going to limit your creativity by asking you if the new widget will work with the previous one. Yet sustainable innovations need to prove themselves beyond the exhibition floor. To avoid being victims of some kind of Darwinian market consolidation, they need to meet customers' expectations. The big question is: which UAVs (if any) work for which application? If only for the lack of a crystal ball, it is probably too early to give answers. There is undoubtedly still a lot of trial and error going on, but if we listen to the market carefully, a few trends seem to emerge.
Tech advances

For sure, the rise in micro-drone sales, personal UAVs and DIY drones is largely linked to advances in smartphone technologies, with cheaper and smaller sensors, better integration and ease of use. A few years back you needed to be a genius to fly an RC helicopter. Today, nearly everyone can do that without even reading a manual. One may argue that the payload/camera on most of the low-cost UAVs is not usable for professional mapping applications; but we should all be at least a little amazed at what you can get for a few bucks today. Also, because these UAVs are borderline "toys", their use for recreational purposes within line-of-sight operation is accepted in many countries. At least so long as your line of sight is not straight into your neighbour's window. Low price, ease of use and no legal issues, combined with a "wow" factor: it is easy to understand why this attracts young and old alike for an afternoon of fun. But this is not just about fun. There is a lot of scientific research going on with these micro drones as well, and I expect great solutions for a number of professional applications, such as indoor mapping and public safety. Yet their use for traditional outdoor mapping applications is currently limited. One step up from "your personal UAV" are the more professional portable, small fixed-wing UAVs or quad- and octocopter solutions with a total weight of less than 10 kg. They are generally equipped with a low- to medium-end commercial camera and, when used for mapping purposes, controlled by off-the-shelf autopilots. Quad- and octocopter systems have not only found their way into TV and movie making. Because they take off and land vertically, can hover for observations, follow even the most acrobatic flight paths and
run on relatively silent electric motors, they are also the preferred vehicles for urban mapping and surveillance applications. In addition, their flexible in-air performance supports more accurate and environment-specific mission planning, a key factor in achieving accuracy in airborne mapping. Small, portable, fixed-wing UAVs are currently most often used for small-area mapping applications such as mine-site mapping, archaeology and agriculture.
These systems have a rather limited payload capacity, and the combination of a non-calibrated camera, a low-end GNSS/IMU system and lightweight construction requires processing tools closer to computer vision than traditional photogrammetry. With basic GPS pre-orientation of the images and the right software, orthophotos and DEMs can be created automatically. On the other hand, if strong winds turn planned nadir images into unplanned obliques, the iterative process may be time-consuming and may not lead to the desired accuracy or quality. The relatively inexpensive styrofoam-design UAVs have advantages beyond damage control in case of malfunction: they can be replaced easily and at little cost. Medium-sized rotary and fixed-wing UAVs, with a weight ranging from 10 to 60 kg, are suited to certain professional mapping applications. They offer a higher payload capacity and the flexibility to carry metric cameras, small LiDARs, hyperspectral scanners or multiple sensors. They also offer a more stable flight and more endurance. Fixed-wing UAVs in this category are easier to build and pilot, but require space for take-off and landing and often have smaller payload capacity. Helicopter-based systems of this size offer greater payload and endurance capabilities, but are more difficult to pilot.

Note of caution

Commercial operation of such medium-sized systems, however, is currently forbidden, restricted or under investigation in many countries. Where special permissions are granted, a qualified pilot is the minimum requirement. Although more and more countries are reviewing their policies and legal frameworks, I suspect this heavy regulation is not going to change in the short term. However, there are a few applications where the use of such systems brings clear advantages over traditional airborne or terrestrial mapping and thus should be explored further. One area is mining, where UAV-based mapping systems such as the Swissdrones Waran equipped with metric cameras can deliver highly accurate maps, orthophotos, volumetric measurements and slope determination, plus environmental data over an open pit or a pipeline corridor. Another area is agriculture, as the UAV can transform from a vegetation mapper with a multispectral camera into a crop sprayer. I do not believe this technology is going to replace traditional airborne mapping, at least not soon. Some airborne mapping companies have made, or will make, UAV-based mapping a new area of business, complementing well their everyday activities. To promote the safe and peaceful use of this technology, we need to engage local authorities, manufacturers, operators and end-users to create safe and reliable systems, define best practices and continue to develop new applications. Focus areas should include sensor integration, operational safety and fail-safe mechanisms, sense-and-avoid technologies, best practices and certification. For mapping, intelligent integration of all onboard sensors to promote safe, autonomous operation and accurate data is of particular importance. I am not sure if we are entering the "Drone Age", but I am sure the future should be fun!
Dr Fraser Taylor Director, Geomatics and Cartographic Research Centre Carleton University, Canada
Cybercartography is charting a new route
Cybercartography is a holistic and dynamic concept which continues to develop in an iterative fashion through the interaction of theory and practice. It has relationships to other developments in both cartography and geography, especially critical cartography, participatory mapping, neogeography and volunteered geographic information, and incorporates many of the technologies of GIS. Cybercartography pre-dates some of these but includes elements of all these approaches. It is the holistic nature of cybercartography which helps to set it apart, together with an approach to content creation that emphasises community involvement and trans-disciplinary teamwork. Cybercartography is designed to facilitate "development from below". The technology is a means to an end, not an end in itself. The concept of cybercartography was introduced in a keynote address I delivered at the International Cartographic Conference in Stockholm in 1997. In 2003, a formal definition was given as "… the organisation, presentation, analysis and communication of spatially referenced information on a wide range of topics of interest and use to society in an interactive, dynamic, multimedia,
multi-sensory format with the use of multimedia and multimodal interfaces." (Taylor 2003). At the core of cybercartography are seven key elements and six central concepts. Cybercartography:
● Is multisensory, using vision, hearing, touch and eventually smell and taste
● Uses multimedia formats and new telecommunications technologies like the World Wide Web, including mobile devices
● Is highly interactive and engages the user in new ways — user-centric and interactive, understanding and engaging the user through user-needs analysis and usability studies, Wiki atlases and "edutainment" (online educational games); cybercartographic "users" are increasingly becoming "creators" or "prosumers"
● Is not a stand-alone product like the traditional map but part of an information/analytical package including both qualitative and quantitative information; the Cybercartographic Atlas Framework provides an organisational approach for the emerging products and processes of the Web 2.0/3.0 era of social computing
● Is compiled by teams of individuals from different domains, including disciplines not normally associated with cartography
● Is applied to a wide range of topics, not only to location finding and the physical environment, and responds to societal demands including topics not usually "mapped"
● Involves new research and development partnerships among academia, government, civil society and the private sector
The six central concepts are:
● People use all of their senses in learning. Consequently, cybercartography creates representations which allow them to do this through cybercartographic atlases.
● People learn in different ways and prefer teaching and learning materials in different formats. Cybercartographic atlases provide a choice of learning styles or combinations of learning styles; the same information is presented in multiple formats.
● Effective teaching and learning takes place when individuals are involved. Multimedia and interactive approaches used in cybercartographic atlases facilitate this.
● People need the power to create their own narratives, i.e. the social computing revolution. The Cybercartographic Atlas Framework
provides a mechanism for this, giving some structure and metadata indicating the quality and nature of the narratives that people create. The Framework is open source and does not require special knowledge to create a narrative.
● Many topics of interest to society are very complex. There is no simple "right" or "wrong" answer to many questions, such as global warming and climate change. To understand these, different ontologies or narratives should be presented in ways that people can easily understand, without privileging one over the other. Cybercartographic atlases do this. Of particular importance is giving voices to local people: they can speak for themselves rather than having others speak for them.
● There has been a shift from "map user" to "map creator" which establishes new forms of democratised teaching and learning. The Cybercartographic Atlas Framework helps to democratise mapping in new ways and provides a framework for volunteered geographic information.
The interaction between practice and theory is central to the future of cybercartography, and new applications lead to both new technology and new theoretical understanding. We have a special interest in indigenous and traditional knowledge and in the creation of cybercartographic atlases with northern and First Nations peoples in Canada. This has taken us in new theoretical and applied directions, which are illustrated in cybercartographic atlases such as the Atlas of Arctic Bay, the Inuit Siku (sea ice) Atlas and the Kitikmeot Place Names Atlas (http://gcrc.carleton.ca). The work with indigenous people has led to new challenges and opportunities. We have had to consider the legal and ethical issues involved in portraying indigenous knowledge in digital form, and the concrete discussion of ownership, copyright, consent, liability and intellectual property in digital mapping is a very important one. The technological challenges have led to the development of a completely new framework for our atlases, which we call the "Nunaliit Cybercartographic Framework". Nunaliit means community in Inuktitut. This is a document-oriented database allowing the input of a wide variety of information objects. Rather than developing a fixed schema in advance, we build the database on what type of information the community wants to include and extend it as needed. This is a technological reflection of the philosophy that these atlases are primarily driven "from below" rather than "from above". We also facilitate the telling of stories, or "geonarratives", which is essential in a predominantly oral society. In content terms, the communities are creating things of interest to them which are often unanticipated. For example, the youth in Arctic Bay created a rap video which they call "I am not an Eskimo"! Education is now a key application of our atlases and they are being used in both the formal and the informal education system, including the development of new courses at Nunavut Arctic College to help make Inuit active research participants, not passive subjects of research by others. Tim Berners-Lee, in discussing the future of the Web, identified two major challenges: linking datasets on disparate topics and displaying new information created in innovative ways. The future of cybercartography lies in responding to these challenges, and we are doing so in our work with communities in Canada's north.
Donald Carswell CEO Optech
LiDAR brings a new perspective to the image it represents
LiDAR technology has recently become tremendously popular. LiDAR's biggest advantage is that it is an active technology: it is its own light source and, unlike conventional photogrammetry, is not dependent on sunlight. This gives LiDAR the ability to produce survey results day or night, independent of sun angle or clouds. In LiDAR's early years, companies in the aerial survey business measured the ground at only one or two thousand measurement points per second, typically taking spot elevations to aid photogrammetric processes. Today, by contrast, aerial LiDAR instruments operate at many hundreds of thousands of points per second, and mobile LiDAR systems are now beyond a million points per second. The result is that, unlike 20 years ago, a LiDAR data set is no longer just a series of
points; it is a recognisable three-dimensional image. Human beings are creatures of vision, so having a recognisable image brings new perspectives on how to extract the information represented by that image.

Applications galore

Laser scanning systems can be used in a diverse range of applications, and the market for LiDAR is evolving based on that. For example, Hurricane Sandy, which recently hit the US coast, has created great demand for surveying areas that would perhaps not have been routinely surveyed before. Such a requirement might stimulate demand for a different type of sensor in order to produce the most appropriate data in the most economical and timely way. Laser scanning is an additional, effective tool in a surveyor's or photogrammetrist's toolbox — not a complete replacement of existing techniques. In most situations, the ideal option is a combination of traditional surveying techniques and laser scanning. For example, in tripod-mount surveying, until a few years ago most surveyors would
use a total station that gave very accurate measurements but at a very low rate, perhaps only a few measurements per minute. Now a surveyor will use both a total station and a terrestrial laser scanner on the same job, exploiting each tool for maximum benefit. There are many applications where seeing far more points provides a better result, even if those points are individually slightly less accurate than a total station's. If you are interested in seeing the deformation of a hillside, you want measurements on a 10- or 20-cm grid. Even if your accuracy is 7 or 8 millimetres per point, you can get a better picture of the situation from a dense LiDAR grid than from a total-station point grid on perhaps a 10-metre spacing, even if those individual points are accurate to 1 or 2 millimetres.
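The density argument in the hillside example can be made concrete with a quick point count. This is an illustrative sketch: the 100 m x 100 m area is an assumption, and the grid spacings are the article's own figures.

```python
# Back-of-the-envelope point-count comparison for the hillside example.
# Spacings come from the article; the 100 m x 100 m area is assumed.
# Working in centimetres keeps the arithmetic in exact integers.

def points_on_grid(side_cm, spacing_cm):
    """Number of sample points covering a square area at a regular grid spacing."""
    per_side = side_cm // spacing_cm + 1  # fence-post count along one side
    return per_side * per_side

side = 10_000  # a 100 m x 100 m hillside, in cm

lidar_points = points_on_grid(side, 20)            # 20-cm grid, ~7-8 mm/point
total_station_points = points_on_grid(side, 1_000)  # 10-m grid, ~1-2 mm/point

# Roughly 251,001 slightly noisier points versus 121 very accurate ones:
# the dense grid captures the shape of the surface far better.
print(lidar_points, total_station_points)  # → 251001 121
```

The dense grid trades a few millimetres of per-point accuracy for three orders of magnitude more samples, which is exactly the trade the author describes.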
Another example of complementary sensors is in the aerial space, where LiDAR can provide data that a camera cannot. In a forest survey, a camera provides meaningful information, but it can only go so far in detecting the ground and offers at best only an approximation of the ground elevation; LiDAR gives exact answers. Similarly, in coastal mapping, a camera has no ability to give depth (bottom elevation) information in any meaningful way or with any reasonable accuracy, whereas LiDAR produces accurate depths as well as information on the water column itself. In coastal applications, the ideal solution is a combination of a LiDAR, to give depth and other information, and a camera or a hyperspectral imager, with depth corrected by the LiDAR, to provide an image of the bottom. Another potential area is oil and gas production. Here the key knowledge is not the location of the offshore platform, but how and where to get the oil back on shore. Thus the interface between the open sea and the shoreline is critical, and so is water-column and bottom information. During the oil spill in the Gulf of Mexico, a survey was done to look at the leaking oil. Apparently oil floats in layers, eventually reaching many metres below the surface. While this is completely visible with a LiDAR scan, it is not always obvious from a visual inspection. Any coastal survey application is therefore an ideal application for LiDAR, particularly where it is important to know not just where the bottom is, but what properties it might have. Fusing hyperspectral imagery with LiDAR information gives a fairly automated bottom classification, so that one is able to discriminate between a bottom of sand, hard rock, coral or sea grass. Today, there is a shift towards more integrated solutions, and users typically pay for information, not for data. It is important to
aim for an approach where LiDAR ceases to be merely data points and becomes deliverable information specific to an application. LiDAR technology is very easy to deploy and gives actionable results immediately after a flight. Companies can realise economic benefits by getting sufficient information to enable maximum flexibility in decision making. For a utility company, for example, one of the biggest costs in routing is the fees paid to land owners on any one of many possible routes. Before LiDAR, it was hard to get information in enough detail to make an informed decision. But now, aerial LiDAR surveys offer a cost-effective way for these companies to get all the information they need on all routes and determine which is the most cost-effective. Investing in a LiDAR solution is therefore fundamentally an economic decision that enables the user to accomplish more with the same or fewer resources.

Visual creatures
Given all the new technologies now available, it is clear that a large subset of them will be economically important and create more jobs and value than initially expected. And technological advances don't have to be at the expense of people. Thirty years ago, people thought that computers would put people out of work. Instead, computing created new industries and opportunities. We know that there will be a demand for more information in more detail. Computers everywhere are now capable of 3D image presentation and 3D rendering. Since we are visual creatures, it's obvious to me that more 3D information will be asked for in the future. And whether you call it pretty pictures for the Web or geospatial information for the professional, LiDAR will be one of the leading technologies to enhance our 3D digital world.
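As a rough illustration of the depth measurement described above, a bathymetric LiDAR derives depth from the delay between the green laser's surface return and its bottom return, with light slowed by the water. The sketch below is a simplification under stated assumptions: a nadir-looking beam, no refraction-angle correction, and an approximate refractive index for sea water.

```python
C = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33      # assumed approximate refractive index of sea water

def lidar_depth_m(surface_to_bottom_delay_ns):
    """Water depth from the delay between surface and bottom returns.

    The pulse travels down and back through the water, where light moves
    at c / n, so the depth is half the in-water path length.
    """
    delay_s = surface_to_bottom_delay_ns * 1e-9
    return (C / N_WATER) * delay_s / 2.0
```

Each metre of water depth adds only about 8.9 ns of delay between the two returns, which is one reason full waveform processing matters: closely spaced echoes from the surface, the water column and the bottom must be separated within the recorded waveform.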
Davide Erba CEO Stonex Europe
GPS finding its way into the surveying toolkit
Professional surveyors are always looking for new technologies that can help in their daily work. The 'survey tool' must be accurate, easy to use, reliable, fast, and water and dust resistant. Today, GPS technology meets all these requirements: high-accuracy receivers are certified to provide real-time GPS measurements with centimetric accuracy and short acquisition times (seconds), and are able to fix in less than 10 seconds even in cold-start conditions. The current satellite constellations (Navstar, GLONASS, and the incoming Galileo and Compass) offer 24-hour coverage all over the world. It is always possible to find several satellites just above our heads! The large number of satellites in orbit around the Earth offers field operators the choice of the best configuration of satellites in the sky and enhances the quality of the survey. The number of carriers broadcast by each satellite has also increased: the latest generation of GPS receivers can detect, for example, the GPS L5 carrier, improving the precision of measurements in areas where satellite reception is weak. Field software is becoming more and more important: new protocols for differential corrections, such as RTCM 3.1, provide real-time transformation information for geographical coordinates and height, with better accuracy compared to older systems. The field software must always be updated and customised for every new protocol, giving the user the ability to fully exploit the new technologies.

New technologies
A new class of GPS survey instruments is now on the market, not only for GIS applications but also for high-accuracy survey. New receivers integrate the complete system in one unit: receiver, GNSS antenna, controller facilities (display and software) and connections (Wi-Fi, Bluetooth, GPRS): a complete system in one hand. Total stations, too, present an exciting opportunity for surveyors: new robotic total stations, developed with advanced electronics to maximise the precision of distance measurements, allow surveys to be executed with only one operator in the field, which means saving money. Using a GPS receiver embedded in the robotic controller, the robotic instrument can be pre-oriented towards the operator's position, assuring a reliable survey even when there are obstacles in the line of sight. Laser scanner technology is also growing considerably. With a laser scanner, it is possible to collect millions of points in a few minutes, giving civil engineers and cadastral operators the chance to produce 3D models of objects, land, roads and houses in the office, or to store the shapes of objects of interest in electronic format for cadastral, architectural, archaeological and industrial applications. It is always difficult to make predictions in a fast-changing market. The challenges and prospects for GPS survey can be summarised as improvements to the receiver system, such as the integration of an inertial platform in the GPS sensor, greater use of new-generation wireless connections,
more robust and powerful, and other minor improvements such as tilt compensators. Other sensors will be integrated in order to maximise the accuracy of the survey: for example, humidity or temperature sensors.

GPS network expansion
Beyond the receiver's improvements, the big challenge will be the great expansion of local GPS networks, thanks to growing competition between producers of CORS stations and network adjustment software. These permanent GPS/GNSS networks are of interest to a wide range of scientific disciplines, as well as to territory management applications that are of considerable interest to the community. Surveyors, GIS users, engineers, scientists and the public at large who collect GPS data can use CORS data to improve the precision of their positions. CORS-enhanced, post-processed coordinates approach a few centimetres relative to the National Spatial Reference System, both horizontally and vertically. This approach to RTK positioning is radically improving productivity and the quality of the survey. By eliminating the installation of a reference receiver, the professional saves time and the money needed to purchase a second receiver. With a permanent GPS/GNSS network, control data is always guaranteed, and it is possible to reduce the propagation of errors resulting from improper placement of the reference receiver. The receiver will initialise faster, and the quality of all data will be monitored before the user receives it. Such a system can find its way into various applications such as survey work, monitoring, construction control, asset management, cadastre and navigation. The solution enables customers to increase productivity and reduce costs – there is no need for a separate base station.
GPS is fully compatible with all the requisites of professional surveyors: high-accuracy receivers are certified to provide real-time GPS measurements with centimetre accuracy, have short acquisition times (seconds), and are able to fix in less than 10 seconds even in cold-start conditions
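The RTK fix status described above is something field software typically reads directly from the receiver's NMEA output. The sketch below is a minimal, illustrative parser: the field layout and quality codes follow the standard NMEA 0183 GGA definition, but the function names and the sample sentence (its coordinates are made up) are this sketch's own.

```python
def nmea_checksum(body):
    """XOR of all characters between '$' and '*', as two hex digits."""
    c = 0
    for ch in body:
        c ^= ord(ch)
    return f"{c:02X}"

def parse_gga(sentence):
    """Extract fix quality and satellite count from a GGA sentence.

    GGA field 6 encodes the fix type: 0 = invalid, 1 = autonomous GPS,
    2 = differential (DGPS), 4 = RTK fixed, 5 = RTK float.
    """
    body, _, checksum = sentence.strip().lstrip("$").partition("*")
    if checksum and checksum.upper() != nmea_checksum(body):
        raise ValueError("checksum mismatch")
    fields = body.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    quality = int(fields[6] or 0)
    names = {0: "invalid", 1: "GPS", 2: "DGPS", 4: "RTK fixed", 5: "RTK float"}
    return {"fix": names.get(quality, "other"), "satellites": int(fields[7] or 0)}

# Illustrative sentence; quality field is 4, i.e. a centimetric RTK fix
body = "GPGGA,123519,4807.038,N,01131.000,E,4,12,0.9,545.4,M,46.9,M,2.1,0004"
msg = "$" + body + "*" + nmea_checksum(body)
```

Field software commonly refuses to store a point until the reported quality reaches "RTK fixed", which is exactly the centimetric mode the text refers to.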
Dr Johannes Riegl CEO RIEGL Laser Measurement Systems
Exploring new markets with 3D data
Geospatial data has been collected for centuries and is often represented in 2D, 3D and even 4D. Instrumentation and software to collect 3D data have changed significantly in the past 10 years. LiDAR/laser scanning from the air and the ground is the biggest of these developments and has become the preferred technique for collecting 3D data. There are several unique technologies, like full waveform processing, that offer users unrivaled accuracy, completeness of information and operational competitiveness. The cutting-edge technological capabilities available today are allowing users to explore new markets and services. For example, there are bathymetric laser scanners now available that combine topographic and hydrographic airborne laser scanning, high-speed data acquisition rates and full waveform processing. Historically, such systems were so expensive that only governments could afford them. But today, the price objective for such systems has been achieved and they are now a tool for commercial purposes. Airborne laser scanning is witnessing interesting developments. There are systems, for example, that allow operating altitudes
up to 2,450 m above ground level at 400 kHz, which significantly increases acquisition efficiency and point density in wide-area mapping applications, as well as operational safety for the aircraft crew in mountainous areas. Data acquisition in airborne laser scanning at high measurement rates from high altitudes implicitly results in ranging ambiguities, an effect known as "multiple-time-around" (MTA). Instruments with multiple-time-around capability can handle up to 10 pulses in the air simultaneously; combined with dedicated software for fully automated range-ambiguity resolution, this unique technology is also the basis for future generations of high-repetition-rate laser scanners. Based on the demands of the application, compact airborne laser scanners can be installed on a variety of platforms, such as fixed-wing planes, helicopters and also UAVs. The UAV segment is definitely emerging and will hold an important position in the future.

Applications galore
The applications of 3D data acquisition are evolving rapidly. With terrestrial laser scanners, crash scene investigation, construction work, open-pit mining and monitoring
and a host of other applications are possible. The latest technology in mobile laser scanners is delivering unrivaled point density and accuracy. These systems are being used in surveying and documenting road networks and in railway applications. Applications for airborne laser scanners range from glacier and snowfield mapping to agricultural and forestry applications, along with the continuing demands of wide-area and corridor mapping. Moreover, combined topographic and hydrographic surveying is set to grow in the future. In industrial scanning, there are today robust, reliable and high-performance solutions available for the demanding and harsh environments users work in.

The cutting-edge technological capabilities available today for 3D data acquisition are enabling rapid evolution of applications
Way forward
We can observe two main strands of development. Regarding software, there is an obvious demand for enhanced algorithms to extract information from the point cloud. There is a wide range of applications using point clouds as source data for information extraction, for instance traffic sign recognition or break-line detection. On the hardware side, there is a clear trend towards integrated solutions with various kinds of sensors, such as aerial cameras, thermal imaging, or specialised sensors for road assessments. With respect to the scanner hardware itself, we believe a broader spectrum of available wavelengths will provide perfect tools for the geospatial industry to open completely new fields of application.
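The figures quoted above for high-altitude operation can be checked with simple range-ambiguity arithmetic: at a given pulse repetition rate, an echo from beyond c/(2·PRR) arrives only after the next pulse has already been fired. A back-of-envelope sketch (function names are this sketch's own):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def unambiguous_range_m(pulse_rate_hz):
    """Range beyond which an echo returns after the next pulse is emitted."""
    return C / (2.0 * pulse_rate_hz)

def pulses_in_flight(range_m, pulse_rate_hz):
    """How many pulses are simultaneously in the air over a given range."""
    return math.ceil(range_m / unambiguous_range_m(pulse_rate_hz))
```

At 400 kHz the unambiguous range is only about 375 m, so a scanner measuring 2,450 m straight down has roughly seven pulses in flight, and more along slanted scan lines – consistent with the article's "up to 10 pulses in the air" and with the need for automated MTA resolution software.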
Amar Hanspal Senior Vice President, Information Modeling Product, Autodesk
Cloud and BIM are two new platforms for GIS
There are several aspects to a design technology platform. On the one hand, design enables people to experience things before they actually turn into reality; on the other, it allows users to make ‘what if’ decisions by giving them the ability to look at the available options, or generate new ones, and thus make better decisions. Another important aspect is that the platform makes it possible for many people to come on board; it is not just about technology, but also about making sure that it is inclusive of all the people who need to come together to make critical decisions.

User-driven technology strategy
We plan our strategy after taking into account the users’ perspective rather than our company’s perspective; this makes it easier for users to acquire and use our software. Besides, we try to contextualise geospatial information for users in the industry. What has been nice to watch in the last four or five years is that geospatial information is getting democratised. We are all familiar with driving directions and traffic updates as individual consumers. Companies have the same need: they want information residing in a specialist environment to be accessible to people in the enterprise through the
appropriate tool. For example, the HR professional in an enterprise would want to look at all his assets. Similarly, we want to let the designers and engineers in that organisation seamlessly access information. We are taking a BIM process and exposing geospatial information through it to make it accessible and seamless. Thus, our customers will no longer think of CAD versus GIS; the information is simply accessible through the tool, much like when you are looking at driving directions and do not even have to think about how the location information is provided.

CAD-GIS integration
CAD and BIM are two sides of the same coin; they simply belong to different generations. People understand CAD as describing geometry, and BIM as describing geometry plus all the information about the objects you are working with. While we do not plan to use GIS traditionally, we are certainly going to expose the data that people have inside the BIM or CAD tools, and in that sense these two worlds are coming together. What we have learnt through all these years is that customers use multiple tools from multiple vendors. And if we can work together better, then the customers will use us for the right task. We want to help customers have a more seamless experience of exchanging data while using different tools.

Interoperability and design standards
Interoperability is both very important and unimportant. While it is extremely important that users are able to use whatever data they need, it is not important at all to try to solve the problem from the 1980s, which relates to the classic file format. In a new world, we have to make that completely transparent to the users through things like
service-oriented architecture and open standards. For example, there are ways in which I can use the Google Web service without ever having to understand the way they have stored their data. Thus, interoperability is very important for solving customer problems, but it is not important to solve it the way we had thought about it in the 1980s, which was trying to read another file format.

Customised apps in virtual gaming
The demand for customised apps means that users want an experience that is tailored to the needs of their company. That is always going to be true, and thus we all have to adhere to providing APIs and good tools for people to build custom apps. The games that children play are a great example of that approach, and many companies want to solve practical business problems with it. There are car companies that want to present a new car model through a very realistic experience, literally to the point where you could sit inside the car and see what it would look like. Virtual gaming is an inevitable next step because we have always seen the physical and digital worlds collide.
CAD and BIM are two sides of the same coin, but simply belong to different generations. People understand CAD as describing geometry and BIM as describing geometry plus all the information about objects that you are working with
The cloud revolution Thirty years ago, Autodesk was founded against the backdrop of the PC revolution. The PC gave a platform for small and medium-size businesses to adopt computing for a variety of purposes. I see the same thing happening again with cloud. Everyone has a mobile device now, which gives us the ability to reach millions of mobile users by building software that empowers those people, just like we did in the PC era. Therefore, we are strategically building professional tools for mobile devices and are also building consumer design tools.
Steven Hagan Vice President - Server Technologies Oracle
Cloud is the new-age geospatial information manager
In a time of increasing public sector austerity, cloud computing directly addresses cost reduction while increasing the efficiency of IT service delivery
Cloud computing is creating an opportunity for organisations to be leaders in the delivery of information resources, while reducing the costs to design, build, deploy and support these services in-house. In particular, it has the ability to fundamentally change the way geospatial services are delivered and consumed. In a time of increasing public sector austerity, cloud computing directly addresses cost reduction while increasing the efficiency of IT service delivery. Because it consolidates storage and applications in centres of expertise, new services can be rapidly deployed and ready for use in a matter of minutes, as opposed to the months it traditionally takes.

Geospatial cloud computing in public sector organisations
The use of cloud computing is particularly relevant to public sector organisations managing large volumes of geospatial information. For example, state and federal agencies are faced with mounting budget pressure to reduce data storage costs, consolidate operations and help minimise the
overall cost of government. In doing so, public sector organisations are looking to cloud computing to:
• Align with state/federal-wide data centre consolidation
• Reduce IT costs
• Produce new streams of revenue
• Foster an effective IT organisation and culture that supports a shared services model
• Provide a sustainable infrastructure and business continuity
• Leverage compliance, standards and optimisation to become an agile, secure, efficient and effective service provider.
Agencies are also setting up cost-recovery models that are easy to manage and adhere to stringent security and compliance regulations. Naturally, these IT reforms will introduce new models for managing IT services. As one example, US western states are collaborating to establish regional cloud services to support the management of geospatial information resources.

US western states GIS cloud initiative
In 2011, the Western States Contracting Alliance, which facilitates multi-state purchasing, issued an RFI for a multi-tenant GIS cloud. The states involved in this novel exercise include Colorado, Montana, Oregon and Utah. The motivation is driven primarily by economics and the perception that cloud computing provides a cheaper and more efficient way to manage their vast geospatial information assets. Pooling the requirements of government agencies and combining their purchasing power will also drive down costs. The respective CIOs recognise that although the security risks of hosting government information in the cloud are real, they no longer outweigh the potential cost savings to citizens.

US National Spatial Data Infrastructure: FGDC Geospatial Platform
In 2012, the US Federal Geographic Data Committee (FGDC) launched the geospatial platform, which is hosted on a shared federal cloud. Some of the key data themes managed by the geospatial platform include transportation, topography, orthoimagery, elevation, hydrography and administrative boundaries, among others. The geospatial platform provides shared and trusted data, services and applications for use by government agencies, their partners and the public. Key services provided by the geospatial platform include:
• A “one-stop shop” to deliver trusted, nationally consistent data and services
• A portal for discovery of geospatial data, services and applications (www.geoplatform.gov)
• A publishing framework for geospatial assets
• A place where partners can host data and analytical services
• A forum for communities to form, collaborate and share common geospatial assets
• A foundation built within the federal cloud computing infrastructure.
By leveraging a cloud platform, FGDC is well on its way to transforming the management of the nation’s SDI.
The perceived benefits of moving to cloud computing can be short-lived without a plan that places cloud computing in the context of its overall business strategy and how it affects security, performance and connectivity
Planning for geospatial cloud computing
The perceived benefits of moving to cloud computing can be short-lived without a plan that places cloud computing in the context of the organisation’s overall business strategy and how it affects security, performance and connectivity. In particular, large organisations need to be able to integrate cloud computing into existing IT systems and applications. Very few organisations are ready or willing to start from scratch, and most will not move all of their business processes to the cloud at once. This makes it essential to plan carefully for the challenges ahead.
Jeff Jonas IBM Fellow
Chris Tucker Chairman and CEO The MapStory Foundation
Space time features: the highest order bits
When dealing with an endless and dynamic flow of space-time data, how does one determine what represents an important change, or even where on the surface of the earth to train one's attention in the first place? Increasingly, mankind will be relying on analytic sensemaking engines to suggest and direct human attention. Will these engines be right? Or will they constantly misdirect human attention (false positives), leaving the most important discoveries out of sight? Detecting insight and actionable relevance requires access to a wide observation space; an ability to contextualise the available observation space; and principles by which to assess opportunity and risk – enabling the triage of relevance. For a moment, imagine looking out your
kitchen windows only to witness your neighbours in an epic argument. The next day, you see the husband at the store purchasing a firearm. Four days later, late at night while trying to fall asleep, you can’t help but notice a somewhat muffled ‘bang’ sound from outside. The next morning, while pulling out in your car for work, you see the neighbour laboring as he drags what looks like a few blankets filled with heavy stuff towards his pickup truck.

Insight adds up
The point is, insight adds up. Take any one or two of these observations independently and there would be very little basis for alarm. However, the combination of these insights would cause any alert human being to at least raise an eyebrow.

Increasingly, mankind will be relying on analytic sensemaking engines to suggest and direct human attention. Will these engines be right? Or, will they constantly misdirect human attention (false positives) leaving the most important discoveries out of sight?
Sounds easy. But this innate capability of human beings to piece together such diverse observations over space and time – incrementally accumulating context – has been difficult to replicate in machines. Just ask an organisation running a risk assessment system with queues growing faster than its workforce can keep up – overwhelmed by false positives. Now imagine feeding these processes substantially more data. In fact, the thought of also having to push the emerging ‘big data’ through these existing processes forces one to stand back for a moment and ask: “How many more false positives can we afford?” The only way to wrestle big data to the ground involves first placing information into context. In the same way, puzzle pieces mean more when attached to other puzzle
As more sensors produce more accurate geospatial data about where things are and how they move, the speed and accuracy of context-accumulating processes will be a game changer for machine triage and attention-directing systems
pieces, big data in context makes it possible to lower false positives and false negatives at the same time. No surprise: the more puzzle pieces come to form the picture, the more precise the understanding of the big picture (risk or opportunity). The contextualisation of diverse data sources has seen some gains over the last few decades. For example, entity resolution systems allow machines to determine with great certainty that two transactions were carried out by the same person. By contrast, little gain has been made in the area of video or imagery when it comes to classifying an object and determining with certainty that it is the same entity as seen in previous observations over the same, secondary, or tertiary data sources.

The game changers
Fortunately, big breakthroughs are afoot as space and time move from being a means to correctly place symbols on maps or to conduct spatial analysis, to being the magic bits computers will use to contextualise very diverse observations over time. In the story about the neighbour with the argument, gun, bang and dead weight, the space and time of these observations are in fact the “highest order bits,” aiding one’s ability to estimate the big picture. As more sensors produce more accurate geospatial data about where things are and how they move, the speed and accuracy of context-accumulating processes will be a game changer for machine triage and attention-directing systems. Beyond space-time points that demonstrate a point-in-time presence, the motion of entities themselves is telling. Imagine the journey of a cargo container ship. Tick tick tick as it moves along over the surface of the water, following a recurring, predictable route optimised for fuel conservation and time. Then it reaches a port and begins to hang out (hover). Tick tick tick as it is observed to remain in one place. Over a period of time, one discovers that most vessels have a finite number of “hangouts”. In fact, the collection of frequent hangouts strung together can be thought of as a pattern of life or “life arc.” Artifacts such as hangouts and life arcs might be useful for projections on maps for human presentation, but data points such as these are pure super-food for context-accumulating, sense-making systems. Let’s face it: there are not going to be enough humans to ask every smart question every day. And while this is true today, tomorrow, thanks to the big data phenomenon, it will become orders of magnitude more difficult to make sense of all this data. A new paradigm is needed.

The future: The data must find the data and the relevance must find you
How will the data find the data? For starters, diverse observations must be co-located into a shared space. Then one must integrate such diverse observations as they happen, fast enough to do something about it while it is still happening. In both cases, more diverse data, co-located and placed in context (organised fundamentally in terms of space and time) will prove to deliver unprecedented advances in understanding, whether this involves detecting actionable relevance or enabling materially better storytelling. Analytic exploitation of space-time features will usher in advances in high-quality prediction systems. This happens when diverse data converges in ways only possible with space and time alignment. What follows is better context, better understanding and superior sensemaking, which in turn enables better business and mission outcomes.
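The "hangout" idea sketched above corresponds to what trajectory mining usually calls stay-point detection: find maximal runs of ticks that remain within some radius for at least some minimum duration. A simplified sketch, assuming planar coordinates; the thresholds and function name are illustrative, not any vendor's API:

```python
import math

def find_hangouts(track, radius, min_duration):
    """Detect 'hangouts' in a track of (time, x, y) ticks.

    A hangout is a maximal run of consecutive ticks that all stay within
    `radius` of the run's first tick for at least `min_duration`.
    Returns (t_start, t_end, centroid_x, centroid_y) tuples.
    """
    hangouts, i, n = [], 0, len(track)
    while i < n:
        j = i + 1
        # Extend the run while ticks remain near the anchor tick.
        while j < n and math.dist(track[i][1:], track[j][1:]) <= radius:
            j += 1
        if track[j - 1][0] - track[i][0] >= min_duration:
            xs = [p[1] for p in track[i:j]]
            ys = [p[2] for p in track[i:j]]
            hangouts.append((track[i][0], track[j - 1][0],
                             sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j  # resume after the hangout
        else:
            i += 1
    return hangouts
```

A vessel's "life arc" then falls out naturally as the time-ordered sequence of hangout centroids, which is exactly the kind of compact, space-time feature a context-accumulating system can match across diverse data sources.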
Ron Bisio Railway Industry Business Area Director Trimble Navigation
Express route to rail efficiency
The challenges faced by the railway industry may be self-inflicted. Thanks to rail’s fuel efficiency and low cost per tonne-mile, many companies are switching their shipping business from trucks to trains. As a result, railway operators are experiencing increased demand. This growth is good, of course. But the growing business places additional pressure on the finite and — in many
locations — aging railway infrastructure. The problem is compounded by the relentless upward spiral of fuel costs and tightened environmental and safety regulations. As a result, railways need to maximise the utilisation of their fixed assets and rolling stock to move as much tonnage as possible. Achieving this goal calls for faster speeds, larger railcars and reduced downtime for maintenance and repairs. At the same time, railways must ensure safety for passengers, train crews and maintenance teams.

The information railway
Track inspection and maintenance are key components of both utilisation and safety. Effective inspection systems can directly contribute to efficient maintenance operations. By pinpointing the location and condition of maintenance needs, operators can develop tighter schedules for repair and maintenance activities. The teams know exactly where to go and what equipment and materials they will need. In order to minimise disruption to normal rail traffic, track inspection must be conducted in short windows of time or during periods of lower train activity. Ideally, inspectors should be able to
conduct inspections without affecting normal traffic. Historically, local workers responsible for specific sections of track have conducted track inspection. But retirement and normal attrition are taking a serious toll on the ability to monitor track conditions. In order to remain effective, local knowledge must be replaced with automated, systematic approaches to gathering, maintaining and utilising information about local conditions. Geospatial technology provides an array of solutions to these needs, including terrestrial and aerial systems. Handheld GIS data collection systems are efficient and precise tools for inspecting and cataloguing fixed and mobile assets. Mobile mapping systems that combine GNSS with imaging and LiDAR can collect information over large areas while in motion. Because of their speed and portability, these approaches can operate without disrupting normal rail traffic. Unmanned aerial systems (UAS) are emerging as a flexible approach to gathering information along transportation corridors. A UAS uses a small, autonomous aircraft to fly routes along a railway. The system can collect imagery more frequently and at lower cost than traditional airborne photography. Data from terrestrial sensors can be combined with the aerial images to produce detailed information over large areas. By adding a geographic component to the enterprise management system, geospatial technologies provide a higher level of efficiency to resource and operations management.

A 3D point cloud collected using a mobile mapping system. The data can be used for clearance analyses, maintenance and planning

Gathering and utilising information are keys to efficient railway operations and maintenance. New geospatial technologies are presenting innovative opportunities for safety and cost control
Increasing capacity One of the best ways to increase capacity is to put more cargo onto the same length of track. This can be accomplished by upgrading railcars and locomotives. The new rolling stock is larger, faster and more fuel efficient than existing trains. A primary concern is making sure that the new equipment can operate safely on existing track and rail infrastructure that may be more than one hundred years old. Here again, geospatial technology comes through. Before making the move to larger rolling stock, railway operators must confirm that the new equipment will fit into existing corridors. This requires detailed analysis of clearances in tunnels, overpasses, stations and retaining structures. Gathering basic data for these analyses requires physical measurements of thousands of locations along each route. The technological solution begins with surveying equipment such as high-precision total stations and GNSS. These systems provide the positioning framework for inspection and analysis. In confined or congested areas, 3D scanning is an ideal method for gathering data. This
may be accomplished in several ways. Scanners installed on railcars or high-rail vehicles can collect information as part of a mobile mapping system. A second approach uses scanners mounted on tripods at fixed locations. A newer solution utilises a high-speed scanner mounted on a trolley that is easily moved along the track. The scanning information is processed to produce 3D point clouds of the track and surrounding features. By creating 3D models of rolling stock, designers can define clearance envelopes around the new railcars. The envelopes are added to the 3D models, where clash detection routines identify locations where clearance may be an issue. Modern track inspection systems measure the position, gauge and cant of rails to millimetre precision. The systems combine optical instruments with precision tilt sensors and can operate on both slab and stringer/ballast track configurations. Construction and maintenance teams use the data to help ensure safe, comfortable operation of passenger and freight cars.
Automating maintenance
The value of geospatial technology is well established for gathering and utilising position-based data for inspection and planning. Automated machine control based on GNSS or optical positioning is producing cost savings in the construction of new track and facilities. Geospatial technology is helping to automate maintenance operations as well. Rail crews use trolleys to collect detailed data on the condition of tracks. This information can be loaded into ballast tamping machines, which adjust the track and ballast to maintain the required alignment, gauge and cant. When needed, new ballast can be delivered and automatically offloaded exactly where it is required. Similar productivity gains can be achieved in ditch maintenance, where automated rail-mounted excavating machines can reduce the need for large work crews in the right of way. The opportunities for positioning and information management are bright, and geospatial systems continue to develop in variety and flexibility. Geospatial technology is an ideal fit for the complex, far-flung operations of a railway, which demand accurate positioning and related data. More importantly, the data can be converted into actionable information that is delivered directly to where it is needed for design, construction, maintenance and lifecycle planning.
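The clash-detection step described above can be sketched in a few lines: test scanned cross-section points against a 2D clearance envelope. This is an illustrative simplification; the envelope coordinates, scan points and the pure-Python point-in-polygon test are hypothetical stand-ins for what production rail-clearance software would do in full 3D.

```python
# Test a scanned tunnel cross-section (a 2D slice of the point cloud)
# against a clearance envelope for new rolling stock. Envelope geometry
# and scan points are hypothetical.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: True if (x, y) lies inside the polygon."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def find_clashes(scan_points, envelope):
    """Return scanned points that intrude into the clearance envelope."""
    return [p for p in scan_points if point_in_polygon(p[0], p[1], envelope)]

# Hypothetical envelope around a railcar, metres from the track centreline.
envelope = [(-1.8, 0.0), (1.8, 0.0), (1.8, 3.5), (1.2, 4.3), (-1.2, 4.3), (-1.8, 3.5)]

# Two scanned tunnel-wall points: one safely clear, one intruding.
scan = [(-2.1, 2.0), (1.5, 3.0)]
print(find_clashes(scan, envelope))  # only the intruding point is reported
```

In practice each cross-section slice along the route would be tested this way, flagging chainages where the scanned structure enters the envelope.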
INSTITUTION OF GEOSPATIAL AND REMOTE SENSING MALAYSIA (IGRSM)
formerly Malaysian Remote Sensing Society (MRSS)
New Direction | Broader Focus | Endless Opportunities SECRETARIAT: Geospatial Information Science Research Center, Faculty of Engineering, UPM, 43400 Serdang, Selangor, Malaysia
www.igrsm.com
Tel: +603 8946 7543 Fax: +603 8946 8470 Email: email@example.com
The value of geospatial technology is well established for gathering and utilising position-based data for inspection and planning. Automated machine control based on GNSS or optical positioning is producing cost savings in construction of new tracks and facilities
John Graham President — Security, Government & Infrastructure Intergraph
Smarter decisions for public safety
The rise in terrorism and natural disasters has placed the spotlight on public safety over the past decade. So has the explosive growth of cities. More people and problems require more, and better, services. In order to deal with these major emergencies and daily demands, governments have increased their public safety focus. New personnel and equipment are important (and visible) signs of investment. But some of the most important investments – investments in geospatial technologies – aren’t always apparent to citizens. The geospatial aspects of managing infrastructure or mapping property are obvious, but not everyone realises how critical geospatial information is to public safety. When asked about police or fire departments, most people probably think of uniforms and sirens, not data collection and sharing. But those dedicated public safety professionals can’t help you if they don’t know where you are. It’s as simple as that. Police, fire and emergency medical agencies depend on accuracy and precision. Public safety, therefore, depends on geospatial information. Think about it: every piece of safety or security information has a spatial reference.
That’s why information about homes, buildings, streets and more is included in the interactive, real-time maps used in computer-aided dispatch (CAD) systems at public safety communications centres. But that’s not all there is to it. Geospatial information management doesn’t begin or end with a map. For instance, CAD combines a map display with incident data, records, mobile data from the field and more to ensure agencies have accurate information when lives are on the line.
Beyond data
In other words, it is not enough to see the data. You have to be able to do something with it. To me, that’s the promise of geospatial information. It is not just about visualisation, but also geospatially powered
analytics and response. It is about seeing, understanding and acting – all in one. It’s about making smarter decisions. When put into action, this concept yields powerful results. Let’s look at dispatching. In São Paulo, Brazil, emergency medical services agency SAMU deployed a new, comprehensive CAD system. With a population of 11 million, São Paulo is the biggest city in Latin America, and SAMU is the largest emergency medical services agency in the region, responding to 8,000 emergency calls daily. Using its new geospatially powered system, SAMU reduced the average time it takes to respond to emergency calls from 35 minutes to 10 minutes, a nearly 72 per cent improvement that saves lives. Then there’s also mobility. Access to
When asked about police or fire departments, most people probably think of uniforms and sirens, not data collection and sharing. But those dedicated public safety professionals can’t help you if they don’t know where you are!
Immediate response isn’t always enough. Agencies have to plan ahead, too, in order to figure out how to allocate their resources most effectively. They can’t just work harder, they have to work smarter
large volumes of spatial data is important for responders. The ability to tap into data on or from a mobile device for instant, relevant information can have a measurable impact. For example, Copenhagen Fire Brigade and KMS, the Danish national survey and cadastre agency, took advantage of the widespread use of mobile phones in the Danish capital, deploying a mobile app for citizens to report emergencies. Location data from the app is sent to public safety communications centres for dispatching. Previously, emergency calls from mobile devices could be located only roughly, to the area of the serving cell tower — typically within several hundred metres. The new app can locate the caller within a few metres, with no need for a nearby street name.
Not just response
The takeaway from these examples is obvious: geospatial information helps public safety professionals act fast, responding to the right place at the right time with the right information. In the case of crimes, fires or medical emergencies, saving time helps save lives. But it doesn’t stop there, because immediate response isn’t always enough. Agencies have to plan ahead, too, in order to figure out how to allocate their resources most effectively. They need to understand what happened before and predict what may happen next. They can’t just work harder; they have to work smarter. Again, geospatial information can help. Every day, public safety agencies capture and manage massive amounts of important data. By merging this geospatially enabled data with analytics, agencies can pinpoint areas of concern, such as high crime and traffic accidents. By knowing where the problems are, they can figure out how to
solve them. When the Arkansas State Highway and Transportation Department in the United States learned it was at high risk for roadway accidents, it deployed geospatial analytics to pinpoint accident “hot spots” and create easy-to-understand visuals for the public, including detailed maps of accident times and locations. With geospatial analytics, the department reduced accident analysis and reporting time from 4 hours to 10 minutes, and, importantly, it can now prioritise safety improvements based on the results. There are many other examples. A government agency in China uses geospatial information to monitor dam vulnerabilities. Multiple public safety agencies in Germany have joined forces to deploy a virtual command centre using a geospatially enabled web client. A fire agency in New Zealand uses geospatial information to plan for and respond to wildfires. The list goes on, because the uses of geospatial information are as limitless as the threats to public safety.
It’s all about the results
It is clear that geospatial information is the foundation. From it, agencies can better see, understand and act. They can make those smarter decisions that protect their citizens and their communities. Ultimately, that’s all that really matters. It’s not about the information for its own sake, but the results.
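The hot-spot analysis mentioned above is, at its core, spatial binning. A minimal sketch, assuming hypothetical geocoded incident coordinates and a simple fixed grid (real systems use kernel density estimation or cluster statistics):

```python
# Spatial binning sketch: count geocoded incidents per ~0.01-degree grid
# cell and report the busiest cells. All coordinates are hypothetical.
from collections import Counter

def hot_spots(incidents, cells_per_degree=100, top=3):
    """Bin (lon, lat) points into grid cells and return the busiest cells."""
    counts = Counter(
        (int(lon * cells_per_degree), int(lat * cells_per_degree))
        for lon, lat in incidents
    )
    return counts.most_common(top)

# Hypothetical geocoded accident reports (lon, lat).
accidents = [
    (101.691, 3.140), (101.692, 3.141), (101.693, 3.142),  # a tight cluster
    (101.750, 3.200),                                      # an isolated report
]
print(hot_spots(accidents, top=1))  # the clustered cell tops the list
```

The same binning applies equally to crime reports or emergency calls; the busiest cells are the candidates for targeted safety improvements.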
Bhupinder Singh Senior Vice President Bentley Systems
Geospatial information and transportation asset lifecycle
Owners of transportation infrastructure, including highways, bridges and rail networks, face growing pressure to make the most of limited budgets, as they typically have more to build and maintain than their budgets allow. As a result, not only do they have to make more intelligent decisions on what to build, but also on how to build and maintain it. They have to design, build and maintain more intelligent, better-performing and more resilient infrastructure. Integrating geospatial information into the design, construction and operations lifecycle is instrumental in achieving this goal.
Barriers to leveraging geospatial information
A number of obstacles have limited the integration of geospatial information into the engineering design, construction, operations and maintenance workflows of transportation projects. Limited access to geospatial information: Often, designers and engineers are unable to access geospatial information because it is locked in information “silos” that result from the use of proprietary file formats and
monolithic data structures. Even when geospatial information is accessible, sharing it across systems, project phases and project disciplines is challenging due to data complexity, applicability and density. We call this “information mortality.” Limited precision of geospatial information: For geospatial information to be valuable to those who design, construct and maintain transportation infrastructure, project team members need data that is both geospatially correct and has the required engineering rigour and precision. The unavailability of correct data has resulted in a surge in data acquisition costs. An often-cited 2004 study by the US National Institute of Standards and Technology estimates the costs of inadequate interoperability in capital facilities to be
USD 15.8 billion per year.
Making geospatial information relevant and accessible
Recognising the need to bring the engineering and geospatial worlds closer together to overcome these issues, software providers are developing solutions that enhance the mobility of geospatial as well as architectural, engineering, construction and operations information across disciplines and project phases. Forward-thinking software vendors are already including mapping capabilities within their civil design applications. Combining engineering and mapping tools can bring CAD and engineering design precision, ease of use and efficiency to GIS.
Faced with limited budgets, owners of transportation infrastructure not only have to make more intelligent decisions on what to build, but also on how to build and maintain them. Integrating geospatial information into the design, construction and operations lifecycle is instrumental in achieving this goal
New software for transportation asset inspection and maintenance is enabling owner-operators to better track and manage all types of transportation assets, from bridges to culverts, signs, retaining walls and other ancillary structures
Through this enhanced information mobility, transportation design and construction firms, as well as owner-operators, are able to leverage the information modelling of geospatial and other data through integrated projects to design, build, operate and maintain high-performing, intelligent infrastructure throughout the lifecycle of road, bridge, rail and transit networks and other transportation infrastructure assets.
Information mobility & project integration
The demand among transportation professionals is for software that transcends geospatial information and lifecycle asset management, making it possible for all stakeholders across the lifecycle to readily consume geospatial information. A key requirement for such software is the ability to support multiple file formats. In addition, standards such as LandXML and the various OGC standards play an important role in supporting interoperability. Important innovations to support information mobility include i-models — containers for the open exchange of infrastructure information. In addition, with the explosion of mobile devices and form factors, a number of apps enable trusted geospatial information to be easily accessed anytime, anywhere.
Information mobility and point clouds
Point-cloud scanning devices are becoming commonplace and inexpensive, so images are now cost-effective to capture and are becoming pervasive across various market segments and disciplines, including civil engineering. These 3D laser scans can be used in the planning stages of a transportation project as a way of capturing and modelling existing conditions, including creating digital terrain models for site design, and also for visualisations to aid in project communications and approvals. In the design stage,
these point clouds can then be used to aid in road or rail alignments and corridor modelling. In ongoing maintenance and inspection, point-cloud data can be used for everything from analysing road conditions for rehabilitation projects to capturing as-built conditions for operations. Equally important are solutions that support intelligent 3D positioning, which lets engineering models referenced to real time and real place move from and into mobile devices in the field. As a result, 3D design models can be used by contractors for machine grading and earthmoving and to improve and validate onsite construction processes. A strategic alliance between Bentley and Trimble will enable intelligent positioning for large infrastructure project sites, establishing a new benchmark for construction and operations quality, efficiency and safety. Finally, new software for transportation asset inspection and maintenance is enabling owner-operators to better track and manage all types of transportation assets, from bridges to culverts, signs, retaining walls and other ancillary structures. Such software helps operators to quickly and effectively collect, analyse, manage and report inspection data. This allows proactive lifecycle planning for infrastructure maintenance based on accurate data and risk assessment, as well as planning for future operational changes such as remodelling or expansions.
More intelligent infrastructure
Information mobility is a key enabler in allowing geospatial information to be effectively leveraged throughout the lifecycle of transportation infrastructure. It results in infrastructure professionals being able to make better decisions faster, transportation infrastructure projects being built and maintained more efficiently, and more effective road and rail networks that deliver a better return on investment to society.
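One planning-stage use noted above, deriving a digital terrain model from laser-scan data, can be illustrated by averaging point elevations per grid cell. The points and cell size are hypothetical, and production tools use far more sophisticated filtering and triangulation:

```python
# Reduce an (x, y, z) point cloud to a gridded DTM by averaging the
# elevations that fall in each cell. Points and the 1 m cell size are
# hypothetical.

def grid_dtm(points, cell=1.0):
    """Average z per (x, y) grid cell -> {(col, row): mean elevation}."""
    sums, counts = {}, {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        sums[key] = sums.get(key, 0.0) + z
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

# A tiny hypothetical scan: two points share a cell, one sits alone.
cloud = [(0.2, 0.3, 10.0), (0.7, 0.6, 10.4), (1.5, 0.4, 11.0)]
dtm = grid_dtm(cloud)
print(dtm)  # one averaged elevation per occupied cell
```

The resulting grid is the kind of simplified surface a designer would then contour or drape alignments over.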
Arvind Thakur CEO NIIT Technologies
Good governance delivers better services across all areas of administration. E-governance is a popular term today. It basically involves the use of technology to reform government work. Increasingly, geospatial technology is becoming core to many IT programmes in government. From public utilities to land records to internal security, we are seeing deployment of geospatial technology. With the increased use of geospatial technology, governance processes are maturing from e-governance to g-governance. G-governance can be defined as the use of geospatial technology to spatially enable policy-makers to take informed decisions. Geospatial technology essentially provides a framework for integrated problem solving. To solve a problem, we need to first understand it. Geospatial technology enables us to understand problems better because it presents issues visually, in a more understandable manner. To move towards g-governance, we need to see how e-governance programmes can get a geospatial layer.
Data integration for better governance
Most government programmes operate in silos. To take quick and informed decisions, it is important to integrate information. So, for example, can we integrate land records
with urban solutions and determine how much property tax should be collected from a region? Geospatial solutions facilitate such integrated views, which can help the government increase revenues and deliver better services. NIIT Technologies is currently engaged with many states in a nationwide programme called the Crime and Criminal Tracking Network and Systems (CCTNS). The programme seeks to automate police stations across all states — FIRs, criminal records and complaint status will be online, enabling data accessibility and transfer. Now, if you geo-code FIR locations, you can analyse crime zones, deploy resources appropriately and take preventive measures for better citizen service. Indeed, geospatial technology is being planned for deployment in the next phase of the programme. GIS is beginning to form the core of many programmes. Take the Accelerated Power Development and Reforms Programme (APDRP) in India, for instance. The investment in the geospatial element may be just 10-15 per cent of the total outlay, but from planning to implementation, and even follow-ups, it is core to the utilities’ functioning. APDRP’s core is GIS: assets are geo-coded, it uses satellite imagery, and there are layers of network asset locations and layouts of every township. The latitude and longitude of every electricity pole, details of every household meter and all their characteristics get coded into very rich data. The geospatial element is central to planning, consumer indexing, load dispatch, checking pilferage and even new infrastructure — a good example of an e-governance programme moving towards g-governance. Public distribution is another great example of geospatial technology serving the
grassroots. First you locate the distribution points, and then you capture information to see how much distribution is taking place and whether the subsidies are in line with the volume of goods going through those points. Such g-governance dramatically improves citizen services. The movement to g-governance calls for provisioning geospatial layers to e-governance programmes such as the National Spatial Data Infrastructure, state SDIs and the National e-Governance Programme. The recent National GIS initiative in India aims to integrate spatial information across different government departments and is a major step towards g-governance.
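The land-records-plus-tax question posed above is essentially a data join followed by aggregation. A sketch with entirely hypothetical parcels, regions and rates:

```python
# Join hypothetical land-record parcels to a region's tax schedule and
# aggregate the property tax collectable per region.

parcels = [  # land-record layer: parcel id, region, assessed value
    {"id": "P-001", "region": "North", "value": 250_000},
    {"id": "P-002", "region": "North", "value": 180_000},
    {"id": "P-003", "region": "South", "value": 320_000},
]
tax_rates = {"North": 0.012, "South": 0.015}  # urban-administration layer

def tax_by_region(parcels, rates):
    """Sum value * rate for every parcel, grouped by region."""
    totals = {}
    for p in parcels:
        region = p["region"]
        totals[region] = totals.get(region, 0.0) + p["value"] * rates[region]
    return totals

print(tax_by_region(parcels, tax_rates))  # expected tax per region
```

In a real g-governance deployment the join key would be spatial (parcel geometry against administrative boundaries) rather than a stored region label, but the integration principle is the same.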
With increased use of geospatial technology, governance processes are maturing from e-governance to g-governance. G-governance can be defined as the use of geospatial technology to spatially enable policy-makers to take informed decisions
SUBSURFACE UTILITY MAPPING SOLUTIONS
SAFE DIGGING… IT MATTERS! CONTACT US TODAY!!
AXIS CIRCLE brings you 15 years of experience in underground utility detection and mapping using the latest detection technology, including electro-magnetic and ground penetrating radar. The buried utilities are electronically marked with unique RFID tagging for accurate and precise records for future development planning.
www.axisutilitymapping.com AXIS CIRCLE SDN. BHD. 13A Jalan SG 3/8 Taman Seri Gombak, Batu Caves, 68100 Selangor, Malaysia Tel: +603 6189 0659/ +603 6189 9659, Fax: +603 6189 0661 Skype: axiscircle, Email: firstname.lastname@example.org
The movement to g-governance calls for provisioning geospatial layers to e-governance programmes such as the National Spatial Data Infrastructure, state SDIs and the National e-Governance Programme
Hurdles for spatially enabled governance in India
High-quality imagery remains subject to a restrictive policy environment. We obviously need to be sensitive to the concerns of security agencies, but to spatially enable governance, the approach to making imagery available needs to be reviewed. The complete benefits of geospatial technology can be derived only with policy reforms in this area. This is a significant challenge which requires the attention of policy makers. Availability of good talent is another hurdle, as for any evolving sector. Most people in the Indian geospatial sector are scientists, and we need more engineers. There is a need for formal educational programmes, like a B.Tech/M.Tech in geospatial technology. At NIIT University we started such a programme, but it churns out only about 20 professionals annually. We have to create the infrastructure and awareness to produce a larger number of qualified professionals. The sector has developed to a great extent and can be an exciting career opportunity today.
Esri Roads and Highways supports desktop and server workflows
Another challenge, as with any large programme, is managing change. Bringing in technology is a simple task, but the bigger challenge is getting the organisation to operate in the new way. Change management is critical to the success of a large programme, and it falters when executive management is not involved and the task of implementation is left to the technologists. Even though technology helps executives do their job better, for most, that’s for the future and not relevant to the current responsibilities on hand. It is important to bring about a level of awareness among line administrators that successful programme implementation can happen only with their personal involvement.
Evolving policy environment
Awareness of the importance of geospatial technology is growing in India. That a programme like the National GIS has got funding means the government is putting its money behind its intent. With a clear road map emerging at the national level, geospatial is no longer an evolving sector; it has evolved. Geospatial technology has followed IT developments. When the IT revolution took place in the country during the ’90s, the sector had no policies. The industry collaborated with the government to create IT policies. However, the geospatial industry has many restrictive policies in place pertaining to the availability of maps and images. Government has many priorities; geospatial industry bodies have to make their agenda the government’s priority. As during the IT revolution, when we saw great collaboration between industry bodies and the government, we need geospatial bodies with large users as participants to take up the cause and drive policy reform.
SPACE FOR GLOBAL DEVELOPMENT
The Faculty of Geo-Information Science and Earth Observation (ITC) of the University of Twente is one of the world’s foremost education and research establishments in the field of geo-information science and earth observation, with such a wide range of disciplines and activities in this field. — Gerard Kuster
CAREER PERSPECTIVES
At the heart of ITC’s activities lies capacity building and institutional development: the processes by which individuals, groups and organizations strengthen their ability to carry out their functions and pursue their goals effectively and efficiently. This dynamic setting offers attractive career perspectives, enabling qualified personnel to put their skills and expertise to excellent use.
DEGREE COURSES IN GEO-INFORMATION SCIENCE AND EARTH OBSERVATION FOR:
• Applied Earth Sciences
• Geoinformatics
• Environmental Modelling and Management
• Land Administration
• Natural Resources Management
• Urban Planning and Management
• Water Resources and Environmental Management
DEGREE, DIPLOMA AND CERTIFICATE PROGRAMMES
Over the years, ITC has developed a wide selection of courses in its degree, diploma and certificate programmes in geo-information science and earth observation. These courses are offered in the Netherlands, online and abroad, by ITC itself or by ITC in collaboration with reputable, qualified educational organizations.
FOR MORE INFORMATION
ITC Faculty, University of Twente, P.O. Box …, AE Enschede, The Netherlands
info@itc.nl, www.itc.nl and www.utwente.nl
Joep van Beurden CEO CSR Plc
Doors opening for indoor navigation
For LBS to take off, indoor navigation technology is a must. This will allow users to use their PNDs or smartphones inside buildings like shopping malls, big airports or railway stations. The users will then know where they are inside that building and how to get to another location
Forty years ago, a new technology called GPS emerged on the horizon, generating a lot of interest. People speculated about the possibilities of using this military technology in commercial applications. This speculation turned into reality several years later, made possible by the coming together of three factors: the mass production and low-cost viability of systems that could accurately give location; the creation of outdoor maps by a number of companies; and the development of a ‘killer’ application. This ‘killer’ application was turn-by-turn navigation on portable devices. For many years we have waited for the next series of applications, specifically ‘location-based services’, or LBS, but they have not quite lived up to the promise. And the reason is simple: people spend nearly 90 per cent of their time indoors. Today’s GPS technology is satellite based, and the signals from these satellites do not penetrate buildings. This means location is not available indoors. For LBS to take off, indoor navigation technology is a must. It would allow users to use their PNDs or smartphones inside buildings like shopping malls, big airports and train stations, and would tell them where they are and how to get to a specific location in the building. Having a technology that gives
deep indoor location would be useful and is therefore considered the next frontier in location and navigation.
The wait is almost over
The exciting news is that the associated technological requisites and innovations to make indoor navigation a reality are under way, and we are almost there. With technology advancing, we have a navigation chip today that not only supports traditional GPS outdoor navigation, but also harnesses various signals inside buildings that can aid indoor navigation, like those from Wi-Fi and cell towers. The chip can triangulate between these signals; an intelligent, seamless combination of them helps determine location everywhere, indoors and outdoors. The chip also utilises the on-board MEMS sensors present in smartphones. MEMS sensors can keep track of movement when Wi-Fi access points and radio signals are unavailable. Smartphones are also increasingly equipped with pressure sensors that can determine a user’s presence on a particular floor inside a building. We also have the necessary software to optimise the various signals outdoors and indoors and figure out the indoor location with adequate accuracy. Finally, there is the cloud-based server functionality that can contain the database of all the indoor access points and all GSM cellphone towers. While it used to be quite cumbersome for phones to be connected to the cloud, it is now routine and no longer costly. Another critical element is indoor maps. In addition to companies working on technology to figure out indoor location, there are quite a few others working on mapping public buildings around the world. A market forecast by IMSF projects that by 2016, 120,000 indoor maps will be available. In my view, it is the combination of these technology developments and the availability of indoor maps on mobile handsets that will make indoor location a very exciting opportunity in the coming years.
Many doors will open
There are a host of compelling services that can be developed on the back of indoor navigation. It can enable tracking of goods being
shipped, or anything else one would like to keep track of — a child or an elderly relative. Indoor location technology helps people in a shopping mall or similar venue find any point of interest within it. Similarly, if one is already shopping in a department store, the location of specific items can be found with ease. For payments, indoor navigation gives the security of knowing that a payment is being made while the customer is actually in the shop where the item is purchased, and not somewhere else. People have long been excited about LBS because of its wide range of compelling applications. The hurdle has been the need for a commercially viable way to figure out one’s location indoors, where all of us spend 90 per cent of our time. With indoor location, accurate indoor maps and many companies working on useful, exciting applications, indoor navigation is poised to be a fast-growing opportunity over the next couple of years.
HOW IT WORKS
• A navigation chip supports traditional GPS outdoor navigation
• It also harnesses signals inside buildings, like those from Wi-Fi and cell towers
• The chip can triangulate between the various signals
• It utilises the on-board MEMS sensors present in smartphones
• When Wi-Fi access points and radio signals are unavailable, MEMS sensors can keep track
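The triangulation step in the box above can be illustrated with the classic three-circle solve: given three access points at known positions and distances estimated from signal strength, two linearized equations yield the receiver position. Coordinates and distances here are hypothetical; a real chip fuses many noisy measurements together with MEMS and pressure-sensor data.

```python
# Solve for an indoor position from three access points with known
# positions and RSSI-derived distance estimates (all values hypothetical).

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve the two linearized circle equations for the (x, y) position."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting circle 1 from circles 2 and 3 removes the quadratic terms.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1 ** 2 - d2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1 ** 2 - d3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a1 * b2 - a2 * b1  # assumes the access points are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical access points (metres) and distance estimates.
estimate = trilaterate((0.0, 0.0), 5.0, (10.0, 0.0), 65 ** 0.5, (0.0, 10.0), 45 ** 0.5)
print(estimate)  # close to the true position (3.0, 4.0)
```

With noisy real-world distances, the same equations would be solved in a least-squares sense over many signals rather than exactly over three.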
Drs. Th A J Burmanje (Dorine) Chairman of the Board Kadaster, The Netherlands
Land administration standard: cornerstone for development
Land administration documentation indicates the relationship between people and land. However, about three-quarters of these people-to-land relationships — concerning about 4.5 billion cases the world over — are not documented, which often results in land disputes and land grabbing, thus denying local people their rights. Sustainable development, human rights and spatial planning are difficult to achieve without proper land administration. But proper land administration systems need proper data standards, which facilitate the quick and efficient setup of land registrations.
Bringing knowledge together
It’s a pity to see the wheel being re-invented again and again, leading to a waste of time and money, especially in countries that do not have the means or funding. New land administration systems are being developed worldwide. They experience the same struggles again and again — how to divide responsibilities, how to bring together fragmented data sets of different organisations, how to define public or private roles, or which IT structure to choose. As a consequence, land administrations are often incomplete, and data is not up to date and lacks quality and governance. A successful land administration system should provide answers to these questions. Land information systems require a data model that is able to structure and connect the data. There was an emerging global demand for a widely accepted data model (domain) standard making use of already existing knowledge. This was supported by UN-Habitat, the Food and Agriculture Organization and the International Federation of Surveyors. A team of land administration professionals initiated the development of a common data model which was flexible enough to function as the core of any land administration system. The Dutch Kadaster was part of the international team that created a practical solution — a common standard called the Land Administration Domain Model, or LADM.
Land Administration Domain Model
In land administration systems, data standards are required to identify elements. These may include objects, transactions, relationships between spatial units and persons, classification of land use, land value and map representations of objects. In existing administrations, a data standard is generally limited to the region, or jurisdiction, where the land administration is in operation. The new LADM data standard offers great flexibility. LADM provides a common framework, a set of concepts and associated terms. It also addresses all the parties, their rights, responsibilities, spatial units, surveying and terminology, among others, with the power to combine data from different
sources. LADM acquired ISO accreditation in November 2012, ensuring that it meets internationally recognised quality standards. The LADM standard is already being used in several countries. In Cyprus, the model serves as the backbone for improving data processing. In Portugal, an object-oriented model has been developed for the Portuguese Cadastre and the Portuguese Real Estate Register. LADM is also being used in the FAO project Solutions for Open Land Administration (SOLA), which aims to make computerised land administration systems affordable and sustainable in developing countries.
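The core of the LADM described above can be sketched as a handful of linked classes: parties hold rights, restrictions or responsibilities (RRRs) in basic administrative units, which reference spatial units. The class names echo the ISO 19152 core packages, but the attributes and the sample registration below are illustrative only.

```python
# Minimal LADM-style core: parties hold RRRs in basic administrative
# units, which reference spatial units. Attributes are illustrative.
from dataclasses import dataclass, field

@dataclass
class LAParty:               # cf. LA_Party: a person or organisation
    pid: str
    name: str

@dataclass
class LASpatialUnit:         # cf. LA_SpatialUnit: a parcel's extent
    suid: str
    area_m2: float

@dataclass
class LABAUnit:              # cf. LA_BAUnit: unit rights are registered against
    uid: str
    spatial_units: list = field(default_factory=list)

@dataclass
class LARRR:                 # cf. LA_RRR: a right, restriction or responsibility
    kind: str                # e.g. "ownership", "easement"
    party: LAParty
    unit: LABAUnit

# Hypothetical registration: one owner holds ownership of one parcel.
owner = LAParty("p1", "A. Landholder")
parcel = LASpatialUnit("su1", 812.5)
unit = LABAUnit("ba1", [parcel])
right = LARRR("ownership", owner, unit)
print(f"{right.party.name} holds {right.kind} of {right.unit.uid}")
```

The flexibility the article describes comes from exactly this separation: the same party/RRR/unit structure can represent formal title, customary tenure or restrictions without changing the model.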
The new LADM data standard offers great flexibility. It not only defines the elements that provide a basis for any land administration setup, but defines them in a way that can be applied anywhere in the world
For sustainable development
The Netherlands' Cadastre, Land Registry and Mapping Agency has a long tradition of collecting and providing data on property and geography. We are constantly enriching our knowledge by sharing what we know and learning from our international colleagues. This is reflected whether we are helping implement the European INSPIRE directive on geographic data or supporting the Russian cadastre in designing a 3D cadastre system. We believe in collaboration for the benefit of society. LADM will accelerate the development and implementation of proper land administration systems, the cornerstone of economic growth.
Geoff Zeiss Editor—Building & Energy Geospatial Media & Communications
Adding a spark to energy efficiency
Buildings account for one-third of the world's total energy consumption and improving their energy efficiency is a priority. Yet about 80 per cent of the energy efficiency potential of buildings remains untapped
The global demand for energy is expected to increase by over 30 per cent by 2035, according to the International Energy Agency (IEA). During the same period, the demand for power is expected to surge by over 70 per cent. The IEA also projects that energy-related CO2 emissions will rise from an estimated 31.2 Gt (gigatonnes) in 2011 to 37 Gt in 2035, leading to a rise of 3.6 degrees Celsius in the earth's surface temperature. Limiting this to even 3 degrees Celsius requires a significant investment in energy efficiency, with energy intensity improving at 2.6 times the rate of the last 25 years. Countries such as China, the US, Japan and the EU have adopted measures projected to contribute to a reduction in global energy intensity of 1.8 per cent per year through to 2035, compared with only 0.5 per cent a year over the last decade. Buildings account for one-third of the world's total energy consumption and improving their energy efficiency is a priority. Yet about 80 per cent of the energy efficiency potential of buildings remains untapped.
G-tech in energy efficiency
Geospatial technology plays a critical role in the energy industry. It is used in areas such as transmission line routing, energy density analysis, energy performance optimisation of buildings, vegetation management for transmission lines, and estimating solar power potential and optimising PV panel positioning. It also plays a crucial role in managing field resources, disaster management, outage management, asset management, and work crew dispatch. Environmental monitoring and impact assessment in oil and gas are other areas.
• Energy efficiency of new buildings: Energy performance analysis helps architects and engineers to optimise the energy usage of new buildings. Starting with a simplified BIM model, the geographical location and the surrounding natural and man-made structures, an energy performance analysis involving thermal, lighting and airflow simulations can compute how much energy the building will consume in a year, paving the way for testing alternative options. This could reduce annual energy consumption by 40 per cent.
• Energy efficiency of existing buildings: For an existing structure, it is necessary to measure its present performance. This involves compiling information from past photographs, construction drawings, and geospatial data sources. High-definition laser scanning is used to collect accurate 3D physical and spatial information. With the geometry and other information contained in the BIM, along with the geographic location and orientation of the building and surrounding structures, the energy analysis of an old building can be assessed.
• Energy density analysis: In 2007, Ontario planned to reduce greenhouse gas emissions by 6 per cent below 1990 levels by 2014. This meant reducing peak demand by 5.6 per cent and consumption by 4.9 per cent. Power utility Horizon procured detailed information such as building age, sun exposure, heating type, air conditioning and other geospatial data to map the energy density of buildings. This enabled it to successfully target conservation and demand management (CDM) at the buildings with the highest energy footprints.
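The energy density targeting described above amounts to ranking buildings by energy use per unit of floor area and focusing CDM spending on the top of the list. A minimal sketch of that ranking, with hypothetical field names and figures (not Horizon's actual data), might look like this:

```python
# Illustrative sketch: rank buildings by energy use intensity (EUI)
# to shortlist candidates for conservation and demand management (CDM).
# All figures and field names are hypothetical.

def rank_by_energy_density(buildings, top_n=2):
    """Return the top_n buildings with the highest annual kWh per m^2."""
    scored = [
        {**b, "eui_kwh_per_m2": b["annual_kwh"] / b["floor_area_m2"]}
        for b in buildings
    ]
    scored.sort(key=lambda b: b["eui_kwh_per_m2"], reverse=True)
    return scored[:top_n]

buildings = [
    {"id": "B1", "annual_kwh": 500_000, "floor_area_m2": 2_000},  # 250 kWh/m2
    {"id": "B2", "annual_kwh": 300_000, "floor_area_m2": 3_000},  # 100 kWh/m2
    {"id": "B3", "annual_kwh": 450_000, "floor_area_m2": 1_500},  # 300 kWh/m2
]

targets = rank_by_energy_density(buildings)
```

In practice the score would fold in the geospatial attributes the article mentions (building age, sun exposure, heating type), but the targeting logic remains a ranked shortlist.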
• Disaster management: Alabama Power installed over a million smart meters and supporting automated meter infrastructure (AMI) to help differentiate between network and customer-induced outages, thereby reducing the number of truck rolls. In April 2011, tornadoes hit its service area, destroying two substations, flattening transmission pylons, breaking 7,500 poles, and leaving 400,000 customers without power. By using Google Maps to display which smart meters could not be read, Alabama Power put together a picture of areas that had lost power. The application also provided emergency officials with information about whether power was on or off in specific buildings. In addition, Alabama Power was able to map power restoration trends on Google Maps as customers started coming back on-line.
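The outage-mapping idea in the Alabama Power example can be sketched as a simple rule: if most meters on a feeder go dark together, suspect a network outage; if a single meter is dark while its neighbours respond, suspect a customer-side problem. The threshold, field names and data below are assumptions for illustration, not the utility's actual logic:

```python
# Illustrative sketch: distinguish network outages from customer-side
# problems by grouping non-reporting smart meters by feeder.

from collections import defaultdict

def classify_outages(meters, feeder_threshold=0.8):
    """Flag feeders where most meters are dark as likely network outages;
    isolated dark meters are flagged as likely customer-side issues."""
    by_feeder = defaultdict(list)
    for m in meters:
        by_feeder[m["feeder"]].append(m)

    network_outages, customer_issues = [], []
    for feeder, ms in by_feeder.items():
        dark = [m for m in ms if not m["responding"]]
        if dark and len(dark) / len(ms) >= feeder_threshold:
            network_outages.append(feeder)
        else:
            customer_issues.extend(m["id"] for m in dark)
    return sorted(network_outages), sorted(customer_issues)

meters = [
    {"id": "M1", "feeder": "F1", "responding": False},
    {"id": "M2", "feeder": "F1", "responding": False},
    {"id": "M3", "feeder": "F2", "responding": True},
    {"id": "M4", "feeder": "F2", "responding": False},
    {"id": "M5", "feeder": "F2", "responding": True},
]

outages, issues = classify_outages(meters)
```

Plotting the flagged feeders on a map, as Alabama Power did with Google Maps, turns this classification into the outage picture the article describes.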
Integrating AMI and GIS enables linking of a utility’s service points and metered accounts with customers’ physical addresses. Once a utility has reliably linked service points to the customers’ physical addresses and geolocated them, a whole range of other systems can be integrated with it
• Integrating AMI with enterprise systems: Integrating AMI and GIS enables linking of a utility's service points and metered accounts with customers' physical addresses. Once a utility has reliably linked service points to the customers' physical addresses and geolocated them, a whole range of other systems can be integrated with it, including the outage management system, data analytics and the work management system.
• Smart grid management: A smart grid involves integrating a SCADA system, smart meters reporting power use every 15 minutes, AMI, intelligent electronic devices such as power-line sensors, bi-directional communications to receive information from and control smart devices, self-healing networks, distributed generation (wind or solar PV), electric vehicle charging stations, fault ride-through systems and battery-based elec-
Geospatial data and technology goes a long way in improving the management of energy infrastructure while reducing the environmental impact by ensuring energy efficiency
tric storage. The volume of data generated by smart grid networks has been estimated to be 10,000 times greater than that of existing networks, and much of it is real time. Integration with geospatial technology allows real-time monitoring and decision making. Burlington Hydro's real-time, geospatially enabled smart grid operations and management system integrates with its existing enterprise systems and provides a common point of access to its operational data. BHI's transformer status monitoring dashboard shows a map of its service area with transformer loading in real time in the form of a heat map. It also reports historical loading and estimates. It supports reconfiguration of the grid in real time to reduce the load on overloaded transformers and redistribute it to others with available capacity.
• Transmission line siting and design: Planning and building a new transmission line takes 12 to 20 years in the United States. Software tools that integrate transmission line designs with supporting structures, a digital terrain model, planimetric maps, aerial photographs, LiDAR and other geospatial
LiDAR scanning for vegetation management of transmission lines
data can help site the line in the most economical, environmentally friendly and least visually intrusive way.
• Using solar energy: The deployment of solar PV is accelerating around the world. A type of geospatial imagery called oblique imagery provides near-3D capabilities at a much lower cost. It allows contractors to determine whether a building is suitable for mounting solar panels. Contractors can view and measure roof space, determine the tilt and direction, and even identify obstacles that could make installation difficult or cause shading problems.
• Bringing the field into the office: One of the most important goals of utilities is improving the productivity of operations staff. Several tasks involving field staff can now be done in the office. Oblique imagery, street view, laser scanning and high-resolution orthophotography can provide operators with most of the information they need without leaving the office, reducing the number of truck rolls and costs.
• Vegetation management for transmission lines: To prevent outages due to vegetation encroaching on transmission lines, laser scanning is carried out from a fixed-wing or helicopter-based platform. These LiDAR data collection platforms can be configured to provide the point densities and accuracies required to model vegetation encroachments from LiDAR point clouds. Semi-automated processing is used to identify pylons, wires and vegetation, which are entered into a GIS-based asset management system.
Geospatial data and technology go a long way in improving the management of energy infrastructure while reducing environmental impact by ensuring energy efficiency.
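At the core of the vegetation encroachment modelling described above is a geometric test: how close does each classified vegetation point come to a conductor? A minimal sketch, treating the wire as a straight 3D segment (real workflows use catenary wire models and classified point clouds), could look like this:

```python
# Illustrative sketch: flag LiDAR vegetation points that fall within a
# clearance distance of a conductor modelled as a 3D line segment.

import math

def point_segment_distance(p, a, b):
    """Shortest distance from point p to the segment a-b in 3D."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)  # squared segment length (non-zero here)
    t = max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest)

def flag_encroachments(points, wire_a, wire_b, clearance_m=3.0):
    """Return the vegetation points closer to the wire than the clearance."""
    return [p for p in points
            if point_segment_distance(p, wire_a, wire_b) < clearance_m]

# Conductor from (0, 0, 20) to (100, 0, 20); hypothetical vegetation returns.
wire_a, wire_b = (0.0, 0.0, 20.0), (100.0, 0.0, 20.0)
veg = [(50.0, 1.0, 19.0), (50.0, 10.0, 5.0), (80.0, 2.0, 18.5)]

hits = flag_encroachments(veg, wire_a, wire_b)
```

Points flagged this way would feed the GIS-based asset management system as work orders for trimming crews.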
Christof Hellmis Vice President-Map Platform Nokia
Location brings a new dawn in mapmaking
So many of our daily choices and plans begin with three simple questions: ‘who?’, ‘what?’ and ‘where?’. During the Internet revolution, tech companies seized on the first two. Today, the ‘where’ category is redefining how people navigate the world around them
“Hello, operator, can you get me New York?” It used to be that when we placed a phone call, we would ask a switchboard operator to connect us to a place. Then, with the rise of mobile telephony, it suddenly became possible for us to connect directly with people, wherever they might be in the world at that moment. Today we’re seeing another shift underway, and “place” has become important again. We now use the word “place” to describe location and the host of location-enabled devices and services — smartphones, vehicles, cameras, essentially anything that moves — that are increasingly becoming an integral part of our daily lives. Have you ever noticed that so many of our daily choices and plans begin with three simple questions: “who?”, “what?” and “where?”? During the Internet revolution and the subsequent dot-com boom, technology companies seized on the first two of these questions. The “what” category came to define online search, enabled by the proliferation of desktop computers and Internet access. It changed the way we think about discovering information and conducting business. A few years later, the “who” category came
into vogue with the rise of social networks, and a new class of innovators realised our interest in sharing personal information and making social connections online. Today we’re in the age of the “where” category that is redefining how people navigate the world around them. Nokia is at the forefront of creating this new category. We’re going to combine our own data collecting and technology efforts with those passionate individuals who can become mapmakers, through crowdsourcing and online community tools. We want to give better answers to the questions people ask in their daily lives: “when should I leave for work to make my morning meeting?”, “where should I buy coffee along the way?”, “how do I avoid bad traffic?”, “how do I get home on public transport?” etc. People ask these questions through the lens of everyday routines and simple actions — driving, connecting, walking, exploring, and riding public transport. All of these questions, each of these actions share a common element: location. With detailed and freshly captured content, we give people the most accurate maps and location-based information. With a computable platform, we deliver people
on-demand maps and location experiences everywhere. The result is not just one map, but essentially millions of maps, generated every day. At the end of 2012, Nokia introduced HERE — the brand of our location cloud that delivers the leading maps and location experiences across all screens and operating systems.
Communication network critical for rich content
Connectivity will always play a key role in driving content, and any information of a dynamic nature will live in the cloud. This is why we invest in the location cloud. The devices we power with our products, such as the Nokia Lumia, are an example of our hybrid-engine approach — moving the location experience closer to the device. Unlike traditional digital maps stored on a remote server, our maps are computed on the device at the moment people need them. The result is a truly offline experience, with lower bandwidth consumption, better responsiveness and interactivity.
Computational cartography redefines map-making
This is an example of computational cartography, which is redefining map-making. Maps are created on demand to meet a specific purpose, to help govern an individual action, to help answer a personal question. It may be a map that is very reduced and abstract for in-vehicle use or navigation, where the focus is on safety, security or driver distraction. Even reduced maps can provide advanced functions such as fuel economy based on high-precision data. Computational cartography also enables the rich visual experiences you may want to have on the Web, mobile or tablet to explore your surroundings if you, say, go skiing. You may want to preview the slopes, or afterwards share information about your surroundings and personal experience. In this way, individuals become part of the digital mapping ecosystem. At Nokia, this is what we are starting to do as a first step towards providing a richer map that reflects the reality of one's environment and how individuals sense the world around them.
We believe everything benefits from being location enabled and described by its location. Indoor mapping, for instance, is a big challenge that we are addressing
Indoor location mapping the next thing
We believe everything benefits from being location enabled and described by its location. Indoor mapping, for instance, is a big challenge that we are addressing. Over the coming months, we will expand our offering and inspire a new generation of location services and devices that make the mobile experience more individual for people everywhere. As a result, we will bring our offering to more devices and more people than ever before. We will redefine mapmaking.
BVR Mohan Reddy Chairman & CEO Infotech Enterprises
Smart networks & role of GIS
“A stitch in time saves nine” — can we use this proverb to describe the state of our network operations? The answer is “no”, most of the time. However, network operators aspire to incorporate proactive measures to improve the reliability, adaptability, security, predictability and self-healing aspects of their networks. In other words, they are trying to create “smart networks”. Let us take a closer look at the challenges faced by network operators and how GIS can be used as an enabler for their end-state vision of smart networks.
Current state of networks
Network operators generally resort to either preventive maintenance of their networks or restoration after an event.
While a GIS system has been traditionally used for network planning and design during the construction phase, it is now increasingly being seen as a foundational system to establish a single source of truth across the network operation
In the absence of effective systems to provide lead indicators, network operators fall short in their ability to take proactive measures. According to a 2006 estimate, the annual cost of power interruptions in the US is a staggering USD 80 billion. A good share of these interruptions can be attributed to the state of the networks.
Building blocks of smart networks
The utilities (smart grid) and communications (next-generation networks) industries are investing heavily to make their networks “smart”. Any network comprises sources and sinks (end points) and the supporting infrastructure to facilitate delivery of the commodity. For a telecommunications network, it could be machines exchanging data, or in-
dividuals speaking on the telephone. For an electrical network, the end points are generating stations and consumers. Similarly, for a gas network, they are the storage points or wells and the end consumers. While this is a simplified view of a network, in reality several components, systems and stakeholders together constitute the network (Figure 1). A typical electric utility serves more than a million customers, while a communications network operator serves several million. These complex networks, though, are only as good as their weakest link. For these networks to operate with minimal downtime, several components and systems need to perform in concert, without errors.
Challenges in operating networks
Independent departments perform diverse functions to operate the networks. While the operations division addresses delivery reliability, fault repair, maintenance and crew management, finance handles the commercial aspects, and the planning, design and engineering divisions ensure that the radial and fixed assets of the network support operations in delivering the commodity. Although all these departments work towards a common goal, the absence of a synchronised view of information pertaining to their network components often results in these functions operating in silos — a recipe for inefficiency and ineffectiveness. The deployment of smart electronic devices and the availability of low-cost communication bandwidth to haul information across the network is leading to an information ex-
Figure 1: Stakeholders in a network
Figure 2: GIS as a unifier of systems and information in an enterprise
Advancements in communication technologies, awareness to reduce carbon footprint, availability of processing power with miniaturisation of silicon, standardisation as well as the openness in systems are leading to paradigm shifts in the way networks are operated
plosion. However, this information is seldom used to take intelligent decisions for network operations. Since a majority of our networks were built over the past 100 years or even earlier, their components and systems were added incrementally, with wide variations in technological maturity. While some are state-of-the-art, several legacy systems and components continue to run networks across the world. Advances in communication technologies, increasing awareness of the need to reduce carbon footprints by tapping renewable energy sources, availability of processing power with the miniaturisation of silicon, advancements in storage technologies, and standardisation as well as openness in systems are leading to paradigm shifts in the way networks are operated.
GIS as an enabler for smart networks
In order for network operators to make quick decisions, it is imperative that the information at their disposal is correct, consistent, complete and current across systems. While a GIS system has traditionally been used for network planning and design during the construction phase, it is now increasingly seen as a foundational system to establish a single source of truth across the network operation. This characteristic is enabling the unification of diverse systems, departments and operations (Figure 2) within the network. Visualisation of information is another major application of GIS, allowing network operators to analyse the “tsunami of data” generated by the components of the network. Traditional schematic views and tables are quite inadequate for such volumes of information. GIS, however, enables the thematic representation of business analytics using colour codes on a map, highlighting areas needing focus and rapid action.
Conclusion
We are still a long way from realising smart networks. Nonetheless, steps are being initiated in right earnest to lay the foundation by incorporating smartness into the components of networks. Operators need lead indicators to be proactive in dealing with their networks. Technological advances to network components, such as intelligent devices, are spewing out volumes of information for analysis. Traditional methods are failing to provide operational benefits. It is in this scenario that killer applications leveraging the potential of GIS can play a crucial role in creating smart networks.
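The thematic colour-coding of business analytics mentioned above boils down to binning a metric into classes a GIS can render, for example transformer load factors on a heat map. A minimal sketch, with arbitrary break values and colour names chosen for illustration:

```python
# Illustrative sketch: bin transformer load factors into colour classes
# that a GIS could render as a thematic (heat) map. Breaks are arbitrary.

def colour_code(load_factor):
    """Map a transformer load factor (load / rated capacity) to a class."""
    if load_factor >= 1.0:
        return "red"      # overloaded: needs rapid action
    if load_factor >= 0.8:
        return "orange"   # approaching capacity
    if load_factor >= 0.5:
        return "yellow"   # moderate loading
    return "green"        # healthy headroom

# Hypothetical real-time readings keyed by transformer ID.
readings = {"T1": 1.05, "T2": 0.85, "T3": 0.40}
theme = {tid: colour_code(lf) for tid, lf in readings.items()}
```

Joined to transformer locations, such a classification is exactly what lets a dashboard highlight, at a glance, the areas needing focus.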
Bill McKinzie Executive Vice-President — Tax & Accounting Thomson Reuters
Property management a collective responsibility
Right now, only 40-50 countries in the world are actively reorganising their land systems. While there is a lot of work pending, awareness among policy makers is clearly missing
An estimated 70 per cent of the total land parcels in the world have no documented rights. Not having documented rights to land and ownership hinders the growth and development of a country or region. A number of countries have inherited land administration systems first implemented under colonial rule, and the processes involved are often too complex, the paperwork cumbersome and the ancillary costs forbidding. These statutory systems also often do not record customary or tribal rights to land. Every nation that aspires to economic development should have a government-managed revenue base. This can be created by having property rights well recorded, registered, understood and managed. Such a system, supported by the right policy framework, facilitates transparent and easy access to credit for building infrastructure like roads and bridges and creating social infrastructure like schools and libraries. The positive relationship between effective land, property and tax systems, better governance and policy making improves individual social and economic security. Today, governments want to be more transparent and want to put in place integrated solutions for monitoring and manag-
ing land and property records. We are seeing a trend towards the digitisation of land information. This involves the integration of all land, property and rights information, and their ownership, across governments, so that society can be afforded a holistic and spatial view of the fabric of the manifold interests in land. We see these properties and rights from a map perspective, where boundaries and locations are clearly outlined.
Integrated solutions
Land administration often cannot be viewed in isolation. Land valuation, land administration, land use and even the use of land administration systems need to be taken into account. Here, integration is a key component. A holistic view of land assets, whether they are privately held, with transactions around those privately held lands, or publicly held, is critical. A complete inventory of how those rights and ownership change through time is equally critical. It is necessary to ensure that countries do not have solutions in which the components are managed so loosely that they lose track of the data and start accumulating duplicate data. So
these systems have to be integrated. This ensures that land administration processes enable proper valuation and effective taxation which enable the local governments to fund the local needs through a structured and organised funding mechanism. Cost-effective solutions The geospatial industry is moving towards providing cost-effective solutions which allow scalability across jurisdictions that are very large or small. We see a movement towards solutions that have consistency in approach; consistency in the development cycle of the software and solutions; and a
consistent approach to how they get deployed and leveraged. We are also seeing a more consistent approach among vendors in the use of standards.
Developed versus emerging world
The needs of emerging countries vis-à-vis cadastre and property management systems are quite different from those of the developed world. Developed countries have policies in place that help in documenting, managing and tracking land holdings. They are looking for tools that make them more efficient, and for decision-supporting analytics that can predict what will happen in the next 5 to 10 years. Technology helps governments better analyse and visualise the financial health of their property markets. Countries in the early stages of land management and administration want to put in place policies and processes that support a manageable yet scalable structure of land management. They are often at the stage where they need structure, organisation, and tools to support that structure. Developing economies have the advantage of learning from the best practices of developed nations in implementing land and property management systems.
Land administration often cannot be viewed in isolation. Land valuation, land administration, land use and even the use of land administration systems need to be taken into account. Integration is a key component here
Collective responsibility
Right now, only 40 to 50 countries are actively reorganising their land systems. It is important for the geospatial industry to work with governments that are willing to bring about this change in land administration, to connect with them and to work with them in embracing global best practices. It is also important to leverage the work done by organisations such as the World Bank and USAID. Having a common language amongst all stakeholders for the economic growth and sustainability of a nation is paramount.
Dr Jay Pearlman Fellow, IEEE Chair, IEEE Committee on Earth Observation
Earth information for societal development
Reliable information and trusted analytical and decision support tools are necessary for decision making. If methodologies can be evolved further to reach across programmes and communities, the value of information analyses will both improve and be more consistent
Over the next decade, the global community must take on the challenge of creating and using the knowledge necessary to assess the risks that humanity is facing, to effectively mitigate those it can impact, and to adapt to those it cannot. The International Council for Science (ICSU) has identified this as a key challenge for the science community, but the extent of the societal risk is broader than one sector or community. Business evolution, innovation and even individual decisions will be affected significantly. Reliable information and trusted analytical and decision support tools are necessary for informed decision making. The support tools we use to make decisions have tried to keep pace. The ability to assess the impact of decisions, particularly quantitative assessments of impacts, has lagged significantly behind the pace at which data can be distributed. However, asking about the value of the information is a typical question in developing proposals for large space systems or information infrastructure that enhance societal information. Examples of such studies were done for the “Infrastructure for Spatial Information in Europe” (IN-
SPIRE) and Global Monitoring for Environment & Security (GMES) programmes. These analyses are generally tailored to the programme under consideration, adapted from more generic cost-benefit methodologies. While there are reasons for this, such as the need for programme-specific assumptions, the processes and best practices are less readily transferable to other programmes unless the assumptions are made very clear. If methodologies can be evolved further to reach across programmes and communities, the value of information analyses will both improve and be more consistent. Steps in this direction have begun.
Approaches and methodologies
Recent studies have examined the value of earth observation information. The derived information has economic value if it either makes a current choice more confident or reveals a different choice as better than the current one. Information has value even if it introduces more uncertainty, yet better informs a decision. The value of information needs to be expressed quantitatively, although it may not always be monetary. Various scenarios for quantifying the value of
information have been examined or are under development. Sample classes that may work in different circumstances are:
• Price and cost-based derivation: This methodology uses costs incurred or avoided through the use of information, generally in addressing areas such as quantifying risks (e.g. disaster insurance).
• Probabilistic approaches: There are cases where price and cost vary with the statistical nature of the decision environment, and a probabilistic approach provides a better basis for understanding the value of information.
• Scenario modelling and simulation: Where information is applied to complex situations with more than a few dominant variables, modelling allows the information to be applied to multiple scenarios where variables can be manipulated and the outcomes assessed.
Use cases play an important role in validating methodologies, either selected or developed, to measure the value of information. As important, they provide teaching on how to apply analyses to new problems and programmes. Use cases are also an underpinning in creating a com-
Earth observation and geospatial information have increased value if they enable an action or a decision not to take action. The derived information has economic value if it either makes a current choice more confident or reveals a different choice as better than the current one
pendium of best practices for further applications. The following case study substantiates the point.
Case in point
This case study presents a methodology and results for a return on investment (RoI) analysis by the British Columbia Ministry of the Environment in Canada (see chart), assessing the benefits of a Web-accessible tool for geospatial analysis of earth observation data in the natural resources sector. The system grants anyone with web access the ability to view, download or analyse large volumes of centrally held imagery and imagery-based data using a query tool. The application has a wide user base, including staff within provincial agencies, the Forest Practices Board, private consulting companies and universities. Benefit categories include: increased productivity for Ministry of Forests staff; reduction in the costs of acquiring data for environ-
mental assessments; increased value to conservation planning from land purchases; increased scientific credibility leading to greater funding levels; and the potential to streamline the environmental referral process. Major benefits based on user interviews showed large productivity gains for agency staff, as manual analysis took 20-50 times longer without the improved access the new tool provides. A multi-agency financial analysis incorporates costs and benefits for provincial government agencies (the Integrated Land Management Bureau, Ministry of Environment, and Ministry of Forests and Range) and a Federal agency, Agriculture and Agri-Food Canada. The forward-looking 10-year analysis of currently realised benefit categories shows CAD 6.6 million net present value, a 75 per cent annualised RoI and a payback period of three years.
Steps forward
Since 2010, there have been four workshops on the value of information, addressing the development of an international community that encompasses a wide range of scientific, social, economic, management, and communication disciplines. The emphasis is on tasks that foster collaboration across specialties and build trust across disciplines.
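The RoI mechanics behind a study like the one above can be sketched in a few lines: discount a stream of net annual benefits to get a net present value, and find the year in which cumulative benefits cover the upfront cost. The cash flows and discount rate below are hypothetical stand-ins, not the BC study's actual inputs:

```python
# Illustrative sketch of NPV and payback-period calculations, the core
# arithmetic of a return-on-investment analysis. All figures are hypothetical.

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the upfront cost (year 0)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_year(cashflows):
    """First year in which cumulative (undiscounted) cash flow turns positive."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None  # never pays back within the horizon

# Year 0: build the tool; years 1-5: net annual benefits (hypothetical, CAD).
flows = [-300_000, 120_000, 120_000, 120_000, 120_000, 120_000]

value = npv(0.08, flows)        # positive NPV at an 8% discount rate
breakeven = payback_year(flows)  # payback in year 3
```

Published analyses like the BC study add annualised RoI and per-agency allocations on top, but they rest on the same discounted cash-flow arithmetic.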
(Co-authored by Francoise Pearlman, IEEE; R. Bernknopf, University of New Mexico; A. Coote, ConsultingWhere; M. Craglia, European Commission Joint Research Center; L. Friedl, NASA; M. A. Stewart, Mary Ann Stewart Engineering; Dr Gilberto Camara, National Institute for Space Research, Brazil) For references, please refer to the online version at geospatialworld.net
k law & policy
Dawn Wright Chief Scientist Esri
Bridging the gap between scientists and policy makers
Inaction by our governments on critical societal challenges will have dire consequences, and many in the scientific community are realising that scientists can no longer afford to stand on the sidelines and not speak out beyond the boundaries of academe
There are many unresolved public problems in societies across the world, such as pollution and waste management, pandemics and biosecurity, access to clean air and clean drinking water, response to and recovery from natural disasters, choices among energy resources (oil and gas versus nuclear versus "alternative"), the loss of open space in urban areas as well as of biodiversity in rural areas, and much more, all of which increasingly revolve around science. And yet, there is a tension between the world of science, focused on discovery, and the world of policy making, focused on decisions. The ramifications of these critical societal challenges have become too great. Inaction by our governments on these issues will have dire consequences, and many in the scientific community are realising that scientists can no longer afford to stand on the sidelines and not speak out beyond the boundaries of academe. I would argue that both scientists and policy makers need each other now more than ever. The policy maker needs the knowledge of science communicated in a way that allows them to take action to solve ever-pressing
problems. In fact, scientists today can not only say that we have a problem, but also suggest what can be done about it. In turn, the scientist needs the policy maker to help extend his or her research into the realm of practical, useful outcomes that inform relevant, real-world societal issues. The policy maker may also be the one providing the lifeblood of funding that makes the science possible.

Resources for scientists
The culture of science is changing to the point that there is growing agreement that scientists can and should seek to engage with policy makers, and many are even being called upon by policy makers to do so [e.g., Baron 2011]. And increasingly, scientists as communicators are moving into positions of administrative leadership, where they not only continue to engage with society in various ways, but also work to change the culture of academic institutions from within. What are the resources available to scientists to help them become effective communicators to policy makers, especially in light
of the already huge demands on their time? Special sessions on science communication and science informing policy are now being held regularly at prominent scientific meetings such as the annual meeting of the American Association for the Advancement of Science, the world's largest general scientific society, and the Fall Meeting of the American Geophysical Union. There are also several excellent programmes available, including the Communication Partnership for Science and the Sea, which supplies scientists with the communication tools needed to effectively
bridge the worlds of science, policy, and journalism. One such tool is the "message box," which aids scientists in effectively distilling the importance of their research to policy makers in terms of what they really need to know, stated in a way that matters most to them. The message box (see figure) distills a 15-page GIScience research proposal funded by a federal agency into a salient message for a journalist interested in developing a feature article on the project. Another potentially effective tool in development is the "story map," which com-
Scientists today can say not only that we have a problem, but also suggest what can be done about the problem. In turn, the scientist needs the policymaker to help extend his or her research into the realm of practical, useful outcomes that inform relevant, real-world societal issues
Message box focused on GIS and marine mammal conservation for journalists
The geospatial community needs to ponder and discuss if communicating with policymakers is now an ethical issue, and whether science communication should be made a formal part of geospatial curricula and professional GIS certification
bines the new medium of "intelligent web maps" with text, multimedia content and intuitive user experiences to inform, educate, entertain and inspire many audiences, including policy makers, about a wide variety of environmental issues. A story map has been developed in collaboration with the European Environment Agency (EEA) and the Eye on Earth Network that allows examination of climate model predictions suggesting that Europe's urban areas will experience more hot days and tropical nights in the period 2071-2100. Clearly this should be of interest to European policy makers.

Whither geospatial?
What are the implications for scientific researchers in the geospatial realm? Scientists are normally concerned with how the Earth works. But the dominating force of humanity on the Earth begs the question of how the Earth should look, especially with regard to landscape architecture, urban planning, land use planning and zoning, and ocean/coastal management. These involve decisions that must be made by policy makers and require the use of geospatial data and geographical analysis. And along these lines, geodesign [Esri 2012b] will continue to make an impact in the sustainability world, leveraging geographic information and scientific modeling so that future designs for urban areas, watersheds, protected areas and the like more closely follow natural systems and result in less harmful impacts. How should geospatial scientists communicate this to policy makers? Given the challenges that our planet faces, I hope the geospatial community will also ponder and discuss whether communicating with policy makers is now an ethical issue, and whether science communication should be made a formal part of geospatial curricula and professional GIS certification.
Barbara J. Ryan Secretariat Director Group on Earth Observations (GEO)
The economic value of EO data is in its utility
Models, analysis tools, information products and services all add value to EO data, and it is usually these value-added products and services that environmental managers, public policy officials and ultimately ministers recognise and need
Whether it is remotely sensed, in-situ, ocean-based or surface-based, earth observation (EO) data is essential for making informed public policy decisions in many areas of societal benefit, such as climate variability and change, energy management, agriculture, biodiversity, human health and epidemiology, weather forecasting and water management. Data, in and of itself, is of little value unless it is used. Models, analysis tools, information products and services all add value to EO data, and it is usually these value-added products and services that environmental managers, public policy officials and ultimately ministers recognise and need. While many existing EO systems in the world were primarily designed for a single purpose, it is both beneficial and cost-effective if these systems can be multi-purposed. A public infrastructure like the Global Earth Observation System of Systems (GEOSS) is helping connect a diverse and growing array of earth observation and information systems for monitoring and forecasting changes in the global environment, particularly in the nine Societal Benefit Areas (SBAs) shown in Figure 1.
Today, under the GEO framework, 89 governments and 67 participating organisations are coming together to share data across these nine SBAs, to advocate universal access to the International Charter for Space and Major Disasters, and to further advance broad open data policies for all publicly funded EO data. There are, however, some significant challenges and gaps that need to be addressed. These include, but are not limited to: lack of access to EO data, particularly in the developing world; technical infrastructure shortcomings; gaps in selected spatial datasets; uncertainty over continuity of observations; and inadequate data integration and interoperability.

Open data sharing policies
With a backdrop of increasing budget constraints and the increased frequency, intensity and impact of natural disasters, it is time for
governments to adopt broad, open data sharing policies. Let me give an example of one benefit that open data sharing can bring. In 2007, the U.S. policy for Landsat data changed. Rather than charging users for this data, the US government decided to provide it free over the Web. Before the change occurred, there was considerable internal resistance (it eliminated a funding source) and external resistance (perceived competition from the government with the private sector). Ironically, however, a few years later, a representative of one of the private companies that had initially resisted the data policy change informed me that his company has since added 125 jobs to his unit alone, just to process Landsat data that is now freely available on the Web. In my view, this is a substantial improvement upon past practices. Jobs have been created in the private sector, which helps grow the economy, and
The private sector, which is a big consumer of EO data, can and must play a key role in providing commercial services as well as institutional services, under contract from governmental entities
the usage of a government asset has grown by several orders of magnitude (Figure 2).

Private participation
While the burden of investing in EO infrastructure and data is generally borne by governments, there is an increasing understanding that not only the public sector but also the private sector can benefit from increased data sharing and from the exploitation of integrated earth observations for the benefit of society. In a world where budgets, both public and private, are under increasing pressure, it is important to define a suitable framework that both permits and encourages public-private partnerships. In this regard, private sector engagement in GEO would bring additional expertise, knowledge and resources to all nine SBAs, and with the creation of value-added products and services, new marketplaces would emerge. The private sector also represents a big ‘consumer’ of earth observation data and information. It therefore can and must play a key role in providing commercial services as well as institutional services, under contract from governmental entities.

Figure 1: Societal benefit areas of EO data
Figure 2: Landsat internet data distribution
Data is a public good
Today, there are several restrictions on the free trade of satellite imagery. While national interest and national security issues are certainly understandable, as described above, much of the earth observation data is based on government investment, paid for by all taxpayers, and should therefore be made available to all citizens as a public good without restriction. Governments need to move in a direction where data is democratised and accessible from the Web for free, with minimal time delay, all over the world. Landsat data is now being accessed from more than 186 countries around the world — in other words, about 95 per cent of the countries in the world have access to this data, and that’s only one satellite. Imagine the usage of EO data if there were similar data policies for all government satellites. So, in summary, the economic value is not in EO data itself, but in what one does with the data downstream. And although it will take quite some time to be realised, these same principles could apply to data from private sector satellites. It is largely the value-added products, applications and services that people want (and will pay for), not the data itself.
[Infographic: dynamics, innovation and trends of the geospatial industry (policy advocacy, return on investment, opportunities, communities), with industry size projected at USD 100 billion. Source: Geospatial World]
David Schell Founder & Chairman Emeritus & Chief Strategist, Open Geospatial Consortium
A cultural development called geospatial industry
I find it interesting that people still speak of the collection of technologies and data stores relating to spatial-temporal measurement and analysis as an “industry”. Because I have a deep interest in both language and mathematics, I am perhaps too rigorous a critic of the way certain words are defined, especially those summoned up by commercial researchers to characterise market phenomena, words which are often misleading, serve as licence for predictive studies that impose incongruent expectations on both technology professionals and business planners, and confuse investors. I am hard pressed, for example, to know how to respond to questions regarding the future of the “geospatial industry”. The term “industry” doesn't seem to fit, and easy definitions of “geospatial industry” slip through my fingers like mercury. It used to be easy to talk about a measurable market for GIS software or spatial datasets and the limited number of companies and agencies that worked to create it. And it was easy to make presentations to that market about a future of interoperable software and data sharing — everyone who was considered visionary bought into the idea of larger markets and the development of future industry growth.
But now that interoperability is a reality and the information processing community in general is learning to assimilate spatial information and “spatial thinking”, things are becoming more complicated. Growth of markets for geospatial technology has not by any means maintained the coherence of original business plans. Moreover, it seems clear that GIS is no longer the focal concept for the wave of interdisciplinary research and commercial programmes that have spawned our massive collection of imaging and sensor datasets. And both research and application development involving geospatial information are to a great extent underwritten by major investment in infrastructure design, and driven by social policy. Indeed, a whole new set of actors has taken an interest in spatial information; not as participants in a traditionally defined geospatial market, but as institutions representing diverse industrial or societal domains that employ a variety of complex data and architectural models. I think geospatial technology has actually become a property of information processing itself, flowing around domain after domain like oil in a huge machine, looking to each to be integral with the science and information models that drive it. It is not a single industry but a wave of cultural development that facilitates all, as enabling as the invention of movable type or the telescope, and marking the beginning of an epoch of human discovery and rational thinking. In addressing such a phenomenon, we have something larger to define than an “industry”. As we think about transitioning to this new epoch, new questions emerge and we sense that a future is taking shape in which many traditional terms don’t make sense, where both institutional and individual motivation, in becoming more sophisticated, is also more mysterious.
In the beginning, geospatial technology was the domain of specialists, and the “buy-in” required considerable investment of both intellectual and financial capital. Now geospatial information is our franchise, our right, and we know what to expect of it and how to use it. More importantly, we feel that we have a natural right to benefit from it — to have fun with it, to work with it, to improve our lives by using the powerful appliances and policies it has produced. We need to ask, then, what sort of future we are addressing, what we should expect to be the next evolution of markets and industry domains, and how in fact we now define progress. Manifestly, the greatest change that has taken place is that people in both public and private sectors, in all technology and industry domains, are beginning to understand the importance of “context” in space and time, and the commonality of space and time across traditional application markets. The need for interoperability and data sharing is now becoming subsumed in complex processing models that link diverse research disciplines, commercial applications and operational strategies to address the requirements of broad social policy or large-scale consumer requirements. Industry lines are blurred not only for geospatial issues, but for all traditional markets. As demographics evolve in the modern world, so do technology markets merge, transform and redefine the “industry” terrain in which they operate. This is the terrain that I think of when I am asked to characterise the “geospatial industry” of the future: a terrain defined by our joint tenancy of the world’s “space” and journey through time, and the unity of the human enterprise.
Geospatial technology has become a property of information processing itself, flowing around domain after domain like oil in a huge machine, looking to each to be integral with the science and information models that drive it. It is not a single industry but a wave of cultural development that facilitates all
Prof. Josef Strobl Chair, Department of Geoinformatics University of Salzburg, Austria
Geospatial education: Trends and directions
Competences should expect a longer half-life period than any specific software environments and lead towards attractive career paths
Brainware, as the intended outcome of learning, is increasingly being accepted as the key success factor behind geospatial approaches in business, society and government, and for managing our resources and environments. While technologies are maturing and settling into stable cycles of innovation – even when moving onto the cloud – the ‘people factor’ and capacity building contribute the critical elements, the indispensable ‘human factor’ behind our emerging spatial data infrastructures and the information and knowledge derived from them. The broad field of geospatial education has recently been explored from numerous angles: developing spatial literacy among the population at large, training specialists in geospatial software development, preparing technicians to complete typical GIS tasks, or educating analysts capable of creating contextual information from spatial data and, ultimately, spatial decision support. While spatial literacy and spatial thinking should be considered causes and responsibilities of general education, and common denominators across a multitude of disciplines, professionally oriented qualifications typically face conflicting demands and alternative career pathways. Should academic programmes attempt to simultaneously satisfy requirements for competences in software development, systems architectures, data acquisition, sensor integration, spatial analysis, cartographic visualisation and many more?

No one-size-fits-all
It is unlikely that such a wide scope of competences will fit into any one curriculum, and it most certainly will not appeal to one set of student personalities and mindsets. The Swiss army knife approach to educational outcomes typically leads to compromises that do not really satisfy anyone. This is not to argue for a silo approach to education, where occupational classifications are the basis for separate educational tracks and rigid professional certification. On the contrary, flexibility in educational frameworks will allow for dynamic matching with evolving professional requirements. We can currently observe the emergence and ongoing adjustment of diverse geospatial academic environments: orientation towards technology and development, geospatial enabling in application domains
including engineering, a focus on conceptual foundations, analysis and decision support, and more. These approaches will necessarily balance with the job market and thus self-regulate. Rather than aiming for THE geospatial curriculum — this should rather be replaced by defined competence descriptors, perhaps from a universal ‘body of knowledge’ — the common denominator across a variety of professional orientations has to be established: advanced spatial literacy and competences. A proposal for a lucky seven could include:
• Spatial referencing — establishing location and place
• Orientation — connecting (my) place with space, for context
• Representations of spatial features and phenomena
• Dimensions and scale — moving between individual and aggregate
• Spatial relations — answering the ‘why’
• Geospatial communication — designing interaction
• Spatial thinking and reasoning — supporting decisions
With these skills and competences firmly established as a common ground, professionals with diverse specialisations are able to cooperate in geospatial teams and organisations. From these (or similar) basic ‘core competences’, we can then legitimately develop a range of educational objectives and outcomes.

Image courtesy: David L. Ryan
Analysing spatial information and preparing for decision processes is an attractive set of competences valid across domains and wide open for lifetime career development
Developing competences
Let me just outline the philosophy behind the academic programmes I have had the opportunity to be involved with over many years. Firstly, I prefer to build geoinformatics competences with a methodology focus at the graduate level. Students have completed undergraduate education in a spatial discipline and recognise the need for geospatial methods and solutions. This avoids the phenomenon of a solution in desperate search of a suitable problem. Secondly, competences should have a longer half-life than any specific software environment, and lead towards attractive career paths. Analysing spatial information and preparing for decision processes is an attractive set of competences, valid across domains and wide open for lifetime career development. Of course, the development of methodology competences in spatial analysis, geovisualisation, distributed environments and
sensor strategies will always be facilitated by practical exposure and significant skill levels with tools from geospatial technologies such as sensors and software, including development environments. Key learning objectives and outcomes, though, clearly focus on the extraction of information and contextualised knowledge from geospatial data, on communicating among empowered stakeholders, and on supporting decisions in government and business. This is just one of a variety of educational pathways. Of course, there is evident and strong demand for other orientations like systems architecture, software development or interaction design. We currently observe the emergence of a diverse family of geospatial occupations, sharing the common genome of spatial literacy and spatial thinking. Maybe it is not a clearly defined family with all the well-known designated roles and family relations, but rather a flexible patchwork family of professional responsibilities. Referring to the title above: this might be a trend in society, it certainly is a challenge, but it is also a realistic prospect for the direction in which we are moving with geospatial education.
Kevin Pomfret Executive Director Centre for Spatial Law and Policy
Regulations to support a ‘location-enabled’ society
In a highly competitive global marketplace, nations that begin the process of developing a legal and policy framework that takes advantage of the power of geospatial data will realise both economic and societal benefits
Large corporations, entrepreneurs, government officials, non-governmental organisations (NGOs) and the general public are now seeing the value of location and other types of geospatial information. However, we are just beginning to see the potential benefits of location. Businesses that collect, use and distribute geospatial information will generate economic growth through job creation and revenue enhancement. Government agencies will make their spatially-enabled information broadly available so as to spur innovation, create jobs and allow themselves to deliver better services with greater transparency and at a reduced cost. Moreover, a location-enabled society will generate critical information that can be shared with other nations to address important transnational issues.

Legal threats
Unfortunately, a number of policy and legal developments are beginning to threaten the broader adoption of geospatial technologies and, consequently, the creation of location-enabled societies. These developments include concerns over privacy, increased government regulation, uncertainty over ownership rights in location information, issues involving national and homeland security, as well as government funding challenges. Moreover, since these technologies and applications are interdependent and geospatial information can be used for a variety of purposes, the impact of policies and laws in one segment of this ecosystem inevitably affects other segments. In addition, products and services that require combining spatially-enabled information from governments, commercial enterprises (and increasingly, individuals) from around the world will be subject to differing and uncertain intellectual property regimes.

Maximising the power of geo-info
While the policy and regulatory framework necessary to support a location-enabled society will differ somewhat between nations, it is increasingly clear that there are certain fundamental principles that will maximise the power of geospatial information while protecting against potential misuse. Let us discuss some of these principles.

Individual location privacy must be properly protected: Protecting individual location privacy is critical in a location-enabled society. Otherwise, individuals will be
reluctant to adopt the necessary technology. However, such protections must take into account that a person’s location information is different from other types of personal information.

Governments must continue to play an active role in the collection and distribution of data: Adequate government funding (either direct or indirect) for certain types of geospatial information (such as weather) is critical. Governments need to make funding the collection of geospatial information a priority, even in tight budget times.

The availability and flow of geospatial information over the internet must be protected: In a global economy, a location-enabled society will require the free flow of geospatial information over the Internet. Censorship or restrictions that impact any step of the process will make it difficult to use geospatial information for domestic purposes or to address critical transnational issues.

Location information must be interoperable: A ‘location-enabled’ society will require combining geospatial data from a variety of sources. This means that the various data types collected from numerous sensors around the world, and processed, analysed and visualised on different systems, must be interoperable.

Regulations should be balanced, narrowly tailored and technology neutral: Public policy is inherently a trade-off between benefits and risks. Therefore, such policies should be made only after taking into account the full benefit of a location-enabled society.

Laws, regulations and policies with respect to the collection, use and transfer of location information must be clear, transparent and consistently applied: Increasingly, efforts to collect or share information are being thwarted by laws, regulations or policies that are not publicly disclosed or universally applied. Such inconsistencies will make it difficult, if not impossible, to create the necessary environment for a “location-enabled” society.

In this highly competitive global marketplace, nations that begin the process of developing a legal and policy framework that takes advantage of the power of geospatial data will realise both economic and societal benefits. Nations that do not develop such frameworks run the risk of being on the wrong side of the “geo-divide.” More ominously, the lack of such a framework could result in the creation of dystopian societies in which these same technologies are used by closed-minded or repressive regimes to hinder economic growth and even endanger personal freedom.
In a global economy, a location-enabled society will require the free flow of geospatial information over the Internet. Censorship or restrictions that impact any step of the process will make it difficult to use geospatial information for domestic purposes or to address critical transnational issues
Siebe Riedstra Secretary-General Ministry of Infrastructure and the Environment, The Netherlands
Data sharing, standards and harmony: catalysts for the geospatial revolution
Open data feeds innovation by opening up information to research institutes and lowering the threshold for investment in innovative products and services. It contributes to the concept of ‘monetising geospatial value’ in an important way
The Netherlands has a long tradition of collecting and organising geo-information. The Dutch Land Registry and Mapping Agency will be 180 years old this year. And there are things yet to come. At the Geospatial World Forum 2011, organised in India, Kapil Sibal, the Indian minister responsible for geo-information, said: “GIS is the power of today and of the future. There is a need to take active steps to advance geospatial technologies in a bigger way. The opportunities are tremendous and it’s time for the geospatial revolution.” To reach the goals Sibal put forward, we will have to overcome a number of serious challenges — challenges that we must all face together in the arena of standardisation, harmonisation and data sharing. Only then can we make the geospatial revolution happen.

Open data as a key
One of the keys is the use of open data. The Dutch government aims for free accessibility of all public information. The Ministry of Infrastructure and the Environment, which coordinates geo-information in the Netherlands, aims to make the data belonging to it freely accessible by 2015. The exception, of course, will be information that cannot be disclosed on the grounds of national security or privacy considerations. The idea is to remove the need for costly data transactions, thereby reducing the administrative burden on both the public sector and government. Moreover, open data feeds innovation by opening up information to research institutes and lowering the threshold for investment in innovative products and services. The Dutch National Topography Database is a prime example of what data sharing can achieve. When the National Topography Database was released free of charge, the number of new applications rose overnight from two or three a day to 40 a day. By now its use has tripled, as casual users and business professionals alike frequent the database. Another good example of sharing information is the Atlas of the Physical Environment. Public bodies possess a wealth of geo-related information, often derived from local research and environmental impact assessments — information that can be used again to support new technical or spatial developments. People can now utilise this data for new services, such as apps like Weather and Car Spotter. The city of Amsterdam even has a motto: ‘We provide the data, you make the apps’. Open data feeds innovation and lowers the threshold for investment in innovative products and services. Open data contributes to the concept of ‘monetising geospatial value’ in an important way.

The Environmental Planning Act
The opening up of geo-information has boosted information use and reuse. The availability of spatial policy, such as spatial plans and permits, in the digital space also lends transparency to government policies. Both developments are to be boosted further by the new Environmental Planning Act, which is being prepared by the Ministry of Infrastructure and the Environment. The purpose of this ambitious new law is to combine, unify and modernise many conflicting regulations concerning our physical environment. The exercise, a massive operation that will take several years to complete, aims to combine such areas as environment, water, traffic, building, nature and monuments into a unified whole. Part of creating the Environmental Planning Act is building the digital environment needed for such a broad-reaching law. This provides new impetus for opening up and harmonising different kinds of available geo-information. Open data and the INSPIRE initiative (the EU initiative to establish an infrastructure for spatial information in Europe to help make geographical information more accessible and interoperable for sustainable development) are huge steps towards this goal. The Netherlands has plans for a “data roundabout” with the information needed for the processes of the Environmental Planning Act. The main goal of this will be to boost information reuse, thereby lowering future assessment costs for new spatial developments. If data is easily available, it can be used to support the processes of the Environmental Planning Act in a reliable and accepted manner. This will reduce the number and length of objection procedures. That, again, would be a great example of monetising geospatial value. One of the interesting questions to resolve here will be finding a way to guarantee sufficient data reliability for reuse in the formal planning process. We are working in conjunction with data providers on this topic.
law & policy
Mark Reichardt, President & CEO, Open Geospatial Consortium
Standards: A catalyst for innovation
In all of human history, no major technological transformation has exploded into the world as quickly as the Internet and the Web. Stable, open, free-for-anyone-to-use technology standards, beginning with TCP/IP and HTTP, are the very definition of the Internet and the Web. How much investment went into creating those standards? Very little. What has been the economic return on that investment? Trillions of dollars, pesos, francs, rupees... Looking around the world, it is hard to calculate the monetary value of something that has triggered such extraordinary societal progress.

TCP/IP, HTTP and the other core standards underpin the Internet, but by themselves they do not provide standard ways to encode and communicate information about land parcels, weather phenomena, cell phone locations, geolocated sensors and actuators, earth observation images and so on. That is the role of geospatial standards. They are developed by diverse communities in a process managed by the not-for-profit OGC, which has produced a number of open, free-for-anyone-to-use technology standards addressing a range of geospatial, sensor, Web processing and mobile needs.
Role of standards

Standards as a catalyst for innovation: Contrary to the thinking of some, standards facilitate innovation. I see this quite often in the research community, where OGC standards are used as a mechanism to accelerate the development and delivery of important research to broad community application.

Standards save time, reduce costs and save lives: A recent German DIN standards study identified an estimated USD 17 billion in economic benefit from standards in Germany alone. Feng Chia University in Chinese Taipei has advanced an impressive emergency event forecasting, warning and response network now in use across Chinese Taipei, where the combination of earthquakes and typhoons poses a great risk of debris flows that can reduce population centres to rubble in minutes.

Standards allow us to adapt more quickly to changing information and technology needs: Almost all providers of geographic information systems, earth imaging systems and spatial database systems implement OGC standards that enable their users to query each other's systems for data and services. For example, the Global Earth
Observing System of Systems (GEOSS) architecture leverages OGC and complementary ISO standards to facilitate international sharing of earth observations and assets.

Standards allow us to achieve interoperability for sharing, for collaboration and for enhanced problem solving in a very complex world: Consider the case of water resource monitoring and forecasting. Our ability to monitor and predict significant flooding and drought events depends on our ability to bring together and process a staggering array of data and observations from many interdisciplinary sources. Through the OGC process, experts from around the world have united to advance the WaterML 2.0 encoding standard and a supporting framework of OGC geospatial and sensor standards to harness the myriad disparate hydrologic observations maintained by different organisations worldwide.

Standards promote a competitive marketplace: Vendors benefit from standards because standards boost the value that customers derive from their products, but they also benefit in other ways. Open standards create larger markets and new market opportunities. Also, by sharing the development costs of interfaces and encodings with other vendors, vendors can invest more in developing applications that meet the specific requirements of their customers. Vendors also realise that they can easily integrate their offerings into user domains, enterprises and systems that employ standards. The same is true for geospatial data providers across government and industry. DigitalGlobe and GeoEye, for instance, both implement OGC standards to ensure that their data offerings can plug in to existing spatial data infrastructures implemented by governments worldwide.

Standards help advance and deliver research results: Andrew Terhorst of the Commonwealth Scientific and Industrial Research Organisation of Australia is advancing a hydrological sensor Web in Tasmania. He notes: "By using open standards we are able to avoid reinventing the wheel by using existing reference implementations, collaborate with a growing body of like-minded researchers across the world, maximise opportunities to achieve research impact, ensure we conform with current best practices, learn from other people's experiences with implementation, maximise opportunities for re-use/repurposing of infrastructure and software, build highly scalable solutions, and make a difference where it counts…"

Through its committees, working groups, and rapid prototyping testbed and pilot initiatives, the OGC is increasingly focused on helping communities of practice benefit from the application of existing OGC standards and from complementary standards from other organisations. For example:

• The US Federal Aviation Administration (FAA) and EUROCONTROL have been leveraging the OGC Interoperability Program's
annual OGC Web Services (OWS) testbed activities to advance interoperability solutions for their aviation modernisation programmes. These organisations have engaged a wide variety of private sector participants to develop and demonstrate the successful use of OGC standards and best practices for rapid, flexible, on-demand access to critically important aeronautical and weather information, on the ground and on the flight deck.

• The OGC Hydrology Domain Working Group developed the OGC WaterML 2.0 Encoding Standard, which is being tested in real-world applications in the Climatology-Hydrology Information Sharing Pilot.

• Emergency and disaster management domain discussions have led to candidate standards for geosynchronisation and bulk data handling that improve the ability of users in the field to consume, use and update large volumes of spatial data on their mobile devices in intermittently connected communications environments.

The standards process itself, because it is a community process, generates value.

Challenges and future directions

Technological innovation is accelerating, spurred by the power of global communication and global ICT markets. Some of the emerging technology domains being addressed by the OGC include the semantic web, indoor location, augmented reality, horizontal imaging, the rapidly expanding Internet of Things and, of course, mobile apps that leverage geospatial technologies. In all of these domains, what matters is communication, which requires encoding standards, interface standards and expertise collected in best practices.
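The cross-vendor querying that OGC interface standards make possible rests on a shared request grammar: any compliant server understands the same key-value-pair (KVP) parameters defined in the specification. As a minimal sketch, the snippet below assembles a WFS 2.0 GetFeature request; the endpoint URL and feature type name are hypothetical placeholders, not a real service.

```python
from urllib.parse import urlencode

def wfs_get_feature_url(endpoint, type_name, count=10):
    """Build a WFS 2.0 GetFeature request using the OGC KVP encoding.

    The service/version/request/typeNames/count parameters come from
    the OGC Web Feature Service 2.0 standard; the endpoint and feature
    type passed in below are illustrative placeholders.
    """
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": type_name,
        "count": count,  # called "maxFeatures" in WFS 1.x
    }
    return endpoint + "?" + urlencode(params)

# Any WFS 2.0-compliant server, whichever vendor built it, interprets
# this same request structure -- that is the interoperability payoff.
url = wfs_get_feature_url("https://example.org/wfs", "topo:Roads", count=5)
print(url)
```

Because the request is just a standard URL, the same client code works unchanged against any conformant endpoint; only the address and feature type names differ.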