Impact Magazine Spring 2016


DRIVING IMPROVEMENT WITH OPERATIONAL RESEARCH AND DECISION ANALYTICS

SPRING 2016

MAKING BEST USE OF REDUCED RESOURCES AT LONDON FIRE BRIGADE


Analysts help to minimise the impact on time to reach fires, in the face of significant budget cuts

MANAGING HEATHROW'S CAPACITY

A simulation tool transforms how plans are developed and implemented at Europe's busiest airport

CHILD PROTECTION

Systems thinking contributes to the implemented recommendations of a Government report


THE JOURNALS OF THE OR SOCIETY
Linking research, practice, opinion and ideas

The journals of The OR Society span a variety of subject areas within operational research and the management sciences, providing an essential resource library for OR scholars and professionals. In addition to the Journal of the Operational Research Society (JORS) – the flagship publication of The OR Society and the world's longest-established OR journal – the portfolio includes leading research on the theory and practice of information systems, practical case studies of OR and managing knowledge, and journals taking a focused look at discrete-event simulation and applying a systems approach to health and healthcare delivery.

OR Society members receive complete online access to these publications. To learn more about member benefits please visit the membership section of The OR Society website: www.theorsociety.com

www.palgrave-journals.com/orms/


EDITORIAL

Welcome to this third issue of Impact. I hope you enjoy reading these stories of analytical work making an impact in varying organisations and governments. That such an impact is made is my prime consideration in choosing the applications to pursue for inclusion in Impact: has the work made a difference to the organisation concerned? Sometimes a quotation from a satisfied user is difficult to obtain: those involved have left the organisation, or rules prevent such a quotation being published. Nevertheless, I won't publish a story unless I am convinced of the veracity of what is being claimed.

My usual focus, therefore, is on the impact of the work on the organisation for which it was done. However, I have noticed the impact many of these stories have had on me, and, no doubt, many of you. This is true of the work for Heathrow described by Simon Martin, even though as a "northerner", or, rather, someone based in the north, I usually fly out of Manchester. Readers living in the London area will, I hope, not have experienced at first hand the effects of the analysis described by Andrew Cooper, to minimise the time to get to a call given the budget cuts that London Fire Brigade have experienced, but will be grateful for it.

The stories that resonate most with me are Eileen Munro and her colleagues' account of how systems thinking played a crucial part in her report to the Government concerning child protection, and Chris Holt's story of the role that Dstl analysts played concerning security operations for the Olympics, Paralympics and Commonwealth Games. In this latter case, I can also give my own personal endorsement: the security arrangements worked extremely well at all the Olympic and Commonwealth Games events I attended. At Old Trafford, as I entered the stadium to watch Great Britain play Senegal, I was frisked by an old friend, a former president of the Islamic Society at Lancaster University! I didn't realise at the time that this was an impact of O.R.

Graham Rand

The OR Society is the trading name of the Operational Research Society, which is a registered charity and a company limited by guarantee.

Seymour House, 12 Edward Street, Birmingham, B1 2RX, UK
Tel: +44 (0)121 233 9300, Fax: +44 (0)121 233 0321
Email: email@theorsociety.com

Editor: Graham Rand (Lancaster University), g.rand@lancaster.ac.uk
Secretary and General Manager: Gavin Blackett
President: Ruth Kaufman FORS, OBE (Independent Consultant)

Print ISSN: 2058-802X
Online ISSN: 2058-8038
Copyright © 2016 Operational Research Society Ltd
Published by Palgrave Macmillan
Printed by Latimer Trend
This issue is now available at: www.issuu.com/orsimpact

OPERATIONAL RESEARCH AND DECISION ANALYTICS

Operational Research (O.R.) is the discipline of applying appropriate analytical methods to help those who run organisations make better decisions. It's a 'real world' discipline with a focus on improving the complex systems and processes that underpin everyone's daily life - O.R. is an improvement science. For over 70 years, O.R. has focussed on supporting decision making in a wide range of organisations. It is a major contributor to the development of decision analytics, which has come to prominence because of the availability of big data. Work under the O.R. label continues, though some prefer names such as business analysis, decision analysis, analytics or management science. Whatever the name, O.R. analysts seek to work in partnership with managers and decision makers to achieve desirable outcomes that are informed and evidence-based.

As the world has become more complex, problems tougher to solve using gut-feel alone, and computers increasingly powerful, O.R. continues to develop new techniques to guide decision making. The methods used are typically quantitative, tempered with problem structuring methods to resolve problems that have multiple stakeholders and conflicting objectives. Impact aims to encourage further use of O.R. by demonstrating the value of these techniques in every kind of organisation – large and small, private and public, for-profit and not-for-profit. To find out more about how decision analytics could help your organisation make more informed decisions see www.scienceofbetter.co.uk. O.R. is the 'science of better'.


THE EVENT FOR THE UK’S ANALYTICS AND OPERATIONAL RESEARCH COMMUNITY

OR58: The OR Society Annual Conference 6 – 8 September 2016

University of Portsmouth

This year's OR Society conference is designed to support everyone – analytics professionals, academics and practitioners – in making an impact. It offers three days of:

• An eclectic mix of presentations from 32 streams
• Stimulating plenary and keynote speakers
• Academic-practitioner bazaars
• Speed networking
• One-to-one mentoring clinics
• The Big Debate – Is Analytics the future of OR?
• Workshops, exhibits, social events and more!

Look out for the #OR58 hashtag on social media.

From our last conference: “Thank you for the intellectually stimulating #OR56 conference!” @emelaktas “Great time at #OR56 this week. Reminded how smart and how nice OR people are.” @ProfBobOKeefe

Booking now open: £300 excl. VAT early member rate (£380 excl. VAT early non-member)

www.theorsociety.com/OR58


CONTENTS

7 STRATEGIC AIRPORT CAPACITY MANAGEMENT AT HEATHROW
Simon Martin tells how the development of a simulation tool has transformed the ability for Heathrow's operations to be analysed

13 SELLAFIELD: SAFETY THROUGH SIMULATION
Brian Clegg reports how simulation is involved in the removal of nuclear waste

24 SECURING INTERNATIONAL SPORTING EVENTS
Chris Holt gives an insight into analytical work Dstl carried out concerning security operations for the Olympics, Paralympics and Commonwealth Games

27 TACKLING THE TERRORIST: AN O.R. RESPONSE TO A DEADLY BIOHAZARD
Brian Clegg shows how the work of Margaret Brandeau has influenced US Government policy for dealing with potential anthrax attacks

31 RBS RISK ANALYTICS & MODELLING – SHAPING THE FUTURE WITH ANALYTICS
Rosemary Byde explains how her team's analytical skills target fraud, to minimise the bank's losses and protect their customers

35 OPEN ALL HOURS
Martin Slaughter shows how Hartley McMaster Ltd. helped Vodafone determine what their stores' opening hours should be

38 MAXIMISING PERFORMANCE WHILE REDUCING RESOURCES AT LONDON FIRE BRIGADE
Andrew Cooper outlines how ORH has helped the London Fire Brigade cope with significant financial cuts with minimal reduction in service levels

42 THE CHILD PROTECTION JIGSAW
David Lane, Eileen Munro and Elke Husemann report how systems thinking played a crucial role in a Government report on child protection

4 Seen elsewhere
Analytics making an impact

18 Making an impact: too important to ignore
Mike Pidd reflects on O.R. and Government

20 Universities making an impact
Brief reports of three postgraduate student projects

47 Graph Colouring: an ancient problem with modern applications
Rhyd Lewis demonstrates that there is more to colouring graphs than you think

51 Guesstimate That!
Geoff Royston explains why the proverbial back-of-the-envelope may just be what is needed


SEEN ELSEWHERE

BARRIERS TO ANALYTICS ADOPTION


Amongst the findings from a survey of 297 UK SMEs undertaken by Source for Consulting and Advanced Business Solutions are a number of key factors which may create barriers to implementing an analytics solution. The top five listed were:

1. No clear owner or champion for analytics
2. Inability to step back from day-to-day operations
3. Lack of awareness about what is possible
4. Insufficient in-house skills to implement and manage analytics
5. Concerns about the IT spend required to support the approach

Simon Fowler, Managing Director, Advanced Business Solutions (Commercial Division), says, "Analytics technology has made significant strides in recent times and can help businesses to determine what is going to happen in the future and make more profitable decisions in the present.

"Business leaders who fail to embrace analytics technology are likely to be left behind by their competitors. For the best results, the software should be implemented throughout an organisation instead of being isolated to a particular department, in order to make a highly visible and meaningful impact.

"To overcome resistance it's also vital to articulate what benefits the technology will provide and how these will be measured. If senior executives are more engaged and clearly understand the value of analytics, they are more likely to commit to investing."


The report can be accessed at http://www.advancedcomputersoftware.com/abs/news/analytics-technologymassively-under-used-in-supply-chainoperations-says-advanced.php

FÜR ELISE AND OTHER BAGATELLES

Apparently, the problem of finding an optimal fingering, which indicates the finger that should play each note in a piece, is a combinatorial optimization problem. Who would have known it!? In "A variable neighborhood search algorithm to generate piano fingerings for polyphonic sheet music" published in International Transactions in Operational Research (DOI: 10.1111/itor.12211) Matteo Balliauw, Dorien Herremans, Daniel Palhazi Cuervo and Kenneth Sörensen develop a variable neighbourhood search algorithm to generate piano fingerings for complex polyphonic music. They take into account the biomechanical properties of the pianist's hand in order to generate a fingering that is user-specific and as easy to play as possible. The results of computational experiments show that the algorithm generates good fingerings that are very similar to those published in sheet music books.
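For readers curious what a variable neighbourhood search looks like in outline, here is a minimal, generic sketch in Python. The solution encoding, cost function and 'shaking' moves are placeholders to be supplied by the modeller; none of this reproduces the authors' fingering model or its biomechanical cost terms.

```python
import random

def local_search(solution, cost, tries=50):
    """Simple first-improvement local search using small random swaps."""
    current = list(solution)
    current_cost = cost(current)
    for _ in range(tries):
        i, j = random.sample(range(len(current)), 2)
        neighbour = list(current)
        neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
        if cost(neighbour) < current_cost:
            current, current_cost = neighbour, cost(neighbour)
    return current

def vns(initial, cost, neighbourhoods, max_rounds=200):
    """Generic variable neighbourhood search skeleton.

    initial        -- a starting solution, e.g. a candidate fingering encoded
                      as a list (the encoding here is a placeholder)
    cost           -- function scoring a solution (lower is better)
    neighbourhoods -- list of 'shaking' functions of increasing disruption;
                      each takes a solution and returns a random neighbour
    """
    best = list(initial)
    best_cost = cost(best)
    for _ in range(max_rounds):
        k = 0
        while k < len(neighbourhoods):
            candidate = neighbourhoods[k](best)        # shake in neighbourhood k
            candidate = local_search(candidate, cost)  # then improve locally
            if cost(candidate) < best_cost:
                best, best_cost = candidate, cost(candidate)
                k = 0    # success: restart from the least disruptive neighbourhood
            else:
                k += 1   # failure: try a more disruptive neighbourhood
    return best
```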

ANALYTICS: DON’T FORGET THE HUMAN ELEMENT

Chris Mazzei, global chief analytics officer (CAO) and global analytics Center of Excellence (COE) leader at Ernst & Young, believes that "all companies will need to have analytics as a core competency in order for business decisions to be informed by data. End users of the analytics will enhance their decision-making with the help of analytics. But this cannot happen without recognizing that the consumption of analytics is as important as the production". He comes to this conclusion in an article published in Analytics-Magazine.org, as a result of a report which can be accessed at http://www.forbes.com/forbesinsights/ey_data_analytics_2015/index.html. "Now is the time to ask if your investment in producing data-driven insights is delivering a competitive advantage. If not, ask yourself if your organization is effectively consuming analytics."

HOW MANY DENTISTS DOES SRI LANKA NEED?

Southampton researcher Sally Brailsford and her PhD student Dileep De Silva from Sri Lanka’s Ministry of Health have developed an analytical model to inform government planning for provision of state-funded dental care and the future university intake of dental students. The model represents supply and demand for dental-care services. The supply-side component uses system dynamics to represent the career progression of dentists from recruitment and training at the University Dental School, looking at different career paths through



to retirement. The demand-side component calculates a range of future demand scenarios for dental care, based on different assumptions about Sri Lanka’s potential future economic development. The results from the model underpinned a government decision in 2012 to limit dental student intake, create 400 new posts in under-resourced rural areas and grant access to dental care to an additional 1.5 million people. (See Journal of the Operational Research Society (2015) 66, 1566–1577)
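The supply side of a model like this can be pictured as an ageing chain: students flow into training, graduates into practice, and practitioners out through attrition and retirement. The sketch below illustrates only that general structure; the intake, training length, attrition and career-length figures are invented placeholders, not the values used for Sri Lanka.

```python
def project_dentists(years=20, intake_per_year=100, training_years=5,
                     attrition_rate=0.02, career_years=35):
    """Minimal ageing-chain sketch of a dental workforce.

    Students sit in a training chain for training_years, then join the
    qualified workforce, which loses a small fraction each year to
    attrition and loses its longest-serving cohort to retirement.
    All parameter values are illustrative placeholders.
    """
    in_training = [0] * training_years   # cohort sizes by year of study
    workforce = [0.0] * career_years     # qualified dentists by years in service
    totals = []
    for _ in range(years):
        graduates = in_training.pop()            # final-year students qualify
        in_training.insert(0, intake_per_year)   # new student intake starts training
        workforce.pop()                          # longest-serving cohort retires
        workforce = [c * (1 - attrition_rate) for c in workforce]  # annual attrition
        workforce.insert(0, graduates)           # graduates join the workforce
        totals.append(sum(workforce))
    return totals

# Example: projected number of practising dentists in each of the next 20 years
print([round(t) for t in project_dentists()])
```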


SERVICES TO OPERATIONAL RESEARCH

New OR Society President Ruth Kaufman was awarded the OBE for services to O.R. in the New Year’s Honours list. When asked what ‘services to Operational Research’ meant in her case, Ruth responded “I don’t know exactly what they had in mind because, as the recipient, I don’t get to see the 10-page form that the nominator had to submit. But as head of an O.R. group in a government department, one-time chair of the Government O.R. Service, member of the OR Society Board for 7 years, and now OR Society President, I have persistently tried to do two things: to look internally, to improve the quality and effectiveness of O.R. practice and governance; and to look outwardly, to extend O.R. influence across professions or disciplines, across application areas, between O.R. commissioners and O.R. providers. I’ve been involved with setting up Pro Bono O.R., with building the ‘Making an Impact’ practitioner sessions at conferences, with raising the profile of O.R. in government, with representing O.R. at Executive Board level in a government department – every one of these has been a result of teamwork, but I’m very proud of my role within the team.”

RUTH KAUFMAN

PRESCRIPTIVE ANALYTICS TO TAKE CENTRE STAGE

In his blog (http://www.fico.com/en/blogs/author/scott-zoldi/) Scott Zoldi, chief analytics officer at FICO, predicts that prescriptive analytics will take centre stage as the ultimate destination on the analytics journey. "What do I mean by prescriptive analytics? Prescriptive analytics is a form of advanced analytics which examines data or content to answer the question "What should be done?" or "What can we do to make ____ happen?" and is characterized by techniques such as graph analysis, simulation, complex event processing, neural networks, recommendation engines, heuristic and machine learning." Zoldi makes the following five predictions about the role of prescriptive analytics and where he sees things going in 2016.

1. Streaming analytics will come of age in 2016. "It's no longer just about gathering and analyzing data, but also about acting on data as it happens. With hardware commoditized (or bypassed entirely in favour of the cloud) and open source software (e.g. Apache Ignite, Spark streaming, Storm) coming into its own, it is now economically feasible to squeeze even more value out of data in real time."

2. Predicting cyber crime is a reality. "Prescriptive analytics is emerging as "The Next Big Thing" in cyber security. Identifying anomalous behaviour and recognizing patterns as they are developing enable analytics to sound the alarm before attackers can harm the organization. Prescriptive analytics will become a must-have security technology."

3. Lifestyle analytics becomes part of daily life. "In 2016, the Internet of Things will go even more mainstream. From home appliances to cars to automated shopping, "lifestyle analytics" is poised for explosive growth. Groceries delivered without an order having to be placed. Doctors monitoring patients remotely 24/7/52. Biometric security. It's all coming together thanks to the cloud and all the devices and sensors that surround us."

4. The Big Data belly ache – rethinking what's really important. "It seems as though major data breaches are happening daily. From banks to retailers to government agencies, bad guys are accessing personal data at a staggering rate – millions of records at a time. In the upcoming year, businesses that have been collecting Big Data without putting thought into what they really want to collect – what is useful, what is superfluous, what is risky to store – will start to suffer from Big Data indigestion. Businesses need to put greater care into governance or face serious consequences due to the cost, liability, dangers and headaches involved in storing so much sensitive-yet-unnecessary data spread across the organization."

5. Beware of wolves in data scientist clothing. "A growing number of crowd-sourced and open-sourced algorithms available today have bugs in them or offer questionable value. Many companies are storing too much data and using iffy algorithms and hiring practitioners with limited expertise who may apply these resources without knowing where the deficiencies are. This will result in negative impacts to businesses as they rely on the results of the computation. We need to understand, test and harden our algorithms, and develop more consistent expertise in applying them. In 2016, I see the industry having to deal with challenges created by the flood of open source algorithms and a dearth of qualified analytic scientist practitioners."




THE CAP FITS

Michael Mortenson, a mature PhD student at Loughborough University, researching the development of analytics education in U.K. universities, can now put the letters CAP after his name. This was the result of the OR Society investigating the possibility of providing INFORMS' Certified Analytics Professional certification to Society members and the wider U.K. operational research community, and wishing for a "guinea pig" to have first-hand experience of the exam. He concludes that it is "an exam I would recommend, both to analytics/O.R. professionals seeking to "prove" their practical expertise, and to employers looking for recruits who can genuinely hit the ground running". His assessment is that "this certificate is definitely more at the O.R. end of the spectrum... However, for those from an O.R. background, this alone is unlikely to be enough. You will need a reasonable knowledge of a range of topics, including data warehouses, project management and machine learning."


MODELLING REGIME CHANGE


A paper with unusual subject matter, recently published in the Journal of the Operational Research Society (Volume 66, pp. 1939–1947), presents a model of regime change. Two researchers, Richard Syms and Laszlo Solymar, of Imperial College London's Department of Electrical and Electronic Engineering, develop, with restrictive assumptions, a simple differential model with few coefficients, concentrating on aspects of a struggle that might prevent or allow regime change: popular support, resources and weapons. This enables three scenarios to be investigated: stable points (representing an established, oppressive government), abrupt changes in stability (regime change), and limit cycles (a return to oppression). The model is not predator-prey type, but represents a competition between two parasites (the government and the rebels) to exploit a host (the population at large). The return to oppression occurs when successful rebels declare their true aim and assume the characteristics of the previous government (the most common outcome of regime change). The paper is available in Open Access at https://spiral.imperial.ac.uk:8443/bitstream/10044/1/28897/7/jors201528a.pdf
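The paper's equations are not reproduced in the article, but purely to illustrate what a "two parasites competing to exploit one host" system of differential equations can look like, here is a toy sketch with invented functional forms and coefficients. It is not the Syms and Solymar model; it simply shows the kind of three-variable dynamics being described.

```python
def simulate_struggle(years=50.0, dt=0.01):
    """Euler integration of an invented host/two-parasite system.

    p -- the population's resources (the 'host')
    g -- government strength (one 'parasite')
    r -- rebel strength (the other 'parasite')
    Every functional form and coefficient below is made up for illustration.
    """
    p, g, r = 1.0, 0.6, 0.05
    for _ in range(int(years / dt)):
        dp = 0.3 * p * (1.0 - p) - 0.2 * g * p - 0.2 * r * p  # host regrows; both sides extract from it
        dg = 0.25 * g * p - 0.10 * g - 0.15 * g * r           # government feeds on the host, loses to upkeep and conflict
        dr = 0.25 * r * p - 0.10 * r - 0.15 * g * r           # rebels likewise
        p, g, r = p + dp * dt, g + dg * dt, r + dr * dt
    return round(p, 3), round(g, 3), round(r, 3)

print(simulate_struggle())
```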

SOCCER STATS

On February 29th, Sean Ingle, sports columnist of The Guardian, quoted an article in the European Journal of Operational Research as a response to Arsène Wenger's estimate that Arsenal's Champions League chances were only 5%. This followed a home defeat against Barcelona. He referred to "a new piece of academic wizardry" by academics who "wanted to quantify the effect of the away goals rule – and to show how a first-leg result affected a team's chances of qualification". The article, by a multinational team from the UK, Spain and Italy, entitled "What is a good result in the first leg of a two-legged football match?", was published in December 2015 (volume 247, pp 641–647). Ingle argues that the results open up the debate about whether the away goals rule should be scrapped. In his article Ingle says "The academics … asked … how great is the damage when a club at home in the first leg concedes an away goal? … They found that if two teams of equal strength face each other and the first leg finishes 0-0, a home side has a 46.7% chance of progressing. A 1-0 victory lifts that to 65.3%. But losing 0-1 drops it like a stone, to only 12.5%. … The academics also found the most balanced result going into a second leg is 2-1 – a score that leaves the home side with a 55.3% chance of progressing. By contrast, a 1-0 win would give the home side a 63.5% chance of going through."
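The quoted percentages come from the paper's own model, which is not described here. To see how figures of this kind can be produced at all, the sketch below estimates progression chances by simulating second legs with independent Poisson-distributed goals and applying the away goals rule; the goal rate is an invented assumption and ties after away goals are split evenly, so its outputs will not match the published numbers.

```python
import math
import random

def poisson(lam, rng=random):
    """Draw a Poisson-distributed goal count (Knuth's method)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def progression_chance(first_leg_home, first_leg_away, goal_rate=1.35, trials=100_000):
    """Estimate the chance that the team at home in the FIRST leg goes through.

    Second-leg goals are drawn as independent Poisson variates with an
    invented mean of goal_rate per team; ties unresolved by away goals are
    split 50/50. None of these assumptions is taken from the paper.
    """
    rng = random.Random(2016)
    wins = 0.0
    for _ in range(trials):
        leg2_for = poisson(goal_rate, rng)      # goals the first-leg home team scores away
        leg2_against = poisson(goal_rate, rng)  # goals it concedes in the second leg
        agg_for = first_leg_home + leg2_for
        agg_against = first_leg_away + leg2_against
        if agg_for != agg_against:
            wins += agg_for > agg_against
        elif leg2_for != first_leg_away:        # away-goals rule: our away goals vs theirs
            wins += leg2_for > first_leg_away
        else:
            wins += 0.5                         # extra time/penalties treated as a coin flip
    return wins / trials

print(round(progression_chance(0, 0), 3))  # after a 0-0 first leg
print(round(progression_chance(1, 0), 3))  # after a 1-0 first-leg win
```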



STRATEGIC AIRPORT CAPACITY MANAGEMENT AT HEATHROW
SIMON MARTIN

MOST PEOPLE IN BRITAIN are aware of the debate about airport runway capacity that has been taking place over the last couple of years. The Airports Commission, led by Sir Howard Davies, which was established to examine the requirement for additional capacity in the UK, delivered its report last year and recommended the construction of a third runway at London Heathrow Airport. The

Government has the final say on this and has not yet made its decision. Heathrow operates at 98% of its available capacity for aircraft movements, handling over 470,000 flights and 75 million passengers in 2015. It is the busiest airport in Europe. London’s other airports, including Gatwick, Stansted, Luton and London City, are also getting increasingly busy.




Airports are complex systems that can be constrained by a number of elements, including terminal capacity, stand capacity, taxiways, runway capacity or airspace. Being able to proactively identify future problems or constraints, to put solutions in place ahead of time, is extremely valuable. The primary constraint at Heathrow is runway capacity. In order to manage this scarce resource, airlines are allocated runway slots, which give them the right to operate flights at agreed times. The number of slots available is finite, which creates a high demand for each one.



In January 2015, Heathrow Airport announced that an early morning runway slot had been allocated to Vietnam Airlines, which was previously operating at London Gatwick airport. This was a significant announcement that generated a great deal of media interest as it was the first new early morning arrival runway slot made available at Heathrow since 1996. At the time of the announcement Lord Puttnam, UK Trade Envoy to Vietnam, Cambodia and Laos, explained the value of the direct Vietnam route: “As an island trading nation, air links in to fast growing import markets like Vietnam are vital for the UK’s economic growth prospects. That’s why I am delighted to welcome the announcement that Vietnam Airlines will now have a daily, direct route to Britain’s hub airport, helping British business to compete in the global race.” The introduction of a new slot allows services to new destinations,

expanding the connectivity of the UK, bringing benefits to the national economy as a whole.


MAXIMISING THE USE OF EXISTING ASSETS

New slots at Heathrow are rare, but this one was made possible by the use of Strategic Airport Capacity Management (Strategic ACM), a ground-breaking new capability that was recently developed and deployed. Strategic ACM was developed by NATS, the UK’s leading provider of air traffic services, as a direct result of airport and airline customer need, with NATS’ Analytics department bringing their considerable skills and



experience in this field as part of this project. Heathrow is the first airport to be using the service and has achieved immediate benefits. Strategic ACM is designed to support decision-making at busy airports, using a combination of simulation and data analysis to allow airport operators to make the most effective decisions about runway capacity, scheduling and planned infrastructure changes. Given the costs of airport expansion and the capacity constraints at many airports around the world, making the best use of existing assets is vital. This is particularly important for the UK. The delivery of a new runway in the South East of England is likely to be many years away, even if a decision is made imminently. However, the implementation of Strategic ACM demonstrates the key role that the fields of Operational Research and Analytics play in maximising capacity, minimising delays and improving resilience to adverse weather or systems issues.

HISTORIC DATA ANALYSIS

Data analysis is crucial to understanding any complex real world system and an airport is no exception. NATS’ Analytics team has been delivering runway capacity studies and data analysis to airport operators in the UK and around the world for many years. Strategic ACM represents a major evolution of the capability, combining years of analytical experience with the latest techniques and web-based technologies. To understand how Strategic ACM has been so beneficial in increasing capacity, we must go through the various components of the system.

NATS has developed a data warehouse, which is updated daily and combines multiple data sources including: approach radar; ground radar; weather data; and other sources of operational Air Traffic Control (ATC) data.


Logic was developed to process the radar data in order to extract key metrics for each flight, such as runway entry and exit times, delays and taxi times. Event data for every flight was matched against other sources and stored in the data warehouse, which now contains millions of data items. The Strategic ACM system has access to the data warehouse, which allows for over four years of historic data (more than 2 million flights) at Heathrow to be analysed. Over a dozen key airfield performance metrics were selected as being the most valuable, including: arrival and departure runway separations; arrival and departure delay; runway occupancy times; runway throughput and demand; and terminal utilisation.

The Analytics team carried out an extensive data mining exercise to identify and segregate the operational conditions (such as wind and visibility) that led to distinct variations in airfield performance, measured in terms of the separation distances applied between pairs of aircraft to ensure safety. Pairs of aircraft flying one behind the other into Heathrow, recorded and stored in the data warehouse, were then statistically analysed to derive a decision structure that identified the factors that influence runway separations, splitting the data into groups, each with a distinct distribution of values. Following testing of the optimised groups, a set of distinct scenarios was agreed. Each scenario had a different set of arrival and departure separation distributions, which covered all arrival wake turbulence pairs and all departure route pairs. While these scenarios were specific to Heathrow, similar configuration exercises will be carried out for other airports that decide to use Strategic ACM to determine and define the operational scenarios specific to those airports. This can be done at a level of detail depending on the complexity of the airport's operation.
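In spirit, that data mining step amounts to bucketing recorded aircraft pairs by the conditions under which they flew and summarising the separation distances within each bucket. The fragment below is a simplified illustration of the idea only: the field names and threshold values are invented, and the real decision-structure analysis is considerably more sophisticated.

```python
from collections import defaultdict
from statistics import mean, quantiles

def scenario_key(record):
    """Bucket a recorded aircraft pair into an operational scenario.

    'record' is assumed to be a dict with wind, visibility and the
    leader/follower wake categories; names and cut-offs are illustrative.
    """
    wind = "strong_headwind" if record["headwind_kt"] > 20 else "light_wind"
    vis = "low_visibility" if record["visibility_m"] < 1500 else "good_visibility"
    return (wind, vis, record["leader_wake"], record["follower_wake"])

def separation_distributions(records):
    """Group observed separation distances by scenario and summarise each group."""
    groups = defaultdict(list)
    for rec in records:
        groups[scenario_key(rec)].append(rec["separation_nm"])
    summary = {}
    for key, seps in groups.items():
        summary[key] = {
            "n": len(seps),
            "mean_nm": round(mean(seps), 2),
            "quartiles_nm": [round(q, 2) for q in quantiles(seps, n=4)] if len(seps) > 1 else seps,
        }
    return summary
```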


SIMULATIONS

The Strategic ACM system is designed to carry out 'what if' simulations. Examples of the types of questions that Strategic ACM can be used to answer include:

• What is the current and ultimate capacity of my airport?
• What are the effects of changes in scheduled demand?
• What are the effects of taxiway closures?
• How much additional capacity and what level of airport efficiency would result if new taxiway, runway or terminal infrastructure is built?
• What are the effects of increasing numbers of very large aircraft (e.g. Airbus A380s)?
• What happens to runway capacity in different weather conditions?
• By how much will runway capacity increase if new runway exits are built?
• What are the effects of airspace changes, such as new departure routes?
• What would be the capacity enhancement benefit of new Air Traffic Management (ATM) systems?

Various simulation model types are available, each offering increasing levels of detail. The model types range from a stochastic simulator, focussing on runway capacity calculations, to event simulations, which add further elements of the airport system, such as taxiways, terminals and gates, and more detailed logic. The addition of each element provides new output metrics for analysis. Air traffic controllers order aircraft in a sequence to optimise runway capacity and all the ACM models replicate this process. The choice of model type depends on the nature of the question that the analyst wishes to answer. Strategic ACM offers the analyst the option to start with a high level runway analysis and then add levels of detail to understand the impact on the airport system as a whole.
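To make the idea of a stochastic runway model concrete, here is a deliberately crude single-runway arrival sketch: aircraft are taken in sequence, a separation time is drawn at random for each pair, and delay and throughput are accumulated. The demand rate and separation figures are invented placeholders, not Heathrow values, and no sequencing optimisation is attempted.

```python
import random

def simulate_arrivals(n_aircraft=200, mean_interarrival_s=85,
                      mean_separation_s=80, separation_sd_s=10, seed=None):
    """Single-runway arrival sketch with randomised separations.

    Aircraft become available at random times (demand) but may only land
    a randomly drawn separation after the previous arrival. Returns the
    average delay per aircraft and the landings achieved per hour.
    All parameter values are illustrative, not Heathrow data.
    """
    rng = random.Random(seed)
    scheduled = 0.0      # when the next aircraft wants to land
    runway_free = 0.0    # when the runway is next available
    total_delay = 0.0
    last_landing = 0.0
    for _ in range(n_aircraft):
        scheduled += rng.expovariate(1.0 / mean_interarrival_s)
        separation = max(30.0, rng.gauss(mean_separation_s, separation_sd_s))
        landing = max(scheduled, runway_free)     # wait for the runway if needed
        total_delay += landing - scheduled
        runway_free = landing + separation        # runway blocked until separation elapses
        last_landing = landing
    hours = last_landing / 3600.0
    return total_delay / n_aircraft, (n_aircraft / hours if hours else 0.0)

avg_delay, per_hour = simulate_arrivals(seed=1)
print(f"average delay {avg_delay:.0f}s, throughput {per_hour:.1f} arrivals/hour")
```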

Base models were created for each of the scenarios defined by the data mining exercise.


Each of the simulation models was validated by comparing throughput, taxi times and delays against actual data and expected values. Air traffic controllers at NATS reviewed and validated the models to confirm their realism. Outputs for each scenario were also compared between each model type


to ensure consistency. Once validated, simulation models are uploaded to the Strategic ACM platform and made available to the airport.

WEB INTERFACE

To access Strategic ACM, analysts log in to a web-based platform to perform data analysis, to set-up simulations and to view simulation results. The web platform was designed specifically for Strategic ACM and to make the process of creating, running and analysing simulations as straightforward as possible. The web interface links to the NATS data warehouse for Heathrow allowing

the airfield key performance metrics to be viewed to uncover valuable insights into how the airport performs under the different weather scenarios. The schedule is a key simulation input because it determines the level of demand placed on the runways and is therefore a key driver of throughput and delay levels. It contains details about each flight, including the call sign, aircraft type and origin or destination. Schedules are stored in a database and can be edited, downloaded, cloned or uploaded via the user interface. A library of all previous simulations is stored in a database. Each entry in the simulation library links to

the simulation results. Analysts can also submit requests online for new simulation scenarios, analytical support or for videos of simulation playback. Creating a new simulation is a simple process: the model type, infrastructure, weather scenario and operational environment are selected and then a traffic schedule is added. Simulations are queued and run in turn, which can take from a matter of seconds to several hours, depending on the level of detail in the model.


Each simulation run consists of multiple simulation iterations with randomised parameters, all managed by the Strategic ACM system. The use of randomisation increases the reliability of the model outputs. Simulation results are stored at the end of each iteration and are then aggregated by software to generate the simulation results. Many simulation outputs can be produced, including runway delay and throughput. Additional output metrics depend on the model type selected and the airport system elements included.
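That replication-and-aggregation step can be pictured as a simple wrapper around any single stochastic run: repeat with different random seeds, then summarise the outputs. The sketch below shows the general pattern with a stand-in model; the number of iterations and the confidence calculation are generic choices, not Strategic ACM's actual settings.

```python
import random
from statistics import mean, stdev

def replicate(run, iterations=30):
    """Run a stochastic model several times and aggregate its outputs.

    'run' is any function that takes a random seed and returns a
    (delay, throughput) pair, such as a single iteration of a runway model.
    """
    results = [run(seed) for seed in range(iterations)]
    delays = [d for d, _ in results]
    throughputs = [t for _, t in results]
    half_width = 1.96 * stdev(delays) / (len(delays) ** 0.5)  # rough 95% interval on mean delay
    return {
        "mean_delay_s": round(mean(delays), 1),
        "delay_95pc_interval": (round(mean(delays) - half_width, 1),
                                round(mean(delays) + half_width, 1)),
        "mean_throughput_per_hour": round(mean(throughputs), 1),
    }

# Stand-in model so the snippet runs on its own: random delay and throughput.
def dummy_run(seed):
    rng = random.Random(seed)
    return rng.gauss(120, 15), rng.gauss(38, 1.5)

print(replicate(dummy_run))
```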

BENEFITS OF STRATEGIC ACM

Utilising this tool can help busy airports facing demand and capacity challenges while trying to maintain a consistent and quality service. Strategic ACM has been extremely beneficial for Heathrow and subsequently its airline customers. It has enabled the seasonal Runway Scheduling Limits process that is used to define runway capacity to be re-designed. The speed of the simulation tool allows requested changes to the schedule to be assessed within seconds, rather than days and weeks, giving more scope to make potential improvements to next season's schedule in an efficient and timely way within the International Air Transport Association (IATA) scheduling timeline. Since Heathrow Airport went live with Strategic ACM, NATS' Analytics team has provided support and additional context, exploration and interpretation where required. NATS will provide comprehensive training to all users and future Strategic ACM customers, for them to gain maximum usage and help with detailed analysis.

Lucy Hodgson, ATM Project Manager at Heathrow Airport Limited, said: "The Strategic ACM tools have transformed our way of working in this area. The seasonal runway capacity declaration process is now much faster and more responsive to the needs of our customers than it was in the past and the new functionality allows us to test possible future ground operations and concepts more efficiently than ever before.

"The new simulation tools let the effects of various infrastructure changes be compared and the benefits quantified, allowing investment decisions to be made, and priorities agreed, consistently and with increased confidence in achieving maximum benefit for the airport and the customer. Strategic ACM will support the development and implementation of our master plan, assisting Heathrow Airport in giving passengers the best airport service in the world."


The design of the new toolset makes the capability both flexible and adaptable, and most importantly future proof. Airports are assessed to understand their individual operating challenges. As the challenges change over time, so can the analytical components, ensuring that the focus is always on the relevant questions being asked at the time. There is a substantial pipeline of future development activities to expand the capabilities of Strategic ACM, making it a very exciting prospect.

ABOUT NATS

NATS is a leading air navigation services specialist, handling 2.4 million flights in 2015, covering the UK and eastern North Atlantic. NATS provides air traffic control from centres at Swanwick, Hampshire and Prestwick, Ayrshire. NATS also provides air traffic control services at 13 UK airports including Heathrow, Stansted, Manchester, Edinburgh and Glasgow; at Gibraltar Airport and, in a joint venture with Ferrovial, at a number of airport towers in Spain. Building on its reputation for operational excellence and innovation, NATS also offers aerodrome, data, engineering and consultancy solutions to customers worldwide, including airports, air traffic service providers and Governments. There is more information on the NATS website at www.nats.aero

ABOUT NATS ANALYTICS

NATS Analytics provides innovative analytics and recommendations to make the best operational and business decisions. The department has won a number of awards, including the OR Society President's Medal in 2011 for the 3Di metric. Staff members have a range of numerate backgrounds, such as Operational Research, Statistics, Mathematics and Engineering and work across a number of areas including Safety, Forecasting, Business Modelling, Airport Capacity, Environment and Airspace Development. Visit www.nats.aero/careers


Simon Martin joined NATS as an OR graduate in 2000 after completing a BSc in Mathematics at the University of Nottingham and an MSc in OR at Lancaster University. Since then Simon has worked on dozens of airport capacity studies at major airports in the UK and around the world, using simulation and analysis to safely reduce delays, increase capacity and enable optimal infrastructure decisions to be made. Simon is a Fellow of the OR Society, a frequent traveller and an occasional private pilot.


SELLAFIELD: SAFETY THROUGH SIMULATION
BRIAN CLEGG

SELLAFIELD, on the Irish Sea coast by Seascale in Cumbria, has been a significant operational site since the 1940s. Originally a Royal Ordnance Factory, in 1947 it became the site for Windscale, the first British nuclear reactors to produce plutonium for nuclear weapons, and by the 1950s also included the Calder Hall nuclear power station, the world’s first production scale nuclear power plant. Sellafield is still a massive industrial complex. Operated now by Sellafield Limited on behalf of the public Nuclear Decommissioning Authority, it has become primarily a reprocessing site that has a significant heritage nuclear installation to manage, and it is here that the key Operational Research (O.R.) tool of simulation is being used to support safe and secure operations while saving time and money.

The O.R. group at Sellafield has a history going back to the British Nuclear Fuels Limited team in the 1970s, but in its current incarnation has been in existence since 1998, and is now home to around 26 staff. The group has a strongly proactive role, acting as Sellafield Ltd's O.R. 'capability group and intelligent customer'. This may sound like business-speak, but it is an industry-recognised term that places the group at a strategic point in the organisation, enabling it to be interactive and approach projects where it feels that it can provide value. Panos Frangos joined the group in 2001 after degrees in Mathematics, Statistics and O.R. from UMIST (University of Manchester Institute of Science and Technology), and an MSc in O.R. at Lancaster University's School of Management, where his



project involved producing an energy strategy for the UK. He says: ‘I have been working for Sellafield for 14 years now and I am enjoying it more than ever. The challenges in the industry, aligned with the fast changing IT, allow us as an O.R. department to push the boundaries.’ A good example of the way that the group’s expertise enables them to push those boundaries is in the major project of removing historic waste from the Sellafield Pile Fuel Cladding Silo (see Figure 1), a project that Panos has worked on for a number of years. The silo is a massive storage facility, almost 20 metres high, holding intermediate level waste – cladding from fuel

containers and other materials that have a higher level of radioactivity than everyday exposure items like work gloves, but that aren’t as radioactive as spent fuel and other high level waste. The facility was designed in the 1950s and gradually filled as waste was tipped into the six internal silo compartments, becoming completely full in 1964 and closed off in 1965. It contains enough waste to fill an Olympic size swimming pool twice. The silo is ageing and has been identified as one of the greatest hazards on the Sellafield site, making the removal of the contents to be stored in modern containers in a safer location a priority. The current aim is to start removing waste in 2023.

FIGURE 1 PICTURE OF PILE FUEL CLADDING SILO IN MID 60S


Planning the most effective way to perform an operation like this is bread and butter to the Sellafield O.R. group, which has modelling and simulation as its core tools. In a sense, simulation dates back as far as people have built physical models to test out a large scale project before undertaking it, or to try to understand better how something complex works. However, the real breakthrough in simulation came after the Second World War with the availability of digital computers, which rendered pencil and paper modelling redundant.

An example of a simulation model might be to examine the different queuing options for a supermarket. A simple model might consist of a customer generator, producing a string of numbers representing the arrival of customers at the queue and the size of their shopping load, plus a series of modules representing the checkouts. The virtual customers are allocated to checkouts by whatever queuing process is being tested and the system tots up the waiting times and other statistics. By running it many times, a good picture can be built up of the influence of different queue types on customer service without disrupting a store to try it out for real.

Originally such simulations were simply collections of numbers that went through various processes. Some were deterministic, where the outcome was a straightforward numerical process, like adding two values together to calculate the overall waiting time for a customer. Others were algorithmic or rule-based – so, for instance, when simulating a traditional supermarket queue, there might be a rule that a customer joins the shortest available queue. Finally, some were stochastic, where random numbers were used to simulate a variable real world situation. So, for instance, a random number generator might be used to decide how many items were in a customer's basket, based on the best data available for how this might vary.

Such number-based simulations can easily be run now in a spreadsheet, and that is often still how they are performed, but it was realised fairly early on that the clients who were being helped with simulations reacted to the information better if they could see what was being simulated – and it made it easier to spot errors when behaviour looked odd. Before long, simulations were being run in specialist applications that allowed for little blocks to move around on the screen, representing the entities taking part in the simulation. Now simulations are widespread, featuring in everything from weather forecasting, where immensely powerful computers are used to run models multiple times to produce a probabilistic forecast (50 per cent chance of rain, for example), to traffic flow simulators to help plan and manage road systems. High end simulation packages are able to produce complex 3D visual models, to give the best picture of exactly how the process being simulated plays out.
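A toy version of that supermarket example takes only a few lines. The sketch below mixes the three ingredients just described: stochastic arrivals and basket sizes, a rule for choosing a till, and a deterministic service-time calculation. All of the numbers are invented.

```python
import random

def simulate_checkouts(n_customers=500, n_tills=4, mean_arrival_gap=25.0,
                       seconds_per_item=4.0, max_items=40, seed=0):
    """Toy checkout simulation.

    Stochastic: arrival gaps and basket sizes are drawn at random.
    Rule-based: each customer joins the till expected to free up soonest.
    Deterministic: service time is simply items multiplied by a fixed rate.
    Returns the average wait in seconds; all figures are invented.
    """
    rng = random.Random(seed)
    till_free_at = [0.0] * n_tills   # when each till next becomes free
    clock = 0.0
    total_wait = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(1.0 / mean_arrival_gap)  # next customer arrives
        items = rng.randint(1, max_items)                 # basket size
        till = min(range(n_tills), key=lambda t: till_free_at[t])  # pick the soonest-free till
        start = max(clock, till_free_at[till])
        total_wait += start - clock
        till_free_at[till] = start + items * seconds_per_item
    return total_wait / n_customers

print(f"average wait: {simulate_checkouts():.0f} seconds")
```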


Decommissioning work, such as that on the Pile Fuel Cladding Silo (see Figure 2), has been underway at Sellafield since the 1950s, originally to expand the available space for new facilities and for refurbishment, and more recently as part of a long-term plan to reduce hazards on

the site to the minimum possible. Such decommissioning projects are complex and require multiple practical engineering problems to be resolved. After all, this is a significantly more complex process than traditional production engineering. The exact condition of the waste is unknown, and some items may need to be cut down to fit the new containers. And because of the radioactivity, all this needs to be managed remotely, adding to the complexity. Being able to simulate the process first in the case of the retrieval of the material from the silo was seen as a significant opportunity to overcome these obstacles. The Sellafield O.R. group originally had Excel and early simulation packages called SEE WHY and WITNESS in their simulation toolkit, but more recently the team has used Flexsim, which is a highly visual tool providing an impressive three dimensional rendering of the facility being modelled. Panos recalled: ‘When I joined the company in 2001, the group was using a simulation package called WITNESS. It had a 2-D capability, which was more advanced than the static spreadsheet modelling used before. It also allowed us to model and evaluate complex systems and plants to assess their performance and predict their throughput. By 2004, though, we were becoming more experienced and demanding, and we searched the market for alternative software. When we came across Flexsim, approximately 10 years ago, we were at first impressed by the visual 3-D functionality that it offered. During the trial and testing period we were convinced that it was as capable as WITNESS but it had the added benefit of the 3-D functionality.’ At the same time, the group has moved from developing individual ad-hoc models, which provided little

ability to audit the outcomes, to a standardised model-building process with built-in quality control. This sounds like something that would limit creativity, but the payoff is more robust results and the ability to handle much larger models, as would be required for the Pile Fuel Cladding Silo. Panos commented: ‘The group’s Modelling Framework provides a robust audit trail, a structured approach and defines all stages of our engagement in any project we undertake. It shows how the 3 stakeholders (the Customer, the Supply Chain and the Sellafield O.R. Group) interact with each other throughout the project, from the definition of the problem to the completion of the analysis.’


Not only would a sophisticated simulation provide the ability to run through the process and find potential weaknesses, it would give management an overview that would enable them to make decisions on the options available as quickly as possible, reducing both risks and cost. A big factor in this was the visual capability (see, for example, Figure 3). Panos: ‘Although the visual element was not the most important objective of the work we did, it definitely paid dividends once we started using it. The engagement with our customers was immediate as we were able to make them visualise their plants in our simulation models. Testing, verification and validation of the



FIGURE 2 CURRENT AERIAL VIEW OF THE PILE FUEL CLADDING SILO

models became significantly easier as the user has the spatial awareness provided. It also allows us to demonstrate and present our analysis to senior management in a more efficient way.' The initial phase was to put together relatively simple high-level models to examine the different engineering options for moving the waste, deciding the preferred option from these models. Panos commented: 'The engineering solutions included accessing the Silo from the bottom or the roof of each compartment, or the one that was selected, opening a hole in the side wall near the top of each compartment. A lot of factors contributed to the selection of this option, such as available technology, structural integrity of the building, condition of the waste, type of the waste and effectiveness of the retrieval operations.' This initial phase was followed by building a detailed model of the preferred option based on the best available data, which would


enable a whole range of 'what-if's to be run. These could cover, for instance, optimising the support and maintenance operations, minimising downtime and evaluating which operations could be completed in parallel. At the same time, the O.R. team was looking for parts of the process where there was a bottleneck that meant that a small improvement would make a significant difference to the project as a whole. Panos: 'The ability to identify and measure bottlenecks using the model lies in the accurate representation of the process in the model… What is important is to measure the bottlenecks, understand what is causing them but also understand how sensitive they are under different circumstances. This is where the "what if" analysis comes into play, along with the careful design of scenarios. The information collected by the model includes buffer statistics and residence times, statistics on all states of the modelled process and equipment, and so on.'

With repeated runs to pull together the forecasts made by the stochastic components, which represented the aspects that couldn't be definitively predicted, a detailed picture was drawn up. In total, over 1,000 scenarios were run, ensuring robust outputs. The group also made use of its close working relationship with the O.R. departments at Lancaster and Warwick Universities to get external review of their work to ensure it was delivering effectively. Perhaps the most impressive outcome of the multiple model runs was to identify key activities where a small change in the time taken to undertake an activity would result in a large overall time saving in the project as a whole. Panos notes: 'Focusing on the critical path of the process we identified the processes that contributed the most to the turnaround time of the filled boxes. It takes multiple grabs to fill a box, the number depending on the grab volume. For the baseline dataset it was evident that any deviation of this box filling cycle by one minute resulted in an extension or reduction to the retrievals programme by approximately one month.' Working


with process engineers and making use of the flexibility of the simulation, the final result was to save 15 minutes on this front-end action, meaning that the project could be achieved 15 months quicker than originally thought, producing a saving of around £20 million.


Another extremely valuable outcome was the way that the simulation could be used to produce a demand schedule for the containers in which the waste would be stored in the new facility. When the process starts in 2023, current estimates are that a total of 2,200 individual waste containers will be needed. As these cost around £40,000 each, to purchase them all up front would result in a major capital expenditure on equipment which would then sit idle for years. It would also present a problem to the manufacturers in coming up with so many containers simultaneously. However, the simulation makes it clear just how the flow of materials will become available over the duration of the project. This means that the storage container purchasing can be based on a well-supported schedule over a period of several years running from 2023 to 2033, making the whole process more robust and cost effective.
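The procurement-scheduling idea is straightforward to picture: take the simulated date at which each container is expected to be filled and bucket those dates by year. The sketch below does exactly that with an invented spread of completion years standing in for the model's output; only the 2,200 container total and the £40,000 unit cost come from the article.

```python
from collections import Counter
import random

def procurement_schedule(fill_years):
    """Count how many containers are needed in each year, given the year in
    which each container is expected to be filled."""
    return dict(sorted(Counter(fill_years).items()))

# Illustrative stand-in for the simulation output: 2,200 containers filled
# at an uneven rate between 2023 and 2033 (the spread of dates is invented).
rng = random.Random(42)
simulated_fill_years = [rng.choices(range(2023, 2034),
                                    weights=[1, 2, 3, 4, 4, 4, 3, 3, 2, 2, 1])[0]
                        for _ in range(2200)]

for year, containers in procurement_schedule(simulated_fill_years).items():
    print(year, containers, f"£{containers * 40_000:,}")  # £40,000 per container (from the article)
```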


On top of the straightforward cost and timing benefits, the use of the simulations also made it possible to bring in a whole range of useful information for the site management, including the payback for different kinds of operator training and the ability to put together a schedule for planned maintenance, rather than running the process until things went wrong and, as a result, wasting considerably more time correcting the faults. What’s more the visual nature of the model made it a valuable tool to

FIGURE 3 PILE FUEL CLADDING SILO SIMULATION MODEL - VIEW OF THE RETRIEVAL FACILITY CONSISTING OF 2 PARALLEL WASTE RETRIEVAL AND EXPORT MODULES

explore the whole plan with regulators and other stakeholders. Not only has this project been a considerable success, with the proviso that there are still several years before the removal is put into practice, it has raised awareness and appreciation of the capabilities of simulation and the O.R. group within Sellafield Limited, making it a trendsetter for future projects that will benefit the company and the environment and see the Sellafield O.R. group busily employed for many years to come. Panos commented on future directions: 'The highlight of the group and my personal favourite over the last two years is Distributed Simulation Modelling. In Distributed Simulation, models can interact between themselves by communicating data and synchronised activities. This concept has been used for years in the military sector for conducting real-time platform-level war-gaming, as seen in

an article in the first issue of Impact, but there are limited examples in other industries. Distributed Simulation provides an additional level of accuracy in strategic decision making by explicitly modelling the boundaries and assumptions introduced in individual facility models.' The future at Sellafield may take simulation in a whole new direction, but the technique will continue to be at the core of Operational Research activity at the site.

Brian Clegg is a science journalist and author who runs the www.popularscience.co.uk and his own www.brianclegg.net websites. After graduating with a Lancaster University MA in Operational Research in 1977, Brian joined the O.R. Department at British Airways, where his work was focussed on computing, as information technology became central to all the O.R. work he did. He left BA in 1994 to set up a creativity training business.



MAKING AN IMPACT: TOO IMPORTANT TO IGNORE
Mike Pidd


SOME HISTORY: I FAILED TO MAKE AN IMPACT

My involvement with O.R. in the public sector started rather late in my working life, as I shall explain later. Many years ago I did, though, apply for a job in the Civil Service, for which department I can't recall. At the time I was working at Aston University under Steve Cook's leadership, having previously had a real job at Cadbury Schweppes. I saw an ad for a Civil Service post and, thinking this might be a way to make the world a better place, I applied. I still remember the interview process, held in somewhat shabby offices near Charing Cross. Two things stick in my mind. The first was sitting before a fierce lady whose job was to check my expense claim, which consisted of a train fare, plus bus and tube fares. She got out her book of fares, thumbed carefully through it, and checked every detail of the claim. She sat opposite me, behind her desk all the time, looking down on me from a chair higher than mine. My second recollection was the interview itself, which began with the usual questions about my work to date and why I'd applied. The rest of the interview, which lasted about 30 minutes, consisted of me not answering the following question. "Long distance runners in the UK often have problems with their calf muscles when they race in other countries. Have you any thoughts on why this might be?" I didn't have any useful thoughts on this and spent 30 tortuous minutes saying so. They sensibly didn't offer me the job, though my expenses were paid in full.

Many years later, around the millennium, I worked with Lancaster colleagues on a policy project using soft O.R. methods, for the then Inland Revenue. I found this fascinating and it led to other involvement and also to what I think was called a Whitehall Fellowship. After that it led to a period as a Research Fellow in the UK's ESRC-funded Advanced Institute for Management Research, thinking about how to measure the performance of public services. I also supervised, sometimes successfully, a series of Lancaster MSc students working on projects for several government departments. Finally I wrote a book about performance measurement, which sadly performed badly.

MORE RECENTLY: O.R. MAKES AN IMPACT


Last year one of the large government departments decided that it would welcome involvement from the O.R. community on its scientific advisory council and eventually David Lane, of Henley Business School, and I were appointed to it. I think it is really important that government departments can call upon experts from scientific communities to give advice and to comment on existing work. As far as I am aware, no one from the O.R. community had previously been appointed to this august group, which consists of well-known researchers, including a sprinkling of knights of the realm; plus the more humble O.R. pair. I have yet to attend a meeting, because I was in hospital before Christmas, so I may find that my role is to make the tea.

O.R. is being taken so seriously that there are issues on which outside comment from external O.R. academics is not only welcome but regarded as essential

However, regardless of whether or not I deserve a place in this eminent group, I think that this shows the major impact made by the O.R. staff in this government department. That is, O.R. is being taken very seriously, so seriously that there are issues on which outside comment from external O.R. academics is not only welcome but regarded as essential. To me, this indicates the high quality, importance and visibility of the O.R. work being done there. This is not just about scale, though the O.R. activity is large in this and in other departments, but more about its importance. O.R. makes an impact in many different ways and having a seat at the table seems vital in government circles.


Image courtesy of Roger White

A second indication of the importance and prominence of O.R. in government came when I was asked to Chair a new committee established within the department to oversee the governance aspects of the quality assurance of models used for decision making and policy. All major departments have signed up to implement the recommendations of the Macpherson Review, of which full details are available on government web-sites. The review was an exercise internal to government that stemmed from a major policy failure based on the use of a model. I know very little about the policy failure and nothing at all about the model that was used. However, I do know that this led to a concern that proper quality assurance processes are in place for models that are mission critical. The review recommends how this should be done. This is important because it recognises that modelling now plays a very important part in policy and decision making in government.

A significant proportion of these mission or business critical models is developed wholly or partially by O.R. workers

I understand that members of the Government O.R. Service had a major role to play in the recommendations, known as the AQuA book, which followed the Macpherson Review. The book explains the importance of model quality assurance and recommends how this should be done. The major O.R. involvement is another indication of the impact of O.R. in government and of the high regard in which the O.R. community is held. As far as my involvement is concerned, it's possible that more established members of the Department's Scientific Advisory Council were asked to Chair the governance board and declined the invitation, leading them to ask me as a last resort, but I doubt it. Apparently we're not just there to make the tea and it is significant that O.R. has a seat at this table.

There are many different types of model employed in important policy and decision making in government. Some of them are conceptually rather simple; for example, very large spreadsheets that do lots of arithmetic and that can be tested to destruction using test data sets and structured walkthroughs. Some are models developed by economists, based to varying degrees on economic theory and collected data sets. Others, though, are developed in the O.R. community, sometimes using external consultants, based on O.R. ways of going about things. That a significant proportion of these mission or business critical models is developed wholly or partially by O.R. workers shows just what an impact O.R. is making in government.

It is important that the public, which foots the bill for modelling work in government, can be sure that its money is wisely and well spent.

Across this range of models, some are mission or business critical, in that policy failure due to their use might be acutely embarrassing, very expensive or lead to poor experiences by the public and service users. It is important that the public, which foots the bill for modelling work in government, can be sure that its money is wisely and well spent. No wonder the O.R. community in government continues to grow, despite cutbacks elsewhere.

Mike Pidd is Professor Emeritus of Management Science at Lancaster University.



UNIVERSITIES MAKING AN IMPACT

EACH YEAR, STUDENTS on MSc programmes in analytical subjects at several UK universities spend their last few months undertaking a project, often for an organisation. These projects can make a significant impact. This issue features reports of projects recently carried out at two of our universities: Warwick and Lancaster. In addition, we also report work carried out as part of an M.Phil. at Cardiff. Please contact the Operational Research Society at email@theorsociety.com if you are interested in availing yourself of such an opportunity.

DEVELOPING A DEMAND DRIVEN INVENTORY FORECASTING MODEL FOR PANALPINA WORLD TRANSPORT (Nicole Ayiomamitou, Cardiff University, MPhil Operational Research and Operations Management)

© Panalpina / © Cardiff University

Panalpina, a logistics and freight forwarding company with offices in more than 75 countries worldwide, was looking for ways to meet customers' requirements to optimize inventory levels in the supply chain and increase service levels by introducing major innovative operational improvements with a new inventory forecasting model. Therefore they decided to launch a Knowledge Transfer Partnership (KTP) with Cardiff University (Cardiff Business School). KTP programmes aim to help businesses to improve their competitiveness and productivity through the better use of knowledge, technology and skills that reside within the UK knowledge base. KTPs are funded by the Technology Strategy Board along with other government funding organisations, in this case the EPSRC.

As a result of this partnership, Panalpina and Cardiff University have developed a new tool called D2ID (Demand Driven Inventory Dispositioning) to challenge the traditional third-party logistics (3PL) approach to logistics. Traditionally, 3PL companies have focused on filling warehouse space – the opposite of what customers need. The scientific approach to advanced forecasting and inventory planning was designed to address the specific needs of customers with complex global supply chains and a focus on speed and asset velocity. D2ID is an inventory forecasting model that facilitates inventory reductions in customers' supply chains, and Panalpina can now offer innovative logistics solutions to existing and potential customers. The tool uses the latest statistical methods to provide customers with a demand forecast, against which inventory can be planned and optimised. Mike Wilson, Panalpina's global head of Logistics, explains: "D2ID is Panalpina's most recent innovation to help customers improve asset velocity in their supply chain; it helps our customers ensure they have the right product at the right level of inventory at the right place at the right time."

The project deals with inventory management, including statistical and judgemental demand forecasting and inventory control theory. It is based on analytical and simulation modelling. Analysis was done on a SKU (stock keeping unit) by SKU level and the tool can provide product classification, demand forecasts, order levels, SKU fill rate specification and optimized inventory levels. The tool was developed in such a way to effectively address challenging demand patterns, taking into account the potential intermittence and lumpiness of demand. The D2ID approach allows Panalpina to identify opportunities to reduce inventory, free up cash and improve service levels to support business growth. Andrew Lahy, global head of continuous improvement at Panalpina, said “Thanks to the KTP and Nicole’s excellent work, Panalpina now has the knowledge and software to help customers identify and then reduce excess inventory in their supply chain”.
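The article does not say which statistical methods sit inside D2ID, but the classic starting point for the intermittent, lumpy demand it describes is Croston's method, which smooths the non-zero demand sizes and the gaps between them separately. Here is a minimal sketch of that idea – the function, smoothing parameter and data below are illustrative assumptions, not Panalpina's or Cardiff's implementation:

# Croston's method: smooth non-zero demand sizes and the intervals between
# them separately; the forecast demand rate per period is size / interval.
def croston(demand, alpha=0.1):
    size = None        # smoothed non-zero demand size
    interval = None    # smoothed interval between non-zero demands
    periods_since = 1
    forecasts = []
    for d in demand:
        # forecast made before seeing this period's demand
        forecasts.append(size / interval if size is not None else 0.0)
        if d > 0:
            if size is None:                      # initialise on first demand
                size, interval = d, periods_since
            else:
                size = alpha * d + (1 - alpha) * size
                interval = alpha * periods_since + (1 - alpha) * interval
            periods_since = 1
        else:
            periods_since += 1
    return forecasts

weekly_demand = [0, 0, 5, 0, 0, 0, 7, 0, 3, 0, 0, 6]   # an intermittent SKU
print(croston(weekly_demand)[-1])   # forecast demand rate for the next week

In practice a tool such as D2ID would layer product classification, safety-stock and fill-rate calculations on top of a forecast of this general kind.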


© Warwick Business School

DEVELOPMENT OF PERSONALIZED OFFERS TO CLIENTS OF UNICREDIT BULBANK (Temenuzhka Panayotova, Warwick Business School, MSc Business Analytics)

During the past three years, data mining and its relation to customer relationship management has found its applications in many industries in Bulgaria, especially in the banking sector. In 2014, UniCredit Bulbank Bulgaria celebrated its 50th anniversary, being a market leader and number one in assets, loans, deposits, shareholders’ capital and profit. It provides the full range of financial services in Bulgaria through its subsidiaries in the country: UniCredit Factoring, UniCredit Leasing, UniCredit Consumer Financing and mutual funds through Pioneer Investments Bulgaria. Nevertheless, the bank is facing problems related to customer retention and development, especially in the upper mass segment which contains a large number of bank customers who have basic bank products and are not targeted with specific offers due to their low response rate. The project undertaken by Temenuzhka aimed at determining the next best product to offer to clients from this segment so as to increase the offer acceptance rate. The focus was on five main products: overdrafts, credit cards, consumer loans, saving plans and Pioneer Investments. Overall, Temenuzhka analysed data of around 150,000 bank customers

covering transactions of more than 3 years. Methods used included decision trees, stepwise regression and neural networks. The work successfully identified association rules and predictors that can be used to determine which clients are likely to be interested in which of these products and hence should be targeted accordingly. Two marketing campaigns based on the results from the project ran in October and November 2015, and resulted in conversion rates of 3.4% and 4.2%, respectively. For comparison, Forrester Research Inc reported in 2009 that sales conversion rates were as low as 0.08% for a mail campaign at ING. Several analytical methods were used in order to meet the objective of the project: Market Basket Analysis, Sequence Analysis, and Predictive Modelling, all implemented using SAS. The project demonstrated associations among different bank products and services, as well as a typical client “shopping basket”. Moreover, the use of predictive analytics helped not only for validation of the association results, but also for the extraction of thorough information for best predictors of the five products of interest to the bank and for selecting the top 50% of clients that have to

be targeted with specific bank offers in order to increase the cross-sell and up-sell index. At the end of her project, Temenuzhka said “It has been a great pleasure working for UniCredit Bulbank for the past four months. The time spent at the bank allowed me not only to increase my understanding of analytical tools and concepts, but also to gain acquaintance with business insight practices in the financial industry. The support of the CRM department and the constant feedback and updates make me more confident in the results of the project and I cannot wait to see the results after its implementation.” Nadezhda Vicheva, Head of CRM Unit, commented “She was well prepared on the subject and we are glad that we had the chance to see the point of view and the methods used by an external person to our department. This allowed us to think over some of our practices and see possible things that we are missing. The findings from the analysis conducted by Temenuzhka were implemented in our marketing campaign and produced impressive results for the targeted segment.” Temenuzhka graduated with distinction and also won the SAS Dissertation Prize for the best dissertation in her cohort.
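For readers unfamiliar with the terminology, a market basket analysis boils down to counting how often products are held together and comparing that with what independence would predict. A toy sketch of the support, confidence and lift calculation follows; the data and column names are invented, and the real work was carried out in SAS on the bank's own records:

import pandas as pd

# One row per customer, one column per product (1 = customer holds the product).
holdings = pd.DataFrame({
    "overdraft":     [1, 0, 1, 1, 0, 1],
    "credit_card":   [1, 0, 1, 0, 0, 1],
    "consumer_loan": [0, 1, 1, 0, 1, 0],
})

def rule_stats(df, antecedent, consequent):
    support = (df[antecedent] & df[consequent]).mean()   # P(A and B)
    confidence = support / df[antecedent].mean()          # P(B | A)
    lift = confidence / df[consequent].mean()             # vs. independence
    return support, confidence, lift

print(rule_stats(holdings, "overdraft", "credit_card"))   # (0.5, 0.75, 1.5)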



USING LINEAR PROGRAMMING TO ALLOCATE RESOURCES IN A POLICING ENVIRONMENT (Gail Mawdsley, Lancaster University, MSc Operational Research and Management Science)


© West Yorkshire Police / © Lancaster University

West Yorkshire Police (WYP) is the fourth largest Force in the country, serving approximately 2.2 million people living in the districts of Bradford, Calderdale, Kirklees, Leeds and Wakefield which include both busy cities and rural locations. As part of the comprehensive spending review in 2010, WYP are required to save around £160 million between 2010 and 2017. Various projects are being undertaken to meet the savings required, finding a balance between the challenging financial constraints and the need to deliver a high quality service. The Force's Senior Leadership Team places great reliance on evidence based decisions supported by professional analysis to reinforce their strategic, tactical and operational decisions. The project undertaken by Gail produced a resource allocation methodology to aid WYP in deciding how best to allocate their limited resources across competing functions. The methodology required balancing costs and benefits of different options and recognising any emergent threats to meet service obligations. The methodology produced a user friendly tool to help managers make difficult resourcing decisions within their set budget, and attempted to hide some of the unnecessary complexities associated with a systematic and structured methodology from the untrained user.

It was built in MS Excel using the Solver add-in to optimise the resource allocation. All outputs were presented in an easy to use format for conducting what-if analyses to understand what the best mix of service options are if the budget is reduced, for example. Gail used a team within WYP as an example of how this methodology could be applied. Prolonged and repeated model design and building exercises can be avoided as the user is only required to enter the data, click a button and view and interpret the results. After completing her Masters, Gail was employed within the Business Change team in WYP where she has continued to use O.R. techniques. Another analytical tool Gail created as part of her role in WYP was to support the Force's extensive application of Value Stream Mapping, focusing on time and value adding activities identified through staff consultation and process mapping exercises, captured in MS Visio with an export function for analysing the process and any changes within MS Excel. This is another tool aiming at providing accessible analysis techniques in a simple to use format for anyone to use. Gail has also been working with the National Police Air Service (NPAS) to provide quantitative analysis to guide them in making a decision on their new operating model. This involved creating a simulation model to assess the impact on performance of various base locations and aircraft options, under a range of different budget scenarios. This work helped the NPAS National Strategic Board identify the most efficient operating model. Martin Rahman, Corporate Business Change Lead, said "Gail's work has helped to bring into focus the opportunities offered by Operational Research / analytics and how they can be applied within a policing environment. From the outset we made it clear that to gain universal buy-in we needed easy to use and easy to understand tools which can be applied to real life scenarios. Gail worked with the subject experts to help her understand their needs and worked closely with them to refine her final products. The analyses and tools Gail has developed have been widely adopted and have proved to be highly valuable in helping the service increase its efficiency. Senior managers have been able to make difficult decisions with more confidence because the analysis used is based on sound evidence, containing the detail they needed to convince their colleagues. Gail will continue to make complex analytical tools more accessible to untrained users offering them the opportunity to make informed decisions in their day to day managerial roles."
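Gail's tool used Excel's Solver, but the shape of the underlying linear programme can be sketched in a few lines. Everything numeric below – the benefit scores, costs, budget and minimum service levels – is invented purely for illustration and is not WYP data:

from scipy.optimize import linprog

# Decision variables: officer hours allocated to three competing functions.
benefit = [4.0, 3.0, 5.0]            # notional benefit per hour (to maximise)
cost_per_hour = [30.0, 25.0, 40.0]   # notional £ per hour of each function
budget = 100_000.0                   # total £ available
min_hours = [500, 400, 300]          # minimum hours to meet service obligations

res = linprog(
    c=[-b for b in benefit],                # linprog minimises, so negate benefit
    A_ub=[cost_per_hour], b_ub=[budget],    # spend no more than the budget
    bounds=[(m, None) for m in min_hours],  # meet each service obligation
    method="highs",
)
print(res.x)   # optimal hours per function within the budget

What-if analysis of the kind described above then amounts to re-running the optimisation with a different budget or different minimum obligations.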


OR ESSENTIALS

Series Editor: Simon J E Taylor, Reader in the Department of Computer Science at Brunel University, UK

The OR Essentials series presents a unique cross-section of high quality research work fundamental to understanding contemporary issues and research across a range of operational research (OR) topics. It brings together some of the best research papers from the highly respected journals of The OR Society.

If you would like to submit a proposal for the series, please email: Liz.Barlow@palgrave.com and simon.taylor@brunel.ac.uk

www.palgrave.com/series/or-essentials/OREss/


SECURING INTERNATIONAL SPORTING EVENTS

CHRIS HOLT

Image: Licensed under the Open Government Licence v3.0: www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
Image provided by Glasgow City Marketing Bureau (GCMB)

IN THE LAST FEW YEARS the UK has hosted the London 2012 Olympic and Paralympic Games and the Glasgow 2014 Commonwealth Games. Following the success of a similar initiative for London 2012, the UK Government's Centre for the Protection of National Infrastructure (CPNI) commissioned the Defence Science and Technology Laboratory (Dstl) to deploy a team of ten analysts to Glasgow in support of the Commonwealth Games. Working alongside government, police and military colleagues, Dstl's operational analysts helped to assure the security screening of spectators, athletes and the workforce entering venues. Our contribution was to observe and collect data on the performance of search and screening at all thirteen Games venues. Our teams monitored and analysed the security screening, and compiled a report in time for each evening's security operations meeting, ready to inform planning for the next day. Such quick turnaround analytical support – completed in only a matter of hours each day – proved to be of great value to ensure that security was maintained throughout the Commonwealth Games.

However, the story begins several years earlier: the summer of 2012 saw the culmination of a four-year CPNI-led programme of support from a cross-government team (CPNI, Dstl and Home Office) that provided evidence-based advice and analysis to the London Organising Committee of the Olympic and Paralympic Games (LOCOG) on search and screening. Beginning in 2008, the cross-government team advised on the optimal solution to delivering appropriate and cost-effective processes for screening over 11 million spectators and 200,000 accredited personnel (and associated vehicles) at 32 Olympic and Paralympic venues across the UK (in addition to other locations such as the Athletes' Village and the International Press and Broadcast Centres). With the spotlight on the UK to deliver a safe and secure Games, it was essential that an effective security regime was in place. The specific focus of our work was the search and screening processes designed to prevent prohibited items from being taken into venues. Whilst clear regulations and standards existed for security screening in certain regulated transport sectors – aviation being the most obvious – there were no such standards for security screening at major events. Whilst much could be learnt from the aviation approach, the challenges here were very different.

During the initial planning stages of the Olympics, the team delivered advice on the nature of the security screening equipment to be used, the way it would be used, and the number and layout of screening lanes required at the different venues. This advice was critical to enable LOCOG to establish an optimal, cost-effective screening process. This advice was underpinned by an extensive programme of simulation modelling of the search and screening processes. Initial modelling used Simul8, adapting models that had originally been developed by Dstl for modelling transport security processes. Advice and analysis also

considered the screening of vehicles during the construction and fit-out phases of the Olympic Park, ensuring that any prohibited items would be prevented from being concealed in venues during construction. As planning evolved, so did the analytical tools and techniques used. Simulation modelling initially conducted using Simul8 transitioned into CAST (the Comprehensive Airport Simulation Tool) to enable the flows of spectators through screening lanes to be modelled with greater fidelity. In parallel, the vehicle screening process was modelled using a bespoke discrete event simulation tool which Defence Research and Development Canada (DRDC, the Canadian counterpart of Dstl) had developed as Canada prepared to host the 2010 Winter Olympics in Vancouver.
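Specialist packages such as Simul8 and CAST add a great deal of fidelity, but the core idea of a screening-lane simulation is small. The sketch below uses invented arrival and screening rates and is not the Dstl or DRDC model – it simply illustrates how waiting times emerge from a stream of arrivals sharing a set of lanes:

import random

def simulate(arrival_rate, screen_time, lanes, horizon_min):
    """Spectators arrive at random; each lane screens one person at a time."""
    random.seed(1)
    free_at = [0.0] * lanes            # when each lane next becomes free
    t, waits = 0.0, []
    while t < horizon_min:
        t += random.expovariate(arrival_rate)           # next arrival (minutes)
        lane = min(range(lanes), key=lambda i: free_at[i])
        start = max(t, free_at[lane])                    # wait if lane busy
        waits.append(start - t)
        free_at[lane] = start + random.expovariate(1.0 / screen_time)
    return sum(waits) / len(waits), max(waits)           # average and worst wait

# e.g. 20 arrivals per minute, 15 seconds average screening, 8 lanes, 3-hour peak
print(simulate(arrival_rate=20, screen_time=0.25, lanes=8, horizon_min=180))

Re-running a model like this with different lane counts, arrival profiles and screening times is essentially what sizing the lanes at each venue involves.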

Quick turnaround analytical support proved to be of great value to ensure that security was maintained throughout the Commonwealth Games

One of the key challenges for ensuring that robust analysis could be conducted was the availability of relevant data. How many spectators

would arrive? How long before their event would they arrive? How much stuff (bags, coats, food and drink) would they bring with them? How would this vary with weather conditions? After all, a glorious British summer couldn’t be relied upon! The solution to this was an extensive programme of data collection and test events. Analysis of data from these test events allowed modelling assumptions to be tested and refined, and sensitivities to be explored. Initial data collection took place at other sporting events (such as the Wimbledon Championships, which was also a Games venue), but nearer to the Games a specific series of test events took place, starting small (just a single screening lane) and growing progressively until just before Games time. As later happened in Glasgow, a team of analysts collected data to assess the effectiveness of the screening process across the different venues throughout Games-time, identifying issues of immediate concern which could then be addressed by LOCOG, the military and the contracted security companies. During the London 2012 Olympic and Paralympic Games a team of over 40 personnel, from Dstl and partner organisations, deployed to venues. A senior official within the Olympic Security Directorate praised the team’s



work, saying : “As a result of the team’s careful analysis, exceptional team-work and innovative approach, spectator screening was both efficient and effective. Spectator approval ratings were consistently high for the security regime (of which spectators’ primary experience was the screening regime); this contributed to a Games which was not only safe and secure, but also in keeping with the Olympic spirit”. Following the success of the Olympics, attention turned to Glasgow 2014. Again, a high level of security was paramount for the Games, although with differences in terms of scale, threat context and available budgets the Commonwealth Games brought with them their own set of challenges. Drawing upon extensive experience from London 2012, Dstl was tasked to assist in Glasgow, again as part of a coordinated cross-government team. Prior to the Games, Dstl helped to inform the planning for the security screening process, modelling proposed screening processes which differed from those used at London 2012. This time around, the analysis had the advantage of being able to make use of the extensive dataset collected during the Olympic Games. Through this analysis and simulation modelling, Dstl influenced the design of the screening systems and provided a clear evidence base to the event organisers, government and security forces, allowing them to make wellinformed security decisions. One lesson that emerged from different strands of the analytical support was the need to ensure that security isn’t considered in isolation from other areas. Whilst the organising teams were structured in conventional silos (security, venues and infrastructure, transport, ticketing, etc.), effective security required effective communication between these various teams. The ‘last mile’ before


spectators arrived at venues included volunteers (“Games Makers” and “Clyde-siders” respectively) marshalling spectators and checking tickets both before and after security screening. Both processes had the potential to cause bottlenecks that would have meant screening lanes were not able to operate at the capacity that had been planned for and was required in order to process the number of spectators arriving.
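The reason such bottlenecks matter is simple arithmetic: the effective throughput of the whole 'last mile' is capped by its slowest stage. A small worked illustration with made-up rates:

# People per minute that each stage in series can handle (illustrative numbers).
soft_ticket_check = 10.0    # volunteers checking tickets before security
security_screening = 12.0   # the screening lanes themselves
hard_ticket_check = 9.0     # ticket check after security

effective_throughput = min(soft_ticket_check, security_screening, hard_ticket_check)
print(effective_throughput)   # 9.0 – the planned lane capacity of 12 is never reached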

Dstl has been able to apply operational research techniques to the problem of screening spectators, athletes and workforce at the world’s three largest multi-sport events by spectator attendance

To give an illustration of the scale of the problem, the main entrance to the Olympic Park, the Stratford Entrance, needed to screen over 200,000 spectators on peak days, most of whom arrived during two short peaks (for morning and afternoon sessions). Dstl’s modelling explicitly represented queues, soft ticket-check (before security) and hard ticket-check (after security) in order to understand how these processes may interact with or, crucially, slow down the security processes. However, equally vital were other government partners’ efforts to ensure effective communication between the different teams within organising committees. Communications were another vital area – ensuring that spectators were advised of what to expect when they arrived (aviation-style screening at most venues), what items were prohibited, and how long before their event to arrive. When necessary these communication messages were part of the solution to

ensuring minimum levels of disruption and not allowing long queues to build up – for instance, smoothing the arrivals profile at security by encouraging spectators to arrive early or to use alternative, less busy entrances and routes. One legacy of the cross-government support to London 2012 and Glasgow 2014 is the development of guidance on security screening for a wide range of applications including for future high-profile events (BSi PAS 127:2014 Checkpoint security screening of people and their belongings – Guide). This guidance encourages the location-owner or event-organiser to consider the types of items they are trying to detect and the level of throughput they require in order to achieve a proportionate and cost-effective screening process. As part of a coordinated government effort, Dstl has been able to apply operational research techniques to the problem of screening spectators, athletes and workforce at the world's three largest multi-sport events by spectator attendance. Beginning in 2008 with preparations for the Olympics and Paralympics and ending with the Glasgow 2014 Commonwealth Games, this support helped to ensure successful events that were not only secure but also widely praised for the quality of the visitor experience.

Chris Holt is a Principal Analyst in the Defence and Security Analysis Division of Dstl. He has worked in the homeland security analysis area since joining Dstl in 2006. He holds an MSc in Operational Research and is an Associate Fellow of the OR Society.

© Crown copyright (2016), Dstl. This material is licensed under the terms of the Open Government Licence except where otherwise stated. To view this licence, visit http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: psi@nationalarchives.gsi.gov.uk.


© OJO Images Ltd / Alamy Stock Photo

TACKLING THE TERRORIST: AN O.R. RESPONSE TO A DEADLY BIOHAZARD

BRIAN CLEGG

WHEN I TOOK my Masters in Operational Research, the course on O.R. in the health service stood out from all the others. This wasn’t just a matter of improving the bottom line or enhancing efficiency. It was about saving lives. With limited resources, this means that ethics must come up against cold mathematical values,

forcing difficult decisions that balance cost and risk to life. And nowhere is this complexity more obvious than in a recent O.R. exercise to assess the response to an assault on the US public using deadly anthrax. A week after the attacks on 11 September 2001, a series of letters were sent to media and government



offices containing anthrax spores. They resulted in 22 infections, five of them deadly. Anthrax was initially a disease of cattle and other herbivores. The bacteria behind anthrax form spores which can remain inactive for years before coming into contact with living tissue, when they reactivate and multiply. This capability made anthrax ideal for biological weapons, and a number of countries included the bacterium in their bioweapon programmes. Although there are effective vaccines, and anthrax can respond well to antibiotics, the spores could form a deadly weapon in the hands of terrorists. The 2001 attacks were small scale, thought to be the work of a single individual. But the US government wanted to be prepared for a possible large-scale attack. Ten years on from 9/11, the Department of Health and Human Services sponsored a study of the impact of sending out antibiotics ahead of time to regional or local

centres, or even distributing them to homes. Anthrax is a disease where rapid access to antibiotics after an attack is essential if they are to be effective. Leading the analysis was Margaret Brandeau, Professor of Management Science and Engineering at Stanford University. Brandeau discovered O.R. while an undergraduate maths student at MIT (Massachusetts Institute of Technology), deciding the moment she heard of it that this application of mathematical techniques to real world problems would be a great field to pursue. The first real piece of work Brandeau undertook after taking her Masters involved improving ambulance deployment in Boston. She recalls a typical response that many fresh-faced Operational Researchers experience in the field: "On my very first day of my very first paid O.R. job at the City Hall Dispatch Center, the dispatcher, who was a grizzled old man, came up to me with a cigar between his lips and said, 'You know, dear, I have been doing this job for 40 years. What can your model tell me that I don't already know?' Of course, I just stammered politely that we wanted to help him." It was a long time before Brandeau could clarify exactly why O.R. has such potential to make a difference, in the face of long-term expertise. "It took me 30 years of thinking about this question to finally arrive at an answer. First of all, O.R. can quantify and optimize the consequences of alternative decisions. In most public health problems, optimization is difficult because many non-quantifiable factors such as ethical issues are inevitably involved. A second great aspect of O.R. is that it can determine where data is lacking. This comes in very handy, especially in public health problems. Finally, and perhaps most importantly, O.R. can model complex systems for which intuition alone is insufficient. Since we do not always know the answer in advance, an O.R. model is an ideal way to help one understand the tradeoffs in a complex problem."

Anthrax is a disease where rapid access to antibiotics after an attack is essential if they are to be effective

© Norbert von der Groeben Photography

MARGARET BRANDEAU, PROFESSOR OF MANAGEMENT SCIENCE AND ENGINEERING AT STANFORD UNIVERSITY

It's hard to imagine a decision where the factors Brandeau describes are more significant than in the preparation for an anthrax attack. Resources are limited. The form an attack would take is uncertain. Yet get the decision wrong and thousands of lives could be lost. A knee-jerk reaction might be that everyone at risk – perhaps the whole US population – should keep antibiotics in their medicine cabinets. But does that make sense? O.R. can provide the logic to help in what seem otherwise intractable decisions. Anthrax is hard to deal with, because to treat it efficiently with antibiotics the course has to start before the symptoms develop, during the incubation period that usually lasts several days after infection. Once symptoms are clear and beyond an initial flu-like state, the infected patient has a very high probability of dying and needs intensive care. The existing approach was to rely on feeding distribution centres, usually administered by states, from national and regional stockpiles of antibiotics, pushing out the majority of stock after an event, with only small local inventories. In analyzing the different possible strategies, the O.R. model had to pull together costs, which are generally


relatively easy to assess, and the much harder to quantify benefits, where it was necessary to estimate the impact of different strategies on the time taken to get the antibiotics to the affected population, and also to map the elapsed time onto the chances of surviving once infected.

‘You know, dear, I have been doing this job for 40 years. What can your model tell me that I don’t already know?’

Brandeau’s team produced an Excel spreadsheet model, which was used to simulate the capabilities of dealing with different types of attack and applied a range of strategies covering better event detection, varying local dispensing ability, varying local stockpile levels and alternative mechanisms for distribution. The model consisted of 21 different compartments for individuals to move between, and calculated expected costs, deaths and life years lost after an attack to assess the impact of different strategies. For example, early in the process individuals can be in compartments such as ‘Not exposed’, ‘Potentially exposed but unaware’, ‘Potentially exposed but aware’, ‘Potentially exposed and taking antibiotics’ and so on. The model moves people from compartment to compartment depending on flow rates that are influenced by the different scenarios, such as the rate of dispensing of antibiotics or the time before the attack was discovered. The simulation used a baseline scenario of an aerosol release of anthrax in a major metropolitan area with 5 million inhabitants. This covered a range of exposed individuals from

50,000 – typically if the release was at a major sporting event – through to 250,000 people from a larger attack. But the problem is that it wouldn’t be clear until too late just who had been exposed, and so numbers requiring treatment could range between 100,000 and 5 million. It’s not enough when dealing with human behaviour to assume that everyone will act perfectly. Antibiotic use is often terminated early by the patient as the symptoms subside before the antibiotic has completed its work. So the model had to build in the assumption that not everyone would continue treatment properly. Based on experience from the 2001 attacks, the team assumed that just 64 per cent would stick to their regime (though the simulation was run with values up to 90 per cent to see how sensitive the outcome was to such variation). Treatment for those already showing symptoms – and the rate at which symptoms came through was modelled from historical data – was significantly more intensive, requiring far more resources. The outcome was not a simple, one size fits all solution. It seemed clear that pre-dispensing antibiotics – effectively issuing a survival kit for everyone ahead of time – would be too expensive compared to any benefit, but keeping significant local stockpiles was cost effective if the probability of an attack was high. A key bottleneck appeared to be the rate at which antibiotics can be dispensed. At low levels of dispensing most exposed individuals died – getting this dispensing rate up to 100,000 individuals a day in the simulation was highly cost effective even with very low probabilities of attack. Speed of detection also proved important, particularly for large
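The compartmental logic is easy to sketch. The toy model below uses only three compartments and invented daily rates purely to show the mechanics of moving people between states; the actual model had 21 compartments and flow rates estimated from data:

# A toy 3-compartment version of the flow logic: untreated exposed people either
# start antibiotics (at the dispensing rate) or progress to symptomatic disease.
state = {"exposed_untreated": 100_000, "on_antibiotics": 0, "symptomatic": 0}
dispense_rate = 0.20     # fraction of untreated people reached per day (made up)
progression_rate = 0.15  # fraction of untreated people falling ill per day (made up)

for day in range(1, 11):
    untreated = state["exposed_untreated"]
    treated = dispense_rate * untreated
    ill = progression_rate * untreated
    state["exposed_untreated"] -= treated + ill
    state["on_antibiotics"] += treated
    state["symptomatic"] += ill
    print(day, {k: round(v) for k, v in state.items()})

Raising the dispensing rate in a model like this shifts people into treatment before they fall ill, which is exactly the bottleneck the study identified.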

attacks, assuming that dispensing rates are high enough. If the attack was announced by the terrorists at the time there was a chance to get organised, but if the attack was not discovered until a number of the victims were showing advanced symptoms, then the number of deaths could soar. The ideal was to have a mechanism to determine who has been exposed, before they began to show symptoms, so the resources could be concentrated where they would have most impact. Where the rates of dispensing were over 14,000 individuals an hour, the dominant factor became how well those treated stuck to their full course of antibiotics.

It’s not enough when dealing with human behaviour to assume that everyone will act perfectly

One key factor proved to be the risk of attack. Despite being highly uncertain, this has a major impact on costs and effectiveness. This is because the costs of being prepared have to be paid whether or not an attack occurs, while the benefits are only accrued when an attack takes place. Margaret Brandeau: ‘The probability of attack, and the size of such an attack, is of course the key uncertainty. If there is a high probability of attack, then it makes sense to invest more in preparedness than if there is a low probability of attack. The probabilities are generally quite small. One way to think about them is to think about the probability of such an event occurring in the next 10 years, and then divide by 10. But, generally speaking, one is just guessing when estimating the probability of an attack.’




there was no good evidence that the items stocked or the levels held were the best to support actual needs that might arise

©Innocenti/Cultura/Getty Images

An interesting comparison of the kind of risks involved is made in one of the papers on this research. The probabilities of attack considered were low – typically in the annual range of 10⁻³ to 10⁻⁷ (1 in 1000 to 1 in 10 million). These were compared with an estimated risk of a major earthquake in the San Francisco Bay area in the next 30 years of 0.62 and the risk of a 1 kilometre asteroid strike causing major regional destruction during this century of 2 × 10⁻⁴. Since the initial analysis, some consideration has been given to the huge strategic national stockpile, a vast repository of antibiotics, vaccines and equipment for public health emergencies. This massive resource – the equivalent of around 7 large supermarkets stacked floor to ceiling with a total of 900 inventory items worth around $6 billion – costs $500 million a year to maintain, mostly in keeping the inventory fresh. But there was no good evidence that the items stocked or the levels held were the best to support actual needs that might arise. A second study is underway considering what the stockpile should hold and what models could support the decisions.

At the moment the approach to the stockpile does not include cost in the equation, with stocking levels based on a combination of the expected threat and how effective different products would be. It has been suggested that instead the planners should consider alternative scenarios for actual need of supplies and calculate the incremental impact of adding different levels of stock for an item depending on its cost effectiveness.

Cost effectiveness seems a cold and impersonal measure when dealing with human life or death decisions, but in the end, with limited resources, cost has to come into the equation. Margaret Brandeau: 'When studying policies that affect human health and well being, you have to put a value on human life, either explicitly or implicitly. For example, when thinking about whether a new medical intervention is worthwhile, we think not only about how much that intervention costs, but also about how much the intervention will improve health (both length of life and quality of life). 'In our work, we do not put a value on life. Rather, we report intervention costs and health benefits and allow policy makers to make their own decisions. The World Health Organization says that an intervention whose cost per quality adjusted life year (QALY) gained is less than a country's GDP per capita is very cost effective, and if the cost/QALY gained is less than three times a country's GDP per capita then the intervention is cost effective.' For the moment, the key lessons from the exercise stress the importance of increasing local dispensing capability, developing means to identify who was exposed and finding ways to ensure that those on antibiotics complete their courses. In producing those results, Margaret Brandeau and her team demonstrated the effectiveness of O.R. techniques – in this case a relatively simple model, even if featuring a wide range of inputs – in quantifying the impact of alternatives and demonstrating the best approach in a complex problem, using costs and probabilities in the balance of saving human lives.

Brian Clegg's career started in Operational Research, with a Lancaster University MA in O.R. and first job in the O.R. Department at British Airways. The work he did there was heavily centred on computing, as IT became central to all the O.R. work he did. He left BA in 1994 to set up a creativity training business. Brian is now a science journalist and author and he runs the www.popularscience.co.uk and his own www.brianclegg.net websites.


© MELBA PHOTO AGENCY / Alamy Stock Photo

RBS RISK ANALYTICS & MODELLING – SHAPING THE FUTURE WITH ANALYTICS

ROSEMARY BYDE

RBS IS OFTEN FEATURED in the media and you’ve almost certainly heard or read about us. But how much do you know about some of the great work that goes on inside to make it all tick? These are some of the stories about how analytics and modelling are being used to help build a strong, customer-focussed bank.

We’re a UK-based bank, headquartered in Edinburgh, with a long history. We serve customers from all walks of life, from high street customers to wealthy individuals, from small businesses and entrepreneurs to large corporate companies. RBS might not have a team named “Operational Research” (O.R.), but



this does not mean we don’t do O.R.. On the contrary, RBS has hundreds of modellers and analysts working with vast quantities of data every day. These teams help RBS make better decisions.

RBS has hundreds of modellers and analysts working with vast quantities of data every day

Some examples of areas in which modelling is used include improving the customer experience, marketing and operational efficiency, but here I will focus on the Risk Analytics and Modelling teams, which comprise about 100 people. As it says on the tin, we build, monitor and review models used to

understand and manage risks in RBS. As our Chief Risk Officer put it: “we take risks for a living”. We make money by lending to people, but not everyone will repay. We need to make sustainable decisions that are right for our customers. We need to make them quickly and often and help the business to grow. There are many regulatory requirements that cannot be fulfilled without models. We also keep our eye on fraud, the crime fighting part of our remit! The focus here is protecting our customers, as well as minimising our losses. In the past, we lost sight of our customers. We aimed to expand and become the ‘biggest’. But now we have refocused. We are firmly aimed at the UK market and our vision is to become number one for customer

service, trust and advocacy by 2020. Our teams have an important part to play in achieving this vision.

DO WE THINK YOU CAN REPAY?

You walk into a branch, or perhaps you visit us online. You want a new current account, an overdraft, a loan or maybe a mortgage. Despite many a parody based on ‘computer says no’, automated systems can be efficient and accurate. At RBS we have used logistic regression techniques to make fair and objective lending decisions for decades. Using previous history, our models assess a customer’s ability to repay so we can lend responsibly to people who can afford it. Unfortunately, no model is perfect and the future is never certain.
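For readers who have not met it, the scoring approach mentioned here can be illustrated with a few lines of logistic regression on toy data. The features and numbers below are invented and bear no relation to RBS's actual models:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy applicant features: [income in £k, months with the bank, existing arrears]
X = np.array([[20, 6, 1], [55, 48, 0], [33, 24, 0], [18, 3, 2],
              [70, 60, 0], [25, 12, 1], [40, 36, 0], [15, 2, 2]])
y = np.array([1, 0, 0, 1, 0, 1, 0, 1])   # 1 = went on to default

model = LogisticRegression(max_iter=1000).fit(X, y)
applicant = np.array([[30, 18, 0]])
print(model.predict_proba(applicant)[0, 1])   # estimated probability of default

A lending strategy then turns that probability into a decision, for example by accepting applicants whose estimated default risk sits below an agreed cut-off.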

Source: Dominika Mnich

RBS’S ST. ANDREW SQUARE BRANCH, EDINBURGH



We create strategies to find the balance between safe, profitable lending and credit losses. Our number crunching means we can help meet the needs of as many customers as possible.

Our models assess a customer’s ability to repay so we can lend responsibly to people who can afford it

ARE WE A ‘SAFE’ BANK?

© Andrea Jones Images / Alamy Stock Photo

Everyone has heard of the banking crisis of 2008. RBS was at the focus of the maelstrom and since then we have been on the long road to recovery. The effects have been long lasting. Although it can feel difficult to face such scrutiny, it is quite right that regulators take a keen interest in our activities and risk profile. A lot of effort is directed onto models which measure the risk of our portfolios: how likely are people to default? If they do, how much will we lose? This in turn defines how risky our assets are. Known as “Risk Weighted Assets”, these are a key input into publicly available reports and, along with equity, a measure of our capital strength. Without these models we would not be able to demonstrate that we are a safe bank.

IS THAT REALLY YOU?

We process over 7.5 million transactions every day. The vast majority of these happen seamlessly; you would never know we were there. But behind the scenes, our models and strategies assess whether your account has been compromised. Is it you using your debit card details to buy a new tablet? Is it you transferring money via online banking to a friend for a holiday? We use a mix of external and internally built models to help us answer these questions. Techniques such as neural networks, segmentation and logistic regression are commonplace. They help us identify high risk transactions, unusual patterns of spending and vulnerable customers. Only a very small percentage of transactions are fraudulent, but these models help alert us to the riskiest ones so we can take a closer look.

Techniques such as neural networks, segmentation and logistic regression help us identify high risk transactions, unusual patterns of spending and vulnerable customers

RBS HQ, EDINBURGH

FINDING THE BALANCE

We are a bank that innovates. Recent examples have included simplifying our products, introducing Apple Pay and TouchID, developing our mobile app and online account opening. Every time we launch a new product, change our technology or make things easier for our customers, we also introduce new credit or fraud risks. Where we need to manage these risks with additional checks we must focus our resources in the right places. We use our data and models to make sure we target this effort intelligently. Finding the right balance between customer convenience and risk prudence is an analytical and business challenge.

COULD WE SURVIVE ANOTHER CRISIS?

Stress testing is a hot topic in banking at the moment. We now have a specialist team who build time series forecasting models to help answer questions about what might happen in the future. Regulators give us predefined macroeconomic scenarios. The variables involved, such as interest rates or GDP, must be applied to our models to see how our portfolios would perform in 'stressed' conditions. The result? Confidence that our business is strong enough to weather the storms. As a result of previous stress tests we have taken decisions that make our position stronger.
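A stress test ultimately means pushing a regulator's scenario through a fitted relationship between portfolio outcomes and macroeconomic variables. A deliberately simplified sketch – the coefficients, scenario and portfolio size below are invented and nothing like an actual regulatory model:

# Suppose default rates have been regressed on unemployment and GDP growth.
intercept, b_unemployment, b_gdp = -0.01, 0.004, -0.003   # illustrative fit

def default_rate(unemployment_pct, gdp_growth_pct):
    return max(0.0, intercept
                    + b_unemployment * unemployment_pct
                    + b_gdp * gdp_growth_pct)

base = default_rate(unemployment_pct=4.5, gdp_growth_pct=2.0)
stressed = default_rate(unemployment_pct=9.0, gdp_growth_pct=-3.0)
portfolio = 10_000_000_000   # £10bn of lending (made up)
print((stressed - base) * portfolio)   # extra expected losses under the scenario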

A BIG DATA FUTURE

The media is full of stories about ‘big data’ and the future. The potential is huge, but change in large organisations can be slow. What we do have is data and lots of it. Our challenge is how to do something useful with it. Do we have the right systems and infrastructure? People with relevant skills and know how? The ideas about what we can do with it? At the same time, there is still a place for external companies and universities. They sometimes inform us, sometimes sell to us and sometimes challenge us with ideas about new technologies and modelling. Even where we use external products and services, we have an important role to play in making sure what we pay for is fit for purpose and adds value.

IT’S ALL ABOUT THE PEOPLE

We couldn’t do any of this without our people. There are several modelling teams in RBS, and there is a reason


for that. In comparison to many other roles, ours are specialist. We recruit and train graduates and experienced analysts with mathematical, numerical and O.R. backgrounds. We expect them to be able to deal with complex and messy data, code and even build models as well! And that isn’t all. Without our stakeholder buy in nothing can be implemented or changed. We must be able to communicate our results and new opportunities in a way that they will understand. By grouping these specialist roles together, we can be more efficient in our

We can see the impact of our work in the annual results

recruitment and training. It also means we can easily learn from each other: we often run knowledge sharing events and internal coaching sessions. Our collective heads can often solve difficult problems together. Understanding our systems, data and internal processes is critical to project success. Skills and knowledge learnt on one project are often highly transferrable to another. With such large communities we can also support secondments and moves to other teams. Careers can be built around being an 'expert' in a specific area, having a 'portfolio' of experiences, or becoming a manager or leader of a technical team. Not only that, working with like-minded people can even be fun!

© Guillem Lopez / Alamy Stock Photo

MAKING A DIFFERENCE

We often have to be practical and patient as we work through the complexities of data, systems and regulation. However, the reward is knowing that what we do makes a difference. We can see the impact of our work in the annual results. We help make the bank strong and sustainable, and provide evidence that this is so. Above all else, we enable great customer service. Our models help us to give an answer quickly and to be confident we are selling the right products to the right people. We protect people from fraud and keep their money safe. Without us, the bank would not be able to operate in today's data driven world.

Rosemary's masters degree in Management Science and Operational Research from Warwick led her to a life of crime…. fighting! She leads a team in Risk Analytics & Modelling in RBS specialising in Fraud Modelling, although the team also provides a range of other analytical services covering anti-money laundering and credit risk.

The Royal Bank of Scotland (RBS), founded in 1727, is one of the oldest banks in the UK. We are proud of our long history. Today, RBS is made up of 12 brands, with branches throughout the UK and Ireland. Following recent upheavals in the financial sector, RBS plans are to make the bank stronger, simpler and fairer. More information about this can be found on our website: www.rbs.com


© Vodafone

OPEN ALL HOURS? MARTIN SLAUGHTER

What works for the Arkwrights – in terms of opening hours rather than double entendres and carnivorous cash registers – simply can’t be replicated by a major high street chain. The running costs mean that it isn’t viable to trade for long hours, hoping to hoover up every last drop of potential custom. In some cases – particularly shopping centres, where trading hours are often part of the lease – there is no scope to consider what the optimum opening hours should be, but it has traditionally been the case that high street stores have traded standard hours for all stores in all locations. A standard pattern of 9am to 5:30pm might be very easy to apply but is it the most profitable way to trade, particularly as working and shopping patterns change? Vodafone Retail in the UK operates a network of 300+ high street stores.

Although they also have digital and telephone channels for sales and support, the stores offer an important way to attract and serve customers. Running the store network is, however, a complex business. On top of the obvious concerns to make sure that the products and packages are the right ones for customers (which applies to all channels), the stores must be in the right place, have the right look and feel and have the right staff (both in terms of numbers and knowledge/experience). Hartley McMaster Ltd (HMcM) had worked with Vodafone analysts for many years, helping to collect data and develop models that were used to identify the national total staffing required for stores and the grade mix within this total. The work then allocated the staffing to individual stores, aiming to make an optimal fit to the trading targets for each




store within practical constraints such as standard shift length. Vodafone launched a major review of the trading in retail stores – the Right People Right Place programme – and HMcM were asked to look at one specific area of the business that hadn't been considered before; were the stores trading at the right times? The picture was a very standard one: those stores with a contractual obligation traded in line with the terms of their lease, some stores (for example at London mainline stations) traded non-standard hours but all others traded 9am to 5:30pm Monday to Saturday, with some also trading on Sunday. There were anecdotal reports of stores full and turning customers away at 5:30pm and of stores with busy periods of trading where customers left the shop (possibly not to return) because queues were so long, while the store stood near empty during others. HMcM were given 12 months EPOS data (till receipts) for all UK stores and asked to see what these said about trading patterns. The EPOS data showed the store, type (eg contract, accessory, repair, etc), value, date and timing of each transaction. The first question to answer was whether the sales data painted an accurate picture of store activity. For example, if customers liked to browse in stores to do

© Chuck Pefley / Alamy Stock Photo

their research before considering their options and returning later to make a purchase, then sales data alone would be misleading when assessing the activity levels of a store. Vodafone were able to provide footfall data (from automatic counters in store entrances) which could be compared to the sales patterns for those stores. The patterns were almost identical for each store – clearly sales activity was aligned almost exactly with store activity and so could be used as the sole measure for assessing the effectiveness of trading hours. The next step was to create typical trading profiles for each store. This involved two areas for analysis: converting each transaction to a standard value and taking into account seasonality in trading. For some transactions – the sale of an accessory or a Pay As You Go phone topup, for example – the value is obvious. For others – especially a new contract – the value of the transaction at the point of sale may be small (or zero) but the full-life value much higher. Working with Vodafone customer relations and finance staff, HMcM were able to define a set of methods to assign a standard value to each transaction. Seasonality in trading could clearly be seen in the data – for instance, December was a particularly busy month for stores (in the build up

to Christmas) and January was disproportionately quiet. There were also within month trends visible as well as clear patterns in trading within the week and within the day. The sales data for each store was processed to remove seasonality and to produce a typical trading pattern showing standard values of sales for 30 minute slots across each day of the week. Vodafone then provided a list of the trading hours for each store, allowing the trading patterns to be compared with these. The final dimension to the analysis was provided by considering the typical running costs for stores, allowing a profitable trading threshold to be defined, expressed in terms allowing it to be compared directly to standard sales. The trading analysis could now be presented to show the profitability of each 30 minute slot for each store. A typical set of results might have looked like those in Figure 1. In this case, the store is (say) one that trades 9am to 5:30pm on Monday to Saturday (but opens at 9:30 on Tuesday to allow for staff training) but does not trade on a Sunday. The analysis shows a number of interesting things. First of all, the store is trading profitably across most of its current open hours – clearly good news. However, the first 30 minutes of trading for Tuesday, Wednesday and Friday are below the profitable threshold. Even more strikingly, the store is showing sales activity in the 30 minutes after it officially closes on all days. This can only mean that there are customers left to be served when the store closes – the implication being that this is when customers want to use the store at that location. The recommendation would therefore be that the store should change its trading hours to 9:30am to 6pm for Monday to Friday. For Sunday trading the analysis looked at the sales patterns for those


FIGURE 1 – TRADING ANALYSIS FOR STORE 123, showing standard sales values for each trading slot from 6am to midnight (half-hourly around opening and closing, hourly otherwise) across Monday to Saturday and an estimated Sunday, with under-utilised slots and trading opportunities identified for each day.

For Sunday trading the analysis looked at the sales patterns for those stores that were already trading this day and used their sales data to forecast what might be expected for other similar stores. In the example above, the indications are that the store should trade profitably on a Sunday, particularly after 11am – opening the store on a Sunday would therefore also be recommended. The analysis looked at each store and recommended what their trading patterns should be. For a minority, the recommendation was for no change, but for most a change – generally to later opening and closing – was proposed. Sunday trading was recommended for most stores, although in a few cases it was recommended that stores currently trading on Sunday should stop doing so. Implementing the changes involved potential changes to staff contracts and, in a few cases, contractual negotiations

with landlords. For many organisations these sorts of hurdles might, at best, have led to recommendations being watered down. Vodafone, however, valued the evidence provided and made virtually all the changes recommended within six months of the analysis being completed.

The results were clear that the shift in trading hours had been very successful

A year later, HMcM worked with Vodafone to review the success of the change programme, repeating the analysis with post-implementation sales data to allow comparison with the forecasts from the original models. Clearly many factors can influence

sales performance (including shifts away from high street shopping, the general state of retail spend, the strength of competition and the range of products available) but it was possible to see if the patterns in trading had reduced periods of unprofitable trading and taken advantage of the opportunities. The results were clear that the shift in trading hours had been very successful. Vodafone were happy that their Right People Right Place programme had been successful. HMcM were happy to have been able to see their analysis validated and to have worked for a client which valued and profited from analysis.

Martin Slaughter is Managing Director of Hartley McMaster



ANDREW COOPER


MAXIMISING PERFORMANCE WHILE REDUCING RESOURCES AT LONDON FIRE BRIGADE

© London Fire Brigade

London’s fire and rescue service is one of the biggest and busiest in the world. Planning resources to best meet the risks facing London is a complex task which can be aided by the use of Operational Research techniques. The London Fire and Emergency Planning Authority run the service and in the recent challenging economic circumstances they were faced with significant budget cuts. They needed to

reduce resources while minimising the impact on speed of arrival at emergency incidents, and did so with the support of consultants at ORH (Operational Research in Health Ltd.).

HOW MUCH NEEDED TO BE SAVED?



The Coalition Government, which was in power between 2010 and 2015, introduced a large-scale programme of austerity measures to reduce costs across the public sector, including funding for fire and rescue services. Fire and rescue services are financed through a combination of local (i.e. Council tax) and national funding arrangements and are managed locally.

© ORH and © London Fire Brigade

The result of all this work is a ‘computerised London Fire Brigade’

Within the first year of coming to power, the UK Government asked the fire service nationally to reduce budgets by 25 per cent over the four years to April 2015. The London Fire and Emergency Planning Authority (LFEPA) needed to achieve savings of £29.5 million in 2013-14 and a further £35.5 million in 2014-15 to meet budget targets set by the Mayor of London. Over previous years significant back-office functions had been cut and options for pumping appliance (fire engine) removals and station closures needed to be considered, because they account for most of the LFEPA budget. ORH worked alongside LFEPA officers to identify reductions to front-line services while minimising the impact on attendance performance. At the outset, the scale of the cuts to be found from front-line services was not known. There were 169 pumping appliances and 112 stations but it was necessary to model a wide variety of options for change. Modelling considered deployments down to 84 pumping appliances and 56 stations, well beyond the expected reductions. Separate pieces of work considered the deployment of specialist appliances.

HOW DO ORH MODEL FIRE SERVICES?

ORH is a management consultancy that uses O.R. techniques to carry out resource planning studies for emergency services, health authorities, sports bodies and other public sector organisations. ORH’s clients face many different challenges, but they share a need to optimise performance and deploy valuable resources in the most cost-effective and efficient way. ORH uses sophisticated analysis and modelling techniques to deliver robust consultancy and software solutions that are objective, evidence-based and quantified. ORH has developed planning models specifically for the fire service, including optimisation and simulation models. Incident and response data are obtained from the service and analysed to populate the models. Incident data are analysed to determine the type, location and frequency at which they occur. Response data are used to identify when a pumping appliance was assigned to an incident, mobilised from station, arrived at scene and left the incident. Data related to the availability

of resources are used to identify which resources are available and when. The modelling takes into account the location, availability and capacity of stations and pumping appliances. The incidents attended, the resources required to meet demand at different times of day and the time taken to get to incidents are all built into the model. The model is maintained with up-to-date data about the incidents attended, the policies about which pumping appliances to send and when, as well as other information. It also takes into account simultaneous incidents, large incidents or incidents that require resources over many days. Factoring in the need for training is also an important part of the modelling process. A process of model validation is then undertaken. Travel times are calibrated against actual blue-light journeys and other parameters adjusted to ensure they accurately reflect the operational regime of the service. The models can then be used with confidence to assess changes to the service. The result of all this work is a ‘computerised London Fire Brigade’.

• Run by the London Fire and Emergency Planning Authority (LFEPA)
• One of the largest fire services in the world
• Protect people and property, covering … km of Greater London
• Handled … emergency calls in …

• Management consultancy utilising Operational Research techniques
• Experts in simulation and optimisation modelling
• Specialise in solving complex locational planning problems
• Primarily work for emergency services, health authorities and sports bodies




The optimisation model was set up to minimise average first and second attendance times to potentially serious incidents (non-false alarm incidents receiving two or more pumping appliance attendances). These optimisation criteria were used as they align with LFEPA’s attendance standards, which are:
• Average (mean) 1st attendance within 6 minutes
• Average (mean) 2nd attendance within 8 minutes

A long-term approach is vital to help LFEPA determine options for station and pumping appliance deployments to minimise the impact on attendance times while making the required savings.

The optimisation model was set up to minimise average first and second attendance times to potentially serious incidents

WHY OPTIMISE?

Optimisation modelling was necessary as there are billions of permutations for deploying the varying number of pumping appliances across the varying number of stations. ORH has developed a unique and powerful program to optimise the location and


deployment of emergency response vehicles. OGRE (which stands for “Optimising by Genetic Resource Evolution”) uses a sophisticated genetic algorithm to assess millions of options for station and vehicle placement within minutes. Restrictions and assumptions that were agreed with LFEPA before ORH could commence optimisation included:
• No new locations were considered, only existing stations
• LFEPA identified a few stations that were fixed and could not close for operational reasons
• Stations could have two pumping appliances (if capacity allowed), one, or the station could close
• At least one fire station had to be located in every Borough (33 boroughs in London)
• Redeployments of pumping appliances were allowed, in addition to potential removals/closures

While LFEPA attendance standards are set London-wide, equity of performance is an important issue for the planning of the service, so additional constraints were applied to the modelling, which were:
• The performance of any Borough currently performing outside of target could not deteriorate
• The performance of any Borough currently performing within target could not move outside of target
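To give a flavour of how a genetic algorithm can search a space of deployments, the sketch below evolves a simple allocation of 0, 1 or 2 appliances per station to minimise a demand-weighted first attendance time. It is a toy illustration under assumed inputs (a hypothetical travel-time matrix, demand weights and appliance budget), not ORH’s OGRE model, and it omits the real constraints described above such as fixed stations, Borough coverage and second attendance.

```python
import random

# Toy genetic-algorithm sketch over station deployments (illustrative only).
# Assumes budget <= 2 * number of stations.

def mean_first_attendance(deployment, travel_time, demand):
    """Demand-weighted mean travel time from the nearest open station."""
    open_stations = [s for s, n in enumerate(deployment) if n > 0]
    if not open_stations:
        return float("inf")
    total = sum(w * min(travel_time[s][i] for s in open_stations)
                for i, w in enumerate(demand))
    return total / sum(demand)

def random_deployment(n_stations, budget):
    """Random allocation of 0, 1 or 2 appliances per station within the budget."""
    d = [0] * n_stations
    while sum(d) < budget:
        s = random.randrange(n_stations)
        if d[s] < 2:
            d[s] += 1
    return d

def evolve(travel_time, demand, budget, pop_size=50, generations=200):
    n = len(travel_time)
    pop = [random_deployment(n, budget) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda d: mean_first_attendance(d, travel_time, demand))
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]
            # repair so the appliance budget is respected after crossover
            while sum(child) > budget:
                s = random.choice([i for i, v in enumerate(child) if v > 0])
                child[s] -= 1
            while sum(child) < budget:
                s = random.choice([i for i, v in enumerate(child) if v < 2])
                child[s] += 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda d: mean_first_attendance(d, travel_time, demand))
```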

WHY SIMULATE?

Optimisation modelling was used to identify the best deployment for each combination of pumping appliances and stations. It was then necessary to fully test these deployments using simulation modelling. Model outputs for a range of measures were assessed, such as:
• Average attendance performance
• Percentile targets (e.g. 95th percentile)
• By specific incident type
• By geographical area (Boroughs and Wards)
• Number of incidents attended by station and pumping appliance
• Utilisation of pumping appliances

© London Fire Brigade



The simulated performance results allowed detailed comparisons for the different deployment options.

The approved option was for 155 pumping appliances at 102 stations, a net reduction of 14 pumping appliances and 10 stations

At the outset, the exact budget savings were unknown. During the process, the expected savings from front-line services became clearer and the modelling reflected this, focussing on deployments that would result in savings around the desired level. An iterative process, utilising the ORH optimisation modelling and LFEPA officers’ professional judgment, was used. This meant a large number of deployments were narrowed down to a reduced selection for further modelling. One assumption used in the modelling was that the historical location of incidents informs the location of incidents in the future. Year-on-year analysis of historical demand backs up this assumption, with very similar patterns of incident locations observed. The modelling primarily used the historical location and type of incidents to guide the deployment of resources, but further sensitivity modelling assessed these deployments against a range of other measures, such as:
• Population and specific demographics (e.g. most deprived population in London)
• Landmarks and Listed Buildings
• High Rise locations
• Subsets of incidents (e.g. dwelling fires)
• Changes in demand rates and patterns

WHAT WAS THE OUTCOME?

Following initial consultation with operational staff and the public, the approved option was for 155 pumping appliances at 102 stations, a net reduction of 14 pumping appliances and 10 stations. Five stations gained an appliance (previously one pumping appliance, now two). Based on the modelling outputs, the following changes to attendance times were predicted:
• Average first attendance would increase by 13 seconds
• Average second attendance would increase by 10 seconds
• The number of Boroughs achieving the London-wide target would increase
The objective, transparent approach and review process allowed LFEPA officers to make the decisions with confidence. The proposed deployment was put to public consultation and then successfully defended at a judicial review.

The proposed deployment was put to public consultation and then successfully defended at a judicial review

These changes to the pumping appliance and station deployments were included in LFEPA’s Fifth London Safety Plan and the changes were successfully implemented on 9 January 2014. The operational changes,

together with other savings, enabled LFEPA to deliver a balanced budget going forward. A large number of temporary deployments have been in place since 2014, which causes difficulty when comparing predicted and actual attendance time impacts. However, in the areas where an appropriate comparison is possible, there is a close correspondence.

WHAT THE CLIENT SAID

Ron Dobson, London Fire Commissioner, said: “Since 2004, we have worked with ORH to find solutions to the issues we face. This has included modelling alternative locations for fire stations and fire appliances, developing training schedules and identifying suitable contingency plans. ORH have provided some unique insights into our work and the services we deliver, and the modelling they do is now an essential part of our planning for the location of resources that need to respond to emergency incidents. Over the ten or more years that we have worked together, the number of fires, fire casualties and emergency incidents has fallen considerably, which provides a further challenge. The robustness of the ORH modelling approach was one key contributor to being able to successfully resist the judicial review on the Fifth London Safety Plan, and allow us to implement the changes needed.”

Andrew Cooper is a consultant at ORH, a role that includes carrying out simulation and optimisation modelling, and managing projects for a number of ORH’s fire and rescue clients.



DAVID LANE, EILEEN MUNRO AND ELKE HUSEMANN


THE CHILD PROTECTION JIGSAW

© RooM the Agency / Alamy Stock Photo

Child protection is difficult. The broad aspiration is to make children safe, to try to ensure their welfare and development. Children’s Social Care departments are central to this but, for example, police, schools, general practitioners, health visitors, family courts and a range of charities all have roles to play in child protection. Having all those different agencies makes things complicated. The work itself is demanding. For example, scrutinizing possible cases of child maltreatment is an important aspect of child protection work. To do this,

child and family social workers must confront two key questions - both difficult to answer. First: is the child actually being maltreated? Children get bruises all the time – she wasn’t pushed, she fell. But parents or carers who harm their children sometimes tell lies. Children also lie sometimes; to protect their parents they may conceal the harm that they are experiencing. Consequently, understanding what is going on and why is hard; abuse can be missed. Even if you are sure there is abuse you confront the second question:



what is the best thing to do? Removing a child is the obvious step. However, outcomes in care are not always good and disrupting a childhood and breaking up a family is a very significant step. The alternative is to work with the family to make it safe for a child to remain. And what if you fail? In 2010 morale amongst child and family social workers was poor and public esteem for the profession was at a low point. The Government believed that even though previous reviews had been well-intentioned, they had not worked. It therefore launched a review of the child protection system in England. The ‘Munro Review of Child Protection’ was based in the Department for Education - which oversees child protection in England. It had a ‘Reference Group’ of people from relevant professions (health, social work, judiciary, etc.). A core element of the resulting work was the wish to understand past and potential future policies in an holistic way. This was approached using Systems Thinking. The following sections give a sense of the central role played in the review by a combination of Systems Thinking ideas.

IS THIS HOW WE GOT HERE?

Things started with a broad view of the sector, with the aim of understanding how the existing situation came about. Comment elicited from the Reference Group, published papers, and interviews with experts were all used, as well as the range of evidence collected specifically for the review. The story that emerged was visualised using a causal loop diagram derived from System Dynamics modelling and a range of other Systems Thinking ideas. The resulting diagnosis is shown in the box.

What Systems Thinking helped bring to light was that applying a prescriptive approach to child protection had led to the emergence of a ‘tick-box culture’ of compliance. A detailed version of this diagnosis was made public in the review’s first report. There was a good response to this from staff working in the sector. Many commented that the account chimed with their lived experience.

SYSTEMS THINKING-BASED ACCOUNT OF HOW AN ADDICTION TO COMPLIANCE EMERGED IN CHILD PROTECTION

A strong belief that a prescriptive approach for child protection would be effective led to the creation of procedures, i.e. detailed rules for undertaking tasks. In System Dynamics terms this was a powerful balancing loop getting compliance up to the desired level. In organizational learning terms, ensuring compliance with procedures is ‘single loop learning’, or ‘Doing it right’. However, more procedures meant less scope for using professional judgement and that resulted in ‘unintended consequences’. Reduced staff satisfaction and increased staff turnover were among these ‘ripple effects’. The cybernetic concept of ‘requisite variety’ proved useful: staff with less scope for tailoring their interventions produced lower quality help for children. That response could have produced a correction effect via a second balancing loop. What might have resulted was a questioning of the effectiveness of a prescriptive approach, or ‘double loop learning’ as the sector wondered whether it was ‘Doing the right thing’. However, that did not happen. The very existence of procedures allowed the ‘we followed the rules’ response. This ‘defensive routine’ reduced the ability to acknowledge less effective work - so the learning loop was not working. Worse still, the temptation was to introduce more procedures and so a ‘vicious circle’ emerged, a reinforcing loop (see Figure 1). The sector experienced an upwards spiralling use of prescription combined with a downwards spiralling ability to see the deficiencies of that approach. This was ‘organisational addiction’.

The diagram links the following variables, with links marked ‘s’ or ‘o’ and a reinforcing loop marked ‘R’: Perceived Procedural Effectiveness of Prescriptive Approach; Target Level of Procedural Prescription; Compliance Enforcement; Compliance with Prescriptions; Scope for Dealing With Variety of Circumstances Using Professional Judgement; Ability to Acknowledge Errors; and Availability of ‘We just followed the rules’ Defence.

FIGURE 1 CAUSAL LOOP DIAGRAM ILLUSTRATING THE ‘COMPLIANCE ADDICTION’ PHENOMENON. Links marked ‘s’ produce changes in the same direction whilst ‘o’ links produce changes in the opposite direction. The result is a positive feedback loop, or reinforcing effect.

© David Lane and Elke Husemann

UNDERSTANDING THE EXISTING JIGSAW

Moving from this broad view, we created a detailed map of the jigsaw pieces that had made up the policies and operations of the sector in recent decades. This involved a ‘group model building’ approach. In a sequence of meetings we worked with a group of professionals and specialists in child protection. Participants sat in a room discussing the sector. An evolving systems model was projected onto a large screen. The aim was to express the contributions that participants made. The job was to ask questions and to represent in the model what was said. The participants talked with each other and referred to the projected model, discussing where it was right, where it was wrong, asking for changes and clarifications. This approach continued over a number of sessions and resulted in a map of almost 60 variables.

In 2010 morale amongst child and family social workers was poor and public esteem for the profession was at a low point

Whilst the knowledge elicitation and mapping process involved a lot of iterations and re-visiting of issues, there was a general sequence of analysis throughout. That was structured around a series of questions applied to different aspects of child protection, as follows.
1. What problems had the sector experienced in the past?
2. What policies were implemented to address those problems?
3. What were the anticipated effects of those policies?
4. What were the ripple effects, or unanticipated consequences of those policies?
5. What were the feedback loops produced by those policies?
6. Is there qualitative/quantitative data to support or challenge the previous answers?
7. Which feedback loops dominate? Which are dormant?
8. What can be deduced about the key drivers of the sector?

© latham & holmes / Alamy Stock Photo

We applied this sequence across the range of current operations, looking at time spent with families, quality of outcomes for children, staff skills levels, recruitment, workloads, sickness rates and a plethora of other factors. The map helped the group manage and think about the connections within and between the different areas, to see both the pieces of the jigsaw and how they fitted together. The ‘big picture’ that emerged was not encouraging. What the map told us about the state of the sector can be expressed in terms of feedback loops.

What Systems Thinking helped bring to light was that applying a prescriptive approach to child protection had led to the emergence of a ‘tick-box culture’ of compliance

CHANGING THE FACTS ON THE GROUND

Responding for the Government, the Parliamentary Under-Secretary of State for Children and Families accepted the recommendations, saying:


Image: Contains public sector information licensed under the Open Government Licence v3.0: www.nationalarchives.gov.uk/doc/open-government-licence/version/3/

“Moving away from a culture of compliance by reducing central prescription and placing a greater emphasis on the appropriate exercise of professional judgment represents a fundamental system-wide change.”

Consequently, the recommendations have been implemented via changes in the law, changes in the inspection regime and changes in the culture of the child protection sector. For example:
• The status of the profession needed improving and public understanding of its work needed addressing. To help here we recommended that the post ‘Chief Social Worker for Children’ be created. This was done and the post has been filled.
• Child protection is inspected by the Office for Standards in Education, Children’s Services and Skills. Ofsted published a new inspection approach, ‘Framework for the inspection of local authority arrangements for the protection of children’. This significantly reduces the auditing of processes, instead judging whether children are receiving practical help.
• The Department for Education has now published new statutory guidance, ‘Working Together to Safeguard Children’. There is less prescription. Instead, much more weight is given to the role of professional judgement.

David Lane is Professor of Business Informatics at Henley Business School. Eileen Munro is Professor of Social Policy at the London School of Economics. Elke Husemann specialises in analysis and education based on systems approaches.

A MORE DETAILED ACCOUNT OF THE WORK MAY BE FOUND AT D.C. Lane, E. Munro, E. Husemann (2016), Blending systems thinking approaches for organisational analysis: Reviewing child protection in England, European Journal of Operational Research 251 (2), 613-623. The official Government reports relating to this work may be found at: www.gov.uk/government/collections/munro-review

Social workers are now encouraged to spend more time with children and families, building relationships, applying their continuously developing expertise and using their judgements - for example, to answer the two key questions at the start of this article. With the addition of such new pieces of the jigsaw, a different picture is beginning to emerge - and this was only possible using Systems Thinking.




GRAPH COLOURING: AN ANCIENT PROBLEM WITH MODERN APPLICATIONS

RHYD LEWIS

AS SOMEONE WHO has been colour-blind since birth – trying to tell the difference between blue and purple is a particularly difficult challenge for me – friends of mine always chuckle when I tell them that I spend my research time at Cardiff University studying graph colouring problems. Graph colouring has its origins in the work of Francis Guthrie who, while studying at University College London in 1852, noticed that no more than four colours ever seemed necessary to colour a map while ensuring that neighbouring regions received different colours. Guthrie passed this observation on to his brother Frederick who, in turn, passed it on to his mathematics tutor Augustus De Morgan who was unable to provide a conclusive proof for the conjecture. Indeed, despite having no real practical significance (cartographers are usually happy to use more than four different colours of ink) the problem captured

the interests of, and ultimately stumped, many notable mathematicians of the time, including William Hamilton, Arthur Cayley, Charles Peirce, and Alfred Kempe. In fact, it would eventually take more than 120 years, and the considerable use of large-scale 1970s computing resources, to prove conclusively what is now known as the Four Colour Theorem. To achieve this proof, one early but very important insight was to note that any map can be converted into a corresponding planar graph. A graph is simply an object comprising some “vertices” (nodes), some of which are linked by “edges” (lines); a planar graph is a special type of graph that can be drawn on a piece of paper so that none of the edges cross. To illustrate, consider the map of Wales in Figure 1(a). In Figure 1(b) we produce the corresponding planar graph by substituting each region in the map (plus England and the Irish Sea) with a vertex and then adding edges between pairs of vertices whose corresponding regions share a border.




Having produced our planar graph, our task is now merely to colour the vertices appropriately. That is, we want to use the minimum number of colours (and no more than four) to colour all of the vertices so that those connected by an edge receive different colours. These colours are then transferred back to the map, as shown in Figure 1(c).

© Rhyd Lewis

FIGURE 1 - HOW A MAP CAN BE CONVERTED INTO A PLANAR GRAPH. A COLOURING OF THIS PLANAR GRAPH CAN THEN BE USED TO COLOUR THE REGIONS OF THE MAP.
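As a concrete illustration of colouring a graph like this, the short sketch below applies a simple greedy heuristic: visit the vertices in order of decreasing degree and give each one the lowest-numbered colour not already used by a neighbour. The adjacency list is a small, hypothetical fragment rather than the full map in Figure 1, and greedy colouring is not guaranteed to use the minimum number of colours.

```python
def greedy_colouring(adjacency):
    """Give each vertex the smallest colour not used by an already-coloured neighbour."""
    colours = {}
    # visiting higher-degree vertices first tends to need fewer colours
    for v in sorted(adjacency, key=lambda u: len(adjacency[u]), reverse=True):
        used = {colours[u] for u in adjacency[v] if u in colours}
        colours[v] = next(c for c in range(len(adjacency)) if c not in used)
    return colours

# Hypothetical border relationships between a few regions (not the full Figure 1 graph)
regions = {
    "Gwynedd": ["Conwy", "Powys", "Ceredigion"],
    "Conwy": ["Gwynedd", "Denbighshire", "Powys"],
    "Denbighshire": ["Conwy", "Powys"],
    "Powys": ["Gwynedd", "Conwy", "Denbighshire", "Ceredigion"],
    "Ceredigion": ["Gwynedd", "Powys"],
}

print(greedy_colouring(regions))
# three colours suffice here: {'Powys': 0, 'Gwynedd': 1, 'Conwy': 2, 'Denbighshire': 1, 'Ceredigion': 2}
```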

One of the best known applications of graph colouring arises in the production of timetables

Sounds easy doesn’t it? But how do we go about producing such a colouring? Moreover, what should we do if we are asked to colour more complicated graphs, such as those that are not planar and where more than four colours are needed? Is there any easy set of steps that we might follow to colour any graph using the minimum possible number of colours? The short answer to the last question is almost certainly “no”. Graph colouring is known to be NP-complete in general, meaning that an efficient exact algorithm for the problem almost certainly does not exist. However, graph colouring is definitely a problem


that needs to be tackled effectively because it is known to underpin a wide variety of real-world operational research problems. Let’s now look at some of these. One of the best known applications of graph colouring arises in the production of timetables. Imagine a college where we need to assign lectures to timeslots. Imagine further that some pairs of lectures conflict in that they cannot be scheduled to the same timeslot – perhaps because some student wants to attend both of them. Figure 2(a) demonstrates this. We see, for example, that the Algebra lecture conflicts with Probability Theory, Graph Theory, and (for some reason) French Literature. As a result, in the corresponding graph edges are placed between the associated pairs of vertices, as shown in Figure 2(b). A feasible timetable for this problem can then be achieved by producing a colouring of this graph and mapping the colours to timeslots, as shown in Figure 2(c). The ordering of the timeslots and assignment of lectures to rooms can be taken care of by separate processes. In reality, timetabling problems usually feature other requirements on top of avoiding clashes. These might include ensuring students’ lectures are not too “clumped together”, making sure students have a lunch hour, and obeying various staff preferences.

Such techniques have been used to produce professional rugby schedules in both Wales and New Zealand

Often, automated timetabling methods therefore operate by first producing a graph colouring solution using the required number of colours, and then use specialised operators to “move” from one valid colouring to another to try and remove violations of the remaining constraints. (This has certainly been a fruitful strategy in many of the International Timetabling Competitions held over the years). Related to educational timetabling is the problem of constructing sports leagues. In a typical Sunday football league you have a collection of teams, all of whom will want to play each other twice during the year. Setting up this sort of round-robin league is fairly straightforward; but what happens if other constraints are also introduced? For example, nowadays it is common for different teams to share pitches, so we will often see constraints such as, “If team A is playing at home, then team B must play away”. Certain fixtures, such as derby matches, might also need to be scheduled to specific weekends to maximise crowd numbers and so on. In fact, these more complicated problems can be modelled through graph colouring in a similar way to university timetabling. Indeed, such techniques have been used to produce professional rugby schedules in both Wales and New Zealand.
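The clash-avoidance step described above amounts to building a conflict graph and colouring it. The fragment below sketches that construction from a small, made-up set of student enrolments; it reuses the greedy_colouring function from the earlier sketch and simply reads each colour as a timeslot (room allocation and the softer constraints are left to separate processes, as noted above).

```python
from itertools import combinations

# Hypothetical enrolment data: each student lists the lectures they attend
enrolments = {
    "Amy": ["Algebra", "Probability Theory", "French Literature"],
    "Ben": ["Algebra", "Graph Theory"],
    "Cai": ["Probability Theory", "Graph Theory"],
}

# Build the conflict graph: an edge between two lectures means a shared student
conflicts = {lec: set() for lecs in enrolments.values() for lec in lecs}
for lecs in enrolments.values():
    for a, b in combinations(lecs, 2):
        conflicts[a].add(b)
        conflicts[b].add(a)

timeslots = greedy_colouring(conflicts)  # colour number = timeslot number
print(timeslots)
```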

© Rhyd Lewis

FIGURE 2 - HOW A TIMETABLING PROBLEM CAN BE CONVERTED INTO A CORRESPONDING GRAPH COLOURING PROBLEM.

Another place where graph colouring is useful is in solving Sudoku puzzles. I expect most of you know


© Rhyd Lewis

the rules of Sudoku. Whether you love them or hate them, a good Sudoku puzzle should supply sufficient clues so that it is logic solvable. In other words, the filled-in cells supplied by the puzzle master should allow you to complete the puzzle without guessing at any point. Graph colouring algorithms are actually very effective at solving Sudoku puzzles, whether they are logic solvable or not. To do this, as Figure 3 shows using a mini-Sudoku puzzle, we simply map each cell in our grid to a vertex and then add edges between any vertex pairs in the same column, row, or box.

© Daniel Williams

Graph colouring methods form the basis of the optimisation algorithm used on the website www.weddingseatplanner.com

If some cells are filled, we can then also add additional edges – for example, in the figure an edge is also placed between the two vertices whose cells contain a “3”. Once we have coloured this graph with the minimum possible number of colours (four in this case), the colours of the vertices then specify the solution. Happily, some of my own work has shown that graph colouring algorithms can solve very large and difficult Sudoku grids in a matter of milliseconds. Perhaps our days of solving them by hand are over… We can also design seating plans using graph colouring. Imagine that a school fills its sports hall with desks for the end-of-year exams. To stop copying, the school also wants to ban students from sitting to the left, right, in front of, or behind another student
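The row/column/box construction just described is easy to write down. The sketch below builds that graph for a 4x4 mini-Sudoku; it is illustrative only, and clue cells would then be handled as the article describes, with further edges or pre-assigned colours, before the graph is coloured with four colours.

```python
from itertools import combinations

def mini_sudoku_graph(n=4, box=2):
    """Vertices are cells; edges join cells sharing a row, column or box."""
    cells = [(r, c) for r in range(n) for c in range(n)]
    edges = set()
    for (r1, c1), (r2, c2) in combinations(cells, 2):
        same_row = r1 == r2
        same_col = c1 == c2
        same_box = (r1 // box, c1 // box) == (r2 // box, c2 // box)
        if same_row or same_col or same_box:
            edges.add(((r1, c1), (r2, c2)))
    return cells, edges

cells, edges = mini_sudoku_graph()
print(len(cells), len(edges))  # 16 cells; each clashes with 7 others, giving 56 edges
```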

FIGURE 3 - HOW A SUDOKU PUZZLE CAN BE CONVERTED INTO A CORRESPONDING GRAPH COLOURING PROBLEM.

sitting the same paper. The school could go even further and place bans on the diagonals too. The result is a graph colouring problem where the minimum number of colours in the solution indicates the minimum number of different exams that can take place in the venue at the same time. Software for specifying such problems was actually commissioned in 2014 by the Cardiff School of Social Sciences to help them set up controlled experiments using human participants in computing laboratories (see Figure 4). From a different perspective, now imagine you are in charge of organising a wedding dinner where the guests need to be allocated to tables. Some guests, such as families, will need to be

put on the same table, whereas others (divorcees, and so on) might need to be kept apart. How can you keep everyone happy without it turning into a logistical nightmare? Fortunately, graph colouring also lies at the heart of this problem. Indeed, graph colouring methods form the basis of the optimisation algorithm used on the website www.weddingseatplanner.com where such plans can be created for free. The examples discussed in this article are just a taste of where graph colouring problems can arise in the real world. You may have noticed that at various points in this article I have said something along the lines of “we then produce an appropriate colouring of the graph”. Perhaps you are now wondering how we actually go about doing this. I have deliberately avoided this pressing issue here, but for now let’s just say that unless our graph is very small or belongs to a rather narrow set of types, then we will usually have to resort to heuristic-based approximation algorithms. Many examples of these can be found in my recent book A Guide to Graph Colouring: Algorithms and Applications (2015).

FIGURE 4 - SCREENSHOT OF THE SEATING ALLOCATION TOOL (DESIGNED BY DANIEL WILLIAMS, SCHOOL OF COMPUTING, UNIVERSITY OF SOUTH WALES).



ADVERTISE IN IMPACT MAGAZINE

The OR Society are delighted to offer the opportunity to advertise in the pages of Impact. The magazine is freely available at www.theorsociety.com and reaches a large audience of practitioners and researchers across the O.R. field, and potential users of O.R. If you would like further information, please email: advertising@palgrave.com.

Rates
Inside full page: £1000
Inside half page: £700
Outside back cover, full page: £1250
Inside back cover, full page: £1200
Inside front cover, full page: £1200
Inside full page 2, opposite contents: £1150
Inside full page, designated/preferred placing: £1100

Full Page Adverts must be 216x286mm to account for bleed, with 300dpi minimum.


© Rhyd Lewis

Rhyd Lewis is a lecturer in Operational Research at the School of Mathematics, Cardiff University. He continues to confuse blue and purple.

Cover courtesy of Springer



GUE??TIMATE THAT!

Geoff Royston

ENRICO’S ENVELOPES

The Nobel laureate Enrico Fermi, as well as being famous for leading the team that developed the world’s first nuclear reactor, was also well known for being good with envelopes. Or, more exactly, for a good use of the backs of them. Although Fermi had outstanding mathematical ability, he always tried to use the simplest approach that would suffice for solving problems. For example, when he was observing the first atomic bomb test in July 1945, he estimated the bomb’s yield by dropping strips of paper into the blast wave, pacing off how far they were blown by the explosion, and thereby calculating the yield (as ten kilotons of TNT, which was accurate to within a factor of two). Fermi was renowned for his facility for using back-of-the-envelope calculations to get surprisingly good approximate estimates for complex quantities. His basic approach was to break down a complex calculation into simpler elements that could be guesstimated and to then combine these guesstimates to produce the overall result. Perhaps inevitably, given his track record in having things named after him (a whole class of atomic particles and a new element!), estimates obtained in this way are now known as “Fermi estimates”. The method often works remarkably well, partly because errors in estimating the individual components tend often to cancel out.

GUE??TIMATION

The Fermi approach has recently been popularised by the book, gue??timation by Lawrence Weinstein and John Adam (and its follow-up gue??timation 2.0). Sub-titled “Solving the world’s problems on the back of a cocktail napkin” the book illustrates the approach through considering often

whimsical questions such as “How many piano tuners are there in Los Angeles?”, “How long would it take a tap to fill the (inverted) dome of St Paul’s Cathedral with water?”, or “How many golf balls would it take to circle the equator?”. But it has a serious purpose; unlocking the power of approximation through breaking down complex problems into simpler parts to obtain ballpark estimates. The method comprises three basic steps for making a quantitative estimate of something:

Step 1. If you can, write down the answer straightaway! For some problems you will be able to give an estimate that is good enough e.g. to make a decision.

Step 2. If you can’t make an estimate, break down the problem into smaller pieces and estimate each piece to within a factor of ten (if the smaller pieces are still too complicated, break them down further).

Step 3. Recombine the pieces, (generally) multiplying them together to get the overall estimate.

In estimating the size of the pieces the book points out that it is often better to do this via estimating upper and lower limits for them than by direct estimation. For example, if you wanted to estimate the time an average mobile phone user spends per day talking or texting on their mobile phone, you would probably not know if it was 10, 40 or 120 minutes, but you could be confident it would be more than 1 minute and less than 1000 minutes. A good way of using such upper and lower limits to get an estimate of an average is to take their geometric mean, which in this case is about 30 minutes.

Here is a simple worked example based on one in the book: “How many people worldwide are talking or texting on their mobile phones at this instant?” Breaking this down, the answer must depend on the world population, the proportion who are mobile phone users, and the time they spend on their phones. The proportion of people using their mobile phones at any instant is the same as the proportion of the time an average person spends using a mobile phone. So the number we want is world population x proportion of population who use mobile phones x proportion of time spent by users on their phones. Taking:

World population: 7 x 10^9
Percentage who use mobile phones: 50%
Average minutes per day spent on phone by users: 30
Minutes per day: 24 x 60

Number of people worldwide talking or texting on their mobile phones at this instant = 7 x 10^9 x 0.5 x 30 / (24 x 60) ≈ 70 million.
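The same arithmetic, including the geometric-mean trick for the bounds, fits in a few lines of code. The figures below simply restate the article’s rough assumptions; they are not data.

```python
from math import sqrt

world_population = 7e9
share_with_mobile = 0.5                      # assumed: roughly half the world

# Bounds on daily minutes of talking/texting; the geometric mean gives ~32 minutes
lower_minutes, upper_minutes = 1, 1000
minutes_per_day = sqrt(lower_minutes * upper_minutes)

fraction_of_day_on_phone = minutes_per_day / (24 * 60)
on_the_phone_now = world_population * share_with_mobile * fraction_of_day_on_phone

print(f"{on_the_phone_now:,.0f}")            # roughly 70-80 million people
```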



PRACTICAL, REAL-WORLD VALUE

Fermi estimation is more than a tool for nuclear physicists or a pastime activity for the curious. It has real practical value in a wide range of situations, something recognised by leading management consultancies and software firms who often put guesstimation questions to applicants to test their ability to think logically, reason quantitatively, and take a flexible approach to tackling imprecise real-world problems. Certainly I found it valuable in work as an analyst in operational research. For example, in the early days of the telephone helpline service NHS Direct (now NHS 111) there was a question about whether the service reduced (by diverting cases) or added to (by prompting extra referrals) the workload of G.P.s. Academic studies were undertaken with (in part) a view to testing this, but a simple Fermi-type calculation showed that they would not be able to detect any resulting shift (up or down) in G.P. workload as even the extreme upper bound for any possible effect was below the detection limit for the study design used; a totally different design was required.

ARE WE ALONE IN THE UNIVERSE?

A piece about the Fermi method should not end without mentioning what is perhaps its most famous (or infamous) example: the Drake equation. This was the approach proposed in 1961 by the astronomer Frank Drake for estimating the number of technological civilizations that might exist in our galaxy. He noted that this number would be the product of various factors, suggesting the equation:

N = R x fp x ne x fl x fi x fc x L

where
N = The number of civilizations in the Milky Way galaxy whose electromagnetic emissions are detectable.
R = The rate of formation of stars in the galaxy suitable for the development of intelligent life.
fp = The fraction of those stars with planetary systems.
ne = The number of planets, per solar system, with an environment suitable for life.
fl = The fraction of suitable planets on which life actually appears.
fi = The fraction of life bearing planets on which intelligent life emerges.

fc = The fraction of civilizations that develop a technology that releases detectable signs of their existence into space.
L = The length of time such civilizations release detectable signals into space.

Drake designed his equation to stimulate discussion and research rather than to provide an actual credible number (uncertainty in the values of the parameters mean that estimates for N vary between 1 and tens of millions!). It has certainly been successful in that aim, which illustrates the important point that the Fermi approach can be useful in providing valuable insights about key components of a problem even when producing a usable quantitative estimate is beyond its reach.

FERMI’S ROYAL ROAD TO GUESSTIMATION

We live in a world where people commonly confuse or conflate millions and billions, often have wildly inaccurate ideas about the size of important quantities (see a previous article of mine in Impact) and frequently make key judgements and decisions in a numerical fog. In such a world, the skill of guesstimation – which might be described as getting things right to at least within a factor of ten – is not to be sniffed at. Fermi’s back-of-the-envelope method offers a royal road to guesstimation and books like gue??timation offer a ready and entertaining path to mastering that skill.


Dr Geoff Royston is a former president of the O.R. Society and a former chair of the UK Government Operational Research Service. He was head of strategic analysis and operational research in the Department of Health for England, where for almost two decades he was the professional lead for a large group of health analysts.

Courtesy of Princeton University Press

The gue??timation books handle a raft of problems by this kind of approach ranging from “How many batteries would it take to replace your car fuel tank?” to “Which is more powerful per pound, the Sun or a gerbil?”.


Wiley Encyclopedia of Operations Research and Management Science

Editor-in-Chief: James J. Cochran, Professor of Applied Statistics and Rogers-Spivey Faculty Fellow at The University of Alabama

• The only cohesive eight volume major reference resource devoted to the field of operations research/ management science theory, methodology and applications • Features an Editorial Board comprised of experts in the field who have vast experience in academia, industry, and government

EORMS

“Highly recommended for universities, businesses and other large groups that need a comprehensive OR/MS reference in a central location.” (Interfaces)

ISBN: 9780470400531

• Organized alphabetically according to a hierarchical article structure designed to make its content useful and accessible to the widest possible readership • Available as an online resource which has been updated quarterly since 2011

Visit wiley.com/go/eorms for sample articles


The Annual Analytics Summit 2016

Plenary Speaker: Megan Lucero, The Times Data Journalism Editor

Megan Lucero led the development of The Times’ and Sunday Times’ first data journalism team from a small supporting unit into a key component of Times investigation. Megan also headed The Times’ political data unit that rejected polling data ahead of the UK’s last general election and is at work on a project looking at data relating to the upcoming ‘Brexit’ referendum on EU Membership.

NEW FOR 2016 – to aid delegates’ learning the format for this year’s event will take the case studies presented in the morning and explore some of the techniques and technologies used in the projects in the afternoon sessions.

Main Speakers:
Andy Hamflett (AAM Consultants) and Simon Raper (Coppelia Machine Learning & Analytics) on the Trussell Trust project using geospatial analysis and predictive modelling for food banks.
David Goody (Dept. for Education) on the Future At Risk Analytics project helping manage the financial risks of 5,000 academies.
Pete Williams (Marks & Spencer) on embedding analytics at the heart of M&S.

Tuesday 21 June 2016, 9:15am to 5:00pm
Tavistock Square, London WC1H 9JP

Booking now open: £150 + VAT (early bird rate £100 + VAT)
www.theorsociety.com/analytics-summit

