IMPACT Spring 2025

Page 1


2024 PRESIDENT’S MEDAL

Balancing Act: Joint XAI/OR for Power Grid Management

2024 LYN THOMAS IMPACT MEDAL

Impactful work at the crossroads between OR, ML and AI

OPTIMISATION FOR SURVIVAL

Constraint Programming in Captive Breeding Design

LINKING UP FOR RESILIENCE

Simulation-supported Supply Chain Restructuring for SME Survival

Our events fully explore the disciplines of operational research, analytics and data science. We also run specialist events that focus on simulation, artificial intelligence, defence and analytics, and more. Our events are open to all, with members receiving preferential rates.

The OR Society puts on a full calendar of events and workshops to help you further your career and:

• Connect and network

• Share knowledge and research

• Expand your skillset

• Find new ways of tackling real-world problems

Become part of our community of excellence

www.theorsociety.com/events

EDITORIAL

This issue will be of help to anyone out there who might still need to be convinced of the ever-tighter connections between OR, Machine Learning (ML) and Artificial Intelligence (AI), and the benefits of their joint adoption in support of effective decision making.

In the pages that follow, interwoven deployment of OR/ML/AI is ubiquitous. In a conversation with Professor Bart Baesens (2024 Lyn Thomas Impact Medal), we explore his impactful work of more than two decades in areas such as Credit Risk Modelling and Fraud Detection, and learn of BlueCourses, an AI/OR knowledge dissemination initiative, and its ocean clean-up events. Smith Institute’s article presents their collaboration with the National Energy System Operator (NESO) involving joint use of OR and explainable AI (XAI) in keeping the lights on across Great Britain. This work won the OR Society’s President’s Medal 2024. We also learn of how Constraint Programming helps to optimise the design of captive breeding programmes to foster preservation of endangered species, and discuss the importance and practice of feature selection in ML.

Furthermore, we feature more ‘classic’ uses of OR. We look at simulation-based supply chain reconfiguration at a Turkish SME operating in the textile sector, learn of the challenges of efficiently solving the Bike Rebalancing Problem at scale, and of a recent pro bono engagement with a charity supporting families with disabled children. Finally, our column by Geoff Royston reminds us all of the crucial role of disciplined and deliberate approaches to creativity in the improvement science we call OR.

I hope you enjoy reading this issue as much as we did when putting it together!

Maurizio Tomasella

The OR Society is the trading name of the Operational Research Society, which is a registered charity and a company limited by guarantee.

Seymour House, 12 Edward Street, Birmingham, B1 2RX, UK

Tel: + 44 (0)121 233 9300, Email: email@theorsociety.com

Executive Director: Colette Fletcher

President: Sanja Petrovic

Editor: Maurizio Tomasella

Senior Editorial Assistant: Sophie Rouse

Editorial Assistant: Chiara Carparelli
ImpactMagEditorial@theorsociety.com

Print ISSN: 2058-802X Online ISSN: 2058-8038 www.tandfonline.com/timp

Published by Taylor & Francis, an Informa business. All Taylor and Francis Group journals are printed on paper from renewable sources by accredited partners.

OPERATIONAL RESEARCH AND DECISION ANALYTICS

Operational Research (OR) is the discipline of applying appropriate analytical methods to help those who run organisations make better decisions. It’s a ‘real world’ discipline with a focus on improving the complex systems and processes that underpin everyone’s daily life – OR is an improvement science.

For over 70 years, OR has focussed on supporting decision making in a wide range of organisations. It is a major contributor to the development of decision analytics, which has come to prominence because of the availability of big data. Work under the OR label continues, though some prefer names such as business analysis, decision analysis, analytics or management science. Whatever the name, OR analysts seek to work in partnership with managers and decision makers to achieve desirable outcomes that are informed and evidence-based.

As the world has become more complex, problems tougher to solve using gut-feel alone, and computers increasingly powerful, OR continues to develop new techniques to guide decision-making. The methods used are typically quantitative, tempered with problem structuring methods to resolve problems that have multiple stakeholders and conflicting objectives.

Impact aims to encourage further use of OR by demonstrating the value of these techniques in every kind of organisation – large and small, private and public, for-profit and not-for-profit. To find out more about how decision analytics could help your organisation make more informed decisions see https://www.theorsociety.com/ORS/About-OR/OR-in-Business.aspx. The OR Society is the home to the science + art of problem solving.

2024 LYN THOMAS IMPACT MEDAL: PROFESSOR BART BAESENS

In conversation with Professor Bart Baesens, we look at his impactful work of more than two decades, in areas such as Credit Risk Modelling, Fraud Detection, Cultural Heritage and Digital Archiving, and more

CONTACT: UNLOCKING THE POTENTIAL OF OPERATIONAL RESEARCH FOR CHARITIES

Isma Shafqat, The OR Society’s Pro Bono OR Manager, illustrates how OR made a meaningful difference at Contact, a charity supporting families with disabled children

OPTIMISATION FOR SURVIVAL – CAPTIVE BREEDING DESIGN

Brian Clegg unveils how constraint programming helps captive breeding programme designers fight the ongoing biodiversity crisis and foster the preservation of some of the most endangered species

21 BALANCING ACT: THE PARTNERSHIP OF AI AND OPERATIONAL RESEARCH IN MODERN POWER GRID MANAGEMENT

Kieran Kalair, Alex Bowring and Adam Brummitt of the Smith Institute present their award winning project with the National Energy System Operator (NESO) and show how OR, combined with explainable artificial intelligence (XAI) helps to keep the lights on across Great Britain. The project won The OR Society’s 2024 President’s Medal

27 MACHINE LEARNING: WHEN LESS IS MORE

In this article, Tom Murarik of Optrak discusses the practice of Machine Learning (ML) and, within it, the crucial role of feature selection

31 LINKING UP FOR RESILIENCE: SUPPLY CHAIN STRUCTURES AS SME SURVIVAL TOOLS

Mustafa Çagri Gürbüz, Sena Ozdemir, Vania Sena, Oznur Yurt and Wantao Yu introduce us to their work with Turkish SME LUR Textile, and explain how simulation helped them to support their client in reconfiguring their supply chain during Covid times

4 Seen Elsewhere: Analytics making an impact

36 Universities making an impact: Brief report on a student project from LSE

38 Serendipity’s Cousin: Geoff Royston’s column discusses how creativity fits into the OR profession

DISCLAIMER

The Operational Research Society and our publisher Informa UK Limited, trading as Taylor & Francis Group, make every effort to ensure the accuracy of all the information (the “Content”) contained in our publications. However, the Operational Research Society and our publisher Informa UK Limited, trading as Taylor & Francis Group, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by the Operational Research Society or our publisher Informa UK Limited, trading as Taylor & Francis Group. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. The Operational Research Society and our publisher Informa UK Limited, trading as Taylor & Francis Group, shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions

Reusing Articles in this Magazine

All content is published under a Creative Commons Attribution-NonCommercial-NoDerivatives License which permits noncommercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way.

SEEN ELSEWHERE

COPILOTING OPTIMAL DECISION-MAKING

In a recent paper presented at the Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI-25), co-authors from IBM Research and universities including Harvard, MIT, Washington and Amsterdam have moved one step forward in democratising optimisation modelling and validation for organisational decision-making (https://bit.ly/AAAI-25-DOCP). The idea, in itself, is not new. The authors believe the time has come to exploit the latest capabilities of Large Language Models (LLMs) and create a ‘Decision Optimisation Co-pilot (DOCP)’, quoting verbatim “an AI tool that interacts with decision-makers in natural language, leveraging the decision-makers’ knowledge and feedback to generate and solve problem-specific optimisation models”.

In the authors’ vision, the decision-maker would first provide the DOCP with a natural language description of the problem; the DOCP would then generate and validate the optimisation model, run the most appropriate algorithm (perhaps calling on an optimisation engine), and provide actionable recommendations back to the user. Throughout the process, the DOCP would be capable of collecting user feedback and integrating it into one or more of these phases. The language adopted must, in all cases, be easily understood by the business decision-maker.
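To make that loop concrete, here is a deliberately stubbed-out sketch of the interaction cycle in Python. Everything in it (call_llm, build_and_solve, explain, and the example problem) is a hypothetical placeholder invented for illustration; the paper does not publish an implementation or API.

```python
# Conceptual sketch of a DOCP interaction loop; all functions below are
# hypothetical stubs, not a real LLM, solver, or published interface.
def call_llm(prompt: str) -> str:
    """Stub for an LLM call that would turn business language into a model specification."""
    return "maximise weekly profit; decisions: units of A and B; constraint: 40h of machine time"

def build_and_solve(spec: str) -> dict:
    """Stub standing in for model generation plus an optimisation engine."""
    return {"units_A": 10, "units_B": 5, "profit": 1250}

def explain(solution: dict) -> str:
    """Stub: translate the solution back into plain business language."""
    return f"Make {solution['units_A']} of A and {solution['units_B']} of B for a profit of {solution['profit']}."

problem = "We assemble two products on one machine and want the most profitable weekly mix."
feedback = ""
for _ in range(2):  # the decision-maker can correct the model and re-run
    spec = call_llm(problem + " " + feedback)
    print(explain(build_and_solve(spec)))
    feedback = "Also respect a minimum of 3 units of B."
```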

The authors also discuss what they believe are the three main requirements for a successful DOCP. First, the DOCP should be capable of translating a business level problem definition into a mental model of the problem at hand, one that is ‘appropriate enough’ (say ‘valid’) and can be turned into a formal model for scientific investigation. This is envisioned more as a two-way process facilitated by the DOCP, rather than an automatic, unilateral task executed by the AI tool.

Second, the DOCP should facilitate model verification: LLMs are likely to produce models that either yield runtime errors or display more subtle inaccuracies of a semantic nature (which are harder to detect). Decision-makers cannot be expected to validate the implemented optimisation model itself, so the DOCP should take whatever steps it can to reduce errors in the first place, while also supporting the decision-maker in validating the results obtained and providing feedback that the DOCP can use to apply corrections.

Third and last, the DOCP should ensure requisite levels of efficiency in both the model and its solution approach. The NP-hard nature of many (most?) interesting problems might be just as challenging to the DOCP as it is to optimisation specialists. When feasible solutions cannot be attained within reasonable timeframes, more advanced modelling approaches, focused on finding equivalent but more efficient models and solution techniques, ought to be deployed automatically. When off-the-shelf engines are involved, efficient, automatic parameter fine-tuning should take place, thereby shielding the decision-maker from unnecessary (albeit important) complications.

The authors conclude the paper with a call to action to scholars in both AI and OR to come together to further co-develop the DOCP concept. Exciting times, indeed!

PREDICTIVE STORYTELLING

In a recent article in Analytics Magazine (https://bit.ly/Predictive-Storytelling), Arun Prem Sanker, Data Scientist at Stripe and formerly at Amazon, explains how Predictive Storytelling helps “to create evidence-backed, scalable, exact and actionable stories”. He reminds us that the benefits of using Predictive Analytics to generate narratives are well known, with Netflix’s recommendation algorithm (to name but one) driving more than 80% of the material viewed by customers.

The article highlights some of the best practices that help companies make predictive storytelling more readily available, with the generated narratives easier to digest. They include: focusing on the most pertinent information, widespread use of data visualisation, progressive disclosure of facts, and ‘interactive storytelling’ (real-time interaction between consumers and predictive algorithms).

The author also discusses the essential cognitive elements affecting customers’ reactions to data-driven narratives, including personal connection, loss aversion, cognitive ease, and the so-called ‘peak-end rule’ (which rests on the fact that many consumers only remember the most vivid impressions of their experience). And, of course, he delves into the ever more important ethical aspects of prejudice/bias, privacy and the need for transparency and explainability, all of which are currently driving much of the debate around the need for more explainable artificial intelligence (XAI).

AI TOOLS FAIL HISTORY TESTS (FOR NOW)

Many scholars in Humanities and the Social Sciences hold particularly high expectations about the potential for LLMs to transform the way they do research. As it turns out, historians might want to save their best enthusiasm for later days.

A recent research collaboration involving academics from the Universities of Oxford, Vienna, Seattle, as well as UCL, George Brown College, and the Alan Turing Institute (https://bit.ly/HiST-LLM), has thoroughly tested the ‘history knowledge’ of some of the most well-known LLMs.

This kind of benchmarking study is particularly challenging, given the marked imbalance of human knowledge of history, which is skewed towards Western history and more recent periods. The research, presented at the recent Neural Information Processing Systems (NeurIPS) conference in Vancouver, was motivated by the expressed desire to probe the potential of LLMs in supporting historical and archaeological research, two fields where the need to analyse vast amounts of complex and unevenly distributed data frequently arises. The study used a subset of the Seshat Global History Databank, which covers all major world regions, from the Neolithic to the Industrial Revolution, and was put together and validated by historians and their graduate research assistants.

The researchers converted the dataset into multiple-choice questions that asked whether a ‘historical variable’ (for instance, a specific governance structure) was “present,” “absent,” “inferred absent” or “inferred present” over a given time frame and specified region. To assess model performance, the research team adopted a ‘balanced accuracy metric’, to account for the uneven distribution of answers across the dataset. Random guessing would therefore yield a score of 25%, while perfect accuracy would yield 100%. The study also assessed the ability of the tested models to distinguish between facts that are “evidenced” and those “inferred” (this is crucial in historical analysis!).
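For readers who want to see the metric in action, here is a short, self-contained illustration (not the study’s code) of balanced accuracy as the mean of per-class recall, showing why uniform random guessing scores about 0.25 on a four-class task however skewed the classes are.

```python
# Balanced accuracy = average of per-class recall; illustration with synthetic labels.
import random

labels = ["present", "absent", "inferred present", "inferred absent"]
truth = random.choices(labels, weights=[0.6, 0.2, 0.1, 0.1], k=10_000)  # deliberately skewed classes
guess = random.choices(labels, k=10_000)                                # uniform random guessing

def balanced_accuracy(y_true, y_pred):
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))     # recall for class c
    return sum(recalls) / len(recalls)

print(f"balanced accuracy of random guessing: {balanced_accuracy(truth, guess):.3f}")  # about 0.25
```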

A total of seven models from the Gemini, OpenAI, and Llama families were benchmarked, using this data set and four-choice format questions.

Results, in terms of balanced accuracy, range from 33.6% (Llama-3.1-8B) to 46% (GPT-4-Turbo), outperforming random guessing (… phew!) but falling well short of expert-level comprehension. In short: LLMs appear to possess some degree of historical knowledge, but clear margins for improvement remain.

Quoting directly from a recent online article on this same study, authored by Eric W. Dolan and available at https://bit.ly/PsyPost-on-LLMs-inHistory, Jakob Hauser, first author of the study and a resident scientist at the Complexity Science Hub, says: “The Seshat Databank allows us to go beyond ‘general knowledge’ questions. A key component of our benchmark is that we not only test whether these LLMs can identify correct facts, but also explicitly ask whether a fact can be proven or inferred from indirect evidence”. And, from the same source, in the words of UCL Lecturer Maria del Rio-Chanona, the study’s corresponding author: “The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history. They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task”. This is because, as the study reveals, LLMs such as GPT-4 Turbo performed better in a simplified two-choice format testing ‘present’ versus ‘absent’ (accuracy up to 63.2%): LLMs therefore appear able to identify more straightforward facts, while struggling with more nuanced questions. As Peter Turchin (project leader at the Complexity Hub, also a co-author) puts it, the study confirms very clearly that LLM performance is highly domain-specific; whilst LLMs may have been extremely successful in replacing paralegals, their ability to make “judgments about the characteristics of past societies, especially those located outside North America and Western Europe” is still limited. Not surprisingly, one of the most immediate next steps for this research team is to expand the Seshat dataset with more data from under-represented regions and historical periods.

FOSTERING DRIVER RETENTION IN LOGISTICS

Logistics service providers and distribution companies have been struggling with driver retention for some time, with the job market experiencing a significant shortage of this key resource. Experts from leading decision support software firm ORTEC have been looking at this problem. In a paper authored by Chief Science Architect Leendert Kok and colleagues (https://bit.ly/ORTEC-DriverRetention-in-Logistics), published in the Journal of Supply Chain Management, Logistics and Procurement, ORTEC’s experts reflect on ways to boost driver retention other than increasing drivers’ wages (the market’s low margins and heavy competition just don’t make this theoretically appealing solution a feasible one!). The paper shows how driver satisfaction can be enhanced in a way that both retains existing drivers and attracts new (especially young) employees. More precisely, the authors investigate and quantify the benefits of modelling drivers’ needs and wishes directly as part of operational and tactical supply chain planning. The article explores the links between enhanced driver satisfaction, retention and increased operational efficiency, suggesting that, with only a slight increase in operational costs, operational and tactical plans can be developed that are perceived as substantially better by the drivers. The paper also discusses how drivers’ requirements change across industries, including field services, e-grocery, retail and wholesale, and parcel and last-mile delivery.

From a technical standpoint, the authors’ work follows a multi-objective optimisation approach that focuses on fairness (workload balance), allowing for a greater portion of drivers’ preferences to be visible in the staff schedules produced (including preferred days/times/regions/clients), and adding biomathematical fatigue constraints that take into account the drivers’ natural biorhythm (thereby yielding fresher and more alert drivers).
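As a rough illustration of the weighted, fairness-aware formulation described above, the sketch below builds a tiny driver-to-route assignment model in PuLP. It is not ORTEC’s model: the route durations, preference pairs, objective weights and the crude 14-hour ‘fatigue’ cap are all invented for the example.

```python
# Toy multi-objective assignment: cover all routes, balance workload, reward preferences.
import pulp

drivers, routes = ["d1", "d2", "d3"], ["r1", "r2", "r3", "r4", "r5", "r6"]
duration = {"r1": 8, "r2": 6, "r3": 7, "r4": 5, "r5": 9, "r6": 4}          # hours (made up)
prefers = {("d1", "r1"), ("d1", "r2"), ("d2", "r4"), ("d3", "r6")}          # liked routes (made up)

prob = pulp.LpProblem("driver_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(d, r) for d in drivers for r in routes], cat="Binary")
max_load = pulp.LpVariable("max_load", lowBound=0)
min_load = pulp.LpVariable("min_load", lowBound=0)

for r in routes:                                        # every route is covered exactly once
    prob += pulp.lpSum(x[d, r] for d in drivers) == 1
for d in drivers:                                       # track each driver's workload
    load = pulp.lpSum(duration[r] * x[d, r] for r in routes)
    prob += load <= max_load
    prob += load >= min_load
    prob += load <= 14                                  # crude stand-in for a fatigue limit

fairness = max_load - min_load                          # workload imbalance across drivers
preference = pulp.lpSum(x[d, r] for (d, r) in prefers)  # number of satisfied preferences
prob += 1.0 * fairness - 2.0 * preference               # weighted objective; weights illustrative

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for d in drivers:
    print(d, [r for r in routes if x[d, r].value() == 1])
```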

2024 LYN THOMAS IMPACT MEDAL: PROFESSOR BART BAESENS

Hello Bart,

First of all, congratulations on winning the OR Society's Lyn Thomas Impact Medal 2024 for your extensive work on ‘Applications of Machine Learning and Artificial Intelligence’. Bearing in mind part of the remit of the OR Society is to encourage a career in OR, could I start by asking you about your career background, how you got involved in OR and ended up working at the crossroads between OR, ML, and AI?

Thank you so much, it was a real honor to me to have been awarded the Lyn Thomas Medal!

Sure, very happy to introduce myself. I studied for a Master of Business Informatics Engineering at KU Leuven (Belgium) and graduated in 1998. I then pursued a PhD in Applied Economic Sciences entitled ‘Developing Intelligent Systems for Credit Scoring Using Machine Learning Techniques’, which I defended in 2003. In fact, it was during my PhD that I first met Lyn, when attending the Credit Scoring Conference in Edinburgh. Lyn later served as a member of my PhD committee and gave me very useful feedback. It was also Lyn who invited me to become a lecturer at Southampton Business School, where I started in 2004 teaching courses on Analytics, Operations Research, and Management Science. It was exactly this mix of disciplines which I also managed to exploit in my research and its applications in credit risk, fraud detection, and marketing. I currently do research on the development of combined OR, ML, and AI methods, with a particular focus on profitability, interpretability and sustainability. In doing so, I focus both on the development of new algorithms and performance metrics, innovative applications thereof (e.g., in fraud, insurance, digital archiving, HR, and smart agriculture), as well as on identifying new OR/AI-inspired revenue-generating business lines.

Lyn had the amazing talent of glancing through a paper or presentation, understand its key contribution and then ask the most insightful question one could ever imagine. He possessed an innate talent for research, embodying the essence of true scientific inquiry

In a career spanning more than two decades, what are your top three examples of your most impactful OR work so far?

For credit risk modeling, I believe my benchmarking papers and books have been the most impactful. Our most recent benchmarking study of classification algorithms for credit scoring, published in EJOR [1], has gathered more than 1,300 citations thus far and was also awarded the EURO 2017 ‘Best Theory and Methodology Paper’. In fact, at this very moment, we are working on an updated version of it which will also include classification techniques inspired by large language models (LLMs). Furthermore, I would also like to mention our book on Credit Risk Analytics [2], co-authored with Harry Scheule (who recently left us far too soon, unfortunately) and Daniel Roesch, which has been well received by practitioners and academics in the field of financial risk management.

In terms of fraud detection, I am very proud of the GOTCHA! method that was developed together with a PhD researcher and resulted in a Management Science paper [3] published in 2017. The key novelty of the GOTCHA! method is that it looks at networks of entities that may commit or experience fraud, instead of treating them individually. The GOTCHA! propagation algorithm diffuses fraud through the network, labeling the unknown and anticipating future fraud whilst simultaneously decaying the importance of past fraud. This method has been well-received in both academia and industry, where it has been used by both governments and banks.
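The propagation idea can be illustrated with a tiny, generic sketch. To be clear, this is not the published GOTCHA! algorithm: it is a personalised-PageRank-style diffusion of time-decayed fraud labels over a made-up network, showing only the general flavour of network-based scoring.

```python
# Toy network-based fraud score propagation with decay of older evidence.
import numpy as np

# adjacency among 5 entities (e.g. shared addresses, transactions); invented values
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
], dtype=float)
A_norm = A / A.sum(axis=0, keepdims=True)            # column-normalised for propagation

years_since_fraud = np.array([1.0, np.inf, np.inf, 4.0, np.inf])  # inf = never flagged
seed = 0.5 ** years_since_fraud                      # exponential decay: old fraud matters less

alpha, scores = 0.85, seed.copy()
for _ in range(50):                                  # personalised-PageRank-style iteration
    scores = alpha * A_norm @ scores + (1 - alpha) * seed

print(np.round(scores / scores.max(), 2))            # relative exposure of each entity to known fraud
```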

Finally, I would like to mention our BlueCourses initiative which my friend Tim Verdonck and I started in 2019. Our BlueCourses mission statement is twofold. We disseminate knowledge by offering online courses on our expertise: credit risk, fraud, and other AI/OR applications. Next, we contribute to environmental sustainability and invest 20% of our EBIT to firms cleaning up our oceans from plastic. Together with Waste Free Oceans (WFO) and 90 local volunteers, on March 18th, 2023, we sponsored an ocean cleanup of Mbezi beach in Dar es Salaam, one of the most polluted beaches of Tanzania. We have another clean up in the pipeline which we are very excited about.

I think it’s important to deeply and thoroughly understand the business problem first, before you start throwing OR/AI techniques at it

For those who know you well enough, you are a highly appropriate winner of this particular award, having worked very closely with Professor Lyn Thomas until his untimely death in 2016. How did Lyn influence your work as an OR academic, and as a practitioner?

Lyn influenced my work in many ways. His knowledge and passion for OR and credit risk is something I have never seen before. Even at home when watching TV he was thinking about OR. To illustrate this, he wrote a JORS paper that introduces dynamic programming to investigate when contestants should bank their current winnings in the popular TV quiz show ‘The Weakest Link’ [4].

Lyn had the amazing talent of glancing through a paper or presentation, understand its key contribution, and then ask the most insightful question one could ever imagine. It left me baffled many times during our collaboration and conference attendances. Lyn taught me to never get lost in mathematical formulae just for the sake of their complexity but to always keep a close eye on their application and added business value. He possessed an innate talent for research, embodying the essence of true scientific inquiry. In fact, our last conversation took place at the dinner of his Festschrift conference in 2016. It still strikes me up to this very day that he said goodbye with an idea on how to calibrate economic downturn LGD (i.e., Loss Given Default) in a credit risk setting. It says a lot about the amazing person he was.

At a personal level, Lyn taught me to be humble. Despite his scientific ingenuity and accomplishments, he was always very accessible, down to earth, and fun to hang out with. This attitude has left a profound impact on me and is something I aim to pursue myself as well.

… given the pace of modern-day technological developments, it is important to continuously educate yourself, but with a critical mindset. Not every new OR/ AI technique represents meaningful progress for every business application

Perhaps a less known aspect of your OR work is around cultural heritage and digital archiving. Can you share something about that experience as well, please?

Yes, history is my passion. I listen to history podcasts every day before I go to sleep. Right now, I am doing a series on the Tudors, but I am also passionate about the Romans, the French Revolution, Napoleon, and both World Wars. About two years ago, I started a collaboration with the National Museum of the Royal Navy (NMRN) located in Portsmouth. Our first project was to use deep learning techniques such as convolutional neural networks to label black-and-white images of warships as destroyers, cruisers, submarines, frigates, carriers, etc. We got a publication out of it which I was very proud of. Our most recent collaboration (which was published in AI & Society) uses Meta’s open-source LLM, Llama, to generate keywords from curator/archivist-written descriptions of museum and archival collection items. Doing research with the NMRN is something I genuinely enjoy. In fact, I really felt like a kid in a ginormous play garden when visiting their home base in Portsmouth and seeing all the historical ships (e.g., HMS Warrior and HMS Victory, which featured in the recent Napoleon movie) as well as both aircraft carriers that coincidentally were there at the moment.

IMPACT magazine is enthusiastic that our articles should give a short piece of advice to practitioners wanting to use (in this case) the right blend of OR, ML, and AI in their engagement pieces. Any practical tips you can offer to our community of OR practitioners?

Yes, definitely very happy to share some of my insights and ideas. First of all, I think it’s important to deeply and thoroughly understand the business problem first before you start throwing OR/AI techniques at it. Far too often, I witnessed (complex) state-of-the-art OR/AI techniques being implemented but not addressing the business problem that triggered the question, causing a substantial amount of sunk costs.

Next, simplification is key to success. Just because an OR/AI technique is new and sounds fancy doesn’t mean you should start using it ASAP. The OR field in particular has been with us for about a century now, and I still believe there is plenty of room for successful linear and/or dynamic programming, Markov chain, simulation, and/or decision analysis applications across a variety of business settings. I am convinced many of these applications will excel in simplicity and actionability. Finally, given the pace of modern-day technological developments, it is important to continuously educate yourself, but with a critical mindset. Not every new OR/AI technique represents meaningful progress for every business application.

Rather coincidentally, it just came to my mind that if you properly Understand (U) the business problem, Simplify (S) things, and manage to continuously Educate yourself (E), you will undoubtedly succeed in putting AI/OR to very good USE!

… though XAI techniques … are interesting, they are always an approximation of the black-box model they started from

Explainable AI is currently a very hot topic of discussion in the agendas of many folks and organizations. What is your personal take on it, informed by your applied work?

Yes, I very much agree with this. A typical contemporary AI pipeline starts from some business problem and data, preprocesses it, throws an XGBoost-like or deep learning model at it, and then explains it using Shapley values. There are a couple of things I wish to add to this. First of all, when interpretability is important (e.g., in credit risk, fraud, or medical diagnosis), it is key to keep things as simple as possible. Though Shapley values are nowadays the industry standard, they typically lack robustness (e.g., in the presence of multicollinearity) and, to many a business practitioner, they are really not that insightful (e.g., just ask them to properly interpret a Shapley force or summary plot). Counterfactuals have a lot more to offer in that respect, if you ask me, since they give clear recommendations about how to alter the outcome of an AI model in the most desirable and feasible way.
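To make the counterfactual idea concrete, here is a small, self-contained Python sketch that searches for the smallest feasible change flipping a toy credit model’s decision. The model, features and ‘effort’ measure are all invented for illustration and have nothing to do with any production scorecard.

```python
# Brute-force search for a minimal counterfactual on a toy credit model.
import itertools
import math

def credit_model(income, debt_ratio, missed_payments):
    """Toy scoring model standing in for a black-box classifier."""
    score = 0.00004 * income - 3.0 * debt_ratio - 0.3 * missed_payments + 1.0
    return 1 / (1 + math.exp(-score)) >= 0.5          # True = approve

applicant = {"income": 28_000, "debt_ratio": 0.55, "missed_payments": 2}
print("current decision:", credit_model(**applicant))

# candidate feasible adjustments (only actionable features are perturbed)
income_steps = [0, 2_000, 4_000, 6_000]
debt_steps = [0.0, -0.05, -0.10, -0.15]

best = None
for di, dd in itertools.product(income_steps, debt_steps):
    changed = {**applicant, "income": applicant["income"] + di,
               "debt_ratio": round(applicant["debt_ratio"] + dd, 2)}
    if credit_model(**changed):
        cost = di / 2_000 + abs(dd) / 0.05             # crude measure of effort to change
        if best is None or cost < best[0]:
            best = (cost, changed)

print("smallest counterfactual found:", best[1] if best else "none in the search grid")
```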

Furthermore, an important thing to be aware of is that, though XAI techniques (e.g., Shapley, Lime, counterfactuals, dependence plots, etc.) are interesting, they are always an approximation of the black-box model they started from. Knowing the quality of the approximation is key to knowing how much value should be attached to the interpretation.

Next, far too often XAI techniques are used for confirmatory explanation purposes. In other words, as long as the identified patterns correspond to business intuition, these XAI techniques are deemed trustworthy. However, I believe that more attention should be paid to the unexpected patterns, which might reveal things such as disruptive behavior, or even biases and unwanted discrimination.

Finally, although interpretation is nice, one should be aware that in certain applications (e.g., online advertising) one could be perfectly fine with a high-performing black-box OR/AI model without knowing its internal workings.

… far too often XAI techniques are used for confirmatory explanation purposes … I believe that more attention should be paid to the unexpected patterns

What’s coming next, on your personal, impact-bearing journey? Anything you can anticipate without giving too many secrets away?

Most definitely, I don’t have that many secrets to be honest, I would have to think really hard, but anyway, I suggest I’ll keep the juicy stuff for whoever reads this and buys me a beer when we meet.

I am actually working on two books related to Credit Risk Modeling at this very moment, both to be published by Oxford University Press. They are the result of a 15-year collaboration with my colleague authors. The first one is entitled ‘Data Science – Applications in Credit Risk’. It is the second book in a trilogy that provides a comprehensive overview of building analytical models, with specific applications in credit risk (e.g., LGD modeling). The second book, ‘Advanced Data Science – Applications in Credit Risk’, is the final volume, dedicated to building state-of-the-art analytical models, focusing specifically on advanced applications in credit risk assessment (e.g., validation, backtesting, stress testing). I am happy to add that Lyn wrote a preface for both these books, which we are very happy and honored to include.

To conclude, I hope to further inspire people about the wonderful things OR/AI can do for them, continue and expand my research, and see how it can successfully be applied in emerging and exciting settings (such as history, climate change, smart agriculture, etc.). In other words: I want to simply continue to have fun!

Bart Baesens is a Professor of AI in Business at KU Leuven, and a Lecturer at the University of Southampton. He has done extensive research on data science, AI, credit risk, fraud, and marketing analytics. He has co-authored more than 250 scientific papers and 10 books (with translations into Chinese, Japanese, Korean, Russian, and Kazakh and sales in excess of 40,000 copies worldwide). Bart is the recipient of the OR Society’s Goodeve Medal (2016), the EURO award for best EJOR paper (2014, 2017), and the Lyn Thomas Impact Medal (2024). He was also named one of the World’s top educators in Data Science by CDO magazine in 2021 and 2023.

FOR FURTHER READING

[1] Lessmann, S., B. Baesens, H.V. Seow and L.C. Thomas (2015). Benchmarking state-of-the-art classification algorithms for credit scoring: An update of research. European Journal of Operational Research 247(1): 124–136.

[2] Baesens, B., D. Roesch and H. Scheule (2016). Credit Risk Analytics: Measurement Techniques, Applications, and Examples in SAS. John Wiley & Sons.

[3] Van Vlasselaer, V., T. Eliassi-Rad, L. Akoglu, M. Snoeck and B. Baesens (2017). Gotcha! Network-based fraud detection for social security fraud. Management Science 63(9): 3090–3110.

[4] Thomas, L.C. (2003). The best banking strategy when playing The Weakest Link. Journal of the Operational Research Society 54(7): 747–750.

CONTACT: UNLOCKING THE POTENTIAL OF OPERATIONAL RESEARCH FOR CHARITIES

ISMA SHAFQAT

At Pro Bono OR, we believe that Operational Research (OR) can be a powerful tool for third-sector organisations, helping them achieve their missions more effectively and efficiently. As a volunteer-driven initiative, we provide charities with the analytical expertise they need to make data-driven decisions, streamline processes, and enhance their impact. Our volunteers donate their time and skills to assist in tackling complex challenges, ensuring that charities can make the most of their resources, improve services, and achieve greater outcomes.

One of the key benefits of OR is its ability to address a wide range of organisational needs, whether it's improving operational efficiency, refining decision-making processes, or providing actionable insights from data. The flexibility of our approach allows us to tailor solutions to each organisation’s specific context and objectives. For example, charities might require assistance with strategic planning, data analysis, process improvement, or measuring the impact of their services. Whatever the requirement, Pro Bono OR brings expert, objective help to the table.

A recent case study with Contact, a charity supporting families with disabled children, illustrates how OR can make a meaningful difference. Contact needed a clearer understanding of the demographics of the people they serve in order to better inform their future programmes, pinpoint gaps in service delivery, and make more effective policy decisions. Their primary goal was to assess whether their services were reaching the diverse communities they aimed to serve and whether certain groups were underrepresented in their user base.

The analysis process began with a straightforward approach of exploring and matching data. By focusing on gender and ethnicity, the OR team used Excel-based models to process the charity’s existing data and compare it to national census data. This was done carefully, ensuring that the analysis could be replicated in the future without requiring advanced technical skills. A few challenges emerged due to data quality, which required manual adjustments to ensure the demographic categories were properly aligned. Additionally, decisions had to be made regarding the granularity of the data to protect the anonymity of individuals, while still ensuring that the groups were large enough to be statistically relevant.
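The analysis itself was done in Excel; purely as an illustration of the kind of comparison involved, the hedged sketch below does the equivalent in pandas. The demographic counts and census shares are invented placeholders, not Contact’s data.

```python
# Compare service-user demographic shares against census benchmarks (synthetic figures).
import pandas as pd

service_users = pd.Series({"Asian": 140, "Black": 95, "Mixed": 60, "White": 1600, "Other": 40})
census_share = pd.Series({"Asian": 0.094, "Black": 0.040, "Mixed": 0.029, "White": 0.815, "Other": 0.021})

comparison = pd.DataFrame({
    "users_share": service_users / service_users.sum(),   # observed share of service users
    "census_share": census_share,                          # benchmark share in the population
})
comparison["gap_pct_points"] = 100 * (comparison["users_share"] - comparison["census_share"])
print(comparison.round(3).sort_values("gap_pct_points"))   # negative = possibly under-reached group
```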

Despite these challenges, the outcome was a comprehensive report that not only addressed the initial questions but also highlighted areas for further exploration. The report examined whether Contact was reaching the right demographics and if service access varied between different ethnic or gender groups. It also raised important questions about the accessibility of services to men, who appeared to be underrepresented, and identified certain ethnic communities in specific locations that might not be fully engaged.

This analysis provided Contact with valuable insights and posed questions that could guide their future work. The findings suggested that further research could explore why certain groups prefer particular methods of interaction with the charity. For example, qualitative surveys could help understand the preferences of different ethnic groups and identify barriers to access. These insights would not only inform service design but also contribute to more effective policy advocacy and programme development.

In the longer term, Contact will use this report to drive more inclusive and targeted outreach strategies. The OR work will also be used to plan future programmes, ensuring that no key areas of the population are overlooked in the charity’s efforts to support families with disabled children. The collaboration highlights how OR can serve as a catalyst for positive change, driving better decision-making and improved services.

‘As a result of support from Pro Bono OR, Contact has a much better understanding of the ethnicity of our service users, how it varies by entry point and area of the UK, and how it compares to the Census. The analysis was rigorous and enabled us to be more accurate in what we say about the diversity of the families we support. The evidence informed our strategy planning process. We are very grateful to the analyst and to Pro Bono OR for their time and expert support in defining and answering the analytical questions.’

- Silvia Laraia, Head of Monitoring and Evaluation, Contact

At Pro Bono OR, we are committed to empowering the third sector with the power of analysis. Whether it's improving services, making informed decisions, or ensuring greater inclusion, our work is all about enhancing the impact that charities can have on the communities they serve. Kate Marles, a volunteer who worked with Contact, shared her experience:

‘I’d encourage everyone to get involved in a Pro Bono OR project. It is an excellent way to practise OR skills in a new setting, solve interesting problems, and have a significant impact. I completed analysis on service use demographics for Contact, a charity that supports families of disabled people. I really enjoyed the opportunity to use my skills for a tangible and worthy cause and seeing a project delivered from start to finish within a short period of time. My work has been shared widely within the charity and is contributing to strategic discussions while they are in the process of developing their next strategy.’

Through partnerships like this, we hope to inspire more organisations to consider how OR can unlock their full potential and help them achieve their goals with greater confidence. Whether you are an OR practitioner who would like to contribute some of your time to meaningful Pro Bono OR work, or a third-sector organisation that hasn’t yet engaged with the OR Society’s Pro Bono scheme, please do feel free to get in touch at ProBonoOR@theorsociety.com.

Isma Shafqat, Pro Bono OR Manager at the OR Society, leads initiatives that apply operational research to support third-sector organisations, drawing on her STEM background and leadership in education and managing strategic corporate partnerships.

Contact email: ProBonoOR@theorsociety.com

OPTIMISATION FOR SURVIVAL – CAPTIVE BREEDING DESIGN

In the history of life on Earth there have been five major extinction events, when far more species went extinct than would be expected at the natural background rate. The best known occurred around 66 million years ago when, for example, all but the avian branch of the dinosaurs were wiped out. But many environmentalists would say we are currently experiencing a sixth event, as the impact of human activity has disrupted the lives of many organisms. If a species is at high risk, one of the few ways to attempt to preserve it is captive breeding. But such programmes, often featuring small numbers of animals, risk genetic harm from inbreeding.

If animals naturally breed in pairs, it is relatively easy to either maintain pedigrees or check genetic relatedness … But some species only thrive when living in groups, making it hard to control healthy genetic mixing.

BRIAN CLEGG

RACHEL GRAY WON A NEWCASTLE UNIVERSITY ENGAGEMENT AND PLACE AWARD 2024 FOR HER PROJECT ON ‘BREEDING GIANTS TO REWILD THE GALAPAGOS’. SHE IS PHOTOGRAPHED HERE WITH EVELYN JENSEN AND MATTHEW FORSHAW

If closely related animals mate, it increases the chance that a gene will have a dangerous allele (genetic variant) from both parents, as the available gene pool is limited. Long before captive breeding programmes we were aware of this issue in humans, which is why at the back of the 1662 Book of Common Prayer you will find ‘a table of kindred and affinity’ forbidding marriage between various blood relatives. The best examples we have of human inbreeding come from historical royal dynasties, where it was considered necessary to maintain the blood line. In the case of King Charles II of Spain, born in 1661, rather than the 254 individuals you would expect in eight generations of his ancestors there were only 82 [1]. Charles died young, was epileptic, severely physically disabled and mentally challenged. Amongst those eight generations, childhood mortality was nearly three times that of the normal, far less pampered, children of Spain.

If animals naturally breed in pairs, it is relatively easy to either maintain pedigrees or check genetic relatedness to ensure that the subjects are paired off in a way that maximises genetic variety. But some species only thrive when living in groups, making it hard to control healthy genetic mixing. It was with this in mind that a team made up of experts in the genetics of captive populations and computer model designers came together to apply the Operational Research technique of Constraint Programming (CP) to help with both understanding the best group sizes and which individuals should go into each of these groups, known as corrals.

The initial target captive breeding group was from a subspecies of the Galápagos giant tortoise. The population of these magnificent animals on the islands of the Galápagos archipelago consists of 12 remaining subspecies, with at least two more already extinct. Their population had dropped from around a quarter of a million when first recorded to 15,000 by the 1970s. One subspecies, Chelonoidis niger niger, from Floreana Island, was thought lost in the nineteenth century, but it appears that some were moved to the larger neighbouring Isabela Island.

In 2002, it was discovered that some tortoises living on the slopes of Wolf volcano on Isabela had Floreana genes: the 23 individuals with the closest genome to the original were to be entered into a programme at the captive breeding centre at Puerto Ayora on Santa Cruz Island. The hope was to restore a colony on Floreana that was as close as possible to the original Floreana subspecies. But with relatively small numbers, maximising genetic viability from the breeding programme was essential.

Initial attempts worked on a subgroup of the 23, managing to get up to 51 individuals at the phase of the programme when the team became involved. Amongst the international team of eight were data scientist Matthew Forshaw, biomedical information officer Rachel Gray and genetic conservationist Evelyn Jensen, all from Newcastle University (for the original publication, see [2]).

Forshaw describes how his role bridges computer science, AI and OR: ‘I originally trained as a computer scientist, focusing on performance engineering and energy-efficiency within datacentre environments.

Following my PhD, my first academic role was as a teaching fellow as part of the Centre for Doctoral Training at Newcastle University, which brought together computer scientists and statisticians. I have since moved towards data science and AI practice. In the past years I’ve increasingly become aware of the role of OR, and have supported collaboration through the Alliance for Data Science Professionals to set accreditation standards for data and AI professionals.’

The approach taken by the team was CP, which began as an algorithmic approach, but has increasingly involved artificial intelligence (AI). According to Forshaw ‘CP involves modelling the problem to be solved, as well as a set of rules which determine whether a solution to the problem is acceptable (for example, the number of males and females in a corral). Furthermore, an objective function may be defined which helps score feasible solutions to determine which properties are desirable.’

© Fotos593/Shutterstock

‘It is a paradigm for problem-solving using artificial intelligence, which has shown exemplary performance in identifying feasible or optimal solutions from a very large set of candidate solutions. These problems range from allocating hospital beds and operating theatre scheduling to repopulation of endangered species, optimising the use of equipment in factories and vehicle routing.’ Over the past two and a half decades, the scholarly community has developed a tradition of blending CP with both AI and OR, with the CPAIOR conference series providing the annual reference event in everyone’s calendar [3].

In the case of the tortoises, the objective of the model was to minimise the sum of ‘pairwise relatedness’ of individuals allocated to the same breeding corral. Although in other populations this may be measured using the pedigrees of the animals, here the relatedness was measured by comparing genetic markers of the individuals. Specifically, for the Floreana tortoises, this involved using whole genome sequences. Using only this objective would tend to allocate the minimum allowed number of individuals to a corral, which might result in too many tortoises being excluded from breeding entirely; as a result, the model was designed so that the user can set a minimum number of individuals to take part.
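As an illustration only, the sketch below shows how an allocation problem of this kind can be expressed with a constraint solver (Google OR-Tools CP-SAT). It is not the published model from [2]: the relatedness values, corral count and size bounds are made up, and relatedness is scaled to integers for the solver.

```python
# Toy corral allocation: minimise summed pairwise relatedness within corrals.
from ortools.sat.python import cp_model

relatedness = [  # symmetric pairwise relatedness, scaled to integers (invented values)
    [0, 8, 2, 5],
    [8, 0, 3, 1],
    [2, 3, 0, 7],
    [5, 1, 7, 0],
]
n, n_corrals, min_per_corral, max_per_corral = 4, 2, 2, 3

model = cp_model.CpModel()
# x[i][c] == 1 if individual i is placed in corral c
x = [[model.NewBoolVar(f"x_{i}_{c}") for c in range(n_corrals)] for i in range(n)]

for i in range(n):                      # each individual goes to exactly one corral
    model.AddExactlyOne(x[i])
for c in range(n_corrals):              # corral size bounds (crude stand-in for husbandry limits)
    model.Add(sum(x[i][c] for i in range(n)) >= min_per_corral)
    model.Add(sum(x[i][c] for i in range(n)) <= max_per_corral)

# y == 1 exactly when i and j share corral c; the objective sums their relatedness
objective_terms = []
for i in range(n):
    for j in range(i + 1, n):
        for c in range(n_corrals):
            y = model.NewBoolVar(f"y_{i}_{j}_{c}")
            model.AddBoolAnd([x[i][c], x[j][c]]).OnlyEnforceIf(y)
            model.AddBoolOr([x[i][c].Not(), x[j][c].Not()]).OnlyEnforceIf(y.Not())
            objective_terms.append(relatedness[i][j] * y)
model.Minimize(sum(objective_terms))

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for c in range(n_corrals):
        members = [i for i in range(n) if solver.Value(x[i][c])]
        print(f"corral {c}: individuals {members}")
```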

This tool is a helper in the decision-making process of designing captive breeding programmes and allows human intuition to make the final management decision.

Some applications of CP require high power computing resources. For the group of 51 tortoises, the model was runnable on a MacBook Pro laptop within a reasonable timescale, getting to within 1 per cent of the optimal obtained value in around 20 minutes, but the group also ran tests on a 64 virtual CPU system which reached the same result in 32 seconds, suggesting the approach would remain viable for larger captive breeding programmes, given appropriate hardware. There is always a balance here, as the ideal with such tools, devised to help workers in the field, is to produce a turnkey package that can be run without assistance from the development team.

Producing a packaged solution takes time. As Forshaw explains ‘This tool is a helper in the decision-making process of designing captive breeding programmes and allows human intuition to make the final management decision. The tool provides recommendations on how individuals in the breeding programme can be allocated. Practitioners are presented with a series of visualisations to understand the trade-offs between animal behaviour, husbandry best practices and financial constraints. Ongoing work is centred on working in partnership with end users to develop further tooling support to place this directly in the hands of captive breeding programme designers.’

My greatest piece of advice would be for people to have the confidence to ask questions, and build empathy and shared understanding with collaborators across other disciplines.

This was a particularly interdisciplinary project. Apart from those working directly in the captive breeding programme, the team had to combine experience in conservation and evolutionary biology with systems expertise. As Forshaw explains ‘Simultaneously one of the most challenging aspects—but also an aspect which has been particularly rewarding—has been working with captive breeding programme designers to translate their knowledge of the breeding programme into constraints, and understand how to specify the objective function to meet their target outcomes. This involved sharing the capabilities of CP approaches and supporting end users to appreciate the CP paradigm, and to translate plain-English explanations into CP models, identifying and exploring edge cases.’

By 2025, after the successful runs of the model, its results were used to update the structure of the breeding groups in the Floreana lost lineage recovery programme. This is not a mechanical process of adopting the exact recommendation from the system: the researchers emphasise the importance of allowing human intuition and experience to shape the final management decision, as factors other than relatedness, such as behaviour, husbandry practice and financial constraints, form part of the judgement required.

Looking back over the project in search of advice for practitioners, Matthew Forshaw notes ‘Projects which span interdisciplinary boundaries—such as this one—are hugely rewarding. My greatest piece of advice would be for people to have the confidence to ask questions, and build empathy and shared understanding with collaborators across other disciplines.’

The team is validating its approach by working with a US National Science Foundation-funded project centred on Gulf Coast canids (canine species including wolves, foxes, coyotes, domestic dogs and more) to revive genomic variation in the endangered red wolf. This apparently dry mathematical methodology has provided a major boost to the potential success of the return of giant tortoises to Floreana, and is likely in the future to help many other captive breeding programmes save species for the future.

Brian Clegg is a science journalist and author who runs the www.popularscience.co.uk and www.brianclegg.net websites. After graduating with a Lancaster University MA in Operational Research in 1977, Brian joined the OR Department at British Airways. He left BA in 1994 to set up a creativity training business. He is now primarily a science writer: his latest title Brainjacking looks at the science of influence and manipulation.

FOR FURTHER READING

[1] Rutherford, A. (2016). A Brief History of Everyone Who Ever Lived. Weidenfeld & Nicolson.

[2] Forshaw, M., Gray, R., Ochoa, A., Miller, J. M., Brzeski, K. E., Caccone, A., & Jensen, E. L. (2025, April). Constraint Optimisation Approaches for Designing Group-Living Captive Breeding Programmes. In Proceedings of the AAAI Conference on Artificial Intelligence, 39(27), pp. 27989–27997.

[3] CPAIOR (2025, May 1). International Conference Series on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research. https://cpaior.org/

THE DYNAMIC RESERVE SETTING PROJECT WON THE PRESIDENT’S MEDAL 2024, RECOGNISING ITS IMPACT AND INNOVATIVE USE OF OR TECHNIQUES

BALANCING ACT: THE PARTNERSHIP OF AI AND OPERATIONAL RESEARCH IN MODERN POWER GRID MANAGEMENT

DR KIERAN KALAIR, DR ALEX BOWRING AND ADAM BRUMMITT

Keeping the lights on across Great Britain is no small feat. The National Energy System Operator (NESO) ensures that supply always meets demand, a challenge made more complex by an evolving grid that incorporates more renewable energy and unpredictable variables. As energy industry experts work to balance the grid whilst maintaining the energy trifecta of system security, lowest cost and progress towards net zero targets, effective decision support tools have never been more important.

Dynamic Reserve Setting (DRS) was a recent NESO innovation project that we at Smith Institute were proud to support. The project won the OR Society’s President’s Medal this year, recognising its impact and use of OR techniques. It demonstrated that the power of the Operational Research (OR) process, combined with the adaptability of ‘explainable’ artificial intelligence (AI), can have a tangible impact on electricity reserve setting.

MANAGING RISK AND EFFICIENCY IN ELECTRICITY RESERVE SETTING

Electricity reserves act as one of the grid’s safety nets, mitigating risks from unexpected demand spikes or supply interruptions. Historically, reserve-setting relied on static forecasting models that could not adapt to the dynamic interplay of modern energy systems. This required control room engineers to manually adjust the outputs of these models to strike a balance between excessive costs from over-provisioning of reserve and risks of supply failure from under-provisioning.

The project transitioned reserve-setting from a static, bi-annual activity to a dynamic, reactive model, enabling the system to reflect real-time conditions and manage uncertainties effectively.

The UK’s energy landscape is now defined by increased adoption of renewable energy, electrification of transport and heating, and cross-border energy flows through interconnectors. These shifts have increased demand and generation variability and added complexity to balancing the grid, driving the need for improvements to the historic reserve-setting approach. These sources of complexity have increased uncertainty in both supply and demand – for instance, predicting how much wind generation will be available in the next few days is far harder than predicting how much coal generation we might have had in the same period many years ago.

The DRS project reframed these challenges through an OR lens, applying probabilistic modelling and forecasting techniques to achieve a balance between reliability and cost-efficiency. The project transitioned reserve-setting from a static, bi-annual activity to a dynamic, reactive model, enabling the system to reflect real-time conditions and manage uncertainties effectively. This reframing not only improves operational outcomes but also reduces inefficiencies often inherent in traditional methods.

OR AND AI INTEGRATION

DRS integrated OR and AI to combine predictive accuracy with decision-making robustness, two of Smith Institute’s core capabilities. We are an award-winning data science, advanced mathematics and AI consultancy. For over 27 years, we have delivered innovative and impactful solutions to help our clients to solve their most complex challenges.

The Smith Institute team developed probabilistic machine-learning (ML) models that accounted for the inherent non-linear relationships and uncertainties in energy demand and supply.

The models were trained on historical data for which the true reserve requirements had been computed over time. Smith Institute then created models targeting specific statistical properties of the reserve requirements. More precisely, predicting quantiles of the distribution of reserve requirements gave the flexibility to consider different levels of security and the associated reserve holdings needed to meet each level. By integrating real-time dynamic data, the models adapt to changing system conditions, ensuring network resilience and accuracy.
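Quantile prediction of this kind can be illustrated with a short, generic sketch (not NESO’s or Smith Institute’s code): a gradient-boosting model fitted to the same synthetic data at three different quantiles, where a higher quantile corresponds to a more conservative reserve recommendation.

```python
# Quantile regression as one way to produce reserve recommendations at different security levels.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))                       # e.g. wind forecast, demand forecast, hour (synthetic)
reserve_requirement = 500 + 80 * X[:, 0] - 60 * X[:, 1] + rng.normal(0, 40, 2000)

models = {}
for q in (0.5, 0.9, 0.99):                           # higher quantile = more conservative reserve holding
    m = GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=200)
    models[q] = m.fit(X, reserve_requirement)

x_now = X[:1]                                        # "current" system conditions
for q, m in models.items():
    print(f"P{round(q * 100)} reserve recommendation: {m.predict(x_now)[0]:.0f} MW")
```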

Transparency was a critical requirement for operational decision-makers. The project employed explainable AI techniques, such as Shapley additive explanations (SHAP), to demonstrate the reasoning behind AI-driven forecasts. Trust is vital when the stakes are so high. The key idea was to surface the main contributors to the current recommendation so that, at a glance, a control room engineer could get a sense of what was driving the model predictions and make informed decisions.
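Continuing the illustrative sketch above, and assuming a tree-based model (which the shap library explains efficiently), a per-recommendation explanation might be surfaced like this; again, a sketch rather than the project’s code:

    # Sketch: explain one reserve recommendation with SHAP.
    # Reuses the illustrative `models` and `latest_conditions` from the previous sketch.
    import shap

    model = models[0.975]                       # the highest-security model
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(latest_conditions)

    # Rank features by their contribution to this single prediction, so an engineer
    # can see what is pushing the recommendation up or down.
    contributions = sorted(
        zip(latest_conditions.columns, shap_values[0]),
        key=lambda kv: abs(kv[1]),
        reverse=True,
    )
    for feature, value in contributions:
        print(f"{feature:>24}: {value:+.0f} MW")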

The project’s success relied on engagement with control room engineers. Through iterative refinement cycles, the team ensured that the technical models addressed practical constraints, such as the need for real-time and day-ahead decision support. This collaborative approach bridged the gap between advanced methodologies and real-world application, highlighting the importance of stakeholder input in OR projects. The extensive and ongoing problem formulation, engaging domain experts throughout, was key to the model being useful in practice and having real impact when completed.

By integrating real-time dynamic data, the models adapt to changing system conditions, ensuring network resilience and accuracy.

Ultimately, DRS demonstrated the power of combining OR principles and advanced data science techniques to solve complex problems in dynamic environments. Smith Institute and NESO designed each step to make models that were not only robust but also practical and aligned with operational needs.

Data collection and preprocessing

The foundation of the model was historical data and insights from domain experts. This included over four years of demand records, generation data, weather forecasts (temperature, wind speeds), and system conditions such as interconnector flows and transmission limits. Handling this diverse dataset required significant preprocessing:

• Aligning distinct datasets: Combining data from multiple sources required understanding the recording procedure of each dataset, aligning them in time and ensuring all data input into the model as ‘features’ would be available at run-time for practical usage.

• Feature Engineering: Predictive variables, such as wind speeds, solar load factors, demand forecasts, and lagged versions of such features, were constructed to better capture the dynamics of renewable energy.

• Dynamic Aggregation: Data was structured to reflect temporal patterns, enabling models to account for hourly, daily, and seasonal variations in demand and supply (a minimal sketch of this kind of feature construction follows the list).
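As a hedged illustration of the lagged and calendar features described above – with column names and lag choices that are assumptions, not the project’s actual pipeline – the construction might look like this:

    # Illustrative feature construction for half-hourly settlement-period data.
    # Column names and lag choices are assumptions, not the DRS pipeline.
    import pandas as pd

    def build_features(df: pd.DataFrame) -> pd.DataFrame:
        """`df` is indexed by half-hourly settlement-period timestamps."""
        out = df.copy()
        # Lagged versions of forecasts so the model can see recent history.
        for col in ["demand_forecast_mw", "wind_forecast_mw"]:
            out[f"{col}_lag_1h"] = out[col].shift(2)     # two half-hour periods
            out[f"{col}_lag_24h"] = out[col].shift(48)
        # Calendar features capturing hourly, daily and seasonal variation.
        out["hour"] = out.index.hour
        out["day_of_week"] = out.index.dayofweek
        out["month"] = out.index.month
        return out.dropna()                              # drop rows missing lags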

Model development

The development phase involved a hybrid framework that combined OR techniques with machine learning. Key aspects included:

• Probabilistic Forecasting: Using probabilistic methods, the team developed models capable of generating multiple reserve recommendations under varying levels of uncertainty. This approach provided decision-makers with a range of options tailored to different risk tolerances.

• Hyper-parameter tuning: tuning techniques were employed to optimise the models’ economic performance while maintaining security requirements across the range of different risk tolerances (a hedged example of such tuning is sketched below).
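One way such tuning can be set up – purely a sketch under assumed choices (a pinball-loss scorer, a small grid, and the synthetic X and y from the first sketch), not the project’s actual economic objective – is:

    # Hedged sketch: tune a quantile model with a pinball-loss scorer and a
    # time-ordered split. Reuses the synthetic X, y from the earlier sketch.
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import make_scorer, mean_pinball_loss
    from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

    q = 0.975
    scorer = make_scorer(mean_pinball_loss, alpha=q, greater_is_better=False)
    search = GridSearchCV(
        GradientBoostingRegressor(loss="quantile", alpha=q),
        param_grid={"learning_rate": [0.03, 0.1], "max_depth": [2, 3, 4]},
        scoring=scorer,
        cv=TimeSeriesSplit(n_splits=5),   # respects temporal ordering of the data
    )
    search.fit(X, y)
    print(search.best_params_)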

Validation and testing

To ensure that the models were both accurate and operationally viable, rigorous validation processes were implemented:

• Backtesting: Historical data was used to evaluate model predictions, comparing the outcomes with actual system requirements (a minimal backtest loop is sketched after this list). This helped refine the models and build confidence in their predictive capabilities.

• Cross-Validation: Models were refined and tested on held-out data as part of a cross-validation approach across different time periods, including high-demand winter months and lower-demand summer months, to assess their robustness under varying conditions.

• Control Room Trials: Trial periods in NESO’s control room allowed operators to interact with the models in real-world scenarios, providing invaluable feedback for further refinement.
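A stripped-down version of such a backtest – comparing a recommended reserve against the requirement that actually materialised – could be as simple as the sketch below; all numbers are synthetic and the metrics are assumptions, not NESO’s acceptance criteria.

    # Hedged backtesting sketch: security and cost proxies on a held-out period.
    import numpy as np

    def backtest(recommended_mw: np.ndarray, realised_mw: np.ndarray) -> dict:
        shortfall = realised_mw > recommended_mw
        over_provision = np.clip(recommended_mw - realised_mw, 0, None)
        return {
            "shortfall_rate": float(shortfall.mean()),             # security proxy
            "avg_over_provision_mw": float(over_provision.mean()), # cost proxy
        }

    rng = np.random.default_rng(1)
    realised = 800 + rng.gamma(2.0, 150.0, 1_000)
    recommended = realised + rng.normal(200, 150, 1_000)   # imperfect recommendations
    print(backtest(recommended, realised))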

IMPACT AND BENEFITS

In testing, DRS demonstrated that it could save over 300 megawatts of reserve each settlement period (30 minutes) compared to the existing model. In a control room test, DRS saved 1GW in just 2 hours. This is roughly akin to the output of 2 nuclear power plants.

The hybrid OR and AI approach has the potential to achieve transformative outcomes. It can reduce workload on control room engineers who previously had to make manual adjustments to reserve recommendations. By addressing the balance between over-provisioning and under-provisioning, the system improves efficiency and ensures grid stability.

Explainable models enhanced stakeholder confidence, fostering trust and facilitating the adoption of advanced decision-support systems. The scalability of these methodologies demonstrates their potential for application in diverse sectors. For instance, supply chain logistics could benefit from probabilistic forecasting to provide decision intelligence insights into inventory levels, while healthcare systems could use similar models for resource allocation in emergencies.

This project established a benchmark for integrating forecasting with risk-based decision-making, offering a roadmap for future applications of OR in energy and beyond. The focus on transparency and stakeholder alignment ensured that advanced technical solutions remained accessible and actionable.

In a control room test, DRS saved 1GW in just 2 hours. This is roughly akin to the output of 2 nuclear power plants

THE FUTURE OF OR IN ENERGY SYSTEMS

In a sector that is driving towards net zero objectives and adapting to new innovations and behaviours, the DRS project showcased how OR can provide solutions to critical energy challenges by leveraging interdisciplinary methods and fostering collaboration.

The probabilistic and forecasting techniques demonstrated are versatile and can be applied to a diverse range of contexts outwith energy systems, such as global supply chains. Combining AI-driven predictions with OR-based decision-making ensures that solutions are both accurate and actionable.

Successful implementation requires alignment between technical innovations and practical needs. Engaging end-users throughout the process ensured that the models address real-world constraints effectively.

At Smith Institute, we believe that the project’s success highlights a broader vision for OR: advancing beyond traditional boundaries to solve emerging challenges. By combining advanced techniques, such as AI and probabilistic modelling, with a deep understanding of real-world systems, OR practitioners are uniquely positioned to lead the charge in addressing global challenges.

Kieran Kalair, a Principal Consultant at Smith Institute, applies advanced mathematics and data science to optimise energy systems. With a PhD from the University of Warwick, he has shaped grid forecasting, dispatch optimisation, and energy resilience. Passionate about a just, sustainable transition, his work drives impactful, data-driven decisions for a greener energy future.

Alex Bowring is a Senior Mathematical Consultant at Smith Institute. Prior to joining, Alex completed an Early Career Research Fellowship in Neuroimaging Statistics at the University of Oxford. Alex has worked on a range of projects with a focus on work in the energy sector, helping to drive a brighter tomorrow for the economy, society and the environment.

Adam Brummitt is a Marketing Manager at Smith Institute. He is responsible for showcasing Smith Institute’s expertise in data science, AI and advanced mathematics and the positive impact of our work on society, the economy and the environment.

MACHINE LEARNING: WHEN LESS IS MORE

A longstanding challenge in Machine Learning (ML) is feature selection: the task of finding the most informative variables in a dataset – those that yield the most accurate predictions from an ML model – whilst discarding ineffective ones. As the number of features increases, the task requires exponentially more computation and often ends in overfitting, creating performance and maintainability issues.

In many cases, the addition of more features introduces dimensional clutter rather than useful information. This hampers training and inference, degrading the model’s performance. Optimising feature selection for a given application is an inherently combinatorial task, one that is challenging to formalise mathematically. Exhaustive search is simply impossible for medium- to large-sized datasets, leaving heuristic approaches as the preferred solution. The good news is that many such options are already available for prompt implementation.
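One such readily available option is greedy forward selection, which adds one feature at a time and keeps whichever addition most improves cross-validated accuracy. The sketch below uses a stock scikit-learn dataset purely for illustration – it is not Optrak’s code – and the printed comparison of full set versus small subset previews the point made later in this article.

    # Illustrative sketch of greedy forward feature selection on a stock dataset.
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)          # 30 features
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1_000))

    # Greedily add features, keeping the best 5 of 30 by cross-validated score.
    selector = SequentialFeatureSelector(
        model, n_features_to_select=5, direction="forward", cv=5
    ).fit(X, y)
    X_small = selector.transform(X)

    full = cross_val_score(model, X, y, cv=5).mean()
    small = cross_val_score(model, X_small, y, cv=5).mean()
    print(f"accuracy with all 30 features: {full:.3f}; with the chosen 5: {small:.3f}")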

Optimising feature selection for a given application is an inherently combinatorial task, one that is challenging to formalise mathematically.

At Optrak we solve real-world routeing problems with Operational Research (OR) approaches. Almost all commercial routeing problems cannot be solved with exact methods – heuristic methods are deployed instead. In many ways a heuristic search for vehicle routeing mirrors the feature selection problem in ML, as both involve exploring vast solution spaces that defy exhaustive evaluation. Some companies provide vast troves of database information to us so we can accurately model their problems; however, the relevant data that ends up being used to provide routeing solutions is a tiny fraction of what is provided. Reduction and simplification of data is ubiquitous in the technology industry; feature selection takes many forms (Figure 1).

. . . the relevant data that ends up being used to provide routeing solutions is a tiny fraction of what is provided

FEATURE SELECTION IN PRACTICE

Both ML practitioners and business leaders too often assume that more data—more features—will yield better results. This is a myth. As far back as 1986, in his renowned paper ‘You and Your Research’ [1], Richard Hamming—famous for his work on error-correcting codes—highlighted that small, well-collected, and meticulously analysed datasets often provide more meaningful insights than large volumes of poorly gathered or noisy data. A growing body of literature supporting this view has developed ever since.

Leaner models, that use only a fraction of the available features, can achieve equal or even superior performance. For example, it is not uncommon for small subsets of features, sometimes as little as 10% of the full set, to achieve higher accuracy scores than the full set [2]. More frequently, approximately 50% of the features can produce accuracy scores comparable to those obtained with the full set.

For businesses, this suggests the need for a more critical evaluation of existing ML practices and datasets. If accurate predictions can be made using only 10% of the available data, does that justify a shift towards leaner, more efficient data pipelines? While there is clear utility in identifying which features contribute most to predictive accuracy, there is also a temptation to aggressively prune weaker features in pursuit of streamlined processes. This, in turn, raises broader questions about the robustness, responsibility, and ownership of decision-making processes that increasingly rely on either Artificial Intelligence (AI) or ML. More directly, it underscores the limitations of viewing ML as an inherently intelligent process. Ultimately, it reminds us that ML is fundamentally a mathematical optimisation problem, driven by algorithms that seek to maximise objective functions under given constraints – a stark contrast to the popular perception of ML as a form of artificial intelligence that mimics human cognitive processes.

. . . ML is fundamentally a mathematical optimisation problem . . .

This misconception can be traced back to the well-known psychological bias known as the ‘anchoring heuristic’ [3]—the tendency to rely too heavily on the first piece of information encountered (the ‘anchor’) when making decisions. In the context of ML, the anchor is often the initial assumption that more features equate to better performance. Well, it is about time we revised this assumption! (Figure 2).

FIGURE 1 FEATURE SELECTION AT A GLANCE

THE ‘BITTER LESSON’

Richard Sutton's 2019 essay ‘The Bitter Lesson’ captures this discussion accurately [4]. Sutton argues that the most significant progress in AI has come not from complex, handcrafted models that attempt to mimic human cognition, but from simpler, more general-purpose methods such as search. In the case of feature selection, straightforward, scalable methods are now known to perform better than intricate, bespoke ones.

Sutton's essay recounts the history of early AI research competitions, such as those sponsored by the US Defense Advanced Research Projects Agency (DARPA) in the 1970s. There and then, many participants attempted to use complex methods that incorporated detailed knowledge of language, phonemes, and the human vocal tract. However, these methods were often outperformed by simpler statistical approaches, which ultimately proved more effective in practice. As it turns out, the bitter lesson is not so new in the end.

The instructive point here is that the power of general-purpose methods should never be underestimated. This is particularly relevant in the context of modern ML, where the most successful approaches—such as deep learning and large language models (LLMs)—are built on relatively simple mathematical operations such as linear algebra, applied at scale and in parallel. These methods have achieved results that were once thought to be the exclusive domain of more complex, human-like reasoning processes. In the recent zeitgeist, the story of DeepSeek stands out as an example of simplification: the Chinese start-up greatly refined and reduced the costs of the inference process of LLMs with its novel Multi-head Latent Attention process.

. . . the power of general-purpose methods should never be underestimated

In a very similar way, Mixed Integer Linear Programming (MILP) is one of the most established and useful OR methods in industry—it enables abstract business problems to be formulated in formal mathematics. MILP is a generalist approach that is constantly being tried and tested—successfully—across many industries, including logistics and energy.
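To make that point concrete, here is a deliberately tiny, hypothetical example of such a formulation – deciding which orders fit on a capacity-limited vehicle – written as a MILP and solved with SciPy. The data is invented and this is not an Optrak model.

    # Toy MILP sketch: which orders to load on one vehicle to maximise profit.
    import numpy as np
    from scipy.optimize import Bounds, LinearConstraint, milp

    value = np.array([90.0, 60.0, 40.0, 30.0])    # profit per order
    weight = np.array([12.0, 6.0, 5.0, 4.0])      # pallets per order
    capacity = 18.0                                # vehicle capacity in pallets

    res = milp(
        c=-value,                                              # milp minimises, so negate profit
        constraints=LinearConstraint(weight.reshape(1, -1), ub=capacity),
        integrality=np.ones_like(value),                       # each order is all-or-nothing
        bounds=Bounds(0, 1),
    )
    chosen = np.round(res.x).astype(int)
    print("orders selected:", chosen, "profit:", -res.fun)     # -> [1 1 0 0], 150.0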

Linking back to the ‘more data is better’ narrative, the emergence of ‘Big Data’ as a trend in the 2010s is another industry example. In his essay ‘Big Data is Dead’ [5] Jordan Tigani, a founding engineer at Google BigQuery, writes about how the alarmist narratives around data growing in volume and requiring specific processing technology were unfounded. In the recent past, companies offered Big Data roles, invested in infrastructure, and rescaled their database systems. Nowadays, a search for this term on LinkedIn yields no meaningful results. In reality, very few industries require these overhauls and quite a few successful business models function without intricate data systems or architectures.

WHERE WILL FEATURE SELECTION GO NEXT?

In conclusion, while much has been said about the transformative power of AI/ML on digital infrastructure across various industries, the Bitter Lesson encourages us to reconsider which school of thought—simplicity or complexity—will ultimately guide these transformations.

As Richard Sutton aptly put it: ‘We want AI agents that can discover like we can, not which contain what we have discovered’. The success of heuristic search methods in feature selection aligns with this vision, emphasising the value of simplicity and general-purpose approaches in the ongoing quest to build intelligent systems. In an era where AI is increasingly integrated into our personal and professional lives, this is worth keeping in mind whenever developing and deploying new technologies.

FIGURE 2 THE KLEIN BOTTLE IS TYPICALLY VISUALISED AND UNDERSTOOD TO EXIST IN FOUR-DIMENSIONAL SPACE. STILL, WE CAN UNDERSTAND IT WELL ENOUGH THROUGH A 2D REPRESENTATION, I.E., WE RETAIN MEANING BY SELECTING TWO FEATURES
© Charles Baden/Shutterstock

We want AI agents that can discover like we can, not which contain what we have discovered

Tom Murarik is an Operations Research Software Developer at Optrak. Since completing his MSc in OR at the University of Edinburgh he has been pro-actively engaging in the OR community, with a particular emphasis on bringing the worlds of software engineering, ML and mathematical optimisation together.

FOR FURTHER READING

[1] Hamming, R. (1995). You and Your Research. https://www.cs.virginia.edu/~robins/YouAndYourResearch.html

[2] Guyon, I., and Elisseeff, A. (2003). An introduction to variable and feature selection. Journal of Machine Learning Research 3: 1157–1182.

[3] Tversky, A., and Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science 185: 1124–1131.

[4] Sutton, R. (2019). The bitter lesson. Incomplete Ideas (blog). http://www.incompleteideas.net/IncIdeas/BitterLesson.html

[5] Tigani, J. (2023). Big data is dead. https://motherduck.com/blog/big-data-is-dead/

LINKING UP FOR RESILIENCE: SUPPLY CHAIN STRUCTURES AS SME SURVIVAL TOOLS

MUSTAFA ÇAGRI GÜRBÜZ, SENA OZDEMIR, VANIA SENA, OZNUR YURT AND WANTAO YU

What have the last five years taught us about supply chains and their resilience? We live in a world where supply chains are very long and complex. While this was often not considered a major risk before 2020, we are now aware of the dangers long global supply chains create for companies, particularly during international crises such as the COVID-19 pandemic. Indeed, the pandemic itself was quite a unique challenge for most companies - given that it generated both demand and supply shocks on an unprecedented scale - thereby highlighting the vulnerabilities that global supply chains are exposed to, and the limited bargaining power many enterprises have with their upstream suppliers and/or downstream customers. Five years on, various questions are yet to be answered. In particular: (1) “What alternative supply chain configurations best help to mitigate catastrophic failures?”, and (2) “Do strategies to limit the havoc created by supply chain shocks vary across these different configurations?”

… the pandemic itself was quite a unique challenge … highlighting the vulnerabilities that global supply chains are exposed to, and the limited bargaining power many enterprises have with their upstream suppliers

THE CLIENT

Over the past five years, we have worked with a medium-sized Turkish textile company (LUR Textile) to answer these questions. More specifically, we have studied its response to the COVID-19 pandemic and its aftermath. Through this partnership, we have sought to gain insights into how different supply chain configurations and mitigation strategies can help to reduce risks to small and medium-sized companies.

LUR Textile is a 160-employee company established in 2003 in Izmir, Turkey. It operates mostly as an intermediary between suppliers in several countries, such as Pakistan and Egypt, and buyers in the UK and EU. As a result, LUR has to manage a supply chain which covers in excess of twenty countries. COVID-19 hit LUR Textile very hard, by reducing supplier capacity and creating shortages of critical raw materials. On the demand side, the firm regularly faced order cancellations by first-tier customers.

… we focused in particular on evaluating the value of sourcing flexibility … and of financial incentives …

Collaborating with the Senior Management Team at LUR Textile, we outlined their complete range of supply chain risk mitigation strategies. This encompassed both proactive and reactive actions: flexible sourcing, employing postponement and delayed differentiation, lowering minimum order quantities throughout the supply chain, broadening the customer base, and actively searching for alternative customers for cancelled orders. LUR Textile also extended financial assistance to trading partners through longer payment terms and discounts, emphasising a thorough approach to risk mitigation. In our study, we focused in particular on evaluating the value of sourcing flexibility (sourcing from multiple suppliers), of the ability to customise and market products to alternative buyers in case of order cancellations, and of financial incentives (discounts implemented to avert order cancellations).

To understand how effective these strategies were for LUR Textile, we carried out detailed simulation studies, employing discrete event simulation. ARENA was our software of choice. Our simulations helped to quantify and analyse the costs and benefits - so critical for SMEs - that were likely to come with the various proactive and reactive mitigation strategies LUR Textile intended to employ. We looked at a wide spectrum of supply chain configuration alternatives, all of which were essentially variations of the core structures shown in Figure 1.

The baseline configuration represents a typical triadic supply chain with LUR Textile (the ‘Focal Firm’) having one major supplier and one major buyer. Additionally, we examined alternative configurations in which buyers and suppliers are either concentrated or dispersed geographically. In concentrated supply chain configurations, we included small suppliers and buyers operating in the same region, potentially facing similar risks and disruptions. Conversely, we characterised dispersed supply chain configurations by suppliers and buyers being geographically scattered.

FIGURE 1 NETWORK STRUCTURES FOR TRIADIC AND CONCENTRATED/DISPERSED SUPPLY CHAINS

We investigated various disruptive events impacting suppliers, customers, and transportation services. Our simulation model identified three primary categories of shocks to demand and supply processes: (1) supplier disruptions, (2) demand disruptions (e.g., a buyer cancelling an existing order), and (3) transportation service disruptions (such as broken inbound and/or outbound transport links). These disruptions can arise simultaneously due to a catastrophic event, or occur as isolated incidents independently of any catastrophic event. Our modelling approach addressed supplier availability issues related to these disruptions, with important distinctions regarding both how these shocks manifest within the operations and their duration, both of which can be monitored continuously. Notably, a catastrophic event such as the COVID-19 pandemic did (still does) not necessarily affect all entities within LUR Textile's supply chain. This observation highlights the advantages of collaborating with more “individually reliable” firms under varying conditions. We assumed the likelihood of an individual disruption occurring is higher following a catastrophic event compared to normal circumstances. Likewise, these individual disruptions tend to persist longer (at least on average) in the wake of a catastrophic event.
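The study’s model was built in ARENA; purely as a hedged illustration of the disruption logic just described (all rates and the catastrophe window below are invented), a discrete event simulation of a single supplier might look like this in Python with SimPy:

    # Hedged sketch, not the ARENA model: a supplier whose failure rate and repair
    # time worsen inside a catastrophic-event window.
    import random
    import simpy

    CATASTROPHE = (100, 200)        # days during which disruptions are more likely/longer

    def in_catastrophe(now: float) -> bool:
        return CATASTROPHE[0] <= now < CATASTROPHE[1]

    def supplier(env: simpy.Environment, log: list):
        while True:
            # Time to the next individual disruption: shorter during the catastrophe.
            mean_up = 10 if in_catastrophe(env.now) else 30
            yield env.timeout(random.expovariate(1 / mean_up))
            start = env.now
            # Disruption duration: longer on average during the catastrophe.
            mean_down = 8 if in_catastrophe(env.now) else 2
            yield env.timeout(random.expovariate(1 / mean_down))
            log.append((start, env.now))

    random.seed(1)
    env = simpy.Environment()
    outages = []
    env.process(supplier(env, outages))
    env.run(until=365)
    downtime = sum(end - start for start, end in outages)
    print(f"{len(outages)} supplier disruptions, {downtime:.0f} days of lost supply")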

The effects of the shocks differed among the three configurations. In concentrated network designs, the timing and duration of catastrophic events for buyers/suppliers tend to coincide, while in dispersed networks, they do not.

THE RESULTS

Our simulations revealed insights from LUR Textile’s own supply chain, many of which are likely to be generalisable and which we now discuss.

Firstly, combining proactive and reactive strategies is almost always preferable to either proactive or reactive measures alone. While restructuring supply chains is a long-term strategic decision requiring significant investment, financial incentives offer a quick, low-cost alternative for managing short-term disruptions. However, excessive or frequent discounts should be avoided, as they may alter future buyer behaviour. While reactive measures are easier to implement, require less investment, and often produce an immediate impact, proactive strategies, particularly if network design related, appear more effective in profit generation. The dispersed supply chain configurations consistently yielded the highest fill rates, while the triadic configurations had the lowest. We also found quite clearly that enhanced fill rates do not necessarily translate into increased profits, especially when fixed and variable costs grow faster than revenues.

… the findings helped shape critical decisions, such as consolidating our warehouses to reduce costs and transitioning to domestic suppliers …

Moreover, concentrated supply chains are preferable due to lower fixed costs and better unit profit margins, mostly thanks to economies of scope/scale. However, discounts become critical for the profitability of concentrated supply chains such as LUR Textile’s, especially where the probability of a cancellation request is non-negligible. Higher discounts inevitably mean lower profit margins, and buyers may intentionally hold threats of order cancellation to secure discounts even when they are not experiencing any crisis yet. Liberal/flexible cancellation policies can also be problematic as firms lose sales or margins. Therefore, it is crucial to have alternative buyers/markets in mind that may promptly take up cancelled orders, thus highlighting the need for strategic planning in supply chain management.

Erman Guleryuz, CEO at LUR Textiles, commented: “This study provided us with a strategic perspective on strengthening our supply chain resilience. While we initially contributed industry insights, the findings helped shape critical decisions, such as consolidating our warehouses to reduce costs and transitioning to domestic suppliers after realising the risks of import dependencies. Inspired by the study, we also reassessed our supplier network, prioritising not just diversification but also responsiveness. Additionally, we became more selective in our customer base, focusing on financially stable partners to safeguard cash flow. Overall, this research reinforced our approach to risk mitigation and operational efficiency in a volatile market.”


LOOKING AHEAD

Our joint work with LUR Textile showed that partnering with a diverse range of smaller suppliers and buyers across different regions significantly reduces the risk of supply and demand disruptions. We also demonstrated that small discounts may effectively address demand fluctuations in dispersed supply chains. This suggests a strategic advantage in working with smaller buyers, who typically have less bargaining power due to their lower order volumes—paralleling revenue management practices that favour price adjustments over capacity changes. At the time of writing, we are in the process of planning future joint work with LUR Textile and other similar companies, exploring the role of big data analytics in improving demand forecasts and detecting disruptions early.

Dr Mustafa Çagri Gürbüz is Professor of Supply Chain Management at the MIT-Zaragoza International Logistics Program, and a Research Affiliate at the MIT Centre for Transportation and Logistics. His main research interests are inventory and supply chain management, optimising distribution systems, contracts, and modelling of operations systems.

Dr Sena Ozdemir is a Senior Lecturer at the University of Lancaster. Her research spans a range of subjects including strategic alliances in new product development (NPD), global NPD, ‘Big Data’ Analytics (including Customer and Marketing Analytics) and, more in general, digital technologies for innovation and social/sustainable innovation.

Prof. Vania Sena is the Chair of Enterprise and Entrepreneurship at the University of Sheffield and a member of the Operational Research Society. She is the Lead of the ESRC-funded Yorkshire and Humber Office for Data Analytics and a co-investigator of the Yorkshire Policy Engagement Research Network. Her main research interests are SMEs, innovation, productivity and entrepreneurship.

Dr Oznur Yurt is a Senior Lecturer in Operations & Supply Chain Management at the Open University, and an Adjunct Professor at Izmir University of Economics. She investigates the intersection between supply chain management and business-to-business marketing, focusing on service supply chains, buyer-supplier relationships, procurement management, service networks, food supply chains, sustainable transport, and supply chain resilience.

Prof. Wantao Yu is a Professor of Supply Chain Management at the University of Roehampton. His research interests include digital supply chain management, supply chain sustainability, modern slavery in supply chains, and lean, agile, resilient and green (LARG) supply chain management.

UNIVERSITIES MAKING AN IMPACT

This issue reports on a project carried out at the London School of Economics and Political Science. If you are interested in availing yourself of the opportunity to have a project help your organisation, please contact the OR Society at education@theorsociety.com.

REBALANCING LONDON’S BIKE-SHARING SYSTEM: A FICO® XPRESS C++ API CASE STUDY

and Analytics)

As urban populations continue to grow, cities around the world are increasingly relying on sustainable transportation solutions like bike-sharing systems. But as bike stations across the city experience fluctuating demand, rebalancing bikes between stations has become a significant challenge. This is known as the Bike Rebalancing Problem (BRP).

For my post-graduate project, I focused on the Santander bike system in London, spanning over 800 stations with more than 12,000 bikes. The data used in my project was obtained from Transport for London (TfL), and this system’s rebalancing problem was addressed using advanced optimization techniques and the new FICO® Xpress API for C++.

A typical bike-sharing system consists of a network of docking stations where users can rent and return bikes. Over the course of the day, some stations become empty (no bikes available to rent), while others become full (no space to return bikes). This imbalance is inconvenient for users and can result in operational inefficiencies. Solving the BRP involves determining how best to redistribute bikes across these stations, ensuring that bikes are available where they are needed and that docking stations have enough space for returns.

In the London bike system specifically, more than 40% of the 800 bike stations regularly experience a net flow of more than 5 bikes in or out of the station, indicating the need for rebalancing. Also, at this scale, solving the BRP presents a significant logistical challenge.

The main complexity of the BRP arises from the unpredictability of user demand. To account for the stochastic demand for bikes across stations, this study formulated the problem as a two-stage stochastic mixed-integer problem, where the first stage represents the initial bike allocation to each station at the start of the day, before the exact demand for the day is known, and the second stage represents the redistribution actions after the random demand has been realized. The objective was to minimize the total cost, which included both the cost of moving bikes between stations and a penalty cost for unmet demand due to either overfull or empty bike stations.

To solve this two-stage stochastic optimisation problem, the stochastic demand was first captured through a finite set of scenarios (this is standard practice in stochastic programming), thereby resulting in the Deterministic Equivalent Problem (DEP). Essentially, this is one large deterministic version of the (otherwise inherently stochastic) problem and can be solved by a mathematical solver (Figure 1).
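The article does not print the formulation, but a stylised DEP in this spirit – with notation chosen here purely for illustration, where x_i is the initial allocation at station i (with dock capacity C_i), y^s_ij the bikes moved from i to j in scenario s with probability p_s, u^s_i the unmet demand, d^s_i the realised net demand, t_ij the transport cost and pi the shortage penalty – might read:

    \min_{x,\,y,\,u}\;
      \sum_{s \in S} p_s \Big( \sum_{i \ne j} t_{ij}\, y^{s}_{ij} \;+\; \pi \sum_{i} u^{s}_{i} \Big)
    \quad \text{s.t.} \quad
      0 \le x_i \le C_i, \qquad
      u^{s}_{i} \;\ge\; d^{s}_{i} - \Big( x_i + \sum_{j} y^{s}_{ji} - \sum_{j} y^{s}_{ij} \Big), \qquad
      x_i,\; y^{s}_{ij} \in \mathbb{Z}_{\ge 0},\;\; u^{s}_{i} \ge 0 .

Writing out one copy of the second-stage variables and constraints per scenario is exactly what makes the DEP large, which motivates the decomposition described next.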

One of the key challenges in solving the BRP is its scale. For example, with 800 bike stations in London and many possible demand scenarios, the problem can quickly become too large to solve directly. To address this, the study applied a technique called Benders Decomposition. This works by breaking the problem’s objective and its constraints into a main problem and multiple smaller subproblems. First, the main problem (focusing on the first-stage initial bike allocation decisions) is solved, after which the subproblems (handling the second-stage bike redistribution decisions) can be solved. These subproblems then communicate back some information to the main problem (called optimality cuts and feasibility cuts, derived using duality theory), and with each iteration of solving the main and subproblems, more information is provided to the main problem until a global optimal solution is found. This is known as the L-shaped method (a schematic of the basic loop is sketched after the list below). By using this decomposition technique, the study was able to handle the complexity of the BRP more efficiently (Figure 2). Additionally, the study implemented several enhancements to improve the performance of the algorithm, including:

• Multi-cuts—Generating multiple cuts per iteration to speed up convergence.

• Warm starts—Using solutions from previous iterations as a starting point for the current iteration, reducing the time needed to find optimality.

• Constraint pool and Callbacks – As the solver finds multiple (sub-optimal) integer solutions during an iteration, callback functions generated additional new constraints which were stored in a constraint pool and added to the problem in a batch at the end of the iteration, thereby decreasing the number of iterations required until convergence.
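To make the L-shaped loop concrete, here is a deliberately tiny, runnable sketch on a one-station toy allocation problem, without any of the enhancements above; the real study used the FICO Xpress C++ API and a far richer model, and all numbers below are invented.

    # Schematic L-shaped (Benders) loop on a toy one-station allocation problem.
    # Purely illustrative; not the FICO Xpress C++ implementation from the study.
    import numpy as np
    from scipy.optimize import linprog

    c, q, x_max = 1.0, 10.0, 40.0                # allocation cost, shortage penalty, dock capacity
    scenarios = np.array([10.0, 25.0, 35.0])     # possible demands
    probs = np.array([0.3, 0.5, 0.2])

    cuts = []                                    # optimality cuts: theta >= slope*x + intercept
    for iteration in range(20):
        # Main problem over (x, theta): min c*x + theta subject to the cuts found so far.
        A_ub = [[slope, -1.0] for slope, _ in cuts]
        b_ub = [-intercept for _, intercept in cuts]
        res = linprog([c, 1.0], A_ub=A_ub or None, b_ub=b_ub or None,
                      bounds=[(0, x_max), (0, None)])
        x, theta = res.x

        # Subproblems: recourse cost q*max(d - x, 0); its dual value is q when demand is unmet.
        duals = np.where(scenarios > x, q, 0.0)
        expected_recourse = float(probs @ (duals * (scenarios - x)))
        if expected_recourse <= theta + 1e-6:    # no cut improves the main problem: optimal
            break
        # New optimality cut: theta >= sum_s p_s * dual_s * (d_s - x), which is linear in x.
        cuts.append((float(-(probs @ duals)), float(probs @ (duals * scenarios))))

    print(f"allocate {x:.0f} bikes; expected total cost {c * x + expected_recourse:.0f}")

In the full BRP the subproblems are LPs solved per scenario, feasibility cuts are also needed, and the enhancements in the list above reduce the number of iterations.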

The study conducted several experiments to test the scalability and performance of the FICO Xpress C++ API on different instances of the London BRP. These instances varied in size, with up to 800 stations and 50 demand scenarios. Three different solution methods were compared: solving the full Deterministic Equivalent Problem (DEP) directly, ‘standard’ Benders Decomposition, and ‘enhanced’ Benders Decomposition (with the features just mentioned). The last approach achieved up to a 100x speed-up compared to regular Benders. This demonstrates how the enhancements, facilitated by easy-to-use functions in the API, make the decomposition algorithm effective for large-scale optimization problems like the BRP. However, the performance of Benders Decomposition can be highly dependent on the formulation of the problem. I came up with two different but equivalent formulations of the BRP: the DEP of one formulation scaled better when solved directly, while the other formulation performed better when decomposed. These results highlight the effectiveness of the FICO Xpress C++ API and provide practitioners with applicable insights for leveraging modern APIs in solving large-scale optimization problems—all while offering cities a practical solution for optimizing their bike-sharing systems. As bike-sharing continues to grow in popularity, solving the BRP efficiently will become increasingly critical.

The C++ implementation of the BRP is publicly accessible as a framework on GitHub (https://github.com/fico-xpress/BikeRebalancingProblem). FICO has released a blog post on this project (https://community.fico.com/s/blog-post/a5QQi0000032ihZMAQ/fico5262).

FIGURE 2 BENDERS DECOMPOSITION
FIGURE 1 DETERMINISTIC EQUIVALENT PROBLEM (DEP) FOR THE TWO-STAGE STOCHASTIC PROBLEM

SERENDIPITY’S COUSIN

Think of some people in history who were highly creative. Who came to mind? Maybe an artist like Rembrandt or Picasso, a composer like Mozart or Stravinsky, a scientist like Newton, Darwin or Einstein or an inventor like Edison or Jobs. Or perhaps someone who was outstandingly creative in several fields – Leonardo da Vinci (Figure 1) or Marie Curie say (or even Hedy Lamarr!). Or perhaps it was not an individual but a creative partnership – like Watson and Crick, or Lennon and McCartney?

What about us more ordinary mortals, in particular managers or analysts? Isn’t creativity important for all of us? Could we become more creative? How?

Interesting and valuable things can be discovered by chance – serendipity; but what is being looked for here is serendipity’s cousin – creating something interesting and valuable by systematic means.

Operational Research (OR) is an improvement science so let’s look at where creativity fits into the job of improving something – whether a policy, a process or a product.

ANALYSIS AND SYNTHESIS

Improving something generally requires both analysis and synthesis. Analytical thinking is typically reductive –breaking something down into its component parts to help understand how they fit and work together. Synthesis involves thinking how components could be modified, assembled differently, or supplemented to produce something different and better.

The need for an interplay between analysis and synthesis in problem solving is well illustrated by the story of how the structure of DNA was found. Much vital empirical investigation, particularly Franklin’s x-ray crystallography and Chargaff’s chemistry, had identified key pieces of the puzzle, but it was Watson and Crick – who did no laboratory experiments but used their combined skills to put these pieces together, building models to help them see which structure fitted best – who found the full solution (Figure 2).

The double helix structure of DNA provides a nice pictorial analogy for the generative interconnection between analysis and synthesis.

CREATION AND EMERGENCE

Seeing creativity as involving the formation of a new pattern from combining two or more already existing ideas or things – as in Newton’s “standing on the shoulders of giants” and, for that matter, all chemical synthesis – was the key concept in Arthur Koestler’s classic work The Act of Creation [1]. The new combination will likely have properties (“emergent” properties) that its components do not possess (just as table salt has very different properties from its constituent elements of a reactive metal and a poisonous gas). DNA exemplifies this fundamental point. The pairing of bases underpins its self-replicating abilities, and its sequence of millions or billions of these four (just four!) basic components allows gazillions of different permutations for coding for the creation of myriads of proteins.

FIGURE 1 LEONARDO DA VINCI SKETCHES
FIGURE 2 DNA-LIKE INTERTWINING OF ANALYSIS AND SYNTHESIS

LEARNING TO BE CREATIVE

Writing this piece has been prompted by recently reading the book Creativity for Scientists and Engineers: A practical guide [2], by Dennis Sherwood. I have already drawn upon that book in giving the story about DNA and will be saying more about it later (it has more general relevance than its title might suggest) but it brought to mind other books on creativity that I have come across (and hopefully learnt from) over the years.

I think the first was How to Solve it by George Polya [3]; this classic presents various approaches to getting a grip on mathematical problems. It gives important clues to the connections between analysis and synthesis – for example it suggests, after decomposing a problem into simpler parts, trying to recombine some or all of these elements in new ways that produce a more accessible problem that might be used as a bridge to solving the original one.

The next ideas that I came across about creativity were from some of Edward de Bono’s (many) books on lateral thinking and related approaches [4]. Lateral thinking typically involves deliberate attempts to associate things that are currently unassociated, and to disassociate things that currently are associated, in order to generate something new. In his books de Bono introduced a raft of approaches to support such thinking – such as Plus/Minus/Interesting, Stepping Stones/Random Words, Six Thinking Hats. One of their many insights is the importance of considering alternative ways of perceiving or frame-setting for a problem. For example, what use is an adhesive that isn’t very sticky? “None” was the thinking in the company where it had been discovered (3M), until one of its employees, irritated by slips of paper he was using as bookmarks keeping falling out, came up with the (highly profitable) idea of what is now the ubiquitous Post-it note.

Other, highly practical, books on creative thinking that I have found stimulating are Conceptual Blockbusting by James Adams [5], A Whack on the Side of the Head by Roger Von Oech [6], and Thinkertoys: A Handbook of Creative-Thinking Techniques by Michael Michalko [7].

All of the above – and indeed other related approaches such as Gordon’s “synectics” or Zwicky’s “morphological analysis” – emphasise the feasibility and importance of deploying disciplined and deliberate approaches to creativity. That may sound like an oxymoron but recognising its truth opens the door to seeing creativity as a learnable skill.

Which takes me back to Creativity for Scientists and Engineers: A practical guide. This book addresses issues of how to have good ideas on demand, how to judge between good ideas and bad ones, and how to build a sustainable innovation culture. It also presents a range of fascinating recent accounts from eminent scientists and engineers of creative thinking in action.

Although the book is aimed at scientists and engineers, its approach to creativity is of wide application. Put simply, it is for a group to select a topic, e.g. a problem or process; for the group members to write down everything they each know about it; and for the group then to select one of the identified features and ask how it could be different, and with what consequences (and then to repeat this step for another feature). Sherwood elaborates his approach by showing how it can be used for idea generation across what he terms the six domains of creativity (content, processes, strategy, structures, relationships and You!) and how this, in an organisation, is influenced by the context of organisational culture.

CREATIVITY FOR AND BY ANALYSTS

Some years ago I came across Creative Thinking in the Decision and Management Sciences by James Evans [8]. This is essentially a textbook, with lots of exercises, and sets out a structured process for how to apply creative thinking at every step of problem solving (described by Evans as a sequence of mess-finding, fact-finding, problem-finding, idea-finding, solution-finding and acceptance-finding) with divergent (generation of alternatives) and convergent (choosing from alternatives) thinking applied (to varying degrees) at each step. Evans draws attention to the emphasis given to creative thinking by one of the early luminaries of OR, Russ Ackoff, in his books The Art of Problem Solving and Idealised Design (see also the review by René Vidal [9], https://paperity.org/p/265904082/creativity-for-operational-researchers).

There are some memorable historical examples of impactful creative thinking by analysts. For example, during WW2 operational researchers combined insightful analysis of problems with creative approaches to solutions, as shown by the story of how the efficacy of depth charges was increased. Prior to the research, charges had been set to detonate at a depth of 100 feet, as that was the depth that most U-boats reached after being spotted by RAF planes. Analysis showed however that there was little chance of success against this majority of U-boats, as their position after diving was too uncertain. The best chance was the somewhat counterintuitive approach of attacking the minority – those that were still near the surface (because they had not timeously seen the spotter planes) and so of more accurately known position. That required setting the charges to 25 feet – a setting which apparently they did not originally allow, so some redesign was required! Successful attacks tripled; captured U-boat crews thought a vastly more powerful explosive was being deployed.

Creative thinking is also important in the development of analytical methods. For example, back in the late 1940s one of the pioneers of OR, George Dantzig, was seeking to find a way of solving a planning problem (which was formulated as what we now call a linear programming problem). He had written a doctoral thesis on statistics which contained geometrical concepts that he realised could be applied to his new problem. The result was a groundbreaking method for solving linear programming problems. Dantzig was awarded the US National Medal of Science “For inventing linear programming and discovering methods that led to wide-scale scientific and technical applications to important problems in logistics, scheduling, and network optimization”. His invention – the simplex algorithm – has been reckoned to be one of the top ten algorithms of the 20th century: the front cover of an edition of the New Scientist featured it as “the algorithm that runs the world”!

CREATIVITY QUOTES

I will finish with a few quotes on creativity from some famous creatives:

“Creativity is seeing what everyone else has seen, and thinking what no one else has thought.” – Albert Einstein

“Creativity is just connecting things.” – Steve Jobs

“The best way to get good ideas is to get lots of ideas and throw away the bad ones.” – Linus Pauling

“Creativity is a wild mind and a disciplined eye.” – Dorothy Parker

“You see things; and you say, ‘Why?’ But I dream things that were never there; and I say, ‘Why not?’” – George Bernard Shaw

Dr Geoff Royston is a former President of the OR Society and a former Chair of the UK Government Operational Research Service. He was Head of Strategic Analysis and Operational Research in the Department of Health for England, where for almost two decades he was the professional lead for a large group of health analysts.

FOR FURTHER READING

[1] Koestler A (1964). The Act of Creation. London: Hutchinson.

[2] Sherwood D (2022). Creativity for Scientists and Engineers: A Practical Guide. Bristol: IOP Publishing.

[3] Polya G (1990). How to Solve It: A New Aspect of Mathematical Method. London: Penguin.

[4] de Bono E (1964). Lateral Thinking: A Textbook of Creativity. London: Penguin Life.

[5] Adams JL (2019). Conceptual Blockbusting: A Guide to Better Ideas, 5th Ed. New York: Basic Books.

[6] von Oech R (2019). A Whack on the Side of the Head. New York: Grand Central Publishing.

[7] Michalko M (2006). Thinkertoys: A Handbook of Creative-Thinking Techniques. Berkeley, CA: Ten Speed Press.

[8] Evans JR (1991). Creative Thinking in the Decision and Management Sciences. London: Thomson Publishing.

[9] https://paperity.org/p/265904082/creativity-for-operational-researchers


MAKE 2025 A YEAR OF PERSONAL DEVELOPMENT

Advance your career with expert-led training from The OR Society. Our comprehensive courses cover essential topics in Operational Research, Data Science, and Analytics, designed to boost your skills and enhance your professional impact.

WHY CHOOSE US?


◆ Expert Instructors: Learn from industry-leading professionals.

◆ Flexible Learning: In-person and online options to fit your schedule.


◆ Recognised Credentials: Gain certifications that employers trust.


◆ Cutting-Edge Content: Stay ahead with the latest industry practices.

Explore our courses: www.theorsociety.com/training

