TEST – November 2018


NOVEMBER 2018

THE DEVOPS INDUSTRY AWARD WINNERS INSIDE! ADOPTING AGILE BIG DATA HANDLING QA PROFESSIONAL MASTERY APP DEVELOPMENT ADOPTING DEVOPS

HEALTHCARE+

FINALISTS INSIDE


2019 | Practical Presentations | Workshops | Networking | Exhibition | #NationalTestConf

The National Software Testing Conference is a UK‑based conference that provides the software testing community at home and abroad with invaluable content from revered industry speakers; practical presentations from the winners of The European Software Testing Awards; roundtable discussion forums facilitated and led by key figures; as well as a market-leading exhibition, which will enable delegates to view the latest products and services available to them. The National Software Testing Conference is open to all, but is aimed at, and produced for, those professionals who recognise the crucial importance of software testing within the software development lifecycle. The content is therefore geared towards C‑level IT executives, QA directors, heads of testing and test managers, and senior engineers and test professionals. Taking place over two packed days, The National Software Testing Conference is ideal for all professionals aligned with software testing – a fantastic opportunity to network, learn, share advice and keep up to date with the latest industry trends.

Register your interest today: SoftwareTestingConference.com

TEST Magazine | November 2018



CONTENTS

04  THE NEED FOR AGILE HEALTHCARE – Adopting agile can help healthcare organisations adapt to challenges
08  WHAT'S ON THE HEALTHCARE HORIZON? – It's an exciting time for application developers in the healthcare market
12  THE FUTURE HEALTH OF TESTING – Today's QA professionals must work on mastering new tech
16  INFORMATION OVERLOAD – It's time to unlock big data in healthcare and tackle those data sets
24  When should you use synthetic vs. production data for testing?
28  Adopt DevOps if you want to build quality software rapidly
32  There is a shortage of skills in artificial intelligence and algorithmic engineering
37  European Software Testing Awards Finalists


INDUSTRY EVENTS

DevTEST Focus Groups consist of 16 syndicate rooms, each with its own subject, facilitated by a knowledgeable figure. Each debate session runs three times through the course of the day, with spaces limited to 10 delegates per session, ensuring meaningful discussions take place and all opinions are heard. Each session lasts 1.5 hours, giving participants the opportunity to get inside the minds of their peers and understand varying viewpoints from across the industry.

The National Software Testing Conference is a UK‑based conference that provides the software testing community at home and abroad with invaluable content from revered industry speakers; practical presentations from the winners of The European Software Testing Awards; roundtable discussion forums that are facilitated and led by key figures; as well as a market leading exhibition, which will enable delegates to view the latest products and services available to them.

softwaretestingnews.co.uk

softwaretestingconference.com


The National DevOps Conference is an annual, UK-based conference that provides the IT community at home and abroad with invaluable content from revered industry speakers; practical presentations; executive workshops, facilitated and led by key figures; as well as a market leading exhibition, which will enable delegates to view the latest products and services available to them.

The DevTEST Conference North is a UK-based conference that provides the software testing community with invaluable content from revered industry speakers; practical presentations from the winners and finalists of The European Software Testing Awards; Executive Workshops, facilitated and led by key figures; as well as a market-leading exhibition, which will enable delegates to view the latest products and services.

The DevOps Industry Awards celebrate companies and individuals who have accomplished significant achievements when incorporating and adopting DevOps practices. The Awards have been launched to recognise the tremendous efforts of individuals and teams when undergoing digital transformation projects – whether they are small and bespoke, or large complex initiatives.

devopsevent.com

devtestconference.com

devopsindustryawards.com


The European Software Testing Summit is a one-day event, which will be held on the 12th December 2018 at The Hatton, Farringdon, London. The Summit will consist of up to 100 senior software testing and QA professionals, who are eager to network and participate in targeted workshops. All delegates will receive printed research literature and have the chance to interact with The European Software Testing Awards' experienced Judging Panel, as well as receive practical advice and actionable intelligence from dedicated workshops. softwaretestingsummit.com

THE EUROPEAN SOFTWARE TESTING AWARDS

Now in its sixth year, The European Software Testing Awards celebrate companies and individuals who have accomplished significant achievements in the software testing and quality assurance market. Enter The European Software Testing Awards and start on a journey of anticipation and excitement leading up to the awards night – it could be you and your team collecting one of the highly coveted awards.

The European Software Testing Summit is a one-day event, which will be held on the 11th December 2019 at The Hatton, Farringdon, London. The Summit will consist of up to 100 senior software testing and QA professionals who are eager to network and participate in targeted workshops. Delegates will receive research literature and have the chance to interact with The European Software Testing Awards' experienced Judging Panel, as well as receive practical advice and actionable intelligence from dedicated workshops. softwaretestingsummit.com

softwaretestingawards.com



EDITOR'S COMMENT

IS AI THE 'NEW' ANTIBIOTIC?
BARNABY DRACUP, EDITOR

HEALTHCARE

Perhaps not since the discovery of penicillin has the world's healthcare industry been on the precipice of such revolutionary change. The industry is facing many challenges and, although we are all well aware of the publicised lack of funding and the numerous strains on resources here in the UK, it is the areas of artificial intelligence (AI), cybersecurity, IoT, patient experience and even disaster preparedness that are of prime concern to healthcare executives, doctors, clinicians and other healthcare professionals, according to the PwC Health Research Institute.

Focusing on AI: for me, unsurprisingly, it came near the top of the list. AI is seemingly making its way into nearly every industry and is becoming a buzzword among Joe Public and professionals alike as the Holy Grail of efficiencies; a time-saving, problem-solving, mistake-negating cure-all. In researching this edition of TEST Magazine, I spoke to industry professionals from numerous backgrounds, from computer scientists involved in biotechnology and machine learning to clinical specialists developing 'out of the box' AI-powered technology, easily deployable in the field, to help improve eyecare in developing nations. All of them, to put it bluntly, were extremely excited and enthusiastic about the future potential of AI technology and software within healthcare.

AI is obviously exciting in terms of the patterns and relationships it can uncover in data that humans can't, and it's when the consequences of this ability are extrapolated – from the data-input end all the way to the end-user and patient – that its impact can truly be seen. After all, humans are 'human' and will always make mistakes when undertaking repetitive tasks that require attention and focus and, even if they can't quantify it, will always have some operational bias they aren't aware of. A clinician who has had a recent case with an adverse outcome will be more likely to be on the lookout for similar issues, and this will affect their judgement. An AI has no such issues, and it's these 'inhuman' capabilities that make AI, in any sector, so exciting and potentially beneficial.

The next stage for this sort of technology will be for it to be diffused into clinical workflow, helping considerably in detecting diseases early and in assisting clinicians to reduce clinical variability. AI, of course, doesn't vary or have an off-day, so the likelihood of false positives and/or negatives and referrals in diagnosis is vastly reduced, resulting in significant savings for the healthcare system as a whole.

However, while AI brings a host of benefits for the healthcare industry and its patients alike, its pace of development means it is already approaching some obstacles. As several clinicians told me, one of the general problems with these new areas of tech is that they're outpacing the ethical frameworks, legal frameworks and physical and non-physical infrastructure of the healthcare sector. They unanimously agreed on the need to make sure the technology is safe, that it does no harm, and that it protects patients in terms of privacy, confidentiality and data security. In relation to this, it was encouraging to read recently that the NHS has produced a code of conduct for data science companies – something I strongly believe all data science and AI companies should adhere to over the coming years.



NOV 2018 | VOLUME 10 | ISSUE 5
© 2018 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part, without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor of TEST Magazine or its publisher, 31 Media Limited.
ISSN 2040-01-60

EDITOR: Barnaby Dracup | editor@31media.co.uk | +44 (0)203 056 4599
STAFF WRITER: Islam Soliman | islam.soliman@31media.co.uk | +44 (0)203 668 6948
ADVERTISING ENQUIRIES: Shivanni Sohal | shivanni.sohal@31media.co.uk | +44 (0)203 668 6945
PRODUCTION & DESIGN: Ivan Boyanov | ivan.boyanov@31media.co.uk; Roselyne Sechel | roselyne.sechel@31media.co.uk

31 Media Ltd, 41-42 Daisy Business Park, 19-35 Sylvan Grove, London, SE15 1PD | +44 (0)870 863 6930 | info@31media.co.uk | testingmagazine.com
PRINTED BY: Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA
softwaretestingnews | @testmagazine | TEST Magazine Group




THE NEED FOR AGILE HEALTHCARE

Adopting agile can help healthcare organisations adapt to the changing needs of patients, regulations and technology without the need for organisation-level restructuring

In this era of ever-changing external factors, such as regulatory guidelines and emerging technologies, healthcare organisations need to be responsive, nimble and able to seize opportunities more than ever in order to provide the highest level of patient care while remaining profitable. Being agile will be an increasingly critical capability moving forward, considering the industry's turbulence, complexity and accelerating speed of change. In this environment, healthcare leaders must choose to lead the pursuit of greater agility or risk being left behind.

WHAT IS AGILE/AGILITY?

Agility is our ability to respond to change. The world around us is changing faster


than ever. For a business to succeed, what is often known as 'business agility' matters most: responding to changing customer demand, market conditions, new technology entrants and even legislation or customer perceptions. Keeping one step ahead of the game requires the ability to respond swiftly to change when needed. There's a fine line between chaos and carefully deviating from the plan in a considered way to meet changing demands. This is where agile comes in. Agile is a set of tools and techniques that help us achieve agility. Agile is not a new term; it's an extension of the way organisations have always worked. For example, planning

PALLAVI KUMAR
SENIOR ANALYST PROGRAMMER, NHS
Pallavi is a senior software development professional and certified Scrum Master. She is passionate about learning new technologies and about challenging and improving existing processes to enhance their efficiency.



is an agile tool – we have always planned whatever we did – but, like all agile methods, the key difference is that an agile planning session is short, focused and done as a team. Putting people at the centre is at the heart of every agile method, and we can probably spot any agile technique in the wild quite easily – if it involves a cross-section of skills and disciplines collaborating, it's likely to be agile in action. Agile frameworks like Scrum and Kanban are a good place to start if you'd like to begin being agile or introduce agility into your organisation.

THE AGILE MANIFESTO

According to the agile manifesto, agile is summed up as 'uncovering better ways of developing software by doing it and helping others do it'. Through this work we have come to value: • Individuals and interactions over processes and tools • Working software over comprehensive documentation • Customer collaboration over contract negotiation


• Responding to change over following a plan.

That is, while there is value in the items on the right, we value the items on the left more. This might seem to be more focused on software development, but the core values hold true for any industry.

THE HEALTHCARE SECTOR NEEDS AGILE

Many healthcare organisations have struggled to keep pace with an ever-changing business landscape. Thus, the pursuit of being agile – an organisation's ability to adapt quickly and successfully in the face of rapid change – has taken on increased importance. In numerous industries, the companies thriving most have managed to crack the paradox of agility, balancing a stable foundation of core processes and capabilities with the ability to dynamically redeploy those capabilities to address emerging challenges and opportunities. Both stability and dynamism are needed to excel – organisations that get the balance wrong can find themselves either


struggling to keep up or burdened with a bureaucracy that leaves them unable to respond to changing market conditions. The concept of agility is especially relevant to healthcare, which has endured tremendous upheaval in recent years: the industry continues to see strong growth, while financial and regulatory pressures have grown. With immense and increasing pressure on healthcare services, there is a dire need to become agile in order to manage existing resources and services efficiently and effectively.

CONSEQUENCES OF NOT BEING AGILE

The concept of agile and agility stretches back more than a century, with organisations from the US military to Japanese manufacturers serving as evangelists. More recently, the software development industry began focusing on agile methodologies, and the concept has spread to application development functions within more traditional industries. In some companies, bureaucracy had so slowed product development cycles that businesses





were investing hundreds of millions in major IT applications only to find that evolving customer needs had rendered them obsolete by the time of release. An agile development process has enabled companies to work faster and more collaboratively to reduce time to market for new products, ensuring they respond swiftly to changing external factors and customer needs. Organisations that are not agile are often so slow to adapt that they must pursue a fairly disruptive organisational restructuring every two or three years just to keep up with changes in the market – costing funds, resources and effort in ways that are detrimental to the long-term goals of the company. Considering the growing pressure on healthcare services, organisations need to be more efficient in delivering their services and, given the shortage of clinical staff and an ageing population, need to be able to utilise their resources effectively – more so now than ever.

HOW CAN AGILE BE INCORPORATED?

At the heart of agile is the concept of small, self-organising groups working on agreed-upon priority tasks. This iterative component defines and directs the work, rather than a big project plan created at the beginning of the initiative. Scrum adds a component of time: the 'sprint', during which teams commit to a certain set of deliverables. Below are some of the key learnings the healthcare sector can take from other agile organisations.

Creating cross-functional teams and increasing communication and engagement
Healthcare organisations should look at moving away from a hierarchical structure towards cross-functional teams, which usually consist of colleagues with different areas of expertise and discipline. A product owner takes care of the backlog items and priorities, depending on the project or goal. Cross-functional teams can be ramped up, ramped down or reconfigured depending on need, and team members can be co-located to increase communication and engagement within the team.

T E S T M a g a z i n e | N o v e m b e r 2 01 8

"Considering the growing pressure on the healthcare services, organisations need to be more efficient in delivering their services"

In healthcare, members from different departments in a hospital can be brought together to create cross-functional teams depending on the need: for instance, shared services for functional areas (e.g. HR, claims) and 'centres of excellence' in specialised areas (e.g. quality, compliance). The business lines can draw on these shared services as necessary for resource-intensive but infrequent efforts, and then set priorities and direct activities for the shared staff assigned to their projects (an example of such an effort would be developing the approach for responding to a large request for proposal). This agile setup allows the organisation to quickly develop and scale capabilities that can then be rapidly deployed as needed.

Increased customer collaboration
Hospitals can look at collaborating more with patients by getting regular feedback on the services they receive. Involving patients in pilots for new applications or services, and acting on their feedback, supports the continuous learning and follow-on improvement that are a very important part of agile.

Using agile as an alternative to traditional 'waterfall' management
Waterfall project management involves doing all the planning in advance, before any work is started – which is how traditional healthcare organisations have worked. To begin with, healthcare organisations can look at departments that might already be agile, such as IT or healthcare innovation, or otherwise start adopting agile in the departments where it is easiest to follow. Then, step by step, they can launch agile in other areas, where it can be used as an alternative to traditional waterfall project management or alongside it.

HEALTHCARE CAN BECOME MORE AGILE

Healthcare organisations can become more agile through the following steps:
• Define the strategic objectives and identify how agile can contribute to them (e.g. increased patient care, enhanced care models)
• Assess the departments that might already be using agile, such as the IT and healthcare innovation departments
• Identify the areas where there is scope to pilot the agile model and prepare a road map to implement it
• Define the stable backbone for the organisation; the dynamic areas can be handled by agile teams
• Launch agile-based models, iterate and keep evolving with regular feedback from the teams
• Create scalability by moving the targeted departments to become more agile over time
• By implementing department by department, the organisation will eventually reach the point where agile becomes self-sustainable.

THE BENEFITS OF ADOPTING AGILE

Agile has been widely adopted across many industries because of the benefits it provides. Some of the key benefits healthcare organisations can achieve from using agile are as follows:
• Engaged staff: empowering staff to work in self-managed teams makes them more engaged in their work, producing better quality patient care with minimal errors
• Customer focused: agile emphasises focusing on customers and regularly getting feedback, so healthcare organisations can benefit from constant improvement
• Lowered risks: with agile, risks can be assessed very early in the process because of the iterative approach
• Faster ROI: as results are delivered more quickly with each iteration, benefits can be seen in the early stages
• Agility: adopting agile empowers organisations to respond quickly to change
• Improved performance transparency: adopting agile helps improve performance transparency in both clinical and non-clinical areas.



WHAT'S ON THE HEALTHCARE HORIZON?

It is an exciting time for application developers in the healthcare market, with professionals and consumers alike keen to start using new and innovative technologies

What can we expect the marketplace to look like, and what are the opportunities and challenges that lie ahead? The software landscape in the health service in the UK is a heterogeneous mix of systems and applications. Hospitals run everything from huge electronic patient record (EPR) systems to bespoke skunkworks applications built by frustrated clinicians. Many departments have their own specialist software applications catering to the particular needs of both patients and staff. Add in such things as clerking, HR and finance systems, to name but three, and the complex pattern of hospital information starts to emerge. We must also now consider the proliferation of applications created to monitor, track and engage citizens with their healthcare, and how that information finds its way into (or indeed out of) the healthcare economy. All this takes place with data that is incredibly sensitive and personal: clinical patient data. Not only can this data be


used to obtain personally identifiable information but also private health information that is vitally sensitive and deeply personal to individuals. This information and its usage are governed by health policy and regulated by a variety of standards and regulatory bodies, as well as data protection and privacy laws. Systems in the health service are not only complex but also need to talk to each other. Inter-system communication is vital, and that in itself creates a problem that clinical coding seeks to address. Hospital systems rely on the accurate coding of information to minimise risk and data loss. Drugs and devices are coded along existing guidelines, and specific languages and standards are used to ensure accurate internal resourcing, billing and prescribing. With all this in mind, it is worth remembering that the hospital IT market in the UK is worth €59bn and is of considerable appeal to all sorts of commercial interests, from start-ups to giant US corporations and hospitals. Those

BRIAN PAINTING
FOUNDER, BFHTS LTD
Brian has spent 20 years working with software companies in healthcare, developing award-winning innovative solutions.



who run them are open to any methods by which efficiencies can be achieved, service lines improved and, crucially, patients' health ameliorated. A patient with a set of complex comorbidities can have their vital signs monitored remotely, with pre-configured indicators set to alert their GP and care staff should the need arise. This can improve the wellbeing of the patient, as they can stay in the comfort of their own home; it reduces the risk of the patient acquiring a further infection in a care facility; and it reduces the cost to the hospital of keeping that patient in a bed. For any new application to be successful, this trio of parameters must be met, and met in the context of both security and ease of use. The latter is often the hardest to achieve. It must be remembered that, within the health service, software is just one of the tools healthcare professionals use to do their job, and computers can often be barriers to clinical interaction. Whilst computing has revolutionised diagnosis and the treatment of illness, on a quotidian level many computers are removed from the obvious patient journey, with data entry happening off-stage.

SECURITY WILL ALWAYS BE PARAMOUNT

The users in a clinical environment


are multitudinous, and their skill levels, confidence, language and time are all variables any new application has to consider. There is no standard worker profile for a hospital system unless the application is tailored to a specific job role and, even then, the above factors will come into play. Given that the NHS is the world's fifth-largest employer, it is impossible to achieve successful adoption without bearing these variables in mind. Security is, and will remain, the touchstone of any interaction with health information, and recent reports cited in the Financial Times and elsewhere relate worrying stories about hackers and international threats to healthcare data. It has been reported that hospital admin logins can be found on the dark web for as little as £1,000. With the growth of patients wanting access to their records and taking a more active role in their own healthcare, no one can approach the market without a thorough understanding of the risks involved in handling sensitive data. Extending this out to the public health sphere, it is worth remembering the axiom that 90% of healthcare funds are spent in the first five and last 10 years of life. Users here will often include carers or other third parties. This will require devolved authentication of access to data and consent models, as well as challenges

"Software is just one of the tools healthcare professionals use to do their job, and often computers can be barriers to clinical interaction"


with usability. Language, environment and security issues need to be borne in mind when assessing the suitability of any health tool. The accuracy of biometric data and sensors on phones is variable and most, at present, lack the degree of official scrutiny that would authorise them for use in a hospital, yet they are widely in demand by the public. With certain biometric data, accuracy is as critical to reducing the potential impact on health services as it is to potentially improving patient health. Fitness trackers, for example, could and perhaps should understand their demographic and have a pregnancy mode. Some recent stories around the challenges of fetal heart rate monitors sadly underline the need for a rigorous testing regimen.

NEXT GENERATION HEALTHCARE APPS

Turning to the future, what are the trends likely to shape the next generation of healthcare applications? Much attention has been given to AI, but this is further down the road and needs a more accurate and broader data set to become truly effective. The first move towards establishing this kind of longitudinal health record, that works along the interstices between the community and the healthcare professional, is most likely to be found in the advancement of the IoT and companion diagnostics. Both these require a comprehensive understanding of the calibration and measurements they are designed to capture and a rigorous testing regime to ensure they are accurate within


a healthcare regulatory framework. This is necessary to ensure that any risk of inaccurate results or false positives is removed from the device capturing the data, wherever possible. Within the controlled hospital environment, many such tools can be authenticated for use, with clinicians and professionals certain of their accuracy; in a home or day-to-day setting, ensuring that same level of consistency is key. These technologies are crucial to providing the kind of insight over time that we already know can be harvested and used to revise and, ultimately, potentially improve public health. In the short to medium term we can expect growth in three particular areas:
• Virtual/augmented reality: rapidly gaining traction in the marketplace, from mobility assessments and hospital navigation to the first forays into simple surgery
• Greater social media engagement: allowing hospitals and surgeries to promote public health news, dispel alarms and scares, and provide timely, accurate information. This is being throttled at present, but will soon be seen as a way to offer a cash-strapped service the ability to manage expectations on its resources
• Telehealth: the management of patients via technology – for example, the remote monitoring of diabetics without the need for face-to-face meetings, or offering certain consultations via Skype. This form of interaction is one that is currently

offered as a push technology rather than one that patients propose. Looking further ahead, I have spent the last three years working with a number of start-ups in the 3D printing and genetics spaces, and this too is a vital and potentially lifesaving field. In both these arenas, once again, the idea of extensive reporting from the user cohort is problematic. Major companies also throw their weight into the marketplace. Companies such as Microsoft, Google and Facebook work alongside Trusts to develop software and hardware that can improve outcomes, but too often these are not scientific or definitive programmes and can seem little more than puff pieces for whatever product they are trying to sell – though their potential applicability and genuine ability to bring innovation at low cost to healthcare providers cannot be denied.

CAUTION IS ADVISED

The nature of the market dictates a smaller representative data set, and the implications of poor products being rushed to market, hyped as the next big thing, are very real. Clinicians, as well as developers, fall too easily into the trap of assuming that technology that works in a particular setting must be effective outside of its test group. Here the stress must be on empirical trials that mirror, as closely as possible, current models of drug and device trials. It is worth noting at this point that whilst the health secretary, Matt Hancock, and


the team at NHS Digital have a declared enthusiasm for the advancement of new technologies, regulatory frameworks, testing and usability guidelines have not kept pace. There are a host of initiatives and trial programmes in play at any given point. Trusts such as Alder Hey, Nottingham and Coventry & Warwickshire have development laboratories and software testing regimes for innovative applications. The establishment and development of Academic Health Science Networks (AHSNs) has anchored research and academia within the health community and afforded a greater ability to understand the impact of technology on healthcare. That said, there is still no formal way of ensuring applications and technologies can be assimilated into the healthcare ecosystem authoritatively. The NHS App Library is a bold step forward, though even a cursory glance at its contents reveals a worrying gap between insightful application development and clinical need. Surveys reveal a genuine desire and need within the population for greater engagement with health professionals through their devices and, whilst this is appealing, a broader review of the economics and viability of applications to sensitively and accurately capture any diagnostic information is essential. In the meantime, we will continue to see organic growth in usage: over half of doctors polled share clinical information through instant messaging applications such as WhatsApp, and patients can and do

bring evidence gleaned from the internet to their GPs. The costs and risks of this approach are not yet fully understood, yet the opportunity to change this remains immense. For those seeking entry into this market, the following serve as a short, though not exhaustive, set of recommendations.

HEALTHCARE CAN BE A COMPLEX INTERDISCIPLINARY FIELD

Very few engagements with the health service involve a single touch; health is rarely simple, and development and subsequent testing need to consider all the various touch points an application could reach, from both ends of the engagement. Is this application connecting to the patient, but also to prospective carers: parents, spouses, siblings and care workers? Is the output on the clinical side available to other disciplines, and what are the clinical implications of that access?

PATIENTS SOMETIMES KNOW BEST

It is worth considering how and what data is available to a patient and what the implications are for them in how they view, consume and share information: how and where do they capture data, and what are the implications of the readings and results? Working with researchers at an Australian university, we were able to use a software application to determine the likelihood of elderly patients suffering a fall in the next six weeks. This information was invaluable in reducing risk and cost to the care facility; however, telling someone that they

were likely to fall in the near future did next to nothing for the emotional wellbeing of elderly patients. Similarly, much development on vital-sign technology and making it available to sufferers of long-term conditions has failed, as a substantial sample of that cohort reported that they didn’t want to look at their phone or device and be told how ill they were!

SCALE IS KING

The growth of cloud-based technologies offers massive scope for new diagnostic, epidemiological and consumer tools to rapidly advance the treatment of disease and promote wellness in both physical and mental health. Testing regimes should focus on the potential reach of their applications to as broad an audience as possible, whilst remembering the complex nature of illness and its many touch points within the health environment. The implications of getting this wrong are as great as the potential benefits of getting it right.

CONCLUSION

The marketplace is broad, and there is appetite and desire for change. This cannot be allowed to become some form of Wild West for developers; it needs an understanding between the clinical community and the developer, and a sympathetic and light touch with the consumers of healthcare services, as we start to leverage the great opportunities of technology in the advancement of treatment and better health.

T E S T M a g a z i n e | N o v e m b e r 2 01 8



THE FUTURE HEALTH OF TESTING

Today’s QA professionals must work on mastering new technologies such as AI, IoT, and blockchain in order to ensure that users will have the best experience possible

For the first time, end-user satisfaction ranks at the top of surveyed professionals’ testing strategy goals. According to the recently released World Quality Report 2018-19 (WQR), organisations are placing the spotlight on customer experience and experimenting more with artificial intelligence (AI) and machine learning (ML) to help them optimise their testing efforts. Significantly, the report found that 99% of respondents are using DevOps practices. DevOps aims to deliver value to the user as quickly as possible, while keeping quality and security high. This requires more automation, which is essential to speeding up the testing process. However, the WQR notes that automation is the biggest bottleneck holding back QA and testing today. This article will take a look at the most prominent findings of this year’s WQR,



including a deep dive into the healthcare industry, and will suggest some steps that organisations can take to ensure they focus on developing the skills of testing teams while also keeping users satisfied.

AUTOMATE MORE BUT ONLY WITH STRATEGY

Automation is essential to speeding up the testing process, but the level of automation is surprisingly low, at between 14% and 18% for certain activities (see Fig.1). This can be explained by a number of factors: the complexity of building a flexible test automation ecosystem that is robust in the face of the ever-changing functionality of applications; an inability to provision reliable test data and testing environments; and difficulties in developing the right skills and experience. Organisations need to focus more on end-to-end test automation; however,

RAFFI MARGALIOT
SENIOR VP AND GM, MICRO FOCUS
Raffi has over a decade of experience driving business strategy, product development and delivering innovative technology solutions that solve customer problems


QUALITY ASSURANCE


Figure 1.

ensuring that you are testing the right things consistently does not happen overnight. You might already be doing some automation, so what should you automate next? Jez Humble and David Farley state in their book, Continuous Delivery, that you don’t need to automate everything at once. The first step is to define your

goals and identify the bottlenecks that are preventing you from getting there. Work out what you need to do to resolve them and make that a part of your backlog so it can be prioritised appropriately. As you continuously prioritise and re-evaluate these items, you’ll introduce automation gradually and effectively over time. To maximise the chance of automation

success, the WQR recommends a staged approach coupled with test automation solutions augmented with smarter, more intelligent capabilities.

Step 1: Optimise testing

Before automating more test sets, organisations should first understand what their tests should be covering to maximise





Figure 2.

the amount of coverage and reduce risk. Teams today are using analytics from project data, production data and model-based testing, and they are adopting AI techniques to guide the direction of testing efforts. By understanding what parts of applications are most important to their customers, organisations can feed this information into their testing efforts to ensure they are focused on reducing risk to the features that are most valued by their users. This can be done by prioritising testing to favour the most sensitive and critical areas.

Step 2: Improve automation

Test automation involves many moving parts, not least of which is the test environment. Manually deploying test environments is a slow process that tends to be error-prone. By automating test environment deployment and configuration, organisations can achieve predictable and repeatable conditions for their testing efforts. Building this into the continuous integration pipeline reduces the risk of test failure due to environment issues, and reduces the time needed to determine the root cause of defects. While manual testing is unlikely to disappear completely, teams must look for opportunities to reduce the amount of manual testing performed. Replacing manual testing with automated testing will expose errors earlier and reduce release cycles, but the frequency of change means that automated tests are fragile. In addition to automating end-to-end tests, organisations must shift left and automate unit tests and regression tests at the API level. These tests should be run upon each check-in, and the results should


be made visible to the team by displaying them on a dashboard to ensure timely feedback and resolution in case a test fails.

Step 3: Make automation smarter

In the first step, I mentioned using analytics to make critical decisions about which tests to run. But analytics and AI also play a key part in creating a smarter testing architecture that helps in test case design and the identification of risk factors, as well as analysing failures to recommend possible solutions. Self-monitoring and self-healing architectures will also become more prevalent over the next few years. You don’t need to do this yourself. Testing tools today already incorporate many AI, ML and analytics features. In fact, the lifecycle management tools and test automation tools you’re using today are likely to include features such as:
• Pipeline failure analysis, to understand why a build failed, and to provide actionable insights to get the pipeline working again
• Hot-spot analysis, to identify areas of code that tend to be prone to defects
• Cognitive analysis, to identify text and data from applications and graphical images
• Anomaly detection, to identify abnormal behaviour in performance tests.
Leverage these features to help your organisation implement better and smarter automation throughout your DevOps pipeline.
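To make the last bullet concrete, anomaly detection over performance-test results can, at its simplest, mean flagging statistical outliers in response times. The sketch below is a minimal, hypothetical illustration in plain Python using z-scores; the threshold and sample data are invented for the example and are not drawn from the WQR or any particular tool.

```python
import statistics

def find_anomalies(response_times_ms, threshold=2.5):
    """Flag samples whose z-score exceeds the threshold.

    A deliberately simple stand-in for the 'anomaly detection'
    capability named above; real tools use far richer models.
    """
    mean = statistics.mean(response_times_ms)
    stdev = statistics.stdev(response_times_ms)
    if stdev == 0:
        return []  # perfectly steady series: nothing to flag
    return [(i, t) for i, t in enumerate(response_times_ms)
            if abs(t - mean) / stdev > threshold]

# A steady baseline around 100 ms with one obvious spike at index 8.
samples = [101, 99, 102, 98, 100, 103, 97, 100, 450, 99]
print(find_anomalies(samples))  # → [(8, 450)]
```

In practice the point is less the statistics than the workflow: results like these would feed the dashboard mentioned above, so an abnormal run is investigated rather than silently averaged away.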

ARTIFICIAL INTELLIGENCE

AI in testing refers not only to the application of AI to QA and testing, as we have already seen, but also to the testing of AI algorithms and applications. It should be noted that AI is an evolving technology that is highly technical in nature and, consequently, requires a certain amount of expertise to leverage correctly. Nevertheless, the results of the survey indicate that many respondents are either already adopting AI, or are planning to apply it over the next year (see Fig.2), to internal processes (62%), QA purposes (57%), and customer processes (64%). However, it’s not that easy. Although they understand the importance of AI and want to apply it, more than half of the respondents are struggling to identify where to apply AI, and many have experienced difficulty integrating AI with their existing applications. This indicates an increasing demand for people who have mastered an additional set of highly technical and mathematical skills, such as mathematical optimisation, natural language processing, and algorithmic knowledge.

THE RISE OF NEW ROLES

Finding professionals with these skills today is not easy, and it may become even more difficult as more organisations start adopting AI techniques. The World Quality Report discusses the rise of new roles in QA and testing:
• AI QA strategists, to understand how to apply AI to business. They will need to master both business and technical knowledge
• Data scientists, to sift through test data and use predictive analytics, mathematics, and statistics to build models. They must have a deep understanding and experience of data analysis techniques
• AI test experts, to be involved in the



testing of AI applications. In addition to traditional testing expertise, they will need to understand machine-learning algorithms and natural-language processing techniques.

While AI requires a new skill set for testers and QA professionals, it is by no means the only technological challenge for this group of staffers.

FOCUS ON OTHER NEW QA SKILLS

The report notes that trends such as AI, the Internet of Things (IoT), and blockchain all require new skills and roles for QA and testing professionals. The adoption of IoT technology is rising to the point where 97% of respondents have some kind of IoT presence in their products. IoT devices can capture very large amounts of data, which might be sent to the cloud for processing, or processed in part or entirely on the device itself (edge computing). These devices also communicate using a variety of protocols, some standard and some proprietary, and implement various sensors in hardware.

Some 66% of respondents also say they are either already using blockchain technology or have plans to do so over the coming year. No longer exclusively associated with the cryptocurrency Bitcoin, blockchain is being implemented in many systems that require a secure, distributed ledger to record transactions. Implementations must take into account security and data risks and mitigate dangers associated with integration into other systems.

Hiring personnel with the necessary skills can be a challenge, so the report recommends building up the skills within the current workforce. It presents a four-stage approach:
• Since automation is now an essential skill for QA today, find ways to bring in new agile test specialists with automation skills, or train your existing people
• Deepen the automation skillset in the team by adding software development engineers in test. They will be able to introduce and maintain automation throughout the development pipeline, and apply advanced skills and development techniques to make testing more efficient and maintainable
• Ensure the team has specialised skills in security, non-functional testing, test environments, and data management
• Bring in QA experts with AI skills to optimise the test focus and testing effort.

TAKING THE PULSE OF HEALTHCARE

The healthcare and life sciences sector has traditionally lagged behind when it comes to adopting the latest technologies and development methodologies. It is risk-averse, being highly regulated, but at the same time, patients are being put at the forefront. In fact, 43% of respondents in the healthcare sector indicated that ‘ensuring end-user satisfaction’ is their top priority, compared with 42% across all sectors. This is a very encouraging finding, and is what is pushing the sector towards a more holistic approach to healthcare and health management.

While the sector is adopting digital health solutions, such as apps to track and act on changes in health metrics, regulatory requirements must be met. This means that many organisations can’t break away from requirements-driven development to agile development, and are still testing as a distinct phase in the software delivery lifecycle. Organisations that are looking to improve their testing report that the biggest obstacles are a lack of end-to-end automation from build to deployment, slow testing processes, and a lack of skills for QA and testing staff.

Nevertheless, the sector is seeing adoption of cutting-edge technologies such as blockchain and IoT-enabled devices, and has even recognised software itself as a medical device. Such software used to be called ‘medical device software’ or ‘health software’, but today it is known as Software as a Medical Device (SaMD). This software has its own regulatory requirements to protect patients and promote safe innovation, requiring the sector to adapt to a different type of lifecycle testing.

Unlike most of the survey respondents, those in the healthcare sector indicate a certain reluctance to move to the cloud. Security continues to be the most important objective, with 47% putting it at the top of their IT strategy. AI is beginning to make inroads into the healthcare sector.
Organisations will be looking at it seriously to facilitate smart testing and automation, to help make sense of the thousands of combinations of scenarios that must be tested to reduce time to market while minimising the potential for human error.

THE PROGNOSIS FOR TESTING AND QA

In general, the outlook is excellent. The user is top of mind for QA professionals today, although they must work on mastering new technologies such as AI, IoT, and blockchain in order to ensure that users will have the best experience possible. This can be seen across all of the sectors that were analysed in the survey. The healthcare sector in particular will need to focus on automating test data and test environments, because of the huge number of scenarios that must be validated. With back-end systems that largely use waterfall methodologies, healthcare organisations will be looking towards a digital transformation over the next few years, enabled through smart automation. Their major challenges will be putting together a testing strategy and roadmap, and acquiring the skillset required to master the move to an agile and DevOps approach.

While the World Quality Report includes findings and implementation recommendations for addressing emerging trends in the QA and testing space, it also highlights a quickly changing environment that will need the right people and tools to move forward successfully. Whether we choose to concentrate on the healthcare sector or other industries covered by the WQR, growing areas like automation and AI, when paired with the right QA skills on a team, will ultimately have a positive effect on the experience of the end user.




INFORMATION OVERLOAD

It’s time to unlock data in healthcare, but managing growing data sets while determining what’s useful can often be overwhelming

Data is increasing in volume and variety at an exponential rate. In fact, a commissioned study of IT decision makers (ITDMs) found that 74% of ITDMs, across all sectors in the US and UK, believe their organisation has more data than ever before, but they are struggling to use it effectively to generate meaningful insights. This shows that, while technology is seen as a help, it can also act as a barrier, preventing organisations from managing their data properly. For those in the healthcare sector, managing data properly is a significant problem: 81% of ITDMs in the public sector, which includes public healthcare services, are struggling to harness the data they have to generate useful insights. Unfortunately, this doesn’t come as a surprise.



LETTING GO OF LEGACY

For a long time, public healthcare has been known to have some of the largest and most complex data sets around, but it’s a sector massively struggling with legacy technology and a lack of capabilities to adequately manage the data they have. Indeed, 46% of ITDMs in the sector note legacy technology barriers as one of the primary reasons data is not shared effectively across their organisation. However, addressing this problem isn’t as simple as rip-and-replace in a sector where having access to the right data at the right time can potentially mean the difference between life and death. There is fear that data will not transfer over from the old to the new, or that the two systems will not connect or communicate with each other. This

NEERAV SHAH
GENERAL MANAGER EMEA, SNAPLOGIC
Neerav Shah has more than 20 years’ experience in the software industry, working with many companies across a range of sectors


DATA MANAGEMENT

means ITDMs in healthcare have increasingly had to manage lots of disparate systems, and teams who need access to data to care for patients have been left accessing that information from different data silos. As a result, it’s unsurprising that 79% of ITDMs in this sphere don’t have complete trust in the data their organisation holds, because there are inconsistencies across the organisation in how data is collected, defined, and managed. Furthermore, 65% openly acknowledge that their organisation is struggling with siloed data, and 63% state their organisation does not share data well across departments and teams. These are two issues which only serve to amplify each other and certainly don’t help deliver a high-quality level of care to patients. Being able to connect your old legacy systems to your new systems is not only key to ensuring data does not get lost, but also to ensuring that employees still have access to what they need for their jobs, and that everything works together seamlessly.
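One practical way to address the fear that data "will not transfer over" is to verify a migration by comparing record fingerprints between the old and new systems, rather than eyeballing the data itself. The sketch below is a simplified, hypothetical illustration in Python; the record shapes, field names, and the idea of hashing a canonical form are assumptions for the example, not a description of any real migration tool.

```python
import hashlib

def record_fingerprint(record):
    """Hash a record's canonical form so two copies can be compared
    without moving the underlying (possibly sensitive) data around."""
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_migration(legacy_records, migrated_records, key="id"):
    """Report record ids that went missing or were altered in transit."""
    legacy = {r[key]: record_fingerprint(r) for r in legacy_records}
    migrated = {r[key]: record_fingerprint(r) for r in migrated_records}
    missing = sorted(set(legacy) - set(migrated))
    altered = sorted(k for k in legacy
                     if k in migrated and legacy[k] != migrated[k])
    return {"missing": missing, "altered": altered}

# Invented example records: one row was dropped by the migration.
old = [{"id": 1, "name": "Ann"}, {"id": 2, "name": "Bob"}]
new = [{"id": 1, "name": "Ann"}]
print(verify_migration(old, new))  # → {'missing': [2], 'altered': []}
```

Hashing rather than copying matters in healthcare: the comparison can run across a trust boundary without the patient data itself ever leaving either system.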

FINDING THE RIGHT SKILLS

Healthcare organisations have a big data problem thanks to the volume and complexity of the data they have. But, whilst there is an apparent desire for the healthcare industry to improve their data infrastructure, taking on big data management can be hugely overwhelming for those with data silos, not least because of the skills required to manage it. Managing big data was, and often still is, restricted to large enterprises who have the capital and resources to invest in, build, and manage the necessary infrastructure. But the skill set to do this is incredibly specialised. Experts who can understand the complexity of large-scale data infrastructure and data management are in incredibly short supply, resulting in overburdened IT teams stretched thin as they try to keep critical systems running, whilst still delivering technical advancements that will support the changing demands of healthcare in the 21st century. In fact, almost a third (31%) of ITDMs in the sector believe there is








a lack of resource in their organisation dedicated to data, and a further 31% stated that there is a lack of training. Unfortunately, this skills deficit in healthcare IT teams is compounded by a budgetary problem. Budget is a big issue for healthcare when it comes to modernising technology and investing in the people to deliver it. It’s important for the healthcare sector to have the most efficient and effective system at the lowest cost possible so that more budget can be focused on delivering improved levels of care. But highly skilled people to manage those systems come at a high cost.

TURNING TO 'AS A SERVICE'

Healthcare’s struggle to manage its data paints a rather bleak picture. Even with all the tools out there, it can be challenging for ITDMs to decide on the best way forward for such a large organisation. The cloud and as-a-service products are both ways in which ITDMs should look to address some of the issues they have in managing and accessing data within their organisation.

Having the best Software-as-a-Service (SaaS) applications is incredibly useful, as it ensures a high quality of application and can be tailored to suit the needs of specific departments. For example, an organisation could have one app for patient records, one for financial management, and one for collaboration between specialists from different departments across the country, enabling them to discuss critical cases and ultimately deliver patient care in a more effective and efficient manner.

With budgeting a significant concern for the healthcare sector, creating or migrating big data architecture to the cloud by adopting a Platform-as-a-Service (PaaS) model can have a considerable impact on operational cost savings and can massively increase data processing capacity. An additional benefit is that there isn’t a need to fork out a vast amount of budget up front to make the move, in the way there is for on-premise infrastructure, so organisations can pay for what they need and scale up or down as necessary.

In both of these instances, ITDMs in healthcare should consider taking a multi-cloud approach, which would also help in reducing operational costs. The problem with sticking to a single cloud provider is that its users are then beholden to a single


vendor’s innovation roadmap and can get stuck paying increasing licensing fees without being able to easily opt out. Using different cloud providers can allow healthcare ITDMs to negotiate better contracts and choose which providers are best for which applications, at any given point in time, at the lowest cost possible.

Best-in-class SaaS apps and a multi-cloud strategy provide tremendous value, but Integration-Platform-as-a-Service (IPaaS) will become increasingly important for budget-, time- and skill-strapped teams looking to unite all the disparate data they have. Not only can IPaaS link the different cloud applications and systems an organisation has, but it can also connect all the old legacy infrastructure with the new systems. As a result, IPaaS can tear down silos between teams and departments, improving access to vital information and promoting better collaboration between different specialists and departments, which will ultimately make data more trustworthy, reliable and actionable for healthcare professionals. For example, as patient records continue to be digitised, IPaaS can ensure the information in those records is accurately reflected across different hospitals and practices as required, in a safe, secure manner, ensuring practitioners have a single, consistent view of a patient and their needs.

Ultimately, investing in the right technologies will not only lift the burden from IT departments but will also facilitate better data management across an organisation.
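At a toy scale, the "single consistent view of a patient" that an integration platform assembles can be sketched as a field-level merge in which the most recently updated source wins. Everything below is an invented illustration (the field names, the dummy NHS number, and the last-write-wins rule are assumptions for the sketch, not how any particular IPaaS product works):

```python
from datetime import date

def merge_patient_records(records):
    """Merge partial records from several systems into one view; for
    conflicting fields, the most recently updated source wins."""
    merged = {}
    # Apply oldest records first so newer values overwrite older ones.
    for rec in sorted(records, key=lambda r: r["updated"]):
        for field, value in rec.items():
            if field != "updated":
                merged[field] = value
    return merged

# Invented sample data; '943 476 5919' is a dummy, not a real NHS number.
gp_record = {"nhs_number": "943 476 5919", "allergies": "penicillin",
             "updated": date(2018, 3, 1)}
hospital_record = {"nhs_number": "943 476 5919",
                   "allergies": "penicillin, latex",
                   "blood_type": "O+", "updated": date(2018, 9, 12)}
print(merge_patient_records([gp_record, hospital_record]))
```

Here the hospital's newer allergy list supersedes the GP's, while fields only one system knows about (blood type) are carried through, which is the essence of the consistent view described above.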

THE FUTURE WITH MACHINE LEARNING

With the UK government’s pledge in May 2018 to invest in artificial intelligence for the NHS, it is becoming apparent that technology is being called upon to play a much bigger role in transforming healthcare systems. AI and machine learning (ML) are only just starting to be used in organisations, but when it comes to healthcare, there is significant potential value regarding health outcomes and efficiencies, and it’s encouraging to see political support for it. Some of the most notable uses of ML technology in healthcare have been in research. Thanks to ML’s ability to analyse incredibly large, unstructured data sets, ML algorithms are ideal for detecting anomalies and patterns in genetic data or medical imaging, far

faster and more accurately than many medical professionals, which enables those individuals to spend more time with patients. However, whilst this use of ML is making great advances, ITDMs should not forget that ML can also benefit other areas of the business which are not patient-facing. Using ML to improve data management, for example, can free up time for the IT team to deliver better value to the broader organisation and pre-empt any issues as new systems or processes are introduced.

BETTER DATA & LIVES

By better operationalising the data they have, 73% of ITDMs in healthcare believe it will help to drive operational efficiencies and reduce cost, 69% believe it will enable quicker access to information, and a further 69% believe it will improve processes. As a result, over the next five years ITDMs expect to spend over £500,000 on operationalising data. But to really effect change and see these benefits, the healthcare sector needs to go beyond just changing its approach to data infrastructure and management.

A ‘data-centric’ mentality needs to be encouraged across the entire sector. Healthcare professionals shouldn’t be spending hours looking for crucial information when they should be out there saving lives. In encouraging a ‘data-centric’ mindset, different departments and practices within trust groups should be coming together to improve data sharing. IT needs to lead the charge in making this happen. After all, it’s only by investing in the right technology and encouraging better data sharing across the entire healthcare sector that we will see improved access to critical information, so that healthcare professionals can make more accurate diagnoses, uncover issues sooner, and continue to deliver the best possible level of patient care.





Announcing

The DevOps Industry Awards Winners 2018!

The DevOps Industry Awards celebrate companies and individuals who have accomplished significant achievements when incorporating and adopting DevOps practices. Sponsored by




THE DEVOPS INDUSTRY AWARDS 2018

This year's DevOps Industry Awards were an excitement-packed, glittering event, held at the plush Millennium Gloucester Hotel, London. From the hundreds of entries received (a 10% increase on last year), the judges painstakingly assessed the anonymised submissions and spent long hours deliberating the merits of each project put forward.

In an action-filled evening, the excited and vibrant crowd, which packed out the ballroom, got everything they bargained for from 'the voice of X-Factor' host, Peter Dickson, whose witty repartee and stage presence kept the guests more than entertained. With companies in attendance such as Lloyds Banking Group, Vodafone, Accenture, Credit Suisse, Virgin Media, Openreach, Sony, NHS, Barclays, Cognizant, Centrica and British Gas, to name but a few, attendees got the opportunity not only to enjoy themselves, but to rub shoulders with their industry peers for plenty of advice, congratulations and banter on this most successful of evenings.

Next year's event is due to be held on 22nd October at the Marriott Hotel Grosvenor, London, so if you or your team are looking to be rewarded for your hard work in incorporating and adopting DevOps practices, then make this a date for your diary and get your entries in!


THE DEVOPS INDUSTRY AWARD WINNERS 2018!

★★ Vodafone UK

Judges said: "Their enthusiasm to transform, the range of successes they have achieved and the obstacles they have overcome are testament to the spirit of DevOps. Their keenness to share lessons and embrace the wider DevOps community is a foundation of the DevOps culture and one which makes it accessible to all."

BEST DEVOPS AUTOMATION PROJECT
BEST OVERALL DEVOPS PROJECT IN FINANCE SECTOR
BEST OVERALL DEVOPS PROJECT IN COMMUNICATIONS SECTOR
BEST OVERALL DEVOPS PROJECT IN RETAIL SECTOR
BEST CULTURAL TRANSFORMATION

★★ Infosys Limited

BEST USE OF DEVOPS TECHNOLOGY
★★ The Telegraph

Judges said: "This entry was out of left-field and was chosen for its less-than-conventional way of using technology while bringing an outstanding level of success with it."

Judges said: "They were a top-drawer example of DevOps approaches being implemented from the start, with cultural change as an inherent part of their projects. They adopted industry-best practices throughout and subsequently any risks were well managed and mitigated – setting their own rigorous criteria and KPIs along the way."




BEST OVERALL DEVOPS PROJECT – ENTERTAINMENT
★★ British Telecom in partnership with Accenture

INFOSYS DEVOPS TEAM OF THE YEAR
★★ Vodafone UK

Judges said: "This entry was a textbook example of a brilliantly executed DevOps transformation at a large company."

Judges said: "This entry exemplified the true power of teamwork, driving business change with the goal of launching high quality products, fast."

HIGHLY COMMENDED - MOST SUCCESSFUL CULTURAL TRANSFORMATION & OVERALL WINNER CATEGORY
★★ Verizon

LEADING DEVOPS VENDOR
★★ Electric Cloud

Judges said: "This team's entries were among the more novel and galvanised significant culture change within their organisation."

Judges said: "This entry revealed their clear commitment to delivering the highest quality and standards to their customers."

BEST OVERALL DEVOPS PROJECT PUBLIC SECTOR

BEST DEVOPS CLOUD PROJECT
★★ Vodafone in partnership with Accenture

★★ Metropolitan Police in partnership with IndigoBlue

Judges said: "This entry had strong evidence, supportive metrics and an excellent account of the challenges faced."

WHITESOURCE DEVOPS MANAGER OF THE YEAR
★★ Brett Delle Grazie, IndigoBlue

Judges said: "This entry was chosen above all others as the team had no previous cloud experience – but this enthusiastic team still managed to create an AWS Architecture that received the official thumbs-up from the 'AWS Well Architected' review process."


Judges said: "This manager has a true understanding of both the technical and cultural elements of DevOps, delivering change and lasting impact."



NEWS | VIDEOS | PRODUCT NEWS | WHITE PAPERS

Industry Leading Portal

DevOps Awards

DevOps Online is the premium online destination for news, reports, whitepapers, research and more relating to the DevOps movement. Covering all aspects of IT transformation, you can be sure that DevOps Online will keep you informed and up to date.

www.devopsonline.co.uk



THE GREAT TEST DEBATE

Test data has become the centre of attention for QA professionals looking to keep pace with the speed of development. But when should they use production vs. synthetic data for testing?

Test data provisioning has become a bottleneck that threatens the efficiency gains offered by new test automation technologies. As a result, test data represents a weak link in the chain for organisations implementing continuous integration and delivery. Additionally, test data has been identified as a vulnerability for companies that must adhere to data privacy laws, like the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA), designed to prevent the accidental or intentional exposure of personally identifiable information (PII). To accelerate the speed of test data provisioning, meaningful change in the process and use of technology is needed; overcoming the threat of exposing sensitive customer information requires a fresh look at the sources of test data being used.

QA CHALLENGES

How can QA departments simultaneously maximise the speed, quality and privacy of test data while minimising the cost and complexity that come with provisioning it? Companies increasingly need to address the challenge of keeping up with the accelerated pace of development as the bar simultaneously continues to rise for higher quality code and absolute data privacy. A sea change is underway in the form of synthetic test data that can be generated on-demand as an alternative to the traditional approach of subsetting and masking production test data. But which approach is better? What are the trade-offs? How can IT professionals make the best decision for their environment? These questions set the stage for a great debate about whether production test data or synthetic test data is a better solution for continuous testing. Here I'll introduce six essential test data criteria to serve as a basis for comparison between the two. Let's start by defining our terms more precisely.

GARTH ROSE
CEO
GENROCKET
Garth runs a fast-growing software company based in Ojai, CA, drawing on 30 years of experience as a technology executive in software start-ups and publicly traded software companies.


PRODUCTION VS SYNTHETIC DATA

PRODUCTION TEST DATA

Production test data is a copy of a production database that has been masked, or obfuscated, and subsetted to represent a portion of the database that is relevant to a test case. Production test data is frequently accompanied by a test data management (TDM) system to prepare, control and use the data. Commercial TDM systems can be expensive, costing upwards of hundreds of thousands of dollars for a typical enterprise deployment. Many organisations have chosen to develop their own in-house TDM systems and processes to save money and to provide a solution that more precisely meets their needs. TDM systems are typically accompanied by a highly controlled and centralised test data provisioning process.
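To make the masking-and-subsetting step concrete, here is a minimal Python sketch; the field names and masking rules are hypothetical, not taken from any particular TDM product:

```python
import hashlib
import random

def mask_record(record):
    """Obfuscate the PII fields of one row, leaving non-sensitive fields intact."""
    masked = dict(record)
    # Deterministic token: the same customer always masks to the same value,
    # which helps keep masked data internally consistent across tables
    token = hashlib.sha256(record["name"].encode()).hexdigest()[:8]
    masked["name"] = f"cust_{token}"
    masked["email"] = f"cust_{token}@example.invalid"
    return masked

def subset_and_mask(records, predicate, sample_size, seed=42):
    """Select the rows relevant to a test case, then mask them."""
    relevant = [r for r in records if predicate(r)]
    rng = random.Random(seed)  # seeded so the subset is reproducible
    chosen = rng.sample(relevant, min(sample_size, len(relevant)))
    return [mask_record(r) for r in chosen]

production = [
    {"name": "Alice Jones", "email": "alice@corp.example", "country": "UK", "balance": 120.0},
    {"name": "Bob Smith", "email": "bob@corp.example", "country": "US", "balance": 0.0},
]
test_data = subset_and_mask(production, lambda r: r["country"] == "UK", sample_size=1)
```

Even this toy version shows why TDM processes become heavyweight: the masking rules must be maintained for every sensitive field in every table.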

SYNTHETIC TEST DATA

Synthetic test data does not use any actual data from the production database. It is artificial data based on the data model for that database. For the purpose of this article, we'll assume synthetic test data is generated automatically by a synthetic test data generation (TDG) engine. TDG engines generate synthetic test data on-demand and according to a test data scenario that represents the needs of a particular test case. Synthetic test data generation eliminates the need for traditional TDM functions, such as masking and subsetting, because test data can be generated on-demand and without sensitive customer information. As a result, TDG systems can be decentralised and operate through a self-service model.
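To illustrate the idea (a sketch, not any specific vendor's engine), a TDG-style generator can be built from a data model in a few lines of Python, with a hypothetical schema standing in for the real one:

```python
import random
import string

# Hypothetical data model: each field maps to a generator function
SCHEMA = {
    "customer_id": lambda rng: rng.randrange(1, 10_000_000),
    "name": lambda rng: "".join(rng.choices(string.ascii_uppercase, k=8)),
    "country": lambda rng: rng.choice(["UK", "US", "DE", "FR"]),
    "balance": lambda rng: round(rng.uniform(-500.0, 10_000.0), 2),
}

def generate_rows(schema, n, seed=0):
    """Generate n synthetic rows on demand; no PII, because no production source."""
    rng = random.Random(seed)
    return [{field: gen(rng) for field, gen in schema.items()} for _ in range(n)]

rows = generate_rows(SCHEMA, 1000)
```

Because generation is seeded, the same scenario reproduces the same data set on every run, which matters for repeatable tests.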

6 ESSENTIAL TEST DATA CRITERIA

There are six criteria often used to guide the decision between the use of production and synthetic test data. Each one is essential to the ultimate goal of eliminating the test data bottleneck and avoiding the risk of a data security breach. Each criterion is posed as a question, so you can ask yourself how each one applies to the needs of your organisation:
1. Speed: What are your time requirements for test data provisioning?
2. Cost: What is an acceptable cost to create, manage and archive test data?
3. Quality: What are the important factors to consider related to test data quality?
4. Security: What are the privacy implications of these two sources of test data?
5. Simplicity: Is it easy for testers to get the data they need for their tests?
6. Versatility: Can the test data be used by any testing tool or technology?



Let's consider each of these criteria one at a time. As you read them, consider your own test environment and how each criterion can have an impact on the efficiency of your operation.

SPEED: What are your time requirements for test data provisioning?
A recent survey of DevOps professionals described the provisioning environment as a "slow, manual and high touch process". Among respondents from QA/testing, development and operations departments, on average 3.5 days and 3.8 people were needed to fulfil a request for test data to support a test environment, and for 20% of respondents the timeframe was over a week. The survey group used traditional production test data as their principal test data source. What if this timeframe could be reduced from days to minutes? Synthetic test data that simulates real-world data can be generated at a rate of thousands of rows per second. Dynamically generated synthetic data eliminates the need to request production data from the TDM team and also removes the need to mask and subset the data for use by testers. With a decentralised self-service model, testers can provision their own data whenever they need it and simply discard the data when they have finished running their test.
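The generate-use-discard workflow described above might look like the following sketch, where `generate_rows` is a hypothetical stand-in for whatever generation engine is in use:

```python
import random
from contextlib import contextmanager

def generate_rows(n, seed=0):
    """Stand-in for a synthetic test data generation engine."""
    rng = random.Random(seed)
    return [{"id": i, "amount": round(rng.uniform(0, 100), 2)} for i in range(n)]

@contextmanager
def provisioned_test_data(n):
    """Self-service provisioning: data is created just in time and discarded after the test."""
    data = generate_rows(n)
    try:
        yield data
    finally:
        data.clear()  # nothing to archive, nothing to leak

# A tester provisions data inline, runs the test, and the data disappears
with provisioned_test_data(500) as rows:
    assert len(rows) == 500
```

The point of the context manager is the lifecycle: no request to a central TDM team, and no masked copy left lying around afterwards.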





COST: What is an acceptable cost to create, manage and archive test data?
Because production data must be prepared, managed and stored, the cost of provisioning it carries the additional burden of a TDM system. This in turn leads to the purchase of a major TDM platform or the internal development and maintenance of a customised TDM solution. The cost can easily reach hundreds of thousands of dollars to procure, customise, support and maintain the platform. If synthetic test data is being generated on demand, there is no longer a need for a TDM platform. Only the test data generation platform is needed, with a complement of licences for the testers who need the ability to generate their own test data whenever they need it. This can lower the cost of provisioning test data by up to 90% when compared to a full-scale commercial TDM system.

QUALITY: What are the important factors to consider related to test data quality?
When provisioning production test data, the elements of data that must be managed include the age, accuracy, variety and volume of data to be copied, masked and subsetted. Testers have little control over the quality of data that comes from production: you only get what has been captured in the test data subset. Proper testing usually requires different permutations of data, with negative test data and edge-case data. Testers are often forced to manually modify the production data into usable values for their tests, and some test data is too complex or time-consuming to build by hand, so those data sets are never built. Synthetic test data removes the guesswork that goes into creating a data subset. It is generated based on a test data scenario that specifies the nature of the data patterns and permutations required to cover all edge cases of the test. Further, the test data scenario is able to quickly generate data with a level of complexity that is almost impossible to achieve by hand. Figure 1 shows a sample of the test data variations that can be specified by a synthetic test data scenario to support the needs of the test environment. The table in Figure 2 provides examples of the synthetic test data output. Another important data quality requirement is referential integrity –





Figure 1.

Figure 2.

Figure 3.

maintaining the parent-child relationships between database tables that are represented by the test data. It is important for the synthetic test data generation engine to ensure referential integrity to preserve the consistency of the test data and the accuracy of the test

results. The chart in Figure 3 illustrates the referential integrity concept with a variety of data tables that have parent, child and sibling relationships.

SECURITY: What are the privacy implications of these two sources of test data?
In May 2018, the European Union enacted the GDPR, which requires any organisation doing business within the EU and the European Economic Area to provide data protection and privacy for all individuals. Failure to comply with GDPR carries heavy fines and penalties (up to 4% of global annual revenues), and it joins other security regulations in the United States such as HIPAA. Test data provisioning must remove all PII, not only to be compliant with these laws, but to avoid subjecting the organisation to the enormously high cost of a data breach. According to the Ponemon Institute, the cost of a data breach – including the costs of remediation, customer churn and other losses – averages $3.8m (£2.9m). Production test data relies on data masking techniques to obscure PII, but no data masking process is perfect. And production data must still be handled by people during the masking process and archived on systems that can potentially be compromised. In contrast, synthetic test data is completely disconnected from production data, other than the data model used to generate it. This ensures a complete absence of PII from the test data and compliance with these security regulations throughout the testing process.

SIMPLICITY: Is it easy for testers to get the data they need for their tests?
When compared to a $3.8m data breach, simplicity might seem like a trivial point of comparison. However, test data management systems that are complex and cumbersome introduce unnecessary delay into the provisioning process. As cited earlier, the average request for test data to support a test environment takes 3.5 days and 3.8 people to fulfil. The typically centralised process for provisioning production test data perpetuates the siloed approach to development, testing and operations that DevOps is meant to eliminate.
Test data provisioning can and should be a simple, decentralised, self-service model that makes quality test data available to anyone at any time. This is the only way to eliminate the test data bottleneck and pave the way for continuous testing. Synthetic test data generation makes simple, decentralised

Figure 4.

test data provisioning possible with platforms that allow real-time test data to be created on-demand by anyone on the DevOps team. According to the World Quality Report 2017–2018, test environments and test data continue to be the "Achilles heel for QA and testing"; 48% of survey respondents identified them as the number one challenge in achieving the desired level of test automation. As the test data management market continues to grow – by an estimated 12.7% compound annual growth rate (CAGR), reaching $1bn in global revenues by 2022 – it also continues to evolve. The synthetic test data generation segment is expected to grow at the highest CAGR during this forecast period.

VERSATILITY: Can the test data be used by any testing tool or technology?
Versatility is another way of saying adaptability. The test data provisioning process should be adaptable to any testing environment, of any size, at any level of maturity, for any industry segment. That translates to integrating with a wide variety of frameworks and automation tools for seamless operations, and supporting a variety of data formats for compatible test data output. It should also be capable of working with large databases with thousands of tables and potentially hundreds of different applications. Test data management platforms tend to be database-centric, so when the test data use case is related to a database, TDMs can usually satisfy the requirements, but often at a slower pace than is needed by continuous testing. Test data generation platforms have much more versatility, so they can satisfy a much wider variety of test data use cases, and often the data is provisioned up to 10 times faster than with TDMs due to the decentralised approach. As you make your decision about production versus synthetic test data, be sure to closely examine the versatility of the platform. The diagram in Figure 4 illustrates a test data generation platform integrating with a variety of frameworks and formats to maximise versatility.

MAKING THE TEST DATA DECISION

Consider the six essential test data criteria when making your own decision about the use of production data versus synthetic test data. Do they need to be mutually exclusive? Of course not. Production and synthetic test data can coexist in a testing environment, either to optimise their role in various testing operations or as part of a transition from one to the other. This may require you to think differently about test data as you develop a roadmap for your long-term continuous testing strategy. The idea is to be purposeful in your decision and to understand the implications for the speed, cost, quality, security, simplicity and versatility of your ultimate test data solution.




IT'S TIME TO ADOPT DEVOPS

Want to build quality software in a rapid, continuous and repeatable manner, all while saving time and money? Then it's time to adopt DevOps.

While the amalgamation of software development and operations in itself sounds simple, merging the two parts of the business alone is not enough. If you want to reap the full benefits of switching to a DevOps way of working, including reducing testing and development times (by as much as 80%), significantly cutting overall costs (by as much as a third for one part of the project alone) and improving the overall quality of the software and time-to-market, then you need to combine more than just the Dev and the Ops.

CHANGING MINDSETS

A successful DevOps programme is about changing the mindset of the whole business. It brings together formerly siloed teams, such as developers, quality assurance (QA) teams, operations and other members of the business, to work as part of an integrated team to build quality software in a rapid, continuous, and repeatable manner. This new mode of delivery requires all parties to shift their model of work to keep up with this fast-paced, innovative, and automated environment and to learn to take


combined responsibility for both the process and the outcome. This is obviously a big undertaking, and the key to success is understanding not only the roles that everyone has to play in the process, but also ensuring that the end goal is clear to all. In reality, a successful DevOps programme widens the roles that all participants play in the software delivery lifecycle (SDLC), when compared to traditional waterfall or agile environments, and makes them all enablers of the DevOps delivery programme. It removes the notion that QA, Dev or Ops are separate functions in the overall development lifecycle with their own set of roles and responsibilities, and necessitates that the integrated teams sync up and work together to achieve common goals.

ROLE OF QA

With DevOps, QA is integrated within the cross-functional team and is involved with every aspect of software delivery, from requirements-gathering and system design to the packaging and releasing of the software. QA is no longer a discrete component of the SDLC chain solely focused on finding defects. Instead, its

IYA DATIKASHVILI DIRECTOR BRICKENDON Iya specialises in optimising organisational productivity. Based out of the US, she has more than 15 years’ experience directing quality assurance and test management services


DEVOPS

work is spread throughout the SDLC with quality practices embedded within each phase of software delivery. This is a major shift which makes the role of QA more critical than ever before. In changing the traditional role of QA, DevOps expands the remit to include operational behaviours such as examining each requirement for completeness. QA must start addressing the questions 'are we building the right feature?' and 'are we building it correctly?' For example, in addition to the functional aspect of a requirement, QA will also examine how that requirement could affect system performance or be monitored in production. It will then design the appropriate cases to test that behaviour. This means QA will play an integral part in ensuring system reliability, stability and maintainability, and will incorporate the necessary test coverage from the outset of the project. In addition, QA will be involved in defining and executing the non-functional tests alongside Dev and Operations teams. This is a departure from traditional QA practices, where non-functional testing would typically be out of QA's scope. With DevOps, QA will need to push the automation boundaries from just functional regression testing to automating everything that will facilitate code passing more quickly through the pipeline. The QA team must look to automate as many processes as deemed feasible, including test case generation,

test data set creation, test case execution, test environment verification, and software releases. All pre-testing tasks, clean-ups and post-testing tasks should be automated and aligned with the continuous integration cycle. Any changes related to software or system configuration should flow through a single pipeline that ends with QA testing. Automation should also apply to all non-functional tests, such as performance, stress, and failover and recovery. Additionally, the automation framework should be flexible, easy to maintain, and allow for integration directly into the development workflow.
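The single-pipeline idea can be sketched as follows; the stage names are illustrative only and not tied to any particular CI product:

```python
# Each stage is a callable returning True on success; a failure stops the
# pipeline, mirroring a CI quality gate.
def build(): return True
def unit_tests(): return True
def provision_test_env(): return True
def qa_tests(): return True  # the single pipeline ends with QA testing

PIPELINE = [build, unit_tests, provision_test_env, qa_tests]

def run_pipeline(stages):
    """Run stages in order; stop at the first failure so bad changes never reach release."""
    for stage in stages:
        if not stage():
            return f"failed at {stage.__name__}"
    return "release candidate ready"

result = run_pipeline(PIPELINE)
```

The design choice worth noting is that every change, whether software or configuration, flows through the same ordered gate, so nothing can bypass QA.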

ROLE OF THE BUSINESS

The same goes for members of the business. Traditionally, the business unit was responsible for making strategic investment decisions and managing the financial aspects of projects and programmes. While implementing DevOps doesn't change this, it does require the business to change the way it operates. To implement DevOps successfully, the business needs to change its approach towards production, budgeting and funding projects, and even adapt its team structure: yet another example that DevOps is about a change in mindset, not just the adoption of a new methodology. In the traditional software delivery framework, the business works to produce the project scope, develop the

29

project management plan, and define the business and functional requirements. This work is done at the start of the project and follows a strict and sequential process across three distinct phases: project concept, planning and requirements gathering. The output of the first phase is used as the input into the following phase, and each phase needs to be complete before the project can progress further through the SDLC. The challenge with working within the traditional framework is that the business presumes that everything it defines at each phase will still be applicable throughout the entire lifecycle of the project. It leaves little flexibility for the delivery team to adjust to any unexpected changes to the project's parameters and locks release cycles and deliverables, which, if missed, could have financial consequences. By contrast, within the DevOps framework, the business becomes the product owner and moves from working independently to working collaboratively within an integrated, multi-skilled DevOps team. As product owner, the business takes ownership of the end-to-end software delivery process and sets the direction of the team in such a way that ensures workflow runs smoothly. It also retains the responsibility of ensuring that every member of the delivery team has a common and clear understanding of the features being built and how each feature delivers value to the end-product.





"The DevOps value proposition lies in its ability to improve the overall service delivery of a product to the end user by integrating cross-functional teams into a collaborative delivery team with common goals. As part of the process, the whole business transitions from working in a rigid framework, where the entire scope of the project is defined up front, to an approach where the scope is defined in small targeted sections"

SPREADING THE RESPONSIBILITY

As already discussed, one of the biggest changes with DevOps is that each party is involved in the entire release planning process, and takes equal ownership, to develop a single release plan that aligns the teams' workstreams. The dynamics of release planning change from Dev complete and QA complete to sprint


complete, where the sprint is only considered complete once all of the tasks within the sprint are successfully tested. One of the key things is to ensure all parties are involved in defining the project strategy and implementation roadmap. In the past, each group would typically develop strategies catered specifically to their own responsibilities (Dev strategy, QA strategy, Ops strategy), while with DevOps, the product owner is accountable for producing a common strategy and implementation roadmap. The team collaboratively sets the project's goals, determines actions to achieve the goals, and mobilises the resources to execute the actions. A good strategy creates a common (unbiased) view of what the team is trying to implement, how they are going to execute, how each member of the delivery team will be involved in the implementation, and the associated timelines. It pushes each member of the team to think critically about their role in implementing the strategy and instils accountability in each for meeting their goals. The reality is that things will change throughout the project and, as such, the product owner must build flexibility into the strategy to seamlessly adjust the implementation roadmap along the way. Rather than writing complete detailed specifications and then reviewing them with the delivery team, the business produces a backlog of feature stories and works collaboratively with the entire DevOps team to define the desired and non-desired behaviour of each feature. Each story must include examples, pre- and post-conditions, acceptance criteria, and non-functional requirements that guide coding and testing. Having all this information captured as part of the feature leads to fewer coding defects and improved-quality software, while also shortening software development cycles. When defining feature stories, it is essential that every member of the delivery team has all the information they need in order to do their jobs.
This means broadening the scope of the features to be defined to include non-functional requirements that address the needs of operations, compliance, and information security. Requirements should cover system reliability, performance, scalability and supportability. The analysis process goes from defining all the requirements up-front,

to the ‘just in time’ concept, where requirements are only defined as they are needed; just in time is an analysis approach of lean methodology and helps ensure there is no waste in the analysis process. With DevOps, work is managed through a Kanban board which provides full transparency of the project work, including work in process, project status, team progress, and team capacity. This level of visibility allows delivery teams to rapidly adjust to any changing requirements. All change requests are filtered through the Product Owner, which allows for a single, central, and controlled change process that prevents any change in priorities from affecting the team’s objectives. This differs from the traditional method where requests would usually come in from multiple sources, cause conflicts between deliverables, and contribute to resource constraints.

AUTOMATION & CONTINUOUS TESTING

DevOps relies heavily on automation and continuous testing, and the level of automation directly corresponds to reducing the overall time-to-market cycles. Continuous testing is a means of identifying and reporting business risks associated with a software release candidate as quickly as possible. This means that with DevOps, testing starts early and is executed continuously throughout every stage of software delivery in order to prevent issues from propagating to the next stage. It forces developers to test every piece of code within the development phase, quickly identifying and remediating issues which would otherwise go undetected until later in the SDLC. It also means that automation tasks must be embedded within each aspect of the software delivery pipeline, including building, packaging, configuring, releasing, and monitoring code. The scope of automation testing should be expanded to include deployment scripts, backout scripts, configuration management, non-functional tests, checkouts, and monitoring components. While having a comprehensive automation testing suite reduces the testing cycles, continuous testing is equally important because it provides early feedback on the overall quality of the software deliverable. While both automation and continuous



testing are critical to the DevOps practice, it's the manner in which the two frameworks are integrated that generates the best results. In particular, automation tools can be applied to remove dependencies that prevent unrelated tasks being carried out alongside each other. For example, provisioning different testing environments to support parallel testing efforts, so that different groups can conduct their testing without having to wait on environment availability, or developing test harnesses (test simulators) that mimic components to allow system integration testing to be carried out much earlier in the SDLC. Although these might not be considered automation in the traditional sense, they each provide a means of aligning software delivery teams to a shift-left way of working, which means carrying out tasks as early as possible in the development process and therefore reducing the risk of costly and time-consuming fixes late in the cycle. It is unrealistic to automate everything all at once, so the overall automation strategy should factor in an approach to prioritise automation projects by considering the return on investment (ROI) and the metrics that will be used to evaluate the actual returns on automation investments. The advantage of this approach is that automation improvements can be gradually phased into the backlog, evaluated and prioritised based on needs.
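As one illustration of the test harness idea mentioned above, Python's standard mocking library can mimic a component that is not yet available; the payment-service interface here is hypothetical:

```python
from unittest.mock import Mock

# The real payment component is not available this early in the SDLC,
# so a harness (test simulator) mimics its interface for integration testing.
payment_service = Mock()
payment_service.authorise.return_value = {"status": "APPROVED", "auth_code": "123456"}

def checkout(order_total, service):
    """Code under test: depends on an external payment component."""
    response = service.authorise(amount=order_total)
    return response["status"] == "APPROVED"

# The integration path can be exercised long before the real service exists
assert checkout(49.99, payment_service)
payment_service.authorise.assert_called_once_with(amount=49.99)
```

Swapping the mock for the real client later requires no change to `checkout`, which is what makes the shift-left approach practical.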

WHY DEVOPS?

The DevOps value proposition lies in its ability to improve the overall service delivery of a product to the end user by integrating cross-functional teams into a collaborative delivery team with common goals. As part of the process, the whole business transitions from working in a rigid framework, where the entire scope of the project is defined up front, to an approach where the scope is defined in small targeted sections. This enables work to be prioritised as appropriate, reduces the time taken to complete tasks and allows the earlier discovery of problems, making them simpler, cheaper and less time-consuming to fix. Put simply, DevOps removes walls, gates and transitions to increase accountability for the full end-to-end software development process. It requires cooperation and collaboration from across the whole business and in

Figure 1.

turn brings a raft of benefits, including:
• Removing the potential for human error by increasing automation
• Producing results quickly and clearly
• Saving significant amounts of time and money
• Avoiding potential reputational damage from delays and errors.
DevOps originally developed as a backlash to the extreme segregation which stemmed from fears of cross-contamination between the different phases and expertise levels of the software development lifecycle, and in particular, concerns over regulatory restrictions and issues with some individuals having access to systems they should not. In the past, it had been known for software to be released into production unchecked and content to be updated by individuals without the necessary expertise, which, when combined, caused serious errors and led to trading losses. These isolated working methods meant that no one was accountable for the full end-to-end process, and it eventually became apparent this was lengthening the time taken to get products to market, increasing costs and raising the likelihood that issues and defects would be found once the programme was in production. By contrast, the ethos of ownership and accountability instilled by the DevOps methodology helps to create a drive to develop the best software system as quickly and efficiently as possible. By adopting the DevOps approach, banks and other institutions can save themselves considerable amounts of time and money and rest assured that the software development is of

superior quality, because the whole team is fully aware of what is happening at each stage of the process (see Figure 1). Other advantages of implementing the DevOps methodology include:
• An increase in confidence in delivery, as fewer defects are introduced into production
• Integrated teams requiring less manpower
• Lower costs to rectify bugs, as they are identified earlier in the lifecycle
• More time for project resources to focus on delivery
• Faster releases thanks to standardised and reusable automated tests
• The ability to divide releases into modules, giving less opportunity for error.
To conclude, while the act of combining development and operations sounds relatively straightforward, the DevOps methodology cannot be properly integrated within a business, and therefore reap the available benefits, without a change in mindset and the promotion of accountability across the whole organisation. To put it simply: you build it; you break it; you fix it. With DevOps there is no place for passing the buck along the software development lifecycle. The successful implementation of DevOps means the successful integration of business users, developers, test engineers, security and operations into a single delivery unit focused on meeting customer requirements.




HOW TO IDENTIFY MACHINE LEARNING TALENT

What once was a shortage of coding and software engineering expertise has now translated into a shortage of skills in artificial intelligence and algorithmic engineering.

According to a recent survey on enterprise AIOps adoption, 67% of enterprise IT organisations in the US have experimented with artificial intelligence (AI) and machine learning (ML) for data management and incident remediation. What's more, global research and advisory firm Gartner expects artificial intelligence to create more jobs than it replaces by 2020. AI is moving fast and enterprises need new talent today, not tomorrow; and not just any old talent. Here, I'm going to discuss some of the things I've learnt and some of the practical questions you can pose to uncover and secure the talent you need to help both your enterprise and potential employees succeed and excel.

NO SKILLS TO PAY THE BILLS

Skills gaps are cited as among the biggest hurdles to AIOps adoption and implementation, and a recent EY survey of 200 senior leaders found that 56% see talent shortages as the single largest barrier to implementing AI in business operations in 2018. It's clear that finding machine learning engineers is not an easy task. They're a bit of a unicorn, combining engineering fundamentals with data modelling and statistical analysis. But, with the right framework, it's entirely possible to build a team that has the right mix of data science, engineering know-how, and even a little robot emotional intelligence.

MATH IN THE MACHINE

To start, machine learning engineers need a deep expertise in predictive modelling and statistical analysis. I always look for engineers who can combine core engineering fundamentals with the ability to see patterns in data and translate that into action. Not only do they need to be able to manipulate code and build software, but they also need to understand how mathematical models can create insights, and how those insights can drive action.

BHANU SINGH
VICE PRESIDENT OF ENGINEERING, OPSRAMP
Bhanu is an accomplished and decisive leader in the software technology industry, with extensive experience in product strategy, disruptive innovation and delivery to grow market share and revenue, and improve customer experience

To establish a candidate's potential and knowledge, some examples of the practical questions I would pose during initial interview rounds include:

Q: When should you use classification over regression? Setting up use-case driven questions like this helps us understand how a candidate builds mathematical models. Fundamentally, classification is about predicting a label and regression is about predicting a quantity.
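A candidate could illustrate the distinction with a minimal, library-free sketch (the sample data and function names here are invented purely for illustration): a classifier returns a discrete label, while a regressor returns a quantity.

```python
def nearest_neighbour_classify(samples, labels, query):
    """Classification: return the label of the closest training sample."""
    best = min(range(len(samples)), key=lambda i: abs(samples[i] - query))
    return labels[best]

def least_squares_fit(xs, ys):
    """Regression: return (slope, intercept) minimising squared error."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

# Classification: is a reading 'low' or 'high'?  (1-NN picks the closer sample)
label = nearest_neighbour_classify([1.0, 9.0], ["low", "high"], 7.5)  # → "high"

# Regression: predict the quantity itself from roughly linear data.
slope, intercept = least_squares_fit([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
```

The point of the question is exactly this split: if the business outcome is a category (fraud/not fraud), build a classifier; if it is a number (load next Tuesday), build a regressor.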

Q: Can you provide an example of when you would use quicksort versus binary search? These are complementary algorithmic options for organising and querying data, and help illustrate two extremes of a use case. The AI engineer can employ either when deriving actions from data.
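Taking the question to mean quicksort versus binary search (an assumption on my part), a strong answer notes that the two solve different halves of one problem: sort the data once, then query it cheaply many times. A sketch in Python:

```python
import bisect

def quicksort(items):
    """O(n log n) on average: partition around a pivot, recurse on each side."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    return (quicksort([x for x in items if x < pivot])
            + [x for x in items if x == pivot]
            + quicksort([x for x in items if x > pivot]))

def contains(sorted_items, target):
    """Binary search, O(log n) per lookup -- valid only on sorted data."""
    i = bisect.bisect_left(sorted_items, target)
    return i < len(sorted_items) and sorted_items[i] == target

readings = quicksort([42, 7, 19, 3, 88, 19])   # organise the data once...
hit = contains(readings, 19)                    # ...then query it repeatedly
```

A candidate who explains that binary search is useless on unsorted data, and that sorting is wasted effort for a single lookup, has shown the trade-off thinking the question is after.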

Q: Do you have experience with Apache Spark or other public machine learning libraries, such as TensorFlow? Ultimately, we want to make software that's scalable. This is the advantage of a machine learning library. Today's AI engineers must be proficient in modern tools to build enterprise-grade solutions.

The next challenge for a true AI engineer is the ability to solve the big problem of clean data. Creating datasets that are rich, contextual and clean provides the best results for an artificial intelligence solution, as AI relies on data for decision-making. Some further questions based on this might include:

Q: How would you handle an imbalanced dataset, and how do you handle missing or corrupted data in a dataset? These questions get at the importance of building models using clean data. At its core, AI must use clean datasets for insights. Otherwise, the actions will be incorrect or useless.
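In response, a candidate might sketch the two standard first moves — impute missing values and rebalance classes — along these lines (plain Python; the record layout is hypothetical):

```python
import random
from statistics import median

def impute_median(rows, col):
    """Replace missing values (None) in one column with the column median."""
    observed = [r[col] for r in rows if r[col] is not None]
    fill = median(observed)
    return [dict(r, **{col: fill if r[col] is None else r[col]}) for r in rows]

def oversample_minority(rows, label_col, seed=0):
    """Naive random oversampling: duplicate minority-class rows until each
    class has as many rows as the largest class."""
    rng = random.Random(seed)
    by_label = {}
    for r in rows:
        by_label.setdefault(r[label_col], []).append(r)
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced += group + [rng.choice(group) for _ in range(target - len(group))]
    return balanced
```

Median imputation and naive oversampling are only the opening gambit; a good candidate will go on to discuss when dropping rows, model-based imputation, or class weighting is the better choice.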

Q: How do you clean and prepare data to ensure quality and relevance? This question is relatively tactical, but helps us understand the processes the engineer employs at the most critical points in AI engineering.
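As a follow-up, I like candidates who can turn their answer into a concrete cleaning pass. A deliberately simple illustration (the field names and range rule are invented for the example):

```python
def clean_records(records):
    """Minimal cleaning pass: trim and normalise text, drop duplicates,
    and discard rows that fail a basic domain-range check."""
    seen, cleaned = set(), []
    for rec in records:
        name = rec["name"].strip().lower()   # normalise text fields
        value = rec["value"]
        if not (0 <= value <= 100):          # domain rule: plausible range only
            continue
        key = (name, value)
        if key in seen:                      # drop exact duplicates
            continue
        seen.add(key)
        cleaned.append({"name": name, "value": value})
    return cleaned
```

The interesting discussion is not the code but the ordering: normalise before deduplicating (or "Alice" and " alice " survive as two rows), and record what was dropped so data loss is auditable.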

AN EVOLUTION

In short, I look for engineers who can apply statistical analysis to data for enhancement, cleaning and processing – but who also understand the practical ramifications of data as the engine of software. At its core, machine learning engineering sounds like a natural evolution in software engineering. We have, heretofore, been looking for engineers that can translate basic human requests into some sort of computer-based action. We're now simply trying to anticipate that human request with data. The basic mechanics of data ingestion, analysis, interpretation and action are the kind of actions that humans take every day. Turning these steps into an action that a machine can take unsupervised is an entirely new challenge. This is where statistical analysis and predictive modelling come into focus. The true machine learning engineer is both a geek and a craftsperson. She's both a math nerd and a builder. Our talent search helps us identify candidates who live at this intersection. And who can help us continue to win the race for efficient, effective technology.




BECOMING MORE MOBILE

With the huge increase in mobile phone usage, mobile testing is quickly becoming one of the most vital skills a tester can have

How often do you use your phone? If you are anything like me then the answer will range from 'a lot' to 'too much', which is perhaps indicative of the huge change occurring in how people use web apps, with generations of users now accustomed to having the internet available to them 24/7 on their smartphone. With people moving away from accessing the web on their desktop or laptop devices, companies releasing products on the web need to think about how their site looks and works on a mobile phone first, and a computer second. Today's users will not be sat at a computer; they will be reaching into their pocket and loading up the website on their smartphone. Therefore, mobile testing is quickly becoming one of the most vital skills a tester can have.

In the mobile world, we have two options to access a product: we either use a browser on the phone to access a website, or we use an app. However, it's not as simple as that. Firstly, there's the operating system of the phone. Android is the most popular, with iOS close behind. Windows is next after iOS and Android, but with a huge step down in usage rates. Beyond the OS, there are also two types of app: native and web wrapper (often called hybrid). Web wrapper basically means that the app is showing you the exact content that is available to you from the website, but 'wrapping' it in such a way as to make it appear as an app. For some websites this won't be that obvious, but for the most part it's really obvious once you realise what to look for. Native apps, on the other hand, are completely separate from websites; all of the functionality you see within a native app is coming from the app itself – no web stuff involved.

ROB CATTON
CONSULTANT, INFINITY WORKS
Rob is a software consultant working in web development, mobile testing and commercial areas. He is passionate about helping other testers where he can, and this year presented at the Software Testing Conference NORTH

This means that, on the whole, native apps beat web wrapper apps on basically all fronts – they're faster, easier to use, look better and are ultimately more fit for purpose. Native apps are developed to be used exclusively on phones, so you're going to have a much better time using them. Think of all of the best apps you use; Facebook, Snapchat, WhatsApp, YouTube, Twitter, Spotify, basically all banking apps... they're all native for a reason! Web wrappers usually use websites which aren't developed and tested with app usage in mind, which results in a lot of UI problems. Furthermore, automation tools struggle because they have to use the web's HTML DOM to control the app – but more on this later. It's not really a surprise that mobile developers talk about web wrappers with such distaste – it's basically the cheap option if you already have a website. However, it is a widely used solution by a lot of big companies, as they have spent 10-15 years building up a web-based product and have huge web teams. The move to mobile happened quickly, and as such mobile developers are very much in demand right now.

As far as browser testing goes on mobiles, it's very similar to browser testing on a computer. There are still nuances which you will only find when using mobile devices, for example screen size or landscape mode.

MOBILE TESTING CONSIDERATIONS

Here is a non-exhaustive list of things to look out for while mobile testing that you likely (most of the time) won't be able to try using a computer:
• Screen size testing: mobile testing includes tablets or iPads, so screen size can range from the 4-inch diagonal of the iPhone 5 all the way up to almost 13 inches diagonal for the newest iPads. Apps have to work across all the widely used screen sizes
• Internet connectivity testing: people don't always have a 4G or WiFi connection, so we have to either use tools to 'throttle' the connection, or actually physically go somewhere with a poor signal
• Usability testing: mobile phones these days have colour-blind modes and other various usability options; we have to make sure our apps work properly with these settings
• Interrupt testing: what happens to our app when you receive a text or email whilst using it? If the user soft-closes the app (not fully ending the app's activity), many things can happen. Our app needs to be able to deal with these interruptions
• Upgrade testing: arguably one of the most important aspects of mobile testing. Users are required to update their app if they want to have access to the latest functionality, from the Play Store for Android and the App Store for iOS. Apps often need to retain data (for example login information or preferences) after an update, which is a big difference to web development, where the updating process is done server-side rather than device-side
• OS integration testing: quite often apps will come up with some functionality which OSs will pick up in later versions. This will often render the functionality obsolete in the app, and it will need to be removed. A good example is screenshots – some apps would let you fiddle about with the screenshot within the app, but the latest Android and iOS versions now have that built in, which means the apps need to remove it as it's superfluous.
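Interrupt testing in particular lends itself to a quick thought experiment. The toy model below involves no real mobile framework; it just states the behaviour under test: state entered before an interruption must survive a pause/resume cycle.

```python
class AppLifecycle:
    """Toy model of interrupt handling: an incoming call or text pauses the
    app; on resume, in-progress state must survive -- which is exactly what
    interrupt testing checks."""

    def __init__(self):
        self.state = "active"
        self.draft = ""
        self._saved = None

    def type_text(self, text):
        assert self.state == "active", "can only type while active"
        self.draft += text

    def on_interrupt(self):        # e.g. a phone call or text arrives
        self._saved = self.draft   # persist before losing focus
        self.state = "paused"

    def on_resume(self):
        self.draft = self._saved   # restore, so the user loses nothing
        self.state = "active"
```

A real test would drive the actual app through the same sequence (type, trigger a call, return) and assert the draft is still on screen.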





These are only six areas of mobile testing which differ from testing on a computer; there are still more out there, and the list is only growing as our phones get smarter and smarter.

WHAT TO TEST?

So, we now know that mobile testing offers a lot of complexity and interesting scenarios you might not have initially considered. How do we decide what to test? We know we can't test everything... it's never an option. We have to come up with a plan. The most obvious way to decide what we test is by thinking about the use cases – what are the most widely used devices? Thankfully, due to all the data collection these days, we know pretty accurately which versions of OS and which phones people are using, right down to a week-by-week basis. Using this data, we can decide in the team whether we care about the five most used phones, or the 10 or 15 most used... This gives us a pretty good start, but when you realise you are probably going to have to actually purchase these phones to physically test the apps, your budget really starts to feel the strain. Bear in mind that new phones are released frequently, and that people love buying the newest model, and you face a potentially never-ending spending spree on your test devices. This is where options such as device farms come in handy, for example Sauce Labs or Firebase Test Lab. These third-party companies offer access to remote devices for you to test your apps on, but again it is down to pricing and whether it is cost-effective to choose these options.
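The "top N devices" decision can be made mechanical. A sketch, assuming we have usage-share figures from analytics (the numbers and device names below are made up):

```python
def pick_test_devices(usage_stats, coverage_target=0.8):
    """Greedy selection: take the most-used devices first until the chosen
    set covers the target share of observed sessions."""
    total = sum(usage_stats.values())
    chosen, covered = [], 0.0
    for device, sessions in sorted(usage_stats.items(), key=lambda kv: -kv[1]):
        chosen.append(device)
        covered += sessions / total
        if covered >= coverage_target:
            break
    return chosen

# Hypothetical session counts pulled from analytics:
usage = {"Pixel 3": 60, "iPhone X": 25, "Galaxy S9": 10, "Other": 5}
devices_to_buy = pick_test_devices(usage, coverage_target=0.8)
```

The coverage target is the knob the team argues over: 80% of sessions might need only two handsets, while the last few percent can cost more than everything before them.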

LET'S TALK ABOUT TOOLS!

Lots of the tools you are probably familiar with, such as New Relic and Splunk, are naturally usable for mobile apps, helping to provide some useful information. In one mobile team I worked in, we used these tools to track how many users were using certain versions of our apps, and what device they were using them from. Furthermore, we could see which paths the users were taking through the apps, and used this information to inform our testing strategy – there is little point testing a scenario which users never actually use. Aside from surveillance tools like those, there are other tools that are used during mobile testing which are external to the app’s code.

T E S T M a g a z i n e | N o v e m b e r 2 01 8

Charles is an example of one such tool, which is used to set up a proxy on the phone's network. Once this is in place, we can see all of the network traffic going through the phone. This allows us to potentially mess around with the way that the app works, and we can simulate a lot of situations that real users will see. This is obviously preferable to walking around in the middle of a forest, trying to simulate a bad internet connection! For all of you Selenium enthusiasts, Appium is one tool that branched off from Selenium and uses the JSON Wire Protocol to send commands to the phone. For the most part the syntax is exactly the same, but there are obviously some slight differences. For example, there is a thing called 'contexts' in apps which use some web stuff and some native stuff (the hybrid web wrappers mentioned earlier). Because the native code lives in a native DOM, and the web code lives in the HTML DOM, we have to switch between the two contexts to be able to interact with the objects that live in each one. There is even the potential for there to be multiple web contexts. This is why native apps are far, far nicer to automate. That, and it's a lot easier to have control over how the elements are named when you are working on a native app – something all you Selenium aficionados will understand is important; using IDs to find elements, which you can ask developers to set, is perfect. Using other methods... not so much.
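Context switching in the Appium Python client looks roughly like the sketch below. The helper is pure Python, so it can be reasoned about without a device; the `driver` calls are shown as comments and assume the standard Appium-Python-Client API.

```python
def pick_webview_context(contexts):
    """Given Appium's list of context names, return the first webview one.
    A hybrid app typically exposes 'NATIVE_APP' plus one or more
    'WEBVIEW_<package>' contexts (and possibly several web contexts)."""
    for ctx in contexts:
        if ctx.startswith("WEBVIEW"):
            return ctx
    raise LookupError("no webview context - this screen is purely native")

# In a live session (assumed driver object from Appium-Python-Client):
#   driver.switch_to.context(pick_webview_context(driver.contexts))
#   ... locate elements via the HTML DOM ...
#   driver.switch_to.context("NATIVE_APP")   # back to the native DOM
```

Forgetting to switch back to `NATIVE_APP` is a classic source of "element not found" failures in hybrid-app suites, which is part of why fully native apps are so much nicer to automate.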

LEARN FROM PAST EXPERIENCES

The last thing I'd like to speak about is a couple of experiences I have had during my time working on mobile apps teams, in the hope that I can pass something on to someone who might end up in a similar situation to myself. A lot of the time, the little things are what make apps fun and cool to use. Everybody likes finding little 'Easter eggs' the app developers leave behind, and often some of these little funky features are actually quite useful. One of the apps I was working on required a calendar to pop up; the user would pick a number from the calendar view and be shown the information for that day. However, when the user picked a day which hadn't happened yet (and as such there was no page to navigate to), the designed behaviour was for nothing to happen at all.

Now, if you think about that as a user, you will realise that it's not immediately obvious why nothing has happened. The user won't know that the app has actually received the touch event, and might get confused as to why the app 'isn't working'. I had a little think about what I would expect the app to do if I was doing something I shouldn't be, and decided that if the calendar number did a small 'wiggle' from side to side, it would inform the user they're doing something incorrect. After checking with the developer, he found an existing library which could do it, and voila – done in five minutes. No product person needed, and the app has become nicer to use.
Often, when working on a development team, the developers will be working on the same product. That is obviously the case in mobile development too, but when you are making the same app across two different operating systems, you can actually think of them as separate products to some extent. The team I worked on had some Android developers and some iOS developers (Windows wasn't something we supported). On a project where we were up against it, both sets of developers were working from the same set of designs for a native app. However, not all of the desired functionality could work at every part of the project, and sometimes the designs had to change to allow for this. When these designs changed, the change wasn't always communicated to the opposite OS developers. This meant that we had two products which were meant to be heading towards the same end result, but in actuality we had two apps which were straying further and further away from each other. I think the only way this could have been avoided is through my involvement; ideally, I would have been so involved with the developers' process that I could see what was happening and how far astray we were getting. Sadly, because of the time constraints, that wasn't something I was able to do – and we ended up suffering because of it.
So, I hope this article has shed some light on mobile testing for those of you who are keen to become further involved in it. I honestly believe mobile development is going to expand and develop even more than it has done in the past couple of years – as soon as we have batteries that last more than a day, who will need laptops?


FINALISTS

Following an extensive, anonymised judging process, we are pleased to announce the finalists in The European Software Testing Awards 2018.



1. CAPGEMINI BEST AGILE PROJECT
Awarded for the best software testing project in an agile environment.

FINALISTS
★★ British Telecom in partnership with Accenture
★★ Student Loans Company in partnership with Mastek
★★ Nordea in partnership with Maveric Systems
★★ Lapeyre in partnership with Panaya
★★ Royal Mail Group
★★ Sainsbury's in partnership with Tata Consultancy Services
★★ Schroders in partnership with UST Global
★★ test IO
★★ UBS in partnership with Infosys Ltd.
★★ UST Global

2. BEST MOBILE TESTING PROJECT
Awarded for the best use of technology and testing in a mobile application project.

FINALISTS
★★ Aviva Digital
★★ Accenture Technology Digital Testing
★★ Deloitte
★★ Lloyds Banking Group in partnership with Wipro
★★ nFocus Testing
★★ TestingXperts
★★ Ticketmaster
★★ Wall Street Journal in partnership with Applause

3. BEST TEST AUTOMATION PROJECT FUNCTIONAL
The award for the best use of automation in a functional software testing project.

FINALISTS
★★ A1QA
★★ Capgemini
★★ HARMAN Connected Services
★★ Keytorc Software Testing Services
★★ Lloyds Banking Group Account Overview Team (Group Digital)
★★ Metro Bank in partnership with Maveric Systems
★★ Schroders in partnership with UST Global
★★ TestDevLab
★★ Specsavers in partnership with Mastek
★★ Cognizant
★★ Vodafone Ltd.
★★ ING Bank Turkey in partnership with Saha BT

4. BEST TEST AUTOMATION PROJECT NON-FUNCTIONAL
The award for the best use of automation in a non-functional software testing project.

FINALISTS
★★ Accenture Technology Digital Testing
★★ Cognizant
★★ British Telecom in partnership with Accenture
★★ Endava
★★ ABN AMRO Bank N.V. in partnership with Infosys Ltd.
★★ Islandsbanki in partnership with Itera
★★ Tech Mahindra Ltd.
★★ UST Global
★★ ANZ in partnership with Capgemini

5. NTT DATA GRADUATE TESTER OF THE YEAR
The award for a recent graduate who has completed university or an apprenticeship scheme in the last two years, who has shown outstanding commitment and development in the testing field.

FINALISTS
★★ Daniel Phillips, Capgemini
★★ Kieran Ellis, Cognizant
★★ Maria Ekundayo, Deloitte
★★ Jess Harrow, KPMG UK
★★ Sudarshan Shelke, QA Mentor, Inc.
★★ Graham Hurlston, ROQ

6. WIPRO TESTING MANAGER OF THE YEAR
Awarded to the most outstanding test manager or team leader over the last 12 months.

FINALISTS
★★ Michelle Christmas, Nationwide Building Society in partnership with IBM
★★ Vibhu Taneja, Deutsche Bank
★★ Diane Cox, Capgemini
★★ Rayhan Mussa, KPMG UK
★★ Allan Woodcock, Lloyds Banking Group
★★ Suresh Chandra, NewDay Ltd.
★★ Nitin Tawde, QA Mentor, Inc.
★★ Andrew Lazenby, Royal Mail Group
★★ Soma Pattanaik, Tech Mahindra Ltd.

7. BEST OVERALL TESTING PROJECT - RETAIL
Awarded to the most outstanding testing project in the retail sector.

FINALISTS
★★ Accenture Technology Digital Testing
★★ Wilko in partnership with Cognizant
★★ Infosys Ltd.
★★ NTT DATA
★★ Cognizant

8. BEST OVERALL TESTING PROJECT - FINANCE
Awarded to the most outstanding testing project in the finance sector.

FINALISTS
★★ Aviva
★★ Brickendon Consulting
★★ Capgemini
★★ Close Brothers in partnership with Cognizant
★★ Deloitte
★★ Deutsche Bank Group in partnership with Infosys Ltd.
★★ Islandsbanki in partnership with Itera
★★ Lloyds Banking Group
★★ Royal Bank of Scotland in partnership with Sandhata
★★ The Cheque and Credit Clearing Company in partnership with KPMG UK

9. BEST OVERALL TESTING PROJECT - COMMUNICATION
Awarded to the most outstanding testing project in the communication sector.

FINALISTS
★★ British Telecom in partnership with Accenture
★★ Vodafone UK in partnership with Tata Consultancy Services
★★ Infosys Ltd.
★★ MERA
★★ Sunrise Communications AG
★★ Swisscom in partnership with Accenture Technology - Digital Testing
★★ Tech Mahindra Ltd.
★★ Vodafone in partnership with Accenture
★★ Openreach in partnership with Accenture

10. ACCENTURE BEST USE OF TECHNOLOGY IN A PROJECT
Awarded for outstanding application of technology in a testing project.

FINALISTS
★★ BrightGen
★★ Capgemini
★★ Cognizant
★★ ICterra
★★ Dow Jones
★★ DST Systems
★★ Lloyds Banking Group in partnership with Wipro - Account Overview Team (Group Digital)
★★ NewDay Ltd.
★★ Tech Mahindra Ltd.
★★ VZVZ in partnership with Parasoft
★★ Zurich Insurance in partnership with Wipro Technologies

11. UST GLOBAL TESTING TEAM OF THE YEAR
Awarded to the most outstanding overall testing team of the year.

FINALISTS
★★ Accenture Technology Digital Testing
★★ Aviva
★★ SimbirSoft
★★ Ciklum
★★ Deutsche Bank in partnership with HCL Technologies
★★ HSBC
★★ Infosys Ltd.
★★ Intellectual Property Office
★★ QArea

12. MASTEK TESTING MANAGEMENT TEAM OF THE YEAR
Awarded to the testing management team that has shown consistently outstanding leadership.

FINALISTS
★★ Centrica in partnership with Cognizant
★★ Cognizant
★★ Lansforsakringar AB in partnership with Infosys Ltd.
★★ The Cheque and Credit Clearing Company in partnership with KPMG UK
★★ Nationwide Building Society in partnership with IBM - Test Transformation Management team
★★ NTT DATA
★★ QA Mentor, Inc.
★★ Tata Consultancy Services
★★ University of Cambridge
★★ Lloyds Banking Group Retail Business Assurance Leadership Team

13. KPMG MOST INNOVATIVE PROJECT
Awarded for the project that has significantly advanced the methods and practices of software testing.

FINALISTS
★★ Aviva in partnership with Tata Consultancy Services
★★ Centrica Hive Ltd.
★★ ECS Digital
★★ ABN AMRO Bank in partnership with Infosys Ltd.
★★ Mabl
★★ Centrica
★★ Metro Bank in partnership with Maveric Systems
★★ NewDay Ltd.
★★ Tech Mahindra Ltd.
★★ Intetics Inc.

14. INFOSYS BEST UX TESTING IN A PROJECT
The award for the best use of user experience testing in a project.

FINALISTS
★★ Sophos in partnership with Box UK
★★ Dow Jones in partnership with Applause
★★ Société Générale
★★ UXservices
★★ Vodafone in partnership with Applause
★★ Deloitte

15. SANDHATA TECHNOLOGIES LEADING VENDOR
Awarded to the vendor who receives top marks for their product/service and customer service.

FINALISTS
★★ Brickendon Consulting
★★ Capgemini
★★ ELEKS
★★ HARMAN Connected Services
★★ nFocus Testing
★★ NIX Solutions
★★ NTT DATA
★★ QASymphony
★★ TestingXperts
★★ Tricentis



JUDGES

The European Software Testing Awards 2018

CHAIR OF THE JUDGING PANEL

MYRON KIRK HEAD OF TEST, ENVIRONMENT AND RELEASE Boots

CHEKIB AYED HEAD OF TESTING PRACTICES Société Générale Corporate & Investment Banking

AASHISH BENJWAL ASSOCIATE DIRECTOR – GLOBAL ASSET MANAGEMENT UBS

LINDSEY GIBBS HEAD OF TESTING University of Nottingham

NADINE MARTIN SENIOR MANAGER, TEST OPERATIONS Sony Interactive Entertainment

NIRANJALEE RAJARATNE HEAD OF QA, DELIVERY STRATEGY Third Bridge

AL SABET HEAD OF QA & TESTING Mitie

CHINTAN SAVJANI HEAD OF SOFTWARE QUALITY ASSURANCE Kantar Media (a WPP Company)

DEEPAK SELVARAJ HEAD OF TEST AND DEPLOYMENT (TECHNOLOGY SOLUTIONS) Virgin WiFi

ROUVEN SCHRECK HEAD OF QUALITY ASSURANCE Prudential

SIMON STRICKLAND UK HEAD OF QA, TEST & RELEASE MANAGEMENT Zurich Insurance Company Ltd

Follow The European Software Testing Awards 2018 on Twitter #SoftwareTestingAwards


April spent time at Great Ormond Street Hospital undergoing a number of life-saving surgeries to treat her heart condition.

THE CHILDREN AT GREAT ORMOND STREET NEED YOUR HELP. Great Ormond Street Hospital is a place where, every day, seriously ill children from across the UK come for life-saving treatments. The hospital has always relied on charitable support to deliver the extraordinary care these patients need.

Donations help to fund the most up-to-date equipment, rebuild and refurbish wards and medical facilities, support groundbreaking research, and fund patient and family support services.

TO FIND OUT HOW YOU CAN TRANSFORM THE LIVES OF CHILDREN LIKE APRIL visit gosh.org Great Ormond Street Hospital Children’s Charity. Registered charity no. 1160024.




