
CONTENTS

Software industry news ................................... 5
The changing face of mobile software quality assurance ....... 10
Under construction: continuous testing for mobile ............ 12
Ensuring best quality software ............................... 14
Visual regression testing .................................... 18
SOFTWARE METRICS
Analytics driven software testing ............................ 20
TEST AUTOMATION
Baking a perfect test automation cake ........................ 22
AUTOMOTIVE SECTOR
The winding road ahead for smart cars ........................ 28
Improving the effectiveness of local authorities ............. 34
Conflict, solutions and resolution ........................... 38
RECRUITMENT
Inspiring, recruiting and retaining talent ................... 42
MANUAL TESTING
Manual testing isn't dead .................................... 48
SUPPLEMENT
TEST Focus Groups Supplement ................................. 53

TEST Magazine | May 2017

EDITOR'S COMMENT





Autonomous cars will be a US$84 billion market by 2030, according to research carried out by Frost & Sullivan.1 And with the list of global cities allowing self-driving cars to be road tested on their streets growing monthly, this ambitious figure seems realistic. Most recently New York, but also cities like London, Melbourne, Paris, and the beach town of Suzu in Japan, have all granted licences to various automakers and tech firms to test autonomous vehicles on public roads.

In South Korea, so serious is the investment in self-driving cars that the government is building an 88-acre site that it claims will be the largest test bed for autonomous driving in the world, to be used by South Korean companies including Samsung, SK Telecom, Naver, Hyundai and Kia. Not to be outdone, the American Center for Mobility has begun construction on a purpose-built, 335-acre facility in Michigan focused on testing, verification and self-certification of connected cars.

According to McKinsey & Company, by 2020 one in five cars will be connected to the internet, and the number of networked cars will rise 30% per year for the foreseeable future.2

The autonomous car will transform the automotive industry, disrupt traditional models and change consumer behaviour. It is quite easy to imagine a digitally savvy consumer making purchasing decisions based on how well a vehicle syncs with their smartphone rather than on horsepower. This opens up many exciting opportunities for manufacturers, who can monetise new service channels, for example driver-assistance apps or tourism information. The technical innovation needed to develop safe, comfortable autonomous driving will add new, previously unexpected revenue streams to both traditional carmakers and their tech partners.

Because it will have to be a partnership. OEMs have to readjust organisationally, and seek out and take on board advice and expertise


from software firms. This means the blending of two very different worlds: the manufacturing sector's typical five-year development cycles versus software companies' 'fail fast' attitude.

2030 is still a few years away, and it will be exciting to follow this space. It's still early days, both from a technology and a service offerings perspective. You can read more about the possibilities of connected cars on p.28 in an article written by the former Head of Software Quality & Testing at Renault Nissan.

This issue of TEST Magazine also shares how to disrupt the recruitment process for a test manager; ways software is changing the public sector, whether it be the police force or city councils; why manual testing is far from dead; and much more.

If you've picked up this copy at The National Software Testing Conference or The National DevOps Conference, you can find the conference programmes on pp. 51 and 67, respectively. For those of you unable to make it to these events, rest assured that conference summaries, presentations and more will be shared online and in TEST Magazine. In a similar vein, this issue is accompanied by The TEST Focus Groups Supplement, a special issue that summarises and shares the outcomes from the executive roundtable event that took place earlier this year.

TEST Magazine's next conference is the Software Testing Conference NORTH, an annual two-day conference taking place in York, UK. More information about this event can be found here: north.softwaretestingconference.com

All that's left for me to say is, I hope you enjoy this issue and see you around at our events!

MAY 2017 | VOLUME 9 | ISSUE 2

© 2017 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor of TEST Magazine or its publisher, 31 Media Limited.

ISSN 2040-01-60

GENERAL MANAGER AND EDITOR
Cecilia Rehn
cecilia.rehn@31media.co.uk
+44 (0)203 056 4599

EDITORIAL ASSISTANT
Leah Alger
leah.alger@31media.co.uk
+44 (0)203 668 6948

ADVERTISING ENQUIRIES
Anna Chubb
anna.chubb@31media.co.uk
+44 (0)203 668 6945

PRODUCTION & DESIGN
JJ Jordan
jj@31media.co.uk

31 Media Ltd, 41-42 Daisy Business Park, 19-35 Sylvan Grove, London, SE15 1PD
+44 (0)870 863 6930
info@31media.co.uk
www.testingmagazine.com

PRINTED BY Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA

softwaretestingnews
@testmagazine


TEST Magazine Group

References
1. 'Digitisation transforms the automotive sector', http://www.softwaretestingnews.co.uk/digitisation-transforms-automotive-sector/
2. 'The road to 2020 and beyond: What's driving the global automotive industry', http://www.mckinsey.com/industries/automotive-and-assembly/our-insights/the-road-to-2020-and-beyond-whats-driving-the-global-automotive-industry





UK GAMES COMPANIES SEEK RELOCATION AFTER BREXIT A survey has found that two fifths of UK games companies want to relocate out of the country as a result of Brexit. The industry's main concern is the loss of international talent and a skills shortage, as Ukie studies have found that 57% of UK games companies recruit workers from continental Europe. With fewer skilled UK candidates available, stopping EU workers from living and working in the UK would have a severe impact on the industry.

HAND MOVEMENTS AFFECT SMARTPHONE SECURITY Researchers claim that hackers can steal a smartphone user's four-digit PIN from the way the device tilts as the code is entered. Computer scientists at Newcastle University revealed that, using the devices' gyroscope data, they could guess four-digit PINs correctly 70% of the time on the first attempt, and 100% of the time by the fifth. In a bid to prevent such attacks, most smartphones now require a six-digit passcode or longer, and the majority of websites also require security questions before granting access to personal, private information.

TESLA: SELF-DRIVING TO SELF-DESTROYING Tesla car owners have taken the self-driving carmaker to court for the first time. Automaker Tesla Inc. has been sued over claims that it misleadingly sold 47,000 vehicles, worth US$81,000 to US$113,000, with dangerously defective Autopilot software. Last October Tesla started selling the Autopilot hardware support, with the aim of giving the vehicle new features, such as the ability to change lanes without driver input and to merge on and off highways. Since then, Autopilot has had terrible reviews, with several incidents of drivers losing control of their cars while the software was active because it was not functioning properly. The first major self-driving car incident was publicised last year, when a 40-year-old man was killed after his Model S drove under the trailer of a 16-wheeler truck on a Florida highway, although that incident wasn't taken to court. Tesla says that it never claimed the vehicles are equipped with "full self-driving capability."

SAMSUNG'S NEW OUT OF THIS WORLD EXPERIENCE Samsung has revealed new virtual reality technology intended to create new, emotional VR experiences. The experimental, hands-free VR interface, called FaceSense, converts the bioelectric signals produced whenever the consumer speaks, changes expression or shifts their gaze into input commands. The technology is aimed at anyone who wants an out of this world experience, an escape from reality, and to sink into an immersive VR experience; consumers with various usage impediments can also use the product, as it can help with anxiety issues.






Baidu, the Chinese web services company, is revealing the technological secrets behind its self-driving car for free, in the hope of speeding up the development of autonomous driving and drawing carmakers to its services. Most development firms hide their technology from competitors, but Baidu is taking a different approach by releasing important elements of its self-driving platform, Apollo. Although China's domestic car market is one of the largest in the world, it remains to be seen whether Baidu's move will pay off, as its automated-driving technology has not been tested as extensively as that of other companies, such as Google.

Online marketplace Etsy has open sourced its waste management software tools in the hope of helping other companies expand their sustainability efforts. The DIVERTsy software, now published on GitHub, helps Etsy track and measure in-house waste streams such as landfill, recycling, and compost. The software also measures other outgoing materials, such as electronic waste, food scraps and textiles. The firm has also announced a company-wide strategy to achieve "zero waste" operations across its 10 offices worldwide by 2020.

NATO SEEKS CYBERSECURITY BIDS The North Atlantic Treaty Organisation (NATO) has revealed plans to invite bids for US$3 billion of work on satellite communications, air and missile defences, cyber security and advanced software for security purposes. The intergovernmental military alliance has placed previous security investment orders, including US$8 million on control systems for ballistic-missile defence and air defence; US$180 million for advanced software; and US$290 million for information technology infrastructure and cybersecurity. According to NATO officials, US$1.5 billion has been allocated to expand satellite telecommunications bandwidth to support military development, under NATO's security investment programme, which is funded by its 28 member countries.

T E S T M a g a z i n e | M a y 2 01 7

CHINA AND AUSTRALIA PARTNER ON CYBERSECURITY Australia's Prime Minister, Malcolm Turnbull, has agreed to work with Meng Jianzhu, Secretary of the Chinese Communist Party's Central Commission for Political and Legal Affairs, in a bid to improve cybersecurity. The Australian government signed an agreement with China earlier this March, committing both sides to create a "mechanism" to discuss cybersecurity, cyber crime and cyber-enabled intellectual property theft, and not to share trade secrets and confidential business information with other countries. Over the next three years, both countries have agreed to contribute up to AU$6 million to Joint Research Centres to support technology, strategic science and innovation collaborations, focusing mainly on manufacturing, medical technologies, and resources and energy. New Zealand is also focusing on cybersecurity issues, with a secretive gathering of top spy chiefs visiting the country recently.





Researchers have found that hackers could shut down parts of an electric grid, according to General Electric (GE). GE is fixing a bug in software used to control the flow of electricity in a utility's power systems, in the hope of preventing hackers from shutting down parts of a grid. If the vulnerability isn't fixed, attackers could gain remote control of GE protection relays, as cyberattacks become more and more common. This isn't the first incident of its kind: in December, Ukraine's capital city Kiev was believed to have been cyberattacked by Russia, resulting in 225,000 customers losing power.

The US Air Force has announced its first ever bug bounty programme, which will be used to surface software vulnerabilities and is ready to launch next month. Friendly hackers from outside the United States will be invited to sign up and use the new programme to help strengthen the Air Force's cybersecurity and defence posture. Cash prizes will be offered as a further incentive, with details soon to be publicised. HackerOne has designed the invite-only programme. Security experts will choose participants from countries outside the United States, such as the United Kingdom, Canada, Australia and New Zealand, and the Air Force will pick worthy United States citizens. Members of the US military are also welcome to sign up, but can't earn rewards.

INDIAN CONGRESS LEADER DEMANDS ACCESS TO VOTING SOFTWARE New Delhi's senior Congress leader, Digvijaya Singh, is demanding that the Election Commission allow political parties to examine electronic voting machine (EVM) software. Singh firmly requested that, instead of limiting objections to EVM hacking, the Election Commission should allow examination of the software at the stage it is written to the machines from the server. With mixed reviews, several opposition parties have requested that voting in India return to the old ballot paper system. Meanwhile, the Supreme Court wants explanations as to why the Election Commission has delayed switching to upgraded machines that provide a paper receipt as proof of each vote. Interestingly, the Election Commission has announced an 'open challenge': from the first week of May 2017, scientists, experts and technocrats will have up to 10 days to hack the EVMs.




INDUSTRY EVENTS www.softwaretestingnews.co.uk


BETTER SOFTWARE WEST CONFERENCE Date: 4-9 June 2017 Where: Las Vegas, NV, USA www.bscwest.techwell.com ★★★



A Spanish company called GMV has developed software called Checker ATM, used by more than 20 banks across 80,000 cash machines, which has been found to contain a vulnerability, according to GMV's website and security company Positive Technologies. Once inside an ATM, hackers are able to interface with the APIs that control features such as dispensing cash, and to issue commands to fraudulent ends.

The US Transportation Security Administration (TSA) is set to enhance and modernise more than 70 of its enterprise business applications. Accenture Federal Services (AFS) will rely on agile methodologies and DevOps practices to meet the goals of the US$64 million contract. To help improve the apps, the TSA will use core IT services provided by AFS.


AGILE 2017 CONFERENCE Date: 7-11 August 2017


Where: Orlando, FL, USA www.agilealliance.com ★★★

SOFTWARE TESTING CONFERENCE Date: 15 September 2017 Where: Irvine, CA, USA www.astqb.com ★★★

SOFTWARE TESTING CONFERENCE NORTH Date: 26-27 September 2017 Where: York, UK north.softwaretestingconference.com RECOMMENDED




Accountants warn that people in certain income brackets may be paying too much tax because HMRC's online self-assessment tax calculator is faulty. HMRC's tax systems have long been criticised for being too complicated and confusing, and legal changes have left HMRC struggling with its online systems.

Companies in the finance sector must realise that it is not only about acquiring technological and technical expertise but, more importantly, about establishing a corporate culture that takes on these challenges. Professionals must be able to present their ideas in a way that other departments can discuss and understand.



DEVOPS FOCUS GROUPS Date: October 2017 Where: London, UK www.devopsfocusgroups.com RECOMMENDED


AGILE TESTING DAYS Date: 13-17 November 2017 Where: Berlin, Germany www.agiletestingdays.com


THE CHANGING FACE OF MOBILE SOFTWARE QUALITY ASSURANCE Kumaresan Narayanaswamy, Unit CTO for Assurance Services Unit, TCS, explores key touchstones for a comprehensive, forward-looking QA and testing strategy.



We live in an era when mobile ecosystems, which are at the head of the 'digital five' forces – mobility, big data, social media, cloud computing and artificial intelligence – have revolutionised enterprise priorities. The 21st century is witnessing a digital revolution where mobile devices are powering eBusiness. Wireless technologies are converging with the internet, to form intelligent ecosystems of connected devices, such as the internet of things (IoT). Customers don't just carry mobile devices – they wear them, too! Fitness trackers and wearable gadgets are now intrinsic to users' everyday lives. Augmented reality, with its potential to provide virtual, computer‑generated, real‑time views of the physical world, is not simply a gaming application, but also a promising technology for businesses to enrich their brand with three dimensional models and videos.

GLOBAL SCOPE The expectations from technology solutions are many. Organisational imperatives – short time‑to‑market, tight cost economies and the potential for scaling up globally – are the starting point. Increasingly, software quality assurance initiatives have a global view because enterprises today have millions of connected customers spread across the world.

A MULTI‑CONTEXT Moreover, users access many mobile platforms today. Therefore, tomorrow's software quality assurance strategies need to deliver a range of device‑specific mobile services for various customer touch points. Many countries have their own SIM cards, and mobile assurance must thus factor in



local and roaming connectivity. Besides, some markets are highly segmented, requiring strategy designs that can be customised to local or regional dynamics.

High security becomes binding, too, when we consider mobile payments, user privacy and data confidentiality. Competition necessitates that all code stays private to the organisation during dev, test, production and pre‑launch. Therefore, it could well be that testing must be carried out only in local pre‑production labs and on a wide range of physical devices. And, for efficient and swift in‑field launches, comprehensive testing for multi‑country rollouts may be the only way out.

CX: THE LINCHPIN But the real utility of technology is in successfully providing end users a delightful and memorable best‑in‑class customer experience (CX). Gone are the days when enterprises drove customer needs. Today, customers – more so, mobile customers – drive enterprises by what they want. The QA organisation's vision cannot ignore this.

MOBILE AT THE CENTRE Today's mobile user assumes that anytime, anywhere access is the default. If that user is a customer, her buying journey starts – and often ends – on a mobile app. Moreover, she expects to see that quality of experience replicated omni‑channel. If the user is employed in a corporate business, she takes that mobile to the workplace and expects to log into the company network with it, while staying connected with her personal social circle. To enable employees to do this, companies need BYOD policies and COPE models that assure company data protection and individual employee privacy and, in turn, transparent governance and lightweight processes.

THINKING DEV‑TEST‑OPS For development, testing and operations teams, this translates into putting in place a comprehensive, customer‑centred, innovative mobile testing strategy that orchestrates use cases using real‑life persona‑based customer experience (CX) user stories. The mandate: to seamlessly deliver unified, consistent, high‑performance, secure omni‑channel functionality, regardless of device, browser and OS.

Drilling down, this means addressing coverage and delivery velocity. The first aspect, coverage, includes the functional aspects of the software – a 'minimum' criterion that tells how well the software performs – and also 'beyond functional' aspects such as regulatory compliance around security, usability, accessibility and compatibility.



HIGH VELOCITY QUALITY The second aspect, delivery velocity, has to do with delivery throughput. Testing should achieve at least two objectives. First, it should shorten the application development lifecycle and shrink time‑to‑market. Second, it should enable this through automation, tooling and instrumentation, and through continuous integration (CI), continuous delivery (CD), continuous testing, release automation and continuous deployment. With service and network virtualisation, these comprise progressive automation that covers app management and governance and easily accommodates changes in the applications under test. High velocity quality ensures not only that quality is delivered, but that it is delivered at superior velocity, essential when delivering superlative mobile CX.
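The twin demands of quality and velocity can be made tangible as a small quality gate of the kind a CI pipeline might run after every build. This is only an illustrative sketch: the result format and the ten-minute budget are assumptions for the example, not a method from the article.

```python
# Illustrative CI quality gate: pass a build only if every test succeeded
# AND the suite ran fast enough to sustain a continuous-delivery cadence.
# The result dictionaries and thresholds are hypothetical.

def quality_gate(results, max_minutes=10):
    """results: list of dicts like {"name": str, "passed": bool, "minutes": float}.
    Returns (ok, reasons) so a CI runner can fail the build with a message."""
    reasons = []
    failed = [r["name"] for r in results if not r["passed"]]
    if failed:
        reasons.append(f"failing tests: {', '.join(failed)}")
    total = sum(r["minutes"] for r in results)
    if total > max_minutes:
        reasons.append(f"suite too slow: {total:.1f} min > {max_minutes} min budget")
    return (not reasons, reasons)

if __name__ == "__main__":
    ok, why = quality_gate([
        {"name": "login_smoke", "passed": True, "minutes": 1.5},
        {"name": "checkout_flow", "passed": False, "minutes": 3.0},
    ])
    print(ok, why)
```

A gate like this encodes both objectives at once: a slow-but-green suite fails just as a fast-but-red one does, which keeps the feedback loop short.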

CUSTOMER FEEDBACK The effectiveness of a mobile testing endeavour is evidenced through continuous user feedback received after go‑live. That feedback comes in on social media, through app reviews and from the marketplace. Listening mechanisms are therefore critical in today's connected age. Dev, Test and Ops must ensure that the applications launched are swiftly tuned, in subsequent sprint cycles, to meet customers' demands and fulfil market expectations. They must monitor an app throughout its entire lifecycle. An entire edifice of technology is built on the exact science of software testing. A sound software assurance philosophy underpins a comprehensive mobile testing strategy. Does yours?


Kumaresan has over 19 years of industry experience in project and program management, customer delivery management, and center of excellence (CoE) portfolio management. He drives the development of assets, IP and solution accelerators using industry products and open source tools for test automation of composite web applications, as well as in the areas of SOA testing, mobility assurance, big data testing, BI and data warehouse testing, static testing and structural quality analysis, assurance analytics, and R&D in assurance leveraging AI and assurance for IoT.


UNDER CONSTRUCTION: CONTINUOUS TESTING FOR MOBILE Building a continuous testing strategy for mobile requires planning and the right tools. Dan McFall, President, Mobile Labs, reveals the steps enterprise mobility teams need to take to get started.


In mobility, things move fast. Today's mobile users have high expectations and are savvy enough to know when a mobile experience on a smartphone or tablet is not up to standard. Enterprise mobility teams are tasked daily with building and testing applications that must always be better and faster than they were the month before, the week before, or in some cases even the day before. How can mobile developers, testers and QA professionals speed up the process of application delivery while staying responsive to these demands? The answer is continuous testing.

CONTINUOUS TESTING POWERS CONTINUOUS DELIVERY With a continuous delivery approach, teams of mobile developers, testers and QA put processes


in place to ensure that new applications or new releases are produced in shorter, more efficient cycles. Once new code is developed, continuous testing is the next step for ensuring a successful deployment that is on time and under budget. But, without a careful plan and a direction for continuous testing, even the most capable enterprise mobility teams can get stuck and fall behind. Here are three steps to help teams construct a continuous testing strategy.


Building a continuous testing strategy can be compared to the job of an architect. Before an architect begins drafting plans for a new skyscraper, there is an abundance of research and planning that must occur. After all, how can an architect design a safe and structurally sound building without knowing all the facts and limitations of the world around it?




The same idea holds true for enterprise mobility teams. The team must take a step back and really analyse their environment to identify elements that will fit into a continuous testing strategy. The team must consider the following:

• Where will we begin continuous testing, and on which applications?
• What tools will we use, and what resources do we need?
• How will we know we are successful?

Here's an example of why planning is so important: consider an application that is rarely used. Perhaps it is only employed for a special event and is only updated once or twice per year. If updates happen so rarely, does it make sense to build a continuous testing strategy around this app? Remember that enterprise mobility teams are busy and resources must be allocated wisely to stay on task and to continue to deliver on time. With careful planning, a continuous testing strategy can be put into place for the applications and projects where it will have the most impact.
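The planning step above can be sketched as a crude prioritisation: score each app by how often it ships and how heavily it is used, then invest continuous-testing effort where the score is highest. The fields and weighting below are invented for illustration, not a method from the article.

```python
# Hypothetical prioritisation sketch: which apps repay a continuous-testing
# investment? An app updated twice a year scores far below a weekly release.

def ct_priority(apps):
    """apps: list of dicts {"name": str, "releases_per_year": int,
    "monthly_users": int}. Returns names sorted most- to least-worthwhile."""
    def score(app):
        # Release cadence dominates; usage breaks ties between similar cadences.
        return app["releases_per_year"] * 1000 + app["monthly_users"] / 1000
    return [a["name"] for a in sorted(apps, key=score, reverse=True)]
```

Run against the article's example, a special-event app with two releases a year would land at the bottom of the list, exactly where the planning discussion says it belongs.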


With a plan put into place, mobility teams have a draft or a blueprint of the scale of work that needs to be accomplished. The next step is choosing the right tools, or the right people, processes and technology to move forward with continuous testing. First, make sure your team is set up properly to deliver testing as part of your delivery process. It is important that testers and developers have open communication. The faster the feedback to development, the faster it is for a developer to turn around a defect or make a change to the app. Quick feedback is important, so organise teams to reduce bottlenecks, even if that requires a change to the organisational structure. Having silos and poor communication slows down delivery and will impact the quality of the app. Consider pairing developers and testers to achieve the fastest possible results. Once the enterprise teams have been organised, they should identify a continuous integration system. Without a solution that can manage communication between different phases of the application lifecycle, achieving a fully realised strategy of continuous testing may well be for naught. Next, it is important to consider the types of tests that enterprise mobility teams will be required to run. How many tests are automated? How many are manual? For test automation,

enterprise mobility teams need to consider the allocation of personnel required to run unit tests, the SDETs needed for integration testing, and the QA resources needed for regression testing. Automation does not always have to be about the automation of the tests themselves. For manual testing, enterprise mobility teams need to identify how they can automate the results of manual testing activities. By automating the feedback loop from the manual test team, it is possible to increase efficiency with an automated ticketing system that provides constant feedback when bugs are identified. But, once you have built your strategy, how do you know it is successful? You will need an analytics solution that allows for either a 'quick decision' for manual deployment in a continuous delivery model or fully automated continuous deployment of your app.
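One way to picture the automated feedback loop described above: the moment a manual tester flags a failed step, a ticket record is generated, rather than waiting for a triage meeting. The ticket fields and the severity rule here are invented for the sketch, not taken from any particular tracker.

```python
# Illustrative sketch of automating the manual-test feedback loop:
# every flagged failure becomes a ticket the moment it is recorded.
import datetime

def tickets_from_manual_run(observations):
    """observations: list of dicts {"step": str, "ok": bool, "note": str}.
    Returns one ticket dict per failed step, ready to push to a tracker."""
    tickets = []
    for obs in observations:
        if obs["ok"]:
            continue
        tickets.append({
            "title": f"Manual test failure: {obs['step']}",
            "body": obs["note"],
            # Hypothetical triage rule: crashes get escalated automatically.
            "severity": "high" if "crash" in obs["note"].lower() else "normal",
            "opened": datetime.date.today().isoformat(),
        })
    return tickets
```

In practice the returned dictionaries would be posted to whatever ticketing system the team uses; the point is that no human has to transcribe the failure.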



As most architects can attest, even with the most careful plans, the most talented staff, and the best building materials available, problems may still occur. This is where reality will be your guide to refining continuous testing. An enterprise mobility team can plan and do all they can on the front-end to ensure success, but there are always outside factors that affect even the best-laid plans. It is important to be nimble and to adjust quickly. While engaged in continuous testing, application performance monitoring (APM) will provide analytics and metrics that your team requires to be successful and to move forward. By studying these metrics, such as performance and availability, statistics about devices and operating systems, network timing, bottleneck and crash reporting, etc., enterprise mobility teams can constantly evaluate what is and what is not working. Your analytics system will allow you to know that not only was your app ready to be deployed, but that it is still ready to stay in production. After all, continuous testing implies continuous improvement. This is where analytics and metrics will help your enterprise mobility team to excel. Planning and executing a continuous testing strategy takes forethought and care, but with the right tools and resources, your enterprise mobility team can build a successful plan to help your organisation stay ahead of the curve and to continue to address the demands of mobility.
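Concretely, the "is it still ready to stay in production?" question can reduce to a handful of thresholds over APM-style data. A minimal sketch, assuming invented metric names and limits:

```python
# Minimal production-readiness check over APM-style metrics.
# The metric names and thresholds are hypothetical.

def still_healthy(metrics, max_crash_rate=0.01, min_availability=0.995):
    """metrics: {"sessions": int, "crashes": int, "uptime_ratio": float}.
    Returns (healthy, findings) for a deploy-or-rollback decision."""
    findings = []
    crash_rate = metrics["crashes"] / max(metrics["sessions"], 1)
    if crash_rate > max_crash_rate:
        findings.append(f"crash rate {crash_rate:.2%} exceeds {max_crash_rate:.2%}")
    if metrics["uptime_ratio"] < min_availability:
        findings.append(f"availability {metrics['uptime_ratio']:.3f} below {min_availability}")
    return (not findings, findings)
```

Fed continuously from monitoring, a check like this turns "continuous improvement" from a slogan into a recurring, automatable decision.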


Dan has over 19 years of professional and executive experience in the B2B technology sector. He has worked with global organisations to improve their development and QA processes around mobile device and application testing. Dan is a graduate of the Georgia Institute of Technology with a degree in Industrial and Systems Engineering and is an active member and supporter of the Technology Association of Georgia.



Pankaj Upadhyay, Associate Vice President, Key Relationships, Maveric Systems, underscores the role of QA in strengthening digital transformation.



The UK government has been actively promoting the technology sector for more than seven years now. It was in 2010 that the then Prime Minister, David Cameron, declared that London's East End would be transformed into a world-leading tech hub. Since then, the government has been firmly backing measures to improve the environment and provide the right breeding ground for tech firms and entrepreneurialism in Britain. The UK secured £6.8 billion of venture capital and private equity investment in 2016, 50% more than any other European country. Digital technology, in

particular, has fuelled the growth of the UK economy and positioned it at the forefront of future development. The UK digital-tech sector is growing twice as fast as the wider economy and is now worth £97 billion to the UK, with the potential to grow further. Businesses that effectively adopt digital technology grow faster and deliver higher profits. Yet although there is progress, and the UK's technology sector drew more investment than any other European country in 2016, no new startup has joined the billion-dollar tech club in the last few years to become the next Uber.




SOFTWARE QUALITY IS KEY

Sustaining digital growth is as big a challenge as initiating a digital transformation or setting up a digital business, if not bigger. With rapid advances in digital technology came an increased risk of threats that have plagued UK digital businesses and effectively prevented the industry from cashing in on its initial momentum. Three major challenges await those failing to address underlying software risk: system outages, data corruption and cybersecurity.

Whether a company is large or small, hackers are becoming a big problem, and it is an issue that is here to stay. We recently witnessed the Tesco Bank hacking case, which revealed the bank's vulnerability to a cyber raid on its customers' accounts, and it is far from alone. Other recent IT outages at leading organisations have indicated a systemic weakness in UK businesses. Regulators are threatening huge fines for organisations that get it wrong; the reputational risk alone should be enough for companies to take the problem more seriously. When businesses have so much on the line, i.e., customer experience, brand reputation and costly fines, they must increase their focus on software quality.

As a priority, UK businesses must take security seriously and be aware of modern, innovative cyber threats, as well as the faulty code quality that can lead to such attacks. It is found that 50% of security problems originate in software design and architecture. The challenge is magnified because many software weaknesses are not detected by traditional testing methods. Also, as code gets longer and more complex, exposure to glitches and faults increases, and complex code means businesses take twice as long to get to the root cause when a glitch or outage happens. Poor software quality, if unaddressed, will continue to let UK businesses down in the digitally enabled world we live in today.
Hence, there is a burning need for in-depth analysis beyond traditional code analysis, which can help organisations detect dangerous structural flaws in their enterprise systems.


1. Integrated quality assurance strategy: Continuous integration and delivery

of software must be supported by continuous testing. An integrated test strategy should be implemented to cover all aspects across the end-to-end software development lifecycle, be it waterfall or agile. The validations and verifications should span from requirements to production, functional to non-functional, APIs to user experiences across channels and devices, etc. Automation is key!
2. Quality assurance governance: An integrated test strategy must be supported by end-to-end QA governance, which ensures that every entry and exit criterion is well defined and met at each step: from ensuring the testability, consistency, completeness, unambiguity, maintainability and comprehensiveness of the requirements, up to ensuring complete code coverage, percentage automation, final sign-off, etc.


The success of these two approaches lies firmly in the establishment of a digital enterprise built on a sound strategy and a mission that outlines the path for future growth. While organisations may be optimistic about digital, many of them lack confidence in their vision for the future. They need to make a conscious effort to address any existing gaps in their organisational and technical infrastructure in order to adequately address user needs. This has to be backed by a digital strategy with clearly defined business objectives and measurable key performance indicators (KPIs) that provide a clear understanding of audiences.

CONCLUSION

While a sound software quality strategy is essential in strengthening a business's digital growth, equally important is building a digital enterprise that addresses the obstacles of scarce follow-on funding, lack of talent and expensive infrastructure, in order to provide a suitable environment for the digital and technology sector to thrive. If we continue to do what we always did, we will only get what we always got. We often speak of a much-needed change, but change typically relies on external influences to modify actions, and that is not enough to achieve what digital has set out to achieve. The UK should strive for transformation, which is about modifying beliefs so that the right actions become natural and, thereby, achieve what it is striving for.


At Maveric, Pankaj is in charge of creating unique and contextual testing solutions for banking clients in Europe and the US. As a part of the Solution Development Team, he specialises in building newer services, point solutions, frameworks and accelerators to address end-to-end assurance needs for a bank's digital transformation journey.



Harri Porten, Co-Founder and CTO, froglogic, presents a new approach to visual regression testing, to help human testers determine the cause of a failure more quickly.


Neurologists are still busy explaining exactly how the human brain recognises, for example, individual objects in a picture. Now imagine the myriads of neurons firing when a tester is exercising an application with a graphical user interface, with the intent of finding bugs! As far as we know, there is no device replicating the human brain. There may be devices that simulate some aspects of one, but they still lack the miracles of creativity and the capacity to learn. The irreplaceable tester always appreciates some assistance before their brain gets tired or bored, especially when having to wade through hundreds of screens of software that supports multiple platforms, languages and other configuration parameters. Software tools are good at finding regressions – sometimes, too good. Detecting differences is easy, but which differences qualify as bugs? That decision often needs to be made by a human.


CLASSIC APPROACHES

In cases where the focus of testing is restricted to functional aspects, verifying individual properties queried directly from UI controls may suffice. Is a switch turned on or off? What number is being displayed in a panel? What error message is being shown by an error dialog? Independence from different screen resolutions or UI styles is generally good practice for robust, long-lived tests. But is the data that has been verified as correct also rendered correctly? There are many ways for rendering to go wrong: different screen resolutions, truncation of translated texts, user-specific font settings and others. Is there no way around the need to check each and every pixel on the screen? When developing safety-critical software, there is often no alternative. We don't want to see a critical alarm being missed because it was shown in black rather than red, do we?

As an example, manufacturers in the automotive industry have resorted to grabbing the screen of in-vehicle displays for a one-to-one comparison against expected content. This is a thorough approach, but it results in a fragile test framework and can lead to many false positives. For example, what if the corporate designer makes a minor change to the colour scheme? In the worst case, every reference image must be redone, requiring a review of hundreds if not thousands of failures.

VISUAL VERIFICATION

What we propose is a multi-stage approach that combines the best of both worlds: bringing together the knowledge of properties gained through access to the application internals, as well as the screen rendering, in a pass of multiple stages that goes from abstract to detailed.




Figure 1. Visual Verification Point Editor.

Figure 2. GUI Coverage Browser.

The stages of such a check are best performed from a coarse view of the application down to the detail of each individual pixel:
1. Structure: Are all the expected basic building blocks (buttons, dials, text labels, etc.) present in the window being verified? If not, the algorithm can bail out early, reporting the absence (or unexpected presence) of a control.
2. State: For each control, each property is compared against its expected state. Is a button marked as being pressed? Does a display field carry the right numerical value? Again, the algorithm can bail out on any deviation from the expected state.
3. Geometry: For each control, verify that it is correctly positioned on the screen and has the expected size. A failure at this stage is probably due to a rendering error.
4. Pixels: The most stringent phase ensures that each pixel on the screen has the right colour. Based on the checks from the previous phases, most typical errors have already been caught, but this final check completes the verification with the highest degree of confidence that can be gained from pure software checking.
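As an illustration, the four stages can be modelled as a single early-exit comparison. This Python sketch uses a simplified dict-based representation of controls that is an assumption for the example, not the data model of any real GUI testing tool.

```python
# Sketch of the four-stage check: structure, state, geometry, pixels,
# bailing out at the first failing stage. Controls are modelled as
# {name: {"state": ..., "rect": ..., "pixels": ...}} for illustration.

def verify_window(actual, expected):
    """Compare two control maps; return (failing_stage, offenders) or ('pass', [])."""
    # Stage 1: structure - are all expected controls present?
    missing = set(expected) - set(actual)
    if missing:
        return ("structure", sorted(missing))
    # Stage 2: state - do the logical properties match?
    for name, props in expected.items():
        if actual[name]["state"] != props["state"]:
            return ("state", [name])
    # Stage 3: geometry - position and size on screen
    for name, props in expected.items():
        if actual[name]["rect"] != props["rect"]:
            return ("geometry", [name])
    # Stage 4: pixels - strictest check, only reached if all else passed
    for name, props in expected.items():
        if actual[name]["pixels"] != props["pixels"]:
            return ("pixels", [name])
    return ("pass", [])
```

The early returns are the point: a state failure is reported as a state failure, not buried among thousands of differing pixels.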

What do we gain from multiple stages if the screen rendering will be found faulty for each anomaly anyway? It is the reduction to the root cause of visual differences that makes the difference. The tester will first be alerted to the fact that an internal property of the user interface has assumed an unexpected state. The consequence for the screen content can be enormous (e.g. through a font change) and hard to survey. With the tester being pointed to the root cause of an anomaly, subsequent errors are more easily understood and possibly even dropped from the failure analysis. As a result, time otherwise wasted on reviewing irrelevant fall-out can be saved.

Squish GUI Tester by froglogic has shipped with support for visual verification points implementing the approach described above since version 6.2. Differences found between the actual and expected application state are visualised in an interactive viewer for analysis. Where small differences between platforms are acceptable, tolerances can be set on the geometry and image checks.


TEST COVERAGE

How do we measure the degree of test coverage already achieved? How do we locate untested functionality? The classic approach is an analysis of code coverage: a metric denoting the number of lines, branches, etc. hit during testing. But only a developer with in-depth knowledge of the application's internals is able to draw the connections between the frontend and a specific line of code in the backend.

Why not approach test coverage from a visual perspective as well? Did the tester push every button at least once? Did every switch get toggled at least once? When working with Qt applications, Squish allows the user to browse through the GUI component hierarchy and examine the test coverage in a new GUI Coverage Browser. A red/green map lends itself to reporting on the areas of an application that have not yet been exercised. Naturally, 100% coverage of the GUI does not imply 100% coverage of the underlying code. But a metric that matches the mental model of a black-box-tested application will provide much better guidance towards as-yet-untested areas.
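A minimal sketch of such a red/green map, assuming we already know the full control hierarchy and the set of controls the tests interacted with (both lists here are illustrative, not taken from any real application):

```python
# Sketch: a red/green GUI coverage map. Given all controls in the component
# hierarchy and the controls the tests exercised, report the untested ones.

def gui_coverage(all_controls, exercised):
    """Return (percentage exercised, sorted list of untested controls)."""
    untested = sorted(set(all_controls) - set(exercised))
    pct = 100.0 * (len(all_controls) - len(untested)) / len(all_controls)
    return pct, untested

pct, untested = gui_coverage(
    ["okButton", "cancelButton", "volumeDial", "muteSwitch"],
    ["okButton", "volumeDial"],
)
print(f"{pct:.0f}% of controls exercised; untested: {untested}")
```

In a real tool the hierarchy and interaction log would come from instrumentation of the application under test.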


Harri got involved in the testing of graphical user interfaces 16 years ago. In 2003 he co-founded froglogic, and oversees the company's research and development efforts.


ANALYTICS DRIVEN SOFTWARE TESTING

Hussein Jibaoui, Head of QA Office, Thomson Reuters Eikon, shares practical ways to utilise analytics and metrics in software testing strategies.


To accelerate growth in a fiercely competitive world, many firms have completely transformed their business models by drastically shortening their time-to-market. To support this business shift, the software industry has re-invented itself to allow for fast-paced evolution of software systems. This evolution has been made


possible by many innovative technologies, architectures and tools (such as cloud infrastructures, open platforms, building blocks method, etc.) combined with new practices and methodologies (such as agile and DevOps). For instance, in today's DevOps organisations, feature teams are fully accountable for promoting any piece of code

at any time. Agile methodologies, advocating frequent shipping of small pieces of code in a continuously evolving and improving environment, are now also widespread. In this context, development cycles are increasingly short while testing resources often remain, at best, unchanged. As a consequence, testing has become a big




challenge, mainly for large-scale software systems. IT professionals seek industrial ways to maintain the quality of their systems as a whole while those systems continuously and independently evolve. In other words: how can we be selective in testing without compromising on quality? In this article, I want to share practical ways to industrialise a software testing approach based on software metrics. It consists of gathering data from various metrics as inputs (such as code changes, code coverage and usage statistics), and then building a business intelligence system that shows the test gaps as outputs.

The first step consists of collecting the code changes (subversions, builds, versions, dates), the code coverage (recording the code that is executed under test/use) and the feature usage statistics. The second step consists of building a data mapping system that sets code changes against code coverage against feature usage.

Collecting these data is made easy by code coverage and source code management tools. Production usage statistics are now part of coding standards, with counters implemented at feature level to record how products are used by customers. These statistics are usually consumed by executives to assess which features generate revenue and to align development efforts with business targets, for example. The primary goal of this approach is not about getting data; instead it looks at how to map and consume data. Our challenge resembles, albeit on a smaller scale, the challenge faced by big data. Indeed, we aim to show how to extract valuable patterns to answer the legitimate questions that each manager in the DevOps chain asks. QA manager: did we do the right tests, and were they enough? Development manager: did we control properly and exhaustively? Release manager: are we ready and confident to release?

The practical approach consists of creating a mapping system between the three main metrics as follows:
• The code coverage output under test, also called the 'test footprint' (irrespective of the type of testing: functional, technical, manual or automated), is parsed, and the code structure (libraries, classes, functions, conditions) is generated and stored in a knowledge database.
• The code change analysis is parsed and mapped to the code structure. This mapping is also stored in the knowledge database.
• The feature-code mapping: the logical mapping between individual features/tests and the structure of the code being executed under test is recorded and stored in the same knowledge database.

Aggregating these data mappings within a data visualisation system facilitates test scope selection and builds dynamic test coverage dashboards: for instance, run only the tests/features that are impacted by the changed code, or show only the coverage gap for the changed code, and then add the right tests to close the gap.

Beyond this primary usage, the approach can be extended to a scoring methodology based on additional software or business metrics, such as feature priority, feature usage, feature complexity, deficiency areas, code analysis, customer defects, etc. This significantly strengthens the approach and ensures a tuned and optimised test strategy. For instance, 90% changed code impacting a feature with low usage should score lower than 10% changed code impacting a feature with high usage.

The first feedback from the delivery teams that have implemented this approach is very positive. They confirm better test productivity (e.g. a direct reduction in manual test effort) and better quality control. Indeed, such a powerful risk analysis system helps in taking a predictive approach to identifying the focus areas for testing on time, throughout the entire release cycle (irrespective of the type: waterfall, iterative or continuous).


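The weighting idea in the scoring example (90% changed code on a rarely used feature scoring below 10% changed code on a heavily used one) can be sketched with a trivial linear formula. Both the formula and the usage weights here are illustrative assumptions that a real deployment would tune against its own metrics.

```python
# Sketch of the scoring methodology: weight the amount of changed code
# behind a feature by how heavily that feature is used in production.

def risk_score(changed_code_pct, usage_weight):
    """usage_weight in [0, 1]: e.g. fraction of sessions touching the feature."""
    return changed_code_pct * usage_weight

# 90% changed code behind a rarely used feature...
low = risk_score(90, 0.02)
# ...scores lower than 10% changed code behind a heavily used one.
high = risk_score(10, 0.80)
print(low, high)  # e.g. 1.8 vs 8.0
```

Extra metrics (feature priority, defect history, code complexity) would enter as further multiplicative or additive weights in the same spirit.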


This approach can also empower modern software testing practices, such as the use of virtualisation for testing. It can be deployed onto a large-scale cloud infrastructure and used for crowd testing, for instance. Thanks to this approach, we can collect the test outputs centrally and keep better control of the test coverage even when relying on crowd testing models.


With over 16 years of experience in software testing and quality assurance, Hussein is passionate about software testing and is convinced that testing is scientific research which requires continuous improvement and a curious mindset. Prior to joining Thomson Reuters, he worked as a Consultant for ALTRAN CIS, covering various industry sectors including insurance, finance & banking, telecoms and pharmaceuticals.


BAKING A PERFECT TEST AUTOMATION CAKE

Sean Rand, QA Lead, Nephila Capital, describes test automation through the full software stack and gives practical examples from his test suites.





So often, automation is thought to be UI testing; however, the fundamentals and foundations are so much more. As testers, it is sometimes thought to be solely our job to automate and test the whole software stack, in a manner which is continuous and automatic on check-in. However, this is not entirely the case, and this article will attempt to show ways to break down some of those barriers. Test automation is the ability not only to test a product using experience, insight, business domain knowledge and technical expertise, but ultimately, for me, to push us into a continuous delivery and regression pattern within a given software stack as part of a delivery team. It is my intention to dive into practices and examples which I've found to be effective in the workplace in a 3-tier architecture, showing how to firm up foundations whilst keeping an eye on edge cases and acceptance testing within a test suite.

BUILD THE FOUNDATION AND GAIN CONFIDENCE

Automation can be many things to many people, both testers and developers; however, I like to break it down to the bare bones and see what it enables. At the fundamental level, I believe it allows us to build a large test foundation and gain confidence in our product through regression testing and transparency to all involved. Transparency is imperative within testing, along with being impartial, and the core notion of automated tests should allow anyone in the business to see any test result, execute any test and have a binary outcome of 'pass' versus 'fail'. Automation gives the business this level of insight and transparency.

The layer cake world of testing reveals three main categories (or layers) of testing:
1. Unit testing.
2. Integration testing.
3. User interface testing.

Think of building a house: first, you build the foundations deep and wide – these are our unit tests. Next we move on to the structure of the house, the beams, the load-bearing materials – these are our integration tests. Finally, we build the walls and decorate the house, or hopefully have some help decorating, as I'm terrible at that – these are our user interface tests (which, thankfully, I'm much better at than decorating). It's that simple! Since we're building the foundation

and gaining confidence, we need to build our base wide and deep (unit testing). Unit testing is sometimes seen as a developer-only role; however, I like to see it as a joint effort. The test-driven development (TDD) approach – writing a failing test, then making it pass by building the feature – keeps the team honest and is effective. This initial test can be written by the developer or by the tester, but I see ownership of testing as a joint delivery of the whole team. With this outlook, and by setting this expectation at the project kick-off, it is very clear that our unit test coverage levels and scope must, for example, not drop beneath 90% coverage if we want to maintain an agile release cycle.



For a release, it is advisable to ensure your continuous integration build server executes all unit tests, and not to allow branch check-ins that bring the unit test coverage below your defined percentage or produce failures. This might seem obvious; however, I've seen places skip it, and maybe you use their apps daily. Now that we have a grasp on unit tests maintaining a percentage coverage and triggering test runs from CI, the key for any good software delivery cycle is to have the largest investment possible in those unit tests, with a hint (a lot) of integration testing on top for good measure. Without that investment you will fall, and do so hard, sometime in the future, if you rely only on UI automation tests as your regression and therefore safety net.
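A coverage gate of this kind can be sketched in a few lines. In practice the coverage figure would be read from your coverage tool's report and the script wired into the CI server; both of those details are assumed away here, and the 90% bar simply echoes the figure used above.

```python
# Sketch of a CI coverage gate: reject a check-in when unit-test coverage
# would drop below the agreed threshold. The coverage figure would normally
# come from the coverage tool's report; here it is passed in directly.

def coverage_gate(coverage_pct, threshold=90.0):
    """Return True if the check-in may proceed, False if it must be rejected."""
    if coverage_pct < threshold:
        print(f"FAIL: coverage {coverage_pct:.1f}% is below the {threshold:.1f}% bar")
        return False
    print(f"OK: coverage {coverage_pct:.1f}%")
    return True

# A CI step would call this and fail the build on False.
coverage_gate(92.5)   # passes the gate
coverage_gate(85.0)   # blocks the merge
```

On TeamCity or Jenkins, the equivalent is usually a built-in coverage failure condition rather than a custom script, but the logic is the same.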

WHAT'S ALL THIS INTEGRATION SPEAK?

Integration testing can be referred to as nearest-neighbour testing: the testing of two components that sit side by side and are used by each other. An example of such a test is the testing of a web service endpoint: you fire a request at the endpoint and expect the web service to give you a 200 or OK response back. Another integration test might create a user account by passing in a pre-canned user object. Note these are all done at the code level and not driven through the UI. Integration tests differ from unit tests in that they rely on one or more external


With over eight years' experience in testing, Sean started out in a software house in the pharmaceutical industry, forming their first QA team; he now heads up a global test team for a reinsurance and insurance company headquartered in Bermuda, but is based out of New York and Nashville. His passion is testing and regression testing using automation, enabling software delivery teams to deliver quickly and with confidence.





components, whereas a unit test is self-encapsulated and relies solely on itself. My belief is that this is where developers in test and/or testers can become effective in supporting the development lifecycle and the test foundation. Building tests which each run in <100 ms to exercise web services and components, and which data-drive thousands of permutations in a short timeframe, is where I find a tester adds the most value and buys credibility. If you're looking to get into codifying tests, I'd suggest starting at the integration level, as this will allow you to learn quickly, build up credibility with the team and developers, and make a large contribution to the overall test platform.
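A nearest-neighbour test of a web service endpoint, as described above, might look like the following sketch. A tiny in-process stdlib HTTP server stands in for the real service so the example is self-contained; the `/health` path and `OK` body are illustrative.

```python
# Sketch of an integration test: fire a request at an endpoint and assert
# on the response, with a minimal stand-in server playing the web service.

import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"OK")

    def log_message(self, *args):
        pass  # keep test output quiet

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The actual integration check: request the endpoint, expect 200/OK.
url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    status, body = resp.status, resp.read()
server.shutdown()

print(status, body)  # 200 b'OK'
```

In the author's C# stack the same check would typically be written with RESTSharp against the real endpoint; the shape of the assertion is identical.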

UI AUTOMATION TESTS ARE THE CHERRY ON TOP OF THE CAKE

I am a contributor to, and an advocate for, the Selenium bindings for UI testing and Appium for application testing. I am, and always will be, a huge fan of UI-driven tests in a strategy: no matter how many unit or integration tests we have, our PMs and managers will always want UI testing, and so should we, as the web UI contains JavaScript



and we all know how that likes to cause behaviour rather different from what's expected! Targeted end-to-end UI tests are the most effective, because, remember, UI tests are expensive. Let's see how expensive tests are in the layer cake, using my own current test suite, produced over the last few months, for some statistics:
• Unit tests: ~15,000 tests, average time per test <10 ms.
• Integration tests: ~40,000 tests, average time per test <100 ms.
• User interface tests: ~500 tests, average time per test <45 seconds.

Total run times sequentially, not in parallel:
• Unit tests = 2.5 minutes.
• Integration tests = 1 hour.
• User interface tests = 6.25 hours.

The numbers above make the truth plain: if you build your foundation wide and deep, you can have roughly 55,000 tests at about a sixth of the cost of 500 user interface tests. In no way am I saying we should scrap user interface tests, but I offer a word of caution: keep track of your test strategy, ensuring your tests aren't so long-running that your developers and team give up on them ever finishing. Always ensure they're baked into





a CI server such as TeamCity or Jenkins and executed on code check-in, or nightly against the latest build. Most importantly though, have fun experiencing all the different technologies and challenges that come with testing at each layer, as it will make you appreciate and understand the proposition, and therefore the product.

AUTOMATION ENABLING REGRESSION

We appreciate and understand the need for automation, and the benefits are clear when we execute ~55,500 tests in a matter of hours rather than the days and weeks it would take manually. We sometimes find ourselves needing to build regression for a product which has zero tests, unsure how to achieve it. I'll try to explain how I've tackled this issue in the past – an issue so many of us have faced. I feel the pain, and sympathise if you're newly hired into a company which has promised huge test suites only to find it's now your job to go and build them; but I also feel the excitement that comes with such a challenge.

If the platform is web-based, we can strip it down to three parts: the web UI; the service layer and database; and the codebase. I would begin by truly understanding the product and the platform the website is built upon, and from there identify key smoke tests – around 20-30 must-have end-to-end flows with which to begin building regression – and have the business sign off on those smoke tests. Once completed, I'd execute those tests on all the browsers within the product's non-functional requirements, building end-to-end UI automation regression for the browser stack, such as IE10, Chrome latest and Edge. Once those 20-30 UI tests are working you'll have a feel for the product and for how long it takes to build new UI tests, but remember: user interface tests are more expensive.

Assuming you now have a bit of a safety net, start working at the integration test level, understanding the middle-tier architecture and service stack. I would devote most of my time to the integration test layer. Remember, I mentioned earlier that integration tests form part of the foundation, unlike UI tests; this is where the value begins, and where your value as a tester can shine, given you've secured some full end-to-end tests via the UI. If you tackle


the integration testing layer and feel like moving into the unit test layer, you may; but in general this can be difficult to achieve, given the code was developed by someone else and before your time. So think carefully before attacking the unit level without a clear understanding of the codebase; it can be a deep, dark rabbit hole.
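The smoke-test-first approach described above can be organised as a small registry of signed-off flows run against every browser in the non-functional requirements. In this sketch the browser handling is stubbed out and the flow and browser names are illustrative; in practice each flow would drive a real Selenium WebDriver session.

```python
# Sketch: registering 20-30 signed-off smoke flows and running each one
# against every browser in the non-functional requirements.

BROWSERS = ["IE10", "Chrome latest", "Edge"]
SMOKE_FLOWS = []

def smoke_flow(fn):
    """Register a function as one of the business-signed-off flows."""
    SMOKE_FLOWS.append(fn)
    return fn

@smoke_flow
def login_and_view_dashboard(browser):
    return f"login ok on {browser}"  # a real flow would drive the UI here

@smoke_flow
def checkout_happy_path(browser):
    return f"checkout ok on {browser}"

def run_smoke_suite():
    """Run every registered flow on every browser; return a results map."""
    results = {}
    for browser in BROWSERS:
        for flow in SMOKE_FLOWS:
            results[(browser, flow.__name__)] = flow(browser)
    return results

results = run_smoke_suite()
print(len(results), "flow runs")  # 3 browsers x 2 flows = 6
```

With SpecFlow or a test runner, the registry becomes the test discovery mechanism and the browser list a parametrisation, but the structure is the same.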

AUTOMATION TOOLING

To give a flavour of my daily tooling, I thought I'd list it here, and I'd be keen to hear your list:
• Visual Studio 2015 Professional
• C# .NET
• MSTest Runner
• Shouldly Asserts
• RESTSharp
• Dapper
• Mocks
• Selenium WebDriver
• SpecFlow

LOOKING AHEAD

Thank you for taking the time to read this article; I truly hope you have some useful takeaways to apply in your current work environment. I think the best piece of advice I can leave you with is: no matter where you are in your automation career, always be enthusiastic (before coffee on a Monday is tough, I hear you), be humble and be open to asking for help. Be the person asking the silly questions! Learn quickly; a positive attitude in our environment always goes a long way. Make sure you're there to help and be part of a team, and get stuck in. Push yourself by learning, maybe even outside of work time, and dedicate yourself to mastering or progressing a new skillset. Any questions, please reach out to me anytime – and happy testing!



When the automobile was invented, no one wanted cars to be connected; people wanted automobiles to carry them and connect them to far-off people and places. An occasional wave of 'hi' from the car owner on seeing a friend driving by on the opposite side was the best connection that was made. Today's evolving connected cars are not like that. Connected cars are ready to do many things for you, such as:
• Start your video conference while on the road.
• Understand and recognise friends travelling on the same road, probably to the same destination/event.
• Alert nearby hospitals by sensing vital health indicators of the passengers.
• Let passengers start 'experiencing' a vacation resort by visually transforming the interiors.
• Let drivers use HoloLens technology.
• And much more.1

Ramkumar 'Ram' Ramachandran, Director & CIO, Internet Techies Solutions Pvt. Limited, discusses the future of connected cars – new animals that could be your pet with a wilder side.




HOW TO DEFINE CONNECTED CARS?

Cars that are connected to the outside world, while stationary or on the move, are connected cars. The connection can be to any of the following:
• Another car.
• Another device (e.g. road signals, home music systems, another car's equipment).
• The cloud or another proprietary backend.
• Another human being inside/outside the car (e.g. a pedestrian or law enforcement).
• Anything that could improve the car's performance, safety or fuel consumption.

It all started with the need to connect during emergencies, for the car or its occupants, and cars began to become 'connected'. GM's OnStar, released in 1996, is claimed to be the first system that allowed a car to connect to the outside world to get emergency help. Connected cars evolved with security as their primary motive and later moved into driving efficiency. Once these parameters stabilise, they will move into further areas that people yearn for, some of which are listed in the bullets at the start of this article.



Formerly the Head of Software Quality & Testing for Renault Nissan, Ram has 20+ years' experience in the software industry. He began his career as a programmer, and has served large organisations including HCL, TVS, Polaris, KPMG and Renault Nissan. His expertise is in software delivery excellence (QA/QC), information/application security and program/project management.


Connected cars can connect to many different things; what started as vehicle-to-vehicle (V2V) connections has expanded and is now called vehicle-to-anything (V2X). To date, all V2V communications are derived from the IEEE 802.11p standard. Frequency spectrum around 5.9 GHz has been allocated in the US and, in a harmonised way, in the EU, where the scheme is called ETSI ITS-G5. As soon as a vehicle comes into contact with another vehicle, or with an intelligent transportation system (ITS) station, they establish an ad hoc network. This lets the ITS stations know the position, speed and direction of vehicles and provide critical information, such as accidents ahead, traffic status, weather conditions, etc.

Once connection became possible with other vehicles and ITS infrastructure, connections began to be extended to other entities as well. This is what is termed V2X. Some examples of V2X are:
• Vehicle to pedestrian.
• Vehicle to infrastructure.
• Vehicle to device.
• Vehicle to grid.


There are plenty of possibilities with V2X, including, but not limited to, the following:
• Expenses made through your car, such as fuel, parking fees and road tax, will be consolidated and billed to you.
• Sequencing multiple cars heading to a common destination or direction and platooning them for better fuel efficiency and safety.
• Providing warnings about emergency vehicles, road works, blindspots, etc.
The key to success in V2X communications is standardisation. The International Telecommunication Union (ITU) has published a comprehensive list of radio interface standards for ITS applications.2 Another development in standardisation is the Third Generation Partnership Project (3GPP)'s announcement this year of the first set of LTE‑V physical layer standards for V2V and V2I,3 which differ from the most popular IEEE 802.11p standard.
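The first bullet above, consolidating expenses made through the car into a single bill, is a simple aggregation once each payment is reported as an event. A sketch under assumed, illustrative field names (not any real car maker's API):

```python
from collections import defaultdict

# Hypothetical expense events a connected car might report.
events = [
    {"car": "KA-01", "kind": "fuel",    "amount": 42.50},
    {"car": "KA-01", "kind": "parking", "amount": 3.00},
    {"car": "KA-01", "kind": "toll",    "amount": 1.80},
    {"car": "KA-02", "kind": "parking", "amount": 5.50},
]

def consolidate(events):
    """Roll per-car expenses into one bill, itemised by expense kind."""
    bills = defaultdict(lambda: defaultdict(float))
    for e in events:
        bills[e["car"]][e["kind"]] += e["amount"]
    return {car: dict(items) for car, items in bills.items()}

bills = consolidate(events)
print(round(sum(bills["KA-01"].values()), 2))  # → 47.3
```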

WHAT ARE THE REGULATORY REQUIREMENTS?
With so much sophistication built in, and so many stakeholders beyond the car manufacturer, it becomes important for governments to safeguard the car passenger. The US and Europe have already laid down many regulations and are continuing to fine-tune them. The US Department of Transport (USDOT) trialled large-scale testing of the V2V capabilities of various manufacturers' solutions fitted in different vehicles.4 The US National Highway Traffic Safety Administration (NHTSA) found this model acceptable and published a report in August 2014 stating that it was ready for deployment.5 It felt that real benefit would be achieved only if significant parts of the vehicle fleet were equipped with this technology. On 25th June 2015, however, the US House of Representatives heard the matter and went against the NHTSA. When it comes to the EU, the ITS Directive 2010/40/EU was adopted in 2010 so that ITS applications are interoperable, and the EU's industry stakeholder body, the C‑ITS Deployment Forum, started working on a regulatory framework for V2X. On 28th April 2015 the European Parliament voted to require the eCall facility in all cars from April 2018.6 This facility will automatically dial 112, Europe's single emergency number, in the event of an accident, and will communicate the exact location, time of the accident and direction of travel to the emergency services. eCall also needs provision for a button that can be pressed to trigger this call.

Figure 1. Connected cars need to fit snugly into a complex jigsaw puzzle.

The performance of a connected car is assessed at the component level, with a final quality check carried out on assembly. Connected cars use two types of applications:
1. Single vehicle applications.
2. Co‑operative safety and efficiency applications – a list that is going to grow, e.g. connected entertainment.

Single vehicle applications are the ones managed by the driver on the car's dashboard. Google's Open Automotive Alliance and Apple's CarPlay are two significant solutions in this space. Many of these apps are accessed through the passenger's smartphone and could end up using the phone's connectivity. These applications include entertainment systems, car diagnostics, climate control, etc. Co‑operative safety and efficiency applications operate on sensory inputs: collision warnings, blindspot warnings and lane change warnings are some of the possibilities. These applications must know the vehicle's position through its sensors, look out for trouble arising from other situations on the road and provide warnings wherever they are needed. By nature, they must work across various models of vehicle. Validating a single vehicle application is therefore a relatively simple affair, since the environment is known. Additional care needs to be taken when validating behaviour while getting data from, and providing data to, other sources. Co‑operative applications need to be validated for compatibility with various models of vehicle and across national borders. Security becomes another critical factor to be validated in co‑operative applications.

FUTURE PREDICTIONS
Connected cars are going to be the future of personal transportation. The blending of autonomous cars, electric vehicles and connected cars has already started:7 the electric Google car, for example, drives autonomously and boasts connectivity.

The entire SMAC (social media/mobile/analytics/cloud) and IoT technology stack is used in connected cars: mobile for connectivity; analytics for route and safety prediction; cloud for data storage and retrieval; social media for collaboration; and IoT to share vehicle data. Connected car technology will learn from the aircraft industry's fly‑by‑wire systems, which take the aircraft into autonomous flight. Connected vehicle applications will appear in multiple industries for varied purposes, limited only by the creativity of individuals in deriving benefits. All logistics vehicles could turn out to be connected vehicles, integrating the data generated by IoT into a blockchain for high integrity. There will be significantly fewer on‑road casualties, and thereby less demand for traffic police, and the longevity of human life could also be positively impacted. Solutions such as platooning will improve fuel efficiency, and solar power could well be part of the solution, decreasing demand for conventional fuels and making the world greener. But the effort to 'certify' a connected car is going to be a challenging task, owing to the various entities that influence the safe functioning of the car. There will be new developments in the QA approach to connected car application certification, which will include core functionality validation and interface component validation. The world will be a very different place, and there will be ample opportunities for software testers and QA to be part of the connected car journey.

References
1. 'Gartner Says By 2020, a Quarter Billion Connected Vehicles Will Enable New In-Vehicle Services and Automated Driving Capabilities', http://www.gartner.com/newsroom/id/2970017
2. ITU's comprehensive list of radio interface standards, https://www.itu.int/dms_pubrec/itu-r/rec/m/R-REC-M.2084-0-201509-I!!PDF-E.pdf
3. '3GPP system standards getting into 5G era', http://www.3gpp.org/news-events/3gpp-news/1614-sa_5g
4. 'DOT advances the deployment of connected vehicle technology', http://fleetowner.com/technology/dot-advances-deployment-connected-vehicle-technology
5. NHTSA's proposed rules for V2V communications, https://www.regulations.gov/docket?D=NHTSA-2016-0126
6. EU mandates eCall in all cars from April 2018, https://ec.europa.eu/digital-single-market/en/news/ecall-all-new-cars-april-2018
7. GSMA, '2025 Every Car Connected', http://www.gsma.com/connectedliving/wp-content/uploads/2012/03/gsma2025everycarconnected.pdf


IMPROVING THE EFFECTIVENESS OF LOCAL AUTHORITIES
Business intelligence software makes sense of council data, explains Greg Richards, Sales and Marketing Director, Connexica.





The concept of big data has gone from strength to strength in recent years. With more businesses, devices and services making use of internet connectivity and wireless functionality, there is an abundance of data being generated at every possible moment. Although making sense of this information can be challenging, doing so provides businesses with actionable insights that make an organisation more effective and make its resources go further. This is particularly important for local authorities in light of constant pressure to do more with less, as Mid Kent Services (MKS) has been demonstrating for the past six months. Local authorities have been subject to a number of budget cuts in recent years. As the UK government has worked to ease the country back to stability after the 2009 recession, councils have typically felt the squeeze of funding cuts. As a result, there is constant pressure on them to make their resources stretch further and to achieve more while avoiding additional costs. Big data makes this possible. By analysing the information accumulated from a variety of input sources, such as council tax payment methods and parking permit services, local authorities are able to identify areas where costs can be minimised, where extra funding is necessary, and whether there is any potential fraudulent activity. However, this is only achievable with an effective approach to analysing and reporting this data, something that has traditionally proven difficult for councils with larger constituencies.

AN INTELLIGENT APPROACH TO BIG DATA
Mid Kent Services (MKS) is one of a handful of local authority partnerships working to tackle this problem. The partnership, which consists of Maidstone, Swale and Tunbridge Wells Borough Councils, shares a combined ICT department that serves all three boroughs. This accounts for roughly 410,000 individuals, which produces a high volume of data and makes analysis a laborious and time‑consuming process. The problem is exacerbated when you consider that individuals with specialist analytical skills are often needed to interpret the raw data and present it as something more useful. It is becoming more widely acknowledged that companies and organisations require business intelligence software to make big data work for them. This was recently highlighted in

Gartner's 2016 CIO Agenda Report1, which featured business intelligence and analytics as the top trending priority from a poll of nearly 3000 Chief Information Officers (CIOs) globally. In fact, this is the fifth consecutive year that business intelligence has been the top priority. As a result, it is of little surprise that MKS turned to business intelligence software to help bring together its many data sets into one central location. The partnership put in a successful bid for funding from the Transformation Challenge Award (TCA), a scheme introduced by the Department for Communities and Local Government (DCLG), in 2014. A portion of this funding was dedicated to investment in software, the aim of which was to simplify the cross‑referencing of data sets to aid each individual council in meeting government‑imposed spending targets. To achieve this, the business intelligence software had to be capable of drawing information from a large number of sources and cross‑referencing it all effectively, while also allowing for partnership‑wide data reports. MKS turned to Connexica for software that could deliver this, prompted by work the company had recently done in providing counter‑fraud analytics through its CXAIR solution to Kent County Council. “The brief we gave to Connexica was for software that could work across a number of projects within the partnership to analyse a variety of data types,” explained Andy Sturtivant, TCA Project Manager at MKS. “The software was required to not only analyse and cross‑reference all of this data, but also to draw it from several data streams to boost efficiency. “Our previous approaches to managing and analysing data were very one‑dimensional and unable to provide useful cross‑referencing functionality.
With much of the data that we are making use of, it can only truly benefit us when we are able to see a more comprehensive overview of information across the partnership.” The ability to cross‑reference data sets was especially important in reducing administrative costs for the borough councils. In one of the MKS projects, Swale Borough Council wanted to use business intelligence software to reduce the cost of processing payments for council services. Although Swale Borough Council was already processing some of its electronic payments automatically, without the need for staff to complete transactions, a large number of service users in the mid‑Kent region

It is becoming more widely acknowledged that companies and organisations require business intelligence software to make big data work for them


Greg is the Sales and Marketing Director of Connexica. He joined Connexica in 2007, a year after the company was founded, and is a firm advocate for the democratisation of business intelligence.




For a number of years, there has been an increasing challenge across the analytic landscape in managing the exponential growth of not only data size, but data variety

were still using traditional methods such as cheques or phone payments. This resulted in elevated operational costs for the local authorities. One of the objectives for the council in particular was to identify those customers who already make some form of direct debit or electronic payment for some services, but continue to use traditional payment methods for other services where automation isn't available.
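The council's objective above, spotting customers who already pay electronically for one service but still pay traditionally for another, is essentially a set intersection over payment records. A minimal sketch; the record fields and method names are illustrative, not Swale's actual data model:

```python
# Hypothetical per-service payment records.
payments = [
    {"customer": "C1", "service": "council_tax",  "method": "direct_debit"},
    {"customer": "C1", "service": "garden_waste", "method": "cheque"},
    {"customer": "C2", "service": "council_tax",  "method": "cheque"},
    {"customer": "C3", "service": "council_tax",  "method": "direct_debit"},
]

ELECTRONIC = {"direct_debit", "online_card"}

def mixed_method_customers(payments):
    """Customers who pay electronically for at least one service
    but still use a traditional method for another."""
    electronic, traditional = set(), set()
    for p in payments:
        bucket = electronic if p["method"] in ELECTRONIC else traditional
        bucket.add(p["customer"])
    return sorted(electronic & traditional)

print(mixed_method_customers(payments))  # → ['C1']
```

These are exactly the customers worth contacting about switching their remaining services to automated payment.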

UNITING THE DATA Connexica is a business intelligence company formed in 2006 that specialises in making an organisation's data actionable. This is achieved using CXAIR, a search‑based analytics program that can draw from multiple data streams and allow for effective cross‑referencing. The software has proven popular within the healthcare sector for bringing together data from a wide range of sources and producing intuitive visual reports and dashboards.


CXAIR is designed to easily integrate a multitude of data streams, whether within an SME or encompassing an entire county. However, being able to gather data from multiple sources raises the challenge of dealing with low quality data. Traditional analytics software struggles when presented with data that contains inconsistencies – usually errors that were introduced when the data was entered into the system. Even a missing space in a postcode, for example, can throw off many analytics solutions. This poses a challenge for many software engineers.

For a number of years, there has been an increasing challenge across the analytics landscape in managing the exponential growth of not only data size, but data variety. It is not just a case of scaling up analysis linearly alongside the amount of data, as this is not a cost‑effective solution. Instead, software must tackle the data inefficiencies by obtaining a single version of the truth, unifying data to tackle the discrepancies that occur when multiple systems are used to deliver insight. Through effective use of validation checks, the resulting data is centralised and able to provide a holistic view of activity while ultimately increasing data confidence. CXAIR addresses this issue by running data validation processes as it receives information. This allows potential problems to be identified from the very beginning and resolved before errors occur, helping to ensure all future data is of high quality.

However, even having high quality data is not enough. Hiring the right people, with the right technical expertise to analyse, interpret and present the data in a way that makes it easy to digest for non‑technical staff, is equally important. This costly and laborious process has traditionally been one of the biggest barriers to making use of big data and gaining deeper insights from it.
Business intelligence has become increasingly high on CIO priority lists in recent years, so there are a lot of start‑ups creating analytical software. However, it often requires specially‑trained analysts to draw any valuable insight from what is collected. Connexica set out to achieve a democratisation of business intelligence,


which is the process of enabling any member of staff, regardless of their technical know‑how, to navigate the information and understand it. This has many benefits for a variety of businesses, but is of particular importance to councils looking to minimise costs.

DATA SECURITY
“One of our biggest priorities when choosing the right business intelligence software was that of data protection and security,” explained Sturtivant from MKS. “Much of the information that councils work with is of a sensitive nature and so it must be handled in accordance with a number of regulatory guidelines.” One such regulation is the Data Protection Act (DPA) 1998, which outlines that all personal data held by businesses or organisations must abide by eight data protection principles, the seventh of which relates to the security of held information. This covers protection from both third‑party compromise and accidental data loss. However, this causes concerns for many local authorities. There is currently a heated technological debate about the security of cloud computing, which is the platform that many web‑based services use for handling programs and data. In fact, the 2016 state of the cloud survey revealed that many IT staff believe that security is one of the biggest challenges to cloud implementation – second only to functional competency.2 To calm those fears, Connexica uses a mix of cloud and local storage. The software itself is based in a private cloud, accessible only to an organisation's workforce, and keeps a local index of information. For MKS, the data streams came via a mixture of locally hosted back office systems and other secure cloud‑hosted systems.

CHALLENGES OF BUSINESS INTELLIGENCE Implementing business intelligence software into a system can come with a unique set of challenges in each application. Whenever an organisation rolls out a new piece of software

or a new computer system, staff often face a steep learning curve to get up to speed quickly. To overcome this, a series of training sessions was provided, including admin setup and dashboard navigation, to ensure staff were fluent in the software.

“The support we received from the team at Connexica was excellent. They guided us through the installation process and helped with minor teething problems,” continued Sturtivant. “After setting up CXAIR promptly, the team remained on hand to help, ensuring that it was the ideal solution for the partnership.”

MAKING BIG DATA WORK
The use of business intelligence software has streamlined the tasks of MKS. Within Maidstone Borough Council, for example, one such task was monitoring and analysing the housing options in the borough. This had previously been a time‑consuming process involving three separate systems, but is now all done within CXAIR. The benefit of this was that staff could see at a glance whether household spending was in line with audited household income, which in turn informed decision‑making. Likewise, the centralised location of data made for better management of funding for temporary accommodation – something particularly important in light of the area's rising reports of homelessness. These efficiency and business‑planning benefits are possible with the combined use of big data and easily‑accessible business intelligence software. As the quantity of generated and accumulated data continues to increase, it will only become more important for greater numbers of staff to be able to make use of it. While not every company or organisation will face the same budgetary limitations as local authorities, many are under the same pressure to streamline processes and increase return on investment. Effective business intelligence software is the key to achieving this, making resources go further and gaining a competitive advantage in the process.

References
1. 'Gartner Executive Programs: Building the Digital Platform: Insights From the 2016 Gartner CIO Agenda Report', https://www.gartner.com/imagesrv/cio/pdf/cio_agenda_insights_2016.pdf
2. 'Cloud Computing Trends: 2016 State of the Cloud Survey', http://www.rightscale.com/blog/cloud-industry-insights/cloud-computing-trends-2016-state-cloud-survey

CONFLICT, SOLUTIONS AND RESOLUTION Nicholas Ashton, CEO/CIO, CommSmart Global Group, reviews the possibilities of software in safeguarding the public.





Which came first in the world of computers: software or hardware? Two engineers were overheard arguing the point. The software engineer said: "Without software, hardware cannot even execute its job." The hardware engineer retorted: "Without hardware, your software will have nothing to run on." And so, on and on, the argument continues. The world of policing is exactly the same, argumentative in nature and with procrastinators in action.

DIGITAL POLICING
Sir Robert Peel, the father of modern policing, stated in 1829 that 'the people are the police and the police are the people'. You would never know it in today's uncanny world of societal failure. Yes, the whistle and truncheon have been replaced, but management is still having meeting after meeting about digital policing, amid budget cuts and the depletion of experienced staffing levels. Computers and programming are not here to make our lives more difficult, as some still think. They are here to assist in our daily logical and illogical attitudes to life. Mundane, repetitious actions bore us and we miss the most simplistic signs, costing time, money and lives. Digital policing is not something that just happened; it has been in use for decades, in its initial form as simple databases. San Diego, California had one of the first jail databases in the US in 1983 and has since developed many programs to assist the daily intake of information and occurrences. Today it is a proven digital enhancement to daily police work, developed not by software engineers alone, but with law enforcement officers with a major enterprise connection. Twenty‑one years ago the first real street‑level crime analytics and predictive analysis hit the streets with enormous success. Like Peel, visionaries lead the way with innovative, easy to use crime fighting tools, which come with the street‑level experience that must go hand in hand with crime fighting and communities. With such a large percentage of the global population online, it is not just how you respond to them in an online manner; it is the glut of information that you never had access to before for crime solving issues. It is not how the public perceives your Facebook page or the content of your Tweets; it

is all about what they, the public, are posting and texting about that is of vital importance. We have opened a Pandora's box of information which, before, experienced police officers would have had to spend many hours discerning from research, trusted sources and more. The British Home Secretary stated some time back that, “cutting crime means catching criminals, but it also means preventing crime”.


Digital policing is not something that just happened, it has been in use for decades, in its initial form as simple databases

Digital policing relies on agnostic capabilities to look at all data sources and collectively fuse the information under one roof, so to speak. Whether the public understands it or not, information is information, and privacy factors are removed in most countries. Public social networks are just that: public. Daily social media usage is climbing. We see emotional posts, threats, questions and braggadocious boasts; and those comments are all insightful, giving law enforcement a chance to be proactive. Watching and paying attention to what others are posting can assist in understanding what the street chatter or atmospherics really are. It is a superb proactive tool and can minimise issues. The ability to geofence areas and see what is transpiring allows for definitive policing. Even after an event, the witness chatter is priceless. When used with analytics, its value increases ten‑fold.
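Geofencing of the kind described above boils down to a distance check between each geotagged post and a watch area. A hedged sketch using a circular fence and the haversine formula, which is one common approach; the field names and coordinates are illustrative:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(post, centre, radius_m):
    """True if a geotagged post falls inside a circular watch area."""
    return haversine_m(post["lat"], post["lon"], *centre) <= radius_m

centre = (51.5074, -0.1278)  # illustrative: central London
post = {"lat": 51.5080, "lon": -0.1290, "text": "big crowd forming here"}
print(in_geofence(post, centre, radius_m=500))  # → True
```

Real deployments typically use polygonal fences and spatial indexes, but the principle, filtering a public stream by location before analysis, is the same.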

SMARTPHONE-CONNECTED POLICING
The interface with smartphones is massive, with the ability to use all the inner workings such as GPS‑enabled apps, news channels and, of course, social media. Law enforcement‑wise, it has enriched our capabilities for interaction and information gathering, leading to more intelligence‑led community policing efforts. The smartphone is a connection to law enforcement and an opportunity to glean information, allowing it to serve as a personal journal of its owner in the case of a crime or event. It is more than just a telephone; it is practically a crystal ball! It is a two‑way street for gathering and sending information that can avert a crisis or a public event such as a missing child, or maintain stability during road closures or even a terrorist attack.


Beginning his career in the Metropolitan Police, Nicholas has decades of experience in counter-terrorism and cybersecurity. He has founded and worked at numerous different companies serving law enforcement agencies and is a champion for digital policing initiatives.



Peel could never have dreamt of the amount of data that would be aiding the police of the future


At CommSmart, we have embraced digital policing. It is now an enhancement to law enforcement's original mission and continues the principles of policing. Commercial enterprise has invested billions in analytic capabilities, and this is a cost‑saving investment for all global law enforcement.

LOCATE SUSPECTS, WITNESSES AND QUICKLY UNCOVER ASSETS
As we know, during the initial stages of an investigation, information is scarce. CommSmart's predictive crime and analytics tool, Atmospherics, uses investigative technology that can expedite the identification of people and their assets, addresses, relatives and business associates by providing instant access to a comprehensive database of criminal/public records that would ordinarily take days to collect. Developed by experienced law enforcement professionals, it enables law enforcement agencies to locate suspects, find missing children and quickly solve cases. This is tied to the RMS/CAD data used by 'beat officers' to understand their crime stats. Daily, they are reminded of prior events and historical data, which point to crime 'hotspots' and feed predictive analysis. The social and analytical interactions are designed to be accessible at all levels, meaning the information can benefit officers, analysts and management. Peel could never have dreamt of the amount of data that would be aiding the police of the future. When looking at the chatter that happens prior to and after a criminal event, you can drill down into social media and monitor the activity of all connections, historically and in real time, including their photographs, texts and live video. Nothing is left to chance! This also improves and aids probation services and anti‑social behaviour intelligence. The culmination of information enables productive and predictive crime resolution.
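At its simplest, the hotspot identification mentioned above can be done by binning geotagged incidents into a coarse grid and flagging dense cells. This is a simplified stand-in to show the idea, not Atmospherics' actual algorithm, and the incident data is invented:

```python
from collections import Counter

# Hypothetical geotagged incident records (lat, lon).
incidents = [
    (51.5101, -0.1201), (51.5103, -0.1199), (51.5099, -0.1204),  # clustered
    (51.5300, -0.0900),                                          # isolated
]

def hotspots(incidents, cell_deg=0.001, threshold=3):
    """Bin incidents into a grid of cell_deg-sized cells; cells with
    at least `threshold` incidents are flagged as hotspots."""
    cells = Counter(
        (round(lat / cell_deg), round(lon / cell_deg)) for lat, lon in incidents
    )
    return [cell for cell, n in cells.items() if n >= threshold]

print(len(hotspots(incidents)))  # → 1: the three clustered incidents share a cell
```

Production predictive-policing tools layer time-of-day patterns and recency weighting on top of this kind of spatial density count.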


It draws all of the above information onto one screen, becoming an even more substantial asset at the core of what law enforcement is all about: resolution. It is pure, simple, linked analysis. Local law enforcement, under the statutes, is concerned with responding to crime, keeping order and investigating illegal activities; and if they have the time and information, they can even try to prevent crime. This, after all, is their role: to police the community they serve.

SUMMARY
The people are the police and the police are the people. The Nine Peelian Principles1 of 1829 still stand tall today and go even further when connected with technology and professional policing standards. However, if we learn anything from the global experience it is that conventional policing methods are not an effective way to deal with or to prevent terrorism. By conventional policing, we are assuming that agencies have already evolved and adopted, or reinstituted, a community approach to faithfully policing their communities. Many forces are moving away from a collective solution and turning to smartphone apps, which leads to a disconnected, individual solution that will not be able to connect the county, country and individual intelligence agencies. More than 18,000 law enforcement agencies across the US have entered the computing and cyber environments with CAD and RMS (street‑level data) systems, with some even joining data sharing networks. Data analysis has increased policing efficiency and has advanced policing in the right direction in support of information‑led policing. Sadly, community policing and information‑led policing together are not aggressive enough to deal with terrorism and drug/gang‑related crime in their communities, because agencies ignore the largest source of data and information in today's society: social media. Predictive analytics and social media monitoring are the key to utilising all resources and ensuring digital policing continues evolving and safeguarding communities.

Reference
1. Robert Peel's 9 Principles of Policing, https://www.gov.uk/government/publications/policing-by-consent/definition-of-policing-by-consent


INSPIRING, RECRUITING AND RETAINING TALENT Shaping the testing minds of tomorrow and recruiting the right people today. Amy Munn, Lead Digital QA Manager, Three, argues it's possible with the right amount of effort.




Attracting the right type of candidates and the quality of applicants in the market are two consistent challenges faced by all QA managers tasked with recruitment. I joined Three as their Digital QA Manager in 2015 and, like a lot of new managers, one of my priorities was to fill my vacant positions. Firstly, I wanted to understand why they'd struggled in the past. Being located outside of London, I wondered if that would limit the number of applications, but when I sat down with the recruitment team, a shortage of applications wasn't the problem – it was the quality of candidates we were attracting that just didn't meet our needs. Were we not being clear about what we wanted, or was there a genuine shortage of people with the skills I was looking for? This led me down a few different paths, so I thought I'd share some of my experiences and hopefully leave you with a few considerations that have helped me be more successful in recruiting, retaining and shaping people entering the industry.

WHO WOULD 'CHOOSE' A CAREER IN TESTING?
Through my years working with testers, I've had countless discussions with people about how they ended up in the field. There was always one thing in common that made me chuckle – not one person actively sought out a career in testing; it was always something that people 'fell into'. I also put myself in this category: when I was being given career advice at school, the most exciting 'technical' job I could hope to pursue back then was graphic designer, while the 'women‑focused' roles being pushed were still nurses and PAs. So, like many, I have a varied work history, which I have discovered to be very useful. Now, as a QA Manager who has struggled to recruit, I'm looking to hire someone that fills a range of needs. When I'm looking for rounded QAs, I'm a huge fan of passion, aptitude and common (though annoyingly uncommon) sense over experience and professional qualifications. Does this mean I'm jumping to only employ people with no concept of testing? Of course not. What it means is that understanding how to blend and diversify a team can have great benefits for you as a manager, your function and the wider business. Why am I telling you all this? As seasoned testing professionals, I'm sure this is something you're already aware of and

are potentially working towards. I'm actually writing this to reach out to people in my position and provoke a point of thought: 'what can we be doing to shape these new testing minds?' There has been an interesting shift in the industry over the past couple of years; we're now finding young, eager, capable individuals actively pursuing a career in the field of testing. We have people coming out of higher education who aren't just looking for their first foot‑in‑the‑door; they're already seeing what it took many of us years to understand – testing can be great! How can we harness this energy and willingness to learn but also help these people understand the world they're entering, to give them – and us – a great base for making a positive impact from day one? I'm sure some people reading this will have a great grad scheme in place that might be giving these people a great leap into IT but, unfortunately, that doesn't apply to everyone. Grad schemes are also designed to give people a variety of experience and exposure, so by the time you've worked with someone and they're showing their greatness… off they go to their next assignment area. What I was ideally looking for was a group of people who already had a good enough understanding of the function to know they wanted to join our field, so I wouldn't be investing time in those with no inclination to jump into these (often muddy) testing waters.

There has been an interesting shift in the industry over the past couple of years; we're now finding young, eager, capable individuals actively pursuing a career in the field of testing

WHERE A RECRUITMENT PARTNER CAN HELP
It was a couple of years ago at The European Software Testing Awards when a chance discussion at the bar regarding my frustrations as a manager resulted in my introduction to Sparta Global. Obviously every salesperson you speak to has the answer to your problems so, even full of champagne, I was sceptical. What I found, though, was a company that could potentially provide everything I'd been looking for – reasonably priced, trained and qualified graduates with an intense training schedule covering all the skills QAs need in agile environments.

Sparta Global were my first introduction to this type of model, so I was keen to see for myself exactly how the courses were run, what their content was and the type of people they attracted, so I went to their offices to meet everyone involved. Courses are run in a few different cities and I visited their main training campus in Richmond to get an idea of what graduates experience. I had


Amy is a digital test manager who's passionate about building strong, blended teams that enjoy what they do and who they work with. She works hard to cultivate an open and honest culture that supports personal and professional growth among people who share the same core values and behaviours.

T E S T M a g a z i n e | M a y 2 01 7


Ideal graduate training doesn't just focus on the processes or the technical elements, soft skills are given equal importance and it means future testers are equipped with knowledge of how to confidently and articulately engage with stakeholders


the opportunity to speak with everyone from those who interview the graduates hoping to join the programme, to the people creating the lesson plans and those carrying out the training. What struck me immediately was how passionate everybody was about the model and why it really can solve some of the recruitment frustrations of managers like me.

One of my best experiences was getting to see some of the graduates play back to the Sparta team what they had achieved during their training. This is a standard graduation piece that all courses go through, where they present to the entire team what they have learned and, more importantly, how they've put this into practice and what they've been able to produce. I saw the web development teams graduating and, in true agile style, as a practical element of their training they had produced real MVPs that could have been used either by the Sparta office team or for future iterations by other courses. These MVPs are completed over two accelerated sprints totalling anywhere between six and 10 days.

The graduates demonstrated the software, talked through their build process and the technologies used, and then backed up their approach based on their learnings. They also covered their agile understanding, what their challenges had been and how they adapted their second sprint to take advantage of these learnings. The entire graduating team participated, with each person presenting to a group of at least 25 people – and to see a group with such confidence in what they'd learned, easily responding to any questions on their thought process, was really exciting.

This is when I realised ideal graduate training doesn't just focus on the processes or the technical elements; soft skills are given equal importance, which means future testers are equipped with the knowledge to confidently and articulately engage with stakeholders.

WHAT'S THE REAL WORLD LIKE, THOUGH?
The part which isn't covered by formal training is sharing real‑world experience of what it will actually be like to work in a corporate environment; the attitudes, processes, priorities and (dare I say it) even the politics of companies can be much trickier for someone to understand and adapt to, compared to passing the ISTQB. This is also the area where it is so easy for people with experience to help these 'newbies' understand and, ultimately, benefit. As Sir Francis Bacon said, knowledge is power – going into a role with an understanding of the challenges that will be encountered is a great bonus, and it allows people to adapt much faster.

Having been inspired by my first visit to Sparta, I have returned on several occasions to their graduation events, which they call huddles, in an effort to share my experiences and advice with as many people as possible. I've presented to them on a range of subjects, from explaining how Three's digital department is made up and what challenges we've faced, to what I experienced as a new tester and what advice I have for their first placement.

The more time we can invest in supporting these new minds and helping this new talent, the more rounded and practical the people we will get the opportunity to employ. Whether it's writing up your experiences in an article or blog post, or seeking out companies or events you can present or network at, I implore you to share these everyday challenges as much as you can so our industry can reap the rewards.

ATTRACTING THE RIGHT CANDIDATES
Where a model like Sparta's or using a consultancy might increase your pool of candidates, you still need



Re-writing the job spec to better articulate how we work, what our aspirations are as a digital team and emphasising the importance of stakeholder management and great communication instantly improved the quality of applications



to ensure they are the right fit for your company – something which is potentially more important when hiring permanent employees. I only say potentially as there is no reason your standards should slip when employing contractors; depending on your company's practices or current position, contractors may be your only choice. You don't want a high proportion of contractors exhibiting contradictory values disrupting a culture you've worked hard to achieve. Although changing contractors can be easier than permanents, no‑one wants the additional admin and distraction of repeating the hiring process.

When recruiting permanent employees, before you get to assessing someone's cultural fit, you need to make sure your advert or job description is sending out the right message. Although I never struggled to receive a decent number of applications, I found myself having to discount the majority due to poor communication skills, something which is so essential in an agile environment. I started by reviewing the inherited job spec and found it was like a lot of others you see in the industry – great at listing out the technical skills needed, but saying nothing about our environment or culture.

Re‑writing the job spec to better articulate how we work and what our aspirations are as a digital team, and emphasising the importance of stakeholder management and great communication, instantly improved the quality of applications. Rather than feeling I was getting people sending out the same untailored CV to multiple organisations, I was receiving covering letters explicitly picking up on the cultural element and saying why this was an exciting opportunity. Further to this, I was fortunate enough to have the support of my Resourcing Partner at Three, Chris Ellis, who had experience with a tool called Launch Pad.
This allowed me to change my recruitment process to one that would ensure only people with the skills I wanted would progress through to the second, face‑to‑face stage: after candidates had been through the initial review, the first interview stage was changed to a video submission. Applicants would be asked five key competency questions, which I would pose via video recording, and allowed between one and three minutes to record each answer. This wasn't something I'd ever tried before and, at the earlier stages of my career, I, as a candidate, would've found it daunting. But would it have deterred me from applying for a role that I really wanted? Absolutely not.

I started putting a higher number of people through to the first stage, knowing that not everyone would respond, and I had a return rate of about 20%. While many were still not suitable, it allowed me to really quickly progress or decline candidates without hours being lost on telephone interviews. For the people I did want to see face‑to‑face, having seen their CVs and been satisfied with their video answers, it allowed me to focus primarily on behavioural questions so I could truly assess their team fit. I would always interview with one of my peers from another function to get validation on whether they were someone we'd enjoy working with.

SUMMARY
In summary, here are a few points that I'd recommend starting with if you're having some of the same challenges:
• Experience versus aptitude – understand when you need to pay for experience and when you can afford to take on someone who knows the basics but is keen to learn. A good blend can be great not only for the team but for your budget too.
• Focus on behaviours – tailor your recruitment process to the type of person you're looking for; when you find a person who shares your values and is truly passionate about what you're trying to achieve, it will be a great hire for you both.
• Identify great vendors/partners – find suppliers that meet a variety of needs and work with them to help them understand your business. The more they understand, the more suitable candidates they can provide.

MANUAL TESTING ISN'T DEAD
Nick Welter, Senior Market Manager, Testing Products, SmartBear Software, celebrates hands‑on testing.





Any person with some degree of exposure to software development knows that two of the major trends impacting the industry today are the emergence of agile, DevOps and other methodologies displacing traditional waterfall approaches, and the longer‑term trend of test automation. While test automation technology has been in existence for decades, it has become increasingly powerful and ubiquitous in recent years. Today, testers are not only automating tests at the user interface level, but also at the API and unit layers. The growing popularity of open source tools and frameworks such as Selenium and Jenkins has allowed testers to hook automated tests into their rapidly shrinking test cycles.

Looking forward, where does this leave manual testing? Will automation overtake most, if not all, of the activities currently performed by manual testers? Or are there some things that are not likely to be automated? Looking at the available facts, it's hard to see certain aspects of software testing becoming automated. Since the growth of automation has broader implications for our society beyond software testing, it is illuminating to draw lessons from experiences outside the world of software development.
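To make the unit-layer end of this spectrum concrete, here is a minimal sketch of automated unit checks of the kind a CI server such as Jenkins might run on every build. The function under test and its rules are invented for illustration, not drawn from the article:

```python
# Hypothetical example of automated checks at the unit layer:
# scripted assertions that run identically on every build.
# apply_discount and its rules are invented for this sketch.

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

def test_typical_discount():
    # 'Known unknowns': cases the tester anticipated in advance.
    assert apply_discount(200.0, 25) == 150.0

def test_boundary_values():
    assert apply_discount(99.99, 0) == 99.99
    assert apply_discount(99.99, 100) == 0.0

def test_invalid_input_rejected():
    try:
        apply_discount(50.0, 120)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

# A runner such as pytest would collect and execute these on each commit.
```

The point of the sketch is that each assertion encodes a failure mode someone thought of in advance, which is exactly the limitation the next section explores.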

KNOWN UNKNOWNS VERSUS UNKNOWN UNKNOWNS
In a 2002 news briefing during the lead‑up to the Iraq War, then US Secretary of Defense Donald Rumsfeld responded to a reporter's question by categorising facts as things we know ('known knowns'), things we know we don't know ('known unknowns'), and things we don't know we don't know ('unknown unknowns'), with the latter being the most challenging to deal with.1 This is often true when applied to software testing, as the most troublesome bugs are often the ones which pop up in unexpected places. Automation can be a powerful way to reduce the frequency and magnitude of bugs, but scripting requires a developer or tester to form an informed hypothesis about where bugs will appear before execution. Thus, as great as automation can be for identifying your 'known unknowns', it can be equally insufficient at helping testers identify 'unknown unknowns'. Sometimes, the best way to mitigate the risk of failure is to take a black‑box approach and have real users interact with the tool independently of the underlying architecture.

As a society, we are seeing automation, algorithmic analysis and predictive modelling permeate our everyday lives. By taking presupposed assumptions and applying them on an unprecedented scale, we are discovering more about the world around us than at any other point in human history. But what if the assumptions are wrong? This limitation can have profound consequences, from incorrectly mapping the spread of a deadly disease2 to potentially swinging the outcome of a presidential election.3 In both cases, a bigger emphasis on real‑time conditions would have made a difference, whether they came from a village in West Africa or a suburb of Detroit.

What does this mean for testers? In this context, a mixture of automation and manual testing can help minimise both your 'known unknowns' and your 'unknown unknowns' in an efficient manner.

What does this mean for testers? In this context, a mixture of automation and manual testing can help minimise your 'known unknowns' and your 'unknown unknowns' in an efficient manner


Before joining SmartBear, Nick worked at HealthCareSource, where he specialised in marketing automation and operations. Nick holds a bachelor's degree in Economics from Wheaton College.



Despite this, humans will still be responsible for designing applications, and by proxy, the design of the tests for that application (even in a future where AI can write code, somebody still has to design the AI program)


SOME THINGS ARE REALLY HARD TO TEST
Most testers with automation experience are familiar with aspects of an application that are difficult to test without doing so manually. Video is a good example of this, as is how a web application will render when accessed using different devices. In general terms, testing technology is evolving to the point where there are now tools that help testers overcome challenges such as recognising images and virtualising APIs in production.

However, while technology is advancing to the point where it can help solve some of the trickiest testing challenges, software is becoming increasingly ubiquitous in our everyday lives. This has major implications for those testing software, as how an application performs from a functional standpoint will largely depend on how humans (who are unpredictable by nature) interact with it.

We are already seeing examples of this today. Uber was in the news a few weeks ago, when one of its self‑driving cars was involved in an accident with another vehicle and flipped onto its side during a test drive in Tempe, Arizona.4 The cause? Another (human‑operated) car failed to yield to the self‑driven car, triggering the accident. Fortunately, nobody was injured in this incident, but there are sadly examples of overreliance on automation having life and death consequences. In the aftermath of the Asiana Flight 214 crash at San Francisco International Airport, it was discovered that the pilot had selected the wrong autopilot setting, causing the aircraft to land short of the runway.5

The takeaway from these accidents isn't that automation itself is problematic (quite the opposite – it can provide profound efficiency improvements in almost every area in which it is applied), but that its shortcomings are magnified when it comes into close contact with the fickleness and randomness of human nature. The autopilot in both the self‑driving Uber and Asiana Flight 214 likely performed to specification from a programmatic standpoint, but failed in its real‑world application – something that is very difficult to automate.

THE FUTURE OF MANUAL TESTING
We've touched upon three areas where manual testing can fill gaps that automation alone might not fully cover in the application under test:
1. Difficult‑to‑test objects, such as images and videos.
2. Unknown unknowns – bugs that are by nature unpredictable.
3. Human factors – how technology interacts with the humans using it.
Of these, significant progress has been made in testing objects that are hard to test, and it is not difficult to imagine a future where technology allows us to test these easily. Despite this, humans will still be responsible for designing applications and, by proxy, the design of the tests for those applications (even in a future where AI can write code6, somebody still has to design the AI program). Humans will also be the end users, or beneficiaries, of these applications and, in many cases, will interact with them in the wild. In this sense, it is hard to imagine a future where we will automate our way around these 'unknown unknowns' of our applications and how we interact with them. If this is true, then manual testing is not, and never will be, dead.

References
1. 'DoD News Briefing - Secretary Rumsfeld and Gen. Myers', US Department of Defense, http://archive.defense.gov/Transcripts/Transcript.aspx?TranscriptID=2636
2. 'Big data's "streetlight effect": where and how we look affects what we see', The Conversation, http://theconversation.com/big-datas-streetlight-effect-where-and-how-we-look-affects-what-we-see-58122
3. 'Clinton's data-driven campaign relied heavily on an algorithm named Ada. What didn't she see?', The Washington Post, https://www.washingtonpost.com/news/post-politics/wp/2016/11/09/clintons-data-driven-campaign-relied-heavily-on-an-algorithm-named-ada-what-didnt-she-see/
4. 'One of Uber's self-driving cars just crashed in Arizona', The Verge, http://www.theverge.com/2017/3/25/15058978/uber-self-driving-car-crash-arizona
5. 'Disaster on autopilot: How too much of a good thing can lead to deadly plane crashes', National Post, http://news.nationalpost.com/news/world/disaster-on-autopilot-how-too-much-of-a-good-thing-can-lead-to-deadly-plane-crashes
6. 'Self-Programming Artificial Intelligence Learns to Use Functions', Kory Becker, http://www.primaryobjects.com/2015/01/05/self-programming-artificial-intelligence-learns-to-use-functions/





TEST Focus Groups Supplement – May 2017



A forum for new ideas


Earlier this year, senior testing and QA professionals gathered in boardrooms to discuss and learn from each other in an open, relaxed environment. Topics covered ranged from test automation and outsourcing through to performance testing. Attendees were able to share challenges and frustrations, and chat to peers from outside their own organisation or business sector. It is so important for the testing and QA community to share its thought leadership, collaborate and grow together as we face demanding times and changing structures.

We've put together this Syndicate Supplement to ensure that learning from the day is shared with the wider community, in the hope that it will help move the market forward. The Focus Groups will next return in October, shining the spotlight on DevOps. Attendees can expect heated discussions around managing culture shifts, automation, cloud technologies, DevSecOps and more. If you're interested in attending future roundtable discussions, please don't hesitate to reach out to me.


EDITORIAL ASSISTANT Leah Alger leah.alger@31media.co.uk +44 (0)203 668 6948


ADVERTISING ENQUIRIES Anna Chubb anna.chubb@31media.co.uk +44 (0)203 668 6945



The panacea for digitally transforming organisations


Does test automation mark the end of manual testing?




Next generation testers

QA in a DevOps world




Performance testing in 2017


The hidden benefits of outsourcing software testing

© 2017 31 Media Limited. All rights reserved. TEST Magazine is edited, designed and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed or copied, in whole or part, without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor of TEST Magazine or its publisher, 31 Media Limited. ISSN 2040‑01‑60

GENERAL MANAGER AND EDITOR Cecilia Rehn cecilia.rehn@31media.co.uk +44 (0)203 056 4599


Test data management takes centre stage



PRODUCTION & DESIGN JJ Jordan jj@31media.co.uk

31 Media Ltd, 41‑42 Daisy Business Park, 19‑35 Sylvan Grove, London, SE15 1PD +44 (0)870 863 6930 info@31media.co.uk www.testingmagazine.com

PRINTED BY Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA

softwaretestingnews @testmagazine



Mobilising success

TEST Magazine Group


T E S T M a g a z i n e | Sy n d i c a t e S up p l e m e n t | M a y 2 01 7

TEST DATA MANAGEMENT TAKES CENTRE STAGE

Paula Thomsen, Head of Quality Assurance, Aviva UK, reviews how QA and testing departments are preparing in light of the upcoming General Data Protection Regulation (GDPR) legislation.






Understanding and preparing for the new GDPR legislation, which will come into effect on 25 May 2018 in the European Union, were key themes discussed amongst attendees at the roundtable sessions at the TEST Focus Groups event.

UNFAMILIARITY WITH GDPR
Surprisingly, many senior testing managers and executives said their organisations weren't too familiar with the EU regulation, which is set to give consumers control over how organisations use their personal data and requires opt‑in consent. One critical, and technically challenging, aspect is the consumer's right to have their information removed immediately from databases, meaning organisations need to review their test data policies to ensure compliance. As discussed during the roundtable sessions, failure to comply with GDPR can lead to fines of up to 4% of global annual turnover.

A few delegates mentioned that there has been a sense of confusion around the likely implementation of GDPR in post‑Brexit UK but, since the sweeping legislation essentially covers any company that does business in the EU, it cannot be discounted by most. Regardless of the confusion, it is important that we understand the legislation and how it impacts the companies for whom we work.

HOW TO APPROACH TEST DATA MANAGEMENT GOING FORWARD?
The roundtable discussions began with ways to approach test data management going forward. One solution is using anonymised (pseudonymised) data for testing, but this must be strictly enforced and controlled to ensure both security and customer privacy. Another option is using synthetic test data, which is not linked to any real accounts. Synthetic data can work well for some, such as ecommerce sites, where the products and processes are relatively straightforward. However, an attendee from an analytics firm mentioned that it is practically impossible to generate this type of test data for their needs and applications. Therefore, depending on how your data is used, the relative value synthetic data gives you may not justify the cost.
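As a minimal sketch of the pseudonymisation approach discussed above: direct identifiers are replaced with stable, irreversible tokens so that copies of the data remain referentially consistent, while non-identifying fields stay realistic. The field names and salt handling here are invented for illustration; a production scheme would need proper key management and a documented re-identification policy:

```python
# Sketch of pseudonymising a customer record for a test database.
# Field names and salt handling are illustrative assumptions only,
# not a production-grade scheme.
import hashlib

SALT = "per-environment-secret"  # assumption: held outside the test data

def pseudonymise(record):
    """Replace direct identifiers with stable, irreversible tokens so
    referential integrity across copies of the data survives."""
    masked = dict(record)
    for field in ("name", "email", "driving_licence"):
        if field in masked:
            digest = hashlib.sha256((SALT + masked[field]).encode()).hexdigest()
            masked[field] = digest[:12]  # short, deterministic token
    return masked

live = {"name": "Jane Doe", "email": "jane@example.com", "postcode": "SE15 1PD"}
test_copy = pseudonymise(live)
# Identifiers are now tokens; the postcode is left intact for realism.
```

Because the same input always maps to the same token, records that referenced each other in the live data still line up in every pseudonymised copy, which matters for the duplication problem discussed later.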

The roundtable attendees also shared their thoughts on using bespoke tools for test data management versus freeware, or in-house developed solutions. Freeware can be a good starting point for some, but for organisations with complex architecture and multiple databases, a more bespoke solution might be necessary.

UNDERSTANDING THE TEST DATA Another key area that came under focus during the discussions was the issue of „

Live data can creep in if it’s not vigilantly monitored; the pressure of fast delivery means that a team might go in for a quick one-off hit, and don’t have time to properly clean up, and then before you know it the data has been replicated elsewhere


Paula's testing experience spans over 20 years, covering both functional and non‑functional testing. She has worked predominately within the finance industry, but spent a stint in the manufacturing industry. Paula is passionate about how as a community of IT professionals it is our responsibility to promote IT and testing as a career.



Ultimately, it was agreed that, for the most part, the GDPR is a good thing – it’s about protecting customer data



duplication. For most test managers around the table, representing large financial organisations, ecommerce sites and the public sector, there is a real challenge in firstly understanding the number of copies of each data set that exist, and then in managing all of them effectively to ensure compliance. There can be multiple copies of data across an organisation, particularly if internal systems feed into each other, or into external ones such as government systems. For example, general insurance providers capture your driving licence and your car registration – this is then checked against other databases within the DVLA. As a result, before you know it, your customers' data has been transmitted to tens of other systems, some of which your company may not control. That is why it is important that you actually understand your data and how it is all connected.

Most testing managers around the table were in agreement that live data should be avoided in test, in order to protect customer data. It was pointed out during discussion that live data can creep in if it's not vigilantly monitored; the pressure of fast delivery means that a team might go in for a quick one‑off hit and not have time to properly clean up, and before you know it the data has been replicated elsewhere.

THE QUESTION OF ACCOUNTABILITY Additionally, testing and QA departments have to take responsibility for monitoring test data usage. It’s not just about testing,



it's about the testing that your developers or other teams might be doing as well. Who in your organisation manages and monitors ALL the test data that is used? Who has access, at what time and for what purpose? And how do you use monitoring to safeguard customer data?

Another key area for discussion was where the accountability stops. Most testing and QA managers in the roundtable sessions mentioned using third party partners to help deliver various IT services, and noted that whether data management, monitoring and masking can be demanded depends on the contractual agreements in place. Where does ownership of your customer data stop and start between companies?

SUMMARY The General Data Protection Regulation legislation is coming, and will affect most organisations and their testing and QA departments. It was illuminating to see so many different executives from different business sectors come together to share their thoughts on test data management, accountability, and more. Ultimately, it was agreed that, for the most part, the GDPR is a good thing – it’s about protecting customer data. For many attendees this represents an evolution in good business practices, which most, regardless of the fines, would want to adhere to. Especially in light of the arguably much higher costs of losing public trust following a breach.



DOES TEST AUTOMATION MARK THE END OF MANUAL TESTING?

Myron Kirk, Head of Test, Environments & Release, Boots UK, explores how test automation is changing testing and QA departments.





The attendees at the recent TEST Focus Groups took part in a series of roundtable discussions around the changing skill sets of testers, different test automation journeys and tooling. It was clear that test managers' experiences and hopes for the future depend on organisational size, agility and available team skills.

THE JOURNEY The journey towards test automation has begun for most, and it is evident that some organisations are further ahead of others, with a combination of dedicated automation teams and more agile multi‑disciplined teams made up of technical SDET‑type roles. Typically, both approaches involve a solid budget for commercial tools, training and upskilling testers to build a diverse team with automation skills. In order to meet the demands of today’s fast marketplace and rapid deployments, some form of automation will be required. Most roundtable attendees felt that this adoption is inevitable, as well as an exciting opportunity.


A few participants represented organisations far ahead in their test automation journeys, with test managers expressing joy at working with dedicated test automation teams and dispersed capabilities. Team structure aside, it was evident that these organisations possessed a significant depth of automation expertise and experience. They were primarily large financial and retail/ecommerce institutions, and had typically been on the test automation journey for quite some years, with time to iron out the kinks.


In other organisations, not as far travelled on the test automation journey, test managers have been seeing more automation amongst their development teams, and are now thinking about how to apply the same concepts and ideas to speed up regression testing, and free up time for more exploratory and non‑functional testing. A successful method used by testing managers is the spreading out and embedding of technically skilled testers within their teams, to boost a department or group’s capabilities. Either this technical tester is a new recruit, or perhaps an internal transfer from the development team.

Manual testers in these medium‑maturity organisations expressed a worry about not being sure where to start adopting automation. Working with developers to write scripts and helping to carry out internal training could be a good starting point.


In smaller, more agile organisations, there are seemingly fewer challenges when it comes to adopting test automation. One test manager working for an app start‑up mentioned that all developers and testers, working in the same office, are able to switch up projects and roles to suit the organisation's needs. There are some concerns about how to adapt as the company grows larger and teams become more complex. Having the structure and culture to support growth is key when scaling test automation.

A key question to ask when investing in tools is whether an organisation needs an all‑in‑one automated functional testing product, such as HP UFT, or whether stand‑alone web and mobile automated testing tools are more suited

THE TOOLS
Tooling was a key discussion point: how best to bring in test automation suites? HP UFT, SpecFlow+ Runner and Selenium emerged as the most popular tools, and C# as the most popular language. A key question to ask when investing in tools is whether an organisation needs an all‑in‑one automated functional testing product, such as HP UFT, or whether stand‑alone web and mobile automated testing tools are more suited. As mobile continues to grow in importance, it is important that any overall test automation strategy takes mobile platforms into account as well. Interestingly, the majority of roundtable participants were not familiar with behaviour‑driven development (BDD) as part of their automation efforts. Where BDD was mentioned, it tended to be in the context of smaller organisations.
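Whichever tool is chosen, suites stay maintainable when test scripts are insulated from raw UI locators. A common way to do this is the page-object pattern; the toy sketch below illustrates the idea in Python, with a FakeDriver standing in for a real WebDriver (all class and locator names here are invented for the example, so it stays self-contained and runnable without a browser):

```python
# Toy sketch of the page-object pattern often used to keep web
# automation suites (e.g. Selenium-based ones) maintainable.
# FakeDriver is a stand-in for a real browser driver; all names
# are invented for illustration.

class FakeDriver:
    """Minimal stand-in for a browser driver."""
    def __init__(self):
        self.fields = {}
        self.url = "https://example.test/login"

    def type_into(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        # Pretend a successful login navigates to the dashboard.
        if locator == "login-button" and self.fields.get("password"):
            self.url = "https://example.test/dashboard"

class LoginPage:
    """Page object: test scripts talk to this class rather than to raw
    locators, so a UI change is absorbed in one place."""
    def __init__(self, driver):
        self.driver = driver

    def sign_in(self, username, password):
        self.driver.type_into("username", username)
        self.driver.type_into("password", password)
        self.driver.click("login-button")
        return self.driver.url

driver = FakeDriver()
landing = LoginPage(driver).sign_in("admin", "s3cret")
# landing now holds the post-login URL
```

The design choice matters at scale: when locators change, only the page object is edited, not every script that logs in, which is one reason teams embarking on the journey are encouraged to structure suites this way early.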

NEW METHODOLOGY REQUIRES NEW SKILL SETS
The roundtable attendees were in agreement that testers cannot remain stagnant in their personal development, as the market is demanding new automation skills. One testing manager with mostly manual experience mentioned being on the job market for over four months before finding a job – and this was in the London area – which was surprising to many. However,


Myron heads up Test, Environments & Release for a major health and beauty retailer. He is leading an initiative to transform testing through the adoption of tooling, automation and agile working practices. With close to 20 years’ experience working across the IT industry, Myron has spent a large part of his career focusing upon testing and quality.

T E S T M a g a z i n e | Sy n d i c a t e S up p l e m e n t | M a y 2 01 7




this signals that without automation skills, there is a likelihood of being missed or passed over. The testing manager expressed dismay, wanting simply to find a job in manual testing “because that’s what she enjoys”, but others concurred that such roles are unlikely to be around much longer, and that managers, as well as testers just starting out, should pursue self‑development and training in different skills, including automation.

CONCLUSION The senior testing professionals at the TEST Focus Groups came together to share thoughts, opinions, hopes and fears about the future of test automation. Whilst some


are happily working with automation on a daily basis, many manual‑only testers are concerned about a disappearing job market and too much change. Ultimately, the change is inevitable, but it is clear from the conversations that the test automation journey will vary for everyone: it depends on organisational readiness, maturity and the talent on site. The challenge of scaling up automation is omnipresent. Finally, there is much to be celebrated in the adoption of test automation as a method to help speed up the delivery of quality software and free up time for more creative, exploratory testing. Manual testing skill sets will not disappear, but for those who want it, now is a great opportunity to diversify their skills and grow.















Helping embed quality, from requirements to delivery, through collaborative software design principles. As official partners to Cucumber and SpecSolutions we deliver expert training, coaching and delivery.

Delivering context-driven solutions for continuous delivery pipelines and DevOps transformation, we lead by example, utilising good practice, best-in-class tooling and cloud infrastructure such as Docker, Ansible, Chef, Terraform, AWS, Azure and New Relic.


Improving the quality of software by helping teams adopt a test-first mentality, supported by proven methodologies in exploratory testing and automation. Check out our open source tooling at www.github.com/MagenTys


ENTERPRISE AGILITY Gain visibility and improve agile maturity at enterprise level for DevOps, Technical and Team health through our expert assessments, intelligent portals and clear dashboards for actionable growth items.

ATLASSIAN Improving how teams use JIRA, Confluence, Bamboo and other Atlassian tools. As official Solution Partners we offer licensing, implementation, training and end-to-end support.

To find out about our services contact: magentys.io




0207 1934 850

Next generation testers


Dan Martland, Head of Technical Testing, Edge Testing Solutions, explains how we must all evolve from dinosaurs to birds in order to thrive in the modern software development world.




As someone who meets lots of testing organisations and practitioners, I get to hear a lot of problems. It’s a bit like being a priest. Over time, you start to notice patterns, and one of the challenges that clients frequently talk about is the difficulty in getting people with the right skills. What does this mean and how can we overcome it? I was very pleased to host one of the topics at the TEST Focus Groups 2017 event, and we had some great discussions about this. We had diverse representation across organisations of different sizes, industries and levels of test maturity. We had a fairly even split between team managers and ‘do‑ers’, which is useful as they each have a distinct perspective on the challenge. We had a couple of attendees whose companies seemed to have a good handle on test resources, whilst the majority were experiencing challenges of some kind or other; this is great for the roundtable as we can learn from each other’s triumphs and tribulations. We even had one participant who claimed not to be a tester, though it turned out he did a lot of testing!

WHAT’S THE PROBLEM? If the topic of discussion is next generation testers, it’s a fairly safe bet that there is a problem (either real or imagined) with the current generation. The short version is that the type of people we have are a poor fit for today’s projects. I’ll explore that statement with a longer explanation. Our experience of test teams is that the majority of people fit into one of two camps. More than half of testers seem to come into the career from a business background. They probably got seconded to a project to work on UAT, and that might easily have been the only testing that the organisation undertook. They discovered that they were good at it (or their bosses did), and so they worked on multiple projects. Finding that they enjoyed testing, they chose to move into it as a career. Many of these testers focus primarily on business or end user testing. Whilst they may have ISTQB certification, they probably don’t have a technology background or computing degree. Perhaps a third of the testers we meet in projects do come from a technology background, and quite possibly have only worked on software development projects. They might be highly qualified in terms of ISTQB or ISEB, but they are quite often hyper‑focused on one aspect of technology.

With older ways of working, this split of skills could be made to work. Waterfall projects split off the more technical testing from the more business process testing, so you can plan resourcing for both separately. In projects which are either less structured (I will avoid the word agile) or where team sizes are pushed down due to the changing economies of development, it is much more important to have a flexible team. We have built a community of testers that is highly polarised in terms of skills and experience, so this is where the problem manifests. Another inexorable force on testing is the pace of change that people now expect in software delivery. Even 10 years ago, most systems had releases annually or even less frequently. Have you noticed how Microsoft Word is now just Word and no longer has the year suffix? It used to have new releases every couple of years. I strongly suspect we won’t see another ‘formal’ update of Word any time soon, just a constant drip feed of updates. This trend has also changed the dynamic of software development, meaning you get new code libraries to test out of sequence or in isolation. You can’t simply focus on the user‑facing aspects of a system, and testing code modules without an understanding of context can be a futile activity. Additionally, the need for regression testing grows exponentially with this approach. The old ways of doing things can’t keep up.


WHAT’S THE SOLUTION? Clearly there is a need to do things differently, but that requires a different type of test resource. There is a concept that is now quite well established: ‘T‑shaped people’. The idea is that team members should have a base capability in all the skills their team needs (so, able to write some code, document requirements, define tests and more depending on the team’s needs). This is represented by the horizontal beam of the ‘T’. The vertical bar represents deep knowledge in one area. So (keeping the example simple) a T‑shaped tester would have deep knowledge of testing combined


Dan has worked on many complex projects in his 20 years of test consultancy and is passionate about testing as a career. Having line-managed matrix teams of up to 80 consultants, Dan believes strongly in career development and how every day is a learning opportunity.





with a working knowledge of coding and business analysis. Most of our typical testers have about half of this capability set. The challenge is getting them up to speed, and how we address this depends on whether you are a team manager or not.

FROM THE TOP As a team manager, there are a couple of key strategies open to you, one longer term and another more immediate. In the longer term, you can change your recruitment criteria for growing the team. Look for a broader range of skills in your candidates; in particular, the desire to learn new skills and an interest in problem solving are probably good indicators. New tools and languages gain popularity in software development on a very regular basis, so simply having skills is a poor long‑term capability; what you need is the ability to gain new skills, and so have a team that remains relevant. You need to support this constant learning, and this is where you can help your existing team members, too. Create an atmosphere of continual learning and provide what resources you can to facilitate it. One of our participants worked for a company where everyone got a half day each week for self‑development – 10% of all their time! That’s a fantastic commitment by the company; however, most teams and organisations will need to work with less. The good thing is that there are lots of worthwhile resources available via the internet, and you can build up communities of expertise within your company, sharing experience. Invest in fundamental skills like programming and SQL, as well as more specific skills such as test automation. Again, flexibility is key to long‑term success. What to do with people who really, really don’t want to change? This is not unusual with people in general, and perhaps slightly more common amongst testers, as we tend to be naturally cautious as a breed. If you have team members who only want to do manual testing, for example, then consider re‑focusing them on exploratory testing, as this can be a very beneficial testing activity and it can’t be


done through automation. If people don’t want to learn new technologies or systems, then BAU or maintenance projects might still value that knowledge.
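The fundamental skills mentioned above, such as SQL, can be practised in very small steps. As a hedged illustration (the table and data below are invented, not from any real project), here is the kind of query a tester might write against a results database using Python’s built-in sqlite3 module:

```python
# Loading hypothetical test results into an in-memory SQLite database
# and asking which tests fail most often. Table and column names are
# illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (test_name TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO results VALUES (?, ?)",
    [("login", "pass"), ("checkout", "fail"),
     ("search", "pass"), ("checkout", "fail")],
)

# A GROUP BY query like this quickly surfaces flaky or broken
# areas of a regression suite.
rows = conn.execute(
    "SELECT test_name, COUNT(*) AS failures "
    "FROM results WHERE status = 'fail' "
    "GROUP BY test_name ORDER BY failures DESC"
).fetchall()
print(rows)  # → [('checkout', 2)]
```

A few queries like this, written against a team’s own defect or results data, are a gentle first rung on the technical-skills ladder.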

AT THE SHARP END The onus is on us, the testers, to become more flexible. This is hard! We need to increase our technical skills so that we can write simple pieces of code to drive DLLs or make SOAP requests. We need to increase our analysis and collaboration skills so we can better understand what business value a system needs to deliver. As a group, we tend to focus on the negative. We’re testers, it’s our job to find problems! That’s great; however, modern software development needs solutions, too. Make more use of creative solutions in your day job. Develop your skills by building simple tools like SQL queries or recording basic automated tests using a tool like Selenium. Move on to more complex things as your confidence grows. Experiment and gain an understanding of why we do things. This is a key skill in the modern delivery environment. In the highly structured projects of old you would know what to do because the plan would tell you what you were doing that day. You now need to be able to look at the software components in front of you and work out what you can usefully test, and how! This is a high‑order skill that takes time and either support or trial and error to achieve. Expect the experience to be a challenge, but rewarding when you get it right.
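As a concrete, hedged example of the “simple pieces of code” described above, the following Python sketch builds a SOAP request envelope using only the standard library. The service name, namespace and endpoint are invented for illustration; a real request would be POSTed to the service’s actual URL.

```python
# Building a SOAP 1.1 request envelope with the standard library.
# The "GetOrderStatus" operation and its namespace are hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/orders"  # hypothetical service namespace

ET.register_namespace("soap", SOAP_NS)
ET.register_namespace("ord", SVC_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
req = ET.SubElement(body, f"{{{SVC_NS}}}GetOrderStatus")
ET.SubElement(req, f"{{{SVC_NS}}}OrderId").text = "12345"

payload = ET.tostring(envelope, encoding="unicode")
print(payload)

# Sending it would look something like this (endpoint is an
# assumption, not a real service, so it is left commented out):
# import urllib.request
# request = urllib.request.Request(
#     "http://example.com/soap", data=payload.encode(),
#     headers={"Content-Type": "text/xml; charset=utf-8"})
# response = urllib.request.urlopen(request)
```

Twenty lines of code like this are often enough to exercise a service interface long before a user interface exists.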

WHERE WILL WE END UP? For too long, testers have been human drones, just repeating the tests put before them. This enforced drudgery makes our senior stakeholders struggle to see the value of what we do, as we plod along like dinosaurs in their eyes. As we evolve to thrive in the modern software development world we will get to do more varied and challenging tasks. We will use our creative skills to solve problems, and our experience of past issues to spot them before they impact the team. Our bosses will see our aerobatics as we, like birds, soar over the big picture and dive into the detail. They, and our peers, will better appreciate the value we are bringing to the team. It will be a tough journey, but it will be worth it.

by edgetesting

Want a no-fuss, flexible, low cost, on-demand service? The Digital Test Hub can provide you with an on-demand or scheduled, cost effective testing service from our UK service centres in England and Scotland. We work together to provide a testing services model that gives you what you need, when you need it; whether it is a whole project or specific areas such as Web Testing, Device Testing, Regression Testing, Usability Testing, Functional Testing, Test Automation or Performance Testing. Our ability to rapidly understand the needs of our clients and kick start our service offerings within one to two days from initial discussion has already enabled us to build up a strong client base for our Digital Test Hub service.

The Digital Test Hub provides organisations with an additional testing capacity and capability without the need for a long procurement cycle, financial retainers or protracted service commitments – when you want it and for as long you need it.

Benefits:
• Pay-as-you-go, low cost, on-demand solution
• Quick and easy to set up – have our team testing within hours or days
• UK-based service which does not make demands on your office space
• Dedicated Service Manager who will be a highly experienced testing professional
• Standalone service, or part of our Managed Testing Service providing an onsite and offsite capability
• Uses cloud-based technologies with no software to install
• Management System certified as complying with ISO 9001 and ISO 27001

What next? If you are interested in how the Digital Test Hub can help you meet your challenges, then get in touch for a no obligation discussion as to how we can build a solution for you. Phone 0121 647 4913 / 01698 464280 or email contact@digitaltesthub.co.uk www.digitaltesthub.com www.edgetesting.co.uk

siteload KICKSTART

website performance testing

The hidden benefits of outsourcing software testing

Bruce Zaayman, Software Testing Specialist, DVT UK, explains the key advantages of outsourcing software testing, in particular, to the tip of Southern Africa.




Outsourcing your software testing offers many benefits. These benefits are subtle yet introduce significant cost savings and productivity gains. Some of the notable takeaways from our conversations at the TEST Focus Groups centred on the many hidden benefits of outsourcing, not only for test automation and regression testing, but also for performance testing. Depth of skill is one of the hidden benefits of outsourcing, when done properly and with the right partners. It eliminates some, if not all, of the risks companies face every day of having their long‑earned intellectual property (IP) pack up and walk out the door. A key question raised during the session was how much control the client would have when a senior member of the outsource team decides to leave, taking months – or even years – of IP with them. The answer is straightforward: as a specialist outsource service provider, it’s our job to ensure that if any of our people leave, we can replace them without affecting the velocity of delivery. A good partnership will also match the client’s ambition, technical ability, enthusiasm and energy, and do so with a positive attitude.

AN ATTRACTIVE ALTERNATIVE Until now, when you think outsourcing from the UK – and specifically test automation and performance testing – you think India or Eastern Europe as the most likely destinations. Both undoubtedly have their strengths and track records, but those strengths may no longer be as relevant as they once were. For one, time zones – which have always been a factor for Indian outsourcing – are going to become increasingly vital as the pace of innovation accelerates in a newly competitive, inward‑looking market. Cost of business and scalability remain India’s trump cards, although the same can’t be said of Eastern Europe, where a weaker Pound is now at a distinct disadvantage to the Euro. Cape Town, as a destination, probably needs no introduction for most of our British colleagues. After all, as South Africans, we share a Commonwealth, a language, close family ties, and a culture that borders on identical. We have friends and colleagues travelling to and from the southernmost tip of Africa, if not

for business then for pleasure. There’s a reason why Brits vote for Cape Town as their preferred holiday destination year after year. But beaches, mountains, fabulous food and cheap beer are hardly the bedrock of a strategic, forward‑thinking outsourcing relationship, although they can’t hurt of course. What makes Cape Town an attractive proposition in its own right are the same things that made India and Eastern Europe attractive – only better.

BUSINESS MEANS BUSINESS South Africa is Africa’s business and technology hub. With the continent’s largest economy, the focus of VC investment in the region has seen tech companies grow and mature, and many of South Africa’s (and Africa’s) tech service hubs are now based in the country. Cape Town and Johannesburg are home to some of the continent’s largest banking, financial, retail and insurance headquarters, many of which we count as our clients. Since the launch of our Global Software Testing Centre (GTC), we’ve been building and refining a model of close‑sourced and outsourced test automation and performance testing for many of our clients in large commercial hubs like Johannesburg and London. Our time zone does not change, and our language of communication is English. That is a big plus when it comes to live testing environments, especially when you consider the foundation of our methodologies is agile. Another differentiator is the test framework we developed using open source software that eliminates license fees and makes the use of multiple machines for simultaneous testing a viable option for most of our clients. While we are technology agnostic, and will happily make use of our clients’ licensed test platforms, our focus is on finding the right tool for the job at hand, and finding solutions to the most complex of problems, no matter what. This makes us particularly attractive to tier two and three companies – the innovators and early adopters – especially on first introducing our services to a new region. A good example is UK parcel company Doddle, for whom we solved a particular problem in getting past a limitation of the delivery company’s Android scanning device for automated testing, and have been testing the company’s software ever since.



Bruce, a technical engineering specialist for DVT, is passionate about the company’s Global Testing Centre – the only one of its size in Cape Town – and innovative, custom software solutions. Formerly Practice Head of the Quality Assurance division at DVT, Bruce is now leading the way for DVT in London, introducing UK-based enterprise and fast-growing technology clients to the exceptional quality and potential of its South African-based products.






• IP retention.
• Talent acquisition and retention.
• Processes.
• Regulation.
• Peak and trough management as large projects come and go.
• Onboarding of new skills, technologies and techniques, for example automation of software testing activities.
• Cross‑jurisdictional benefits.
• Cost-effective and affordable.
• Immediate access to specialised knowledge and insight.
• Pricing.
• Timeline.
• Communication.
• Test smarter, not harder.
• Maturing levels of specialisation in PCI, SAP, CRM and POS.

A SERIOUS FOCUS ON AUTOMATION It is clear that organisations will need to invest in various avenues to tackle testing challenges, including cost‑effective outsourced partners and a definite focus on automation.


Automation testing products are reaching a better level of maturity. We are seeing for the first time in the last few years that these products really can do the job, which means that testing automation will start coming into its own in the next five years.

SUMMARY We learned valuable information from our discussions at the TEST Focus Groups sessions. Many of the software testing challenges discussed are ones we are able to address with the GTC. However, as offshoring or outsourcing software testing becomes the norm, communication still remains one of the biggest challenges. Very often requests are lost in translation, causing frustration and time delays. We strongly believe that outsourcing software testing to South Africa can address some of these challenges. We’d like to thank everyone involved in the discussions and we look forward to contributing to the next TEST Focus Groups!

The panacea for digitally transforming organisations

Vishalsinh Jhala, Product Architect, QMetry, discusses how testing teams can adopt agile testing to achieve digital transformation goals.





Practitioners and evangelists of agile testing recently came together for an invigorating discussion on agile testing’s impact on delivering quality faster and better at the TEST Focus Groups in London, UK. Mr Ashok Karania, VP – Europe Business, QMetry, moderated the discussion. We had the pleasure of chatting with expert practitioners, scrum masters, new agile converts and aficionados representing companies of all sizes. We all started to break down this topic to discover what lies beneath today’s agile testing realm. The consensus across the board was that agile testing is one of the key enablers of digital transformation. Digital transformation is increasingly driving organisations to transform their business processes, competencies and models with new digital technologies. Agility in delivering the digitisation of these components is crucial in achieving quicker outcomes. Thus, the development cycles to deliver the software that drives this digitisation are becoming increasingly fast, with shorter sprint cycles. This acceleration leads to ever-increasing demand on testing teams to achieve quality faster as well. Agile testing is the solution here. What is the one key element that enables agile testing organisations today? One interesting point that emerged in the discussion was about automation. The importance of automation is increasing in the agile world. In-sprint automation is a key goal for most practitioners, and scrum QA teams can take it on. The automation authoring is BDD/ATDD driven. Some have a separate automation team if their project demands it. It all boils down to sprint planning. Many teams are already at sprint-minus-one automation. Automation of acceptance tests is a key success factor for many organisations.

The fundamental point is to identify the reason to go agile by focusing on the right people, process and technology. What is the business case? What is the technology case? Be clear about the expectations and goals. The next stage is to identify agile champions and enthusiastic team members and get a strong cross-functional team of product owners, business users, developers and quality engineering team members. The team with the right attitude is going to make or break an agile project. It is always important to treat failure or success as a shared event rather than finger pointing at others. The role of the product owner cannot be over-emphasised in setting the right environment and empowerment. The reporting structure of the team needs to be understood and appreciated.

Agile means different things to different people, hence the most important task is to get the team or organisation’s definition of agile right. Let there be a common definition, and it should be consistent across the organisation. The definition of ‘done’ is crucial in determining success or failure: developers want to ship products quickly and QA wants to ensure that a quality product is shipped, but both should agree on the definition of ‘done’. The final step is the selection of the right tools. Agile has distributed teams, and a proper test management tool is required for creating a common view. Agile is about transparency, flexibility, collaboration and joint success. A good test management tool will provide the required visibility and a single honest view to all stakeholders. Rich reports and metrics will help continuous improvement. One important aspect is using predictive models based on historical executions and findings from past runs to improve; that is where analytics and actionable intelligence come into the picture.

So, what did I learn? What do we believe are the key success factors for agile implementation?
• Clarity of goals.
• Transparency.
• Collaboration.
• Right tools, processes and metrics.
• Flexibility.

Digital transformation is being enabled by the increased digitisation of business processes across organisations. This is also increasing the number of software applications developed and deployed. Various internal projects, as well as independent software vendors, are serving this increasing demand with new products. From a QA and testing perspective, all new software created adds more test cases, thousands of them. While automation is driving the ‘shift left’ of quality and faster release cycles, it is also adding many more test cases, automation scripts, multiple test runs



Vishal has 15+ years of experience in software product development in business domains such as quality engineering, security and access control systems, and supply chain management. He is passionate about technology and has experience working on a wide range of technologies deployed across various platforms. He holds a Master’s degree in Computer Science from Sardar Patel University.





[Figure: diagram of the QMetry Digital Quality Platform, showing Test Management, Test Automation and Actionable Intelligence, with components including Test Management using JIRA, Data Center, Explore, Document & Report, and Open Source.]
Figure 1. The QMetry Digital Quality Platform is designed for agile testing in organisations of all sizes. It follows a modular approach to taking a waterfall-based organisation agile, or helping an organisation that is already on the path to agile. The platform is an integrated, modular, enterprise-grade tool stack designed to enable QA and testing teams in this new digitally transformative world to deliver better quality, fast.

and subsequent result data. For thousands of test cases managed across multiple platforms and devices, the data generated is humongous, and churning out actionable intelligence from this data manually is humanly impossible. Here comes the next wave of innovation to the rescue: using machine learning and predictive analytics (big data driven) to bring predictability by unearthing previously unknown insights can drive efficiency and productivity to the next level.
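As a toy illustration of the predictive idea described above (not how any particular product works), even a simple failure-rate heuristic over historical pass/fail data can rank tests so the most failure-prone run first. All data below is invented:

```python
# Ranking tests by observed historical failure rate, a minimal
# stand-in for the machine-learning approaches the article describes.
from collections import Counter

# Hypothetical history: (test_name, passed?) for each past run.
history = [
    ("checkout", False), ("checkout", False), ("checkout", True),
    ("login", True), ("login", True),
    ("search", False), ("search", True), ("search", True),
]

runs = Counter(name for name, _ in history)
fails = Counter(name for name, ok in history if not ok)

# Most failure-prone first: run these earliest in the next cycle.
ranked = sorted(runs, key=lambda t: fails[t] / runs[t], reverse=True)
print(ranked)  # → ['checkout', 'search', 'login']
```

Real analytics engines add far more signal (code churn, coverage, defect clustering), but the principle of learning from past runs is the same.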

BUILDING BLOCKS OF AGILE TESTING The key building blocks of agile testing, as discussed during the roundtable sessions, can be summarised as:
• Effective test management across tools/platforms.
• Simple transition from manual to automated testing.
• Test automation authoring and execution.
• Actionable intelligence through predictive analytics and machine learning.

QMetry products are designed to help you create these building blocks.
1. QMetry Test Management delivers total manageability for all test projects.
2. QMetry Voyager delivers an automation authoring tool that helps manual test engineers author automation scripts with no coding required.
3. QMetry Automation Studio is an automation IDE based on QMetry Automation Framework, supporting BDD/KWD test authoring and unified test automation across multiple platforms such as web services, mobile apps and web apps.
4. QMetry Wisdom is a path-breaking analytics engine that delivers complete predictability and total visibility into your test projects. It uses machine learning


to learn from thousands of test assets and guide you on the best ways to increase productivity and reduce time to market. The end goal is to achieve better quality in initial sprints, thus reducing time to market and defect leakage, while adding visibility and increasing ROI considerably.

CONCLUSION Agile testing is a necessity today and digital transformation initiatives across organisations will proliferate the growth of agile testing even more. To benefit from it, a proper process, test automation and the right set of tools are a must. Predictive analytics and machine learning can offer actionable intelligence and help in providing predictable and better quality software much faster.

QA in a DevOps world


DevOps is an easy buzzword to consider applying to your software development process, but what does it mean and how does it impact your QA and testing process? asks Alan Richardson, Independent Consultant.




The idea of DevOps as a buzzword was the basic starting point for the ‘QA in DevOps’ discussions at the recent TEST Focus Groups event earlier this year.

DEFINING DEVOPS DevOps itself has multiple definitions. Some of the features people generally associate with the DevOps buzzword are listed below, but this is not an exhaustive list by any means.
• Faster delivery into production.
• Continuous deployment into production.
• Automated ‘everything’ (deployment, testing, monitoring, alerting, rollback, etc.).
• Cheaper environments (use of cloud infrastructure).
The above features and benefits are not exaggerated; they have been experienced by many teams and organisations adopting DevOps. But the change required in the organisation and development approach can be massive, and an organisation’s current start point may be far distant from the end point. People often want to adopt DevOps as a quick win, without realising that DevOps requires a culture change rather than process or tool adoption. Many of the participants in the focus group discussions had experienced DevOps, and all of the experiences were different, as were the processes that were described as DevOps. But we can find commonality that might help us understand what makes DevOps different. The name DevOps suggests a merging of development and operations. Software development includes analysis, requirements, programming and testing, but often misses out the operational deployment. DevOps, then, can be viewed as a culture where the software development process encompasses the entire scope of system development, from requirements through to operational deployment and support. Some companies interpret DevOps as a merging of developers (often just programmers) and operations staff. When this happens, the people in a developer role need to learn all the skills associated with deploying and supporting the system in production; similarly, the operations staff need to learn how to program release-quality code. In practice, this would mean a fairly lengthy period of training and inefficiency.
A more effective interpretation might view the merge as a vastly improved collaboration.

CHALLENGES AND COLLABORATIONS
Collaboration between the development team and the operations team is often a source of contention. We want to keep people away from production environments: because of data protection issues, because tinkering with production increases the risk of downtime, and for general security and stability. We might also have regulatory requirements, which we implement via strict control over environment access and permissions. All of this tends to push operations into a very separate and isolated position, with the negative side‑effect that, despite being a major stakeholder in the system, their system requirements are often the lowest priority. Operations teams have system requirements of their own, including: ease of upgrading systems; deploying systems without downtime; backups without downtime; monitoring system performance and functionality; and automated deployment to prevent mistakes. With a move to agile software development we very often see improvements in meeting the business requirements, but these improvements often still don't extend to the operational requirements. Adopting DevOps can help improve that situation.
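Several of the operational requirements just listed (monitoring functionality, automated deployment to prevent mistakes) are commonly met with a scripted post-deployment smoke check. A minimal sketch in Python; the endpoint names, status codes and timeout threshold are illustrative assumptions rather than anything described at the event:

```python
import time

def smoke_check(fetch_status, endpoints, timeout_s=5.0):
    """Run a post-deployment smoke check.

    fetch_status: callable taking an endpoint name and returning an
    HTTP-style status code (injected so the check is easy to test).
    Returns a dict mapping each endpoint to a pass/fail result.
    """
    results = {}
    for endpoint in endpoints:
        start = time.perf_counter()
        status = fetch_status(endpoint)
        elapsed = time.perf_counter() - start
        # Pass only if the endpoint is healthy and responded in time.
        results[endpoint] = (status == 200 and elapsed < timeout_s)
    return results

# Stubbed status fetcher standing in for real HTTP calls.
stub = {"/health": 200, "/login": 200, "/reports": 500}.get
print(smoke_check(stub, ["/health", "/login", "/reports"]))
# prints {'/health': True, '/login': True, '/reports': False}
```

A check like this, run automatically after every deployment, is one small way the operational requirement "deploy without mistakes" becomes part of the development pipeline rather than a manual afterthought.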


QUALITY AT EVERY STAGE
We often misuse the term QA in software development to mean testing rather than quality assurance. It is a misuse because quality assurance applies to every sub-process and task within software development, not just testing. When we explore the world of DevOps, QA does actually mean quality assurance, because DevOps brings together development and operations from the start point of requirements through to operation. Possibly for the first time, a team can have full requirement lifecycle responsibility; the term QA implies assuring quality at every stage in the process, from requirements gathering through to operational production support.

CHANGING ROLES
It transpired in the focus group sessions that people are worried testers may no longer be required when teams adopt DevOps. If we take the view that a team has full requirement lifecycle responsibility, then that worry would seem unfounded. But some companies do interpret DevOps as programmers and operations staff merging, relying on automated assertions to provide assurance; in this situation testers may find themselves at risk. Organisations may not realise that this particular DevOps interpretation makes the development process higher risk, and increases the likelihood that error‑infused scenario and data combinations are released to production because they were not tested to a professional standard. In this scenario the systems were tested, and the tests were coded so that they could be executed quickly and continuously on every system change, but the analysis behind the tests may have been superficial compliance with acceptance criteria, or limited to the TDD coding used to design the implementation code.

Given the prevalence of the view that testers are not required, although some form of testing is still required, those of us in a testing role should not be complacent. When teams work more closely together and collaborate, there is an expectation that we can add value to more processes. For example, a tester might not be a specialist in Unix, shell scripts and the deployment of environments and applications, but we will have to develop an understanding of that area, and basic skills in it, so that we can add value in collaboration and contribute to those processes. An adoption of a DevOps culture does mean overlap of abilities in the different roles, and testers will need to increase their technical understanding of the systems they work on, and their ability to interact with the system and support tools on a more technical level. Testers also need to be able to communicate the value that their approach to testing adds to the development process, and how it differs from an automated acceptance checking and TDD process, in order to help organisations develop a culture that ingrains both testing and QA into the full lifecycle.

Alan has over 20 years of professional IT experience: as a programmer, tester and test manager. He works as an independent consultant, helping companies improve their use of automation, agile, and exploratory technical testing. Alan posts his writing and training videos on the internet at www.compendiumdev.co.uk.

TEST Magazine | Syndicate Supplement | May 2017

DON’T GET LEFT BEHIND
During the forums we also discussed sources of learning about DevOps, and the main resources recommended were:
• "The DevOps Handbook", which describes the philosophy and practices behind a typical DevOps culture.
• "The Phoenix Project", a novelisation of a team adopting and adapting to a DevOps approach.
• "The DevOps Adoption Playbook", a more management-focussed book about adopting DevOps.
I do recommend reading the above books, as these are most likely to form the basis of a team's shared view of what DevOps might mean. I also recommend reading as many blogs and experience reports of DevOps as you can, because the implementation of DevOps is a very environment‑specific endeavour.

WITH DEVOPS IT’S ABOUT CULTURE
Sadly, when organisations move to implement DevOps they do so because of the benefits that they have heard arise, e.g., faster delivery, cheaper environments (use of cloud infrastructure), etc. I write "sadly" because DevOps requires a culture change of even closer collaboration than that required for agile development. It could potentially require deep architectural changes in the systems to achieve all the benefits envisioned. And when the desired benefits do not manifest quickly enough, because of the work it takes to change so much (the culture, the development approach, the systems, the monitoring, the environments), organisations often jump to the next, newer buzzword in the hope of quick wins.

There is a lot to learn from the DevOps community, regardless of your current approach to software development, but we should be wary of announcing a strategic shift to DevOps and expecting to instantly have a cutting‑edge software development process. DevOps requires a culture change, and ultimately means building systems such that we can quickly deliver on requirements and monitor them in production to ensure they give the value we expected when delivered. DevOps is a full lifecycle approach that can take years to develop. Organisations may find value in adopting some of the tools and tactics of DevOps teams, e.g., more automated assertion of acceptance criteria, continuous integration, increased monitoring of live systems, and closer collaboration of the different disciplines in the development process.

Some organisations will describe their adoption of DevOps in terms of the buzzwords and techniques rather than the DevOps philosophy of experimenting to build software effectively and efficiently. That philosophy of responsibility and incremental improvement through experimentation can be adopted by any team, in any discipline, at any time to gradually improve your process. You do not need to adopt the buzzword of DevOps to learn from practitioners sharing their experiences under that banner. I hope you do read some of the books mentioned in this article, find the time to carry out some research, and network with colleagues from different industry sectors to learn their approaches and try to apply them to your existing processes.
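To illustrate one of the tactics mentioned earlier, automated assertion of acceptance criteria: a criterion such as "a registered user can log in" can be encoded as assertions that run on every system change. The function and data here are hypothetical stand-ins for a real system under test, not anything from the discussions:

```python
def login(users, username, password):
    """Stand-in for the system under test: return a session token
    if the credentials match a registered user, else None."""
    if users.get(username) == password:
        return f"session-{username}"
    return None

# Automated acceptance checks, cheap enough to run continuously:
registered = {"admin": "secret"}
assert login(registered, "admin", "secret") is not None   # registered user can log in
assert login(registered, "admin", "wrong") is None        # wrong password rejected
assert login(registered, "intruder", "guess") is None     # unknown user rejected
print("acceptance checks passed")
```

As argued above, such checks assert only what their authors thought to encode; they complement, rather than replace, a tester's analysis of the scenarios and data combinations that matter.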

Further reading
1. The DevOps Handbook by Gene Kim, Jez Humble, Patrick Debois, and John Willis: http://itrevolution.com/DevOps-handbook
2. The Phoenix Project by Gene Kim: https://www.amazon.co.uk/Phoenix-Project-DevOps-Helping-Business-ebook/dp/B00AZRBLHO
3. The DevOps Adoption Playbook by Sanjeev Sharma: https://sdarchitect.blog/coming-soon-DevOps-adoption-playbook/




Performance testing in 2017


Kashif Saleem, Director QA, Hotels.com, shares the outcomes from the performance testing roundtable sessions.





How do you speed up performance testing? And how can you get your stakeholders to understand and value this ‘dark art’? Those were key questions that came up during the performance testing roundtable at the TEST Focus Groups event.

MOVING FROM THE TRADITIONAL INTO THE FUTURE
For most testing managers around the table, performance testing is typically carried out traditionally, as part of a waterfall system, held back by risk-averse stakeholders and a dependency on user stories and non-functional requirement documents. Yet it is possible to successfully incorporate performance testing into continuous delivery, on a continuous integration server. Managers from the tightly regulated finance sector expressed disbelief, insisting that such fast delivery would never be allowed. It was pointed out by others that the high-risk pharmaceutical industry, where a mistake could result in life or death, is moving towards more agile ways of working and seeing increased, higher-quality delivery, including in all performance areas.

UNDERSTANDING THE ADDED VALUE
Performance testing is always a value add; the roundtable executives were all in agreement. Unless you’ve got a very specific business-to-business application that has no customer-facing components, you will need to keep track of where your audience/customer is and what they are experiencing. It doesn’t make a difference if you’ve got millions of customers or a few hundred – performance testing is about looking at the efficiency of your stack, and opening up a window into your throughput.

It is important to look at performance on a deeper layer, rather than just looking at the number of users. Intelligent performance testing means looking at a multitude of metrics together to understand your system’s behaviour. For example, if you look at transactions per second this helps you gain an understanding of your system’s throughput. Then, once you have this baseline, you can structure tests around this value. A solid understanding of the system’s baseline will help manage expectations of stakeholders in terms of how to measure performance, as well as targets.
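The baseline-setting described above can be sketched in code. Given timestamped transaction completions from a test run or production log (the input format here is an assumption), the mean throughput falls out directly:

```python
def transactions_per_second(timestamps):
    """Compute mean throughput from transaction completion times.

    timestamps: sorted list of completion times in seconds.
    Returns transactions per second over the observed window,
    or 0.0 if the window is empty or degenerate.
    """
    if len(timestamps) < 2:
        return 0.0
    window = timestamps[-1] - timestamps[0]
    return (len(timestamps) / window) if window > 0 else 0.0

# 11 transactions completing over a 5-second window -> 2.2 tps baseline
sample = [i * 0.5 for i in range(11)]   # 0.0, 0.5, ..., 5.0
print(transactions_per_second(sample))  # prints 2.2
```

Once a figure like this is established, subsequent test runs can be compared against it, and stakeholder conversations can be framed around deviation from the baseline rather than raw user counts.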

MANAGING STAKEHOLDER EXPECTATIONS
A key reason why stakeholders at times place little value in performance testing, or see it as a ‘dark art’, is that it is unlike other forms of testing: there is no right or wrong answer. It is never about “does this work?” but “how well does this work?” There is a degree of flexibility in terms of the results. A lot of testing managers at the roundtable revealed that their performance testing is dictated by non-functional requirement documents (written by stakeholders such as BAs) or user stories. This shouldn’t have to be the case. It can be argued that the testing and QA function should take full ownership of performance: establish your baseline, then come to the business with recommendations for what the acceptable parameters can be, and how best to achieve them. Show stakeholders what reports will look like, and ask them what other information they might want to see. It then becomes easier to communicate with the stakeholders, and to work together on the ROI of performance. What is the effort required to make something faster, in terms of costs, tools and training? And ultimately, is this investment worth it, or is the current performance acceptable to the business?



Kash started his QA career straight out of university in 2001 and has never looked back! He has worked in the defence, medical, education, psychology and travel industries for start-ups and multi-national corporations, covering all aspects of QA. Working for Hotels.com, part of the world’s biggest travel company, Expedia Inc., he is responsible for team management, functional/non-functional testing processes and strategies. He still tries to find time to write performance scripts and review automation code!








The roundtable sessions also covered performance testing tools, outsourcing and limited manual testing options. Large vendor options, such as HP’s Performance Centre, were highlighted as the most popular. It was pointed out during the discussions that the vendor landscape has changed a lot during the last five years, with most enterprise options now supporting and allowing other tools to be incorporated. HP Performance Centre, for example, works with Jenkins, TeamCity, and other continuous integration tools. There are plenty of open source options on the market, such as Gatling and JMeter, and a few senior testing managers expressed interest in and experience with these. While the main upside is the (lack of) cost, with open source tools you need to invest a lot of time to generate reporting, and to make sure your team fully grasps the tool. One test manager brought up the role of manual testing in performance – whilst it is impossible to manually generate the number of tests that you would need for load testing, there is value to be found in a simple stopwatch test, to gauge page responsiveness/page load times. Subsequent tests can then be run from this baseline.
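The stopwatch test mentioned above is easy to script as a first, rough baseline. In this sketch, `load_page` is a stub standing in for a real page fetch; no browser or network call is made, and the names are illustrative:

```python
import time

def stopwatch(action):
    """Time a single action and return the elapsed seconds."""
    start = time.perf_counter()
    action()
    return time.perf_counter() - start

def load_page():
    # Stub: simulate a page that takes roughly 50 ms to load.
    time.sleep(0.05)

elapsed = stopwatch(load_page)
print(f"page loaded in {elapsed:.3f}s")
```

Recording the first measurement as a baseline lets later runs (after a release, say) be compared against it to spot responsiveness regressions before investing in a full load-testing tool.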

The roundtable discussion covered mobile testing briefly, although everyone agreed this is a huge complexity in itself. You have to think about the performance of your application on 2G versus 3G, versus 4G or 5G. Network providers typically carry out their own performance testing; what information will you be able to access to help run your own tests?


SUMMARY
Sometimes overlooked by stakeholders, the value of performance testing is clear to the senior testing managers and directors gathered at the TEST Focus Groups. In this digital, speed-to-market landscape, it is vital to understand your services’ performance, and where your customers are, in order to compete. By fully owning performance, establishing baselines and recommending benchmarks, QA will be able to manage stakeholder expectations and add value to the business. The potential of performance is in your hands.



Mobilising success


Mobile is fast becoming a key focus area for businesses large and small. A decade after the birth of the iPhone, consumers are used to easy access to products and services at their fingertips. A clear‑cut mobile strategy, including testing and QA, is no longer ‘nice to have’, but a necessity, Cecilia Rehn, Editor, TEST Magazine, reports.




The recent TEST Focus Groups event saw senior testing and QA professionals from a variety of different sectors coming together to share thoughts on mobile testing strategies and the challenges and obstacles in their way. Most roundtable participants revealed a common thread of not spending enough time and effort on mobile testing, despite business stakeholders recognising its importance. Two testing managers from an insurance firm revealed that over half of their customer enquiries originate from mobile, but this trend is not reflected in their testing efforts at all. Other senior managers from financial firms and large enterprises said this pattern was reflective of their organisations too.

A MOUNTAIN OF DEVICES
The roundtable discussions kicked off with comments around investment decisions. QA and testing departments regularly face issues with resource management, and mobile testing introduces its own complex challenges when it comes to spending wisely. How do you strategically cover a diverse spread of devices and operating systems in testing? Is it possible to maintain an internal device farm? A few professionals mentioned keeping a few different Android and Apple devices in‑house, but they knew this was insufficient to fulfil larger test coverage requirements. The debate continued around the benefits of renting device farms – such as AWS or Perfecto – where you can rapidly access multiple devices and run tests across different operating systems as well. Others felt that crowd testing is a more effective investment, as you can get real‑life feedback from users in different geographies, with different devices, and different, real, network connections. It was mentioned that crowd testing can be particularly useful for user experience testing and more exploratory testing, but device farms offer a cost‑effective means to carry out functional testing on a majority of devices and operating systems.



Much discussion was centred around the idea of how much automation can, or should, be included in a mobile testing strategy. Automation costs are a challenge for many, and the roundtable participants highlighted that unless mobile represents a significant investment, and the development cycle is long or the scale of regression testing is high, test automation may not be viable. Appium, the open source test automation framework for iOS, Android and Windows apps, was recommended by many, although the difficulties of running test automation for iOS were highlighted. Appium users lost time when new iOS upgrades came through, as it takes a while for the open source tool to be updated and made compatible. It seems unlikely that Apple will change its MO anytime soon and allow for more integrated automation tools; instead, organisations will need to invest in staff training on Apple’s in‑house tools. One SDET working for a global ecommerce company posited that if mobile is a key area for a firm, then it should invest in developer talent to write automation scripts and carry out this level of testing, freeing up the QA team to focus on more exploratory testing. Manual testing, the roundtable participants concurred, is always going to have a place in mobile because there are conditions that cannot be reproduced or scripted with emulators. Likewise, it is impossible to predict chance and human‑error conditions that could affect app performance.

FINDING YOUR CUSTOMERS
The roundtable discussions also considered user analytics as an important first step ahead of selecting devices and operating systems to test on. One Head of QA pointed out that it’s important not to test only on the preferred devices and operating systems that current customers have, as this could potentially limit future growth. It’s important to keep an eye on wider industry trends as well, to ensure future catch‑up doesn’t become a bottleneck. Google Analytics was highlighted as a useful source for device trends in different markets.

The roundtable attendees also discussed other key factors that need to be considered in a mobile testing strategy, for example network strength and variation. How will your app perform on 2G versus 4G? What happens when a user is on the move, and the network drops in and out? These are all questions that need to be asked as early as possible, in order for stakeholders to determine what performance attributes are ‘good enough’ as opposed to ‘critical’. Others brought up the importance of analysing an app’s battery and mobile data consumption, as these can have a massive impact on user experience and performance. If the app drains a battery or ends up costly in terms of mobile data allowance, it is unlikely that fickle users will remain loyal. Mobile applications that don’t live up to users’ expectations can damage company brands and positioning. One senior testing manager from a financial organisation shared how they are looking into different accessibility testing options for mobile, including making use of robotic arms to test app touch responsiveness. The various performance aspects that need to be considered also embody the exciting possibilities of working in mobile. One Senior QA Lead from a well‑known hospitality chain passionately shared the experience of working on a new restaurant which incorporates portable tablet devices handed out to customers, mostly families with young children, for ordering, and the new, innovative testing challenges that needed to be taken into account. These included strategies for how to transfer orders from one device to another should batteries run out, as well as anticipating extremely rough treatment from the youngest restaurant patrons.

Cecilia Rehn is an experienced B2B Editor who took the helm at TEST Magazine and its affiliated products in 2015. Cecilia enjoys cultivating relationships with readers, delegates and sponsors alike, and endeavours to educate, inform and promote the software testing/QA and DevOps industries.
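One practical use of the analytics data discussed earlier is choosing the smallest set of devices that covers a target share of users. A sketch of that selection, with invented usage figures rather than real market data:

```python
def devices_for_coverage(usage_share, target=0.9):
    """Pick devices by descending usage share until the target is covered.

    usage_share: dict mapping device name -> fraction of users.
    Returns the chosen device names in order of importance.
    """
    chosen, covered = [], 0.0
    for device, share in sorted(usage_share.items(), key=lambda kv: -kv[1]):
        if covered >= target:
            break
        chosen.append(device)
        covered += share
    return chosen

# Illustrative (invented) usage shares, not real analytics data.
share = {"Galaxy S8": 0.35, "iPhone 7": 0.30, "Pixel": 0.15,
         "iPhone 6": 0.12, "Moto G": 0.08}
print(devices_for_coverage(share, target=0.9))
# prints ['Galaxy S8', 'iPhone 7', 'Pixel', 'iPhone 6']
```

Fed with real figures from a source such as Google Analytics, a calculation like this makes the device-lab versus device-farm trade-off concrete: it shows exactly how many devices a given coverage target demands.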

SUMMARY
There are many aspects to consider when developing a mobile testing strategy. It is vital that the testing and QA department gets involved early on in conversations with stakeholders, managing expectations and helping to define coverage. Questions that need to be asked include: what mobile devices and operating systems do our customers currently use, and how fast will this change? Is a mobile device lab sufficient for our needs, or is it a good idea to include crowdsourced testing as well? Is the mobile project large enough to warrant automation investment? And what additional performance aspects unique to mobile, such as network, need to be tested? One thing was clear: the roundtable participants said that working in mobile was one of the best parts of their job, allowing them to stretch their imagination and further develop their skills. “I wouldn’t want to work at a company that wasn’t investing in mobile – it truly is the future,” one QA Manager said.

