













Software industry news ................................... 5

THOUGHT LEADERSHIP
The ‘fashion’ of quality assurance ................. 10
Test management in the age of smartwatches ... 12

TEST PROCESS ASSESSMENT
Climbing the maturity ladder ......................... 14

PREVIEW
Software Testing Conference NORTH ........... 20

TRANSPORTATION SECTOR
Smart equals safe ......................................... 22
The road to innovation .................................. 26

APPLICATION TESTING
Securing mobile testing ................................. 28
From novelty to necessity? ............................ 32

TEST AUTOMATION
Automation frameworks – buy or build? ........ 34

SMART CITIES
Testing the smart city ................................... 40

RECRUITMENT
Our industry needs specialist test recruiters ... 44

MEASUREMENT & METRICS
Measured and verified ................................... 48

SPECIAL
20 Leading Software Testing Providers .......... 51

T E S T M a g a z i n e | S e p t e m b e r 2 01 6

Validata Test Data Generator


Is your test data “fit for purpose”? Test Data Generator™ is an advanced business intelligence tool which delivers the right data, in the right place, at the right time, reducing project delays through efficient provisioning of quality data.
• “Shift testing left” with faster time to market.
• Reduce the time and resources required to provision “fit for purpose” data by 50%.
• Automatically generate richer sets of test data and reduce maintenance costs and execution times.
• Create system data with the perfect fit to your test cases using data generation functions, default values, etc.
• Reduce the number of test cases and achieve high risk coverage.
• Test data presented in a way that business testers can read.



Smarter, Simpler, Easy to use.

Visualise your QA and DevOps for better and faster decision making. Validata360° is the first cloud‑based QA and DevOps analytics solution for cross‑channel coverage (web, mobile, desktop), designed to make a difference by bringing new thinking to continuous testing and analytics.
• Enables end‑to‑end visibility and control over the application lifecycle to keep track of costs, progress and quality.
• Provides compelling visualisations and actionable dashboards to stimulate collaboration across teams.
• Available anywhere, anytime (web, mobile, desktop).
• Integrates with all the tools you use and brings all data into a real‑time platform, providing “a single version of the truth”.
• Leverages predictive analytics to foresee potential risks.

Take your reporting to new heights!

New Thinking | Instant Answers

For more information call +44 020 7698 2731 or email:

EDITOR'S





Career development has been on my mind lately – probably because we just released the findings from our second European Software Testing Benchmark Report on Test Management, and because we've got an interesting piece on recruitment on p. 44 in this issue.

Every time I meet someone new in this industry, I try to find out what his or her route into testing/QA, or perhaps even IT, was. Almost always, I'm greeted with a different answer. If you'll allow me a personal anecdote, people quite often wonder if my background is in testing. It's not – I've always been writing and editing – and my first paying job was writing about the global cement industry. So I can completely sympathise when people tell me their career path to date has been unorthodox, that they simply ‘fell into’ testing or QA. I've thoroughly enjoyed the transition from limestone concentrations, via a stint in oil & gas, to diving deep into test automation and agile development.

The results from the Benchmark Report show that testing/QA staff are recruited quite evenly from numerous different avenues, including internal transfer, recruitment companies (both specialist and general), and directly via companies' own channels. This certainly echoes conversations I've had with many test managers, who say their teams are partly made up of staff poached from other departments and a mix of people with different backgrounds who've found their way there – all supplemented, usually, by excellent contract hires and outsourced personnel.

I've written before in this column about the need for the testing and QA industry to get vocal about sourcing future testers, and to engage in discussions around what skills, personalities and experiences are to make up the new workforce. It falls upon managers not only to help mentor their own staff, but also to promote and help grow awareness of our role amongst stakeholders.

Another key set of findings from our Benchmark Report highlighted that most managers (43% of respondents) permit around 3‑5 days per year for training, followed by those who answered 6‑10 days (28%). The survey also showed that this allocated training time is spent mostly on technical skills, professional development and soft skills. Perhaps this is no surprise, but I think it is important to showcase the numbers, since for every company that invests multiple days into staff training, there are some that simply won't. Stats like these can help empower managers to ask for more budget and time for the development of staff.

I hope you'll enjoy this issue of TEST Magazine. Please read on for coverage of different topics, including thought leadership pieces on the endurance of QA and real‑time test metrics; software innovations in the transportation sector; a thorough introduction to TMMi; test automation; and much more. The annual 20 Leading Testing Providers supplement begins on p. 51, and is our definitive guide to testing companies, their services and solutions.

On a final note, soon after this issue is out in print, we'll be announcing all the finalists in the 2016 European Software Testing Awards. It's been a record‑breaking year, with more entries than ever before. Alongside more project‑based categories, the awards look to celebrate the best testing teams; testing management teams; testing managers; and graduate testers. Having read these entries and worked closely with the judging panel, I can say that there is an impressive display of management, team collaboration and young promise out there. This makes me hopeful for the future!

SEPTEMBER 2016 | VOLUME 8 | ISSUE 4
© 2016 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor of TEST Magazine or its publisher, 31 Media Limited. ISSN 2040‑01‑60
GENERAL MANAGER AND EDITOR Cecilia Rehn +44 (0)203 056 4599
ADVERTISING ENQUIRIES Anna Chubb +44 (0)203 668 6945
PRODUCTION & DESIGN JJ Jordan
EDITORIAL INTERN Jordan Platt
31 Media Ltd, 41‑42 Daisy Business Park, 19‑35 Sylvan Grove, London, SE15 1PD +44 (0)870 863 6930
PRINTED BY Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA
softwaretestingnews | @testmagazine | TEST Magazine Group


Remote device access for secure and reliable mobile testing, made possible.

[Diagram: Mobile Device Cloud by Mobile Labs, accessed remotely by testers in Chicago and Australia offices]

For more information visit







TEST AUTOMATION TRENDS REVEALED IN SURVEY
The results from the first European Software Testing Benchmark Report 2016 survey on test automation are in, highlighting that automation is not yet as commonplace as firms want; there are still numerous challenges hindering some; and sourcing test automation specialists needs to start in‑house. TEST Magazine, in partnership with its online news portal, SoftwareTestingNews, carried out the survey research and has now published the findings. Key findings from the automation report include:
• Only 5% of survey respondents said they currently carry out 0:100 manual:automation testing. The majority (66%) are either at 75:25 or 50:50, and 9% said they are only doing manual testing.
• When asked what they'd like to achieve in the next five years, the majority (73%) said they'd like to see 50:50 to 25:75 manual:automation testing. 14% said they'd like to have no manual testing at all.



• The majority of respondents reported seeing a return on their test automation investment immediately (24%) or within the first 6 months (24%). The remainder saw a return within 6‑12 months (28%) or after one year (15%). Only 9% reported never getting an ROI.
• The majority of respondents (94%) said they use test execution tools and automation to support testing efforts. Other popular answers included generation of test data (57%) and deployment of environments (49%).
• The survey respondents reported that the primary users of their testing tools are test automation specialists (55%) and testers (27%). Developers make up just 6%. Other responses included offshore contractors, software engineers, and product owners.
• When asked what automation currently exists in organisations, there was a wide variety of responses. UI (82%) is the most popular; APIs (62%), cross browser/platform (55%), test data/environment (55%), performance (53%) and integration (47%) are all very even. Interestingly, automation in the ecosystem scored the lowest at just 6%.
The full survey results can be found on







WHITE HOUSE SOFTWARE CODE SHARING POLICY GAINS TRACTION
The White House has released its Federal Source Code policy to tackle the issue of agencies procuring custom‑developed source code without necessarily making that new code widely available for reuse across the federal government. Under the new policy, departments of the US government will share custom code freely with the rest of the federal government, and some will be released to the general US public as open source software (OSS). However, source code developed by National Security Systems will be exempt from the new policy and will continue to follow internal regulations and policies. Having government‑developed source code released as OSS could significantly help improve the code, as it opens the floor to software peer review and security testing, sharing of technical know‑how and reuse of code.



SINGAPORE DEBUTS SELF‑DRIVING TAXIS
Select members of the public in Singapore are now able to use their smartphones to catch free rides in taxis operated by nuTonomy, an autonomous vehicle software start‑up. The company says it is the first to offer rides to the general public in self‑driving cars. nuTonomy managed to beat Uber, which plans to offer rides in autonomous vehicles in Pittsburgh in the coming weeks. The Singaporean service is starting relatively small, with only six vehicles available, but will have up to a dozen by the end of the year.

SEPTEMBER WILL SEE APPLE'S FIRST EVER BUG BOUNTY PROGRAMME
Ivan Krstic, Head of Security Engineering and Architecture at Apple, has announced that come September, the company will offer cash bounties to hackers and researchers who find and report bugs and security issues in its products. Apple is offering one of the largest single payouts to bug bounty hunters – up to US$200,000 per bug. For the moment, however, the company is only opening up the programme to “a couple dozen select researchers.”



Officials say that the goal is to have a fully autonomous fleet of taxis in Singapore by 2018 in order to help cut down congestion on the city's roads. Doug Parker, nuTonomy’s Chief Operating Officer, said that autonomous taxis could ultimately reduce the number of cars on Singapore’s roads from 900,000 to 300,000. “When you are able to take that many cars off the road, it creates a lot of possibilities. You can create smaller roads, you can create much smaller car parks,” Parker said. “I think it will change how people interact with the city going forward.”

It is assumed that Apple doesn't want to be involved in costly bidding wars against governments and criminal organisations, which is one reason behind the company selecting only a handful of researchers to take part in its bug bounty programme. The company will also not have to filter through “low quality, poorly validated bugs”, saving the obvious engineering time and expense that would require. The researchers Apple has selected have reportedly identified bugs for the company before, but have not been compensated. The programme will eventually be “slowly expanded” to include more researchers who provide useful disclosures.


GERMANY COULD INTRODUCE FACIAL RECOGNITION TO STOP ATTACKS Following two terrorist attacks that took place in Germany last month, Interior Minister, Thomas de Maiziere, has stated that he wants to introduce facial recognition technology in train stations and airports in order to halt further hostilities. The Minister said that software can detect whether or not a person in a photo is a celebrity or politician. He went on to say, “I would like to use this kind of facial recognition technology in video cameras at airports and train stations. Then, if a suspect appears and is recognised, it will show up in the system.” He said a similar system was already being tested for unattended luggage, where cameras report suspicious packages after being left unattended for a certain number of minutes. The technology would entail enormous costs, but could significantly increase safety in areas that need it most.

Mobile Apps Testing


Software Testing Company

Don't give your users any reason to complain.

Entrust your mobile solution to the A1QA mobile testing team.
• Reduced time to market
• Stable and secure product with bug‑free functionality
• Superb user experience
• Positive Google Play and App Store ratings
• Improved retention rates

400+ engineers to choose from
12+ years in SQA business
1400+ tested products
100% knowledge retention





LOCKHEED MARTIN LANDS US$101 MILLION F‑35 CONTRACT
A US Navy contract worth US$101 million has been awarded to Lockheed Martin, which will be completing laboratory testing for F‑35 software data loads. Lockheed Martin will produce F‑35 software data loads for laboratory testing,

UK UNIVERSITIES AND NHS TRUSTS HIT HARD BY RANSOMWARE ATTACKS According to Freedom of Information requests carried out by two cybersecurity firms, last year saw multiple ransomware attacks on universities and NHS trusts in England. Over 20 NHS Trusts said they had been affected, and Bournemouth University reported being hit 21 times in the last 12 months. When contacted by cybersecurity firm SentinelOne, 23 out of 58 UK universities said they had been attacked by ransomware in the last year. Whilst the largest reported


as well as planning for verification and validation (V&V) testing, providing technical support for the testing, and designing, building and delivering V&V modification kits and mission data file generation tools for Foreign Military Sales customers. The work will be conducted in Fort Worth, Texas; Orlando, Florida; Nashua, New Hampshire; El Segundo, California; and San Diego, California, and is expected to reach completion by December 2018.

ransom sum demanded was five bitcoins (£2150), no university said it had paid out anything. Only one university had contacted the police. Worryingly, according to SentinelOne, two of the academic institutions said they did not use anti‑virus software. Bournemouth University confirmed the attacks but said: “It is not uncommon for universities to be the target of cybersecurity attacks; there are security processes in place at Bournemouth University to deal with these types of incident.”

Ford has stated that it will mass‑produce a fully autonomous self‑driving car by 2021. The statement was made by the company's President, Mark Fields, in Palo Alto, California, where he also announced that the company would double its investment in its research centre in the city, along with making further investments within the autonomous industry, especially in technology companies. The firm claims the car will be in use by its customers as early as 2021, and, since the self‑driving car will be without a steering wheel, the manufacturer has stated that this will most likely take the form of an Uber‑like driving service. “As you can imagine, the experience inside a vehicle where you don't have to take control changes everything,” said Mr Fields, in an interview with the BBC. “Whether you want to do work, whether you want entertainment… those are the types of things we are thinking about as we design the experience for this type of autonomous vehicle.” “There will be a growing percent of the industry that will be fully autonomous vehicles,” Mr Fields said. “Our goal is not only to be an auto company, but an auto and mobility company.”

Conducting separate research, security firm NCC Group asked every NHS Trust in England whether it had been a victim of ransomware. Of the 60 responses, 28 confirmed they had experienced an attack, 31 declined to comment on the grounds of patient confidentiality, and only one respondent said it had not been a victim. “Paying the ransom – which isn't something we would advise – can cost significant sums of money, yet losing patient data would be a nightmare scenario for an NHS Trust,” said Ollie Whitehouse, Technical Director at NCC Group.







UK GAMES STUDIOS PREFER MOBILE PLATFORMS
New data has been released showing that while the largest proportion of UK studios made games for mobile, studios focused on console and PC games employed the most development staff in the UK. According to the report from industry trade association TIGA, in the year ending March 2016, 46% of UK studios were primarily focused on mobile platforms, but they employed just 20% of development staff.

  read more online

RIO MARKS FIRST TIME CLOUD TECHNOLOGY IS USED AT A SUMMER OLYMPIC GAMES
Rio 2016 staked its place in the history books as the Olympic Games with more digital coverage than any previous edition. From archery to golf to rugby to wrestling, distributing the results of every single event at the Rio 2016 Olympic Games to the world in less than half a second is a technological feat years in the making.

  read more online

STARWEST
Date: 2‑7 October 2016
Where: Anaheim, CA, United States
★★★

DEVOPS FOCUS GROUPS
Date: 18 October 2016
Where: London, UK
RECOMMENDED


QA&TEST 2016
Date: 19‑21 October 2016
Where: Bilbao, Spain
★★★

EUROPEAN SOFTWARE TESTING SUMMIT

GOOGLE'S UX TESTING IN VR
Developers at Daydream Labs, Google's mobile virtual reality platform, are testing different user ‘social’ experiences in VR. On the company's blog, Google VR, UX Designer Robbie Tilton explains how the company, known for its previous ‘don't be evil’ motto, is nudging people towards positive social experiences in the VR space.

  read more online

DEVOPS SALARY RANGES REVEALED IN NEW SURVEY
This year, more than 4600 technology professionals from around the world took the fifth annual State of DevOps survey. The report shows once again that DevOps practices such as continuous delivery and automated testing contribute to both IT team performance and an organisation's overall productivity and performance.

  read more online

Date: 16 November 2016
Where: London, UK
RECOMMENDED


THE EUROPEAN SOFTWARE TESTING AWARDS
Date: 16 November 2016
Where: London, UK
RECOMMENDED


THE ‘FASHION’ OF QUALITY ASSURANCE Jayashree Natarajan, Global Head, Assurance Services, Tata Consultancy Services (TCS), reviews the importance and endurance of quality assurance (QA) and testing in IT.


In the software world, we have seen lifecycle models evolving from waterfall to iterative to shift left to agile. We have seen QA implementation move from embedded, to independent validation and verification, to the centralised Test Centre of Excellence (TCoE), to testing as a service (test factory) and, most recently, to DevOps and CI/CD.

It is interesting to see the evolution of the ‘fashion’ in QA. I use this term for two reasons: a) quality is something that will never go out of fashion; it is enduring, a necessity; and b) just like fashion trends have a revival every few years, business models in QA too seem to have come full circle, but with a difference. As Coco Chanel once famously said: “Fashion changes, but style endures.” So while there are numerous changes in the way QA implementation and business models have evolved over the years, a few fundamental principles remain the same.

So what are these basics that remain unchanged? An unbiased and synergistic approach to QA has always been the guiding principle of mature quality models, irrespective of the changing ‘fashion’. The basic ‘style’ endures, and it is important to keep this unbiased view by increasing synergies across the lifecycle. This is, in fact, becoming more and more relevant and throwing open a lot of opportunities for the testing function to innovate and adapt.





Tolerance to errors and bugs used to vary among industries, and debugging was the mode of making software defect free. As business models changed and IT organisations matured, testing/quality assurance (QA) came into prominence as an engineering stream, and robust systems became the USP for most organisations across industries. In this context, quality models have also evolved from the debugging mode, to development‑led testing, to the current independent testing model, which brings an unbiased view to quality. While this unbiased quality focus has not lost its sheen – because of zero‑fault‑tolerant market conditions – the independence of QA in the lifecycle model is changing to an inclusive QA model.

AN INCLUSIVE QA MODEL
Why inclusive QA? As IT moves up the value chain, Cost‑Quality‑Time (C‑Q‑T) is becoming the redefining set of parameters. So how are IT and testing adapting to the newer definitions of C‑Q‑T? To understand this, it is important to look at the key C‑Q‑T expectations: IT has to operate as a value centre and not a cost centre; high quality is non‑negotiable; and business needs must be met in the shortest possible time.

The software engineering community is working towards various changes in the IT discipline to meet the C‑Q‑T expectations. One of the key changes is collaboration within the software lifecycle model to create frequent and value‑generating user engagement points (moving away from a cost centre view). An inclusive QA model with a tightly coupled check on quality enables these frequent, value‑generating user engagement points.

An inclusive QA model should include newer ways of engaging testing in the software lifecycle model. In the journey towards unbiased testing, we saw the evolution of independent QA, but in the current scenario of inclusive QA, there is a need for quality to be tightly integrated without losing the unbiased view. The current trend is that quality, depending on the nature of the project, is sometimes fully inclusive and sometimes partially inclusive. The degree of inclusiveness depends mainly on the type and level of testing and the technology involved. For example, digital channel programmes are the most common example of the fully inclusive QA model. When QA is fully inclusive, the challenge is to do meaningful QA at a micro level, and this requires automation, virtualisation, etc.

At the same time there is a need for engaging testing in a partially inclusive on demand model (as a service/test factory) especially for certain non‑functional testing types such as security testing. These non‑functional testing phases are no longer optional phases but are mandatory due to the all‑pervasive digital business models. The partially inclusive on‑demand models require cloud and other efficient tools/technologies to deliver in a timely manner.

FLEXIBILITY AND CUSTOMISATION IS KEY
Multifaceted QA operating models that are flexible enough to accommodate both tightly integrated, fully inclusive models (as in in‑sprint testing) and federated on‑demand models are the new norm. This is in alignment with the changing lifecycle models that co‑exist in the current scenario.

The people, process and technology triangle is of paramount importance to any enterprise, and so it is for a QA organisation. The skills, processes and tools/technology used in testing are rapidly changing, but the need for skilled people with a passion for quality, working in a process‑driven, repeatable manner using the latest technologies, remains the same. There is, of course, a shift towards automation, with a greater emphasis on the tools and technologies component, but the skills required in a successful QA professional remain unchanged: an eye for detail, an understanding of the business, a macro view of integrated systems and their functions, an understanding of software failures and their impact, knowledge of testing techniques, engineering skills for innovation, etc.

A scenario with no visibility of quality would be a nightmare for any enterprise. Hence, a flexible QA model that focuses on continuously improving its triangle elements while adhering to the basic guiding principles of QA is the need of the hour. The co‑existence of various models will continue for some time, so a flexible and customised QA model for every IT organisation is key. These newer operating models are also paving the way to engaging QA through new commercial models such as pay‑as‑you‑use, crowdsourcing, risk‑reward, etc., which in turn give IT organisations the needed edge. The words of another fashion guru, Giorgio Armani, are rather apt in this context: “The difference between style and fashion is quality.”

There is of course a shift towards automation, with a greater emphasis on the tools and technologies component, but the skills required in a successful QA professional remain unchanged


Jayashree (Jay) Natarajan is the Global Head of Assurance Services Unit (ASU) at Tata Consultancy Services (TCS), which serves a large, diversified customer base globally. With over 22 years of rich delivery and operations experience in the IT industry, Jay has been partnering with global clients to assure business outcomes.


TEST MANAGEMENT IN THE AGE OF SMARTWATCHES Real-time test metrics on any form factor is the next generation of test management, says Sean Hamawi, CEO, Plutora.



Trending keywords such as disruption, competition, and start‑ups are becoming the norm of conversation across the enterprise IT world. Business and IT teams are retooling and moving IT application delivery teams to a more frequent, feature‑driven delivery model. This shift is happening rapidly, with enterprises increasing their application release cadences so they can stay competitive. Every day, across all industries, SDLC teams are using innovative methods to automate and test their release deployments. This means that traditional test and QA teams need to start thinking about

evolving their departments to support agile, lean, and DevOps principles. A wholesale shift in testing processes won't necessarily be required, but some things are certain: test teams will need to be able to cope with the influx of feature requests through the software delivery lifecycle, and at a rate never seen before. The level and expectation of quality from business units won't change, and neither will end users' expectations. As test teams morph towards improved testing practices such as automated test case execution, I want to spend time discussing a key area that is rarely discussed – test metrics, also known as test summary reports (TSRs).




REPORTING THE METRICS
Traditionally, test heads and managers spend a fair chunk of their time working through the test numbers. No matter the size of the test team, a common complaint is that test reporting is cumbersome, dated, and heavily reliant on resources with the skills to make the test metrics meaningful to time‑poor IT executives. Some very common metrics a typical enterprise would report on include:
• How many defects were closed today?
• How many open Sev1s were raised or closed today?
• How are we tracking test case execution versus planned?
• How many code drops entered our environment this week?
• What is our defect density?
• What is the defect age by application, grouped by severity?
• What is the age of blocked test cases?
• How many defects are environment‑related?
• And much more.
Reporting on those metrics normally requires a dedicated reporting team or several members of the test team, who must manually execute queries against a test management tool, extract the data, manipulate it in Excel, and then send it back to the test leaders. As you have probably experienced first hand, the process is slow, error‑prone, and inefficient. The biggest issue is that by the time the test metrics finally reach stakeholders, there is a high probability that the data is out of date, rendering it almost useless if the test plan is large.
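The manual query‑extract‑manipulate loop described above can be scripted. As a minimal sketch (the record fields, severities and KLOC figure here are illustrative, not taken from any particular test management tool), most of the metrics listed reduce to a few simple aggregations once the raw data has been exported:

```python
from datetime import date

# Illustrative records; in practice these would come from a
# test management tool's API or database export.
defects = [
    {"severity": "Sev1", "opened": date(2016, 9, 1), "closed": None},
    {"severity": "Sev2", "opened": date(2016, 8, 20), "closed": date(2016, 9, 2)},
    {"severity": "Sev1", "opened": date(2016, 8, 30), "closed": date(2016, 9, 5)},
]
test_cases = [
    {"status": "passed"}, {"status": "passed"},
    {"status": "failed"}, {"status": "blocked"},
]

def summary(defects, test_cases, kloc, today):
    """Compute a handful of the common metrics listed above."""
    open_sev1 = sum(1 for d in defects
                    if d["severity"] == "Sev1" and d["closed"] is None)
    executed = sum(1 for t in test_cases if t["status"] != "blocked")
    return {
        "open_sev1": open_sev1,
        "defect_density": len(defects) / kloc,        # defects per KLOC
        "execution_rate": executed / len(test_cases),  # executed vs planned
        "oldest_open_days": max(((today - d["opened"]).days
                                 for d in defects if d["closed"] is None),
                                default=0),            # defect age
    }

print(summary(defects, test_cases, kloc=12.5, today=date(2016, 9, 10)))
```

Scheduling such a script removes both the manual Excel step and the staleness problem: the numbers are recomputed from live data each time the report is requested.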

A DEVOPS MINDSET
We can improve test reporting by applying the DevOps mindset to the problem. Next‑generation processes should allow a test summary report to be viewed on any device, with the added benefit of the data being real‑time. Imagine being able to pull up test execution rates easily on your smartphone while travelling home on the train, or being notified on your smartwatch that a Sev1 show‑stopping defect has been raised in the middle of an expensive SIT test cycle.

With code deployments on the rise due to continuous deployment practices, the increased workload of non‑stop test cycles is straining test analysts. Test leaders, meanwhile, are fielding a growing number of peer requests for the team's position on test quality. For test leaders to respond to these requests confidently, automated test status reports

with a visual wow factor are going to be the way of the future. Again, why not give these stakeholders a smartwatch or smartphone app that shows real‑time test status metrics, and provide the innovation that IT executives expect from their test teams? Time‑poor IT executives care deeply about testing but don't have the bandwidth to trawl through the detail. Introducing real‑time test metrics on any form factor is the next frontier in test management. Recommendations on how to improve test status reporting include:
• Move to solutions that provide reporting on tablets, smartphones, and smartwatches.
• Automate the extraction and manipulation of data.
• Standardise test status report (TSR) templates across your test teams.
• Email scheduled reports on a periodic basis.
• Increase confidence in the value added by the test team by sending weekly test status reports to VPs and GMs of technology.
• Implement a single view of testing: one major dashboard showing all projects, tracking core test metrics such as test cases passed, failed, or blocked, organised by progress and regression categories.
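The final recommendation, a single consolidated view, amounts to a roll‑up of per‑project counts into one dashboard‑ready structure. As a rough sketch (project names and counts are invented for illustration), the aggregation behind such a dashboard could look like this, with the result then rendered by whatever front end serves the tablets and smartwatches:

```python
from collections import defaultdict

# Invented raw counts as (project, status, count) rows, as might be
# pulled from several test management tools.
raw = [
    ("Payments", "passed", 120), ("Payments", "failed", 8),
    ("Payments", "blocked", 4),  ("Mobile App", "passed", 75),
    ("Mobile App", "failed", 2), ("Mobile App", "blocked", 0),
]

def single_view(rows):
    """Roll raw counts up into one dashboard-ready structure per project."""
    board = defaultdict(lambda: {"passed": 0, "failed": 0, "blocked": 0})
    for project, status, n in rows:
        board[project][status] += n
    # Attach a simple progress figure (share of cases passed) per project.
    for totals in board.values():
        run = sum(totals.values())
        totals["progress"] = round(totals["passed"] / run, 2) if run else 0.0
    return dict(board)

for project, totals in single_view(raw).items():
    print(project, totals)
```

The point of the design is that the roll‑up is computed on demand from current data, so every device sees the same, fresh numbers rather than a week‑old Excel extract.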

Existing test management tools are over a decade old and won’t be able to support the requirements of the future

LOOKING INTO THE FUTURE The future test management approach to test reporting will be far more innovative. Real‑time and consumable data on any form factor will become the norm, but only with the test management tools of the future. Existing test management tools are over a decade old and will not be able to support the requirements of the future. Excel and clunky reports extracted from legacy test management tools will be slowly replaced with amazing interactive dashboards from responsive test tools, which visualise risk in far more meaningful ways. Providing a single view of testing (SVT) dashboard is also something test leaders need to consider. An SVT dashboard provides stakeholders with a hugely rich, visual, consolidated view of all test projects across the enterprise. Creating such a dashboard is typically the holy grail of test management, and making it consumable on a tablet or smartphone is the pinnacle of test reporting.


Sean Hamawi leads a global team of talented individuals at Plutora, an enterprise release and test management vendor. Before co‑founding Plutora, Sean had over 12 years of product development, execution, and IT delivery experience with some of the largest organisations in the Asia‑Pacific region.

TEST Magazine | September 2016

CLIMBING THE MATURITY LADDER Dr Mark Rice, ICT Business Relationship Manager, gives a comprehensive introduction to Test Maturity Model integration (TMMi).


The software industry does not operate in a zero‑defect environment, and, arguably, it never will. In the face of this truism, numerous techniques to reduce the number and severity of defects in software have been developed, with the ultimate, albeit unobtainable, goal of defect elimination. Such optimistic thinking has led to significant improvements in software quality over the past decade, notwithstanding increased software complexity and customer demands.1 One such defect elimination approach is maturity models. Broadly, these are structures which state where an organisation sits on a maturity scale, where its failings lie and what


should be done to improve the situation using process improvement frameworks. The archetypal maturity model is the Capability Maturity Model Integration (CMMI)2, along with its predecessor, the Capability Maturity Model (CMM).

IN THE BEGINNING: THE CAPABILITY MATURITY MODEL INTEGRATION (CMMI) CMMI is a complex and multifaceted model which focuses on organisational maturity and

capability in terms of service development, service management and service acquisition3. Software development is the principal subject area of this model and the model may be adopted in a continuous or staged form. The former approach places emphasis on capability over maturity and “enables organisations to incrementally improve processes corresponding to an individual process area (or group of process areas) selected by the organisation”4. The latter path “enables organisations to improve a set of related processes by incrementally addressing successive sets of process areas”5. In other words, using the continuous approach, the user selects what matters most, as well as the





order in which to implement improvements, while the staged approach demands that the model itself dictates these factors. The staged model has five maturity stages: initial, managed, defined, quantitatively managed and optimising, while the continuous model has four capability levels: incomplete, performed, managed and defined. CMMI is made up of process areas, goals and practices, and the extent to which these elements are satisfied by the organisation determines its capability/maturity level.

FROM CMMI TO TMMi Although CMMI deals with software development organisational maturity, it only provides limited content on software testing maturity6 and it is this limitation which spurred the development of a closely related maturity model called the Test Maturity Model (TMM)7, which has since been superseded by the Test Maturity Model integration (TMMi), created by the TMMi Foundation8. Other testing‑related maturity models exist9, but TMMi is the focus of this article.

The first thing the observer will notice is the similarity between TMMi and CMMI. This is to be expected since TMMi is based on, and designed to be complementary to, the CMMI framework10; functioning as an adjunct to the CMMI testing maturity measure, in addition to exploiting CMMI to support its own implementation11. TMMi currently does not have a continuous version of its model12; it is staged only, which means that the path of improvement is general rather than user‑specific13. TMMi has five maturity levels arranged on a ‘ladder’. Figure 1 shows the maturity levels of TMMi along with each level’s process areas.

TMMi has five maturity levels arranged on a ‘ladder’. Figure 1 shows the maturity levels of TMMi along with each level’s process areas

THE CONSTITUENTS OF A PROCESS AREA Each process area is made up of goals and practices. The relationships between the constituents are intricate, but are best explained by the relational diagram shown in Figure 2. This diagram shows that each process area is made up of both specific and generic goals and practices; that is to say,

Figure 1 content (the TMMi ladder, bottom to top):
(1) INITIAL: no process areas.
(2) MANAGED: Test Policy and Strategy; Test Planning; Test Monitoring; Test Design and Execution; Test Environment.
(3) DEFINED: Test Organisation; Test Training Program; Test Lifecycle and Integration; Non‑functional Testing; Peer Reviews.
(4) MEASURED: Test Measurement; Software Quality Evaluation; Advanced Peer Reviews.
(5) OPTIMIZATION: Defect Prevention; Test Process Optimization; Quality Control.

Figure 1. The TMMi ladder (source: The Little TMMi).

Mark is a business relationship manager in the field of ICT. He has previously worked as a functional & localisation software tester and project manager in the area of video games. Mark has a PhD in psychology and is qualified in Advanced ISTQB (test manager/agile), Scrum, ITIL, PRINCE2, TMMi and Six Sigma. He is also an affiliate of the ISTQB.




Figure 2 content: a process area comprises a purpose statement and typical work products (informative components), specific goals and generic goals (required components), and specific practices and generic practices (expected components).

goals and practices which are either particular to each process area or applicable to multiple process areas, respectively. Goals signify what needs to be done to satisfy a process area, while practices break down a goal into smaller objectives. Goals are required components of a process area, while practices are expected components. Some generic goals are only triggered when the organisation attempts to move past a particular maturity level, such as from level two to level three14. Informative components support the comprehension of each process area, and include sub‑practices, examples and work products. For example, the process area Test Policy and Strategy, from Maturity Level Two (Managed), contains three specific goals:
• SG1: Establish a Test Policy.
• SG2: Establish a Test Strategy.
• SG3: Establish Test Performance Indicators.
Each of these goals is made up of specific practices. SG1: Establish a Test Policy, includes:
• SP1.1: Define Test Goals.
• SP1.2: Define Test Policy.
• SP1.3: Distribute the Test Policy to Stakeholders.

Figure 2. A relational diagram showing the structure of a TMMi process area (source: The Little TMMi).
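The process area structure described above (specific and generic goals, each broken down into practices) maps directly onto a small data structure. A sketch, populated with the Test Policy and Strategy example from the text (the class names here are my own shorthand, not TMMi terminology):

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A goal (required component), broken down into practices (expected components)."""
    name: str
    practices: list = field(default_factory=list)

@dataclass
class ProcessArea:
    """A process area groups specific goals (particular to it) and
    generic goals (applicable across process areas)."""
    name: str
    specific_goals: list = field(default_factory=list)
    generic_goals: list = field(default_factory=list)

# The Test Policy and Strategy process area from Maturity Level 2 (Managed).
tps = ProcessArea(
    name="Test Policy and Strategy",
    specific_goals=[
        Goal("SG1: Establish a Test Policy", [
            "SP1.1: Define Test Goals",
            "SP1.2: Define Test Policy",
            "SP1.3: Distribute the Test Policy to Stakeholders",
        ]),
        Goal("SG2: Establish a Test Strategy"),
        Goal("SG3: Establish Test Performance Indicators"),
    ],
)
print(len(tps.specific_goals), len(tps.specific_goals[0].practices))  # 3 3
```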


Figure 3 content (the TAMAR process): Assessment Request → Planning Phase (signing of the assessment agreement; initial version of the Assessment Plan) → Preparation Phase (Assessment Plan; initial version of the Assessment Report) → Interview Phase (interview meeting notes; updated version of the Assessment Report) → Reporting Phase (presentation of assessment results; Assessment Report; Assessment Evaluation Form).

Figure 3. The TAMAR process (source: The Little TMMi).


TMMi has a dedicated assessment method called the TMMi Assessment Method Application Requirements (TAMAR)15. Typically, TAMAR consists of four phases: planning, preparation, interview and reporting, as shown in Figure 3. Assessments may be informal or formal. An informal assessment is quick, requires relatively fewer sources of evidence and is an inexpensive way of getting an approximate verdict on the maturity level of an organisation. A formal assessment is more in‑depth, more expensive and resource‑heavy, and requires relatively more sources of evidence, but it gives a more accurate picture of the situation and is the only assessment type officially recognised by the TMMi community. Thus, the formal assessment is a key marketing tool for organisations to advertise their TMMi credentials. Formal and informal assessments also differ regarding the make‑up of the TMMi assessment team, including the number, experience and qualifications16 of team members. Usually, a series of informal assessments is carried out just before a





Figure 4 content (the IDEAL model): Initiating (stimulus for change; set context; build sponsorship; charter infrastructure) → Diagnosing (characterise current & desired states; develop recommendations) → Establishing (set priorities; develop approach; plan actions) → Acting (create solutions; pilot/test solutions; refine solutions; implement solutions) → Learning (analyse & validate; propose future actions).


Figure 4. The IDEAL model (source: The Little TMMi).

formal assessment, to help ensure that an organisation is ready to increase its maturity level. The minutiae of the assessment method are complex, and somewhat subject to the specific ‘TAMAR‑compliant’ approach used by an organisation, but each maturity level being assessed is assigned a value according to the extent to which it has been achieved, which is in turn determined by the extent to which its constituent parts (i.e. process areas, goals and practices) have been achieved. The lowest‑rated constituent typically determines the value of its parent component, known as the inheritance principle17. The Little TMMi states that: “To be able to classify a specific or a generic goal, the classification of the underlying specific and generic practices needs to be determined. A process area as a whole is classified in accordance with the lowest classified goal that is met. The maturity level is determined in accordance with the lowest classified process area within that maturity level.”18

There are four possible values: N (not achieved), P (partially achieved), L (largely achieved) and F (fully achieved). Maturity levels which are rated L or F are considered to have been achieved. Two ancillary ratings are N/A (not applicable, meaning not relevant to the organisation) and NR (not rated, when there is disagreement or insufficient evidence to decide a value). Percentage equivalents of each value rating are provided in the TMMi syllabus and are used to rate practices, though some approaches – which are not necessarily TAMAR‑compliant – just use ‘yes’ or ‘no’ when determining whether or not a practice has been met, to determine a mean goal value. Assessments do not cover all maturity levels each time; this would be expensive and counterproductive. Usually, one or two maturity levels above the existing level are assessed each time, though the constituents of all levels below the target maturity level are assessed (or reassessed) during each assessment19. Because preceding maturity levels form the foundation for higher levels, ‘skipping a maturity level’ during an assessment is not recommended. Yet sometimes it is useful to implement a process area of a significantly higher maturity level in order to assist with the implementation of lower‑level process areas20.

At its core, TMMi is fundamentally a list of ‘good practices’. Following an assessment, it is up to the testing organisation itself to decide the nature of the improvements it will make to rectify hitherto failing TMMi components, in addition to specifying how these improvements will be implemented
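The inheritance principle described above amounts to taking a minimum over the ordered rating scale N < P < L < F, with N/A constituents excluded. A sketch under that reading (the goal and process area ratings are invented for illustration, and NR handling is omitted):

```python
SCALE = ["N", "P", "L", "F"]  # not / partially / largely / fully achieved, lowest first

def lowest(ratings):
    """Inheritance principle: a parent component is classified by its
    lowest-rated applicable constituent (N/A constituents are excluded)."""
    applicable = [r for r in ratings if r != "N/A"]
    return min(applicable, key=SCALE.index)

def level_achieved(process_area_ratings):
    """A maturity level is achieved only if its lowest-classified
    process area is rated L or F."""
    return SCALE.index(lowest(process_area_ratings.values())) >= SCALE.index("L")

# A goal inherits from its practices; a process area from its goals; a level from its areas.
goals = {"SG1": "F", "SG2": "L", "SG3": "F"}
area_rating = lowest(goals.values())          # "L"

level2 = {"Test Policy and Strategy": area_rating, "Test Planning": "F", "Test Monitoring": "L"}
print(area_rating, level_achieved(level2))    # L True
```

A single P‑rated practice anywhere in the chain would drag its goal, its process area and ultimately the maturity level down with it, which is exactly why the inheritance principle makes formal assessments so unforgiving.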

IMPLEMENTING IMPROVEMENTS At its core, TMMi is fundamentally a list of ‘good practices’. Following an assessment,



In the context of ever‑increasing demands for better and more complex software, delivered more quickly, to a higher quality and at a cheaper price, the role of TMMi is two‑fold



it is up to the testing organisation itself to decide the nature of the improvements it will make to rectify hitherto failing TMMi components, in addition to specifying how these improvements will be implemented. However, TMMi does suggest a framework for implementing changes in recognised areas of weakness: the IDEAL model (Figure 4), but there is no standard approach21. The user could just as readily use the Six Sigma or Plan‑Do‑Check‑Act (PDCA) approaches. IDEAL is shorthand for five stages of process improvement: initiating, diagnosing, establishing, acting and learning.

CONCLUSION: TAKING TMMi FURTHER In the context of ever‑increasing demands for better and more complex software, delivered more quickly, to a higher quality and at a cheaper price, the role of TMMi is two‑fold. On one hand it is a microscope. It details


starkly the testing‑related vulnerabilities of an organisation – particularly when staff interviews are taken into account – and punctuates them with a maturity level which can then be compared with those of other organisations. In tandem with this, TMMi is a roadmap. It lists ‘good practices’ and suggests a framework within which poor ones can be replaced. TMMi does not dictate the specifics of improvements; it encourages the organisation itself to decide the best way to design and implement them. Moreover, for a maturity model practitioner, the TMMi structure is easily adaptable to testing‑related departments, such as software release management or business analysis. The associated frameworks can be made as complex (practices, goals, process areas and maturity levels) or as high‑level (just maturity levels and process areas) as the needs and commitment of each department require them to be. In sum, TMMi is a sound approach to improving the test process.

References
1. E. v. Veenendaal and J. J. Cannegieter, The Little TMMi (The Netherlands: UTN, 2011), p. 11.
2. More information can be found at: http:// (16.02.2016.)
3. Loc. cit.
4. CMMI Product Team, CMMI for Development, Version 1.3 (USA: Carnegie Mellon, 2010), p. 21.
5. Loc. cit.
6. Verification and validation, see Veenendaal and Cannegieter, loc. cit.
7. See:‑science/research/testing‑maturity‑model‑tmm (16.02.2016.)
8. (16.02.2016.)
9. For instance, TPI Next, covered by B. Weston, ‘Ready, Set and Be Prepared’, TEST Magazine, September 2015, pp. 36‑39.
10. In addition to CMMI and TMM, TMMi developed from other models, including the Evolutionary model, after Gelperin and Hetzel.
11. Veenendaal and Cannegieter, op. cit., pp. 79‑84.
12. Ibid., p. 13.
13. TMMi is also a process reference model, rather than a content‑based model. For information on this distinction, as well as more on continuous and staged models, see E. v. Veenendaal, ‘TMMi and ISO/IEC 29119: Friends or Foes’, TMMi Foundation, 2016, pp. 1‑13.
14. Veenendaal and Cannegieter, op. cit., pp. 28‑29.
15. TAMAR is essentially a set of requirements which needs to be satisfied for an assessment to be declared acceptable, rather than strict instructions on how an organisation must assess. An organisation has freedom in how it goes about satisfying TAMAR. However, the TMMi Foundation has developed a system called the TMMi Assessment Method (TAM), which satisfies the requirements of TAMAR (tmmi,34.html, 08.04.2016).
16. Only assessors accredited by the TMMi Foundation may perform formal assessments. An example of a TMMi qualification is the TMMi Professional.
17. Veenendaal and Cannegieter, op. cit., p. 13.
18. Ibid., p. 65.
19. Ibid., p. 61.
20. Ibid., p. 19.
21. Ibid., p. 67.



27‑28 September 2016, The Royal York Hotel, York | #TestingConfNORTH

THE SOFTWARE TESTING CONFERENCE NORTH 2016 Cecilia Rehn, General Manager and Editor, TEST Magazine, gives her highlights ahead of The Software Testing Conference NORTH.


I’m pleased to say that The Software Testing Conference North is coming to The Royal York Hotel, York this September. Designed as an extension of the National Software Testing Conference, which takes place in London each May, this northern counterpart brings the best presentations and speakers from across the UK to discuss all matters software testing and QA. The two-day programme will see speakers from a range of different sectors, covering everything from the future of testing; test management culture; UX; BDD; digital transformation to DevOps – there will be something for everyone!


KEYNOTE PRESENTATIONS
• Mike Jarred, Senior Manager – Solution Delivery, Financial Conduct Authority. Mike will present on: The Continuous Evolution Of Testing In The FCA. The talk will describe the journey the FCA Test Group has undertaken to provide a modern, efficient and valuable testing service by rethinking its approaches to testing.
- The shift to relevant, targeted testing, and the visualisation of programme‑wide risk profiles.
- The importance of stakeholder engagement, and how we track the effectiveness of the Test Group to deliver against stakeholders' risk appetite.
- Collaborating and integrating with application development to improve software engineering approaches, including continuous delivery.
- The drivers to changing our Test Governance approach to one of Test Assurance, ensuring an effective, valued, collaborative & supportive coaching‑led model is operating.
• Dan Ashby, Global Head of Software Quality & Testing, AstraZeneca. Dan’s talk is titled: How Ignorant Are We? The presentation will discuss how the relationship between



the five orders of ignorance and software testing is much closer than most people realise. Testing isn’t just about checking that software conforms to some explicit, known requirements. Testing is also about exploration and investigating the software and our hypotheses in order to discover and uncover new information. This talk will teach about the five orders of ignorance regarding information and how these orders very closely relate to testing. Dan will discuss each layer of the five orders within the context of testing software, and he will touch on the key skills and processes that fit within each layer.
• Paula Thomsen, Head of Quality Assurance, and Tom Berry, Senior Test Manager for Life Finance, Aviva. Paula and Tom will be speaking on: Why Is Diversity More Important Than Ever Within Assurance Disciplines? The presentation considers the role of diversity at a time when technology invades every aspect of our lives, everyone is impacted, and the pace of change just keeps getting faster. How ready are we, as a technology industry, to handle change within the UK? Are we ready within software testing? Paula and Tom will challenge the audience to consider how they have built up their teams and how they will need to evolve further to meet the industry challenge, whilst taking into account the increased prevalence of different delivery methods and tooling solutions.

NETWORKING AND NEW BUSINESS OPPORTUNITIES The Software Testing Conference North provides an ideal environment for learning, networking and developing skills. In-between presentations, all delegates will be able to network with each other, as well as peruse new offers and services on display in the exhibition area.


CONFERENCE CHECKLIST

Check out the speaker presentation topics in the programme
You’ll see each presenter’s biography and presentation topic, ensuring you are well equipped to decide which presentations are the most important to you.

Visit the market-leading exhibition Don’t forget to take in the market-leading exhibition, where you’ll be able to source key information from leading vendors. There will be plenty of time to walk around during the many coffee and tea breaks, as well as during lunch.

Enter the treasure hunt And when you’re walking around the exhibition, don’t forget to participate in the Treasure Hunt – the winner will be announced at the end of the second day.

Share your opinion in the roundtables There are also a series of roundtable debates taking place during the conference – you can find more information about the topics in the programme. Spaces are limited, so make sure you don’t miss out on this great opportunity to learn from your peers and share your views.

Network, network, network! Apart from gleaning knowledge imparted from the excellent speakers, the Software Testing Conference North has been designed to offer prime networking opportunities. Whether it is making new acquaintances during lunch, or conversing over new business opportunities on the exhibition floor, we are encouraging you to get out there and speak to the other attendees!





SMART EQUALS SAFE Greig Duncan, Marketing Executive – Rail and Enterprise Risk Software, Ideagen PLC, positions that smart devices are changing the face of safety in the rail and transportation industry.



The adoption of mobile devices continues to grow in everyday working life – it is predicted that by 2017 the total number of mobile phone users will rise to 4.77 billion globally.1 This is particularly prevalent among those working in remote conditions and in environments with little or no access to IT infrastructure.

THE DIGITALISATION OF THE RAIL OPERATING AND FREIGHT INDUSTRY The rail sector is one such sector, experiencing significant growth in remote working and in the utilisation of smart devices to increase the efficiency and accuracy of the data gathered. To address these changing market conditions, technology companies are altering their priorities in terms of application development and driving innovation in this area. Due to the mobility of the workforce – sometimes in remote locations, with no desktop access – this method of logging business‑critical information offline has represented a step change in operational efficiency. Making use of modern technology to improve how safety investigations are recorded and managed will allow responsible managers and the workforce to record and manage safety incidents through the flexible use of business process workflow software. A recent survey of over 300 business professionals, conducted by the Public Relations Society of America, revealed that 90% of respondents2 believe that mobile applications are an effective method of peer‑to‑peer workplace communication and of logging information (such as safety incidents, content repositories and collaborative solutions). One such sector embracing this modern approach to business via mobile and smart technology, with a view to significantly enhancing safety, is the rail operating and freight industry. Traditionally, some critics have falsely viewed the rail industry as averse to change, dated, and incapable of performing efficiently and promoting the use of technology, due to its switch to privatisation. However, there is evidence that the sector is leading the way in app development, accommodating tickets through smartphones. Customers can buy a ticket, display it on screen as a barcode and board, removing the need to queue at the station – a trend also witnessed with boarding passes in the aviation industry.



TRACKING SAFETY The United Kingdom’s Rail Safety and Standards Board (RSSB) has recently taken significant steps to roll out its new SMIS+ (Safety Management Information System+) to log industry‑wide close calls and safety incidents, and to track investigations. A notable part of making the system accessible is the use of mobile applications and smart devices – hosted on Amazon Web Services. According to the RSSB, one of the key rationales behind this shift is that: “Safety incidents can be captured in real time via mobile devices and processed seamlessly, so that the right incidents are investigated by the right people at the right time. These events could be geographically tagged, stored with relevant pictures and documents, and automatically alerted to those who need to know, when they need to know”. Whilst safety standards on the whole are statistically improving across the global rail sector (there has been an annual reduction of just under 10%), the majority of recent large‑scale and fatal safety incidents (such as Bad Aibling in Germany3, the Santiago de Compostela derailment in Spain4 and Dalfsen in the Netherlands5) could possibly have been avoided with more robust safety logging. Taking the lead from other safety‑driven industries, such as aviation and oil & gas, to roll out and adhere to industry‑wide standards and directives could yield positive results. While these industries do not have a fault‑free record in terms of safety, these shared standards have helped to promote a collective understanding of a ‘safety culture’ and a responsibility for safety, quality and risk reduction – much of which has been achieved through collaborative and standardised software utilisation. This benefits train operating and freight operating firms from a practical reporting aspect, as the data gathered is up‑to‑date and accurate – allowing them to identify threats and put appropriate safety controls in place to reduce risk.
Ideagen PLC provides and manages the RSSB’s new SMIS+ service, and sees the development of smart device‑oriented technology as a positive thing for the transportation industry as a whole: Whilst the development, acceptance and implementation of new software tools and technologies cannot be achieved overnight, the future of such technology will continue to deliver enhanced benefits. There is a definite growth in the number of transportation and rail companies using mobile applications, software, and

It is predicted that by 2017 the total number of mobile phone users will rise to 4.77 billion globally


Greig has worked within the offshore safety, training & occupational health sectors for five years, where he has built up strong knowledge of safety‑critical industries. Promoting a focus on global safety, operational excellence and risk reduction within the rail industry is a big part of his role at Ideagen.



There is a definite growth in the number of transportation and rail companies using mobile applications, software, and smart devices to take a risk‑based approach to operational excellence that drives quality, compliance and safety


smart devices to take a risk‑based approach to operational excellence that drives quality, compliance and safety, allowing their respective firms to improve and become more accountable and mature. Robust and innovative software and communications are at the heart of pulling all the information into analytics and reports for a single version of the truth. A key feature in achieving successful implementations – with the likes of Virgin Trains and the RSSB – has been the in‑app communication capabilities, which have allowed all dialogue related to a given incident to remain within the application – reducing email dialogue and retaining incident accuracy. This trend is something that has crossed over from social media to workplace applications, already being commonplace within the likes of Facebook.

SUMMARY The development of software applications and smart forms in this arena will continue to change the way in which


people undertake their daily tasks. For generations, spreadsheets and hand‑written documentation have added a significant amount of effort to the daily tasks of staff in the rail industry. Smart forms have helped overcome this by intelligently calculating which form is relevant in the workflow and which forms are not applicable – significantly speeding up the process of auditing and safety incident reporting. Seamless integration between quality and safety systems, together with real‑time monitoring of safety‑critical equipment, ensures a live feed of data that proactively flags when any thresholds have been breached. This has moved the industry forward, with software that means incidents can be confronted immediately rather than retrospectively, following an incident or tragedy. As business intelligence, analytical data, mobile software applications and smart devices continue to dominate everyday working life, combined with the current high speed of technological development, this adoption trend looks set to continue, both in and outside the workplace.
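The threshold‑based flagging described above can be sketched as a simple check over a live feed of readings. Everything here, including the equipment names and limits, is an invented illustration rather than any real SMIS+ or Ideagen interface:

```python
def check_thresholds(readings, limits):
    """Flag any reading that breaches its configured safety threshold,
    so an alert can be raised immediately rather than retrospectively."""
    alerts = []
    for equipment, value in readings.items():
        limit = limits.get(equipment)
        if limit is not None and value > limit:
            alerts.append((equipment, value, limit))
    return alerts

# Invented equipment names and limits, purely for illustration.
limits = {"axle_bearing_temp_c": 90.0, "brake_pressure_bar": 10.0}
feed = {"axle_bearing_temp_c": 96.5, "brake_pressure_bar": 8.2}
print(check_thresholds(feed, limits))  # [('axle_bearing_temp_c', 96.5, 90.0)]
```

In a production system this check would run continuously against streamed sensor data and route each alert to the responsible manager, but the core logic is this comparison.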

References
1. ‘Number of mobile phone users worldwide from 2013 to 2019’, Statista,‑of-mobile‑phone-users-worldwide/
2. ‘2016 Digital Workplace Communications Survey: Companies Need to Rethink How They Communicate with Employees’, theEMPLOYEEapp,‑workplace‑communications‑survey‑companies‑need‑to‑rethink‑how‑they‑communicate‑with‑employees/
3. ‘Germany train crash: Several killed near Bavarian town of Bad Aibling’, BBC, news/world‑europe‑35530538
4. ‘Spanish train crash: Driver facing 80 homicide charges, but rail bosses cleared’, Independent, spanish‑train‑crash‑driver‑facing‑80‑homicide‑charges‑but‑rail‑bosses‑cleared‑a6686951.html
5. ‘Dutch rail driver killed in crane crash at Dalfsen’, BBC, world‑europe‑35639164


THE ROAD TO INNOVATION Connected cars continue to drive security concerns says Raj Samani, CTO EMEA, Intel Security.


With the world expecting 150 million connected cars1 on the roads by 2020, it’s not just the automotive industry that’s paying attention to this new wave of innovation. In fact, even the Queen has something to say on the matter. In the Queen’s speech2 earlier this year, Her Majesty introduced changes to enable driverless cars to be insured under ordinary policies. This commitment to furthering the development of the driverless car economy was echoed by the UK government just this month, with the launch of a major consultation3 to help pave the way for automated cars to be used on British roads.

Such forward thinking will ultimately be a fantastic boost to the UK economy. According to Intel, connected cars are the third‑fastest‑growing technological devices after phones and tablets. It is equally important that, in its pursuit of innovation, the government – and indeed the automotive industry – does not neglect the security essentials which will guarantee the success of these new technologies, as well as the safety of their users. Whenever new technology is adopted, criminals look to identify ways to exploit it for financial gain. As the world’s connectivity continues to grow, so too does the risk of attacks from cybercriminals. The potential to hack and gain control of connected vehicles is a very real threat and has been clearly demonstrated through a number of demos. We are yet to see this translate into real‑world attacks; however, as with any crime, all that is missing is a motive. Generally, cybercriminals take action with the aim of financial gain. If driverless and connected vehicles become commonplace in the UK and globally, it is just a matter of time before attackers find a means to use them as an opportunity to fulfil that motive.

CONSIDERATIONS AHEAD OF THE CONNECTED CAR INNOVATION INFLUX Intel developed the Automotive Security Review Board (ASRB), in conjunction with founding members Aeris and Uber. We form a collaboration of top security and automotive industry talent from across the globe, who work together to stay one step ahead of cybercriminals and secure vulnerabilities before criminals have the opportunity to turn this potential risk into a dangerous reality. By adding internet connectivity to cars, the auto industry is enabling exciting new features, such as real‑time telematics, smart intersections, and autonomous driving. However, it is also exposed to the full force of malicious activity. This is driving the need for designed‑in security solutions to ensure that next‑generation cars can operate to their full potential in a malicious operating environment.


SECURITY FROM DESIGN Like safety and reliability, vehicle security starts in the design phase. Consolidation and interconnection of vehicle systems requires a security design that is intentional and proactive. Expanding on experience from related industries, such as defence and aerospace, there are some foundational principles that can be utilised: defence‑in‑depth, similar to the layers of protection analysis (LOPA) methodology used for safety and risk reduction, and designing secure systems from the hardware to the cloud with identified best practices and technologies for each discrete building block. These include such things as secure boot; trusted execution environments; tamper protection; isolation of safety critical systems; message authentication; network encryption; data privacy; behavioural monitoring; anomaly detection; and shared threat intelligence.
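Message authentication, one of the building blocks listed above, can be sketched in isolation. The snippet below is a minimal, hypothetical illustration only — a production vehicle would use hardware‑backed keys and a standard such as AUTOSAR SecOC, not application‑level Python — but it shows the principle: a receiving ECU rejects any message whose authentication tag does not verify.

```python
import hmac
import hashlib

# Hypothetical shared key; in a real vehicle this would live in a
# hardware security module, never in application code.
SHARED_KEY = b"demo-key-not-for-production"
TAG_LEN = 32  # HMAC-SHA256 produces a 32-byte tag

def sign_message(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so receivers can authenticate the sender."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_message(message: bytes) -> bytes:
    """Return the payload if the tag verifies, else raise ValueError."""
    payload, tag = message[:-TAG_LEN], message[-TAG_LEN:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message authentication failed")
    return payload

# A forged or tampered command fails verification before it is acted on.
msg = sign_message(b"brake:0.4")
assert verify_message(msg) == b"brake:0.4"
```

The same shape applies at every layer of the defence‑in‑depth stack: each discrete building block authenticates its inputs rather than trusting the bus.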

ON THE PRODUCTION LINE But it doesn’t stop with design; automotive security needs to continue right through to the production and operation stages. Best practices for production processes ensure that the design components are correctly implemented and their implementation is linked back to the properties in the secure design, giving customers confidence that the platform is secure. These include code reviews; component and system‑level penetration tests; continuous validation of security assumptions; inbound and outbound materials processes; maintenance and upgrade plans; and a feedback loop for continuous learning and improvement.

ON THE ROAD AND BEYOND Threat analysis and risk assessment continues throughout the life of the car as old vulnerabilities are patched and new ones come to light, so the risk of attack can even increase with time. Detailed incident response plans in the event of a newly discovered vulnerability or security breach provide confidence to the consumer and manufacturer. Techniques such as over‑the‑air software or firmware patches and upgrades quickly close vulnerabilities and significantly reduce recall costs. Threat intelligence guides the identification and understanding of potential criminal business models to help prioritise threats, their associated risks, and appropriate incident response. These operational measures require secure chains of trust that are designed into the vehicle and meant to last for its deployed lifetime. Best practices for automotive security are an evolution and amalgamation of both product safety and computer security.




• Protecting every ECU, even for tiny sensors.
• Protecting functions that require multi‑ECU interactions and data exchange.
• Protecting data in/out of vehicular systems.
• Protecting privacy of personal information.
• Integrating safety, security, and usability goals.
• Dealing with the full lifecycle of vehicular and transportation systems.

SUMMARY Before the connected car industry explodes into the mainstream, we need to see security of vehicles and transportation systems improved to such a degree that attacks will be hard to execute, while preventive and mitigation techniques are in place to react to vulnerabilities quickly and before widespread damage can be done. The Automotive Security Review Board’s ultimate goal is to facilitate a world driven by self‑healing cars – vehicles that are able to detect malicious intent, resist attacks and perform self‑repair. Through the collaboration between standards organisations, the automotive industry and security experts, this vision can be achieved.

References
1. 'Gartner Says By 2020, a Quarter Billion Connected Vehicles Will Enable New In‑Vehicle Services and Automated Driving Capabilities', Gartner (26 January 2015).
2. 'Queen's Speech debate: Transport and Infrastructure', queens-speech-debate-transport-and-infrastructure (19 May 2016).
3. 'Advanced driver assistance systems and automated vehicle technologies: supporting their use in the UK', Department for Transport (11 July 2016).


Raj was previously the chief information security officer for a large public-sector organisation in the UK. He volunteers as the Cloud Security Alliance EMEA Strategy Advisor and has been working with the UK’s National Crime Agency (NCA) and EUROPOL European Cybercrime Centre as an advisor since October 2013.

TEST Magazine | September 2016

SECURING MOBILE TESTING Paul O’Callaghan, VP of Global Alliances and International Sales, Mobile Labs, highlights mobile testing case studies in the healthcare and financial sectors.



From the time I entered the information technology industry in the 1980s, certain commercial markets have transcended the simplest challenges for technology developers and vendors: good design, functional excellence and ease of use. These organisations ask for more: corporate governance, security, scalability, collaboration architectures and other strategic, operational and financial demands driven by all areas of a company, not just the IT departments. Always at the forefront of this energising force, encouraging vendors to better understand the true nature of the customer's business, have been the financial communities – banks, insurance companies, brokerages and others – built upon regulatory obligations, customer trust, confidential personal and financial information and quality of products. Back in the 1980s, it started with government‑mandated disaster recovery strategies. Today, mobile and mobile interaction with customers has reached ascendancy.

GROWTH OF MOBILE Mobile devices have, almost overnight, become the meeting place, messaging and branding platform, as well as the transaction instrument of preference for existing and new customers. This is true not only for banks, but for telecommunication companies, retail organisations, airlines and many others challenged by competition and market growth targets. In response, the industry has demanded a new generation of mobile application testing solutions that deliver the technical and functional excellence expected by technology professionals, while answering the demands of internal customers and executives to lead the field with quality (mobile) products and solutions. deviceConnect™ by Mobile Labs enables high performance device access for QA and development teams, wherever they are located, and with a clear operational and financial return on investment. The on‑premise, internal, secure mobile application testing cloud automates an end‑to‑end continuous mobile application delivery infrastructure. This article will highlight two mobile testing case studies.

STRENGTHENING THE MOBILE USER EXPERIENCE Simplyhealth serves nearly 3.5 million customers through cash plans, dental plans and pet health plans. Its Independent Living solutions provide daily living and mobility products, including power chairs, mobility scooters and wheelchairs. In 2013, mobile traffic to the company's website had risen significantly, and the trend was expected to continue. While this was good news, there was a concern that the customer experience might erode as a result, along with the company's ability to provide continuous application delivery on mobile devices. There were several challenges that needed to be addressed:
• Because testing occurred later in the development cycle, the testing team found issues later, causing costly rework for the development team.
• Dealing with multiple mobile handsets not only affected productivity but also increased expenses because of the time spent searching for, ordering, and waiting for devices to be shipped.
• The development team was using mobile device simulators, which did not accurately represent the behaviour of actual devices.
• Apple and Android devices behaved differently in some functional areas, and it was becoming impossible to do cross‑platform testing with a single script.
• Testing was becoming a bottleneck, and Simplyhealth's award‑winning customer service levels for its mobile customers were in jeopardy.
The testing team needed a new solution – one that offered true continuous mobile application delivery. Being part of an agile organisation that is committed to collaboration, the team wanted two things: first, to be able to shift left – that is, to conduct early stage testing; and second, to ensure the solution could be used by all departments. After a thorough evaluation, Simplyhealth chose Mobile Labs' private cloud infrastructure solution, deviceConnect. Because implementation would occur behind the corporate firewall, the solution would be secure; easy to install, manage, and maintain; and would allow them to locally manage security, device inventory, and device sharing. Mobile Labs configured a proof of concept. The team saw first‑hand how this technology truly brought order to the chaos of managing mobile devices and apps in an enterprise test lab. According to Chris Dale, IT Test and Release Manager for Simplyhealth, the implementation went very smoothly. "We had a good working relationship from the beginning," he said. "Within a few hours, the Mobile Labs team had the solution out of the crate and up and running, and we were writing automation scripts. We were using the product in production mode within a day."



The testing team’s productivity improved because they were able to use their existing automation framework. This saved nearly a year of development time. The product has been enthusiastically adopted across departments at Simplyhealth. “The solution was exactly what we were looking for and has delivered outstanding results,” said Dale. “We have been impressed not only with the technology, but with how Mobile Labs listened to us, worked to understand our goals, and has continued to be responsive to whatever we need.”

BOOSTING MOBILE BANKING ABN AMRO Bank N.V. is the third‑largest bank in the Netherlands. Its private banking division serves more than 100,000 high‑net‑worth clients in 10 markets worldwide. ABN AMRO, long a leader in mobile banking, created online and mobile banking portals for each of its 10 private banking markets. Its high‑net‑worth private banking clients expect the highest level of convenient mobile access and security for their confidential, personal information, even during application test and release cycles. Development proved a continual pain point for the team.


Paul is a veteran software executive steeped in the mobile application testing industry. Prior to joining Mobile Labs, Paul held executive positions at ZAP Technologies and Perfecto Mobile. During a career of more than 25 years, Paul has also held executive positions at Jacada, Inc., Cisco Systems, Network Systems Corporation, Optio Software and Xacct.






Rather than a singular platform created to serve all markets collectively, the team had developed a unique application for each market. However, roadblocks quickly arose related to mobile testing. The domestic team, located in Amsterdam, shared more than 40 mobile devices for testing purposes. As often happens when physically handing off devices for testing, the team struggled with units going missing, not being charged and failing to be updated to the latest operating systems or browser versions. Meanwhile, the bank also needed to integrate a team of international developers stationed in Dubai into the private, corporate test cloud it had envisaged. With each team forced to use its own set of physical devices, costs were duplicated and collaboration between the Dutch developers and their Emirati counterparts was constrained. Even while staring down these challenges, ABN AMRO Test Manager Sander Stevens was confident the teams could find a route to efficiency and collaboration. While aiming to simplify the testing process, the bank also required security. As a leading Dutch bank serving a number of high‑wealth investors, ABN AMRO needed to comply with rigorous data security restrictions. Prioritising security, efficiency and collaboration, the team primed itself to adopt a device management solution. The on‑premise, cloud‑based mobile device platform deviceConnect provided total local and offshore access to, and management of, mobile app development and testing assets. With the ability to live behind the bank's corporate firewall, it also meant team members could use existing test scripts and test applications with confidence in the system's security. As the ABN AMRO team relied on HP Unified Functional Testing (UFT) for test automation, this included implementation of Mobile Labs Trust™. "When we implemented Mobile Labs Trust, that's when we realised the full benefit of the product," said Stevens. "We quickly learned we could reuse the same tools and techniques in UFT, including the same scripting language, to test mobile applications on real devices located in deviceConnect."


As part of a complete testing infrastructure, ABN AMRO also introduced deviceBridge™ into its day‑to‑day processes. deviceBridge serves as a ‘virtual USB cable’ connecting real devices to a local laptop or server. As a result, the bank’s offshore testing and development teams were provided nearly instant access to all devices and automation tools, allowing the ability for continuous application delivery.


Less than nine months into its partnership with Mobile Labs, ABN AMRO has already seen huge gains in efficiency and productivity. Collaboration has also increased as the dispersed domestic and international teams can now test using a shared bank of devices. The mobile testing solutions have allowed ABN AMRO to dramatically increase the quantity and quality of tests before commercial release, to improve the quality of its mobile apps and responsive websites. The integration with HP UFT allowed the team to automate regression testing, freeing up time for manual exploratory testing for even more quality releases. While thrilled with the increase in quality, the team has also managed to shorten manual regression testing across test configurations from nearly five weeks to four days.

CONCLUSION As both Chris and Sander emphasised, mobility was the technology platform deployed, but the driving forces that initiated both projects were improved customer experience and company reputation in very competitive market environments. Mobility has created a much more intimate relationship between suppliers and buyers, dramatically increasing the 'trust and quality factors' beyond the traditional, pre‑mobile interactions and dynamic. Mobile Labs entered the mobile application testing space as a next‑generation infrastructure, recognising that the creation and delivery of critical business and commercial applications has a beginning, a middle and a release into a continuous delivery cycle, and that the integrity and security of company and client information assets must be protected at each stage of the process. The industry‑first deployment strategy of implementing an enterprise‑level, private test cloud on the customer premises reflects a very English common‑sense adage: 'if you want something important done well, do it yourself.' Mobile Labs agrees.


FROM NOVELTY TO NECESSITY? Venugopal Ramakrishnan, Digital Testing Lead, Accenture Mobility, part of Accenture Digital, details the role of testing on the road to contactless payments.


It has been several years now since contactless technologies were first introduced. It was 1983 when Charles Walton patented RFID (radio frequency identification device), and the era of contactless payments took off subsequently with the advent of NFC (near field communications). But it is only in the last five years or so, with the proliferation of complementary digital technologies, that contactless payment solutions have witnessed rapid adoption and success. Within the UK itself, cards in circulation have grown from about 61.6 million to 84.2 million1 within a year. Contactless transactions have rapidly increased to around 32 million every month. Not surprisingly, the market is expected to grow twofold in the next three years to over US$9.8 billion.2 The evolution can be traced back to Seoul, South Korea in the 1990s, when a contactless payment card was first used by bus and train commuters. There have been several adopters since then, from the first experimentation by McDonald's food outlets to Disney's magic bands more recently. As wearable payment bands begin to find their way into the hands of the digital consumer, one can only expect that the technology will become a way of life for most people very soon. It is only natural for some of the leaders in the connected world today, such as Apple and Google, to tap into this market with their own solutions.

As the future of contactless technology unfolds, with wearable technologies leading the trend in the IoT space, there are several interesting possibilities and approaches being seriously explored. Watch manufacturers, for instance, are beginning to tie up with merchant cards. NFC as a technology is enabling more and more wearables, sensors and day‑to‑day devices to communicate intelligently, creating a huge number of possibilities for enhancing business value and offering a rich consumer experience. With biometrics progressing from fingerprint scanning to waving at scanners, the automobile industry beginning to use beacon technologies in parking lots and gas stations, and EMV (Europay, MasterCard and Visa) looking to tokenisation to make mobile transactions secure and fast, the challenge has shifted: from getting the technology to work and deliver business outcomes, to getting payment solutions to perform in time, every time, while remaining secure. In the meantime, Bitcoin, with its blockchain‑based transaction management, and other distributed ledger solutions have begun to show more promise. Though Bitcoin is not yet mainstream, IDC predicts that 2% of all global payments will be Bitcoin transactions in the next two years.3

OVERCOMING THE CHALLENGES While there are several challenges facing the contactless industry and its beneficiaries, the focus must be on performance once deployed in order to guarantee adoption and continued use. Anyone passing through a turnstile at a London Underground station would recognise the challenge: the sheer number of taps every minute, and the associated transaction processing on the card. A contactless payment system is expected to support about 60 taps a minute, with a transaction time of less than 300 milliseconds. Downtime of devices is almost unthinkable, and zero tolerance is expected when it comes to read/write errors on the card or loss of transaction records – they simply cannot happen. With mobile payments, and the resulting entry of mobile devices on several OS platforms, technology challenges have only increased. The fact that several countries follow multiple standards has added to the challenges of rolling out contactless payments and ensuring devices interoperate and perform in a secure fashion. Testing and monitoring to assure quality is, as a result, no easy task. Given the stakes, it is critical for any deployment of contactless payments to adopt a test strategy that can support several thousands of cycles, cover as many devices as required – including smart cards embedded with an integrated circuit chip – and provide a fool‑proof mechanism for each transaction to be inherently secure.
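The performance budget above translates naturally into an automated check. The sketch below is illustrative only: `process_tap` is a hypothetical stand‑in for the real reader and back‑office call, and the harness simply asserts the 300 ms per‑tap ceiling and that a minute's worth of taps fits within a minute.

```python
import time

TAP_BUDGET_SECONDS = 0.300   # each transaction must complete in under 300 ms
TAPS_PER_MINUTE = 60         # sustained turnstile throughput target

def process_tap(card_id: str) -> bool:
    """Stand-in for the real reader/back-office call; always approves here."""
    time.sleep(0.005)  # simulate a fast transaction
    return True

def run_throughput_check(taps: int = TAPS_PER_MINUTE) -> list:
    """Fire `taps` sequential transactions, returning per-tap latencies."""
    latencies = []
    for i in range(taps):
        start = time.perf_counter()
        # Zero tolerance: a declined or lost transaction fails the run.
        assert process_tap(f"card-{i}"), "transaction declined or lost"
        latencies.append(time.perf_counter() - start)
    return latencies

latencies = run_throughput_check()
assert max(latencies) < TAP_BUDGET_SECONDS   # 300 ms ceiling per tap
assert sum(latencies) < 60.0                 # 60 taps fit inside one minute
```

In a real deployment the same assertions would wrap calls into the physical reader rig, and would be run for thousands of cycles rather than sixty.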

TIME AND COST SAVINGS IN TESTING For several years now, the industry has been investing in innovation and delivering automated test solutions for contactless payments. The primary objective of one of the projects Accenture has worked on was to automate and rapidly test a smart card payment system for transit service providers and retailers across the globe. One such implementation involved around 800 functional tests and 100 performance tests to be executed on several smart cards for their interaction with multiple point of sale devices. Close to 50% of functional tests and 75% of performance tests have been automated, meaning that a typical regression cycle that would once take 45 days can now be executed in under 25 days, resulting in a significant reduction in cost and improvement in quality assurance. The automated test framework included a test control system to control the card reader, actuator and turnstile emulator. The card reader‑actuator used a stepper motor to alternately tap the smart card between the card reader and the device under test. The emulator was used to simulate the closing and opening of gates. Typical performance issues found included the card reader crashing after 2000 transactions, erroneous balances on the smart card after high‑speed tapping, device registry corruptions, application reboots and device freezing. The solution has now been enhanced to test mobile payments, using a robotic arm that picks up mobile devices and taps them against readers, automated test scripts that operate the mobile application, and management of the backend process to provide an end‑to‑end automated testing solution. The solution is also capable of testing secure element APIs, application installation and payment configuration, device and application compatibility, and interoperability. This can result in over 30% savings in effort and cost, in addition to helping clients bring contactless payment solutions to market faster.
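The rig described above can be caricatured in software. Everything here is hypothetical — the real control system drives physical hardware (stepper motor, turnstile emulator) — but the sketch shows the shape of a repeated‑tap soak test that would surface defects such as balance drift under high‑speed tapping or a reader crash near the 2000‑transaction mark.

```python
class SmartCard:
    """Toy model of a stored-value card under test."""
    def __init__(self, balance: int):
        self.balance = balance  # in pence

class FareGateRig:
    """Stand-in for the test control system driving the card-reader
    actuator and turnstile emulator; a real rig commands hardware here."""
    FARE = 150  # hypothetical flat fare, in pence

    def tap(self, card: SmartCard) -> bool:
        # A real implementation would command the stepper motor, then
        # read the gate emulator's open/close state back.
        if card.balance < self.FARE:
            return False
        card.balance -= self.FARE
        return True

def soak_test(rig: FareGateRig, taps: int = 2000) -> SmartCard:
    """Repeated-tap soak: checks the gate opens and the balance stays
    exact on every one of `taps` consecutive transactions."""
    card = SmartCard(balance=taps * FareGateRig.FARE)
    for n in range(taps):
        assert rig.tap(card), f"gate refused a funded card at tap {n}"
        expected = (taps - n - 1) * FareGateRig.FARE
        assert card.balance == expected, f"balance drift at tap {n}"
    return card

final_card = soak_test(FareGateRig())
assert final_card.balance == 0  # every fare accounted for after the soak
```

The per‑tap balance assertion is the point: erroneous balances after high‑speed tapping only show up when every transaction, not just the last, is reconciled.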
Over the longer term, as digital technologies continue to come together, the test industry will be expected to rise to the occasion and address the end‑to‑end performance and security needs of retailers, consumers and digital enterprises alike.

References
1. 'Contactless Statistics', The UK Cards Association Limited, http://www. contactless_statistics/
2. 'Contactless Payment Market worth 17.56 Billion USD by 2021', Markets and Markets, PressReleases/contactless‑payments.asp
3. 'The future of payments – 2015 and beyond', http://tailwind‑solutions. com/news‑events/news‑events/the‑future‑of‑payments‑2015‑and‑beyond



Venugopal Ramakrishnan leads digital testing globally at Accenture, and has over 25 years of experience in roles including consulting, delivery, offering and new business development across telecoms, aerospace and manufacturing industries.


AUTOMATION FRAMEWORKS – BUY OR BUILD? Scott Cardow, Managing Director, PreReq Ltd, weighs the arguments for and against building bespoke test automation frameworks.



We have probably all heard a number of times over the years how test automation frameworks can speed automation development, ease maintenance, and save the neck of the poor stressed test manager desperately trying to close out his testing before the project manager finally loses patience. Automation frameworks can give great benefit for sure, but all of them present test managers with a real cost‑benefit balancing act, and perhaps the most crucial consideration is the first one.


As a tester, it is very obvious when a piece of software lands that has missed some of the fundamental elements of software creation, and it is an unfortunate truth that even though the creation of an automation framework is often undertaken by the very people entrusted to ensure quality, it is too often treated differently to other software and not given the levels of scrutiny we would otherwise apply. A framework development should be considered as a project and not just an automation activity (I'm likely to mention that again). There need to be requirements, detailed designs, skilled developers and strong test procedures to get the stable and functionally rich framework aimed for. If you are bravely aiming to build rather than buy, then the errors listed in Table 1 need to be avoided:

Error: Poor or insufficient requirements
Likely outcomes:
• High levels of rework post delivery
• Poor automation coverage
• High maintenance costs

Whether you have a development team or an automation expert writing your framework, they will be subject to their own opinion as to what makes a good framework. They may be influenced by something they used or built in a previous organisation, or they may never have built a framework before. If the requirements are not up to scratch (and let's face it, they rarely are), then undertaking the build of a framework will almost certainly end up with spiralling build and maintenance costs beyond original estimates. As always – get the requirements detailed fully and properly to avoid developers having to deliver what they think is required, which may be at odds with what is expected. Framework creation is a project, not just an automation activity, so needs to be treated as such.


Error: Lack of technical design
Likely outcomes:
• Poor quality of framework
• Lack of scalability
• Instability

Again, whilst the creation of a framework should be considered a development project and given at least a nod in the direction of an SDLC, the technical design elements of frameworks are often overlooked in favour of the 'just do it' approach. There is a price to be paid for this, which is likely to take the form of a framework that has issues with scalability and has defects when introduced to Live. The introduction to Live service itself may also be problematic: if you have strong service management in your organisation, you will be obligated to prove the framework can be supported in Live and won't break anything else in your existing infrastructure. The service introduction team is also likely to want to review your detailed technical designs prior to agreeing to have it installed in their estate. Framework creation is a project, not just an automation activity, so needs to be treated as such.


Scott has been in senior test management for over 10 years, spanning multiple sectors including financial services, telecoms and

Table 1. Errors and likely outcomes to consider when building bespoke automation frameworks










Activity | Resource | Duration
Requirement gathering | QA managers and automation architect | 1 to 2 weeks
Framework design | Automation architect & automation developers | 1 to 2 weeks
Framework development | Automation developers | 4 to 8 weeks
Framework validation | Automation developers | 2 to 3 weeks
Total | | circa 15 weeks

Table 2. A minimum set of activities required


If we treat the framework creation as a project (I think I mentioned that we should), then part of the impact analysis prior to starting would look at probable timelines and associated costs. Table 2 shows a minimum set of activities required. There are others which could be split into separate activities, such as bespoke functional libraries to be built, etc. If we estimate that the 15 weeks is for 2 FTE, and each FTE costs the organisation £350 per day, then the total is going to be approximately £52,500: 2 FTE × £350 = £700 per day, and 15 weeks = 75 days (assuming a 5‑day working week), so 75 × £700 = £52,500. This will vary significantly from organisation to organisation in many of the areas involved, such as developer skill sets, resource costs etc., so it seems reasonable to apply a large tolerance to the above estimate. Applying a ±50% tolerance to allow for the significant variables involved, the cost sits somewhere between £26,250 and £78,750. Note, this does not factor in any ongoing maintenance costs. In other words, it can be costly to build. However, a big advantage to building as opposed to buying is where an organisation has a large, mature automation team and knows exactly what it wants from a framework; bespoke building can then give exactly the results required, especially if the team has done it before and knows the pitfalls to avoid. Additionally, of course, the organisation is free to decide who can use the framework, unencumbered by any licensing restrictions a commercial framework might apply.
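The estimate reduces to a few lines of arithmetic, sketched below with the figures as given (day rate, FTE count and the straight ±50% tolerance band on the base cost).

```python
# Reproducing the article's build-cost estimate.
fte_count = 2
day_rate_gbp = 350          # cost per FTE per day
weeks = 15
working_days = weeks * 5    # 5-day working week -> 75 days

base_cost = fte_count * day_rate_gbp * working_days
low, high = base_cost * 0.5, base_cost * 1.5   # +/-50% tolerance

assert base_cost == 52_500
assert (low, high) == (26_250.0, 78_750.0)
```

Any serious impact analysis would, of course, also model ongoing maintenance, which this estimate deliberately excludes.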


The alternative to building, of course, is simply to buy a commercial framework. The advantages of this are largely based around the fact that it is pre‑built, tested, supported etc. In buying a COTS framework, a limitation can be expected: code‑level changes to the framework either won't be possible or, at best, are likely to need to wait until the vendor distributes an upgraded version (if they agreed to your requested changes in the first place).









Build (bespoke) | Buy (COTS)
Free to choose the technologies for the framework's development | Vendor‑provided technologies
Framework code‑level updates are possible | Framework code‑level updates not possible
Can be used by an unlimited number of people in the organisation | Usage limited to the number of licenses
Higher development costs | License cost likely to be cheaper than development costs (check the license agreement against your projected license usage in advance of any purchase)
Higher maintenance costs | No maintenance costs, but may have an ongoing subscription allowing service packs/upgrades
Potentially several months required to build the framework | A few days to customise the framework
User interface can be basic | COTS frameworks can be expected to offer a rich, easy to use interface
Functional libraries need to be built | Functional libraries pre‑built
Technical support needs to be provided by in‑house technical resource | Full technical support by vendor as part of the license agreement
May be delivered with a limited set of built‑in reports | Multiple report formats as standard
Documentation can be scarce and in some cases non‑existent | Can be expected to include full documentation for the product

Table 3. A comparison of building a bespoke test automation framework versus buying one

The cost, dependent on licence agreements of course, will typically be less than building bespoke, and comes with a degree of certainty as to what you will get for your money. An important consideration when looking to buy is to make sure you get a trial period from the framework's vendor and that your end users are given time to use the product during the trial. In other words, make sure the framework gives you what you need (maybe check this against some requirements?). The pros and cons of building a bespoke test automation framework are outlined in Table 3.
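To make the "build" column concrete, here is a minimal sketch (in Python, with all names illustrative and not drawn from any product in this article) of the kind of keyword-driven core a bespoke framework typically starts from. The functional libraries, reporting and tool integration in Table 3 all have to be layered on top of something like this.

```python
# Minimal keyword-driven runner: the seed of a bespoke framework.
# All names are illustrative.

class KeywordRunner:
    def __init__(self):
        self.keywords = {}   # keyword name -> implementing function
        self.results = []    # (keyword, outcome) pairs: the "report"

    def register(self, name):
        """Decorator that adds a function to the keyword library."""
        def decorator(func):
            self.keywords[name] = func
            return func
        return decorator

    def run(self, steps):
        """Execute (keyword, args) steps, recording PASS/FAIL per step."""
        for name, args in steps:
            try:
                self.keywords[name](*args)
                self.results.append((name, "PASS"))
            except AssertionError as exc:
                self.results.append((name, f"FAIL: {exc}"))
        return self.results

runner = KeywordRunner()

@runner.register("check_title")
def check_title(expected):
    actual = "Login Page"   # in practice, fetched from the app under test
    assert actual == expected, f"expected {expected!r}, got {actual!r}"

results = runner.run([("check_title", ("Login Page",))])
```

Everything the COTS column promises out of the box — rich reporting, pre-built libraries, documentation — is effort you would spend extending a core like this.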


SUMMARY Having built and also bought frameworks myself in the past, I can say with absolute certainty that the decision to buy or build should depend upon the circumstances within the organisation you are working with. This includes team sizes, experience, automation maturity levels, available resources, and available budget; the list goes on. The right choice of framework for your organisation needs to consider all of the above and more, but whether your decision is to buy or build, here's hoping you choose wisely!

TESTING THE SMART CITY Dr. Arupratan Santra, Sr. Project Manager, Infosys and Anindita Das, Scientist, Government of India, describe the testing challenges for the connected cities of the future.


An urban area is designed and developed as a smart city to create economic prosperity and a better quality of life for its citizens. A smart city authority provides key services through strong human capital, social capital and information communication technology (ICT) infrastructure.1 Various smart devices connected through cloud ICT infrastructure are able to provide smooth services. Figure 1 lists the major elements of a smart city. India's Prime Minister Narendra Modi has instigated an urban renewal and retrofitting programme dubbed the 'Smart Cities Mission', which aims to develop 100 smart cities across the country. Residents of smart cities use smart devices (things) communicating through the cloud (internet) to enjoy a sustainable, connected life. The smart nature of the city allows residents to learn, monitor, search, manage (e.g. city services and traffic), control, and play with things.




Figure 1. Elements of a smart city.

The concept of the internet of things (IoT) was coined in 1999 by Kevin Ashton. In the IoT, various physical devices or things (embedded systems or software) are connected with other devices or apps to exchange data. These exchanges happen in both directions over the internet, and also connect to backend systems built on big data or GPS/GIS data. The data transfer may be human‑to‑human, human‑to‑object, or object‑to‑object.

SMART CITY LIMITATIONS As described above, various IoT applications are built on big data, GIS/GPS data and the like to support sustainable living. Handling these data raises various issues for residents, e.g., security, storage and privacy (Table 1).

Security:
• Is the data protected against theft?
• What level of control or protection is in place?

Storage:
• Where will the data be stored?
• What are the limits on data while roaming?

Privacy:
• Who can see personally identifiable information (PII)?
• How is PII stored, protected and transferred?

Table 1. Smart city challenges.

Strong and powerful software platform solutions are required to overcome the major issues facing upcoming smart city application software; an example can be found in Figure 2. There are various concerns when setting up a smart city cloud platform, e.g., proper vendor selection with respect to security and safety, data integrity, privacy, compliance with relevant standards, etc. Usage of IoT has increased manifold year on year. The rise of mobile apps has led to specialisation, as users pick specialised programs to handle discrete needs in this modern arena. The new evolution, or revolution, in computing technology is the proliferation of smart devices and apps: for example, implanted medical devices that monitor metrics of a patient's health, or a wrist‑worn fitness tracker. These IoT devices do not act solely on human input but also act independently as programmed and report their analysis data.


Dr. Arupratan Santra is an IT professional with 16 years' cross‑cultural experience in E2E project delivery and management of multi‑million USD per quarter engagements. He is an expert in TCoE implementation, test consulting, specialised testing, vendor management and presales activity.





Figure 2. Architecture of a smart city.


Figure 3. IoT growth curve.

GROWTH OF IOT The McKinsey Global Institute predicts the IoT market will grow to between US$2.7 trillion and US$6.2 trillion2 annually by 2025. Gartner anticipates 26 billion devices3 on the internet by 2020, while ABI Research4 goes further and foresees 30 billion wireless devices connected to the internet by 2020. The number of connected devices had already surpassed the total world


population by 2005. According to Gartner's research report, the economic value‑add across different sectors is expected to reach US$1.9 trillion by 2020. In the IoT, each smart device needs a unique identifier, which requires an internet protocol (IP) address. The adoption of IPv6 has increased the number of available addresses from 4.3 billion to 3.4×10^38.
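The address-space figures quoted above are easy to verify: IPv4 uses 32-bit addresses and IPv6 uses 128-bit addresses.

```python
# Sanity-check the address counts quoted in the text.
ipv4_addresses = 2 ** 32        # about 4.3 billion
ipv6_addresses = 2 ** 128       # about 3.4 x 10^38

assert ipv4_addresses == 4_294_967_296
assert 3.4e38 < ipv6_addresses < 3.5e38
```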

IoT building blocks consist of three major components:
• Things: embedded objects with sensors.
• Network: 3G/4G, LTE or wi-fi connectivity to the internet.
• Software infrastructure: data upload/download.
Product companies across the world are focusing on manufacturing IoT‑based devices and connections, buoyed by great emphasis and incentives from governments to expand the digital footprint in their countries. A recent report from GSMA and BI shows that Asia currently has the most IoT connections (~40% of the world's IoT devices). Maintaining this level of IoT usage by smart-city residents necessitates error‑free, quality services. Proper quality checks are needed to overcome all the limitations of a smart city, its IoT infrastructure and the communication between key IoT components, so testing at each IoT stage is very important.

IOT AND TESTING Due to continuous demand for IoT‑based smart devices, device manufacturers and mobile app developers are releasing impressive products year on year. End users' expectations are very high: the best technological device or app, with a better GUI and




Figure 4. The IoT testing triangle: application testing, platform testing and communication testing.

The IoT testing challenges are many: devices and sensors, backend data integrity, and validating the environments where devices will be installed.

IOT TESTING APPROACH An IoT QA team will focus on testing the core components or architecture to verify E2E functional and non‑functional features. The team will perform the following types of testing, prioritised according to how the IoT product is used (Figure 5).

Usability testing
Usability testing is a critical factor in the success of smart devices in a smart city. Ideally these devices are tested by smart city residents/users themselves. Testers will have to find and recruit different kinds of people to test smart devices, such as:
• Regular runners to test fitness devices.
• Patients to test implanted instruments.
• Drivers to test automobile trackers.
• Dedicated cooks to check smart kitchen devices.


Service virtualisation, or customisable automation, enables a faster testing process in a production‑like environment at reduced cost. It also identifies defects at an early stage. It has two implementations in the validation system for E2E testing:
• Software-based virtualisation.
• Miniature hardware-based, production-like environment testing.
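As an illustration of the software-based option, the sketch below (all names hypothetical) shows the idea behind service virtualisation: the system under test talks only to an interface, and a virtual service with canned responses stands in for a dependency that is unavailable or expensive in test.

```python
# Illustrative service virtualisation: a stand-in for a real backend service.

class VirtualPaymentService:
    """Stands in for the real payment service with canned, configurable responses."""
    def __init__(self, responses):
        self.responses = responses

    def charge(self, account, amount):
        # Return the canned status for this account, declining unknown accounts.
        return self.responses.get(account, "DECLINED")

def checkout(payment_service, account, amount):
    """The system under test: it depends only on the service interface."""
    status = payment_service.charge(account, amount)
    return "order confirmed" if status == "OK" else "payment failed"

# E2E flows can be exercised before the real service exists.
virtual = VirtualPaymentService({"alice": "OK"})
assert checkout(virtual, "alice", 9.99) == "order confirmed"
assert checkout(virtual, "bob", 9.99) == "payment failed"
```

Because the virtual service is configurable, defects in the calling code surface early, before the real dependency is available.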




excellent speed and performance. Testing is crucial in the end‑to‑end (E2E) functional and non‑functional areas, with a focus on safety, security, data privacy and speed. As many IoT‑enabled devices/apps are user based, and multiple usability issues could arise, crowd‑sourced testing is advised. The majority of IoT devices fail due to:
• Faster release to market without E2E testing.
• Lack of platform compatibility testing.
• Lack of usability testing of mobile apps.
• Improper data integrity testing over internet communication.
Most IoT devices are mission‑critical applications, so they need high code coverage. A QA team has to perform various types of testing based on the testing triangle (Figure 4), such as cold testing, environmental testing, and E2E functional testing.

The main aspect of network testing is to check network communication by simulating different network modes. Test teams perform various tests, e.g., device‑level validation, energy consumption tests, cold tests and network performance testing using a WAN simulator and accelerator.

• Protocol conformance: a wide variety of device interfaces and interoperability needs to be validated.
• Compatibility testing: the app should be tested against different OSs, browsers and hardware configurations.
• Data recorders: check any hardware/software that collects field data.
• Security: all IoT devices are potential security breach points, so there are data privacy and security concerns across platforms.

CONCLUSION The revolution in IoT devices will require E2E testing practices and processes to deliver world‑class quality. Shift‑left test planning and design will be a crucial success factor. Lab setup is important for production‑like environment testing, and usability testing will prove the quality aspects in real use, supporting faster market delivery.
References: 1. 2. 3. 4.


Cloud applications should be tested as we test any other web application, with a few added test cases for cloud features. Along with regular functional testing, the following features should be tested for cloud-based applications:
• Dynamic scaling.
• Automated provisioning.
• Device synchronisation.
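As a hedged illustration of what a "dynamic scaling" test case might assert, the toy model below (not a real cloud API; all names and thresholds are invented) checks that capacity scales up under load and back down when load drops.

```python
import math

def required_instances(requests_per_sec, capacity_per_instance=100, min_instances=1):
    """Instances needed to serve the load, never below the configured minimum."""
    return max(min_instances, math.ceil(requests_per_sec / capacity_per_instance))

# Test cases: scale up under load, scale back down when load drops.
assert required_instances(0) == 1      # idle: stays at the minimum
assert required_instances(250) == 3    # 250 req/s at 100 req/s per instance
assert required_instances(100) == 1    # load drops: scales back down
```

A real cloud test would drive load against the deployed application and assert on the observed instance count, but the shape of the check is the same.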


A QA team needs to shift the focus of testing from a static environment to a dynamic one.

Figure 5. IoT testing types: functional, usability, security, performance, network, compatibility, device and automated testing.

Anindita Das is a research professional with 16 years' technical experience in embedded software development, E2E test consulting, and vendor and purchase management.


OUR INDUSTRY NEEDS SPECIALIST TEST RECRUITERS Everyone benefits from better recruitment, argues Gordon Baisley, CEO, Quast Ltd.




Quality assurance and testing are people‑based services. However successful the mantra that 'everyone is responsible for quality', it's best achieved when there's a role with clear accountability for assuring quality: someone making sure that processes are well defined, that everyone is clear about what is being delivered, and that this stays consistent with requirements. In conducting testing, for all our focus on automation, you still need someone to introduce the automation, to extend its boundaries, and to apply judgement about where it's focused. You need exploratory testing constantly thinking about appropriate testing in light of product change. Companies need experienced, capable testers who integrate well into the company and are continuously learning and applying new ideas and approaches from the changing environment around them. As individuals, we are equally interested in getting the right opportunities at the right companies. Think for a second about the question: do you have the right work‑life balance? Think not about your personal answer but rather about when you've heard it asked. It's not generally raised because you're doing too much 'life'; it's used when there's a suspicion work is taking over your life. We all give a massive percentage of our waking hours to work, and so we want to get connected

to the right opportunities as reliably as possible. This can all be one big virtuous circle. Matching the right people with the right roles can offer individuals better quality of life and companies success. Recruitment is the process that is tasked with making the connections, so it’s worth us all being interested in recruitment processes working well.

WHAT DOES 'GOOD' LOOK LIKE? The two roles recruitment connects are the hiring manager and the candidate. You can see the hiring manager as the 'customer', representing the company and responsible for the QA or test activities that need doing. The candidate is the 'supplier', the potential recruit who offers skills and capability and seeks a range of benefits in return. Good recruitment connects these two roles, matching each side's needs and requirements as efficiently as possible. Perhaps the best recruitment comes when the hiring manager knows someone perfect for a role. It's great when it happens, but few companies can rely purely on their hiring managers' contacts for all appointments, and most will use a recruiter to help. There are a few different types of recruiter, and I want to consider which works best.


Matrix Outline
Matrix management has come to mean more than formal management relationships: a broader state of mind, recognising our roles as cross-functional with multiple stakeholders, and being mindful to consider and seek to satisfy all groups. Recognising and exploring a matrix, as here, helps consider whether the role on the axis of the matrix has the right balance of skills, expertise and relationships, and gives appropriate priority to the desires of both sides. The matrix axes span business management processes (HR/human capital management) and IT delivery processes (project management, business analysis, architecture, development, test and support).

Matrix 1: In-house Recruitment
An in-house recruitment team has the most knowledge of the company's desired recruitment practices, more than any external supplier. However, it recruits only for one company and so has less market perspective, works less on individual job skills and, in serving across the whole company, is less able to become expert in any individual discipline.

Matrix 2: Generalist Recruitment Agency
An external, generalist recruiter will follow recruitment good practice but not have as much knowledge of individual customer processes as someone internal. In searching for candidates for a variety of companies, external recruitment agencies can build larger candidate databases across multiple disciplines than a team recruiting for only one company.

Matrix 3: Specialist Recruitment Agency
A specialist recruiter will also become expert in recruitment good practice. The advantage comes as, by specialising in a particular skill, specialist recruiters develop deeper knowledge and a larger candidate pool in that profession. By being better at connecting the right candidates and roles, while still being capable recruiters, they deliver the best combined result.

Key: Stronger colour equals higher knowledge on that side of the matrix. Stronger purple equals better overall knowledge.

Figure 1. Comparing strength of recruitment and testing knowledge of different types of recruiter.






Gordon’s QA background comes from roles as Director of QA at IFDS, and Head of Testing at Eircom, EE and T-Mobile. While at IFDS, Gordon’s team won Management Team of the Year at The European Software Testing Awards.



There seems to be a real trend towards in‑company recruitment teams looking to find more candidates themselves. At an interesting intro meeting, the head of recruitment at a prior employer told me that in two years they'd switched their balance from 80% agency : 20% in‑house recruitment to 80% in‑house recruitment. The main enabler for this was LinkedIn: the team made heavy use of InMail to engage with candidates. I think there's real opportunity in social recruitment, but I have also found flaws. The first is about penetration. When I was leaving, I started going through our org chart looking to connect on LinkedIn as a way to stay in contact. I found no more than 40% of the team on there. The second is about engagement. A lot of profiles are quite sparse, with job titles and dates rather than a full CV. And while companies and individuals increasingly use it to share thought leadership or marketing, the site doesn't achieve the regular usage of Facebook. Between a limited pool to start with, not all candidates having completed profiles and less regular usage, I'm not yet convinced LinkedIn is a one stop shop for recruitment. This approach definitely has a place in the process; if you can quickly find and contact a great candidate then terrific. But in the absence of quick results you need to be able to use other means of finding candidates. Having tried to rapidly grow a team relying on in-house recruitment, I found the business case for the shift in-house to be flawed. Recruitment agency costs were paid by the recruitment team and were a high proportion of their spend; switching from agency spend towards a larger in-house team and more spend with LinkedIn was seen as a reduction. However, costs for customers and stakeholders in the process weren't considered. Recruitment was slower, I saw fewer candidates, I spent more time chasing, and this applied to all hiring managers in my team. More important was the impact on projects needing the recruits. Gaps in the team meant delayed deliverables, time spent discussing mitigation, replanning of tasks, and undoubtedly compromises on quality. A small reduction in recruitment team costs caused significant knock-on costs for other parts of the process.


At all the large companies I've worked at, the recruitment team had a preferred supplier list (PSL) of recruitment agencies they used. Most companies on the PSL were generalist agencies, which would source roles across the whole business, or at least one large function, like IT. There are definite arguments for this type of relationship. By recruiting for multiple companies, agencies engage with a bigger pool of candidates than the recruitment team from a single company. There's the general outsourcing argument: if your company's core business isn't recruitment, then it's more efficient and less distracting to buy services from someone whose business is recruitment than to try to be expert yourself. The last is about what's easiest for the recruitment manager who engages the agencies. It takes time to find, agree contracts with and manage relationships with lots of specialist agencies. If you split IT into functions (let's say project management, business analysis, development, test and support) and you want two specialist agencies for each, that's 10 agencies. If two generalist agencies say they can do all roles, it's just two relationships. The problem with generalist recruitment agencies, which also applies to in‑house recruitment, is that you've moved away from real subject matter expertise. You've focused on applying repeatable recruitment practices rather than knowledge of the jobs. 'Key word' search becomes the main approach rather than real understanding of the roles being recruited or the experiences of the candidate. My company works for one very big customer that takes submissions from lots of agencies but uses a generalist agency to shortlist from all CVs received. I've seen many examples of great candidates being lost because the shortlisting relied on exactly matching terms in job specs, not a real understanding of the roles.
• One role emphasised a long list of technical requirements, centred around BDD, Cucumber, .Net and Selenium experience. All in‑demand skills, and the rate wasn't high, yet we found a great candidate: a really experienced tester with lots of experience in a BDD environment. Automation experience was desired and the candidate had established frameworks from scratch. He had lots of experience of .Net, and his CV repeatedly referenced all the constituents of Visual Studio (.Net, TFS, MS Test Manager) but didn't use the umbrella term. The CV was rejected as "lacks required experience: visual studio".
• Two weeks ago we were asked to find a programme support officer. On the job spec, the 'title' and 'experience required' shortened this to PSO, but the wider spec used 'programme support officer', so there was no ambiguity about the acronym. We found a great candidate with six years as a PSO in the same sector. The feedback came



back "lacks PSO experience". Looking back at her CV, we realised it used 'programme support officer' for both roles and didn't include '(PSO)'. The hiring manager reviewed the CVs that were put forward, decided none fitted the brief, and the role was relisted. We resubmitted the same candidate, doing nothing but adding '(PSO)' to her latest roles. This time the candidate was shortlisted and the hiring manager immediately asked for an interview. My main argument against a reliance on generalist recruitment is similar to that for in‑house recruitment: the approach prioritises recruitment team costs and preferences, not what is best for the end‑to‑end process. There are fewer relationships for the company's recruitment team to establish, contract and manage, but this comes at a cost of more effort by the hiring manager on each role. Candidate matching is poorer, so the hiring manager spends more time reviewing CVs, rejects more, and has to give more feedback. Candidates need to apply to more roles, as there is more of a lottery in the middle.


Like a good financial journalist, it's right that, before I go on, I declare an interest. At Quast we offer a specialist QA and test recruitment service. But I really believe my conclusion comes from experience rather than business interests. It's the other way around: I wanted to offer this service because I'd felt recruitment was much more difficult than it needs to be, and there aren't enough good specialist agencies out there. I believe the most efficient approach to good recruitment is to use specialist recruiters who understand the roles they recruit. This might not be optimal for the recruitment team, but it is for the process as a whole. Think back to what 'good' looked like. The hiring manager and candidate come from the same or similar professions. They can and do have detailed discussions in interview to check their needs match. If we accept that the hiring manager can't go and find all candidates themselves, the next best thing is that someone who can closely represent them goes out and finds candidates on their behalf. You'd ideally provide a spec, give a verbal briefing, and in a few days have the recruiter present three great candidates, all of whom could do the job and any of whom you'd be happy to recruit, it just being a matter of deciding who's best. From the candidate side, you'd like to speak to someone really knowledgeable about the roles they recruit for, who understands your experience and will represent it well, and who aims to help you be successful through the process.

Hopefully you agree that this seems a pretty good picture. Next, ask yourself whether this is more likely to happen with someone looking for different roles every day or someone who only recruits in your profession. I believe it's the latter, for these reasons:
• By being constantly immersed in one profession, they stay current with practices and tooling.
• They build a larger network of both managers and candidates in QA and test.
• They are constantly in contact with their network, knowing who is recruiting and who is looking, rather than starting fresh each time on occasional roles.
• They better recognise the essence of the requirement, rather than treating every key word as equal.
Overall, I believe the experience will be better for all involved: better conversations, better shortlists, easier recruitment. This can then lead to virtuous circles of further improvement. If both hiring managers and candidates become used to dealing with knowledgeable middlemen, they'll invest more in the process. If hiring managers know their job specs are used, they are encouraged to make them good; candidates feel a greater chance of getting specific roles and invest more effort tailoring their CVs to bring out relevant experience; the hiring manager then feels more confident about the candidate, and so on. With less knowledgeable recruiters the whole process is more scattergun and feels more of a lottery. In the process I described earlier, I found that 86 CVs were submitted for a single role recently. Conversely, that implies that to successfully get one role you might expect to have to submit 86 applications. I think both sides would prefer something closer to the 3:1 ratio I suggested as ideal.

A SELF ASSESSMENT If you’re reading this you’ve patiently listened to my argument. Let me turn it around and see if this resonates with your experience. A couple of areas to consider: • If you’re on LinkedIn and accept connections to recruiters, have you noticed a few examples of people with hundreds of shared connections with you? • If you go to QA or test forums, conferences or meetups do you find it’s generally the same recruiters you see?

If you can recognise one or both of these things I’d suggest it means the recruiter is networking well in your field. If you’ve interacted with them, have you found the conversation better than the average recruiter?

A CALL TO ARMS If you answered 'yes' to my questions, or just buy the argument, hopefully you'll agree it's worth encouraging specialists in QA or test recruitment. I want to conclude by suggesting that action is needed to move us this way. As I've described, it's rational that company recruitment teams are most conscious of their own effort and costs and will tend towards generalist recruitment, whether internal or supplier, unless encouraged to think more widely. The real benefit of specialist recruiters is to the hiring managers and candidates, who both see a higher success rate and a better journey. For specialist recruiters to survive and thrive, they need these two groups to be asking for them, and highlighting the benefits to the process as a whole.

The fact that there's only a small number of test specialist recruiters suggests it's difficult to break into the existing status quo. So here's my suggestion to help change things:
1. If you're a hiring manager:
• Engage with a QA/test specialist recruiter and find one you think you'd trust.
• Ask your recruitment team to use them next time you have a vacancy.
2. If you're a candidate:
• Recommend test-focussed agencies you've had good experience with to your manager and the recruitment team.
• Let the agency know next time you're searching for a role.
I believe everyone will see the benefits.


MEASURED AND VERIFIED Narayana Maruvada, Project Lead ‑ QA, ValueLabs, highlights process efficiency and effectiveness using metrics and measurements.


In today's world we cannot really imagine any workforce that is both enabled and driven without information technology; IT has left its imprint on every sector of work. This article outlines the importance of IT and its role in the manufacturing industry in particular, and how certain metrics and measurements help ascertain the benefits of achieving the requisite operational efficiency. Currently, every industry in the


manufacturing sector has to foresee and prepare itself to answer the questions and challenges it encounters from multiple dimensions: customers, quality, productivity, delivery, etc. To meet those challenges, organisations implement different strategies, such as continuous improvement of work practices, quality circles, etc. But when it comes to achieving operational efficiency, it is only possible through a well-defined IT-enabled system or process. When

we say 'information technology practice', the spectrum is very big, encompassing everything from conventional computer-integrated manufacturing, database systems, ERP systems, simulation and computer-aided design tools to today's contemporary big data applications and analytics. Now, in order to determine the value-add offered by the IT spectrum, one has to give thoughtful consideration to implementing the right metrics and measurements.



IT IMPLEMENTATION TRENDS – A CLASSIC JOURNEY The idea of integrating IT with manufacturing technology to improve manufacturing operations and/or its support functions is nothing new. It was conceived during the 1940s and 1950s, giving birth to computer-integrated manufacturing: the combination of information technology and factory automation. Then came the era of CNC (computer numerical control), which transformed the machining process as a whole through the automation of machine tools.


While research and improvements continued, every manufacturing company tended to look for a comprehensive solution to leverage operations and improve operational efficiency by streamlining support processes pertaining to procurement, logistics, manufacturing, etc. This is when ERP software came into existence and transformed the so-called 'institutionalised' approach in the manufacturing sector. Today, besides ERP and other follow-up trends in the IT gamut, it is all about big data and its predictive analytics, which is playing a predominant role in revolutionising the manufacturing sector and taking it to the next level.


Customer satisfaction index:
• Number of system enhancement requests per year.
• Number of maintenance fix requests per year.

Product volatility:
• Ratio of maintenance fixes (to repair the system and bring it into compliance with specifications) versus enhancement requests (requests by users to enhance or change functionality).

Complexity of delivered product:
• McCabe's cyclomatic complexity counts across the system.
• Halstead's measures.
• Card's design complexity measures.
• Predicted defects and maintenance costs, based on complexity measures.

Responsiveness to users:
• Turnaround time for defect fixes, by level of severity.
• Time for minor versus major enhancements; actual versus planned elapsed time.

Delivered defect quantities:
• Normalised per function point (or per LOC), at product delivery or ongoing (per year of operation), by level of severity, and by category or cause, e.g. design defect, code defect, defect introduced by fixes, etc.

Cost of quality activities:
• Costs of reviews, inspections and preventive measures.
• Costs of diagnostics, debugging and fixing.
• Costs of tools and tool support.
• Costs of testing and QA education associated with the product, etc.

Availability:
• Percentage of time the system is available, versus the time the system is needed to be available.

Re-work:
• Re-work effort (hours, as a percentage of original coding hours).
• Re-worked software components (as a percentage of total delivered components).

Table 1. High-level metrics used for assessing the product quality of any given business function.

But in order to ascertain the correctness, effectiveness and efficiency of the process changes, the same has to be quantitatively measured, and such measurement is only possible through suitable metrics.

Narayana is a computer science and engineering graduate with 10 years of experience developing and testing web-based applications. His major area of work and expertise is in testing applications and products built on an open-source technology stack.

T E S T M a g a z i n e | S e p t e m b e r 2 01 6



To substantiate this, here are two use-cases which outline how big data implementation has helped to improve quality, increase efficiency, and save time and money. First, the processor giant Intel narrates the success story of how it was able to considerably reduce the number of tests required for quality assurance using big data. Conventionally, the chipmaker had to assess every chip that came off its production line, which normally meant running each chip through 15,000+ tests.1 Likewise, a leading biopharmaceutical company was able to increase its vaccine yield by more than 50% thanks to implementing big data and advanced analytics.2



Number of tests per unit size: number of test cases per KLOC/FP (LOC represents lines of code)

Defects per size: defects detected/system size

Test cost: (cost of testing/total cost) x 100

Acceptance criteria tested: acceptance criteria tested/total acceptance criteria

Quality of testing: no. of defects found during testing/(no. of defects found during testing + no. of acceptance defects found after delivery) x 100

Effectiveness of testing (with regard to business): loss due to problems/total resources processed by the system

Test execution productivity: no. of test cycles executed/actual effort for testing

Table 2. Metrics considered of high importance from the application/product testing standpoint
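As a rough sketch, the ratio metrics above translate directly into arithmetic. The figures below are invented purely for illustration and do not come from any real project:

```python
# Illustrative (invented) project figures for two of the Table 2 metrics.
defects_found_in_testing = 45          # defects caught before delivery
acceptance_defects_after_delivery = 5  # defects that escaped to the customer
cost_of_testing = 120_000              # spend on testing, in any currency
total_cost = 600_000                   # total project spend

# Quality of testing: share of all defects that testing caught, as a percentage.
quality_of_testing = defects_found_in_testing / (
    defects_found_in_testing + acceptance_defects_after_delivery
) * 100

# Test cost: testing spend as a percentage of total project cost.
test_cost = cost_of_testing / total_cost * 100

print(f"Quality of testing: {quality_of_testing:.1f}%")  # 90.0%
print(f"Test cost: {test_cost:.1f}%")                    # 20.0%
```

With these sample numbers, testing caught 45 of 50 total defects (90%) and consumed a fifth of the project budget.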






MEASURING CORRECTNESS, EFFECTIVENESS AND EFFICIENCY

Having outlined a few use-cases, it is clear that the role of IT in manufacturing is not just to mimic or support the underlying processes or support functions of a typical manufacturing set-up; it has evolved into a catalyst that drives product and/or process changes. Now, irrespective of whether it is a software application, product, package or tool that is used to streamline and/or improve manufacturing operations and its process changes, these need to be thoroughly verified for their correctness and effectiveness. But in order to ascertain the correctness, effectiveness and efficiency of the process changes, the same has to be quantitatively measured, and such measurement is only possible through suitable metrics.

Metrics not only enable us to measure the current functional and non-functional attributes associated with any process, but also empower us to use the data to predict and prepare for the future. From a quality assurance standpoint, metrics provide a basis for estimation, help to quickly identify and resolve potential problems, and identify areas of improvement. Tables 1, 2 and 3 outline a standard set of metrics (note: the metrics can be customised and/or defined to meet business needs) that are used to measure the following categories:
• Product quality.
• IT applications and products testing.


Cost of finding a defect in testing (CFDT): total effort spent on testing/defects found in testing

Test adequacy: no. of actual test cases/no. of test cases estimated

Effort variance: [(actual effort – estimated effort)/estimated effort] x 100

Schedule variance: [(actual duration – estimated duration)/estimated duration] x 100

Rework effort ratio: (actual rework effort spent in that phase/total actual effort spent in that phase) x 100

Review effort ratio: (actual review effort spent in that phase/total actual effort spent in that phase) x 100

Table 3. A standard set of software testing metrics that every IT-enabled sector tends to consider

Metrics form the basis for any organisation to obtain the information required to streamline, improve and control its ongoing processes, products or services, and measuring them helps to achieve the intended results.
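The variance metrics in Table 3 are equally mechanical; this sketch uses made-up effort and schedule figures to show the sign convention (a positive variance means an overrun against the estimate):

```python
# Invented effort/schedule figures to illustrate the Table 3 variance metrics.
estimated_effort = 400.0    # person-hours estimated for the test phase
actual_effort = 460.0       # person-hours actually spent
estimated_duration = 20.0   # planned calendar days
actual_duration = 25.0      # actual calendar days
defects_found_in_testing = 23

# Effort and schedule variance: overrun (or underrun, if negative),
# expressed as a percentage of the estimate.
effort_variance = (actual_effort - estimated_effort) / estimated_effort * 100
schedule_variance = (actual_duration - estimated_duration) / estimated_duration * 100

# Cost of finding a defect in testing (CFDT): testing effort per defect found.
cfdt = actual_effort / defects_found_in_testing

print(f"Effort variance:   {effort_variance:+.1f}%")    # +15.0%
print(f"Schedule variance: {schedule_variance:+.1f}%")  # +25.0%
print(f"CFDT: {cfdt:.1f} hours per defect")             # 20.0 hours
```

Tracked release over release, a rising CFDT or a persistently positive variance is the kind of trend signal these metrics are meant to surface.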


References
1. 'Intel Cuts Manufacturing Costs With Big Data', InformationWeek, software/information-management/intel-cuts-manufacturing-costs-with-big-data/d/d-id/1109111
2. 'How big data can improve manufacturing', McKinsey Insights, business-functions/operations/our-insights/how-big-data-can-improve-manufacturing

Welcome to the 2016 edition of our 20 Leading Testing Providers. We hope you'll find this guide to selected software testing and quality assurance products and services useful. The software testing landscape changes rapidly, and we find that an annual update on the marketplace is a good starting place as you consider purchase decisions going forward.

Sponsored by



Tata Consultancy Services (TCS)

TCS is an IT services, consulting and business solutions organisation that delivers real results to global businesses, ensuring a level of certainty no other firm can match. TCS offers a consulting-led, integrated portfolio of IT and IT-enabled infrastructure, engineering and assurance services. With one of the most comprehensive portfolios of independent test capabilities on offer, TCS addresses both business and quality challenges for its global clients.

The company's independent software testing services wing has 550+ customers worldwide, over 65 mature TCoEs and over 31,000 assurance professionals (of whom over 27,000 are certified in assurance tools). TCS offers Assurance Services across the testing value cycle, including test consulting and advisory, test services implementation, and managed services for test environment and test data management. TCS continually redefines testing and quality assurance paradigms to help its clients stay ahead of the curve. An accomplishment unmatched by any other service provider, TCS' Assurance Services Unit has been recognised as a leader in the QA and testing space by all leading analyst firms: Everest Group, Gartner, HfS, IDC, NelsonHall and Ovum.

A major UK and Europe employer with associates across 50 locations, TCS has been helping UK and European businesses become more effective and competitive. TCS UK and Europe serves over 350 customers across the UK and the continent, including 44 of the FT Europe Top 100 companies. Business heads from across Europe have appreciated TCS' contribution to the continent. As per the findings by Whitelane Research for 2015-16:
• 1500 C-level executives from Europe's top companies across 13 countries gave TCS the highest general satisfaction rating (80%) in the industry, ahead of 22 other top IT services companies.
• For the third time in a row, TCS emerged ahead of its competitors.
• TCS was ranked #1 in all 9 KPIs polled, including quality, innovation, price, account management and proactivity. It was ranked #1 in both application services and infrastructure services and has retained pole position in the UK, the Nordics, Germany and France.
• TCS has been consistently ranked #1 in nine countries across Europe: the UK, France, Germany, Switzerland and Austria, and the Nordic region.


• TCS was also ranked among the top three in Belgium and the Netherlands, where it was #1 in application services and infrastructure services.
• In 2016, with a satisfaction rating of 82% – against 71% for the European IT services industry as a whole – TCS took the #1 spot among European IT services companies for the third consecutive year.

Twice in a row, TCS has been rated by Brand Finance, the world's leading brand valuation firm, as the most powerful brand in the IT services industry. "TCS has emerged as a dominant force in the IT services industry and is the strongest brand in the sector. Its brand power is indisputable," says Brand Finance CEO David Haigh.

Awards and recognitions
The following shortlist of recent awards and recognitions demonstrates TCS' deep commitment to people, communities and businesses in the UK and Europe:
1. For 2015-16, Whitelane Research ranked TCS #1 in customer satisfaction in the 2016 UK IT Outsourcing Study.
2. TCS has been rated as the world's most powerful brand in IT services by Brand Finance, the world's leading brand valuation firm.
3. Business Superbrands UK, an independent research body that canvasses marketing experts, business professionals and thousands of British consumers, adjudged TCS a 2016 Business Superbrand.
4. The Amsterdam-based Top Employers Institute rated TCS as Britain's Top Employer in 2013, 2014, 2015 and 2016.
5. In April 2016, TCS scored 96% and a four-star rating in Business in the Community's Corporate Responsibility Index.
6. The 2016 Times Top 50 Employers for Women recognised TCS as one of the UK's leading employers for women.

Commitment to the community
The premium TCS places on commitment to the community may be gauged by the following:

CSR initiatives
TCS invests millions of pounds every year in Corporate Social Responsibility (CSR) activities and supports over 40 charities across Europe. Its employees volunteer more than 100,000 hours of their time each year, supporting social causes and community projects.

+44 (0)20 7245 1800 4th Floor, 33 Grosvenor Place, London SW1X 7HY UK

TCS IT Futures
TCS' multi-award-winning community engagement programme and part of its global Purpose4Life initiative, IT Futures aims to inspire a generation of young people to work at the forefront of technological change.

TCS Eco Futures
This initiative aims to engage employees across TCS UK in reducing the company's environmental footprint. The focus is currently on energy efficiency, waste, recycling and business travel.

About TCS worldwide
TCS is part of the Tata Group, one of the largest and most innovative industrial conglomerates, with 100 companies. Building on more than 40 years of experience, TCS adds real value to global organisations through domain expertise, proven solutions and world-class service. TCS has 320,000+ employees representing 118 nationalities and partners with clients across 44 countries. Repeat customers contribute 99% of TCS' revenue.



CA Technologies

+44 (0)1753 577733 Ditton Park Riding Court Road Datchet, SL3 9LL UK

CA Technologies makes software for businesses that are development-driven, because we believe those who build the apps will own the future. We help our customers succeed in a future where every business – from apparel to energy – is being rewritten by software.

From planning to development to management to security, at CA we create software that fuels transformation for companies in the application economy. With CA software at the centre of their IT strategy, organisations can leverage the technology that changes the way we live – from the data centre to the mobile device. Our software and solutions help our customers thrive in the new application economy by delivering the means to test, deploy, monitor and secure their applications and infrastructure.

Creating great applications requires rigorous testing and high-quality test data, and after acquiring Grid Tools in 2015, CA introduced two specialist products to do just that. Customers can save time and money with CA Test Data Manager and CA Agile Requirements Designer. Thanks to more accurate requirements and automated test creation, execution and maintenance, you'll free your teams and resources to focus on higher-value tasks.

CA Test Data Manager and CA Agile Requirements Designer joined an already extensive solutions set, which includes CA Service Virtualization, CA Application Test and CA Release Automation. CA therefore offers a comprehensive and integrated tool stack, capable of meeting the most pressing challenges, from design and development, right through to testing and deployment.

Our goal is to help organisations develop applications and experiences that excite, engage and open up moneymaking opportunities for their businesses. CA solutions power innovation by helping organisations to understand, plan, manage and control infrastructure, ensuring the best possible business outcomes.

QASymphony

QASymphony is a leading provider of test case management and exploratory testing software for agile development and QA teams. QASymphony solutions help companies create better software by improving speed, efficiency and collaboration during the testing process. QASymphony offers four key solutions:
• qTest is a test case management solution that provides a better way for companies to centralise and manage test cases. qTest is robust, easy to use, and integrates with the most popular tools used by agile development teams.
• qTest eXplorer is used by teams doing extensive exploratory testing. qTest eXplorer lets testers record everything they do during a testing session and automatically creates detailed documentation of any software issues. This eliminates the need for tedious manual documentation, saving significant time.
• qTest Scenario is the only JIRA add-on that helps teams optimise and scale test-first methodologies (BDD, TDD, ATDD) across their organisation.
• qTest Insights is a self-service business intelligence tool that provides comprehensive data and analysis to measure the effectiveness of your testing team.

QASymphony offers cloud and on-premise options. QASymphony software integrates with a wide range of open-source and commercial developer tools, including Atlassian's JIRA, VersionOne, CA Agile Central/Rally, Selenium, Cucumber, eggPlant, Jenkins, Bamboo, Microsoft TFS, IBM Rational, HP ALM and many more.

Teams using QASymphony have seen significant improvements in software quality, speed and efficiency. According to a recent customer survey, 90% of customers say QASymphony has helped improve software quality in their company, 72% say QASymphony has helped increase efficiency by 40% or more, and 66% say that qTest has helped increase the speed of testing by at least 40%. Today, QASymphony has over 300 customers across 20 countries, including Salesforce, Adobe, Samsung, Verizon and Office Depot.

+1 888 573 8657 550 Pharr Road NE Suite 400 Atlanta, GA 30305 USA




Software Testing Company

+1 (720) 207 5122 3900 S. Wadsworth Blvd. Suite 485 Lakewood, CO 80235 USA

A1QA was founded in 2003, with headquarters in the USA and delivery centres in Central and Eastern Europe. We believe in strong partnerships and tailor our engagement models to varied customer needs, enabling maximum effect at significant cost savings. Currently we are a team of 400+ professional testers and QA engineers with strong technical backgrounds, who have successfully completed 1400+ projects for web, desktop and mobile products. Our company has to its credit multiple projects in the telecommunications, e-commerce, insurance, healthcare, banking and finance domains. We've been fortunate to work with such clients as Acronis, adidas, Pearson, Product Madness, Kaspersky Lab, MedicAnimal and many others.

What we do
We believe that the key to delivering a high-quality solution is to engage impartial third-party experts who will think outside the box to locate defects that could be overlooked by in-house QA specialists. We pride ourselves on delivering secure, high-quality solutions across the SDLC, from requirements to release. Our clients benefit from managed services incorporating single or multiple service lines:
• Consultancy services.
• Test automation.
• Assessment of current test practice.
• Pre-certification audit.
• Functional, performance, security, compatibility, localisation testing, etc.

To respond to technological changes in the shortest time, organise our vast experience and promote further development of our specialists, we've established eight Testing Centres of Excellence that cover the main areas of the company's activities.

Industry recognition
According to Gartner Inc., the world's leader in IT research and consulting, A1QA has made the top three of the world's 10 Best Pure-Play Testing Service Providers in the Multidomain Skills category. We appreciate the recognition of our company and will keep expanding our competence to deliver value through quality.

Amdocs

Innovation. Expertise. Results.

Amdocs Testing Services enable world-class quality products in an ever-changing business environment. Our holistic testing approach combines innovative technology, communications-specific skills and industry knowledge to ensure our customers gain testing solutions at an optimised cost, superior speed and top quality. Our communications testing portfolio highlights our strength in the IT communications testing domain, and introduces new services such as digital testing and core network testing.

Amdocs BEAT™, our award-winning testing framework, standardises and optimises the testing process based on our accumulated communications testing experience and best practices, including a repository of over 1,000,000 communications-specific test cases. Using a sophisticated analytical model to make recommendations, it makes every testing project significantly more productive and cost effective. In this way, we help our customers achieve their business goal of delivering a superior customer experience to their customers.

Our years of testing experience in the communications domain include expertise in multi-vendor environments as well as DevOps, where testing and operations teams work hand in hand. With Amdocs testing services, you can take your business applications to go-live with industry-low defect levels while reducing cost and minimising time to market.

+1 314 212 7000 Missouri 1390 Timberlake Manor Parkway Chesterfield, MO 63017 USA

Amdocs Testing is part of Amdocs, a global company (NASDAQ:DOX) with revenue of US$3.6 billion in fiscal year 2015. Amdocs employs a workforce of more than 24,000 professionals serving customers in over 90 countries.






Make your career plan future-proof. As software needs get more complex, so does the role of the software tester. BCS, The Chartered Institute for IT, is at the forefront of technological change. We can give you unrivalled support and prepare you for the future with our suite of ISTQB® software testing certifications and unique BCS certifications. Find out how BCS membership and professional certifications can keep you up to date. ISTQB® is a Registered Trade Mark of the International Software Testing Qualifications Board.



Infrasoft Technologies +44 (0) 207 332 4780 46, Gresham Street London EC2V 7AY UK

Infrasoft Technologies is a CMMI level 5 v1.3 company providing banking products and software services to clients globally. With state-of-the-art test labs at Pune, Mumbai and Chennai in India, London and Jersey in the UK, Dubai in the Middle East, and in the USA, InfrasoftTech has evolved as a valuable testing partner for its 50+ satisfied clients. InfrasoftTech provides testing-as-a-service (TaaS) in areas such as functional testing, non-functional testing, mobile apps testing, product testing, managed testing and building testing centres of excellence, and has a team of industry experts providing test advisory services. The majority of its test professionals are ISTQB-certified test engineers, including certified PMPs, Certified Test Managers and Six Sigma Black Belts.

InfrasoftTech has set up test automation, performance, security and product testing CoEs to help its customers in functional as well as non-functional testing areas. It has expertise in tools such as UFT, TestComplete, Ranorex, Selenium, Watir, Robotium, LoadRunner, NeoLoad, WAPT, IBM AppScan, HP Fortify, Burp Suite, Vega, Xanitiser and many more. Its mobile apps test lab at Mumbai is equipped with over 400 physical devices and extends testing services into the digital banking space. The innovations lab at Pune encourages its experts to explore new ideas and provide innovative solutions to testing teams working on various client projects.

Clients benefit from ready-to-use accelerators, reusable test assets, robust test frameworks and a well-defined, mature but flexible process framework. Test assets are added to a reusable asset library on completion of every project, and best practices are shared on a regular basis. InfraVarsity, our in-house training centre, organises sessions on the latest trends to keep the team updated, while the Testing Practice keeps the team motivated by organising events such as testing quizzes, debates, technical paper contests, slogan competitions and road shows.

InfrasoftTech was among the top 5 finalists in the Best Use of Tools category at The European Software Testing Awards 2015. With a current strength of 250+ test professionals, InfrasoftTech has ambitious plans to double its testing practice in the next three years.

Keytorc Software Testing Services

"Tested by Keytorc®"

Since 2005, Keytorc Software Testing Services has been helping its clients manage their critical software testing processes to reduce the total cost of producing high-quality systems. The company has strong references from the banking, insurance, telecoms, IT, commerce and other sectors. Besides being the leading regional testing services company, Keytorc also provides international software testing training as an ISTQB Accredited Training Provider®.

Innovations and achievements
After establishing offices in Romania and Azerbaijan, Keytorc opened its fourth branch, an R&D centre in the largest technology campus in Istanbul, which delivers significant improvements and innovations in software testing, with a particular focus on test methodologies, performance testing and test automation. The Keytorc R&D team recently produced the Test Capability Rating® Model (TCR), which has brought new and important perspectives to test process improvement. The framework comprises four main areas: test process, test organisation, test technology and test-related streams. TCR® has a proven record in improving test processes across a broad target audience.


Moreover, the Keytorc R&D team has brought innovations to test automation technologies, mainly resulting in improvements to test coverage, test effectiveness and productivity. These advancements are achieved by building solid integrations between several open-source test automation solutions. Keytorc has valuable achievements in reputable international testing competitions, and it holds, attends and supports prestigious testing events and conferences on a national and international scale.

Services provided by Keytorc:
• Test outsourcing.
• Test automation.
• Performance testing.
• Mobile application testing.
• Test process improvement & test maturity assessment.
• Testing centre of excellence.
• Test data management.
• ISTQB® accredited software testing training.

Overview of places, projects and clients
• Services provided on four continents.
• Projects in 20 countries.
• More than 700 clients.

+90 212 290 76 60 Giz 2000 Plaza, Maslak-Istanbul, 34398 Turkey



Maveric Systems +91 44 4344 2500 Lords Tower, Block 1
 2nd Floor, Plot No. 1 & 2 NP,
Jawaharlal Nehru Road Thiru Vi Ka Industrial Estate Ekkaduthangal Chennai - 600 032, India

Maveric Systems is a leading domain‑led, technology assurance provider with expertise across the Software Development Life Cycle (SDLC). Our ‘assurance only’ business model and integrated, asset‑based assurance services are aimed at eliminating quality, cost, and time‑to‑market risks associated with large IT transformation programs of financial services and telecom organisations. Since 2000, we have been providing integrated assurance services across the IT lifecycle for key banking domains including retail banking, corporate banking, payments, multi‑channels, wealth management, regulatory & compliance, and equities & derivatives. Our services range from functional and non‑functional testing to requirements validation, test advisory, automation, program management, etc. We offer platform‑led strategic assurance solutions across technology‑centric areas like digital, data, middleware, virtualisation and automation, along with capabilities across leading industry & open tools such as Appium, CA Lisa, Eclipse, JIRA Portfolio, Bamboo, Docker, Sonar, etc.

Maveric also has to its credit multiple assurance engagements around a number of domain-centric products, including Temenos T24, Flex Cube, Misys Equation, Finacle, Eximbills, Trade Innovation, BankTrade, Fidessa, Actimize, Mantas, Norkom, Avantgard, Trassat, Kondor+, Ingenium, BSCS, Siebel, TIBCO, CS5 and iMAL. Backed by our domain, technology and product expertise, we aim to be among the top three assurance players in the UK, Europe and US geographies. Maveric has been identified by NelsonHall as a transformation specialist serving 'Transformation Focused Clients'. We were also the recipient of the Frost and Sullivan Product Innovation award, as well as the Banker Middle East Industry Award for 'Best Banking Technology Partner' four years in a row, from 2013 to 2016. We have a dedicated offshore delivery centre in Chennai, India. Our 1000+ assurance professionals operate across centres in the UK, US, Europe, the Middle East and APAC, and provide services to more than 50 financial services organisations across the globe.

Plutora

Software has become a fundamental part of every business. An organisation's viability and competitiveness depend on how rapidly it can adapt and deliver value to customers through its software. The trend towards APIs and microservices introduces a further level of complexity that needs to be managed to ensure high quality. Traditional approaches and tools start to break down and prevent organisations from delivering software in an accelerated manner. Plutora's next-generation enterprise platform caters to the increasing need for quality and fast-paced delivery, and gives teams everything they need to manage frequent large-scale IT releases, test environments, and test and defect execution, including best-in-class reporting capabilities. Many of the world's leading and most forward-thinking companies use Plutora's products to improve the delivery of their IT releases.

Stryka is Plutora's commitment to bringing high-quality innovation to the application testing industry. The software is a cutting-edge enterprise test management tool that supports the end-to-end test lifecycle, including agile delivery and continuous delivery pipelines. Stryka provides QA and test teams with a flexible yet powerful test management system. It links requirements to test cases to defects, and provides control and visibility across an enterprise's releases, test environments and test assets. Developed to close the gaps left by previous incumbent solutions, and to be remarkably simple to configure, Stryka allows organisations to manage all their projects easily in a single database, unlike existing legacy solutions. Built from the ground up for forward-thinking teams, Stryka uses the latest web and mobile technologies, allowing you to view test metrics on any device, anywhere. Stryka is a modern, responsive and beautiful platform that people want to use every day. It is available as a stand-alone test management solution or as part of Plutora's enterprise DevOps platform.

+1 650 282 6613 565 Clyde Ave Suite 610 Mountain View CA, 94043 USA



QA Mentor +1 800 622 2602 1441 Broadway, 3rd Floor, New York, NY 10018 USA

QA Mentor is a leading global QA services provider headquartered in New York, with eight offices around the world. Established in 2010 with the aim of helping organisations from various sectors improve their QA functions, QA Mentor boasts a unique combination of 150+ offshore and onshore resources who work around the clock, supporting all time zones. The company supports clients from startups to Fortune 500 organisations across nine different industries.

QA Mentor has uniquely positioned itself in the market by providing subscription-based and customisable QA testing services for all businesses, following a hybrid approach with flexible on-demand testing services and solutions at low prices. By leveraging its in-house automation solutions and tools, including a proprietary automation framework, QA Mentor is able to speed up execution time and save money for clients. This process also creates a tailor-made solution for each client based on their specific budget and technology needs. QA Mentor also offers several other unique testing methodologies and engagement models that are easily customisable. The proprietary automation framework methodology alone includes a choice of 50 different automation testing tools and solutions, to ensure that the right one is selected for a client's specific needs. With the acquisition of a French test automation tool development company, QA Mentor now has its own test management platform as well, QACoverage.

So why do customers choose QA Mentor?
• Most economical and cost-effective QA testing services provider.
• Offers 30 different QA services, some unique to the company.
• Covers all of the world's time zones.
• Employs QA professionals with a minimum of three years' experience, 85% of whom are ISTQB certified.
• Offers training and seminar services via an e-learning portal.
• Has QA experts who know and use 50+ different automation tools, in offices worldwide.
• Contractual obligations for defect leakages and productivity targets.

Our unique services:

Ranorex

Test automation for desktop, web and mobile applications

What kind of application do you want to test? Is it installed on a Windows desktop? Does it run in a browser? Is it used on a smartphone or tablet? It does not matter which platform your software is developed for: you only need one Ranorex license to test any type of mobile, desktop or web application. The powerful UI object recognition, strong technology support, broad variety of flexible tools and quick ROI make it the ideal choice for any team and development environment, whether you are using a traditional or an agile approach. This is why over 2500 companies worldwide trust the award-winning Ranorex GUI test automation software.

The ideal software for your team and development environment
Finding test automation software that not only fits your development environment and budget but also supports working in teams can be tricky. This is where Ranorex excels. You can seamlessly integrate it into any development environment, using continuous integration and version control systems, enabling you to generate quick test automation results. At the same time, Ranorex offers tools that suit different skills in a team. While you do not need any programming skills to create robust and easily maintainable automated tests with the Ranorex Recorder, test automation experts and developers can add further functionality or create new test automation projects entirely in C# and VB.NET. In addition, new features are constantly added to further support working in teams.

Committed to excellent test automation software
The Ranorex test automation software is continuously evolving to provide innovative solutions that help you deal with new testing requirements. This is why all major and minor software updates are included in a license at no additional cost. Whenever you do need help with test automation, Ranorex will provide you with the in-depth answers you are looking for – whether through the outstanding Ranorex support team, monitored online forum, comprehensive user guide, free webinars or training courses.

+43 316 281328 Strassganger Strasse 289 8053 Graz Austria



Seapine Software +44 (0) 208 326 1840 Saracen House Swan Street Isleworth, TW7 6RJ UK

Since 1995, Seapine Software has provided development and IT organisations with the tools, technologies, and best practices needed to deliver quality software on time and on budget. Seapine's application lifecycle management (ALM) solutions drive the development of recognised brands, life-saving medical devices, and even games of the year.

TestTrack, Seapine's flagship product, helps product teams organise and manage all development artefacts, from requirements and user stories to tasks, changes, test cases, test runs, and issues. TestTrack's flexibility makes it an ideal ALM solution for any kind of process – agile, waterfall, v-model, spiral, or a hybrid. In addition, TestTrack delivers unparalleled traceability and clear visibility over product development. Seapine's product line also includes Surround SCM for version control, and QA Wizard Pro for automated functional testing and load testing. Both products integrate seamlessly with TestTrack to provide a single-vendor, end-to-end product development solution.

Seapine Software's quality-centric philosophy is based on three tenets:
1. The best process is your process. Chances are your development model is a hybrid, with some degree of agile. That's where TestTrack shines, helping you capture, collaborate, and communicate using your process with one tool.
2. Traceability is essential to improving quality. It is the key to achieving early visibility of problems, staying on top of change (e.g., knowing which test cases are invalid when requirements change), and performing many types of analysis (impact, coverage, root cause, etc.). Seapine's ALM solutions extend traceability beyond QA to cover the entire development process end-to-end.
3. Support customers with world-class service. From Seapine's award-winning product documentation to their incredibly knowledgeable and responsive sales, professional services, and technical support teams, they ensure your questions are answered and their solutions meet your needs.

Adherence to these three tenets of quality is the reason Seapine has many safety- and quality-critical customers, such as NASA, Siemens Energy, and Moody's KMV. Seapine Software is a multinational corporation with headquarters in Mason, Ohio, offices in Europe, Asia-Pacific, and Africa, and over 8500 customers worldwide.
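The traceability idea in tenet 2 – knowing which test cases are invalidated when a requirement changes – can be sketched as a reverse lookup over requirement-to-test links. This is a toy illustration only; the data and names below are invented and do not reflect TestTrack's actual data model or API.

```python
# Toy requirement-to-test traceability links (invented data).
trace = {
    "REQ-1": ["TC-10", "TC-11"],
    "REQ-2": ["TC-11", "TC-12"],
    "REQ-3": ["TC-13"],
}

def impacted_tests(changed_requirements):
    """Return the test cases touched by any changed requirement."""
    return sorted({tc
                   for req in changed_requirements
                   for tc in trace.get(req, [])})

# A change to REQ-2 flags TC-11 and TC-12 for review.
print(impacted_tests(["REQ-2"]))
```

An ALM tool maintains these links automatically as artefacts are created, which is what makes the impact analysis described above practical at scale.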


Ten10 +44 (0) 203 697 1444 30-31 Devonshire Place Brighton BN2 1QB UK

Ten10 (formerly The Test People – Centre4 Testing) is the UK’s leading software testing consultancy. Through a rigorous and creative approach to software testing – delivered through a combination of best‑in‑class technology and talented, passionate experts – we give our clients the confidence to embrace innovation and business transformation, redefining the limits of possibility.

Our key areas of expertise are:
• Test strategy.
• Functional testing.
• Test automation.
• Performance testing.
• Mobile testing.
• Agile testing.

Range of services
Clients benefit from our flexible and scalable options for delivery, ensuring that the right testing solution is delivered at the client's preferred pace. Services range from fixed-length engagements, such as a strategic review or test consultancy, through to ongoing managed services that can be deployed either on or off site depending on a client's requirements. Finally, if resource alone is the solution, we have created the largest database of professional testing resource in the UK, ready to be augmented into your existing team.

Our UK-based team works across a broad range of tools and technologies and takes an agnostic approach, ensuring the best-fit solution is created for each client. Ten10 works with a broad variety of clients and technology challenges and has in-depth experience and expertise in the following industries:
• Financial services.
• Legal.
• Professional services.
• Public sector (including major government departments).
• Retail/eCommerce.
• e-Gaming.

Clients are provided with local support through our regional offices in Brighton, Leeds, London and Manchester.

Test Direct
Test Direct's services span the IT quality assurance and testing spectrum. Established in March 2002, we are one of the UK's leading providers of independent IT testing and quality assurance services across the full testing lifecycle, for both the private and the public sector (via the G-Cloud framework). Our service is always tailored to meet our clients' specific needs and can involve individual consultants to review, manage and/or support a client's testing capability; complete test teams responsible for project and programme delivery; or a managed service/outsource to provide a client's full testing capability. We deliver our services on our clients' sites, in our UK Test Centre, or in a combination of both locations to maximise efficiency without compromising on quality. Our consultants are experienced in various delivery methodologies and able to hit the ground running, either as individuals or as a team. Our proven track record speaks for itself: since 2002 we have delivered cutting-edge projects and significant programmes of change. As our clients have grown and expanded, we've


gone on that journey with them, supporting them in successfully delivering major change programmes. Our expertise in all aspects of testing, our extensive experience of complex systems and industry regulation across the private and public sectors, and the flexibility of our approach ensure our clients get the right quality in the most effective and efficient way. We have distinct functional and non-functional testing practices that provide specialist consultancy and testing services delivery.

From a single consultant to a fully managed service
Test Direct offers flexible, bespoke solutions at a range of price levels, underpinned by our proven expertise in testing and quality assurance. Our UK Test Centre delivers functional, performance, web, app and mobile testing to multiple clients across a wide range of market sectors. Test Direct strives to deliver the right quality, faster, while raising client capabilities and improving processes. We are passionate about what we do, which is why we are a leading and trusted testing partner for our clients.

+44 (0) 1772 888 344 3a Edward VII Quays Navigation Way, Riversway Preston PR2 2YF UK



TestFort (QArea) +1 310 388 93 34 Offices at: Ukraine, Malta, USA, Spain, UK, India, Israel

Protecting your reputation since 2001
In a world where quality can't be average and most clients are already accustomed to jaw-dropping solutions, professional quality assurance and quality control services are as essential as air. We at TestFort realised this more than a decade ago and have been perfecting our mastery of the craft ever since. TestFort is QArea's independent QA lab, and the company provides full-stack solutions that include, but are not limited to, test design, QC, and manual and automated testing.

TestFort's mission
We are a QA lab dedicated to unrivalled levels of quality that satisfy our customers across a range of industries.

The core values and advantages of TestFort
• Committed to quality – we deliver only quality projects, with a high level of responsibility.
• Trust and respect – the core values in our relationships with clients.
• Cutting-edge solutions – we keep abreast of the latest technologies in software testing.
• Compliance with clients' requirements – we provide clients with suitable and flexible terms of co-operation.

• Attractive rates – our lab is already equipped and fully staffed with professional engineers, so our clients receive solutions of the finest quality without unnecessary investment.
• Confidentiality – most projects are protected by a non-disclosure agreement (NDA).

TestFort's team of engineers has followed these principles since 2001, an approach that has earned us a solid reputation in the IT field.

Approach to testing
We are well aware that every case is unique and every project requires a personalised approach, so we have mastered the art of flexibility and offer a variety of business models:
• Fixed cost.
• Time and materials.
• Dedicated teams.

Well-known and respected companies such as Microsoft, Skype, the Huffington Post, Fin.IT and Universal Electronics are 100% satisfied with our level of service, determination, performance and rates.

Testing Circle
Testing Circle is a leading software testing services provider to finance, telco, government, retail, gaming, media and FTSE 100 clients. Based in London in the UK, our success is reflected in our record of unbroken profitability and organic growth. Since 2005, Testing Circle has been building solid, lasting relationships with satisfied clients through the provision of tailored, cost-effective and flexible solutions to meet all their software testing requirements. Our services include:

Testing Circle Consulting – Testing Circle's Consulting Division is a leading provider of software and systems testing managed services to enterprise and mid-market companies. Testing Circle's consultancy services mitigate the risks involved with software implementation, upgrades, integration, system functionality and performance. Specialist software testing consultancy services include strategic consultancy, test management, functional testing, performance testing, test automation, agile testing, DevOps consultancy, integration testing and test environments management.

Staff Augmentation – Testing Circle's test professionals are available to augment test teams to deliver services across the full lifecycle. Testing Circle provides access to a flexible and cost-effective pool of test professionals to augment areas where more test expertise is required. With an unrivalled team of professionals, all with qualified backgrounds in software testing, Testing Circle can assemble the perfect team of software testing professionals.

Sparta Global Academy – Testing Circle's IT academy, branded Sparta Global, provides high-calibre graduate consultants with intensive training in testing, SDET and DevOps to work for our clients onsite or on remote testing projects. Our test consultants are equipped with the necessary practical skills to add value to any client team and offer a cost-effective, flexible and high-quality alternative to traditional contractors and offshore and nearshore services.

+44 (0) 207 048 4022 4 Copthall Avenue London EC2R 7DA UK



Tricentis +44 (0) 844 80 79 905 1 Bedford Row London WC1R 4BZ UK

The continuous testing company
Tricentis, the continuous testing company, specialises in market-leading agile software testing tools for enterprises. They help Global 2000 companies adopt DevOps and succeed by achieving automation rates of over 90%. Their integrated software testing solution, Tosca Testsuite, combines a unique model-based test automation and test case design approach with risk-based testing, test data management and provisioning, service virtualisation, and more. They are established as a reliable enterprise partner, helping to deliver significant performance improvements to testing projects.

Tosca Testsuite
Tosca Testsuite addresses the challenges in end-to-end (E2E) testing that result in high costs and delays in time-to-market by optimising, managing, and automating your testing. Tosca Testsuite begins with risk coverage optimisation: minimising your overall test portfolio while maximising the amount of business risk the tests cover. Next, the test data management tool provides you with a fully integrated set of capabilities for the design, generation and provisioning of test data. Finally, model-based test automation allows you to build robust, reusable automated test cases – all without requiring technical expertise. In addition, Tosca Testsuite provides a wealth of other tools and resources, such as API testing, mobile testing, orchestrated service virtualisation, hands-on training, and much more.

Awards and customers
Gartner recognises Tricentis as a Leader in its 2015 Magic Quadrant for Software Test Automation. Forrester calls Tricentis a 'Strong Performer' in functional automation tools, with model-based test automation as its standout feature. Tricentis' 400+ customers include global names from the Top 500 brands such as Vantiv, Toyota, Zurich Insurance, A&E, Allianz, BMW, ING, Deutsche Bank, Orange, Swiss Re, UBS, and Vodafone. Tricentis has offices in Austria, the United States, Germany, Switzerland, the UK, the Netherlands, Poland and Australia. To learn more, visit them online or follow Tricentis on LinkedIn, Twitter and Facebook.
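Risk coverage optimisation of the kind described above – a minimal test portfolio covering maximal business risk – can be approximated with a greedy weighted-cover heuristic. The sketch below uses made-up risk weights and coverage data and is not Tosca's actual algorithm, which is not documented here.

```python
# Hypothetical business-risk weights and per-test risk coverage (made-up data).
risk_weight = {"payments": 5, "login": 3, "reporting": 1}
covers = {
    "TC-A": {"payments", "login"},
    "TC-B": {"login"},
    "TC-C": {"reporting"},
}

def pick_tests(budget):
    """Greedily choose up to `budget` tests maximising covered risk weight."""
    chosen, covered = [], set()
    for _ in range(budget):
        # Marginal risk weight each remaining test would newly cover.
        gains = {t: sum(risk_weight[r] for r in covers[t] - covered)
                 for t in covers if t not in chosen}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] == 0:
            break  # remaining tests add no new risk coverage
        chosen.append(best)
        covered |= covers[best]
    return chosen, sum(risk_weight[r] for r in covered)
```

With a budget of two tests, the heuristic drops TC-B entirely: its only risk is already covered by TC-A, which is exactly the portfolio-shrinking effect the profile describes.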

Validata Group
Validata Group is the leader in enterprise software testing, release automation and legacy migration solutions, helping clients build competitive advantage, maximise the speed of implementations and upgrades, and improve time-to-market for new releases. While most testing companies focus on testing systems of engagement and mobile technologies, our focus is on testing the back end of these systems – the most difficult task.

Validata provides the first unified test automation and DevOps platform, built from the ground up with agile and cloud in mind, to bring together requirements, testing, defects, planning, resources, development and deployment. It integrates with external systems such as HP QC, CA Clarity, IBM RTM and Atlassian Jira, delivering a 'single version of the truth' and enabling continuous delivery, integration, testing and monitoring. Validata's platform drives quality and velocity for DevOps-oriented or traditional development teams, showing 10x overall productivity improvements over siloed QA tools.

Its model-based approach enables the automatic generation of test cases and eliminates the high maintenance costs of traditional script-based test automation. It bridges DevOps with continuous testing, encompassing service virtualisation, model-based test automation, effective test data management, test analytics and scriptless automated test case design. Automated testing with Validata's platform makes it possible to test the complete software stack, run more tests during the development phase and manage the list of releases.

Validata enables its clients to establish a well-planned QA strategy and to align Dev, Ops and QA/Testing as part of the true DevOps philosophy. Validata solutions are also available as software-as-a-service (SaaS), enabling customers to establish their testing environment with minimum upfront capital investment on a unique pay-as-you-go model.
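Model-based generation of the kind described – deriving test cases from a model rather than hand-maintaining scripts – can be illustrated with a toy state-machine walk. The model and names below are hypothetical and do not reflect Validata's notation or engine.

```python
# A toy login-flow model: state -> {action: next_state} (hypothetical).
model = {
    "start": {"open_app": "login"},
    "login": {"good_creds": "home", "bad_creds": "login"},
    "home":  {"logout": "start"},
}

def generate_cases(state="start", path=(), depth=3):
    """Enumerate action sequences of `depth` steps as candidate test cases."""
    if len(path) == depth:
        yield path
        return
    for action, nxt in model[state].items():
        yield from generate_cases(nxt, path + (action,), depth)

cases = list(generate_cases())
```

The maintenance saving comes from where change lands: editing one transition in the model regenerates every affected case, instead of editing each script by hand.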

+44 (0)20 7698 2731 52 Brook Street London W1K 5DS UK


Vistatec +353 1 416 8000 Vistatec House 700 South Circular Road Kilmainham, Dublin 8 Ireland

Industry leaders in scalable, global testing solutions
Software and user assistance engineering is one of the core areas of expertise upon which Vistatec was built. Over the years we have honed our software and user assistance solutions, expanded and evolved with technology changes, and consolidated our position at the vanguard of our industry.

Dedicated engineers across the globe
Vistatec has a sterling reputation in software engineering and localisation, and our dedicated team of world-class engineers ensures a fast, reliable and completely transparent service. In this vital component of software localisation, they work on bug fixing, build engineering and mastering, software analysis and, of course, comprehensive localisation and software test cycles. Every project has dedicated in-country teams and testers in each of its target markets throughout the product's lifecycle. Vistatec is one of the only companies in its field with its own dedicated software and user assistance engineering lab, maximising efficiency and security.

Bug testing, databases and retesting
The bug testing process is painstakingly documented and maintained. Bug-fix databases ensure that problems found in the testing stage do not re-emerge in new target markets or with new products. A cycle of bug reporting, bug testing, fixing, re-testing and regression is followed to make sure all potential problems are addressed before they can occur. Internal, automated tools for bug management are supplied early in the project life cycle. Our team of software engineers produces localised builds that we give to our language specialists, who carry out translation, agile localisation and language quality checks on the localised product. Every product is tested by a fluent speaker and then re-examined and signed off by a native speaker. We understand what it takes to be successful on a global scale.


Despite the range of specialised engineers working on your product throughout its lifecycle, stringent security is maintained, with NDAs; limited, staggered access to labs and works; and clear, real-time communication on the status of your project.

Vornex
Vornex is a provider of enterprise software testing solutions. Our flagship product, TimeShiftX, is date and time simulation software that lets you time travel software into the future or past for temporal (time shift) testing, validating all date- and time-sensitive functionality and code such as year-end, daylight savings, leap year, billing, rates, policies, etc.

TimeShiftX enables time travel (even inside Active Directory and Kerberos) without code changes, manual work, or server isolation, so you can perform your forward-date or back-date testing with features including:
• Instant time travel. No code changes, no altering server clocks, no isolating servers. Just turn on TimeShiftX and start time traveling.
• Active Directory compatibility. Safely time travel inside Active Directory, Kerberos, LDAP, and other domain authentication protocols.
• Total app & DB compatibility. TimeShiftX enables time travel for all applications and databases such as SAP, SQL, Oracle, WAS, .NET, and others.
• Cross-platform and cloud/container time travel. TimeShiftX is compatible with all platforms and operating systems and can run in the cloud and inside containers.
• Distributed environment time shifting. TimeShiftX allows you to easily temporal test large, distributed software stacks and environments.

Vornex serves worldwide clients across many industries including banking, finance, healthcare, state and local government, utilities, and insurance. In addition, Vornex works with industry-leading IT consultants, third-party integrators, and VARs such as Accenture, Capgemini, Wipro, IBM Global Services, etc. All these industries and enterprises rely on TimeShiftX to validate their date-sensitive applications and accelerate their time of delivery. Vornex enables organisations to increase efficiency, reduce resources and cost, and improve the corporate bottom line by providing TimeShiftX as an affordable solution that can fit any company's IT budget.
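The class of defect TimeShiftX targets can be shown in miniature by feeding date-sensitive code a simulated date. The function below is a hypothetical example of such code; TimeShiftX itself achieves this at the OS and environment level, with no code changes.

```python
import calendar
from datetime import date

def days_in_billing_month(today: date) -> int:
    """Days in the billing month for `today`, honouring leap years."""
    return calendar.monthrange(today.year, today.month)[1]

# Time travel in miniature: pass a simulated date instead of date.today(),
# so leap-year and year-end paths can be exercised now rather than in 2020.
assert days_in_billing_month(date(2016, 2, 15)) == 29  # leap-year February
assert days_in_billing_month(date(2017, 2, 15)) == 28
```

Doing this by hand requires refactoring every call site to accept an injected date, which is precisely the manual work that environment-level time shifting avoids.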

+1 408 713 1400 43575 Mission Blvd. Suite 613 Fremont, CA 94539 USA

In December 2015, The Test People and Centre4 Testing merged to create the UK's leading software testing company. We are excited to introduce our rebranded organisation, Ten10. Expert-led, flexible and scalable testing solutions; from strategic consultancy through to managed services and staff augmentation.

Services include:
• Test Strategy
• Functional Testing
• Automated Testing
• Performance Testing
• Mobile Testing
• Agile Testing
• Accessibility Testing

Find out how we can help. Please call +44 (0) 203 697 1444 or email for an initial chat about your testing requirements. Offices in Brighton, Leeds, London and Manchester.

TEST – September 2016  

Every time I meet someone new in this industry, I try to find out what his or her route into testing/QA, or perhaps even IT, was. I'm greeted...