TEST - January 2019


JANUARY 2019 | THE CYBERSECURITY ISSUE

SPECIAL

20 LEADING TESTING PROVIDERS

STAY AHEAD OF SECURITY | 2019 TRENDS & PREDICTIONS | THE BITCOIN THREAT TO SECURITY | THE WINNERS

MODERN MANUAL TESTING | LOCALISATION TESTING | DEVOPS & DEVSECOPS | TEST AUTOMATION | DATA SCIENCE




CONTENTS

04  CYBERSECURITY – we asked the industry for its software testing trends and predictions for 2019
08  THE BITCOIN THREAT TO SECURITY – the largest threat to security is the immense computing power being put behind Bitcoin
12  STAY AHEAD OF SECURITY – how do software engineering teams at the largest companies stay ahead of the security curve?
16  HOW TO BECOME A PENTESTER – train your team in penetration testing methodologies
22  GDPR – data breaches dominate headlines as businesses evolve to put data privacy first
26  The need has arisen for a more advanced and robust form of cybersecurity
30  When it comes to security, many experts believe that app users are like sitting ducks
32  Top 3 defect prevention techniques – quality can be considered earlier in the application lifecycle
34  Minimal time investment can turn your documentation writers into full-time software testers
49  European Software Testing Awards winners' PHOTO GALLERY!



UPCOMING INDUSTRY EVENTS

5 March – DevTEST Executive Workshops
The DevTEST Executive Workshops consist of 16 syndicate rooms, each with its own subject facilitated by a knowledgeable figure. Each debate session runs three times through the course of the day, with spaces limited to 10 delegates per session, ensuring meaningful discussions take place and all opinions are heard. Each session lasts 1.5 hours, giving participants the opportunity to get inside the minds of their peers and understand varying viewpoints from across the industry.
softwaretestingnews.co.uk

21-22 May – The National Software Testing Conference
The National Software Testing Conference is a UK-based conference that provides the software testing community at home and abroad with invaluable content from revered industry speakers; practical presentations from the winners of The European Software Testing Awards; roundtable discussion forums that are facilitated and led by key figures; as well as a market-leading exhibition, which will enable delegates to view the latest products and services available to them.
softwaretestingconference.com

18-19 June – The National DevOps Conference
The National DevOps Conference is an annual, UK-based conference that provides the IT community at home and abroad with invaluable content from revered industry speakers; practical presentations; executive workshops, facilitated and led by key figures; as well as a market-leading exhibition, which will enable delegates to view the latest products and services available to them.
devopsevent.com

24-25 September – DevTEST Conference North
The DevTEST Conference North is a UK-based conference that provides the software testing community with invaluable content from revered industry speakers; practical presentations from the winners and finalists of The European Software Testing Awards; executive workshops, facilitated and led by key figures; as well as a market-leading exhibition, which will enable delegates to view the latest products and services.
devtestconference.com

22 October – The DevOps Industry Awards
The DevOps Industry Awards celebrate companies and individuals who have accomplished significant achievements when incorporating and adopting DevOps practices. The Awards have been launched to recognise the tremendous efforts of individuals and teams when undergoing digital transformation projects – whether they are small and bespoke, or large complex initiatives.
devopsindustryawards.com

19 November – The European Software Testing Awards
Now in its sixth year, The European Software Testing Awards celebrate companies and individuals who have accomplished significant achievements in the software testing and quality assurance market. Enter The European Software Testing Awards and start on a journey of anticipation and excitement leading up to the awards night – it could be you and your team collecting one of the highly coveted awards.
softwaretestingawards.com

11 December – The European Software Testing Summit
The European Software Testing Summit is a one-day event, which will be held on 11th December 2019 at The Hatton, Farringdon, London. The summit will consist of up to 100 senior software testing and QA professionals, who are eager to network and participate in targeted workshops. Delegates will receive research literature, have the chance to interact with The European Software Testing Awards’ experienced Judging Panel, as well as receive practical advice and actionable intelligence from dedicated workshops.
softwaretestingsummit.com



EDITOR'S COMMENT



AN INSECURE FUTURE?



BARNABY DRACUP EDITOR

CYBERSECURITY

If 2018 was the year of the ‘mega breach’, then what will 2019 hold? Will the world see an increase in the size, scale and impact of these kinds of attacks, or will cybersecurity rise to the challenge, with companies, individuals and organisations doing their utmost to keep personal and private information out of the hands of hackers and their insidious networks?

In the wake of privacy scandals, network outages, IT infrastructure collapses and mega data breaches, cybersecurity is now a growing core concern across the board. According to the Allianz Risk Barometer 2019 (agcs.allianz.com), cyber incidents (37% of responses) are level-pegged with business interruption (37% of responses) as the top risks to businesses globally. Next, with 27% of responses, came companies’ fears regarding changes in legislation and regulation, with the resulting impacts of things such as Brexit, trade wars and tariffs. Climate change claimed 13% of responses and fears over a shortage of skilled workers received 9%.

However, after years of hacks, intrusions, data breaches and thefts, it seems that companies and organisations are still yet to understand what's required to ensure strong, well-implemented cybersecurity programmes. So, let's take a look at last year's scoreboard!

In November last year, the hotel giant Marriott announced that around 500 million customers who had made a reservation at a Starwood hotel since 2014 had had their data compromised. Right on the heels of this attack, in December last year, question-and-answer site Quora announced its platform had been invaded, with the attackers stealing information from 100 million accounts.

In the lead-up to the Pyeongchang Olympics in February last year, Russian hackers began a series of inter-related cyberattacks as retaliation for the country's doping ban from the games. Then, finally, right before the opening ceremony, they orchestrated a hack that crippled the event's IT infrastructure, knocking out the Olympics website, network devices and even its WiFi in the process. The Russian hackers used a worm called ‘Olympic Destroyer’ to wreak as much havoc as they could as the poor Korean technicians scrambled to restore service.

Towards the latter end of 2018, advertising and data-harvesting giant Facebook disclosed to the press a huge data breach in which hackers gained access to 30 million accounts by stealing user authorisation tokens – access badges that mean a user does not have to continually log in while navigating the site’s various sections.

In September of last year, British Airways revealed a data breach affecting 380,000 reservations – names, addresses, email addresses and payment card details were all stolen. Cathay Pacific also announced an even larger data breach, undertaken in March, which impacted 9.4 million travellers.

And let's not forget the real big boys: Google announced in October it was shutting down Google+ after deciding it wasn't worth the expense to support and secure it. At the same time it reported a bug in Google+ that had exposed 500,000 users' data for around three years. Google later announced that an additional bug in a Google+ API had exposed user data from 52.5 million accounts.

Stay safe out there!
BD

JAN 2019 | VOLUME 10 | ISSUE 6
© 2019 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor of TEST Magazine or its publisher, 31 Media Limited.
ISSN 2040-0160

EDITOR: Barnaby Dracup | editor@31media.co.uk | +44 (0)203 056 4599
STAFF WRITER: Islam Soliman | islam.soliman@31media.co.uk | +44 (0)203 668 6948
ADVERTISING ENQUIRIES: Max Fowler | max.fowler@31media.co.uk | +44 (0)203 668 6940
PRODUCTION & DESIGN: Ivan Boyanov | ivan.boyanov@31media.co.uk

31 Media Ltd, 41-42 Daisy Business Park, 19-35 Sylvan Grove, London, SE15 1PD
+44 (0)870 863 6930 | info@31media.co.uk | testingmagazine.com

PRINTED BY Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA  softwaretestingnews  @testmagazine  TEST Magazine Group




2019 TRENDS & PREDICTIONS With the way that software is tested, developed and delivered constantly evolving – what will this year hold? We asked several experts in their fields to share their insights about what the software testing and cybersecurity landscape will look like in 2019

I think it'll be more of the same from 2018. We'll still have people talk about automation and ML and AI (/ANI) replacing testers. And we'll still be talking about how that's all still impossible, based on the difference between asserting expectations and exploring unexpected unknowns. For me, I'm hoping that we start to do more 'continuous testing'. Not in the 'continuously run your automation in a CI/CD pipeline' sense, but around continuously conducting investigative testing earlier and throughout the SDLC: testing the ideas for software solutions, testing the artefacts we create and the UX and UI designs, then testing the code design and the code as it's being written, all before we operate and test the software and test in production. This isn't new by any means, but it's still relatively unknown to many people. So I hope 2019 brings more awareness and allows people to start incorporating this continuous testing into their strategies.

The board, rather than just IT or operations departments, is now realising it has a responsibility to understand cybersecurity and ensure comprehensive procedures are followed. Many businesses appreciate it might be a matter of time before they experience a major incident, so a more proactive approach to cybersecurity is necessary. One result will be more investment from firms in solutions which cut through the data and surface the alerts which really matter – so action can be taken quickly. Continuing skills shortages will mean businesses which lose cybersecurity expertise will be left facing the challenge of operating security systems and determining the real threats within all the noise of day-to-day business-as-usual alerts.

While cybercrime is constantly evolving, many criminals continue to make use of old ‘tested' vulnerabilities. Research we conducted last year on more than 170k Magento websites worldwide found that no region registered less than 78% of its sites at 'high risk' from hackers for failing to update security patches. There's no reason to see this changing until there's a shift in the effectiveness of vulnerability management governance. There's been a shocking lack of investment and interest in the greater environment and, very often, that ‘less-protected' environment is used as a beachhead to gain access to sensitive areas. The growth in the IoT and the media attention it will gain in relation to security will bring this issue to the fore.

DAN ASHBY HEAD OF SOFTWARE TESTING EBAY Dan is a testing enthusiast with a diverse range of experience in managing, training, mentoring & inspiring testers and teams




On top of this, I’m hoping that the external communities mix things up a bit. In our workplaces, we encourage people to mix across roles, and we understand and appreciate the value that each team member brings in these different roles. But we don’t see that in the external communities yet. Each conference is still aimed at a specific group of people – a dev conference for a specific type of programming language, a testing conference for a specific testing mindset, an Agile conference for Agile coaches, etc. It’d be great to see conferences mix more, and cater to all different kinds of people, bringing folks from different domains and roles together to learn from each other. I’m hoping we can start to break this current mould in 2019 too.

BENJAMIN HOSACK CHIEF COMMERCIAL OFFICER FOREGENIX Ben specialises in managed detection and response, defining and reducing PCI DSS Scope, account data compromises, investigations and website security



Bug bounties are trying to reinvent themselves in light of emerging startups in the field and not-for-profit initiatives such as the Open Bug Bounty project. Most crowd security testing companies now offer highly-restricted bug bounties, available only to a small circle of privileged testers. Others already offer process-based fees instead of result-oriented fees. We will likely see crowd security testing ending up as a peculiar metamorphosis of classic penetration testing. Millions of people lost their money in cryptocurrencies in 2018. Many lost out due to crypto-exchange hacks or fraud, others were victims of sophisticated spear-phishing targeting their e-wallets, and some simply lost their savings in the Bitcoin crash. People believed in the innate immunity, utmost resistance and absolute security of cryptocurrencies; now their illusions about cryptocurrency security have vaporised. The problem for 2019


ROB CATTON SOFTWARE TEST CONSULTANT INFINITY WORKS Rob is a software consultant working in web development, mobile testing and commercial areas. He is passionate about helping other testers where he can and regularly presents at conferences and events

As in most areas of the software industry, the cloud is a hot topic. Many providers are moving their data into the cloud and, as a tester, I’ve always had an interest in monitoring and data visualisation, which is now coming in useful for testing distributed systems that are often opaque by nature. As a lot of the healthcare-related work I do is data-driven – very often the case in the healthcare industry – one of the biggest responsibilities lies around how to get the most out of the data for testing


ILIA KOLOCHENKO CEO HIGH-TECH BRIDGE Ilia is CEO & founder of High-Tech Bridge and a member of the Forbes Technology Council. He is a regular columnist for the Application Security & Cybercrime website




is that many victims irrecoverably lost their confidence in blockchain technology in general. It will be time-consuming to restore their trust and convince them to leverage blockchain in other areas of practical applicability. On the other side, it’s not too bad, as potential future victims are now paranoid and won’t be low-hanging fruit for fraudsters. Cybercriminals have attained a decent level of proficiency in practical AI/ML usage. Most of the time, they use the emerging technology to better profile their future victims and to accelerate their intrusions, and thus their effectiveness. As opposed to many cybersecurity startups, who often use AI/ML for marketing and investor-relations purposes, the bad guys are focused on its practical, pragmatic usage to cut their costs and boost income. We will likely see other areas of AI/ML usage by cybercriminals. We will probably have the first cases of simple AI technologies competing against each other in 2019.

"Ultimately I think the main trends to be seen in the healthcare section of software development in times to come will be around usability and security, as those are the 2 areas which both deliver the most quality to the users and have the most potential to go wrong"

purposes without crossing the line; the vast majority of people don’t want their data to be seen by anybody apart from medical professionals – much less used for achieving software quality. This raises the challenge of how to create fit-for-purpose test data without raising ethical concerns – potentially there is room for another test tool to be able to do this, through the use of AI or otherwise. Ultimately, I think the main trends to be seen in the healthcare section of software development in times to come will be

around usability and security, as those are the two areas which both deliver the most quality to the users and have the most potential to go wrong. How these challenges are overcome is going to be down to working with both the data providers and the general public, as healthcare is a product for everybody. The bar for public service expectations in the UK is so low that a real opportunity is available to make a service that smashes those expectations and sets a new precedent for quality in healthcare.




In 2019, it’s software developers and business testers who will see the most change to their roles. In order to enhance their test productivity, these teams will embrace smart testing tools that are powered by machine-learning and artificial intelligence capabilities. DevOps teams will embrace tools and technologies to boost productivity. This includes more investment in automation of the entire DevOps pipeline's activities, from

coding through production – together with cloud and SaaS services that include lab environments, service virtualisation, big-data management and more. ML and AI tools have the ability to facilitate reliable and stable test automation – helping optimise test suites across the entire DevOps pipeline through the identification of flaky, redundant and duplicate cases. The manipulation of test data, which helps decision makers validate their software quality on demand, will be crucial to teams’ success in the year ahead. To keep pace with innovations like AR/VR and the rollout of 5G networks, DevOps teams will need to re-think their schedules, software delivery processes and existing architecture. Automation will be the key – and having the right tools, test environments and labs will be crucial. Teams will need to spend more time on developing test cases to cover innovative features like the new 5G networks, AI/ML, AR/VR and IoT. By the end of 2019, all websites will need to comply with strict accessibility requirements and, to make this less painful and impactful on the overall pipeline, these tests will need to be automated as much as possible.

Enterprises are losing vital mainframe development and operations skills. Automating processes within the software development cycle helps to mitigate the effects of that lost knowledge. Take mainframe testing, for example. It’s essential in helping find bugs earlier in the development lifecycle. However, unit and functional testing in the mainframe environment have traditionally been manual and time-consuming for experienced developers, and prohibitively difficult for inexperienced developers, to the degree that they skip it altogether. With automated testing, however, developers are able to automatically trigger tests, identify mainframe code quality trends, share test assets, create repeatable tests and enforce testing policies. When these processes are automated, developers can confidently make changes to existing code knowing they can test the changes incrementally and immediately fix any problems that arise. Enterprises want to leverage existing investments in people, toolsets and

technology to support mainframe system delivery. Central to this effort is constructing an open, cross-platform, mainframe-inclusive DevOps toolchain that empowers developers of all skill levels to perform and improve the processes necessary to fulfil each phase of the DevOps lifecycle. Application Program Interfaces (APIs) make this open delivery architecture even more extensible by enabling users to leverage existing system delivery automation investments, reducing risks and time and increasing efficiency and velocity throughout the lifecycle. 2019 will see companies advance in their digital transformation efforts by empowering developers to be more product focused and less project focused, making development more mindful and directed. KPIs will become critical in keeping digital transformation efforts on track. As the brain drain continues within development ranks, automation will be key in helping new-to-the-mainframe developers be successful by making vital processes less time-consuming and error-prone.

ERAN KINSBRUNER CHIEF EVANGELIST & AUTHOR PERFECTO Eran is an expert in web and mobile testing. A thought leader, speaker and author he specialises in nearshore and offshore QA team building and management




"DevOps teams will embrace tools and technologies to boost productivity. This includes automation of the entire DevOps pipeline, activities from coding through production - together with cloud and SaaS services that include lab environments, service virtualisation, big-data management and more"

MAURICE GROENEVELD VP ENTERPRISE BUSINESS EMEA APAC COMPUWARE Maurice is an enthusiast for creating products that fit into a unified DevOps toolchain enabling cross-platform teams to manage mainframe applications, data and operations with one process



Bug bounties, AI-driven development and machine learning will gain momentum in 2019 as the trend continues from past years. AI is the future and it will start making its way towards the mainstream in 2019. As the use of AI and automation is making existing processes more efficient and scalable, the focus in 2019 will be on including AI in the mainstream software development process. We need software systems which can analyse the patterns in the huge amount of unorganised data being generated every day and be able to learn and make decisions from it. So, AI and automation are going to be adopted massively in mainstream software development in 2019. With the increase in the market of IoT (Internet of Things), AI is becoming the


BOB DAVIS CMO PLUTORA Bob is a management and marketing executive with more than 30 years of experience leading high technology companies – from emerging start-ups to Global 500 corporations – and has a proven P&L management track record

household help. IoT is about connected devices, while AI is about connected intelligence. With the vast amount of data being generated by IoT devices, AI is the solution needed to analyse it and make meaningful decisions from it, which will help in improving existing IoT devices for the future. In 2019, AI is going to be heavily adopted in the cybersecurity industry. AI will play a very significant role in overcoming the human errors that lead to successful cyberattacks, as happened in the case of the Equifax data breach. AI in cybersecurity will also help to identify attacks efficiently, learn from the past data available and so more accurately identify future attempts by cybercriminals.

Mark is an experienced software and IT executive with business strategy, sales and enterprise expertise. He is a leader and key contributor with over 20 years of large enterprise software and business experience



PALLAVI KUMAR SENIOR ANALYST PROGRAMMER NHS Pallavi is a senior software development professional and certified Scrum Master. She is passionate about learning new technologies and challenging and improving existing processes to enhance their efficiency

In software development, the big story in 2019 will be machine learning and AI. In the coming year, the quality of software will be as much about what machine learning and AI can accomplish as anything else. In the past, delivery processes have been designed to be lean and reduce or eliminate waste, but to me that’s an outdated, glass-half-empty way of viewing the process. This year, if we want to fully leverage these two technologies, we need to understand that the opposite of waste is value and take the glass-half-full view that becoming more efficient means increasing value, rather than reducing waste. Once that viewpoint becomes ingrained in our MO, we’ll be able to set our sights on getting better through continuous


improvement, being quicker to react and anticipating customers' needs. As we further integrate and take advantage of machine learning and AI, however, we’ll realise that improving value requires predictive analytics. Predictive analytics allow simulations of the delivery pipeline based on parameters and options available so you don’t have to thrash the organisation to find the path to improvement. You’ll be able to improve virtually, learn lessons through simulations and, when ready, implement new releases that you can be convinced will work. Progressive organisations, in 2019, will be proactive through simulation. If they can simulate improvements to the pipeline, they will continuously improve faster.

Cloud management and control is going to become vitally important. Before, the mantra was 'move to the cloud at all costs'. In 2019, the 'at all costs' part will go away, and cloud waste management will become increasingly important to businesses. I don't see a significant change in global cloud trends in 2019, but I think there will perhaps be an even greater focus on keeping data in-country and specifically 'out of the US', as the geopolitical environment over there, and globally, heats up even more. Don’t expect AWS to go away, as it is still growing faster than all other cloud

providers combined. It is good to have healthy competition and having Azure, Google Cloud Platform and IBM be more viable is good for the industry. The growth of Azure, specifically, will bring more cloud success in Europe. More organisations will migrate aggressively to the public cloud and build net-new applications there. Whether workloads are in the public cloud or on-premises, IT teams increasingly will seek a platform that moves away from 'static' resource allocation to 'dynamic' resource allocation – which means automatically scaling, dynamic routing, functions-as-a-service and other 'serverless' technologies.

MARK FIELDHOUSE VP NORTHERN EUROPE NEW RELIC





THE BITCOIN THREAT TO SECURITY

Over the last decade testers have made good progress in understanding application security, but few have appreciated the changing use of asymmetric cryptography.

Testers have new opportunities to work in crypto-currencies and blockchains if they adequately understand security. But lurking on the horizon is the risk that unproven assumptions made in algorithmic and computational asymmetry face devastating destruction. In that scenario, crypto-skilled testers will be in huge demand to rebuild our world without reference to current paradigms.


THE EMPIRE

From the beginning of humanity until the 1970s it was impossible to communicate secret messages without the risk of interception and decryption by eavesdroppers. The fundamental problem was the symmetric nature of encryption and decryption. Both the sender and receiver needed to use the same secret key, and somehow that key


had to be sent from the sender to the receiver before any encoded messages could be transmitted. If an eavesdropper obtained sight of the key they could then view and change the messages at will. By WW2 this had become a huge problem for armies, navies, and air forces communicating via thousands of wireless radios. In October 1944 Walter Koenig at Bell Laboratories wrote the Final Report on Project C-43. He suggested recipients of telephone messages could add interference to prevent eavesdropping, then subtract the noise from a recording to hear the message in the clear. Although this was technically impractical at the time, it theoretically removed the need for a sender to pass a secret key to the recipient. Twenty-nine years later Clifford Cocks read the report during his sixth week of work for GCHQ in

DECLAN O'RIORDAN CCP SECURITY & INFORMATION RISK ADVISOR TESTING IT LTD Declan is a CESG Certified Professional SIRA, keynote speaker and organiser at international testing conferences. He manages risk for Government Digital Services within the Cabinet Office



Cheltenham. As a brilliant Cambridge mathematician, he immediately realised this could be applied to cryptography. Multiplying two large primes is easy, even if they are more than a hundred digits long. Factoring the numbers from the much larger product (i.e. working backwards from a semiprime to discover the primes) is very hard. Using the two prime numbers as a private key held only by the recipient, a public key (the semiprime product) could be passed to the sender to encrypt messages with an unbreakable cypher. If the public key was seen by an eavesdropper they would still be unable to decrypt the messages without the private key (the prime numbers held by the recipient and never transmitted). It took the freshly recruited spy about thirty minutes to come up with his prime number solution. The young mathematician’s discovery seemed immediately applicable to military communications, and it would become one of GCHQ’s most prized secrets. Asymmetric (public key) cryptography had begun, but remained in the shadows of UK and US spy agencies. Then the discovery was repeated in 1976 by independent researchers. New Directions in Cryptography was published by



Whitfield Diffie and Martin Hellman at Stanford, with Ralph Merkle at UC Berkeley. Their ideas were progressed into a working solution using randomly chosen prime numbers over one-hundred digits long in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman (R.S.A.) from MIT. The US National Security Agency (NSA) warned cryptographers that presenting and publishing their research could have legal consequences. The NSA also issued gag orders and tried to censor crypto research. And so the war began.
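To make the asymmetry concrete, here is a minimal sketch in Python, assuming two small illustrative primes rather than the hundred-digit values RSA actually uses: multiplying them takes one operation, while recovering them by trial division already takes around a million.

```python
# A toy illustration of the asymmetry behind public-key cryptography:
# multiplying two primes is instant, recovering them from the product is slow.
# The primes below are tiny, illustrative values, nothing like real key sizes.
import time

p, q = 1_000_003, 1_000_033        # two small primes (the "private" half)
n = p * q                          # the public semiprime, computed in one step

def factor_by_trial_division(semiprime):
    """Recover the primes the hard way: test every candidate divisor in turn."""
    candidate = 2
    while candidate * candidate <= semiprime:
        if semiprime % candidate == 0:
            return candidate, semiprime // candidate
        candidate += 1
    return None

start = time.perf_counter()
print(factor_by_trial_division(n))                  # (1000003, 1000033)
print(f"factored in {time.perf_counter() - start:.2f}s")
# With primes over a hundred digits long, the same search would never finish.
```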

THE CYPHERPUNK REBELLION

Public-key cryptography provided unbreakable secret communications for the first time in history, but not for ordinary people. The relationship between citizens and state has always been highly unequal. A counter-culture movement began to emerge with strong interests in technology and promoting individual privacy. One member of the group, Judith Milhon, combined the popular term ‘cyberpunk’ with cypher (an algorithm for performing encryption or decryption) and created a new word ‘cypherpunk’ to describe her best friends. Cypherpunk developments are


driven by philosophy to rapidly produce revolutionary code and highly advanced hardware. We can see it works in practice, but there isn’t yet a theory to describe it. Several universities in the USA are currently researching the mining chip design, delivery, and processing speed phenomenon. "We examined the Bitcoin hardware movement, which led to the development of customized silicon ASICs without the support of any major company. The users self-organised and self-financed the hardware and software development, bore the risks and fiduciary issues, evaluated business plans, and braved the task of developing expensive chips on extremely low budgets. This is unheard of in modern times, where last-generation chip efforts are said to cost $100 million or more." - Michael Bedford Taylor, University of California "The amazing thing about Bitcoin ASICs is that, as hard as they were to design, analysts who have looked at this have said this may be the fastest turnaround time - essentially in the history of integrated circuits - for specifying a problem, which is mining Bitcoins, and turning it around to have a working chip in people's hands." - Joseph Bonneau, Postdoctoral research associate, Princeton University.






Cypherpunks objected to governments and large organisations treating individuals’ data as their own property to collect, analyse and use however they liked. In 1991, cypherpunk Phil Zimmerman released PGP (Pretty Good Privacy) to enable the general public to make and receive private email communications. In 1993 Zimmerman became the formal target of a criminal investigation by the US government for "munitions export without a license", because encryption was classified as a weapon. On 9th March 1993, Eric Hughes published The Cypherpunk Manifesto. The closing statement provided the reason for cypherpunks to create bitcoin and blockchain: “The act of encryption, in fact, removes information from the public realm. Even laws against cryptography reach only so far as a nation's border and the arm of its violence. Cryptography will ineluctably spread over the whole globe, and with it the anonymous transactions systems that it makes possible”. Before Bitcoin, 98 digital currencies were created and destroyed by attacks upon the central trust authority (e.g. imprisoning the owners and/or regulating their businesses out of existence), or by hackers ‘double spending’ the currency (copying the digital money file and re-spending it while corrupting the central authority). There was clearly a need for a better digital currency that prevented double-spending and removed the attack target presented by central control.

THE GENESIS BLOCK

The root of trust for the Bitcoin blockchain is the genesis block (the first Bitcoin block in the first blockchain). Every block header references a previous block header hash to connect the new block to the previous block in an unbreakable and immutable blockchain. By summarising all previous transactions in the form of double-SHA256 hashes in the block header, a Merkle tree is built linking every block back to the Genesis block built on 3rd January 2009. Bitcoin messages in transit and the transaction log are not encrypted because the security model is reversed from the traditional central control of trust. All Bitcoin nodes are responsible for establishing trust linked back to the Genesis block using a distributed peer-to-peer consensus network. Data is visible



‘in the clear’ to enable validation by all nodes in the network.
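A much-simplified sketch of that chaining in Python may help: each header embeds the double-SHA256 hash of its parent, so changing any historic block breaks every later link. The field names and transactions below are illustrative only, not the real Bitcoin block format.

```python
# A toy chain of block headers, each committing to its parent via double SHA-256.
import hashlib, json

def double_sha256(data: bytes) -> str:
    return hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()

def make_block(prev_header_hash: str, transactions: list) -> dict:
    tx_summary = double_sha256(json.dumps(transactions).encode())  # stand-in for a Merkle root
    header = {"prev_hash": prev_header_hash, "tx_summary": tx_summary}
    header["hash"] = double_sha256(json.dumps(header, sort_keys=True).encode())
    return header

genesis = make_block("0" * 64, ["coinbase to Satoshi"])
block_1 = make_block(genesis["hash"], ["alice pays bob"])
block_2 = make_block(block_1["hash"], ["bob pays carol"])
# Tampering with the genesis transactions changes its hash, which then no longer
# matches block_1["prev_hash"], so every node rejects the altered chain.
```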

HOW BITCOIN WORKS

A cypherpunk using the pseudonym ‘Satoshi Nakamoto’ used cryptography to circumvent the mistakes that had previously led to digital currency failures. He avoided the legislative and security vulnerability of having centralised control by introducing a decentralised peer-to-peer network to verify transactions were valid and not ‘double spends’. The consensus mechanism includes an ingenious adoption of a cryptographic hash process known generically as ‘proof-of-work’. All variations on proof-of-work contain a decision point: is the hash solution valid or invalid? The decision point is a test. This test is accurately executed at far higher speeds than any other computation in history (currently around 200 billion tests in the time a photon of light travels one metre, with exponentially increasing speeds every year). To produce a valid Bitcoin block header hash and win the mining race, a miner needs to construct a candidate block filled with transactions, then use a Secure Hash Algorithm with 256-bit output (SHA-256) to calculate a hash of the block’s header that is smaller than the current difficulty target. In simplified terms, the more zeros required at the start of the hash, the harder it is to satisfy the difficulty target. It’s like trying to roll a pair of dice to achieve a score less than a target value. The probability of rolling less than twelve is 35/36 or 97.22%, while if the target is less than three, only one possibility out of every 36, or 2.78%, will produce a winning result. If the miner fails to produce a hash lower than the target they modify a variable header value known as a nonce (short for number used once), usually incrementing it by one, and retry. If they succeed, the miner includes the nonce that allowed the target hash to be achieved in their block header metadata. The nonce is then used by all the other nodes to quickly (in one operation) verify that nonce is the correct key to producing the target hash. The crucial characteristic of proof-of-work is computational asymmetry. Solving a proof-of-work problem must be hard and time consuming, but checking the solution is always quick, like a game of Sudoku. When advances in computer

processing power reduce the time taken to provide a solution, the difficulty (i.e. the number of operations) needed to calculate a solution is increased. The validity of the harder solution can still be quickly checked, usually in one operation, even if billions of extra operations are added to finding the solution. Imagine playing Sudoku when the number of rows and columns are increased every time you learn how to solve the problems faster. Satoshi Nakamoto saw the potential to use proof-of-work in machine-to-machine (M-2-M) testing and prevent invalid financial transactions being recorded in a ledger without the oversight of a trusted third party. There are many fascinating variations on M-2-M tests, such as proof-of-stake and proof-of-useful-work, but so far these testing breakthroughs have excluded the efforts of professional testers due to the general lack of cryptography skills among testers. In the case of Bitcoin, there is no central ledger. The Bitcoin ledger is distributed as a copy to every full node in the peer-to-peer network and each miner races to complete a proof-of-work solution. Satoshi Nakamoto’s most important invention is the decentralised mechanism for emergent consensus. Thousands of independent nodes follow a common set of rules to reach majority agreement built upon four processes, continuously executed by machine-to-machine testing:
1. Independent verification of each transaction received by any node, using comprehensive criteria shared by all nodes
2. Independent aggregation of transactions by mining nodes into candidate blocks, coupled with demonstrated computation through the proof-of-work algorithm
3. Independent verification of candidate blocks by every node and assembly into the chain of blocks. Invalid blocks are rejected as soon as any one of the validation criteria fails and are never included in the blockchain
4. The chain with the most cumulative computation demonstrated through proof-of-work is independently selected by every node in the peer-to-peer network. Once a node has validated a new block it attempts



to assemble a chain by connecting the block to the existing blockchain. The ‘main chain’ is whichever valid chain of blocks has the most cumulative proof-of-work associated with it.
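The computational asymmetry described above can be sketched in a few lines of Python. This toy version uses a 'leading zero hex digits' difficulty rather than Bitcoin's full numeric target, but the shape is the same: finding the nonce takes tens of thousands of hash attempts, while verifying it takes exactly one.

```python
# A minimal proof-of-work sketch: hard to solve, trivial to verify.
# The header string and difficulty are toy values, not real Bitcoin parameters.
import hashlib

def hash_header(header: str, nonce: int) -> str:
    data = f"{header}|{nonce}".encode()
    return hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()  # double SHA-256

def mine(header: str, difficulty: int) -> int:
    """Search for a nonce whose hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while not hash_header(header, nonce).startswith("0" * difficulty):
        nonce += 1                                   # failed: increment the nonce and retry
    return nonce

def verify(header: str, nonce: int, difficulty: int) -> bool:
    """One hash operation is enough to check the miner's claim."""
    return hash_header(header, nonce).startswith("0" * difficulty)

candidate = "prev_hash|merkle_root|timestamp"        # stand-in for a real block header
nonce = mine(candidate, difficulty=4)                # roughly 65,000 attempts on average
assert verify(candidate, nonce, difficulty=4)
print("winning nonce:", nonce)
```

Raising the difficulty by one hex digit multiplies the mining effort by sixteen, while verification stays at a single hash, which is exactly the property the consensus rules rely on.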

MINING SPEED

By any other industry standard, the growth in Bitcoin mining performance is extraordinary:
• 2009 - 0.5 MH/sec to 8 MH/sec (x 16 growth)
• 2010 - 8 MH/sec to 116 GH/sec (x 14,500 growth)
• 2011 - 116 GH/sec to 9 TH/sec (x 562 growth)
• 2012 - 9 TH/sec to 23 TH/sec (x 2.5 growth)
• 2013 - 23 TH/sec to 10 PH/sec (x 450 growth)
• 2014 - 10 PH/sec to 150 PH/sec in August (x 15 growth).
Looking at the current Bitcoin hashing rate we should expect the incredible advances shown above to appear as huge spikes in a graph. Amazingly, everything earlier than mid-2014 appears to be a flat line compared to the recent exponential growth. At the time of writing, Bitcoin is now performing up to 61,866,256,000,000,000,000 tests per second, or put another way, almost 62 billion tests per nanosecond. The atomic limitations of Application Specific Integrated Circuit (ASIC) chip design are now being approached decades earlier than expected.

AND NOW THE RISK OF DISASTER

There is, however, a fundamental risk that could undermine proof-of-work and all variations used by blockchains. Proof-of-work is totally dependent upon the existence of computational asymmetry. In Bitcoin, the level of effort to solve a problem (calculate a valid hash and provide the nonce) must be high, but the level of effort to check the solution (use the nonce to check the hash is valid) must be low. To picture how asymmetric the current proof-of-work difficulty target is, imagine that checking the winning calculation is equivalent to the physical volume of an amoeba, while calculating a solution by exhaustive searching is equivalent to the volume of the whole planet Earth. This level of difficulty

achieves a steady average addition rate of one new block onto the blockchain every ten minutes, while also preventing cheating. The risk begins with the assumption that proof-of-work is a nondeterministic polynomial (NP) time problem. NP problems have the characteristic of being hard to solve yet quick to check the result. Jigsaw puzzles are NP problems. The only way to be sure a pile of jigsaw pieces builds a complete picture is to try fitting every piece into place. At the end of the task it is instantly obvious if the jigsaw is complete. We shall follow the general assumption that proof-of-work is an NP problem. Now comes the biggest assumption of all, one that is implicitly made by all blockchains: P ≠ NP. P represents polynomial complexity problems such as addition and multiplication, for which there exists a polynomial time algorithm that generates a solution. i.e. can be solved ‘quickly’. NP represents nondeterministic polynomial complexity problems such as Rubik’s cube and prime number factorisation, which consist of two phases: firstly, guess the solution in a non-deterministic way; secondly verify or reject the guess using a deterministic algorithm that is performed in polynomial time. All P problems exist within the set NP, but no-one has been able to prove if P problems could be equal to NP, or definitely not equal to NP. The working assumption adopted by all blockchains is that P does not equal NP. While P vs NP is rarely discussed by testers, it is the greatest unsolved problem in computer science, and possibly all of mathematics. The implications are so enormous the Clay Mathematics Institute set a $1 million prize to anyone providing a proof that either P = NP, or P ≠ NP. It is one of seven Millennium Prize Problems set on 24th May, 2000. Anyone able to solve proof-of-work in polynomial time can avoid the cost and effort of working through all possible solutions by arriving at the target in a single step. NP problems are like looking for a needle in a haystack, which conventionally requires looking through the entire haystack until the needle is found. A P = NP solution doesn’t require faster searching, it requires an approach that doesn’t involve searching at all.

Metaphorically speaking, a solution would be like pulling a needle from a haystack using a super-powerful magnet. A mathematician may discover a new solution anytime, yet an increasing risk is the advance of quantum computing. Once computing steps beneath the nanometre scale and inside the atom, the rules change. We will have entanglement, interference, superposition and decoherence to consider. Most importantly, answers will not be in a binary state. It may become possible to return many, perhaps all, possible answers simultaneously using Shor’s algorithm. This looks increasingly like a route to solving NP problems in polynomial time. Experts in general-purpose quantum computers don't expect them to replace classic computers for more than a decade, and expect them to be highly expensive. But let’s not forget the mining chip design, delivery, and processing speed phenomenon. Miners are financially incentivised to accelerate the development of quantum computers targeted at solving proof-of-work, and the ultimate miner would solve proof-of-work in polynomial time to collect newly issued coins at high frequency. From there it is a short step to disaster. When a miner can submit valid blocks to nodes with the correct SHA-256 header hash as fast as the blocks can be tested (i.e. in polynomial time), emergent consensus is defeated.

THE END IS NIGH?

There are wider ramifications. If P = NP, every public key cryptosystem we have, such as RSA, Elliptic Curve Cryptography (ECC), Secure Shell (SSH) and Transport Layer Security (TLS), becomes solvable in polynomial time. This would mean the end of privacy and secrecy as we know it. The quantum cryptography era would also be the beginning of a new frontier for testing. In the scramble to re-establish a secure means of transmitting and storing data, any tester with an understanding of cryptographic schemes resistant to quantum computing – quantum key distribution methods like the BB84 protocol, and mathematics-based solutions such as lattice-based cryptography, hash-based signatures, and code-based cryptography – would be worth their weight in gold. There might still be enough time for smart testers to prepare.




COLLABORATION AND CODE TESTING FOR START-UP SECURITY

Collaboration and code testing can bring enterprise security to start-ups. How do the software engineering teams at the largest and most innovative companies stay ahead of the security curve?

Technology powerhouses such as Google, Microsoft and NASA know how to get security right. They’ve invested huge sums of money into the technology, people and processes that help their engineering teams build secure software. Engineers at these companies have been open about their own approaches to product security. The Microsoft Security Response Center (MSRC) keeps a blog where they often post approaches and ideas to improve enterprise security. Michal Zalewski, VP of Security at Snap and previously head of product security at Google, summarized his thoughts on how to manage a product security team in an illuminating blog post. Having worked with companies like these for years, I’ve learned a thing or two about what it takes to put security first in the world’s largest enterprises. What lessons can we learn from them? How do the software engineering



teams at the largest and most innovative companies stay ahead of the security curve? Here are seven things that companies of any size can do to change team structure and business practices and improve security mindset and posture. As you will see, these don’t require investments akin to those made by technology behemoths – you can stand on the shoulders of these giants.

USE THE COMMUNITY

Skilled security researchers are hard to find and in constant demand. Even the largest enterprises struggle to hire enough researchers to fill their needs, and many times those researchers don’t have the budget to support an ideal security structure. In 2017, an EY (Ernst & Young) survey revealed that 87% of companies need up to 50% more budget for security, yet just 12% actually expect an increase of more than 25%. In a world where an average

OEGE DE MOOR CEO AND CO-FOUNDER SEMMLE Oege spent 21 years as a professor of computer science at Oxford during which time he joined Microsoft as a visiting researcher, working with Charles Simonyi - the original creator of Word and Excel - on a new type of programming environment



breach takes more than six months to identify and a further 66 days to contain, and each one costs an organisation an average of $3.62m (£2.82m), top-of-the-line security is imperative. Many of the world’s mega-corporations have recognised the futility of relying solely on internal security measures, and have instead turned to the open source community. Microsoft’s acquisition of GitHub was the biggest in a series of commitments the technology giant has made to open sourcing, and it aligns with other community-based security practices the company employs, such as their monthly Patch Tuesday program. Open source security makes sense for a simple reason – the more people that see code before it goes into production, the more likely any vulnerabilities within that code will be detected. In many ways, it’s no different than writing a novel. Everyone makes mistakes and writers often miss the typos and other problems in their own copy. Multiple editors, reading the book with a critical eye, are far more likely to see an error. When we put our code in front of other people, we greatly increase the quality of that code.

THE PRODUCT SECURITY TEAM

Taking advantage of external resources is one thing, but organisational structure is important too, as eloquently argued

in the blog post by Michal Zalewski I quoted earlier. A common trait at all the most security-forward companies is that there is a lack of separation between security and engineering. Integrated, the two functions are able to work together, prioritising security in the development cycle, and looking for opportunities to automate security expertise and integrate it in the workflow. As just one example, in the recent news that Facebook’s security chief Alex Stamos resigned, the New York Times quoted an internal memo stating that the security team would no longer operate as a stand-alone entity, but instead work more closely with product and engineering teams. This trend has a name: the product security team. Ideally, this team exists within the engineering organisation, reporting to the CISO, or a similar function. While the CISO looks after IT security much more generally, the product security team is responsible for addressing security in the products being developed internally. Failing to move product security into engineering can have grave consequences: Security teams want to make sure bugs are discovered and fixed, while the developer team is pushed to deliver on new features, creating a tension between two teams that should be in lockstep. Instead of keeping security top of mind as they work, some developers fail



to respond to the reports by the security team. Security teams are incentivised to report as many problems as possible (to cover their butts in case of a breach), yet developers don’t have time to look at all the reports, because many of them are in fact not real bugs, but false positives. This separation is antiquated, and it is a detriment to security progress within the organisation. True product security can only be achieved when every developer in the pipeline takes responsibility for the security of the code that he or she writes. The product security team’s job is to give developers the knowledge and tools to do just that. There is no universal playbook for how these important tech companies handle security, but they are sharing their tried and true methods with the community — something every company that excels at security should do. As an industry, we need to think of security as part of an ecosystem and realize that sharing best practices is the best way to individually and collectively improve.

VARIANT ANALYSIS

Open sourcing your security can certainly help, but no security strategy can truly eradicate vulnerabilities. Product security teams, while effective, often can’t keep up with the volume of code changes required and struggle to provide developers with






effective solutions to prevent vulnerabilities from reaching production. Even when security strategies do result in identification and containment of vulnerabilities, there remains an overwhelming likelihood that variants of that vulnerability are still able to be exploited. Finding vulnerability variants is an unrealistic task for security researchers, even if they had no other responsibilities. Windows contains tens of millions of lines of code; the software in connected cars includes approximately 100 million lines; and Google’s portfolio of Internet services includes about two billion lines. That much code is impossible to manually secure, and researchers have no way of knowing how many variants may exist within the codebase. A variant analysis engine works by enabling security teams to quickly explore codebases of any size, and intelligently locate zero-days and variants of vulnerabilities. For instance, our QL engine uses machine learning to identify the patterns in code that led to the identified vulnerability. From there, it is able to rapidly search the codebase, alerting researchers when it identifies similar patterns. This process not only greatly reduces the number of vulnerabilities in a given codebase, it also reduces the number of false positives that often come up in other security tests. Because of this, security researchers can focus just on vulnerabilities they know are worth investigating and patching, freeing them up to work more effectively and efficiently.
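As a deliberately crude illustration of the variant-analysis idea (not of the QL engine itself, which queries a semantic model of the program rather than raw text), the sketch below searches a codebase for further occurrences of a pattern learned from one known vulnerability; the pattern and repository layout are hypothetical.

```python
# A much-simplified variant hunt: once one bug is understood, scan the whole
# codebase for code that matches the same textual pattern. Real variant
# analysis works on a semantic model of the code, not on regular expressions.
import pathlib
import re

# Pattern for the known bug class: string-formatted SQL passed to execute().
VULNERABLE_PATTERN = re.compile(r"execute\(\s*[\"'].*%s.*[\"']\s*%")

def find_variants(repo_root: str):
    for path in pathlib.Path(repo_root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if VULNERABLE_PATTERN.search(line):
                yield path, lineno, line.strip()

for path, lineno, line in find_variants("."):
    print(f"{path}:{lineno}: possible variant -> {line}")
```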

SAFER APIS TO PREVENT VULNERABILITIES

Writing clean code is, in some ways, like eradicating a disease: prevention is better than a cure, and ideally you make


certain common mistakes impossible. For instance, common cross-site scripting vulnerabilities can be avoided by judicious use of automatic context-aware escaping. Similarly, the notorious problem of SQL injection can be avoided if you give up the ability to run arbitrary string data as queries on a database. Instead you should use a restricted API that builds up the queries in a structured manner, for instance as prepared statements.
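A small sketch of that restricted-API idea, using Python's built-in sqlite3 module as an example database layer: the query structure is fixed up front and user input is only ever bound as a value, so it cannot change the query itself.

```python
# Prepared-statement style queries: user input is data, never SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # a classic injection attempt

# Dangerous: arbitrary string data becomes part of the query itself.
# rows = conn.execute(f"SELECT role FROM users WHERE name = '{user_input}'")

# Safe: the placeholder keeps the query structure fixed; the input is a value.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())            # [] - the injection string matches no user
```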

CATCH VULNERABILITIES AT TIME ZERO

Mistakes will happen, even with perfectly designed safe APIs. It’s important, therefore, to run continuous analyses that catch mistakes that may have slipped through. While many people would think to wait, the perfect point to do that is at code review time: close enough to time zero so that the developer’s focus is still with the relevant code change, and yet with a time budget to run deep analysis. In an article written by the Google code analysis team last year, they go through the various elements needed for success, pointing out that it’s critical that the creation of new analyses can be crowdsourced, with everyone chipping in to define what good, secure coding standards are, and performing real-time updates of the analyses when new classes of vulnerabilities are identified.

RED TEAMS / PEN TESTERS & BLIND SPOTS

To identify your blind spots, use internal Red Teams to do penetration testing, or hire an outside company to try to attack your systems. It’s an investment, but it enables teams to catch problems of a type that are too hard to detect mechanically. Bug bounty programs, such as those administered by BugCrowd and

HackerOne, can be tremendously effective to help find vulnerabilities that expose the code to exploitation. However, it’s a waste of money if you don’t first implement automated means to stop the more obvious holes. In fact, advances in AI now make it possible to apply some of the fuzzing techniques that professional pen testers employ, but do so completely automatically. The Microsoft Security Risk Detection service is an example of this trend. When pen testers or automated fuzzers find new blind spots, you want to eliminate them once and for all by creating new code analyses, as described in the previous paragraph.
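In the same spirit, a toy fuzzer is only a few lines: generate large volumes of random input, feed it to the code under test and record anything that crashes. The parse_record function below is a hypothetical stand-in for real code under test, and the whole sketch is far simpler than the coverage-guided fuzzing a professional service applies.

```python
# A toy random fuzzer: any unhandled exception is treated as a finding.
import random
import string

def parse_record(raw: str) -> dict:
    """Hypothetical target: expects 'key=value;key=value' input."""
    return dict(pair.split("=") for pair in raw.split(";"))

def fuzz(target, iterations=10_000, max_len=40):
    crashes = []
    for _ in range(iterations):
        data = "".join(random.choices(string.printable, k=random.randint(0, max_len)))
        try:
            target(data)
        except Exception as exc:               # record the crashing input and error
            crashes.append((data, repr(exc)))
    return crashes

findings = fuzz(parse_record)
print(f"{len(findings)} crashing inputs found")
```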

MAKE EXPLOITS VERY HARD

You’re not going to stop all vulnerabilities that are introduced into the source code; you have to be prepared for the worst, making sure that they are extremely hard to exploit. An exciting technique that’s currently gaining traction is moving target defence – randomising heap layout, or indeed code layout, so that attackers have a very hard time figuring out how to exploit weaknesses. ASLR (Address Space Layout Randomisation) is used as additional protection both in Windows and in Android, for example. In nearly all of these examples, there is a common thread: collaboration. The most secure enterprises remain that way because of a company-wide commitment to putting security first during every aspect of development. No matter the size of your budget or the number of employees, security can and should be top of mind. Not many companies have the resources of a Microsoft or Google, but being strategic and smart about your security priorities and following best practices will help you go far.



16 Sessions | 5 March 2019 | Park Inn by Radisson, London Heathrow | REGISTER YOUR PLACE TODAY: softwaretestingnews.co.uk/dtew

About DevTEST Executive Workshops DevOpsOnline.co.uk and TEST Magazine, the authoritative, thought-provoking, and informative sources of information on all issues related to software testing and development, operations and digital transformation, must consistently evolve to ensure they deliver their readers the most up-to-date information on the latest industry trends. In order to achieve the above, the DevTEST Executive Workshops are a pivotal mechanism, bringing together Heads of IT, Heads of Departments, Chief Engineers, Chief Architects, Release/Delivery Operations Managers, Software Developers and Testing professionals through a series of debates, peer-to-peer networking, and supplier interaction.



BECOMING A SUCCESSFUL PENTESTER In this day and age of regular data breaches it has become common practice to require pentesting as part of standard best practices in cybersecurity and compliance frameworks. This article aims to show you how to grow your pentesting skills and your in-house teams. This article will cover the principles and steps required to become a successful penetration tester. Penetration testing, generally known as 'pentesting', is of utmost importance in this day and age of increasingly common data breaches – and the penalties and brand damage that will likely result. The popularity of pentesting has greatly increased the demand for good pentesters, so much so, in fact, that there are entire organisations that specialise in this practice. However, if your organisation cannot afford to procure professional pentesting services, you may be forced to grow your pentesting team in-house, and that is why this article will be useful. This article could therefore be considered as Pentesting 101; however, because the area of security testing tools is rapidly changing and is in a constant state of flux,



this article is not tool-centric. In a field that is as important and rapidly changing as pentesting, it is highly advisable that you collect and curate as much information as possible on the security-related tools that are available, to stay abreast of the latest technologies and testing methods. Figure 1 shows the subdirectories of my own digital library of security tools, where I keep this information for learning and reference as I collect it.

WHY IS PENTESTING IMPORTANT?

Pentesting is important primarily for two major reasons: 1. You want to identify (and ultimately remediate) your digital infrastructure weaknesses before the bad guys find and exploit them.

WILLIAM FAVRE SLATER III PRESIDENT & CEO SLATER TECHNOLOGIES INC. William Favre Slater III is a senior IT consultant, project manager and author in cybersecurity, blockchain and data centers. His professional career began in the United States Air Force, and he has more than three decades of experience in IT



Figure 1. 00_2017_Pentesting_Tools 00_2018_Pentesting_Tools 00_2019_Tools 00_Angry_IP_Scanner 00_Applied_Security_Visualization 00_Arachni 00_ASTo_for_IoT 00_Autosploit 00_AWS_Zeus 00_Barkley_Cybersecurity_Toolkit 00_Belati 00_Brutal_Kangaroo 00_BruteX 00_Burp 00_Cartero 00_CloudPiercer 00_CrackMapExec_Pentesting Active Directory Environments 00_Crowbar_Web Application Brute Force Attack 00_Cryptomator 00_CurrPorts 00_cve-search 00_CyberObserver 00_Detect_KRACK_Attacks 00_DNSDB 00_DNSDiag 00_dnsmap 00_Dripcap 00_DumpsterFire Toolset Security Incidents In A Box 00_Elite_Field_Kit_by_Hak5 00_Emergynt_Instinct_Engine 00_Eternal_Blue 00_FAME__Open Source Malware Analysis Platform 00_Fiddler 00_Gramfuzz 00_Hacking_Tools 00_HellRaiser_Vulnerability_Scanner 00_HijackThis 00_Incident Response Forensic Framework-NightHawk 00_IR_Rescue_Windows_Forensic_Data_Collection 00_Kai_Pfiester 00_Kali_Linux 00_Knowledge_Management 00_Kvasir_Penetration Test Data Management 00_LAN_Turtle 00_Linux_Distributions_for_Pentesting 00_LOIC--Low Orbit Ion Cannon 00_Lordix 00_Machinae__Security Intelligence Collector 00_Maltego 00_Mantra 00_Metasploit 00_Mitre_Attack_Test_Tools 00_Mobile_Security_Framework_MobSF 00_Morpheus_Ettercap 00_Netcat 00_Netcraft 00_Netsparkler 00_Nexpose 00_NextWare_Cyber_Collaboration_Toolkit 00_Nikto 00_Nishang_for_Powershell_Penetration_Testing 00_nmap 00_NOSQL_Exploitation_Framework 00_Open Source Firewall - OPNsense 00_OpenVAS 00_Open_Source_Network_Security_ 00_Open_Source_Reconnaissance 00_Ostinato 00_OWASP Mutillidae_Web_App_Pentetsting 00_OWASP Offensive Web Testing Framework OWFT 00_p0f - Passive Traffic Analysis OS Fingerprinting and Forensics Tool 00_P4wnP1 highly customizable USB attack platform, based on a low cost Raspberry Pi Zero 00_Penetration_Testing_Tools 00_Pentesting 00_Pentest_Toolbox 00_Phishing 00_PirateBox 00_Powershell_Penetration_Testing_Framework 00_Privacy_Tools 00_ProcDOT 00_PRTG_Network_Monitor 00_PTF 00_PUTTY 00_PWNIE_EXPRESS 00_PwnPad 00_Quad9 00_ReconScan 00_Recovery Boot Password Reset 00_Red_Team 00_RevIP__Reverse IP Lookup Tool 00_RF_Hacking_Field_Kit 00_scanless_Public_Port_Scan_Scrapper 00_SCP 00_Security_Onion 00_Shodan 00_Slowloris 00_Sparta_Vulnerability_Scanner 00_Sploitego 00_Splunk 00_Spyware_Removal 00_SQLiv 00_SQLMap 00_Stackhackr 00_TCPDump 00_TDSSKiller 00_The_Harvester 00_Tools_Watch 00_Top Best Ethical Hacking Tools 2018 00_Tor 00_Tor_Browser 00_USB_Amory 00_USB_Canary 00_USB_Rubber_Ducky 00_v3n0m 00_vane_- WordPress Vulnerability Scanner 00_W3AF 00_WATOBO 00_WebGoat 00_WebPwn3r 00_Wfuzz 00_WiFite_Automated_Wireless_Attack 00_Wifi_Pineapple 00_Windows_Warez 00_WinDump 00_Wireless_Gear_by_Hak5 00_Wireshark 00_WPForce - Wordpress Attack Suite 00_WS-Attacker 00_Yeti 00_ZAP

2. Many major cybersecurity compliance frameworks, such as PCI DSS, SOC 2 Type 2 and the New York Department of Financial Services Cybersecurity Regulation, require periodic pentesting in order to achieve the minimum required level of compliance. Failure to achieve compliance can have serious repercussions for the well-being of an organisation, its leadership and its stakeholders.

YOUR GOALS IN PENTESTING

Your goals in pentesting should be to:
• Satisfy your stakeholders' business requirements to make the organisation more secure
• Find and document your infrastructure and application vulnerabilities before the bad guys do
• Choose the right tools that are going to reveal the best results in the shortest time possible
• Document the required remediations
• Report the results to your management
• Do all this as efficiently and effectively as possible, in the shortest amount of time possible.

WHAT DO YOU HAVE TO KNOW?

Obviously, you need to know a fair amount about the system and tools you are using, as well as the system or systems on which you are performing the pentesting. But, if you are just starting out and aspire to be a great pentester,


then this article and the resources listed at the end are a good starting point. The important thing to understand is that the landscape is constantly changing. If you follow this path, as in most areas of cybersecurity, you will have to dedicate yourself to being in a constant state of research and learning. What was important 12 or 18 months ago may not be so important now. Likewise, the tools associated with automated testing, such as ZAP and Netsparker, continue to improve and get more powerful, so to give yourself an edge and to be competitive it is important to understand how to use these tools and to understand the results they produce in their reports.



THE BASICS

In order to ensure that every area of your infrastructure security is reliable and effective, penetration tests should be conducted on a regular basis: at least every six months, and ideally every three months, if your stakeholders will permit it. As deficiencies are noted, corrections and improvements should be made. There are two general types of audit: Black Box pentests and White Box pentests.

BLACK BOX PENTESTING

In a Black-Box pentest the pentester is only provided with an IP address or a range of IP addresses to scan and probe for known vulnerabilities, much the same as a hacker would. Sometimes this is known as vulnerability scanning or just penetration testing. Advantages of Black-Box audits include the fact they are faster, cheaper and simpler than White-Box audits. Disadvantages of Black-Box audits include the fact that they will not uncover configuration errors, errors in policies and procedures, and errors in design. (Norberg, 2001).
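To make the "scan and probe" step tangible, here is a minimal TCP connect scan in pure Python. The target address and port list are placeholders, and this sketch only covers the very first reconnaissance step a black-box tester might automate; dedicated tools such as nmap do this far more capably, and you must have written permission before scanning anything.

# Minimal TCP connect scan - only ever run this against systems you are
# explicitly authorised to test.
import socket

TARGET = "192.0.2.10"            # placeholder address (TEST-NET-1 range)
PORTS = [21, 22, 80, 443, 3389]  # a few commonly probed services

def scan(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print("Open ports:", scan(TARGET, PORTS))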

WHITE BOX PENTESTING

As shown below, the White Box Pentest will involve auditors working inside your site: "They will require all possible information about your site, including network diagrams, configuration files, all available documentation of the systems. Using all this information, the pentesters will be able to identify possible theoretical attacks against your environment. The pentesters should also review and comment on your policy documents, for example your backup policy". (Norberg, 2001). Norberg recommends the


White Box approach over the Black Box approach, despite its additional consumption of resources and additional length of time, because it will pay off in terms of finding more problem areas and vulnerabilities. The end result, if you follow through on the recommendations, will be a more secure facility and more secure IT resources. However, the three known disadvantages of White Box pentesting are:
1. It usually takes longer
2. It gives an unrealistic view of the infrastructure, because a real hacker will have only limited knowledge of the infrastructure
3. The quality of the pentest results will be closely tied to the accuracy and completeness of the infrastructure documentation provided. Many organisations don't do a very good job of keeping their infrastructure documentation updated and complete, so the quality of the pentest project will suffer if the documentation provided is poor.
Remember, your time is limited, and the scope of your job is to find serious technical vulnerabilities as quickly as possible, not to be responsible for the accuracy or completeness of your client's infrastructure documentation.

CHOOSE OR ADOPT A METHODOLOGY

Often you will be asked if you have a methodology for your pentesting. There are several structured approaches, but here are three methodologies that you can explore and possibly select and use: the Information System Security Assessment


Framework (ISSAF), the Open Source Security Testing Methodology Manual (OSSTMM), or a simplistic, structured approach, which I developed to get the job done fast. Your stakeholders and/or project sponsor may favour one over another, so it is good to be knowledgeable on each of these and to be flexible, so you can be more well-rounded, and just in case you need this information for a job interview in the future! General details of each of these methodologies are covered below.

ISSAF

The ISSAF approach to pentesting is based on the Project Management Institute's Project Management Body of Knowledge standard methodology. If you are a Project Management Professional (PMP), you may be very comfortable with this methodology. These are the pentesting assessment phases under ISSAF:
• Planning and Preparation
• Assessment: Information Gathering / Network Mapping / Vulnerability Identification / Penetration / Gaining Access and Privilege Escalation / Enumerating Further / Compromising Remote Users and Sites / Maintaining Access / Covering Tracks
• Network Security: Password Security Assessment / Switch Security Assessment / Router Security Assessment / Intrusion Detection System Security Assessment / Virtual Private Network Security Assessment / Antivirus System Security Assessment and Management Strategy / Storage Area Network Security Assessment / Internet User Security / Email Security / Host Security
• Application Security
• Database Security
• Social Engineering
• Reporting: Reporting / Clean-up and Destroying Artifacts

OSSTMM

The OSSTMM was created by the Institute for Security and Open Methodologies (ISECOM). The current version, OSSTMM 3.0, is approximately 213 pages in length. You can obtain the OSSTMM for free at isecom.org/research/. Note: OSSTMM 4.0 is in draft form.

This is the structure of the OSSTMM 3.0:
• Rules of Engagement
• Channels: Network Security / Physical Security / Wireless Communications / Telecommunications
• Data Networks: Network Surveying / Enumeration / Identification / Access Process / Services Identification / Authentication / Spoofing / Phishing / Resource Abuse
• Modules (OSSTMM has repeatable processes in the form of Modules):
Phase I – Regulatory: Posture Review / Logistics / Active Detection Verification
Phase II – Definitions: Visibility / Access Verification / Trust Verification / Controls Verification
Phase III – Information Phase: Process Verification / Configuration Verification / Property Validation / Segregation Review / Exposure Verification / Competitive Intelligence Scouting
Phase IV – Interactive Controls Test Phase: Quantitative Verification / Privilege Audit / Survivability Validation / Alert and Log Review

SIMPLISTIC, STRUCTURED APPROACH In my simplistic, structured approach, I list twelve major tasks, each of which is described below:
1. Collect details about the pentesting: Black Box or White Box, goals, etc.
2. Align pentesting goals to the capabilities of pentesting tools to ensure the necessary tools are available. If unavailable, obtain the necessary pentesting tools.
3. Create a prospective schedule and get permission from management and, if applicable, the Cloud Services Provider (i.e. Amazon Web Services) to perform the pentesting.
4. Create the pentesting project plan.
5. Communicate the pentesting start time prior to commencing the pentesting.
6. Start the pentesting.
7. Check the pentesting results and reports.
8. Communicate the completion of pentesting to the stakeholders.



9. Check the pentesting results: organise, label and analyse the results.
10. Create the first draft of the report to show the pentesting results, analysis and recommendations.
11. Deliver the first draft of the pentesting results report. Revise as required.
12. Prepare and distribute the pentesting results report to the project stakeholders.

WRITE A DETAILED PLAN No pentesting project should take place without an organised plan to describe

what you are doing, and when, where, and how you will do it. Such a professional approach will increase stakeholder confidence that you know what you are doing; preparing a plan like this will also help keep you on track, and it will make the final pentesting project results report much easier to write. NB: if you are a contractor, both the pentesting project plan and the pentesting results report will be contractually required deliverables. Fail to produce these and you will likely not be paid, nor invited back for return engagements! These are the steps to writing the pentesting project plan:

• Communicate: what you will do & why; when you will do it; how you will do it; where you will do it
• Create and provide a general attack diagram to communicate to stakeholders what you are doing
• Get your pentesting tools ready and ensure that you know very well how to use them
• Get permission from your stakeholders
• Get the plan approved
• Follow the plan!

Figure 2 below shows an example decision flow chart from one of my pentest project plans.

Figure 2. Decision flowchart


Figure 4 above shows an example high-level attack diagram from one of my pentest project plans. The Black Box penetration testing was performed in phases as shown in the annotated diagram below, beginning from the preparation phase and attempting to progress as far as possible, without destroying data.

PREPARE YOUR ATTACK MACHINE AND TOOLS

It is important to understand that your pentesting attack machine must not only have its pentesting tools installed and configured, it must also have ALL of its security defence software either disabled or uninstalled, because many of the security-related tools will kill and/or disable your attack tools.

START THE TESTING Here are some important points


to remember when you begin your pentesting:
• Follow the pentest project plan
• Make sure you carefully collect, name, and then organise all the artifacts in a structured way, e.g. Black_Box_Pentesting_Data_Day_01_Test_02_from_William_Slater_2016_0801 (a small naming helper is sketched after this list)
• Pace yourself and be methodical and thorough
• If something doesn't work as you expect, research and ensure that you are doing everything correctly
• Time is NOT your friend!
• Always tell the truth about your findings, and what worked and what didn't
• Be ready at any point during the programme to give a clear status report (or presentation) of your progress at a moment's notice. Your stakeholders will likely want to understand what your progress is and know that you know what you are doing.
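For the artefact-naming convention above, a tiny helper can keep names consistent across the whole engagement; the fields and format below simply mirror the example name in the list and are otherwise arbitrary.

# Build consistently structured artefact names for collected pentest evidence.
from datetime import date

def artifact_name(test_type: str, day: int, test_no: int, tester: str,
                  when: date | None = None) -> str:
    """e.g. Black_Box_Pentesting_Data_Day_01_Test_02_from_William_Slater_2016_0801"""
    when = when or date.today()
    return (f"{test_type}_Data_Day_{day:02d}_Test_{test_no:02d}"
            f"_from_{tester}_{when:%Y_%m%d}")

print(artifact_name("Black_Box_Pentesting", 1, 2, "William_Slater", date(2016, 8, 1)))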



Figure 5 above shows a high-level infrastructure diagram that depicts the Black Box pentesting that would be taking place.

WRITE THE REPORT

Up to this point, if you did your pentest project plan and saved your artifacts as recommended, your pentest report should be pretty simple to write. Just ensure that you are thorough, organised, and truthful when you write the report.

Figure 6 above shows an example table of contents for the pentesting results report.

REMEDIATE THE RESULTS

After you complete the pentesting results report, you need to work with those responsible for the remediation of the vulnerabilities you identify, to ensure that all those vulnerabilities are remediated, according to their severity, in as timely a manner as possible. There are two reasons for this: 1. Every vulnerability identified is an opportunity for a bad guy to perform a successful exploit 2. Any IT Security Auditor will require

evidence that each of these identified vulnerabilities was remediated in a timely fashion. To do anything less is to risk busting an audit.

CONCLUSION

In conclusion, you can see that pentesting is not 'rocket science' – and if you follow the advice and steps in this article you can, and will, be successful. Remember that being structured and organised, and knowing your tools, are the keys to pentesting success.

Figure 7 above shows an example summary report of pentesting tools and results. My stakeholders appreciated this high-level summary.


GETTING TO GRIPS WITH GDPR With our feet firmly under the GDPR table more than six months since the deadline, data breaches dominate headlines as businesses evolve to put data privacy first. But where does this leave data security? We spent years anticipating the General Data Protection Regulation, speculating over what it would mean for businesses. Predictably, the world did not stand still, and businesses continue to thrive. However, greater awareness of consumer rights and data privacy has followed, and with it a greater emphasis on data security. We all know by now that these are in fact two different, complementary issues. With the number of data breaches racking up already this year, we are facing the evolving reality of data security in a world where we essentially live digital-first lives: a world in which our data and its privacy are paramount, and in which, while data security ensures the front door is bolted shut, data privacy requires data management processes with privacy-by-design at their core. The GDPR, which came into play with much fanfare on May 25th 2018, deals primarily with the legal issue of data



governance. The GDPR provided a milestone moment for data governance. It was the biggest overhaul of EU data protection law in over 20 years, replacing the EU Data Protection Directive with the aim of creating unified data protection legislation covering all individuals within the European Union. Initially adopted in April 2016, the regulation gave businesses a two-year grace period before it came into force.

GETTING TO GRIPS WITH GDPR

There was much ado, but not about nothing, as the saying goes. GDPR is a dramatic change to the way the collection, access, use, storage and transfer of personal data is regulated. Affecting any business which has access to, or processes, the personal data of an EU resident, regardless of where the business is located, it became a communications issue – all businesses had to get their house

MARTIN WARREN CLOUD SOLUTIONS EMEA NETAPP Martin has a wealth of expertise in data protection, storage, cloud and virtualisation, as well as understanding customer needs and business and future market trends



in order. Meanwhile, the extraterritorial nature of the regulation meant ripples were felt around the world. Lest we forget, GDPR is first and foremost a legal compliance issue, which is achieved through a robust data privacy compliance framework. Understanding privacy laws, along with transparent policies, procedures, processes, consents and notifications, is the foundation for achieving GDPR compliance. And so the legal foundation must be in place first, prior to investment in tools and technology. It is impossible to overstate how crucial data is for businesses today. We may be tired of the much-touted sayings 'data is the lifeblood of business' or 'data is the new oil' – but therein lies the truth. Data-driven businesses will thrive, transform and innovate in order to keep pace with technological revolution. Those that fail to recognise the value in their data will survive, at best. And GDPR provides a marker for those data-centric businesses with a lid on their data governance – a seal of invaluable consumer trust. GDPR whooshed onto the scene, breaking up the multi-cloud party with a mission to re-focus the business agenda on data privacy: to reaffirm consumer trust. The legislation makes it essential for all businesses handling the personal data of EU citizens to understand exactly where their data is stored, ultimately taking

responsibility for the data it processes. The implications of failing to comply are significant, with fines of up to 4% of a business's global annual turnover or €20m – whichever is greater.

DATA PRIVACY VS DATA SECURITY

The potentially crippling fines provided a key talking point, as well as the more obvious and yet slippery question: ‘but what does this actually mean for my business?’ In response, businesses must first address a core and often-missed differentiation between data privacy, data security and data protection. Security is not privacy. Data privacy is the full lifecycle of the personal data from the time you collect it to the time you destroy it. A useful analogy is to think of a filing cabinet: privacy is the contents in the cabinet, the cabinet lock is the security, and making sure that only the right people have access to the content is the protection. While data security is certainly important for businesses, encryption and data masking will not help a business become GDPR compliant. Equally, it does not help companies if they secure data they are not legally allowed to have. Therefore, GDPR is not just an IT issue. The compliance process needs to be led from the C-suite down, as a legal and business concern before a


"While data security is certainly important for businesses, encryption and data masking will not help a business become GDPR compliant. Equally, it does not help companies if they secure data they are not legally allowed to have"




technology one. This required a massive shift in mindset. So how are we faring now, as we inch closer to GDPR’s first birthday?

LIFE AFTER GDPR: THE DATA BREACH BREAK-DOWN

Hefty fines and reputational damage haunted businesses in the build-up to the passing of the General Data Protection Regulation. When we surveyed UK companies in April 2018, 56% said that their company's reputation would suffer as a result of non-compliance with GDPR. At the time, 47% were concerned about revenue loss, and 41% thought their company survival was at stake due to the potential financial penalties. We can now see that these concerns united the C-suite and IT decision makers as awareness grew. More than six months on and we are starting to slice through the hype. While businesses and consumers alike grapple with the logistics – clunky and often irritating online pop-ups alerting us to our right to 'opt-out' – as predicted, the number of data breach notifications has significantly increased. The Information Commissioner's Office (ICO) reported 367 data breach notifications in April, 657 in May and 1,792 in June. Between July and September, the ICO reported a total of 4,056 notifications. Far from highlighting flaws in the system, it reflects a fresh, cautionary approach that makes it less likely for a breach to slip through the net. It means those compliance processes are working. Meanwhile, large scale data breaches, like the mammoth Facebook data breach exposing the personal data of tens of millions of users, or the Marriott hack


which affected 500 million Starwood guests, are now perceived with more clarity by consumers, who are awake and more aware of their data rights. This only makes the reputational risks of non-compliance more significant. Big tech firms are paying attention, as the US observes and builds momentum towards the creation of a similar regulation.

GAINING COMPETITIVE ADVANTAGE

Ultimately, GDPR compliance boils down to competitive advantage. Those that want to thrive, not simply survive, in the data-driven world have to put consumer data privacy first. The survey we conducted six months ago found that three quarters of UK businesses believed GDPR would improve their competitiveness, and with increasingly data-savvy consumers, they weren't wrong. Major data breaches making headlines in the last few months build into this narrative. Handling this through the adequate management and protection of data wherever it lives, on-premises or in the cloud, is now of the utmost importance for companies.

FIGHTING FOR COMPLIANCE IN FINANCIAL SERVICES

But of course, not all businesses are created equal. While the drive towards data-centric business models is being experienced across sectors, the type of data being processed varies, along with its purpose and sensitivity. Similarly, data literacy varies. So how have some of those key sectors managed the tightening of data governance?

The financial services industry was one of the best placed to weather regulatory changes. With the sensitivity of financial data and recent history, it had already benefitted from heavy regulation. Six months after the introduction of the GDPR, the initial confidence and high levels of understanding of the business-critical nature of data uncovered in a survey last April ring true. At the time, 88% of IT decision-makers in the FSI thought that GDPR compliance would give them a competitive advantage. This was mirrored by high levels of confidence in achieving compliance, with almost all FSI businesses (96%) saying they knew where at least some of their data was stored – a key requirement for GDPR compliance. Of the 4,056 data security incidents reported to the Information Commissioner's Office (ICO) between July and September last year, 293 were due to disclosure of data and 145 were security incidents in FSI companies. These figures could be attributed to a more cautionary approach to potential breaches, as a result of greater awareness. Either way, GDPR represents a stepping stone in the quest for greater transparency around data management and protection, regardless of where that data lives: on-premises or in the cloud. This will help FSI to digitally transform, with the confidence of their customers.

HEALING PATIENT TRUST WITH GDPR? Meanwhile, over six months after the GDPR regulation came into play, the healthcare sector accounts for the highest number of incidents reported to the ICO between July and September 2018. This


is perhaps indicative of the sensitive data being handled and, as with the financial sector, is in part due to a cautionary approach around reporting following the 25th May GDPR compliance deadline. However, with 420 incidents of data disclosure and 190 security incidents reported, it is also a cause for concern. GDPR is an essential tool, helping data-driven healthcare businesses to optimise patient care with digital transformation. But trust sits at the heart of this transformation and GDPR provides the vital key. While the number of incidents demonstrates the effectiveness of the regulation, it also demonstrates the need for it. It is critical that businesses in the healthcare industry implement solutions that help them manage and protect data, wherever it might live. Looking ahead, with the diverse possibilities of innovative technologies set to transform healthcare, evolving data management practices to not only comply, but to perfect data privacy protection, will be essential. Our health and the quality of our healthcare depend upon it.

RINGING IN GDPR IN THE RETAIL SECTOR In the retail sector, where customer data provides the essence of digital marketing, bridging the gap between bricks and mortar retail, businesses feared the worst. But GDPR did not bring retail to its knees as some worried. Rather than resulting in companies having to go out of business – in April, 41% of UK businesses thought their company survival was at stake due to potential financial penalties as a result of non-compliance – GDPR is building

credibility among retail businesses and their consumers, with trust now the aim of the game. The use of third-party data in marketing activities can be risky, and GDPR reinforces accountability for this by placing responsibility on anyone who handles that data. An example was made this year of a parenting website, which was fined £140,000 by the ICO for illegally collecting and selling personal information belonging to more than one million people in August 2018. As a result, a renewed focus on first-party data brings retailers closer to their consumers, in terms of communicating with a known audience, and in the meantime reinforces trust. This communication became more human. As a sector, retail accounted for only 62 of the 4,056 data breach notifications reported to the ICO between July and September 2018. Of these, 35 were due to disclosure of data, 26 to security and one to retention of data. This builds upon our survey findings, which found that confidence was high among retailers, who showed optimism when it came to gaining a competitive advantage from compliance. As retail businesses continue to evolve their data management practices and their consumers become accustomed to the seemingly irritating 'opt-in' pop-ups online, they will see that GDPR was only the first step in transforming their approach to data-driven business. In order to successfully adopt new and innovative technologies, putting the consumer first is imperative: they are now awake and alert to their data privacy rights. It is now up to businesses to continue to perfect these practices and win the trust of their customers through

an adequate, continuous management and protection of data.

THE NEXT CHAPTER

With data breaches rolling in and GDPR now being called upon to investigate the compliance errors at hand in the EU, the rest of the world is looking on with bated breath. In the last six months, the US has made headway with its own blueprints for best practice data management. California, home to the world's tech giants in Silicon Valley, brought in the California Consumer Privacy Act, set to come into effect on January 1st 2020. Meanwhile, a federal-level data privacy law is being explored and backed by tech heavyweights. Apple's CEO, Tim Cook, highlighted the importance of federal privacy legislation, saying: "We will never achieve technology's full potential without the full faith and confidence of the people who use it." While consumers around the world claw back their rights as digital citizens and owners of their personal data, it is clear that great ground has been made in the drive for awareness and preparedness for GDPR. But while the compliance process is being evolved and perfected, nurturing that essential consumer trust, the question is: are adequate security measures being taken in tandem, to protect the security of our data to begin with? Poor security and data hacks are the cause of the majority of data breaches. So, while we shore up our data governance – ticking off that GDPR checklist – let's not take our eye off the ball when it comes to data privacy and security. It is not enough to be compliant alone. We cannot be complacent over data management and data security.


AI: THE FUTURE OF CYBERSECURITY With the surge in data breaches and advanced cyberattacks in recent years, the need has arisen for a more advanced and robust form of cybersecurity. Artificial intelligence and a sharp surge in cyber-attacks have been making the headlines of late – not only in the technology industry, but also in the wider media. Artificial intelligence (AI) and machine learning (ML) have been widely adopted and received with open arms by many different industries around the world, given the value they add in increasing efficiency and decreasing the timescales needed to achieve goals. Cybersecurity has become crucial and omnipresent, growing to prominence as an undeniably essential element for the very survival of any online business (or otherwise), and it is no exception when it comes to adopting AI and ML.


AN OVERVIEW

Artificial Intelligence is the branch of computer science which deals with


developing the capabilities of machines to imitate intelligent human behaviour, whilst machine learning is the branch of AI which focuses on the development of software that can use existing data to learn and improve over time, making better predictions or decisions without human intervention. Cybersecurity is a very broad area which involves, but is not limited to, defending and protecting computers, servers, mobile devices, electronic systems, networks, software and data from malicious attacks.

ADOPTION OF AI IN CYBERSECURITY

In the past, cybersecurity was thought of as the practice of protecting the hardware, software and data from cyber-attacks and analysing the attacks after they have happened to find and understand the root cause. However, since data security and online availability

PALLAVI KUMAR SENIOR ANALYST PROGRAMMER NHS Pallavi is a senior software development professional and certified Scrum Master. She is passionate about learning new technologies and challenging and improving existing processes to enhance their efficiency



are now so crucial for the survival of businesses, cybersecurity means protecting, predicting and staying one step ahead of the cyber attackers. Being ahead of the game means we need to be more efficient in spotting the vulnerabilities in our systems and able to detect any anomaly before it becomes a critical vulnerability that can be exploited by hackers. Herein lies the role of AI tools: being used alongside human cybersecurity teams in safeguarding systems and detecting any vulnerabilities before they are exploited.

WHY AI IS CRUCIAL FOR CYBERSECURITY

According to recent research by cloud company Domo, internet users generate 2.5 quintillion bytes of data each day, and this is predicted to grow exponentially into the future. This ever-expanding 'nebula' of data has led to the evolution of what we now call 'big data' and also cloud computing. This shift in paradigms has contributed to the growing need for AI to be incorporated in the cybersecurity industry. Below are some of the main reasons why AI is so crucial for cybersecurity.


1. Human Error In 2017, there was a major data breach at credit bureau Equifax, which compromised social security numbers and the confidential data of more than 145 million people. And the former Equifax CEO blamed the breach on a single person who failed to deploy a security patch! This is a classic example of human error that could have been easily avoided. In such scenarios, AI and ML can play a very important role in routinely checking and applying important security patches with 100% efficiency and zero forgetfulness – helping to avoid any vulnerabilities left due to human error. 2. Shortage of cybersecurity professionals in future It is estimated that by the year 2022 there will be more than 3 million vacant positions for cybersecurity professionals. It will be difficult for the software industry to fill those positions, as they require a very specific skill set. To overcome this massive shortage in the workforce, there will be a need for more AI-based solutions which can automate the tasks carried out by security professionals.


3. Rise in new kinds of cyber-attacks In May 2017, we witnessed the WannaCry ransomware attack on the NHS, which cost us a whopping £92m and weeks of disruption. In recent years, we have witnessed a surge in ransomware attacks and other new kinds of malicious malware, such as malware which mines cryptocurrencies using the victim's processing resources. To be able to evolve quickly in response to changing threats, AI and ML can play a significant role by learning from existing data and improving detection of new malware. 4. Exhaustive penetration testing In the cybersecurity industry, potential vulnerabilities in a network or a piece of software are identified by rigorous penetration testing. It is performed by a team of 'ethical hackers', or security experts, whose task is to try to break into the system in order to spot vulnerabilities and provide feedback to the person responsible. The issue with this process is that it is time consuming and expensive. In such scenarios, AI deep learning and ML's predictive analysis can play a


significant role alongside security experts to analyse and uncover such vulnerabilities in less time and in a more cost-effective way. 5. Growth of the Internet of Things (IoT) The IoT industry has grown massively in recent years. The global IoT market was worth $157bn in 2016 and has grown ever since. Unfortunately, many IoT devices have been found to have security holes which can be exploited by hackers. If a cyber-attack is launched on hundreds of coordinated IoT devices, it has the potential to be catastrophic. With the arrival of ever more IoT devices in our networks, we need improved ML-based security analytics which can predict possible attacks and block attack attempts in the first instance.

AI BASED CYBER SECURITY START-UPS

Multiple cybersecurity start-ups have emerged that are using AI in their products and services. These AI systems are using ML, deep learning, advanced pattern recognition and predictive analysis to spot the anomalies and automatically profile any new devices connected to the network. One of the highest valued start-ups in this field is Boston-based ‘Cybereason’, a cyber security company specialising in endpoint detection and response software, founded in 2008 and currently valued at $900m. It has been backed by Lockheed Martin, Japanese telecommunication companies and SoftBank. This company’s AI cybersecurity technology seems to subscribe to the theory that the best defence is a good


offense when it comes to endpoint security. An ‘endpoint’ is generally a user device, such as a laptop or mobile device, or even servers. Its platform is based on behavioural analytics, meaning it correlates data to understand what the attacker is doing. Sensors on every endpoint help monitor the system, with the Cybereason Hunting Engine collecting all the data, detecting patterns between past and present activities, and learning to be more effective as more data comes in. At the heart of the system is what Cybereason calls its in-memory graph, which asks each endpoint eight million questions per second, every second of the day, to instantly detect malicious attacks or intentions. One of the UK’s most well-funded companies applying AI to cybersecurity is Darktrace. In July, the UK firm raised £58m in a funding round, valuing it at £625m. Darktrace makes software that polices a company’s network from the inside. When it spots abnormal behaviour, it springs into action, alerting IT staff and, where possible, stopping malicious activity. Another AI based Cybersecurity startup, founded in 2011, which claims to have deflected more than $1bn of cyber-fraud in various industries is Shape Security, currently valued at $106m and backed by companies such as Google. These are some of the many successful AI based cybersecurity start-ups which have grown to become multimillion-dollar companies and which are a very clear indication of the value AI is going to add to the Cybersecurity world in future.

BENEFITS OF AI BASED CYBERSECURITY

Artificial intelligence-based systems have been widely deployed and used by many different industries, such as manufacturing, eCommerce and finance. Many government departments are also deploying AI-based solutions due to the numerous benefits they provide: 1. Automate routine processes, patching and system updates Many routine processes, like applying security patches, can be automated using AI. Machine learning-based security solutions can learn over time when a security patch is to be applied or when an update is due, and hence can do it in time without the need for any human intervention. 2. Robust security When AI-based cybersecurity systems are used alongside the human security team, they will provide more robust security than the human team alone. AI looks for behavioural abnormalities that hackers display – for instance, the way a password is typed or the geo-location of the user logging in (a small sketch of this idea follows this list). AI can detect these small signs that otherwise might have gone unnoticed and halt the hacker in their tracks. This can also be useful in spotting user error or manual changes to system protections that could let a hacker gain access to the network. 3. Save time and money By automating some of the tasks which consume time and human effort, AI can perform many tasks, saving time for the



security team to focus on more important tasks and hence saving money. Also, some time-intensive tasks, like penetration testing, can be performed by AI systems, which can save a lot of time and money. 4. Learn and evolve over time One of the key benefits of using any AI system is that it learns and evolves with the amount of data it is provided. ML software can easily detect anomalies in huge amounts of data and hence can be very beneficial in spotting any abnormal behaviour displayed by malicious software and malware. With hackers getting more advanced in their attacks, we need security systems that evolve and are always a step ahead, to be able to protect against advanced cyber threats. 5. Improve productivity AI can help improve the productivity of the cybersecurity team by automating routine tasks and detecting and blocking intrusions at the endpoints, saving the security team's time and effort to focus on other important security tasks and increasing overall productivity.
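The behavioural-abnormality point above can be illustrated with a very small anomaly-detection sketch using scikit-learn's IsolationForest. The features (login hour and a rough distance from the user's usual location) and the data are invented for the example; a production system would use far richer signals and continuous retraining.

# Toy behavioural anomaly detection for logins; assumes scikit-learn is installed.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical logins: [hour_of_day, km_from_usual_location]
history = np.array([[9, 2], [10, 1], [9, 3], [11, 2], [10, 4],
                    [9, 1], [12, 5], [10, 2], [11, 3], [9, 2]])

model = IsolationForest(contamination=0.1, random_state=0).fit(history)

new_logins = np.array([[10, 3],      # looks like the usual pattern
                       [3, 4200]])   # 3am, thousands of km away
for features, label in zip(new_logins, model.predict(new_logins)):
    status = "anomalous" if label == -1 else "normal"   # -1 marks an outlier
    print(features, "->", status)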

CHALLENGES AI MIGHT BRING

AI is still uncharted territory and it is still making its way into the mainstream software industry. We know there are a lot of potential benefits which AI can bring, especially to the software development industry; however, we should expect AI to bring challenges too. In my opinion, some of the biggest challenges AI will pose in the future will be as follows:

• Shift of mindset: it will need a lot of encouragement and motivation for software professionals to accept AI in the mainstream software industry. I have experienced resistance and scepticism about AI from people in the industry, and extra training will be needed to learn and understand the new concepts of AI. We will first have to create acceptance in the industry, as many people have the opinion that AI is here to take their jobs. It will be a real challenge to explain to people that AI is going to be an ally rather than a competitor
• Lack of skillsets: AI is a relatively new concept and the industry will face a shortage of AI professionals as demand increases. If we are to use AI-based cybersecurity solutions as an ally to our existing manual cybersecurity teams, we will have to train those teams to be able to use the new AI solutions
• Investment: businesses will have to spare extra investment for AI solutions, as the set-up and the cost of computing resources are expensive and not all businesses will be willing to invest
• More sophisticated attacks: AI and ML will prove to be a double-edged sword, as they can be used by hackers to launch more sophisticated attacks. These more complex and labour-intensive attacks will be harder to foil
• Biased results: since ML relies on the data provided for learning, there are high chances (and already-proven cases) that an AI program's decision making could be affected by

the biased or corrupted data provided to it to learn from. In my opinion, some of the biggest challenges large organisations like the NHS might face in the implementation of AI-based cybersecurity could be based on several factors. Firstly, implementing an enterprise-level AI-based cybersecurity solution would be a major project, which would need substantial funding and resources from within the organisation to support it. Secondly, once we overcome the funding difficulties, and if the AI cybersecurity solution is implemented organisation-wide, we will have to retrain our existing cybersecurity team to be able to use it, as AI-based systems are a new concept for many organisations and cybersecurity professionals alike. Thirdly, it would be hard to find the right professionals with AI-based skillsets, as there is a shortage of these skillsets at present. Also, implementing AI solutions in the mainstream software development team would be a challenging task, as I am sure there will be some resistance from people regarding the adoption of new technologies (widespread scepticism about AI and the belief it will replace humans), though the reality is that AI can't be successful without human aid. Overcoming this belief would be the most important step in being able to utilise its potential. Once software teams are able to overcome these types of barriers, it wouldn't take long for them to achieve their goals, with AI as an ally.


TEST AUTOMATION IS KEY TO APP SECURITY Mobile applications have become an essential part of our lives, but when it comes to security, many experts believe that app users are like sitting ducks. A study on the state of application security says that 84% of users believe that their mobile health and finance apps are adequately secure, but in fact, this is far from the truth. According to NowSecure, 85% of public app store applications on the Apple App Store and Google Play violate one or more of the OWASP Mobile Top 10 Risks. The same report says that 80% of Android apps lack basic obfuscation, allowing attackers to reverse engineer the app. NowSecure also draws attention to the many mobile and health applications which contain serious security vulnerabilities. The truth is that security can be a false perception if we do not know how our applications were developed and tested. In the age of agile development, the organisational tendency for many



has been to blow past the uncovered security vulnerabilities in an effort to meet the speed that the business demands. So how do we reset the balance? If mobile testing is absolutely vital in the fight against security breaches, how do developers embrace best practices and pragmatically embed these into secure development activities and the DevOps toolchain – without losing the speed-to-market that they need?

DEVOPS FRIENDLY SECURITY TEST APPROACHES

There are plenty of tools that can be used for testing and ensuring the security of mobile apps. The savviest development teams choose a mix of technologies at each stage of the development process - helping to detect and remedy security

ERAN KINSBRUNER CHIEF EVANGELIST & AUTHOR PERFECTO Eran is an expert in web and mobile testing. A thought leader, speaker and author he specialises in nearshore and offshore QA team building and management



issues earlier and faster. This all starts in the coding process itself. For example, IDE plugins scan code in the background for thousands of potential risks, leveraging a set of continuously updated rules to meet industry standards and to address recent security vulnerabilities. Runtime Application Self-Protection (RASP) is a security technology that is built into an application and can detect and prevent real-time application attacks in production. Moving into an operational phase, tools like dynamic scanning (DAST) are able to analyse applications in their dynamic running state during testing or operational phases. DAST simulates attacks against an application and analyses the application's reactions to determine whether it is vulnerable. Equally popular is static source code scanning (SAST, 'white-box testing'), which analyses an application's source, bytecode or binary code for security vulnerabilities, and is typically employed at the programming and testing phases of the software development life cycle (SDLC). Interactive scanning (IAST) is an emerging category of scanning where security testing code is built into the application by including third-party vendor test libraries, then security tests are run in combination with QA and functional tests. Finally, Software Composition Analysis (SCA) scans are available to scan the third-party libraries and operating systems that you use to build your apps. With myriad tools and approaches available, it can be difficult to know what technology to choose, and when in the development process to implement it. For us, the secret to effective testing is an automated, continuous approach, deployed all the way through the SDLC.

DEVOPS-ING IT ALL TOGETHER

When a defect is uncovered late, or even in production, developers spend expensive time finding the root cause, then undoing and redoing code. To avoid this, security testing needs to happen more frequently. With continuous testing, team leads can direct investment in-sprint on a daily basis to address areas of concern. Identifying and tracking security vulnerability risks and fixing them more quickly is crucial to ensuring fast and reliable releases – real make-or-break stuff in the hypercompetitive app world. Adopting a multi-layered approach with

consistent regular scanning in the DevOps pipeline is recommended, ideally including SCA before using components, IDE security code plugins, SAST post-commit pre-build, DAST and/or IAST post-build every day, and generating tickets back into issue-ticketing systems. The fundamental key is automation and integration everywhere. Proper security practices mandate recurring, unattended execution of various security scans. This means integration with key continuous integration components. Testing at each step reduces vulnerabilities, enables faster fixes, lowers costs, and ultimately, builds security into the whole of the DevOps pipeline.
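A hedged sketch of what "automation and integration everywhere" can look like in practice: a small driver script that a CI job could call after each build, running whichever scanners the team has adopted and failing the stage if any of them report findings. The tool commands below are placeholders to be swapped for your actual SAST/DAST/SCA tooling.

# CI helper: run each configured security scan and fail the stage if any
# scanner reports findings or cannot run. The commands are placeholders.
import subprocess
import sys

SCANS = {
    "sast": ["run-sast-scan", "--src", "."],                              # placeholder
    "sca":  ["run-dependency-scan", "--lockfile", "requirements.txt"],    # placeholder
    "dast": ["run-dast-scan", "--target", "https://staging.example.com"], # placeholder
}

def run_all(scans: dict[str, list[str]]) -> int:
    failures = []
    for name, command in scans.items():
        print(f"Running {name} scan: {' '.join(command)}")
        try:
            returncode = subprocess.run(command).returncode
        except FileNotFoundError:
            print(f"  {name}: scanner not installed")
            returncode = 1
        if returncode != 0:
            failures.append(name)
    if failures:
        print("Scans with findings or errors:", ", ".join(failures))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(run_all(SCANS))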

BEST PRACTICES

So how do DevOps teams get to this place? I use a continuous testing blueprint which further breaks down the steps needed in order to get to a skilled automated testing suite. It is split into four phases and takes the team step by step through the process: 1. Prepare the infrastructure, making sure the DevOps stack, the lab, the CI and the deployment are integrated 2. Choose a small number of tests that are high value and performed on a regular basis, and automate these. Start small and prove the use case, and the mindset shift we talked about will start to happen. If teams are using these tests regularly, and relying on them, it's easier to move on to step three – scaling automation 3. As well as scaling the number of tests, teams must put fast feedback capabilities into place – even using machine learning to capture data – and then analyse and record the tests effectively on a big scale. It's a common mistake not to put fast feedback loops in place, and then teams drown in data. There's no point in automating hundreds of scripts if they're not effectively analysed 4. The final stage is putting together a process on how to maintain these tests and keep them valuable at the same time. By following these steps, teams will be well on the way to effective automation. So, security has never been more important. Mobile apps are increasing in popularity, but developers are often


"The fundamental key is automation and integration everywhere. Proper security practices mandate recurring, unattended execution of various security scans. This means integration with key continuous integration components. Testing at each step reduces vulnerabilities, enables faster fixes, lowers costs, and ultimately, builds security into the whole of the DevOps pipeline"

left drowning in security vulnerabilities. Cybercriminals are increasingly targeting mobile apps for attacks, due in part to lax security standards, and in part because historically test coverage simply hasn't been good enough. Today, continuous and automated testing, at every stage of the SDLC, is the answer to security, and any organisation that fails to build security into its app development process is wilfully leaving itself exposed to those ever-present threats. It's time for DevOps to commit to solving this problem, and to closing security loopholes as quickly as possible.


TOP 3 DEFECT PREVENTION TECHNIQUES Market trends around emerging technologies and agile methodologies are shaping software development priorities, driving demand for faster release cycles and the need for quality to be considered earlier in the application lifecycle. How software teams are measured for success will expand, developers will work with colleagues in cross-functional roles, and the products being built will reach new limits. Although ensuring software quality is the primary goal for development teams, it is always surprising to see a low level of involvement in defect prevention activities across the software development lifecycle (SDLC). Only 26% of executives identified increasing quality awareness among all disciplines as one of their objectives. The reality of this is that many applications are still released with known defects. The potential cost savings alone should be enough to convince senior leaders to examine why so few prevention activities are taking place across their projects.


CATCH DEFECTS EARLY

Research has shown that fixing a bug in the testing phase could be 15 times more costly than if the defect was found during design, and six times


more costly than if found during a build phase. Defects that are not found earlier have a larger impact the further they progress in the SDLC. As developers continue to build an application, the defect can impact more user scenarios, increasing complexity and adding cost, time and effort to reach a resolution quickly. There are simple measures software teams can adopt to uncover bugs earlier in the lifecycle, whether teams are designing enterprise-grade desktop applications that store highly sensitive financial data or a modern digital storefront that processes thousands of transactions an hour.

PRIORITISE UX WITH THE RIGHT UI DEV APPROACH How do we make great, best-in-class software that not only looks pretty, but can also be used by real humans? How do we hold developers accountable for thinking about the user? Behaviour-driven development (BDD), a software

AKSHITA PURAM PRODUCT MANAGER SMARTBEAR Akshita is a global digital strategist and technology evangelist with 10 years of experience in technology implementation and strategy in retail, financial services, and biotechnology



Behaviour-driven development (BDD), a software development process in which teams create simple steps describing how an application should behave from a user's perspective, can drive entire teams to prioritise UX in their UI design. Adopting BDD demands a mindset shift and updated workflows that incorporate multiple stakeholders, from business analysts to designers, developers and testers. Test-driven development (TDD), an alternative approach, is focused on the developer's perspective of how software should function, which can misdirect development to prioritise functionality over user experience.

APPLE AIRPODS

Take, for example, the development of the AirPods. The BDD approach starts your development lifecycle with the end in mind: it outlines the steps a user will take and the impact on the user once that action occurs. Apple, well known for its design thinking, has transformed how consumers listen to music, from simple white headphones to its recent crowd-pleaser, AirPods. It did not start its development lifecycle looking to design a product that had the ability to connect sound between a phone and a user's ear. Instead, it homed in on the customer's one desire: to hear and control music away from a device. This technique unleashed design possibilities and empowered the development team to innovate, introducing one of the most popular earphones, beloved for its sound quality, sleek design and features.

Building software with a BDD approach at the forefront brings developers closer to the ideal user experience, further preventing bugs that seem functionally correct but are not what the customer expected. With BDD support in test automation tools, development teams can accelerate an end-to-end, behaviour-driven and user-focused perspective. With BDD in a test management platform, software teams can collaborate on building feature stories with easy-to-use editors that convert action words to user scenarios and provide step auto-suggestions. With BDD in test automation, teams can then accelerate BDD workflows by converting feature scenarios into automated tests instantly.
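To make that workflow concrete, below is a minimal sketch of how a plain-language feature scenario pairs with executable step definitions, assuming Cucumber-JVM and JUnit on a Java stack. The scenario wording and the tiny in-memory stand-in classes are purely illustrative; they are not taken from Apple or from any test management product.

    // listen_to_music.feature (Gherkin) – illustrative wording:
    //   Scenario: Hearing music away from the device
    //     Given my earphones are paired with my phone
    //     When I double-tap the right earphone
    //     Then the current track starts playing

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;
    import io.cucumber.java.en.When;

    public class ListenToMusicSteps {

        // Tiny in-memory stand-ins so the sketch is self-contained.
        static class Earphones { boolean tapped; void doubleTapRight() { tapped = true; } }
        static class Phone {
            Earphones paired;
            void pair(Earphones e) { paired = e; }
            boolean isPlaying() { return paired != null && paired.tapped; }
        }

        private final Phone phone = new Phone();
        private final Earphones earphones = new Earphones();

        @Given("my earphones are paired with my phone")
        public void earphonesArePaired() {
            phone.pair(earphones);
        }

        @When("I double-tap the right earphone")
        public void doubleTapTheRightEarphone() {
            earphones.doubleTapRight();
        }

        @Then("the current track starts playing")
        public void currentTrackStartsPlaying() {
            assertTrue(phone.isPlaying());
        }
    }

The plain-language steps stay readable to business analysts and designers, while the glue code gives developers and testers an executable check of the behaviour the user actually cares about.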

EXPOSE CRITICAL DEFECTS EARLIER

Developing quality code that is clean, efficient and results in zero critical defects does not have to be a pipe dream for software teams. I like to call this aspiration Dev Zero: a rigorous approach to software development aimed at keeping the defect count at zero (or almost zero) at all times. Having developers test a key user scenario as soon as it is feasibly possible complements QA efforts to uncover high-priority, critical issues and application flaws earlier in the SDLC. This creates a strong application foundation that prevents critical defects from going past development and having a broader impact as code continues to be layered.

With native integration for test automation in a development environment, developers can quickly test their code against pre-defined functional user scenarios. With embedded test automation functionality in an integrated development environment (IDE), developers can access their entire application source code in one location, instead of manually inspecting a web page or a desktop component to include in an automated test script. QA can then collaborate with developers to ensure software quality not only from a ground-up, development perspective, but also from a top-down, user perspective as a tester. Leveraging one tool, QA can then launch UI automated tests created by testers and by developers concurrently.

ENSURE SOFTWARE QUALITY WITH AI

Regardless of whether your development team follows an agile or waterfall methodology, one of the initial steps will be to design the purpose of the application and how it will address key user needs. It is the most crucial, pivotal phase in the development lifecycle, transforming a logical system design into a physical system. AI provides a major leap forward by being able to detect application components across a wide array of different formats and technologies, including application blueprints in the design phase. AI-powered visual recognition enhances current techniques in software testing by being able to automatically create tests directly from mock-up designs or wireframes. Imagine creating UI functional tests – which would normally be built after development – in the design phase: the very beginning of your SDLC.



"AI-powered visual recognition enhances current techniques in software testing by being able to automatically create tests directly from mock-up designs or wireframes. Imagine creating UI functional tests that normally would be built after development in the design phase – the very beginning of your SDLC"

Test automation tools with AI-powered visual recognition allow software teams to identify application components and automatically create test scripts based on those component properties before the components are built.

THE DEVELOPER’S IMPERATIVE

The role of the developer is changing. Future developers will need to learn and adopt new skills, including how to use native test automation within IDEs, how to adopt new development methodologies, and how to leverage AI to ensure software quality at every stage of the SDLC. Software teams aspire to hire future engineers with an entrepreneurial mindset, who not only create efficient and robust code but also practise Dev Zero to emphasise the importance of quality-driven innovation. Developers have a responsibility to identify where their teams are missing the mark, and an imperative to adapt their approach to ensure they expose critical defects earlier in the development lifecycle, prioritising the need to keep costs low while delivering quality at speed.




MAKING THE CASE FOR THE DOCU-TESTER

Do you want to increase your number of in-house testers for free? It's hardly alchemy, but a minimal time investment can turn your documentation writers into full-time software testers.

Or does that sound too good to be true? Customer-facing organisations that create instructional materials on using their software already have individuals performing many of the same tasks as manual testers. Minimal training can enable these same documentation teams to perform effective functional, exploratory, and usability testing. For overworked test teams, that means more people testing. Meanwhile, documentation teams avoid constantly reworking already-created materials, by virtue of defects being resolved before they are reflected in their guides and videos. Together, that spells software with fewer bugs, and an all-round better user experience (see fig. 1).

This article compares the skills required to perform basic manual testing with those already possessed by documentation teams. It makes the case that providing minimal training to documentation teams can enable them to perform effective testing, both exploratory and functional. Finally, it makes the case for including documentation teams fully in the sprint cycle, in a 'shift left, shift right' approach that facilitates pre-release user feedback.

WHERE SKILLS OVERLAP

Documentation teams already perform many of the same tasks as manual testers, requiring only minimal additional training to test. This is reflected in the simplified comparison below.
To document a system, I must:
• understand the steps a user is expected to perform, to achieve a given expected result
• perform the steps myself, entering data to follow end-to-end journeys through the system
• document the process as I go, producing written step-by-step guides and videos.

TOM PRYCE MANAGER CURIOSITY SOFTWARE Tom is a technologically hands-on manager with interests in model-based testing, test data management and robotic process automation




Figure 1.

To test a system, I must:
• understand how a system will be exercised by users, and the associated expected results
• enter data to perform these user actions, comparing the actual results to the expected
• classify and log defects where the actual results deviate from the expected.

In practical terms, steps one and two are equivalent. The major difference, therefore, is the output: whereas documentation teams produce written guides and videos, testers produce tickets and bug cards. Though documentation teams do not formally log defects, they frequently perform the thinking behind them. They often find themselves returning to development teams to ask questions like, 'I cannot find this, should it be here?', or 'I followed these steps, but I get this result'. Documentation teams are therefore already testing the systems they document, and are reporting bugs via email, chat, or in person. They are furthermore logging their actions as they go, with screenshots, text, and video. In other words, they are creating what could be highly detailed bug reports, setting out the exact steps required to recreate a defect in their environment.

Showing documentation teams where to log these detailed defect reports will equip them with testing skill number three described above. Meanwhile, brief training in classifying defects will sharpen the documentation teams' eye for bugs of various types, detecting more defects before systems reach the end user.

WHERE IS 'DOCU-TESTING' MOST EFFECTIVE?
Documentation teams will, of course, not be suited to every type of testing. They are not, for instance, already performing back-end or API testing. Some of the types of tests that documentation teams are already executing are considered below, with explanations as to why they are readily equipped to perform them.

FUNCTIONAL TESTING

The documentation task par excellence is creating clear and concise guides on how to use applications. These used to be housed in complete user guides, enclosed in CD cases; more frequently today, they take the form of online 'knowledge base' articles and videos, crammed full of screenshots.

A typical process for creating such documentation runs as follows: I am provided with a video demonstrating a just-developed feature, or an alert to a user story that has just been developed. This should tell me how, why, and when an end-user would use the new functionality. I then act as the user, performing the steps they would, creating written descriptions and screenshots as I go.

This process essentially converts user stories into test cases, where the 'how-to' steps are equivalent to test steps. These are executed as the guide is being written, collecting screenshots along the way. Writing instructional guides is therefore similar to functional testing, insofar as the actions exercised against the system are concerned. In particular, it is 'happy path', end-to-end functional testing: documentation teams act as users are intended to, with the aim of documenting the complete journeys through the system that users are expected to perform. Docu-testing, in this regard, is therefore better suited to functional smoke testing just before a release than to exhaustive functional testing.

The end-to-end functional testing will sometimes throw up functional defects. The video or user story provided by development describes how the system should work; when I do not see these expected results, it might be a bug. Some typical questions documentation teams might return to development with include:




• I followed the steps in your video, but when I clicked this, nothing happened. Have I done something wrong, or could it be the browser I am using?
• I have set up my files in this way, but when I click 'run', I get this result. Have I configured something incorrectly?

Such questions will often be posed via email or chat. Training documentation teams to instead log their questions in systems such as JIRA will enable them to act as manual functional testers. This will introduce all the benefits of defect tracking and reporting provided by testing tools, in roughly the same time already spent sending emails or firing off questions.

UI & USABILITY TESTING

The overall understanding of a system that is ultimately distilled into the concise user guide or articles is relatively comprehensive. I must understand what a user is expected to do, and also why the system has been developed this way. Documentation writers therefore act as fairly unique stakeholders: they adopt the persona of a new user, but with subject matter expertise of the overall system. However, this understanding is not deeply technical, and does not extend 'under the hood' to the back-end systems.

Acting just as a new user would, documentation writers are well-placed to spot usability defects. They might struggle to find the location of a button, feature, or menu, for instance, or might notice when a screen is configured confusingly or is too busy. This might not be immediately obvious to testers and developers who have seen that same screen hundreds of times, and who understand the technical reason why it is set up that way. The same applies to the terminology used to label features and buttons, which can often be fairly idiosyncratically chosen by developers or requirements gatherers.


The temptation as a documentation writer is to assume that it's one's own poor understanding of the system that has created the confusion. However, if something is not immediately obvious or clear, it's potentially a usability error that will impact end-users in the same way it impacts the documentation writer. This is especially true for UIs. Documentation teams need to instruct users on where to find buttons and features, and must document the journey through panes, panels, pop-ups and screens. They will directly experience convoluted or over-engineered processes or hard-to-find buttons.

Typical questions that documentation teams feed back to developers and business analysts include:
• this button is named 'X'; in most other tools I've used, the equivalent feature has been called 'Y'. I assumed that this button would do 'Y'
• would it make sense to include this button on this tab of the main menu, in addition to where it is now?
• I find myself having to frequently switch between these two windows in order to perform this task. Would it be quicker and more convenient for users to have one view?

EXPLORATORY TESTING

Gaining the understanding of a system needed to document it often requires the same playful, cogitative curiosity exercised in exploratory testing. For comprehensive documentation, I need to understand and document every button and every screen, and will frequently click buttons for no reason other than to find out what they do. This process often exercises combinations of buttons or features in unusual or unexpected ways. It can throw up defects similar to those found during exploratory testing, and questions asked of development in return can include:
• I clicked off this screen using the button highlighted, and lost what I had been editing. Should we have a pop-up warning?
• when I clicked these six buttons, in this order, these buttons were disabled. Is this intentional? Why?
• when I open these three windows at once, this one renders weirdly.

The response in return is often 'good spot', reflecting how documentation teams are again well placed to incidentally identify bugs where others might not look.

REGRESSION TESTING

Documentation needs to be kept up-to-date as systems change, adding new features but also making sure that screenshots and instructions reflect the current system. This means capturing new screenshots and video after something has changed, following the functional steps exercised when the original documentation was written. However, I cannot simply begin with the exact screen or location in the system that has been updated, and must re-execute the steps required to reach the point of the change. Maintaining documentation is therefore substantially comparable to functional regression testing, and can throw up the sorts of functional defects discussed above, as well as integration errors created by changes made in development.

THE CASE FOR IN-SPRINT DOCUMENTATION

These are just some of the ways in which documentation writers are performing many of the same tasks that testers perform against a system. A few easy-to-implement steps can align documentation and QA efforts, effectively creating 'free' full-time test equivalents.



These steps might include:
• training documentation teams on where to log the defects they find. This might already be done via email and chat; ideally, it will be done directly in test systems like JIRA
• encouraging documentation teams to use a range of environments when documenting systems. If documentation teams use different browsers and operating systems, this is a quick win in diversifying the configurations that systems are tested against
• including documentation teams in sprint meetings and planning sessions.

This reflects a 'shift left' mentality that aims to design better systems before investing the time and expense of developing and testing them. Documentation teams are well placed to ask the tricky questions that an end-user might, while their extensive experience in actually using applications allows them to advise on what might be done differently in upcoming development. The result is fewer requirements defects, leading to the development of higher quality systems first time round.

The resultant benefits for testers and developers include:
• pre-development feedback, leading to less time and expense spent remediating defects that stem from requirements errors
• more pre-release testing, identifying bugs before they can impact the ultimate end-user experience
• high-quality bug reports, with step-by-step instructions and images for replicating defects in a certain environment

"A few easy-toimplement steps can align documentation and QA efforts, effectively creating ‘free’ full time test equivalents"

• higher quality system requirements: as docu-testers work through user stories and system designs, converting them into step-by-step instructions and videos, they are essentially building on the requirements. This helps to avoid technical debt, keeping systems well documented for future development and on-boarding
• detection of otherwise unfound bugs: more testing should mean more assurance, while documentation teams furthermore provide a fairly unique persona while testing. They can be more likely to spot bugs that might be found by non-technical users, but not by the tech-savvy testers and developers who create and validate systems.

The relatively small time investment in attending sprint meetings and properly logging defects offers further rewards for documentation teams:
• less frustrating and time-consuming rework: changes to existing functionality usually require updates to documentation, laboriously updating screenshots and instructions across potentially vast user guides. Identifying defects before they are developed and reflected in documentation reduces the immediate need to make such changes later
• easier documentation maintenance: bringing documentation teams into the sprint cycles and planning meetings keeps them up-to-date on changes being made to systems, before they are made. They can therefore identify and plan for changes that will need to be made to documentation. With documentation teams further using ALM and test tools, documentation can be linked to given user stories, with alerts set up as the system changes. The result is less out-of-date documentation piling up, improving the end-user experience
• better upfront understanding of the system: closer communication and collaboration with those who design and develop systems provides insights into how applications have been developed, and why. This reduces the need to ask questions and await feedback when producing documentation, speeding up the day-to-day work of documentation teams
• it's rewarding! Documentation teams possess a good overall understanding of the systems they work with; however, it is often exercised only after the fact. Providing input during the design phase and seeing the results in the developed systems we work with is rewarding, no matter how small the impact.

These benefits are achievable with a small time investment by documentation teams. By simply letting documentation teams know where to input the questions they are already asking, you can effectively create 'free' full-time test equivalents!




HOW DO YOU BECOME A SOFTWARE TESTER?

Software testing is often touted as an easy job to get. In the quickly growing tech industry it has less hype than software developer and more junior roles than the likes of scrum master. Here TEST Magazine hears a first-hand account of a young professional taking his first steps towards the world of software testing.

Here I'll try to share my journey with you and explore the reality of changing to a career in software testing. I didn't go to university, for understandable reasons: I was young, I didn't know what I wanted to do, and the structure of school and tuition didn't work for me. I had a lot of fun in those years but, in terms of my career, I was left aimless and undecided. I took on a couple of jobs for the sake of work which, not surprisingly, were mostly soul-destroying! I was good at my jobs, but the lack of passion and identity with the roles really got me down; this quickly affected my mood and my punctuality suffered. Before I knew it, I had been fired and was back at home with my parents – not a good start.

So, I looked for a new job. I thought about what I was good at during my misspent school years.


At the age of 19, I knew as much about myself as I did about girls – and pretty much anything else for that matter – so I relied on my parents as a guide and, naturally, being from the baby boomer generation, they wanted me to have a stable job that paid well. 'Pursuing your passion' was something reserved for less practical people who didn't accept the realities of life, or so they said. I was good with numbers and I had spent two weeks doing work experience in accountancy firms – so accountancy it was. This was after the financial crisis, so there were no jobs within commuting distance, and the place we lived, Doncaster, had one of the highest unemployment rates in the country. I sent letters on yellow paper to all the accountancy firms in Doncaster – 26 letters. I can still remember the taste of those 26 strips of gum.

CAMERON HUNTER TRAINEE SOFTWARE TESTER Cameron has been looking for his first role in the software testing industry after deciding to leave behind six years of success as an accountant



The letter made note of the fact that, under the apprenticeship scheme, they could employ me for well below the minimum wage. A couple of weeks passed with no luck. My next step was to do the same again and repeat my bulk application in Wakefield, the place I went to school. Wakefield was an hour's commute away and it had a much lower unemployment rate. This is where I found my current job.

A GROWING REALISATION

For six years I made a career of being an accountant; I became a professional. But something still didn't feel right. Six years of growing up had taught me more about myself – I was creatively minded, I was passionate about technology, I was very sociable and I loved puzzles. I wanted a job that involved these things. My best friend was a software tester and, after plenty of late-night discussions about his job, it slowly became obvious to me that I too should become a software tester.

A NEW CAREER

I had two weeks off work to study, and I chose to repurpose this time to focus on my career change. I set out with buckets of enthusiasm and, when I googled the obvious question, 'how do you become a software tester?', it quickly became apparent that there was a wealth of free knowledge available online. A list of blogs came up which targeted my situation; one I found very useful was asktester.com.



It offered a slew of detailed and empathetic articles: 'new testers – stop worrying about these things and you'll be fine' and 'how to become a software tester – a complete guide', to name a couple. The amount of effort that was put into this blog was astonishing. A number of websites with endless modules were also available. These taught all the essentials that a tester needs: the importance of testing, types of testing, manual vs automation, BDD/TDD, SDLC/STLC. There were even articles that attempted to rate these websites – a good indicator of just how much is out there. It really speaks well of the testing industry that there is so much free information online. This altruistic custom of knowledge sharing was refreshing.

THE NEXT STEPS

I was getting to grips with the basics of software testing. Scrolling through the job adverts, I noticed they all asked for at least one year of experience in manual testing. So, I turned to the uTest platform: crowdsourced testing that anyone can have a go at. Most test cycles on there involve functional testing of web-based applications, but ironically the user interface was a nightmare! I had a go at some test cycles and got a feel for what it was like reporting bugs, completing test cases and reviewing user interfaces.



The amount of form filling and box ticking was draining, but it can't all be fun and games. Software testers do more than break stuff, and this grounded me – there are boring parts to any job.

To really push the boat out I attended some meetups; I could rub shoulders with those in the industry whilst learning about my new passion – a win-win. Who knows, maybe I would find someone who was hiring? That same week I went to Code Up – a meetup aimed at teaching people how to code. Here I learnt how to animate a rectangle – not bad. There were plenty of people to talk to, some in my position and some who just wanted to learn something new. This was hosted by Infinity Works, who laid out free food and had plenty of mentors available to help people out. It was an incredibly relaxed and friendly atmosphere.

The next week I attended a Ministry of Testing meetup. Here we split into groups based on our experience. I did some functional testing, which involved finding bugs on the popular website Tumblr – of which there were plenty! This was a similar experience to what I'd done on uTest and something that I enjoyed – creatively trying to break something. The evening ended with each group sharing what we'd learnt with the other groups, and I think everyone there learnt something new. Knowledge sharing really is a thing in this industry.






"Automation was the meal of the day; using AI to automate, how to convert your manual testing team to automation, how to set up your first automation test package. The Software Testing Conference North was great – Barclays, EE and Deloitte all had speakers give honest accounts of just how difficult it was to make these changes"



I got talking to some people afterwards and there was a lot of chat about grad schemes, something I'd overlooked because I don't have a degree. Apparently, they might consider me based on my other merits. My friend had recently got a role as a consultant; he had a long list of recruiters for me and he also got in touch with his contacts at different firms. I didn't take this for granted – I knew his efforts were likely my best chance of getting a job in the industry. At this stage I'd sorted my CV out and I felt comfortable sending it off. My phone would not stop ringing! There were still no roles for juniors – they all required 1+ years' experience – but this didn't stop endless recruiters calling to have a chat with me. Some recommended grad schemes, but most just said they would keep me on their system until a junior role came up.

TIME TO PROVE MYSELF

A few days had passed and I was offered a telephone interview for a trainee automation role. It required manual experience and, somehow, I made the cut. Being so soon in my search, this was a great result. When the time came for the interview, my heart was pounding. I picked up the phone and ran my mouth for 25 minutes with, probably, a little too much passion. Luckily, I didn't scare the interviewer away and he offered me a tech test. What an amazing opportunity: I was required to teach myself Java and write some tests – the sort of thing I would do as a hobby. This is another wonderful aspect of the tech industry: interviewers find it useful to set logical puzzles as tasks for interviewees.


It gave me childlike excitement – exactly what I needed. In three days, I learnt Java and wrote four pages of code testing a sample website using Selenium, and I thoroughly enjoyed it. I could get lost for days troubleshooting and solving logical problems, which is what this turned out to be. I couldn't learn much Java in the three days I had, but I absorbed what was necessary to complete the task that was set. Proudly, I managed to utilise JUnit as well, to make my tests appear in a more user-friendly manner. Hopefully they would recognise my abilities and I could start the next stage of my life.

My efforts paid off and I had earned myself a face-to-face interview. The company seemed very forward-thinking – as I sat in the reception it felt like I was at Google HQ. Slogans were painted on every wall and I could even see my interviewer splayed over a couch chatting to a trainee at the end of the corridor. I was eventually welcomed into a meeting room. That day their staff had been working on personal projects, a way of giving them more purpose. I had seen this in a TED talk before; I couldn't believe it was happening here in Yorkshire! We had a passionate discussion about software testing and their company values: empathy, quality software and a focus on customer experience. These were values I could get behind. Moving on with the interview, I was asked to test a hypothetical vending machine while they left the room. I scribbled all over their floor-to-ceiling whiteboard and then presented it to them with as much gesticulation as I could muster.



Then we sat down to discuss my tech test – not so easy. I was asked what certain things meant in my code and, in all honesty, I couldn't describe them in enough detail. Instantiating classes, calling methods from other classes – I didn't fully grasp some of the concepts I had hijacked in my code. Their disappointment was immediately obvious. We discussed this and they made it clear there was an expectation that I would have to copy and paste code, but that they wanted someone who was able to get further in the time I had. Tough crowd! It was too much of a stretch. Within a few weeks of deciding to change career I, maybe unsurprisingly, was not able to clinch an automation role. I had tried though, and it encouraged me to keep going. This was a prestigious company and to get that far was no mean feat. I received an apologetic email from them indicating that I would be welcome to reapply when I had spent some more time learning Java. This was reassuring – hopefully they meant it.

A TESTING CONFERENCE

Ticking over on the side, I had been a user on uTest for a short while now. I came to realise they were quite unforgiving. I typically only got three days from the announcement to complete the test cycle and, with few spaces and unusual start times, it was quite rare that I actually got to participate in those I'd been invited to. Time was precious, so uTest had to take a back seat. Meanwhile, my friend had managed to get me a ticket to the Software Testing Conference North in York, something he had been talking about for a while. What a great opportunity! I could get a real taste of what it's like to be a tester and maybe get a chance to do some networking, which is something that, admittedly, I am terrible at.

Off we went to York to learn about some of the esoteric topics and challenges of the industry. Automation was the meal of the day: using AI to automate, how to convert your manual testing team to automation, how to set up your first automation test package. The Software Testing Conference North was great – Barclays, EE and Deloitte all had speakers give honest accounts of just how difficult it was to make these changes. Once again there was this theme of altruistic knowledge sharing. These big companies were open and communicative and plenty of people were engaging in discussion. As far as networking was concerned, I got lots of pens and business cards, but the advice was usually the same – apply to grad schemes.

IF AT FIRST YOU DON'T SUCCEED

Spurred on, I explored further avenues: the grad schemes BJSS, Ten10 and FDM. FDM wouldn't let me click apply because I didn't have a degree – oh dear. I sent an email to them anyway. It's worth noting that FDM required me to be able to relocate anywhere, something that I'd heard some negative stories about in the meetups. A woman had relocated from Glasgow – a fair distance. That would be hard to commit to when I have a network of friends and a mortgage to consider. BJSS offered me a telephone interview for a summer 2019 start… hopefully I can get a job before then. I found a couple of apprenticeship roles available, but these offered eye-wateringly low salaries, which I just couldn't take without building up some savings in advance.

That brings me to the present. A few months have passed and I still haven't managed to change career. I made a big push to start with, but the realities of life – the need to focus on keeping my current job and to keep my spirits high – have slowed my progress a great deal. I am still passionate though and, in time, I know I can build up some of that much-needed experience. Now I have settled in for the long haul there is plenty to be getting on with: I will be learning Java for that chance at an automation role, I will knock up a bulk application to all the firms in the area, and I will make a start on studying for my foundation in testing from the ISTQB. The ISTQB qualification is often cited in job descriptions, but the time and money cost put me off initially. I really would have preferred to get it after finding a junior role.

The software testing industry isn't perfect; the reliance on grad schemes to provide the majority of junior roles leaves a gap for people like me, who have no degree and little flexibility to change location. It would be nice to see more junior roles on the job market. On the other hand, I have to be thankful for the premium that is placed on learning ability and knowledge sharing, which enables employers to take a chance on wildcards like myself by setting tech tests. It is difficult to make a career change but, as long as I stay passionate and keep working at it, I know I will get there eventually.




MANUAL TESTING IN QA & DEVOPS

With the sheer volume of tools available, it's safe to say we are living in a golden age of software. However, the human element will still be critical in a world of automation and DevOps.

With the sheer volume of apps, tools, SaaS, cloud technology and the IoT, it's safe to say we are living in a golden age of software. Businesses and consumers alike have more access to a wider range of products, services and support now than ever before. But we are also in the age of automation, where businesses are looking to cut costs and streamline departments, getting production, development and testing to work together as smoothly and seamlessly as possible. It's the unenviable job of DevOps to integrate several teams within an organisation and get them working together as quickly and efficiently as possible, bringing their plethora of tools and software under one banner.


One way to do this is to automate as much as possible, particularly with repeatable tasks that might otherwise drain time and resources. Some research even suggests an actor-based framework to solve the multiple communication problems, passing data to each relevant entity autonomously (Cois, Yankel & Connell 2014). However, in a world increasingly dominated by automation, it's the vital role of quality assurance that can miss out, as teams focus too much on efficiency and not enough on quality, leading to poorer products. That's why it's important to remember that, for all the benefits of automation, the human is still a crucial part of the process – and that's doubly so when it comes to software testing.

WHY WE AUTOMATE

SCOTT SHERWOOD
FOUNDER, TESTLODGE
Scott is the founder of the software testing tool TestLodge.

The desire to automate is understandable; after all, it's good practice – machine-managed monotonous tasks free up developers and testers for other fun activities such as, well, setting up more automated test scripts. It can also aid quicker and smoother releases, meaning products get into production faster. In some instances, testing can account for 50-75% of the cost of a product's release, so cutting that through automation can have an impact across the whole project (Blackburn, Busser & Norman 2004). Manual testing takes time, skill and, in many cases, patience. It's far easier to set a test suite to run through a library of test cases and report the findings. But, whilst this might give you a better yield, it can also lead to an over-reliance on automation and therefore make you vulnerable to some of its misgivings.



So, what are the weaknesses of automation, and where does it fall short compared to its human counterpart, manual testing?

Time & Resources
It may seem silly, given that automation is designed to free up time, but creating the appropriate automated testing for a specific case can take a lot of time and resources. Obviously, it depends on the complexity of your software, but building or finding an automated test that matches your requirements can take a lot of work. This is fine if you expect to be testing similar items for a long time, but for individual projects, or those with tight deadlines, setting up an automated function might see you falling behind schedule as you look to future-proof your foundation of testing. Another factor is that if a change is made, then the automated testing will also need updating, which again can be a big time drain. Sometimes, even the smallest of changes to the interface can lead to a large amount of test re-writing. This can lead to pushing releases back, which has a knock-on effect across other departments and, further down the line, can cause concern among investors or stakeholders.

Automation Errors
For the most part, when a human makes a mistake he or she knows it and can set about correcting the task. With an automated test, QAs can often spend as much time trying to figure out the error in the test as they would have spent testing manually. There have even been cases of automated tests introducing new faults not covered by existing test cases, which can cause further delays (Nakajima et al 2016). There is a great temptation to replace manual testing with automated testing, when in fact the latter should be used to support the former. When automated testing is the sole means of bug hunting, there will always be a gap in testing. For example, if the developer is writing the tests, and there's an error in the code that they have not thought of, then they likely haven't written a test to find it, either. Or there may be unexpected usability issues, such as with the UX: if the on-screen text doesn't have enough contrast with the background, the user will be unable to see it clearly. A human would naturally spot this straight away, but an automated script most likely won't have been programmed to check contrast on pieces of text (see the sketch at the end of this section). The train-track style execution of set commands is limited to its own design, and it fails when it meets unexpected issues.

Lost in Translation (false positives and false negatives)
Without preaching to the choir, automation, by its very nature, is simply a machine running through its list of commands. There's no creativity to wonder 'have I checked this?' or 'perhaps I should run something different' – it does what it was told to do and nothing more. That lack of creativity is cited as being one of the biggest drawbacks to automation (Pan 1999). This is fine if the test is a simple one, but for the more complex, there is a greater risk of something getting lost in translation. The setback that an undiscovered false positive or negative can cause can be detrimental to the whole QA process, and it's where thorough manual testing really holds sway over automated testing.

Less Creativity in Tests
As previously mentioned, automation takes creativity out of the equation. This may be fine for basic tasks, but for more in-depth and exploratory testing, you need to take time and get into the core of your software, testing the expected and unexpected performance of critical functions. A skilled and experienced tester will know what to look out for, what functions can cause problems, and perhaps most importantly, what defects warrant new test cases of their own. A manual tester discovering a bug can also feed back to the automation test writers, to cover that issue in the future. Indeed, the most often cited reason for exploratory testing is the difficulty of designing test cases for complicated functionality and the need for testing from the end-users' viewpoint (Itkonen & Rautiainen 2005).

Keeping Skills in Practice (for when you need them!)
Even the most finely tuned machine will eventually break down. A testing team could have a library of automated tests running like a Swiss watch, but eventually there will come a time when the testers will be tested.



"I believe, the ability to think creatively about the user experience will mean manual testing will always have a place in the QA process, even if it comes down to something as simple as ‘that doesn't look right’."

When something different and unexpected goes wrong, you need someone in your team to get creative and find a new solution to an unprecedented problem – and that takes skill and experience. Skills and experience are only gained when they are put into practice, and for the most part that only happens through the development and implementation of manual testing.
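As a concrete illustration of the contrast example above, here is a hedged sketch of why such usability problems slip past a typical automated check: the script verifies only what it has explicitly been told to verify. It assumes Selenium WebDriver and JUnit 5; the page URL, element ID and expected text are illustrative.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;

    class WelcomeBannerTest {

        @Test
        void bannerShowsWelcomeMessage() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.test/home"); // illustrative URL
                WebElement banner = driver.findElement(By.id("welcome-banner"));

                // Passes even if the text is light grey on a white background:
                // the script checks presence and wording, not legibility.
                assertEquals("Welcome back!", banner.getText());

                // Contrast is only caught if someone thought to script it explicitly,
                // for example by reading the computed styles and comparing them.
                String foreground = banner.getCssValue("color");
                String background = banner.getCssValue("background-color");
                System.out.println("Contrast inputs: " + foreground + " vs " + background);
            } finally {
                driver.quit();
            }
        }
    }

A human tester spots the washed-out banner at a glance; the script needs that extra, deliberately written logic – which is exactly the gap in creativity described above.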

MANUAL TESTING WILL ALWAYS HAVE A PLACE

With hundreds of thousands of new applications created every year, there's no doubt that, as the range and diversity of software continues to grow, automated testing will be needed to take over repetitive tasks and free teams up for more complex issues. However, no matter how advanced automated testing becomes, a human will always be needed to identify the more advanced and unexpected issues. This is a statement somewhat confirmed by a survey of 115 software professionals, which found that 80% of practitioners disagreed with the vision that automated testing would fully replace manual testing (Rafi & Moses et al 2012). I believe the ability to think creatively about the user experience will mean manual testing will always have a place in the QA process, even if it comes down to something as simple as 'that doesn't look right'.




CONTAINERS BRING TEST BENEFITS

Having worked with containers in multiple projects for a couple of years now, I've been blown away by the advantages they can offer for software testing.

So, the main question is: what are containers and what do they do? When an application traditionally runs on a system, it's in a fight for that system's CPU, memory and resources alongside all the other processes that are running. However, when a container runs, it is given its own slice of CPU, memory and resources to use, freeing it from this contention. Another major difference is that on a traditional system the application would be running on the system's operating system, but containers are abstracted away from the OS and only share access to its kernel. So, now we have a basic understanding of what containers are, let's examine their benefits, specifically focusing on testing and quality.


Creating A Standard Environment
When a non-containerised application is created, you usually need to version control the source code and then look to deploy this code to multiple environments through its lifecycle. This might be test, pre-production, UAT or any other environments through to production. This process is fallible and can mean the application is running on completely different environments each time. With a containerised application, you deploy the application on to an image, which is a packaged version of the entire environment and the instructions to run the application on that environment. This image can then be version controlled and passed around the different environments, meaning that a lot of the variables are removed from each environment, making it much easier to investigate any issues.

STEVEN BURTON
SENIOR CONSULTANT, INFINITY WORKS
Steven has over a decade's experience in the industry and is a big advocate of CD pipelines, agile and scrum.

This advantage extends to running locally: in order to test a containerised application locally, you can just get the relevant image and spin it up! This means you are running the application in the exact same environment that it is running on in production.

Issue Recovery
With standard applications, when there is an issue you may need to restart the application to recover. On a normal environment this could cause knock-on effects like locked files, frozen processes, memory not released and so on. However, with a containerised application all you need to do is stop the container and start it again. As everything within the container is isolated from the rest of the system, any potential issues that happen within the container are isolated as well.



CONTINUOUS INTEGRATION PIPELINE

There are actually many benefits to a CI pipeline when it comes to containers, so let's examine some of them individually.

Standard Automation Clients
How many times as a tester have you seen or experienced issues where you are running automated checks on one machine that don't happen on another? Containers can provide a solution to this by allowing you to containerise the client running the automation. You no longer have to worry about the client operating system, the client resources or other applications running on the client, because every client will be running in the same environment!

Simplistic Deployment
In a normal CI pipeline, to deploy your application version to many environments, you would need to access code from some kind of repository at different stages, build that code and then attempt to deploy the artefacts. Alternatively, you might build the application once and pass around the build artefacts, which would still need to be unpacked and deployed. With containers, you can build your application once, running automation as appropriate, then package this up as an image and pass it around to the different stages. This means all you need to do is run that container on the next stage, without any building or deployment. Depending on your hosting solution you may need some way to get the image to the host and run it, but doing this will almost always be simpler than deploying artefacts.

Standard Deployment Environments
It's not only the application environment that can be made into an image. As we've seen above, the client used to run the tests can also be an image, and we can also use a standard image to actually perform the steps in the CI pipeline. A lot of CI technologies these days (such as Concourse) already do this, spinning up containers to run the tasks you define. However, even with these, we can control the base image the task is run on ourselves. Controlling these images means that when we build our application we can control the software packages on the environment, any environment variables that might be hanging about and anything else we want to.

This allows us to be much more confident that, when we do any deployment activities, they are not being affected by things outside of our control.

Rollback
When an issue occurs in production with a non-containerised application, it can be difficult to determine where the issue was introduced. Ideally, we'd always aim to fix forward with issues, but in certain companies any issues or downtime may be very costly and rolling back might be a reasonable temporary solution. However, this can be difficult to do, especially if you have multiple applications talking to each other and external dependencies like a database. But with a containerised application, you can simply go back to a previous image of your application and spin this up! If your application has external storage you may still have to consider this, but the rollback of the application itself is simple. It's also easier to find out which version you can roll back to safely, as every version of the application will have its own image, so you can go to that specific image and test it locally or on any environment before making any changes to production.

Public Images
Containerisation is no longer a new technology and has reached a good level of maturity, meaning some of the biggest companies in the world have climbed on board. These companies (such as IBM, Microsoft, Sun etc.) have handily built their own publicly available images for you to use locally.

Widely Used Technologies
If you need to run a particular technology locally, like a database technology or an environment for a particular language, then you can just download the image and spin up a container. Some examples I've used before are:
• Alpine Linux: Linux-based images, which have been extended for many languages such as Java, Python and Javascript flavours
• Databases: multiple technologies such as Oracle, Cassandra and DB2, as well as cloud technologies such as DynamoDB from AWS and No-SQL technologies such as MongoDB
• Streaming: Kafka and RabbitMQ docker images are ones I've used, but other streaming technologies also have public docker images.


There are many others, including some great images available for CI and test-based tools, but these are just a few examples from my own usage.

Utilities
Another use of publicly available images is to spin up utilities and services that you have complete control over. Rather than having to hit some physical server somewhere, you can download and use these images easily and for free. Some examples of this I have used are:
• Wiremock: if you need something that runs and mocks out REST calls, there is a Wiremock image you can download and run
• FTP/SFTP: a very useful set of images for any time you need to run an FTP or SFTP server, which you can then query and interrogate to check the files being received
• SMTP: I've used an image called MailHog for emails before, which means you can create a container from this and have your own SMTP server to hit and retrieve the emails from – excellent for automation
• HTTP: a simple image running an HTTP server which opens routes for requests, meaning anywhere you send out requests to an external service can be pointed here for automated checks to ensure those messages are sent correctly
• Zebra Printers: very useful for automating checks to and from printers where you want to ensure the right data is sent out to an external printing source.
There are hundreds more of these types of images available, and for me this is the biggest way that testing has been revolutionised by containerisation.
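As a rough sketch of how a public image can back an automated check, the example below spins up a throwaway container, hits it over HTTP and lets it be torn down again. It assumes the Java Testcontainers library (1.15 or later) and the public nginx:alpine image; in real use the image might be Wiremock, MailHog, a database or any of the utilities listed above.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.GenericContainer;
    import org.testcontainers.utility.DockerImageName;

    class PublicImageSmokeTest {

        @Test
        void containerisedServiceRespondsOverHttp() throws Exception {
            // Pull and start a throwaway container from a public image.
            try (GenericContainer<?> web =
                         new GenericContainer<>(DockerImageName.parse("nginx:alpine"))
                                 .withExposedPorts(80)) {
                web.start();

                // Testcontainers maps the container port to a free port on the host.
                URL url = new URL("http://" + web.getHost() + ":" + web.getMappedPort(80) + "/");
                HttpURLConnection connection = (HttpURLConnection) url.openConnection();

                assertEquals(200, connection.getResponseCode());
            } // The container is stopped and removed when the try block exits.
        }
    }

The same pattern gives every developer and CI agent an identical, disposable copy of the dependency, which is what makes the standard environments described in this article so repeatable.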

CONCLUSION

As I've shown, there are a number of advantages for testing in making use of containers. This article is not designed to persuade you to containerise your applications themselves, because that's not always appropriate or easily achievable – but I think I've shown some good benefits if you do choose that approach. I would also encourage you to think about other ways that containers can help you with testing, perhaps by using public images in your automation or using images in your CI.




FOR THE ENTERPRISE, CHANGE ISN'T OPTIONAL. NEITHER IS AUTOMATION

More corporations are meeting the challenge of increased release frequency, emergency security updates and the demands of the business by deploying Worksoft's Certify end-to-end automation.

Driven by the threat of being overwhelmed by the explosion of commerce through mobility, today's enterprise must transform from a heavily customised, resistant-to-change environment to a nimble platform, where flexibility is the means by which systems keep up with the demands of business. This march to enterprise-wide innovation has created an environment where applications now live in a state of constant change. Shifts to decentralised, autonomous, cross-functional systems and teams have increased the rate and the amount of operational change as much as, if not more than, the rate of change of the IT application landscape itself.


The unintended consequence of this evolution of system interoperability is the ever-increasing complexity of business processes and the demands on those executing them. It is this inadvertent increase of tasks that creates an immediate need for automation – as much for the processes as for the testing of those processes – in order to scale and to ensure all end-to-end processes work seamlessly.

Worksoft provides robotic automation solutions for testing mission-critical end-to-end business processes across the enterprise. Our solutions enable the enterprise to realise the goal of successful transformation by enabling on-time delivery of the latest updates, reducing the number of onerous manual tasks so that people can focus on critical services, and mitigating the risk of falling behind and sacrificing the flexibility these new platforms are meant to deliver.

SHOEB JAVED
CTO, WORKSOFT
Shoeb is responsible for technology strategy, software development, quality assurance, and customer support for all Worksoft solutions.

Worksoft has been the driving force behind robotic, code-free, scalable automation for over ten years. Our real-world transformation experience, embedded integrator collaboration and customer innovation are at the heart of Worksoft's success. Currently, over 300 major multi-national corporations have turned to Worksoft to help them deliver on the promise of flexibility and keep up with the pace of change, while ensuring critical end-to-end business processes work as intended.

Key differentiators of Worksoft:



• a 100% code-free solution that enables non-technical users to capture end-to-end business processes and auto-generate documents, test steps and robotic process automation (RPA)
• automated end-to-end business process assurance tests that can be easily created, maintained, shared and consumed as part of continuous testing, integration and deployment cycles
• automation accelerators for packaged applications including SAP, Workday, Oracle, Salesforce.com, Hybris, SuccessFactors, and many others.

DEVOPS ISN'T JUST FOR CUSTOM APPS ANYMORE

Up until now, DevOps has primarily focused on the development (Dev) side of the house, while operations (Ops) has largely been ignored. In this new state of constant change, there are numerous changes occurring on the Ops side – from server updates to network changes to security patches – all of which have the potential to cause entire systems to break and cause business disruption. Business process assurance (BPA) goes beyond testing updates and changes to a specific application or system-level API to ensure the end-to-end process continues to function no matter the nature of the change. BPA is an organisation's best defence for ensuring mission-critical business processes continue to run regardless of the changes occurring across the organisation.

The reality of an application's move to the cloud to enable greater frequency of change is yet another unintended consequence driving demand for automation. Enhancing teams with cross-functional capability and rapid deployment methodologies falls short of equipping them to meet these accelerated software updates. Organisations are just now discovering the challenges of delivering on quarterly releases in cloud-based systems such as SuccessFactors. By adopting continuous testing and leveraging shareable automation, teams have ready-to-run daily BPA testing to quickly identify changes and defects for resolution.

In this latest variation of testing, teams not only have to excel at stopping defects before they enter production, but they must also become the critical contributor to solving defects that have entered production. This is where Worksoft has begun to see an increasing number of organisations run testing against production systems, enabling continuous monitoring for changes and issues. This change is especially important in organisations experiencing frequent 'emergency' security patches and/or that have moved to continuous delivery, pushing changes from Dev directly to production. Deployments with end-to-end automation in place also save the business time by eliminating the need to do user acceptance (UA) testing. By continuously running end-to-end tests that recreate real users' interaction with production processes, organisations can ensure their business is always up and running.

What role or need does a COE meet in the new world of Agile and DevOps?
As companies move to adopt Agile and DevOps, the question of what role or need a COE serves has become a common one. Again, here we need to consider the evolution of DevOps as it continues to move away from a predominantly Dev focus. Organisations with enterprise-class deployments that contain a mix of custom-built and packaged applications still need to apply both approaches to testing. However, it is not possible to map every step or field in a business process to a requirement, or to maintain change lists that are reliable enough to catch every possible issue in a complex business process. End-to-end tests that cover the complete business process have to be built and maintained – a need that typically falls outside the realm of work of any single development team. There is also the question of which team(s) own the creation and maintenance of tests that will address changes made by operations.

Worksoft is now seeing the real-world evolution of the COE and a merging of QA and operational change management teams, where the centralised team takes ownership of tool standardisation and the distribution of best practices, testing resources and testers across project teams. The COE remains responsible for running end-to-end tests for critical systems and ensuring the business continues to work, eventually moving from change management to change monitoring. Tests are run both in QA and production on a continuous basis to ensure any issues are resolved before possible critical outages.

Worksoft specialises in packaged application testing – what does that mean, and why aren't custom app methodologies good for packaged apps?


The Dev in DevOps is by its nature a focus on the development of custom-built applications. The reality is that most Fortune 2000 organisations run their most critical business processes on packaged applications: SAP, Oracle, Salesforce.com et al. Worksoft focuses on business outcomes across all applications and simplifies enterprise application projects. Our unmatched experience and solutions offering continuous automation for packaged apps make us the right partner to help manage business process assurance and realise true end-to-end testing.

SAP S/4HANA is rapidly becoming the packaged application that clients and interested parties most often meet with Worksoft to discuss. Many of these discussions revolve around the acknowledgement that not all of today's critical business processes begin or end in SAP, but most will traverse that system at some point.

Worksoft Certify is the industry's first automated, codeless testing solution built to test end-to-end business processes across packaged applications like SAP at scale. Certify enables users and vendors to easily package a process with all of its related dependencies, sub-processes, record sets, record filters, layouts and variables into a container that can be shared across Certify instances. This allows Certify users to easily share tests rather than recreating them. Certify supports more than 150 applications across mainframe, SAP GUI, modern HTML5 web, mobile and much more.

The Worksoft Extensibility Framework (XF) uses machine learning to identify how an application generates UIs and tailors object recognition based on those patterns. Extensibility Frameworks are built to follow the apps they support and include predefined optimisations that improve object recognition and test execution performance. As new versions of an application are released, the XF is updated and the corresponding tests are updated with it, eliminating the need to recreate tests based on the underlying XF definition. Today, Worksoft supports XFs for more than 30 of the leading enterprise applications, including:
• Microsoft Dynamics 365
• Oracle Fusion
• SAP: S/4HANA, Concur, CRM, ESS XF, Fiori, Hybris C4C, MDG, Portal, SRM, SuccessFactors and WebGUI


"The unintended consequence of this evolution of system interoperability is the ever-increasing complexity of the business processes and the demands on those executing these processes. It is this inadvertent increase of tasks that creates an immediate need for automation... in order to scale and to ensure all endto-end processes continue seamlessly"

• Salesforce and Salesforce Lightning
• ServiceNow
• Siebel Open UI
• Workday.

AUTOMATING EARLY, OFTEN AND EVERYWHERE

Worksoft recognised years ago that our customers' enterprise applications were undergoing a wholesale transformation from primarily monolithic, on-premise systems to a diverse, distributed, heterogeneous set of applications deployed in the cloud, on premise and in hybrid configurations, using modern web and mobile UIs. This transformation would require a fundamentally different approach to creating, deploying and managing test automation at an unprecedented scale, one that addresses the needs of a varied group of business experts, developers, testers, project managers and IT operations people in an effective, collaborative manner.

In response, we re-architected our platform to be more open, modular and easier to integrate with, so we could better fit within Agile methodologies (primarily SAFe-based). Worksoft also built an extensive automated business process discovery and analysis solution that integrates seamlessly with our top-ranked testing platform. Allowing organisations to implement and integrate automation in a consistent way across systems makes it progressively easier to share deployment patterns amongst teams. This unique approach dramatically reduces the time needed to discover, document and prioritise critical end-to-end business processes, and accelerates the creation and execution of automated continuous test libraries.

How is Worksoft's approach to continuous testing unique?
Running unattended, automated end-to-end tests driven from a UI, in parallel, creates unique challenges for testers:
• The device running the test needs to be powered on and confirmed running
• A specific user needs to be logged into the device and the interface screen unlocked
• Tests need to be orchestrated across multiple devices in multiple labs, on premise or in the cloud.


Worksoft Execution Manager addresses these challenges and gives you centralised control for running remote Worksoft Certify tests via documented REST calls from any CD platform. It handles logging into machines, launching tests, taking screenshots and reporting passing and failing tests back to the continuous integration/continuous delivery (CI/CD) platform. Support for advanced scheduling enables users to specify the order of tasks to be performed and to define dependencies between test processes passing and failing in order to continue testing (an illustrative sketch of such a pipeline call appears below).

Why do clients choose Worksoft?
Real-world experience, collaboration and innovation are at the heart of Worksoft's success. Hundreds of clients have turned to Worksoft to help them ensure all their critical business processes remain up and running through any change. Core reasons customers choose Worksoft:
• Proven business-driven approach and customer experience
• Focus on testing enterprise packaged applications (SAP, Salesforce.com, Oracle etc.)
• Code-free solution that can be leveraged across business users, testers and developers
• Ability to support Agile-plus-DevOps testing practices
• Stand-alone automated discovery and documentation capabilities
• Advanced object recognition capabilities for complex web UIs like SAP Fiori, and rapid release of version updates
• Out-of-the-box integrations with other testing tools, CI tools, ALM systems and mobile testing platforms.
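To make the REST-driven integration described above more concrete, the snippet below is a minimal Python sketch of how a CI/CD job might trigger a remote end-to-end test run and gate the pipeline on its result. The endpoint paths, payload fields, suite name and the EM_URL/EM_TOKEN variables are assumptions for illustration only; they are not Worksoft Execution Manager's actual API, which is defined in the product's own documentation.

"""Illustrative only: trigger a remote test run over REST from a CI/CD job.

The endpoint paths, payload fields and token handling below are hypothetical
placeholders, not Worksoft Execution Manager's documented API.
"""
import os
import sys
import time

import requests

EM_URL = os.environ.get("EM_URL", "https://execution-manager.example.com")  # hypothetical server
EM_TOKEN = os.environ["EM_TOKEN"]  # credential injected by the CI/CD platform

HEADERS = {"Authorization": f"Bearer {EM_TOKEN}"}


def trigger_suite(suite_name: str) -> str:
    """Ask the server to start a named end-to-end suite; return its run id."""
    resp = requests.post(
        f"{EM_URL}/api/requests",      # hypothetical endpoint
        json={"suite": suite_name},    # hypothetical payload
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]


def wait_for_result(run_id: str, poll_seconds: int = 60) -> str:
    """Poll the run until it finishes, then return 'passed' or 'failed'."""
    while True:
        resp = requests.get(
            f"{EM_URL}/api/requests/{run_id}/status",  # hypothetical endpoint
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        status = resp.json()["status"]
        if status in ("passed", "failed"):
            return status
        time.sleep(poll_seconds)


if __name__ == "__main__":
    run = trigger_suite("order-to-cash-end-to-end")  # hypothetical suite name
    # Fail the pipeline stage if the end-to-end business process check fails.
    sys.exit(0 if wait_for_result(run) == "passed" else 1)

A step like this would typically run as the final stage of a deployment pipeline, so that a failing end-to-end business process blocks promotion of the change.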

HISTORY & BACKGROUND

For over a decade, Worksoft has been the industry's leading continuous robotic automation platform for enterprise packaged apps, offering a diverse ecosystem of service providers, software integrations and machine learning solutions to enable true end-to-end, unattended automated testing of mission-critical business applications, including SAP, Oracle, Salesforce, Workday®, SuccessFactors, ServiceNow and more. Microsoft, Cardinal Health, P&G, Honda, 3M, Intel and Siemens are just a few of the world's leading global companies who have turned to Worksoft to achieve unparalleled continuous testing at scale and realise DevOps and Agile initiatives.


The winners!

The European Software Testing Awards were held in November last year at the fabulous Old Billingsgate, London, where teams from around the globe gathered for this excitement-packed, glittering awards event. From the hundreds of entries received, the judges painstakingly assessed the anonymised submissions and, after a huge increase in the number of entries, spent long hours deliberating the merits of each project put forward.

The European Software Testing Awards celebrate companies and individuals who have accomplished significant achievements in the software testing and quality assurance market. Winners and runners-up alike gathered for this hotly contested, auspicious occasion and, with the room full to capacity, the crowd got everything they bargained for from Sean Collins, the fantastic comedian of Michael McIntyre's Comedy Roadshow fame. Attendees got the opportunity not only to enjoy themselves, but to rub shoulders with their industry peers for plenty of advice, congratulations and banter on this most successful of evenings.

This year's event is due to be held on 20th November, again at Old Billingsgate, so if you or your team are looking to be rewarded for your hard work in software testing and quality assurance, make this a date for your diary and get your entries in!

THE EUROPEAN SOFTWARE TESTING AWARD WINNER ★★ Lloyds Banking Group


WIPRO TESTING MANAGER OF THE YEAR ★★ Diane Cox, Capgemini

BEST MOBILE TESTING PROJECT ★★ Lloyds Banking Group in partnership with Wipro

CAPGEMINI BEST AGILE PROJECT ★★ Royal Mail Group

BEST TEST AUTOMATION PROJECT – NON-FUNCTIONAL ★★ Tech Mahindra Limited

NTT DATA GRADUATE TESTER OF THE YEAR ★★ Kieran Ellis, Cognizant

BEST TEST AUTOMATION PROJECT – FUNCTIONAL ★★ Specsavers in partnership with Mastek


BEST OVERALL TESTING PROJECT – RETAIL ★★ HIGHLY COMMENDED: Cognizant

BEST OVERALL TESTING PROJECT – RETAIL ★★ WINNER: Infosys Limited

BEST OVERALL TESTING PROJECT – FINANCE ★★ Royal Bank of Scotland in partnership with Sandhata

SANDHATA TECHNOLOGIES LEADING VENDOR ★★ NTT DATA

UST GLOBAL TESTING TEAM OF THE YEAR ★★ Ciklum

KPMG MOST INNOVATIVE PROJECT ★★ Aviva in partnership with Tata Consultancy Services


BEST OVERALL TESTING PROJECT – COMMUNICATION ★★ Swisscom in partnership with Accenture Technology Digital Testing

ACCENTURE BEST USE OF TECHNOLOGY IN A PROJECT ★★ Lloyds Banking Group in partnership with Wipro Account Overview Team (Group Digital)

MASTEK TESTING MANAGEMENT TEAM OF THE YEAR ★★ The Cheque and Credit Clearing Company in partnership with KPMG UK

INFOSYS BEST USER EXPERIENCE (UX) TESTING IN A PROJECT ★★ UXservices

COMEDIAN SEAN COLLINS ★★ Kept the crowds entertained all night!


THE CHARITY RAFFLE RAISED £4,395 FOR GREAT ORMOND STREET HOSPITAL ★★ Ambassador Caroline Shaikh hosts the charity raffle prize draw



21-22 May 2019, The British Museum, London

REGISTER YOUR PLACE TODAY: softwaretestingconference.com

The National Software Testing Conference is a UK-based conference that provides the software testing community at home and abroad with practical presentations from the winners of The European Software Testing Awards, roundtable discussion forums that are facilitated and led by top industry figures, as well as a market-leading exhibition for delegates to view the latest products and services available to them.

The National Software Testing Conference is open to all, but is aimed at and produced for professionals who recognise the crucial importance of software testing within the software development lifecycle. The content is therefore geared towards C-level IT executives, QA directors, heads of testing, test managers, senior engineers and test professionals.

Taking place over two action-packed days, the National Software Testing Conference is ideal for all professionals aligned with software testing: a fantastic opportunity to network, learn, share advice and keep up to date with the latest industry trends.

2 days of practical presentations, workshops, networking and exhibition
44 presentations • 6 workshops



SOFTWARE TESTING NEWS CANADA 2019
SoftwareTestingNews.CA
