
MARCH 2019

QA evolution Enabling agility Information anxiety

Pain free cloud migration The emergence of TestOps


froglogic – Squish GUI Test Automation & Code Coverage Analysis
Cross-platform. Multi-language. Cross-design. Learn more and get in touch: www.froglogic.com

TEST Magazine | March 2019



CONTENTS

Do you have information anxiety? · TestOps · The next QA evolution · The 'see' in CI/CD · Security · Testing fast and hard · DevOps, cloud & QA

04 – Hamburg's InnoGames tell their QA journey from quality assurance to quality assistance.
10 – 'Information anxiety' is a growing and serious problem among software testing professionals.
14 – The impact and adoption of automated visual testing has rocketed.
18 – What's the right way to test? And can it ever be done without problems?
22 – Open source GitHub security applications you and your team should be using.
30 – Are we seeing the emergence of Testing and DevOps as a combined field: TestOps?
34 – The switch to DevOps is challenging software testing and QA professionals' positions in the industry.
38 – When it comes to cloud migration, preparation is crucial to project success!
42 – What is business agility, and how can it be enabled by teams?
48 – National Software Testing Conference at the British Museum – PREVIEW!


UPCOMING INDUSTRY EVENTS

21-22 May – The National Software Testing Conference is a UK-based conference that provides the software testing community at home and abroad with invaluable content from revered industry speakers; practical presentations from the winners of The European Software Testing Awards; roundtable discussion forums that are facilitated and led by key figures; as well as a market-leading exhibition, which will enable delegates to view the latest products and services available to them. softwaretestingconference.com

18-19 June – The National DevOps Conference is an annual, UK-based conference that provides the IT community at home and abroad with invaluable content from revered industry speakers; practical presentations; executive workshops, facilitated and led by key figures; as well as a market-leading exhibition, which will enable delegates to view the latest products and services available to them. devopsevent.com

24-25 September – The DevTEST Conference North is a UK-based conference that provides the software testing community with invaluable content from revered industry speakers; practical presentations from the winners and finalists of The European Software Testing Awards; executive workshops, facilitated and led by key figures; as well as a market-leading exhibition, which will enable delegates to view the latest products and services. devtestconference.com

The European Software Testing Summit is a one-day event, which will be held on 11th December 2019 at The Hatton, Farringdon, London. The Summit will consist of up to 100 senior software testing and QA professionals who are eager to network and participate in targeted workshops. Delegates will receive research literature, have the chance to interact with The European Software Testing Awards' experienced judging panel, and receive practical advice and actionable intelligence from dedicated workshops. softwaretestingsummit.com

Now in its sixth year, The European Software Testing Awards celebrate companies and individuals who have accomplished significant achievements in the software testing and quality assurance market. Enter The European Software Testing Awards and start on a journey of anticipation and excitement leading up to the awards night – it could be you and your team collecting one of the highly coveted awards. softwaretestingawards.com

The DevOps Industry Awards celebrate companies and individuals who have accomplished significant achievements when incorporating and adopting DevOps practices. The Awards have been launched to recognise the tremendous efforts of individuals and teams when undergoing digital transformation projects – whether they are small and bespoke, or large, complex initiatives. devopsindustryawards.com

5 November – The Canadian Software Testing & QE Awards celebrate companies and individuals who have accomplished significant achievements in the software testing and quality engineering market. So why not enter the Canadian Software Testing & QE Awards and start on a journey of anticipation and excitement leading up to the awards night? It could be you and your team collecting one of the highly coveted awards! softwaretestingawards.ca


EDITOR'S COMMENT

GAME ON!


BARNABY DRACUP
EDITOR

EMBRACING THE NEW CHALLENGES

Change is inevitable, growth is optional – as the old saying goes – and now more than ever, testing and development teams are being challenged to grow like never before. In this issue of TEST Magazine we have several features on the role of testing and QA strategies and how they are changing over time. I feel this is indicative of where the industry is at as a whole – the SDLC is forever being rewritten, and the responsibilities of the tester have to change from project to project under the banner of DevOps.

And, if you didn't know the definition of DevOps by now, here it is again: DevOps (development and operations) is an enterprise software development phrase used to mean a type of agile relationship between development and IT operations. The goal of DevOps is to change and improve the relationship by advocating better communication and collaboration between these two business units. Empowering team members at all levels to take part in the quality conversation and take responsibility throughout the SDLC is an essential element of achieving a 'better DevOps life' – as one tester recently put it.

Our lead feature in this issue focuses on one games company that has been taking its QA function to new realms and, after many years of evolution and experimentation, has moved from a quality assurance to a 'quality assistance' approach, empowering the whole development team to own the quality topic within the company. In Fostering the next evolution of QA (p.4), Hamburg-based InnoGames' senior QA engineer, Jana Gierloff, and program manager, Joseph Hill, discuss where they've come from and where they're going in terms of QA within their fast-paced company (and – just FYI – they've come a very long way, but still feel there's always further to go!).

Continuing this theme of growth and change, the shifting industry tide towards DevOps is challenging the position of software testers and QA professionals within the industry, and in Will QA professionals get left behind? (p.30), TestCraft CEO and co-founder, Dror Todress, asks how we can gain a deeper understanding of DevOps, and what it will take for a QA tester to stay relevant in the future.

Despite these demands and concerns, in recent years the emerging practice of DevOps has been a welcome challenge within the software testing industry. In The role of QA in DevOps (p.42), software consultant Rob Catton discusses this exact topic, asking whether people in the QA function of a DevOps team should have any more or less affiliation with quality than anybody else and, looking to the future, whether it should ideally be a joint effort for all involved.

In DevOps and the emergence of TestOps (p.26), testing professional Ditmir Hasani addresses this most pertinent of questions: as testing becomes increasingly varied, will we see Testing and DevOps becoming a combined field? As the lines between testers and developers become increasingly blurred, this certainly looks like the future that lies ahead. However, any change in the culture and mindset of a team or company is no small feat, and will take time to implement.

Hopefully, the features contained within this issue will help you and your team spend your energy on building the new, instead of fighting the old. So, embrace the changes and remember: now, more than ever, it's game on! BD

MARCH 2019 | VOLUME 11 | ISSUE 1

© 2019 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part, without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor of TEST Magazine or its publisher, 31 Media Limited. ISSN 2040‑01‑60

EDITOR: Barnaby Dracup, editor@31media.co.uk, +44 (0)203 056 4599
STAFF WRITER: Islam Soliman, islam.soliman@31media.co.uk, +44 (0)203 668 6948
ADVERTISING ENQUIRIES: Max Fowler, max.fowler@31media.co.uk, +44 (0)203 668 6940
PRODUCTION & DESIGN MANAGER: Ivan Boyanov, ivan.boyanov@31media.co.uk

31 Media Ltd, 41‑42 Daisy Business Park, 19‑35 Sylvan Grove, London, SE15 1PD
+44 (0)870 863 6930, info@31media.co.uk, testingmagazine.com

PRINTED BY: MIXAM, 6 Hercules Way, Watford, Hertfordshire, WD25 7GS, UK
softwaretestingnews · @testmagazine · TEST Magazine Group


FOSTERING THE NEXT EVOLUTION OF QA

InnoGames have redefined the role of QA within their organisation – with their new approach facilitating an evolution from Quality Assurance to Quality Assistance

The video games industry is a fast-paced sector, especially for free-to-play products, which adopt a games-as-a-service approach to quench the desires of their content-hungry users. Trying to keep on top of the software quality aspect is a monumental task using the QA approach of old. Here at InnoGames we decided, after years of experimentation, to focus on the 'quality assistance' approach and empower the whole development team to own the quality topic. Follow us on a journey from the past – to what the new approach means to us now – as well as taking a peek at how our approach impacted our largest product.

InnoGames is a free-to-play browser and mobile games developer from Germany, which was founded in 2007 just outside of Hamburg. We're currently focused on developing mobile products utilising the Unity game engine. We have around 400 employees, of which 12 are dedicated to QA. However, before we dive into where we are now and what we're doing, we should first take a trip down memory lane to see how we got to this point.

JANA GIERLOFF, SENIOR QA ENGINEER, INNOGAMES
Jana has 10+ years of experience in the games industry and led the transition to quality assistance at Forge of Empires.

JOSEPH HILL, PROGRAM MANAGER, INNOGAMES
Joseph focuses on the QA strategy across the organisation and manages the central game technology department.

AN EVOLVING STRATEGY

Over the years, like many companies within our sector, our QA strategy and approach has slowly evolved with the times. The original method relied heavily on an external QA partner to provide the bulk of the QA bandwidth to our products, with a purely black-box testing focus. The aim was to provide as many eyes (and hands) as possible to try and cover all of the various edge cases and features. This approach worked for many years, but it was a process that couldn't really be scaled, both from a business aspect and a purely organisational one. Back then we were releasing at a much slower rate, and the complexity of our products was still manageable.

In 2012, the company slowly built up an in-house centralised QA team, moving away from the heavy reliance on an external partner. This team had the focus of black-box testing across our product portfolio and could move much more fluidly than the previous approach allowed. This presented a lot of varied challenges, one being how to scale the team when different product teams have requests on the same day! We had brought the knowledge in-house, but we still had a scaling issue and were still approaching everything from a purely manual perspective.

In 2013, the company made a strategic decision to decentralise the quality assurance division and embed the personnel in the product teams to improve collaboration and ownership.

In 2014, it was decided that the QA staff embedded in the product teams would report to the technical product lead of the product they were on. This was in line with the whole organisation's shift to a studio structure, moving away from centralised matrix management. The first foray into test automation also began that year, with a proof of concept using the image recognition system Sikuli.

In 2015, it was decided that Sikuli was not a stable or maintainable approach, and a research project was established to look into alternative solutions for test automation that would be compatible with the technology being utilised in the company at the time (Flash, Cocos2d-x and Unity).

In 2016, the decision was made to approach QA with a whole new mindset, inspired by Atlassian and their many talks and posts about quality assistance. The short of it is that with quality assistance, everyone who contributes towards the development of the product is empowered and responsible for owning the quality topic. This, in essence, means that QA's job is to provide the team with the tools and knowledge to handle the new responsibility, by becoming coaches and mentors to each and every team member – no longer being at the end of the pipe for development. Coupled with a test automation framework which can utilise Java or C# for the test suites, this makes it easier for developers to contribute to the continued success and coverage of our test automation.

To make sure that the QAs do not become siloed on their respective products, we formed the 'QA Community', which meets on a weekly basis to share and exchange knowledge. We believe this is a crucial element, as we do not want to reinvent the wheel every time a specific topic appears.
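The article names Java or C# as the languages of InnoGames' (unreleased) framework but shows none of its API. Purely as an illustrative sketch of the style that makes developer contributions easy – every class and method name below is invented, and Python is used only to keep the sketch self-contained – a page-object UI test might look like this:

```python
# Hypothetical sketch only: FakeGameClient stands in for whatever driver a
# real UI-automation framework would expose to test code.

class FakeGameClient:
    """Minimal stand-in for a UI driver, so the sketch runs without a game."""

    def __init__(self):
        self.coins = 100
        self.buildings = []

    def tap(self, element_id):
        # Pretend tapping the 'build hut' button costs 50 coins.
        if element_id == "build_hut" and self.coins >= 50:
            self.coins -= 50
            self.buildings.append("hut")


class CityPage:
    """Page object: developers write tests against this, not raw UI taps."""

    def __init__(self, client):
        self.client = client

    def build(self, building):
        self.client.tap(f"build_{building}")

    @property
    def coin_balance(self):
        return self.client.coins


def test_building_a_hut_deducts_coins():
    page = CityPage(FakeGameClient())
    page.build("hut")
    assert page.coin_balance == 50
```

The point is the shape, not the domain logic: because the page object hides the UI plumbing, a developer shipping a feature can add a test in a few lines – which is what makes "everyone owns quality" practical.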

Above: Bringing a big feature like Cultural Settlements to Forge of Empires was not only a huge success externally, but was a feature that touched every aspect of the game, and it succeeded thanks to our quality assistance approach. Everyone involved with its development believed in and contributed towards the 'quality' mindset.

CULTURE DEVELOPMENT & MISSION STATEMENT

Something else we began to notice from taking this approach was that it inspired the QA members to organically help each other out, which in turn created a very strong culture that everyone has become very proud of. We even created a mission statement for quality assistance, to really embody the new approach and to help communicate QA's new role within the organisation. The mission statement reads:
• The WHOLE TEAM is responsible for the quality of the product
• Every team MEMBER who CONTRIBUTES to the product is responsible for the entire FEATURE development life cycle
• QA is responsible for ASSISTING the whole team in PROVIDING high quality.

SO, WHERE ARE WE NOW?

We're still slowly rolling the approach out team by team. All of our live products are following the strategy, and we're instilling this success and learning into our new products. You can imagine that such a shift in strategy takes time, as a lot of people are used to just throwing things over the fence for QA to handle, and this is no longer a valid approach! But we're really happy with the progress, and it has been a big surprise that the developers also now really enjoy the new empowerment and responsibility.

We now have the QA approach and test automation in the company's tech strategy, which all products have to conform to. This is a big achievement and demonstrates that our management see the benefits of the approach and believe in it. With test automation, we're still driving the technology internally and want to eventually open-source our approach, as we feel a lot of other companies would benefit from seeing our method. There is still no timeline for this, but it's a topic which is always on the table. We also see a lot of additional development opportunities to improve our test automation tool, as well as research into things such as machine learning to enhance its functionality.

It is still early days down this new path and, as Atlassian say, they have not finished their migration to this approach some 10 years on. We still have some way to go before it's truly embedded into the company culture. We didn't stop there though, and are very active in sharing our knowledge and experiences with a number of companies in the free-to-play space. We're very open about the ins and outs of our approach; we believe this is the next evolution in games QA and want to help shape its vision.

THE NEW QA IN ACTION

Now, let's have a closer look at how this new approach had an impact on our most successful product. Forge of Empires is a cross-platform city builder, and one of the most played and successful titles in the strategy genre worldwide. Starting with a small city in the Stone Age, you lead it through different eras, across the Middle Ages and Post Modern Era, all the way to the Virtual Future and beyond. Battle friends and foes in a single-player campaign, or with your guild mates in the Guild Expeditions – anywhere, at any time.

Originally developed as a browser game based on Flash, the game has grown in complexity over the years – not only by expanding the platforms with a move to mobile, but also through new eras and features steadily added since the game was first released. While trying to keep up with these developments, and with the mobile platform and the browsers on their way to abandoning Flash, we were faced with further technical challenges, which led to QA increasingly becoming the project bottleneck.

Although QA was already part of the development process, reviewing design concepts and contributing to groomings and plannings, the QA process was still more like a mini-waterfall, with stories going through a designated testing phase. This led to defects being found too late, resulting in delayed mobile submissions and even re-submissions, which a few times put events and feature activations at risk. To give some insight: at its peak we had an overall bug count of up to 350 known issues. Although most of them were of a minor nature, the sheer amount could easily tarnish the perception of the quality of the game.

Above: InnoGames' Town Hall is a great meeting spot for the 400 colleagues during lunch times – as well as a great location for internal and public meet-ups on technology and games.

With this status quo in mind, we started to implement quality assistance on Forge of Empires, which moves quality into the focus of the team as a whole. The new approach is not just a new QA process; it is a mindset involving the whole team, which eventually enables the team to own the quality topic. A change in the mindset and culture of a team isn't easy, of course, and it takes time.

Within quality assistance, everyone who contributes to a feature is responsible for the entire feature development lifecycle, from the design of a feature until the feature sees the light of day. Typically, QA team members know the ins and outs of the whole system better than anyone else on the team! Thus their input in grooming and planning helps the team to understand the features and to estimate them. These discussions can identify functionality some team members may not have considered, and can also provide additional input on overall system knowledge, particularly around inter-dependencies and the potential impact on other parts of the system. The estimations cover not only the development effort but also the testing effort, which usually gets estimated higher by QA, as developers often focus on the happy path! For example, a small change in the resource handling may involve just two to three story points from a developer's perspective; from a QA perspective, however, this might be a risky change with an effect throughout the whole game, as resources are involved with almost every feature, and may bump the story up to 13 points.

While the whole team is invited to give feedback on features at any time, QA is responsible for assisting the whole team in providing high quality. The best defects are faults which don't end up in the code. To prevent them we use testing notes: a set of hints, written before the development of a user story starts, about the types of bugs and risks which might be expected to be found in the story once completed. They are a guide, and by no means a checklist, meant to help the developers avoid introducing bugs during implementation in the first place.
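As a concrete illustration of the format – the story and every hint below are invented for this sketch, not taken from Forge of Empires' real backlog – a set of testing notes might read:

```text
Testing notes – story "Allow premium currency as an alternative event cost"
(hypothetical example)

Hints for the implementer (a guide, not a checklist):
- This touches resource handling, so consider features that spend or award
  resources elsewhere in the game, not just the event screen.
- Watch for rounding when costs are converted – odd amounts risk off-by-one.
- Browser and mobile share the backend but render amounts differently;
  verify both clients against the same account.
- Players who are mid-event when the change ships must not lose progress.
```

Written before development starts, notes like these let the developer design the risks out rather than leaving QA to find them afterwards.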

MOVING FROM 'DEMO' TO 'DONE'

The testing notes also serve as the basis for the QA demo, which happens once a developer is happy with the implemented user story. A QA demo is a discussion between equals, not a test that the developer or story can 'pass' or 'fail'. It allows the QA engineer to understand the implementation details of the story, down to the code level. They receive information about the decisions taken surrounding risks, and about what testing the developer still intends to carry out before the changes are merged to master. At the end of the demo, the QA engineer and developer make a joint decision on what should happen to the story next. If both parties are confident about its quality, the story is moved directly to 'done'. A story being marked as 'done' is a big deal – it means that the feature will be deployed in a release to production with no further manual testing or review.

Note that anyone can slip into the role of the QA engineer for the demo and take part in the discussion (this can be another developer, game designer, or artist). This can, of course, depend on the story and the experience in the team, as other team members may be better suited to different topics – an art-heavy story, for example, would certainly benefit from an artist joining the demo.

Not all issues can be prevented, after all, especially with features being split into multiple user stories, which leaves room for integration issues to emerge. Hence we organise 'testing dojos' with the team, particularly around bigger feature sets. These serve two purposes: the team finds bugs and gathers feedback about potential improvements, but also gains confidence in the quality of the feature and the experience it provides the user, ensuring it will be something fun to play once released.
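The decision rule at the heart of the demo is small enough to pin down in a few lines. This sketch is ours, not InnoGames' (the real process is a conversation, not code); it simply encodes the joint-decision logic described above:

```python
# Sketch of the 'demo to done' rule: a story ships with no further manual
# review only when the developer and the reviewer taking the QA role agree.
# The reviewer need not be a QA engineer - a developer, designer or artist
# can take that role.

def next_story_state(developer_confident, reviewer_confident):
    if developer_confident and reviewer_confident:
        return "done"           # released to production, no further manual testing
    return "in development"     # address the concerns raised in the demo first


assert next_story_state(True, True) == "done"
assert next_story_state(True, False) == "in development"
```

The asymmetry worth noting is that neither party can move a story to 'done' alone: confidence has to be joint, which is what keeps the demo a discussion between equals rather than a gate.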

IMPORTANCE OF TEST AUTOMATION

Automation plays a crucial part in the process, as it serves as a safety net that allows the development team to make mistakes at times – after all, we are only human and not infallible! We're constantly increasing the stability, coverage, and general efficiency of our UI automation, and we currently have over 500 tests per platform executed on a nightly basis. With the strength of our UI automation, we were not only able to reduce the number of bugs produced, but also to completely eradicate our existing bug backlog within a year – we now stay consistently below 20 active bugs overall. Along with other process improvements and automation, such as our deployment flow, this has enabled the team to confidently release the latest version in development to our beta server on a daily basis.
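A safety net like this usually ends in a simple gate: the nightly results decide whether the day's build may go to beta. The structure and numbers below are invented for illustration (the article gives only the ~500-tests-per-platform figure, and this is not InnoGames' actual pipeline), but the gating logic is the common pattern:

```python
# Hypothetical nightly gate: allow the daily beta release only if every
# platform's UI-test run stays within an agreed failure budget.

def beta_release_allowed(results, max_failures_per_platform=0):
    """results: dict mapping platform -> {'passed': int, 'failed': int}."""
    return all(run["failed"] <= max_failures_per_platform
               for run in results.values())


nightly = {
    "browser": {"passed": 498, "failed": 2},
    "android": {"passed": 500, "failed": 0},
    "ios":     {"passed": 500, "failed": 0},
}

# A zero-tolerance gate blocks today's beta push; a budget of 2 lets it through.
assert beta_release_allowed(nightly) is False
assert beta_release_allowed(nightly, max_failures_per_platform=2) is True
```

The design question is only where to set the budget: a strict gate keeps the beta trustworthy, while a small budget tolerates known flaky tests without blocking the daily release.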

CONCLUSION

That's our brief overview of where we've come from and what we're doing in terms of QA here at InnoGames. We're always learning from our challenges, and every day we're striving for global ownership of quality within our organisation. We truly believe the next evolution of QA is one where everyone involved is held to a high quality standard and is empowered, at all levels, to own it.



BOOK NOW! 21-22 May 2019, The British Museum, London
#NationalTestConf

2 days of networking and exhibitions · 44 presentations · 8 keynotes · 6 workshops · 2 Q&A sessions
• Practical presentations • Workshops • Networking • Exhibitions

The National Software Testing Conference is a UK-based conference that provides the software testing community at home and abroad with practical presentations from the winners of The European Software Testing Awards, roundtable discussion forums that are facilitated and led by top industry figures, as well as a market-leading exhibition for delegates to view the latest products and services available to them.

The National Software Testing Conference is open to all, but is aimed at and produced for those professionals who recognise the crucial importance of software testing within the software development lifecycle. The content is geared towards C-level IT executives, QA directors, heads of testing and test managers, senior engineers and test professionals.

Taking place over two action-packed days, the National Software Testing Conference is ideal for all professionals aligned with software testing – a fantastic opportunity to network, learn, share advice, and keep up to date with the latest industry trends.

REGISTER YOUR PLACE TODAY! SoftwareTestingConference.com



WHAT HAPPENED TO LUKE?

In this article we explore the phenomenon of Information Anxiety in the context of software testing, by telling the story of Luke, an archetypical tester whose story resonates with many in the testing community

As software testers, we are tasked with understanding the evolving needs of multiple stakeholders, understanding the domain of the products we test, keeping up to date with testing techniques, mastering technical challenges, communicating effectively, testing new features, retesting bug fixes, and more! Whilst the work can be interesting and rewarding, we are witnessing an increased presence of a phenomenon that occurs as a result of the cognitive demands placed on software testers: Information Anxiety. In this article we explore that phenomenon by telling the story of Luke, an archetypical tester whose story resonates with many in the testing community.

LUKE'S STORY

On a cold day in November 2017, Luke, a highly experienced tester, jumps out of bed full of energy. Today is the first day of his new job at a major advertising company. After eating a healthy breakfast, Luke takes the Central Line train from Buckhurst Hill into central London and is greeted by his new line manager. Over the course of the next few days, Luke gets to know his new colleagues, familiarises himself with the company's processes, and is assigned projects to work on. Luke is happy. When he gets a call from a recruiter asking if he's interested in an exciting new opportunity, Luke politely turns it down.

Fast-forward to November 2018 and Luke gets another call from the recruiter. This time, Luke is more receptive and agrees to take a few interviews. "What has changed in just a few months?" asks the recruiter. "I'm exhausted," exclaims Luke, "simply exhausted."

So what did happen to Luke? Luke's story, in whole or in part, is one that we hear about all too often. Indeed, most of us have experienced it ourselves in the course of our careers. Although Luke is an experienced tester, he was not very familiar with the domain of advertising. This means that, as a new employee, he had a lot to catch up on in terms of how the advertising industry functions, who the key players were, what innovations were on the horizon, and so on. For the first few weeks on the job, Luke struggled to digest these new concepts, especially since he had worked in the telecommunications industry for most of his career. Not only did he find the material difficult to understand, but the dynamic nature of the industry led to a situation of information overload, where he felt that the gap between what he should have known and what he actually knew kept growing, not shrinking, as time passed and responsibilities evolved. He also found it difficult to locate the information he needed to do his work, and sometimes was denied access to it because people were too busy to talk to him.

MARK MICALLEF, PHD, SENIOR LECTURER, UNIVERSITY OF MALTA
Mark has been involved in the software testing community for 18 years, having worked for the BBC, Macmillan Publishing, Bank of Valletta, Clearsettle and Ixaris Systems.

CHRIS PORTER, PHD, LECTURER, UNIVERSITY OF MALTA
Chris has a PhD in Computer Science from University College London and specialises in research on human-computer interaction and software engineering.

AN EVOLUTIONARY PERSPECTIVE

Whilst information requirements and communication technologies continue to grow at a sustained pace, the truth is that, evolutionarily speaking, the human brain is not much different from what it was a few thousand years ago, when hunter-gatherer tribes roamed the earth. If Luke had been alive 10,000 years ago, his job would probably have consisted of joining a group of hunters every day to track down a herd of deer, make a kill and take the spoils back home to feed the community. Luke would probably have lived his entire life without ever meeting a single individual from a different community, meaning that his exposure to new ideas was severely limited. In fact, the rate of change in the world at this time was so slow that it was practically imperceptible for generations. It was not until the Agricultural Revolution, the advent of money, the Scientific Revolution, the Industrial Revolution and the advent of the internet that this situation changed.

Today, we find ourselves participating in the so-called knowledge economy: an economy powered by knowledge workers whose job it is to create, understand, transfer and apply knowledge in order to generate wealth. Luke, like all testers, is in fact a knowledge worker. On a daily basis, he needs to understand abstract concepts, communicate with a variety of stakeholders, and deal with context switches, changing information and countless interruptions from colleagues, mobile devices and emails. The simple truth is that this change to the knowledge economy has happened so fast that the human brain has not evolved in step. This has resulted in a new phenomenon, particularly affecting knowledge workers, which we know as Information Anxiety.

Information Anxiety is a term coined by Wurman in his 1989 book bearing the same name. He defines it as "stress caused by the inability to access, understand, or make use of information necessary for employees to do their job". That is to say, a situation whereby the sheer volume, velocity and variety of information that knowledge workers are expected to handle can affect their job performance, as well as other aspects of their well-being.

HOW DID INFORMATION ANXIETY AFFECT HIM?

As time passed, Luke’s exposure to information anxiety affected his job performance, his psychological state, and his physical health. Job Performance As Luke found it increasingly hard to cope, he developed damage limitation strategies in which he attempted to limit the amount of sources of information he considered when carrying out his tasks. He also started purposely ignoring useful information so that he could just get on with the job. For example, one day as he wrapped up the testing of a release, a colleague of his informed him that the way commissions were calculated for online advertising had changed and would need to be re-tested. "I haven’t seen that in the Jira ticket", Luke

T E S T M a g a z i n e | M a r c h 2 019



retorted. So, he let the release go ahead without revisiting and carrying out crucial tests. Luke also found himself dedicating less time to contemplative activities: activities whereby he could update his internal knowledge maps and find innovative ways to take the organisation forward. This had a severe impact on Luke's job performance and motivation, ultimately leaving him feeling that he was stuck in a rut and simply going through the motions.

Psychological effects
Overwhelmed, intimidated, fearful, worthless, lost, threatened, stressed, uncomfortable and timid: these are all emotions which Luke admitted to experiencing at one point or another throughout his time at the advertising company. Ultimately, these feelings spilled over into his personal relationships, leaving him under increased stress and watching his job satisfaction dwindle.

Physical Health
As the situation worsened, Luke found himself experiencing physical health problems due to lack of sleep, fatigue and unhealthy eating habits. The number of sick days he took during his first year at the advertising company far exceeded what he had needed at previous places of work.

Making sense of it all
Luke's experience is consistent with what a variety of researchers have reported when investigating Information Anxiety



across a number of sectors. As a frame of reference, the reader is encouraged to use Maslow's Pyramid of Human Needs and Motivation. This model states that human beings have needs which fall into one of five categories:
1. physiological
2. safety
3. belongingness and love
4. esteem
5. self-actualisation.

Above: Maslow's Pyramid of Human Needs and Motivation.

Maslow posits that an individual cannot achieve higher levels in the pyramid before first achieving the lower levels. For example, one cannot feel fulfilled in one's job if she fears for her safety when at home.

At the beginning of our story, we see Luke full of energy and motivation. As an accomplished knowledge worker, his needs are probably being met at the higher levels of the pyramid. He has been offered a prestigious job at a major company and feels that he is likely to have a direct impact on the company's success through his work. At the very least, this places his satisfied needs at Level 4 (esteem) in Maslow's pyramid.

However, as time passes, Luke starts to complain about low job satisfaction and how it feels like he is simply going through the motions without being able to build himself up or reach his goals. Someone in this state is unlikely to have a very high level of self-esteem, at least so far as his career is concerned. Luke has now descended to Level 3 (belongingness and love) on Maslow's pyramid.

The symptoms persist, however, and Luke begins to complain that the psychological effects of his situation are spilling into his personal life and influencing personal relationships. This means that he is in real danger of dropping down to Level 2 (security and safety). Even then, the needs at this level are likely to be affected by the effects of Information Anxiety on Luke's physical health.



This leads to a state whereby his most basic needs are not being met, and the optimism and energy that we observed in Luke on his first day at work have all but disappeared. In this state it is no wonder that, when the recruiter calls again, the exhausted and burnt-out Luke agrees to take interviews with a view to making a career change. It is clear that not only has this affected Luke, it has also affected his employer, who has lost a valuable employee and must now restart the recruitment and training process to replace him.

HOW TYPICAL IS LUKE'S EXPERIENCE?

Information Anxiety has been studied across various sectors but has not been explicitly dealt with in the realm of the tech industry, much less among software testers. Wurman's definition of information anxiety states that there are five contributors to the phenomenon:
1. difficulty understanding information
2. information overload
3. not knowing whether required information exists
4. not knowing where the required information resides
5. not having access to the required information.
To this end, we carried out an experimental study with a cohort of tech workers in Malta in which we measured perceived anxiety levels on a daily basis for a month. The empirical data we collected points towards a significant presence of information anxiety

amongst testers. In fact, when compared to other job roles in the tech industry, testers in our cohort exhibited significantly higher levels of Information Anxiety overall. Furthermore, whereas software developers complained mostly of information overload, testers reported difficulty across all of Wurman’s five contributors to information anxiety.

MITIGATING INFORMATION ANXIETY

Researchers recommend several ways of coping with information anxiety. Whilst it is impractical to discuss them all in the confines of this article, we will provide a taster of the simpler and more effective methods proposed.

Wurman recommends you start with acceptance. You need to accept that there is much in your job that you might never understand or get round to. He goes on to recommend a number of actions aimed at reinforcing this fact, including:
1. reducing your pile of office reading
2. moderating your use of technology
3. separating material that you are genuinely interested in from material which you think you should be interested in.
The act of accepting that you will simply never be able to know it all can be surprisingly liberating. This is because you can then focus your efforts on what really matters, typically using a risk-based approach, much in the same way that you carry out any other activity in software testing. The notion of creating a safe space

where people can ask questions is also a common theme amongst researchers. People suffering from information anxiety are also likely to avoid asking questions for fear of being labelled ignorant. Therefore, establishing an environment where one can safely ask questions without being judged or scowled at can go a long way in helping people cope. Some researchers argue that a shift in educational paradigms is required so as to take into account the digital age. Students and workers should be taught more effective reading and filtering techniques. Finally, the simple act of just being aware of the phenomenon and its risks should already put you in an advantageous position to notice symptoms and subsequently decide on the appropriate way of dealing with it in your particular context.

HELP US BY PARTICIPATING!

Over the past few years we have been seeking to understand and mitigate the phenomenon of information anxiety and how it relates to the software testing community. To this end we are always looking for practitioners to share their experiences with us, participate in studies and help with the evaluation of tools that we develop as part of our ongoing research. Would you like to contribute by participating in our ongoing research? Contact us on hcitest@um.edu.mt for more information or register your interest to help at: https://bit.ly/2VJ8xt6




PUTTING THE 'SEE' IN CI/CD!

The impact and adoption of automated visual testing has rocketed. With so many large companies and progressive engineering teams using it, perhaps you should take a look too?

Visual testing: in this article I'll explain what it is, why it adds value to your delivery pipeline, and some tooling options you can explore yourself.

We've all been there: a seemingly innocuous CSS change passed all the way through your delivery pipeline, passing tests, before rendering your site unusable due to layout or similar issues on a particular browser. This hurts, but you've bought into continuous delivery and it's impossible to catch those sorts of issues automatically, right? Well, maybe not...

I remember sitting an interview for what was my first job in software, and they asked about automated testing and what areas were hard to automate. I didn't have any experience at all with it, but the obvious example was the 'look and feel' of a website. It's possible for our automated tests to still be 'functionally' correct and working even though elements may be overlapping or in totally wrong locations.


Visual testing attempts to fix that by allowing you to make assertions about how an application or website is rendered. Some applications now lay claim to AI-powered visual testing but, at its core, and for most commonly used offerings, it's a much simpler strategy: screenshot comparison. Using a browser, or more commonly these days a headless browser, an image of the viewport is taken (or stitched together) and then, using computer vision techniques, compared against a baseline. Some tools offer differing levels of matching, be that pixel-perfect or less granular. So, armed with a collection of screenshots that represent the desired look and feel of your application (and those can also differ by device, browser etc.), you then use visual testing to ensure, in an automated way, that your application hasn't diverged from that baseline. This can be built right into your existing delivery pipeline and sit beside your existing functional tests. In fact

PATRICK WALKER
SENIOR SOFTWARE ENGINEER, TEAMWORK.COM
Patrick has a range of experience, from quality assurance and support to software development in Java and .NET, encompassing the presentation, logic and data tiers.



with a lot of these tools it can be interwoven into your existing functional tools and the visual test takes the place of a test assertion.
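Under the hood, the comparison step of most of these tools is conceptually simple. The sketch below is purely illustrative (it is not any vendor's actual algorithm, and the function name is ours): it compares two same-sized RGBA pixel buffers and reports the fraction of pixels that drift from the baseline beyond a per-channel tolerance.

```javascript
// Minimal sketch of baseline screenshot comparison: compares two RGBA
// pixel buffers of identical dimensions and returns the fraction of
// pixels whose colour differs from the baseline beyond `tolerance`.
function diffRatio(baseline, candidate, tolerance = 10) {
  if (baseline.length !== candidate.length) {
    throw new Error('images must have identical dimensions');
  }
  let mismatched = 0;
  const pixels = baseline.length / 4; // 4 bytes per pixel (R, G, B, A)
  for (let i = 0; i < baseline.length; i += 4) {
    // A pixel "differs" if any channel deviates by more than the tolerance.
    for (let c = 0; c < 4; c++) {
      if (Math.abs(baseline[i + c] - candidate[i + c]) > tolerance) {
        mismatched++;
        break;
      }
    }
  }
  return mismatched / pixels;
}
```

A visual assertion then becomes a threshold check: fail the test when the returned ratio exceeds, say, 0.1% of the pixels.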

So, you know you can use this technique in your current workflow and you know a little bit about how it works, but why use it?

Your existing tests may thank you!
A common pattern you may find in a lot of tests is something like:

cy.get('.action-div').dblclick().should('not.be.visible');
cy.get('.action-input-hidden').should('be.visible');

What you're really doing there is trying to codify visual layout confirmation through a sequence of individual assertions. It's painful when the screen is complex and can be pretty brittle when selectors change. Often this could be replaced with a visual test, which will not only ensure the right elements are visible but also their placement and styling.

Above: Cypress visual regression testing showing workflow and output. This example, from github.com/mjhea0/cypress-visual-regression, shows the incredible power of having visual testing live with your existing tests.

Automate cross-browser testing
Some tools integrate nicely with providers like Sauce Labs and BrowserStack, and support multiple baselines, allowing you to quickly visually test your application across the board. Manual cross-browser testing is a huge time-sink for your engineers, and automating it frees them up to do higher-level work.

Catch visual regressions
Small visual details are often thought of as the best fit for a manual tester to find. But let's be honest: we all suffer from fatigue, or the effects of muscle memory when working on an application we're overly used to, so small visual differences can slip past. Automated visual testing will flag them by either failing the test or asking for your approval. It adds another comfort level to continuous delivery.




Makes application visual identity clear
These baselines aren't only useful for running tests; they also give a clear indication of the expected look and feel of the application.

Visual testing is still a developing and pretty dynamic field, so a lot of things are still a bit open, vague or divergent in terms of enterprise workflows. Some of the drawbacks you should be aware of include:
• Dynamic content: having dates, image carousels or other dynamic regions can make this hard. Some tools allow you to define, black out or ignore regions, and others suggest using static test data
• Speed: if the assertion is happening in the cloud it can add a fair amount of time to your tests, so use it wisely. Screenshotting after every focus and interaction probably isn't the best usage if execution time is something you're focused on (and it probably should be)
• Continuous delivery: when using a SaaS solution you can find that updated visual expectations need manual intervention to approve or deny. This puts manual gates and toggles into your platform. Prioritise the services which offer API or programmatic interfaces to toggle assertions, or that allow the baselines to travel with the code
• Animation support: our visual baselines identify static points of what pages should look like, but with animation becoming central to a lot of websites and interactions, there is the chance that discrepancies will be missed. Some of the applications handle this better than others, but personally I think this is something that also impacts some design apps
• Usability: visual testing is mainly about checking layout, based on assertions or expectations that you've set. This does nothing for usability testing or for understanding whether your website is actually enabling and helping users through their journey.
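The usual workaround for the dynamic-content problem is to black out known-volatile areas before comparing. A minimal sketch of the idea (assuming a row-major, 4-byte RGBA buffer; the function name and shape are ours, not any particular tool's API):

```javascript
// Sketch: blank out a rectangular region of an RGBA screenshot buffer so
// that dynamic content (dates, carousels, ads) never triggers a visual diff.
// `width` is the image width in pixels; `rect` is {x, y, w, h} in pixels.
function maskRegion(pixels, width, rect) {
  const { x, y, w, h } = rect;
  for (let row = y; row < y + h; row++) {
    for (let col = x; col < x + w; col++) {
      const i = (row * width + col) * 4;
      pixels[i] = 0;       // R
      pixels[i + 1] = 0;   // G
      pixels[i + 2] = 0;   // B
      pixels[i + 3] = 255; // A: keep fully opaque
    }
  }
  return pixels;
}
```

Masking the same rectangle in both the baseline and the candidate before diffing means only the stable parts of the page are actually compared.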

So, if your interest is piqued and you're interested in trying out visual testing, the good news is this: it's a well-trodden path now and there are a number of exciting open source options for you to look at, including (but not limited to) Wraith (github.com/BBC-News/wraith) and BackstopJS (github.com/garris/BackstopJS). I find that some of the test-framework-facing OSS projects help speed up adoption, so with that in mind I can heavily endorse github.com/mjhea0/cypress-visual-regression for use with Cypress and github.com/Crunch-io/nightwatch-vrt for Nightwatch. If you're in an enterprise, though, the thought of spinning up infrastructure or dealing with open source may not be appealing just to kick the tyres on something. With that in mind, there are a few SaaS offerings you can check out:
• Percy.io: with an impressive list of

users including Google and Spotify, the newly announced free tier and abundant SDKs should make a PoC of visual testing a reality for you. However, it only supports Firefox and Chrome, and may offer some trickiness with dynamic content
• Applitools: having previously used this in a former job, I can say it's probably the most mature SaaS offering, in my opinion. There is some interesting work on branching for assertions, and integrations with cross-browser test offerings like Sauce Labs. No free tier.

Another good place to look is screener.io, which features many more A-list companies; there is also Internet Explorer support. No free tier.

THE FUTURE OF VISUAL TESTING

So, what does the future potentially hold? Well, Cypress (cypress.io) have indicated some interest in rolling this functionality into their test framework (github.com/cypress-io/cypress/issues/495). They already have rich screenshot and video support with test runs, so there is a chance this would be the biggest step yet to bring visual testing to the masses: no additional tools needed, and what I'm expecting would be a very clean API. So, in summary, visual testing is finding some sweet spots in modern web development and I think it's definitely worth trying out to see if it can improve either your current tests or your current workflow.


18-19th June 2019

The British Museum London

#NationalDevOpsConf

2 Days of Networking and Exhibition

44 Presentations | 8 Keynotes | 6 Workshops | 2 Q&A Sessions

BOOK NOW! Practical presentations | Workshops | Networking | Exhibitions

The National DevOps Conference is targeted towards C-level executives interested in learning more about the DevOps movement and its cultural variations that assist, promote and guide a collaborative working relationship between Development and Operations teams. The National DevOps Conference is an annual, UK-based conference that provides the IT community at home and abroad with invaluable content from revered industry speakers; practical presentations and executive workshops, facilitated and led by key figures. This top industry event also features a market leading exhibition, which enables delegates to view the latest products and services available to them.

REGISTER YOUR PLACE TODAY! DevOpsEvent.com



TESTING SOFTWARE FAST AND HARD

Despite much debate about the 'right' way to test, software development is a process, and almost impossible to achieve perfectly

At some point in the 2000s, when PHP wasn't just considered a templating language anymore and Ruby had just got on Rails, the programming community decided that dynamically typed languages were a great way to reduce cognitive load on the programmer, to stay ruthlessly pragmatic, and to avoid the factory-like culture of Java. Since then, the Facebook mantra of 'move fast and break things' has been perpetuated all over the start-up scene, and is applied to the way you build businesses, products, houses, or anything that can collapse on you. And, if you've ever worked on software, you know that what can happen generally will happen.

Somewhere along the way clean code arrived, and the rise of TDD gathered a crowd around 'writing tests first' and the belief in 'the coverage', and


other mantras that are fun to recite, but significantly less fun to do. A lot of self-help books, like Clean Code, are vaguely based on personal experience and bring certain programming patterns into domains where they had traditionally not been used, e.g. simplifying C++-esque object orientation with functional programming concepts such as small, simple, pure functions. In reality, there has been a lot of research on the software crisis and how to get out of the mess we're in, and it often contradicts the wisdom of the crowd. So, let's take a look at the different strategies that drive software quality, and where they actually make a difference. From bottom to top, I generally look at software verification at the

PETER PARKANYI
LEAD SECURITY ARCHITECT, RED SIFT
Peter has been a software engineer for over 15 years, with a special interest in privacy and security-related projects. He favours building new products, and previously led product development at Cyberlytic.



following layers: type system, unit tests, integration tests, and organisational management structure. Wait, what? Organisational management structure? Well, maybe we can start with that then.

MANAGEMENT

Organisational behaviour is a social science in its own right, and studies the subtle art of people-management structures. Since software is usually made by humans, people have needs and internal and external motivators, and normally they need to work together to deliver. Google ran a study on its teams as working units to identify what made them more effective, while Microsoft's research focused on how organisational structure determined software failure rates. Both are interesting approaches in their own way, and Microsoft's study gives us an interesting view into the development of Windows Vista.

The study tells us that smaller, more focused teams produce more reliable software. High churn of engineers lowers software quality, while tighter collaboration between teams working on the same project results in lower failure rates. These might seem like statements from Captain Obvious; however, quantifiable results from the study show that team and collaboration structure can be a better predictor of quality than tooling, testing strategies, or other code-based metrics. The way any individual team controls the quality of its output is the next step from here. Code reviews in particular are a great way to create and maintain a common set of standards. Written code reviews force engineers to communicate their concerns clearly, and this increased technical communication helps everyone on the team learn about different styles and perspectives, while simultaneously levelling skills across the team.

INTEGRATION TESTS

Integration tests, surprise surprise, test the integration of components or modules in a system. You can also test the integration of


integrated modules, and there are turtles all the way down (infinite regress). It's often easier to write correct code in isolation, so a large proportion of bugs occur at system boundaries: validating inputs and formatting outputs, failing to check for permission levels, or badly implementing interface schemas. This problem is amplified by the current trend of microservices, where interface versions can fall out of sync between various services within the system. At this level, we're best off writing pass-through end-to-end tests for features, and trying to leverage the fact that we have so many other layers of protection against failures: something will eventually trip those wires. In fact, code coverage in integration tests is shown not to be a reliable indicator of failure rates. If you look at it another way, production is just one big integration test. A trick I love is to create a mute-production instance, which receives a portion of the actual production traffic but will never generate responses to the users. With enough investment in a stateless orchestration layer, we can even mute-test subtrees of services at strategic places,




then make them active and discard the old subtree once the workload is gone. Coupled with principles behind building highly observable systems, this kind of test environment removes a lot of anxiety around what happens when we deploy to production, because the mute-prod will receive precisely the same data. The more knobs and probes we expose in live systems, the better visibility we get into the internals.
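The mute-production idea above can be sketched as a tiny traffic-mirroring layer: every request is answered by the live service, while a copy goes fire-and-forget to the muted instance, whose response is discarded. This is an illustrative sketch under our own naming, not a real orchestration API.

```javascript
// Sketch of traffic mirroring for a "mute production" instance: the live
// handler's response is returned to the caller, while the muted candidate
// receives the same request and its response (or failure) is discarded.
async function mirrorRequest(request, liveHandler, muteHandler) {
  // Fire-and-forget to the muted instance; its failures are logged,
  // never surfaced to users.
  Promise.resolve()
    .then(() => muteHandler(request))
    .catch((err) => console.error('mute-prod error:', err && err.message));

  // Only the live instance's response is ever sent back.
  return liveHandler(request);
}
```

The key property is that the muted subtree sees precisely the production workload, yet cannot affect a single user-visible response.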

UNIT TESTING

So, in order to integrate modules, we want some confidence that the modules themselves work according to specification. This is where unit testing enters the picture. Unit tests are usually fast, and more or less comprehensive tests of isolated, easy-to-grasp building blocks. How fast? Ruby on Rails' master repo runs about 67 tests and 176 assertions per second. As a rule of thumb, one test should cover one scenario that can happen to a module. In comparison to integration testing, the same study by Niedermayr, Juergens and Wagner shows that code coverage at the unit-testing level does influence failure rates, if done well. A study from '94 by Hutchins et al claims that coverage levels over 90% showed better fault-detection rates than smaller test sets, and 'significant' improvements occurred as coverage increased from 90% to 100%. The BDD movement has the fun practice of developing specifications and turning them straight into unit tests. The benefits of clear, human-readable unit tests help document the code, and ease


some of the knowledge transfer that needs to happen when developers inevitably come and go, or when requirements of existing components change. Unit testing in my book also includes QuickCheck-style generated tests. The idea of QuickCheck is that instead of having some imperative if-this-then-that code to walk through, the programmer can list assumptions that need to hold true for the output of the function given some inputs. QuickCheck then generates tests that try to falsify these assumptions using the implementation and, if it finds one, reduces it to a minimal input that proves them wrong. Interestingly, the number of scenarios a unit test normally has to cover is heavily influenced by the programming language it's written in. Which leads me, necessarily, to the holy flamewar about static and dynamic typing.
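Before diving into type systems, the QuickCheck idea above can be sketched without any library. The snippet below is a toy illustration, not the real QuickCheck (all names are ours): it generates random inputs, checks a property, and reports the first failing input, whereas real QuickCheck would additionally shrink that input to a minimal counterexample.

```javascript
// Toy property-based testing in the spirit of QuickCheck: run a property
// against many randomly generated inputs and report the first input that
// falsifies it. (Real QuickCheck would also shrink the counterexample.)
function checkProperty(property, genInput, runs = 100) {
  for (let i = 0; i < runs; i++) {
    const input = genInput();
    if (!property(input)) {
      return { ok: false, counterexample: input };
    }
  }
  return { ok: true };
}

// Generator: random arrays of small integers.
const randomIntArray = () =>
  Array.from({ length: Math.floor(Math.random() * 10) },
             () => Math.floor(Math.random() * 200) - 100);

// Assumption to hold: numeric sort preserves length and orders the array.
const sortProperty = (xs) => {
  const sorted = [...xs].sort((a, b) => a - b);
  return sorted.length === xs.length &&
         sorted.every((v, i) => i === 0 || sorted[i - 1] <= v);
};
```

Running `checkProperty(sortProperty, randomIntArray)` should come back `{ ok: true }`; a property that doesn't hold comes back with the generated input that disproved it.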

TYPE SYSTEMS

Hindley-Milner: Hindley-Milner-style static type systems, such as those of Haskell or Rust, force the programmer to establish contracts that are checked before the program can be run. What the programmer finds, then, is that they are suddenly programming two languages in parallel: the type system provides a proof for the program, while the program itself fulfils the requirements. This allows a style of reasoning about correctness that, coupled with a helpful compiler, lets the programmer focus on the things that cannot be proven by the type system: business logic. Of course, this is not a net win. In many

cases, a solution using a dynamic type system is much more straightforward and elegant; in others, the type system's constraints make certain implementations basically impossible. In other cases still, using Haskell allowed a much smaller program to be written significantly faster than the alternatives, so much so that they had to repeat the test in disbelief. Elegance is in the eye of the beholder, and a beautifully typed abstraction that reduces to a simple state machine during compilation can be just as attractive as a quick and dirty LISP macro. Sometimes, all that complexity is hard to justify only to please the compiler. It comes down to experience, taste, and applying reasoned judgement in the right situation. People often look at programming as craftsmanship. Yes, we do know how to do the maths, but with this neat trick we get 90% there with 10% of the effort, and it may just be good enough (and it will explode in that edge case I thought would never actually happen), but I digress.

Weak but static: somewhat more approachable, but providing less stringent verification, are languages in the C++/Java/C#-style OOP family, as well as the likes of C and Go. The type systems here allow for a different kind of flexibility, and more desirable escape hatches to the dynamic world. A weaker, but still static, type system provides fewer guarantees about the correctness of programs, something that we have to make up for in testing,



and/or coding standards. NASA's Jet Propulsion Lab, a mass manufacturer of Mars rovers, maintains a set of safe programming guidelines for C. Their guidelines seem to be effective: Opportunity exceeded its originally planned 90 days of activity by 14 years thanks to careful maintenance, and Curiosity is still cruising the surface of Mars, being patched on a regular basis.

Dynamic: speaking of JPL, internet folklore preserves the tale of LISP being used at the NASA lab: a dynamic, functional programming language from the 1960s that is still regarded as one of the most influential inventions in computing science. Today's most commonly used LISP dialect is Clojure, which is seeing increasing popularity in data science circles. Dynamic languages provide ultimate freedom, and little safety. Most commonly, the only way to determine whether a piece of code is in any way reasonable is to run it, which means our testing strategy needs to be more principled and, indeed, thorough, as there is no 'next layer' to fall back to. Probably the most widespread use of a dynamic type system today comes from JavaScript. Interestingly, as companies strive to lower the entry barrier for contributors while scaling up development and maintaining quality, Google, Microsoft and Facebook all came up with their own solutions to introduce some form of static type checking into the language. Even though Google's Dart hasn't seen significant adoption, TypeScript from Microsoft did, and its use is driven by large and popular projects such as VSCode. Both approaches introduce a language

with static type checking that compiles to JavaScript, making it easy to gradually introduce to existing projects, too. In contrast, Facebook’s Flow is a static analyser purely built on JavaScript, which introduces its own kind of type annotations. The idea is, if there are type annotations at strategic places, the type checker should be able to figure out if there are any type errors in a part of the program by tracing the data flow. Enthusiastic programmers are going to tell you that both approaches to static typing in JavaScript are flawed in their own way, and they would be right. In the end, a lot of the arguments boil down to subjective ideas and tastes about software architecture. It seems difficult to deny, however, that some form of static type checking provides several benefits to scaling and maintaining software projects.

THE LITTLE THINGS THAT SLIP AWAY

The list of things we can do to ensure correctness of software is far from over. The 'state of the art' keeps pushing further, and new approaches gain popularity quickly, especially within the security community. In absolutely critical modules, such as anything cryptography or safety related, formal verification can increase confidence in parts of the system, but it’s hard to scale. A familiar sentiment can be seen behind the principles of LangSec (langsec.org). In many cases, the power and expressiveness of our languages allow inadvertent bugs to creep in. LangSec says: 'make all the

invalid states un-representable by the language itself'. Make the language limit what the programmer can do, so they can avoid what they shouldn't. This is also the motivation behind coding standards such as JPL's, which allow for easier reasoning about state and data flow throughout the program code.

When we're reasonably sure that what we have is good enough, we can start fuzzing it. Fuzzing is great. It is all about feeding unexpected states into a system and waiting for it to cause failures. This simple idea helps discover security holes in popular software, or can help engineer chaos in the cloud.

As always, producing a stable and secure system requires principled engineering, in software just as much as in architecture. We need to understand the pieces that make up the whole, then analyse and verify their interactions, internally and with the environment. Despite our best efforts, bugs will always creep in, and all we can do is try to ensure the ones that remain are not catastrophic.

However, once software goes live, verification does not stop. Designing for observability by exposing knobs, tracing, alerting, and collecting a set of operational metrics all help us reason about the state of the system while it's running, which is the ultimate test of it all.

Software development is a process, and it's practically impossible to achieve perfection. As long as the team has a plan to approximate it, and everybody is committed, we can call it good enough, and then get out of the office and enjoy the sunshine.




OPEN SOURCE SECURITY GITHUB APPLICATIONS YOU SHOULD BE USING Open source libraries have long become the main building blocks of software products, however, developers often lack the right tools to make informed choices pen source libraries have long become the main building blocks of software products. Industry estimates are that 80% of the code base in many applications is actually open source. This doesn’t come as a surprise to developers since this is how software is being built nowadays – assembled rather than written from scratch. The problem, however, is that developers lack the right tools to make informed choices when it comes to selecting the best and most secure open source components. After all, if 80% of your codebase is not secure or has bugs, won’t it be reflected in the security and quality of the end product? The good news is that four different GitHub tools were recently released to help developers detect vulnerable open source. Better yet, these tools are



absolutely FREE. You can download them as apps from the GitHub Marketplace and they will alert you to any vulnerable open source components in your repositories. I've downloaded and tested these tools (well, actually, one comes by default with GitHub) to find out which one is just right for you or your team. So, let's start with the top four.

1. WHITESOURCE BOLT FOR GITHUB

(There’s an additional version for Microsoft Azure DevOps/TFS available). It is the newest kid on the block, yet the most comprehensive one. Since it supports over 200 programming languages, it should cover all your projects. WhiteSource allows users to scan an unlimited number of repositories

ABRAHAM WAITA, IT SECURITY CONSULTANT. Abraham is an IT security consultant and avid security researcher and writer who is heavily focused on cybersecurity trends.



but with a limit of five scans per day. The tool can scan private and public repos, and scans can be triggered by a valid GitHub push action. The app also detects newly reported vulnerabilities in all existing libraries. When it detects a vulnerable component, the app creates an issue for each detected vulnerability. WhiteSource will then give detailed information about the library, the vulnerability and the dependency tree (if the vulnerability lies in a dependency). On top of this – and this is important – the tool gives a suggested fix as recommended by the open source community. (Pic. 1).

WhiteSource claims that it has the broadest coverage when it comes to vulnerability detection, as it fetches its data from many databases besides the commonly used NVD. This claim is hard to verify, but the tool is highly effective in securing and managing your open source components.

2. SONATYPE DEPSHIELD

Sonatype offers unlimited scans for public and private repos. Once a vulnerability is detected, DepShield creates an issue




Pic. 1: WhiteSource Bolt for GitHub

and offers information on the vulnerable library, dependency and the vulnerability itself. (Pic. 2). It is noteworthy that DepShield fetches its data on vulnerabilities from Sonatype’s database but doesn’t provide full access to the database unless one pays for the premium version of the app called

Sonatype Nexus. This is how Sonatype explains the difference in coverage between the free tool and the paid version. (Pic. 2a). Sonatype currently supports only Apache Maven and JavaScript. The company has stated that it is planning to support Python as well, but has not given a timeline for this.






3. SNYK

Like the other tools reviewed previously, Snyk has an open source security app in the GitHub marketplace. Snyk currently supports only eight programming languages: Gradle, Scala, PHP, .NET, Java, Python, Ruby and Go. The company says that it has a comprehensive vulnerability database that is continually fed with new data from diverse sources. (Pic. 3).

The difference from other apps is that Snyk forces you to go to its website to see the vulnerability report. Though different developers might have varying opinions about this, I find the tool harder to use, as you need to switch to a different environment to get vulnerability data. Snyk's vulnerability report includes the vulnerable library's name and CVE number. Snyk also allows fixing options to be added as pull requests in GitHub's interface, so that you can patch or replace a vulnerable library directly.


Pic. 2: Sonatype DepShield

Pic. 2a: Sonatype Nexus

4. GITHUB SECURITY ALERTS

Unlike the previously reviewed tools, GitHub Security Alerts is not an app. It is a GitHub feature that helps keep open source vulnerabilities out of private and public repositories. The feature currently supports only two languages – JavaScript and Ruby. GitHub Security Alerts relies on data from the NVD only. GitHub thus recommends that developers use at least one of the security partners in the marketplace for better vulnerability coverage. However, it says that it plans to fetch data from many other databases in the near future.

Since this tool is part of GitHub, installation is not needed. The vulnerability alerts will by default appear under the Insights tab, on the dependency graph. For private repos, you will need to enable the feature in your repo's settings. When viewing the dependency graph under the Insights tab, you will see the vulnerable dependencies in yellow. Clicking a vulnerability alert opens a small window with the vulnerability details, including the CVE number, severity level, and suggested fixes.

WHAT'S RIGHT FOR YOU?

In this article, we have covered four security tools for enhancing the security of your open source code hosted in GitHub. Of the tools discussed, WhiteSource Bolt has the most extensive language support, and a fairly good


Pic. 3: Snyk

integration into GitHub. DepShield can be your choice only if your codebase is based on Apache Maven or JavaScript; support for Python will be available in the near future. The downside is that DepShield doesn't rely on a database as comprehensive as the other tools', and doesn't offer automatic fixes. Snyk covers almost all the commonly used languages and has a comprehensive database for libraries written in these languages. However, unlike all the other tools discussed, Snyk doesn't create issues on

GitHub, which makes it a bit harder to use while working inside GitHub. You can consider the tool if you do not mind the extra hop from GitHub to Snyk's website. GitHub Security Alerts is the way to go if you don't want to perform in-depth code analysis checks. Just as the name suggests, it offers alerts when something serious gets disclosed in an open source dependency that you're using. If you're slightly more concerned about security, you should pair it with another tool available in the GitHub Marketplace.
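Under the hood, every tool reviewed above does a version of the same thing: match your dependency manifest against an advisory database. A toy sketch of that matching step – the advisory identifiers and package names below are invented for the example, not real CVE data:

```python
# Toy dependency-vulnerability matcher. The advisory entries are fabricated
# ("FAKE-..." ids) purely to illustrate the lookup the real tools perform.
ADVISORIES = {
    ("left-pad", "1.0.0"): ["FAKE-2019-0001"],
    ("parse-url", "2.1.0"): ["FAKE-2019-0042", "FAKE-2019-0043"],
}

def scan(dependencies):
    """Return {(name, version): [advisory ids]} for each vulnerable dependency."""
    return {dep: ADVISORIES[dep] for dep in dependencies if dep in ADVISORIES}

manifest = [("left-pad", "1.0.0"), ("express", "4.16.0")]
for (name, version), ids in scan(manifest).items():
    print(f"{name}@{version}: {', '.join(ids)}")
```

The commercial tools differ mainly in how rich that advisory database is and how the findings are surfaced (GitHub issues, pull requests, or an external dashboard), which is exactly the trade-off discussed above.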


START ON A JOURNEY OF ANTICIPATION AND EXCITEMENT!

ENTRIES ARE NOW OPEN! The benefits of entering the DevOps Industry Awards are considerable!
• Best-of-the-best: being shortlisted or winning an award will ensure you are perceived by your industry peers and clients as a leading provider of DevOps services and solutions
• Marketing: just being shortlisted can improve brand awareness, promote your business to new customers and increase credibility
• Employee motivation: awards recognise the hard work and achievements of your staff, so winning one helps boost morale and improve motivation
• New talent: by promoting your business as the best-of-the-best you can attract the talent you need to push your business forward
• Benchmarking: the application process alone forces you to look at your business from a different perspective and compare yourself to the competition

DevOpsIndustryAwards.com



KEY QUALITY ASSURANCE CONSIDERATIONS FOR PACKAGED APPLICATION IMPLEMENTATIONS

Enterprises now accept the reality that integrated digital technology signals constantly changing customer experiences, operating models, and business models.

These organisations cannot afford disruptions to the business that consume resources and budgets from planned new activities. However, this is the exact situation many organisations find themselves in when undertaking large enterprise packaged application implementations such as SAP S/4HANA. The adoption of modern-day ERP, CRM, and HR solutions represents a major technological change as well as a significant cultural change that requires companies to evolve and mature their communication, collaboration and change management practices. The days of undergoing one large application implementation every three years are gone. Change management for enterprise applications must evolve to handle environments where multiple



Fig 1. Modern day enterprise change management.

DEAN ROBERTSON, MANAGER OF SOLUTION ARCHITECTS, WORKSOFT. Dean Robertson heads the Worksoft EMEA Solution Architect team, proving Worksoft's capabilities to EMEA clients.

changes are occurring independently of one another, and change cycles are no longer coordinated across applications. Automation is the only way to keep up with the pace of constant change. Organisations need to automate and run tests continuously across the application landscape to ensure intermittent changes do not affect end-to-end business processes. (Fig.1)

The adoption of Agile and DevOps is another way organisations are dealing with the new state of change. Given that Agile and DevOps both began in the custom-built application space, the processes associated with implementing and maintaining large enterprise applications like S/4HANA are significantly different from those used for custom applications. Determining how to adopt Agile and


VENDOR PROFILE



Fig 2. Scaled Agile adoption for enterprise packaged apps.

DevOps practices to enterprise applications can prove challenging for enterprise application teams. In addition, organisations must also deal with varying UI technologies. Simply because the UI is now web-based doesn't mean it is going to be like testing any other web application. In fact, many of the new UIs we see, such as SAP Fiori, are even more complex than their fat-client predecessors. This article will focus on the three most important factors for evaluating business process assurance and automation solutions for enterprise packaged applications.

Automation as accessible to the business as to the application

The reality of today's enterprise applications is that implementations are complex, time and resource intensive, and demand frequent upgrades. Automation that is accessible to both business and IT users is critical to unlocking the benefits of these applications. Consider the custom application world: features and stories are created, loaded into Agile management systems and then distributed to scrum teams. Enterprise application projects typically do not work the same way. Business process documentation consists of multiple stories joined together to create a feature. This documentation is typically produced by business analysts, who then hand it over to IT to use as the basis for test plans. Because of the amount of manual work typically involved in creating the documentation, process teams struggle to keep pace with Agile and DevOps testing cycles. Further adding to the bottleneck is the fact that test automation for enterprise projects is generally done at the feature level rather than the story level. The finished feature is what the business cares about and is the primary focus for end-to-end testing. At the end of a Wave, documentation for the feature is reviewed by the business user

and end-to-end tests are built. Agile collaboration requires business users and testers to work together in parallel. Solutions like Worksoft's Visual Capture, which requires no knowledge of testing solutions, enable this type of collaboration. Business users run Capture to document the stories and create automation while walking through the steps of a business process. Once the feature is complete, test automation professionals then take the associated captures, finish the automation and schedule it into continuous testing cycles. (Fig.2)

Look beyond automation creation

When organisations begin to consider increasing their levels of automation, the initial focus often involves considering the easiest and cheapest automation options available. However, to be successful with automation – especially for complex end-to-end business process tests – teams need to look beyond merely 'easy to create'. They also need to consider how easy the automation will be to maintain, scale, and integrate with other systems and processes. Tests that don't easily scale cost more in both time and money, and many times lead to the abandonment of the automation effort. Additionally, there is the consideration of whether the automation solution can be used for RPA or production monitoring. All these factors play into the long-term value of the solution.

Maintainability: End-to-end business process tests are multi-step, spanning multiple systems and modules. Critical to the test automation of these extensive processes is the need to minimise the number and maintenance of tests. The ability to make universal changes to shared objects and sub-processes dramatically simplifies the maintenance effort. Comparing new and existing tests to better understand the differences helps in the consolidation of tests. Automation solutions that use scripting as a basis for testing do not provide a graceful method

for maintaining tests. Instead, these previous-generation solutions require users to create all-new tests, abandoning all previous work. Data-driven tests reduce overall maintenance due to their dynamic nature, reading and applying data to the UI. This dynamic quality decreases the total number of required changes associated with field updates or UI navigation, keeping the report output the same.

Scalability: Testing enterprise applications requires running parallel, unattended, automated end-to-end tests driven from a UI. This creates unique challenges for testing teams. The device running the test needs to be on, and a specific user needs to be logged into the device. In addition, the screen can't be locked, and tests need to be orchestrated across multiple devices in multiple labs, both on-premise and in the cloud. Choosing a solution that addresses these types of challenges and gives centralised control for running remote tests in parallel, on demand, is critical to scaling to continuous testing.

Integration Support: Implementing Agile-plus-DevOps requires not only the individual teams to work together, but the tools as well. No application or automation solution is an island. Solutions need to enable clients to leverage existing investments and integrate with a variety of best-of-breed tools such as continuous integration servers like Jenkins, ALM systems like Micro Focus (HP) ALM, defect management systems like Jira, and more.

Brace for Increased UI Complexity

Similar to their client-based predecessors, modern-day packaged applications also come with their own unique UIs. Just because the new UIs are web-based, testing teams should not assume they will be able to test them like any other web-based UI. UIs like SAP Fiori and Salesforce.com Lightning come with nuances unique unto themselves. For example, SAP Fiori apps leverage UI5 and use a dynamic UI to deliver customised






Fig 3. The value of automated business process testing.

"Purpose built automation platforms like Worksoft Certify are built to follow the applications they support. They leverage machine learning to identify how an application generates UIs and tailors object recognition based on those patterns"

content based on user roles. In addition to ongoing updates to the usability of the application and to expanded business processes, Fiori also presents automation teams with a wealth of challenges, including:
• dynamically generated objects
• lazy-loading of lists, lightboxes and other controls
• massive HTML structures – up to 20k tags on a single page
• customisations made by the user, allowing real-time layout changes.

When looking at the high functionality of Fiori, including the use of tables, drag-and-drop and field-by-field validation, the actual code running in the browser is potentially a system fraught with bugs. Updates to the browser can break the complex JavaScript libraries that provide the rich user experience. Purpose-built automation platforms like Worksoft Certify are built to follow the applications they support. They leverage machine learning to identify how an application generates UIs and tailor object recognition based on those patterns. As new versions of the UI are rolled out, the

supporting automation is automatically updated. In addition, the automation solution should include things like:
• predefined optimisations to interact with 300+ different controls
• an automation engine with built-in logic to handle lazy-loading of lists
• optimisations to handle massive, complex, customisable web pages.

This type of functionality makes it easier to create automation, increases automation playback speed and minimises maintenance by ensuring tests do not have to be recreated each time a new version of the UI is released. (Fig.3)

Updating an organisation's core systems like SAP has the potential to transform the way it operates and leads in the new digital world. But with any great change comes great risk. Ongoing business process assurance enables companies to accelerate change while improving quality. Choosing an automation solution designed to handle the unique nuances of enterprise applications significantly reduces the time to test, accelerates project completion and generates additional revenue.


softwaretestingnews.co.uk/debates

ABOUT TEST Executive Debates

TEST Executive Debates are hosted by TEST Magazine in collaboration with major software industry companies and experts. These exclusive roundtable discussions for directors, heads of departments and other industry leaders are designed for you to discuss and debate issues, voice your opinions and swap and share advice with your peers.

BECOME A SPONSOR

Benefits of being a sponsor of TEST Executive Debates:
• Up to 6 focused hours with 8-10 targeted professionals
• A thought-provoking and challenging subject matter
• The opportunity to 'really' understand what your target market thinks
• Full delegate list post event
• Recording of the debate
• All lunch and refreshments provided
• Get inside the minds of your target audience and understand future objections
• Central London location
• Findings published on the industry news portal softwaretestingnews.co.uk




DEVOPS AND THE EMERGENCE OF TESTOPS

In recent years the emerging DevOps field has been a welcome challenge in our industry. But as testing becomes increasingly more important and varied, do we see the emergence of Testing and DevOps as a combined field – TestOps?

The tech landscape is forever changing; a perfect example of this being the constant ripping up and rewriting of the software development life cycle. It comes as no surprise that there are demands and expectations for people to learn new skills and adapt to new ways of working. The way we operate, not only as an engineering team, but within an organisation, is quickly shifting. In recent years, the emergence of DevOps has seen a tester's responsibilities and title vary from project to project, and from client to client. This is a frequent topic of conversation wherever you go and whoever you meet. Why is this? Because life is better with DevOps. DevOps is more than just a methodology; it paves the way for quicker and more frequent releases. It enables the QA to spend more time



on improving the release process and in turn take a better quality product to market. Let's just make one point clear: DevOps is a 'methodology' – it is not just another development task, it is a way of working. Often I have seen DevOps classified as just another role. DevOps is the process by which you shorten the time it takes to deliver to consumers. It is a means to deliver as quickly and efficiently as possible and to optimise the stream of development from conception to delivery. Never has this been more crucial to our industry and the wider market than in recent years. In a tech-dominated society, the quicker you can deliver, the longer you can stay ahead of your competitors. In this type of environment, the tester should be proactive in detecting and mitigating issues.

DITMIR HASANI, CEO, QA TECH CONSULTANCY

Ditmir is a software testing and automation professional who has worked and consulted for CH4, the BBC and NewsUK


TESTING & DEVOPS

THE MODERN TESTER

So, if there are DevOps engineers already doing the job, then why are we talking about TestOps, and what is TestOps anyway? Our tech industry is booming with microservices and cloud services. Things are getting a lot more complex; most companies no longer build an application on its own. Most applications are a mix of many small services that come together to make a complete product. This complete product usually lives in the cloud, where it is easily accessible at any time, anywhere, given you have sufficient privileges. But what does this all mean, and what does it have to do with the average tester?

All of these interconnected services and components are part of a chain that is the complete product. This chain of services means that there are many more points of interception for a tester to be involved in. Long gone are the days when you had a manual regression pack that continuously got bigger and longer to test. Testing today comprises many forms, such as visual testing, performance, security, functional and API testing. As you can imagine, having to test a combination of those for every release that goes out can

really slow down the delivery process – in fact this will most likely cause a bottleneck which the tester is responsible for. Continuously testing and validating the product is a tester's responsibility. This is where TestOps comes in. TestOps is responsible for utilising tools and identifying tech to implement some of these types of testing.

AN EMERGING SKILLSET

TestOps is an emerging skill within our testing communities and is being driven by the need to continually test products at different levels using different tools. It involves using a test automation framework to continually test and validate the product as it is being built. Testing frameworks have become much easier to work with and to scale up with new extensions and features in recent years. QA is one of the fundamental pillars of DevOps; it is only right that a tester should bear the main responsibility for enhancing the test automation process.

TestOps is a combination of a software testing intellect with some of the skills of a DevOps engineer, and this is why it is different from the ordinary tester. I believe this is the 'complete tester', combining quality, the development mindset and the knowledge to connect it all together. An




"TestOps is a combination of a software testing intellect with some of the skills of a DevOps engineer and this is why it is different from the ordinary tester. I believe this is the 'complete tester' and combines the quality, development mindset and the knowledge to connect it all together"




"TestOps can change this, by owning this process and handling test-related DevOps activities. This enables the QA to be self-sufficient and directly involved in their respective area. Just like the developers, they will curate the test pipeline to their exact function and needs. By all means, we don't expect a tester to become a developer or DevOps engineer, but they will own a part of the pipeline. Those practicing TestOps should endeavour to make their automation pipeline as bespoke as possible, to ensure it can handle business changes and adapt as quickly as possible"


ordinary QA these days is responsible for writing automation tests and executing them, as well as mixing in the manual testing aspect too. They don't normally get involved in setting up environments or CI/CD builds. A tester adopting TestOps should understand how to integrate their framework with tools such as Docker, BrowserStack and Jenkins, to name a few, so that they can get the most out of their tests. Not only should they be technical in working with these, but they must also understand how best to approach the level of testing at different stages and in different environments. The key goal for TestOps is to work in such a way that your tests are as robust as possible with the lowest execution time.
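One concrete way to chase that "lowest execution time" goal is to run independent checks concurrently rather than one after another. A minimal sketch – the three check functions are hypothetical stand-ins for real framework calls, with `time.sleep` simulating the I/O wait of a browser or API test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for real UI/API checks; each blocks on I/O for 0.2s.
def check_login():    time.sleep(0.2); return "login ok"
def check_search():   time.sleep(0.2); return "search ok"
def check_checkout(): time.sleep(0.2); return "checkout ok"

def run_suite(checks, workers=4):
    """Run independent checks in parallel, returning results and wall-clock time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda check: check(), checks))
    return results, time.perf_counter() - start

results, elapsed = run_suite([check_login, check_search, check_checkout])
print(results, f"{elapsed:.2f}s")  # ~0.2s in parallel vs ~0.6s sequentially
```

The same idea scales up to distributing whole test jobs across Docker containers or Jenkins agents; the principle is identical – only independent tests can be parallelised safely.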

MORE INVOLVED IN THE ECOSYSTEM

TestOps is there to set up an ecosystem which includes the automation framework and integrated tools, with a longer-term vision to continually enhance this ecosystem. It's common practice in our industry for developers to build their

own build/deploy pipeline, with some assistance from a DevOps engineer. For the tester this is different: in practice, all or a majority of this work is handled by a DevOps engineer and developers, which, in turn, doesn't allow the tester to have a greater input into the process. TestOps can change this, by owning this process and handling test-related DevOps activities. This enables the QA to be self-sufficient and directly involved in their respective area. Just like the developers, they will curate the test pipeline to their exact function and needs. By all means, we don't expect a tester to become a developer or DevOps engineer, but they will own a part of the pipeline. Those practicing TestOps should endeavour to make their automation pipeline as bespoke as possible, to ensure it can handle business changes and adapt as quickly as possible.

Another key pillar, not just in TestOps but in testing as a whole, is that you are constantly trying to inform the business of what is going on through feedback mechanisms, whether through logging output, visual displays, or other means.



This also means notifying early by detecting failing tests faster, in turn allowing the business to react and respond quicker. A fundamental part of this visibility is being able to interpret test outputs into a clear format and easy-to-find locations. It is increasingly important for TestOps to integrate an effective reporting tool into their automation; they should not leave it to a basic console output.

It is TestOps' responsibility to ensure full visibility of tests to all concerned and to convey a clear understanding to others of what those tests are outputting, as well as how and where. If you have a scenario to check whether or not clicking a button does something, then make sure this is presented clearly: it is easier to read and understand a message that says 'could not find x button to click' than to decipher error:ff5hfh undefined exit status code 1. This is especially important because the results are often reserved and checked only by QAs. It is TestOps' responsibility to ensure the full visibility and readability of test execution outcomes to all stakeholders in the business.
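The point about readable failures can be made concrete in a few lines. This is an illustrative sketch only – `find_button` and the page model are hypothetical, not part of any particular framework:

```python
class ElementNotFound(Exception):
    """Raised with a human-readable message instead of a raw driver error."""

def find_button(page: dict, label: str):
    """Look a button up in a (hypothetical) page model, failing descriptively."""
    try:
        return page["buttons"][label]
    except KeyError:
        # Translate the raw KeyError into a message any stakeholder can read.
        raise ElementNotFound(
            f"could not find '{label}' button to click "
            f"(available: {sorted(page['buttons'])})"
        ) from None

page = {"buttons": {"Save": object(), "Cancel": object()}}

try:
    find_button(page, "Submit")
except ElementNotFound as exc:
    print(exc)
```

Wrapping low-level errors this way is exactly the translation step a good reporting layer performs: the raw stack trace still exists for the engineer, but the headline message is legible to the business.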

FUNDAMENTAL UNDERSTANDING

TestOps for me is more than technical skill; it is understanding the fundamental reasons why we need QA at every stage possible, how we can be involved without being directly involved, and how we can give confidence to the business when we are not there. It is understanding how we can utilise testing tools to reduce the repetitive workload of QAs, and how this can transform the development process into a faster, more reliable, more responsive operation.

Like all facets of the tech industry, testing is evolving rapidly. We have seen SDETs and QA Engineers become commonplace in tech teams across many sectors. In recent years, the industry and developments in technology have dictated changes in how testing is performed, including bringing elements of DevOps into the arsenal of tools available to testers. For me, these demands make it imperative that testers adopt TestOps in the future, despite any resistance to




deviation away from the classic manual testing approach that has been such a staple in the past. Companies are realising the potential that testers have and how underutilised they sometimes are – they want to get the most out of them. Testers are essentially the gatekeepers of the quality and usability of development projects. With all the talk about AI, blockchain and IoT in recent years, it is definitely a fast-paced and constantly moving job role. These will all play a crucial part in the future of how we do our job.

This is a journey I have been taking personally. As a tester, you have rich exposure to everything, and it can feel like something we take for granted or fail to take up – an opportunity to extend our skillset. Testers should apply this to their respective areas in DevOps. The lines between testers and developers are constantly getting blurred. This is not just born out of necessity for the hiring business, but arises naturally from the progression of technology and different streams of work.




WILL QA PROFESSIONALS GET LEFT BEHIND?

The switch to DevOps is challenging software testing and QA professionals' positions in the industry. How can we gain a deeper understanding of DevOps and what it takes to stay relevant as a QA tester for the future?

Like many industries, the telecommunication industry has gone through major technological development. To stay relevant in today's competitive market, companies must make swift and frequent changes to their apps; they must constantly improve their products, experiment, measure how well changes are being accepted by their customers, and iterate accordingly. Customers' expectations are rising constantly and companies must react fast and embrace change rapidly. We all know DevOps as a practice where development (Dev) and IT operations (Ops) teams collaborate to deliver software quickly throughout the service lifecycle. At the same time, we also know that DevOps has many definitions and can be used for a wide range of concepts in software development. I would like to define it as a practice to deliver software fast through team collaboration all within the development



lifecycle. By understanding DevOps in more detail, we can better see where QA professionals fit in this methodology and can continue to grow in this new age of development.

BREAKING DOWN DEVOPS

At its core, DevOps is a method of collaboration between development and operations teams. By implementing shorter release cycles with stronger feedback loops, DevOps takes end-to-end control of many engineering processes, allowing for more productivity. Another term that people often associate with DevOps is 'agile'. While DevOps provides the framework where developers and operations work more closely together, agile ensures that this happens with collaboration, small and rapid releases, and customer feedback.

DROR TODRESS, CEO AND CO-FOUNDER, TESTCRAFT. Since he founded TestCraft, Dror has met and learned from test executives about their testing challenges and works with them to solve those challenges.


QA TESTING

Through its promotion of collaboration, agile offers a way to deliver a speedy and flexible response to changes in software. The important thing to note here is that with an agile methodology, testers are no longer separate from the rest of the development lifecycle. This idea is an integral part of software development and necessitates extensive collaboration between your developers and QA engineers. There is a strong emphasis on a team effort in an agile environment. It is the developers’ and engineers’ roles to include the testers and QA professionals early on in the development process. This way your testing team can make any recommendations needed for test plans and architecting testing down the line. Conversely, your QA team can create a test plan or a testing template in advance and bring it forward to the dev team. This way both teams of professionals are constantly in the loop and are consistent with their collaboration.

HOW TO SUCCEED WITH DEVOPS

Now that we understand DevOps in more depth, we can better plan for ways to succeed with this important methodology. There are plenty of approaches to succeeding with DevOps, but the way I advocate for DevOps success is through

continuous testing. Continuous testing, or CT, is the practice of executing automated test scenarios as part of the software development lifecycle. Continuous testing allows a firm to reduce its overall cost of defects. As an example, if an e-commerce site goes down during an exciting and pivotal sales event, the company behind the event will incur enormous losses. This can happen to companies of all sizes, including Amazon, whose hours-long crash was estimated to cost $34 million per hour of downtime. Of course, Amazon held its head high and made it through with just a couple of billion dollars of revenue after the sale! But, although Amazon made it out okay, your business might not be as lucky. To avoid colossal damage similar to Amazon's Prime Day event, you must incorporate testing into your DevOps pipeline and automate your testing.
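To make "executing automated test scenarios as part of the lifecycle" concrete, here is a minimal, hypothetical example of the kind of regression check a CI pipeline would run on every commit, catching a pricing defect long before a sales event. The Cart class and its pricing rules are invented for illustration and are not taken from any particular framework.

```python
# A minimal automated regression check of the kind a CI pipeline would
# run on every commit. The Cart class and its pricing rules are a
# hypothetical stand-in for real e-commerce checkout logic.

class Cart:
    def __init__(self):
        self.items = []  # list of (name, unit_price, quantity)

    def add(self, name, unit_price, quantity=1):
        self.items.append((name, unit_price, quantity))

    def total(self, discount_rate=0.0):
        subtotal = sum(price * qty for _, price, qty in self.items)
        return round(subtotal * (1 - discount_rate), 2)


def test_cart_total_applies_discount():
    cart = Cart()
    cart.add("t-shirt", 15.00, 2)
    cart.add("mug", 8.50)
    # 2 * 15.00 + 8.50 = 38.50; 10% off -> 34.65
    assert cart.total(discount_rate=0.10) == 34.65


if __name__ == "__main__":
    test_cart_total_applies_discount()
    print("regression checks passed")
```

Run on every commit, a check like this fails the build the moment checkout maths regresses, which is the whole point of shifting testing left.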

CONTINUOUS TESTING & SHIFT LEFT

You've probably heard the term 'shift-left'. It's the idea of building tests as early as possible, even before the app is ready, and starting to test earlier in the process, so Amazon-level outages don't happen to your company. By shifting left, your CT should offer all DevOps personnel, including dev, test, and business professionals in your


organisation, a universal outlook on all testing activities. These activities range from functional flows to load testing and UX. Website crashes can happen in functional and non-functional flows alike. This is why continuous testing requires automating regression tests and executing them pre-production – better known as 'shifting left'.

TEST AUTOMATION FROM TABOO TO NORMAL

The software industry has come a long way since 100% manual testing. Test automation has only recently shifted into the most recognised testing framework, becoming vital to continuous integration and the DevOps process over the years. According to Gartner's Magic Quadrant for Software Test Automation report 2018, Selenium came out on top, with 43% of respondents naming it their top vendor – the next four vendors came in between 18% and 24%. This shows us that firms are finally leaning toward test automation solutions. The research is clear – firms that follow agile and DevOps best practices are the ones that combine, implement, and follow the two methodologies in complete sync. The way a firm succeeds in this is with automation. Automating your software framework is a crucial action to take: "Over half (53%) of firms that follow

T E S T M a g a z i n e | M a r c h 2 019



Agile + DevOps best practices consider automating the software quality process as critical, compared to just 27% of other firms". These leading companies understand the crucial benefits that end-to-end test automation can bring to an organisation: fast-paced, high-quality delivery of software. Bringing all this research together is fascinating, but I wanted to find out more from my audience of current users and over 17k social media followers – who still uses manual testing, and who has hopped on the automation bandwagon? I shared a poll asking, "Are you working with any automation tools?" and the results that came back were clear – 88% of respondents answered that they do work with automation. After conducting my own research and following Forrester's and Gartner's reports, I can now confidently say that test automation is the new normal, far from the taboo concept it once was. (Fig.1)

THE QA PROFESSIONAL'S ROLE IN DEVOPS

You may ask where we, as testers, fit in this world of DevOps. There are many ways QA professionals can jump into the DevOps framework and make their work smoother and more collaborative. Along with the new agile methodology, there is an endless need to introduce new software versions and continuous updates into the release cycle. This makes it important for development teams to adapt to these new processes. Development is a core activity of any software company, and the constant changes to its methodology require attention and teamwork within the


organisation. When developers bring testers in early in the development process, there is a better opportunity to develop trust and communication. This, in turn, allows for less breakage and confusion in the dev pipeline. Trust and communication within your team will increase the quality of the work and software your team produces, and will allow for a smoother dev workflow all round. Test automation is another way to connect QA testers to the DevOps era. When the shift to agile came about, manual testers were faced with an impossible feat: keeping up with the agile pace while using outdated legacy tools, as most of the tools developed to support agile focused on the dev process rather than QA. This caused a QA bottleneck – a bottleneck that is only growing as release cycles get shorter and regression gets longer.

It's a constant balance between quality and velocity while delivering the digital experience that is expected of you. This is why you must implement test automation. When implementing test automation, the most common solution is Selenium, as mentioned above. According to my survey, over 84% of respondents answered that they use Selenium as their de facto automation solution. Selenium is free and open source, with a large and active community, discussion forums, code libraries, and even conferences to support its usage. The Google trends graph (Fig.2) shows clearly that Selenium's search interest has more than doubled in the last decade, while legacy tools, such as UFT, are lagging far behind. But Selenium comes at a cost. Alongside the above-mentioned advantages, and although it is free open source, it is far from free and has plenty of costs



that should be taken into consideration. Before switching to Selenium you need to:
• Set up a framework – this set-up often requires you to hire a specialist to get your environment ready to work with Selenium
• Restructure your team of testers – Selenium requires coding knowledge. To start working with Selenium you may first have to let go of some of your existing team members, perhaps the testers who understand the business and testing processes well. You will then need to hire test engineers, who command higher salaries and are harder to find. Lastly, you will need to train them – teach them about the product and your work methods, and get them familiar with the work and the team.
These processes take a lot of time and require extensive funding, but once all is said and done you will start creating the

Selenium-based automated tests – and will hit the most burning issue: maintenance. Selenium tests tend to break. A lot. Tests may break upon changes in the app, and time is then wasted fixing these broken tests. I've seen many companies go back to manual testing from Selenium simply because of maintenance issues. How can we overcome that?
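One widely used way to curb this maintenance burden, for teams that stay with Selenium, is the Page Object pattern: selectors live in a single page class, so a markup change means fixing one line rather than every test. The sketch below substitutes a stubbed driver for a real Selenium WebDriver (mirroring the classic `find_element_by_id` call style), and all element IDs are invented for illustration.

```python
# Page Object pattern sketch: tests talk to a page class, not to raw
# selectors, so a UI change means editing one selector in one place.
# FakeDriver stands in for a real Selenium WebDriver; the element IDs
# ("username", "password", "login-btn") are hypothetical.

class FakeDriver:
    """Minimal stand-in for a WebDriver with a find/act API."""
    def __init__(self):
        self.actions = []  # records what the test did

    def find_element_by_id(self, element_id):
        driver = self

        class Element:
            def send_keys(self, text):
                driver.actions.append((element_id, "type", text))

            def click(self):
                driver.actions.append((element_id, "click", None))

        return Element()


class LoginPage:
    # Selectors centralised here: if the app renames "login-btn",
    # only this line changes -- not every test that logs in.
    USERNAME_ID = "username"
    PASSWORD_ID = "password"
    SUBMIT_ID = "login-btn"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element_by_id(self.USERNAME_ID).send_keys(user)
        self.driver.find_element_by_id(self.PASSWORD_ID).send_keys(password)
        self.driver.find_element_by_id(self.SUBMIT_ID).click()


driver = FakeDriver()
LoginPage(driver).log_in("tester", "s3cret")
print(driver.actions)
```

The pattern does not stop tests breaking when behaviour changes, but it localises selector churn, which is where much Selenium maintenance time goes.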

THE CODELESS WAY

Same team, easy start, and low maintenance. No, this is not too good to be true. The codeless way allows you to keep your existing QA professionals, gives them visual tools for creating their tests, and is very easy to start with. There are a handful of codeless SaaS tools out there that have no set-up costs. Some also come with new and updated features, like machine learning algorithms, to catch bugs in real time. The codeless way is designed specifically for QA professionals with no

coding skills. It bridges the gap between the need for high software quality and constant release cycles, while also dissolving the burden of test maintenance. With the help of your new codeless test automation tool, your QA team can succeed in the DevOps arena by tackling your site's customer support initiatives. With this in mind, it is imperative to monitor your site constantly. This is a hard task to do with a manual framework, so a codeless test automation tool can step in admirably here. Here is a list of ways a tool can help monitor your site:
• Create real-life monitoring scenarios and schedule the executions to your liking
• End-to-end continuous monitoring
• A controlled and monitored database.
This list is just the beginning of the ways a codeless test automation tool can help your QA team succeed in delivering the highest software quality.




8 STEPS FOR PAIN FREE CLOUD MIGRATION

When it comes to cloud migration, preparation is crucial to project success. By progressing through the following key stages you can achieve a better chance of running a smooth migration with minimum disruption.

Cloud adoption by UK companies has now neared 90%, according to the Cloud Industry Forum, and it won't be long before all organisations are benefiting to some degree from the flexibility, efficiency and cost-savings of the cloud. Moving past the first wave of adoption, we're seeing businesses ramp up the complexity of the workloads and applications they're migrating to the cloud. Perhaps this is why 90% is also the proportion of companies that have reported difficulties with their cloud migration projects. This is frustrating for IT teams deploying cloud solutions that are supposed to be reducing their burden and making life simpler. Performing a pain-free migration to the cloud is achievable, but preparation is crucial to project success. Progressing through the following key stages offers a better chance of running a smooth migration with minimum disruption.


1. SET GOALS AT THE OUTSET

Every organisation has different priorities when it comes to the cloud, and there’s no 'one cloud fits all' solution. Selecting the best options for your organisation means first understanding what you want to move, how you’ll get it to the cloud and how you’ll manage it once it’s there. You also need to identify how migrating core data systems to the cloud will impact on your security and compliance programmes. Having a clear handle on these goals at the outset will enable you to properly scope your project.

2. ASSESS YOUR ON-PREMISES ENVIRONMENT

Preparing for cloud migration is a valuable opportunity to take stock of your on-premises data and applications and rank them in terms of business-criticality. This helps inform both the

AMY HAWTHORNE, VP GLOBAL, ILAND
Amy has over 15 years of experience in the sector, previously holding leadership roles at tech companies Phunware, a mobile app platform company, and Rackspace.



structure you'll want in your cloud environment and also the order in which to migrate applications. Ask the hard questions: does this application really need to move to the cloud, or can it be decommissioned? In a cloud environment, where you pay for the resources you use, it doesn't make economic sense to migrate legacy applications that no longer serve their purpose. Once you have a full inventory of your environment and its workloads, you need to flag up any specific networking requirements and physical appliances that may need special care in the cloud. This ranked inventory can then be used to calculate the required cloud resources and associated costs. Importantly, this process can also be used to classify and prioritise workloads, which is invaluable in driving down costs in, for example, cloud-based disaster recovery scenarios where different workloads can be allocated different levels of protection.
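The ranking step can be sketched with some toy arithmetic: classify each workload by business-criticality, map the ranking to a DR protection tier, and total the estimated cost. Every workload name, tier and price below is hypothetical.

```python
# Toy ranked inventory: classify workloads by business-criticality and
# use the ranking to assign DR protection tiers and estimate monthly
# cost. Every workload, tier and price here is hypothetical.

# Hypothetical monthly cost per protection tier (USD).
TIER_COST = {"continuous-replication": 500, "daily-backup": 120, "archive-only": 20}

def protection_tier(criticality):
    """Map a 1 (highest) to 3 (lowest) criticality rank to a DR tier."""
    return {1: "continuous-replication", 2: "daily-backup", 3: "archive-only"}[criticality]

inventory = [
    {"name": "orders-db", "criticality": 1},
    {"name": "intranet", "criticality": 2},
    {"name": "legacy-reports", "criticality": 3},  # candidate for decommissioning
]

plan = {w["name"]: protection_tier(w["criticality"]) for w in inventory}
monthly_cost = sum(TIER_COST[t] for t in plan.values())
print(plan)
print(f"estimated monthly DR cost: ${monthly_cost}")  # $640
```

Even a toy model like this makes the economic argument visible: the lowest-criticality workloads contribute almost nothing to cost when tiered correctly, and decommissioning them removes even that.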

3. ESTABLISH MIGRATION TECH SUPPORT

Many organisations take their first steps into the cloud when looking for disaster recovery solutions, enticed by the facility to replicate data continuously to a secondary location with virtually no downtime or lost data. This is fundamentally the same as a cloud

migration, except that it is planned at a convenient time, rather than prompted by an extreme event. This means that once the switch is flipped, the migration should be as smooth as a DR event. However, most organisations will want to know that there is an expert on hand should anything go wrong, so 24/7 support should be factored into the equation.

4. BOOST WHAT YOU ALREADY HAVE

Look at your on-premises environment and work out how to create synergies with the cloud. For example, VMware users will find there's much to be said for choosing a VMware-based cloud environment equipped with tools and templates specifically designed for smoothly transitioning initial workloads. It's an opportunity to refresh the VM environment and build out a new, clean system in the cloud. This doesn't mean you can't transition to a cloud that differs from your on-premises environment, but it's a factor worth taking into consideration.

5. MIGRATION OF PHYSICAL WORKLOADS

Of the 90% of businesses that reported difficulty migrating to the cloud, complexity was the most commonly cited issue, and you can bet that shifting


physical systems is at the root of much of that. They are often the last vestiges of legacy IT strategies and remain because they underpin business operations. You need to determine if there is a benefit to moving them to the cloud and if so take up one of two options: virtualise the ones that can be virtualised – possibly using software options – or find a cloud provider that can support physical systems within the cloud, either on standard servers or co-located custom systems.

6. DETERMINE INFO TRANSFER APPROACH

The approach to transferring information to the cloud will depend on the size of the dataset. In the age of virtualisation and of relatively large network pipes, seeding can often be viewed as a costly, inefficient and error prone process. However, if datasets are sufficiently large, seeding may be the best option, with your service provider providing encrypted drives from which they’ll help you manually import data into the cloud. A more innovative approach sees seeding used to jumpstart the migration process. By seeding the cloud data centre with a point in time of your environment, you then use your standard network connection with the cloud to sync any changes before cut-over. This minimises downtime and represents the best of both worlds.


7. CHECK NETWORK CONNECTIVITY

Your network pipe will be seeing a lot more traffic, and while most organisations will find they have adequate bandwidth, it's best to check ahead of the migration that it will be sufficient. If your mission-critical applications demand live streaming with zero latency, you may wish to investigate direct connectivity to the cloud via VPN.

8. CONSIDER POST MIGRATION SUPPORT

Your migration project is complete; now you have to manage your cloud environment and get accustomed to how this differs from managing on-premises applications. The power and usability of management tools should be part of the selection criteria, so that you are confident you will have ongoing visibility and the facility to monitor security, costs and performance. Furthermore, support is a crucial part of your ongoing relationship with your cloud service provider, and you need to select an option that gives you the support you need, when you need it, at the right price. As more and more businesses take the plunge and move mission-critical systems to the cloud, we'll see the skills and experience of in-house teams increase, and the ability to handle complex migrations will rise in tandem. Until then, IT teams charged with migration projects shouldn't be afraid to wring as much support and advice out of cloud service providers as possible, so that they can achieve a pain-free migration and start reaping the benefits that only the cloud can bring.


WHY DISASTER RECOVERY IS A MUST-HAVE

Natural disasters and data breaches have been hitting the headlines this year. While it might not happen to you on such a large scale, any sort of outage can cause IT administrators and CIOs to lie awake at night wondering if they are well protected. Disaster Recovery as a Service (DRaaS) is now a mainstream use of the cloud. This makes a lot of sense, as it helps companies avoid having to double their infrastructure, with half of it sitting idle waiting to recover from full or partial outages. But there are many more reasons to opt for cloud-based disaster recovery (DR).

You can't afford downtime!
Whether you call it IT resilience, data protection or business continuity, the reality is that you cannot afford downtime. DR is not a luxury; it's a necessity, as it is critical to be always available for your internal and external customers. You know that if you can't be reached by a customer, they'll simply move to your competitor instead. Or even worse, your internal customers may turn to shadow IT or other things outside your purview, which can cause a whole lot of problems further down the line. When you look at the news, natural disasters get the biggest headlines, but normally that's not what's happening in your environment. Most of the time, you suffer downtime from hardware or software faults, malicious ransomware, or careless users and accidents. The biggest challenge is that IT budgets haven't grown to accommodate DR, yet it's fast becoming mission-critical. In every single budget cycle, the first thing

that tends to get cut is DR. You hope that for just one more year nothing bad is going to happen, and no matter how many times you talk to the stakeholders about how important DR solutions are, something will cause a budget block. However, at the end of the day it's IT that will come under the spotlight if an actual issue occurs. Gartner estimates that an outage has an average impact of $5.6K for every minute of unplanned downtime, but it's actually quite difficult to put a precise cost on downtime. It could be anything from your ordering system, to your emails, to your presence on the web – all will have varying impacts on your business performance, and when you do post-event analysis, these are the tangible effects that you can put a pound sign next to. What is harder is putting a figure against the intangible costs of downtime. Ask yourself: "What's the perception of your customers if they can't reach their data or application?" If you have a hundred different applications but the one your customer really needs is down, what's this really telling your customer about your overall ability to support their needs? Another intangible cost of downtime is customer and internal loyalty. If your competitors are up when you're not, your customer might decide to change provider, and the cost of retaining and/or reacquiring this customer is astronomical. Finally, let's consider the issue of 'confidence'. Are your internal sales team confident they can get their orders processed? Are your customers confident that you can deliver on your promise? If that confidence goes down, customer and



employee retention goes down as well. The numbers are quite scary. Disasters can make or break an organisation, but everybody thinks it's not going to happen to them. We've talked with customers over the years and found that nearly 50% of organisations protect less than half their virtual machines with a DR plan (according to a 2018 survey conducted by Veeam). What's even more scary is that 85% of decision makers are not confident in their current solution's ability to recover virtual machines (according to Veeam's 2017 availability report). In the past, it would be necessary to shut down the whole data centre and spend a three-day weekend failing over and running through an extensive recovery plan. Usually, even by the end of these three days, organisations still weren't sure this would work during a real disaster. Then Monday would come around and it would be necessary to power on and back up anyhow, and just hope that the solution in place was enough. It's not unusual to have some kind of IT disruption – according to Spiceworks, 77% of organisations reported experiencing at least one outage (i.e. any interruption to normal levels of IT-related service) in the last 12 months. Simply having a power failure in your data centre can affect your whole business. According to FEMA, nearly 40% of small businesses close after a disaster. Think about that: if you lose your customers' context, your sales receipts, orders, even your customers' confidence, how much will it cost to regain it, and is it even possible at that point?

Your backups aren't enough!
The second reason you need a cloud-

based disaster recovery is because your back-ups aren't enough. After a disaster you can still reinstall applications, you can always get new hardware, you can even get your internet line running again. Nevertheless, your business is based around your data and, if you don't have it, you can't operate. Backups are not the same as IT resilience or disaster recovery. The disadvantage of back-ups, especially when you try to recover quickly, is that they are prone to unacceptable Recovery Point Objectives (RPO). If you're only backing up every night and something happens an hour before the back-up, an entire day of work will be lost. In addition, backups can have painful Recovery Time Objectives (RTO). If it takes you 17 hours to recover your entire data centre from tapes, on top of however long it has already been down, that's a considerable amount of time. On top of that, local back-ups can be targeted by malicious software – especially some nasty new ransomware packages, which target your back-up applications and database back-up files and will seek to lock them all down. Back-ups also need something to restore to. This means that if the power in your building is out, a back-up is not going to help. If a server catches fire, a back-up is not going to help either – it's going to take a week to get new equipment to restore to. Now, this is not to say that you don't need back-ups: they're vital for data protection, long-term archives, and security compliance, but a full IT resilience plan means back-ups and DR.
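The RPO/RTO points above, together with the Gartner per-minute figure quoted earlier, reduce to simple worst-case arithmetic. The helpers below just make that arithmetic explicit; they are illustrative, not a costing model, and they deliberately ignore the intangible costs discussed earlier.

```python
# Back-of-envelope downtime and RPO/RTO arithmetic using the figures
# quoted in the text. Illustrative only: real impact varies hugely by
# system (ordering vs email vs web presence) and excludes intangibles.

GARTNER_AVG_COST_PER_MINUTE = 5600  # USD, average unplanned-downtime impact

def downtime_cost(minutes, cost_per_minute=GARTNER_AVG_COST_PER_MINUTE):
    """Tangible cost of an outage at a flat per-minute rate."""
    return minutes * cost_per_minute

def worst_case_data_loss_hours(backup_interval_hours):
    """RPO worst case: failure strikes just before the next backup runs."""
    return backup_interval_hours

def total_disruption_hours(hours_down_before_restore, restore_hours):
    """RTO view: outage so far plus the time to restore from backup."""
    return hours_down_before_restore + restore_hours

print(f"4h outage: ${downtime_cost(4 * 60):,}")           # at the Gartner average
print(worst_case_data_loss_hours(24), "h of lost work")   # nightly backups
print(total_disruption_hours(3, 17), "h total disruption")  # 3h down + 17h tape restore
```

Continuous replication attacks both numbers at once: it shrinks the backup interval toward zero (RPO) and replaces the long tape restore with a failover (RTO), which is the core of the DRaaS argument.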

Cloud makes your job easier!
There are many reasons why some organisations don't have a DR plan. It can be a lack of manpower, skills, space or budget. However, the cloud has come along and it's agile, scalable, high-performing, highly adaptable and cost-effective. When you think about traditional DR and purchasing a secondary data centre, duplicate hardware, replication, licences and so on, at some point you have to take into account the additional headcount to help run all of that. When you move to a cloud-based DR solution and a cloud-based service provider, you have access to the expertise of the cloud team, which makes your job considerably easier. Most people don't spend every single day of their lives worrying about DR or business continuity, but providers do, so they can help you create that security. We often hear people saying that if they create a secondary data centre it's because they don't trust the cloud, or because they need certain security and compliance solutions in place and the cloud doesn't provide them. The reality is that in a proper cloud environment, security and compliance are built into the platform. In fact, the cloud can be even more secure than your on-site environment. Data should be able to flow in and out effortlessly without unexpected costs. Fears about security shouldn't be minimised because it's "just the recovery site". Customers need to be in control at every step of the process. When disaster strikes, fears and unknowns are the worst things to have to deal with, so a DR solution should be simple, well-understood, secure and available.


CREATING THE CONDITIONS FOR BUSINESS AGILITY

What is business agility, and how can it be enabled? Why do organisations that create the conditions which allow their agile delivery teams to thrive fare much better?

Organisations need to constantly adapt in order to succeed. I believe that (top-down) organisations need to understand their customers' evolving needs, and invest in the development and delivery of products and services that best meet those needs. I also believe that (bottom-up) delivery teams using user-centred agile approaches are best placed to understand users' needs and to develop and deliver the right products and services to meet them. In this article I'll outline what I mean by business agility, and consider how business agility is enabled by organisations that create the conditions which allow agile delivery teams to thrive. I'll also begin to explore the dimensions to be considered by organisations


who want to be truly agile, from the perspective of the people involved.

THE CASE FOR ADAPTIVE ORGANISATIONS

In our view, business agility is about the ability of an organisation to adapt in a fast-changing world. Agile is a well-used term in the development of software products and digital services, and agile approaches can help organisations deliver better products and services to meet customer needs. Regardless of the methodology used, in order to reap the full benefits of agile approaches, organisations need to create the conditions for agile teams to succeed and thrive.

HUGH IVORY, FOUNDER & CXO, AGILESPHERE
Hugh works with colleagues in the leadership team and company partners to define and implement business strategy.



As digital becomes more prevalent, and new technologies transform the way organisations engage with their end-users, customers expect to have more influence over how they interact with organisations. They expect their feedback to be valued, and that providers will react to that feedback through rapid improvements in products and services. When their feedback is ignored, customers will move to another provider. Today’s successful organisations are those that are most capable of responding to the needs of their customers (or the citizens they serve) and the marketplace they operate in. We call these 'adaptive organisations'. It is the adaptive organisation that will retain and grow its customer base. Adaptive organisations have the ability to take constant feedback (from clients, citizens, suppliers, the wider marketplace etc.) and improve their products and services in response to that feedback. In order to be adaptive, we believe organisations need to create the conditions which enable their people to execute rapid cycles of learning and

adapting: seeking user feedback, making adjustments to products and services, and delivering those to market in a meaningful timeframe.

THE DIMENSIONS OF ORGANISATIONAL AGILITY

In order to create the conditions that allow their people to enable business agility, adaptive organisations will need to transform and continuously improve the following dimensions:

Product and service delivery:
• The push for agility is often driven from here, as delivery teams (software developers, testers, user researchers, designers, delivery managers, product owners etc.) use agile methods to put the user at the centre of product and service development, and apply new technologies and DevOps approaches to improve speed to market.

Governance (and management):
• As delivery teams start doing things differently, learning to fail fast and iterating based on proper research


and user feedback, ways of governing and assuring are challenged to remain fit for purpose
• Governance processes need to be effective enough to allow accountable stakeholders sufficient transparency to ensure that they are making the right investments and delivering them well, but efficient enough to allow the delivery teams to get on with responding to user feedback and delivering value.

Organisational structure and culture: • Adaptive organisations need to empower their people, trusting them to deliver customer value because they are closest to users and most familiar with their needs. They need to allow time and space for people to innovate, to try things out, take feedback, adjust products and services • Adaptive organisations need to be structured around the delivery of services to customers, and not around the traditional organisation silos (finance, HR, operations) – yes these




functions need to exist, but their primary focus should be on enabling their people to support the delivery of customer value.

Figure: Adaptive organisations will concentrate on transformation across these dimensions.

HOW THIS IMPACTS ON PEOPLE

Your perspective on the adaptive organisation, and how you can assist in, and benefit from, the implementation of a user / citizen / customer value focus, will

be influenced by your role. Those of you in delivery teams using agile approaches to build products and services, will often feel frustrated, hamstrung by the mechanisms and structures that delay your progress. You should realise that the leaders in your organisation want the same thing as you – delivery of value early and often in response to customer and market need. They just want to protect their


investment, and they look for assurance about that. You can help by explaining how, by putting users at the centre of your product or service development initiatives, and spending appropriate time to understand their needs, you can ensure that you are delivering the right thing. You can outline how iterative and incremental delivery, with frequent demonstration (and delivery) of product, protects the organisation’s investment. Providing easy access to your information radiators, and inviting them to visit you often will help with this. Those of you in leadership positions want your organisations to be agile and adaptive, to react to the forces of change – better informed customer and citizen needs, reduced budgets and changing legislation. You may be frustrated by the slow pace and high cost of change. You will be concerned about the risk of wasted investment, and the consequences of that in terms of investor, regulator and media scrutiny. So, you look for assurance and

appropriate governance to safeguard your investment. If you have good agile delivery teams, their very approach to developing and delivering products and services will be safeguarding your investment. They will ask you to empower your best, most visionary people to work with them to deliver what you really need. They will ask for time to explore, make mistakes, learn. They will ask for your patience – don't expect the false certainty of a two-year plan – let them know your desired outcome, give them space to figure it out, help them by removing obstacles and blockers that get in their way. Visit them as often as you can – they'll welcome you. Those of you responsible for governing and assuring are caught in the middle of this drive for agility and adaptability. You are expected to be the brokers between the sponsors and the delivery teams, facilitating the means for ensuring that money is invested appropriately, and is being used effectively to meet real user needs. You will need to create the conditions whereby leaders can:

• Make decisions about the most important things to do
• Allocate skilled, knowledgeable and empowered people to the delivery teams
• Come and see the progress for themselves.

Delivery teams will expect that the information they generate as they work should be sufficient to demonstrate progress and control. And everyone will expect you to ensure that governance approaches add value and don't slow down delivery.

The potential for agile to enable organisational transformation can only be fully realised when leaders, managers and delivery teams align behaviours, and organisations mature beyond the delivery dimension (using agile practices to deliver a specific product or service) through to the organisation dimension, harnessing the agile mindset and values to create a learning, evolving, adaptive organisation.

T E S T M a g a z i n e | M a r c h 2 019



THE ROLE OF QA IN DEVOPS

With the lines becoming increasingly blurred within DevOps, multi-disciplined teams look set to be the shape of things to come

Unbeknownst to me – until I used the power of the internet to learn such pub quiz facts – the term 'DevOps' was coined in 2009 and popularised by an event held in Ghent, Belgium, called DevOpsDays. This was run by an American agile enthusiast named Andrew 'Clay' Shafer and a Belgian agile enthusiast named Patrick Debois. I would imagine that is a widely unknown fact amongst industry professionals, and yet these gentlemen are responsible for naming one of the most in-demand disciplines in software development today. I think they're worthy of a shout-out for that reason, and it's nice to know where the term we use on a day-to-day basis comes from.

Moving on from the history lesson, let's delve into a high-level description of DevOps: what it is responsible for, what it aims for and, importantly, what it isn't.

WHAT IS DEVOPS?

To quote my friend and colleague Steven Burton, "DevOps is the narrowing of the gap between the creation of the product and the delivery of that product to the customer". It achieves this through the use of different frameworks and tools, of which there is a large market to choose from to suit the needs of the project.

DevOps is about company and team culture to deliver value. This means making sure the correct thinking processes are in place and that there are reduced barriers to getting the right things done. Tooling and frameworks are very similar to automation, in that they are side effects – almost symptoms – of the right kind of culture: we don't want to put the cart before the horse just because we know the cart is going to be needed at some point.

Traditional Ops is concerned with the infrastructure of systems: not the actual code of the product itself, but where and how it is hosted, deployed, updated and managed.

ROB CATTON
CONSULTANT, INFINITY WORKS
Rob is a software consultant working in web development, mobile testing and commercial areas. He is passionate about helping other testers where he can, and will present at this year's National Software Testing Conference.

WHAT IS DEVOPS RESPONSIBLE FOR?

DevOps' ultimate responsibility is to ensure any features that are developed are delivered to the customer as efficiently and quickly as possible. Test environments are an important part of this, and part of DevOps is ensuring they are available and realistic when compared to live. DevOps has to ensure that it is simple and efficient to deploy versions of the software with the lowest amount of risk possible. It should adhere to clear agreements and provide metrics to stakeholders that show how well those agreements are being met. As alluded to earlier, DevOps' primary concern is getting what was written in an IDE into a usable state for the client as quickly as possible, and facilitating all of the steps in between.
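As a sketch of that 'metrics to stakeholders' idea, the snippet below computes two widely quoted delivery indicators – deployment frequency and change failure rate – from a small, invented log of releases. The dates and the `caused_incident` flag are illustrative assumptions, not output from any particular tool:

```python
from datetime import datetime

# Hypothetical deployment log: (deployed_at, caused_incident)
deployments = [
    (datetime(2019, 3, 1, 10, 0), False),
    (datetime(2019, 3, 4, 15, 30), True),
    (datetime(2019, 3, 6, 9, 45), False),
    (datetime(2019, 3, 8, 14, 0), False),
]

# Deployment frequency: releases per day over the observed window
window_days = (deployments[-1][0] - deployments[0][0]).days
frequency = len(deployments) / max(window_days, 1)

# Change failure rate: share of releases that caused an incident
failure_rate = sum(1 for _, bad in deployments if bad) / len(deployments)

print(f"Deployment frequency: {frequency:.2f} per day")
print(f"Change failure rate:  {failure_rate:.0%}")
```

In practice you would pull these records from your deployment tooling rather than hard-code them, but even a crude version of this gives stakeholders a number to track from one month to the next.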

WHAT IS DEVOPS NOT?

DevOps is not:
• Responsible for what the product functionally does
• A tool, any more than 'software development' is a tool
• Subjective – it asks specifically what is required, and delivers that (one of the reasons that, as a tester, I can really identify with it)
• For someone else to worry about
• Synonymous with automation, although automation of tasks and testing can be involved
• A role or a team, any more than you have a specific 'agile' team to make your company more agile.

As stated earlier, we want to reduce the overall time it takes to get new functionality out and live. DevOps is largely responsible for this, and what a responsibility it is. It won't matter how polished your product is if it is simply late to market; you will have been overtaken by then. Furthermore, moving code across and through the different staging environments that companies use facilitates other necessary operations – hence the subject of this article.

TESTING IN DEVOPS

Testing is a vital part of the DevOps way of working and an ultimate beneficiary of a good pipeline. As testers, we are trying to help assure quality software and lower risk as early in the process as possible. Having pipelines which narrow the gaps between the different stages of the software development lifecycle enables us to do this. From ensuring that a product is more testable to making that product available on a realistic test environment as soon as possible, DevOps encourages many processes that make testing a better-targeted activity. Therefore, as a tester, you have a vested interest in driving forward a DevOps culture within your team and department.

TEAMWORK MAKES THE DREAM WORK

In my career I have seen multiple ways of handling DevOps: an integrated member of the team who specifically deals with it; an entire external team who are responsible for it; and, lastly, it being an additional responsibility of the whole development team. Let me say this in no uncertain terms: I wholeheartedly hold the opinion that the last option is strictly superior to the first two. I am aware this isn't exactly an outspoken view per se, but it will certainly be against what some readers hold dear, or what they are accustomed to.

The advantage of having a team full of people who embrace a DevOps culture is that they fully understand the product they're delivering, as it's what they work on every day. This means they are less reliant on external dependencies and have fewer blockers with external teams. They won't have to raise a ticket with a team in a different part of the building. They know exactly what the product needs, what it needs to run suitably, how it might be unperformant, and what level of support is required. If they need something to be slightly tweaked or changed... they can do it!

In reality, I believe multi-disciplined teams are where we are going to end up in the near future – as a standard. Would you prefer to hire a developer with no DevOps experience or one with plenty, all other things being equal? It's a no-brainer in reality. Teams need people who take




responsibility for what they work on, and don't just throw something over the fence and decide it's not for them to worry about any more. This is the current lay of the land in the team I work in, and I feel empowered by the knowledge that if something isn't how we want it to be, we can just change it. Be it our Slack integrations, our monitoring or our AWS deployment system – it doesn't matter. It's all for us to decide.

This leads me on to my final point: what can we do as aspiring DevOps-ateers? (Here's hoping that phrase catches on so I can be quoted in an article in 10 years' time!) Responsibility is a word I have used a lot over the last few paragraphs, but I can't stress enough how much your company will want its employees to be taking some on. Not only does it develop your skills in areas where you may not have much expertise, but taking on more responsibility genuinely makes your career more fulfilling, in my opinion – heck, this is just a fact of life in general. People often love to boast about how they have no responsibilities, so they have free rein to do whatever they want. That only gets you so far, however, before you realise that in life it's taking on responsibility that makes you stronger as a person and far, far more capable of dealing with the problems of both work and life.

I don't think people in the QA part of the team should have any more or less affiliation with DevOps than anybody else – it should ideally be a joint effort of all. My recommendation for becoming more adept at DevOps practices is to read as much as you can in your spare time about the topic, and then take a really good look at the way the process in your team or workplace currently operates – from your use of CI pipelines to your code-merging strategy. You will know where the pitfalls are, where you are slow when you need to be fast, and where you are unstable where stability is key. Discuss it with your team, and see what they think. Soon, a list will emerge.
Some things won't be fixable the next day, or even in the next six months. But once you write down the problems, they're in your sights and you can start chipping away at them one-by-one. Too many businesses like to turn a blind eye to these things, and then wonder why there are blockages or inefficiencies. Don't be wilfully blind – find those problems.
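One lightweight way to start finding those problems is to let your pipeline data do the talking. The sketch below – using made-up stage names and timings rather than any real CI system's output – averages recent run durations per stage and ranks them, so the slowest link stands out:

```python
from collections import defaultdict

# Hypothetical stage timings (stage, minutes) gathered from recent CI runs
runs = [
    ("build", 4), ("unit-tests", 6), ("deploy-test-env", 22),
    ("build", 5), ("unit-tests", 7), ("deploy-test-env", 25),
    ("build", 4), ("unit-tests", 6), ("deploy-test-env", 30),
]

# Group the recorded durations by stage name
by_stage = defaultdict(list)
for stage, minutes in runs:
    by_stage[stage].append(minutes)

# Rank stages by average duration: the top entry is your likely bottleneck
averages = {stage: sum(m) / len(m) for stage, m in by_stage.items()}
for stage, avg in sorted(averages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{stage:<16}{avg:5.1f} min")
```

Ten minutes with a script like this, fed from your own build server's logs, turns a vague sense of "the pipeline feels slow" into a concrete, prioritised list you can take to the team.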



The National Software Testing Conference is the premier UK-based event that provides the software testing community at home and abroad with practical presentations from the winners of The European Software Testing Awards. The event features roundtable discussion forums, facilitated and led by top industry figures, and a market-leading exhibition, which enables delegates to view the latest products and services available to them – all alongside top food and drink provided over the two days.

The National Software Testing Conference is open to all, but is aimed at and produced for those professionals who recognise the crucial importance of software testing within the software development lifecycle. The content is therefore geared towards C-level IT executives, QA directors, heads of testing, test managers, senior engineers and test professionals.

Taking place over two action-packed days, the National Software Testing Conference is ideal for all professionals aligned with software testing – a fantastic opportunity to network, learn, share advice, and keep up to date with the latest industry trends. With hundreds of delegates alongside top exhibitors and networking opportunities, the conference will be held this year at the British Museum, Great Russell St, Bloomsbury, London, and will play host to two days of software testing content: 44 practical presentations, eight keynote presentations, six workshops and two expert Q&A sessions.

For full details and to register, please visit: https://bit.ly/2USoxIf




The British Museum 21st-22nd May, 2019 2 DAYS OF NETWORKING AND EXHIBITIONS / 44 PRACTICAL PRESENTATIONS / 8 KEYNOTES / 6 WORKSHOPS / 2 Q&A SESSIONS

Visit: https://bit.ly/2USoxIf




The Canadian Software Testing & QE Awards celebrate companies and individuals who have accomplished significant achievements in the software testing and quality engineering market. Enter the Canadian Software Testing & QE Awards and start on a journey of anticipation and excitement leading up to the awards night. It could be you and your team collecting one of the highly coveted awards!

Why wait? Register today! SoftwareTestingAwards.ca

