







Things are hotting up in cyberspace. First, anti-spam organisation Spamhaus was hit by possibly the largest DDoS (distributed denial of service) attack in history: a colossal 300Gbps was thrown against its website, but the organisation said it was able to recover and get its core services back up and running quickly. Because the not-for-profit organisation supplies a ‘blacklist’ of IP addresses linked to the distribution of spam, it is an obvious target for spammers. According to reports, the attack, which began on 18 March, flooded Spamhaus’ connection to the rest of the Internet, knocking its site offline. Steve Linford, chief executive of Spamhaus, told the BBC that the scale of the attack was unprecedented. “We’ve been under this cyber-attack for well over a week. But we’re up - they haven’t been able to knock us down. Our engineers are doing an immense job in keeping it up - this sort of attack would take down pretty much anything else.”

Of course the prime suspect has to be North Korea following weeks of heightened tension in the region, but no hard evidence has yet come to light.

Rum happenings have also been reported in South Korea, where data-wiping malware knocked out PCs at TV stations and banks. Reports said that several South Korean financial institutions - Shinhan Bank, Nonghyup Bank and Jeju Bank - and TV broadcast networks were hit by a destructive virus which wiped the hard drives of infected PCs, preventing them from booting up on restart.

It is nigh on impossible to stay a step ahead of the hackers in a dynamic and ever advancing online world, but a proactive attitude to keeping security as close to the cutting edge as possible and a rigorous approach to penetration and load testing are the best ammunition in this war against destructive cyber ‘terrorists’ and criminals.

Initially it was thought that the servers of a South Korean antivirus company had been hacked and malware inserted, which was then distributed as an update patch. But speculation is rife and, according to The Register, the latest theory suggests hackers first obtained an administrator login to a security vendor’s patch management server via a targeted attack. “Armed with the login information, the hackers then created malware on the PMS server that masqueraded as a normal software update,” the site reported.

© 2013 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor or TEST Magazine or its publisher, 31 Media Limited. ISSN 2040-01-60

APRIL 2013 |


Until next time...

Matt Bailey, Editor

TO ADVERTISE CONTACT: Grant Farrell Tel: +44(0)203 056 4598

EDITORIAL & ADVERTISING ENQUIRIES 31 Media Ltd, 41-42 Daisy Business Park, 19-35 Sylvan Grove, London, SE15 1PD Tel: +44 (0) 870 863 6930 Fax: +44 (0) 870 085 8837


PRINTED BY Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA

EDITOR Matthew Bailey Tel: +44 (0)203 056 4599





LEADER COLUMN The war in cyberspace goes on – rum doings on the interweb.



The European Software Testing Awards is now open for entries and the first batch of awards judges has been announced.




Robert Winter warns this lack of preparation can have serious consequences if a data disaster strikes.




California-based test automation expert Mark Lehky explains the benefits of a simple data-driven test framework approach for verifying simple functionality.





As apps become more popular, offering use-anywhere convenience to the customer, Chris Livesey says mobile access to websites and applications is changing the expectations of back-end software.






Following her look at Agile methods in the last issue, this time Angelina Samaroo tackles Agile principles.



Looking at the example set by the sport of baseball, and especially the movie Moneyball, allows us to examine the mistakes we make in evaluating information and making decisions, according to Peter Varhol.

MAKING TESTING MORE EFFICIENT Making the testing process more efficient should be a priority for all companies developing software.




As enterprise mobility becomes a necessity for businesses, Matt Davies discusses the key considerations for developing applications that can be accessed on a range of mobile devices.





Big Data is big news. Mike Holcombe offers a practical example of how testing is progressing in this field.




With a new contract to work on, it’s back to basics for Dave Whalen.


NEWS SECOND IT GLITCH IN NINE MONTHS HITS NATWEST RBS Following news about the latest RBS ‘IT glitch’, the partially publicly-owned bank has confirmed it was a hardware fault, not a repeat of last summer’s three-day systems outage which caused problems for RBS and NatWest customers nine months ago. “While history has shown that poor customer service is not a strong driver of customer churn in banking, customers are far more sensitive to an inability to access their money,” comments Daniel Mayo, practice leader,


financial services technology at Ovum. “Trust is critical in banking. Given the issues RBS had with its computer systems last year, a second, albeit far shorter, disruption is likely to cause both short-term embarrassment and a greater longer-term impact on both customer acquisition and retention. “That said, from an IT perspective RBS responded quickly to the operational issues,” concedes Mayo, “suggesting that business continuity plans have been enhanced. However, the issue for RBS and all banks with legacy systems is the need to prevent operational failures in the first place, and if the frequency of critical incidents continues, RBS will need to think about taking a more transformational route to cure some of the underlying challenges.”

IT RAISES THE BAR Over the past five years the length of the interview process has nearly doubled for the average successful applicant in the IT industry, according to research from IT recruiter Randstad Technologies.

On Wednesday 5th February 2003 the UK’s first computer museum opened. The Museum of Computing, based in Theatre Square, Swindon, celebrated its 10th anniversary with a robotics theme, and the museum says it will be staging a new interactive exhibition of robotics, which will run for six months, as well as talks on state-of-the-art robotics sponsored by the information technology giant, Intel. Other events will include a Lego robotics programming day at the museum, where people can drop in and have hands-on experience of programming to control a robot. “What an astonishing 10 years it has been,” said Simon Webb, the museum’s curator. “We have created a vibrant interactive museum, which is run by an enthusiastic team of volunteers. We have engaged with the community and assisted with the education of today’s generation of computer enthusiasts. Every year there are seismic changes in technology, and we have captured an impressive catalogue of hardware, documenting its incredible development. This is going to be a fantastic year for the museum with a huge amount of activity and growth. We look forward to exhibiting today’s cutting edge technological developments.” In 2005 The Guardian newspaper listed the Museum of Computing as one of the top 25 museums to visit in the country, rivalling the Science Museum in London. Visitors to the museum have included Sir Clive Sinclair in 2006 and the Duke of Kent in 2007, and there is a constant stream of visitors from around the globe.


IT professionals who secured a new job in the last 12 months spent 83 percent longer on the interview process for that role compared to 2008, according to an independent poll of 2,000 people carried out by market research firm Canadean. On average, they spent 8.2 hours on the interview process – an increase of 3.7 hours compared to five years ago. This compares to the current national average for professionals in other disciplines of seven hours. As a result, the total time taken to find a new job in IT has risen by 38 percent since 2008. Active job hunters in the sector spend an average of eight weeks and a day in the process of finding and securing a new job, compared to the five weeks and six days taken five years ago. Despite the rise, this compares favourably to the ten weeks and five days spent on average across other disciplines.

More than half (53 percent) of IT professionals who interviewed for a job in the last year state the process was harder than five years ago.

A separate poll of Randstad Technologies’ consultants suggests that the level of testing in the IT sector during the application process has increased, too. Five years ago, 18 percent of IT roles required some form of psychometric, technical or aptitude test, a figure which has now more than doubled to 40 percent – far higher than the wider average of 28 percent across all industries. The number of interviews employers in the IT industry conduct with a successful candidate has also risen sharply since 2008. For a junior role, employers required an average of 1.9 interviews five years ago, a figure that has risen to 2.5. Employers now interview successful candidates for senior roles an average of 3.2 times, up from 2.8 five years ago. The number of vetting checks carried out after the interview process has concluded has also increased in the sector. Five years ago, employers vetting credentials such as qualifications, CRB checks and references delayed the hiring process by an average of 3.7 days. This delay has now increased to an average of 7.1 days.



IT FIRMS PESSIMISTIC ABOUT THE ECONOMY As economic growth continues to show little sign of bursting into action, information technology businesses are learning to adjust to conditions that look set to stick. The eighth national business sentiment survey conducted by business insurance specialist QBE shows that just 21 percent of the sector’s businesses now expect a full recovery within two years, with 74 percent expecting the recovery to take two years or more and 37 percent believing we will need to wait until at least 2016, three years or more, before a full recovery can be expected. As businesses adapt to the ongoing economic circumstances and the ‘new normal’, 79 percent of businesses in information technology expect to keep staffing levels the same for 2013, up from 54 percent in 2012. There has also been a reduction in the number of the sector’s businesses intending to employ new staff, down from 32 percent in 2012 to 21 percent in 2013.

Over half of information technology firms do not expect their business to grow during 2013, with 58 percent now expecting their turnover to remain the same compared to 43 percent in 2012. More positively, the number of firms expecting turnover to decline over the next six months has fallen from 22 percent in 2012 to 5 percent in 2013, demonstrating that most businesses are stable. Adding to this is the news that 37 percent of information technology businesses expect turnover to increase in the next six months, up from 35 percent in 2012.

Elliot Miller, general manager UK National, QBE European Operations, comments: “Almost all information technology firms expect it to be a long haul before the economy fully recovers. While the hope of a quick recovery has certainly petered out over the time QBE has been conducting this research, it is encouraging to now see a level of stability from businesses in this sector. Most of their cost cutting measures have already been implemented, leaving management to focus on growing turnover. While the prospects for the economy may look weak, information technology firms remain resilient.”

TECH GIANT BANS HOME WORKING The news that the chief executive of Yahoo!, Marissa Mayer, has banned staff from working from home will probably not be too popular with some in the organisation. Indeed she has been accused of taking the company back to the 1980s. Richard Branson, who was drawn into the debate, commented that Ms Mayer’s pronouncement was ‘perplexing’ and ‘backwards’ in today’s mobile work environment. Mr Branson, who has never worked from an office in his entire career, commented that this is a backwards step in an age when remote working is easier and more effective than ever. He believes that if you provide the right technology to keep in touch, maintain regular communication and get the right balance between remote and office working, people will be motivated to work responsibly, quickly and with high quality. “I couldn’t agree more,” says David Sturges, chief operating officer of WorkPlaceLive. “Technologies, such as cloud computing and Skype, have allowed businesses over the last 10 years to be much more flexible about where their employees are based. A solution such as hosted desktops allows people to work seamlessly from any device with an internet connection from any location. “This technology is great for forward thinking businesses that are happy for their workforce to work flexibly, but also businesses where employees are on the road a lot, perhaps seeing clients or suppliers, home and abroad,” says Sturges. “Whilst I can understand the point made by the head of human resources at Yahoo!, Jackie Rees, in the memo just sent to all their staff - “that is why it is critical that we are all present in our offices. Some of the best decisions and insights come from hallway and cafeteria discussions, meeting new people, and impromptu team meetings” - to have an out and out ban seems ludicrous.” A YouGov survey last year found that 52 percent of British adults say they would like to work from home if they had the necessary IT resources to do so, and 58 percent of British office employees believe they can be as productive (36 percent) or more productive (23 percent) when working from home. However it also found that 51 percent of all employed British office workers say that their employer doesn’t allow them to work from home.

TEST LAB FACILITY FOR THE NORTH WEST A growing demand in the marketplace for software testing services to return to the UK and be delivered onshore is what ROQ IT says led to the expansion of its facilities in the North West of England. With space to seat large numbers of testing staff, and state-of-the-art secure servers and communications facilities, the consultancy says it is better placed to meet the growing needs of an expanding client base. The extended Test Lab increases the company’s capacity to assist on a broad range of projects, which already include the global implementation of a service management solution, the implementation of enterprise-wide Oracle platforms and the implementation of eCommerce platforms, incorporating a shift to an Agile delivery model. Stephen Johnson, founding director of ROQ IT, said of the move: “We’re thrilled with the new Test Lab in terms of location and facilities. It marks a huge milestone in the development of ROQ IT as we continue to grow. We’re better able to service our global clients on large scale projects and look forward to acquiring more.” This was reiterated by Mark Bargh, another founding director of the company, who said: “From a technical point of view, we couldn’t ask for more from the infrastructure and comms systems. We recently won some highly sensitive and technologically demanding projects which we will now be able to deliver with ease.”



MAKING THE CASE FOR TESTING Understanding the tester’s role in the cost of poor quality is crucial according to Karen Thomas, head of Test Practice at Barclaycard Professional Services & Development. Is it time to convince project managers about the real value of testing?






There have been many articles written about the cost of poor quality, but do we really appreciate what the tester’s role can be in ensuring that live issues do not happen or are greatly reduced? As a seasoned tester, the lack of appreciation for testing in the project lifecycle continually astounds me. A good project manager needs to understand that testing is pivotal in delivering the quality element of their project.

THE BLACK ART OF TESTING Is it their fault or ours that project managers see testing as a black art? It is the tester’s role to guide them through, or manage, the Quality Assurance phase of their project. To do this we need to make them understand what our role is and how we can assist in providing a better quality product. When you look at any project management methodology, nothing suggests that quality should ever be compromised, yet compromising it is becoming an everyday occurrence. It is common practice to work back from a delivery date, and when other phases of the project overrun, the testing window shrinks. The Time-Cost-Quality triangle underpins project management methodology, but is the consequence of deviating from it fully understood?

DEFECT LEAKAGE FROM PHASE TO PHASE – DEFECTS NEED TO BE FIXED AS EARLY AS POSSIBLE: Establishing controls to monitor exit/entry criteria at each stage of testing improves quality. Defects discovered and fixed early in the lifecycle cost less to fix. Component Testing, System Testing and Acceptance Testing should all have clearly defined entry/exit criteria for the next stage.

Minimal defects should be carried forward to the next stage. The test managers should hold a checkpoint meeting to discuss the approval to move to the next stage of testing. This minimises the number of defects discovered in later phases, decreases the cost of fixing them and reduces the testing effort. Acceptance Testing becomes just that: an affirmation of the change.
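A checkpoint of this kind lends itself to a simple, automatable gate. The sketch below is illustrative only: the criteria names and thresholds are hypothetical examples of the sort of entry/exit conditions described above, not any particular organisation’s standards.

```python
# Hypothetical phase-exit gate: the metric names and thresholds are
# illustrative assumptions, not real organisational criteria.

def exit_criteria_met(metrics: dict) -> tuple[bool, list[str]]:
    """Return (passed, reasons) for a test-phase exit checkpoint."""
    failures = []
    if metrics["open_severity1_defects"] > 0:
        failures.append("severity-1 defects still open")
    if metrics["tests_passed_pct"] < 98.0:
        failures.append("pass rate below 98%")
    if metrics["requirements_covered_pct"] < 100.0:
        failures.append("requirements coverage incomplete")
    return (not failures, failures)

# A checkpoint meeting could review exactly this output before
# approving the move to the next stage:
ok, reasons = exit_criteria_met({
    "open_severity1_defects": 0,
    "tests_passed_pct": 99.2,
    "requirements_covered_pct": 100.0,
})
```

The value is less in the code than in the discipline: making the entry/exit criteria explicit enough that a checkpoint meeting can answer pass or fail without debate.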


According to research, the cost of poor quality can be 15-20 percent of the total cost of the programme or project. That is an astounding figure and tough to comprehend. It is made up not only of the cost to fix the issue, which can be up to 100 times the cost of fixing it in the test phase, but also of things such as reputational risk, litigation or reduced business. The following five points need to be addressed to keep quality firmly under control:

REDUCING THE TESTING WINDOW: Reducing testing means reducing testing scope. This will mean that certain areas of the solution will not be tested, and that introduces a level of risk. This risk will of course be articulated as probability versus impact, but is this ever defined in the total cost?

POORLY DEFINED REQUIREMENTS: Poorly defined requirements cause ambiguity and can lead to a solution being built which is far from meeting the business objective. Providing clear and concise requirements in a standard format, and undertaking reviews or workshops with key personnel in attendance, such as testers, system architects, developers and business users, increases quality and leads to consensus and business acceptance.

NO BUSINESS INVOLVEMENT – SOLUTION NOT FIT FOR PURPOSE: Involving the business in the project increases quality; they are able to see the solution being delivered early in the lifecycle. A good mechanism for this is prototyping, or involvement in the business testing preparation. They are more likely to raise issues and misinterpretations as the project progresses and not at the end. Being involved allows them to accept the change and agree that it matches their requirements. It reduces the number of defects found in business testing and reduces cost. Defects found in Unit Testing cost far less to fix than those found in the business testing stage.

LACK OF UNDERSTANDING – TEAM ETHOS: Lack of understanding of the solution under test causes poor quality. Our drive to cut costs to a minimum has made us reliant on outsourcing to provide those cost savings. This does not always save cost, as the dilution of knowledge or distance from the work can result in the need for additional resources and a longer duration of testing.

The reason that Agile works is that the team has one goal and a ‘one team’ concept: everybody delivering what the customer needs. What we need to do is promote knowledge sharing and create that ‘one team’ mentality across teams and across locations to achieve a better collaborative approach. We need to build trust between the development and testing teams, be honest and transparent about progress, and ask for help if needed. Be a team.
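The cost-of-quality arithmetic quoted earlier, a live defect costing up to 100 times a test-phase fix, can be made concrete with a small sketch. The per-phase multipliers and the base cost below are illustrative assumptions chosen only to show the shape of the escalation, not figures from any study.

```python
# Illustrative defect-cost escalation model. The multipliers and the
# base cost are assumptions for demonstration; the article's claim is
# only that a live defect can cost up to 100x a test-phase fix.

PHASE_COST_MULTIPLIER = {
    "requirements": 1,
    "component_test": 5,
    "system_test": 10,
    "acceptance_test": 20,
    "live": 100,
}

def fix_cost(defects: int, phase: str, base_cost: float = 100.0) -> float:
    """Estimated cost of fixing `defects` discovered in `phase`,
    where base_cost is the notional cost of one fix at the
    requirements stage."""
    return defects * base_cost * PHASE_COST_MULTIPLIER[phase]

# Ten defects caught at component test vs. the same ten leaking live:
early = fix_cost(10, "component_test")  # 10 * 100 * 5   = 5,000
late = fix_cost(10, "live")             # 10 * 100 * 100 = 100,000
```

Whatever the real multipliers are for a given organisation, the gap between the two figures is the business case for the entry/exit controls and early business involvement described above.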

LESSONS FROM AGILE If you look at successful software delivery teams in the bigger companies, they have nailed all these issues and testers have very high profiles; after all, they do protect the integrity of the product for the company. At Google, they’re very big on highlighting an individual’s strengths and using them to make teams and products better. Google test engineers are product experts, flexible, clear communicators and good coordinators; they have impact. If you look at the V Model, which promotes testing involvement early in the software development lifecycle (SDLC), and then look at the methodologies in which we are delivering software, are there not more lessons we can learn from the golden child that is Agile? I am not saying we could apply all of them to non-Agile projects, but some of them could help us get a better quality product.



PRINCIPLES BEHIND THE AGILE MANIFESTO We follow these principles:

• Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
• Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
• Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
• Business people and developers must work together daily throughout the project.
• Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
• The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
• Working software is the primary measure of progress.
• Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
• Continuous attention to technical excellence and good design enhances agility.
• Simplicity--the art of maximizing the amount of work not done--is essential.
• The best architectures, requirements, and designs emerge from self-organizing teams.
• At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.

The ones that I think should be the norm for all project teams are:

• Always considering the customer’s satisfaction;
• Collaboration within project teams with the business;
• Create a supportive environment;
• Trust and empowerment;
• Face to face communication;
• Working software equals measured progress;
• Technical excellence and good design;
• Be simplistic – clear and concise requirements.

GETTING RADICAL I could say we need to get radical here, but all it needs is for us to go back to basics and use the Agile principles as a building block. There is no doubt that a tester’s key contributions should be:

1. Requirements workshops;
2. Design workshops;
3. Reviewing prototypes or early sight of the design;
4. Controlled entry/exit criteria through the testing phases;
5. Involvement in the Go/No Go recommendation;
6. Change Request impact assessment;
7. Business stakeholder management;
8. Test execution.


Testing should never be a forgotten phase at the end of a project. All we have to do is convince the project managers that we have a key role to perform throughout the project lifecycle. And as this is proven to save them money it shouldn’t be a major issue. Right?



NEWS EXTRA - TESTA Headline Sponsor




The European Software Testing Awards is now open for entries and the first batch of awards judges has been announced...


The European Software Testing Awards (TESTA), the independent awards programme designed to celebrate and promote excellence, best practice and innovation in the software testing and QA community, launched in the last issue of TEST, is now receiving nominations and is open for entries in all its categories. The awards, which will be held at the Marriott Grosvenor Square Hotel in central London on 20th November 2013, were launched to commend the individuals, teams and businesses that are actively involved in the pursuit of technological perfection in the software testing sphere.

CELEBRATING THE BEST IN TESTING TESTA aims to celebrate and spotlight those that have excelled within the testing community over the last 12 months. Entering TESTA will raise entrants’ profiles in the industry and among some of the most influential organisations in Europe; it will highlight, acknowledge and reward the efforts of individuals, teams, projects and businesses. In addition to this there will be extensive coverage of the awards in TEST magazine and on our website and email newsletters, as well as a PR and marketing campaign to promote the award winners.

Plus, a copy of the European Software Testing Benchmark Report - an annual report that will help you assess the strength of your business for the coming 12 months and provide industry-specific knowledge about current trends, thinking, methodologies and practices - will be provided to all finalists. As the report surveys over 1,000 software-testing professionals at all levels, from businesses large and small, across various vertical sectors and countries, it enables all TESTA entrants to see how they measure up against others in the European software testing community.

THE JUDGING PANEL Obviously judging the merits of all entrants is a significant responsibility. TEST is putting together a panel of top industry talent from a range of software testing backgrounds and industries to assess the entries. We hope to profile all the judges in the pages of TEST, but the following are the first to come on board:

SIMON JONES TEST MANAGER, AMEC Simon Jones is a test manager currently working for Amec, one of the world’s leading engineering, project management and consultancy companies. In this role he manages all functional, regression and acceptance test activities in relation to the company’s Microsoft Dynamics AX implementation and its ongoing ‘business as usual’ support. With over 10 years of experience working for major organisations in the banking, consumer goods, financial services, retail and telecommunications sectors he has built up a strong understanding of testing best practices and the implementation of the HP test management tool set. Passionate about creating a ‘learning culture’ within his teams, Simon constantly monitors QA forums and reviews testing publications to ensure that the latest tools and techniques can be utilised wherever possible to improve system quality.


CHRIS LIVESEY WORLDWIDE VP OF SALES FOR BORLAND Chris has over 20 years of experience in the software industry - as a practitioner and consultant; in sales and business development; and in senior management. He worked for many years as a developer, test manager and project manager working with enduser clients in financial services, telecoms and manufacturing. This has been followed by a successful career with several software companies, in technical and sales roles, and more recently in executive management and leadership. His key strengths include software development & test processes; programme management & governance; and change management for organisational, functional & operational improvement. Chris has a successful track record in several ‘turnaround’ ventures & the management of mergers & acquisitions.


NEWS EXTRA - TESTA KAREN THOMAS HEAD OF TEST PRACTICE AT BARCLAYCARD PROFESSIONAL SERVICES & DEVELOPMENT Karen Thomas, who is this month’s cover author, is the head of UAT at Barclaycard. She came to testing from the business and became a UAT test manager in 1993 with no previous testing experience. She says that it was sink or swim and luckily she swam and found that she had an aptitude for testing. Since becoming a tester Thomas has been an advocate of business testing and she defends its worth with a passion. She has worked in life & pensions, retail banking and investment banking for various blue chip companies, finally coming to work at Barclaycard where she has been for the last three years. Her current position is the head of test practice. What Karen Thomas says she loves most about her job is the people and enthusiasm that they have for protecting the business interest in an IT world. She has a wonderfully supportive husband and two amazing children. Her hobbies are horse-riding, travel and fine wine – but, she says, not all at the same time. “I am really excited about this opportunity and look forward to working with some like minded professionals,” she says.

PETER HYAMS DEUTSCHE BANK TECHNICAL ASSURANCE Peter Hyams currently works at Deutsche Bank in the Global Technology organisation as part of a small, relatively senior team called Technical Assurance. The team’s role is to provide assurance to senior management that the right process and technology decisions are being made to best ensure integrity of live production and the quality and efficiency of delivery into it. The team has an active interest in pretty much everything from emerging technologies to application release management. Hyams’s areas of expertise and interest are chiefly release delivery assurance, quality assurance and business acceptance; and optimisation of software development and testing. Prior to Deutsche Bank he held a range of positions including: QA director for global rates & currencies IT at Bank of America Merrill Lynch; client-facing senior manager within Ernst & Young’s strategic testing services group; QA lead for credit derivatives IT at Barclays Capital; and Delivery Assurance lead for systems development at euronext.liffe; but he says he ‘grew up’ as a consultant within Accenture. When not hard at work he used to enjoy a little gliding, windsurfing and the cinema, but now that he is married with a 13-year-old daughter and sons of 10 and five, his more recent hobbies include snap, Star Wars and providing family taxi services.


LISA DONOVAN QA MANAGER, PROXAMA Lisa Donovan’s career spans 13 years, beginning in helpdesk support and gradually progressing into testing, test management and product management, building and growing the QA function across a range of organisations. Lisa has an enthusiasm for her field of work that she constantly inspires in others. Always motivated, she still finds the time to form relationships and connections between team members while continually developing in her field. In her free time she enjoys competitive open water swimming and netball. With her ambition, motivation and experience, Lisa earned her role as programme and QA manager at Proxama, a next generation near field communications (NFC) mobile commerce company that connects the physical and digital worlds. Leading the development and test teams and Proxama’s programme of work, she is responsible for the quality delivery of leading edge products. “Mobile and NFC technology is complex, challenging and exciting,” says Donovan. “With all the variants to consider with mobile testing, plus the capabilities that NFC across marketing and payment solutions has to offer, it requires every aspect of my 13 years’ experience in quality software delivery.” Donovan approaches each new business challenge with her inherent flair for innovation and creative problem-solving. Under her QA leadership Proxama has grown into a leading-edge company focusing on producing quality products by introducing a testing culture and best practices across the test and development teams.

PROFESSOR MIKE HOLCOMBE UNIVERSITY OF SHEFFIELD
Regular TEST contributor Professor Mike Holcombe's research interests include software testing, agile software development and empirical software engineering, as well as simulation, computational biology and computational economics. He was previously head of the Department of Computer Science and dean of the Faculty of Engineering at the University of Sheffield. Holcombe has authored many research publications and books, including Correct Systems (published by Springer) and Running an Agile Software Development Project (published by Wiley). He is the founder and director of epiGenesys, a commercial software company specialising in medical informatics using test-driven agile development, and also founder of Genesys Solutions, an internal university software company run by senior computing students as part of their course. The students are mentored by the professionals in epiGenesys and IBM.



WHEN DISASTER STRIKES: THE REAL COST OF NOT TESTING VIRTUALISED AND CLOUD ENVIRONMENTS
Virtualisation and cloud technologies are bringing greater flexibility, agility and capability to users - but very little has been done to test data recovery plans. Robert Winter, chief engineer at Kroll Ontrack, warns that this lack of preparation can have serious consequences.





Over the past few months we have conducted research with VMware to gauge perceptions about virtualisation and data recovery. Whilst the adoption of virtualisation is well documented and continuing, our findings reveal that most organisations fail to update, test and implement data recovery plans. This is a serious oversight when one considers the surge of information being transferred into virtual environments - and the impact that losing it can have on the reputation and financial performance of a company.

Understandably, data loss isn't the first thing that comes to mind when businesses adopt a virtualised environment. Organisations are often too caught up with the benefits that it brings - namely the cost savings associated with maximising the use of computing resources and streamlining processes. However, businesses that focus too much on the cost-saving benefits of virtualisation are at risk of not taking the necessary measures to protect their data, and end up at higher risk of experiencing major data losses. Users will make the cost savings attendant with virtualisation, but only if the implementation is solid and the data is secure.

In a virtualised environment the most important component is the data, and this is the only thing that does not get virtualised. Users can reconstruct and recreate any other component in a virtual environment within seconds and with just a few clicks, but this cannot be done with the data that is created in one's virtual environment. Therefore, while businesses can make savings everywhere else in a virtual environment, they should be spending more money protecting the data when they move to a virtual environment.

Of course, this seems to go against what virtualisation is widely seen to be about, which is saving money. But up-front investment to protect data is a fundamental requirement to avoid even costlier data losses down the line.

INCREASED CHANCE OF LOSING DATA
According to a survey completed by 338 IT professionals at a recent VMworld conference, 37 percent of respondents believe that virtualisation significantly decreases the chances of data loss. In reality, minimising data loss is only possible if data backups are performed correctly and tested carefully. Otherwise, the impact of data loss is greater than before, since a data disaster in a virtualised world can bring down many servers that share the same storage. Surprisingly, 20 percent of respondents believe that virtualisation doesn't affect the chance of data loss at all. Are they not responsible for backups and data recovery? Perhaps they do not understand the complexity of virtualised systems? Data loss is always an issue, regardless of what IT infrastructure is used.

Another important finding of the survey is respondents' answers to the question of what to do to recover lost data. Thirty-six percent of respondents said that if virtualised data is lost they would try to rebuild the data themselves instead of calling a data recovery company; 22 percent of respondents said they would take this decision. Doing it yourself often makes data recovery much harder - and in some cases makes it impossible to retrieve anything. A lot of the complexity is hidden from users and administrators when systems are virtualised, and without a solid data recovery programme it is very easy to lose data. There are too many risks involved in rebuilding data, and using a reputable recovery company is the best option to avoid problems.

LACK OF DATA RECOVERY PLANS
In another survey conducted by Kroll Ontrack and VMware, respondents were asked whether they tested data recovery plans regularly to ensure proper protocols are in place to protect data on virtualisation and the cloud. The survey, carried out at VMware Forums globally among 367 IT professionals, reveals that while 62 percent of respondents admitted to leveraging the cloud or virtualisation, only 33 percent of these organisations tested data recovery. This is an important finding - and a remarkable one considering that 49 percent of organisations also reported experiencing some type of data loss in the last year. Twenty-six percent of respondents reported a data loss from a virtual environment, three percent reported a loss from the cloud, and sixteen percent experienced data loss from both a virtual environment and the cloud.



MINIMISING DATA LOSS
Our research shows how quickly cloud and virtualisation are gaining ground among organisations. However, history has taught us that data loss can occur in any environment, regardless of the specific technology. The way to reduce data loss risk, and to recover successfully from a loss, is to ask the right questions prior to adopting a new storage medium and to amend your policies and procedures accordingly. Important questions to consider before adopting cloud or virtualisation include:

• Are backup systems and protocols in place? Do these systems and protocols meet your own in-house backup standards? And, perhaps more importantly, do restore-from-backup times meet the organisation's requirements?

• Does your cloud vendor have a data recovery provider identified in its business continuity/disaster recovery plan?

• What are the service level agreements with regard to data recovery, liability for loss, remediation and business outcomes?

• Can you share data between cloud services? If you terminate a cloud relationship, can you get your data back? If so, what format will it be in? How can you be sure all other copies are destroyed?


In the survey conducted at VMware Forum, respondents were asked about their cloud provider's ability to properly handle data loss incidents. Twenty-nine percent cited a lack of confidence, compared to 55 percent of respondents in 2011. Only 17 percent of respondents confirmed that they test their data recovery plan regularly to check technical and personnel readiness against cloud or virtual data loss. Around 13 percent responded that they do not have a data recovery plan at all. Virtualisation is the engine of cloud technology: if virtualisation fails, the cloud fails. Whether it is human error or an operating failure, it is important to know who to turn to. Yet only 14 percent initially turn to a data recovery provider.

CRITICAL HOUR OF DATA RECOVERY
Many businesses underestimate the challenges of recovering data when things go wrong. They also don't realise that there's a critical time period following a data disaster when fast intervention by an expert will increase the chances of a successful recovery. When disaster strikes, however, people often panic and make the mistake of either trying to solve the problem themselves or handing the data recovery task to a poorly qualified data recovery company. These approaches often lead to permanent loss and big consequences for businesses. On average, the success rate of recovering data from failed HDDs is many times higher if the media comes to experienced engineers without having been worked on following a data loss. This is partly because of the inadvertent damage caused to the media when a less experienced party tries to recover the data. The same is true for data stored in cloud and virtualised environments. In fact, data loss incidents continue to grow in size and complexity as more organisations move into virtual environments. There has been a 140 percent increase in virtual data loss compared with the year before, and this number will undoubtedly rise as more companies embrace new trends such as desktop virtualisation and BYOD. The only way to minimise these risks is to know which problems to watch out for and to seek professional help during the golden hour.

MINIMISING FINANCIAL LOSSES AND DAMAGE TO COMPANY REPUTATIONS
Even the smallest disruption to daily activity can have major implications for businesses and individuals. The longer data is unavailable, the greater the financial losses and the damage to corporate reputations. Organisations should establish a formal incident management plan which includes a data backup strategy; this will help to reduce the risk of losing data. A disaster recovery assessment should also be carried out, identifying a reputable data recovery service provider that staff can rely on when things go wrong.




KEEPING IT SIMPLE WITH A DATA-DRIVEN TEST FRAMEWORK California-based test automation expert Mark Lehky explains the benefits of a simple data-driven test framework approach for verifying simple functionality.


For verifying simple functionality, a quick data-driven test (DDT) approach is usually my first choice. DDT is popular due to its simplicity and ease of initial setup. In its purest form you have a set of input and expected output values, gathered in a table such as a spreadsheet. The inputs are sent to the application under test (AUT) and the outputs from the AUT are compared against the expected outputs in the table. An overly simple example would be a calculator. For the addition operation our data table might look something like this:

ADDEND 1    ADDEND 2    SUM
0           0           0
1           1           2
1           1000000     1000001

It is highly desirable to include boundary values in our data set. So if you know the inputs are, for example, integers whose largest allowed value is 2^32, setting at least one of the addends to this value should still produce a non-error result.

SOAPUI AS DDT FRAMEWORK
SoapUI is a tool specifically aimed at testing APIs, such as web services. Out of the box the Pro version provides easy access to read data from several different sources, including MS Excel or even an external database. In the following example I will demonstrate how to read data in SoapUI Pro from an XML-formatted external file.
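Stripped of any particular tool, the DDT pattern described above is just a table plus a loop. Here is a minimal Python sketch; the add function stands in for the application under test, and the data rows, including the boundary-style case, are illustrative:

```python
# Minimal data-driven test harness: one table of rows, one loop that
# feeds each row to the application under test (AUT) and compares the
# actual output against the expected output.

def add(a, b):
    """Stand-in for the AUT - here, the calculator's addition operation."""
    return a + b

# Data table: (addend 1, addend 2, expected sum), with a boundary value.
TEST_DATA = [
    (0, 0, 0),
    (1, 1, 2),
    (1, 1000000, 1000001),
    (2**32, 1, 2**32 + 1),  # boundary-style case: a largest-allowed input
]

def run_ddt(fn, table):
    """Return ((inputs), expected, actual) tuples for every failing row."""
    failures = []
    for a, b, expected in table:
        actual = fn(a, b)
        if actual != expected:
            failures.append(((a, b), expected, actual))
    return failures

print(run_ddt(add, TEST_DATA))  # an empty list means every row passed
```

Adding a new case is a one-line change to the table, which is exactly the low-overhead property that makes DDT attractive.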


A PRACTICAL EXAMPLE
One of our vendors that processes credit card payments for us has a way for us, in test, to elicit various decline responses based on specific inputs. I was given three files with filenames like Visa_AVS.xml, MasterCard_AVS.xml and AmEx_AVS.xml, and they each contain single-line entries of data in the form:

<test accountNumber="4112344112344113" amount="1.00" street="1 Main Street" city="Boston" zip="031010001" expectedAvsResp="B" />

Specifically, the account number and the amount should produce the expected AVS response; the rest of the inputs are optional.

STEP 1: READ IN THE RAW FILES
The XML DataSource step in SoapUI requires that the data already be part of the current test run - it must have been generated as output of some previous step. To get the data into SoapUI we use the DataSource - Directory step. For convenience my entire project is a Composite Project, so every TestSuite is a separate directory. I am going to keep all my data sources in the same directory as the TestSuite, to make it easier to upload to SVN. Set the Directory = ${projectDir}/AA-PPS-soapuiproject/experimental-Test-Responses-TestSuite. Make sure that you make it relative to where the project is; keep in mind that if the project is checked out by another user, you have no control over where on their machine they will keep it. Set the Filename Filter = *_AVS.xml; this should be self-explanatory. In this step there is no way to format the data, so the entire contents of each file will be read into a single Property. It does not matter what you call it; I used "allData".
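Outside SoapUI, the same two steps - read the raw files, then extract the columns - can be sketched in Python. This is an illustration of the idea, not the SoapUI mechanism itself: it assumes each file holds self-contained, single-line <test .../> entries as described above, and the helper name and trimmed field list are my own.

```python
# Sketch of Steps 1 and 2 outside SoapUI: read every *_AVS.xml file in a
# directory, parse each single-line <test .../> entry, and collect the
# attributes as one row per entry.
import glob
import xml.etree.ElementTree as ET

def load_avs_rows(directory):
    rows = []
    for path in sorted(glob.glob(f"{directory}/*_AVS.xml")):  # Step 1: find the files
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line.startswith("<test"):
                    continue  # skip blank lines or anything that is not an entry
                elem = ET.fromstring(line)  # Step 2: parse one entry
                rows.append({
                    "accountNumber": elem.get("accountNumber"),
                    "amount": elem.get("amount"),
                    "zip": elem.get("zip"),
                    "expectedAvsResp": elem.get("expectedAvsResp"),
                })
    return rows
```

Each dictionary in the result plays the role of one row in the SoapUI Data Log.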


I know I have three files, so when I hit the play button, I should get three entries in the Data Log.

STEP 2: FORMAT AND EXTRACT THE DATA
Now that we have everything read into SoapUI, we can format and extract the data. This is also done with a DataSource step, this time with DataSource = XML.

Note that the documentation for Source Step says it "could be another DataSource". Set Source Step = DataSource - file (from Step 1). The Source Property = allData, which is the only thing available from Step 1.

You need several XPath expressions: one to select each row of data, and one for each of the data columns. The row XPath is quite simple: Row XPath = //test.

Next, start creating the Properties (columns) of your data table. I decided to name each Property after the data attributes, but you can name them anything you like. As soon as you create a new Property, the form will automatically insert a new Column XPath. The form assumes that your source data is formatted something like:

<test>
  <accountNumber>4112344112344113</accountNumber>
  <amount>1.00</amount>
  <street>1 Main Street</street>
  ...
</test>

And so it inserts an XPath like accountNumber/text(). In our case this is not correct, so I needed to edit it to select the attribute: @accountNumber. One of my source files, instead of the attribute zip, has an attribute zip5, so my XPath in the screenshot below ended up being: @zip | @zip5.

Once you hit the play button, you should see correct data in the Data Log. If you did anything wrong there will probably not be an error; you will just not see correct (or any) data in the Data Log. Check your XPath.

STEP 3: YOUR TEST STEPS
At this point you can use any number of test steps you like. Any of the data above can be accessed with standard SoapUI Property expansion, for example ${DataSource XML#accountNumber}, and can be used or manipulated just like any other data, either in an Input or in an Assertion. In my case I had only one SOAP step.

STEP 4: ADD THE LOOPS
Finally you need to add two DataSourceLoop steps. The first loop will iterate over all the data within one file: its DataSource Step should be whatever you named your Step 2 above, and its Target Step should be the first thing in Step 3 above. The second loop iterates over all the files: its DataSource Step should be whatever you named your Step 1 above, and its Target Step should be Step 2 from above.

KEEPING IT SIMPLE
The data-driven test approach, due to its low overhead, is an excellent choice of approach for simple functional tests.
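The overall structure the article builds up - an outer loop over files, an inner loop over the rows in each file, and a test step driven by each row - boils down to two nested loops. A Python sketch of that shape (the function names are illustrative, not SoapUI's API):

```python
# The two DataSourceLoop steps as plain nested loops: the outer loop
# plays the role of the file-level loop (Step 1), the inner loop the
# row-level loop (Step 2), and run_test_step stands in for Step 3.

def run_suite(files, parse_rows, run_test_step):
    results = []
    for path in files:                # outer loop: one pass per data file
        for row in parse_rows(path):  # inner loop: one pass per data row
            results.append(run_test_step(row))
    return results
```

With three files of a dozen rows each, the test step runs 36 times - which is why pointing each loop at the right target step in SoapUI matters.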




THE MOBILE MULTIPLIER: TIME TO RETHINK THE BACK-END FOR MOBILE As apps become more popular, offering use-anywhere convenience to the customer, Chris Livesey, VP of Borland Applications Management and Quality at Micro Focus, says mobile access to websites and applications is changing the expectations of back-end software.


The infrastructure that supports mobile applications (apps) is in a constant state of catch-up. Designers have made great strides in creating ever more imaginative and easy-to-use apps. However, as apps continue to improve and become more 'usable', they have also become more popular, widespread and frequently used. This creates challenges for the back-office systems that many apps interface with, as they need to deal with increasingly frequent traffic and often larger data volumes.

The average bank customer checks his or her accounts three or four times a month using a desktop application. A mobile customer, making use of spare time while waiting for a train, sitting at a café or even waiting to pick up a child from football practice, will check his or her bank account 15 to 20 times per month. This quadruples the amount of traffic to the site, placing stress on the infrastructure. Mobile access to websites and applications is therefore changing the expectations of back-end software. The obvious problem is that if traffic volumes exceed the capabilities of the established infrastructure, then performance will drop and the app won't deliver the promised user experience. Even worse, the performance of the back-end systems will also drop, impacting all parts of the business and forcing (in this example) the bank to add more resources, and incur the resulting costs, to maintain acceptable performance. So just how should software designers reorient their development efforts to make life easier for both users and infrastructure?

THE MOBILE MULTIPLIER
This 'mobile multiplier' is driving a change in the traditional three-tier infrastructure that has defined application architectures almost since the inception of the Web. The three-tier infrastructure - presentation, application and data - was great. You would initiate a request and get a response. Well-designed responses branched into two areas: realtime acquisition of data and local caching of items repeatedly viewed. This architecture initially worked well, but not for mobile apps. When mobile apps began to appear, calling data from core back-office systems running on mainframe computers, designers were able to simply wrap existing mainframe applications as Web services. However, taking advantage of existing applications by converting them to wrapped Web services can only go so far before they are dragged down by the 'mobile multiplier'. For example, 1,000 users of online applications might hit your website on a daily basis and produce a volume of data transfer 'X'. If those users now have multiple additional devices with which they can access your system, they'll start interacting with your back-end a little more often, especially if you're getting into customer-intimacy applications where you want to push data with increasing frequency. The volume of data and the frequency of access to data increase as the additional devices make themselves more useful. Soon the volume hits 2X, 3X, 4X or more.

GREAT EXPECTATIONS
Users have very high expectations of the mobile apps they use, and one of the most critical is speed (both to load and to run). To deliver this speed, the app has to be given a high priority by the back-end systems. Yet this will impact mainframe performance, and therefore the performance of every user on the system, including mobile app users - and is thus self-defeating.

The problem here is that the infrastructure is set up for the requester of that data to work through a very high-speed internal network. But now the majority of interactions may come from a vastly slower network - a 3G or 4G network - or at a mix of speeds. This creates many issues: the need for very quick responses for in-app purchases; the fact that security is still an issue with REST-based (Representational State Transfer) services; the management of requests that are slow but also of high priority; and many others. These drive the need to change how people think about implementing their back-office infrastructure, and the technology they're using to deliver it. Developers must therefore think about different architectures for delivering mobile access to core systems, and about the right performance testing, from the very inception of the project. Options to address these issues might include architectural solutions such as node.js, which creates smaller, localised servers, and Kaazing's WebSocket Gateway, which is designed to fire off rapid requests over and over again, almost like mini HTTP servers.

GETTING ROUND THE BACK-END ISSUES
To get round the back-end impact issues, we're currently seeing customers setting up dedicated infrastructure for mobile traffic. However, 'mega vendors' will eventually make the necessary adjustments in their technologies to better accommodate mobile apps as they interface with mainframe and back-office systems. We're also seeing companies like AppFog, which offers a Platform-as-a-Service, making a number of acquisitions for higher-speed, higher-scale node.js deployments. Another key technology is HTML5, which enables developers to give customers the look and feel that they've become accustomed to with their consumer applications, while ensuring that performance matches expectations, without hitting core performance. It's not going to solve all issues or be suitable for all systems, and enterprise organisations need to make sure their transaction gateways are strong enough to satisfy such things as in-app purchases - for example, when players buy 100 coins in the middle of a game, you need to verify the coins are delivered without hanging up the game. Further, these organisations must be able to test their applications across every platform - not only functional and load testing, but also network simulation to ensure the application will perform optimally regardless of where the request is coming from. We're seeing a lot of customers rethink not only how they're doing their infrastructure planning, but also their pre-production load testing. Traditionally, you put out a Web server, send a sustained user load and see what happens. But customers are asking us to simulate network-loading patterns. They want 100 users to come in at 9.00 am on a 4G network, followed by everyone on a T1, then after lunch have users coming in on a BlackBerry, to see what's





happening and why. Customers are still in learning mode to see how their infrastructure is doing on the back-end to accommodate those traffic patterns.

ADDRESSING THE ISSUES
In summary, software designers need to think about a range of issues when developing mobile apps for data-driven back-office systems:

LOOK AND FEEL: Users now expect a certain type of interface, which includes buttons, swipes and gestures. They want apps that get right to the functionality as soon as possible, and apps that give them accurate, up-to-date and useful information, simply and easily.

SPEED OF RESPONSE: The mobile apps built today for data-driven applications such as banking need to behave, run and load against a standard set by apps that don't have to refer to back-office systems or pull down large amounts of confidential data from a mainframe. They also have to minimise the impact they have on back-office systems' performance while doing this.

IMPACT AND CONSEQUENCES: The successful and widespread adoption of mobile apps has had a massive impact on back-office systems. Designers, architects and testers must think about not just how systems manage the performance of mobile apps, but how delivering good performance for mobile access might impact performance elsewhere in the system.

All of these design areas have a major impact on the data traffic and processing required within the back-office infrastructure. Careful planning, design, development and, above all, testing are therefore critical to the success of mobile app development for data-driven enterprise systems.

Don't just think you can throw a mobile app out there and hope that the existing infrastructure works. You're going to need to think it through from top to bottom.








Published By 31 Media Telephone: +44 (0) 870 863 6930 Facsimile: +44 (0) 870 085 8837 Email: Website:





Q: WHEN IS A HACKER AN ETHICAL HACKER? A: HE’S NOT! A hacker in a suit is akin to a wolf in sheep’s clothing – it’s whether you can trust him not to eat the flock that makes the difference. Research engineer Conrad Constantine and Dominique Karg, chief hacking officer at AlienVault wonder who to trust...


The subject of whether or not to hire an 'ethical hacker' has been debated since the '90s, albeit the subject was perhaps a little less misdirected back then. I'd argue that the 'ethical' hacker simply does not exist, so perhaps the time has come for a new question. If you find yourself on the wrong side of a locked door, you do not think to yourself 'I need an ethical locksmith' - unless you're a thief, in which case you probably have a whole host of other questions (but I digress). Instead, you look for a locksmith, pure and simple. You trust that the person who turns up to break your lock will do no more, and no less, than the job you've hired him for. Calling him ethical does not legitimise his practice of breaking in. So why is there a need to justify hiring a hacker by claiming he's 'ethical'? In my opinion, the job title is the problem.

ARGUMENT 1: A HACKER IS A HACKER IS A HACKER
The term 'hacker' has two connotations:

• someone who has been convicted of a computer-related criminal activity, or

• someone who thinks a certain way about technology.

If you consider it a term that refers to criminal intentions then you're basically saying 'ethical criminal'. How is it possible to argue that that makes sense when it's obviously a contradiction?


On the other hand, if you are using it to describe a person who thinks about technology in a certain way – then why does it need the word ‘ethical’ in front of it?



This takes me back to my ethical locksmith argument. Yes, hackers have had bad press for many years, but calling the practice 'ethical' will not change that. The job of the hacker is to clandestinely look for ways to infiltrate systems; what is then done with that access is the differentiator. It's easy right now to pick on bankers, who are having a hard time, especially as many are being tarred as fraudsters and thieves. However, I don't see any of these professionals clamouring to repackage themselves as 'ethical' to distance themselves from their unsavoury peers.

ARGUMENT 3: LEGITIMATE VERSUS DISHONEST
Some hackers would argue that they're not criminals, but activists. Others would say that they're just rebellious in the way they think about technology and have a duty to


highlight an organisation's poor security. Does that make them unethical? My personal view is that we need people who are willing to stand up and challenge authority - and in so doing, does that then make them ethical? I don't see why it should. It just means that they can look at something - an application or a business process, for example - and see why something won't work, and are willing to explain why, or better still how it can be improved.


A case in point is the Fukushima nuclear disaster. A report into the incident stated that the disaster was completely preventable. It wasn't the earthquake, or the resulting tsunami, that was to blame but human error, or human oversight, spawned from a culture of unquestioning obedience. All it would have taken was for one person to stand up and state that the various technical processes employed to implement safety regulations could fail, rather than prevent an accident. And that's precisely the hacker's mindset: not to take things for granted, to question authority, and to challenge the regimented way of doing things by pushing back on the status quo. Ethical or unethical doesn't come into the equation.

ARGUMENT 4: HIRING A 'NON-CRIMINAL'
I would concede that for many of those convicted of hacking it could be argued that there are extenuating circumstances. For example, a few years ago it was almost impossible to get access to code to learn on your own, resulting in many resourceful technical people being convicted of 'hacking'. Today, the argument 'I had to hack so I could learn' would not be considered an adequate defence, as the availability of virtual infrastructure technologies - among other interesting tools - means there is so much more that can be set up in your own home to learn your craft. Additionally, Germany's 'hacking' law defines many security tools as illegal purely because of their design and capability. For that reason, you don't even have to be doing anything harmful with these tools to be found guilty of hacking. This ambiguity has resulted in the argument that not all hackers are criminals, and therefore the term 'ethical' started to be used. While I agree that not all hackers are criminals, I would also argue that the term 'ethical' is unnecessary. Ultimately it comes down to the fact that most organisations would not hire a criminal - so why do we need 'ethical' in front of 'hacker' to prove it?

ARGUMENT 5: CRIMINAL TURNED PROTECTOR
Moving on from the last argument - I wouldn't think it logical to refer to a hacker as an 'ethical hacker' because he or she has moved over from the dark side into the light. It just makes them a bad hacker. Kevin Mitnick isn't famous because of his skills - he's famous because he got caught!

And before we move on from talking about skills, I'd also like to clarify that I personally think 'ethical hacking certificates' aren't worth the paper they're printed on. The reason you want to employ a hacker is not because they know the 'rules' of hacking, can run them and produce reports. What makes a hacker desirable as an employee is the very fact that they don't play by the rules, with an 'anything that works' mentality, as it's this combination that will give them the skills to test your systems to the very limit.

A SPADE IS A SPADE
When people use the term 'ethical hacker', they mean someone who is good at breaking into things using creative techniques and methods, but without the criminal intention. However, my case is that the inclusion of the term 'ethical' does not legitimise the practice. It is still hacking - end of argument. I'm also not saying that you shouldn't hire a hacker; just don't make them out to be something that they're not. If they're a hacker, they're a hacker. Describing them as ethical does not necessarily make them ethical, or unethical for that matter. And for hackers: you have a talent, and should not have to hide it under a rock because some people practise the art for malicious or fraudulent reasons. If we're too embarrassed to openly admit that we need and want a hacker to test our systems, then let's give them a new name rather than try to legitimise the practice. Answers on a postcard please.






Before I start this article I would just like to clarify that I'm not advocating the hiring of computer criminals. If you are being held to ransom by someone claiming to have control of your infrastructure, and demanding payment to 'prevent further damage or exposure', then you need to contact the relevant authorities. However, if you want to prevent said criminals from hijacking your systems, then perhaps a 'hacker' is exactly the person you need for the job! If you needed a flat-head screwdriver to remove a screw, would you use a cross-head? Of course you wouldn't - it wouldn't work, for one thing. Similarly, if you needed to dig a hole, would you use a spoon? While you'd get the job done, the time wasted could be better invested elsewhere. It's only natural to use the tool that's been perfectly designed for the job, yet for some reason, when it comes to securing the corporate infrastructure, many are frightened by the idea of hiring a hacker. I believe they're missing out. In the other article I discussed the term 'ethical hacker' and, while I don't intend to regurgitate the theme here, it is worth reminding you that I believe you should call a spade a spade and a hacker a hacker - ethics is irrelevant. I also define a hacker as 'someone who thinks a certain way about technology'. For that reason, if you want to make sure your systems are secure, then the best way is to test their strength, and that is best done by someone 'who thinks a certain way about technology'. That said, not all hackers are the same, so here are the skills I believe a hacker should display:

OUT OF THE BOX
My hacker definition sums this up perfectly. Rather than looking at how something should work, a hacker will approach it from a different angle. He won’t try your ‘security doors’ to make sure they’re locked; instead he’ll push on the wall around them to see if the bricks hold up, check whether the windows have glass and whether the putty holds it in place.

‘NO’ ISN’T IN HIS VOCABULARY
Tenacity is another key skill a hacker must possess – someone who doesn’t take ‘no’ for an answer. Take a locked door – there are a number of ways of ‘opening’ it and a hacker will keep trying until he manages it. Of course the easiest way is to locate the key but, if one isn’t on hand, then can the lock be picked? Can it be drilled? What about cutting the lock out altogether? I think the phrase from a legendary film – ‘You’re only supposed to blow the bloody doors off’ – perfectly encapsulates a hacker’s enthusiasm to get the job done.

MORALS OF AN ALLEY CAT
Now, before everyone starts baying for my blood, I don’t for one minute advocate paying criminals for their services – unless they’re rehabilitated and you’re into second chances. However, a hacker needs to think and act like a criminal, or what’s the point? Criminals don’t play by the rules, and being afraid to push the boundaries is why a lot of companies end up experiencing breaches.

PORRIDGE FOR BREAKFAST
While I’ve said there’s no reason why a rehabilitated hacker shouldn’t be employed, it does raise a serious concern – primarily, why did they get caught? Professional hackers pride themselves on their skill at infiltrating systems undetected, and will certainly not want to leave an electronic ‘fingerprint’. A criminal conviction shouldn’t be seen as a ‘qualification’ but rather as testament that perhaps they’re not up to the job!

A BIG HEAD
An egotistical hacker isn’t necessarily a brilliant hacker – in fact, quite the reverse is often true. I’ve sat and listened to far too many people claiming responsibility for something that I knew they didn’t do – often because I was in fact responsible, but that’s for another time. There are a number of reasons why bragging is a bad trait in a hacker:

• They should be able to prove their ability rather than just talk about it;

• If they’re loose lipped they could inadvertently expose the organisation to ridicule;

• A hacker likes nothing better than ridiculing someone else’s inadequacy.

At the end of the day, someone who has the skill and tenacity to get the job done is the perfect fit for any organisation. Don’t let a ‘name’ come between you and the opportunity to secure the perfect asset for your business.



GOVERNMENTS MAN THE BARRICADES AS CYBER ATTACKS INTENSIFY
The need for careful scrutiny of cyber security has never been more obvious as cyber attacks across the globe intensify. Governments, security consultancies and experts around the world are launching initiatives and opening up new fronts in the war against the hackers. The UK government has launched the Cybersecurity Information Sharing Partnership (CISP) initiative – an ‘overdue’ step in the right direction. Matt Bailey reports.


It is perhaps inevitable given our ever-increasing reliance on technology, software, networks and the Internet that the levels of cyber crime would increase too.

Jonathan Evans, the head of the UK’s MI5 security service, has been quoted as saying that the scale of cyber attacks was astonishing in 2012, and you would have to be a very brave, or perhaps foolish, pundit to speculate that the situation will improve this year. High-profile attacks on UK broadcaster the BBC’s Twitter account, on banks and broadcasters in South Korea and on the anti-spam organisation Spamhaus provide a solid indication of exactly where the situation is as of the first quarter of 2013, so a new initiative to share information on cyber attacks between businesses and governments is an overdue step in the right direction.

The initiative, known as the Cyber Security Information Sharing Partnership (CISP), will include experts from government communications body GCHQ, MI5, the police and business with the aim to “better co-ordinate responses to the threats”.

PROJECT AUBURN
Under a pilot scheme in 2012, known as Project Auburn, eighty companies from five sectors of the economy – finance, defence, energy, telecommunications and pharmaceuticals – were encouraged to share information. Earlier this year the pilot was expanded to 160 firms and a more permanent structure has been announced. The kind of information CISP shares includes technical details of an attack, methods used in planning it and how to mitigate and deal with one. At a new London nerve centre a group of 12-15 analysts with security clearance will monitor attacks and provide details in real-time of who is being targeted.

Targeted companies have been nervous about revealing publicly when they have been hacked because of the potential impact on their reputation and share price if they are seen as having lost valuable intellectual property or other information. The BBC reported that one major London listed company had incurred revenue losses of £800m as a result of a cyber attack from a hostile state because of commercial disadvantage in contractual negotiations. One government official told the BBC: “No one has full visibility on cyberspace threats. We see volumes of attack increase and we expect it to continue to rise.”

The Cabinet Office minister Francis Maude, who is responsible for the UK national cyber security strategy, commented: “We know cyber attacks are happening on an industrial scale and businesses are by far the biggest victims in terms of industrial espionage and intellectual property theft, with losses to the UK economy running into the billions of pounds annually. This innovative partnership is breaking new ground through a truly collaborative partnership for sharing information on threats and to protect UK interests in cyberspace.” Maude describes CISP as a cyber security ‘fusion cell’ for cross-sector threat information sharing, incorporating figures from government, industry and information security analysts.

Commenting on the initiative, IT security pioneer Wieland Alge of Barracuda Networks said: “The new anti cyber threat centre initiative to share information on cyber threats between businesses and governments reflects the realisation that cyber attacks are a threat to all business establishments. Any critical infrastructure could be targeted. Private and public sector companies need to have a clear and immediate understanding of the IT threat situation, which requires businesses to report attacks in full as soon as they are discovered. This sharing of attacks, vulnerabilities and damage is essential to developing countermeasures to protect others from falling prey to the same kind of attack.

“Businesses’ protests of nervousness of revealing publicly when they have been attacked, due to the potential threat of revealing trade secrets and data confidentiality, are quite unfounded. By focusing on their reputation and stock market value only, they forget that what’s at stake in an attack is their customers’ data. And that means us and our data. If our data is being stolen, then we need to know about it. We stand to suffer from its misuse. We need to be aware of potential secondary attacks we might be facing with the data we thought safe with our service providers, our banks, hospitals, even the stores we shop at and media we subscribe to.

“Any piece of sensitive information about us and our behaviour could be used in targeted phishing attacks,” concludes Alge, “so we need all the help we can get to avoid falling victim. If industry security agencies and businesses can’t work together they will become paralysed and unable to prevent and protect our data from cyber attacks.”

If you would like to find out more about the CISP or if you are interested in applying to join, please contact





AGILE PRINCIPLES Following her look at Agile methods in the last issue, this time Angelina Samaroo tackles Agile principles.





In this article we continue the analysis of the Agile principles, and what each might mean when applied in the real world. The first in this series of articles for 2013 explored the first two. These were:

1. Our highest priority is to satisfy the customer through the early and continuous delivery of valuable working software;

2. Welcome changing requirements, even late in the development. Agile processes harness change for the customer’s competitive advantage.

The third is:

3. Deliver working software frequently, from a couple of weeks to a couple of months, with preference to the short timescale.

This follows on from the first principle, adding some idea of what it means to be early – weeks and months, not quarters and half years. It is of course a good idea to give the customer early visibility of what they might get, so that any fixes and change requests can be made early. The benefit of an early fix to code is that it should be cheaper than doing it downstream. An early fix or change should require less regression testing since we will not have to retest an entire system, just where we are now compared to the previous version.

ESTABLISHING A BASELINE
For a tester, however, a valuable regression testing process relies on having established a baseline set of tests to compare against. We must know what the system did in the past that was correct, in order to determine whether a change has caused the system to regress. In other words, we must have had sign-off on the last sprint, with the documentation to show the agreed user stories and the tests to be rerun in order to detect any points of regression.

However, a point in the Agile Manifesto is that ‘we value working software over comprehensive documentation’. In other words, provide ‘just enough’ documentation. The issue is in defining what is ‘just enough’. For a business analyst, ‘just enough’ may be to convey to the developers the idea behind the user story. For a developer, ‘just enough’ may be to have an idea of what the user might want so they can build something for them and us to check out. For a dedicated tester, ‘just enough’, to satisfy the need of this third principle on delivering working software frequently, is that of product quality. We cannot be limited to ‘it looks ok to me’; we need to know what it does under a realistic set of scenarios and how it does it. We may not individually be responsible for all of the non-functional requirements, nor all features, but as a community, our mission is demonstrable quality, not of process, but of product. The testing or QA group must accept responsibility for the quality of information it provides to the business (if asked) on go-live: is it ‘good enough’?

SHORT TIMESCALE
The second part of the third principle is that there is a preference to the short timescale. Again, short from whose perspective? We all want things quickly – think back to the last time you were in an elevator and felt the need to press the ‘close’ button before the automatic system does it – but we would not have wanted the elevator to have been built quickly. A short timescale could create the climate of ‘in a hurry’. When we’re in a hurry we may do just what needs to be done, for that release. The developers may then incur technical debt, and in testing we may also not do all that we should. If we do not have a full and properly defined regression test pack for our level of testing, against the agreed user stories (so not reliant on unit tests being automated – they’re at unit level), then we may miss that something is now broken that wasn’t before. Thus the principle of delivering working software frequently could become the antithesis of the add-on principle of preference to a short timescale, to those of us with the responsibility for testing.

PRINCIPLE 4
Now to principle 4 – business people and developers must work together daily throughout the project. The first thing that jumps out with this principle is that we are conspicuous here by our absence. There is no mention of testers. So, we will assume that developers include testers. Working together can be no bad thing, assuming that we get along, most of the time. The Scrum framework allows for a daily stand-up to let the team know how you’re getting on, and what your blockers might be. It invokes a Scrum Master to remove the blockers. We will discuss more on the team in the next issue when we consider the fifth principle, on building motivation.

Working together daily is an aspiration worth fighting for – the earlier you spot the problem, the cheaper it is to fix if you fix it then. In the old days of course, we could all be in the same building. In today’s world of outsourcing and off-shoring this becomes less likely. In last month’s issue TEST’s editor Matt Bailey reported on the entrepreneurial developer who outsourced his own work, terrible I’m sure. We not only have different buildings, we have different countries, time zones, languages, cultures and, clearly, loyalties. The spider phone is there, daring you to speak at it, but sometimes you wish it would just crawl away. If you’ve ever used one, you either find yourself talking to a machine and not to each other, or trying to get into the conversation at all from your location. It is useful when you already know each other, but can be daunting if you’re new to the project.

Video conferencing is another option of course, but we do not get away from the time zone/cultural issues. Over-reliance on the tools already paid for, though, often means that organisations are unwilling to bear the added cost of travel to another site. This could be short-sighted. Culture and tacit knowledge cannot be gleaned remotely – for these you really do need to be there. Often revelations, good humour and team building come at the end of a business day, over a cold beer, some exotic looking hot food and a joke or two at the expense of the visitor – principle 4 – priceless.
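The regression-baseline idea described above – a signed-off set of expected results against which the current build is compared – can be sketched in a few lines of generic Python (the business rule and baseline values here are invented for illustration, not taken from any real project):

```python
# Baseline ("golden") regression check: compare the system's current
# behaviour against the results signed off at the end of the last sprint.

def system_under_test(order_total: int) -> int:
    """Pretend business rule: orders over 100 get a flat 20 discount."""
    return order_total - 20 if order_total > 100 else order_total

# The baseline: agreed input/output pairs from the signed-off sprint.
BASELINE = {50: 50, 100: 100, 200: 180}

def regression_failures():
    """Return the inputs whose current output no longer matches the baseline."""
    return [inp for inp, expected in BASELINE.items()
            if system_under_test(inp) != expected]

if __name__ == "__main__":
    print(regression_failures())  # an empty list means no regression
```

If a later sprint changes the discount rule, the mismatching inputs appear in the list immediately – which is exactly the early, cheap detection the third principle is after.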



MONEYBALL AND THE SCIENCE OF TESTING
Looking at the example set by the game of baseball – and especially the movie Moneyball – reveals the mistakes we make in evaluating information and making decisions, according to Peter Varhol, solutions evangelist at Seapine Software.


Moneyball is a book and movie about baseball. But it’s also about the mistakes we make in evaluating information and making decisions. Understanding these mistakes and how to overcome them can help your team improve its testing practices and results.

Moneyball came into being because the Oakland Athletics baseball team had a problem. Its payroll was one-tenth the size of the wealthiest teams in the league, but it needed to field a winning team in order to remain profitable. The Athletics’ general manager, Billy Beane, started looking into the characteristics that produced winning teams, and came to a surprising conclusion. Beane discovered that the baseball experts were all wrong in the way they evaluated and used talent. The experts evaluated individual players by projecting their own biases on the value of individual abilities to team goals.

How does this apply to software testing teams? To answer that question, we have to delve into the realm of human error. Daniel Kahneman’s book, Thinking, Fast and Slow, describes two approaches to human thought and decision-making, which he labels System 1 and System 2.

System 1 is immediate, reflexive thinking that is mostly unconscious in nature. We do this sort of thinking many times a day. It keeps us functioning in an environment where we have numerous external stimuli. But it is sometimes too quick in forming conclusions. Consider this question: a baseball and bat together cost $1.10. The bat costs $1.00 more than the ball. What does the ball cost? If you didn’t think before responding, you likely said ten cents, which is incorrect. That’s System 1 decision-making.

System 2 is more deliberate thought. We use it for complex problems that require mental effort to evaluate and solve. While System 2 thinking helps us make more accurate evaluations and decisions, it can’t respond instantly and it takes effort.

These two systems of thinking work seamlessly together to help us function and make decisions in our lives. Most of our usual reactions are governed by System 1. Our senses pick up a cue, and we respond to it in a familiar way. We do System 1 thinking “without thinking.” When we are presented with a new problem, or have to deal with an unfamiliar situation, we think it through. That is System 2 in action. Both work together to enable us to function across a wide range of sensory inputs and subsequent decisions.

THINKING MISTAKES WITH HEURISTICS AND THEIR IMPACT ON TESTING
Most errors in decision-making are the result of fallacious System 1 thinking. There are a number of ways of classifying reflexive, or System 1, errors in thinking. One of these types of errors is found in our use of heuristics.

Heuristics, or rules of thumb, are a great example of where System 1 thinking can get us into decision-making trouble. Heuristics enable us to categorise situations and make fast, pre-conceived decisions. They are formed and activated through as few as one prior experience and response with a similar situation. But heuristics are by definition based on a set of assumptions, which break down when a new problem at first appears similar to a previous challenge we’ve overcome. Without thinking, we notice the similarities and apply the existing rule of thumb to a problem that is in fact different, and that’s where the failure is likely to occur.

What does our natural tendency to develop and use heuristics have to do with testing? Having preconceived notions about our success is a particular problem, similar to what we know as the Pygmalion effect, or self-fulfilling prophecies. Our expectations can ultimately influence the final result, even if we aren’t aware of the influence or swear that it doesn’t exist. As we engage in testing, our heuristics might seem to be a useful way to speed up that work. We’ve seen testing scenarios before, and have a good idea of what the results are supposed to be, so we let System 1 thinking drive our interpretation of the results. But reliance on heuristics in testing can result in incorrect evaluations of a result and wrong decisions. For example, we might run a test and have the expected test results. We think the test run is successful, but fail to notice that other parts of the application don’t correspond to the result. We see what we expect to see, but don’t observe other issues that may have an impact on quality. When evaluating test data, we recognise that there are data and performance points which are outside of what we expect. Instead of attributing them to luck, good or bad, we usually attempt to explain them. This results in our drawing conclusions that are both inaccurate and misleading.
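The bat-and-ball question is settled by one line of algebra – with b the ball’s price, b + (b + 1.00) = 1.10, so b = 0.05 – which a few lines of Python confirm:

```python
# Solve: ball + bat = 1.10 and bat = ball + 1.00.
# Substituting: ball + (ball + 1.00) = 1.10  =>  2*ball = 0.10  =>  ball = 0.05
ball = (1.10 - 1.00) / 2
bat = ball + 1.00

print(round(ball, 2))        # 0.05 -- not the reflexive answer of 0.10
print(round(bat + ball, 2))  # 1.1 -- the original total
```

The five-cent answer requires the small System 2 step of writing the constraint down; the ten-cent answer is System 1 pattern-matching on “$1.10” and “$1.00”.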

MANAGING A TESTING TEAM Biases inherent in heuristics and other types of erroneous conclusions come from System 1 thinking, where we react and respond immediately, and where prior experiences play a significant role in our decisions. But as a leader, our decisions impact people beyond ourselves. When our decisions affect others, they often need to be questioned and thought through more deliberately. So we should strive to engage System 2 thinking more frequently, through unstructured testing practices and failure analysis. However, there is a downside there, too. System 2 thinking is hard work, and too much of it causes us to mentally tire and make mistakes. And System 1 thinking has value in many situations, because of its efficiency and relative accuracy.


One way of using heuristics while recognising their limitations is to assume a conclusion, then discuss ways it may have happened. This can break people out of rote thinking. An interesting practice when making a decision is to tell the team that the decision has failed a year from now, and ask team members to list the possible causes of failure. The decision may not change, but it fully engages System 2 thinking among the team.

Using heuristics appropriately, while making sure to think through the implications of testing results, is critical to the outcome of any testing effort. Managers, leads, and team members all look at information on the project and make decisions daily. By understanding how we evaluate these decisions, how our own thought processes work, and where we are likely to make thinking errors, we can train ourselves and our team to make better decisions all the time. The result is better testing practices and results.




MAKING TESTING MORE EFFICIENT
Making the testing process more efficient should be a priority for all companies developing software. Paymon Khamooshi shows how his company, Geeks Ltd, has made improvements by automating the most time-consuming elements of the testing process.


Testing is of central importance in software development, but a thorough testing regime can eat up an enormous amount of a project’s resources. Depending on the development process followed, testing can take up to 50-60 percent of a project’s time. And although testing is meant to be a method for mitigating risks, if designed improperly it can actually introduce more risk into a system. With so much at stake, it is unsurprising that making testing more efficient is a priority for software developers everywhere.

At my own company, Geeks Ltd, we have tackled this problem by automating the most time-consuming elements of the testing process, and this has created enormous efficiencies that span several levels. Using our own programming language called M#, we have reduced testing times for our .NET web applications by up to 75 percent. We have also managed to substantially reduce risks in several areas of development, sometimes in ways that are not immediately obvious. Because the stakes are so high, I strongly encourage every programming executive to re-examine their development process and identify areas where automation can improve their testing regime. Here are the areas where I believe automated testing offers the most value:

• Elimination of human error;
• Automated unit test creation;
• Automated testing data creation;
• Human resources (smaller development teams);
• Assisting Agile development.

ELIMINATION OF HUMAN ERROR
A well-designed test will root out any human error from the finished code, but test coding is just as susceptible to human error as any other type of coding. By automating as much of the test creation process as possible, designers can ensure the creation of quality, error-free code, and free themselves up for higher-value functions. Repetitive and monotonous functions benefit the most from automation. These tasks are not only time-consuming, but are also naturally fertile ground for errors to creep into a system. Whenever possible, programmers should be offered pre-written coding options to choose from, much as mobile phone users are given options with predictive text. This ensures consistent, pre-tested coding is used as much as possible, and insures against coding mistakes.

Besides the obvious value of eliminating mistakes from software testing, automation also realises enormous time-savings in the development process across several other layers. Eliminating errors at the creation stage means they will not pop up later in the development process (which can cost both time and money). As well, by taking the repetitive activities away from the development team there is more time to allocate to higher-value testing. The more time your team can allocate to higher-value testing, the better the quality of the finished product.

AUTOMATED UNIT TESTING
Software is tested on several levels and, as a general rule, the lower the level, the more monotonous and repetitive the work involved. The most basic level, known as the ‘unit test’, verifies the functionality of each and every object in the system. Unit tests by themselves won’t verify that the system as a whole works, but they do make sure that each of the building blocks in the system is sound. The more unit tests that are written and deployed, the more assurance developers can have that they are designing a solid, robust system. Unfortunately, creating and executing unit tests is extremely repetitive, and is often not the best use of an experienced programmer’s time. Unit testing should not be skipped, however, so this work is often allocated to junior programmers or, in some cases, outsourced offshore. Although this can reduce costs initially, it introduces new inefficiencies and risks into the system.

To ensure that the testing regime offers good coverage for as much code as possible, best practice is to create a unit test for each and every new object created in the system. This ensures the integrity of the overall system, but even a highly-trained programmer makes mistakes (especially when engaged in repetitive work). Or, perhaps even more dangerously, developers sometimes take shortcuts and skip the creation of some unit tests to save time. Badly constructed or missing unit tests can return ‘false positives’. This might not be spotted until late in the development cycle, or even after delivery to the client, and can lead to the most expensive types of problems to solve. Automated unit test creation is therefore an invaluable asset for any programmer.

Automated unit tests are also enormously helpful for ‘regression testing’. This is a process that re-examines software for bugs after a modification to the original programming has been made. The complex nature of software means that any alteration can throw up new problems, or cause old problems to re-surface. Having a full range of unit tests with good code coverage in place takes the hassle out of regression testing. Every element in the system can be tested using an automated, error-free regime anytime a change is made, no matter how trivial.
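As a minimal illustration of the unit-test-per-object idea – written in generic Python rather than the author’s .NET/M# tooling, with an invented `apply_vat` function standing in for a real business object:

```python
import unittest

def apply_vat(net: float, rate: float = 0.20) -> float:
    """Building block under test: add VAT to a net price."""
    return round(net * (1 + rate), 2)

class ApplyVatTest(unittest.TestCase):
    # One small test per behaviour; rerunning the whole suite after any
    # change is what makes regression testing cheap and automatic.
    def test_standard_rate(self):
        self.assertEqual(apply_vat(100.0), 120.0)

    def test_zero_rate(self):
        self.assertEqual(apply_vat(100.0, 0.0), 100.0)

# Run the suite programmatically, as a build server or CI hook would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyVatTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If a later change breaks `apply_vat`, the same suite fails on the next run with no extra effort – the ‘full range of unit tests’ described above doing regression duty for free.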

TESTABLE ARCHITECTURE
Creating unit tests is time-consuming, but equally time-consuming can be modelling the data for the tests to run on. The better the quality of data used, the more accurate the test results. This advice may sound obvious, but following it using traditional software development techniques is far from easy. This is another area where automation can come to the rescue and cause development time and costs to plummet. Again, rather than employ a junior developer to hand-sculpt sample data for each test (possibly supplying data with the wrong parameters in the process), automation can allow testing to operate with real live data that reflects more closely the data the object will be supplied with outside the test environment. At Geeks we employ a system we call ‘World Snapshot’, where a snapshot of the client’s data is used and reused for each test, saving time and improving the accuracy of results.
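The snapshot-and-reuse idea could be sketched like this – the names and data are illustrative only, since the actual ‘World Snapshot’ implementation is not described in detail:

```python
import copy

# Captured once from realistic client data, then treated as read-only.
WORLD_SNAPSHOT = {
    "customers": [{"id": 1, "name": "Acme Ltd", "credit_limit": 5000}],
    "orders": [{"id": 10, "customer_id": 1, "total": 1200}],
}

def fresh_world():
    """Give each test its own deep copy so tests cannot pollute one another."""
    return copy.deepcopy(WORLD_SNAPSHOT)

def test_order_belongs_to_known_customer():
    world = fresh_world()
    customer_ids = {c["id"] for c in world["customers"]}
    assert all(o["customer_id"] in customer_ids for o in world["orders"])

test_order_belongs_to_known_customer()
print("snapshot test passed")
```

The deep copy is the important design point: every test sees the same realistic data, but mutations made by one test can never leak into the next.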

ELIMINATION OF THE JUNIOR PROGRAMMER As we’ve seen, automating the lower levels of software testing saves time and lowers risk across the system. These efficiencies are so dramatic that this can lead to another step-change in efficiency and risk mitigation: designing with smaller teams.


Removing the repetitive, monotonous tasks from software design means there simply is no ‘donkey work.’ By shifting to automation where possible, the need for junior programmers is sharply reduced, and this in turn means fewer people on a project. Fewer people means fewer opportunities for misunderstanding and miscommunication. The smaller the development team the easier it is to control the output. Besides delivering higher quality at lower costs, smaller development teams can also mitigate against other types of risk. Companies that outsource their low-level programming offshore not only face the awkward task of managing a task across cultures and time zones, but potential security breaches as well. Keeping the development team small, and limiting it to high-value programmers who work together, reduces risk, lowers costs, and improves control.

AGILE SOFTWARE DEVELOPMENT
Automation is a perfect complement for Agile software development. Vastly reduced development times mean that programmers can respond quickly to change, and responsive systems ensure that the changes are made instantly and system-wide. An important precept in the Agile model is that a working software prototype is more useful and welcome than simply presenting clients with documents. Agile developers therefore assemble basic working software models very quickly, and automation can help make this happen even more quickly. At Geeks we use our automated programming language M# to create trivial versions of the project (and to document the constraints as unit tests) during requirement-gathering meetings with our clients. This means we are actually designing and testing models as the client is explaining to us what they need. Because the client can see what is being produced even as they describe it, it is easier to spot where modifications are needed, and the final product is more likely to meet their needs.

Another important part of Agile is responsiveness to change. In complex software projects it isn’t realistic to expect all the requirements to be fully collected at the beginning of the development cycle. The continuous involvement of the client is crucial, and the development system needs to be robust enough to cope with continuous change of requirements. Automated testing has huge benefits here. The instant anything, no matter how small or trivial, is modified or renamed, all relevant unit tests are updated. This ensures programmers can concentrate on making valuable improvements, not struggling to keep the testing system up-to-date.

THE BENEFITS OF AUTOMATION Software has brought the benefits of automation to our daily lives by freeing us from repetitive and monotonous tasks. Advances in programming now mean that the benefits of automation can be brought full circle, to help make testing in software itself cheaper, more reliable, and more efficient. The potential benefits are enormous, and should be a priority for software designers everywhere.



DATA QUALITY & INTEGRITY – CORNERSTONES IN THE DELIVERY OF INFORMATION
The old adage ‘garbage in, garbage out’ has never been truer than in our information age, and data integrity is essential. Here Paul Fratellone, programme director of Test Consulting at Mindtree, reviews the various test techniques and methods to be considered for database and Extract, Transform and Load (ETL) data processing.






In our age of information and intense technology marketing it is sometimes helpful to take a step back from the hype and review the basics. I was recently engaged with a client about the cost of testing and, before I answered him, I asked what the cost of failure was. How much testing effort and cost should we really invest to ensure quality or to meet user requirements, and is it really important to have accurate and reliable data? As we discussed the topic, I introduced the reality of risk. What is the risk to your investment and company in the event of failure, or some degree of failure? How much of an appetite for risk do you have? We could always test less and reduce your cost and effort. In this particular case, the client was making an investment in business information and analytics along with major database changes. ETL (Extract, Transform and Load) was going to be a major part of the initiative. In this article I will review the various test techniques and methods to be considered for database and ETL data processing.

APPLICATIONS, FILES AND DATABASES I have never worked on an application or system that had a tolerance for inaccurate data. Consistency, reliability and accuracy are always mandated. Figure 1 identifies the major testing control points (stars) that require validation of the data. Making sure the data is correct (quality & integrity) needs to be validated for any transactions initiated from the GUI or by means of some ETL process from an existing database or external file. Needless to say, query capabilities should be part of every tester's toolbox; those of you who do not have this skill should address it, as standard good testing practice is to validate the database impact of any GUI/application transaction or ETL process. We perform this by querying the database for the test condition we created, checking that added, deleted or modified records are recorded properly in the database, in addition to any referential integrity that needs to be maintained for those actions.
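As a minimal illustration of querying the database to validate a transaction, here is a sketch using Python's sqlite3. The schema, customer and order data are all invented for the example:

```python
import sqlite3

# Sketch: validating the database impact of an application transaction
# by querying the database directly. Schema and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id),
    amount REAL NOT NULL)""")

# The 'transaction under test' -- imagine this was triggered from the GUI.
conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Acme Ltd')")
conn.execute("INSERT INTO orders (id, customer_id, amount) VALUES (100, 1, 250.0)")
conn.commit()

# The tester's independent verification queries.
row = conn.execute("SELECT customer_id, amount FROM orders WHERE id = 100").fetchone()
assert row == (1, 250.0), "order not recorded as expected"

# Referential integrity: every order must point at an existing customer.
orphans = conn.execute("""SELECT o.id FROM orders o
    LEFT JOIN customer c ON c.id = o.customer_id
    WHERE c.id IS NULL""").fetchall()
assert orphans == [], "orphaned orders found"
print("database impact verified")
```

The same pattern applies to deletes and updates: perform the action through the application, then confirm the effect with an independent query rather than trusting what the GUI reports.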


I once worked for a company that had three channels of delivery for their software platform. They had a desktop version, a web-based version and EDI (electronic data interchange). Each channel had its own code base and different validation rules, so many of the tests did not apply across the platform. Even though there were these technical differences, our standard test approach for validating data quality and integrity was applicable to all delivery channels, with specific challenges and additional effort addressed for the EDI channel. These were formatted message files that were very cumbersome to create and then test. Good testing practice addresses quality dimensions and objectives regardless of the delivery mechanism and indeed, when I speak of quality dimensions, we group them into examples such as reliability, stability, accuracy, performance, scalability, etc. In our discussion of data we need to include completeness.

DATABASE POPULATION There are three main ways in which to populate a database. You can copy records from one database to another, you can process inbound files and populate the database or you can directly add records via a front end GUI/application. There also could be a business rules engine applied during the processing that will change the data from the source system prior to loading to the target. The validation of these business rules can be most complex and, as always, talk to your developer/DBA and understand the code paths, the list of values and the data conditions that trigger the rules engine and those that take different branches/code paths.




KINDS OF DATA When you copy data from one database to another, it is usually structured data. This kind of data consists of discrete units of information – customer number, transaction ID, etc – each uniquely and separately identified in a column of a table within the database. Indeed, most business processes and their corresponding data fall very nicely into a structured database. However, in our social media world, images, videos, tweets, 'likes', etc do not fall nicely into a structured data world. When this information is being captured, you need to expand the number of textual combination tests to ensure these situations are addressed and certified. Fast-growing use of unstructured data can be seen in the medical field. Besides the doctor's notes and


comments that are in text, you have images of x-rays and MRIs/scans of all sorts that are being captured digitally. Referential integrity and data completeness in the medical field are paramount. Not only are there regulatory compliance issues but the impact to a patient could be life or death. Thankfully, all my patients are applications and, by the push of a button, we bring the patient back to life. In the mid-'90s, I was testing an application that was parsing a name (salutation, given, family and suffix) from a single field into separate data columns within the customer table. Besides the Mr, Ms and Mrs salutations, we needed to identify those that used 'Dr' before their name and those that had 'MD' at the end; all with and without the periods. There were family names that had an apostrophe and


those that had multiple family names and some with dashes between them. I found out that some use a single initial in addition to the given name. The number of combinations became daunting and required a lot of effort. I remember sitting next to the developer for the entire project, as it was her first experience with test-driven development, and as I found the situations, she coded for them. The risk to the company of not having all their customers properly identified in the database was high, so the effort was commensurate.
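A parser of this kind, and the daunting combinations it must survive, can be sketched briefly. The rules below are illustrative only, not the original application's code:

```python
# Sketch of the salutation/given/family/suffix parsing described above.
# The token lists and rules are invented for illustration.
SALUTATIONS = {"mr", "ms", "mrs", "dr"}
SUFFIXES = {"md", "jr", "sr"}

def _norm(token: str) -> str:
    """Compare tokens with and without periods, case-insensitively."""
    return token.replace(".", "").lower()

def parse_name(raw: str) -> dict:
    tokens = raw.split()
    result = {"salutation": "", "given": "", "family": "", "suffix": ""}
    if tokens and _norm(tokens[0]) in SALUTATIONS:
        result["salutation"] = tokens.pop(0)
    if tokens and _norm(tokens[-1]) in SUFFIXES:
        result["suffix"] = tokens.pop()
    if tokens:
        result["family"] = tokens.pop()       # last remaining token
        result["given"] = " ".join(tokens)    # everything else, incl. initials
    return result

# A few of the combinations: with/without periods, apostrophes,
# single initials, hyphenated family names.
assert parse_name("Dr. John Q O'Neill MD")["family"] == "O'Neill"
assert parse_name("Mrs Jane Smith-Jones")["suffix"] == ""
assert parse_name("Dr John Watson M.D.")["suffix"] == "M.D."
```

Each assertion here is exactly the kind of test condition that, in the project described, was found one at a time and coded for as it appeared.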

DATA COMPLETENESS Many people working in QA might consider the list of data completeness tests (see below) to be unit testing for data. Developers, database developers and database administrators could all play the lead role for this area of testing. Of course they will have other focus areas like data manipulation, data definition and data control language to certify, but will they be able to adequately validate the data permutations (positive and negative)? If they are in the lead role and your team is only testing at the GUI layer, then do not let this opportunity pass you by. This is a value-add opportunity for QA/testing, and by working with your technical teams you can apply good testing practice to ensure a higher level of data quality and integrity. If you have a higher standard of testing for your GUI than for database or file processing, then two thirds of your data could be suspect. Completeness can be thought of as the loading of data that is extracted from a file, copied from a database or transformed through some business rules engine.
We discuss tests in terms of source and target, and this includes validating that all records, fields and the full contents of each field are loaded, including:
• Record counts between source data and data loaded to the receiving database repository;
• Error control: header and trailer records and their handling;
• Tracking rejected records and writing them out to a suspense file;
• Check constraints between source and target;
• Stored procedures: validate joins, updates, deletions, etc of the tables/data elements being called in the procedure;
• Unique values of key fields between source data and data loaded to the target repository;
• Populating the entire contents of each field to validate that no truncation occurs at any step in the process;
• Testing the boundaries of each field to find any database limitations;
• Testing blank and invalid data;
• Data types and formats;
• Data masking (could be part of the transformation/business rules engine);
• Data profiling (range and distribution of values) between source and target;
• Parent-to-child relationships (part of referential integrity);
• Handling and processing orphaned records and error suspense records.
How much testing is going to be necessary? There are many control points and each has to exercise some amount of the data completeness checklist. You and


your business owners need to decide on how much risk is going to be acceptable with the amount of testing planned. In figure 1, I make no distinction between using production data versus creating test data. When you consider the number of tests (error testing included), how much additional effort and time is needed to create the conditions? Obviously look for tools to save you money (and effort), as there should be a return on investment for that purchase. Once you have a quantifiable number of test cases along with an estimated duration and time to create the data (files/external database), you can estimate the cost of the (database/ETL/data warehouse) test effort for that piece of the project. I left out control point testing for BI tools. Obviously there should be some level of certification, but the heavy lifting has already been performed with all the prior data validation. I have seen reporting (besides the BI tool) be generated from the operational data store, the data warehouse and/or the data marts. Be careful not to duplicate your test case effort for reporting. Flag those test cases in your test case repository that can be re-used as part of report testing. I have always found that, no matter how well we design and ensure high coverage using test data, production data always seems to find error conditions. It is all too common for that end user to create that unnatural occurrence, and they seem to have a knack for finding hidden issues.
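Two items from the completeness checklist – record counts between source and target, and truncation of field contents – can be sketched as a minimal check in Python. The records and the field width are invented for the example:

```python
# Sketch of two checklist items -- record counts and truncation --
# for a file-to-table load. Data and field width are invented.
source_records = [
    ("C001", "Alice Anderson"),
    ("C002", "Bartholomew Featherstonehaugh"),
    ("C003", "Carol Chen"),
]

FIELD_WIDTH = 20  # pretend the target column is VARCHAR(20)

def load(records):
    """A toy load step that silently truncates -- the defect we hunt for."""
    return [(cid, name[:FIELD_WIDTH]) for cid, name in records]

target_records = load(source_records)

# Record counts between source and target.
assert len(target_records) == len(source_records), "record count mismatch"

# Full contents of each field survived the load (no truncation).
truncated = [cid for (cid, s), (_, t) in zip(source_records, target_records)
             if s != t]
print("truncated records:", truncated)  # -> ['C002']
```

In practice the same comparison is done with queries against the source file and the target table, but the principle is identical: compare, don't assume.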

FILE PROCESSING: FIRST, LAST AND MIDDLE ERROR TEST CONDITIONS Back in the '80s and '90s, when I was testing file extracts and loading into a database, I would create the following tests (in addition to the error testing for header and trailer control records). Cause an error condition in the first record, then repeat for each of the different error routines in place; there were usually several places to create an error, so this had to be repeated for each routine. Then cause an error in the last record and repeat as for the first record. It was common for the file to fail to commit properly, or to fail to finish loading to the database, because of a first/last record error. The middle was just in case.
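Generating the first/last/middle error files can be done mechanically. The record layout below (pipe-delimited, one record per line) is invented for the sketch:

```python
# Sketch of the first/last/middle error-record test files described above.
# The record layout is invented.
def make_file(n=10):
    return [f"{i:04d}|customer {i}|OK" for i in range(1, n + 1)]

def corrupt(records, position):
    """Inject a malformed record at 'first', 'last' or 'middle'."""
    idx = {"first": 0, "last": len(records) - 1,
           "middle": len(records) // 2}[position]
    bad = records.copy()
    bad[idx] = "XXXX|<malformed>"   # should fail the loader's validation
    return bad

# One test file per error position; in practice this is repeated once
# per error routine the loader implements.
test_files = {pos: corrupt(make_file(), pos)
              for pos in ("first", "last", "middle")}
assert test_files["first"][0].startswith("XXXX")
assert test_files["last"][-1].startswith("XXXX")
```

Feeding each generated file to the loader and checking both the rejection handling and the final commit covers exactly the failure mode described: a first or last record error leaving the load half-finished.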

HIGHEST VALUE The words boring and monotonous can very easily be applied to this kind of testing, but it is not low-value. There can be few higher-value activities than ensuring your data is of the highest quality, as most businesses depend upon it. New data is being added and addressed as the business climate changes, and data certification, like many things, is not a one-and-done exercise. It is not uncommon for data to go through several iterations and transformations during its life, so consider what the original data was and where it resided. Thus the client came to realise that there is more to data quality than meets the eye, and the effort to deliver and maintain it cannot be minimised; indeed, everything depends upon it.



GOING MOBILE: BUILDING SUCCESSFUL APPLICATIONS As enterprise mobility becomes a necessity for businesses, Matt Davies, director, Cordys, discusses the key considerations for developing applications that can be accessed on a range of mobile devices.





Mobile working and BYOD are no longer just buzzwords, they are a fact of life for businesses around the world. In fact, a recent global survey found that 89 percent of IT leaders from enterprise and mid-sized companies supported BYOD in some form. The big question faced by the enterprise is how to allow customers, employees and partners access to the right applications and data on a multitude of mobile devices. Three quarters of all apps developed in 2012 were integrated with enterprise services, according to IDC.

MOBILE APPROACHES It is no great secret that the emergence of mobile technology is changing the way we all do a lot of things. This is especially true when it comes to the impact mobile technology has on business operations and the processes that support them. This huge shift means that the mobile device will become the default way of interacting with applications, services and processes. This has a significant impact on business operations and how organisations engage their customers, and increasingly the approach to business processes will be to design 'mobile first'. Organisations developing enterprise mobile applications fall into three main camps:

• The first group want to develop applications as the means to ‘mobile enable’ customers, employees and partners to participate in business processes;

• Secondly, businesses are looking to offer employees access to their data, systems and work through any device;

• Finally, some organisations are starting to build new kinds of enterprise mobile applications. By combining on-premise systems, SaaS, data, work and tasks, these organisations are building their next generation of applications with a 'mobile first' approach.

An example of a business embracing this third strand of enterprise mobility is a manufacturing company who use their existing data, ERP systems and SaaS to create smart process applications for mobile devices. This has enabled the organisation to offer enterprise mobility not only to employees but also the entire supply chain. As a result, customers now get better service through these mobile applications delivering the right information to the right people at the right time.

NATIVE OR HTML5? The debate continues as to the approach organisations should take to build these mobile applications and offer enterprise mobility. Should you build native applications or should you take an HTML5 approach? There are many arguments for native vs. HTML5, with benefits and challenges in both approaches outlined below. HTML5-based applications are seen to have an edge on lower cost of development, and you won't need an application store for apps to be discovered. They are accessed through a web browser, require a connection and typically don't have access to native device features. Native applications are faster to respond, have a more interactive user interface, are available offline and can make use of device features. However, they are more expensive to develop and need an application store to be discovered. The type of application you are building can also have an impact on the choice of technology. Reading, shopping,

researching and searching tend to be more suited to browser based usage. Social networking, managing content, data navigation and personal productivity tend to be done through native applications. The choice doesn’t have to be as black and white as native vs. HTML5 – there is a spectrum of choice in between with strengths and weaknesses in both.

APP WRAPPING The most important thing when building an enterprise mobile application is to ensure that the end product is user friendly while still being agile and flexible. One solution is to work with an 'app in app' concept where enterprise applications delivered in HTML5 are 'wrapped' in native applications, so a native app is installed on the phone and can make maximum use of the device's functions, features and capability. This allows instant role-based mobile application delivery, and a single native app can deliver the right catalogue of products, services and data to the right person. By having the 'app in app' approach the native application can present a highly dynamic HTML application. This gives an organisation the agility to instantly change, personalise and deliver the right HTML5 application without the need to constantly update and manage the delivery of new versions of a native app, neatly avoiding the lengthy app store 'upload/approve/check/confirm' publication cycle.

THE DEVELOPMENT CYCLE The ‘app within an app’ model provides significant benefits to organisations looking to offer enterprise mobility, and this carries through to the development cycle. Enterprise developers and IT departments can quickly and easily develop new applications and application content in standard HTML5 and widely adopted open source frameworks such as jQuery and the JSON programming model. This means that they can work with the standard technologies with which they have experience and the native app can simply act as the ‘container’ for this new application content. To cater for the differing needs of an organisation and the responsibility of different teams, there are two possible approaches to application development within an enterprise. Either the development and delivery of an application can be done using existing IDEs and tools, or a model driven approach to application development can be taken. A development or code based approach can be adopted if that makes most sense for the organisation or the type of application. The latter on the other hand is suitable for a less developer focused audience, or an organisation that wants to trade off low level control for productivity.

REMEMBER THE BACK END In an enterprise, mobile applications will typically need to communicate with the right back-end systems. The need to develop and maintain the same application across multiple device and operating system combinations can lead to extensive maintenance issues and

exploding cost, so it's important to bear this in mind in the planning stages. To support the development of enterprise mobility, the right plugins, frameworks and APIs need to be made available to these mobile applications. As an example, development plugins including jQuery Mobile, case and process management, human workflow and outbound communication via web services all allow mobile applications to access back-end systems and customer information. Crucially, these plugins allow the application to become more than just a disjointed mobile app and support the business value in enterprise mobility.

TESTING When it comes to a mobile application that supports an organisation's enterprise mobility strategy, the testing needs to go beyond just the app. Clearly the app needs to function correctly, be easy to use and work on a multitude of devices. But thinking more holistically, the organisation's enterprise mobility strategy also needs to encompass processes, access to key data and systems and include a full development, test, acceptance and production cycle. This requires a joined-up testing capability that covers the company's middleware, underlying technology and processes together with mobile access. Enterprises will need to consider sophisticated testing, validation and QA capabilities across this spectrum, combined with industry-standard testing tools for mobile applications. Developing enterprise mobile applications is a challenge many businesses are currently facing, and the choices can seem daunting. However, with careful consideration of the required capabilities, it is possible to strike a perfect balance between usability and cost effectiveness.





TESTING AND BIG DATA Big Data is big news. Mike Holcombe offers a practical example of how testing is progressing in this field.


In an ideal world we would have complete knowledge about what the software being developed is supposed to do, and then the task of testing would be much easier. If a complete and detailed specification is available then it is known how to generate highly efficient and effective test sets that will, more or less, guarantee that the software is correct if all of the tests are passed.


Life is never that simple, however, and we rarely know all that we would like. There may not be the time or resource to develop such a detailed specification, perhaps the requirements are still evolving and an Agile approach is needed, perhaps the software is a component that is to be integrated into a complex system with many unknowns – including unknown unknowns! For legacy systems there is often a lack of documentation and the experts moved on decades ago!

These issues present testers with lots of headaches. We can generate large sets of test sequences and these can be applied to the system under test. This will result in an avalanche of lots more data. But what can it tell us?

THE BUSINESS OF BIG DATA The business of Big Data or Data Analytics is big news these days. Lots of companies are offering services and advice that promise to help make sense of large, often loosely structured, data sets. Some testers are beginning to wonder if there are opportunities here for them. I am aware of one global player who is dealing with massively complex system designs and taking the view that exercising the software with massive automatically generated test sets might provide a basis for understanding the behaviour of the system if there were sophisticated ways to analyse the terabytes of data produced. Let's look at a simple example – this is an emerging research area so it may be some time before tools and techniques are mature enough for practical application.

I have been involved in developing simulation software for medical researchers. In one project we were working with tissue engineers trying to understand how human skin behaves when wounds are repairing. The biologists can culture the cells in skin – epithelial cells of various types – and investigate what they are doing, the molecules and chemicals that are being produced and how this leads to the wound getting better – primarily through complex processes of cell division, adaptation and migration – leading to a coherent and properly structured section of new skin.

The software was developed by creating a number of models of individual cells – we call this type of software agent-based modelling – so that the program for each type of cell behaves as far as possible like a real cell. The skin tissue is made up of a number of different types of cells, each of which can be in a number of different states. Some cells, such as stem cells, can divide into two identical cells. The healing process consists of stem cells dividing, moving across the three-dimensional structure that is skin and transforming into more specialised cells. The program then combines all these agents together in a complex interacting software system.

The division and movement of these cells involve the calculation of the physical forces that the cells exert on each other, and this has to be taken into account. The resulting simulation program is run a large number of times – since a lot of what is going on is stochastic the results are always slightly different. Analysing all this simulation data will tell us if the program is behaving correctly. If it is, we can then carry out more types of experiments and simulations in order to predict what might then happen in the real biology of growing tissue cultures for use as skin grafts etc.

The output from the program lists the states, the position, the speed of movement and the direction of every single type of cell at every time-point. This is a lot of data to understand and to see if the program is behaving properly.

We used an analysis tool called DAIKON, a dynamic invariant detector. This analyses the values of all the variables in the program, trying to find properties and relationships that always hold. An invariant might be a statement about the range of values that a variable always lies between, or a statement that some formula always holds between different variables.

Here is a sample of the output:
• z == motility
• x <= 500.0
• x >= 0.0
• y != 0
• y <= 467.957706
• y >= 27.36479
• z == 0.0
• force_x <= 0.288605
• force_x >= -0.30796
• force_y <= 0.312008
• force_y >= -0.311693
• force_z != 0
• force_z <= 4.9E-324
• force_z >= -0.399635

In the original program we were getting non-zero values for the variable z, which represented the vertical position of the centre of the initial stem cells. These were put on the basal membrane and should have stayed there. However, there were examples where some of these cells moved upwards. This means that the detailed calculations involving the forces acting on these cells were wrong: the adhesion force of the basal membrane was being overcome by the combined forces acting on the cells from the other cells in the model. Once this was corrected the model ran as expected.

This is an indication of the potential of tools like DAIKON for analysing large data sets generated by software tests. There is still a long way to go in terms of making these tools usable and suitable for use in software testing, but it is likely that we will be using such technology more and more.
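As a flavour of what a dynamic invariant detector does, here is a minimal sketch of a DAIKON-style range check in Python: infer per-variable ranges from calibration runs, then flag later runs that violate them. The trace values are invented, not from the skin simulation:

```python
# Sketch of a DAIKON-style range check: infer min/max invariants for
# each variable from one set of runs, then flag violations in later runs.
# The trace values are invented.
calibration = [
    {"x": 10.0, "y": 30.0, "z": 0.0},
    {"x": 480.5, "y": 400.2, "z": 0.0},
    {"x": 250.0, "y": 27.4, "z": 0.0},
]

def infer_ranges(traces):
    names = traces[0].keys()
    return {n: (min(t[n] for t in traces), max(t[n] for t in traces))
            for n in names}

def violations(trace, ranges):
    return [n for n, (lo, hi) in ranges.items()
            if not lo <= trace[n] <= hi]

ranges = infer_ranges(calibration)
# A later run where a stem cell drifts off the basal membrane (z > 0).
suspect = {"x": 100.0, "y": 50.0, "z": 3.2}
print(violations(suspect, ranges))  # -> ['z']
```

DAIKON itself infers far richer properties (relations between variables, formulas that always hold), but the z > 0 violation here mirrors exactly the defect the article describes: cells leaving the basal membrane they should have adhered to.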




TALK TO ME LIKE I’M A FIVE YEAR OLD With a new contract to work on, it’s back to basics for Dave Whalen


Talking to five year olds can be a challenge until you learn how to do it. If an adult takes what appears to be the simplest task and breaks it down into very small steps - they've been a parent. If they get frustrated, angry, walk away shaking their heads, and just do it themselves - no kids. An example: you hand a five year old a piece of paper and tell them to throw it away. The experienced parent: "Please take this piece of paper and put it in the garbage can." The inexperienced adult: "Throw this away." A few minutes later they find the crumpled-up piece of paper on the floor a few feet from where it was given.

Which brings me to my point: I have a new contract. As a consultant, you tend to move around a lot. At the end of one contract you leave as a system expert - the adult. As the new guy (or girl) on a new contract you are a five year old. Don't get me wrong, I'm a good tester. I know a lot about software testing, but am I an expert? Hardly. That really hits home when I start a new contract. I want to be a productive member of any team, and I want to be productive immediately. Is it an impossible task? I don't think so. Ultimately you need to talk to me like I'm a five year old - at first anyway.

In every writing class I've ever taken I was always told to consider the audience. Communicating with your intended audience will help to ensure that the communication is effective and that the other person understands it the same way you understand it. Consider their background, experience, subject knowledge, and other important issues as well. Writing a test is a form of communication. You need to consider the potential audience. Who are they? What's their background? What should they know? What may they not know? How much detail should there be in a test? I am constantly asked that very question. To which I give my usual answer: "It depends". Who is your target audience?
Just because you are working on a Web application, don't assume all applications are the same. Don't assume that just because it is the 21st century everyone knows how to navigate a Web page. Sure, they can find out some completely useless fact on Google, but they need to understand your specific application. Hopefully they at least know how to open a browser window. A dangerous assumption by the way - in this day and age it's a pretty safe assumption, but it doesn't hurt to confirm it during the interview process. When I write a test, that's usually where I start. As release dates get closer, project managers will recruit people to help test. The help may be external, such as test consultants. More likely, the help is internal, such as developers or business analysts (or worse - managers!)


These people may be incredibly smart (stop laughing), and they may know the system well. But they don't know how to test! Testing is part art and part science. I can teach the science. I can't teach the art. The reason that I have written a test may not seem clear to the inexperienced tester. I have a very specific reason why I need a test run in a particular way. I'm probably looking for a very specific type of problem. I need them to do exactly what I say, in the exact order. Sounding familiar? What's the old axiom? When you 'assume'... you know the rest. I try to be very specific when I write a test. I leave nothing open to interpretation. Some would say I'm too specific. Actually, they use an 'a' word that we won't go into right now. Where an inexperienced tester may just say "Create a new order", I will typically take a few more steps to get there.

Developers are, by far, the worst testers! But testing an application can be a very eye-opening experience for them. When I find an error, I'll usually find the developer and demonstrate how I found the error. Their response is usually: "Why did you do that?" My answer: "Because I could!" They don't like that answer. But if I can do it, a customer can do it. Again, if I'm testing from the position of a potential user, my guess is that they probably will. The developer will then show me an alternate path, typically a path that even an experienced user wouldn't take. But they wrote the code and they know it works. While I agree that the alternate path may work, no customer would ever do that. Some developers get very defensive. I can't blame them. I am, after all, calling their baby ugly. After a little basic tester training, they usually get it. They will never admit it, but I think that in the end, I make them better developers. You're welcome! Ice cream?


Headline Sponsor



An independent awards programme designed to celebrate and promote excellence, best practice and innovation in the software testing and QA community.




If you would like to enter the awards please contact Jade Evans on: +44 (0) 203 668 6942 or email her at


TEST April-May 2013  

The April-May 2013 issue of TEST Magazine
