TEST Magazine - August-September 2012


Sponsoring TEST’s 20 Leading Testing Providers. Editor’s Focus, pages 29-39.

INNOVATION FOR SOFTWARE QUALITY | Volume 4: Issue 4: August 2012

Institutional applications testing: Richard Eldridge explains how to tackle testing in the financial sector.

Inside: Agile development | Mobile apps testing | Testing techniques
Visit TEST online at www.testmagazine.co.uk




Too big to fail?

There seems to be a bit of a thread running through this issue – I’d like to say it is by design, but that wouldn’t be entirely honest; sometimes there is just something ‘in the air’. The mistake of – apparently – one technical person at RBS caused ripples that are still being felt, when an upgrade of some scheduling software went wrong and the list of payments to be made had to be re-built. A couple of our contributors address the issue and – as you may have already noticed – our cover story offers some advice to testers hoping to work in the banking/finance sector; perhaps the hapless upgrader could have benefitted from its wisdom. Anyway, suffice to say that events have once again pointed up how crucial it is that systems run smoothly and upgrades are performed without a glitch. Of course we never hear of the 99.999 percent of them that do go off without issues, but that tiny botched minority makes the headlines every time.

I can’t help bringing to mind an infamous phrase from the banking sector: ‘Too big to fail’. One would like to think that the reason that events like the RBS upgrade issue aren’t an everyday occurrence is because they are seen as too important to leave anything to chance. I suspect there are back-ups, there is built-in redundancy, and there are fail-safe mechanisms that prevent catastrophes from happening – but Murphy’s Law has a habit of afflicting the unwary.

The banking sector isn’t our sole concern in this issue, however. We also have advice on how to approach doing business in the mobile devices sector, as well as an in-depth look at an open source option for testing in this ever more important market... Didn’t I write one of these very Leader columns in the dim and distant past about what a top-notch opportunity ‘apps’ would prove to be for testers? Also in this issue we have the 20 Leading Testing Providers Editor’s Focus. This section presents thought leadership from four of the companies that will appear in the October issue’s 20 Leading Testing Providers section. Check it out. Until next time...

Matt Bailey, Editor


© 2012 31 Media Limited. All rights reserved. TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available. Opinions expressed in this journal do not necessarily reflect those of the editor or TEST Magazine or its publisher, 31 Media Limited. ISSN 2040-0160


Editor Matthew Bailey matthew.bailey@31media.co.uk Tel: +44 (0)203 056 4599 To advertise contact: Grant Farrell grant.farrell@31media.co.uk Tel: +44(0)203 056 4598 Production & Design Toni Barrington toni.barrington@31media.co.uk Dean Cook dean.cook@31media.co.uk

Editorial & Advertising Enquiries 31 Media Ltd, Unit 8a, Nice Business Park Sylvan Grove London, SE15 1PD Tel: +44 (0) 870 863 6930 Fax: +44 (0) 870 085 8837 Email: info@31media.co.uk Web: www.testmagazine.co.uk Printed by Pensord, Tram Road, Pontllanfraith, Blackwood. NP12 2YA





Contents... August 2012

1 Leader column

Shouldn’t banking systems be “too big to fail”?

4 News

6 Institutional application testing: Software quality in the financial sector
The embattled financial sector is still crucial to the global economy and now more than ever it is reliant on the security and functionality of its hardware and software systems. Richard Eldridge, test analyst for Markets & International Banking at RBS, explains how banking is different and how to tackle testing in the financial sector.

12 Removing the barriers to Agile development
Most IT organisations are still optimised to support the waterfall approach to testing. From the way that projects are chartered and funded, to the way that they are staffed, to the way that they are governed, the IT management system in place today stands in the way of adopting an Agile approach, according to Kurt Bittner, CTO Americas at Ivar Jacobson International.

16 The ins and outs of software testing for communication service providers
Hot Telecom – a market research firm specialising in the telecoms sector – has conducted extensive interviews to define what communications service providers (CSPs) are looking for in terms of software testing services and support, company president Isabelle Paradis reports.

20 Monkey magic
Gorilla Logic explores its MonkeyTalk, an open source, general purpose functional testing command language, and dives into its core design elements to explain why it made the many design decisions that led to the resulting MonkeyTalk 1.0 specification.

26 Training corner – Techniques and the tortoise
Following last issue’s ‘Agile and the hare’ outing, this time Angelina Samaroo tackles the tortoise of techniques.

29 20 Leading Testing Providers Editor’s Focus


40 Design for TEST – The bank glitch

The recent disaster at RBS raises the issues of batch processing, outsourcing and personnel management according to Mike Holcombe.

41 TEST Directory

48 Last Word – What would you say you do here?

While it is his raison d'être, Dave Whalen is pondering the necessity of going on a real bug hunt.




Today’s top testing challenges

A survey of software testers at TestExpo in London has perhaps unsurprisingly identified application testing as the biggest challenge facing software testing and quality assurance professionals today, while cloud-based testing and mobile testing were both highlighted as significant ‘future’ challenges. The survey, conducted by Sogeti at the TestExpo 2012 software testing event held in London in June, also revealed that 80 percent of respondents welcomed the concept of ‘Shift Left’, with 32 percent saying it would definitely help them and a further 48 percent agreeing it would help but that they would want to see results with their own project workloads. Only three percent said they did not think that the concept would help or enhance their software testing processes.

‘Shift Left’ is a strategy that moves software testing quality measures to the start of the application lifecycle – or left along the timeline – dramatically reducing the number of change requests required at the end of each project. This methodology reduces the overall timespan and cost of projects by ensuring that software testing and quality assurance practices take place at the earliest and most relevant stage of development possible. When asked about their current and future challenges in testing, 67 percent of respondents said that application testing was a major ‘current’ challenge, compared to 43 percent citing mobile testing, 35 percent Windows 7 migration, 34 percent outsourcing, 27 percent establishing a Test Centre of Excellence (TCoE), and 22 percent cloud-based testing. On the subject of future testing hurdles, the biggest challenge named was cloud-based testing, the answer from 51 percent of respondents, while 29 percent said mobile testing would be a future challenge. Other responses included 28 percent challenged by establishing a TCoE; 27 percent by Windows 7 migration; 11 percent outsourcing; and 9 percent still regarding application testing as a future challenge. The survey also asked respondents about the key priority for their department and business if they had to choose between quality, technology, innovation, speed and cost. The highest response backed quality, followed by cost and then speed.

The ‘stealth’ cloud option

Eighty-four percent of employees are putting company data at risk as they secretly access consumer cloud solutions such as DropBox and SkyDrive in the workplace, says Computacenter research. With no visibility of files available to IT managers, employees are opening networks up to potential security threats. The research, conducted amongst 150 IT decision makers, highlights that employees are being forced to turn to consumer cloud products to share files as current business systems simply cannot offer the same level of service. “Stealth cloud is a major issue for organisations,” says Paul Casey, cloud practice leader at Computacenter. “These cloud products are very convenient, easy to access, simple to use and perfect for remote working. Unfortunately, most IT departments don’t offer similar file sharing tools which are secure, and as a result are losing the battle to keep company data on the office network. The second an employee stores files and data using a solution such as DropBox, IT managers lose all visibility of what is going on, and potentially confidential information and intellectual property is open to security threats and breaches.” These threats are understandably keeping IT managers awake at night, with 56 percent worried about possible security breaches and a further half wishing they had full visibility of what data is stored within the cloud. “It is imperative that businesses address this problem now,” warns Casey. “It is clear that everyone knows the risks of consumer cloud products, but until the correct solutions or alternative sanctioned solutions are put in place, employees will continue to turn to consumer clouds to get the job done – no matter what the consequence might be.”

IT quality experts urge adoption of TickITplus

Leading IT governance and compliance experts are urging UK organisations to upgrade to a new, accredited TickITplus IT certification scheme or risk losing valuable business. TickITplus aims to improve the quality of software for business and industry internationally, covering sectors as diverse as finance, construction and transportation. The goal is to help software suppliers, including in-house developers and IT service and system providers, define and implement a quality system that covers all the essential business processes in the product lifecycle, within the framework of the quality management systems standard ISO9001. Peter Lawrence, director of global quality excellence at CSC and chair of BSI’s Joint TickITplus Industry Steering Committee (JTISC), says: “Building on the established TickIT scheme, TickITplus is now the recognised software and IT quality assurance benchmark for UK businesses. Any company without TickITplus risks being overlooked for valuable contracts by potential customers, who will seek the assurances of quality control TickITplus certification provides. “Launched in 1990, the original TickIT no longer meets the rapidly evolving requirements of 21st-century IT. Technology and working practices have changed. TickITplus, however, represents a solid long-term investment because, unlike TickIT, the new scheme is flexible enough to evolve at the same pace as the business, industrial, IT and software communities in the UK and abroad.”




INDIAN TESTING AGREEMENT

Web load and performance testing specialist Reflective Solutions has announced that GoodTest Technologies, an independent software testing and infrastructure management consulting company based in India, has entered into an agreement to use StressTester for its performance testing engagements with clients in their core markets of banking, insurance, retail and finance, as well as reselling in the Indian market. After reviewing a number of load testing tools, GoodTest Technologies says it chose the software based on its ease of use, in addition to its cost effectiveness and flexible licensing model. Nitin Kulkarni, director, comments: “We were so impressed with the software and its ability to meet the shorter testing timescales often demanded by clients. The levels of technical support offered by the vendor make this a logical choice for us as our business continues to grow.” Peter Richardson, partner director at Reflective Solutions, adds: “From their base in Pune, GoodTest Technologies will broaden our coverage of the growing Indian market, supplying StressTester software as well as performance testing services to their core markets.”

Botched software update causes problems at RBS

It seems likely that a botched software upgrade to its CA-7 batch processing software was at the heart of RBS’s problems in June. Stephen Hester, the bank’s chief executive, confirmed that a software change was responsible for the widespread computer problems. “It shouldn't have happened and we are very sorry,” he said. Following the problems in June he told the BBC the bank's systems were working normally but it would take a few days for the backlog to be cleared. “There was a software change which didn't go right and although that itself was put right quickly, there then was a big backlog of things that had to be reprocessed in sequence, which is why on Thursday and Friday customers experienced difficulty which we are well on the way to fixing.” He declined to speculate about how much the fiasco might have cost the bank. According to some reports, errors committed by an “inexperienced operative” – allegedly a recent recruit in India – caused the widespread problems. However RBS responded that there was “no evidence this is connected to outsourcing”. Hester told the UK’s Treasury Select Committee: “Preliminary investigations would indicate that the root cause was a routine software upgrade managed and operated by our team in Edinburgh, which caused the automated batch processing software to malfunction. The immediate software issue was promptly identified and rectified. Despite this, significant manual intervention in a highly automated and complex batch processing was required. This resulted in a significant backlog of daily data and information processing. The consequential IT problems and backlog have taken time to resolve. However, I would like to emphasise that at no point was any customer data lost or destroyed.” A report on IT website The Register (www.theregister.co.uk) blamed a bungled update to the CA-7 batch processing software used by RBS. The Register’s source, “who worked at RBS for several years”, claimed “an inexperienced operative made a major error while performing the relatively routine task of backing out of an upgrade to the CA-7 tool. It is normal to find that a software update has caused a problem; IT staff expect to back out in such cases. But in the process of backing out a major blunder was committed.” Anna Leach, the reporter at The Register, said the problem was “unique to RBS” and not a flaw in CA-7's system.

Focussing on the top suppliers

The next issue of TEST magazine will feature 20 Leading Testing Providers and TEST is proud to announce that Micro Focus has confirmed that it will be the headline sponsor for this special section of the magazine. As a more in-depth preview of 20 Leading Testing Providers, in this issue (see the special section on pages 29-39) we are focussing in a little more depth on four of the testing providers to be included in the October issue’s profiles. The Focus in this issue features thought leadership from Micro Focus, TechExcel, Facilita and Tricentis. We have comment from top thought leaders from these companies covering a range of topics including automated testing for apps, rules for purchasing application lifecycle management software, new approaches to load testing and the importance of data warehouse and business intelligence testing to ensure increased effectiveness and efficiency. The 20 Leading Testing Providers section in the October issue of TEST will profile 20 of the top testing product vendors, so make sure you check it out.




Institutional application testing: Software quality in the financial sector

The embattled financial sector is still crucial to the global economy and now more than ever it is reliant on the security and functionality of its hardware and software systems. As recent developments have shown, even a minor upset in this delicate balance can have enormous impact. Richard Eldridge, test analyst for Markets & International Banking at RBS, explains how banking is different and how to tackle testing in the financial sector.




If you’re reading this I’m hoping you’re interested in the finance sector and how testing differs in this area compared to others. To summarise, I have compiled seven key factors that are especially important when venturing into testing within the finance sector. They are mostly applicable to any role, but should be given particular attention for a successful start when testing for the first time in finance and banking. The first and most important item to clarify is ‘the institutional AUT’ (application under test); this separates the applications used throughout a company by trained employees from publicly available applications. You’ll find this article is focused on applications that only face internal employees and not customers; from my experience, the differences in their testing requirements are stark. Ideally this article can aid you in any future roles testing institutional applications, or suggest what skills to look for if you are taking on a new testing recruit for your team.

Why is the financial sector so distinct?

With dedicated companies focusing on testing for the banking sector, with new FSA (Financial Services Authority) requirements hitting banks in the aftermath of the 2008 economic crisis, and with problems in production-released software affecting end users negatively, excellent testing in the finance sector is highly sought after and arguably at its most important level since the last financial 'wobble' in the early nineties. In an industry where minimising risk is the highest priority, superior testing for IT projects is paramount and lucrative for those who can master it.

My seven factors to consider

1. Targeted testing: The financial sector will deliver projects for trained individuals who have, or should have, a detailed understanding of what they need to accomplish in their role and with the application under test. As such the applications provided for them are arguably tangibly different to any other application that might be given to 'Joe Bloggs'. This should be at the forefront of any testing focus and is a key difference between testing in other areas and the financial sector. Consider the heuristic of risk-based testing: the risks, and the likelihood of them occurring, are very different in institutional AUTs compared to other applications. For these applications IT projects aren't as worried about an incompetent user, because if users were to misuse the application they will be asked to follow their training procedures and, although recognising and fixing the issue will be required, the project won't suffer such catastrophic implications as seen with customer-facing applications. There is also a much higher degree of control over the users' future actions. This compounds the point that you won't need to focus so much on what might be considered incompetent actions and can spend more time on the more common actions of trained users. In essence, remember you will be testing for employees and not 'Joe Bloggs'.

2. Most trivial bugs are even less important: Finding trivial bugs that add cost and aren't likely to cause any issue for trained users is not as important when you're not trying to 'win over' customers. If a tester is constantly finding trivial bugs, and more importantly seems to miss bigger issues as a result, it damages confidence in the testing. Despite this it is still important to raise all issues so that it's clear the testing has picked up even trivial issues, but remember they're even less important in institutional, non-customer-facing AUTs. One way to make sure your testing is focused properly is to learn the minimum expectations of the end users: do this by explicitly asking questions about what the user is doing on each part of the application, and by implicitly taking on board feedback from any bugs you discuss with BAs or other more experienced team members. Remember the difference between an institutionally intended AUT and a public-facing AUT is the end user, and all your tests should be related to their activity.







To illustrate this point, here is a simple scenario (a short code sketch follows factor 5 below): if the application throws an unhandled error message because the user entered a string in a numerical field, and this exposes Java class names or SQL queries, a customer-facing application project would rate this bug extremely highly due to the risk and impact on customer experience. Compare this to our institutional application: users will be asked to use the field as they are supposed to and, while fixing the issue is still necessary, the impact is far less severe.

3. Jargon requirements: Jargon is immensely important in the financial sector; without knowing the jargon you can't communicate properly with developers or BAs. Over time you will have to understand more and more acronyms, abbreviations, terms and perhaps even outdated jargon that is still used within the company. Unlike other applications and some other sectors, there is an incredibly large amount of this jargon in the financial sector. The best way to make a positive impact as soon as possible is to take an active interest in these terms, and I'd suggest using financial and banking sites to learn all the terms that might be related to the area you are testing in or are looking to test in. One of my favourites is Investopedia; if you prefer moving images to the written word, the selection of YouTube videos at marketplacevideos is also very helpful. Don't think that you need to know everything regarding finance and banking – you don't. The important thing is to make sure you know what you need to know, really well.

4. Internal sites: Most banks and financial companies, if not all of them, will have their internal jargon listed and widely available if you know where to look. This is a resource that will help countless new starters, and if it doesn't exist you should be the one to suggest it and start compiling useful phrases, acronyms and general jargon. The internal sites, alongside the 'jargon requirements' above, highlight the key difference between this and other sectors: terminology. Taking the initiative to look at any internal documentation is the fastest way to learn and the most impressive. Using that initiative makes a big impact on how people view your ability to absorb new information and take on board the way the specific area communicates its financial logic. Most importantly, make sure you completely understand the terms – if you're unsure, always ask whether your understanding is correct, which leads nicely onto my next points...

5. Colleagues' knowledge: The people you work with know more than you do. The chances are, in financial brownfield projects, that unless they fall behind or you take over the mantle as the resident expert, they will always have a deeper understanding of areas than you do. While this isn't always the case, it is always the case that asking your colleagues is a sure-fire way to make sure you learn what you need to as quickly as possible. If you ask in a sensitive way, by considering their workload, it will leave a positive impression because they will see your thirst for knowledge. This factor really does apply to every single job, but in a sector like banking and finance there will always be knowledge that can be shared; furthermore, the learning curve is one of the steepest in this sector. When it comes to learning new information, consider that there is only one thing better than asking questions: answering questions. Don't be the person who sends the next new starter to see the resident 'guru'; tell them everything you know, but don't be afraid to explain any limits to your understanding. Telling them what you know, and checking whether your understanding conflicts with what others are telling this 'new kid on the block', is limitless in its ability to cement your own knowledge base. Personal 'silos' of information are not good for the longevity of IT projects and not good for any team. Although there will always be key employees whom you'd hate to lose, there needs to be really good knowledge transfer across the team, and this has never been more true than in today's turbulent job market. The financial sector has a high level of required understanding to keep up with the requirements of its applications and processes; as a result, sharing knowledge and avoiding personal information silos is very important for sustaining the quality of testing in the financial sector.
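To ground the factor-2 scenario in code, here is a minimal, hypothetical Java sketch; every class, method and message in it is invented for illustration. The naive parse leaks internals exactly as described above, while the validated parse degrades gracefully:

// Hypothetical illustration of the factor-2 scenario: the same bad input
// ("abc" in a numeric amount field) handled two ways. All names invented.
public class AmountFieldExample {

    // Naive version: a string in a numeric field throws an unhandled
    // NumberFormatException, so a raw stack trace (Java class names, and
    // perhaps SQL from a failing query further down) reaches the user.
    static long parseAmountNaive(String input) {
        return Long.parseLong(input); // throws NumberFormatException on "abc"
    }

    // Defensive version: validate first and report a plain error message.
    static Long parseAmountValidated(String input) {
        if (input == null || !input.matches("-?\\d+")) {
            System.err.println("Amount must be a whole number, got: " + input);
            return null;
        }
        return Long.parseLong(input);
    }

    public static void main(String[] args) {
        System.out.println(parseAmountValidated("12345")); // prints 12345
        System.out.println(parseAmountValidated("abc"));   // error message, then null
        parseAmountNaive("abc"); // uncaught: internals leak to whoever is watching
    }
}

In a customer-facing project the naive version would be a top-severity defect; in an institutional AUT it is still raised, but ranked far lower – exactly the re-prioritisation factor 2 argues for.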



FUZZ IT LIKE YOU MEAN IT!

Codenomicon's Fuzz-o-Matic automates your application fuzz testing. Fuzz anywhere, early in the development process, get only relevant results and remediate on time. Access your test results through your browser, anywhere.

Get actual, repeatable test results
Save time and money by avoiding false positives! Find unknown vulnerabilities before hackers do. Not everyone has the staff, time, or budget to effectively use penetration testers and white hat security auditors. For executable but untested software, Fuzz-o-Matic gives longer lead time to remedy bugs before release.

Identify bugs easily
Fuzz-o-Matic gives you working samples that caused issues during testing. Each crash case also includes a programmer-friendly trace of the crash, and identification of duplicate crashes is effortless. In addition to plain old crashes, you also get samples that caused excessive CPU or memory consumption. After remediation, a new version can be tested to verify the fixes.

Verify your applications
Test your builds with Fuzz-o-Matic. The world's best software and hardware companies have Software Development Lifecycle (SDLC) processes that identify fuzzing as a necessity to pre-emptively thwart vulnerabilities. These days simple functional testing done as Quality Assurance is not enough. Improve your application resilience by finding and remediating bugs that cause product crashes.
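To illustrate the underlying technique only – this is our own toy sketch, not Codenomicon's technology – a mutation fuzzer repeatedly corrupts a known-good input and watches the code under test for crashes:

import java.util.Random;

// Toy mutation fuzzer: corrupt one byte of a valid sample at random and
// check that the parser under test fails gracefully instead of crashing.
// parseMessage() stands in for any real parser; everything here is
// illustrative, not Fuzz-o-Matic's implementation.
public class ToyFuzzer {

    static void parseMessage(byte[] data) {
        // A deliberately fragile "parser": length-prefixed payload.
        int length = data[0];                          // may be negative or too large
        byte[] payload = new byte[length];             // NegativeArraySizeException...
        System.arraycopy(data, 1, payload, 0, length); // ...or an out-of-bounds error
    }

    public static void main(String[] args) {
        byte[] seed = {4, 'a', 'b', 'c', 'd'}; // a known-good sample
        Random rng = new Random(42);           // fixed seed, so runs are repeatable

        for (int i = 0; i < 10_000; i++) {
            byte[] mutated = seed.clone();
            mutated[rng.nextInt(mutated.length)] = (byte) rng.nextInt(256);
            try {
                parseMessage(mutated);
            } catch (RuntimeException crash) {
                // A real harness would save the crashing sample and a trace,
                // and de-duplicate repeated crashes, as described above.
                System.out.printf("crash at iteration %d: %s%n", i, crash);
                break;
            }
        }
    }
}

A production harness adds the pieces that matter at scale: saved crash samples, programmer-friendly traces, de-duplication, and re-testing of fixed builds.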

www.codenomicon.com/fuzzomatic





6. Inquisitive time frame: This time frame is like the 'golden hour' of learning – a time when asking questions has its highest benefit. Building on the previous points regarding colleagues' knowledge and jargon, testing in the finance sector carries the expectation that you will take an active interest in understanding the processes you are testing. Only through this understanding can you ever hope to test effectively. In this time frame, or 'golden time', people will expect you not to know as much and, more importantly, they want you to learn it so you can work to the best of your ability – without much reliance on anyone else for help. Managers will enjoy seeing you actively searching out information, and if you can absorb this information quickly – quicker than expectations suggested you should – then everyone's a winner. The interviewers who backed your ability will pat themselves on the back and think how good their judgement is, and you will surprise yourself and those around you. All this relies on you setting the bar high, because in reality your ability to test expertly not only affects the end users' experience; in the finance sector, millions or even billions of pounds' worth of transactions may be stopped because of even slight bugs in key areas. The more you take on here, in the 'inquisitive time frame', the better prepared you'll be later on, and once outside of this time frame don't be afraid to keep asking questions. You're not expected to retain everything, forever, and you're not expected to have clairvoyance for areas of the project's development you have yet to tackle – this final point is true in all sectors and roles.

7. Autonomous learning: Finally we have the independent learner. Again, this is applicable to many sectors and is equally important in the financial sector. Learning on your own is important; people don't want you asking them if you can find the answer yourself. Any sector that has a high level of basic knowledge requirements will be the same – once outside the period of inquisition you need to become self-sufficient to a large extent. Don't ask first and later realise you didn't need to bother anyone in the first place. Be logical, stay focused and use your natural intelligence, inquisitiveness and intuition to try and find the answer.

As a tester in this sector you need to use the skills that help in any testing/QA role. Seek out gaps in your understanding and try to fill those gaps; if the inability to fill them is affecting your ability to write tests effectively or understand upcoming development, then you need to dedicate extra time to your individual learning. Unlike testing public-facing applications, testing the internal applications of a financial institution requires you to dedicate more time to expanding your knowledge base, and as the need for quality testing is ever present, the need for continuous, autonomous learning is equally persistent. Despite these key facts please remember: don't waste time. This applies especially to Agile development teams, where time is absolutely king. You need to balance your workload, and if it's taking too long to work something out yourself then take what information you have and ask your colleagues. Autonomous learning can continue from day one through to the day you retire, and that's the best way to keep on impressing – to keep sharp and to keep interested in your role.

Takeaway messages

To conclude these ramblings, here are my takeaway messages; I hope remembering these will help testers coming into contact with the financial and banking sector.
• 'Targeted testing' – the focus of your testing needs to be different here compared to most other sectors.
• Trivial bugs really are trivial – keep perspective on any issues you face, embrace the feedback on bugs that are more or less dismissed, and re-align your testing efforts so that these trivial bugs get the very lowest priority.
• 'Jargon', 'jargon-busting sites', 'knowledge transfer' and 'inquisitions' – that knowledge truly is 'power' in financial and banking companies cannot be overstated. You cannot hope to test anything effectively without expanding your understanding rapidly.
• Finally, probably the most cross-sector and cross-discipline applicable message: pursue continuous, autonomous learning. Doing this will push you to the front of the pack and make you an outstanding and committed employee that any sector or job role will want to retain.

Richard Eldridge Test Analyst Markets & International Banking RBS



Join the Revolution

Don't let your legacy application quality systems hamper your business agility. At Original Software, we have listened to market frustrations and want you to share in our visionary approach for managing the quality of your applications. We understand that the need to respond faster to changing business requirements means you have to adapt the way you work when you're delivering business-critical applications. Our solution suite aids business agility and provides an integrated approach to solving your software delivery process and management challenges.

Find out why leading companies are switching to Original Software by visiting: www.origsoft.com/business_agility



Removing the barriers to Agile development

Most IT organisations are still optimised to support the waterfall approach to testing. From the way that projects are chartered and funded, to the way that they are staffed, to the way that they are governed, the IT management system in place today stands in the way of adopting an Agile approach, according to Kurt Bittner, CTO Americas at Ivar Jacobson International.

Most organisations use a traditional waterfall approach to software development. Increasingly they are realising that while widely used, the waterfall approach to software development is predictable in only one sense: it almost always fails to deliver the desired results. Organisations have learned that increasingly detailed project plans and comprehensive documentation coupled with exhaustive reviews and signoffs have failed to deliver better results. IT knows it and the business knows it. Fortunately there is a better way.

Many IT organisations are embracing Agile software development as a way to more reliably and effectively deliver value to their business clients. Agile development has the promise of reducing overhead and delivering business value faster and at lower cost. While some organisations will succeed, most will not, because the extensive barriers that they have erected over decades will prevent them from achieving the benefits of an Agile approach. Unfortunately most IT organisations are optimised to support the waterfall approach. From the way that projects are chartered and funded, to the way that they are staffed, to the way that they are governed, the IT management system that is today in place stands in the way of adopting an Agile approach. Understanding this is key to explaining why many organisations have succeeded with Agile development pilots but failed miserably when attempting widespread adoption of the approach. So long as a ‘pilot’ is able to skirt the regular rules of the organisation it can succeed, but when those rules are no longer suspended, the ‘organisational antibodies’ attack an Agile project team, causing it to revert to the traditional waterfall approach.

‘Agile changes everything’

One client with whom we are working refers to this as the ‘Aha! Moment’ – the point at which someone begins to fully grasp the complete extent of the fundamental changes that becoming Agile requires. Successful Agile development changes the way projects are planned and measured, and how roles interact; it affects career paths and how people are measured and rewarded; it affects how IT and the business interact. The organisational behaviours behind the traditional approach are deeply ingrained in both official policies and the informal habits of people. Policies must be adapted, and old habits must be unlearned and new habits established. Some common organisational barriers to agility include:
• Lack of business commitment;
• Governance models biased toward the traditional approach;
• Overly-specialised resourcing models;
• Ineffective outsourcing strategies;
• Change management strategies that discourage valuable rework.
Each of these barriers is discussed in detail below.

Lack of business commitment

The business has been ‘trained’ to expect to be intensively involved for a short burst at the start of a traditional project, while requirements are being gathered, but thereafter they can return to their ‘regular job’. An Agile project rests upon the principle that the business is continuously involved in the project through a ‘Product Owner’ who is actively engaged with the project team. Involvement in the project is the ‘regular job’ of the Product Owner. This is actually a benefit, and is a principal reason why Agile projects succeed – the continuous guidance and feedback provided by the Product Owner ensures that the project delivers the right solution to the right problem. The challenge can be that getting the best results requires the participation of the best people, people who are often in high demand in other areas of the business. To ensure getting the most value out of the investment being made in the software development effort, the best people must be recruited, freed from their day-to-day responsibilities, and rewarded when the development project achieves superior results.

Governance models biased toward the traditional approach

Traditional project governance emphasises detailed requirement specification and associated planning up front, with progress measured by completion of planned activities; project health is measured by deviations (or lack thereof) from the plan. Changes to the original plan, even when they will result in a better solution, are discouraged by a change management process that is designed to make even positive change as painful as possible. Instead of trying to create a perfect set of requirements, locking them down, and then creating a perfect plan that implements the agreed-upon requirements, it is far better to recognise that solutions evolve as needs are understood, and that the best solution is usually developed by building a small part of the solution to demonstrate shared understanding, and then refining the solution through collaboration. Governance then focuses on assessing whether adequate progress is being made, whether adequate quality is being delivered, and whether risks are being effectively addressed. Progress in the Agile world is measured by working, tested code. Overall schedule and budget are still important, but the scope will vary in order to deliver the greatest business value within a given budget and time frame. If better solutions are discovered along the way, as often happens, then the project team and the business adjust scope and priorities accordingly.

Overly specialised resourcing models

Because the day-to-day work on an Agile project is quite fluid, team members have to be able to play different roles depending on the need of the moment. Ideally, team members should be broadly skilled enough to play a number of roles while still having technical depth in an area of specialty. Overspecialisation of roles means that staff skills are narrow but deep, resulting in formal hand-offs between roles that introduce delays and extra work. For an Agile team moving quickly, these delays are deadly. An additional challenge occurs when there is a lack of continuity of resources, with people coming onto and leaving the project, potentially resulting in a lack of team cohesion and accountability for the overall end result. The lack of cohesive teams that work together over the course of a project can result in applications that are of lower quality than in cases where people ‘own their own mistakes’ and feel accountable for the overall solution.

Ineffective outsourcing strategies

Kurt Bittner CTO, Americas Ivar Jacobson International www.ivarjacobson.com


Many organisations pursue outsourcing as a rational approach to reducing the per-head labour cost of project work, especially development and testing. The problem is that it is difficult to create a cohesive, accountable, self-directing team with people from different organisations. Geographic dispersion of team members, a common occurrence in outsourcing arrangements, also can impede team formation and communication. Many outsourcing contracts are ‘fixed bid’, requiring all requirements to be contractually documented up-front. In these cases there is virtually no chance for the team to work in an Agile way. One of the assumptions of the Agile approach is that the solution is mutually determined between the team and the business through the process of working together. Documenting all requirements up-front is a return to the traditional approach. Effective outsourcing is best done once a ‘core’ team has identified the components and/or services that the solution will use. The components and services can then be developed by another team, either inside or outside the company, with relative independence from the core team and interactions with the business. The core team retains the responsibility for integrating components and services into a solution, and for interacting with the business to validate this solution. This concentrates responsibility in the core team and yet provides flexibility and reduced cost for low-risk parts of the solution.

Change management strategies that discourage valuable rework

The traditional approach’s focus on sign-off and change management creates a strong bias against rework, even when the change is merely clarifying existing requirements, or where a better solution is discovered. The process seems intended to discourage and punish rework by making it painful and visible through mandatory sign-offs and change requests throughout the project lifecycle. Unfortunately discouraging necessary rework may lead to higher overall cost, lower quality and lower satisfaction. The key is balance and scope management: a certain amount of rework should be regarded as a good thing, and teams should not be punished for rework when it will result in less work overall by producing a better business result. Rework is expected and essential in an Agile approach – it results from learning and feedback, and leads to increased value for the business, and less cost overall.

Removing the barriers

Though nearly all organisations moving to an Agile approach face some or all of these barriers, none of them is insurmountable. Removing the barriers requires persistence, patience and concerted effort, often across organisational boundaries. An Agile approach is a better way, but the Agile journey is not always an easy one. Your organisation has probably spent years optimising itself to support the traditional approach. Dismantling that approach and building a new support system for the Agile approach will take time and effort. This article is an abridged version of a white paper written by Kurt Bittner and published by Ivar Jacobson International. The full version of the white paper can be downloaded at: http://www.ivarjacobson.com/resource.aspx?id=1441.





Powerful multi-protocol testing software

- A powerful and easy to use load test tool - Developed and supported by testing experts - Flexible licensing and managed service options

Don’t leave anything to chance. Forecast tests the performance, reliability and scalability of your business critical IT systems. As well as excellent support Facilita offers a wide portfolio of professional services including consultancy, training and mentoring.

Facilita Software Development Limited. Tel: +44 (0)1260 298 109 | email: enquiries@facilita.com | www.facilita.com



The ins and outs of software testing for communication service providers

Hot Telecom – a market research firm specialising in the telecoms sector – has conducted extensive interviews to define what communications service providers (CSPs) are looking for in terms of software testing services and support, company president Isabelle Paradis reports.

Getting testing right can make a big difference to both revenue and cost, and many of the communications service providers (CSPs) we spoke to during our interview process constantly referred to the need to reduce costs and the time to market for new communications products. Testing service providers outlined how this can be achieved, for instance, through centralisation and off-shoring of some testing functions, placing a greater emphasis on requirements planning before test planning and execution begins, and critically, by reducing the rates of software defects in the production environment (that is, the real operating environment rather than the controlled ‘test environment’). Based on our discussions with testing service providers, it is especially important to reduce errors in those systems that have the most direct impact on a CSP’s financial and operational performance. When CSPs were asked to prioritise the importance of systems to be tested, billing emerged as the most important system to test thoroughly (see Table 1). Around half of CSP respondents singled out the billing system as most critical from a test perspective. Critical software applications, such as the billing system, can be a source of major headaches for CSPs: functional limitations, or the reconfiguration effort required to support changes to products or the introduction of new products, mean that time-to-market can be long: testing is an intrinsic part of reconfiguration or functionality improvement projects. Nevertheless, some CSPs told us they believed that no one system was more significant in testing terms than any other. Rather, it was more important to conduct system integration and end-to-end process testing, as the greatest number of errors tends to occur in the gaps between individual systems. From the discussions we had with CSPs in relation to how they measure the success of a testing program, we can conclude that CSPs take different approaches to measuring success. Some have a set of very detailed metrics that cover all aspects of a test process, while others simply say that reducing the number of software errors going into the live production environment is the only significant measure of success. Table 2 indicates the ranking of those measures mentioned to us by interviewees.

The increasing importance of testing

Many of the test managers and software quality assurance specialists that were interviewed said that software testing was increasing in importance and becoming more challenging, and that the ways in which it was carried out were changing, in response to a number of specific drivers.




Table 1

Table 2

The four key drivers identified through our discussions with CSPs, which are increasingly impacting their telecom software testing needs, are:
- Increased competition and the need for innovation and speed to market;
- Rising complexity of services and IT;
- Growing mobility and the rise of user applications;
- Sustained cost reduction imperatives.

Software testing for CSPs market revenue reaches US$1.7 billion

The study also defines the current and forecasted total value of independent software testing for the CSP segment. Independent software testing for CSPs has been enjoying, and will continue to benefit from, growth for a number of reasons, with the primary driver of market expansion being the continued strong investment in telecoms software itself. The global telecoms software market as a whole is expected to demonstrate a CAGR of six percent between 2011 and 2016. Meanwhile, the services and applications that service providers are delivering to customers are becoming increasingly complex. They are also being delivered across a much wider range of fixed and mobile devices including different varieties of feature phones, smartphones, tablets, laptops, PCs, IP phones and TVs. This means they are harder to deploy and configure in the first place, and the task of ensuring that the customer experience remains high is becoming more intricate and multi-faceted. This creates a need and an opportunity for an independent testing organisation to ensure that everything works end-to-end, from the provisioning of a service, through the back-office systems needed to support and bill for the services, to the end user application itself. Much of the market growth is coming from the emergence of this end-to-end testing of applications – with testing processes that span everything from the OSS/BSS systems to end-user devices. These processes are bringing network and device testing into a field that has traditionally been considered separately from the OSS/BSS stack. As defined in our market sizing exercise, telecom software testing revenue is estimated to have reached US$1.7 billion in 2011, while software testing revenue as a whole is estimated to have reached US$12.7 billion. This means that the CSP segment generates around 13 percent of software testing revenue on a global basis.

Top testing providers

Isabelle Paradis President Hot Telecom www.hottelecom.com

When it comes to the top independent software testing providers, all except one are global consultancies/IT solution providers and business process outsourcing specialists. At the end of 2011, the top five suppliers of software testing to the CSP segment, defined in terms of their total market share, were:
1. Amdocs
2. IBM Global Business Services
3. HP
4. Accenture
5. Tech Mahindra
Amdocs is the lone representative of the OSS/BSS vendor community in the top five, as it provides testing services to CSPs for software beyond its own portfolio. “To achieve success, CSPs must provide a consistent, high quality user experience across all customer touch points, efficiently and with a low cost of quality. This is driven largely by the way testing is managed, planned and executed,” said Gil Briman, vice president, Amdocs Consulting Division and head of Amdocs Technology Integration Services. “Amdocs has developed industry-leading testing methodologies and expertise that can be applied to our customers’ entire IT environments, including both Amdocs software and third-party applications. With a team of testing experts across 30 geographical locations, Amdocs Testing Services has become the leading independent software testing provider in the communications market.” Further analysis of the software testing revenue for the CSP segment can be found in our report, ’Software testing for CSPs – Market Analysis’, together with drivers and trends impacting customers’ expectations and the competitive landscape of the suppliers in the CSP market. More information can be found at: http://www.hottelecom.com/softwaretesting.html. An executive summary of this report can also be downloaded at: http://www.hottelecoms.com/CSP-softwaretesting-Executive-Summary.htm





Monkey magic

Gorilla Logic explores its MonkeyTalk, an open source, general purpose functional testing command language, and dives into its core design elements to explain why it made the many design decisions that led to the resulting MonkeyTalk 1.0 specification.

First, a bit of history: Gorilla Logic, an enterprise software consulting company, has been leading several open source development projects around functional test automation tools. The first such tool, FlexMonkey, provided script recording, playback, editing and test code generation for Adobe Flex applications. FlexMonkey was soon joined by FoneMonkey, an open source tool for native iOS testing, followed by FoneMonkey for Android. Although each of these tools was platform-specific, the three shared a common architecture and logical approach to functional test scripting, so much so that Gorilla Logic ultimately integrated the tools into a single, open source, cross-platform testing platform called MonkeyTalk.

A key aspect of this unification process was the specification of a single command language that could be used for testing any user interface implemented on any platform. In the remainder of this article, we’ll refer to the command language simply as MonkeyTalk, while Gorilla Logic’s open source system for executing MonkeyTalk scripts will be referred to as the MonkeyTalk Platform.

Platform architecture

The MonkeyTalk Platform architecture consists of the following primary elements:
• MonkeyTalk Record/Playback Wire Protocol: A simple HTTP-based wire protocol for sending and receiving MonkeyTalk commands to and from an application under test.
• MonkeyTalk Record/Playback Agents: Platform-specific libraries that record and play back the MonkeyTalk Wire Protocol. The MonkeyTalk Platform currently provides agents for iOS, Android, mobile and desktop Web/HTML, and Adobe Flex.
• MonkeyTalk Recorder: A tool that receives commands via the MonkeyTalk Wire Protocol and writes MonkeyTalk scripts. The MonkeyTalk Platform provides an Integrated Development Environment (IDE) that records, plays, edits and manages MonkeyTalk tests and suites of tests.
• MonkeyTalk Runner: A tool that processes MonkeyTalk scripts and sends MonkeyTalk commands over the wire protocol. The MonkeyTalk Platform currently provides a command-line runner and an Apache Ant runner, which can be easily called from Continuous Integration environments such as Jenkins.

As discussed in more detail below, MonkeyTalk commands have been designed so that they can be handled mostly ‘opaquely’ by a recording tool or script runner. The job of actually interpreting commands for a particular platform is left mostly to the platform’s corresponding MonkeyTalk Agent. In this way, a particular platform agent can extend MonkeyTalk with new commands without requiring any changes to the MonkeyTalk IDE or command runners. Adding MonkeyTalk scripting support to a new application platform requires only writing a playback agent (and optionally a recording agent) for the platform. While, as far as we know, MonkeyTalk scripting is currently only supported by the open source MonkeyTalk Platform itself, the language and its associated HTTP-based wire protocol are suitable for integration with most typical functional testing frameworks, especially any that conform to a logical architecture similar to the MonkeyTalk Platform’s. Because all communication to and from applications under test, recorders, and players is accomplished via the platform-independent MonkeyTalk Wire Protocol, it is straightforward to swap out existing components of the MonkeyTalk Platform or integrate new components that extend the MonkeyTalk Platform with new types of recorders and players, and target additional user interface platforms.

The MonkeyTalk Command Language was defined to meet the following goals:
• Test scripts should be highly readable by all project constituencies, including business stakeholders.
• It should be possible to automatically create scripts through interactive recording, but it should also be easy to create scripts from scratch without recording.
• The language should directly support parameterised and data-driven scripting.
• It should be easy to combine scripts into complex scenarios and test suites.
• It should be easy for test automation engineers to extend the language through the definition of new ‘component types’ and ‘actions’.
• The language should be easy for anyone to learn, even people with no programming experience, or with no technical knowledge about underlying user interface platforms.
• It should be easy to generate conventional language (for example, Java code) from a MonkeyTalk script via simple production rules.

Talking the talk
Before getting into the deeper concepts, let’s take a quick look at scripting. The MonkeyTalk script below enters a username and password, clicks on a login button, and verifies that a welcome message is displayed:

Input username enterText user01
Input password enterText mysecret
Button Login tap
Label * verify “Welcome, user01”





Table 1. The login script displayed in tabular form:

Component Type   Monkey ID   Action      Args
Input            username    enterText   user01
Input            password    enterText   mysecret
Button           Login       tap         *
Label            *           verify      “Welcome, user01”

Each command above, and indeed every MonkeyTalk command, consists of the following space-delimited tuple of values (with values containing embedded blanks being quoted):
• Component Type: The kind of component (typically a UI component) on which some action should be applied. Examples include input fields, buttons, and labels.
• Monkey ID: A value that identifies which specific component of the specified type is to be manipulated – for example, the ‘username’ Input field or the ‘password’ Input field.
• Action: The action that should be applied to the component – for example, enterText, tap, or verify.
• Args: Zero or more arguments needed to complete the action – for example, the text value “user01” that should be entered into an Input field.

Because every MonkeyTalk command consists of this same sequence of values, MonkeyTalk scripts can be displayed and edited in tabular formats such as the one above. Missing ‘cell’ values are entered as an asterisk (*). The example above assumes that there is only one label on the screen, so we don’t have to specify a monkeyId to identify it. This tabular view makes MonkeyTalk well suited to interactive command recording and editing with tools such as the MonkeyTalk IDE, but MonkeyTalk scripts are actually stored in the simple, space-delimited text format we saw above. This simple format allows MonkeyTalk scripts to be created or updated with any text editor, and to be easily parsed and generated by automation tools. Perhaps most importantly, the result is a ‘low-punctuation’, easy-to-learn language that is approachable even by non-programmers. There’s a bit more to MonkeyTalk, like parameterising and data-driving scripts, but fundamentally it’s all about rows of these simple values. Although simple in themselves, these values are interpreted in such a way as to create a powerful, object-oriented, cross-platform scripting language, as we shall see.

Welcome to the zoo
Every user interface platform consists of a menagerie of user interface components such as buttons, input fields, tables, sliders, and checkboxes, and associated actions such as clicking and keyboard entry. At its simplest level, a functional test script specifies a sequence of actions to be performed against particular user interface components, and so the notions of user interface components and actions are fundamental to automation scripting. MonkeyTalk enhances script readability and simplifies component identification by explicitly including a target component type on each command. In contrast, other popular user interface automation frameworks have no explicit notion of a UI component type. In these frameworks, you specify an action and a component identifier, but no explicit component type appears in the command. With such a framework, you click on an OK button with a command that essentially says “Click on the user interface element called OK”, whereas in MonkeyTalk, you essentially say “Click on the Button called OK”. While this difference may seem subtle, we’ll see that it is actually a very important aspect of MonkeyTalk. In addition to improving readability, the inclusion of component types within commands allows actions to be component-specific. Otherwise, every new action added to the language simply becomes part of one great, unmanageable namespace. Assigning actions to component types has the same benefit as associating methods with classes in object-oriented programming languages, in that it makes much of the language ‘self-organising’ from a code and documentation perspective. As we’ll see a bit later, component types are the key to MonkeyTalk’s own simple but powerful object-oriented features. MonkeyTalk also makes it easy for scripters to define new custom component types and actions.

A testing Tower of Babel

Virtually every user interface platform has a notion of a button, but other component types are platform-specific. Radio buttons, for example, are natively supported in both the Android SDK and HTML, but iOS has no native notion of a radio button. On the other hand, iOS has a ‘segmented control’ that, like radio buttons, provides for selecting a single value from a group, but has a very different visual presentation. Some UI components have no counterparts on some platforms. For example, both iOS and Android have native slider-type components, while HTML has no corresponding native element. Various user interface platforms also support different types of user actions or gestures, and logically equivalent actions have different names on different platforms. For example, a mobile application “tap” is largely equivalent to a desktop application “click”. Moreover, not all actions correspond to gestures like clicking and tapping; some correspond to “higher-level” operations like selecting a value from a list of choices. The various component types, names and actions supported by a particular application platform implicitly define a scripting “dialect” for the platform. An application written for multiple platforms, however, might be logically equivalent across several of them. For example, the following sequence could describe the login for several platform implementations of the same application:
1. Enter ‘user01’ into the username field
2. Enter ‘mysecret’ into the password field
3. Hit the ‘Login’ button
4. Verify that the welcome message says “Welcome, user01”
In order to allow scripts, or at least portions of scripts, to be shared across platforms, we made a decision early on to define logical, platform-independent user interface components and actions that would together comprise a cross-platform dialect for application scripting. It is up to each platform’s MonkeyTalk Agent to map the platform’s native dialect to and from MonkeyTalk. A platform agent, however, is free to define platform-specific aliases for any component type or action, so that scripters who prefer to use platform-specific terminology can do so.
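For example (an illustrative sketch of the aliasing idea rather than a documented alias list), a desktop web agent might accept click as an alias for the cross-platform tap action, making the following two commands equivalent:

Button OK tap
Button OK click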




Table 2. Examples of MonkeyTalk component types and their platform-specific counterparts.

The table above shows a few examples of MonkeyTalk component types and their platform-specific counterparts. Note that iOS’s ‘UISegmentedControl’ maps to MonkeyTalk ‘RadioButtons’. Mappings like these increase the portability of MonkeyTalk scripts for applications that are “logically equivalent” on multiple platforms, without necessarily being identical. In addition to facilitating cross-platform scripting, MonkeyTalk’s logical components allow scripters with no knowledge of underlying user interface technologies to learn one consistent dialect that can be used for scripting any user interface on any platform.

As object-oriented as you wanna be
While MonkeyTalk can be used without any understanding of object-oriented programming, it does provide simple object-oriented mechanisms, including inheritance, method overriding, and polymorphism, that make it simpler to understand and use, even by non-programmers. Additionally, MonkeyTalk’s object-oriented structure allows it to map naturally to any modern user interface framework or common language API, since virtually all are themselves object-oriented. More on that later. Like native user interface frameworks, MonkeyTalk’s platform-independent components are organised into a component hierarchy, and actions supported by a parent component are inherited by its subclasses. For example, the root user interface component in MonkeyTalk is called ‘View’, and it includes basic actions like ‘tap’ and ‘enterText’. A MonkeyTalk Input component inherits the ‘tap’ and ‘enterText’ actions from View, and adds an additional ‘clear’ action that blanks out an input field. The View component type also provides various ‘verify’ commands that allow scripts to test that actual component values match expected values or patterns, and these actions are likewise inherited by all component types. It is common for developers to subclass standard user interface components to create custom application behaviour, and MonkeyTalk automation is inherited by custom classes. Thus, the command:

Button OK tap

will tap on a native button or any button subclass (for example, “MyFancyButton”) within the application under test. Similarly, since View is the root component type, a command such as:

View OK tap

will tap on the first component called “OK”, regardless of component type.
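As a further illustration (using only actions already described above), all of the following are valid commands against the same Input component – tap and enterText are inherited from View, while clear is defined by Input itself:

Input username tap
Input username enterText user01
Input username clear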

Identifying with a monkey
While some other popular UI testing frameworks identify components through the use of search expressions with varying levels of complexity, MonkeyTalk relies on the fact that components typically have ‘naturally identifying’ properties that serve as unique identifiers. These identifiers are often visual – for example, the label on a button, or its tool tip – but others, such as ‘id’ or ‘name’ properties, are oftentimes invisible to users of an application, although typically easy to discover through inspection tools (like Firebug or the MonkeyTalk IDE’s Tree Viewer), or simply through recording an interaction with a component. MonkeyTalk defines the notion of a ‘monkeyId’ in order to identify specific instances of a component type. The interpretation of a monkeyId is platform- and component-type-specific, but in general consists of matching the value against each of the various properties that naturally identify components of some type. For example, for HTML/Web applications, the monkeyId is compared against several properties, including the HTML element’s id, name, title, and text value, to see if any one of them matches. In less common cases where a component has no unique value for any of its natural identifiers, monkeyIds can be subscripted, so if, for example, there were multiple OK buttons on some screen, you could specify which one by subscripting the monkeyId with parentheses like this:

Button OK(2) tap

Additionally, an asterisk (*) is a wildcard identifier, so that the resulting command matches on component type only, which is useful in situations where there is only one component of a particular type currently displayed. You could, for example, say:

Button * Tap

This command would click on the first button found on the screen, regardless of its id values. We could also specify a particular Button by index (starting with one), by specifying an ‘ordinal id’ of the form #n, for example:

Button #2 Tap

At recording time, the platform’s Recording Agent determines which value to use as a monkeyId. This is typically done by first examining the most logical identifier – for example, the label on the button – and, if it is blank, continuing to search the other properties until a non-blank one is found. If no value is found, then the recording agent returns an ordinal id. At playback time, it is the responsibility of a platform’s Playback Agent to search the currently displayed UI components to find a match.

If at first you don’t succeed
An important aspect of automation scripting is the ability to synchronise script execution with that of the application under test. Consider again the simple login test script:

Input username enterText user01
Input password enterText mysecret
Button Login tap
Label * verify “Welcome, user01”

Assume that the application takes a few seconds to process the login before displaying the welcome message. If the MonkeyTalk script tries to verify the welcome message before it is actually displayed, then the script will fail. MonkeyTalk commands are therefore retried until they succeed or time out, and default timeout intervals can be specified to the MonkeyTalk Runner. MonkeyTalk also allows the timeout to be lengthened or shortened for any command by specifying a new timeout value in a “command modifier”. For example:

Label * verify “Welcome, user01” %timeout=5000

The command above will retry until the label is displayed with the expected value, or until five seconds (5000 milliseconds) have elapsed. Similarly, the command:

Button OK tap %timeout=5000

will wait for up to five seconds for the OK button to be displayed before failing.

Any language can MonkeyTalk
Most test scripts step through a sequence of perform-action-and-check-for-expected-result steps, and so ‘straight line’ execution is sufficient for a majority of test scripts. For this reason, MonkeyTalk provides no ‘control flow’ constructs such as if-then or while-loops (although data-driving does provide a looping mechanism, and by assigning a script name to a variable you can essentially create a switch statement that, for example, allows code to be executed conditionally depending on the target platform). Rather than create (yet another) full-blown scripting language, we instead designed MonkeyTalk to be easily mapped onto any object-oriented language API, and to facilitate code generation via simple production rules, so that it is relatively trivial to create a code generator for any conventional object-oriented language. Binding any language to MonkeyTalk consists simply of implementing a language API that sends the very simple MonkeyTalk Wire Protocol. In the simplest case, a language binding need only provide a ‘run’ command that takes a MonkeyTalk command as an argument. For example, some language could provide a MonkeyTalk object with a single ‘run’ method, so that executing a command would look something like:

monkeytalk.run("Button OK Tap");

This is what we call a ‘command-as-data’ binding.
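To make the command-as-data idea concrete, here is a minimal sketch of such a binding in Node.js-style JavaScript. The agent host, port, and payload format shown are illustrative assumptions only, not the documented MonkeyTalk Wire Protocol:

// Minimal 'command-as-data' binding sketch. The endpoint details and
// payload format are assumptions for illustration, not the documented
// MonkeyTalk Wire Protocol.
var http = require('http');

function run(command, callback) {
  var req = http.request({
    host: 'localhost', // assumed agent host
    port: 16862,       // assumed agent port
    method: 'POST',
    path: '/'
  }, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { callback(null, body); });
  });
  req.on('error', callback);
  req.write(command);  // send the raw MonkeyTalk command string
  req.end();
}

run('Button OK Tap', function (err, result) {
  console.log(err || result);
});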


It can be much nicer, however, to create (or generate) a language API that allows MonkeyTalk commands to be translated into objects and methods that produce more natural, native programming constructs, which in turn benefit from the syntax checking and code completion facilities of modern programming environments. As pointed out earlier, MonkeyTalk commands are just sequences of values: ComponentType, MonkeyId, Action, and Args. This sequence of values maps quite naturally onto object-oriented programming language APIs. For example, here’s the login script we looked at earlier, along with the equivalent JavaScript statements generated by the MonkeyTalk Platform:

Input username enterText user01
Input password enterText mysecret
Button Login tap
Label welcome_message verify “Welcome, user01”

Here is the equivalent JavaScript:

app.input("username").enterText("user01");
app.input("password").enterText("mysecret");
app.button("login").tap();
app.label("welcome_message").verify("Welcome, user01");

The MonkeyTalk Platform initialises the ‘app’ variable to reference the application under test, but the script is otherwise essentially identical to its MonkeyTalk counterpart.

Baby you can drive my test

While most component types correspond to UI components, MonkeyTalk also includes a few system-defined component types used for things like script parameterization, data-driving, and defining test suites. As mentioned earlier, most command handling is left to platform-specific MonkeyTalk Agents, but the MonkeyTalk Runner handles script parameters, data-driving, and the run-time call stack. MonkeyTalk Playback Agent authors only need to implement logic that finds a UI component with the specified monkeyId and executes an action with concrete values. Variables are specified using a ‘dollar-curly’ format. For example, suppose we had a script called ‘login.mt’ consisting of the following commands:

Input username enterText ${username}
Input password enterText ${password}
Button Login tap
Label welcome_message verify “Welcome, ${username}”

The script can now be driven with a csv file containing the variable names on line one and the actual values on subsequent lines. For example, suppose you had a file called ‘login.csv’ consisting of the following lines:

username, password
user01, mysecret
anotheruser, anotherpassword

You could then data-drive the script with the command:

Script login.mt runWith login.csv

‘Script’ is an example of a component type that doesn’t correspond to any user interface component. It is defined by the MonkeyTalk specification as a means of calling a script; note that the monkeyId specifies the name of a script file. The runWith action causes the script to be run once for each line of data in the csv file named as an argument. You can also declare arguments within a MonkeyTalk script so that you can call a script with the arguments specified on the command itself, rather than within a file. A “Vars” statement specifies the order of the arguments and default values for each:

Vars * username=user01 password=mysecret

(The monkeyId is ignored for Vars statements, so we specify “*” as a placeholder for the empty “cell”.) Having thus defined the arguments, we can call the script with the command:

Script login.mt run john23 johnspass





Extend yourself
While it is possible to add custom recording and playback support to any platform by extending its MonkeyTalk agent, doing so requires advanced native programming expertise. MonkeyTalk does, however, provide a simple extension facility that allows custom component types and actions to be defined even by non-programmers. When executing any command, MonkeyTalk checks to see if there is a script with a name of the form ComponentType.Action.mt, and executes it if found. So, by saving our login script with the name ‘User.login.mt’, we could call the script via the MonkeyTalk command:

User * login john23 johnspass

In this way, scripters can add application-specific component types and actions that execute complex test cases.
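Following the same pattern, a hypothetical logout script saved as ‘User.logout.mt’ (the script name and contents here are illustrative, not taken from the MonkeyTalk distribution) might contain:

Button Logout tap
Label * verify “Goodbye”

and could then be invoked from any other script as:

User * logout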

Round tripping (sort-of)
Oftentimes a tester would like to convert a MonkeyTalk script to another language and add complex control logic to the resulting program, but also be able in the future to re-record parts of the recorded script. Although the MonkeyTalk Platform provides no facility for converting a generated program back into MonkeyTalk, the platform does provide a framework for calling MonkeyTalk from other languages and vice versa, so that advanced control logic can be implemented in another language while MonkeyTalk commands are maintained within MonkeyTalk scripts that can be recorded and played with the MonkeyTalk IDE.


The MonkeyTalk Script Processor is written in Java, and can easily be extended to call any language that is callable from a Java application. Conversely, the MonkeyTalk Script Processor can be called from any language that provides support for calling Java. The MonkeyTalk Platform currently includes support for calling JavaScript, and also provides a JavaScript API for invoking MonkeyTalk’s command processor. Support for other languages is planned for the near future. In the example below, the User.login.mt script is called from a JavaScript program:

for (i = 0; i < 10; i++) {
  app.user().login("user" + i, "password" + i);
}

Conversely, if we had some JavaScript program called User.login.js, we could call it from MonkeyTalk with the command:

User * login user01 mysecret

At runtime, the Script Processor searches for files named User.login.js. If found, the processor runs it; if not, it will attempt to run User.login.mt instead. In this way, you can convert a MonkeyTalk script to JavaScript, and every script that was previously calling the MonkeyTalk script will now call the JavaScript instead.

Ain’t it suite
MonkeyTalk provides system-defined component types for managing test suites. The ‘Test’ component runs a test, while the SetUp and TearDown components specify logic to run before and after each test, similar to ‘xUnit’ testing frameworks such as JUnit:

SetUp init.mt Run
TearDown cleanup.mt Run
Test enterUser RunWith users.csv
Test deleteUser RunWith users.csv

In the script above, init.mt is run before each test, and cleanup.mt is run after each one.

It’s MonkeyTime

As we’ve seen in this article, MonkeyTalk has been designed to be a general purpose functional automation language. While the MonkeyTalk team hopes that the MonkeyTalk Platform is widely adopted for use in automation testing, we also hope that the MonkeyTalk Command Language will become a standard supported by a variety of automation tools. MonkeyTalk defines a common dialect for user interface scripting across platforms. It uses component types both as a means of abstracting the underlying UI application platform being scripted, and as a means of realizing a simple, object-oriented command language that’s highly expressive but easy to learn. MonkeyTalk can be mapped in a straightforward way to generated code and APIs for common object-oriented languages. Its fundamentally tabular structure makes MonkeyTalk well adapted to interactive recording and playback, but its simple, space-delimited format can be edited with any plain text editor. Certainly the MonkeyTalk Platform itself strongly validates MonkeyTalk’s logical architecture and its implementability. Released in January 2012, the Platform has already been downloaded more than ten thousand times, and has an active community of users. While initially supporting iOS and Android, we were able to leverage the clean separation of recording, editing, and playback functionality in the MonkeyTalk Platform to add support for Adobe Flex, mobile Web (including hybrid native/web apps), as well as desktop web in just a few months’ time. Gorilla Logic is actively discussing what lies ahead for MonkeyTalk beyond the 1.0 language specification, and welcomes input from the community. We also hope to collaborate on developing agents for additional app platforms beyond the mobile and browser environments supported today. The Beta5 version of the MonkeyTalk Platform was released in early July. All source is available at www.gorillalogic.com/monkeytalk, where documentation can also be found. The 1.0 release is planned for early September 2012, and will continue to be available as a completely free and open source tool, licensed under the GNU Affero General Public License (AGPL).




Techniques and the tortoise
Following last issue’s ‘Agile and the hare’, this time Angelina Samaroo tackles the tortoise of techniques.

All that talk of Agile be nimble, Agile be quick in the previous issue has got me hankering for some good old-fashioned test design, so I thought I would provide a reminder of one of the many formal techniques out there. Whatever the model for software development, a valuable tester will know his craft. This includes reading, understanding and tearing down a requirement with a view to testing it – not in any old ad-hoc fashion relying on superior experience, but in a systematic, traceable and testable (yes, tests need to be tested too) manner. You will no doubt be familiar with the black box and white (glass) box techniques. The former uses the specification documents as the basis for test design, the latter the code. A technique that I’ve found useful for testing requirements is Decision Table Testing, a black box (or specification-based) technique.

Decision tables
Decision tables can be used to capture system requirements that contain logical conditions, and to document internal system design. They may be used to record complex business rules that a system is to implement. The specification is analysed, and conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they can either be true or false (Boolean). A decision table is created which contains the triggering conditions, combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions resulting in the execution of the actions associated with that rule. The number of permutations is equal to 2^(number of conditions), so full decision table testing would require 2^(number of conditions) test cases. For example, if there were seven conditions we would have 2x2x2x2x2x2x2 = 128 permutations of inputs to consider. The coverage standard commonly used with decision table testing is to have one test per business rule. A strength of decision table testing is that it draws out combinations of conditions that might not otherwise have been identified without the thoroughness of this approach.

Here’s an example scenario: a mobile phone supplier wishes to construct a decision table to decide how to target customers, according to three characteristics:
1. User type – Individual or Corporate;
2. Gender – Male or Female;
3. Age group – A (under 20); B (between 20 and 40); and C (over 40).
The company has four phone types (P, Q, R and S) to market. Type P is targeted at women who are not corporate users. Type Q will target females in age group A. Type R will target male corporate users in age group B. Type S is likely to appeal to all but those females in age group C.

The first point to note is that the tester can quickly calculate (not guess) the number of possible business rules and thus the number of possible tests; test planning is then less a game of poker and more an engineering method. To do this we need to identify the number of conditions (each of which can be set to TRUE or FALSE). In this scenario there are five:
1. Individual user (true or false);
2. Male (true or false);
3. Age group A (true or false);
4. Age group B (true or false);
5. Age group C (true or false).
(Note that in 3, 4 and 5 we cannot have just Age group as a single condition, since this does not lend itself to a true or false setting.) Thus, we now have five conditions, making our number of combinations 2^5 = 2x2x2x2x2 = 32. In other words, there are 32 possible business rules and thus 32 possible tests. Next we need to identify what these are. This is a simple process.


If we use binary mathematics, then we start each condition at 0, and increase each by 1 in turn until they’re all set to 1. Here we use I – Individual; M – Male; A – Age group A; B – Age group B; C – Age group C. This gives:

Combination      I M A B C       I M A B C
1.               0-0-0-0-0    or N N N N N
2.               0-0-0-0-1    or N N N N Y
3.               0-0-0-1-0    or N N N Y N
...
32.              1-1-1-1-1    or Y Y Y Y Y

Another method is to recognise that there are two options per condition (Y or N), and that there are 32 combinations. Select a condition, and set this to N and Y equally (ie, 16 times for each). For each set of 16, set the next condition equally to N and Y (ie, eight times for each). For each set of eight, set the next condition equally to N and Y (ie, four times each). For each set of four, set the next condition equally to N and Y (ie, two times each). For each set of two, set the next condition equally to N and Y (ie, one each). This is shown in Table 1.

Table 1
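The enumeration need not be done by hand, of course. As an illustrative sketch (not part of the original technique), a few lines of JavaScript can generate all 32 combinations using the binary counting approach described above:

// Enumerate every Y/N combination for the five conditions using
// binary counting: combination i's bits give each condition's value.
var labels = ['I', 'M', 'A', 'B', 'C'];
var total = Math.pow(2, labels.length); // 2^5 = 32

for (var i = 0; i < total; i++) {
  var row = labels.map(function (label, bit) {
    // Extract the bit for this condition, most significant first.
    var set = (i >> (labels.length - 1 - bit)) & 1;
    return set ? 'Y' : 'N';
  }).join(' ');
  console.log((i + 1) + '. ' + row); // prints '1. N N N N N' ... '32. Y Y Y Y Y'
}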






The next step in effective test design is to study the table and deduce what to do next. In this case, a little study will reveal that many test cases are not realistic. Here, we cannot have a Y in more than one age group, or an N in all age groups (subject of course to checks that the code does not allow this, perhaps by providing a drop-down menu). This leaves us with 12 combinations or business rules or tests:

Table 2

A decision table would look like the following:

Table 3

Note that this activity would take no more than a few minutes to sketch out, assuming a properly written scenario and prior practice in using the technique. A study of the final table shows that there are no offerings targeted at corporate females over 40, and that phones R, Q and P are targeted at just 1-3 groups respectively, whilst phone S is targeted at 11 groups of people.

This could form a solid basis of discussion with the marketing, sales and supply teams. Whichever development model you choose, the principle of ‘the earlier you find an issue, the cheaper it is to fix’ still holds. Seeking to challenge requirements, not just on a hunch but with a method that is easy to use, will reap rewards later on – you may well become the unsung hero at go-live, but a quiet day at the (back) office after launch is worth some up-front thinking.

A test professional need not limit himself or herself to testing systems. He or she should seek out extra skills – requirements analysis and code analysis being amongst them. We do not need to write either, but we do need to be able to test them. Leaving this to the business analysts and the developers is to do half the job and thus to double the risk.


Angelina Samaroo
Managing director
Pinta Education
www.pintaed.com




20 Leading Testing Providers EDITOR’S FOCUS

Sponsored by


Industry-leading Cloud, CEP, SOA and BPM test automation
Putting you and the testing team in control

Since 1996, Green Hat, an IBM company, has been helping organisations around the world test smarter. Our industry-leading solutions help you overcome the unique challenges of testing complex integrated systems, such as multiple dependencies, the absence of a GUI, or systems unavailable for testing. Discover issues earlier, deploy with confidence quicker, turn recordings into regression tests in under five minutes, and avoid relying on development teams coding to keep your testing cycles on track.

GH Tester ensures integrated systems go into production faster:
• Easy end-to-end continuous integration testing
• Single suite for functional and performance needs
• Development cycles and costs down by 50%

GH VIE (Virtual Integration Environment) delivers advanced virtualized applications without coding:
• Personal testing environments easily created
• Subset databases for quick testing and compliance
• Quickly and easily extensible for non-standard systems

Every testing team has its own unique challenges. Visit www.greenhat.com to find out how we can help you and arrange a demonstration tailored to your particular requirements. Discover why our customers say, “It feels like GH Tester was written specifically for us”.

Support for 70+ systems including: Web Services • TIBCO • webMethods • SAP • Oracle • IBM • EDI • HL7 • JMS • SOAP • SWIFT • FIX • XML



Welcome to
20 Leading Testing Providers Editor’s Focus

Leader
The October issue of TEST magazine will feature a section named ‘20 Leading Testing Providers’, which profiles 20 of the top suppliers to the software testing industry, and we are very proud to announce that Micro Focus has confirmed that it will be the headline sponsor for this section. As a preview to this section, in this issue we feature thought leadership from four of the companies profiled. This Editor’s Focus features comment, opinion and case studies from Micro Focus – the headline sponsor – Facilita, Tricentis and TechExcel. I hope you find it useful.

Matt Bailey, Editor

Contents
32 Minimising mobile risk: Automated testing for apps – Whether it’s in our professional or social lives, the majority of us are becoming increasingly reliant on mobile Internet and Smartphone technology. But as the number of apps, operating systems and devices proliferates, so does the risk of application error. Clinton Sprauve, product marketing director at Micro Focus, reports.
34 A Big Data bonus – TRICENTIS discusses the importance of data warehouse and business intelligence testing to ensure increased effectiveness and efficiency.
36 10 simple rules for purchasing ALM software – TechExcel outlines its 10 simple rules for buying application lifecycle management (ALM) software.
38 Load testing: new approaches to fresh challenges – According to Facilita, the pressure is on for testing professionals and tool vendors alike.




Minimising mobile risk: Automated testing for apps
Whether it’s in our professional or social lives, the majority of us are becoming increasingly reliant on mobile Internet and Smartphone technology. But as the number of apps, operating systems and devices proliferates, so does the risk of application error. Clinton Sprauve, product marketing director at Micro Focus, reports.

Mobile applications must be thoroughly tested prior to release. If customers don’t have an excellent experience with an app – as a result of a bug, for example – they may well switch to a rival product. That’s why testing, or more specifically, test automation, is critical for today’s mobile application developers.

12.7 million US mobile users used banking apps in Q2 2011, an increase of 45 percent from Q4 2010.

Mobile meltdown
Citibank: Earlier this year, Citibank, the fourth largest bank in the United States, released a bill payment app for iPad that charged some customers twice for a single transaction. While many customers quickly spotted the erroneous transaction and complained to Citi, others were caught off guard when the bank notified them of the discrepancy. In a few rare instances, the problem even led to some unsuspecting customers’ accounts being overdrawn. To add insult to injury for Citibank, the press release for the app’s launch announced it as the ‘new standard for digital banking’. Citibank’s gaffe underscores the risk of entering the relatively new world of online banking without adequate application testing.

Facebook: The social media site is currently facing criticism from users after replacing email addresses listed in members’ contacts with those provided by its @facebook.com system. Previously, users may have had a yahoo.com or gmail.com address displayed, so that if people wanted to contact them outside Facebook, they could. Facebook’s email strategy is ultimately designed to drive more traffic to the organisation’s pages and boost ad sales. Unsurprisingly, many users are annoyed about the new move, and instructions on how to revert to original email addresses were quickly posted online by disgruntled users. The main issue is the lack of communication – Facebook has acted without asking for members’ permission. To make matters worse, in the wake of the change, users began complaining that they had lost address books, emails, and messages after synching with Facebook over a mobile device.

Android: Cupcake, Donut, Éclair, Froyo (frozen yoghurt), Gingerbread, Honeycomb, Ice Cream Sandwich, Jelly Bean… This is the suite (or should that be sweet?) of codenames for versions of the Android operating system (OS) released since 2009. The OS began with the release of the Android beta in November 2007 and has a complex history involving frequent updates. Trying to test multiple Android OS versions on multiple mobile devices is a challenge in itself. Factoring in other competing release platforms such as iOS, Windows Phone, Blackberry and HTML5 raises an important question: how can risk be effectively managed when it comes to deploying mobile applications? After all, it’s unrealistic to think that an organisation can run its entire regression testing on all the possible variations and combinations of operating systems and vendor firmware versions.





Reducing risk for mobile development teams
To reduce risk when testing mobile apps it is necessary to identify relevant criteria for the test automation solution:
• Supporting the broad range of platforms: Implement a strategy that supports the broad range of operating systems and versions and can withstand frequent OS changes.
• Ensuring reusability: With rapid advances in mobile app development, teams need a system that can withstand frequent user interface changes and keep pace with development.
• Testing real-world scenarios: It sounds obvious, but your mobile apps and devices need to be tested in the same way that your users will use them.
• Using real development languages: Industry-standard languages, like Java or C#, help reduce risk by providing code that can integrate into any framework.
• Multiple devices: The ability to test multiple devices from a single platform is crucial.

To learn more about the automated, real-world test benefits of Silk Mobile, visit borland.com/solutions/mobile or contact us at borland.com/contact or microfocus.communications@microfocus.com.

Borland Silk Mobile
Borland’s new solution is designed to provide an easy to use, comprehensive approach to the functional testing of real mobile devices. With strong native, image, and text-based recognition, Silk Mobile ensures that you can build robust, repeatable, and maintainable test suites. Development teams can create and run tests on mobile devices that are connected to a PC by a simple USB cable or WiFi. The solution runs on an actual device, so the tests represent genuine user experiences. Silk Mobile includes a recorder that allows you to create tests within minutes. In addition, it has a rich set of editing and debugging capabilities, to ensure that any recording can easily be tweaked exactly the way you want. There’s no need to modify the device, and testers don’t have to jailbreak or root the device to use the product. Silk Mobile is:
• Open: Build tests using C#, Java, Python, Perl;
• Agile: Create tests in minutes with the point-and-click visual interface and by simply connecting a device via USB or WiFi;
• Enterprise: Execute robust tests on actual devices in use rather than virtualised devices;
• Complete: Supports Android, iOS, BlackBerry, Windows Mobile, Symbian, and HTML5.
Test more accurately with Silk Mobile.




A Big Data bonus
TRICENTIS discusses the importance of data warehouse and business intelligence testing to ensure increased effectiveness and efficiency.

The finance sector is buzzing with talk of new compliance requirements such as SOX and Basel II and III for banks. This comprehensive set of reforms demands exact and consolidated data reporting. Savvy organizations quickly realised that the same investment, infrastructure, and process models could be the basis for effective corporate management on both the strategic and tactical levels.

Data warehouses (DWH) and business intelligence (BI) provide the required reports. As the importance of reporting continues to increase, so does the importance of DWH/BI testing to ensure high data and processing quality. However, until now DWH/BI testing has not been a high priority. In fact, in the DWH/BI space little regression testing happens to ensure functionality, and even release testing is often inconsistent. Nevertheless, both requirements are crucial in Big Data/BI testing.

According to Wolfgang Platz, founder and CEO of TRICENTIS, “Government regulations require software testing and reporting to comply with Sarbanes-Oxley, EuroSOX, J-SOX, Basel III, Solvency II, and other regulations. In the Big Data/Data Warehouse testing area many enterprises are struggling with regression testing that covers the entire chain. To make matters worse, many of the errors are found very late in the process, just prior to reporting, when there is a huge time pressure to meet the deadline. With a high level of test automation much earlier in the process, much of this late discovery can be avoided.”

“Information from many disparate systems and applications must be consolidated, aggregated and tested,” adds Platz. “Resolving the missing data and data integrity issues consumes expensive time and effort and can add as much as a third of the time needed to the testing effort. Only TOSCA delivers the ability to directly tie synthetic test data to any of the test cases, and to replicate test data for a sufficient supply. This ensures that testers and SAP Key Users never run out of test data.” Next to increased test automation, missing data and data integrity issues are the second biggest headache of Big Data/BI software testers.

According to Franz Fuchsberger, CEO, TRICENTIS, “We see a critical need for end-to-end Big Data and BI testing in order to achieve measurable improvements in testing excellence. TOSCA Testsuite delivers a comprehensive and much easier way to automate the testing earlier on, with increased automation for regression testing of up to 85 percent across end-to-end Big Data/BI processes. TOSCA provides a paradigm shift with new technology which dramatically increases both test automation and test coverage.”

Compliance in finance
The big compliance topics in finance (SOX compliance as well as Basel II & III in the banking sector) require exact and consolidated reporting of the most important company data. Furthermore, consolidated data are the basis for effective corporate management on both the strategic and tactical level (for example, product development in the insurance sector). Reports for internal and external recipients are provided by the DWH and the related BI systems of the respective company. As the functional capability and importance of these reports increase, so does the importance of DWH/BI testing.

Technological framework conditions
Traditional testing protocols rely heavily on SQL queries to test quality. However, the use of plain SQL raises some problems. Complex SQL queries require technicians and result in enormous maintenance effort. Further, the huge number of database scans required actually limits test execution performance and coverage. Synthetic test data make it possible to reduce the volume of the scanned data. However, in practice, synthetic test data can only be used to a limited degree in regression testing and cannot be used for testing data quality, where real data must be used.

The TOSCA@BigData solution
After screening the market for suitable tools, TRICENTIS partnered with DWH experts to create TOSCA@BigData to meet specific requirements arising from existing customer projects. Today, TOSCA@BigData represents a paradigm shift in DWH/BI testing. With its TOSCA iQ (Intelligent Querying) component, TOSCA is expanding the possibilities of SQL queries and thus breaking through the barriers of conventional methods.

What to expect
TOSCA@BigData offers a performance boost in test execution, where run times can be reduced by up to 95 percent. TOSCA provides business-based test case definition and reduced query complexity, as well as dramatically reduced test case maintenance effort. In addition, it gives coverage of the entire DWH/BI process, with automated testing for a wide variety of result types, and the setup of test sets with high test coverage (greater than 90 percent) within very short implementation times (three to six months).




DWH/BI systems have generally evolved over time and they have mainly been set up and developed further in a series of small steps. There is hardly any regression testing to ensure existing functionality, and the lack of documentation from the business perspective makes it more difficult to set up ex post testing. Even if the functionality has been sufficiently tested, the quality of the data delivered often represents the greatest risk in the creation of correct reports. The TOSCA@BigData solution has been developed on the basis of the practical experience of experts in DWH/BI testing and specific requirements arising from customer projects.

Big data
TOSCA@BigData represents a paradigm shift in DWH/BI testing. Conventional DWH/BI test projects and tools use SQL queries (plain SQL) as the testing tools, which so far has resulted in a number of insurmountable constraints. TOSCA@BigData has used its TOSCA iQ (Intelligent Querying) component to expand the possibilities of SQL queries and thus to break the barriers of conventional methods. The use of plain SQL severely limits the test coverage that can be achieved in DWH/BI, and the impact of these constraints grows in proportion to the size and complexity of the DWH. TOSCA iQ is a tool for the intelligent grouping of queries with similar joins. It offers methodical test case design for defining test cases with the highest effectiveness (test cases that can be read from the business perspective), and parameterisation based on test case design in a simple language based on SQL. It also generates queries technically at run time, which minimises the number of required database scans. In practice, the run time of queries can be reduced by up to 95 percent – equivalent to a twenty-fold increase in performance.

Implementing Big Data
The implementation of TOSCA@BigData is based on the ideal test process in TOSCA Testsuite. The high volatility of the business-based requirements in DWH/BI necessitates the implementation of comprehensive tests in the shortest possible run time, and TOSCA@BigData and the capabilities of iQ (Intelligent Querying) provide optimal support for this requirement. In practice, TRICENTIS, together with its customers, has implemented regression testing that has achieved test coverage of greater than 90 percent within three to six months.

Franz Fuchsberger concludes: “TOSCA 7.5 makes it even easier to manage and monitor the progress and results of software testing, including Big Data and BI testing. Colour-coded trend charts visually represent risk coverage, the percentage of successful and failed test executions, and improvements over time. Now, management instantly knows the status of their software test coverage. In this way, TOSCA is ideal for the enterprise executive working in a highly regulated environment who wants to sleep well at night. TOSCA Testsuite assures the optimised level of test coverage for business process testing and saves up to 50 percent on the total costs of Big Data testing.”

www.tricentis.com





10 simple rules for purchasing ALM software
Follow these ten rules to get your application lifecycle management (ALM) processes started on the right path. Whether you are buying for the first time or just replacing your current system, this feature aims to provide concrete advice on buying and implementing an ALM system.


1. Create a project brief and identify requirements
Process – Begin by outlining and documenting all current business processes and procedures used during application development. The system should be able to grow and adapt with your organisation. Note any existing shortcomings and add those scenarios as criteria for your new system. Finally, list all of the business requirements that you have and create a matrix of their relative importance to your organisation. Teams typically classify criteria into three groups: ‘must have’, ‘nice to have’ and ‘could have’.
Metrics – Develop a short list of five or six key metrics that you want to capture and track over time to ensure that your investment continues to pay off and future changes are not detrimental to the original objective. One of the biggest mistakes organisations have made when implementing a new solution is either not tracking any metrics or only tracking metrics for six months to a year. Companies that don’t keep track of metrics pay in the long run, as they have no insight into their system. Best practice dictates that, in order to get the most from any system, Return on Investment (ROI) and system optimisation metrics need to be maintained for the lifetime of the system for organisations to benefit from its true potential.
People – Get buy-in from end-users, developers, QAs, PMs, and upper management to ensure maximum return. The trick is outlining the business benefits and impact on the bottom line – not just the number on the final invoice but how much else is gained: employee productivity, traceability, enhanced build quality, etc. Invite upper management to say a few words at the beginning of launch meetings and training sessions about the importance of the new system. ALM implementations are rarely successful without upper management’s support.

2. Budget and time scales
It is important to set a budget early on as it will save a lot of time when selecting vendors. The process is typically started with some basic ROI calculations that determine where you can make substantial savings. It is also important to create time scales for the selection, evaluation and implementation of the new system. When looking at the budget, be sure to evaluate both short- and long-term costs associated with the purchase to make educated decisions.
Another area to look at is Total Cost of Ownership (TCO). TCO must include all direct and indirect costs associated with the system, over a minimum of three to five years, to reflect the true cost over time. Make sure to compare costs of a new system against what you are currently paying. If the current system isn’t fulfilling your requirements and you are looking to replace it, why should you expect to pay the same price? Finding the solution that is right for you and can be customised to meet your unique needs is priority number one – not the price. Keeping this in mind, the system you end up with could even turn out to be cheaper. Often companies buy big product packages that give organisations a lot of features, but if they are not used they simply become wasted money and implementation time.

3. Scalability and architecture
It is important that a system has the capability of being extended – not only from the perspective of adding processes such as Customer Support or Source Control, but also from the perspective of adding hundreds or even thousands of users; users that may be distributed globally. It is vital that the underlying database be accessible for both additional reporting and data mining. Easily accessible and comprehensive APIs are also a worthwhile consideration as supplementation to database access.




Next, be sure the ALM system you’re evaluating can scale not only to thousands of users but also to thousands of database records. Economical solutions often seem great, but once the number of records builds up they can quickly become slow and unmanageable. Finally, check what processes are included out of the box; the minimum should be Agile, traditional and defect-tracking templates.

4. Information management & integration
A vital part of any ALM system is how it manages information such as knowledge, best practices, coding standards, FAQs, etc. For it to be effective, knowledge management needs to be easily and readily accessible for all team members to avoid wasting time. The best option is an integrated knowledge management or wiki-based system that can easily link to development artefacts. Artefacts can be anything from requirements, to stories, to test cases. In addition to linking to knowledge, artefacts should be able to flow seamlessly through the entire ALM process. This means the requirements system should be tightly integrated with resource management, development processes, and testing procedures. At any given moment you should be able to trace through the lifecycle of an artefact to find its status.

5. Browser support
Browser-based ALM tools are pretty much the standard now. However, be sure your ALM tool supports all major browsers, especially if your development team is working in a Linux- or Unix-based environment, and if the tool needs any plugins (Flash, Java, etc.), be sure your browsers support them. If you are going to host your ALM system, then be sure to designate who is in charge of maintaining the server. Make sure this individual understands the system’s hardware requirements, the deployment process, and how it scales to support more users.

6. Workflow and automation
The quickest way to improve ROI is to implement an efficient workflow with appropriate automations. The system should allow you to quickly set up and communicate to your team a simple workflow. It should also support restrictions based on field values, as well as state-based notifications. Automations should be available to move items through the workflow or create tasks based on templates; for example, you may frequently wish to create unit-testing tasks when a development item is created. An efficient workflow with proper automations will save your team countless hours and assist in preventing simple mistakes due to missed tasks or requirements.

7. Dashboards, reports & alerts
ALM systems should have a home page. Users should be able to customise report widgets, create drill-down pivot charts, choose page layouts and language settings, and easily organise their user interface by dragging and dropping the various page elements. Reporting should cover all the common areas and come with a range of ‘out of the box’ reports that are easily customisable. There should also be access to reports outside the system for casual users or managers who do not require use of the full system. System evaluators often forget to make sure a system has the ability to add third-party reporting. Alerts should be possible on all types of tasks and cover several mediums such as email, on-screen, dashboards, SMS, etc. You should have the option to colour-code tasks to quickly see what is urgent. This helps to escalate relevant alerts and to manage visibility.

8. Maintenance & administration
At a minimum, you should be able to maintain and administer the system without any vendor interaction. A core requirement is that changes to user interfaces, views, options and workflows can be made without programming, consultant help, or downtime. Another key consideration is how the upgrade process is executed: it should not impact any customisations or user-specific functions. Not only will you have to maintain the system as is, but you will also need to consider the ever-changing world of business requirements. Be sure the system will adapt to new business needs and development methods.

9. Licensing & deployment options There are real differences in how systems are licensed, and these need to be taken into account during the purchase and budget process. Nowadays there is a much larger market for hosted systems and software as a service (SaaS), which can deliver great savings in the short term. However, it is important to understand that if you use the system for more than three to four years, the total cost of ownership is typically much higher than for a traditionally licensed system. Furthermore, if you decide on a SaaS system, it is very important to ensure that you can export your data in case you want to move away from the selected platform.
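A back-of-envelope comparison shows why the three-to-four-year horizon matters. The prices in this Python sketch are invented purely for illustration; plug in real quotes to see where the crossover falls for your own shortlist.

# Illustrative TCO comparison: SaaS undercuts a perpetual licence early on
# but overtakes it after a few years. All figures are invented assumptions.
saas_per_year = 12_000        # assumed annual subscription
perpetual_upfront = 30_000    # assumed one-off licence cost
support_per_year = 4_000      # assumed annual maintenance on the licence

for years in range(1, 7):
    saas_total = saas_per_year * years
    perpetual_total = perpetual_upfront + support_per_year * years
    cheaper = "SaaS" if saas_total < perpetual_total else "perpetual"
    print(f"year {years}: SaaS £{saas_total:,} vs perpetual £{perpetual_total:,} -> {cheaper}")
# With these numbers the crossover lands in year four, matching the
# three-to-four-year rule of thumb above.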

10. Shortlisting & selecting a system Shortlist no more than five or six vendors that fit within the budget and scope of your project, then score these vendors against a checklist of the features and requirements you need. Be clear with vendors in advance so that demos are tailored to your requirements: the key to a great demo is giving the presenter your unique requirements and challenges beforehand, so that your needs are addressed with greater insight. After all, the vendor may have additional functions or features that would help you optimise your system that you may not have thought about. Explore all areas of the systems you evaluate and take into account things like how scalable the system is. Will the system cost you more to add additional processes or features in the future? Make sure that the system is able to capture all of your business/development processes and procedures. Also be sure you can customise without help from the vendor, as vendor dependence will inevitably lead to high costs. Finding a solution that fits all the requirements can be difficult, but taking the preceding advice into consideration will help in making the best choice for your company.

www.techexcel.com




Load testing: new approaches to fresh challenges According to Facilita, the pressure is on for testing professionals and tool vendors alike.

The recent RBS debacle will have concentrated minds in corporate boardrooms. Indeed, hardly a day seems to go by without a well-publicised incident involving a poorly performing corporate website or a system outage. Retaining customers, brand image, reputations and careers all depend on delivering reliable, high-performance IT systems.

Internet users are becoming intolerant of poor response times – they will subconsciously benchmark against the best (Google, Amazon etc.). In 2000 a person would wait an average of eight seconds for a page before navigating away; by 2009 this had fallen to three seconds (source: Akamai). Slow response times have a direct impact on the bottom line – Amazon has estimated that every 100ms of delay costs one percent of sales. Research by the Aberdeen Group suggests that a one-second increase in page load time results in a seven percent loss in conversions and 11 percent fewer page views. For a midsize commercial site earning £20K per day that translates to losing £0.5M in a year – and you can easily calculate the eye-watering potential losses for the giants of e-commerce.
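The arithmetic behind that £0.5M figure is easy to reproduce – here as a short Python check using the Aberdeen seven percent conversion-loss figure quoted above:

# A one-second slowdown costs seven percent of a £20K-per-day site's sales.
daily_revenue = 20_000
annual_loss = daily_revenue * 0.07 * 365
print(f"£{annual_loss:,.0f} per year")  # £511,000 – roughly £0.5M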

For both websites and back-office systems, responsiveness and reliability have enormous commercial implications. Load testing is crucial. Best practice incorporates comprehensive load testing throughout the development process, not just pre-production testing targeting the final system configuration. Of course, in many circumstances this is not possible – the development may be outsourced, or the system may consist of a heavily re-configured mixture of standard packages or application components. Load testing has become even more vital – yet has never been more challenging. Why should that be? Accelerating technological change and increasing complexity, the rise of 'mix-in' systems, virtualisation and elements of cloud hosting are part of the explanation. So is the move to a mixture of user interfaces (browser, mobile, thick client, web services and so on). As load testing usually consists of either emulating or driving client instances (to represent users), heterogeneous client technology is itself a significant challenge. So, how should a load test tool vendor respond?

Getting the fundamentals right The first thing is to get the fundamentals right. No to closed, inflexible tools; yes to extensibility that enables complex applications to be tested effectively, and yes to ease of integration with other tools. Yes to scalability and powerful automation features. In a fast-moving world it is important that a tool can handle a wide spectrum of technology and supports multiple Virtual User types. For instance, a tool that can only simulate users by generating HTTP traffic would not be able to cope with browser-hosted components that use different protocols or binary-encoded data. Your current system might use standard web techniques and desktop browsers, but will that be the case in the future? Would your testing benefit from applying load at different interfaces within a multi-tier architecture? Similar considerations apply to how the tool functionality is delivered, both technically and in terms of licensing and commercial models. Again, the guiding principles should be choice and flexibility, so that users can be confident that their investment in a tool will be long lasting and can survive both technology changes and changes in their own business. There have been several recent entrants to the load testing marketplace with cloud-based SaaS offerings. Cloud-based testing is a good approach for many, but it is surely desirable that the same toolset is also available for testing within the corporate firewall, and that test resources can be deployed seamlessly for both modes of testing. Test managers also look for flexibility in terms of licensing (including short-term rental) and supporting services. No single model can fulfil every need. Businesses want access to an optimal mixture of tools and related services (managed testing, consultancy, training, mentoring, and support) that will evolve with their requirements.


Evolving web technology What about the challenges of evolving web technology, such as Ajax asynchronous communication, push technology and new forms of data encoding? One approach is to drive real browser instances (or HtmlUnit, the open source test browser). The other is to script at the level of HTTP requests, with emulation of browser functionality (cookies, redirects, etc.). The latter approach is preferable in most circumstances, not least for reasons of scalability, provided that the full complexities of modern web applications are handled easily. It is important that both of these fundamental approaches to implementing Web Virtual Users are available in the same tool.
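As a rough sketch of the HTTP-level approach – not Facilita's own implementation – the fragment below uses Python's requests library, whose Session object emulates the browser behaviours mentioned above (cookies are kept across requests, and redirects are followed by default). A real load test tool would run many such virtual users concurrently; the URL, pages and think time here are illustrative.

# Minimal HTTP-level virtual user: emulates browser behaviour (cookies,
# redirects) without rendering pages. URL and think time are illustrative.
import time
import requests

def virtual_user(base_url, pages, think_time=1.0):
    session = requests.Session()                  # keeps cookies across requests
    for page in pages:
        start = time.monotonic()
        response = session.get(base_url + page)   # follows redirects by default
        elapsed = time.monotonic() - start
        print(f"{page}: {response.status_code} in {elapsed:.2f}s")
        time.sleep(think_time)                    # emulate user think time

virtual_user("https://example.com", ["/", "/login", "/basket"])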
In addition, offering both a Java and a .NET runtime that share a common core, with closely aligned scripting models and a common recorder/script generator, has many benefits. Providing a choice means leveraging the power of each platform as well as meeting the preferences of testers and developers in each camp. The greatest productivity challenge is test script creation and maintenance. Attempts have been made to create 'script-less' load test tools. These attempts reveal a paradox: the practical equivalent of script-less tests (no manual editing) can be achieved in practice by using intelligent rule-based recording/generation, but tools that enforce no scripts in principle represent a trap that springs after a seductive sales demo. Users may end up trapped in a world of 'death by a thousand mouse movements', maintenance nightmares and hitting a functionality wall. The realistic view is that tests are in themselves software. They should be simple if possible and may sometimes be disposable, but tools should encapsulate the best modern development techniques. Finally, load testing does not stand in isolation, and there are exciting prospects opening up to deploy Virtual User technology in production monitoring and to interface load testing effectively to deep server diagnostics, so that white box analysis can meet black box testing. www.facilita.co.uk




The bank glitch The recent disaster at RBS raises the issues of batch processing, outsourcing and personnel management, according to Mike Holcombe.

The recent disaster at RBS raises the issues of batch processing, outsourcing and personnel management, and suggests that a management decision to outsource some of the batch processing functions to India for reasons of cost might have been a false economy [1]. (Editor's note: at the time this issue went to press RBS chief executive Stephen Hester said there was "no evidence this is connected to outsourcing.") Naturally, information is in short supply, but the evidence is that an inexperienced team was expected to understand the complexities of the large-scale legacy IT systems that the bank and its customers depended on. What this tells us is that you may have the best processes in the world, but if the knowledge of how to carry out those processes to a high level of quality is missing, then anything can – and often does – happen. Never neglect the people, and what they may know that is not written down.

This highlights an important point. We may have the technology covered, in the sense that the software and processes are tested, but what about the people carrying out the work? There is a need to define more carefully the roles of people and their capabilities. We need to test whether they can do what they are supposed to do, and do it to an acceptable standard. In this case it might have been sensible to have a dummy run to check that the new people on the job knew what they were doing. I wonder if this was done? It all comes down to trying to understand how to judge whether the people employed have the skills and knowledge that any business-critical process demands. It's very easy to make sudden changes to a routine process in a system, but you have to think very hard about the associated risks. Presumably, no risk analysis was done about the outsourcing decision. If it was, then clearly some of the assumptions made were faulty. There may be ways of carrying out some formal testing of people in processes. What needs to be done is that the processes or workflows must be defined precisely and analysed to make sure that they are complete and consistent. Techniques for doing this already exist. What is more challenging, however, is how to deal with the people involved in the processes. Clearly, we can describe the levels of skills and training – and this was possibly one of the issues with the RBS outsourcing exercise – but there is also the complex issue of individual personality, which can have a big impact on how tasks are performed, especially group-based tasks. The Myers-Briggs test [2] is a test of personality – there are various others – and is being used in organisations to identify people's strengths and weaknesses, and could be relevant here. However, very little research or experience seems to be available that would form the basis of a practical way of dealing with the issues that seem to have cost the bank a large amount of money and loss of reputation.

Batch testing I'll end with a bit about batch testing, just to highlight its importance in keeping things running. So how does one provide assurance that the uploading of critical batch processing data is done correctly? The first issue is that batch jobs need to be scheduled, and this schedule needs to be tested or reviewed in some reliable way. To achieve that you need experience and expertise – something that is gained over many years and lost in an instant. We would expect that detailed documentation would have been left by the previous team, but it is only too easy for this to become out of date. A warning to us all! The batch will consist of a number of files in a standard format. There may be a very large number of these – potentially millions. The first task is to check that these are all of the correct format – this would be done automatically using suitable scripts. Some sort of consistency checks will also need to be done to make sure that the information is complete.
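A format check of the kind described – run over every record, with a simple consistency check against a trailer count – might look like the following Python sketch. The fixed-field record layout is an invented assumption; the real formats are obviously not public.

# Automated batch-file check: validate each record's format, then confirm
# the trailer's declared count matches the records seen. Layout is invented.
import re

RECORD = re.compile(r"^PAY\|\d{8}\|[A-Z0-9]{8}\|\d+\.\d{2}$")  # assumed layout

def check_batch(lines):
    errors, count = [], 0
    for n, line in enumerate(lines, 1):
        if line.startswith("TRL|"):                  # trailer record
            declared = int(line.split("|")[1])
            if declared != count:
                errors.append(f"trailer says {declared}, found {count}")
        elif RECORD.match(line):
            count += 1
        else:
            errors.append(f"line {n}: bad format: {line!r}")
    return errors

batch = ["PAY|20120628|ACCT0001|150.00", "PAY|20120628|ACCT0002|75.50", "TRL|2"]
print(check_batch(batch) or "batch OK")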

The schedule that controls the submission of the batch must also be tested. One would hope that there is some model of the mainframe environment that could be used to test the schedule's behaviour. Such models could be criticised because they cannot reflect all of the many dimensions and behaviours that such a mainframe might exhibit, but the key point is that when the batch is run there is very little else going on – except, perhaps, the installation of a patch, which would need very great care from the systems managers. Again, one would expect that those installing the patch would have some sort of test system that replicated the running of batch files at the same time. I guess that the testing of batch files is probably one of the least exciting aspects of what many mistakenly think of as an unexciting career as a tester. However, if we ignore both the technical and the people aspects of the issue, the cost could amount to millions – if not billions!

References: [1] http://www.theregister.co.uk/2012/06/28/rbs_job_cuts_and_offshoring_software_glitch/ [2] http://www.humanmetrics.com/cgi-win/jtypes2.asp

Mike Holcombe Founder and director epiGenesys Ltd www.epigenesys.co.uk




Facilita Facilita load testing solutions deliver results Facilita has created the Forecast™ product suite, which is used across multiple business sectors to performance test applications, websites and IT infrastructures of all sizes and complexity. With class-leading software and unbeatable support and services, Facilita will help you ensure that your IT systems are reliable, scalable and tuned for optimal performance.

Forecast™ is proven, effective and innovative A sound investment: Choosing the optimal load testing tool is crucial, as the risks and costs associated with inadequate testing are enormous. Load testing is challenging, and without the right tool and vendor support it will consume expensive resources and still leave a high risk of disastrous system failure. Forecast has been created to meet the challenges of load testing now and in the future. The core of the product is tried and trusted, incorporates more than a decade of experience, and is designed to evolve in step with advances in technology. Realistic load testing: Forecast tests the reliability, performance and scalability of IT systems by realistically simulating from one to many thousands of users executing a mix of business processes using individually configurable test data. Comprehensive technology support: Forecast provides one of the widest ranges of protocol support of any load testing tool. 1. Forecast Web thoroughly tests web-based applications and web services, identifies system bottlenecks, improves application quality and optimises network and server infrastructures. Forecast Web supports a comprehensive and growing list of protocols, standards and data formats including HTTP/HTTPS, SOAP, XML, JSON and Ajax. 2. Forecast Java is a powerful and technically advanced solution for load testing Java applications. It targets any non-GUI client-side Java API, with support for all Java remoting technologies including RMI, IIOP, CORBA and Web Services. 3. Forecast Citrix simulates multiple Citrix clients and validates the Citrix environment for scalability and reliability, in addition to the performance of the published applications. This non-intrusive approach provides very accurate client performance measurements, unlike server-based solutions. 4. Forecast .NET simulates multiple concurrent users of applications with client-side .NET technology.

5. Forecast WinDriver is a unique solution for performance testing Windows applications that are impossible or uneconomical to test using other methods, or where user experience timings are required. WinDriver automates the client user interface and can control from one to many hundreds of concurrent client instances or desktops. 6. Forecast can generate intelligent load at the IP socket level (TCP or UDP) to test systems with proprietary messaging protocols, and also supports the OSI protocol stack. Powerful yet easy to use: Testers like using Forecast because of its power and flexibility. Creating working tests is made easy with Forecast's application recording and script generation features and the ability to rapidly compose complex test scenarios with a few mouse clicks.


Supports Waterfall and Agile (and everything in between): Forecast has the features demanded by QA teams, such as automatic test script creation, test data management, real-time monitoring and comprehensive charting and reporting. Forecast is successfully deployed in Agile "Test Driven Development" (TDD) environments and integrates with automated test (continuous build) infrastructures. The functionality of Forecast is fully programmable and test scripts are written in standard languages (Java, C# and C++). Forecast provides the flexibility of open source alternatives along with comprehensive technical support and the features of a high-end commercial tool. Monitoring: Forecast integrates with leading solutions such as dynaTrace to provide enhanced server monitoring and diagnostics during testing. Forecast Virtual User technology can also be deployed to generate synthetic transactions within a production monitoring solution. Facilita now offers a lightweight monitoring dashboard in addition to integration with comprehensive enterprise APM solutions. Flexible licensing: Our philosophy is to provide maximum value and to avoid hidden costs. Licences can be bought on a perpetual or subscription basis, and short-term project licensing is also available with a "stop-the-clock" option.

Services Supporting our users In addition to comprehensive support and training, Facilita offers mentoring by experienced consultants either to ‘jump start’ a project or to cultivate advanced testing techniques. Testing services Facilita can supplement test teams or supply fully managed testing services, including Cloud based solutions.


Facilita Tel: +44 (0) 1260 298109 Email: enquiries@facilita.co.uk Web: www.facilita.com





Parasoft Improving productivity by delivering quality as a continuous process For over 20 years Parasoft has been studying how to efficiently create quality computer code. Our solutions leverage this research to deliver automated quality assurance as a continuous process throughout the SDLC. This promotes strong code foundations, solid functional components, and robust business processes. Whether you are delivering Service-Oriented Architectures (SOA), evolving legacy systems, or improving quality processes – draw on our expertise and award-winning products to increase productivity and the quality of your business applications.

What we do Parasoft's full-lifecycle quality platform ensures secure, reliable, compliant business processes. It was built from the ground up to prevent errors involving the integrated components – as well as to reduce the complexity of testing in today's distributed, heterogeneous environments. Parasoft's SOA solution allows you to discover and augment expectations around design/development policy and test case creation. These defined policies are automatically enforced, allowing your development team to prevent errors instead of finding and fixing them later in the cycle. This significantly increases team productivity and consistency.

End-to-end testing: Continuously validate all critical aspects of complex transactions, which may extend through web interfaces, backend services, ESBs, databases, and everything in between.

Advanced web app testing: Guide the team in developing robust, noiseless regression tests for rich and highly-dynamic browser-based applications.

Specialised platform support: Access and execute tests against a variety of platforms (AmberPoint, HP, IBM, Microsoft, Oracle/BEA, Progress Sonic, Software AG/webMethods, TIBCO).

Security testing: Prevent security vulnerabilities through penetration testing and execution of complex authentication, encryption, and access control test scenarios.

Trace code execution: Provide seamless integration between SOA layers by identifying, isolating, and replaying actions in a multi-layered system.

Continuous regression testing: Validate that business processes continuously meet expectations across multiple layers of heterogeneous systems. This reduces the risk of change and enables rapid and agile responses to business demands.

Multi-layer verification: Ensure that all aspects of the application meet uniform expectations around security, reliability, performance, and maintainability.

Policy enforcement: Provide governance and policy-validation for composite applications in BPM, SOA, and cloud environments to ensure interoperability and consistency across all SOA layers. Please contact us to arrange either a one to one briefing session or a free evaluation.

Application behavior virtualisation: Automatically emulate the behavior of services, then deploy them across multiple environments – streamlining collaborative development and testing activities. Services can be emulated from functional tests or actual runtime environment data.

Load/performance testing: Verify application performance and functionality under heavy load. Existing end-to-end functional tests are leveraged for load testing, removing the barrier to comprehensive and continuous performance monitoring.

Parasoft Email: sales@parasoft-uk.com Web: www.parasoft.com Tel: +44 (0) 208 263 6005





Seapine Software™

With over 8,500 customers worldwide, Seapine Software Inc is a recognised, award-winning, leading provider of quality-centric application lifecycle management (ALM) solutions. With headquarters in Cincinnati, Ohio and offices in London, Melbourne, and Munich, Seapine is uniquely positioned to directly provide sales, support, and services around the world. Built on flexible architectures using open standards, Seapine Software’s cross-platform ALM tools support industry best practices, integrate into all popular development environments, and run on Microsoft Windows, Linux, Sun Solaris, and Apple Macintosh platforms. Seapine Software's integrated software development and testing tools streamline your development and QA processes – improving quality, and saving you significant time and money.

TestTrack RM TestTrack RM centralises requirements management, enabling all stakeholders to stay informed of new requirements, participate in the review process, and understand the impact of changes on their deliverables. Easy to install, use, and maintain, TestTrack RM features comprehensive workflow and process automation, easy customisability, advanced filters and reports, and role-based security. Whether as a standalone tool or part of Seapine’s integrated ALM solution, TestTrack RM helps teams keep development projects on track by facilitating collaboration, automating traceability, and satisfying compliance needs.

TestTrack Pro TestTrack Pro is a powerful, configurable, and easy to use issue management solution that tracks and manages defects, feature requests, change requests, and other work items. Its timesaving communication and reporting features keep team members informed and on schedule. TestTrack Pro supports MS SQL Server, Oracle, and other ODBC databases, and its open interface is easy to integrate into your development and customer support processes.

TestTrack TCM TestTrack TCM, a highly scalable, cross-platform test case management solution, manages all areas of the software testing process including test case creation, scheduling, execution, measurement, and reporting. Easy to install, use, and maintain, TestTrack TCM features comprehensive workflow and process automation, easy customisability, advanced filters and reports, and role-based security. Reporting and graphing tools, along with user-definable data filters, allow you to easily measure the progress and quality of your testing effort.

QA Wizard Pro QA Wizard Pro completely automates the functional and regression testing of Web, Windows, and Java applications, helping quality assurance teams increase test coverage. Featuring a next-generation scripting language, QA Wizard Pro includes advanced object searching, smart matching, a global application repository, data-driven testing support, validation checkpoints, and built-in debugging. QA Wizard Pro can be used to test popular languages and technologies like C#, VB.NET, C++, Win32, Qt, AJAX, ActiveX, JavaScript, HTML, Delphi, Java, and Infragistics Windows Forms controls.

Surround SCM Surround SCM, Seapine’s cross-platform software configuration management solution, controls access to source files and other development assets, and tracks changes over time. All data is stored in industry-standard relational database management systems for greater security, scalability, data management, and reporting. Surround SCM’s change automation, caching proxy server, labels, and virtual branching tools streamline parallel development and provide complete control over the software change process.

www.seapine.com Phone: +44 (0) 208-899-6775 Email: salesuk@seapine.com United Kingdom, Ireland, and Benelux: Seapine Software Ltd. Building 3, Chiswick Park, 566 Chiswick High Road, Chiswick, London, W4 5YA UK Americas (Corporate Headquarters): Seapine Software, Inc. 5412 Courseview Drive, Suite 200, Mason, Ohio 45040 USA Phone: 513-754-1655





Micro Focus Deliver better software, faster. Software quality that matches requirements and testing to business needs. Making sure that business software delivers precisely what is needed, when it is needed is central to business success. Getting it right first time hinges on properly defined and managed requirements, the right testing and managing change. Get these right and you can expect significant returns: Costs are reduced, productivity increases, time to market is greatly improved and customer satisfaction soars. The Borland software quality solutions from Micro Focus help software development organizations develop and deliver better applications through closer alignment to business, improved quality and faster, stronger delivery processes – independent of language or platform. Combining Requirements Definition and Management, Testing and Software Change Management tools, Micro Focus offers an integrated software quality approach that is positioned in the leadership quadrant of Gartner Inc’s Magic Quadrant. The Borland Solutions from Micro Focus are both platform and language agnostic – so whatever your preferred development environment you can benefit from world class tools to define and manage requirements, test your applications early in the lifecycle, and manage software configuration and change.

Requirements Defining and managing requirements is the bedrock for application development and enhancement. Micro Focus uniquely combines requirements definition, visualization, and management into a single '3-Dimensional' solution, giving managers, analysts and developers precise detail for engineering their software. By cutting ambiguity, the direction of development and QA teams is clear, strengthening business outcomes. For one company this delivered an ROI of 6-8 months, a 20% increase in project success rates, a 30% increase in productivity and a 25% increase in asset re-use. Using Micro Focus tools to define and manage requirements helps your teams: • Collaborate, using pictures to build mindshare, drive a common vision and share responsibility with role-based review and simulations. • Reduce waste by finding and removing errors earlier in the lifecycle, eliminating ambiguity and streamlining communication. • Improve quality by taking the business need into account when defining the test plan. Caliber® is an enterprise software requirements definition and management suite that facilitates collaboration, impact analysis and communication, enabling software teams to deliver key project milestones with greater speed and accuracy.

Software Change Management StarTeam® is a fully integrated, cost-effective software change and configuration management tool. Designed for both centralized and geographically distributed software development environments, it delivers: • A single source of key information for distributed teams • Streamlined collaboration through a unified view of code and change requests • Industry leading scalability combined with low total cost of ownership

Testing Automating the entire quality process, from inception through to software delivery, ensures that tests are planned early and synchronize with business goals even as requirements and realities change. Leaving quality assurance to the end of the lifecycle is expensive and wastes improvement opportunities. Micro Focus delivers a better approach: Highly automated quality tooling built around visual interfaces and reusability. Tests can be run frequently, earlier in the development lifecycle to catch and eliminate defects rapidly. From functional testing to cloud-based performance testing, Micro Focus tools help you spot and correct defects rapidly across the application portfolio, even for Web 2.0 applications. Micro Focus testing solutions help you: • Align testing with a clear, shared understanding of business goals focusing test resources where they deliver most value • Increase control through greater visibility over all quality activities • Improve productivity by catching and driving out defects faster Silk is a comprehensive automated software quality management solution suite which enables users to rapidly create test automation, ensuring continuous validation of quality throughout the development lifecycle. Users can move away from manual-testing dominated software lifecycles, to ones where automated tests continually test software for quality and improve time to market.

Take testing to the cloud Users can test and diagnose Internet-facing applications under immense global peak loads on the cloud without having to manage complex infrastructures. Among other benefits, SilkPerformer® CloudBurst gives development and quality teams: • Simulation of peak demand loads through onsite and cloud-based resources for scalable, powerful and cost-effective peak load testing • Web 2.0 client emulation to test even today's rich internet applications effectively Micro Focus, a member of the FTSE 250, provides innovative software that enables companies to dramatically improve the business value of their enterprise applications. Micro Focus Enterprise Application Modernization, Testing and Management software enables customers' business applications to respond rapidly to market changes and embrace modern architectures with reduced cost and risk.

For more information, please visit www.microfocus.com/solutions/softwarequality





Original Software Delivering quality through innovation With a world class record of innovation, Original Software offers a solution focused completely on the goal of effective software quality management. By embracing the full spectrum of Application Quality Management (AQM) across a wide range of applications and environments, we partner with customers and help make quality a business imperative. Our solutions include a quality management platform, manual testing, test automation and test data management software, all delivered with the control of business risk, cost, time and resources in mind. Our test automation solution is particularly suited for testing in an agile environment.

Setting new standards for application quality Managers responsible for quality must be able to implement processes and technology that will support their important business objectives in a pragmatic and achievable way, and without negatively impacting current projects. These core needs are what inspired Original Software to innovate and provide practical solutions for Application Quality Management (AQM) and Automated Software Quality (ASQ). We have helped customers achieve real successes by implementing an effective ‘application quality eco-system’ that delivers greater business agility, faster time to market, reduced risk, decreased costs, increased productivity and an early return on investment. Our success has been built on a solution suite that provides a dynamic approach to quality management and automation, empowering all stakeholders in the quality process, as well as uniquely addressing all layers of the application stack. Automation has been achieved without creating a dependency on specialised skills and by minimising ongoing maintenance burdens.

An innovative approach Innovation is in the DNA at Original Software. Our intuitive solution suite directly tackles application quality issues and helps you achieve the ultimate goal of application excellence.

Empowering all stakeholders The design of the solution helps customers build an ‘application quality eco-system’ that extends beyond just the QA team, reaching all the relevant stakeholders within the business. Our technology enables everyone involved in the delivery of IT projects to participate in the quality process – from the business analyst to the business user and from the developer to the tester. Management executives are fully empowered by having instant visibility of projects underway.

Quality that is truly code-free We have observed the script maintenance and exclusivity problems caused by code-driven automation solutions, and have built a solution suite that requires no programming skills. This empowers all users to define and execute their tests without the need to use any kind of code, freeing them from the automation-specialist bottleneck. Not only is our technology easy to use, but quality processes are accelerated, allowing for faster delivery of business-critical projects.

Top to bottom quality Quality needs to be addressed at all layers of the business application. We give you the ability to check every element of an application – from the visual layer, through to the underlying service processes and messages, as well as into the database.

Addressing test data issues Data drives the quality process and as such cannot be ignored. We enable the building and management of a compact test environment from production data quickly and in a data privacy compliant manner, avoiding legal and security risks. We can also manage the state of that data, so that it is synchronised with test scripts, enabling swift recovery and shortening test cycles.

A holistic approach to quality Our integrated solution suite is uniquely positioned to address all the quality needs of an application, regardless of the development methodology used. Being methodology neutral, we can help in Agile, Waterfall or any other project type. We provide the ability to unite all aspects of the software quality lifecycle. Our solution helps manage the requirements, design, build, test planning and control, test execution, test environment and deployment of business applications from one central point that gives everyone involved a unified view of project status and avoids the release of an application that is not ready for use.

Helping businesses around the world Our innovative approach to solving real pain-points in the Application Quality Life Cycle has been recognised by leading multinational customers and industry analysts alike. In a 2011 report, Ovum stated: “While other companies have diversified, into other test types and sometimes outside testing completely, Original Software has stuck more firmly to a value proposition almost solely around unsolved challenges in functional test automation. It has filled out some yawning gaps and attempted to make test automation more accessible to non-technical testers.” More than 400 organisations operating in over 30 countries use our solutions and we are proud of partnerships with the likes of Coca-Cola, Unilever, HSBC, Barclays Bank, FedEx, Pfizer, DHL, HMV and many others.

www.origsoft.com Email: solutions@origsoft.com Tel: +44 (0)1256 338 666 Fax: +44 (0)1256 338 678 Grove House, Chineham Court, Basingstoke, Hampshire, RG24 8AG





Green Hat The Green Hat difference In one software suite, Green Hat automates the validation, visualisation and virtualisation of unit, functional, regression, system, simulation, performance and integration testing, as well as performance monitoring. Green Hat offers code-free and adaptable testing from the User Interface (UI) through to back-end services and databases. Reducing testing time from weeks to minutes, Green Hat customers enjoy rapid payback on their investment. Green Hat's testing suite supports quality assurance across the whole lifecycle, and different development methodologies including Agile and test-driven approaches. Industry vertical solutions using protocols like SWIFT, FIX, IATA or HL7 are all simply handled. Unique pre-built quality policies enable governance, and the re-use of test assets promotes high efficiency. Customers experience value quickly through the high usability of Green Hat's software. Focusing on minimising manual and repetitive activities, Green Hat works with other application lifecycle management (ALM) technologies to provide customers with value-add solutions that slot into their Agile testing, continuous testing, upgrade assurance, governance and policy compliance. Enterprises invested in HP and IBM Rational products can simply extend their test and change management processes to the complex test environments managed by Green Hat and get full integration. Green Hat provides the broadest set of testing capabilities for enterprises with a strategic investment in legacy integration, SOA, BPM, cloud and other component-based environments, reducing the risk and cost associated with defects in processes and applications. The Green Hat difference includes: • Purpose-built end-to-end integration testing of complex events, business processes and composite applications. Organisations benefit by having UI testing combined with SOA, BPM and cloud testing in one integrated suite. • Unrivalled insight into the side-effect impacts of changes made to composite applications and processes, enabling a comprehensive approach to testing that eliminates defects early in the lifecycle. • Virtualisation of missing or incomplete components to enable system testing at all stages of development. Organisations benefit by being unhindered by unavailable systems or costly access to third-party systems, licences or hardware. Green Hat pioneered 'stubbing', and organisations benefit by having virtualisation as an integrated function, rather than a separate product.

• 'Out-of-the-box' support for over 70 technologies and platforms, as well as transport protocols for industry vertical solutions. Also provided is an application programming interface (API) for testing custom protocols, and integration with UDDI registries/repositories. • Helping organisations at an early stage of project or integration deployment to build an appropriate testing methodology as part of a wider SOA project methodology.

• Scaling out these environments, test automations and virtualisations into the cloud, with seamless integration between Green Hat's products and leading cloud providers, freeing you from the constraints of real hardware without the administrative overhead. • 'Out-of-the-box' deep integration with all major SOA and enterprise service bus (ESB) platforms, BPM runtime environments, governance products, and application lifecycle management (ALM) products.

Corporate overview Since 1996, Green Hat has constantly delivered innovation in test automation. With offices that span North America, Europe and Asia/Pacific, Green Hat's mission is to simplify the complexity associated with testing, and make processes more efficient. Green Hat delivers the market-leading combined, integrated suite for automated, end-to-end testing of the legacy integration, Service Oriented Architecture (SOA), Business Process Management (BPM) and emerging cloud technologies that run Agile enterprises. Green Hat partners with global technology companies including HP, IBM, Oracle, SAP, Software AG, and TIBCO to deliver unrivalled breadth and depth of platform support for highly integrated test automation. Green Hat also works closely with the horizontal and vertical practices of global system integrators including Accenture, Atos Origin, CapGemini, Cognizant, CSC, Fujitsu, Infosys, Logica, Sapient, Tata Consulting and Wipro, as well as a significant number of regional and country-specific specialists. Strong partner relationships help deliver on customer initiatives, including testing centres of excellence. Supporting the whole development lifecycle and enabling early and continuous testing, Green Hat's unique test automation software increases organisational agility, improves process efficiency, assures quality, lowers costs and mitigates risk.

Helping enterprises globally Green Hat is proud to have hundreds of global enterprises as customers, and this number does not include the consulting organisations who are party to many of these installations with their own staff or outsourcing arrangements. Green Hat customers enjoy global support and cite outstanding responsiveness to their current and future requirements. Green Hat's customers span industry sectors including financial services, telecommunications, retail, transportation, healthcare, government, and energy.

sales@greenhat.com www.greenhat.com





T-Plan T-Plan has supplied best-of-breed testing solutions since 1990. The T-Plan method and tools allow both the business unit manager and the IT manager to manage costs, reduce business risk and regulate the process. By providing order, structure and visibility throughout the development lifecycle, from planning to execution, they accelerate the 'time to market' for business solutions. The T-Plan Product Suite allows you to manage every aspect of the testing process, providing a consistent and structured approach to testing at the project and corporate level.

What we do Test Management: The T-Plan Professional product is modular in design, clearly differentiating between the Analysis, Design, Management and Monitoring of the Test Assets. It helps answer questions such as:
• What coverage back to requirements has been achieved in our testing so far?
• What requirement successes have we achieved so far?
• Can I prove that the system is really tested?
• If we go live now, what are the associated Business Risks?

Test Automation: Cross-platform, Java-based test automation is integrated into the suite via T-Plan Robot, creating a full testing solution. T-Plan Robot Enterprise is the most flexible and universal black box test automation tool on the market. Providing a human-like approach to software testing of the user interface, and uniquely built on Java, Robot performs well in situations where other tools may fail.
• Platform independence (Java). T-Plan Robot runs on, and automates, all major systems, such as Windows, Mac, Linux, Unix, Solaris, and mobile platforms such as Android, iPhone, Windows Mobile, Windows CE, Symbian.
• Test almost ANY system. As automation runs at the GUI level, via the use of VNC, the tool can automate any application – e.g. Java, C++/C#, .NET, HTML (web/browser), mobile and command line interfaces – as well as applications usually considered impossible to automate, like Flash/Flex.

Incident Management: Errors or queries found during the Test Execution can also be logged and tracked throughout the Testing Lifecycle in the T-Plan Incident Manager. "We wanted an integrated test management process; T-Plan was very flexible and excellent value for money." – Francesca Kay, Test Manager, Virgin Mobile






the last word... What would you say you do here? While it is his raison d'être, Dave Whalen is pondering the necessity of going on a real bug hunt

I once had a development manager approach me and advise me that he had just hired a developer he had known for years. He bragged that in all the years he had worked with this developer, the developer had only created five bugs. I maintained my composure and tried hard not to burst into hysterical laughter. This developer had obviously never met me.

What do I do here? I am a Software Entomologist – bugs are my life! I study them. I know where they live. I will find them. There is no such thing as perfect software and I'll prove it! In this case I did. As the project's test manager, I assigned myself this developer's features and proceeded to test them. I have to admit, the code was very well written. The bugs were hard to find, but I found them – eleven in total. But I had to work really hard to find them and it ate up a lot of valuable time in the test schedule. Was it worth it? Maybe. I've been giving a lot of thought to what we do lately. I tend to read a lot of things from my fellow software testers and from those I like to refer to as 'Test Celebrities' (you know, the ones that write the books and speak at the conferences). Someone actually recognised me at a conference once. I was shocked. I suppose that makes me a minor test celebrity. Whatever. Anyway, I digress. It's nice to know what the celebrities are up to, what the latest trends are, etc. I have found that lately they seem to spend a lot of time telling us how to find those really elusive, hard-to-find bugs in an application. One celebrity in particular is somewhat obsessed with the subject. I have to admit, I blindly followed their advice and have built a reputation as someone who can typically find those elusive bugs. But is that what I should be doing? I don't think so.


At the other end of the spectrum are the software architects or project managers who think you do too much testing. If only it were that simple! Most managers don't understand the value of negative testing. All they want is a quick positive, or 'happy path', test. Until the user finds a problem, anyway. Many developers don't get it either. When I find a bug as a result of a negative test I'm usually asked: "Why did you do that?" My answer is always the same: "Because I could!" After a while it becomes a challenge to them to build code I can't break. Many have tried. Many have failed. It can't be done. It probably can – just don't tell them. Basically, our job is to make sure the application works as it was designed. We verify that the application is functional and usable for the target user. We ensure that the user is gracefully led back onto the correct path should they go astray – and they will go astray. Users, especially in web applications where there is little or no documentation, sometimes tend to get lost or do things they really shouldn't do. Those are the things that I need to brainstorm and test for. There is a lot more value in me entering invalid data, submitting empty forms, violating database constraints, and stuff like that than there is in holding down the Alt key while pressing the Q key during a leap year. Sure, that may break something, but do we really care? What are the odds a typical user (not one like me) would do that? By the way, in the event that you do encounter a user like me – thank them and buy them a cup of coffee. If the odds are pretty good that a user could do it – test it. If the odds are pretty low... maybe later, if there is time left in the schedule (because there is always time left over at the end of the schedule, right?). Well thought out negative testing adds value.
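To make that concrete, here is a minimal sketch of the kind of negative test described above – checking that an empty form submission is rejected gracefully rather than crashing. The validate_form function is a hypothetical stand-in for the application under test, not code from any product mentioned in this issue.

# Sketch of a negative test: an empty form submission should be rejected
# gracefully. validate_form is a hypothetical stand-in for the app under test.
def validate_form(data):
    errors = {}
    if not data.get("email", "").strip():
        errors["email"] = "required"
    if not data.get("amount", "").strip():
        errors["amount"] = "required"
    return errors

def test_empty_form_is_rejected():
    # The 'user gone astray' case: nothing filled in at all.
    assert validate_form({}) == {"email": "required", "amount": "required"}

test_empty_form_is_rejected()
print("negative test passed")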

I think the jury is still out on exploratory testing and bug quests. I may be wrong. I've been wrong before and I'm sure I'll be wrong again. I guess that is why I'm not a huge fan of exploratory testing (as I understand it anyway). I prefer a well-structured, methodical, highly repeatable approach – both positive and negative testing. Is there some value to exploratory testing? Maybe. If I have time, I'll go on a bug quest. I will find bugs. Developers will despise me, but hopefully most will be low priority bugs and they will get fixed eventually. But since I rarely have the time in any test cycle to do the testing that I need to do, I don't really see much exploratory testing happening. Can we find all the bugs in a software application? Given sufficient time and resources, I think we can get very close. But it would take so long to test, and as a result be so expensive, that it wouldn't be economically feasible. But I do love a dare.

Dave Whalen

President and senior software entomologist Whalen Technologies softwareentomologist.wordpress.com



INNOVATION FOR SOFTWARE QUALITY

Subscribe to TEST free!

[Cover images of recent TEST issues: Volume 4, Issues 1-3 – February, April and June 2012]
For exclusive news, features, opinion, comment, directory, digital archive and much more visit

www.testmagazine.co.uk

Published by 31 Media Ltd Telephone: +44 (0) 870 863 6930 Facsimile: +44 (0) 870 085 8837 Email: info@31media.co.uk Website: www.31media.co.uk



Create, develop and deliver better software faster with Borland

Deliver it right, deliver it better and deliver it faster Gather, refine and organize requirements – align what you develop with what your users need. Accelerate reliable, efficient and scalable testing to deliver higher quality software. Continuously improve the software you deliver – track code changes, defects and everything important in collaborative software delivery. Give your users the experience they expect with better design, control and delivery.

Work the way you want. The way your team wants. The way your users want with Borland solutions.

© 2012 Micro Focus Limited. All rights reserved. MICRO FOCUS, the Micro Focus logo, among others, are trademarks or registered trademarks of Micro Focus Limited or its subsidiaries or affiliated companies in the United Kingdom, United States and other countries. All other marks are the property of their respective owners.

