SD Times - September 2018


SEPTEMBER 2018 • VOL. 2, ISSUE 15 • $9.95






News Watch


W3C winds down XML work


Living on the edge: IoT is the future, and security is playing catch-up


Digital transformation starts with culture


Why writing clean code matters


How GraphQL is competing with REST


Providing insight, visibility into code to gauge its value


Higher education benefits are win/win for software engineers


JRebel speeds all Java app development

Scaling Scrum is Just Scrum

page 8

5 Ways Static Code Analysis Can Save You


GUEST VIEW by Devin Gharibian-Saki: Avoid security pitfalls with automation


ANALYST VIEW by Rob Enderle: The advantages of Moto Z


INDUSTRY WATCH by David Rubinstein: Software still starts with requirements

page 28

Software Testing SHOWCASE pages 37-38

Micro Focus on the growth of intelligent testing


Choose Mobile Labs for Appium success


Parasoft Simplifies API Testing


Tricentis Continuous Testing Platform

Java 11 delivers high-quality features at speed page 46

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2018 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at



EDITORIAL
EDITOR-IN-CHIEF David Rubinstein
NEWS EDITOR Christina Cardoza


CONTRIBUTING WRITERS Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz
CONTRIBUTING ANALYSTS Cambashi, Enderle Group, Gartner, IDC, Ovum



PUBLISHER David Lyman 978-465-2351


D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803




NEWS WATCH

Google looks to revive Dart programming language with v2

Google is trying to revive the Dart programming language as a mainstream language for mobile and web development with its latest announcement. Dart 2 features a complete rewrite of the web platform with a strong focus on productivity, performance and scalability. “Dart 2 marks the rebirth of Dart as a mainstream programming language focused on enabling a fast development and great user experiences for mobile and web applications,” Kevin Moore, product manager for Google, wrote in a post. “We want to enable developers building client applications to be productive, with a language, framework and components that reduce boilerplate and let them concentrate on business logic, along with tooling that identifies errors early, enables powerful debugging and delivers small, fast runtime code.” A lot of the work in Dart 2 went into cleaning up the language, adding more support for web and mobile frameworks, and providing more tools and components to support Dart outside of Google.

GitLab creates new open-source tool for data teams

GitLab has revealed it is working on a new tool for the data science lifecycle. Meltano is an open-source solution designed to fill the gaps between data and understanding business operations. “Meltano was created to help fill the gaps by expanding the common data store to support Customer Success, Customer Support, Product teams, and Sales and Marketing,” the team wrote in a post. “Meltano aims to be a complete solution for data teams — the name stands for model, extract, load, transform, analyze, notebook, orchestrate — in other words, the data science lifecycle. While this might sound familiar if you’re already a fan of GitLab, Meltano is a separate product. Rather than wrapping Meltano into GitLab, Meltano will be the complete package for data people, whereas GitLab is the complete package for software developers.” The goal of Meltano is to make analytics accessible to everyone, not just data professionals. According to GitLab, while the company has experience collecting data and presenting it in a readable format to business users in order to make predictions based on the data, it takes too many steps and tools to complete this.

Android 9 now available

After over a year of being in development, Android 9 Pie is finally here. Google pushed the source code to Android Open Source Project and began rolling it out to Pixel devices last month. The new version adds performance improvements via the Android runtime (ART) by expanding its use of execution profiles for optimizing apps and reducing memory footprints. Android 9 is also optimized for Kotlin, with several new compiler optimizations. Google is continuing to work with JetBrains to further optimize Kotlin’s generated code. In addition, Android 9 includes many new features and improvements. According to Google, it is “a smarter smartphone, with machine learning at the core.” With Android 9, your phone will learn as you use it, picking up on your preferences over time. “Android 9 harnesses the power of machine learning to make your phone smarter, simpler, and tailored to you,” according to Dave Burke, VP of engineering for Android at Google.

Prometheus second project to graduate CNCF incubation

The Cloud Native Computing Foundation (CNCF) has announced the systems and service monitoring framework Prometheus has graduated from the foundation’s incubation program. This makes it the second project to meet the CNCF’s criteria for graduation, after Kubernetes in March of this year. The open-source infrastructure monitoring platform’s new status means that it has met the CNCF’s requirements that the project “demonstrate thriving adoption, a documented, structured governance process, and a strong commitment to community sustainability and inclusivity,” and adheres to the CNCF Code of Conduct, among other accomplishments, according to the foundation. Prometheus was started in 2012 by developers at SoundCloud to monitor their burgeoning microservice infrastructure and has since become a rapidly growing part of the Kubernetes ecosystem, the CNCF says. It entered incubation at the CNCF in 2016 with its version 1.0 release. “Since its inception in 2012, Prometheus has become one of the top open-source monitoring tools of choice for enterprises building modern cloud native applications,” said Chris Aniszczyk, COO of the CNCF, in the announcement.
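For context on what the project does: Prometheus periodically scrapes metrics over HTTP from configured targets. A minimal, illustrative prometheus.yml — the job name and target address here are placeholders, not from the announcement — looks like:

```yaml
# Minimal illustrative Prometheus configuration.
global:
  scrape_interval: 15s        # how often to pull metrics from each target

scrape_configs:
  - job_name: "my-service"    # label attached to every scraped series
    static_configs:
      - targets: ["localhost:8080"]  # endpoint expected to expose /metrics
```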

Amazon Aurora Serverless solution for DBs now available

Amazon Web Services (AWS) has announced a new serverless solution for starting, scaling and shutting down database capacity. Amazon Aurora Serverless was first announced as a preview last year at re:Invent. The solution is now generally available, and is designed as an on-demand, auto-scaling service that removes the need to provision, scale and manage servers. “More and more customers are moving production applications and databases from Oracle and SQL Server to Amazon Aurora because it’s a highly available, highly durable, built-for-the-cloud database at one tenth the cost of the older guard database offerings,” said Raju Gulabani, vice president of databases, analytics, and machine learning at AWS. “With the availability of Aurora Serverless, we now make it more cost effective for our customers to run intermittent or cyclical workloads that have less predictable usage patterns such as development and test workloads or applications that experience seasonal spikes, making Aurora even more attractive for every imaginable workload.”

Postman update brings services into free version

API development environment provider Postman has announced major updates to its app. Starting with Postman 6.2, all free Postman users will be able to create Postman teams, use team workspaces, and use collaboration features, all of which were previously available only to customers of Postman’s paid plans. These features will now be scaled for individuals and small projects. In addition, this release adds a new feature named Sessions, which adds session-specific collection, environment, and global variables, the company explained. Session variables will not be synced to the cloud, so developers can be assured that when they are working with sensitive information, it will stay local to their Postman instance.

Microsoft introduces project references in TypeScript 3.0

Microsoft released version 3.0 of TypeScript, its typed superset of JavaScript, which brings static types, type declarations and type annotations to JavaScript users. Though the company says the update is light on “breaking” changes, TypeScript program manager Daniel Rosenwasser highlighted new features such as project references. According to Rosenwasser, project references were one of the biggest improvements of the release, making it easier for developers to share dependencies between multiple TypeScript projects by allowing tsconfig.json files to reference each other. “Specifying these dependencies makes it easier to split your code into smaller projects, since it gives TypeScript (and tools around it) a way to understand build ordering and output structure,” Rosenwasser wrote. “That means things like faster builds that work incrementally, and support for transparently navigating, editing, and refactoring across projects. Since 3.0 lays the foundation and exposes the APIs, any build tool should be able to provide this.”
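Rosenwasser’s description maps to a small amount of configuration. As a rough sketch — the directory layout and project names here are hypothetical, not from Microsoft’s announcement — a project that depends on a sibling library declares it in its tsconfig.json, and the referenced project opts in by setting "composite": true in its own tsconfig.json:

```json
// packages/app/tsconfig.json (illustrative layout)
{
  "compilerOptions": {
    "rootDir": "./src",
    "outDir": "./dist"
  },
  // Each entry points at a directory containing a tsconfig.json
  // whose compilerOptions include "composite": true.
  "references": [
    { "path": "../core" }
  ]
}
```

Running `tsc --build`, new in 3.0, then walks the reference graph and rebuilds only the projects whose outputs are out of date.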

Google announces Cloud Functions serverless features

Google is giving development teams the ability to build apps without worrying about managing servers with new serverless features announced at its Google Cloud Next conference in San Francisco last month. The company announced the general availability of Cloud Functions, new App Engine runtimes and the integration of Cloud Firestore with GCP services. “Every business wants to innovate — and deliver — great software, faster. In recent years, serverless computing has changed application development, bringing the focus on the application logic instead of infrastructure,” Eyal Manor, vice president of engineering, wrote in a post. “With zero server management, auto-scaling to meet any traffic demands, and managed integrated security, developers can move faster, stay agile and focus on what matters most — building great applications.” Cloud Functions is an event-driven compute service. It scales automatically, runs code in response to events, charges only while code runs, and requires no server management. Use cases include serverless application backends, real-time data processing and intelligent applications such as virtual assistants, chatbots and sentiment analysis. The general availability of Cloud Functions also comes with support for Python 3.7 and Node.js 8, and networking and security controls.

Indigo.Design brings together designers, developers

Infragistics has announced the release of Indigo.Design, a cloud-based platform for visual design, UX prototyping, code generation, and app development. According to the company, the solution facilitates collaboration between developers, UX architects, product managers, and application developers throughout the software design and development process, while allowing each individual to use the tools they want. The solution features four key components:
• Indigo.Design System, which includes Sketch UI kits that feature a library of UI components, UI patterns, pre-defined screens and complete app scenarios. According to Infragistics, each symbol in the Sketch UI kit maps to a twin component in Ignite UI for Angular.
• Indigo.Design Cloud, a cloud service that enables designers to import visual designs to add hotspot navigation and screen transitions, collaborate with stakeholders to get feedback, and perform unmoderated usability studies with built-in video playback and analytics.
• Indigo.Design Code Generation Service, a Visual Studio Code extension that integrates with designs in the Indigo.Design Cloud to generate UI code in Angular.
• Ignite UI for Angular, which includes over 50 material-based UI components designed and built using the Angular framework.



Scaling Scrum is Just Scrum

Eric Naiburg is responsible for marketing, support, and communications for

One of the hottest questions these days, whether online or in the boardroom, is “How does the organization become more ‘agile’?” As the discussion evolves, it leads to, “How do we scale agility from one team to multiple, or should we?” Scrum being the most common agile way that teams work leads to the question of, “How do we scale Scrum beyond a single team?” However, the question should instead be more concerned with what product(s) is being delivered and what is the most effective way of doing so.

When should you scale Scrum to multiple teams? This question isn’t as difficult as it may sound. Scrum is based on the delivery of a product. The optimal development team size is between 3 and 9 people, plus the product owner and Scrum Master. This means that if you can deliver your product with a team of 9 people or fewer, then there is no need to scale. If you determine that you need more people, then the decision to scale must be made, but scaling decisions should not be taken lightly.

As soon as you go from a single Scrum team to more than one, you have added cross-team dependencies and communication. This can be solved, but it is something that has to be considered and addressed. For every additional Scrum team, the coordination of product delivery becomes more complex. Together they need to deliver an integrated product that meets the vision of the product owner and the needs of various stakeholders and users.

If you scale Scrum correctly, there should not be a major impact on the organization other than better coordination across teams and people. If you look at scaling Scrum as multiple Scrum teams working together on a product basis, replicating how a single Scrum team works, then the organization will become more efficient. They will have greater visibility into the work being done through higher levels of transparency, providing a greater value to users of the product and therefore to the organization overall.

Define ‘product’

It is imperative that the organization agrees on the definition of products and what the boundaries are for a single product. Is a set of capabilities a product, or parts of a bigger product? Without this understanding, it becomes impossible to maintain focus as a team, or set of teams, and to provide product ownership. So, look at how you are defining the product, how many products you have and what may tie them together or separate them from each other.

When you go beyond a single Scrum team to deliver a product, coordination needs to scale as well. This is why there is a single product backlog for the product and it is transparent across all teams. Planning at the product level also must be done across Scrum teams so that everyone is on the same page with what is being planned to deliver and how they can best organize their work. At the same time, you need to have a vision for the product, have an idea of what you want to deliver over a small (3-5) set of future Sprints and work together in this planning. It doesn’t mean that your vision won’t change and that you won’t inspect and adapt in each Sprint, planning what you will deliver. It means you have a common understanding and vision moving forward. As you scale, this becomes more and more important.

Don’t, however, let the vision and Product Backlog become stale. Too often, the “roadmap” becomes set in stone and it isn’t evolved Sprint by Sprint. This leads to a mini-waterfall approach and runs the risk of delivering the wrong thing or less value over time. If you revisit the product backlog on an ongoing basis, you are much more likely to continue fine-tuning it for each Sprint and deliver higher-value products in the end.

Challenges of Scale (HINT: There is No Silver Bullet)

Too often I hear people at conferences or in meetings talk about how they want to “buy agile” or “go agile,” but they lack the understanding of what that really means. When digging into the conversation, it tends to go down a path that they can just buy some tools and apply some processes and now they are magically agile, but that just isn’t the case. There is no “silver bullet.”

Even if you have a team of two people working together, you start to encounter dependencies on each other. As you scale up to multiple people, the dependencies increase as well. Imagine now scaling to multiple Scrum teams working together to deliver a product; the dependencies will not only cross individuals, but also Scrum teams. As product backlog items are being decomposed, dependencies will emerge. It is highly recommended that you categorize dependencies and visualize them to make them easier to grasp and communicate. Categories for dependencies may include items such as:
• Build Sequence – An item cannot be completed until its parent is complete (can include technology, domain, software…)
• People / Skills – Only certain people/teams can complete an item
• External – The parent item is being delivered from outside the Scrum teams

Once dependencies have been visualized and sequenced, conversations should focus on opportunities to minimize and remove them. Some solutions include:
• Moving work between teams so that there are fewer cross-team dependencies.
• Moving people between teams so that there are fewer cross-team dependencies. You may significantly reduce delivery risk if certain skills are rebalanced across teams for a Sprint or two in order to minimize dependencies.
• Reshaping the work. By splitting items in different ways, it may be possible to eliminate dependencies.
• Using different risk-based strategies. Some groups might try to entirely remove an ‘in-Sprint’ cross-team dependency. Other groups may opt to front-load all the risk as early as possible and take many cross-team in-Sprint dependencies in earlier Sprints in order to learn and respond.

Once the Scrum teams understand the product backlog and item dependencies, plan for a Sprint and begin developing, they are well on their way to product delivery. In Scrum, you can deliver as often as the product owner wants during the Sprint, but you must deliver a potentially releasable product at least once per Sprint.

As you scale to multiple Scrum teams working together to deliver the product, you need to start thinking as a whole. No longer are you thinking of individual “team products,” but instead, “How do we deliver an integrated product together?” As the Scrum teams come together for the Sprint Review, they are coming together to review a single product and should be demonstrating that product in its entirety, not pieces based on the individual teams. This helps get true feedback from stakeholders on the real usage of the product, but also helps determine if any integrations, cross-team dependencies, etc. were missed.

It isn’t always about getting bigger, however. As a team, you need to look at what is needed to deliver the product, and that may not be getting bigger. It could be a reduction in size, or potentially scaling up at a point and then being smart about scaling back down over time. As we know, and I already wrote about above, the bigger the team, the greater the dependencies and the more difficult they are to deal with. So, as you are looking at the product backlog and working with stakeholders and the product owner, start to determine what the right size of the Nexus should be. Evaluate options for how quickly to scale up — faster isn’t always better — and if you have opportunities once scaled up, scale back down over time.

Conclusion

At the end of the day, Scrum is a simple framework that is difficult to master. Scaling Scrum, or just scaling up to bigger teams, adds complexity to how people and teams work. Being a framework, Scrum provides an excellent way to help teams scale without having to change their entire worlds. Scaling Scrum is just Scrum. You don’t change how you work as a Scrum team, nor do you change the work being delivered. Adding the Nexus framework on top of Scrum provides teams with a way to deal with cross-team dependencies and ensure that they are all working together toward a common product goal and product increment.


Introducing the Nexus Framework

Nexus, created by Scrum co-creator Ken Schwaber and his team, extends Scrum to guide multiple Scrum teams on how they need to work together to deliver working software in every Sprint. It shows the journey these teams take as they come together, how they share work between teams, and how they manage and minimize dependencies.

• The Nexus Guide
Nexus is based on the Scrum Framework and uses an iterative and incremental approach to scaling software and product development. Nexus augments Scrum minimally with one new role and some expanded events and artifacts. The Nexus Framework was created by Ken Schwaber, co-creator of Scrum, and was released, along with a body of knowledge, the Nexus Guide, in 2015; it was updated in 2018. It preserves Scrum.

• Nexus Additions to Scrum
When you use the Nexus framework, Scrum doesn’t change. Just like adding any practices to Scrum, it is a framework that you build upon, but that doesn’t change the foundation itself. The Nexus framework adds a new role, the Nexus Integration Team, along with a Nexus Daily Scrum, Nexus Sprint Planning, a Nexus Sprint Backlog, and a Nexus Sprint Retrospective; Refinement is now formalized as an event in the framework.

• Nexus Integration Team
The Nexus Integration Team is accountable for ensuring that a “Done” Integrated Increment (the combined work completed by a Nexus) is produced at least once every Sprint. The Scrum teams are responsible for delivering “Done” Increments of potentially releasable products, as prescribed in Scrum. All roles for members of the Scrum teams are prescribed in the Scrum Guide.

• Nexus Daily Scrum
The Nexus Daily Scrum is an event for appropriate representatives from individual Development Teams to inspect the current state of the Integrated Increment and to identify integration issues or newly discovered cross-team dependencies or cross-team impacts.

• Nexus Sprint Planning
The purpose of Nexus Sprint Planning is to coordinate the activities of all Scrum teams in a Nexus for a single Sprint. The Product Owner provides domain knowledge and guides selection and priority decisions. The Product Backlog should be adequately refined with dependencies identified and removed or minimized prior to Nexus Sprint Planning. During Nexus Sprint Planning, appropriate representatives from each Scrum team validate and make adjustments to the ordering of the work as created during Refinement events. All members of the Scrum teams should participate to minimize communication issues.

• Nexus Sprint Backlog
A Nexus Sprint Backlog is the composite of Product Backlog items from the Sprint Backlogs of the individual Scrum teams. It is used to highlight dependencies and the flow of work during the Sprint. It is updated at least daily, often as part of the Nexus Daily Scrum.

• Nexus Sprint Retrospective
The Nexus Sprint Retrospective is a formal opportunity for a Nexus to inspect and adapt itself and create a plan for improvements to be enacted during the next Sprint to ensure continuous improvement. The Nexus Sprint Retrospective occurs after the Nexus Sprint Review and prior to the next Nexus Sprint Planning.

• Refinement
Refinement of the Product Backlog at scale serves a dual purpose. It helps the Scrum teams forecast which team will deliver which Product Backlog items, and it identifies dependencies across those teams. This transparency allows the teams to monitor and minimize dependencies. Refinement of Product Backlog Items by the Nexus continues until the Product Backlog Items are sufficiently independent to be worked on by a single Scrum team without excessive conflict.



W3C winds down XML work
First published in 1998, the specification is mature, widely used
BY CHRISTINA CARDOZA

As the World Wide Web Consortium (W3C) winds down its work standardizing the Extensible Markup Language (XML), it is looking back at the history that brought XML to its success today. “W3C XML, the Extensible Markup Language, is one of the world’s most widely used formats for representing and exchanging information. The final XML stack is more powerful and easier to work with than many people know, especially for people who might not have used XML since its early days,” Liam Quin, XML activity lead, who recently announced he would be leaving W3C after almost 17 years working with XML, wrote in a post.

XML 1.0 was first published as a W3C recommendation on Feb. 10, 1998, as a way to tackle large-scale electronic publishing problems. Today, it is a markup language used to define rules for encoding documents that are both human- and machine-readable. According to Alexander Falk, president and CEO of the software development company Altova, the evolution and success of XML has been widely misunderstood. “Today, much of what we take for granted — and sometimes don’t even think of as being related to XML anymore — is, in fact, based on XML. Every Word document, Excel spreadsheet, and PowerPoint presentation is stored in OOXML (Open Office XML) format. Every time you e-file your taxes ... the information is sent from your tax software provider to the government in XML format. Every time a public company provides its quarterly and annual financial reports to the SEC, the data is transmitted in XBRL (an XML format). Every time you talk to your Alexa device, you’re interacting with an app that uses SSML (Speech Synthesis Markup Language, an XML format). And the list goes on and on,” Falk wrote in an email to SD Times.

According to W3C’s Quin, XML can work with JSON, linked data, documents, large databases, the Internet of Things, automobiles, aircraft and even music players. “There are even XML shoes. It’s everywhere,” he said.

But how did we get here? The W3C created the Web Standard Generalized Markup Language (SGML) Working Group to create an SGML specification to be shared and displayed on the web and within browser plug-ins. While XML is very similar to HTML, the W3C explained the intent was not to replace HTML. XML is designed to carry data; HTML is designed to display data. XML tags are not predefined and HTML tags are, so there are still many differences between the two.

At the time the Web SGML Working Group was working on the SGML specification, there were two plug-ins: Panorama from SoftQuad and EBT/Inso, which was never released. The W3C realized the need for a standard because it was clear that it would be too complex to develop an SGML document that would support both plug-ins.

“XML has some redundancy in its syntax. We knew from experience with SGML that documents are generally hard to test, unlike program data, and the redundancy helped to catch errors early and could save up to 80 [percent] of support costs (we measured it at SoftQuad). The redundancy, combined with grammar-based checking using schemas of various sorts, helped to improve the reliability of XML systems. And the built-in support for multilingual documents with xml:lang was a first, and an enduring success,” wrote Quin.

Today, Quin believes most of the work with XML is finished. “The rate of errata has slowed to a crawl,” he explained. However, the end of the W3C’s work does not mean XML is ending; it simply means it has reached a mature stage where it is widely deployed, Quin wrote. “People aren’t reporting many new problems because the problems have already been worked out.”

Altova’s Falk believes the future of XML looks bright. “As it gets even more ubiquitous, it will be easier for people to forget that much of the data that flows between different systems is based on XML, but that doesn’t mean it is becoming less important,” wrote Falk. “As the core of XML has matured and been refined over the years, we’ve seen a whole range of supporting standards emerge that help process, structure, transform, query, and format XML data — all coming together to establish a rich infrastructure of related technologies, including XML Schema, XSLT, XSL-FO, XPath, XQuery, XBRL, etc., that enable standards-based information processing that spans operating systems, platforms, and software products.”

Quin added, “It’s time to sit back and enjoy the ability to represent information, process it, interchange it, with robustness and efficiency. There’s lots of opportunities to explore in making good, sensible use of XML technologies.”
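The xml:lang support Quin calls “a first, and an enduring success” is visible in even a tiny, made-up document: the attribute declares the natural language of an element’s content and is inherited by descendants until overridden.

```xml
<!-- Illustrative example only: xml:lang on the root sets a default
     language; child elements may override it for their own content. -->
<greetings xml:lang="en">
  <greeting>Hello, world</greeting>
  <greeting xml:lang="fr">Bonjour, tout le monde</greeting>
</greetings>
```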


Living on the edge: IoT is the future, and security is playing catch-up
BY DAVID RUBINSTEIN


mart homes. Smart cities. Smart factories. Intelligent cloud. Intelligent edge. While many still believe the Internet of Things has a way to go before we see widespread adoption, there is no questioning that it is here today. Some things are prototypes upon which larger deployments can be built, and some are already in wide use, depending upon the industry. But a world in which millions upon millions of devices can store, process and, when necessary, transmit data to back-end systems for analysis and action already is at hand. “There are now more IoT devices on the planet than there are people,” said Ed Adams, president and CEO of Security Innovation, a technology training company. “We passed that tipping point in 2016.” Omer Arad is in IBM Research, where his main focus is IoT infrastructure, and he believes businesses are only just beginning to understand the potential. “IoT will have a huge impact on our lives,” he said. “The future is that people won’t have to interact with devices. The integrations should all be seamless. Sensors can understand and analyze context to get a more meaningful output to extend the user experience.” If only Marc Andreesen knew how prescient his comment

was back in 2011, when he declared that “software is eating the world.” But if software can be compared to the intelligence and digestion of the human body, data is its life blood. In an all-things-connected world, with all these streams of data, creating devices with the ability to make decisions as to what data to pluck from the stream, and to process it there without the latency of back-end query and response, is critical. “The question of how or when to store data, and when should we send it to the cloud, is both a technical and business decision,” Arad said. With this rapid evolution, some may see the early adopters of IoT as living on the edge. But isn’t that the point? Arad said IoT is already transforming the health industry, where personalized data on each patient wearing sensors can be gathered. “They can see what people ate before surgery, or monitor the hearts in patients, and you don’t have to send all that data to the cloud. You can collect it and analyze it on the continued on page 16 >



edge, or sensor, and only transmit what's important to the cloud."

Indranil Chakraborty of Google's IoT Core team said he too is seeing "a good amount of interest" in IoT solutions, in areas such as smart cities, oil and gas, and manufacturing, as well as in areas you might not expect. "We have a wide spectrum of users and customers," he said. "We work with urban bike sharing, where there are devices in the bike — GPS, SIM card — and we use mobile geofencing to unlock" the bike when someone wants to ride.

Yet there are hurdles to more widespread adoption. The lack of a standard protocol for connectivity is one of the things holding IoT back, Chakraborty said. "Different devices have different operating systems and different protocols," he pointed out.

And then there's security. As more devices are put out into the world, each running smaller pieces of functionality, the more vectors are presented to malicious attackers. And this leaves people wondering: Is the communication secure? Is the data encrypted at rest? How do we know the device is acting in my best interest?

The answers to those questions are multi-faceted. Part of the reason is that companies creating IoT devices have solid backgrounds in hardware, but not so much in developing secure software. And, in what Security Innovation's Adams described as the sad state of software development today, "the vast majority of engineers still don't know how to write secure code."

"The Bose and Sonys of the world, and the automotive engineers, they know how to make great hardware, but man, do they stink at making good software," Adams said. "So, companies have to deal with new paradigms they never had to deal with before. Bose makes really great speakers, but now those speakers are connected via Wi-Fi and Bluetooth and mobile applications, all of which are collecting data and sending it back to Bose for data aggregation so they can market to you better. And if they're not doing that with security in mind, and with privacy in mind, not only are you putting your customers at risk but your brand's at risk, and your company is facing massive fines from the likes of GDPR."

Adams said he sees the convergence of two massive problems: the amount of software that is "running the world," and the number of developers who are either reusing open-source code and not checking it for security, or not being educated on how to write secure code in the first place. Those two, he said, "are a potential meteoric disaster waiting to happen."

Adams tried to put the problem in perspective. "Look at the 787 Dreamliner, Boeing's latest and greatest airplane. That's about 6½ million lines of code, designed from the ground up. It's a modern marvel. ... But compare that to a 2017 S-Class Mercedes. That has 100 million lines of code. That's insane. And it's not just 100 million lines of code. It's got five separate networks, and over 10 different operating systems, and the software-based cost of that car is now approaching 50 percent. When you're driving down the street, you're surrounded by a few computers, 100 million lines of software code, and four tires that's moving you along."

Now, consider that the average developer makes one error every thousand lines of code. "Do the math," Adams said. "Think about how many software defects are rolling around with you in your Mercedes. It's pretty daunting."

So it is the sheer number of IoT devices — with Adams' claim that since 2016 there have been more IoT devices on the planet than people — the lack of good automated testing tools for IoT software, and the lack of security awareness among the development teams creating these apps that leave IoT applications, data and devices vulnerable.

Finally, for the first time, IoT security spans both cybersecurity and safety. That, Adams said, is something we haven't really seen before. "You've got medical devices, cars ... cars are IoT devices. Now, you're introducing a safety factor. It's not just stealing credit cards or identities anymore. You're literally talking about driving cars off the road, being able to deliver the wrong doses of medicines, being able to stop someone's pacemaker. These are all legitimate attack vectors that have previously never even been imagined, and it's all enabled through the magic of software and the beauty of IoT. So the importance of securing IoT software specifically — it's just critically important. I can't stress that enough."

Despite these potential doomsday scenarios, Google IoT's Chakraborty sees proof-of-concept prototypes rolling out in areas such as transportation and manufacturing, where the value proposition is clear. In manufacturing, though, challenges include complexity and a lack of connectivity in aging factories — some simply don't even have Wi-Fi on their factory floors. "There is a huge opportunity in predictive maintenance," he said. "It's the first step to automation. But owners now don't have visibility into multiple factories. They need to connect to get a central view of all factories. Then they can build machine-learning models to predict when a machine might fail."

That is living on the edge.
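A minimal sketch of the edge pattern Arad describes, in which a device analyzes readings locally and forwards only what matters upstream, might look like the following Python. The heart-rate field and thresholds are illustrative assumptions, not taken from any product mentioned here.

```python
# Hypothetical edge-device filter: process sensor readings locally and
# mark only anomalous ones for transmission to the cloud.

def summarize_readings(readings, low=60, high=100):
    """Return local statistics plus only the readings worth transmitting."""
    anomalies = [r for r in readings if not (low <= r["bpm"] <= high)]
    return {
        "count": len(readings),
        "avg_bpm": sum(r["bpm"] for r in readings) / len(readings),
        "to_cloud": anomalies,  # everything else stays on the edge
    }

readings = [{"bpm": b} for b in (72, 75, 130, 68)]
result = summarize_readings(readings)
# Only the 130 bpm reading is flagged for the cloud; the rest stay local.
```

The point is the shape of the decision, not the thresholds: the device keeps the bulk of the data, and the cloud sees only the summary and the exceptions.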



Digital transformation starts with culture

BY DAVID RUBINSTEIN

Digital transformation is one of those terms being bandied about from all corners of the IT universe, which makes it difficult to define, and even harder to understand what it means to successfully do it.

One camp of thought is that it means becoming more agile in your development and business processes. Mirko Novakovic, CEO of application performance monitoring software supplier Instana, believes the transformation occurs when IT no longer merely supports the business but becomes the business. He told of being at an innovation workshop at a big bank in the UK, and how one of the bank's group CEOs declared the company is transforming the bank into a technology company. Of course, for now, the traditional banking business will continue to drive revenues, but the bank's leaders see a digital future.

This path to transformation requires a number of changes to occur: changes in leadership and management style, changes in the jobs and roles of workers, and changes in corporate culture.

A digital transformation has to be driven from the top executives, but in many organizations, existing leadership has no experience in the digital world. Novakovic cited the example of German auto manufacturers. "If you look at the CEOs of the car companies, five years ago they were all running around in suits and ties, now they're running around in sneakers without ties. It's just a symbolic thing, but in Germany it's a big symbolic thing. ... It's a full culture change, like managing by example. You have to represent this whole culture change from the top down, and that's a really big thing that's happening right now."

If the current management structure doesn't have experience in running a digital business, it will be hard for the company to compete against upstart digital industry disrupters. In this case, think Tesla. Of those German car companies, Novakovic said: "Think about a management structure; from CEOs down through the management level, you basically have mechanical engineers. For the past 20-30 years, they built the best mechanical engines for cars, and now they have to understand that's not the real deal anymore. Tesla has no mechanical engine, it's electrical, and it's basically 70 percent software, not many moving parts anymore. I think it means you have to bring other people into management positions, people who are more digital thinkers. At [the UK bank], there were 5 group CTOs who all came from digital startups or bigger digital companies, who are now driving this digital transformation, and they're hiring their own people. It's essentially a new type of people getting into these companies and driving change."

Even with the right management in place, fully aligned with the plan for transformation, changing an entrenched corporate culture is difficult at best. Novakovic said he had a conversation with the CIO at a leading German car manufacturer, who explained the company was adopting a bimodal IT model, in which one IT team works in an agile way to deliver more frequent software updates while another team works in the more traditional "waterfall" way that results in fewer defects and is more sustainable.

Content provided by SD Times and

The move to bimodal IT failed, Novakovic said, because "the guys in mode 2 felt like they were not needed anymore in the future, and they were not part of the new business, so the company now said, 'We have to build an IT where everybody is agile,' otherwise they don't get it culturally."

Successful digital transformation also involves re-configuring teams to become more autonomous and self-managing, which provides greater flexibility for making decisions and driving value in the context of their work. But it's not only about training workers to take on new tasks, Novakovic said; it's about putting tools in place that enable communication and collaboration between team members who had been siloed in the older IT structure. "The whole app delivery organization is now more working together, where before you had the IT at least split up into development and operations, and probably the business part," he noted. "Because people need more speed, we see that these teams are coming together. Not only do they get more responsibility because they have to move fast, they also have more authority to drive the business forward at a faster pace."

Novakovic cited one final piece of the transformation puzzle: automation. Shifting routine manual tasks to a machine frees up workers to focus on core business solutions and drive business value. "Automation is a big part of that transformation," Novakovic said. "[Instana] helps to automate this whole idea of monitoring these new digital products, the software and the infrastructure of these products, without the need of people who need to configure it, who need to set alerts, who need to manually instrument and look at data."



Why writing clean code matters — and how it helps engender good developer practices and culture


Clean code — a term first coined by Robert C. Martin in his book "Clean Code: A Handbook of Agile Software Craftsmanship" — is very relevant in today's fast-paced, highly complex software development and lifecycle management environments. It makes it easier to evolve or maintain a finished product. Compare it, if you will, to the work of an electrician: a cabinet of tidy wires and connectors, all clearly organized and labeled, will make future changes that much easier and faster, with fewer risks of error and maybe even reduced costs over time.

Part of clean code is also being considerate of others. Writing code that everyone understands, that the developer is confident is error-free, and that is supported by clear documentation is being respectful of other team members. "Do as you would be done by" is an applicable motto here; code that breaks once beyond the experimentation stage is likely to annoy colleagues, be embarrassing for the developer and even damage his or her reputation. The benefits are inarguable — what's not to love about code that works better and has a lower cost over time?

Chuck Gehman is an IEEE member and an engineer with Perforce Software.

Yet many do not think it is necessary, and view writing "pretty code" as something that could slow down getting a product out to market. They are not trying to create a work of art, I've heard them say. The reality is that clean code can and should be an asset, not a hindrance, particularly if certain good practices (such as naming conventions) are applied.

For more detail on best practices for clean coding, I recommend purchasing Martin's book — but this article provides an overview of the main building blocks of clean code. First, let's look at a few example scenarios of the consequences of not having clean code.

Missing deadlines. Taking shortcuts while coding may feel like a way to speed up productivity, but in a large organization where teams are dependent upon one another, there can be a domino effect of missed deadlines, impacting other teams' productivity too.

Difficulty making enhancements. One of the tenets of clean code, per Martin, is to make it easier to change. In today's competitive landscape, responding to customer feedback is critical, and can make the difference between the product being a hit or a miss. When a product ships late, the company's results can be impacted.

Angering customers. It's an extreme circumstance, but companies have had their share prices falter and have even laid off staff because of customer dissatisfaction due to chronically poor-quality code being released to production. The knock-on effect of, for example, electronics products that have to be recalled because of bugs in the embedded software can be huge.

Losing the best developers. Solving complex problems and the thrill of seeing code become part of a successful product is what drives the most talented software developers. Spending all your time struggling to fix ugly, dirty code created by your colleagues can lead to exits.

Building blocks

Let's look at the main building blocks required to create clean code. Remember that code is a language, and like any other language, it should be easily understandable and its intent clear. So, first on the list is using formatting, naming conventions and segmentation. Martin's book goes into more detail, but the basic idea is to break up code into small pieces, writing short functions with fewer parameters and separating those functions into those that are commands and those that are queries.

Some rework is generally an inevitable fact in most projects, and often the need for that rework may not be discovered until after the software has gone into production, such as when a user finds a defect. The problem can be exacerbated by Agile: while it has many benefits, the iterative nature of Agile processes may mean that John Smith has moved on to the next task while Jane Brown, sitting next to him, is evolving John's code for the next release. This is disruptive and can lead to unhappy customers. Making sure that proper 'labeling' is in place right from day one — like a very clear set of road signs — will help simplify and speed up rework.

Comments are something else to keep top of mind, particularly in larger-scale projects involving multiple contributors, but even startups benefit from producing documentation about what is being built at the beginning, not least because many great products have evolved from what started out as great demos. That means the code will be in play for several years. Another reason for using comments might be that code is not allowed to be checked in until it has been documented, something that often happens in large companies with lots of contributors involved.

Unit test cases

Many who have studied clean code concepts also follow the tenets of Test-Driven Development (TDD), because it is a process that promotes clean code. The first step is to write a test. Then, write the minimum amount of code necessary to pass the test. Next, enhance and refactor that code. If the code can be enhanced and the tests still pass, the developer has the assurance that the existing code has not been broken.

As discussed, it's great to be able to look at source code and understand what it does, through good structure and the inclusion of comments. One of the biggest problems with comments and other documentation, though, is that developers all too often change the code without changing the comments. Tests should be repeatable, fast, consistent and easy to read. As such, they provide almost automatic documentation, because they show how to use the code from the standpoint of the original authors. By reading the unit tests, anyone who needs to work with the code, whether testers or other developers, can learn how the author of the code being tested would use that code in production.

Many teams find unit tests to be as important as the code being tested, especially in the context of a larger project where code reuse is critical to meeting a demanding schedule. For this reason, it is critical to write the tests with the same standards of quality as the code itself.
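The building blocks above — short, well-named functions, separating queries from commands, and tests written first — can be sketched in a few lines of Python. This toy invoice example is our own illustration, not taken from Martin's book.

```python
# Query: computes a value and changes nothing.
def overdue_invoices(invoices, today):
    return [inv for inv in invoices if inv["due"] < today]

# Command: performs an action and returns nothing.
def send_reminder(invoice, outbox):
    outbox.append(f"Reminder: invoice {invoice['id']} is overdue")

# In TDD this test would be written first, and it doubles as documentation
# of how the author intends the code to be used.
def test_overdue_invoices():
    invoices = [{"id": 1, "due": 5}, {"id": 2, "due": 20}]
    assert overdue_invoices(invoices, today=10) == [{"id": 1, "due": 5}]

test_overdue_invoices()
```

Note how the names alone tell the story: a reader never has to guess whether calling `overdue_invoices` will change anything.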

Beyond clean code

Most of these basics for clean code are common sense, but for even the most basic software product to succeed in the market over time, more focus and attention to detail around documentation, simplified code and testing are required. Increasingly, products have more at stake than money and pride. In an age where we're about to have autonomous vehicles on the road, rules for how code is structured and written have gained importance, and even more diligence needs to be applied to safety- and security-critical systems. Industries such as aerospace and defense, automotive, medical device manufacturing, and banking and finance have stringent compliance standards for coding. Increasingly, code must be proven to meet industry standards.

Static code analysis tools, which scan source code, can detect hidden defects and security vulnerabilities, and ensure compliance with best-practice coding standards. This technology helps ensure code is secure, reliable, and reusable. There is an added benefit in productivity gained by relieving some of the mentorship responsibilities of senior developers: static code analysis can help all team members understand problems with their code and improve their skills.

Creating code that is tidy, comprehensible, and 'plays nice' with other developers' code matters. Plus, rather than slowing down project completion, it can proactively contribute to better productivity in the long run.


How GraphQL is competing with REST

BY CHRISTINA CARDOZA

The Representational State Transfer (REST) API has served its purpose of exposing application-level information for web services over the last 15 years, but in an industry that's constantly evolving, there are always new approaches popping up and transforming the way we work. Enter GraphQL.

GraphQL is a data query language for APIs that was developed by Facebook in 2012. Unlike traditional query languages like SQL, GraphQL goes beyond just databases. It combines ideas from the world of databases and couples them with web API features to enable developers to build data-rich applications, according to Lee Byron, a former engineer at Facebook who works on the GraphQL project.

After three years under development at Facebook, the company decided to open-source the project in 2015. "When we built GraphQL in 2012 we had no idea how important it would become to how we build things at Facebook and didn't anticipate its value beyond Facebook," Byron wrote in a post at the time of the open-source announcement.

Over the last three years, the open-source project has created a strong community of developers, matured, and improved so significantly that the industry has come to take a closer look. Recently, one of the main motivators for moving to GraphQL has been to replace current REST initiatives.

According to Byron, GraphQL is a different way of thinking about the problem of exposing data. "People are excited about GraphQL because there are some very real problems that just about everyone building a data-rich app runs into," Byron said in an interview with SD Times. Those problems include getting all the data you need in a reasonable amount of time, and

figuring out what to do with the data from the API once it comes back.

Making a case for GraphQL

Hasura, a cloud infrastructure company and GraphQL solution provider, finds that while writing a GraphQL server is a little more difficult than writing a normal server, the benefits are even more difficult to ignore. Tanmai Gopal, co-founder and CEO of Hasura, explained that in a traditional development environment there is too much back and forth between the front-end developer and the back-end developer. For instance, for every piece of data, API endpoint or feature the front-end developer needs, they need to request it from the back-end developer first. Once the back-end developer builds it, they need to test it and document it before sharing it back with the front-end developer.

According to Gopal, the main benefits of GraphQL over REST are:

■ GraphQL helps app developers make more efficient queries as compared to typical REST, because developers can fetch ("query") exactly the slice of data they want.

■ GraphQL helps app developers automatically discover APIs and associated documentation without back-end developers having to maintain explicit documentation that becomes painful to maintain during frequent iterations. This reduces communication between developers drastically, and reduces the tooling and process typically required to share, test and document APIs, which is unavoidable with REST.

■ GraphQL community tooling helps app developers validate their API calls before deploying their apps, automatically generates code and reduces the boilerplate typically required to integrate APIs into their apps.

GraphQL eliminates the complexity and back and forth because it enables the back-end developer to just build and present a GraphQL server for fetching data, according to Gopal. That server provides a single API endpoint that front-end developers can make any type of query to. "At a high level, the front-end developer can basically see every possible API query they could make, and they don't need to keep asking the back-end developer to give them an API for this use case or that use case. This cuts down the back and forth that is required with


traditional development, and accelerates feature development," said Rajoshi Ghosh, co-founder of Hasura.

Byron explained that with REST APIs, there is also a tight link between a single idea and the URL used to load that idea. "That means you can only load one thing at a time. That becomes a problem when you need all the data for a

complicated view. You might have to load lots of different URLs because each piece of data is going to be presented by one URL," he said.

With GraphQL, there aren't multiple URLs to learn. There is one single URL, and it adds in the concept of a query language. "With GraphQL, you describe the data you need by writing it in the query language, and then you send that query to the one URL. The URL looks over your query, makes sure all the relevant data you need to fulfill the query is available, puts it together in the shape that is expected by the query and returns it back," Byron said.

So with GraphQL, you get all the data you need in a single roundtrip, whereas REST has to make multiple roundtrips for each piece of data, Byron added. Unlike GraphQL, a REST API also has a very fixed set of data, and if the front-end developer doesn't describe absolutely everything they need, it won't be there in the end result, according to Byron.
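Byron's single-roundtrip point — the client describes the exact shape of data it wants and one call returns it — can be illustrated with a toy resolver in Python. This stands in for a real GraphQL server; the schema and field names are invented for illustration.

```python
# Toy GraphQL-style resolver: the "query" names exactly the fields the
# client wants, and a single call returns data in that shape.
DATA = {
    "user": {
        "name": "Ada",
        "email": "ada@example.com",
        "posts": [{"title": "Hello", "body": "..."}],
    },
}

def resolve(query, data):
    """query is a nested dict of requested fields; None marks a leaf."""
    result = {}
    for field, subquery in query.items():
        value = data[field]
        if subquery is None:            # leaf field: return it as-is
            result[field] = value
        elif isinstance(value, list):   # resolve the subquery per item
            result[field] = [resolve(subquery, item) for item in value]
        else:
            result[field] = resolve(subquery, value)
    return result

# One "roundtrip" fetches a user's name and post titles, and nothing more.
slice_ = resolve({"user": {"name": None, "posts": {"title": None}}}, DATA)
# slice_ == {"user": {"name": "Ada", "posts": [{"title": "Hello"}]}}
```

The `email` and `body` fields never leave the server because the query didn't ask for them, which is the exact-slice behavior Gopal and Byron describe.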

Looking at it from another angle

Uri Sarid, CTO of MuleSoft, an integration platform provider, believes REST and GraphQL are two very different solutions.

"One major advantage of REST APIs, and the reason they won over other approaches such as SOAP ('web services') in the enterprise space despite the latter being supported by all the major vendors, is the fact that REST APIs are in fact 'just HTTP.' In particular, the common needs for routing requests, for transmitting metadata along with data, for standardized responses, for caching and security semantics that can be reliably shared across a whole host of physical and software components because they are not only standard but also simple — all these and more have been worked out, tested and hardened across millions of servers and sites and clients and applications since the web first emerged," Sarid wrote in an email interview with SD Times.

"On the other hand, GraphQL is a very specific protocol and set of tools introduced by Facebook to expose data and capabilities as functions, and to query multiple of those functions in one call, asking for and receiving only the data needed by the caller," said Sarid. "There is no notion of domain-specific nouns and standard verbs — only whatever functions the service decided to offer, and however it decided to represent the data."

According to Sarid, people who believe GraphQL will replace REST APIs are probably not implementing them correctly. "When done well, RESTful APIs expose a clean, elegant, understandable, and extensible model of any domain. When done poorly, they are little more than remote function calls, and because each function call is a roundtrip on the network, using them to assemble the information or orchestrate the capabilities needed by the client can be difficult and slow. This slowness and difficulty is particularly exacerbated in mobile applications, where the network is particularly precious, and speed of development is paramount," wrote Sarid.


However, Sarid explained, he does believe GraphQL calls attention to a need that is not being met by REST APIs; he just doesn't believe it will replace REST as a general paradigm for APIs. According to Sarid, the reasons GraphQL won't take over REST are:

■ The specific needs of mobile clients aren't as important in many other contexts.

■ Those needs, whether in mobile or other contexts, may be met by layering querying capabilities on top of REST APIs.

■ REST APIs offer many advantages over GraphQL which are not easily met, even when extending GraphQL. Conversely, GraphQL introduces many problems versus REST APIs.

■ The domain modeling built into REST API design, and the resultant exposure of clean domains as APIs, leads to a loose coupling between different domains and different organizations, which is critical to maintaining agility in large networks of applications.

There is really only one example of a very large system of applications that has survived, scaled, evolved, and yielded tremendous value over not just years but decades: the web itself. The web provides powerful evidence of the benefits of REST that have stood the test of time; it seems much easier to imagine meeting the needs that GraphQL addresses by building on top of REST, versus recreating all those advantages anew for GraphQL as robustly.

"With GraphQL today, developers must roll their own, bet on one of the emerging open-source but non-standardized solutions, or rely on a proprietary vendor solution," wrote Sarid.

In spite of all that, Hasura's Gopal believes that over the next couple of years we will see more of a migration to GraphQL. "The benefit of GraphQL over REST is extremely real, and people who are championing for GraphQL have been leaders in the industry for 10 to 20 years," said Gopal. "GraphQL is going through a golden age. By this time next year, we will know what the landscape for replacing REST will be and I expect we will see much broader adoption."
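The "layering querying capabilities on top of REST" that Sarid mentions is often done with a fields parameter on the request. A hypothetical sketch follows; the parameter name and the resource are invented here, and real APIs vary in how they spell this.

```python
# A REST endpoint can approximate GraphQL's exact-slice fetching by
# honoring a fields parameter, e.g. GET /users/1?fields=name,email
RESOURCE = {
    "id": 1,
    "name": "Ada",
    "email": "ada@example.com",
    "bio": "...",
}

def get_user(fields_param=None):
    """Return the full representation, or only the requested fields."""
    if fields_param is None:
        return dict(RESOURCE)
    wanted = set(fields_param.split(","))
    return {k: v for k, v in RESOURCE.items() if k in wanted}

trimmed = get_user("name,email")
# trimmed == {"name": "Ada", "email": "ada@example.com"}
```

The endpoint, verbs and caching semantics stay plain HTTP; only the response shaping is borrowed from the GraphQL idea.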




Providing insight, visibility into code to gauge its value

CollabNet VersionOne updates VSM solution

BY CHRISTINA CARDOZA

CollabNet VersionOne announced the latest release of its VS solution at the 2018 Agile conference last month. VS aims to provide insight, visibility and traceability into code to tie it directly to a revenue value stream. The latest release is designed to enable the practice of Value Stream Management (VSM) at the enterprise level.

"This release is significant because organizations want to establish VSM to be competitive in the next wave of digital transformation that will demand more speed, quality, efficiency and market alignment," said Flint Brenton, CEO at CollabNet VersionOne. "But they lack a solution for effective VSM at the enterprise scale."

The latest release aims to establish enterprise VSM by enabling the scaling of Agile, applying DevOps orchestration and business intelligence, and leveraging Git across the enterprise.

According to the company, establishing VSM has been difficult in the past because of barriers posed by existing tools and the disconnect across planning, version control and an underutilization of DevOps. "There's no other way to benefit from enterprise VSM than by tapping into the full promise of scaling Agile and DevOps orchestration," said Brenton. "VS enables that due to its unique way of bringing together planning, Git version control and DevOps to close the gaps and remove barriers to smooth value streams that span the enterprise."

The newest VS solution is also made up of VersionOne and its scaling Agile features, TeamForge SCM for Git enterprise version control, and Continuum for DevOps orchestration. Other features include performance management to orchestrate planning and delivery activities, the ability to break down silos, and risk management.

In other DevOps news… ■ Eggplant is giving DevOps teams the ability to predict outcomes and the effect on the business with the addition of Eggplant Release Insights. The new solution provides analytics and insights into understanding the impact on user satisfaction and business outcomes when releasing a new version of their product. Predictors of the solution include a bug content predictor, development quality, test coverage, and usability quality. ■ New Relic announced the general availability of distributed tracing last month. The new feature is designed to provide DevOps teams with the ability to trace the path of a single request to understand a complex system. New Relic explained this will help teams discover what is causing latencies, where errors are originating and how to improve code for customer experience. Key features include automatic instrumentation, details across modern and traditional systems, and root cause analytics. ■ SnapLogic announced new DevOps and automation capabilities, such as integration with GitHub and Mesosphere support for automating elements of CI/CD. These new updates are being made to the company’s Enterprise Integration Cloud solution as well as its Iris AI technologies. The company also announced a new patterns catalog designed to help users build integration pipelines. The new integration with GitHub is designed to provide CI/CD support and access to different versions of SnapLogic. The latest support for Mesosphere enables users to spin up Docker containers instead of managing them manually. Other updates include connectivity for Microsoft Dynamics 365 Sales, API integration enhancements and enhanced dashboard search. z

5 Ways Static Code Analysis Can Save You



SD Times


September 2018

If you’re not doing static code analysis (aka static analysis), now is the time to start. Delivering code faster has dubious value if quality degrades as development cycles shrink. On the other hand, if you’re not doing static code analysis, you’re not alone. Despite the mature age of the tool category, not a lot of developers are using it — still. In fact, Theresa Lanowitz, founder and CEO of market research firm Voke, believes, based on a past survey, that only 15% of developers are using static analyzers today.

“If you’re working on something that deals with compliance and regulation, you’re probably using static analysis solutions,” said Lanowitz. “Aside from the highly regulated areas, static analysis is not broadly used.”

There are several reasons why developers aren’t using the tools. For one thing, they’re already using a lot of tools, and the list continues to grow. For example, developers at professional services firm Avanade use six different tools during the coding process. “That’s a lot of tools. They all do different things, so they’re all important, but if you have to run six tools every time you do a build, that’s a lot of overhead to run,” said Jasen Schmidt, director of solutions architecture at Avanade.

Schmidt is a big proponent of static code analysis tools for several reasons: quality, enforcing corporate standards and security. The tools also allow him to spend more time on higher-level issues than he’d have the luxury of doing without them.


Meanwhile, developers everywhere are feeling the pressure to produce, so the last thing they want to do is slow down their processes. However, delivering code faster does not mean delivering better quality code faster. Static code analysis tools help developers avoid nasty surprises in production that are far more time-consuming and expensive to fix than errors found earlier in the SDLC. In fact, following are five ways static code analyzers can save your bacon.

#1: Get Early Feedback

Static code analysis provides insights into code errors. While the tools won’t catch every defect, and they’re not a replacement for other tools such as dynamic code analysis, they are a staple that more developers could be using to improve their code quality.

“One element of shift left is code analysis. As far as the implementation is concerned, that is the key step,” said Joachim Herschmann, research director at Gartner. “When I feel I’ve finished a small piece of functionality, I want to get feedback about a number of things: Am I documenting what I’m doing? Am I following standards? I want to find anything that could cause a security vulnerability or the code to crash.”

Granted, there are different kinds of static code analysis tools. Some are specific to security, specific languages or particular types of errors, while others span multiple languages and code quality issues. In fact, organizations may run several different kinds of static code analysis tools to take advantage of their respective strengths.

“You have to look at it in the context of what you’re doing,” said Herschmann. “If you’re consistently using quality metrics like cyclomatic complexity, it will give you an indication of whether they’re going up or down. I may have added a significant chunk of open-source code or made a fairly substantial code change that has resulted in more or less complex code. Without a static code analysis tool, it’s more of a gut feel I have as a serious developer, but that’s not the same as having an accurate report or measurement about how my code change has affected overall code quality.”
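To make the cyclomatic complexity metric Herschmann mentions concrete, here is a minimal sketch of what such a measurement looks like in practice. This is not a production analyzer; real tools such as SonarQube compute far richer metrics. The `cyclomatic_complexity` helper is an illustrative toy that counts decision points in Python source with the standard-library `ast` module.

```python
import ast

# Decision points that add a branch to the control flow
# (a simplified take on the McCabe metric).
_BRANCH_NODES = (ast.If, ast.For, ast.While,
                 ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Crude McCabe-style complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, _BRANCH_NODES)
                   for node in ast.walk(tree))

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        if x > 10:
            return "big"
    return "small"
"""
print(cyclomatic_complexity(snippet))  # prints 4
```

Running this before and after a change gives exactly the kind of trend line Herschmann describes: an objective number rather than a gut feel about whether a change made the code more or less complex.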

#2: Advance Best Practices

Most static code analysis tools are rules-driven, so it’s important to make sure the rules align with what the organization is trying to achieve. For example, in some highly regulated environments, the rules help ensure safety compliance. “There are a number of examples where there are fairly rigorous coding standards, so you want to make sure your developers are adhering to the standards for audit reasons,” said Herschmann.

However, rules aren’t always driven by regulations. They’re also driven by corporate standards and security standards. “Sometimes you have a mix of senior and junior developers. [Static code analysis] allows the junior developers to work independently without guardrails while making sure they’re developing along the lines of the team’s expectations,” said Avanade’s Schmidt. “That’s the set of rules that no one actually teaches you, or there are no documents saying you must write your code this way. The tool will kind of teach them as they go.”

As the junior developers learn what the static code analysis rules are over time, it helps them as they blossom into more senior positions. The downside can be over-dependence, though, when developers get so caught up in what the tool is saying that they’re not thinking critically about why the error message came up in the first place. “They also may not ask whether the tool wasn’t right in the first place or if the rule wasn’t applicable,” said Schmidt. “[However,] as a development lead, knowing that your team is running those tools allows you to focus on reviewing your team’s code from a logic and functionality perspective instead of focusing on some of the more basic code standards, rules and error-checking practices.”

“Static” does not imply that the rules the tools enforce are static, however. The rules need to be maintained to ensure they reflect the general code quality the organization wants to enforce. New laws, regulations, or even a merger or acquisition can necessitate change.

“In the past I was able to [adhere to rules] without the use of tools, but it’s going to be less and less likely I can do this going forward, because things are accelerating,” said Gartner’s Herschmann. “Now I have much quicker iteration cycles. At the same time, scale and complexity increase. Just think about how the industry is shifting from monolithic applications to something that is more microservices-oriented, so now I have pieces developed by different developers, perhaps in different languages.”

Static code analysis helps enforce best practices across the developers building microservices. “Having tools like this helps to make sure those things are being done in a standard way at a micro level, because if you have a developer writing a microservice, he’ll own it front to back,” said Avanade’s Schmidt. “You want to make sure he’s not deviating from your corporate standards for security, coding style and that stuff. And if he leaves, it’s easier for someone else to pick it up because he’s been using best practices.”
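The rules-driven checking described above can be approximated in a few lines to show the mechanics. Both rules here are invented for illustration (a “corporate standard” requiring docstrings and snake_case function names); a real rules engine checks hundreds of configurable rules.

```python
import ast
import re

SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def check_rules(source: str) -> list:
    """Flag two example 'corporate standard' violations:
    missing docstrings and non-snake_case function names."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            if ast.get_docstring(node) is None:
                findings.append(f"{node.name}: missing docstring")
            if not SNAKE_CASE.match(node.name):
                findings.append(f"{node.name}: not snake_case")
    return findings

code = "def BadName():\n    return 42\n"
for finding in check_rules(code):
    print(finding)
```

This is the “tool as teacher” effect Schmidt describes: a junior developer who keeps seeing the same finding eventually internalizes the convention without anyone having to document it.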

#3: Save Time and Money

Static code analysis takes time, but it’s time well-spent. The amount of time depends on the number of tools used, the tools themselves and what developers allow into production. However, the time the tools save in the long run is well worth the time invested during development.

“Once people put static analysis in their processes and in their toolboxes, they can rid themselves of race conditions, stack overflows and other defects that rear their ugly heads in production,” said Voke’s Lanowitz. “If you don’t prevent those defects in the first place and go into production, you can’t figure it out because you can’t recreate the problem. With automated testing, you’re not going to see every path in the code execute. With static code analysis you can see everything and then determine where those defects are.”

Static code analyzers also help reduce the burden of code reviews. “When you have a lot of teams, code reviews can be quite a substantial time effort. If you’ve got a team of 40 people and you have to review every single person’s code for style, logic, security, all of these things static code analysis tools check, and you kept layering those on, [it would be overwhelming],”






said Avanade’s Schmidt. “The amount of time I have to spend reviewing those aspects of people’s code greatly goes away, which allows me to focus on whether we’re building the right logic for our business and customers.” He’s also able to ensure that developers are focusing on quality and telemetry that might otherwise have been overlooked.

Meanwhile, static code analysis also helps facilitate more effective DevOps by emphasizing quality processes early in the lifecycle. “The recommendation is to make static code analysis part of your CI process. You want to stop corrupt code from deploying, so you want to stop it as early as possible,” said Gartner’s Herschmann. “If you’re just starting out, figure out how to use it. A common path is to start with SonarQube, which is open source. Look for differences in how you’re doing at a basic level. Then, add more dedicated tools like Coverity, Parasoft, or other options and go a level deeper.”

While the continuous process of learning and improvement dovetails nicely with Agile and DevOps mindsets, developers may nevertheless resist changing their processes, especially when it appears the enhanced process will slow the delivery of code. “If you’re saying you want to release software faster, you should be doing things to make that happen, and some of those things require process changes,” said Voke’s Lanowitz. “While static code analysis is going to take a little bit of time, it’s better than having that catastrophic defect in production that you can’t reproduce and costs you untold amounts of dollars in terms of downtime, customer loyalty and having to implement a crisis management strategy.”

Developers moving quickly in the absence of static code analyzers may find themselves making the same errors time and again that result in the same defects in production.
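The CI gate Herschmann recommends often comes down to a small script that fails the build when analysis results cross a threshold. The sketch below is illustrative only: the JSON report format and the thresholds are invented (real gates, such as a SonarQube quality gate, are configured on the analysis server rather than hand-rolled).

```python
import json
import sys

# Illustrative thresholds; a real quality gate is usually
# configured in the analysis tool itself.
MAX_BLOCKERS = 0
MAX_WARNINGS = 50

def gate(report: dict) -> int:
    """Return a process exit code: 0 to pass the build, 1 to fail it."""
    blockers = sum(1 for i in report["issues"] if i["severity"] == "blocker")
    warnings = sum(1 for i in report["issues"] if i["severity"] == "warning")
    if blockers > MAX_BLOCKERS or warnings > MAX_WARNINGS:
        print(f"FAIL: {blockers} blockers, {warnings} warnings",
              file=sys.stderr)
        return 1
    print(f"PASS: {blockers} blockers, {warnings} warnings")
    return 0

report = json.loads('{"issues": [{"severity": "blocker", "rule": "null-deref"}]}')
exit_code = gate(report)  # a CI job would call sys.exit(exit_code)
```

Because the gate runs on every commit, a blocker-level defect stops the pipeline minutes after it is introduced, rather than surfacing in production.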

#4: Improve Code Security

Security is on everyone’s mind, from boards of directors to front-line developers. Since just about everything runs on software these days, it’s important to analyze code for potential vulnerabilities from different perspectives. Hence the need for multiple tools, including static code analyzers.

“[Static code analysis] allows the junior developers to work independently without guardrails while making sure they’re developing along the lines of the team’s expectations,” said Avanade’s Schmidt.

“It’s not a catch-all for everything. There are limits to it, and a lot depends on what kind of tools you’re using and how you’re using them,” said Herschmann. “Usually we see organizations using several tools, [such as] SonarQube for basic sanity checks and more dedicated tools like Veracode or Checkmarx that do security checks. The bottom line is I want to get an additional safety net.”

Avanade uses static code analysis for security purposes. In fact, it’s part of the check-in process. “We’re doing this on current projects I’m working on. You check in your code, it scans for any security vulnerabilities, and it checks for credentials stored in code that a developer might have accidentally put in there,” said Avanade’s Schmidt. “It’s not a replacement for developers [understanding] security practices or preventing bad code, but it does check for the low-hanging-fruit security items. It’s there to make sure you’re passing the sniff test.”

Apparently, Gartner is getting more calls from CIOs and application portfolio managers because they lack insight into the quality issues and risks their applications represent. Static code analysis plays a part there. “Think about a large organization that has thousands of applications. I’ve had calls where the person says, ‘We don’t really know what’s going on there. What is our risk? How much code do we have? Who is making the changes? Are there significant changes in code quality from Supplier 1 versus Supplier 2?’” said Herschmann.

In those cases, the data generated by static code analyzers comes into play. That data is merged with data from HR and other sources to provide insights into who wrote the code, where else it might have come from, how code quality is changing over time, and other details portfolio managers want to understand. “Static code analysis is one key element. The intent isn’t just to say here’s an error and here’s the resolution we recommend. The view is more that we are at a rating X for this particular type of application and we’re rating it Y. Why is there a difference? It could be different suppliers or applications,” said Herschmann. “The tools contribute to the overall dataset.”
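The credential scan Schmidt describes at check-in can be sketched in miniature. Everything here is a toy: the two patterns stand in for the much larger rule sets (and entropy heuristics) that real secrets scanners use, and they only catch the low-hanging fruit, exactly as Schmidt frames it.

```python
import re

# Illustrative patterns only; real scanners cover far more
# credential formats and add entropy-based detection.
PATTERNS = {
    "hardcoded password": re.compile(
        r"""password\s*=\s*['"][^'"]+['"]""", re.I),
    "AWS access key id": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_for_credentials(text: str) -> list:
    """Return (line_number, finding) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

sample = 'db_password = "hunter2"\nregion = "us-east-1"\n'
print(scan_for_credentials(sample))  # prints [(1, 'hardcoded password')]
```

Wired into a pre-commit or check-in hook, even a simple scan like this stops an accidentally committed secret before it ever reaches the repository.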

#5: Shift More Quality Left

Shift left does not make testing any less important than it has been; however, it does enable higher quality to be built in






earlier in the SDLC, which saves time later. Risk, user expectations, competitiveness, and operational soundness are all being viewed through the lens of software quality, particularly as businesses undergo digital transformations. “Static code analysis is another piece I can throw in that will help me be more proactive from a quality perspective and more efficient from a testing perspective,” said Gartner’s Herschmann.

Gartner is also receiving inquiries about non-functional requirements and non-functional testing. Specifically, its clients say they don’t want to wait weeks to get the results of a dedicated performance test; they want to get performance feedback during the CI process. While static code analysis is not a substitute for performance testing, some of the more sophisticated static code analysis tools can help developers understand where they’re introducing code that could impact performance.

Avanade’s Schmidt said he has noticed developers adding static code analysis to their build processes, because if it’s included as part of code check-in, developers don’t have to think about it as much. “I’ve seen [static code analysis] evolve from somewhat useful to where we are today, which is giving me a lot of good recommendations of how to make code better,” said Schmidt. “As developers are pushing code directly into production, static code analysis is another tool to validate the quality. If you’re trying to release on a more frequent basis, you need to be able to trust that the code you’re deploying is of higher quality.”

The point of the shift-left trend is to ensure that fewer errors make their way downstream. Static code analysis is one of many tools developers can use to ensure they’re providing code that contains fewer bugs.
“Static code analysis is perfect for the shift left trend in terms of making sure you’re going to have the proper type of application to put in production, the type of source code a traditional QA person would want to test and would be able to test,” said Voke’s Lanowitz. “You want to make sure the developers are doing what they can up front, so when the QA team tries to do some functional or performance testing later on, the code will be in good enough shape that other tests can be run.”

“Static code analysis is one key element. The intent isn’t just to say here’s an error and here’s the resolution we recommend,” said Gartner’s Herschmann.

The Future is Faster and More Efficient

Some people have avoided static code analysis tools because the older versions were comparatively cumbersome. Today’s developers don’t have time to waste, and thankfully, the tools have improved greatly. They’ll continue to get faster and more efficient, and there will probably be better in-IDE experiences across the board. At present, developer experiences vary across IDEs.

“It’s good that [static code analysis] is coming back into vogue, because people aren’t using it and they should use it,” said Voke’s Lanowitz. “The tools have gotten very good. Ten years ago, people would say the tools give you a lot of false positives.”

Meanwhile, there are many more types of static code analysis tools that have been built to ensure that new languages and new architectures such as microservices are supported. In fact, there are so many static code analysis tools available today that reviewing even one of the lists can be a daunting experience. Right now, the tools identify errors. Perhaps in the future, more of them will tell developers how to address those errors.

“I might get a report that has thousands of warnings about errors, but it doesn’t give me a starting point that says if you fix these five things, 80% of your issues will be gone,” said Herschmann. “Not all of the tools have that [kind of] intelligence. This is where I think the expertise of specialists is still needed.”

Avanade’s Schmidt would like to see static code analysis tools become part of compile time in the IDE, so developers don’t have to think about even running them. “If you get a compile error that says you did something bad, your code analysis should be the same thing. Compile, run code analysis, you know you did something bad or not, and go fix it as part of that development or round-trip process,” said Schmidt. “I don’t want to spend 25 minutes running all the static code analysis tools. The tools have to get faster, and there’s some movement toward that, especially with people moving it into their build processes and check-in processes.”

Given the sheer proliferation of static code analysis tools, there will likely be consolidation among at least some of them, which would simplify choices for developers. “Right now, we have roughly six different tools we run as part of our coding process. You have to remember that you ran all six, which is why it’s important to be part of that build process,” said Schmidt. “I assume they’re going to consolidate these tools so you have a go-to, best-of-breed tool, like a security tool, and that’s a little bit different than a coding standards tool.”

Even if there is consolidation, developers will still need to use the right tools for the right jobs. Right now, some developers think they have enough other tools, so they don’t need a static code analyzer. Conversely, it’s also possible to place too much faith in a single tool.

“You should implement a comprehensive quality strategy that in addition to static code analysis includes BDD, pair programming, automated testing and dynamic code analysis,” said Herschmann. “All of this should be part of this comprehensive quality strategy.” z
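The “fix these five things” starting point Herschmann wishes for can be crudely approximated today by ranking findings by rule frequency. The findings list below is made up for illustration; the point is that a simple frequency count often reveals that a handful of rules account for most of a noisy report.

```python
from collections import Counter

# Hypothetical analyzer output: (rule, file) pairs.
findings = [
    ("null-deref", "a.c"), ("unused-var", "b.c"), ("null-deref", "c.c"),
    ("null-deref", "d.c"), ("tainted-data", "e.c"), ("unused-var", "f.c"),
]

def top_rules(findings, n=2):
    """Rank rules by how many findings they account for, so a team can
    see which few rule violations dominate the report."""
    return Counter(rule for rule, _ in findings).most_common(n)

print(top_rules(findings))  # prints [('null-deref', 3), ('unused-var', 2)]
```

A ranking like this does not replace the specialist judgment Herschmann mentions, but it turns a report with thousands of warnings into an ordered to-do list.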


Code analysis is about more than software
BY DAVID RUBINSTEIN

Static code analysis is usually thought of in terms of preventing vulnerabilities from existing in code. And it’s thought of in terms of things like memory leaks and tainted data. But as businesses become more reliant than ever on software to drive their revenues, it is important to think about the damage these vulnerabilities can do to the bottom line. So here, we present “5 Ways Static Code Analysis Can Save Your Business From Ruin,” as detailed by Walter Capitani, director of product management at Rogue Wave Software.



Think of the amount of money companies spend to establish their brand — by meeting criteria the public establishes for reliability and security, along with ease of use, emotional connection and more. Back in the day, before iPhones came to dominate that market, it was known that if you bought a BlackBerry, it was the most secure device on the market. Yet Apple offered emotional bonds and a delightful user experience — and a camera — and has come to just about own that market. And since then, it has become a target for hackers. In 2016, a security flaw in Apple’s iMessage system enabled users to spoof addresses to gain access to data and have users believe they were interacting with trusted addresses when they weren’t. “Static code analysis might have been able to tell Apple, ‘these inputs could have prevented a crash.’ Tainted data could cause the crash, but they were not scanning for tainted data in messaging,” Capitani said. “The number of times you have to update and fix software affects your reputation.” While Apple might have enough goodwill built to avoid irreparable harm to its reputation, a steady stream of breaches and fixes will do damage.

And a smaller company, without the history of providing thrilling experiences, will be damaged much more.







Capitani told of a breakdown of the point-of-sale system at Starbucks in 2015 that prevented them from making sales in all of the company-owned stores in the U.S. and Canada. Starbucks said it was due to a failure during a daily system refresh. Food has an expiration date, and if those kinds of breakdowns — caused by software bugs — last too long, the company has to trash its food inventory and suffer the losses. “If the supply chain is interrupted by software, there’s a ripple effect that costs you money down the line,” he said.

When Boeing was rolling out its 787 Dreamliner fleet of airplanes, it incurred a battery problem that grounded the fleet until the root cause of the problem could be determined. “Salespeople were in final negotiations for sales,” Capitani said, “and they might have had to make price concessions or lose the business altogether. Problems like that affect the sales cycle.”

Earlier this year, Equifax lost a tremendous amount of data as the result of a software vulnerability discovered in their application. Since then, they’ve been sued for damages by innocent third parties affected by the data loss.


“This is significantly distracting to companies, who now are worrying about legal issues instead of doing their day job,” Capitani said. And that affects the bottom line.



In late 2015, the NEST smart home company had problems with their thermostats not turning the heat on in winter. Aside from pipes freezing, people away on vacation might have left pets home that could succumb to the cold if it went on too long. “In today’s world, one unhappy customer can create a huge amount of negative publicity,” Capitani explained. NEST has a range of products, and if a potential customer were to read about the thermostat, he might think twice about buying a security camera from the company. All of the above examples involved software that was not secure, or implementing newer technologies without thoroughly testing them. “Why would a company add static code analysis to what they’re doing?” Capitani asked. “Think about the impact of software quality on these factors. We’re not talking about quality for quality’s sake.” Actually, the business is at stake. z





Higher education benefits are win/win for software engineers
BY ALYSON BEHR

American scientist Neil deGrasse Tyson said, “There is no greater education than one that is self-driven.” That couldn’t be more true than in the fields of software engineering and computer science today. While an undergraduate degree often opens the door to an entry-level job in a chosen field, a higher degree like a master’s or Ph.D. will arm its holder with deeper knowledge, a broader skillset and greater career opportunities.

Undergraduates learn the fundamentals of software design and architecture, while master’s candidates, who already have the undergraduate degree, get to delve into more exploratory directions and address more real-world problems. According to Mohammad Abu Matar, associate professor in the College of Computer and Information Sciences at Regis University, “Undergraduate students learn how to solve problems, specific to the software engineering or computer science field. They learn algorithms and how to design solutions to meet specific needs. When they get into the master’s program, we equip them with research and development skills.” He points out that the big jump is the ability to tackle ambiguous problems and come up with designs for them. He describes the problems at the master’s level as much more open-ended than at the undergraduate level. “So they are a bit vague, because they basically are similar to what a customer will be requiring in the real market.”

‘My own personal experience opened a lot of doors and gave me the confidence that hey, I could do anything; solve problems and take in new challenges.’
—Mohammad Abu Matar, Ph.D., Regis University

Abu Matar says the M.S. can be completed in two years, although it took him about three. If someone wants to be aggressive, he says, it can be done in a year and a half. The Regis master’s program is comprised of 36 credits and costs $750 per credit, which maps out to around $27,000 without scholarships. That’s not inexpensive, but there are over 200,000 software engineering job openings to be filled paying up to $140,000.00 according to, making the investment in the software engineering M.S. compelling.

Careers can dovetail
DeGrasse Tyson believes, “The cross-pollination of disciplines is fundamental to truly revolutionary advances in our culture.” Years ago, people went to work in one field, doing one thing for most of their life. That’s not the case anymore. As technology transforms the culture we live in, it’s commonplace to see someone switch career paths. Abu Matar says that an M.S. can open doors to opportunity in other fields like computer science, artificial intelligence, and cybersecurity. “Because software engineering is so popular right now, we have a lot of students who are switching their careers. In the last two years, I’ve seen students who have accounting backgrounds who decided they wanted to change their careers, so they came to us for a master’s degree.” Regis has a bridge program, a three-class sequence that students with no prior engineering or computing background must take to get into its master’s program. Abu Matar describes two candidates who came with no software engineering background, one from accounting and the other from psychology, who found jobs before they even finished the program.

Software engineers, developers and DevOps people already have a handle on the industry. The benefits of going back to school for them are, on the one hand, financially driven and, on the other, more esoterically motivating. Nowhere is the bump in pay grade more evident than when it comes to government and military personnel. Abu Matar says he has students who are currently in the Air Force, and many students who work for military contractors, like Lockheed Martin, working on their M.S.

For the passion
In addition to greater remuneration, another benefit of an advanced degree is placement at world-renowned research and development institutions where innovation is a way of life. He points to companies like IBM in the private sector that have huge R&D programs requiring at least a master’s, and even better, a Ph.D. DeGrasse Tyson again puts it succinctly: “Everyone should have their mind blown once a day.”

The people who have been supporting legacy systems and existing customers for years come to mind. Their chops with the technologies they’re exposed to daily are excellent, but their understanding of new trends and architectures isn’t up to speed. Attaining a master’s degree modeled on what the industry requires now is a perfect way to build a skillset and, yes, make more money too.

Having a master’s degree or a higher graduate degree is a challenging journey with many rewards. Abu Matar adds, “My own personal experience opened a lot of doors and gave me the confidence that hey, I could do anything; solve problems and take in new challenges.” z

Content provided by SD Times and



Software Testing SHOWCASE
BY DAVID RUBINSTEIN


There are many technologies available to organizations looking to bring their testing up to the speed of software development. Ensuring quality can no longer be the drag on software deployment if businesses want to stay competitive and be able to take advantage of changes in their markets.

Some are choosing continuous testing, while others are automating their tests. Some are writing tests first and then writing code to pass the test, while others write code based on the desired behavior of the application, and then test to make sure the app is doing what it was intended to do. Still others are employing service virtualization, to ensure the components and APIs their applications need are reliable. The companies are asking themselves about the amount of risk they’re willing to take when their applications go live.

How do organizations decide which path to take? Are they trying to test during sprints? Are they convinced that manual testing is the only way to be certain the software meets their level of quality?

The SD Times Testing Showcase has been put together to give our readers a look at the many offerings on the market to help them address their testing challenges and align their testing with the rhythms of their software development life cycle. So no matter which direction you’re heading with your testing — standing pat is not an option — we’re sure you’ll find something from the following providers to help you to your future of testing.



Micro Focus on the growth of intelligent testing

The advent of Agile methodology, DevOps, and emerging technologies like artificial intelligence, robotic process automation and machine learning creates a seismic shift in application testing requirements and test tools. As companies gravitate toward digital transformation strategies, there are more applications and new technologies that require testing at increasingly rapid rates. Key challenges include the elimination of dedicated testing time, meeting new requirements, trying to work more with user-specific testing experiences — whether those be shift left or shift right — and the maturity of the business itself.

When asked to read the tea leaves on the forward direction of the industry, Michael O’Rourke, product manager at Micro Focus, said, “There’s more of a branch into the future of the industry that’s focused on intelligent test automation, meaning where the user expects the applications to work in any type of environment. They expect more abundance of smart devices and self-healing environments that grow the approaches we currently have for automated testing today. It includes being able to test intelligently, while also dealing with the rapid variation of changes.”

As an example, he cited self-healing environments that come into play when customers have issues with a breaking UI change that requires them to stop the test where it is, or go back and modify scripts. That takes time away from their schedule; self-healing environments adapt to those types of breaks. It’s mission-critical for test engineers and QA teams to choose test tools that support their existing environments as well as those on their roadmap.

THE OMNI-CHANNEL TESTING POWERHOUSE
The Micro Focus test portfolio is well-equipped to address these challenges. It’s wide-ranging, supports numerous technologies and is expanding its offerings on a frequent basis. The company’s flagship product is Unified Functional Testing (UFT). “The product’s our powerhouse with a huge customer base, 40-plus different supported technologies, flexible licensing mechanisms, and it’s fully integrated with Agile and DevOps methodologies,” O’Rourke said.

In late 2017, the company introduced the ability to burst tests out to the cloud with the new Micro Focus StormRunner Functional product. The customer can either design the test in their IDE, or within UFT; when they run the test, it bursts to the cloud. A couple of benefits are that it can save users from dealing with test execution on their local system, and once in the cloud, testers can review tests and run results as they happen throughout the day, providing a managerial view. Also, large organizations that want test information kept in one place can do that.

‘[Users] expect more abundance of smart devices and self-healing environments that grow the approaches we currently have for automated testing today.’ —Michael O’Rourke, Product Manager, UFT, BPT and Sprinter

ROBOTIC PROCESS AUTOMATION
With robotic process automation becoming less hype and more real, O’Rourke said Micro Focus is moving into the RPA space due to customer demand. He said, “This segues into our adoption of intelligent automation. While it’s interesting that most people don’t look at a testing product in the RPA space, we’ve had numerous customers and partners ask us to support RPA and their efforts toward automating extensive scripts and recordings across distributed infrastructures. This is precisely where UFT comes into play in conjunction with our Operations Orchestration tool. Other competitors in the RPA space don’t have that functionality or integration.” Micro Focus customers are already leveraging its test automation solutions to easily create bots that automate rules-driven business processes, as well as integrating Micro Focus Operations Orchestration to complete actions and automate workflows that link steps to standardize RPA processes.

GETTING WHAT MATTERS RIGHT
“We often talk about the fact that we want to use these devices to make sure we have the right test coverage, that we’re focusing our tests on what’s actually being used and what people care about,” said Amy Fenwick, Micro Focus product marketing manager. The target for Micro Focus is on delivering tools that track how end users are actually using the application, so testers can address test coverage accordingly. O’Rourke said it’s about the growth of intelligent testing automation, and promises the company will continue to give customers the tools to build things intelligently into the future.
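The self-healing behavior O’Rourke describes (falling back to an alternate locator when a UI change breaks the primary one, instead of halting the run) can be sketched in a few lines. This is an illustrative toy, not Micro Focus code; the page model and locator names are hypothetical.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Toy model of a self-healing locator: try each known locator in order and
// use the first one that still resolves on the (simulated) page.
public class SelfHealingLocator {
    public static Optional<String> find(Map<String, String> page, List<String> locators) {
        for (String locator : locators) {
            String element = page.get(locator);
            if (element != null) {
                return Optional.of(element); // first locator that still resolves wins
            }
        }
        return Optional.empty(); // every locator broke: report it, don't crash mid-suite
    }

    public static void main(String[] args) {
        // Simulated page after a UI change renamed the button's id.
        Map<String, String> page = Map.of("btn-submit-v2", "Submit button");
        List<String> locators = List.of("btn-submit", "btn-submit-v2"); // primary, then fallback
        System.out.println(find(page, locators).orElse("not found")); // Submit button
    }
}
```

A real implementation would also record which fallback healed the test, so the script can be updated deliberately later rather than silently drifting.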



Choose Mobile Labs for Appium success

Organizations often face significant challenges when implementing automated mobile testing. Choosing the right test automation tools and getting started writing test scripts can be complex for testing and quality assurance teams. Mobile Labs works with enterprise mobility teams to simplify the test automation journey at every level of maturity, so organizations can ensure more reliable mobile app and mobile web experiences.

“Enterprises are moving away from established commercial automated testing tools to open-source tools like Appium,” said Dan McFall, president and CEO of Mobile Labs. “When organizations do that, they face the usual open source challenges, which are: Who’s going to support it? Who do I go to with questions? How good is the documentation? How often am I going to get tool updates?” All of those issues impact the scalability of test automation. Moreover, newer open-source projects, including Appium, have technical challenges that more mature projects have already addressed.

When enterprise mobility teams try to adopt or scale Appium on their own, their use tends to be limited to small deployments that require low levels of concurrency. “The creation of tests and test execution takes time, so trying to scale that up can become very expensive and very challenging,” said McFall. “You have to keep adding Appium hardware, which is difficult to manage and support.”

MOBILE LABS IS YOUR APPIUM PARTNER
Mobile Labs integrated Appium hardware into its mobile device cloud infrastructure so customers can continue down their continuous testing paths without worrying about Appium hardware-related issues. “We understand mobile devices, infrastructure and the ties between hardware and software,” said McFall. “The benefit is vastly increased concurrency in terms of the number of tests you can run simultaneously in a single environment at any point in time. We also accelerate test execution time and provide scripting tools so you can more easily integrate the mobile device cloud and Appium into your daily workflow.”

Mobile Labs also supplements its tooling with in-house Appium support, so customers can quickly get answers to questions as well as technical assistance with Appium setup and scripting. “Our goal is to be your partner throughout the lifetime of your mobile application journey,” said McFall. “If you have an Appium problem, Mobile Labs has an Appium problem. If it means fixing Appium or our solutions, or teaching you how to use Appium more effectively, we’ll do that. We’ll help you overcome the documentation and expertise-related challenges you face.”

GET REALISTIC, REAL-TIME MOBILE TESTING CAPABILITIES
Mobile Labs has continued to improve interactive device performance, so when a customer uses one of its remote devices, the real-time experience is nearly identical to having an actual device in hand. That way, manual testers and test automation engineers building and debugging scripts can increase productivity.

“Even for customers experienced with Selenium, Appium presents new and sometimes difficult challenges,” said McFall. “Some of these are due to the nascent nature of Appium as a technology, and I am sure will over time be addressed by the open-source ecosystem and vendors. Some other challenges appear to be never-ending in the mobile application space. New devices, operating systems, etc., are continuously being released and typically require ‘work under the hood’ of the automation

‘The creation of tests and test execution takes time, so trying to scale that up can become very expensive and very challenging.’ —Dan McFall, president and CEO

framework. That is, before the likely need to refactor existing automation scripts for new capabilities and object changes.” “Appium’s popularity is growing, but the tool needs a lot of help, and not just from Mobile Labs on the infrastructure side,” said McFall. “Our partners are focusing on the Appium ecosystem now because there’s a lot of opportunity for innovation. The commercial support seems to be fueling enterprise adoption.” For example, Mobile Labs’ current quarterly growth rate is consistently 30% or better, fueled by existing and new customer demand. “Everything we’re doing around Appium has been customer-driven,” said McFall. “Some of our customers were having trouble getting started with Appium. The more mature organizations were facing later-stage challenges. Regardless of where you are, we’ll support you every step of the way.”

CHOOSE THE CLOUD THAT FITS THE NEED
Mobile Labs is well-known as the leading provider of on-premises device clouds, although the company now offers hosted and hybrid solutions as well. “We have more than 2,000 devices in on-premises cloud deployments, so that’s still a popular option,” said McFall. “We also have more than 100 devices we’re hosting on behalf of customers to support whatever deployment model the customer prefers.”



Parasoft Simplifies API Testing

Developers and testers are finding common ways to deploy and interface with API tests, but they still don’t fully understand how their applications use APIs or the interrelationships among those APIs. Parasoft SOAtest uses AI and machine learning to reveal API behaviors that have not been observable previously. It uses that information to automatically generate tests and to aggregate changes across APIs.

“Developers and testers spend a lot of time talking about what an API should do and how to test it. Then, as soon as there’s a change, they have to have the whole conversation again,” said Chris Colosimo, product manager at Parasoft.

Meanwhile, more organizations are decomposing monolithic applications into microservices. Developers are expected to know what the associated APIs do and how to use them, which becomes more difficult with scale. For example, one Parasoft customer was managing 250 to 300 unique APIs just a couple of years ago. Today it’s managing 2,000.

“The developers were in constant training mode, explaining to each of the individual testers this is what the API does, this is how it works, this is how you should test it, and this is how you can understand it,” said Colosimo. “All of that goes out the window as soon as there’s change.”

In fact, test maintenance has become so costly for some organizations that they’re questioning the ROI of automated testing.

“Test maintenance kills you if you’re doing it wrong,” said Arthur Hicken, evangelist at Parasoft. “When people say automation isn’t working, it’s almost always a maintenance problem.”

Forrester recently named Parasoft a leader in its 2018 Forrester Omnichannel Functional Test Automation Wave based on its innovative use of AI and machine learning.

[Pictured: Chris Colosimo and Arthur Hicken]

INTRODUCING SOATEST SMART API TEST GENERATOR
Most API testing tools provide a place to create API tests, but it can be difficult to understand which APIs to test and how to piece them all together into a meaningful test scenario. SOAtest Smart API Test Generator monitors how testers are interacting with an application in a non-disruptive way. From that, it extracts out relevant test scenarios.

“We capture the interaction between the application and the backend services and then leverage artificial intelligence to then extract out relationships and patterns in that interaction,” said Colosimo. SOAtest users can then build a comprehensive API testing strategy. The product includes visual tools, and testing artifacts can be easily shared between development and testing teams.

Right now, a lot of teams favor UI tests over API tests because they’re the easiest to associate with requirements and non-technical users can understand them. However, API tests reveal the component-level interactions with the application, which accelerates defect resolution. Knowing how individual APIs work isn’t enough, because the limited view doesn’t provide insight into how a particular application will order APIs or how APIs behave when they interact with other APIs.

“You may have hundreds or thousands of APIs. You might get something to happen if you get them in the wrong order, but if you want something meaningful to happen, you have to get them in the right order,” said Hicken.

A healthcare company recently slashed the time it takes to build API test scenarios in half using SOAtest. Its situation was complex because it was simultaneously adding new brokers, consolidating systems, and ensuring that all doctors and providers could be found under their respective areas of specialty regardless of the insurance companies involved. The developers integrated all of those connections in a week; the testers were then told they needed to identify those connections, which they were able to do swiftly with API Test Generator.

CHANGE ADVISOR AUTOMATICALLY UPDATES API TESTS
Parasoft Change Advisor collects information from all the API tests to identify changes. Users can create templates that automatically apply the changes across all relevant tests.

“When you build scripts you don’t want to lose all the work you’ve done,” said Colosimo. “Change Advisor takes the existing test and updates it, so it automatically works with the new API version.” For example, a simple API update that changes a label from “price” to “cost” can be aggregated across tens of thousands of tests, automatically.

“This is really important because if I have 100 testers on my team, when changes take place I need one individual to come in, make the change, and then apply that to all the test cases,” said Colosimo. “SOAtest Change Advisor handles the changes automatically. It’s quick, cost-effective, and more accurate.”
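Colosimo’s “price” to “cost” example hints at how one API change can be applied mechanically across a whole suite instead of test by test. The sketch below is a toy illustration of that idea, not Parasoft code; a real tool would operate on a structured model of each test rather than raw strings.

```java
import java.util.List;
import java.util.stream.Collectors;

// Toy "change aggregation": rename one JSON field across every recorded
// test payload in a suite, rather than editing each test by hand.
public class ChangeAggregator {
    public static List<String> renameField(List<String> payloads, String from, String to) {
        // Matching on "field": keeps the example simple; real tools parse the payload.
        return payloads.stream()
                .map(p -> p.replace("\"" + from + "\":", "\"" + to + "\":"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> suite = List.of(
                "{\"item\":\"book\",\"price\":12}",
                "{\"item\":\"pen\",\"price\":2}");
        // Apply the API's label change to the entire suite in one pass.
        System.out.println(renameField(suite, "price", "cost"));
    }
}
```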


The Gartner Magic Quadrant for Software Test Automation is here! Download your copy now!

[Figure 1. Magic Quadrant for Software Test Automation. Source: Gartner (November 2017); as of November 2017; © Gartner, Inc. Vendors shown include Micro Focus, Tricentis, IBM, SmartBear, CA Technologies and Testplant.]

Software testing reinvented for DevOps



Tricentis Continuous Testing Platform

Software has evolved from a process enabler to a competitive differentiator. In fact, software represents the interface to the business — and the point at which the entire brand either succeeds or fails. Despite the heavy dependence on software, developers and testers lack visibility into the business risks caused by changing code. With the demand for new software at an all-time high, software testing practices must be transformed to both protect and, more importantly, elevate the brand.

Using the Tricentis Continuous Testing Platform, organizations gain the visibility and control to manage the risk of releases. The platform includes:
• Agile Dev-Testing – Rapid, in-sprint testing and Agile test management shift your testing effort left, with deep support for open source tools;

‘We give you the ability to understand how to prioritize your tests based on business risk, which is becoming increasingly critical.’ —Wayne Ariola, chief marketing officer

• Intelligent Test Automation – With over 120 technologies and packaged applications supported out-of-the-box, it enables intelligent automated regression testing across any architecture or application stack at the speed of DevOps; and
• Continuous Load Testing – Delivers a cloud-based, on-demand performance testing lab that allows any tester to prevent application performance issues.

As software evolves, quality becomes harder to maintain. With increasing levels of complexity and accelerating release cadences, testers require a central platform that enables them to understand whether the release candidate has an acceptable level of risk.

“Even though an incremental change to an application or a service may be an atomic piece of work, it’s often a component that impacts multiple applications, machines and processes,” said Wayne Ariola, chief marketing officer at Tricentis. “The Tricentis Continuous Testing Platform allows organizations to understand the complexity of a release and how it impacts quality, which is essential for speed and ultimately DevOps success.”

SCALE AGILE AND DEVOPS SUCCESS
Agile and DevOps practices have little value if the resulting software puts the business at risk. The Tricentis Continuous Testing Platform allows test creation, management and execution to be approached from a business risk perspective. While

most organizations are able to tell if a requirement or user story has been tested, they have no insight into the potential business risks a change may cause.

“Even though the user story can be broader than the code change, it’s still a bottom-up description,” said Ariola. “Yet, that user story may impact a larger set of transactions or processes. So to say that the component changed as expected is one thing, but to understand the risk profile associated with the broader impact of that change is totally different.”

Software testing has been approached as a bottom-up activity, validating the user story at change time. Business risk management is a top-down exercise. Achieving both in a single platform, at scale, is critical to ensure the value of software over time.

“We give you the ability to understand how to prioritize your tests based on business risk, which is becoming increasingly critical,” said Ariola. “You also know how to prioritize the execution of those tests to meet your time constraint.”

The Tricentis Continuous Testing Platform is test-agnostic, with broad support for open source tools. It also supports Tricentis’ model-based test automation (MBTA). That way, different teams can continue using their preferred tools, understand the impact of code changes, and see whether the release candidate’s level of business risk is acceptable.

The industry average test automation rate is less than 20% for large enterprises. This must improve significantly to provide the fast feedback required for quality@speed. To get there, faster and simpler ways of creating automated tests, and a sustainable way to maintain them, are necessary to escape the “maintenance trap” created by flaky, brittle tests. Tricentis MBTA significantly reduces the overhead associated with the construction and maintenance of tests.

“With model-based test automation, you’re able to make global changes to your complete test suite in seconds versus days or even weeks,” said Ariola.
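The risk-based prioritization Ariola describes can be illustrated with a small sketch: weight each test by business risk, then greedily run the riskiest tests that fit the available time budget. This is a hypothetical illustration of the general technique, not Tricentis code; the weights and durations are invented.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Toy risk-based test selection: rank by business-risk weight, then pick
// the highest-risk tests that still fit the execution time budget.
public class RiskPrioritizer {
    public static final class TestCase {
        public final String name;
        public final double risk;    // business-risk weight, higher = riskier change area
        public final int minutes;    // expected execution time

        public TestCase(String name, double risk, int minutes) {
            this.name = name;
            this.risk = risk;
            this.minutes = minutes;
        }
    }

    public static List<String> select(List<TestCase> tests, int budgetMinutes) {
        List<TestCase> byRisk = new ArrayList<>(tests);
        byRisk.sort(Comparator.comparingDouble((TestCase t) -> t.risk).reversed());
        List<String> chosen = new ArrayList<>();
        int used = 0;
        for (TestCase t : byRisk) {
            if (used + t.minutes <= budgetMinutes) { // skip tests that would blow the budget
                chosen.add(t.name);
                used += t.minutes;
            }
        }
        return chosen;
    }

    public static void main(String[] args) {
        List<TestCase> suite = List.of(
                new TestCase("checkout", 0.9, 30),
                new TestCase("login", 0.7, 10),
                new TestCase("profile-theme", 0.1, 25));
        System.out.println(select(suite, 45)); // [checkout, login]
    }
}
```

With a 45-minute budget, the low-risk cosmetic test is deferred while the revenue-critical paths still run, which is the essence of prioritizing execution against a time constraint.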
Creating an automated test is one thing; ensuring that the test will run is another. The Tricentis Continuous Testing Platform provides risk-based design, test data management, service virtualization and advanced analytics to ensure tests can be executed where and when they’re needed. Testers also benefit from the consolidated analytics and decision criteria required to meet advanced automation goals.

Some organizations have arduous compliance requirements that require them to track the impact of a change across everything from legacy systems to microservices. Achieving that is difficult if not impossible without a continuous testing platform that is capable of tracking such changes.



SD Times

September 2018



Java 11 delivers high-quality features at speed

Java 11, which will be released this month, is the second and final major release of Java this year, following Oracle’s new schedule of putting out a major release every six months. According to the company, one of the goals of this new release schedule was to be able to deliver new features faster. Oracle also wanted to reduce the effort needed to upgrade to newer versions.

So far, the company feels that this new schedule is working well, according to Donald Smith, senior director of product management for Java SE at Oracle. “JDK developers are focusing on delivering high-quality features rather than worrying about meeting arbitrary deadlines or risk facing multi-year delays,” said Smith. “JEPs that might have been rushed into JDK 10, if the next version was years away, are instead being delivered with JDK 11.” Pushing features that are not quite ready to the next major release is not as big of a deal when that next major release is only a few months away.

Oracle claimed that while it’s still too early to tell, it has found anecdotal evidence that the migration from Java 9 to Java 10 has been straightforward. According to Smith, early adopters reported that updating to JDK 10 has been a “nonevent.” In addition, the transition to the early access release of Java 11 has been a smooth one.

Java 11 will deliver on Oracle’s final goal of the new release schedule, which is to provide stability for deployments of Java applications, with reliability and long-term availability being valued over new features. Java 11 is the first Long Term Support (LTS) release in this schedule, meaning that Oracle will be providing security and bug-fixing updates for Java 11 until at least 2026, which is something that it did not do for Java 9 and 10, which were immediately superseded by the next release. The next LTS release after Java 11 is not scheduled to be released until September 2021. According to Smith, these

LTS releases “enable enterprises wanting a slower pace to migrate from one well supported Java SE LTS release to the next at their own pace.”

Oracle is also offering Oracle Java SE Subscriptions, a low-cost way of obtaining Java SE licenses and support for the systems that need them, only when they need them. This makes it easier for developers to get access to performance, stability, and security updates for Java SE 6, 7, 8, 11, and later versions, Smith said.

In terms of features, Java 11 will add 17 new JDK Enhancement Proposals (JEPs), according to Smith. The new features are as follows:

1. Nest-based access control
2. Dynamic class-file constants
3. Improve Aarch64 intrinsics
4. Epsilon: a no-op garbage collector
5. Remove the Java EE and CORBA modules
6. HTTP Client (standard)
7. Local-variable syntax for lambda parameters
8. Key agreement with Curve25519 and Curve448
9. Unicode 10 support
10. Flight Recorder
11. ChaCha20 and Poly1305 cryptographic algorithms
12. Launch single-file source-code programs
13. Low-overhead heap profiling
14. TLS 1.3 support
15. ZGC: a scalable low-latency garbage collector (experimental)
16. Deprecate the Nashorn JavaScript engine
17. Deprecate the Pack200 tools and API

One new enhancement in this version is support for Transport Layer Security (TLS) 1.3, which was released in August. TLS 1.3 features improvements to security and performance, and removes some of the insecure features of version 1.2. A new HTTP/2-capable client has been added, as well as two new garbage collectors for memory management: Epsilon and


ZGC. Epsilon is a no-op garbage collector, while ZGC is a scalable, low-latency garbage collector for large heaps on 64-bit Linux systems. Oracle will also be open-sourcing Flight Recorder, a data collection framework for troubleshooting Java applications.

Developer productivity improvements include updates to the security libraries, support for Unicode 10, the ability to launch single-file source-code programs right from the Java launcher without first needing to compile them, and enhancements to the local-variable type inference used in lambda expressions.

“With every six-month release, developers have the opportunity to take advantage of new enhancements on a more predictable and digestible scale,” Oracle said. “This is much faster than the previous model, which forced developers to wait anywhere from two to three years to see some changes, and then there were many that needed to be adapted to.”

Developers can download Java SE 11 Early Access prior to the release to get acquainted with some of the new features. Dependencies may need to be updated prior to trying it out, as popular open-source libraries are regularly updated by the Java community to make them work better with new Java releases, Oracle explained.

Oracle is cautioning developers to watch out for and avoid using deprecated APIs in code and dependencies. Compilation warnings will be given when there are deprecated APIs, and Java 9 and later include a tool called jdeprscan, which helps identify issues in one’s code as well as in third-party Java libraries. For example, Java 11 removes the deprecated Java EE and CORBA APIs, so developers using those APIs need to switch to the corresponding standalone modules and libraries. Additionally, because JavaFX is no longer bundled with the Oracle JDK, developers should look into using the standalone OpenJFX libraries as a replacement.
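Two of these developer-facing additions, the standardized HTTP client (JEP 321) and `var` in lambda parameters (JEP 323), can be tried in a few lines on a JDK 11 build. The URL below is a placeholder, and the request is built but never sent, so the sketch works offline.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.util.function.BiFunction;

public class Java11Features {
    // JEP 321: java.net.http is now a standard API; the client prefers HTTP/2 by default.
    public static String requestMethod() {
        HttpClient client = HttpClient.newHttpClient(); // built, but no request is ever sent
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/")) // placeholder URL
                .GET()
                .build();
        return request.method();
    }

    // JEP 323: `var` is now allowed for implicitly typed lambda parameters,
    // which lets annotations be attached without spelling out the full types.
    public static int addWithVarLambda(int a, int b) {
        BiFunction<Integer, Integer, Integer> add = (var x, var y) -> x + y;
        return add.apply(a, b);
    }

    public static void main(String[] args) {
        System.out.println(requestMethod());        // GET
        System.out.println(addWithVarLambda(2, 3)); // 5
    }
}
```

On JDK 11 this file can also demonstrate JEP 330 itself: it runs directly with `java Java11Features.java`, no explicit compile step.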
According to Smith, developers who are planning to move to Java 11 from Java 8 can get started by evaluating their code using Java 10 first, and then reading its migration guide, release notes, and other accompanying documentation.

Java’s new release schedule is proving to be a bit painful for some organizations

In theory, having two major releases every year sounds great, but some companies are struggling to adjust to the new schedule. Though Oracle claims that transitioning to newer versions has been smooth, Brandon Donnelson, product architect of GXT at Sencha, a provider of cross-platform tools for web developers, finds that the new release schedule has been a bit painful.

Up until this new release schedule, Java moved pretty slowly. Now it feels like it is really starting to take off, Donnelson said. From what he has seen, there are a lot of groups that want to upgrade to the latest version but are being held back because the toolchain has not yet caught up. Many organizations have a lot of investment in their current platforms, which depend on a certain toolchain and have dependencies to factor in, he explained. So while a company may want to upgrade to take advantage of new features, such as type inference, it may not be feasible. “It took a while for Java 8 to catch on,” Donnelson said. “And the enterprise world is always behind anyway.”

Even though some organizations are being held back, there are still plenty that want to jump right into the newer releases. Donnelson compares Java to Android upgrades in terms of fragmentation. There are several different versions of Android currently in use, similar to how some folks will want to use Java 11 while others remain on Java 10, Java 9, etc.

“To upgrade your shop to the latest version twice a year might be expensive for some shops,” Donnelson said. He explained that some large projects may take half a year to completely upgrade. He suspects organizations might be a bit timid about upgrading at first, until they watch a few more releases come out and see that the “Java ship” is sailing smoothly.
“I think folks want it, but I think — at least at the enterprise level — they won’t be able to move it as fast as they’re releasing at first,” Donnelson said.

One thing that is nice about the new release schedule is that pre-releases are available, so developers can test out a new release in advance rather than having to wait for it to come out, Donnelson explained. Being able to put new releases into the staging process could be nice, he said.

According to Donnelson, the features that will make it most enticing for developers to upgrade to newer versions of Java, such as 9, 10 and 11, are type inference and syntactic changes. Those make writing Java more like languages such as C#, which already has type inference; that makes the language more readable since it’s not so busy. “I think those are benefits to come that would attract folks to move,” said Donnelson. “I mean, there’s other features that kind of tickle hot spots within the development strategy. There’s lots of ways to use Java so there’s different features for that.”

—Jenna Sargent


JRebel speeds all Java app development BY LISA MORGAN

Which version of Java should you use? If you’re not sure, you’re not alone. Since the cadence of Java version releases shifted from three or four years to six months as of the Java 9 release, lots of developers have been confused. Many of them have avoided Java 9 and Java 10 because they were short-term releases that lacked timely tool support. So, some developers stayed loyal to earlier versions while others decided to wait for Java 11, which is a long-term release Oracle will support for the next five years. Regardless of which version of Java developers choose, Rogue Wave’s JRebel saves time and money. JRebel is a JVM plugin that integrates with more than 100 leading frameworks as well as application services, IDEs and build environments. Developers can use it for any type of Java application.

App changes don’t require redeployments
Application redeployments delay application changes. With JRebel, developers can make code changes and view the changes in real time while the application continues to run. “There’s no need to restart or redeploy your application,” said Toomas Römer, CTO at ZeroTurnaround, a Rogue Wave company. “It’s a lot like using PHP for web apps, because all you have to do is go to the browser and refresh the page.”

JRebel also helps developers avoid the user experience gaffes that application redeployments cause. “For example, when a developer is coding some functionality that requires many steps of user interaction, then redeploying the code might throw him back to the initial step, because this is how redeployments work, you lose whatever state you had. Here JRebel also comes into play and will reload the changes on whichever step the developer is on when writing that functionality, enabling him to continue from where he was,” said Michael Rasmussen, product manager of JRebel.

JRebel preserves state, which avoids such disruptions. “You can just go in, change the code, recompile the class and reload the page. All the information is saved on the server so the session remains tied to the user,” said Römer. “If you restarted your server, you’d have to recreate the state.”

JRebel’s features have been expanded for SAP Hybris developers, based on demand. Those developers are trying to avoid the impacts application changes have on e-commerce and digital transformation processes. “We’ve made a point of staying current with new versions of Java, frameworks and application servers, including Hybris, because some of our customers always upgrade to the latest platforms and tools,” said Rasmussen. “Other customers work in strict environments that keep them locked into older platforms and tools. We make sure that all of them can take advantage of JRebel’s productivity benefits.”

Build and update microservices in parallel
More developers are taking advantage of microservices and containers, although managing an increasing number of components can be a complicated and daunting task. “After a couple of years, you can

either end up with a very high number of microservices that run fast or some microservices that have grown quite large,” said Römer. “It’s not that microservices necessarily reduce application complexity compared to monolithic applications. The complexity has just shifted from one huge application to smaller chunks of inter-communicating functionalities.”

Those building microservice architectures tend to develop multiple microservices simultaneously, some of which may depend on others. “With JRebel, you can develop a microservice and a client microservice using it simultaneously without restarting either of them,” said Rasmussen.

Like all Rogue Wave tools, JRebel helps enterprise developers build better code faster, whether they’re accelerating the move from monolithic architectures to microservices or improving the performance, scalability and security of production systems. “JRebel enables dramatic productivity gains,” said Rasmussen. “We eliminate the time-consuming build, deploy and run cycle, which speeds Java development and removes the wait-time frustration so many developers have.”

Without JRebel, Java developers can spend hours waiting as builds and redeploys connect across various development frameworks. JRebel collapses that time down to minutes or seconds, so developers can avoid context-shifting between tasks. In fact, 10 developers using JRebel can improve their efficiency by 20%, enabling them to complete the work of 12 developers. “Developers should not have to compile, build, package and deploy their applications every time they want to see how new code works or every time they make a change,” said Römer. “Each interruption costs time and money.”



September 2018

SD Times


Avoid security pitfalls with automation

The rise of robotics is raising a number of concerns. Many are overblown or entirely unwarranted; others need to be addressed and mitigated. While job loss, the main concern associated with robotics, is getting much of the attention, in my field — robotic process automation (RPA) — robots are taking the mundane, repetitive, spreadsheet-centric tasks off employees’ hands, allowing them to do their jobs better and with greater satisfaction.

However, the security concerns many businesses have when it comes to robots are not being adequately addressed. When implemented, robotics automates core processes across several areas of the business, leveraging platforms that contain high volumes of customer and employee data. It can be nerve-racking to give a piece of software unbridled access to such sensitive information. Without appropriate security measures in place to safeguard and manage this data, all the good that robotics is doing can be undone if a single vulnerability is exploited, leaving the organization at risk of being hacked or worse.

In the case of robotics, steps can and should be taken to ensure that the benefits greatly outweigh the risks. Organizations need to identify, understand and avoid these common security pitfalls when it comes to automation.

Robots and humans are not interchangeable

Automation technology has advanced to where it can take on processes that depend on human-to-bot interactions, but it is not yet ready to take on human user credentials. These solutions tend to encounter security issues when bots are assigned human user credentials, because those credentials are hard-coded, meaning they cannot be altered without changing the program. The degree of security sophistication is entirely dependent on the developer, which may not be consistent enough to ward off all vulnerabilities. To avoid relying on developer consistency, encrypted protocols, independent credentials and change-audit software are crucial for a robust security posture.
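A minimal sketch of that advice, with invented names: each bot resolves its own independent credential at runtime from a secret store or environment instead of baking it into the program. The `BOT_<id>_TOKEN` naming convention and the plain-map stand-in for a vault are assumptions for illustration, not any RPA product’s API.

```java
import java.util.Map;
import java.util.Optional;

// Sketch: credentials are provisioned per bot and looked up at runtime,
// so rotating a secret never requires changing the bot's code.
public class BotCredentials {
    // Resolve a bot-specific secret; fail loudly rather than fall back
    // to a hard-coded password.
    static String credentialFor(String botId, Map<String, String> secretStore) {
        return Optional.ofNullable(secretStore.get("BOT_" + botId + "_TOKEN"))
                .orElseThrow(() -> new IllegalStateException(
                        "No credential provisioned for bot " + botId));
    }

    public static void main(String[] args) {
        // In production this would be a vault client or System.getenv();
        // a plain map stands in for the secret store here.
        Map<String, String> store = Map.of("BOT_invoice_TOKEN", "s3cr3t");
        System.out.println(credentialFor("invoice", store));
    }
}
```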

Prevent unintentional escalations from the outset

With more operators and developers managing the software, the risk of privilege escalation is heightened. This in turn tends to increase the need for third-party software to watch for fraud. While it might seem simple, the most effective way to avert privilege escalation with these traditional solutions is to make sure that all bots have only the access and capabilities required to complete their given processes.
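The least-privilege rule above can be sketched as an explicit allow-list with a deny-by-default check. The class and action names are hypothetical, not taken from any particular RPA platform.

```java
import java.util.Set;

// Sketch: a bot carries only the capabilities its process needs;
// anything not on the list is refused by default.
public class BotPermissions {
    final Set<String> allowedActions;

    BotPermissions(Set<String> allowedActions) {
        this.allowedActions = allowedActions;
    }

    boolean mayPerform(String action) {
        return allowedActions.contains(action);  // deny by default
    }

    public static void main(String[] args) {
        // An invoicing bot needs exactly two capabilities, no more.
        BotPermissions invoiceBot =
                new BotPermissions(Set.of("read-invoice", "post-payment"));
        System.out.println(invoiceBot.mayPerform("post-payment")); // true
        System.out.println(invoiceBot.mayPerform("delete-user"));  // false
    }
}
```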

Devin Gharibian-Saki is the chief solution officer at Redwood Software

Rip a page from Software Development 101

Most software applications must go through several phases of development and testing to ensure they are ready to be put into production, with processes in place to help ensure quality and security at every phase. However, when building traditional RPA tools, setting up a secure three-tier landscape creates significant overhead for the operations and development teams because of the added complexity of the connected systems that need to be managed. The actual automation functionality of traditional RPA tools should be smart enough to distinguish how to behave on development, testing and production systems. To mitigate this from the get-go, RPA providers and businesses deploying tailored bots should take a page from traditional software developers, adopting the best practice of testing for quality and security from the ground up.
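One way to picture an automation that “knows how to behave” per tier is an environment flag that downgrades side effects outside production. The tier names and the dry-run rule below are illustrative assumptions, not a documented product behavior.

```java
// Sketch: the same bot logic runs on every tier, but real writes
// happen only on PROD; DEV and TEST simulate them.
public class TierAwareBot {
    enum Tier { DEV, TEST, PROD }

    static String execute(Tier tier, String action) {
        if (tier != Tier.PROD) {
            return "[dry-run on " + tier + "] " + action;
        }
        return "executed: " + action;
    }

    public static void main(String[] args) {
        System.out.println(execute(Tier.DEV, "post journal entry"));
        System.out.println(execute(Tier.PROD, "post journal entry"));
    }
}
```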


When in doubt, stick to the process

Some security risks associated with automation are entirely preventable. Concerns around a lack of process oversight, audit requirements, or undetected vulnerabilities can be addressed. There are security challenges in robotic process automation, but they can be mitigated through a strict, streamlined approach rather than a fragmented patchwork of automation tools. By sticking to the process and simplifying where possible, organizations diminish security threats from the start. When automation is leveraged in a secure environment with the necessary protective layers in place and given the right amount of attention, businesses can truly capitalize on the technology without compromising security.





The advantages of Moto Z

I’ve been carrying the Moto Z from Motorola for several weeks now, and I’m really falling in love with the modular aspects of this phone. I firmly believe that we have been focused too much on making these things thin, which has made them harder to hold, done ugly things to battery life and thermals, and generally made the devices far less useful over time. In short, we’ve focused on form over substance, and this was largely driven by Apple’s strong marketing rather than our own wants and needs.

The Moto Z starts out as thin as any phone out there, but it has the unique capability of being modular. The Moto Z Mods come in a variety of forms: extended battery, speaker, printer, camera, gamepad, 360-degree camera, projector and, most recently, the announced 5G mod.

My personal favorite is the Moto Turbopower battery pack, because it adds an additional 16 hours of battery life, but it also makes the phone fit in my hand better so I’m less likely to drop the thing. Unlike a typical battery case, this integrates with the phone and adds wireless charging. The Hasselblad True Zoom camera mod is amazing in that it pretty much turns the phone into a digital SLR with 10x optical zoom, a full Xenon flash, and a better grip for enhanced camera control. The Moto Gamepad is interesting because it adds Nintendo-like controls to the phone, which not only improves gameplay but keeps your fingerprints off the screen.

Most 360 cameras connect wirelessly to the phone, which makes them awkward to use because you have to juggle the camera (which has no viewfinder) in one hand and your phone in the other. This mod integrates the two, making it far less likely you’ll drop the camera or phone, and more likely you’ll get a steady shot.

The speakers are very easy to use, but I think I still prefer a wireless accessory, because I’m likely to want to use the phone while listening to music and having the speaker attached to the back makes this awkward.
But, in terms of volume for size, the speakers crank out a lot of sound and don’t take up a lot of room in the backpack. The Moto Stereo Speaker provides some separation between the stereo channels.

Another interesting mod is the Insta-Share projector, which can be used in a dark room for presentations, or in a dark hotel room to stream movies or show pictures. For its size it puts out a lot of light and could be useful in a pinch if you need to share a PowerPoint presentation. I doubt I’d use it, though.

Perhaps the most interesting mod is the new Moto 5G, which allows you to upgrade the phone to 5G technology when those networks go live. Because you’d likely stress over whether to use the battery mod or the 5G mod, this one includes both, so you don’t have to give up battery life for network performance. It allows you to upgrade your 4G Moto Z to a 5G phone for a fraction of the price of a new phone.

What these mods do is extend the revenue cycle of the phone for Motorola and extend the capabilities of the phone for the user. Typically, the manufacturer sees only the purchase of the phone, while revenue from accessories, like cases, flows to third parties. With the mods, the revenue from each phone is extended to the mods. Users get a more customized phone experience from the mods they purchase, so it is a win/win.

It strikes me that if Motorola had the kind of marketing budget that Apple enjoys, this modular approach, which is unique, could become dominant. Motorola’s approach to smartphones is very different, and its use of mods enhances both its revenue opportunity and the user experience. Users can customize their phone for their own use, adding better speakers and cameras, and even upgrading to the next network relatively cheaply, at least compared to buying a new phone. I think this is actually a better model for both the user and the manufacturer.

Rob Enderle is a principal analyst at the Enderle Group.





Software still starts with requirements

David Rubinstein is editor-in-chief of SD Times.


How can companies know that their development process is on track, and that the product they’re developing is actually what they need to provide to customers? Scott Roth, CEO of Jama Software, believes predictive product development is essential for that. The key is visibility into your development processes, and comparing them to benchmarks — either internal, or against a cohort of other companies in the same industry.

But let’s back up. Jama Software, founded in 2007, has its core in the creation, documentation and management of requirements. And one of the things the company has come to learn is that when there’s volatility in requirements early in the process, success is sure to follow.

Why is that? Roth explained: “If you go out and you define the requirements for what you’re going to be building, and those requirements have a high level of volatility early on in the life cycle — meaning that requirements are being updated, they’re being deleted, more requirements are being added — the more of that you can see spike at the beginning of the process, the smoother things will go throughout the rest of the process, because you’ll have less rework and fewer quality issues, because you’ve really spent the time up front to collaborate on and beat up those requirements. And so with a volatility report, what we are able to do is draw that trend line where you see a high-level spike at the beginning and then it drifts down over time, and we’re able to take any live project or product that one of our customers is building and map their requirement volatility to that curve.

“That’s one simple example of the predictive nature of what we’re attempting to do,” he added.
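Reading “volatility” as the count of requirement adds, updates and deletes per period (a plausible interpretation, not Jama’s documented formula), the healthy trend line Roth describes, an early spike that drifts down, can be checked in a few lines. The thresholds here are invented for illustration.

```java
// Sketch of a volatility trend check: healthy projects front-load
// requirements churn; risky ones see churn keep climbing.
public class VolatilityTrend {
    // changes[i] = adds + updates + deletes observed in week i.
    static boolean spikesEarlyThenDecays(int[] changes) {
        int peakWeek = 0;
        for (int i = 1; i < changes.length; i++) {
            if (changes[i] > changes[peakWeek]) peakWeek = i;
        }
        // Peak falls in the first third, and the final week sits
        // well below the peak (arbitrary illustrative thresholds).
        return peakWeek <= changes.length / 3
                && changes[changes.length - 1] * 2 < changes[peakWeek];
    }

    public static void main(String[] args) {
        int[] healthy = {40, 55, 30, 18, 12, 7, 4};  // front-loaded churn
        int[] risky   = {5, 8, 6, 20, 35, 48, 50};   // churn keeps climbing
        System.out.println(spikesEarlyThenDecays(healthy)); // true
        System.out.println(spikesEarlyThenDecays(risky));   // false
    }
}
```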
“How do [our customers] know that they’ve really done their work from the requirements standpoint to know their development process is actually going to run smoothly?”

Yet the notion of doing heavy up-front work on requirements flies in the face of the agile, continuous delivery model organizations are adopting. Roth, however, argues that requirements are critical to all software development projects. He believes that “requirements gathering and requirements management can and should live together with agile development and continuous deployment operations.” It’s just that the idea of spending time writing hard-and-fast requirements that aren’t expected to change for 18 months — the duration of a waterfall software project — is not conducive to the kind of development being done today.

Agile organizations, he said, understand they need some level of requirements gathering and collaboration on what it is they’re going to build. It’s just that they’ll likely spend less time defining and collaborating on requirements than “an organization that is building a satellite system that’s going to take two years to develop.”

Jama has expanded its platform beyond the core requirements capabilities over the years, and now offers test management, workflow management, an analytics layer and integrations with popular platforms widely used today. Roth said the analytics layer was added to give product development teams a single place to manage and gain insights from the data generated by the multiple tools used in a typical project. Jama wants to better guide development teams in how they’re building products, as well as what products they should be building.

The Jama platform is a relational database that can offer a document view of requirements, but Roth explained that “it’s all bite-size bits within tables and forms and it’s very much structured for today’s modern, agile way intentionally, because of the need for that flexibility and, in many cases, a much lower level of fidelity in the upfront requirements gathering and definition process.” And the company has moved its target upstream, into larger organizations writing highly complex, mission-critical software, focusing more on physical products that are being embedded with software and connectivity.
“One of the challenges that companies have when they get bigger is they struggle to have visibility into the portfolio and the body of work that’s being done,” he said. “Having upfront gathering of requirements, and success criteria at the end, is one way to help them get visibility into the process, versus a mysterious pipeline of work that just magically comes out the other end.”
