SD Times - December 2018


DECEMBER 2018 • VOL. 2, ISSUE 018 • $9.95 •







News Watch


Preserving software’s legacy


Atlassian rebuilds Jira for modern software development


Report: New approaches to software development will disrupt the status quo


Google explores the challenges of responsible artificial intelligence


ElectricFlow 8.5 features “DevOps your way”


Microsoft releases plans for Azure DevOps

iPaaS: The middleware for hybrid, multicloud page 8


GUEST VIEW by Jans Aasman Knowledge graphs: The path to true AI


ANALYST VIEW by Michael Azoff Making Java a modern language


INDUSTRY WATCH by David Rubinstein Good ideas are lost in the process page 21

BUYERS GUIDE APM needs to change to keep up with continuous development practices page 29

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2018 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at



EDITORIAL
EDITOR-IN-CHIEF David Rubinstein
NEWS EDITOR Christina Cardoza


CONTRIBUTING WRITERS Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz
CONTRIBUTING ANALYSTS Cambashi, Enderle Group, Gartner, IDC, Ovum





D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803





NEWS WATCH

GraphQL to get its own foundation under Linux organization
In an effort to grow and sustain the GraphQL ecosystem, The Linux Foundation has announced plans to launch a new open-source foundation for it. GraphQL is an API technology that was initially developed by Facebook. The GraphQL Foundation will be a collaborative effort between industry leaders and users. Its mission will be to “enable widespread adoption and help accelerate development of GraphQL and the surrounding ecosystem.”
“We are thrilled to welcome the GraphQL Foundation into the Linux Foundation,” said Jim Zemlin, executive director of the Linux Foundation. “This advancement is important because it allows for long-term support and accelerated growth of this essential and groundbreaking technology that is changing the approach to API design for cloud-connected applications in any language.”
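For readers unfamiliar with the technology, GraphQL lets a client ask for exactly the fields it needs in a single request. A minimal query against a hypothetical schema (the `user` and `repositories` fields here are illustrative, not part of any specific API) might look like:

```graphql
# Request one user and the names of three of their repositories;
# the server returns JSON shaped exactly like this selection.
query {
  user(id: "42") {
    name
    repositories(first: 3) {
      name
    }
  }
}
```

The response mirrors the query's shape, which is why GraphQL is often pitched as an alternative to assembling data from several REST endpoints.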

Angular 7.0 now available
The latest major release of the mobile and desktop framework Angular is now available. Angular 7.0.0 updates the entire platform, including the core framework, Angular Material and the CLI, and brings new features to the toolchain along with several partner launches. “Early adopters of v7 have reported that this update is faster than ever, and many apps take less than 10 minutes to update,” Stephen Fluin, developer advocate for Angular, wrote in a post.

Richard Stallman’s GNU Kind Communications Guidelines
In an effort to steer conversations in a kinder direction, Richard Stallman, president of the Free Software Foundation and founder of the GNU Project, has announced the GNU Kind Communications Guidelines. The guidelines were created after a discussion about whether patterns in GNU development push people, such as women, away. “The GNU Project encourages contributions from anyone who wishes to advance the development of the GNU system, regardless of gender, race, religion, cultural background, and any other demographic characteristics, as well as personal political views,” Stallman wrote in a post. “People are sometimes discouraged from participating in GNU development because of certain patterns of communication that strike them as unfriendly, unwelcoming, rejecting, or harsh. This discouragement particularly affects members of disprivileged demographics, but it is not limited to them. Therefore, we ask all contributors to make a conscious effort, in GNU Project discussions, to communicate in ways that avoid that outcome — to avoid practices that will predictably and unnecessarily risk putting some contributors off.”
The guidelines suggest that participants assume others are posting in good faith, think about how to treat other participants with respect even when they disagree, avoid taking a harsh tone, recognize that criticism is not a personal attack, and be kind to others when they make a mistake. The full version of the guidelines is available on the GNU Project’s website.

One of the key updates in the release is the availability of CLI prompts. The CLI can now prompt users when they run common commands, helping them discover built-in features, according to Fluin. The release also comes with a focus on performance. After reviewing common mistakes throughout the ecosystem, the team found developers were using the reflect-metadata polyfill in production, when it should only be used in development. As a result, the updated version of Angular removes it from polyfills.ts files and includes it as a build step instead. Other performance updates include bundle budgets in the CLI, which can warn developers when an initial bundle grows larger than 2MB.
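Those budget thresholds live in the project’s angular.json file. A minimal sketch of the relevant fragment, based on the defaults described above (surrounding workspace keys omitted), might look like:

```json
{
  "configurations": {
    "production": {
      "budgets": [
        {
          "type": "initial",
          "maximumWarning": "2mb",
          "maximumError": "5mb"
        }
      ]
    }
  }
}
```

With a configuration like this, the CLI emits a warning when the initial bundle passes the first threshold and fails the build at the second.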

Report: Open source a struggle for devs
Developers say that replicating environments and maintaining proper information about the quality of open-source software packages are the two biggest difficulties they face in incorporating new languages and open source into their development pipeline, according to ActiveState’s Developer Survey 2018 — Open Source Runtime Pains report. “To manage open source code development, many enterprises use homegrown build systems, manual processes or legacy versions of languages that need to be manually updated,” Bart Copeland, CEO of ActiveState, said in a release. Among the bigger challenges developers say they face in coordinating tools are incorporating new languages and dependencies, with only 9 percent and 17 percent respectively reporting that it was “Not Difficult.” Additionally, 67 percent of respondents said that they’ve skipped out on incorporating better or more suitable technology due to the difficulty.

The surveyed developers also said that stability and security were their biggest concerns about incorporating new technologies, with handling threats and dependencies in open-source language distributions also causing problems.

NativeScript 5.0 released with NativeScript-Schematics
The open-source framework for building native apps has reached version 5.0. NativeScript 5.0 marks a major milestone in the framework’s history, as the project has also hit 3.5 million downloads since it launched in 2015. The latest release features a number of developer experience improvements as well as new native features. A major highlight of this release is the addition of NativeScript-Schematics, an initiative with the Angular team to enable developers to create schematics for both web and mobile apps in a single project. Angular first announced the initiative in August.
“In 2016, NativeScript 2.0 gave developers the ability to use Angular and NativeScript to create native mobile apps, but we wanted to take it a step further by enabling them to keep their code in one place and share business logic between web, iOS and Android,” said Brad Green, engineering director for Angular at Google. “We’ve worked closely with the NativeScript team, resulting in the NativeScript-Schematics that integrates with the Angular CLI, making the user experience for developing both web and mobile within a single project completely seamless.”

Parasoft supports open-source community
Parasoft is announcing a new initiative to help support open-source communities and projects. As part of the initiative, the company will offer free access to its tool suite, enabling developers to leverage test automation software, deep code analysis and security capabilities for their open-source projects. According to the company, it is becoming the norm for organizations to leverage open-source software in their software development processes. As a result, those open-source technologies should be built with the same level of quality as the applications that depend on them. Parasoft hopes access to its tool suite will enable the developers of these open-source projects to ensure their technology is secure, reliable and scalable.

In order to be eligible to participate in the Parasoft Open Source Support Program, a developer must be able to prove they are an active contributor and vital to an open-source project that is recognized within the global open-source community. The free user licenses will be valid for one year.

Qt Design Studio 1.0 released
With customer engagement and retention driving much of today’s software, organizations are starting to bring designers closer to the development process. Angular announced plans to bridge the gap between developers and designers in May with the introduction of Angular for Designers. Infragistics announced Indigo.Design in July to provide better collaboration throughout the software design process. Now, the Qt Company is announcing the release of Qt Design Studio 1.0, a UI design and development environment. “We believe that collaboration between designers and developers in an effective workflow fosters and boosts product innovation and ultimately leads to a better user experience,” Petref Saraci, product manager at the Qt Company, wrote in a blog post. Qt Design Studio has been built to improve collaboration between designers and developers by providing graphical views for designers and Qt Modeling Language code for developers. According to the company, this enables designers to leverage their Photoshop designs and skills. The company plans to support more graphical design tools in the future.

OutSystems launches new AI capabilities
Low-code provider OutSystems is launching a new program to advance the company’s mission of bringing the power of AI to software development. The program builds on the company’s previous Project Turing initiative, named for the father of computer science and artificial intelligence, Alan Turing. Project Turing established an AI Center of Excellence in Lisbon, committed 20 percent of OutSystems’ R&D budget to AI and machine learning, and “developed partnerships with industry experts, tech leaders, and universities to drive original research and innovation.”
“Eight months ago, we announced our bold vision for Project Turing,” said Paulo Rosado, CEO of OutSystems. “Today’s launch moves the needle even further toward reducing the complexity of development and changing enterprise software with exciting new and ongoing research into AI-assisted development.”

CockroachDB now a geo-distributed DBaaS
In an effort to make data easy, Cockroach Labs has released a fully hosted and fully managed version of CockroachDB that is cloud-agnostic and available on both AWS and GCP. Managed CockroachDB will enable development teams to focus on building scalable applications without having to worry about infrastructure operations. While the team believes that getting started with CockroachDB is a seamless process, it recognizes that not all customers want to manage the day-to-day operations of distributed systems. Cockroach Labs believes that removing the need to manage infrastructure operations is the next step in making data easy. Managed CockroachDB automatically replicates data across three availability zones, supports geo-partitioned clusters at any scale, and can migrate from one cloud service provider to another.
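Geo-partitioning in CockroachDB is expressed in SQL. A rough sketch, with a hypothetical table and region names (actually pinning each partition to nodes in a locality additionally requires replication zone configuration, and the feature is enterprise-licensed):

```sql
-- Hypothetical users table partitioned by the value of its region column;
-- each partition can then be pinned to nodes in a matching locality.
ALTER TABLE users PARTITION BY LIST (region) (
    PARTITION us_east VALUES IN ('us-east-1'),
    PARTITION eu_west VALUES IN ('eu-west-1'),
    PARTITION other VALUES IN (DEFAULT)
);
```

Partitioning by a column like this is what lets a single logical table keep European rows on European nodes while remaining queryable as one database.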

GitHub launches GitHub Actions and Connect
GitHub Actions, available now in a limited public beta, is a community-powered platform that enables developers to connect and share containers for their workflows, and to build, package, release, update and deploy projects in any language without having to run code. GitHub Actions was developed to let developers choose the tools, languages and deployment platforms that make them the most productive and creative, without getting locked into particular tools or workflows. “By applying open source principles to workflow automation, GitHub Actions empowers you to pair the tools and integrations you use with your own custom actions or those shared by the GitHub community, no matter what languages or platforms you use,” Jason Warner, head of technology at GitHub, wrote in a post.
GitHub Connect aims to bridge the gap between business and open-source communities. It features Unified Business Identity, Unified Search and Unified Contributions. According to the company, this will enable developers to connect to GitHub’s public data and communities whether their business runs GitHub Enterprise or Business Cloud. z






iPaaS: The middleware for hybrid, multicloud

Adoption of integration platform-as-a-service solutions for combining disparate services into a cohesive application has seen steady but slowing growth among businesses. Gartner Research found in its “Magic Quadrant for Enterprise Integration Platform as a Service” report that more than 50,000 companies globally had implemented some form of iPaaS as of 2017. And, Gartner writes, “By 2021, enterprise iPaaS will be the largest market segment in application integration middleware, potentially consuming the traditional software delivery model along the way.”
This can be attributed to a growing need to more easily manage and maintain hybrid portfolios of on-premises and multi-cloud application environments, Gartner said. “Fewer organizations with existing integration skills are finding that their established on-premises integration practices can be used to integrate with multicloud applications,” Gartner wrote in its report. “Most organizations find that their existing approaches are just not delivering fast enough to meet the new challenges. For organizations that never established integration practices on-premises, the thought of having to start now is daunting. The large costs, lack of available skills, long delivery times and complex infrastructure builds associated with traditional on-premises approaches are just not in line with today's lean approaches and timelines.”
iPaaS implementations can have a more abstract benefit beyond simply streamlining how internal systems communicate and evolve. Forrester Research says that with a properly managed network of integrated systems, businesses can more easily predict trends and rapidly reconfigure their integrations to keep up with what their customers value most in an application service.
Henry Peyret, a principal analyst at Forrester, led the firm’s “Now Tech: iPaaS And Hybrid Integration Platforms, Q3 2018” report, and explained how customer dissatisfaction with one aspect of a product portfolio can lead customers elsewhere. “While thinking about continuously trying to deliver more value to customers, those values will evolve, provoking a change in the business — to completely changing to what we are calling the ‘dynamic ecosystem of values,’ ” Peyret said. “The customers are constantly looking to get more value for themselves, and they will look for different brands amongst products, and when the enterprise is not supplying the piece of value the customers are looking for, those customers will go to some different supplier to provide them that additional value.”

Value-aligned relationship
The construction of this value-aligned relationship begins with conceptualizing the relationship between the end user and application service provider not as the linear motion from vendor to retailer to customer, Peyret said, but rather as two congruent ecosystems — that of the users’ expectations and values, and that of the business’s systems and adaptation to meet those values — interconnected by a mesh of suppliers, retailers and digital services.
Forrester breaks down the application provider’s systems, or the components that an iPaaS solution primarily integrates, into systems of design, or the broader way a business is planned and developed; systems of engagement, which are concerned with direct customer experiences and feedback; systems of operation, which include automation and overall application operation; and systems of record, which collect and manage data. Their integration culminates in a system of insight made up of dynamic integration features and data lake architecture.
“The previous integration, when we were only integrating systems of record, we were using batch file transfer and that sort of thing to do integration, and it was very static because the systems of record were changing maybe only every 10 years, driven by a gain or an SAP,” Peyret said. “Now that we have to integrate the system of record with the system of engagement, where systems of engagement are continuously changing, perhaps every week or month, we need to get more flexibility. That's what I describe with ‘dynamic integration’ and the tooling to support that dynamic integration and provide some elasticity between systems which are getting very different lifecycles in the iPaaS category.”

Lifecycles of systems
Peyret elaborated on how each system comes with its own life cycle, or time between iterations or significant change. “There is a big difference in terms of the life cycle of each of those systems,” Peyret said. “With system of record, the life cycle is driven by software companies. While at the same time, system of engagement is driven more by those customers who want to engage differently — they want to engage first on the web, then move to phone and when the phone's not working, they want to chat on mobile and then perhaps the customer gets a chatbot. So that's the sort of evolution of system of engagement, which is getting your life cycle so that it is driven to answer that continuous research of value for customers. The same also for system of design, where it's different for different products. If you're building planes, your system of design should support planes, refurbishing and maintaining for probably 50 years so that your system of design really lasts for 50 years. When you start to integrate those different systems, we start to see big changes in integration.”

Consider iPaaS options
But the world of iPaaS isn’t so plug-and-play that every option is right for every business. Both Gartner and Forrester noted that there is a growing space for iPaaS solutions aimed at mid-sized businesses, while large-scale solutions tailored for enterprise customers (EiPaaS) are fairly developed.
“During 2017, we noticed a clear split in the iPaaS market,” Gartner wrote in its report. “The larger vendors shifted to what has become EiPaaS, for more strategic adoption and a broader set of use cases. There was also huge growth in the number of domain-specific vendors, with a much narrower go-to-market strategy focused on domains such as B2B, education, accounting and others.”
Forrester’s Peyret says this comes down to the difficulty for an iPaaS company to provide a managed service of sufficient quality at the price point and scale that characterizes mid-size business operations. Forrester breaks iPaaS down into the categories of:
• iPaaS for dynamic integration, aimed at small and mid-sized organizations just beginning to develop a cloud strategy and adoption of integration technologies, often including templates and no-code, visual development and administration wizards;
• strategic iPaaS (SiPaaS), a more complex category which “includes some B2B integration, IoT integration, and connectors to on-premises applications,” and which is “essentially cloud-based”;
• hybrid integration platforms (HIP), which Forrester calls “mandatory” for large enterprise customers, with API management, B2B and IoT integrations, and which come in “bundles” of multiple products that run on-premises, in the cloud and in hybrid infrastructure.
“iPaaS is mainly a more technical product, and mainly made to support the midsize market, which is discovering the separation of the system of record and system of engagement,” Peyret said. “And as they want to deal with some very simple integration capabilities, they are dealing with the customer iPaaS category. [These offerings] are simpler, so [those customers] are getting the pricing that is more in line with the mid-size market, costing $2,000 to $20,000 in subscription costs per year... Then some of the mid- to large-sized companies which are adopting a broad category of hybrid integration platforms are also dealing with the problem of agility and of dynamic integration. In that case, they are choosing either to replace, or to complement, the hybrid integration platform with what we are calling strategic iPaaS, where pricing is closer to $20,000 to $70,000 per year. And they are also more powerful and they are more complex, so it is more difficult to operate by a small IT team.”
Forrester’s report found that the growth of products that are easier to use and aimed at the mid-sized market will increase, in some small way, the role of the citizen integrator, or employees within an organization who aren’t dedicated to integration technologies but whose knowledge of the business and of the customers’ needs can be used in setting up integrations in no/low-code visual environments. But Peyret was very clear in his judgment that iPaaS technology, and broad knowledge of the technology, is not at the point where companies can skimp on dedicated integrators just yet.
“Even if some citizen integrator can do some of the job, I don't recommend to let those citizen integrators do everything,” Peyret said. “There is always some risk in providing that sort of integration — it requires governance. And putting someone in charge is really the right approach, along with those citizen integrators. If you are really getting some people with low knowledge about what integration is, the different consequences, the different risks, we should also get the right level of governance.”
Peyret cautioned, though, that too much governance can lead to the static, unresponsive type of integration that failed before iPaaS, and is a hindrance to cultivating that ‘dynamic ecosystem of values,’ which he says will help retain customers and ultimately be a boon to everyone enmeshed in the ecosystem. “When companies are organizing and aligning with their customers and always providing more value for their customers, they are also getting value to their employees, to their shareholders, to their partners and also regulators as well,” Peyret said. z





Preserving software’s legacy
BY CHRISTINA CARDOZA

All throughout our lives we are reminded of events from the past. History teaches us what happened before us to help us understand how society came to be as it is today. But today we live in a digital age, and while leaders, laws, wars and other parts of our history will always be important to know, what about software? Technology is everywhere and it is rapidly changing every day. Should we care about where it all started?
Software Heritage was launched with a mission to collect, preserve and share all software source code that is publicly available. It is currently working toward building the largest global source code archive ever. Software Heritage was founded by Inria, the French Institute for Research in Computer Science and Automation, and it is backed by partners and supporters such as Crossminer, Qwant, Microsoft, Intel, Google and GitHub.
According to the organization, software is an essential part of our society and lifestyle. It has become crucial for businesses and industries to succeed. It has enabled the emergence of social and political organizations. It has given us the ability to communicate, pay bills, purchase goods, access entertainment, find information, and more. It would be a shame for our future generations if we were to lose access to that.
“Cultural heritage is the legacy of physical artifacts and intangible attributes of a group or society that are inherited from past generations, maintained in the present and bestowed for the benefit of future generations,” the organization wrote on its website. “Software in source code form is produced by humans and is understandable by them; as such it is an important part of our heritage that we should not lose. Software is furthermore a key enabler for preserving other parts of our cultural heritage that we would de facto lose if we lose the software needed to access them. Preserving software is essential for preserving our cultural heritage.”
According to Roberto Di Cosmo, founder and CEO of Software Heritage, the organization focuses specifically on source code because it is written and understood by humans: high-level programming languages spell out what people want machines to do. There are three main properties of the source code collected: availability, traceability and uniformity.
The way Software Heritage collects software is like a search engine, explained Stefano Zacchiroli, founder and CTO of Software Heritage. It crawls specific places where software lives and where developers go to develop, collect or distribute software. Some of the current places include Debian, GitHub, GitLab, Gitorious, Google Code, GNU, HAL, Inria and Python. It continually returns to these places to look for new software and updates. Currently, Software Heritage has archived more than 80 million software projects, according to Di Cosmo. Zacchiroli explained that it is not only the bits of code itself that the organization is collecting; it is collecting the development history, from the first version that was stored to the commit data and all the releases, information and artifacts attached to a project along the way.
“Today, the amount of the software that has been developed around the world is actually doubling every two to three years, so we need a common platform where we collect all the software and can study it to find the efforts, to improve the quality, to understand what is going on and to prepare a better future in software development,” Di Cosmo said in a video.
Aside from preservation, Zacchiroli hopes the archive will be used for scientific and industrial applications. For scientific use, Zacchiroli says scientists can mirror the archive and run experiments using the dataset. “To be able to reproduce an experiment, knowing the exact version of the software used is essential,” Software Heritage wrote on its website. “Software Heritage will ensure availability and traceability of software, the missing vertex in the triangle of scientific preservation.”
For industrial applications, Zacchiroli explained that every IT product nowadays contains some open-source software. The problem, however, is that open-source software usually changes or is used in different environments, and developers are sometimes only tracking a specific version. According to Zacchiroli, since Software Heritage tracks the origin, history and evolution of projects, it will be able to spot and fix vulnerabilities more easily.
Going forward, Software Heritage hopes to increase its coverage and provenance information, and to improve its source code indexing and search capabilities. “We are so focused on developing new things that we forget about storing, preserving what we have developed up to now,” Di Cosmo said in the video. z




Atlassian rebuilds Jira for modern software development
Atlassian has released a completely overhauled and rebuilt version of its Jira project management software, rethought from the points of view of permissions, navigation and user experience.
“The Jira you needed in 2002 and the workflow you built in 2010 and the permissions model we put together back then isn’t the one we need today,” said Sean Regan, head of growth for software teams at Atlassian. “And that’s really driven by this change in the market around modern software development,” including the adoption of cloud computing, microservices and containers, as well as organizations empowering autonomous development teams.
“You don’t get too many chances to formulate a new foundation to your product. This is the best chance we’ve ever had to do open-heart surgery on Jira,” he added. “That moment was when we made the decision to split our product into a cloud version and a server-based version. We made that decision a couple of years ago, and when we made that decision, we moved our cloud version to AWS. And that was the moment we cracked it open and took a really hard look at the guts of the product, and that drove a lot of changes.”
Among the changes, Atlassian opened up new APIs, including one for feature flag integration. Already, companies such as LaunchDarkly, Optimizely and Rollout have integrated with Jira. “Feature flags and Jira moves [companies] from software factories to labs. Developers get autonomy but executives can see the rollouts aren’t crushing users.”
One of the key new features in Jira is a product roadmap, which “mirrors the flexibility and customization Jira always had but in one single view,” said Jake Brereton, head of marketing for software cloud at Atlassian. As Regan said, “When you have 10 different teams, all shipping and releasing and testing on different schedules, nobody knows what’s going on. It’s mayhem. It’s like writing a book with a different author for every paragraph. It all has to come together and work.” With the new roadmaps, teams can share their task statuses with their internal stakeholders, so they can see who is working on what task.
Work has been done on project boards in Jira as well. Epics are listed and can be drilled into to see the status of work items. Boards have been reengineered to enable drag-and-drop workflows, filters and progressions that in the past required developers to write Jira Query Language (JQL) statements to create. Workflows, issue types and fields can be customized without the need to get developers involved. Jira issues, the core unit of work, can also be customized to show developers only what they need.
The notions of units of work and developer autonomy are reflected in how the new Jira offers feature choices. “We took the idea of templates and ripped out the features and functionality to turn them on or off,” Brereton said. “With autonomy, teams don’t want strict Scrum or Kanban processes. Let the teams figure out what’s best for them. Kanban zealots say there are no backlogs, but if you want, in Jira you can work with a Kanban board and backlogs.”
Regan explained that the philosophical underpinnings of the Jira changes are based in the fight between developer autonomy and management control. Managers were using Jira to enforce “really ridiculous” process and control, he said. “Atlassian and Jira are siding with developer autonomy,” he said. “We want developers to be able to design the way they want to work, any workflow, build their own board, design their own issue, give the freedom to develop cool stuff, but make sure the project managers and administrators and executives have enough of the reporting and enough of structure to feel comfortable with that freedom. The Jira of the past allowed administrators to be too restrictive. We’re trying to set an example with the way we’ve designed the product in favor of developer autonomy.” z
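For reference, the sort of board filter that previously had to be hand-written in Jira Query Language looks roughly like this (the project key is hypothetical):

```
project = MOBILE AND type = Epic AND status != Done ORDER BY priority DESC
```

The rebuilt boards let teams express filters and orderings like this through drag-and-drop controls instead of writing the query themselves.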





SD Times

December 2018

Report: New approaches to software development will disrupt the status quo BY CHRISTINA CARDOZA

The year 2019 will bring new approaches to increase software development productivity and better align development teams and organizations, according to a recent report by research firm Forrester. Among the new approaches are cloud native, value stream management and artificial intelligence-based tools. “New platforms for cloud-native app architectures, value stream management tools, and infusing artificial intelligence (AI) into testing are among the breakout technologies we expect in 2019,” the research firm wrote in a 2019 software predictions report.

2018 saw interest in cloud-native technologies and tools take off. The Cloud Native Computing Foundation released a report in September that found the use of cloud-native technologies in production grew more than 200 percent since the beginning of the year. The CNCF states, “Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.” In addition, the CNCF explains, cloud-native systems must be container packaged, dynamically managed and microservices-oriented.

2019 will be the breakout year for cloud-native technologies as cloud vendors start to bring microservices to the masses and blur the lines between container-based and serverless approaches, according to Forrester. “From a software development perspective, the cloud offers simplicity, velocity, elasticity, collaboration, and rapid innovation that isn’t easily replicated, if at all possible, using traditional on-premises tools. And if you are going to be hosting on the cloud, why not develop in the cloud to make sure your development environment is as close to your runtime environment as possible?” said Christopher Condo, senior analyst at Forrester. “Everyone is coming to grips with the fact that some portion of their business will be hosted in the cloud, so they then make the logical leap to say if we’re hosting this particular feature or product in the cloud, let’s go ahead and develop in the cloud as well.”

Value stream management is another term that started gaining traction toward the end of 2018. It refers to an effort to align business goals and visualize the end-to-end development pipeline. According to Forrester, value stream management tools are designed to capture, visualize and analyze critical information about speed and quality during the software product creation process. The research firm predicts value stream management will become the new dashboard for DevOps teams in 2019.

“When we started researching these tools in 2017/2018, we weren’t even sure if the term Value Stream Management would resonate. But it has, because the development community has been doing their homework on Lean software development and realizes that in order to improve, you need to measure end-to-end performance over time,” said Condo. “They also recognize that delivering value is why they build software to begin with, so these tools are a natural complement to a DevOps tool chain. Once these tools grow in use, we’ll need to be on the lookout for practitioners being too focused on KPIs and not enough on delivering value to the customer.”

As AI continues to advance, Forrester also sees new AI-based tools emerging in 2019 to provide better insights into development and testing. “Digital acceleration won’t happen without higher-quality software. Despite improved automation with traditional UI and API testing, the field as a whole is still lagging,” Forrester wrote in its report. “Can AI help augment testers and automate testing? You bet; a whopping 37% of developers are already using AI and machine learning (ML) to test better and faster to increase quality.”
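Measuring end-to-end performance over time, as Condo describes, starts with simple arithmetic over pipeline events. As a rough illustration (the work-item names and timestamps below are hypothetical, not drawn from any particular VSM product), lead time per item is just the span from work start to deployment:

```python
from datetime import datetime

def lead_time_days(started: str, deployed: str) -> float:
    """Days elapsed between work starting and reaching production."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(deployed, fmt) - datetime.strptime(started, fmt)
    return delta.total_seconds() / 86400

# Hypothetical pipeline events: (work item, started, deployed)
events = [
    ("PAY-101", "2018-11-01 09:00", "2018-11-05 17:00"),
    ("PAY-102", "2018-11-02 10:00", "2018-11-03 10:00"),
]

times = [lead_time_days(start, deploy) for _, start, deploy in events]
avg = sum(times) / len(times)
print(f"average lead time: {avg:.2f} days")
```

Tracking this average release over release is the kind of end-to-end trend a value stream dashboard surfaces.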

DevOps initiatives will still struggle in 2019

Despite Forrester declaring DevOps the new norm, the approach will still face its share of challenges throughout the new year.


“DevOps remains a very active topic of customer inquiry. The questions have shifted, but not declined. Five years ago people were asking ‘what is it,’ and three years ago they were asking ‘how can I get started?’ Now they are asking ‘how do we scale it out? What do we do with ITIL? Can and should we shift entirely to integrated product teams? What about governance, risk, and compliance?’” said Charles Betz, principal analyst at Forrester. “It’s a period of great experimentation and learning, which we anticipate will continue for

some years more before best practices start to stabilize.” Forrester predicts 2019 will be the year of DevOps governance and technology rationalization. “Issues of risk and governance, including security, change management, and quality control, will take center stage. Impacted by these challenges and increased DevOps vendor competition, enterprises will also emphasize toolchain standardization and more-seamless automation,” the research firm wrote in its 2019 DevOps predictions report provided to SD Times. As a result of the need for governance, Forrester sees fewer businesses cobbling together their own DevOps toolchains and instead turning to more integrated solutions for improved consistency and compliance. “This will force players from multiple spaces to compete,” Forrester wrote. In addition, Forrester expects DevOps will experience a major security breach in 2019. “Continuous delivery toolchains are powerful — perhaps too powerful. If an attacker gains access to

the toolchain, the entire infrastructure, both upstream and downstream, is at risk,” the firm wrote. According to Forrester, a major security breach of the DevOps toolchain will result in businesses investing more in governance and risk analytics, as well as the adoption of privileged identity management. “It will also prompt more ‘policy as code’ and secure integration between discrete parts of the development and continuous delivery toolchain,” Forrester wrote. Other DevOps predictions for the next year include: 25 percent of openings for skilled DevOps engineers will go unfilled, mean time-to-resolution rates will increase, and businesses will start to revolt against legacy operational processes. “In 2019, firms will face increasing complexity and risk as they attempt to scale their DevOps initiatives. This will force them to create more holistic technical, organizational, and governance strategies to address serious capability gaps and better manage risk,” Forrester wrote. z
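The “policy as code” idea Forrester mentions means expressing governance rules as machine-checkable logic in the pipeline rather than as documents. A minimal sketch (the policy names and deployment fields here are invented for illustration; real implementations use dedicated policy engines) might look like:

```python
# Each policy is a named predicate over a proposed deployment.
POLICIES = {
    "tests_passed": lambda d: d["test_failures"] == 0,
    "change_approved": lambda d: d["approved_by"] is not None,
    "no_critical_vulns": lambda d: d["critical_vulns"] == 0,
}

def evaluate(deployment: dict) -> list:
    """Return the names of every policy the deployment violates."""
    return [name for name, check in POLICIES.items() if not check(deployment)]

deployment = {"test_failures": 0, "approved_by": "alice", "critical_vulns": 2}
print(evaluate(deployment))  # the vulnerability gate blocks this deployment
```

Because the rules are code, they can be versioned, reviewed and enforced automatically at each stage of the continuous delivery toolchain.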



Google explores the challenges of responsible artificial intelligence

AI is changing the world, whether we’re ready for it or not. We are already seeing examples of AI algorithms being applied to reduce doctors’ workloads by intelligently triaging patients, connect journalists with global audiences through accurate language translations, and reduce customer service wait times, according to Google. But even as we begin to benefit from AI, there is still an air of uncertainty and unease about the technology. For example, Google recently backed out of a controversial military contract using AI after receiving public backlash. Now, the company is taking the future of responsible AI more seriously. In June, Google laid out its AI principles, and last month it began opening up a discussion of the concerns that customers most frequently have about AI. The concerns are broken into four areas: unfair bias, interpretability, changing workforce, and doing good.

UNFAIR BIAS: How to be sure that machine learning models treat all users fairly and justly

Machine learning models are only as reliable as the data they were trained on. Since humans prepare that data, the slightest bias can make a measurable difference in results. Because of the speed at which algorithms perform, unfair bias is amplified, Google explained. In order to address issues of bias, Google has created educational resources such as recommended practices on fairness and the fairness module in its ML crash course. It is also focusing on documentation and community outreach, the company explained. “I’m proud of the steps we’re taking, and I believe the knowledge and tools we’re developing will go a long way towards making AI more fair. But no single company can solve such a complex problem alone. The fight against unfair bias will be a collective effort, shaped by input from a range of stakeholders, and we’re committed to listen. As our world continues to change, we’ll continue to learn,” Rajen Sheth, director of product management for Cloud AI at Google, wrote in a post.

INTERPRETABILITY: How to make AI more transparent, so that users can better understand its recommendations

In order to trust AI systems, we need to understand why they’re making the decisions they’re making. The logic of traditional software can be laid out by examining the source code, but that is not possible with neural networks, the company explained. According to Google, progress is being made as a result of establishing best practices, a growing set of tools, and a collective effort to aim for interpretable results. Image classification is one area where this transparency is being exhibited. “In the case of image classification, for instance, recent work from Google AI demonstrates a method to represent human-friendly concepts, such as striped fur or curly hair, then quantify the prevalence of those concepts within a given image. The result is a classifier that articulates its reasoning in terms of features most meaningful to a human user. An image might be classified ‘zebra,’ for instance, due in part to high levels of ‘striped’ features and comparatively low levels of ‘polka dots,’” Sheth wrote.

CHANGING WORKFORCE: How to responsibly harness the power of automation while ensuring that today’s workforce is prepared for tomorrow

Our relationship to work is changing, and many organizations are trying to balance the potential of automation with the value of their workforce, Google explained. While not all jobs can be automated, something needs to be done to ease the transition for those that can. Google has set up a $50 million fund for nonprofits that are providing training and education, connecting potential employees with ideal job opportunities based on skills and experience, and supporting workers in low-wage employment.

DOING GOOD: How to be sure that AI is being used for good

The final facet is to ensure that AI is having a positive impact. There is an enormous grey area, especially in areas such as AI for weaponry. “Our customers find themselves in a variety of places along the spectrum of possibility on controversial use cases, and are looking to us to help them think through what AI means for their business,” Sheth wrote. Google is working with customers and product teams to work through these grey areas. It has brought in technology ethicist Shannon Vallor to provide an informed outsider perspective on the matter, Google explained. “Careful ethical analysis can help us understand which potential uses of vision technology are inappropriate, harmful, or intrusive. And ethical decision-making practices can help us reason better about challenging dilemmas and complex value tradeoffs—such as whether to prioritize transparency or privacy in an AI application where providing more of one may mean less of the other,” wrote Sheth. z




ElectricFlow 8.5 features “DevOps your way” BY CHRISTINA CARDOZA

There are many tools and ways of doing DevOps. Electric Cloud wants to make sure businesses have the flexibility to use the tools and processes that work best for them in the latest release of its enterprise SaaS and on-premises application release orchestration solution, ElectricFlow. ElectricFlow 8.5 features a new Kanban-style pipeline view, CI dashboarding capabilities and object tagging for custom reporting and improved searchability. “When we say your way, we literally mean it. Maybe you are happy with your process. Maybe you are not. Maybe you want to evolve. We allow you to change your process as part of the process or tool, but we won’t force you to,” said Anders Wallgren, CTO of Electric Cloud. “It becomes the platform you evolve and improve, and be flexible in the way you do your software pipeline.” The new Kanban view shows the entire release cycle with all the stages on one screen. It can tell users where they are, what happened and what is about to happen. Then, for more details, users can switch back to the pipeline view to see what is running where and when,

ElectricFlow’s new capabilities track and analyze build processes, failures and successes.

and what each team was tasked with doing. The Kanban view was added out of the need to give release managers a view that is more about the process and less about the automation. “It is a higher level of abstraction for release managers, but still allows them to drill down into details if they need to,” said Wallgren. The new CI dashboarding capabilities provide data-driven visibility into CI processes and bottlenecks. This will provide a clearer view into builds, according to Wallgren. In addition, it tracks all of a team’s CI processes no matter the tools

they are using. It supports CI tools such as Jenkins, Bamboo and Git. The newly added object tagging functionality provides more granular, real-world reports for release managers and project owners, according to the company. Users can now create reports with universal data segmentation across any object type, such as applications, pipeline stages and environments. In addition, ElectricFlow now features support for SaaS teams in on-premises, cloud-native and serverless environments. z

Microsoft releases plans for Azure DevOps BY CHRISTINA CARDOZA

Microsoft released Azure DevOps in September, and already the company has a number of upcoming features and updates planned for the quarter. Azure DevOps is Microsoft’s set of tools for better planning, collaboration and faster shipping. For the fourth quarter of 2018, users can expect updates to Azure Artifacts and Azure Pipelines. Azure Artifacts enables users to create, host and share packages with team members as well as add artifacts to their

CI/CD pipelines. Updates will feature upstream sources for feeds across organizations to enable users to share code between people no matter the team they are on. “As adoption of Azure DevOps Services has grown, especially in the enterprise, we’ve found that sharing across organizations is important to getting the most value out of Azure DevOps at scale,” Alex Nichols, senior program manager on the Azure DevOps team at Microsoft, wrote in a post. Azure Pipelines enables users to

build, test and deploy with CI/CD in any language, platform and cloud. Updates will be split between Q4 2018 and Q1 2019, and include a YAML editor for the web, YAML editing in Visual Studio Code and a public preview for release pipelines. Other planned updates for Q1 2019 include work item support for markdown editing in Azure Boards, and the public preview of GVFS for Mac in Azure Repos. The company also plans to add a centralized auditing experience in the second quarter of 2019. z
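The YAML definitions the new editors target describe a build as configuration checked into the repository. A minimal sketch of such a pipeline (the steps and image name below are illustrative, not tied to any particular project) might look like:

```yaml
# azure-pipelines.yml — checked into the repository root
trigger:
  - master            # run CI on every push to master

pool:
  vmImage: 'ubuntu-16.04'

steps:
  - script: npm install
    displayName: 'Install dependencies'
  - script: npm test
    displayName: 'Run tests'
```

Keeping the pipeline definition in the repository means build changes are versioned and reviewed alongside the code they build.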



The software industry keeps expressing how it is under immense pressure to keep up with market demand and deliver software faster. Automated testing is an approach that came about not only to help speed up software delivery, but to ensure the software that did come out did what it was supposed to do. For some time, automated testing has been great at removing repetitive manual tasks, but the industry is only moving faster and businesses are now looking for ways to do more. “Rapid change and accelerating application delivery is a topic that used to really be something only technology and Silicon Valley companies talked about. Over just the past few years, it has become something that almost every organization is experiencing,” said Lubos Parobek, vice president of product for the testing company Sauce Labs. “They all feel this need to deliver apps faster.” This sense of urgency has businesses looking to leverage test automation even further and go beyond just automating repetitive tasks to automating in dynamic environments where everything is constantly changing.

Antony Edwards, CTO of the test automation company Eggplant, said, “As teams start releasing even weekly, let alone daily or multiple times a day, test automation needs to change. Today test automation means ‘automation of test execution,’ but the creation and maintenance of tests, impact analysis and the decision of which test to run, the setup of environments, the reviewing of results, and the go/no-go decision are all entirely manual and usually ad hoc. The key is that test automation needs to expand beyond the ‘test execution’ boundary and cover all these activities.”

Pushing the limits

Perhaps the biggest drivers for test automation right now are continuous integration, continuous delivery, continuous deployment and DevOps, because they are what is pushing organizations to move faster and get software into the hands of their users more quickly, according to Rex Black, president of Rex Black Consulting Services, a hardware and software testing and quality assurance consultancy. “But the only way for test automation to provide value and to not be seen as a bottleneck is for it to be ‘continuous,’” said Mark Lambert, vice president of products at the automated software testing company Parasoft. According to Lambert, this happens





in two ways. First, the environment has to be available at all times so tests can be executed at any time and anywhere. Second, the tests need to take change into account. “Your testing strategy has to have change resistance built into it,” said Lambert. “Handling change at the UI level is inherently difficult, which is why an effective testing strategy relies on a multi-layer approach. This starts with a solid foundation of fully automated unit tests, validating the granular functionality of the code, backed up with broad coverage of the business logic using API-layer testing. By focusing on the code and API layers, tests can be automatically refactored, leaving a smaller set of the brittle end-to-end UI-level tests to manage.” Part of that strategy also means having to look at testing from a different angle. According to Eggplant’s Edwards, testing has shifted from testing to see if something is right to testing to see if something is good. “I am seeing more and more companies say, ‘I don’t really care if my product complies with a [specification] or not,’” he said. “No one wants to be the guy saying no one is buying our software anymore, and everyone hates it, but at least it complies with the spec.” Instead, testing is shifting from thinking about the requirements to thinking about the user. Does the software increase customer satisfaction, and is it improving whatever business metric you care about? “If you care about your user experience, if you care about business outcome, you need to be testing the product from the outside in, the way a user does,” Edwards added. Looking at it from the user’s side involves monitoring performance and the status of a solution in production. While that may not seem like it has anything to do with testing or automation, it’s about creating automated feedback loops and understanding the technical behavior of a product and the business outcome, Edwards explained.
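The multi-layer strategy Lambert describes, granular unit tests under broader API-layer tests, can be sketched in miniature (plain Python asserts stand in for a test framework; the discount function and checkout handler are invented for illustration):

```python
# Layer 1: unit test — the granular functionality of the code.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

assert apply_discount(100.0, 15) == 85.0

# Layer 2: API-layer test — the business logic behind an endpoint,
# exercised without driving any UI.
def checkout_api(payload: dict) -> dict:
    """Hypothetical handler for POST /checkout."""
    total = apply_discount(payload["price"], payload.get("discount", 0))
    return {"status": 200, "total": total}

response = checkout_api({"price": 100.0, "discount": 15})
assert response == {"status": 200, "total": 85.0}
```

With the logic covered at these two layers, only a thin set of end-to-end UI checks remains to maintain.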
For example, he said if you look at the page load speed of all your pages and feed that back into testing, instead of automating tests that say every page has to respond in 2 seconds, you can get more granular and say certain pages need to load faster while other pages can take up to 10 seconds without a big impact on experience. “Testing today is too tied to the underlying implementation of the app or website,” Edwards said. “This creates dependencies between the test and the code that have nothing to do with verification or validation; they are just there because of how we’ve chosen to implement test automation.”

‘As teams start releasing even weekly, let alone daily or multiple times a day, test automation needs to change.’ —Antony Edwards

But just because you aren’t necessarily testing something against a specification anymore doesn’t mean you shouldn’t be testing for quality, according to Thomas Murphy, senior director analyst at the research firm Gartner. Testing today has gone from a calendar event to more of a continuous quality process, he explained. “There is a fundamental need to be shipping software every day or very frequently, and there is no way that testing can be manual,” he said. “You don’t have time for that. It needs to be fast.”

One way to speed things up is to capture the requirements and create the tests upfront. Two approaches that really drove the need for automated testing are test-driven development (TDD) and behavior-driven development (BDD). TDD is the idea that you write the test first, then write the code to pass that test, according to Sauce Labs’ Parobek. BDD is where you enable people like the business analyst, product manager or product owners to write tests at the same time developers are developing code. These approaches have helped teams get software out multiple times a day because they don’t have to wait for days to create the tests and get back results, and it enables them to understand right away if they make a mistake, Parobek explained.

However, if a developer is submitting new code or pull requests to the main branch multiple times a day, it can be hard to keep up with TDD and BDD, making automated testing impossible because there aren’t tests already in place for these changes. In addition, it slows down the process because now you have to go in manually to make sure the code being submitted doesn’t break any key existing function, according to Parobek. But Parobek does explain that if you write your tests correctly and follow best practices, there are ways around this. “As you change your application and as you add new functionality, you do not just create new tests, but you might have to change some existing tests,” he said. Parobek recommends page object modeling as a best practice. It enables users to create tests in a way that is very easy to change when the behavior of the app changes, he explained. “It enables you to abstract out and keep in one place changes so when the app does change, you are able to change one file that then changes a variety of test cases for you. You don’t have to go



into 100 different test cases and change something 100 times,” he said. “Rather, you just change one file that is abstracted through page objects.”

Another best practice, according to Parobek, is to be smart about locators. Locators enable automated tests to identify different parts of the user interface. A common type of locator is the ID, which enables tests to identify elements. For example, when an automated test needs to test a button, if you’ve attached a locator ID to it, the test can recognize the button even if you’ve moved it somewhere else on the page. Other approaches to locators are to use names, CSS selectors, classes, tags, links, text and XPath. “Locators are an important part of creating tests that are simpler and easier to maintain,” said Parobek. In order to use locators successfully, Parobek thinks it is imperative that the development and QA teams collaborate better. “If QA and development are working closely together, it is easy to build apps that make it easier to test, versus development not thinking about testability.”

No matter how much you end up being able to automate, Rex Black explained, in order to be successful at it you will still always have to go back to the basics. If you become too aspirational with automation and have too many failed attempts, it can reduce management’s appetite for investing. “You need to have a plan. You need to have an architecture,” Black said. “The plan needs to include a business case so you can prove to management it is not just throwing money into a bright shiny object.” “It’s the boring basics,” Black added. “Attention to the business case. Attention to the architecture. Take it step by step and course correct as you go.”
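Page object modeling and ID-based locators, as Parobek describes them, can be sketched together. The `LoginPage` class and the fake driver below are illustrative only (a real suite would use a browser driver such as WebDriver); the point is that the locators live in one class, so a UI change means editing one file, not a hundred tests:

```python
class FakeDriver:
    """Stand-in for a real browser driver, looking up elements by ID."""
    def __init__(self, elements):
        self.elements = elements

    def find_element_by_id(self, element_id):
        return self.elements[element_id]

class LoginPage:
    # Locators kept in ONE place: if the UI changes, only this class changes.
    USERNAME_ID = "login-username"
    SUBMIT_ID = "login-submit"

    def __init__(self, driver):
        self.driver = driver

    def username_field(self):
        return self.driver.find_element_by_id(self.USERNAME_ID)

    def submit_label(self):
        return self.driver.find_element_by_id(self.SUBMIT_ID)

# A test talks to the page object, never to raw locators.
driver = FakeDriver({"login-username": "<input>", "login-submit": "Sign in"})
page = LoginPage(driver)
assert page.submit_label() == "Sign in"
```

If the submit button moves or its markup changes, the test above keeps passing as long as `LoginPage` is updated to match, which is exactly the one-file change Parobek describes.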

The promise of artificial intelligence in automated testing

As artificial intelligence (AI) advances, we are seeing it implemented in more tools and technologies as a way to improve user experience and provide business value. But when it comes to test automation, the promise of AI is more

Despite the efforts to automate as much as possible, some things for the time being will still require a human touch. According to Rex Black, president of Rex Black Consulting Services (RBCS), a hardware and software testing and quality assurance consultancy, you can break testing down into two overlapping categories:

1. Verification, where a test makes sure the software works as specified; and
2. Validation, where you make sure the software is fit for use.

For now, Black believes validation will remain manual because it is very hard to do in an automated fashion. For example, he explained, if you developed a video game, you can’t automate for things like: Is it fun? Is it engaging? Is it sticky? Do people want to come back and keep playing it? “At this point, automation tools are really about verifying that the software works in some specified way. The test says what is supposed to happen and checks to see if it happens. There is always going to be some validation that will need to be done by people,” he said. Lubos Parobek, vice president of product for the testing company Sauce Labs, explained that even if we get to a point where everything is automated in the long-term future, you will still always want a business stakeholder to take a final look and do a sanity check that everything works as a human expects. “Getting a complete view of customer experience isn’t just about validating user scenarios, doing click-counts and sophisticated ‘image analysis’ to make sure the look and feel is consistent — it’s about making sure the user is engaged and enjoying the experience. This inherently requires human intuition and cannot be fully automated,” added Mark Lambert, vice president of products for automated software testing company Parasoft. z

inspirational than operational, Black explained. “If you go to conferences, you will hear about people wanting to use it, and tool vendors making claims that they are able to deliver on it,” he said. “But at this point, I have not had a client tell me or show me a successful implementation of test automation that relies on AI in a significant way. What is happening now is that tool vendors are sensing that this is going to be the next hot thing and are jumping on that AI train. It is not a realized promise yet.” When you think about AI, you think about a sentient element figuring things out automatically, according to Gartner’s Murphy, when in reality it tends to be some repeated pattern of learning

something to be predictive or learning from past experiences. In order to learn from past experiences, you need a lot of data to feed into your machine learning algorithm. Murphy explained AI is still new and a lot of the test information that companies have today is very fragmented, so when you hear companies talk about AI in regard to test automation it tends to be under-delivering or over-promising. Vendors that say they are offering an AI-oriented test automation tool are often just performing model-based testing, according to Murphy. Model-based testing is an approach where tests are automatically generated from models. The closest thing we have out there to an





AI-based test automation tool are image-based recognition solutions that understand if things are broken, and can show when and where it happened through visual validation, Murphy explained. However, Black does see AI having potential within the test automation space in the future; he just warns businesses against investing in any technologies too soon. Areas where Black sees the most potential for AI include false positives and flaky tests. False positives happen when a test returns a failed result, but it turns out the software is actually working correctly. A human being is able to recognize this when they investigate the result further. Black sees AI being used to apply that kind of human reasoning and differentiate correct from incorrect behavior. Flaky tests happen when a test fails once, but passes when the test runs again. This unpredictable result is due to variation in the system architecture, the test architecture, the tool, or the test automation, according to Black. He sees AI being used to handle validation issues like this by bringing a more sophisticated sense of what fit for use means to the testing efforts. Kevin Surace, CEO of, sees AI being applied to test automation, but at different levels. Surace said there are five levels of AI that can be

‘As you change your application and as you add new functionality, you do not just create new tests, but you might have to change some existing tests.’ —Lubos Parobek

applied to test automation: 1. Scripting/coding 2. “Codeless” capture/playback 3. Machine learning: self-healing human-created scripts and money bots 4. Machine learning: Near full automation with auto-generated smart scripts 5. Machine learning full automation: auto-generated smart scripts with validation When deciding on AI-driven testing, Surace explained the most important qualification is to learn what type of level of AI a vendor is offering. According to Surace, many vendors have offerings at levels one and two, but there are very few vendors that can actually promise levels three and above. In the future, Parasoft’s Lambert expects humans will just be looking at

Test automation vendors are flocking to this idea of robotic process automation (RPA). RPA is a business process automation approach used to cut costs, reduce errors and speed up processes, so what does this have to do with test automation? According to Thomas Murphy, senior director analyst at Gartner, RPA and test automation technologies have a high degree of overlap. “Essentially both are designed to replicate a human user performing a sequence of steps.” Anthony Edwards, CTO of the test automation company Eggplant, explained that on a technical level, test automation is about automating user journeys across an app and verifying that what is supposed to happen, happens. RPA aims to do just that. “So at a technical level they are actually the exact same thing, it’s simply the higher level intent and purpose that is different. But if you look at a script that automates a user journey there is no way to tell if it has been created for ‘testing’ or for ‘RPA’ just by looking at it,” said Edwards. “The difference for some people would be that testing focuses on a single application whereas

the results of test automation, with the machine actually doing the testing in an autonomous way. But for now, the real value of AI and machine learning will be to augment human work and spot patterns and relationships in the data in order to guide the creation and execution of tests, he explained. Still, Black warns organizations to approach AI-based test automation with caution. “Organizations that want to try to use AI-based test automation at this point in time should be extremely careful and extremely conservative in how they pilot that and how they roll that out,” he said. “They need to remember that the tools are going to evolve dramatically over the next decade, and making hard, fast and difficult to change large investments in automation may not be a wise thing in the long term.” z
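Black’s flaky-test scenario, a test that fails once but passes on a re-run, can be illustrated with a small re-run classifier. This is a minimal sketch of the heuristic only, not any vendor’s implementation; the test callables are hypothetical stand-ins.

```python
def run_with_retry(test, retries=2):
    """Classify a test callable as 'pass', 'fail', or 'flaky'.

    A test that fails and then passes on a re-run is flagged flaky
    rather than failed, mirroring the unpredictable results Black
    describes.
    """
    if test():
        return "pass"
    for _ in range(retries):
        if test():
            return "flaky"   # failed once, passed on retry
    return "fail"            # failed consistently: a real candidate defect


def make_flaky():
    """Hypothetical stand-in for a timing- or environment-sensitive test."""
    results = iter([False, True])  # fails first, passes on the re-run
    return lambda: next(results)


print(run_with_retry(lambda: True))   # pass
print(run_with_retry(lambda: False))  # fail
print(run_with_retry(make_flaky()))   # flaky
```

An AI-assisted approach would go further and try to predict which failures are flaky without re-running, but the classification above is the ground truth such a model would be trained against.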

RPA typically works across several systems integrated together.” Over the next couple of years, Gartner’s Murphy predicts we will see more test automation vendors entering this space as a new way to capitalize on the market opportunity. “By moving into the RPA market, they are expanding their footprint and audience of people they go after to help them,” he said. This move is especially important as more businesses move toward open-source technologies for their testing solutions. Rex Black, president of Rex Black Consulting Services (RBCS), a hardware and software testing and quality assurance consultancy, sees the test automation space moving toward open source because of cost. “It’s easier to get approval for a test automation project if there isn’t a significant up-front investment in a tool purchase, especially if the test automation project is seen as risky. Related to that aspect of risk is that so many open-source test automation tools have been successful over recent years, so the perceived risk of going with an open-source tool is lower than it used to be,” he said. z
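Edwards’ point, that a user-journey script looks the same whether it was written for testing or for RPA, can be sketched in a few lines. The journey, the FakeApp driver, and every method name below are invented stand-ins for a UI-automation API, not any real tool.

```python
class FakeApp:
    """Hypothetical stand-in for a UI-automation driver."""
    def login(self, user, password):
        self.user = user
    def open_form(self, name):
        self.form = name
    def fill(self, data):
        self.data = data
    def submit(self):
        return "accepted" if self.data.get("amount", 0) > 0 else "rejected"


def submit_invoice(app, invoice):
    """One user journey: log in, open a form, fill it, submit, read the result.
    Nothing in this function says whether it is 'testing' or 'RPA'."""
    app.login("clerk", "secret")
    app.open_form("new-invoice")
    app.fill(invoice)
    return app.submit()


def as_test(app, invoice):
    """Testing intent: run the journey once and VERIFY the outcome."""
    result = submit_invoice(app, invoice)
    assert result == "accepted", f"unexpected result: {result}"


def as_rpa(app, invoices):
    """RPA intent: run the same journey repeatedly to do real work at scale."""
    return [submit_invoice(app, inv) for inv in invoices]


as_test(FakeApp(), {"amount": 120})
print(as_rpa(FakeApp(), [{"amount": 10}, {"amount": 0}]))  # ['accepted', 'rejected']
```

The journey function is identical in both cases; only the wrapper, verification versus throughput, differs, which is exactly why the same script cannot be identified as one or the other just by reading it.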



December 2018

SD Times

Buyers Guide

APM needs to change to keep up with continuous development practices


In an industry where every company is competing to be the best, if your application is giving users problems, they might not hesitate to ditch you and go to a competitor that does have a well-performing app. Companies that cannot provide their customers and users with satisfying experiences will lose out on business. This is why APM (Application Performance Management) is more important than ever. APM provides organizations with insights into performance that can help them expose bottlenecks and dependencies within application code, explained Amena Siddiqi, product marketing director of SteelCentral APM at Riverbed. This allows companies to “find and fix issues before users become aware of them and well before the business is impacted.” According to Siddiqi, APM enables organizations to achieve many things, such as shift left, perform code audits, understand feature usage and usability, map business dependencies and impact, and accelerate the application life cycle. Pete Abrams, chief operating officer of Instana, explained that APM is becoming increasingly important as

BY JENNA SARGENT

more companies adopt CI/CD practices. “As DevOps teams shift to a continuous deployment model, there is a significant increase in the amount of dynamism in production. More software is updated or added more frequently on more infrastructure. It’s great for solving business problems but difficult on monitoring tools that were designed for a much slower rate of change.” However, the traditional method of APM is no longer sufficient. Traditionally, when developers and operations teams were more siloed, developers would write code and send it off to operations teams, after which point the developer might not see that code again, explained Tal Weiss, CTO of OverOps. In the past, APM tools were used only by operations teams, and they often served as a means for operations teams to point blame at the developers when things went wrong, Weiss said. According to Weiss, as a result of the DevOps movement, development and operations have become less siloed, and APM tools need to change to reflect that. Now, APM tools will need to “go from serving one audience to serving multiple audiences with different needs and different levels of understanding

and depths of understanding of that company’s codebase and anything that goes into it,” said Weiss. This poses a challenge in terms of APM tool adoption that organizations need to figure out how to overcome in order to use APM effectively, Weiss explained. According to Weiss, power has shifted dramatically to developers in the past two years. Developers now have much more control in choosing the infrastructure their code runs on and the tooling used. APM vendors will have to respond to this shift by keeping developers in mind when developing their tooling; he believes APM solutions are currently lagging behind on that front. Weiss added that APM tools that are still primarily focused on operations teams struggle to gain widespread adoption in enterprises where developers have certain expectations they want met. Every team involved with application delivery has use cases requiring the data and information that would be provided by an APM tool, Instana’s Abrams explained. For example, an engineering team can use APM to plan and monitor the capacity of their plat-
continued on page 33 >


029-35_SDT018.qxp_Layout 1 11/16/18 4:16 PM Page 30


SD Times

December 2018

How these companies can help you monitor your applications

Pete Abrams, COO of Instana
As companies continue to adopt a DevOps culture and embrace Agile methodologies, it has become painfully obvious that traditional monitoring can no longer keep pace with the complexity and scale of dynamic application environments run by modern enterprises. Software-defined businesses require solutions to protect them from outages and bad user experiences — which is exactly what we deliver to our customers across the globe. APM enables software teams to monitor application performance and availability. Every time there is a fundamental change in the underlying application/infrastructure stack, a new opportunity arises. The age of containers and Kubernetes is upon us, and with it comes the latest opportunity to revolutionize APM. Instana’s automatic APM solution includes:
• Exceptional one-second metric granularity and a distributed trace for every user request, the best in the industry
• Continuous and automatic discovery, mapping and monitoring of all hosts and “apps,” whether they are bare metal, virtual machines, or containers
• Automatic tracing for nine languages (including Java), as well as OpenTracing support
• An AI approach to discovery, problem detection and troubleshooting made to handle the massive dynamic applications that result from modern architectures

Dennis Chu, director of product marketing at LightStep
LightStep [x]PM provides APM for today’s complex systems, including microservices and serverless functions. [x]PM helps organizations running microservice architectures maintain control over their systems, so they can reduce MTTR during firefighting situations and make proactive application performance improvements. [x]PM’s unique design allows it to analyze 100% of unsampled transaction data from highly distributed, large-scale production software to produce meaningful distributed traces and metrics that explain performance behaviors and accelerate root cause analysis. In addition, [x]PM’s intuitive user experience makes it easy to discover, categorize, and explain distinct latency behaviors occurring throughout an application. Users can even visually compare current performance against past performance along any application dimension, so at a glance it’s immediately clear whether the behavior being observed is normal or not. Finally, [x]PM follows entire transactions from clients making requests down to low-level services and back, revealing how every service interacted and responded to one another. [x]PM even automatically computes a transaction’s critical path.

Tal Weiss, CTO of OverOps
OverOps extends the traditional definition of application performance management to address the functional issues causing slowdowns and performance bottlenecks. This not only improves developer productivity but is critical for delivering reliable applications. The typical APM workflow helps identify slowdowns, yet still relies on log data as the primary resource for troubleshooting their cause. Log files can be incredibly useful, but are not without limitations. While they can provide you with some information about what went wrong in your application, they lack significant context and depth. They also require the foresight to anticipate errors, but we’ve found that the most impactful errors are often unexpected. OverOps goes beyond logs to capture code-level insight about application quality across any environment, and give developers and operations teams deeper visibility into the root cause of application issues and performance slowdowns. Without requiring code modifications or performance overhead, the OverOps agent collects a snapshot at the moment a timer threshold is exceeded or any error

occurs — even those not logged. Each snapshot is analyzed and processed to extract code, contextual values of variables and the state of the virtual machine and physical host at the time of an issue. By combining static and dynamic code analysis, OverOps arms development and operations teams with everything from the full source code and state at the point of execution, to high-level stats around the health of new deployments and their impact on the overall application.

Amena Siddiqi, product marketing director of SteelCentral APM at Riverbed
Riverbed’s big data technology for APM is fully adapted to today’s hyperscale cloud-native applications, giving you complete insight, at any scale. It captures, stores and indexes across billions of transactions a day without sacrificing data completeness, granularity or depth. And its powerful analytics extract business-relevant insight so you never miss a performance problem. With Riverbed, you can reconstruct incidents in great detail for immediate insight into even infrequent or intermittent issues, and quickly resolve problems before end users are impacted. Distributed transactions are traced from the end user down to deep levels of user code, even in overhead-sensitive enterprise production environments. Every diagnostic detail is preserved, including end user, log, SQL, network and payload data, along with fine-grained systems metrics, for unified visibility across the application ecosystem. This results in faster troubleshooting for even the toughest performance problems. Riverbed provides full operational visibility into transactions, containers and microservices running on modern cloud-based application infrastructure. Riverbed’s APM solution, AppInternals, is part of Riverbed SteelCentral’s Digital Experience Management platform that unifies device-based user experience, application, infrastructure, and network monitoring to provide a holistic view of a user’s digital experience. z
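The snapshot idea described above, capturing the values of variables at the moment an error occurs, can be illustrated in miniature with Python’s exception traceback. This is a toy sketch of the concept only; it says nothing about how any commercial agent is actually implemented.

```python
def snapshot_on_error(fn, *args):
    """Run fn; on any exception, capture the error plus the local
    variables of the innermost frame -- a miniature 'snapshot' of
    program state at the moment of failure."""
    try:
        return fn(*args)
    except Exception as exc:
        tb = exc.__traceback__
        while tb.tb_next:            # walk down to the frame that raised
            tb = tb.tb_next
        frame = tb.tb_frame
        return {
            "error": repr(exc),
            "function": frame.f_code.co_name,
            "locals": dict(frame.f_locals),   # contextual variable values
        }


def compute_ratio(total, count):
    scale = 100
    return scale * total / count     # raises ZeroDivisionError when count == 0


snap = snapshot_on_error(compute_ratio, 10, 0)
print(snap["function"], snap["locals"])
```

Even this toy shows why a snapshot beats a log line: the log would record only the exception message, while the snapshot preserves `total`, `count`, and `scale` without anyone having anticipated the failure.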



< continued from page 29

form and troubleshoot issues, while a development team might want to utilize APM to understand upstream and downstream calls, errors including context and stack traces, and see timings on calls to databases or other subsystems, the company explained. Another challenge organizations are facing with APM is that there is so much diversity in the kinds of tools, languages, and frameworks that teams use to build applications, said Daniel Spoonhower, CTO and cofounder of LightStep. “One challenge for APM is how a single tool can provide visibility across all of those different languages and frameworks,” he said. “The point of an APM tool is to understand the bigger picture of the whole application, and especially a whole application as it’s used and viewed by the users and the customers of that organization. And so it’s really important to have a standard set of both methods and tools for understanding what’s happening across an organization. As the organization chooses and becomes more fragmented in the way its application is built, that’ll create some challenges in APM itself.” Spoonhower believes that organizations can overcome this challenge with top-down decision-making. “I think if an organization can make those determinations it sort of has to come from the top down. If they can do that, it’ll go a long way for the organization as a whole moving more quickly.” According to Siddiqi, APM is not well-suited to all types of applications, for example applications where you have no control over the back-end components, such as SaaS applications or Office 365. Another use case not supported by APM would be a Citrix app running on a virtual desktop where the code cannot be instrumented. “For these reasons, a majority of business applications are simply not monitored at all,” said Siddiqi. Another shortcoming of APM is that all monitoring systems could be showing that everything is fine, yet help desk tickets are still being logged, Siddiqi explained.
“You need a way to see what

users actually see when they use these applications, with visibility into the device itself—the mobile phone, PC, laptop or any other device that your internal users are using to serve your customers,” said Siddiqi. Another issue is that many APM vendors are limited in their data collection. They typically are forced to resort to sampling or snapshotting based on errors. “This puts APM at a disadvantage when it comes to troubleshooting. Oftentimes, the data simply isn’t there to reconstruct and diagnose an issue,” said Siddiqi. Going forward, APM vendors will need to start incorporating AI and machine learning into APM by incorporating AIOps into company strategies, Siddiqi said. Another trend that is being seen is the commoditization of distributed tracing, Siddiqi explained. “This is often positioned as a war between traditional APM and OpenTracing but it’s really not an ‘either/or’ but rather a ‘better together.’” OpenTracing is a CNCF project that includes a set of vendor-neutral APIs and instrumentation that is used for distributed tracing. “This artificial divide can be bridged by enhancing application tracing with the context exposed by OpenTracing to produce a richer distributed trace,” said Siddiqi. Siddiqi also explained that while open-source options may look attractive, organizations need to choose sustainable APM strategies that are managed, curated, supported, and universal.
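Siddiqi’s sampling point can be made concrete with a quick simulation: when an error is intermittent and the monitor keeps only a small sample of requests, most occurrences are simply never recorded. The request count, error rate, and sample rate below are made-up illustration values.

```python
import random

def observed_errors(events, sample_rate, rng):
    """Count how many error events survive simple head-based sampling."""
    kept = [e for e in events if rng.random() < sample_rate]
    return sum(1 for e in kept if e == "error")

rng = random.Random(42)

# 100,000 requests with a 0.05% intermittent error: roughly 50 errors total.
events = ["error" if rng.random() < 0.0005 else "ok" for _ in range(100_000)]
total = events.count("error")

seen = observed_errors(events, sample_rate=0.01, rng=rng)
print(f"{total} errors occurred; {seen} captured at 1% sampling")
```

With only a handful of error events among 100,000 requests, 1% sampling will usually capture none of them, which is exactly the “data simply isn’t there” failure mode described above.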

Changes in architecture require changes in APM

In order to survive, LightStep’s Spoonhower believes that APM providers will need to rethink tooling. A lot of the tools that are out there today are geared toward more monolithic architectures, but as those architectures get broken up into more and more pieces, the APM tools will need to change to accommodate that switch. “Changes around architectures are definitely forcing a lot of organizations

December 2018

SD Times

to think differently about their tools and about their APM solution,” said Spoonhower. Specifically, breaking up applications into microservices or serverless functions tends to increase the volume of data coming out of those applications. APM tools will need to be able to scale and accommodate that increase in data volume in order to maintain the same level of visibility. “But at the same time, there’s a lot of cost to that and that cost is scaling in a way that’s not necessarily proportional to their business,” said Spoonhower. “So that’s creating a lot of pressure on those teams and organizations to really think about the way that they’re using that data and the way that they’re gathering and analyzing that data.”

Automation even more important for keeping up with continuous development

Instana’s Abrams believes that APM will need to change to incorporate more automation. “Any manual interaction slows down the development cycle, and that’s true of monitoring as well,” Abrams said. “APM has to adjust to the new normal: distributed applications, a polyglot of languages, continuous deployment, all built around extremely dynamic applications. Even simple configuration can get in the way, but APM tools must move to an automatic approach that can deal with the dynamism of the modern environment.” According to Abrams, conventional APM tools need to be extensively configured and optimized to monitor applications effectively. “They require too much manual work to even get started. Take an application map, as an example,” said Abrams. “Manual mapping is not only error prone, it is also likely to be obsolete before the first set of data is reported. Once you inject faster change associated with CI/CD, manual tools actually slow down every single cycle.” Manual APM struggles to deal with the dynamic environments created by microservices and containers, Abrams said. This ultimately leaves “a large hole in APM strategies everywhere.” z
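The zero-configuration instrumentation Abrams argues for can be approximated in miniature with a decorator that records timings without the function’s author doing anything. This is a sketch of the general idea only; real APM agents typically inject instrumentation at the bytecode or runtime level rather than through explicit decorators.

```python
import functools
import time

TIMINGS = {}   # stand-in for an APM metric store


def instrument(fn):
    """Wrap a function so every call records its wall-clock duration,
    with no changes to the function body itself."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            TIMINGS.setdefault(fn.__name__, []).append(
                time.perf_counter() - start)
    return wrapper


@instrument
def handle_request(n):
    return sum(range(n))   # stand-in for real request handling


handle_request(1_000)
handle_request(2_000)
print({name: len(samples) for name, samples in TIMINGS.items()})
# {'handle_request': 2}
```

The point of the sketch is that the monitored code never mentions monitoring: the map of instrumented functions builds itself as calls happen, which is the “automatic discovery” property the article says manual APM lacks.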





A guide to APM tools

• AppDynamics: The AppDynamics Application Intelligence Platform provides a real-time, end-to-end view of application performance and its impact on digital customer experience, from end-user devices through the back-end ecosystem — lines of code, infrastructure, user sessions and business transactions.
• AppNeta: AppNeta provides APM for DevOps and APM for IT. APM for DevOps includes the ability to identify trends and outliers at a glance, resolve issues faster, track user performance, and measure performance, functionality and availability from the user’s perspective. APM for IT includes SLA tracking on any network and end-user experience monitoring with synthetics.
• BMC Software: TrueSight App Visibility Manager goes beyond application performance monitoring to provide deep insight into user experience. In addition to tracking and measuring user activity at the individual or location level, it filters data to ensure the delivery of relevant application information without the unnecessary noise.
• CA Technologies: CA APM offers easy deployment of APM agents, including Node.js and PHP. Proactive identification and resolution of issues occur across physical, virtual, cloud, containerized and mobile applications. Intelligent insight comes through 360-degree mobile-to-mainframe visibility, which captures billions of critical metrics per day to verify transactions.
• Catchpoint Systems: Catchpoint offers innovative, real-time analytics across its Synthetic Monitoring and Real User Measurement (RUM) tools. Both solutions work in tandem to give a clear assessment of performance, with Synthetic allowing testing from outside of data centers with expansive global nodes, and RUM allowing a clearer view of end-user experiences.
• Dynatrace: Dynatrace User Experience Management and Dynatrace Synthetic Monitoring help developers to proactively understand and optimize user experience; Dynatrace Application Monitoring is designed to maximize application performance; and Dynatrace Data Center RUM delivers app-aware network insights.



• Instana: As the leading provider of Automatic Application Monitoring solutions for microservices, Instana leverages automation and artificial intelligence to deliver the visibility and actionable information DevOps teams need to manage dynamic applications. Instana enables application delivery teams to accelerate their CI/CD cycle and deliver high-performance business services with confidence.
• LightStep: LightStep’s mission is to deliver insights that put organizations back in control of their complex software applications. Its first product, LightStep [x]PM, is reinventing application performance management. It provides an accurate, detailed snapshot of the entire software system at any point in time, enabling organizations to identify bottlenecks and resolve incidents rapidly.
• OverOps: Capture code-level insight about application quality in real time to help DevOps teams deliver reliable software. Operating in any environment, OverOps employs both static and dynamic code analysis to collect unique data about every error and exception — both caught and uncaught — as well as performance slowdowns. This deep visibility into an application’s functional quality not only helps developers more effectively identify the true root cause of an issue, but also empowers ITOps to detect anomalies and improve overall reliability.
• Riverbed: The company recognizes the need to maximize digital performance and is uniquely positioned to provide organizations with a Digital Performance Platform that delivers superior digital experiences and accelerates performance, allowing its customers to rethink what is possible. Riverbed application performance solutions provide superior levels of visibility into cloud-native applications — from end users, to microservices, to containers, to infrastructure — to help you dramatically accelerate the application lifecycle from DevOps through production.
• Micro Focus: Its Application Performance Management suite empowers businesses to deliver exceptional user experiences by examining the impact of anomalies on applications before they affect customers. Business applications are available with proactive end-user monitoring and actionable diagnostics.
• Neotys: NeoSense actively monitors the performance and availability of critical business transactions within recorded user paths. NeoSense leverages the test scenario design capabilities of NeoLoad to quickly create realistic monitoring profiles. Dashboards, alert triggers, and notification rules provide actionable insights.
• New Relic: New Relic’s SaaS-based Software Analytics Cloud delivers code-level visibility for applications in production across six languages — Java, .NET, Ruby, Python, PHP and Node.js — supporting more than 70 frameworks.
• Pepperdata: Pepperdata enables enterprises to manage and improve the performance of their big data infrastructures by troubleshooting problems, maximizing cluster utilization, and enforcing policies to support multi-tenancy.
• SmartBear: AlertSite’s global network of more than 340 monitoring nodes helps monitor availability and performance of applications and APIs, and find issues before they hit end consumers. The Web transaction recorder DejaClick helps record complex user transactions and turn them into monitors, without coding.
• SolarWinds: SolarWinds AppOptics is a seamless application and infrastructure monitoring solution with distributed tracing and custom metrics that all feed into the same dashboarding, analytics, and alerting pipelines. It offers out-of-the-box support for dozens of frameworks and libraries in Java, .NET, PHP, Scala, Ruby, Python, Go, and Node.js, with over 150 integrations and plugins available for collection, processing and publishing of system data. z






Knowledge graphs: The path to true AI

Jans Aasman is an expert in cognitive science and is CEO of


Knowledge is the foundation of intelligence — whether artificial intelligence or conventional human intellect. The understanding implicit in intelligence, its application towards business problems or personal ones, requires knowledge of these problems (and potential solutions) to effectively overcome them. The knowledge underpinning AI has traditionally come from two distinct methods: statistical reasoning, or machine learning, and symbolic reasoning based on rules and logic. The former approach learns by correlating inputs with outputs for increasingly progressive pattern identification; the latter approach uses expert, human-crafted rules applied to particular real-world domains. True or practical AI relies on both approaches. They supplement one another for increasingly higher intelligence and performance levels. Enterprise knowledge graphs — domain knowledge repositories containing ideal machine learning training data — furnish the knowledge base for maximum productivity of total AI.

Machine learning complements rules-based AI by creating a feedback mechanism for improving the latter’s outcomes.

Symbolic reasoning

Knowledge graphs are the basis of symbolic reasoning systems that use expert rules for real-life business problems. Regardless of the particular domain, data source, data format, or use case, they seamlessly align data of any variation according to uniform standards focused on relationships between nodes. Semantic rules and inferencing create new types of understanding about business knowledge that machine learning couldn’t generate at all. Examples include optimizing the array of sensor data found in smart cities for event planning based on factors such as traffic patterns, weather conditions, previous event outcomes, and the preferences of the hosts and their constituencies. Symbolic reasoning also has the advantage that, in the end, one can still explain how and why certain new knowledge and new suggestions were generated.
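A tiny example of the kind of inference described here: a single transitive rule applied to graph triples derives facts that no individual statement contains, and the derivation remains fully explainable. The entities and predicate are invented for illustration.

```python
# A minimal knowledge graph as subject-predicate-object triples.
facts = {
    ("sensor_17", "located_in", "precinct_4"),
    ("precinct_4", "located_in", "downtown"),
    ("downtown", "located_in", "metro_area"),
}

def infer_transitive(triples, predicate):
    """Apply the rule (A p B) and (B p C) => (A p C) until no new
    facts appear -- a minimal forward-chaining inference step."""
    closure = set(triples)
    changed = True
    while changed:
        changed = False
        for a, p1, b in list(closure):
            for b2, p2, c in list(closure):
                if p1 == p2 == predicate and b == b2:
                    if (a, predicate, c) not in closure:
                        closure.add((a, predicate, c))
                        changed = True
    return closure

closed = infer_transitive(facts, "located_in")
print(("sensor_17", "located_in", "metro_area") in closed)  # True
```

Because every derived triple comes from a named rule applied to named facts, the system can answer not just what it concluded but why, which is the explainability advantage the column highlights.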

Statistical feedback loop Despite what has been said in the previous paragraphs, knowledge graphs can greatly benefit from

machine learning and add value to symbolic rules-based systems. When modeling car-driving behavior, for example, modern image recognition systems (relying on deep learning) can produce more realistic models when deployed in conjunction with rules. Nonetheless, the general paradigm by which machine learning complements rules-based AI is by creating a feedback mechanism for improving the latter’s outcomes — and enhancing the knowledge of semantic graphs. In the preceding smart city use case, organizations can deploy machine learning on the outcomes of rules-based systems, especially when those outcomes are measured in terms of KPIs. These metrics can assess, for example, the success of the event as measured by the enjoyment of the attendees, the subjective costs and the real costs to the municipality, those same costs for the organizations involved in the event, the rate of attendance, etc. Machine learning algorithms can analyze those KPIs for predictions to improve future events.
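The feedback loop can be sketched with a deliberately crude stand-in for the learning component: a symbolic rule with one tunable threshold, nudged by a KPI score after each event. All names, thresholds, and data below are invented for illustration; a real system would fit a statistical model rather than apply a fixed nudge.

```python
def plan_event(expected_rain_mm, indoor_threshold):
    """Symbolic rule: move the event indoors if forecast rain exceeds a threshold."""
    return "indoor" if expected_rain_mm > indoor_threshold else "outdoor"


def kpi(decision, actual_rain_mm):
    """Attendee-enjoyment stand-in: outdoor events in real rain score poorly."""
    if decision == "outdoor" and actual_rain_mm > 5:
        return 0.2
    return 0.9


# (forecast_mm, actual_mm) pairs for past events -- made-up history.
history = [(3, 8), (4, 9), (2, 1), (6, 12)]

threshold = 10.0
for forecast, actual in history:
    decision = plan_event(forecast, threshold)
    score = kpi(decision, actual)
    if score < 0.5:          # feedback: lower the threshold after bad outcomes
        threshold = max(0.0, threshold - 2.0)

print(threshold)  # 4.0 -- the rule became more cautious from KPI feedback
```

The loop leaves the symbolic rule intact and explainable; the statistical side only tunes its parameter, which mirrors the "feedback mechanism for improving the latter's outcomes" described above.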

Horizontal applicability

The interplay of the knowledge graph foundation with both the statistical and symbolic reasoning forms of AI is critical for several reasons. Firstly, they all augment each other. The graphs provide the knowledge for rules-based systems and optimize machine learning training data. The machine learning feedback mechanism improves the graph’s knowledge and the rules, while the output of rules-based systems provides knowledge upon which to run machine learning. Secondly, this process is applicable to any number of horizontal use cases across industries. Most of all, however, there are amazingly advanced applications of AI empowered by this combination, the likes of which make simple automation seem mundane. There are risk management use cases in law enforcement and national security in which one can observe terrorists, for example, integrate that information, and create hypothetical events or scenarios based on probability (determined by machine learning). Rules-based systems for security measures, then, are transformed into probabilistic rules-based systems that unveil the likelihood of events occurring and how best to mitigate them. Similar processes apply to many other instances of risk management. z




Making Java a modern language

Programming languages go through cycles of adoption. A nice visual timeline of popularity as measured by TIOBE shows Java dominant since the index began in 2002, with C showing close tracking and resilience throughout, and with C++, Python, and VB.NET falling into the next cluster that formed through 2018 at half the percentage rating. Other metrics, such as the RedMonk Programming Language Rankings (which measure the number of projects on GitHub), show JavaScript, C#, PHP, and Python crowding around Java at the top. There have been two major pressures on Java. The greatest test has been cloud-native computing, which has knocked aside the traditional Java EE monolithic architecture (despite being service-oriented and distributed — its monolithic nature is related to the degree of coupling between components). The other pressure has stemmed from the rise of Python, used both as a teaching language of choice in colleges and as the primary programming language in data science/engineering and machine learning. The stewards of Java at Oracle have long recognized that Java needs to change to remain relevant as new technology unfolds. A good example is the modularization of Java in JDK 9, which radically transforms deployment sizes. The latest change, a six-month release cadence, allows Java to move faster with new features while reducing the pressure to get a new feature into the next release (because the next window is not a two- to three-year wait), so helping

high-quality production. Oracle has made a series of important changes to Java: Java EE has been migrated to the Eclipse Jakarta EE project, licensing has been consolidated around GPL, and Oracle has donated proprietary features from its JDK to the OpenJDK, effectively making one JDK. It also offers a long-term support model available for a minimum of three years, staggered around certain releases. These changes taken together reflect a response to the challenges of making Java a modern cloud-native language, where changes happen fast: taking Kubernetes as an example and flagship of the cloud-native computing world, its code base (written in Go) has changed 95% in the last three years. It was fashionable some years ago to ask whether Java was dead; I was skeptical because Java is not just a language but a platform of many languages around the JVM. I believe the changes taking place in the Java community today will give the language/platform a further lease on life, but one can see other languages bubbling under: Go and Swift are highly modern, Python will continue its trajectory in the science and engineering world, and the fragmentation in the user interface is anyone’s call, with JavaFX only just still in the mix (now run by an open community). For now, advising enterprise decision makers, Java is a safe choice. z


Michael Azoff is a principal analyst for Ovum’s IT infrastructure solutions group.







Good ideas are lost in the process

David Rubinstein is editor-in-chief of SD Times.


My career began in general-circulation newspapers. Today, those organizations are facing digital transformations and are feeling the pain of adopting new tools, creating new processes and workflows, and changing the culture of journalists. (We know most people dislike change; grizzled, curmudgeonly writers and editors especially hate it.)

Friends throughout that industry tell me they're having to wrestle with multiple systems that don't play well with one another — one for online publication, another for producing print and digital editions of the papers — and the longtime editors and reporters, who cut their teeth on hot type and later electric typewriters, feel devalued as a new generation of digital-native journalists comes in undaunted by the technology. Today, they say, editors who can navigate the systems and get stories out quickly are more prized than those who recognize flaws in the content, because now you can publish, wait for user feedback, and correct almost instantly.

This type of transformation is playing out in every industry. And while processes such as Agile, Lean manufacturing, and DevOps are designed to get product out more quickly, organizations are finding that the process is leading the way rather than the creative side. Working more quickly and efficiently should free up time to innovate, but somewhere along the line, the creative side is lost in the process.

"People are so beholden to process that ideas become secondary," Compuware CEO Chris O'Malley told me at the recent DevOps Enterprise Summit 2018 in Las Vegas. In most large companies in the software industry, product management — the folks who come up with the ideas that differentiate the company's products — is still subservient to engineering. "You need to blow that up," he said. "You need to make them equal." That, he said, is how you'll be able to create experiences that are different.
When it comes to product management and the development process, the teams need to be more like John Lennon and Paul McCartney, he said.


They clash, but in doing so make each other better. "That's why it's not Lennon and Pete Best," he said, alluding to the fired "fifth Beatle" to illustrate the importance of the two sides being equals.

Jeff Bezos and Amazon are hurting many businesses, to the point of putting quite a few out of business altogether. But the thing of it is, O'Malley said, that Bezos is giving his secrets away. He's telling people exactly what he's doing, in plain speak. "Be customer obsessed," O'Malley said, citing the Bezos mantra. "Assume they are always dissatisfied, that they always want something better. Then you have to close the gap between what the customer wants and what you're delivering."

But to motivate workers to buy in to the transformation, you need better messaging to your teams. "You can't change people with wonkiness," he said. "You need to tug at their heartstrings."

Meanwhile, leaders have to be decisive, and the resilience that Agile and DevOps provide can mitigate the downside risk of those decisions. "Decision-makers are afraid of taking the company into a cul-de-sac they can't turn around from," O'Malley explained. If they aren't using processes designed in the digital age, and they go down a path they can't turn back from, they lose the ability to compete with digital-first companies such as Amazon.

Compuware, of course, is a mainframe company that has had to teach COBOL programmers new interfaces and DevOps pipelines to keep pace with a changing world. The mainframe virtues of pervasive security, compliance enforcement, scalability, and resiliency are just as key, if not more so, in the digital age. It's now clear that the reports earlier this century predicting the mainframe would die out when all the COBOL programmers retired were wrong. At Compuware, O'Malley said, the younger, digital-native programmers are taking to the language easily and are stepping up to take leadership roles in the company. The older programmers are also adapting to the new ways, and are valued for their institutional knowledge.

So mainframes are finding new life in the digital age. But the tools, O'Malley said, "just can't look like they're out of a NASA movie from the 1960s."

