SD Times - November 2017


NOVEMBER 2017 • VOL. 2, ISSUE 5 • $9.95 •







News Watch


3 steps to applying DevOps


Top 10 technology trends for 2018


Java EE finally gets rebooted


Workplace automation is key to keeping pace, easing IT burdens


GitHub outlines plans, tools for the future of software development

Best Practices for Agile ALM


IBM expands AI research to support an aging population


Puppet announces the next phase of software automation at PuppetConf 2017


GitLab shares its vision for Complete DevOps


ALM Suite revs up agile development


Account-based intelligence


What’s New in Software Licensing?



GUEST VIEW by George Karidis HCI: Ready for mainstream?


ANALYST VIEW by Peter Thorne Will software always need users?

DevOps Showcase
Securing Microservices: The API gateway, authentication and authorization
Use Modern Tools on Mainframe Apps
Tasktop Helps Enable Successful DevOps
Micro Focus Eases DevOps Transitions
Improve Software Releases with JFrog
CA Unlocks the Power of DevOps

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 225 Broadhollow Road, Suite 211, Melville, NY 11747. Periodicals postage paid at Huntington Station, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2017 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 225 Broadhollow Road, Suite 211, Melville, NY 11747. SD Times subscriber services may be reached at




ART DIRECTOR Mara Leonardi

CONTRIBUTING WRITERS Jacqueline Emigh, Lisa Morgan, Frank J. Ohlhorst

CONTRIBUTING ANALYSTS Rob Enderle, Michael Facemire, Mike Gualtieri, Peter Thorne

D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803

dtSearch (advertisement): Over 25 search features, with easy multicolor hit-highlighting options. dtSearch’s document filters support popular file types, emails with multilevel attachments, databases, web data. Developers: APIs for .NET, Java and C++; SDKs for Windows, UWP, Linux, Mac and Android; see dtSearch.com for articles on faceted search, advanced data classification, working with SQL, NoSQL and other DBs, MS Azure, etc. The Smart Choice for Text Retrieval® since 1991. 1-800-IT-FINDS





NEWS WATCH

Android developers are switching from Java to Kotlin
Google surprised Android developers back in May when it announced for the first time that it was adding a new programming language to the operating system. Since then, Kotlin adoption has exploded among developers, so much so that it is set to overtake Java in the next couple of years, according to a new report.

Realm, a real-time mobile platform provider, announced the first edition of the Realm Report. The report takes an in-depth look at the mobile development world based on analysis of its more than 100,000 active developers. Kotlin is a statically typed programming language developed by JetBrains for JVM, Android, JavaScript browser and native applications.

According to the Realm Report, since August 2015 the number of applications built with Kotlin has increased by 125%, and about 20% of Kotlin applications today were previously built with Java. In addition, the report found the number of Android apps built with Java has decreased by 6.1% over the past four months. Realm predicts that by December 2018, Kotlin will overtake Java for Android development, much as the Swift programming language overtook Objective-C for iOS app development.

EdgeX Foundry launches first major code release
Linux Foundation open-source project EdgeX Foundry has launched the first major code release of its common open framework for IoT edge computing, Barcelona, originally announced in April. The release features key API stabilization, better code quality, reference Device Services supporting BACnet, Modbus, Bluetooth Low Energy (BLE), MQTT, SNMP, and Fischertechnik, and double the test coverage across EdgeX microservices.

“Barcelona is a significant milestone that showcases the commercial viability of EdgeX and the impact that it will have on the global Industrial IoT landscape,” said Philip DesAutels, senior director of IoT at The Linux Foundation.

EdgeX has established a biannual release roadmap, and the next major release, “California,” is planned for next spring. California will continue to expand the framework to support requirements for deployment in business-critical Industrial IoT applications. According to the project, “In addition to the general improvements, planned features for the California release include baseline APIs and reference implementations for security and manageability value add.”

Microsoft acquires AltspaceVR
Microsoft has acquired the virtual reality company AltspaceVR, which provides a social community in VR that Microsoft hopes to expand on. AltspaceVR had originally closed down in July, but connected with Alex Kipman at Microsoft “and found a natural overlap between his goals for mixed reality and their hopes for the future of AltspaceVR,” the company wrote in a post on its website. The AltspaceVR community will continue to exist in its current form and is supported on the HTC Vive, Oculus Rift, Daydream by Google and Samsung Gear VR, as well as in 2D mode on Windows and Mac.

“Microsoft is excited to incorporate communications technology into our mixed reality ecosystem. AltspaceVR takes personal connections, combines them with real-time experiences, and leverages immersive presence to share experiences. Situations of people, places, and things have deeper meaning and in turn, are more memorable. We’re excited to see how far this technology can go,” the company wrote.

DeepMind creates new research unit to explore ethics of AI
DeepMind has announced that it is launching a new research unit called DeepMind Ethics & Society, aimed at helping the company explore the impacts of artificial intelligence in the real world. The organization believes that AI scientists must hold their work to the highest ethical standards because they are responsible for the social impact of their work.

According to DeepMind, AI applications must remain under human control — a thought that is shared by many. The organization also believes that this technology should be used for socially beneficial purposes. In a blog post, it writes that the new research unit “has a dual aim: to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all.”

Kubernetes 1.8 adds better security, workload support
The open source project for deploying, scaling and managing containerized applications is getting a number of new functional improvements in its latest release. Kubernetes 1.8 is the third release of the year, and features security enhancements, workload support and extensibility improvements. Key features include stable support for role-based access control (RBAC), a beta version of Transport Layer Security certificate rotation, a beta version of the core Workload APIs, advanced auditing, alpha support for CRD schema validation, service automation, and cluster stability improvements.
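For readers who have not yet used role-based access control, the feature that went stable in 1.8 looks like the following manifest sketch. The namespace, role name and user are hypothetical, illustrative values, not from the release notes:

```yaml
# Minimal RBAC sketch: a Role granting read-only access to Pods,
# bound to a hypothetical user in the "default" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]             # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: jane                  # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

With 1.8, the `rbac.authorization.k8s.io/v1` API shown here is the stable version of what was previously a beta API.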

Chef announces new solution for cloud native architectures
Chef is building on top of its application automation solution with the release of a new SaaS-based service. Habitat Builder builds on Habitat, first announced last year as a way to give apps the ability to self-organize and self-configure throughout their lifecycle. Habitat Builder adds to that technology by enabling modern application development teams to build, deploy, and manage cloud native applications. With Habitat, teams can connect to their source code, package and build apps, deploy those apps in any format and runtime, and manage those apps in any cloud native architecture, according to the company.

Chef explains it decided to tackle cloud native architectures in this release because of the industry’s move to the cloud, the growth of containers, the accessibility of containers in the cloud and the recent uptake in microservices. The solution features GitHub SCM and authentication integration, automated builds, automated dependency rebuilds, public and private origins, release channels for continuous delivery, container publishing to Docker Hub, scaffolding for Node.js and Ruby, and more than 500 packages for common applications and libraries.

Google Firebase launches document database for apps
The Google Firebase team has launched the public beta of Cloud Firestore, a NoSQL document database solution for mobile and web app development built in collaboration with the Google Cloud Platform team. SDKs for Cloud Firestore are available for iOS, Android and web development, allowing developers to integrate a powerful querying tool; real-time data synchronization; automatic, multi-region data replication; and server SDKs for Node, Python, Go and Java.

“Managing app data is still hard; you have to scale servers, handle intermittent connectivity, and deliver data with low latency,” Alex Dufetel, product manager for Firebase, wrote on the company’s development blog. “We’ve optimized Cloud Firestore for app development, so you can focus on delivering value to your users and shipping better apps, faster.” Cloud Firestore is designed for extensive scalability and data synchronization across any number of users, including the use of client-side storage and serverless security rules in the case of limited connectivity or offline use.

ShiftLeft releases Security as a Service solution for cloud and microservices
A new company has exited stealth mode with a mission to help organizations protect their cloud and microservices applications. ShiftLeft is an application-specific cloud security provider designed to secure cloud apps as part of the continuous integration pipeline rather than tackling threats as they are discovered in production. The company is launching an automated Security as a Service (SECaaS) platform that brings together source code analysis and runtime behavior to understand the security of an application and create a custom threat solution for it. The solution features real-time protection from unknown threats, protection from key OWASP top-10 risks, data leakage prevention, detection of open source software usage risks, and data flow visibility.

Postman: Developers need improved API documentation
API development platform provider Postman has released the results of its 2017 State of API Survey, which gathered insight from its community of 3.5 million developers on API usage, technologies, tools and concerns. Some of Postman’s key findings: around 70 percent of Postman developers spend more than a quarter of their week working with APIs; most development work involves private and internal APIs, though public APIs have their place; microservices were identified by respondents as the most interesting technology for 2017; and documentation is one area that needs general improvement, with respondents providing concrete suggestions for how this could be done.

One irony of the findings lies in developers’ call for improved API documentation while showing an aversion to documenting their own APIs. While there were many suggestions for what sorts of improvements could be made, according to the survey, the two most important were standardization and better code examples.

Perforce acquires Agile planning tool provider Hansoft
Perforce continued its push into the bigger software development life cycle and DevOps with the acquisition of Swedish Agile planning tool provider Hansoft. Terms of the deal between the privately held companies were not disclosed.

“We combine the developer productivity you’d find in GitLab or GitHub, plus add scaling and repository management for version control and builds, and now have project planning and management,” explained Tim Russell, chief product officer at Perforce. With its foundational Helix version control system and the recently released TeamHub CI/CD solution, Perforce is targeting organizations that are building technology products that generate revenue, Russell said. “Our differentiator is that no one has figured out scale,” he said. “Helix Core [server] has proven scale, and it extends to Git and projects in repository management.” z




3 steps to applying DevOps
Implementing the methodology can get daunting, but using this approach will lead to success through the creation of application and CI/CD pipelines
BY SARAH LAHAV

Everyone seems to be talking about DevOps but, if you are new to it, it might all seem a little overwhelming. For an organization that doesn’t use DevOps today, adopting the three-step approach described here promises a generally clean implementation.


Creating the Application Pipeline
Regardless of the type of application, the pipeline looks very similar. The goal is to weave application releases into a new coordinated process that looks something like this:
1. The developer makes changes locally on their laptop/PC/Mac and, upon completion (including their own testing), issues a pull request.
2. Code review kicks in: the developer’s changes are reviewed and accepted, and a new release can be created.
3. Someone, or the system, runs a procedure to create a release artifact, which could be a Java JAR file, a Dockerfile, or any other kind of unit of deployment.
4. Someone, or the system, copies the artifact to the web or application servers and restarts the instances.
5. Database migrations/updates may additionally be applied, although the database is often left outside of the application release process.

This process evolves into an automated, application-only continuous integration and continuous deployment (CI/CD) pipeline by using a toolset to automate the process, as shown in Figure 1. However, be aware that this isn’t ideal (yet), because at this point the application alone is being released (injected) into existing environments. If the developer has altered the local environment as well as the application, but those local environment changes are not part of the release, then they’re not in version control. And if they are not applied at the same time as the application, the application will break. The solution is for application and infrastructure releases to be synced.

Sarah Lahav is the CEO of SysAid Technologies.
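The coordinated process can be sketched as a minimal CI configuration. This sketch uses GitLab CI syntax (one of the tools covered in this issue); the job names, the Maven commands and the deploy script are hypothetical stand-ins for whatever build and deploy steps an application actually uses:

```yaml
# Hypothetical application-only CI/CD pipeline (.gitlab-ci.yml sketch).
stages:
  - build
  - test
  - deploy

build-artifact:              # create the release artifact
  stage: build
  script:
    - mvn package            # e.g. produces a Java JAR
  artifacts:
    paths:
      - target/app.jar

run-tests:                   # gate the merge alongside code review
  stage: test
  script:
    - mvn verify

deploy-app:                  # copy the artifact out and restart instances
  stage: deploy
  script:
    - ./scripts/deploy.sh target/app.jar   # hypothetical deploy script
  only:
    - master                 # runs once the pull request is merged
```

Note that this pipeline releases only the application, which is exactly the limitation described above: any environment changes the developer made locally are invisible to it.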


Creating the Infrastructure Pipeline
Ideally, the infrastructure team has learned from the developer team’s DevOps and CI/CD pipeline journey and can expand and adapt it for the infrastructure (which increasingly is a public cloud). There are some differences with the infrastructure pipeline, specifically around units of deployment, which are now the infrastructure layers in environments: the things that surround an application, such as the DNS, load balancers, the virtual machines and/or containers, databases, and a plethora of other complex and interconnected components.

The big difference here is that the infrastructure is no longer described in a Visio diagram: it is brought to life in code, in a configuration file, in version control — this is Infrastructure as Code (IaC). Before, a load balancer was described in Visio diagrams, Word documents, and Excel spreadsheets of IPs and configurations. Now everything about the load balancer is described in a configuration file. Figure 2 is an example AWS CloudFormation configuration for a load balancer.

Whenever this file is changed in version control, such as changing the subnets a load balancer can point to, the automation engine can update an existing infrastructure environment to reflect that one change, along with any dependencies that change. It also means that you can apply this template to multiple environments and be very confident that all environments are consistent. It is also possible to make the templates dynamic so that they change their behavior according to the environment: in production the environment will scale out across three datacenters, but on a developer’s laptop it will use a single local VirtualBox system.

Figure 1: A typical CI/CD pipeline

Creating the Full Stack Pipeline
The goal of a full-stack pipeline is to ensure that the application and infrastructure changes over time are in sync, both in version control and in the release deployments across each pipeline stage. The popular CI/CD tools can now automate the full stack because everything is programmable. This means everything can be captured in version control, and the same configuration can use dynamic input parameters to build an environment on a developer’s laptop, or a QA system in the cloud, or updates to production.

Imagine a developer makes an application change that also requires a change to the database, to the web instance scaling configuration, and to the DNS. All of these changes are captured in one version control branch, and the developer builds a system on their laptop from this branch and tests it. This is what Platform-as-a-Service systems can do. By adding an environment configuration file inside the same code base as your application, you can ensure that you have bound your application to the infrastructure. z

Figure 2: AWS CloudFormation configuration for a load balancer
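Since the figure itself is not reproduced here, the following is a minimal sketch of what such a CloudFormation load balancer definition can look like. The resource name, subnet IDs and security group ID are hypothetical placeholders, not values from the article’s figure:

```yaml
# Hypothetical CloudFormation template fragment for a load balancer.
Resources:
  AppLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: app-lb
      Scheme: internet-facing
      Subnets:
        - subnet-0a1b2c3d       # changing these subnets in version control
        - subnet-4e5f6a7b       # triggers an automated environment update
      SecurityGroups:
        - sg-0123abcd
```

As the text describes, editing the `Subnets` list in version control is all it takes for the automation engine to reconcile every environment built from this template.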



Data Quality Made Easy. Your Data, Your Way.
Melissa provides the full spectrum of data quality to ensure you have data you can trust. We profile, standardize, verify, match and enrich global People Data – name, address, email, phone, and more. Our data quality solutions are available on-premises and in the Cloud – fast, easy to use, and powerful developer tools, integrations and plugins for the Microsoft and Oracle Product Ecosystems.
Start Your Free Trial
Melissa Data is now Melissa. See What’s New at



Top 10 technology trends for 2018
BY CHRISTINA CARDOZA

With 2017 drawing to a close, Gartner is looking to the future. The organization announced its annual top strategic technology trends at the recent Gartner Symposium/ITxpo. Gartner bases its trends on whether or not they have the potential to disrupt the industry and break out into something more impactful. The top 10 strategic technology trends, according to Gartner, are:

1. AI foundation: Last year, the organization included artificial intelligence and machine learning as its own trend on the list, but with AI and machine learning becoming more advanced, Gartner is looking at how the technology will be integrated over the next five years. “AI techniques are evolving rapidly and organizations will need to invest significantly in skills, processes and tools to successfully exploit these techniques and build AI-enhanced systems,” said David Cearley, vice president and Gartner Fellow. “Investment areas can include data preparation, integration, algorithm and training methodology selection, and model creation. Multiple constituencies including data scientists, developers and business process owners will need to work together.”

2. Intelligent apps and analytics: Continuing with its AI and machine learning theme, Gartner predicts new intelligent solutions that change the way people interact with systems, and transform the way they work.

3. Intelligent things: Last in the AI technology trend area is intelligent things. According to Gartner, these go beyond rigid programming models and exploit AI to provide more advanced behaviors and interactions between people and their environment. Such solutions include autonomous vehicles, robots and drones, as well as the extension of existing Internet of Things solutions.

4. Digital twin: A digital twin is a digital representation of real-world entities or systems, Gartner explains. “Over time, digital representations of virtually every aspect of our world will be connected dynamically with their real-world counterpart and with one another and infused with AI-based capabilities to enable advanced simulation, operation and analysis,” said Cearley. “City planners, digital marketers, health care professionals and industrial planners will all benefit from this long-term shift to the integrated digital twin world.”

5. Cloud to the edge: Edge computing is a form of computing topology that processes, collects and delivers information closer to its source. “When used as complementary concepts, cloud can be the style of computing used to create a service-oriented model and a centralized control and coordination structure, with edge being used as a delivery style allowing for disconnected or distributed process execution of aspects of the cloud service,” said Cearley.

6. Conversational platforms: Conversational platforms such as chatbots are transforming how humans interact with the emerging digital world. This new platform will take the form of question-and-command experiences, where a user asks a question and the platform is able to respond.

7. Immersive experience: In addition to conversational platforms, experiences such as virtual, augmented and mixed reality will also change how humans interact with and perceive the world. Outside of video games and videos, businesses can use immersive experience to create real-life scenarios and apply them to design, training and visualization processes, according to Gartner.

8. Blockchain: Once again, blockchain makes the list for its evolution into a digital transformation platform. In addition to the financial services industry, Gartner sees blockchains being used in a number of different areas such as government, health care, manufacturing, media distribution, identity verification, title registry, and supply chain.

9. Event driven: New to this year’s list is the idea that the business is always looking for new digital business opportunities. “A key distinction of a digital business is that it’s event-centric, which means it’s always sensing, always ready and always learning,” said Yefim Natis, vice president, distinguished analyst and Gartner Fellow. “That’s why application leaders guiding a digital transformation initiative must make ‘event thinking’ the technical, organizational and cultural foundation of their strategy.”

10. Continuous adaptive risk and trust: Lastly, the organization sees digital business initiatives adopting a continuous adaptive risk and trust assessment (CARTA) model as security becomes more important in a digital world. CARTA enables businesses to provide real-time, risk- and trust-based decision making, according to Gartner.

“Gartner’s top 10 strategic technology trends for 2018 tie into the intelligent digital mesh. The intelligent digital mesh is a foundation for future digital business and ecosystems,” said Cearley. “IT leaders must factor these technology trends into their innovation strategies or risk losing ground to those that do.” z




To compare, last year’s trends are available here:




DEVELOPMENT PLATFORM? Free yourself with a multi-cloud, multi-tech application development and modernization platform that allows developers to create cloud-native and cloud-enabled applications faster with microservices and containers.

RED HAT® OPENSHIFT APPLICATION RUNTIMES • Multiple runtimes • Multiple frameworks • Multiple clouds • Multiple languages • Multiple architectural styles

Copyright © 2017 Red Hat, Inc. Red Hat, Red Hat Enterprise Linux, the Shadowman logo, and JBoss are trademarks of Red Hat, Inc., registered in the U.S. and other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.




Java EE Finally Gets Rebooted
BY LISA MORGAN

The Java EE 8 release has been the slowest in history. Meanwhile, Java EE developers have been anxious to get their hands on it, because the world has changed dramatically in five years. Nevertheless, Java EE is an extremely successful platform, and the Java Community Process (JCP) has stewarded more than 20 compatible implementations, resulting in a $4 billion industry. Java isn’t going away anytime soon. In fact, it’s better than ever with the latest release.

“The Java EE releases were faster during its first releases and started to slow down with Java EE 6,” said Cesar Saavedra, technical marketing manager at Red Hat. “The longer release cycles started back in 2006, which meant a slowdown in innovation introduced by each new version of Java EE.”

A lot has happened since 2005, however. The proliferation of mobile devices, the growth in communication bandwidth and storage capacities, and the commoditization of cloud infrastructure services enabled new business models and channels. The digital economy was born, which necessitated a whole new set of technical and business requirements. Concepts like omni-channel, multi-speed IT, agile methodologies, continuous integration and continuous delivery, DevOps, microservices architecture, hybrid cloud, and API management have all become popular because they address the needs of businesses embracing digital transformation. Likewise, technologies such as containers and open-source software have also garnered significant market traction because they’re able to deliver the agility required by digital businesses.

All of this means that Java EE developers and their organizations are now challenged with adapting their existing Java EE applications to the new digital norm. Many have already started doing this by developing microservices using Java EE. They’re either moving their Java EE applications into lighter-weight application servers and deploying them using container technologies, or they’re rewriting their Java EE applications onto cloud-native application runtimes and frameworks. If you’re faced with the same reality, consider these four points:

Content provided by SD Times and

Java EE goes to Eclipse Foundation
Java EE is moving to The Eclipse Foundation, which is great news for developers and organizations. The Eclipse Foundation runs many open-source projects with a community-based governance approach that fosters collaboration and rapid innovation. “I think the innovation cadence and evolution of the project will increase with the feedback and input of the community for the benefit of the community,” said Saavedra. “Developers, organizations and the Java EE ecosystem will be able to reap the output of the project, which will contain innovations that directly address the needs of digital businesses.”

Java EE developers can reuse their skills
Java EE developers can continue to use and leverage their Java EE experience and knowledge to maintain existing workloads. They can also adapt their workloads to new digitally-friendly environments, such as container-based environments, or develop microservices using Java EE. “Skill reuse is really important,” said Saavedra. “Organizations can continue running and developing applications in Java EE, a trusted framework with RASP (reliability, availability, scalability, and performance) capabilities to run production workloads. In addition, companies will be able to secure Java EE resources from the existing job market and they won’t have to spend money retraining their existing Java EE resources.”

Eclipse MicroProfile will spur innovation
Eclipse MicroProfile is an open source specification for Enterprise Java microservices. As an open-source project under the foundation, MicroProfile focuses on rapid innovation and has a time-boxed incremental release schedule. “Eclipse MicroProfile comprises a community of individuals, organizations, and vendors collaborating to bring microservices to the Enterprise Java community,” said Saavedra. “They’re leveraging Java EE technologies and adding new functionality.” Eclipse MicroProfile sub-projects can become Java Specification Request (JSR) candidates, like the MicroProfile Config API, which was recently proposed to the JCP. Now that Java EE is moving to the Eclipse Foundation, there are plans for Java EE to leverage MicroProfile.

Java EE implementations
Lastly, Java EE implementers and ecosystem players can continue to evolve their support and capabilities for DevOps, hybrid cloud, and modern architectures including microservices and container-based environments orchestrated by Kubernetes. z





Workplace automation is key to keeping pace, easing IT burdens
Survey finds execs seek efficiencies through human, machine collaboration
BY FRANK J. OHLHORST

A new survey from ServiceNow illustrates that intelligent automation must become part of any transformative developer’s bandoleer of capabilities — a realization driven by the fact that some 86 percent of executives surveyed globally feel their companies will soon hit a wall and, by 2020, will need

greater automation to get work done. SD Times recently spoke with Dave Wright, chief strategy officer at ServiceNow, to gauge what the survey results mean to today’s software engineers. “It’s clear that we can’t keep up with the pace of work, and most IT departments are far too busy,” Wright said. “The survey shows that companies need workload relief now.” Wright believes a shift to automated work is imminent and will happen faster than expected. “There’s an economic payoff for automation that companies can’t ignore,” he said. “Our survey showed highly automated companies are six times more likely to experience revenue growth of more than 15 percent.”

Wright pointed to a multinational insurance corporation with more than 65,000 employees to illustrate how intelligent automation results in better business process efficiency and productivity. This company recently deployed five “virtual engineers” inside its IT infrastructure that work 24 hours a day collecting and analyzing system performance data and spotting network device outages. The virtual engineers work alongside human engineers to learn patterns in the network data and eventually act on their own to solve technical problems. According to Wright, a network device outage typically would go to a queue and take human engineers about 3.5 hours to address. Using the virtual assistants, which the company nicknamed “co-bots,” there is no queue and most incidents can be fixed within 10 minutes. “If a machine can’t solve a problem on its own, it is kicked back to a human engineer,” Wright added. That has significant implications for those responsible for software QA.

Will the bots take away jobs?
One concern is that automation will eliminate jobs and reduce headcount, something that may sound like a good idea to a cost-conscious CFO but could be bad news for developers and, in turn, DevOps team members. However, Wright offers a different view: “Workplace automation is about enhancing productivity, not eliminating occupations. Machines can take on the burden of busy work, such as QA testing and its various iterations, and free up developers to do the creative work they crave.”

Historically, technological innovation has killed off some jobs and created others, and McKinsey estimates that fewer than 5 percent of today’s occupations are candidates for full automation, though nearly every occupation could be partially automated. “With so much of the workplace still relying on manual tasks, machines can revitalize human work,” Wright added. Wright claims that ServiceNow is working to deliver on that vision with its Intelligent Automation Engine, which applies machine learning and advanced analytics to four of the biggest challenges for IT organizations today: preventing outages, automatically categorizing and routing work, predicting future performance, and benchmarking performance against peers.

IBM’s Watson enters the picture

ServiceNow partnered with IBM to combine Watson’s abilities with ServiceNow’s intelligent automation to deliver greater efficiencies to their mutual customers. According to Wright, the engine applies intelligent automation to workflows that allows companies to replace both repetitive manual tasks and complex business processes. “The volume of back and forth work across every department for common tasks like resetting of passwords or onboarding new employees is straining their systems,” said Wright. “ServiceNow provides a way out. Trained with each customer’s own data, our engine enables customers to realize game-changing economics.” Obviously, ServiceNow has skin in the automation game. Nevertheless, it’s clear that enterprises have reached a tipping point and intelligent automation is the answer to keeping pace with runaway workloads — a simple fact that transformative IT managers must embrace to ensure that IT remains relevant to business operations, while increasing service levels. z
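The “automatically categorizing and routing work” challenge named above can be pictured with a toy rule-based router. This is purely an illustrative sketch, not ServiceNow’s engine: the categories, keywords, and queue names are invented, and a real system would learn these mappings from historical ticket data rather than hard-code them.

```python
import re

# Toy illustration of automatically categorizing and routing incoming work.
# Categories, keywords, and queue names are invented for this sketch; a real
# engine would be trained on historical tickets instead of using fixed rules.
ROUTING_RULES = {
    "network":  {"keywords": {"outage", "router", "latency", "dns"}, "queue": "network-ops"},
    "access":   {"keywords": {"password", "login", "locked", "reset"}, "queue": "service-desk"},
    "hardware": {"keywords": {"laptop", "monitor", "printer", "disk"}, "queue": "desktop-support"},
}

def route_ticket(description: str) -> str:
    """Score each category by keyword hits; unmatched tickets fall back to triage."""
    words = set(re.findall(r"[a-z]+", description.lower()))
    best_queue, best_score = "triage", 0
    for rule in ROUTING_RULES.values():
        score = len(words & rule["keywords"])
        if score > best_score:
            best_queue, best_score = rule["queue"], score
    return best_queue

print(route_ticket("User password reset needed, account locked"))  # service-desk
print(route_ticket("Core router outage, high latency on DNS"))     # network-ops
print(route_ticket("Coffee machine broken"))                       # triage
```

The fallback queue mirrors the “kicked back to a human engineer” behavior Wright describes: anything the rules cannot score lands in front of a person.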




SD Times

November 2017

GitHub outlines plans, tools for the future of software development
Sees data driving work, looks to provide security, custom experiences
BY CHRISTINA CARDOZA

About 10 years ago, GitHub embarked on a journey to create a platform that brought together the world’s largest developer community. Now that the company believes it has reached its initial goals, it is looking to the future with plans to expand the ecosystem and transform the way developers code through new tools and data.

“Development hasn’t had that much innovation arguably in the past 20 years. Today, we finally get to talk about what we think is the next 20 years, and that is development that is fundamentally different and driven by data,” said Miju Han, engineering manager of data science at GitHub. The company announced new tools at its GitHub Universe conference in San Francisco that leverage its community data to protect developer code, provide greater security, and enhance the GitHub experience.

“It is clear that security is desperately needed for all of our users, open source and businesses alike. Everyone using GitHub needs security. We heard from our first open-source survey this year that open-source users view security and stability above all else, but at the same time we see that not everyone has the bandwidth to have a security team,” said Han.

GitHub is leveraging its data to help developers manage the complexity of dependencies in their code with the newly announced dependency graph. The dependency graph enables developers to easily keep track of their packages and applications without leaving their repository. It currently supports Ruby and JavaScript, with plans to add Python support in the near future. In addition, the company revealed new security alerts that will use human data and machine learning to track when dependencies are associated with public security vulnerabilities, and recommend a fix. “This is one of the first times where we are going from hosting code to saying this is how it could be better, this is how it could be different,” said Han.

GitHub’s new security alerts use human data along with machine learning to track when a dependency is associated with a potential vulnerability.

On the GitHub experience side, the company announced the ability to discover new projects with news feed and explore capabilities. “We want people to dig deeper into their interests and learn more, which is one of the core things it means to be a developer,” said Han.

The new news feed capabilities allow users to discover repositories right from their dashboard, and gain recommendations on open-source projects to explore. The recommendations will be based on the people users follow, their starred repositories, and popular GitHub projects. “You’re in control of the recommendations you see: Want to contribute to more Python projects? Star projects like Django or pandas, follow their maintainers, and you’ll find similar projects in your feed. The ‘Browse activity’ feed in your dashboard will continue to bring you the latest updates directly from repositories you star and people you follow,” the company wrote in a blog.

The “Explore” experience has been completely redesigned to connect users with curated collections, topics, and resources so they can dig into a specific interest like machine learning or data protection, according to Han. Han went on to explain that the newly announced features are just the beginning of how the company plans to take code, make it better, and create an ecosystem that helps developers move forward. “These experiences are a first step in using insights to complement your workflow with opportunities and recommendations, but there’s so much more to come. With a little help from GitHub data, we hope to help you find work you’re interested in, write better code, fix bugs faster, and make your GitHub experience totally unique to you,” the company wrote. z
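The core mechanic behind alerts like these can be sketched in a few lines: compare each pinned dependency version against a database of known-vulnerable version ranges. The package names, versions, and advisory IDs below are invented for illustration; GitHub’s actual alerts draw on curated advisory data plus the machine-learning signals described above.

```python
# Minimal sketch of the idea behind dependency security alerts: flag any
# declared dependency whose version falls below a known-fixed release.
# Package names and advisories here are made up for illustration.
VULNERABILITY_DB = {
    # package -> list of (first_fixed_version, advisory_id)
    "examplelib": [((1, 2, 5), "ADVISORY-0001")],
    "webtool":    [((3, 0, 0), "ADVISORY-0002")],
}

def parse_version(text: str) -> tuple:
    return tuple(int(part) for part in text.split("."))

def audit(dependencies: dict) -> list:
    """Return (package, advisory, fixed_version) for each vulnerable pin."""
    alerts = []
    for name, version in dependencies.items():
        for fixed_in, advisory in VULNERABILITY_DB.get(name, []):
            if parse_version(version) < fixed_in:
                fixed = ".".join(str(n) for n in fixed_in)
                alerts.append((name, advisory, fixed))
    return alerts

project = {"examplelib": "1.2.3", "webtool": "3.1.0", "other": "0.9"}
for name, advisory, fixed in audit(project):
    print(f"{name}: {advisory} - upgrade to {fixed} or later")
```

Recommending a fix, as the security alerts do, is then just reporting the first patched version alongside the advisory.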


IBM expands AI research to support an aging population
BY IAN C. SCHAFER

Late last month, IBM and the University of California San Diego announced their partnership with the opening of the Artificial Intelligence for Healthy Living Center on UCSD’s campus, the latest piece of IBM’s Cognitive Horizons Network, a research collective focused on the emerging fields of the Internet of Things, artificial intelligence and machine learning. UCSD’s team of researchers is tasked with tackling society’s demographic shift toward an older population, which Susann Keohane, founder of IBM’s Aging-in-Place Research Lab, wrote “will have unprecedented effects on health care, the economy, and individual quality of life.” “For the first time ever, there are more people over 65 than under 5,” Keohane said. “There is essentially a shortage of care providers. If you look at this demographic shift across the globe, this is why a company like IBM is interested in looking at aging. What does the aging demographic shift mean for our clients? We’re in every industry and every one will be impacted.”

IBM’s research will be able to predict trends and eventually help the elderly live better lives, says IBM’s Susann Keohane.

By combining IBM’s IoT and health care research with its Watson machine learning, Keohane says that the effect this shift will have can be broken down in such a way that it will benefit everyone, from the elderly in need of improved care to businesses now more aware of their customer base. “Aging isn’t a disease,” Keohane said. “We’re all doing it. But it does have impact on health. So could we surround ourselves with emerging technology in the home, while assuring the privacy and security that comes with health care, and design something that will help someone understand how well they’re aging in place?”

Tajana Rosing, one of the researchers at UCSD, has already taken a stab at answering this question. By tracking things like bathing and toileting with unobtrusive, IoT-connected sensors installed voluntarily in some eldercare facilities like Paradise Village in National City, California, with more test beds being set up over the course of the next year, Keohane and Rosing say that some trends are already coming to light. “We are starting to see consistent patterns in behaviors,” Keohane said. “So what that tells us is that, one, because the sensors are in the right place, our algorithms are working really well. For example, if I was to look at a month of data and I was trying to detect bathing with a consumer-grade sensor, I, with really good accuracy, can tell you that with 31 days in a month, that sensor goes off at 8:30 [a.m.] every day but two days. The consistency and the accuracy is really exciting, because if it was all over the place, then maybe the person wasn’t doing so well, maybe they weren’t taking care of themselves.”

Rosing described the example of a 90-year-old grandfather who’d begun forgetting to turn off the stove after cooking, though he was otherwise fairly capable. “The goal is to try to understand how people’s habits may change as they get a little bit older,” she said. By combining a sensor with robotics and AI, the latter two of which are handled by other branches of the research team, their efforts would serve to “detect and then provide intervention and turn off the stove when we know that the person isn’t standing in front of it or when there’s no pot on it and it’s still burning, for example.”

Keohane provided further examples, such as tracking bathroom and shower usage to reasonably infer risk of urinary tract infection, a common cause of death for those over 70, and tracking motion for nighttime wandering, an early sign of dementia. Rosing says that it’s these types of easy-to-track detections, such as running water or an alert that a subject has missed a doctor’s appointment, that are the backbone of the research. Keohane refers to these small but significant data points as “little heartbeats, just on-and-off signals.” But by corroborating data from all of the individual sensors and finding the patterns, there are many observations to be made. “This is important because it tells us about the state of people’s cognitive capability,” Rosing said. “The more of this type of forgetfulness you have, the more it correlates with issues you may have during a daily-living situation. This could be indicative of needing more support or needing some intervention.”

While collecting accurate readings seems to be a cinch for IBM’s IoT sensors, the next step is feeding that data to IBM’s Watson computer for AI. That’s where Keohane says the true challenge of their research lies. “One of the core technologies we have is the Contextual Data Fusion Engine, which allows us to pull together disparate, siloed data sets, normalize them and look for the patterns across them,” Keohane said. “The challenge is really having good data and data that’s labeled. If you start to train your algorithms on data, you want to make sure it’s good, otherwise your predictive model is going to be completely off.”

Keohane says that UCSD’s research will help improve the reliability of the data with a team of dedicated ethnographers who will annotate the collected readings and look for patterns themselves, before the team starts to generate a predictive model for AI. Rosing says that one of the most exciting parts of the project for her is the way the team is utilizing tried-and-true ethnography techniques on a much broader scale, looking for more specific behaviors over a much longer period and across populations. “Studies look at coarse behavior usually: when do you wake up, when do you shower, eat and so on,” Rosing said. “What we’re looking at in this case is much finer: are you having longer pauses in your speech now, are your body movements a little bit rougher or a little bit more clumsy? These microbehavioral changes are very highly correlated with the kind of cognitive changes we’re seeking to detect.”

By studying how these “microbehaviors” correlate with data gathered from the sensors installed by Rosing’s team, the ethnographers will be able to begin designing the predictive model that will eventually guide the initiative’s efforts. The ethnographers are also working on ways to create customized monitoring methods for individuals that can then be extrapolated. Rosing described how she might test the cognition of her grandfather, an avid gardener, by asking him to list off what should be planted around March. “Right now he can rattle off plenty of things you should be planting in March,” Rosing said. “Maybe he doesn’t remember all of them; the fraction that he forgets is a metric that tells us that maybe his cognitive abilities need a little support.” Rosing says this would not be useful, for instance, in testing the cognition of her mother-in-law, who doesn’t care much for gardening, but might respond better to questions about her beloved dog. Knowing how to apply specialized tests like these on a broader scale is something that Rosing says the ethnographers will continue to work on.

Past that, Keohane thinks the final hurdle for assistive technologies like the ones her team is researching is adoption of the technology. “How do we help people embrace and feel good about embracing it?” said Keohane. “Privacy and security are very important to [IBM]. So we have the right infrastructure in place, and I think that creates a lot of potential for what a company like IBM can do in this space.” z
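The consistency signal Keohane describes, a sensor firing at nearly the same time every day, can be reduced to a simple spread statistic over daily firing times. The sketch below is illustrative only: the readings and the 30-minute threshold are invented, and IBM’s Contextual Data Fusion Engine clearly does far more than this.

```python
# Illustrative sketch of the "consistency" signal: if a bathing sensor fires
# at nearly the same time each day, the routine is stable; widely scattered
# firing times may warrant a closer look. Data and threshold are invented.
from statistics import pstdev

def routine_is_stable(firing_minutes, max_spread=30.0):
    """firing_minutes: one firing per day, as minutes after midnight."""
    return pstdev(firing_minutes) <= max_spread

# A month where the sensor fires around 8:30 a.m. almost every day.
stable_month = [510, 512, 508, 511, 509] * 6   # 8:28 to 8:32 a.m.
# A month where firings are scattered from early morning to late evening.
erratic_month = [300, 510, 700, 120, 660, 480] * 5

print(routine_is_stable(stable_month))   # True
print(routine_is_stable(erratic_month))  # False
```

A real pipeline would corroborate many such per-sensor signals, as the article notes, rather than alerting on any single one.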


Puppet announces the next phase of software automation at PuppetConf 2017
BY CHRISTINA CARDOZA

Puppet wants to take automation to a new level by making it more repeatable, scalable, and even more automatic. The company announced at PuppetConf 2017 that it is expanding its product portfolio from one to six products in order to support this next phase of automation. “We are entering a new era of automation. One where software makes it apparent where to go next. One with insights about all of the critical resources running across your estate, along with the ability to take action, and actually do something about those insights,” according to the company.

To reach this new era, the company unveiled Puppet Discovery, a new standalone solution that provides insight into hybrid infrastructure and makes it easier to control and manage. Discovery enables users to discover resources across their virtual machines, both on-premises and in the cloud, and inspect running containers. For instance, Puppet Discovery will be able to tell if an organization has seven different versions of SSL running, detect which ones are vulnerable, and take action, according to Tim Zonca, vice president of marketing and business development at Puppet. “We believe in a year from now it will seem silly to just have discovery with no ability to take action, or to just have management with no way to continually discover what else is going on across your estate. Both these approaches need to be tied together at the hip,” he said.

Puppet’s Package Inspector lets users browse, search, discover, and secure infrastructure.

The company also announced the release of Puppet Tasks, a new set of offerings designed to make it easier for users to approach automation and expand it across their infrastructure. Puppet Tasks is available in two ways: through Puppet Bolt and through Puppet Enterprise. Puppet Bolt is an open-source, standalone task runner that allows users to quickly automate manual tasks without having to install any agents. Puppet Enterprise Task Management is the big new capability in Puppet Enterprise 2017.3. It provides the same benefits as Puppet Bolt, but with governance, scale, flexibility and team-oriented workflows, according to the company. It enables users to execute tasks across tens of thousands of nodes in order to scale their automation footprint faster.

“At Puppet, we’re providing the tools to help our customers start simple, prove success and build from there to get more work done,” said Omri Gazitt, chief product officer at Puppet. “Organizations that are just starting their automation journey can use Puppet Bolt to easily automate ad hoc, manual work, and then over time, bring it all under control. And our customers that are already using Puppet Enterprise can take advantage of the enterprise task management capability in our latest release to get added scale, governance and flexibility as they automate tasks across their estate of infrastructure and applications.”

Additional Puppet Enterprise 2017.3 features include improvements to the Package Inspector for browsing and searching packages discovered on the nodes connected to Puppet Enterprise; enhancements to the core Puppet Platform; a new configuration data capability to improve code reusability and make it easier to configure nodes; and Japanese language support.

In addition, the company announced it is building on its acquisition of the continuous delivery platform Distelli with the introduction of three new products: Puppet Pipelines for Apps, Puppet Pipelines for Containers and Puppet Container Registry. Puppet Pipelines for Apps and Puppet Pipelines for Containers automate the delivery of applications through the CI and CD chain, from initial build all the way through release automation. Puppet Container Registry is open source and provides a way for teams to host Docker images within their infrastructure and get a unified view of all their images stored in local and remote repositories. “At Puppet, our heritage has really been steeped in helping deliver better software faster for the underlying infrastructure and middleware that applications require. Distelli has focused on the application workload through its continuous integration and continuous delivery pipeline. Together, we unify the two,” said Zonca.

Other announcements from the conference include the release of Puppet Enterprise 2017.3, a new Splunk partnership, new partners for DevOps, and cloud and container updates. z
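Zonca’s SSL example, finding every version running across an estate and flagging the vulnerable ones, boils down to a scan-and-compare pass over an inventory. The hosts, versions, and cutoff below are invented; this is a generic illustration of the idea, not Puppet Discovery’s implementation.

```python
# Toy version of the discovery-plus-action idea: scan an inventory of hosts
# for the SSL/TLS library version each one runs, and flag versions below a
# minimum acceptable release. All hosts and versions are invented.
MINIMUM_OK = (1, 0, 2)  # assume, for this sketch, releases before 1.0.2 are vulnerable

inventory = {
    "web-01":   (1, 0, 2),
    "web-02":   (0, 9, 8),
    "db-01":    (1, 1, 0),
    "cache-01": (1, 0, 1),
}

def flag_vulnerable(hosts: dict, minimum: tuple) -> list:
    """Return the sorted host names running a version older than `minimum`."""
    return sorted(host for host, version in hosts.items() if version < minimum)

for host in flag_vulnerable(inventory, MINIMUM_OK):
    print(f"{host}: needs upgrade")
```

The “take action” half, actually pushing the upgrade to the flagged hosts, is exactly the gap a task runner or enterprise task management layer is meant to fill.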



GitLab shares its vision for Complete DevOps
BY JENNA SARGENT

GitLab has announced a new round of funding to help make its vision for Complete DevOps a reality. Last year, the company unveiled a plan to simplify the software development process. Today, it is taking that plan a step further by uniting development and operations into one user experience. The recent $20 million Series C funding round will help GitLab reimagine and restructure DevOps tools to create one experience that “reduces friction, increases collaboration and drives a competitive advantage,” also known as Complete DevOps, the company wrote in a blog.

GitLab has said it believes in changing all creative work from read-only to read-write so that everyone is able to contribute. While DevOps helps create faster software iteration cycles with greater quality and security, the current software landscape has developers and operations using different tools, making collaboration difficult. In order to collaborate, they have to integrate their tools, which tends to slow progress and lead to poor code. According to the company, a Complete DevOps solution includes a single UI for dev and operations, integrates all phases of DevOps, and enables dev and operations teams to work together with less friction. “We want to build GitLab into the complete DevOps tool chain. We already cover every stage of the software development lifecycle. Why stop at production? Why not go beyond that, into operations? We want to close the loop between Dev and Ops, automating processes and reducing complexity so that you can focus on a great customer experience,” the company wrote.

GitLab already has a monitoring dashboard that is developer-centric, but it will be creating a second dashboard with much of the same content, focused instead on the needs of operations. This will allow operators to get an overview of all projects in production, and keep developers and operations using the same tool to view the status of production.

GitLab envisions a dashboard for operations similar to its developer-centric dashboard, above, but with the boxes representing deployments of projects within their infrastructure.

GitLab also wants to automatically create merge requests and merge them, instead of having to notify someone to go in and make changes manually. This allows developers to focus their time on other important things.

Cloud development is another important part of the vision. Developing in the cloud removes the need to constantly maintain multiple different languages and their dependencies. Developers can make changes, push them back, and then commit them to a repo. GitLab PaaS (Platform as a Service) is a platform for ops to be built on. The company gives the example of its own platform, which is built on Kubernetes, where developers never have to touch the Kubernetes dashboard or tools. They are able to fully manage their ops environment using GitLab tools without having to deal with the tools of the underlying platform.

Feature flags is the idea of writing code and deploying it, but not immediately delivering it. This allows developers to merge code more often, because no one is using it yet, and gives them more control over the rollout when they are ready to deliver it. Developers can roll a feature out to a small percentage of users, and if there turns out to be a problem with the code, they can fix it without having affected many people.

Auto DevOps is the ability to automatically detect a project’s programming language, build an app in that language, and then automatically deploy it. Kubernetes users can Auto Build and Auto Deploy with just one click.

Other aspects of the Complete DevOps vision include artifact management, SLO and auto revert, onboarding and adoption, and a pipeline view of environments. All of these features are discussed in detail in GitLab’s DevOps strategy. z
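Percentage rollouts like the one described above are commonly implemented by hashing a stable user identifier into a bucket. The sketch below is a generic illustration of that technique, not GitLab’s implementation; the flag name and rollout percentage are invented.

```python
# Generic illustration of a percentage-based feature-flag rollout: hash the
# flag name plus user id into a stable bucket in [0, 100) and compare it to
# the rollout percentage. Deterministic, so a user stays in or out of the
# rollout across requests. Flag names and percentages here are invented.
import zlib

def flag_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    bucket = zlib.crc32(f"{flag}:{user_id}".encode()) % 100
    return bucket < rollout_percent

# Roll a hypothetical "new-dashboard" flag out to 10 percent of users.
enabled = [u for u in range(1000) if flag_enabled("new-dashboard", str(u), 10)]
print(len(enabled))  # roughly 100 of 1,000 users
```

Because the bucketing is deterministic, widening the rollout from 10 to 50 percent keeps every already-enabled user enabled, which is what makes the gradual, fixable rollout the article describes workable.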



Best Practices for Agile Application Lifecycle Management
Many may view it as a collision of ideologies, but pairing ALM with Agile practices makes a lot of sense to those who have pushed ahead in the latest application development and delivery methodologies.

Raj Mehta, president and CEO of Infosys International, an enterprise services firm, explains it best: “Most people do not realize that Agile and ALM are two different concepts that complement each other very well. Agile is all about the Agile Manifesto, which focuses on establishing practices, while ALM is all about managing lifecycles,” he said. More simply put, Agile is the how, and ALM is the why of application development. Mehta is not alone in his observations. Sagi Brody, CIO of WebAir, a cloud services firm, said “While adopting Agile methodologies helps to accelerate

application development, it is critical to think about long-term goals and how an application evolves to meet changing business needs, and that is where ALM comes into play.” Pretty much any application will change over its lifetime to maintain its usability, regardless of who built it. Those changes may be driven by business needs, changes in related software or hardware elements, or even the need to meet changing compliance or other legislative requirements. While Agile methodologies help to expedite that change, the process needs to be managed,

continued on page 28 >




ALM Suite revs up agile development
BY JACQUELINE EMIGH

Enterprises are approaching DevOps and Agile ALM at various paces and from myriad directions. With quality control at its center, the Micro Focus ALM Suite is aimed at helping organizations at all levels of adoption produce software applications effectively, efficiently, collaboratively, and at enterprise scale, regardless of methodology. “We’ve been helping our customers to deliver high-quality, high-performance applications for around 20 years with our ALM and QC solutions,” said Esther Balestrieri, ALM product manager for Micro Focus, which merged with HPE Software in September. “More recently, with the digital revolution and teams embracing Agile and building DevOps-based practices, we’ve recognized that our customers’ life cycle management needs are changing.” Many organizations are in the process of transitioning to DevOps, a set of practices for automating processes between development and IT operations, enabling the two divisions to work together instead of functioning as separate silos.

Octane addresses DevOps and Agile needs
“We added ALM Octane to our ALM portfolio about two years ago to address the DevOps and Agile needs of our customers, with growing adoption across enterprises,” Balestrieri said. Yet despite the rise of Agile development and new cloud-enabled software delivery methods, many organizations also hang on to waterfall technologies to help support legacy applications. In fact, the recent HPE Quality 2020 Report showed that while only 16 percent of organizations are using Agile, 51 percent are leaning in an Agile direction and 24 percent are using hybrid development environments.

Content provided by SD Times and Micro Focus

Across hybrid environments
In response, Octane’s analytics-driven iterative test capabilities support these hybrid environments through integration with a full scope of waterfall and Agile methodologies. “Micro Focus understands the complexity that exists in our customer environments,” Balestrieri noted. “By embedding the notion of quality and not limiting it to specific commercial tools, we are able to provide a comprehensive end-to-end quality view across the pipeline, and use the data to provide actionable insights.”

Octane integrates out of the box with continuous integration servers like Jenkins, TeamCity and Bamboo through built-in plugins. REST APIs are also included for custom integrations. “Leveraging the synchronizer, the ALM Suite provides a single source of truth. It acts as a data hub for visibility and traceability in life cycle management, facilitating business processes and rules for governance and compliance,” she explained. “ALM Octane brings all the tools that developers use together under one roof. The dashboard offers a quality heat map for the application, which is continuously updated. It provides information on what tests are passing, defects, etc. You can see a view of the state of quality every time you have a build,” according to Balestrieri. Octane supports a highly extensive selection of tests right out of the box, ranging from BDD tests like Gherkin

to UFT-based tests, unit tests, manual tests, load tests, and regression suites.

Support for IT operations, too
Beyond continuous testing, Micro Focus provides tools for continuous monitoring and control throughout the application life cycle, in cloud and traditional software environments alike. “No matter what an organization’s current level of DevOps, [Micro Focus’] Hybrid Cloud Management suite provides a collaborative framework for dev and ops to rapidly deliver applications,” noted Arul Murugan Alwar, senior product manager, IT Operations Management, Micro Focus. “This allows the development team to focus on deploying their applications; the operations team to aggregate simple cloud services or compose complex cloud services; applications to be provisioned in environments compliant with IT policies; and executives to track and optimize the cost of cloud services.”

Micro Focus Deployment Automation supports continuous delivery and production deployments by automating the deployment pipeline, reducing cycle times, and providing rapid feedback to development, test, and operations teams. Orange, one customer, needed to accelerate application delivery to be able to release several times a day. Orange chose Micro Focus ALM Octane to automate testing management and introduce continuous deployment. Octane transforms testing by accelerating quality through the integration of Agile practices, the Jenkins CI server, and other tools.

Micro Focus continues to support ever-evolving customer needs through compliance with emerging standards like SAFe, as well as innovations such as predictive analysis for big data. A predictive analytics module is currently in beta as a SaaS application. z



< continued from page 25

which leads to the ideology of pairing development and delivery with application lifecycles. Mehta said “ALM helps to bring together the people, tools and procedures that are necessary to create an integrated process that lends itself to predictable and repeatable application development.” That said, it seems obvious that for ALM to achieve those goals, there must be some form of planning or project management, as well as defined requirements, and a development process that includes build and testing activities. And that is where Agile methodologies come into the picture.

Best practices for Agile ALM

Benefits to moving into Agile ALM Agile is about processes, but ALM is about the tools. With that in mind, it becomes critical to pick the appropriate tools to support ALM.

n Incorporate planning tools: ALM has a direct impact on project planning, the information surrounding the steps of a plan should be captured and analyzed. n Incorporate reporting and analytics tools: Having insight into the process and better understanding how things work in an Agile environment is critical for success. n Leverage Scoped Work Tools: Having tools that are aware of project scope proves to be an important building block for creating Agile ALM. Scopes often change and evolve, so tools must be able to handle those changes with little fuss or muss. n Incorporate version control: With Agile, application iterations can happen quickly and often change based upon feedback and other considerations. Part of lifecycle management is understanding where an application is in its lifecycle. Here it becomes critical to track iterations and relate those to versioning practices.

• Remember QA: Quality assurance is often glossed over in an Agile cycle, with many relying on iteration feedback to fully validate the quality of any change pushed down the pike. However, QA is a big part of ALM, and ALM forces the introduction of better QA policies. Integrated tools that validate and track QA tests are critical to make sure released code meets minimum requirements.
• Incorporate automation: There are numerous tools on the market that can automate much of the QA testing process. Those tools can test iterations much faster than humanly possible and can be scripted to look for contingencies. RPA (Robotic Process Automation) comes into play here; by wrapping an RPA tool with machine learning, testing can be further automated, speeding the process and ensuring that Agile-driven timelines are met.
• Team involvement: Picking tools for Agile ALM should be a team process. In other words, Agile team members should have a say in what tools are used, how they are selected and what results are expected from them. That lends itself to self-organization of team members and also helps to normalize processes.
• Institute a feedback system: One of the most recognized aspects of Agile is the feedback loop, where iterations offer feedback that is used to improve future iterations. With ALM, the feedback loop is just as important, preventing declining application quality and ideological shifts. Incorporating feedback as part of ALM helps to prevent those issues and brings ALM into the fold of Agile.
• Make sure selected tools handle governance: ALM is the continuous process of managing the life cycle of an application through governance, meaning that governance must be executed accordingly. That requires management tools that incorporate governance ideologies that align with Agile, yet offer the appropriate reporting and control of iterations.

As with any process, there are tried-and-true best practices that smooth adoption, and Agile ALM is no exception. However, the intersection of Agile and ALM means that DevOps teams must think about application development and deployment in a different light, one that illuminates how application iterations fit into business processes that are changing at an ever faster pace. More simply put, Agile is the how, and ALM is the why of application development.

Brody said, “There are multiple vendors and organizations that provide templates and guidance for building DevOps into an Agile ALM practice. However, there is no one-size-fits-all, and different projects may require different ideologies. It all comes down to adhering to some basic guidance and selecting what works best.”

As a matter of fact, numerous vendors have created services that provide guidance around implementing Agile ALM. Those vendors bundle together training, templates, software tools, and other resources into a package that businesses can adopt. Mehta said “some of those offerings prove to be a good starting point; however, most will find that customization is a must, and that can run into excessive costs when relying on outsiders to fully understand your business processes.”

The adoption of appropriate ALM tools is quickly becoming part of the Agile ALM manifesto. It is those tools that provide the framework for instituting ALM, while also supporting the ideologies of Agile. Bringing ALM into the world of Agile should not be taken lightly; it is a new frontier that many organizations are ill equipped to deal with, much less understand. However, numerous resources are only a web search away, which also shows the flexibility of Agile ALM: there is no one answer for the Agile ALM symbiosis. Organizations will need to fully understand how Agile impacts their DevOps before bolting on ALM.
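The automated QA gating and feedback loop described in the best practices above can be sketched in a few lines of Python. The check names and the release rule here are hypothetical illustrations, not taken from any specific ALM tool:

```python
# Minimal sketch of an automated QA gate with a feedback loop.
# Check names and the pass/fail rule are hypothetical examples.

def run_checks(checks):
    """Run each named QA check; return the sorted names of failures."""
    return sorted(name for name, check in checks.items() if not check())

def release_decision(checks):
    """Release only when every check passes; otherwise feed failures back."""
    failures = run_checks(checks)
    if not failures:
        return "release"
    return "blocked: " + ", ".join(failures)  # feedback for the next iteration

# Two checks an integrated ALM toolchain might track per iteration.
iteration_checks = {
    "unit_tests": lambda: True,
    "coverage_minimum": lambda: 0.82 >= 0.80,  # measured vs. required coverage
}
```

In a real pipeline the lambdas would call out to the test runner and coverage tooling, and a blocked result would be routed back into the iteration’s feedback loop rather than returned as a string.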






There are many ways to approach DevOps, which is quickly gaining mindshare in all manner of enterprises. Some come at it from the more established and understood area of continuous integration/continuous delivery. Others are looking beyond that, to using automation on every process possible to maintain an Agile coding and delivery cadence. Still others are taking a test-driven development approach; by shifting testing left, organizations are finding defects more quickly and delivering better-quality code.

Then there is the security question: If teams are iterating and delivering at such a rapid pace, how vulnerable are they to malicious attacks or unintended data loss? There is also the entire Ops side of the issue, where performance monitoring and management come into play, not to mention container management for development teams creating microservices, which enable even faster changes to applications.

The fact is, no matter what angle you take, DevOps will soon be the chosen method for delivering software. If your organization is not already using DevOps methodologies, it is likely investigating now to see how they would fit. This showcase was created to give readers of SD Times a look at the software providers in the space, to hear from them about what their solutions offer, and to help readers make decisions that will serve them well down the DevOps road. We hope you find it useful.






Use Modern Tools on Mainframe Apps

Large enterprises often struggle to build and maintain their mainframe applications, typically written in COBOL. Older COBOL developers are retiring from the workforce, and the applications, many developed decades ago, are without documentation, often complex and difficult to understand. Compuware has developed a novel approach to this problem, initially addressing its own challenges, and offers a suite of products that enable non-mainframe developers to understand and therefore maintain these business-critical applications.

“The largest shops in the world run mainframes. They want to implement DevOps and integrate their mainframe applications or mainframe system of record with their mobile and web applications,” said David Rizzo, VP of product development at Compuware. “One of the big struggles they have is implementing new technologies and applications across the enterprise.”

Three years ago, Compuware began its journey toward becoming an Agile DevOps company focused on mainframe software development. Today its mainframe developers work hand-in-hand with non-mainframe developers creating a modern DevOps toolchain. Importantly, they’re able to use the same tools to work on different platforms. Using Compuware solutions, the mainframe becomes just another platform versus something esoteric that’s nearly impossible to manage. In addition, Compuware has partnered with XebiaLabs, SonarSource, Jenkins and others to ensure its tools work within a customer’s existing DevOps toolchain.


Many Platforms, One IDE

Compuware integrates mainframe and non-mainframe development efforts using Topaz, its comprehensive suite of mainframe development and testing tools. Now any developer can work on any program, no matter how old or complex, regardless of their experience.

“Topaz allows you to do mainframe development in a modern UI. Java developers can use it to develop in other languages,” said Rizzo. “Topaz, along with Compuware’s other tools, also facilitates DevOps from dev to ops and production.”

Topaz also integrates with industry-standard DevOps tools including Jenkins and SonarSource’s SonarLint and SonarQube. With Topaz, developers can understand an application’s entire life cycle from one system to another, including systems of record. “Topaz users can look at the same information and see how each part of the application is working,” said Rizzo. “The third-party integration enables a consistent toolchain so developers can speak the same language and develop in a consistent way.”

Over the past two years, Compuware has added capabilities that help new application developers understand mainframe applications using graphical representations.

Automate Mainframe Unit Tests

Compuware’s newest addition to the Topaz suite, Topaz for Total Test, enables unit testing on COBOL applications. It supports products such as DB2 so tests can be run without a live system. In addition, applications can be tested automatically because Topaz for Total Test is integrated into a toolchain that includes Jenkins, ISPW and SonarQube. That same toolchain gives developers insight into how a change impacts an application.

“Unit testing is new to most mainframe developers,” said Rizzo. “Java developers have things like JUnit for unit testing. Most mainframe developers do full testing, regression testing, very extensive application testing, but not unit testing. You shouldn’t have to run large-scale tests when you make a small change.”

Manage Source Code

Some of the old, monolithic source code management systems used on mainframe applications don’t allow for Agile development or multiple streams of development. Compuware ISPW changes that, so changes can be made faster and with the confidence and quality customers require. ISPW integrates with Git and other tools so customers can keep their distributed code on distributed platforms and their mainframe code on the mainframe platform. That way, source code maintenance remains on the platform used to build and deploy the application.

ISPW Deploy integrates with XebiaLabs XL Release to automate, standardize and monitor code deployments across multiple platforms into multiple target environments. “You can deploy an entire application with code on both platforms in one deployment,” said Rizzo. “You’re able to do continuous integration and continuous deployment of applications knowing it’s good code, you understand the code, it’s been fully tested and you can predict the results in production.”

Topaz’s end-to-end and multiplatform capabilities allow developers to monitor applications across mainframe and non-mainframe platforms, and to set up KPIs and metrics using tools such as Sonar and Jenkins to maintain product quality and understand how applications are running in production.

Get More Insights, Live

Rizzo will be presenting at DOES San Francisco Nov. 15, 2017. His session, entitled “Creating a Modern DevOps Toolchain with Mainframe Development,” will provide practical insights about best practices and common challenges, including cultural issues. Learn more at
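The unit-testing shift Rizzo describes (test one small rule in isolation, with the live system stubbed out) can be sketched in Python. The business rule and the stubbed rate table below are invented for illustration and are not taken from Topaz for Total Test:

```python
# Hypothetical business rule of the kind a COBOL program might implement:
# simple monthly interest in integer cents, with the rate in basis points.
def monthly_interest(balance_cents, annual_rate_bp):
    return balance_cents * annual_rate_bp // (12 * 10_000)

def interest_for(account_id, balance_cents, rate_table):
    # In production the rate would come from a system like DB2; a unit test
    # passes a plain dict instead, so no live system is needed.
    return monthly_interest(balance_cents, rate_table[account_id])

# A unit test for a small change exercises only this rule, not a full
# regression suite: $1,200.00 at 6.00% annual is $6.00 per month.
stub_rates = {"ACCT-1": 600}
assert interest_for("ACCT-1", 120_000, stub_rates) == 600
```

The point is the scope: a change to one rule needs only this test, while the large-scale regression runs mainframe shops are used to remain a separate, less frequent activity.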



Tasktop Helps Enable Successful DevOps

Complexity stands in the way of effective DevOps, especially among large, regulated companies. Some of the world’s most successful organizations employ thousands of developers, in addition to testers, QA engineers, product owners and operations personnel. The amount of artifacts, such as features, requirements, test cases and defects in any one area, can be overwhelming, let alone all the artifacts that comprise a DevOps value stream. Tasktop addresses that very problem so companies can innovate more effectively.

“I talked to a group recently whose whole growth model is M&A, acquiring and divesting companies. When you’re trying to implement a DevOps strategy with that kind of complexity, it presents a set of problems that the textbooks don’t address,” said Nicole Bryan, Tasktop VP of product. “DevOps is about the full end-to-end value stream. If you don’t have that complex, full end-to-end value stream connected, you really don’t have DevOps.”

Most companies want to deliver software and get feedback faster, but when there are tool and process disconnects, it’s hard to succeed. Regulated or not, today’s companies need traceability throughout the entire SDLC to make sure products align with customers’ expectations, whether that’s a better user experience or regulatory compliance.

“If you don’t have traceability throughout all of the complexity it takes to build software, your business will absolutely be affected,” said Bryan. “In our world, this complexity boils down to the fact that building software requires many specialized roles, not just developers, which in turn translates to a tremendous number of tools, all of which operate on artifacts.”

Tasktop enables DevOps teams to create a network of tools and associated artifacts so they can build true DevOps value streams. Using the Tasktop Integration Hub, DevOps teams can create a value stream, complete with tools and artifacts, manage them and get visibility into them, all with point-and-click simplicity. There’s no limit to the number of tools or artifacts that can be connected, which is necessary to keep pace with the rapidly changing needs of customers and governmental entities. It’s also essential for companies that are actively evolving their businesses through mergers, acquisitions and divestitures.

“Once you realize you can’t have successful DevOps at scale without having integration and connectedness, you realize you need more visibility into it as well,” said Bryan. “Our tool allows you to have visibility into all of your integrations, manage them and change them over time.”

Understand the Value Stream

Understanding an organization’s value stream and its true end-to-end flow is paramount to enabling a successful DevOps journey. Comprehending the flow of artifacts starts with a request, whether it comes from users or the government. That request is combined with similar requests, all of which are collectively analyzed and turned into features. In highly regulated fields, those features must also include detailed, documented requirements.

“When you realize even the most complicated requirements are just a small piece of the end-to-end value stream, you realize the need for connectedness and traceability between all of the artifacts,” said Bryan. “You realize that features need to be broken down into epics and user stories because that’s how developers work. Now you’ve increased your network dramatically.”

Testing needs to be added because automated testing needs to connect to the originating stories, which are connected to the epics. The epics connect to the features, which connect to the original mandate that came from the business or government. Once testing has been connected, the next step is to connect security testing and compliance.

Bryan said Tasktop recently challenged customers at its first user conference to draw out their value stream on a whiteboard, which opened their minds to a new way of thinking. “When customers step back and begin to think about their value stream, they see they need to be thinking about DevOps and architecting from the perspective of full value stream management,” said Bryan. “We recommend starting with either alignment and connectedness between developers and testers, or between the ITSM support team and developers and the flow of defects between those teams, and then growing from there.”

Clearly the successful delivery of features depends on more than just code. DevOps teams need to think about the problem end-to-end, realizing that the core artifacts can make the difference between secure software and a security breach, or innovation versus business as usual.

A Layer to Connect Tools

Changing one’s view of DevOps is one thing. Building an infrastructure layer that connects all the tools and artifacts in a DevOps value stream is another. “Enterprises can’t do this themselves. You can’t just build this and it just works,” said Bryan. Tasktop has more than 500,000 API tests running continuously to ensure all the endpoints work as necessary. As recent history proves, one missed patch can literally change history. Learn more at
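The artifact network Bryan describes (tests linked to stories, stories to epics, epics to features, features to the originating request) can be modeled as simple parent links; a traceability query is then just a walk up the chain. The IDs and artifact kinds below are invented for illustration:

```python
# Sketch of end-to-end artifact traceability: every artifact records its
# parent, so any test can be traced back to the originating request.
artifacts = {}  # artifact id -> (kind, parent id or None)

def add(artifact_id, kind, parent=None):
    artifacts[artifact_id] = (kind, parent)

def trace_to_root(artifact_id):
    """Follow parent links from an artifact back to the originating request."""
    chain = []
    while artifact_id is not None:
        kind, parent = artifacts[artifact_id]
        chain.append((kind, artifact_id))
        artifact_id = parent
    return chain

# One hypothetical value stream, mirroring the artifact kinds in the article.
add("REQ-1", "request")
add("FEAT-9", "feature", parent="REQ-1")
add("EPIC-3", "epic", parent="FEAT-9")
add("STORY-12", "user story", parent="EPIC-3")
add("TEST-40", "automated test", parent="STORY-12")
```

Calling `trace_to_root("TEST-40")` walks from the automated test back through story, epic and feature to `REQ-1`, which is exactly the connectedness a regulated team needs when an auditor asks why a test exists.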




Micro Focus Eases DevOps Transitions

Implementing and scaling enterprise DevOps can be a lot more challenging than building it from scratch. While cloud-based startups use their DevOps capabilities to disrupt industries and change competitive landscapes, large enterprises struggle with DevOps transitions. Micro Focus (formerly HPE Software) helps enterprises ease DevOps adoption so they can meet their goals of delivering higher quality software faster.

“It’s easier to build an effective DevOps practice when you’re starting with a blank slate,” said Ashish Kuthiala, senior director at Micro Focus. “It’s harder for enterprises to change the way they operate so they can implement DevOps efficiently. To do that, they have to choose the right team members and the right toolsets, and stitch those toolsets together.”

Unlike startups, large enterprises have spent a lot of time and money on purpose-specific tools. They want to leverage those investments as customer requirements change, but their cultures and processes also need to change. “It’s difficult for enterprises to pivot when they have a legacy culture and considerable complexity built into their existing toolchains and processes,” said Kuthiala. “It’s an iterative journey that takes time to understand and implement.”


Transition Slowly

Too often, enterprise DevOps endeavors stumble because the team is moving so fast it doesn’t realize it lacks the right tools, processes and people to succeed. Despite pressure to adopt DevOps practices right now, successful transformations take time.

“Every change needs to be codified and version-controlled. Whether it’s a change in executable code, configuration, the infrastructure environment, data or monitoring, it needs to go through the CD pipeline in an automated way,” said Kuthiala. “Any change is fed into the CD pipeline and goes through the entire cycle until it’s production-ready. Then the business can make the decision of when to introduce it to production environments.”

Micro Focus built a set of automated gates so code automatically integrates with the main trunk of the branch, tests run automatically, the infrastructure is provisioned through code and more, all on a hybrid delivery platform. Like other businesses, Micro Focus uses open source, its own tools and other tools.

“It’s really important to build automated gates. That way, if something doesn’t work, it’s immediately pushed back to the source it came from and fixed before it can go any further,” said Kuthiala. “Technically, nothing defective ever reaches the production environment.”

To achieve that, enterprises have to standardize their development, testing and production environments so they can be made available on demand. Micro Focus containerized all of those environments, which has accelerated changes to its CD pipeline.

Transitioning to DevOps isn’t all about tools, though. Cultural adaptation is also important, but it doesn’t necessarily come easily. Micro Focus helps guide enterprises through their transitions, including the cultural transformations needed to enable successful DevOps and CD. It also assesses customers’ environments, tool portfolios and processes.

“It’s really important to develop a culture that allows people to experiment. You have to allow people to fail fast, learn from it and keep moving forward,” said Kuthiala.

Embracing an iterative approach to software delivery requires significant cultural change that can be difficult, if not downright painful. It’s important to incentivize teams working across different functions to work toward common goals. “There’s a lot of leadership change and encouragement that’s needed to make this work at an enterprise scale,” said Kuthiala. “Your first pipeline serves as a proof point, and then you have the best practices in place to build successive pipelines. That’s how we’re scaling this up for our customers and ourselves.”

Use Your Favorite Tools

Enterprises shouldn’t have to “rip and replace” tools simply because software delivery processes and practices are changing. At the same time, whatever technology is in place has to meet modern requirements.

“Enterprises have very structured teams, processes, culture and toolsets,” said Kuthiala. “They’re also using more open-source tools than ever before. We help them leverage what they have in fully automated CD pipelines.”

ALM Octane integrates with all the third-party open-source and commercial tools customers use today. Stitching all of that together is important, but building an effective CD pipeline also requires the right people and processes.

“We advise customers to take a very experimental, iterative approach so they can continuously improve their pipelines,” said Kuthiala. “Try things out, measure them, improve them and do it all very quickly. We can help you put toolchains together, assist you with the culture and process changes, and share best practices of how we do it and how our other customers do it.”

Businesses that want to build enterprise DevOps practices and implement them at a global scale choose Micro Focus to help build all or part of the CD pipeline. The company also has tools available to optimize every part of the CD pipeline. Test drive Micro Focus ALM Octane at en-us/signup/try/alm-octane.
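The automated gates Kuthiala describes can be sketched as an ordered pipeline in which the first failing gate pushes the change straight back to its source. The stage names below are illustrative, not Micro Focus product features:

```python
# Minimal sketch of a CD pipeline with automated gates: a change passes
# every gate in order, and the first failure rejects it back to its source,
# so nothing defective reaches the production-ready state.
GATES = ["integrate", "unit_tests", "provision_env", "system_tests"]

def run_pipeline(change, gate_results):
    """gate_results maps gate name -> bool outcome for this change."""
    for gate in GATES:
        if not gate_results.get(gate, False):
            return f"{change}: rejected at {gate}"  # fed back to the source
    # Production-ready; the business decides when to actually release it.
    return f"{change}: production-ready"
```

For example, a change whose unit tests fail never reaches environment provisioning: `run_pipeline("chg-102", {"integrate": True, "unit_tests": False})` reports a rejection at `unit_tests`, matching the principle that defects are fixed before they can go any further.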



Improve Software Releases with JFrog

As software release cycles accelerate, DevOps teams lose insight into what they actually released. The higher levels of automation necessary to speed software delivery also accelerate the delivery of bugs. With JFrog, DevOps teams get the insight and control needed to improve release management effectiveness.

“Release cycles can happen thousands of times an hour and there’s no human intervention involved. Still, you need the ability to manage the artifacts and software releases, throw away any software releases that are not good and make sure the good releases are kept. You also need to ensure that every piece of software that lands in production is reproducible,” said Yoav Landman, CTO and co-founder of JFrog. “All of that can be very painful, and you can’t solve the problem without the proper tooling.”

DevOps teams using JFrog products benefit from the fine-grained insight they provide. “We take care of the binaries,” said Landman. “Developers write source code, but it evolves and it’s subject to change by multiple people. What ends up running in your runtime, in your production server, on your iPhone and kiosk in a shopping center, anywhere software is being deployed, is zeros and ones that have to be taken care of in a very rigid way.”

Leading companies across industries rely on JFrog to improve the quality and reliability of releases. JFrog’s customer base already numbers 5,000, and more organizations are adopting JFrog to improve the output of their DevOps teams. “Companies use JFrog to host their software releases, manage them, curate them, ensure they don’t have any security issues, and distribute the software releases wherever they need to run,” said Landman. “We fill the gap between Git and Kubernetes.”

“People call us the ‘database of DevOps’ because we’re basically the storage and the single source of truth for all your software releases and fast releases,” said Landman. JFrog ensures the reliability of the DevOps pipeline so software releases can be managed more effectively. The end result is faster delivery of higher quality software and a means to ensure continuous improvement.

Get Universal Visibility and Control

DevOps involves a lot of modular ecosystems that have changed the way software is developed, packaged and shipped. Especially with the rise of microservices and containers, there’s even more emphasis on reusability across different environments and languages.

“There are dozens of standards and dozens of packaging types and APIs that are expected to be supported by tools,” said Landman. “If you’re not using JFrog products, you’re probably installing a lot of different tools, configuring them to work with your identity provisioning and your organization, and devising different methodologies to work with each tool, not just in terms of security but how you manage cleanup and so forth. Our tools provide a universal solution that supports all the different ecosystems in one set of tools.”

For example, JFrog’s flagship product, Artifactory, is a universal artifact repository manager that fully supports software packages created with any technology or language. It’s the only enterprise-ready repository manager capable of supporting secure, clustered, high-availability Docker registries, Landman said. Importantly, Artifactory integrates with all major continuous integration, continuous delivery and DevOps tools, providing an end-to-end automated solution for managing software artifacts from development to production.

Similarly, Bintray, JFrog’s distribution and IoT gateway system, natively supports all major package formats so DevOps teams can work seamlessly with industry-standard development, build and deployment tools. It supports massive scalability and worldwide coverage. “DevOps teams want to be able to run their software on-premises or in any public, private or hybrid cloud environment,” said Landman. “We support all of that and provide visibility across all of it.”

Release Fast or Die

Just about every company is a software company now, and if it’s not releasing quality software as fast as its competitors, the results can be fatal. JFrog enables unprecedented release control and visibility while hiding all of the complexity. Conversely, DevOps teams that lack software release management capabilities lack full visibility between Git and Kubernetes. In addition, since all types of software are being commoditized, competitiveness depends on how quickly companies can address their customers’ needs and how fast they can make changes.

“DevOps teams are faced with the liquidation of software,” said Landman. “Software is being released at such an amazing rate almost nobody is checking release notes, and they don’t really know by name what version of an application they’re running, as long as they can trust it ‘to deliver the goods.’ JFrog gives you the visibility and control you need to deliver high-quality software and bug fixes in an automated fashion with confidence.” Learn more at
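Deploying to an Artifactory-style repository is, at bottom, an HTTP PUT of the binary to a versioned repository path, typically with a checksum header so the server can verify the upload. This sketch only builds the request rather than sending it; the host, repository name and Maven-style path layout are hypothetical examples:

```python
# Sketch of publishing a versioned build artifact to an artifact repository
# over its REST API. No network I/O: we just construct the PUT target and
# a SHA-1 checksum header the server can verify the payload against.
import hashlib

def deploy_request(base_url, repo, group, name, version, payload):
    """Build the PUT URL and headers for one artifact upload."""
    path = f"{group.replace('.', '/')}/{name}/{version}/{name}-{version}.jar"
    url = f"{base_url}/{repo}/{path}"
    headers = {"X-Checksum-Sha1": hashlib.sha1(payload).hexdigest()}
    return url, headers

url, headers = deploy_request(
    "https://artifacts.example.com/artifactory",  # hypothetical host
    "libs-release-local", "com.example", "app", "1.4.2", b"hello",
)
```

Because the path encodes group, name and version, the repository becomes the single source of truth Landman describes: any build that lands in production can be located and reproduced from its exact coordinates.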




CA Unlocks the Power of DevOps ifferent companies face different challenges as they adopt DevOps practices and endeavor to improve them. Some companies have trouble acquiring the tools they need to be successful; others struggle with the necessary cultural adjustments. CA Technologies provides the widest array of highly automated solutions necessary to build and optimize integrated workflows across application development, delivery and operations. It also helps customers address the cultural and procedural challenges they face with a variety of services including coaching, benchmarking and current-state workshops. “One challenge is the lack of integration


accelerated its loan funding 500% and reduced regression testing by 93%.

With automation and virtualization, testing can be done continuously as code is planned, developed and operating. The results are more comprehensive test cases and confidence in delivery speed and quality, Ravichandran said.

ly true as businesses attempt to adopt mobile-first strategies and other accelerated development approaches. By 2019, Gartner predicts that 70% of enterprise DevOps initiatives will realize the importance of implementing security into their DevOps practices. Similarly, a recent study conducted by CA’s Veracode found that 62% of IT pros felt app security was very important to their development team. The same study also revealed that 43% of IT pros said fixing flaws during development is easier than patching. “DevSecOps is gaining greater visibility because companies are realizing that protecting their apps after code is written is a reactive approach that is simply too little, too late,” said Ravichandran. Put simply, security is a critical aspect of quality throughout the software development lifecycle. The only way to ensure app security is to automatically scan code for vulnerabilities starting from development, through production and continuing through deployment. It’s the only way to protect users from bad experiences and businesses from making data breach headlines.

The Intersection of DevOps and Cloud

The intersection of DevOps and cloud hasn't received as much attention as it deserves. While both are important technological and cultural shifts, little has been done to explore the link between the two. "DevOps and Cloud have a powerful connection that organizations should consider more closely as they strive to deliver software faster and with better quality," said Ravichandran. CA recently commissioned Freeform Dynamics to explore this topic. Interestingly, organizations with a high level of commitment to DevOps and cloud saw an 81% increase in overall software delivery performance. The same organizations were able to deliver software 90% faster, with a 69% increase in user experience.

Is Your DevOps Endeavor Successful?

Ravichandran said she's often asked how to measure the success of a DevOps implementation or practice. Quite often, DevOps teams are established and running before the thought even occurs to them. A better approach is to establish success metrics before beginning or extending a DevOps practice. Since each company is unique, the metrics differ from organization to organization. "DevOps has a wide range of benefits," said Ravichandran. "I've seen organizations often have their own primary motivation—whether it's costs, velocity, competition or something else. It's best to focus on metrics that are aligned to your organization's primary objectives." CA helps organizations identify and track the success metrics that are unique to the business. Its tools help ensure successful DevOps implementations. Learn more at n

Test Continuously

Continuous testing requires DevOps teams to integrate testing processes throughout the development life cycle. A recent study by Computing Research showed that 63% of DevOps practitioners consider QA testing the biggest bottleneck. One reason for this is that testing hasn't garnered the attention that other tools within the DevOps ecosystem have, which means many teams are still conducting tests manually, late in the process, slowing software delivery and negatively impacting quality.

across the software development life cycle toolchain," said Aruna Ravichandran, vice president, DevOps products and solution marketing at CA Technologies. "Successful DevOps requires a level of integration throughout the software development life cycle—starting with the planning tools, through development and testing and all the way to operational tools." CA's offerings span the entire application lifecycle, including planning, development, testing, release and operations. With CA, organizations get the full range of capabilities needed to quickly deliver secure, quality applications that provide unparalleled customer experiences. Its products provide open integrations with hundreds of third-party and open-source software and systems. That way, customers can choose what works best for them, whether it's an end-to-end solution or integration into their existing technology ecosystem. "As a leading provider of DevOps tools and services, CA Technologies helps you build, monitor and manage better apps faster and at lower costs," said Ravichandran. For example, United Airlines saved $500,000 in testing costs while increasing test coverage by 85%. GM Financial

DevSecOps Is Now Mandatory

Developers are under constant pressure to deliver software faster.


Continuous Testing

Enable continuous testing across your software delivery lifecycle. Adopt next-generation testing practices to test early, often, automatically, and continuously.

Only CA offers a continuous testing strategy that's automated and built upon end-to-end integrations and open source. Enable your DevOps and continuous delivery practices today.






n Atlassian:

Atlassian offers cloud and on-premises versions of continuous delivery tools. Bamboo is Atlassian’s on-premises option with first-class support for the “delivery” aspect of Continuous Delivery, tying automated builds, tests and releases together in a single workflow. For cloud customers, Bitbucket Pipelines offers a modern Continuous Delivery service that’s built right into Atlassian’s version control system, Bitbucket Cloud.

n Appvance: The Appvance Unified Test Platform (UTP) is designed to make Continuous Delivery and DevOps faster, cheaper and better. As the first unified test automation platform, it lets you create tests, build scenarios, run tests and analyze results, in 24 languages or even codeless.

n CA Technologies: CA Technologies DevOps solutions automate the entire application life cycle — from testing and release through management and monitoring. The CA Service Virtualization, CA Agile Requirements Designer, CA Test Data Manager and CA Release Automation solutions ensure rapid delivery of code with transparency. The CA Unified Infrastructure Management, CA Application Performance Management and CA Mobile App Analytics solutions empower organizations to monitor applications and end-user experience to reduce complexity and drive constant improvement.

n Chef: Chef Automate, the leader in Continuous Automation, provides a platform that enables you to build, deploy and manage your infrastructure and applications collaboratively. Chef Automate works with Chef's three open source projects — Chef for infrastructure automation, Habitat for application automation, and InSpec for compliance automation — as well as associated tools.

n CloudBees: CloudBees is the hub of enterprise Jenkins and DevOps, providing companies with smarter solutions for automating software development and delivery. CloudBees starts with Jenkins, the most trusted and widely adopted continuous delivery platform, and adds enterprise-grade security, scalability, manageability and expert-level support.

n CollabNet: CollabNet helps enterprises and government organizations develop and deliver high-quality software at speed. CollabNet is a Best in Show winner in the application lifecycle management and development tools category of the SD Times 100 for 14 consecutive years. CollabNet offers innovative solutions, consulting, and Agile training services.

n Compuware: Compuware is changing the way developers develop. Our products fit into a unified DevOps toolchain enabling cross-platform teams to manage mainframe applications, data and operations with one process, one culture and with leading tools of choice. With a mainstreamed mainframe, the mainframe is just another platform, and any developer can build, analyze, test, deploy and manage COBOL applications with agility, efficiency and precision.

n Datical: Datical solutions deliver the database release automation capabilities IT teams need to bring applications to market faster while eliminating the security vulnerabilities, costly errors and downtime often associated with today’s application release process.

n Dynatrace: Dynatrace provides the industry’s only AI-powered application monitoring. Bridging the gap between enterprise and cloud, Dynatrace helps dev, test, operation and business teams light up applications from the core with deep insights and actionable data. We help companies mature existing enterprise processes from CI to CD to DevOps, and bridge the gap from DevOps to hybrid-to-native NoOps.

n Electric Cloud: Electric Cloud is a leader in enterprise Continuous Delivery and DevOps automation, helping organizations deliver better software faster by automating and accelerating build, test and deployment processes at scale. The ElectricFlow DevOps Release Automation Platform allows teams of all sizes to automate deployments and coordinate releases.

n GitLab: Designed to provide a seamless development process, GitLab’s built-in Continuous Integration and Continuous Deployment enables developers to easily monitor the progress of tests and build pipelines, then deploy with the confidence that their code has been tested across multiple environments. Developers are able to develop and deploy rapidly and reliably with minimal human intervention to meet enterprise demands.

n Micro Focus: The company's DevOps services and solutions focus on the people, process and tool-chain aspects of adopting and implementing DevOps at large-scale enterprises. Continuous Delivery and Deployment are essential elements of its DevOps solutions, enabling Continuous Assessment of applications throughout the software delivery cycle to deliver rapid and frequent application feedback to teams. Moreover, the DevOps solution helps IT operations support rapid application delivery (without any downtime) by supporting a Continuous Operations model.

n JFrog: JFrog's four products — JFrog Artifactory, the Universal Artifact Repository; JFrog Bintray, the Universal Distribution Platform; JFrog Mission Control, for Universal DevOps Flow Management; and JFrog Xray, the Universal Component Analyzer — are used by Dev and DevOps engineers worldwide and are available as open-source, on-premises and SaaS cloud solutions. The company recently acquired CloudMunch, a universal DevOps intelligence platform, to provide DevOps BI and analytics and help drive DevOps forward.

n JetBrains: TeamCity is a Continuous Integration and Delivery server from JetBrains. It takes moments to set up, shows your build results on the fly, and works out of the box. TeamCity will make sure your software gets built, tested, and deployed. TeamCity integrates with all major development frameworks, version-control systems, issue trackers, IDEs, and cloud services, providing teams with an exceptional experience of a well-built intelligent tool. With a fully functional free version available, TeamCity is a great fit for teams of all sizes.

n Microsoft: Visual Studio Team Services, Microsoft's cloud-hosted DevOps service, offers Git repositories; agile planning tools; complete build automation for Windows, Linux and Mac; cloud load testing; Continuous Integration and Continuous Deployment to Windows, Linux and Microsoft Azure; application analytics; and integration with third-party DevOps tools. Visual Studio Team Services supports any development language, works seamlessly with Docker-based containers, and supports GVFS, enabling massive scale for very large Git repositories. It also integrates with Visual Studio and other popular code editors.


n New Relic: New Relic is a software analytics company that makes sense of billions of data points and millions of applications in real time. Its comprehensive SaaS-based solution provides one powerful interface for Web and native mobile applications, and it consolidates the performance-monitoring data for any chosen technology in your environment. It offers code-level visibility for applications in production across six languages (Java, .NET, Ruby, Python, PHP and Node.js), and more than 60 frameworks are supported.

n Neotys: Neotys is the leading innovator in Continuous Performance Validation for Web and mobile applications. Neotys load testing (NeoLoad) and performance-monitoring (NeoSense) products enable teams to produce faster applications, deliver new features and enhancements in less time, and simplify interactions across Dev, QA, Ops and business stakeholders. Neotys has helped more than 1,600 customers test, monitor and improve performance at every stage of the application development life cycle, from development to production, leveraging its automated and collaborative tooling.

n OpenMake: OpenMake builds scalable Agile DevOps solutions to help solve continuous delivery problems. DeployHub Pro tackles traditional software deployment challenges with safe, agentless software release automation to help users realize the full benefits of Agile DevOps and CD. Meister build automation accelerates compilation of binaries to match the iterative and adaptive methods of Agile DevOps.

n Orasi: Orasi is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, Continuous Delivery, monitoring, and mobile testing technology. The company is laser-focused on helping customers deliver high-quality applications, no matter the type of application they're working on and no matter the development methods or delivery processes they've adopted. In addition to its end-to-end software testing, Orasi provides professional services around testing, processes and practices, as well as software quality-assurance tools and solutions to support those practices.

n Puppet: Puppet provides the leading IT automation platform to deliver and operate modern software. With Puppet, organizations know exactly what's happening across all of their software, and get the automation needed to drive changes with confidence. More than 75% of the Fortune 100 rely on Puppet to adopt DevOps practices, move to the cloud, ensure security and compliance, and deliver better software faster.

n Rogue Wave Software: Rogue Wave helps thousands of global enterprise customers tackle the hardest and most complex issues in building, connecting, and securing applications. Since 1989, our platforms, tools, components, and support have been used across financial services, technology, healthcare, government, entertainment, and manufacturing to deliver value and reduce risk. From API management, web and mobile, embeddable analytics, static and dynamic analysis to open source support, we have the software essentials to innovate with confidence.

n Sauce Labs: Sauce Labs provides the world's largest cloud-based platform for automated testing of Web and mobile applications. Optimized for use in CI and CD environments, and built with an emphasis on security, reliability and scalability, it lets users run tests written in any language or framework using Selenium or Appium, both widely adopted open-source standards for automating browser and mobile application functionality.


n SOASTA: SOASTA is the leader in performance analytics. The SOASTA Digital Performance Management Platform enables digital business owners to gain unprecedented, end-to-end performance insights into their real user experiences on mobile and web devices, providing the intelligence needed to continuously measure, optimize and test in production, in real time and at scale.

n Synopsys: Through its Software Integrity Platform, Synopsys provides a comprehensive suite of best-in-class software testing solutions for rapidly finding and fixing critical security vulnerabilities, quality defects, and compliance issues throughout the life cycle. Leveraging automation and integrations with popular development tools, Synopsys' Software Integrity Platform empowers customers to innovate while driving down risk, costs, and time to market. Solutions include static analysis, software composition analysis, protocol fuzz testing, and interactive application security testing for Web apps.

n Tasktop: Transforming the way software is built and delivered, Tasktop's unique model-based integration paradigm unifies fragmented best-of-breed tools and automates the flow of project-critical information across dozens of tools, hundreds of projects and thousands of practitioners. The ultimate collaboration solution for DevOps specialists and all other teams in the software lifecycle, Tasktop's pioneering Value Stream Integration technology provides organizations with unprecedented visibility and traceability into their value stream. Specialists are empowered, unnecessary waste is eradicated, team effectiveness is enhanced, and DevOps and Agile initiatives can be seamlessly scaled across organizations to ensure quality software is in production and delivering customer value at all times.

n TechExcel: DevSuite helps organizations manage and standardize development and releases via agile development methods and complete traceability. We understand the importance of rapid deployment and are focused on helping companies make the transition over to DevOps. To do this, we have partnered with many automation tools for testing and Continuous Integration, such as Ranorex and Jenkins. Right out of the box, DevSuite will include these technologies.

n Tricentis: Tricentis Tosca is a Continuous Testing platform that accelerates software testing to keep pace with Agile and DevOps. With the industry's most innovative functional testing technologies, Tricentis Tosca breaks through the barriers experienced with conventional software testing tools. Using Tricentis Tosca, enterprise teams achieve unprecedented test automation rates (90%+) — enabling them to deliver the fast feedback required for Agile and DevOps.

n XebiaLabs: XebiaLabs develops enterprise-scale Continuous Delivery and DevOps software, providing companies with the visibility, automation and control they need to deliver software faster and with less risk. Global market leaders rely on XebiaLabs to meet the increasing demand for accelerated and more reliable software releases. n



What's New in Software Licensing?

Subscription- and usage-based licensing models have largely replaced perpetual licenses, but more innovation is on the way


Subscription- and usage-based pricing are all the rage these days. So how are companies becoming even more creative in their software pricing and licensing? Although a lot of innovation is starting to happen, much more appears to be in store over the next couple of years. ISVs such as Siemens are already charging on the basis of features used. Amazon AWS charges for data storage and transfer on the basis of gigabytes. Many other models are emerging too, ranging from outcome-based pricing to the binding of software to virtual machines. Upheavals are even spilling over to the open-source community.

Goodbye to perpetual licenses

One model we're already seeing less of, though, is the perpetual software license, once the dominant license in the software industry. In a recent study by Constellation Research, published by Flexera, less than half of software vendors (43 percent) said that perpetual licenses currently account for more than half of their revenues. Also, over the next two years, ISVs expect to change their licensing models to generate more revenue, grow more competitive, and improve their relationships with customers, with 43 percent citing moves to the cloud, 46 percent mentioning SaaS, 47 percent naming virtualization, and 55 percent pointing to mobile technologies. Many enterprise customers today are deploying usage analytics and asset management tools to find out which features they're actually using, to what extent, and in which operating environments and geographic locations. However, shelfware isn't going away yet, nor is licensing management.

BY JACQUELINE EMIGH

Microsoft, IBM, and hordes of other ISVs continue to use licensing enforcement tools to fend off software piracy and other violations. Enterprises use licensing management software to keep track of their licenses and avoid infractions, said Colin Earl, CEO at Agiloft, makers of asset management software. As some see it, though, newer subscription-based pricing models really aren't more innovative than perpetual licenses or multi-year enterprise licenses. All of these pricing models are still based on increments of time, suggested Dave Abramson, CTO at, a no-code solutions company. Subscription-based models let customers pay on a monthly, weekly, daily or quarterly basis, for example, instead of making multiyear investments all at once. Yet it's crystal clear that models like Microsoft's Office 365 still involve contractual agreements, and customers can't just pay for those features — or even for single applications — that they want. A small business interested only in Microsoft Word, and not in Excel or PowerPoint, might be better off purchasing Word as shelfware than investing in Office 365 online suites, to the tune of at least $10.95 monthly for a 12-month contract. Subscription-based pricing isn't necessarily best suited to all vendors, either. In implementing Revulytics' usage analytics tools, for example, one software company found that customers tended to use the product intensively for two or three months in connection with a specific project, and then to stop using it entirely until another project came along. The company decided not to

move to subscription-based pricing, concluding that users would unsubscribe after each project ended.

Usage-based licensing under way

Usage-based licensing models can be implemented in SaaS and pure-play cloud environments as well as on-premises and with embedded software. Often, customers pay just for the features they need. For example, Siemens Energy Automation worked with monetization specialist Gemalto to arrive at a new licensing model after repeated requests from various customers for the ability to turn certain features in its software on or off. Other new approaches are based on a pay-per-use model. For instance, if a given application is viewed as valuable by a customer but is only used occasionally, the enterprise might agree to pay a set fee for using the app 10 times, noted Dave DeMilo, principal consultant, Gemalto Software License Professional Services. Depending on how the ISV configures the pricing model, the vendor might shut off access to the app after the first 10 uses until payment is re-upped. Alternatively, the customer might need to pay a higher, non-discounted fee after the first 10 uses are done. Embedded software that comes with medical equipment such as MRI machines can also be charged on a pay-per-use basis, DeMilo noted.

Nicole Segerer, team manager in product management at Flexera, foresees the prospect of usage-based pricing in still-emerging Internet of Things (IoT) apps, with fees per transaction low enough to qualify as micropayments. Also known as utilization- or consumption-based pricing, usage-based pricing typically involves some type of metering. Vendors can also track and license software based on criteria such as the number of API calls made, the amount of storage and transfer used, or the number of emails sent, Abramson explained. As part of its multifaceted licensing model, Amazon AWS charges on a per-gigabyte basis for data storage and transfer.

Unearth, a specialist in software for the construction industry, is experimenting with two pricing models, one of them usage-based. "One of the initial pricing plans we're testing involves charging by project layer. Layers are a unique feature of our software, and depending on the size of a project, a customer will have more or less layers to place on top of their project maps," said Brian Saab, CEO. "However, we're also testing a simpler plan: charging a low introductory rate to help new customers get going fast on a pilot project so they can quickly see our value. As they become convinced of our capabilities and ROI we grow with that account by offering more usage, namely around adding more of their people (users) to the platform."

In Saab's view, SaaS vendors have a choice of two main approaches to monetization. One is to adopt premium pricing for "skimming off the top," identifying the "cream of the crop": customers who will adopt new technology without price sensitivity. The other is to use value pricing for high market penetration. "With market penetration, you find a price that gets lots of adoption fast, and then you'll typically raise pricing down the road once you've proven your value," he advised. Outcome-based pricing, on the other hand, is still in its infancy. This model assumes that customers will pay a premium for technology when it delivers something they really want. One component that sometimes shows up in licensing contracts is that the provider will be paid


extra for on-time delivery if the customer has a mission-critical deadline. Beyond licensing models for cloud environments and SaaS apps, vendors are also planning licensing changes in the areas of virtualization and mobile technology, according to the study by Constellation Research. Licensing issues, though, can get confusing in virtualized environments. In order to protect vendors’ software from copying and duplication, Gemalto has developed a licensing model that binds the software to a single virtual machine.
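Two of the metering mechanisms described above, pay-per-use with a prepaid allotment and tiered per-gigabyte billing, are easy to model in code. A sketch with invented rates and policy names (not any vendor's actual pricing):

```python
# Hypothetical models of two metering schemes: (1) pay-per-use with a
# prepaid allotment, after which the vendor either blocks access or
# bills at a higher rate, and (2) tiered per-GB billing. All numbers
# here are invented for illustration.
from typing import Optional

class UsageMeter:
    def __init__(self, prepaid_uses: int, overage_rate: Optional[float]):
        self.remaining = prepaid_uses
        self.overage_rate = overage_rate  # None = shut off until re-upped
        self.overage_charges = 0.0

    def use(self) -> bool:
        """Record one use; return False if access is denied."""
        if self.remaining > 0:
            self.remaining -= 1
            return True
        if self.overage_rate is None:
            return False              # blocked until payment is re-upped
        self.overage_charges += self.overage_rate
        return True

# Tiered per-GB billing: each band of usage has its own (invented) rate.
TIERS = [(50, 0.09), (450, 0.085), (float("inf"), 0.07)]

def bill(gb: float) -> float:
    total, remaining = 0.0, gb
    for band, rate in TIERS:
        used = min(remaining, band)
        total += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return round(total, 2)

strict = UsageMeter(prepaid_uses=2, overage_rate=None)
results = [strict.use() for _ in range(3)]  # third use is blocked
cost = bill(100)  # 50 GB at $0.09 plus 50 GB at $0.085
```

The `overage_rate=None` branch corresponds to the "shut off until payment is re-upped" policy DeMilo describes; a numeric rate corresponds to the higher, non-discounted fee.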

Mobile developers already monetizing

Meanwhile, mobile ISVs have been arriving at their own pricing models through licenses posted on Google Play and Apple's iTunes. Google Play, for example, offers a licensing service that allows the developer to enforce licensing policies for the app. The service is mainly intended for developers of paid apps who want to make sure that a current user has paid. If necessary, the service can also apply custom constraints based on the developer's licensing status, such as restricting the use of






an app to a specific device. Typically in these environments, though, mobile developers only charge a few dollars in licensing fees. Many, especially in the gaming industry, are monetizing their software through in-app purchases, and it will be interesting to see how this pricing model might apply to enterprise software. Mobile app developers are also pursuing monetization by starting to "white label" their software, licensing their underlying technologies to other vendors, who may tweak the apps and perhaps sell ads while adhering to the developer's licensing requirements.

Open-source licensing is intricate

Open-source software actually has a licensing environment that's better established at this point than that of commercial software. The Open Source Initiative (OSI) recognizes 10 types of open-source software licenses. Depending on the type of license, in many cases commercial developers pass along the source code for free while charging for extra features, add-ons, and/or maintenance and support services, Abramson observed. Consequently, some commercial software used in enterprise environments contains open-source components. In response, Flexera released an asset management solution specifically designed to help customers determine whether they are using any open-source software and, if so, whether they are in licensing compliance, said Ed Rossi, VP of product management at Flexera.

Licensing management software helps avoid infractions, says Agiloft's Earl.

CockroachDB code is open source but the CCL license doesn't allow free redistribution.

Shakeups in commercial software licensing, however, could be spilling over into open-source development. A company called Cockroach Labs has developed a database that competes with the likes of Oracle, Postgres, and SQL Server, along with an accompanying license known as the Cockroach Community License (CCL). Customers are asked to follow an "honor system": in registering to use the software, they agree to abide by the CCL. "The CCL is very similar to the Apache license (APL2). In fact, the CCL was derived directly from the APL2 in order to make understanding it straightforward. The primary difference is that the CCL withholds the free redistribution right, meaning that features licensed with the CCL can only be resold or given away by Cockroach Labs. Even though we provide the full source code for CCL features, this technically means that they are not open source, according to the official definition," said Spencer Kimball, CEO of Cockroach Labs. The CCL allows Cockroach to sell an enterprise version of CockroachDB while still providing the source code for users and contributors to learn from, augment, or debug as necessary, according to Kimball. "What's particularly interesting about this model is how it acts to protect our business from competitors who might otherwise resell our product without any

revenue-sharing agreement. Because all of the source code, both APL2 and CCL, is in the same GitHub repository, we only need to maintain and test one product. If a competitor intends to resell CockroachDB with their own versions of our enterprise features, they must fork our code base to reimplement the enterprise features, and then must maintain that fork indefinitely — an unenviable task." So far, response to the "honor system" has been positive, among large enterprises and startups alike, according to Kimball. "CockroachDB is an OLTP database meant for mission-critical workloads, which means that the level of support provided by the enterprise license is a must for any company with valuable data in production. What's been surprising is that even startups have demonstrated a high level of regard for the terms of the license, in some cases bringing up concerns over the potential costs as they scale. The good news is we're creating a new free tier for startups that provides access to all enterprise features, but with a more basic support model," he said. There are few things for certain in the land of software licensing right now, but one of them is that, after years of living mostly with the perpetual software license, we're heading for a lot of changes. Which monetization efforts will succeed best will play out in time. z





Optimize Your Product Development with Software Usage Analytics

First software usage analytics tool designed for distributed C/C++, .NET, Obj-C and native Java applications on Windows, Mac, and Linux

Anonymously track and analyze end user activity

Get granular insight into user environments

Prioritize development based on actual feature usage

Want to learn more? Request a demo today, call +1.781.398.3400

Free Trial

Get started with out-of-the-box reporting in just 30 minutes.

© 2017 Revulytics, Inc.




Account-based intelligence

Revulytics' analytics show ISVs how customers use their software

BY JACQUELINE EMIGH

As adoption of consumption-based pricing keeps gaining ground, many ISVs and software customers are using Revulytics’ account-based intelligence to gain insights into product use throughout the lifecycle, said Vic DeMarines, VP of product strategy. As Revulytics defines the term, account-based intelligence refers to mapping of aggregated software data to customers’ processes to see how ISVs’ software is actually being used according to criteria such as specific features, OS and hardware environment, and application versions. While Revulytics offers tools for usage analysis, too, more and more customers are combining tools in both categories, DeMarines said. Initially, Revulytics’ account-based intelligence tools appeared most often in SaaS and pure-play cloud environments, but the embedded technology is now picking up momentum in shelfware and hybrid deployments, too. According to analyst group IDC, by 2018, almost 50 percent of organizations will have tools and processes in place for metering their on-premises software.

Used and unused features

If Revulytics' analytics demonstrate that customers are not using a particular feature, the vendor can work with customers to determine the reasons why, DeMarines said. "If for a certain percentage of customers the feature doesn't improve the value of the application, the vendor might use this feedback to eliminate the feature," he elaborated. Alternatively, the ISV might conclude that the feature is so unnoticeable

in the product that many users don't even realize it's there. For instance, if users of a packaged security solution are making heavy use of the antivirus software but little use of the firewall included in the package, the ISV might decide to redesign the UI, DeMarines pointed out. Recognizing that a feature needs to be made more visible can give the vendor a competitive edge, he said. Customers might be less likely to buy a rival firewall solution, for example, if they


realize they already have a great firewall at hand from a trusted vendor. At the same time, account-based analytics can create a positive customer experience by giving users opportunities to understand which features are used most and to dialogue with vendors about the product. For customers already using Revulytics' licensing compliance tools, the addition of account-based analytics can make visits by vendors or resellers less likely to be viewed as simply audits or sales calls and more likely to be seen as ways of collaboratively sharing information. "A customer might say, 'Why am I paying for a feature I'm not using?'" DeMarines noted. The ISV or partner might then negotiate a new consumption-based pricing model for the customer. At the same time, the vendor can also draw attention to the features the customer is using most, helping the customer to appreciate the value in the product.
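The used/unused-feature analysis DeMarines describes amounts to aggregating per-account usage events into adoption rates and flagging the outliers. A sketch, with the event shape, account names and threshold all invented for illustration:

```python
# Hypothetical feature-adoption report: given (account, feature) usage
# events, compute the share of accounts using each feature and flag
# low-adoption candidates for removal or a UI redesign. The event
# shape and the 50% threshold are illustrative assumptions.
from collections import defaultdict

def adoption(events, total_accounts, flag_below=0.5):
    users = defaultdict(set)
    for account, feature in events:
        users[feature].add(account)      # count distinct accounts
    rates = {f: len(a) / total_accounts for f, a in users.items()}
    flagged = sorted(f for f, r in rates.items() if r < flag_below)
    return rates, flagged

events = [("acme", "antivirus"), ("globex", "antivirus"),
          ("initech", "antivirus"), ("acme", "firewall")]
rates, flagged = adoption(events, total_accounts=4)
# antivirus reaches 3 of 4 accounts; firewall only 1, so it is flagged
```

Whether a flagged feature should be cut or merely made more visible in the UI is exactly the judgment call the article describes; the report only identifies the candidates.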

Findings from the licensing compliance tools can be part of the discussion, too. These tools can help to protect customers from the legal and financial risks of licensing violations as well as security risks related to using software that’s been tampered with, said DeMarines. Like Revulytics’ usage analysis tools, the licensing compliance module provides both location- and account-based information. In viewing the stats, the ISV and customers might note that most instances of piracy are happening in company subsidiaries in certain parts of the world. The vendor might then develop special editions of the product with anti-piracy features for those international markets. ISVs often think that they can build account-based intelligence into their own products. “But in reality, if this happens, it’s usually just an afterthought, because vendors can’t spend a lot of time on this feature. Also, customer support for this built-in feature tends to be minimal,” according to DeMarines.

Communicating and displaying data
Revulytics’ tools also include contextual in-app messaging, a Web reporting API, and the capability to conduct customer surveys. After noticing that a new performance evaluation workflow received widespread use right away, an HR software firm took a customer survey to help figure out how to improve the feature for inclusion in future releases. ISVs can use the Web reporting API to build data visualizations for display on customer portals. “By allowing customers to see what they are using and even compare their use to the general community, this provides a gamification experience to increase adoption,” he said.




SD Times

November 2017

Securing Microservices: The API gateway, authentication & authorization
BY MOSTAFA SIRAJ

FIGURE 1: The structure of a typical microservice.


Mostafa Siraj, senior security adviser at WhiteHat Security, is a globally recognized Information Security expert specializing in Application Security.

Recently I was building a 1,000-piece puzzle with my girlfriend. Experienced puzzle builders have some techniques to finish the task successfully. They follow what we call in algorithms “Divide and Conquer.” For example, in the puzzle I built with my girlfriend, we started by building the frame, then we gathered the pieces of trees, ground, castle and sky separately. We built each block separately, then at the end we collected all the bigger blocks together to have our full puzzle. Possibly, if we hadn’t followed this approach and had tried to build line by line, we could have done it, but it would have taken a lot more effort and energy. We also wouldn’t have benefited from being a team, because we would have been looking at the same line instead of working collaboratively and efficiently.

This is the same for software! Building software is quite complex, and that’s why software architects used to design the solution as separate modules, built by separate teams, then integrated into a full solution. However, a lot of teams were still struggling with this approach. A single error in a small module could bring the entire solution down. Also, any update to any of the components meant that you would have to build the entire solution again. If a certain module has performance issues, you’d likely need to upgrade all of the servers supporting your application.

There is no widely accepted definition for microservices, but there is consensus about the characteristics. MICROSERVICES:
• are usually autonomously developed
• are independently deployable
• use messaging to communicate
• each deliver a certain business capability.

For these reasons (and many others), software firms such as Netflix, Google, and Amazon have started to



adopt a “Microservices Architecture.” Fig. 1 illustrates the structure of a typical microservice. You’ll notice that some microservices will have their own data stores, while others only process information. All of the microservices communicate through messaging. While a microservices architecture makes building software easier, managing the security of microservices has become a challenge. Managing the security of the traditional monolithic applications is quite different than managing the security of a microservices application. For example, in the monolithic application, it is easy to implement a centralized security module that manages authentication, authorization, and other security operations; with the distributed nature of microservices, having a centralized security module could impact efficiency and defeat the main purpose of the architecture. The same issues hold true for data sharing: monolithic applications share data between the different modules of the app “in-memory” or in a “centralized database,” which is a challenge with the distributed microservices.




The API Gateway
Probably the most obvious approach to communicating with microservices from the external world is having an API Gateway. The API Gateway (Fig. 2) is the entry point to all the services that your application is providing. It’s responsible for service discovery (from the client side), routing the requests coming from external callers to the right microservices, and fanning out to different microservices if different capabilities were requested by an external caller (imagine a web page with dashboards delivered by different microservices). If you take a deeper look at API Gateways, you’ll find them to be a manifestation of the famous façade design pattern.

From the security point of view, API Gateways usually handle the authentication and authorization from the external callers to the microservice level. While there is no one right approach to doing this, I have found using OAuth delegated authorization along with JSON Web Tokens (JWT) to be the most efficient and scalable solution for authentication and authorization for microservices (Fig. 3).

To illustrate further, a user starts by sending his credentials to the API Gateway, which will forward the credentials to the Authorization Server (AS), or OAuth server. The AS will generate a JSON Web Token (JWT) and will return it to the user (Fig. 4). Whenever the user wants to access a certain resource, he’ll request it from the API Gateway and will send the JWT along with his request. The API Gateway will forward the request with the JWT to the microservice that owns this resource. The microservice will then decide to either grant the user the resource (if the user has the required permissions) or not. Based on the implementation, the microservice can make this decision by itself (if it knows the permissions of this user over this resource) or simply forward the request to one of the Authorization Servers within the environment to determine the user’s permissions.
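To make this flow concrete, here is a minimal sketch (mine, not the author’s) of an authorization server issuing an HS256-signed token and a microservice verifying it before granting a resource. A production system would use a vetted JWT library and proper key management; the claim names, secret and permissions below are illustrative assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-shared-secret"  # illustrative only; real systems manage keys carefully

def _b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_jwt(subject: str, permissions: list) -> str:
    """Authorization Server: sign header.payload with HMAC-SHA256."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps({
        "sub": subject,
        "perms": permissions,
        "exp": int(time.time()) + 3600,
    }).encode())
    sig = _b64url(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str) -> dict:
    """Microservice: reject tampered or expired tokens, then read the claims."""
    header, payload, sig = token.split(".")
    expected = _b64url(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

# The gateway forwards credentials to the AS, returns the JWT to the user,
# and later forwards the user's JWT to the microservice that owns the resource:
token = issue_jwt("alice", ["orders:read"])
claims = verify_jwt(token)
assert "orders:read" in claims["perms"]
```

As described above, the microservice can then either check the permissions carried in the claims itself or defer the decision to an authorization server.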


The approach in Fig. 4 is much more scalable than traditional centralized session management. It allows every individual microservice to handle the security of its own resources. If a centralized decision is needed, the OAuth server is there to share permissions with the different microservices.

A challenge with this approach arises if you want to revoke the permissions of the user before the expiration time of the JWT. The microservices are distributed, possibly in several locations around the world, and the API Gateway is just forwarding the requests to the responsible microservice. That means that revoking the permissions requires communicating with every single microservice, which is not very practical. If this is a critical feature, the API Gateway can play a pivotal role by sending the user a reference to the JWT instead of the JWT value itself (Fig. 5). Each time the user requests a resource from the API Gateway, the API Gateway converts the reference to the JWT value and forwards it as normal. If revoking the permissions is needed, only a single request to the API Gateway is required, and the gateway can then kill the session for that user. This solution is less “distributed” than passing the JWT value itself, so it’s up to the software architect and the application requirements to follow either approach.
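A sketch of the reference-token idea in Fig. 5, using a hypothetical in-memory store at the gateway (the class and method names are my own, not from the article; a production gateway would typically back this with a shared cache):

```python
import secrets

class GatewayTokenStore:
    """Gateway-side map from an opaque reference to the real JWT value."""
    def __init__(self):
        self._tokens = {}

    def store(self, jwt_value: str) -> str:
        ref = secrets.token_urlsafe(16)   # opaque handle handed to the user
        self._tokens[ref] = jwt_value
        return ref

    def resolve(self, ref: str) -> str:
        """Called per request: swap the reference for the JWT before forwarding."""
        jwt_value = self._tokens.get(ref)
        if jwt_value is None:
            raise PermissionError("unknown or revoked token")
        return jwt_value

    def revoke(self, ref: str) -> None:
        """Kill the session at the gateway; no need to contact every microservice."""
        self._tokens.pop(ref, None)

store = GatewayTokenStore()
ref = store.store("eyJhbGciOi...")        # JWT value abbreviated for the example
assert store.resolve(ref) == "eyJhbGciOi..."
store.revoke(ref)                          # resolve(ref) now raises PermissionError
```

This trades away some of the distributed character of plain JWTs in exchange for single-request revocation, which is exactly the trade-off described above.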






HCI: Ready for mainstream?
George Karidis is Chief Executive Officer at Virtuozzo.


We’ve all read the eye-popping stats about the number of devices coming online as part of the impending Internet of Things (IoT) wave — from about 6.4 billion connected devices in 2016 to almost 20.8 billion by 2020, according to Gartner. The numbers are staggering.

Why? Because fundamentally, most of us take for granted the ability to have fast mobile internet, cloud storage and instant access to apps with just one or two devices — our smartphone and maybe a laptop or tablet. When something goes wrong, we get instantly frustrated. But think about how frustrating it could be when everything from the traffic lights to the engine in your car to the refrigerator in your home is connected and at risk of failure. If these sensor-enabled devices don’t have the instant, always-on access to the compute, storage and networking they require, people will be shouting at more than their mobile phones.

IoT will never achieve its full potential without an infrastructure that provides the pure computing power needed to keep up with this exponential growth in devices. IoT also requires new approaches to storage to ensure that devices, apps, and users have access to the massive amounts of data at play. Hyperconverged infrastructure (HCI), which enables breakthrough performance and cost efficiencies by integrating compute, storage and networking in a single software-defined stack that can be distributed across private and public clouds, is exactly what IoT needs to go mainstream.

But think about how frustrating it could be when everything in your home is connected and at risk of failure.

How HCI meets IoT’s steep infrastructure challenges
There are three key benefits of HCI in supporting IoT versus traditional virtualization and storage solutions. Typically, total cost of ownership comes into play, and this case is no exception. The costs of scaling a traditional compute and storage infrastructure to manage the IoT data deluge would be unmanageable. HCI offers a fully virtualized, software-defined infrastructure approach to handle data streams from this mass distribution of devices. You can significantly reduce infrastructure costs by leveraging all your storage resources as a single, distributed resource pool, while enabling the high levels of redundancy needed to ensure that data is always available for IoT devices and apps.

The second clear benefit involves managing load spikes. HCI offers improved ability to manage the peaks and troughs of compute resources that will be typical with IoT devices and systems. For example, a speed camera’s data stream is lower at night, meaning that with the right flexibility, compute power can be freed. That is not necessarily the case with legacy data center infrastructure.

Finally, HCI offers better integration with IoT systems on the back end. With the proliferation of connected devices comes new requirements for processing and storing the large amounts of data created by sensors. One option is using the public cloud, but for those companies that want a private cloud option, HCI is a sensible choice.

Efficiency and isolation required by IoT
Containers will be imperative for building flexible IoT systems that perform efficiently, can be tested adequately and can be updated easily. IoT objects will need to operate in low-bandwidth environments in many cases, and containers enable more efficient updates and fewer failures than traditional push-based updates. This, combined with the increased performance and efficiency of compute resources that containers provide, will open the doors to IoT sooner than we think. To support production workloads, containers require a highly efficient platform that integrates with storage and adds technology like disk encryption to protect data. Hyperconverged infrastructure platforms with integrated, persistent software-defined storage offer exactly that.

First mover industries to set the tone
In the consumer arena, the deployment of smart home devices and systems will provide a very public window into how well connected devices are supported by back-end infrastructure. As adoption scales, how many anguished social media posts will we see? Will media stories detail consumers’ frustrations with so-called “life-changing” IoT gadgets? This will be one measuring stick that can be used to determine whether IoT devices are getting the back-end infrastructure needed to reach mainstream adoption.





Will software always need users?
Peter Thorne is director at analysis firm Cambashi.


Interactive software needs users to guide it through a process. But many steps have been or can be automated. The promise of machine learning is to automate any remaining un-automated steps. How should a software architect find the limits of automation and the right role for people in a system?

In the early days of interactive word processing, the user had to do it all — typing, layout and checking. Now, spelling and grammar checking are standard, and automatic layout according to a template is routine. What next? Will natural language generators absorb our ideas and data, and then generate a suitable narrative? It’s been done for weather forecasts, and NLG technology is anticipated to provide a text narrative alongside the widely used ‘dashboards’ of analytics systems.

The point I want to make is that the role of the user in every interactive system will change. If the software you work on is interactive, you need a vision for opportunities to add automation that makes your users more productive. Also, you need to remember that machine learning can enable automation without ever having to understand the task.

A development team needs to be able to see how their interactive system fits into its environment.

The right role for people
No understanding? Consider old-school data entry — yes, transferring the information from handwritten forms into a computer system. Unlike an optical scanner, a human will look at a blurred handwritten form at different angles, squint at it, hold it closer to the light, and perhaps ask a colleague for their opinion. More than that, humans understand context.

Machine learning theory says you can add a learning layer. This would require scanned input images; it would then build correlations between those images and the outputs generated by the data entry clerks. When the correlations are good enough, transform the learning layer into an automation layer to generate outputs which contain all the learned correlations. Hey presto — automated data entry, and all the task know-how came from existing users doing their assigned jobs.

That’s the theory. We all know the limitations. I have never heard of machine learning being the whole solution to automation of a manual data entry task, and I’m not trying to promote this idea. But this approach transformed software for language translation, and it also re-opens the question of whether every user action is needed in interactive software.

Interactive means now
Engineering software is fertile ground for interactive software — dynamic images, context-aware menus, rapid click-response sequences to create and manipulate design data. Except in simulation. The numerical calculations behind structural or fluid flow simulations take too long. Until now. This year, Ansys demonstrated its ‘Discovery Live’ software. This uses the parallel processing power of Graphics Processing Units (GPUs) to provide simulation results as part of the interactive response to design changes. This will trigger change in the way product development teams work. Interactive simulation will make simulation part of the up-front process, improving early-stage design choices, eliminating (or, more likely, reducing) the need for a longer cycle of later simulation for a selected subset of all potential choices.
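As a toy illustration of the data-entry learning layer (entirely hypothetical: letter-frequency histograms stand in for real image features, and a nearest-neighbour lookup stands in for a trained model), correlations between noisy "scanned" inputs and the values clerks entered can be replayed to automate new entries:

```python
from collections import Counter

def features(scan: str) -> tuple:
    # Stand-in for image features: letter-frequency histogram of a "scanned" string.
    counts = Counter(scan.lower())
    return tuple(counts.get(c, 0) for c in "abcdefghijklmnopqrstuvwxyz")

def distance(a: tuple, b: tuple) -> int:
    return sum((x - y) ** 2 for x, y in zip(a, b))

# The "learning layer": correlations gathered from clerks doing their normal jobs,
# stored as (noisy scanned input, value the clerk actually entered) pairs.
training = [("lnvoice", "invoice"), ("invoice", "invoice"),
            ("recejpt", "receipt"), ("receipt", "receipt")]
model = [(features(scan), label) for scan, label in training]

def automated_entry(scan: str) -> str:
    # The automation layer: emit whatever the closest clerk example produced.
    return min(model, key=lambda m: distance(m[0], features(scan)))[1]

assert automated_entry("invoise") == "invoice"  # a new, noisy scan
```

All the know-how lives in the recorded examples, which is both the appeal of the approach and, as noted, its limitation.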

Smartphones have led the way, finding shortcuts and convenient interactive sequences. Now many systems make similar assumptions about user intent and make most option selections unnecessary. Though important, these are quite low-level, detailed design and optimization choices.

There are also bigger-picture choices. The examples above point to three types of bigger choice: direct task automation (often implemented as new functions); machine-learning-based automation (likely to capture and duplicate an existing way of working); and moving existing ‘batch’ software into the interactive sequence (perhaps enabled by special hardware, cloud capacity or data access).

To make these choices, a development team needs to be able to see how their interactive system fits into its environment — of other systems and business processes. The important wall to break down is the assumption that users will continue to do the same thing in the same ways.
