OCTOBER 2018 • VOL. 2, ISSUE 16 • $9.95 • www.sdtimes.com
VOLUME 2, ISSUE 16 • OCTOBER 2018
Defining software quality metrics for Agile, DevOps
pages 9, 12, 15, 17, 18, 25, 26:
• The CNCF sees a surge in cloud-native adoption
• APM for today’s new architectures
• The 6 core pillars of the Microservices Manifesto
• The Commons Clause causes open-source disruption
• Ensure security and quality at speed
• Report: DevOps initiatives are paying off
GUEST VIEW by Ayman Sayed Developer culture dictates security
ANALYST VIEW by Arnal Dayaratna Understanding cloud native
SHOWCASE
INDUSTRY WATCH by David Rubinstein A manifesto for modern development
XebiaLabs powers enterprise DevOps
Compuware’s bridge to enterprise DevOps
A CALMR approach to DevOps
Tasktop improves end-to-end value flow
39 GitLab powers entire DevOps life cycles
BUYERS GUIDE APIs help developers do more with less
Laying out a new network landscape
Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2018 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at email@example.com.
Instantly Search Terabytes
www.sdtimes.com EDITORIAL EDITOR-IN-CHIEF David Rubinstein firstname.lastname@example.org NEWS EDITOR Christina Cardoza email@example.com
dtSearch’s document filters support: • popular file types • emails with multilevel attachments • a wide variety of databases • web data
Over 25 search options including: • efficient multithreaded search • easy multicolor hit highlighting • forensics options like credit card search
SOCIAL MEDIA AND ONLINE EDITOR Jenna Sargent firstname.lastname@example.org ASSOCIATE EDITOR Ian Schafer email@example.com ART DIRECTOR Mara Leonardi firstname.lastname@example.org CONTRIBUTING WRITERS Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz CONTRIBUTING ANALYSTS Cambashi, Enderle Group, Gartner, IDC, Ovum
ADVERTISING SALES PUBLISHER David Lyman 978-465-2351 email@example.com
Developers: • APIs for .NET, C++ and Java; ask about new cross-platform .NET Standard SDK with Xamarin and .NET Core • SDKs for Windows, UWP, Linux, Mac, iOS in beta, Android in beta • FAQs on faceted search, granular data classification, Azure and more
SALES MANAGER Jon Sawyer firstname.lastname@example.org
CUSTOMER SERVICE SUBSCRIPTIONS email@example.com ADVERTISING TRAFFIC Mara Leonardi firstname.lastname@example.org LIST SERVICES Jourdan Pedone email@example.com
Visit dtSearch.com for • hundreds of reviews and case studies • fully functional enterprise and developer evaluations
The Smart Choice for Text RetrievalÂ® since 1991
REPRINTS firstname.lastname@example.org ACCOUNTING email@example.com
PRESIDENT & CEO David Lyman CHIEF OPERATING OFFICER David Rubinstein
D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803 www.d2emerge.com
NEWS WATCH
SmartBear’s UI test tool free for open source
SmartBear is making its UI functional web testing tool available for free for open-source projects. CrossBrowserTesting enables teams to automate Selenium scripts, manually debug web apps and compare solutions on more than 1,500 desktop and mobile browsers in the cloud. According to the company, the full access to its device lab will maximize test coverage and ensure a quality user experience. “The Swagger development team has used CrossBrowserTesting for a few years now, and it has become a critical part of how we ensure that the SwaggerHub SaaS product and Swagger.io website function perfectly for any user who visits those sites,” said Ron Ratovsky, Swagger developer evangelist.
Microsoft contributes projects to Go
Microsoft is giving back to the Go programming community with the announcement of Project Athens and GopherSource. According to the company, this is an important milestone in Microsoft’s efforts to support Go in Visual Studio Code, Visual Studio Team Services and the Azure cloud platform. Project Athens is an open-source project designed to create proxy servers for Go modules. The project includes a Go module proxy server for edge deployments, a protocol for authenticated module proxies, module notary services, and the ability to specify what to include and exclude when approving external Go packages. The company explained the project is still in its alpha phase, and it will continue work on improving the modules experience. GopherSource is a new initiative designed to bring more users and contributors to the Go ecosystem and “upstream” key Go projects. Additionally, Microsoft is working to improve the Go developer experience within its own products and services. Some improvements include an extension for native Go support in Visual Studio Code, support for Go across Azure services, and CI and CD capabilities for Go apps in Visual Studio Team Services.

Program to create revenue stream for open source
Storj Labs announced a new Open Source Partner Program this week designed to generate revenue for open-source projects and companies. The program works by generating revenue as users of open-source partner software store data in the cloud, Storj Labs explained. The program is being launched in conjunction with 10 partners: Confluent, Couchbase, FileZilla, InfluxData, MariaDB, Minio, MongoDB, Nextcloud, Pydio and Zenko. Revenue is earned every time end users store data on the Storj platform. It can be earned through new user referrals or incremental cloud storage use. “Our Open Source Partner Program will help open source companies to remain open and free and invest in growth,” Golub wrote. “It will also enable them to achieve more within their budgets, supporting them in becoming profitable, accelerating roadmaps or meeting other financial-related goals. And, it should do so without trying to demonize existing players or requiring unnatural acts with regard to licensing. Ultimately, open source companies — even the ones that only provide free products — require revenue to sustain themselves, and the Storj Open Source Partner Program can help.”
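Project Athens, described in the Microsoft item above, implements a download protocol for Go module proxies. A sketch of the endpoint layout (the URL shapes follow the GOPROXY protocol as later standardized for Go modules; treat the exact paths here as an assumption for illustration, not Athens documentation):

```python
def proxy_urls(base, module, version):
    """Build the fetch URLs a Go module proxy serves for one module version.

    Endpoint shapes per the Go module proxy protocol (illustrative sketch).
    """
    # Uppercase letters in module paths are escaped as '!' + lowercase,
    # so case-insensitive file systems cannot collide two modules.
    escaped = "".join(f"!{c.lower()}" if c.isupper() else c for c in module)
    root = f"{base}/{escaped}/@v"
    return {
        "versions": f"{root}/list",            # newline-separated version list
        "info":     f"{root}/{version}.info",  # JSON metadata for the version
        "gomod":    f"{root}/{version}.mod",   # the module's go.mod file
        "source":   f"{root}/{version}.zip",   # zipped module source
    }
```

A proxy like Athens sits between `go get` and the origin repository, answering these four requests from its own storage.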
Atlassian’s take on code review in Bitbucket Cloud
Atlassian is taking a code-first approach in its version control repository hosting service. The company announced a redesign of Bitbucket’s pull requests and a focus on modernizing code review at its annual Atlassian Summit in Barcelona last month. According to the company, as software development keeps changing, so do the tools and approaches developers use. Pull requests have been redesigned with new navigation and UI for collaboration and more effective reviews. According to Atlassian, early users have already seen a 21 percent improvement in code reviewers’ “time-to-approve.” The company also announced new integrations for its continuous delivery tool Bitbucket Pipelines. The tool is now integrated with Jira Software for visibility and traceability across the development lifecycle.
Applitools: Version control for UI elements
In order to see what a previous version of an application looks like, developers often have to actually rebuild that earlier version. This process can take days, which delays the release of new functionality, negatively impacting customer relationships and therefore revenue. Applitools released its UI Version Control system to address that problem. A version control system for user interfaces enables users to view the entire history of web and mobile application user interfaces. They will also be able to see when things have been changed and by whom. According to the company, this is the first time this level of visibility is being made available to developers, test automation engineers and product managers. This type of visual record allows R&D and product teams to intelligently drive application development by showing which features have worked and which haven’t in the past, the company explained. The Applitools UI Version Control system works similarly to source code version control systems. The solution runs visual tests as part of the GitHub build process and performs visual validations for GitHub pull requests, preventing visual bugs from being released in production apps.
GitLab 11.2 brings insight into real-time changes
Custom project templates have been added on the instance level, allowing organizations to easily manage project templates.
Share code between web, mobile apps
Angular announced developers can now build web and mobile apps once with Angular and NativeScript, instead of having to build both a web and a native mobile app. According to the team, developers have always been able to use NativeScript to build mobile apps with Angular. NativeScript is Progress’ open-source framework for native mobile apps. Having to build a web app and then build a native mobile app and then maintain those two apps separately “got the job done,” the Angular team explained, but “it quickly became apparent that we could do better than that.” “This challenge led to a dream of a Code-Sharing Project. One that would allow you to keep the code for the web and mobile apps in one place. One that would allow us to share the business logic between web, iOS and Android, but still be flexible enough to include platform-specific code where necessary,” Sebastian Witalec, senior developer advocate for Progress, wrote in a blog post on Angular. The team was able to do this with Angular’s Schematics and ng add features. Both teams set out to create nativescript-schematics, “a schematic that enables you to build both web and mobile apps from a single project,” wrote Witalec. In addition, the team separates the web code from the mobile code with a naming convention. Developers can specify NativeScript code with a .tns extension and web code without the .tns extension.

GitHub addresses ‘paper cuts’ in workflows
GitHub is addressing small to medium-sized workflow problems that get under developers’ skin. The company announced Project Paper Cuts to fix problems, iterate on UI/UX and make improvements. Project Paper Cuts is based off of GitHub’s work with the Refined GitHub browser extension, which was designed to improve on the GitHub experience. According to GitHub, some of the features of the project include:
• Unselectable markers when copying and pasting the contents of a diff
• The ability to edit a repository’s README from the root
• Access to repositories from the profile dropdown
• Highlighted permalink comments
• The ability to remove files from pull requests
• Branch names in merge notification emails
• The ability to create pull requests from the Pull Requests page
• The ability to add a teammate from discussions
• The ability to collapse all diffs at once
• The ability to copy the URL of a comment
In addition, the team says it is looking at fixing “paper cuts” that will have the most impact with the least amount of process, friction, discussion and dependencies.
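The .tns naming convention in the Angular/NativeScript item above amounts to a simple file-resolution rule at build time. A toy resolver sketches the idea (Python used only for illustration; the real selection is performed by the nativescript-schematics and webpack tooling, and this function is not part of either):

```python
def pick_source(base_name, platform, available):
    """Choose which file a build for `platform` should use.

    app.component.tns.html targets NativeScript (mobile) builds;
    app.component.html targets web builds. Illustrative sketch only.
    """
    stem, ext = base_name.rsplit(".", 1)
    mobile_variant = f"{stem}.tns.{ext}"
    if platform == "mobile" and mobile_variant in available:
        return mobile_variant
    # Web builds, or mobile builds with no .tns override, fall back
    # to the shared file.
    return base_name
```

The shared business logic lives once in the project; only files that need platform-specific markup get a `.tns` sibling.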
Google’s Tink: A code library for cryptography
Google wants to ensure developers have the tools necessary to protect user data with the open-source release of Tink. This new project is a multi-language, cross-platform cryptographic library designed to ship secure cryptographic code. “At Google, many product teams use cryptographic techniques to protect user data. In cryptography, subtle mistakes can have serious consequences, and understanding how to implement cryptography correctly requires digesting decades’ worth of academic literature. Needless to say, many developers don’t have time for that,” Thai Duong, information security engineer for Google, wrote in a post on behalf of the Tink team. According to the company, Tink is already being used in many of its services.
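The “subtle mistakes” Duong mentions are concrete: for example, comparing MAC tags with `==` can leak timing information. Tink’s own API is not shown here; instead, a small stdlib Python sketch of the safe pattern, to illustrate the kind of pitfall misuse-resistant libraries like Tink are designed to make impossible:

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 authentication tag for the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Check a tag in constant time.

    A naive `tag == expected` comparison can short-circuit on the first
    mismatched byte, leaking how much of the tag was correct; that is
    exactly the sort of subtle bug a hardened library hides from callers.
    """
    expected = sign(key, message)
    return hmac.compare_digest(expected, tag)
```

With Tink, even this level of detail is abstracted away: the developer asks for an authenticated-encryption primitive and never touches tags or comparisons directly.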
Serving up application data to users on the go
BY CHRISTINA CARDOZA
Data can be viewed as being as valuable in the digital business world as oil is to the economy. Oil fuels homes, cars, railroads, and even electricity. Data powers the business and its revenue. Businesses that want to continue to evolve, transform and improve are fueled by their data, which tells them where and when to make changes. It’s not just about obtaining data, though. Businesses that want to remain competitive need to gain intelligent and valuable insights fast and in real time. Yesterday’s data will be worthless tomorrow, when every second it takes to deliver high-quality experiences matters. If a service is slow, if a system goes down, or if a customer can’t find what they are looking for, they won’t waste much time with your business before moving on. The challenge, however, is that there is an influx of data coming in from every direction, and that is making it difficult to collect, store, and manage data. The traditional approach to storing data is to put it on a disk-based system for later integration and analysis. In addition to being slow and not conducive to today’s fast-paced way of working, disk requires a lot of overhead involved with querying, finding and accessing data. “Customer expectations around latency and responsiveness have skyrocketed; today everyone expects constant availability but latency is now considered the new downtime. In other words, brownout is the new blackout in today’s world. In order for businesses to meet and exceed these customer expectations, businesses and IT leaders can no longer rely on old traditional ways of storing and managing data,” said Madhukar Kumar, vice president of product marketing for database provider Redis Labs. The alternative approach is to store data in memory, but until just recently that has not been an option for many businesses because of the cost compared to disk. Recent advances in the
memory space are making storing data in memory and in-memory databases more cost effective, according to Mat Keep, senior director of products and solutions for the database platform provider MongoDB. Keep explained that over the last couple of years, memory technology has improved, with capacity going up and costs going down. “We are seeing real growth and commoditization of in-memory technology,” he said. This gives businesses a competitive advantage, because with in-memory computing they can feed their compute capability at a much higher rate and get a much faster access path to data, Keep explained. “The reality is that disk-based systems are just too slow. You can do all these optimizations to your data, but in the end what a company needs is 10 times to 100 times more improvement, and in-memory computing is the only way to get that scale and performance,” added Abe Kleinfeld, president and CEO of GridGain Systems, an in-memory computing platform provider. “The good news is that computer hardware doesn’t cost that much anymore. It continues to get cheaper and cheaper. You can fit a lot of memory in modern computers today.” According to the research firm Forrester, an in-memory database refers to “a database that stores all or most critical data in DRAM, flash, and SSD on either a single or distributed server to support various types of workloads, including transactional, operational, and/or analytical workloads, running on-premises or in the cloud.” While in-memory systems have been around for years, the costs associated with them confined them to very niche applications for very specialized tasks, such as critical systems, explained Keep. Thanks to lower memory prices, Forrester principal analyst Noel Yuhanna sees in-memory technologies moving more toward complex mobile, web and interactive workloads such as real-time analytics, fraud detection, mobile ads and IoT apps. “We find that everyone wants to run operational reporting and insights quickly to make better business decisions. This is where in-memory comes in. In-memory allows you to process data more quickly than traditional disk drives. Customers are telling us that they are getting five times or even 10 times faster results with in-memory,” said Yuhanna.
How in-memory databases can strengthen your data strategy
The fact is that the increasing amount of data is putting extreme pressure on existing infrastructure, according to Yuhanna. The main benefit of applying in-memory data technology to your strategy is the speed you gain. “The more data you have in-memory, the quicker it is to do the analytics such as fraud detection, patient monitoring, customer-360 and other use cases. Many organizations struggle with performance of critical apps, an issue that’s further exacerbated by growing data volumes and velocities. Business users want fast, real-time data, while IT wants to lower cost and improve operational efficiencies,” Yuhanna said.
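A crude way to see the speed gap Yuhanna describes is to compare an in-memory lookup with one that re-reads and re-parses a file from disk on every query. This stand-in (a plain dict versus a JSON file, not a real database) is only meant to show that the gap is orders of magnitude, not to benchmark any product:

```python
import json
import os
import tempfile
import timeit

# 10,000 key/value pairs, held both in memory and on disk.
data = {f"key{i}": i for i in range(10_000)}
path = os.path.join(tempfile.mkdtemp(), "store.json")
with open(path, "w") as f:
    json.dump(data, f)

def read_from_memory():
    # Direct lookup: no I/O, no parsing.
    return data["key9000"]

def read_from_disk():
    # Re-open, re-read and re-parse the whole file per query — a crude
    # stand-in for the seek/query overhead of disk-based access.
    with open(path) as f:
        return json.load(f)["key9000"]

t_memory = timeit.timeit(read_from_memory, number=100)
t_disk = timeit.timeit(read_from_disk, number=100)
# The in-memory path is typically orders of magnitude faster.
```

Real databases amortize parsing with indexes and caches, so the ratio in production is smaller than this toy shows, but the direction is the same.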
Top use cases by industry
In-memory technology is still in its early days, with Forrester estimating that about 35 percent of enterprises leverage in-memory databases for critical business apps and insights. Forrester expects that number to double over the next three years. The biggest use cases will be around real-time apps with low latency, customer analytics, IoT apps and mobile apps that need integrated data, according to Forrester. In addition, Forrester’s Yuhanna also sees in-memory technology benefiting the following industries. He wrote:
Financial services: Most financial services firms have relied on in-memory for a decade to support their key applications and are now starting to take advantage of in-memory databases to support applications that mostly exist on relational models.
Retail: Customer data stored and processed in-memory helps create an opportunity for businesses to upsell and cross-sell new products to a customer based on his or her likes, dislikes, circle of friends, buying patterns, and past orders.
Manufacturing: Today, most manufacturers deal with highly sophisticated machinery to support their plants, whether building a car, airplane, or tire, or bottling wine or soda. With in-memory and Big Data technologies, manufacturers can track machines every minute or even every second to determine if any machine is likely to fail, as well as what parts might need to be repaired if a breakdown does occur. As a result, this is allowing large manufacturers to be more proactive, saving them millions of dollars from downtime.
Online gaming: Online gaming has changed dramatically over the years to deliver rich graphics, a global user base, and real-time interactive experiences. In-memory delivers a real-time data platform that can track individuals, groups, game scores, personalized user preferences, and interactions as well as manage millions of connected users, all in real time.
With in-memory databases, there is no disk input/output, so applications can read and update data in milliseconds, according to Redis Labs’ Kumar. “An in-memory database stores the data in the memory, which eliminates seek time when querying the data, thereby making data access exponentially faster,” Kumar said. This is because data is being stored closest to where the data processing is happening, according to Ravi Mayuram, CTO for Couchbase, a NoSQL database provider. Mayuram explained that processing happens in the CPU, and the closest place for the CPU to consume data is in the main memory of a system. “Memory is the place where you keep the data and where computing can happen the fastest,” he said. “That is where you can get those fast response times. In-memory computing is becoming more and more ubiquitous.” It also enables businesses to develop applications that are “population scale,” which refers to providing a service or solution that anyone in the world can access, said Mayuram. “It is really being able to provide much lower latency to your users, much more predictable latency, and fast responsiveness of your applications. If you are running analytics, being able to do that in-memory enables you to pull that data out much faster,” said MongoDB’s Keep. “Speed is the competitive advantage today. How quickly can you act on data, how quickly can you use it, and how quickly can you extract insights from it.” The in-memory speed also makes it easier to work with data. “In-memory is great at bringing in a lot of data, having it constantly be updated, and then being able to look at all the data and generate it in real-time,” said Nima Negahban, CTO of analytics database provider Kinetica. “It just makes things
a whole lot easier because it is faster.” In addition, Negahban mentioned the desire to generate maximum ROI from an enterprise’s data lake or data stream is fueling the move toward in-memory technology. In 2010, the Apache Hadoop movement sparked an investment in terms of data generation and storage. What Hadoop accomplished is the ability to collect all this data, but the next step was to derive insight and value from it. “That part of the problem is what the Hadoop movement didn’t solve very well, and that is where in-memory data comes in,” he said. The old days of running a nightly batch job to generate a static report are over. When you are going through a digital transformation and trying to create a business around data, you need real-time visibility into that data, and disk-based systems aren’t going to get you there, Negahban explained. “Everyone is turning into a technology company, and everyone is turning into a company where customer experience is the number one priority. It is real-time customer experience based on real-time information, and in that world, traditional computing is just too slow. No matter what type of industry you are in, you are trying to collect information about your customers in real time and act in real time to take advantage of opportunities that present themselves from moment to moment. In-memory computing is the only way to achieve that success,” said GridGain’s Kleinfeld.
How in-memory databases can weaken your data strategy
Reduced cost, speed and real-time insights... what could go wrong? In-memory technology is not a silver bullet, and it still comes with its own set of limitations and disadvantages. For instance, while there have been huge advancements to reduce the cost of memory, it is still expensive. The price of RAM has become static, and databases as well as businesses are constantly working toward compressing the cost curve, Kinetica’s Negahban explained. “You cannot put all your data completely in-memory,” said Forrester’s
Yuhanna. “You must be selective on what data and what apps would benefit.” Businesses now need to consider the cost of memory and figure out how they can get the most benefit with the least amount of memory, such as compressing data, storing data on disk or only keeping a certain amount of data in memory, according to Couchbase’s Mayuram. “In-memory gives you a big performance boost, but comes with the [barriers] of being more expensive and complex, especially as the datasets grow or exceed a system,” said MongoDB’s Keep. If you have only a certain amount of in-memory data you can use, once your data exceeds the available memory, the system will either stop accepting data or crash with an out-of-memory message, he added. From there, the business has to figure out whether to increase the available memory in its system, or add more servers or instances. “Storing data in memory means that when the data volume becomes very large — for example, hundreds of terabytes — it becomes expensive to store all of that data in memory since RAM is more expensive than disk storage,” said Redis Labs’ Kumar. In addition, in-memory is not persistent, meaning it has to be backed up with persistent storage to ensure data reliability, according to Forrester’s Yuhanna. Unless you are checkpointing your data from in-memory to disk — which is going to slow down the system — if your system dies, you will have lost that data, Keep added. There are efforts to improve the reliability of in-memory. Keep says as memory continues to get cheaper and capacity increases, we will start to see the adoption of non-volatile RAM such as Intel and Micron’s 3D XPoint technology, which can provide a persistent form of memory so if you lose power, your data is still persisted. Keep does warn about the wear rate of this type of technology, though. “The more you write data to this device, the more it actually wears out, so you can actually start to lose data again,” Keep said.
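One of the mitigations mentioned above, keeping only a certain amount of data in memory, is typically implemented as bounded eviction. A minimal sketch of a capacity-bounded store that evicts the least recently used entry instead of failing when it fills up (illustrative only; eviction policies in real databases are configurable and far more sophisticated):

```python
from collections import OrderedDict

class BoundedCache:
    """An in-memory store with a hard capacity and LRU eviction."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()  # insertion order doubles as recency order

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)  # refresh recency
        self._data[key] = value
        if len(self._data) > self.capacity:
            # Evict the least recently used entry rather than stop
            # accepting data or crash with an out-of-memory error.
            self._data.popitem(last=False)

    def get(self, key):
        if key not in self._data:
            return None  # a real system would fall back to the disk tier
        self._data.move_to_end(key)
        return self._data[key]
```

The trade-off is explicit: stale entries silently disappear, so the cache must always be backed by an authoritative store.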
“Regardless, in-memory technology is here to stay, and its usage is likely to expand to all apps and insights in the coming years,” Yuhanna said.
As time goes on, Kinetica’s Negahban agrees, in-memory technology will continue to expand and evolve and move toward what some call the data gravity theory. “Wherever the data lives is where the rest of the solution stack will move toward, because data is expensive and the need to be able to have fast access and do mutations of that data is what drives all computing,” he said. “So as data continues to explode and as in-memory databases get more creative, we are going to see databases take on a lot more roles where they become places where you house app containers, visualization engines, machine learning models and more. The data gravity phenomenon is going to take effect.”
Overcoming the challenges
To get the most out of in-memory data, Yuhanna sees organizations leveraging a tiered storage strategy. “Organizations are putting hot data in DRAM, warm data in SSD/Flash and cold data on traditional disk drives. Many apps, databases and cloud solutions are starting to leverage tiered storage in a more intelligent and automated manner that’s helping accelerate insights,” Yuhanna said. MongoDB’s Keep does not think we will ever get to a place where everything is completely stored in-memory. “The reality is it is never going to happen because the amount of data we are collecting is growing faster than in-memory technology,” he said. Instead, what Keep expects to see is in-memory and disk-based systems merge into a single intelligent system where the “freshest,” most important data will be held close to the processor, such as in memory, and the slightly more “aged” data that is accessed less frequently will be stored on disk. “There will never be a winner-takes-all. What you will see is a combination of these technologies to give the best latency to the hottest part of data,” he said. “For data that requires instantaneous processing, the best practice is to store and process in-memory,” Redis Labs’ Kumar added. “For data that doesn’t require frequent reads/writes and can be processed with batch jobs, the general acceptable practice is to store them on a disk.”
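The hot/cold tiering strategy Yuhanna describes can be sketched in a few lines. The tier names and the promote-on-read policy here are illustrative assumptions, not any vendor's implementation; real systems automate demotion with recency and frequency statistics:

```python
class TieredStore:
    """Two-tier store: a small 'hot' tier (stand-in for DRAM) spilling
    into an unbounded 'cold' tier (stand-in for SSD/disk). Reads that
    hit the cold tier promote the entry back into the hot tier."""

    def __init__(self, hot_capacity: int):
        self.hot_capacity = hot_capacity
        self.hot = {}
        self.cold = {}

    def put(self, key, value):
        self.hot[key] = value  # new data is hottest by definition
        self._spill()

    def get(self, key):
        if key in self.hot:
            return self.hot[key]
        if key in self.cold:
            value = self.cold.pop(key)
            self.hot[key] = value  # promote: cold -> hot
            self._spill()
            return value
        return None

    def _spill(self):
        # Demote the oldest hot entries when over capacity (a real
        # implementation would pick the least recently used instead).
        while len(self.hot) > self.hot_capacity:
            oldest_key = next(iter(self.hot))
            self.cold[oldest_key] = self.hot.pop(oldest_key)
```

This mirrors Keep's prediction of a single intelligent system: one interface, with the freshest data held near the processor and aged data pushed down to cheaper media.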
The CNCF sees a surge in cloud-native adoption
BY CHRISTINA CARDOZA
The industry is fully jumping on board with cloud-native technologies. The Cloud Native Computing Foundation released its bi-annual CNCF survey at the Open Source Summit in Vancouver last month and found the use of cloud-native technologies in production has grown more than 200 percent since December 2017.
The survey is based off of 2,400 responses from people in developer or IT-related roles such as IT operations, IT managers, project managers, and developer relations. The CNCF said the findings reinforce the current trends they are seeing in the industry, and that 2018 was really the year Kubernetes and other cloud-native technologies crossed the chasm from early adopters to early majority.
“Cloud-native computing represents the next paradigm of computing, the role that virtual machines played in the last decade. Cloud native is now going to play that role for the next decade,” said Dan Kohn, executive director of the CNCF.
Cloud-native technologies refer to the ability to build and run scalable apps in dynamic environments, using such fundamentals as containers, service mesh, microservices, immutable infrastructure and declarative APIs. “These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil,” according to the CNCF.
The best-known benefit of cloud-native technologies is the efficiency users get from containerizing their apps and having a Kubernetes orchestrator automatically decide which apps are running on which servers, Kohn said. “You get much higher resource utilization and efficiency. You can either run your existing apps on a smaller number of servers and save money, or you can serve a much higher demand and load with your existing servers,” he said.
The other benefit, and perhaps the biggest advantage of cloud-native computing, Kohn said, is its ability to accelerate the development velocity of enterprises adopting the technology. “There are still companies that are on quarterly release cycles, and part of the idea of cloud native and the use of continuous integration and continuous delivery is trying to move that down to weekly or daily,” said Kohn.
Other benefits from the report included improved scalability and cloud portability.
Despite the benefits, Kohn said there are some “sharp corners” when it comes to cloud native, because for any given organization to containerize its apps, it is going to take work and effort to learn about all the options. “But the investment organizations are making into it has an extremely short payoff time,” he said.
Additional challenges include culture changes, complexity, lack of training, security, and monitoring, according to the report.
“Cloud native allows IT and software to move faster. Adopting cloud-native technologies and practices enables companies to create software in-house, allows business people to closely partner with IT people, keep up with competitors and deliver better services to their customers,” the CNCF wrote on its website.
As for the cloud, the report found a mix of on-premises, private cloud and public cloud solutions. When it comes to deploying containers on the cloud, 63 percent are deploying to AWS, 35 percent to Google Cloud Platform, and 29 percent to Microsoft Azure. While AWS and Google Cloud Platform deployments were down from the last report, the report found Microsoft Azure deployments were up from 16 percent. Respondents cited using containers for development, test and production. Kubernetes remains the number one choice for container management, with 83 percent saying they use it, followed by Amazon ECS, Docker Swarm and Shell Scripts.

STEPS TO GET STARTED WITH CLOUD-NATIVE COMPUTING
The CNCF recommends organizations looking to get started with cloud-native take a look at its trail map. Along the way, the map will suggest cloud-native projects and tools to help organizations successfully get through each step.
1. Containerization
2. CI/CD
3. Orchestration and application definition
4. Observability and analysis
5. Service mesh and discovery
6. Networking
7. Distributed database
8. Messaging
9. Container runtime
10. Software distribution
$PHULFDV (0($ 2FHDQLD VDOHV#DVSRVHSW\OWG FRP
Full Page Ads_SDT016.qxp_Layout 1 9/21/18 4:13 PM Page 14
015_SDT016.qxp_Layout 1 9/24/18 1:54 PM Page 15
APM for today’s new architectures
Content provided by SD Times and
BY DAVID RUBINSTEIN
The role of APM has traditionally been to flag issues in code that affect performance. To resolve those issues, it has been necessary to understand all the spaghetti code of a monolithic application — written by multiple developers who often do not have an overarching view of the entire application — and then deconstruct that code to get to the root cause.
But as the way software is developed has changed from monolith to microservice, the way APM assesses performance must change as well.
“One of the fundamental tenets of microservices is that you write your logic in as small a logical construct as possible, so every service should only do one thing,” said Pete Abrams, co-founder and COO at Instana, an application monitoring solution provider. “The work gets done by the messages flowing, the requests that happen between the services. The problem isn’t the code now; that’s actually quite simple. The problem will be in the overarching picture, the interaction between the services.”
So one of the new challenges for IT is that this cannot be tested adequately in a test environment. The only way to understand performance from this new architecture is in a production environment.
“The modern APM solution has to understand message patterns and the dynamics between the services, and that’s a real shift [in APM],” Abrams said.
Compounding that problem is how software is deployed, in containers, and how container management software such as Kubernetes and Docker Swarm moves them around to maximize application performance. Abrams called that “structural dynamism,” the ability to manage scale and add capacity when you have more demand. He explained that “it’s really about the fluid access to the processing, and the ability to reallocate and move things around in a very ad hoc way to meet demand.”
Today’s environments aren’t as static as they used to be, when code would run in a specific virtual machine on a specific server. Requests could be easily tracked and related to performance. With containers, the code could begin running in a container on a specific server, but then be moved by the container management software to a different server to gain efficiencies. So how that service performs is more difficult to track.
“There’s another new capability needed from APM, and that is really precise correlation of exactly where every request ran,” Abrams said. “I have a need to know, need to have an overarching, hierarchical map of every request that ran through the system. You need a structural map that is constantly updating itself with the here and now.”
That’s one of Instana’s biggest differentiators. Its APM solution understands where all the pieces are running, and can see the impact on performance when a container is moved from one server to another. “We talk about that as continuous updating and correlation,” Abrams said.

It’s a cultural issue as well
In years past, the roles of development and operations were performed in separate orbs. APM was something purchased by and run by operations, which had large budgets for core infrastructure
tools and training. As the APM tools were quite complex, organizations had to train and create experts in running the tools and understanding the system.
With the advent of and uptake in DevOps, much of the responsibility for application performance is falling to developers, who often don’t have a high-level view of the application in production.
“A typical microservices application might have 300 to 500 services operating with discrete APIs humming along, messaging to each other. That’s a lot of moving parts. But the developer doesn’t care about all of that,” Abrams said. “The developer cares about their piece. Maybe a developer had a hand in 10 of those services. The first thing you have to solve is to give them an easy way to just focus on what they care about. That is the whole point of our major new release, which we call Application Perspectives.”
Yet as much as DevOps has taken hold, there remain a handful of operations people in an organization who are responsible for the quality of service of the whole application and architecture. Abrams explained, “There’s most likely a set of 10, 20 or 30 very common requests coming into the application that are good indicators of overall service quality, the so-called core transactions to the system — the log-in transaction, the add-to-cart transaction, the checkout. Those things are easy to delineate and have to run well.”
Abrams added that the transformation to DevOps and containers provides a good moment to assess tooling, to find those that leverage the new techniques and technologies. “Almost all the tools in the market were created before these words existed — DevOps, Kubernetes, containers… even cloud. Instana started from scratch, with this in mind.” z
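The "overarching, hierarchical map of every request" Abrams describes rests on trace-context propagation: each service forwards a shared trace ID, plus its own span ID as the parent, with every downstream call, so a collector can reassemble the call tree afterward. A minimal sketch of that idea, using invented field names rather than Instana's actual wire format:

```python
import uuid
from collections import defaultdict

# Every hop records a span tied to one trace ID, and returns the headers
# it would forward so the next hop attaches to the same trace.
spans = []

def handle_request(service, headers=None):
    headers = headers or {}
    trace_id = headers.get("trace-id") or uuid.uuid4().hex
    span_id = uuid.uuid4().hex
    spans.append({"trace": trace_id, "span": span_id,
                  "parent": headers.get("span-id"), "service": service})
    # Headers this service sends with its own downstream calls.
    return {"trace-id": trace_id, "span-id": span_id}

# frontend -> cart -> pricing, all under a single trace
h1 = handle_request("frontend")
h2 = handle_request("cart", h1)
handle_request("pricing", h2)

# A collector can now rebuild the hierarchical request map:
# spans with parent None are the roots of each trace.
tree = defaultdict(list)
for s in spans:
    tree[s["parent"]].append(s["service"])
```

In production this bookkeeping is done by an agent or tracing library, not hand-written, but the data model — trace ID, span ID, parent span — is the same shape standardized by the W3C Trace Context header.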
The 6 core pillars of the Microservice Manifesto
BY CHRISTINA CARDOZA
Enterprises of all sizes are adopting the microservice approach, but not everyone is adopting it correctly. According to Chase Aucoin, developer evangelist for AppDynamics, too often businesses think that if they move their monolith applications to microservices, their problems will just magically go away. However, once they start to move to microservices and realize their problems aren’t going away, they try to place blame on the technology. Aucoin explained that the reason businesses are having such a hard time is that they don’t realize their problems are embedded within the culture of their business, not the technical aspect of the service. “The biggest challenge when it comes to adopting microservices is businesses think they can throw more technology at a culture problem and fix it, which unfortunately you can’t,” he said.
After realizing that this was an ongoing pattern within the industry, Aucoin decided to create the Microservice Manifesto. The manifesto is an online template that consists of six pillars and responsibilities. According to Aucoin, it is similar to the Agile Manifesto in that it is not an all-encompassing book; it is meant to guide the discussion of microservices. Unlike the Agile Manifesto, the Microservice Manifesto’s pillars are designed to be stacked on top of each other in order of importance.
“We all have the same goal — getting out great software quickly, securely, and error free. Microservices can be a great approach for large teams and codebases to help address this need, but there is not a lot of codification around what exactly Microservices are and why they help solve problems,” according to the manifesto’s website. “This document hopes to address that in an opinionated, but dogma-free approach with the end result being happier teams, companies, and clients.”
The six pillars, in order of most importance, are:
1. OWNERSHIP: Ownership is the most important pillar, according to Aucoin. “Without creating ownership, without really valuing your team members and knowing that they are all trying to work towards the same goal of making your company successful, you won’t be successful,” he said. According to the manifesto, businesses typically organize systems in a multi-tier way with segmented teams. However, if you create cross-functional teams that have full ownership and are aligned with the business, it allows solutions to be delivered faster, and the company to respond to hurdles quicker.
2. AUTOMATION: Automation is a top pillar because it helps businesses break down large monoliths quicker, with fewer errors. “If you don't have automation, trying to break down a monolith or deploy several new greenfield microservices becomes an exercise in madness, and no one wants that,” the manifesto states. Aucoin added that automation enables the success of the other pillars, such as testing.
3. TESTING: Testing as much as possible requires being able to automate as much as possible, which is why testing is the third pillar. “Having automated tests that can run during each deployment ensures that we are delivering quality products and not regressing. The benefit to automating that testing is that we get much better use of people time so that instead of executing tests, quality engineers can be writing tests instead,” the manifesto states.
4. DISCOVERABILITY: Discoverability refers to being able to find what you need, when you need it. This is important from a business perspective as well as a technical perspective, Aucoin explained. It enables the business and teams to manage and utilize the system’s functionality. In addition, discoverability refers to data governance and making sure data is consistent and accessible.
5. ACCESSIBILITY: Accessibility is making sure services can access each other regardless of the language they are written in. “After your services can be found, other services need to be able to connect to them. Within the world of microservices, the preferred methodology for this is to expose your services via HTTP(s) and use standard serialization formats that have excellent support across multiple languages such as JSON,” according to the manifesto.
6. RESPONSIBILITY: Lastly, after you build something, you need to be responsible for it — not just how it impacts your team, but how it might impact other teams, services and people, Aucoin explained. “Responsibility also means being responsible for the care and feeding of the services your team owns,” according to the manifesto. “It is your responsibility not only to make services that fulfill the needs of the business; you also need to make sure that the services are fault tolerant, stable, and new releases don’t interrupt consumers.”
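The accessibility pillar's recommendation, exposing services over HTTP(S) with a broadly supported serialization format such as JSON, can be sketched with nothing but the Python standard library. The endpoint and payload below are invented for illustration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryHandler(BaseHTTPRequestHandler):
    """Tiny service exposing one JSON endpoint (names are hypothetical)."""

    def do_GET(self):
        payload = json.dumps({"sku": "A-100", "in_stock": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the sketch quiet
        pass

# Port 0 asks the OS for any free port; call server.serve_forever() to run.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
```

Because the contract is just HTTP plus JSON, a consumer written in Go, Java or JavaScript can call this service without sharing any code with it, which is exactly the cross-language support the manifesto is after.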
* * *
Going forward, Aucoin hopes to gain signatures from community and industry leaders. He does expect the core pillars to expand over time and hopes to eventually provide reusable patterns and practices for microservices. “The world around us is not static, and we don’t know what we don’t know. As we learn new lessons and find out what works and what doesn’t, these pillars can certainly change, grow, develop and create more clarity around them,” he said. z
The Commons Clause causes open-source disruption
BY CHRISTINA CARDOZA
A group of software companies and developers are going after cloud infrastructure providers for what they claim is an abuse of the open-source software ecosystem. The group says cloud providers are using open-source software for their own commercial benefit without returning anything back to the open-source community, helping sustain those projects, or giving open source any credit at all. This is causing them to turn to a new license to protect their work and preserve monetary gain.
“I call it the big code robbery. Amazon and other cloud providers are taking successful open-source code and adopting it as their own cloud service. This is a complete abuse of the open-source concept,” said Ofer Bengal, co-founder and CEO of the software company Redis Labs, creator of the open-source project Redis. “If this continues, there will be no incentive for any developer or any startup to develop any meaningful open-source project.”
Bengal’s frustration stems from his company’s own open-source work. Redis is an open-source, in-memory data structure store that can be used as a database, cache and message broker under the BSD license. In addition, the company offers its own enterprise database offering based off the open-source technology, but because of its open-source success the company is finding it hard to monetize its own offering. This is because cloud providers like Amazon are abusing the open-source code by taking it, building their own database services off of it, and offering them as an integral part of their platform without contributing anything back to the open-source project, according to Bengal. Google Cloud’s website also shows it offers a Cloud Memorystore for Redis as
a service. “So basically they are enjoying huge revenues for something they did not develop,” he said. “We, Redis Labs, only make a small fraction of those revenues with our offering, and this is not because their product is better in any way. This is just because the [open source] Redis service is way more accessible for users that go on the AWS console while our
Redis service is buried in the marketplace. This scenario does not just apply to Redis. Almost any open-source project nowadays suffers from the same unfair competition.” While Amazon by definition isn’t doing anything illegal or violating any of the terms of the ‘open-source agreement’ as stated by the Open Source Initiative, Bengal believes they are abusing their power and resources for monetary gain and destroying the entire essence of what it means to be open source. “The issue of monetizing opensource projects by cloud providers without contributions back to the community has become a significant challenge for the open-source business model. While all cloud providers are guilty of this at some level, AWS leads the charge due to its market share but also because of its business practices,” said Manish Gupta, CMO of Redis Labs. “Accelerated ‘forfee’ services introduced by cloud providers have increasingly relied on the work done by others without any significant payback for the community.” SD Times reached out to Amazon for comment, but had not heard back at the time of writing. Kevin Wang, founder and CEO of FOSSA, a contributor to the clause, explained the problem doesn’t lie only with cloud providers. In principle, the Commons Clause is meant to protect open-source projects from any bad actors.
Redis sought project protection Redis Labs tried to legally stop cloud providers from abusing its trademark, but found it difficult because of the legal resources and budgets these giant companies have. So the company took another route and decided to change the licenses of certain open-source Redis add-ons with
the Commons Clause. This change sparked huge controversy within the community with many stating that Redis was no longer open source. “We were the first significant company to adopt this and announce it in such a way that we got most of the heat from the community on this one,” said Bengal. The reason for the uproar is because the Commons Clause is meant to add “restrictions” that limit or prevent the selling of open-source software to the Open Source Initiative’s approved open-source licenses. “ … ‘Sell’ means practicing any or all of the rights granted to you under the License to provide to third parties, for a fee or other consideration (including without limitation fees for hosting or consulting/ support services related to the Software), a product or service whose value derives, entirely or substantially, from the functionality of the Software. Any license notice or attribution required by the License must also include this Commons Clause License Condition notice,” the Commons Clause website states. According to the OSI, this directly violates item six of its open-source definition in which it states no discrimination against fields of endeavor. “The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research,” the definition explains. According to Vicky Brasseur, vice president of the OSI, by restricting users from making money off a project where it is applied, the Commons Clause cannot be by definition labeled as open source. “As the Open Source Definition is no longer applicable to those projects, they—quite literally by definition—are no longer open source. At best they can be called ‘source available,’” she wrote in an email to SD Times. 
However, Bengal stated that “Redis is always and will always remain open source via the BSD license.” He believes this still promotes the idea of open source because it does not change users’ ability to use the product and still can be freely used within applications
and sold as a combined product. Instead, what it does is prevents cloud providers from taking the add-on itself and monetizing it without contributing anything back to open source.
Looking for sustaining value “The Commons Clause is getting a lot of attention because developers are frustrated at how hard it is to commercialize and sustain open-source projects. Many open-source projects require tens of millions of dollars to fund something that’s given away for free. Therefore, they adopt the Commons Clause to try and commercialize the value in order to sustain the core project,” said FOSSA’s Wang. “Users are afraid that this means the world is getting less open. This is not true, the greatest threat to open source is not narrow licenses, but lack of resources for
Redis Lab’s Ofer Bengal says there is a ‘big code robbery’ of open-source projects.
open-source projects to succeed. That misunderstanding and lack of empathy for the developer is causing the uproar.” However, developers see it differently. Drew DeVault, a software developer, flatly stated: “The Commons Clause will destroy open source.” “It preys on a vulnerability opensource maintainers all suffer from, and one I can strongly relate to. It sucks to not be able to make money from your open-source work. It really sucks when companies are using your work to make
money for themselves. If a solution presents itself, it’s tempting to jump at it. But the Commons Clause doesn’t present a solution for supporting open-source software. It presents a framework for turning open-source software into proprietary software,” DeVault wrote in a blog post. Bengal agrees this could have been addressed by introducing a proprietary, non-open-source license. But after several open-source companies raised a need to address the unfair competition in the community, it was decided to add this clause as a way to provide a solid concept that could be applied to any underlying open-source project and protect it from being abused as a standalone product from cloud providers. He believes the community needs to recognize that the world has changed since the open-source concept was defined 20 years ago and that this concept doesn’t work in today’s reality. “It is time to reexamine the ethos of open source,” Salil Deshpande, managing director at Bain Capital Ventures and investor in companies like Redis Labs, provided in an email to SD Times. “Our view is that open-source software was never intended for cloud infrastructure companies to take and run as a service, keep the profits, and give very little back. That is not the original ethos of open source. Commons Clause is reviving the original ethos of open source. Academics, hobbyists, or developers wishing to use a popular open-source project to power a component of their application can still do so. But if you want to take substantially the same software that someone else has built, and offer it as a service, for your own profit, that’s not in the spirit of the open-source community. As it turns out, Commons Clause makes the source code not technically open source. But that is something we must live with, to preserve the original ethos.” But the OSI does not believe the concerns of the Commons Clause’s supporters warrant their actions. 
“The Commons Clause FAQ states that ‘[t]he Commons Clause was intended, in practice, to have virtually no effect other than force a negotiation with those who take predatory commercial advantage of open source development,’ however at no point have I seen a statement from either the Commons Clause creators nor any project(s) applying it that they attempted other approaches to encourage collaboration from those they see to be taking ‘predatory commercial advantage’ of their projects,” Brasseur wrote. “For instance, Redis recently applied the Clause to a few of their modules due to ‘…today’s cloud providers… taking advantage of successful open source projects and repackaging them into competitive, proprietary service offerings.’ Their statement on the change does not say, ‘We approached the cloud providers and asked them to collaborate, but they refused,’ or even ‘We approached the cloud providers and asked why they do not collaborate and how we can improve this experience for them.’ From the outside looking in, it does not appear as though Redis tried to encourage collaboration before throwing a tantrum and relicensing to this proprietary license. That’s not how to be a good free and open source citizen.” z
How the Commons Clause arose
The Commons Clause was drafted earlier this year by lawyer Heather Meeker and a group of developers who felt open-source projects were under an enormous amount of pain and pressure to keep up with today’s constantly changing business world. “It wasn’t created to end open source, but start a conversation on what we can do to meet the financial needs of commercial software projects and the communities behind them,” according to its website.
The clause is intended to satisfy the needs of business and legal requirements without closing source to open-source code. By the Open Source Initiative’s definition, the Commons Clause is not an approved open-source license. Instead, the developers call it a license condition meant to still include free access, the freedom to modify, and the freedom to re-distribute, but it does have its restrictions.
“The Commons Clause was intended, in practice, to have virtually no effect other than force a negotiation with those who take predatory commercial advantage of open source development. In practice, those are some of the biggest technology businesses in the world, some of whom use open source software but don’t give back to the community. Freedom for others to commercialize your software comes with starting an open source project, and while that freedom is important to uphold, growth and commercial pressures will inevitably force some projects to close,” the website states. “The Commons Clause was not designed to restrict code sharing or development, but preserves the rights of developers to benefit from commercial use of their work. However, those that adopt the Clause should understand the broader implications of making a license change and commitments to source availability.”
The promoters say they are dedicated to open source. “Some people believe that all software must be open source, and they will never condone anything else. But in reality, there are lots of models for licensing software.
Commons Clause is just one alternative,” the website states. z
Defining software quality metrics for Agile, DevOps
BY IAN C. SCHAFER
In a Tricentis-commissioned report from Forrester released in July, “The Definitive Software Quality Metrics For Agile+DevOps,” surveyors found that it’s a common trait of companies that have seen the most success from Agile and DevOps adoption that they have made another operational transition. These companies have moved on from considering “counting” metrics — for instance, whether you’ve run tests an adequate number of times — as key indicators of success, to “contextual” metrics — whether the software meets all of the requirements of the user experience — which are better indicators of success.
This conclusion is based on the report’s findings that of the 603 enterprise Agile and DevOps specialists surveyed, those referred to as “DevOps leaders” are significantly better at measuring success and are much further along in their pursuit of end-to-end automation, a key ingredient in meeting the rapid development and delivery expected of modern software cycles.
It seems that somewhere along the way, there is a bottleneck holding some businesses back while the DevOps leaders speed ahead; fifty-nine percent of respondents placed the blame squarely on manual testing.
This focus on automated end-to-end functional testing is among four other best practices that the report’s findings say are almost universally employed by DevOps leaders. They are:
• Properly allocated testing budgets and a focus on upgrading testing skills
• The implementation of continuous testing “to meet the demands of release frequency and support continuous delivery”
• The inclusion of testers as part of integrated delivery teams
• A shift left to implement testing earlier in development
But businesses shouldn’t be quick to rush the gates on implementing test automation across the board — there are some risks involved. “Automating the software development life cycle is an imperative for accelerating the speed and frequency of releases,” Forrester wrote in its report. “However, without an accurate way to measure and track quality throughout the software development life cycle, automating the delivery pipeline could increase the risk of delivering more defects into production. And if organizations cannot accurately measure business risk, then automating development and testing practices can become a huge danger to the business.”
[Table: Top five most important metrics to manage risk suggested by Agile+DevOps leaders, split by development stage (Build, Functional validation, Integration testing and End-to-end regression testing). The per-stage rankings are broken out in the “Analysis of key development metrics for advanced Agile+DevOps firms” table below.]
Base: 157 enterprise Agile and DevOps decision makers in North America, EMEA, and APAC that use Agile+DevOps best practices
Source: A commissioned study conducted by Forrester Consulting on behalf of Tricentis, March 2018
All businesses, even DevOps leaders, should pay heed. According to Forrester, 80 percent of respondents believe that they “always or often deliver customer-facing products that are within acceptable business risk.” This is despite the fact that only 29 percent of DevOps “laggards” think that delivering software within acceptable business risk is “very important,” and only 50 percent of leaders think so. Forrester speculates that this finding must be a mass overestimation.
“Given that most firms, even the ones following continuous testing best practices, admit that their software testing processes have risk gaps and do not always give accurate measures of business risk, it stands to reason that the 80% who say they always or often deliver within acceptable risk may be overestimating their capabilities,” Forrester wrote in the report.
Diego Lo Giudice, vice president and principal analyst at Forrester and the lead on the report, found this disparity unusual. “I think it’s more intentional than truth — they get it that business risk is of paramount importance and have the illusion of addressing it,” Lo Giudice said. “It’s a misinterpretation of what
real business risk is perhaps? I think it’s hard to keep testing and quality focus on business risk.” Wayne Ariola, chief marketing officer at Tricentis, was a little more sure of where this disconnect lay. “This is your classic geek gap,” Ariola said. “Business leaders assume that the definition of risk is aligned to a business-oriented definition of risk, but the technical team has it aligned to a very different definition of risk. This mismatch is your primary cause of overconfidence. For example, assume a tester saw that 100 percent of the test suite ran with an 80 percent pass rate. This gives you no indication of the risk associated with the release. The 20 percent that failed could be an absolutely critical functionality, like security authentication, or it could be trivial,
like a UI customization option that’s rarely used.” Avoiding this pitfall comes with the preparedness provided by knowing which metrics to watch to know whether a process is ready for automation. But there is yet another gap that sees less developed DevOps and Agile initiatives tracking metrics that used to mean a lot, but have very little value once a digital transformation is fully underway. “‘Counting’ metrics lost their value because the question changed for DevOps,” Ariola said. “It was no longer about how much testing you completed in the allotted time. The focus shifted to whether you could release and what business risks were associated with the release. If you can answer this question with five tests,
‘Teams focus on the amount of output, or read code, not necessarily valuable outcomes for the business or the right code.’ —Diego Lo Giudice
that’s actually better than answering it with 5,000 tests that aren’t closely aligned with business risk. Count doesn’t matter — what’s important is — one, the ability to assess risk and two, the ability to make actionable decisions based on test results.” Forrester’s Lo Giudice also elaborated on what “outdated” metrics were and where focus should be shifted. “Measuring productivity is one example,” Lo Giudice said. “Teams focus on the amount of output, or read code, not necessarily valuable outcomes for the business or the right code. Agile focuses on both building the right things and [evaluating] things right, so value metrics become more important than productivity of teams, and so related productivity metrics lose relevance. Quality, however, remains always of paramount importance. “ Quality can severely slip if proper metrics aren’t tracked and businesses continue operating on old metrics while trying to implement modern automated testing initiatives. Businesses need to be aware of those risks, Tricentis’ Ariola explained. “We’ve seen many organizations that are severely over-testing,” Ariola said. “They’re running massive test suites against minor changes—self-inflicting delays for no good reason. To start, organizations must take a step back and really assess the risks associated with each component of their application set. Once risk is better understood, you can identify practices that will mitigate those risks much earlier in the process. This is imperative for speed and accuracy.” In its report, Forrester lists which metrics and factors are rated the highest in importance by the successful DevOps leaders. The figure below, taken from the report, breaks down the stages of development and shows which factors these leaders considered a priority during each stage. Productivity, while still an obvious goal of a successful business, is not precisely where Agile and DevOps specialists should be putting their thoughts. 
“Once risk is better understood, you can identify practices that will mitigate
Analysis of key development metrics for advanced Agile+DevOps firms

Build phase (top five most important metrics):
• Successful code builds
• Total number of defects identified
• Unit test pass/fail
• Number of automated tests prioritized by risk

Functional validation phase (top five most important metrics):
• Pass/fail rate
• Requirements covered by tests
• Count of critical functional defects

Integration testing phase (top five most important metrics):
• API test pass/fail rate
• API bug density
• Requirements covered by API tests
• API functional code coverage/API risk coverage (tie)
• New API defects found

End-to-end testing phase (top five most important metrics):
• Test case coverage
• Number of test cases executed
• Requirements covered by tests
• Percent of automated end-to-end test cases
• Total number of defects identified in test

Base: 157 enterprise Agile and DevOps decision makers in North America, EMEA, and APAC that use Agile+DevOps best practices. Source: A commissioned study conducted by Forrester Consulting on behalf of Tricentis, March 2018
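Two metrics recur across the figure’s phases: pass/fail rate and requirements covered by tests. Both are simple to compute once test results are recorded in a structured form. A minimal sketch in Python, where the record fields are illustrative rather than taken from any particular tool:

```python
def summarize(results, requirements):
    """Roll raw test results up into two of the figure's recurring
    metrics: pass/fail rate and requirements covered by tests.

    results: list of dicts like
        {"test": "t1", "passed": True, "requirements": ["R1"]}
    requirements: the full list of requirement IDs for the release.
    """
    passed = sum(1 for r in results if r["passed"])
    covered = set()
    for r in results:
        covered.update(r["requirements"])
    return {
        "pass_rate": passed / len(results) if results else 0.0,
        "requirements_covered": len(covered & set(requirements)),
        "requirements_total": len(set(requirements)),
    }
```

Reporting coverage against the requirement list, rather than raw test counts, is what lets these numbers speak to business risk instead of activity.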
Productivity will then follow.

“In the past, when software testing was a timeboxed activity at the end of the cycle, we focused on answering the question, ‘Are we done testing?’” Ariola said. “When this was the primary question, counting metrics associated with the number of tests run, incomplete tests, passed tests, failed tests, etc. drove the process and influenced the release decision. As you can imagine, these metrics are highly ineffective in understanding the actual quality of a release. In today’s world, we have to ask a different question: ‘Does the release have an acceptable level of risk?’” z
Ensure security and quality at speed BY LISA MORGAN
Today’s companies must become software companies to keep pace with competitive pressures and customer demands. As organizations become increasingly software-enabled, their footprints are extending out to cloud environments and the Internet of Things (IoT), increasing application complexity and the associated risks. With Synopsys, software teams can avoid the usual trade-offs between faster time-to-market imperatives, security and quality. Instead, they can achieve all three simultaneously.

Synopsys has a 30-year history of helping companies improve the stability and robustness of their innovations. In fact, Gartner’s 2018 Magic Quadrant for Application Security Testing and the Forrester Wave 2018 for both Static Application Security Testing (SAST) and Software Composition Analysis (SCA) recognize Synopsys as an industry leader. Synopsys’ leading software integrity tools and services offerings help customers build security into DevOps and throughout the SDLC. More than 4,000 organizations around the globe depend on Synopsys to build smart, secure software, including financial services applications, software for IoT and medical devices, embedded software for automobiles, and software anywhere that is mission critical.

“Businesses are on a mission to improve their software development and delivery processes,” said Andreas Kuehlmann, general manager, Software Integrity Group at Synopsys. “Our tools and Professional Services help them understand their current state, where they need to go and what they need to do to get there.”

Most of today’s security vulnerabilities exist at the application layer, primarily because security has not been addressed adequately in development. Meanwhile, companies are accelerating innovation using more open source software and third-party libraries than they have in the past. Greater reliance on third-party software increases developer productivity but also software complexity and, in turn, the number of potential vulnerabilities. Hackers take advantage of the security gaps to facilitate exploits.

“Our customers now have embedded IoT applications that are connected to the cloud. To effectively implement security, they have to build it in,” said Kuehlmann. “When you build security in, you can move faster. We allow developers to catch vulnerabilities as they write code so there are fewer issues to deal with later in the SDLC.”

Deliver software faster
Faster application development and deployment necessitate faster processes. While agile practices and DevOps help, developers are being held accountable for application integrity. As a result, quality and security are shifting left so the number of potential issues can be reduced by design.

Like testing earlier and often, shifting security left saves time and money. “If your application security depends on the traditional security cycle and security team, you’re losing valuable time and creating unnecessary work for everyone,” said Kuehlmann. “With our help, developers are building and deploying more secure code, and DevOps teams are improving the security aspect of automated processes. All that enables portfolio managers to do a better job of risk management.”

Extend quality to compliance
Quality assurance has also become a lifecycle practice, driven by time-to-market imperatives. While functional quality remains important, the advent of IoT devices, including connected cars, medical devices, wearables and critical infrastructure, means that quality must also extend to compliance.

“You have to build compliance in to keep pace with what’s happening in the market,” said Kuehlmann. “For example, automotive companies have had eight-year product cycles, but product delivery speed is now everything. They’re abandoning waterfall processes for agile, and they have to ensure more complex forms of compliance to meet the regulatory demands associated with smart and self-driving cars.”

“Software development organizations need to move to modern tools and automation if they want to simultaneously ensure quality, security and faster time to market,” said Kuehlmann. “Speed is essential to being competitive in the market.”

Software organizations can no longer afford to weigh speed against quality and security or make trade-offs. With Synopsys, they can achieve all three goals simultaneously, which enables them to spend more time focusing on innovation.

“At the end of the day,” said Kuehlmann, “our north star is simple: Help organizations build secure, high-quality software, and help them do it faster.”

Learn more at www.synopsys.com/software-integrity. z
Microsoft releases Azure DevOps
BY CHRISTINA CARDOZA

Microsoft has announced Azure DevOps, introducing new tools to help software development teams collaborate to deliver high-quality solutions faster. Azure DevOps was first revealed last year during Microsoft’s Connect(); 2017 conference. According to the company, Azure DevOps represents an evolution of Microsoft’s Visual Studio Team Services (VSTS). VSTS users will be upgraded to Azure DevOps with no loss of functionality. “The end to end traceability and integration that has been the hallmark of VSTS is all there. Azure DevOps services work great together. Today is the start of a transformation and over the next few months existing users will begin to see changes show up,” Jamie Cool, director of program management for Azure DevOps, wrote in a post.

The new solution includes Azure Pipelines, Azure Boards, Azure Artifacts, Azure Repos and Azure Test Plans. Azure Pipelines is designed to help with CI/CD initiatives and works with any language, platform, or cloud, according to the company. Azure Boards provides tracking capabilities, Kanban boards, backlogs, team dashboards, and custom reporting capabilities. Azure Artifacts features Maven, npm, and NuGet package feeds. Azure Repos includes unlimited cloud-hosted private Git repos. Lastly, Azure Test Plans is a planned and exploratory testing solution.

“Working with our customers and developers around the world, it’s clear DevOps has become increasingly critical to a team’s success. Azure DevOps captures over 15 years of investment and learnings in providing tools to support software development teams. In the last month, over 80,000 internal Microsoft users and thousands of our customers, in teams both small and large, used these services to ship products to you,” wrote Cool. z

Report: DevOps initiatives are paying off
BY CHRISTINA CARDOZA

The idea of having developers and operations work together toward one goal, achieving velocity and high-quality software, sounds good in theory, but how has it been working in practice? A newly released report reveals that DevOps practices are paying off for organizations in terms of both performance and quality outcomes.

The 2018 Accelerate State of DevOps report comes from DevOps Research and Assessment (DORA) in collaboration with Google Cloud. About 1,900 professionals worldwide participated in this year’s study. The report is designed to find which issues matter most to technical professionals and find new ways organizations and teams can improve. The survey looks at IaaS and PaaS, monitoring and observability, databases, testing, workflow, culture, security and reliability.

According to DORA, this is the first time in five years that the research has expanded to include availability. “This addition improves our ability to explain and predict organizational outcomes and forms a more comprehensive view of developing, delivering, and operating software,” the report states. “We call this new construct software delivery and operational performance, or SDO performance. This new analysis allows us to offer even deeper insight into DevOps transformations.”

When it comes to SDO performance, DORA finds it unlocks competitive advantages such as profitability, productivity, market share, customer satisfaction and completed mission goals. To capture SDO performance, the team looked at global outcomes rather than output. The four main measures of SDO performance the report looks at are deployment frequency, lead time for changes, time to restore service and change fail rate.

When looking deeper into teams, the report found high, medium and low performers. This is the first year the data showed a fourth high-performance group: elite performers. This new category exists for two reasons: elite performers are more likely to deploy on demand, and they take less than one hour for lead time for changes. In comparison, high performers deploy between once per hour and once per day and take between one day and one week to make changes. Low-performing teams deploy between once a week and once a month and take between one and six months to make changes. In addition, the elite performers deploy code 46 times more frequently than low performers, and are 2,555 times faster. They have a 7 times lower change failure rate and are 2,604 times faster to recover from incidents.

When it comes to availability, which in this report means the ability to make and keep promises and assertions about a software product or service, the report found availability correlated with the performance profiles. Elite performers are 3.55 times more likely to have strong availability practices, for example. The report also found organizational performance is measured through profitability, productivity, market share, number of customers, satisfaction and quality of services.

Other key findings were that technical practices such as monitoring, observability and continuous testing drive high performance, and that industry doesn’t matter when it comes to achieving high performance for software delivery. z
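The four SDO measures from the DORA report (deployment frequency, lead time for changes, time to restore service and change fail rate) can all be derived from a simple log of deployments. The sketch below assumes a made-up record format, not any particular tool’s schema:

```python
from datetime import datetime, timedelta
from statistics import median

def sdo_metrics(deploys):
    """Compute the four SDO measures from deployment records.

    Each record (illustrative field names): commit_at/deployed_at
    datetimes, a failed flag, and restored_at for failed deploys.
    """
    lead_times = [d["deployed_at"] - d["commit_at"] for d in deploys]
    failures = [d for d in deploys if d["failed"]]
    span_days = (max(d["deployed_at"] for d in deploys)
                 - min(d["deployed_at"] for d in deploys)).days
    return {
        # deployment frequency over the observed window
        "deploys_per_day": len(deploys) / max(span_days, 1),
        # lead time for changes: commit to running in production
        "median_lead_time": median(lead_times),
        # change fail rate: share of deploys needing remediation
        "change_fail_rate": len(failures) / len(deploys),
        # time to restore service after a failed change
        "median_time_to_restore": (
            median(d["restored_at"] - d["deployed_at"] for d in failures)
            if failures else None),
    }
```

Tracking these four from real deployment data, rather than self-assessment, is what lets a team see which performance band it actually falls into.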
GET TOGETHER. GO FASTER. Take your DevOps journey even further with three full days of immersive learning & transformational leadership stories.
2018 Featured Speakers Visit events.itrevolution.com/US for the full list of speakers
JASON COX Director, System Engineering The Walt Disney Company
CORNELIA DAVIS Sr. Director of Technology Pivotal
COURTNEY KISSLER Vice President, Nike Digital Platform Engineering Nike
THOMAS LIMONCELLI SRE Manager Stack Overflow, Inc.
DR. CHRISTINA MASLACH Professor of Psychology, Emerita University of California, Berkeley
ROSALIND RADCLIFFE Distinguished Engineer, Chief Architect for DevOps for z Systems IBM
DR. TOPO PAL Senior Director & Sr. Engineering Fellow Capital One
JEFFREY SNOVER Technical Fellow and Chief Architect for Azure Storage & Cloud Edge Microsoft
KEANEN WOLD Manager, DevOps Transformation Delta Airlines
SAVE $300 WHEN YOU REGISTER WITH PROMO CODE DEVOPS300* *Promo code is limited to first 200 registrants — register now to take advantage of this discount
SD Times News on Monday
The latest news, news analysis and commentary delivered to your inbox!
• Reports on the newest technologies affecting enterprise developers
• Insights into the practices and innovations reshaping software development
• News from software providers, industry consortia, open source projects and more
Read SD Times News On Monday to keep up with everything happening in the software development industry. SUBSCRIBE TODAY!
DEV OPS SHOWCASE
BY DAVID RUBINSTEIN

DevOps is one of those overarching terms that encompasses a number of technologies and techniques. Some consider continuous integration and delivery the cornerstone of DevOps. Others say you can’t keep pace with Agile development and deployment without automated processes that kick off tests, builds and performance alerts.

On the development side, some organizations are taking test-driven and behavior-driven approaches. They’re shifting testing to the left, to find defects more quickly so that better quality code is delivered. On the operations side, security remains the number one priority. With teams iterating their software so quickly, and using new architectures such as microservices and containers, hackers have more moving parts and end points to target.

The adoption of DevOps is growing, and most organizations are using at least some of the methods described above. Others who are not yet “doing DevOps” are at least exploring some of these techniques to see what value they can gain. DevOps, though, is not a one-size-fits-all solution. Organizations need to evaluate their processes, see where benefits could be derived, and implement those methods and technologies that deliver real value to the business.

This showcase offers a look at some of the software providers in this market and what their solutions provide, in the hopes of giving our readers help as they head down the DevOps road. n
XebiaLabs powers enterprise DevOps

Software teams implement DevOps in different ways, whether they’re starting from scratch or transitioning from Waterfall practices. Also, what works for small teams doesn’t tend to scale well, especially in large enterprises building and maintaining different types of applications. XebiaLabs enables DevOps at any scale, so software teams have the flexibility they need to use their favorite tools and methodologies while the enterprise gets the visibility, reporting, analytics and auditing capabilities it needs.

“It’s relatively easy for a small DevOps team to succeed, but scaling that success can be really challenging,” said Tim Buntel, VP of Products at XebiaLabs. “We allow you to take all the great work being done by the community and leverage DevOps at enterprise scale.”

Gartner’s recent Application Release Orchestration (ARO) Magic Quadrant and the Forrester Wave for Continuous Delivery and Release Automation (CDRA) both recognize XebiaLabs as a leader. “Industry analysts consider this category critical for scalable enterprise software delivery,” said Buntel. “We’re very pleased they’ve responded so positively to our products over the years.”

Equally impressive is what large enterprises are able to accomplish with XebiaLabs. For example, multinational banking and financial services company Societe Generale reduced its software delivery cycles by 45% while lowering operating costs by 10%. Satellite company DigitalGlobe reduced the time it takes to launch a satellite from three years to one year. Meanwhile, GE Power Systems reduced the cost of delivering software to production by 45% in less than six months. Other customers include Bank of America, Toyota and TD Bank. And, following a year of 100% growth, XebiaLabs just received $100 million in Series B funding.

Orchestrate DevOps across the enterprise
Enterprise and individual software team requirements shouldn’t conflict, but they often do. Different software teams use different tools, which can prevent the enterprise from getting the breadth and depth of visibility and tracking capabilities it needs.

“Teams should be able to configure their own process in a way that suits them, but you also need enterprise controls as you scale,” said Buntel. “It’s like an orchestra where you’ve got a lot of expert musicians but you still need a conductor who can see the big picture and get the best performance from each of them.”

To enable that, XebiaLabs complements whatever tools and architectures individual teams use while enabling a consistent approach to DevOps. Without an orchestration layer, a lot of time and money is wasted. “If scaling DevOps was as simple as just buying a couple of products, we’d have that covered. Because of the complexity, it’s very difficult to get all of those tools working together smoothly,” said Buntel. “We complement all of your existing tools and can coordinate any combination of them. No matter what CI, Agile planning, change management or automated testing tools you’re using, and no matter what kind of platform you’re on, XebiaLabs can accommodate all of the different combinations.”

Similarly, XebiaLabs supports the breadth of application types, from monolithic Java applications that rely heavily on traditional SQL databases to 100% cloud-native serverless AWS Lambda functions with API endpoints that have no SQL backend. The flexibility allows organizations to evolve their tools and processes at their own pace, as well as adopt new tools, processes and methodologies with higher levels of confidence.

“A lot of our customers start with a mix of automated and manual processes. As they automate more processes, we can support that, so a task that was manual but is now automated fits in the same kind of pipeline architecture,” said Buntel. “We consistently reduce friction as your DevOps capabilities mature and technologies, architectures and processes evolve.”

Unify all stakeholders
Software stakeholders in large enterprises are both technical and non-technical. If the entire DevOps process is written in scripts, it’s hard to translate that into something meaningful that non-technical and less-technical stakeholders can use to provide value.

“If you’re in a tiny startup with a two-pizza team, generally, all of the folks involved tend to be fairly technical, so they can all script things up and understand what everyone else is doing,” said Buntel. “As you scale, the challenge is trying to achieve a common understanding when you also have to involve business owners, testers, regulatory and compliance specialists and security professionals.”

With XebiaLabs, enterprises don’t have to make trade-offs. Its ease of use makes it accessible to anyone regardless of their skill set. At the same time, developers can maintain continuous delivery pipeline components as code artifacts.

Learn more at www.xebialabs.com. n
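XebiaLabs’ orchestration layer is proprietary, but the “conductor” idea described above, one interface coordinating whatever tool each team prefers, can be sketched generically. Everything here (the class, stage names, and callable interface) is illustrative, not the product’s API:

```python
class ReleasePipeline:
    """A toy 'conductor': each stage wraps whatever tool a team prefers
    (CI server, test runner, change management) behind one callable
    interface, so the whole release can be run, tracked and reported
    on uniformly."""

    def __init__(self):
        self.stages = []       # (name, runner) pairs in release order
        self.log = []          # (name, succeeded) audit trail

    def stage(self, name, runner):
        self.stages.append((name, runner))
        return self            # allow chained configuration

    def release(self, context):
        for name, runner in self.stages:
            ok = runner(context)
            self.log.append((name, ok))
            if not ok:         # halt the release on first failure
                return False
        return True
```

Because every tool is wrapped behind the same interface, a manual approval step and an automated test run occupy the same kind of slot, which mirrors Buntel’s point about manual tasks later becoming automated without changing the pipeline’s shape.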
Compuware’s bridge to enterprise DevOps

Large organizations often have development teams dedicated to mainframe and non-mainframe development. Both teams must embrace DevOps practices to meet time-to-market and quality imperatives, but they’re using different tools and operating at different speeds. Compuware bridges the gap with a comprehensive DevOps toolchain for enterprise DevOps. With it, software organizations can build, test, manage and deploy all of their code inside an Eclipse-based IDE.

“If you’re going to keep pace with market demands, you need to get software changes and updates out faster regardless of what kinds of applications you’re building,” said David Rizzo, VP of Product Development at Compuware. “We allow you to innovate on any system using modern tools in a modern way.”

Compuware’s DevOps offerings include Topaz, a modern Agile platform for mainframe development that includes an Eclipse-based IDE; Topaz for Total Test, an automated unit testing solution; and ISPW, a source code management and deployment automation solution. The tools enable any developer, regardless of experience, to understand and work on any program, no matter how old or complex.

Topaz
A recent Compuware-commissioned Forrester Consulting study found that 64% of respondent organizations plan to run more than half their mission-critical workloads on a mainframe by 2019. Meanwhile, they are only replacing one-third of retiring experts. Organizations need force-multiplying mainframe DevOps tools that enable them to manage growing workloads with fewer workers.

Topaz is that force multiplier and the foundational element of Compuware’s DevOps toolchain. Its Eclipse-based IDE serves as a front end to Compuware’s mainframe products, enabling users to interact with those tools in a modern environment, which is critical for mainframe-inexperienced developers. Topaz integrates with SonarSource SonarLint, enabling the automatic discovery of defects introduced into COBOL code. An integration with Jenkins lets organizations publish code metrics into SonarSource SonarQube to track code quality and technical debt.

“Our DevOps toolchain includes familiar cross-platform partner tools from SonarSource, XebiaLabs, Splunk and many others so firms can have a common toolchain across the enterprise,” said Rizzo. Customers can now install Topaz in their AWS instances, providing an unlimited number of users with a common experience.

Topaz for Total Test
Coding and testing go hand in hand. Topaz for Total Test automatically generates unit test cases as developers write code and debug changes. The tool also saves the test data so testing can be done without a live system.

“Instead of relying on static or production data, which may be out of date, you’re able to keep test data up to date. You can create individual test cases or multiple test cases simply by manipulating the data,” said Rizzo. “Those tests can be added to an automated test suite so they can run any time they need to and ensure that a developer’s changes aren’t hurting code quality.”

Compuware recently acquired XaTester to enrich Topaz for Total Test’s testing capabilities. Specifically, developers can take advantage of unit testing, system testing and regression testing capabilities that help them improve quality at speed.

ISPW
Large enterprises with mainframes tend to use monolithic source code change management systems that support Waterfall development practices. The organizations moving to DevOps want to achieve continuous integration and continuous delivery.

“We acquired ISPW because our customers want their developers to be more productive,” said Rizzo. “ISPW allows for multiple code paths and integrated changes in the same code.” ISPW continuously merges code changes so different developers can work on the same piece of code simultaneously without losing their respective changes or introducing incompatible changes. With ISPW, developers can deploy software changes through multiple testing stages as well as across disparate production environments concurrently.

“Applications aren’t either mainframe or distributed anymore; they’re multiplatform applications,” said Rizzo. “ISPW enables concurrent multi-platform deployments, tracks all the pieces and allows you to do rollbacks. That way, if there is an issue with a deployment, rollbacks are fast and simple to do.”

Compuware developers have been able to double their code output over the past three years. Meanwhile, customer requests for bug fixes have diminished year-over-year.

“Public and private companies around the world are all facing the same challenge: their mainframe is still the most important piece of hardware they own and they want to implement DevOps,” said Rizzo. “Topaz, Topaz for Total Test and ISPW collectively ensure the quality and timely delivery of all your enterprise software.”

Learn more at www.compuware.com. n
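Topaz for Total Test’s internals are proprietary, but the record-and-replay idea it describes, capturing live data once so tests can run without a live system, is a general pattern that can be sketched in a few lines. The class and file format below are invented for illustration:

```python
import json
import os

class RecordingSource:
    """Wrap a live data source: answers are fetched once and saved to a
    JSON file, so later test runs replay the captured data instead of
    touching the live system (names here are illustrative)."""

    def __init__(self, fetch, path):
        self.fetch, self.path = fetch, path
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)   # replay mode: load fixtures
        else:
            self.data = {}                 # record mode: start empty

    def get(self, key):
        if key not in self.data:           # first use: hit the live source
            self.data[key] = self.fetch(key)
            with open(self.path, "w") as f:
                json.dump(self.data, f)    # persist as reusable test data
        return self.data[key]
```

Because the captured file is just data, it can be edited by hand to create variant test cases, which is the same leverage Rizzo describes in manipulating saved test data.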
Force Multiplier \ˈfȯrs ˈməl-tə-ˌplī(-ə)r\ n: A tool that dramatically amplifies your effectiveness.
73% of customer-facing apps are highly dependent on the mainframe. Yet 2 out of 3 lost mainframe positions remain unfilled, putting quality, velocity and efficiency at risk. You need Compuware Topaz as your Force Multiplier to: • Build and deploy with agility • Understand complex applications and data • Drive continuous, automated testing Learn more at compuware.com/force-multiplier compuware.com | @compuware | linkedin.com/company/compuware
The Mainframe Software Partner For The Next 50 Years
Go DevOps or Go Agile? Go both with SAFe®.
• Culture of shared responsibility
• Automation of Continuous Delivery Pipeline
• Lean flow accelerates delivery
• Measurement of everything
• Recovery enables low risk releases

Take the CALMR Approach
SAFe®, the world’s leading framework for enterprise agility, embraces DevOps and Release on Demand. Learn more at scaledagile.com/devops © Scaled Agile, Inc.
A CALMR approach to DevOps

Digital disruption impacts organizations across industries. To stay relevant, they need to implement change faster and provide customers with continuous value. While many enterprises are focused on continuous delivery, their ultimate goal is to provide value on demand. Using the Scaled Agile Framework (SAFe), organizations can provide high-quality value to customers in the shortest sustainable lead time. SAFe is a freely available knowledge base of proven, integrated principles and practices for Lean, Agile and DevOps.

Extend DevOps to the Full Value Stream
Value delivery starts with an idea or a hypothesis about what customers need. Those needs must be continually explored and evaluated to ensure that the end product aligns with customers’ desires. Meanwhile, Agile teams need to continuously build and integrate those systems and solutions, and continuously deploy them to production, where they can be validated and await the business decision to release.

“Most DevOps transformations focus on the work that happens from the time a developer checks in code to the time it’s in production. But that’s just part of the work; it’s missing everything it takes to mature an idea, with development on the one side, and everything it takes to operate and measure a solution on the other side,” said Inbar Oren, SAFe fellow and principal consultant at Scaled Agile, Inc., the provider of SAFe. “Until you’ve evaluated the hypothesis based on real user data, you can’t really understand whether the work you’ve done provides value or not.”

A common mistake is to attempt an Agile transformation without DevOps or vice versa. Agile without DevOps creates quality work that cannot be delivered to the customer fast enough, so it fails to achieve the ultimate economic results. Conversely, DevOps without Agile provides an underutilized pipeline at best. At worst, Agile teams deliver the wrong things faster.

“You need both Agile and DevOps to achieve value,” said Oren. “This kind of effort requires not just development and operations, but rather everyone on the value stream, from product management, to audit, compliance, security, quality, testing, and of course, development and operations.”

Take a CALMR approach to DevOps
Most DevOps teams focus on automation, but effective DevOps requires much more. Scaled Agile recommends a holistic approach to DevOps that includes Culture, Automation, Lean Flow, Measurement, and Recovery (CALMR).

Culture. DevOps is built on a culture of collaboration between silos. SAFe provides the construct of an Agile Release Train (ART) — a team of Agile teams — who work together to build, deliver, and operate the solution. The mindset of risk tolerance and knowledge sharing enables everything else.

Automation. Without automation, continuous delivery pipelines can’t be efficient and effective. DevOps relies heavily on automation to provide speed, consistency, and repeatable processes and environment creation.

Lean Flow. Working in small batches and managing work-in-process limits and queue length is crucial. Large batches and bottlenecks slow down delivery, so identifying and eliminating them is a key activity for a Lean enterprise.

Measurement. DevOps teams need to continuously evaluate the delivery pipeline and its associated results. Enterprises must build telemetry that allows them to find the bottlenecks in flow as well as identify whether the value they are providing their customers is what they need.

Recovery. Fast delivery is too risky if procedures for fast recovery aren’t built and failures aren’t rehearsed. Organizations must consider how they can react quickly to problems by rolling back or fixing forward.

Master SAFe DevOps
One of the hardest questions enterprises face as they undergo a DevOps transformation is deciding where to start. The SAFe DevOps course answers that very question. The experiential class allows attendees to map their existing delivery pipeline and identify the time and quality bottlenecks that impact flow using SAFe. Course participants also use the SAFe DevOps Health Radar to understand their level of maturity in each of the 16 subdimensions of the pipeline. They then proceed to create a future-state pipeline by identifying skills across the four dimensions of the delivery pipeline — Continuous Exploration, Continuous Integration, Continuous Deployment, and Release on Demand — that will improve the identified bottlenecks. By the end of class, the teams have identified the top three improvement items that would provide the best results in their environment.

“People find bottlenecks in places they don’t expect,” said Oren. “For example, a mature DevOps team discovered that some of the early Software Development Life Cycle (SDLC) items could be accelerated from 30 minutes to 5 minutes, but their long release approval process was creating a wait time of 28,000 minutes. Ultimately, they realized they were trying to improve the wrong thing.”

Learn more at www.scaledagile.com/devops. n
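The Lean Flow guidance above (small batches, WIP limits, short queues) rests on a standard queueing result, Little’s Law: average work-in-process equals throughput times average lead time. Rearranged, it shows why capping WIP shortens delivery even with no added capacity. A one-function sketch:

```python
def average_lead_time(avg_wip, throughput_per_day):
    """Little's Law (L = lambda * W) rearranged: lead time W = L / lambda.
    At constant throughput, halving work-in-process halves lead time."""
    return avg_wip / throughput_per_day
```

With 30 items in flight and 5 finished per day, an item spends 6 days in the system on average; limiting WIP to 15 cuts that to 3 days. The same arithmetic explains the 28,000-minute approval queue in Oren’s example: a long queue in one stage dominates lead time no matter how fast the other stages get.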
Tasktop improves end-to-end value flow

DevOps and Agile are common practices, but they’re not enough to ensure the timely delivery of value to the business. While DevOps improves software delivery from code commit to code release and Agile improves the ability to create value via improved delivery timeframes, complete insight into the value that software teams deliver tends to go missing. Also missing is a business-level view into what’s happening in the software delivery process.
“To stay competitive, you need a handle on the end-to-end value flow through each product’s software delivery value stream,” said Nicole Bryan, Tasktop’s VP of Product. “Software is not the core competency of most enterprises, yet software is critical to their success. Tasktop helps software delivery organizations generate more value by connecting and measuring end-to-end flow.”
“The first step to understanding, managing and measuring end-to-end flow is integrating your tool chain,” said Bryan. “Tasktop enables enterprises to integrate all of the tools and teams in the software delivery process. If you can see and measure flow, you can improve it,” Bryan added.

Integrate the DevOps tool chain
Large organizations rely on multiple tools to design, develop and operate software, with each tool designed to increase the productivity of a specific role. “Software is not just about writing code,” said Bryan. “There are tools for planning, prioritizing, analyzing and designing requirements, testing, scanning, and deploying.” Tasktop enables data exchange and synchronization across these tools to eliminate manual duplication, human errors and delays, and to gain unprecedented visibility into the entire software delivery process to help business and IT leaders identify costly bottlenecks and opportunities to improve.
“You can’t fix bottlenecks if you don’t have visibility into the entire flow,” said Bryan. “If you’re not integrating all tools and teams, it’s impossible to see what is actually going on.”
“Organizations must make assumptions about what they can do to improve, because they don’t have the complete picture. They may decide to improve their testing process, when actually it’s their design process that is the bottleneck. Without end-to-end visibility, they can’t know that. Tasktop shines a light on the entire system, to help them identify where investments will truly move the needle.”

‘The business thinks about the value they’re bringing customers and the business outcomes they are trying to achieve. IT thinks about how to get it done using existing resources.’

Prove value in business terms
One of the biggest challenges IT organizations face is finding a common language with the business. “The business thinks about the value they’re bringing customers and the business outcomes they are trying to achieve,” said Bryan. “IT thinks about how to get it done using existing resources. That requires breaking the work down into many technical tasks and implementation details. Tasktop helps IT see those details while abstracting them to concepts of value, which the business understands.”
“Software delivery organizations need to account for the business value they create, which is expressed in features delivered, defects fixed and security issues resolved,” said Bryan. “When viewed side by side with desired business outcomes, IT and business leaders can make joint decisions on how to improve, adjust and calibrate to meet objectives. That’s what value stream management is, and what IT leadership is looking for.”
Five key metrics
There are five Flow Metrics that measure how value flows through a product’s value stream. They are calculated on four Flow Items, the units of work organizations should want to measure: features, defects, debt, and risk. Any work a software delivery organization undertakes can be categorized as one of these core Items. The Flow Metrics are:
• Flow Velocity: The number of Flow Items of each type completed over a period of time. This metric is used to gauge whether value delivery is accelerating. Also referred to as throughput.
• Flow Distribution: The ratio of the four Flow Items completed in a particular window of time. This metric is used to prioritize specific types of work during specific time frames to meet a desired business outcome.
• Flow Time: The end-to-end time interval it takes for Flow Items to go from ‘work start’ to ‘work complete’, including active and wait times. This metric identifies when time to value is getting longer.
• Flow Load: The number of Flow Items in progress (active or waiting) within a value stream. This metric monitors over- or under-utilization of value streams.
• Flow Efficiency: The percentage of time Flow Items are actively worked on out of the total time elapsed. This metric identifies when waste is increasing or decreasing.
More than 1,000 organizations around the globe, including 43 of the Fortune 100 and 300,000 customers, rely on Tasktop to accelerate their software delivery capabilities. For more information, visit www.tasktop.com.
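For readers who want to experiment, several of these metrics can be sketched in a few lines of Python. This is an illustrative toy, not Tasktop code: the item fields and day-based units are invented, and Flow Load is omitted because it requires a snapshot of in-progress work rather than completed items.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical completed Flow Items; field names are invented for the sketch.
@dataclass
class FlowItem:
    kind: str        # "feature" | "defect" | "debt" | "risk"
    start_day: int   # day work started
    end_day: int     # day work completed
    active_days: int # days actively worked on (the rest is wait time)

def flow_metrics(done):
    velocity = len(done)                                # Flow Velocity
    distribution = dict(Counter(i.kind for i in done))  # Flow Distribution
    times = [i.end_day - i.start_day for i in done]     # Flow Time per item
    avg_flow_time = sum(times) / len(times)
    # Flow Efficiency: active time as a share of total elapsed time
    efficiency = sum(i.active_days for i in done) / sum(times)
    return {"velocity": velocity, "distribution": distribution,
            "avg_flow_time": avg_flow_time, "efficiency": efficiency}

items = [
    FlowItem("feature", 0, 10, 4),
    FlowItem("defect", 2, 6, 2),
    FlowItem("feature", 1, 9, 2),
    FlowItem("risk", 3, 8, 2),
]
m = flow_metrics(items)
```

Even on this toy data the pattern Bryan describes shows up: items spend far more time waiting than being worked on, which is exactly what a low Flow Efficiency number surfaces.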
GitLab powers entire DevOps life cycles

DevOps continues to gain traction, but most organizations are still unable to realize the full potential of its promise because their teams remain siloed using different tools. Most enterprise teams are trying to stitch together 10 or 15 different types of tools, only to discover they lack the visibility and control they need to deliver high-quality products faster.
“Enterprises face a tool chain crisis because it takes money and resources to get everything to work throughout the SDLC when you’re working with a collection of tools that weren’t designed to work together,” said Ashish Kuthiala, director of product marketing at GitLab.
GitLab’s single application for the entire DevOps life cycle helps its customers achieve a 200% faster DevOps life cycle by accelerating the entire SDLC, including planning, development, testing, security, integration, releasing, configuring, and monitoring. Using GitLab, teams are able to collaborate on smaller pieces of incremental change, see the same issues, and simultaneously track progress.
“GitLab enables Concurrent DevOps,” said Kuthiala. “Everyone wants to work collaboratively, but it’s really hard when you’ve got a complex tool chain and multiple teams. GitLab solves that problem by letting different teams work together from a single application, enabling full visibility, efficiency, and governance, which increases collaboration and efficiency among teams.”
GitLab is used by more than 100,000 organizations and millions of users globally. In less than a year, its staff has more than doubled and its valuation has quintupled. GitLab’s popularity and momentum recently attracted $100 million in Series D funding, which catapulted the company to $1 billion unicorn status. Like some other unicorns, GitLab has a unique culture, which in GitLab’s case is unusually transparent and allows everyone, including customers, to contribute to its core product offering.
Everyone can see and contribute to shaping the company’s strategy, roadmaps, meetings, and what’s being worked on. “We have a fully remote global workforce,” said Kuthiala. “And because we’re open source, we have more than 2,000 contributors, including customers, who help us further accelerate our velocity and the traction of our vision of being the only single application for all DevOps teams.”
Deliver value faster
GitLab’s end-to-end visibility enables organizations to seamlessly understand and improve cycle times and product quality. Further, GitLab is infrastructure- and language-agnostic. The company has built GitLab to work with cloud-native and multi-cloud architectures, including built-in Kubernetes integration. “It really doesn’t matter what your infrastructure is, whether it’s private, public, hosted or bare metal,” said Kuthiala. “GitLab abstracts the complexity for enterprises to deliver value to customers faster.”

‘Everyone wants to work collaboratively, but it’s really hard when you’ve got a complex tool chain and multiple teams.’ —Ashish Kuthiala

Automate DevOps and security
Effective DevOps requires high levels of automation, which GitLab enables through its Auto DevOps capability. For example, a single click triggers automated end-to-end processes including scanning the code, initiating the build, running tests (including security tests), configuring the software, deploying it and monitoring it in a matter of minutes. GitLab effectively enables shift-left practices, including security, in a single merge request where dev, QA, security, and operations teams can view any code changes in one place.
“GitLab is purpose-built for enabling the entire DevOps life cycle,” said Kuthiala. “With our Auto DevOps capability, users can just drop in the code and provision the destination infrastructure in just two steps.”
In January 2018, GitLab acquired Gemnasium, a service that alerts developers to known security vulnerabilities in open source libraries. That way, software teams don’t have to implement separate security products that further complicate the tool chain and cause yet more unnecessary delays.
“You need to integrate security right at the very beginning of the life cycle,” said Kuthiala. “GitLab has built-in static application security testing (SAST), dynamic application security testing (DAST), container scanning, dependency scanning as well as license management. Every line of code that is changed goes through comprehensive security tests before the code is deployed.”
Meeting compliance requirements is also easier with GitLab. Companies using disparate tools can require months to prepare for audits. With GitLab, the data is available in one place, which avoids desperate searches for missing information.
Ticketmaster has realized a 15X speed improvement using GitLab. A large financial institution is now releasing software six times per day versus once every two weeks. ExtraHop Networks has achieved “meaningful continuous integration” using GitLab by aligning divergent engineering workflows.
“GitLab’s value to the enterprise is that it removes all the complexity from underneath the hood, so that companies can deliver software faster and with higher levels of confidence,” said Kuthiala. Download the Continuous Integration Tools evaluation report to learn more.
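The shift-left behavior Kuthiala describes, in which every change passes security scans before it can be deployed, can be modeled abstractly. The following toy sketch is not GitLab code or configuration; the stage names and pass/fail results are invented to show the idea of a failed security gate halting deployment:

```python
# Toy model of a security-gated pipeline: stages run in order, and any
# failed gate (e.g. a vulnerability found by a scan) blocks the deploy.

def run_pipeline(stages):
    """Run (name, check) stages in order; stop before deploy on failure."""
    results = []
    for name, check in stages:
        ok = check()
        results.append((name, ok))
        if not ok:                        # a failed gate halts the pipeline
            results.append(("deploy", False))
            return results
    results.append(("deploy", True))      # all gates passed: safe to deploy
    return results

stages = [
    ("build", lambda: True),
    ("unit tests", lambda: True),
    ("SAST scan", lambda: True),
    ("dependency scan", lambda: False),   # simulated known-vulnerability hit
]
outcome = run_pipeline(stages)
```

Here the simulated dependency scan fails, so the deploy stage is recorded as blocked — the single-merge-request equivalent of a reviewer never being shown a “merge” button on insecure code.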
• Atlassian: Atlassian offers cloud and on-premises versions of continuous delivery tools. Bamboo is Atlassian’s on-premises option with first-class support for the “delivery” aspect of Continuous Delivery, tying automated builds, tests and releases together in a single workflow. For cloud customers, Bitbucket Pipelines offers a modern Continuous Delivery service that’s built right into Atlassian’s version control system, Bitbucket Cloud.
• Appvance: The Appvance IQ solution is
an AI-driven, unified test automation system designed to provide test creation and test execution capabilities. It plugs directly into popular DevOps tools such as Chef, CircleCI, Jenkins, and Bamboo.
• CA Technologies:
CA Technologies DevOps solutions automate the entire application’s life cycle — from testing and release through management and monitoring. The CA Service Virtualization, CA Agile Requirements Designer, CA Test Data Manager and CA Release Automation solutions ensure rapid delivery of code with transparency. The CA Unified Infrastructure Management, CA Application Performance Management and CA Mobile App Analytics solutions empower organizations to monitor applications and end-user experience to reduce complexity and drive constant improvement.
Chef Automate, the leader in Continuous Automation, provides a platform that enables you to build, deploy and manage your infrastructure and applications collaboratively. Chef Automate works with Chef’s three open source projects: Chef for infrastructure automation, Habitat for application automation, and InSpec for compliance automation, as well as associated tools.
CloudBees is the hub of enterprise Jenkins and DevOps, providing companies with smarter solutions for automating software development and delivery. CloudBees starts with Jenkins, the most trusted and widely adopted continuous delivery platform, and adds enterprise-grade security, scalability, manageability and expert-level support. The company also provides CloudBees DevOptics for visibility and insights into the software delivery pipeline. DevOptics provides performance metrics, portfolio-wide insights, CD platform monitoring, shared visibility, and real-time value stream.
• CollabNet VersionOne:
CollabNet VersionOne’s Continuum product brings automation to DevOps performance with performance management, value stream orchestration, release automation and compliance and audit capabilities. It enables teams to deliver faster with higher quality, reduce risks and track business value. In addition, users can connect to DevOps tools such as Jenkins, AWS, Chef, Selenium, Subversion, Jira and Docker.
Compuware is changing the way developers develop. Our products fit into a unified DevOps toolchain enabling cross-platform teams to manage mainframe applications, data and operations with one process, one culture and with leading tools of choice. With a mainstreamed mainframe, the mainframe is just another platform, and any developer can build, analyze, test, deploy and manage COBOL applications with agility, efficiency and precision.
Datical solutions deliver the database release automation capabilities IT teams need to bring applications to market faster while eliminating the security vulnerabilities, costly errors and downtime often associated with today’s application release process.
Dynatrace provides the industry’s only AI-powered application monitoring. Bridging the gap between enterprise and cloud, Dynatrace helps dev, test, operations and business teams light up applications from the core with deep insights and actionable data. We help companies mature existing enterprise processes from CI to CD to DevOps, and bridge the gap from DevOps to hybrid-to-native NoOps.
• Electric Cloud: Electric Cloud is a leader
in enterprise Continuous Delivery and DevOps automation, helping organizations deliver better software faster by automating and accelerating build, test and deployment processes at scale. The ElectricFlow DevOps Release Automation Platform allows teams of all sizes to automate deployments and coordinate releases.
GitLab aims to tackle the entire DevOps lifecycle by enabling Concurrent DevOps. Concurrent DevOps is a new vision for how we think about creating and shipping software. It unlocks organizations from the constraints of the toolchain and allows for
better visibility, opportunities to contribute earlier, and the freedom to work asynchronously.
• JFrog: JFrog’s four products (JFrog Artifactory, the Universal Artifact Repository; JFrog Bintray, the Universal Distribution Platform; JFrog Mission Control, for Universal DevOps flow Management; and JFrog Xray, the Universal Component Analyzer) are used by Dev and DevOps engineers worldwide and are available as open-source, on-premises and SaaS cloud solutions. The company recently acquired CloudMunch, a universal DevOps intelligence platform, to provide DevOps BI and analytics and help drive DevOps forward.
• JetBrains:
TeamCity is a Continuous Integration and Delivery server from JetBrains. It takes moments to set up, shows your build results on the fly, and works out of the box. TeamCity will make sure your software gets built, tested, and deployed. TeamCity integrates with all major development frameworks, version-control systems, issue trackers, IDEs, and cloud services, providing teams with an exceptional experience of a well-built intelligent tool. With a fully functional free version available, TeamCity is a great fit for teams of all sizes.
• Micro Focus:
The company’s DevOps services and solutions focus on the people, process and tool-chain aspects of adopting and implementing DevOps at large-scale enterprises. Continuous Delivery and Deployment are essential elements of its DevOps solutions, enabling Continuous Assessment of applications throughout the software delivery cycle to deliver rapid and frequent application feedback to teams. Moreover, the DevOps solution helps IT operations support rapid application delivery (without any downtime) by supporting a Continuous Operations model.
• Microsoft: Microsoft recently introduced
Azure DevOps, a suite of DevOps tools that help teams collaborate to deliver high-quality solutions faster. Azure DevOps marks an evolution in the company’s Visual Studio Team Services. VSTS users will now be upgraded to Azure DevOps. The solution features Azure Pipelines for CI/CD initiatives, Azure Boards for planning and tracking, Azure Artifacts for creating, hosting and sharing packages, Azure Repos for collaboration and Azure Test Plans for testing and shipping.
• Neotys: Neotys is the leading innovator in Continuous Performance Validation for Web and mobile applications. Neotys load testing (NeoLoad) and performance-monitoring (NeoSense) products enable teams to produce faster applications, deliver new features and enhancements in less time, and simplify interactions across Dev, QA, Ops and business stakeholders. Neotys has helped more than 1,600 customers test, monitor and improve performance at every stage of the application development life cycle, from development to production, leveraging its automated and collaborative tooling.
• New Relic: New Relic is a software analytics company that makes sense of billions of data points and millions of applications in real time. Its comprehensive SaaS-based solution provides one powerful interface for Web and native mobile applications, and it consolidates the performance-monitoring data for any chosen technology in your environment. It offers code-level visibility for applications in production across six languages (Java, .NET, Ruby, Python, PHP and Node.js), and more than 60 frameworks are supported.
OpenMake builds scalable Agile DevOps solutions to help solve continuous delivery problems. DeployHub Pro tackles traditional software deployment challenges with safe, agentless software release automation to help users realize the full benefits of Agile DevOps and CD. Meister build automation accelerates compilation of binaries to match the iterative and adaptive methods of Agile DevOps.
• Orasi: Orasi is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, Continuous Delivery, monitoring, and mobile testing technology. The company is laser-focused on helping customers deliver high-quality applications, no matter the type of application they’re working on and no matter the development methods or delivery processes they’ve adopted. In addition to its end-to-end software testing, Orasi provides professional services around testing, processes and practices, as well as software quality-assurance tools and solutions to support those practices.
Puppet provides the leading IT automation platform to deliver and operate modern software. With Puppet, organizations know exactly what’s happening across all of their software, and get the automation needed to drive changes with confidence. More than 75% of the Fortune 100 rely on Puppet to adopt DevOps practices, move to the cloud, ensure security and compliance, and deliver better software faster.
• Rogue Wave Software: Rogue Wave helps thousands of global enterprise customers tackle the hardest and most complex issues in building, connecting, and securing applications. Since 1989, our platforms, tools, components, and support have been used across financial services, technology, healthcare, government, entertainment, and manufacturing to deliver value and reduce risk. From API management, web and mobile, embeddable analytics, static and dynamic analysis to open source support, we have the software essentials to innovate with confidence.
• Sauce Labs: Sauce Labs provides the world’s largest cloud-based platform for automated testing of Web and mobile applications. Optimized for use in CI and CD environments, and built with an emphasis on security, reliability and scalability, users can run tests written in any language or framework using Selenium or Appium, both widely adopted open-source standards for automating browser and mobile application functionality.
• Scaled Agile: Scaled Agile is the provider
of the world’s leading framework for enterprise agility, the Scaled Agile Framework (SAFe). SAFe is currently in its fourth iteration, and has been adopted by 70 percent of the Fortune 100. Through learning and certification, a global partner network, and a growing community of over 250,000 trained professionals, Scaled Agile helps enterprises build better systems, increase employee engagement, and improve business outcomes.
• SOASTA: SOASTA, now part of Akamai, is
the leader in performance measurement and analytics. The SOASTA platform enables digital business owners to gain unprecedented and continuous performance insights into their real user experience on mobile and web devices — in real time, and at scale.
• Synopsys: Through its Software Integrity
Platform, Synopsys provides a comprehensive suite of best-in-class software testing solutions for rapidly finding and fixing critical security vulnerabilities, quality defects, and compliance issues throughout the life cycle. Leveraging automation and integrations with popular development tools, Synopsys’
Software Integrity Platform empowers customers to innovate while driving down risk, costs, and time to market. Solutions include static analysis, software composition analysis, protocol fuzz testing, and interactive application security testing for Web apps.
• Tasktop: Transforming the way software
is built and delivered, Tasktop’s unique modelbased integration paradigm unifies fragmented best-of-breed tools and automates the flow of project-critical information across dozens of tools, hundreds of projects and thousands of practitioners. The ultimate collaboration solution for DevOps specialists and all other teams in the software lifecycle, Tasktop’s pioneering Value Stream Integration technology provides organizations with unprecedented visibility and traceability into their value stream. Specialists are empowered, unnecessary waste is eradicated, team effectiveness is enhanced, and DevOps and Agile initiatives can be seamlessly scaled across organizations to ensure quality software is in production and delivering customer value at all times.
• TechExcel: DevSuite helps organizations
manage and standardize development and releases via agile development methods and complete traceability. We understand the importance of rapid deployment and are focused on helping companies make the transition over to DevOps. To do this, we have partnered with many automation tools for testing and Continuous Integration, such as Ranorex and Jenkins. Right out of the box, DevSuite will include these technologies.
• Tricentis: Tricentis Tosca is a Continuous
Testing platform that accelerates software testing to keep pace with Agile and DevOps. With the industry’s most innovative functional testing technologies, Tricentis Tosca breaks through the barriers experienced with conventional software testing tools. Using Tricentis Tosca, enterprise teams achieve unprecedented test automation rates (90%+) — enabling them to deliver the fast feedback required for Agile and DevOps.
XebiaLabs develops enterprise-scale Continuous Delivery and DevOps software, providing companies with the visibility, automation and control they need to deliver software faster and with less risk. Global market leaders rely on XebiaLabs to meet the increasing demand for accelerated and more reliable software releases.
Defining a New Network Landscape By Jeffrey Schwartz
Applications are on the cusp of becoming network-aware, based on the latest SDN switching architectures from Cisco, VMware and others
The latest advances in software-defined networking (SDN) promise to enable automation of IT operations, particularly among enterprises shifting to DevOps, application modernization initiatives and hybrid and multi-cloud architectures.
Network systems providers such as Arista, Big Switch Networks, Cumulus, HPE, Nicira and Juniper were among the earliest to deliver on the concept of SDN at the beginning of this decade. In short order, every major supplier of network equipment, including market leader Cisco Systems, embraced SDN. Many described it as a natural progression of virtualization, but just as important — an inevitable move toward more open networks.
The success VMware experienced a decade ago by allowing enterprises to virtualize their servers validated the notion that pooling network resources with a similar objective would remove dependencies on specific hardware and give more autonomy to applications. Initial SDNs, based on the Open Networking Foundation’s (ONF) OpenFlow standard, introduced a more efficient flow of network traffic by removing the network controller’s dependency on specific switches.
SDNs are incrementally reshaping the
makeup of carrier networks, including telecommunications providers, ISPs and public cloud operators, as well as enterprise datacenter operations. As SDNs and the ability to have programmatic network functionality extend to the datacenter and cloud, they will have significant implications for automation, service delivery and agility. A shift to hyperconverged systems — bridging compute, storage and network services in unified hardware managed by a shared control plane — has already put a more nuanced focus on the software-defined datacenter (SDDC).
VMware, now a subsidiary of Dell Technologies, has become a major force in bringing SDN to the datacenter and to the cloud thanks to its 2012 acquisition of Nicira, an early startup responsible for creating the first OpenFlow switch before contributing it to the ONF. Nicira’s technology became the foundation of VMware’s NSX network virtualization platform.
Cisco’s Network Intuitive
Around the same time, Cisco, by far the most dominant supplier of network gear and management software, took its first major step into true SDN with the launch of its Application Centric Infrastructure (ACI). It formed the basis of Cisco’s Application Policy Infrastructure Controller (APIC), introducing software-based policy control and paving the way for the company’s Digital Network Architecture (DNA), launched in 2016. Cisco DNA is the foundation for the company’s push into creating a fully programmable network architecture.
CEO Chuck Robbins threw down the gauntlet last year with the announcement of the company’s “The Network. Intuitive.” campaign, outlining Cisco’s new design approach, built on DNA, with a new class of switches and routers that require less mapping of systems and offer more application intelligence. “Customers are increasingly less interested in being systems integrators, and they are more interested in buying stuff that just works and gets to that outcome that they’re trying to drive for,” Robbins said, speaking at the company’s recent annual Cisco Live conference. “When we talk about how these next-generation network architectures evolve, you can look at the cost of how the traffic flows in the future is going to be completely different.”
Just as important is building out a programmable network. To ensure that programmability, Cisco put an emphasis on APIs and a strategy of creating a developer ecosystem, resulting in the creation of DevNet four years ago. “We started DevNet to be
able to get the world ready for the world of network programmability and network APIs,” said Susie Wee, VP and CTO of Cisco DevNet.
Wee explained that Cisco built DevNet to help traditional network engineers learn how to build and expose network APIs, but also to allow developers who have little or no network experience to work with these network interfaces when building software and services. “We needed to make it easy for developers to use the APIs to use the platforms and then build innovations,” she said.
Since its creation, DevNet has created resources to enable developers to program for the Internet of Things (IoT), cloud, security and collaboration. DevNet also offers API documentation, labs and developer sandboxes, and has held events in 34 countries. The big news at Cisco Live is that there are now 500,000 participants in DevNet. Wee emphasized that the 500,000 are those who are engaged in some manner, whether participating in learning sessions or using tools in the
sandbox. “What happens at this stage is that you get critical mass — you get critical mass in the ecosystem, and what that does is that it changes the innovation model for networking,” Wee said. “It’s not just the community, it is now an ecosystem of people who are coding, people who can use that community code,” she added. “Now what you are doing is enabling the business ecosystem to do business using our products as well as products from the community.”
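What does the “network as API” that Wee describes look like to one of those half-million developers? A minimal sketch, assuming a hypothetical REST controller — the endpoint path and field names below are invented for illustration, and real platforms such as Cisco DNA Center define their own schemas:

```python
import json

# Hypothetical sketch of driving a network controller through a REST API.
# Nothing here is a real Cisco (or any vendor) endpoint; it only shows the
# shape of the request a network-programmability script would construct.

def make_vlan_request(controller, vlan_id, name, interfaces):
    """Build an HTTP request description that would create a VLAN."""
    if not 1 <= vlan_id <= 4094:          # valid 802.1Q VLAN ID range
        raise ValueError("VLAN ID must be 1-4094")
    return {
        "method": "POST",
        "url": f"https://{controller}/api/v1/vlans",
        "body": json.dumps({
            "id": vlan_id,
            "name": name,
            "interfaces": interfaces,
        }),
    }

req = make_vlan_request("controller.example.com", 120, "dev-team",
                        ["Gig1/0/1", "Gig1/0/2"])
```

The point of the DevNet-style ecosystem is that a change like this becomes an API call a program can make (and test, and version-control) rather than a box-by-box CLI session.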
Connecting Infrastructure and Applications
Hoping to accelerate its momentum, Cisco added the new DevNet Code Exchange and DevNet Ecosystem Exchange, aimed at bringing ISVs and infrastructure providers together using Cisco’s APIs. It also includes a developer center for those engaging with Google Cloud, building on the two companies’ partnership to build out hybrid cloud offerings.
Since delivering its first programmable system based on Cisco DNA, the Catalyst 9000 switch for large enterprise networks, the company reported it has shipped it to nearly 70,000 customers. Robbins said it is the fastest-ramping product in Cisco’s history. “We’ve hit some sort of sweet spot that our customers are looking at, particularly with this automation and analytics architecture,” he said.
The new programmability efforts also map with organizations shifting to hybrid and multi-cloud environments, tied with the move to modern applications built on containerization and microservices. The new Cisco Container Platform is now available with the company’s HyperFlex platform, which lets enterprises deploy cloud-native apps on-premises that can use Kubernetes orchestration to share those apps in public Google Kubernetes Engine (GKE) instances. IT operations professionals can also manage and monitor the performance of those applications with monitoring tools from AppDynamics, now a subsidiary of Cisco.
As Cisco builds out its programmable network portfolio, despite its huge edge in terms of installed base, numerous rivals hope to cut into that lead, among them Arista, Big Switch, Juniper and VMware.
VMware Extends NSX Architecture
Given VMware’s own large customer base of those that use its virtualization and private cloud tools, the company has somewhat of an inside edge. But its trump card was the fact that it nabbed Nicira five years ago. Nicira’s scalable network virtualization system, a clustered control plane using Apache ZooKeeper, is code that brings together a set of components that are fault tolerant
SDN makes troubleshooting network issues painful By Jenna Sargent
The proliferation of social media, mobile devices, and cloud computing is pushing the traditional data center to its limits. Software-defined networking is “the network industry’s response to a problem that spans a couple of decades,” explained Jason Baudreau, project strategist at NetBrain. That problem is troubleshooting issues when they arise. It is essentially an effort to catch up with the virtualization that has been happening in the industry over the years.
According to Baudreau, SDN has the potential to revolutionize those data centers and provide a more flexible way of provisioning and controlling networks. SDN allows for networked applications and services to be delivered at the speed that both businesses and consumers are demanding, Baudreau said. Baudreau believes that the industry is moving to SDN, whether operations teams are ready to embrace that or not.
According to Baudreau, the biggest downside comes when the need to troubleshoot the network arises. “These software-defined networks, they bring a high degree of abstraction to the network since devices are configured and managed by that central controller,” said Baudreau. Physical devices that network operators used to be able to touch and feel are now virtualized, making it difficult to answer basic questions, such as ‘what’s in my network?’ Inventory and documentation are made challenging in these virtualized environments.
Application issues are the most common reason for help desk calls, so SDN is making the process of troubleshooting end-user problems more complex as well. “We’re hearing a lot from customers that being able to map the application task across a network becomes
very complex,” said Baudreau. “Again, since SDN is a lot of abstraction, it makes this very challenging. And so tasks that were already difficult and time-consuming, like mapping, become even more challenging in software-defined infrastructures.”

Baudreau says the solution to this problem is abstraction and visibility. Whether the network complexity you are trying to solve with SDN is a result of network scale or has been brought on by new technologies, visibility is key. Visibility used to be considered a nice-to-have, but “the industry is hitting a tipping point where automated network documentation is now becoming a must-have, just for teams to grapple with the complexity of this SDN infrastructure that they’re trying to deal with,” said Baudreau.

Baudreau also sees a need for better collaboration across these teams as they ramp up on new technology. Solutions that allow people to share knowledge are currently few and far between, he said. As infrastructure gets handed off to other teams to manage day to day, there is a challenge associated with that handoff: the knowledge of the architect who designed the network isn’t always transferred to the network operations team that now has to manage it, he explained. “I think there’s a need for tools that can provide a way for these people to codify their knowledge, if you will, and to make that knowledge executable, and I think that’s where automation comes in,” said Baudreau.

Finally, there is a need for tools that integrate to solve complex problems. For example, a monitoring tool that looks for events and integrates with a diagnostics tool can provide real-time analytics when there is a problem.
Defining a New Network Landscape
and highly scalable, designed to function as a single point of control. “We fantasized about having APIs to control networks that we couldn’t on traditional hardware boxes,” said Bruce Davie, who was Nicira’s chief service provider architect and is now a VMware CTO; he talked up the future of networking at the recent VMworld 2018 conference in Las Vegas.

VMware’s NSX is now that central point of control, Davie said. At a high level, NSX consists of a management plane for interacting with the switch via API calls, a control plane where all the networking functions are executed, and a data plane, which implements services. “You can launch an API request and within a fraction of a second get your networking service deployed wherever you need it,” Davie said. “This idea of DevOps-centric IT is this sort of collaborative model with the developers and the operators working together. In some cases, the same people are making sure the applications stay highly available, but they are bringing a developer mindset to the operation of IT.”

An SDN’s ability to enable a more programmatic and flexible approach is also important in addressing security. Both Cisco and VMware have emphasized this by enabling support in their architectures for network segmentation, which is particularly important as organizations embark on multi-cloud efforts. VMware’s answer to enabling the SDN for its multi-cloud integration capability is NSX Data Center. It provides switching (including VXLAN-based overlays); distributed, static and dynamic routing between virtual
networks; a distributed firewall; load balancer; VPN; gateway; context-aware micro-segmentation; multisite systems; and endpoint, network and cloud management. The NSX Data Center REST-based API provides integration with third-party cloud management platforms and custom automation tools, as well as various security offerings and application delivery controllers, among other tools.

VMware this year has extended the multi-cloud integration capabilities of NSX Data Center with support for both AWS and Microsoft Azure, as well as the ability to run workloads on its Cloud Foundry-based Pivotal platform, Kubernetes and OpenShift. The new NSX Data Center 2.3, launched at VMworld, adds support for bare metal hosts, including Linux and bare metal workloads running in hypervisor and container environments. VMware said because it supports standard Open vSwitch automation, which the company said permits any Linux host to function as an NSX-T transport node, it enables the termination of overlay networks. VMware also added the ability to plan and deploy micro-segmentation capabilities with its new vRealize Network Insight 3.9, which provides operational management of VMware NSX deployments.

Organizations can now also share their NSX security policies with switches from Arista in multi-cloud scenarios. The two companies, which formed a partnership back in 2014, have enabled interoperability between NSX and Arista CloudVision, which also links Arista’s Macro-Segmentation Services (MSS) with NSX’s micro-segmentation features.

“It’s not just the community, it is now an ecosystem of people who are coding, people who can use that community code.”
— Susie Wee, VP and CTO of Cisco DevNet
The advances in SDN notwithstanding, traditional LAN and WAN infrastructure still makes up the bulk of most enterprise networks today. But that is changing with rapid adoption of technologies such as hybrid cloud and SD-WAN for branch office acceleration. As telecommunications carriers look to transition from their multiprotocol label switching (MPLS) offerings to software-defined infrastructures in the coming years, SDN will take a bigger piece of the network pie. A recent report by Information Services Group said that’s inevitable, noting AT&T’s goal of having 75 percent of its networks SDN-compliant by 2022.
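The API-driven provisioning Davie describes (an API request that deploys a networking service in a fraction of a second) boils down to a REST call against the controller's management plane. A minimal sketch in Python follows; the host name, endpoint path, and payload fields here are illustrative assumptions, not the documented NSX API:

```python
import json

# Illustrative sketch of driving an SDN controller over a REST-style
# management API, in the spirit of the NSX management plane described
# above. The URL path and payload fields are invented for illustration.

def build_segment_request(manager_host, name, transport_zone_id):
    """Build the URL and JSON body for creating a logical network segment."""
    url = f"https://{manager_host}/api/v1/logical-switches"
    body = {
        "display_name": name,
        "transport_zone_id": transport_zone_id,
        "admin_state": "UP",
    }
    return url, json.dumps(body)

url, body = build_segment_request("nsx-mgr.example.com", "web-tier", "tz-overlay-01")
print(url)   # the segment is described as data, not device-by-device config
```

In practice such a request would be sent over an authenticated HTTPS session; the point is that a network segment becomes a small JSON document rather than box-by-box device configuration.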
APIs help developers do more with less

Development teams are under increasing pressure to develop more software and to deploy it faster. Many practices are helping them meet this goal and become more agile, and one of them is the use of APIs. An API is a set of defined methods for how components communicate with each other. APIs can help shorten the development lifecycle, explained John Armenia, product director of self-hosted services at Accusoft, a document management company.

APIs have exploded over the past decade, and that growth is mostly a function of the growth of microservices, said David Codelli, director of product marketing at Red Hat. “One of the things that came out of this microservices revolution is the idea that teams can move more quickly if they cut dependencies to other projects.”
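Armenia's definition (a set of defined methods for how components communicate) can be made concrete with a toy sketch. All of the names below are invented for illustration; the caller depends only on the documented methods, never on how the service stores its data:

```python
# A toy illustration of an API as "a set of defined methods for how
# components communicate": callers program against the public methods,
# while the storage details stay private and free to change.

class DocumentService:
    """Public API: the contract other components program against."""

    def __init__(self):
        self._store = {}       # private implementation detail
        self._next_id = 1

    def upload(self, contents: str) -> int:
        """Store a document and return its id."""
        doc_id = self._next_id
        self._store[doc_id] = contents
        self._next_id += 1
        return doc_id

    def fetch(self, doc_id: int) -> str:
        """Return a document's contents by id."""
        return self._store[doc_id]

svc = DocumentService()
doc_id = svc.upload("hello")
print(svc.fetch(doc_id))   # → hello
```

Because the contract is only `upload` and `fetch`, the service could later swap its in-memory dict for a database without breaking any caller, which is exactly the decoupling the microservices teams above were after.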
BY JENNA SARGENT

Microservices teams started refraining from sharing databases and avoiding dependencies when transferring files around. However, integration is still a dominant requirement, so they had to figure out a way to share data, and the API movement grew out of that, Codelli believes. “People decided that hey, we have these open standards for sharing information and we have these management products that can allow us to not only enforce our governance, but enforce our security and enforce our organization policies,” said Codelli. “But they also provide a mechanism for socializing and sharing the APIs in an organization. And that’s really where API management products grew out of.”

As in any aspect of software development, it’s important to have a set of best practices that your organization
and development team can follow. According to Randy Heffner, vice president and industry analyst at Forrester, there are two major best practices organizations should follow.

The first is to have an API taxonomy. The key thing for organizations to understand here is that not all APIs are alike; some are more important to an enterprise’s API strategy than others.

The other major best practice is API portfolio management, which is the notion that you should have some idea of the vision your organization is working toward. Rather than letting development teams create whatever APIs they want, they should develop APIs in such a way that they wind up with a coherent set of APIs that represent a business domain, Heffner explained. “You have to evolve your set of APIs incrementally,” said Heffner. “The only way that you wind up with something
coherent that you are building piecemeal is, as Stephen Covey said long ago, to begin with the end in mind in some way.”

Those are the two major necessities all organizations should adhere to, but Heffner notes that some best practices depend on the organization itself. Organizations that are more mature will tend to have different practices in place than those just starting out.

It’s important for organizations to know that APIs generally do not achieve value individually. An individual API may have some value, but consider a scenario in which an organization develops 10 mobile apps because it sees 10 mobile app opportunities. Each of those apps would be independent. Once you develop 10 different APIs, however, there are relationships between them that make using them more nuanced and tricky than using 10 independent mobile apps, Heffner explained. What this means is that APIs have a portfolio dimension to achieving value. The portfolio value may be that they can be used across a variety of business contexts, lines of business, or scenarios. Sometimes organizations need to step back and recognize that APIs are a portfolio value generator: they derive value as a group, he explained.

“For an organization, often there just isn’t that kind of maturity and discipline,” Heffner said. “Sometimes that’s easier to achieve in a smaller organization where you can have better governance.” No matter the size of the organization, that governance needs to be a collaborative effort. It can take a variety of forms and will be specific to the organization, Heffner explained.

One of the major trends that Heffner
What benefits does your company bring to the area of API development?

Megan Brooks, Director of Marketing at Accusoft:
For over 25 years, Accusoft has provided software components — built by developers, for developers. Our APIs shorten development cycles by providing complex image and document processing functionality. Our developer-centric approach makes integration of our solutions easy and seamless. We offer a robust and comprehensive set of documentation that is updated with each product release, and we regularly add new features that enhance and future-proof your application.

In addition to our documentation, we have a team of software engineers who provide customer support before, during, and after a product is purchased. This means that when a developer contacts our support team, they are working with an engineer who is not only familiar with the product but also writes code and has helped hundreds of customers implement Accusoft solutions. This benefit helps you manage your product lifecycle and shortens and simplifies the proof-of-concept phase.

Our products solve mission-critical business problems related to document security, document capture, forms processing, document imaging and much more. We are a trusted partner in a variety of industries including financial, insurance, government, legal, medical, and education. With the power of Accusoft handling image and document processing inside your applications, you can rest assured that you have a trusted partner that will elevate and support your development projects.
David Codelli, Director of Product Marketing at Red Hat:
Red Hat’s API technology, comprising Fuse, 3scale API Management, and OpenShift Container Platform, delivers the full spectrum of features for implementing an internal or external (or both) API program. With Red Hat Fuse, developers can rapidly build services either by orchestrating existing services or creating new ones, using a graphical canvas in either case. Next, developers can deploy those services to multiple destinations using proper DevOps pipelines and run them with the most advanced container management technology on the planet, using Red Hat OpenShift. Finally, with Red Hat 3scale API Management, users can publish the services to a portal and establish all of the essential API governance aspects: security, authorization, rate limiting, monetization, and analytics.

What really distinguishes Red Hat’s products is the support for cloud-native development by agile teams. Every artifact of these components, including code, connectivity, and policy, can be deployed as containers in the agile developers’ native process. This cloud nativity means that agile teams can include API management components with their development artifacts in DevOps pipelines, and deploy those components as containers managed by Red Hat’s Kubernetes implementation, OpenShift. With Red Hat, agile teams across the enterprise have access to integration tools as well as the APIs of their peers. This access frees the team from the traditional bottleneck of an ESB managed by an integration competency center, giving them the agility they need to innovate.
A guide to API development tools

• Akana by Rogue Wave Software: Akana API management encompasses all facets of a comprehensive API initiative to build, secure, and future-proof your API ecosystem. Covering planning, design, deployment, and versioning, Akana provides extensive security with flexible deployment via SaaS, on-premises, or hybrid. Akana’s API security, design, traffic management, mediation and integration, developer portal, analytics, and lifecycle management are designed for the enterprise, delivering value, reliability and security at scale. The world’s largest companies trust Akana to harness the power of their technology and transform their business. Build for today, extend into tomorrow.

• CA Technologies: CA Technologies helps customers create an agile business by modernizing application architectures with APIs and microservices. Its portfolio includes the industry’s most innovative solution for microservices, and provides the most trusted and complete capabilities across the API lifecycle for development, orchestration, security, management, monitoring, deployment, discovery and consumption.

• Cloud Elements: Cloud API integration platform Cloud Elements enables developers to publish, integrate, aggregate and manage their APIs through a unified platform. Using Cloud Elements, developers can quickly connect entire categories of cloud services (e.g. CRM, Documents, Finance) using uniform APIs, or simply synchronize data between multiple cloud services (e.g. Salesforce, Zendesk, QuickBooks) using its innovative integration toolkit. Cloud Elements provides a one-of-a-kind API Scorecard so organizations can see how their API measures up in the industry.

• Dell Boomi: Boomi API Management provides a unified, scalable, cloud-based platform to centrally manage and enrich API interactions through their entire lifecycle. With Boomi, users can rapidly configure any endpoint as an API, publish APIs on-premise or in the cloud, and manage APIs
FEATURED PROVIDERS
• Accusoft: Accusoft offers a robust portfolio of document and imaging tools for developers. Our APIs and software development toolkits (SDKs) allow developers to add high-performance document capture and forms processing, viewing, search, compression, conversion, barcode recognition, OCR, ICR, and MICR to their applications. Visit www.accusoft.com for more information.

• Red Hat: Red Hat is the world’s leading provider of open source software solutions, using a community-powered approach to provide reliable and high-performing cloud, Linux, middleware, storage and virtualization technologies. Red Hat also offers award-winning support, training, and consulting services. As a connective hub in a global network of enterprises, partners, and open source communities, Red Hat helps create relevant, innovative technologies that liberate resources for growth and prepare customers for the future of IT. Learn more at http://www.redhat.com.
with traffic control and usage dashboards.

• IBM: IBM Cloud’s API Connect is designed for organizations looking to streamline and accelerate their journey into the API economy. API Connect on IBM Cloud is an API lifecycle management offering that allows any organization to create, publish, manage and secure APIs across cloud environments, including multi-cloud and hybrid environments. This makes API Connect far more cost-effective than limited point solutions that focus on just a few lifecycle phases and can end up collectively costing more as organizations piece components together.

• Mashape: Mashape powers microservice APIs. Mashape is the company behind Kong, the most popular open-source API gateway. Mashape is based in San Francisco but has a presence in Europe and Japan. The company has operated in the API market for more than seven years and offers a wide range of API products and tools, from testing to analytics. The main enterprise offering is Kong Enterprise, which includes Kong Analytics, Kong Dev Portal and an enterprise version of the API Gateway with advanced security and HA features.

• MuleSoft: MuleSoft’s API Manager is designed to help users manage, monitor, analyze and secure APIs in a few simple steps. The manager enables users to proxy existing services or secure APIs with an API management gateway; add or remove pre-built or custom policies; deliver access management; provision access; and set alerts so users can respond proactively.

• Nevatech: Nevatech Sentinet is an enterprise-class API management platform written in .NET that is available for on-premise, cloud and hybrid environments. It connects, mediates and manages interactions between providers and consumers of services across enterprises for businesses or end customers. Sentinet supports industry SOAP and REST standards as well as Microsoft-specific technologies, and includes an API repository for API governance, API versioning, auto-discovery, description, publishing and lifecycle management.
• Oracle: Oracle recently released the Oracle API Platform Cloud Service. The new service was developed with API-first design and governance features from its acquisition of Apiary, as well as Oracle’s own API management capabilities. The service provides an end-to-end solution for designing, prototyping, documenting, testing and managing the proliferation of critical APIs.

• Postman: Postman provides tools to make API development simpler. Over 3 million developers use its apps. Its API Development Environment is available on Mac OS, Windows, and Linux, enabling faster, easier, and better API development across a variety of operating systems. Postman was developed from the ground up to support API developers. It features an intuitive user interface to send requests, save responses, and create workflows. Key features include request history, variables, environments, tests and pre-request scripts, and collection and request descriptions. It also provides API monitoring for API uptime, responsiveness, and correctness.

• SmartBear Software: SmartBear Software empowers users to thrive in the API economy with tools to accelerate every phase of the API lifecycle. SmartBear is behind some of the biggest names in the API market, including Swagger, SoapUI and AlertSite. With Swagger’s easy-to-use API development tools, SoapUI’s automated testing proficiency, AlertSite’s monitoring functionality and ServiceV Pro’s virtualization capabilities, users can build the best-performing APIs that everyone loves to use. SmartBear tools are available in the cloud or on-premise.

• TIBCO Software: TIBCO Mashery Enterprise is a full life cycle API management solution that allows users to create, integrate, and securely manage APIs and API user communities. Mashery is available either as a full SaaS solution or with the option to run the API gateway on-premise in a hybrid configuration.
The offering enables digital transformation and API initiatives that expand your market reach by exposing and sharing data and services with developers.
is seeing in the industry is the use of API management solutions. API management solutions typically have ways of doing security enforcement and some way to ensure that APIs are in compliance with the organization’s policies, Red Hat’s Codelli explained. But a good API management solution should also have a social aspect: some sort of portal that developers can access to learn more about the APIs. Developers, both internal and external to the organization, can use those features to share fragments of code or ask questions, making them more effective developers, he said.

Looking toward the future, Heffner believes that organizations need to think beyond REST APIs as the way organizations integrate with one another. “There’s a lot of architecture work to be sorted out in how APIs and microservices interact.” This affects how organizations will need to think about both microservice platforms and API management solutions, Heffner said. He also believes that API management solutions will soon need to integrate more natively for policy enforcement rather than always using their own gateways.

In the future, Red Hat’s Codelli sees three things as crucial. First, OpenAPI, a specification for REST APIs, will become a necessity. He also believes the industry is moving to a place where organizations will need to target the hybrid cloud when implementing APIs and API management layers. “You have to be able to deploy the services necessary on-prem and in any combination of the two at the implementation level and the delivery level, and even policy management.” As more organizations migrate to a cloud or hybrid cloud architecture, this will become more important. Finally, he believes we will see a need for a proper DevOps platform. It will become important to be able to get API management information, configuration
information, and provisioning information in DevOps terms.
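One concrete example of the policy enforcement Codelli mentions is rate limiting at the gateway. The sketch below shows a token-bucket limiter in Python; the class and limits are invented for illustration, since real API management products such as 3scale or Mashery configure these policies declaratively rather than in application code:

```python
import time

# Minimal sketch of one policy an API management gateway might enforce:
# a token-bucket rate limiter. A client may burst up to `burst` requests,
# then is throttled to `rate_per_sec` sustained requests per second.

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec        # tokens refilled per second
        self.capacity = burst           # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if this request fits within the client's quota."""
        now = time.monotonic()
        # refill tokens for the time elapsed since the last check
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=1, burst=2)
print([bucket.allow() for _ in range(3)])  # first two pass (burst), third is throttled
```

A gateway applies a policy like this per API key, which is also where the usage analytics and monetization data these products expose come from.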
Use APIs to drive business revenues in indirect ways

Heffner notes that 98 to 99 percent of the time he hears people use the word monetization in relation to APIs, they mean charging for APIs. “But there are many, many indirect ways to make money from APIs, and so monetizing APIs should be reserved for the bigger concept of how APIs will improve our business’s ability to make money,” said Heffner.

While Accusoft is first and foremost a document management company, it is a great example of the power of APIs. By developing APIs, Accusoft has strengthened its document management portfolio. Its APIs enable Accusoft’s customers to easily integrate the company’s PrizmDoc solution into their own applications. PrizmDoc enables developers to add document viewing and document conversion functionality to their web applications, Accusoft’s website states. Once a customer has that integration in place, they can start thinking about customization and doing more with the product.

Accusoft notes that its documentation is robust and its support team is made up of engineers who have actually worked on the product. Those two components are also important when monetizing APIs, whether directly or indirectly. Documentation enables developers to be self-sufficient, while customer support provides experts to work with if needed. The integration that APIs enable has allowed Accusoft to provide more value to its customers, and to turn that value into business revenue.

This is just one example of why organizations should develop an API strategy. The indirect revenue from APIs can come from improving processes or making customers happier, and thus more likely to buy from a company in the future, Heffner said. Even if a business is not profiting directly from the sale of an API, it is monetizing its APIs in this indirect way.
Guest View BY AYMAN SAYED
Developer culture dictates security

Ayman Sayed is president and chief product officer at CA Technologies.
The 24/7 digital economy is requiring many organizations to release apps and application updates on a near-continuous basis in order to keep up with increasing customer demand, or face being left in the dust by competitors. Developer teams have their hands full trying to deliver functional, feature-rich updates on time. In this hyper-competitive environment, security is too easy to deprioritize when faced with the pressure to get an app out the door. The rising trend of breaches from outdated and insecure applications and IT infrastructure should serve as a stern reminder for developer, security and operations teams alike of what is at stake if products are not properly secured.

Fortunately, DevSecOps provides an easier way for organizations to keep up with quickening timelines and the increasing urgency of security in their development processes. And the trend is catching on quickly. According to a recent report from Gartner, Inc., “by 2021, DevSecOps practices will be embedded in 80 percent of rapid development teams, up from 15 percent in 2017.” The success of DevSecOps depends on more than just changes to processes; it also depends on the way teams work together to make sweeping impacts across the whole organization.
The rising trend of breaches... should serve as a stern reminder of what is at stake if products are not properly secured.
Collaboration

Part of the new development process is bringing together a diverse group of people to accomplish the best outcome. IT security teams need to work hand-in-glove with application developers to balance the speed of application development against all of the potential risks involved. Bringing these teams together earlier in the process makes it easier to incorporate different perspectives on what each team needs for its separate measures of success from the beginning. By collaborating on a plan and process early on, they will not have to backtrack to meet another set of requirements. Furthermore, bringing together a diverse set of team members with different perspectives promotes collaboration and innovation. It is a balance of doers,
thinkers, idea makers and idea finishers that creates the most innovative development teams. Diversity is a catalyst for innovation and the creation of new ideas through collaboration. It is important for organizations to foster diversity through corporate programs and initiatives that cultivate and encourage diverse thinking. Especially as businesses begin to phase out the role of IT in the engineering process, developers will need to broaden their exposure beyond their original silos of expertise. Trending technologies and processes like automation, analytics, DevOps and open source require programmers to know how to write the appropriate code and understand how it plays into the operations of the full stack.
An ongoing commitment

Baking security into the development process cannot be a one-time move for organizations. Instead, it must include continuous investment and a commitment to constant improvement. Organizations can use acquisitions and partnerships to increase their strength in security, continuous delivery and release automation, and to help address the emerging market requirements of secure development and operations. These investments will help organizations provide the best defense against the rapidly changing security threat landscape. Investments like this help organizations find the best approach to reducing the attack surface by combining technologies that address problems like application security or lost, stolen and weak credentials.

Organizations can also invest in improving their developers’ ability to tackle security problems by training them in new skills. Developers take a lot of pride in the quality of their work, which includes security. By training them in some of the basics of application security, they will be able to build security into the development process automatically without slowing down code production.

This process must begin with a benchmark of where they started, measuring things like “time to deployment” and “number of vulnerabilities fixed” in code reviews. By comparing against these benchmarks, organizational leaders will be able to track the effectiveness of security integrations and make improvements to the process where it is not meeting organizational goals.
Analyst View BY ARNAL DAYARATNA
Understanding cloud native

The term ‘cloud native’ is responsible for significant confusion in conversations about contemporary application development because of a lack of clarity about its meaning. For example, contemporary usage of the term often equates it with applications that are optimized for cloud computing infrastructures. Another usage equates it with cloud-based applications that are accessed through a web browser. Both of these definitions of cloud-native applications are inaccurate. This article elaborates on the definition of cloud-native applications and proposes, in keeping with the Cloud Native Computing Foundation, that cloud-native applications are defined by microservices architectures, containers, and dynamic orchestration.

Containers: The first thing to remember about cloud-native applications is that they are first and foremost container-native applications, meaning they are constituted by containers. Containers are packages of software that contain everything an application needs to run; for example, they encapsulate code, runtime, toolchains, libraries and settings. By packaging everything an application needs to run, containers empower developers to focus on application design and logic while delegating the deployment and operational management of containers to IT operators.

Containers abstract applications from the environment in which they are deployed, enabling applications to be written once and deployed anywhere. They also deliver consistent application environments that improve collaboration between developers and operators by ensuring that the same application environment used for the development of an application is used later in the application development lifecycle for deployment.
Separate from their abstraction of application dependencies and delivery of consistent application environments, containers deliver a slew of benefits related to application performance, portability, fault isolation and security.

Microservices: Microservices architectures structure an application as a suite of services that implement business capabilities. These services can be independently deployed in a modular architecture that delivers a decentralized approach to the development of software. This decentralized approach extends not only to the segmentation of application and business logic within discrete microservices, but also to databases. In other words, each independent microservice in an application tends to manage a separate database, even if that database happens to be a replica of another database tied to another microservice.

Orchestration: Container orchestration platforms take responsibility for automating the deployment, replication, scaling, load balancing and self-healing of container-based applications. In addition, they automate the rollout of upgrades and the rollback to previous versions as desired. Whereas cloud computing platforms orchestrate technologies by automating the provisioning and scaling of infrastructure, container orchestration platforms dynamically automate the allocation of resources to container-based applications in a similar vein.

Conclusion: From a development perspective, cloud-native applications improve the development experience by enabling a high degree of development agility and enhancing collaboration between development and operations teams. Moreover, cloud-native applications automate many of the manual processes required for managing the underlying infrastructure on which applications run, empowering developers to focus on writing code rather than tackling the operational challenge of managing infrastructure. The emphasis of cloud-native applications and development on agility and automation bears notable similarities to the agility and automation that cloud computing practices deliver to the management of infrastructure, although the attributes of cloud-native applications and development do not necessarily map neatly to those of cloud computing in a one-to-one, one-to-many, many-to-one or many-to-many fashion.
As such, cloud-native applications and development represent a disruptive innovation relative to previous models of application design, development and delivery, one that borrows concepts commonly used in conversations about cloud computing without replicating them or even using cloud computing as a foundational infrastructure. z
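As a closing illustration of the dynamic orchestration described above, orchestration platforms are typically driven by declarative configuration. This is a minimal, hypothetical Kubernetes Deployment (the service name, image and replica count are assumptions for illustration): it asks the platform to keep three copies of a container running, restarting or rescheduling them on failure.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-service          # hypothetical microservice
spec:
  replicas: 3                    # the orchestrator maintains three copies
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
      - name: catalog
        image: registry.example.com/catalog:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

The operator never starts a container by hand; the platform continuously reconciles the running state against this declared state, which is what the article means by self-healing.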
Dr. Arnal Dayaratna is Research Director, Software Development at IDC.
Industry Watch BY DAVID RUBINSTEIN
A manifesto for modern development
David Rubinstein is editor-in-chief of SD Times.
When my girls were little, they were big fans of “The Little Mermaid,” so at bedtime I would read to them from a book called “Ariel’s Painting Party.” The story goes like this: Ariel saw a sunrise that was so beautiful that she decided to paint it. When she was finished, she showed it to her friends. But her seagull friend thought the painting needed more birds, her friend Sebastian the crab thought more shellfish would make the painting better, and her friend Flounder thought it needed more fish. Ariel decided they should all paint pictures and let her father, King Triton, decide which was best. (Trust me, I’m getting to a point about software development.)
So they did, and Triton, in his infinite wisdom as the ruler of all things wet, said that while all the pictures were nice, no one of them alone could tell the story of how wonderful their world was. He placed them all side by side, creating a beautiful mural, and Ariel and her friends saw just what he meant. The end. Okay, good night. I love you! (Oops, wrong audience!)
Anyway, here’s why that story is relevant. I hear from developers, data managers, IT Ops people, consultants, analysts and vendors who see the IT world only from their point of view. Ariel would say you need thingamabobs and whatzits. In IT, they say you need to be agile. You need automated testing. You need CI/CD pipelines and DevOps. You need digital transformation. You need more data intelligence. On their own, each of these can improve your IT operations. But when you put them all together, they create a truly beautiful picture of business value to the organization.
In a services world, applications are more often cobbled together than written from scratch. So APIs are the new currency in development, enabling programmers to integrate pieces of functionality into a living, breathing application.
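The kind of API-driven composition described above can be sketched in a few lines. The two “services” below are hypothetical, local stand-ins for what would normally be remote REST calls; the names and fields are illustrative assumptions, not any particular vendor’s API.

```python
# Sketch of assembling an application from independent service APIs.
def inventory_service(sku):
    # stand-in for a remote call such as GET /inventory/{sku}
    return {"sku": sku, "in_stock": 3}

def pricing_service(sku):
    # stand-in for a remote call such as GET /price/{sku}
    return {"sku": sku, "price": 9.95}

def product_page(sku):
    # the application is "cobbled together": it wires the responses
    # of independent services into one living result
    stock = inventory_service(sku)
    price = pricing_service(sku)
    return {"sku": sku,
            "available": stock["in_stock"] > 0,
            "price": price["price"]}

print(product_page("A-100"))  # → {'sku': 'A-100', 'available': True, 'price': 9.95}
```

Because each piece sits behind its own interface, a failing service can be swapped out without rewriting the application around it.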
With analytics tools, any service that fails can be easily identified and unplugged, repaired, and plugged back into the application without bringing it down.
This change in how applications are developed was led by the Agile movement, first defined 17 years ago as a different way to think about software. The word ‘speed’ never appears in the original manifesto, but through processes like Scrum and Lean, development work has been broken down into smaller, more targeted projects, enabling organizations to rapidly iterate software. Taking that a step further, testing and other workflows are becoming more automated so as not to be a drag on production. CI/CD pipelines have been created to let deployment keep pace through automated builds and releases. Automated error detection and repair are replacing the manual scouring of log files.
Of course, more endpoints in an application mean more targets for malicious attacks, so container security and open-source security must be addressed. There’s a booming market in this space, as the major roadblock to adopting cloud and container technology is the fear of having your data lost or stolen.
If APIs are the new currency in development, data are the crown jewels. Today we have the capability and the need to collect, store, analyze and act upon terabytes and even petabytes of data to inform business decisions. With more devices feeding data back to ever-growing data lakes, massive processing headaches can arise. The evolution of edge computing helps reduce latency by processing data close to the device and transmitting only anomalies to the back end for analysis.
Of course, none of this brings any value to the organization unless the applications being created delight and retain end users. Organizations are realizing the value of UI/UX designers working in tandem with application developers to create more engaging experiences for end users.
So with that in mind, here is my “Little Mermaid”-inspired Manifesto for Modern Software Development.
1) Employ loosely coupled services architectures based on microservices and containers.
2) Agile development and DevOps practices are a must.
3) Automation is critical.
4) Security is now foremost on everyone’s mind.
5) Data is king.
6) Always be creating value for the business.
7) First and foremost, it’s about the end user.
Organizations today wish they could be a part of that world. z