SD Times - January 2018


JANUARY 2018 • VOL. 2, ISSUE 7 • $9.95 •







News Watch


Business continuity with OpenEdge


12 questions to ask before purchasing new software


Motor City board game revs Kanban principles for developers


ZeroStack releases DevOps Workbench with over 40 tools


Predictions for 2018

Where does Big Data go from here? page 8

2017: The Year in Review


ANALYST VIEW by Chirag Dekate 3 Data Center Predictions for 2018


GUEST VIEW by Shannon Mason Overcoming ‘temporal myopia’


INDUSTRY WATCH by David Rubinstein Transformation can be a monster

The Year of Digital Transformation
DevOps remains a competitive advantage
Testing catches up to pace of development
Security is no longer an afterthought
Changes in Java
AI: All in for automation

page 20

page 28

Introducing a new quarterly section for SD Times readers

As the walls between developers and IT operations crumble, it's more important than ever for managers to stay on top of the latest news across the disciplines. This month, SD Times is launching a quarterly section on IT Ops, and a website — IT Ops Times (coming Jan. 18) — to bring the latest trends, techniques and tools to organizations looking to gain agility, leverage services and improve their customer experiences. We believe you'll find it informative, interesting and relevant.

Continuous testing enables the promise of Agile page 37

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2018 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at





ART DIRECTOR Mara Leonardi CONTRIBUTING WRITERS Alyson Behr, Jacqueline Emigh, Lisa Morgan, Frank J. Ohlhorst


CONTRIBUTING ANALYSTS Rob Enderle, Michael Facemire, Mike Gualtieri, Peter Thorne CUSTOMER SERVICE SUBSCRIPTIONS






D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803




SD Times

January 2018

NEWS WATCH

HTML 5.2 now a W3C recommendation
The World Wide Web Consortium (W3C)’s Web Platform Working Group has announced a new specification to replace the HTML 5.1 Recommendation. The team announced HTML 5.2 is ready and now a W3C Recommendation. New features include the <dialog> element, integration with the JavaScript module system of ECMA-262, an update to the ARIA reference, and a referrer policy. “HTML is the World Wide Web’s core markup language. Originally, HTML was primarily designed as a language for semantically describing scientific documents. Its general design, however, has enabled it to be adapted, over the subsequent years, to describe a number of other types of documents and even applications,” the group wrote. Also available is the first public working draft of HTML 5.3.

CNCF announces Kubernetes 1.9
The Cloud Native Computing Foundation has announced the release of the production-grade container scheduling and management solution: Kubernetes 1.9. This is the fourth release of the year. As part of this release, the Apps Workloads API is now generally available. The Apps Workloads API groups together objects such as DaemonSet, Deployment, ReplicaSet, and StatefulSet, which make up the foundation for long-running stateless and stateful workloads in Kubernetes. Deployment and ReplicaSet are the two most commonly used and are now stable after a year of use and feedback.
Though Kubernetes was originally developed for Linux systems, support on Windows Server has been moved to beta status so that it can be evaluated for usage. This release also features an alpha implementation of the Container Storage Interface (CSI), a cross-industry initiative meant to lower the barrier for cloud-native storage development. CSI will make it easier to install new volume plugins and allow third-party storage providers to develop without having to add to the Kubernetes core codebase. Other features in this release include CRD validation, beta IPVS-based kube-proxy networking, an alpha SIG-Node hardware accelerator, CoreDNS alpha, and alpha IPv6 support.

OWASP releases the Top 10 2017 security risks
The Open Web Application Security Project (OWASP) officially released its Top 10 most critical web application security risks. This is the first time the organization has updated the Top 10 since 2013. “Change has accelerated over the last four years, and the OWASP Top 10 needed to change. We’ve completely refactored the OWASP Top 10, revamped the methodology, utilized a new data call process, worked with the community, re-ordered our risks, rewritten each risk from the ground up, and added references to frameworks and languages that are now commonly used,” the OWASP wrote in the Top 10 2017. According to the OWASP, some significant changes over the past couple of years that resulted in an update to the Top 10 include microservices, single-page apps, and the dominance of JavaScript as a primary language on the web. The Top 10 now consists of:
• Injection
• Broken Authentication
• Sensitive Data Exposure
• XML External Entities (XXE)
• Broken Access Control
• Security Misconfiguration
• Cross-Site Scripting (XSS)
• Insecure Deserialization
• Using Components with Known Vulnerabilities
• Insufficient Logging and Monitoring
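Injection tops the OWASP list, and the fix is the same as it was in 2013: never splice user input into query text. A minimal sketch of the difference, using Python’s sqlite3 and a hypothetical users table (the table, names and payload are illustrative, not drawn from OWASP’s own examples):

```python
import sqlite3

# Hypothetical users table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    return conn.execute(
        "SELECT name FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Parameterized: input is passed as data, never parsed as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # every row comes back: injection succeeded
print(find_user_safe(payload))    # no rows: the payload is treated as a literal
```

The classic `' OR '1'='1` payload turns the first query’s WHERE clause into a tautology; the parameterized version hands the same string to the driver as a value, so it matches nothing.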

Angular 5.1 released with Angular CLI 1.6 and Angular Material
Following November’s release of Angular 5.0, the team has announced Angular 5.1, Angular CLI 1.6 and the first stable release of Angular Material. Angular says that version 5.1 is a minor release that provides smaller features and bugfixes. It includes improved universal and App Shell support in the CLI, improved decorator error messages, and TypeScript 2.5 support.
Angular Material provides Material Design components for Angular. It is based on Google’s Material Design visual language and provides 30 UI components. In addition, the release includes the Angular Component Dev Kit (CDK) to provide developers with a set of custom component building blocks. Starting with this release, Angular Material and the Angular CDK will follow the same major release schedule as the rest of the platform.
Angular CLI 1.6 includes support for building apps using the new Service Worker implementation, which was released with Angular 5.0. In addition, it includes improved support for Angular Universal and App Shell support for generating and building an app shell. z







Where does Big Data go from here?
What innovations in data technology will inform the next generation of Big Data solutions?
BY DAVID RUBINSTEIN

The term Big Data has been bandied about since the 1990s. It was meant to reflect the explosion of data — structured and unstructured — with which organizations are being deluged. They face issues that include the volume of data and the need to capture, store and retrieve, analyze and act upon that information. Technologies and techniques such as MapReduce, Hadoop, Spark, Kafka, NoSQL and more have evolved to help companies get a handle on their ever-expanding data. So, where does it all go from here? What will inform Big Data 2.0?

Not so fast, say several data experts. Not everyone is as far along the Big Data journey as you might think. For small and midsized companies that do not have copious IT resources and data scientists to enable them to take advantage of the Big Data technologies, it’s been something they’ve read about but haven’t been able to implement. “Hadoop is much too complex for organizations that can’t afford large IT departments,” said Tony Baer, a research analyst at Ovum. “The next 2,000 or 3,000 Hadoop adopters won’t have the same profile as the first 3,000, who tend to have more sophisticated IT departments. [The newer adopters are] still trying to figure out the use case. They realize they need to do something, but a lot of them are like deer caught in the headlights.”

Amit Sharma, CEO of data driver provider CData, echoed that sentiment. “The Big Data term itself is slippery,” he said. “Many good definitions are out there, but the term is being extended to things that aren’t Big Data.” What Big Data simply does, Sharma explained, is help resolve scaling problems. But where the real challenges are, “Big Data is a solution looking for a problem.”

One of the key early tenets of Big Data was to use NoSQL databases, as SQL was seen as too rigid to deal with unstructured data. Now, with time to look back on that, some experts are saying that that might not be necessary. “There’s no reason why Big Data problems should be different than problems in the SQL world,” CData’s Sharma said.

SQL still relevant in Big Data
Monte Zweben, CEO of data platform provider Splice Machine, explains Big Data started at Google. “They published a MapReduce paper and the birth of the open-source version of it — called Hadoop — emerged, and everyone jumped on. It’s revolutionary because it fundamentally enabled the average Java programmer — and later other programming languages — to use many computers, servers and GPUs to attack Big Data problems. And this was revolutionary. But as time went on, new inventions came up where people needed to do that more effectively — Spark was an invention that came out of the analytics world. Spark was an advancement over the original MapReduce. Key-value stores emerged, like Cassandra and HBase, that allow you to do the serving of applications. So you had innovations in analytics, you had innovations in being able to serve operational applications, continued on page 10 >
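The MapReduce model Zweben describes — map work across many machines, shuffle the intermediate results by key, then reduce each group — can be sketched in miniature. This is a single-process toy (the classic word count), not Hadoop itself; the function names are ours:

```python
from collections import defaultdict

def map_phase(docs):
    # Map: emit (word, 1) pairs from each input document.
    for doc in docs:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group values by key, as the framework would between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values into a final count.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data", "big problems"]
print(reduce_phase(shuffle(map_phase(docs))))  # {'big': 2, 'data': 1, 'problems': 1}
```

In a real cluster the map and reduce calls run in parallel on different machines and the shuffle moves data over the network; the shape of the computation is the same, which is exactly what made it accessible to “the average Java programmer.”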



How do we make sense of Big Data? BY DAVID RUBINSTEIN

Machine learning. Data pipelines. Multi-cloud implementations. Containers. All of these will play a larger role in how organizations analyze, sort and deliver data to applications. “Taking advantage of Big Data analytics and taking advantage of machine learning, and AI, is certainly very important for most organizations, and there are tangible benefits. I just think that basically organizations are going to need a lot more guidance, which is why you see more guided analytics, and why I expect that implementations are going to trend toward managed implementations in the cloud — basic managed services,” Ovum analyst Tony Baer said. There is a caveat, though, with managed services, Baer cautioned. “A lot of organizations, as they go into the cloud and start using managed services, they’ll need to make the decision of how dependent am I going to be on this single cloud vendor and where do I insulate myself so I have some freedom of action? And do I get my managed services from a third-party so it’s transparent? Will it abstract me from

Amazon so if I decide I want to run elsewhere I can? In a way, it’s almost like an enterprise architecture decision… where do I have some insulation between us and the cloud provider? Or are we going to the whole Amazon stack? It’s a sleeper issue… it’s not going to all of a sudden be headlines next year, but I think a lot of organizations are going to start seeing this stuff.”
As Manish Gupta, CMO at Redis Labs, pointed out, complexity in the data space is only growing. “It’s not a swimming pool of data anymore, but an ocean,” he said. Handling data in real time needs to be a foundational element of any data strategy, he said. Bots will be required to handle the flow of data, and organizations will have to decide how much data can or should be analyzed. Gupta believes that “15 percent of data will be tagged, and about one-fifth of that will be analyzed.” He also said that the life cycle of technologies will shorten. “Hadoop became mainstream over the past two years, and yet now some enterprises are skipping Hadoop entirely and going

straight to Spark. And with Apache Kafka, perhaps you don’t need separate streaming technology.” For the technology investments organizations are making today, Gupta said they can hope to get five years out of it. “Organizational structures need to be more agile because of the churn of technology.” Machine learning tools have advanced a long way, noted Eric Schrock, CTO at Delphix. Other tools are advancing just as quickly. In fact, he said, “people don’t even necessarily want to shove their data into a Hadoop data lake anymore. They just want to run Spark or TensorFlow or whatever directly on data sources and do whatever they need to do without having the intermediate step of the data lake. The quality of your analytics, the speed of your data science and the quality of your machine learning is highly dependent on your ability to feed data into it. Some of that data is from Twitter feeds and event logs and other things, and if your data is stuck in these big relational databases, you still have that same problem.” z

Online Predictive Processing: OLPP
Online Predictive Processing, as defined by Splice Machine CEO Monte Zweben, is essentially the combination of Online Transaction Processing and Online Analytical Processing — with a bit more thrown in.
Zweben explained: “First they take their old app and put it on an OLPP platform and it just works because it’s SQL. Then they add a little bit of predictive analytics to it, and now all of a sudden this old, stodgy SQL app has a component on it that might be using machine learning and is getting better and better over time. We see OLPP, because it’s SQL, as the on-ramp to AI for even the oldest of SQL applications out there.
“You get a SQL database you can connect to it with standard APIs like JDBC and ODBC, you get an Apache Zeppelin notebook available with it and you get machine learning libraries in process so that you can implement predictive capabilities, and you get streaming as well, totally embedded, so you can ingest either batches of data that might be inventory downloads in a supply chain application from an ERP system, but you also might get streaming ingestion like split-second transactions off POS terminals in retail stores. Those kinds of things now are all capable inside this relational database management system that is both good at transactions and can power an application and make predictive analytics actionable.
“OLPP gives you a relational database management system that’s capable of OLTP workloads, like powering commerce sites and mobile applications, at petabyte scale; it can get petabytes of data and look up a single record in literally milliseconds. You also get OLAP processing, so if you have petabytes of transactional data — you’re a credit card company and you have petabytes of transaction data in a database, and you get a call center that needs to look up a single record. That’s OLTP. Find out the average transaction per zip code for frequency of transaction and transaction size... you must aggregate all that data together in a large big data set, and that’s OLAP processing. You also get machine learning, streaming and the notebook.” —David Rubinstein
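The OLTP/OLAP split Zweben draws — a millisecond single-record lookup versus an aggregate across the whole data set — can be sketched against one table. This is a toy illustration using Python’s sqlite3 with a hypothetical transactions table, not Splice Machine itself:

```python
import sqlite3

# Hypothetical credit-card transactions, tiny for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (id INTEGER, zip TEXT, amount REAL)")
conn.executemany("INSERT INTO txns VALUES (?, ?, ?)",
                 [(1, "11803", 25.0), (2, "11803", 75.0), (3, "10001", 40.0)])

# OLTP: the call center looks up one record by key — one row, by index.
row = conn.execute("SELECT amount FROM txns WHERE id = ?", (2,)).fetchone()

# OLAP: average transaction size per zip code — an aggregate over all rows.
by_zip = conn.execute(
    "SELECT zip, AVG(amount) FROM txns GROUP BY zip ORDER BY zip").fetchall()

print(row)     # (75.0,)
print(by_zip)  # [('10001', 40.0), ('11803', 50.0)]
```

The OLPP pitch is that both query shapes run against the same system, so the aggregate side can feed predictive models without exporting the transactional data first.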





Big Data < continued from page 8


you had streaming innovations emerging like Kafka. But one thing is true across all of these things… the low-level programming that was necessary to make these things work is no longer acceptable.”
“For it to be actually acceptable by the Global 2000 [companies], it has to be in a higher abstraction or language, and that we believe is SQL, the standard data language. You see many, many projects now putting SQL layers on top of these compute engines.”
Zweben went on to explain that organizations attributed the scalability problem of relational databases to the SQL language, because SQL was so robust and comprehensive. “It lets you join together tables and lets you do transactions, and these are very complicated data operations. But people thought these databases were too slow and don’t scale to my Big Data problem, so let me go to these NoSQL architectures on the Hadoop stack,” he said. “But the problem was, they threw away the baby with the bathwater. It wasn’t SQL that was broken; it was the underlying architecture supporting the SQL.”
Adam Famularo, CEO of architecture modeling company Erwin, said modeling “will become the heart and soul of your data architecture, your data structure, your data elements…” Famularo said it all begins with business processes. “Let the business lead the data architecture, which then needs data models to model the schema, then to your governance and your approach to

governing that data. And that’s where the business comes back in, to be able to help define what the business infrastructure is, the business dictionary, straight through to the data dictionary. It starts with the business and ends with the business, and in between is a whole bunch of data structures that need to be put in place that are then monitored and managed throughout the enterprise, usually by the [chief data officer] and the CDO organization.”
MongoDB’s CTO Eliot Horowitz noted that once data is written, teams don’t want to change it or rearchitect it. “Everyone always wishes they had a perfect data architecture and they’re never going to have it,” he said. “What really matters is, can you easily allow people to collaborate on the data, share the data in meaningful ways easily, while maintaining incredibly high security and privacy controls.
“The way I think this is going to go is you’re going to have data, you’ll have some database with things in it, and you will configure rules such that different people can see different things, but then you can query that data without having to copy it or move it, and you can just decide who you want to share different things with. If you’re in health care, you can share certain things with insurance agents or insurance companies, or certain aggregate data with researchers, without having to give them a copy of the data, and without having to write a ton of really complex logic. It’s a pretty different kind of model, more like a federated model. The trick there is to get security and privacy done right.” z
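Horowitz’s rule-based sharing — different audiences see different fields of the same records, with no copies handed out — might be sketched as a projection applied at query time. All names, fields and rules below are hypothetical, invented for illustration:

```python
# Per-audience visibility rules: which fields of a record each party may see.
RULES = {
    "insurer":    {"patient_id", "procedure", "cost"},
    "researcher": {"procedure", "cost"},  # aggregate-friendly, de-identified
}

def share_view(records, audience):
    # Project each record down to the fields the audience is allowed to see,
    # instead of handing over a full copy of the data.
    allowed = RULES[audience]
    return [{k: v for k, v in rec.items() if k in allowed} for rec in records]

records = [{"patient_id": 1, "name": "A. Smith", "procedure": "MRI", "cost": 900}]
print(share_view(records, "researcher"))  # [{'procedure': 'MRI', 'cost': 900}]
```

The point of the federated model is that the rules live with the data, so a new audience is a configuration change rather than another exported copy plus custom filtering logic.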






Business continuity with OpenEdge BY LISA MORGAN

Digital transformation and enhanced user expectations have established the need for 24x7x365 availability as the “new normal.” While business continuity is not a new concept, real-time and near-real-time expectations make outages more costly and disruptive to the business than ever before. The natural disasters of 2017 demonstrated that businesses cannot afford a single point of failure. However, outages also occur due to many other factors, including security breaches, oversights in recovery testing or simple implementation errors. In fact, according to the February 2016 Forrester Research report, “Building the Always-On, Always-Available Digital Enterprise,” one-third of companies have had a major disaster or suffered a significant disruption in the past five years due to human error.
“Enterprises are relying more and more on data, so they need to ensure their businesses are operating effectively and continuously,” said Dan Mitchell, principal sales engineer at Progress. “We provide a database engine and language that make it easier to develop applications, and we continue to enhance it to provide increased always-on capabilities. Coupled with proactive monitoring, data encryption and online maintenance capabilities, we help our customers to achieve business continuity.”

Assess your business continuity profile
A solid business continuity plan can help ensure that a company can continue to do business without interruption. Part of a solid risk management strategy is assessing an organization’s sensitivity to outages. While tolerance ranges from company to company, today’s consumers, business partners and employees expect systems to always be available.
“When our customers ask what kind of availability we enable, we ask them to estimate the cost of downtime for a single hour,” said Mitchell. “As companies become more digital, uptime becomes more critical because the cost of downtime increases.” A typical cost of downtime calculation examines lost productivity and/or revenue, regulatory fines, reactive recovery efforts, and intangibles such as loss of customer loyalty or brand reputation, which are harder to measure but have a lingering effect.
Content provided by SD Times and

Complete continuity
The February 2016 Forrester report introduced the concept of “technology resiliency,” which it defined as the ability of an organization to absorb the impact of any unexpected event without failing to deliver on its brand promise. The concept goes beyond traditional notions of business continuity and disaster recovery, defining five key areas that enterprises must address simultaneously:
• Proactive monitoring
• Operational backup
• High availability
• Disaster recovery
• Security
“Enterprises tend not to address all five of those things equally,” said Mitchell. “They tend to address one or two of them at the expense of the others, so they’re caught off-guard when they should have been able to proactively manage the situation.”
For example, when most people think of downtime, they think in terms of disaster recovery, when the problem could have been caused by an errant application that’s causing an entire system to freeze up. A Progress customer recently faced an availability issue which it blamed on its application. With Progress’ assistance, it became clear that another application had caused a network card to overflow. Being prepared across the environment and for various situations minimizes impact.

Choose the right tools
The Progress OpenEdge RDBMS Advanced Enterprise Edition (AEE) covers all aspects of Forrester’s technology resiliency model. It provides a unique on-premises, cloud or hybrid computing and application production solution in one package that’s both cost-effective and secure. With it, OpenEdge customers enjoy high database performance, reliability and 24x7x365 availability, plus scalability that can support thousands of concurrent users and terabytes of data.
OpenEdge AEE includes six modules that can be purchased individually or as a bundle, including OpenEdge Management, OpenEdge Replication and OpenEdge Transparent Data Encryption (TDE).
OpenEdge Management enables IT teams to monitor applications and proactively troubleshoot impending problems in real time so they can ensure customer satisfaction and meet SLAs.
OpenEdge Replication provides flexibility in maintenance and reduces unplanned downtime by protecting business-critical data and eliminating a single point of failure.
OpenEdge Transparent Data Encryption uses standard encryption libraries and encryption key management to provide secure, encrypted data. z





12 questions to ask before purchasing new software BY JENNA SARGENT

Buying and implementing a new piece of software is no easy task. So many different factors contribute to whether or not a software implementation is successful that it is important to do your research before investing in something new. Craig Flynn, founder and EVP of engineering at relationship management software provider Impartner, offered up his tips for purchasing new software. According to him, there are 12 questions all executives must ask before purchasing new software.
First of all, executives must understand what the technology is, Flynn said. “We tell the business buyer that they need to make sure they can clearly articulate the technology they’re asking to implement and be able to briefly explain why it’s so important for them to have and how it’s different from what they already have,” he explained.

Flynn also believes they must understand whether or not the technology is trusted and reliable. “It always makes you feel a little more comfortable if you see a couple big names who have already used it,” he says. “They are not going to want to be a guinea pig on something that is new.” When implementing something that will affect potentially thousands of users in a company, you must make sure that it is reliable, he said. It is also important to be sure that

the new technology is secure. “Big companies have a security document that they usually have vendors fill out. Vendors need to be able to have that ready for the business buyer in hand or be ready to provide it to the security teams or IT person,” he explained.
Making sure a solution is multitenant is also very important. “You want to make sure that you’re getting the latest and greatest software fixes at the same time as everyone else,” Flynn says. With single-tenant software, you risk having to wait for important software or security upgrades.
Similarly, the technology needs to be able to scale so that the software can grow alongside the company without having to be retooled to serve a larger number of users or add functionality.
Executives must also ensure that the technology will hold up in a disaster. “That topic really speaks to your hosting infrastructure. Are you on the cloud? If you are, is it using a CDN? Do you have cloud redundancy? And if you’re hosting it on your own network and your own infrastructure as we do, IT people are going to want to see that you’re in a Tier 3 or Tier 4 data center,” said Flynn. “They’re going to want a network diagram — the general one and the detailed one — and they’re going to want to make sure you have your network set up the way they have their network set up with the correct zones, best practices, and geographically dispersed locations.”
The technology must also be easy to implement. An IT person who is signing off on a technology or helping to support it will want to make sure it will not take up a lot of their time. If it does require a lot of time, they will want an honest assessment of just how much time, Flynn explains. “It’s really important to be straightforward with the IT department and let them know what they’re getting into.”
Companies should also look for

solutions that support single sign-on (SSO) and comply with standard authentication protocols. “Good IT departments want to be the source of record, so they’re going to want to make sure SSO or API integration is seamless, best-in-class, and it supports the latest technology,” said Flynn.
Additionally, buyers should want to know how the technology integrates with their other existing solutions. According to Flynn, IT teams will look for robust APIs that are easy to configure.
Executives should look for software that has regular updates that are well-communicated. Companies should have an environment that enables dev, stage, and production. Having a second environment where they can vet their modifications before they get pushed to the production environment is also helpful. “They’re going to want to have your product update on stage so that they can make sure it plays well with everything they’ve built,” said Flynn. “We can tell them we’ve QAed it for over a year and they still would want to go test it.”
It is important to verify the integrity of the data as well. “You want to make sure the data is protected within your infrastructure,” said Flynn. He says it is also important to find a vendor that will allow you to do object-level auditing and field-level auditing.
Finally, it is important to find out if it will be cost-prohibitive to cancel the contract. “You can do all the due diligence in the world with a vendor and go through all the things I just mentioned and have them pass, but you really don’t know until you launch it,” Flynn said. Vendors that are confident in their solution should be able to say upfront that it will not be cost-prohibitive to finish or get out of a contract, he explained.
While all of these may seem like a lot to look into, it is important that you do in order to find a solution that will work for you and accomplish what it needs to. z








Motor City board game revs Kanban principles for developers BY JENNA SARGENT

Board games are as popular as ever these days, and while they are fun to play, they can also hold educational value in the field of software development. There are lots of board games that teach the concepts of programming, such as Robot Turtles or RoboRally. John Yorke, product owner and former agile coach at Asynchrony Labs, an IT consulting firm, has taken the concept of educational board games to the next level with his new game called Motor City.
This game features many elements of the Kanban methodology and was originally designed for use in training courses. He took some inspiration from Eliyahu Goldratt’s book “The Goal” and also added in Kanban concepts, he explained.
The game is all about fulfilling customer orders in the era of 1930s/1940s cars. Players can pick an order and decide which cars to build, which he compares to picking Destination Tickets in the popular board game Ticket to Ride. Throughout the game players follow their car through the production line, and certain orders in your factory will have different constraints that you will need to account for. Some cars will flow through the factory easily, while others may get stuck. “You’re learning the principles of Kanban and at the same time you’re understanding the principle of constraints and how you adapt your flow to cope with those constraints, and it’s all wrapped up in a fun game,” Yorke explained.
The winner of the game is the player that can complete the most customer orders. Yorke says this is true to the

The game takes players through an automobile production line.

world of software development, where he says it’s important to deliver value to your customers. “You understand that you win the game by delivering value to your customers, and producing things that aren’t needed is waste, just like being inefficient in the way you work through things in your work is waste,” he says. The game took about a year to get to where it is now, and it has already been used in several training sessions. Yorke has attended 4-5 conferences this year where he has done a talk on the game and then played through it. The game is mostly being marketed to businesses wishing to use it for training, as the price point of $120 is currently a bit too high for a casual gamer.
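The constraint principle the game borrows from “The Goal” — a production line moves only as fast as its slowest station — reduces to a few lines of code. The factory, its stations and their rates below are invented purely for illustration:

```python
def line_throughput(stations):
    # Cars per hour each station can process; the bottleneck caps the line.
    return min(stations.values())

def bottleneck(stations):
    # The constraint: the station to improve first if you want more flow.
    return min(stations, key=stations.get)

factory = {"stamping": 12, "paint": 5, "assembly": 9}  # hypothetical rates
print(line_throughput(factory))  # 5 — the line moves only as fast as paint
print(bottleneck(factory))       # 'paint'
```

Speeding up stamping or assembly changes nothing until paint improves — the same lesson players learn when cars pile up in front of one station while the rest of their factory sits idle.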

Yorke hopes that the game is successful enough to warrant mass production, so the price can come down and the game can become more accessible.

Yorke's experience in programming helped him design the game. Throughout the design stage, Yorke and his team did a lot of play testing and had to cut things that didn't work well. "You could say in a programming sense there's a lot of refactoring that goes on," he said. "You don't want to lose the heart of what you're trying to get to, but there are certain things that work and certain things that don't work. And you have to be prepared to throw away things that you put a lot of effort into, which is always hard, especially with programming where you get tied to your code and you're not comfortable throwing it away. It's a very iterative process, that's for sure."

Though the game-design process itself rarely mirrors the Kanban process, especially as a game grows more complex, the principles of Kanban shine through during play.
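The constraint dynamic the game teaches can be sketched in code. This illustrative Python simulation (the station processing rates are invented for the example) shows the Theory of Constraints idea at the heart of Motor City: the slowest station in the line, not the fastest, determines how many cars leave the factory.

```python
def simulate(rates, turns):
    """Simulate a production line for `turns` rounds.

    rates[i] is how many cars station i can process per turn;
    buffers[i] holds work queued in front of station i.
    """
    buffers = [0] * len(rates)
    buffers[0] = float("inf")  # unlimited customer orders feed the line
    done = 0
    for _ in range(turns):
        # Process downstream stations first, so work moves one step per turn.
        for i in reversed(range(len(rates))):
            moved = min(rates[i], buffers[i])
            buffers[i] -= moved
            if i + 1 < len(rates):
                buffers[i + 1] += moved  # hand off to the next station
            else:
                done += moved            # a finished car leaves the factory
    return done

# Three stations: the middle one (rate 2) is the constraint.
print(simulate([5, 2, 5], turns=10))  # 16 cars finished
print(simulate([9, 2, 9], turns=10))  # still 16: faster outer stations don't help
print(simulate([5, 4, 5], turns=10))  # 32: relieving the bottleneck doubles output
```

Speeding up the already-fast stations changes nothing, while raising the constrained station's rate doubles output, which is exactly the lesson players learn by adapting their flow around a constrained factory.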


January 2018

SD Times


ZeroStack releases DevOps Workbench with over 40 tools
BY JENNA SARGENT

ZeroStack has announced the availability of its DevOps Workbench. The tool allows developers to create workbenches from a combination of open-source and commercial tools using one-click deployment.

"I&O leaders are under intense pressure to deliver services to production faster to enable the business to pursue opportunities and respond to threats that may endanger the entire enterprise," wrote George Spafford and Ian Head, research directors at Gartner. "In Gartner's Fall 2017 Enterprise DevOps survey, 76 percent of respondents indicated that they are using DevOps in regulated situations, which is an increase from 47 percent in 2015. Clearly, DevOps use is evolving rapidly and can work in regulated environments with pragmatic adaptation."

The ZeroStack DevOps Workbench features more than 40 preferred developer tools to speed up the development process, as well as extensible templates that enforce consistency among development teams. According to the company, the software adapts to a developer's existing hardware and gives IT more visibility and control over development infrastructure.

"Our DevOps Workbench delivers the ultimate in developer and IT productivity," said Kamesh Pemmaraju, vice president of product management at ZeroStack. "By automating infrastructure management while giving developers the flexibility to choose their own environments, we accelerate software development to enable digital transformation."

As part of the announcement, the company is adding OpenMake's DeployHub Pro Continuous Deployment solution to the DevOps Workbench. OpenMake's solution provides agentless software release automation designed for agile DevOps. "We're in the business of making developers more productive, and we want our solutions available in as many places as possible," said Tracy Ragan, CEO of OpenMake. "By adding DeployHub Pro to the ZeroStack DevOps Workbench, we are opening a new channel that provides single-click access to our solution from within the ZeroStack private cloud."

ZeroStack lays out a five-step DevOps plan

The company also recently revealed its five-step plan toward a self-driving cloud. The plan leverages existing investment in on-prem virtual infrastructure and combines DevOps automation with customer control. The five steps are:

1. Automatic installation and configuration
2. Integration that brings together AWS and VMware
3. Self-service provisioning
4. Advanced machine learning to automate processes
5. Automated upgrades

"Our customers thrive on automated systems for creating productive Agile DevOps environments," said Ragan. "ZeroStack delivers an automated DevOps system that empowers developers while reducing the burden on the IT organizations that support them, creating IT that moves at the speed of Agile DevOps."

In other DevOps news…

• Chef is bringing web application security to the speed of DevOps with Signal Sciences. The company announced it has selected the Signal Sciences Web Protection Platform to provide threat protection and security visibility into its web apps. Signal Sciences is designed to limit false positives without the fear of breaking legitimate traffic, and provides broad coverage against real threats and attack scenarios. It provides a menu of deployment options, a next-gen web application firewall, runtime app security protection modules, and the ability to operate as a reverse proxy for legacy apps.

• Datadog announced a new AI feature to help DevOps teams receive and respond to critical alerts. Datadog Forecasts is designed to predict performance and stability issues, and alert teams days, weeks or months in advance. Users can use Forecasts to visualize predicted trends as well as specify how far in advance they want to be alerted about potential issues. According to the company, adding forecasts to dashboards allows customers to combine historical trends with future predictions, so they can add correlations to metrics such as anomalous traffic to web servers, forecasted API request traffic, and forecasts of disk usage on database servers.

• StreamSets is adding DevOps sensibilities to data movement architectures with the release of StreamSets Control Hub. The hub provides centralized, collaborative design of dataflow topologies and enables testing, provisioning and elastically scalable execution of dataflows on-prem, on edge or in the cloud. Features include a hosted dataflow designer, automated deployment, a development toolkit that supports Java and Python, governance integration and hosting options.



Digital Transformation: 10 Top Trends for 2018
BY JACQUELINE EMIGH

In 2018, how will companies use development processes, business practices, IT infrastructure, and technologies like artificial intelligence and IoT to help drive digital transformation? While many organizations are starting down the path, sometimes with an eye to improving the user experience (UX), progress levels, types of initiatives, and specific implementations vary widely from one industry to another, and even between individual companies. That's the general consensus among a variety of informed observers.

In a study of 450 heads of digital transformation, 80 percent of respondents said they were at risk of being left behind by digital transformation. Moreover, 54 percent agreed that unless their digital transformation efforts are successful, their companies will go out of business or be absorbed by a competitor within four years, according to the survey, commissioned by Couchbase and conducted by Vanson Bourne.

On the other hand, another recent study looked at the differences between the "top 100 leaders" in digital transformation and other companies, turning up much more hopeful results. Performed by SAP, with research and analysis support from Oxford Economics, the study found that, on average, the leaders expect a 23 percent growth rate over the next two years through digital transformation. "What set these leaders apart is that they have internalized the need to transform what they think as well as what they do — to create a digital mindset across the organization. This is the difference between saying 'we need a new mobile app' and 'we need new ways to serve customers in the ways they want to be served,'" according to the report.

Digital transformation initiatives typically involve cloud computing platforms, DevOps and agile development approaches, and emerging technologies

Image: The Nerdery


like non-relational databases, big data analytics, AI, virtual reality (VR), augmented reality (AR), and IoT devices. Organizations are making a massive mistake, though, if they focus on sheer technological change. Instead, the end objective should be a better customer experience. “If the ultimate goal of digital transformation is to more effectively understand, engage, sell to, and support customers, it’s fundamentally a business transformation that requires an extraordinarily strong partnership between IT and business leaders,” noted Rich Murr, CIO of Epicor Software.

Exceptions are the rule

Across the board, "born in the cloud" startups hold advantages over older companies, which must contend with modernization of (or migration from) legacy servers, mainframes, and network infrastructures, said Brian Klingbiel, executive VP at Ensono.

Amazon, for example, is completely disrupting long-time brick-and-mortar retailers like Macy's and Sears, pointed out Chris Locher, VP of software development at The Nerdery. In fact, Dell Technologies' Digital Transformation Index shows that 78 percent of leaders of large companies view startups as threats to their businesses.

When it comes to staffing, only 20 percent of business and IT executives in the public sector rate their organizations' digital skills as "highly or quite developed," in contrast to 46 percent of those in the healthcare, pharmaceutical and life sciences industry, according to recent research by PwC.

Looking into the year ahead, here's a drilldown into ten trends to watch for in 2018, culled from numbers and analysis from many thought leaders.

1. The UX will take center stage. "Digital transformation provides companies with the ability to build better products, deliver better experiences, and better understand customers," noted Dave West, a former Forrester analyst who is now CEO of Scrum.org.

These days, digital apps with an engaging UX are a business necessity. Insurance customers, for example, will no longer stand for calling up agencies all over the place to get rate quotes, filling out paper-based claims, and then waiting weeks to get paper-based checks from insurance companies, Locher maintained.

Delivering a great UX can extend far beyond just producing compelling smartphone apps for end customers. Ultimately, it can mean creating a seamless digital experience for interacting with customers, employees, partners, and other stakeholders, across mobile devices, the web, and IoT devices.

2. DevOps will forge ahead, especially in config management. Digital transformation begins (although certainly doesn't end) with Agile and DevOps development approaches. Agile methodologies originated mainly at smaller companies as a way of getting development teams to work more efficiently. Enterprises began hopping aboard with the emergence of DevOps, which added automation — along with IT and operations teams — to the mix, said Steve Brodie, CEO of Electric Cloud.

Speed to market with Agile and DevOps can be impressive indeed. The UK's central tax agency, Her Majesty's Revenue and Customs, is now able to roll out one new feature every week, observed Paul Fremantle, co-founder of WSO2.

3. Apps will get smarter. "Starting in 2018, we'll take gargantuan strides in embedding near-instant intelligence in IoT-enhanced cities, organizations, homes and vehicles," predicted Gaurav Chand, senior VP of global solutions for Dell EMC.

Right now, AI is still in its infancy. However, scattered implementations are happening in areas ranging from health care to human resources (HR) and marketing. In health care, for instance, a company called FDNA has deployed an application for Next Generation Phenotype (NGP) facial pattern matching among more than 2,000 medical facilities worldwide. Running in FDNA's cloud and accessible from mobile devices and PCs, the app allows pediatricians to match children's facial characteristics against an ever-growing database, narrowing down a diagnosis to a handful of known medical syndromes in seconds. FDNA has produced a 75 percent improvement in the ability to reach diagnoses through genome testing, according to CEO Dekel Gelbman.

4. Small digital transformation projects will expand. Some small digital transformation projects are slated for internal expansion this year. For example, MelhorTrato.com, a Spanish-language financial site, recently completed a pilot test of TensorFlow deep-learning software running on its own network servers. About four months ago, the company's HR department began using the AI technology to screen for the best job candidates, said Cristian Rennella, CTO and co-founder of the 9-year-old Latin America-based startup, which employs about 134 people.

In the past, the HR department spent 67.2 percent of its time reading the CVs of candidates who applied through its own website and third parties. To automate this process, the company tried TensorFlow. For the first 30 days of the trial, HR staff needed to work with the software to identify errors and alert the system to improve its predictions. This caused a temporary slowdown in productivity in the HR division. "But we are sincerely surprised with the results since then and very happy. On the path to digital transformation, boring, manual and repetitive business processes can be performed automatically thanks to AI. We can now invest more time on interviewing the best candidates," Rennella told SD Times.

5. Cross-functional teams will add business execs. Yet in large enterprises, moving digital transformation beyond small pockets of deployment demands the formation of cross-functional teams that also include the business side of the organization, according to many observers. In some instances, these teams are led by newly minted "chief digital officers," but elsewhere, leaders with other C-level titles are at the helm. "To effectively make the transition [to digital trans-
continued on page 26 >



2017: Year in Review

DevOps remains a competitive advantage


DevOps continued to dominate development teams and businesses throughout the year, with organizations trying to reap its benefits. A study found that despite DevOps being a well-known phenomenon, 50 percent of respondents were still in the process of implementing DevOps or had just implemented it within the past year.

In the past year, many software companies teamed with or acquired others to broaden their DevOps solutions. CA acquired Veracode at the beginning of the year to add security to its DevOps portfolio. JFrog acquired DevOps intelligence platform CloudMunch in June. CollabNet and VersionOne announced a merger in August to bring agile and DevOps together. Perforce made a push into DevOps with the acquisition of agile planning tool provider Hansoft in September.

In addition to acquisitions, companies developed and released their own DevOps solutions from scratch throughout the year. Dynatrace started the year off with the release of UFOs, a status alert system designed to help DevOps teams get a better look into their deployment pipelines. Microsoft announced the release of Visual Studio 2017 in March with DevOps as one of its core pillars. VS 2017 included code editing, continuous integration, continuous delivery, and Redgate database integration. Microsoft continued its DevOps commitment throughout the year, ending with the preview release of Azure DevOps Projects in November.

GitLab took a new approach to DevOps with the release of Auto DevOps in July, and shared its vision for Complete DevOps in October. Auto DevOps can automatically detect programming languages, build an app in the detected language, and then automatically deploy it. The Complete DevOps vision combines development and operations into one user experience.

CloudBees released its DevOptics solution in August to provide metrics and insights between teams. Electric Cloud released ElectricFlow 8.0 with new DevOps insight analytics. Atlassian unveiled the Atlassian Stack and DevOps Marketplace to break down silos and accelerate DevOps adoption, and brought DevOps workflows to scale with the release of Bitbucket Server 5.4 and Bamboo 6.2 in October.

Companies also worked throughout the year to bring DevOps together with other software development approaches and tools. In January, CA released a report that revealed agile and DevOps work better together than alone. Later in the year, CA released another study that found if businesses really want to boost their software delivery performance, they should combine DevOps with cloud-based tools. Scrum.org and the DevOps Institute teamed up on ScrumOps, a new approach to software delivery that brings Scrum and DevOps together.

One of the biggest new approaches to come out of 2017 was the idea of DevSecOps, a notion that bakes security into the DevOps lifecycle in order to find and fix security vulnerabilities earlier and throughout the life cycle for faster, higher-quality code. Veracode started the year off with its release of Greenlight, an embedded DevSecOps solution that enables developers to identify and fix security vulnerabilities, and rescan the code to double-check that issues are fixed. DBmaestro released the Policy Control Manager, a DevSecOps feature designed to eliminate risks and reduce downtime and loss of data. In July, WhiteHat Security took a look at the success of the DevSecOps approach in its Application Security Statistics report. The report found critical vulnerabilities in apps were resolved in a fraction of the time it takes without a DevSecOps approach.

Other reports throughout the year looked at the challenges blocking DevOps: Redgate found databases were one of the most common bottlenecks for DevOps teams. Quali discovered infrastructure and fragmented toolsets were among the top barriers to DevOps adoption. And in a combined report, Atlassian and xMatters found successful DevOps implementations make the most out of culture, monitoring and incident management. The State of DevOps report found that automation, leadership, and loosely coupled architectures and teams are key to DevOps success.

According to Forrester's software development predictions for 2018, DevOps tools will continue to proliferate and consolidate, and DevOps will drive the use of APIs and microservices.


Testing catches up to pace of development

Continuous testing, automated testing, and change-based testing were popular in 2017

BY JENNA SARGENT

In the past, testing has mostly been an afterthought. These days, as development cycles get shorter and shorter, there has been a push to test earlier in the development process. There are many types of testing out there, but the ones that really held a presence this year were those related to DevOps: continuous, automated, and change-based testing.

In February, Parasoft released a new continuous testing solution for IoT devices. It allows teams to identify and remediate vulnerabilities; verify functionality, performance, security, and compliance; balance testing efforts and code coverage with change-based intelligence; and efficiently support agile and continuous testing processes.

Tricentis Tosca is a tool that allows for automated and continuous testing, giving businesses a way to address the increased complexity and pace of modern application development. It was named a leader in Gartner's 2017 Magic Quadrant for Software Test Automation. In June, Business Intelligence and Data Warehouse testing components were added to the tool. This addition provides a full lifecycle approach to those areas of testing, which the company says strengthens data integrity.

In April, Sauce Labs added a new plug-in for Microsoft Visual Studio Team Services that automates the build process, removing the potential for human error. With this plug-in, teams can configure tests via VSTS so that they run whenever there is a pull request or check-in.

As delivery cycle times decrease and the technical complexity needed to deliver a positive user experience increases, these forces create a gap. Continuous testing is there to bridge that gap, but as the gap widens, there is a need to go beyond continuous testing by incorporating artificial intelligence.

Finally, change-based testing allows for a more agile testing process. In change-based testing, you look at which files have changed and what tests have been run on those files. As a result, it is possible to analyze the changes between two builds and determine which tests need to be re-executed. "Digging deeper we can quickly get insight into where in the code the changes have occurred, how the existing tests correlate to those changes, and where testing resources need to be focused," wrote Parasoft's Mark Lambert in a post. "From here, we can create a test plan, addressing failed and incomplete test cases with the highest priority, and using the re-test recommendations to focus scheduling of additional automated runs and prioritizing manual testing efforts."



Security is no longer afterthought
BY CHRISTINA CARDOZA

Year after year, businesses face challenges when it comes to security, and 2017 was no different. Instead of trying to lecture the industry about the importance of application security testing, organizations tried to find new ways to bring security front and center.

The problem is that developers don't have proper security education for today's world of coding, according to CA Veracode and DevOps.com's 2017 DevSecOps Global Skills Survey. "With major industry breaches further highlighting the need to integrate security into the DevOps process, organizations need to ensure that adequate security training is embedded in their DNA," said Alan Shimel, editor-in-chief of DevOps.com. "As formal education isn't keeping up with the need for security, organizations need to fill the gap with increased support for education."

CA Veracode's State of Software Security Developer Guide in November reiterated the need for security education, but noted that it isn't a one-time proposition. With the threat landscape constantly changing along with application architectures, languages and features, developers need to keep learning application security skills and keep experimenting in their professional and personal lives.

However, with the speed of development moving faster throughout the year, organizations went further than education and evolved the security role. DevSecOps became a popular term and strategy in 2017 as a way to get DevOps teams to start thinking differently about security and bake it into the entire lifecycle. "The biggest problem today with application security is that the development organization is not goaled to secure their software. They are goaled to release software quickly. Without a mandate and shared accountability between security and development that is measured and reported at every level of the organization, security will continue to be hard," said Peter Chestna, director of developer engagement at Veracode.

Teams and developers also tackled security through tools like analytics, which can provide visibility into threats in real time and help teams respond faster. In addition, shifting security left (also a notion of DevSecOps) implements security earlier in the software development life cycle instead of leaving it to the end.

The year ended with the Open Web Application Security Project (OWASP) releasing its Top 10 most critical web application security risks that development teams should be aware of. This is the first time since 2013 that OWASP has updated the Top 10. The 2017 edition includes: injection, broken authentication, sensitive data exposure, XML external entities, broken access control, security misconfiguration, cross-site scripting, insecure deserialization, using components with known vulnerabilities, and insufficient logging and monitoring.

Looking ahead to 2018, Gartner predicts more organizations will adopt a continuous adaptive risk and trust assessment (CARTA) model as security becomes more important in a digital world. Technologies that will pose the biggest risk throughout the new year include intelligent transportation systems, machine learning and smart robots, according to Carnegie Mellon University's Software Engineering Institute (SEI) 2017 Emerging Technology Domains Risk report.



Changes in Java

Over the past year, Java went through many changes. At the start of the year, Java EE was in an uncertain state and the Java 9 release had been pushed back again from its originally scheduled 2016 release date. At JavaOne in 2016, Oracle announced its plans to address the platform, as well as information on Java SE 9 and OpenJDK 9. In June 2017, the Java Community Process executive committee voted for and approved the Java Platform Module System, known as JSR 376, laying the groundwork for Java 9.

Java 9 finally was released in September after being pushed back several times. It features a modular architecture instead of the monolithic architecture seen in past versions of Java. This enables scalability on smaller devices, a feature that should have


AI: All in for automation

Last year, we stated 2016 was the year of artificial intelligence as tools and solutions became smarter and more advanced. This past year, artificial intelligence went from a sought-after technology to a reality for most organizations. Today, we have devices that can react, respond and recommend on their own. Developer solutions have also evolved to automate and speed up the software development lifecycle through automated testing and automated builds, as well as the ability to find, detect and fix bugs through monitoring and testing. A few years ago, those capabilities were only an idea for the future, or available only to a few technology giants; now they are becoming the new norm for devices and solutions. This is only the beginning, as organizations invest heavily in the technology and clear the path for developers.

Google continued its work in the machine learning realm with its open-

020-026_SDT07.qxp_Layout 1 12/19/17 4:46 PM Page 25

s in Java

been included in JDK 8, but wasn't ready at the time of that release. Java 9 also features a link-time phase occurring between compile time and runtime. JShell added Read-Eval-Print Loop (REPL) functionality to Java, giving developers instant feedback as they write code and making it helpful for beginners, or even for experienced Java developers experimenting with a new API, library, or feature. There were also several new features that improved JVM compilation and performance, as well as enhancements to the core library.

In July 2016, developers were still awaiting news about when Java EE would get updated, and at the time there was no information coming from Oracle. A group called the Java Guardians was formed to try to get Oracle to give Java EE the attention they felt it needed to move forward. At JavaOne in September, Oracle finally spoke up about Java EE, saying that it planned to finalize and ship it in 2017.

In August, Oracle announced that it wanted to move Java EE to an open-source foundation. One month later, Oracle turned Java EE over to The Eclipse Foundation, with Oracle continuing to support existing Java EE licenses. According to Oracle, the move to The Eclipse Foundation enabled organizations to "adopt more agile processes, implement more flexible licensing, and change the governance process." The Eclipse Foundation hosts many other open-source projects and takes a community-based governance approach, which allows for greater collaboration on projects and rapid innovation.

Also in September, Oracle proposed changes to the Java SE and JDK release cycle in an effort to make the

source framework for machine learning, TensorFlow. In February, the library finally reached 1.0. Google built upon the library throughout the year with the release of the TensorFlow Debugger for visibility into TensorFlow graphs, Tensor2Tensor for training deep learning models, the TensorFlow Object Detection API, and TensorFlow Lite for machine learning models on mobile and embedded devices. By the end of the year, TensorFlow reached version 1.4 with the Keras machine learning framework, the Dataset API, support for Python generators, and new functions for simplifying training and evaluation.

Other companies worked on providing developers with more tools to build on AI. NVIDIA announced in May at the GPU Technology Conference plans to increase the number of AI developers tenfold this year through the NVIDIA Deep Learning Institute. NVIDIA also teamed up with Facebook earlier in the year to release Caffe2, an open-source deep learning framework. Salesforce announced Einstein Platform Services to give developers more tools to build AI into their apps.

Microsoft continued to invest in artificial intelligence technology throughout the year with the launch of the Open Neural Network Exchange in conjunction with Facebook to provide neural network framework interoperability, the release of its Security Risk Detection tool, the launch of the open-source deep learning library Gluon with Amazon, Visual Studio Tools for AI, and Azure IoT Edge.

Other investments included: IBM announced plans to create an AI research partnership with MIT, and the Partnership on AI to Benefit People and Society expanded its efforts with eight new for-profit partners and 14 non-profit partners. New venture capital funds such as Toyota AI Ventures and Google's Gradient Ventures were created to provide funding, mentorship and support to AI startups.

releases more agile instead of feature-driven. It wants to put out a major release every six months, starting with the March 2018 release. Update releases will continue to come out every quarter, and long-term support releases will come out every three years. This new release cycle reflects the accelerated rate at which we consume technology these days, as opposed to the longer adoption cycles of Java's early days.

At JavaOne in October, several software tool providers announced new services. Parasoft released an update to Jtest, a Unit Test Assistant for Java. JNBridge announced Java.VS, a plugin that allows developers to write Java code in Visual Studio. Java.VS also has a Java code editor and Java project system, and lets Java developers use the VS build system and debugger interface.

The Linux Foundation also introduced the Acumos Project to help democratize the building, sharing and deploying of AI apps.

However, despite the advantages of artificial intelligence, others warned that users should not get too comfortable. Carnegie Mellon University's Software Engineering Institute released its 2017 Emerging Technology Domains Risk report in October, warning about the security impact of vulnerabilities in machine learning, especially when sensitive information is involved. Hyrum Anderson, technical director of data science for cybersecurity provider Endgame, had similar views when he spoke at Black Hat USA 2017 in July, stating that machine learning has blind spots and can be easy to exploit.

Looking toward 2018, Gartner revealed that having an AI foundation will be a top strategic technology trend, with areas like data preparation, integration, algorithm and training methodology selection, and model creation being ripe for investment.




SD Times

January 2018

Top ten trends for digital transformation in 2018 < continued from page 21

formation], organizations will start to throw away traditional business models in favor of those where customer-centric teams are central to all operations,” West advised. Murr illustrated what can happen if business departments are left out of projects: “When a customer places items in an online shopping cart and pays for them, only to find out afterwards that the items are on back order, it’s quite possible they’ve experienced a technology transformation without a complementary business transformation — an e-commerce platform built atop legacy brick-and-mortar supply chain processes. That experience isn’t the way to wow a customer, win their loyalty, or grow a business.”

6. Databases will diversify
Some 84 percent of organizations have experienced cancellation, delays, or reductions in scope of their digital projects due to limitations of their legacy databases, according to the Couchbase survey. While traditional relational database management systems (RDBMS) have served the needs of enterprises until recently, new open source, NextGen, and data analytics apps will demand new types of databases, said Peter Rutten, an IDC analyst. These new databases vary in design according to the needs of applications. Some, such as PostgreSQL, are also relational, but others are NoSQL databases. “Some are document-oriented. Some are wide column stores, and some are key-value stores. High-volume data collection and ordering environments, such as Hadoop, are important, too,” Rutten wrote in a report sponsored by IBM.

7. Mega-clouds will follow multi-clouds
The year 2018 will also see the formation of multi-clouds and even mega-clouds, some predict. “This trend is driven by the obvious economic and flexibility benefits of the public cloud balanced by the need to manage security and regulatory concerns,” said Josh Epstein, CMO at Kaminario. However, as more applications and workloads move into various clouds, cloud silos will become inevitable, inhibiting the organization’s ability to fully exploit data analytics and AI initiatives, according to Dell EMC’s Chand. “This may also result in applications and data landing in the wrong cloud leading to poor outcomes,” he declared. As a next step, enterprises will move on to mega-clouds, which will weave together multiple private and public clouds to work as a coherent, holistic system, according to Chand. “To make the mega-cloud possible, we will need to create multi-cloud innovations in networking (to move data between clouds), storage (to place data in the right cloud), compute (to utilize the best processing and acceleration for the workloads), orchestration (to link networking, storage and compute together across clouds) and, as a new opportunity, customers will have to incorporate artificial intelligence and machine learning to bring automation and insight to a new level from this next generation IT environment.”

8. Mainframes will get modernized
Organizations that have included mainframes in their infrastructures will be able to achieve huge ROI by modernizing their mainframe systems, integrating these systems with the data center infrastructure and IT processes, and opening up mainframes to the outside world, said Rutten and another IDC analyst, Matthew Marden, in an earlier report commissioned by IBM and CA Technologies. Enterprises are pursuing strategies that include running Linux and Java on the mainframe and allowing the mainframe to communicate with other parts of the infrastructure through web services and SOA to provide new revenue-generating services. Organizations are also integrating mainframes with mobile apps, operating mainframes in the cloud, and running agile development, DevOps, and internal and external APIs on mainframes, according to the report.

9. Network infrastructures will go NextGen, too
Also in support of digital transformation, companies will increasingly adopt NextGen networks over the years ahead, according to Rich Hillebrecht, CIO of Riverbed Technology. “These networks will largely be software-defined and will have a management plane that provides IT with the ability to leverage the right network paths, assign appropriate priority to network traffic, and ensure the health of the network at all locations,” Hillebrecht told SD Times. “They will also incorporate an integrated end-to-end view of the UX from the data center to end devices at the edge, so that anything that might jeopardize performance is identified and managed before the end user is impacted.”

10. Organizations will deal with new security threats
Most business and IT executives are aware that IoT and other new technologies will pose new security risks for their companies, suggests a new study from PwC entitled “The Global State of Information Security Survey 2018.” A total of 67 percent of executives polled said they have an IoT strategy in place or are currently implementing one. However, only 34 percent have assessed device and system vulnerability across the business ecosystem, according to Prakash Venkata, principal, Cybersecurity and Privacy, for PwC.


Aruna Ravichandran, VP of DevOps product and solutions marketing at CA Technologies
We will continue to see end-users make a tighter connection between a company’s brand and the quality of its code, based on their experiences across a company’s applications. As a result, more organizations will look to integrate security into development and intensify their automated continuous testing efforts, shifting testing left to earlier in the SDLC as they work to release higher quality code, faster. Additionally, businesses will look to increase their adoption of digital experience monitoring and analytics solutions to help them understand how users are using applications and apply enhancements that optimize experiences.

Predictions for 2018



Florian Leibert, CEO, Mesosphere The autonomous car market will become more real (and more competitive). All signs point to Apple or Google formally launching an autonomous car program to compete with the traditional car companies and Uber in the next year. With a major tech player throwing their hat in the ring, we’ll start to see major innovation that advances autonomous cars as a reality.

Patrick McFadin, vice president of developer relations at DataStax
“Data Autonomy” — fear of the big cloud players will become the main driver for large digital transformation projects. More and more brands will want data autonomy in a multi-cloud world in order to compete and stay ahead. The need and urgency to meet the big cloud players head-on with data-driven applications will intensify.

Jeff Williams, CTO and founder of Contrast Security
Attacks after a vulnerability disclosure will happen faster than ever. While attacks once took weeks or months to emerge after a vulnerability disclosure, today it’s been reduced to about a day. That “safe window” will get even smaller, giving organizations only a few hours to respond. Security budgets will increase focus on application security. Major breaches like Equifax and Uber have shone a light on organizations that are not doing nearly enough to secure their software supply chain. Today, every organization has an Equifax problem and it has created room for even more budget towards improving all aspects of application security.

Jason Warner, SVP of technology at GitHub
Open source will keep climbing the stack. A decade ago, Linux was a big deal. Now it’s standard. Back in the day, companies like Amazon, Google, and Microsoft were forced to build their own, proprietary tools because no other software existed to meet their needs. Many of these frameworks have since been open sourced — and other open source technologies, like Kubernetes, are becoming integral to developers’ workflows. This shift is changing what companies are investing in, making open source software traditional software’s biggest competitor.

Mark Pundsack, head of product at open-source platform GitLab
In 2018, there will be a backlash against the DevOps tool chain. Developers will begin to demand a more integrated approach to the development process. In 2017, developers voiced frustrations around using multiple tools to complete an entire development life cycle. This frustration will turn to action in 2018, and both developers and enterprises will request an approach that is seamless and effective. As a result, vendors will begin offering integrated toolsets to help developers and enterprises move faster from idea to production.

Kostas Tzoumas, co-founder and CEO of Data Artisans
Enterprises will invest in new products and tools to productionize and institutionalize data stream processing. As companies move real-time data processing to large scale, both in terms of data processed and number of applications, they will need to seek out new tools that make it easy to run streaming applications in production and reduce the manpower, cost and effort required.

Toufic Boubez, VP of engineering for Splunk The buzz stops here. Artificial intelligence and machine learning are often misunderstood and misused terms. Many startups and larger technology companies attempt to boost their appeal by forcing an association with these phrases. Well, the buzz will have to stop in 2018. This will be the year we begin to demand substance to justify claims of anything that’s capable of using data to predict any outcome of any relevance for business, IT or security. While 2018 will not be the year when AI capabilities mature to match human skills and capacity, AI using machine learning will increasingly help organizations make decisions on massive amounts of data that otherwise would be difficult for us to make sense of.



What IT Operations Wants Developers to Know By Lisa Morgan

Operations engineers have a few complaints about working with developers that shouldn’t be dismissed as mere rants, because they are very real issues that Ops faces. Granted, developers have their own gripes, although this article is meant to provide some insight about what developers should know about operations, not the reverse. Three common complaints from Ops personnel are:

1. We’re tired of being called in on nights and weekends to fix your software.
2. We’re not as dumb as you think we are; and
3. If you understood what we do, you’d understand why we’re not moving as fast as you are.

Some developers still have a throw-it-over-the-wall mentality as it relates to operations personnel because, hey, after code is committed, the developer’s job is over. In a DevOps context, a developer’s responsibility should not end with a code commit. “Developers need to know that operations-focused engineers are responsible for every layer of the application stack, including its stability, security and recovery plan,” said James Giles IV, DevOps engineer at database release automation solution provider Datical. “When it’s 2:00 in the morning and the system metrics have reached a state of unreliability or severe stress, the operations engineer is the one who wakes up to take the call or, in a best-case scenario, wrote a nice recovery script to ‘self-heal’ the environment. Their job isn’t done after that because the root cause of the problem has to be determined swiftly to prevent the issue from happening again.”

Some organizations have addressed the ownership problem by creating a level playing field, which can mean developers are responsible for their code throughout the entire SDLC. “Typically, it’s the production operations group that has to bear the burden of being on call. Here at Jibestream everyone is on rotation, including management, so everyone knows we’re all in this together,” said Matt Baxter, VP of engineering at Jibestream, an indoor navigation and mapping engine provider. “It’s not like, ‘Oh, it’s my turn to be on call, everyone’s trying to screw me.’ It’s ‘I get to be on call.’ If you’re a manager and you’re not in the trenches with the team, it destroys trust.”

Sometimes, it’s difficult to over-
continued on page 32 >


What Developers Should Know About IT Ops By Lisa Morgan

Operations is going through a fundamental shift as infrastructure itself shifts from hardware to software. The “software-defined” future has arrived, and it’s something enterprises must embrace if they want to deliver Internet-scale applications and pivot as quickly as today’s business environment requires. Software-defined architecture uses a virtualization layer to optimize the use of resources. It also speeds provisioning and the ability to adapt to changes. The resulting flexibility is necessary in light of some of today’s biggest trends, including Big Data analytics, IoT, mobile, and social.

The software-defined trend is consistent with application development trends including Agile, DevOps, continuous delivery and continuous deployment. Delivery speed is important, as is the ability to pivot as necessary in light of company objectives, customer demands and disruptive innovation. However, speed isn’t the only requirement. Customers expect software to be of high quality, which isn’t just a matter of coding; it’s about performance, security and other things that fall into the realm of IT Ops, such as container management. Increasingly, developers and IT Operations have to work more closely together to deliver better quality products faster despite rapid technology changes, shifting business priorities and increasingly fickle customers. DevOps and continuous delivery efforts underscore the need for developers and operations to work together as a team. Collaborating is one thing; understanding what the other side does is another.

How Much Should Developers Know About IT Ops?

Developers should understand something about operations, and operations should understand something about development, but how much is a matter of debate. Some say developers just need to understand the broad brush, while others believe developers should understand what operations does in considerable detail, including the technology stack that comprises the production environment. In that view, if developers understand the environment in which their software will run, then they’re in a better position to deliver software that performs more reliably in production. Robert Stroud, a principal analyst at Forrester Research, believes developers need to understand the supported operating systems and environments, though not in detail. In his view, developers only need to
continued on page 31 >



Infrastructure Terminology You Should Know By Lisa Morgan

Infrastructure is changing quickly. Specifically, it’s being virtualized like everything else historically done only in hardware. “Software-defined” is one of the biggest infrastructure trends, not just for compute, but also storage and networks. Developers should know something about these concepts so they understand the environment in which their applications will run. Being familiar with these terms will also help avoid confusion when talking to IT Operations. Following are some of the popular terms.

Composable infrastructure – allows an individual to “compose” infrastructure via an API so it conforms to requirements. Developers could compose their own application-specific infrastructure to suit mobile or IoT development, for example. Composable infrastructure pools compute, network and storage resources and makes them available as services regardless of whether the resources are physical or virtual. Because composable infrastructure is software-defined, it can be rapidly configured, saving days, weeks or even months of time that would otherwise have been spent waiting for resources. The flexibility, fast configuration, and scale-up/scale-down capabilities help organizations deliver modern, high-quality applications faster, consistent with and further enabling DevOps, continuous integration, and continuous delivery. Composable infrastructures are also being used to optimize the performance and ROI of data centers.

Converged infrastructure – provides a turnkey hardware, storage, network and virtual machine solution in a single chassis that has already been configured and tested, which means IT doesn’t have to do it. It just works “out of the box.”

Hyperconverged infrastructure – builds upon the converged infrastructure idea, adding more software components to it, such as backup. Hyperconverged infrastructure is software-defined, so it provides greater flexibility than converged infrastructure. For example, instead of being limited to local storage, hyperconverged infrastructure can create a unified pool of local and external storage.

Immutable infrastructure – that which is replaced instead of changed. The benefit of immutable architecture is stability, because changes are implemented through replacement. The replacement can be tested and proven prior to the switch, which compares favorably to making changes to live equipment that may result in
continued on page 35 >


What Developers Should Know About IT Ops < continued from page 29

be aware of what comprises the environment and understand the security parameters. “What operations really wants to know conversely from developers is that developers have completed appropriate testing so the code is of a relative quality level so it won’t fail in production,” said Stroud.

Given the growing trend in which everything hardware is being virtualized in software, one might assume that no knowledge of systems or infrastructure is required, which isn’t exactly the case. “Despite all the trends, software still needs hardware and hardware still needs software,” said Greg Schulz. “If you’re deploying software-defined networks, then you better have a fundamental understanding of networks, and if you’re deploying software-defined storage, you need to have knowledge of storage. You need to understand the fundamentals.”

“Software-defined” everything is of importance to enterprises because of the scalability, flexibility, and ROI it provides. In a software-defined context, one does not have to configure hardware, although configuration is still necessary. There are also infrastructure-related metrics of which developers may not be aware. “There are still things you need to consider from an operations point of view, even if you’re in the cloud, including scalability, durability, availability, mean time to recovery, and mean time between failures,” said independent infrastructure engineer Tom Hall. “There are all these service-level metrics that an infrastructure administrator has to be concerned with that developers are not concerned with.”

Understanding what the other side does and why they do it helps the organization as a whole, because there’s less finger-pointing based on misunderstandings. “It’s crucial that engineers understand that operations is not separate, but essential to being an engineer and creating good products,” said Benjamin Forgan, founder and CEO of an IoT stack provider. “As a new entity, we think of DevOps as the platform on which you do continuous deployment. It’s important to understand [operations] because if you engineer something but you don’t have an infrastructure that lets you scalably deploy or it doesn’t consider enterprise requirements, you’re in trouble.”

Infrastructure as Code Helps

The virtualization of computers, storage, and networks has given rise to the infrastructure as code movement, which is becoming increasingly necessary. Almost every organization wants to deliver software faster, because it’s necessary from a competitive standpoint. Infrastructure as code helps expedite software delivery because developers no longer have to wait weeks or months to get access to physical equipment. Virtual resources can be provisioned almost instantaneously, except in organizations that still require ticket filing and a string of approvals.

Another reason infrastructure as code is important is that it helps bridge the gap between development and operations. Historically, developers have not understood infrastructure. Conversely, operations has not understood programming. They understand configuration scripts, which differs from creating applications. The point about scripts is important in an infrastructure-as-code context because it requires more than just automation scripts. Infrastructure as code is code, and so it has to be source-controlled and peer-reviewed. Infrastructure as code also requires testing and validation. “Developers have to understand that this is not a normal way for operations people to think and work, and that they’re going to need help,” said Hall. “If they’re not doing things in a predictable, repeatable way, it’s not because they don’t understand the reality of that; it’s just not the domain they live in, and not the environment they grew up in.”

When WWT Asynchrony Labs started doing infrastructure as code, the operations team was using Chef. It didn’t take long before Ops realized it needed help with the code part of infrastructure as code. “The tech lead came to us and told us the code wasn’t very good. He said, ‘Do you want me to refactor this?’ so I said sure,” said Matthew Perry, director of IT Ops at WWT Asynchrony Labs. “Developers can look at infrastructure as code and understand it.” Now, the WWT Asynchrony Labs operations team is able to provide self-service environments so developers can configure their own environments without filing a ticket.

Whiteboarding also helps break down the walls that have separated development and operations, because it allows operations to explain what comprises the infrastructure, why the pieces are important, and how all of that impacts developers. “It helps to have a common

continued on page 33 >
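The idea that infrastructure as code must be tested and validated like application code can be sketched with a small validation check that might run in CI. The environment spec and its fields here are hypothetical, not drawn from Chef or any specific tool:

```python
# A hypothetical declarative environment spec, as it might live in version control.
ENVIRONMENT = {
    "name": "staging",
    "instances": 3,
    "instance_type": "m5.large",
    "ports": [80, 443],
}

ALLOWED_TYPES = {"t3.micro", "m5.large", "m5.xlarge"}  # illustrative allow-list

def validate(env: dict) -> list:
    """Return a list of problems; an empty list means the spec passes.
    Checks like these can run in CI, just as unit tests do for app code."""
    problems = []
    if env.get("instances", 0) < 1:
        problems.append("at least one instance is required")
    if env.get("instance_type") not in ALLOWED_TYPES:
        problems.append(f"unknown instance type: {env.get('instance_type')}")
    for port in env.get("ports", []):
        if not 1 <= port <= 65535:
            problems.append(f"invalid port: {port}")
    return problems

print(validate(ENVIRONMENT))  # prints [] — the spec is valid
```

Because the spec is plain data under version control, a reviewer can diff it and a pipeline can reject a bad change before it ever reaches real infrastructure.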




What IT Operations Wants Developers to Know < continued from page 28

come prejudices that have been engrained in one’s thinking for years or decades. For example, Olivier Bonsignour, EVP of Product Development at global business consulting and IT solutions firm CAST, has been a developer for 25 years. DevOps has helped him view the relationship between Dev and Ops differently. “It’s a long-term change. Dev has to learn from Ops and the opposite is true,” he said. “I’ve been a developer for 25 years, so I’m not trying to blame Ops. It’s not easy. [I might think] Ops is not doing the right thing [or] they’re not using the right infrastructure. When you do DevOps you remove this kind of excuse.”

“Developers need to know that operations-focused engineers are responsible for every layer of the application stack.”
— James Giles IV, DevOps engineer, Datical

Some prejudice is rooted in a sense of intellectual superiority. For decades, testing and QA personnel have often been accused of being developers that never could be. The same is true for Ops. “There’s a general perception that people who are smart enough to become developers become developers, and that people who are not smart enough to become developers or development managers find themselves in operations,” said independent infrastructure engineer Tom Hall. “Systems administration, infrastructure management and architectures are valid career choices, not something they have to settle for because they can’t be developers.” Old prejudices only fuel the age-old animosity between developers and operations, which is one reason why DevOps should be viewed as a journey rather than a destination. Having the right tools is one thing. Adapting organizational culture is another, and it’s much more difficult to do.

Another point of contention is the difference in cadence between developers and operations. Developers continue to move faster, but operations isn’t keeping pace. Robert Stroud, a principal analyst at Forrester Research, thinks it’s important to understand what the common goals are and take more of a systems view of the SDLC. “What does it take for a change to go from idea to software a customer can actually utilize? It’s wise to look at every piece of the flow, throughout the value chain. How do we improve each of the steps along the way and which of these steps can we eliminate?” said Stroud. “You need teams collaborating in a way they haven’t done before.”


What Developers Should Know About IT Ops < continued from page 31

medium that both parties can talk through and understand the details,” said Matt Baxter, VP of engineering at Jibestream, an indoor navigation and mapping engine provider. “When you really want to have a conversation about this stuff, you need to whiteboard the architecture of the system, draw boxes and lines to show where things are and how they’re communicating.”

Automation and Its Effects

Delivering software faster means automating more of the DevOps pipeline, but it needs to be done carefully and mindfully. “We’re starting to see this new trend where Ops is starting to develop what the code pipeline looks like. I go from development to operations to testing to production, or some derivation, and that pipeline is totally automated, allowing a developer to push a piece of code out using Jenkins or something like that,” said Forrester’s Stroud. “Because everything is modeled as a release and treated as a release, it automatically transitions itself through the various stages and, upon success, into production.”

Forrester is seeing end-to-end pipeline automation in only 29 percent of companies at the present time. The idea is that operations can help developers dramatically by delivering a consistent pipeline that developers can just push their code to without worrying about operational requirements. “The real challenge, which is why developers have been building everything in DevOps, is because operations teams have not been giving developers these types of environments that they can just lift, push code to, and have a true representation now in production,” said Stroud.

The pushback from some operations personnel is the fear of being automated out of a job. Stroud thinks automation is actually a positive for operations team members because it allows them to focus on managing pipelines rather than coming in and fixing everything that’s broken.
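The pipeline Stroud describes, where a release is promoted stage by stage and reaches production only on success, can be sketched as follows. The stage names and pass/fail checks are illustrative, not taken from Jenkins or any specific product:

```python
def run_pipeline(release: str, stages) -> str:
    """Promote a release through each stage in order; stop at the first failure.
    Each stage is a (name, check) pair, where check(release) returns True on success."""
    for name, check in stages:
        if not check(release):
            return f"{release} failed at {name}"
    return f"{release} deployed to production"

# Illustrative stages; real checks would invoke builds, test suites, and deploys.
stages = [
    ("build", lambda r: True),    # compile and package
    ("test", lambda r: True),     # automated test suite
    ("staging", lambda r: True),  # deploy to a staging environment
]

print(run_pipeline("release-1.2.0", stages))  # prints "release-1.2.0 deployed to production"
```

The point of the sketch is the shape: the release transitions itself through the stages automatically, and a failure anywhere stops the promotion before production.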

Containers Are Also Driving Convergence

Many of today’s developers are very interested in containers because of the flexibility they provide, but who should be in charge of container orchestration? After all, container orchestration platforms are being promoted to developers and operations, although operations is more likely to make the purchase and take on the responsibility. The use of containers requires the participation of developers and operations, with developers deciding what goes into the containers and operations handling the orchestration platform. “Right now, most container management systems are developed with somebody coding around Kubernetes, which most developers don’t have the skills to do,” said Forrester’s Stroud. “Secondly, they may choose to use a platform like OpenShift or Cloud Foundry or one of the cloud providers, or they’ll try to twist existing technologies to manage containers. So, we’re seeing three camps at the moment.”

Developers like the fact that they can write an application in a container once and have it run on any Docker server or cloud service that supports Docker. “For containers to run in production, operations must understand how to use containers,” said Hall. “I see a lot of developers really excited about containers and honestly, I don’t think most operations people I’ve met are still trying to understand configuration management.”

Schulz has a different observation. He’s also noticed developers’ interest in containers, and he’s seen considerable interest from the infrastructure side. “Containers are the shiny new thing. We have this great solution so let’s go find problems for it,” he said. “That said, containers are great because you can develop a freestanding piece of software that’s part of something bigger.”

When some of the teams at WWT Asynchrony Labs started using containers, they were installing Docker on a single virtual machine and deploying to that. Then they realized they had a single point of failure. “You get to that point and you realize you need an orchestration engine that will allow you to do different types of deployments, such as canary deployments. That way, you don’t have any downtime,” said WWT Asynchrony Labs’ Perry. “We have quite a few teams that have asked how to handle downtime and we’re like, ‘you don’t have to worry about that now. There are tools and orchestration engines that can run things in a way that you don’t have to worry about downtime at all.’”
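The canary deployments Perry mentions route a small slice of traffic to the new version before the full rollout, so a bad release hurts few users and can be rolled back without downtime. A minimal sketch of that routing decision follows; the version names and the 10 percent split are made up for illustration:

```python
import random

def pick_version(canary_percent: int) -> str:
    """Send roughly canary_percent of requests to the new version;
    the rest stay on the stable one. Orchestration engines make a
    decision like this per request or per replica."""
    return "v2-canary" if random.randrange(100) < canary_percent else "v1-stable"

# With a 10% canary, most requests still hit the stable version.
counts = {"v1-stable": 0, "v2-canary": 0}
for _ in range(1000):
    counts[pick_version(10)] += 1
print(counts)
```

If the canary's error rate stays healthy, the percentage is ratcheted up until the new version takes all traffic; if not, it is set back to zero and the stable version never stopped serving.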

Security and IoT

Should security be built into code or should operations ensure security? continued on page 35 >




3 Data Center Predictions for 2018 ANALYST VIEW

Chirag Dekate, Ph.D., is a Research Director at Gartner.

The move to digital business platforms is compelling infrastructure and operations (I&O) leaders to rethink their data center strategies. Digital business platforms like AI, which encompasses machine learning (ML), deep neural networks (DNN) and the IoT, are driving requirements for agile and scalable compute infrastructures. In 2018, I&O leaders should focus on enabling increased agility and effective ecosystems for emerging digital business initiatives by deploying serverless architectures, container ecosystems and three-tier environments. These three predictions represent fundamental changes that will impact data center infrastructures through 2020.

By 2020, 30 percent of data centers that fail to effectively apply artificial intelligence to support enterprise business will not be operationally and economically viable.

With the advent of AI and ML, I&O leaders have the opportunity to balance and reduce system complexity and create a new paradigm of “self-organizing systems.” Under this model, I&O leaders can expect a broadened and enhanced role for AI, either as a platform or as a service. I&O leaders who fail to invest in ecosystem and platform intelligence, such as artificial intelligence for IT Operations platforms, risk becoming irrelevant and ultimately jeopardizing their enterprise’s ability to compete as a business. This is especially true as their skills and tooling fall behind growing operational complexity and the increasing demand for proactive, personal and dynamic services.

By 2020, 90 percent of serverless deployments will occur outside the purview of I&O organizations when supporting general-use patterns.

Since the launch of AWS Lambda, arguably the first serverless computing service, interest in harnessing serverless technologies has exploded among the developer community in leading IT organizations. Serverless computing offers three primary benefits to developers:
1. It supports running code without having to operate the infrastructure. This enhances developer productivity and allows developers to focus on code development without having to worry about the underlying infrastructure.
2. It can enable easier horizontal scaling, due to the auto-scaling properties of the back-end resources. Scalability now becomes a software design issue.
3. Public cloud hosted infrastructure as a service (IaaS) serverless frameworks allow for truly on-demand consumption, because there are no idle resources or orphaned VMs or containers.

Enterprise adoption of serverless computing is still quite nascent due to the immaturity of the technology for general-purpose enterprise workloads, as well as the fact that a majority of workloads today are “request driven” rather than “event driven.” However, event-driven workloads will grow in importance as next-generation front-ends are driven by new technologies.

By 2020, more than 50 percent of enterprises will run mission-critical containerized cloud-native applications in production, up from less than five percent today.

Container adoption has been growing at a viral pace for software development and testing use cases due to containers’ ability to bring environmental parity to the software development life cycle and to enable continuous development and deployment of application software. Understandably, organizations want to extend those benefits into production environments to fully reap the value of the agility and efficiency they have achieved in the development and testing phase.

While developers are primarily driving tool adoption around containers, I&O leaders need to be prepared to support these containerized applications in production. Crucially, they must also ensure that business SLAs around security, performance, data persistence and resiliency are met.

As well as improving developer productivity, I&O leaders can expect additional benefits from this technology. Because they can run on bare-metal infrastructure, containers can be operated more efficiently than VMs on single-tenant server infrastructure. Because of their smaller resource footprint, containers can also enable a much higher tenant density on a host. Containerized applications can be managed more effectively with less configuration drift, as it is possible to more easily redeploy services and automate their lifecycle management.
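The lifecycle-automation point can be illustrated with a small sketch of the “redeploy, don’t reconfigure” pattern: instead of patching a running service in place (and accumulating configuration drift), the old container is discarded and a fresh one is started from a known image. The service name and image tag below are hypothetical; the `docker` subcommands (`stop`, `rm`, `run -d --name`) are standard CLI usage:

```python
def redeploy_commands(name: str, image: str) -> list[str]:
    """Return the shell commands that replace a running container
    with a fresh instance built from a known image."""
    return [
        f"docker stop {name}",                    # halt the current instance
        f"docker rm {name}",                      # discard it, and any drift with it
        f"docker run -d --name {name} {image}",   # start clean from the image
    ]

# Hypothetical service and registry path, for illustration only.
cmds = redeploy_commands("billing", "registry.example/billing:1.4.2")
```

Because the replacement container is built from the image every time, the deployed state can never wander away from what was tested.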


Infrastructure Terminology You Should Know

unforeseen consequences. Another benefit is security. Given the number of vulnerabilities in ever-more complex software stacks, immutable infrastructure provides an elegant way to replace software with known bugs or vulnerabilities with patched software.

Network Functions Virtualization (NFV) – virtualizes network functions such as caching, directory name service (DNS), firewalls and intrusion detection, decoupling the functionality from the underlying hardware. Multiple functions can be chained together to create a service. Similarly, multiple services can be chained together to provide more complex services.

Software-defined Access Network (SDAN) – provides virtual access network control and management functions. Broadband service providers such as AT&T use SDANs to accelerate the provisioning of services, improve operations and compete more effectively.

Software-Defined Data Center (SDDC) – virtualized data center infrastructure that is made available as a service. While virtual machines have been used in data centers for some time, software-defined data centers take virtualization to the next level by making the entire environment software-defined. Like virtual machines, they optimize the use of resources, providing better equipment ROI than traditional data centers. Also, because the management of SDDCs is automated in software, they reduce the need for data center-related headcount. Further, not all organizations are moving to the cloud at the same speed. Although some modern businesses were born in the cloud, older organizations may be keeping data in their own data centers for security purposes or adopting a partial (hybrid) cloud strategy. SDDCs make sense for both.

Software-defined network (SDN) – an umbrella term for specific types of software-defined networks, including access networks (SDANs), wide-area networks (SD-WANs) and data centers (SDDCs). SDNs allow administrators to initialize, change and manage network behavior via a software interface. They are becoming increasingly critical for enterprises that must keep pace with the accelerating pace of business. While there is still hardware somewhere underneath the software-defined layer, the layer itself provides greater flexibility and agility than can be achieved with physical equipment alone.
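The NFV idea of chaining functions into a service can be sketched in a few lines: each virtualized network function is just software that transforms or filters traffic, and functions are composed into a chain. The function names and the toy packet model below are invented for illustration:

```python
# Toy illustration of NFV-style service chaining. Each "function" is
# plain software acting on a packet (here, a dict); chaining composes
# them into a service. All names and the packet model are hypothetical.

def firewall(packet):
    # Drop anything aimed at the telnet port; pass everything else.
    return None if packet.get("port") == 23 else packet

def dns_cache(packet):
    # Pretend we resolved the destination name from a local cache.
    packet.setdefault("resolved", True)
    return packet

def chain(*functions):
    """Compose network functions into a single service."""
    def service(packet):
        for fn in functions:
            packet = fn(packet)
            if packet is None:   # a function in the chain dropped the packet
                return None
        return packet
    return service

edge_service = chain(firewall, dns_cache)
```

Chaining `edge_service` with further services would build the “more complex services” the definition mentions, without touching any hardware.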

What Developers Should Know About IT Ops

The answer isn’t either/or, it’s both. Cloud platforms provide encryption by default, yet organizations continue to lose control of IP, electronic health records, and other kinds of data. Nothing is inherently secure, which is why DevOps teams need to be more vigilant about security on a number of levels. “Part of a developer’s job, in addition to writing code, is also making it secure,” said Schulz. “Someone has to make it easy for them or tell them, let these people in, put these safeguards in so your application and data are safe.”

To some extent, the IoT is driving the need for developers and operations to work closer together. More developers are getting pulled into IoT software development regardless of what industry they’re in. At the same time, operations and IT need to worry about a larger infrastructure footprint and the scalability to support it. Moreover, a lot of IoT devices aren’t being designed with security in mind, which can wreak havoc on infrastructure.

“DevOps and IoT go together,” said Schulz. “You’ve got a particular item, maybe it’s a sensor or a switch or a monitor of some sort that has the ability for somebody to write a piece of code for managing, monitoring, and configuring it and plugging it in somewhere. That’s ideal for creating a module, which you might develop on your desktop or in the cloud, but it’s going to be deployed somewhere else.”

The Bottom Line

Modern software development and software-defined architectures are both necessary for organizations to achieve the level of agility they seek. To do that, developers and operations have to overcome the points of friction that still exist between them. Doing that requires bilateral understanding at several levels, including understanding something about what the other side does. For developers, that means understanding what operations people do day-to-day, how it impacts software delivery, and what the production environment looks like. Working software is everyone’s problem, particularly in this modern era in which businesses compete on software.






SD Times

Buyers Guide

Continuous testing enables the promise of Agile Test at every step in the development cycle, reducing costs and resources



In a time when software must be released on shorter and shorter cycles, there needs to be a way to make every step of the process faster. No longer can developers build and then wait until they are finished to test everything. In order to stay ahead in this digital world, organizations must look to continuous testing.

Continuous testing can be defined as testing that happens continuously and automatically as code is developed, said Jeff Scheaffer, general manager of continuous delivery at CA Technologies.

Gerd Weishaar, chief product officer at Tricentis, defines continuous testing as “the steps that are necessary to actually make sure the quality of the software that is being developed is ensured continuously.” He says, “you’re in lockstep with the development cycle and you are continuously ensuring the quality of the software that has been developed is okay.”

“Strictly speaking, continuous testing is the practice of introducing automated testing into the entire software delivery pipeline, gaining faster quality feedback throughout the whole release,” said

Sean Hamawi, chief technology officer and co-founder at Plutora.

According to Hamawi, continuous testing can comprise unit tests, integration tests, and end-to-end tests. Unit tests are part of the check-in process, run when the continuous integration server is notified of changes; these tests are sometimes done on a 15- or 30-minute cycle. Integration tests are run regularly, often nightly, throughout the development cycle, often for several hours at a time. The end-to-end tests are planned on a longer cycle, usually weekly or bi-weekly, and usually include a broad set of automated tests. The depth of inspection increases as the code moves closer to production, according to the company.

“For me, continuous testing is a key and critical component for a DevOps initiative,” said Mark Lambert, vice president of products at Parasoft. “DevOps is all about removing the bottlenecks in your software development process or your software delivery process so that you can realize the benefits of Agile. If you have a lot of manual steps between writing the code and actually getting something out there,

then you really don’t get the benefits of Agile because you end up spending all of your time doing the release and deployment component and not enough of the time doing the development.”

By properly implementing continuous testing practices, the benefits of Agile can be realized. “The promise of Agile to the business is that you’re going to be able to deliver more to the market and deliver it faster, but without continuous testing in place you really can’t accelerate without sacrificing quality,” said Lambert.

The interest in continuous testing has continued to increase in recent years. “What’s going on in most of the industries is what’s called a digital transformation,” said Weishaar. “And a digital transformation means that you have to serve your customers faster and in a better way than before. Otherwise the competition is going to disrupt you.” Companies that understand this will be able to put out releases faster and stay ahead of their competition.

“Time pressure reigns in software development, and now that it’s possible for organizations to ship software at will, businesses demand more from development teams, and faster,” said Philip Soffer, CEO of test IO. “Continuous testing is an outgrowth of the Agile and DevOps movements and holds out the promise that organizations can shift faster while simultaneously improving quality.”

“Ultimately, testing is the ‘final frontier’ of sorts in terms of improving the overall software development life cycle,” said Scheaffer. “Approaches like Agile and DevOps have helped accelerate both the pace and quality of software that’s delivered, but there is an increasing recognition that testing processes have to continue to evolve to keep pace with advances in other areas.”

According to a study conducted by Freeform Dynamics and sponsored by CA Technologies, continuous testing leaders are 2.4 times more confident in software quality, 1.5 times more likely to deliver 10 times faster, and 2.6 times more likely to reduce costs by 50 percent. With demonstrated benefits such as these, it is no wonder that continuous testing has become so popular.

Scheaffer lists speed, quality and costs as the three primary benefits of continuous testing. According to him, these three things lead to a better user experience, which in turn leads to increased customer loyalty and satisfaction. Those using CA Technologies’ continuous testing tools have experienced 50 percent faster delivery speeds, improved test coverage by 80 percent, and reduced testing costs by 25 to 85 percent, according to the company.

“When adopted correctly, continuous testing centers testing in the software development process, creating a constant rhythm of refinement and a culture of collaboration throughout the software development life cycle,” said Soffer.

“Continuous testing strategies look to achieve four capabilities: test early, test faster, test often and automate more tests,” said Hamawi. “Earlier testing reduces the overall cost of fixes. Faster testing improves the efficiency of the development machine. More frequent testing improves the overall coverage.” By testing earlier in the development cycle, bugs can be found sooner and fixed faster.

“The benefit of continuous testing really is to enable the promise of Agile,” said Lambert. In order to truly have an agile development life cycle, organizations must adopt continuous testing. If they do not, they will waste both time and resources by saving testing for the end.

According to Hamawi, there are a number of challenges that can be faced when switching to a continuous testing model. It can be difficult to determine test coverage, which is typically discovered by creating a requirements traceability matrix. He recommends that organizations include both automated and manual tests in the traceability matrix. Usually developers provide test cases for positive and happy paths, but with continuous testing there also need to be unit tests for negative paths, boundary conditions, injection, and data conditions, to name a few, says Hamawi.

Another challenge noted by Hamawi is the fact that the business, testers, developers, and operations are all working off of different sets of requirements. As a result, each feature will go through an evolution between what the customer wanted and what was produced. The final challenge that Hamawi sees is that not all projects or applications will lend themselves to automation at all points. To overcome this, he suggests that the highest-value areas be automated first. Plutora has many offerings to overcome the various challenges of continuous testing, such as Plutora Test, which is a SaaS-based

test management tool that can handle the entire test process.

Lambert believes that roadblocks to continuous testing fall into three buckets, all related to dependencies. The first is that there might be an external third-party dependency. That dependency might even have a financial fee associated with it, so that every time a test is run against it, you get charged for those executions. Another bucket is that one team can’t move forward because it is dependent on another team within the organization; that other team may be developing functionality that hasn’t been implemented yet, he explained. Finally, teams wishing to implement continuous testing need to shift performance testing left, which might result in bottlenecks due to external dependencies, Lambert said.

Organizations can overcome these roadblocks with service virtualization. “What service virtualization allows me to do is to emulate backend system dependencies, control their functional behavior, and their performance characteristics, so it can remove typical roadblocks that stop me from running my automated tests continuously,” Lambert said. As a continuous testing company, Parasoft is able to help organizations create efficient test strategies based on their individual needs. Different techniques can be blended together to give organizations a holistic view of their environment and testing practices, Lambert explained.

“API testing becomes more important for the simple reason that APIs are typically defined sooner and are also stable sooner in the cycle,” said Weishaar. “Instead of focusing on the user interface, which is usually at the end of the cycle, you can start testing sooner by using API testing.” So incorporating API testing into the process can be another challenge. Tricentis’ founders solved some of the big problems in testing at the start, and now the company is able to give its customers the tools they need to automate as much as possible and to provide a complete continuous testing platform.
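Lambert’s description of service virtualization can be sketched in miniature. The class below is a hypothetical in-process stand-in; commercial tools operate at the protocol level rather than inside the test process, but the idea of scripting both the functional behavior and the performance characteristics of a dependency is the same:

```python
import time

class VirtualCreditService:
    """In-process stand-in for a hypothetical third-party credit-check API."""

    def __init__(self, latency_s=0.0, canned=None):
        self.latency_s = latency_s              # scripted performance characteristic
        self.canned = canned or {"score": 700}  # scripted functional behavior

    def check(self, customer_id):
        time.sleep(self.latency_s)              # emulate the dependency's response time
        return {"customer_id": customer_id, **self.canned}

# Tests exercise the virtual service instead of the fee-per-call real one,
# so they can run continuously, anytime, anywhere.
svc = VirtualCreditService(latency_s=0.0, canned={"score": 512})
```

Dialing `latency_s` up lets the same tests probe timeout handling, and swapping `canned` lets them exercise responses the real system can’t easily be made to produce.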
Scheaffer notes that switching to a



continuous testing model requires leadership and a change in mindset. “The first challenge to overcome is people,” he says.

“Another challenge many testers face is around open source software,” Scheaffer says. This is because open source software can sometimes be difficult to scale. He recommends that organizations find the right balance between open source and commercial offerings. CA Technologies has many offerings that can help organizations realize the benefits of continuous testing.

Soffer explains some of the challenges he sees with continuous testing. First of all, legacy code cannot be tested. “The closer a test is to the end-user, the flakier it is,” Soffer says. “Unit tests are very reliable,” Soffer claims. “Integration tests are quite reliable. Automated functional tests can be very flaky.” He says that when tests are consistently flaky people will begin to ignore them, making them worse than useless. He also notes that software that changes quickly is hard to test, even for organizations that have the best unit and integration testing practices.

Finally, “the last mile of testing is hard and expensive,” Soffer says. “As you approach full test coverage, each additional point is harder to win than the last, and may be less valuable because the tests are flakier and harder to maintain. Moreover, these tests often don’t mirror real-world conditions.” Soffer explains that continuous testing is not usually reflective of the different devices that software would run on or the varying environmental conditions. Because of test IO’s extensive network of devices and people, it is able to bridge the gap between the lab and the real world and actually test software with real users on devices they would actually use.

“Organizations can overcome these challenges by approaching continuous testing pragmatically, with the first principle that no matter how much technology we put toward the problem of software testing, all of it is done on behalf of customers — that is, people,” says Soffer.
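The unit-test layer, which Hamawi puts on the 15- to 30-minute check-in cycle and Soffer calls the most reliable, looks roughly like this in Python’s built-in unittest framework. The `discount()` function is a hypothetical stand-in for code under test:

```python
import unittest

def discount(price: float, percent: float) -> float:
    """Hypothetical code under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_negative_path(self):
        # Continuous testing needs negative and boundary cases,
        # not just the happy path, as Hamawi notes above.
        with self.assertRaises(ValueError):
            discount(100.0, 150)
```

Run with `python -m unittest` on every check-in. Because nothing here touches a network or database, the suite completes in milliseconds, which is what makes a short CI feedback cycle feasible.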


A guide to Continuous Testing tools


CA Technologies: Only CA delivers next-generation, integrated solutions that enable test environment simulation; automatic test case creation, even from requirements; on-demand test data management; orchestration that progresses applications from phase to phase based upon the passing of test cases; SaaS-based performance testing; and open-source integrations with tools like JMeter, Jenkins, Selenium, Appium, and more. CA’s continuous testing solutions enable a robust continuous delivery model, so your organization can meet the demands of today’s application economy.

Parasoft: Parasoft helps organizations perfect today’s highly connected applications by automating time-consuming testing tasks and providing management with the intelligent analytics necessary to focus on what matters. Parasoft’s technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software by integrating static and runtime analysis; unit, functional, and API testing; and service virtualization. With developer testing tools, manager reporting/analytics, and executive dashboarding, Parasoft supports software organizations with the innovative tools they need to successfully develop and deploy applications in the embedded, enterprise, and IoT markets, all while enabling today’s most strategic development initiatives: agile, continuous testing, DevOps, and security.

Plutora: Plutora provides high-quality software delivery throughout the entire release portfolio. Plutora Test is a SaaS-based test management tool handling the entire testing process, with the capability of integrating with existing development tools and allowing development teams to work together with test teams. Plutora Environments is a pre-production environment management solution that provides a single location for internal and external teams to collaborate on and view environment bookings, allocations, configurations, and conflicts. Plutora Release unifies the release management process and provides testing insights so that real-time decisions about the quality of the release can be acted upon.

test IO: test IO provides customers with immediate access to the creativity and bug-finding power of skilled software testers. Continuous testing is usually not reflective of the real-world situations and conditions under which software runs, but test IO’s extensive network of thousands of people and hundreds of thousands of devices allows it to bridge the gap between the lab and the real world by testing software with real users on real devices.

Tricentis: Tricentis Tosca is a continuous testing platform that accelerates software testing to keep pace with Agile and DevOps. With the industry’s most innovative functional testing technologies, Tricentis Tosca breaks through the barriers experienced with conventional software testing tools. Using Tricentis Tosca, enterprise teams achieve unprecedented test automation rates (90%+), enabling them to deliver the fast feedback required for Agile and DevOps.

Applause: Applause delivers in-the-wild testing, user feedback and research solutions by utilizing its DX platform to manage communities around the world. The company’s testing solutions span the entire app lifecycle and include access to its global community of more than 250,000 professional testers.

Appvance: The Appvance Unified Test Platform (UTP) is designed to make continuous delivery and DevOps faster, cheaper and better. As the first unified test automation platform, it lets you create tests, build scenarios, run tests and analyze results in 24 languages, or even codelessly.

IBM: IBM provides a number of test automation tools for agile teams to gain continuous feedback throughout the software delivery lifecycle. The solutions provide user interface and integration test automation, performance testing and service virtualization.



What edge do you have over other companies that provide continuous testing services?

Jeff Scheaffer, general manager of continuous delivery, CA Technologies
To achieve true continuous testing, it’s not sufficient to take a piecemeal approach. Your ability to deliver and test software will only be as fast as the weakest link in your toolchain. That means testing has to occur early and often, and incorporate all aspects of security, performance, functional and API testing. While many testing vendors may have great point tools for regression, performance, API, or integration testing, and even automate and drive efficiency in these testing areas, no other vendor can offer true end-to-end continuous testing from planning to production. If your goal is delivering better apps faster while reducing costs, CA’s continuous testing solutions can help you get there.

Sean Hamawi, CTO and co-founder, Plutora
We are the market leader in continuous delivery management, encompassing release, test environment, deployment and test management solutions. The Plutora platform transforms IT release processes by correlating data from existing toolchains and automating manual processes, providing a single view of releases and associated metrics, such as testing quality. This results in predictability in the software release process, improving the speed and frequency of releases and better aligning IT software development with business strategy. Plutora has helped the world’s largest companies and Fortune 500 brands, including eBay, Merck and Dell.

Mark Lambert, Vice President of Products, Parasoft
Parasoft is the only vendor to provide a complete continuous testing solution, unlocking the potential of Agile by leveraging software test automation up and down the testing pyramid. From helping developers build a solid foundation of unit tests (with Parasoft Jtest for Java and Parasoft C/C++test for C/C++ development), through automated API testing (with Parasoft SOAtest’s 120+ supported protocols and message formats) and combining those API-level tests with web and mobile tests, the Parasoft tool suite facilitates a comprehensive omni-channel testing practice.

Once tests have been created, organizations implementing continuous testing face the next big challenge: the constraints of the test environment itself. Dependent systems might be unavailable, not implemented yet, have transaction fees, or cannot be configured to meet the use-case or performance characteristics required. To solve this challenge, you can use service virtualization and test environment management from Parasoft Virtualize. By simulating the complex test dependencies and encapsulating them into a re-usable environment that can be dynamically reconfigured, tests can be executed continuously, anytime, anywhere.

Finally, a comprehensive continuous testing practice generates a lot of data, and getting the most value from that data means going beyond a simple dashboard that indicates the number of test failures. Instead, you can leverage the Parasoft Development Testing Platform (DTP) and its Process Intelligence Engine to automatically turn raw data coming from the testing practices into actionable insights.

Gerd Weishaar, Chief Product Officer, Tricentis
A continuous testing platform must be able to assess the risk of a release candidate in real time, and in the context of the iterative changes impacting the application under test. The Tricentis Continuous Testing platform focuses on delivering that insight. It was architected to accelerate testing to keep pace with rapid delivery processes. At the core of the platform is “Model-Based Test Automation” technology, which detects changes to the application under test and helps you rapidly evolve the regression test suite to accommodate new user stories. Second, the Tricentis solution


focuses on risk-based testing. We start by identifying your top business risks (considering both frequency and damage), then optimize the test suite to cover the greatest risks. This results in a highly efficient test suite that is fast to execute and easy to maintain. On average, this approach yields significantly greater risk coverage with about 66% fewer tests. Finally, an intuitive, business-readable interface makes it simple to automate tests across 100+ technologies: everything from web apps, to packaged applications (such as SAP, SFDC, Oracle, and Workday), to legacy applications and APIs. This is all accomplished with scriptless test automation, which enables extreme automation without the overhead associated with managing scripts.

Philip Soffer, CEO, test IO
test IO brings human flexibility and creativity, at scale and speed, to continuous testing. Where most companies approach continuous testing solely as a technical concern, test IO puts the customer’s experience front and center. Many tools will orchestrate tests, create simulated production environments, and report the results. You probably need those. But eventually, many organizations find that no matter how many tools they deploy and how much automation they build, they don’t know whether their software will meet customers’ expectations. test IO gives you immediate access to real people testing your software under real-world conditions, using it in ways that closely mirror what customers will actually do. And we do so in a way that lets you integrate human-powered testing into an increasingly automated environment. test IO levels the playing field, giving any organization access to smart people who act on behalf of their customers and ensure the best possible customer experience.
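The risk-based prioritization Weishaar describes, weighing how often a business function is used (frequency) against how costly a failure would be (damage), can be sketched in a few lines. The scoring scale and the sample data are invented for illustration:

```python
# Risk-based test prioritization sketch: score each business area by
# frequency x damage and run tests covering the riskiest areas first.
# The 1-5 scale and the sample areas are hypothetical.

def risk_score(frequency: int, damage: int) -> int:
    """Both inputs on a 1-5 scale; a higher product means higher risk."""
    return frequency * damage

areas = [
    {"name": "checkout",      "frequency": 5, "damage": 5},
    {"name": "search",        "frequency": 5, "damage": 2},
    {"name": "profile-photo", "frequency": 2, "damage": 1},
]

by_risk = sorted(
    areas,
    key=lambda a: risk_score(a["frequency"], a["damage"]),
    reverse=True,
)
```

Capping the suite at the top of this ordering is what yields a smaller, faster suite that still covers the greatest business risk.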



A guide to Continuous Testing tools

LogiGear: With the no-coding, keyword-driven approach to test authoring in its TestArchitect products, users can rapidly create, maintain, reuse and share large-scale automated tests for desktop, mobile and web applications.

Micro Focus: Micro Focus’ functional testing solutions help to deliver high-quality software while reducing the cost and complexity of functional testing. HPE’s solutions address the challenges of testing in agile and continuous integration scenarios, as well as hybrid applications, cloud and mobile platforms. HPE ALM Octane provides insights into software, speeds up delivery, and ensures quality user experiences.

Mobile Labs: The company’s patented open platform device cloud, deviceConnect, enables automated continuous quality integration and DevOps processes, as well as automated and manual app/web/device testing on real managed devices.

Neotys: Neotys’ load testing (NeoLoad) and performance monitoring (NeoSense) products enable teams to produce faster applications, deliver new features and enhancements in less time, and simplify interactions across Dev, QA, Ops and business stakeholders.

Orasi: Orasi is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, continuous delivery, monitoring, and mobile testing technology.

Progress: Telerik Test Studio is a test-automation solution that helps teams be more efficient in functional, performance and load testing, improving test coverage and reducing the number of bugs that slip into production.

QASymphony: QASymphony’s qTest Pulse is a continuous testing solution for teams practicing DevOps. It features agile test planning, source code traceability, real-time updates, and JIRA integration. Additionally, QASymphony’s qTest is a test case management solution that integrates with popular development tools.

Rainforest QA: Rainforest aims to help teams perform QA testing at the speed of development with its web, mobile and exploratory testing solutions. It provides an AI-powered crowdtest platform for agile testing and development that delivers results from regression, functional and exploratory tests.

Rogue Wave: Rogue Wave Klocwork is a static code analysis toolkit that works alongside developers to detect security, safety, and reliability issues in real time, finding issues as early as possible, and integrates with teams, supporting continuous integration and actionable reporting.

Sauce Labs: Sauce Labs provides a cloud-based platform for automated testing of web and mobile applications. Its service eliminates the time and expense of maintaining an in-house testing infrastructure, freeing development teams of any size to innovate and release better software, faster.

SOASTA: SOASTA’s Digital Performance Management (DPM) platform provides the ability to continuously monitor, test, analyze and optimize solutions in real time and at scale. Its technologies include mPulse real user monitoring (RUM), the CloudTest platform for continuous load testing, and TouchTest mobile functional test automation.

Synopsys: Through its Software Integrity platform, Synopsys provides a comprehensive suite of software testing solutions for rapidly finding and fixing critical security vulnerabilities, quality defects, and compliance issues throughout the SDLC. Solutions include static analysis, software composition analysis, protocol fuzz testing, and interactive application security testing for web apps.

Tasktop: Tasktop Sync provides fully automated, enterprise-grade synchronization among the disparate life-cycle-management tools used in software development and delivery organizations. Tasktop Data collects real-time data from these tools, creating a database of cross-tool life-cycle data and providing unparalleled insight into the health of the project.

TechExcel: DevTest is a sophisticated quality-management solution that manages every aspect of the testing process, from test case creation, planning and execution through defect submission and resolution.

Overcoming ‘temporal myopia’

Shannon Mason is VP of Agile Central, CA Technologies


How many times have you started the year with a list of resolutions meant to transform you into an entirely new person? And how many times have you succeeded? The majority of us have set out on such a mission to reinvent ourselves. And by Valentine’s Day, we’re fishing through the freezer for Rocky Road, the ukulele’s gathering dust in the corner and the “Teach Yourself French” tapes are…well, je ne sais pas où. The truth is, we don’t fail to reach our goals just because we’re lazy or we lack self-discipline. The fundamental problem is in the way our brains process information.

The brain isn’t optimized for long-term decisions

Our difficulties with reaching longer-term goals may stem from what UCLA Neurobiology Professor Dean Buonomano describes as “temporal myopia”: the inability in the decision-making process to consider the long-term consequences of an action. In fact, what he refers to as “brain bugs” may stand in the way of our success in multiple ways. First, our brains are hard-wired to identify the quickest, easiest path to gratification. Perhaps it’s the evolutionary result of ancestors who had a far shorter life expectancy, but our brains have been programmed to take advantage of the now, and deal with the later when (and if) it comes. Second, and equally important, thinking of ourselves in a dramatically different future state is like talking to a stranger. You may have an idea of who that person would be, how they would think, act and feel, but today they’re so far removed that you have no connection to them. They’re intangible. And it’s nearly impossible not to get distracted and demotivated on the path to a highly abstract goal.


Temporal myopia within the organization

How does this translate into the world of business? Simple: organizations are made of people. And organizations have their own New Year’s resolutions; they’re called “yearly planning.” As each team, department and executive decides who and what they’re going to be a year (or two or three) from now, they describe an organization that does not yet exist. The more change that’s required, the more distant and imperceptible that future organization is. The people listening, the ones who must buy into the vision and tactically move the business toward it, have difficulty envisioning that reality. Their brains, predisposed to immediate gratification with minimal effort, start looking for an escape. The stage is set for failure.

Tactical actions bring the vision to life

The key is to bring that vision closer to those responsible for making it a reality, keeping them focused and driven rather than discouraged and demotivated. The solution is the same for the organization as for the individual. Goals broken down into stages can be better understood and visualized by everyone involved. Digestible, attainable and reasonable short-term goals effectively overcome temporal myopia and, in the end, combine to deliver the desired result. The challenge is keeping a focus both on the tactical implementation of short-term goals and on the overall vision the company’s striving to achieve. It’s a process of keeping your eye on the ball while remaining aware of everything happening on the playing field.

Agility: The antidote for temporal myopia

I describe it as a delicate balance that’s a lot like skipping a stone across a lake. As the stone skips across the water, it touches down, bounces off the surface and is elevated back up. While the stone impacts the water, you’re focused on tactical implementation; as it ricochets back into the air, you get a broad view of the horizon. This illustrates why agile business practices have proven so valuable in so many organizations. Agile expedites the entire process by first identifying the end goal, dividing it into specific increments and then managing the delivery process. Even though both individuals and organizations sometimes fail to attain their goals, setting them remains essential. According to the Statistic Brain Research Institute, individuals who make resolutions are 10 times more likely to achieve their goals than those who don’t. That maps to businesses as well.



Transformation can be a monster

Frankenstein is a monster. IT can harm the people who created it. Ergo, IT is a monster.

Taking a software tool from one vendor, hardware from another and a cloud-based storage and network system from a third can often lead to problems not foreseen by those who approved that approach. “There’s a new philosophy of how IT aligns with business, and being able to justify best-of-breed,” Gartner analyst Mike Cisek told attendees at his session at the recent Gartner IT Infrastructure, Operations Management and Data Center Conference 2017 in Las Vegas, Nevada.

Cisek’s talk looked at five technology trends impacting mid-sized enterprises. For purposes of definition, Gartner identifies mid-sized enterprises as, among other metrics, having IT budgets of 2-4 percent of revenue and cloud making up 19 percent of the IT portfolio, with adoption that is strategic, not entered into headlong. Companies of this size report that staffing and resources are the biggest barrier to executing their IT vision, followed by a skills gap and budget constraints.

The five trends he identified are:
1. Adoption of agile, cloud-inspired hybrid infrastructure
2. Anything as a service
3. Use of enhanced security detection and response capabilities
4. Embracing platforms that optimize operations and customer experience
5. Exploitation of data and analytics

Cisek said a perfect example of the first trend is hyperconvergence: a combination of storage, network, management, cloud data backup and more. “Integrated infrastructure solutions consume the fewest technical and financial resources possible,” he said, recommending that “you never put everything in the cloud.”

As for services, Cisek said “the right XaaS can reduce capital expenses, address skills gap and enhance capabilities” for companies that are resource-constrained. But these services “must be viewed in terms of impact on business processes, customer experience and tech architecture,” he added.
Mid-sized enterprises also need immediate protection from advanced attacks as much as larger organizations with staffs of people working on the problem. Things such as firewall as a service, application security as a service, and cloud access security brokers “address the changing threat landscape,” Cisek said. “Visibility and control of sensitive data in the public cloud is now a requirement.”

Embracing platforms that make it easier for people to share and exchange information and ideas is also critical for MSEs, Cisek noted. Available tools, including third-party support for ERP, content services such as Office in the cloud, and customer self-service capabilities, help these organizations work more efficiently and effectively. “Social and collaboration capabilities will grow and play a greater role in productivity,” Cisek said.

The final area might be the most challenging for mid-sized enterprises, due to the very technical nature of data and analytics. “Most MSEs simply don’t have the skills” to perform high-level analytics, he said. Things that can begin to level the playing field for MSEs include using analytics for customer intelligence, visual data discovery, and self-service data preparation. “These can make your small team more efficient,” he said. “High value insights and decision-making enables teams with scarce resources to be significantly more productive.”

What struck me is that these issues are both department-level (how can IT make systems more secure, and how can developers find services that will add value to their applications and bring value to their business?) and business-level (how can the company become more agile to meet market changes and gain competitive advantage?).

What we’ve seen and heard in talks with developers and I&O engineers, marketers and executives, at conferences and on site visits, is that IT is no longer the black box that business doesn’t understand but assumes is providing some kind of value. Silos are coming down, walls are being broken, and whether companies accept it or not, they are all becoming software companies.
They need applications to move product, track inventory and delivery, and perform all manner of back-end services. They need infrastructure, whether their own or in a cloud, to run their businesses. Decisions are no longer made based solely on whether new technology is available, but on how that technology may or may not advance the business. That is the digital transformation we address in this issue. To all our readers, we hope you have a happy, successful and prosperous 2018.

David Rubinstein is editor-in-chief of SD Times.

