
Contents

VOLUME 2, ISSUE 17 • NOVEMBER 2018

FEATURES
Ethical Design: What it is and why developers should care (page 10)
Getting to the root of your data’s history (page 20)
Going to school on open-source security — It can be done, if you’re smart about it (page 30)
How to keep your agile processes from becoming stagnant (page 41)

NEWS
News Watch (page 6)
Six pillars of monitoring automation (page 18)
JFrog acquires DevOps consulting company (page 28)
GitLab raises $100 million for DevOps vision (page 28)

COLUMNS
GUEST VIEW by Brian Johnson: “Serverless: A bad rerun” (page 44)
ANALYST VIEW by George Spafford: “Overcome the people problem in DevOps” (page 45)
INDUSTRY WATCH by David Rubinstein: “The developer transformation” (page 46)

BUYERS GUIDE
Embedded Analytics: Easier to create, more accurate than ever (page 36)

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2018 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.


004_SDT017.qxp_Layout 1 10/23/18 10:22 AM Page 4

®

Instantly Search Terabytes

www.sdtimes.com EDITORIAL EDITOR-IN-CHIEF David Rubinstein drubinstein@d2emerge.com NEWS EDITOR Christina Cardoza ccardoza@d2emerge.com


SOCIAL MEDIA AND ONLINE EDITOR Jenna Sargent jsargent@d2emerge.com ASSOCIATE EDITOR Ian Schafer ischafer@d2emerge.com ART DIRECTOR Mara Leonardi mleonardi@d2emerge.com CONTRIBUTING WRITERS


Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz CONTRIBUTING ANALYSTS Cambashi, Enderle Group, Gartner, IDC, Ovum

ADVERTISING SALES PUBLISHER David Lyman 978-465-2351 dlyman@d2emerge.com


SALES MANAGER Jon Sawyer jsawyer@d2emerge.com

CUSTOMER SERVICE SUBSCRIPTIONS subscriptions@d2emerge.com ADVERTISING TRAFFIC Mara Leonardi adtraffic@d2emerge.com LIST SERVICES Jourdan Pedone jpedone@d2emerge.com


REPRINTS reprints@d2emerge.com ACCOUNTING accounting@d2emerge.com

PRESIDENT & CEO David Lyman CHIEF OPERATING OFFICER David Rubinstein

D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803 www.d2emerge.com





6

SD Times

November 2018

www.sdtimes.com

NEWS WATCH

JS Foundation, Node.js to merge communities
The Node.js Foundation and the JS Foundation announced an intent to bring their Node.js and JavaScript communities together to serve a broader range of users. “After having two separate Foundations for two years, we believe there needs to be a tighter integration between both Foundations to enable greater support for Node.js and a broader range of JavaScript projects. We look forward to continuing to support the healthy growth of the JavaScript ecosystem and look forward to the potential of supporting an even wider range of projects that the JavaScript ecosystem is dependent on, as well as projects that focus on new areas of growth for JavaScript,” the Node.js and JS Foundations’ boards of directors wrote in a post.
The goals of the intended merger are to provide:
• Improved operational excellence
• Streamlined member engagement
• Increased collaboration across the JavaScript ecosystem and standards bodies
• An umbrella project structure for JavaScript projects
• A single home for the JavaScript ecosystem

Industry leaders partner on an Open Data Initiative
A new initiative has launched to transform data into digital customer experiences that drive business value. Adobe, Microsoft and SAP announced the Open Data Initiative at Microsoft Ignite in Orlando in late September. The Open Data Initiative was created out of the need to gather data in real time and make business decisions based on it. According to the companies, too many businesses struggle with their data being trapped in silos, limiting a business’ ability to gain value and make the right connections. The initiative is being driven by three guiding principles:
1. Every organization owns and maintains complete, direct control of all their data.
2. Customers can enable AI-driven business processes to derive insights and intelligence from unified behavioral and operational data.
3. A broad partner ecosystem should be able to easily leverage an open and extensible data model to extend the solution.

Mozilla launches Firefox Reality
Mozilla’s Firefox Reality, a mixed reality web browser, has arrived. It was designed from the ground up to work on various standalone virtual and augmented reality headsets, and it is available in the Viveport, Oculus and Daydream app stores. According to Mozilla, Firefox Reality was built to move seamlessly between the 2D web and the immersive web. Firefox Reality is built on top of Mozilla’s Quantum engine for mobile browsers, resulting in the smooth and fast performance crucial for VR browsers, the company explained.
“We had to rethink everything, including navigation, text-input, environments, search and more,” said Andre Vrignaud, head of mixed reality platform strategy at Mozilla. “This required years of research, and countless conversations with users, content creators, and hardware partners. The result is a browser that is built for the medium it serves. It makes a big difference, and we think you will love all of the features and details that we’ve created specifically for a MR browser.”

Berners-Lee is solid in belief of data control
World Wide Web creator and W3C director Tim Berners-Lee believes the web has reached a critical tipping point. “I’ve always believed the web is for everyone. That’s why I and others fight fiercely to protect it. The changes we’ve managed to bring have created a better and more connected world. But for all the good we’ve achieved, the web has evolved into an engine of inequity and division; swayed by powerful forces who use it for their own agendas,” Berners-Lee wrote in a post.
Over the past few years, Berners-Lee has worked with researchers at MIT to develop Solid, an open-source project that aims to restore power and agency on the web to individuals. In the current model of the web, users submit their personal data to giant companies in exchange for a perceived value, he explained. Solid will give “every one of us complete control over data, personal or not, in a revolutionary way,” he said.

Eggplant launches RPA solution
Eggplant is releasing Eggplant RPA, an intelligent robotic process automation tool that leverages Eggplant’s Digital Automation Intelligence solution. According to research firm Gartner, RPA is a business process automation solution used to cut costs, reduce errors, and speed up processes.
“Although the robotic process automation industry is fairly new, Eggplant has a long heritage in automating the unautomatable. Our customers have naturally used Eggplant for all kinds of automation, including RPA, and so it was listening to our customers and looking at their use cases that led us to create Eggplant RPA. When faced with the prospect of RPA, many organizations believe that it is too technical and that a lot of developers need to be involved. However, with the powerful modeling and fusion automation of Eggplant RPA, we are proving this is simply not the case,” said Antony Edwards, CTO of Eggplant.

Compuware brings Agile, DevOps to mainframe
A majority of enterprises are running their mission-critical apps on the mainframe, but a lack of mainframe expertise is getting in the way of successful development and deployment. To ease the pain, Compuware is updating its zAdviser solution to give modern mainframe leaders the ability to track, measure and improve development velocity, quality and efficiency. zAdviser utilizes artificial intelligence and machine learning to find connections between developer behaviors and key performance indicators, as well as pinpoint DevOps trends and patterns.
The latest release includes an improved dashboard interface built on Elastic’s Elastic Cloud Service, advanced behavioral and operational metrics, near real-time streaming, and expanded AI/algorithmic correlation, according to the company. This will help enterprises detect high defect rates following an update. It will also enable them to see if developers are maximizing the use of their specific DevOps tools, and enable them to act accordingly with new training or tools.

Progress updates Kendo UI for accessibility
Progress’ web development UI toolkit is being updated to include the latest version of the Web Content Accessibility Guidelines (WCAG). WCAG is a World Wide Web Consortium initiative to provide a shared standard for individuals, organizations and governments internationally. It enables people with disabilities such as visual, audio, physical, speech, cognitive, language, learning and neurological disabilities to easily access web content.
WCAG 2.1 adds new criteria for compliance such as mobile accessibility, low-vision requirements, and improvements for those with cognitive, language and learning disabilities. Kendo UI now fully supports the latest compliance version, and will enable developers to build accessible UIs for desktop, voice browsers, mobile phones, and auto-based personal computers without having to develop additional code, according to Progress.

API platform Kong 1.0 is feature-complete
Open-source API platform Kong has launched as version 1.0. According to the company, this launch serves as the foundation of Kong’s vision of creating a service control platform that takes advantage of AI, machine learning, API security and other emerging technologies in order to broker and secure the flow of information between services. Kong is already a highly used open-source project with over 45 million downloads currently, and with the 1.0 release it is now feature-complete. Kong 1.0 incorporates three years of learning from production experiences. According to the company, Kong 1.0 combines low latency, linear scalability and flexibility with a vast feature set, support for service mesh patterns and a Kubernetes Ingress controller, and backward compatibility between versions.

Hortonworks, Cloudera agree to merger
Cloudera and Hortonworks have announced that they have entered into an agreement in which the two companies will join in an all-stock merger of equals. The decision has been unanimously approved by both companies’ boards of directors. The companies believe that this merger will set them up to become a next-generation data platform that spans multi-cloud, on-premises, and the edge. They also hope to set the standard for hybrid cloud data management and accelerate customer adoption, community development, and partner engagement.

Puppet prepares for a new era of DevOps
Delivery and operations automation company Puppet announced a series of DevOps, continuous delivery and continuous integration product launches and updates during its Puppetize Live conference last month. CEO Sanjay Mirchandani said in the announcement that the new tools are designed to “make DevOps transformations data-driven and automatic.”
The company highlighted the release of Puppet Insights during the conference, a new performance metrics monitoring platform, which the company explained will help DevOps teams more easily identify where their processes are lacking through easy-to-use dashboards and reporting tools. Puppet Insights was also “built to help technology leaders measure the impact of their DevOps investments by aggregating, analyzing, and visualizing data across the entire toolchain,” Puppet senior director of product Alex Bilmes wrote in a blog post about the release.
Other announcements include Puppet Discovery 1.6, Puppet Enterprise 2019, Continuous Delivery for Puppet Enterprise 2.0 and Puppet Bolt 1.0.


Applitools
WHAT THEY DO: Application visual management
WHY WE’RE WATCHING: Providing visual tools to compare differences in an application, in an automated way as changes are made, helps organizations ensure their releases maintain the high-level experience users expect.

BrowserStack
WHAT THEY DO: On-device web testing
WHY WE’RE WATCHING: The company has more than 2,000 devices to run tests against, to make sure the user experience is optimized regardless of the mobile device used to gain access to the app.



Protego
WHAT THEY DO: Serverless security
WHY WE’RE WATCHING: Serverless technology is a huge trend expected to grow even larger next year. As businesses begin to make this transition and adopt this technology, it means new security holes can pop up that they need to be aware of.

Drone.io
WHAT THEY DO: Continuous delivery
WHY WE’RE WATCHING: It is becoming more important to deliver faster and faster every day. Drone.io provides an open-source continuous delivery platform to help automate testing and release workflows.

Hasura
WHAT THEY DO: GraphQL
WHY WE’RE WATCHING: Industry leaders believe GraphQL will soon overtake the market share of REST APIs. Hasura offers GraphQL on Postgres to provide GraphQL queries, mutations, real-time capabilities, and event triggers on database events.

Jama Software
WHAT THEY DO: Product development platform
WHY WE’RE WATCHING: With a tool that supports development from idea to delivery, Jama’s development platform helps managers understand the risks and opportunities behind the software being created.

MissingLink
WHAT THEY DO: Artificial intelligence
WHY WE’RE WATCHING: MissingLink launched in September with the promise of linking deep learning with data scientists and enabling more people to manage data, gain insights and provide value with machine learning models.

Pulumi
WHAT THEY DO: Cloud development
WHY WE’RE WATCHING: Cloud development is an ongoing trend in the industry right now, but Pulumi is looking at it from a cloud-native approach. The recently emerged company provides an open-source cloud development platform for cloud-native apps using containers, serverless functions, APIs and infrastructure.

Revulytics
WHAT THEY DO: Software usage analytics
WHY WE’RE WATCHING: The company’s solution gathers data on how applications are being used, which can inform future development decisions and keep the applications relevant to their users.

Semmle
WHAT THEY DO: Analytics
WHY WE’RE WATCHING: Security will always be an important aspect of the software world, and Semmle aims to address this with its new approach to software engineering analytics. The company’s platform provides a “Looks Good to Me” solution as well as a query engine for preventing software mistakes.

Stackery
WHAT THEY DO: Serverless
WHY WE’RE WATCHING: With serverless application development and deployment gaining more traction, Stackery offers a complete toolkit to ease the process.

StreamSets
WHAT THEY DO: DataOps
WHY WE’RE WATCHING: Enterprises want to gain real-time insights into their users and systems, but the explosion of data can make that more complicated. StreamSets wants to take the successful principles of DevOps and apply them to data management to speed things up.

TigerGraph
WHAT THEY DO: Graph database
WHY WE’RE WATCHING: As businesses focus on providing excellent user experiences, the use of data-based technologies such as recommendation engines will become increasingly important. Graph databases enable companies to derive even more value from their data.

TruSTAR
WHAT THEY DO: Threat intelligence
WHY WE’RE WATCHING: While security continues to be a heavily talked about topic in the industry, TruSTAR has a different view on what it means to remain secure. The company believes security means more than securing your own solutions and taking responsibility for your own actions; it means taking responsibility for the entire industry, sharing ideas, and not being afraid to be transparent.



Ethical Design: What it is and why developers should care
BY LISA MORGAN

Safety by design, security by design, privacy by design. As software capabilities continue to evolve, developers need to adapt the way they think and work. Ethical design is the next thing that will be integrated into the pipeline, given the popularity of artificial intelligence (AI). Like safety and security by design, ethical design helps avoid unintended outcomes that spur lawsuits and damage brands. However, there’s more to ethical design than just risk management.
“Digital ethics is mission-critical. I don’t see how something that could damage people, whether it’s disappointing them, annoying them, discriminating against them or excluding them, could be considered academic,” said Florin Rotar, global digital lead at global professional services company Avanade. “It’s a question of maturity. We have team leads and development managers that are putting together a digital ethics cookbook and adding it to their coding and security standards.”
Generally, the practical effect of ethical design is less intuitively obvious at this point than safety by design and security by design. By now it’s common knowledge that the purpose of safety by design is to protect end customers from harm a product might cause, and that security by design minimizes the possibility of intentional and inadvertent security breaches. Ethical design helps ensure several things: namely, that the software advances human well-being and minimizes the possibility of unintended outcomes. From a business standpoint, the outcomes should also be consistent with the guiding principles of the organization creating the software.
Although the above definition is relatively simple, integrating digital ethics into mindsets, processes and products takes some solid thinking and a bit of training, not only by those involved in software design, coding and delivery but others in the organization who can foresee potential benefits and risks that developers may not have considered.
“The reality is, as you think about a future in which software is driving so many decisions behind the scenes, it creates a new form of liability we haven’t had before,” said Steven Mills, associate director of machine learning and artificial intelligence at Boston Consulting Group’s federal division. “If something goes wrong, or we discover there’s a bias, someone is going to have to account for that, and it comes back to the person who wrote the code. So it’s incumbent upon you to understand these issues and have a perspective on them.”

What’s driving the need for ethical design
Traditionally, software has been designed for predictability. When it works right, Input X yields Output Y. However, particularly with deep learning, practitioners sometimes can’t understand the result or the reasoning that led up to the result. This opacity is what’s driving the growing demand for transparency. Importantly, the level of unpredictability described above refers to one AI instance, not a network of AI instances.
“Computer scientists have been isolated from the social implications of what they’ve been creating, but those days are over,” said Keith Strier, global and Americas AI leader at consulting firm EY. “If you’re not driven by the ethical part of the conversation and the social responsibility part, it’s bad business. You want to build a sustainably trustable system that can be relied upon so you don’t drive yourself out of business.”



Business failures, autonomous weapons systems and errant self-driving cars may seem a bit dramatic to some developers; however, those are just three examples of the emerging reality. The failure to understand the long-term implications of designs can, and likely will, result in headline-worthy catastrophes as well as less publicly visible outcomes that have longer-term effects on customer sentiment, trust and even company valuations.
For example, the effects of bias are already becoming apparent. A recent example is Amazon shutting down an HR system that was systematically discriminating against female candidates. Interestingly, Amazon is considered the poster child of best practices when it comes to machine learning, and even it has trouble correcting for bias. A spokesperson for Amazon said the system wasn’t in production, but the example demonstrates the real-world effect of data bias.
Data bias isn’t something developers have had to worry about traditionally. However, data is AI brain food. Resume data quality tends to be poor, and job description data quality tends to be poor, so trying to match those two things up in a reliable fashion is difficult enough, let alone trying to identify and correct for bias. Yet the designers of HR systems need to be aware of those issues.
Granted, developers have become more data literate as a result of baking analytics into their applications or using the analytics that are now included in the tools they use. However, grabbing data and understanding its value and risks are two different things. As AI is embedded into just about everything involved with a person’s personal life and work life, it is becoming increasingly incumbent upon developers to understand the basics of AI, machine learning, data science and perhaps a bit more about statistics. Computer science majors are already getting exposure to these topics in some of the updated programs universities are offering.
Experienced developers are wise to train themselves up so they have a better understanding of the capabilities, limitations and risks of what they’re trying to build. “You’re going to be held accountable if you do something wrong because these systems are having such an impact on people’s lives,” said BCG’s Mills. “We need to follow best practices because we don’t want to implement biased algorithms. For example, if you think about social media data bias, there’s tons of negativity, so if you’re training a chatbot system on it, it’s going to reflect the bias.” An example of that was Microsoft’s Tay bot, which went from posting friendly tweets to shockingly racist tweets in less than 24 hours. Microsoft shut it down promptly.
It’s also important to understand what isn’t represented in data, such as a protected class. Right now, developers aren’t thinking about the potential biases the data represents, nor are they thinking in the probabilistic terms that machine learning requires.
“I talk to Fortune 500 companies about transforming legal and optimizing the supply chain all the time, but when I turn the conversation to the risks and how the technology could backfire, their eyes glaze over, which is scary,” said EY’s Strier. “It’s like selling a car without brake pads or airbags. People are racing to get in their cars without any ability to stop it.”
In line with that, many organizations touting their AI capabilities are confident they can control the outcomes of the systems they’re building. However, their confidence may well prove to be overconfidence in some cases, simply because they didn’t think hard enough about the potential outcomes. There are already isolated examples of AI gone awry, including the Uber self-driving car that ran over and killed a Tempe, Ariz., woman. Also, Facebook Labs shut down two bots because they had developed their own language the researchers couldn’t understand. Neither of these events has been dramatic enough to effect major changes itself, but they are two of a growing number of examples that are fueling discussions about digital ethics.


“You’re not designing an ethically neutral concept. You have to bake ethics into a machine or potentially it will be more likely to be used in ways that will result in negative effects for individuals, companies or societies,” said EY’s Strier. “If you are a designer of an algorithm for a machine or a robot, it will reflect your ethics.”
Right now, there are no ethical design laws, so it is up to individual developers and organizations to decide whether to prioritize ethical design or not. “Every artifact, every technology is an instantiation of the designer, so you have a personal responsibility to do this in the best possible light,” said Frank Buytendijk, distinguished VP and Gartner fellow. “You can’t just say you were doing what you were told.”
And, in fact, some developers are not on board with what their employers are doing. For example, thousands of Google employees, including dozens of senior engineers, protested the fact that Google was helping the U.S. Department of Defense use AI to improve the targeting capability of drones. In a letter, the employees said, “We do not believe Google should be in the business of war.”

Developers will be held accountable
Unintended outcomes are going to occur, and developers will find themselves held accountable for what they build. Granted, they aren’t the only ones who will be blamed for results. After all, a system could be designed for a single, beneficent purpose and altered in such a way that it is now capable of a malevolent purpose. Many say that AI is just a tool like any other that can be used for good or evil; however, most tools to date have



not been capable of self-learning. One way developers and their organizations could protect themselves from potential liability would be to design systems for an immutable purpose, which some experts are advocating strongly.
“In many ways, having an immutable purpose is ideal because once you’ve designed a purpose for a system, you can really test it for that purpose and have confidence it works properly. If you look back at modeling and simulation, that’s verification and validation,” said BCG’s Mills. “I think it will be hard to do that because many of these systems are being built using building blocks and the building blocks tend to be open-source algorithms. I think it’s going to be hard in a practical sense to really ensure things don’t get out for unintended purposes.”
For now, some non-developers want to blame systems designers for anything and everything that goes wrong, but most developers aren’t free to build software or functionality in a vacuum. They operate within a larger ecosystem that transcends software development and delivery and extends out to and through the business. Having said that, the designer of a system should be held accountable for the positive and negative consequences of what she builds.
The most obvious reason why developers will be held accountable for the outcomes of what they build is that AI and intelligently automated systems are expressed as software and embedded software. “I think the big piece of this is asking the really hard questions all along. Part of it comes back to making sure that you understand what your software is doing every step of the way,” said Mills. “We’re talking about algorithms in software, why they’re acting the way they are, thinking about edge cases, thinking about whether groups are being treated equally. People need to start asking these kinds of questions as they build AI systems or general software systems that make decisions. They need to dig into what the system is and the consequences that can manifest if things go awry.”


Axon prioritizes ethical design
While Google is amassing data on everyone, nudging them to do this or that or buy this or that, Axon is trying to make the world a safer place. Perhaps somewhat ironically, the company builds technology solutions and weapons for law enforcement, self-defense and the military.
Axon, formerly known as TASER International, has received significant recognition for its attention to ethical design, in part because the company exercises top-down and bottom-up approaches to digital ethics. For example, the company has an ethical review board, which few companies have formed to date, but more will do in the future. Axon also makes a point of ensuring ethically designed products because it’s an extension of the company’s culture.
“[Ethics] has always been top of mind for Axon. Our CEO is a visionary who thinks about the future of technology and how it can save lives,” said Moji Solgi, director of AI and machine learning at Axon. “Axon wants to make bullets obsolete and ensure evidence is always captured.”
Apparently, some consider AI for law enforcement and military use cases unethical because something might go wrong. Solgi argues that stifling innovation out of fear is unethical because the potential good that comes from innovation would also be negated.
“There are already a few open-source packages for making sure that your dataset is not biased, making sure your model is secure and making sure that adversarial attacks cannot severely impact your model,” said Solgi. “There are more tools and libraries people can leverage as the guardrails and tools for ensuring we can deal with bias, security and privacy and auditing. [There are also] tracking logs so if something goes wrong — and things will go wrong if you look at Facebook’s recent news — that we’re prepared for negative consequences.”

Developing an ethical culture and practices
There’s an idea behind every company, and in some cases, necessity is the mother of invention. Axon grew out of its CEO’s desire to stop the violence that occurs when law enforcement uses handguns. The company, which was founded in 1991, began its AI initiative in 2017 and has addressed the ethics of that specifically.
“We started at a high level with things being vague, such as we need AI ethics, so we decided to look at literature, but it turned out the best way [to implement AI ethics] is to bring together a group of people who are authorities in their own field, for technology, ethics and community,” said Solgi. “[The ethical review board guides] us in the ethical development of this technology. It’s a work in progress, so we don’t have all the answers, but we are going in a bottom-up way for each one of the things we’re doing, asking what are the considerations and what kind of due diligence we should do.”
Apparently, Axon is approaching digital ethics at many layers, ranging from the long-term impact on society to the low-level details of what the code will look like, including ensuring that the data isn’t biased.
“As the people who are building this stuff, we have more responsibility, and it can’t be all executives. We all have an obligation, a moral responsibility, to consider the impact it can have on society,” said Solgi. “When it comes to individual software engineers, they should learn about those tools and how to make a machine learning model secure. Even if your product manager or your manager doesn’t know much, and it’s not on his radar in terms of putting logging, tracking and due diligence systems in our software pipelines, you as the person who’s building it should raise that as a way that processes should change.” — Lisa Morgan



SD Times

November 2018

www.sdtimes.com

How to achieve ethical design
BY LISA MORGAN

Ethical design is in its very early stages when it comes to high-tech innovation. Other industries, such as healthcare, financial services and energy, have had to ponder various ethical issues and, over time, became highly regulated because the safety and security of their customers, or of the public at large, are at stake. In the high-tech industry, ethical considerations lag behind innovation, which is an increasingly dangerous proposition in light of self-learning systems. As more tasks and knowledge work are automated, unexpected outcomes will likely occur that are tragic, shocking, disruptive and even oppressive, mostly arising out of ignorance, blind faith and negligence rather than nefarious intention, though all will coexist.

Ideally, ethical design would begin in higher education, so fewer people would need to be trained at work. Some universities, including Columbia, Cornell and Georgetown, are already designing engineering ethics or digital ethics classes. Existing developers and their employers first need to understand what digital ethics is. Then they need to start thinking consciously about it in order to build it into their mindset, embed it in their processes and design it into products.

Right now, the bottom-line impact of digital ethics is not obvious, much as in the early days of computer security, so most companies aren't yet investing in it or forming ethical review boards. However, like security, digital ethics will eventually become a brand issue that's woven into the fabric of organizations and their products, because customers, lawmakers and society will want assurances that intelligent products and services are safe, secure and can be trusted.

How to build ethics into your mindset

Before ethics can be baked into products, there has to be an awareness of the issue to begin with, and processes need to evolve to ensure that ethical design can be achieved. Recognizing this, global professional services firm EY recently published a report on the ethics of autonomous and intelligent systems, because too many of today's companies are racing to get AI-enabled products to market without considering the potential risks.

"The ethics of AI in a modern context is only a couple of years old, largely in academic circles," said Keith Strier, global and Americas AI leader at EY. "As a coder, designer, or data scientist, you probably haven't been exposed to too much of this, so I think it's helpful to build an awareness of the topic so you can start thinking about the negative implications, risks and adjacent impacts of the technology or model you're building."

Because the risks associated with self-learning systems are not completely understood, developers and others in their organizations should understand what ethical design is and why it's important. What may not be obvious is that it requires a mindset shift from short-term gain to longer-term implications.

To aid the thinking process, a popular resource is the Institute for the Future's Ethically Aligned Design document, which just completed its third round of public commentary. (Disclosure: this author is on the editing committee.) The IEEE also recently announced the Ethics Certification Program for Autonomous and Intelligent Systems, since there isn't otherwise anything that "proves" whether a product is ethical. The program will create specifications for certification and marking processes that advance transparency, accountability and reduction of algorithmic bias in autonomous and intelligent systems. In addition, the Association for Computing Machinery recently updated its code of ethics, the first such update since 1992.

How to build ethics into processes

Digital ethics has to be a priority for it to make its way into processes and, ultimately, products.

"Most software developers don't have it in their job description to think about outcomes. They're focused on delivery," said Nate Shetterly, senior director of data and design at design and innovation company Fjord. "As we look at the processes we use to drive our industry, like Agile development, you can modify those processes. We do testing and QA as part of the development life cycle; let's add ethical considerations and unintended outcomes."

Right now, ethics after the fact is the norm. However, if the Facebook/Cambridge Analytica scandal is any indication, ethics after the fact is an extremely expensive proposition. It may erode customer trust, permanently in some cases.
continued on page 16 >


Full Page Ads_SDT017.qxp_Layout 1 10/23/18 10:37 AM Page 15



< continued from page 14

"The question of building digital ethics into processes is one of organizational values," said Nicolas Economou, chairman and CEO of legal technology firm H5. "Some companies are focused on efficiency, but efficiency can be contrary to notions of fairness or treating people well and having respect for people. You need to create mechanisms for review and governance of those principles, and you have to engage a broad range of folks by design. You also have to have an ongoing digital ethics impact assessment."

The way a digital ethics impact assessment works is that the impact of development is assessed against the organization's principles and values. If the development deviates from them, corrective steps are taken and another digital ethics impact assessment is done.

"You need to think about the harms you may cause and the societal impact you want to have. If you really understand digital ethics, you know it has to involve all of your processes because it's not just the job of engineers," said Economou. "If you want to build digital ethics into your product, you have to build it into your processes and your mindset."

However, unlike security, testing and all the other things shifting left, digital ethics isn't yet baked into the software development life cycle — anywhere — in most organizations. Fjord's Shetterly suggested that ethics and unintended consequences might become part of the QA process. He also suggested tools might nudge developers to consider what would happen if a particular bug affected another one.

"It's like a startup going from shipping code every day to ensuring they don't break a financial transaction. As you grow, you adapt your software development processes," said Shetterly. "As an industry, I think we need to adopt processes that consider ethics and unintended consequences."

How to build ethics into products

Global professional services firm Avanade places a set of data ethics principles in the hands of its developers and data scientists that remind them that behind the software and the data is a person.

"One of the biggest risks and opportunities is the correlated use of repurposed data," said Florin Rotar, global digital lead at Avanade. "When you put data through a supply chain and analyze it, it's basically a question of the intention of how that data is used and whether that differs from the [original] intent of data use."

One thing developers need to understand is that self-learning systems are not deterministic; they're probabilistic. Increasingly, computer science majors are getting the benefit of more statistics, basic data science and basic machine learning in newer university curricula. Experienced developers should brush up on those topics as well, and learn any other relevant areas they may be unfamiliar with.
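The deterministic-versus-probabilistic distinction is easy to see in code: a hand-written rule always returns the same yes/no answer, while a learned model returns a probability whose value depends on its training data. A toy sketch — the weight, bias and threshold here are invented for illustration:

```python
import math

def deterministic_rule(transaction_amount):
    # A hand-written rule: the same input always yields the same decision.
    return transaction_amount > 10_000

def learned_model(transaction_amount, weight=0.00042, bias=-3.5):
    # A logistic model outputs a probability, not a yes/no answer.
    # Its weight and bias come from training data, so two models trained
    # on different (possibly biased) datasets can disagree on the same input.
    score = weight * transaction_amount + bias
    return 1 / (1 + math.exp(-score))

flagged = deterministic_rule(12_000)   # always the same answer for this input
risk = learned_model(12_000)           # a probability strictly between 0 and 1
```

The practical consequence is that a probabilistic system needs monitoring and auditing of its outputs over time, not just a one-time correctness test.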

However, given the speed and scale at which AI is capable of operating, humans won’t be able to see or understand everything, which is why supervisory AI instances are being proposed. The job of a supervisory AI is to oversee the training and operation of another AI instance to ensure it is operating within bounds and if not, to intervene autonomously by shutting the monitored AI instance down or notifying a human. Meanwhile, organizations need to understand digital ethics within the context of their industries, organizations, customers and products and services to affect ethical design in the practical sense. For example, H5 has an entire range of ethical guidelines and obligations it has to comply with, some of which are regulatory and professional. Its use of data differs from the average enterprise, however. Most of today’s companies use personal data to target products, monitor behavior, or change behavior. H5 uses that information along with other information to prove the facts in a lawsuit. “We use data to support the specific objectives that a litigator has in proving or disproving a case,” said Economou. “The data we have has been created through a legal process in litigation.” Digital ethics is not a one-size-fits-all proposition, in other words. In the meantime, the pace of innovation is moving at warp speed and laws that will impact the use of AI, robotics, and privacy are moving at the pace of automobiles. Organizations and individual developers have to decide for themselves how they will approach product design, and they need to be prepared to accept the consequences of their decisions. “Evolution is now intrinsically linked to computers. We’re shaping our world through computers and algorithms and the world to date has developed as a result of human intervention,” said Steven Mills, associate director of machine learning and artificial intelligence at Boston Consulting Group’s federal division. 
"Computers are shaping the world now and we're not thinking about that at all."
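The supervisory-AI pattern described earlier — one system overseeing another's operation, intervening autonomously when outputs leave safe bounds — can be sketched in a few lines. The bounds, the stand-in model and the shutdown hook below are all hypothetical, purely to illustrate the idea:

```python
class SupervisedModel:
    """Stand-in for a monitored AI instance."""
    def __init__(self):
        self.running = True

    def predict(self, x):
        return 0.1 * x  # placeholder inference

    def shut_down(self):
        self.running = False

class Supervisor:
    """Oversees another model; intervenes when its outputs leave safe bounds."""
    def __init__(self, model, low, high, alert):
        self.model, self.low, self.high, self.alert = model, low, high, alert

    def check(self, x):
        y = self.model.predict(x)
        if not (self.low <= y <= self.high):
            self.model.shut_down()  # autonomous intervention
            self.alert(f"out-of-bounds output {y!r} for input {x!r}")  # notify a human
        return y

alerts = []
model = SupervisedModel()
supervisor = Supervisor(model, low=0.0, high=5.0, alert=alerts.append)
supervisor.check(10)    # within bounds: nothing happens
supervisor.check(100)   # out of bounds: shutdown plus a human notification
```

A production version would monitor distributions of outputs rather than single values, but the intervene-or-notify structure is the same.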



INDUSTRY SPOTLIGHT

Six Pillars of Monitoring Automation
BY DAVID RUBINSTEIN

Changes in software development that have led to accelerated delivery cadences are stressing other parts of the application life cycle. This is especially true in organizations adopting microservices architecture, where teams work autonomously to deliver their software, which by definition relies on communication with other services to form a more complete application. One area where this has caused major changes is application monitoring.

The decentralization of authority over change means the 'classic' way of doing APM — creating a few experts who understand the complexity of an application, understand the underlying data and ultimately dig down to the root cause — no longer holds up.

"We speak about the democratization of the whole APM territory, which allows everyone — be it an occasional user, an expert user, or a developer looking at a current problem — to have access to the data immediately, to see what is running there, what is being collected, and to get to the root cause within a few seconds," said Pavlo Baron, chief technology officer at Instana. "There is no central authority anymore that is doing the job of keeping things together and understanding how they communicate and talk to each other," and understanding how many components can impact each other.

The dynamism of applications is actually one of the pain points of monitoring microservices, because things change all the time and have to be monitored all the time, according to Baron. "The only way to do that and not go insane, and to keep up with the changes in the application, is through automation. Every aspect of monitoring — what to monitor, what to look for, what makes it healthy — all those things have to be automated. If you don't, or can't, automate it — if you use a tool that requires any kind of manual intervention — you risk being obsolete even before your changes are rolled out. We have customers who are updating their applications or services dozens of times a day."

There's an organizational shift within IT application delivery teams, focused on Agile methodologies and achieving CI/CD. "In microservices deployment cycles, this shift is automating every other part of the deployment process — monitoring is the last piece."

That's why Baron and others in the industry believe in monitoring automation. It is time-consuming and not cost-effective to have people writing monitoring configurations every time a microservice within an application changes, he pointed out. "Whenever you deliver a change, you change something in the whole crowd of your services, and your monitoring configurations and the rules you use to identify problems must be adjusted. You keep people continuously busy, instead of delivering value, just configuring and reconfiguring your monitoring. Our approach is to automate this as much as possible and take that problem from the teams so they can focus on what really matters."

The basis for this new APM is intelligence, and building an intelligent system begins with the ability to recognize what is running in the system being monitored. From there, the monitoring system needs to know how to attach to a component and consume the data from within — automatically, without any human intervention in terms of configuration. Baron explained that a big part of making that work is data precision, which is why Instana built one-second resolution for all metrics and metadata, so Instana's automation engine (the robot) can always be up to date. Whenever a change occurs, the robot can capture the change and readjust the underlying graph to the new connections.

In Baron's mind, automated monitoring should be a commodity. Only a few people in the world have the job of monitoring systems, he said, while most people have the job of keeping systems up and running. With the IT world moving toward combining those roles, Instana is trying to deliver maximum automation, intelligence and visibility to its users. "We plan to evolve the whole APM territory into something that perfectly supports the modern way teams work," he said. "While the classic APM tool is functional, from a process perspective, it's not a good fit."

The Six Pillars
The only way to effectively manage microservice applications is to automate the entire monitoring life cycle by applying AI to discovery, monitoring and analysis. Instana's AI strategy is built on six fundamental pillars, the first three of which are focused on automation. Those pillars are:
• Automatic and Continuous Discovery & Mapping
• Precise High-Fidelity Visibility
• Cloud, Container & Microservice Native Deployment
• Full-Stack Application Data Model
• Real-time AI-Driven Incident Monitoring & Prediction
• AI-Powered Problem Resolution and Troubleshooting Assistance

Content provided by SD Times and Instana.
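The discover-then-configure idea Baron describes — recognize what is running, then generate the monitoring configuration rather than hand-edit it — can be illustrated with a toy regeneration loop. This is not Instana's implementation; the component signatures and collector settings are invented for the example:

```python
# Known component signatures map to collector settings, so when the set of
# deployed services changes, the monitoring config is regenerated instead of
# hand-edited. (All names and settings here are illustrative.)
KNOWN_COMPONENTS = {
    "postgres": {"metrics": ["connections", "replication_lag"], "interval_s": 1},
    "nginx":    {"metrics": ["requests_per_s", "5xx_rate"],     "interval_s": 1},
    "jvm":      {"metrics": ["heap_used", "gc_pause_ms"],       "interval_s": 1},
}

def discover_config(running_processes):
    """Regenerate the monitoring config from what is actually running."""
    return {
        proc: KNOWN_COMPONENTS[proc]
        for proc in running_processes
        if proc in KNOWN_COMPONENTS
    }

# First deployment:
config = discover_config(["nginx", "jvm"])
# A new service rolls out; no human edits the monitoring config:
config = discover_config(["nginx", "jvm", "postgres"])
```

Rerunning discovery on every change is what keeps the monitoring from going stale when services ship dozens of times a day.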



Data lineage helps organizations understand where their data originated and how it may have changed, to make better business decisions
BY JACQUELINE EMIGH



With the rise of big data platforms like Apache Hadoop and Spark, more and more enterprises are pouring enterprise information into data lakes and launching related initiatives around data quality, data governance, regulatory compliance, and more reliable business intelligence (BI). To prevent the new lakes from turning into swamps, however, businesses are organizing their reams of data via the data's lineage.

Enterprises have long managed and queried relational data in structured databases and data marts. Emerging environments such as Hadoop, however, often bring this information together with semi-structured data from NoSQL databases, emails and XML documents, as well as unstructured information like Microsoft Office files, web pages, videos, audio files, photos, social media messages, and satellite images.

"Even though data is becoming more accessible, users still rely on receiving data from trusted internal sources," said Sue Clark, senior CTO architect at Sungard AS, a customer of Informatica, Teradata, and Qlik. "For a company, it's important for users to know and understand the source and veracity of the data. Data lineage tools enable companies to track, audit and provide a visual of data movement from the source to the target, which also ties into the required data governance processes."

Through new laws like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) of 2018, government regulators are requiring organizations to better manage data originating from all types of raw formats. Enterprises also face increasing demands from business managers for higher-quality data for use in predictive analysis and other BI reports.

"Today, companies can't afford not to make data-driven decisions, which means understanding where data comes from — and how it has changed along the way — to solve business problems," according to Harald Smith, director of product management at Syncsort, a specialist in data integration software and services.

"Regulatory compliance demands accuracy, and data lineage tools guarantee a significantly more accurate approach to data management," echoed Amnon Drori, founder and CEO of Octopai, maker of an automated data lineage and metadata management search engine for BI groups.

Data lineage tools also show up in self-service BI solutions, although such solutions aren't yet available to all that many users. In one recent study, TDWI found that only 20 percent of the companies surveyed said their personnel could identify trusted data sources on their own. Further, merely 18 percent responded that personnel could "determine data lineage — that is, who created the data set and where it came from — without close IT support," according to the report.

"If users and analysts are to work effectively with self-service BI and analytics, they need to be confident that they can locate trusted data and know its lineage," recommended TDWI. "For self-service to prosper, IT and/or the CDO function must help users by stewarding their experiences and pointing them to trusted, well-governed sources for their analysis."

Even fewer of the respondents to TDWI's survey, just 16 percent, said their end users were able to query sources such as Hadoop clusters and data lakes — but then again, only about one-quarter of the participating organizations even had a data lake.

What are data lineage tools, anyway?

Dozens of proprietary and open-source vendors are converging on data
continued on page 22 >


< continued from page 21

lineage, from a bunch of different directions. Vendors, customers and analysts define data lineage tools in a wide variety of ways, but Gartner has arrived at one short yet highly serviceable definition. "Data lineage specifies the data's origins and where it moves over time. It also describes what happens to data as it goes through diverse processes. Data lineage can help to analyze how information is used and to track key bits of information that serve a particular purpose," according to Gartner's 2018 Magic Quadrant for Metadata Management Solutions (MMS) report.

Muddying the definitional waters a bit is the fact that enterprises generally use data lineage tools within sweeping organizational initiatives. Accordingly, vendors often integrate these tools with related data management or BI functions, either within their own platforms or with partners' solutions. Customers also perform their own tool integrations.

Some data lineage tools also transform, or convert, data into other formats, although other vendors perform these conversions through separate ETL (extract, transform, load) tools. Syncsort's DMX-h, for example, accesses data from the mainframe, RDBMS, or other legacy sources and provides security, management, and end-to-end lineage tracking. It also transforms legacy data sources into Hadoop-compatible formats.

Beyond simply tracking data, organizations need to be able to consume the data lineage information in a way that gives them a better understanding of what it
continued on page 27 >

Where does data lineage fit?

Experts slice and dice the data management and BI markets into myriad pieces. In characterizing where data lineage tools fit, major analyst firms such as Gartner and IDC place these tools in the general classification of metadata management.

GARTNER'S TAKE. Beyond tools for data lineage and impact analysis, products in the metadata management category can include metadata repositories, or libraries; business glossaries; semantic frameworks; rules management tools; and tools for metadata ingestion and translation, according to Gartner. Tools in the latter category include techniques and bridges for various data sources such as ETL, BI and reporting tools, modeling tools, DBMS catalogs, ERP and other applications, XML formats, hardware and network log files, PDF and Microsoft Excel/Word documents, business metadata, and custom metadata.

Vendors who made it into Gartner's 2018 Magic Quadrant for MMS are as follows: Adaptive, Alation, Alex Solutions, ASG Technologies, Collibra, Data Advantage Group, Datum, Global IDs, IBM, Infogix, Informatica, Oracle, SAP, and Smartlogic.

"I don't have exact adoption rates, but awareness of doing proper metadata management is growing. Initial resistance at the thought it would take away from agility is going away. Organizations can actually add new workloads much faster because the proper discipline is in place," said Sanjeev Mohan, a research analyst for big data and cloud/SaaS at Gartner, during an interview with SD Times.

Organizations, though, have differing reasons for engaging in data quality initiatives. Before launching an initiative and deciding on an approach to take, an enterprise should first determine the business use case, he advised. "Is it regulatory compliance? Risk reduction? Predictive analysis?"

IDC'S VIEWS. Stewart Bond, an IDC analyst, classifies metadata management tools as belonging to a larger category, called data intelligence software. Further, Bond views data intelligence software as a collection of capabilities that can help organizations answer fundamental questions about data. The list of questions is rather long, but it includes when the data was created, who is currently using the data, where it resides, and why it exists, for example. The answers can inform and guide use cases around data governance, data quality management, and self-service data, he explained.

"To collect these answers, organizations must harness the power of metadata that is generated every time data is captured at a source, moves through an organization, is accessed by users, is profiled, cleansed, aggregated, augmented and used for analytics for operational or strategic decision-making," Bond wrote in a recent blog. "Data intelligence software goes beyond just metadata management, and includes data cataloging, master data definition and control, data profiling and data stewardship."

Data intelligence is a subset and different view of Data Integration and Integrity Software (DIIS), another market view defined by IDC, according to Bond, who is research director of DIIS at IDC. "Data intelligence contains software for data profiling and stewardship, master data definition and control, data cataloging and data lineage — all of which also map into the data quality, metadata management and master data segments in the full DIIS market," Bond told SD Times in an email.

Examples of vendors included in IDC's data intelligence and DIIS views are Alation, ASG Technologies, BackOffice Associates, Collibra, Datum, IBM, Infogix, Informatica, Manta, Oracle, SAP, SAS, Syncsort, Tamr, TIBCO, Unifi, and Waterline Data. However, many products containing data lineage tools are not included in IDC's data intelligence and DIIS views, or in Gartner's MMS Magic Quadrant, typically because they don't meet the specific criteria for those categories and are covered by other areas of the analysts' organizations.
—Jacqueline Emigh
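The core of Gartner's definition — recording where data originated and every process it passed through — can be captured with a simple lineage record attached to each data set. A minimal sketch; the field names and sample sources are illustrative, not any vendor's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """Tracks a data set's origin and every transformation applied to it."""
    source: str                      # where the data originated
    created_by: str                  # who created the data set
    steps: list = field(default_factory=list)

    def record_step(self, process, actor):
        # Each transformation appends to the history instead of replacing it,
        # so auditors can replay how the data changed along the way.
        self.steps.append({
            "process": process,
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        })

lineage = LineageRecord(source="crm.accounts", created_by="etl-service")
lineage.record_step("deduplicate", "data-steward")
lineage.record_step("join:billing.invoices", "etl-service")
# An analyst can now answer "who created this data set and where did it come from?"
```

Real tools store records like this as a graph across systems, which is what makes impact analysis ("what breaks if this source changes?") possible.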



From Origin to Destination, Tracing Your Data Ancestry

Getting the most out of your data requires knowing what data you have, where it is, and where it came from — plus understanding its quality and value to the organization. But you can't understand your data in a business context, much less track its physical existence and lineage or maximize its security, quality and value, if it's scattered across different silos in numerous applications.

Data lineage enables data tracking from origin to destination across its lifespan and all the processes it's involved in. It also plays a vital role in data governance. Beyond the ability to know where data came from and whether or not it can be trusted, there's an element of statutory reporting and compliance often requiring knowledge of how that same data (known or unknown, governed or not) has changed over time. A platform that provides insights like data lineage, impact analysis and full-history capture serves as a central hub from which everything can be discovered about the data — whether in a data lake, data vault or traditional data warehouse.

In a traditional data management organization, spreadsheets are used to manage the incoming data design — what's known as the "pre-ETL" mapping documentation — but this doesn't provide visibility or auditability. In fact, each unit of work represented in these "mapping documents" becomes an independent variable in the overall system development lifecycle, and is therefore nearly impossible to learn from, much less standardize. The key to accuracy and integrity in any exercise is eliminating human error. That doesn't mean eliminating humans from the process, but incorporating the right tools to reduce the likelihood of error as humans apply thought to the work.

Knowing what data you have, where it lives and where it came from is complicated. The lack of visibility and control around "data at rest" combined with "data in motion," as well as difficulties with legacy architectures, means organizations spend more time finding the data they need than using it to produce meaningful business outcomes.

Organizations need to create and sustain an enterprise-wide view of, and easy access to, underlying metadata, but that's a tall order given numerous data types and sources never designed to work together, with infrastructures cobbled together over time from disparate technologies, poor documentation and little thought for downstream integration. So the applications and initiatives that depend on a solid data infrastructure may be compromised, resulting in faulty analyses.

These issues can be addressed with a strong data management strategy underpinned by technology that enables the data quality the business requires, encompassing data cataloging (integration of data sets from various sources), mapping, versioning, business rules and glossary maintenance, and metadata management (associations and lineage). An automated, metadata-driven framework for cataloging data assets and their flows across the business provides an efficient, agile and dynamic way to generate data lineage from operational source systems (databases, data models, file-based systems, unstructured files and more) across the information management architecture; construct business glossaries; assess what data aligns with specific business rules and policies; and inform how that data is transformed, integrated and federated throughout business processes — complete with full documentation.

Centralized design, immediate lineage and impact analysis, and change-activity logging mean you will always have answers readily available, or just a few clicks away. Subsets of data can be identified and generated via predefined templates, and generic designs generated from standard mapping documents can be pushed via ETL processes for faster processing via automation templates.

With automation, data quality is systemically assured and the data pipeline is seamlessly governed and operationalized to the benefit of all stakeholders. Without such automation, business transformation will be stymied. Large companies with thousands of systems, files and processes will be particularly challenged by a manual approach. And outsourcing data management efforts to professional services firms only increases costs and schedule delays.

erwin Mapping Manager automates enterprise data mapping and code generation for faster time-to-value and greater accuracy in data movement projects, and synchronizes data in motion with data management and governance efforts. Map data elements to their sources within a single repository to determine data lineage, deploy data warehouses and other Big Data solutions, and harmonize data integration across platforms. There's no need for specialized resources with knowledge of ETL and database procedural code, making it easy for business analysts, data architects, ETL developers, testers and project managers to collaborate for faster decision-making.

This article was contributed by Danny Sandwell of erwin, Inc., as part of a content sponsorship.
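The "pre-ETL mapping document" idea above can be made machine-readable instead of living in spreadsheets; once mappings are data, lineage falls out of the same source that drives code generation. A simplified sketch — the mapping format and field names are invented for illustration, not erwin's format:

```python
# Each mapping row: source field -> target field, with the transform applied.
MAPPINGS = [
    {"source": "crm.cust_nm",  "target": "dw.customer_name", "transform": "trim"},
    {"source": "crm.cust_dob", "target": "dw.birth_date",    "transform": "to_iso_date"},
]

def generate_lineage(mappings):
    """Derive target-to-source lineage directly from the mapping spec,
    so documentation can never drift from what the ETL actually does."""
    return {m["target"]: {"from": m["source"], "via": m["transform"]}
            for m in mappings}

lineage = generate_lineage(MAPPINGS)
```

Because the lineage is generated rather than hand-maintained, a change to a mapping row automatically updates both the ETL output and the audit trail.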



Which data lineage tools are best?

With so many choices available, which data lineage tools will best meet your needs? Factors to consider include whether an initiative is IT- or business-driven, the types of additional data management or BI functionality that will be required, and whether using open-source software is important to the organization, experts say.

Some IT-driven initiatives are concerned with pruning through and curating the organization's information into data catalogs, so that the most accurate data can then be reused through enterprise applications. Other initiatives are sparked by business managers seeking to quickly put together consistent and reliable data sets for use within corporate departments or company-wide.

For IT-driven initiatives, Informatica provides data lineage through Metadata Manager, a key component of Informatica PowerCenter Advanced Edition. Metadata Manager gathers technical metadata from sources such as ETL and BI tools, applications, databases, data modeling tools, and mainframes. Metadata Manager shares a central repository with Informatica's Business Glossary. The technical metadata can be linked to business metadata created by Business Glossary to add context and meaning to data integration. Metadata Manager also provides a graphical view of the data as it moves through the integration environment.

IT developers can use Metadata Manager to perform impact analysis when changes are made to metadata. Enterprise data architects can use the solution's integration metadata catalog for purposes such as browsing and searching for metadata definitions, defining new custom models, and maintaining common interface definitions for data sources, warehouses, BI, and other applications used in enterprise data integration initiatives.

The capabilities within Hortonworks Data Steward Studio

In stark contrast, Datawatch targets its Monarch platform at business-driven initiatives. Monarch allows domain experts in business departments to pull metadata for documents in multiple formats — such as Excel spreadsheets, Oracle RDBMS, and Salesforce.com — and then use the metadata to build dashboard-driven models for reuse within their departments, said Jon Pilkington, Datawatch's CPO, in an interview with SD Times. Monarch's data lineage tools document "where the raw data came from, how it's been altered, who did it, when they did it," Pilkington said. "The model then becomes what users search for and shop."

Monarch extracts the raw data in rows and columns. After it's extracted, a domain expert uses Monarch's point-and-click user interface to convert, clean, blend and enrich data without performing any coding or scripting. The data can then be analyzed directly within Monarch or exported to Excel spreadsheets or third-party advanced analytics and visualization tools through built-in connectors.

Within its own marketing department, for example, Datawatch has used its tools to generate reports by salespeople about how information turns into a sales lead and how long it takes to turn a lead into a sale. "We use 11 different data sources for this — including Google AdWords and the Zendesk support system — and the apps don't necessarily play well together. It took many steps for the domain expert to get the information into shape, but now that the model is done, it can be reused by any salesperson in the department," said Pilkington.



www.sdtimes.com

November 2018

SD Times

An example of Octopai’s data lineage visualization.

< continued from page 22

means, said Syncsort’s Smith. Consequently, Syncsort recently teamed up with Cloudera to make its lineage information accessible through Cloudera Navigator, a data governance solution for Hadoop that collects audit logs from across the entire platform and maintains a full history, viewable through a graphical user interface (GUI) dashboard. For organizations that don’t use Navigator, DMX-h makes the lineage information available through a REST API, which IT departments can use for integration with other governance solutions.
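DMX-h’s actual endpoint paths and response schema aren’t documented here, but consuming such a REST lineage feed largely amounts to parsing JSON. A minimal sketch, with an invented payload shape (the dataset, source, and field names are illustrative, not DMX-h’s real schema):

```python
import json

# Hypothetical lineage payload, loosely modeled on what a lineage
# REST API might return; field names are invented for illustration.
SAMPLE_RESPONSE = json.dumps({
    "dataset": "sales_report",
    "upstream": [
        {"source": "crm.leads", "transform": "join"},
        {"source": "web.clicks", "transform": "aggregate"},
    ],
})

def upstream_sources(response_text: str) -> list:
    """Extract the names of all upstream sources from a lineage response."""
    payload = json.loads(response_text)
    return [entry["source"] for entry in payload.get("upstream", [])]

print(upstream_sources(SAMPLE_RESPONSE))  # → ['crm.leads', 'web.clicks']
```

In practice the response text would come from an HTTP call to the lineage endpoint rather than a hard-coded sample; the parsing step is the part a governance integration would reuse.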

Some perform impact analysis
Impact analysis capabilities are also offered in some data lineage tools. “With the implementation of GDPR, companies in possession of personal data of EU residents have had to make significant changes to ensure compliance,” according to Drori. “A large part of this pertains to access — giving people access to their own personal data, enabling portability of the data, changing or deleting the data. Before any company can make a change to its data, it must first locate the data and then of course understand the impact of making a particular change. Data lineage tools are helping BI groups to perform impact analysis ahead of compliance with regulations like GDPR.” In one real-world scenario, for

Three approaches to data management
As Gartner’s Sanjeev Mohan sees it, enterprises can take any of three approaches to data management initiatives: custom development, mixing and matching best-of-breed tools, or investing in a broader platform or suite. By choosing a best-of-breed data lineage tool or metadata management package, a customer can achieve strong support for a specific use-case scenario, the analyst observed. On the other hand, customers often need to perform their own tool integrations, a process that can be expensive and time-consuming. Sungard AS is one example of an enterprise that is taking a best-of-breed approach. “As part of its internal handling of data and its sources, Sungard AS uses Teradata and Informatica, with Qlik on top of Teradata for ease of business user access and to make data-backed business decisions easier,” Sungard’s Sue Clark told SD Times.

Open source vs. proprietary. Most solutions offering data lineage capabilities are proprietary, said Gartner’s Mohan. Yet some are open source, including offerings from Hortonworks, Cloudera, MapR and the now Google-owned Cask Data, in addition to Teradata’s Kylo. “We don’t like to lock customers into a specific vendor,” said Shaun Bierweiler, vice president of U.S. Public Sector at Hortonworks, in an interview with SD Times. Hortonworks is now working with the United States Census Bureau to provide technology for the 2020 census, the first national census to be conducted in a mainly electronic way. HDP will serve as the Census Data Lake, storing most of the census data, while also acting as a staging ground for joining data from other databases. The Census Lake will store both structured and unstructured data, such as street-level and aerial map imagery from Google.

—Jacqueline Emigh

example, a business analyst needed to erase PII (an age column in a particular report) so that customer age would remain private. Data lineage tools helped to solve the problem. “Before erasing a column the analyst had to understand which processes were

involved in creating this particular report and what kind of impact the deletion of this age column would have on other reports,” he told SD Times. “Without data lineage tools, impact analysis can be really tricky and sometimes impossible to perform accurately.”
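The age-column scenario boils down to a downstream-reachability query over a lineage graph: find everything derived, directly or transitively, from the column being changed. A minimal sketch of that idea (the dataset and report names are invented, not from any vendor’s tool):

```python
from collections import deque

# Toy lineage graph: each node maps to the artifacts derived from it.
# All dataset and report names are invented for illustration.
LINEAGE = {
    "customers.age": ["etl.demographics", "report.churn"],
    "etl.demographics": ["report.marketing"],
    "report.churn": [],
    "report.marketing": [],
}

def impacted(node: str, graph: dict) -> set:
    """Return every downstream artifact reachable from `node` (BFS)."""
    seen, queue = set(), deque([node])
    while queue:
        for child in graph.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

print(sorted(impacted("customers.age", LINEAGE)))
# → ['etl.demographics', 'report.churn', 'report.marketing']
```

Commercial lineage tools maintain this graph automatically across ETL jobs, warehouses and BI reports; the traversal itself is the easy part, which is why the hard problem the article describes is capturing the edges in the first place.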




DEVOPS WATCH

JFrog acquires DevOps consulting company
BY CHRISTINA CARDOZA

JFrog wants to bring DevOps further into the enterprise with its acquisition of development and DevOps technology consulting company Trainologic. The company provides consulting services for Java, Scala, front end, big data, software architecture and DevOps technologies. According to JFrog, Trainologic’s years of consulting and training experience make it a perfect fit and will help accelerate the company’s vision of liquid software, which refers to providing software updates that run automatically and continuously. “We are ecstatic to see the tremendous DevOps industry adoption of JFrog tools with a growing ecosystem of market experts who are available to support customers in the implementation of our universal binaries solution,” said Shlomi Ben Haim, CEO and co-founder of JFrog. “Trainologic stands out in their know-how of the developer tools space and being seen as trusted advisors by customers. Together we will be able to

Shlomi Ben Haim, CEO and co-founder of JFrog and Gal Marder, CEO of Trainologic.

further support our Enterprise customers and open source community. We are thrilled to have the team on board and welcome them to the JFrog family.” In addition, JFrog plans to launch a new DevOps consulting unit that will work towards accelerating the adoption of DevOps practices along with the company’s Enterprise+ platform. Trainologic’s CEO Gal Marder will become

the head of the unit and VP of DevOps Consulting. The company’s global team will join JFrog’s local offices. The company also recently announced a $165 million round of funding to go towards enterprise DevOps. The funding will be used to drive JFrog product innovation, support expansion to new markets, and accelerate organic and inorganic growth, according to JFrog.

GitLab raises $100 million for DevOps vision
BY CHRISTINA CARDOZA

GitLab is now valued at more than $1 billion thanks to a recent $100 million Series D round of funding. The company plans to use this new investment to strengthen its position with DevOps and tackle everything from planning to monitoring. The round of funding was led by ICONIQ Capital and included participation from Khosla Ventures and Google Ventures. “GitLab is emerging as a leader across the entire software development ecosystem by releasing software at an exceptional velocity,” said Matthew Jacobson, general partner at ICONIQ Capital. “They’re taking the broad software development market

head-on by developing an application that allows organizations to ship software at an accelerated rate with major increases in efficiency.” According to GitLab, too many enterprises are struggling to succeed with DevOps because of the number of tools it takes to drive the different stages of software development and operations. A typical DevOps environment includes tools from VersionOne, Jira, GitHub, Jenkins, Artifactory, Electric Cloud, Puppet, New Relic and BlackDuck, the company explained. This “tool chain crisis” slows down cycle times and leads to poor visibility. GitLab’s Concurrent DevOps vision aims to break down barriers, build features for each DevOps stage in one application, and provide the ability to manage, plan, create, verify, package, release, configure, monitor and secure software more easily. “Since raising a Series C round last year, we’ve delivered on our commitment to bring a single application for the entire DevOps lifecycle to market, and as a result have been able to reach over $1 billion in valuation,” said Sid Sijbrandij, CEO of GitLab. “With this latest funding round, we will continue to build out our management, planning, packaging, deployment, configuration, monitoring and security features for a more robust DevOps application.”




Open-source software forms the backbone of most modern applications. According to the 2018 Black Duck by Synopsys Open Source Security and Risk Analysis Report, 96 percent of the 1,100 commercial applications that the company audited for the survey contained open-source components, with each application containing an average of 257 open-source components. In addition, on average, 57 percent of an application’s code was open source. In terms of security, open source is no more or less secure than custom code, the report claimed. Because open source is widely used in both commercial and internal applications, it is attractive to attackers since they will have a “target-rich environment when vulnerabilities are disclosed,” the report stated. According to Rami Sass, co-founder and CEO of WhiteSource, a company that specializes in open-source security, thousands of open-source vulnerabilities are discovered annually. The most notorious one in the last year may be the vulnerability in the open-source web application framework Apache Struts 2, which was a contributing factor to the Equifax breach last year, and

has since been heavily reported in the media, mainly due to the massive scale of that breach. From a code perspective, 78 percent of the codebases looked at in the Black Duck by Synopsys report had at least one vulnerability caused by an open-source component, and the average number of vulnerabilities in a codebase was 64.

Knowing what components are in use
According to Tim Mackey, senior technical evangelist at Synopsys, organizations should have a governance rule in place to define what the acceptable



BY JENNA SARGENT

risks are when using open-source components. That rule should encompass everything from open-source license compliance to how the organization intends to manage and run systems that are dependent upon a certain component. “The reason that that upper-level governance needs to exist is that it then provides the developers with a template for what is and what is not an acceptable path forward and provides that template to the IT operations team,” he said. That governance rule also needs to specify associated timelines that are

expected by the organization for things such as patching activities or triage of a new issue. Once that policy is in place, it is the responsibility of the product owner to ensure that there is an appropriate review process, both at the time that a component is added and on an ongoing basis, Mackey explained. Once a component has been ingested, there’s a pretty good chance that it will stay in that application for a long time, he said. He advised that product owners ensure that within each development sprint, there is someone that is performing some amount of library hygiene.
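Such a governance rule is often encoded as a machine-checkable policy that gates components as they are ingested. A toy sketch of the idea (all policy fields, license identifiers and component names below are invented examples, not any specific product’s format):

```python
# Illustrative governance policy: which licenses are acceptable, how
# quickly a disclosed vulnerability must be patched, and any components
# that are banned outright. Every value here is a made-up example.
POLICY = {
    "allowed_licenses": {"MIT", "Apache-2.0", "BSD-3-Clause"},
    "max_days_to_patch": 30,
    "banned_components": {"abandoned-lib"},
}

def component_allowed(name: str, license_id: str, policy: dict) -> bool:
    """Apply the governance rule to a candidate open-source component."""
    return (
        name not in policy["banned_components"]
        and license_id in policy["allowed_licenses"]
    )

print(component_allowed("web-framework", "Apache-2.0", POLICY))  # → True
print(component_allowed("abandoned-lib", "MIT", POLICY))         # → False
```

A real policy would also encode the timelines Mackey mentions (patching and triage deadlines) and be enforced in the build pipeline, so that the check runs automatically rather than relying on each sprint’s library hygiene alone.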


Black Duck by Synopsys’ report found that, on average, identified vulnerabilities had been known for almost six years. Heartbleed, Logjam, Freak, Drown, and Poodle were among these vulnerabilities. In a report titled “Hearts Continue to Bleed: Heartbleed One Year Later” by security company Venafi Labs, it was revealed that as of April 2015, exactly one year after the disclosure of Heartbleed, 74 percent of Global 2000 companies still had public-facing systems containing that vulnerability. Heartbleed is a vulnerability in the OpenSSL cryptography library that allows an attacker to extract data that includes SSL/TLS keys from the target without ever being detected, Venafi Labs explained in their report. Similarly, attackers targeting Equifax were able to expose the information of hundreds of millions of users between May and July 2017, despite the fact that the Apache Foundation disclosed the Struts 2 vulnerability a few months earlier in March. “At that time, companies like Equifax should have checked for the presence of the vulnerability in their codebase,” WhiteSource wrote in a blog post following that breach. “Only that in order to check for the vulnerable version, Equifax and tens of thousands of other companies using the Struts 2 framework would have needed to know that they were using that component. They would have needed to know of its existence in their product.” The key thing that organizations must do is actually know what components are in use in their systems. Once they know that information, they can determine if any of those components are vulnerable and take action to remediate that, Sass from WhiteSource explained. “That’s something that requires attention to work, and to be honest, I don’t think it’s feasible for practically any size of organization to be able to manually discover or figure out all of their open-source inventory and then in that inventory, find the ones that are vulnerable.
I’ve never seen anyone do that successfully without using an automated



tool,” Sass said. Organizations should look for tools that will automate the visibility and transparency of open-source components that are being used. Those tools should also include alerting for components that do have vulnerabilities. Another challenge organizations face when managing open-source projects in their environment is that open-source works differently from proprietary software in that you can’t just go and get a patch from a source, explained Mackey from Synopsys. “You have to first figure out where you got your code from in the first place because that’s where your patch needs to come from.” For example, a company could get a patch of Apache Struts from where the code is actually being developed, or they could get a patch from a company that has its own distribution of it. Those patches may all be a little bit different, so if your organization does not know where it came from in the first place, you might be changing behavior by patching, Mackey explained. Ayal Tirosh, senior principal analyst at Gartner, recommends having a system in place that can alert you when there is an issue. A bill of materials is a great proactive approach, but in terms of governance, it only captures the issues identified at the time it was created. A few months down the road, when a new vulnerability is revealed, it would be helpful to have a mechanism in place that will alert the organization about these issues as they arise, Tirosh explained. “There are a class of tools — and there are some open-source solutions that do the same thing — that’ll essentially run through and fingerprint these different components and then provide you information from a variety of sources, usually at the very least public sources, sometimes proprietary databases, that’ll provide you that bill of materials saying these are the components that you have and these are the known vulnerabilities associated with those,” Tirosh explained.
Those tools might also alert you if there is a new version available as well,

Governance needs to be a team effort
Ultimately, the security officer is going to be the one held responsible if anything bad happens, but securing software should really be a team effort. Kathy Wang, director of security at GitLab, said that it’s important to empower developers to raise the bar on how they develop securely. But security teams are also responsible for ensuring that whatever product comes out of the development pipeline was created in a way that is consistent and secure, she explained. Often what happens in many organizations, however, is that security teams are siloed and don’t get brought into the development process until a project is almost complete and ready to be released. Unfortunately, the later security comes in, the more expensive it becomes to fix the problem, she explained. “What we’re trying to do is attack this from two different directions. One is to get security looped in,” Wang said. “That means the security team needs to be a very good collaborative player across all of engineering, so that there’s a good working relationship there, but the development team also needs to be empowered to do their own checks early in the process so that it’s not all being left for the end of the process.” Wang believes that the industry is beginning to move in that direction. “A lot of the very large enterprise companies are trying to move in that direction, but it takes longer for them to change their existing processes so that people can start following that.” She believes that cloud-native companies may be moving faster towards that goal because they are generally more agile and have more tools and processes in place to achieve that goal. Aqua Security’s chief solutions architect Tsvi Korren agrees that in the event of a major breach or security event, the security person is going to be the one that gets blamed. “The overall accountability lies with security,” he said.

though Tirosh cautions that a new version isn’t necessarily more secure. It might still suffer from the same issues as previous versions, or it might have a completely different issue, he explained.
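At their core, the fingerprinting tools Tirosh describes match a component inventory (a bill of materials) against a feed of known vulnerabilities. A toy illustration of that matching step (the component names and advisory IDs are made up, and real tools match on version ranges and fingerprints rather than exact versions):

```python
# Toy software-composition check: compare an application's component
# inventory against a list of known-vulnerability records.
# All component names and advisory IDs below are invented.
INVENTORY = {"web-framework": "2.3.1", "xml-parser": "1.0.4"}

KNOWN_VULNS = [
    {"component": "web-framework", "affected": {"2.3.0", "2.3.1"}, "advisory": "ADV-0001"},
    {"component": "image-lib", "affected": {"0.9.9"}, "advisory": "ADV-0002"},
]

def audit(inventory: dict, vulns: list) -> list:
    """Return advisories whose affected versions match the inventory."""
    return [
        v["advisory"]
        for v in vulns
        if inventory.get(v["component"]) in v["affected"]
    ]

print(audit(INVENTORY, KNOWN_VULNS))  # → ['ADV-0001']
```

Because the vulnerability feed changes over time, the value of such tooling comes from re-running this check continuously against fresh data, which is exactly the alerting mechanism Tirosh recommends.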

Not all vulnerabilities are a concern
Fortunately, organizations may be able to save some time and resources by prioritizing effectively. According to WhiteSource’s The State of Open Source Vulnerabilities Management report, 72 percent of the vulnerabilities found in the 2,000 Java applications they tested were deemed ineffective. According to the report, vulnerabilities are ineffective if the proprietary code does not make calls to the vulnerable functionality. “A vulnerable functionality does not necessarily make a project vulnerable, since the proprietary code may not be making calls to that functionality,”

WhiteSource’s report stated. By determining what the actual risk a vulnerability may pose, organizations can save security and development teams precious time. “If you’re able to map out how you’re using vulnerable components, you can save yourself a lot of work that’s not necessary and on the other hand make sure that you are protected in the places that are actual exposures,” said Sass. While you can’t completely ignore those vulnerabilities, your organization can prioritize its security efforts based on importance. Maya Kaczorowski, product manager at Google Cloud, said that the amount an organization should care about those components will depend on their specific infrastructure. “If it’s still possible for someone through another vulnerability to gain access to that vulnerability, then that



What is open-source software?
To be considered open-source, software must meet the following criteria, according to the Open Source Initiative. It is some of these characteristics, though, that make open-source software an attractive target to hackers:

• Code must be redistributed for free.

• The license cannot discriminate against any person or groups.

• The program needs to include source code and distribution must be allowed in source code or compiled form.

• The license cannot discriminate against specific fields of endeavor.

• The license needs to allow modifications and derived works and allow them to be distributed under the same license terms as the original software.

• The rights attached to the program need to apply to all that the program is redistributed to without those parties needing to execute an additional license.

• The license can only restrict source code from being distributed if it allows the distribution of patch files with the source code to modify it at build time. The license needs to explicitly permit distribution of software built from source code that has been changed, and it may require that derived works have a different name or version number from the original software.

• The rights attached to the program cannot depend on the program being part of a certain software distribution.

• The license cannot place restrictions on other software that is distributed alongside the licensed software.

• The license must be technology-neutral.

“could be a concern,” said Kaczorowski. “There’s been a number of attacks where we’ve seen attackers chain together multiple vulnerabilities to get access to something. And so having one vulnerability might not actually expose you to anything, but having two or three where together they can have a higher impact would be a concerning situation. So the best practice would still be to patch that or remove it from your infrastructure if you don’t need it.”

Treat all code the same
WhiteSource’s Sass believes that governance over open-source code should be as strict as governance over proprietary code. Mackey agrees that “they should treat any code that’s coming into their organization as if it was owned by themselves.” “I always like to put on the lens of the customers and the users,” said Kathy Wang, director of security at GitLab. In the end, if a user is affected by an attack, they aren’t going to care whether it was a result of a smaller project being integrated with higher or lower standards. “So in the end, I think we have to apply the same kind of standards across all source code bases regardless of whether it was an open-source or proprietary part of our source code base.” Gartner’s Tirosh believes that conceptually, neither one is riskier than the other. “Both of them can introduce serious issues. Both of them need to be assessed and that risk needs to be controlled in an intelligent fashion. If you look at the statistics and you find that the majority of the code is open-source code, then that alone would say that this demands some attention, at least equal to what they’re doing with the custom code. And for some organizations that number’s going to vary.” Tsvi Korren, chief solutions architect at Aqua Security, also agrees that the security standards need to be the same, regardless of whether an organization wrote its own code or is using some open-source component. “It doesn’t really matter if you write your own code or you get pre-compiled components or code snippets or just libraries, they really should all adhere to the same security practices,” he said. He added that the same goes for commercial software. Just because you’re getting code from a well-known company doesn’t mean that you don’t need to do vulnerability testing against it, he explained. All code coming in should be tested according to your organization’s standards, no matter how reputable a source it is coming from. If you have access to the source code, the code should go through source-code analysis. If you do not have access to that source code, it still needs to go through a review of the known vulnerabilities that are assigned to that

package, Korren explained. Once the application is built, it should also go through penetration testing, Korren said. “There are layers and layers of security. You can’t execute everything, everywhere. You don’t sometimes have access to the raw source code of everything like you have for your own software, but you do what you can do so that at the end of the development cycle, when you have an application that has some custom-made code, some open-source code, and some commercial packages, you come out the other side certifying that it does meet your security standards.”

Different strategies for security
There are several different strategies that security teams can take in terms of securing software with open-source components, Korren explained. One strategy is to have security teams do everything themselves. This means they will likely have to build a developer-like personality so they can review and understand source code. A lot of times organizations don’t have that, so they leverage things such as vulnerability assessment tools or they can embed security leads inside application teams, Korren explained. This isn’t without its challenges, especially when development and security teams don’t communicate. “The security people need to have a little bit of understanding of application design



and methodology, while you need that developer to understand what that security information is,” explained Korren. Korren believes that security is shifting left as well. “So while the accountability for security ultimately lies with security [teams], the execution of that is now getting closer and closer to development.” Compared to developers, IT operations will be subject to greater regulatory scrutiny, said Synopsys’s Mackey. “They’re going to, out of necessity, likely trust that the developers have done their job and that the vendors have done their job. But they need to be in a position where they can actually verify and vet that.” If there is a vulnerable component that has not been patched, IT operations is ultimately going to have to deal with whatever attacks are mounted against that component, as well as deal with the cleanup after a breach or other malicious activity, Mackey explained.

Small vs. large open-source projects
As mentioned in Black Duck by Synopsys’ report, larger projects are a more attractive target for attackers because they are present in so many different applications. But those larger projects may be more secure for that same reason. “I think inherently, large open-source projects tend to actually have fairly high levels of security in that there are lots of people who care about it and look for that,” said Google’s Kaczorowski. Larger projects tend to have a huge amount of uptake and use in real organizations, she explained, so there are a lot of eyes on a problem and more people finding vulnerabilities. Smaller projects might be full of vulnerabilities, but because they are small and not widely used, the chances of those vulnerabilities being exploited are equally small. According to Wang, besides the security risk, there is also a risk associated with continuity of support with smaller projects. From the security side, a smaller project may mean that you can do audits in a potentially less complex way because you’re looking at fewer lines of code. “But the decision about

whether to integrate a smaller project is more than just about the security posture, it’s about the continuity of being able to continue that integration over many years as well,” Wang said. Sass believes that if all things are equal and you have the option to choose between a popular, well-maintained project and a smaller, unmaintained project, you should go with the more popular one. “But very often there aren’t many alternatives and if you’re looking for a very specific type of functionality to put into your code and you want to save yourself the trouble of building from scratch, then your only options will be lesser-known projects,” he said. Some open-source projects even have bug bounty programs to help mitigate security risks, explained Kaczorowski.


Kaczorowski explained that you should probably also look to see if the project has a public disclosure policy. In other words, what action would they take if somebody comes to them with a newly discovered vulnerability? While not all large projects will have a good disclosure policy, it may be that larger projects with more maintainers will be able to respond effectively, while a project owned and maintained by only a few people might not have the manpower to put a process like that in place. According to Gartner’s Tirosh, the appearance of large vulnerabilities in open-source components is not scaring organizations away from using open source, but rather is making them put more of a focus on security. “I think it’s often times hard for those security issues to drive folks away from the value

For example, GitLab relies on external developers to contribute, and they also utilize a program called HackerOne, Wang explained. HackerOne is a program where hackers can submit vulnerabilities they have discovered. GitLab then triages those findings, validates them, and then rewards bounties for findings, she said. Smaller projects might not have the resources to take advantage of programs such as this. There are also human risks to using open-source projects. Even if you verify that a project has enough maintainers, you won’t necessarily be able to tell if those maintainers have two-factor authentication enabled on their accounts. “It might actually be very easy for someone to inject new code into that project,” said Kaczorowski.

of what they’re getting out of these open-source components,” he said. He explained that, in his experience, the number of conversations he’s had around tools that can identify components and alert organizations on issues has increased quite a bit in the last year, coinciding with some of the major issues that have occurred. He also attributed the increased interest to the fact that organizations are starting to get better in terms of identifying vulnerabilities. “They’re recognizing that the question isn’t just the internally developed stuff, but also open-source code. I would say that the awareness of the issue and the desire to address it in one way, shape, or form has risen and so the security part of the equation is now more prominent.”



BY CHRISTINA CARDOZA

Agile has been around for nearly two decades now, and just like most things in life that we come to accept, it is starting to be taken for granted. It seems that somewhere along the way, the Agile approach has lost its mojo. “A lot of teams have been going through the motions and keeping the rituals because ‘that’s what we’ve always done,’” said Dominic Price, head of R&D for software development company Atlassian. “Instead of looking at why they started doing something or how it adds value to keep using a practice or framework, the team has gone on auto-pilot.” The problem is that Agile should not just be something you do, it should be the driving force behind your business and why you exist, according to Zubin Irani, CEO of Agile consulting company cPrime. “When Agile becomes a goal, it becomes disconnected with the goals of the business, and that’s where it can start to feel stale,” he said. What tends to happen when you have a lack of purpose is the things that you once implemented to help you move faster and deliver better start to become a roadblock to progress and success, according to Price. For example, “the Kanban boards have become a tool of the micromanager, or the daily standup has become a place to vent frustrations instead of sharing and removing blockers,” he explained. Agile by definition means the ability to move quickly and easily, but it should mean more than that to a business. Agile involves removing delays and bottlenecks, being more efficient, getting

customer feedback fast, adapting to change, and constantly improving. If you are only focused on the velocity part of it, things will start to fall apart, according to Shannon Mason, vice president of product management for CA Agile Central, an Agile software provider.

“I am using the term Agile less and less in my conversations and I am having more conversations with organizations around evaluating the way they work, and how they are trying to change those ways of working to be more modern, responsive and adaptive,” she said.

But how can you tell if things are falling apart and Agile is becoming stale or stagnant within your business? Stop and ask yourself what your objectives are and how you are tracking the results of those objectives, according to Atlassian’s Price. “Do people frequently no-show to your weekly meetings? Do projects languish in your ticketing system? Are you delivering reports that nobody reads? Are you going through all the rituals, but not feeling any better or faster?” he asked.

Price experienced Agile going stale in one of his own teams when they became too busy to fill out reports on objectives and key results. “Months later, nobody asked what objectives we’d set, how we were tracking on our key results,” he explained. “So the next quarter, we just didn’t do them, and again, nobody noticed. It turns out that at that time, this initiative had become stale, and people were just going through the motions. If you take a look at your boards and meetings and project status updates, what insights are you gaining that lead to actions? If you’re just sharing words every week but never changing your behavior, you’re stagnating.”
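Price’s diagnostic questions can even be checked mechanically. As a rough sketch (the ticket fields, the tracker-agnostic shape, and the 30-day threshold below are all invented for illustration, not taken from any real tracker API), a team could scan its own backlog for items that have quietly stopped moving:

```python
from datetime import date

def stale_tickets(tickets, today, max_idle_days=30):
    """Return ids of tickets untouched for longer than max_idle_days.

    Each ticket is a dict with 'id' and 'last_updated' (a date);
    the field names are illustrative, not from a real tracker.
    """
    return [t["id"] for t in tickets
            if (today - t["last_updated"]).days > max_idle_days]

backlog = [
    {"id": "PROJ-1", "last_updated": date(2018, 8, 1)},   # idle all quarter
    {"id": "PROJ-2", "last_updated": date(2018, 10, 20)}, # recently active
]
print(stale_tickets(backlog, today=date(2018, 11, 1)))  # -> ['PROJ-1']
```

A report like this only matters if someone acts on it; if it comes back full every sprint and nobody reacts, that is exactly the going-through-the-motions stagnation described above.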

How to avoid stagnant Agile

As the software industry constantly evolves, so must the approaches we use to build software. With Agile still the underlying motivator, businesses have come up with a number of new and modern ways to approach delivering software:

Back to the basics: It is not a new concept, but it is good to go back to the basics to make sure you understand the values and tenets of Agile. For instance, Agile isn’t just about moving faster; it is about putting the customer at the center of all your decisions, choices and thinking, CA’s Mason explained. Remember to start small, get it right


The four values of the Agile Manifesto

David DeWolf, founder and CEO of 3Pillar Global, a software development company, explained that businesses find themselves stuck because they have lost sight of the true meaning of Agile and why the movement started in the first place. When we become too focused on doing Agile for the sake of doing Agile, we become too focused on executing practices that are defined as Agile practices without remembering why they are good Agile practices in the first place, DeWolf explained. “Without understanding the values of Agile, you won’t be able to deliver on the promise of flexibility and react to change as you hoped for,” he said.

To get back to its core meaning, he suggested revisiting the Agile Manifesto. While the four values have been around since 2001 and are widely known, they are often overlooked. “If we go back and look at the manifesto, it was supposed to be something that was at the center of everything we do, but it hasn’t been,” said Shannon Mason, vice president of product management for CA Agile Central, an Agile software provider. “What does it look like to truly have the user or different types of personas we serve at the center of our decisions when we are making product application decisions and choices versus just doing everything on a whim or using our gut to make decisions?”

As a reminder, the Agile Manifesto states: “We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
1. Individuals and interactions over processes and tools
2. Working software over comprehensive documentation
3. Customer collaboration over contract negotiation
4. Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more.” z

and leverage that small team as an example for the rest of the organization, according to David DeWolf, founder and CEO of 3Pillar Global, a software development company. “Be nimble. Start small. Instead of trying to turn the cruise ship, turn the little boat,” he said. Get a team up and running that truly understands the values, buys into the principles, and then put the practices into play, he explained.

In addition, establish clear objectives, according to Andrey Mihailenko, co-founder and co-CEO of Targetprocess, a project management software provider. “Put team members together in the same room, at the same time, and actually have an open conversation regularly about how well the business is doing,” he said.

Outcomes versus outputs: Measure the results or business values, not just the amount or speed of software you produced, according to Mason. To do this, focus on metrics and understand what those metrics mean in terms of the business. If you are truly looking at people over process, you’ll ask, “What does your customer satisfaction look like? How much feedback are you receiving? And are you building the right things?” according to cPrime’s Irani. Focus more on the leading indicators rather than the lagging indicators,


Mason explained. Focus on the flow of value to customers, not the projects delivered and deadlines. Leading indicators can sometimes help you predict lagging, long-term indicators, she added.

Business agility: Business agility or product agility also relates to outcomes versus outputs. In product agility, you are not focusing so much on the velocity; you are focusing more on what you are building, Irani explained. “If you build the right thing, you are more likely to be a successful business,” he said.

“People are very focused on velocity and output, and they are not focused on the quality of that or the business impact. We are not doing Agile to do Agile. We are doing Agile to solve a business problem or take advantage of business opportunities.”

Value stream management: In order to achieve business agility, you need everyone from the business stakeholders all the way down to the task owners, developers and testers to have a clear understanding of what they are working on and why. To successfully achieve that clear understanding, you need to have value stream management in place. Value stream management includes all the potential bottlenecks, the ability to identify them, and the ability to reduce any potential delays, according to Targetprocess’ Mihailenko. “It is about the ability to integrate everyone and provide visibility into how the value flows from an organization, from a requirement all the way to the customer,” he said. “The more flexible the system of tracking bottlenecks and visualizing values and dependencies between teams and businesses, the more you can see things better and manage things better. Turn your workflow into one well-organized system.”

Continuous delivery: Mark Curphey, vice president of strategy at CA Veracode, a software security company, sees organizations focusing more on continuous delivery than Agile, which allows them to focus on delivering software in small, executable chunks quickly. “The reality is it is just small batches and trying to move as much from left to right as possible in a way that is consistent and predictable — and all the things Agile gave us, but it is really taking it to the extreme. Small batches may be a couple hours’ worth of work or it could be a day’s worth of work,” he said.

Artificial intelligence: AI is the perfect technology to be applied to things like Agile and DevOps, according to Michael Wolf, managing director for modern delivery at KPMG, a professional advisory company. Today, you are building something that is highly data-driven and needs to be able to evolve, change and roll out in a systematic way quickly. This can be very daunting to do at human scale when you add in code reviews and testing, he explained. “Much like mobile was the industry motivator for web services and what we used to call SOA, I would say AI is the perfect storm motivator for DevOps, microservices, cloud and Agile,” Wolf said. “If you are going to respond to change fast enough, you have to be able to keep up.”

The Four L’s: Instead of doing formal retrospectives, Atlassian’s Price suggests taking note of what is loved, loathed, longed for and learned. “At the end of each quarter, I go through my schedule and projects and think about each of these items. From there, it’s clear what you should keep doing and what you should stop doing. The key is not letting the lists get out of hand. For example, I don’t get to pursue a ‘longed for’ until I’ve removed a ‘loathe,’” he said.

Agile is not a transformation program, but a constant evolution, Price explained. “Change and adaptation is the norm. Don’t make it a project. Make it a way of working,” he said. z

Think before you scale

Agile at the team level is pretty widely understood by now. One of the main issues Agile faces is when businesses find success within teams and then try to scale it before they are ready. Then you just have stale Agile at scale. Agile at the team level and Agile at scale are very different beasts, explained Andrey Mihailenko, co-founder and co-CEO of Targetprocess, a project management software provider.

Many of the problems Mihailenko sees come from businesses scaling before their culture is ready or before program management has been figured out, so they then have to descale and restart, which wastes time and money. “A lot of the time it is just the poor lack of visibility and communication for the business goals, and the lack of constant collaboration with business stakeholders,” said Mihailenko. “There is still a lot of work to be done around a clear understanding of what business agility is, and how to make companies more Agile and adopt the culture not just on a team level, but company-wide. How do you break down the silos and really have open collaboration, feedback and focus on constant delivery of value to the customers?”

One way to address this is to have good value stream or program management in place, Mihailenko explained. Without clear alignment, teams tend to operate as separate units, and make the mistake of pulling from the wrong backlog. With clearly defined program management in place, program managers can coordinate work across multiple teams, have regular time boxes for planning events and integration events across teams, and provide a regular cadence so everyone is working together around a common immediate value.

In addition, Mihailenko said there need to be all-hands meetings where businesses present their vision to developers and teams and have an open discussion of what is preventing them from meeting those objectives. “Are we clear on the vision? What are we actually trying to do? Do we have the culture of actually working together and collaborating across all aspects? Can we commit? Do we have a place where we can really visualize what we are working on and react when things are wrong? Have we really embraced this lean system thinking for the entire company? Does everyone really understand what that means for us?” Mihailenko asked. “A deep understanding of what it means to be Agile as a company, and a focus on constant feedback and collaboration needs to be within the business’ DNA,” he said.

The latest version of the Scaled Agile Framework (SAFe) was recently released and aims to address some of the scaling difficulties facing the enterprise. SAFe 4.6 includes five core competencies of the Lean Enterprise. “They are now the primary lens for understanding and implementing SAFe. Mastery of these five competencies enables enterprises to successfully navigate digital disruption and to effectively respond to volatile market conditions, changing customer needs, and emerging technologies,” Scaled Agile, provider of SAFe, wrote. The five core competencies are:
• Lean-Agile leadership, which describes how leaders can drive and sustain organizational change.
• Team and technical agility, which describes the skills, principles and practices needed to create and support high-performing teams.
• DevOps and Release on Demand, which describes the principles and practices of DevOps within the enterprise.
• Business solutions and lean systems, which explains how enterprises can develop large and complex software using a lean, Agile and flow-based model.
• Lean portfolio management, which explains how an enterprise can implement lean approaches to its strategy and investment funding. z
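Mihailenko’s point about tracking bottlenecks and the flow of value can be made concrete with a small sketch. The stage names and numbers below are invented; the idea, a rough Little’s-Law-style estimate of how many weeks of work sit queued at each stage, is the generic one, not any particular vendor’s feature:

```python
def find_bottleneck(stages):
    """Pick the stage where work piles up fastest.

    stages maps stage name -> (items_waiting, items_finished_per_week).
    waiting / throughput approximates weeks of queued work per stage;
    the stage with the most queued weeks is the bottleneck.
    """
    def weeks_queued(stage):
        waiting, per_week = stages[stage]
        return waiting / per_week
    return max(stages, key=weeks_queued)

value_stream = {
    "analysis": (4, 8),   # 0.5 weeks queued
    "develop":  (12, 6),  # 2.0 weeks queued
    "test":     (20, 4),  # 5.0 weeks queued
    "deploy":   (3, 10),  # 0.3 weeks queued
}
print(find_bottleneck(value_stream))  # -> test
```

Visualizing these ratios per stage, rather than per team, is one way to give stakeholders the end-to-end visibility described above.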


The DevOps connection

Another reason the Agile movement can dry out within a business is that it is treated as a separate entity from the other approaches the business is implementing. In a modern delivery world, things like Agile, DevOps, microservices, cloud, product management, design thinking and lean finance are all connected, according to Michael Wolf, managing director for modern delivery at KPMG. “People get frustrated with Agile because they are not seeing the big picture,” he said. “If these things are disconnected, then you end up becoming disenfranchised.”

According to cPrime’s CEO Zubin Irani, DevOps is the missing piece in most Agile initiatives. “A lot of Agile revolves around organizing, how you move Agile,” he said. “But to actually have the tools and automation to take advantage of these great cloud technologies, you really have to rebuild and rethink how you get code from a developer’s computer into a production environment, and how you support that code. That is where DevOps comes in.”

DevOps also enables the cross-functional teams that Agile builds success off of. Within Agile, you bring the developer and tester together through roles like the Scrum Master and product owner, but when you add DevOps into the equation you add the operations person to the team as well, Irani explained.

The problem, however, is that DevOps is often misunderstood. Because it is a movement or concept, there is less prescription around it than Agile, and that causes a lot of people to interpret it in a lot of different ways, according to Mark Curphey, vice president of strategy at CA Veracode, a software security company. With Agile, by contrast, the manifesto provides clear guidelines and ideas on what the principles and values of Agile are, and how to get started, Curphey explained.

A recent report from CA Technologies found 75 percent of respondents believe Agile and DevOps approaches drive business success when implemented together; however, only a small proportion have been able to reach true DevOps agility, because of culture, skills, investment and leadership. To improve effectiveness, CA explained, the culture of the organization needs to encourage and reward collaboration, more support from management at all levels needs to be added, and additional training and resources need to be provided to help improve Agile and DevOps together.

“The pressure is on to make all parts of an organization as flexible as possible when responding to changing customer demands, user expectations, regulatory changes and — most important of all — market opportunities,” said Ayman Sayed, president and chief product officer of CA Technologies. “Business leaders need to be aggressive and intentional about driving adoption of Agile and DevOps within their organizations. The success of their business depends on it.” z


Buyers Guide

Embedded Analytics: Easier to create, more accurate than ever

BY IAN C. SCHAFER

This past year has seen the technology of embedded analytics — the inclusion of data ingestion, analysis and visualization capabilities within business applications — begin to leverage other growing technologies to improve the accuracy and scope of reporting, as well as make it easier for developers, and even non-developers, to start including such capabilities in their software.

According to Forrester’s Deep Learning: The Start Of An AI Revolution For Customer Insights Professionals report released in September, advancements in deep learning have improved the accuracy of speech, text and vision data ingestion, allowing platforms that implement these technologies to “extract intent, topics, entities, and relationships.” All of this is important data for companies that would like to know what their customers’ needs are, and adjust properly.

In last year’s Predictions 2018: The Honeymoon for AI is Over report, Forrester determined that:
• A quarter of firms will supplement point-and-click analytics with conversational UIs
• AI will make decisions and provide real-time instructions at 20 percent of firms
• AI will erase the boundaries between structured and unstructured data-based insights
And Forrester encouraged developers of analytics software to start experimenting.
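As a cartoon of what “extracting intent and entities” means in practice (real platforms use deep learning models, not keyword tables; every name and rule below is invented for illustration), consider:

```python
INTENTS = {  # invented keyword table standing in for a trained model
    "cancel": "cancel_account",
    "refund": "request_refund",
    "upgrade": "upgrade_plan",
}
PRODUCTS = {"basic", "pro", "enterprise"}  # invented entity vocabulary

def parse_utterance(text):
    """Return (intent, entities) using naive keyword spotting."""
    words = text.lower().split()
    intent = next((INTENTS[w] for w in words if w in INTENTS), "unknown")
    entities = [w for w in words if w in PRODUCTS]
    return intent, entities

print(parse_utterance("Please upgrade me to the pro plan"))
# -> ('upgrade_plan', ['pro'])
```

The deep-learning advances Forrester describes replace these brittle keyword tables with models that generalize to phrasings nobody enumerated in advance.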

But before businesses start to apply advanced features to their embedded analytics solutions, Brian Brinkmann, vice president of product marketing at Logi Analytics, explained they need to


Five ways to differentiate your analytic solution

Business Intelligence and analytics company Logi Analytics echoed Forrester’s AI and machine learning findings in its 2018 State of Embedded Analytics report. The report determined that there were five places where embedded analytics companies should focus their attention in order to remain competitive and provide adequate features to their customer base. Those are:
• AI implementation
• Predictive analysis
• Natural language generation
• Workflow management
• Database writeback

“We do have people ask us what to do, and we turn that question around and ask them: What are the business challenges that their customers would have that they’re trying to solve for, and does the technology help you get to an answer?” explained Brian Brinkmann, vice president of product marketing at Logi Analytics. “We’re all kind of technologists at heart, so we like to tinker with and use the latest gadgets and gizmos, but I think when you step back and say ‘How am I adding real value to my application for the customer?’ That’s where you need to start.”

But once that’s been hammered out, Brinkmann said, there is a good place to start expanding. “I think the one that people will go to first, and probably should go to first, is predictive analytics, and that’s because they have a set of historical data, and when you can unleash the machine learning algorithms on that data, it is highly likely that there will be business problems that people can solve right off the bat,” Brinkmann said. “And there are problems everyone has regardless of the industry you’re in. You know, people have customers that churn, people have machines that break down, people have payments that aren’t paid. There are things that they can do that will have a real impact on the business.” z —Ian C. Schafer
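Brinkmann’s churn example can be sketched in a few lines. This is a toy stand-in for a real machine learning model: the fields, thresholds and weights below are invented, where a genuine system would learn them from historical data:

```python
def churn_risk(customer):
    """Toy churn score (0-10); fields and weights are invented
    for illustration, not learned from real data."""
    score = 0
    if customer["days_since_login"] > 30:
        score += 4          # gone quiet
    if customer["open_tickets"] > 2:
        score += 3          # unresolved problems
    if not customer["paid_last_invoice"]:
        score += 3          # payment trouble
    return score

print(churn_risk({"days_since_login": 45,
                  "open_tickets": 5,
                  "paid_last_invoice": False}))  # -> 10, high risk
```

The point of applying machine learning to the historical data is precisely to discover which signals and weights matter, rather than hand-picking them as done here.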

How Logi Analytics can help you gain insights into how your apps are being used

Brian Brinkmann, vice president of product marketing at Logi Analytics: At Logi Analytics, we know delivering compelling applications with analytics at their core has never been more crucial — or more complex. Logi’s analytics development platform is built for product managers and developers who need to quickly build and embed dashboards, reports, and data visualizations into their applications. Logi is faster than coding yourself, gives you total control over the brand and user experience, and is powerful enough to support complex scaling and security requirements.

Logi has the only developer-grade platform for embedded analytics, specifically designed to make mission-critical applications smarter. Only Logi gives developers complete control over the look, feel, and functionality of the analytics experience, so you can deliver a solution that empowers and engages end users while keeping the application uniquely your own. Logi also works with your current tech stack, leveraging your investments in data, security, and server infrastructure to support sustainable, streamlined product delivery life cycles.

Modern embedded analytics keeps users in your application and sets your product apart from the competition. For over 17 years, we’ve helped over 1,900 application teams embed sophisticated analytics in their software products. Logi is rated the #1 embedded analytics platform by Dresner Advisory Services. z
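Mechanically, embedding often comes down to the host application handing its users a dashboard URL that is signed so it cannot be tampered with. The sketch below is generic and hypothetical; it assumes nothing about Logi’s actual API, and the endpoint and parameter names are made up:

```python
import hashlib
import hmac

def embed_url(base, dashboard_id, user, secret):
    """Build a tamper-evident dashboard URL.

    The '/embed' path and query parameters are invented; real embedded
    analytics products each define their own signing scheme.
    """
    payload = f"dashboard={dashboard_id}&user={user}"
    sig = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return f"{base}/embed?{payload}&sig={sig}"

url = embed_url("https://analytics.example.com", "sales-q4", "alice",
                b"app-secret")
print(url)
```

The analytics server recomputes the HMAC over the query string and rejects any URL whose signature does not match, so end users cannot swap in another user’s dashboard.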


make sure they’re ready at a more basic level first. “What we have seen is when people are building applications, they have a set of requirements, they go build out those requirements, and when they roll it out to their customers, their customers have an additional set of requirements that maybe they haven’t thought of,” Brinkmann said. “The application’s end users say ‘I want to build something,’ or ‘I want to be able to enhance the application.’”

According to Brinkmann, finding ways to support and bring more people into the development process can be invaluable. “If I have to build an application 100 percent, I’ll probably never get there, but if I can build out an application, say, 70 percent and let the last 30 percent be done by other folks, or citizen developers who probably have a much better grip on [customer] needs and requirements, I’m not only going to get there much faster, I’m going to get there with a much better answer and solution,” Brinkmann said.

Forrester believes AI-based predictions are a good answer for how to begin to include more people, end users or people outside of a core development team, in the creation process, but Brinkmann explained there are other options. “I think conversational AI is a way to do it if people aren’t familiar with building pieces out and you can speak to something, and they can satisfy and help build additional pieces. That’s one way that they can answer some of those questions,” Brinkmann said.

Other options offer more traditional approaches, such as simplified experiences and wizards for surfacing reports, data and predictive analytics. Because of the amount of computing power required and the lack of familiarity with the capabilities of AI, Brinkmann said that Logi Analytics has seen more requests for the more traditional options. “If you’re a massive organization, you might be able to afford to go out to a third party that runs a large, cloud-based AI like IBM’s Watson, and that’s great,” Brinkmann said. “For the 99 percent of everyone else, I’m not sure that’s completely applicable.” z



A guide to embedded analytics tools

• Datapine: Datapine aims to make exploring, visualizing and communicating information stored in multiple databases, external applications and any number of spreadsheets simpler for developers and non-developers at businesses of various sizes by providing connectors to a wide array of relational databases, as well as Google Analytics, Google Spreadsheets, SAP ERP/BW and others.

• Izenda: Izenda wants to help businesses avoid wasting technical resources on hard-coded, individual analytics requests by organizing data into visually appealing reports and dashboards that can be integrated directly at the code level into your application or viewed in a separate portal, allowing users to create, manage and maintain powerful analytics.

• Qlik: Qlik provides an analytics development platform built around its proprietary Associative engine. The company has developed in-house, open-source libraries for building, extending and deploying fast, interactive, data-driven applications delivered at massive scale, within any cloud environment. This allows for the ingestion and combination of disparate resources regardless of size, automatic indexing of data relationships, fast processing and calculations, and application interactivity through state management.

• Sisense: Sisense is trying to extend the talent pool for business users that need the ability to rapidly discover important insights and take informed and data-driven action by making every business user data-fluent. Its In-Chip analytics technology provides the fastest analytical processing power on the market, allowing for instant data blending, easy analysis, cloud or hybrid deployment, securely embedded analytics and machine learning insights.

FEATURED PROVIDER
• Logi Analytics: Logi Analytics is a developer-grade analytics platform that helps application teams embed dashboards and reports in their software products. Dresner Advisory Services rated Logi the #1 embedded analytics platform on the market. Logi understands that delivering compelling applications with analytics at their core has never been more crucial — or more complex. That’s why over 1,900 mission-critical applications have trusted Logi’s analytics development platform to deliver sophisticated analytics and power their businesses.

• Revulytics: Revulytics gives software development organizations the analytics they need for data-driven product development and license compliance programs. Whether you are in sales, marketing, product management or software development, Revulytics provides analytics that can drive measurable results. For compliance analytics, Revulytics can detect and identify users of unlicensed software. For usage analytics, the company will track downloads, installations and feature adoption.

• TIBCO: TIBCO Software is a global leader in integration and analytics software. The company’s data solution covers data management, advanced analytics and data visualization. Jaspersoft is the company’s embedded business intelligence, analytics and reporting software for integrating reports, dashboards and analytics into any app. With TIBCO’s embedded BI capabilities, users can integrate interactive reports, embedded dashboards and mashboards, self-service reports, data exploration features, data virtualization, data integration, mobile-ready reports and dashboards, and multi-tenant BI. Benefits of the solution include increased customer satisfaction, the ability to free up development resources, and faster and smarter apps.

• Tableau: Tableau is a business intelligence and analytics company with a focus on getting people to quickly analyze, visualize, and share data. Its embedded analytics solution gives users access to powerful analytics within their app or product. The solution offers the ability to deliver flexible visual analytics to customers quickly and easily, enables developers to focus on building a great solution, and provides customer support, product training and roadmap collaboration so users can integrate, deploy and get to market quickly.

• Zoho: Zoho provides an embedded analytics and white-label business intelligence solution for simplifying complex business processes. Zoho includes dashboards and visualizations for gaining actionable and valuable insights to transform your business. It includes the ability to discover hidden insights easily, connect to any data source, visually analyze data with a drag-and-drop interface, share and collaborate online securely, and combine data from a range of sources for a cross-functional report. z



Guest View BY BRIAN JOHNSON

Serverless: A bad rerun

Brian Johnson is CEO of DivvyCloud.

In today’s fast-moving world, DevOps teams are struggling to solve the same problem: What is the best way to build, deploy, and maintain applications in a cloud-native world? This problem has spawned a heated debate between the serverless and container communities. While I usually am a firm believer that the answer is somewhere in the middle, I have seen this play out before and I know how it ends. Spoiler alert: serverless will fade into oblivion, just like its predecessors.

Many services, such as Heroku and Google App Engine, have been trying to abstract away the dreaded server for a long time. While more configurable and flexible than their predecessors, serverless platforms continue to suffer from many of the same problems. Scaling a black-box environment such as AWS Lambda or Google Cloud Functions can be a serious challenge, often resulting in more work than it is worth.

So, what exactly is serverless? Serverless is a cloud-native framework that provides its users with a way to execute code in response to any number of available events, without requiring the user to spin up a traditional server or use a container orchestration engine. Cloud providers such as AWS offer fairly mature toolchains for deploying and triggering lambda methods. Using these tools, a developer can essentially duct tape together a set of services that will emulate all of the functionality that would normally be available in a server or container model. There are numerous challenges in scaling this approach, some of which I have listed below.

Complexity: As with most things in the development world, abstraction leads to complexity. Serverless is no different. Take a simple Python-based RESTful web service, for example.
To deploy this service to AWS Lambda, first you must upload your code to an S3 bucket, build IAM roles and permissions to allow access to that S3 bucket in a secure fashion, create an API gateway, tie the API gateway to your lambda method using a Swagger API model, and finally, associate proper IAM roles with your lambda method. Any one of the above stages comes with a staggering number of configuration options. Your simple REST service has been broken up into numerous complex and infinitely configurable components. No fun to maintain and secure.

Scalability: As an application grows in complexity, it will eventually begin to hit bottlenecks at scale. To solve the scaling issues, developers often need to dive deep into the internals of the environment to understand why there is a bottleneck. Unfortunately, cloud-native serverless frameworks provide no great way to understand what is going on under the hood. This lack of visibility can lead a developer down a long and winding path trying to guess why the application isn’t performing as expected.

Overrun by functions: Serverless is built to allow users to easily deploy single functions to be triggered by specific events. While this can be useful for applications that only handle one method, it is not very useful for real-world applications with hundreds or thousands of endpoints. This highly fragmented approach makes it very challenging to track, deploy, maintain, and debug your highly distributed application.

Vendor lock-in: For enterprises of today, agility is paramount. Leveraging multiple providers gives the enterprise the ultimate in flexibility and access to best-in-class services. By building your application on top of serverless technology, your code must directly integrate with the serverless platform. As your application — or should I say loose grouping of methods — grows, it becomes harder and harder to maintain a provider-agnostic code base.

Testing: Testing your code inside of a serverless framework is incredibly time-consuming. The only true way to test is to upload your code and run it inside the serverless framework. This can lead to hours of additional testing and debugging. So, until there is an IDE that can detect and solve logic mistakes, you’re probably in for a long night (or weekend).
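One partial mitigation worth noting: a Lambda method is ultimately just a function, so its pure logic can at least be exercised locally before the slow upload-and-run cycle begins. The sketch below follows the API Gateway proxy event shape; the greeting endpoint itself is invented for illustration:

```python
import json

def handler(event, context):
    """Lambda-style entry point for a toy REST endpoint.

    Reads a query parameter from an API Gateway proxy event and
    returns the proxy-style response dict.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"statusCode": 200,
            "body": json.dumps({"greeting": f"hello, {name}"})}

# Invoked locally, no cloud required.
resp = handler({"queryStringParameters": {"name": "dev"}}, None)
print(resp["statusCode"], resp["body"])
```

What this cannot catch, which is the column’s point, is the behavior of the IAM, gateway and trigger configuration wrapped around the function; that only shows up once everything is deployed.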
In conclusion, serverless frameworks continue to chase the ever-elusive goal of allowing engineers to build applications without having to worry about any type of pesky computing components. Serverless is a wonderful option for anyone who enjoys slamming their head into a keyboard, slowly, over many hours, while testing their 200 individually packaged methods. While this sounds like fun, I am going to stick with my predictable and performant container that can run anywhere, including on my local system. z




Analyst View BY GEORGE SPAFFORD

Overcome the people problem in DevOps

Through 2023, 90 percent of DevOps initiatives will fail to fully meet expectations due to the limitations of leadership approaches, not technical reasons. The value in adopting DevOps practices is substantial, but for initiatives to be successful, organizations must implement them appropriately. The most common cause of DevOps failures is people — not process. Many organizations invest in DevOps tools without addressing organizational change or the value the tools will provide to the larger enterprise. Here I identify the top five reasons for DevOps failures, and how infrastructure and operations (I&O) leaders can actively avoid them.

DevOps is not grounded in customer value

Organizations often launch DevOps efforts with insufficient consideration of business outcomes. I&O leaders need to ensure their staffs and customers connect with the term "DevOps," and the value it will bring, prior to introducing the initiative. As such, organizations should use marketing to identify, anticipate and deliver the value of DevOps in a manner that makes business sense. I&O leaders must seek to refine their understanding of customer value on a continuous basis to evolve capabilities and further enable organizational change.

Organizational change is not properly managed

In the Gartner 2017 Enterprise DevOps Survey, 88 percent of respondents said team culture was among the top three people-related attributes with the greatest impact on their organization's ability to scale DevOps. However, organizations overlook the importance of getting their staffs on board with the upcoming change and instead strictly focus efforts on DevOps tools. Since tools are not the solution to a cultural problem, organizations should identify candidates with the right attitude for adopting DevOps practices. Individuals who demonstrate the core values of teamwork, accountability and lifelong learning will be strong DevOps players.

George Spafford is a research director with Gartner, Inc.

Lack of team collaboration

Successful DevOps efforts require collaboration with all stakeholders. DevOps efforts, more often than not, are limited to I&O. Organizations cannot improve their time to value through uncoordinated groups or those focusing exclusively on I&O. It is thus necessary to break down barriers and forge a team-like atmosphere. Teams must work together, rather than in uncoordinated silos, to optimize work.

Trying to do too much too quickly


It is important to realize that a “big bang” approach — launching DevOps in a single step — comes with a huge risk of failure. DevOps involves too many variables for this method to be successful in a large IT organization. To combat this, an incremental, iterative approach to DevOps will enable the organization to focus on continual improvements and group collaboration. Starting with a politically friendly group to socialize the value of DevOps and reinforce the credibility of the initiative is the way to go.

Unrealistic expectations of DevOps among employees

A disconnect exists in many organizations between expectations for DevOps and what it can actually deliver. Manage expectations by agreeing on objectives and metrics. Use marketing to identify, anticipate and satisfy customer value in an ongoing manner. Expectation management and marketing are continuous, not a one-time affair. z





Industry Watch BY DAVID RUBINSTEIN

The developer transformation

David Rubinstein is editor-in-chief of SD Times.

Companies today are told that if they hope to remain competitive, they'll have to embark on a digital transformation. So as tools emerge to help organizations make that journey, one point is irrefutable: Organizations need to focus on changing how they leverage their developer talent to truly remain competitive.

In a study called "The Developer Coefficient," put together by digital payments company Stripe, CEOs said the lack of quality developer talent is a potential threat to their business. So is the misuse of developers they do have, which has them using most of their time on maintenance and little on creating actual business value.

"The main point is around inefficiency and productivity," said Richard Alfonsi, head of global revenue and growth for Stripe. "Developers spend way more time than they want to on menial tasks, thinking about overall toil, tech debt, fixing bad code, etc. Seventeen hours, a third of their work week, is spent doing these kinds of things [according to the survey]. It's not a good way to keep developers excited and energized, and motivated. Being able to do all you can as a business to make them more productive, to be able to push down the inefficient part of their job and enable them to work on the pieces that are truly innovative and are probably much more meaningful to the employer, that's a big deal, and the main finding of the survey."

Developers (52 percent) polled by Stripe said maintenance of legacy systems and technical debt are the biggest hindrances to productivity, while leadership's prioritization of projects and tasks was a hindrance to 45 percent of respondents.
As for morale, 81 percent of developers said work overload was the big morale-killer, followed by changing priorities resulting in discarded code or wasted time, and not having enough time to fix poor-quality code (both at 79 percent).

Tools aimed at helping developers be more productive are exploding into the market, though some of them — the low-code/no-code sector — empower business people to create some of their own apps, which frees up developers to work on the more innovative projects that drive revenue in organizations. But first you have to find the developer talent.


Countless articles have been written about the skills gap here in the United States, leaving companies desperate to find skilled, quality software engineers. Where, in the past, limited access to capital held organizations back from remaining competitive and even growing, today CEOs say it is limited access to developers that is holding them back. In fact, of the executives surveyed by Stripe, 55 percent cited access to talent as the biggest constraint on company growth, followed by industry regulations (54 percent), access to software engineers (53 percent) and access to capital (52 percent).

They also expressed the impact developers have on the company's efforts. The biggest impact among the executives surveyed (71 percent) was in the area of bringing products to market faster, followed by increasing sales (70 percent) and differentiating products and services (69 percent). Further, 96 percent of the executives surveyed said increasing the productivity of developers is a high or medium priority, to be able to keep pace with new trends and market opportunities.

When asked which trends were having the most impact on their companies, there were splits between developers and C-suite executives. For example, 34 percent of the executives saw AI as having the greatest impact, while only 28 percent of developers believed that was true. Meanwhile, 22 percent of developers saw API-based services as having the greatest impact, while only 15 percent of executives did.

Finally, of executives not confident that their companies have the resources to keep up with the trends, 44 percent said their companies are too slow to react to trends, 42 percent said they don't have enough skilled employees, and 36 percent said leadership doesn't prioritize technology, or is too focused on short-term gains to prioritize long-term growth.

Why does all this matter? Because, according to Stripe's calculations, bad code costs companies $85 billion annually.
But, the report states, if used effectively, developers have the potential to raise global GDP by $3 trillion (with a T) over the next 10 years. Clearly, it’s time to look at what you have your developers working on, and to free them from menial tasks to create products that drive business value. z

