DECEMBER 2019 • VOL. 2, ISSUE 30 • $9.95 • www.sdtimes.com
VOLUME 2, ISSUE 30 • DECEMBER 2019
FEATURES
Waving the flag for feature experimentation
Looking back on 2019
CCPA set to take effect at the start of 2020
DevOps Summit takeaway: Culture can be a competitive advantage, or it can be an inhibitor
How no-code disrupts mobile development
No-code mobile app development: Do more with less
COLUMNS
GUEST VIEW by Hans Otharsson: Best of friends, greatest of enemies
ANALYST VIEW by Arnal Dayaratna: Evaluating the ethics of software
INDUSTRY WATCH by David Rubinstein: What follows CD? Progressive Delivery
Angular powers business apps in the enterprise *But React and Vue.js are catching up
Legacy app modernization is not a slash-and-burn process
Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2019 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at firstname.lastname@example.org.
Instantly Search Terabytes
www.sdtimes.com EDITORIAL EDITOR-IN-CHIEF David Rubinstein email@example.com NEWS EDITOR Christina Cardoza firstname.lastname@example.org
dtSearch’s document filters support:
• popular file types
• emails with multilevel attachments
• a wide variety of databases
• web data
SOCIAL MEDIA AND ONLINE EDITORS Jenna Sargent email@example.com Jakub Lewkowicz firstname.lastname@example.org ART DIRECTOR Mara Leonardi email@example.com CONTRIBUTING WRITERS Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz
Over 25 search options including:
• efficient multithreaded search
• easy multicolor hit-highlighting
• forensics options like credit card search
Developers:
• SDKs for Windows, Linux, macOS
• Cross-platform APIs for C++, Java and .NET with .NET Standard / .NET Core
• FAQs on faceted search, granular data classification, Azure, AWS and more
CONTRIBUTING ANALYSTS Enderle Group, Gartner, IDC, Intellyx, Ovum
ADVERTISING SALES PUBLISHER David Lyman 978-465-2351 firstname.lastname@example.org SALES MANAGER Jon Sawyer email@example.com
CUSTOMER SERVICE SUBSCRIPTIONS firstname.lastname@example.org ADVERTISING TRAFFIC Mara Leonardi email@example.com LIST SERVICES Jourdan Pedone firstname.lastname@example.org
Visit dtSearch.com for
• hundreds of reviews and case studies
• fully-functional enterprise and developer evaluations
The Smart Choice for Text Retrieval® since 1991
REPRINTS email@example.com ACCOUNTING firstname.lastname@example.org
PRESIDENT & CEO David Lyman CHIEF OPERATING OFFICER David Rubinstein
D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803 www.d2emerge.com
NEWS WATCH
Google brings TensorFlow to the enterprise
Google is bringing its open-source platform for machine learning to the enterprise. According to the company, enterprises that want to use TensorFlow struggle because of the higher demands and expectations they have for running ML workloads. TensorFlow Enterprise will feature enterprise-grade support, cloud-scale performance and managed services. “Together, these services and products can accelerate your software development and ensure the reliability of your AI applications,” Craig Wiley, director of product management for the cloud AI platform at Google, wrote in a post.
Dart 2.6 released with dart2native
Google has announced the latest release of its programming language Dart. Version 2.6 comes with dart2native, an extension of its existing compiler that can compile Dart programs to self-contained executables containing ahead-of-time-compiled machine code. According to the company, this will enable developers to create command-line tools for macOS, Windows or Linux using Dart; ahead-of-time compilation was previously available only for iOS and Android mobile devices.
JetBrains debuts writing checker for developers
JetBrains is introducing Grazie, a spell, grammar and style checker designed for IntelliJ IDEA. According to the company, programming is not all about code. Code contains string literals, comments, Javadocs, commit messages and more that require natural language understanding — so an IDE should do more than check the code, JetBrains explained. Grazie plugs into IntelliJ IDEA to provide intelligent style and grammar checks. The checks are done locally after a model is downloaded and ready to go. The tool includes English by default, but enables users to add 15 other language models.

ZenHub makes following project roadmaps easier
Project management tool provider ZenHub is making it easier for teams to follow their roadmaps with the launch of a new solution called ZenHub Roadmaps. The tool is a road-mapping solution that integrates with GitHub. According to Aaron Upright, co-founder and head of strategic accounts at ZenHub, one of the reasons roadmaps fail is that they don’t get updated as the project progresses. Because ZenHub Roadmaps integrates with GitHub, it will always be up to date and won’t require manual intervention. In addition, the tool will help developers feel more connected to the code they’re working on, since they can understand how their work fits into the objectives of the company as a whole.
Red Hat’s Quarkus: A cloud-native Java stack Red Hat has announced the release of Quarkus 1.0, a Kubernetes-native Java stack built for containers and cloud deployments. According to the company, as application development continues to evolve, Quarkus will work to bring Java into the future and get it ready for serverless, cloud and Kubernetes environments.
“Quarkus represents a fundamental shift in modern app dev and is designed to address some of the shortcomings that Java faces with regard to cloud-native application architectures like containers, microservices and serverless,” the Quarkus team wrote in a blog post.
Sumo Logic acquires ASOC provider JASK
Continuous intelligence company Sumo Logic announced that it has acquired JASK Labs, a provider of cloud-native autonomous security operations center (ASOC) software. Sumo Logic plans to expand its cloud-native security intelligence solution to supersede legacy SIEM technology; ninety-three percent of security professionals think traditional SIEM solutions are ineffective for the cloud, according to the company. With the acquisition, JASK Labs will integrate ASOC with Sumo Logic’s cloud SIEM to transform security alerts into actionable insights, allowing analysts to identify and respond to incidents faster and more efficiently, the company said.
Microsoft launches Project Cortex for data insight
Last month at Microsoft Ignite, Microsoft announced the first new service in Microsoft 365 since it released Microsoft Teams in 2017. Project Cortex is a solution that will allow business users to gain valuable insights from their data in ways they previously couldn’t. Over the years, Microsoft has created a number of different ways for businesses to access data, but one of the biggest challenges companies face is figuring out how to make sense of all that data, Doug Hemminger, director of Microsoft services at SPR, explained to SD Times. The solution leverages AI to make sense of content shared across teams and systems. It can recognize content types, extract important information, and automatically organize content into shared topics.
Ionic’s Capacitor can now be added to Angular projects
Ionic has announced that its open-source Capacitor project now has Angular Schematics, which means it can be added to Angular projects. Capacitor is a bridge for cross-platform web apps. According to Ionic, “Capacitor provides a consistent, web-focused set of APIs that enable an app to stay as close to web standards as possible, while accessing rich native device features on platforms that support them.”
Netflix open sources polyglot notebook
Netflix has announced that it is open sourcing Polynote, which, as the name implies, is a polyglot notebook. Polynote provides Scala support, Apache Spark integration, as-you-type autocomplete, and multi-language interoperability among Scala, Python, and SQL. According to Netflix, Polynote will allow data scientists to integrate its JVM-based machine learning platform with Python’s ecosystem of machine learning and visualization libraries. Polynote was created out of an internal frustration with existing notebooks: Netflix developers felt that current notebooks lacked Scala support and offered a poor code-editing experience. And in the machine learning community there is a need for polyglot support, since machine learning researchers often work with multiple programming languages, the company explained.

Angular 9’s Ivy will be available for all applications
The Angular team revealed its plans for the upcoming release of Angular 9 at its annual conference, AngularConnect 2019. According to the team, a key goal of Angular 9 is to make the Ivy compiler available for all apps. The main benefit of Ivy is that it can significantly reduce the size of both small and large applications. The team also laid out the roadmap for Ivy going forward. There will be an opt-out option in version 9 that remains available through Angular 10. Starting with version 10, libraries will ship Ivy code, and in version 11, ngcc, which is a compatibility compiler that makes code Ivy-compliant, will be used as a backup only.

OMG to develop new AI standards
The international standards organization Object Management Group (OMG) announced that it has begun working on defining artificial intelligence standards. These standards will be designed to help “accelerate and improve the creation of useful AI applications,” OMG explained. “When a technology area reaches a certain degree of maturity, standards enable innovation, rather than impede it,” said Richard Soley, chairman and chief executive officer of OMG. “With defined AI standards, organizations won’t have to constantly worry about the plumbing of a system or reinvent platform techniques and tools.”

People on the move
n Prashanth Chandrasekar has been named the new CEO of the online developer community Stack Overflow. Chandrasekar is replacing Stack Overflow’s co-founder Joel Spolsky, who will remain chairman of the company’s board of directors. “Prashanth is a phenomenal leader with an exceptional track record of driving strategic growth in the global cloud and services industry,” said Spolsky. “His passion for our community and products, along with a deep understanding of developers, make him a tremendous asset for Stack Overflow as the company continues its path of innovating the most promising tools for the developers architecting our digital future.”
n Susan Lally has become the new vice president of engineering for CloudBees. Lally will plan, implement and oversee engineering strategies for the company’s continuous integration, continuous delivery and application release orchestration solutions. “As CloudBees continues on its enterprise DevOps strategy, we are dedicated to ensuring that our solutions continue to lead the industry and meet the enterprise demands of our customers,” said Sacha Labourey, CEO and co-founder of CloudBees. “Susan’s deep technology experience and broad skill set managing teams in high-growth environments make her a perfect fit.”
n Databricks has appointed Dave Conte as chief financial officer. Conte will report directly to the company’s CEO Ali Ghodsi and be responsible for all of the company’s financial and operational functions. According to Databricks, Conte has 30 years of experience in finance and administration in multi-national public and private companies within the technology industry.
n Hyperconverged Kubernetes solution provider Robin.io has announced a new CEO. Jef Graham will lead the company in helping enterprises modernize applications and data infrastructure. Previously, he has worked as an EVP for large public companies like Juniper Networks and 3Com Corporation and has served as CEO of privately funded start-ups.
Waving the flag for feature experimentation BY LISA MORGAN
Digital transformation is making companies more software-dependent than they’ve ever been. As analog products become digital and manual processes are superseded by their automated or machine-assisted equivalents, organizations are building more apps at the core and edge to compete more effectively. One way of hedging bets in an organization’s favor is using feature flagging to determine what is and is not translating to business value.

Of course, feature flagging isn’t a new concept for developers, but the thought processes around it and its usage are evolving. “I think the major change is that people realize that instead of being the exception to how engineering works where they’re using feature flags because they have a high-risk feature or they’re not quite sure how it’s going to work, now they’re looking at it as like maybe that’s how we should manage things all the time,” said Chris Condo, senior analyst at Forrester.
What’s new about feature flagging
Feature experimentation platforms tend to support a range of methods that may include feature flagging, A/B testing, and canary and blue-green deployments. The platforms also reflect the shrinking size of what’s being released, with some platforms supporting increments as small as a single code path. Interestingly, the users of such platforms are evolving to include not only developers and DevOps teams, but product managers, designers, and even line-of-business users. Who can access feature flags depends on the platform
selected and the customer’s culture and practices.

“The needs of the product manager, the developer, the designer and the analyst come together in this world,” said Jon Noronha, senior director of product at Optimizely. “Increasingly, [developers] are collaborating directly with their product manager or with the data scientist that might be assigned to their team or going to a designer to answer certain questions upfront of a project that they might not have even asked before.”

Apparently, the collaboration is having some interesting side effects, like developers telling their product managers they want agreement on success metrics before they start writing code. That’s a stark contrast to the traditional way of working, in which the product manager tells developers what features to build and then developers build them. Instead, goals are increasingly defined collaboratively.

“I think a lot of different functions now can benefit from experimentation. Obviously, marketing has played a part in optimization but also thinking of the HR department in your company. Can they test their job opening pages? How can they benefit from experimentation? Are they getting the right candidates in?” said Sophie Harpur, product manager at Split.io. “I think it kind of can go across the board, so making it accessible to everyone across the org.”

Historically, feature flagging has been a developer concept. Product managers haven’t been able to access or change anything themselves. Similarly, a salesperson couldn’t turn a feature on or off for a prospect that wants to try something in beta. “If you can only change things by playing around with command-line tools or editing values in the database, it’s kind of an archaic and error-prone process even for developers, so they’re not likely to use it all that often,” said John Kodumal, CTO and co-founder of LaunchDarkly.
“It’s a niche technique you’re only going to use when you absolutely have to do it.”
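In code, the archaic database-edit approach Kodumal describes gives way to a flag check in the serving path. Here is a minimal sketch in Python; the FlagStore class, the flag name and the render functions are illustrative assumptions, not any vendor's actual SDK:

```python
# Minimal feature-flag sketch. Real platforms back the store with a
# hosted service and per-user targeting; this is in-memory only.

class FlagStore:
    """In-memory flag store keyed by flag name."""

    def __init__(self):
        self._flags = {}

    def set_flag(self, name, enabled):
        # On a real platform, a product manager flips this in a UI
        # with no redeploy of the application.
        self._flags[name] = enabled

    def is_enabled(self, name, default=False):
        # Unknown flags fall back to a safe default.
        return self._flags.get(name, default)


flags = FlagStore()
flags.set_flag("new-checkout", False)  # new path fenced off by default


def render_old_checkout(cart):
    return "old-checkout"


def render_new_checkout(cart):
    return "new-checkout"


def checkout(cart):
    # The flag check fences off the risky code path at runtime.
    if flags.is_enabled("new-checkout"):
        return render_new_checkout(cart)
    return render_old_checkout(cart)
```

Because the decision is made at runtime rather than at deploy time, turning the feature on is a data change, which is what lets non-developers participate.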
Process drives better feature flag management
Feature experimentation platforms make feature flagging easier to do and easier to manage, but even the greatest tools don’t contain all the DNA an organization needs to succeed. Without a process in place, developers, product managers, and even salespeople may be able to turn flags on and off at will, which may not be the most prudent course of action. When a process is in place, businesses are in a better position to ensure their feature experimentation aligns with their goals.

“Trouble can ensue if everyone is turning things on and off willy-nilly,” said Chris Condo, senior analyst at Forrester. “It should be understood that certain people have authority to do things and other people don’t. If any salesperson, any account manager, anyone, can have access to this, I think that that’s asking for trouble.”

Feature experimentation platforms tend to provide visibility and insight into who’s seeing what, when, and why they’re seeing it. Condo said LaunchDarkly’s designation as a “feature management” platform is ingenious because it suggests that feature flagging should be an ongoing practice. However, it’s not just clever marketing; the platform (and some others) provide true feature management capabilities.

“If you’re littering your code with code paths that aren’t being executed, you’re going to end up with technical debt that you want to keep in check,” said John Kodumal, CTO and co-founder of LaunchDarkly. “You need tools that are prescriptive about telling you when to remove feature flags. If you don’t have that, you can end up with a lot of problems.”

For example, one company built its own feature flagging system, but feature flags were never deleted. When the system crashed, the website reverted to an old version that was a decade out of date because there were a lot of code paths no one had ever seen before. The old version of the site was non-functional, and apparently no one had tested it.
So, the end result was a site-wide outage.
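The decade-old-code-path failure above is exactly the argument for tracking flag age. A small sketch of that bookkeeping follows; the class, field names and the 30-day review window are assumptions for illustration, not a real platform's API:

```python
from datetime import date, timedelta

# Flag-hygiene sketch: record when each flag was introduced, and
# surface flags that outlived their review window so they get
# deleted from the code, not forgotten.

class ManagedFlags:
    def __init__(self, max_age_days=30):
        self.max_age = timedelta(days=max_age_days)
        self._flags = {}  # name -> (enabled, created)

    def create(self, name, enabled, created=None):
        # Record the flag's creation date alongside its state.
        self._flags[name] = (enabled, created or date.today())

    def list_stale(self, today=None):
        # Flags older than the review window are candidates for
        # removal; leaving them in place is dormant technical debt.
        today = today or date.today()
        return [name for name, (_, created) in self._flags.items()
                if today - created > self.max_age]
```

A report built on something like `list_stale()` is the "prescriptive" nudge Kodumal describes: it tells the team which flags to remove rather than waiting for a cleanup week.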
TOOLS AREN’T EVERYTHING
Tools are essential for sound feature flag management, but tools alone can’t guarantee it, particularly if they’re not supported by processes. “Process, tools, and culture have to work together. You have to have a process that allows you to remove flags as quickly as possible,” said Kodumal.

Conversely, while a process may exist, developers may still not feel comfortable deleting flags if they don’t trust the tooling. Alternatively, feature flag removal might be something that’s done during a cleanup week as opposed to continuously.

“A set of practices and techniques are only useful if they’re adopted consistently across an organization. The technology is sort of the prerequisite for that, but it’s not the whole story,” said Jon Noronha, senior director of product at Optimizely. “Big established companies that haven’t always done things like feature flagging and experimentation [need help making] that jump along the way.”

Documentation is also important, according to Sophie Harpur, product manager at Split.io. That means documenting the hypothesis, what was tested, how it was tested, what was learned by turning the feature on and off, and next steps, including the next feature that’s going to be released. “Having that all in a centralized place is really important to foster an experimentation culture,” said Harpur. “Having that documentation of experiments gives you that opportunity to make sure things are being done correctly, so allowing people to review what people are testing, getting more eyes on it to make sure people are testing correctly and effectively. We’re always keen for process around experimentation.”
Without a feature experimentation platform, a product manager usually files a Jira ticket. With a feature management platform, the same product manager can access the feature flags they need and modify them.
Driving positive business impacts
As more companies adopt a quantitative mindset, they’re compelled to measure the effectiveness of individual features through experimentation. Toward that end, organizations are monitoring technical metrics, such as feature and API performance, and business metrics, such as customer engagement.

“Restaurants, airlines, and car manufacturers realize that in order to compete in 2019, they need to have the best software on the market,” said Optimizely’s Noronha. “They need to bring that in-house, build it themselves and adopt some of the best practices that the Silicon Valley elite use. Those companies use feature flagging pervasively throughout their processes.”

There are also organizational dynamics fueling the demand for feature flagging, including CEOs who question the value of software investments and the sought-after, recently acquired talent that wonders why the code base is such a mess and how anyone manages to get work done.

“I think those guys are there to make forward progress on both the infrastructure and on the value of the work,” said Optimizely’s Noronha. “Feature flags are often the first step in just getting your infrastructure under control. You fence off certain areas of your products where you can make progress on them independently. One customer [called this] ‘containing the blast radius of new changes,’ which I really liked.”

Feature flags also allow developers to change the narrative about their own success metrics. Instead of telling the CEO they built 30 new features last quarter, they can show how much the new features increased the value delivered to the customer, which is what the CEO cares more about.
Speed releases with feature flags
BY LISA MORGAN

Feature experimentation platforms can help organizations deliver software faster because they become aware of expected and unexpected feature-related behavior sooner. As organizations continue to accelerate their release cycles, they ship ever smaller amounts of code that should be experimented with individually.

“It’s the next obvious step in continuous delivery [because] continuous delivery is all about shipping the smallest unit of change possible, which is an individual code path,” said John Kodumal, CTO and co-founder of LaunchDarkly. “Feature planning is fundamentally about that, and what that unlocks is pretty game-changing.”

Feature flagging also enables teams to work even more independently than they have been, which also tends to accelerate development. “Customers using DVCS and feature branches have told us it was really expensive to take a long branch, merge that back into the mainline code base and deploy it because the two code branches diverged pretty quickly,” said Kodumal. “Integration time is extremely expensive.”

With feature flagging, teams can merge code and keep it turned off, then turn it on in an isolated environment, like a staging server or a developer’s production account, which also helps teams move faster and be more productive. However, “move faster” doesn’t necessarily equate to “move faster and break things.” Organizations need to take risk management into account, because the “value” they’re delivering isn’t as valuable as they think if it’s being delivered at the price of customer angst or the company’s reputation.

“Release velocity has gone up in many cases, but increasingly you see teams scratching their heads wondering about the actual value of all these launches,” said Jon Noronha, senior director of product at Optimizely. “People find themselves in ‘the feature factory’ where they’re shipping one thing after another, but uncertain about the actual [business] outcomes.”

Increasingly, businesses want to understand the value and outcomes of investments. Feature flagging helps. “People want to be able to use feature flagging for all kinds of software, not just the web, desktop or mobile devices, but things like Amazon Fire Sticks or Roku boxes,” said LaunchDarkly’s Kodumal.
SPEED DOES NOT TRUMP QUALITY
Digital experiences in all channels are constantly resetting customers’ expectations. While some organizations may deliver great experiences in one or more channels, they may fall short in others. Yet, from the customer perspective, the best experience in any channel becomes the standard by which all others are measured.

“People expect the software they’re using on a day-to-day basis to be flawless and they don’t tolerate crashes the way they used to,” said Kodumal. “When we as consumers trust software companies with so much of our lives, it’s just not acceptable for the banking app to crash or to be unavailable because the company needs three hours of downtime to upgrade the app.”

Feature flagging allows users to see and measure expected and unexpected impacts. “Doing things in a data-driven way can [help you identify expected and unexpected impacts] quickly and allow you to reduce the number of customers that are affected,” said Sophie Harpur, product manager at Split.io. “You should have metrics tied to every feature release and make sure you’re detecting things that you’re not expecting.”

Another benefit of feature flagging is the ability to move beyond brain-dead minimum viable products (MVPs) to MVPs that enable customers to test-drive software on their own data. “I can put out an MVP to a select set of alpha users who are maybe super users of my product and get their direct feedback. If I’m using a feature experimentation platform and you’re not, one of the problems you’re going to have is you’re going to have to deploy that code to a staging server, but users won’t be able to use it on their data because your software is quarantined on a separate set of servers,” said Chris Condo, senior analyst at Forrester. “Meanwhile, my users are testing my feature directly on their data. They can look at it, see how it works, try their complex queries or look at the data in different ways. I’ve lowered risk and I’ve increased my velocity because I know I can control the exposure of that feature.”
Demonstrating positive business impact as verified by data also tends to lower the barriers to future funding. Importantly, feature flagging enables companies to test small changes and detect whether they’re impacting the metrics the organization cares most about.

“A lot of the kind of large enterprise companies are all moving in that direction,” said Split’s Harpur. “It’s thinking about that transformation of organizations turning into digital companies and then turning into experimentation companies.”

Experimentation allows organizations to move faster and invest in the right things. The overarching benefit is staying ahead of the curve. “To be data-driven is to let your [users] tell you what they like rather than having the highest paid person tell you what you should be doing,” said Lizzie Eardley, senior data scientist at Split.io.
How canary and blue-green releases fit in
Feature flagging can be used in place of canary and blue-green releases, or the practices can complement one another, depending on the goal. In a canary release, a change is pushed out to a small group of users to see how it performs. If it performs well, it’s rolled out to more users. Feature flagging allows a more precise selection of users, down to individual users.

“Typically, what happens in a canary release is that particular code is put out there and anybody can get access to it. It’s just that it’s only available on a certain set of systems, and then you can measure whether or not it’s ready for further deployment,” said Forrester’s Condo. “If you’re putting out a brand new product and during that canary release you only have alpha users and then maybe beta users and then you decide it’s actually performing well, let’s spread it out. If it’s just a microservice that’s been updated, a small piece of your website that’s changed or has a new feature, maybe use a feature flag and measure the impact. I think there’s the right tool for the right situation.”
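Platforms typically implement that precise selection of users with percentage rollouts keyed to a stable hash, so a given user stays in or out of the cohort across sessions. A hedged sketch of the idea (the bucketing scheme is illustrative; vendors implement the details differently):

```python
import hashlib

# Percentage-rollout sketch. Hashing the user ID together with the
# flag name gives each user a stable bucket in the range 0-99.

def bucket(user_id, flag_name):
    key = f"{flag_name}:{user_id}".encode("utf-8")
    return int(hashlib.sha256(key).hexdigest(), 16) % 100


def in_rollout(user_id, flag_name, percent):
    # A user in the 10% cohort stays in it at 20%, 50%, and so on,
    # because the bucket is stable and only the threshold grows.
    return bucket(user_id, flag_name) < percent
```

Mixing the flag name into the hash keeps cohorts independent across flags, so the same users aren't always the guinea pigs for every experiment.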
Blue-green deployments involve identical hardware environments, one of which runs live software while the other remains idle. New code is deployed to the idle environment and tested. Users are then switched over to the formerly idle servers that now run the updated software, and the process continues.

“With blue-green deployments you can flip back and forth between one version and a newly launched version, but it’s just two versions, really, because nobody scales blue-green beyond that,” said LaunchDarkly’s Kodumal. “Feature flagging allows you to do things in a more fine-grained way, so if you have 20 different developers committing and releasing at the same time you have the granularity to say these things are not risky, so let’s turn them on, and those other things are risky, so let’s turn them off. And at deploy time, with blue-green, it’s still a very binary decision: either all of the new code is being deployed or not.”

With feature flagging, the level of granularity can be as fine as a code path, so it’s possible to decide whether an individual code path should be executed. On the other hand, not everything needs to be feature-flagged. For example, a simple bug fix may not warrant feature flag overhead, and if it’s an infrastructure configuration change, a blue-green release may make more sense than a feature flag.

Another type of testing that tends to be supported by feature experimentation platforms is A/B testing, in which companies experiment with two different shopping cart flows, or two different site designs, to determine which is most successful, statistically speaking. “Feature flagging and A/B
testing have gone down parallel tracks,” said Optimizely’s Noronha. “You’ve had development teams implementing feature flags for the purpose of continuous integration and deployment, and product analytics where there’s been this evolution of A/B testing from being something that just the biggest tech companies do to something that’s much more mainstream. Those practices have converged into something that a combined product and engineering team does to monitor their progress.” In short, feature experimentation isn’t one thing. There are different ways to experiment, each of which has its benefits and drawbacks, depending on the use case.
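The A/B side of that convergence reduces to two mechanical pieces: deterministic variant assignment and a per-variant business metric. An illustrative sketch, assuming a 50/50 split and invented function names rather than any platform's real API:

```python
import hashlib

# A/B sketch: assign each user a stable variant, then compare a
# business metric (e.g., checkout completions) per variant.

def assign_variant(user_id, experiment):
    # Hash user + experiment so assignment is stable across sessions
    # and independent across experiments.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"


def conversion_rates(events):
    # events: iterable of (variant, converted) pairs logged at runtime.
    totals = {"A": 0, "B": 0}
    wins = {"A": 0, "B": 0}
    for variant, converted in events:
        totals[variant] += 1
        wins[variant] += int(converted)
    return {v: (wins[v] / totals[v] if totals[v] else 0.0) for v in totals}
```

In practice a platform layers statistical significance testing on top of the raw rates; the sketch stops at the bookkeeping.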
Using feature flags for entitlements
Feature flags can be used as a means of controlling access rights based on a subscription. Instead of having huge buckets into which customers fall, such as Basic, Professional, or Enterprise product levels, feature flagging can allow individually customized products.

“You’re seeing companies take feature flagging one step further,” said Forrester’s Condo. “They’re saying, ‘Hey, instead of managing multiple levels of licenses and people having to install keys or do these complicated setups, we can simply put them in different demographic [categories] and turn features on or off or give them the ability to turn features on or off and let them decide what level of subscription they should have.’”

LaunchDarkly uses its own platform for many things, including changing the rate limits of its API based on the customer or characteristics of their traffic. That way, the company can customize the flow based on an individual customer’s requirements. “We can impact not only the way people develop software, but how businesses run their software because the cost of customization is lower,” said LaunchDarkly’s Kodumal. “Being able to release specific versions to specific customers is incredibly powerful.”
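Entitlement-style flagging like the rate-limit example above can be pictured as per-customer overrides layered on plan defaults. The plan names, limits and class below are invented for illustration; a real rules engine is far richer:

```python
# Entitlements-as-flags sketch: plan defaults plus per-customer
# overrides replace rigid Basic/Pro/Enterprise buckets.

DEFAULT_LIMITS = {"basic": 100, "pro": 1000}  # API requests/minute


class Entitlements:
    def __init__(self):
        self._overrides = {}  # customer_id -> custom rate limit

    def set_override(self, customer_id, limit):
        # E.g., a salesperson grants one prospect a higher limit
        # for a beta trial, without touching license keys.
        self._overrides[customer_id] = limit

    def rate_limit(self, customer_id, plan):
        # A per-customer override wins; otherwise fall back to the
        # plan default (and a floor of 100 for unknown plans).
        return self._overrides.get(customer_id, DEFAULT_LIMITS.get(plan, 100))
```

The point of the design is that customization becomes a data change per customer rather than a separate build or license tier.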
CCPA set to take effect at the start of 2020 BY JENNA SARGENT
Remove passwords, use encryption
Encryption provider StrongKey’s CTO Arshad Noor believes that one way for companies to mitigate their risk is by having better security in the first place. He recommends companies implement “security controls that minimize breaches because in the end, damage is done to a company and shareholders only when there is a breach, not just by the fact that they are collecting data.”

If companies are just relying on user IDs and passwords, they are taking on a huge, unnecessary amount of risk, Noor said. Because passwords are at the heart of the majority of data breaches, the first step should be to remove them. The second thing Noor believes companies should consider is encrypting personally identifiable information (PII) using industry best practices. “Until that information is encrypted, and encrypted in the applications that are authorized to see that data, they are taking on huge amounts of risk, and by just taking these two very simple ethical actions, regardless of which way the law goes, companies will protect themselves from the risks that will wait for them.”

Moir also added that backups will play an important role in compliance. “What will be very interesting in all of this is the right to be forgotten and how do we deal with data that’s sitting in backups,” he said. Moir explained that the challenge lies in the question of whether backup data needs to be deleted alongside data that is in production. If you only delete production data, then restore from a backup, that previously deleted data may wind back up in production, which may put your company in violation of the law. “You need a process to make sure it doesn’t get back into production systems,” Moir said. “So again it’s kind of this process-driven element, rather than a technology that is doing it for you.” z
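The advice to encrypt PII in the applications that handle it can be illustrated with Node’s built-in crypto module. This is a minimal sketch using AES-256-GCM; in a real deployment the key would come from a key management service rather than being generated in process, and key rotation and access control are the hard parts this sketch leaves out:

```typescript
// Sketch: encrypt a PII field before it is stored, decrypt it only in
// applications authorized to hold the key. Uses AES-256-GCM from Node.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = randomBytes(32); // stand-in; fetch from a key manager in practice

function encryptPII(plaintext: string): { iv: string; tag: string; data: string } {
  const iv = randomBytes(12); // unique nonce per record
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return {
    iv: iv.toString("base64"),
    tag: cipher.getAuthTag().toString("base64"), // integrity tag
    data: data.toString("base64"),
  };
}

function decryptPII(record: { iv: string; tag: string; data: string }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, Buffer.from(record.iv, "base64"));
  decipher.setAuthTag(Buffer.from(record.tag, "base64"));
  return Buffer.concat([
    decipher.update(Buffer.from(record.data, "base64")),
    decipher.final(), // throws if the ciphertext was tampered with
  ]).toString("utf8");
}
```

A breach of the data store alone then exposes only ciphertext; the damage Noor describes requires the attacker to also obtain the key.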
When the GDPR was launched almost a year and a half ago, it forced companies to take a look at their data practices. And though some might say enforcement has been pretty lax so far, it has changed the way companies deal with their data. It has also inspired a major new regulation in the US: the California Consumer Privacy Act, or CCPA.

The CCPA was signed into law about a month after the GDPR went into effect, though the idea was first introduced in January 2018. The law goes into effect starting Jan. 1, 2020.

According to Scott Pink, special counsel at Los Angeles-based law firm O’Melveny & Myers, the CCPA mirrors the GDPR in many ways, but it isn’t as extensive. “It’s sort of a GDPR lite,” he said. The CCPA provides three of the same rights that the GDPR provides: 1) the right to know what data is collected about you, 2) the right to access that data, and 3) the right to request deletion, Pink explained. In addition, the CCPA provides the right to say no to the sale of personal information and the right to not be discriminated against, he said.

According to Pink, in order for a company to be subject to the CCPA, it must be doing business in California. This doesn’t necessarily mean it is headquartered or incorporated there; having employees there or selling products there counts. Then, it must meet one of the following three criteria:
1. Have an annual revenue of $25 million or more
2. Have collected data from 50,000 California consumers
3. Derive 50% or more of revenue from the sale of personal information

Unlike the GDPR, which imposed fines based on the level of violation, the CCPA allows individuals to pursue a lawsuit against the company. Companies could be liable for up to $2,500 per violation under the regulation.
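The applicability test described above can be expressed as a rough sketch. The thresholds are the ones quoted in the article; real legal analysis is considerably more nuanced, and the type and function names are invented:

```typescript
// Rough sketch of the CCPA applicability test as described by Pink:
// a company doing business in California is covered if it meets
// any one of the three thresholds.
type Business = {
  doesBusinessInCalifornia: boolean; // offices, employees, or sales there
  annualRevenueUSD: number;
  californiaConsumerRecords: number;
  shareOfRevenueFromSellingData: number; // fraction between 0 and 1
};

function subjectToCCPA(b: Business): boolean {
  if (!b.doesBusinessInCalifornia) return false;
  return (
    b.annualRevenueUSD >= 25_000_000 ||
    b.californiaConsumerRecords >= 50_000 ||
    b.shareOfRevenueFromSellingData >= 0.5
  );
}
```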
The path to compliance Adrian Moir, lead technology evangelist at Quest Software, believes that the first step towards compliance is for companies to take a look at what’s in the regulations and determine how they should adjust their business accordingly. O’Melveny’s Pink agrees that this is a good first step, and believes that next, companies should begin doing an inventory of their data that is subject to the law. Sovan Bin, CEO and founder of data governance tool provider Odaseva, added that discovery and documentation of how personal information is used should be a company’s first step.
GDPR sets a precedent
One factor that will impact how companies respond to the CCPA is that there is already a precedent with the GDPR. While the GDPR certainly wasn’t the first privacy law to ever come into play, it was unique due to its massive scale. And with the GDPR still fresh in people’s minds, it gives them a model for how compliance and enforcement play out. Because the CCPA is less extensive than the GDPR, those companies that already had to comply with the GDPR will have an easier time with compliance this time around. “Companies who have been preparing for compliance of GDPR for the European consumers would be already ready for the most part for CCPA,” said Bin.

One criticism of the GDPR is that enforcement was pretty slow at first. By February 2019, eight months after it went into effect, only 91 fines had been issued under the GDPR. “[With GDPR,] it was a while before the first case came in and the first case didn’t get the full fine and everyone was going ‘oh, well that wasn’t so bad,’” said Moir.

Moir predicts that the first case under the CCPA will set the trend for the outcome of the law as a whole. “I think maybe it’ll be the first case that tips it over the edge and you’ll start seeing people saying ‘yes, I really need to do this,’” he said. Moir also predicts that there will be a range of responses to the CCPA. Some organizations will accept the risk of the CCPA and keep doing business as usual, while others will work to become compliant. z
Other states to jump on the privacy law bandwagon The CCPA only protects California residents, but it is likely that other states will begin drafting their own legislation as well. The same thing happened after California passed a data breach disclosure law in 2004, explained StrongKey’s CTO Arshad Noor. The law required companies to notify consumers when a breach happened. “It’s inconceivable that only Californians were being affected by these breaches,” said Noor. “When other states passed the law, their residents also got notified. Eventually it reached the point where I believe more than 42 states today have a breach disclosure law.” He added that at the time that law passed, it was the first of its kind in the United States. Prior to that, there hadn’t really been any sort of regulation around the Internet, data privacy, or breaches, Noor said. “Now I have a feeling everybody is far more educated about all of this,” he said. A few other states have already begun working on their own privacy laws, such as Washington and Vermont, according to the law firm BakerHostetler. BakerHostetler keeps an up-to-date list of all new data laws as well as changes to current data laws. Moir believes that a big factor that will influence other regulations is how much of a public desire there is for it. States will be under more pressure to pass their own laws if there is a big public push for tighter privacy regulations. Moir predicts that one outcome of the CCPA is that it leads residents of other states to want their data to be as protected as if they lived in California. There’s also the possibility that in the future, rather than having to comply with varying legislations from 50 different states, a federal law could pass. “I’ve also heard that businesses are actually asking U.S. Congress to come up with a single federal law,” said Noor. 
“So there’s apparently a lot of backroom conversations going on right now.” According to an October 2019 survey from Goodwin Simon Strategic Research, 81% of California voters believe that a federal privacy law needs to be at least as strong as the CCPA. z
A Look Back
How the year of value stream held up BY CHRISTINA CARDOZA
Last year, SD Times declared that 2019 would be the year of the value stream as businesses started to look more into the value they were getting out of their IT processes. “I can go in and solve my release process and improve my release time, but if I don’t look at how long it takes for that idea to go all the way through, then I miss time to market. Time to market and visual transformations right now is very, very important,” said Lance Knight, senior vice president at ConnectALL, an Orasi company.

However, as with any new terminology and approach, it took some time throughout the year for the industry to really understand what value stream management was and how it could be used to gain an advantage. Eric Robertson, vice president of product management and strategy execution at CollabNet VersionOne, defined the term as “an improvement strategy that links the needs of top management with the needs of the operations group. It is a combination of people, process and technology. It is mapping, measuring, visualizing, and then being able to understand from your planning, your epics, your stories, your work items, through these heterogeneous tools all the way through your enterprise software delivery lines, being able to understand that what you’re delivering aligns with the business objectives, and you’re being effective there.”

And many companies jumped on value stream with the release of new products. CollabNet VersionOne focused its
winter 2019 release on value stream management with new features and functionalities to help organizations collaborate and work intelligently. The company also announced it would be acquired by TPG Capital in September for its enterprise DevOps and value stream management capabilities. In addition, TPG Capital provided $500 million of equity capital, which CollabNet plans to use to drive AI and value stream management throughout the enterprise.

Planview released its new Agile Scaler offering in March to help extend Agile delivery across teams. It aims to connect multiple teams, tools and workflows as well as provide visibility into the flow of work.

CloudBees announced its Software Delivery Management Platform, Product Hub and Value Stream Management modules in August at DevOps World | Jenkins World. The new solution includes a central place for integrated visibility and orchestration, a policy engine, a dashboard for continuous improvement and retrospectives, real-time value stream management capabilities, and integrated feature flag management.

Jama Software announced it would be offering a cloud version of Tasktop Integration Hub in September to enable customers to automate and visualize the flow of their software delivery value stream. It enabled teams to integrate Jama Connect with ALM, PLM, development and QA tools.

Tasktop also had a number of its own value stream releases, with the addition of test management integration capabilities to Integration Hub in May, and a partnership with Planview in August to extend its Agile Scaler offering and help teams transform and scale on their own terms and timeline. Tasktop also teamed up with Tricentis in September for improved quality throughout the software delivery value stream. Together the companies will provide test automation, information flow automation, and value stream metrics. Lastly, Tasktop announced the limited release of Tasktop Viz in October, designed to measure the flow of business value in software delivery.

By the end of the year, the importance of value stream management was evident. The next step, which will continue throughout next year, is to now apply new roles to the new way of thinking. According to Dominica DeGrandis, the new principal flow advisor at Tasktop — who spoke at the DevOps Enterprise Summit in Las Vegas in October — we can expect to see the program management office morph into a value management office with value stream architects, value stream product leads, and product journey owners and champions. z

Lack of cybersecurity skills and best practices strain security
BY JAKUB LEWKOWICZ

Security hit a low point this year, as 2019 saw the 2nd, 3rd and 7th biggest breaches of all time, measured by the number of people affected. The largest breach of the year occurred in May, when First American Financial Corporation leaked 885 million records of documents related to mortgage deals going back to 2003, exposing Social Security numbers, wire transactions and other information, according to a post at Krebs on Security. A month later, around 540 million Facebook user records were exposed on an Amazon cloud server, including account names, IDs and details about comments and reactions to posts, according to a report by UpGuard.

The companies that were hacked due to poor security practices received some hefty fines. Facebook was fined a record-breaking $5 billion over privacy breaches this year. Meanwhile, Equifax agreed to pay $575 million in a settlement for a 2017 data breach that affected 147 million people. In August, Capital One notified users that a data breach affected about 100 million people. Fortunately, government officials stated that they believe the data has been recovered and that there is no evidence it was used for fraud or shared by the individual responsible.

This year, online companies, Internet users and regulators waded through the first full year since the enactment of the General Data Protection Regulation (GDPR) in 2018. Reporting found that the impact of the GDPR, though, has been minimal to this point. Compliance has been slow, enforcement has been lax, and organizations are finding that learning about data origin, residence and use can be hugely daunting and difficult. Although 91 fines have been issued, the one major $56 million fine was imposed on Google for “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization.”

Moving forward, the lessons learned regarding the GDPR will once again be tested by the upcoming California Consumer Privacy Act (CCPA), which goes into effect as of Jan. 1. The law is designed to give users the right to know all the data a business collects on them, the right to delete their data and the right to refuse the sale of that data.

A large bottleneck in improving security is the lack of skills in the workforce to take on cybersecurity positions. The 2019 (ISC)² Cybersecurity Workforce Study estimates that the cybersecurity workforce is currently made up of 2.8 million individuals, but 4.07 million professionals are needed.

The way software development companies approach security is also evolving. The 10th iteration of the BSIMM report, BSIMM10, found that the security aspect of DevOps is evolving, with a new wave of engineering-led software security efforts originating bottom-up in the development and operations teams rather than top-down from a centralized software security group (SSG). The responsibility for security has shifted to developers within their organizations, according to Gabriel Avner from WhiteSource in a post. Many are using tools that can scan a product’s code and issue alerts to developers about potential vulnerabilities, allowing them to test earlier in the SDLC.

In its top ten technology predictions for 2020, Gartner said that AI security will be a major development. AI security includes protecting AI-powered systems, leveraging AI to enhance security defense, and anticipating nefarious use of AI by attackers. z
A Look Back
Enterprises take off into the cloud
BY CHRISTINA CARDOZA

The skies began to clear up for enterprises to start doing more in the cloud this year. According to Forrester, 2018 was the year cloud-native tools and technologies started to gain more traction, and 2019 was their breakout year as more enterprises turned to microservices, containers, serverless and modern approaches.

“From a software development perspective, the cloud offers simplicity, velocity, elasticity, collaboration, and rapid innovation that isn't easily replicated, if at all possible, using traditional on-premises tools. And if you are going to be hosting on the cloud, why not develop in the cloud to make sure your development environment is as close to your runtime environment as possible?” Christopher Condo, senior analyst for Forrester, said last year.

Plenty of providers and vendors also took note that more business was moving to the cloud, and they began to focus efforts on different forms of cloud.

Google Cloud went on the path to a serverless future in January with the release of its NoSQL document database solution Cloud Firestore. The solution uses cloud-native technology in order for users to store, sync, and query data for web, mobile and IoT apps.

In February, the merger between Cloudera and Hortonworks was officially completed, opening up new opportunities for a new Cloudera. Together, the companies work on supporting hybrid and multi-cloud deployments. In September, the newly formed company announced the first instantiation of its enterprise data cloud: Cloudera Data Platform, designed to manage data and workloads on any cloud.

IBM released new services designed to harness the power of the hybrid cloud in February. The services included the IBM Cloud Integration platform to cut time and increase productivity, IBM Services Multi Cloud Management for simplifying IT operations, and the Cloud Hyper Protect Crypto Service to provide security on the public cloud. IBM also acquired Red Hat during the year, and in August revamped its portfolio to be cloud-native and optimized for Red Hat OpenShift. The move made it possible to build mission-critical apps once and run them on different public clouds, the companies explained.

“Red Hat is unlocking innovation with Linux-based technologies, including containers and Kubernetes, which have become the fundamental building blocks of hybrid cloud environments,” said Jim Whitehurst, president and CEO, Red Hat. “This open hybrid cloud foundation is what enables the vision of any app, anywhere, anytime.”

Red Hat also had its own cloud announcements. In February, the company announced Red Hat CodeReady Workspaces to help developers leverage Kubernetes and create cloud-native apps. In November, Red Hat announced Quarkus 1.0, a Kubernetes-native Java stack built for containers and cloud deployments. According to the company, the solution will set up Java for a cloud-native world.
The cloud was also front and center at Microsoft’s Build conference in May. Microsoft announced updates to the Azure Kubernetes Service to provide event-driven auto scaling that supports serverless event-driven containers on Kubernetes. Microsoft also launched a new Azure Security Lab for its cloud and doubled its bounty rewards in August to help strengthen its cloud security.

Stackery announced local Lambda development in July to enable all developers to debug and develop any function in any language or framework.

Atlassian went even deeper into the cloud with major updates to its cloud platform. The updates were focused on new cloud plans, platforms, admins and cloud migration — and included new cloud plans targeted at different use cases, enterprise-grade cloud controls, a new command center, and new programs and offerings for moving to the cloud.

A new foundation, the Reactive Foundation, was launched in September to help advance the next generation of networked apps. According to the Linux Foundation, as cloud-native computing and more modern app development practices take hold, reactive programming will become more important.

The Eclipse Foundation formed the Eclipse Cloud Development Tools Working Group in October to help simplify cloud-native app development. The group will focus on development tools for and in the cloud.

In November, Sumo Logic announced it was acquiring JASK Labs, a company that specializes in cloud-native autonomous security operations.

Gartner ended the year with distributed cloud on its top 10 strategic technology trends for 2020. According to the research firm, as cloud adoption continues to rise, a new era of cloud computing will emerge. Distributed cloud refers to “the distribution of public cloud services to locations outside the cloud provider’s physical data center, but which are still controlled by the provider.” z
A Look Back
Microsoft innovates across the board BY JENNA SARGENT
2019 was Microsoft’s year. After a strong 2018 — a main highlight of which was its acquisition of GitHub — the company kept up that momentum into 2019. Last year, our yearly review of Microsoft focused heavily on Azure. Microsoft invested heavily into its cloud platform last year, and it paid off. This year, things aren’t so simple. The company continued its investment into Azure, but outside of cloud, it revealed significant innovations across the board — from a remote version of Visual Studio to new AI-enabled solutions to new foldable devices and a new operating
system to go with them. In May, Microsoft revealed Visual Studio Online, which is a web-based version of Visual Studio. Visual Studio Online allows developers to open, edit, and debug code that has been stored remotely. It is intended to be used as a remote companion to the native IDE or VS Code, not a replacement. This year the company also greatly expanded its capabilities for creating and deploying IoT solutions. In 2019 alone, the company released over 100 services targeted at IoT. In October, it released updates to IoT Central, Azure IoT Hub, Azure Maps, and Azure Time Series Insights. Microsoft also plans to make its
IoT security solution Azure Sphere generally available in February 2020.

The company also announced that it would be merging the .NET Framework and .NET Core into a single solution. This technically hasn’t happened yet; the company is starting the merge with the release of .NET 5, which is scheduled for 2020. Microsoft explained that all of the things developers love about .NET Core will remain, but .NET 5 will also provide more choices on runtime experiences, Java interoperability on all platforms, and Objective-C and Swift interoperability on multiple operating systems.

Microsoft also entered the foldable screen game with the announcement of two foldable Surface devices that are expected to hit the market for the 2020 holidays. To support these two devices, Surface Neo and Surface Duo, the company developed a new operating system designed for dual-screen devices. Called Windows 10X, the operating system is an extension of Windows 10. It will have similar navigation and functionality to Windows 10, but will have enhancements that optimize it for more flexible postures and mobile use.

In 2019, the company released several versions of TypeScript, the most recent of which was TypeScript 3.7, announced at Microsoft Ignite. A highly requested feature introduced in 3.7 is optional chaining, which allows developers to write code that can immediately stop running if it encounters a null or undefined value.

Also at Ignite, the company announced several enhancements to Microsoft 365, including updates to Teams, a new mobile app for Office, a public preview of the Fluid Framework, a new version of Microsoft Project, AI-enabled Cortana capabilities, a redesigned version of Yammer, and new features in Microsoft Search. It also announced Project Cortex, an evolution in knowledge sharing within organizations. Project Cortex uses AI to parse through content created by an organization, then pushes that information to relevant locations for business users to access. It creates a knowledge center made up of topic cards, people cards, and topic pages, allowing business users to make sense of their data and share knowledge in ways they previously couldn’t. z

Testing gets smarter, and so do testers
BY CHRISTINA CARDOZA

Testing continued to evolve throughout the year as businesses worked to get high-quality software out faster. Now that testing early and often has become a common mantra, testing smarter was the new focus in 2019.

The advances in machine learning and artificial intelligence have enabled more test automation and accuracy. Perfecto announced an AI-based testing solution, Perfecto Codeless, in March that automated the process of writing test scripts. Around the same time, Functionize received a $16 million series A round of funding to bring natural language processing to enterprise-level software testing.

In April, visual testing solution provider Applitools announced the Ultrafast Visual Grid, a new way to manage application function and visual quality. According to the company, it uses AI functionality to validate elements and maintain test code and functional test scripts. Parasoft rolled out the beta release of Selenic UI, an automated test tool that monitors tests, discovers errors, makes remediation recommendations, and does self-healing. Testim ended the year with a new AI-based testing kit for developers to create resilient tests directly in their code. The AI inspects application elements, continuously learns, and improves the robustness of tests.

“[We] roll our eyes when products throw the words machine learning or artificial intelligence at their own product. But suffice it to say the tools are getting smarter in a non-trivial way,” said Dan Widing, CEO of testing platform provider ProdPerfect.

However, as new tools come to market, Kevin Surace, CEO of Appvance, warned businesses about getting caught up in the hype. “When looking for an AI test solution, developers should dig into what the prospective solution actually has now and how it’s going to benefit them. If they say AI, do they really have any AI or machine learning, and have they employed it in a way that will benefit my needs,” he advised.

Outside of AI, testers themselves are getting smarter about their place within teams. One of the messages at the STARWEST testing conference in October was that testers should stop trying to settle or fit in, and start to lead. “If the rest of an organization was just as aware as we all are that all great testers possess these characteristics, would testing have ever been left out of any transformation in the software industry?” asked Stacy Kirk, QualityWorks Consulting Group CEO and founder. z
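TypeScript 3.7’s optional chaining, mentioned in the Microsoft roundup above, lets a property chain short-circuit to undefined instead of throwing when it hits a null or undefined value. A minimal sketch, with invented types:

```typescript
// Optional chaining (?.): the expression stops evaluating and yields
// undefined as soon as a link in the chain is null or undefined.
type Address = { city?: string };
type Person = { name: string; address?: Address };

function cityOf(person: Person | undefined): string | undefined {
  // Pre-3.7 equivalent: person && person.address && person.address.city
  return person?.address?.city;
}
```

The same release also shipped the nullish coalescing operator (`??`), which pairs naturally with `?.` to supply a default when a chain comes up empty.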
Java’s last year on top? BY JENNA SARGENT
Following the new biannual release schedule that started in 2018, this year Java saw two major releases: Java 12 in March and Java 13 in September. Java 12 introduced features such as a new low pause time garbage collector, a microbenchmark suite, switch expressions, the JVM Constants API, and more.

With Java 13, Oracle set out to improve the performance, stability, and security of the Java SE Platform and the Java Development Kit (JDK). This version introduced three major enhancements: dynamic CDS archives, the ability to uncommit unused memory, and a reimplementation of the legacy Socket API. Java 13 also introduced two new preview features, switch expressions and text blocks, for developers to test out. Switch expressions allow switch to be used either as a statement or an expression. Text blocks are multi-line string literals that automatically get formatted in a predictable way.

Another major change had occurred in 2018: the transfer of Java EE from Oracle to the Eclipse Foundation. The move was announced in 2017, and Java EE was renamed Jakarta EE in February 2018. Even in 2019, the transition continued. One of the major milestones the project reached in 2019 was that it now operates under an “open, vendor-neutral community-driven process.” The Eclipse Foundation’s executive director, Mike Milinkovich, commented that this model would allow for more innovation in Java. While Java still remains the most
DevOps Summit takeaway: Culture can be a competitive advantage, or it can be an inhibitor BY CHRISTINA CARDOZA
The DevOps community wants to get back to the human aspect of developing software. At the DevOps Enterprise Summit in Las Vegas last month, one of the dominant themes was people, not process or technology.

Andre Martin, vice president of PeopleDev at Google, spoke about how businesses can create “a culture of high performance” that doesn’t hold them back. “How do you make sure that a company is as powerful as a brand?” said Martin. “And how do you ensure culture is a lever of growth and not the reason why your company doesn’t exist?”

Martin spoke about some of the top companies 20 years ago: Atari, ACE, Taxi, TiVo and Jones University. Fast forward to today, and these companies have disappeared despite being on top and having all the technology they needed to remain there, he explained.

“The reason they lost their way is because they lost the culture,” Martin explained. “They lost sight of the original principles they started their firms with … and started to treat themselves as invincible … as if no competitor could compete with them.”

“Past experience is going to be the inhibitor to future growth,” Martin added. “We use our experiences to define the moment we are sitting in, and in doing so we are likely missing a lot.”

However, companies do seem to understand culture is important; they just are not tackling it, Martin explained. He cited a McKinsey and Company report that found 68% of respondents believe culture is a competitive advantage, 81% believe organizations without high-performing cultures are doomed, 76% believe they can change their culture, and 67% believe
they need to change.

To get a sense of how an organization’s teams are working, Martin said to look at “the bids.” In a couples relationship study, researchers found couples bid towards, away and against. Bidding towards is demonstrated through compliments, engagement and retention. Bidding against is arguing, and bidding away is ignoring. The researchers found couples who bid towards each other 80% of the time were more likely to stay together than couples who only did it 33% of the time. “The takeaway is pay attention to the bids,” said Martin. “Teams are bidding for your attention every day. What kind of bid are you giving them?”

In addition, Martin provided eight attributes he believes can erode culture quickly:
1. Keeping people busy
2. Not allowing for failure
3. Making everything a priority
4. Creating more competition
5. Being resistant to new ideas
6. Invoking history
7. Critique
8. Keeping the circle small and tight

Martin also added it’s not only important to look at your team’s culture, but also to be aware of the team’s climate. Culture is the expectations that are set, while climate is the environment, the felt experience of team members. Getting both aligned will result in high engagement and committed employees, said Martin. For instance, you can’t claim to be a company all about values and collaboration and then have your teams in a dark basement while you just bark orders at them, he explained.

In order to change culture, leaders have to be mindful of their team’s climate; as employees and talent, people have to have a voice and speak up in order to make the company a successful brand; and as a community, we have to talk to each other, understand how to balance the work, and be open to learning and doing things differently, according to Martin. “If you find someone with creativity, innovation and talent — don’t let them go,” said Martin. z
No-code mobile app development: Do more with less
BY CHRISTINA CARDOZA
Developers are tired of switching their focus back and forth between projects, and business folks are tired of waiting for developers to get to their projects — but it doesn’t have to be this way. The rise of mobile development is enabling more work to get done on the fly, and the explosion of no-code development tools in the marketplace is enabling business users to create their own mobile solutions without relying on developers. “There are many businesspeople who face specific problems, and although they may have the ideas to overcome these hurdles, they are dependent on IT teams to turn them into practical solutions. If you train your employees to solve their own IT-related problems, they can become more versatile, independent, and overall way more successful at their job,”
said Chris Obdam, CEO of the no-code enterprise app development tool provider Betty Blocks. The need for no-code development has come out of the demand for digital capabilities, said Jeffrey Hammond, vice president and principal analyst for Forrester. There are just not enough computer science professionals available, he explained. In a recent Forrester mobile executive survey, respondents reported having, on average, only around 20 technical people dedicated to mobile. “If you are trying to build native applications with that kind of staff, you are lucky if you are going to get more than five applications out. If you are a large organization, you simply don’t have the amount of technical development resources available to build and maintain these
applications. Getting the business users involved and helping them meet their needs, even if the app might be a little simple or not have quite the fidelity of a full blown native app, it still provides a lot of value to organizations,” said Hammond. According to Obdam, there are various definitions of no-code, so it is important to note what no-code actually means. “No-code generally implies that you can build an application without the need for traditional programming. But the type of applications that no-code can build are very diverse, depending on the provider,” he said.
The rise of no-code in mobile
A major trend happening in the mobile space today is that organizations are actually starting to look deeper into why they are building mobile applications as opposed to just building them and hoping things will turn out right, Forrester’s Hammond explained. “In the early days, companies said they had to have a mobile app because the competitors had a mobile app... almost like that State Farm commercial where the agent says ‘well I have a mobile app, too’ even though he doesn’t,” he explained. What is beginning to happen is organizations are concentrating their mobile spending, and since organizations usually only have enough in the budget to do a couple of native mobile apps a year, they are starting to seriously consider no-code. “All the things that employees need or use often go unfulfilled, or they are not maintained even if they are built, because there are just not enough resources to do them, so it creates a need for no-code,” said Hammond. In addition, almost everyone has a
mobile device today. It is an intimate form of enabling communications, and that is something businesses really want to take advantage of but haven’t been able to, Hammond explained. Because of this problem, businesses’ software needs are not being met. Mobile apps are becoming too time-consuming and expensive to build. “There’s a whole lot of smart people working in every company, in every line of business within the company, who see software as an answer to a wide variety of their problems — whether it’s about too many manual processes, or lack of appropriate data collection, analytics and insight from the data, or the lack of intelligence in a lot of the processes. So all of those opportunities exist, and the demand for software is growing. Yet, there’s no fundamental change. Software is actually becoming continued on page 26 >
No-code isn’t just for business users Typically no-code solutions are targeted at business users while low-code solutions are targeted at developers. No-code enables business users to create apps with no programming experience while low-code gives developers the option to manipulate the code. “For the end-user there is no difference. It’s the process of building the application that varies. If your IT team doesn’t have a problem with working with new tooling that hardly requires any coding from them, then it’s a great solution. However, if your developers really enjoy traditional programming it might be more sensible to opt for a low-code solution,” said Chris Obdam, CEO of the no-code tool provider Betty Blocks. While there is a lot you can do within no-code solutions, sometimes there is still a need to drop into and manipulate the code, said Jeffrey Hammond, vice president and principal analyst for Forrester, so experienced developers tend to stay away from the no-code solutions. According to Obdam, native mobile development still “requires real craftsmanship,” and seasoned developers prefer to take advantage of underlying technology and techniques. “There are many traditionally schooled developers who enjoy the fact that they don’t have to code as much as they used to, whereas others might find it harder to distance themselves from coding. For them, a full-on switch to no-code might be a bit much.” Obdam added that no-code is a good option for novice developers who want to get their hands dirty with mobile development. “Although the chances of adopting a no-code solution are smaller among seasoned mobile developers, no-code does enable less experienced developers to start tinkering with mobile application development. Making a mobile app used to be reserved to a select few, but with no-code tooling, you see that many more people become very capable of building their own mobile applications.” z —Christina Cardoza
< continued from page 25
tougher and more expensive to build,” said Praveen Seshadri, founder and CEO of no-code platform AppSheet. One option is to buy something off the shelf, but it rarely ever works out, Seshadri explained, because every business has unique and specific needs. “It’s
this confluence of forces where everybody’s got a device, everybody can visualize quite easily the applications they might build on the factory floor, in a warehouse, out on the farm, or whatever it is. And yet it’s increasingly incredibly expensive, time consuming, to build these applications, so they’re not get-
No-code mobile apps in the real world
It is easy to talk about the benefits of no-code development for mobile applications, but how do these applications actually look and behave in the real world? Tutti Gourmet, a gluten-free and allergen-free cookie and snack manufacturer, recently turned to no-code development to automate several processes. “I consider myself a moderate to advanced Excel user,” said Elijah Magrane, operations director at Tutti Gourmet. “When I came here, everything was done manually by hand—either with paper or by physically entering data into a spreadsheet. So, my first order of business when I started was to overhaul the process.” Some of the solutions Magrane created with AppSheet’s tool included a timesheet for employees to track hours, production logs and summaries, and production, inventory and documentation applications. “If we’re not up on our documentation, we could get a recall, which would probably put us out of business,” said Magrane. “Now, I receive notifications when expiration dates are approaching. This way, I can stay on top of all our documentation for our suppliers. This has been really, really helpful.” In an effort to undergo a digital transformation, energy company Kentucky Power recently turned to no-code to help move away from paper, digitizing and automating everything from inspection and incident reports to employee communications. Some of the prerequisites the company was looking for in a mobile development solution were that it had to have a built-in scanner to track serial numbers, enable users to create new forms and work orders as well as update existing ones, and enable fast development. With no-code, the company was able to create apps that tracked failed or damaged electric poles, transformers and circuits.
“As a pilot, we started with the Transformer Tracker in our Ashland shop with three or four foremen, who have to collect information about transformers,” Paula Bell, a lean team member for Kentucky Power, said in a case study. “After about a week of use, a foreman from another shop called me, and said ‘Hey Paula, can I have that thing that Rick uses to get serial numbers?’ When someone asks to start using a new tool based on another user sharing it, to me, that’s success!” Lastly, KLB Construction turned to no-code because it was cheaper and more effective to build its own custom solution that met its specific needs than to buy the construction management software solutions already available. Some of the applications the company created included field management, daily reports, reimbursements, near misses and incidents and safety alerts. “Everything is starting to get connected and even an underserved industry like construction is going to have to adopt new technology to stay with the curve. Larger companies have more resources and seem more willing. KLB is already an early adopter in construction technology and that has made us far more efficient and productive,” Richard Glass, director of information services at KLB, said in a case study. z —Christina Cardoza
ting built. So that’s this pressure situation, and that’s why you’re seeing a lot of talk about low-code and no-code,” Seshadri said. The applications being built with no-code are typically not applications that are going to be used by millions of end users; they are primarily built for internal productivity, Betty Blocks’ Obdam explained. “No-code is especially useful for mobile apps that focus on small processes for specific types of employees. A dock worker, for instance, spends most of his working hours outside carrying only a mobile phone or tablet. It becomes interesting when this person can oversee all the processes relevant to him while on the go,” he said. Seshadri explained that no-code can actually speed up mobile development processes and free up IT workers, who were forced to build these business apps because they were the only ones who could create the software. By empowering a business user to create an application, you actually empower the people who understand the problem to turn their ideas into reality, Seshadri said. “It becomes faster to build because there is no longer any back and forth between the business and IT department on what needs to be built and why,” said Seshadri. “Why would they ever want the developer? Even if a developer was available, it’s still way faster, and you’re not translating this to how somebody else is going to do it, and then have them give it back to you, and go back through this process again, when you could do it yourself. So, no business user would want to do it if they can build the apps themselves, or somebody in their organization can build the apps themselves. So it’s actually somebody who understands the part of the business, the process they are actually in.” Obdam added, “Long story short, no-code allows you to create specialized mobile apps that are part of a larger ecosystem. That’s where a no-code platform can truly excel: You build a platform supporting multiple mobile apps that are linked to a central back-office.
Every part of the ecosystem can be created with no-code. In that way, no-code continued on page 28 >
How no-code disrupts mobile development
Content provided by SD Times and
End users today have been conditioned to expect a lot from their applications. First, they should have an amazingly easy experience. They also should be able to utilize all the features of the device they’re running on. Apps should be intuitive and actually help users along, by anticipating and pre-filling data entry into forms and guiding users to the finish of their journey, whether it’s purchasing something, using a calendar, or simply finding out a piece of information. In fact, a huge number of consumer applications have these benefits. Unfortunately, a huge number of business applications do not. And why is that? Creating applications for mobile devices is hard. There are different operating systems, different versions of those operating systems, and the applications themselves need to leverage the device in an application-specific way. Plus, finding skilled mobile application developers is hard, and they are expensive. In the opinion of Praveen Seshadri, founder and CEO of no-code application platform AppSheet, “Mobile app development is a nightmare. Has been and still is, from the point of view of writing software. This is primarily due to the two dominant platforms used for mobile app development, with the code for each platform being widely different. So, any successful mobile app development project requires that two codebases must be managed and continually supported.” One of the great advances of mobile applications has been their ability to take advantage of the features of the device. Banks are leveraging the cameras in these devices to enable customers to take a photograph of a check and send it in to be deposited. And voice recognition programs like Siri, Cortana and Google help users engage their applications when their hands are otherwise occupied. “A truck driver should be able to talk
to his business app and say, ‘Hey, show me where I’m supposed to go next,’” Seshadri said. “In business apps, every one of them is some particular domain; maybe it’s about inspection of trucks. So I should be able to say to it, ‘Show me the trucks that have damage.’ Try saying that to Siri and it’s going to barf, because it has no clue what you’re talking about. It’s not scoped down to specific domains. It’s why every app needs to have app-specific intelligence.”
Seshadri explained that if a company is capturing data about inspecting its fleet of trucks, there should be a workflow that’s kicked off if one of the trucks is damaged. If there’s damage, then an email has to go to a manager, and it needs to carry a photo of the damage as an attachment. Or you’re collecting data, the mobile part is enabled, but you also have to be able to do reports every day, so management can see what’s happening. “You have to build dashboards and analytics,” he said. “And, in order to do this, you need to do auditing, because people capturing data need to worry about compliance.” These app development challenges and the consumerization of business applications are what led AppSheet to create a no-code app dev platform five years ago, designed for use by non-developers to solve some of their own business problems. And after years of evolving the product based on feedback from its customers, AppSheet this month is introducing two new artificial intelligence-related features to its platform: Optical
character recognition (OCR) and predictive models to drive better outcomes. Using the example of the bank check, Seshadri noted that not only was the camera taking the photo, but OCR under the hood could read and understand the content of the check, including the signature of endorsement. This eliminates the need to copy the content on the check and manually enter the information into a form. “OCR and AI working together probably needed a data scientist and a whole bunch of training to get the system to learn how to read those checks,” he said. “Maybe Bank of America or Wells Fargo had the resources to do this, but shouldn’t the hundred-person company be able to deliver an app that can read labels on things? That’s the interesting dimension of no code, you can create apps without developers, and you should be able to write intelligent apps, and you shouldn’t need data scientists to do this.” The OCR feature will be bundled with natural language processing capabilities into what AppSheet is calling its Intelligence Package. On the predictive modeling side, AppSheet has added features that perform a statistical analysis of users’ app data to make predictions about future outcomes. Each predictive model is powered by a machine learning algorithm that learns to generalize from historical data. One example of this type of predictive model includes categorizing customer input based on examples of the feedback and the categories to which they belong. Another example involves predicting customer churn by using customer instances and data on whether they have ended their relationship. “We’re trying to not make some esoteric thing that only data scientists and machine learning people with degrees from Stanford can understand,” Seshadri said, “but make it something that anybody can understand and consume.” z
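AppSheet has not published the internals of these models, but the general idea the article describes, learning categories from labeled historical examples and generalizing to new input, can be sketched in a few lines. The following is a minimal, illustrative naive-Bayes-style text categorizer in Python; the class name, categories and training data are invented for the example and are not AppSheet’s API.

```python
from collections import Counter, defaultdict
import math

class FeedbackClassifier:
    """Tiny naive-Bayes-style text categorizer: it learns per-category
    word frequencies from labeled historical examples, then predicts
    the most likely category for new feedback."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # category -> word frequencies
        self.category_totals = Counter()         # category -> number of examples

    def train(self, text, category):
        self.category_totals[category] += 1
        self.word_counts[category].update(text.lower().split())

    def predict(self, text):
        words = text.lower().split()

        def score(cat):
            total = sum(self.word_counts[cat].values()) or 1
            # log prior + log likelihoods with simplified add-one smoothing
            s = math.log(self.category_totals[cat])
            for w in words:
                s += math.log((self.word_counts[cat][w] + 1) / (total + 1))
            return s

        return max(self.category_totals, key=score)

clf = FeedbackClassifier()
clf.train("app crashes when I open the camera", "bug")
clf.train("crashes on login every time", "bug")
clf.train("please add dark mode", "feature request")
clf.train("would love to add export to PDF", "feature request")
print(clf.predict("the app crashes on startup"))  # expected: bug
```

A production model would use far more data and a real ML library, but the shape is the same: historical examples in, a category prediction out.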
< continued from page 26
facilitates the entire process, which also makes it easy to oversee and govern each component.”
The no-code mobile app dev process
No-code mobile apps are often built by business users for business users, meaning they don’t go beyond the organization’s employees. However, that doesn’t mean they don’t have to undergo the same scrutiny that a consumer-facing mobile app does. According to Forrester’s Hammond, there has to be some level of governance put in place as organizations begin to expand their use of no-code.
There needs to be someone in charge to offer advice and help during the process, and there need to be rules and standards about what apps should be created and how they should be created. In addition, the IT department should not be left out of the loop, especially if you expect them to maintain the applications. “IT ends up playing catch up because they weren’t aware that this was going on or are asked to maintain these apps. It is better to be proactive about that governance process because in general you can put an app out without doing any traditional testing,” said Hammond.
4 key practices for securing mobile APIs
As mobile APIs become more full-featured and rich, they become more dependent on data, key stores and connectivity profiles that can open new vectors of attack. This drives the need for better security and best practices to patch those vulnerabilities. Gartner research predicts that API abuse will be the number one attack vector for data breaches by 2020, and NowSecure said in a post that a whopping 85% of mobile applications fail at least one of the OWASP Mobile Top 10 criteria. Tom Tovar, the CEO of Appdome, a no-code mobile solutions security platform, told SD Times that 5-10 years ago, the onus was on consumers to protect their own data. Now, developers are picking up the flag of security and doing this on behalf of the user. “Proper security is always a layered approach. There’s no silver bullet to block all of the threats, and you have to release apps into the public market,” Tovar said, adding that there are four key practices to help block the biggest vulnerabilities of mobile APIs. The four key practices are:
1. Protect the connection: If cybercriminals can spoof a connection or intercept communications, they can perpetrate a devastating man-in-the-middle attack.
2. Include jailbreak and root protection: Jailbreaking gives cybercriminals complete control over the app. APIs must have protection to prevent being abused in this way.
3. Secure authentication and access: Many apps don’t use APIs that require secure authentication, giving anyone access to sensitive data.
4. Encrypt the data: Data used by APIs must be encrypted to protect against interception and manipulation.
Tovar added that there is a great demand for security engineers, and the current pace of app development is introducing new no-code tools to secure those applications. “Mobile app security is a highly specialized skill. There are really amazing, knowledgeable security engineers out there in the world.
But there’s not enough of them, and if you’re a mobile developer, you might have 2,000 developers building the app and two people securing it,” Tovar said. “We want to solve this human problem with technology to code these four layers of security into an app without relying on humans writing code.” z —Jakub Lewkowicz
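None of the four practices above is tied to a specific product. As an illustration of the third, secure authentication and access, here is a minimal Python sketch of HMAC-signed API requests, one common way a mobile backend can verify that a call is authentic, untampered, and not replayed. The endpoint path, payload and shared secret are all invented for the example; a real deployment would provision the secret through the platform keystore.

```python
import hashlib
import hmac
import time

# Hypothetical shared secret provisioned to the app at enrollment.
SECRET = b"demo-shared-secret"

def sign_request(method, path, body, timestamp, secret=SECRET):
    """Produce an HMAC-SHA256 signature over the request so the API
    server can verify both authenticity and integrity, and reject
    tampered or replayed calls."""
    message = f"{method}\n{path}\n{timestamp}\n".encode() + body
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify_request(method, path, body, timestamp, signature,
                   secret=SECRET, max_age_seconds=300):
    if abs(time.time() - timestamp) > max_age_seconds:
        return False  # stale request: possible replay
    expected = sign_request(method, path, body, timestamp, secret)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)

ts = time.time()
sig = sign_request("POST", "/v1/incidents", b'{"pole": 42}', ts)
print(verify_request("POST", "/v1/incidents", b'{"pole": 42}', ts, sig))  # True
print(verify_request("POST", "/v1/incidents", b'{"pole": 99}', ts, sig))  # False
```

This covers only one layer; connection protection (TLS with certificate pinning), jailbreak detection and payload encryption would sit alongside it.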
From a security perspective, business users need to be aware of compliance, security, and risk management policies. “From a security perspective, if you have people punch holes in your existing security posture to get access to data and you are not on top of that, then you increase your threat surface without even realizing it,” said Hammond. AppSheet’s Seshadri believes all this can be solved with the proper platform. “The fact of the matter is code is incredibly hard. Performance tuning, very hard. Making sure things are secure, very hard. Honestly, you do not want developers writing that code, because almost always every 20 lines of code has a bug in it. What you want is a platform to give you security by default. You want a platform to give you scale and performance by default, without letting them mess it up. And that’s what no-code platforms do,” he said. Just as users don’t worry about security or performance when using something like Google Docs or Office 365, business users should not have to worry about implementing security when developing no-code apps. “They should be given an abstraction, and a model that says, here are the security abstractions,” said Seshadri. Additionally, a platform should enable business users to go in and make changes, updates and add new features once the application is built. “If you enable a business user to build whatever you define as the first version of their application, it’s the equivalent of giving a mouse a cookie, because their needs don’t stop with just the mobile app or all the workflow messaging; their ambitions around the apps they need to build keep growing,” Seshadri explained. “There’s a whole end-to-end system around it. And that end-to-end system needs to be built. You need to be able to update it.
You need to be able to manage the data, do whatever ETL, archiving, reporting, analytics, auditing, scale it out if they have more people, deploy it in different languages… All of that needs to be available to the business user, but ultra simplified.” z
Angular powers business apps in the enterprise *But React and Vue.js are catching up
BY JENNA SARGENT
In today’s highly competitive digital world, designing great customer experiences is crucial. If your application or site isn’t pleasant to use, customers will use someone else’s. There are plenty of frameworks and libraries that can help developers design web applications with good UX/CX, one of which is Angular. Prem Khatri, vice president of operations at software provider Chetu, believes the two reasons Angular has become a favorite among developers are its automation and ease of use. Angular makes it very easy to develop large enterprise apps. Specifically, Angular allows for greater code readability, end-to-end testing, and faster initialization, configuration, and rendering. “Given Angular’s robust tools for creating web applications, such as its component-based architecture, CLI automation, dependency injection, Ivy Renderer, and its Google support, the platform is best used for the development of large-scale, UI-heavy web apps with dynamic content, as well as progressive web apps, or PWAs, that allow for app-like experiences on a web browser,” said Khatri. According to Stephen Fluin, developer relations lead at Google, about 90% of Angular applications are actually “behind the firewall.” Companies are creating Angular applications for internal use, to drive processes and workflows. “We will talk to a Fortune 500 company and find out that they don’t have one Angular application, they have 100,” said Fluin. “And this is actually very similar to what we see at Google with Angular, where we have thousands of Angular projects in Google and most of those are empowering employees
Accessibility in Angular
Web accessibility is an important thing to have, but unfortunately most websites don’t meet accessibility standards. In fact, a 2019 analysis by WebAIM of the top million website homepages found that 98.7% of them didn’t meet WCAG 2 (Web Content Accessibility Guidelines). While a framework can’t guarantee accessibility, there are several tools available to Angular developers to improve accessibility, Zama Khan Mohammed, software architect and author of the Angular Projects book, wrote in a post on the Angular blog. According to Mohammed, accessibility starts with UI design. He recommends that designers use color palettes with contrasts that meet the accessibility standards, choose appropriate typography, use responsive design, and create simple animations and interactions. A lot of accessibility issues can be addressed by using native elements and proper semantics. Here are a few rules to keep in mind:
• Use semantic tags (like nav, aside, and main) instead of just using div and span
• Use the right order for headings
• Use alt attributes on images
• Use buttons for things that can be clicked on. Further, if a noninteractive element has a click event, then add in key events for keyboard accessibility
• Associate labels with their form controls
• Avoid positive “tabindex”
• Use captions on video and audio
Other things to consider are keyboard navigation and focus management. According to Mohammed, keyboard navigation is important for users with motor disabilities. By making sure the tabs are in a logical order, it will be easier to navigate the website using just the keyboard. In addition to the tab key, there are other keys to keep in mind. The Angular CDK’s ListKeyManager helps maintain keyboard interaction for components like menus, dropdowns, selects, and list boxes. Focus management is also important for users who do not navigate with a mouse. “Knowing where the focus goes while using the application is really important for accessibility because we want users who do not use a mouse (screen reader/keyboard users) to be directed to the right place when an interaction occurs or when the route changes,” said Mohammed. Developers can force the focus of an element. This can become complicated in advanced situations, but the Angular CDK provides FocusMonitor and FocusTrap services to help developers handle focus. Finally, Angular’s Codelyzer can help detect common accessibility issues. “Accessibility is a must for all web applications and it should be considered from Day 1 in the project development lifecycle,” said Mohammed. “The Angular team provides tools to make it easier to create Accessible Components, and now it’s time for developers to utilize them and create accessible Angular applications.” z
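Rules like “use alt attributes on images” are mechanical enough to check automatically, which is the kind of linting tools such as Codelyzer perform on Angular templates. Purely as an illustration of the idea, not an Angular tool, here is a tiny auditor built on Python’s standard-library HTML parser that flags img tags missing an alt attribute:

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Walks an HTML fragment and records the position of every
    <img> tag that is missing an alt attribute."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())  # (line, column)

auditor = AltTextAuditor()
auditor.feed("""
<main>
  <img src="logo.png" alt="Company logo">
  <img src="chart.png">
</main>
""")
print(len(auditor.violations))  # 1 violation: the chart image
```

Real accessibility linters check many more rules (heading order, labels, tabindex), but each check follows this same walk-and-flag pattern.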
Angular gets a new compiler in Angular 9 According to Fluin, a big change to the Angular codebase has just been
released. Ivy is the next big milestone for Angular. Ivy is a ground-up rewrite of the Angular compiler and runtime, Fluin said. As of Angular 9, Ivy will become the default. It has backwards compatibility built in so that people don’t have to change their applications in order to take advantage of it. Ivy will allow developers to “lazy load just a single component,” Fluin said. This was a very highly requested feature, he explained. According to the team, Ivy will enable developers to significantly reduce the size of their applications. In addition to Ivy, the Angular team continues to work to improve the performance of Angular. “[We’re] doubling down on this idea that because we own more of the technology and build stack for building Angular applications, because we have our great CLI, we can continue to make applications better and better without people having to change their code,” said Fluin. z
Legacy app modernization is not a slash and burn process
BY JAKUB LEWKOWICZ
Legacy application modernization may mean different things to different people. But whether that means updating practices surrounding mainframes, adopting Agile and DevOps practices or updating to modern databases, legacy app modernization is necessary to keep pace with modern industry demands. “Legacy modernization at large means that you take advantage of enhanced operational agility and accelerated development,” said Arnal Dayaratna, research director for software development at IDC. In addition to its many different
definitions, there are many different types of modernization efforts organizations go through. While some choose to rip and replace legacy systems and build new ones from the ground up, others choose to use fully automated digital migration solutions. The important thing to remember, however, is that organizations shouldn’t be replacing legacy apps just for the sake of replacing them, Dayaratna explained. “Every legacy app doesn’t need to be modernized. There would need to be some kind of pain point or some kind of [issue] that modernization improves,” he said. “The best practice would be to focus energy [on] modernization efforts
and initiatives [that] are contained and specific to certain goals and areas of improvement with respect to an application.” “There’s a lack of understanding from companies that you don’t have to rip and replace all of your applications. You don’t have to do a major overhaul with COBOL applications that have been running for years. As we say, working code is gold. And as you want to improve those applications or change them, you can do that right on the mainframe. You can do that with COBOL,” David Rizzo, VP of product engineering at Compuware, added. According to recent research from the consulting firm Deloitte, the drivers
tions. In addition, there aren’t many tools yet that tackle modernization, Dayaratna added. “There is plenty of tooling to support developing new applications that are container-based or that are microservice-based… [but] there’s less tooling that is focused explicitly on transforming or modernizing legacy applications,” he said. Additionally, Dayaratna pointed out that it’s not easy to re-architect an application in a way that makes it more suitable for modern development if there is a legacy codebase and an application with say 10 million lines of code. That’s why Dayaratna explained most instances of legacy app modernization occur in an incremental fashion. “Let’s take a health insurance company that has a legacy app that processes claims. The claims come in and they have the patient’s date of birth, social security number, insurance identification, and different diagnosis codes that need to be processed. What the company will do is modernize that part of the application. It’s rare that an organization will transform an application wholesale. Now overtime, they may end up revamping it completely, but that
slash and burn process for legacy app modernization include the high cost of maintaining legacy applications, systems and infrastructure; and a shortage of employees that are skilled in legacy languages such as COBOL and Natural. Additional drivers include legacy applications that take too long to update functionality and teams being prevented from working on another part of the application while updating, according to Dayaratna. In order to address these issues, teams are trying to leverage Agile integration. However, the lack of knowledge about where to start, the steep learning curve, and the complexity end up being major challenges for organiza-
will take some time. That’s a massivegradual process,” Dayaratna said.
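The incremental approach Dayaratna describes (modernize the claims-intake slice first, leave everything else on the legacy system) is often implemented with a routing facade, a technique commonly known as the strangler fig pattern. Below is a minimal sketch of that idea; all handler and service names are invented for illustration and are not from any vendor mentioned in this article.

```python
# Sketch of a strangler-fig routing facade for incremental modernization.
# The facade decides, per request, whether the modernized service or the
# legacy monolith should handle it. All names here are illustrative.

LEGACY_HANDLERS = {"claims_intake", "billing", "reporting"}
MODERNIZED_HANDLERS = set()          # grows as slices are rewritten

def modernize(feature: str) -> None:
    """Mark one slice of the application as migrated to the new service."""
    MODERNIZED_HANDLERS.add(feature)

def route(feature: str, payload: dict) -> str:
    """Send a request to the new service if its slice has been modernized,
    otherwise fall through to the legacy system."""
    if feature in MODERNIZED_HANDLERS:
        return f"new-service handled {feature}"
    return f"legacy-mainframe handled {feature}"

# Initially everything goes to the legacy system...
assert route("claims_intake", {}) == "legacy-mainframe handled claims_intake"
# ...then claims intake is modernized while billing stays put.
modernize("claims_intake")
assert route("claims_intake", {}) == "new-service handled claims_intake"
assert route("billing", {}) == "legacy-mainframe handled billing"
```

The point of the facade is exactly the one Dayaratna makes: each slice can move independently, and the legacy system keeps serving everything that has not yet been rewritten.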
Mainframe modernization is essential
Although mainframes may seem like a blast from the past, they are still front and center when it comes to business operations, and they remain a constant in an age of vast technological transformations. Even as organizations move toward microservices and cloud-native applications, monolithic, legacy, and waterfall-based applications still power significant parts of the business, and modern CIOs don’t want to remove older systems that are working well, according to Dayaratna.
According to research from IBM, which makes one of the most widely used mainframes (IBM Z systems), mainframes are used by 71% of Fortune 500 companies. In addition, mainframes handle 87% of all credit card transactions and 68% of the world’s IT workloads.
“The number one challenge that virtually every enterprise has faced, or is currently addressing, is the fact that over 80% of enterprise data sits on a mainframe, while modern endpoints (phones, tablets) are not part of the mainframe model. So the actual challenge is that digital transformation initiatives must ensure that enterprise data/services be made available to modern endpoints with the digital experience that the customer expects, but with security built in to address CSO/regulatory mandates,” Bill Oakes, head of product marketing for API management and microservices at Broadcom, and David McNierney, product marketing leader at Broadcom, wrote in an email to SD Times.
As organizations undergo a data migration project, there are many other challenges they will have to consider, including the amount of downtime required to complete the migration, as well as business risks due to technical compatibility issues, data corruption, application performance problems, data loss, and cost.
While there is still little tooling for managing and modernizing mainframes, the tools that are available are making significant strides. “One of the interesting things that’s happening now in the industry is that many vendors are bringing modern development practices to mainframes and enabling modern application development to be performed on mainframe-based apps, which was largely not possible before,” IDC’s Dayaratna said. Historically, developers were not able to use a browser-based IDE to develop code for the mainframe or apply autocomplete and automated debugging capabilities. Now the tooling is catching up to modern development methods, and there will be more options for Agile development and DevOps on mainframe-based architectures, according to Dayaratna.
“What [bringing modern practices] allows you to do is [have] the mainframe act like every other application and every other platform within the enterprise; it has the same abilities to be developed in an Agile way, using modern DevOps tools, using modern IDEs. When you do that and modernize your development environment for the mainframe, that allows you to work with the other platforms and allows for every platform across the enterprise to be looked at in the same way. It can then be integrated into the same deployment pipeline so that you can move code through to whichever platform is being deployed to,” said Compuware’s Rizzo.
Legacy modernization is a challenge
“Modernization is all about scaling and being able to form the foundation for new data sources, real-time analytics and machine learning,” said Monte Zweben, CEO and co-founder of Splice Machine. While some organizations decide to take drastic measures by moving their legacy applications to the cloud, this isn’t always optimal, Zweben said. Sometimes it results in some improvements, but at a great cost. “The app [in the cloud] is the same application that was running beforehand on premise. In short, it’s a little bit more Agile because you can bring it up and down pretty quickly and lower some operational costs. But in the end, the cost of that application can even go up in the cloud, and the application hasn’t changed,” Zweben said. “We’re at the beginning of migration and modernization. One of my predictions for 2020 is that there’s going to be huge cloud migration and disillusionment. What I mean by that is [in] the first step, everyone seems to leap [into] cloud migration. I think they’re going to be disappointed with what they get out of that.”
Compuware’s Rizzo advised organizations to integrate with a hybrid cloud solution where some of the application’s functionality that talks to the mainframe is running in the cloud. These enterprises are able to benefit from all the years of investment they’ve put into their mainframes, and they also get to utilize the platform in their data center, which is the most secure way and the best at high-volume transaction processing. This way they get to keep the best of both worlds by using modern tools, Rizzo explained. “The good ones that are being progressive or being stewards of their platforms or their industry are looking at modernizing those applications as far as being able to integrate with the modern tool chain and being able to continue to develop them, and they’re looking at ways they can continue to keep those applications running as efficiently as they are and continue to support their customers,” Rizzo said. “So they’re looking at the future. And that’s where it’s key that they understand that they can keep them on the mainframe.”
Other organizations are taking the extreme approach of rewriting a legacy application completely on a new platform to be able to incorporate new data sources, use machine learning models and scale, said Zweben. “We think it is a huge mistake because with Splice Machine, you can keep the legacy application whole. We just replace the data platform underneath it, like the foundation, and avoid the rewrite of the application. This saves enormous amounts of time and money in that migration process and actually avoids real risk, because you have to quality assure much less when you’re just replacing the foundation rather than replacing the whole application,” said Zweben.
Managing the technology to work in sync has also been a struggle for organizations. “You had to bring operational data platforms to the table, either NoSQL systems or relational databases. You would have to bring [an] analytics engine. Typically they’re either data warehouses or Hadoop-based compute engines, Spark-based compute engines, and you bring machine learning algorithms into it. And this takes a lot of heavy lifting and distributed systems engineering,” Zweben said. “And that’s hard. This is really hard stuff to configure and make work and operate.”
“By using automated migration tools, companies will be able to migrate to any cloud way easier and open up new operating entities anywhere in the world on a dime,” according to Zweben.
API management is also an essential core component of digital transformation across every horizontal. “Digital initiatives based on APIs are all about providing scalable, reliable connectivity between data, people, apps and devices. To support this mission, experienced architects look for API management to help them solve the challenge of integrating systems, adapting services, orchestrating data and rapidly creating modern, enterprise-scale REST APIs from different sources,” Oakes and McNierney said.
Another problem that needs to be solved is that there are groups in organizations working in silos. Zweben gave the example that when an organization wants to implement AI into their applications, they’ll typically have an AI/ML specialist group working off to the side; in this case, the key is to put AI people on every team instead.
“Every enterprise is dealing with ongoing digital disruption and transformation in order to remain competitive in their market — regardless of the market,” Oakes and McNierney wrote. “Mainframe organizations understand the need to make the platform less of a silo and more like other platforms, so they have modernization-in-place initiatives underway.
Even career mainframers, who have built and maintained the mainframe-based systems of record that are the lifeblood of their organizations, understand this need.”
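The API-management pattern Oakes and McNierney describe, creating modern REST APIs from mainframe systems of record, usually begins with adapting legacy data layouts. The sketch below turns a fixed-width record (the kind a COBOL copybook defines) into the dictionary a REST layer would serialize as JSON. The field layout is invented for illustration and is not any real copybook or any Broadcom product behavior.

```python
# Sketch: adapting a fixed-width legacy record into the JSON-shaped dict
# a modern REST API would return. Field offsets are hypothetical.

LAYOUT = [                       # (field name, start, end)
    ("patient_id", 0, 8),
    ("birth_date", 8, 16),       # YYYYMMDD
    ("diagnosis_code", 16, 21),
]

def record_to_json(record: str) -> dict:
    """Slice a fixed-width mainframe record into named, trimmed fields."""
    return {name: record[start:end].strip() for name, start, end in LAYOUT}

raw = "00012345" "19701231" "E11.9"
print(record_to_json(raw))
```

A REST layer in front of this (whether a hand-rolled service or a commercial API gateway) would simply serialize the resulting dict as the response body of a request like `GET /claims/00012345`.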
How these companies can help you modernize your applications
David Rizzo, VP of product engineering at Compuware: Compuware delivers highly innovative solutions and integrations that enable IT professionals of all skill levels to manage mainframe applications, data and platform operations with ease and agility. As mainframe workloads continue to grow and platform stewardship shifts to cross-platform DevOps teams, providing common tools that automate, integrate and measure is key to empowering any IT professional to be productive on the mainframe, regardless of experience. As the Mainframe Partner for the Next 50 Years, each quarter we provide customers with net-new capabilities and enhancements to existing products that empower them to mainstream the mainframe, so they can fully leverage their mainframe investments and innovate on the platform. Our Topaz suite of development and testing solutions — which principally includes Topaz for Program Analysis, Topaz for Enterprise Data and Topaz for Total Test — enables developers to visualize and analyze programs; manage and prepare test data; and automate unit, functional, integration and regression testing, all within a modernized Eclipse-based development environment. ISPW, Compuware’s mainframe CI/CD tool, empowers users to quickly and safely understand, build, test and deploy mainframe code. Importantly, our tools integrate with a myriad of favored DevOps tools from companies like Atlassian, Jenkins, SonarSource, XebiaLabs, Elastic, BMC and more. We believe the only way to be truly Agile is to integrate mainframe-focused tools into a multi-vendor, cross-platform DevOps toolchain of choice. Our full product portfolio spanning development and operations, together with integrations, ensures customers can continue to advance mainframe development quality, velocity and efficiency in support of customer-facing digital innovation.
Monte Zweben, CEO and co-founder of Splice Machine: Splice Machine is the platform to modernize and extend the functionality of your legacy applications in record time. These applications need to scale to petabytes by leveraging new data sources and make intelligent decisions based on that data. Companies are rushing to modernize their applications that were written on legacy technology decades ago. To transform their applications, enterprises are either moving to the cloud, employing microservices, or rebuilding them on a NoSQL database. Each approach has its unique advantages and disadvantages. By moving to the cloud, enterprises can make their applications agile, but it is an exercise in infrastructure optimization that doesn’t fundamentally improve business outcomes. Using microservices and containerization can help enterprises rapidly deploy applications, but doesn’t leverage artificial intelligence. Replatforming the app on a NoSQL database enhances its scalability by accommodating massive amounts of data, but requires the application to be written from scratch, which increases its complexity exponentially. Splice Machine is an intelligent SQL platform that enables companies to be Agile, data-rich, and smart. Splice Machine’s approach is to migrate the application to its scalable platform without the risk and expense of re-writing. Enterprises have the flexibility to unify business analytics on the same platform and then inject artificial intelligence and machine learning natively. Now diverse applications across industries can make better decisions faster using more extensive and diverse data sets. With Splice Machine, enterprises can modernize their applications in a matter of weeks versus months or years using other technologies.
Bill Oakes, head of product marketing for API management and microservices at Broadcom Inc., and David McNierney, product marketing leader at Broadcom: Broadcom provides a comprehensive modernization portfolio to enable enterprises to modernize their legacy mainframe back-ends with modern front-ends, including mobile, cloud, and IoT. Broadcom’s Layer7 is a full lifecycle API management solution that enables API developers to build digital assets from systems of record by integrating existing applications in APIs, modernizing mainframe services, connecting to legacy assets, or building new microservices. For front-end developers, Layer7 provides client-native SDKs that handle the security operations for each runtime transaction. The developer can focus on building a delightful user experience while the solution encrypts the channel, manages authentication and authorization using native interactions and biometrics, and enables features that would not otherwise be possible. Broadcom’s CA Brightside provides enterprise-grade support for, and extensions to, the Zowe framework. A modern, open-source interface to the mainframe, Zowe offers a command-line interface (CLI) similar to the ones provided by cloud platforms like AWS and Azure, and an API Mediation Layer that breaks down the legacy mainframe silo. CA Endevor, the dominant mainframe SCM, now offers CA Endevor Bridge for Git. When combined with Zowe, this allows modern developers to use IDEs like Visual Studio Code and GitHub without disrupting their colleagues using legacy tools. By adopting Zowe with CA Brightside and CA Endevor Bridge for Git, mainframe and distributed teams can collaborate more closely while aligning on common DevOps tools and practices. Our team of experts will help your organization establish a solid API program strategy and overcome organizational, cultural, and technical hurdles to ensure a successful rollout.
A guide to legacy app modernization tools
FEATURED PROVIDERS
• Broadcom: With Broadcom, companies modernize their legacy applications by leveraging comprehensive API lifecycle management capabilities to maximize insights, efficiencies, and value across the entire organization. Its solutions include: CA Brightside, an enterprise-grade version of the Zowe open-source framework that breaks down the mainframe silo; the Layer7 full lifecycle API management solution; and CA Endevor, the dominant mainframe SCM that now offers CA Endevor Bridge for Git.
• Compuware: Compuware, a mainframe-dedicated software company, empowers the world’s largest companies to excel in the digital economy by taking full advantage of their mainframe investments. It does this by delivering innovative software that enables IT professionals with mainstream skills to develop, deliver and support mainframe applications with ease and agility.
• Splice Machine: Splice Machine is a scalable SQL database that enables companies to modernize their legacy and custom applications to be agile, data-rich, and intelligent — all without re-writes. Splice Machine helps organizations in demanding, data-driven industries deliver intelligent decisions at scale and accelerate the speed of doing business by incorporating new data sources and AI into their operational applications.
• Akamai: Akamai provides a distributed edge platform that offers intelligence to optimize devices, and capacity to move huge volumes of data and content, whether it’s broadcast to large audiences or personalized for each individual user. It also offers a defensive shield built to protect websites, mobile infrastructure and API-driven requests.
• CAST Software: CAST Imaging scans and automatically understands any complex software system. It reverse engineers the ‘as is’ architecture automatically, visualizes all dependencies via easy-to-navigate blueprints, zooms from overall architecture layers down to the finest detail, and enables impact analysis and to-be architecture exploration.
• Cumulus Networks: Cumulus Networks brings an open, modern and disaggregated, standards-based approach to building campus networks. It provides seamless integration with automation tools, and the ability to leverage common tools and best practices across a data center.
• IBM: After its recent acquisition of Red Hat, the company offers the portable platform Red Hat OpenShift for modernizing existing IT infrastructure and apps. It also offers the IBM Garage Method that connects customers to teams that help explore business goals, potential outcomes, and ideas for incorporating advanced cloud and cognitive capabilities like Watson AI, IoT, or blockchain into solutions and skill sets for teams.
• Micro Focus: Micro Focus offers solutions that allow users to: modernize the application delivery process with new practices and tools for Agile and DevOps; redefine where and how applications are built and deployed; unlock and modernize access to mainframe applications and data; and build services for integrating business processes and systems.
• Mulesoft: Mulesoft’s Anypoint platform enables users to connect legacy systems to digital channels quickly, lower legacy system maintenance spend, and insulate legacy systems from spikes in data requests. Mulesoft built a “WSDL decomposer,” a component that dynamically ingests a monolithic WSDL and exposes each operation’s functionality through distinct SOAP and REST services.
• Progress: Progress’ Kinvey platform uses modern microservices frameworks and low-code connectors to present disparate systems as a single, secure data collection or RESTful API, without having to replicate data or replatform legacy systems. It also includes patented hybrid data integration technology to get the benefits of cloud-native architecture.
• Quali: Quali’s Cloud Sandboxing solution accelerates and de-risks your application modernization initiatives by allowing you to bring both legacy and new applications into a common DevOps pipeline. Automated application and infrastructure blueprints ensure that application components are reliably deployed for all parts of the SDLC.
• Red Hat: Red Hat offers open, modular, cloud-ready platforms that help you transform legacy apps to modern, Agile ones, which increases business value. Its offerings include Red Hat JBoss Enterprise Application Platform (EAP), a Java EE-based application server runtime platform, and Red Hat OpenShift Container Platform, which gives full control over Kubernetes environments.
• Skytap: Skytap Cloud supports modernization by offering a progressive approach to migrating and modernizing application infrastructure, processes and architecture. It also uses an environment-based infrastructure model to provide self-service, eliminate configuration drift, and increase infrastructure utilization at global scale.
• Software AG: Software AG’s Adabas & Natural Application Modernization Platform is a low-risk, cost-effective approach to mainframe integration that makes indispensable legacy business applications easier to use and more accessible. This mainframe integration platform modernizes user interfaces, delivers business logic as services, and provides real-time access to transactional data from existing and new applications.
• Syncsort: Syncsort’s Connect for Big Data makes it clean and simple to ingest, translate, process and distribute mainframe data with Hadoop. Syncsort’s Elevate MFSort provides the fastest and most resource-efficient mainframe sort, copy, and join technology.
Guest View BY HANS OTHARSSON
Best of friends, greatest of enemies
Hans Otharsson is customer success officer at OpenLegacy.
Contrary to popular belief, the world is not full of start-ups. There’s a reason the world’s top X companies lists exist — these well-established powerhouses have been around for decades. The issue is that in this competitive, start-up environment, even well-established companies are now required to be as nimble as start-ups, and, if they don’t, they risk going out of business. It’s time for them to take a hard look at their technology stacks, ensuring their future success by making mainframes relevant as the decades change. They need to adopt agile and continuous development, while maintaining their existing environments — where the company’s most valuable data is located on a stable, long-operating legacy mainframe system. Therefore, these organizations end up with two camps, both charged with meeting the increasing digital requirements of Millennial and Gen Z customers, but with a very different perspective on resources. The camps are delineated by responsibilities, budget, people, and focus.
Us v. Them — The Mainframers
Most of the analysts and programmers benefit from having the largest chunk of the budget, about 70 percent, because they run the core enterprise software and need to ensure stability within the legacy systems. They have a certain degree of power because of longevity and because no one understands exactly what they do. The mainframers fondly remember the time when they were exactly where the digital transformation engineers are now. These mainframe analysts and programmers did amazing things with a technology that was new, constantly changing, and misunderstood. Their applications became a critical part of the corporate DNA — which they are responsible for maintaining to this day. They are used to being the guardians of the “crown jewels,” so they are focused on ensuring the jewels’ safety and longevity. Of course, it has been more than 20 years now. In the context of this exact role and time frame, others within the organization feel that they are
difficult to deal with. The perception is that these “old timers” are very guarded, conservative, and extremely territorial.
Us v. Them — The Digital Transformers
On the other side of the wall are the “innovative” people — architects and developers. The innovation people are all about “new demand,” addressing the new market and new customer needs, all the cutting-edge exciting stuff.
Stuck in the middle
The chief architect (CA) is the human “middleware” between teams, working with the mainframers to help overcome the challenges, finding the best path by which to connect the new technologies with the legacy systems. Technically, the CA struggles with the limitations of ESB and SOA and handling the stack of middleware. On the personal side, the CA must overcome the resistance from the camps of “Do nothing,” “Change nothing,” and “That’s how it’s always been done.”
Creating an atmosphere of cooperation
The goal: staff buy-in. The best-case scenario is one where the CA creates a solid architectural plan that clearly outlines the challenges faced by each side, with specific technologies and action items based on best practices that can overcome the issues. With a very clearly defined plan, the process of adopting and integrating new technologies into the legacy system will be simpler. This reduces the mystery, context, and complexity of the legacy application — making the integration much more efficient. It will also assuage fears that the process will lead to job or role elimination. The well-structured plan will communicate that the architects are simply taking the fantastic, high-performing, secure, bulletproof legacy system and allowing new visitors into the system via the existing doors. New technology platforms, such as microservice-based API integration, make adoption much easier. Members of both camps should work hand-in-glove to facilitate cooperation. Complete transparency, to strengthen ties and make the “them” into “us,” will get the job done.
Analyst View BY ARNAL DAYARATNA
Evaluating the ethics of software
In recent years, technology analysts have devoted much attention to the topic of developers and how the demographics of developers are changing. For starters, the International Data Corporation (IDC) has noted the growth of developer populations in China, India, Brazil, Russia, Indonesia and Turkey, as well as select countries in East Africa. IDC has also pioneered research into the category of “part-time developers,” a category of developers who do not have “developer” in their job title, but who nevertheless perform development-related work as part of their job roles and responsibilities. It is worth mentioning that the term “part-time developer” more accurately captures the way in which software developers are financially compensated for their work. Examples of part-time developers include business analysts, data scientists, data analysts, project managers, risk managers and strategy managers, as well as DevOps engineers, storage engineers, network engineers, and IT operations administrators. While the distinction between full-time and part-time developers is an important one, there is another category of developers that represents a critical piece of the conversation regarding developer populations worldwide. This category involves the professional resources that are responsible for evaluating the ethics of software applications. As AI/ML-based applications proliferate, it becomes important for ethicists to opine on the ethical implications of the implementation of AI/ML, and on whether a specific algorithm discriminates against one or more groups of people. Recent allegations that an AI/ML algorithm used by a prominent health insurer discriminated on the basis of race, for example, underscore the need for ethical reflection to accompany software development. Ethical reflection is required not only for AI/ML applications, but for any application that collects data about its users and customers.
While organizations have a long history of collecting data about customers, important ethical questions prevail about the extent to which end users are aware of how their data is used, particularly in cases involving the sale of that data to third party organizations. Is it ethical for software applications to render data about end users accessible to foreign governments or to transnational organizations that
seek to influence a political election? Similarly, is it ethical for organizations that are well known for committing or facilitating criminal activity to buy data from a software application that can subsequently be used to commit crimes? Meanwhile, all software should be examined for its inclusiveness with respect to whether it accommodates users with special needs because of a personal attribute such as blindness, deafness or the inability to use their hands or voice, for example. Does a particular software application discriminate against dyslexic or colorblind users, for example? Is there a way to mitigate or circumvent such discrimination? Can software be used to both identify and remediate a lack of inclusiveness with respect to the design of software applications? The larger point here is that contemporary software requires ethical deliberation and validation as part of the testing and QA process. This need for ethical validation has always been important for software applications, but the proliferation and ubiquity of contemporary software require a deeper level of ethical evaluation and analysis. Software is so interwoven into the fabric of our existence—in wearables, mobile devices, automobiles, digital assistants, consumer applications and enterprise applications—that a reflection on the ethics of software is critical to the software development lifecycle. Currently, the technology industry is woefully unprepared to undertake this responsibility regarding the ethical evaluation of software because of a lack of professional resources with the requisite training in both philosophy and software development. What this suggests is that, whereas a computer science degree was once one of the most highly sought-after degrees, going forward the technology industry will be in dire need of minds that have training in philosophy, ethics and computer science.
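One of the inclusiveness checks Dayaratna asks about can already be automated today: the WCAG 2.x contrast-ratio test flags foreground/background color pairs that are hard to read for low-vision and colorblind users. Below is a sketch of the published WCAG formulas for relative luminance and contrast ratio.

```python
# WCAG 2.x relative-luminance and contrast-ratio check: one automatable
# piece of the inclusiveness testing this column calls for.

def _channel(c8: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG definition."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple) -> float:
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum ratio, 21:1; WCAG AA wants >= 4.5 for body text.
assert round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1) == 21.0
# #767676 on white is roughly the lightest gray that still passes AA.
assert contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5
```

Wiring a check like this into the QA pipeline is a small, concrete example of ethical validation becoming part of the software development lifecycle rather than an afterthought.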
The infiltration of software into daily life requires the emergence of ways of evaluating the ethics of software that can be swiftly integrated into the software development life cycle itself, by ethics engineers who are as much software developers as the engineers who write code.
Dr. Arnal Dayaratna is Research Director, Software Development at IDC.
Industry Watch BY DAVID RUBINSTEIN
What follows CD? Progressive Delivery
David Rubinstein is editor-in-chief of SD Times.
Software development and delivery practices continue to evolve and change, so on the heels of the late October DevOps Enterprise Summit, attendees and journalists alike have been asking, ‘Where does it all go from here?’ One area involves value streams, the creation of which allows organizations to see waste in their organization and eliminate it for better efficiency and, ultimately, quality. Another is CI/CD. The practice of continually introducing changes to the codebase and deploying those changes out for testing and feedback prior to wide release is well understood. So, how does the industry improve on continuous delivery? According to Adam Zimman, VP of platform at feature experimentation software provider LaunchDarkly, the future is through ‘progressive delivery.’ As defined by Redmonk analyst James Governor and Zimman in mid-2018, progressive delivery allows organizations to roll out changes while being mindful of the users’ experience. In a nutshell, Zimman said, it’s about having increasing control over who gets to see what, when. “The idea of being able to garner feedback from specific cohorts prior to broad release of features or products has been a thing since people started selling stuff,” Zimman said. “In the past five years, the ideas around more continuous delivery — a faster cadence of release cycles — have shifted what tools people look to to be able to do this type of experimentation or controlled rollout.” A key tool for creating control points in software is the feature flag. Developers add these flags into their code, and they can be turned on or off for certain cohorts. It can be as simple as turning a feature on or off, or giving access in certain contexts but not in others. For instance, you might want to allow access to features in a staging environment for testing but not in production. And those types of access controls ultimately enable an organization to delegate who gets to manage them.
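A feature flag in its simplest form is just a conditional around the new code path, with the flag value looked up per environment: on in staging, off in production. The sketch below uses a hypothetical in-memory flag store; it is not the LaunchDarkly API or any other vendor's.

```python
# Minimal feature-flag sketch: flag values are looked up per environment,
# so a feature can be on in staging but off in production. The flag store
# and flag names here are hypothetical.

FLAGS = {
    "new-checkout": {"staging": True, "production": False},
}

def flag_enabled(flag: str, environment: str, default: bool = False) -> bool:
    """Return the flag's value for this environment, falling back to a
    safe default if the flag or environment is unknown."""
    return FLAGS.get(flag, {}).get(environment, default)

def checkout(environment: str) -> str:
    if flag_enabled("new-checkout", environment):
        return "new checkout flow"
    return "old checkout flow"

assert checkout("staging") == "new checkout flow"
assert checkout("production") == "old checkout flow"
```

Moving the flag values out of the code and into a shared store is what lets ownership of the control shift from developers to operations or business owners, the delegation Zimman describes.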
“If it’s something the developer is providing the feature flag mechanism themselves, or they’re storing the values in a simple database, then the ownership of that control resides with developers,” Zimman said. “You’re talking about accessing a database
directly; you’re talking about changing value in that database, so you need to have some level of engineering background to be able to do that without shooting yourself in the foot, basically.” Then, he added, as you start to roll that into production, “you may look to transition that ownership to an operations team, commensurate with a DevOps model where you want to have equal visibility and shared responsibility.” Finally, as you start to look at this functionality as it starts to impact users, you want to include the individual business owners that are closest to those business outcomes. The use of feature flags has been associated with feature experimentation — A/B testing, blue-green deployments — but Zimman sees distinctions between experimentation and progressive delivery. “Oftentimes, you think of an experiment: I want to put something out there and test if it’s better than what I had before. Or, if it’s something net new, then I want to see if people like it. But the goal of the experiment oftentimes implies that you’re going to have an outcome: either you’re going to roll it back for everyone, so that no one gets that new thing because it was worse, or you’re going to roll it out to everyone. “One of the key distinctions with progressive delivery,” he continued, “is the idea that you actually want to have an end state of this control point that is being thoughtful of the user base that it’s applicable to. I talk about this in the context of B2B. That makes a lot of sense for anybody who has kind of looked at the idea of multiple tiers of service, where you have a premium tier, an entry-level tier, and so on. You actually want new features to potentially only go to one of those groups. You’re not actually rolling it out to everyone.” In a post from Aug. 2018, Redmonk’s Governor wrote: “A great deal of our thinking in application delivery has been about consistency between development and deployment targets – see the promise of Docker and then Kubernetes.
A core aspect of cattle vs pets as an analogy is that the cattle are all the same. The fleet is homogeneous. But we’re moving into a multicloud, multiplatform, hybrid world, where deployment targets vary and we may want to route deployment and make it more, well, progressive.” And that’s progress.
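Zimman's B2B example, shipping a new feature only to the premium tier rather than to everyone, comes down to evaluating a flag against user attributes instead of flipping a global switch. A hedged sketch follows; the rule format, flag names, and user records are all invented for illustration.

```python
# Progressive-delivery targeting sketch: instead of a global on/off,
# the flag is evaluated against user attributes (here, service tier),
# so a new feature reaches only the chosen cohort.

ROLLOUT_RULES = {
    "advanced-analytics": {"tiers": {"premium"}},   # premium-only for now
}

def feature_on_for(flag: str, user: dict) -> bool:
    """True when the user's tier is in the flag's target cohort."""
    rule = ROLLOUT_RULES.get(flag)
    return bool(rule) and user.get("tier") in rule["tiers"]

premium_user = {"id": "acme-001", "tier": "premium"}
entry_user = {"id": "smallco-042", "tier": "entry-level"}

assert feature_on_for("advanced-analytics", premium_user) is True
assert feature_on_for("advanced-analytics", entry_user) is False

# Widening the rollout later is a configuration change, not a redeploy:
ROLLOUT_RULES["advanced-analytics"]["tiers"].add("entry-level")
assert feature_on_for("advanced-analytics", entry_user) is True
```

The "end state" Zimman describes is exactly this: the targeting rule is not a temporary experiment to be rolled forward or back for everyone, but a durable control point over which cohorts see which features.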
The latest issue of SD Times is now available. The December issue features waving the flag for feature experimentation, a legacy app moderni...
Published on Dec 2, 2019