SD Times - February 2019


Digital Transformation
A tool suite for integrated collaborative architectures
Capture an integrated view of what is, what will be, and the journey there.




Pro Cloud Server

Collaborate Seamlessly Across the Enterprise

• Optimize collaboration over distributed networks
• Tightly link Enterprise Architect with numerous platforms
• Streamline configuration and management
• Extend model access to your entire organization







News Watch


New software licenses aim to protect against cloud providers


A new enterprise data cloud emerges


California privacy act follows GDPR


Report: DevOps is causing chaos for enterprises


The OCF’s mission: Standardize the Internet of Things

IoT is the next digital transformation

On the definition of done

UX design: It takes a village


GUEST VIEW by Rakshit Soral Become a better React Native developer


ANALYST VIEW by Rob Enderle Will RAZR design cut into Apple’s lead?


INDUSTRY WATCH by David Rubinstein It’s time for data privacy legislation


DevOps in 2019: Seeking systems unicorns, tackling toil, and saying ‘yes’ to service silos

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2019 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at


Instantly Search Terabytes

EDITORIAL
EDITOR-IN-CHIEF David Rubinstein
NEWS EDITOR Christina Cardoza

dtSearch’s document filters support:
• popular file types
• emails with multilevel attachments
• a wide variety of databases
• web data


Over 25 search options including:
• efficient multithreaded search
• easy multicolor hit highlighting
• forensics options like credit card search

Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz
CONTRIBUTING ANALYSTS Cambashi, Enderle Group, Gartner, IDC, Ovum









The Smart Choice for Text Retrieval® since 1991 1-800-IT-FINDS



D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803













Python is the programming language of the year
The Python programming language is continuing to increase so much in popularity that the TIOBE Index has declared it the programming language of 2018. The index reported in August that Python was steadily closing in on C++ and headed toward the top three programming languages. As of the TIOBE Index’s January 2019 report, Python is now the third top language.
“For almost 20 years, C, C++ and Java are consistently in the top 3, far ahead of the rest of the pack. Python is joining these 3 languages now. It is the most frequently taught first language at universities nowadays, it is number one in the statistical domain, number one in AI programming, number one in scripting and number one in writing system tests. Besides this, Python is also leading in web programming and scientific computing (just to name some other domains). In summary, Python is everywhere,” TIOBE wrote.
According to TIOBE, the top 10 programming languages of the year are: Java, C, Python, C++, Visual Basic .NET, JavaScript, C#, PHP, SQL and Objective-C.

GitHub reveals free private repositories

Kong 1.0 for APIs generally available

GitHub wants to make its software development platform even more accessible to developers with updates to GitHub Free and GitHub Enterprise. The company announced GitHub will now provide developers with unlimited free private repositories as well as a new unified offering for enterprise users. The unlimited free private repositories will enable developers to use GitHub for private projects, and include up to three collaborators per repository.
“Many developers want to use private repos to apply for a job, work on a side project, or try something out in private before releasing it publicly. Starting today, those scenarios, and many more, are possible on GitHub at no cost,” Nat Friedman, CEO of GitHub, wrote in a post.
GitHub will still offer its free public repositories, which allow for an unlimited number of collaborators, Friedman explained.

Kong has announced that Kong 1.0 is now generally available. Kong is a microservice API gateway for managing, securing, and connecting hybrid and cloud-native technologies. The release includes features and fixes that make Kong faster, more flexible, and more resilient. Kong 1.0 can also be deployed as a service mesh proxy, with out-of-the-box service mesh functionality, integrations with technologies such as Prometheus and Zipkin, and support for health checks and canary and blue-green deployments.

Red Hat supports OpenJDK on Windows
Red Hat has announced long-term commercial support for OpenJDK on Windows. This addition to its existing support will enable organizations to “standardize the development and deployment of Java applications throughout the enterprise with a flexible, powerful and open alternative to proprietary Java platforms,” the company explained.
“By extending support to users running OpenJDK on Windows, we are reinforcing our dedication to the success of open source enterprise Java and its users,” said Craig Muzilla, senior vice president of the Core Products and Cloud Services Business Group at Red Hat. “With changes on the horizon impacting the long-term support of proprietary JDK solutions, we want to give customers the confidence that they can continue to do what they have been doing with minimal disruption.”

Python to get new governance model
In addition to being the programming language of the year, Python will also get a new governance model in 2019, according to the Python Software Foundation. The decision to come up with a new model was made after Python creator Guido van Rossum stepped down as the BDFL (benevolent dictator for life). The new governance model will rely on a five-person steering council to establish standard practices for introducing new features to the Python programming language. The steering council will serve as the “court of final appeal” for changes to the language and will have broad authority over the decision-making process, including the ability to accept or reject Python Enhancement Proposals (such as the one used to introduce this governance model), enforce and update the project’s code of conduct, create subcommittees and manage project assets. But the intended goal is for the council to take a hands-off approach, flexing its powers only occasionally.

JFrog to launch repository for Go modules
JFrog has announced a new public repository to support the open-source community of Go developers as well as the thousands of Go-oriented projects for building and validating Go modules. JFrog GoCenter is meant for software modules developed in the Go programming language, and is expected to officially launch to the broad community early next year.
“JFrog was founded on open-source technology and on making developers’ lives easier,” said Yoav Landman, co-founder and CTO of JFrog. “GoCenter underscores our commitment to the open-source Go community and addresses a need for reusable Go modules that every Go developer experiences.”
Go is an open-source language that was developed by Google engineers. It focuses on building simple, reliable and efficient software. According to JFrog, it is the “cloud programming language” and the fourth most popular language, with more than 2 million developers. However, the company believes the language suffers from a lack of a single, public and trusted source for developers to take advantage of and manage Go modules.

GitLab heads into serverless
GitLab is bringing serverless capabilities to its DevOps solution with the release of GitLab Serverless. The new solution was delivered in partnership with TriggerMesh, a provider of a multi-cloud serverless management platform.
“The TriggerMesh founders have deep expertise in serverless. Sebastien Goasguen has been at the forefront of the technology since the very beginning. We found his work on kubeless forward thinking and were thrilled to partner when the opportunity presented itself,” said Sid Sijbrandij, co-founder and CEO of GitLab.
GitLab Serverless will enable users to deploy serverless functions and apps to any cloud or infrastructure directly from the GitLab UI, the company explained. It is built using Knative, the open-source project for building, deploying and managing serverless workloads. The solution was officially released in GitLab 11.6.

Microsoft urges safeguards for facial recognition
As facial recognition technology advances, Microsoft is worried about how it could be taken advantage of. For instance, users could be unknowingly tracked everywhere they go. As a result, the company is asking technology companies to start taking action. “The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that
exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up,” Brad Smith, president of Microsoft, wrote in a blog post. Back in July, the company began to urge governments to adopt laws to regulate the technology. It even provided several recommendations. However, Microsoft explained that it can no longer sit back and wait for governments to step up. “We and other tech companies need to start creating safeguards to address facial recognition technology. We believe this technology can serve our customers in important and broad ways, and increasingly we’re not just encouraged, but inspired by many of the facial recognition applications our customers are deploying. But more than with many other technologies, this technology needs to be developed and used carefully,” Smith wrote.

Google introduces new initiatives to adhere to AI principles
In an effort to encourage teams within the company to consider how AI principles impact their projects, Google has established several new initiatives. The news comes months after the company detailed its seven principles for AI. The company has launched a training program to help employees address ethical issues that arise in their work. It also launched an AI Ethics Speaker Series and added a technical module on fairness to the Machine Learning Crash Course. In addition, the company established a formal review structure for assessing new
projects, products, and deals. The review structure consists of three core groups:
1. An innovation team for handling day-to-day operations and initial assessments
2. A group of senior experts to provide technological, functional, and application expertise
3. A council of senior executives for handling complex and difficult issues

Linux Foundation forms Compliance Tooling project
The Linux Foundation is working on improving open-source compliance with the formation of a new project. The Automated Compliance Tooling (ACT) project has been set up to consolidate investments, increase interoperability and help organizations manage compliance obligations. According to the foundation, while the use of open-source code is becoming very popular, it is important to remember there are licenses that users have a responsibility to comply with when using the code, which can be difficult or complex to deal with.
“There are numerous open source compliance tooling projects but the majority are unfunded and have limited scope to build out robust usability or advanced features,” said Kate Stewart, senior director of strategic programs at The Linux Foundation. “We have also heard from many organizations that the tools that do exist do not meet their current needs. Forming a neutral body under The Linux Foundation to work on these issues will allow us to increase funding and support for the compliance tooling development community.”
As part of the announcement, ACT is also welcoming two new projects that will be hosted at the Linux Foundation: OpenChain, a project that identifies key recommended processes for open-source management; and the Open Compliance Project, which will educate and help developers and companies better understand license requirements.

LF Deep Learning Foundation brings on training framework
Uber’s open-source distributed training framework Horovod is joining the LF Deep Learning Foundation to support its work in artificial intelligence, machine learning and deep learning. The LF Deep Learning Foundation is an umbrella project under the Linux Foundation.
Horovod is designed to work with other deep learning projects like TensorFlow, Keras and PyTorch. The goal of the project is to make distributed deep learning fast and easy to use; originally, it set out to take a single-GPU TensorFlow program and train it on many GPUs faster. According to the foundation, Horovod has been able to improve GPU resource usage with advanced algorithms and high-performance networks. Uber also explained that the project scales significantly better than standard distributed TensorFlow, and is twice as fast. Uber has been using Horovod for self-driving vehicles, fraud detection, and trip forecasting. It has also been used by other industry leaders such as Alibaba, Amazon and NVIDIA.
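Conceptually, Horovod’s data-parallel pattern is simple: each worker computes gradients on its own shard of the data, an allreduce averages those gradients, and every replica applies the identical update. The following is a minimal pure-Python sketch of that pattern, with a toy linear model and an in-process stand-in for the allreduce (no Horovod, TensorFlow or GPUs involved; all names and numbers are illustrative):

```python
# Toy illustration of data-parallel training with an averaging allreduce,
# the pattern Horovod applies to TensorFlow/Keras/PyTorch programs.
# "Model": minimize (w*x - y)^2 over data sharded across workers.

def local_gradient(w, shard):
    # Mean gradient of the squared error on this worker's shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def allreduce_mean(values):
    # Stand-in for Horovod's ring-allreduce: every worker ends up
    # holding the average of all workers' gradients.
    return sum(values) / len(values)

def train(shards, w=0.0, lr=0.01, steps=200):
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # in parallel in reality
        g = allreduce_mean(grads)  # identical result on every worker,
        w -= lr * g                # so all replicas stay in sync
    return w

# Data drawn from y = 3x, split across 4 "workers".
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]
print(round(train(shards), 3))  # converges to 3.0
```

In real Horovod the allreduce runs over the network (ring-allreduce over MPI or NCCL), which is where the efficiency gains over standard distributed TensorFlow claimed above come from.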




New software licenses aim to protect against cloud providers
BY CHRISTINA CARDOZA
Open-source companies are tired of being pushed around by cloud providers and technology giants. As a result, these companies are taking matters into their own hands with the development of new software licenses for their projects. For instance, earlier this year a group of businesses and developers came together under the Commons Clause, an initiative meant to add restrictions that limit or prevent the selling of open-source software. This has come from the frustration born of cloud providers using open-source software for their own commercial benefit without giving any recognition or reward to the open-source project they benefited from.
“I call it the big code robbery. Amazon and other cloud providers are taking successful open-source code and adopting it as their own cloud service. This is a complete abuse of the open-source concept,” Ofer Bengal, co-founder and CEO of the software company Redis Labs, creator of the open-source project Redis, said at the time.
However, this license was not widely accepted by the open-source community because it violated the meaning of open source under the Open Source Initiative’s definition.

On the heels of the Commons Clause
Following the Commons Clause, MongoDB announced the Server Side Public License (SSPL) — a new software license designed to protect MongoDB and other open-source software against costly litigation and other software problems.
“This should be a time of incredible opportunity for open source. The revenue generated by a service can be a great source of funding for open source projects, far greater than what has historically been available. The reality, however, is that once an open source project becomes interesting, it is too easy for large cloud vendors to capture most of the value while contributing little or nothing back to the community,” Eliot Horowitz, CTO and co-founder of MongoDB, wrote at the time of its announcement. SSPL is currently under review to become an Open Source Initiative-approved license.
“MongoDB is trying to address the exact same issue that Commons Clause does: unfair competition by cloud providers who are monetizing successful open-source projects to which they contributed very little (if at all). They take advantage of these projects by packaging them into proprietary service offerings, selling them and making substantial income — all while using their monopolistic power to compete with the very same companies that sponsor and develop those projects,” Bengal said.
Now, Apache Kafka software provider Confluent has announced the Confluent Community License, a new license that will enable users to download, modify and redistribute code, but will not allow them to provide the software as a SaaS offering, explained Jay Kreps, CEO of Confluent. The new community license is meant to replace the Apache 2.0 license for certain components of the Confluent data streaming platform, but will not affect the use of Apache Kafka, which was developed by the engineers at Confluent.

Fighting threats to open source
“What this means is that, for example, you can use KSQL however you see fit as an ingredient in your own products or services, whether those products are delivered as software or as SaaS, but you cannot create a KSQL-as-a-service offering. We’ll still be doing all development out in the open and accepting pull requests and feature suggestions. For those who aren’t commercial cloud providers, i.e. 99.9999% of the users of these projects, this adds no meaningful restriction on what they can do with the software, while allowing us to continue investing heavily in its creation,” Kreps wrote in a blog post.
“We think it is a necessary step. This lets us continue to invest heavily in code that we distribute for free, while sustaining a healthy business that funds this investment. I’ll explain why both of these things are important,” Kreps wrote. “The major cloud providers (Amazon, Microsoft, Alibaba, and Google) all differ in how they approach open source. Some of these companies partner with the open source companies that offer hosted versions of their system as a service. Others take the open source code, bake it into the cloud offering, and put all their own investments into differentiated proprietary offerings. The point is not to moralize about this behavior, these companies are simply following their commercial interests and acting within the bounds of what the license of the software allows,” he added.
Additionally, time-series SQL database provider Timescale announced its own mission to fight off cloud providers with the development of the Timescale License (TSL). The company has started to build new open-source features that will be made available under the TSL.
“Some of these new features (‘community features’) will be free (i.e., available at no charge) for all users except for the <0.0001% who just offer a hosted ‘database-as-a-service’ version of TimescaleDB. Other features will require a commercial relationship with TimescaleDB to unlock and use for everyone (‘enterprise features’),” Ajay Kulkarni, co-founder and CEO of Timescale, wrote in a post. As part of its mission, it will develop new features under open source, community or enterprise categories.
“Software licenses like the TSL are the new reality for open-source businesses like Timescale. This is because the migration of software workloads to the cloud has changed the open-source software industry. Public clouds currently dominate the space, enabling some of them to treat open-source communities and businesses as free R&D for their own proprietary services,” Kulkarni wrote.
In addition, Kulkarni explained that although Timescale is a huge advocate for open-source software, with its database built off of the open-source PostgreSQL project, he finds it difficult for open-source projects to maintain long-term sustainability. “Many OSS projects
start from within and are nurtured by much larger corporations, or develop as community projects relying on external sponsors or foundations for financial support. These backers can be fickle. And building complex software infrastructure requires top-notch engineers, sustained effort, and significant investment,” he wrote.
The company hopes to address long-term sustainability as well as the wants and needs of the community with the new license and the availability of proprietary and community features. The open-source features and community features will be made publicly available for free, while the enterprise features will require a paid relationship in order to generate revenue and help Timescale become self-sustaining, according to Kulkarni.
While all these efforts share the same goal, preventing or limiting cloud providers from taking advantage of their work, not everyone in the open-source community is happy with the way things are being handled. Chris Lamb, current project leader of the Debian project, explained that the companies associated with these open-source projects are not looking at the big picture when they start implementing their own licenses. “Efforts like the Commons Clause and SSPL do not begin to address the problem as (amongst many other reasons) they are not free software licenses, assume a ‘zero sum’ attitude towards the world and ultimately are a short-sighted & retrograde step for the companies that are promoting them,” he wrote in an email to SD Times. Lamb believes the licenses ignore the principles and people that have supported the open-source community for the past couple of decades in favor of fiscal elements.
But Lamb does agree that cloud providers remain an ongoing threat to the open-source community. “I believe that some free and open source software is in danger of being taken over by companies that do not share the values of the communities in which they participate,” he wrote.




A new enterprise data cloud emerges
The Cloudera and Hortonworks merger that was first announced in October officially completed this month, paving the way for a new Cloudera. As part of the merger, the former rivals will live under the Cloudera name and offer an enterprise data cloud capable of supporting hybrid and multi-cloud deployments as well as providing machine learning and analytics capabilities.
“We are very excited about where this puts us in the big data, cloud, analytics, and IoT/streaming markets,” said Arun Murthy, co-founder and chief product officer of Hortonworks and now Cloudera’s CPO. “Cloudera is in a unique position where we can offer customers an open source approach to multi and hybrid cloud, all with common shared services like security and governance.
“Think about the large-scale cloud providers — it is in their best interest for customers to stay with their cloud only, therefore they don’t offer multi-cloud flexibility,” Murthy added. “Hybrid is another area where we have strengths and we see a lot of customers adopting hybrid approaches. The cloud is perfect for ephemeral workloads, but those can get quite costly in the cloud since they’re always on. We are delivering frictionless hybrid from the Edge to AI so customers can play the data where it lies.”
According to Noel Yuhanna, principal analyst for Forrester Research, this is huge news for the big data space because it will raise the bar for innovation. Yuhanna explained that more than 40 percent of organizations suffer from failed big data initiatives because of deployment complexity and lack of skilled resources. “We see that cloud big data management platforms are still evolving, especially when it comes to end-to-end support for multi-cloud in the area of storage, integration, metadata, transformation, and governance. We believe that a combined platform from this merger will help overcome this gap based on their solution offering.
In addition, the merger is likely to create an even broader big data services offering that will help customers support even more complex and larger business initiatives,” said Yuhanna.
Yuhanna added that Hortonworks’ big data experience will blend nicely with Cloudera’s artificial intelligence and machine learning skills. For instance, Cloudera’s CEO Tom Reilly explained Hortonworks has invested heavily in real-time streaming and data ingest to support IoT use cases at the edge, while Cloudera has invested in machine learning and AI to provide data scientists the ability to automate machine learning workflows.
“This merger is likely to focus on innovation to make big data simpler whether for cloud or on-premises, or hybrid cloud. It’ll likely focus on automating all functions of big data platforms such as administration, integration, security, governance, and transformation requiring less resources, so that organizations can accelerate their business initiatives with big data,” said Yuhanna.
However, not everyone in the industry is on board with the merger. “I can’t find any innovation benefits to customers in this merger,” John Schroder, CEO and chairman of the board at MapR, stated when the merger was revealed. “It is entirely about cost cutting and rationalization.”
Murthy argues that the combined resources, scale and talent of Cloudera and Hortonworks will only put Cloudera in a position to do more, and to do it faster. Yuhanna echoed similar sentiments, believing the combination of the two big data powerhouses will only accelerate the pace of innovation necessary to support more business use cases in today’s digital world.
“Together the new Cloudera has the scale it needs to service the constantly changing needs of the world’s most demanding organizations and to grow even more dominant in the market,” Murthy said. “New open-source standards such as Kubernetes, container technology and the growing adoption of cloud-native architectures are major parts of Cloudera’s strategy. Our primary initiative out of the gate is to deliver a 100-percent open-source unified platform, which leverages the best features of Hortonworks Data Platform (HDP) 3.0 and Cloudera’s CDH 6.0. Cloud-native and built for any cloud — with a public cloud experience across all clouds.”
In addition, Murthy noted the new Cloudera will remain committed to its existing solutions. For instance, HDP 3, Cloudera Distribution Hadoop (CDH) 4 and CDH 5 will continue to be supported for at least the next three years; customer support channels will remain unchanged; Hortonworks Data Flow will be available with CDH; and Cloudera Data Science Workbench will be available on HDP.




California Consumer Privacy Act follows in the GDPR’s footsteps
BY JENNA SARGENT

Last May, the European Union’s General Data Protection Regulation (GDPR) went into effect, created to give consumers more control over how their data could be used by large companies. Shortly after the GDPR took effect, California approved a new regulation that will go into effect starting January 1, 2020.
In many regards, the California Consumer Privacy Act is very similar to the GDPR, explained Lev Lesokhin, EVP of strategy and analytics at CAST Software. The new law will provide California residents with the right to control the data that companies collect about them. It will enable you, as a consumer, to set limits on how a company can use your personal data.
The GDPR imposes a fine of up to 4 percent of global annual revenue or €20 million (whichever is greater) on violators, while the California law opens up the ability to sue companies that don’t comply, he explained. He believes that the penalty for violating the California law is actually less stringent than the fine for the GDPR. “How much can an individual really sue you for letting the data be breached or letting the data be used without their consent? It’s probably not going to be anything close to 4 percent of revenue, at least for big companies.”
Lesokhin believes that the new law will have a compounding effect when paired with the GDPR. “GDPR seems far away, but for any technology company with any kind of global ambitions, Europe is a big market. The same
would be said of California. That’s a big market as well, so even if GDPR didn’t exist, most companies would have to pay attention.” Even though the new law technically only protects Californians, the rest of the United States will likely experience the same benefits as someone living in California. Implementing the functionality required by the law is going to take a lot of effort, and it would take even more effort to have to implement functionality that acts differently on California residents versus anyone else, Lesokhin explained. Prior to the GDPR going into effect,
data governance company erwin released a study that revealed that only 6 percent of organizations felt that they were prepared for the upcoming regulation. Several big tech companies are already under investigation for violating the GDPR, such as Twitter, Fortune has reported.
“We’ve seen a lot of our European customers take a much more proactive stance to it than U.S. companies,” said Lesokhin. “When the law came into effect in May, I think a lot of companies here just started to wake up to the fact that this is real for them. And so we’ve seen some of the preps that you have to do and it can actually be pretty intense.”
Lesokhin explained that it is harder for organizations with legacy systems to comply with these types of regulations. Newer tech companies with newer technologies can leapfrog their competitors and build to the regulations, giving them an advantage. But older organizations will have more to consider. They will need to look into their systems and determine how their applications manage that data, as well as all of the touch points of that data, so that they can make sure the data is compliant. “That process of actually mapping out the flow of your data through all of your systems can be pretty complex and we’ve seen that first hand,” said Lesokhin. “I think one thing that’s clear here is that there’s more of a premium now being placed on how data access and data handling is being architected in software,” he said.

The California Consumer Privacy Act website lists 10 rights that the law will provide consumers:
- Right to know all data collected by a business on you.
- Right to say NO to the sale of your information.
- Right to know the categories of third parties with whom your data is shared.
- Right to know the categories of sources of information from whom your data was acquired.
- Right to DELETE your data.
- Right to know the business or commercial purpose of collecting your information.
- Right to be informed of what categories of data will be collected about you prior to its collection, and to be informed of any changes to this collection.
- Mandated opt-in before sale of children’s information (under the age of 16).
- Enforcement by the Attorney General of the State of California.
- Private right of action when companies breach your data, to make sure these companies keep your information safe.
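Lesokhin’s comparison of the two penalty regimes can be made concrete with a toy calculation. The sketch below is illustrative only: the GDPR fine is the greater of €20 million or 4 percent of global annual revenue, the $750 figure is the commonly cited upper end of per-consumer statutory damages under the California act’s private right of action, and both the revenue and plaintiff counts are made up:

```python
# Toy comparison of maximum GDPR exposure vs. aggregate exposure under
# the California law's private right of action. All inputs illustrative.

def gdpr_max_fine(global_revenue_eur):
    # GDPR: the greater of EUR 20 million or 4% of global annual revenue.
    return max(20_000_000, 0.04 * global_revenue_eur)

def ccpa_exposure(consumers_suing, damages_per_consumer):
    # Hypothetical aggregate of per-consumer statutory damages.
    return consumers_suing * damages_per_consumer

print(gdpr_max_fine(5_000_000_000))  # EUR 5B company: 200000000.0
print(ccpa_exposure(100_000, 750))   # 100k plaintiffs at $750: 75000000
```

Which illustrates the point: even a six-figure class of plaintiffs is unlikely to approach 4 percent of a large company’s revenue.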



On the definition of done
BY CHUCK GEHMAN
The productivity of a team is heavily dependent on establishing the “definition of done” for that team, something that has been debated for at least a decade. It could be questioned whether “done” is even achievable in today’s continuous improvement environment. I argue that however it is defined, it is important to understand the obstacles that organizations accidentally impose and, hence, stop “done” from being achieved. All of these, whether caused by cultural or process issues, can be overcome.
Let’s start with cross-functional teams. Unless the Scrum or Agile team is joined at the hip with IT operations, the “definition of done” could differ. For instance, a Scrum team might
focus on what they are doing in a vacuum, separate from what is going on in the world of customers: stories are completed, unit tests passed, code and security reviews covered, code ready to merge, and so forth. But all this focuses on the developer, not on the delivery of business value to the outside world. Imagine the frustration after shipping a new application when, on the first day it is in use, technical support receives a call from an important international customer asking for address line two to be 65 characters instead of 45. Even worse is when a team believes they are shipping to production on Tuesday evening, when “out of nowhere” an IT operations professional rains on their parade by telling them they need to schedule a database change three weeks in advance.

Chuck Gehman is an IEEE member and an engineer with Perforce Software.

Estimation Developers may also be distracted by firefighting and bug fixing, which can derail the schedule. That leads us to the next point: estimation. For instance, allocating 60 percent of a developer’s time to stories in an iteration (i.e., being productive) sounds reasonable, but things might happen that get in the way. Also, teams notoriously struggle with estimation; they may over- or under-allocate resources. One of the key tenets of Agile is the breaking up of projects into pieces and delivering value early, which in itself is a good way to define both “done” and success. In a four-week release cycle, it’s important to have all the stories planned for week one actually happen in that week and not bleed into the next — and the same goes for each subsequent week.

Interference We all know that during a project, people understandably come up with valid, wonderful suggestions for improvements. This can not only affect delivery timeframes, but also cause scope creep. Going back to that four-week release example, we can recognize such a timeframe is long enough for many great ideas to come forth that could be added. However, it is better to stick with the discipline of finishing what was started. There are long-term cultural benefits to this, not least of which is keeping developers motivated and avoiding technical debt, both of which get in the way of “done.” Here’s a typical scenario. The CEO comes up with a suggestion for additional features. They may sound fine in principle but, in practice, implementing the


ideas within the current release may mean introducing elements that have not been estimated. This can force shortcuts, or create other dependencies that come back to haunt the next release. Change does not always equate to Agile. Of course, nothing is set in stone and, if a suggestion still delivers value and can be built solidly within the same timeframe, it probably makes sense to make that addition. This brings us to a final and very important point: business value, a term that is used so much that eyes glaze over. But when business value is vague, then it is hard to define “done.” If the success criteria defined by the business are not met, then “done” has not happened.

Answers The answer to defining business value that drives a “done” state is to have a clear charter of what the business value is. Set clear goals, such as: “Four months after we ship this new version of this online video game, it will provide this much additional revenue.” Or, “Within two months, we expect new player acquisition generated by existing player invitations to increase by 25 percent.” We should always be striving to build a “done product” that delivers the most value to the customer. In planning, the most valuable product, feature, or release should have the highest priority. Then, while the sprint is in progress, everyone on the team should frequently be asking themselves, “Is this the most important thing I should be doing right now? And if not, why not?” A realistic amount of time needs to be budgeted for unforeseen technical debt. It is better to identify this potential obstacle early in the planning process. Doing that might involve sitting the product owner down with developer teams and considering whether the story is doable, is too big, what still needs to be completed, and so on (in other words, a backlog refining process). It’s important that this refinement occurs before any sprints begin. In addition to “definition of done,” there is a case for “definition of urgent.” For instance, is that additional feature, or distraction from project scope to put out a fire, really an emergency? As a manager, I should ask myself: “If I do not interrupt this developer from what she is doing (which is the most important thing we should do), and have her work on the CEO’s suggested feature or production defect, will the world come to an end?” Or, “Will I lose this important customer?” Unplanned work is the biggest accidental cause of not achieving “done” for most teams. Being mindful of the impact of these types of changes will not only increase productivity but, just as important, engender developers’ trust that the business will not interrupt their work while they are deep in a project. Similarly, when carrying out estimations, it is essential to include what is expected of Ops within the release timeframe, such as in our example of the database change. In addition to realistic estimation, it is a good idea to remove the personal element, too. Observe over time what the true velocity of a team is, rather than rely on an individual’s guesswork. Agile planning tools can help a lot here. Yes, the Scrum team probably should be joined at the hip with IT Operations. This is what DevOps is about. By its very nature, DevOps provides the transparency, collaboration, and the fulfillment, in a concrete way, of the basic Agile goal of delivering value to the customer. DevOps does provide greater flexibility but within a defined and well-documented framework of practices, so there are no surprises for anyone, internal or external to the team. From a process perspective, improving each of the scenarios we’ve talked about here can be managed in a variety of tactical ways. However, it is the much-vaunted DevOps culture that takes “done” to another level. That a great company culture results in greater team productivity is well understood. While “done” is going to differ according to each project (and that may well be within the context of continuous improvement), the fundamental route to achieving that is identifying how to best avoid the roadblocks that people and processes can create. z
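The advice to observe a team’s true velocity over time, rather than rely on an individual’s guesswork, can be sketched in a few lines. This is an illustrative sketch only; the sprint history, rolling window, and commitment buffer are invented for the example, not taken from any particular planning tool:

```python
# Toy velocity tracker: plan against what the team actually finished,
# not against anyone's guess.

def rolling_velocity(completed_points, window=3):
    """Average story points completed over the last `window` sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

def safe_commitment(completed_points, buffer=0.8):
    """Commit to less than the observed average, leaving headroom for
    unplanned work (firefighting, the CEO's surprise feature)."""
    return int(rolling_velocity(completed_points) * buffer)

# A team that estimated 40 points per sprint, but actually finished:
history = [28, 31, 26, 30, 29]
velocity = rolling_velocity(history)   # observed, not guessed
commitment = safe_commitment(history)  # what to plan for next sprint
```

Planning the next sprint from `commitment` rather than the original 40-point estimate is exactly the “remove the personal element” step: the number comes from observation.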





Report: DevOps is causing chaos for enterprises BY CHRISTINA CARDOZA

Initiatives like DevOps and Site Reliability Engineering (SRE) were designed to bring development and operations teams together to deliver better software. However, a recent report has found the shared accountability these initiatives promote is actually causing problems. OverOps’ Dev vs. Ops: The State of Accountability report found DevOps is creating chaos and confusion when it comes to application reliability and downtime. The report is based on responses from more than 2,400 development and IT professionals. A majority of respondents reported a lack of visibility, data and metrics as the number one obstacle to their DevOps initiatives, while 40 percent cited moving too quickly as a cause of errors in production. “Successful DevOps isn’t just about moving fast and eliminating barriers between teams. It’s about unifying the right people, processes and tools to gain a complete understanding of your system and ensure the delivery of reliable software. Without clearly defined workflows and insight into what’s happening at the deepest level of your environment, more accountability ultimately means more problems,” said Tal Weiss, CTO and co-founder of OverOps. According to the report, 67 percent of respondents find their entire team is to blame when an application breaks or experiences an error, and 73 percent believe both Dev and Ops are equally accountable for the entire quality of the application. Weiss explained this inability to distinguish an “owner” is one of the biggest

problems causing the chaos. About a quarter of the respondents also stated there is a lack of clarity around who is responsible for code quality. Weiss explained it is especially hard for the operations team to identify who is accountable for code quality because of their core responsibilities. “In a given day they may receive a handful of automated alerts related to a service disruption or slowdown, and it’s up to them to use dashboards, monitoring tools and logs to quickly determine where the problem is and who is responsible for fixing it. When ownership for a given release, application or piece of code is ambiguous, it’s difficult for ops to pull in the right people and move fast to get an issue resolved. Meanwhile they’re primarily being measured by uptime and how long it takes for them to resolve incidents. This means every moment wasted just trying to locate the problem and determine who is accountable is a strike against them,” he said. According to Weiss, in order to address and maintain DevOps processes, teams need to have greater visibility into their environments. “If you don’t have insight into what’s happening at every level of your environment, it’s impossible to effectively identify the root cause of an issue and who is responsible for fixing it,” he said. “As the lines between these two teams continue to blur, organizations will need to focus on adopting tools that deepen visibility into their applications. Clarifying ownership of applications and services, and avoiding the ‘multiple owners = no owner’ syndrome is crucial for even the most bleeding-edge organizations,” the report stated. z

In other DevOps news… • Customer experience (CX) assurance platform provider Cyara announced a new DevOps integration as a way to help businesses improve the speed and quality of their development, testing and production teams. According to the company, the new features will enable businesses to increase their automation initiatives and provide an Agile approach to CX design and management. Cyara is looking to add additional DevOps solutions for defect tracking, lifecycle management and IT ticketing systems later this year. • Veracode released the results of its annual State of Software Security report, which found that DevSecOps was having a positive impact on the overall state of security. According to the report, organizations with active DevSecOps initiatives in place fixed vulnerabilities over 11.5 times faster than organizations that do not use the approach. The report also found that there is a strong correlation between how often an organization scans for vulnerabilities and how quickly they address those vulnerabilities. There is a comparable jump in fix rate with every increase in scan frequency until organizations hit 300 scans per year, at which point the percentage of closed flaws skyrockets to almost 100 percent. Veracode hypothesizes that greater scan frequency indicates a higher likelihood that organizations are practicing DevSecOps. • XebiaLabs announced new DevOps as Code features as an effort to break down the barrier to entry for DevOps. According to XebiaLabs, DevOps as Code enables teams to specify end-to-end DevOps pipeline flows, infrastructure configurations, and deployment settings. The latest features include a new CLI, best practices, and Everything as Code, which enables teams to store definitions for deployment packages, infrastructure, environments, release templates, and dashboards in YAML files.
In addition, the new features enable development teams to define configurations and connect to code evaluation tools and compliance checks to monitor security and compliance tests at a glance. DevOps teams can also use the company’s blueprints as a reference and share their own best practices across their organization. z




IoT is the next digital transformation BY JENNA SARGENT


As people become more inclined to let technology into their lives and homes, the market for Internet of Things (IoT) devices flourishes. In fact, according to research from Bain, the IoT market is expected to grow to $520 billion by 2021, which will be more than double what it was in 2017.

Jonathan Sullivan, CTO of NS1, believes the first wave of IoT came when people started to realize that you could put a SIM card in just about anything and have it connect back to the Internet. While not particularly useful, those applications did gain a lot of attention. “I think there was initially some hype around it when people realized you could connect your rice cooker or your toaster to the Internet and have it trigger on a schedule or link up to your calendar,” said Sullivan. However, those types of applications of IoT did not prove to be very successful in the market, he explained. Devices running Amazon’s Alexa or Google Home, for example, aren’t often thought of as IoT, but the category “definitely kind of fits the bill,” explained Sullivan. An entire ecosystem of devices has emerged around them, such as smart lightbulbs and home control solutions. “All of those have kind of converged and merged so that it’s easier to control everything in the smart home,” said Sullivan. “So I think the smart home is its own category, but it’s definitely what I would consider IoT — things that are not computers that are now enhanced by their ability to connect to the Internet and do something more than they would otherwise be able to do because they’re provided with context or connectivity to other devices or they’re linked together.” The more insights you can gain on yourself and your surroundings, the more opportunity there is to improve life, explained Greg Baker, global VP and GM of cyber digital transformation

at Optiv. For example, a smartwatch that can predict the early signs of a heart attack has the potential to save someone’s life if they are able to get to the hospital faster than they otherwise would have, he explained. According to Sullivan, another example is Nest, which makes smart thermostats that can be controlled via the Internet or be put on set schedules. In New York City, Nest has a partnership with Con Edison, which is an electricity provider. During the summer, if there is a heat wave, there is a setting that users can opt into that raises the thermostat’s temperature setting so that the air conditioner isn’t going at max. This takes load off of the electric grid and helps prevent blackouts or brownouts, Sullivan explained. It also gives participating consumers a discount on their electric bill, so it benefits the consumer as well as the city. “I think that those are really going to be the major types of very interesting and compelling innovations as people figure out how to connect more and more things,” said Sullivan. “I think there’s still a bunch of terrible products that are Internet-connected and have no reason to be Internetconnected, but I also think there’s a lot of really valuable applications, particularly in industrial and agriculture,” said Sullivan. “For example, the ability to put out remote cameras or remote sensors that are solar powered and talk back to a grid. You can do a lot of really interesting analysis of data or anemometers for really granular wind measurements across farms. There’s a lot of really interesting, but I guess what we would consider more boring or mundane applications of the technology, and by connecting smart sensors back to the Internet or to a grid, you can do a lot of really really interesting things that were just not possible 10 years ago.”
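The Nest and Con Edison arrangement described here amounts to a simple piece of opt-in control logic. The sketch below is hypothetical; it is not the actual Nest or Con Edison implementation, and the offset and cap values are assumed for illustration:

```python
# Hypothetical demand-response logic, as described above: during a grid
# peak event, raise the cooling setpoint of thermostats whose owners
# opted in, so air conditioners aren't running at max and load comes
# off the electric grid.

def effective_setpoint(setpoint_f, opted_in, peak_event, offset_f=4, cap_f=82):
    """Return the cooling setpoint (degrees Fahrenheit) to use."""
    if opted_in and peak_event:
        # Nudge the setpoint up, but never above a comfort cap.
        return min(setpoint_f + offset_f, cap_f)
    # Non-participants, and normal days, are unaffected.
    return setpoint_f
```

During a heat wave, an opted-in home set to 72°F would run at 76°F, while a home already set to 80°F would be capped at 82°F; homes that did not opt in keep their chosen setpoint.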


Advances in edge computing will also enable more use cases for IoT deployments. “We don’t want to have to send data to the cloud or to a remote data center before you can act on it,” said Paul Miller, senior analyst at Forrester. “It might be out on an oil rig or halfway up a mountain in a wind farm and in those sorts of situations, you may not have the bandwidth to send data back and forth, so we are seeing a lot of thinking in this space around working out exactly what you need to process locally, what you need to transmit somewhere else, and understanding how those play together.”
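Miller’s local-versus-remote split can be made concrete with a toy edge-aggregation step. The wind-farm scenario matches the article; the function names, summary fields, and gust threshold are assumptions for illustration:

```python
# Toy edge-processing split: keep raw high-frequency sensor readings
# local, transmit only a compact aggregate over the constrained uplink,
# and act locally on anything urgent without a cloud round trip.

def summarize_window(readings_mps):
    """Reduce a window of raw anemometer readings (meters per second)
    to the small summary actually worth transmitting."""
    return {
        "n": len(readings_mps),
        "mean": sum(readings_mps) / len(readings_mps),
        "max": max(readings_mps),
    }

def should_alert(summary, gust_limit_mps=25.0):
    """Decide locally, e.g. to feather a turbine, instead of waiting
    on a round trip to a remote data center."""
    return summary["max"] > gust_limit_mps
```

Only the three-field summary crosses the network; the raw readings never leave the edge device, which is the bandwidth trade-off Miller describes.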

IoT adoption growing With IoT, the industry is going through yet another digital transformation, Baker explained. Now that organizations have already adopted cloud computing, they are looking at data more holistically and want to enrich that data with what others are doing in the industry. “It might sound ridiculous to have a smart fridge that can auto-order groceries for you or other things like that, but there’s also real-world applicability to some problems that can be solved for people’s day-to-day lives,” he said. According to data from Forrester, last year 36 percent of enterprises were either implementing or expanding their IoT deployments. On top of that, 28 percent were planning on implementing IoT in the next 12 months. Combining those numbers together, 63 percent of enterprises are either doing IoT or at least planning on it in the next year, Miller explained. According to Miller, the largest industry segment for IoT at the moment is in the industrial products space. Forty-five percent of respondents in the industrial sector are





Security is still a concern for IoT According to Eilon Lotem, CTO of SAM Seamless Network, there are three main security issues currently impacting IoT: zero visibility, lazy consumers and unaware vendors, and exponential growth. The first problem is that there are no monitoring or auditing mechanisms inherent to IoT. As a result, when data breaches happen, the users and vendors have no visibility into what is happening, and therefore do not know who or what they are trying to protect against, Lotem explained. Second, customers are willing to buy devices that do not have built-in security. IoT device manufacturers are trying to maximize profits, which means they will not place much value on securing devices. “The average human behaves in such a way that they first act on their emotions and desires and then think about the consequences of them,” Lotem said. “For example, when a child asks his parents to buy a cool smart speaker as a Christmas gift, how many parents first consider the potential security risks involved in that purchase? Regarding organizations, it is primarily an issue of cost. If IoTs can save on operational costs or increase revenue, the more probability there will be that organizations will adopt IoT first and then consider the security second. In both cases, we feel privacy concerns will not affect IoT adoption.” In fact, Baker believes that the desire to create good experiences may be negatively impacting security. When customers go to set up a new device, they don’t want to have to go through a lot of steps to get it set up. “I mean, consumers, when you buy things want to just plug them in and have them work, right, pretty easy, simple setup,” he said. “So when you do add those additional security layers and it’s more complex for the average person to set up, you might get those things like poor Amazon reviews or things like that.
So I think it’s a balance for IoT providers and IoT creators of devices to balance the complexity of how secure setup should be vs what are consumers going to want to access.” Finally, as the number of IoT devices grows, the attack surface also grows, especially when IoT vendors are trying to maximize their profits and not making security a priority, Lotem explained. However, Miller believes that IoT security is improving. “We’re seeing far fewer examples of IoT devices going out into the field that have a hard-coded default password,” he said. The increased security is a result of two different factors. “Partly, we’re seeing better consideration of security in the products themselves,” Miller explained. “And partly, we’re seeing growing awareness and growing understanding amongst those organizations that are actually deploying this stuff that they actually need to think through some of these issues.” z

already implementing IoT, while pharmaceuticals and medicine are next on the list. Miller believes that the oil and gas industry will see an increase in IoT very soon. Currently, only about 32 percent are implementing IoT, but 42 percent plan on implementing over the course of the next year. “There’s a lot of interest and a lot of ground being laid to move quite quickly in the oil and gas space and start doing things at scale,” said Miller. Miller believes there is an increased interest in gathering analytics and insights on IoT devices. Initially, the focus for IoT was just to get the machines connected up and working in the first place. Now, there is an interest in actually adding an intelligent layer on top and enabling things such as predictive maintenance or predictive scheduling, he explained. “Just connecting things to the Internet on their own doesn’t actually deliver much value,” said Miller. “It’s taking this bigger step and thinking about the business value you’re trying to deliver, thinking about analytics, thinking about how it possibly changes the business model or an interaction with an end user. And that I think is where the conversation really needs to move more and more.” Miller predicts that organizations will more often begin to scale out their IoT deployments. Currently, many IoT deployments, especially in the industrial sector, are pilots or proof-of-concepts, which are often very small and localized. For example, an individual plant manager or individual production line manager may use IoT to gain visibility into “their own little bit of the world,” Miller explained. “But actually as we start to look at some of the bigger opportunities around analytics, around machine learning, around prediction and predictive maintenance for example, then you really have to start moving out of the small proof-of-concepts, the small pilot, and actually have to start bridging the divide between the operational side of an organization and the IT side of the

organization,” said Miller. “And we’re seeing a huge amount of focus on that at the moment. Sometimes it’s talked about at this IT/OT divide. And we’re going to have to work out how to bridge that if these IoT deployments are going to scale beyond the pilot to actually become something useful that delivers real business value, because you’re going to have to connect in all these other business systems outside the factory itself.”

Privacy concerns don’t impact adoption If you talk to someone in the security community or someone knowledgeable about technology, they may express concern over putting Internet-connected devices in the home. But, Sullivan believes they are a vocal minority compared to the rest of the population. Sullivan believes that if American homes were polled, the majority of them would already have an IoT-type device in their house that they don’t even realize is an IoT device. For example, smart scales that send your weight or body fat percentage to some software when you step on them, or even smartwatches that have their own chips in them, such as Apple Watches. “People have already kind of let this stuff into their home,” said Sullivan. “You’ve already got cell phones which are always on, always connected, microphone/camera enabled devices. And I think you heard similar concerns when smart phones really arrived — that it was too powerful or too much access was provided to Google or Apple. I think a lot of the arguments are kind of similar for the Alexas and the Google Homes. I think we’ll get over that hurdle pretty soon.” According to Reggie Best, president and CPO of FireMon, a firewall provider, privacy concerns are not really driving people away from IoT. “I think there are factions within organizations that are concerned about it, but I would say that those projects are happening anyway.” He likens it to the concerns around cloud when people were first getting onboard with that, or the concern when PCs first started entering businesses.


Best practices for securely connecting IoT devices One major security concern is that these devices have a connection back to their cloud or provider, Baker explained. An attack like a DNS hijack or a cloud server compromise could allow an attacker past your security stack, using the persistent connection from the device to the cloud. Something as seemingly innocent as a smart fridge could be opening your network up to vulnerability. “All it takes is one device to compromise what’s on your network and then they can learn more about getting into the rest of the devices on your network,” said Baker. “The vendors have to be accountable for securing everything from your home, all the way through… I would say that any responsible company putting things into the market, security should be baked into their design process,” said Baker. Baker recommends that consumers have segregated networks in their home to protect themselves. Consumers should have a network for their personal devices, one for IoT devices, and a guest network. “All of your IoT devices live on their own connections so that way if somebody was to expose or breach one of those, they wouldn’t have the ability to get in to where all your normal personal data exists,” said Baker. For companies, Baker recommends that the IoT network be treated as “untrusted” and only allow IoT device activity. Companies should also be limiting peer-to-peer activity as much as possible. z
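Baker’s segregated-network advice boils down to a default-deny policy between segments. The sketch below is a toy model of that policy, not a product configuration; the segment names and the allowed-flow table are assumptions for illustration:

```python
# Toy model of the segmentation advice above: personal, IoT, and guest
# devices live on separate network segments, and the IoT segment is
# treated as untrusted: no flows from IoT into the personal segment.

ALLOWED_FLOWS = {
    ("personal", "personal"),
    ("personal", "iot"),       # you may control your own devices...
    ("iot", "internet"),       # ...and they may reach their cloud service
    ("personal", "internet"),
    ("guest", "internet"),
}

def flow_allowed(src_segment, dst_segment):
    """Default-deny: any flow not explicitly listed is blocked."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS
```

Under this policy a compromised smart fridge on the "iot" segment can still phone home, but it cannot pivot to a laptop on the "personal" segment, which is the containment Baker describes.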

The future looks bright for IoT “There’s a lot of talk now about the conjoining of cloud infrastructures and IoT, and I expect those to be used kind of in tandem, particularly as we start getting to more autonomous transportation systems and citywide innovative infrastructures like light control systems and so forth,” said Best. “That whole combination of cloud and IoT is an important emerging kind of area that will be a lot more talked about over the coming few years.” Baker believes that we’ll start to see manufacturers add different devices to have a portfolio of smart devices that can share information among them on a home network. We’re already seeing that with devices connected to Google

Home or Alexa, but Baker believes more industries and manufacturers will follow suit. “IoT will become a big part of life, impacting dramatically the way we communicate, behave and think,” said Lotem. In addition, there will be a race to control the IoT market that might cause havoc because of a lack of device management that may bring “greater security risks and less protection.” Baker predicts that there will be more targeted attacks against IoT devices as more people adopt them and have them in their homes. However, he also believes that manufacturers will look for ways to secure their devices to protect against those attacks. z




The OCF’s mission: Standardize the Internet of Things BY CHRISTINA CARDOZA

The Internet of Things (IoT) is rapidly growing with smart devices like cars, refrigerators, televisions and wearables already integrated into our everyday lives. But as people start relying on IoT more and more, the Open Connectivity Foundation (OCF) is on a mission to provide interoperability for consumers, businesses and developers. “The true potential for IoT — just like the Internet — is in interoperability. Simply developing a new app for each new device and ecosystem isn’t conducive to scaling. For developers to succeed, quick, secure and interoperable development is key,” said Clarke Stevens, chair of the data model tools task group and vice chair of the data modeling work group at the Open Connectivity Foundation. According to Stevens, the problem is that there are too many approaches and takes on IoT. “IoT is most useful when it is usable by the most people and the most companies,” he said. What the OCF has set out to do is to bring these different approaches together and create a single experience that would allow these devices to start talking to one another. David McCall, chair of the strategy work group for the OCF, explained businesses are in such a huge rush to bring something to market, they aren’t looking at the bigger picture. “You are getting compelling use cases, but they are fairly limited in their ability to interoperate across a broad range of devices,” he said. McCall explained that IoT devices typically speak a proprietary language to a

cloud service, app or phone. Now if you have multiple different IoT devices, that means you are going to have multiple apps for each device and that “doesn’t scale well,” according to McCall, unless you are willing to buy everything from the same manufacturer. The OCF’s mission is taking place in two parts: 1. Specifications, code and certifications; and 2. Improving end user experience through interoperability and compliance. Through the foundation’s open-source project IoTivity, it provides a framework for enabling device-to-device connectivity, regardless of the operating system. The OCF also provides a “lite” version of IoTivity, which is a lightweight implementation of the OCF 1.3 specification, and targets constrained hardware and software environments that rely on resource utilization, energy efficiency, and modular customization, the OCF explained. Another way the OCF is working towards achieving interoperability is through its online tool oneIoTa, which is designed to encourage the design and use of interoperable device data models. Other tools include a bridging specification for discovering and connecting to other ecosystems and a security framework for protecting against threats. “OCF is designed around interoperating with other standards, and as a result, does not require developers to convert all devices to the OCF specification. Overall, the OCF architecture, based on atomic-level interoperable resources, provides a solution to bring in existing ecosystems and make them interoperable,” said Stevens. The foundation recently released version 2.0 of its specification for securing IoT. The latest release focused on security and device-to-device connectivity over the cloud, and defined a standard for device communication. “In previous versions of the specification, IoT devices could communicate with each other over the proximal network. But once users left their homes, they were unable to control their devices via their tablet or mobile phone. With the new cloud features, this will no longer be a problem,” the OCF wrote in a blog post. In addition, the OCF’s 1.0 specification was recently ratified as an international standard by the Joint Technical Committee for ICT standardization of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), also known as ISO/IEC JTC 1. The next step is to submit its latest 2.0 specification to the ISO/IEC JTC 1. “We are excited to see the ISO/IEC Joint Technical Committee 1 approval of the OCF specification as an international standard for the IoT ecosystem,” said John Park, executive director of OCF. “Achieving this recognition reinforces OCF’s contribution to the global IoT community to solve the interoperability gap between devices, supported by the development of ISO/IEC 30118 standardization and OCF-certified devices in the coming years.” z
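The “atomic-level interoperable resources” Stevens describes can be pictured with a small sketch. “oic.r.switch.binary” is a real OCF resource type for an on/off switch; the surrounding helper functions are an illustration of the idea, not the IoTivity API:

```python
# Hedged sketch of an OCF-style resource: a device exposes a typed,
# self-describing representation, so any client that understands the
# resource type can use it, regardless of manufacturer.

def make_binary_switch(href, value=False):
    """A minimal OCF-style binary switch resource representation."""
    return {
        "href": href,                           # where the resource lives
        "rt": ["oic.r.switch.binary"],          # resource type: on/off switch
        "if": ["oic.if.a", "oic.if.baseline"],  # supported interfaces
        "value": value,                         # current on/off state
    }

def toggle(resource):
    """Flip the switch; works for any resource of this type."""
    resource["value"] = not resource["value"]
    return resource

light = make_binary_switch("/light/1")
```

Because the resource type, not the vendor, defines the contract, a bridge to another ecosystem only has to map its devices onto resources like this one rather than teach every app a new proprietary protocol.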

Be a more effective manager

Visit the Learning Center Watch a free webinar and get the information you need to make decisions about software development tools.

Learn about your industry at



Software development shops are bringing data scientists, IT pros and end users into the user experience discussion BY LISA MORGAN

UX design and testing continue to evolve with the emergence of new technologies that enable new types of experiences. Mobile and web apps changed the conversation from UI to UX. Now, UX involves much more than graphical UIs and app performance. Organizations must deliver omnichannel experiences that contemplate voice interfaces, virtual elements and more. “We’re just scratching the surface of UX design as a discipline. It’s not UX/UI. UX is a much broader discipline than UI,” said Jason Wong, research VP at Gartner. “UI looks at how the user interacts with a given application. UX includes user research, content, performance and back-end so you need a team to help execute that.” Beyond graphics, UX designers and testers must contemplate conversational interfaces, immersive experiences, and the IoT. “It’s not just about graphics, it’s about tones,” said Wong. “We’ve seen companies hire copywriters from the movies and TV to help script responses for chatbots and hone what that voice response should sound like in terms of a voice interaction. Very complex things like that are still emerging which UX is still trying to grasp because they’re so new and they’re constantly changing.”

UX requires a holistic view
UX designers need to think about how their brand translates to different types of experiences and how the brand translates across those experiences. “At a meta level, the biggest shift has been toward recognizing that experience design is multichannel,” said Martha Cotton, managing director of Design Research at Fjord, the UX arm of Accenture. “You need to think about the channel, but when you’re designing an experience, [you have to realize that] people don’t interact on a channel-by-channel basis.” Instead, they think about the tasks they want to accomplish. For example, noisy airports aren’t the best settings for phone calls or voice interfaces, so an individual might choose to check a credit card balance using a smartphone or laptop. However, while driving, the same person might call the credit card company and check her balance using an interactive voice response (IVR) system. “When you’re a designer, you need to think about the omnichannel world and how people engage overall with a service. That can include any number of touch points, some of which may not have been invented yet,” said Cotton. In order to get those experiences right, UX designers are working closer than ever with end users in the form of participatory design, which involves the user in the design of an interface early on. For example, Cotton said when she helped a printer company develop a new app, the project included a panel of approximately 40 users who were engaged over the course of several months, albeit not continuously. “I’m seeing this notion of explicit co-creation,” said Cotton. “Another interesting trend is the integration of data scientists and UX researchers. [We’re seeing] UX teams made up of a multidisciplinary set of folks and they’re working to integrate data science with qualitative UX design. Uber did exactly that. It was this constant interplay between what they were learning from the data scientist’s perspective and out in the field understanding the rationale for people using the service. They were testing a series of new features and able to pivot based on the data that was coming through the data scientist’s effort and user research insights.” The implication is that designers are spending less time in front of a screen and more time out in the field trying to understand what an experience should be from an end user’s point of view.

Agile applies to UX
Not surprisingly, Agile ways of working are impacting UX design and testing in the form of iteration. LinkedIn founder Reid Hoffman once said, “If you are not embarrassed by the first version of your product, you’ve launched too late.” The quote, though, reflects a common misperception about Agile, which is that the methodology is all about speed. “It’s a ridiculous quote because there’s only one opportunity to make a first impression. Do you really want to make a poor impression with a crappy app?” said Gartner’s Wong. “When I talk to clients and they say we just want to reskin our website or take portions of our website and put it on our mobile app, I tell them that will create a bad impression. If a user is excited enough to download your app and finds it’s a crappy reskin of your website, they’ll probably delete it. They won’t reload it and they won’t give you the opportunity to understand how they’re using it, so design is really integral to the whole Agile and DevOps process.” While speed is a benefit of Agile, two other important benefits are the alignment of business and IT and improved quality. “Businesses are adopting Agile methodologies such as LeSS and SAFe that help entire businesses think about how they should move to continuous, iterative work and improvements,” said Wong. “The preconception of Agile from a development and IT perspective is that we can do things faster.” In fact, in a recent note, Gartner stated that achieving Agile and DevOps


success requires setting business expectations and measuring success in terms of business value. That requires aligning the business and IT around Agile and DevOps. “Digital design and techniques like design thinking, human-centered design and then going to prototyping techniques like Lean Startup and Lean UX help refine those ideas and uncover what the problems are because the problems and issues faced by the users are not explicit,” said Wong. “Since they’re not visible, you have to uncover the hidden challenges. Empathy is a big thing in design, stepping into the shoes of your users to understand what they need even if they don’t recognize it themselves.” Through 2023, Gartner expects human rather than technical factors will cause 90% of Agile/DevOps initiatives to miss expectations of higher customer value and faster product delivery. “A lot of companies think they’re doing Agile, but they’re not actually being successful at it, so they’re going through the motions and they’re not wholly embracing the concepts and taking it to its fullest and changing the culture entirely across their organization,” said Wong. “We see companies trying to do a wholesale shift to Agile, and that typically fails. It typically takes two to three years before the Agile mindset is fully indoctrinated within any large development organization, so within the first year there should be progress, but by no means should companies expect to become fully Agile in the first year.” Another issue is that management doesn’t understand customer value, according to Gartner. “Before, the business would bring its projects and ideas to IT on an annual basis. All sorts of executives would get together, pick and choose different projects, and then bless them. You’d start building it and then there would be scope creep and budget overruns and the thing would get pushed out. That’s waterfall.
Agile is really about being responsive so you can see what’s happening in a shorter period of time and responding to market demands, customer demands, and allowing the




business to adjust its strategy to whatever disruption is out there.” Synonymous with the brand is the experience it delivers, which is why Agile ways of working must extend to design professionals as well as developers, DevOps and the business. “Agile without design thinking is still a form of order-taking,” said Wong. “Ultimately, Agile is about iterative work. If you don’t do an upfront design/discovery exercise, how do you know that MVP is actually viable?”

More design expertise is necessary
UX design is not a one-size-fits-all proposition because a product or service may span multiple touchpoints that are navigated differently by different users. “It’s not so much application development organizations taking requirements now or the business going to IT and saying we need to build a mobile app or a chatbot and we want it to do this,” said Wong. “Upfront design means solving the customer problems in the most efficient and effective manner for usability, engagement, [and] stickiness.” However, the ratio of designers to developers is very low, particularly in enterprises, according to a Gartner survey. In trailing enterprises, the ratio of developers to UX professionals is 21:1. At the leading companies where building mobile apps is central to the business, the ratio is 2:1. At design agencies, it’s 4:1, and at ISVs, it’s 9:1. “If you have one designer and 21 developers, that’s one designer across five teams potentially,” said Wong.

“Developers should understand the principles behind design and UX but they themselves are not fulfilling the roles of UX design. UX design encompasses many different areas of expertise like user research, content and performance.” An emerging role is the UX architect, who considers the digital landscape beyond the UI level, such as performance requirements and the services that will be necessary to deliver experiences across multiple touch points. “The UX architect is an important role for bridging IT and business,” said Wong. “Design as a competency should not reside within IT because it becomes an execution arm. [Instead,] design really should be the glue between the business and IT.” In line with the asymmetric balance of designers versus developers, Gartner found that only 28% of organizations are mature at UX design. One problem is that UX design as a function can reside in different places in an organization such as marketing, an innovation center or IT. The function also tends to lack a charter. “The question we get from the mainstream and conservative companies is standardizing design for their apps because they’ve got old apps and SaaS apps from all kinds of vendors,” said Wong. “The question is how do they standardize that so employees are not confused by all the different UI elements and they don’t have to jump from one app to another app to fill in data and do repetitive things.” Increasingly, users expect to accomplish tasks from within an app rather than jumping from one app to another.

So, instead of a salesperson logging into a customer relationship management (CRM) app, a customer service ticketing system and an expense reporting system, there’s a new responsive web app or new mobile app that interfaces with various systems.
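The single-app pattern described above — one responsive app fronting several back-end systems — can be sketched as a small aggregation layer. The system names, endpoints, and field names below are hypothetical stand-ins, not any vendor's API:

```javascript
// Hypothetical back-end clients; in a real app each of these would be an
// HTTP call to the CRM, ticketing, and expense systems.
const crm = { getAccount: (id) => ({ id, name: "Acme Corp" }) };
const tickets = { openFor: (id) => [{ ticket: 101, status: "open" }] };
const expenses = { pendingFor: (id) => [{ report: "R-7", total: 420 }] };

// One call returns everything the salesperson needs, so the user never
// has to jump between three separate apps.
function customerDashboard(accountId) {
  return {
    account: crm.getAccount(accountId),
    openTickets: tickets.openFor(accountId),
    pendingExpenses: expenses.pendingFor(accountId),
  };
}
```

The design point is the facade: the UI talks to one aggregation function, and the individual systems remain swappable behind it.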

UX testing is evolving too
UX testing is becoming more iterative, and the tests themselves extend past the UI. “I’m seeing a more agile approach to testing [in the form of] much more fluid, rapid bursts of testing,” said Fjord’s Cotton. “I’m seeing a lot more of what we call ‘micro pilot testing,’ so, rather than testing an entire interface, taking small pieces and launching them, maybe even in the real world, to get a sense of how they’re doing and making tweaks as necessary.” To improve the success of digital products, Gartner recommends organizations instill a continuous quality mentality across development and operations. “Now I’m thinking about how I can deploy my application intelligently so that I can learn about its effect on my business, as well as mitigate some risk, so I’m looking at things like a canary release, or a blue-green release or using feature flags, coupled with launching darkly,” said Jim Scheibmeir, associate principal analyst at Gartner. “One thing we need to understand is reliability is an aspect of user experience.” Few organizations are using hypothesis-driven chaos engineering (a modern form of testing) to understand how complex environments and dependency degradation impact their applications. According to Scheibmeir, organizations that embrace that type of practice will achieve a competitive advantage and retain customers even on bad days when the application is behaving at its worst. “There’s a cultural aspect to this which is it’s my job and we don’t throw quality or testing over the wall to another team,” said Scheibmeir. “Part of this is about architecture and building applications in a way that enables us to do blue-green releases or canary releases.” z
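The canary and feature-flag approach Scheibmeir describes can be illustrated with a minimal percentage-based rollout check. The hashing scheme here is an illustrative assumption for the sketch, not any particular feature-flag vendor's SDK:

```javascript
// Deterministically bucket a user into 0-99 so the same user always
// sees the same variant while a flag is rolling out.
function bucket(userId) {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 100;
}

// A flag is "on" for a user when their bucket falls below the current
// rollout percentage; raise the percentage as the canary proves healthy.
function isEnabled(flag, userId) {
  return bucket(userId) < flag.rolloutPercent;
}

// Roughly 10% of users get the new experience at this setting.
const newCheckout = { name: "new-checkout", rolloutPercent: 10 };
isEnabled(newCheckout, "some-user-id");
```

Because bucketing is deterministic, a user's experience stays stable across sessions while the team watches error rates before widening the rollout.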



DevOps in 2019: Seeking systems unicorns, tackling toil, and saying ‘yes’ to service silos





In the last few years, we’ve watched DevOps transition beyond a buzzword to become widely accepted by organizations large and small, global and startup. As we take stock of all that happened in 2018, it’s the perfect time to reflect on where we see things going in the new year.

Everyone is seeking systems unicorns
It is becoming increasingly easy to create and run complex scalable systems. Kubernetes and other frameworks have made what was previously a specialized practice into something obtainable by any developer with a credit card; this is an amazing and wonderful thing. Startups and small companies can build and scale at unprecedented rates. However, as easily as these systems can be built, it’s become just as easy for lower-level details to slip through the cracks. Configuration management, logging streams, process management, scalable design patterns and performance tuning are just some examples of important considerations that are frequently overlooked as organizations scale. There is a growing demand for systems unicorns who understand these low-level details, akin in many ways to the UNIX system administrators of the past, compiling kernels and working with low-level systems code. From senior DevOps engineers to the site reliability engineers (SREs) at Google (who really are in a league of their own, doing some truly amazing things), we’ll see increasing demand for deep technical experience with operating systems and distributed systems, including Kubernetes.

The dangers of development drudgery
Organizations further along in their DevOps journey are implementing SRE practices. As a methodology, SRE seeks to improve the reliability of software and reduce the operational burden of running that software. There are many concepts in its methodology, but one thing we can all agree on is automating and reducing manual work. Development drudgery, referred to as “toil” by Google SRE, is manual and repetitive operational work, and often the day-to-day reality for many. Toil increases linearly as your service scales and can never truly be eliminated. High levels of toil create negative outcomes: burnout, increased risk of error, and ultimately, slow progress. If you’re spending more than 50 percent of your time on toil, it’s time to invest in turning this into positive outcomes through engineering work. As repetitive work, toil lacks enduring value. In 2019, we’ll see more and more organizations realizing the need to incorporate SRE practices, including eliminating toil. This will allow them to focus on building software and delivering better outcomes for customers, freed from the burden of tedious manual work. Organizations will thus empower their teams to focus on creating real value instead of merely keeping things running. Let’s automate last year’s job.

Adam Serediuk is director of Cloud Operations at xMatters
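The 50-percent rule of thumb is easy to make concrete: tally a team's time entries and flag when toil crosses the threshold. This is a toy calculation for illustration, not Google's actual SRE tooling:

```javascript
// Each entry records hours spent and whether the work was toil
// (manual, repetitive, automatable) or engineering work.
function toilShare(entries) {
  const total = entries.reduce((sum, e) => sum + e.hours, 0);
  const toil = entries
    .filter((e) => e.toil)
    .reduce((sum, e) => sum + e.hours, 0);
  return total === 0 ? 0 : toil / total;
}

const week = [
  { task: "cert rotation", hours: 6, toil: true },
  { task: "ticket triage", hours: 16, toil: true },
  { task: "automation work", hours: 18, toil: false },
];

// 22 of 40 hours are toil - past the 50% line, time to automate.
toilShare(week); // 0.55
```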

Kubernetes takes the crown
Kubernetes is everywhere, and for good reason. Vendor-supported Kubernetes implementations like Google Kubernetes Engine and Amazon EKS, and open-source tooling such as kops, have made it increasingly simple to deploy what is one of the most transformative tools in runtime and deployment orchestration in recent memory. With rumors of other vendors transitioning their own container orchestration platforms into maintenance, you cannot go anywhere without hearing about Kubernetes. As an orchestration platform, Kubernetes has assisted dramatically in DevOps practices by being an approachable and standard deployment platform. When teams use the same tooling, they benefit from that shared knowledge. It has greatly assisted in the DevOps goals of common tooling, automation, infrastructure as code, and repeatability. Being usable almost anywhere, it makes multicloud deployments easier than ever and avoids vendor lock-in. The rate of innovation and community involvement has allowed Kubernetes to quickly rise to high levels of adoption, popularity, and explosive growth, and this will only continue in 2019. I am excited to see what happens next.
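As a sketch of why Kubernetes works as that common, portable platform: an application's deployment reduces to a few declarative lines that run unchanged on GKE, EKS, or a kops-built cluster. The name, image, and port below are placeholders.

```yaml
# A minimal Deployment manifest - the same infrastructure-as-code
# artifact works across clouds, which is what makes tooling shareable.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api            # placeholder name
spec:
  replicas: 3                  # desired instance count
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
```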

DevSecOps adoption struggles
It is difficult to argue against DevSecOps, but what is it exactly? DevOps adoption is still widely in progress, and just as DevOps is a culture change, so is DevSecOps — including security in the development process. In the earliest days of DevOps, corporate security policies frequently slowed development, and in some cases, these concerns were either bypassed or ignored in the name of progress. The systems and tools of the time frequently made security difficult to automate. Today, however, migration to cloud platforms or more controllable infrastructure is enabling increased security automation and enhancements, with API-controllable network access lists, DDoS mitigations, and access policy management. There is also an influx of DevSecOps tools to address other concerns, and a growing security ecosystem, and while they may help, a culture of security must come first — and this is where organizations will continue to struggle. Any tooling must be as flexible and supportable as code, like the rest of the DevOps ecosystem, for it to be successful and adopted by practitioners. Cloud providers who figure out a way to make this easier, analyzing and reporting on your existing software without requiring additional development, will have a winning combination, providing a valuable safety net. Automated code analysis and basic vulnerability scanning have become easy, and there is little excuse not to include them in your development process. These bolt-on products can help your security, but are no replacement for a security-conscious culture and development process.
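The "little excuse" point holds because a basic scan is often just an extra step in an existing pipeline. A hedged sketch in GitHub-Actions-style YAML — the step names and the specific audit and lint commands are assumptions that will vary with your stack:

```yaml
# Run a dependency audit and static analysis on every push, failing the
# build on high-severity findings - security as part of the pipeline,
# not a gate bolted on at the end.
name: ci
on: [push]
jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      - run: npm audit --audit-level=high   # dependency vulnerabilities
      - run: npx eslint .                   # static code analysis
```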

Competitive gaps grow
Now that DevOps has been accepted in the mainstream, the competitive edge of mature organizations who have already realized its benefits will increase, and dramatically. Their advantages from faster time-to-market and higher customer satisfaction will leave other companies wondering, “How are they going so fast?” Organizations who are behind the curve will have a lot of catching up to do, and because DevOps is not an overnight change, those who delay may be left behind in an increasingly difficult proposition. This is not to suggest that DevOps is the only way to operate, but certainly the benefits of time-to-market are difficult to deny. Simultaneously, while skillsets grow and adapt, the opportunities for highly skilled DevOps practitioners and full stack developers will also increase, creating more demand for top talent who may not want to join an organization that isn’t practicing DevOps successfully.

Embrace the silo
The “You Build It, You Run It” mentality made popular by Werner Vogels, CTO of Amazon, has made a profound impact on the way service teams in DevOps organizations are created. As service teams are responsible for the life cycle of their software, including technical decisions, they may have complete control of all technical choices. Just as team cultures vary, so may their technology choices, implementations, and even workflows. This can create many silos within a development organization, making portability of both software and people inside the organization challenging. However, the positives may outweigh the downsides. Broader spectrums of technology allow for multiple approaches and for the best ideas to win. It enables those teams to move quickly when needed because of their intimate experience with their services and freedoms of choice. It is best to temper this with some standards to gain efficiencies, such as common deployment platforms, datasources, and monitoring tools. Establish guilds and architecture plans, but listen to the teams when they complain. Your teams are usually right, and if they dislike something enough, they may just do it their way, either at your company or someplace else. Strike the right balance and encourage them continually to share, working together to let amazing software be built. If one thing is for certain, it is that the way we work has changed dramatically. The operational technology available today has come so far to enable Dev and Ops to work together in ways that were simply not possible before. In speaking a common language, and using common tools like Kubernetes and cloud providers like Google or Amazon, it is easier than ever for teams to work together. It will be interesting to see how serverless and other approaches evolve DevOps further in 2019 and beyond, and allow us to work together in developing great software with even more abstraction from the underlying platforms. z




Become a better React Native developer
Rakshit Soral is a technology consultant (Mobility & Operations) at Simform, a React Native app development company.


React Native, a framework for building cross-platform applications, has been in the news lately for the right reasons. It is backed by a renowned team at Facebook and the whole JavaScript community. The framework aimed to reach record-breaking heights with the slogan “Learn Once, Write Everywhere.” We will look at six crucial tips that will help mobile developers become better React Native developers.

1. Correct choice of navigation library. React Native has a long history of pains and complaints associated with navigation. Since version 0.5, many navigation libraries have been released and deprecated, but only a few of them succeeded in maintaining the look and feel of a native app. One example is Airbnb, which found that React Navigation — the recommended navigation library for React Native — did not work with its brownfield app. That’s why developers at Airbnb came up with their own navigation library, which is now the second-most used navigation library after React Navigation.

2. Native debugging is old-school. Debugging in React Native can be quite annoying at times as your project grows. This is because React Native relies on the Chrome Debugger, which uses the Chrome JS engine, while the app itself runs JavaScript on JavaScriptCore. This creates a subtle difference between the JavaScript execution environments, and the way out for developers is to debug through Android Studio on Android and Xcode on iOS. Android Studio: Developers have access to the Android profiler, which is hailed as the best tool for Android to analyze an app’s performance regarding CPU power, memory, and more. Xcode: On Xcode, developers can press the Debug View Hierarchy button, which will show all views in a 3D way. Using this, developers can inspect their full view tree in a very visual, appealing way.

3. Upgrade wisely.
At times, developers may want to upgrade their React Native version in order to take advantage of new features. But there might be some situations where developers have installed native modules that are linked/bridged to native code. Fortunately, there is a way to upgrade React Native wisely, which involves unlinking native packages, upgrading, and then relinking them without hassle.

4. Optimization is key to performance. You will most likely agree that an application is incomplete without images. Images can be implemented in an application either with the help of a static resource accessed from a local directory or with the help of an external resource that needs to be fetched from a back-end server. Whatever the requirement, it is very important to optimize React Native images with high priority. Instead of processing them on the client and hurting performance, developers can optimize them at the server level. Moreover, there are many CDN options available to host images so that developers can easily make API calls and upload images to the server.

5. Reduce app size for mobile. Most React Native developers are in the habit of using native components and third-party libraries that increase the size of an application. This drastically impacts the performance and loading speed of the application. Developers can follow certain guidelines in order to reduce the size of an application. On Android, this can be done by enabling ProGuard and reducing the size of graphics. On iOS, however, this might be a tedious task, as iOS does not offer any straightforward solution to this problem. Yet some workarounds can relatively improve the size of an iOS application.

6. Don’t fear learning native code. React Native is a great tool to speed up your native development — especially in situations where your business is targeting multiple platforms — though there might be some use cases where developers need to implement functionality that doesn’t exist in the core library.
Fortunately, React Native has this problem covered with its pleasant API, so developers can use native libraries to implement the required functionality. For this purpose, developers need to have a good understanding of core native languages such as Objective-C/Swift (iOS) and Java/Kotlin (Android). Moreover, it is very important to learn how React Native works under the hood. z
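Tip 4's server-side image approach can be sketched as a helper that asks a CDN for an already-resized image instead of downscaling a full-resolution asset on the device. The query parameter names (w, h, q) are hypothetical — each CDN defines its own:

```javascript
// Build a URL that asks the image CDN to resize and compress
// server-side, so the device never downloads the full-size asset.
// Parameter names are illustrative; check your CDN's documentation.
function cdnImageUrl(baseUrl, { width, height, quality = 80 }) {
  const sep = baseUrl.includes("?") ? "&" : "?";
  return `${baseUrl}${sep}w=${width}&h=${height}&q=${quality}`;
}

cdnImageUrl("https://cdn.example.com/hero.jpg", { width: 320, height: 200 });
// → "https://cdn.example.com/hero.jpg?w=320&h=200&q=80"
```

In a React Native component, the returned string would simply be used as the `uri` of an image source, sized to the layout it fills.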



Will RAZR design cut into Apple’s lead?

Cell phone designs in the early years were incredibly fluid, with only one real constant — users, for whatever reason, hated screen phones. They were hard to type on, operating systems were a mess, there were no real apps, and they were comparatively expensive. One of the first powerful successes was the Motorola RAZR, which tended to dominate the market on and off until the market effectively passed to Apple’s design. Then everyone seemed to lock in on that once-unacceptable design thanks to Apple’s powerful marketing and impressive execution. Today, even though Apple itself is third in the world (behind Samsung and Huawei), its design is still dominant. With the emergence of foldable screens, we have the opportunity to flip the market again, and the most powerful brand for a foldable phone is likely the RAZR, which defined foldable phones. But if you could get a phone with a large screen that would not only fit better in a pocket but also be more robust and competitively priced, the market could flip back to the foldable form factor.

Status, not practicality
I use a BlackBerry Key2 phone because my priorities are work and security. If I want to watch videos or read, I generally use an Amazon Fire tablet, which costs under $150 and is fully curated by Amazon. For a laptop, my current go-to box is the HP Spectre Folio. Each device is selected based on what I want to do with it and the fact that, as far as I can tell, each device does what I want to do best. Sadly, this approach to tech is unusual. Most seem to buy tech based on what others are buying, and based on usage models they may not even agree with. For instance, one of the reasons screen phones were so unpopular before the iPhone was because they took too much attention to use and could result in accidents for walkers or drivers. And given how many of us were defined by the keyboards on our BlackBerry and Palm devices, it still amazes me that most others threw out the keyboards in order to get better gameplay and video performance. Or we shifted from a feature that would help our careers and potentially make us money to favoring features that cost us money and could get us killed.

Foldable screens
Now foldable screens are expensive, and expensive means exclusive. Plus, while the strong connection to the original Star Trek TV show and movies isn’t as pronounced, the act of unfolding the phone is pronounced and very visible. This all means that while foldable phones will have to overcome the perception that thicker phones are lower status, the use of the phone should make it more visible and, at least initially, be an effective way to pull attention to the relatively unique design. Assuming the software works right (and initial designs have showcased that this isn’t trivial, because the interface needs to adjust automatically for the changing screen size), phones with foldable displays should carry higher status.

Rob Enderle is a principal analyst at the Enderle Group.

The difference between ‘could’ and ‘will’
Now there is no doubt in my mind that if Steve Jobs were running Motorola, he’d fund the marketing to a degree that the odds would favor success. This is the difference between ‘could’ and ‘will’ — a foldable phone certainly could take over and become the new standard design, but it will require a significant marketing effort, or some viral luck, to overcome current perceptions and what will likely be some significant pushback from Apple. Motorola has in the past stepped up to this kind of an effort, and Samsung is certainly capable of that as well. But, recently, all three firms have underfunded marketing, which does potentially make Apple more vulnerable but also makes it less likely we’ll see the market pivot near-term to foldable displays and phones. I’d personally prefer a foldable phone because it would be more portable and more interesting to use (I might be able to leave my Fire tablet at home). We should know by the end of the year whether this new RAZR is a home run or another promising device that fell short of its potential. Let’s hope it is the former, because the smartphone market is really soft at the moment and a new, more popular form factor could light a significant sales fire under it. z

A foldable phone certainly could take over and become the new standard design, but it will require a significant marketing effort.





It’s time for data privacy legislation
David Rubinstein is editor-in-chief of SD Times.


Privacy means different things to different people. It’s why some people build walls around their homes, yet others are totally comfortable walking on public beaches in the nude. Data privacy is another animal altogether, and there are varying degrees to what people will accept when it comes to giving up their personal information. People I’ve spoken with usually have three reactions to putting their data online. 1) They have a belief that Internet security is so effective that they’re not worried about having their personally identifiable information stolen or misused. 2) They’re resigned to the fact that their data is at risk, yet take some solace in the fact that millions upon millions of data bits are being collected, so they see the odds of identity theft somewhere in the neighborhood of getting hit by lightning. 3) They view giving up some data, some privacy, as the cost for access to all the Internet has to offer. A recent study from the Center for Data Innovation came across my desk this week, and it shows that while Americans have concerns about who’s collecting their information and what they’re doing with it, few are willing to pay for data privacy. The study shows that while 80 percent of Americans said they would like online services to collect less data on them, only 27 percent said they would be willing to pay a monthly subscription fee for Internet services that currently are free if those services would agree to collect less of their data. According to Daniel Castro, the center’s director, limiting the amount of personal data businesses can collect would require tradeoffs. He said overly restrictive privacy laws would result in added costs to businesses in terms of compliance with those laws and reduce their revenues. I say, so what? Is nothing sacred anymore?
Facebook, Google and the other major data players can’t be trusted to protect their users’ data, so if they have to incur costs to comply with laws meant to protect the unsuspecting (and even suspecting) public, they can certainly afford it.

Companies need personal data like people need oxygen... But the onus should not be on the public to agree to privacy tradeoffs for Internet access.

These companies are devious; don’t be fooled. First, they hide behind “War and Peace”-sized privacy policies that few people have the time or gumption to read through, which absolve them of wrongdoing by saying, ‘Well, the user clicked the box. He knew what he was agreeing to.’ No, he or she did not know what they were agreeing to. Let’s be real. They just clicked the box so they could get on with the business of doing what they needed to do on the Internet. Then there are the tricks they employ to collect even more data than we would provide if asked directly for it (but of course it is likely covered on page 172 of the privacy policy). All those quizzes about what was your first car, who ever shopped at a particular store, what do you know about World War II, and so on, are tools to mine even more information about us. And we’re either blissfully unaware, or we don’t take the quizzes, fun as they’re made to appear. Further, a recent study by the Pew Research Center revealed that 74 percent of Facebook users are unaware that the company captures lists of their interests for ad-targeting purposes. My wife, ever the conspiracy theorist, believes our Google Home device is listening to everything she says. How else, she asks, would the Internet know she had a cellphone conversation with her mother about a pair of shoes, and then when she hung up, ads for those shoes immediately popped up on an Internet site she was looking at? She didn’t go to the shoe company website, didn’t make any indication at all on the Internet that she has interest in those shoes, but somehow, the Internet knew of the phone conversation she had with her mother. That’s creepy. Companies need personal data like people need oxygen. Without it, they will die. But the onus should not be on the public to agree to privacy tradeoffs for Internet access. Data collection companies should be held accountable for properly using personal data.
The GDPR, and now the California Privacy Act, are important first steps for protecting the public. More are needed. Efforts such as data decentralization are returning control to individuals. Yet so far, we’ve left it up to the technology companies to police themselves, and we see how that’s worked. It’s time federal regulators stepped in. Damn the costs. z
