SEPTEMBER 2017 • VOL. 2, ISSUE 3 • $9.95 • www.sdtimes.com
VOLUME 2, ISSUE 3 • SEPTEMBER 2017
The evolving state of enterprise middleware page 10
Ethics, addiction and dark patterns
Follow the path to microservices
Girls in Tech’s Adriana Gascoigne: ‘We have a long way to go’
Service Virtualization brings expediency
How to choose the right database for your microservices
Microsoft releases .NET Core 2.0
GrapeCity focuses its brand on developers
CollabNet, VersionOne merge
Finding an integrated ALM toolset for hybrid application development
is almost here. Again. page 32
GUEST VIEW by Bill Curran Avoiding embedded analytics bear traps
ANALYST VIEW by David Cappuccio Top 10 tech trends impacting infrastructure and operations
INDUSTRY WATCH by David Rubinstein From the editor’s notebook
Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 225 Broadhollow Road, Suite 211, Melville, NY 11747. Periodicals postage paid at Huntington Station, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2017 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 225 Broadhollow Road, Suite 211, Melville, NY 11747. SD Times subscriber services may be reached at email@example.com.
Instantly Search Terabytes of Data across a desktop, network, Internet or Intranet site with dtSearch enterprise and developer products
www.sdtimes.com EDITORIAL EDITOR-IN-CHIEF David Rubinstein 631-421-4154 firstname.lastname@example.org SOCIAL MEDIA AND ONLINE EDITORS Christina Cardoza email@example.com Madison Moore firstname.lastname@example.org INTERN Ian Schafer email@example.com
Over 25 search features, with easy multicolor hit-highlighting options
SENIOR ART DIRECTOR Mara Leonardi firstname.lastname@example.org CONTRIBUTING WRITERS Jacqueline Emigh, Lisa Morgan, Alexandra Weber Morales, Frank J. Ohlhorst
dtSearch's document filters support popular file types, emails with multilevel attachments, databases, web data
Developers: • APIs for .NET, Java and C++ • SDKs for Windows, UWP, Linux, Mac and Android • See dtSearch.com for articles on faceted search, advanced data classification, working with SQL, NoSQL and other DBs, MS Azure, etc.
Visit dtSearch.com for hundreds of reviews and case studies, and fully functional evaluations
The Smart Choice for Text Retrieval® since 1991
CONTRIBUTING ANALYSTS Rob Enderle, Michael Facemire, Mike Gualtieri, Peter Thorne
CUSTOMER SERVICE SUBSCRIPTIONS email@example.com ADVERTISING TRAFFIC Mara Leonardi firstname.lastname@example.org LIST SERVICES Shauna Koehler email@example.com REPRINTS firstname.lastname@example.org ACCOUNTING email@example.com ADVERTISING SALES PUBLISHER David Lyman 978-465-2351 firstname.lastname@example.org WESTERN U.S., WESTERN CANADA, EASTERN ASIA, AUSTRALIA, INDIA Paula F. Miller 925-831-3803 email@example.com
PRESIDENT & CEO David Lyman CHIEF OPERATING OFFICER David Rubinstein
D2 EMERGE LLC 225 Broadhollow Road Suite 211 Melville, NY 11747 www.d2emerge.com
NEWS WATCH
Oracle seeks foundation for Java EE
As the delivery of Java EE 8 approaches, Oracle believes it has an opportunity to rethink how Java EE is developed in order to "make it more agile and responsive to changing industry and technology demands." "Java EE is enormously successful, with a competitive market of compatible implementations, broad adoption of individual technologies, a huge ecosystem of frameworks and tools, and countless applications delivering value to enterprises and end users," Oracle wrote in a blog post. "But although Java EE is developed in open source with the participation of the Java EE community, often the process is not seen as being agile, flexible or open enough, particularly when compared to other open source communities. We'd like to do better." According to Oracle, moving Java EE technologies to an open-source foundation may be the right next step, in order to "adopt more agile processes, implement more flexible licensing, and change the governance process."
Debian turns 24
Debian, an operating system and distribution of Free Software, celebrated its 24th birthday on August 16. One of the project's biggest announcements this year was the release of Debian 9, code-named Stretch, which adds corrections for security issues along with adjustments for serious problems. Security support for the older Debian 8 release is expected to be available until around June 2018. Other Debian happenings this year include the annual Debian conference, which was held in Montreal. Also, the Chinese language team has been relaunched with new contributors, and the Debian Installer, many pages on the Debian website, and other documents are being updated and translated into Chinese. Debian is currently looking for team members to join this effort.
Puppet releases dev kit on GitHub
MongoDB files to go public
The $1.6 billion company MongoDB has confidentially filed to go public, according to reports. MongoDB is a database startup, and going public will set up competition with Oracle, MongoDB's biggest target. Earlier this year, MongoDB CEO Dev Ittycheria told Crain's New York that Oracle is "incredibly vulnerable because they've lost the developer's heart and soul." According to the reports, a change in SEC regulations allowed the company to confidentially file to go public, an option previously available only to companies under a certain size. If MongoDB does go public, the report states, it will be the seventh "unicorn" startup to do so in 2017.
Chris Lattner joins Google Brain
Chris Lattner, who led the development of Apple's Swift programming language, has left his most recent position as vice president of Autopilot Software at Tesla, where he led development of the auto manufacturer's AI software. Lattner announced his new position with the Google Brain project on Twitter, where he'll serve as director of engineering for the TensorFlow Developer Experience. Lattner worked at Apple for over a decade and started the LLVM toolchain and compiler project in 2000.
Docker Enterprise Edition updates
Docker announced the latest release of Docker Enterprise Edition (EE), which enables organizations to modernize traditional applications and microservices built on Windows, Linux or the mainframe. With this new edition, companies can also manage all of these microservices and applications in the same cluster. The new release lets organizations customize role-based access and define physical and logical boundaries for different users and teams sharing the same Docker EE environment, according to Docker. Additional features include automated image promotion and immutable repositories, which prevent image tags from being overridden when they go to production.
Puppet, an Infrastructure as Code company, has released the Puppet Development Kit on GitHub to streamline the process of writing, testing and implementing Puppet modules. “Tools like puppet-lint and rspec-puppet have been around for a long time to help you catch issues in your code,” Lindsey Smith, a senior project manager with Puppet, wrote in a post on the company’s development blog. “However, you had to discover these tools, install them and figure out how to use them effectively on your own. With the Puppet Development Kit, we now take care of all that, so you don’t have to.” In addition to troubleshooting tools, the Puppet Development Kit includes improvements to the module skeleton, the ability to generate modules and classes and a new command line interface called pdk.
Tricentis acquires load testing provider Flood IO
Tricentis is bolstering its software testing expertise with the acquisition of Flood IO, an on-demand load testing provider whose solution is designed to maximize test strategies, provide feedback loops, and discover issues in real time. "Times have changed. Old performance testing approaches are too late, too heavy, and too slow for today's lean, fast-paced delivery pipelines," said Sandeep Johri, CEO of Tricentis. "Yet, releasing updates
without insight into their performance impact is incredibly dangerous in today’s world— with competitors just a click away.” According to Tricentis, Flood IO will enable the company and its users to embrace load and performance testing as well as the concept of “shift left” load testing. As part of the acquisition, Flood IO will continue as a standalone service and continue its mission to provide continuous load testing in DevOps.
Black Duck releases Hub Detect
The number of package managers and CI tools has grown over the years, and as a result there is a greater need for DevOps automation. With its new release of Hub Detect, Black Duck wants to simplify and streamline open source management for DevSecOps, and simplify integration into a DevOps toolchain. Hub Detect ensures the most accurate inventory of
open source is available by automatically combining multiple analysis techniques. It’s a multi-factor approach, which Black Duck says is critical for effective management of open source security and license compliance risks. “Speed and agility are paramount in DevOps. With Hub Detect we’ve eliminated the complexity of identifying each of the package managers and CI tools in use and the pain of having to configure them individually,” said Black Duck CEO Lou Shipley.
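The "multi-factor" idea can be illustrated with a toy sketch. Everything here is invented for illustration (the two analyzers, the package names, and the file paths are hypothetical); it is not Black Duck's implementation, only the general pattern of combining evidence from more than one analysis technique into a single inventory.

```python
# Hypothetical multi-factor inventory: union the components declared in a
# package-manager manifest with components inferred from scanning files on
# disk. Component names and versions below are sample data.

def from_manifest(manifest: dict):
    """Factor 1: dependencies declared to a package manager."""
    return {(name, ver) for name, ver in manifest.items()}

def from_file_scan(paths):
    """Factor 2: components inferred from files (e.g. vendored copies)."""
    found = set()
    for p in paths:
        if "jquery-1.12.4" in p:  # toy signature match
            found.add(("jquery", "1.12.4"))
    return found

manifest = {"lodash": "4.17.4", "express": "4.15.3"}
paths = ["static/js/jquery-1.12.4.min.js", "app/server.js"]

# The union catches what either factor alone would miss.
inventory = from_manifest(manifest) | from_file_scan(paths)
```

The scan-based factor finds the vendored jQuery copy that never appears in the manifest, which is the point of combining techniques.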
Heptio projects target Kubernetes operations Heptio announced two new open source projects designed to improve Kubernetes operations, regardless of where a developer runs a cluster. The first project is Heptio Ark, a utility for managing disaster recovery, specifically for Kubernetes cluster resources and persistent volumes, writes Craig McLuckie, founder and CEO of Heptio. It gives users a
way to back up and restore applications and PVs from a series of checkpoints. Heptio Sonobuoy is the second open source project: a diagnostic tool that makes it easy to understand the state of a Kubernetes cluster by running a set of Kubernetes conformance tests in an accessible and non-destructive manner, writes McLuckie.
Bitcoin hits new record high
Bitcoin hit a record high of $4,500 in August, putting its market capitalization over $73 billion, according to CoinDesk. Leaders in the space say that increased interest from Korean and Japanese exchanges is driving up the value of the cryptocurrency. "Another part of it is driven by the psychology of markets, as $USD 5,000 seems to be within reach, now that the $4,000 level has been easily broken," said William Mougayar, the founder of Startup Management. Since January 2015, he explained, the price of Bitcoin climbed from $200 to $1,000 by January 2017, and it just spiked to a record high over $4,000 as U.S. and North Korea tensions escalated.
Postman adds multi-region API monitoring
An update to the Postman API development platform brings multi-region support for monitoring API performance and measuring network latency between regions. To many, API performance has been a black box. Developers rely on APIs to provide services and data to their applications, yet often don't know the state of those APIs when they have been created by another organization. Often, a change to the API, or its location, can impact the application's performance. With the Postman update, "you make an HTTP request to the API with the test on Postman," Abhinav Asthana, co-founder and CEO of Postdot Technologies, which produces Postman, told SD Times. "You can do it as often as every five minutes to make sure it doesn't go down, which could result in massive losses" of business, he added. "We can simulate what API response times will be like if, for example, you're in the United States and the API is on a server in Japan," Asthana said. The new multi-region support lets organizations "get as close to where their users are as possible," he said, to reduce latency in API calls and improve overall application performance. With this release, Postman supports six regions (US East, US West, Canada, EU, Asia Pacific and South America), mirroring the AWS API Gateway regions, he said, adding that Postman expects to add more regions over time.
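The kind of scheduled latency check a monitor like Postman's automates can be sketched as follows. This is a generic, hypothetical monitor, not Postman's implementation; the fetch step is passed in as a callable so the sketch stays network-free, where a real monitor would issue the HTTP request.

```python
import time

# Time one "request" and flag it against a latency threshold, the core of a
# periodic API health check. A real monitor would run this on a schedule
# (e.g. every five minutes) from each region.

def check_latency(fetch, threshold_ms: float):
    """Time one call and report whether it stayed under the threshold."""
    start = time.perf_counter()
    fetch()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return {"latency_ms": elapsed_ms, "ok": elapsed_ms <= threshold_ms}

# Stand-in for an HTTP call to a distant region.
result = check_latency(lambda: time.sleep(0.02), threshold_ms=500.0)
```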
Amazon’s new security service Macie Amazon launched a new security service called Amazon Macie, which uses machine learning to help identify and protect sensitive data stored in AWS from breaches, data leaks, and unauthorized access. Key features of Amazon Macie include data security automation, data security and monitoring, data visibility for proactive loss prevention, and data research and reporting.
Red Hat rolls out Development Suite 2.0
Red Hat has updated its Red Hat Development Suite to version 2.0, including updates to Red Hat JBoss Development Suite and Red Hat Container Development Kit. The Red Hat Development Suite installer is available for Windows, macOS and Red Hat Enterprise Linux, and it will automatically download, install and configure selected tools such as EAP, Fuse and the Kompose 1.0 technical preview, a new addition to the suite. Kompose is a tool that can be used to convert Docker Compose files to Kubernetes or Red Hat OpenShift artifacts. Kompose was conceived as an onboarding tool for Kubernetes users by Skippbox (since acquired by Bitnami), and it received contributions from Google and Red Hat early in development. It's now a part of the Kubernetes Community Project as of version 1.0.0.
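The conversion Kompose performs can be illustrated with a heavily simplified sketch: one service from a Docker Compose-style definition mapped onto a minimal Kubernetes Deployment. Real Kompose handles far more fields and edge cases; this toy covers only image, replicas and ports, and the sample service values are invented.

```python
# Toy Compose-to-Kubernetes conversion, illustrating the shape of what
# Kompose automates. Not Kompose's actual code.

def compose_service_to_deployment(name: str, svc: dict) -> dict:
    container = {"name": name, "image": svc["image"]}
    if "ports" in svc:
        # Compose "host:container" port strings -> containerPort numbers
        container["ports"] = [
            {"containerPort": int(p.split(":")[-1])} for p in svc["ports"]
        ]
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": svc.get("replicas", 1),
            "template": {"spec": {"containers": [container]}},
        },
    }

web = compose_service_to_deployment(
    "web", {"image": "nginx:1.13", "ports": ["8080:80"]}
)
```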
BY MANISHA SAHASRABUDHE
The main goal of DevOps is quite simple: ship software updates frequently, reliably, and with better quality. This goal is somewhat "motherhood and apple pie," since almost every organization will agree it wants to get there. Many will tell you they've already embarked on the DevOps journey by following commonly used frameworks such as CALMS. However, very few will express complete satisfaction with the results. After speaking to 200+ DevOps professionals at various stages of the adoption lifecycle, we found that organizations generally fall into one of three categories (Fig. 1):
Figure 1: DevOps maturity
1. "We're just starting our DevOps journey and we're brimming with optimism. There is so much to learn!"
2. "We've gone through training and the culture is changing. We're automating tasks like infra provisioning, CI, deployments, etc. It is more challenging than we thought, but we're still cautiously optimistic."
3. "We've automated all the tasks and we've seen some gains. However, our DevOps efforts are becoming increasingly complex and we're not making progress at anywhere near the pace we envisioned."
We were most interested in groups two and three, since they were actually in the middle of their DevOps journey. When asked to explain their challenges and roadblocks in more detail, here is what we found:
• 68% said that the lack of connectivity between the various DevOps tools in their toolchain was the most frustrating aspect
• 52% said that a large portion of their testing was still manual, slowing them down
• 38% pointed out that they had a mix of legacy and modern applications, i.e. a brownfield environment, which created complexity in terms of deployment strategies, endpoints, toolchain, etc.
• 27% were still struggling with siloed teams that could not collaborate as expected
• 23% had limited access to self-service infrastructure
Other notable pain points included finding the right DevOps skill set, difficulty managing the complexity of multiple services and environments, lack of budget and urgency, and limited support from executive leadership.
Let us look at each of these challenges in greater detail.
Manisha Sahasrabudhe is co-founder and vice president of product management at Shippable.
Challenge #1: Lack of connectivity in the DevOps toolchain There are many DevOps tools available that help automate different tasks like CI, infrastructure provisioning, testing, deployments, config management, release management, etc. While these have helped tremendously as organizations start adopting DevOps processes, they often do not work well together. As a classic example, a Principal DevOps engineer whose team uses
Capistrano for deployments told us that he still communicates with Test and Ops teams via JIRA tickets whenever a new version of the application had to be deployed, or whenever a config change had to be applied across their infrastructure. (Fig. 2) All the information required to run Capistrano scripts was available in the JIRA ticket, which he manually copied over to his scripts before running them. This process usually took several hours and needed to be carefully managed since the required config was manually transferred twice: once when entered into JIRA, and again when he copied it to Capistrano. This is one simple example, but this problem exists across the entire toolchain. Smaller organizations get around this problem by writing custom scripts that glue their toolchain together. This works fine for a couple of applications, but quickly escalates to spaghetti hell since these scripts aren’t usually written in a standard fashion. They are also difficult to maintain and often contain tokens, keys and other sensitive information. Worse still, these scripts are highly customized for each application and cannot be reused to easily scale automation workflows. For most serious organizations, it is an expensive and complex effort to build this homegrown “DevOps glue,” and unless they have the discipline and resources of the Facebooks and Amazons of the world, it ultimately becomes a roadblock for DevOps progress. Continuous Delivery is very difficult to achieve when the tools in your DevOps toolchain cannot collaborate and you manage dependencies manually or through custom scripts.
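The manual copy step described above is exactly what a small piece of "glue" automates: parse the ticket once, and generate the deployment invocation from it. The sketch below is hypothetical; the ticket field names and the Capistrano-style command are invented for illustration, not taken from the engineer's actual setup.

```python
import json

# Hypothetical "DevOps glue": pull deployment parameters out of a
# JIRA-style ticket payload instead of retyping them by hand, so the
# config is entered exactly once.

def extract_deploy_params(ticket_json: str) -> dict:
    """Parse the ticket payload into the parameters the deploy needs."""
    fields = json.loads(ticket_json)["fields"]
    return {
        "app": fields["app"],
        "version": fields["version"],
        "env": fields["target_env"],
    }

def build_deploy_command(params: dict) -> str:
    """Render a Capistrano-style invocation from the parsed parameters."""
    return (
        f"cap {params['env']} deploy "
        f"APP={params['app']} VERSION={params['version']}"
    )

ticket = '{"fields": {"app": "billing", "version": "2.4.1", "target_env": "production"}}'
cmd = build_deploy_command(extract_deploy_params(ticket))
```

Even a sketch this small removes the double manual transfer: the config exists in one place and flows to the script mechanically.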
Challenge #2: Lack of test automation Despite all the focus on TDD, most organizations still struggle with automating their tests. If the testing is manual, it is almost impossible to execute the entire test suite for every commit, becoming a barrier for Continuous Delivery. Teams try to optimize this by running a core set of tests for every commit and running the complete test suite only periodically. This means that most bugs are found later in your software delivery workflow and are much more expensive to find and fix. (Fig. 3) Test automation is an important part of the DevOps adoption process and hence needs to be a top priority.
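The commit-time versus nightly compromise described above can be sketched as follows; the test names and the "smoke" tagging scheme are invented for illustration.

```python
# Toy test selection: run a small "smoke" subset on every commit, and the
# full suite only on a schedule. This is the workaround described above,
# which trades coverage per commit for speed.

TESTS = {
    "test_login": {"smoke"},
    "test_checkout": {"smoke"},
    "test_report_export": set(),      # slow, full-suite only
    "test_legacy_migration": set(),   # slow, full-suite only
}

def select_tests(trigger: str):
    """Per-commit runs get smoke tests; the nightly run gets everything."""
    if trigger == "commit":
        return sorted(t for t, tags in TESTS.items() if "smoke" in tags)
    return sorted(TESTS)  # nightly: the entire suite

per_commit = select_tests("commit")
nightly = select_tests("nightly")
```

The cost of this scheme is visible in the selection itself: the slow tests that only run nightly are the ones that catch bugs late, which is why full automation remains the goal.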
Challenge #3: Brownfield environments
Typical IT portfolios are heterogeneous in nature, spanning multiple decades of technology, cloud platform vendors, and private and public clouds in their labs and data centers, all at the same time. It is very challenging to create a workflow that spans all these aspects, since most tools work with specific architectures and technologies. This leads to toolchain sprawl as each team adopts the toolchain that best serves its needs. The rise of Docker has also encouraged many organizations to develop microservices-based applications, which further increases the complexity of DevOps automation, since an application now needs hundreds of deployment pipelines for heterogeneous microservices.
Figure 3: Relative time to fix a bug, depending on the stage at which it was found (planning, coding, integration/component testing, system/acceptance testing, pre-production), shown without and with Continuous Delivery. Without Continuous Delivery, the relative cost climbs steeply toward the later stages (the chart's scale runs from 0x to 30x).
Challenge #4: Cultural problems
Applications evolve across functional silos. Developers craft software, which is stabilized by QA, and then deployed and operated by IT Operations. Even though all these teams are expected to work together and collaborate, they often have conflicting interests. Developers are driven to move as fast as they can and build new stuff. QA and release management teams are driven to be as thorough as possible, making sure no software errors escape past their watchful eyes. Both teams are often gated by SecOps and Infrastructure Ops, who are incentivized to ensure production doesn't break. Governance and compliance also play a role in slowing things down. Cost centers are under pressure to do more with less, which leads to a culture that opposes change, since change introduces risk and destabilizes things, which means more money and resources are required to manage the impact. This breakdown across functional silos leads to collaboration and coordination issues, slowing down the flow of application changes. Some organizations try to address this by making developers build, test and operate software. Though this might work in theory, developers become bogged down by production issues, and a majority of their time is spent operating what they built last month as opposed to innovating on new things. Most organizations try to get all teams involved across all phases of the SDLC, but this approach still relies on manual collaboration. Automation is the best way to broker peace and help Dev and Ops collaborate. But as we see in the other challenges, ad-hoc automation itself can slow you down and introduce risk and errors.
Challenge #5: Limited access to self-service infrastructure and environments
For many organizations, virtual machines and cloud computing transformed the process of obtaining the right infrastructure on demand. What previously took months can now be achieved in a few minutes. IaaS providers like AWS offer hundreds of machine types with flexible configurations and many options for pre-installed OSes and other tools. Tools like Ansible, Chef and Puppet help represent infrastructure as code, which further speeds up provisioning and re-provisioning of machines. However, this is still a problem in many organizations, especially those running their own data centers or those that haven't embraced the cloud yet.
A popular DevOps framework describes a CALMS approach, consisting of Culture, Automation, Lean, Measurement and Sharing. The DevOps movement started as a cultural movement, and even today, most implementations focus heavily on culture. While culture is an important part of any DevOps story, changing organizational culture is the hardest thing of all. Culture forms over a period of time due to ground realities. Ops teams don't hate change because they are irrational or want to be blockers. Over the years, they've taken the heat every time an over-enthusiastic Dev team tried to fast-track changes to production without following every step along the way. Seating them with the developers might help make the work environment a little friendlier, but it doesn't address the root cause, no matter how many beers they have together.
The evolving state of enterprise middleware
BY JACQUELINE EMIGH
The enterprise middleware landscape is shifting dramatically, with organizations pursuing a variety of paths — in both specialized managed services and DIY iPaaS packaged software — to get to the common destination of cloud-enabled middleware. With cloud integration skills in high demand, new outside partners to work with, and even some LOB managers now creating business apps, corporate developers are feeling the impact. Indeed, over the next three years, fully 76 percent of organizations plan to replace their existing middleware infrastructures, according to a recent study by the Aberdeen Group. While 43 percent will replace those systems only partially, 33 percent will do so completely, said Michael Caton, research analyst for IT at Aberdeen, in an interview with SD Times. The study also found that 36 percent of organizations are running three or more middleware integration platforms. "As it turns out, in many situations, some
of these systems came through company acquisitions," noted Rob Consoli, chief revenue officer for Liaison Technologies, the middleware managed services provider that commissioned the report. Interviews by SD Times with developers, system integrators, and other industry observers affirm that enterprise middleware is migrating to the cloud fast. "Traditional middleware was the glue that held the enterprise together and let systems 'talk' to each other," noted Marcus Turner, CTO and chief architect at Enola Labs, a custom corporate software development house. Yet with ongoing demands to pay software license fees, aging systems can get expensive. Meanwhile, traditional middleware systems are geared to working with old-fashioned monolithic databases, at a time when organizations yearn to learn what they can from their data to get ahead of the competition. Furthermore, the programming skills needed to keep legacy systems humming are growing increasingly rare, with developers trained in languages like
‘The cost savings of DIY packages are more than wiped out by the need to hire trained developers.’ —Rob Consoli
COBOL retiring out of the workforce. "I've told some customers that they're only a heartbeat away from needing to get new systems," Turner remarked. In contrast, while often also referred to as "middleware," newer cloud-enabled infrastructures fulfill functions unimaginable 15 or 20 years ago. Essentially, these systems step beyond tying together on-premises applications to add integration with cloud-based information from outside the organization.
Hadoop and Spark
"Middleware platforms running in microservices environments are lighter weight, more decoupled, and more automatable. They also use standard HTTP," pointed out Faisal Memon, technical product marketer at NGINX. These newer middleware platforms are able to provide enterprise delivery of microservices data from both SQL databases and NoSQL data sources such as MongoDB. This big data is culled by highly distributed processing platforms like Apache Hadoop and Spark. Hadoop/Spark helps out with microservices data in two major ways, according to Memon. "One of these [is related to] the sheer size of the data set. Using a single computer to do a simple math operation, like taking an average, would take months or years to complete. Using Hadoop or Spark with a cluster of computer resources would reduce that time to something more reasonable, hours or days," he said.
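Memon's averaging example can be sketched as a toy map/reduce: each "worker" (here just a chunk of a list) returns a partial sum and count, and the partials combine into an exact global mean. This mirrors the shape of the computation Hadoop or Spark would distribute across a cluster; it is not Spark code.

```python
# Distributed averaging in miniature: map each chunk to (sum, count)
# partials, then reduce the partials into one global mean. Because sums
# and counts combine exactly, splitting the work changes nothing about
# the answer -- only how long it takes.

def partial_stats(chunk):
    """Map step: one worker's local sum and count."""
    return sum(chunk), len(chunk)

def combined_mean(partials):
    """Reduce step: merge all partials into a single global average."""
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count

data = list(range(1, 101))                            # 1..100
chunks = [data[i:i + 25] for i in range(0, 100, 25)]  # four "workers"
mean = combined_mean([partial_stats(c) for c in chunks])
```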
The second way is that the microservices data set is not likely to be organized into a single structured database. "It's likely to be scattered in different places and in different formats. Hadoop/Spark helps to bring the disparate data sets together."
Middleware on the job
One of Liaison's customers, a large pharmaceutical firm, recently built an app for accessing and analyzing microservices data from Twitter feeds using Alloy, Liaison's cloud-enabled managed middleware platform. "The company wanted to know what consumers are saying online about its pharmaceutical products," Consoli observed. Organizations also use cloud-enabled middleware to synchronize data between SaaS and enterprise apps. The endpoints of various SaaS applications tend to behave in different ways, meaning that without integration, LOB users are forced to separately access multiple sources of information. Meanwhile, more SaaS applications keep springing up all the time. Salesforce.com and other early SaaS pioneers have since been joined by hundreds of other SaaS providers, pointed out Vijay Tella, a former Oracle exec and TIBCO co-founder who has since co-founded a startup called Workato. Experts also envision a surge in integration between enterprise systems and mobile and IoT data in coming years. Enterprise IoT data includes information from POS systems, for example.
Faster message brokers
To keep pace with newer distributed processing systems, cloud-enabled middleware platforms also use new message brokers. Popular choices include RabbitMQ, Apache Kafka, ActiveMQ, and ZeroMQ. RabbitMQ, an open-source message broker distributed by Pivotal Software, uses the same type of message queuing architecture as traditional message brokers, but operates much faster. Apache Kafka, on the other hand, stores messages in a persistent, append-only log rather than in queues. Like message queuing systems, Kafka is good at tasks like decoupling processes from data and buffering unprocessed messages. However, other uses of Kafka include log aggregation, operational monitoring, stream processing, and event sourcing, according to a use case scenario by the Apache Foundation.
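The queue-versus-log distinction drawn above can be shown with two toy classes. These are not the RabbitMQ or Kafka APIs, just minimal models of the semantics: a queue hands each message out once and destructively, while a Kafka-style log retains messages so consumers can replay from any offset.

```python
from collections import deque

# Toy models of the two broker styles. Real brokers add durability,
# acknowledgments, partitions, etc.; only the core semantic difference
# is modeled here.

class ToyQueue:
    """Queue semantics: each message is delivered once, then gone."""
    def __init__(self):
        self._q = deque()
    def publish(self, msg):
        self._q.append(msg)
    def consume(self):
        return self._q.popleft()  # destructive read

class ToyLog:
    """Log semantics: messages are retained; consumers track offsets."""
    def __init__(self):
        self._log = []
    def publish(self, msg):
        self._log.append(msg)
    def read_from(self, offset):
        return self._log[offset:]  # non-destructive; replay any time

q, log = ToyQueue(), ToyLog()
for m in ("a", "b", "c"):
    q.publish(m)
    log.publish(m)

first = q.consume()        # "a" is now gone from the queue
replay = log.read_from(0)  # the log still holds everything
```

Retaining the log is what makes Kafka suitable for the extra uses listed above, such as log aggregation and event sourcing, where replaying history matters.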
Managed middleware services If you’ve taken a look at one managed middleware service, you haven’t necessarily seen them all. With four years of R&D behind it, Alloy is “built from the ground up as a brand new development, leveraging a modern microservices and big data infrastructure designed to handle both computer application integration and data management requirements”, Consoli told SD Times. Expertise in regulatory compliance
is one key differentiator for Alloy, he noted. Liaison holds compliance certifications for HIPAA, European Privacy Shield, PCI DSS 3.1, SOC 2 Type 2, and 21 CFR Part 11. Traditional middleware vendors like Oracle, IBM and TIBCO have also stepped into managed middleware services with add-ons to their existing product and services lineups. So, too, has Red Hat, with managed services for its JBoss middleware. The new Oracle Cloud at Customer, designed to be 100 percent compatible with Oracle Cloud, is for customers needing on-premises data warehouses for regulatory reasons, pointed out Vikas Anand, VP of product management and strategy, Oracle Integration, Process and API Management Cloud Services. Also among large IT vendors, Dell offers a managed middleware service called Boomi. In an offshoot of the managed space, some application software providers now sell managed middleware services as an adjunct to SaaS. Workato doesn't sell its managed middleware services directly to customers, but to partners in the software industry, said Tella. The platform is in use by about 21,000 organizations. Like TIBCO and Oracle, Workato provides corporate developers and even LOB managers with "recipes" they can use for quickly building workflow apps in human resources, sales, and other
SnapLogic's platform allows for all manner of integration between applications and data.
application areas. Workato uses Amazon’s AWS for data warehousing.
DYI iPaaS packaged solutions Also aimed at easing integration, DYI iPaaS packaged solutions are provided as software rather than services. Top competitors in this arena include SnapLogic, MuleSoft, and Jitterbit. Some contend that the DYI packages are less expensive than managed services, but Consoli counters that the cost savings are more than wiped out by organizations’ need to hire trained developers to implement the software. Most of Liaison’s customers have experienced 50 percent or greater savings with Liaison’s managed services approach when compared to in-house application integration, according to Consoli. One SnapLogic customer, a file sharing and content management company named Box, has deployed the packaged software in a predictive analysis implementation for use across multiple LOBs. The system connects 23 different applications, includes 170 million data pipelines, and processes more than 15 million transactions per day, all with only 1.5 FTE developers, according to Alan Luong, senior enterprise systems program manager at Box.
SnapLogic’s platform also enables LOB managers to become what SnapLogic calls “citizen integrators” by letting them create apps without doing any coding. Adobe, another customer, now has about 500 of these end users in place.
Degrees of complexity
Beyond costs, the complexity of needed integrations is another factor that enterprises look at when weighing whether to go with a DIY packaged solution or a cloud-enabled middleware managed service. "Depending on the scope of the service provided, managed services can be a boon to organizations finding themselves short on expertise with middleware platforms of yesterday and even today. Managed service vendors provide numerous benefits to companies looking to save time and money on the various aspects of middleware management," remarked Lucas Vogel, founder of Endpoint Systems, a systems integrator and developer of endpoint software. "Building your own middleware or services layer gives companies full control over their service stack, but it comes at a cost with respect to having a dedicated DevOps and support team at the ready for managing the building, testing, and deployment of the middleware services, in addition to having a
development team. Deploying to the cloud means deploying to a moving target, where everything from the tools to the platforms themselves undergoes constant change." However, to build smooth relationships with middleware managed service providers, enterprises should be upfront about the degree of integration complexity and the amount of maintenance expected of providers, he noted.
Opportunities for developers
In one of a series of recommendations at the conclusion of Aberdeen's report, the analyst firm advised organizations to evaluate both iPaaS and cloud-based managed services to determine whether they can free up time for IT to work on more strategic projects. The report also uncovered internal skills gaps at many organizations for cloud-enabled integration. Enterprises voiced needs for more skills in SQL programming (61 percent); C#, C, C++, and Objective-C (54 percent); API-based programming (53 percent); mobile deployment (54 percent); web development (47 percent); Java (40 percent); and data integration languages such as R (59 percent). Additional needs emerged for more expertise in security and regulatory compliance, Aberdeen's Caton said.
Ethics, addiction and dark patterns BY ALEXANDRA WEBER MORALES
We've all fallen prey to them at one time or another: design techniques such as the bait-and-switch, disguised ads, faraway billing, friend spam and sneaking items into the checkout cart. These "dark patterns" are interfaces "carefully crafted to trick users into doing things, such as buying insurance with their purchase or signing up for recurring bills," according to the website Darkpattern.org, which is dedicated to exposing these tricks and "shaming" companies that use them. Many of these shady practices are classic business scams brought online.
Perhaps more worrisome are the new ways mobile apps capture our attention — until we can't break away. In Addiction by Design: Machine Gambling in Las Vegas (Princeton University Press, 2013), MIT science, technology, and society professor Natasha Dow Schüll crystallizes her 15 years of field research in Las Vegas in an analysis of how electronic gamblers slip into a twilight called the "machine zone" — and how the industry optimizes for maximum "time on device."
Slot machines are one of the most profitable entertainment industries in the United States, according to Tristan Harris, a former design ethicist for Google. In a disconcerting essay on Medium, Harris argues that Schüll's findings don't only apply to gamblers: "But here's the unfortunate truth — several billion people have a slot machine in their pocket: When we pull our phone out of our pocket, we're playing a slot machine to see what notifications we got. When we pull to refresh our email, we're playing a slot machine to see what new email we got. When we swipe down our finger to scroll the Instagram feed, we're playing a slot machine to see what photo comes next.
When we swipe faces left/right on dating apps like Tinder, we're playing a slot machine to see if we got a match. When we tap the # of red notifications, we're playing a slot machine to see what's underneath." Thanks to intermittent variable rewards, Harris and many others note, mobile apps are easily addictive. But when you design for addiction, you open yourself to ethical questions.
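The "intermittent variable rewards" Harris describes are what behavioral psychologists call a variable-ratio reinforcement schedule: the payoff arrives unpredictably, which is precisely what keeps the behavior going. The small simulation below is purely illustrative (the payout probability is invented, not drawn from any app's data); it shows that even though rewards arrive only every few "pulls" on average, any single pull might pay off.

```python
import random

def pulls_until_reward(p, rng):
    """Number of refreshes until a 'reward' (new content) appears,
    when each refresh independently pays out with probability p."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

def average_interval(p, trials=10_000, seed=42):
    """Average number of pulls between rewards on a variable-ratio schedule."""
    rng = random.Random(seed)
    return sum(pulls_until_reward(p, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    # With a 1-in-4 payout, rewards arrive every ~4 pulls on average,
    # but the next pull always *might* be the winner.
    print(round(average_interval(0.25), 1))
```

The unpredictability, not the average rate, is what the slot-machine comparison hinges on: a fixed "reward every fourth refresh" schedule would be far less compelling.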
In Hooked: How to Build Habit-Forming Products (Portfolio, 2014), consumer psychology expert Nir Eyal recommends using operant conditioning — intermittent rewards — to create addictive products. But are all products meant to be addictive, or is a "viral" product one that will flame out after the hype is over? Are "habit-forming" apps a sustainable business model? In short, what are the ethics of addictive design?

An ethical design checklist
Harris recommends that technology products:
1. Honor off-screen possibilities such as clicking to other sites
2. Be easy to disconnect
3. Enhance relationships rather than isolate users
4. Respect schedules and boundaries, not encouraging addiction or rewarding oversharing
5. Help "get life well lived" as opposed to "get things done" — in other words, prioritize life-enhancing work over shuffling meaningless tasks
6. Have "net positive" benefits
7. Minimize misunderstandings and "unnecessary conflicts that prolong screen time"
8. Eliminate detours and distractions

Interestingly, though Eyal argues that technology cannot be addictive, Schüll's gambling research indicates otherwise — and Eyal has, at times, seemed to contradict his own book. Technology dependence and distraction are easily solved, so calling them addictive is overkill, he said: "Everything is addictive these days. We're told our iPhones are addictive, Facebook is addictive, even Slack is addictive." However, he admitted, one to five percent of the technology user population does struggle to stop using a product when they don't want to. "What do these companies that have people that they know want to stop, but can't because of an addiction, do? What's their ethical obligation? Well, there's something we can do in our industry that other industries can't do. If you are a distiller, you could throw up your hands and say 'I don't know who the alcoholics are.' But in our industry, we do know — because we have personally identifiable information that tells us who is using and who is abusing our product. What is that data? It's time on site. A company like Facebook could, if they so choose, reach out to the small percentage of people that are using that product past a certain threshold — 20 hours a week, 30 hours a week, whatever that threshold may be — and reach out to them with a small message that asks them if they need help," Eyal said. He suggests a simple, respectful pop-up message to these users that reads, "Facebook is great but sometimes people find they use it too much. Can we help you cut back?"
It remains to be seen if Facebook will implement such a measure, but Harris has come out swinging in the opposite direction from Eyal. He has launched timewellspent.io, a movement to "reclaim our minds from being hijacked by technology," according to the website.
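Eyal's proposed intervention (flag accounts whose weekly time on site crosses a threshold, then offer help) is technically trivial, which is part of his point. The sketch below is illustrative only; the data shape, user IDs and function name are invented, not any company's actual system.

```python
def flag_heavy_users(weekly_hours, threshold=20):
    """Return the ids of users whose weekly time-on-site exceeds
    `threshold` hours -- the group a 'can we help you cut back?'
    message might target. `weekly_hours` maps user id -> hours."""
    return sorted(uid for uid, hours in weekly_hours.items() if hours > threshold)

# Hypothetical usage data: hours on site this week, per user.
usage = {"u1": 3.5, "u2": 24.0, "u3": 31.2, "u4": 19.9}
print(flag_heavy_users(usage))  # ['u2', 'u3']
```

The hard part, as the article suggests, is not the query but the willingness to run it.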
Follow the path to microservices BY LISA MORGAN
Content provided by SD Times and Red Hat
Many of today's organizations jump into microservices without considering what success will require. Rather than assessing where they are, where they want to go and how best to get there, they're hoping to make giant leaps directly from waterfall or Agile to microservices. If you're one of the hopefuls who wants to teach elephants how to dance and fly, Red Hat can help.
"There are a lot of prerequisites you have to master first, but people are trying to bypass all that," said Burr Sutter, director of developer experience at Red Hat. "They don't realize they can deliver monolithic software in one-week sprints doing a lot of things besides microservices."
There's nothing wrong with microservices, of course. However, there are better and worse ways to adopt them. If you want to get things right the first time, Sutter and Red Hat recommend the following five-stage approach that builds on critical best practices:
Stage 1: Re-org to DevOps. DevOps, like Agile, requires a cultural shift from hand-offs (throw it over the wall) to shared responsibility. That means collaborative problem-solving instead of finger-pointing and ticket-filing. "You shouldn't have dev versus ops versus security or anything else. You need a cohesive team," said Sutter. "Once you have empathy for the people who have to stay up all night when things go wrong, the quality of code goes up."
Stage 2: Enable self-service on demand. Developers shouldn't have to wait days, weeks or months to get a virtual machine when software delivery cycles continue to shrink. With elastic infrastructure or infrastructure as code,
provisioning can be almost instantaneous. "How many days or weeks does it take for a developer to get a virtual machine and how many tickets have to be filed? I see companies making this mistake all the time," said Sutter. "Servers are cheap compared to the guy waiting for the server. It's like telling the finance department they can't use calculators and pencils."
Stage 3: Automate. Do you have Snowflake servers or Phoenix servers? If you have the former, it's time to switch to the latter. "With a Phoenix server, your server template is code, not human. That's super-critical," said Sutter. "I've talked to multiple customers about this topic and not everyone knows you have to write scripts. One company told me they had 300 servers and they weren't sure if they could reboot any of them."
Stage 4: Continuous integration and delivery. Continuous integration and delivery are often measured in terms of software delivery speed. However, there are some more subtle and important characteristics people tend to overlook. "Jez Humble has some beautiful tests for this one," said Sutter. "The one that blows people's minds is, 'The trunk is always deployable,' meaning what you believe to be the source of truth in your code repository is ready to deploy at any given moment. That's what your mindset should be." That's a difficult test, though, so most people realize their trunk isn't always deployable. For those who still think their trunk is always deployable, the next test is whether everyone on the team is checking in code daily. Then, if a build breaks, how long does it take to fix? Remediation should take less than 10 minutes versus a day or two. Finally, how long does it take to onboard a new engineer? Complicated architectures and onboarding practices tend to get in the way.
Stage 5: Use advanced deployment techniques. This is where blue-green and canary deployments come in. The subtleties are important because if continuous integration is done properly, the software team will want to enable continuous delivery by automating the deployment pipeline. With Kubernetes and blue-green deployments, all users receive an update or a rollback in a single click. With canary deployments, all updates are tested with a subset of users prior to a complete rollout. "One reason you should care about the subtle automations is because of all the CVEs that are discovered throughout the entire software stack," said Sutter.
In short, if you want to succeed with microservices, master everything else in order, first. Learn more at www.redhat.com
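The routing idea behind a canary deployment can be approximated independently of any platform: deterministically send a small percentage of users to the new version and keep everyone else on stable. To be clear, this is not Red Hat's or Kubernetes' mechanism, just a sketch of the concept; the user IDs and percentages are invented.

```python
import hashlib

def assign_version(user_id: str, canary_percent: int) -> str:
    """Route `canary_percent`% of users to the canary build, the rest
    to stable. Hash-based bucketing keeps a given user on the same
    version across requests, so their experience is consistent."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

# With a 10% canary, roughly one user in ten sees the new version;
# if errors spike for that cohort, the rollout stops before a full release.
share = sum(assign_version(f"user-{i}", 10) == "canary" for i in range(1000)) / 1000
print(share)
```

Hashing rather than random choice is the key design point: it makes the split reproducible, so a user is never bounced between old and new behavior mid-session.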
Girls in Tech’s Adriana Gascoigne: ‘We have a long way to go’ BY MADISON MOORE
Another day, another headline highlighting gender equality and diversity in the tech workplace. The most recent story? A leaked 10-page screed from a Google engineer who insists that women in the tech workplace are not underrepresented because of bias and sexism, but because of "inherent psychological differences between men and women." Yet companies like Uber, Tesla, startups, and multiple firms have been hit with lawsuits and allegations around sexism, sexual assault and harassment, bullying, and bias by former women employees.
Adriana Gascoigne, founder of Girls in Tech, is one of the many individuals insisting that sexual harassment and bias are rampant in the tech community. It's been going on since long before she started her career in tech in 2006, she said. In fact, one of the main reasons she started Girls in Tech (a group that focuses on a new generation of women and minorities in technology) was to make sure the tech playing field encourages and empowers these diverse groups of people. And although there is an apparent shift happening in this space, with more women coming forward and openly speaking about their experiences, Gascoigne said we haven't reached that silver lining yet.
In an exclusive talk with SD Times, Gascoigne gets real about her own experiences in tech, her thoughts on Silicon Valley, what's changed for gender diversity, and where companies need to take real action.
SD Times: What's your own experience in terms of dealing with harassment in this industry, and how did you push through it and make it to where you are today?
Gascoigne: When I jumped into my career, I was the only female at a tech company. I didn't think anything of it until I was being alienated, isolated and treated differently. People would communicate in inappropriate ways, or do inappropriate things [to me]. I had a water bottle thrown at my head and people swearing at me, [and] things you wouldn't find in a productive environment. There was no reason to fire me so [the executives] just put me on the other side of the office. They had me isolated from the other part of the team, which I thought was a really horrible solution. It was putting a piece of tape on the issue.
This continued to happen to me years later (at a different company). I was in marketing at another high-tech company and had to travel with someone with an engineering background. Very inappropriate things were said, certain requests were made that were highly inappropriate, and he just made me feel uncomfortable and like my job was on the line if I didn't comply or keep my mouth closed. Eventually it came to me talking to
HR, who didn't do much. And it was so disappointing because the HR person was a woman. [The company] said it had to do with my performance, even though my performance was absolutely flawless. It was just an excuse to push me out. It was shocking and sad, but I learned fast that this is the way the working world works, and I have to fend for myself.
SD Times: I've heard both sides of the gender diversity issue; some say everything is better now, and others argue that we have more work to do. Where do you think we stand on gender diversity?
Gascoigne: Oh no, we have a really long way to go. You get the proof in all the headlines. There are all of these people coming forward that are stating their experiences in a detailed fashion, and hopefully this is the silver lining that will create or instigate change, or hold companies and executives accountable for their behavior. In addition to having more women or minorities on boards or in C-level positions, you will create a more loyal staff, an environment of solidarity, and a more empathetic [company]. I believe that's what's missing. I think empathy is really taking time to understand what people are going through. And it's hard, because if you are not an African-American female, you are not going to understand what she is going through because you are not in her shoes. But taking the time to actually talk to the person, talk about what they are going through — empathy can echo throughout the company's fabric and allow for a very positive trickle-down.
SD Times: In the New York Times article that you cited in your open letter to women, women speak frankly about their experiences. Why don't more people come forward?
Gascoigne: People aren't really talking about [this behavior]. People — where
their gender or racial background is scarce in an industry — don't want to be the "tattletales" or the whiners or stand out because of something negative that happened, even if they aren't the ones that caused it. I feel that there is a sense of fear that they might lose their jobs. I feel there is a lot of negative connotation for coming forward, for calling someone out or a group of people out for their bad behavior. A lot of people end up enduring it or absorbing it and not speaking out, and they hope it will go away with time. Really it doesn't; it typically gets worse and worse, and suddenly it becomes a systemic issue in that company.
I think it's a matter of building more confidence and having the ability to get support from an HR division or executives, and to have the ability to do anonymous complaints in an organized way. HR policies and new guidelines need to be activated, which will hold people accountable. This means strict guidelines, where [people] can't get away with things. We never thought we needed this in the past, but it's quite obvious that we do. This is all very crucial to the advancement of our ecosystem and attracting more women and minority groups to the field. If you are experiencing all these negative activities, why would the next generation of women in tech be interested in jumping into the field?
SD Times: Are there any "safe places" for women, minorities and different genders? Is there anything that Girls in Tech provides?
Gascoigne: We are launching a diversity and inclusion program next month, which really helps corporations and mid- to late-stage startups focus on three major things that we think are crucial for the current diversity and inclusion issue. The first is recruitment tactics: How do you create certain activities to really promote diversity within the recruitment channels in the company? Second is building a culture: What are the types of things that companies need to do on every level to instill certain values within a culture that is sustainable and
empowers people to be productive? Third is training and tactical things like training programs for management and executives: How do you treat others in the workplace? How do you sustain an [open] culture? What do you do day-to-day to sustain a culture that is really inviting to all different groups of people? How do you treat the opposite gender in a respectful way?
We feel the training and policy go hand-in-hand. This means strict policies that are issued by HR and executives, and it holds the executives accountable for making sure that the rules are followed and the guidelines are followed. We think it will be a game-changer for the field. We are really focused on this in addition to being a community
with a voice. [We want to encourage] those brave women to come forward and share their personal stories, have our support as a community, and [we want to] serve as a platform where they can share their experiences and have a voice based on what they have been through.
SD Times: Policies and training seem key for companies today. And we are all aware of companies that need to work on gender diversity. But what about the ones that are setting a good example?
Gascoigne: Autodesk has a robust diversity and inclusion division, and they have a training program for executives and management. They instill these values to empower women to get ahead in their positions within Autodesk. Etsy created a boot camp to empower more minority groups to code, and after they go through the program, they hire these folks into the company and nurture their growth as they become senior-level coders or executive-level employees. Also, Salesforce CEO Marc Benioff had part of his HR division go in and do research on all women that were at the same level and made sure they were getting paid the same as their male counterparts. They were able to make sure there was parity when it came to compensation and salaries for men and women in the same position.
There are solid companies activating programs because they know it's a problem. They want to integrate these programs and avoid going down a path that companies like Uber, GitHub, and Tesla have had to deal with. It's obviously a matter of protecting the employees and making sure they feel safe and comfortable in the workplace, but it's also a PR issue and it's also a revenue issue. Consumers speak with their wallets. In this type of society, where the headlines can make you lose hope that people have lost all of their morals and ethics, there are people out there who are moral and ethical, and they are empathetic. We need to continue to celebrate those people and what they are doing.
Service Virtualization brings expediency Development organizations can move with the speed of today’s computing world by testing pieces of component applications in virtualized environments on-prem, in cloud BY DAVID RUBINSTEIN
Content provided by SD Times and HPE
The consumerization of IT is transforming how businesses work. Users want what they want, when they want it, and IT departments have to keep pace.
"The advent of modern technologies such as mobile, cloud, virtualization and IoT has fueled higher consumer expectations and demands. We expect information and services to be served anytime, anywhere at the click of a button," said Kimm Yeo, senior manager, worldwide product marketing at HPE.
One new technology that's making this digital transformation possible is service virtualization, which gives developers a chance to roll out their own test environments and test these pieces of applications before the entirety is fully assembled.
Yeo said the new computing capabilities available to consumers today don't come without added complexity and cost. "They require staff who need to coordinate and manage the disparate number of tools as well as siloed apps and components that need to come together as composite-based applications for their customers. The challenges are compounded as businesses and application owners try to get the different development, ops and QA teams to collaborate and deliver great quality apps with speed."
But many development and QA teams have been struggling with the balancing act of releasing their products and services with speed while preserving quality. The advent of new technology such as service virtualization has proved to ease the development and testing process, leading to both cost and time savings. Early adopters are finding that using
virtual services can significantly speed up the development and testing of new (or re-engineered) software components. "These created virtual services and test assets can easily be shared with other teams and re-used as part of the agile, continuous testing and DevOps process," Yeo said.
HPE Service Virtualization improves communications and simulations of dependent components that could be on a client, in middleware or in a legacy system, she explained. "As long as the components needed are web-based, cloud-based or SOA-based applications (service-oriented architecture) written in Java or C#, and leverage transport and messaging APIs such as HTTP/S, XML, REST, JSON, JMS, MQ and more, developers and testers can use HPE Service Virtualization to virtualize and simulate the restricted components," she said.
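The core mechanic of service virtualization (a stand-in that answers a dependency's API with realistic, canned responses) can be illustrated with a few lines of stub code. To be clear, this is not HPE Service Virtualization itself, only a hand-rolled sketch of the concept; the endpoint path and JSON payload are invented.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Canned response standing in for a backend that is unavailable:
# still under development, rate-limited, or locked behind a firewall.
CANNED = {"orderId": "demo-123", "status": "CONFIRMED"}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep demo output quiet
        pass

def start_stub(port=0):
    """Start the stub on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), VirtualService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = start_stub()
    url = f"http://127.0.0.1:{srv.server_port}/orders/demo-123"
    with urlopen(url) as resp:
        # The code under test never knows the backend is fake.
        print(json.load(resp))
    srv.shutdown()
```

A real service virtualization product adds what this sketch lacks: recorded traffic, stateful scenarios, protocol breadth beyond HTTP, and shared management of the virtual assets.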
Increasing role in mobile, IoT apps
Research firm Gartner, Inc. forecasts that IoT technologies will result in some 8.4 billion connected things being in use worldwide in 2017, up 31 percent from 2016, and will reach 20.4 billion by 2020. Total spending on endpoints and services will reach almost $2 trillion in 2017. China, North America, and Western Europe are driving the use of connected things, and the three regions together will represent 67 percent of the overall IoT installed base in 2017.
Follow Kimm Yeo @kimm_yeo
Highlights of HPE Service Virtualization
Apart from IoT connected apps testing support, HPE Service Virtualization 4.0 continues to expand and enhance support in multiple areas. Here are a few of the highlights:
Continued breadth of protocol support — From non-intrusive virtualization of SAG webMethods Integration Server to enhanced financial protocols such as SWIFT MT/MX messages and FIX financial messages over the IBM WebSphere MQ protocol. You can realistically simulate SWIFT protocol messages and modify the test data or SWIFT test scenarios effortlessly, without the need for technical know-how or an available SWIFT network environment.
Enhanced virtual service design and simulation — The introduction of new dynamic data rules and data-driving capabilities further helps reduce time and improve efficiency for users.
Continued support for DevOps and continuous testing — With the enhanced SV Jenkins plug-in. The updated HPE Application Automation Tools 5 Jenkins plugin allows easy manipulation, deployment and undeployment, and the management of changes in virtual services and assets as part of the continuous delivery pipeline.
Infrastructure and licensing changes — There are several changes here, such as the introduction of a new SV Server concurrent license which allows running SV Server in dynamic network environments and/or for cloud deployments, support of Linux in beta, and changes to the SV distribution packages and support of 64-bit versions only (removal of 32-bit versions) of SV Designer and Server.
How to choose the right database for your microservices BY ROSHAN KUMAR
Microservices are in the spotlight, as infrastructure building blocks, because they offer decoupling of services, data store autonomy, miniaturized development and testing setups, and other benefits that facilitate faster time to market for new applications or updates. The availability of containers and their orchestration tools has also contributed to the increase in microservices adoption.
At its core, the rise of microservices is a rejection of the traditional architecture approach, in which a monolithic database is shared between services. Microservices, in contrast, embrace independent, autonomous, specialized data stores for each microservice. An e-Commerce solution, for example, may employ the following services: application server; content cache; session store; product catalog; search and discovery; order processing; order fulfillment; analytics; and many more. Rather than use a large, single database to store all of the operational and transactional data, a modern e-Commerce solution may use a microservices architecture similar to the one depicted in Figure 1, in which each service has its own database.
Figure 1. Microservices in a sample e-Commerce solution.
(Roshan Kumar is product manager at Redis Labs.)
How to choose a data store
The first step when choosing the ideal data store is to determine the nature of your microservice's data. The data can be broadly classified into the following categories:
1. Ephemeral data: A cache server is a good example of an ephemeral data store. It is a temporary data store whose purpose is to improve the user experience by serving information in real time. The microservice is typically tuned for high performance and the operations are read-intensive. The store has no durability requirements as it does not store the master copy of the data, but it still has to be highly available, because failures could cause user experience issues and, subsequently, lost revenue. Separately, failures can also cause "cache stampede" issues, where much slower databases crawl because they cannot handle high-frequency accesses.
2. Transient data: Data such as logs, messages and signals usually arrive in high volume and velocity. Data ingest services typically process this information before passing it to the appropriate destination. Such data stores need to support high-speed writes. Additional built-in capabilities to support time-series data and JSON are a plus. The durability requirements of transient data are higher than those of ephemeral data, but not as high as those of transactional data.
3. Operational data: Information gathered from user sessions, such as
user profiles, shopping cart contents, etc., are considered operational data. The microservice offers better user experience with real-time feedback. Even though the data stored in the database is not a permanent proof of record, the architecture must make its best effort to retain the data for business continuity. Often, operational data is organized in a particular data model such as JSON, graph, relational, key-value, etc. 4. Transactional data: Data gathered from transactions such as payment processing and order processing must be stored as a permanent record in a database that supports strong ACID controls. In the e-Commerce application shown in Figure 1, we can classify the microservices and their respective data stores as shown in Table A.
Table A. Data stores in the sample e-Commerce solution:
Microservice | Data category | Purpose
Cache Server | Ephemeral | Low-latency user experience
Session Store | Operational | Low-latency user experience
User Activity Tracker | Transient | Fast data ingest; collect all activities and pass them on to the analytics engine
User Comments and Ratings | Transient, Operational | Data ingest, reporting, support escalation, communication
Product Catalog | Operational | Low-latency user experience with near-accurate, updated product data
Search Engine | Operational | Low-latency user experience
Analytics and Reporting | Operational | Personalized recommendations, enhanced user experience
Order Processing | Transactional | Optimized user experience; permanent store of record
Order Fulfillment | Transactional | Permanent store of record
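A cache server in the "ephemeral" category typically follows a cache-aside pattern with a time-to-live: serve fresh entries from memory, and fall through to the slow source of truth on a miss or after expiry. The following is a generic, in-process sketch of that pattern, not any particular product's API; the key names and loader are invented.

```python
import time

class TTLCache:
    """Minimal cache-aside store for ephemeral data: entries expire after
    `ttl` seconds, and a miss falls through to the (slow) source of truth."""
    def __init__(self, ttl=30.0, clock=time.monotonic):
        self.ttl, self.clock, self._data = ttl, clock, {}

    def get_or_load(self, key, loader):
        entry = self._data.get(key)
        if entry is not None and self.clock() - entry[1] < self.ttl:
            return entry[0]                      # fresh hit: no database trip
        value = loader(key)                      # miss or expired: reload
        self._data[key] = (value, self.clock())
        return value

calls = []
def slow_db_lookup(key):
    calls.append(key)                            # stands in for a real query
    return f"value-for-{key}"

cache = TTLCache(ttl=30.0)
cache.get_or_load("sku-1", slow_db_lookup)
cache.get_or_load("sku-1", slow_db_lookup)       # served from memory
print(len(calls))  # 1 -- only one trip to the database
```

Injecting the clock makes expiry testable; a production cache would add the stampede protection mentioned above, such as locking or request coalescing around the loader.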
Table B. Data store configuration and tuning, per microservice, for atomicity, consistency, isolation and durability requirements (User Activity Tracker, User Comments and Ratings, Analytics and Reporting, and the other microservices from Table A).
Tunability for consistency and durability
In order to optimize your microservice for performance and data durability requirements, it's important to confirm that your selected database offers the appropriate tunability features for the data categories that you've identified. For high performance, a pure in-memory database is an ideal choice. For durability, data replication along with
persistence on disk or flash storage is the best solution. For example, the cache server in our e-Commerce example must be optimized for low-latency, high-speed read operations. Based on the nature of the data, the database need not be burdened with durability options. On the other hand, the order fulfillment microservice focuses on keeping the data clean and consistent. Table B shows how the data store
for each microservice should be configured and tuned based on its requirements for atomicity, consistency, isolation and durability.
Lastly, it is also important to evaluate the deployment and orchestration options available with the database in order to ensure that all microservices are deployed and managed in a homogeneous environment. Some key criteria to look for include:
1. Availability as a container: Since microservices are mostly deployed as containers managed by orchestration tools, there is great operational efficiency to be gained when also using databases as containers.
2. Cloud/on-premises options: Is the database available on the cloud or on-premises, where the microservice is deployed? A database that's available for both deployment options offers greater flexibility.
3. Vendor lock-in: Organizations sometimes switch orchestration tools, so it's helpful to make sure the database supports all popular orchestration tools.
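The consistency/durability tuning described above ultimately controls when a write is acknowledged relative to reaching stable storage. A generic, file-based sketch of that trade-off follows; it is not any specific database's persistence setting, and the record format is invented.

```python
import os
import tempfile

def append_record(path, record, durable=False):
    """Append one record; with durable=True, fsync before acknowledging,
    trading write latency for crash safety -- the same trade-off a
    database's persistence/durability setting controls."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(record + "\n")
        f.flush()
        if durable:
            os.fsync(f.fileno())   # block until the OS has it on disk

path = os.path.join(tempfile.mkdtemp(), "orders.log")
append_record(path, "order-1 CONFIRMED", durable=True)   # transactional: durable
append_record(path, "cache-warm sku-9", durable=False)   # ephemeral-ish: fast path
with open(path, encoding="utf-8") as f:
    print(f.read().splitlines())  # ['order-1 CONFIRMED', 'cache-warm sku-9']
```

The non-durable write returns as soon as the data reaches the OS page cache, so a crash can lose it; the durable write waits for the disk, which is exactly why an order-processing store accepts higher latency than a cache.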
Microsoft releases .NET Core 2.0
Open-source update adds API support, live unit testing and better cloud debugging, based on .NET Standard 2.0 ‘spec’
BY DAVID RUBINSTEIN
Microsoft late last month announced the release of the open-source .NET Core 2.0, with improved performance, 20,000 more APIs from the .NET Framework world, better cloud debugging and live unit testing. .NET Core is one of the platforms Microsoft developers can use to create cross-platform applications; Core is best suited for microservices and container architectures, according to Scott Hunter, Microsoft’s .NET director of program management.

Hunter explained that .NET Core, Xamarin, UWP and the .NET Framework platforms all have different subsets of APIs, which has made sharing code between applications on those platforms difficult. “If you’re a .NET Framework customer and you’re using file APIs and then you move to Universal Windows applications, those APIs didn’t exist. We need to make all our dot-nets come together under a common umbrella.”

So today, Microsoft announced the official release of .NET Standard 2.0, which Hunter described as “a spec like HTML5” that is a set of all the APIs that can run on all the different platforms. It contains all the intrinsics, Hunter said, adding that .NET Framework, .NET Core, UWP and Xamarin all must implement .NET Standard 2.0. “This makes it easy to share code between a .NET Core app and a Xamarin app,” Hunter said. “The ability to do code reuse became much easier.” Among the platforms .NET Standard 2.0 and its 40,000 APIs support are .NET Framework 4.6.1, .NET Core 2.0, and Xamarin for iOS, Mac and Android. Support for UWP is in the works and is expected to ship later this year, Microsoft’s Immo Landwerth, program manager on the .NET team, wrote in a blog post Wednesday announcing that .NET Standard 2.0 was final.
.NET Core 2.0 is required to build NuGet packages; Visual Studio 2017 15.3 is required to author .NET Standard 2.0 libraries. Also, the latest version of Visual Studio for Mac, 7.1, supports building .NET Standard libraries, Landwerth wrote.

In web benchmarks, .NET Core 2.0 is 20 percent faster than the previous version, Hunter said, crediting the open-source community for contributing many of the performance fixes.

In the .NET Core 2.0 wave, cloud debugging has been improved, according to Hunter. Today, developers use a logging framework to log errors. “The problem with those logging frameworks is that if they’re not aware of Azure, when you post your app up to Azure, Azure’s portal won’t be able to show you the logs.

“We changed .NET Core 2 where it can be aware of where it’s running, let’s say the cloud, and the cloud can be aware of it,” he continued. “When you publish your .NET Core 2 application up to Azure, things like diagnostic logs just go to the right places automatically. All the diagnostic stuff just works without writing code as you Azure-fy your application.”

When your application is published to Azure, he said, a bar in the portal enables enhanced diagnostics. “We inject a profiler and better crash analytics into the app,” he explained. The profiler watches the app. If it crashes on the same method 100 times, for example, the app is frozen and a ‘cloud snapshot’ is taken that can be downloaded from the portal to Visual Studio running on a local machine, so you can debug the application without stopping it from running in the cloud, he said.

In March, the capability to do live unit testing was added to Visual Studio for .NET Framework customers. Today, the feature is supported in .NET Core 2.0. This capability tells developers which code has unit tests written for it, and which code doesn’t. Live unit testing can tell if the code is covered, and what’s passing and failing live in the IDE while typing code. Further, it can identify how changes to source code impact tests. “Let’s say I’ve got 8,000 tests. You don’t want to run all 8,000 tests when you change one line of source code. It actually identifies only the tests that are impacted by the code you change. And we can do that live. Maybe the one line of code you change affects only three tests. We rerun those tests only.”

Support for containerizing ASP.NET Core apps as Windows Nano images has been added; users can now select Nano as the container platform for new or existing projects. Finally, first-class support has been added in .NET Core 2.0 for Angular and React, Hunter said.

“.NET Core has reinvigorated the developer community,” he said. “We’ve seen a revival of .NET since the open-sourcing of Core.” z
GrapeCity focuses its brand on developers
BY JACQUELINE EMIGH
Software developers are about to discover a new, unified brand message across the GrapeCity Developer Solutions product family, which includes ComponentOne, ActiveReports, Spread, and Wijmo. A company-wide strategy to refine branding includes a new website design, product marks and brand guide, and new releases of all products, said Joseph Lininger, head of global marketing for GrapeCity, Inc. Launching this month, the new English language website is the centerpiece for the brand campaign and provides a single resource for all four product lines. A new Japanese language site is planned for early 2018, with Chinese and Korean language sites in the works for shortly thereafter. The company also plans to enhance its consumer messaging across all channels to better illustrate the brand’s focus on supporting developers. Core product features will remain unchanged as industry-leading component lines, but will receive ongoing upgrades and enhancements, and in the coming months customers will experience a more unified look and feel to component UIs, samples, demos and documentation. GrapeCity, the largest maker of Windows component software in the world, talked extensively with industry leaders and software developers while planning its brand strategy. The company’s global business units stand solidly behind the new approach, according to Lininger. “Our message is that GrapeCity makes software tools that empower enterprise software developers to achieve more,” said Lininger. “We build, support, and maintain the most powerful, easily extensible and reliable developer solutions available. Our tools give
CollabNet, VersionOne merge
BY DAVID RUBINSTEIN
Developer software company CollabNet announced it is merging with VersionOne, a provider of software for agile enterprises. Terms of the deal have not been disclosed. The deal enables the company to offer solutions designed to empower organizations to flow Agile practices through all aspects of the business, align business goals with development execution, and measure the value of its process and applications, according to Flint Brenton, CEO of CollabNet. About the merger, Brenton said, “There’s a lot of great upside. We offer enterprise-level security, compliance and experience. We’ve moved to where the market wants to be, both agile and traditional software development, on-premises and in the cloud.” CollabNet is perhaps best known for its TeamForge application life cycle management software. Version 17.4 was released in May with features that address Agile at enterprise scale and that simplify source code management and version control. “We had made an investment in Agile and our solution was up and coming, but VersionOne
was stronger than we were” in that area, Brenton said. VersionOne provides what is widely considered the top platform for scaling Agile through the enterprise, with support for the Scaled Agile Framework, Scrum and Kanban, as well as continuous delivery and DevOps automation. “Where we were weak, CollabNet was strong,” said Robert Holler, former CEO of VersionOne who is now chief strategy officer at CollabNet. About Agile and DevOps coming together, Forrester’s Diego Lo Giudice wrote in the merger announcement, “In the past, faster delivery meant lower quality and higher risk. Leading organizations have shown that applying Agile and DevOps practices enable faster delivery, higher quality and lower risk.” CollabNet will keep the VersionOne brand for its Agile products, according to Brenton, but the company “might revisit branding down the road.” “We might not be the market leader in this space, but I believe we’re the technology leader right now,” Brenton said. z
Some near-term solutions synergies
Several opportunities have been identified to provide existing customers and the market-at-large with near-term and measurable value:
• TeamForge and VersionOne ALM Connect: Gives TeamForge customers visibility and traceability of the flow of data and artifacts across ALM tools, including VersionOne Lifecycle.
• TeamForge and VersionOne Continuum: Gives TeamForge customers the ability to automatically package and deploy applications, and provides visibility and traceability across the CD pipeline.
• VersionOne Lifecycle and TeamForge SCM: Provides VersionOne Lifecycle customers enterprise-grade, web-based code review and source control management capability.
• VersionOne Continuum and CollabNet DLM: Provides both products’ customers value stream monitoring and measuring, tool chain orchestration, traceability, and lean performance management metrics and KPIs.
Veracode survey shows cybersecurity skills gap
BY MADISON MOORE
Today’s formal education shows significant security skills gaps in the IT and developer professional community. According to new research from Veracode and DevOps.com, 76 percent of developers indicated security and secure development education is needed for today’s world of coding, but it’s missing from current curriculums. By not including security as part of a bachelor’s or master’s degree program, or by leaving it out of training on the job, businesses are risking being part of another global cyber attack, according to Veracode, a software security company recently acquired by CA Technologies. The 2017 DevSecOps Global Skills Survey from Veracode found that 65 percent of DevOps professionals say they are learning the skills they need on the job, and they are not receiving the necessary training through their formal education. “With major industry breaches further highlighting the need to integrate security into the DevOps process, organizations need to ensure that adequate security training is embedded in their DNA,” said Alan Shimel, editor-in-chief, DevOps.com. “As formal education isn’t keeping up with the need for security, organizations need to fill the gap with increased support for education.” Almost all of the respondents (80 percent) have a bachelor’s or master’s degree, but there is still a lack of cybersecurity knowledge prior to entering the workforce. The survey found that 70 percent of respondents said their security education was not adequate. “Our research with DevOps.com highlights the fact that there are no clear shortcuts to address the skills gap,” said Maria Loughlin, vice president of engineering at Veracode. z
JDK 9 is almost here. Again.
Promised modularity expected, but is it too little too late?
BY LISA MORGAN
JDK 9 is less than three weeks away at the time of this writing, and one of the burning questions is whether it has been worth the wait. Originally, general availability was slated for September 2016, then March 2017, July 2017 and finally September 2017. So, it’s taken some time, but when you update a programming language as a community, progress is slower than if a vendor does it single-handedly. There’s still work to be done, including bug fixes, which will come later. Interestingly, not many Java developers seem to want to talk about Java 9. Red Hat, which occupies two seats on the Java Community Process (JCP) Executive Committee, wasn’t able to respond in time for publication, according to a spokesperson. (Red Hat and IBM have had differences with Oracle concerning what JDK 9’s modularity should look like, which is not surprising.) A Cisco engineer using JDK 8 wasn’t even aware that JDK 9 was almost ready for general release. Part of the problem may be that there’s not a lot of Java 9 information available yet. “I was super excited about 9 because everyone’s talking about the whole Jigsaw thing. Then I opened up some
What’s new in the Java 9 specification?
JDK 9 introduces a link-time phase, which occurs between compile time and runtime. It allows developers to assemble a set of modules into a runtime and customize modules as necessary. The Java compiler (javac) and linker (jlink, the Java linker) tools enable developers to specify module paths that locate module definitions. Java Enhancement Proposal (JEP) 247 enhances javac so it can compile Java programs to run on selected older versions of the platform. JEP 282 (jlink) assembles and optimizes a set of modules and their dependencies into a custom runtime image that can be optimized for a single program. Also new are the modular Java ARchive (JAR) file, which is a JAR file that has the module-info.class file in its root directory, and the JMOD packaging format, which can include native code and configuration files. The module system also has a more concise version-string format that distinguishes major, minor, security and patch updates.
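As an illustration of the pieces named above, a modular JAR starts from a module descriptor; the module and package names here are hypothetical:

```java
// module-info.java — compiled to the module-info.class in the JAR's root directory
module com.example.catalog {
    requires java.sql;               // depend on a platform module
    exports com.example.catalog.api; // make one package visible to other modules
}
```

A custom runtime image containing only this module and its dependencies could then be assembled with the linker, e.g. `jlink --module-path $JAVA_HOME/jmods:mods --add-modules com.example.catalog --output image` (paths illustrative).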
Instant code feedback
JEP 222 (JShell) adds Read-Eval-Print-Loop (REPL) functionality to the Java platform, which has been a feature of other languages including LISP and Scala. REPL is an important addition to JDK 9 because it provides developers with instant feedback as they write code. The capability is helpful for programmers who are new to Java and for experienced Java developers who want to learn how to use an API, a library or a feature. REPL helps improve developer productivity by preventing errors early in development.
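For instance, a short session in the `jshell` tool shipped with JDK 9 might look like this (output abbreviated):

```
jshell> int square(int x) { return x * x; }
|  created method square(int)

jshell> square(12)
$2 ==> 144
```

Definitions, expressions and their results are all evaluated immediately, with no class, main method or compile step required.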
Improved JVM compilation and performance
JEP 165 provides compiler directive options to control JVM compilation. The level of control is manageable at runtime and method-specific. This new feature supersedes, and is backward compatible with, CompileCommand. JEP 197 segments the code cache, with each segment containing a different type of compiled code, to improve performance and enable future extensions. JEP 276 dynamically links high-level object operations to the appropriate method handles at runtime, based on the actual types of the values passed. While dynamic linking is not a new feature, the new capability provides a way to express higher-level operations on objects and the methods that implement them, unlike java.lang.invoke, which is a low-level API. JEP 158 introduces a common logging system for all JVM components. JEP 271 uses this unified framework to reimplement garbage collection (GC) logging for consistency between old
videos and all the people on the video told me to wait. Based on their statements, the instructions I’ve read and examples of things online, I have to say the same thing,” said John Alexander, senior software developer at Internet marketing service provider Zeeto who’s been using Java since 2004. “I can’t justifiably tell anybody at my company we
can switch over right now and get all the benefits. There’s going to have to be research done, which is kind of scary.” Alexander Volkov, a business analyst and architect at online brokerage software provider Devexperts, is wholly unimpressed. “Actually, [JDK 9] has no super
compelling features as almost every feature is outdated and is a ‘catch-up’ of Java as a language and a platform to what is already available somewhere else, be it other languages, libraries or platforms,” he said. Software development management consultant Eric Madrid has been using Java since the first version. He sees value in Java 9. “9 is unique in that a lot of the Oracle obligations have been fulfilled. This is the first release of Java where the community gets what they’ve been asking for,” said Madrid. “We’ve had some tooling, more modular design and the big one I’m looking forward to is native interprocess communication.” In February 2017 Oracle published a beta draft of the Java Access Bridge Architecture. Java Access Bridge is a collection of classes and DLLs that enable interprocess communication among assistive technologies and Java applications.

“There’s a thing called the Twelve-Factor App, which is the de facto standard for scaling, configuring, deploying, coding and testing software applications. One of the tenets is scale, and how you scale is through interprocess communication,” said Madrid. “You have a process running on one machine and I want to directly communicate with a process on another machine. There are native ways of doing that. Java’s never allowed you to interact with a process directly; you had to go about it another way and use different layers and databases and messaging systems and things like that, but this is Java’s first attempt at building native scalable design.” The migration to Java 9 from previous versions will be more difficult than previous migrations because Java 9 has a modular architecture and previous versions have a monolithic architecture. Developers want to see Java 9 support in their favorite tools and educational material that can help ease the path to migration. “Other releases were effectively turnkey. You could get your applications continued on page 35 >

and new formats, though the consistency is not absolute. To improve user experience, JEP 248 makes G1 the default garbage collector on 32- and 64-bit server configurations. It replaces the Parallel GC, which has been used for a very long time. The reasoning is that a low-latency-oriented collector is better than a throughput-oriented collector for its intended purpose (user experience).

Core library enhancements
JEP 102 improves the API used to control and manage operating system processes. The ProcessHandle class provides the process’s process ID, arguments, command, start time, accumulated CPU time, user, parent process and descendants. The class can also monitor the liveness of processes and destroy processes. With the ProcessHandle.onExit method, the asynchronous mechanisms of the CompletableFuture class can perform an action when the process exits. JEP 193 defines three different things. The first is a standard means of invoking the equivalents of java.util.concurrent.atomic and sun.misc.Unsafe operations upon object fields and array elements. In older versions, sun.misc.Unsafe provides a non-standard set of fence operations. JEP 193 defines a standard set of fence operations to enable fine-grain control of memory ordering. It also defines a standard reachability fence operation to ensure a referenced object remains strongly reachable. JEP 254 provides a space-efficient internal representation for strings. Previously, the String class stored characters in a char array, using two bytes for each character. The new representation of the String class is a byte array plus an encoding-flag field. JEP 266 is an interoperable publish-subscribe framework. Updates have been made to the CompletableFuture API and other things with the goal of continually evolving the uses of concurrency and parallelism in applications. JEP 269 is a collection of APIs that are static factory methods to produce collections of immutable instances. The APIs simplify the creation of instances. For example, List.of and Set.of can take up to 10 fixed arguments. Alternatively, varargs can be used for an arbitrary number of arguments that pertain to lists and sets. Map.of is for a small number of key-value pairs. It does not use varargs. “It’s taken 20+ years for Java to have this by default, which is insane. It’s taken way too long,” said Zeeto’s Alexander. “The fact that I have not one but two different libraries installed across every one of our applications just to generate static lists poorly is very strange to me. I shouldn’t have to use a third-party library to generate a list like that. So, they’re late to the party, but I like it.” JEP 269 makes lists, sets and maps easier, so what’s not to like? Perhaps the fact it’s a set of APIs rather than a standard, built-in feature of the language. z
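A few of the JEPs above can be exercised directly. The sketch below (JDK 9 or later; names and values are illustrative) touches JEP 102’s ProcessHandle and JEP 269’s collection factories:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class Jdk9CoreLibDemo {
    public static void main(String[] args) {
        // JEP 102: query the current process through ProcessHandle
        ProcessHandle self = ProcessHandle.current();
        System.out.println("pid: " + self.pid());
        self.info().command().ifPresent(cmd -> System.out.println("command: " + cmd));

        // JEP 269: immutable collections without third-party libraries
        List<String> collectors = List.of("G1", "Parallel", "CMS");
        Set<Integer> versions = Set.of(7, 8, 9);
        Map<String, Integer> jeps = Map.of("JShell", 222, "jlink", 282);

        System.out.println(collectors.size() + " collectors, " + versions.size()
                + " versions, " + jeps.size() + " JEPs");
        // The returned collections are immutable; calling a mutator such as
        // collectors.add("Serial") throws UnsupportedOperationException.
    }
}
```

This is the kind of thing Alexander’s quote is about: before JDK 9, producing a small fixed list typically meant Arrays.asList gymnastics or a third-party library such as Guava.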
to stand up with minor adjustments,” said Zeeto’s Alexander. “That’s definitely not going to be the case with 9. Aside from extremely simple, trivial applications, there is going to be work that needs to be done just to get your application to run, as opposed to refactoring it to be 100% modular and following the new suggested conventions. That’s an interesting concept because this is kind of a big request for us as developers to do.”

Madrid has been experimenting with JDK 9 and says he would hesitate to migrate to it until there’s more support for it. “I have an opinion against migrating because writing code is hard enough. You have to have the right reasons to migrate, and so migrating for the sake of just going to 9 would be kind of silly,” said Madrid. “The conditions [under which] I would want to use 9 is do I have free range on implementation [and whether] the things I like about Java such as Spring and OS support are ready for 9. I think there’s some level of soaking that 9 needs to have before it’s adopted by a larger community, like any new technology, but I think it will be well-received.”

JDK 9 has a lot of small enhancements, and there are a number of features from previous versions that have been deprecated or removed. Following are some things worth noting.

Modularity is a big deal
JDK 9’s modularity makes sense in an era where all things large are being broken up into smaller pieces. In JDK 9’s case, the modular architecture enables greater scalability down to smaller devices. It also improves security, performance and maintainability. Modularization has been a multiyear effort that wasn’t ready in time for JDK 8, which is why it’s introduced in JDK 9. The modularization can be problematic for applications developed with the earlier monolithic versions of the language. “The story is simple: Your application will break. How many hoops will it take to get it back online?” said Zeeto’s Alexander. “Modularization at this point is verbose and adds complexities. Java 9 will diverge all future library releases as ‘lib-9’ and ‘lib-legacy’ and [that] will definitely impact the community.”

A module is a self-describing collection of code and data that can be combined with other modules to achieve different configurations, including the Java Runtime Environment (JRE). The JRE modularization simplifies application testing and maintenance.

Java 9 support
At the time of this writing, Eclipse has a beta version that can be added to an existing Eclipse Oxygen (4.7) install. The beta provides the ability to add a JDK 9 as an installed JRE. With it, developers can create Java and plug-in projects that use a JRE or JDK 9. They can also compile modules that are part of a Java project. In addition, there is beta support for Java Standard Edition (SE) 9.

In a March blog post, IntelliJ announced that IntelliJ IDEA 2017.1 provides support for Java 9 modules, including code completion, quick fixes and help. The blog includes a nice walkthrough of a “Hello World” example. It also notes that Java 9 modules must correspond to IntelliJ modules and that the dependencies for both Java 9 modules and IntelliJ modules need to be declared. In July, IntelliJ announced new improvements that aid developer productivity. One of them is the color-coded Java Module diagram. It enables developers to visualize modules and their dependencies.

Madrid is a firmly committed IntelliJ user because it enables him to be more productive than he would be otherwise. Like Zeeto’s Alexander, he intends to proceed cautiously with Java 9 to mitigate risks. “Whatever new thing I might build would have to be low-risk. It can’t be a hardened, production, scalable system. I have to trust [Java 9] first,” said Madrid. “There are some principles I would want to follow to be able to observe and control the system. I would set up my acceptance tests, performance tests, and I would measure the software and benchmark it against something like 8. If I don’t see improvements, and even worse, if I see regressions or unexpected behaviors, I’m going to wait until those are addressed.”

The Apache Maven community has been running some experiments and experiencing some frustrations, which is indicative of Java 9’s early days. According to the Gradle blog, Gradle 3.0 provides initial support for Java 9’s latest Early Access Preview (EAP). Developers can also build and run tests using earlier versions of JDK 9. However, there is no support for modules or JDK 9-specific compile options. Bottom line: It’s going to take time for JDK 9 to gain traction, but it will get there. Hopefully, the move to the modular architecture won’t be overly painful; however, it will require a lot of patience and hard work on developers’ parts, especially in the early days as things break and behave unexpectedly. Tools will help ease the pain eventually, as they always do. z
Scaling Agile
Enterprises need to let teams find their own rhythms, but come together at critical junctures
BY DAVID RUBINSTEIN
The concepts of Agile software development are well understood, more than a decade after the original manifesto was put to paper. It calls for things such as “people over process” and “responding to change over following a plan.” Of course, the devil is in the details, and companies are hitting a wall in trying to implement Agile, especially when they look to infuse Agile beyond their development organizations. Part of the reason is that Agile is not prescriptive. It provides a broad framework, but there is no map to follow, and organizations are developing their own processes to achieve the Agile goal of delivering software more quickly, of higher quality, and that meets the requirements of the user.

Also, it was originally designed as an approach to developing software. Now, the “Agile enterprise” is the goal. Why should only software development move quickly? Why can’t marketing, and sales, and business decisions, be made in a more Agile fashion? They can, of course, but again, the devil is in the details. There are challenges in organizational leadership, in allowing teams to find their rhythms and then coming together at critical junctures, and in using metrics to guide the business. “Agile is front and center for everyone trying to make a digital transformation,” said Steve Elliot, CEO of AgileCraft, makers of a scaled Agile
management platform. “But running an Agile mindset across groups of teams is the challenge.” And therein lies perhaps the greatest challenge to achieving business agility. Few organizations are starting from scratch; most have development teams doing Agile projects and creating versions of software faster than release teams, marketing and sales teams, and even business decision-makers can keep up. “The agile methodology has been so successful that development teams are pushing code more quickly than IT can deal with,” said Anders Wallgren, CTO at Electric Cloud, a DevOps release automation software provider. Some of this is due to the fact that whether most organizations embrace it
or not, they are all becoming software companies. Pizza sellers want to be platform players. Automobile manufacturers use more software in their cars than ever before; mechanics are transforming from ‘grease monkeys’ to debuggers. And a lot of companies piloted Agile in software projects, and found that it works. But they never had a strategy to scale it throughout the enterprise, said Sally Elatta, president of Agile Transformations Inc. and founder of Agility Health. “They never realized the organizational impact Agile would have. It was a bottom-up movement.”
Making the digital transformation
“The essence of Agile is adaptation and change,” noted Boris Chen, co-founder
and vice president of engineering at tCell.io, a company focused on real-time application security for DevOps. “But there are friction points if you can’t get all parts of the organization up to the same speed.” “It’s almost like when mobile came along,” said AgileCraft’s Elliot. “Companies struggled to adapt. The ability to deal with changes in technology is critical to success.” Today, he said, organizations must conceive of a technology strategy — in this case, with the goal of business agility — and trace it down to deployment. “There has to be one view of the world,” he said. Smart companies making the change understand that an Agile enterprise is more than having individual teams doing Agile. “That,” said Robert Holler, former CEO of agile project management software provider VersionOne and newly minted chief strategy officer at development tools maker CollabNet, “can create a tribal scenario where each team sets its own practices and tooling. That doesn’t scale very well. Organizations need to take a systems thinking approach, make decisions about improving and optimizing the whole system, not just the pieces. Lots of organizations are still wrapped up in tribal agile, and they’re not getting the full benefit. They’re not working for the benefit of the whole.” It’s difficult to get teams working at the same speed, and Holler said it’s fine if they don’t. “It’s OK for teams to work at different cadences, but ultimately they have to align,” he said. Holler went on to say that organizations might be doing continuous delivery internally, creating software that is deployable every day but not necessarily deployed. Meanwhile the business is operating on a quarterly or annual cadence. “So you have to come together on monthly, quarterly and annual cycles. When the month rolls around, the organization has to operate on that meta-cadence,” he said, coining a new term.
Because the development group is working at a faster cadence than other teams, challenges lie in understanding when to pull PR and marketing, for example, in to discuss work being
done in development. “There are different levels of releases,” said Shannon Mason, vice president of product management for CA Agile Central. “There are minor tweaks, changes. With the big paradigm-shifting things, you can get them out and hide them, then educate the field teams by turning it on for them. You can turn it on in pieces.” tCell’s Chen suggests integrating marketing into some developer meetings, perhaps weekly, “so they can understand what’s delivered and what is coming.” In the days of waterfall, teams would create requirements (with marketing and business input) for development, and after the 12-month development life cycle was complete, marketing would have the list of new features to promote. But in today’s world, since there is the notion that software is never done but simply improved iteratively, you need to break work down into increments of value, Mason said. “What’s the value proposition? Who are you targeting? What defines a ‘win’? If the uptake is good, the feedback is positive and the tech is solid, then we determine something is done.” But the work has to be done at a sustainable pace, offered Ronica Roth, CA agility services advisor and team lead. “There’s a heartbeat and rhythm” to work, she said. “I don’t turn my heart off at the end of the day because it did a good job. But I don’t run 24 hours a day either because I’d fall dead.” Organizations, she said, need cadences and rhythms, just as people mark days, weeks and seasons. “When an organization has a set of rhythms that work together, that’s business agility.”
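The “get them out and hide them, then turn it on” approach Mason describes is essentially a feature flag. A minimal sketch follows; the class, flag and account names are hypothetical and not taken from any of the vendors quoted:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class FeatureFlags {
    // Which accounts have a flag turned on; an absent flag is off for everyone
    private final Map<String, Set<String>> enabledFor = new ConcurrentHashMap<>();

    public void enable(String flag, String account) {
        enabledFor.computeIfAbsent(flag, k -> ConcurrentHashMap.newKeySet())
                  .add(account);
    }

    public boolean isEnabled(String flag, String account) {
        return enabledFor.getOrDefault(flag, Set.of()).contains(account);
    }

    public static void main(String[] args) {
        FeatureFlags flags = new FeatureFlags();
        // Ship the paradigm-shifting feature dark, then light it up per team
        flags.enable("new-dashboard", "field-team-1");
        System.out.println(flags.isEnabled("new-dashboard", "field-team-1"));
        System.out.println(flags.isEnabled("new-dashboard", "everyone-else"));
    }
}
```

Deploying code dark and toggling it per account is what lets the release cadence of development decouple from the slower cadence of marketing and field enablement.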
Leadership is critical
One of the biggest hurdles to scaling Agile across the enterprise and achieving business agility is a lack of business leadership experience in Agile processes. Pizza sellers never had to deal with Agile practices before. “You can teach people how to do Scrum in five seconds. It’s completely logical. But you don’t put a kid in pre-K and say, ‘We’ll
Finding an integrated ALM toolset for hybrid application development
BY MADISON MOORE
Content provided by SD Times and

As companies make the move to Agile software development, many will run into challenges as they transition from traditional practices. Whether it is regulatory requirements or data sovereignty, such constraints can make it difficult for teams to move quickly.

One solution is hybrid application development, or hybrid Agile/waterfall development, which is common in large organizations that have some projects in their portfolio being developed using Agile and others with waterfall, said Malcolm Isaacs, senior solutions marketing manager at HPE. At the executive level, getting insight into the requirements and resources of each team can be difficult when waterfall and Agile teams measure work in different units. According to Isaacs, there needs to be a way to compare two different products and teams using two different methodologies. This complexity is creating anxiety within the application portfolio.

Collin Chau, a solutions marketing professional for HPE, said many organizations’ current Agile tools are not capable of handling the workload of hybrid development, nor can they straddle both Agile and waterfall environments, much less scale up for the enterprise. “With hybrid application development, the market is not taking a binary approach,” said Chau. “It’s about straddling waterfall development and Agile delivery whereby any given team may have one methodology or a mix of different methodologies in the same portfolio they are managing.”

Companies desire this flexibility, and they are looking for a solution that propels hybrid application development. They want to be more fluid and move faster. Yet some companies don’t want to give up their traditional technology, so they are trying to find the “best of both worlds,” working with Agile for faster time to develop while still working with legacy technology, said Chau.

And it is possible that both Agile and waterfall may come into play with legacy software, according to Isaacs. If the back end is doing releases every few months, and the front end is doing releases every few weeks or so, there is an obvious “clash of culture,” he said. Of course, culture is important when it comes to Agile and DevOps methodologies as opposed to waterfall.

“You have to have a way to manage your development because you have two teams using two different methodologies,” said Isaacs. “You need integration points between the two teams, you need to do the planning up front, and you can’t just have the teams working in isolation from each other.”

Isaacs added that there needs to be some sort of communication and synchronization between the two teams. It’s a major pain point for companies, he said, since teams are all working with different tools to manage waterfall software and Agile software development. At some point, these tools need to integrate and provide metrics and reporting. This is what’s missing in managing application development in a hybrid space, Chau noted. There needs to be an integrated ALM toolchain that allows customers to move between both methodologies as they adopt Agile, he said. Companies also need to be able to scale across their tens and hundreds of distributed development teams, and at the same time adopt hybrid application development so they can move from traditional IT to the hybrid space with confidence, said Chau.

By bringing together the ability to provide visibility and dig into metrics and reporting, HPE created an integrated toolchain called HPE ALM Octane, which includes traceability and visibility, and assures governance and compliance. HPE’s ALM Octane is offered in two flavors, Pro and Enterprise. Pro is all about helping Agile teams and development teams scale across project teams in the organization. At the same time, the goal is to help them move into hybrid application development, especially as they transition from waterfall to Agile. ALM Octane Enterprise is releasing this month, and it recognizes the need to be Enterprise Agile, said Chau.

“Businesses demand Agile portfolio management, but they do not want to do so at the risk of management complexity and the hidden cost of having to adopt add-ons just to make things work,” said Chau. “Now in this space, addressing program and project requirements in distributed teams is the call to action.”
see you in 12 years.’ They need continued instruction and training,” said Mason. “The people are the hard part. Getting people over their own egos and long-held beliefs, especially in leadership, is the real challenge.”

One of the reasons cited for a void in leadership is that training and education are “notoriously underfunded,” said CollabNet’s Holler. “Organizations say, ‘Everyone’s doing agile, so let’s do agile.’” But Holler said making a digital transformation for business agility is a change management process. “Training, education, centers of excellence are required for positive outcomes. Large organizations have lots of inertia.” Programs have become more agile, but not the enterprise. Budgeting, for instance, “and the way organizations think about budget, is antiquated,” said AgileCraft’s Elliot.
Are business agility and growth on your organization’s radar?
It’s important for organizations to be able to measure how they are achieving business agility. Executives need to have a strategy, to understand how Agile becomes part of their enterprise DNA. To help provide guidance, Agility Health has created radar screens that guide organizations along their path.

With DevOps, “organizations are at a low maturity level even though it’s the number one priority for CIOs,” said Sally Elatta, president of Agile Transformation Inc., which has created the Agility Health radars. She said organizations face platform issues and a lack of automated tools, that their architecture is not scalable, and that they have legacy systems to maintain.

The radars, which measure things like team health, DevOps maturity, technical health and enterprise business agility, “measure the health of the transformation and the maturity of teams,” Elatta said. “They give a guide to continuous growth.” The radars, she explained, are assessments. “You’re doing something every quarter to improve and grow,” she said. Organizations bring discipline to their technical teams, and she believes the same discipline needs to be brought to growth. “It’s about what matters to you and your team. Organizations have two backlogs: project and growth. This holds you accountable every quarter.”
—David Rubinstein
In short, leadership has to change, as the role of an Agile manager is “changing in a huge way,” said Agile Transformation’s Elatta. “The role is more strategic and less tactical. They must remove organizational obstacles, form and support communities of practice. It’s a big deal.” Elatta noted that organizations “haven’t invested in these people. They’ve forgotten about the middle management layer.”
Using metrics
For organizations to get to business agility, they need to measure what matters most to them, and to know if their investment in people and process is bringing the desired return. Without metrics, “you can’t tell if you’re faster, cheaper, better and healthier” with Agile, Elatta said. “You need continuous measurement for growth. Measure where you are today, gain consensus on where you want to go, then measure again. This cycle is critical.”

CA’s Mason pointed out that organizations collect massive amounts of data from their software that can be used “to save them from creating bad plans. We look at our data and see patterns from a human perspective. Envisioning what a product will look like 12 months from now is like looking for something that isn’t there. Knowing yourself a week from now is easier than knowing yourself a year from now.”

Businesses are looking to extend agility across their entire value stream, said CollabNet’s Holler. Data helps organizations deliver the products their customers want. “When your value stream is optimized, how do you optimize the feedback loop? Faster feedback can totally transform your business. It’s nirvana for Agile.”

Electric Cloud’s Wallgren echoed that sentiment. “Continuous improvement requires data,” he said, providing insights into release frequency, failure rates, user experiences and much more. “We ought to be doing more monitoring before production. If nothing else, you learn what a properly operating system looks like.”
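The release-frequency and failure-rate figures Wallgren mentions are straightforward to derive once deployments are logged. A minimal sketch; the record shape and the per-week framing are illustrative assumptions, not taken from any tool named in the article:

```python
from datetime import date

def delivery_metrics(deployments):
    """deployments: list of (deploy_date, succeeded) tuples, e.g. from a CD pipeline log.
    Returns deployments per week and the change failure rate over the logged window."""
    if not deployments:
        return {"deploys_per_week": 0.0, "change_failure_rate": 0.0}
    dates = [d for d, _ in deployments]
    # Use at least one day so a single-day burst doesn't divide by zero.
    days = max((max(dates) - min(dates)).days, 1)
    failures = sum(1 for _, ok in deployments if not ok)
    return {
        "deploys_per_week": round(len(deployments) / days * 7, 2),
        "change_failure_rate": round(failures / len(deployments), 2),
    }
```

Trending these two numbers release over release gives the “measure, gain consensus, measure again” loop Elatta describes a concrete starting point.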
In the enterprise or on shared distributed systems, data science requires an intelligent team capable of collaborating and digging deep into complex data
BY MADISON MOORE

Data scientists are no magicians, but they are in high demand. Researchers and analysts in this space recognize the diversity and explosion of Big Data, but the only way enterprises will be able to prepare for the future of Big Data is with a data science team capable of working with dirty data, complex problems and open-source languages, experts in the field say.

According to Forrester research from 2015, 66 percent of global data and analytics decision-makers reported that their firms either expanded or planned to implement Big Data technologies within the next 12 months. Enterprises today are becoming more serious about Big Data and analytics, and they’re looking to attract data science talent so they can achieve all of their objectives for their data programs.

“There are certain data science rock stars [who are] completely up to speed on deep learning and typically have a
doctorate degree,” said Thomas Dinsmore, a Big Data science expert who works at Cloudera. “The big tech companies basically bid up the salaries of those folks, so hiring is challenging and difficult, but not impossible.”

Just take a look at salary data. Glassdoor reports that the national annual salary for a data scientist is $113,436, with big tech companies paying their data scientists anywhere from $108,000 to almost $135,000 annually. And on LinkedIn and other job board sites, recruiters are constantly searching for people who fit the data science role.
Finding a “data rock star”
Part of the reason it’s so difficult to find a data scientist is that the role is still not completely clear in many organizations, said Dinsmore. Companies are not always sure what qualities, characteristics or background they should be looking for in a candidate.

In addition to the data science “rock stars,” there are entry-level data scientists: those who are young and typically have a great understanding of popular open-source languages, coding and hacking. And because they are “steeped in this data science culture, they can add value very quickly when they come on board to a large enterprise,” said Dinsmore.

The number one characteristic a solid data scientist candidate should have is the passion to develop insight from data, which Dinsmore says some data scientists have and some don’t. “It’s not necessarily a matter of training in a particular language, because this field is changing so fast, the language or framework or library that is most popular today may not be the most popular in two years,” said Dinsmore. “The thing that sets capable data scientists apart is these people typically have gone out and grappled with data, and drawn some sort of insight from it.”

Since data scientists will have the skills needed to work with analytics and data insights, they become critical components of the actual ‘insights teams’ in
place in some organizations, according to Forrester analyst Brian Hopkins. It’s the insights teams that build applications that connect data, insights, and action in closed loops through software, he said. “If utilized in this way, data scientists become critical to achieving an insight advantage, which spells profitable growth in the digital economy,” said Hopkins.

Data scientists should also stay connected to business outcome changes, according to Hopkins. Organizations feel that they need to place their data science team in a room and just feed them data in order for them to do their job, but he disagrees. “Data scientists need to be embedded in the Agile teams that deliver models and insights directly to impact business changes that are big and differentiating,” said Hopkins. Working this way allows data scientists to stay motivated, and eventually organizations will attract more intelligent data scientists.

It also means organizations need to find or provide a data center of excellence, said Hopkins. “You need a very good working relationship between data science and data engineering, so your engineers are working to free data scientists from having to futz with data access, data quality, and data timeliness,” he added.
Trend towards open data science
Besides the influx of unstructured data, researchers and analysts have noticed a major trend towards open data science. Today, an overwhelming majority of data scientists prefer open-source data tools, said Dinsmore. In contrast, 10 years ago, if you spoke to working analysts, you would find that commercial tools were the most preferred. Enterprises were once adamant about using standardized tools or one commercial tool; now they are broadening their horizons and are comfortable working with open-source data science tools, he said. No one is replacing their commercial tools; instead, open-source tools work side-by-side with them in an organization.

Plus, the cadence of development in
open-source languages like R and Python is so much faster. Take TensorFlow, for instance. It was released in 2015, and within a few months it was the most widely developed machine learning package in the ecosystem, said Dinsmore. And it’s still a trending repository on GitHub.

The main issue for data science teams today is not which tool to use; it’s the inherent conflict between the needs of the data scientists and the needs of the IT organization, said Dinsmore. Data scientists want flexibility, the ability to add and work with new packages, and room to experiment. They want to work with the tools they are comfortable with, said Hopkins, so if they want Python notebooks, so be it. If they need a version of Spark, the company needs to provide it. On the other hand, the IT organization wants stability, a limited number of discrete types of software and, above all, security, said Dinsmore.

This conflict can be managed, but it depends on the degree to which an organization can accommodate open-source tools. Locking those tools out creates a “data jail” model, according to Dinsmore: data scientists end up running a query and extracting data to their laptops so they can continue to do the work their way. This leads to what Dinsmore calls “laptops in the wild.” Essentially, there is a whole slew of data scientists doing Big Data science on their laptops with data extracted from managed platforms. All of that data is uncovered, insecure, unmanaged and unsynchronized, said Dinsmore.

A solution is to give data scientists, through containers, both a secure environment for the data and contained, isolated instances in which to work with the packages they choose, he said. “It’s very important for organizations to find a way to standardize their data science tooling without sacrificing the ability to experiment and innovate,” said Dinsmore. “That implies choosing a basic platform that will enable experimentation and collaboration, but does not entail just sort of laying down a dictate.”
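One way to read Dinsmore’s container suggestion is a per-scientist sandbox with the governed dataset mounted read-only, so data never leaves managed storage while each person installs the packages they want. A hedged sketch that only builds the `docker run` invocation; the image name, mount path, bootstrap script, and resource limit are all hypothetical, not any vendor’s setup:

```python
def sandbox_command(user, image, data_path, packages=()):
    """Build a `docker run` command for an isolated per-scientist workspace.
    The shared dataset is bind-mounted read-only, so the 'laptops in the wild'
    extraction step is unnecessary; extra packages go to a (hypothetical)
    in-container bootstrap script."""
    cmd = [
        "docker", "run", "--rm",
        "--name", f"ds-sandbox-{user}",
        "--mount", f"type=bind,source={data_path},target=/data,readonly",
        "--memory", "8g",  # keep one experiment from starving the host
        image,
    ]
    if packages:
        cmd += ["bootstrap.sh", "--install", ",".join(packages)]
    return cmd
```

Returning the argument list (rather than shelling out directly) keeps the sketch testable and lets IT review exactly what each sandbox is allowed to touch.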
A guide to Big Data analytics tools
• AppDynamics: Application Analytics enables enterprises to automatically collect and correlate business transactions, mobile, browser, log and custom data to get insights into IT operations, customer experience and business outcomes.
• Cask: Cask provides the Cask Data Application Platform (CDAP), the first unified integration platform for apps and data that cuts down the time to production by 80%.
• CA Technologies: Big Data Management solutions from CA Technologies deliver visibility and simplicity for monitoring and managing big data projects across all platforms from a single, unified view.
• Databricks: Databricks, the company behind Apache Spark, is a just-in-time data platform that enables data scientists and engineers to simplify data integration, perform real-time experimentation and share results.
• DataStax: DataStax Enterprise Graph is the only scalable real-time graph database fast enough to power customer-facing applications, capable of scaling to massive datasets and powering deep analytical queries.
• MapR: MapR Technologies provides a Converged Data Platform that fully integrates analytics with operational processes. The MapR Platform unifies high speed, scale and reliability with global event streaming, NoSQL and database capabilities.
• Redis Labs: Redis use cases include real-time analytics, fast high-volume transactions, in-app social functionality, application job management, queuing and caching.
• Splunk: Hunk is a platform that allows users to explore, analyze and visualize data in Hadoop and NoSQL data stores.
• Pepperdata: Leading companies rely on Pepperdata’s DevOps for Big Data platform to manage and improve the performance of Hadoop and Spark.
Distributed, always-on data management
BY LISA MORGAN

Today’s businesses must ensure their customers are always able to stay connected to them, and access their apps wherever and whenever they need. Yet outages still plague companies, threatening profitability and reputations. Using the DataStax Enterprise (DSE) data management platform, enterprises can ensure their applications are always available, even when other applications fail.

“We deliver data anytime, anywhere, every time, all the time along with unmatched performance, continuous availability and fantastic customer experiences,” said Robin Schumacher, senior VP and chief product officer at DataStax. “Companies can lose millions of dollars per minute, so downtime threats are huge. Your customers are constantly connected to your business so your software has to be constantly available.”

Perform better than competitors
System outages equal headline news. In July 2016, 2,300 Southwest flights were inadvertently cancelled, costing $54 million. Before that, 2,300 Delta flights had been impacted by a glitch that lasted three days and cost $150 million.

“I’ve seen customers that haven’t experienced one second of unplanned downtime in the six years I’ve been at DataStax,” said Schumacher. “That’s rare, because most businesses are running legacy database software that can’t deliver constant uptime.”

DataStax’s zero-downtime track record is achieved with the masterless architecture at the foundation of its commercial DataStax Enterprise platform. The company is a key contributor to Apache Cassandra, giving the DSE platform continuous availability and extreme scalability. DSE also unifies search, analytics, and graph, with a consistent security model across the data platform. In addition, DataStax Enterprise integrates with data security mechanisms to encrypt data at rest and in motion to keep customers’ data continuously secure and available.

Continuous availability in the CARDS
Globally distributed applications with thousands or even millions of worldwide users are particularly challenging. Software development managers choosing a data management platform for these applications must look for several key capabilities. DataStax uses the acronym “CARDS” to articulate what enterprise customers need.

Contextual. Data platforms should support applications in context, blending transactional, analytical, search, and graph capabilities. In a financial services context, that might mean taking a credit card transaction, analyzing the customer’s buying patterns and searching for the information to approve the transaction. Data management platforms have to process multiple workloads in a single data platform simultaneously. DataStax also ensures that an application can utilize the right data model, in real time, regardless of whether the application requires document, graph, or transactional database capabilities, or all three.

Always On. A data platform must provide zero downtime. For example, one DataStax customer kept its recommendation engine running despite a hurricane that took down a whole data center. All of the company’s databases failed except DSE, because its architecture was able to retain uptime via data distribution across other data centers.

Real Time. In today’s fast-paced business environment of globally distributed applications, more enterprises need to deliver real-time data to their customers in the moment of “now.” DSE ensures access to fast data so users can accelerate business processes and deliver better real-time customer experiences.

Distributed. A distributed architecture must distribute data anywhere in the world, providing timely data delivery and protecting customers from outages. DSE handles this requirement at scale, thanks to its masterless architecture.

Scalable. Companies need a scalable platform that can support from 100 to 100 million users. DataStax Enterprise scales linearly so it’s predictable and performant. “Software development managers must ensure the stability of their applications as the number of users and amount of data grows,” said Schumacher. “DSE future-proofs your systems so what you start with today works for years to come.”
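The masterless architecture described above rests on a well-known idea from Cassandra-style systems: every node owns slices of a hash ring, and each record is replicated to the next few distinct nodes around the ring, so there is no single coordinator to lose. A toy sketch of that placement logic, assuming virtual nodes and MD5 token hashing; this is an illustration of the general technique, not DataStax’s implementation:

```python
import bisect
import hashlib

def _token(s):
    """Map a string to a position on the hash ring."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=8):
        # Each physical node claims several virtual positions for even spread.
        self._ring = sorted((_token(f"{n}-{i}"), n) for n in nodes for i in range(vnodes))
        self._tokens = [t for t, _ in self._ring]
        self._nodes = set(nodes)

    def replicas(self, key, rf=3):
        """Walk clockwise from the key's token, collecting rf distinct nodes."""
        out = []
        i = bisect.bisect(self._tokens, _token(key))
        while len(out) < min(rf, len(self._nodes)):
            _, node = self._ring[i % len(self._ring)]
            if node not in out:
                out.append(node)
            i += 1
        return out
```

Because placement is a pure function of the key, any node can answer “who owns this row?” locally, which is what lets a cluster keep serving when a whole data center drops out.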
From legacy software to microservices
DataStax Enterprise supports all types of applications, whether they’re legacy applications, mobile applications or modern applications using microservices. “Different applications require different data models. We support them all, along with any type of digital application you’re building or maintaining,” said Schumacher. “If you have to use several vendors to achieve that — transactions, search, analytics, etc. — implementation can be a nightmare. It’s easier to have a unified platform from us that can support all your requirements.”

Learn more at www.datastax.com.
Guest View BY BILL CURRAN
Avoiding embedded analytics bear traps
Bill Curran is CEO at Izenda.
The benefits of data monetization that result from embedding analytics into applications are well known. However, few data product owners have mastered unlocking the data analytics treasure trove to create new revenue streams and increase user engagement. Many data product development journeys are riddled with ‘bear traps’ that cause trepidation, or inhibit or halt project efforts. Simply knowing how to sidestep these pitfalls helps teams evade the snares that render a solution unwieldy or cause them to abandon ongoing development. Awareness of these bear traps is invaluable as data product teams navigate the growing range of viable BI platforms, developer tool kits, charting libraries, ETL tools and data preparation platforms, evaluating the right mix of solutions to use when delivering their data product.
Producing Impossible-to-Maintain Products
This first trap goes unseen until it’s too late. Only after the product launches does the team realize they based the solution on a hard-to-maintain business intelligence (BI) platform. Beyond the obvious maintenance activities (data extraction and cleansing, or visualization), a great data product performs core activities essential to keeping it operational over the product lifecycle.

Typically, analytics tools developed for internal use are not designed for use as commercial data solutions. Enterprise BI vendors may not anticipate widespread external use of a data product, including support of custom reporting, dashboards and self-service BI for tens of thousands of users. Their platforms may miss data product owners’ basic needs: adding functionality or administering tenant and user permissions so users see the right analytics for their roles.

It’s easy to avoid this trap by selecting a robust platform with an integrated product life cycle management ecosystem that demonstrates an ability to operate at scale. Consider these questions:
• How many clients and users does your largest customer support?
• How does your largest customer manage support and scalability?
• How long does it take to onboard a new client?
• How does your solution publish report and dashboard updates across single and multi-tenant instances?
Lack of Future-Proofing
This bear trap is sprung when a lack of future-proofing appears late in the development cycle. Consider issues of scale: can the solution store millions of rows of data, or support tens of thousands of users in a multi-tenant environment? Most modern BI platforms easily handle scaling. However, many have functional limitations that complicate the data product’s ability to meet demands that come with customer growth. Think about these future needs:
• How will it meet different customers’ compliance requirements?
• Will using iframes result in lost customers due to a lack of responsive design?
• Is there an open front-end that allows for customization?
• Does the analytics software complement the application’s architecture?
Developing with Blinders On
Companies that have deployed a solution may think they’ve avoided all the traps. But one catches product teams as they prepare for the next iteration of the data product. Today’s BI platforms lack utilization analytics: reporting on product usage and the functionality used. Data products provide insights that help users perform their roles efficiently and effectively. Data product teams need the same benefit of analytics. Avoid this trap by asking a few simple questions:
• Are utilization analytics available?
• What are the most and least frequently used dashboards, reports, etc.?
• How can this information be used to find opportunities to improve adoption rates?

Taking time to identify the traps and the steps to avoid their snares will empower data product teams to walk the development path with confidence, leading to successful data monetization.
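The utilization-analytics questions in the list above mostly reduce to counting which assets users actually open. A minimal sketch of such instrumentation (the class and asset names are illustrative, not any BI vendor’s API):

```python
from collections import Counter

class UsageTracker:
    """Record which dashboards/reports users open, so product teams can see
    adoption instead of guessing."""

    def __init__(self):
        self._views = Counter()

    def record_view(self, asset):
        self._views[asset] += 1

    def most_used(self, n=3):
        # Candidates for deeper investment.
        return self._views.most_common(n)

    def least_used(self, n=3):
        # Candidates for improvement or retirement.
        return self._views.most_common()[:-n - 1:-1]
```

Even this much answers Curran’s second question directly: rank the counts and the most and least frequently used dashboards fall out.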
Analyst View BY DAVID CAPPUCCIO
Top 10 tech trends impacting I&O

As organizations strive to align IT and operational technology to drive digital business innovation, infrastructure and operations (I&O) leaders should focus on key technology trends to support these initiatives. These trends fall into three areas: strategic, tactical, and organizational. They will all directly influence how IT delivers services to the business.
Strategic
Disappearing Data Centers. Gartner predicts that by 2020, more compute power will have been sold by infrastructure as a service (IaaS) and platform as a service (PaaS) cloud providers than sold and deployed into enterprise data centers. Most enterprises — unless very small — will continue to have an on-premises (or hosted) data center capability. However, with most compute power moving to IaaS providers, enterprises and vendors need to focus on managing and leveraging the hybrid combination of on-premises, off-premises, cloud, and non-cloud architectures.

Interconnect Fabrics. Data center interconnection fabric is poised to deliver on the promise of the data center as software-defined, dynamic, and distributed. The ability to monitor, manage, and distribute workloads dynamically, or to rapidly provision network services through an API, opens up a range of possibilities.

Containers, Microservices, and Application Streams. Containers provide a convenient way to implement per-process isolation. Microservices can be deployed and managed independently, and once implemented, possibly inside of containers, they have little direct interaction with the underlying operating system (OS).
David Cappuccio is vice president and distinguished analyst at Gartner.

Tactical
Business-Driven IT. Recent Gartner surveys have shown that up to 29% of IT spending comes from business units rather than traditional IT, and this will increase over the next few years. This business-driven IT was often a means of getting around traditional slow-paced IT processes. Today, business-driven IT is mostly designed to provide technically proficient business people a means of implementing new ideas quickly, while adapting to, or entering, new markets. IT’s role should be to build relationships with business stakeholders, thereby staying aware of new projects and what their potential long-term impacts will be on overall operations.

Data Center as a Service. Under a data-center-as-a-service (DCaaS) model, the role of IT and the data center is to deliver the right service, at the right pace, from the right provider, at the right price. IT becomes a broker of services rather than just the provider of hardware. IT leaders can enable DCaaS by segmenting application portfolios based on business requirements. Some services may remain on-premises, but many others will migrate elsewhere, increasing IT’s agility while reducing its physical footprint.

IoT. The Internet of Things will change how data centers are designed and managed and how they evolve as massive volumes of devices stream data around the world. I&O should use an IoT architect who looks at the long-term strategy for both IoT and the data center.
Remote Device (Thing) Management. A growing trend for many organizations with remote sites and offices is the need to manage remote assets centrally. This has taken on more importance as enterprises focus on micro-data center support for regional or remote sites, and on the emerging role of edge computing environments for geo-specific compute requirements such as the IoT.

Micro and Edge Computing Environments. Micro and edge computing environments execute real-time applications that require high-speed response. The communication delay is shortened to a few milliseconds, from several hundred milliseconds. Edge computing offloads some of the computation-intensive processing on the user's device to edge servers, and makes application processing less dependent on the device's capability.

Organizational

New Roles in IT. First will be the IT cloud broker, responsible for monitoring and managing multiple cloud service providers. Next will be the IoT architect, tasked with understanding the potential impact of multiple IoT systems on data centers. z
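The latency argument behind edge placement can be made concrete with a small sketch: given a real-time budget, pick the nearest compute tier whose round-trip plus processing time fits. The tier names and millisecond figures below are illustrative assumptions for the example, not Gartner's numbers — only the "few milliseconds at the edge versus several hundred to the cloud" shape comes from the text.

```python
# One-way network latency estimates per tier, in milliseconds (illustrative).
TIERS = {
    "on-device": 0,
    "edge": 3,       # "a few milliseconds"
    "cloud": 150,    # several hundred milliseconds round trip
}

def place_workload(processing_ms: dict, budget_ms: float) -> str:
    """Return the first tier (device -> edge -> cloud) whose total
    time -- two network hops plus processing -- meets the budget."""
    for tier in ("on-device", "edge", "cloud"):
        total = 2 * TIERS[tier] + processing_ms[tier]
        if total <= budget_ms:
            return tier
    return "none"

# A compute-heavy job the device itself is too slow for:
cost = {"on-device": 400, "edge": 40, "cloud": 25}
print(place_workload(cost, budget_ms=60))  # edge fits: 2*3 + 40 = 46 ms
```

The cloud tier's faster processing (25 ms) never pays off here: the 300 ms round trip alone blows the real-time budget, which is why computation-intensive, latency-sensitive work migrates to edge servers.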
Industry Watch BY DAVID RUBINSTEIN
From the editor's notebook

David Rubinstein is editor-in-chief of SD Times.
Value Stream Integration. Digital Twinning. Events as API strategy. These topics are the ones we've been hearing about this summer, and a trip through this editor's notebook will throw some light on these matters.

According to Tasktop's Mik Kersten, Agile and DevOps are local optimizations of your value stream, with DevOps leading a digital transformation that requires automation. Yet, with all the talk of Agile practices and DevOps enabling faster software release cycles and improved quality, Kersten said, "If you don't look at the endpoint flow, you might be starting at the wrong place. You might not have enough UX designers, for example, so the bottleneck is upstream. Now, you're not delivering software any faster." He said delivering software products today requires an integrated view.

The point, Kersten said, is to "shift the focus from making teams more productive to end-to-end business value delivery." As an example, he cited a large bank that went from nine-month releases to four-week releases, taking all the right steps with agile and automation. "But their velocity went down by a quarter after their transformation," he noted. "They had failed to realize how much time was lost on compliance." Now, he said, they're doing compliance checks every four weeks. "When looking at the value stream, maybe you don't need the mainframe team releasing every two weeks."

The challenge comes from trying to connect multiple disciplines in an organization into the value stream, he said. "They're connecting Agile development with QA, with changes in the testing landscape, and automation, all moving left. Then connecting dev/QA to the service desk. You need it all to flow. The integration problem has moved up the stack."

Up the stack and to the edge. This is where digital twinning design patterns come in, according to Jeffrey Hammond, vice president and principal analyst at Forrester Research.
Digital twinning — the ability to match a physical asset to a digital platform — has been made relevant to developers creating applications related to IoT. “You need to be
able to program for embedded devices and work at the edge," he said. "That's very different than working at the core. The twin kind of fits between those two different domains and allows them to work together even if they don't understand what the others do. This is more of a Big Data problem than anything else. But to be clear, IoT is where all the action is with digital twins."

GE, IBM, SAP, Amazon, even Microsoft, have some level of concept or structure that is a twin. "It's a fairly common programming pattern in these IoT platforms today," Hammond said.

Beyond IoT, agents and chatbots are evolving from big companies, like Amazon, Google and Microsoft. "Right now, those companies own that assistant; they allow it to be customized a little bit for individuals. I could easily see a digital twin that essentially begins to act as my avatar from an agent perspective, and becomes increasingly mine, and if not a personification of me, at least a deeper understanding of what I do, where I go, who I am, what I want. I'm not sure I want multiple vendors to own that information about me. I think I'd want to control it myself. This is more the digital self, but when you ask how you deliver this, the concept of a twin is one way that this can be done."

These IoT systems are beginning to emit a flow of events to be managed and processed. Put those events together with functions — Lambdas and Google Functions — and you start to get a very different kind of program, and a much more interactive programming environment, Hammond said.
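The programming pattern Hammond describes can be sketched in a few lines: a digital twin as an in-memory mirror of a physical asset, folded forward by a stream of events, with a serverless-style function deciding when an event warrants action. Everything here — the thermostat, the field names, the alert threshold — is an invented illustration of the general pattern, not the API of any of the IoT platforms named above.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ThermostatTwin:
    """An in-memory mirror of one physical device's last known state."""
    device_id: str
    state: dict = field(default_factory=dict)

    def apply(self, event: dict) -> None:
        # Edge devices emit small state deltas; the twin folds them in.
        self.state.update(event)

def on_event(twin: ThermostatTwin, event: dict) -> Optional[str]:
    """A function-as-a-service style handler: one event in, maybe one
    notification out, with no server to manage between invocations."""
    twin.apply(event)
    temp = twin.state.get("temp_c")
    if temp is not None and temp > 30:
        return f"alert: {twin.device_id} reports {temp} C"
    return None

twin = ThermostatTwin("hvac-7")
actions = [on_event(twin, e) for e in (
    {"temp_c": 22}, {"fan": "on"}, {"temp_c": 31},
)]
print([a for a in actions if a])
```

The twin lets core-side code reason about the device (and notify a user proactively) without speaking the embedded device's protocol — the "fits between the two domains" role described above.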
“If you start to think about AI, or cognitive routines and machine learning algorithms, you can see a world where when an event happens, that event gets noticed by an algorithm, that algorithm decides it should take an action on behalf of the user, and then sends a notification to a user — a customer on a mobile device or a chat message inside Facebook Messenger — and all of a sudden you have a very different and proactive user interface, instead of ‘well, let me launch my mobile app and ask the system to do something for me.’ ” “That combination of serverless and API based on events, coupled with cognitive computing, is looking very interesting to me.” Interesting is an understatement. z