JUNE 2018 • VOL. 2, ISSUE 12 • $9.95 • www.sdtimes.com
VOLUME 2, ISSUE 12 • JUNE 2018
Tools shown at Build follow AI, modernization trends
What does it take to build a secure app by design?
LeanIX: Dated monolithic architectures are killing future opportunities for IT
SmartBear acquires Hiptest testing platform
ServiceNow brings DevOps to the enterprise
Revolutionizing API testing with artificial intelligence page 8
GUEST VIEW by Kevin Steigerwald Faster doesn’t mean sacrificing quality
ANALYST VIEW by Rob Enderle Three ‘killer apps’ for next-gen mobile
INDUSTRY WATCH by David Rubinstein Bringing the VA into the 21st century
Tips for building AI into mobile apps page 28
Migrating to Microsoft Azure page 35
Low code development: It’s not just for business users page 41
Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2018 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at firstname.lastname@example.org.
www.sdtimes.com EDITORIAL EDITOR-IN-CHIEF David Rubinstein email@example.com NEWS EDITOR Christina Cardoza firstname.lastname@example.org
SOCIAL MEDIA AND ONLINE EDITOR Jenna Sargent email@example.com INTERNS Ian Schafer firstname.lastname@example.org Matt Santamaria email@example.com ART DIRECTOR Mara Leonardi firstname.lastname@example.org
CONTRIBUTING WRITERS Alyson Behr, Jacqueline Emigh, Lisa Morgan, Frank J. Ohlhorst, Jeffrey Schwartz CONTRIBUTING ANALYSTS Cambashi, Enderle Group, Gartner, IDC, Ovum
SUBSCRIPTIONS email@example.com ADVERTISING TRAFFIC Mara Leonardi firstname.lastname@example.org LIST SERVICES Shauna Koehler email@example.com REPRINTS firstname.lastname@example.org ACCOUNTING email@example.com
PUBLISHER David Lyman 978-465-2351 firstname.lastname@example.org
PRESIDENT & CEO David Lyman CHIEF OPERATING OFFICER David Rubinstein
D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803 www.d2emerge.com
NEWS WATCH

Angular 6, focusing on packages, now available
Angular 6.0.0 is now available. According to the team, this is a major release focused on the toolchain rather than the underlying framework, and it continues the team’s focus on being smaller, faster and easier to use. With this release the team will be synchronizing major versions of the framework packages, CLI and Material + CDK going forward, all of which are available as 6.0 as of today, the team explained. “We made this change to clarify cross compatibility. The minor and patch releases for these three projects will be issued based on the project’s needs,” said Stephen Fluin, developer advocate for Angular. Additionally, ‘ng update <package>’ has been added to help users adopt the correct version of dependencies and keep them in sync, and the latest release updates to RxJS version 6. Other features include the first release of Angular Elements; a new tree component for displaying hierarchical data in Angular Material + CDK Components; new starter components; support for workspaces containing multiple projects; support for creating and building libraries; and long-term support for all major releases.
Angular for Designers plans unveiled at ng-conf
The Angular team also announced new plans to help bring designers into the development workflow with the introduction of Angular for Designers. “Apps that users love to use, developers love to build,” said Stephen Fluin, developer advocate for Angular. “But we have been leaving out designers and I think they are a critical part of building experiences that users love to use.” The goals for the solution are to:
1. Make Angular useful for designers
2. Create more collaborative teams
3. Empower design system authors
4. Make it easier to build great experiences
Some ways Angular thinks it can meet these goals are by providing solutions that don’t force designers to learn developer tools; providing more ergonomic file formats; creating a single HTML file; adding new commands that make prototyping easier; providing best practices, getting-started guides and focused instructions for designers; simplifying the process for designers to contribute; and making it easier for developers to use the design experience.
JNBridgePro 9.0 adds multiple proxy DLLs, Java 10 support
Java and .NET Framework interoperability tool provider JNBridge announced the release of JNBridgePro 9.0, which aims to address requested changes from its community of users. “JNBridgePro 9.0 addresses a longstanding user request: the ability to create .NET-to-Java projects employing multiple proxy DLLs. Users want to do this for a number of reasons, but all of them revolve around the need to integrate a .NET project with multiple Java projects,” the company wrote in a blog post. With the new release of its architecture-agnostic interoperability utility, which allows users to access both the Java and .NET Framework APIs from either of the two environments, JNBridge has added support for Java 10 alongside multiple proxy DLLs, which the company says brings the platform more in line with users’ development styles.
Stack Overflow rolls out new platform for development teams
Stack Overflow has announced the availability of Stack Overflow for Teams, a new private, internal platform designed to help development teams find answers to questions quickly without disrupting their workflow. It can integrate with existing tools such as Slack in order to promote communication and efficiency, the company explained. With Stack Overflow for Teams, teams can search the platform for answers as well as edit or crowdsource answers so that content remains up-to-date. The platform offers unlimited private questions and answers; an intuitive archiving feature to go deeper into a subject; member profiles to showcase users’ specific skills and topics; and integration within the workflow.

Indigo Design Code Studio aims to foster collaboration
Infragistics has announced Indigo Design Code Studio, a new product design platform that enables designers and developers to work together while still using the individual tools they prefer. Indigo Design Code Studio is comprised of the Infragistics Design System, Indigo Studio, and Code Generation and UI Components for Angular and Flutter. The Infragistics Design System features a library of UI components, UI patterns, pre-defined screens, and complete app scenarios built using Sketch. Indigo Studio is a graphical prototyping tool that uses drag-and-drop to upload designs, add capabilities such as navigation and transitions between screens, and work with stakeholders to get feedback. It also allows users to perform usability studies with built-in video playback and analytics. Users will be able to create Angular or Flutter applications for Android and iOS using the Indigo D2C Studio Visual Studio Code plug-in and UI components.

Google announces Android Jetpack, ARCore and WearOS updates at I/O
Google I/O, the company’s annual developer conference, was packed with new and upcoming developer features for its operating system, augmented reality and wearable solutions. With the beta release of Android P, the operating system is getting a new “Jetpack” of next-generation Android components. As part of Jetpack, the company is launching Android KTX, designed to optimize the Kotlin development experience. Android KTX will provide a set of Kotlin extensions and optimize Jetpack and Android APIs for the programming language. A major update to ARCore was announced with Cloud Anchors to help enable collaborative AR experiences, Vertical Plane Detection for placing AR objects on more surfaces, and Sceneform for faster AR development. In addition, Wear OS by Google developer preview 2 was announced with support for Actions on Google and power-related enhancements such as a battery saver mode.
CodeAI: QbitLogic’s new intelligent security solution
QbitLogic is leveraging the power of artificial intelligence to better protect software systems. The company announced the release of CodeAI, a next-generation SaaS platform designed to repair security defects before solutions are released to the public. CodeAI is able to predict new defects in code and suggest simple fixes. As users interact with CodeAI, the algorithm learns from those interactions and will improve over time, according to the company. “Anybody with a GitHub account can use CodeAI for free only once for now, but it will be one of the most affordable solutions on the market later,” said Arkady Miteiko, cofounder and CEO of QbitLogic. “A few teams that are building open source projects had a chance to see CodeAI at work, and already, one of them described it as a ‘first nonhuman contributor.’ We kind of liked that description and decided to use it. CodeAI is an artificial brain that understands code. Future applications can extend beyond bug fixing.”
Fuse Open mobile app development tool goes open source
Fuse is joining the open-source world with the release of Fuse Open. Fuse is a cross-platform mobile app development tool suite, supporting Android and iOS applications, that aims to reduce development times and resources. With the release of Fuse Open, the entire platform, tooling and libraries are now available as free and open source. The team will be removing the paid professional plan, and making features and source code available for everyone to use or contribute to. According to the team, it will be slowing down internal development efforts for Fuse Open now that it is open source, and will be launching the Fuse App Engine, a new business model around Apps as a Service, “which we believe is a crucial step towards a brighter future for app product owners and solving the key pain points of the app industry going forward,” according to the company.

Intel’s computer vision toolkit for IoT
As part of its Internet of Things and artificial intelligence strategy, Intel has announced the Open Visual Inference and Neural Network Optimization (OpenVINO) toolkit for developers. OpenVINO is designed to give developers the ability to build computer vision and deep learning inference apps at the edge.
The solution will join Intel’s Vision Products portfolio. With the toolkit, developers can build multi-platform computer vision solutions. Intel explains it enables CNN-based deep learning inference at the edge. It also supports heterogeneous execution across computer vision accelerators, aims to speed time to market, features a library of functions and pre-optimized kernels, and includes optimized calls for OpenCV and OpenVX.
Cask drives Big Data into Google Cloud Platform
Cask has announced that it will be joining the Google Cloud Platform. Cask Data Application Platform (CDAP) is an open-source utility that makes it easy to build and run Big Data solutions. “We’re thrilled to welcome the talented Cask team to Google Cloud, and are excited to work together to help make developers more productive with our data processing services both in the cloud and on-premise,” said William Vambenepe, group product manager of Google Cloud. “We are committed to open source, and look forward to driving the CDAP project’s growth within the broader developer community.” According to Cask, its vision from the start has been to provide simplified access to complex technologies. It wanted to accelerate the Big Data industry by providing a standardization and simplification layer to allow for portability across diverse environments, usability among diverse groups of users, and the security and governance needed in the enterprise. z
BY CHRIS COLOSIMO
Chris Colosimo is a Product Manager at Parasoft, with expertise in SDLC acceleration through automation.

Recently, a simple conversation I had analyzing the current challenges associated with software testing in the modern era led to a key realization: the tools in the software testing industry have not been focused on simplicity for the agile world. Agile is primarily a development-focused activity. In its most basic terms, agile is a software development methodology where typical SDLC activities that would traditionally span the duration of a project are broken down into much smaller pieces called sprints. Typically, a sprint is 2 to 3 weeks, and within a sprint, development activities are focused on new features and enhancements. One sprint looks something like this:
A sprint starts with the design and creation phase, where the new functionality is split up into user stories, scoped, and then development immediately starts building something. At the end of the sprint, there may or may not be release activities, but no matter what, feedback is obtained, and then another sprint begins and the process repeats itself over and over again. Agile allows organizations to turn on a dime because the feedback collected during each sprint can be applied to the next sprint and help to guide, shape, and focus the project. This works great for development, but if you look at the test portion of the sprint, it starts to get complicated. Test does not get access to the new features and enhancements until well into the sprint, and for logical reasons: the testing team needs to wait until the development team has built the full functionality, so test is always a little bit behind development right from the beginning.
Test can’t keep pace with development
This problem only intensifies as the sprint continues, due to the most common testing technique used to validate the application: manually interacting with the user interface. This is known as UI testing, and it’s the most common testing practice because it’s easy to use — it’s easy to associate actions in the UI with the user story, it’s easy to scale across a large body of testers, and because of record-and-playback functionality, it’s easy to do an initial round of automation. But there are many problems with UI testing:
• There are hidden costs that stem from the inefficiency of UI testing. The most fundamental challenge with UI testing is the amount of time it takes for development to fix defects when given a UI test as a reproduction. Typically, as a tester begins the defect discovery process, they start with exploratory testing (methodically searching the application for unexpected behavior). When they find a defect, they need to reproduce it for development, which involves writing up “steps to reproduce.” When development receives these instructions, they need to find the version of the application that test is using, stand it up, and go through the steps exercising the UI. If the defect reproduces, they then have to associate that defect with the underlying code to determine the root cause. Development starts working on a fix, which requires them to tear apart the application, fix the defect, and then rebuild the application before QA can start testing again. This further delays the software delivery process and slows down the whole pipeline.
• UI testing doesn’t comprehensively test an application. Testing at the UI layer validates an application’s process flow end-to-end, but doesn’t necessarily test the entire breadth of the system’s internals. Often when new functionality is introduced into an application, it requires changing or updating existing interfaces. Some of these components may not be accessed when using the new functionality, but present significant risks to the organization if they are not tested.
• Test doesn’t have access to the code, so it is difficult for testers to map the actions they are performing in the UI to the underlying source. As a result, the tests that get built do not provide complete API coverage, and quite often things get missed. When it comes time to run the full regression cycle, critical defects may be uncovered. Often, this late-cycle defect detection leads to significant release delays and raises the total cost of testing.
• It’s difficult to maintain UI automation. A main reason why test struggles to keep pace with development is that too much time is spent managing broken UI tests. In fact, up to 80% of testing time is spent either manually executing UI tests or fixing automated UI tests that have broken as a result of application change.
Each one of the above factors can lead to significant delays in a sprint, and a traditional project cycle is a series of these sprints followed by a hardening or regression cycle. At each step of the way, test is struggling to keep pace with development, but because of the test techniques traditionally used, they can never get the comprehensive testing coverage that they want.
Typically, they will be able to validate the new features and capabilities, but fall short of complete test coverage. This is frustrating for a lot of testers, but it’s not their fault. It is just the nature of the beast given the capabilities that exist in the tooling market. The dangerous part is that without these quality practices, defects are leaking into production and eroding the perceived benefit gained from agile.
What about API testing? Can it save agile?
The analysts and industry agree that API testing can more precisely pinpoint the root cause of defects than UI testing, because API tests are closer to the code, easier to automate, and more resistant to application change. Also,
API tests offer a better form of defect reproduction and communication between development and test, because the test artifact represents the convergence of those two areas. In a recent blog I explored API testing, what it is, and how to build a comprehensive API testing strategy. You can read it to get more information about this extremely effective testing practice. Testing at the API layer is a great practice for agile specifically, because it enables testers to validate functionality given the compressed timeline, and API tests are highly reusable. Additionally, API tests have the following advantages:

Lower time-to-defect remediation when compared to UI tests
If an API test fails, you can be pretty darn sure you know approximately where to look in the code. Developers love getting API tests from test because they can execute them directly against their application without having to hook up the entire environment, and they can continuously rerun them as they start to fix the defect. The lower time-to-defect remediation means that, in general, development can fix a bug faster when provided an API test vs. a UI test. When considering the time frames involved in agile, this is exactly what we need. Once a defect is discovered, an API test is provided to development, which they can use to find, fix, and validate the defect, all without having to rebuild the entire application, which saves a tremendous amount of time.

API tests are “automation ready”
APIs represent the invisible communication that takes place behind the scenes of an application. The invisible nature of the communication helps the automation process. There is significantly less complexity involved in getting an application to the point where you can start interfacing with it at the
API level than would be required to stand up an application in its entirety so you could operate at the UI level. As a result, API tests can easily be run in automation at earlier stages of the SDLC. Most of my customers run them at the same time as their unit tests, as a function of code check-in. These API test runs can also be associated with the bug-tracking system much more easily, so that when defects are resolved, the accompanying API tests can be handed back and forth between development and test. This significantly reduces the overall handoff process: instead of filing the defect, providing the steps to reproduce, and then waiting for a new build from development, testers can receive notifications from the bug-tracking system that a defect has been resolved and see the automated test cases that validate the resolution. Those API tests can easily be built into a regression suite and reused over and over.

API tests are more resilient to change than UI tests
As a part of our research, we saw that 80% of development time was spent on managing and updating UI tests that had broken as a result of change. Change is a big time killer when it comes to agile, and because of the increased code check-ins and shortened time frames agile introduces, change is constant. If an organization relies exclusively on UI testing, application change can be devastating, because many of the test cases built to validate critical functionality simply stop working. One of the main principles of agile is the ability to turn on a dime, which means that UIs and functionality are changing all the time, and the burden of supporting and maintaining those tests can become overwhelming for test teams. API tests, on the other hand, don’t even see the UI. APIs also have specific versioning capabilities built in that allow testers to maintain stability as the application is undergoing change.
Additionally, APIs are defined with a service contract that can be leveraged to update test cases as the application undergoes these changes.
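To make the service-contract point concrete, here is a minimal, hand-rolled sketch of validating a response against a declared contract. The contract format, field names, and types are invented stand-ins for a real service contract (such as an OpenAPI document); a production setup would use a proper schema validator rather than this toy check.

```python
# Hedged sketch of contract-driven validation. ORDER_CONTRACT is a
# hand-rolled stand-in for a real service contract; the field names
# and types are invented for illustration only.

ORDER_CONTRACT = {
    "id": int,
    "status": str,
    "total": float,
}

def validate_against_contract(response_body, contract):
    """Check that a response carries every contracted field, well-typed."""
    errors = []
    for field, expected_type in contract.items():
        if field not in response_body:
            errors.append(f"missing field: {field}")
        elif not isinstance(response_body[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

good = {"id": 7, "status": "shipped", "total": 42.5}
bad = {"id": "7", "status": "shipped"}  # wrong type, missing field

assert validate_against_contract(good, ORDER_CONTRACT) == []
assert validate_against_contract(bad, ORDER_CONTRACT) == [
    "wrong type for id",
    "missing field: total",
]
```

When the application changes, the contract changes with it, and rerunning a check like this against the new contract is mechanical, which is precisely why contract-driven tests weather change better than UI scripts.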
API tests can save agile by giving an organization the ability to easily test an application at the earlier stages of development, as well as providing an effective communication mechanism between development and test that is highly resistant to change. Organizations that adopt API testing as a fundamental piece of their testing strategy can leverage the agility these tests provide to really get ahead of the testing challenges.
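As a rough illustration of the lower time-to-defect remediation argument above, here is what an API-level defect reproduction can look like. The order-lookup endpoint, its payloads, and the data are all hypothetical, and the service is stubbed in-process so the sketch runs standalone; a real reproduction would issue HTTP requests (for example with the `requests` library) against the running application.

```python
# Hedged sketch of an API-level defect reproduction. Everything here
# (endpoint shape, payloads, data) is invented for illustration, and
# the service is an in-process stub so the sketch runs without a
# live server.

def get_order(order_id):
    """Stand-in for GET /orders/{id} against the service under test."""
    fake_service = {7: {"id": 7, "status": "shipped", "total": 42.5}}
    if order_id not in fake_service:
        return {"status_code": 404, "body": None}
    return {"status_code": 200, "body": fake_service[order_id]}

def test_order_lookup():
    # If these assertions fail, the fault is pinned to the order-lookup
    # logic itself, not to whatever UI happens to sit in front of it.
    ok = get_order(7)
    assert ok["status_code"] == 200
    assert ok["body"]["status"] == "shipped"

    missing = get_order(999)
    assert missing["status_code"] == 404

test_order_lookup()
```

A developer receiving a reproduction in this form can rerun it in seconds while fixing the defect, with no environment standup and no clicking through a UI script.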
So why aren’t organizations API testing?
Even with all of the benefits that come from API testing, the industry is still focused on UI testing. We believe this is because testers don’t know how to test the API and/or don’t know how their application is using the APIs. It’s not immediately apparent where to get started with API testing an application, and understanding how to assemble all of the “puzzle pieces” in a meaningful way requires domain knowledge of the application. Since organizations still
tend to leverage a centralized testing practice, testers need intimate knowledge of all the different application interfaces and must know how to stitch them together properly. It’s not a trivial task. API testing is still considered no man’s land. In a recent survey, we asked developers and testers who is responsible for API testing in their organization:
• 70% of testers said that development is responsible for API testing. (“The development team created the APIs, and as they were built they should have also built API tests to validate that they work as described.”)
• 80% of developers said that test is responsible for API testing. (“We created these APIs to be externally facing and documented them with a service contract. The testing team should come in and validate that the APIs work as described.”)
As you can see, there’s some confusion as to who is ultimately responsible for API testing. I believe that API testing is the responsibility of both developers and testers, in different forms, but it is this disconnect that leads to low API test coverage. Testing at the API level requires specialized skills and tools in order to get comprehensive test coverage. It’s not intuitive. There are tools on the market that are trying to help organizations build an API testing strategy, but the vast majority of them require a high degree of technical expertise to build comprehensive API tests. Additionally, testers still need to understand how the APIs work, which requires domain knowledge. As a result, organizations tend to do the bare minimum for API testing, which is the opposite of what we need for agile.
Artificial Intelligence for Test Automation
Why AI, why now? The only way to solve this industry problem is to build tools that take the complexity out of API testing. New technology is using artificial intelligence to do exactly that: converting manual UI tests into automated API tests, lowering the technical skills required to adopt API testing, and helping organizations build a comprehensive API testing strategy that scales. So how does it work?
Converts UI activity into automated API tests
The technology monitors background traffic while you’re executing manual tests, analyzes that traffic, and uses artificial intelligence to automatically build a meaningful set of API test scenarios. When building those API tests, it first identifies API calls, then discovers patterns and analyzes the relationships between them, so it can generate complete API testing scenarios, not just a series of API tests.

Reduces the learning curve to API testing
With an easy place to start building API tests, testers don’t have to touch the difficult activities associated with manually building them: finding the correct service definition, understanding the data payload, or running a test over and over again to understand the relationships between requests and responses so you can start building assertions. Instead, the AI-enabled technology does all of this heavy lifting automatically, based on activity it observes while the tester is using the UI. This helps novice users get a greater understanding of API testing in general, because they can map the activities they performed in the UI to the API tests that were created, building a greater understanding of the relationship between the UI and the underlying API calls and helping drive future API testing efforts.

Helps users build comprehensive API testing strategies
Although API testing is one of the most effective software testing practices, many organizations haven’t successfully adopted it because it requires specialized skills and tools. To adopt a comprehensive API testing practice, it is important to use a sophisticated API testing tool with visual interfaces that are easy to pick up, enabling API testing beginners to start creating powerful API scenarios in a short amount of time.
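A rough sketch of the conversion step described in this section: given recorded traffic, group calls into per-session scenarios and infer which later calls consume data produced by earlier ones. The record format and the single correlation rule used here (a POST creates an id that later GETs reuse) are invented for illustration; real tools apply far richer pattern analysis.

```python
# Hedged sketch of turning recorded UI traffic into API test scenarios.
# The traffic records and the "POST creates an id that later GETs
# consume" rule are invented for illustration only.

from collections import defaultdict

recorded_traffic = [
    {"session": "s1", "method": "POST", "path": "/orders",
     "response": {"id": 7}},
    {"session": "s1", "method": "GET", "path": "/orders/7",
     "response": {"id": 7, "item": "book"}},
    {"session": "s2", "method": "GET", "path": "/catalog",
     "response": {"items": 3}},
]

def build_scenarios(traffic):
    """Group calls by session, then chain calls that reuse a created id."""
    sessions = defaultdict(list)
    for call in traffic:
        sessions[call["session"]].append(call)

    scenarios = []
    for session, calls in sessions.items():
        created_ids, steps = set(), []
        for call in calls:
            if call["method"] == "POST" and "id" in call["response"]:
                created_ids.add(str(call["response"]["id"]))
            # A GET whose path ends in a previously created id becomes an
            # ordered, data-driven scenario step, not an isolated test.
            chained = (call["method"] == "GET" and
                       any(call["path"].endswith("/" + i) for i in created_ids))
            steps.append({"method": call["method"], "path": call["path"],
                          "chained": chained})
        scenarios.append({"session": session, "steps": steps})
    return scenarios

scenarios = build_scenarios(recorded_traffic)
assert scenarios[0]["steps"][1]["chained"]       # GET /orders/7 reuses the id
assert not scenarios[1]["steps"][0]["chained"]   # standalone catalog read
```

The payoff of emitting scenarios rather than isolated calls is that the generated suite preserves the workflow the tester actually exercised, which is what makes it a meaningful regression artifact.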
What’s the end result?
Agile development helps organizations deliver quality software to market faster, but without the technology needed to fully test applications at speed, the risks associated with accelerated delivery erode agile’s potential benefits. Now is the time for organizations to get smart about API testing. A solid API testing strategy will allow organizations to get the most value out of their agile transformations. To make this a reality, testing tools should work for us. Using artificial intelligence to do the heavy lifting of creating API test scenarios lowers the complexity associated with API testing, lowers its adoption barriers, and helps organizations bring in a manageable, maintainable, and scalable test strategy. z
Tools shown at Build follow AI, modernization trends
BY DAVID RUBINSTEIN
Several new and updated tools showcased at Microsoft’s Build developer conference in Seattle revealed how partners are following the trends toward AI, new software architectures and modernization. Here is a look at their news announcements:
Progress: Conversational UI for chatbots
Progress today announced the May 16 availability of Conversational UI, a set of controls built for chatbots. Part of the Telerik and Kendo UI tools, the components give both web and application developers the ability to build natural-language understanding into their solutions that can run across multiple devices and on many chatbot frameworks, including Microsoft’s Azure Bot Service. Among the challenges of creating a quality human-computer interaction is the difficulty of capturing visual cues. The new controls give developers the ability to create chatbot apps that use visual elements to “enhance the natural flow of conversation,” Progress said in its announcement. Using the components, developers can implement such things as calendars and date pickers, which can be used to steer the conversation.
Bitnami: App migration to Azure Bitnami announced support for Azure in its Stacksmith tool for automating application migration to the cloud. According to Bitnami’s announcement, Stacksmith takes a running application in the datacenter, repackages and optimizes it for cloud deployment, and delivers everything needed to successfully deploy it to a container or cloud platform. Stacksmith then continuously monitors the replatformed application for updates and patches, providing an easy way for companies to keep applications up-to-date and secure.
PureSec: Security for Azure Functions
Israel-based serverless security company
PureSec released a beta version of its security solution for Microsoft Azure Functions, Microsoft’s serverless platform. According to the company, PureSec’s Serverless Security Runtime Environment (SSRE) defends against such application-layer attacks as NoSQL/SQL injection and other unauthorized malicious actions.
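For readers unfamiliar with the attack class mentioned above, here is a minimal, self-contained illustration of SQL injection and the parameterized-query defense, using Python’s standard sqlite3 module. The table, data, and attack string are invented; this shows only the shape of the attack, not PureSec’s mechanism.

```python
import sqlite3

# Hedged illustration of the SQL-injection attack class. The schema
# and data are invented for demonstration purposes.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

attack = "nobody' OR '1'='1"

# Vulnerable: attacker-controlled input concatenated into the query
# widens the WHERE clause and leaks every row.
leaked = db.execute(
    "SELECT secret FROM users WHERE name = '%s'" % attack
).fetchall()

# Safe: parameter binding treats the input as data, not SQL.
safe = db.execute(
    "SELECT secret FROM users WHERE name = ?", (attack,)
).fetchall()

assert leaked == [("s3cr3t",)]  # the injection succeeded
assert safe == []               # the bound parameter matched nothing
```

Application-layer defenses like the one described above aim to catch exactly this class of input before it reaches the query, since serverless functions often lack a traditional perimeter in front of them.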
Altova server software on Azure
Altova has announced the release of a free virtual machine template for use on Azure, pre-installed with Altova server software for automating data processing and integration workflows. The template, available in the Azure Marketplace, installs all of Altova’s server software products, and users can activate those they want. Using the Altova LicenseServer, customers can get free 30-day trials of all the products. The products in the template are FlowForce Server for data processing and integration; MapForce Server for mapping and aggregation of processes for XML, JSON, databases, flat files, and more; and MobileTogether Server, a gateway to get back-end data into mobile applications. Also included are StyleVision Server for business report generation; RaptorXML Server for validating and processing XML, XBRL and JSON; and DiffDog Server, for automating comparisons of files and directories in parallel computing environments.
Mobilize.Net says WebMAP5 uses AI to help translate and re-architect legacy applications while removing the technical debt in those applications.
PagerDuty supports VS Team Services Incident response provider PagerDuty announced integrations with Microsoft Azure and Visual Studio Team Services to offer teams insights into their operational health and event intelligence powered by machine learning throughout the software development life cycle. The integrations sync incidents between PagerDuty and Microsoft Azure Alerts to give teams more context about their infrastructure and services, the company said.
PreEmptive: Rooted device security PreEmptive Solutions today announced updates to Dotfuscator Professional Edition (to 4.35.0) and Dotfuscator CE (to 5.35.0) to include rooted device detection and response controls for Xamarin.Android applications. Rooting allows users of Android devices to gain access to subsystems that can be used to threaten those devices. According to the company, root detection and response is included with Dotfuscator at no additional cost. Dotfuscator Community Edition (CE) is available at no cost to qualified Visual Studio developers and Dotfuscator Professional licenses start at $1,750 per developer.
Syncfusion: Dashboard platform preview
Mobilize.Net: AI-powered transformation WebMAP5, the latest version of Mobilize.Net’s code transformation tool, adds Angular support as companies look to take their legacy applications to the web and the cloud. Angular support was added through Progress Kendo UI, a native Angular UI component library. Mobilize.Net gives organizations a way to take Visual Basic 6, C#, PowerBuilder and Silverlight applications to the cloud and web.
We are entering a new stage of app development. While until now, requirements for architecture were left to the discretion of companies, developers and their target audiences, recent legal changes in the European Union and the United States have brought a new player to the table: regulations. Most notably, the EU’s General Data Protection Regulation (GDPR) is a much-needed update to 20-year-old legislation that also enshrines into law groundbreaking new principles that were until recently only the talk of policy circles. Taking a bold stand for individual privacy, the GDPR limits what data processors and data controllers can do with the information they collect and gives EU data subjects unique rights, such as the ability to revoke consent for data processing or request that their data be erased. The GDPR places all responsibility for data protection firmly on the shoulders of businesses and holds them accountable in the eyes of the law for any data breaches and leaks that may occur. The fines the regulation imposes for noncompliance are no joke: up to €20 million (about US$24 million) or 4 percent of a company’s global annual turnover, whichever is higher. But there is one stipulation in the GDPR that directly affects developers: Apps from now on should be built with data protection by design. What this essentially means is that, while until now the decision concerning the level of security of an app or service was left in the hands of the company or the development team building it, under the GDPR, it will be mandatory for security features protecting data to be included in the design, from the first stages of the development process. In truth, when it comes to applications, security has always taken a back seat to features more likely to boost a product’s user-friendliness and saleability, such as attractive UIs, enhanced tools, unique features, etc. It was —
Roman Foeckl is CEO of CoSoSys, a data security company
What does it take to build a secure app by design? BY ROMAN FOECKL
and still is — considered more important for a product to be appealing rather than secure. In its 2017 Application Security Statistics Report, WhiteHat Security reported that almost half of all applications are vulnerable on every single day of the year. Of the 13 industries it looked at, the report found that approximately 60 percent of applications in the Utilities, Education, Accommodations, Retail, and Manufacturing sectors are always vulnerable. With most applications requesting access to personal information and requiring in-app payments, it’s no surprise the GDPR dedicates an entire article to adding new data protection standards to the development process.
The challenges to boosting app security
According to WhiteHat Security’s report, most organizations are unable to resolve the vulnerabilities found in their applications. This means that it’s not so much a matter of choice that they decide to forgo security features, but that their development teams lack the necessary skills to add them to their applications. Security is, after all, a niche unto itself. Specialized personnel are needed to accurately apply security guidelines and compliance profiles. Companies are thus faced with two choices: hire security engineers, or train their own developers and familiarize them with best security practices. Both options imply high costs and neither ensures success. On the one hand, security engineers are few and far between and there’s no guarantee a company will easily find and hire one, let alone a team. On the other hand, although developers may be taught the finer points of secure app development, the chances are high that they will only be able to create basic or subpar security features in the applications they build.
There is also the issue of maintenance. Security is never a one-off affair. It’s not enough for security to be added in the development phase. Protection profiles have to be constantly updated to the latest requirements and new features have to be added as new threats, attacks and vulnerabilities are discovered. This means a lot of extra work for developers. So how can companies efficiently build their applications to include data protection by design while keeping costs down? A third option, provided by technology, has begun emerging in recent years: cybersecurity APIs.
Rise of the data loss prevention APIs
Given the mounting pressure to secure applications and prevent data leaks and theft, it is no wonder that cybersecurity APIs have started being developed — with giants like Google and Amazon leading the way. After first adding DLP features to Gmail in 2015 and then extending them to the entire G Suite, Google launched its Cloud Data Loss Prevention (DLP) API in fall 2017, with the aim of helping organizations protect and regulate sensitive data. Aimed at developers, it can be integrated with external data sources and used to scan large datasets in Google Cloud Storage, Datastore, and BigQuery. Amazon took things further by adding Macie to its offering, a DLP service for AWS S3 that uses machine learning and natural language processing (NLP) to identify, classify, monitor and protect sensitive information. Built using the AI-based algorithms developed by startup Harvest.ai, which Amazon acquired last year, Amazon Macie acts as an alerting engine that, integrated with other Amazon services, can automatically respond to incidents and apply remediation actions.
At the same time, veterans of the information security sector have extended their product lines to include APIs with a strong focus on DLP and compliance policies. Aiming to broaden DLP APIs’ capabilities, products such as sensitivity.io and CloudLock offer more diverse implementation methods, moving beyond cloud storage toward local native SDKs and Security as a Service. They also provide integration options for everything from apps to popular infrastructures, clouds and services. DLP APIs eliminate both the need for companies to invest in additional staff or training and the man-hours needed to maintain applications’ security features. Cybersecurity APIs can ensure compliance with regulations such as GDPR, GLBA, HIPAA and the like, while constantly adding new features and policies as the regulations themselves are updated or changed.
Towards data protection by default
Security is now a mandatory aspect of app development for anyone falling within the scope of the GDPR. Companies will have to be able to prove that data protection features were added to their applications during development for any new product built after the regulation comes into force on May 25. This can be done through the guidance of security engineers or through the use of cybersecurity APIs that hold the necessary expertise to correctly enact these policies at app level. Companies that need not worry about the EU’s new regulation should not ignore security either. Any popular app, especially one whose vulnerabilities can be easily exploited, is one eager cybercriminal away from being compromised, and the company behind it can lose all credibility and a significant number of users overnight. In the age of endless exploits, data trafficking and websites dedicated to leaks, data protection’s place will no longer be only on company networks and computers, but at the very heart of all applications and services. z
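The scan-and-classify idea behind these DLP APIs can be illustrated with a toy example. The sketch below is not the Google Cloud DLP or Amazon Macie API; it is a minimal regex-based stand-in, with invented pattern names and rules, showing the inspect/redact workflow such services expose:

```python
import re

# Illustrative sketch only: a toy DLP-style scanner, not a real DLP API.
# Production services use trained classifiers and far richer detectors;
# the info-type labels and patterns here are hypothetical.
PATTERNS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def inspect(text):
    """Return a list of (info_type, match) findings of sensitive data."""
    findings = []
    for info_type, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((info_type, match))
    return findings

def redact(text):
    """Replace every finding with its info-type label."""
    for info_type, pattern in PATTERNS.items():
        text = pattern.sub(f"[{info_type}]", text)
    return text
```

A real DLP service adds confidence scores, context-aware detection and policy hooks on top of this basic inspect/redact loop, which is what makes it worth calling an API rather than maintaining patterns in-house.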
LeanIX: Dated monolithic architectures are killing future opportunities for IT BY JENNA SARGENT
Despite the rapid advances in technology, it seems many enterprises are still stuck in the past in terms of system architecture. André Christ, CEO and co-founder of SaaS company LeanIX, finds too many enterprises are continuing to use older architectures rather than upgrading to newer systems. This is starting to pose a problem as machine learning and the Internet of Things are becoming more prominent. Older architectures simply cannot be extended to support these new technologies, Christ explained. With monolithic architectures “you have very long cycles of updating those applications,” said Christ. “Companies are seeing they will no longer be able to introduce new products, new channels, or new markets if they don’t ensure that their software architectures are flexible enough.” According to Christ, those companies staying with old architectures are essentially being forced to spend resources maintaining their existing architectures rather than putting their efforts into new technologies. “Those legacy systems are killing the future business opportunity,” he said. “As with any other major transformation, the decision to modernize is not to be entered into lightly,” said Christ. “Shifting away from an often-times decades old highly layered monolithic architecture comes with a set of challenges that is not for the faint of heart.” The way enterprise architecture is addressed today is much more collaborative than it was five or 10 years ago, explained Christ. According to him, there are three factors contributing to the need to be more collaborative when it comes to enterprise architecture. First is that back then, decisions, products, and approaches were much more central and top-down, Christ explained. “Right now, architecture
decisions are decisions on which applications should be bought, or which software should be developed,” he said. “It’s no longer centrally decided and from the CIO, it’s much more widespread in the organization.” This democratization has led to more collaboration between business and IT, he said. Second, access to modern platforms fosters collaboration because teams are no longer just managing applications that run locally on machines. They are implementing technologies such as virtualization, cloud computing, and
A six-step plan
Christ lists six rules that a microservices adoption plan should follow:
1. Services should be designed and responsible for only a single business capability.
2. Each microservice should have its own datastore.
3. Code should have similar levels of maturity.
4. Deploy microservices in containers.
5. Treat servers as replaceable members of a group that all have the same functionality.
6. Monitor microservices with mechanisms such as response time notifications, service error notifications, and dashboards.
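Rule 6 of the list above can be sketched in a few lines. This is an illustrative monitor, not any particular product; the service name and threshold are hypothetical, and a real deployment would ship these alerts to a dashboard rather than keep them in memory:

```python
import time

# Sketch of microservice monitoring: wrap a service call, record its
# response time, and raise a notification when it is slow or fails.
class ResponseTimeMonitor:
    def __init__(self, service_name, threshold_ms=200):
        self.service_name = service_name
        self.threshold_ms = threshold_ms
        self.samples = []   # response times in ms
        self.alerts = []    # would feed a dashboard in production

    def call(self, fn, *args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            # service error notification (rule 6)
            self.alerts.append(f"{self.service_name}: error {exc!r}")
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            self.samples.append(elapsed_ms)
            if elapsed_ms > self.threshold_ms:
                # response time notification (rule 6)
                self.alerts.append(
                    f"{self.service_name}: slow response {elapsed_ms:.0f} ms"
                )
```

With hundreds of interdependent services, this kind of per-call instrumentation is what makes the diagnostics problem Christ describes tractable.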
microservices, which necessitates collaboration, Christ said. Third, the increasing complexity of IT projects, such as digital transformation and support for IoT, led to the need for tighter integration within the enterprise architecture. Some of the obstacles that he sees organizations running into when trying to undergo the transformation from
older monolithic architecture are the data volume and integrity, autonomous service management, and the ability to run diagnostics. Christ explained that the volume of data can cause major stress on an enterprise’s network, resulting in latency. There may be hundreds of services running within a single application, and requests could span multiple services. In addition, when taking a decentralized approach, such as with microservices, maintaining high-quality data becomes more complex, especially when each microservice has a different database and each database might use a different technology. This can lead to issues when services are referencing data in other services, Christ said. Management of autonomous services also becomes more complex when teams are dealing with hundreds of interdependent services rather than just one deployment, he explained. Finally, managing performance on microservice-based applications requires more effort than it would on a monolithic application. Applications may produce large numbers of logs that need analysis, which means debugging and running diagnostics becomes a laborious process, Christ said. In order to overcome these challenges, it’s important to remember that the transition will not happen all at once; rather, the monolith will shrink a little every day. Monolithic code will live on in the system for years after making the switch. He explained that companies need to decide when and where microservices can be integrated into existing applications. He also explained that when coming up with a plan, it’s important to think about how the application will be used. “Identify the goals in the context of the user’s environment, task or work flow and let the needs inform the design,” he said. z
SmartBear acquires Hiptest testing platform
BY CHRISTINA CARDOZA
SmartBear is giving Agile and DevOps teams more tools to move faster with the acquisition of Hiptest. Hiptest is a continuous testing platform that enables teams to collaborate, test and generate living documentation. According to Hiptest, it was built by an Agile team and designed for Agile teams. It is currently used by more than 25,000 users in 140 countries.
“We founded Hiptest to enable Agile and DevOps teams to get to market faster with the product they actually imagined,” Laurent Py, co-founder and CEO at Hiptest, said in a news release. “We felt strongly that SmartBear was the ideal partner for us moving forward because of our shared vision for empowering modern software development teams to go from idea to production, faster.”
In addition, Hiptest fosters collaboration between business and technology teams with native support for Behavior Driven Development. BDD ensures that team members have a common understanding of requirements and keep the users’ perspectives in mind throughout development. SmartBear says this is more important than ever today because teams are developing rapidly and releasing more frequently.
“One of the biggest challenges development teams face is collaboration. If you just focus on test automation with BDD frameworks like Cucumber you miss the point. Business stakeholders and development teams need to have a conversation before starting development to create a shared understanding of the features,” Ryan Lloyd, vice president of products, test and development at SmartBear, told SD Times. “This is where Hiptest comes in by providing an easy-to-use platform for non-technical teams to refine the features into acceptance criteria. The second challenge is making sure business teams and developers use the same language to create that shared understanding. Hiptest enables teams to progressively create and reuse a common business terminology when creating the acceptance criteria.”
All Hiptest employees will join SmartBear and continue to grow the Hiptest product and business. SmartBear will continue to develop Hiptest in Besancon, France, and has plans to grow operations there.
“The addition of Hiptest to the SmartBear product portfolio complements all of our test automation tools that are loved by development teams today, including TestLeft, CrossBrowserTesting, SoapUI Pro, and Swagger Inspector,” Lloyd said in the release. “Hiptest is an exciting, fast-growing business, and is aligned with SmartBear’s strategy of delivering easy-to-use and easy-to-consume tools in support of the Software Development Life Cycle (SDLC).” z
The Hiptest team is joining SmartBear on its shared vision to empower modern software development teams to move faster.
ServiceNow brings DevOps to the enterprise BY CHRISTINA CARDOZA
ServiceNow has announced a new Enterprise DevOps offering as part of its application development services. Enterprise DevOps is a collaborative work approach designed to bring tactical and strategic balance to the business. “We know that software eats (and runs) the world and Enterprise DevOps is part of that process,” Allan Leinwand, chief technology officer at ServiceNow, wrote in a post. ServiceNow already uses DevOps to manage internal software development, which involves some 80 builds every day, along with code
check-ins from a thousand ServiceNow developers worldwide. According to the company, 80 percent of companies are expected to adopt DevOps principles in the next two years. Enterprise DevOps was built to include both IT Ops and development teams. IT Ops wants control, transparency, security and governance, while developers want agility, flexibility and speed, Leinwand explained. “Our mission is to help customers move from previous traditional enterprise software development practices where apps are developed in one silo
and then operated in another. A typical Enterprise DevOps workflow includes: software planning, coding, testing, deployment and the ongoing operations of a live app,” he wrote. ServiceNow Enterprise DevOps will be available with the company’s London release and focuses on software planning with the company’s Scaled Agile Framework 4.5 product, its Agile development product and integrations with developer collaboration tools. With its Madrid release, the offering will extend into software coding and testing with integrations with Atlassian Jira, Git and Jenkins. z
he 2018 SD Times 100 is here, and we celebrate the achievements of these companies as they take or retain their position as leaders and influencers in the software development industry. Like a piñata at a child’s birthday party, these companies have rained down goodies in the form of new projects, innovative technologies and paradigm-shifting ideas to delight and amaze the people who develop and ultimately use software. This year’s basket includes many of the classics, like M&Ms and Hershey bars, but there are some upstarts that tantalize us with new flavors and consistencies of technology that are sure to last a long time. This reflects the fact that our industry does not stand still. As tastes change, as times change, so does our consumption of the software that in turn is “eating the world.” New companies and new categories on this list are a testament to the ever-changing technology landscape. And you can’t beat that with a stick!
Creative, innovative people draw inspiration from those around them. The Beatles were influenced by Chuck Berry, Little Richard and Elvis Presley, among others. In turn, they influenced the next generation of musicians, and continue to inspire people today. The same is true in the software development space. Innovations in service-oriented architectures influenced the creation of microservices, or microapps. The latter, though, were made possible by the idea of the cloud. This category looks at those companies, associations and projects that have inspired development and IT shops to build upon the work they have created, and recognizes them for their leadership as we begin to create a digital world we only could have dreamed about a generation ago.
Tips for Building AI into Mobile Apps BY LISA MORGAN
Apps are getting smarter, which is affecting what developers do and how they do it. While programmers don’t have to be AI experts to include intelligent elements in their apps, they should understand something about what they’re building into their app and why. For example, if you’re trying to improve a shopping experience or the stickiness of a content site, you’ll likely use a recommendation engine. If you’re building a social app, an agriculture app, or a pet monitoring app, image recognition may make sense. If real-time context is important, such as for search or marketing purposes, location recognition may be worth considering. And regardless of what you’re building, you’ll probably add a conversational interface sooner or later. The use cases and the opportunities for including AI in mobile apps are practically limitless, but it’s wise to understand the limitations of what you’re using, including how it will affect app performance and user experience.
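To make the recommendation-engine idea concrete, here is a minimal item-scoring sketch using user-to-user cosine similarity. The ratings data and names are invented for illustration; a production app would call a trained recommendation service rather than compute this on-device:

```python
from math import sqrt

# Hypothetical ratings matrix: user -> {item: rating}.
RATINGS = {
    "ann":  {"boots": 5, "tent": 4, "stove": 1},
    "ben":  {"boots": 4, "tent": 5, "stove": 2},
    "cara": {"boots": 1, "stove": 5, "lantern": 4},
}

def cosine(u, v):
    """Cosine similarity between two users' sparse rating vectors."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user, ratings=RATINGS):
    """Rank items the user hasn't rated by similarity-weighted ratings."""
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for item, rating in their.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)
```

The same shape — score unseen items by the tastes of similar users — underlies far larger systems; only the similarity model and the scale change.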
While it’s not necessary for developers to become data scientists to take advantage of AI, they should familiarize themselves with the basics so they can use AI and remediate issues more effectively. “AI that is integral to new mobile experiences, such as voice-based assistants and location-based services, increasingly requires mobile developers to have a rudimentary understanding of AI to be effective,” said Vinodh Swaminathan, principal, Intelligent Automation, Cognitive & AI at professional services firm KPMG. “AI platform providers are increasingly packing a lot of developer-friendly features and architectures [into their products] that take the burden of knowing AI off developers.”
What developers should know
AI is not one thing
AI often is used interchangeably with other terms including machine learning, deep learning, and cognitive computing, which can be confusing for those who haven’t yet taken the time to understand their differences. Others, such as technology analyst, advisor, and architect Janakiram MSV, view the terms more narrowly. In a workshop at the recent Interop conference, he explained the various types of AI and their associations as follows: • AI = recommendation engines • Machine learning = pattern recognition • Cognitive computing = sensing • Deep learning = multi-level (“deep”) neural networks that model the human brain to enable more human-like decisions. For the purpose of this article, AI is used as an umbrella term.
Given the shortage of data science talent, it’s no surprise that there is a growing body of easier-to-use frameworks and platforms, as well as Alexa Skills, APIs and reusable models. Simplicity does not alleviate the need for thought, however. Rather than just grabbing a machine learning model and plugging it into an application, for example, developers should understand how the model applies to the particular application or use case. According to Swaminathan, that may require getting a better sense of what data was used to train the model and what levers are available to further refine the pre-trained model to improve its effectiveness and performance. Most developers haven’t been trained to think in terms of models and algorithms, however. They’ve been taught how to code. “Mobile applications have been about user experience and not so much about how you make the application more intelligent. It’s only recently that
028-33_SDT012.qxp_Layout 1 5/22/18 2:52 PM Page 30
chatbots and intelligent components have started to get exposure,” said Dmitri Tcherevik, CTO of cognitive-first business application development platform provider Progress Software. “If you want to do simple object or image recognition or speech processing, or if you want to build a simple chatbot, there are many tools from which to choose.” Like anything else, though, what’s available off the shelf may not be exactly what your application requires. Specialized use cases tend to require specialized models and algorithms to yield the desired results. While specialized models and algorithms may also be available off-the-shelf, they may also need some fine-tuning or modification to deliver the value the app is intended to provide. “If you’re building an end-to-end application, you need to know how to collect data, how to store data, how to move data stored in the cloud, how to clean data, and extract the features to make that suitable for algorithm training and model training,” said Tcherevik. Data scientists know how to do all of that, but most developers do not. Given that just about everything is data-driven these days, including AI, it’s wise for developers to learn something about working with data, how machines learn, commonly used statistical techniques and the associated ethical issues, all of which are typically included in an introductory data science course or book. “Depending on the application, there may be liability issues if [the] machine learning is not properly trained and makes a wrong decision,” said Tom Coughlin, IEEE senior member and president of data storage consulting firm Coughlin Associates. “A developer should test the application under all edge conditions to try and find such issues before release, or create some type of fail-safe that can avoid dangerous situations, if the application will be used for mission-critical applications.” An important thing to understand when working with AI is that things are not static. 
For example, if a dataset changes, then the model using that dataset will need to be modified or retrained.
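That retraining point can be shown with a deliberately trivial example. The “model” below just predicts the training mean, and the datasets are made up, but it illustrates how a model goes stale when its data shifts and recovers once retrained:

```python
# Sketch: a model is only as good as the data it was trained on.
class MeanModel:
    def __init__(self):
        self.prediction = None

    def train(self, data):
        self.prediction = sum(data) / len(data)

    def error(self, data):
        """Mean absolute error of the current prediction on new data."""
        return sum(abs(x - self.prediction) for x in data) / len(data)

old_data = [10.0, 12.0, 11.0, 13.0]   # hypothetical original dataset
new_data = [50.0, 52.0, 51.0, 53.0]   # the dataset after a shift

model = MeanModel()
model.train(old_data)
stale_error = model.error(new_data)   # large: the world has changed
model.train(old_data + new_data)      # retrain on refreshed data
fresh_error = model.error(new_data)   # smaller after retraining
```

A real model would be retrained on freshly labeled data through a monitored pipeline, but the feedback loop is the same.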
“Developers need to understand that the AI is only as good as its model and training. Without constant feedback and input, an AI model can become something else entirely,” said Pam Wickerham, director of Solutions Development at contract management platform SpringCM. “A trained model is never done and will always evolve and change. As an app gets smarter, it’s important to remember that it can evolve to [include] bias. It’s important to have a large sample set and review the feedback and training loop constantly to be sure the focus doesn’t become narrow or go in [an unintended] direction.”
Nir Bar-Lev, co-founder and CEO of deep learning computer vision platform Allegro.ai, thinks developers should understand how the fundamental nature of coding and AI differ. Namely, that with standard code, answers are deterministic and with AI they are statistical. “AI delivers a prediction answer on a given question with a corresponding statistical score,” said Bar-Lev. “Each score is also a product of the specific API, the specific question, the actual ‘noise’ in the environment and the version of the API.”
Why APIs alone aren’t enough
Adding intelligence to a mobile app isn’t necessarily as easy as calling an API because some refinement may be necessary to suit the app and use case. Like anything else, new capabilities should not be added to an app simply because it seems fashionable to do so. “The more your application requires specific domain knowledge, the less you can rely on AI APIs available today as solutions for your needs,” said Bar-Lev. “AI is a learning paradigm. [The] conventional best practice rule of thumb is that the more data there is, the more accurate the results. However, it’s not only the quantity of data that’s important but also its specificity to the problem or use case the AI model is being asked to address. This means that the more specific or esoteric your domain area is, the less performance and quality should be expected from existing AI APIs.” He also warns that since AI APIs are mostly cloud-based, connectivity and communications overhead need to be considered.
“AI, and specifically deep learning, are computationally heavy. As such, most of them are provided from the cloud, where there are abundant compute resources,” said Bar-Lev. “Obviously, developers need to consider cloud-based API calls for AI as they would any cloud-based service call, as opposed to making a local on-device call.”
There’s also the question of integrating AI into the SDLC to constantly monitor what’s working and what’s not. “As with any new feature or new technology that supports the application, it’s important to pay attention to key lifecycle metrics. Why have we added AI? What are we trying to improve? How do we assess the effectiveness of other techniques?” said Pavel Veller, CTO of the Digital Engagement Practice on AI Competency Center and Capabilities at software development service provider EPAM. “It’s easier [to answer those questions] for business processes that have a clear funnel to optimize and evaluate, such as eCommerce, and more difficult for others that focus on engagement. The key, though, remains in identifying KPIs and putting continuous improvement in place.” As mobile developers know, many factors can impact the effectiveness of an app.
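Bar-Lev’s deterministic-versus-statistical distinction can be shown side by side. The toy logistic “classifier” and its weights below are invented for illustration; the point is only the shape of the answer: an exact value versus a label with a confidence score:

```python
import math

def is_even(n):
    """Deterministic: the same input always yields the same exact answer."""
    return n % 2 == 0

def classify_spam(num_links, has_greeting):
    """Statistical: returns (label, confidence) from a toy logistic model.

    The weights are hypothetical; a real model learns them from data.
    """
    score = 1.2 * num_links - 2.0 * has_greeting - 0.5
    p = 1.0 / (1.0 + math.exp(-score))   # sigmoid maps score to probability
    return ("spam", p) if p >= 0.5 else ("ham", 1.0 - p)
```

An app consuming the second kind of answer has to decide what to do at, say, 55 percent confidence, which is exactly the design question deterministic code never poses.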
Be mindful of resources
As always, mobile apps need to be developed with resource utilization in mind. What one can do with GPU clusters in the cloud is not the same thing one can accomplish on a mobile phone. Therefore, developers need to consider how the models and algorithms they intend to use would affect resources including battery power and memory usage. “Often, there’s not a clear path to getting something working quickly on [a] mobile device with the prepackaged libraries such as Apple’s Core ML and Google TensorFlow Lite,” said independent developer Kevin Bjorke. “If I have all of the resources and time I need because the client will pay for it, it’s [a different scenario] than when it has to run inside the Samsung phone. You want to do stuff and execute stuff that doesn’t use all the memory on the phone.” According to Progress’ Tcherevik, resource utilization is just another
3 Machine Learning Tools to Consider Mobile developers ready to experiment with machine learning or deep learning should consider the following frameworks:
n Apple Core ML is a machine learning foundation upon which domain-specific frameworks operate. The domain-specific frameworks include Vision (image analysis), Foundation (natural language processing) and GamePlayKit (which evaluates decision trees). Developers can use Core ML to add pre-trained machine learning models to their apps.
n Caffe2 is a lightweight, modular deep-learning open-source framework contributed by Facebook. It allows users to experiment with deep learning and take advantage of the models and algorithms community members have contributed. Caffe2 comes with Python & C++ APIs so developers can prototype immediately and optimize later.
n Google Tensorflow is available as two solutions for deploying machine learning applications on mobile and embedded devices. TensorFlow Lite is a modified version of TensorFlow Mobile that allows developers to build apps that have a smaller binary size, fewer dependencies and better performance. However, TensorFlow Lite is available as a developer preview, which does not cover all the use cases TensorFlow Mobile covers. TensorFlow Light also offers less functionality than TensorFlow Mobile. For production cases, developers should use TensorFlow Mobile. z parameter that developers should be monitoring. “It goes back to the developer’s continuous deployment workflow,” said Tcherevik. “Establish a feedback loop and have a process and culture of continuous evaluation and improvement. Things don’t change solely because of technology. A change in consumer behavior or a market trend can make as negative an impact as wrongly implemented AI.” Sooner or later something will break or no longer work as intended, which is another reason why developers need to understand the basics of what it is they’re adding to an app. “You need to have some knowledge of how the thing that you just put together is working or how it could be made to run faster, because if it has to run on mobile, it needs to be thermally responsible,” said Bjorke. “In production, you really care about how much it costs to generate just this piece of information.”
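One way to treat resource utilization as just another parameter to monitor, as Tcherevik suggests, is to measure each inference call. Below is a minimal sketch using only Python’s standard library; `toy_model` is an invented stand-in for a real on-device model invocation (a Core ML or TensorFlow Lite call would replace it):

```python
import time
import tracemalloc

def profile_inference(fn, *args):
    """Run one inference call and report wall time (seconds) and peak
    Python-heap allocation (bytes) during the call."""
    tracemalloc.start()
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

# A toy "model": sum of squares over a feature vector.
def toy_model(features):
    return sum(x * x for x in features)

out, seconds, peak_bytes = profile_inference(toy_model, list(range(1000)))
```

Logging `seconds` and `peak_bytes` per call gives the feedback loop something concrete to alert on when a model update regresses battery or memory behavior.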
Know where the data came from

Typically, there’s no shortage of data; however, the existence of data does not mean that the data is available, clean, reliable or can be used as intended. Data availability, even within a company, can be an issue if the data owner refuses to share the data. In other contexts, the availability of the data may hinge on its intended use, the party using it, the purpose for which it’s being used and whether payment is required.

Also, data cleanliness is a huge issue. For example, a typical company will have multiple customer records for a single individual, all of which differ slightly. The problem is caused by field discrepancies in various systems, manual entry errors, corrupt databases, etc. Meanwhile, companies are amassing huge volumes of data that are growing even faster with the addition of streaming IoT data. In many cases, data is stored without regard for its cleanliness, which is one reason data scientists spend so much time cleaning data. While there are tools that help speed and simplify the process, data cleanliness tends to evade the minds of those who have not been taught to think critically about the data they intend to use. If the data isn’t clean, it isn’t reliable.

Even if the data is available, clean and reliable, it may be that its use is restricted in certain ways or in certain contexts. Personally identifiable information (PII) is a good example of this. It may be fine to use certain information in the aggregate as applied to a population but not an individual. Regulations, terms of service and corporate governance policies cover such information use. However, just because certain types of information can’t be used doesn’t mean that the same information cannot be inferred using data points that have been deemed acceptable to use for other purposes. For example, income level can be inferred from zip code, financial transactions and even the words used in social media posts. Typically, several data points are triangulated to infer a missing piece of data, which can raise ethical issues if not liability issues.

Sadly, the degree to which companies understand their own data can be questionable, let alone third-party data. Ergo, it’s bad practice to just grab data and use it without regard for its origin. “There’s very little or zero discussion of where data came from,” said Bjorke. “Developers are trained to program, so they understand less about data collection.”
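The cleanliness problem described earlier, multiple slightly different records for the same customer, is often attacked by normalizing fields before comparing. The following is a hypothetical Python sketch; the field names and sample data are invented for illustration:

```python
def normalize(record):
    """Canonicalize the fields that most often drift between systems."""
    return (
        record["name"].strip().lower(),
        record["email"].strip().lower(),
        # keep only digits, and only the last 10 (drops formatting and country code)
        "".join(ch for ch in record["phone"] if ch.isdigit())[-10:],
    )

def dedupe(records):
    """Collapse records whose normalized keys match, keeping the first seen."""
    seen = {}
    for rec in records:
        seen.setdefault(normalize(rec), rec)
    return list(seen.values())

crm = [
    {"name": "Ann Lee ", "email": "ann.lee@example.com",  "phone": "(555) 010-2000"},
    {"name": "ann lee",  "email": "Ann.Lee@Example.com ", "phone": "555-010-2000"},
    {"name": "Bo Chen",  "email": "bo.chen@example.com",  "phone": "555-010-3000"},
]
```

Real entity resolution also has to handle typos and conflicting field values, which is why fuzzy matching and survivorship rules exist; this sketch only shows why normalization is the first step.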
Getting started

Today’s developers are feeling the pressure to embed some sort of intelligence into their mobile apps, whether it’s for competitive reasons or because their companies want to provide new types of value to customers.

“Instead of hand-rolling code, developers should try to leverage the major platform providers such as IBM Watson, Google, etc. [because] they provide easy on-ramps, good economics and great features,” said KPMG’s Swaminathan. “Investing in skills that can ‘explain’ AI will also be important as there will likely be some regulatory scrutiny. Mobile app developers should be able to demonstrate that they not only understand how their AI works, but that they have ‘control’ over it.”

Meanwhile, development managers should accommodate AI development with specific steps in the development lifecycle such as training data curation, corpus management, and improvement and refinement of the training models, he said.

There are a lot of resources available now that provide solid overviews of AI, the different types of AI, data science and working with data. While most working professionals are unable to attend on-campus courses, there are many online offerings from vendors and highly reputable universities including MIT, many of which are available free of charge.

“Although better tools are lowering the barrier of entry, especially in mobile, developers should have knowledge about how the techniques used work at a high level, how to evaluate the results, how to tune and iterate, how to structure data, etc.,” said OutSystems’ Alegria. “Developers should also have the knowledge and experience in data manipulation [and] know how to best transform [the data] to leverage the AI algorithms. Additionally, developers should understand the end-to-end solution from data intake, transformation, algorithm setup, training and evaluation, and then how it all maps to a production scenario.”

One misconception is that if machine learning is simply applied to data, magic happens. Also, some underestimate the amount of training data required for machine learning. “Deploying AI models is still hard not from [the standpoint of] putting the model into production and using it in your app, but deploying all of the code and logic that ensure the data is transformed in the same way that the model saw during training time,” said Alegria. “This can be a bigger challenge in mobile because the stacks used in training and in production could be significantly different, and you might need to handle more than one implementation, such as iOS and Android, which is why doing things in the cloud is still a better option in a lot of cases.”

Of course, the best way to learn is through hands-on experience and observation. Like Agile development or technology pilots, it’s wise to start small and create something simple that tests the entire cycle, in this case, from data gathering to prediction in a production setting. OutSystems head of AI Antonio Alegria recommends developers focus on iteration by picking a single metric to improve. “Ensure you know what metric represents [AI] effectiveness and that you have a way to measure it. Also, ensure you are gathering the data on the AI element’s performance and that you set up your app to optimize getting at that data, such as through user feedback,” said Alegria. “Continuously measure the model performance across important segments and set up alerts when the performance degrades. Be sure to keep an eye on the failed cases by looking at the data manually to learn if there are some common patterns you’re missing.”

There’s more to AI than may be apparent to the uninitiated, in other words. While the fundamentals are not difficult to grasp and there are plenty of resources available for review, there’s also no substitute for hands-on experience. z
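Alegria’s advice to measure model performance across segments and alert when it degrades can be sketched as follows. This is an illustrative Python outline, not a production monitoring system; the segment names, feedback events and tolerance are assumptions made for the example:

```python
from collections import defaultdict

def segment_accuracy(events):
    """events: (segment, predicted, actual) tuples from production feedback."""
    hits, totals = defaultdict(int), defaultdict(int)
    for segment, predicted, actual in events:
        totals[segment] += 1
        hits[segment] += int(predicted == actual)
    return {s: hits[s] / totals[s] for s in totals}

def degraded_segments(accuracy, baseline, tolerance=0.05):
    """Flag segments whose accuracy fell more than `tolerance` below baseline."""
    return sorted(s for s, acc in accuracy.items()
                  if acc < baseline.get(s, 0.0) - tolerance)

# Invented user-feedback events and per-segment baselines.
feedback = [
    ("ios", "buy", "buy"), ("ios", "buy", "skip"), ("ios", "skip", "skip"),
    ("android", "buy", "skip"), ("android", "skip", "skip"),
]
baseline = {"ios": 0.70, "android": 0.90}
```

Flagged segments are exactly the “failed cases” Alegria suggests inspecting manually for common patterns.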
Migrating to Microsoft Azure
BY JEFFREY SCHWARTZ
One of the first things cloud architect Bill Zack did after moving from Connecticut to Nashville in 2013 was to form a Microsoft Azure user group. Launched with just four initial members, the Nashville Azure User Group has a membership well above 800 and growing. “It’s been exploding,” Zack says of the user group in his new hometown. “I’d like to take credit for it, but I think the growth of Azure had something to do with it,” he adds with a laugh.

The future of Azure was no laughing matter five years ago. At the time, Amazon Web Services (AWS) had a distinct lead in capabilities and enterprise customers over Microsoft and any other cloud infrastructure service provider. AWS is still the leading cloud provider, but as Microsoft invested billions in expanding the global footprint
of Azure and has belatedly though aggressively added infrastructure services and features that let customers migrate their virtual machines, it has narrowed the gap. Google has an equally extensive footprint and IBM and Oracle are also gaining ground. But Microsoft Azure has established itself as the clear No. 2. Azure isn’t a mere opportunity for Microsoft. The company has staked its future on the success of Azure coupled with its cloud-based Microsoft 365 and Office 365 management and productivity services. While that’s been evident for some time, the company has stepped up the urgency and focus over the past year. Driving Azure consumption is the top directive to Microsoft employees and partners. Every major offering and initiative at Microsoft is built on growing the usage of Azure. “Microsoft is becoming a cloud-first
company,” says Dmitri Tcherevik, chief technology officer of Progress Software. “Everything they do is about Azure and making Azure successful.”
Microsoft’s Azure Migration Focus

Making Azure successful, of course, requires enterprises of all sizes to migrate their traditional Microsoft software and workloads to the cloud — or at least a hybrid cloud-based architecture. While that’s not a trivial process, Microsoft is now doubling down on that priority by offering broad migration options with as little friction as possible. Microsoft stepped up its migration effort at last fall’s Ignite conference when it announced a coordinated set of programs across the board to offer migration options that will fit different business, infrastructure and application models and a wide range of risk factors.
The programs include collections of best practices, tools and specific partnerships with ISVs and hardware suppliers covering all phases of migration and ongoing management. Although not markedly different from what other major cloud providers emphasize, Microsoft’s migration resources are all focused on three key processes: assess, migrate and optimize.

“We look across those three big areas and make sure that we have a set of tools and capabilities built into Azure to help customers assess ahead of the migration, do the actual migration and then optimize after the migration,” says Corey Sanders, Microsoft’s corporate VP for Azure Compute. “On top of that we also have a large set of partners that can fold into that, each sometimes into multiple stages and sometimes into individual stages.”

As of mid-May, Microsoft referenced 17 solutions, though there are many others in the Azure Marketplace that fall under the migration and optimization umbrella, including various security tools. Microsoft also has partnerships with systems integrators and managed service suppliers.

Among its ISV partnerships, one example Sanders points to is Turbonomic, which is focused mostly on optimization: understanding the cost and the implications of running services in an on-premises environment and in an Azure environment, and offering a multi-cloud view. “It plays in very nicely to our overall approach, which is making sure we’re helping customers where they need the help. We have the services where they want the services, but we also partner with solutions where they may have other needs and they have other desires,” Sanders says.

Another example is Attunity, which provides data connectors and relational database and data warehouse migration software. In November, Attunity and Microsoft inked a partnership facilitating the migration of various databases — Oracle, Teradata, Netezza, DB2, MySQL and even Microsoft’s SQL Server — to Azure SQL. The utility, Attunity Replicate for Microsoft Migration, is offered free to those who complete certain types of migrations within a year. “Since we’ve launched that, we’ve had many hundreds of downloads and customer engagements,” says Itamar Ankorion, Attunity’s chief marketing officer. “We are very excited about the traction this offering has had in the Azure marketplace.”
Assessing Azure Readiness

Customers will often use these partner tools to enhance those offered by Microsoft, many of which are offered free to accelerate adoption. Looking to address the first phase of migration, assessment, Microsoft recently started offering its new Azure Migrate service. Azure Migrate is a free offering that provides an appliance-based, agentless discovery of enterprise on-premises datacenter infrastructure. The initial service can discover VMware-based virtualized Linux and Windows virtual machines, with support for Hyper-V VMs and physical servers slated to arrive imminently.

Microsoft also offers an agent-based discovery tool as an option with Azure Migrate that can provide views of interdependencies, allowing organizations to determine if the machines that host existing multi-tier applications are suited for Azure. The tool also can help determine the sizing of Azure VMs for the application and help determine cost, including potential discounts through Microsoft’s Hybrid Benefit program, which applies discounts available from enterprise Software Assurance licensing to Azure.

In the assessment phase, Microsoft’s Azure Migrate tool also helps determine migration priorities aligned with business objectives. The tool helps map the source and target environments and address the infrastructure and application dependencies. The Microsoft tools map out recommendations for which Azure resources to use and suggest what migration options are best for any given architecture of application servers.
Lift and Shift

Many get started by moving their virtual machines to Azure, using what is commonly referred to as the “lift and shift” approach. “We get a lot of customers that have a datacenter and maybe their equipment is reaching the end of life and they’re trying to decide whether to replace it or not,” Stratum’s Zack says. “That tends to push them toward a cloud solution.”

Microsoft’s own Azure Site Recovery (ASR) service is a common tool for migrating workloads to the public cloud, Zack says. Many organizations’ first use of Azure is for backup and disaster recovery. “I always joke that migration is always just failover and then you never come back,” Zack says.

Nick Colyer, solutions principal for cloud & DevOps at Chicago-based Ahead, agrees. “We use Azure Site Recovery a lot,” Colyer says, especially among those who don’t want to pay extra for migration tools. Azure Migrate also helps Colyer determine what workloads and VMs are suited for Azure and whether to use IaaS or get out of OS and infrastructure management by going directly to PaaS services.

There’s no distinct pattern for how customers make these decisions, he explains. Some will move incrementally; others will put a large percentage of workloads into production quickly, he says. Much of the decision depends on the maturity of an organization’s IT and cloud services management team and their appetite for risk. “In some cases, they’ll move slowly, so even when they say 20 percent they’ll probably move more over time,” Colyer says. “The other big thing that plays a bigger part is how well they understand their applications. If they have very good institutional knowledge and the people that built these things are still there and are able to move them, then there’s more appetite to do that.”

In addition to Azure Migrate and ASR, Microsoft recently
started rolling out its Azure Database Migration Service for moving relational data to the cloud, and Azure Data Box, an appliance that lets you ship large amounts of data to a Microsoft datacenter, typically useful for first-time migrations where there could be terabytes or more of data to move.

There are four paths to migration an organization can go down. The aforementioned lift-and-shift is the first. Microsoft and other cloud providers often refer to this as rehosting. This no-code migration option effectively moves applications and data without changing them, and comes with the least risk and most ease. It’s the fastest approach, but it is most suitable for an older application that doesn’t justify or require the addition of new capabilities. But the assessment phase may show that those workloads may not be conducive to the lift-and-shift approach, perhaps because running them in Azure will consume costly compute services, among other reasons.

Strangling the Monolith

If that’s the case, the second option, refactoring or repackaging, is an expeditious way to extend the application infrastructure without rewriting the application code. Applications that are refactored are typically brought into container services such as Kubernetes, Docker or Mesosphere. This approach is necessary for organizations who want to make sure their applications scale better on Azure, says Progress Software’s Tcherevik. “The answer to that is to start rewriting your monolithic applications into collections of microservices because these microservices can be deployed independently, they can be managed independently, scaled independently. So that is for developers a natural next step,” he says.

Proponents of moving to a microservices architecture often refer to this as “strangling the monolith.” Tcherevik is among them. “As the business processes change inside a company, the monolith needs to be updated,” he says. “Traditionally you would just go in and rewrite portions of that monolith, but now instead of rewriting it, you would implement new functionality or functionality that requires significant changes. You would integrate it with the monolith through an API gateway of sorts.”

In Azure, once the application logic is packaged into containers, they’ll work within the IaaS services as well as with managed PaaS offerings including the Azure SQL Database Managed Instance, Azure Database for PostgreSQL, Azure Database for MySQL and Azure Cosmos DB. Others, such as migration from Cassandra and MongoDB databases to Microsoft’s new Cosmos DB, the globally distributed multi-model database service in Azure, are in the pipeline. Those going with the PaaS approach who want to ensure multi-cloud support or be able to use common APIs will often go with platforms based on Cloud Foundry, which Azure and other cloud providers support, as well as OpenShift, supported by Red Hat. Microsoft, which has a development partnership with Red Hat, last month released the Red Hat OpenShift Container Platform, which is now in the Azure Marketplace.
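The gateway-based “strangling” Tcherevik describes can be sketched as a routing table: new microservices claim path prefixes, while everything else still reaches the monolith. An illustrative Python sketch (the route prefixes and service names are invented):

```python
# Routes already claimed by new microservices; everything else still
# hits the legacy monolith. Migration grows this table over time.
MIGRATED_PREFIXES = {
    "/orders": "orders-service",
    "/billing": "billing-service",
}

def route(path):
    """API-gateway-style dispatch that peels traffic off the monolith."""
    for prefix, service in MIGRATED_PREFIXES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return service
    return "legacy-monolith"
```

The appeal of the pattern is visible in the data structure: strangling the monolith is just adding entries to the table, with no change to the monolith itself.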
According to Microsoft, the refactoring migration option makes sense for those who have committed to a DevOps methodology and have applications that they don’t want to rewrite but need to scale in the cloud with minimal management of VMs.
Decompose and Rewrite

The third, and more ambitious, option is re-architecting applications, enabling customers to use capabilities in Azure such as autoscaling and dynamic reconfiguration. This involves having developers decompose the monolithic application code into microservices that can integrate with each other to create an app, but without the interdependencies, which lets organizations test, deploy, scale and manage them separately. This makes sense for business-critical functions where organizations want to leverage existing development efforts but anticipate the need to add functionality and scale, thereby requiring them to move to a continuous integration/continuous delivery (CI/CD) DevOps model.

The final and boldest approach to cloud migration is rebuilding to create cloud-native applications. Arguably this isn’t actually migration so much as building new greenfield apps that are based on event-driven functions and don’t need to specifically use or manage existing infrastructure. Driving this event-driven model is Microsoft’s serverless PaaS capability, known as Azure Functions. It provides highly available Azure Storage and can now tap Microsoft’s new Cosmos DB service. These are suited for bursty or transactional applications.

Whichever migration approach an organization takes, and many are likely to use multiple options depending on the use case, the final phase is optimization. That includes cost management using the Azure Cost Management service from Microsoft subsidiary Cloudyn, securing applications and data via the Azure Security Center, monitoring via the dashboards in the Azure Portal, and data alerts and protection with Azure Backup. z
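The four paths can be summarized as a rough decision heuristic. This Python sketch is illustrative only; a real assessment (for example, via Azure Migrate) weighs far more signals, such as cost, dependencies and licensing, and the attribute names here are invented:

```python
def migration_path(app):
    """Map rough application attributes to one of the four options:
    rehost, refactor, re-architect or rebuild."""
    if app.get("greenfield"):
        return "rebuild"        # cloud-native, event-driven from scratch
    if app.get("business_critical") and app.get("needs_scale"):
        return "re-architect"   # decompose into microservices, adopt CI/CD
    if app.get("needs_scale"):
        return "refactor"       # repackage into containers / managed PaaS
    return "rehost"             # lift and shift: least risk, least effort
```

The ordering encodes the article’s trade-off: each later branch buys more cloud capability at the cost of more engineering work.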
Talking Azure Migration with Corey Sanders, Microsoft corporate VP
BY JEFFREY SCHWARTZ

Microsoft has divided the process of moving on-premises workloads and applications to the cloud into three key steps: assess, migrate and optimize. Once the assessment of existing infrastructure, applications and dependencies is complete, there are four general options for undergoing an application migration to Azure. Those are: rehosting or lift and shift, refactoring with containers, re-architecting for PaaS, and starting from scratch by rebuilding apps (or creating new ones) with a fully serverless PaaS environment. In addition to a number of new tools offered by Microsoft, the company has quite a few partnerships for those that have specific requirements. In an interview with SD Times, Corey Sanders, Microsoft’s corporate VP for Azure Compute, discussed these migration options.
Do you find the majority of organizations that are making the move going with the lift and shift approach?

A majority of organizations have at least some portion of their infrastructure doing lift and shift or rehosting. But increasing amounts are including refactoring and containerization as part of that stage. I see a significant number of customers coming in with some portions being lift and shift, some portion being refactored and containerized to put those pieces together in the cloud to get the benefit in Azure. Both are fantastic managed PaaS services that make sure that the minimal amount of changes are required to take advantage of the Agile and cost benefit capabilities of Azure.

Is there a leaning towards IaaS or PaaS these days in terms of the migration?

It certainly leans toward infrastructure-as-a-service. A majority of the migrations on the compute side lean towards infrastructure-as-a-service. More and more of the data migration is moving towards more managed services like SQL Managed Instances or SQL Database, with the combination of the PaaS services with those compute-based infrastructure services.

When organizations do a lift-and-shift approach, do they quickly find that perhaps they should have done a greater percentage of the move via refactoring, or is it more common to just want to get it out there and get the ball rolling?

I find most customers are getting into the cloud and taking advantage of agile infrastructure, which is a strong benefit unto itself. You can save up to 60 percent, which is a pretty exciting opportunity for a lot of customers. Having said that, as soon as lift-and-shift is underway, it’s a very quick conversation: ‘How am I modernizing now by bringing containers into my service that is taking advantage of serverless solutions?’ This is where that combination of really strong infrastructure as a service with great PaaS services, both on the data and compute side, makes Azure such an appealing choice for a lot of these large enterprises.

Do you find that organizations that are doing this are shifting their applications or .NET applications to .NET Core, Cloud Foundry or Kubernetes?

We see more customers target modernization and target some of the compute-based PaaS platforms. We definitely feel a lot of excitement around all of the many open-source platforms, including certainly Kubernetes and the Azure Kubernetes Service offered on Azure. Additionally, we recently announced the first managed OpenShift in the public cloud, but we’re also absolutely very excited to see customers moving their Java applications onto Cloud Foundry and even onto Docker Enterprise as well. We see the full range of platforms being used by customers as they look to modernizing their solutions with containers, which is typically a step after that first lift-and-shift.

You have also worked closely with Ansible, Puppet and Chef. While those are DevOps orchestration platforms, should they be considered in the migration process?
Typically, customers deploying Puppet and Chef are moving toward more of a DevOps approach. They’re looking for the ability to manage their infrastructure in a simpler and more automated
way. I see many customers doing that on premises, prior to migration, but most are doing that post-migration. From my experience it’s not necessarily a part or a key aspect of the migration process. I have found the migration to public cloud to be a distinct decision point from moving workloads to more of a DevOps and automated process, which can happen either on premises or in the cloud.
Is it recommended that organizations think about how they want to approach DevOps before they make the move, or is that something they really can do after moving some applications to the cloud?

One of the biggest benefits that customers can get from deploying in Azure, whether it be deploying on infrastructure or deploying on a data PaaS service, is the ease of management and ease of automation with both the built-in tool sets but then also with many of the capabilities, either from partners like Chef, Puppet or container-based solutions, or even serverless platforms sitting on top. It’s incredibly easy to automate and manage infrastructure in a much simpler, faster and much more cost-effective way. I strongly recommend that customers who make their lift-and-shift into Azure consider the next step in their journey, which frequently is a step around automation and a step around DevOps.

For organizations that are a little more conservative or doing this reluctantly — they know they have to do it — what is the best type of workload to start off with for the migration process?

I think the best applications for customers to start with as the first deployment option, whether they’re on Windows or Linux, whether it’s SQL or Postgres, really comes down to the expectations around management and operations of that workload. For workloads that are a little bit more flexible in the way that customers manage and deploy, they will be able to embrace the process of migration but also be able to better take advantage of some of the agility offered by the cloud, whether it be shutting things down or scaling things down on the weekend or after hours. I always recommend this first, and certainly the rest as they come along have varying degrees of value that could be added from that ability, from that agility, from those managed services, from that containerization. But certainly, the easiest first ones are the ones that cleanly lift and shift and get some benefit from flexible infrastructure.

Are you finding that some are bypassing infrastructure-as-a-service, lift-and-shift migrations altogether and just going right to PaaS?

Absolutely. There are customers who will do it as part of the migration step — containerize and deploy as a Kubernetes service, or even go entirely to platform-as-a-service and go straight to managed SQL or straight to managed PostgreSQL or Cosmos DB with the Azure App Service sitting on top as more of a rebuild or refactor of their application. It’s really tied to the business requirements of the customers, and we have the right tools and the right support models and the right people to go help customers no matter which step they want to skip and what their end date is for those applications.

Can an organization make the leap from on-premises to a Data Lake-type environment, or is that a relatively ambitious step to take?
It depends very much on the application, with services like Azure Data Box being able to take entire Data Lakes sitting on premises and effectively ship them into Azure, and then build analytics and AI services on top using our best-in-class services for that analytics layer. With the data coming in through Data Box, it creates quite a great opportunity for specific workloads and specific customers.

How much uptake so far have you seen for Azure Stack in the six months it’s been out? Should that be considered part of a migration strategy or option?
Azure Stack is a part of the overall continuum and it is a part of the overall story for customers as they make this migration. Certainly, Azure Stack has
the cloud experiences, the cloud optimizations, but running in edge locations, enabling customers to still benefit from the Azure programming model and the Azure data experiences. Still being within close proximity to their end customer or close proximity to their end devices is a big part of the overall story, where migration to the cloud can be migration to an edge-based Azure Stack, and we see many customers looking at that as a part of their overall hybrid and edge strategy.

In addition to migrating from on-premises, obviously you have customers that are migrating from other clouds or even from the classic Azure service [pre-Azure Resource Manager]. Are there enough tools out there for that as well?
Certainly, on the partner side, quite a bit of the tooling is enabled to support migration from other clouds into Azure and even our services do have the ability to enable some aspect of migration from other clouds as well, including things like Azure Site Recovery. And some of the capabilities as part of that overall migration service are enabled as part of other cloud-based migrations. But most of that support engagement is offered as partner-based services and solutions. What new migration tools do you anticipate releasing in the coming year?
We have a lot of planned investment coming from the Azure team, both across compute and across data in the migration space. It’s an area where we’re seeing a lot of customer interest and a lot of customer passion. And that investment is going to come in two forms. It’s going to come in improving, enhancing and expanding our capabilities on the Azure side to offer these services directly to customers. But it’s also going to come with better and improved integration opportunities for partners to be able to deliver upon their services and their specialized capabilities as well. I expect the combination of those two will dramatically expand over the next year the capabilities for customers to be able to do [migration] very easily and fast. z
Low code development: It’s not just for business users Software developers can move quickly from mundane to masterful
BY CHRISTINA CARDOZA

Software developers have the wrong idea about low code development. Because of low code's visual, drag-and-drop programming approach, it has often been associated with business users or citizen developers. That reputation has many developers skeptical of bringing it into their workflows, but with the pressure to deliver faster and go digital, they may have no choice.

"Low-code platforms are rapidly becoming the standard in many organizations," said Mike Hughes, director of product marketing for OutSystems. "This is a game changer for organizations looking to become more agile to evolving business needs."

Digital disruption is changing the entire pace of innovation inside an organization, so organizations need to adopt solutions that enable them to deliver much more quickly while remaining responsive to business operations, according to Malcolm Ross, vice president of product at Appian.

Instead of thinking that low code is beneath them, developers should see it as an opportunity to expand their ability to deliver greater value. Too often, developers are focused on things like deployment scripts, environments, and underlying languages and platforms instead of actually building and delivering the application, Hughes explained. "Ultimately the job of any developer is to deliver something of value, but what often happens is that the bulk of the developer's time is spent on coding activities, script development, troubleshooting builds — things that don't directly contribute value," he said.

With developers overworked and under-resourced, having a low code solution in their toolbox could help them keep pace with competitors and focus on more important matters, according to Burley Kawasaki, executive vice president of products for Kony.

"Traditional app developers are realizing the largest gains from low code because it is easier. There are often fewer bugs to fix because they are working with fewer modules that have already been tested, so they realize a lot of the benefits in terms of productivity and improving the development stream," added Amanda Myers, director of product marketing for Kony.
What does your company or tool bring to the low code process?

Mike Hughes, director of product marketing for OutSystems
The flexibility of OutSystems is unmatched in the industry. Using our visual development environment, organizations of all sizes accelerate their development timelines. Whether it's to bring legacy systems up to date or to develop new full-stack cross-platform mobile apps, our low-code platform and single-click deployment make it easy for our customers to transform business. Low-code is a perfect choice for any organization with ambitious digital goals. Not all low-code platforms are ideal for all use cases, though. Some solutions billed as low-code are actually "no-code." Most may not require a working knowledge of programming languages, but they are limited, and eventually the "citizen developers" who use them will hit a wall with what they can actually do. On the other end are "low-code" solutions that really do require professional developers, if not for the actual development, then for deployment and maintenance. For any of these solutions to work where they aren't specifically designed to, the solution's core functionality has to change, or additional services must be purchased to make it fit. Either way, it's additional cost and additional complexity. OutSystems sits in the low-code sweet spot, offering the simplicity of visual development that citizen developers need while being powerful enough that professional developers can build on our software and customize it however they want.

Malcolm Ross, vice president of product at Appian
Appian is the ideal platform to unify and orchestrate the new world of work that includes emerging technologies like artificial intelligence, robotic process automation, and blockchain. Appian makes it easy for companies to integrate these complex technologies in the name of customer experience and operational excellence. Our technology can be used to write new apps from scratch, or to connect and enhance legacy applications already in place. Appian provides customers with a professional services arm, expertise and positioning across industries, alignment of technology within industries, and strategic partnerships that help to execute the vision. Our software runs some of the most important processes at the world's best companies, like Barclays, Sprint, Merck, Aviva, and Dallas/Fort Worth International Airport.

Burley Kawasaki, executive vice president of products at Kony
Kony is the fastest growing cloud-based digital application and enterprise mobility solutions company, and a recognized industry leader among low code and mobile application development platform (MADP) providers. Kony helps organizations of all sizes drive business ingenuity by rapidly transforming ideas into innovative and secure omnichannel applications. Built on the industry's leading digital platform, Kony provides the most innovative and secure omnichannel applications, with exceptional user experience and app design. Kony's cross-platform, low-code solution also empowers organizations to develop and manage their own apps to better engage with their customers, partners and employees. By seamlessly leveraging and connecting apps to all types of data sources and information, Kony also enables organizations to transform their business processes and gain valuable insight. Kony was named the first-place winner in CTIA's MobITs Awards in the Mobile Applications, Development & Platforms category and included on the Inc. 500|5000 list of fastest growing private companies in America.

The power of low code

There are three main reasons developers should adopt low code development: speed, agility and adoption. With the constant pressure in business to keep transforming and keep up with the market, there is a gap between customer expectations and the ability to meet those expectations, according to Kony's Kawasaki. "Everyone wants to move faster, accelerate their development, and get a bit more done," he said. "Low code helps support that because it allows drag-and-drop and visual tools. It allows users to meet business needs with less time, reuse components, and quickly stitch together and build an app."

Low code takes away the manually intensive process, and also makes the process easier to understand in business terms, according to OutSystems' Hughes. "Team members come and go, and things change hands often. There is a lot of waste involved in those handoffs," he said. "With low code being visual, it makes it so much easier to see what is happening, and reduce the technical debt and cost of maintaining apps."

That speed results in the business' ability to move in an agile fashion. "Agility means organizations can respond at the speed of business," said Hughes. Not only can developers move faster, but they can get more done in less time.

Lastly, adoption enables the team to build what the business actually wants. "Rather than spending years building an app that maybe is not the right app because times change so quickly, you can deploy every week or two and really deliver the features people will actually use and care about," said Hughes.

Low code development enables all these benefits because it provides continued on page 46 >
A guide to low code solutions

• Alpha Software Corporation: Alpha Software offers the only unified mobile and web development and deployment platform with distinct "no-code" and "low-code" modes. The platform materially accelerates digital transformation by allowing line-of-business (LOB) professionals to work in parallel with IT developers to build the smartphone apps they need themselves, thereby significantly cutting the bottleneck traditionally associated with the development of mobile apps.

• AgilePoint: AgilePoint NX is a future-proof digital transformation platform. Its "build once and re-use many times" feature defines a new concept in application development. As an organization's business needs evolve during its digital transformation journey, the same critical business applications can adaptively evolve.

• Betty Blocks: 'How can we make it easier?' is the foundation of everything Betty Blocks does. Develop applications rapidly and intuitively through visual modeling; 100 percent in the cloud, multi-device and cross-platform, designed with the flexible UI builder. And all that without any code.

• Caspio: Caspio is embraced by business developers for its ease of use, speed-to-market and enterprise-grade features. Using visual point-and-click tools, business developers can execute the entire application design, development and deployment process, allowing them to rapidly deliver a minimum viable product and continue iterating as the market requires. The platform also offers built-in security controls, governance and compliance — such as HIPAA, FERPA, FIPS 140-2, and the EU General Data Protection Regulation.

• Dell Boomi: Dell Boomi is a provider of cloud integration and workflow automation software that lets organizations connect everything and engage everywhere across any channel, device or platform using Dell Boomi's industry-leading low-code iPaaS platform. The Boomi unified platform includes Boomi Flow, low-code workflow automation with cloud-native integration for building and deploying simple and sophisticated workflows to efficiently drive business.
FEATURED PROVIDERS

• Mendix: Mendix is easy, fast and intuitive with the use of visual models, enabling a wide continuum of people, from developers to business analysts, to build robust applications without the need for code. With model-driven development, business leaders and IT have a shared language to build applications rapidly.

• Microsoft: PowerApps features a drag-and-drop, citizen developer-focused solution designed to build apps with the Microsoft Common Data Service. PowerApps can be used with Microsoft Flow, the company's automated workflow solution, for data integration. Build apps fast with a point-and-click approach to app design. Easily connect your app to data and use Excel-like expressions to add logic. Publish your app to the web, iOS, Android, and Windows 10.

• Nintex: Nintex helps enterprises automate, orchestrate, and optimize business processes. With the company's intelligent process automation (IPA) solutions, IT pros and line-of-business employees rely on the Nintex Platform to turn their manual or continued on page 46 >
< continued from page 42

tighter collaboration between the business and IT operations, according to Appian's Ross. "The hardest part of building enterprise applications is the communication between the business and IT. Business users know what they want, but you can't just sit down with a business user, show them Java code and ask if this is what they wanted. You have to build that entire app or build a mockup of that app to really communicate it," he said. With low code's visual design techniques, non-technical business users can easily understand what IT is building and participate in the development process, Ross explained.

Low code also takes the focus off of frameworks and puts it on business value. According to Appian's Ross, developers normally tie their identity to the programming language they use. For instance, they may say they are a Java developer or a .NET developer. Instead of being hung up on the technology and frameworks, low code puts the focus on solving business challenges. "If we are able to get people to focus on the business challenge, then that is a win-win in terms of the business having developers who understand what the goals are and in terms of business value, and then for the developers themselves it is a different way of thinking about things. You are focused on value rather than how am I going to get these three frameworks to work together without actually developing anything," said Hughes.

The citizen developer

While the citizen developer movement is getting a bit "overhyped," Kony's Kawasaki says developers should not let that diminish the importance of citizen developers on their teams. Citizen developers can be a crucial component of the broader development team. When a business person has a vision or an idea, a citizen developer can help create a prototype that the business can start testing with their stakeholders. "There is tremendous business value in this; rather than just getting IT involved to create a prototype, you can start testing against your business and users," said Kawasaki. Low code really enables the team to get something rolled out quickly, understand it and test it to see if it succeeds or fails, Myers added. After the business gets a clear view of what the requirements or design need to look like, they can communicate that back to the development team, which may include more professional developers, and start actually building the real application, Kawasaki explained.

Appian's Ross believes citizen developers are just as valuable as professional developers because they still have a dedicated role in building the overall solution. "At the end of the day, you are building a software application, and even though you have a citizen developer, it doesn't obscure the need to do things like code reviews and make sure the app is designed properly, especially if it is going to impact a large part of the operation," he said. Although a citizen developer doesn't necessarily need 10 years of programming experience, they should understand things such as data, variables and how a piece of data is contained as it goes through a transaction.

According to OutSystems' Hughes, less technical users are a critical part of the solution, but IT still needs to put the right guardrails in place before giving them the ability to create apps. "Low code made it much easier in some respects to build apps, but it also means people can do a lot more damage a lot more quickly," he said. IT should control what users can build and have access to, as well as protect data that should not be accessed. "With OutSystems we have many examples of citizen developers — often business analysts — solving interesting business challenges by building new solutions, but often these are smaller in scope," said Hughes. "If an organization is developing a replacement for a mission-critical legacy platform, this is beyond the skills of citizen developers, but with a team of skilled low-code developers, projects that might have taken years can be completed in months."
Guest View BY KEVIN STEIGERWALD

Faster doesn't mean sacrificing quality

Kevin Steigerwald is VP of Product Management at Jama Software.
It's tempting to opt for speed and test the limits of your organization by barreling through development as fast as possible. What could go wrong? The cost of this approach isn't just the quality of your final product, but your company's overall health and morale. The good news is that moving fast today doesn't necessitate lowering your development standards. While there's no silver bullet that will magically shield you from the pitfalls of tighter development schedules and faster iterations, there are steps you can take to create a process that will withstand the growing constraints.
Better requirements

Requirements are the backbone of successful products. If they aren't defined, communicated, documented, and followed properly, you won't end up with the right product. Requirements management software can alleviate a lot of struggles, but it won't take care of the nagging problems inherent in human communication. Specifically, teams need to learn how to strike the balance between just enough detail and too much when relaying requirement information. Requirements with too little detail leave room for ambiguity, which opens the door to misinterpretation, confusion, and ultimately building the wrong thing. Provide excessive detail and product and engineering teams may feel creatively stifled and unable to innovate. Instead, requirements should keep teams aligned on "what" is being asked for, as well as "why." This leaves the engineering experts free to solve "how" to implement it. Additionally, creating a glossary of language and common terminology can help ensure everyone is on the same page.
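The what/why/how split above can be made concrete in a data model. This is a hypothetical sketch, not any particular requirements tool's schema; all names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """Captures the 'what' and the 'why'; the 'how' is left to engineering."""
    what: str                                     # the capability being asked for
    why: str                                      # the business rationale behind it
    glossary: dict = field(default_factory=dict)  # shared terminology for the team

    def is_ambiguous(self) -> bool:
        # Too little detail invites misinterpretation: flag any
        # requirement whose intent or rationale is missing.
        return not self.what.strip() or not self.why.strip()

req = Requirement(
    what="Export monthly invoices as PDF",
    why="Finance needs archival copies for audits",
    glossary={"invoice": "a finalized, numbered billing record"},
)
print(req.is_ambiguous())  # False: both the what and the why are stated
```

Note that the record deliberately has no "how" field: the implementation approach stays with the engineering team, as the column recommends.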
Aligning teams

Historically, hardware and software teams have worked separately. In many cases, companies creating traditional hardware products — whether refrigerators, coffee makers, or watches — might not even have a software team in house, and vice versa. And even if a business does have both, there's usually not a ton of collaboration taking place. Successful connected products demand that hardware and software teams work together, and companies need to devise a plan to bring the two together. Any solid plan must include a path toward unification, with space for transparency, open communication, and accountability.

You also no longer need to guess about whether your plan is working. Monitoring the progress of both teams via analytic insights should tell you in real time. The key is to share your goals with your teams and come up with measures of success. Be mindful of vanity metrics that can easily be manipulated, and balance them with countermetrics, which measure events or goals that are directly impacted by the success or failure of your other metrics.
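The pairing of a vanity metric with a countermetric can be sketched in a few lines. The metric names below are hypothetical examples, not from the column:

```python
def balanced_view(signups: int, active_users: int) -> dict:
    """Pair a vanity metric (raw signups) with a countermetric (activation rate).

    Raw signups can be inflated by a promotion; the activation rate
    reveals whether those signups turn into real usage.
    """
    activation_rate = active_users / signups if signups else 0.0
    return {
        "signups": signups,                            # vanity metric: easy to inflate
        "activation_rate": round(activation_rate, 2),  # countermetric: hard to game
    }

# A promotion doubles signups, but activation collapses; the
# countermetric exposes what the vanity metric hides.
print(balanced_view(1000, 600))   # {'signups': 1000, 'activation_rate': 0.6}
print(balanced_view(2000, 620))   # {'signups': 2000, 'activation_rate': 0.31}
```

Reporting the two numbers together, rather than the vanity metric alone, is the point: a team cannot claim success on signups while the countermetric is falling.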
Take advantage of new technologies

Development teams can only move so fast when they're weighed down by emailing revised documents back and forth, waiting on reviews and approvals, and then counting on the one person in the organization who understands how to operate their legacy platform to implement those changes. Modern product development platforms can take the burn out of a lot of these processes, making teams more nimble and iterative. When a gap or change emerges during development, stakeholders can be immediately notified of the impact, reducing blind spots that could otherwise spiral into critical defects. Today's platforms have also been designed to be accessible enough that anyone in the organization can pick them up quickly, demystifying the process and removing the gatekeepers.
Don't delay

Whatever route your organization chooses, procrastinating isn't an option anymore. Companies that aren't used to this speed of development will wake up one day and realize everything has changed. Arriving seemingly out of nowhere, their new competitors will move quicker than those that have been in the market for decades. While it may take some time to scope and adjust to new processes and technologies, the earlier you start, the better. Things aren't going back to the way they were, and tomorrow's game-changing products aren't going to wait around for us to develop them.
Analyst View BY ROB ENDERLE

Three 'killer apps' for next-gen mobile

Often it isn't one thing that brings about a technology revolution but a blending of elements. 5G, mixed reality, and AI are about to not only revolutionize human/machine interaction but blur the lines between what constitutes imagination and reality. Let's talk about this coming technological revolution and its impact on our world.
5G

The next step in wireless technology promises to give us wired network speeds and far better performance than we have ever seen, allowing us to shift more processing load to the cloud and once again redefine the balance between battery life and performance. These unprecedented performance levels could boost the potential for mobile thin clients and further blur the lines between PCs, tablets, and smartphones. The result should be the ability to deliver more intelligence and vastly higher application resolution without significantly increasing edge processing power, creating the dual benefit of small devices and long battery life without sacrificing that increased performance.
Mixed Reality

A key area enhanced by 5G will be mixed reality and the ability to blend more seamlessly what is real with what is virtual at the edge. This could mean a market pivot from handheld to head-mounted technology, as recently demonstrated in part by the Lenovo Mirage Solo, but with full smartphone capability and better design. Headsets will likely evolve into something far closer to the Magic Leap mixed-reality solution, making them far more acceptable for long-term use. In the interim, it will likely enhance existing efforts like Google Daydream substantially. By blending telephony and other forms of communication into these head-mounted devices, smartphones as we know them should start to become obsolete. Properly implemented, heads-up displays of information should reduce the kinds of problems we've had with smartphones and distracted walking or driving.

By blending realities far more aggressively, the technology could massively change a wide variety of industries, from outdoor advertising to exterior finishes and clothing. You may no longer need to paint or dress differently, but instead either geolocate a digital design or provide an RFID tag to allow the mixed-reality glasses to render your preferences. For those without the technology, the world would, I expect, devolve into a largely indistinct gray, but with the glasses the finishes would be limited only by the imagination.
Rob Enderle is a principal analyst at the Enderle Group.
AI

But once we add AI, we get the potential for rendered digital assistants, or familiars, taking the form of humans, animals, robots, or even magical creatures. Always with you, but visible to others only if you allow it, they would see what you see and be able to answer questions and even hold conversations with you (Amazon's Echo has a feature like this in trial today). Eventually they could anticipate your needs and act in your stead. For instance, knowing that you like to order a case of any new wine you discover at a restaurant, this digital assistant might automatically set up a case order for your approval after you express a liking for one. These virtual assistants could become companions, advisors, protectors (able to call for help automatically if you needed it), and even act rudely, while visible only to you, toward people you don't like. I can imagine your rendered virtual assistant double-handing rude gestures at your boss while he or she is dressing you down.

In concert with the emergence of a personal robotics market, the ability to overlay your personal robot with your rendered digital assistant will allow the assistant to better interact with the real world. In effect, your digital assistant will not only follow you around and look and behave the same, but be able to use the robot's ability to pick things up and move them. And because the visible representation of that robot is virtual, you could change its appearance at your leisure. I'm looking forward to being followed around by the robot from Lost in Space (Danger, Will Robinson!).
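The anticipate-and-act pattern described here, where the assistant notices an expressed preference and queues an action for approval rather than acting unilaterally, can be sketched as a trivial rule. Everything below (the trigger phrase, the action name) is hypothetical:

```python
def propose_action(utterance: str, pending: list) -> list:
    """If the user expresses a liking, queue a purchase proposal.

    The assistant never acts on its own: proposals sit in `pending`,
    unapproved, until the user explicitly signs off.
    """
    if "i like this wine" in utterance.lower():
        pending.append({"action": "order_case_of_wine", "approved": False})
    return pending

queue = propose_action("Wow, I like this wine!", [])
print(queue)  # [{'action': 'order_case_of_wine', 'approved': False}]
```

A real assistant would learn triggers from behavior rather than match a fixed phrase, but the approval gate is the design point: anticipation without autonomy.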
Industry Watch BY DAVID RUBINSTEIN

Bringing the VA into the 21st century

David Rubinstein is editor-in-chief of SD Times.
The U.S. Department of Veterans Affairs is a mess. From a lack of leadership to poor facilities management at local hospitals and clinics, the department with the second-largest budget in government cannot get its act together. Veterans complain about the length of time it takes to get appointments, the amount of bureaucracy that must be navigated to become eligible for treatments, falsified records and even preventable deaths. On Long Island, New York, where SD Times has its editorial offices, a recent poll of employees at the Northport VA Hospital revealed poor morale brought on by equipment breakdowns, staffing that doesn't match up with need, and hazardous and filthy conditions. One employee went so far as to comment, "The infrastructure of this center is like a terminal patient on a ventilator," as reported in late April by Newsday, the local newspaper here.

But this column is not about government. It's about technology, and how Big Data solutions provider Hortonworks is working with the VA to turn things around. "The VA is a big organization, but not unlike many other organizations that face challenges... modernization from an infrastructure perspective and cultural change. Looking ahead versus where they've been and how they've been," said Shaun Bierweiler, vice president of U.S. public sector for Hortonworks. "The VA has challenges of being a medical institution coupled with being a government entity. Being able to support security, the scalability and sustainability, comes with added scrutiny. When you look at the IRS ... and their system going down due in part to the age and capabilities of some of their systems there, the VA recognizes they have a modernization need. They're really looking at how can they support the capabilities they have today but also look forward to electronic health care, to supporting the IoTs and the sensors, to doing more predictive maintenance and a single view of patients throughout their career.
There’s a lot of exciting things, a lot of potential there … What we just described is a significant undertaking and needs to be done in a programmatic and appropriate way.” Introducing sensors and artificial intelligence within the physical infrastructure — both in HVAC
as well as medical devices — can help the organization predict when equipment needs servicing, or is approaching end-of-life. Moving to digital records creation, storage and collaboration can ensure patients get the care and medication they need, and reduce the number of errors in documentation. The ultimate goal is to ensure every veteran receives the timely care, treatment and medication they deserve after serving this country so admirably. But as Bierweiler said, this is no small undertaking. There are medical and governmental processes and regulations, which can sometimes appear to be at odds, that need to be satisfied. "The people at each one of those sides might have different expectations and different requirements for what they're looking at," he noted. "So that's another benefit to ... having that open architecture and being flexible enough to support the various pieces. Visualization is one piece; it could be a search component, an algorithm, a database. That flexibility is huge when you think about the vast reach that something like the VA has to support."

Hortonworks is helping the VA understand open-source technologies that can help with its data problem. "When you think about all the interfaces and touch points — the websites, the mobile applications, as well as the medical tools in the centers themselves and shared amongst the physicians — at the root of that, you need to have a data framework and a data platform that can support all of those touch points," Bierweiler explained. "That's where the infrastructure is key.
That's where having the right partnerships, the right architecture, the right open philosophy going in, is essential to ensure that five years down the road, when there's another type of technology that comes along that all consumers are going to expect, they're in a position to adopt it without having to rip and replace and go back and redo years of work." And, as Bierweiler pointed out, data is fundamentally involved in almost every transaction a customer experiences on a daily basis.

Perhaps Robert Wilkie, tapped by President Donald Trump late last month to run the agency after his first choice, his personal physician Ronnie Jackson, withdrew his name from consideration amid allegations of misconduct and wrongdoing, can right the ship.