FC_SDT011.qxp_Layout 1 4/23/18 2:41 PM Page 1
MAY 2018 • VOL. 2, ISSUE 11 • $9.95 • www.sdtimes.com
To improve farming worldwide, this leader trusts Rogue Wave When an agricultural pioneer wanted to be on the forefront of innovation, it turned to Rogue Wave. Our advanced software and API management solutions empower farmers to share real-time data. That helps improve decisions and grow returns. Want to reap the benefits of smart thinking? There’s more to Rogue Wave than you think.
WE’VE GOT THE LAND COVERED FROM A TO ZEND < WEB AND MOBILE APP DEVELOPMENT >
SECURE COMPONENTS < PLATFORM INDEPENDENT BUILDING BLOCKS >
< END-TO-END ENTERPRISE OPEN SOURCE >
< APPSEC AND COMPLIANCE STATIC CODE ANALYSIS >
< JAVA DEVELOPMENT PRODUCTIVITY >
< API MANAGEMENT >
VOLUME 2, ISSUE 11 • MAY 2018
MICROSOFT BUILD SHOWCASE
How sports performance analytics can help technology organizations win the game
Data Connectivity: An unrecognized, multi-issue problem
IFTTT: Bringing new meaning to applets
Businesses are failing to reach Big Data maturity, report finds
Move fast and fix things: It’s time for an API audit
JFrog Xray 2.0 examines the CI/CD pipeline
Making the web available to all
Using hybrid mobile app development to your advantage
GUEST VIEW by Eric Naiburg “Done” should include security
ANALYST VIEW by Rob Enderle Smartphone sales mirror PC declines
INDUSTRY WATCH by David Rubinstein In software, words matter
Testing strives to keep pace with development
Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2018 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at email@example.com.
www.sdtimes.com
EDITORIAL
EDITOR-IN-CHIEF David Rubinstein firstname.lastname@example.org
NEWS EDITOR Christina Cardoza email@example.com
SOCIAL MEDIA AND ONLINE EDITOR Jenna Sargent firstname.lastname@example.org
INTERNS Ian Schafer email@example.com, Matt Santamaria firstname.lastname@example.org
ART DIRECTOR Mara Leonardi email@example.com
CONTRIBUTING WRITERS Alyson Behr, Jacqueline Emigh, Lisa Morgan, Frank J. Ohlhorst, Jeffrey Schwartz
CONTRIBUTING ANALYSTS Cambashi, Enderle Group, Gartner, IDC, Ovum
SUBSCRIPTIONS firstname.lastname@example.org
ADVERTISING TRAFFIC Mara Leonardi email@example.com
LIST SERVICES Shauna Koehler firstname.lastname@example.org
REPRINTS email@example.com
ACCOUNTING firstname.lastname@example.org
PUBLISHER David Lyman 978-465-2351 email@example.com
PRESIDENT & CEO David Lyman
CHIEF OPERATING OFFICER David Rubinstein
D2 EMERGE LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803, www.d2emerge.com

Instantly Search Terabytes
dtSearch's document filters support: + popular file types + emails with multilevel attachments + a wide variety of databases + web data
+ efficient multithreaded search + forensics options like credit card search
Developers: + .NET, C++ and Java; ask about the new cross-platform .NET Standard SDK with Xamarin and .NET Core
Visit dtSearch.com for developer evaluations
The Smart Choice for Text Retrieval® since 1991
NEWS WATCH

Mozilla's WebAssembly Studio IDE enters beta
Mozilla wants to improve WebAssembly for developers with the beta release of its WebAssembly Studio. WebAssembly Studio is an online IDE for learning and teaching WebAssembly that Mozilla began working on last December. Since then, the company has been working to make WebAssembly more accessible to the programming community. WebAssembly Studio will have basic C, C++ and Rust support. Mozilla explained that most of the compilation services available today run server-side, but the company hopes to do more of the work in the client in the future. WebAssembly Studio offers the ability to edit WebAssembly binary modules (.wasm) and text files (.wat). Other features include context menus for common tasks, such as validating WebAssembly modules or running Binaryen optimization passes over them. The studio's Binary Explorer visualizes how the code is represented at the binary level. It also offers the ability to embed interactive WebAssembly Studio projects thanks to embed.ly.
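Tools like the studio's Binary Explorer operate directly on the .wasm binary format, which opens with a fixed eight-byte preamble: the magic bytes `\0asm` followed by a 32-bit version number. A minimal Python sketch (not part of WebAssembly Studio, just an illustration of the format) of checking that preamble:

```python
# The first eight bytes of every WebAssembly binary module: the magic
# bytes "\0asm" followed by a 32-bit little-endian version (currently 1).
WASM_MAGIC = b"\x00asm"
WASM_VERSION_1 = b"\x01\x00\x00\x00"

def looks_like_wasm(data: bytes) -> bool:
    """Cheap structural check: does this buffer start like a .wasm module?"""
    return data[:4] == WASM_MAGIC

empty_module = WASM_MAGIC + WASM_VERSION_1  # smallest valid module preamble
print(looks_like_wasm(empty_module))   # True
print(looks_like_wasm(b"\x7fELF"))     # False
```

A real validator (like the studio's) goes on to parse the sections that follow the preamble; this only shows the entry point of that process.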
Open-source version of webOS launches
LG Electronics is moving webOS beyond TVs with the release of webOS Open Source Edition. WebOS is a multitasking operating system that was designed for smart devices and smart TVs. Before coming to LG, webOS was launched by Palm in 2009. It was acquired by HP in 2010, and then licensed to LG in 2013. Since then, the company has been using the technology for its smart TVs and refrigerators. "WebOS has come a long way since then and is now a mature and stable platform ready to move beyond TVs to join the very exclusive group of operating systems that have been successfully commercialized at such a mass level. As we move from an app-based environment to a web-based one, we believe the true potential of webOS has yet to be seen," said I.P. Park, chief technology officer at LG Electronics.

Oracle announces polyglot virtual machine, GraalVM
Oracle has set out on a mission to create a universal virtual machine that can support multiple languages while providing consistent performance, tooling and configuration. The company announced GraalVM 1.0, a virtual machine designed to accomplish that mission with high performance and interoperability with no overhead when building polyglot apps. GraalVM allows objects and arrays to be used by foreign languages without having to convert them into different languages first. For example, this tool would allow Node.js code to access the functionality of a Java library, or to call a Python routine from within Java. With this flexibility, programmers will be
Apple wants developers to start transitioning to 64-bit apps
Apple developers should start transitioning their applications to support 64-bit. With the release of macOS High Sierra 10.13.4, Apple will start warning users if they launch an app that doesn't have 64-bit support. According to the company, if a user sees this alert, it means that the application is not optimized for their system. All new apps submitted to the Mac App Store must have 64-bit support, and all app updates and existing apps must support it by June 2018. "If you distribute your apps outside the Mac App Store, we highly recommend distributing 64-bit binaries to make sure your users can continue to run your apps on future versions of macOS," Apple wrote in a statement. Apple says 64-bit apps can dramatically improve a system's memory use and performance. "State-of-the-art technology is what makes a Mac a Mac. All modern Macs include powerful 64-bit processors, and macOS runs advanced 64-bit apps," the company wrote. "The technologies that define today's Mac experience — such as Metal graphics acceleration — work only with 64-bit apps. To ensure that the apps you purchase are as advanced as the Mac you run them on, all future Mac software will eventually be required to be 64-bit."
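For developers shipping outside the Mac App Store, one quick way to see whether a binary is 64-bit is to inspect its Mach-O header. The sketch below is a hypothetical checker, not Apple tooling (Apple's `file` and `lipo -info` commands are the supported routes); it looks for the 64-bit Mach-O magic number `0xfeedfacf`:

```python
import struct

MH_MAGIC_64 = 0xFEEDFACF  # 64-bit Mach-O magic number

def is_64bit_macho(header: bytes) -> bool:
    """True if the buffer opens with the little-endian 64-bit Mach-O magic.

    On x86_64 Macs the magic is stored little-endian, so the on-disk
    bytes are cf fa ed fe. (Universal/fat binaries use a different magic
    and would need their per-architecture slices inspected instead.)
    """
    if len(header) < 4:
        return False
    (magic,) = struct.unpack("<I", header[:4])
    return magic == MH_MAGIC_64

print(is_64bit_macho(b"\xcf\xfa\xed\xfe\x07\x00\x00\x01"))  # True
print(is_64bit_macho(b"\xca\xfe\xba\xbe"))                  # False
```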
New programming language for AI
DimensionalMechanics has announced a new programming language for artificial intelligence and deep learning development. The language, NML 2.0, is part of the company's NeoPulse Framework 2.0 and is designed to make AI more accessible to developers, programmers and data scientists. "With NeoPulse, our AI builds your AI," said Rajeev Dutt, CEO of DimensionalMechanics. "We've created a platform for AI development that lets developers of all skill levels rapidly answer data-driven questions using a cost-effective and repeatable approach. Developers can now create AI models in a fraction of the time and cost than was ever possible before." NML 2.0 enables organizations to build AI solutions based on a variety of data types, including video, numerical, text, image and audio data.
Top tech companies pledge to improve cybersecurity Thirty-four technology and security companies have made a public commitment to protect, improve, and empower online users. The Cybersecurity Tech Accord is a global
agreement among companies pledging to defend against the misuse of their technology, and safeguard users from malicious attacks. "The devastating attacks from the past year demonstrate that cybersecurity is not just about what any single company can do but also about what we can all do together," said Brad Smith, Microsoft president. "This tech sector accord will help us take a principled path towards more effective steps to work together and defend customers around the world." The Tech Accord includes signatures from: ABB, Arm, Avast, Bitdefender, BT, CA Technologies, Cisco, Cloudflare, Datastax, Dell, DocuSign, Facebook, Fastly, FireEye, F-Secure, GitHub, Guardtime, HP, HPE, Intuit, Juniper Networks, LinkedIn, Microsoft, Nielsen, Nokia, Oracle, RSA, SAP, Stripe, Symantec, Telefónica, Tenable, Trend Micro, and VMware. Through the pledge, the companies plan to tackle four cybersecurity areas: stronger defense, no offense, capacity building, and collective action.
CollabNet releases State of Agile report
CollabNet is providing insights and analysis on the trends of agile in the industry with its 12th annual State of Agile report. According to the report, agile adoption seems to be expanding within organizations. "Year after year the annual State of Agile Report has helped our industry gauge the adoption and effectiveness of agile in software organizations," said Lee Cunningham, senior director of enterprise agile strategy at CollabNet VersionOne. "This year's report affirms the effectiveness of agile in accelerating software delivery and helping teams manage the changing priorities within their organizations. We also see in this year's report that agile adoption still has a long way to go." This year, a higher percentage of respondents than last year reported that they had adopted agile. Twenty-five percent said that "all or almost all" of their teams had adopted agile, compared to eight percent in 2016. The respondents also reported that agile practices were being adopted at higher levels in the organization as well. Higher-level agile planning techniques, such as product roadmapping and agile portfolio planning, experienced an increase in use.

CMMI Institute introduces CMMI Development Version 2
The CMMI Institute is looking to strengthen its ability to help businesses rapidly deliver high-quality software and meet customer satisfaction with the release of CMMI Development V2.0. CMMI, also known as the Capability Maturity Model Integration, is a set of best practices designed to improve performance, key capabilities and business processes. CMMI Development 2.0 is designed to address global business challenges with best practices including engineering and developing products; improving performance; building and sustaining capability; managing business resilience; planning and managing work; selecting and managing suppliers; ensuring quality; managing the workforce; and supporting implementation. "Global adoption of the CMMI has been growing at a record rate because of the material results it delivers," said Kirk Botula, CMMI Institute CEO. "High-performance commercial and government organizations around the world rely on CMMI to provide a clear roadmap to mitigate risk, create value, and build a resilient culture of continuous improvement. These include companies like Honeywell, Cognizant and Unisys, and U.S. government agencies, such as the FDA and NASA."
GitLab integrates with GitHub for CI/CD
GitLab 10.6 has been released, featuring new CI/CD integration with GitHub and further integration with Kubernetes. With GitLab CI/CD for GitHub, developers can create a CI/CD project in GitLab and connect it to GitHub. According to GitLab, while it has already received positive feedback on its built-in CI/CD features, the company felt GitHub integration was a huge piece missing from its portfolio. GitLab also added the ability to integrate CI/CD with other repositories, such as Bitbucket. This new functionality was primarily designed for four audiences: open-source projects, large enterprises, GitHub users, and Gemnasium customers. Users with a public open-source project on GitHub will be able to take advantage of all of GitLab's highest-tier features for free. Large enterprises will now be able to use a common CI/CD pipeline for all of their different repositories. According to the company, many enterprises have wanted to standardize on GitLab but could not because they had code stored in different repositories.
FIDO Alliance, W3C create WebAuthn
The FIDO Alliance and the World Wide Web Consortium (W3C) have reached a major milestone in their effort to bring stronger and simpler web authentication to users globally. The organizations have announced that the Web Authentication (WebAuthn) standard is advancing to the Candidate Recommendation stage, the last step before the final approval of a web standard. WebAuthn is a web API standard that provides users with new methods to securely authenticate on the web, in browsers, and across sites and devices. With the new specification, users will be able to log in using a single gesture, removing some of the complexity currently associated with authentication processes. According to FIDO, the standard strengthens FIDO Authentication and removes the need to rely on passwords. In addition, it provides the advantage of having credentials stay on the device instead of being stored on a server somewhere. It also helps protect against attacks that rely on stolen passwords, such as phishing, man-in-the-middle, and replay attacks. z
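Much of WebAuthn's phishing resistance comes from origin binding: the browser embeds the requesting site's origin and the server's one-time challenge in the signed client data, so a credential exercised on a look-alike site fails verification. A simplified, hypothetical sketch of that relying-party check (the real specification also verifies signatures, type fields and much more):

```python
import json
import secrets

def new_challenge() -> str:
    """Server side: issue an unguessable one-time challenge."""
    return secrets.token_urlsafe(32)

def verify_client_data(client_data_json: bytes,
                       expected_challenge: str,
                       expected_origin: str) -> bool:
    """Accept the assertion only if it echoes our challenge AND our origin."""
    data = json.loads(client_data_json)
    return (data.get("challenge") == expected_challenge
            and data.get("origin") == expected_origin)

challenge = new_challenge()
good = json.dumps({"challenge": challenge,
                   "origin": "https://example.com"}).encode()
phished = json.dumps({"challenge": challenge,
                      "origin": "https://examp1e.com"}).encode()

print(verify_client_data(good, challenge, "https://example.com"))     # True
print(verify_client_data(phished, challenge, "https://example.com"))  # False
```

Because the origin is bound into the signed payload by the browser, a stolen or replayed assertion from another site cannot pass this check.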
How Sports Performance Analytics Can Help Technology Organizations Win the Game
BY DOMINICA DEGRANDIS
Of all the sports underdog stories ever written, nothing comes close to the Leicester City win of the English Premier League in 2015-2016. Ignored and underestimated by their peers, financially limited, and on the verge of relegation, Leicester defied 5000-1 odds to clinch the most astonishing title win in sports history. To put this in perspective, the odds of victory for the 1980 U.S. Olympic "Miracle on Ice" hockey team were 1000-1. Like the U.S. hockey team, Leicester did not have the most talented or experienced players. They had the third-lowest average possession in the league, the second-lowest average pass success rate, the third-lowest number of short passes and the highest number of penalties received. Unlike the other top teams, Leicester wasn't bothered by this. Why? Because they took a different strategy, one that looked beyond the usual shots on target and high-tempo attacking tactics. One that provided them with a winning differentiator: an innovative sports science and medical team, carefully integrated into the decision-making process. Similar to winning sports teams, high-performing technology organizations are changing how they measure performance. This article compares two key areas that winning sports teams and high-performing technology organizations measure to understand what it takes to level up performance: workload and fitness/safety.
Dominica DeGrandis is director of Digital Transformation at Tasktop.
Sports: Workload Instead of relying on generic data, such as shots on target or possession statistics, top sports teams create their own metrics and build algorithms to fit the club’s philosophy and tactics. Leicester plays a defense game, using tactics that allow them to play with fewer players, a benefit for a financially strapped team. Tactics include: • Kicking the ball farther up the field than most teams • Making passes predictable • Developing effective partnerships on the field (e.g., relationship between the right-back and right-midfielder) • Constant evaluation • Effective relationships of players, medical team, managers, coaching staff, analysts So, what do they measure? Paul Balsom, Leicester’s head of sports science and performance analysis, focuses on two things: injury reduction and performance improvement. As such, they pay particular attention to optimal load, which includes all the games, all the trainings, all the gym time, conditioning and medical time. If the load is too high (or too low), they won’t have optimal performance. Too much load on players causes injuries. Players get 48 hours off after each game to recover. Unlike other clubs, they also get an additional day off mid-week. Leicester understands the benefit obtained when players are not fully loaded. This idea should sound familiar to tech organizations that benefit from visualizing work-in-
progress (WIP) and focus on flow time versus resource utilization. Leicester employs a medical staff plus a 10-person team of analysts that monitor everything. While the club spends a fraction of what the larger clubs do, they recognize that in order to win, the right tools and support staff are paramount.
Tech: Workload
Traditionally, technology companies have measured resource utilization with the idea that developers and engineers must be kept fully utilized to receive maximum return on investment. But operating a software development process near full utilization actually increases delays due to dependencies, conflicting priorities and unplanned work. Additionally, high utilization contributes to stress and fatigue (from long hours hunched over a keyboard), to a load that is unsustainable — no matter how much people love their job. Eventually, overloaded workers exhibit burnout symptoms: cynicism and detachment, emotional exhaustion and feelings of ineffectiveness and lack of accomplishment — factors that undermine high performance.
Instead of resource utilization, one area that high-performing organizations are looking at is flow, the continuous smooth and fast delivery of business value. From the initial business request all the way across the value stream to production, flow is how business value is delivered quickly. The single biggest deterrent to flow is too much WIP. Why? Because WIP and flow time have a relationship. High WIP means that some items sit waiting in queues longer. There is science behind this called queueing theory, which is a field of applied statistics that studies waiting lines. Queueing theory allows us to quantify relationships between wait times and capacity utilization even when arrivals and service times are highly variable. It's like what happens when we're in the middle of a deployment and a switch goes bad, taking out 1,000 servers. Or when intruders hack their way onto your now unsecure database servers. Queueing theory is the math behind why 100 percent capacity utilization doesn't work, especially in a domain with low predictability, such as software development and delivery. Statistically, once we get past 80 percent utilization, things slow down and queues build up. Freeways at 100 percent utilization come to a grinding halt. Too much WIP across all the teams working in the value stream is responsible for late arrivals and deliveries, often due to conflicting priorities, particularly between teams with high dependencies. High dependencies equal high wait times because people aren't available when you need them. Imagine a goalie not being available when you need him.
In a league where $330 million is paid out annually to injured players, Leicester takes injuries seriously. They measure the optimal load for every player using GPS systems and heart-rate monitors. Additionally, players complete questionnaires daily to identify how they are feeling in general. If a trend in quadricep soreness emerges, coaches can adjust training sessions to reduce potential injuries.
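The queueing-theory claim that delay grows sharply past roughly 80 percent utilization can be illustrated with the textbook M/M/1 model (an assumption on my part; the article names no specific model), where the average time an item spends in the system is the service time divided by (1 − utilization):

```python
def mm1_time_in_system(utilization: float, service_time: float = 1.0) -> float:
    """Average time a work item spends in an M/M/1 queue (waiting + service).

    utilization is the fraction of capacity in use (0 <= rho < 1); as it
    approaches 1, the expected delay grows without bound.
    """
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

for rho in (0.50, 0.80, 0.90, 0.99):
    print(f"{rho:.0%} utilized -> {mm1_time_in_system(rho):.0f}x the service time")
```

At 50 percent utilization an item takes twice its service time; at 80 percent, five times; at 99 percent, a hundred times — which is why teams that keep everyone "fully loaded" see flow time explode.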
Sports analytics show that 40 percent of sports injuries are avoidable. Leicester has the lowest number of injured players in the league, which gives them an average player availability of 96 percent, the best in the league. Soccer is all about energy management. Games are 90 minutes long, and few people can sprint that whole time. Leicester's fitness levels make them hard to beat. Their capability to play at a high tempo throughout the whole game puts them in position to score towards the end of the match. You cannot wear them down or just wait for them to tire. It's worth noting that players advocate for the type of training they need; they are okay saying, "My quadriceps are sore." The equivalent of that in tech is someone saying, "I'm tired from working late (or over the weekend)." One benefit of looking at the sports domain is that it makes it easier to spot essentials that can be applied to other domains, such as safety in tech.

Tech: Safety
How does safety play out in the technology domain? By making it okay for workers to be honest about their workload, which helps with the capture of clean data, allowing decisions based on valid data. While at a conference, I overheard an attendee at a vendor booth ask if there was a way to exclude weekends from lead time reports. I politely asked, "Why do you want to exclude weekends?" He said, "My team doesn't work weekends. I don't want that time counted against us." I asked, "How is it that lead time metrics count against you?" He said, "It's in my goals — it impacts my performance review." As Eli Goldratt said, "Tell me how you're going to measure me and I'll tell you how I'll behave." If people feel safe making their work visible, there is less incentive to game the metrics.
How are high-performing tech organizations gauging fitness and safety issues? Predictive analytics help. Similar to sports, self-reporting through survey data has its advantages. In their DevOps Metrics article, Mik Kersten and Nicole Forsgren state, "Research shows that good organizational cultures drive software delivery and organizational performance, and job satisfaction drives revenues. Monitoring these proactively (through survey data) and not just reactively (through turnover metrics in HR databases) should be a priority for all technical managers and executives."

Understand the System
A view of the whole system looks at everything influencing performance and increases the understanding of cause and effect. Injured soccer players can't outrun stronger opponents. Like football, tech organizations are complex sociotechnical systems with competing functions and relationships between individual components. There are multiple interacting human and non-human components operating within a dynamic and constantly changing environment. Traditional performance measures typically fail to consider this complexity, and instead often focus on components in isolation (such as passing in football or uptime in tech). It does Leicester no good for the right-back to kick the ball to his most talented right-midfielder if the latter isn't available. Tech teams who optimize locally and impact other teams negatively do not improve performance at the organizational level. Flow time is only as fast as the slowest moving part. Therefore, we must consider all the factors within the system that influence organizational fitness (workload or WIP, communication, policies, flow, dependencies, feedback and safety). Winning sports teams and high-performing tech organizations invest in tools and analytics to visualize and better understand performance influencers and detractors from a systems perspective. As a result, they are in a better position to blow the competition away.
The Game is Changing
Data analytics is having a huge impact on how sports teams perform, as Leicester has proven through its cross-departmental relationships, a willingness to embrace new styles of working, and a constant evaluation and feedback process. Lean product management practices and measures help tech teams ship features that customers want more frequently. This faster delivery time enables a faster feedback loop with customers. The result? The entire organization benefits, as measured by profitability, productivity and market share. While Leicester was unable to retain the crown the following season, the club has cemented its position in one of the most lucrative sports leagues in the world. And thanks to its progressive approach to sports performance analytics, it will likely remain competitive with its most powerful competitors. Leicester shook things up and innovated with a whole-system approach. All tech organizations should take note. z
Data Connectivity: An unrecognized, multi-issue problem
New software architectures place new demands on data connectivity
BY ALYSON BEHR

"Facts are stubborn, but statistics are more pliable," wrote Mark Twain. Never has this been more true than when it comes to data connectivity. If you don't have good connectivity to your data wherever it may reside, then it's hard to build applications like artificial intelligence (AI) or analytics; they are very data-dependent technologies. Connectivity is a huge, largely unrecognized problem, according to Amit Sharma, CEO of CData. "If you don't have good data, you won't be able to have good AI solutions or analytics, or big data solutions," he said. "I think it's a problem that's going to keep being more challenging and relevant in the market in the future."

A seismic shift
There's a dramatic shift from data for the sake of data, to data for the sake of business, and this manifests itself in many different ways. Terms like "self-service analytics" describe services that provide enough utility to the business user that they can get the data they need and manipulate it in a very non-technical fashion. Tony Fisher, general manager of Magnitude Software, said, "Data for the sake of data is a very technical thing, and data for the business is a very business-oriented thing. Business users are more concerned about orders and customers or things that are more business-oriented than they are about table or column names. They want to access their data and manipulate it in terms of business needs." Fisher believes that it's very important for technical staff to grasp the concept that data is really just an artifact that's providing the business analyst with a business-oriented view into their data. "I think that's one of the big shifts that's going on now, and we'll continue to see that for some time to come."

Adaptability
Roi Avinoam, CTO and co-founder of Panoply Software, said he believes that adaptability plays an important role in how companies master their data connectivity. "I think what some people might miss is that really the way to master data and get insights comes from being adaptable," he said. "All the time, I notice that when people talk or think about data, the solutions proposed are always the ones where you have to review all the data you have, and then review all the business questions, and ideas for insight requirements, and then after you map these out you come up with solutions. It's great for about two to three months." He added that the "problem is, after you've done all of that, the business changes, the industry changes, the market changes, the APIs and the data that you have changes, and then you have to do it all over again. And that's the kind of state of mind that I think we need as an industry to evolve." Avinoam is an engineer, so he compares it to software engineering's Waterfall development process. "Basically we have to design everything up front, and we have to figure out how it needs to work, and then we go and develop it, and it was so rigid. You can't make changes,
and it’s not adaptable. Now development teams are incredibly agile, right?” He pointed out that now an idea can be brainstormed in the morning and it’s shipped to production that night. He’s trying to impose this agility on his team and wants the industry to follow. “One day is too much, in my opinion. We need to be able to think up ideas and try them out and make drastic changes overnight, without having a big price to pay for it. It should be encouraged, it should be a positive experience that we’ll rotate our entire state of mind to think of completely different types of data or different connections that we might do. And execute on it in a day or two.” He emphasized that the issue really isn’t how you solve your current problem, that’s easy. What’s important is thinking about how you are going to solve an endless stream of problems, challenges and opportunities that may hit you on a daily basis, and ensure that every system keeps up.
Maintaining data consistency Ensuring data consistency when new technologies like microservices and cloud-based distributed applications come into play is no easy task. There’s been a pretty significant shift over the past couple of years just in terms of the overall approach according to Dion Picco, vice president of product management and product marketing for Progress. He uses data warehousing as an example. “The data warehouse approach of more of a record-oriented, relational or star schema approach, has really served its purpose well. It’s certainly going to continue as a pretty prominent standard, since it’s the whole process behind Extract, Transform, Load continued on page 51 >
Learn, Explore, Use: Your Destination for Data Cleansing & Enrichment APIs
Global Email • Global IP Locator • Global Phone • Property Data
DEVELOPER: Your centralized portal to discover our tools, code snippets and examples.
RAPID APPLICATION DEVELOPMENT: Convenient access to Melissa APIs to solve problems with ease and scalability.
REAL-TIME & BATCH PROCESSING: Ideal for web forms and call center applications, plus batch processing for database cleanup.
TRY OR BUY: Easy payment options to free funds for core business operations.
FLEXIBLE CLOUD APIS: Supports REST, JSON, XML and SOAP for easy integration into your application.
Turn Data into Success – Start Developing Today! Melissa.com/developer 1-800-MELISSA
IFTTT: Bringing new meaning to applets

BY CHRISTINA CARDOZA

If you’ve been in the software development industry for some time, you may have heard of the term applets — or more specifically, Java applets. Java applets first came into existence in 1995 after the first release of the Java programming language. They were designed as a small application written in Java, and compiled and delivered as Java bytecode.

As time went on, Java applets followed much the same path as the browser plug-in Adobe Flash. While Java applets are small applications, they require plugins in order to run. For the past couple of years, developers have been moving away from using Adobe Flash in favor of more modern technologies like HTML5. Similarly, developers have been moving away from the use of Java applets in favor of plugin-free Java Web Start technology. Oracle announced in January 2016 plans to deprecate the Java browser plugin in JDK 9, and just last month announced plans to stop supporting applets in Java SE 8.

However, the term applets is being reborn in the form of a new service. IFTTT, also known as If This Then That, is bringing new meaning to the term. “Our applets are definitely breaking away from the old Java applets. Most people you talk with today have never heard of Java applets,” said Linden Tibbets, CEO of IFTTT. “We think of applets as a way in which one service can gain access to endpoints on another service to create some type of value.”

IFTTT is a free platform designed to help users and developers do more with their apps and devices. “We at IFTTT believe ultimately everything in our world is going to become a digital service,” said Tibbets. “There are all these things around us, both purely digital and physical, that connect to the Internet. Each one of them has information about who people are and how they use those products and services.”

“What IFTTT really proposes to solve is for any of those services that are looking to build out some type of ecosystem or set of inspiration that allows access to other services to understand customers better and do more things for customers. IFTTT wants to be that solution,” Tibbets added.

IFTTT aims to have applets bring that value proposition to developers. According to the company, you can think of applets as mini-automations, or mini app services, that connect two or more apps and services together to provide specific actions. “These are simple connections between services,” Tibbets explained. “An applet, for example, can turn on your porch light when your pizza is about to be delivered, or send you a notification when your car goes outside the city you live in.”

Applets are built around APIs. If the APIs that developers want to build around are already available on the IFTTT platform, then they can easily build an applet without writing any code. If a developer wants to include new APIs, they have to build out some back-end services and connect those to the IFTTT platform protocol, Tibbets explained.

The company used to refer to these services as recipes. “Applets can do everything that Recipes could — and much more. They bring your services together, creating new experiences that you can unlock with a single switch. A house that welcomes you, an efficient workplace, an easier way to stay informed — there are thousands of experiences to choose from, or you can create your own,” the company wrote in a post. The reason for the change was that Recipes gave off the notion that someone had to make it themselves, and the company wanted the service to be something more approachable and in line with what it envisioned IFTTT being about.

“We felt [applets] was a safe word and really represented the first step in our journey in being able to create value for the end user,” said Tibbets. “We really envision applets to be full-on integration where services can work together. IFTTT is becoming a rich environment that enables developers and users to make connections without having to spend all the time building out a bespoke connection to each individual API or developer platform that might offer something interesting as a product or service.”

Going forward, the company plans to move away from what traditionally has looked like automation and move more towards access. “IFTTT is about granting access between services beyond just these cool automations that applets are known for today.” z
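The porch-light example above follows a simple trigger-and-action shape that can be sketched in a few lines. This is purely an illustration of the "if this, then that" idea; the service names, event fields and command format are invented, not IFTTT's actual platform protocol.

```python
# Minimal sketch of an IFTTT-style applet: a trigger event on one
# service fires an action on another. Illustrative only; IFTTT's real
# platform protocol is more involved than this.

def porch_light_applet(event):
    """If the pizza is out for delivery, turn on the porch light."""
    if event["service"] == "pizza_tracker" and event["status"] == "out_for_delivery":
        return {"service": "smart_light", "command": "on", "target": "porch"}
    return None  # trigger condition not met; no action fires

action = porch_light_applet({"service": "pizza_tracker", "status": "out_for_delivery"})
print(action)
```

The applet itself holds no logic about either service's internals; it only maps one service's event to another service's command, which is what lets a platform host thousands of such connections.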
Businesses are failing to reach Big Data maturity, report finds AtScale three-year survey looks at Big Data, the cloud, challenges ahead BY CHRISTINA CARDOZA
Enterprises have been leveraging Big Data solutions to capture, store, analyze, organize and transform their data for years; but despite the advancements made in this space, they have still not reached a transformational level of maturity, a recently released survey revealed. AtScale announced the results of its Big Data maturity survey, and found only 12 percent of respondents have reached a high level of maturity when it comes to Big Data. This is up from 8 percent in 2016. The 2018 Big Data Maturity survey was a three-year effort that collected 5,593 responses from all industries across the globe. The responses were gathered in collaboration with a number of partners, including Cloudera, Hortonworks, MapR, Tableau, ODPi (a Linux Foundation project) and the Apache Software Foundation. The survey looked at how companies around the globe are using Big Data and the cloud, and where these technologies are causing enterprises to struggle.

“Every enterprise needs to understand this report as their leaders will need to consider the impact their evolving Big Data analytics environment has on their ability to deliver self-service analytics and manage governance,” said John Mertic, director for ODPi.

According to the report, it is clear that respondents see the benefits of utilizing Big Data. Ninety-five percent reported they plan to do as much or more with Big Data over the next three months. About 65 percent are using Big Data in a “strategic” or “game changing” way, while 34 percent say they are using it in an experimental or tactical way. However, the biggest challenges respondents are experiencing with Big Data include lack of skill set, governance, performance, security and management. In addition, the report finds an overconfidence among respondents, with 78 percent ranking their Big Data maturity as medium or high when in fact only a small share have reached high-level maturity.

To tackle management issues, many respondents have taken their Big Data efforts to the cloud. More than half of the respondents have some amount of their current Big Data deployment in the cloud, and 77 percent project they will use the cloud for Big Data. However, the report finds respondents believe the cloud is making it harder to access data. Forty-two percent of respondents have self-service access, which is down from last year, and 58 percent reported a lack of self-service access to Big Data. As a result, the typical enterprise data analytics portfolio utilizes hybrid data environments alongside traditional data warehousing solutions. Only 20 percent of respondents are looking to use Big Data as a replacement strategy for earlier data platforms. According to AtScale, to accommodate this, chief data officers need to build out agile data environments that can accommodate traditional Business Intelligence (BI) platforms.

Other findings included: Tableau maintains the #1 spot on Big Data, Microsoft Excel maintains the #1 spot on small data, 52 percent of respondents are dealing with siloed, decentralized analytics teams, and central business analytics units are not yet the norm. z
Empower your development. Build better applications. GrapeCity’s family of products provides developers, designers, and architects with the ultimate collection of easy-to-use tools for building sleek, high-performing, feature-complete applications. With years of experience, we understand your needs and offer the industry’s best support. Our team is your team.
.NET UI CONTROLS
For more information: 1-800-831-9006
Learn more and get free 30-day trials at GrapeCity.com © 2018 GrapeCity, Inc. All rights reserved. All other product and brand names are trademarks and/or registered trademarks of their respective holders.
Move fast and fix things: It’s time for an API audit BY ROB ZAZUETA
If you run an open API program, the current controversy surrounding Cambridge Analytica’s use of Facebook data to create psychographic profiles of millions of Facebook users should concern you, and not just because of how your profile data may have been used. I recall being very surprised at how much data I could access through Facebook’s application programming interface (API) back when they first released it. I could easily navigate through a specific user’s news feed and friends list and all but replicate that user’s web of social interactivity with only a handful of calls. Facebook opened this data to allow developers to create games and applications that enhanced the core purpose of Facebook at the time — connecting people and allowing them to share their lives with their friends online. While the terms of service made it clear that data was not intended to be captured and stored, there was also nothing stopping a developer from breaking those rules — and nothing Facebook could do to easily tell if the rules had been violated. Subsequent updates to the Facebook API limited the access to much of that data, but the genie was already out of the bottle. It appears the data Cambridge Analytica used may have been gathered sometime prior to 2015, before those limits were put in place.

Rob Zazueta is Director of Digital Strategy at TIBCO.
It Isn’t Just Facebook

Facebook is taking a big hit on all this controversy, but there’s a part of me that feels it’s somewhat undeserved. The same data that may have been used to target specific audiences with messages of questionable veracity also allowed companies like Zynga to flourish, and helped Facebook evolve from a simple social bulletin board to a genuine social platform. I don’t believe any of this was malicious on Facebook’s part. I think it’s the unintended consequences of a drive toward radical openness marred by a culture of “move fast and break things.” Now it’s time to move fast and fix things. If you run an API program that is open to the public, you should take this as a warning to audit your APIs now to understand exactly what data you’re exposing, who has access to it, and how that data is connected to other API endpoints in your system.
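A first pass at that kind of audit can be a machine-checkable map of who can reach what. The sketch below flags any access broader than intended; the role, endpoint and field names are all invented for illustration.

```python
# Sketch of an API-audit starting point: record which fields each role
# can actually reach per endpoint, compare against the intended policy,
# and flag anything broader. Names and data are invented for illustration.

ACCESS_MAP = {
    "public":  {"/users/{id}": {"name"}},
    "partner": {"/users/{id}": {"name", "email"},
                "/users/{id}/friends": {"user_ids"}},
}
INTENDED = {
    "public":  {"/users/{id}": {"name"}},
    "partner": {"/users/{id}": {"name"},  # email was never meant for partners
                "/users/{id}/friends": {"user_ids"}},
}

def audit(actual, intended):
    """Return (role, endpoint, extra_fields) triples where access exceeds intent."""
    findings = []
    for role, endpoints in actual.items():
        for endpoint, fields in endpoints.items():
            extra = fields - intended.get(role, {}).get(endpoint, set())
            if extra:
                findings.append((role, endpoint, extra))
    return findings

print(audit(ACCESS_MAP, INTENDED))  # -> [('partner', '/users/{id}', {'email'})]
```

Even a crude table like this turns "what are we exposing?" from a guess into a diff you can re-run after every API change.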
Why an API Audit is Important

As an example, the early Facebook API allowed a fair amount of a user’s friend data to be exposed as part of the user’s profile data. This meant if your friend granted a third-party app access to their data, that app would also get some limited access to your data, even if you didn’t grant access to that app. I’m aware of at least one other social network API that not only returned a user’s profile data in a single call, it also returned every one of their followers. Aside from the fat payload that created, it meant giving more data to the application than
it actually required or requested. Proper normalization of RESTful endpoints combined with endpoint-level access restrictions is one of the best ways to avoid this type of situation. For example, a user’s profile may be accessible from the endpoint ‘/users/rzazueta’. Rather than list all connected friends as part of that response, the data should contain a link to the friends list, i.e. ‘/users/rzazueta/friends’. When endpoint-level access controls are applied in the code, only those with the ability to read the friends endpoint would gain access to that information. This, of course, means you need to set up your API packages, user roles, and endpoints in a way that allows for that level of control. Most API management systems make this relatively easy, but they can only help if you’ve designed your API correctly. If you have never done so, now is the time to perform an audit of your API to map what data is accessible to which users and ensure you’re not exposing more than you intend to — even if your API is internal only.
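The normalization just described can be sketched as follows: the profile response carries a link to the friends resource instead of embedding the list, so the friends endpoint can enforce its own access check. The role name and user data here are invented.

```python
# Sketch of endpoint-level normalization and access control. The profile
# response links to the friends resource rather than embedding it, so the
# friends endpoint can apply its own permission check. Data, scope names
# and users are invented for illustration.

USERS = {"rzazueta": {"name": "Rob Zazueta", "friends": ["alice", "bob"]}}

def get_profile(user_id):
    user = USERS[user_id]
    # Return only profile-owned fields; friends are referenced, not embedded.
    return {"name": user["name"], "friends_url": f"/users/{user_id}/friends"}

def get_friends(user_id, caller_roles):
    # A separate endpoint with a separate permission: only callers granted
    # the read:friends scope may see the list.
    if "read:friends" not in caller_roles:
        raise PermissionError("caller lacks read:friends")
    return USERS[user_id]["friends"]

print(get_profile("rzazueta"))
# -> {'name': 'Rob Zazueta', 'friends_url': '/users/rzazueta/friends'}
```

In a production API the same check would typically live in the API gateway or management layer rather than in each handler, but the shape is the same: one resource per endpoint, one access decision per endpoint.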
API Audit 101

Start by creating a map connecting which users and user roles have access to which endpoints. Ideally, all of your users will have consistent access through a set of roles rather than individual custom access. If that’s not the case, consider creating new roles that will suit those customers’ needs. Next, look at the data in each of those endpoints. If you’re applying content filtering to limit what data is returned to a specific user or role, make sure you mark that down. In a well-designed RESTful API, your endpoints would return only the data they are responsible for. Any data related to other endpoints should only be accessible through those endpoints, referenced through a hyperlink, as in my user profile and friend list example above. It’s tempting to provide all of that data in a single response to cut down on the number of API requests, but it also opens the door to exposing more data than intended. If your API is designed to return more data in fewer calls, you should consider moving that logic from the
core API code to a layer that calls on the core APIs to consolidate and respond to those requests as a separate function. This pattern, called “Backend for Frontend” or “BFF,” has been adopted by companies such as Netflix to make it easier to create APIs that serve specific client needs. BFFs allow for an extra level of access control, as they should be limited by the same access levels as the customers using them. Over the years, I’ve spoken with a number of companies who have hesitated in moving forward with an API program for fear it could be a vector of attack for hackers. The Cambridge Analytica case would seem to confirm those fears, though perhaps not in the ways once imagined. The situation serves as a clear warning to API providers that data security must go beyond basic access controls and firewalls. Good API management systems can significantly improve the security of your APIs. Those designing the APIs, however, must keep in mind the potential unintended consequences of their design decisions. z
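The BFF layer described above might be outlined like this, with the core-API functions as stand-ins and the scope names invented. The key point is that the BFF passes the caller's own access level through rather than calling the core APIs with elevated credentials.

```python
# Sketch of a Backend-for-Frontend: the BFF aggregates several core-API
# calls into one client response, but forwards the caller's credentials so
# it can never see more than the caller could. Core APIs are stand-ins.

def core_get_profile(user_id, roles):
    return {"name": "Rob"}                      # stand-in core endpoint

def core_get_friends(user_id, roles):
    if "read:friends" not in roles:
        raise PermissionError
    return ["alice", "bob"]                     # stand-in core endpoint

def bff_profile_screen(user_id, caller_roles):
    """One round trip for the client, same access limits as the caller."""
    screen = {"profile": core_get_profile(user_id, caller_roles)}
    try:
        screen["friends"] = core_get_friends(user_id, caller_roles)
    except PermissionError:
        screen["friends"] = None                # degrade rather than over-expose
    return screen

print(bff_profile_screen("rzazueta", {"read:friends"}))
```

Because every core call carries the caller's roles, adding the aggregation layer does not reopen the data-exposure door the normalized endpoints closed.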
JFrog Xray 2.0 examines the CI/CD pipeline BY CHRISTINA CARDOZA
JFrog has announced the latest release of its continuous security and universal artifact analysis solution. JFrog Xray 2.0 is designed to give DevOps teams insight into potential problems and the confidence to release into development, deployment and production stages.

“Developers are incorporating an ever-growing number of artifacts from external and internal sources into their CI/CD pipeline to release faster,” said Dror Bereznitsky, vice president of products at JFrog. “While expediting delivery, this introduces risk of software being out of compliance or out of date. JFrog Xray multilayer analysis detects dependencies across all software packages to enable full impact analysis and secure releases.”

JFrog launches Xray 2.0 with high availability to bolster DevSecOps.

The solution provides a multilayer approach to analyzing containers and software artifacts for bugs, license compliance issues, and quality assurance. Features include the ability to analyze artifacts for all major package formats, deep recursive scanning to provide insight into Xray’s universal component graph, the ability to show the impact of issues, automated protection for the life cycle, and native Artifactory integration. The latest version provides enhanced usability, improved visibility, new native indexing and scanning support, and the ability to continuously govern and audit all artifacts consumed and produced, according to the company.

“JFrog Xray breaks down artifacts according to their specific packaging. Xray scans each package type, knows how to unpack it and what every underlying layer contains. Each unpacked component is examined individually to uncover potential vulnerabilities and policy violations, mapped out and merged into Xray’s universal component graph that represents the entire organization’s software structure. This allows developers to get maximum visibility into software dependencies and truly understand the impact of every issue found,” the company wrote in a statement. z

In other DevOps news…

n CA Technologies announced it has acquired software composition analysis specialist SourceClear to bolster its DevSecOps portfolio. The company plans to incorporate SourceClear’s SaaS-based SCA tool and proprietary vulnerability database with the CA Veracode cloud platform. According to CA, SourceClear’s SCA solution can not only inform DevOps teams about vulnerable components, but also tell whether that component is being utilized in the application, reducing false positives related to unused components in an open-source library which may be insecure, but inconsequential to a project.

n Centrify is bringing its Zero Trust Security platform to DevOps environments with Next-Gen Access integration. Next-Gen Access combines Identity-as-a-Service, Enterprise Mobility Management and Privileged Access Management in order to reduce exposure to common security threats without compromising security. Features include centralized management of Docker groups within Active Directory, centralized management of access rights and privileges, access management, and the ability to authenticate to HashiCorp Vault.

n GitLab announced a new investment that will help it achieve its mission of Complete DevOps. According to the company, a Complete DevOps solution includes a single UI for dev and operations, integrates all phases of DevOps, and enables dev and operations teams to work together with less friction. The undisclosed investment comes from Telstra Ventures, which will also leverage GitLab’s app to support its own innovation and optimize collaborative workflows. “We look forward to partnering with Telstra to support its large application team and to aid the company in its vision of connecting people through technology,” said Sid Sijbrandij, CEO and co-founder of GitLab. “DevOps is increasingly being adopted by organizations around the globe to radically improve productivity and the pace at which software moves from idea to market.”

n Testplant has acquired NCC Group’s Web Performance Division, bringing a SaaS-based user-centric application platform and a team of testing, monitoring, and data science experts to the company. As part of the acquisition, Testplant is rebranding the combined company to Eggplant. Eggplant is the name of the company’s flagship product designed for user-centric, end-to-end test automation and analytics. The company says Eggplant will now be able to enhance its customers’ DevOps pipelines in order to provide faster product delivery, visibility, and continuous product improvement, while still ensuring a superior digital experience. z
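The recursive unpack-and-examine analysis described in the Xray statement can be sketched generically as a depth-first walk over nested components. The artifact layout and the vulnerability entry below are invented for illustration; this is not JFrog's implementation.

```python
# Generic sketch of recursive artifact scanning in the spirit of the Xray
# description: walk every layer of a nested artifact, examine each
# component individually, and report known-vulnerable ones. The artifact
# structure and CVE entry are invented for illustration.

VULN_DB = {("libfoo", "1.2.0"): "CVE-2018-0001"}

ARTIFACT = {
    "name": "app-image", "version": "3.1",
    "contains": [
        {"name": "base-layer", "version": "9", "contains": [
            {"name": "libfoo", "version": "1.2.0", "contains": []},
        ]},
        {"name": "libbar", "version": "2.0", "contains": []},
    ],
}

def scan(component, findings=None):
    """Depth-first walk: check this component, then everything it contains."""
    if findings is None:
        findings = []
    key = (component["name"], component["version"])
    if key in VULN_DB:
        findings.append((key, VULN_DB[key]))
    for child in component["contains"]:
        scan(child, findings)
    return findings

print(scan(ARTIFACT))  # -> [(('libfoo', '1.2.0'), 'CVE-2018-0001')]
```

The value of the recursive walk is that a vulnerable library buried two layers deep inside a container image is found just as readily as a top-level dependency.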
Software delivery: it’s a team sport
In software delivery, if teams aren’t rowing together, they’re rowing in circles. Integrate the complex network of disparate tools that you use to plan, build and deliver software at scale. Automate the flow of product-critical information across your entire software delivery process. Help your teams pull together.
MICROSOFT BUILD PARTNER SHOWCASE
MAY 2018 | 23
Microsoft lays out plans for Visual Studio

Microsoft is working on improving the user experience of its software development environment, Visual Studio. The company recently revealed its Visual Studio roadmap for the next two quarters. According to Microsoft, the roadmap is just a peek into what’s next for Visual Studio, and while it includes some of the significant features the team is working on and a rough time frame of when they will be available, it is not a full list of what’s to come. Visual Studio 2017 will continue to follow the company’s release rhythm implemented after Visual Studio 2015. Instead of large quarterly releases, Microsoft aims to make smaller and more frequent minor updates. Minor updates will ship about every six weeks, and servicing updates will ship more quickly. The company in early April released version 15.7 preview 3 of Visual Studio 2017. The release included updates to Universal Windows Platform development, C++ development improvements, updates to Xamarin and .NET mobile development, the ability to configure installation locations, debugger support and live unit testing improvements. Features expected between now and June include a new connected service for Azure Key Vault in C# apps, continuous delivery for Azure Functions using Visual Studio, the ability to switch Git branches faster, JIT debugging for .NET Core, improved startup and solution load performance, and the ability to set a native thread name with the SetThreadDescription API in C++. Currently still in preview are Microsoft’s Android Designer Split View, Visual Studio Live Share Preview, and automatic iOS provisioning from Visual Studio. While Q2 will include updates for all workloads, the .NET Core, C++, Universal Windows Platform, Python development and mobile development areas will get a lot of focus, the company announced.
The Q3 release will focus on desktop development with C++ and Visual Studio extension development. Features planned for Q3 include improved continuous delivery capabilities, multicursor and multi-selection editing, C++ debugger enhancements, extension packs, the ability to publish to the Visual Studio Marketplace using a command line, and more improvements to the startup and solution load performance. Much of this work will be detailed at Microsoft’s Build conference May 7-10. And many Microsoft partners will be on hand to discuss support for these new features. In the meantime, this showcase provides a look at many of those partners and the tools they provide to extend and enhance the Visual Studio ecosystem, including Visual Studio Team Services, Microsoft Azure, SQL Server and more. z Look to sdtimes.com for all the news announced at the conference!
Introducing GrapeCity Documents, a new product line of document APIs
Take total control of your digital documents with this NEW collection of ultra-fast, low-footprint document APIs for .NET Standard 2.0 These intuitive, extensible APIs allow you to create, load, modify, and save Excel spreadsheets and PDFs in any .NET Standard 2.0 application.
Expand the reach of modern apps With full support for .NET Standard 2.0, you can target multiple platforms, devices, and cloud with one code base.
High-speed, small footprint The API architecture is designed to generate large, optimized documents fast — while remaining lightweight and extensible.
Comprehensive, highly programmable Do more with your Excel spreadsheets and PDFs: these APIs support Windows, Mac, Linux, and a wide variety of features for your documents.
No dependencies Generate and edit digital documents with no Acrobat or Excel dependencies.
GrapeCity Documents for PDF GrapeCity Documents for Excel
Get the free trial today! www.grapecity.com/en/documents-api
For more information
© 2018 GrapeCity, Inc. All rights reserved. All other product and brand names are trademarks and/or registered trademarks of their respective holders.
GrapeCity Reimagines Business Apps
Today’s business applications must deliver timely information that can be analyzed in a variety of business file formats. While file format components aren’t a new concept, the application of them has been best suited to desktop applications that require a lot of memory. GrapeCity is setting a new standard by providing lightweight, high-performance components that operate at the server level so information sharing and distribution can be achieved faster and more reliably. “GrapeCity has always focused on delivering quality UI components that accelerate data presentation, analysis and visualization,” said Issam Elbaytam, chief software architect at GrapeCity. “Our products enable developers to get data out of information silos so it can be shared with others in an efficient and timely manner.” Toward that end, GrapeCity is announcing the GrapeCity Documents product line that includes GrapeCity Documents for Excel and GrapeCity Documents for PDF. By the end of the year, similar products will be available for Word. Additional file formats will be supported in the future.
Share information reliably
The hyper-connected modern world isn’t necessarily reflected in the ways enterprise applications behave and business processes flow. In a typical information-sharing scenario, if one person needs a document reviewed, they’ll send a PDF for review or an Excel file that enables the recipient to manipulate the data. Information from different departments or different offices needs to be combined into a master document. To do that, the original documents are sent to a central location where they’re cut and pasted into a master document and then sent out for review and analysis. With GrapeCity Document products, the aggregation of information can occur programmatically at the server level, which saves time and effort and also reduces manual errors.
GrapeCity Documents for PDF will be introduced at Microsoft Build. The product generates and distributes PDFs at the server level, including master templates. With it, pieces of information, including charts and graphs, can be merged into a PDF. “The PDF library includes a comprehensive graphics and text layout, and rendering engine to generate really precise PDF documents,” said Elbaytam. “It also supports TrueType and OpenType fonts and multiple languages, and does all of these things in a fast, memory-efficient way.” Both GrapeCity Documents products also include a rich barcode library and a data visualization add-on to make business
documents as informative as possible. Importantly, the products enable developers to build complete business document management solutions that are stable and performant. For example, in a benchmark test, GrapeCity Documents for Excel generated 1.5 million randomly generated cells in about one second. “Our components help developers achieve more in less time,” said Elbaytam. “We tackle the difficult problems so our customers can focus on delivering high-quality business solutions on time and on budget.” Many vendors transfer the burden of customer support to their developer communities to cut costs, at the expense of developers who need fast answers to important questions. GrapeCity’s forum is supplemented by a dedicated technical support team that provides personalized services so developers can meet increasing end-user expectations within expected time frames. Learn more at www.grapecity.com. n
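For a rough sense of the scale of that benchmark, the snippet below generates the same number of cell values (1,500 rows by 1,000 columns) using only the Python standard library and times the work. It says nothing about GrapeCity's actual API or its performance; it only illustrates the data volume involved.

```python
# Rough feel for the benchmark's scale: build 1.5 million random cell
# values in memory and time it. Standard library only; unrelated to
# GrapeCity's actual API.
import random
import time

rows, cols = 1_500, 1_000
start = time.perf_counter()
sheet = [[random.random() for _ in range(cols)] for _ in range(rows)]
elapsed = time.perf_counter() - start

print(f"{rows * cols:,} cells generated in {elapsed:.2f}s")
```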
Productive, cross-platform .NET development with JetBrains Rider
JetBrains Rider is a cross-platform IDE for full-stack .NET developers. It integrates with version control systems, including Team Foundation Server (TFS), and comes with built-in tools like a debugger, a unit test runner, a super-fast NuGet client, a WPF XAML preview window, a database editor, and all that. But enough buzzwords! Being powered by ReSharper, Rider helps enhance our productivity by providing navigation, automatic code inspections and quick fixes, code generation, a large number of refactorings, and more. Of course there is code completion and code templates. Rider also indexes all file names, textual content, as well as the meaning of content. It even knows how to navigate from an ASP.NET MVC controller directly to its corresponding view. Or to a referenced model, or even an automatically decompiled third-party library.
Because Rider analyzes our code all the time, it can help us maintain a consistent code style and warn us about common errors — even those that usually only become visible at runtime. This makes our code better, more robust and resilient, but also helps us learn new language features and constructs, such as working with tuples and deconstruction in the latest C# 7.2. In summary, Rider can be used to develop, run and debug almost any .NET project type, on any platform. And while it is pretty light and fast, it also comes with very powerful features to help us be productive! Give JetBrains Rider a try! A 30-day trial version is available from www.jetbrains.com/rider. n
LEADTOOLS: One SDK, Multiple Platforms
The software development landscape is increasingly diverse, which is both a blessing and a curse for development teams. The many devices and platforms provide endless paths and opportunities to imagine, innovate, and develop. Coders experience the excitement that various new technologies can create. However, project and dev managers may feel more pressure due to the overwhelming decisions they have to make and the implications that arise from producing modern apps. If your project requires multiple platforms — and it likely does or will — there are several ways you can go about it. You can write separate native apps for each platform, use tools that convert code between languages, write shared code with a cross-platform mobile framework such as Xamarin, or even use a mix somewhere in between. Deciding on an approach depends on your team, budget, and deadlines. As if that wasn’t difficult enough, what happens when you add to the mix the need for specialized functionality like OCR, barcode, or image processing? Perhaps you have found a great library, but what if it only supports one of your platforms? Now you must double your efforts in finding, learning, testing, and managing those SDKs. Each library likely has its own licensing agreement, increasing your time investment; and some libraries may be open source, which brings further questions and doubts about reliability. With so many development platforms, devices and operating systems on the market, is it even possible to find an efficient solution to your SDK problem?
Comprehensive Feature Set + Comprehensive Platform Support = Successful Apps
If you need to integrate raster, document, medical, or vector imaging into your
If It Sounds Too Good to be True…Never Mind, LEADTOOLS is Legit!
Does that sound too good to be true? Let’s look at a couple specific examples. You need to create an application for UWP, iOS, and Android that takes a picture and uses OCR to convert that text into a searchable PDF. You have a purely .NET development team with little to no native Apple and Android development experience.
LEADTOOLS for Xamarin fits the bill perfectly and allows your team to code everything in XAML and C#. Now let’s say you need to develop the same application but you have a mixed group of developers with varied expertise on their respective platforms and languages. LEADTOOLS has your team covered in that scenario as well with fully native libraries allowing your programmers to code in Objective-C or Swift for iOS and macOS, and Java for Android. Your developers may not be able to help each other with the nuances of their syntaxes, but they can still work together when implementing LEADTOOLS because the API differences are minimal. Lastly, for teams that want to find some middle ground between the two paradigms, LEADTOOLS offers several web services. You can still design and code native UIs and offload your imaging requirements to a common server. You can host the services on your own servers or take advantage of everything Microsoft Azure has to offer.
LEADTOOLS + Microsoft Azure = Tomorrow’s Imaging Apps
Progress Enables Tomorrow’s Apps Today
Chatbots and Conversational UIs
The new Progress NativeChat chatbot framework and complementing Telerik Conversational UI components enable
Leverage Microsoft HoloLens
Progress will be previewing its forthcoming Telerik AR/VR UI components at Build. The components will enable developers to reuse existing skills to add HoloLens and 3D charting capabilities to their apps in a simple plug-and-play fashion. “Working in 3D is really difficult because you have to think differently about how a UI looks and reacts to user
interactions,” said Krsmanovic. “While these capabilities are not yet part of mainstream enterprise app development, they will be in the future. We’re investing in this today so tomorrow you can easily build AR/VR capabilities into your apps.”
Add Dependability to Open Source
Open source has become a staple in enterprise environments that want to stay current with the latest technology trends and community-driven innovation. “Developers sometimes get burned by open source projects that come and go,” said Krsmanovic. “Enterprise developers want to take advantage of what’s new, cool
Get More Accomplished
NASA uses Telerik products to visualize and give the public access to Mars Rover data. The data is made available through the NASA analyst notebook, which is presented to the public as a web app. Before creating the analyst notebook web app, NASA was using several development tools that weren’t integrated, which made the project difficult. Using Telerik UI and reporting tools, which operate across NASA’s other development tools, one developer can now accomplish the same amount of work that required three developers in the past. Learn more at www.telerik.com/ build-2018. n
32 | MAY 2018
7pace: 7pace Timetracker extends Visual
Studio Team Services with professional time recording and management capabilities designed for software developers. 7pace Timetracker is so seamlessly integrated, you’ll think it’s always been there. 7pace Timetracker builds feedback into your work environment to help teams learn and improve over time, automates standard tasks, and lets you create high-level reports. Users benefit from data-driven insight into their productivity, accurate reporting, and dependable planning and forecasting.
Alachisoft: NCache is a high-performance open source .NET/.NET Core in-memory distributed cache. It provides speed, scalability and reliability for database, data storage and ASP.NET/Core applications. The NCache distributed cluster communicates with optional NCache client caches deployed at the application for optimum speed and reliability. NCache makes sharing and managing data in the cluster as simple as on a single server, and synchronizes with the resident database. Available on-premises and in the cloud. Learn more at www.alachisoft.com.

ASNA: ASNA Synon Escape migrates Synon-generated AS/400 (now IBM i) applications to C# and SQL Server on the .NET platform. With Escape, Synon-generated applications are given new life, free from Synon and its asphyxiating Model. Escape exploits the Model and other RPG programs to provide a refactored version of the original application in C#. This new version of the application can then be maintained and enhanced with conventional C# idioms and best practices.

Caphyon: Over the last 15 years Caphyon
has created software applications for developers and Internet professionals that are reliable, secure and easy to use. Advanced Installer is used for authoring MSI/AppX/MSIX packages and repackaging by a wide range of users, from beginners to senior developers and system administrators. Fast and easy to use, it ensures a great ROI for your team by reducing the time spent creating setup packages, leaving more time for development.
Chef: Chef Automate enables you to build, deploy, and manage your Azure infrastructure and applications seamlessly across your dev and operations teams. The Automate platform gives you end-to-end visibility across your entire fleet, so you can detect issues, enable continuous compliance of apps and infrastructure, and manage changes in a single workflow. Chef Automate is available in the Azure marketplace so you can start automating today.

Combit: The List & Label reporting tool enhances applications hassle-free, enabling generation of a host of report types. Designer objects include tables, crosstabs, charts, barcodes, graphics, PDF, and more. It offers rapid performance and is suitable for deployment in projects of any size. The report designer can be redistributed free of charge. List & Label supports WinForms, ASP.NET (MVC) and WPF, and connects to any data source: SQL, ADO.NET, OLE DB, ODBC, ORMs. Learn more at www.combit.com.

DevArt: dbForge SQL Complete is an excellent add-in for SQL Server Management Studio and Visual Studio that provides outstanding IntelliSense-like functionality and automatic code formatting. The tool enhances code accuracy and quality and speeds up SQL code writing by offering context-based smart suggestions, automatic formatting and refactoring, thus increasing your productivity, reducing costs, and saving you a huge amount of time and effort in database and code development.

DevSense: PHP Tools for Visual Studio transparently integrates into Microsoft Visual Studio and extends it with support for the PHP language. The extension is focused on developer productivity while respecting PHP conventions. It understands your code and provides smart code completion, quick navigation, syntax error checking, an integrated localized PHP manual, a project system, debugging support and more. Read more and download now at www.devsense.com.

GitHub: GitHub is how people build software. Millions of individuals and organizations around the world use GitHub to discover, share, and contribute to software, from games and experiments to popular frameworks and leading applications. Whether you work for a small startup, a university, or a Fortune 500 company, GitHub enables powerful, collaborative workflows. You can use GitHub.com in the cloud or GitHub Enterprise on your server, then integrate your favorite apps and services to customize how you build software.

Gnostice: Gnostice creates PDF and Office document processing controls and APIs for WinForms, WPF, ASP.NET and Xamarin, offered as the Gnostice XtremeDocumentStudio.NET product. The included multi-format, HTML5-based ASP.NET Document Viewer is one of the most popular controls in XtremeDocumentStudio. The Document Viewer supports viewing of PDF and Office documents, with support for interactive PDF form-filling and annotation. Learn more at www.gnostice.com.

leader in providing tools and solutions that accelerate design, development, and collaboration. In April, they released the latest ver-
InstallAware: Integrates with Visual Studio, letting you build
MSI Windows Installer, EXE Native Code, App-V Application Virtualization, and APPX Windows Store (Universal Windows Platform/UWP) outputs from your active Visual Studio project in just a single click on the Visual Studio toolbar! Other InstallAware unique features include programmatic application pinning to the Windows 10 Start Menu/Taskbar, up to 90% better compression, fully automated virtual machine unit testing (with command line support), and many more. Try InstallAware today: www.installaware.com.
IntervalZero: RTX64 turns the Microsoft 64-bit Windows operating system into a real-time operating system (RTOS). RTX64 enhances Windows by providing hard real-time and control capabilities familiar to both developers and end users. RTX64 is a key component of the IntervalZero RTOS Platform that comprises x86 and x64 multicore multiprocessors, Windows, and real-time Ethernet (e.g. EtherCAT or PROFINET) to outperform real-time hardware such as DSPs and radically reduce the development costs for systems that require determinism or hard real-time.
JetBrains: Rider is a fast, powerful, cross-platform .NET IDE.
LEAD Technologies: LEAD Technologies is the developer of LEADTOOLS, the award-winning line of comprehensive SDKs designed to help programmers integrate document, medical, and multimedia imaging into their desktop, server, and mobile applications. LEADTOOLS offers support for OCR, Barcode, Forms Recognition, PDF, Document Conversion and Viewing, Document Cleanup, Annotations, DICOM, PACS, audio/video codecs, streaming, Image Compression, Image Processing, Viewers, Scanning, and more. A LEADTOOLS toolkit literally puts millions of lines of code at the fingertips of application developers.
Mobilize.Net: Used by over 90% of the Global 1000, Mobilize.Net’s AI-assisted modernization tools, including WebMAP5, transform ‘90s desktop client/server apps into modern
Modern Requirements: Business
analysis is often not aligned with application lifecycle or DevOps practices, resulting in missed dates, unfulfilled product promises and challenged regulatory reporting. Modern Requirements4TFS reduces time to value with ingenious process automation, visualizations and extensible reporting, eliminating waste and streamlining collaboration. Uniquely built into TFS/VSTS, it bridges the gap between document and information management. Modern Requirements4TFS is a value-added solution for Visual Studio Enterprise subscribers and is used by thousands of organizations worldwide.
OpsHub: In the digital age, enterprises must adopt agility at scale to be competitive and meet customers’ expectations. OpsHub Integration Manager enables out-of-the-box integration between more than 50 systems, such as VSTS, RTC, Jira, ServiceNow, Rational DOORS NG and HPE ALM, enabling enterprises to implement agility at scale and deliver quality products and customer experiences. OpsHub enables program management organizations to effectively drive digital transformation even when different teams are using heterogeneous tools. #SAFe #EPMO
Perforce: Perforce Software is a leading
provider of developer collaboration solutions that span the software development lifecycle. Our flagship version control platform, Helix Core, is renowned for its flexibility and integrates with countless tools. Thanks to the P4VS plugin, Helix Core version control capabilities are available to Visual Studio users. With the plugin, teams can perform essential version control tasks without leaving Visual Studio. Learn more
about version control for Visual Studio.
Progress Telerik: Telerik DevCraft pro-
Redgate: Redgate has specialized in database software for 18 years and is the leading provider of tools for professionals working on the Microsoft Data Platform. Our solutions help developers and data professionals include databases in compliant DevOps processes, so they can stay agile and productive at every stage of development, while protecting personally identifiable information and complying with regulations such as GDPR and SOX. Find out more about how our solutions can help you at www.red-gate.com/solutions.

Riganti: DotVVM is a unique framework for building rich web UI experiences with .NET. It is designed to provide an easy and efficient way of building complex line-of-business web apps. DotVVM offers full integration with Visual Studio and provides useful ready-made components for enterprise web applications. It can be adopted quickly by any team of .NET developers, even if the team has no previous experience with web development.

SonarSource: A leader in code quality and security, SonarSource’s products SonarLint, SonarQube and SonarCloud are widely used and fully integrated with the DevOps pipeline: Visual Studio, VS Code, VSTS/TFS, Jenkins and GitHub. From the initial writing of code in the IDE, through having pull requests decorated automatically, to using a quality gate to help decide whether the code is fit for production, developers get an overall assessment of the quality of their code throughout the SDLC. Learn more at www.sonarsource.com.
Tasktop: Tasktop integrates Microsoft VSTS
and TFS with external Agile and DevOps tools to create a unified software value stream. Automating the flow of information across best-of-breed tools enables visibility and traceability while eliminating wasteful manual updates. This accelerates development speed while enabling developers to work in their tools of choice. Connect Microsoft with more than 50 Agile and DevOps tools such as Jira, ServiceNow, Micro Focus, VersionOne, CA, and IBM.
UXDivers: UXDivers is a product design
company focused on user experience and user interface design. The company is committed to helping developers close the gap between good coding and good user interface design. Proof of this commitment are UXDivers’ most popular products, including Grial UI Kit and Gorilla Player for Xamarin.Forms. Grial UI Kit, the first user interface kit for Xamarin.Forms, is a complete collection of UI layouts, styles and resources. UXDivers’ Gorilla Player is a multiple-device previewer for Xamarin.Forms applications.
TenAsys: TenAsys makes optimum use of
Visual Studio for the development of hard real-time applications. The INtime RTOS is a fully supported target of the latest Visual Studio 2017 IDE. Developers simultaneously develop and debug their C/C++ real-time applications with their Windows-hosted ones, even with both OSes running on the same machine at the same time. Achieve fully deterministic solutions with the tools and experience you rely upon today.
Xircuit: Xircuit, which has been devel-
oped by All about Ashley in Germany, is a cloud-based framework that makes it simple to create websites and smartphone apps. Xircuit is specifically designed to help small and medium-sized businesses reach customers, promote brand awareness, and drive business growth. Xircuit websites and apps look fantastic on any device. The all-in-one platform offers everything for a modern digital appearance. Take advantage of Xircuit: your new homepage is waiting for you at www.xircuit.io.
Using hybrid mobile to your advantage BY CHRISTINA CARDOZA
It is no longer a question of “should I build a mobile app?” Digital businesses have begun to realize mobile app development is necessary not only to stay competitive, but to achieve customer satisfaction. Customers are accessing information on a huge variety of devices, and have come to expect a high-quality, mobile-friendly user experience.
“Consumers, users, and employees are now all used to a consumer mobile app experience,” said Javier Perez, product director for the Axway Appcelerator, a mobile app development framework. “Everyone is accessing social media, news and more on their mobile devices, and they demand the same, if not better, experiences as they would get on the web.” With all the options available to developers, the question now is “how should I build a mobile app?” There are a number of different paths a business can take to develop applications: native, web and cross-platform. However, one approach in particular is leading the mobile race. According to the Ionic Developer Survey 2017, hybrid app development is taking over. The report revealed that 32.7 percent of responding developers plan to abandon native app development in favor of hybrid. In addition, the survey found a nearly 700 percent decrease in developers building exclusively with native tools. “The broader trend is that hybrid development is gaining traction, while the native approach is waning,” the report stated. “We think that makes
sense. The benefits of hybrid are obvious. And as the web evolves, there are fewer and fewer reasons not to adopt.”
The hybrid app advantage
A hybrid app is essentially a native app, Ionic’s CEO Max Lynch explained. Hybrid apps are downloaded from an app store similarly to native apps, and can access native features such as the GPS, camera, contacts, accelerometer and sensors. The difference is that native
but wants to launch a mobile application, the web technologies make it very easy to convert it to a mobile app using a hybrid approach, according to Francis. “Since businesses have already created a web app, creating hybrid apps is much easier and the lower friction option because the codebase that they have on the web app is the same as the hybrid apps, and they can make certain tweaks to the look, feel and design,” said Sencha’s Adwankar. “In terms of look and feel, developers know the best of the web and how to create a web app that looks fantastic.”
nature of using a hybrid approach. “It will be something like our designers or clients will say it would be really cool if the app did this, or when I perform this action it would be great if this animation occurred, and oftentimes we just have to say this is something a hybrid app can’t do or it is really cost-prohibitive,” he explained. Platform-specific native, while more expensive, provides a general level of polish that the hybrid approach is never going to be able to match, according to Francis. Hybrid apps tend to be more sluggish than native apps and the ani-
The different approaches to mobile app development
A native app, according to Javier Perez, product director for the Axway Appcelerator, is an app that is developed in a platform-specific programming language. For example, Swift and Objective-C are used to create iOS apps, Java is used for Android, and C# and .NET are used for Windows. According to Max Lynch, CEO of the Ionic framework, a hybrid app is an application that works in an embedded webview, but can access the same functionality as a native app. “There are no limitations,” Lynch said. “A hybrid app is just as capable as a pure native app. It just happens to do more of the work in a webview.” Hybrid development is an approach, or subset, of cross-platform development. Cross-platform apps provide a native experience and, similar to hybrid apps, enable developers to utilize one codebase across different devices, but they do not work in a webview. Progressive Web Apps (PWAs) are powered by web technologies and can be as engaging as hybrid and native apps, but they run in the web browser rather than having to be accessed through the app store, according to Sandeep Adwankar, head of products for Sencha.
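The webview-plus-bridge model described above can be sketched in a few lines. The example below is a hypothetical illustration rather than any framework’s actual API, though the `getPicture(success, error, options)` shape mirrors the Cordova camera plugin; the bridge object is injected so the same app logic can run against a stub in tests or, on a device, against `navigator.camera`.

```javascript
// Sketch: the hybrid model in miniature. The app logic is plain JavaScript
// running in a webview; a native feature (here, the camera) is reached
// through an injected bridge object. Names and options are illustrative
// assumptions modeled on the Cordova camera plugin.
function capturePhoto(cameraBridge, onPhoto, onError) {
  cameraBridge.getPicture(onPhoto, onError, {
    quality: 75,
    destinationType: 0, // e.g. DATA_URL: return the image as a base64 string
  });
}

// On a real device, a hybrid framework would supply the bridge:
//   capturePhoto(navigator.camera, showPhoto, showError);
```

The same pattern generalizes: webview code stays identical across platforms, and only the bridge implementation differs per OS, which is exactly where the “waiting for a plugin” problem discussed later comes from.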
This makes it very easy for a company to test the mobile market by porting their existing web app to a hybrid configuration. If it works, then a company can decide if it wants to build a more polished hybrid app or build it with one of the traditional platform specific approaches, according to the BHW Group’s Francis.
Hybrid app shortcomings
Despite the cost savings, Francis said hybrid app development does come with some drawbacks. While the BHW Group has never pursued a hybrid app that failed, Francis said, there are always some concessions the team has to make just by the
mation isn’t going to be as crisp. Developers can spend extra time to add polish to a hybrid app, but it can be very time-consuming, Francis explained. Businesses need to think about the application’s future to decide whether a hybrid approach is really going to be the most cost-effective in the longer term. “The biggest thing is just understanding the roadmap for the product, not just the minimum viable product, and evaluating 2.0 or 3.0 product viability as a hybrid app, not just looking at the price tag of the initial build,” said Francis. According to Todd Anglin, vice president of product and developer relations at Progress, performance is the
most common issue when it comes to hybrid. “The performance problem is caused by the way a hybrid app works: it simply embeds a mobile browser window in a native app shell. All of the code in a hybrid app then runs in this embedded web view, including the UI, which is still based on HTML. Because all of the code is running like this, what often happens is that hybrid apps ‘stutter’ — that is, they don’t render at the smooth 60fps that gives mobile apps their responsive, quality feel.” However, Ionic’s Lynch said the emergence of performance APIs and powerful hardware is making the performance issues between hybrid and native barely noticeable. Francis added that native functionality and inner operations integrated within the device are easier to do with native because the platform exposes the
data for developers, whereas in a hybrid approach, developers are typically waiting for someone to create an open-source library that bridges the gap. “If you want to use the latest and greatest OS features, that is going to be harder in a hybrid app,” he said. The same goes for newer technologies, according to Axway’s Perez. For example, virtual and augmented reality are a big consumer trend at the moment, and platform providers like Google and Apple have developed their own libraries to support that functionality. With hybrid, developers have to wait for the support to come, he explained. “What we find in general is that the tradeoff a company is willing to make is that they want to optimize for building the app quickly and having a great experience. This is generally data-driven apps, social network apps and business
IONIC: Ionic is a front-end SDK built on top of Angular for building cross-platform mobile apps. The framework offers open source and pro services to build, deploy, test and monitor apps. It enables users to build hybrid, native and Progressive Web Apps with one codebase. It features native plugins, real app examples, component demos, guides, how-tos, a CLI for creating and building, a library of common app icons, deep linking, Cordova plugins, live reload, and AoT compiling for loading apps fast.
apps, and these use cases fit really well with hybrid,” said Lynch. “If developers are building a game, anything that is computationally intensive or incredibly animation driven, they are going to want to have full control with a native app.” Lynch thinks there is often a debate between developers because native developers don’t want to see hybrid apps replace what they do professionally. Web developers just want to build for mobile and not have to learn a completely different background or skillset, he explained. Additionally, hybrid development was harder in the past because there were no frameworks like Apache Cordova or Ionic providing components and libraries to help. Developers used to have to build things from scratch, Lynch explained. However, Francis stresses that businesses and developers should not pick sides. There is a place for each approach in the mobile space and a reason to use any of them for any given project. For instance, hybrid apps are going to be better suited for less consumer-facing apps, such as internal-use applications that are used more for business and have fewer flashy features and more social interaction. LeanIX uses a mix of hybrid and native development within its company. According to Andre Christ, CEO of the SaaS company LeanIX, the more high-level functionality is implemented in a native way so it doesn’t impact the user experience, while other functionality that requires little to no integration is implemented in a hybrid way. Christ explained that the company learned early on to design for mobile first in the web application. “You can’t just reuse the web content you produced. You always need to adjust the views and the performance, and collaborate between your mobile and internal web application development teams,” he said.
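Anglin’s 60fps point can be made concrete: at 60 frames per second an app has roughly 16.7 ms per frame, and any frame that takes longer shows up as the “stutter” he describes. A hypothetical helper for spotting such frames (the names are illustrative; in a browser the timestamps would come from requestAnimationFrame):

```javascript
// Sketch (illustrative, not from any framework): given frame timestamps in
// milliseconds, count frames that exceeded the 60fps budget of ~16.7 ms.
// A small tolerance absorbs timer jitter.
function countJankyFrames(frameTimesMs, budgetMs = 1000 / 60) {
  let janky = 0;
  for (let i = 1; i < frameTimesMs.length; i++) {
    const delta = frameTimesMs[i] - frameTimesMs[i - 1];
    if (delta > budgetMs + 0.5) janky++; // frame arrived late: visible stutter
  }
  return janky;
}
```

A hybrid team might log such counts from real devices to decide whether the extra polish work Francis mentions is worth the cost for a given screen.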
How Progressive Web Apps (PWAs) compare to hybrid apps
Consumers today are experiencing app fatigue, according to Sencha’s Adwankar. It is becoming exhausting and a nuisance for consumers to have to go through the process of going to the app
they are targeting PWAs. “Developers building hybrid apps and PWAs are taking advantage of the single most adopted, tested, and dominant technology stack in the world: The Web,” the report states. Control is also a factor in the rise of PWAs, according to Ionic’s Lynch. PWAs provide the same control developers had back when they were just deploying websites. They did not have to worry about a gatekeeper, they were able to push updates as much as they wanted, and they could manage the whole process themselves. Lynch adds PWAs are going to work well for companies who already get a lot of traffic through the web. “Let’s say you get a lot of Google search traffic, instead of risking pushing those search users to the app store and possibly losing them, PWAs
open to a quality mobile app experience without having to download the app and store it on your phone,” he said. Eventually, PWAs could replace hybrid apps, but that is a long way off, according to the BHW Group’s Francis. “PWAs are kind of a way the web is trying to win back the mobile battle; it is just going to take a while because with the web you have to wait on all the browser updates and support,” he said. “Most people don’t even know you can install a webpage to your homescreen on your phone. So in a world where browsers support them and consumers understand how they work, I could totally see PWAs replacing hybrid apps because they effectively do the same thing. It is just that you don’t need that wrapper and you don’t need to submit PWAs to the app store.”
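The install-to-homescreen behavior Francis describes hinges on a service worker (plus a web app manifest). A minimal sketch of the registration step follows; the navigator object and worker URL are passed in as assumptions so the guard logic can be exercised outside a browser, where in a real page you would simply call registerPwa(navigator, "/sw.js").

```javascript
// Sketch: registering a service worker, the step that lets a site behave
// like an installable PWA (offline caching, homescreen install prompts).
function registerPwa(nav, swUrl) {
  if (!nav || !("serviceWorker" in nav)) {
    return false; // this browser or context lacks service worker support
  }
  // register() returns a promise resolving to the ServiceWorkerRegistration
  nav.serviceWorker.register(swUrl);
  return true;
}
```

The feature-detection guard is the point: on browsers without support, the site keeps working as a plain web page, which is exactly the graceful-degradation story that makes PWAs attractive.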
Making the web available Capital One Bank prioritizes accessibility with dedicated team to empower people with disabilities BY LISA MORGAN
Web and mobile development focuses on technological inclusiveness, such as across operating systems, browsers and devices. Yet, as organizations become even more digital over time, human accessibility still isn’t getting the attention it should because there’s a general lack of awareness about the issue and how best to address it. Capital One began its journey creating its digital accessibility team in 2010 with the goal of establishing accessibility as a fundamental part of its digital delivery and providing all customers with equal access to its website and apps. Over the years, the company has expanded its digital accessibility team from one to 12, some of whom face their own digital accessibility challenges. While Capital One has placed considerable emphasis on building accessibility into its products among developers and designers, there are also efforts at the business level to educate business leaders and lines of business as well as to provide them with self-service tools. Without organizational involvement and the proper investments, web accessibility programs can fall short of expectations, or worse.
Why accessibility is such a big issue According to the World Health Organization (WHO), approximately 15 percent of the world’s population lives with some sort of disability. Granted, 15 percent of any population is a minority, but when it comes to serving disabled individuals, a plain old cost-benefit analysis is inadequate. Businesses must also weigh the risks, which include possible regulatory and legal action, lost customers and damage to reputation. They should also consider the opportunities,
some of which may not be apparent. A larger problem is the general lack of awareness about web accessibility and its importance. Understandably, those who do not have disabilities tend to think about them in very simple terms, when the topic is actually complex. For example, blindness and deafness are pretty obvious, but there are many types of physical, cognitive and other disabilities. Moreover, even within a single category, there tends to be a range of disabilities. For example, not everyone with auditory disabilities is deaf. They could be hard of hearing or they may have trouble processing auditory input. “Many times, engineering and design groups don’t know exactly what digital accessibility means. Or, if they do have some awareness of what it is, they aren’t always sure how to actually implement it effectively. That’s probably the biggest challenge,” said Mark Penicook, senior manager of accessibility at Capital One.
Where to start If your team hasn’t started thinking about web accessibility and how that impacts your products and your customers, the time to start thinking about it (and doing something about it) is now. However, it isn’t necessarily obvious how or where such efforts should begin. Capital One follows the World Wide Web Consortium (W3C) Web Content Accessibility Guidelines (WCAG 2.0), which address a plethora of accessibility issues for web and mobile applications through the Web Accessibility Initiative (WAI). More broadly, the W3C develops standards for the web, including industries converging on the web such as digital publishing, TV and broadcasting, web payments, automotive and
internet of things, as well as aspects such as privacy, security and internationalization. “We preview all of the W3C standards during development to make sure they can support the accessibility needs of any user. If there’s an issue, we talk with working groups to address barriers and advance accessibility opportunities in that standard,” said Judy Brewer, director of the W3C’s Web Accessibility Initiative. “Additionally, we develop specifications that are specific to accessibility, such as Accessible Rich Internet Applications (ARIA).” The W3C’s Web Accessibility Initiative is updating its guidelines to increase coverage of cognitive disabilities and low-vision disabilities, and placing additional emphasis on mobile. Brewer also notes the business advantage of using tools that support the Authoring Tool Accessibility Guidelines (ATAG) 2.0. “[We have standards for] anything that can better support the more efficient production of accessible content, including HTML editors and WYSIWYG authoring tools, content management systems, social media, media editors and graphics editors,” said Brewer. “Businesses can more efficiently address accessibility by building it into the authoring tools that they’re using.” The W3C WAI is also underscoring the importance of improving resources for testing, since testing website and mobile app accessibility is more difficult than testing and validating HTML, for example. The latter can be done automatically; however, the former requires a mix of automated, semi-automated and expert testing. In addition, the W3C is coordinating with some research and development organizations and actively promoting standards harmonization so it’s easier for companies to implement accessibility.

Capital One’s Four-Prong Approach
As the Capital One digital accessibility team has grown from one person to a team of 12 employees and consultants, it has formalized an approach to ensure that web accessibility is implemented well across the organization. “We spent a lot of time and effort looking at the way we can move accessibility further up the chain in the software development lifecycle to help our engineers, designers and product owners who are working on all of our digital
properties,” said Penicook. “Number one is they have to know about accessibility, so we want to make sure that everyone’s aware of and understands our corporate standard so they can execute against that.” Importantly, Capital One’s accessibility group is proactively addressing four issues simultaneously:
1. Ensuring accessibility is built into the pipeline
2. Continually monitoring and testing what’s in production
3. Providing self-service digital accessibility training and testing tools
4. Marketing the digital accessibility “brand,” so other parts of the organization understand what Penicook’s team does and how to find it
“There was a time when we had to do all of the testing, all of the consulting and make all of the recommendations,” said Penicook. “Self-service is about continuing this journey to enable others to be able to take on some of the responsibilities our team has been providing in the past.” The internal efforts are complemented with external efforts that keep the group involved in the accessibility community, via conferences and websites. “We would love it if every company made their websites and mobile applications fantastically accessible everywhere,” said Penicook. “We’re not doing this to compete more effectively against another bank or credit card company. It’s just the right thing to do.”

Implementation Challenges
Constantly ensuring and improving digital accessibility is a complex task, particularly given the range of assistive features that need to be provided to ensure that whatever disability a person may have does not go overlooked or underserved. “Ensuring accessibility throughout the SDLC can be challenging because you need to make sure accessibility is integrated into what you’re building and that it stays integrated,” said Penicook. “When you have a small group like we do, you have to make sure that your accessibility knowledge and best practices get proliferated across work streams, lines of business and delivery
channels.” Another challenge is ensuring accessibility as technology and programming languages change. “With all the operating systems and assistive technologies and the interplay of interactions of all of those things, practically speaking, each user has their own tech stack,” said Penicook. “There are a lot of constantly evolving considerations that have to be made, and at the same time, we’re facing pressures to release products, services, updates and functionality as rapidly as possible.” Precious time can be saved when accessibility is addressed throughout the SDLC, starting very early. It’s also important that engineering, risk compliance and legal are aware of what accessibility is, why it’s important and what their specific role is in relation to it.
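The ARIA specification Brewer mentions is what lets a custom control describe itself to assistive technology. A minimal, hypothetical sketch of the idea (the element is passed in, so the same logic works against a real DOM node or a stub; a production control would also need keyboard event handling):

```javascript
// Sketch: turning a generic element into a toggle that screen readers can
// announce correctly, using standard ARIA attributes (role, aria-label,
// aria-pressed) per the W3C's Accessible Rich Internet Applications spec.
function makeAccessibleToggle(el, label) {
  el.setAttribute("role", "button");        // announce as a button, not a div
  el.setAttribute("tabindex", "0");          // reachable via the keyboard
  el.setAttribute("aria-label", label);      // accessible name for screen readers
  el.setAttribute("aria-pressed", "false");  // current toggle state
  return {
    toggle() {
      const pressed = el.getAttribute("aria-pressed") === "true";
      el.setAttribute("aria-pressed", String(!pressed));
    },
  };
}
```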
Think creatively
Penicook’s group has put effort into addressing temporary disabilities along with permanent ones, which is an easy point to overlook. The team also considers combinations of disabilities. “A lot of times we talk about temporary or situational circumstances. For example, what we do to help a person who may have lost the use of a limb also equates to someone whose broken arm is in a sling or who is carrying a baby with one arm,” said Penicook. “We stress inclusive and universal design so that the things we do for accessibility can also be extrapolated into temporary situations or circumstances.” Building an organizational process takes a lot of thought at a lot of levels, and the requirements vary from jurisdiction to jurisdiction. The W3C maintains a policy reference page that tracks the status of requirements in different locations, and it finds that policies around the world increasingly call for use of the W3C’s WCAG 2.0 as an internationally harmonized accessibility standard. But before getting creative, make sure to get the basics right: understand accessibility, understand how it impacts customers, products and the business, and develop a set of processes that ensure great digital experiences for all of your customers.
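Building accessibility into the pipeline can start with checks simple enough to automate on every commit. As a purely illustrative sketch (the audit helper and the markup are hypothetical, not Capital One’s actual tooling), a CI step might scan rendered HTML for images that ship without alt text:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collect <img> tags that lack an alt attribute -- one of the
    simplest digital-accessibility checks to run automatically."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            self.missing_alt.append(attr_map.get("src", "<unknown>"))

def audit(html):
    """Return the src of every image missing alt text."""
    parser = AltTextAudit()
    parser.feed(html)
    return parser.missing_alt
```

A pipeline gate could then fail the build whenever `audit()` returns a non-empty list, catching the issue long before a manual review would.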
Testing strives to keep pace with development
BY JENNA SARGENT
Faster and shorter software development release cycles are leaving testing in the dust. Organizations must become more automated in order to keep up.
With the emphasis on ever-faster software release cycles, organizations are turning to automated testing to ensure they can keep up with that speed while simultaneously ensuring they are releasing quality products. Though people have been talking about automated testing for a while now, many testing efforts are still manual, said Jeff Scheaffer, general manager of continuous delivery at CA Technologies. According to Scheaffer, the most recent estimates are that 70 percent of testing is still done manually. Mark Lambert, vice president of products at Parasoft, explained that automated testing is not just about having the right tooling, but about automating regression testing as well, which is what allows organizations to deliver quality at speed. Without automation, the overhead of tests “grinds delivery to a halt” and becomes a bottleneck in the process, Lambert explained. “There is a fundamental need for testing to not only no longer be a bottleneck to innovation, but to be able to keep pace with development, to reduce maintenance overhead, and to be able to optimize your test suite to reduce bloat,” said Wayne Ariola, chief marketing officer at Tricentis. “The ‘right’ testing tools cannot fail to deliver on these requirements for quality at speed.” Organizations need to seamlessly integrate testing into the software delivery pipeline so it can become a bridge between development and delivery, said Lambert of Parasoft. “Without that bridge your development gets done, but then you’re not able to validate the quality and do the delivery and push it out the door.” According to Scheaffer, even though
many organizations are still doing manual testing, they will soon be forced to implement automation into their software development life cycles in order to stay competitive in their industry. “One only needs to look at other industries to see what happens to those who don’t embrace automation,” Scheaffer said. “Nearly every major industry relies on automated factories to produce goods. There is no reason to think the production and testing of software should be any different. Those who don’t embrace automation will not be able to keep pace with competitors due to the cost, speed and quality benefits of automation.” There are two shifts that testing tools should accommodate, according to Tricentis’ Ariola. The first is that assessing the risk for a release candidate should be the primary objective of test case design and execution.
Hurdles to automated testing
Organizations face many challenges when it comes to implementing automated testing. Among them are:
• Automated testing requires a cultural shift. According to CA Technologies’ Jeff Scheaffer, even though the benefits of automated testing have been proven, many individuals or teams have difficulty giving up the control associated with manual testing. He said that it is important to strike a balance between automation and human control and checkpoints.
• “The maintenance trap.” Organizations get started with automation and have all of their tests up and running, but over time their environment and their applications change, and suddenly their automated tooling becomes unstable, explained Wayne Ariola of Tricentis. When this happens, teams need to spend more time on maintenance rather than defining the proper test scenarios, he noted. To overcome this, organizations will have to eliminate the need for scripting, such as by implementing model-based testing, Scheaffer said.
• Having the right stateful data at hand. Ariola said once that has been solved, the next challenge becomes making sure that the right systems and services are in place to be able to perform tests. “These two challenges require test data management that overcomes tough demands in terms of time dynamics, data fluctuation, and consumption, and orchestrat-
The second is that testing tools should take the input of various stakeholders into account during each stage of the release cycle. “Not all test information is relevant at the same time,” Ariola explained. “It must be delivered when stakeholders can take action on it—and those actions must be measurable.” Ariola said that there are three major components of choosing the right testing solution. First, organizations need to be fully on board with automation. He stressed that while manual efforts can complement automation, to achieve the consistent release cadence organizations need these days, automation is a necessity. Second, organizations need to anticipate the demands of tomorrow to avoid disruption. “These organizations must
ed service virtualization that helps you stabilize access to dependent systems,” Ariola said.
• Skill and manpower. “In organizations that use manual testing, when they want to move to automation, it’s not just buying the tools,” said Applitools CEO Gil Sever. Organizations buy the new tool, but then they also probably need to hire a software engineer or software developer to build and maintain the scripts. Sever cites this need for resources as the main reason why not everyone has fully embraced automation. Companies can stick with manual testing and get away with cheaper manpower to do manual quality assurance, and some of them do, Sever explained. “I think visual validation joins that trend as well because instead of writing 50 or 100 lines of code to validate all the different fields, and all the different numbers, and all the different images that appear on the screen, with visual testing you actually tell the tool: validate that this image is correct and this screen is correct,” Sever said. “In the best case scenario, you only need to write one line of code. We actually take the screenshot, analyze it, and give you the result without needing you to write any specific code for it.”
• Scaling test practices with skills and knowledge. “You need to find something that’s easy to use, and can help translate the technical details of what you’re trying to implement into something that’s consumable by non-experts,” said Parasoft’s Mark Lambert.
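Sever’s description of visual validation — capture a screen, compare it to an approved baseline, report pass or fail — can be sketched in a few lines. Commercial tools use perceptual, AI-assisted comparison; this stdlib-only sketch substitutes an exact hash comparison and is purely illustrative (the check_screen helper and baselines directory are hypothetical, not the Applitools API):

```python
import hashlib
from pathlib import Path

def check_screen(name, screenshot_bytes, baseline_dir="baselines"):
    """One-call visual check: compare a screenshot against its approved
    baseline, recording the baseline automatically on the first run."""
    baseline = Path(baseline_dir) / f"{name}.png"
    baseline.parent.mkdir(parents=True, exist_ok=True)
    if not baseline.exists():
        baseline.write_bytes(screenshot_bytes)  # first run: record baseline
        return "NEW"
    same = (hashlib.sha256(baseline.read_bytes()).digest()
            == hashlib.sha256(screenshot_bytes).digest())
    return "PASS" if same else "FAIL"  # FAIL -> route to human review
```

A real engine tolerates anti-aliasing and dynamic content rather than failing on any byte difference, but the workflow — one call per screen instead of dozens of per-field assertions — is the point Sever is making.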
also come to the realization that legacy tools were not built to meet today’s demands of Agile and DevOps, much less tomorrow’s,” Ariola said. Finally, testing should be able to keep up with the pace of development, reduce maintenance overhead, and optimize the testing suite in order to reduce bloat, Ariola explained. Ariola also believes that removing the need for scripting can be a useful feature for automated testing tools. “Accelerating testing to where it can keep pace with development requires features that remove arguably the biggest pain point in legacy testing solutions—the requirement of scripting,” he said. “A model-based solution eliminates the need for coding expertise, while also reducing the maintenance overhead that prevents organizations from reaching automation rates they never
thought were possible—90 percent or greater.” Ariola also said that organizations should look for a testing platform that will give them access to the entire suite of development technologies used, and allow them to access everything through a single interface. Having this sort of end-to-end platform will enable organizations to update testing in realtime and in parallel with the evolution of the applications, he explained. Gil Sever, co-founder and CEO of Applitools, said it’s important to pick a tool that can integrate with your existing tools and provide robust reporting capabilities. Your organization should also consider whether or not it will be able to integrate with other tools in the delivery process, such as continuous integration tools, like Jenkins, or collabcontinued on page 48 >
How these companies can help you automate your testing process Jeff Scheaffer, General Manager, Continuous Delivery, CA Technologies As more organizations adopt DevOps and agile development methods, they gravitate towards test automation —enabling test teams to automatically generate reusable test assets like test cases, test data, and test automation scripts right from requirements. CA Technologies offers comprehensive continuous testing solutions that automate the most difficult testing activities — from requirements engineering through test design automation and optimization. These capabilities help organizations test at the speed of agile and build better apps, faster — with reduced costs and risk. While many testing vendors may offer point testing tools for regression, performance, API, or integration testing, and even automate and drive efficiency in these testing areas, no other vendor can offer true end-to-end, continuous testing from planning to production. Our approach to using continuous testing is to create an open, flexible, integrated ecosystem from planning to production. CA provides the tools to plan, build, test, release, operate, automate and secure all software delivery whether an app is cloud-native, classically architected or hosted on a legacy platform.
Wayne Ariola, Chief Marketing Officer, Tricentis First, the Tricentis Continuous Testing platform was architected to accelerate testing to keep pace with rapid delivery processes. At the core of the Tricentis Continuous Testing platform is “Model-Based Test Automation” technology, which detects changes to the application under test and helps you rapidly evolve the regression test suite to accommodate new user stories. The Tricentis Model-Based Test Automation technology eliminates the overhead required to maintain scripted tests. Second, the Tricentis solution focuses on risk-based testing. We start by identifying your top business risks (considering
both frequency and damage), then optimize the test suite to cover the greatest risks. This results in a highly efficient test suite that is fast to execute and easy to maintain. On average, this approach yields significantly greater risk coverage with about 66% fewer tests. Finally, we provide a unified solution for automating end-to-end testing across the many technologies involved in today’s business transactions. An intuitive, business-readable interface makes it simple to automate tests across 100+ technologies—everything from web apps, to packaged applications (such as SAP, SFDC, Oracle, and Workday), to legacy applications and APIs. This is all accomplished with scriptless test automation, which enables extreme automation without the overhead associated with managing scripts.
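The risk-based approach — score each business function by frequency of use and damage on failure, then spend test effort on the highest scores first — can be shown with a toy ranking pass. The functions and weights below are hypothetical illustrations, not Tricentis’ actual algorithm:

```python
# Each entry: (business function, frequency 1-5, damage-on-failure 1-5)
functions = [
    ("checkout", 5, 5),
    ("search", 5, 2),
    ("profile-edit", 2, 2),
    ("export-report", 1, 3),
]

def risk(frequency, damage):
    """Simple multiplicative risk score."""
    return frequency * damage

# Test the riskiest functions first; low scorers get lighter coverage.
ranked = sorted(functions, key=lambda f: risk(f[1], f[2]), reverse=True)
top_targets = [name for name, _, _ in ranked[:2]]
```

Concentrating the suite on `checkout` and `search` first puts tests where a failure is both likely to be seen and expensive — which is how a smaller suite can still report greater risk coverage.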
Gil Sever, co-founder and CEO, Applitools To help guide Test Automation Engineers, DevOps Teams, Front End Developers, Manual QA experts, and Digital Transformation executives, Applitools created Application Visual Management (AVM) – a new category framework that automates all visual aspects of application UI creation, testing, delivery and monitoring at the speed of CI-CD. The goal is to help shorten application delivery cycles and improve software quality because what the customer sees is what matters most. By helping prevent visual flaws from occurring in the application delivery process, teams can eliminate the UI bugs that frequently result from events, such as browser and operating system updates, new devices penetrating the marketplace, the effects of dynamic content on the web, and the increasing frequency of updates resulting from CI-CD initiatives. Today’s DevOps toolchain only supports the functional aspects of modern application delivery in areas like testing, monitoring, Continuous Integration (CI), Continuous Delivery (CD), accessibility, security, bug tracking, collaboration, source control, and more. Applitools AI Powered Automated Visual Testing and Monitoring platform handles all aspects of visual UI validation
to complete the DevOps toolchain, allowing acceleration and full automation of the entire delivery process.
Mark Lambert, vice president of products, Parasoft Parasoft’s automated software testing tool suite helps organizations build a scalable test automation strategy that addresses each layer of the testing pyramid. At the bottom of the pyramid, you can use Parasoft Jtest, for Java, or Parasoft C/C++test, for C and C++, to cut in half the time it takes to create a solid foundation of Java (JUnit) or C/C++ unit tests. By using Parasoft Jtest, you get tests that are easy to create while being easier to automate and easier to debug than complex end-to-end UI tests. Sitting on top of the unit tests are a broad set of service-layer API tests built with Parasoft SOAtest. These tests provide the perfect communication layer between developers and testers, with visual tools that are easy to adopt and scale. Parasoft SOAtest’s smart test generation technology uses AI to automate the creation of these tests, making it easier for API testing beginners to get started. Once the portfolio of tests is established, the next goal is running these automated tests continuously, as the team is actively developing new capabilities and enhancements. This is where service virtualization, powered by Parasoft Virtualize, can enable early-stage defect detection by decoupling the tests from constrained system dependencies and shifting-left the execution of automated regression tests. Parasoft DTP brings the results of all of your testing together, integrated into your tools to provide robust reporting and analytics dashboards that provide visibility into the entire testing practice as a whole. Parasoft DTP’s smart analytic engine gives teams actionable insights that drive dev/test productivity and focus. Leveraging Parasoft’s technologies, organizations can unlock the potential of Agile by building meaningful and maintainable test suites that seamlessly integrate into the CI pipeline.
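Service virtualization decouples tests from constrained dependencies by standing up a simulated endpoint that answers like the real system. A minimal sketch of the idea (the canned payload and endpoint are hypothetical; products such as Parasoft Virtualize do this declaratively and at far greater scale):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {"accountId": "12345", "balance": 100.0}  # hypothetical response

class VirtualService(BaseHTTPRequestHandler):
    """Stands in for a constrained dependency (a third-party API, a
    mainframe) so regression tests can run early and at any time."""
    def do_GET(self):
        body = json.dumps(CANNED).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *_):  # keep test output quiet
        pass

def start_virtual_service(port=0):
    """Start the stub on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), VirtualService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server  # bound port: server.server_address[1]
```

Tests point their service URL at the stub’s port instead of the real dependency, so the suite no longer waits for a shared test environment to become available.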
oration tools, like Slack. On the reporting side, the tool should be able to provide a visual representation of test results. It is also important to show which tests have passed, which tests have failed, and what the status of the application is after running the tests, Sever explained. For example, Applitools has a dashboard that shows what percentage of tests passed or failed and if they failed on specific browsers or specific devices. This allows users to isolate the root cause or source of the problem and quickly fix it, Sever said. Robust reporting capabilities also give organizations a better idea of how far away they are from being able to release a reliable application, Sever noted. Parasoft’s Lambert stressed that organizations should not be looking specifically for certain features, but rather looking at their current and future needs, and finding a solution that can satisfy both. It all comes down to the total cost of ownership, Lambert said. “What might be quick and cheap now may not be maintainable or scalable in the future,” he explained. Organizations need to find a balance between satisfying the needs they currently have and thinking about how a tool can scale to suit future needs and requirements, he said. CA’s Scheaffer said he believes that in order to find the right automated testing tools, organizations should start by identifying a business driver. “Everyone wants to deliver quality software faster, with reduced risk and at lower costs, but usually an organization will have a specific, acute need that is leading them to evaluate testing solutions,” he said. Once that driving factor is identified, organizations need to see where gaps in their existing tools or processes are preventing them from achieving that objective, Scheaffer said. “From there, it’s easy to identify which testing tools are right for your organization,” Scheaffer said. “If your biggest challenge is managing test data, the logical place to start is with a test data management solution. If mapping requirements to test cases is a challenge, you should look at solutions that offer model-based test case design. Organizations that have an issue accessing test environments can look at service virtualization solutions as a first step. And so on.” Service virtualization gives you control over the dependencies in an application, so tests that would normally be created and executed at the end of the cycle can be executed earlier in the process, Lambert noted. All of that presents the problem of maintaining a stable test automation suite, Lambert explained. He said that when organizations build a large suite of test cases, it can be difficult to not only identify the impact of changes, but to then take those changes into account and refactor the test suite accordingly. “Being able to have tooling that can help you with that impact assessment and with the refactoring will help streamline the maintenance of the practice so that you’re not spending all of your time rebuilding test automation,” Lambert said. “You’re able to just kind of rely on the test automation working as part of a regression and focus on expanding the overall test coverage on new functionality.” According to Scheaffer, enabling continuous testing is the ultimate goal of test automation. “Rather than testing being an event that happens at a point in time (or in the SDLC), testing is done constantly,” he said. “With continuous testing, test cases are generated automatically from the business requirements, and then test cases are executed automatically as the code is being written. This results in tests that are both more comprehensive and more efficient—enabling quality software to be delivered faster, at lower costs.” Lambert said he believes the future of test automation lies with the advance of artificial intelligence and machine learning. “Artificial intelligence and machine learning work better the more data you give it,” he said. “What I see is AI and machine learning really becoming powerful when it’s trained. So that training can take the form of human interactions within an application and an engine watching to see what the human does, and then trying to variate on that. It can take the form of a human providing some kind of tests and saying this test validates some functionality and using that to train the engine. That’s the direction that I see tests going.”
A guide to automated testing tools
FEATURED PROVIDERS
• Applitools: Applitools provides visual AI testing solutions to test automation engineers, DevOps teams, front end developers, QA experts, and digital transformation executives. Its Application Visual Management framework automates all aspects of application development. By using visual AI, problems are detected early, before the application is released into production. Application Visual Management along with Applitools’ Monitoring platform allows for acceleration and automation of the entire software delivery cycle.
• CA Technologies: Only CA Technologies offers next-generation, integrated continuous testing solutions that automate the most difficult testing activities — from requirements engineering through test design automation, service virtualization and intelligent orchestration. Built on end-to-end integrations and open source, CA’s comprehensive solutions help organizations eliminate testing bottlenecks impacting their DevOps and continuous delivery practices to test at the speed of agile, and build better apps, faster.
• Parasoft: Parasoft’s automated software testing tool suite brings efficiency to the entire software development team, by automating time-consuming testing tasks for developers and testers, and helping managers and team leaders understand what to focus on. With innovative technologies that combine SDLC test automation with smart analytics, Parasoft technologies provide insights up and down the testing pyramid to enable organizations to succeed in today’s most strategic development initiatives, and release high-quality software, faster.
• Tricentis: Tricentis is recognized by both Forrester and Gartner as a leader in software test automation, functional testing, and continuous testing. By enabling test automation rates of over 90%, we help Global 2000 companies control business risk as they adopt DevOps, Agile, and Continuous Delivery. Our integrated software testing solution, Tricentis Tosca, provides a unique Model-Based Test Automation and Test Case Design approach to functional test automation—encompassing risk-based testing, test data management and provisioning, service virtualization, API testing and more.
• BrowserStack is the industry-leading cloud web and mobile testing platform, enabling developers to test their websites and apps on different operating systems and mobile devices. Optimized for CI/CD environments, BrowserStack accelerates time to market, providing error-free testing by concurrently running automated tests for native and hybrid mobile apps on thousands of iOS and Android devices.
• IBM’s Rational Test Workbench covers testing from the mainframe through to microservices and mobile. IBM’s approach provides a consistent experience across functional, API, and performance testing so you can create integrated tests across every layer of your system.
• LogiGear: With the no-coding and keyword-driven approach to test authoring in its TestArchitect products, users can rapidly create, maintain, reuse and share a large scale of automated tests for desktop, mobile and web applications.
• Micro Focus’ testing solutions keep quality at the center of today’s modern applications and hybrid infrastructures with an integrated end-to-end application lifecycle management solution that is built for any methodology, technology and delivery model. For more information visit Micro Focus DevOps.
• Microsoft provides a specialized tool set for testers that delivers an integrated experience starting from agile planning to test and release management, on premises or in the cloud.
• Mobile Labs’ deviceConnect provides secure remote access to mobile devices for managed use by developers and testers. With deviceBridge (an extension to deviceConnect), many test automation frameworks and developer tools used for checkout and debugging can retain cloud-based devices as if locally connected by USB.
• Progress: Telerik Test Studio is a test-automation solution that helps teams be more efficient in functional, performance and load testing, improving test coverage and reducing the number of bugs that slip into production.
• Rogue Wave: Klocwork boosts software security and creates more reliable software. With Klocwork, analyze static code on-the-fly, simplify peer code reviews, and extend the life of complex software. Thousands of customers, including the biggest brands in the automotive, mobile device, consumer electronics, medical technologies, telecom, military and aerospace sectors, make Klocwork part of their software development process.
• Sauce Labs provides the world’s largest cloud-based platform for automated testing of web and mobile applications. Optimized for use in CI and CD environments, and built with an emphasis on security, reliability and scalability, users can run tests written in any language or framework using Selenium or Appium, both widely adopted open-source standards for automating browser and mobile application functionality.
• SmartBear provides a range of frictionless tools to help testers and developers deliver robust test automation strategies. SmartBear automation tools ensure functional, performance, and security correctness within your deployment process, integrating with tools like Jenkins, TeamCity, and more.
• SOASTA’s Digital Performance Management (DPM) Platform enables measurement, testing and improvement of digital performance. It includes five technologies: mPulse real user monitoring (RUM); the CloudTest platform for continuous load testing; TouchTest mobile functional test automation; Digital Operation Center (DOC) for a unified view of contextual intelligence accessible from any device; and Data Science Workbench, simplifying analysis of user performance data.
• Testplant: Eggplant’s Digital Automation Intelligence Suite empowers teams to continuously create amazing, user-centric digital experiences by testing the true UX, not the code. Eggplant AI uses artificial intelligence and machine learning to hunt defects, and auto-generates test scripts to increase testing productivity, efficiency, speed, and coverage.
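Model-based test automation, which several of these vendors highlight, derives test cases from a model of the application rather than from hand-written scripts, so a model change regenerates the suite instead of forcing script maintenance. A toy sketch of the idea (the login/checkout states and actions are hypothetical, not any vendor’s actual model format):

```python
# State model: state -> list of (action, next_state)
MODEL = {
    "start":   [("open_login", "login")],
    "login":   [("valid_creds", "home"), ("bad_creds", "login")],
    "home":    [("add_item", "cart"), ("logout", "start")],
    "cart":    [("checkout", "receipt")],
    "receipt": [],
}

def generate_paths(state="start", path=(), max_len=4):
    """Enumerate action sequences up to max_len; each is a test case."""
    yield list(path)
    if len(path) < max_len:
        for action, next_state in MODEL[state]:
            yield from generate_paths(next_state, path + (action,), max_len)

# Every non-empty path through the model becomes a candidate test.
cases = [p for p in generate_paths() if p]
```

When a new user story adds a state or an action, regenerating `cases` updates the suite — that regeneration step is what replaces the script-maintenance overhead.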
Data connectivity: a multi-issue problem
< continued from page 12
(ETL). What I’m seeing that’s largely driven in some ways by the whole data science, big data movement, is a move away from ETL and a move more towards ELT style.” He explained, “Instead of extracting data from systems, transforming it into the format you want for long term storage, and then putting it into your data warehouse, the ultimate move from just a pure warehousing data perspective has been just dump everything into a data lake. There is no qualified set of records necessarily. You basically have a data lake of stuff that you load and transform on demand. This has really given rise to things like data prep tools, as an example of something that previously was part of the ETL process and driven by IT. Now it’s driven in terms of data scientists, and various folks on the business side who need access to the data when they want, leveraging more citizen-oriented tools to do the data transformation, data access piece.” Using this new approach preserves the fidelity of the original data set in a way that your typical ETL process doesn’t. This represents a fundamental change.
He described the microservices landscape in general and hybrid architectures. “What’s typically happening here is every service often has its own database. So a customer service might have a customer database. A product service might have a product database, and as you scale these services up, the horizontal scalability of that service needs to make sure that they have a consistent view of that data.” It’s not as simple as it sounds because you may hit a level of scalability that you can’t grow beyond.
On the other hand, he said he believes the simplest pattern is the one that’s still the most dominant pattern, which is to not have one database per microservice. The end game, according to him is, “You end up with a shared database architecture behind all of the microservices much as in the old style architecture, but it still removes a lot of that need for dealing with the nuances of this because you ultimately defer to the database system. So if you have a clustered database system, you’re gonna hit a certain level of scalability, and because everybody’s using a shared database, you just don’t have those issues of consistency to worry about in the same way.”
Picco pointed out a third approach. “Obviously you can relax certain constraints, and so if you’re not dealing with fully transactional environments and you can deal with eventually consistent sort of scenarios, there’s a wealth of new databases to choose from like Apache Cassandra and Spark. There’s a lot of infrastructure built on the Hadoop ecosystem today. I mean, we just had an explosion of different databases that are really fit for purpose, and so if your purpose isn’t high-scale online transaction processing (OLTP), then likely there’s a database to fit your need.”
He listed several new SQL vendors that also achieve a different level of performance but still enable a full active database. “Google Spanner is a great example. You got Cluster XDB and Volt DB that are out there. They’re combining the best of in-memory along with new architectural patterns from an OLTP perspective that I think a lot of transactional-style applications need.”
“The data warehouse approach of more of a record-oriented, relational or star schema approach, has really served its purpose well.” —Dion Picco
APIs vs drivers in the new world
The difference between an API and a data driver is that an API is a specification that describes what to do. A driver
is an implementation that describes how to do it. They’re both still relevant in the modern services world. JDBC and ODBC are standards that have been around for more than a decade. According to CData’s Sharma, they’re technologies that are going to stay around. He said, “The choice of connectivity or any of these driver technologies is dependent on the platform choice that people make. While ODBC is still very popular, I see a little bit of a trend, not much, of people moving away from the native driver technology. ODBC is C/C++-based technology, and I see people moving to JDBC and ADO.NET instead of that. In some enterprises, I also see resistance to JDBC, just because of how Oracle is handling Java. Separate issue, but still Java is very popular. I see a little bit of a trend of people moving away from native. People had that impression that native technologies were required for performance reasons, but I don’t think that’s true anymore.” The Java and .NET runtimes have matured so much that they can be comparable to native technologies, and offer other advantages on top of that. Mobile platforms are taking off in popularity. Sharma noted, “Our driver technologies offer direct connectivity from the driver to the data source. What’s more popular with the mobile platforms is to go through an intermediary. If you’re building a mobile application, what you would do is, that application would talk to some server somewhere, which might be, again, JDBC or ADO.NET, or something, and that’s where the connectivity would happen, instead of building the connectivity right into the application on the device itself.” The challenges that data connectivity presents are multi-faceted and require that key players on both the business and technical sides of the issue collaborate and come up with innovative solutions. Sharma predicted, “People think connectivity’s easy, but it’s going to take a lot of effort to actually get it right.”
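The ELT pattern described earlier in the article — load the raw records as-is, then transform on demand inside the store — can be sketched with an in-memory database standing in for the data lake or warehouse (the event data is a hypothetical illustration):

```python
import sqlite3

# Load step: raw records go in untouched, preserving full fidelity.
raw_events = [
    ("2018-05-01", "login", "alice"),
    ("2018-05-01", "login", "bob"),
    ("2018-05-02", "purchase", "alice"),
]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE raw_events (day TEXT, action TEXT, user TEXT)")
db.executemany("INSERT INTO raw_events VALUES (?, ?, ?)", raw_events)

# Transform step happens only when a question is asked -- the shape is
# decided at query time, not at ingest time as in classic ETL.
daily_logins = db.execute(
    "SELECT day, COUNT(*) FROM raw_events WHERE action = 'login' GROUP BY day"
).fetchall()
```

Because the raw rows are kept, a different analyst can later apply a different transform to the same data — the fidelity that ETL discards up front is exactly what ELT retains.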
Guest View BY ERIC NAIBURG
“Done” should include security
Eric Naiburg is a vice president at Scrum.org.
In today’s fast-paced, digital world, cybersecurity attacks occur daily. Businesses are scrambling to protect their assets and consumers fear for the safety of their personal information. Even large enterprises with ample resources and expertise aren’t safe, with LinkedIn, Yahoo, Sony, Target and the IRS all falling victim to malicious hackers. According to recent research, the average U.S. company of 1,000 employees or more spends $15 million a year battling cybercrime (up 20 percent compared to last year). Additionally, Cybersecurity Ventures estimates that global spending on cybersecurity products and services could top $1 trillion over the next five years.
Security deserves its own line item
Software delivery teams are increasingly working in an Agile manner, and most of them use the Scrum framework to deliver their products. To realistically combat rising cybercrime levels, security should be part of everything we do and a required part of releasing any software. In the Scrum framework, for instance, we have the concept of the “Definition of Done.” The Scrum Guide, the official body of knowledge of Scrum, describes it this way: When a Product Backlog item or an Increment is described as “Done,” everyone must understand what “Done” means. Although this may vary significantly per Scrum Team, members must have a shared understanding of what it means for work to be complete, to ensure transparency. This is the definition of “Done” for the Scrum Team and is used to assess when work is complete on the product Increment.

For many Scrum teams, the “Definition of Done” includes items such as: all unit tests have been written and passed, deployment scripts have been written, product backlog item assumptions have been fully met, and code has been reviewed and approved. So, where is security? Too often, organizations and their development teams assume that security is covered as part of the code review, or defined in the non-functional requirements, backlog and process items for operations. The truth is, we don’t live in a perfect world where everyone clearly understands each other and everything is black-and-white. “Done” needs to explicitly call out secure coding, code scanning, security tests and any other elements of security that should be implemented as part of delivering production-ready software.
Adapt security governance for Agile environments
In addition to making security part of the “Definition of Done,” organizations should automate and delegate security governance wherever possible. By moving beyond project-level governance, development teams can streamline their review process, reduce the number of handoffs and optimize overall efficiency. Instead of relying on developers to manually perform vulnerability scanning, static and dynamic scanning solutions can be implemented, and plug-ins can run passively, notifying developers if and when security issues arise.

To streamline security governance, it’s also important to rethink historical training processes. Many organizations require developers to attend formal security training; however, this practice can drastically hinder time-to-market, and developers often have difficulty applying what they learn in these trainings to their daily work. Instead, organizations should consider monitoring individual coding behaviors, identifying the root cause of any insecure coding outcomes, and designing specific training plans for the developers or teams in need.
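To make the “passive scanning plug-in” idea concrete, here is a deliberately tiny sketch of the kind of check such a tool runs on each commit. Real teams would wire an actual scanner (SonarQube, Bandit, Fortify and the like) into the CI pipeline; the rule names and patterns below are illustrative assumptions, not any vendor’s actual checks.

```python
import re

# Toy stand-in for a static security scanner: each rule pairs a
# pattern with a human-readable finding.
RULES = [
    (re.compile(r"\beval\s*\("), "use of eval()"),
    (re.compile(r"shell\s*=\s*True"), "subprocess call with shell=True"),
    (re.compile(r"password\s*=\s*['\"]"), "hardcoded credential"),
]

def scan_source(source: str) -> list:
    """Return (line_number, finding) pairs for every rule a line trips."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

if __name__ == "__main__":
    snippet = 'password = "hunter2"\nresult = eval(user_input)\n'
    for lineno, message in scan_source(snippet):
        print(f"line {lineno}: {message}")
```

The point of running this passively on every change, rather than relying on a manual review step, is exactly the handoff reduction described above: the developer is notified at commit time, not weeks later in a governance gate.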
The cost of deprioritizing security is too high
The reality is, the way most organizations currently provide security guidance to development teams (i.e., publishing security guidelines in a reference document and holding calendar-driven security training) is not very agile, and it misses people along the way. With Scrum, we are always inspecting and adapting how we work, which enables teams to understand what is needed and when, so that they can evolve and learn on an ongoing basis. With data indicating that cybercrime damages could cost the world $6 trillion annually by 2021, now is the time to prioritize security once and for all. Rather than viewing security as late-stage, add-on work, organizations need to consider it an essential building block of software delivery. z
Analyst View BY ROB ENDERLE
Smartphone sales mirror PC declines
According to Gartner, smartphone sales have moved into decline, seeming to mirror the trend in PCs. I think the reason for both declines is the massive reduction in demand generation in the segment. For PCs, it was the result of Microsoft and Intel cutting back sharply on their marketing efforts, along with the PC OEMs. For smartphones, it is Apple’s decision to stop being the market maker for the segment and Samsung’s unwillingness to pick up that mantle for more than a brief period. Let’s look at why smartphone sales are likely to continue to slow over the near term.
Demand generation
We’ll start with the PC market as it evolved in the 1990s and slowed in the 2000s, and then talk about why slowing smartphone sales could happen far faster. Part of the reason for any strong market is the level of investment and execution in demand generation marketing. The 1990s were powerful for PCs, with lots of tech TV shows (CNET was even sold to CBS), big launches (I still remember the huge Windows 95 launch), and lots of well-done TV advertising, like Intel’s old Bunny People spots. There was drama, too: we even had the Mac vs. PC ads that Apple rolled out, all of which got folks excited about regularly cycling their PCs as the market grew. But the PC market eventually became saturated, growth slowed, and everyone cut back on their marketing spend for demand generation. The TV shows went off the air, the magazines got smaller and many failed, and the web sites mostly went onto life support.

Smartphones started out focused largely on business and promoted by advocates, with Research In Motion, now BlackBerry, setting the early pace. But when Apple entered, it did so with massive demand generation efforts that not only allowed it to race past Research In Motion and Palm, but forced the entire market into an iPhone-clone mode. That made Apple the singular market maker; it was Apple’s draft that was allowing everyone else building iPhone clones to gain share. And because the iPhone was so expensive, it allowed a robust segment of lower-cost phones to thrive underneath.
But just as the PC vendors cut back sharply on their demand generation efforts, Tim Cook’s shift of Apple from the marketing-driven company it was under Jobs to a product-driven company had an adverse impact. As Apple increased the number of assorted products, it no longer had the money to market each individual one, and seemed to cut overall spending as a result. Samsung has stepped into the breach several times, and for short periods those efforts did spike Samsung sales. But they mostly benefited Samsung phones and weren’t sustained long enough for a stronger market effect. In addition, Samsung has a far deeper line of phones, so even if it had run a more sustained marketing program, the benefit to other brands would have been far more limited. In short, it isn’t yet clear that a vendor like Samsung can, alone, be a market maker.
Rob Enderle is a principal analyst at the Enderle Group.
Spikes, then slower sales again
Unless there is a reason to do a broad refresh, and 5G certainly will be that reason, sales will slow. And once the 5G refresh is done, they will slow again. We should also see sales slow even further as we approach the 5G refresh, as coverage of that coming event should have buyers defer purchases in anticipation of it. This is called the Osborne Effect, named after a PC company that pre-announced new hardware and stalled sales of its current hardware.

There is one other coming event that could result in churn over the next several years, and that is the emergence of self-driving cars. These cars will need a very different smartphone experience than what we have in them today (you won’t have apps like Waze, for instance, and may have a far bigger in-car screen for entertainment). This should result in some substantial physical changes to phone design, but that is all likely out in the 2020+ period. Getting ahead of these changes with upgrade programs and designs that anticipate coming needs should allow smartphone vendors to outperform peers during these interesting times. But unless we get another Apple-like market maker, overall trends for smartphones are likely to continue flat to declining long term. z
Industry Watch BY DAVID RUBINSTEIN
In software, words matter
David Rubinstein is editor-in-chief of SD Times.
This month’s column is about words — specifically, the textual content that appears in your applications. While teams of engineers and designers spend hours thinking about the user interface, the actual words written into an application are often an afterthought. An easy way to understand the importance of word content: you click on something in an app or on a website, and an incomprehensible error message comes back. How many times have you gotten one of those messages and couldn’t be sure what exactly went wrong?

Content is the forgotten aspect of applications, partly because it’s hard to change once the text is hard-coded into strings, and partly because it often falls to developers to manage, and that’s really not what their jobs are, or should be, about. “Developers don’t want to be in the business of managing text,” said May Habib, co-founder and CEO at Qordoba, which has created a Strings Intelligence platform to give project managers access to the text in strings without involving engineers. “There’s a scrutiny every sentence goes through in the editorial world, but not in software products. Engineers,” she stated flatly, “shouldn’t be writing copy.” The word content of a product correlates strongly with engagement and conversion, Habib added.

Qordoba has been in existence since 2014, starting in the product localization space, which involves translating content into local languages. “People don’t think about the words in their applications until they have to localize. When copy has inconsistencies, you can find them when localizing. If going from English to French, you might see there are a number of different ways that the same thing has been said, and you catch that in localization,” Habib said.

The Strings Intelligence platform uses machine learning to automate extraction of text from hard-coded strings, to “give control of copy back to the people who should have it,” explained Adam Frankl, VP of marketing at Qordoba.
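Qordoba’s extraction pipeline is proprietary, but the core idea — pulling hard-coded string literals out of source code so non-engineers can review them — can be sketched in a few lines. The function below is my own illustrative toy using Python’s standard ast module, not Qordoba’s API; real tools handle many languages and file formats.

```python
import ast

def extract_strings(source: str) -> list:
    """Collect string literals from Python source, with line numbers.

    A toy stand-in for the extraction a strings-management platform
    automates, so copy can be reviewed without touching the code.
    """
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            found.append((node.lineno, node.value))
    return found

sample = 'title = "Welcome back"\nprint("Unknown error occurred")\n'
print(extract_strings(sample))  # [(1, 'Welcome back'), (2, 'Unknown error occurred')]
```

Even this crude pass surfaces the kind of inconsistency Habib describes: run it over a codebase and the same message written three different ways shows up side by side.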
Managing content, he said, has “always been viewed as too hard, having to go into hard-coded strings and make changes. We’ve created this platform with the focus of improving developers’ lives.”

Added to the platform last month was the ability to do content scoring, available in 21 languages. Habib explained that it’s a way for production teams to know how good their copy is, scored out of 100. “It’s a summary of grammar, language style, emotion and personality,” Habib said. “What reading level is the person at? Is the content style casual or technical?” People understand a score; it quantifies what’s correct or incorrect in the wording, she explained. Habib said that Qordoba’s platform uses natural language processing to assess the emotion underlying the word content, helping companies ensure they are staying on message and presenting it in the way they intend.

In a statement announcing the content scoring feature, Habib wrote: “For many companies today, managing written copy in their applications is a mess. Final copy is developed in Google Docs or Sketch, copied into source code by engineers, who end up writing more copy on their own, and the final product lives across dozens, sometimes hundreds, of files. Copy is created by dozens, and in the enterprise, hundreds, of different authors, with no automated system to see if people are adhering to a company’s brand voice or style. Some text is hardcoded into source code, where it becomes difficult to find and risky to update.”

Interfaces matter. Words matter. Qordoba is working to make sure people can customize content in software interfaces, apps and products, much like they already do in email and CRM systems. “Our plan is to have a permanent spot in the tech stack used to manage all the text everywhere,” Habib said. “GitHub is used to manage code — Jenkins is used to manage build and deployment — Qordoba is used to manage all of the words in the user experience.” Think of this magazine.
We spend a lot of time designing the pages, creating interesting layouts, choosing the right artwork. Between you and me (don’t tell the art director), all of that is window dressing. It’s nice to look at and gives the reader a positive experience. But it’s the words that keep readers coming back, that give us engagement. It’s no different with your applications. z