SD Times - March 2018


MARCH 2018 • VOL. 2, ISSUE 9 • $9.95 • www.sdtimes.com





Contents

VOLUME 2, ISSUE 9 • MARCH 2018

FEATURES
A growing Internet problem (page 8)
Preparing for the GDPR in the eleventh hour (page 22)
Release automation: All about the pipeline (page 28)
Buyers Guide: Managing your APIs (page 35)

NEWS
6   News Watch
12  Java 10: Cloud, serverless focus
14  10 books every web developer should read that will increase your software IQ
18  The two big traps of code test coverage
20  XebiaLabs raises $100M in funding
20  CloudBees acquires CD and CI company Codeship

COLUMNS
41  ANALYST VIEW by Peter Thorne: IoT needs ordinary applications too
42  GUEST VIEW by Rick Orloff: CEOs: The biggest shadow IT threat?
43  INDUSTRY WATCH by David Rubinstein: Managing data across multiple clouds

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2018 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at subscriptions@d2emerge.com.




www.sdtimes.com

EDITORIAL
EDITOR-IN-CHIEF: David Rubinstein, drubinstein@d2emerge.com
NEWS EDITOR: Christina Cardoza, ccardoza@d2emerge.com
SOCIAL MEDIA AND ONLINE EDITOR: Jenna Sargent, jsargent@d2emerge.com


INTERNS: Ian Schafer, ischafer@d2emerge.com; Matt Santamaria, msantamaria@d2emerge.com
ART DIRECTOR: Mara Leonardi, mleonardi@d2emerge.com


CONTRIBUTING WRITERS: Alyson Behr, Jacqueline Emigh, Lisa Morgan, Frank J. Ohlhorst
CONTRIBUTING ANALYSTS: Cambashi, Enderle Group, Gartner, IDC, Ovum

CUSTOMER SERVICE
SUBSCRIPTIONS: subscriptions@d2emerge.com
ADVERTISING TRAFFIC: Mara Leonardi, adtraffic@d2emerge.com
LIST SERVICES: Shauna Koehler, skoehler@d2emerge.com
REPRINTS: reprints@d2emerge.com
ACCOUNTING: accounting@d2emerge.com

ADVERTISING SALES
PUBLISHER: David Lyman, 978-465-2351, dlyman@d2emerge.com


WESTERN U.S., WESTERN CANADA, EASTERN ASIA, AUSTRALIA, INDIA: Paula F. Miller, 925-831-3803, pmiller@d2emerge.com

PRESIDENT & CEO David Lyman CHIEF OPERATING OFFICER David Rubinstein

D2 EMERGE LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803, www.d2emerge.com






NEWS WATCH

AngularJS to get one more major release
The JavaScript-based frontend app framework AngularJS is expected to get one more significant release this year. AngularJS has been pushed aside in favor of its successor, Angular, but the Angular team realizes many developers still rely on AngularJS and does not plan to abandon the framework. “We understand that many developers are still using AngularJS, and that the migration process to Angular takes time and energy, but we also are aware that developers want clarity on the future AngularJS development plans,” according to the team. As a result, the team is currently working on AngularJS 1.7. This version will remain under active development until June 30, 2018, with patch releases published as point releases until then. Starting on July 1, the release will enter a three-year long-term support period, during which the team will turn its focus to bug fixes that meet the following criteria: a security flaw is found in the 1.7.x branch; a major browser is updated and causes AngularJS 1.7 apps to stop working; or the jQuery library releases an update that causes AngularJS 1.7 apps to stop working.

Google announces Android KTX for Kotlin development
The Android team has announced a preview of Android KTX, a set of extensions designed to improve the process of writing Kotlin code for Android. It does this by providing an API layer on top of the Android framework and Support Library. Android KTX will enable developers to convert strings to URIs more naturally, according to the team. It will also make editing SharedPreferences simpler than it is with plain Kotlin. Android KTX is more efficient at translating path differences and trims several lines of code from the process of triggering an action with View.onPreDraw. Currently, the part of Android KTX that supports the Android framework is available. The part that supports the Support Library will arrive in an upcoming Support Library release; the team has indicated that it is waiting for the API to stabilize before this happens.

Microsoft previews Progressive Web Apps on Edge, Windows 10
After announcing its vision last year to bring Progressive Web Apps (PWAs) to Windows 10, Microsoft is starting to preview PWAs and has outlined its roadmap to bring them to the Microsoft Store. Starting with EdgeHTML 17.17063, Microsoft has enabled Service Workers and push notifications by default, completing the suite of technologies that lays the technical foundation for PWAs. Going forward, the company will be running crawling experiments and indexing quality PWAs to list them in the Microsoft Store. In the next release of Windows 10, these PWAs will show up in the Microsoft Store, and developers can now also submit PWAs to the store themselves. According to Microsoft, listing a PWA in the Microsoft Store will help developers get more insight into user satisfaction. Developers will be able to access user reviews and ratings and get analytics on installs, uninstalls, shares, and performance. It also makes it easier for users to access the apps on devices where the browser is not a natural entry point, such as Xbox, Windows Mixed Reality, and other non-PC devices.

Report shows broken IT processes are causing employees to leave their jobs
A third of employees are currently looking for new jobs, and over 86 percent claim that broken IT processes are the driving factor in their decision to leave their position, according to a new study funded by workflow solution provider Nintex. The Definitive Guide to America’s Most Broken Process was published by Nintex, with the survey conducted by Lucid Research in July of 2017 among more than 1,000 full-time employees at U.S. enterprises. The roles of the employees included human resources, sales, finance, and IT. “It was interesting to find that most processes employees consider ‘broken’ are everyday processes like onboarding paperwork and submitting an IT request; they’re not highly strategic things,” said Ryan Duguid, senior VP of technology strategy at Nintex. “It’s all the everyday little things that are broken that make employees mad, slow them down, and prevent them from reaching their full potential and so on. The most broken processes are right under our noses.” Based on survey respondents, the top five broken corporate processes include technology troubleshooting, access to tools and documents that enable job performance, annual performance reviews, promotions, and employee onboarding.

Trello boards get a “Power-Up” with new directory
Atlassian’s visual organization tool Trello has released a new solution for discovering, managing and organizing Power-Ups. With the new Power-Ups directory, development teams can easily search for the right Power-Ups for their workflow and add them to their Trello boards. The new Power-Ups directory will feature more than 80 public Power-Ups designed for popular applications such as Slack, Google Drive and Bitbucket. Developers can also custom-build Power-Ups for their specific business needs and share them privately. The new directory will also include a featured section to highlight some favorite Power-Ups, as well as articles on how to maximize Power-Up capabilities. Developers will be able to search the directory by categories such as analytics and reporting, automation, developer tools, and IT and project management.



Mozilla releases Internet of Things gateway solution
Mozilla has announced a new solution that enables anyone to build their own gateway for the Internet of Things. Things Gateway is part of the company’s experimental framework Project Things, an open framework focused on security, privacy and interoperability for IoT. The gateway provides an implementation of a Web of Things gateway. “The ‘Web of Things’ (WoT) is the idea of taking the lessons learned from the World Wide Web and applying them to IoT. It’s about creating a decentralized Internet of Things by giving Things URLs on the web to make them linkable and discoverable, and defining a standard data model and APIs to make them interoperable,” the company wrote. With the launch of Things Gateway, the company wants to make it easier to build a gateway with devices like the Raspberry Pi, along with web-based commands and controls and voice-based commands. Other features of the gateway include a rules engine for “if this, then that” logic, a floor-plan view, additional device type support such as smart plugs and colored lights, an add-on system with support for new protocols and devices, and a new system for using third-party apps.

Red Hat to acquire CoreOS for $250 million
Red Hat has announced plans to acquire Kubernetes and container-native solution provider CoreOS. CoreOS is known for its enterprise Kubernetes platform Tectonic, which is designed to provide automated operations and portability across private and public cloud providers. “The next era of technology is being driven by container-based applications that span multi- and hybrid cloud environments, including physical, virtual, private cloud and public cloud platforms. Kubernetes, containers and Linux are at the heart of this transformation, and, like Red Hat, CoreOS has been a leader in both the upstream open source communities that are fueling these innovations and its work to bring enterprise-grade Kubernetes to customers. We believe this acquisition cements Red Hat as a cornerstone of hybrid cloud and modern app deployments,” said Paul Cormier, president of products and technologies for Red Hat. Red Hat will combine CoreOS’s capabilities with its Kubernetes and container-based portfolio, including Red Hat OpenShift. Other CoreOS solutions include the enterprise container registry Quay, the lightweight Linux distribution Container Linux, the distributed data store for Kubernetes etcd, and the application container engine rkt.

Apache Kibble is now a Top-Level Project
The Apache Software Foundation has announced that Apache Kibble is now a Top-Level Project. Apache Kibble is a reporting platform that collects, aggregates, analyzes, and visualizes activity in software projects and communities. It provides users with a detailed view of a project’s code, discussions, issues, and individuals. Kibble is an open source version of the enterprise reporting platform Snoot, which is used by dozens of Apache projects and by the Apache Software Foundation for its official reports, such as the ASF Annual Report, according to the foundation. While many projects become a Top-Level Project after entering the Apache Incubator, Apache Kibble entered the Apache Software Foundation directly as a Top-Level Project.

SmartBear announces new API testing and documentation tool
SmartBear has announced the release of Swagger Inspector, a free, cloud-based API testing and documentation tool. The tool is designed to simplify the validation of APIs and generate OpenAPI documentation. “As APIs are increasingly playing a pivotal role in digital transformation, it becomes imperative to deliver quality, consumable APIs at a faster pace,” said Christian Wright, EVP and GM of API business at SmartBear. “We built Swagger Inspector to simplify the API development process by empowering developers to easily test and auto-generate their OpenAPI documentation with a single tool on the cloud.” With the tool, developers can check that APIs are working as intended without having to add components into existing code or processes. According to the company, Swagger Inspector was designed with no learning curve. Developers can check any API, including REST, SOAP and GraphQL. In addition, Swagger Inspector enables developers to create OpenAPI documentation for any API and host it on the design and documentation platform SwaggerHub.

Pivotal Container Service now available
Pivotal and VMware have announced the general availability of their Kubernetes-based container service, Pivotal Container Service (PKS). The two companies, along with Google Cloud, announced they would be collaborating on PKS at a VMware conference in August, and the initial version was released late last year. PKS is designed to give both operators and developers the ability to run Kubernetes at scale in production. Version 1.0 of the solution includes multi-cloud capabilities, support for vSphere and GCP, support for Kubernetes 1.9.2, Cloud Native Computing Foundation certification, advanced container networking, enterprise-grade security, and multi-tenancy with cluster-level security. Other key features include on-demand provisioning, a container registry, NSX-T network virtualization, and fully automated operations. z



Cyberextortion: A growing Internet problem





Kidnapping? That’s so last century. Today, compromising photos on social media and national security secrets are where the money is.

BY ALYSON BEHR

“If you’re gonna commit a crime,” as “Slick Willie” Sutton said when asked why he robbed banks, “that’s where the money is.” Also known as “Willie the Actor” for his ability to disguise himself, Sutton stole an estimated $2 million during his 40-year robbery career. Modern-day cyber criminals have adopted this approach to digital extortion and blackmail. They’ve become good at going where the money is and disguising themselves on their victims’ networks until it’s time to levy the threat to pay up.

Easy money
Ed Cabrera, chief cyber security officer at Trend Micro, said, “The motivation for extortionists is simple: It works.” In many cases, cyber criminals are able to monetize their criminal activity within minutes of their initial attack. Any type of extortion-related activity, be it through ransomware or by other means, has changed the threat landscape coming out of the cyber-criminal underground. With traditional data breaches, monetization can take months: attackers not only have to penetrate and exploit a particular network to find data that they can sell in the criminal underground, they have to exfiltrate that data, then parse it before they can sell it. David Perry, a noted computer security consultant, pointed out that the main enticement for criminals to go digital is that “we’ve put everything onto computers, and put all the computers online.” He added, “Extortion isn’t the only thing that’s happening. It’s only one of a vast panoply of things that involve everything from me hacking into your

computer just to prove how tough I am, to nation-states attacking one another.”

The Players
Perry described six silos of actors that have to be dealt with. “The first are trolls. I would call them bullies, under a different context. Although we think of them as being a very low point, in terms of extortion and blackmail, they actually have driven children to suicide. So I would say that’s a very important category.” Second, there are hackers. “There’s a great range of these, from wannabe script kiddies with orange hair you meet at DEFCON for their first year, to people who are so dangerous the CIA is afraid of them.” He said they mostly want companies to know how tough they are.




Then come hacktivists, groups of hackers banded together for political action. These groups are more dangerous than any individual. Perry said, “Beyond hacktivists are criminals, which comprise maybe 100,000 different groups ranging from one person, for example your dirtbag nephew who’s hacking into you to make money, all the way to the Russian mafia. It is a whole world of criminals. There’s no one adversary that we can point the magic bullet at.” This is where much of the extortion activity is coming from. Next are corporations. According to Perry, corporations are more dangerous than criminals because they believe that as long as they are pursuing profit and adding value for their shareholders, whatever they do is right. He added, “Criminals at least know what they’re doing is against the law.” The last silo is government. “You could argue that it’s very difficult to tell the corporations from the governments. I’m unwilling to admit that just yet,” Perry said.

Enter Crime as a Service
Cabrera said that what’s aided and abetted attacks is the growth of “crime as a service” in the criminal underground. Cabrera describes the burgeoning market of groups offering these services as “cyber tech startups.” They’re able to offer ransomware as a service to budding cyber criminals who are not as capable or don’t have the infrastructure needed to successfully run an attack. On top of these services, they’re able to deliver those attacks to a demographic of the criminal’s choosing, and handle the entire payment processing using mostly Bitcoin or other cryptocurrencies. “What’s really enabled digital extortion is not so much the data mining aspect to it, it is all the other services and capabilities that have grown on the service web that have equally grown in the criminal underground,” he said. There are groups that develop a capability and a capacity that lends itself to one type of attack or another. Cabrera warned, “Criminals are able to communicate and collaborate and obtain services that they might be missing quite easily.” For example, if they specialize in digital extortion and want to move into other practices, such as traditional DDoS attacks to further their digital extortion activities, they can definitely find individuals or groups offering those services.


Technology alone is not the fix

What can a company do to make it more difficult for extortionists? A holistic program will better protect against infiltration. Companies have to learn how to segment their networks and access to those segments, so that the sysop who runs email doesn’t have the keys to get into deep storage, and so that the two aren’t co-linked for the same access. Perry advised, “Don’t leave credit card info on the Point of Sale (POS) devices. Back them up immediately and take them out to an air gap network where they aren’t connected to the internet.” With accurate backups in place, you have the ability to re-flash and go back to a known-good state in an instant if you have to, provided plans are in place to do that. Have extra computers waiting in advance. Think about running a virtual operating system whose contents you dump every time you turn off the computer and load fresh in the morning. Cabrera recommended keeping three different backups in two different formats, with one air-gapped. “The idea that you’re going to be resilient enough based on the infrastructure and the security posture that you have, is not a
continued on page 11 >



Full PageAds_SDT09.qxp_Layout 1 2/23/18 11:18 AM Page 10




Cyberextortion < continued from page 9

good approach,” he said. “You must have a layered approach from many different angles.” He points out that companies either have to mitigate the risk, accept it, or transfer it. “So, when all else fails, purchasing cyber-insurance is also an option. There’s no one silver bullet that will help you survive these types of attacks.” Richard Santalesa, founding member of the SmartEdgeLaw Group and of counsel to the Bortstein Legal Group, recommended a strategy of planning for the attack and performing dry-run fire drills. He used as an example the shipping company Maersk. “As part of their recovery from ransomware, they recreated something like 40,000 workstations, 1,000 servers, over the course of two weeks. That was a

GDPR: Changing the rules on both sides
The EU’s General Data Protection Regulation, or GDPR, is set to take effect in May. It applies to companies processing the personal data of EU residents and to businesses residing in the Union, regardless of the company’s location. In other words, it applies to any business that does business with any person living anywhere in the EU, without regard to where that business is headquartered. Essentially, this means everyone.
Richard Santalesa, founding member of the SmartEdgeLaw Group and of counsel to the Bortstein Legal Group, says, “It’s going to have a huge effect. The EU has never been a fan of U.S. tech companies, and this gives them another lever and baseball bat to go after them if they so choose. For instance, Amazon, Google and Facebook have all set up specific different privacy settings and encryption in direct response to the GDPR. It’s gonna have a dramatic effect on the big players for starters, but then everybody else as well.”
Other significant changes include the strengthening of consent conditions and the requirement that terms be written in understandable, plain language. Breach notification must be given within 72 hours of the breach’s discovery. Data subjects have the right to obtain confirmation from the data controller as to whether or not personal data concerning them is being processed, where and for what purpose. Data erasure, data portability, and Privacy by Design are also addressed in further protection of the data subject.
If companies haven’t already assessed their level of compliance, it’s time to do so. Penalties for non-compliance are stiff. According to the official GDPR site, organizations in breach of GDPR can be fined up to 4% of annual global turnover or €20 million, whichever is greater. This is the maximum fine, and the rules apply to both controllers and processors. In plain English, clouds will not be exempt from GDPR enforcement.
What this means for a company in the throes of a breach is that it must assess whether it’s worth paying the ransom, which could be considerably less than the fine, and securing its data, or not paying the ransom and reporting the breach within the required time frame. The downside here is that these hacks can have a devastating impact on company reputation if the data is released. With its fine structure, the GDPR has now given digital extortionists a calculator by which to determine a ransom demand that falls within the victim’s brand-protection loss tolerance, in other words, less than the potential fine. —Alyson Behr z


mammoth undertaking on the part of their IT group to do that, but that told me right away these guys had practiced this in advance and weren’t just looking at page 12A of the manual and saying, ‘What do we do now?’”

Be a better cybercitizen
Individual protection begins with being better educated. The ordinary end user has to understand what’s at stake with computer security, not just for themselves but for their family, their employer, and the nation they live in. Users need to know what they’re charged with protecting: their data, access, and reputation. “You’re protecting the security of your system, your relationships to other people, be those familial, work, or nation-state,” Perry said. More educated employees lead to better company security. Perry suggests there needs to be a reward for an end user who discovers that the network has been breached. One option available to victimized individuals, one that needs to be weighed carefully against the potential damage, is publishing the information themselves, thereby negating the extortionist’s leverage. It’s a tough call. Perry is working on an education program with Peter Cassidy of the APWG; it was unveiled at the United Nations’ Education Center in Vienna last fall. Ten years ago, Perry, Cassidy and the federal government founded a user awareness program called Stop.Think.Connect. as a resource to better educate end users.

What’s next?
Nothing is unhackable. “At the nation-state end of things, look for somebody to hack one of our drones and turn it back around on us,” warned Perry. Extortion will evolve to include critical infrastructure. Travel, automotive, air traffic, power, sewage, and water will fall victim. “We’ve already seen extortion perpetrated on hospitals. We haven’t seen it on the power grid yet, but we will,” he said. To quote the great Al Jolson, “You ain’t seen nothing yet.” z





Java 10: Cloud, serverless focus
BY JENNA SARGENT

Java 10 will be released in March with a focus on cloud and serverless computing, Oracle has announced. The release is the first since the company decided last year to go to a twice-yearly release schedule, with major updates occurring in March and September.
“The move to a new release model has been met with a lot of enthusiasm in the Java developer community,” George Saab, vice president of software development of the Java Platform Group at Oracle, told SD Times. “With JDK 10, we’ll deliver the first major release that was fully developed under the new model. I believe that the breadth of features, their high quality and the smaller scope overall of major releases under the new release model all make it easier for developers to find something exciting in each release, migrate and benefit from the faster cadence. As such, I think that this was a very positive change for the platform overall — it has been reinvigorating in many ways!”
Java 10 will continue the vision of rapid and iterative innovation cycles in the Java platform, said Saab, and JDK 10 will be “better suited for serverless and cloud deployments than any previous release.”
According to Saab, the company will continue with its plan of contributing features to OpenJDK from Oracle JDK. It has open-sourced the root certificates in the Java SE Root CA program and contributed the Application Class-Data Sharing feature, he said. Application Class-Data Sharing enables the HotSpot VM to reduce the application footprint and improve startup by sharing common class metadata across different Java processes, which is another reason Java 10 is well-suited for serverless and cloud deployments, Saab said.
Java 10 will also feature performance improvements over previous major releases. One such improvement is “making the full [garbage collection] cycle of the default G1 garbage collector parallel,” he said.
Beginning with the release of Java 8, Java has become more of a functional language, and Java 10 continues in that fashion. By declaring local variables using ‘var’ and having the compiler take care of inferring types, Java 10 will feel like a traditional functional language, Saab explained. At the same time, it will maintain “Java’s commitment to static type safety and improving the developer experience by reducing the ceremony associated with writing Java code,” said Saab.
Going forward, Oracle will continue to evolve the platform to address new challenges and innovate on the JVM and Java language, Saab said. One of those challenges is moving to a serverless model. Oracle will innovate on the JVM with projects such as ZGC, Loom, and Metropolis, and on the Java language with projects such as Amber, Valhalla, and several others, he said. z
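To make the local-variable type inference feature concrete, here is a minimal sketch of what ‘var’ looks like in practice. It is an illustrative example, not taken from Oracle’s materials; the class and variable names are invented for this article.

import java.util.ArrayList;
import java.util.HashMap;

public class VarDemo {
    public static void main(String[] args) {
        // The compiler infers ArrayList<String> from the initializer,
        // so the element type is written only once.
        var names = new ArrayList<String>();
        names.add("duke");

        // Inference also cuts the ceremony around verbose generic types.
        var countsByName = new HashMap<String, Integer>();
        countsByName.put("duke", names.size());

        // 'var' works for the loop variable in an enhanced for loop, too.
        for (var name : names) {
            System.out.println(name + " -> " + countsByName.get(name));
        }
    }
}

Note that ‘var’ is purely a compile-time convenience: each variable still has a fixed static type inferred from its initializer, so the static type safety Saab mentions is unchanged.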

Java EE’s transition to Eclipse has been smooth
Last year, Oracle handed Java EE over to the Eclipse Foundation, and the foundation has since been hard at work on the transition of the project. The transition has been running smoothly so far, according to Mike Milinkovich, executive director of the Eclipse Foundation. “The main issue is just the sheer scale of the project,” he said. Migrating the code, re-hosting, building, doing IP checks, and so on, is a lot of work, Milinkovich explained. “So far I have to say that we’re very happy with the pace, and the community has been very supportive,” said Milinkovich.
A Project Management Committee has also been set up and was approved by the Eclipse Board of Directors in October 2017. According to Milinkovich, the committee has been meeting regularly and has already created and provisioned the first nine projects, and code is already being added to those projects. One of the projects is Eclipse Grizzly, which allows developers to utilize the Java NIO API, which helps with the creation of scalable server applications. Another project is Eclipse OpenMQ, a message-oriented middleware platform. One more example is Eclipse Tyrus, which provides a reference implementation for the Java API for WebSocket. Other projects include Eclipse Mojarra (an implementation of JavaServer Faces), Eclipse Message Service API for Java, Eclipse Java API for RESTful Web Services, Eclipse Jersey (a REST framework), Eclipse WebSocket API for Java, and Eclipse JSON Processing.
According to Milinkovich, the Java Community Process will not be involved in deciding upon technical specifications for future iterations of Java EE. Instead, the Eclipse Foundation will be creating a new specification. As part of this process, it will create a new brand, compatibility logo, and certification process. It will also provide access to the Technology Compatibility Kits (TCK). The rebranding will occur because Java EE is closely associated with the old, monolithic enterprise architecture. Milinkovich said that a community election has been underway in a GitHub thread to help choose a new name to rebrand Java EE. As of press time, the two final choices were Jakarta EE and Enterprise Profile.
“Under this new brand this technology will head to a modern, cloud-native set of capabilities,” said Milinkovich. “The ‘Java EE’ brand is strongly associated with the old, on-premise, monolithic application servers of the past. Everyone involved in EE4J is working to ensure that Java has a bright future as the language and platform of choice for the microservices and cloud native architectures of the future.” —Jenna Sargent
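For readers unfamiliar with the specifications behind these projects, the following sketch shows the kind of API Eclipse Tyrus implements: an annotated endpoint written against the Java API for WebSocket (JSR 356). The endpoint path and echo behavior are illustrative only, not drawn from any of the Eclipse projects’ code.

import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

// A minimal server endpoint; a JSR 356 container such as Tyrus
// discovers it through the annotation and manages its lifecycle.
@ServerEndpoint("/echo")
public class EchoEndpoint {

    // Invoked for each text message a client sends on the connection;
    // the return value is sent back to that client.
    @OnMessage
    public String onMessage(String message) {
        return "echo: " + message;
    }
}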






10 books every web developer should read that will increase your software IQ
BY MICHAEL ROBERTS
Michael Roberts is an instructor at Origin Code Academy and a board member of the San Diego JavaScript community.

When wannabe developers ask what books they should read, I usually respond, “First off, just read.” A large part of the software development process is reading other people’s code. That said, the best thing you can do to improve as a developer is to read anything that will sharpen your speed and comprehension skills. The more effective you become at reading, the more efficient you will become in your day-to-day work building software. The following, however, are books that, if you have not yet read them, will have the most significant impact on your software IQ.

Moonwalking with Einstein by Joshua Foer

Google is great, but for all the convenience it offers, it really has deteriorated true learning. Why memorize what you can look up, right? And, if you don’t have a solid understanding of

how to improve your memory, you really have no other option. Most developers are not taught memorization techniques and never even make an attempt to get better. As a result, mobile devices have become a crutch, and it shows. Today’s developers struggle to produce more than a few lines of code without referencing Google and then StackOverflow. Looking up language nuances or a specification when you are coding is a time sink. Guessing the signature of a function a few times and then looking it up is an even bigger time sink. But there is another way. This book teaches specific tactics to get the most out of focus, chunking, and repetition, so that when you have to recall shortcut keystrokes, status codes, or the arguments to a function, you can do so easily. The author reveals how people with the best-trained memories compete in memorization competitions, and how he learned their techniques over a very short period of time.

You Don’t Know JS by Kyle Simpson

This is a series of books that collectively should be treated as the bible for JavaScript. Every JS developer should read it and keep a copy in the closest nightstand drawer. The author has even provided the full copy of each book online if you would like to read it for free on GitHub. It’s a tough read, and slow going for most. The volumes are each little booklets that usually require a couple of passes to absorb, but each one will deepen your knowledge of some of the trickier parts of JavaScript.

Clean Code by Robert C. Martin

The author of this book is referred to with reverence in the software community as “Uncle Bob” and is well known for his numerous conference talks about writing well-organized and maintainable code. After reading this book, developers will likely spend more time thinking about why we write code in a particular



way, and what our styles and habits say about the seriousness of our approach to the craft. Uncle Bob’s principles will allow you to identify code smells (the difference between good code and bad) and, better yet, give you a process you can use to clean up and refactor your code to make it more readable and maintainable.

Software Craftsmanship: The New Imperative by Pete McBreen

The principles in this book align well with Clean Code. It differs in that it talks more about the art than the science of software. Reading it will help developers figure out how to deliver value to customers via software. It addresses collaboration with stakeholders, project management, and more of the soft skills that are needed to really be a master at the craft. There is even a chapter on managing software craftsmen that will help developers better understand the relationship between those who code and those who lead.

7 Languages in 7 Weeks: A Pragmatic Guide to Learning Programming Languages by Bruce Tate

The ability to learn fast and pick up new languages gives developers a real edge in today’s market. This book will help developers become decent at reading the code of new languages and understanding the role they play, even if you’re not planning to become a polyglot (one who has mastered many languages). The point of learning a bit of 7 languages in 7 weeks is to gain a generalist’s knowledge. This allows a developer to better compare and contrast languages, and should strengthen the mastery of those used more regularly. If you’re curious about the 7 languages that are covered in the book, it examines Clojure, Haskell, Io, Prolog, Scala, Erlang, and Ruby. Using this 7-week approach you will learn, or be reminded of, programming paradigms that have evolved over time. Many have strengths that make the languages best suited to solve particular types of challenges. Others demonstrate the fad-like nature of how engineers work for a few decades, and then collectively decide the old way is boring and the new way is “the only way” to code well. JavaScript programming, for example, can be done in a functional, object-oriented, or procedural style. This book will inspire you to take a look at languages that are more focused on one or two of those methods and take a deeper dive into how each language implements common design patterns.

7 Databases in 7 Weeks: A Guide to Modern Databases and the NoSQL Movement by Eric Redmond and Jim Wilson

By gaining exposure to 7 different databases, developers can broaden their ability to pick the right database solution for each new problem they encounter, versus feeling stuck continued on page 16 >



< continued from page 15

with using the one or two solutions that are most familiar to them. This book will give developers the confidence to conquer building applications using any database. Even those databases that first appear to be unique will suddenly seem very similar to those used quite commonly by today’s developer community.

JavaScript: The Good Parts by Douglas Crockford

JavaScript is moving really fast these days. So fast, in fact, that some people skip learning the basics and focus on mastering frameworks and libraries before they have a deep understanding of “vanilla,” or pure, JavaScript. In this book you will go back to basics and learn many of JavaScript’s nuances and which pitfalls to avoid. Since there are so many libraries and frameworks, software developers need to be able to evaluate them quickly, and this book serves as a guide for best practices. Even if you decide not to follow them, understanding Douglas’ decision-making process and rationale will help you get better at evaluating other people’s code. It will help you refine your ability to not just say what you don’t like, but articulate why. Understanding why some areas of JavaScript should be avoided will also help you craft better software, and think more about design patterns that will stand the test of time.

Think and Grow Rich by Napoleon Hill

Success in software development parallels success in life. The principles that you can learn, and see applied, in this book will make you more productive and mentally successful. Personal and professional achievement requires a productive thought process and a success-oriented mentality. This book was published almost a century ago, but its stories are just as applicable to the life of a success-minded individual today.

How to Win Friends and Influence People by Dale Carnegie

From an outsider’s perspective, writing code is thought to be one of the most important skills of a software engineer. However, being able to listen and communicate effectively is far more important. Simply having a great idea or design for how to build something is wonderful, but being able to effectively communicate that idea, get buy-in and the “green light” to build is another. This book will provide anyone — even developers — with the tools to negotiate and be empathetic to stakeholders. Use this book to get better at setting and managing expectations. After reading and practicing the techniques you will be well-equipped to understand others and motivate them to embrace your solutions, so you can spend more time building things you love.

HTML & CSS: Design and Build Web Sites by Jon Duckett

This is the book you will set on your office coffee table, and every time you pick it up you will learn something new. It is not a book you will read cover to cover, but it is one that you will return to frequently and digest in small chunks. It is beautifully illustrated, and the examples of code make HTML come alive. As much as we like to think we know the fundamentals, this book is packed with implementations of HTML and CSS specifications that developers can come back to over and over and still learn from each time. Use it like a dictionary to look something up (when Google is not handy), or when you just want to refine your knowledge of designing websites. On your coffee table it will make you look like the hipster coder we all aspire to be. z






The two big traps of code test coverage
BY ARTHUR HICKEN
Arthur Hicken is an evangelist at Parasoft.

Measurement of code coverage is one of those things that always catches my attention. On the one hand, I often find that organizations don’t necessarily know how much code they are covering during testing — which is really surprising! At the other end of the coverage spectrum, there are organizations for whom the number is so important that the quality and efficacy of the tests has become mostly irrelevant. They mindlessly chase the 100% dragon and believe that if they have that number the software is good, or even the best that it can be. This is at least as dangerous as not knowing what you’ve tested, in fact perhaps more so, since it can give you a false sense of security.
Code coverage can be a good and interesting number to assess your software quality, but it’s important to remember that it is a means, rather than an end. We don’t want coverage for coverage’s sake; we want coverage because it’s supposed to indicate that we’ve done a good job testing the software. If the tests themselves aren’t meaningful, then having more of them certainly doesn’t indicate better software. The important goal is to make sure every piece of code is tested, not just executed. If there isn’t enough time and money to fully test everything, at least make sure that everything important is tested. What this means is that while low coverage means we’re probably not testing enough, high coverage by itself doesn’t necessarily correlate to high quality — the picture is more complicated than that.
Obviously, having a happy medium where you have “enough” coverage to be comfortable about releasing the software with a good, stable, maintainable test suite that has “just enough tests” would be perfect. But still, these coverage traps are common.
The first trap is the “we don’t know our coverage” trap. This seems unreasonable to me — coverage tools are cheap and plentiful. A friend of mine suggests that organizations know their coverage number isn’t good, so developers and testers are loath to expose the poor coverage to management. I would hope this isn’t the usual case. One real issue that teams encounter when trying to measure coverage is that the system is too complicated. When you build an application out of pieces on top of pieces on top of pieces, just knowing where to put the coverage counters can be a daunting task. I would suggest that if it’s actually difficult to measure the coverage in your application, you should think twice about the architecture.
A second way to fall into this trap happens with organizations that may have a lot of testing, but no real coverage number because they haven’t found a proper way to aggregate the numbers from different test runs. If you’re doing manual testing, functional testing, unit

testing, and end-to-end testing, you can’t simply add the numbers up. Even if they are each achieving 25% coverage it is unlikely that it’s 100% when combined. In fact, it’s more likely to be closer to the 25% than to the 100% when you look into it. The next trap is the “coverage is everything” perspective. Once teams are able to measure coverage, it’s not uncommon for managers to say “let’s increase that number.” Eventually the number itself becomes more important than the testing. Perhaps the best analogy is one I heard from Parasoft’s founder, Adam Kolawa:

“It’s like asking a pianist to cover 100% of the piano keys rather than hit just the keys that make sense in the context of a given piece of music. When he plays the piece, he gets whatever amount of key coverage makes sense.”
Therein lies the problem — mindless coverage is the same as mindless music. The coverage needs to reflect real, meaningful use of the code; otherwise it’s just noise. And speaking of noise… the cost of coverage goes up as coverage increases. Remember that you not only need to create tests, but you have to maintain them going forward. If you’re not planning on re-using and maintaining a test, you should probably not waste the time creating it in the first place. As the test suite gets larger, the amount of noise increases in unexpected ways. Twice as many tests may mean two or even three times as much noise. The meaningless tests end up creating more noise than good tests because they have no real context, but they have to be dealt with each time the tests are executed. Talk about technical debt! Useless tests are a real danger.
Now, in certain industries, safety-critical industries for example, the 100% coverage metric is a requirement. But even in that case, it’s all too easy to

treat any execution of a line of code as a meaningful test, which is simply not true. I have two basic questions I ask to determine if a test is a good test: 1) What does it mean when the test fails? 2) What does it mean when the test passes? Ideally, when a test fails, we know something about what went wrong, and if the test is really good, it will point us in the right direction to fix it. All too often when a test fails, no one knows why, no one can reproduce it, and the test is ignored. Conversely, when a test passes we should be able to know what was tested — it should mean that a particular feature or piece of functionality is working properly. If you can’t answer one of those questions, you probably have a problem with your test. If you can’t answer

either of them, the test is probably more trouble than it’s worth. The way out of this trap is first to understand that the coverage percentage itself isn’t the goal. The real goal is to create useful, meaningful tests. This of course takes time. In simple code, writing unit tests is simple, but in complex, real-world applications it means writing stubs and mocks and using frameworks. This can take quite a bit of time, and if you’re not doing it all the time, it’s easy to forget the nuances of the APIs involved. Even if you are serious about testing, the time it takes to create a really good test can be more than you expect. Hopefully you’ve learned that coverage is important, and improving coverage is a worthy goal. But keep in mind that simply chasing the percentage isn’t nearly as valuable as writing stable, maintainable, meaningful tests. z
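To make the distinction concrete, compare the two JUnit tests below. This is an invented example, not taken from Parasoft’s materials; the class under test and its numbers are hypothetical.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    // Hypothetical class under test, defined inline to keep the example self-contained.
    static class PriceCalculator {
        double totalWithTax(double subtotal, double taxRate) {
            return subtotal + subtotal * taxRate;
        }
    }

    // Executes the code and raises the coverage number, but asserts nothing.
    // It can never fail, so a pass tells us nothing about the software.
    @Test
    public void coverageOnlyTest() {
        new PriceCalculator().totalWithTax(100.0, 0.08);
    }

    // A meaningful test: a failure points at the tax calculation,
    // and a pass means this specific behavior works as intended.
    @Test
    public void addsEightPercentTax() {
        double total = new PriceCalculator().totalWithTax(100.0, 0.08);
        assertEquals(108.0, total, 0.0001);
    }
}

Both tests produce identical line coverage of the calculator, which is exactly why the percentage alone cannot tell the two apart.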




DEVOPS WATCH

XebiaLabs raises $100M in funding
BY CHRISTINA CARDOZA

XebiaLabs has announced it has secured more than $100 million in a series B round of funding. The company plans to use this new growth investment to enter a new era of enterprise DevOps.
“This past year was our strongest to date, and today’s investment will perfectly position us to lead a new era where the essential focus is on scaling DevOps across the enterprise,” said Derek Langone, CEO of XebiaLabs. “Tapping into the experiences of these exceptional VC firms, who have guided software industry leaders such as Facebook, Dropbox, Slack, Cloudera, PagerDuty, and Atlassian, will be invaluable as we work with them to define the future of enterprise DevOps.”
Research analyst firm Forrester recently declared that 2018 would be the year of enterprise DevOps. Forrester expects more organizations to move from individual projects to enterprise-wide initiatives that support thousands of developers and apps, and their effect on the business. XebiaLabs said the investment will help the company increase enterprise demand, invest in more staffing and leadership, and support the growing demand for DevOps initiatives.
“In fact, we recently introduced an industry-first DevOps intelligence solution, XL Impact,” Langone continued. “XL Impact allows companies to take a critical step in improving software delivery – tracking and reporting on the business impact of their DevOps initiatives.”
The round of funding included Susquehanna Growth Equity and Accel, as well as existing shareholders. “XebiaLabs has a deep understanding of the challenges specific to enterprise software delivery,” said Martin Angert, director at Susquehanna Growth Equity. As part of the investment, Susquehanna Growth Equity’s Angert and Arun Mathew, partner at Accel, will join XebiaLabs’ board of directors.
In addition, the company recently announced the release of the XebiaLabs 7.6 DevOps Platform. It features a new release relationship view, a new plugin manager, Template Version Control features, new plugin integrations, and the ability to set and track progress toward team goals. z

CloudBees acquires CD and CI company Codeship
BY CHRISTINA CARDOZA

CloudBees is strengthening its position in the continuous integration and continuous delivery space with the acquisition of Codeship, a SaaS-based CD platform for automated, cloud-based software delivery. “This is the most impactful acquisition in CloudBees’ history,” said Sacha Labourey, CEO and co-founder of CloudBees. “We’ve been leading the continuous delivery and DevOps industry with automation solutions and we recognize that needs are evolving quickly. With Codeship, we want to provide organizations an additional option that is self-service and easy to use. This acquisition ensures that CloudBees will be able to offer a range of CI/CD solutions, from cloud, self-service and opinionated to on-premise, self-managed and fully customizable - all through a smooth and unified experience.”

Codeship features three solutions: Codeship Basic, Codeship Pro and Codeship Enterprise. Basic and Pro feature a free tier, as well as support for open source projects. Pro features a pipeline-as-code solution and supports Docker natively. The enterprise edition takes a hybrid approach to CD and enables users to run builds in the cloud and host their source code on-premise. Going forward, Codeship will continue to deliver these products under the CloudBees brand, and CloudBees will retain Codeship executives and employees. In addition, CloudBees will now be able to provide a broader CI/CD portfolio to organizations of all sizes, and Codeship will be able to provide more extensive customer service and support. “Together with CloudBees we can make sure that you’re never blocked on your path to CI/CD adoption. You can

start with a simple and easy-to-use SaaS with a small team, and evolve to centrally managed and fully customized build machines at scale for tens of thousands of developers and IT professionals,” Codeship CEO and founder Moritz Plassnig wrote in a post. Other features of Codeship include cloud-native support, language agnosticism, parallel pipelines, concurrent builds, control over workflows, predictable scaling, single-tenancy, SSH key access, permissions and cache encryption. “As the adoption of DevOps and cloud computing accelerates at warp speed within organizations, CI and CD are their strategic backbone on which the DNA of their software value creation gets codified. From applications to infrastructure as code, every IT asset definition flows through a continuous pipeline engine, that’s CI/CD,” Labourey wrote in a blog post. z






Preparing for the GDPR in the eleventh hour

BY CHRISTINA CARDOZA

If digital businesses haven’t already been preparing for the European Union’s General Data Protection Regulation (GDPR), the time is now. The regulation was adopted in April of 2016 and will officially go into effect on May 25, 2018. Despite the advance notice and warnings given, the range of GDPR preparedness is still very broad, according to Richard Macaskill, product manager at Redgate Software. Preparedness ranges from companies that have invested significantly in training and tools, to companies who are taking a riskier path and just waiting to see what they can get away with. The latter approach is troubling because there is a fine of up to 4 percent of annual global revenue or €20


million (whichever is greater) if digital businesses don’t comply. “I think it’s safe to say that organizations should be much further along than they are. Under GDPR, you need to be able to articulate what the data is, where on your network it resides, what controls you have for protecting it, and the measures addressing mistakes/breaches,” said Adam Famularo, CEO of data governance solution provider erwin. According to a recent survey conducted by erwin, only 6 percent of enterprise respondents indicated they were prepared for the regulation. In another industry survey from data backup, protection, recovery and management provider Commvault, only 21 percent of respondents believed they had a good understanding of what the GDPR actually means.

The problem, according to the Commvault study, is that businesses don’t understand their data. Only 18 percent of respondents understand what data their company is collecting, and where it stores that data. “For a long time, businesses just collected more data than it needed to and retained it much longer than it should with the hope that someday it is going to provide some kind of value,” said Nigel Tozer, solutions marketing director, EMEA at Commvault. Part of what is still unknown about the GDPR is what happens once it goes into effect. The Commvault study found 17 percent of respondents understood the potential impact GDPR will have on the overall business. That lack of awareness is causing some businesses not to realize how serious this will be. Every



business of every size has to comply with these regulations. It doesn’t matter if you are just a plumber: if you collect and store information about an individual in the EU, you have to comply, according to Tozer. While it is going to be impossible for a regulator to go in and audit everyone to see if they are compliant or not, that doesn’t mean businesses should just ignore the regulation. The GDPR regulatory body gave businesses two years to get ready and start complying, and by May, businesses need to show they have made significant progress toward GDPR compliance. According to Seth Dobrin, chief data officer for IBM, a lot of businesses are still trying to figure out what “significant progress” means, but the most important thing is for businesses to start making changes. “Starting today is better

than not starting at all,” he said. “At least you are showing some progress.” According to Commvault’s Tozer, a good place to start is fixing the biggest hole first. Identify the weakest link and start directing some of the efforts there, he explained. For example, if a company is worried about a data breach in a particular area, it should try resolving that first before going forward. IBM’s Dobrin suggests a complexity reduction exercise. GDPR is not about all the data that resides within a company; it is about all the data that pertains specifically to individuals. Once businesses can reduce that complexity and understand their personal data, they can begin a data discovery exercise to figure out where the data is, what state it is in, and what needs to be done to get it GDPR ready.


Reducing complexity is also a great way for companies to "clean their house," according to Noam Abramovitz, head of product and GDPR product evangelist for IT operations company Loom Systems. Once a business understands where everything is stored and what it does and doesn't have, it can have a conversation about what it wants to collect and archive, and form a strategy for maintaining compliance.

To understand what you have and where it is, businesses will need to conduct a data audit or data mapping exercise, according to Redgate's Macaskill. A good data map provides proper visibility into the data and maintains a true view of it over time. "By now I would hope that companies are already identifying the ways they currently hold and use data, and assessing how that will need to change in the future. Policies can be changed quite quickly in theory, but products take time to update and prove. In-flight projects will be impacted by the need to change ways of working and new projects will need estimating and resourcing with GDPR in mind," said Dan Martland, head of technical testing at Edge Testing.

A lot of the first steps toward GDPR preparedness also revolve around education. According to business intelligence and data management provider Information Builders, businesses need their employees to understand their risks and understand how their projects impact personal data. Developers and IT managers can do their businesses a big service just by being aware of the regulation and what it requires, and understanding what can be done in terms of portability and privacy of data.

What is also important to understand is that the regulation is not binary, according to Loom's Abramovitz. "You are not going to wake up one day, go over a checklist and then you are finished," he said. Compliance is something businesses will have to keep working on moving forward. Whether companies want to be compliant or just stay away from the EU, they need to start having an internal discussion of continued on page 26 >



INDUSTRY SPOTLIGHT

Melissa enables active data quality BY LISA MORGAN

Poor data quality costs businesses time, money, and customers. For companies conducting business in Europe, the associated costs could rise dramatically when the EU’s General Data Protection Regulation (GDPR) takes effect in May 2018. One small data quality mistake involving the misuse of Personally-Identifiable Information (PII) could cost a company 20 million euros or 4 percent of annual turn-over, whichever is higher.

Even in the absence of GDPR, the accelerating pace of business means that companies can no longer wait weeks or months to correct data errors, particularly if they want to stay competitive. To overcome these and other challenges, businesses are turning to Melissa's "active data quality" solutions. "Active data quality is anything that depends on accurate, real-world reference data," said Greg Brown, VP of Marketing at Melissa. "People are constantly moving and changing jobs, yet most enterprises don't realize how inaccurate their data really is." For example, recent Melissa research reveals that within 3 1/2 years, half of all customer records will become outdated or otherwise inaccurate. Even more alarming is that 30 percent of the decay occurs within the first 12 months. Keeping current with the data changes is both expensive and difficult using in-house resources alone.

'We allow developers to get the tools they need for small wins so they can solve one problem at a time in discrete phases.' —Greg Brown, VP of marketing

Rule-based versus active data quality
Many businesses already use a rule-based approach to data quality, which works well for corporate or process-specific data. Active data quality helps ensure that customer data is also kept accurate, including names, residential addresses, phone numbers, email addresses, job titles, company names, and more. "When organizations start building their data quality regimen, they inevitably wonder whether they should develop a solution in-house or buy an off-the-shelf solution," said Brown. "We advocate a hybrid approach because it gives them the best of both worlds." Specifically, a hybrid approach allows development teams to define rule-based processes, metrics, and data controls while taking advantage of Melissa's active data quality.

Common data quality challenges
Handling all data quality issues in-house is exceptionally challenging. For example, many organizations are ill-equipped to resolve two seemingly different addresses such as 6801 Hollywood Blvd., Hollywood, CA and 6801 Hollywood, Los Angeles, CA. While the two appear to be different addresses, the former is a vanity city name and the latter is the preferred USPS city name. However, there are other challenges such as ensuring accurate directional information, suite information, carrier codes, and ZIP+4 codes that impact mail delivery. In fact, many businesses lack the standardization necessary to recognize that International Business Machines and IBM are the same company. Companies also struggle to parse

inverse or mixed names such as Mr. and Mrs. John and Mary Smith. Fewer still are able to transliterate foreign characters into Latin characters so their customer data can be validated and deduped on a global scale faster and more efficiently. “The best model is one that can incorporate ‘smart, sharp tools’ to augment your current processes, as opposed to a monolithic approach that requires you to buy an entire suite of tools that you don’t necessarily need,” said Brown.
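The standardization problem Brown describes is easiest to see in miniature. The toy sketch below is not Melissa's service; it uses an invented alias table in place of real-world reference data, but it shows the basic idea of mapping known variants of a company name to one canonical form so that records can be matched and deduplicated.

```python
# Hypothetical reference table; a real service draws on curated, constantly
# updated reference data rather than a hard-coded dictionary.
COMPANY_ALIASES = {
    "international business machines": "IBM",
    "ibm corp": "IBM",
    "ibm": "IBM",
}

def canonical_company(raw: str) -> str:
    # Lowercase, strip punctuation and collapse whitespace before lookup.
    key = " ".join(raw.lower().replace(".", "").replace(",", "").split())
    return COMPANY_ALIASES.get(key, raw.strip())

records = ["International  Business Machines", "IBM Corp.", "Acme Ltd"]
print({canonical_company(r) for r in records})   # {'IBM', 'Acme Ltd'}
```

The same pattern, scaled up with authoritative reference data, is what lets duplicate customer records be recognized as one entity before analytics or compliance work begins.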

Melissa’s new Data Quality Portal Melissa enables developers to improve their company’s data quality as they see fit. Rather than limiting what developers can accomplish in a 30-day trial, the new portal allows developers to move at their own pace by purchasing small credit packages. “If your immediate problem is that you need real-time email verification on your e-commerce site, you can purchase a license for only that without any other encumbrances or add-ons,” said Brown. “It’s a low-risk initial investment, and it’s very easy to spin up when you need additional transactions.” Using Melissa’s new Data Quality Portal, developers can try out data cleansing and enrichment tools, as well as access code snippets, examples and flexible REST APIs. Melissa also offers an audit service that allows developers to determine important data quality metrics, such as the percentage of duplicate customer files. “Most companies realize that poor data quality equals poor analytics,” said Brown. “With GDPR, garbage-ingarbage-out comes with a higher level of risk. The portal provides an easy way for developers to proactively start reducing that risk.” For more information, visit www.melissa.com/developer. z



< continued from page 23

how the regulation impacts them. “GDPR has been a significant time in the making and it can’t arrive soon enough — people need to understand how their data is being used to influence them and nudge them into specific choices and be given back the power to say they don’t want that to happen,” said Martland.

Technology's role in GDPR
The law speaks about some approaches like encryption, anonymization, and sensitive data. A lot of these things are impossible to do without tooling, according to Redgate's Macaskill. Tooling vendors have an interesting story here because they can help pave the way toward GDPR compliance. Commvault's Tozer said the biggest obstacle in complying with the GDPR will be data complexity. Having a tool in place can help businesses easily profile their data, understand what they have, where it is and what needs to change. "The biggest challenge in complying with the GDPR is the fact that personal data can be located anywhere," according to the company. Commvault's GDPR compliance solution provides backup, recovery, and archiving of structured and unstructured data in a single searchable solution. It can identify and map, preserve and protect information, prioritize security, reduce exposure and manage retention, and it provides role-based capabilities along with audit and reporting features.

Loom Systems believes having that centralized place to store data, logs and events is essential for complying with the regulation. “If organizations don’t have a centralized solution, what will happen is they will have to be compliant for each and every server, which is tedious and requires a lot of manual work. It is also very dangerous if they miss some sensitive data that could make them no longer compliant with the GDPR,” said Abramovitz. Loom Systems’ Sophie for GDPR is an AIOps platform that analyzes both log and unstructured machine data for visibility into IT environments. It includes a “find my PII” (personally identifiable information) feature, enables users to remove any identifiable information, can be stored on-premises or in the cloud, and helps comply with the right to be forgotten. Redgate’s Macaskill explained with GDPR there is a movement from ‘trust me’ to ‘show me.’ Instead of just trusting that a business is going to take care of your data, they have to prove they can. To do so, they need to have a dependable, repeatable process and easily show where the data is and how it is managed. This requires businesses to have better insight into their databases. Redgate’s data solutions enable users to control and manage their database and database copies, protect sensitive data, automatically mask databases, monitor the data, and provide backups. A lot of the solutions on the market correctly focus on a data management

Who should be concerned about the GDPR? Everyone who does any kind of business with anyone in the EU. While the GDPR is designed to replace the Data Protection Directive 95/46/EC and designed specially for European data privacy laws, this impacts businesses worldwide. “The biggest change is Increased Territorial Scope. This means the regulation extends beyond the continent to any company that collects or stores personal data of subjects residing in the EU, regardless of the company’s location,” said Adam Famularo, CEO of erwin. Famularo adds the rules also apply to both controllers and processors, which means clouds are not exempt. According to IBM’s chief data officer Seth Dobrin, the regulation is not about where you are based, it is about where your subjects, employees, clients, and contractors are. “It is a misnomer to ask how companies outside the EU should think about this or if they should approach it differently because it applies to your subjects and their rights. It pertains to anyone who has a subject that resides in the EU,” he said. —Christina Cardoza

or data governance aspect of the GDPR. This is because for years, companies have been collecting information and piling up layers upon layers of data, according to Jon Deutsch, VP and global head of industry solutions at Information Builders. In addition, a lot of it is collected in a very fragmented way. With personal data being the main aspect of the GDPR, organizations are scrambling to understand what they have and properly manage it now. "An effective data governance program is critical to ensuring the data visibility and categorization needed to comply with GDPR. It can help you assess and prioritize risks to your data and enable easier verification of your compliance with GDPR and auditors," erwin's Famularo added. Erwin EDGE (enterprise data governance experience) enables companies to discover and harvest data assets, classify PII data, create a GDPR inventory, perform GDPR risk analysis, prioritize risks, define GDPR controls, apply and socialize GDPR requirements, implement GDPR controls into IT roadmaps, and leverage a GDPR framework to prove compliance. "With erwin EDGE, companies can execute and ensure compliance with their current (as-is) architecture and assets and ensure new deployments and/or changes (to-be) incorporate the appropriate controls so that they are GDPR ready and compliant at inception," said Famularo. Information Builders takes a more tactical approach to complying with GDPR through three layers: strategic, planning and organization. Planning includes what you are going to do, how you are going to do it, and what the scope of your work is. The second layer involves understanding the data, where it lives, what it does, and how it pertains to personally identifiable information. The third is about analytics and monitoring. With the Information Builders Accelerator, users can pinpoint the greatest GDPR risks, understand where to start, and track how well the company is meeting expectations and goals in terms of compliance and timelines. Tools can also help businesses continue to comply with the GDPR even



What type of data is the regulation protecting? The GDPR applies to personal customer data or private individuals’ data. According to Dan Martland, head of technical testing at Edge Testing, that includes any form of data with information on customers, business partners, vendors, employees and members of the public. This type of data can live anywhere from emails, documents, files and photos to online stores, mobile apps, homegrown apps, data warehouses and spreadsheets. “To ensure GDPR compliance, organizations need to document what personal data is held, its location, source, reason for storage, length of retention, use, access rights and how it is shared, both internally and externally,” Martland said. “They must then get consent from the data subject to have their personal data processed and, going further than before, detail what happens to their data once consent is granted.” In addition, IBM’s chief data officer Seth Dobrin explains the regulation redefines personal identifiable information (PII) to a broader term called personal data. Personal data includes any data that can be used to directly or indirectly identify an individual. It includes all of the data within PII as well as things like GPS coordinates, IP addresses, bank details, social networking and medical information. “It is anything that could be used to potentially directly or indirectly identify you,” Dobrin said. —Christina Cardoza

after May 25. According to Edge Testing’s Martland, the GDPR will continue to be a major IT challenge over the next several years. To manage and assess ongoing GDPR compliance, he believes there is a need for a robust test data management strategy. “We believe that data management within the development process, particularly test data management, is the greatest source of risk for GDPR compliance. Access to realistic or representative data is an essential part of the development process: analysts need real data to investigate and elaborate requirements, developers need representative data in order to design and build the code, and testers probably need the largest datasets in order to create and execute their tests,” he said. Lastly, if companies are looking for one solution to help prepare for the GDPR, IBM offers an end-to-end solu-

tion from consulting services to software that can help with discovery, consent management and breach notification. Depending on the entry point a company needs, IBM can help with data discovery assessments, GDPR readiness assessment, GDPR education and training, operationalizing GDPR readiness, and monitoring and reporting capabilities. “This is not a one-and-done regulation,” said Dobrin. “This will be an ongoing journey that is going to require monitoring and reporting of compliance.”

GDPR and the future While GDPR is coming from the EU, IBM’s Dobrin believes businesses should treat this as a global standard. Just applying this to your subjects in the EU is going to create more work than it would to apply it globally. “Putting all these processes in place and having it only


apply to subjects that reside in Europe is going to be confusing and cumbersome,” Dobrin said. “We are applying this to our entire environment on all our subjects globally because that is the most effective way to implement it.” According to erwin’s Famularo, in addition to personal data, the GDPR strengthens the conditions for consent, makes breach notification mandatory, expands rights of data subjects, applies the right to be forgotten, and introduces data portability. All of this can be beneficial to everyone globally. “I believe GDPR will become the de facto data regulation globally. The issues of data governance and protection, specifically around personally identifiable information and portability will not be going away any time soon. And, if you look at regulations like HIPAA, businesses are motivated to action by regulations — and steep fines,” he said. GDPR also presents the opportunity to better understand your customers, according to Information Builders’ Deutsch. By organizing and understanding data, businesses can get better insight into customers and customers can get better visibility into their relationship with the business. “Let’s take advantage of what we are doing and turn it into an opportunity to better our customer relationships,” he said. Every enterprise is aware their industry is going to be digitally disrupted if it hasn’t been already, according to Dobrin. The primary way an industry or business gets disrupted is when a third party comes in, takes a different perspective on what clients are looking for by looking at things through their eyes, and provides them a better solution that is more outcome-based and satisfies their need. Dobrin explained the reason this disruption happens is because businesses don’t have a good understanding of their clients or their relationship with their clients. GDPR solves this problem by forcing them to truly understand their customer base. “The GDPR is going to really help businesses understand their clients and build a conversation around how you can be better, quicker, faster, more efficient and more productive,” said Dobrin. z


Release automation: All about the pipeline

Repeatable build-dev-test processes lead to benefits of speed, reliability BY DAVID RUBINSTEIN


To many, release automation is that last stage of the DevOps pipeline, when software is made public. Yet, to many, release automation remains largely misunderstood and so is not as widely utilized in organizations that claim to be agile and doing DevOps. "I don't know if people are recognizing the benefits of application release automation just yet," said Tracy Ragan, CEO and co-founder of OpenMake, which offers automation from build to release.

One person who does understand the benefits of application release automation is Tom Sherman, divisional vice president of DevOps at health insurance management company HCSC. "In mid-2016, we started a project to move all of our applications onto a release automation platform. We've been able to move about 400 applications to implement a release automation process for them. We have automated the process of building the application and then doing the deployments. We're seeing a lot of benefits in doing that in terms of speed and resources saved, as they don't have to work like they had to do in a manual release process, particularly on the infrastructure side — the teams that used to have to support us by loading our applications out onto servers and so forth."

Bob Evans, director of applications engineering at email delivery service SparkPost, sees the benefit of getting new features out to customers more quickly. "This is crucial because it allows an organization to get feedback during the development cycle and iterate accordingly," he said, adding "I don't believe you should take operations out of the equation, it's just their responsibilities shift. DevOps puts more ownership and success of the overall service on the team. It also empowers team members to address issues before adding more features."

Automation is used in many areas of IT, from test and build automation on the developer side to automating steps in IT operations. Yet some fear remains. As Electric Cloud's CTO Anders Wallgren noted, "Release automation managers have a bit of 'we're different.

We're a duck-billed platypus, you can't possibly automate what we do. It can't possibly work for us, we're different.' All of the usual kinds of phrases come out when you start to dig in at particular organizations. There's a little bit of fear there for this kind of stuff. We, like other companies, have had to work through that a little bit." The fear is not only of losing their jobs, but of turning over the keys to a machine that in their minds could lead to releasing broken or bug-filled software.

Release managers, Wallgren said, should spend more time talking with the people who've created the functionality upstream and downstream. "They still seem to not really be getting it yet. They say, 'Look, if we automate everything, what if something goes wrong, are we going to release software that doesn't work?' All the usual reasonable fears that people have."

And who are these release managers? "I have seen several instances where people that work in the release management arena are a little bit divorced from the details of what it takes to get a release out the door," Wallgren said. "What I mean by that is often times, they're project managers, and I don't mean that in a pejorative way; they're there. Their job is to herd the cats. We all know how much fun that is. Sometimes, the notion of automation to them is a little bit scary, for a number of reasons. Sometimes it maybe dives into the technical areas a little too much, and a kind of nervousness about, well, am I going to be necessary if we automate everything. That silo is starting to learn about this and figure out what it means to them."

They're also not necessarily exposed to the innards of the releases, he said. "Some release managers are more like product owners; they understand what's coming in the release, what's in there, why it's in there, they have their finger on the pulse. Other release managers are like, 'I wait until people tell me what to do and then I rubber-stamp it and it goes.' I'm probably exaggerating on that, but there is a bit of that."

'People need repeatable deployment processes from dev to test to production very quickly.' —Tracy Ragan, CEO, OpenMake

To script, or not to script
In most organizations today doing continuous integration and deployment (CI/CD) — the precursor to release automation — the use of scripts remains prevalent. Some, though, see the use of scripts as hampering CI/CD because scripts are static in nature and unable to adapt across pipelines. "We all have a waterfall approach to CD; scripts come from waterfall," OpenMake's Ragan said. "It is still built around states. In dev, test and production, it's a very fast waterfall approach, but it's still waterfall. When the build runs, kick off tests. If the tests pass, go to production." She recalled a time, before the overhyped need for speed, when organizations had the luxury of time to tweak scripts to adapt to different environments. Today, she said, "people need repeatable deployment processes from dev to test to production very quickly." On the speed issue, she opined, "Companies need the choice to be fast or not. What they all need, though, is to be responsive to their market."

HCSC's Sherman said repeatable processes provide reliability, but believes scripts have remained valuable. "Anything that you script, you're executing the process the same way every time you do it, which eliminates some errors that we were seeing in the manual process. When you had different support staff working with you, they might not have remembered a step that had to be executed, or the sequence of the steps that had to be executed in a particular order to make it work. The scripting takes that risk away from your process." He went on to say that "the way that continued on page 31 >

Octopus Deploy: Enterprise scale DevOps by Paul Stovell, Founder & CEO | Octopus Deploy

As the culture and practices around DevOps sweep the world, application development teams look for ways to automate deployments and releases of the apps they build, as well as automating all of the operations and maintenance processes needed to keep them running. DevOps works because it breaks down silos and encourages everyone on the team to take ownership of producing software that works. Automation makes teams happier and more productive, and emboldens teams to deploy changes often, iterating quickly based on the feedback from real users.

We’ve focused on building software that engineers love because we know it will mean they actually use it

Octopus Deploy doesn’t do version control, build, or monitoring; but we do deployments. And we do them really well. We’ve helped over 20,000 teams around the world to automate their deployments, including NASA, Accenture, Microsoft, and 58 of the Fortune 100. Octopus works with Team Foundation Server/VSTS, Atlassian Bamboo, Jenkins, and your existing developer toolchain, to give teams fast, reliable, consistent deployments for web applications, services, and database changes.

The feedback loop a team gets is powerful: automation reduces the time and, more importantly, the risk, which means deployments can be more frequent. More frequent deployments mean smaller batch sizes for changes, again reducing risk. Which leads to more automation, which leads to even less risk. Ultimately, it means happier customers and end users – they can give feedback faster, and the team can act on that feedback quickly. All this automation is great, but at the enterprise scale, it's inconsistent. Early adopter teams are using different products or hand-rolling their own scripts, which always take far more time and maintenance than anyone expects. Meanwhile, other teams lag behind, still deploying manually once a quarter, not sure where to start. The same problems are solved over and over by different teams. It's no surprise that leading enterprises are beginning to standardize on DevOps tooling. The goal is simple: give everyone at the company a standard version control system, a standard way to build and test software, a standard way to track bugs, a standard way to deploy it, and a standard way to monitor and manage it. Team members can move from project to project, and thanks to the consistency, immediately begin to automate without reinventing the wheel. Standardization projects like this aren't easy though. Tools and approaches that are pushed down from above can be met with resistance by the engineers that are forced to work with them. The result is "shelfware"; expensive software that is put on the shelf and never used. And the consistency problem still isn't solved.

You may not have heard of Octopus Deploy until now, but chances are there’s an engineering team somewhere in your organization that has, and they’d love to tell you how much they enjoy using it. We built Octopus to be a solution that engineers love, with a great user experience, easy installation and onboarding experience, thorough and genuinely useful documentation, and with the right philosophies and extensibility points to allow it to solve any problems your engineers face. We’ve focused on building software that engineers love because we know it will mean they actually use it, and a push to standardize will actually be successful – not shelfware. To learn more about how Octopus Deploy and deployment automation can help you to get consistent, reliable deployments across your entire enterprise – or to see a video interview of how Accenture standardized over 400 teams on Octopus Deploy – go to octopus.com/enterprise The goal is simple: give everyone at the company a standard version control system, a standard way to build and test software, a standard way to track bugs, a standard way to deploy it, and a standard way to monitor and manage it.

Octopus Deploy




< continued from page 29

our process works is that the scripts are created ahead of time and we’re executing the scripts over and over again when we do a deployment. Once you get the scripts created it is an asset that you manage just like any other piece of code, and you version it and test it and implement a new version of it so you’re getting a consistent execution of that script every time.” Sherman acknowledged that scripts are labor-intensive, to a certain extent, but HCSC looks at tools “that do things more from a configuration standpoint, where you can configure within the tool the way you want to execute your deployment. That’s sort of the next generation of our platform, to take some of the scripting out of the equation and put it into a tool that’s got more ability to have the configuration built in to the execution. “But those tools are expensive and you’ve got to figure out where you need that level of automation or where the script is adequate,” he continued. “We’re not seeing a lot of volatility in our scripts once we’ve got them set up and they're working. They’re not requiring a lot of maintenance right now, so we’re not as concerned about that at this point in time. There is a place… we did make a decision on, that we did not want to put into the scripts and we wanted to go to a next-generation tool. So there is definitely a niche for that, and we’re just seeing whether that’s something we want to do on an enterprise level or is it something where we make the tool that does the scripting a part of our business and then go to the next-generation tools. That’s the analysis we’re doing right now.” continued on page 32 >
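Sherman's point about treating deployment scripts as versioned, repeatable assets can be illustrated with a simplified shape. The sketch below is hypothetical and is not HCSC's tooling: the same ordered steps run against every environment, with only the configuration differing, so the production deployment is the one already rehearsed in dev and test. The hosts are invented, and placeholder echo commands stand in for real deployment actions.

```python
import subprocess
import sys

# Hypothetical per-environment settings; in practice these would live in
# version control alongside the deployment script itself.
ENVIRONMENTS = {
    "dev":  {"host": "dev.example.internal",  "approval_required": False},
    "test": {"host": "test.example.internal", "approval_required": False},
    "prod": {"host": "prod.example.internal", "approval_required": True},
}

# Placeholder steps; a real pipeline would copy artifacts, run migrations, etc.
STEPS = [
    ["echo", "stopping application service"],
    ["echo", "copying release artifact"],
    ["echo", "applying database migrations"],
    ["echo", "starting application service"],
    ["echo", "running smoke tests"],
]

def deploy(env_name: str) -> None:
    """Run the same ordered steps for every environment, so the production
    deployment is identical to the one already rehearsed in dev and test."""
    env = ENVIRONMENTS[env_name]
    if env["approval_required"]:
        print(f"NOTE: {env_name} deployments require a recorded approval")
    for step in STEPS:
        print(f"[{env_name} -> {env['host']}] {' '.join(step[1:])}")
        result = subprocess.run(step, capture_output=True, text=True)
        if result.returncode != 0:
            # Fail fast rather than continue a half-finished deployment.
            sys.exit(f"step failed in {env_name}: {result.stderr}")

if __name__ == "__main__":
    deploy(sys.argv[1] if len(sys.argv) > 1 else "dev")
```

Because the script itself is versioned and tested like any other code, the choice Sherman describes is really about how much of this logic stays in scripts and how much moves into a configuration-driven tool.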

ARA vs. CD... What’s the difference? Some in the industry use CD to mean continuous deployment, while others say it means continuous delivery. And how is that different from application release automation? There seems to be a general understanding of the difference, but it’s not 100 percent clear. As Electric Cloud’s CTO Anders Wallgren noted, “We’re still in one of those stages in the market where people are using the same phrases to mean lots of different things. Everybody kind of has their own vocabulary, and we’re still working on deciphering that, trying to come up with what we only half-jokingly call around here the big decoder ring, so we can talk to people in the language that they speak.” OpenMake CEO Tracy Ragan described it like this: “Continuous Integration is the step for checking in and running tests. Continuous deployment is when the CI workflow ran, was repeated in test, and move to production. Release automation tracks change requests to an end binary.” In release automation, she said, you can be tracking more than one deployment. Wallgren agreed. “To me, deployment is a subset of release. It’s the last thing that happens before release. You’re doing deployment orders of magnitude more frequently than release. Deployment processes are part of product. Deployment was siloed into IT, or dev, or QA, where you frequently seed different deployment automation, or lack of automation.” So there you have it.



< continued from page 31

All about the pipeline So, versioning scripts is helpful, but what happens when test, staging and production environments change, as they always do? According to SparkPost’s Evans, “I think building a pipeline that can adapt to changes is key. However sometimes having different load and performance profiles, and hardware constraints makes verifying certain features/fixes tough in different environments.” OpenMake’s Ragan agreed, saying “to be repeatable, [the flow] has to go across the pipeline. Application release automation allows developers to decide what gets consumed in test and production, and adds security features.” Yet release automation is “kind of all over the place,” Wallgren said. “Some companies don’t have a release step in their pipeline. They say, if it makes it through the pipeline, it goes straight into release automatically. They don’t have release managers.” He called release management “a kind of Maginot line. It’s the last defense against releasing things that can hurt them.”

It's a culture thing
HCSC's Sherman described the hurdles and roadblocks to implementing release automation. "It was a difficult project to get start-

Pets or cattle? Does your organization baby your servers, with a lot of tender loving care that you would give to your favorite pet? Or does your organization view your servers as cattle, spinning them up and slaughtering them quickly? In the world of release automation, you want cattle, not pets, according to OpenMake co-founder and CEO Tracy Ragan. “You want agentless, elastic environments and minimize what you’re installing on them,” she said. “That’s a reflection of the 21st century.

ed and get through the first couple rounds of it. The other complication I think we had was that our applications, as we went through the analysis, are not all the same, so we had a lot of one-off things that we had to account for, but once we got a process built and an assembly line kind of approach to it, it went pretty smoothly and we really gathered some momentum and were able to do the second half of the project much more quickly than the first half went. “We had some issues with engagement.. getting the teams to buy into the process. Then we had issues with staffing on my side, of getting the right people who could do the work in place, and then there’s always a problem here of competing priorities. The teams that we were working with had big customer projects that they were

‘We have automated the process of building the application and then doing the deployments. We’re seeing a lot of benefits in doing that in terms of speed and resources saved.’ —Tom Sherman, divisional vice president of DevOps, HCSC

working on and we had to fit our work into that.” Sherman did note that HCSC leadership was committed to making release automation happen as part of a larger organizational digital transformation. “That really helped us turn the corner by having that backing that this was an

important project that needed to be done. That really made it work,” he said. SparkPost has had some form of release automation since it launched in 2015, Evans said. “It was an evolution and over time more pieces of the process were streamlined. We knew in order to be agile and ship features to our customers that release automation was a must. We prioritized critical path and layered in more capabilities as we grew in volume and features. Being an on-premise company before, we had some legacy processes that hindered some of the rollout of the pipeline initially. We also had a new set of challenges as we started rolling out our nextgeneration architecture in 2017. Our development process changed as well because previously we were working towards quarterly releases. Most features stopped at a test environment until closer to release. Now we release pieces of features to production as they are ready and ship multiple times a day.” In the world of automation, Wallgren said, “You have to ask the question, ‘what goes wrong today, and how do you deal with that?’ If you can automate it, and it does the same thing every time … then why is that not better? Sometimes you have to push people on that and challenge their assumptions.” So, when everything is automated, does release management go away in the future? “I don’t think so,” said Wallgren. “But… can questions that release managers want the answers to be populated automatically? Yes.” z



Buyers Guide

Managing your APIs
Being able to take out services and bring in new ones via connectivity helps keep businesses in the game
BY JENNA SARGENT

APIs are the basis of modern software development, according to Abhinav Asthana, founder and CEO of Postman. "APIs define how data and services are shared, updated and managed across millions of programs, apps and companies." "API management refers to a portfolio of tools used to create, secure, monitor and govern the application programming interfaces (APIs) that connect individual software components, microservices, data sources and endpoints within a modern application architecture," said David Chiu, director of API management product marketing at CA Technologies. "API management is a technology set that helps organizations do three things," said Ian Goldsmith, VP of product management for Akana by Rogue Wave Software. "It helps them build the right APIs, build those APIs correctly, and then make sure that those APIs are running as expected."

Why is API management so important?
The importance of API management is directly tied to the importance of APIs, said Goldsmith. "APIs are becoming more and more important as a mechanism for all sorts of things from internal integration through developing partner ecosystems through direct business transformation," he said. "A lot of companies understand their business really well, continued on page 38 >


David Chiu, Director of API Management Product Marketing, CA Technologies
The CA API Management and Microservices portfolio allows you to create, secure, deliver and manage the full lifecycle of APIs and microservices at tremendous scale, bringing startup agility to any enterprise and ensuring that your business is positioned to capitalize on new opportunities with a modern application architecture.

Full Lifecycle API Management — CA offers the industry's most robust capability set that spans the entire API lifecycle and accelerates every step of digital transformation with APIs — development, orchestration, security, management, monitoring, deployment, discovery and consumption.

Microservices in Minutes — CA is the only vendor with automated, low-code development of microservices from new or existing data sources — enabling the creation and delivery of complete APIs with business rules up to 10x faster than other approaches.

The Modern Software Factory — CA API Management integrates with other highly-acclaimed CA products to solve tough customer problems across the API lifecycle, including rapid API testing, omnichannel security with advanced authentication, and precision monitoring of apps, APIs and microservices.

Trusted by the Most Demanding Customers — CA is named a leader in every major analyst evaluation of API management. Hundreds of enterprise, commercial and government customers trust our military-grade security and success stories in the most demanding verticals, including finance, healthcare, retail, transportation, telecom and others. z

Abhinav Asthana, Founder and CEO, Postman Postman is the only complete API development environment, and is used by nearly five million developers and more than 100,000 companies worldwide. Postman's elegant, flexible interface helps developers build connected software via APIs — quickly, easily and accurately. Postman has features to support every stage of API development, and benefits developers working for small businesses as well as industryleading enterprises. Postman supports API design by

allowing developers to create APIs directly within its application. And its Mock Service can be used to mock request-and-response pairs for an API under development. Devs can test APIs, examine responses, add tests and scripts to requests and fully debug an API. Automated testing is also easy with Postman by using the Postman Collection Runner to connect with external CI/CD tools. Postman Documentation is beautiful, web-viewable, and machine-readable; devs can download it directly into their Postman instance and begin working with an API immediately. Devs

can use Postman’s API Monitoring to test an in-production API for uptime, responsiveness and correct behavior. API Publishers can use Postman to onboard developers faster, with Postman’s Run In Postman button and Documentation. All these tools are based on Postman’s specification format — the Postman Collection — which is a robust, executable description of an API. Postman also hosts the API Network, which is a library of Postman Collections for the best and most popular APIs in the world. z

Ian Goldsmith, VP of product management, Akana by Rogue Wave Software Most people think of API management very narrowly, addressing only the deploy and run lifecycle stages. Robust API management addresses design, build, manage and secure phases. Today’s decisions impact your business now — and your future capabilities. Rogue Wave Akana provides a comprehensive solution spanning all stages of the lifecycle, helping ensure that our customers build the right APIs, build them the right way, and ensure they are behaving correctly.

The Akana Platform:
• Provides comprehensive API design capabilities supporting the authoring and documentation of APIs with import and export of multiple different API Definition document types.
• Offers the industry's most complete API security solution with comprehensive support for modern (OAuth, Open ID Connect, JWT, JOSE, etc.) and legacy (WS-*, x.509, and more) security functionality and standards.
• Enables seamless mediation and integration with existing APIs, services, and applications.
• Controls traffic to protect backend applications and prioritize traffic for high-value applications.
• Automatically publishes APIs through an intuitive and fully-functional developer portal facilitating self-serviced access and management of APIs and Apps.
• Surfaces insights from API traffic and developer activity via powerful analytics.
• Governs the progress of planning, building, and running APIs across the entire lifecycle.
• Can be deployed across geographic regions with exceptional scale and performance to meet the most complex and demanding requirements.

The depth and strength of our product combined with our experience and expertise helps our customers manage complex technology issues so they can focus on driving new business and measuring the success of their business initiatives. z



< continued from page 35

but they don't necessarily understand the technology behind APIs," said Goldsmith. API management platforms make sure that you are delivering the right API, that it's working properly, and that you are not introducing security vulnerabilities, he said. It also allows companies to correctly identify the proper use of the API, he explained. "As the API becomes more important, the requirements for API become more important."

APIs are beneficial to the development process in a number of ways, Asthana explained. "APIs support the creation of microservices, helping to define and govern interactions between smaller, more manageable packages of code; and the creation of an API allows content (data or services) to be created once, and shared across many channels. The content can be maintained and updated in parallel with its use, which is determined by the API," said Asthana. Asthana explained that since APIs help streamline the development process, they are very important to companies with a significant investment in developers.

"We all expect world-class customer experiences to be convenient, secure and integrated with the rest of our digital lives," said Chiu. "To accomplish this, enterprises must deliver the right data to the right endpoint at the right time, and many are doing so by modernizing their application architectures to integrate traditional monolithic applications with public or private cloud services, partners, third-party data providers, mobile apps and IoT devices." Chiu explained that this architectural model is "reliant on APIs to provide the secure and consistent connectivity needed between endpoints." Large organizations might need thousands of APIs and each one "is a potential point of failure in terms of security, stability, and scalability," he said. "API management brings order to this pattern by establishing a foundation for enterprises to create, secure, monitor and govern hundreds or thousands of APIs consistently across their entire lifecycle," said Chiu.

According to Chiu, a robust API management solution provides a foundation for a modern application architecture with APIs and microservices, thus creating an agile business. "It reduces critical barriers to innovation and digital transformation, allowing enterprises to respond more quickly to competitive, regulatory and consumer demands for new apps, integrations, business models and technology innovations."

API management is not without challenges
An API needs support and development throughout its entire lifecycle, said Asthana. He explained that a complete toolchain for API developers will make API development easier. According to Chiu, there are eight key challenges that an API management solution will address:
1. "Integrating and modernizing existing systems and services to create a viable application architecture for mobile, cloud and IoT.
2. Providing self-service management for API publishers and consumers that can scale to a program with thousands of individual APIs.
3. Designing and building APIs and microservices quickly and consistently from a range of new or legacy data sources.
4. Building or refactoring an application architecture with the speed, scale and safety needed for a particular enterprise or industry vertical.
5. Protecting an enterprise against threats to apps, APIs and services consistently across the application architecture.
6. Gaining powerful, actionable insights from monitoring and analysis of API and app performance.
7. Incorporating API development and deployment into existing governance and DevOps automation processes.
8. Accelerating mobile and IoT development by making it easier for app developers to discover, consume and optimize available APIs." z —Jenna Sargent

What a good solution should have
Goldsmith believes that good API management solutions have to have really strong security capabilities. Modern APIs should support OpenID Connect with JWT (JSON Web Tokens) and JKS (Java KeyStore). He noted that there are some newer token types that allow for standalone authentication, skipping the need to verify the token with a server, which is more efficient. Goldsmith said that having OAuth 2.0 with JWT is critical, so having it integrated in a data solution is good. It is also important to be able to manage these certificates and use things such as SNI (Server Name Indication), Goldsmith said.

Traffic management and throttling are also important in an API management tool, said Goldsmith. Being able to "protect backend infrastructure from high-traffic loads and being able to apportion quota capacity across different applications" is needed. Monitoring is also important, as you need to know real-time traffic usage and be able to see historical analysis. In addition to these latter-stage runtime needs, the early stage of building APIs is also important to keep in mind, Goldsmith said.

Asthana believes that it is important for API management solutions to have a breadth of services. According to him, the ecosystem should include key API tools such as API Development Environments, API Integration platforms, and API directories.

"At its core, API management is about speed, security and scale," said Chiu. Beyond those three qualities, API management solutions should be evaluated on their breadth of functionality and deployment flexibility. Because the full API lifecycle consists of numerous steps, having robust capabilities spanning those steps "extends both the benefits and value of an API management solution," said Chiu. He also said that good API management solutions are available in multiple different form factors and deployment options. z
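The token handling Goldsmith describes, in which a gateway validates a JWT locally instead of calling back to the authorization server on every request, can be sketched briefly. The example below assumes the third-party PyJWT library and uses a shared HS256 secret only to stay self-contained; a production OpenID Connect setup would typically verify RS256 signatures against keys published by the identity provider. All names and values here are illustrative.

```python
import time
import jwt  # third-party PyJWT library (pip install pyjwt)

SECRET = "demo-shared-secret"          # hypothetical; never hard-code in production
EXPECTED_AUDIENCE = "orders-api"       # the API this gateway protects

def issue_token(subject: str) -> str:
    """Stand-in for the identity provider: mints a short-lived access token."""
    claims = {
        "sub": subject,
        "aud": EXPECTED_AUDIENCE,
        "scope": "orders:read",
        "exp": int(time.time()) + 300,  # five-minute lifetime
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def authorize(token: str, required_scope: str) -> bool:
    """Gateway-side check: signature, expiry and audience are verified locally,
    then the scope claim is compared against what the route requires."""
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"],
                            audience=EXPECTED_AUDIENCE)
    except jwt.InvalidTokenError:
        return False
    return required_scope in claims.get("scope", "").split()

if __name__ == "__main__":
    token = issue_token("developer-42")
    print("orders:read allowed:", authorize(token, "orders:read"))
    print("orders:write allowed:", authorize(token, "orders:write"))
```

The appeal of this pattern is exactly what Goldsmith notes: the gateway can make an access decision without a round trip to the token issuer on every call.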
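Traffic management and quota apportionment can likewise be reduced to a small sketch. The token-bucket example below is one common way to implement the throttling Goldsmith mentions; the per-application rates and keys are invented, and a real gateway would enforce limits across many nodes rather than in a single process.

```python
import time

class TokenBucket:
    """Classic token bucket: tokens refill at a steady rate up to a burst cap."""
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # steady-state requests per second
        self.capacity = burst         # how large a burst is tolerated
        self.tokens = float(burst)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Quota apportioned per application key: a partner app gets a higher rate
# than a free-tier mobile app. Keys and numbers are hypothetical.
QUOTAS = {
    "partner-portal": TokenBucket(rate_per_sec=50, burst=100),
    "free-mobile":    TokenBucket(rate_per_sec=5, burst=10),
}

def handle_request(app_key: str) -> int:
    bucket = QUOTAS.get(app_key)
    if bucket is None:
        return 401                    # unknown application
    return 200 if bucket.allow() else 429   # 429 = Too Many Requests

if __name__ == "__main__":
    results = [handle_request("free-mobile") for _ in range(15)]
    print("free-mobile burst of 15:", results.count(200), "allowed,",
          results.count(429), "throttled")
```

The same bucket-per-consumer idea is how a gateway can protect backend infrastructure from spikes while still giving high-value applications the headroom they pay for.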



A guide to API Management tools

• Apigee: Apigee is an API management platform for modernizing IT infrastructure, building microservices and managing applications. The platform was acquired by Google in 2016 and added to the Google Cloud. It includes gateway, security, analytics, developer portal, and operations capabilities.

• Cloud Elements: Cloud API integration platform Cloud Elements enables developers to publish, integrate, aggregate and manage their APIs through a unified platform. Using Cloud Elements, developers can quickly connect entire categories of cloud services (e.g. CRM, Documents, Finance) using uniform APIs. Cloud Elements provides an API Scorecard so organizations can see how their API measures up in the industry.

• Dell Boomi: Boomi API Management provides a unified and scalable, cloud-based platform to centrally manage and enrich API interactions through their entire lifecycle. With Boomi, users can rapidly configure any endpoint as an API, publish APIs on-premise or in the cloud, and manage APIs with traffic control and usage dashboards.

• digitalML: The ignite API Product Management Platform provides the only API catalog with a built-in lifecycle, with an end-to-end focus on plan, design, build, and run. Enterprises can deliver and manage APIs, microservices and SOAP/composite services from intake to any implementation.

• IBM: IBM Cloud's API Connect is designed for organizations looking to streamline and accelerate their journey into the API economy; API Connect on IBM Cloud is an API lifecycle management offering which allows any organization to create, publish, manage and secure APIs across cloud environments — including multicloud and hybrid environments.

• Mashape: Mashape powers Microservice APIs. Mashape is the company behind Kong, the most popular open-source API Gateway. Mashape offers a wide range of API products and tools from testing to analytics. The main enterprise offering is Kong Enterprise, which includes Kong Analytics, Kong Dev Portal and an enterprise version of the API Gateway with advanced security and HA features.

FEATURED PROVIDERS

• Akana by Rogue Wave Software: Akana API management encompasses all facets of a comprehensive API initiative to build, secure, and future-proof your API ecosystem. From planning, design, deployment, and versioning, Akana provides extensive security with flexible deployment via SaaS, on-premises, or hybrid. Akana API security, design, traffic management, mediation and integration, developer portal, analytics, and lifecycle management is designed for enterprise, delivering value, reliability and security at scale. The world's largest companies trust Akana to harness the power of their technology and transform their business. Build for today, extend into tomorrow.

• CA Technologies: CA Technologies helps customers create an agile business by modernizing application architectures with APIs and microservices. Its portfolio includes the industry's most innovative solution for microservices, and provides the most trusted and complete capabilities across the API lifecycle for development, orchestration, security, management, monitoring, deployment, discovery and consumption.

• Postman: Postman provides tools to make API development more simple. Over 3 million developers use its apps. Its API Development Environment is available on Mac OS, Windows, and Linux, enabling faster, easier, and better API development across a variety of operating systems. Postman developed it from the ground up to support API developers. It features an intuitive user interface to send requests, save responses, and create workflows. Key features include request history, variables, environments, tests and pre-request scripts, and collection and request descriptions. It also provides API monitoring for API uptime, responsiveness, and correctness.

• MuleSoft: MuleSoft's API Manager is designed to help users manage, monitor, analyze and secure APIs in a few simple steps. The manager enables users to proxy existing services or secure APIs with an API management gateway; add or remove prebuilt or custom policies; deliver access management; provision access; and set alerts so users can respond proactively.

• NevaTech: Nevatech Sentinet is an enterprise-class API management platform written in .NET that is available for on-premises, cloud and hybrid environments. It connects, mediates and manages interactions between providers and consumers of services. Sentinet supports industry SOAP and REST standards as well as Microsoft-specific technologies and includes an API Repository for API Governance, API versioning, auto-discovery, description, publishing and Lifecycle Management.

• Oracle: Oracle recently released the Oracle API Platform Cloud Service. The new service was developed with the API-first design and governance features from its acquisition of Apiary as well as Oracle's own API management capabilities. It provides an end-to-end service for designing, prototyping, documenting, testing and managing the proliferation of critical APIs.

• Red Hat: Red Hat 3Scale API Management is a hybrid cloud-based platform designed to help organizations build a more successful API program. It includes access control, security, rate limits, payment gateway integration and developer experience tools. Performance insights can also be shared across an organization with clear API analytics and reporting mechanisms.

• TIBCO Software: TIBCO Mashery Enterprise is a full life cycle API management solution that allows users to create, integrate, and securely manage APIs and API user communities. Mashery is available either as a full SaaS solution, or with the option to run the API gateway on-premise in a hybrid configuration. z



Analyst View BY PETER THORNE

IoT needs ordinary applications too

Let's imagine… Jump a few years into the future. All the hoopla about IoT is history. A lot of equipment comes with network interfaces. The software embedded in the equipment uses resources on the network to get its job done. Imagine a new machine being installed in a factory. It's in position, the mechanical setup is complete, the electricals have been tested. Next step is to plug in (or switch on) the network connection.

What happens next? I agree there’s a case for machines only speaking when they are spoken to. But rather than wait for the local controllers to execute their next ‘discovery’ scan, let’s assume this is an active machine. It will get things started by looking for the resources it will use on the network. Some of these will be IoT-ish, but in addition, some resources the machine needs will be ordinary applications. The first example is asset management. The newly connected machine will probe for all the asset management systems it knows about, using all the protocols it has got. It must find a way to register itself as a new asset. This means finding a way to message the organization’s ERP system, and self register in the asset management module with details like manufacturer, model number, and serial number. The next example is configuration. The machine needs to phone home to the organization that provided it to its new user. Why? Because the machine needs to find out what functions the new user has paid for. So it must get a message to the customer relationship management system; or perhaps the sales order handling system of the provider. One of these systems will respond with information to instruct the machine to configure itself according to the order. It’s likely the machine will be largely autonomous. But maybe it still needs occasional operator attention. So it may need to contact the human resources system, and exchange information about who is authorised to access the machine.
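As a rough illustration of the self-registration step described above, the sketch below shows the kind of message such a machine might send to an asset management system it has discovered. The endpoint, field names and values are invented for the example; a real integration would follow whatever API the business system actually exposes.

```python
import json
import urllib.request

ASSET_ENDPOINT = "http://erp.example.internal/api/assets"   # hypothetical

def registration_payload() -> dict:
    """Details the newly installed machine reports about itself."""
    return {
        "manufacturer": "Example Machine Works",   # hypothetical values
        "model_number": "EMW-2400",
        "serial_number": "SN-000481516",
        "capabilities": ["sensor-telemetry", "remote-config"],
        "network_address": "10.20.30.40",
    }

def register(endpoint: str = ASSET_ENDPOINT) -> None:
    body = json.dumps(registration_payload()).encode("utf-8")
    request = urllib.request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"})
    # No ERP is listening in this demonstration, so print the message a live
    # machine would send rather than performing the HTTP call.
    print(f"POST {endpoint}\n{body.decode()}")
    # urllib.request.urlopen(request)   # the call a live machine would make

if __name__ == "__main__":
    register()
```

The interesting part is not the payload itself but the expectation behind it: that an ordinary business application, not just an IoT analytics service, is ready to receive and act on it.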

Some observations

This is hardly revolutionary. Even today, we are disappointed if plugging in a USB device, or switching on a network-connected media player,

needs any follow-up actions. Surely it just loads the right drivers, registers as a device or source or player, and finds any applications which might be interested in its capabilities. We expect to plug in, switch on and use, with no need for geeky selection of options and manual input of names and addresses to get things connected.

Of course IoT enables a lot of new things to happen. The new machine will be reporting sensor readings to an analytics system which will recognize problems before they cause a breakdown. An optimization system will be making second-by-second decisions, responding to status and to plans for the next few minutes, hours and days, and adjusting activity accordingly. A machine learning system may be monitoring the relationships between plans, inputs and outputs of an operator-controlled machine with a view to providing a better operator-assistance service next year. There will be new dataflows to and from this machine up and down the supply chain, improving visibility and enabling faster reaction to problems.

But for this machine to be part of a business, and not just a demonstration, it needs the ordinary apps as well. So if your focus is, say, financial software used by manufacturers, remember that one day your software will be scanning machines like this one so that you can implement new automated ways of calculating first-year capital allowances for tax, or utilization levels for the activity-based-costing model.

Peter Thorne is director at analysis firm Cambashi.


Next steps

For developers, the vital change in mindset is to supplement an inward-looking, function- and performance-based focus on your software modules with outward-looking consideration of how to use the new connectivity. The ‘things’ for IoT have existed as records in business system databases. But now the business system can also reach these ‘things’ directly. Where are they, what is their status? If this triggers ideas on how to do something new, different or better, then you are part of the future. z
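As one small, hedged illustration of reaching the ‘things’ directly, here is a sketch of a business application polling a machine’s status endpoint and mirroring the result into its asset record. The URLs, field names and update call are assumptions invented for this example.

# Hedged sketch: a business application reaching a connected machine directly.
# The device URL, status fields and ERP update call are illustrative assumptions.
import requests

DEVICE_STATUS_URL = "https://machine-ex200-000123.example.local/status"
ERP_ASSET_URL = "https://erp.example.local/api/assets/EX200-000123"


def refresh_asset_record():
    """Ask the machine for its live status and mirror it into the asset record."""
    status = requests.get(DEVICE_STATUS_URL, timeout=5).json()
    # e.g. {"location": "line-3", "running_hours": 1342.5, "state": "idle"}
    requests.patch(ERP_ASSET_URL, json={
        "location": status.get("location"),
        "running_hours": status.get("running_hours"),
        "state": status.get("state"),
    }, timeout=5)


if __name__ == "__main__":
    refresh_asset_record()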


Guest View BY RICK ORLOFF

CEOs: The biggest shadow IT threat?

Rick Orloff is chief security officer at Code42 and former security head at eBay and Apple.

Chances are high that your business is home to shadow IT. The practice of using unsanctioned software on company devices isn’t done out of malice. It’s quite the opposite — users are turning to unapproved applications like chat apps, task managers, or collaboration tools in an effort to be more productive. While the intentions of this practice may not be malicious, shadow IT exposes companies and their customers to malware and hackers via vulnerabilities in the software. In fact, it has been reported that a third of successful attacks against enterprises will involve shadow IT resources by 2020.

Making sure that employees understand the risks posed by shadow IT starts at the top. Employees often believe data security is IT’s problem, and that if IT does its “job” and filters out the threats, they have nothing to worry about. Leaders therefore need to make sure their employees understand exactly how an unknown or unapproved app can quickly lead to a massive data breach that extends far beyond their devices.

But, as it turns out, leadership may be great at talking about the dangers of shadow IT — and ignoring their own advice. According to a recent industry study, 75 percent of CEOs and more than half of business decision-makers acknowledge that they use applications and programs that aren’t approved by their IT department. This is despite 91 percent of CEOs acknowledging that their behaviors could be considered a security risk to their organization. It is not enough for employees to reject shadow IT if members of the C-suite aren’t heeding their own advice.

To lower the risks presented by shadow IT, companies should develop a related policy that applies to everyone, from top to bottom. The basic principles of such a plan are simple:

Provide employees with effective, easy-to-use tools and capabilities — Employees use shadow IT because unsanctioned apps help them achieve a goal. Provide your employees with quality tools and they won’t need to look elsewhere. At many organizations, different teams use different chat tools, including Chatter, HipChat and Google Chat. By switching your entire company to one channel, you can unify all teams onto a single chat tool, improving inter-departmental communication and collaboration — and remove the need for any team to go rogue and install unapproved chat apps.


Deliver a straightforward, meaningful message on mutual expectations and accountability — Your communication to employees, including the leadership team, has to a) deliver a crisp, meaningful message; b) demonstrate that security is a core responsibility bestowed by executives; c) close the loop between what you say and what employees understand; and d) hold employees accountable.

Demonstrate that security is a core responsibility for everyone — Preventing cyberthreats from taking hold in your company is like a war. End users are on the front lines of the battle — their endpoints are the primary attack vector, and they need to embrace the strategies to protect them that are set by the “generals,” the IT and InfoSec teams. The C-level executives are in the war room, setting priorities and approving the strategies proposed by IT and InfoSec. From the top on down, everyone involved needs to understand that all the fancy security tools in the world are worthless if they don’t follow the rules. They need to understand that even a small error could lead to immense costs, lost productivity, brand damage, and more. Most importantly, no employee — even trusted administrators and executives — should be exempt from the consequences when rules are broken.

Structure the organization for success — To prevent shadow IT, security must have a view of the entire company — no exceptions for C-level or other privileged users. Creating any security program, whether it is focused on shadow IT or another pain point, requires accurate situational awareness. Organizations must first have a holistic view of data usage behaviors and security risks before they can decide where to add doors and locks.

Align CapEx and OpEx where there are synergies and predictable results — Providing employees new tools so they don’t turn to shadow IT requires careful CapEx and OpEx planning. This doesn’t happen overnight, and shadow IT apps don’t show up overnight, either. Plan your technology purchases wisely and efficiently, and you’ll prevent shadow IT from showing up in the first place. z



Industry Watch BY DAVID RUBINSTEIN

Managing data across multiple clouds

Microservices and containers have created a world of interoperable cloud-based services that can be moved seamlessly from one platform to another with minimal change. An area that has lagged, however, is providing data that can be used across multiple cloud platforms to power today’s modern applications.

The folks at SwiftStack, a 7-year-old data storage solution provider, have come up with a way to manage data across multiple clouds, based on three fundamental principles — that data is universally accessible; data decisions are based on policies, not manual operation; and metadata is the organizing principle rather than silo locations and hierarchical data structures.

“What we’ve noticed,” said Joe Arnold, co-founder, president and chief product officer at SwiftStack, “is that each public cloud is like its own operating system. Think about an application developer — they’re typically building the application in a specific programming language and targeting a specific environment — Windows, Linux, Mac, mobile devices, whatever. But what about the back-end infrastructure? We’re building applications in these public clouds to take advantage of the different capabilities they have. Google has a bunch of machine learning, Amazon has a whole host of services that they offer. The public clouds aren’t just a dumb place to run virtual machines. What we’re noticing now is that each of those clouds is going after really targeted applications that have unique functionality in those public clouds. It raises up the importance of being able to have access to your data even if it’s in another cloud, or even if that data is on-premises. You need to be able to get that data and manage that data into those public clouds so that you can take advantage of the services being offered there.”

The key to making universal accessibility happen is a new, open-source file system the company has developed — File Access, it’s called — that can provide data to both file-based and object-based applications. “When you look at how data typically can be stored in the public cloud, object is the very dominant way of storing that data, and application developers are building applications around using those Object APIs. Yet when you look on-premises, you see a lot of those applications still being predominantly

file-based. There’s an impedance that needs to be worked through, so you can build your new applications with the same data that supports your existing applications.”

If you’re managing data across multiple clouds, one thing that’s critical is that an application shouldn’t have to worry about where data lives. “If it’s in the namespace, it should have access to the data. It shouldn’t care if it’s in a data center somewhere, or in AWS or Google or any other type of cloud,” Erik Pounds, SwiftStack vice president of marketing, said. “Placement of data needs to be policy-driven instead of manually driven. Based on the requirements for the data, different policies can be set for different data sets, which determine how it’s stored and protected and migrated or archived. From an application standpoint, it decides how the data is handled more so than an operator.”

Arnold said he’s seeing customers who tell him they don’t necessarily want to have to know where their data is, but instead want their system to manage and optimize where the data is placed. He said, “The system we have allows data being placed into the system, and policies can be applied to that data, based on age, for example, something gets archived, or something needs to be accessed in a public cloud, then let’s cache that and store that so it’s readily available and faster to access for that application.”

Finally, metadata plays a key role in managing data in a single, flat global namespace. “This,” Pounds said, “is what’s going to break down barriers that restrict the value you get out of your data and the utilization of that data. Instead of the knowledge of data being owned by a single application or that data only being in a single location, once you have metadata associated with that data, it can be indexed and searched on. Now, anyone that has access to that data can get something out of it. They can run queries on massive sets of data and narrow that data down and start to utilize and act on that data.

“That’s something we’re seeing as a big change that’s going to enable the management and utilization of data across multiple clouds.” z
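To illustrate the policy-driven idea, here is a minimal sketch of how placement rules might be expressed against metadata rather than locations. This is not SwiftStack’s actual API; the policy names, metadata fields and storage targets are assumptions for illustration only.

# Illustrative sketch of policy-driven data placement (not SwiftStack's API).
# Each policy pairs a predicate over an object's metadata with a storage target.
from datetime import datetime, timedelta, timezone


def older_than_a_year(meta):
    return meta["last_access"] < datetime.now(timezone.utc) - timedelta(days=365)


POLICIES = [
    ("archive-old", older_than_a_year, "cold-archive"),
    ("cache-for-ml", lambda meta: "training-data" in meta.get("tags", []), "public-cloud-cache"),
    ("default", lambda meta: True, "on-premises"),
]


def place(meta):
    """Return the storage target for an object based on its metadata, not its silo."""
    for name, matches, target in POLICIES:
        if matches(meta):
            return target


if __name__ == "__main__":
    obj = {"last_access": datetime.now(timezone.utc) - timedelta(days=10),
           "tags": ["training-data"]}
    print(place(obj))  # -> public-cloud-cache

From the application’s point of view, nothing changes: it asks for the object by name, and the policy engine decides where the bytes live and when they move.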

David Rubinstein is editor-in-chief of SD Times.



Pro Cloud Server: Collaborate, Create, Integrate with OSLC and a RESTful API

NEW: Pro Cloud Server Express. A free 5-25 user version of the Pro Cloud Server + WebEA, for those with five or more current or renewed licenses of the Enterprise Architect Corporate Edition (or above). Conditions apply, see web site for details.

Visit: sparxsystems.com/pcs-express

Online Demo: North America: spxcld.us | Europe: spxcld.eu | Australia: spxcld.com.au

Visit sparxsystems.com/procloud for a trial and purchasing options
