APRIL 2018 • VOL. 2, ISSUE 10 • $9.95 • www.sdtimes.com
VOLUME 2, ISSUE 10 • APRIL 2018
Software’s shrinking gender gap
Fight for free ‘as in freedom’
To change the world, target a problem
GitLab: 2018 is the year for open source and DevOps
Ten things that change when a developer gets promoted
ITSM’s Next Wave: AI and Machine Learning
4 tips to build a lean, mean ITSM machine
Q&A: Talking ITSM with ServiceNow GM Farrell Hough
GUEST VIEW by Christine Spang What to expect for Python
ANALYST VIEW by Arnal Dayaratna The demise of the statistician
INDUSTRY WATCH by David Rubinstein Column as a service
Buyers Guide: The driving force behind DevOps

Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2018 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at email@example.com.
www.sdtimes.com

EDITORIAL
EDITOR-IN-CHIEF: David Rubinstein firstname.lastname@example.org
NEWS EDITOR: Christina Cardoza email@example.com
SOCIAL MEDIA AND ONLINE EDITOR: Jenna Sargent firstname.lastname@example.org
INTERNS: Ian Schafer email@example.com
Matt Santamaria firstname.lastname@example.org
ART DIRECTOR: Mara Leonardi email@example.com
CONTRIBUTING WRITERS: Alyson Behr, Jacqueline Emigh, Lisa Morgan, Frank J. Ohlhorst, Jeffrey Schwartz
CONTRIBUTING ANALYSTS: Cambashi, Enderle Group, Gartner, IDC, Ovum
SUBSCRIPTIONS: firstname.lastname@example.org
ADVERTISING TRAFFIC: Mara Leonardi email@example.com
LIST SERVICES: Shauna Koehler firstname.lastname@example.org
REPRINTS: email@example.com
ACCOUNTING: firstname.lastname@example.org
PUBLISHER David Lyman 978-465-2351 email@example.com
PRESIDENT & CEO David Lyman CHIEF OPERATING OFFICER David Rubinstein
D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803 www.d2emerge.com
NEWS WATCH

Developer preview of Android P operating system released
Google has announced the first developer preview of its upcoming operating system, Android P. The release is just a baseline build for developers; the company says it will have many more features to announce at its Google I/O developer conference in May. The preview features indoor positioning with Wi-Fi RTT, display cutout support, improved messaging notifications, a multi-camera API, ImageDecoder for bitmaps and drawables, new media APIs, autofill improvements, Neural Networks API 1.1, and new mobile payments APIs. The new indoor positioning is designed to enable location-based services, while the display cutout support allows apps to take advantage of full-screen content. In addition, Android P is focused on strengthening security, privacy and performance.
Industry leaders launch data manifesto
The data community is getting new guidelines for approaching data effectively and ethically. Data.world and other industry leaders are announcing the Manifesto for Data Practices, which aims to help organizations and users maximize their data’s internal value and impact while taking concerns like privacy and security into account. “Today, every choice that a company makes about data has the potential to help or harm consumers, communities, and even entire countries,” said Brett Hurt, CEO of data.world. “The Manifesto for Data Practices is critically important because it defines a new model for improving data practices themselves. It can be used by any organization to foster ethical, productive data teamwork, and we feel privileged to have collaborated with so many industry luminaries to coauthor and release this manifesto.” The Manifesto for Data Practices is built on four values and 12 principles. The values are inclusion, experimentation, accountability and impact.
According to the manifesto, the principles aim to help data teams:
1. Use data to improve life for our users, customers, organizations, and communities.
2. Create reproducible and extensible work.
3. Build teams with diverse ideas, backgrounds and strengths.
4. Prioritize the continuous collection and availability of discussions and metadata.
5. Clearly identify the questions and objectives that drive each project and use them to guide both planning and refinement.
6. Be open to changing our methods and conclusions in response to new knowledge.
7. Recognize and mitigate bias in ourselves and in the data we use.
8. Present our work in ways that empower others to make better-informed decisions.
9. Consider carefully the ethical implications of choices we make when using data, and the impacts of our work on individuals and society.
10. Respect and invite fair criticism while promoting the identification and open discussion of errors, risks, and unintended consequences of our work.
11. Protect the privacy and security of individuals represented in our data.
12. Help others to understand the most useful and appropriate applications of data to solve real-world problems.

Postman announces new API dev platform for the enterprise
Postman has announced the release of Postman Enterprise, a new solution designed to expand on the features of its API development solution, Postman Pro. Postman Enterprise was created to provide enterprise users with new and improved security and enterprise-only features. “Enterprises want the option for their developers to use Postman, but within a safe, secure and enterprise-controlled environment,” said Abhinav Asthana, CEO and cofounder of Postman. “These organizations now have the option to make all of their API development faster, easier and better with Postman Enterprise.” The new release features Single Sign-On (SSO), allowing organizations to more easily and securely manage team members’ access to API development work within a Postman instance. Postman Enterprise already supports multiple SSO providers such as Okta, OneLogin, Duo, Ping Identity, AD FS, and G Suite, and going forward the company plans to add new providers based on customer need. Postman will also be providing SAML 2.0-compliant identity provider support.
JavaFX now available as a separate module
Oracle is making JavaFX available as a separate module in an effort to make it easier to adopt. JavaFX is the software platform for developing desktop Java apps. Currently it is part of the JDK, and it will continue to be supported as part of JDK 8 through at least 2022, but starting with the release of JDK 11 it will be its own module. The Java Platform Module System, introduced with the release of Java SE 9, paved the way to decouple JavaFX from the JDK and make it available as a separate module, Oracle says. Having it available as a separate module will give developers more freedom and flexibility with the framework. “Over the last decade, the JavaFX technology has found its niche where it enjoys the support of a passionate developer community. At the same time, the magnitude of opportunities for cross-platform toolkits such as JavaFX in the marketplace has been eroded by the rise of ‘mobile first’ and ‘web first’ applications,” the company wrote in a white paper.
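Once decoupled from the JDK, a desktop application would declare its JavaFX dependency explicitly through the Java Platform Module System. A minimal sketch of such a module descriptor follows; javafx.controls is the standard JavaFX controls module, while the application module and package names here are purely illustrative:

```java
// module-info.java: descriptor for a hypothetical JavaFX desktop app
module com.example.desktopapp {
    // Pull in the JavaFX UI controls module; it transitively
    // requires javafx.graphics and javafx.base as well.
    requires javafx.controls;

    // JavaFX instantiates the Application subclass reflectively,
    // so the package containing it must be visible to the runtime.
    exports com.example.desktopapp to javafx.graphics;
}
```

At launch, the decoupled JavaFX modules would then be supplied on the module path (for example, via the java --module-path flag) rather than found inside the JDK itself.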
Windows 10 update introduces machine learning component
Microsoft announced Windows ML, an artificial intelligence runtime that brings high-quality machine learning to Windows devices. At its Windows Developer Days event in early March, the company explained that the addition advances CEO Satya Nadella’s vision of widespread adoption of machine learning by providing a platform for developers to use AI within their applications. “At its core, the runtime enables developers to include machine learning models in their applications and solve problems for their customers,” said Kam VedBrat, Microsoft partner group program manager. “Developers can take a set of images and train the model on inputs and outputs,” he pointed out; that training had typically been done in large clusters or in the cloud, but models can now be packaged up inside applications. Organizations might choose to train their AI models in the cloud and evaluate them on edge devices, according to Kevin Gallo, Microsoft corporate vice president. “There are 600 million Windows devices, so quickly we’ll have a large footprint of devices” capable of leveraging AI.
Stack Overflow: interest in DevOps, ML on the rise
DevOps and machine learning are emerging as two of the most important trends in the industry today, a new report finds. Stack Overflow released its annual Developer Survey Results to find out where developers are working and what tools they are using. The report is based on the responses of 101,592 software developers from 183 countries. According to the report, the rise of DevOps and machine learning is evident in developer salaries and technologies. DevOps specialist was found to be the second-highest-paying developer job, with engineering manager being the first, and data scientist or machine learning specialist was found to be the third-highest paying. The languages and frameworks associated with DevOps and machine learning were also found to be trending upward. Python continues to rise among the ranks of programming languages, surpassing C# in popularity this year after surpassing PHP last year. Python was also found to be the most wanted language of the survey. Go, a language Stack Overflow data scientist Julia Silge says is often associated with DevOps, was also among the top most-loved and most-wanted programming languages.
Microsoft releases its Azure Service Fabric to open source
Microsoft’s Service Fabric team announced the open-source release of Azure Service Fabric under the MIT license. The team behind the distributed systems platform, designed to easily package, deploy and manage scalable and reliable microservices and containers, will be transitioning to a completely open development process on GitHub over the coming months. While the Service Fabric repo available on the project’s GitHub contains build and test tools for Linux, allowing users to clone and build Service Fabric on Linux systems, run basic tests, open issues and submit pull requests, the team says it is “working hard” on migrating the Windows build environment to GitHub with a complete continuous integration environment. “We’ve been developing Service Fabric internally for Windows for close to a decade, and most of that time it was a Microsoft-internal platform, which means we have close to a decade’s worth of internal Microsoft tools to migrate and processes to refine before we can put something usable out on GitHub,” the team wrote in a development blog.

Kubernetes graduates from the Cloud Native Computing Foundation
The Cloud Native Computing Foundation (CNCF) has announced that Kubernetes has moved from incubation to graduation. For this to happen, a project has to demonstrate thriving adoption, documentation, a structured governance process, and a commitment to community success and inclusivity. According to the foundation, Kubernetes is the first open-source project to graduate. “Kubernetes led to the creation of the CNCF as the first project accepted by the Technical Oversight Committee (TOC) a little over two years ago,” said Chris Aniszczyk, COO of the CNCF. “With the project’s rapid growth, broad participation from numerous organizations, cloud providers and users, and proven ability to operate at scale, the TOC readily endorsed Kubernetes moving on from incubation to graduate. It signals that Kubernetes is mature as an open source project and resilient enough to manage containers at scale across any industry in companies of all sizes.”
Google announces ARCore 1.0
Google has announced the release of ARCore 1.0, the company’s augmented reality SDK for Android. Google recently abandoned its previous AR initiative, Project Tango, in favor of ARCore. With the release of ARCore 1.0, developers are now able to publish AR apps to the Play Store. Features of the latest release include improved environmental understanding, allowing users to place virtual assets on textured surfaces. ARCore is also now supported in the Android Studio Beta emulator, so developers can test apps in virtual environments from their desktops. ARCore currently works on 13 different device models, including the Pixel line and Samsung’s Galaxy S8 and S8+, and the company is partnering with other device manufacturers to enable ARCore on their devices this year.
TEN THINGS THAT CHANGE WHEN A DEVELOPER GETS PROMOTED BY BRADLEY L. JONES
When you do a great job as a developer, you often get promoted. The irony is that with most promotions, what you do changes from what you previously did. One of the most important things you can do when offered a promotion is to become aware of the changes that are expected in what you will be doing. It is important to know what additional tasks are likely to be added to your plate, as well as which tasks you have been doing that you will need to give up.

Some promotions have minimal impact. For example, some organizations will promote a developer from junior status to regular status, and later to senior status. This type of career progression often comes with less change in expectations; however, it also has a more limited career path.

It is when a promotion shifts you to a role that involves management that larger changes happen in what you do. This includes promotions such as going from developer to team leader, or from developer to development manager. It is this kind of promotion that many developers strive for; however, it is also this kind of promotion that can lead to the Peter Principle, or to putting you into a position you no longer care to do.

If you are a developer looking to get promoted, how do you avoid getting caught in the Peter Principle? How do you avoid being surprised if you land in a position that makes you want to leave the organization? One of the things you can do to avoid being promoted into a position that is beyond your ability is to be aware of what you are good at doing. Knowing your strengths and weaknesses is core to understanding whether you are likely to succeed at something, and it will also help you avoid getting into a role that you don’t find enjoyable. Equally important is understanding what you enjoy doing. Developers who get promoted into a position they don’t care for often end up leaving a company, even if they would prefer to simply be demoted back to their previous role. In both cases, the best way to make sure you continue along a career path you enjoy is to be aware of the impact a promotion is going to have on what you do. With that in mind, the following are several changes that can occur in what you do should you be promoted into a position that includes developer management.

(Bradley Jones has authored more than 20 books on programming. He is currently building his own company, Lots of Software, LLC.)

CHANGE 1: Less Coding, Architecting, or “Hands On”
While many developers don’t like to do testing, most like to code. Along with coding, developers tend to like the architecting and other hands-on tasks related to creating programmed solutions. After all, these tasks are what put the “program” in programmer. The reality is, most developers would prefer to spend more time coding than they currently do; they find that other tasks, such as meetings, get in the way. As such, one of the biggest changes to be aware of is the redirection of your time away from coding.

CHANGE 2: More “Paperwork”
Many opportunities come with a promotion, and many are covered in the following sections. But one of the most grating changes is more “paperwork.” Such tasks can include little things such as tracking a team’s time off, approving vacation days or being the person a team member calls when they are sick. It can also include bigger tasks, such as writing reviews, doing performance management tasks or reporting the status of the team’s projects. The bigger your team, the more time administrative tasks are likely to take. Because these tasks tend to be driven by a human resources department and pushed down by upper management, they tend to be visible. Thus, they are tasks that have to be done if you expect any future promotions!
CHANGE 3: Thinking More Strategically than Tactically
As a developer, you might not see everything happening within a business. Often the focus of developers is to deliver a solution that meets a set of requirements. Their approach is often tactical, using a set of predetermined tools to derive a solution that meets a defined set of needs. As a manager, the role becomes more strategic, with a focus on the business rather than just the solution. In general, the focus expands beyond just the projects you have been assigned. It can become necessary to be aware of what the business is doing beyond just your own group. Depending on the size of the organization, you might need to understand what the business units around you are doing, or what the business as a whole is doing. As a manager, it will be more important to understand how your and your team’s actions fit within the larger strategy. This raises the question: how do you learn the business? Often developers understand a portion of the business better than many of the others involved, a result of having built business rules into the applications that are driving the business. Unfortunately, while a developer might understand a segment in extreme detail, that doesn’t mean they will know the business rules that other developers have coded, or how those rules apply outside of the applications they’ve worked on. As such, when you are promoted into a higher-level position, it is common to find a need to expand your understanding of the business. Accomplishing this requires time and effort, and that time comes from other tasks that could be done.
CHANGE 4: Changes in the Amount of Control
Many developers mistakenly believe that they will have more control if they are promoted. While this can be true for some areas of responsibility, there is an equal potential to have less control. As a developer, you likely controlled your time. You were likely given tasks to do, and you controlled getting them done. You either got them done, or you didn’t. If your task required something from others to complete, then while that could impact your delivery, it was outside of your control. This provided a reason for missing a delivery. As a manager, you have tasks you are given as well. Some of these tasks you will do, while some you will delegate to team members. You gain control of how you get the tasks done as well as how you allocate them across your team. You are, however, at the mercy of your developers to accomplish the tasks. If one of your team members doesn’t deliver, you don’t have a reason for missing the overall delivery. A development manager is generally a part of a larger team as well. While you have a team you can leverage to accomplish tasks, you are also a part of a team with a leader who is leveraging you. This means that while you can control those below you, there are likely those above you who are assigning tasks to you. Often the control you gain over those below you is outweighed by what is required by those above.
CHANGE 5: An Increase in Responsibility
With an increase in control generally comes an increase in responsibility. Often as a manager, you are not only given a higher level of responsibility over managing people, but also a higher level of involvement in delivering what is needed for the business. The increase in responsibility often requires that you lead the people who report to you, as well as work more closely with your peers and those above you. Because of the higher level of responsibility, it is not uncommon to be put into a position where you need to say “no,” or to indicate that something can’t be done within the constraints you and your team are operating under. As a developer, you were likely asked to deliver seemingly impossible tasks. In a developer management role, you now have developers who say “no” to you for the very same reasons. At the same time, you might be asked by those above you to complete tasks that you believe it would be irresponsible to accept, and you might have to say “no” as well. Just as you can override a team member’s “no,” your superior might override yours. The difference is, if you’ve been given a task to do and your team members say no, you are still responsible for getting the task delivered.
CHANGE 6: More Time Resolving Issues
It should be no surprise, after reading the previous changes, that another area where you will be impacted is issue resolution. In a perfect world, everything goes smoothly; as a developer, you should already know things don’t always go smoothly. As a development manager, there are several areas where you’ll likely spend additional time resolving issues. As a developer, you likely addressed scope creep on projects. As a development manager, you’ll be managing this common issue not only from those above and around you, but also for the members of your team. Not only will you need to manage project requirements being expanded by the business users, but you’ll also need to manage what your own team members are doing. Keeping your own developers focused so as to avoid adding scope or features is equally important. When a developer wants to add a new animated button that isn’t needed or budgeted, it is likely that you must rein the project back in. In addition to features and scope, you might also be given more responsibility to manage budgeting and the issues associated with it. This can include determining new allocations as well as cutting existing ones. A third and more common area that gets added to responsibilities at a management level is the resolution of personnel issues. Whenever two or more people are involved in a task, chances are issues will arise at some point. If you have a team, then any issues that arise with its members often become your responsibility to resolve.
CHANGE 7: Toeing the Company Line
One of the changes that many developers find hard to accept is the need to balance personal opinion with what is best for the company. Developers often suggest better ways to accomplish tasks. They will suggest newer, more innovative ways to accomplish something, or better products or tools that could be used. As a manager, in the best interest of time and the company, you might need to say no, even when you agree with the developer. There are times when you might have to make or support decisions that you feel are not optimal. As a manager, it is important to consider what is in the best interest of the business. This can include weighing the level of work to be done against what the business gains. Often this requires deciding what is “good enough” versus what is the best solution available. Similarly, while many developers believe that a promotion will give them the ability to replace a company’s underlying systems with a better solution, this is often not feasible due to risk, finances, or a variety of other reasons. As a manager, you might have to support what is in place already. This can give the impression of “toeing the company line.”
CHANGE 8: You Become “One of Them”
Part of being included in making decisions at a higher level is the expectation and responsibility to keep things confidential. As indicated in Change 7, you often gain access to additional information that helps you better understand risks and costs. While you are gaining insight into more of the details, your subordinates are not, nor are you always able to share what you know. As time passes, the changes that occur in the transition to a management role will cause relationships to change. This can result from the difference in authority created when you were promoted. The result is that developers who report to you might be more reserved, knowing that you are responsible for reviews and other impactful decisions related to their careers. Your previous peers might not include you in conversations that you previously were a part of.
CHANGE 9: More Meetings and More Communication
While it isn’t always true, in general most people end up in more meetings the higher they are promoted in an organization. This should not be a surprise, since the higher you progress in a company, the more you rely on those reporting to you for tasks to be completed and the business to be moved forward. Of course, it is possible to limit formal meetings; however, you will need communication to flow so you can stay aware of where things stand. You’ll need to keep communication flowing with your own team members, and you’ll also have to keep the flow of information happening with your boss.
CHANGE 10: Potential for More External Relationships
The role of a developer can often be performed without interacting with very many people. Depending on the specific development role, this could mean working with other developers, an analyst, a manager and very few others. Developers who get promoted into management roles tend to be those who interact with a wider range of people. Once promoted, in addition to communicating with many of the same people you interacted with before, it is common to interact with a broader array of people within the organization. In large companies, this might include interacting with more technical teams as well as with other managers. In smaller organizations, a promotion into a development management role might also include more interactions with business units, external users, and higher levels of management. It might also include time spent working with external vendors and peers in other organizations.
Conclusion
As you are promoted, most roles will push you toward using more interpersonal skills rather than technical skills. If you are a developer who prefers to be heads-down writing code and avoiding other people, then a promotion to a position that involves more management could be brutal. Every role is different, so not all of the above items will change if you are promoted. It is wise to ask about the expectations of a promotion. By being aware and by communicating, you have a better chance of avoiding the Peter Principle, or a promotion into a role that makes you want to leave. Better yet, by understanding the expectations of your new role, you’ll be better positioned to excel, and then possibly be promoted yet again!
Software industry’s HackerRank report shows more women entering workforce BY MATT SANTAMARIA
Making a bigger impact “The future for women in the software space looks optimistic, and all signs are pointing in the right direction,” according to Sofus Macskassy, VP of data science at HackerRank. “Companies can drive change by offering more
At 27 years old, Jessica Williams became the co-founder and CEO of Opearlo, one of the first voice design agencies in the United Kingdom. Her business helps bigbrand clients to reach customers through engaging voice experiences with Alexa. However, she has been to a lot of developer events where she is the only woman there and wants to change that. “It’s important for women to feel inspired and to surround themselves with other women that inspire them, whether or not it is in technology,” said Williams. “I want to inspire more women to get involved in voice because it is such a new and exciting space. I truly believe there’s going to be a growing need for voice designers and developers going forward.”
education around the obstacles that women face in the workplace,” he said. “For instance, studies show women have a tendency to report their skills differently than men and also tend to listen more and speak less in group settings. Managers need to recognize this and focus on recognizing their biases, conscious or unconscious.” There are many companies already working to inspire women to get involved in STEM. For instance, IBM has developed education models and mentorship programs for young women in order to develop their technical, leadership and confidence skills. The IBM programs include Corporate
Service Corps, P-Tech, SkillsBuild and Tech Re-Entry. “The business community must take bold action to ensure that women have every opportunity to succeed,” said Jennifer Ryan Crozier, president of the IBM Foundation. “This work is much more than a financial commitment; it’s about giving employees the tools and time to more easily serve in a way that benefits girls and women in society and equally benefits the ones who donate their talent. The impact of these programs is seen when young girls suddenly start envisioning their future differently and are determined to push through barriers.” z
014,15_SDT010.qxp_Layout 1 3/23/18 3:08 PM Page 15
gender gap shrinks Helping women gain entry to tech industry
Akilah Bolden-Monifa, a 60-year-old writer, recently decided to develop something on her own. She wanted to learn how to create an Alexa skill in order to increase awareness of Black History Month. Akilah then created a top Alexa skill called Black History Everyday and also built two additional skills for Alexa. “To know that so many people can hear the skill and be as enlightened through sound and knowledge as I was — it is, I think, very, very profound,” she said.
Kesha Williams is a software engineer at Chick-fil-A and a mother of three. Once she bought her first Echo, she wanted to build an Alexa skill that would help her organize her day. This led to Live Plan Eat, which helped plan the family’s weekly meals. This encouraged her to prototype skills for the company that include asking Alexa for nutrition information about the meals served in restaurants. Kesha is also an advocate for STEM (Science, Technology, Engineering, and Mathematics). She built a skill called STEM Women that highlights women’s achievements and contributions in the field. “Technology has opened a lot of doors for me, and it’s a great feeling to share with others the lessons I’ve learned with voice,” Williams says. “If there’s anything I can do to help someone find their way to that path, I feel that’s really what I’m meant to do.”
Corporate Service Corps performed more than 800 projects related to women from 2011 to 2017. Recently, the program worked with the Peace Corps to help a social enterprise in Ghana called TECHAiDE, which provides technology that allows young women in remote villages to complete their education. Inspired by the volunteer team itself, the male CEO of TECHAiDE hired more women as technicians. P-Tech is an innovative education model that blends high school, community college and workplace skills to help underserved students learn skills and gain experience to compete for jobs in STEM. The program started with one school in 2011 and has expanded to 90 schools in seven U.S. states, Morocco and Australia. According to IBM, most of the P-Tech schools are at least 40 percent female. SkillsBuild allows corporate employees to volunteer their time at schools in order to inspire the next generation of female developers and coders. Through the program, female IBM employees mentor girls one-on-one, and it has proven successful. For example, a female IBM employee teamed up with an 11th-grader through an organization called #BuiltByGirls to hold a STEM career event in January of 2018 at a high school in New Jersey. Tech Re-Entry allows women who have been out of the workforce for several years to rejoin the tech industry. The program provides “adult interns” with hands-on experience, mentoring and workshops which allow them to succeed. According to IBM, all participants have been recommended for full-time offers. z
Fight for free ‘as in freedom’
Free Software Foundation’s annual report shows organization’s efforts and accomplishments
BY IAN C. SCHAFER
The Free Software Foundation’s future is looking bright, according to its Fiscal Year 2016 Annual Report. The report outlines efforts and accomplishments by the free “as in freedom” software advocacy group over the previous year, from activism to awards and growth in membership and infrastructure. With individual contributions to the non-profit totaling more than $1 million, and additional funding from earned revenue, investments, interest and other sources, the organization was able to cleanly cover all operating expenses while setting over $56,000 aside, with a reported 81 percent of funds supporting the GNU Project, free software and its other endeavors. An evaluation of the FSF’s financial health, accountability and transparency alongside over 8,000 other non-profits by Charity Navigator earned the FSF a top four-star rating. “[Charity Navigator] chose us, out of 8,000 charities, for their all-purpose list of ‘10 Charities Worth Watching,’ demonstrating significant progress toward making user freedom an issue of general, widespread importance,” foundation executive director John Sullivan wrote in the opening letter of the report. “These accolades reflect the hard work of our small, dedicated team, and show that supporters are right to invest their dollars and time in the FSF.” Over the summer, the FSF and partners successfully lobbied European lawmakers for stronger laws in favor of net neutrality and mobilized community action against the U.S. Supreme Court’s approval of amendments to
Rule 41 of the Federal Rules of Criminal Procedure, seeking to halt expansion of the government’s authority to hack into devices. In addition, the organization’s Licensing and Compliance Lab, which defends copyright claims on free software, publishes licenses like the GPL and offers Continuing
“We need the software powering our world to be freedom-respecting by nature, from the ground up. This will only happen if a critical mass of people demands it.” Furthering their efforts to codify what makes for free and ethical software practices, the foundation and collaborators drafted and launched ethical criteria for code-hosting repositories alongside evaluations of major sites based on those criteria.
Source: Free Software Foundation
Legal Education to legal and tech professionals and students, continued fighting against the Trans-Pacific Partnership alongside partners. “We’ve worked with an inspiring number of organizations to fight for better policy when it comes to freedom on the Internet and protection from bulk surveillance, but we need more than good laws,” Sullivan wrote.
In addition, the FSF attended 13 conferences over the 2016 fiscal year, saw growth in email subscribers and unique website visits, launched a new version of its Email Self-Defense Guide, and started taking donations to directly support the GNU toolchain. “More people and businesses are using free software than ever before,” Sullivan wrote. “That’s big news, but our most important measure of success is support for the ideals. In that area, we have momentum on our side, but also much more to do.” z
To change the world, target a problem
Bullied as a schoolboy, developer wrote app to detect fighting
BY CHRISTINA CARDOZA
When computer scientist Derek Peterson was a 5-foot, 110-pound schoolboy, his daily school routine involved being stuffed into lockers by upperclassmen, weekly chokings and constant wedgies, just because of his size and weight. “Being the smallest kid in school allowed everyone to feel free to take verbal and physical shots at me,” he says. Today, Peterson is an Ironman athlete who stands 6-foot-1, and he isn’t letting bullies get away with it anymore. Peterson has dedicated his career to stopping bullying and violence across America through his company, Soter Technologies. “The company name is inspired by Greek mythology, wherein ‘Soter’ is the personification of safety, deliverance, and preservation from harm. Using advanced sensors and software, Soter Technologies develops and delivers innovative solutions for environmental intelligence — to make the world a safer place… from schools to enterprises to public spaces,” according to the company’s website. At the time of Peterson’s schooling, the technology landscape was a lot different than it is today. Back then, there were no cell phones and the Internet wasn’t available to everyone, so Peterson says it was difficult to report attacks without revealing himself. Unable to fight back or tell anyone about it, he quietly suffered.
management application designed to give parents and students a place where they can anonymously and confidentially report any bullying or harassment. Peterson’s most recent solution is Fly Sense, a sound and chemical detection solution that can indicate bullying and fighting in isolated areas where video cameras can’t reach or are not allowed, such as bathrooms and locker rooms. In addition, Fly Sense is able to detect vaping and alert administrators to any concerns. The company is able to do this through the use of sensors and machine learning algorithms. “At the end of the day it all made me a better person, because I learned to roll with the punches (literally). I have learned to laugh at myself, because 90 percent of the time the school was laughing at me,” he says.
How they made it work
Derek Peterson’s Soter Technologies offers solutions to stop bullying and other unwanted activity in schools.
“There is a saying, ‘snitches get stitches,’ so I couldn’t speak out or report things. You didn’t want to be known as a snitch, so you would have to take a beating and shut up,” he said. To change this, he and his company are developing a number of hardware and software solutions in-house aimed at keeping students safe. “We are looking to impact the world and change lives by keeping people safe one device at a time,” Peterson says. The team has developed Fly Sights, a social media awareness solution designed to keep track of harmful bullying and cyberbullying activity, as well as any self-harm messages, and alert school administrators. According to Peterson, with Fly Sights the company was able to stop two youths from committing suicide last year. Glue Board is the company’s incident
Peterson and his team used an informal agile process to get everything up and running. They used tools like Trello to keep track of their tasks and Pulse to estimate the most difficult tasks they were trying to accomplish, what customers needed most, and whether it was possible to reach those goals within their timeframes. All the code was kept organized in a private GitHub repository. While the team didn’t have formal daily meetings, they did make an effort to get the most important subset of team members together to talk about progress and maintain communication. Additionally, since the team’s solutions are Internet of Things-based, they had to make sure they were using SSL and SSH for their over-the-air updates and for sending notifications via email or text, and they had to protect their databases by replicating all their systems to provide backups and ways to lock out an intruder if a hacker got in. Lastly, Peterson advises leveraging a younger generation of developers, because they have a better sense of the new technologies and languages available today. “If you want to change the world, you have to find the problems. That is how you are going to change the world with your software,” said Peterson. z
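The SSL precaution Peterson describes for over-the-air updates can be sketched in a few lines. This is a minimal illustration using Python's standard library, not Soter's actual code, and the update URL is hypothetical; the point is simply that a device should refuse any update connection whose certificate cannot be verified.

```python
import ssl
import urllib.request

def make_tls_context():
    """A TLS context that refuses unverified connections, so a device
    never accepts an update payload from a spoofed server."""
    context = ssl.create_default_context()   # loads the system CA bundle
    context.check_hostname = True            # cert must match the hostname
    context.verify_mode = ssl.CERT_REQUIRED  # reject unverifiable certs
    return context

def fetch_update(url):
    """Download an update over HTTPS with certificate verification enforced."""
    opener = urllib.request.build_opener(
        urllib.request.HTTPSHandler(context=make_tls_context()))
    with opener.open(url) as resp:
        return resp.read()

# Hypothetical endpoint; a real device would also verify a signature
# on the payload before applying it:
# firmware = fetch_update("https://updates.example.com/firmware.bin")
```

A production device would pair this with payload signing, so even a compromised server cannot push arbitrary firmware.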
Learn, Explore, Use Your Destination for Data Cleansing & Enrichment APIs
Global Email Global IP Locator DEVELOPER
Global Phone Property Data
Your centralized portal to discover our tools, code snippets and examples. RAPID APPLICATION DEVELOPMENT
REAL-TIME & BATCH PROCESSING
TRY OR BUY
FLEXIBLE CLOUD APIS
Convenient access to Melissa APIs to solve problems with ease and scalability.
Ideal for web forms and call center applications, plus batch processing for database cleanup.
Easy payment options to free funds for core business operations.
Supports REST, JSON, XML and SOAP for easy integration into your application.
Turn Data into Success – Start Developing Today! Melissa.com/developer 1-800-MELISSA
DEVOPS WATCH
In other DevOps news…
GitLab: 2018 is the year for open source and DevOps
BY JENNA SARGENT
DevOps and open source aren’t slowing down anytime soon, a newly released report reveals. GitLab released its 2018 Global Developer Survey on developers’ perceptions of their workplace, workflow and tooling within IT organizations. The demand for DevOps continues to grow, even though there are still challenges created by outdated tools and company resistance to change. According to the report, only 23 percent of respondents identify DevOps as their development methodology. However, IT management has named DevOps one of the top three areas of investment in 2018, indicating that the number of DevOps adopters is sure to grow this year. GitLab found that most developers agree that DevOps workflows save time during the development process. Teams that have adopted DevOps report increased productivity, allowing them to spend at least 50 percent of their time on new work. Forty-five percent of DevOps practitioners deploy on demand, while 71 percent stated that automating the software development life cycle was a high priority, compared to 39 percent and 60 percent of Agile practitioners, respectively. The study also found that remote teams tended to have higher overall satisfaction and productivity than in-office teams. Forty-one percent of remote teams agreed that they have a well-established DevOps culture, while only 34 percent of in-office teams agreed with that sentiment. According to the report, an emphasis on open source tools is a unifying factor across all segments. Ninety-two percent of survey respondents agreed that open source tools are important to software innovation. Seventy-five percent of respondents reported that using open source tools was important. Eighty-four percent claimed that they prefer open source tools over closed or proprietary tools. Almost half of respondents (45 percent) claimed that most of their tools are open source now, while 15 percent said that all of their tools are open source. The majority of respondents (60 percent) agreed that open source tools are “more secure, can improve overall software quality, and streamline the development process.” z
n Chef’s latest release of its compliance automation tool, InSpec 2.0, is designed to accelerate DevSecOps with cross-functional infrastructure and security assessment and remediation features. “InSpec 2.0 builds on our commitment to build the essential tools and services needed for modern application teams to truly deliver on the promise of DevSecOps, fully integrating security with development and deployment for traditional and cloud-native software delivery,” said Marc Holmes, vice president of marketing at Chef. “InSpec provides an easy-to-learn, open-source path to incorporating security and compliance requirements as code directly with the delivery process, ensuring that applications and infrastructure are compliant every step of the way — not just at the end of the process.” n CloudBees is providing full support for Kubernetes in CloudBees Jenkins Enterprise, has acquired key Kubernetes talent, and has joined the Cloud Native Computing Foundation. As part of the company’s investment, it is welcoming a team of experienced Kubernetes engineers and will work to develop a next-generation continuous delivery platform that will enable DevOps teams to deliver Kubernetes-native applications. n Perfecto announced new capabilities for mobile app developers to improve its continuous testing tools. The new capabilities include enhancements to the company’s Continuous Quality Lab and DigitalZoom reporting solution. The lab has been updated with Google’s Espresso and Apple’s XCUITest frameworks for testing Android and iOS mobile apps. DigitalZoom reporting aims to provide insight into issues with fast resolution. The latest release features HTTP Archive (HAR) reports for earlier network analysis of performance issues. z
Standardizing efforts will help organizations more easily follow the journey of their transactions
BY JACQUELINE EMIGH
Industry efforts toward distributed tracing have been evolving for decades, and one of the latest initiatives in this arena is OpenTracing, an open distributed tracing standard for apps and OSS packages. APM vendors like Lightstep and Datadog are eagerly pushing forward the emerging specification, as are customer organizations like HomeAway, PayPal and Pinterest, while some other industry leaders — including Dynatrace, New Relic and AppDynamics — are holding back from full support. Still, contributors to the open-source spec are forging ahead with more and more integrations, and considerable conference activity is in store for later this year. “Distributed tracing is absolutely essential to building microservices in highly scalable distributed environments,” contended Ben Sigelman, co-creator of OpenTracing and co-founder and CEO at Lightstep, in an interview with SD Times. In contrast to other types of tracing familiar to some developers, such as kernel tracing or stack tracing, distributed tracing is all about understanding the complex journeys
that transactions take in propagating across distributed systems. Academic papers about distributed tracing began appearing even earlier, but Google first started using a distributed tracing system called Dapper some 14 years ago, publishing the Dapper paper online about six years later. As a Google employee during the early phases of his career, Sigelman worked on Dapper, in addition to several other Google projects. He became intrigued with Dapper as a solution to the issues posed when a single user query would hit hundreds of processes and thousands of services, overwhelming existing logging systems. Zipkin, another distributed tracing system, went open source a couple of years after Dapper.
A spec is born
Where Dapper was geared to Google’s own internally controlled repository, however, the OpenTracing specification, launched in 2015, is designed to be a “single, standard mechanism to describe the behavior of [disparate] systems,” according to Sigelman. Tracing contexts are passed to both self-contained OSS services (like Cassandra and NGINX) and OSS packages locked into custom services (such as ORMs and gRPC), as well as “arbitrary application glue and business logic built around the above,” Sigelman wrote in a blog. As might be expected, among the earliest customer adopters of OpenTracing are many large, cloud-enabled online services dealing with massive numbers of transactions across myriad distributed systems. HomeAway, for example, is a vacation rental marketplace dealing with 2 million vacation homes in 190 countries, across 50 websites around the globe. “Our system is composed of different services written in different languages,” said Eduardo Solis, architect at HomeAway, in an email to SD Times. “We are also seeing many teams using patterns like CQRS and a lot of streaming, where transactions have real-time patterns and asynchronous ones. Being able to visualize and measure all of this is critical!”
Why OpenTracing? “OpenTracing is a ‘must have’ tool for microservices and cloud-native applications. It is the API to adopt,” Solis continued. “Observability of the system is critical for business success in a con-
downstream tracing technology with a configuration change,” he said. Sigelman points to a number of different ways in which distributed tracing can be standardized, such as the following: • Standardized span management. Here, programmatic APIs are used to start, finish, and decorate timed operations, which are called “spans” in the jargon of both Dapper and Zipkin. • Standardized inter-process propagation. Programmatic APIs are used to help in transferring tracing context across process boundaries. • Standardized active span management. In a single process, programmatic APIs store and retrieve the active span across package boundaries. • Standardized in-band context encoding. Specifications are made as to an exact wire-encoding format for tracing context passed alongside application data between processes. • Standardized out-of-band trace data encoding. Specifications are made about how decorated trace and span data should be encoded as it moves toward the distributed tracing vendor. Earlier standardization efforts in distributed tracing have focused on the last two of these scenarios, meaning the encoding and representation of trace and context data, both in- and out-of-band, as opposed to APIs. In so doing, these earlier efforts have failed to pro-
Sigelman, of course, concurs that OpenTracing carries significant advantages for developers. For one thing, developers of application code and OSS packages and services can instrument their own code without binding to any specific tracing vendor. Beyond that, each component of a distributed system can be instrumented in isolation, “and the distributed application maintainer can choose (or switch, or multiplex) a
Three main use cases Sigelman told SD Times that he sees three main use scenarios for OpenTracing: “The first of these is basic storytelling. What happens to a transaction across processes? The second is root cause analysis. What’s broken?” he noted. “The third main use case scenario is greenfield long-term analysis, to help bring improvements that would prevent the need for engineering changes in the future.” Still, leading APMs like Dynatrace, New Relic, and App Dynamics are hanging back from full support for OpenTracing. Why is this so? Alois Reitbauer, chief technology strategist at Dynatrace, agreed that OpenTracing does offer some important benefits to developers. “There’s a lot going on in the industry right now in terms of creating a standardized way for instrumenting applications, and OpenTracing is one part of that. What it tries to achieve is something really important, and something that the industry needs to solve, in terms of defining what a joint API can look like. Some frameworks are using Open-
‘Distributed tracing is absolutely essential to building microservices in highly scalable distributed environments.’ —Ben Sigelman
vide several benefits that developers actually need, Sigelman argued. “Standardization of encoding formats has few benefits for instrumentation-API consistency, tracing vendor lock-in, or the tidiness of dependencies for OSS projects, the very things that stand in the way of turnkey tracing today,” he wrote. “What’s truly needed — and
what OpenTracing provides — is standardization of span management APIs, inter-process propagation APIs, and ideally active span management APIs.”
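The span-management and inter-process propagation APIs Sigelman describes are abstract; a toy sketch can make them concrete. The following is not the real OpenTracing API — every name here is invented for illustration — but it shows the two ideas: a span is a named, timed operation decorated with tags, and propagation means carrying the trace context across an RPC so the receiving process can attach child spans to the same trace.

```python
import time
import uuid

class Span:
    """A named, timed operation -- a 'span' in the Dapper/Zipkin jargon."""
    def __init__(self, operation, trace_id=None, parent_id=None):
        self.operation = operation
        self.trace_id = trace_id or uuid.uuid4().hex  # shared by the whole trace
        self.span_id = uuid.uuid4().hex               # unique to this operation
        self.parent_id = parent_id
        self.tags = {}
        self.start = time.time()
        self.finish_time = None

    def set_tag(self, key, value):                    # 'decorate' the span
        self.tags[key] = value
        return self

    def finish(self):
        self.finish_time = time.time()

def inject(span):
    """Inter-process propagation: encode the trace context as headers."""
    return {"x-trace-id": span.trace_id, "x-span-id": span.span_id}

def extract(headers, operation):
    """Continue the same trace on the receiving side of an RPC."""
    return Span(operation, trace_id=headers["x-trace-id"],
                parent_id=headers["x-span-id"])

# Client side starts a span and sends its context with the request...
parent = Span("checkout").set_tag("user", "u123")
headers = inject(parent)
# ...and the server side picks the trace up where the client left off.
child = extract(headers, "charge-card")
child.finish()
parent.finish()
```

In a real OpenTracing deployment, a vendor's tracer implements these APIs behind the standard interface, which is exactly why application code can switch tracing backends with a configuration change.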
tainerized cloud world where applications are spinning up and down, having degradation or failure, and there is a very complex dependency graph. Instrumenting code properly is hard. Assuming you have the resources and knowledge to do it, you end up either using some proprietary API or getting your system baked into a vendor’s system. There are APM solutions that autoinstrument, but then you end up losing some of the powerful context capabilities. OpenTracing solves all of the above. “You have the whole open source community instrumenting popular frameworks and libraries,” Solis added, “you get a vendor-neutral interface for instrumentation, and you can use that same API to do other more interesting things at the application level without getting married to one single solution.”
How OpenTracing is different
Tracing already today, but it’s mainly targeted at library and some middleware developers. End users will not necessarily have first-hand contact, as frameworks and middleware either come already instrumented or instrumentation is handled by the monitoring provider,” Reitbauer told SD Times in an email. “It’s a good first step, but it’s in its early stages, and the reality is that OpenTracing doesn’t paint the whole picture. Beyond just traces, systems need metrics and logs to give a comprehensive view of the ecosystem, with a full APM system in the backend as well.” In a recent blog post, Reitbauer went further, maintaining that interoperability has become much more necessary lately with the rise of cloud services and apps from third-party vendors, but that the only way to achieve interoperability is to solve two problems that OpenTracing doesn’t address. The problems involve the ability to “create an end-to-end trace with multiple full boundaries” and to “access partial trace data in a well defined way and link it together for end-to-end visibility,” he wrote. Many APM and cloud providers are well aware of these issues and have started to work on solving them by agreeing on two things: a standardized method for propagating trace context information across vendors end-to-end, and a discussion of how to ingest trace fragment data from each other, according to Reitbauer. “The first [of these] is on the way to be resolved within the next year. There is a W3C working group forming that
will define a standardized way to deal with trace information, referred to as Trace-Context, which basically defines two new HTTP headers that can store and propagate trace information. Today every vendor would use their own headers, which means they will very likely get dropped by intermediaries that do not understand them,” said the Dynatrace exec. “Now let us move on to data formats. Unfortunately, a unified data format for trace data is further away from becoming reality,” he acknowledged. “Today there are practically as many formats available as there are tools. There isn’t even conceptual agreement on whether the data format should be standardized or if there should be a standardized API and everyone can build an exporter that fits their specific needs. There are pros and cons for both approaches, and the future will reveal what implementers consider the best approach. The only thing that cannot be debated is that eventually we will need a means to easily collect trace fragments and link them together.” For his part, though, Sigelman has suggested that one of the big reasons why OpenTracing is progressing so rapidly is precisely the narrow, well-defined and manageable focus of the spec.
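The Trace-Context headers Reitbauer describes are easy to picture. The sketch below builds a `traceparent` value in the layout the W3C specification eventually settled on (a version field, a 16-byte trace ID, an 8-byte parent span ID and a flags byte, all hex-encoded); at the time of the article the spec was still a working draft, so treat this as an illustration of the idea rather than any vendor's implementation.

```python
import re
import secrets

def make_traceparent(sampled=True):
    """Build a traceparent header value: version '00', then a trace ID
    shared by every hop of the request, then this hop's span ID, then flags."""
    trace_id = secrets.token_hex(16)   # 32 hex chars, identifies the whole trace
    parent_id = secrets.token_hex(8)   # 16 hex chars, identifies this span
    flags = "01" if sampled else "00"  # low bit: 'this trace was sampled'
    return f"00-{trace_id}-{parent_id}-{flags}"

# Any intermediary that understands the format can validate and forward it
# instead of dropping an unknown vendor-specific header.
TRACEPARENT_RE = re.compile(r"^00-[0-9a-f]{32}-[0-9a-f]{16}-[0-9a-f]{2}$")
```

Because every vendor reads and writes the same header, a trace can cross from one monitoring product's territory into another's without losing its identity, which is precisely the interoperability gap Reitbauer describes.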
New support for the spec Now Datadog, a major monitoring platform for cloud environments, is another force avidly backing OpenTracing. In December of 2017, Datadog announced its support for OpenTracing as well as its
Datadog’s monitoring platform supports OpenTracing, enabling new languages on the platform.
OpenTracing is a ‘must have‘ tool for microservices, said HomeAway’s Solis.
membership in the Cloud Native Computing Foundation (CNCF). The vendor also unveiled plans to join the OpenTracing Specification Committee (OTSC) and to invest in developing the standard going forward. Datadog’s support for OpenTracing will let customers instrument their code for distributed tracing without concerns about getting locked into a single vendor or making costly modifications to their code in the future, according to Ilan Rabinovitch, VP of product and community for Datadog. “Open source technologies and open standards have long been critical to Datadog’s success. Customers want to emit metrics and traces with the tooling that best fits their own workflows, and we want to enable them to do so, rather than force them to specific client-side tooling,” he told SD Times. “Many of our most popular integrations in infrastructure monitoring, including OpenStack and Docker, started off as community-driven contributions and collaborations around our open-source projects. In the world of OpenTracing we have seen our community build and open source their own OT-based tracers that enable new languages on Datadog, beyond our existing support for Java, Python, Ruby and Go.” In addition to the Specification Committee, OpenTracing also runs multiple working groups. The Docu-
Sigelman is quick to observe that his co-creators on OpenTracing and his co-founders on Lightstep are two distinctly separate groups, and that many OpenTracing adopters are not Lightstep customers. He also cites large numbers of recent contributions from both OpenTracing and customer and vendor contributors, including the following.
Core API and Official OpenTracing Contributions n OpenTracing-C++ has now added support for dynamic loading, meaning that tracing libraries can be loaded dynamically at runtime rather than needing to be linked at compile time. Users can use any tracing system that supports OpenTracing. Support currently includes Envoy and NGINX. n OpenTracing-Python 2.0 and OpenTracing-C# v0.12 have both been released. The main addition to each is Scopes and ScopeManager.
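The Scopes and ScopeManager additions mentioned above implement the "active span management" idea from Sigelman's list: tracking which span is current in a process so deeply nested code can attach children without passing spans through every call. The following is a toy stack-based sketch of that mechanism, with invented names, not the actual library code.

```python
class Scope:
    """Makes a span the 'active' one for the duration of a with-block."""
    def __init__(self, manager, span):
        self.manager, self.span = manager, span

    def __enter__(self):
        self.manager._stack.append(self.span)
        return self

    def __exit__(self, *exc_info):
        self.manager._stack.pop()   # restore the previously active span

class ScopeManager:
    """Tracks the active span for the current call stack, so code deep
    inside a process can find its parent span without it being passed in."""
    def __init__(self):
        self._stack = []

    def activate(self, span):
        return Scope(self, span)

    def active_span(self):
        return self._stack[-1] if self._stack else None

manager = ScopeManager()
with manager.activate("handle-request"):
    # anywhere in this block, library code can ask for the active span
    with manager.activate("query-database"):
        innermost = manager.active_span()   # "query-database"
outermost = manager.active_span()           # None again after both exit
```

The real libraries layer thread-local or async-aware storage under the same interface so the "stack" is correct per request, but the activate/deactivate discipline is the same.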
Content from the Community n Pinterest presented its Pintrace Trace Analyzer at the latest OTSC meeting. “The power of this tool is its ability to compare two batches of traces — displaying stats for each of the two and highlighting the changes,” explained Pinterest’s Naoman Abbas. “An unexpected and significant change in a metric can indicate that something is going wrong in a deployment.” n Red Hat has shared best practices for using OpenTracing with Envoy or Istio. “We have seen that the tracing system with Istio is very simple to set up. It does not require any additional libraries. However, there are still some actions needed for header propagation. This can be done automatically with OpenTracing, and it also adds more visibility into the moni-
mentation Working Group meets every Thursday, while the Cross Language Working Group — entrusted with maintaining the OpenTracing APIs and ecosystem — meets on Fridays.
Conference fare Want to find out more about OpenTracing? This year, developers have an opportunity to meet with OpenTracing experts and discuss the emerging spec at a number of different conference venues. At the end of March, HomeAway held an end-user meetup together with Indeed, PayPal, and Under Armour. Talking with SD Times just before the event in Austin, HomeAway’s Solis said that he planned to give a presentation detailing how his devel-
tored process,” according to Red Hat’s Pavol Loffay. n HomeAway presented at the Testing in Production meetup at Heavybit. HomeAway’s Priyanka Sharma showed ways to use tracing to lessen the pain when developers are running microservices using CI/CD. n Idit Levine, founder of Solo.io, delivered a presentation at QCon about her OpenTracing-native open-source project, Squash, and how it can be used for debugging containerized microservices.
Community Contributions n Software development firm Alibaba has created an application manager called Pandora.js, which integrates capabilities such as monitoring, debugging and resiliency while supplying native OpenTracing support to assist in inspecting applications at runtime. n Xavier Canal from Barcelona has built Opentracing-rails, a distributed tracing instrumentation for Ruby on Rails apps based on OpenTracing. The tool includes examples of how to initialize Zipkin and Jaeger tracers. n Gin, a web framework written in the Go language, has begun to add helpers for request-level tracing. n Daniel Schmidt of Mesosphere has created Zipkin-playground, a repo with examples of Zipkin-OpenTracing-compatible APIs for client-side tracing. n The Akka and Java Concurrency utilities have both added OpenTracing support for Java and Scala. n Michael Nitschinger of Couchbase is now leading a community exploration into an OpenTracing API written in the Rust programming language. z —Jacqueline Emigh
opment team is using the new spec. “As infrastructure groups we are providing platforms and frameworks that deliver instrumentation to developers so they don’t have to do anything to get quality first level (entry/exit) tracing in their applications. We have also worked on an internal standard that developers using other technologies that we don’t support can instrument themselves. OpenTracing gives us this ability to just delegate to standard documentation and open-source forums if developers want to enrich their tracing. We are also doing a slow rollout so we can build capabilities in small but fast iterations,” the architect elaborated. Yet in case you missed the meetup in Austin, you have several other chances
ahead for getting together with developers from the OpenTracing community. KubeCon EU, happening from May 2 to 4 in Copenhagen, will feature two talks about OpenTracing, along with two salons. Salons are breakout sessions where folks interested in learning about distributed tracing can discuss the subject with speakers and mentors. OSCON, going on from July 17 to 19 in Portland, OR, will include three talks on OpenTracing, along with a workshop and salons. If you’d like to attend an OpenTracing salon at either venue, you can email OpenTracing at firstname.lastname@example.org to pose questions in advance. OpenTracing would also love to hear from participants who are willing to help out by mentoring. z
ITSM’s Next Wave: AI and Machine Learning
By Jeffrey Schwartz
IT help desk technicians and administrators can’t move fast enough to keep up with the flood of tickets coming their way these days. The good news is that help is on the way. Thanks to advances in artificial intelligence (AI), machine learning and predictive analytics, the ability to automate the resolution of IT service issues with little or no human intervention is now surfacing. Bringing AI and machine learning to facilitate self-service IT Service Management (ITSM), driven by bots, virtual agents and even conversational computing capabilities, is a common focus of the leading providers, including ServiceNow, BMC Software, Micro Focus, Samanage, Atlassian and Ivanti, among many others. Several factors are now bringing AI into the ITSM equation, notably the availability of cloud-scale services, newly commercially viable programmable machine learning APIs, and the introduction of virtual agents and bots into other customer service tools. It’s debatable to what extent AI is a priority, and for sure, most of what’s now appearing in ITSM tools is rudimentary, if not just the building blocks for what will be available in the coming years. IT research firm IDC predicts that 75 percent of workers this year will interact with at least one application that has AI or machine learning capability built in, and that by next year the same percentage will have some interaction with intelligent digital assistants.
“There’s a lot of interest in both artificial intelligence, and in workflow-style solutions, what they call low-code, no-code service solutions,” says IDC’s ITSM research manager Shannon Kalvar. “Both of these are in their early stages. When you look at what they are doing with AI, you are seeing what they’re calling intelligent agents, and everyone has a different name for it, which means you can chat with it in your chat interface, and they are also working on routing, trying to figure out who can answer your question. And those are both great things to be able to do. It’s just the beginning.” Driving the demand for these capabilities are the challenges that have besieged IT service desks over the past decade. The rise of mobile workers who have become reliant on the new wave of digitally driven processes, the ability to work flexibly on any device, app, service and network, and the expectation and requirement of constant availability have driven an exponential rise in incidents. The new so-called digital workplace, and the competitive requirement to resolve this growing flow of tickets as quickly as possible, combined with mounting security incidents, is here to stay.
At the same time, the introduction of AI into the ITSM equation consists of early building blocks, meaning don’t expect virtual agents and bots to take over the help desk overnight. But given the rise of AI and machine learning tools and infrastructure already applied in many threat detection tools, monitoring services, and even recent advances in pattern recognition and natural language processing, it was only a matter of time before new levels of automation would start to appear in ITSM offerings. While many AI and machine learning algorithms have existed for some time, service desks can now leverage the scale of public clouds and the machine learning interfaces leading infrastructure providers and developers offer. The introduction of self-service functions and AI into ITSM tools — many call it AIOps — also aims to tap HR, finance and facilities management systems.
The need to bring AI to ITSM isn’t just a matter of cost-cutting. The number of requests and the wide scope of potential issues at the infrastructure, endpoint, application and now the cloud level also suggest that there are too many potential issues for even the largest help desks
to keep up with. A study conducted last year by Accenture for leading ITSM provider ServiceNow found that 43 percent of IT help desk administrators had more than 100 troubleshooting categories to choose from, and 25 percent had over 300. ServiceNow began 2018 with the release of Kingston, the latest semi-annual version of its Now SaaS-based service. The release introduces its new Agent Intelligence orchestration engine. While it’s a critical new piece of the ServiceNow platform, Agent Intelligence has just started to roll out and is somewhat rudimentary. “Agent Intelligence is a supervised machine learning capability,” says Farrell Hough, general manager of ServiceNow’s ITSM, IT Business Manager and IT Asset Management businesses. “It trains on the data that you already have in your system, and very simply it is able to give you a view of accuracies for how to categorize and how it auto-classifies.” Supervised machine learning uses an organization’s data to build predictive models that become more accurate over time as more data is available to train them, according to the company. The machine learning models are designed to categorize and prioritize incidents and assign them in ways that will facilitate resolution and reduce potential errors, according to Hough. “This is taking away the monotony that comes for end users needing to categorize any kind of a task,” she says. “Digging through nested lists of categories is extremely frustrating, a waste of time and hardly ever accurate.” The benefit of supervised machine learning is it allows organizations to use AI without continued on page 29 >
4 tips to build a lean, mean ITSM machine By Paul Buffington
Software has become central to all companies, changing operations across organizations. IT teams sit right at the center of this transformation, dealing with a variety of pressures. Customers demand fast, flawless service, product teams need flexible support as they reduce their time to market, and management expects lower costs. In response, many IT teams are switching to Agile approaches that value ease of use, collaboration, and knowledge-sharing over complex, inflexible workflows. Still, many IT teams fear that tearing down traditional ITSM processes may introduce unnecessary risk. With the right Agile approach, though, teams can balance that risk with greater efficiency, enabling them to drive the business forward. Here are four tips to build your lean, mean ITSM machine.
1) Increase visibility between dev and ops
When something goes wrong, it’s often best to start by asking “what changed?” — looking for new software releases that may have triggered an incident. Unfortunately, many legacy software systems aren’t equipped for visibility across the DevOps lifecycle. Make sure you’re using software tools that connect
Paul Buffington is a Principal Enterprise Solutions Engineer at Atlassian, where he is responsible for helping customers redefine the shape of modern ITSM.
IT and dev teams, so they can collaborate to fix incidents and confidently push changes. Service desk agents should be able to view changes and stay updated on bug-squashing progress. And developers need the ability to see customers’ problems in real time and create permanent fixes.
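The “what changed?” question above can be answered mechanically by correlating an incident’s start time with recent releases. Here is a hedged sketch of that idea; the deploy records, service names and the four-hour window are invented for illustration, and a real tool would pull this data from the release pipeline:

```python
from datetime import datetime, timedelta

# Hypothetical deploy log, as a release pipeline might record it
deploys = [
    {"service": "payments", "version": "1.4.2", "at": datetime(2018, 4, 2, 9, 30)},
    {"service": "search",   "version": "7.0.1", "at": datetime(2018, 4, 2, 13, 5)},
]

def suspect_changes(incident_start, window_hours=4):
    """Return deploys that landed within `window_hours` before the incident."""
    window = timedelta(hours=window_hours)
    return [d for d in deploys
            if incident_start - window <= d["at"] <= incident_start]

# An incident at 14:00 implicates the 13:05 search deploy, not the 9:30 one
incident = datetime(2018, 4, 2, 14, 0)
for d in suspect_changes(incident):
    print(d["service"], d["version"], "deployed at", d["at"].strftime("%H:%M"))
```

Surfacing this list directly in the service desk ticket is one concrete way to give agents the cross-team visibility the tip describes.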
2) ‘Shift left’ with self-service
‘Shifting left’ reduces costs by moving resolution closer to the customer. Introducing self-service functionality with a customer-facing portal and knowledge-centric service desk is an investment that pays off through lower costs and higher user satisfaction. According to Forrester Research, manned support can cost up to $12 per contact, while self-service solves problems at 10 cents or less. That’s 120 times more cost-effective. A study by Coleman Parkes for Amdocs showed that 91% of customers say they prefer self-service if it is available and tailored to their needs. Self-service isn’t a new concept. We see it in our everyday lives, from buying airline flights to using an ATM. We expect to be self-sufficient with IT tasks too. A self-service portal is the face of IT to an organization. Through it, customers gain easy access to submit and track requests and are able to keep up with notifications. Making your service desk knowledge-centric further accommodates self-service preferences and reduces your request volume, allowing IT teams to focus on things that really need their attention. If you don’t already have one, aggregate your knowledge into a single continued on page 31 >
ITSM’s Next Wave: AI and Machine Learning < continued from page 27
having to hire data scientists. Does this mean ServiceNow’s Kingston release has suddenly turned resolution of tickets over to bots or virtual agents? Hough says that such a leap out of the gate isn’t realistic. “For models that apply unsupervised machine learning, I don’t know that the technology has evolved enough or if the skillsets have evolved enough in the workplace,” she says. “We are focusing very squarely on supervised machine learning, which is a lot more concrete and has practical applications. We’re continuing to pay attention to evolution in the unsupervised space, to see how those technologies evolve, but those tend to be more user intensive.”
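The supervised ticket categorization Hough describes — train on historical tickets, then predict a category for new ones — can be illustrated with a toy naive-Bayes text classifier. This is a from-scratch sketch of the general technique, not ServiceNow’s implementation; the categories and sample tickets are invented:

```python
import math
from collections import Counter, defaultdict

class TicketClassifier:
    """Minimal naive-Bayes sketch of supervised ticket categorization."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # category -> word frequencies
        self.cat_counts = Counter()              # category -> ticket count

    def train(self, tickets):
        for text, category in tickets:
            self.cat_counts[category] += 1
            self.word_counts[category].update(text.lower().split())

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.cat_counts.values())
        vocab = len({w for c in self.word_counts.values() for w in c})
        def score(cat):
            n = sum(self.word_counts[cat].values())
            s = math.log(self.cat_counts[cat] / total)  # category prior
            for w in words:
                # add-one smoothing so unseen words don't zero out a category
                s += math.log((self.word_counts[cat][w] + 1) / (n + vocab))
            return s
        return max(self.cat_counts, key=score)

# Invented historical tickets standing in for an organization's own data
history = [
    ("cannot reset my password", "access"),
    ("password expired locked out", "access"),
    ("laptop screen is broken", "hardware"),
    ("replace cracked laptop keyboard", "hardware"),
]
clf = TicketClassifier()
clf.train(history)
print(clf.predict("forgot password please reset"))  # → access
```

The point of the sketch is the workflow, not the model: the classifier only ever learns from the organization’s own resolved tickets, which is why accuracy improves as more history accumulates.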
Beware the Hype Scale
Troy DuMoulin, VP of research and development at Pink Elephant, a global ITSM training, certification and consulting firm, says that while organizations are looking for simplified ITSM strategies, particularly the growing number shifting to shared-services IT, most large shops he has consulted with aren’t shopping for virtual agents yet. “It’s very early on the hype scale,” DuMoulin says. “They’re interesting enough for webinars and conferences, but chatbots are very, very rudimentary. The average org isn’t even thinking about it. Even in self-service, it has very limited adoption.” Many organizations are starting to provide self-service for functions such as password reset, but automating more complex issues is very early on the curve of emergence, according to DuMoulin. “The service management pundits, the analysts, all want to talk about the cutting-edge stuff, but the average organization I work with is still trying to figure out basic block and tackle,” he says. “They are doing stuff such as DevOps, pipeline automation and platform as a service, but that’s not connected well yet with other things in their organization, like their strategy and their portfolio and their program processes.”
Steve Stover, VP of product and alliances at Samanage, a provider of a SaaS-based ITSM platform financially backed by Salesforce.com, disagrees with DuMoulin, saying interactions with enterprise customers prompted the company to form an AI group last year. The company delivered its first AI capability in its Samanage Service Desk, with the ability to categorize tickets by comparing them against data in historical tickets, which the company said will provide faster resolution times and more accurate reporting. “We released that feature in the fourth quarter and it has already had very high adoption rates,” Stover says. When it first rolled out, customers accepted 80 percent of the predictions, according to Stover, and now it’s up in the high 90s. “The great thing about machine learning is it improves over time,” he says. ServiceNow’s Hough promises similar improvements with its new Agent Intelligence. Built into the company’s Now cloud service platform, it’s designed to create predictive models based on how organizations resolve specific incidents.
Send in the Bots
Most of the leading ITSM providers say they’re looking at how to tap into the growing use of chatbots in the enterprise, notably Slack, Microsoft Teams and Atlassian’s Hipchat, among others. For example, BMC has already announced integration with Slack, which is already popular among IT and dev teams, and works with SMS. A forthcoming release will support Microsoft Teams, which is offered free to Office 365 subscribers. Atlassian, which says Jira Service Desk ITSM is among its fastest growing offerings, recently rolled out a new chatbot tool, called Stride, which it describes as a more collaborative team communications-based chatbot platform. Atlassian ITSM solution engineer Paul Buffington said the company is looking at how to bring Stride into its service desk offering. “If there’s a major incident, IT Ops teams could launch a dedicated room, where all of the chat conversation that goes on there is audited back to the incident,” Buffington says. “It provides a better way to solve those types of outages and incidents.”
Q&A: Talking ITSM with
ServiceNow GM Farrell Hough By Jeffrey Schwartz
ServiceNow has charted impressive growth over the past year. The company reported nearly $2 billion in revenues in 2017 and is forecasting $2.35 billion for this year. In the last quarter of 2017, ServiceNow said it closed 41 deals worth $1 million or more, and 500 customers are now spending at least that amount yearly with the company, up 43 percent. In January, the company shipped its Kingston release, and it will disclose its forthcoming London rollout next month. Leading ServiceNow’s ITSM, ITBM and ITAM business is Farrell Hough, who talked about where service management is headed this year.
What’s your take on the role of ITIL in service management these days?
I will say the influence of ITIL is felt across all of our IT Service Management customers, but it’s very clear it needs to be updated. We see less and less of our customers who are interested in pure ITIL compliance; they’ve really evolved, and focus a lot on service management concepts, enterprise service management concepts, or maybe even newer frameworks like FitSM. Or also things like [The Open Group’s] IT4IT. There are additional frameworks that are appearing more relevant or evolved than where ITIL is today, but it is a great common language. When we go back and are interfacing with our customers, it creates a taxonomy. But we see much less rigor around adherence to ITIL processes. A heavily regulated organization might lean on the rigor of ITIL, but we just see it less and less.
Can you describe some of the Virtual Agents coming in the next release?
When we talk about user experience, we are shifting the out-of-the-box experience and providing a conversation design interface; that is what we mean by making it easier to consume this type of technology. An example of this is checking the status of any incident: you will build out a conversation tree and design it to account for scenarios around checking the status of an incident, or of a change or an outage. Some of that is a very administrative-type thing and it’s very basic. It’s a go and get and retrieve. Those kinds of out-of-the-box conversations are going to help customers get started quickly and get value quickly with virtual agents. Another example might be to say, provide me the latest comments on that incident, help me work through a password reset, or retrieve knowledge articles and show what working through certain steps might look like. Those are examples of those out-of-the-box conversations.
What other areas of ITSM are you investing in?
Incident management. That concept of a major incident can extend into a lot of different business frameworks. For one, every company can decide what qualifies as a major incident. You don’t want to have to be thinking, ‘who do I have to communicate to, what type of notification, what should the content of that notification look like?’ You really want to be focusing on remediation, working to get the situation under control, and deploying a fix if you need to. The way we structured the major incident management workbench gives you a lot of templatized ability to pre-script steps or tasks that you want to execute. So, you’re
orchestrating communication, you’re orchestrating the remediation that’s happening during this major incident, you’re not reacting to it.
That major incident can be any type of issue in the chain, whether desktop, application or infrastructure?
Yes. More often it tends to be infrastructure related. Enterprises are going to decide what a major incident is, such as when a datacenter goes down, or a situation where many people are impacted, or services are impacted. Pre-defining what that is makes sure we don’t cry wolf and say we have a major incident when it turns out someone doesn’t have the right software on their laptop. That wouldn’t be a major incident.
Could it also apply to a major ransomware attack, or some other security incident?
Yes. It’s a good place where security and IT interact. I focus mainly on the IT side. But a major security incident would be an interface into major incident management. Typically, organizations will have their own criteria for what is a major security incident where other resources have to be engaged. But IT and security react together when doing incident response and ultimately remediation. Another great area, an extension of incident management that we will reconfigure, is business crisis. A business crisis could be a local major disaster, or something in the field that internal employees need to be aware of. It could also include putting out those kinds of communications and orchestrating, ahead of time, how those need to be handled and who has to do what. So, when you are in the moment, you are just executing. Those are the things where we want to be thoughtful about the end-user experience, where we will make work easier and people’s jobs easier to do.
What role does the new Integration Hub service that you released with Kingston play?
Basically, it’s a location for all of our integrations out to third parties, and we will be working with third parties and partners, who will be able to build integrations themselves in that context. It will almost be an orchestration capability that will tie into our workflow over time.
Read the full interview on itopstimes.com
4 tips to build a lean, mean ITSM machine < continued from page 28
system. General questions, FAQs and how-to articles are all great sources of information that can be made available to your customer. For IT teams, providing access to service outage runbooks improves troubleshooting and incident management.
3) Use ChatOps for rapid response
Using modern, integrated communication tools can improve collaboration, reduce disruptive emails and phone calls, and decrease incident-response time. Here’s how IT teams can successfully use ChatOps:
• Kill the phone bridge and handle incident response with chat tools
• Create dedicated chat rooms for service beyond your ticketing system
• Add time-saving bots that handle repeat actions
• Get your IT and dev teams talking in the same system
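A time-saving bot of the kind listed above can be as simple as a command dispatcher wired into the chat room. The sketch below is hypothetical — the command names, replies and `ChatOpsBot` class are invented, and a real bot would call monitoring and alerting APIs instead of returning canned strings:

```python
class ChatOpsBot:
    """Sketch of a chat-room bot that automates repeat incident actions."""
    def __init__(self):
        self.commands = {}

    def command(self, name):
        """Decorator registering a handler for a chat command."""
        def register(fn):
            self.commands[name] = fn
            return fn
        return register

    def handle(self, message):
        # commands look like "!status payments-api"; ignore ordinary chat
        if not message.startswith("!"):
            return None
        name, *args = message[1:].split()
        handler = self.commands.get(name)
        return handler(*args) if handler else "unknown command: " + name

bot = ChatOpsBot()

@bot.command("status")
def status(service):
    return service + ": all checks passing"  # would query monitoring for real

@bot.command("page")
def page(team):
    return "paging on-call for " + team     # would call an alerting API

print(bot.handle("!status payments-api"))
```

Because every action runs in the shared room, the bot’s replies double as an audit trail of who checked and paged what during an incident.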
4) Adopt a formal Post Incident Review (PIR) process
Some IT teams consider their job complete once normal service is restored and an incident is marked resolved. This is often the fault of legacy ITSM tools. A few fields to capture resolution data won’t carry the learning forward. Valuable lessons can be found in incidents. We’ve found that highly effective teams implement a formal PIR process. Some of these teams’ best practices include:
• Establish a culture where the goal isn’t assigning blame, but understanding all contributing causes
• Create a repeatable PIR process that is simple to follow and encourages collaboration
• Link all related items created from the PIR to the original incident to improve visibility
• Develop internal troubleshooting knowledge-base documents for future reference
• Implement preventative actions that reduce the likelihood of incident recurrence
• Post and share the PIR results and overall progress via blogs, reporting and dashboards
Following these practices and adopting a lean approach to ITSM offers many benefits: faster support, better team coordination and continuous improvement. Agile IT teams are more adaptable to changing needs and can improvise when faced with obstacles. Rather than just resolving requests and delivering on SLAs, these teams can better adapt to business needs. Consequently, they can streamline their day-to-day work and focus on driving technological improvements across the company, from sales and marketing to new digital services that ensure customer success.
The driving force behind DevOps CI/CD pipeline removes friction from process BY CHRISTINA CARDOZA
DevOps was not created solely on the idea that developers and operations should play nice together. DevOps is the cultural transformation organizations go through on the road to modern application delivery. The end goal is the ability to release high-quality software more frequently. DevOps enables this by promoting communication and collaboration. “Today, teams are supposed to make changes a lot faster. Your chances of doing that without developers and ops people working together successfully are pretty much zero,” said Paul Stovell, founder and CEO of Octopus Deploy. The driving force behind DevOps, the force that relies on that cohesion and brings it into the modern way of developing and delivering software, is the CI/CD pipeline. “While DevOps speaks more to the organizational transformation that takes place in companies that are undergoing digital transformations, the CI/CD pipeline is the engine that drives DevOps success—and helps them deliver better apps, faster. It’s the core business process by which
companies transition from manual, monolithic application delivery to automated, modern application delivery,” said Jeff Scheaffer, general manager of continuous delivery at CA Technologies. The CI/CD pipeline is a term that often comes up when talking about DevOps because it was designed to create a bridge between teams and help them see the bigger picture, according to Sezgin Küçükkaraaslan, co-founder and VP of engineering at OpsGenie. That bigger picture is getting software out and to customers. “CI/CD pipelines are the fastest way to production. They enable devs to easily build, package, integrate, test, and release code by automating manual, error-prone steps,” he said. While DevOps focuses on the culture, CI/CD focuses on the process and tools necessary to help teams adapt to a culture of continuous everything, Küçükkaraaslan added. The CI/CD pipeline is a key enabler of DevOps because it removes the friction within the DevOps process so that changes can happen more quickly and go to production faster, according to Octopus’ Stovell. The more friction you
remove, the faster the cycle happens, he explained. “It means you are moving the business forward and creating this beautiful feedback cycle and a continuous improvement environment,” said Stovell.
How to keep the pipeline flowing for you One way to think of the pipeline is as an ecosystem or lifecycle, according to Dan McFall, president and CEO of Mobile Labs. “It is actually a loop back in upon itself to be re-released. That is what we are talking about in the pipeline. It is a continuous run of writing code, testing code, deploying code, retesting it in production and then completing the feedback to the whole process again. It is the ability to release with confidence and keep running everything,” he said. Thinking about the pipeline more broadly will allow you to see the entire lifecycle of not just getting software into production, but how it gets there and what happens to it afterwards. That way if things fail, you can see what went wrong and recover easily, according to Octopus’ Stovell. The most important aspect in the continued on page 35 >
< continued from page 33
CI/CD pipeline is the C, which stands for continuous. In order to be successful at CI and CD, you have to have “the ability to constantly move without having to halt everything,” said Patrick Poulin, founder of API Fortress. The pipeline consists of a release stage where you understand what you are creating, then testing stages, a preproduction stage, a deployment stage and then ultimately production. Of course, this is an oversimplification of the pipeline, according to Robert Stroud, chief product officer for XebiaLabs, but the idea is to move through these stages or approval points in an automatic fashion. “One of the opportunities in the industry at the moment is there are a couple of hand-off points where we hand off to the development team, testing team, staging team and then the deployment team,” said Stroud. “The real opportunity for velocity is automation across all those steps and stages.” It is visualized as a pipeline because changes ideally flow from start to finish one after another, according to CA’s Scheaffer. At a high level, the pipeline “includes compiling, packaging, and running basic tests prior to a code base merge. After your code is in the base, the main branch of your version control software, additional tests are run to ensure your apps work with real configuration and other services. Performance and security tests are also run at this point. From here you deploy code to staging and then to production,” said OpsGenie’s Küçükkaraaslan. The best way to keep the pipeline working for your business is keeping it simple, visible and measurable, according to CA’s Scheaffer. Key factors here include automation and orchestration of the pipeline, improvement, alignment with all stakeholders, and the ability to assess what good looks like. “DevOps allows you to make progress in more incremental and manageable chunks. It gives you the ability to have more confidence when software is ready and that you are truly delivering the right thing,” said Mobile Labs’ McFall.
Continuous delivery vs. continuous deployment
While CD is most commonly known as continuous delivery, many organizations are beginning to think of it in terms of continuous deployment, according to Robert Stroud, chief product officer for XebiaLabs. What is happening today is that software changes are getting smaller in size, nature and incremental differentiation. This change is enabling teams to get to a point where changes can be automatically deployed. “The reality is where we are actually going to be ending up is in a situation where we are deploying at mini rates. Change is happening instantaneously. Maybe on a weekly basis, or in some organizations they collect and group the change and deploy it on a monthly basis. It depends on the business and the business appetite for transition,” said Stroud. In order to keep up with the change, teams need to be practicing good deployment methods such as canary releases, where changes are rolled out to a small sample of their audience at once so the change can be validated, or blue-green deployments, where the release is staged in a manner that allows various parts of the audience or customers to receive the change in a controlled manner. This also enables feedback to the developers so they can make sure what is being delivered was actually desired. A common mistake when it comes to deploying software is that teams will compile the code, deploy it to a test environment, and when it is time for production they will compile the code again and deploy it to production, said Paul Stovell, founder and CEO of Octopus Deploy. “That is a bad practice because a lot of things can sneak in when you are building it a second time. You have no guarantee that your test is really what is going into production,” he said. The right way to do it is to build once, keep a copy of the build and the files that came out of the build process, and then deploy that to test and to production. The other way to successfully achieve continuous deployment is to have a consistent process for each environment. “The best way to guarantee the production deployment is going to work is to make sure the exact process you run into production is as close as possible to every other environment,” said Stovell. A higher-level view of the deployment pipeline is known as a DevOps Assembly Line, according to OpsGenie’s Küçükkaraaslan. “The challenge is that the DevOps toolchain is not as fully developed as what is available for CI/CD, and involves human dependencies that can be inefficient. The DevOps Assembly Line attempts to connect activities into event-driven workflows that can then be associated with specific applications or services,” he said. —Christina Cardoza
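The canary releases described above depend on a stable way to pick the small sample of users that sees a new build first. One common way to do that, sketched here with hypothetical service names and user IDs, is deterministic hash bucketing, so the same user always falls in or out of the canary cohort:

```python
import hashlib

def in_canary(user_id, service, percent):
    """Deterministically bucket a user into the canary cohort.

    The same (service, user) pair always hashes to the same bucket, so
    the canary cohort stays stable as the rollout percentage widens.
    """
    digest = hashlib.sha256(("%s:%s" % (service, user_id)).encode()).hexdigest()
    return int(digest[:8], 16) % 100 < percent

# Roll the new build out to ~5% of users first, then widen the percentage
cohort = [uid for uid in range(1000) if in_canary(uid, "checkout", 5)]
print(len(cohort), "of 1000 users in the 5% canary")
```

Widening the rollout from 5% to 25% to 100% only ever adds users to the cohort, which keeps the validation feedback from the early group meaningful.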
Beware of kinks
Your pipeline should be fluid, with stages occurring simultaneously. “For example, testing is not waiting for development to complete writing code to start the testing process. Instead, testing occurs in tandem with development—continuously testing smaller chunks of code in parallel with development,” said CA’s Scheaffer. Scheaffer explained the pipeline is
like a fiber optic cable containing many strands of glass fibers. “Each glass fiber can represent the workflow of an individual application, but you will likely have many different applications moving through your pipeline, and you are coordinating releases for multiple strands,” he said. However, having a bunch of moving parts happening at once can easily introduce complications and complexities. Don’t let your pipeline become a bottleneck. OpsGenie’s Küçükkaraaslan sug- continued on page 38 >
How do these solutions enhance the CI/CD pipeline to foster DevOps? Patrick Poulin, founder and CEO of API Fortress API Fortress was specifically built for today’s agile architectures: A collaborative platform that bridges the gap between development, QA, and DevOps. By using the simple GUI, teams can work together to create a series of powerful API tests in one place. Those tests can then be executed at every stage of the life cycle, in particular as part of the CI/CD process. Using our webhooks or Jenkins plug-in, you can automatically execute tests and get notified of issues before the code is pushed live. The platform works in the cloud, or on-premises, giving you the flexibility to run tests from any environment while still satisfying security protocols. Catch errors before your customers find them, and release code with confidence.
Jeff Scheaffer, general manager of CD for CA Technologies CA Technologies looks at the CI/CD pipeline as the modern equivalent of a factory for software, or what we call the Modern Software Factory. The Modern Software Factory not only encompasses CI/CD best practices and the philosophy and prescriptions of DevOps but also includes the tooling, management, and orchestration of it all. CA Technologies provides the tools to plan, build, test, release, operate, automate and secure all software delivery whether an app is cloud-native, classically architected or hosted on a legacy platform.
Dan McFall, president and CEO of Mobile Labs Without a device cloud, there cannot really be an enterprise-level CI/CD pipeline. With only a few target devices or limited simulator support, it isn’t possible to have the coverage and the confidence to automate the build, deploy, and test cycle for mobile applications. A “do it yourself” process is too error-prone and unreliable
to run at scale for enterprise needs, and will force organizations to spend more time fixing their integrations. By turning real devices into highly available, managed infrastructure, Mobile Labs’ deviceConnect allows mobile developers to target a variety of production environments in a seamlessly integrated fashion. With these capabilities back under technology management, with reliable and well-documented APIs, a mobile development team can click build and have confidence that their application will be automatically deployed to the appropriate number of devices, with the correct OS levels, tested, and have results returned in a timely fashion.
Paul Stovell, founder and CEO of Octopus Deploy Octopus sits right in the middle of your DevOps pipeline. On the far left you have source control and build systems. This is where development teams spend most of their time. At the other end of the spectrum lives your operations team and all the things running in production. Octopus is the gateway between these two worlds. We take the work that is being built by developers and already compiled in your build server, give it the thumbs up and take over the whole life cycle of orchestrating the release. Instead of building a suite of tools to service version control, build and monitoring, Octopus is focused on one thing, and one thing only — deployments. We take care of development, QA, acceptance testing and production deployments. We ensure releases have been tested and ensure deployments are consistent whether they are on-premises or hosted in the cloud.
Sezgin Küçükkaraaslan, co-founder & VP of engineering at OpsGenie
Considering most teams leverage multiple solutions for each stage of the pipeline, it is nearly impossible to avoid alert fatigue and excessive incident remediation timelines. OpsGenie is an alerting and incident response solution that helps teams combat alert fatigue and reduce unplanned downtime. We integrate with more than 175 tools, including monitoring, ticketing, collaboration, chat, and CI/CD tools. In this way, OpsGenie acts as a central hub for alerting and incident management workflows, ensuring that the right people are always notified at the right time. We also build intelligence into this process, offering customizable escalation policies and rich alerts, ensuring that incidents can be addressed as quickly and effectively as possible. In the CI/CD pipeline, OpsGenie’s integrations with tools like XL Release, Jenkins, CircleCI, and Codeship ensure the health of your pipeline by notifying the correct team if something is wrong. We also actively ensure that all of your tools are running, so you are aware and informed of your pipeline’s operational health. Ultimately, we help teams keep the feedback loop short so issues are resolved in the least amount of time.
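Customizable escalation policies of the kind described above can be pictured with a small sketch. This is a toy model, not OpsGenie's actual API; the step durations and recipient names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class EscalationStep:
    notify: str          # who to page at this step
    after_minutes: int   # escalate to this step if still unacknowledged after this long

# A toy escalation policy: page the on-call engineer first,
# then the team lead, then the whole team.
POLICY = [
    EscalationStep("on-call engineer", 0),
    EscalationStep("team lead", 10),
    EscalationStep("whole team", 30),
]

def who_to_notify(minutes_unacknowledged: int) -> str:
    """Return the most-escalated recipient for an alert that has been open this long."""
    recipient = POLICY[0].notify
    for step in POLICY:
        if minutes_unacknowledged >= step.after_minutes:
            recipient = step.notify
    return recipient

print(who_to_notify(0))    # on-call engineer
print(who_to_notify(15))   # team lead
print(who_to_notify(45))   # whole team
```

Real alerting platforms layer schedules, routing rules, and acknowledgement state on top of this basic idea, but the core logic is just "escalate the longer an alert goes unhandled."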
Robert Stroud, chief product officer for XebiaLabs
XebiaLabs helps enterprise IT teams and organizations rapidly accelerate their delivery process while consistently deploying high-quality software that enhances business outcomes. Imagine for a moment everything that happens from a pull request on. We instrument and automate that entire process so that organizations can move deployment packages and components through the CI/CD toolchain, through testing, and ultimately into production. If there are any issues when they get to production, we let them automatically roll back to the last known version, and we provide a feedback loop to the development organization and business about possible future improvements. It’s that combination of automation and feedback — plus end-to-end visibility and control — that allows organizations to increase velocity and improve software quality in a way that advances business requirements.
Implementing an API-led DevOps approach
If you think of the CI/CD pipeline as a hose that is constantly running to supply DevOps, API testing is a kink that is slowing down the flow, according to Patrick Poulin, founder of API Fortress. API testing has become a pain point in the DevOps development process simply because it has been ignored up until now. “It is one of those things that people have been procrastinating on, because maybe they either haven’t had a tool out there that makes it easy or because it requires a lot of development work,” he said.
If teams aren’t testing their APIs, either they won’t catch an error when it happens, or it will take them weeks to even discover the error, because unless an API is entirely down, they won’t see it. “It ends up being an expensive error that can last because the teams are just not comprehensively testing it,” said Poulin.
DevOps teams need to put the same level of effort into API testing as they put into automating the testing of websites and apps. “APIs are just as critical, if not more critical, than the front end,” Poulin said. “APIs touch every part of the company, and therefore being able to have insight as to the testing, the reliability and the uptime of them should be available.”
Teams should provide complete coverage of all their APIs: not just testing a single endpoint, but also creating integration tests, which are multi-step tasks that reproduce common user flows. It is not only important to know how one feature works; you need to know how it works when coupled with a bunch of other features or processes, Poulin explained. “When you test everything out in a similar way to how a real-world user would experience it, then you start seeing the cracks in between pieces,” he said.
The key to all of this is to find a tool or platform that enables everyone, from the CEO all the way down to the developer, to access answers to questions like: Are my APIs up? Was there an API issue yesterday?
“If you have the right tool in place, anyone can get those answers in just a few clicks and get full understanding into the health of their API program,” said Poulin. —Christina Cardoza
< continued from page 35
Küçükkaraaslan suggested DevOps teams constantly monitor all of their components to ensure they are uncovering problems and addressing them as soon as possible. In addition, he explained, teams should keep a close eye on test performance. “It can be tempting to rush new code into production, but it’s very dangerous to do this without the right testing. The complexity of systems’ interdependencies means there are no limits to what can go wrong. Monitoring how new code performs in a test environment is essential to releasing stable builds. Try to find the optimum balance between fast tests and tests running against an environment that simulates production,” said Küçükkaraaslan. XebiaLabs’ Stroud explained that having good testing suites and good test coverage is key to knowing what has been deployed, where and when.
The CI/CD pipeline isn’t something you can buy in a box, Küçükkaraaslan explained. That means the pipeline is going to evolve over time, and teams and businesses need to be able to evolve with it. “We continuously work hard to improve our CI/CD performance, and embrace good practices. This means we are always gathering feedback from everyone involved in delivering new features in order to identify additional improvements. As our business evolves, so must our CI/CD processes,” he said. If your CI/CD pipeline doesn’t create a feedback loop, then you are not really doing DevOps, Mobile Labs’ McFall explained. “The benefit of doing this is it allows you to do things a little more quickly with high confidence and more education,” he said. Just because teams can now push everything to production doesn’t mean they should always do it.
That's where the feedback loop comes into play, because it ensures you are always listening to customers and not just pushing out new things for their own sake. “You need to stay abreast of the continuous notion of best practices out there. Be aware of what your peers are doing, where their success is, and if there are opportunities to not make the same mistakes others have,” said McFall. Stay away from quick-fix approaches, said CA’s Scheaffer. “With anything in life, the quick win is tempting but can come with a price. In CI/CD pipelines, this shows up as technical debt that manifests as plateaued progress, the inability to engage other teams, and an inconsistent understanding of what ‘good’ is within an organization. The result is too much rework spent rebuilding the pipeline process,” he explained. McFall believes organizations constantly try to find a one-size-fits-all approach to tooling, when what they really need to do is find out what makes sense for their business and risk profile. Leverage automation as much as possible, according to Stroud, but don’t fall into silos of automation. Stroud explained that a common problem in the pipeline today is that organizations aren’t talking across all departments yet. “One of the pivotal rules in DevOps is that we need to have consistent collaboration across the toolchain; not having that is one of the biggest traps for young players right now,” Stroud explained. Organizations can address this by standardizing and rationalizing the tools they use in each of these silos. Lastly, don’t fall into a blame culture. When something fails, it is not so much about asking the who, what, where and why, but rather how can you do better, how can you drive velocity, and how can you deliver better outcomes, according to XebiaLabs’ Stroud. “You want people to experiment and use trial and error to learn.
This has to be a basic tenet of DevOps rather than having deep post-mortems of who we can blame after a piece of change is deployed and doesn’t meet requirements. Use that experience or feedback to learn from it and change your processes in the future so you can continually drive value,” he said.
Octopus Deploy: Enterprise-scale DevOps by Paul Stovell, Founder & CEO | Octopus Deploy
As the culture and practices around DevOps sweep the world, application development teams look for ways to automate deployments and releases of the apps they build, as well as automating all of the operations and maintenance processes needed to keep them running. DevOps works because it breaks down silos and encourages everyone on the team to take ownership of producing software that works. Automation makes teams happier and more productive, and emboldens teams to deploy changes often, iterating quickly based on the feedback from real users.
We’ve focused on building software that engineers love because we know it will mean they actually use it
Octopus Deploy doesn’t do version control, build, or monitoring; but we do deployments. And we do them really well. We’ve helped over 20,000 teams around the world to automate their deployments, including NASA, Accenture, Microsoft, and 58 of the Fortune 100. Octopus works with Team Foundation Server/VSTS, Atlassian Bamboo, Jenkins, and your existing developer toolchain, to give teams fast, reliable, consistent deployments for web applications, services, and database changes.
The feedback loop a team gets is powerful: automation reduces the time and, more importantly, the risk, which means deployments can be more frequent. More frequent deployments mean smaller batch sizes for changes, again reducing risk. Which leads to more automation, which leads to even less risk. Ultimately, it means happier customers and end users: they can give feedback faster, and the team can act on that feedback quickly.
All this automation is great, but at the enterprise scale, it’s inconsistent. Early adopter teams are using different products or hand-rolling their own scripts, which always take far more time and maintenance than anyone expects. Meanwhile, other teams lag behind, still deploying manually once a quarter, not sure where to start. The same problems are solved over and over by different teams.
It’s no surprise that leading enterprises are beginning to standardize on DevOps tooling. The goal is simple: give everyone at the company a standard version control system, a standard way to build and test software, a standard way to track bugs, a standard way to deploy it, and a standard way to monitor and manage it. Team members can move from project to project, and thanks to the consistency, immediately begin to automate without reinventing the wheel.
Standardization projects like this aren’t easy, though. Tools and approaches that are pushed down from above can be met with resistance by the engineers who are forced to work with them. The result is “shelfware”: expensive software that is put on the shelf and never used. And the consistency problem still isn’t solved.
You may not have heard of Octopus Deploy until now, but chances are there’s an engineering team somewhere in your organization that has, and they’d love to tell you how much they enjoy using it. We built Octopus to be a solution that engineers love, with a great user experience, an easy installation and onboarding experience, thorough and genuinely useful documentation, and the right philosophies and extensibility points to allow it to solve any problems your engineers face. We’ve focused on building software that engineers love because we know it will mean they actually use it, and a push to standardize will actually be successful, not shelfware. To learn more about how Octopus Deploy and deployment automation can help you to get consistent, reliable deployments across your entire enterprise, or to see a video interview of how Accenture standardized over 400 teams on Octopus Deploy, go to octopus.com/enterprise
A guide to continuous delivery tools
• Atlassian: Atlassian offers cloud and on-premises versions of continuous delivery tools. Bamboo is Atlassian’s on-premises option with first-class support for the “delivery” aspect of Continuous Delivery, tying automated builds, tests and releases together in a single workflow. It gives developers, testers, build engineers, and systems administrators a common space to work and share information while keeping sensitive operations like production deploys locked down. For cloud customers, Bitbucket Pipelines offers a modern continuous delivery service that’s built right into Atlassian’s version control system, Bitbucket Cloud.
• Automic: Automic Software is the leader in business automation software and is owned by CA Technologies. Automic V12 is a unified suite of business automation products for driving agility across enterprise operations and empowering DevOps initiatives. Capabilities include intelligent auto-updating of agents with zero business impact, removal of maintenance windows, agility across core business applications, and intelligent insights across automation silos.
• Chef: Chef Automate, the leader in Continuous Automation, provides a platform that enables you to build, deploy and manage your infrastructure and applications collaboratively. Chef Automate works with Chef’s three open source projects: Chef for infrastructure automation, Habitat for application automation, and InSpec for compliance automation, as well as associated tools. Chef Automate provides commercial features on top of the open-source projects that include end-to-end visibility across your entire fleet, tools to enable continuous compliance, a unified workflow to manage all change, enterprise-grade support, and more.
• CloudBees: CloudBees is the hub of enterprise Jenkins and DevOps, providing companies with smarter solutions for automating software development and delivery.
CloudBees starts with Jenkins, the most trusted and widely-adopted continuous delivery platform, and adds enterprise-grade security, scalability, manageability and expert-level support.
FEATURED PROVIDERS
• API Fortress: API Fortress is an API testing and monitoring platform built specifically to align development and operations in today’s architectures. Automate test executions as part of deployments from any CI platform, including Jenkins (try our plug-in). The platform simplifies creating tests, running them during deployments, and then using those same tests for production monitoring.
• CA Technologies: CA Technologies’ solutions address the wide range of capabilities necessary to minimize friction in the pipeline to achieve business agility and compete in today’s marketplace. These solutions include everything from application lifecycle management to release automation to continuous testing to application monitoring — and much more. CA’s highly flexible, integrated solutions allow organizations to fully embrace the capabilities of the Modern Software Factory, enabling rapid development, automated testing, and seamless release of mission-critical applications.
• Mobile Labs: Mobile Labs provides enterprise-grade mobile device clouds that improve efficiency and quality for agile, cross-platform mobile app and mobile web testing. Its patented device cloud, deviceConnect, available on-premises or as a hosted solution, provides affordable, highly secure access to a large inventory of mobile devices across major mobile platforms, providing mobile developers, testers and quality assurance professionals increased agility and flexibility. Mobile Labs is a global organization with clients across the U.S., Europe and Australia.
• Octopus Deploy: Octopus Deploy is an automated release management tool for modern developers and DevOps teams. Octopus takes over where your Continuous Integration server ends, enabling you to easily automate even the most complicated application deployments, whether on-premises or in the cloud.
Features include the ability to promote releases between environments, repeatable and reliable deployments, the ability to simplify the most complicated application deployments, an intuitive and easy-to-use dashboard, and first-class platform support.
• OpsGenie: OpsGenie is an advanced IT alerting and incident management solution. The OpsGenie platform provides rich features and integrations that seamlessly blend with CI/CD pipelines to centralize the flow of alerts, and deliver them according to customizable schedules and escalation policies. Far more than just alerting, OpsGenie arms teams with real-time insight into system performance and then supports collaboration and automation so incidents are resolved efficiently.
• XebiaLabs: XebiaLabs develops enterprise-scale Continuous Delivery and DevOps software, providing companies with the visibility, automation and control they need to deliver software faster and with less risk. Global market leaders rely on XebiaLabs to meet the increasing demand for accelerated and more reliable software releases.
By making the software delivery process more productive, manageable and hassle-free, CloudBees puts companies on the fastest path to transforming great ideas into great software and returning value to the business more quickly.
• Datical: Datical brings Agile and DevOps to the database to radically
improve and simplify the application release process. Datical solutions deliver the database release automation capabilities IT teams need to bring applications to market faster while eliminating the security vulnerabilities, costly errors and downtime often associated with today’s application release process. continued on page 43 >
A guide to continuous delivery tools < continued from page 41
• Dynatrace: Dynatrace provides the industry’s only AI-powered application monitoring, transcending the challenges humans face in managing complex, hyper-dynamic, web-scale applications. Bridging the gap between enterprise and cloud, Dynatrace helps dev, test, operations and business teams light up applications from the core with deep insights and actionable data. We help companies mature existing enterprise processes from CI to CD to DevOps, and bridge the gap from DevOps to hybrid-to-native NoOps.
• Electric Cloud: Electric Cloud is a leader in enterprise Continuous Delivery and DevOps automation, helping organizations deliver better software faster by automating and accelerating build, test and deployment processes at scale. Industry leaders like Cisco, E-Trade, Gap, GE, Qualcomm and SpaceX use Electric Cloud’s solutions to boost software productivity. The ElectricFlow DevOps Release Automation Platform allows teams of all sizes to automate deployments and coordinate releases.
with all major development frameworks, version control systems, issue trackers, IDEs, and cloud services.
• Microsoft: Visual Studio Team Services, Microsoft’s cloud-hosted DevOps service, offers Git repositories; agile planning tools; complete build automation for Windows, Linux, Mac; cloud load testing; Continuous Integration and Continuous Deployment to Windows, Linux and Microsoft Azure; application analytics; and integration with third-party DevOps tools. Visual Studio Team Services supports any development language, works seamlessly with Docker-based containers, and supports GVFS, enabling massive scale for very large Git repositories. It also integrates with Visual Studio and other popular code editors.
• Micro Focus: Micro Focus is a leading global enterprise software company uniquely positioned to help customers extend existing investments while embracing new technologies in a world of Hybrid IT. Providing customers with a world-class portfolio of enterprise-grade scalable solutions with analytics built-in, Micro Focus delivers customer-centered innovation across DevOps, Hybrid IT Management, Security & Data Management, and Predictive Analytics.
• GitLab: GitLab, the software development application designed for the enterprise, allows development teams to move faster from idea to production. Designed to provide a seamless development process, GitLab’s built-in Continuous Integration and Continuous Deployment enables developers to easily monitor the progress of tests and build pipelines, then deploy with the confidence that their code has been tested across multiple environments. Developers are able to develop and deploy rapidly and reliably with minimal human intervention to meet enterprise demands.
• Puppet: Puppet provides the leading IT automation platform to deliver and operate modern software. With Puppet, organizations know exactly what’s happening across all of their software, and get the automation needed to drive changes with confidence. More than 75% of the Fortune 100 rely on Puppet to adopt DevOps practices, move to the cloud, ensure security and compliance, and deliver better software faster.
• JetBrains: TeamCity is a continuous integration and deployment server that takes moments to set up, shows your build results on-the-fly, and works out of the box. It will make sure your software gets built, tested, and deployed, and you get notified about that appropriately, in any way you choose. TeamCity integrates
• Redgate Software: Including SQL Server databases in Continuous Integration and Continuous Delivery, and stopping them from being the bottleneck in the process, is the mission at Redgate. Whether version controlling database code, including it in continuous integration, or adding it to automated deployments, the SQL Toolbelt from Redgate includes every tool necessary. Many, like ReadyRoll and SQL Source Control, SQL Compare and DLM Automation, integrate with and plug into the same infrastructure already used for application development. Think Git or Team Foundation Server, Jenkins or TeamCity, Octopus Deploy or Bamboo, for example, and the database can be developed alongside the application.
• Rogue Wave Software: Rogue Wave helps thousands of global enterprise customers tackle the hardest and most complex issues in building, connecting, and securing applications. Since 1989, our platforms, tools, components, and support have been used across financial services, technology, healthcare, government, entertainment, and manufacturing to deliver value and reduce risk. From API management, web and mobile, embeddable analytics, static and dynamic analysis to open source support, we have the software essentials to innovate with confidence.
• Tasktop: Tasktop’s unique model-based integration paradigm unifies fragmented best-of-breed tools and automates the flow of project-critical information across dozens of tools, hundreds of projects and thousands of practitioners. The ultimate collaboration solution for DevOps specialists and all other teams in the software lifecycle, Tasktop’s pioneering Value Stream Integration technology provides organizations with unprecedented visibility and traceability into their value stream.
• TechExcel: DevSuite helps organizations manage and standardize development and releases via agile development methods and complete traceability. We understand the importance of rapid deployment and are focused on helping companies make the transition over to DevOps. To do this, we have partnered with many automation tools for testing and Continuous Integration, such as Ranorex and Jenkins. Right out of the box, DevSuite will include these technologies.
Guest View BY CHRISTINE SPANG
What to expect for Python
Christine Spang is CTO and co-founder of Nylas, a communications API
Python has grown significantly in popularity since its initial release in 1991. The open source language now consistently ranks among the world’s most popular programming languages, having overcome those who claimed it was too slow or couldn’t scale. In fact, these days, leaders such as PayPal, Instagram and Yelp use Python as part of their core software stack. So where does it go from here? I started contributing to Debian when I was 15 and have been working with Python for 13 years. I’ve seen a lot of things come and go, but Python is consistently growing. In this article, I’ll offer my thoughts on what we can expect throughout 2018 from the Python world, and how developers can make the most of this ubiquitous language.
Mypy will go mainstream
Currently, there’s a lot of experimentation with Mypy (a static type checker similar to lint) and a little bit of FUD (fear, uncertainty and doubt) to get over. However, the use of Mypy is on the rise, and I think we’ll quickly get over the FUD as more developers start to use Mypy in production environments. Mypy is now accepted as a Python standard, and as users realize there’s nothing to the early fears of Mypy causing Python to be less dynamic, it’s going to gain even greater traction. I believe this because of all the benefits Mypy brings to the table, including less debugging, a greater ability to understand and modify code, and a path to grow programs from dynamic to static typing. Mypy makes it possible to manage Python codebases at scale—the very place where folks used to say that scripting languages became untenable to continue using. Now you can start with dynamic typing for quick MVPs and add in static types as a product stabilizes, all without pausing feature development for a risky language rewrite.
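The incremental path from dynamic to static typing looks roughly like this in practice. The function below is an invented example; the annotations are standard Python 3 syntax that a checker like mypy verifies statically, while the interpreter ignores them at runtime:

```python
from typing import List, Optional

def find_user(users: List[str], name: str) -> Optional[int]:
    """Return the index of name in users, or None if absent.

    The annotations cost nothing at runtime, but mypy can statically
    catch callers that forget to handle the None case.
    """
    for i, user in enumerate(users):
        if user == name:
            return i
    return None

idx = find_user(["ada", "grace"], "grace")
# mypy would reject a bare `idx + 1` here, because idx may be None:
if idx is not None:
    print(idx + 1)  # prints 2
```

An unannotated version of the same function runs identically, which is why annotations can be added file by file as a codebase matures rather than in one risky rewrite.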
Python 3.6 is more featureful, more consistent and in many cases more efficient than Python 2.7
Python 3 is the new Gold Standard
Python 3 has now been around for 10 years, and it’s no longer a question of if organizations will migrate to it, but when. Early versions of Python 3 had significant downsides that hindered adoption of the backwards-incompatible update to the language—
but no longer. Python 3.6 is more featureful, more consistent, and in many cases, more efficient than Python 2.7. The tools for migrating to Python 3 have become mature and significantly decrease the burden of migrating. Most major libraries now support Python 3, meaning your dependencies should no longer block migrating. I believe we’ll see more and more organizations incrementally migrating their codebases to Python 3, and the majority of new Python projects will be started with Python 3.
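Two small examples of why Python 3.6 code often reads better than its 2.7 equivalent, both features introduced in 3.6 (the values here are arbitrary):

```python
# Python 3.6+ f-strings interpolate expressions directly, replacing
# 2.7-era forms like "Hello, %s!" % name or "Hello, {}!".format(name).
name = "world"
greeting = f"Hello, {name}!"

# Underscores in numeric literals (also new in 3.6) make large numbers readable.
budget = 1_000_000

print(greeting)     # Hello, world!
print(budget // 4)  # 250000
```

Neither feature exists in Python 2.7, so snippets like this are a quick way to spot code that has already crossed the migration line.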
Non-traditional coders will drive Python growth
Having secured a loyal following among new and experienced developers alike, the next phase of Python growth will come from non-traditional coders. As fields such as Artificial Intelligence and Machine Learning increase in demand, we’ll see data scientists, physicists, bioengineers, and other professionals drive the spread of Python. Why is this happening?
• First of all, there’s a very low barrier to entry. Python can be installed on all major operating systems, including Windows, Linux, and OS X, and run in nearly every common work environment.
• It’s also part of a strong community of users, with many user groups organized by Python.org, and all of us benefiting from the leadership of the Python Software Foundation.
• Lastly, the Python Package Index (PyPI) makes it easy to find reusable packages, install and manage package dependencies, and isolate application environments.
This low barrier to entry, plus the inclusion of packages like NumPy and TensorFlow, makes Python appeal to professionals involved in astrophysics-level mathematics as well as deep neural networks.
High Schools and Universities will expand Python
It only makes sense that computer science programs should expand their Python offerings. It is an ideal language for learning and teaching, since it has a simple structure and clearly defined syntax. It also requires less code to complete tasks than many other object-oriented languages. In fact, it is such an inviting entry into programming that the CoderSchool offers children as young as 10 what it calls Python Startup — an Intro to Python summer camp.
Analyst View BY ARNAL DAYARATNA
The demise of the statistician
One of the remarkable consequences of the contemporary proliferation of data is the corresponding profusion of data-driven analytics across a wide range of industry verticals. In particular, research in verticals such as environmental science, education, pharmacology, medicine and public health differentially explores concepts such as correlation and causality as they relate to two or more variables. For example, the contemporary profusion of data has enabled a surfeit of research that asserts a relationship between variable X and cancer, variables Y and Z and the longevity of a marriage, or variable Z and a detriment or benefit to the environment. Concrete examples of such analyses include claims about the relationship between BPA and cancer, the financial investment in a wedding and the longevity of the associated marriage, and vaccines and the onset of autism. Historically, statisticians have been central to efforts to understand the statistical significance of correlative analytics that illustrate a relationship between two variables. That said, one of the notable consequences of the widespread adoption of business intelligence platforms is their diminution of the importance of the statistician and corresponding elevation of the ability of business users to derive actionable insights about large-scale datasets. In addition, business intelligence platforms absolve users of the need to write custom code and subsequently accelerate the derivation of analytic insights. While the acceleration of the derivation of analytic insights, the democratization of data scientist-related capabilities and enhanced data visualization functionality represent some of the key advantages of contemporary business intelligence and data analytics platforms, drawbacks include the diminution of considerations related to statistical or analytic methodology, significance and rigor.
Another way of putting this would be to say that contemporary business intelligence platforms have ushered in the death of the statistician, and laid the foundation instead for data-savvy business users capable of rapidly wrangling a massive dataset and understanding relationships among variables by means of a panoply of rich and multivalent visualizations.
The prioritization of accelerated time to insight in contemporary business intelligence platforms, and the corresponding evacuation of statistical rigor, have facilitated the proliferation of spurious correlative analytics that confuse the distinction between correlation and causation. For example, recent analyses claim a correlation between an embryo's exposure to deep ultrasounds and the onset of autism, between the consumption of carrot juice and the amelioration of cancer, and between exposure to classical music and the slowing of dementia. While such studies may individually have merit with respect to the sample size upon which they operate, they evacuate deeper questions about the dataset in question, such as whether the correlation can serve as the theoretical foundation for causation on a broader scale.

Machine learning technologies promise to resurrect the role played by statisticians by empowering data analysts to model relationships between a multitude of variables, in contrast to business intelligence platforms that deliver correlative relationships between two variables. Moreover, machine learning platforms have the capability to model the evolution of relationships over time. While the proliferation of BI platforms ushered in the death of the statistician and accelerated the derivation of data-driven insight, it correspondingly facilitated the production of analyses that lacked the statistical rigor of analyses that examined correlation through the rich lens of tools at the disposal of trained statisticians. Machine learning tools promise to restore some of that analytical rigor to data-driven analytics by providing additional dimensions of insight to conversations about correlations that may or may not be illustrative of causation.

In conversations about health and wellness in particular, enhanced analytical rigor about correlation vs. causation can go a long way toward ensuring that consumers of data-driven analytics make well-informed choices about the best options for their health, instead of falling prey to the flotsam and jetsam of findings produced by the emergent factory of analytical insights that has been made possible, in part, by the democratization of BI tools.
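The column's contrast between a pairwise, BI-style correlation and a multivariate model that a statistician would insist on can be sketched in a few lines. The data below is entirely synthetic, and the "exposure," "outcome" and "confounder" variables are hypothetical stand-ins, not any study's real data: a hidden confounder produces a strong pairwise correlation even though the exposure itself has no effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A hypothetical confounder drives both "exposure" and "outcome".
confounder = rng.normal(size=n)
exposure = 0.8 * confounder + rng.normal(size=n)
outcome = 1.5 * confounder + 0.0 * exposure + rng.normal(size=n)

# The pairwise view reports a strong (but spurious) correlation.
r = np.corrcoef(exposure, outcome)[0, 1]

# A multivariate fit that includes the confounder shows the
# exposure's own coefficient is essentially zero.
X = np.column_stack([np.ones(n), exposure, confounder])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"pairwise r = {r:.2f}")                  # strong, spurious
print(f"exposure coefficient = {coef[1]:.2f}")  # near zero
```

The two-variable view and the multivariate view give opposite answers on the same dataset, which is the gap between correlation-surfacing BI tools and statistically rigorous modeling that the column describes.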
Dr. Arnal Dayaratna is Research Director, Software Development at IDC.
Industry Watch BY DAVID RUBINSTEIN
Column as a service

David Rubinstein is editor-in-chief of SD Times.
As cloud services become more granular, more functionality can be had “as a service.” Two new interesting services caught my eye, and so I present this column as a service to you, dear readers.

The first is “failure as a service,” which sounds counterintuitive. Wouldn’t people rather have success as a service? Most would, I’m sure, but then there are testers. The notion of failure as a service stems from chaos engineering, in which you inject harm to find weaknesses in your software and build up its strength.

“But it’s hard to break things in a thoughtful way that is safe and secure,” Kolton Andrus, the CEO and founder of a company called Gremlin, told me in a recent conversation. “We take the concept of breaking things, but add in safety. We have a button to revert back to the previous state, which is important to make users comfortable.”

Andrus began his journey into failure at Amazon, where a decade ago he was part of the team responsible for the retail website’s availability. “Saving downtime was key,” he explained. “We used a chaos engineering approach to proactively prepare” for outages. “Then Netflix came out with the chaos monkey, and I went to Netflix because they were talking about it publicly.” While at Netflix, he helped work on FIT, the next-generation platform that offered more precise failure testing.

Apply this now to the need for resiliency in a services world. What does your organization do if each microservice your application relies on suddenly disappears? Andrus called the need for API resiliency “one of those unsung heroes.” The Netflix API, he said as an example, “is so much more important than the interface. If PlayStation goes down, the service still runs everywhere else.”

Further, he said, keeping data correct is also a challenge. “You have to test the deployment to see if the data is correct. Failures that involve corrupt data are harder to diagnose and fix.” Gremlin, he said, is designed to test APIs.
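The “break things, but add in safety” idea can be sketched in miniature. This is not Gremlin’s implementation, just a toy fault injector under my own assumptions: the `PaymentService` class and `inject_failure` helper are hypothetical, and restoring the original method on exit plays the role of the “button to revert back to the previous state.”

```python
import random
from contextlib import contextmanager

@contextmanager
def inject_failure(obj, method_name, failure_rate=0.2, exc=ConnectionError):
    """Temporarily wrap a method so a fraction of calls fail.

    Restoring the original method on exit is the safety 'revert'
    button: once the block ends, the system is back to normal."""
    original = getattr(obj, method_name)

    def flaky(*args, **kwargs):
        if random.random() < failure_rate:
            raise exc("injected failure")
        return original(*args, **kwargs)

    setattr(obj, method_name, flaky)
    try:
        yield
    finally:
        setattr(obj, method_name, original)  # revert to previous state

class PaymentService:
    def charge(self, amount):
        return f"charged {amount}"

svc = PaymentService()
with inject_failure(svc, "charge", failure_rate=1.0):
    try:
        svc.charge(10)
    except ConnectionError:
        print("caught injected failure; does the caller degrade gracefully?")

print(svc.charge(10))  # reverted: the method works normally again
```

The interesting part is never the injected exception itself but what the surrounding code does with it, which is exactly the weakness chaos experiments are designed to surface.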
“It’s an important aspect that hasn’t gotten enough attention.”

It was Forrester Research that coined the second “as-a-service” to be addressed here: “insight as a service.” A company called Panoply.io has created a platform that enables organizations to spin up data warehouses in which data is coalesced and optimized, ready to be visualized by third-party software. Machine learning and natural language processing are important parts of the platform that help organizations understand the data at its sources.

Historically, manual labor was required to accumulate data and analyze it so business executives could make informed decisions. The skills of data engineers, server developers, DBAs and data scientists were required. Using automation and the cloud, insight as a service can save organizations hours of coding, debugging and research.

“Companies used to struggle with collecting and coalescing data,” said Jason Harris, an evangelist with Panoply.io. Doing omnichannel web analytics was extremely difficult, he added. The Panoply platform integrates with 100 different data sources, most any database provider and what Harris called a growing list of cloud providers.

Harris specifically discussed integrations with Chartio and Stitch, which enable data input, warehousing and visualization to happen very fast, with no code required. “These things typically required querying and scripting, but now so much of it is simple drag-and-drop,” he said.

Those integrations offer organizations the Automatic Cloud Data Stack, a pre-assembled stack that performs ETL, provides a data warehouse where data can be coalesced, and delivers AI-enabled query learning, business intelligence and visualizations of the data, Harris explained. “This way, organizations don’t have to try to figure out which tools to choose from the Swiss army knife,” he added. In the stack, users of the Chartio BI platform choose the data source they want; the data then flows through Stitch’s ETL pipeline, into Panoply’s data warehouse, and finally into Chartio’s cloud analytics solution.
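The extract-transform-load flow that such a stack automates can be sketched with hypothetical stand-ins. None of this uses Stitch’s, Panoply’s or Chartio’s actual APIs; `extract`, `transform` and `load` are placeholder functions, and an in-memory SQLite table stands in for the warehouse a BI tool would query.

```python
import sqlite3

def extract():
    # Stand-in for a connector pulling raw records from a source;
    # note the inconsistent string types typical of raw feeds.
    return [
        {"user": "a", "amount": "10.5", "channel": "web"},
        {"user": "b", "amount": "3",    "channel": "mobile"},
    ]

def transform(rows):
    # Normalize types so downstream queries need no per-source logic.
    return [(r["user"], float(r["amount"]), r["channel"]) for r in rows]

def load(rows, conn):
    # Stand-in for the warehouse stage: one coalesced, query-ready table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (user TEXT, amount REAL, channel TEXT)"
    )
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)

# The "BI layer" then asks questions of the coalesced table.
total = conn.execute("SELECT SUM(amount) FROM events").fetchone()[0]
print(total)  # 13.5
```

The pitch of insight as a service is that the glue code above is exactly what organizations no longer have to write and maintain themselves.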
Both of these services, like most cloud services, save organizations the cost of building, updating and maintaining the solutions on-premises, and free up their developers and administrators to add more value to the organization. This has been my column as a service to you! I’m sure we’ll be writing about more new and unique “as-a-service” services as they’re turned loose on an unsuspecting public!