FC_SDT032.qxp_Layout 1 1/17/20 2:53 PM Page 1
FEBRUARY 2020 • VOL. 2, ISSUE 32 • $9.95 • www.sdtimes.com
Instantly Search Terabytes
www.sdtimes.com EDITORIAL EDITOR-IN-CHIEF David Rubinstein email@example.com NEWS EDITOR Christina Cardoza firstname.lastname@example.org
dtSearch’s document filters support: • popular file types • emails with multilevel attachments • a wide variety of databases • web data
SOCIAL MEDIA AND ONLINE EDITORS Jenna Sargent email@example.com Jakub Lewkowicz firstname.lastname@example.org ART DIRECTOR Mara Leonardi email@example.com CONTRIBUTING WRITERS Alyson Behr, Jacqueline Emigh, Lisa Morgan, Jeffrey Schwartz
Over 25 search options including: • efficient multithreaded search • easy multicolor hit-highlighting • forensics options like credit card search
Developers: • SDKs for Windows, Linux, macOS • Cross-platform APIs for C++, Java and .NET with .NET Standard / .NET Core
• FAQs on faceted search, granular data classification, Azure, AWS and more
CONTRIBUTING ANALYSTS Enderle Group, Gartner, IDC, Intellyx, Ovum
ADVERTISING SALES PUBLISHER David Lyman 978-465-2351 firstname.lastname@example.org SALES MANAGER Jon Sawyer email@example.com
CUSTOMER SERVICE SUBSCRIPTIONS firstname.lastname@example.org ADVERTISING TRAFFIC Mara Leonardi email@example.com LIST SERVICES Jourdan Pedone firstname.lastname@example.org
Visit dtSearch.com for • hundreds of reviews and case studies • fully-functional enterprise and developer evaluations
The Smart Choice for Text Retrieval® since 1991
REPRINTS email@example.com ACCOUNTING firstname.lastname@example.org
PRESIDENT & CEO David Lyman CHIEF OPERATING OFFICER David Rubinstein
D2 EMERGE LLC 80 Skyline Drive Suite 303 Plainview, NY 11803 www.d2emerge.com
VOLUME 2, ISSUE 32 • FEBRUARY 2020
FEATURES Web development shifts to a mobile-first approach
XebiaLabs DevOps platform 9.5 offers added visibility
Learning about your software progressively
Technology advances and demands for speed are driving enterprises to the edge page 28
GUEST VIEW by Eli Lopian The importance of healthy code
ANALYST VIEW by Bill Holz 5 steps to master continuous delivery
INDUSTRY WATCH by David Rubinstein How I came to know, and love tech
Creating a DevOps culture
BUYERS GUIDE How to solve your UI testing problems page 38
Software Development Times (ISSN 1528-1965) is published 12 times per year by D2 Emerge LLC, 80 Skyline Drive, Suite 303, Plainview, NY 11803. Periodicals postage paid at Plainview, NY, and additional offices. SD Times is a registered trademark of D2 Emerge LLC. All contents © 2020 D2 Emerge LLC. All rights reserved. The price of a one-year subscription is US$179 for subscribers in the U.S., $189 in Canada, $229 elsewhere. POSTMASTER: Send address changes to SD Times, 80 Skyline Drive, Suite 303, Plainview, NY 11803. SD Times subscriber services may be reached at email@example.com.
NEWS WATCH

TIOBE Index awards C programming language of the year
After experiencing a significant drop on the index in 2016 and 2017, C has experienced huge growth over the last year. The language went from a 17.145% rating in November 2015 to 6.477% in August 2017, but as of January 2020 the language is back up at 15.773%. “Everybody thought that Python would become TIOBE’s programming language of the year for the second consecutive time. But it is good old language C that wins the award this time with a yearly increase of 2.4%,” TIOBE wrote in a post announcing the results. TIOBE believes the main driver behind C’s popularity this year is the emergence of many IoT devices. “C excels when it is applied to small devices that are performance-critical. It is easy to learn and there is a C compiler available for every processor,” TIOBE explained.
OSI co-founder leaves over license controversy
Bruce Perens, co-founder of the Open Source Initiative (OSI), is removing himself from the organization over Cryptographic Autonomy License concerns. The Cryptographic Autonomy License was submitted to the OSI last year by open-source lawyer Van Lindberg on behalf of Holo, a decentralized app platform. Holo’s co-founder and distributed app architect Arthur Brock explained the license is designed to protect not only developers and users, but also end users’ privacy and control of identity and data. Perens has expressed
more concerns with how the license will be used and written. He believes the license requires users to have access to a lawyer in order to understand it, which is not the way he believes licenses should be developed for open source.
People on the move
• DataStax has announced Ed Anuff is joining the company as its chief product officer. Anuff has more than 25 years of experience in the industry. Most recently, he was the director of product management at Google Cloud Platform and senior vice president of product strategy at Apigee. According to the company, Anuff comes to DataStax with a bold mission: to “deliver exciting products that shatter today’s expectations.”

• Information Builders has appointed data-industry veteran Keith Kohl as senior vice president of product management. Kohl will be responsible for all product management functions as well as defining strategy for the company’s solutions and implementing that direction into the products and services. He will also serve as a liaison between product, sales and marketing teams.
aims to provide a streamlined feature set without all of that clutter. Npm explained that the key feature is private packages, which enable developers to create, maintain, and upgrade packages outside of a public registry.
Report: Java developers turn to microservices
Microservices have had a major impact on Java in the past few years. JVM plugin JRebel recently released its 2020 Java Developer Productivity Report, and a main focus of its findings was the impact of microservices adoption in Java. According to JRebel, application architecture is one of the determining factors for which technologies developers use. About 50% of respondents are working with a microservice architecture, 27.57% are working with a monolithic architecture, and 9.77% are working with SOA-based applications. Less than 5% of respondents work with desktop apps, mobile apps, serverless apps, or other.

• Informatica has announced a new chief executive officer. Amit Walia joined Informatica in 2013, and was most recently the company’s president of products and marketing. Additionally, the company announced Tracey Newell’s role as president and global field operations will be expanded to president and global field and marketing operations, where she will be responsible for the company’s brand, digital, field, portfolio and marketing efforts. The company’s senior vice president and chief strategy officer Vineet Walia has been promoted to executive vice president, where he will work closely with the board and leadership team.

Mozilla Web DNA uncovers web developer pain points
Mozilla put the survey together as a means of representing the “voices of developers and designers working on the web.” They surveyed over 28,000 developers and designers from 173 countries. According to Mozilla, four out of the top five needs relate to browser compatibility. Documentation, debugging, frameworks, security and privacy are also in the top 10. As part of the survey, developers were also asked what was missing from the web. The top responses were access to hardware (12.4%), browser compatibility (8.6%), access to filesystem (4.7%), performance (3.4%), PWA support (3.4%), debugging (3.3%), and access to native APIs (3%). It should be noted that there were other responses, but only these seven had 3% or more people mention them.
SAFe 5.0 for Lean Enterprises released
Scaled Agile Inc., the company behind the Scaled Agile Framework (SAFe), has announced a new version of the framework. SAFe 5.0 for Lean Enterprises features advances in strategy, execution, and leadership competencies. According to Scaled Agile, key benefits of this release include:
• customer centricity and design thinking,
• measure and grow guidance to help organizations determine their current state of business agility,
• a continuous learning culture competency,
• and an organizational agility competency.
It also adds a new SAFe principle called Organize Around Value that helps organizations align development efforts around full, end-to-end value flow.
IntelliJ IDEA’s 2020 roadmap
According to JetBrains, the following features will be released to IntelliJ IDEA over the course of 2020. The first to be released is the 2020.1 spring release. The features are centered around two main themes: performance and support for modern development workflows. On the performance side of things, JetBrains is improving indexing speed, redoing the read/write locks threading model, and adding support for loading and unloading plugins without restarting.
Google’s Flutter focuses on ambient computing
Google is preparing for an ambient computing future with its UI toolkit Flutter. The company announced new updates and strategies at its Flutter conference, Flutter Interact. As part of this new focus, the company announced Flutter 1.12, the latest stable release of the framework. This release included new performance improvements, more control over Flutter content and updates to the Material and Cupertino libraries. In addition, the release introduced a new Google Fonts package with access to almost 1,000 open source font families. Other features included iOS 13 dark
mode, add-to-app updates, and support for Dart 2.7.
Dart 2.7 released to be safer and more expressive
Version 2.7 comes with added support for extension methods, a new package for handling strings with special characters, an update on null safety and a new null safety playground experience in DartPad. The new extension method support allows developers to add new functionality to any type and have the brevity and auto-complete experience of regular method calls. Extension methods are resolved and dispatched statically, which means users can’t call them on values whose type is ‘dynamic,’ Michael Thomsen, product manager for Dart and Flutter, wrote in a post.
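Dart syntax is outside the scope of this brief, but the statically resolved extension mechanism Thomsen describes has a close analog in Rust’s extension-trait pattern. The sketch below is illustrative only — the trait and method names are invented, and this is Rust, not Dart’s actual API:

```rust
// Extension-trait pattern: bolt a new method onto an existing type
// (str here) without modifying it. As with Dart 2.7 extension methods,
// the call is resolved and dispatched statically at compile time.
trait Shout {
    fn shout(&self) -> String;
}

impl Shout for str {
    fn shout(&self) -> String {
        format!("{}!", self.to_uppercase())
    }
}

fn main() {
    // Reads like a regular method call, with normal autocomplete support.
    println!("{}", "hello".shout());
}
```

Because the method is resolved at compile time, it cannot be invoked on a value whose concrete type is unknown — mirroring the point above that Dart extension methods can’t be called on values typed `dynamic`.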
Erwin revamps Data Modeler solution
Data governance solution provider erwin is releasing a new version of its data modeling solution. According to the company, the erwin Data Modeler is designed to help users design, deploy and understand high-quality data sources.
The new version features a new UI that makes it easier to customize the modeling canvas and access features and functionality. Other features of the new release include support for, and model integration from, major databases such as Amazon Redshift, the latest DB2 releases, and the latest MS SQL Server releases. Additionally, modeling task automation capabilities have been added to save time, reduce errors and increase work product quality and speed. The capabilities include a new scheduler, new quick compare templates, a new ODBC query tool, and the ability to customize and automate super-type/sub-type relationships.
Instana acquires three companies for the future of APM
Instana says the acquisitions will add to its vision to create an observability tool for modern cloud applications. The three companies Instana acquired are StackImpact, a production-grade profiler; Signify, a tool for understanding the health of microservices; and BeeInstant, a scalable and performant real-time back end for customized metrics. z
Web development shifts to a mobile-first approach
BY JENNA SARGENT

Today, a majority of internet users are browsing the web not from their computers, but from a mobile device. According to a 2018 study by Oberlo, 52.2% of global web traffic came from mobile phones, up from just 0.7% in 2009. In the United States, that number is even higher, with 58% of total web visits coming from mobile devices.

Traditionally, web development focused first on designing a great desktop experience, and then formatting sites for other screen sizes. But with the decline of desktop browsing and the rise of mobile browsing, doesn’t it make sense to do things the other way around? This is where mobile-first development, a trend that has been gaining popularity over the years, comes in. Mobile-first development is exactly what it sounds like: the practice of designing web experiences specifically for mobile, then branching out to other device formats after.

Mobile-first shouldn’t be confused with responsive design, which is another approach for ensuring good user experiences on mobile. Software development company Chetu’s assistant vice president of operations Pravin Vazirani explained that these two design processes are the two main schools of thought when it comes to web design. “Websites that are designed responsively adapt to whichever device the site is being viewed on,” said Vazirani. “These sites are usually desktop-focused sites that have been responsively redesigned to accommodate the mobile user. Mobile-first web development, however, places a priority on mobile design while making responsiveness to other device types secondary.”

According to Bob Bentz, president of digital marketing agency Purplegator, many people think that responsive design equals mobile-first development. “Mobile-first should be considered a
design strategy first,” Bentz said. “In other words, responsive design has made it easy for a website to look good on both desktop and mobile. What makes mobile-first different, however, is that the web designer considers its mobile users first and then desktop users.” Murphy O’Rourke, senior UX designer at digital tech consultancy SPR, added that there is a layer in between responsive and mobile-first called adaptive design, which is where there are completely different layouts depending on the device. “Maybe you cater some features specifically to phone or maybe even specifically to desktop and you hide things on tablet and so on because maybe they’re not going to be used,” he said. Mobile-first has an advantage over responsive design, Vazirani explained. This is because responsive sites tend to be slower, which just doesn’t cut it for users. “Sites that are forced to respond to each new device are naturally slower and more data-intensive — which is a no-go for modern, mobile users — and since mobile design often looks great on desktop devices anyway, it’s leading many to question why focusing on responsive design is beneficial at all. There’s no stopping the mobile-first web development trend now, and no
one has shown any signs of wanting to stop it anyway,” said Vazirani. According to Brittany Stackhouse, head of development at digital marketing agency Exposure Ninja, mobile-first forces developers to think about mobile user experience at the beginning of a project. Now, developers have to think early about questions such as “how long will the site take to load on a mobile device?” According to O’Rourke, there are several things to consider when doing mobile-first development, such as what features people use on their phone, what context they’re in while browsing, and how to make information easily digestible. For example, many people are in a different mindset when on their phone versus when they’re at a computer. This makes it crucial to figure out the proper workflow and determine what information needs to be shown first. “Your messaging should be concise, so that it not only fits on the screen, but also it’s easy to digest in short increments,” said O’Rourke. “And so I guess starting there, you’re really just kind of refining the information, you’re refining what you want the user to do, you’re presenting them tasks. You’re aiming to be easily digestible, small chunks of information and workflows. So instead
of a huge form, you might break it up into a couple different parts.” In addition, mobile-first developers can utilize the features of a phone that aren’t present on desktop. “Using your native features, you can automatically input things based on your location, or using your camera, [etc],” said O’Rourke. In addition, developers must consider network access. “Almost everyone in the world has a mobile device these days, which is part of the reason mobile browsing currently dominates,” said Vazirani. “At the same time, network access is limited in many countries and regions, which makes it hard to load data-rich desktop websites.” Because of this, mobile sites need to be lightweight
and snappy.

Another thing to consider is the fact that mobile is tap-oriented, not click-oriented. Certain things that work well on desktop don’t work well on mobile, such as dropdown menus, Vazirani explained.

The actual process of developing mobile-first also differs from traditional desktop development. The process of mobile-first development is less trapped by archaic and inefficient development practices than desktop design, Vazirani explained. These practices arose because desktop design began in the early days of the internet. Compare that to mobile design, which rose with the smartphone. “Mobile design … [is] perfectly designed for the current era of human technological and social growth. While desktop web development remains rooted in established procedures, mobile development is more of a ‘Wild West’ economy with many developers figuring out best practices as they go along.”

This “Wild West” economy isn’t perfect, though. Because developers are figuring things out as they go, it’s easier to make mistakes, he explained. But he believes it also offers developers more opportunities for user engagement. Mobile web development also benefits from new developer tools that make development more streamlined and intuitive, though Vazirani added that in a lot of instances, the same toolkits are
used across development of both mobile and desktop sites.
Mobilegeddon
Adopting a mobile-first development approach benefits not just your users, but your site performance as well. In 2015, Mobilegeddon hit. Mobilegeddon is the name given to Google’s search engine algorithm update in April 2015 that gave priority to websites that perform well on mobile devices. This update made it so that websites “where text is readable without tapping or zooming, tap targets are spaced appropriately, and the page avoids unplayable content or horizontal scrolling” were given a rank boost.

Then in 2018, Google started migrating to mobile-first indexing. Google’s index is used to determine a site’s position in Google search rankings. Previously, the index was based on the desktop version of a site, which led to problems for mobile searchers. By indexing the mobile version of a site, Google is ensuring that users searching for information on mobile devices find what they’re looking for. Google clarified that this does not mean there will be two separate indexes. Instead, it will increasingly index mobile versions of content, which will be added to the main index.

Google also provides a mobile version of the tool Test My Site, which shows how sites rank for mobile site speed and functionality, Vazirani explained. “Google is more aware than anyone that cyberspace is going mobile fast, which is why this Silicon Valley giant has provided tools like the mobile version of Test My Site,” he said.
Desktop design won’t ever go away completely
While mobile will certainly remain a major focus going forward, don’t expect desktop design to go away. “The push for mobile-first development is really just to
have an assurance that websites are working for mobile devices as well as desktops. Both are important since each can make up more [than] around half of all site traffic,” said Alexander Kehoe, co-founder and operations director at web design agency Caveni Digital.

Desktop will likely always be a focus, especially in business contexts. “Nobody is going to be running reports and doing data synthesis on their phones,” said O’Rourke. “It would take way too long. And that’s not really the context of what these people expect to do. When they’re at work, they’re on a computer.”

Exposure Ninja’s Stackhouse also emphasized that mobile-first development practices don’t mean that the desktop experience gets ignored. “Instead, we're creating responsively designed websites that work for both desktop and mobile users… So while mobile users may enjoy a faster, simpler version of a website, desktop users can enjoy an intricate experience that's optimized for their chosen device,” she said.

While desktop design will never truly go away, it won’t be primary much longer. “Desktop development certainly isn't an afterthought, but it isn't the main focus anymore,” said Vazirani. “I would say that we are at a point where mobile-first is the priority and desktop development is lower on the scale. However, I imagine in a few years that desktop will drift even further down the scale of priorities to the verge of afterthought status.”

Vazirani added that things other than mobile could have the potential to disrupt desktop even further. Advances in wearable technology and virtual and augmented reality could have this effect. “The truth of the matter is that desktop design will only be relevant as long as desktops are relevant, so if we as consumers are moving away from that, then yes it could very well become a bygone concept,” said Vazirani. “Desktop web design, as it was a decade ago, is already obsolete, with responsive design considered the new minimum.
Mobile design, and therefore mobile-first development, is a step toward the immersive world that cyberspace was always intended to be.” z
Progressive Web Apps
Progressive Web Apps (PWAs) are an important piece of the mobile-first puzzle. According to Cory Tanner, senior UX developer at digital agency DockYard, PWAs are responsive web applications that provide a website- and mobile-app-like experience. “With PWAs, companies can create a single digital property that takes full advantage of most modern device and browser types, eliminating the need to build and manage separate web apps and native mobile apps,” said Tanner.

Traditional websites, Tanner went on to explain, are built and designed for desktop web browsers, limiting their ability to adapt to different device types. In addition, websites are unable to leverage mobile capabilities like push notifications or the ability to be downloaded to the homescreen. On the other side of the spectrum, mobile apps are applications designed to work on a specific operating system. They require developers to build different versions of the same app to work across both Android and Apple.

PWAs eliminate some of these pain points and combine the best of both, Tanner explained. “When compared to traditional digital formats, PWAs have brought more focus to how a user interacts with web apps on a mobile device. PWAs bring added functionality that empowers app designers to reach the user with notifications and calls to action in a more interactive, localized format.” Many companies that understand the importance of mobile experiences view PWAs as a way to “deliver the convenience of mobile with the sophistication of desktop experiences,” he said.

There are several benefits of PWAs — both for users and development teams. For users, they provide a seamless experience that loads like a typical website while offering use of capabilities of traditional mobile apps. For development teams, PWAs can provide simpler design, development and project management. Rather than managing different versions of the same app, teams can focus on building and maintaining one app.
“By reducing the need to maintain multiple code bases, PWAs help to reduce overall cost and resource spend. Developers can spend less time on cross-platform compatibility updates and more time on creative innovations,” said Tanner. And companies are seeing massive ROI on their PWAs. For example, according to Tanner, Pinterest saw a 40% increase in time spent on their site, a 60% increase in engagement, and a 44% increase in user-generated ad revenue after launching their PWAs. Tanner expects PWAs to become more popular as companies start seeing the potential benefits. “Today, the majority of companies still tend to take the familiar, ‘safe’ route to digital product innovation by focusing on native mobile apps and website enhancements,” Tanner said. “In the coming years, more companies will realize the benefits of different mediums, like PWAs, for different use cases. PWAs will shift from an emerging approach to a core part of the digital product consideration set.” z
ELEPHANT OUT OF THE ROOM
Bad address and contact data that prevents effective engagement with customers via postal mail, email and phone is the elephant in the room for many companies. Melissa’s 30+ years of domain experience in address management, patented fuzzy matching and multisourced reference datasets power the global data quality tools you need to keep customer data clean, correct and current. Tell bad data to vamoose, skedaddle, and Get the El out for good!
Data Quality APIs
Global Address Verification
Activate a Demo Account and Get a Free Pair of Elephant Socks! i.Melissa.com/sdtimes
www.Melissa.com | 1-800-MELISSA
XebiaLabs DevOps platform 9.5 offers added visibility
BY CHRISTINA CARDOZA
The latest version of XebiaLabs’ DevOps platform introduces a new feature for synchronizing the delivery of business apps across technical release pipelines. Delivery patterns in version 9.5 aim to track business apps as they progress, synchronize component pipelines, model pipeline dependencies visually, and provide visual real-time status. According to the company, the introduction of delivery patterns was necessary because managing application releases is becoming too complex as teams move at different speeds, and
dependencies between apps begin to slow down the process and open the application to risks. The new release also features updates to the company's on-demand release audit reporting solution and enhancements to its administrative and command-line interface functionality. With the new updates, the company explained, users can generate audit reports for releases related to specific applications, environments or change requests; drill down into detailed information; customize reports to auditors’ exact specifications; and generate audit reports automatically. z
Split Software uncovers DevOps downsides to releasing faster BY CHRISTINA CARDOZA
A recent report from Split Software found that while a majority of organizations release new features on a bi-weekly basis, many are experiencing downtime after a new feature is introduced. According to the company, 82% of respondents commonly uncover bugs in production, 38% have a mean time to resolution of greater than 1 day, and 41% have to roll back or hotfix more than 10% of new features. “Our survey of the DevOps community has highlighted some troubling issues that directly result from the intense demand to release faster,” said Dave Karow, evangelist for Split. “There are inherent risks that organizations must bear, to speed these releases to market and remain competitive.” “Once an issue is found, teams also often have to roll back code or hotfix it in production. Both of these practices
can introduce additional risk,” the company wrote in a blog post. In order to continue to move quickly, without introducing more risk into applications and services, Split recommends using feature flags for gradual rollouts; application, error, and feature monitoring to catch bugs quickly; and experimentation for avoiding unnecessary work. Other findings of the report included: 87% of respondents release new features more than once a month to keep up with the demand for new features; 88% of DevOps teams need more than an hour to resolve detected issues; and 27% of respondents think features are poorly adopted and utilized. The report is based on conversations with more than 100 different DevOps organizations in the US. z
See “Learning about your software progressively,” page 37
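Split’s own SDK is not shown here, but the gradual-rollout technique it recommends can be sketched with a deterministic percentage bucket that gates a feature per user. This is an illustrative sketch, not Split’s actual API; the function, flag, and user names are invented:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Deterministic percentage rollout: hash (flag, user) into one of 100
// buckets; the user sees the feature only while their bucket number is
// below the rollout percentage. Raising the percentage only ever adds
// users, so each user's experience stays stable as the rollout widens.
fn in_rollout(flag: &str, user_id: &str, percent: u64) -> bool {
    let mut hasher = DefaultHasher::new();
    flag.hash(&mut hasher);
    user_id.hash(&mut hasher);
    (hasher.finish() % 100) < percent
}

fn main() {
    // At 100% every user is in the rollout; at 0% no user is.
    assert!(in_rollout("new-checkout-flow", "user-42", 100));
    assert!(!in_rollout("new-checkout-flow", "user-42", 0));
}
```

Because the bucket depends only on the flag name and user id, a given user gets the same answer on every request, and widening the rollout from, say, 5% to 20% adds users without reshuffling the ones already enabled — which is what makes rolling back a bad release a matter of setting the percentage back down.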
In other DevOps news…
• Dynatrace announced its Autonomous Cloud Enablement Practice designed to accelerate the DevOps movement into autonomous cloud operations. According to the company, the practice will provide best practices, hands-on expertise and automation services. “The move to autonomous cloud operations is not simply one of technical execution, it requires a transformation in DevOps thinking and alignment,” said Andrew Hittle, SVP and chief customer officer at Dynatrace.
• ShiftLeft is bringing static application security testing (SAST) to DevOps with the announcement of Inspect. The new solution is targeted specifically at developers, and is able to integrate into their code repositories and build and bug-tracking tools. Inspect can insert security directly into developers’ pull requests as well as help them scan code and automate acceptance decisions based on security criteria. ShiftLeft also explained the solution will help developers find the “right vulnerability information at the right time, which leads to more efficient mean time to remediation and ultimately getting applications to production faster, without sacrificing security.”
• The DevOps security company Sysdig announced its open-source cloud-native runtime security project Falco is joining the Cloud Native Computing Foundation as an incubation-level hosted project. Falco is designed to detect, alert and reduce the risk of a security incident.

• WhiteSource and Codefresh team up on open-source management in CI/CD pipelines. As part of the partnership, WhiteSource will provide a new integration with Codefresh’s Kubernetes-native CI/CD solution in order to help users secure and manage application dependencies and Docker images. z
Join the Week-Long Celebration of Agile Testing & Automation FULL PROGRAM NOW AVAILABLE
EPIC EXPERIENCE 2020 Special Offer: Register using promo code EPICMP to save up to $200 off your registration.*
April 19–23, 2020 SAN DIEGO, CA San Diego Mission Bay Resort
EPIC.TECHWELL.COM *Discount valid on packages over $400
SD Times: From the beginning
BY TED BAHR

Ted Bahr was the CEO and co-founder of BZ Media LLC.

Who were we and what were we thinking? It was 20 years ago that Alan Zeichick and I founded BZ Media, and on Feb. 22, 1999, Dave Rubinstein took the materials for the first issue of SD Times to the FedEx office in Hicksville (the one that closed late) to send to the printer and begin our journey.

I had been working in high-tech media ad sales and publishing for 12 years, back on PC Magazine and PC Tech Journal at Ziff-Davis and later on many developer publications at Miller Freeman, notably launching Embedded Systems Programming and others. I knew how to do this. Easy! So we thought.

Co-founder Alan Zeichick similarly had worked at a tech publishing giant, IDG, for years before splashing around in the niche magazine froth that was Miller Freeman in the 1990s. Alan had worked on a dozen magazines on the editorial side and launched at least a handful. He knew how to do this. Easy! So we thought.

It's very different to actually be on your own without the safety net of a large company. We had to build everything from scratch, and raising money was far more challenging than we imagined. This was 1999, when money was pouring into startups like water. I had just pitched a small VC whose motto was "we fund the plans that don't get funded," in retrospect a curious statement. We were launching a print magazine — no, a print NEWSPAPER — against the digital-content headwinds. He stopped me and said, "OK, I want you and your partner to go up to the top floor of your apartment or office building, rip up your business plan and THROW IT TO THE WIND, and then come back downstairs to the kitchen and… (lowering his voice for emphasis) …think of yourselves as a Dot Com!"

"But wait, I thought you funded the startups that DON'T get funded?!"

"Oh, I would fund you, you bet. If you were a Dot Com."

We wound up getting funded by F & F Capital Partners, the normal last-ditch source consisting of Family & Friends, with the Family in this case being Alan and I.

We were nuts, really. But here is a key element to being an entrepreneur: denial! I understood the risks intellectually, but emotionally I was thoroughly convinced it was a sure thing. Believing in yourself and each other unquestioningly. And working 12 hours a day, every single day of the year, for the first 18 months (Merry Christmas!).

But slowly, page by page, we got it done. Rubinstein, with real newspaper experience at the Long Island daily Newsday, joined us before the first issue, along with other key members of the startup crew like Mara Leonardi, Eddie Correia, Rebecca Pappas, Paula Miller, Jon Sawyer, Pat Sarica, and David Karp. Later on, 2005-2006 draft picks Alex Handy, Dave Lyman, Stacy Burris, Viena Ludwig, and Craig Reino did the heavy lifting, with many others along the way.

Back in the early 2000s there were dozens of publications targeting developers, but our readers were development managers — the bosses — and this made us different. But SD Times was a tough sell, because most of the advertisers in those days were companies founded and run by hardcore coders themselves! They loved the hardcore programmer magazines and fully identified with the brilliant developer they no doubt were themselves. Their vision of a "manager" was probably Dilbert's boss.

Like most professional publishers, we competed and measured ourselves more by ad pages than circulation and, besides, magazines were starting to move to a "controlled" model where a subscriber would qualify for a free subscription by job title or other criteria. SD Times was a pioneer in this area, and it allowed us to build a very definable audience for advertisers. We were also the first publication anywhere to have a
companion digital version (OK, a PDF) of the issues that qualified for a BPA audit, which was an important publishing "seal of approval." Our first year we were #22 in the programming category, then jumped up to #8 the next year and slowly moved into the top three. We then enjoyed a healthy dogfight with Dr. Dobb's Journal and later MSDN Magazine for many years for the top spot. (Hah! Guess we won!)

Our first ad contract was from Rick Riccetti at Seapine, a longtime supporter, as was Nigel Chanter at Perforce, Dave Cunningham at Dundas, Gent Hito at /n software, Alexander Falk at Altova, Adam Kolawa (RIP) at Parasoft, Rene Garcia at Software FX, Jeff Largiader at Programmer's Paradise — and so many others [I mean, who could forget characters like Julie Fishhands? Hint: not her real name]. The sales team and I crisscrossed the country pitching the merits of reaching the programmer's boss and made many friends — some of them are reading this today and are still partners with SD Times!

We managed the competition and the ups and downs of the market — not to mention keeping the ship afloat through two recessions — and we emerged as the "last man standing" in the once-mighty IT publication world.

So, why is that? From the beginning we thought that the pace of change and the expanding interconnectedness in software development made it very difficult for anyone to keep up. Sure, a heads-down programmer could find tips in Dr. Dobb's, and later any specific coding issue could be Googled. But the programmer's boss — the development manager — what were they going to do with that deep-down-in-the-weeds information? They needed the Big Picture. What was happening and what it meant. When should we change platforms? When should we upgrade? What are the coming trends and how might they apply to my work?
Tough questions — judgment calls, really — and these are just the sort of decisions that the Internet is simply terrible at helping with. Try answering any broad, tough question — which car should I buy? — and you'll tumble down the rathole of a million conflicting opinions, reports and cleverly concealed biases. The Big Picture needs editors, and an editorial team that is in the field every day talking to people, assessing the pace of change, taking the new and unintelligible and translating it, and, most importantly, selecting what's important for you to know, like a trusted friend.

Communicating the big picture, we decided and I still feel, is best done in print. Friendly, portable, browsable print. While you can get whatever you want online — when you know what it is — the print experience brings you issues you haven't thought about yet. Ideas you did not even know you were looking for. A broader education in your field and about your job — the Big Picture you need if you are going to lead teams, departments, or divisions. SD Times is your essential guide to what is important today and what you will need to know in the future.

When we started BZ Media and SD Times, it was a goal from the beginning to treat people well and with respect, to become every vendor's favorite customer and every customer's favorite vendor — and hopefully, your favorite magazine.

By 2017, it was time to move on, and the company was sold. It was my great delight to hand SD Times and related properties over to a new company, D2 Emerge, owned by my friends and colleagues Dave Lyman and Dave Rubinstein. Happy anniversary, SD Times, and I wish you many, many more! z
Photo Album: Eddie Correia, Alan Zeichick, Ted Bahr and Dave Rubinstein toast completion of issue #001
Ted had this thing about standing in water. This was on Long Island. We also did it in San Francisco.
Christina Purpi and Erzi Pongo, finance director Viena Ludwig, and website designer Kelly Erin Broadhurst
Katie Serignese and Mara Leonardi have an issue with Adam Lobelia
Managing editor Pat Sarica and art director Mara Leonardi at a holiday party
SD Times was 5 Below in Vegas! Dave Rubinstein, Brian Scott, Ted Bahr, and SD Times publisher David Lyman!
One of the best things about trade shows. . . the networking reception! Editors Eddie Correia and Alex Handy take a break!
Halloween was always a big deal at SD Times. So was Mardi Gras!
Photo Album: Dave (left) and Dave (right) toast the beginning of D2 Emerge in July 2017. Although it was a champagne toast, the official drink of D2 Emerge is "The Lyman": Ketel One vodka, ice and an orange slice!
The current SD Times editorial team (from right): News editor Christina Cardoza and online editors Jenna Sargent and Jakub Lewkowicz
Trade Shows: SD Times sales manager Jon Sawyer, right, on the street in New Orleans with the old BZ crew.
One of the first issues we printed with a showcase of vendors for the Agile Conference in 2017
Chris enjoyed the unicorn horn at last year's DevOps Enterprise Summit. . .
not so much
The software development times, they are always changing
BY ALAN ZEICHICK

The future of software development is simple and will take two paths. On the one hand, the bulk of applications will be created via low-code and no-code platforms, or with highly sophisticated frameworks. That way, programmers can focus on the interesting parts of the application, while behind the scenes, automation ensures that the resulting code passes quality and security tests. On the other hand, a smaller number of projects will create applications and platforms using traditional languages and techniques — fewer guardrails but more power.

If you look back over the past 20-odd years since the founding of SD Times, you'll see the evolution of platforms, languages, tools and methodologies that leads us inexorably to this point: Most application programming becomes highly structured and automated, while the tools for high-end software engineering have become more powerful. That trend will continue, as will the rise in the diversity of run-time target platforms, encompassing more IoT and more cloud.

Platform Changes
It's entirely a coincidence that the first issue of SD Times, published on Feb. 23, 2000, covered the initial release of Java 2 Micro Edition (J2ME), the first serious cross-platform environment for mobile and embedded devices. Sure, today we talk about Android and iOS, but even then, the Internet of Things was taking shape — even if we didn't know that term.

Many other platforms, application frameworks, and Web frameworks appeared in the early 2000s. Microsoft's .NET became incredibly popular for client/server development, as did the new C# language. Rails helped Ruby gain in popularity and became a powerful competitor to ASP.NET. Java itself evolved from its original "write once, run anywhere" promise to become the underpinnings of many large-scale enterprise applications (at least those not written in C#) and of small-scale Android apps. Paradoxically, given that Java was one of the first interpreted languages, Java development is now considered a lower-level, close-to-the-metal platform, more akin to the compiled C and C++, compared to scripting languages like PHP, Perl, and especially Python.

A programmer, analyst, and tech writer since the Neolithic era, Alan Zeichick is the founding Editor-in-Chief of SD Times. Follow Alan on Twitter @zeichick.
Practice Changes
While all these new languages and the accompanying IDEs (like Eclipse and Visual Studio) were evolutionary, and helped programmers create better applications faster, they weren't revolutionary. The biggest revolution in software development over the past two decades came through new methodologies.

SD Times was in place to watch the rise of agile development, which proved to be thoroughly better than the old waterfall model, and vastly superior to the old CASE (Computer-Aided Software Engineering) that I learned about as a young mainframe jockey. (Mention IBM's AD/Cycle and watch me quiver. I need a PL/1 compiler, stat!)

Many organizations were slow to understand the benefits of early attempts to impose rigor, discipline, quality, and most importantly, flexibility. That's because those early attempts were heavyweight, difficult to learn,
and expensive to implement. For example, the Unified Modeling Language (UML) helped software architects model complex systems, and communicate those models to programmers and to testers. The Rational Unified Process (RUP) provided the process to help development teams turn those models into working code. The costs, and learning curve, were generally too much to bear.
Agile Changes
Fortunately, the development world moved beyond UML and RUP. Many embraced the Agile Manifesto and a seemingly endless number of processes, frameworks, and methodologies, like XP (eXtreme Programming), TDD (Test-Driven Development), Kanban, DSDM (Dynamic Systems Development Method), Lean, and mostly, Scrum. Forgive me if I missed your favorite agile methodology.

Covering the fast-evolving world of agile was a constant theme in SD Times — until everything settled down. Everyone accepts that with modern-day applications, agile has won. We're all better for it, mainly because organizations and users can derive value from software sooner. Unless there's another Manifesto, we can say that Scrum, and variants of Scrum, have won the day.
Build Changes
SD Times chronicled another evolution, this time from the ancient make utility to the wealth of tools and philosophies around builds. The concept of CI (Continuous Integration) is about as old as our publication, dating back to the earliest days of XP. CI then evolved, in some circles, to CD (Continuous Delivery). Meanwhile, an industry evolved around selling advanced platforms and
databases for managing code and assets, checking code quality during check-ins, verifying licenses, and more. The only problem was that as software grew in complexity, the number of hours it took to actually do the check-in, run static analysis, and build the application became unreasonable. We're still struggling with this, especially as teams and code repositories become larger, and as security concerns grow.
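The check-in loop described here — check in, run static analysis, build, test — is exactly what CI servers automate, and the fail-fast behavior is why a long build becomes so painful. As a minimal sketch (the stage names and pass/fail results are illustrative, not taken from any particular CI product), a pipeline can be modeled as an ordered list of stages that stops at the first failure:

```python
# Minimal sketch of a CI gate: run each stage in order, stop at the
# first failure, and report how far the check-in got. Stage names and
# the fake pass/fail results are illustrative only.
from typing import Callable, NamedTuple

class Stage(NamedTuple):
    name: str
    run: Callable[[], bool]  # returns True on success

def run_pipeline(stages: list[Stage]) -> tuple[bool, list[str]]:
    """Execute stages in order; return (passed, log of stage results)."""
    log = []
    for stage in stages:
        ok = stage.run()
        log.append(f"{stage.name}: {'ok' if ok else 'FAILED'}")
        if not ok:
            return False, log  # fail fast, like a broken build
    return True, log

# Example: static analysis passes, but the build breaks,
# so the unit-test stage is never reached.
pipeline = [
    Stage("check-in", lambda: True),
    Stage("static-analysis", lambda: True),
    Stage("build", lambda: False),
    Stage("unit-tests", lambda: True),  # never reached
]
passed, log = run_pipeline(pipeline)
```

The fail-fast design is the point: every stage added after the build multiplies the hours a check-in takes only when everything before it succeeds, which is why teams fight to keep the early stages quick.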
Security Changes
Thank heavens. We are finally acknowledging that security must be designed into code at the outset — and that means everything from operating systems to frameworks, from run-time engines to stored procedures. Oh, wait, that's not enough. In shared cloud environments, we also need to worry about microprocessor architecture, virtual machine hypervisors, and malicious applications running on other tenants.

That reminds me of the ill-fated Software Security Summit, launched in 2005 by the editors of SD Times and our then-sister magazine, Software Test & Performance. To quote myself in a 2004 press release: Alan Zeichick, Conference Chairman of the Software Security Summit, said, "Software is vulnerable! Enterprises have spent millions of dollars installing network firewalls and Virtual Private Networks, but the real danger is in poorly written applications and platforms. Worms exploit unchecked buffers. Hackers break in through a Web site's front door: its user interface. And too little is being done to address this impending crisis."

Alas, we were too early; development organizations weren't ready to focus on secure coding and testing practices. Even Sarbanes-Oxley and HIPAA couldn't get dev teams to think about security. It would take many well-publicized breaches, and the rise of new regulations like Europe's GDPR (General Data Protection Regulation), to get everyone's attention.

Today, security is front-and-center, and many people are preaching about DevSecOps, but it's going to take time to fix all the bad code, replace all the weak tools, and train all the bad programmers. Every week I still receive warnings about authentication flaws, injection flaws, and XSS (cross-site scripting). The OWASP Top 10 for 2017 looks like the OWASP Top 10 for 2004. Have we learned nothing?

Cloud Changes
One thing we have learned is that cloud computing is real, and can offer better price-points, better performance, better scalability, and even better security than traditional data centers and colocation facilities. As one corporate CISO (chief information security officer) said to me recently, "Cloud providers have even more at stake than I do."

It's the cloud, after all, that inspired the DevOps (Developer Operations) movement, where dev teams could whip out a credit card and provision a dev/test environment faster than they could
fill out a purchase requisition. Everyone loves shifting costs from CAPEX (capital expenses) to OPEX (operational expenses). And cloud providers could keep up with the latest hardware trends, whether it’s super-dense racks, ultra-fast storage, or loads of AI-friendly GPUs (graphics processing units). Cloud computing, generally speaking, can be divided into two categories. There’s SaaS (software-as-a-service),
like when you put your payroll or inventory into a multi-tenant system. Developers find SaaS uninteresting, except when it comes to customization or integration. In that sense, SaaS is simply COTS (commercial off-the-shelf software) running in someone else's data center.

The interesting part is IaaS (Infrastructure-as-a-Service) or the nearly identical PaaS (Platform-as-a-Service). That's where the developer's own code is running on the cloud provider's servers, either on dedicated bare metal or in a shared environment thanks to virtualization. While the biggest ROI for the cloud is in deployment, there's lots of dev-team value in IaaS/PaaS, because it's easy to create, provision, deploy, and tear down server instances.

At first, organizations lifted-and-shifted existing data center applications into those clouds, doing only what was necessary to ensure that the code ran properly. Not much fun there, and from a change perspective, 2019-era IaaS/PaaS didn't look terribly different from 2000-era client/server computing — except, again, that the applications were running in someone else's data center. Did we reinvent Alan's old mainframes?

But then, techies started experimenting with cloud-native paradigms, looking at microservices and serverless functions. That, my friends, is arguably the biggest change in the past couple of decades. Before, we were changing how software was written: new low-code tools, new languages, new IDEs, new methodologies. But now, we are finally rethinking what the code itself looks like. It's the biggest change, one might argue, since the move from structured programming to object-oriented programming.

I can't wait to see what SD Times will cover over the next two decades. But I bet it'll still bifurcate: Most developers will build applications using high-level no-code and low-code tools, while a few will work close to the metal to create the latest and greatest platforms. z
We had it covered BY MARA LEONARDI
When SD Times was founded as a newspaper, the look and feel was very much defined by the old riddle, "What is black and white and red all over?" Long, dense columns of text and minimal use of color.

By 2011 the tech publication industry was changing. People were getting their news online. SD Times switched to a traditional magazine format and monthly frequency to showcase our strong suit — in-depth features. The new format, concept and glossy paper opened up a world of creative opportunity for me as a designer. I could finally put into practice the design rule of "a picture says a thousand words" instead of actually putting 1,000 words on the cover of SD Times. Here is a look at some of my favorite covers over the years.
Working with Autism — SD Times features often cover abstract concepts that are difficult to illustrate, like APIs and DevOps. However, this feature on autism was easy for me to grasp. Personally knowing people touched by this condition, I read this article with great interest and understanding. When I found this stock image of a young man with the puzzle pieces (a symbol for autism), I knew it would perfectly convey the concept.

Hey, You got your Dev in my Ops — In 2012, DevOps was a new buzzword. My editor had to explain it to me and I had to visually explain it to the reader. I photographed this one myself. We put the leftover chocolate and peanut butter in the kitchen for everyone to enjoy.

Ethics in AI — Editor-in-Chief David Rubinstein came up with this concept, but I am proud of the execution. Yes, I know that AI technology doesn't usually deal with physical robots, but they create a tangible image for the tech.
Mara Leonardi is art director of SD Times.
What makes wearables work — This cover caused a little controversy! At the beginning of the wearable technology trend, there were few stock images available and we didn’t want to plug any specific device, like fitbit, on the cover. I liked that the color scheme implied part of the answer to the question in the headline (Bluetooth). I liked that the necklace and cuff looked like wearable tech jewelry at that time. I was pleased with my Photoshop prowess in the way I realistically isolated strands of the woman’s hair blowing over the SD Times logo. We felt this cover worked on many levels. Then, we got a letter from a female reader who interpreted it quite differently. She saw a woman who was shackled and barefoot and accused us of being insensitive to women. Images have the power to make an impact. It’s not always the impact you intend. Lesson learned.
The Unicorn — I love that this cover is not only pretty, but has an unexpected image for a tech publication. Usually, there's nothing visually pretty about code, so illustrating "myth" was an easier task than creating a visual for HTML5.

The Year of (in)security — This was the one and only time we had a traditional illustration on the cover, by political cartoonist Mike Jenkins. www.capitalartworks.com.
Not your mother’s IDE — Illustrating “new” was easier than illustrating “IDE.”
Spark and Java — I simply had fun with the words and went big with the images.
Open Source: It's Not Just for Rebels Anymore — Anti-corporate, anti-establishment "rebels" are depicted with tie-dye, often worn by people who build open and free grass-roots communities.
Error 404: Company not found

Ideas come, and then new ideas replace the old ones. It's true in all walks of life, but it likely doesn't happen as quickly anywhere as it does in technology. Doorbells were just doorbells for over 100 years, until Ring reinvented them with video and entrance monitoring. Cars were gas-powered internal combustion engines for 100 years until Toyota, in 1997, came out with hybrid vehicles. Only 11 years after that, Tesla brought the first fully electric, rechargeable car to market.

Over the course of 20 years, we've reported on so many companies that had led an industry segment, only to be bought up by a larger company, or have their ideas improved upon and repositioned in a race to the top of the hype cycle. Rational. BEA Systems. Borland. Macromedia. Sun Microsystems. Serena.
BY DAVID RUBINSTEIN

Some have even weathered it all and still are going strong today. Companies we've written about since our launch: IBM, Microsoft and Oracle, of course. But also Parasoft and Altova. InterSystems and Perforce. SUSE and Micro Focus. CA and Red Hat. And many more. Good for them.

And with the rapid adoption of DevOps and cloud-native computing, whole new market segments are coming up and being filled by startups. OutSystems for low code. Gremlin for chaos engineering. ConnectALL and Tasktop for value stream management. LightStep and Datadog for monitoring (now called observability). LaunchDarkly and Split Software for feature experimentation. To name only a few.
For this 20th anniversary, it's been fun to look back at the companies we covered, and the ads they created to sell their wares. The internet has eaten so much of the industry's marketing dollars, but SD Times has maintained a loyal following of readers, and there are still companies that value creating awareness for themselves in a magazine. There's no clicking off the ad in four seconds, while not even retaining the name of the company that created it. A print ad just rests there, beside the article you're reading, creating consciousness. Not intrusive. Not in your face. Not annoying.

What follows are some of the ads that struck us as being cleverly done and eye-catching. And they have scored well with Readex, an agency that rates the effectiveness of print ads. We've had fun revisiting them. Hope you will too. z
Meet the Programmer's Boss

He's technical for sure, but the people who work for him write all the code. He's managing teams of developers now, using different languages and tools over multiple platforms. He's got a team working on a mission-critical application with a C++ engine, a Java interface to the web and a Visual Basic user interface. He's just told the programmers to stop reinventing the wheel and to use some VBX components he's found. He has a goal this year to standardize on just three code editors, down from the dozen currently in use across his department.
The CIO has just given him responsibility for bringing the company’s custom legacy apps onto the web. He’s got three middleware and two Enterprise Information Portal vendors coming in to pitch him next week. Then there’s the back-end fulfillment software — he can’t waste time diverting the team onto a custom-written app, he’ll have to buy the package and consulting services off the shelf. It’s a good thing he just got a $15,000 raise. The tech journals? Just more of the same old articles on how to debug in C. He needs a wide-angle view of the entire spectrum of applications and software development. All languages, all platforms. Vendor roadmaps, not how-to tips and tricks. He needs to know the news, the trends, the products, the alliances, and what they all mean. That’s why he reads SD Times.
The newspaper for software development managers
Technology advances and demands for speed are driving enterprises to the edge
BY JAKUB LEWKOWICZ
The popularity of edge computing has greatly increased as more industries adopt IoT devices and demand low-latency processing and real-time, automated decision-making solutions. Both the evolution of technologies surrounding edge computing, such as 5G, and the industry demand for its benefits, such as speed and low latency, have created a Goldilocks zone that is now fueling the rapid expansion of edge computing.

ResearchAndMarkets.com estimated that the global edge computing market will grow from $2.8 billion in 2019 to $9 billion by 2024, while Statista, a database company, predicted 75 billion IoT devices by 2025 in its State of the Edge report published in 2018. The report also found that manufacturing by far spent the most, at $23.5 million in 2017, followed by energy & utilities and IT & telecom at $13.5 million each. The same report projected that consumer applications and transportation & logistics would eventually take up a much larger chunk of edge spending by 2025.

Edge computing is an approach that processes data at the edge of the network. This computing method drastically cuts down on latency by shortening the distance the data has to travel and by distributing the data throughout devices and servers. Mobile telecom networks and data networks are converging into a cloud computing architecture and are increasingly looking to move computing power and storage out on the network edge, according to a whitepaper titled "The Drivers and Benefits of Edge Computing" by Schneider Electric.

IDC defines edge computing as compute power outside of centralized traditional data centers or traditional cloud infrastructure. An edge device might look like an additional server, it might be a rack-mounted server, it could be a tower server, but it looks like additional IT infrastructure that is outside of the core and is referred to as "heavy edge," which is at a very mature stage, according to Dave McCarthy, a research director within IDC's worldwide infrastructure practice focusing on edge strategies.

"[Heavy edge] is fairly mature, although I think there are some new challenges that are happening around management and security that are causing people to think a little differently," said McCarthy. This, he said, as opposed to "when you're talking about IoT where you have all sorts of new types of devices connected that never were connected before, it's more of an aggregation of things. It's still very new and people are still trying to understand best strategies for how to deploy, manage and get the most out of those solutions."

Edge computing also encompasses all of the software that is used to manage a distributed network of devices, as well as the endpoints themselves. Edge devices use different networks to connect, and most of it depends on industry, according to McCarthy. For example, transportation requires a lot of mobile assets, so it may be connected by wireless technology such as 5G. Meanwhile, IoT in a retail store would use RFID tags or beacons and connect throughout a store using WiFi. Edge can even encompass devices that use wired connectivity.

"Using edge computing to automate 20 cameras in the store will find more than humans, who are going to miss something that the computer won't. And so it all goes back to just being able to make quicker, more accurate decisions without having to again go back and send all this data to a cloud or somewhere else," said McCarthy.
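McCarthy's point about making decisions locally instead of shipping everything to the cloud can be sketched in a few lines of code: an edge node processes raw readings on-site and forwards only a compact summary plus any anomalous values upstream. The threshold, field names, and message shape below are hypothetical, chosen purely for illustration:

```python
# Sketch of edge-side processing: raw sensor readings stay local;
# only a summary and the readings that look anomalous travel upstream.
# The threshold and payload fields are hypothetical.
from statistics import mean

def summarize_at_edge(readings: list[float], threshold: float) -> dict:
    """Reduce raw readings to what is worth sending to the cloud."""
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),           # how many samples were seen
        "avg": round(mean(readings), 2),  # local aggregate
        "anomalies": anomalies,           # only the interesting data travels
    }

# Six raw samples stay on the device; one small dict crosses the network.
payload = summarize_at_edge([21.0, 21.5, 22.0, 35.5, 21.2, 20.8],
                            threshold=30.0)
```

This is the latency and bandwidth trade at the heart of the article: the round trip to a central cloud is paid only for the summary, not for every sample, so decisions based on raw data can happen at wire speed on the device itself.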
The advent of 5G
Last year, a report by Allied Market Research attributed the growth of the edge computing market to the advent of 5G, which is hailed as being 10 times faster than 4G, as well as an increase in the number of smart applications, the rise in load on cloud infrastructure, and
the emergence of various frameworks and languages for IoT solutions. “5G opens up a lot more opportunities for different types of data and devices to be connected or maybe connected in different ways or share different kinds of data across cell networks that maybe today is certainly not as ubiquitous,” said Kuba Stolarski, a research director within IDC’s Enterprise Infrastructure practice. Realizing its potential, large companies such as Cisco, HPE, Huawei, IBM, Dell Technologies and Nokia now comprise most of the edge computing market share, according to a post by Microsoft. On the development front, Microsoft has been big on providing solutions
for the intelligent edge through Microsoft Azure Sphere, a high-level application platform that includes integrated communication and security features. “Internet connectivity is a two-way street. With these devices becoming a gateway to our homes, workplaces, and sensitive data, they also become targets for attacks. Look around a typical household and consider what could happen when even the most mundane devices are compromised: a weaponized stove, baby monitors that spy, the contents of your refrigerator being held for ransom,” Microsoft wrote in a post. The company said it wants to prevent another attack like the 2016 Mirai botnet attack where roughly 100,000 compromised IoT devices were repurposed by hackers into a botnet that effectively knocked the U.S. East Coast off the internet for a day. Microsoft Azure Sphere includes
Azure Sphere certified microcontrollers (MCUs) that combine both real-time and application processors with Microsoft security technology, Azure Sphere OS, which combines security innovations from Windows, a security monitor and a custom Linux kernel, as well as the Azure Sphere Security Service.
The value of the edge
Many large enterprises now understand the value of edge and immediately understand why they need these functionalities in different places, according to Aaron Allsbrook, the co-founder and CTO of ClearBlade, an edge computing platform for enterprise IoT. They want to take some of the data that is typically processed in the cloud and minimize latency, add additional compute power to handle the troves of data, and allow for customizability.

"The edge differentiates itself in that it still has all the capabilities, meaning you can run your microservices, you can do your stream processing, you can store data, you can even visualize, but it now allows users to push it and run it on a very small computer in the field," Allsbrook said.

"Often you want a subset of your solution. You want just the part that knows how to pull an OPC off of your injection molding machine or you want just the piece that knows how to talk to the equipment at a railroad crossing. And so you only bring a subset of that application and we allow you to, from your enterprise or from your cloud platform, push down just a subset of your applications so that you can do pieces and portions of your application and distribute that IoT load," Allsbrook continued.

Allsbrook said that a big focus is on automating and simplifying the process now that all of these things are connected. He added that the software needs the most improvement to figure out how to distribute all of the compute power effectively.

"A lot of progress has been made definitely in the hardware side and in making that more available in the different permutations and configurations. We may even have too many right now to really understand how to make use of
it all. But figuring out how to get all the right applications and how to leverage all of this new compute has a lot of growth left in it," said Allsbrook.

"While integrating edge compute into smart homes may be slowing now, we're instead going to see it start to hit more of the enterprise use cases, which have been in a lot of pilot mode quite frankly," Allsbrook said. "People are beginning to put together these technologies, and are very loudly asking to move very, very quickly with rolling them out for their connected products, for their field deployments or for tracking their shipping."

Allsbrook added that approaching edge from a more holistic architecture view is the most important way to understand the technology moving forward. "IoT solutions are coming together with a lot of different stakeholders and vendors in, and it's also coming in with a lot of business units trying to tie themselves back together. So it takes a lot of collaboration to pull these things together," said Allsbrook. "We need to have kind of the long term understanding that this stuff is going to continue to move fast. We need to be ready to solve the bigger problem when we get into how we are going to do enterprise IoT across everything." z

Edge security is a paradox

The security of edge devices presents both a major challenge and an opportunity for building more modern edge security practices. Edge devices exist outside of the protections that IT data centers provide, according to Patrick Sullivan, the global director of security at Akamai. One of the most prominent concerns is the physical security of the devices, which are more vulnerable to malicious attacks and mishaps of all kinds than typical office equipment and technology safely held within corporate walls.

However, because edge computing distributes processing, storage, and applications across a wide range of devices and data centers, it's difficult for any single disruption to take down the network. "This is a very impactful architecture for people as they're building modern security," said Sullivan. "So that edge model allows you to kind of have a homogeneous level of visibility and protection regardless of where that computing is. If it's across a couple of cloud providers and a couple of colocation or data centers, that edge architecture allows you to accommodate all of that compute form factor and it gives you tremendous architectural flexibility."

A key aspect of modern security is to detect, mitigate and track malicious behavior as close to the threat source as possible. There's less data going out to a centralized location and through communication lines, whether it's fiber-optic or telephone cables. So, there's less risk, because the data isn't leaving the edge and going across the internet, which could prove to be highly beneficial for industries that have to transmit highly sensitive information, such as the health, finance, and government sectors, according to Sullivan. Sullivan added that through the reduction of round trips where the data has to travel, and with the optimization of TCP and HTTP protocols, the edge model could avoid a tradeoff between security and speed.

"It cuts across commerce, media, government, financial services. It's sort of becoming the de facto model for at least web application security and denial-of-service mitigation," Sullivan said.

Another reason why organizations look toward an edge security model is the difficulty of hiring talent with expertise in things like web application security or mitigating bots that exist on the internet. Instead, those companies look to deploy their security on an edge model and also consume it as a managed service.

"I think the edge security model is really the only viable architecture to stop a truly massive DDoS attack," Sullivan said. "If you build an edge compute model, you can tap into that most scalable part of the internet. And then what you do is you fight DDoS off before it can aggregate and collect and really grow."

"If you have a centralized application, you can access it with millions of devices and try to break it. If you're doing a DDoS, which is the most popular attack, okay, but if you take this instance of an application, now you have 1,000 of those. So it's going to be way harder to break it because the concentration of devices is going to be lower," said Lior Fite, CEO of Saguna. "It's actually increasing the surface, so you need to concentrate a lot more traffic to try and break it."

Sullivan added that the edge security model can learn something from DevOps processes. "I think there would be integration at the edge to sort of a DevOps process. So that's a big focus for developers, making sure that the edge can be programmatically controlled via the APIs and configured as code. So I think that's something that we've seen evolve over the last 5-7 years. And now as you publish, it updates your application. That same workflow can update the edge to any changes that need to be made to your security policy," Sullivan said. z

—Jakub Lewkowicz
BY LISA MORGAN

SECOND OF THREE PARTS

There are three ways to create a DevOps culture: by default, by design or iteratively. The three are not necessarily mutually exclusive because DevOps tends to be a learning experience that takes considerable time. For example, talent marketplace TopCoder has had a formal DevOps practice for the past three or four years and started a DevOps-like practice in 2007. As part of its transformation, the company moved from an on-premises data center to the cloud and rebuilt its applications as cloud-native applications. More recently, it changed its tooling and updated its DevOps practice to include security and compliance.

DevOps is ultimately a combination of a culture, processes and tools, although not everyone sees it that way. In fact, there are a lot of misconceptions about what DevOps is (as explained in 5 DevOps Myths, page 36).

"DevOps is a set of practices that give organizations and teams unified, cross-functional representation, shared accountability from development, operations, the business and everybody in between," said Justin Rodenbostel, VP of delivery at digital transformation agency SPR. "People work together throughout the development process to create higher quality software solutions in shorter time frames. We like to think of it as an extension of Agile."

Charles Betz, principal analyst at Forrester, said that "building a DevOps culture" is a misnomer, however. "I think a subtle misconception is that you can somehow start with culture. This may be contrary to what you're hearing from others because culture is important. But I believe, as do a lot of other folks, that culture is a lagging indicator," said Betz. "You do change culture by changing practices. You do change culture even by changing tools."

Changing culture takes time because it involves changing mindsets and ways of working. While one can start using a new tool today, it may be several years before an organization believes it has achieved any kind of maturity. In fact, many of the companies interviewed for this article series have been practicing DevOps since 2015 or longer and consider themselves to be either at the beginning stage or the "awkward teenager" stage.

"You don't build the culture, it's more emergent, more organic," said Forrester's Betz. "It gradually manifests as the sum total of all the interactions people are having and how those interactions are being transformed, because they now have new operational capabilities that they didn't have before and they have permission to use those capabilities."

Not everyone agrees with that outlook, however. In fact, some practitioners and consultants advise starting with a cultural end state in mind so it's clearer how processes, practices, and tool sets need to evolve. That said, even with a planned transformation, pivots tend to occur along the way as organizational priorities, competitive factors, customer expectations, internal dynamics and technology all change.

"Now we're working more closely with the business. We're communicating better, we're adapting to change better, and we're delivering what the business is expecting in a timelier fashion," said SPR's Rodenbostel.

How roles are evolving

Forrester recently published a report entitled "The Future of Technology Operations." In it, Betz observes the flatter organizational structure of modern technology operations versus the traditional hierarchical structure. The flatter or matrix structure of the organizations impacts the roles of individuals and team dynamics.

"The idea that somebody can specialize is falling by the wayside," said Betz. "You get to the T-shaped people who each have a specialty and a certain element of breadth. You have more line of sight, a lot more interest, a lot more commitment to the mission outcome of a team and you have somewhat less commitment to your professional identity."

That can be an uncomfortable transition for people whose self-worth is tied to a particular role. "[Traditionally,] goals and responsibilities have been based on a person. They have to be based on the success of the team," said Rodenbostel. "When roles are being invented to align with DevOps, there is some discomfort because it's more of an environment of
shared accountability. Whereas when you operate in silos, it's much easier to be successful as an individual."

DevOps and CI/CD pipelines provide visibility into what's happening throughout the development process, which has three effects. The first is providing individuals with insight into their own progress. The second is facilitating a sense of shared ownership and responsibility. The third is making people accountable for their work.

At talent marketplace TopCoder, deployment engineers became DevOps engineers. And at messaging services company Gupshup, developers have begun taking responsibility for writing deployment code. QA has moved from performing testing to maintaining automated test suites. Dominic Holt, CTO of fractional CTO company Valerian Tech, said he first goes to the deployment files and automation scripts before writing a single line of code so he knows the code will work in whatever production environment he wants.
Teams are evolving Like Agile, DevOps is associated with cross-functional teams. “One of the foundational pillars of DevOps is systems thinking,” said CTO consultant Emad Georgy.
"[Instead of asking] whose fault is it, systems thinking looks at the whole system. If something went wrong with the pipeline, no matter who owns it, let's look at the system as a whole and understand what the actual root cause is and solve it. A lot of people don't do that. A lot of people are not ready for that kind of thinking."

Teams are self-organizing at business and technology services firm Orion Business Innovation. "It's about empowerment. They have motive, they have purpose," said Pradeep Menon, EVP at Orion.

Trust is a hallmark of an effective DevOps team. Specifically, members of the team trust each other. There is also a technical aspect of trust. Organizations are placing trust in that which is automated, whether it's policies, deployments or a database's ability to self-heal.

"I think one way to encourage behavioral change is by doing regular measurement and responding to the results of that measurement," said SPR's Rodenbostel. "Give the team the opportunity to improve when things aren't going well and celebrate when things are going right."

Another wise move is aligning financial incentives with the desired behavior, which not all organizations do. Part of the problem is that business leaders and HR don't always understand the subtleties of IT. The disconnect can work against IT cultural transformation. At TopCoder, everyone can see the statistics, which makes it easier to see how code changes impact revenue, for example.
Governance is evolving

DevOps involves a lot of automation throughout the Software Development Life Cycle (SDLC). With infrastructure as code and even compliance as code, governance can be integrated into the pipeline in a manner that does not add an unwieldy amount of overhead.

"If you have five product lines, all with their own product teams and release cycles, and they're all released in their own way, [you may benefit from] establishing one standard DevOps pipeline where everybody builds, integrates, tests, deploys and monitors the same way because everyone's using the same pipeline," said Georgy. "The security guys are elated because they no longer have to look at five products that all have their own custom builds; they can secure one standard pipeline."

Forrester's Betz said governance needs to switch from how to what. "You need to govern the promises your team is making," said Betz. "It takes a certain level of trust and holding teams accountable for what they said they were going to do. It's basically elevating the concept of internal SLAs but then not really meddling beyond that, unless you need to do forensics on why something really went badly."

Of course, that approach is at odds with auditing, which demands an explanation of exactly what happened, why it happened, when it happened, and who was involved. "The trouble is that work is getting done in a much more collaborative, less deterministic, fuzzier way. So, when an auditor comes in and says show me your standard flow chart for how this work is supposed to happen, there is no flow chart," said Betz.
How to get started

If you haven't started your DevOps journey yet, it's wiser to start with a pilot project and learn from that instead of attempting a massive transformation effort from the start. "If there are multiple products in the portfolio, [identify] which could immediately benefit from a DevOps transformation," said Georgy. "Then go after quick wins."

One thing to keep in mind is that even if a pilot went well, it may not scale as-is perfectly to another product or team. The approach may have to be adapted to suit the goals of the team, the people on the team, and the business's expectation of the team.

"It was relatively easy to implement DevOps for a specific product where it was the only way to achieve our desired outcome," said Nirmesh Mehta, CTO at Gupshup. "It was much harder to expand it to other areas where there continued on page 37 >
5 DevOps Myths BY LISA MORGAN
Many organizations claim to be doing DevOps, but is that actually the case? For one thing, just about everybody has their own definition of DevOps, and that interpretation tends to impact how DevOps operates within a team or company. Following are five of the misconceptions.
#1: DevOps = Dev + Ops

On some level, parsing the term seems logical. For the past 15 years or so, the software industry has been saying that Dev and Ops have to work together as a cohesive unit to deliver higher quality software faster. Interestingly, the belief that the definition of DevOps is Dev + Ops falls apart on several levels. In fact, Charles Betz, principal analyst at Forrester, says this is the #1 misconception. "We get a lot of people calling in and that's their level of understanding," Betz said. "They haven't challenged themselves to really dig into where the whole movement came from."

Simply parsing the term can cause issues. For example, testing is obviously missing from the term. While DevOps teams do testing, they sometimes treat it like a second-class citizen and as a result, quality is not as high as it could be. "Without testing, you lost a vast majority of the benefits," said Dave Messinger, CTO at talent marketplace TopCoder. "We've put a lot of emphasis on quality assurance and testing."

Clearly, security isn't included in the term either. Because quick CVE scans aren't enough, some teams have embraced DevSecOps to ensure that security is addressed adequately throughout the lifecycle.

#2: It's all about tooling

Many types of cultural change are confused with tool procurement. DevOps, Agile, and even digital transformation are often viewed this way. If one simply procures a tool or set of tools, then, from that time on, the organization is "doing DevOps."

"We dive into conversations about do we do Jenkins, do we do Docker, do we do Swarm and it just becomes a tool conversation. And then we have these philosophical battles about which tools are the best," said CTO consultant Emad Georgy. "Foundationally, they're just not ready for the cultural changes [and] they have no idea how to communicate the business value of what they're doing."

Consultants often say that their clients tap them for tool recommendations before thinking about the larger picture. "A big mistake teams make is to go out and look at the entire ecosystem with hundreds of thousands of tools, trying to build the perfect solution," said Dominic Holt, CTO of fractional CTO company Valerian Tech. "By the time you've built your DevOps pipeline, you're going to change at least 50% because the ecosystem is moving so fast."

Some companies have spent 12 to 18 months building those pipelines only to discover they didn't do it right the first time, so they spend more time redoing it, wasting two and a half years. "If you haven't done this before, talk to people who have done it," said Holt. "It probably took me six to 12 months to become familiar enough with these tools that I realized I didn't do anything correctly."

#3: DevOps increases error rates

One of the benefits of DevOps is fast feedback. However, if failure is not tolerated and the pipeline is automated, it may appear as though code quality is worse than before. "People don't realize that DevOps involves a lot of failing and learning," said Georgy. "Some organizations aren't ready for that information. They're seeing it as ever since we adopted a DevOps culture, we're getting more errors than we had before. What they don't realize is the errors have always been there."

#4: It's about automating part of the SDLC

People often talk about the importance of automating a pipeline, but some view it as synonymous with DevOps. "People confuse components or ingredients of DevOps with DevOps like somebody seeing a tree instead of a forest," said Justin Rodenbostel, VP of delivery at digital transformation agency SPR. "For us, the most important part is the unified cross-functional aspect of working together. The communication and process aspects. We see tools as the support system."

#5: A DevOps engineer will enable a DevOps transformation

Some organizations try to hire DevOps engineers to effect a DevOps culture. Others rebrand operations engineers as DevOps engineers and then claim to be doing DevOps. In a top-down transformation, the newly hired DevOps engineer may not be empowered to spearhead a DevOps transformation. "You need champions because it's very hard, especially in larger organizations, to do these cultural shifts, especially when things are very much imprinted in the way things are done," said Valerian's Holt.

In a bottom-up transformation, the person may fail for people-related reasons, either because people don't want to change, or they resent the new role which has been charged with changing the status quo. z
Learning about software progressively
Businesses use limited-scope releases to mitigate risk
Progressive delivery is the natural extension of continuous delivery, but it refines what it means to "deliver." Unlike the "big bang" of an all-or-nothing release cutover, progressive delivery enables the business to gradually expose new functionality to limited numbers of users to assess the impact on user behavior and system health before expanding the release to the entire user base.

RedMonk founder James Governor coined the term progressive delivery in 2018 to describe a basket of related skills and technologies for gradual releases that reduce risk in application delivery. When he announced his 2020 research topics recently, progressive delivery was at the top of his list.

It's great to experiment to ensure your software works as intended, yet progressive delivery is motivated by business factors — if the company releases software and business metrics go down, it's better to know that before a wider release.

"I want to watch it to limit the blast radius, so as I'm ramping it up, I want to be able to know whether it's going well or not," explained Dave Karow, continuous delivery evangelist at feature delivery platform provider Split Software. "I want to learn before I get to 100%. You know, one of our senior engineering leaders used to be at a very large file-sharing provider and he admitted that even when they used gradual rollouts, they didn't tend to learn about [issues] until they were past the 50% mark. That's painful compared to finding problems earlier in a gradual rollout. You want to set yourself up to learn about unforeseen issues as quickly as possible in production, and it would be nice if you didn't have to hurt most of your users — if not all of them — before you figured out you had an incident."
These patterns have been used by the largest e-commerce leaders for years. At Walmart, they use progressive delivery for two main purposes: one is called "test to learn," and the other is "test to launch." Test to learn is essentially A/B testing: which version of the software yields more of the desired user behaviors, such as buying more items or signing up for premium services. Test to launch, on the other hand, is more like application monitoring, where you're watching a gradual rollout of the application to see that not only systems metrics, but also business metrics, don't decline as the software is given to more users.

The ability to effectively roll out software to small cohorts before wide release to assess impact on the business comes with the assumption that business and development people are on the same page. Karow said he's seeing the greatest success with progressive delivery in Agile and DevOps teams.

"Those who most embrace this and get the most value out of it are already sort of in a two-pizza team," Karow said, referring to Amazon CEO Jeff Bezos' belief that if a team is so large that you can't feed it with two pizzas, it won't work effectively. "I have everybody related to this project within shouting distance of each other in a room, or at least that tight in terms of a remote teaming thing, so that there's no specification being written by one group and coding being done by another group and testing being done by another group and deployment being done by another group. Everybody knows what we're building this week, and what we are trying to accomplish."

These teams, he added, understand they might not get it exactly right the first time, so they're looking to see how they can iterate quickly to improve it. "We don't want to mark it 'done' because we shipped; we want to mark it 'done' because we moved the number."

Split's feature delivery management software is the means by which people work together to experiment with and deliver quality software that meets business and user requirements. "If [companies] are still solving fundamental problems to get people talking to each other and get the business and the developers on the same page, we're not a silver bullet for that. We're the means by which to actually do work together, not to get people to work together."

With progressive delivery, there is a natural progression of cohorts you want to expose to new code in production, Karow said. "The first cohort you want to expose in production to the new code is your developers and your testers. This gives them one last chance to be sure everything works as expected in the actual production environment, without any risk to users. Then there's dogfooding. If I use my own product, then I'm going to go from my developers and testers to my non-developer users. Then I might go out to my friendlies, or my freebies. Finally, I'll begin rolling it out to the rest of my general population." z
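The cohort progression Karow describes can be sketched as a deterministic percentage rollout: a stable hash of the user ID maps each user to a bucket, and a feature is exposed when the user belongs to an enabled cohort or the bucket falls under the current rollout percentage. The hashing scheme and cohort names below are illustrative assumptions, not Split Software's actual SDK.

```python
# Hedged sketch of cohort-based progressive delivery. Hashing the user ID
# gives a stable bucket in [0, 100), so ramping 5% -> 25% -> 100% only ever
# adds users; nobody flips back and forth between old and new code.
import hashlib

def bucket(user_id: str, feature: str) -> int:
    """Deterministic bucket 0-99, stable across calls for the same user."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(user_id: str, feature: str, rollout_pct: int,
               enabled_cohorts: set, user_cohort: str = "") -> bool:
    # Named cohorts (developers/testers, dogfooders, friendlies) see the
    # feature before any percentage of the general population does.
    if user_cohort in enabled_cohorts:
        return True
    return bucket(user_id, feature) < rollout_pct

# Ramp plan: cohorts only at 0%, then widen the percentage as metrics hold.
assert is_enabled("alice", "new_checkout", 0, {"dev_testers"}, "dev_testers")
assert is_enabled("bob", "new_checkout", 100, set())
```

Watching system and business metrics per bucket as `rollout_pct` grows is what "limiting the blast radius" looks like in code: a regression at 5% hurts one user in twenty, not everyone.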
Still testing like it’s 1999? BY MARK LAMBERT
A walk through the improvements (or lack thereof) in testing over the last 20 years, and three predictions that might help you strategize around where we're going to end up 20 years from now.

Let's flash back to New Year's Eve 1999, when we were popping corks to celebrate the entrance of a new millennium and, as the ball dropped, the fact that Y2K hadn't plunged us into the Dark Ages. All the while we were blissfully unaware that the dot-com bubble was about to burst (even though it was obvious in hindsight!).

Over the last 20 years, we have seen massive changes to how we design, develop, and deploy applications: from monolithic applications to microservices, and from desktop applications and static web applications to highly interactive web and mobile applications. The changes happened incrementally, but the cumulative impact has been dramatic.

One of the key mantras over the last 20 years has been the drive to accelerate delivery ("get it out sooner than the competition") powered by the rise of Agile+DevOps. The Phoenix Project (by Gene Kim, George Spafford, and Kevin Behr) became required reading for anyone trying to transform their software delivery process and, while some organizations have achieved the goal of "Continuous Delivery," many are still struggling with adopting some of the basic principles.

Organizations that are succeeding in 2020 have transformed not only their development activities but also how they approach testing, starting with when they first think about testing. Traditionally, testing was an afterthought, something we did when all the development was done. The leaders in 2020 are thinking about testing

Mark Lambert is VP of Products at Parasoft.
THEN vs. NOW: Flashback to 1999 vs. The New Norm for 2020

- THEN: Test after all development is complete. NOW: Fully automated test suites execute continuously; teams leverage TDD and BDD to build testing into the process.
- THEN: Test everything end-to-end. NOW: Follow the testing pyramid, with a solid foundation of unit tests; focus end-to-end tests on business-critical processes.
- THEN: Testers focus on manually running regression tests. NOW: Automated testing enables testers to focus their expertise on exploratory testing.
- THEN: Siloed development and test organizations (i.e. Testing Center of Excellence). NOW: Integrated dev and test teams (i.e. Testing Community of Practice).
- THEN: Physical test and stage environments are shared across teams. NOW: Dynamic cloud infrastructure (+ Docker) with Service Virtualization to eliminate constraints in the test environment.
from the very beginning, leveraging TDD and BDD practices to build fully automated testing into the development process. These teams also recognize that you can't test everything end-to-end, and many look to the Testing Pyramid (advocated by Agile experts Martin Fowler and Mike Cohn) as a strategy for building a scalable test automation practice and freeing the testers to spend time on valuable exploratory testing, rather than tedious manual regression testing.

The structure of the teams themselves has also changed. If you still have a test team that is airdropped into a project to do the testing, this needs to change. The only way to succeed with Agile+DevOps is to have the roles integrated together into a unified team that is working to
achieve the same goals.

Lastly, we get to the test infrastructure. Gone are the days of the monolithic application, and so are the days of a test environment that you could deploy on a single machine and share across the teams. Today's modern architectures require access to multiple back-end systems, and all the teams require access to them "now." Leveraging both cloud infrastructure and techniques such as Service Virtualization, teams can gain control over their complex test environments and test on demand, anytime, anywhere.
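The pyramid's base, and service virtualization in miniature, can be sketched like this: fast unit tests on a pure business rule, plus a stub standing in for a shared back-end so an integration-style test runs without the real system. The business rule and fake service below are hypothetical examples, not any specific tool's API.

```python
# Minimal sketch of the testing pyramid's foundation. Unit tests exercise
# the business rule directly; the stub class exercises the integration seam
# without needing the shared back-end to be available "now".

def apply_discount(total: float, pct: float) -> float:
    """Business rule under test: percentage discount, never below zero."""
    return max(0.0, total * (1 - pct / 100))

class FakePricingService:
    """Stands in for a back-end the team can't always reach -- a tiny
    hand-rolled version of what service virtualization tools automate."""
    def discount_for(self, customer_tier: str) -> float:
        return {"gold": 20.0, "silver": 10.0}.get(customer_tier, 0.0)

# Unit tests: isolated and quick, so they form the bulk of the suite.
assert apply_discount(100.0, 20.0) == 80.0
assert apply_discount(50.0, 200.0) == 0.0      # clamped at zero

# Stub-backed test: exercises the wiring without the live system.
svc = FakePricingService()
assert apply_discount(100.0, svc.discount_for("gold")) == 80.0
```

Because the stub is deterministic and local, these tests run in milliseconds on every commit, leaving the scarce end-to-end environment for business-critical flows only.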
Looking to 2040

Now that we are in 2020 and at the dawn of a new decade, what does the future hold? Certainly AI is going to play a large part in the way we both
develop and test applications. Many would say we are still in the Dark Ages for automated UI and manual testing, and many AI-powered testing technologies have been released over the last couple of years. This is sure to grow over the coming years, but where will this ultimately end up? Autonomous testing is my prediction, where the AI does all the mundane work, taking care of both test creation and execution, and the human reviews the results. AI will continue to be assistive, augmenting the tester and making the tester's insight and domain expertise more valuable.

But what about testing the AI-infused applications? More and more organizations are using AI to help gain insight into their business and streamline decision-making processes. But how do you test AI? While traditional approaches of 'sample datasets' are a great starting point, this is still an area of active research. Due to the non-deterministic nature of evolving AI systems, traditional testing techniques will also need to evolve. We will need to change the way we think about testing, moving away from the concept of a binary pass/fail status to rather introduce concepts of gated acceptance (guardrails, if you like), used to determine if the algorithm is drifting too far off course.

Lastly, the non-deterministic behavior of applications is going to get even more complicated with edge computing. Dedicated devices that run at the edge are going to change the way we deploy software, the same way that mobile devices changed the way we consume… everything.

Evolve today and plan for tomorrow. As we head towards the next 20 years, the changes for sure are going to be incremental, so don't worry. Take stock of your current state and prepare for the future … and keep reading SD Times for another 20 years of industry insights … Happy testing! z
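The gated-acceptance idea could look something like the following sketch: instead of asserting a binary pass/fail on exact outputs, a guardrail checks that a rolling quality metric stays inside an agreed tolerance band around a sign-off baseline. The thresholds, window size, and class names are illustrative assumptions, not an established testing standard.

```python
# Hedged sketch of "gated acceptance" for a non-deterministic system: the
# gate fails only when the rolling metric drifts outside the agreed band,
# tolerating the run-to-run variation a binary pass/fail would reject.
from collections import deque

class DriftGuardrail:
    def __init__(self, baseline: float, tolerance: float, window: int = 50):
        self.baseline = baseline            # metric observed at sign-off
        self.tolerance = tolerance          # allowed drift in either direction
        self.scores = deque(maxlen=window)  # rolling window of recent scores

    def record(self, score: float) -> None:
        self.scores.append(score)

    def within_guardrails(self) -> bool:
        if not self.scores:
            return True                     # no evidence of drift yet
        rolling = sum(self.scores) / len(self.scores)
        return abs(rolling - self.baseline) <= self.tolerance

guard = DriftGuardrail(baseline=0.92, tolerance=0.05)
for score in (0.93, 0.91, 0.90):
    guard.record(score)
assert guard.within_guardrails()            # rolling mean stays in the band
```

The rolling window matters: a single noisy score doesn't trip the gate, but a sustained slide in the average does, which is exactly the "drifting too far off course" condition the prose describes.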
< continued from page 33
was no burning requirement and the possibility of a lot of disruption."

One question is whether it's better to have dynamic teams or static ones. "One of the most important things that the modern manager is starting to realize is that a high-performing, cross-functional team is a very precious resource, and you don't just throw people on or off those teams," said Forrester's Betz. "You don't just take Mary off the team and replace her with Myra because Myra knows Python."

If Mary has been a contributing member of a team and understands the business problems the team is attempting to solve, then it's much better to teach Mary Python, assuming she can wrap her head around it, Betz said.

SPR's Rodenbostel recommends considering the product that's being worked on, the tools already in use, people and their skills, as well as organizational constraints.
"If you take those things into consideration, the results may not be as immediate as they would be if you came in and made a drastic pivotal style change on a specific product and a bigger ecosystem, but the longer-term value, like actually changing people's habits and making the cultural shift, is what we think is more successful," said Rodenbostel. "It's better to keep in mind that DevOps is a journey and not a one-size-fits-all solution."

Vladyslav Gram, head of DevOps at digital solutions company Ciklum, stressed that the people affected by the shift need to be involved from the start. "We need to make sure everyone follows [our DevOps] process. If someone doesn't like it, if someone thinks there are problems with this process, we need to hear them and we need to fix their problems," said Gram. "If we don't have this type of integrity, then we don't have a DevOps process at all." z
How to measure success

Continuous evaluation is essential for continuous improvement. However, it may not be clear how to measure the effectiveness of a DevOps culture, since culture is intangible. Tangible metrics, though, can provide insight.

Number of deployments. This is measured within a specific timeframe, such as the number of deployments per day. Release frequency is an obvious one. Many teams have embraced DevOps with the goal of increasing delivery speed. However, faster delivery should not be achieved at the expense of quality.

Idea to production velocity. Is the time from business idea to production software decreasing? While this may be reflected in faster release velocity as well, this metric considers the business aspect alongside the technical aspects.

Quality improvement. Quality improvement metrics include defect rates, percentage of successful builds, and percentage of successful deployments, all of which should improve over time.
Mean time to recovery. Infrastructure as code and autonomous databases are improving mean time to recovery. However, teams still need to track their progress here.

Business impact. While technical metrics can help explain the team’s productivity, speed, or quality improvements, ultimately the purpose of DevOps is to speed time to value delivery. “Profitability, customer satisfaction, cost reduction and risk reduction are the big four from a CFO and CEO perspective,” said Forrester principal analyst Charles Betz. “The thing I like about DevOps is that it’s a big step towards actually getting line of sight from technical operations to those business metrics.” z
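Several of these metrics reduce to simple arithmetic over deployment and incident logs. A minimal sketch, using hypothetical log data in place of a real CI/CD system or incident tracker:

```python
from datetime import datetime

# Hypothetical deployment timestamps; in practice these would come
# from a CI/CD system's deployment history.
deployments = [
    datetime(2020, 2, 3, 9, 0),
    datetime(2020, 2, 3, 14, 30),
    datetime(2020, 2, 4, 11, 15),
    datetime(2020, 2, 5, 10, 0),
]

# Each incident is (detected, recovered), as an incident tracker would record.
incidents = [
    (datetime(2020, 2, 3, 15, 0), datetime(2020, 2, 3, 15, 45)),
    (datetime(2020, 2, 4, 12, 0), datetime(2020, 2, 4, 12, 15)),
]

def deployments_per_day(deploys):
    """Average number of deployments per active day."""
    days = {d.date() for d in deploys}
    return len(deploys) / len(days)

def mean_time_to_recovery_minutes(incs):
    """Average minutes from incident detection to recovery."""
    total = sum((end - start).total_seconds() for start, end in incs)
    return total / len(incs) / 60

print(deployments_per_day(deployments))          # 4 deploys over 3 days
print(mean_time_to_recovery_minutes(incidents))  # (45 + 15) / 2 = 30.0
```

Trending these numbers over time, rather than reading them in isolation, is what makes them useful for continuous evaluation.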
038-42_SDT032.qxp_Layout 1 1/17/20 3:20 PM Page 38
BY CHRISTINA CARDOZA
Enterprises want to deliver software fast in order to keep up with market demands and stay competitive, but at the end of the day it doesn’t matter how fast they deliver software if it’s not pleasing to the end user.

“Users are the ones who are going to be interacting with your application, so you want to make sure they get the best and correct experience they are looking for,” said Max Saperstone, director of software test automation at consulting company Coveros.

The way to do this is to perform UI testing, which ensures an application is performing the right way. “The make or break of an application is the user’s experience within the UI. It’s more critical than ever for the UI portion of the application to be functional and behave as expected,” said Ashish Mathur, director and architect of testing products at HCL Software, a division of HCL Technologies.

One way of doing this is manually testing all the ways and scenarios users will be interacting with the application, although this can be time-consuming and costly. The alternative to manual testing is automated testing, which automatically executes tests for testers so they can focus their time and effort in other areas. In today’s digital age, organizations have to rely on automated testing in order to reduce those costs and efforts, explained Mark Lambert, vice president of products at the automated software testing company Parasoft. “Organizations realize manual testing efforts don’t scale to the pace of delivery, so there’s an increased reliance on test automation to help with validating the increased cadence of delivery,” he said.

But that doesn’t mean manual testing becomes obsolete. Lambert added that a successful UI testing strategy actually includes both manual and automated testing techniques. “When you think about UI testing, you want to think about what to automate and what to do manually,” he said. “Humans are very, very good at understanding if something feels right. They are very good at randomly exploring intuitively different execution paths and doing negative testing. What humans are not very good at are repetitive tasks. We get bored very quickly and if we are doing the same thing over and over again, we can often miss the details.”

However, manual and automated testing alone are not enough to properly validate the UI.
Solving UI testing problems

Despite the fact that test automation is supposed to help speed things up, organizations still struggle to achieve high levels of test automation and run test automation continuously, Parasoft’s Lambert said. Once organizations get started with test automation, the number one problem they run into is ongoing maintenance. That is because things are constantly changing, making the test environment very complex. Lambert explained that if testers don’t have reliable ways of locating elements on the page or handling wait conditions, the resulting breakage can cost days in maintenance.

“As the application changes, the coverage also needs to change. The new areas of how the application behaves and flows need to be accommodated for that change,” HCL’s Mathur explained.

To overcome this, Lambert suggested adopting the Page Object Model, which promotes reuse across test scripts, resulting in more maintainable tests. “When the UI changes, you only have to change it in one place and not in two, 200, 2,000 or whatever the number of tests you have that are touched by that UI change,” he said.

New artificial intelligence-based tools are also beginning to come out to address that pain point, making it easier
to recognize changes and automatically suggest or make updates to tests so they can still run, Coveros’ Saperstone explained. For instance, the Parasoft Selenic solution injects AI into the UI testing process to help analyze tests, understand where problems and instabilities are, and apply self-healing capabilities to the CI/CD pipeline so it doesn’t break due to failing builds. It also provides recommendations to the tester to improve test automation initiatives going forward.

Artificial intelligence should also be able to help testers identify new test cases that need to be created, and assist beyond just maintaining what’s already there, according to Chris Haggan, HCL OneTest UI product manager. Other ways in which AI is starting to be applied to UI testing are in understanding what actual users do and in going through those workflows.

In addition, Mathur explained that modern-day applications are becoming much more graphical, and the old ways of locating a piece of text and interacting with it don’t work anymore. This is where he believes machine learning will really thrive, in being able to help understand the context of the page, compare it to what is already known about the application, and what is changing within the application. “This will lead to much more robust and much more reliable test cases than we ever had in this space. The incorporation of machine learning will make testing a lot easier,” he said.

However, Saperstone doesn’t see this taking off for the next couple of years. Testers are still working on trusting AI, he said. “People are still stuck in the old manual testing mindset. They think they can take a manual test and convert it into an automated script to get the same coverage and results, and that’s not how that works,” said Saperstone. “You
need to think about what you are trying to accomplish, verify and understand.”

The way teams are building user interfaces is also changing, Haggan said. There is a move towards cloud-native applications and technologies like microservices. So instead of the UI being built as a monolith, it is now being developed and delivered in parts and pieces. The task of a UI testing organization like HCL is to go in and figure out how the pieces fit together, determine if all the pieces work together, and find out if it is seamless, according to Haggan.

Aside from using tools, testers need to leverage smart execution, Lambert added. “Analyze the app and the tests running against the app to determine what has changed and what tests need to be re-executed for those changes, so you’re only executing tests and validating the changes in the app,” he said. This is extremely important because UI testing is slow; it takes time. There are many browsers and click paths involved. When you are testing, you are talking about thousands of automated tests that
are running. Being able to target only the necessary changes can significantly cut down test times.

The slowness of UI testing also becomes a problem in an Agile or DevOps environment where there are frequent releases and builds happening. Testing needs to be done at a fast rate in order for it to be useful, according to Mathur. He recommended using distribution technology, cloud technologies and containers to speed things up. “Adopting technologies like Docker to the test cases and using agents so you can run them in parallel and get all the results in one place and get an answer fast as to the state of the application is increasingly important as everyone tries to move towards Agile development,” he said.

Haggan added that it is important to bring the API and back-end solutions together with the UI testing because applications are becoming more and more reliant on those, especially with the use of microservices. Mathur also said that unit testing overlaps with UI testing. “If we have units that are outside the development scope of the application itself, there is a good need for unit testing to also be incorporated in UI testing to give a leg up for the functional tests to get started and build on top of that,” he said.

Parasoft’s Lambert turns to the testing pyramid, which groups tests into different granularities and gives an idea of how many tests you need in those groups. “What it does is it talks about how to organize your test automation strategy. You have a lot of tests at the lower level of the development stack, so your unit tests and API tests should cover as much as possible there. UI tests are difficult to automate, maintain and get the environment set up for. The testing pyramid minimizes this,” he said.
“I’m a very big proponent of the testing pyramid, which says a foundation of unit tests, backed up by API or service-level tests, then UI tests, with both automation and manual testing, makes manual testing much more efficient and much more effective and much more valuable. That is how you can really have a great strategy that’ll help you accelerate your delivery process,” he said. z
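The Page Object Model Lambert recommends can be sketched without any particular framework: each page’s locators live in one class, so a UI change is absorbed in one place rather than in every test. The driver below is a stand-in so the sketch runs anywhere; a real suite would pass a Selenium WebDriver, and all locators and names are illustrative:

```python
class LoginPage:
    # Locators are defined once; if the UI changes, only these two
    # lines change, not every test that logs in.
    USERNAME_FIELD = "#username"
    PASSWORD_FIELD = "#password"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type(self.USERNAME_FIELD, user)
        self.driver.type(self.PASSWORD_FIELD, password)


class FakeDriver:
    """Stand-in for a real WebDriver so the sketch runs without a browser."""
    def __init__(self):
        self.actions = []

    def type(self, locator, text):
        # Record each interaction instead of driving a real page.
        self.actions.append((locator, text))


driver = FakeDriver()
LoginPage(driver).log_in("mary", "s3cret")
print(driver.actions)  # [('#username', 'mary'), ('#password', 's3cret')]
```

Every test that needs to log in calls `LoginPage.log_in`, so a renamed field is a two-line fix regardless of how many tests touch that screen.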
The upsides and downsides of Selenium BY CHRISTINA CARDOZA
Selenium is one of the most popular UI testing frameworks out there because it is open source, easy to use and has a lot of community support. According to Max Saperstone, director of software test automation at consulting company Coveros, because many large enterprises and businesses have adopted it, it’s proven to work, and there is a huge backing of support across different languages and resources.

But as with any open-source project, it does have its limitations, and it can be difficult to get past them. For instance, he explained that just because it is a free tool, that doesn’t mean it is free for an organization. What they may not pay in licensing, they still have to pay in knowledge and talent. “But overall, if you have people that can learn it and developers willing to take that on, it is usually a lot cheaper and easier to get started with,” he said.

While some organizations have released tools on top of Selenium to help extend its use cases, Mark Lambert, vice president of products at the automated software testing company Parasoft, warns users to make sure they aren’t getting locked into a solution. “What our solution Selenic does is it plugs right into an organization’s existing Selenium test automation practice, leveraging their tests as they exist today, but then injects its AI to help with what we see as being the number one challenge for organizations with UI test automation, which is that of maintainability,” he said.

Within the HCL OneTest offering, users can run Selenium tests, or use Selenium to interact with browsers, according to Ashish Mathur, director and architect of testing products at HCL Software, a division of HCL Technologies. “Where we come into the picture is we provide value on top of Selenium, which is beyond any interactions with the browser. The whole aspect of being able to locate controls very easily and very intuitively, in a manner that is very natural.”
According to Chris Haggan, HCL OneTest UI product manager, when development for Selenium IDE ceased, it left a gap in the market. While Selenium IDE development has recently picked back up in the last couple of years, HCL now provides extra value that can be added and closes some of the gaps still left, such as API performance,
test maintenance and ease of execution. “There is a class of testers who don’t necessarily want to write Selenium code, but understand the value that Selenium brings to the table. One of the things we did was we looked at how we can build an easy capability for testers to build scripts in the same way that the Selenium IDE had been doing,” said Haggan. z
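One concrete source of the maintenance pain mentioned above is wait handling: hard-coded sleeps break as application timing changes, so automated UI tests rely on explicit waits that poll for a condition. Selenium ships its own WebDriverWait for this; the framework-neutral sketch below just shows the idea:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Simulate an element that only "appears" after a few polls, the way a
# slow-rendering page element would.
state = {"calls": 0}
def element_present():
    state["calls"] += 1
    return "element" if state["calls"] >= 3 else None

print(wait_until(element_present, timeout=2.0, interval=0.01))  # element
```

The point is that the test waits exactly as long as needed and no longer, instead of guessing a fixed sleep that is either too slow or too flaky.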
How does your company help facilitate UI testing?

Ashish Mathur, director and architect of testing products at HCL Software, a division of HCL Technologies

Traditionally, UI layer testing has been mainly manual due to the brittle nature of typical automated UI testing tools. However, HCL OneTest UI delivers a much more robust test automation platform with both the Script Assure technology as well as guided and self-healing capabilities. So even if the application UI changes, the scripts are “smart” enough to see those changes and continue running, and then alert the user that the application UI has changed. This intelligent object recognition during playback makes scripts resilient to changes and easy to maintain.

Novice test automation engineers can get up and running quickly with the HCL OneTest UI natural language syntax that is auto-generated when recording against the system under test. These scripts can then be augmented with additional steps and verification points while the application is “offline.” Coupled with the ability to interleave API tests, HCL OneTest UI offers complete traceability from UI to API and back.

HCL OneTest also provides the ability to reuse UI tests during performance testing, which leads to efficiencies in script creation and earlier results. The Accelerated Functional Testing feature uses available test resources, such as Docker, to help achieve test results quickly by running as many tests simultaneously as possible. Coupled with seamless integration with most CI/CD systems, integrations with the emerging value stream management platforms, and the ability to continuously collaborate with development teams in identifying new test scenarios, this enables quality in DevOps and enhances the automation suites.
Mark Lambert, vice president of products at Parasoft

The move in the industry over the last 10 years has been accelerating towards open-source software, leveraging open source where appropriate and then leveraging vendor-driven solutions when things get more complex. At Parasoft, we take an open-source-first approach to testing. Our recently announced Selenic product is designed to help organizations with the adoption of the open-source framework Selenium, which we found the majority of people are using as their primary test automation practice.

You can simply plug Selenic into your existing Selenium testing practice, and when things get complicated, Selenic supercharges the basic functionality that comes with Selenium. The number-one problem with Selenium is maintainability and stability, which no one else is trying to address without moving you away from Selenium.

Parasoft Selenic analyzes why tests fail and, by applying its AI analysis of prior executions, comes up with recommendations for updates to those tests, for instance, changes to locator strategies or wait conditions. It can also apply these recommendations at runtime, self-healing the tests when run as part of the CI/CD pipeline, and help avoid any unnecessary build failures. The recommendations are also provided as feedback to the tester, so that the tester can then edit the tests and make the changes that the AI engine is recommending. z
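In greatly simplified form, the self-healing idea can be pictured as a locator fallback: when the primary locator no longer matches, previously recorded alternates are tried and the substitution is reported back to the tester. This sketch only illustrates the concept and is not Parasoft’s actual algorithm; the page is modeled as a plain dictionary:

```python
def find_with_healing(page, locators):
    """Try locators in priority order; report when a fallback was used.

    `page` is a dict mapping locator strings to elements, standing in
    for the DOM. Returns (element, used_locator, healed_flag).
    """
    primary = locators[0]
    for locator in locators:
        element = page.get(locator)
        if element is not None:
            return element, locator, locator != primary
    raise LookupError("no locator matched: %r" % (locators,))

# The button id changed from 'submit-btn' to 'send-btn'; the recorded
# CSS path still matches, so the test keeps running and flags the drift
# instead of failing the build.
page = {"css:.form button.primary": "<button>", "id:send-btn": "<button>"}
element, used, healed = find_with_healing(
    page, ["id:submit-btn", "css:.form button.primary"])
print(used, healed)  # css:.form button.primary True
```

A real tool would also learn new alternate locators from prior passing runs; here the alternates are simply supplied by hand.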
A guide to UI testing tools
FEATURED PROVIDERS
• HCL Software: HCL OneTest supports a DevOps testing approach with UI testing, API testing, performance testing, data fabrication, and service virtualization. The solution is designed to automate and run tests early and more frequently to discover errors faster. Recent additions to the HCL OneTest platform use cloud native technologies to offer users a solution that is both secure and offers discoverability of tests to enable re-use and collaboration. HCL OneTest supports DevOps deployment solutions through a wide range of integrations, and the emerging Value Stream Management sector by integrating with UrbanCode Velocity.

• Parasoft: Parasoft’s UI testing solutions make it easy to automate web UI tests and integrate into your CI/CD pipeline. For Selenium users, Parasoft Selenic self-heals Selenium scripts at runtime and provides quick fixes in the user’s IDE to automatically update Selenium scripts. For users with complex test scenarios, Parasoft SOAtest provides complete end-to-end functional test automation (e.g. UI, API, database), integrated with Parasoft Virtualize for creating virtualized test environments that are available anytime, anywhere.
• Applitools modernizes functional and visual testing through Visual AI for increased coverage, higher quality, and better release velocity, all for less time and money. Built for both developers and quality engineers, Applitools automatically validates the look, feel, and functionality of apps using 99.999% accurate computer vision technology, leveraging images instead of test code.

• Appium is an open-source test automation framework for native, hybrid and mobile web apps. It is a JS Foundation project that graduated in 2017. It features full access to back-end APIs and DBs from test code, and the ability to write tests with third-party development tools.

• Eggplant enables companies to view their technology through the eyes of their users. The continuous, intelligent approach tests the end-to-end customer experience and investigates every possible user journey, providing unparalleled test coverage. Our technology taps AI and machine learning to test any technology on any device, operating system, or browser at any layer, from the UI to APIs to the database.

• froglogic Squish is an automated GUI testing solution designed for cross-platform desktop, mobile, embedded and web apps. Features include support for all major GUI technologies, test script recording, object identification and verifications, an integrated development environment, popular script languages for test scripting, support for BDD, and integration into test management and CI systems.

• Functionize, an automated testing solution, combines natural language processing, deep-learning ML models and other AI-based technologies to ensure the whole UI is tested with visual comparison, visual completion and visual confirmation.

• Katalon: The Katalon product suite is designed to generate automated tests across platforms. Katalon Recorder is a lightweight extension for test automation recording and playback. Katalon Studio aims to simplify test automation activities with built-in project templates, end-to-end testing, object spying, a dual-editor interface, and a comprehensive BDD solution. Katalium is a new framework that provides blueprints of test automation projects based on Selenium and TestNG.

• Leapwork users can design and execute automated test cases as visual flowcharts and automate any applications with the help of native support and Selenium-based web automation.

• Mabl is a codeless UI testing service. It enables continuous testing with an auto-healing automation framework and maintenance-free test infrastructure. Mabl
Guest View BY ELI LOPIAN
The importance of healthy code Eli Lopian is CEO of Typemock.
Coding creates the backbone of most businesses today, whether it is developing an app for our smartphones or other software meant to ensure smooth technological processes. It is a way we can talk to machines using a logic base and make them do what we want them to do. However, one misplaced figure or apostrophe can result in dire consequences.

NASA discovered this the hard way. In the space race between the United States and the former Soviet Union, NASA launched Mariner 1 in 1962. Its mission was to collect scientific data about Venus. Unfortunately, a few minutes after Mariner 1’s launch, it did an unscheduled yaw-lift maneuver and lost contact with its ground-based guidance system. A safety officer was forced to call for its destruction 293 seconds after launch. Richard Morrison, NASA’s launch vehicles director at the time, testified before Congress that an “error in computer equations” led to the space disaster. Additional reports blamed the source on a mistaken hyphen in the code. Others blamed it on an “overbar transcription error” or a “misplaced decimal point.” Similar mistakes can happen on any project, but Mariner 1’s code error cost NASA and the American government millions of dollars.

Every developer realizes the need for clean code, i.e. code that is efficient and easy to read with no duplication. But clean code is not necessarily healthy code. Healthy code is code that is maintainable. You can have clean code that is elegant but is still unhealthy and will ultimately slow down development. So how do you create healthy code?

• High coverage of unit tests. The more a program’s source code has been unit-tested, the easier it is to implement changes at a later date. Often developers fail to understand that if they invest more time increasing their unit test coverage up front, it helps not only the QA team but themselves with any changes needed later on, and results in faster implementation.

• Refactoring code.
Refactoring is an essential part of the development process when working on code. The point of refactoring is changing or restructuring the code without changing its external behavior. This, in turn, should make it more readable and understandable. Refactoring means that you’re actively taking note of the cleanliness of the code when you’re developing. You should also ensure you’re not unintentionally making unwanted changes to the product or app you’re designing.
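A small example of the two practices working together: a unit test pins down the external behavior, so the function underneath can be refactored freely as long as the test stays green. The function and test are illustrative (runnable with pytest or as plain asserts):

```python
def total_price(items):
    """Sum quantity * unit_price over a list of (quantity, unit_price) pairs.

    Refactored from an explicit accumulator loop to a generator
    expression; the external behavior is unchanged, so the test
    below still passes.
    """
    return sum(qty * price for qty, price in items)


def test_total_price():
    # These assertions capture the behavior that refactoring must preserve.
    assert total_price([(2, 3.0), (1, 4.0)]) == 10.0
    assert total_price([]) == 0


test_total_price()
```

If a refactoring accidentally changes the result, the test fails immediately, which is exactly the safety net that makes restructuring code routine rather than risky.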
Legacy code to healthy code

Writing healthy code is easier when you’re starting with new code. But what happens when the code you’re starting with is legacy code? Legacy code is “old” code, i.e. source code that was written for an unsupported operating system, app, or technology. Once legacy code is in production, ideally no one should need to change it. There are occasions, however, when new features need to be integrated into the legacy code. That can very quickly turn the legacy code into spaghetti code. This is when unit testing should be implemented. It shows the logic behind the code and enables the new team to see which part of the code is broken.

It is crucial to remember that technology is dynamic. You never know when your legacy code is going to need an update or when disaster is going to strike. Both of these scenarios can force DevOps teams to make changes to the software fast. The financial crisis following Lehman Brothers’ collapse in 2008 spurred changes to the law for the financial services industry. Institutions were faced with a provision that affected financial reporting and auditing. The changes to the law required recoding and implementation in an extremely short time frame.
Healthy code = healthy product life cycle

Software is not like architecture or engineering. Even the smallest bug can ruin the entire project or life cycle of a product. You need to make sure everything works, and healthy code is key. Healthy code means that when something unforeseen does come up, the bug can be found quickly, without detrimental effects on business operations or downtime, ensuring your overall software is agile, clear and robust. z
Analyst View BY BILL HOLZ
5 steps to master continuous delivery

Delivering quality applications with increased agility isn’t as simple as “doing” agile or DevOps. No matter where they are on their agile and DevOps journey, technical professionals can use these steps to achieve continuous delivery.
Bill Holz is a Research VP on the Application Platform Strategies team in Gartner for Technical Professionals.

Step 1: Establish a continuous learning culture
To achieve continuous delivery of quality solutions, implement continuous learning practices for streamlining work and reducing waste. Increase knowledge and skills. Carefully assess the capabilities of the current application development organization, and use this assessment to identify gaps in knowledge, skills, process and practices. Create communities of practice to support organizational learning and provide critical opportunities to build expertise. Adopt practices and processes that support this continuous learning.

Step 2: Develop agile fluency
As you continue your continuous delivery journey, take three key actions to develop agile fluency. First, establish baseline metrics. How do you know if you are successfully moving toward your goal? Then, deploy agile methodologies. Identify what resources are needed to educate and train yourself and your peers to begin adoption of agile methodologies. Determine a benchmark for when you’ll know you have become proficient enough to move to the next step. Finally, implement bimodal IT. Determine how Mode 1 (predictability) teams can benefit from agile, and how you can prevent bottlenecks and reduce dependencies between Mode 1 and Mode 2 (exploratory) teams.

Step 3: Mature agile practices
Scrum, Kanban and “scrumban” are not enough to implement agile processes and practices. They are management frameworks, each with different goals, that provide no guidance for how to deliver working, consumable software. Mature your agile practices to prepare for continuous delivery. Agile technical practices provide the insight and guidance to help build quality into the application while receiving constant feedback on the system’s readiness for production deployment. Use a componentized architecture when creating new applications, and refactor legacy applications to enable those applications to take full advantage of the agile and DevOps practices that are necessary to enable continuous delivery.

Step 4: Automate infrastructure
Successful DevOps teams require the ability to:
• Provision and configure tooling that helps not only development teams but also I&O teams manage and support new architecture.
• Deploy and manage when it comes to implementing and managing your applications. There are an array of questions to ask: Should we use containers? How do we manage them? Do we move our applications and databases to the cloud?
• Secure applications and data. This continues to be one of the key tasks for security professionals. The increased speed of change, coupled with moving applications and data to the cloud, makes protecting your mission-critical assets more challenging.
• Monitor applications. With the increasing adoption of DevOps-oriented lean and agile practices, application performance monitoring is needed to provide rapid quantitative feedback regarding the efficacy of the latest release. The data also needs to be available to support rapid triage of changes in performance and availability associated with these more frequent production updates.
Step 5: Improve delivery cadence
Once technical professionals reach this step, several advanced topics should be considered. These topics require, however, mature agile development and DevOps practices to ensure success. First, consider adopting an enterprise agile framework. The purpose of enterprise agile frameworks is to make the management of complex agile releases and evolving solutions not only feasible, but also routine and sustainable. Then, think about applying microservices architecture (MSA) principles. MSA builds distributed applications that support agile delivery and scalable deployment, both on-premises and in the cloud. Once adopted, MSA affects how development teams develop and deploy software. z
Industry Watch BY DAVID RUBINSTEIN
How I came to know, and love tech David Rubinstein is editor-in-chief of SD Times.
Allow some self-indulgence as we celebrate 20 years.

Growing up, I always wanted to be a sportscaster, having worshipped at the temple of the great Marv Albert (“kick SAVE and a beauty!”). I went to the University of Maryland for journalism to begin the quest, and walked into the radio station, declaring, “I’m here to do football!” Instead, I was told if I watched and learned, I could be given women’s field hockey by the time I was a junior.

Undaunted, yet pivoting from sportscaster to sportswriter, I walked into the campus newspaper office and declared, “I’m here to do football!” They handed me a notebook and said I would work the post-game visitor’s locker room. And so it began.

After graduation, I worked in sports departments at various newspapers around the country, and after a few years, learned what all sportswriters know: You lose your fandom in the pressure cooker of deadlines. Suddenly, the games aren’t fun anymore. They’re work. I learned something else, too, that I hadn’t foreseen. Over time, you realize that once you’ve seen a completed “Hail Mary” touchdown pass, or a last-second buzzer-beating three-point shot, or a penalty shot successfully converted in hockey, you’ve eventually seen every possible outcome there could be. So, sports was not only no longer fun. It was boring.

A mid-career interview with Ted Bahr for the executive editor’s position at news startup SD Times changed all that. Ted didn’t so much conduct an interview as make an hours-long sales pitch. His enthusiasm was infectious. I guess I would most closely liken it to how the “family” felt about Charles Manson (without all that gore). I was mesmerized. When he wrapped his spiel, I said, “This all sounds very exciting, but I have to tell you, I know nothing about computers.”

I wasn’t lying. While colleagues boast about having the first Commodore or early Apple Macintosh computers, my first was in 1994. A Packard Bell.
I didn’t know how it worked, like I still don’t really understand how old television sets worked. I bought it so we could keep our young children entertained with interactive games and songs, and not just have them watch “Barney” videos till my eyes and ears bled. As I started using the PC more, I became more and more curious.

Back at the interview, Ted’s retort to my admitted lack of knowledge caught me off-guard, and changed my life. “Well, we don’t know anything about publishing a newspaper.” Wow... so, like, what are YOU thinking, I thought. But newspapering is something I most definitely knew how to do. And so, Dave 2.0 emerged.

November 1999 marked the beginning of what has become a 20-year career in tech journalism. Under the incredible tutelage and guidance of Alan Zeichick — a superb editor whose work once again graces the pages of this special edition of SD Times — I got a crash course in the tech of the day (which was much easier to understand than today’s goulash of technologies and methodologies). I was unsteady but prepared enough to deliver Issue 001 of SD Times, on Feb. 23, 2000. And listened. And learned.

Along the way, I have picked up many things. Perhaps the most striking, yet most obvious in 20-20 hindsight, is that covering the software development industry is most definitely not boring. Far from thinking I’ve seen every possible outcome, our world changes so rapidly that as a journalist, it’s imperative to be constantly learning, to ask the right questions, to poke the new startups to find out if what they’re doing is merely spin or actually groundbreaking.

What we’re writing about today, we couldn’t have even imagined in 2000. And I’m sure in 2040, what is being written about software development will be completely different again. I hope that SD Times is still publishing then, continuing to uncover stories that, as Ted wrote in his retrospective in this edition, our readers didn’t know they’d need to know. We did a cover story on how people with autism actually have the exact skills needed for programming.
We’ve written about how the design of your office can affect how people work (the verdict was overwhelmingly ‘No Open Office’). We wrote about DevOps not being the ‘kumbaya’ moment it was pitched as. There have been 20 years’ worth of stories like these, along with industry profiles and news analyses. That’s what a magazine does. In long form. And that’s what we will continue to do, for as long as I’m at the helm. I promise you, it won’t be boring. z
The latest issue of SD Times is now available. The February issue marks SD Times’ 20th anniversary and features how software development has...