IG World Vol 1 * Issue 1 - Fall 2018


INFORMATION GOVERNANCE WORLD

THE ROLE OF ANALYTICS IN IG PROGRAMS

WHEN IS A BLOCKCHAIN A GOOD SOLUTION?

IMPLEMENTING GDPR: ADVICE FROM ACROSS THE POND

YOUR GLOBAL IG RESOURCE®

ADVICE FROM LEADING IG EXPERTS

JASON R. BARON

eDISCOVERY PIONEER GIVES BACK

PAULA LEDERMAN

EXPERT INSIGHTS ON RECORDS MGMT

GARTNER’S DOUG LANEY

AI IN INFOGOV

JOHN J. JABLONSKI

ELECTRONIC DISCOVERY

ARLETTE WALLS

COMPLIANCE IN PHARMA

DEBUT ISSUE!

INFOGOVWORLD.COM ISSUE 1 • VOL 1 • FALL 2018

INFONOMICS: THE VALUE SIDE OF IG

ROBERT SMALLWOOD



INSTRUCTOR-LED CLASSROOM TRAINING ON IG
With Leading IG Trainer Robert Smallwood
San Diego, California, April 9-11, 2019 (Tuesday-Thursday)

3-Day Basic & Advanced Intensive Course

Attend this popular classroom course held at one of the most beautiful college campuses in the world, the University of San Diego, which overlooks the Pacific Ocean! Taught by IG thought leader Robert Smallwood, the world’s leading trainer and author on IG topics, the course gives students personal attention to ensure they grasp key IG concepts and can apply them to their work. The first day covers IG Basics, including the IGP Certification Prep Crash Course, followed by two days of Advanced IG Training. The course is based on Smallwood’s groundbreaking text, Information Governance (Wiley, 2014, 2019), and supplemental course materials.

Topics Include:
• Failures & Lessons Learned in IG
• GDPR, Big Data Impact
• IG Imperative
• IG Principles
• Role of Data Governance in IG
• IG Risk Assessments
• Strategic Planning for IG
• IG Policy Development
• IG Program Management
• Infonomics: The Value Side of IG
• IG for Legal Functions & E-discovery
• IG for RIM
• IG for IT
• Privacy Functions in IG
• IG for Email, Social, Mobile, Cloud
• SharePoint IG
• Digital Preservation
• Information Asset Registers
• Taxonomies & Metadata
• Cybersecurity in IG
• IG for Emerging Technologies
• The Role of Executive Sponsorship in IG
• IG Best Practices
• Developing Key Metrics for IG Programs

“Thank you to Robert Smallwood for providing us with so much insightful information, and the tips we will need to pass the IGP certification.” — IG & Compliance Manager, Major Pharmaceutical Firm

“I really got a lot out of Mr. Smallwood’s teaching style and personal attention.” —IG Manager, Top 10 U.S. Law Firm

Past attendees include IG professionals from major law firms, leading corporations, and large government agencies.

Tuition Cost: $1,695* (Group discounts are available for 3 or more from the same company.)
* SUPER EARLY BIRD ($300 discount): Register by Oct. 9, 2018. Tuition Cost: $1,395
* EARLY BIRD ($200 discount): Register by Jan. 9, 2019. Tuition Cost: $1,495
Includes: Tuition, Breakfasts, Coffee Breaks, and Supplemental Materials
NOTE: You must purchase the textbook prior to class. Housing options include nearby hotels in partnership with USD.

“The 3-day training was very educational, and the small classroom environment made it even more interactive.” —RIM Manager, Fortune 500 Corporation

Take advantage of this exclusive training opportunity to educate your IG team! Seating is limited; reserve yours today at IGTraining.com, or call us at 888-325-5914!

Info & Registration Requests to: registration@igtraining.com


PUBLISHER’S LETTER

PHOTO BY BEN SIEGFRIED

I’ve found that if you work hard enough, and get the right people on your team, things usually, or at least eventually, work out. Sometimes, they work out even better than you had planned—and that’s how I feel about this debut issue of Information Governance World magazine.

I’d had the idea for this magazine for over five years, and discussed it at times with industry leaders, but others didn’t see the need as clearly as I did. Then, a few years ago, I was teaching an IG class with a long-time business associate of mine, Baird Brueseke, and he took an interest, and we started developing the idea in earnest. Sure, we had some missteps, failed attempts, wrong personnel decisions, and disappointments. We paid our tuition getting here, because every new business has surprises and frustrations. No startup begins with smooth sailing. But we persevered, and pressed on.

Through a combination of research and luck, we found Kenny Boyer, our Creative Director, who has worked on retail magazines for 25 years. His talent and insights have been invaluable in creating a truly unique look and style that is ultimately eye-pleasing and readable. And really cool!

As we began to introduce the concept of InfoGov World offering a sphere of marketing services, we found that the vendor community was quite welcoming and viewed the IG marketplace as “tremendously underserved” in terms of supplier marketing options. We also received positive feedback from IG leaders, who felt it was past time to unify the IG market segments and provide a high-quality, educational publication that could foster the development of informed IG professionals worldwide, and could support the somewhat fragmented IG community.

We have a unique approach—you’ll see inside—where we want to not only provide in-depth IG coverage and insights, but also to give you a sense of the people who are leaders in the IG field. This isn’t a stodgy, boring tech magazine, but an interesting and educational resource that we hope the broader market will embrace.

So here it is—the world’s first print and digital magazine dedicated entirely to Information Governance. Please read and enjoy our debut issue! And feel free to send us your feedback and ideas, as this is a magazine for the IG community that we intend to be an essential global resource.

Robert Smallwood CEO & Publisher



OPERATIONALIZE YOUR PRIVACY PROGRAM

AUTOMATE GDPR RECORD KEEPING

READINESS & ACCOUNTABILITY TOOL (GDPR Articles 5 & 24): Benchmark organizational readiness and provide executive-level visibility with detailed reports.

PIA, DPIA & PbD AUTOMATION (GDPR Articles 25, 35 & 36): Choose from pre-defined screening questionnaires to generate appropriate record-keeping requirements.

DATA MAPPING AUTOMATION (GDPR Articles 6, 30 & 32): Populate the data flow inventory through questionnaires, scanning technologies, or bulk import.

COOKIE CONSENT & WEBSITE SCANNING (GDPR Articles 7 & 21; ePrivacy Directive Draft Regulation): Conduct ongoing scans of websites and generate cookie banners and notices.

SUBJECT ACCESS RIGHTS PORTAL (GDPR Articles 12-21): Capture and fulfill data subject requests based on regulation-specific requirements.

UNIVERSAL CONSENT & PREFERENCE MANAGEMENT (GDPR Article 7): Embed consent management directly on websites with a standardized transaction workflow.

VENDOR RISK MANAGEMENT (GDPR Articles 28(1), 24(1), 29 & 46(1)): Conduct vendor risk assessments, and audit and manage data transfers to third parties.

INCIDENT & BREACH MANAGEMENT (GDPR Articles 33 & 34): Build a systematic process to document incidents and determine the necessity for notifications.

FREE GDPR WORKSHOP (4.5 CPE Credit Hours)
For privacy professionals focused on tools and best practices to operationalize compliance.
Details and Registration Available at PrivacyConnect.com


LETTER FROM THE EDITOR

PHOTO BY BEN SIEGFRIED


Hello to the InfoGov Community! There have been so many of you who provided input and helped us move this magazine from vision to reality. Thank you! Without your support we could not have crafted the debut issue you are now reading. I would like to express my gratitude to each of the IG leaders who participated in our interviews. Your willingness to share your experiences with our readers has made our plan to put a human face on tech a reality. A special shout-out to Andrew Ysasi, an early believer in our endeavor. I look forward to working with Andrew and IG Guru to grow the IG community.

The best part about my role as Executive Editor has been the opportunity to work with our talented team of artists, writers, editors, and photographers to bring the vision that Robert Smallwood and I share to life. I feel a great sense of satisfaction that by working together, we have all contributed to the creation of a valuable resource for you, the reader. I hope you enjoy what you see and are informed by what you read.

Information Governance World is a space where disparate business segments join together under the umbrella of a common vision. Information Governance programs must be in place in order for companies to maximize the monetization of information. Our vision defines the Information Governance market as consisting of nine key sub-market segments: Information Privacy, Information Security, Data Analytics & Infonomics, Regulatory Compliance, eDiscovery, Records & Information Management, Data Governance, ECM & EFSS, and Archiving & Long-Term Digital Preservation. Combined, these nine market segments represent over $200 billion in annual spending. Our publication will be a voice for this industry.

Our interviews with industry leaders will reveal insights into the real people who are shaping the Information Governance landscape. Each issue will cover Information Governance in Society with short profiles of top people in their field. In this debut issue we highlight Jason R. Baron, a pioneer in eDiscovery and friend to a Cambodian village, and Nick Rhodes, an innovative media entrepreneur who built multi-billion-dollar programming franchises and led massive metadata tagging projects in sports television programming. You will get to know infonomics pioneer Doug Laney and learn of his work. You’ll meet Judy Selby, a leader in cybersecurity insurance and the law; Bob Seiner, an expert in Data Governance and a big sports fan; Arlette Walls, a RIM compliance leader in the pharmaceutical industry with international roots; prominent attorney and rugby bruiser John Jablonski; and RIM expert and knitting enthusiast Paula Lederman.

We want to invite you into our tribe, into our community. Information Governance has long been sprawled across different channels and divisions. We want to change that. We want to bring everyone under one umbrella. And simply by taking the time to read this inaugural edition, you are helping make that vision a reality. Thank you for joining us.

Enjoy the issue.

Please send your comments, suggestions, and story ideas to me at bb@infogovworld.com.

Baird Brueseke Executive Editor



We’ve been leading successful implementations of new processes and technologies for over 20 years.

• Business Process Analysis & Redesign
• RIM Program Assessments & Guidance
• ECM/EFSS System Requirements Definition
• Data Remediation/Shared Drive Cleanup
• RFP Development & Assessment

imergeconsult.com • 888.325.5914



CONTENTS

INFORMATION GOVERNANCE IN SOCIETY
10 An Interview with Jason R. Baron
13 Hide and Seek
14 An Interview with Nickolas Rhodes

INFORMATION GOVERNANCE IN HEALTHCARE
16 IG in Healthcare: A Matter of Life & Death
17 Information Governance for Healthcare
18 The Risks and Benefits of AI in Healthcare
19 Healthcare Informatics and Cross-Functional Collaboration
20 IG Leadership: Robert Smallwood

INFORMATION PRIVACY
22 In Blockchain We Trust?
26 How I Learned that Facebook Failed
27 GDPR Information Workflow
28 Preserving our Privacy

INFORMATION SECURITY
30 What is Security Awareness Training?
31 What is Penetration Testing?
32 An Interview with Judy Selby
33 What is a Vulnerability Assessment?
34 Stepping into Security Assessments
35 Security Awareness Training – a Quick Win for IG Programs

FEATURE
36 Doug Laney: On the Money

DATA ANALYTICS & INFONOMICS
44 Why Will Analytics be the Next Competitive Edge?
46 The Role of Analytics in IG Programs
48 Analytics 101: The Four Types of Analytics and Their Uses
50 An Excerpt from Doug Laney’s New Infonomics Book

REGULATORY COMPLIANCE
52 An Interview with Arlette Walls
53 The Relationship between Audit and Compliance
54 Implementing GDPR and the Need for Data Protection Officers
56 A Rising Star in California’s Cannabis Regulatory Compliance Efforts
57 PCI-DSS Compliance

eDISCOVERY
58 An Interview with John J. Jablonski
59 eDiscovery Overview
60 eDiscovery 101
61 eDiscovery Trends

RECORDS AND INFORMATION MANAGEMENT
62 An Interview with Paula Lederman
64 Bringing Your RIM Program to the 21st Century
65 Defining Vital Records
66 Tools for GDPR Compliance
67 IM vs. IG

DATA GOVERNANCE
68 An Interview with Bob Seiner
70 The Four Horsemen of the Data Apocalypse
71 DG vs. IG

ECM & EFSS
72 Just Semantics?
73 Eyes Wide Open

ARCHIVING & LONG-TERM DIGITAL PRESERVATION
74 Backups, Archiving, Preservation – Oh My!
75 Archiving – Its Challenges and Value

EMERGING TECHNOLOGY
76 AI and Information Governance
77 The Father of the Web
78 The Electronic Skin of the Earth
78 Machine Learning and Information Governance
79 Blockchain
79 The Merge

80 INFORMATION GOVERNANCE TRADE SHOWS
82 INFORMATION GOVERNANCE EVENTS

ON THE COVER: Doug Laney – VP & Distinguished Analyst, Chief Data Officer Research, Gartner. Check page 36 for his exclusive interview. Photo by Isi Akahome.

THIS PAGE: Can AI Help Your Organization Achieve Its IG Goals? The full story on page 76.



INFORMATION GOVERNANCE WORLD

YOUR GLOBAL IG RESOURCE® VOLUME #1 ISSUE #1 FALL 2018

CEO & PUBLISHER: Robert Smallwood

COO & EXECUTIVE EDITOR: Baird Brueseke

CREATIVE DIRECTOR: Kenny Boyer

SENIOR EDITOR: Dan O’Brien

CONTRIBUTING EDITORS: Mark Driskill, Taylor Brueseke, Martin Keen

CONTRIBUTING ARTIST: Thomas Kimball

CONTRIBUTING WRITERS: Lori Ashley, Baird Brueseke, Gary Cokins, Sam Fossett, Andrew Harvey, Darra Hofman, Doug Laney, Patricia Morris, Barry Moult, Bob Seiner, Robert Smallwood, Andrew Ysasi

CONTRIBUTING PHOTOGRAPHERS: Isi Akahome, Nate Kieser, Ben Siegfried

MEDIA SALES: Scott Allbert

SPECIAL THANKS TO INTERVIEWEES: Jason R. Baron, John J. Jablonski, Doug Laney, Paula Lederman, Nick Rhodes, Bob Seiner, Judy Selby, Robert Smallwood, Arlette Walls

Check us out online and sign up today for a free digital subscription to Information Governance World magazine.

INFORMATION GOVERNANCE EDUCATION, NEWS & EVENTS:
infogovworld.com • 1.888.325.5914

2358 University Ave #488, San Diego, CA 92104



INFORMATION GOVERNANCE IN SOCIETY



AN INTERVIEW WITH

JASON R. BARON

OF COUNSEL TO THE INFORMATION GOVERNANCE AND EDISCOVERY GROUP AT DRINKER BIDDLE & REATH LLP

PORTRAITS BY NATE KIESER

Mr. Baron’s career has included serving as lead trial counsel for the Justice Department in landmark litigation involving the preservation of White House emails, and being appointed as the first Director of Litigation at the U.S. National Archives and Records Administration (NARA). After spending 33 years in government service, in 2013 he joined the newly formed Information Governance and eDiscovery group at Drinker Biddle & Reath LLP and has been Chambers-ranked in eDiscovery for the past four years. A past co-chair of The Sedona Conference WG1 and past chair of the DC Bar’s Information Governance and eDiscovery Committee, Baron is currently co-chair of the Information Governance Initiative, a think tank and vendor consortium, and serves on the advisory boards of the Georgetown Advanced eDiscovery Institute and the Cardozo Data Law program.

In 2013, Baron was named one of six “eDiscovery trailblazers” by The American Lawyer in its issue devoted to “The Top 50 Big Law Innovators of the Past 50 Years.” He is the first Federal lawyer (and only the second lawyer overall) to win the Emmett Leahy Award, given for lifetime achievements in support of the records and information management profession. Among his many other awards and commendations, in 2013 he was the recipient of the Justice Tom C. Clark Outstanding Government Lawyer award, which is given by the Federal Bar Association. The 2014 documentary “The Decade of Discovery” detailed Baron’s quest to find a better way to efficiently search White House emails, including his co-founding of the TREC Legal Track at the National Institute of Standards and Technology with PhD colleagues in computer science. More recently, in connection with the controversy surrounding former Secretary of State Hillary Clinton’s use of a private email server, he appeared on CNN, ABC’s Good Morning America, NBC News, MSNBC’s The Last Word with Lawrence O’Donnell, and NPR’s All Things Considered––and has been interviewed by The New York Times, Washington Post, Wall Street Journal, TIME Magazine, and numerous other media outlets.

Baron has written over 90 published pieces on eDiscovery and Information Governance-related topics, has served as an editor-in-chief on three Sedona Conference commentaries, co-taught the first eDiscovery course in the U.S. to graduate students obtaining PhDs and Master’s degrees in information science, and edited the 2016 ABA book, “Perspectives on Predictive Coding and Other Advanced Search Methods for the Legal Practitioner.” He continues to lecture around the U.S. and the world on novel and emerging IG issues of interest to the legal and records professions. We caught up with Baron while he was keynoting at an archivists’ conference in Edmonton, Alberta.

InfoGov World: You were an attorney working for the federal government. How did that lead to your pioneering work in eDiscovery?

JRB: I agree with Malcolm Gladwell’s observation in his book “Outliers” that it takes 10,000 hours of hard work to really master anything. In my case, beginning in 1992, I spent seven years as lead counsel on what was known as the “PROFS” case (Armstrong v. Executive Office of the President), which originally involved whether emails exchanged among Lt. Col. Oliver North and other National Security Council staff caught up in the Iran-Contra scandal, and preserved on PROFS backup tapes, should be categorized as government records. The D.C. Circuit ruled that emails with certain attached metadata should be considered records; the case led to a settlement resulting in successive administrations permanently archiving all White House emails. This, in turn, has led to an ever-growing archive of at least a half-billion presidential record emails (to date) stored in NARA’s legal custody––all potentially subject to eDiscovery searches and other forms of access requests (e.g., under the Freedom of Information Act).

It was in 2002, in my capacity as Director of Litigation at NARA, that the Justice Department, in connection with U.S. v. Philip Morris (the RICO case filed against Big Tobacco), asked our agency to do a broad eDiscovery search for responsive Clinton-era emails. My shepherding of that early eDiscovery effort over six months, employing Boolean keyword searches, convinced me that lawyers would need more advanced tools if they were going to be expected to search vaster collections of digital objects in the future. And that recognition led me to seek out the brightest minds in computer science to help lawyers advance their knowledge of how to take advantage of artificial intelligence in the form of advanced search techniques.

How has eDiscovery evolved in the last decade and a half with respect to the use of advanced search methods such as predictive coding and technology-assisted review?

We really have come a very long way. Within a year of the 2006 adoption of changes to the Federal Rules of Civil Procedure, opinions authored by Judges John Facciola, Paul Grimm, and others cited to the 2007 Sedona Conference Commentary on Best Practices in Search and Information Retrieval––and to the TREC Legal Track research results––suggesting that lawyers can and should explore better alternatives to keyword searching.


“I began The Chelly Foundation in my late mom’s name to help with the health and education of the children of Chumkuri District.”

(Top right, clockwise): Jason with four girls who received donated bicycles; at a high school dedication for The Chelly Library; a girl holding a donated water tank with filter; and Jason helping carry school desks to a special bread day celebration hosted by his charity.

Judge Peck’s 2012 opinion in the da Silva Moore case––giving a “blessing” to the use of technology-assisted review (TAR) based in part on a seminal law review article by Maura Grossman and Gordon Cormack involving data from the TREC Legal Track––served to further galvanize the eDiscovery community. This led to what is today a cottage industry of case law on the subject of the proper use of TAR methods. In complex cases involving voluminous electronic records, eDiscovery lawyers who choose to use advanced search methods enjoy a tremendous strategic advantage in litigation: they are able to provide insight into large data sets for their clients, allowing the construction of a governing narrative in a fraction of the time it otherwise would have taken. Admittedly, not every piece of litigation involves a sufficient volume of evidence to make advanced search methods economically viable; but, in my view, any lawyer who chooses to put his or her head in the sand and fails to even consider weighing the pros and cons of using advanced search techniques in the appropriate case is simply not demonstrating professional competence in the use of technology consistent with ABA Model Rules and emerging State Bar guidelines.
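For readers curious what “predictive coding” looks like under the hood, here is a minimal, hypothetical sketch of the core idea: a classifier is trained on a small set of attorney-coded seed documents, then ranks the unreviewed collection by predicted responsiveness so reviewers see the likeliest-relevant material first. This is an illustrative toy, not any particular tool Baron describes; the documents, labels, and choice of scikit-learn are all assumptions made for the example.

```python
# Minimal sketch of predictive coding / technology-assisted review (TAR).
# Requires scikit-learn; all documents and labels are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A handful of documents already coded by attorneys: 1 = responsive, 0 = not.
seed_docs = [
    "memo discussing tobacco marketing strategy",
    "email scheduling the quarterly safety review",
    "draft press release on nicotine research results",
    "cafeteria menu for the week of June 5",
]
seed_labels = [1, 0, 1, 0]

# The (normally much larger) unreviewed collection.
collection = [
    "notes from a meeting on advertising to new markets",
    "parking garage closure announcement",
    "research summary on nicotine dosage levels",
]

# Learn a relevance model from the seed set.
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Rank the unreviewed documents by predicted probability of responsiveness,
# so human reviewers start with the likeliest-relevant documents.
scores = model.predict_proba(vectorizer.transform(collection))[:, 1]
for doc, score in sorted(zip(collection, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

In real TAR workflows this loop repeats: reviewers code the top-ranked documents, the model is retrained, and the process continues until the marginal review yield drops off.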

What advice or insights can you share with companies wishing to improve their eDiscovery processes?

I would like to respond to your specific question by saying something about improving Information Governance more generally. There is no turning back in terms of the corporate need to keep up with an exponentially growing amount of information in digital form. If anything, the enactment of the General Data Protection Regulation (GDPR) has now put a worldwide spotlight on the responsibility of corporate entities to manage their data sets in ways that ensure greater compliance with privacy laws. That means more than simply updating record retention policies: as lawyers in this area, we need to provide technological solutions to categorizing and disposing of records and information streaming into corporate networks in ways old and new, including increasingly from smart devices and the Internet of Things. Corporations can improve their specific eDiscovery process by stepping back to ask the question: How do we mature our overall handling and disposition of data generally? The seemingly weekly cyberbreach scandals only serve to underscore the importance of asking a prior question: Why do we have this data in the first place?

It also has become very clear that we are all living in an increasingly algorithmic world. eDiscovery lawyers practicing advanced search have a leg up in terms of already understanding the power of algorithms, as well as some inkling as to the potential for algorithmic bias. As part of any Information Governance program, the C-suite should be thinking of ways to escalate issues involving the use of algorithms of all kinds that may affect data collected on employees and customers. Only where there is intelligent discussion and greater transparency can organizations realistically expect to handle the governance and compliance challenges posed by Big Data that are coming down the pike.

We’ve heard you are doing work in Cambodia as part of a nonprofit charity. Will you describe your efforts and the goals of the initiative?

How much time do you have? (Laughs.) When my mom passed away in 2012 after suffering from MS for decades, she had been in the care of an institutional facility with extraordinary personal assistants. One of her main caregivers, a woman named Nan, hailed from a tiny village in southwestern Cambodia, about three hours south of Phnom Penh. In gratitude for her wonderful work, I made a pledge to Nan that I wished to do something to help the people, and particularly the children, of her home village. I was happy to fund the building of a library at the local high school and the distribution of school supplies in 2016. But when I went to Cambodia that first time to dedicate the library, I realized that the thousand kids who surrounded us at the local school lacked so much in the way of basic infrastructure: their school had no toilets, they had no clean water to drink, and many students didn’t have enough to eat––while being expected to walk up to 10 kilometers each way to school.

And so I began The Chelly Foundation in my late mom’s name to help with the health and education of the children of Chumkuri District. And I am pleased to say we have been making a positive difference in the lives of many people through a variety of activities, including: building water and sanitation facilities, giving away books and school supplies, buying bicycles, supporting dental programs, building playgrounds, and much more. Our main emphasis has been to incentivize kids to study and pass their 12th grade national exams; and to that end we have now given 10 deserving students (eight young women and two young men) full four-year scholarships to the Royal University in Phnom Penh and similar institutions of higher learning (with four more to be granted this fall). We just finished the building of a Chelly House to serve the community––and are heavily involved in several sustainable agriculture projects that hopefully will provide a steady income stream to do even more for the children. Anyone wishing to know more about our work is invited to go to www.thechellyfoundation.org and to like us on Facebook: www.facebook.com/thechellyfoundation. We are a 501(c)(3) charity and all donations are tax deductible. Thank you for letting me talk about this!

JASON R. BARON SERVES AS OF COUNSEL TO THE INFORMATION GOVERNANCE AND EDISCOVERY GROUP AT DRINKER BIDDLE & REATH LLP IN THEIR WASHINGTON, D.C. OFFICE, AND AS CO-CHAIR OF THE INFORMATION GOVERNANCE INITIATIVE. PREVIOUSLY HE SERVED TWELVE YEARS AT THE JUSTICE DEPARTMENT, INCLUDING AS LEAD TRIAL COUNSEL IN LANDMARK LITIGATION OVER THE PRESERVATION OF WHITE HOUSE EMAIL, FOLLOWED BY THIRTEEN YEARS AS THE FIRST APPOINTED DIRECTOR OF LITIGATION AT THE NATIONAL ARCHIVES AND RECORDS ADMINISTRATION. A PROLIFIC AUTHOR AND SPEAKER ON THE SUBJECT OF ELECTRONIC RECORDS, JASON HAS PENNED OVER 90 PUBLISHED PIECES AND GIVEN OVER 500 PRESENTATIONS AROUND THE GLOBE. IN 2013 THE AMERICAN LAWYER NAMED HIM ONE OF SIX E-DISCOVERY TRAILBLAZERS IN ITS ISSUE DEVOTED TO “THE TOP 50 BIG LAW INNOVATORS OF THE PAST 50 YEARS,” AND HE IS THE FIRST FEDERAL LAWYER (AND ONLY THE SECOND LAWYER OVERALL) TO WIN THE EMMETT LEAHY AWARD, THE HIGHEST INTERNATIONAL HONOR GIVEN FOR CAREER IMPACT IN RECORDS AND INFORMATION MANAGEMENT.

HIDE AND SEEK

If you’re younger than 25, then Snapchat is no doubt part of your lexicon. An Android and iOS app, Snapchat sends pictures, videos, or messages that are available for a short period of time, after which they become inaccessible. The brief nature of the messages is meant to encourage interaction, though the veracity of that claim is debatable.

NEW LOCATION-SHARING FEATURE

Recently, Snapchat decided to take another shot at its location-sharing feature, which has been maligned as inherently invasive. The Snap Map utilizes real-time locations that can be shared with friends (or requested of friends). This is juxtaposed with the previous iteration, which allowed contacts to have access to your whereabouts simply by following your Snapchat. Regardless of the tightening of scope here, there is something to be said about the balance between intimacy and stalking that location-sharing offers. Snap product designer Jack Brody has this to say about the update: “In a lot of ways, we’re taking what a map is and turning it upside down. […] This map isn’t about where am I, it’s about where are my friends and what are they up to? It’s not about figuring out how to get to your destination, but about discovering where you want to go.”

Before this most recent update, many users were content to utilize the “only-me” Ghost Mode, despite a mode in previous iterations that was open to sharing your location with only specific users. You can’t really blame them, as having relative strangers know your precise location feels like a recipe for stalking. This new feature is no doubt an attempt to boost user numbers after a poor financial report. Despite claims of 100 million monthly users, Snapchat’s reluctance to share its figures and a downturn in public popularity reveal a much more insecure company than previously thought. A new feature that boasts closer interaction in an increasingly social media world feels like a financially motivated move.

PRIVACY CONCERNS

Even though this new feature stresses sharing your location with friends, concerns continue to mount from privacy experts. The exactness of locations poses a serious security concern, especially when you consider that teenagers make up 22% of users. Michael Kasdan, a partner at Wiggin and Dana who specializes in privacy, had this to say: “The app is very addictive… every time you open it, it marks where you are.” Additional concerns involve the ability of bad actors or terrorists to utilize newsfeeds to determine where crowds have gathered. Michael Downing, former LAPD deputy chief, is concerned that Snapchat creates “soft targets, areas where civilians are at a greater risk for a terrorist threat.” He goes on to say: “Soft targets are something we are trying to defend against right now, not only inside of stadiums and arenas but hardening the outside core where you have less control.”

It is safe to say that this new feature is an improvement, but it certainly doesn’t disabuse all notions of security and privacy concerns. If you plan on opting into this new location-sharing feature, then be certain that the people you are sharing your location with are people you truly trust with that kind of specificity.




AN INTERVIEW WITH

NICKOLAS RHODES

CONTENT PACKAGING PIONEER

For over 25 years, Nickolas Rhodes has been part of senior management for several successful start-up and early-stage companies in programming content and video technology. In launching and building these businesses, he was a strategist for the acquisition of $225 million in committed equity and an operating manager through exit––with over $2.5 billion in asset value created. This included being the lead development executive of four 24-hour sports programming channels, six specialty cross-platform content sites, and two 24-hour satellite radio networks. Rhodes is one of a small number of executives who has taken multiple programming offerings from concept and business plan to launch and profitability. He also served as Managing Director and President/COO of two video software companies that are at the forefront of evolving technologies for video production and were built to meet demands of changing consumer behavior around connected devices. He was part of the executive teams that built regional sports television, interactive sports television deployments, action sports and lifestyle events, and the first Spanish-language sports network in North America. He also was part of start-up program syndication properties, and built content packaging blueprints for the music, action, and gaming categories.

Nickolas Rhodes relaxing at home in Malibu.

InfoGov World: Where did you grow up? Go to school? I grew up in Waterloo, Iowa, and attended local catholic grade school and high school, and then the University of Iowa. What are your best childhood memories of Iowa? Do you still make it back? My best memories of growing up in Iowa in the 70s was that we had to make our own fun––there were no video games, not even cable TV, and certainly no mobile phones. We were on our bikes and largely on our own, just making sure to be home in time for dinner. I get back to Iowa 2 or 3 times per year, usually around an Iowa football game and a summer family reunion. How did you get in to the cable TV business? I moved to L.A. to join Prime Ticket, a regional sports network distributed on cable TV, so I got into the business as a content packager and supplier. How did your Prime Ticket venture develop? What was unique about Prime Ticket? Prime Ticket was started by Bill Daniels and Jerry Buss, both legendary individuals. There were less than 20 people at the company when I was hired (at 26 years old). What was unique about Prime Ticket is the level of talent among the executive team by whom I was mentored and taught the business. John Severino, the former President of ABC Television, then Roger Werner, former CEO of ESPN, Mr. Daniels, of course, and the many sports team owner personalities as our content partners.


Tell us about the SpeedTV/OLN venture: how it got launched, how it developed, and the end result/exit.

I left Prime Ticket after it was sold to Fox/Liberty to join Roger Werner as the first employee of a new content venture when I was 31. I had worked for Roger at Prime Ticket, and he was founding what would become Speedvision. Roger has been my mentor and friend for decades now, and as I get older I realize how lucky I was to have him as a boss. Roger built ESPN from 1980 to 1990 before coming to Prime Ticket and the Prime Networks; he was one of the most respected network builders in all of cable.

We built a business plan for Speedvision and sought $100mm in equity to build the network. Cox Communications committed to one-third of the funding for Speedvision, and they had agreed to buy the Times Mirror cable systems and put together a programming venture fund. They put Roger in touch with Times Mirror, who agreed to invest in Speedvision if Roger put his team in place to manage their channel in development, called Outdoor Life Network. So we combined the two networks into a single operating plan and then got $200mm in equity to build both. Ironically, Times Mirror pulled back from the investment shortly after committing, creating some real challenges, but Roger was able to replace their investment with Comcast and Continental Cable, giving us more distribution and a more stable launch platform. In a period of seven years, we went from essentially a blank page to over $250mm in annual revenue, and Speedvision and OLN were sold for just under $1.5 billion in 2001.


In Information Governance, metadata management is crucial. Could you explain how metadata is used to categorize film clips?

When video became digitized, everything changed. It went from tape to files, and the volumes have grown exponentially. We used to “log shots”––essentially pen to paper as the program was being made or during a live sports event. These logs were transferred along with the physical media––the tapes––into a library where any retrieval was done by hand. We organized it as best we could, but when you needed something you sent an intern into the library and they could be gone for a day or two. It was inherently inefficient. When the video world went digital, the tape began to fade away in favor of hi-res files stored on servers, and proxy files sent to edit bays and shared projects worked on remotely. At that point, metadata, taxonomy, and meta schema beyond day, date, location, etc., became crucial to your capability to find, package, and distribute your programming to maximize its potential value.

A few years back, when I was President of Levels Beyond, we took on a 400,000-hour NASCAR library and built the metadata around the archives, and also around new programming and footage as it came in weekly. As a result, the entire library could be searched in seconds. It made the library intuitive: able to cue up categories of potential use as it was tied to social media trending topics. As a specific example, when Jeff Gordon got into a shoving match in the pits one race, the library was cueing up old footage of the early NASCAR drivers beating the tar out of each other. It’s the ultimate in topicality and long-tail use of content. None of which is possible without the right meta schema.

What are the benefits of good metadata planning and execution in the sports TV business?

You never miss a chance to package a feature or find the right shots for storytelling. You have to be able to find it first––or you will have a lost opportunity cost.

What are the most challenging aspects of executing a metadata strategy in the sports TV business?

Consistent taxonomy for logging, and having the metadata properly tied to the right content asset management system so it integrates with all other systems (editing, social, online, internal, etc.) and can be distributed via headless, automated distribution to multiple platforms with varying formats––that is the objective, and it begins at the meta stage. If that’s not right, then you will have serious efficiency issues. (A simple illustration of such a meta schema appears after this interview.)

How has the sports TV business changed in the last decade or so?

Pretty dramatic changes, really. The fan is connected at all times across multiple platforms and sources of content. Social media and social commentary were only enabled in the last decade, and they have changed the experience forever. Interactivity and insider access are expected, and new layers of gamification and virtual athletes are changing the experience as well.

What do you like most about serial entrepreneurship?

There are no excuses and no one to blame. It’s on you and your team. And that can be very rewarding, but it is also sometimes very lonely—because you will fail. What I like the most is knowing that you are attempting to do something new and provide a better experience for your target audience. When you succeed in doing that, there is a legacy, and that is satisfying.

You were a high school standout in hoops. Do you still play basketball?

I still shoot a basketball with my son, but I don’t try to run up and down the court anymore. I don’t have the hops, and you get to an age where joint injuries become more likely and more severe if they happen.

Who was better, Larry Bird or Magic Johnson? Why?

Wow. That’s like asking which of your parents you liked better! I won’t pick, but instead will offer up why I think they were both so important. They truly are the genesis of the popularity of the modern game, beginning in 1979 when they met in the NCAA final. I was in high school and remember it even today, and then they both went to storied franchises and revived interest in the NBA. Their most important contribution is that they made the assist matter: they both passed the ball so well and raised the level of their teammates. The league had been driven by one-on-one play and no defense until Larry and Magic arrived, and from that point on a great pass was recognized as much as a great shot. The game changed for the better—forever.

What do you like most about living in Malibu? What do you like least?

Malibu is a small community that is full of interesting people living a quiet lifestyle. I love the relaxed vibe, the access to the ocean and mountain trails, the weather, and the progressive attitude among the locals. What I like least is the summer traffic, but that’s not much to complain about in the big picture. It’s been a great experience to live here.

What is your favorite lunch spot in L.A.?

The Grill on the Alley.

If you could have dinner with 3 people, living or dead, who would you pick?

Jackie Robinson, Hunter Thompson, and John Lennon.

How do you know our publisher Robert Smallwood? When did you last see each other?

I’ve known Bob for almost 40 years. We met in Cedar Falls, Iowa, in 1980 when he was getting ready to graduate; I was at the University of Northern Iowa for one year before moving to Iowa City to go to the U of I. He’s been a pal since. Bob is smart, relentless, and fearless. We last crossed paths for dinner in L.A. a few months back, and enjoyed catching up.

FOR OVER 25 YEARS, NICKOLAS RHODES HAS BEEN A LEADER IN VIDEO CONTENT PACKAGING AND PROGRAMMING. HE HAS BEEN PART OF SENIOR MANAGEMENT FOR SEVERAL SUCCESSFUL START-UP AND EARLY-STAGE COMPANIES. IN LAUNCHING AND BUILDING THESE BUSINESSES, HE WAS A STRATEGIST FOR THE ACQUISITION OF $225 MILLION IN COMMITTED EQUITY AND AN OPERATING MANAGER THROUGH EXIT––WITH OVER $2.5 BILLION IN ASSET VALUE CREATED. RHODES IS ONE OF A SMALL NUMBER OF EXECUTIVES WHO HAS TAKEN MULTIPLE PROGRAMMING OFFERINGS FROM CONCEPT AND BUSINESS PLAN TO LAUNCH AND PROFITABILITY.
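As promised above, here is a minimal, hypothetical sketch of the kind of meta schema Rhodes describes: tagged clip records with a controlled taxonomy and a category search. Every field name and value is invented for the example; a production media asset management system would be far richer and integrated with editing and distribution systems.

```python
# Illustrative sketch: metadata records for sports video clips, with a
# controlled taxonomy so logging stays consistent across loggers.
# All field names and values are hypothetical.
CATEGORIES = {"altercation", "victory_lane", "pit_stop", "crash"}

clips = [
    {"asset_id": "race-1979-0042", "date": "1979-02-18",
     "people": ["Driver A", "Driver B"],
     "categories": ["altercation"], "location": "Daytona"},
    {"asset_id": "race-2014-1187", "date": "2014-11-02",
     "people": ["Driver C", "Driver D"],
     "categories": ["altercation", "pit_stop"], "location": "Texas"},
]

def validate(clip):
    """Reject clips whose category tags fall outside the taxonomy."""
    unknown = set(clip["categories"]) - CATEGORIES
    if unknown:
        raise ValueError(f"{clip['asset_id']}: unknown categories {unknown}")

def find(clips, category):
    """Return every clip tagged with the given taxonomy category."""
    return [c for c in clips if category in c["categories"]]

for clip in clips:
    validate(clip)

# A trending "shoving match in the pits" moment can instantly cue up
# decades-old footage because both clips share the same category tag.
for clip in find(clips, "altercation"):
    print(clip["asset_id"], clip["date"], clip["people"])
```

The controlled vocabulary is the point: without it, one logger tags “fight,” another tags “brawl,” and the long-tail footage Rhodes describes becomes unfindable.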



INFORMATION GOVERNANCE IN HEALTHCARE SPECIAL REPORT

IG IN HEALTHCARE: A MATTER OF LIFE AND DEATH

THOUSANDS OF AMERICAN LIVES COULD BE SAVED USING IG PRINCIPLES AND METHODS

In November 1999, as the rest of the United States dealt with the cultural and cult-prophetic aspects of Y2K, the Institute of Medicine published a report titled To Err is Human: Building a Safer Health System. The report began with these alarming words: “As many as 98,000 people die in hospitals each year as a result of medical errors that could have been prevented.” Reading that people die because of human error in the very places that should be curing us is quite jarring. The reality is that humans do make errors. Even in life and death situations, humans cannot escape their “humanness.” This basic human condition was an underlying point of the report.

Standing on its own, To Err is Human was also a call to arms. It is probably not a coincidence that building a safer American health system would have been visionary in 1999. After all, this was a time of uncertainty. The dot-com bubble was about to burst, and the “end of the world” dogma surrounding Y2K was at its peak. So many Americans felt that the end was near. However, at the close of the twentieth century, some computer scientists saw the benefits of electronic information to the healthcare industry. Although most of the now famous To Err is Human report focused on human-caused errors, the report also listed how and what computers could do to help humans make fewer mistakes. Without actually saying it, To Err is Human can be considered an early attempt at conceptualizing potential IG principles that would reduce fatal medical errors. Donaldson, Corrigan & Kohn (2000) promoted a “system-oriented approach” that “involves a cycle of anticipating problems” and “tracking and analyzing data as errors and near misses occur.” The identified data could then be used inside the system to “modify processes to prevent further occurrences.” However, it was not the U.S. that first promoted IG in healthcare; instead, it was the U.K.’s National Health Service that introduced the NHS IG Toolkit to its professionals in 2002-2003.

Fifteen years later, the American healthcare system is not safer. In fact, it is even more dangerous to patients. In 2016, Drs. Martin Makary & Michael Daniel studied vital death statistics published by the Centers for Disease Control and Prevention (CDC). Specifically, the doctors noticed that the CDC’s process for determining the national leading causes of death statistics did not consider deaths caused by medical errors. This happens because death certificates include an International Classification of Disease (ICD) code. In practice, codes like these are crucial for studying public health. Unfortunately, if there is no code for a cause of death, the CDC does not consider that death certificate in its “leading causes of death” list. Accounting for this, Makary & Daniel determined the number of deaths to be somewhere between 210,000 and 400,000 per year. Recently, a study by doctors at Johns Hopkins confirmed this, estimating some 250,000 deaths per year due to medical mistakes. The real eye-catcher is that when included in the CDC’s vital death statistics, medical errors are the third leading cause of death. This alone strongly indicates the lack of IG principles in public health.

IG principles used in healthcare can reduce medical errors. For example, specific data variables (such as the exact cause of death not included in the ICD codes) could be included in the patient’s end-of-life electronic health record. These health records, no longer useful to patients, become large repositories of electronic information that public health officials can use to analyze health over a lifetime and determine the exact cause of the fatal medical error. The American healthcare system is desperately in need of IG, and now hospitals and healthcare organizations are beginning to embark on IG programs to better protect their patients, and their brands.
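To make the ranking argument concrete, the short sketch below re-tallies leading causes of death with medical error included. The counts are rounded approximations used purely for illustration (the 250,000 figure is the estimate cited above); they are not official CDC output.

```python
# Illustrative re-ranking of U.S. leading causes of death once the
# medical-error estimate is counted. Counts are rounded approximations
# for illustration only, not official CDC figures.
cdc_causes = {
    "Heart disease": 635_000,
    "Cancer": 598_000,
    "Accidents (unintentional injuries)": 161_000,
    "Chronic lower respiratory diseases": 155_000,
    "Stroke": 142_000,
}

# Medical-error deaths lack a dedicated ICD code on death certificates,
# so they never appear in the CDC's "leading causes of death" list.
causes = dict(cdc_causes)
causes["Medical error (estimated)"] = 250_000

for rank, (cause, deaths) in enumerate(
        sorted(causes.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank}. {cause}: {deaths:,}")
# Medical error lands third, behind only heart disease and cancer.
```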

Information Governance for Healthcare

Given the complexity of the healthcare industry, IG is emerging as a useful tool for securing patients’ electronic health records (EHR) and controlling who and what accesses these records. Crucially, IG also is helping to manage the explosion of electronic information in the healthcare industry. For example, EHRs replaced traditional paper records. In their place are computer terminals or tablets that healthcare professionals access while consulting with patients. These e-records do not stay in that doctor’s office; instead, they are in a database managed and controlled by IT professionals. If the patient then visits another doctor, the new doctor has access to the same EHR. If the patient then visits the first doctor again, the electronic notes from the second doctor are there for review. This represents a specific chain of custody. Consequently, IG helps secure, control, and optimize this electronic information, helping organizations to meet regulatory requirements to avoid HIPAA violations, fines, and sanctions.

Leveraging a healthcare IG framework, the healthcare industry can better manage reforms that have been part of the federal government’s push to keep the healthcare industry focused on keeping people healthy. Medicare and Medicaid reimbursement payments are now tied to the term “meaningful use,” defined in terms of EHR technology as improving the quality of care. In other words, the healthcare industry must show that such technology helps patients as per guidelines enacted under the Health Information Technology for Economic and Clinical Health (HITECH) Act. A healthcare-focused IG framework helps managers and leaders in the industry define meaningful use and make it a strategic focus of improving care. The American Health Information Management Association (AHIMA) defines IG as “an organization-wide framework for managing information throughout its lifecycle and for supporting the organization’s strategy, operations, regulatory, legal, risk, and environmental requirements.” [1] Much like other industries, the healthcare industry utilizes electronic data for strategic purposes. An active IG program helps to govern and control electronic information and access to it.

REFERENCES
Daniel, M., & Makary, M. A. (2016). Medical error: the third leading cause of death in the US. BMJ, 353, i2139.
Donaldson, M. S., Corrigan, J. M., & Kohn, L. T. (Eds.). (2000). To Err Is Human: Building a Safer Health System (Vol. 6). National Academies Press.

REFERENCES
[1] The AHIMA website notes, “AHIMA-developed organizational IG competencies…incorporates more than 85 maturity markers or indicators of maturity in IG practices. These markers enable identification of maturity level, based on a five-level model across all 10 competencies.” Retrieved from http://www.ahima.org/topics/infogovernance/igbasics?tabid=overview




The Risks and Benefits of AI in Healthcare

One of the great triumphs of 20th century medicine was the use of technology as a tool to expand human life expectancy. In 1918, the average American lived into their early fifties: women lived to be 48.4, while men lived to 54. In 2014, life expectancy was 76.5 for men and 81.3 for women. Americans can now expect to live a full generation longer than Americans did at the start of World War I. When contemplating the next century’s medical advancements, it is not out of the realm of possibility that Americans would add another generation to life expectancy by the close of the 21st century. People in the future might say, “Meet my grandma, she’s 121!”

Whereas 20th century medicine conquered many of the deadly diseases that have plagued humans for thousands of years, 21st century medicine is aimed at conquering and controlling technologies such as artificial intelligence (AI), blockchain, and the Internet of Things (IoT). Take AI: Whether it is a robotic arm that looks and feels as real as a human arm or an artificial heart that calls in on its own for maintenance, the future will expand life beyond anything thought possible in the 20th century. Part of this expansion will be because of the use of AI in healthcare, but these advancements must be governed with healthcare-focused IG principles. AI visionaries see a world free of human pain and suffering. Given the inevitability of disagreement about the use of AI in healthcare, it seems prudent to list the key benefits and risks of using AI to improve human health, particularly over an entire lifetime.

One of the benefits of using AI-driven tools in the healthcare industry is that they will improve human health. For example, doctors will have access to vast amounts of human-like knowledge when running diagnostic tests and making their assessments. Or, envision smart medical devices, such as a pacemaker that knows instantly what is wrong with the heart and administers nanotechnology pulses to repair the problem. Such actions could take place without the patient even knowing. When coupled with a neuro-network interface, the human brain could be enhanced enough to allow for speech even when a person is lost in dementia. These types of smart devices could be placed at various parts of the body and act as sensors, much like those used in cars. With each sensor aided by AI, they could connect to each other through this neuro-network and talk to each other, supported by biological impulses from the brain.

The primary risk for using AI in healthcare is the cybercrime potential. Unless an AI-driven tool is completely isolated and independent inside the body, it will need to communicate with a central database in the cloud, or with other smart tools. This means they can be hacked, and their operation can be altered. A new way to assassinate political or business leaders might be to hack their implanted medical device! Other risks are that humans might become complacent and place too much trust in these AI-driven tools, even when they make incorrect suggestions. Related to this is the training of clinicians to rely on such devices. This could be very disruptive to current training regimes, so education on these emerging technology tools must be ramped up.

REFERENCES
United States Life Tables, 2014. National Vital Statistics Reports, 66(4). CDC. Retrieved from https://www.cdc.gov/nchs/data/nvsr/nvsr66/nvsr66_04.pdf


Healthcare Informatics and Cross-Functional Collaboration

Over the last decade, healthcare informatics has emerged as one of the fastest growing employment sectors. While the Bureau of Labor Statistics does not include a specific healthcare informatics career path, the Bureau uses the title Medical Records and Health Information Technician as the formal equivalent. The job market for these technicians is projected to expand 13 percent from 2016 to 2026, faster than the average occupation. This aligns with other expanding job markets in the healthcare industry, such as analytics, information security, and privacy officers. The explosive use of electronically created information, Big Data analytics, and cloud computing has resulted in a significant expansion of the healthcare informatics job market. The ability to use and manage this information in the form of electronic health and medical records is a requirement of the Health Information Technology for Economic and Clinical Health (HITECH) Act and the Patient Protection and Affordable Care Act (ACA). In conjunction with privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA), healthcare-focused informatics has emerged as a means of overcoming issues such as interoperability and electronic health record (EHR) implementation. Crucially, healthcare informatics operationalizes Information Governance frameworks, and the maturity models the industry now utilizes emphasize cross-functional collaboration.

Two specific definitions of healthcare informatics inform cross-functional collaboration. Both are similar, but have subtle differences that explain the complex issues surrounding patient care teams. First, Saba & McCormick (2015), as quoted in Sweeney (2017), define informatics as “the integration of health-care sciences, computer science, information science, and cognitive science to assist in the management of healthcare information” (p. 223). Conversely, the American Nurses Association (ANA) defines nursing informatics as “a specialty that integrates nursing, science, computer science, and information science to manage and communicate data, information, and knowledge in nursing practice” (ANA, 2001).

In a broad sense, informatics is important across the healthcare system. The influx of electronic information allows for accountability and transparency across the organization. The importance of cross-collaboration on patient care and population health cannot be overstated, and it is informed by informatics-based insights. Most hospitals have disparate communication systems between departments, but leaning into informatics can foster collaboration in the campus-like setting of most hospitals and improve teamwork throughout the organization. Beyond the total patient care model, informatics is also used by cross-collaborative teams to manage other aspects of patient care. For example, patient satisfaction surveys filled out under the premise of a patient as a paying customer are now informatics-based tools. By focusing on preventative medicine, healthcare informatics helps bring the healthcare industry into the 21st century and encourages a shift from treating sick people as a commodity to helping patients stay healthy.

REFERENCES
American Nurses Association. (2001). Scope and Standards of Nursing Informatics Practice. Washington, DC: American Nurses Publishing.
Bureau of Labor Statistics. (2016). Medical Records and Health Information Technicians. Retrieved from https://www.bls.gov/ooh/healthcare/medical-records-and-health-information-technicians.htm
Saba, V. K., & McCormick, K. A. (2015). Essentials of Nursing Informatics (6th ed.). New York: McGraw-Hill.
Sweeney, J. (2017, February). Healthcare informatics. Online Journal of Nursing Informatics (OJNI), 21(1).




IG Leadership: Robert Smallwood

R

obert F. Smallwood, MBA, CIP, IGP is a thought leader in Information Governance, having published seven books on IG topics, including the world’s first IG textbook, which is being used in many graduate university programs, as well to guide corporate IG programs. His latest book is Information Governance for Healthcare Professionals: A Practical Approach (HIMSS, 2018). Smallwood also developed comprehensive IG training courses, and has assisted hundreds of professionals to attain their IGP certification. He was a founding partner of IMERGE Consulting and is Managing Director of the Institute for Information Governance, and co-founder, CEO & Publisher of Information Governance World magazine. In addition to teaching IG courses, he consults with Fortune 500 companies, hospitals and governments to assist them in launching IG programs. Mr. Smallwood has published more than 100 articles and given more than 50 conference presentations. In addition to his nonfiction biz/tech books, he has published a novel, a theatrical play, and the first published personal account of Hurricane Katrina. We caught up with Robert Smallwood in Todos Santos, Mexico, where he had gone to find the muse while working on a novel. InfoGov World: Where are you from? Where did you go to school? Robert: I was born in Davenport, Iowa—also the birthplace of jazz great Bix Beiderbecke— and raised on the banks of the Mississippi River, in an area on the Iowa-Illinois border called the Quad Cities. It’s about 3 hours southwest of Chicago, so I grew up being a Cubs fan. Sometimes, my dad would take my brother and me to games, and when I was in high school I’d drive up with a group of friends to see a ballgame. I went to the University of Northern Iowa and lettered in cross-country my freshman year, but with my academic workload and a series of injuries, I quit running and focused more on school. I spent my junior year at U Mass/Boston on an exchange program,


and then finished up with B.A. degrees in Management and Psychology from UNI. Ten years later, I took night classes to get my MBA in International Business at Loyola University in New Orleans. Since I had a full-time job and was a single dad, a Big Brothers volunteer, a basketball coach for seven-year-olds, and later married with a family of five, it took me eight years to finish!


How do you define IG? I try to keep it succinct: "Minimizing information risks and costs while maximizing information value." Or, even shorter: "Security, control & optimization of information."

So is there a new focus on privacy that is driving all this activity and growth? Privacy is a major driver, and it will continue to be one. There is new privacy legislation in California. And big tech companies like Google and Facebook are now lobbying Congress to try to develop federal privacy legislation on their own terms—before consumers are screaming for it. Further, the 2016 hacking and theft of proprietary election research by Russian spies has brought many more people into the privacy and cybersecurity conversation.

How did you make your way into the IG discipline? I was working with Wang Labs in the late 1980s when they came out with the first document imaging systems, about a year ahead of IBM. I learned everything I could about imaging, and in 1990 I went out on my own as an independent consultant. Document imaging, workflow, and then document management and enterprise report management eventually developed into ECM suites. I was heavily into ECM and content management consulting in the late 1990s and early 2000s, when I was on the AIIM Board and regarded as one of the top experts in the world. Then that market consolidated drastically, and the broader need for IG came into view. I've researched email archiving and e-document security since the early 2000s, and I continued to research and write reports on subtopics in the IG discipline like IG for SharePoint, IG for the Cloud, Social, Mobile, etc. In 2011, I signed a three-book deal with Wiley to write a series of books on IG topics, including e-document security, electronic records management, and the first textbook on IG. I'm not sure I would have gone through with it if I had known how much work it was going to be! Publishers are real sticklers for detail. But the three books, two of them in the Wiley CIO Series, were published from 2012 to 2014—over 1,100 pages in all.

What major trends or influences have you seen in the IG market space? In the past year or so, there has been a sort of "perfect storm" that has fueled activity in the IG space. A combination of compliance pressures—notably GDPR—plus privacy and cybersecurity concerns, Big Data volumes, and the increasing recognition that information itself has value have contributed to a substantial increase in the number of organizations implementing IG programs.

What basic steps can companies take to meet GDPR compliance demands? A first step in the GDPR compliance process is to conduct an inventory of the enterprise's information assets to create a data map showing where all instances of data are housed. This is commonly the first major implementation step in IG programs, so the IG discipline and support for IG programs made substantial strides in the lead-up to GDPR going into effect in 2018. Progressive companies actually develop an Information Asset Register, a sort of "general ledger of information assets" that tracks where all information assets physically reside, whether they contain sensitive or confidential information, and their lifecycle requirements.

What is the most exciting development in IG of late? In a word, infonomics. IG programs are not only about managing risk and costs, but also about optimizing and finding new value in information. The concept of managing and monetizing information is core to the emerging field of infonomics, which, according to Gartner's Doug Laney,


is the discipline that ascribes "economic significance" to information and provides a framework to manage, measure, and monetize it. Laney published a groundbreaking book, Infonomics, which delineates infonomics principles in great detail, providing many examples of ways organizations have harvested new value by finding ways to monetize information. I was fortunate enough to discuss these topics recently with Doug over breakfast in Chicago.

Your next book is about IG in healthcare. How did you become interested in that market segment? When I learned that medical mistakes are the third leading cause of death in the U.S., killing over 250,000 Americans annually. In fact, I was the victim of a couple of serious medical mistakes that almost killed me. So last year I began to research IG in healthcare, a segment I focused on heavily in the 1990s. There was only one book on the topic, and I found it to be complex and academic. So I thought there was a market for a practical, clearly written guide to IG in healthcare that would help address this issue of medical mistakes in the U.S. Hopefully, my new book will help save hundreds of thousands of American lives over the next decade.

What is one hobby or activity that you enjoy? Well, I love to travel. And about twice a year I go off some place exotic just to do some writing—not tech writing, but working on fiction. I've written a novel and a play, and have a Mexico novel I've been tinkering with for 10 years. I even wrote a New Orleans mafia screenplay that was once optioned. I think writing fiction helps me focus on the cadence and rhythm of each sentence, which makes my nonfiction writing better, more readable.

Where have you gone off to write? Havana a few times, for as long as three weeks; a year on and off in the beautiful mountain community of San Miguel de Allende, Mexico; two months in Panama, at Bocas del Toro and the idyllic San Blas Islands off the Caribbean coast, including a week on a small yacht, where we ran into some very rough seas one night. Most recently, right here in Todos Santos, Mexico, which is about an hour and a half north of Cabo San Lucas. It's an old fishing village, a small enclave near the coast where there are a lot of artists. I like to be around creative types when I am doing creative writing. And I was a little curious to see the

Hotel California, even though supposedly it isn't the one the Eagles were singing about. I stayed in this cool old mansion once owned by a sugar baron, which is now the Todos Santos Inn. There are no TVs or phones in the rooms, and there is a writing desk. A great place to write. They even have a portrait of Shakespeare in the bar!

IGW: What is one thing that your colleagues in the IG space probably don't know about you? About 17 or 18 years ago, I had a grand mal seizure in a coffee shop in New Orleans. When I woke up in the hospital, the doctors told me I had terminal brain cancer and less than 30 days to live. They showed me an MRI of a large tumorous mass in my brain. I called my oldest son up to the hospital room to tell him that Pop was leaving soon. My mother and sister flew in, and we started to make arrangements. Then, about a week later, the doctors said they had run some new tests and given me antibiotics, and the mass in my brain shrank. They concluded that I only had a serious brain infection caused by bacteria in the water (I had been living in a log cabin north of New Orleans that used well water). Many people might have sued the doctors for the stress of a terminal diagnosis, but I was overjoyed to have a second chance. I am more grateful for each day than I ever was. I try to make the best use of my time.

FROM THE TOP: Bearded Smallwood in Havana Vieja; Hotel California, Todos Santos—night shot; Todos Santos lobster-stuffed chile poblano topped with cream sauce and pomegranate; Smallwood writing at a Havana café; Writing desk in room at Todos Santos Inn; Patio at Todos Santos Inn.



INFORMATION PRIVACY

IN BLOCKCHAIN WE TRUST

MIRACLE CURE OR SNAKE OIL?

BY DARRA HOFMAN

By now, we've all heard about blockchain technology––or at least its famous progenitor, Bitcoin. According to its evangelists, blockchain technology will secure our records, protect our privacy, democratize our technology, and probably fix us a cup of tea in the process. Blockchain's detractors tend to agree with John Oliver's takedown of Bitcoin and other cryptocurrencies as "everything you don't understand about money combined with everything you don't understand about computers." So, what's the real deal? Is blockchain technology the miracle cure that will soothe the aches and pains of digital Information Governance? Or is it just so much snake oil?

WHAT IS BLOCKCHAIN? That one guy who only wears t-shirts with memes told you that blockchain is the future. So why is it so hard to find out what blockchain actually is? In part, it's because there's no agreed-upon definition as to what constitutes a "blockchain," and in part because there are actually a number of different kinds of "blockchains." While academics can debate the nuances of exactly which technologies are and aren't "blockchain" (and if that's your thing, hit me up!), a blockchain can be understood as:
• A distributed ledger with a decentralized architecture
• Where transactions are:
  • Immutable
  • Secured through cryptography




LET'S BREAK EACH OF THOSE DOWN. A distributed ledger, or distributed ledger technology (DLT), is its own technology––of which blockchain is a form. A distributed ledger is a database of transactions. The "distributed" part comes from the fact that every computer or server running the ledger (every "node") runs that ledger in its entirety; there is no master-slave or master-copy setup. With a decentralized architecture, there is no centralized control over who can participate in the ledger. Instead of a centralized authority––say, Janice in accounting––maintaining the ledger, each node can construct and record its own updates to the ledger. The nodes then all vote, through a consensus mechanism, on whether each update is valid and on the order in which updates occurred. While different consensus mechanisms operate differently, they all trust math (instead of Janice in accounting). This is why blockchain is considered a "trust-less" technology: no human or institutional intervention is necessary to verify transactions. If the nodes reach consensus that a transaction is valid, it stays. If the nodes find a transaction invalid, it must sashay away. Transactions are made immutable and secured to the blockchain through a clever bit of math. With a blockchain, each transaction is cryptographically hashed––a cryptographic hashing algorithm makes an alphanumeric "fingerprint" of the transaction based on its exact content, down to the bit. A block of ten transactions will have ten hashes. Those hashes are then all hashed together to make the block hash. That block hash becomes the first hash of the next block, "chaining" all of the blocks together to make… a chain of blocks (or a "blockchain").

Block 1: Transaction 1 = 1; Transaction 2 = 2; Transaction 3 = 3; Blockhash = 6
Block 2: Transaction 4 = Block 1 Blockhash = 6; Transaction 5 = 5; Transaction 6 = 6; Blockhash = 17
Block 3: Transaction 7 = Block 2 Blockhash = 17; Transaction 8 = 8; Transaction 9 = 9; Blockhash = 34

SEE WHAT I DID THERE? In the above illustration (which uses simple addition, as opposed to the incredibly complex math of a real hashing algorithm), Block 2's hash value is dependent on Block 1's value; Block 3, in turn, depends on both Blocks 1 and 2. Changing the hash of any transaction––which, remember, happens when any bit of that transaction is changed––destroys the entire chain of hashes going forward. Because every block is unbreakably chained to the previous block, the blockchain is considered immutable. Furthermore, the cryptographic hash function works in such a way that it is virtually impossible to reconstruct the original transaction from its hash (much like you can't build a person from a fingerprint). This means that it's impossible to tamper and then go back and hide the tampering.
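To make the chaining concrete, here is a minimal Python sketch using the standard library's hashlib; the transactions and block structure are simplified assumptions, not any real blockchain's format:

```python
import hashlib

def sha256(text: str) -> str:
    """Return the SHA-256 hex digest: the 'fingerprint' of the input."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def block_hash(prev_block_hash: str, transactions: list) -> str:
    """Hash each transaction, then hash the previous block's hash
    together with those transaction hashes, chaining the blocks."""
    tx_hashes = [sha256(tx) for tx in transactions]
    return sha256(prev_block_hash + "".join(tx_hashes))

# A tiny three-block chain (made-up transactions).
block1 = block_hash("", ["pay Alice 1", "pay Bob 2", "pay Carol 3"])
block2 = block_hash(block1, ["pay Dave 5", "pay Erin 6"])
block3 = block_hash(block2, ["pay Frank 8", "pay Grace 9"])

# Tamper with a single character of Block 1 and every later hash changes:
tampered1 = block_hash("", ["pay Alice 9", "pay Bob 2", "pay Carol 3"])
assert block_hash(tampered1, ["pay Dave 5", "pay Erin 6"]) != block2
print("Any change to an early block breaks the whole chain of hashes.")
```

Change one character of an early transaction and every downstream block hash changes, which is exactly the cascade the illustration above shows.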

SO WHAT CAN BLOCKCHAIN DO FOR ME? So blockchain is a new technology that uses math to secure transactions on a ledger that anyone can read or write to without permission from a central authority. So why do you––a busy information professional––care? Blockchain is way up in the hype cycle; your team might well be asking whether a blockchain makes sense for your organization. A few benefits of the blockchain get touted pretty often: a blockchain will make our records more secure; a blockchain is more private; or a blockchain is auditable. To evaluate whether a blockchain makes sense for your organization, you need to know how true each of those claims is. Claims that blockchains are secure (or at least, more secure than other databases) rely on a few things. The first is the distributed nature of the blockchain ledger; falsifying records on the blockchain typically requires a "51% attack"––gaining control of 51% of the nodes running the ledger. However, each user controls his/her/their own account through use of a private key; if that key is compromised, just like when a password is compromised, an attacker can then do anything the user could do. This is a real threat, considering the complexity of private keys and the elevated privileges in designs where a trusted body holds users' keys in escrow. People are always a security threat; blockchains are no exception to that rule. The second element of the blockchain that leads people to claim it is secure is its use of cryptography (such as cryptographic hashing). People sometimes think this means data on the blockchain is natively encrypted. It's not. In a public blockchain, like Bitcoin, transaction data cannot be encrypted; if it were, nodes couldn't validate the transaction without


decrypting the data. If every node in a private blockchain is going to decrypt in order to validate transactions, then you have to ask why you’re spending the time and money to encrypt in the first place. So, even though blockchains use public key infrastructure (PKI) and cryptographic hashing, there’s a whole lot of unencrypted data (which, remember, anyone running a node can read) running around on a blockchain. Since encryption is pointed to as a reason that the blockchain is both more secure and more private, it’s difficult to overstate how important it is to understand exactly what data is, and isn’t, encrypted when considering a blockchain solution. Finally, claims that the blockchain will make records more secure often point to the immutability of transactions secured to the blockchain. It’s true: This is an excellent tool for ensuring the integrity of records. It also makes auditability a native feature of the blockchain. However, for records to be trustworthy––for information assets to retain their strategic or, in the case of litigation, evidentiary value––they must be accurate, reliable, and authentic.
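Since so much on-chain data is readable by every node, one widely discussed pattern, sketched below with an invented key-value store standing in for real off-chain storage, is to anchor only a record's hash on the chain and keep the record itself off-chain:

```python
import hashlib

off_chain_store = {}   # stands in for a database or document repository
on_chain_ledger = []   # stands in for the blockchain: fingerprints only

def anchor_record(record: bytes) -> str:
    """Keep the record off-chain; put only its fingerprint on-chain."""
    fingerprint = hashlib.sha256(record).hexdigest()
    off_chain_store[fingerprint] = record
    on_chain_ledger.append(fingerprint)
    return fingerprint

def verify_record(fingerprint: str) -> bool:
    """Re-hash the off-chain record and compare it with the ledger entry."""
    record = off_chain_store[fingerprint]
    return (hashlib.sha256(record).hexdigest() == fingerprint
            and fingerprint in on_chain_ledger)

fp = anchor_record(b"patient 123: blood type O+")
assert verify_record(fp)
```

Nodes can still validate that the fingerprint hasn't changed, but the readable personal data never touches the ledger.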

INTEGRITY IS ONLY HALF OF AUTHENTICITY. Blockchain cannot ensure the accuracy of a record; it's entirely possible for a user to enter a false or incorrect record onto a blockchain. Reliability is a condition of how a record is created; if Bob enters, say, an employee record into the blockchain without complying with the company's records procedures, then that will be an unreliable record. Nothing that happens after a record's creation can make it reliable. Lastly, authenticity––of which integrity is part––requires that a record is what it purports to be. There is nothing in the blockchain that instantiates the archival bond, which means a blockchain doesn't ensure a record's authenticity. Creating, managing, and preserving trustworthy records in a blockchain solution requires a lot of thought to build and integrate features that are not native to the blockchain.



Figure 1: Taxonomy of Trust, by Dr. Victoria Lemieux. The taxonomy breaks trust in records into three branches: Accuracy (precise, correct, truthful, pertinent); Reliability (consistency with formal procedures; completeness at the point of creation); and Authenticity (objectivity/impartiality or "naturalness," competence of author, identity, genuineness of author, integrity, archival bond, and completeness after creation).

Figure 2: When Is a Blockchain a Good Solution? A blockchain fits when you need a database with shared read/write permissions, have low trust between parties, need disintermediation, and have relationships between transactions.

WHEN IS A BLOCKCHAIN A GOOD SOLUTION? Are blockchains a complete write-off? A fad, doomed to the dustbin of history with Betamax and MySpace? No! Blockchains are still a technology in development, but they offer an excellent solution when you need a database with shared read/write permissions, have low trust between parties, need disintermediation, and have relationships between the transactions in the database. The threshold question, then, is why do you need a blockchain (as opposed to simply a secure database)? The best answer is that you have parties who don't

particularly trust one another, and you have some reason not to use a trusted third-party intermediary: cost, time, or simply the struggle of finding someone all the parties can agree to trust. Like Information Governance itself, blockchain technology integrates social considerations of trust with data and technical considerations. As such, blockchains are rarely a good solution for information assets within an organization; trust and disintermediation (theoretically) shouldn't be intra-organizational problems. However, blockchains can be very useful for interorganizational Information Governance. Some of the problem spaces

in which blockchains are being explored include land registries, supply chain management, food provenance, healthcare, and financial services. Examples include:
• The Linux Foundation's open-source collaborative blockchain, Hyperledger, which IBM is using to develop a banking application
• Oracle, which is developing preassembled, cloud-based, enterprise blockchain networks and applications
• The National Association of Realtors, which is developing a member-engagement data blockchain that allows permission-based access
For those cases where a blockchain makes sense, design matters.
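Rendered as a slightly tongue-in-cheek checklist, the threshold test of Figure 2 might be sketched like this (the criteria names are our paraphrase of the figure, not a formal standard):

```python
def blockchain_worth_considering(need_shared_database: bool,
                                 shared_read_write: bool,
                                 low_trust_between_parties: bool,
                                 need_disintermediation: bool,
                                 transactions_are_related: bool) -> bool:
    """All conditions should hold before a blockchain beats a plain,
    secure database."""
    return all([need_shared_database, shared_read_write,
                low_trust_between_parties, need_disintermediation,
                transactions_are_related])

# An intra-organizational system usually fails the trust test:
print(blockchain_worth_considering(True, True, False, False, True))  # False
```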

Figure 3: The Interrelated Solution Layers of the Blockchain, by Dr. Victoria Lemieux
• Social Layer: legal institutions, rules & regulation; multi-stakeholder networks & business procedures; social trust relations & infrastructures; cryptocurrency valuation, markets, and financing; cross-domains & sectors
• Data Layer: ledger trustworthiness; privacy & data protection; data architecture; data & records lifecycle management; standards and evaluation frameworks
• Technical Layer: blockchain platforms; blockchain applications; reference architectures; scalability issues; layer 1 & 2, exchange & app security; standards and evaluation frameworks

Implementing a successful blockchain requires asking in-depth technical questions:
• What consensus mechanism will we use?
• Permissioned or permission-less?
• What data will be encrypted?
• What kind of transaction speeds do we need?
• How scalable does this system need to be?
But it also requires asking a lot of people- and organization-oriented questions:
• Why do we need a trustless, disintermediated system?
• What are we trying to fix by implementing a blockchain?


• How do we make this accessible and useable to the end users, so that they trust the system where they didn't trust the previous processes?
• What regulatory challenges arise from using such a new technology?
• What makes the blockchain worth the extra investment, and how do we leverage that investment to maximize our return?
Implementing a blockchain should be a strategic choice.

CONCLUSION Blockchains are new and sexy. They combine distributed ledger technology and cryptography in a way that lets transactions be processed without human intervention––and thus without the need to trust fallible humans. But new and sexy is often the wrong strategic choice, especially if old and dependable is sufficient to meet organizational needs. Before implementing a blockchain, an organization should ask itself: Why? Blockchain is fundamentally a technology that addresses a social problem: trust. For those cases where low trust and intermediation are problems, blockchain can offer a real solution to serious data management problems, bringing efficiency and transparency to processes that have long challenged interorganizational Information Governance. However, in cases where trust is not the fundamental problem, blockchain technology is not the best solution. The key is asking what organizational needs a blockchain can meet that can't be met by its plainer ancestor, the database. Blockchain probably won't get us a cup of tea (though who knows where the Internet of Things will go), but it is a very useful tool to have in the toolbox, as long as one remembers that a hammer does not make every problem into a nail.

DARRA HOFMAN, J.D., M.S.L.S., IS A PH.D. STUDENT AT THE UNIVERSITY OF BRITISH COLUMBIA SCHOOL OF LIBRARY, ARCHIVAL, AND INFORMATION STUDIES, WHERE SHE IS A RESEARCH ASSISTANT WITH INTERPARES TRUST AND BLOCKCHAIN@UBC. HER RESEARCH FOCUSES ON THE INTERSECTION BETWEEN RECORDS, TECHNOLOGY, AND HUMAN RIGHTS, WITH PARTICULAR EMPHASIS ON BLOCKCHAIN. HER RESEARCH IS SUPPORTED BY A CANADA GRADUATE SCHOLARSHIP (SOCIAL SCIENCES AND HUMANITIES RESEARCH COUNCIL OF CANADA) AND A KILLAM DOCTORAL SCHOLARSHIP.

INFORMATION PRIVACY

News

Google Faces Fines

Google has been saddled with the largest fine yet levied by the EU: €4.34bn ($5 billion USD). The fine, based on the European Union's claims, is over "serious illegal behavior" tied to how Google entrenches its search engine on mobile phones in Europe. The claims derive from a finding that Google required pre-installation of its search engine and web browser on phones using the Android operating system, which runs on nearly 80% of phones. If manufacturers failed to pre-install as instructed, they would lose access to the Google Play store and other Google services. Margrethe Vestager, the EU's competition commissioner, had some harsh words for the tech giant: Google uses the Android OS "to cement its dominance as a search engine," preventing innovation and competition, "and this is illegal under EU antitrust rules." She added: "The vast majority of users simply take what comes with their device and don't download competing apps." She concluded that these services are not free, as consumers "pay with their data" to use them. "Or to slightly paraphrase what [U.S. free market economist] Milton Friedman has said: 'there ain't no such thing as a free search.'" Unsurprisingly, Google was quick to announce that it would appeal the ruling. A Google spokesperson had the following to say about the verdict: "Android has created more choice for everyone, not less. A vibrant ecosystem, rapid innovation and lower prices are classic hallmarks of robust competition. We will appeal the commission's decision." The tech giant has 90 days to end the practices outlined in the ruling to avoid increased and continued fines. The verdict caps a three-year investigation

into the Android OS by the European Commission's competition authorities. This decision is sure to raise hackles in the U.S. government, and especially the White House, given ongoing trade disputes and chilly relations. There is some concern about how the decision could affect the cost of smartphones in the EU, but Vestager is relatively unconcerned. Her position is that the verdict should allow for greater competition in the long run, which would bring prices down.

This is not the first time Google has faced fines in the EU, and it likely won't be the last. Time will tell whether the commission's decision is the opening salvo in a much longer struggle between the EU and Google and the other tech giants.



INFORMATION PRIVACY

PRIORITIZING PRIVACY: HOW I LEARNED THAT FACEBOOK FAILED

BY DAN O’BRIEN

Unless you have been living under a rock, the recent woes experienced by tech and social media giant Facebook have dominated news cycles on days when chemical weapons, nuclear proliferation, and North Korean summits might otherwise capture the attention of the American consciousness. The headline:

Facebook Doesn’t Care About Users’ Privacy

(and they probably never did). In fact, they misappropriated the personal data of more than 85 million citizens, and then took a couple of years to admit it. The blunder has not gone unnoticed by other tech titans. On a Saturday afternoon in March, at the China Development Forum in Beijing, Apple CEO Tim Cook offered his hot take on the situation at hand and how data affects human lives: "This certain situation is so dire and has become so large that probably some well-crafted regulation is necessary. The ability of anyone to know what you've been browsing about for years, who your contacts are, who their contacts are, things you like and dislike, and every intimate detail of your life; from my own point of view, it shouldn't exist." Needless to say, Zuckerberg has taken a beating; so too have Facebook shares, which suffered a precipitous 4.4% drop since the unveiling of the Cambridge Analytica scandal. Facebook users are now looking more closely at their app settings, and there has been a noticeable exodus from the platform. Millennials were already less enthralled with Facebook than with newer social media platforms like Instagram (also owned by Facebook) and SnapChat, so a loss of active users as a result of these privacy concerns is a black eye on, arguably, the world's most visible social media platform. And it is costing the company money, not only in market value but also in ad sales. Zuckerberg's public comments (outside of his congressional testimony) have been largely tone-deaf and saccharine-sweet. They range from Facebook posts professing that "we have a responsibility to protect your data and if we can't, then we don't deserve to serve you" to question-dodging that


would make any career politician proud. In an interview with The New York Times about the Cambridge Analytica scandal, he offered the following response: "Are there other Cambridge Analyticas out there? Were there apps which could have gotten access to more information and potentially sold it without us knowing or done something that violated people's trust? We also need to make sure we get that under control." So, in other words: yes, there are more coming. Can we really trust an organization that doesn't immediately accept responsibility for safeguarding private user information, knowing that it might have been used to undermine our very democracy? The very fact that more people clicked on "fake news"—planted false stories and claims—than actual news stories should have set off huge red flags at Facebook. Truth be told, if Facebook had an Information Governance (IG) program in place and were truly trying to maintain the integrity of our public news, and the integrity of their company, then this likely wouldn't have happened.

THE LASTING EFFECT OF FAKE NEWS News of this privacy breach comes in the wake of the outing of Cambridge Analytica and the role they may have had in politically-oriented ads. To hear Facebook speak about it publicly, they have been trying to combat the rise of fake news

(despite famously denying they played any role in the election). Ungoverned, false information lingered because of the clicks it created, not because stories were appropriately sourced and verified; that is the primary cause of the rise of fake news that plagued Facebook and spilled out into casual conversations. Fake news is best understood in the context of giving oxygen to an idea regardless of its veracity: such stories were likely accelerated and allowed to fester as a result of ineffective fact-checking efforts. Without access to the results of these third-party editors, we cannot understand the effect in its totality. You might be wondering why we can't apply more pressure to turn over the data, or at least share the full results. The reason will irritate you: as a private corporation, Facebook is not obligated to release the data. That is why GDPR-like legislation is being proposed in the U.S. at the state and national levels. Facebook claims that in not revealing this internal data, they avoid revealing private user data––which is hypocritical at best, given what we know now about the Cambridge Analytica scandal. Can we genuinely believe that Facebook is really prioritizing privacy, as their new ads suggest? We say no, but draw your own conclusions, sports fans.


GDPR INFORMATION WORKFLOW
ELECTRONIC PRIVACY ACT COULD HAVE A DETRIMENTAL IMPACT ON BUSINESSES

The GDPR's May 2018 effective date has caused many American businesses headaches as they scramble to understand how the EU approaches the electronic privacy of its citizens. An April 2018 flash poll conducted by Baker Tilly Virchow Krause, LLP noted that 90% of organizations were not ready for GDPR. A study conducted by McDermott Will & Emery LLP found that 71% acknowledged "that lack of compliance could have a detrimental impact on their companies' ability to conduct business globally." American businesses need the EU's customers. As a result, many may get lost in trying to become GDPR compliant without fully understanding why. At this point, most employees who have heard about GDPR understand that the new regulations ensure the privacy of EU citizens. However, this is not enough. American-based companies and their employees need to understand how PII travels through their companies' workflows. This understanding helps businesses and organizations that use sensitive PII be proactive in its protection, thus preserving the consumer's connection with American-based companies. The GDPR had been in effect mere hours before an Austrian company filed GDPR complaints against Google, Facebook, WhatsApp, and Instagram. Facebook owns WhatsApp and Instagram, so it may be in even deeper trouble than Google. Nonetheless, the very quick filing of the complaint hints at the motives behind passing the GDPR. While few Americans may have known the exact reasoning behind Mark Zuckerberg's testimony before the EU Parliament on May 22nd, he was there to reassure EU citizens that they could continue to use Facebook despite GDPR, which would take effect less than a week later. Much as it is in the United States, these types of

public hearings before governmental bodies tend to align with contemporary politics. Records managers and other professionals who manage information at the executive level need to take an IG approach to understanding how all information, not just PII, moves through their businesses. Prior to GDPR enactment, tech companies that relied on proprietary algorithms could collect data from any number of collection points. Under the new regulatory framework, data collection should be severely limited and is no longer part of the processor function. Consequently, any IG professional who seeks an understanding of GDPR must fully grasp this unique relationship between controller and processor. Under GDPR, it is unclear how these proprietary algorithms will continue to function; they depend on the all-encompassing processing of information listed in Article Four, Section 2. Figure One illustrates the new controller-to-processor relationship. Notice that the flow of information is only one way in this relationship.

Figure One: The controller (C) and processor (P) relationship. Before GDPR, combined controller/processors (C/P) exchanged information freely; after GDPR, information flows one way, from controller (C) to processor (P).

Understanding the flow of information and the duties ascribed to the controller and processor roles, while also managing information in a GDPR-compliant IG framework, is a challenge that can be addressed with a firm conception of what privacy means to an EU resident.
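As a toy model of that one-way flow (the class and method names here are invented for illustration; real GDPR roles involve far more than code can capture), a processor that acts only on a controller's documented instructions might look like:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Instruction:
    purpose: str   # the controller's documented purpose
    data: bytes    # personal data supplied by the controller

class Processor:
    """Post-GDPR processor: processes only what the controller sends,
    with no collection points of its own."""
    def process(self, instruction: Instruction) -> str:
        return f"processed {len(instruction.data)} bytes for '{instruction.purpose}'"

class Controller:
    """The controller determines purpose and means, and pushes data
    one way, down to the processor."""
    def __init__(self, processor: Processor) -> None:
        self._processor = processor

    def delegate(self, purpose: str, data: bytes) -> str:
        return self._processor.process(Instruction(purpose, data))

print(Controller(Processor()).delegate("newsletter delivery", b"jane@example.com"))
```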



INFORMATION PRIVACY

PRESERVING OUR PRIVACY: A NEW KIND OF COLD WAR?

Most Americans don't remember the Cold War and détente with Russia––except perhaps through a reimagining like FX's stellar series, The Americans. However, the recent attention Russian influence has received in the media conjures the feel of a digital version of spies and dead-drops. The most pressing question is: how deep and far-reaching are these efforts? Will they affect the 2018 midterm elections—and beyond? Have we lost control of our media and our election

process? But more importantly, what preventative measures are being taken to safeguard against future espionage? Truly, Facebook and other digital platforms could have had an effect on the election. Billions of clicks on fake news had to have some impact. Obviously, if clicks had no impact, there would be no advertisers. But the depth of the "Facebook Effect" remains unclear, as real investigative efforts are stymied by partisan posturing, Wall Street favors, and a reluctance on the part of tech giants to own any responsibility for how information is

shared at viral speeds. Certainly it can (and likely did) shift political opinions. This information, which is apparently available to those who plant surveys inside Facebook (and which they can then sell to advertisers or rogue operators), represents the kind of lever that can have a domino effect in crumbling the roots of our democracy. What does this have to do with information privacy? Everything.

“Either we rein in Facebook and other social media platforms with meaningful new privacy and breach reporting legislation or we will continue to lose our privacy rights—and eventually our democracy.” —Robert Smallwood of the Institute for Information Governance



Knowing that America's information is for sale, or is easily accessible, is an invitation for foreign governments to attempt to influence our elections––and it won't only be Russia lurking in the shadows. The West has a long-standing tradition of influencing elections in South America and the Middle East through economic and covert means; here, we can see how foreign governments with InfoSec expertise could influence how our country is governed. A frightening thought indeed. And a clarion call for serious new privacy and Information Governance (IG) legislation. If social media platforms are being used as a vehicle for foreign governments, as the Justice Department's Russia probe suggests, then what risk does information sharing pose to our republic and the very nature of democracy? There are hard questions to ask. Is Facebook a threat to American democracy? Is regulation the answer? Or perhaps something more serious, like breaking up the Facebook juggernaut or shuttering those virtual doors that make creeping in the digital night simple for foreign governments? It is certainly food for thought. Sen. Ron Wyden (D-OR), leading up to Zuckerberg's congressional testimony, offered his thoughts about what to do with Facebook: "There are going to be people who are going to say Facebook ought to be broken up. There have been a number of proposals and ideas for doing it, and I think unless [Zuckerberg] finds a way to honor the promise he made several years ago, he's gonna have a law on his hands. […] I think we've got to establish a principle once and for all that you own your data, period. What does that mean in the real world? It's not enough for a company to bury some technical lingo in their [terms of service] … It's not enough to have some convoluted process for opting out."

A DIGITAL MONOPOLY ON PERSONAL DATA? With Facebook front and center as the tech boogeyman of the year, we need to remember that the company might not be the champion of innovation built up in the public consciousness prior to its very public dressing-down. In many ways, Facebook has created a kind of digital

monopoly by cannibalizing more than 60 companies, including:
• Instagram
• WhatsApp
• Parakey
• ConnectU
• FriendFeed
• Octazen
• Divvyshot
• Friendster
• Numerous others
Beyond that, Facebook login credentials are used in hundreds of popular apps like Words with Friends and Venmo. It conjures up a kind of information-hoarding that cannot continue unchecked. And the Facebook social platform is not the only arena in which information is tracked and sold: Instagram, Tinder, and other complementary sites are doing much of the same in the name of newsfeeds, user experience, and, most importantly, ad revenue via clicks. Additionally, they can monitor connected devices in your home through Wi-Fi connections, even devices that do not have a Facebook app installed. Talk about invasion of privacy. That means they can monitor your kid's cellphone, your smart TV, and even your Wi-Fi-connected security system or doorbell. Makes you want to go back and take a closer look at their privacy notice, doesn't it? Unfortunately, it never offers you options. It is always "take it or leave it," and once you sign up, they keep changing the rules to suit their own purposes. Consideration must be given to people's privacy rights in order for these bad privacy practices to change.

A POSSIBLE SOLUTION: INFORMATION GOVERNANCE If you're reading this, then it is likely that you have a vested interest in IG. Simply put, Facebook's failures can be measured in its weak controls and lack of an effective IG program. For the uninitiated, Information Governance is simply minimizing information risks and costs while maximizing information value. Well, maybe Facebook got the value part right, but IG programs are about the security and control of information as well. IG programs help companies meet compliance and legal demands while eliminating unneeded and outdated information, so they can focus on leveraging high-value information across business units. Social media companies must do a better job of knowing what information they have, monitoring and controlling it, and taking proactive steps to guard user privacy.

INFORMATION PRIVACY

News

Finding Facebook Fakes

If you were to judge the effectiveness of Facebook's efforts to combat fake profiles by the half-billion fake accounts it shut down in the first quarter of 2018, you might conclude the company is doing as promised in the face of the Cambridge Analytica scandal. However, when you consider that 3%–4% of its 2.4 billion-plus profiles are still fake, you have to wonder what exactly Facebook is accomplishing by shuttering fake profiles, other than good PR. The use of AI to judge the content of potentially harmful fake profiles, and their subsequent posts and comments, is at the forefront of the "transparent" efforts by the tech giant to quell public outcry over its role in the 2016 election. This zeal to stop fake news has dominated headlines but remains mostly feckless, considering the increase in profiles and Facebook's lack of transparency about how it will police fake profiles, despite protestations to the contrary. The sad truth that has been leaking out is that fake news is still propagated on Facebook, and there seems to be very little that the company plans to do to stop channels that spread lies, hate, and propaganda. If anything, the efforts seem to have normalized unchecked information on the social media network. The great hope remains that Facebook will come to its senses and design a program that better facilitates identifying fake profiles and propaganda and delineating them for users. Certainly, Facebook is trying, and is shutting down fake profiles and groups on a daily, if not hourly, basis. Alas, it might not be enough.



INFORMATION SECURITY

What is Security Awareness Training?

Employees' human errors are the weakest link in securing an organization's confidential information. However, there are some small, inexpensive steps, delivered through employee training, that can reduce information risk. Security Awareness Training (SAT) programs educate an organization's workforce about the risks to information and the schemes employed by hackers. SAT provides employees with the skills to act consistently in ways that protect the organization's information assets. Bad actors target employees' natural human tendencies with phishing emails and spear-phishing campaigns. SAT programs often include phishing simulations and other social-engineering tactics such as text-message "smishing" and unattended USB drives. SAT products provide a comprehensive approach to employee training, empowering employees to recognize and avoid a broad range of threat vectors. SAT is an effective and easy way to reduce risk: corporate risk is reduced by changing the (human) behavior of employees. Leading products in this market use innovative methods such as short, animated videos and pop quizzes to teach employees about information security threats. SAT is not a one-and-done activity. To be effective, SAT must be implemented as an ongoing process. Physical security programs implemented to meet OSHA requirements serve as a good analogy. SAT is a continuous improvement process; new threats emerge every day. The leading products incorporate new content on a regular basis and provide employee engagement opportunities that go well beyond traditional computer-based training activities.
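Because SAT is a continuous improvement process, the natural metric is the phishing-simulation click rate over successive campaigns. A minimal sketch (the campaign names and numbers are invented) might look like:

```python
from dataclasses import dataclass

@dataclass
class PhishingCampaign:
    name: str
    emails_sent: int
    clicks: int   # employees who clicked the simulated lure

    @property
    def click_rate(self) -> float:
        return self.clicks / self.emails_sent

# Hypothetical quarterly simulations; a falling click rate suggests
# the ongoing training is actually changing employee behavior.
campaigns = [
    PhishingCampaign("Q1 baseline", 500, 115),
    PhishingCampaign("Q2 refresher", 500, 74),
    PhishingCampaign("Q3 smishing mix", 500, 41),
]
for c in campaigns:
    print(f"{c.name}: {c.click_rate:.0%} click rate")
```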



WHAT IS PENETRATION TESTING? REDUCE RISK WITH SIMULATED ATTACKS BY BAIRD BRUESEKE

Penetration testing (a "pen test") is a technique used by information security (InfoSec) professionals to find weaknesses in an organization's InfoSec defenses. In a penetration test, authorized cybersecurity professionals play the hacker's role, attempting to circumvent digital safeguards by simulating an attack by hackers or an internal bad actor, using the same techniques hackers use to attack companies every day. The results of a penetration test reveal, in advance, the vulnerabilities and weaknesses that could allow a malicious attacker to gain access to a company's systems and data. Some techniques used include brute-force attacks, exploitation of unpatched systems, and password-cracking tools. Organizations hire InfoSec experts with specialized training credentials––such as Certified Ethical Hacker (CEH) and

Offensive Security Certified Professional (OSCP)––to conduct authorized attempts to breach the organization's security safeguards. These experts begin the pen test by conducting reconnaissance, often creating an attack surface and internet footprint analysis to passively identify exposures, risks, and gaps in security. Once potential vulnerabilities are identified, the penetration testing team initiates exploit attempts, using automated tools to probe websites, firewalls, and email systems. Successful exploits often involve multiple vulnerabilities attacked over several days. Individually, none of the weaknesses is a wide-open door. However, when combined by an expert penetration tester, the result is a snowball effect that provides the pen tester with an initial foothold inside the network, from which they can pivot and gain access to additional systems. Penetration testing is a useful technique for evaluating the potential damage from

a determined attacker, as well as for assessing the organizational risks posed. Most hackers and criminals go after low-hanging fruit––easy targets. Regular penetration tests ensure that the effort required to gain access to internal networks is substantial. The result? Most hackers will give up after a few hours and move on to other targets that are not so well defended.

BAIRD BRUESEKE HAS 25-PLUS YEARS OF EXPERIENCE LEADING COMPANIES AND DESIGNING SOLUTIONS TO SOLVE CUSTOMER PROBLEMS. HE CO-FOUNDED WHEB SYSTEMS, WHICH GREW FROM A TWO-PERSON STARTUP TO BECOME CAPTIVA SOFTWARE, A PUBLIC COMPANY WITH OVER 400 EMPLOYEES PURCHASED BY EMC. AFTER CAPTIVA, BAIRD'S INTERESTS TURNED TO EDUCATION AND CYBERSECURITY. HE CO-OWNS A PATENT AND CREATED A CLOUD-BASED PORTAL, CLAAS (COMPUTER LAB AS A SERVICE), WHICH PROVIDES ACADEMIC INSTITUTIONS THE ABILITY TO DELIVER A HANDS-ON COMPUTER SCIENCE LABORATORY EXPERIENCE TO DISTANCE LEARNERS.



INFORMATION SECURITY

AN INTERVIEW WITH

JUDY SELBY SITTING DOWN WITH THE IN-DEMAND AUTHOR AND SPEAKER

Judy has over 25 years of experience in insurance coverage litigation. She has particular expertise in cyber insurance and coverage under various policy forms for today's emerging risks. She is also a prolific author and sought-after speaker on insurance, cyber, technology, and compliance issues. She has been quoted in leading publications, including the Wall Street Journal, Fortune, Forbes, Reuters, Directors & Boards, and numerous others.

InfoGov World: Where did you grow up?

Judy: I grew up in Brooklyn, but way before it was the cool place to be. My old neighborhood is famous for great Italian food, Saturday Night Fever, and over-the-top Christmas lights.

How did you develop an interest in cyber-risk mitigation and cyber insurance? I started working on insurance coverage matters right out of law school. I handled very large and complex cases that went on for years and involved tremendous volumes of paper––and later electronic data––in discovery. Because of my background, some former colleagues asked me to head up the eDiscovery and technology practice at my last law firm; I was responsible for managing the eDiscovery and data-handling processes for the massive Madoff litigations. I later co-founded the firm's Information Governance practice as well. I concurrently studied cybersecurity, Big Data, IoT, and crisis management at MIT to develop a deeper knowledge of key issues affecting my practice. All this coincided with the emergence of cyber insurance, so it was natural to marry my two areas of expertise and focus on cyber-risk mitigation and insurance. A lot of people are struggling to understand and deal with these issues, and I enjoy being in the position to help them.

What types of consulting work have you recently been engaged in? I've been engaged in some really interesting


consulting projects. I often work with companies to help them get appropriate insurance across a variety of traditional insurance lines, including Directors and Officers, Employment Practices Liability, General Liability, Property, Crime, etc. However, I'm most often retained to advise companies about cyber insurance. I negotiate for better policy terms, help companies select the right coverages, advise them about coverage pitfalls, assist with completion of the application, and help them to understand their obligations under the terms of the policy. I've supported technical teams doing cyber-risk audits. I review the results of the audit and work with the company to get insurance coverage for the identified risks. I also conduct insurance due diligence in the context of corporate mergers and acquisitions, and consult with private equity firms about insurance issues. Over the past few months, I've become more involved with regulatory compliance engagements, particularly around the GDPR. I also advise corporate boards about insurance, privacy, cybersecurity best practices, and privacy/data protection compliance issues.

Has GDPR had an impact on cyber-risk in the U.S.? How? Yes, but it's not just GDPR. The New York State Department of Financial Services (DFS) cybersecurity regulation, the model cybersecurity law approved by the National Association of Insurance Commissioners (NAIC), and the recent cybersecurity guidance from the SEC are all requiring companies to adopt a much more mature approach to the governance of their information––recognizing that it is an incredibly valuable corporate asset to which a variety of serious risks are attached. The import of these new regulatory developments is that data security and privacy risks should be incorporated into the company's enterprise risk management program. Information needs to be appropriately managed throughout its

entire lifecycle, from its creation or acquisition until its ultimate disposition. Because data touches virtually every part of modern enterprises, reliance on a siloed approach to Information Governance is bound to create compliance problems and increased risk. An additional impact of these new regulatory developments is the elevation of cyber-risk management to the board level. Going forward, I expect to see increased accountability on corporate boards concerning their oversight of these issues.

Do you see risk considerations as playing an increasing role in Information Governance programs? Yes. I don't think it's possible to effectively govern information without knowing the company's unique cyber and privacy-risk profile. After the company's risks have been identified, steps can be taken to prioritize issues for remediation and to build processes to mitigate those risks on a going-forward basis. But unfortunately, despite best efforts, not all risks can be eliminated, so companies also should take a hard look at risk transfer through insurance.

Do you see more companies addressing risk management and creating risk management departments? Yes, but I haven't seen a uniform approach to the issue across the board. The "owner" of cyber and privacy-risk management within any particular organization might be a CISO, CIO, Chief Risk Officer, Risk Manager, or an assortment of other positions. Regardless of the title of the designated person, it's important to take an enterprise view and obtain input from a cross-section of relevant stakeholders––including legal, compliance/privacy, procurement, IT, human resources, and marketing. Adoption of this type of approach will enable better risk identification and formulation of appropriate and effective comprehensive risk management policies and procedures.


What steps can companies take to reduce their cybersecurity risks in the near term? I'm a big believer in getting an independent third-party risk assessment as a first step towards cyber-risk reduction. That way, the company can get a good understanding of what its exact risk profile is. No two companies are the same, so identifying the precise risks that impact the organization is vital in order to implement an effective cybersecurity and privacy program, develop an appropriate incident response plan, prepare any required regulatory disclosures, and obtain the right insurance coverages.

What do cyber-insurance companies look for in assessing the risk posture of a company? Right now, there is no standard industry approach to this issue, but, generally, insurers will want to know the types and volumes of data the company handles (e.g., credit card data, PHI, PII, etc.), practices around data security, such as encryption, passwords, firewalls, adoption of cybersecurity and privacy policies and procedures, use of third-party service providers, data-retention practices, in-house cybersecurity and privacy personnel, any history of prior incidents, etc. Great care should be taken when responding to insurance policy applications because material misrepresentations, even if unintentional, may jeopardize coverage in the event of a claim.

How can companies reduce the cost of cybersecurity insurance? One of the best ways to do this is to adopt and implement a sound and comprehensive Information Governance program. A company with demonstrably sound practices around data protection, regulatory compliance, and data hygiene will likely be viewed as a better risk to insurance underwriters, which should result in better rates and policy limits.

What do you like most about New York City? I love the diversity of the people and pace of the city. It's a place of both long-standing institutions and cutting-edge innovation. And being a huge sports fan, I love having access to all the major sports, particularly my beloved Yankees!

What is your pet peeve? I'm certainly not a perfect grammarian, but I do have some grammar pet peeves. For instance, it drives me nuts when people say "Me and Joe went to the movies" or "None of us are going" or "between Joe and I." I guess my elementary school English teachers in Brooklyn did a good job of hammering home those concepts when I was a kid.

JUDY SELBY HAS 25 YEARS OF INSURANCE COVERAGE LITIGATION EXPERIENCE WORKING ON BEHALF OF INSURERS AND CORPORATE POLICYHOLDERS. SHE HAS A PARTICULAR EXPERTISE IN CYBER INSURANCE AND COVERAGE UNDER VARIOUS POLICY FORMS FOR TODAY’S EMERGING RISKS. JUDY PROVIDES COVERAGE EVALUATION, POLICY NEGOTIATION, GAP ANALYSIS, AND SOC AUDIT SUPPORT SERVICES TO COMPANIES ACROSS MULTIPLE INDUSTRIES, BRINGING GREATER CLARITY AND CERTAINTY TO THEIR INSURANCE PROGRAMS. JUDY IS A PROLIFIC AUTHOR AND SOUGHT-AFTER SPEAKER ON INSURANCE, CYBER, TECHNOLOGY, AND COMPLIANCE ISSUES. SHE HAS BEEN QUOTED IN LEADING PUBLICATIONS, INCLUDING THE WALL STREET JOURNAL, FORTUNE, FORBES, REUTERS, DIRECTORS & BOARDS, INFORMATIONWEEK, BUSINESS INSURANCE, LAW360, BLOOMBERG BNA, CIO, CSO, INSURANCE BUSINESS AMERICA, NATIONAL LAW JOURNAL, DARK READING, CORPORATE EXECUTIVE

What is a Vulnerability Assessment?

The term vulnerability assessment applies to a broad range of systems. For example, in the context of a disaster recovery plan, the vulnerability assessment would include the likelihood of flooding, earthquakes, and other potential disasters. In the digital sphere, a vulnerability assessment is an evaluation of an organization's cybersecurity weaknesses. This process includes identifying and prioritizing specific computer configuration issues that represent vulnerable aspects of an organization's computing platforms. The Institute for Security and Open Methodologies (ISECOM, http://www.isecom.org/research/) publishes the Open Source Security Testing Methodology Manual, which documents the components of a vendor-neutral approach to a wide range of assessment methods and techniques. A vulnerability assessment project typically includes the following:
1. Inventory of computing assets and networked devices
2. Ranking those resources in order of importance
3. Identification of vulnerabilities and potential threats
4. Risk assessment
5. Prioritized remediation plan
A vulnerability assessment starts with an inventory of computer systems and other devices connected to the network. Once the items on the network have been enumerated, the network is scanned using an automated tool to look for vulnerabilities.
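For a sense of what the automated tooling does at much larger scale, here is a minimal sketch of the most basic probe, a TCP connect scan, using only Python's standard library (the port list is illustrative, and you should only scan hosts you are authorized to assess):

```python
import socket

COMMON_PORTS = [21, 22, 23, 25, 80, 110, 143, 443, 445, 3389]

def scan_host(host: str, ports=COMMON_PORTS, timeout: float = 0.5) -> list:
    """TCP connect scan: an open port is an input to the vulnerability
    inventory, not a vulnerability by itself."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

print(scan_host("127.0.0.1"))
```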

There are two types of scans: credentialed and non-credentialed. A credentialed scan uses domain admin credentials to obtain detailed inventories of the software applications on each of the computers. This method provides the security team with the information necessary to identify operating system versions and required patches. Often overlooked, a company's website should be part of a comprehensive vulnerability assessment. The Open Web Application Security Project (OWASP) maintains a list of the top 10 vulnerabilities most commonly found on websites. Surprisingly, many websites fail to properly implement user authentication and data input checking. These types of vulnerabilities have the potential to expose corporate data to anyone with internet access. Performing a vulnerability assessment exposes these issues so they may be resolved. The final output of a vulnerability assessment project is the prioritized remediation plan. This plan uses the results of the risk assessment to determine which vulnerabilities represent the greatest risk to the organization. The total list of vulnerabilities often numbers in the hundreds, if not thousands. However, not all of the vulnerabilities are big problems requiring immediate attention. The prioritized remediation plan allows IT administrators to reduce corporate risk quickly by focusing on the most important weaknesses first.
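The remediation-planning step can be as simple as sorting findings by a likelihood-times-impact score, as in this sketch with invented findings and scores:

```python
findings = [
    {"id": "unpatched SMB service", "asset": "file server", "likelihood": 5, "impact": 5},
    {"id": "weak TLS configuration", "asset": "web portal", "likelihood": 3, "impact": 4},
    {"id": "default admin password", "asset": "printer", "likelihood": 4, "impact": 2},
]

# Highest risk first: this ordering is the skeleton of the remediation plan.
for f in sorted(findings, key=lambda f: f["likelihood"] * f["impact"], reverse=True):
    print(f'{f["likelihood"] * f["impact"]:>2}  {f["id"]} ({f["asset"]})')
```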



INFORMATION SECURITY

STEPPING INTO SECURITY ASSESSMENTS: SELECTING A FRAMEWORK (PART 1 OF 2)
BY BAIRD BRUESEKE

In today’s cyber threat landscape, companies have a fiduciary duty to assess their cybersecurity posture. This is the root function of a Cybersecurity Assessment. Typically, third-party vendors are contracted to perform the assessment. These firms have expertise in a variety of cybersecurity skills, which they use to tailor the engagement to a scope appropriate for the organization being assessed. One of the first steps when starting a Cybersecurity Assessment project is to select a framework. This choice will become part of the project requirements and in large part define the scope of work to be performed by the third-party vendor. There are several frameworks to choose from, including ISO 27001, COBIT, the NIST Cybersecurity Framework, NIST 800-53, DoD 8570, DCID 6/3, HITRUST CSF, and the Cloud Security Alliance’s Cloud Controls Matrix. Even the Motion Picture Association of America has defined a cybersecurity framework to protect its members’ intellectual property. The NIST Cybersecurity Framework consists of five “functions”: Identify, Protect, Detect, Respond, and Recover, as shown below.

[Figure: the five NIST Cybersecurity Framework functions: Identify, Protect, Detect, Respond, Recover.]

These five functions are sub-divided into 22 categories, and each category has multiple controls. One issue with the NIST framework is that a comprehensive Security Assessment using it can quickly become a big project, often too big for the organization’s size. For small and medium-sized businesses, a good step forward is to specify the Center for Internet Security (CIS) Top 20 controls as the framework the independent cybersecurity team will assess. The CIS Top 20 controls provide an assessment tool that senior executives can readily understand. Once the CIS controls are evaluated, the organization’s security posture can be easily visualized using color-coded infographics and risk score heat charts (a scoring sketch follows the list of controls below). Many Security Assessments include an evaluation of the business’ people, processes, and technologies. There is no point in spending technology dollars if the existing corporate processes do not support their use. These decisions can be explored using radar charts to visualize the cyber readiness of three metrics: people, process, and technology. Radar charts depict cybersecurity assessment scores in a circular chart with gradient rankings that show executives the information they need to act on to enhance their security posture. The second installment in this series will explore assessment metrics and executive engagement.

The CIS Top 20 Critical Security Controls:
CSC 1. Inventory of Authorized and Unauthorized Devices
CSC 2. Inventory of Authorized and Unauthorized Software
CSC 3. Secure Configurations for Hardware and Software
CSC 4. Continuous Vulnerability Assessment and Remediation
CSC 5. Controlled Use of Administrative Privileges
CSC 6. Maintenance, Monitoring, and Analysis of Audit Logs
CSC 7. Email and Web Browser Protections
CSC 8. Malware Defenses
CSC 9. Limitation and Control of Network Ports, Protocols, and Services
CSC 10. Data Recovery Capability
CSC 11. Secure Configurations for Network Devices
CSC 12. Boundary Defense
CSC 13. Data Protection
CSC 14. Controlled Access Based on the Need to Know
CSC 15. Wireless Access Control
CSC 16. Account Monitoring and Control
CSC 17. Security Skills Assessment and Appropriate Training to Fill Gaps
CSC 18. Application Software Security
CSC 19. Incident Response and Management
CSC 20. Penetration Tests and Red Team Exercises
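As a rough illustration of the scoring described above, the following Python sketch buckets hypothetical per-control maturity ratings into stoplight colors, the raw material for the color-coded infographics an assessor would present. The controls shown and the 0-5 scale are illustrative assumptions, not part of the CIS specification.

```python
# A minimal sketch of turning CIS control evaluations into a color-coded
# posture summary. Scores are hypothetical maturity ratings (0-5) that an
# assessor might assign per control.
scores = {
    "CSC 1  Inventory of Devices":        4,
    "CSC 3  Secure Configurations":       2,
    "CSC 4  Vulnerability Remediation":   1,
    "CSC 17 Security Skills Training":    3,
}

def rating(score):
    """Bucket a 0-5 maturity score into a stoplight color."""
    if score >= 4:
        return "GREEN"
    if score >= 2:
        return "YELLOW"
    return "RED"

for control, score in scores.items():
    print(f"{control:<35} {score}/5  {rating(score)}")
```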


Security Awareness Training – a Quick Win for IG Programs
PEOPLE ARE THE FIRST STEP IN SECURING THE ENTERPRISE

One of the quick wins that an Information Governance (IG) program can bring to an organization is the implementation of a Security Awareness Training (SAT) program. Information Governance programs are implemented to reduce risk and maximize information value. Security Awareness Training programs are an excellent way to reduce risk, and they are easy to implement. Employees have many bad habits that can leave a company vulnerable to data breach scenarios. In response to the ever-increasing cybersecurity threat faced by business, a new sub-segment of the Information Security market has emerged and matured in the last five years. The Security Awareness Training market grew 54% from 2015 to 2017, and projected revenues for 2018 top $400 million. Cybersecurity threats are constantly evolving. One of the important things to understand when evaluating Security Awareness Training programs is the vendor’s cycle for new content development and deployment in the training platform. Some of the features to look for and evaluate when selecting a Security Awareness Training product are:

• Interactive content in varied formats designed to keep learners engaged
• Training designed to teach resistance to multiple forms of social engineering
• Optimization for smart phone and tablet usage
• Gamification and other methods to engage employees and increase participation
• Pre-structured campaigns for different types/levels of employees
• Role-based training with optional customization based on corporate environment
• Robust library of existing content and flexible micro-learning topics
• Internal marketing material and communication tools for use by the HR department
• Short lessons, approximately 5 minutes in length, certainly less than 10 minutes each
• Integrated quizzes and metrics to track employee participation and knowledge retention
• Integration with corporate LMS
• Integration with end-point security systems

It is important to understand that SAT products typically include not only training, but also simulated attacks. Therefore, the way in which the SAT product interacts with existing cybersecurity defenses is a serious consideration. For example, if the training program administrator sends out a simulated phishing attack email, that email needs to make it through the SPAM filter and into the employee’s email inbox before the employee can be tempted into potentially clicking on the bad link. In smaller companies, it may be sufficient to whitelist the domain from which the phishing email is being sent. In larger organizations, which have Security Information and Event Management (SIEM) and other automated cyber defense systems, the company’s IT/security team would likely request integration of a notification process for the simulated attack campaign in order to avoid a rash of false alarms from the security monitoring systems. Security Awareness Training can provide a quick win for IG programs. The training immediately reduces risk. At the same time, management can point to the employee participation metrics as proof that proactive efforts are being made to enhance the organization’s security posture.
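For a sense of the mechanics, here is a minimal Python sketch of the simulated-phishing send that a SAT platform performs. Everything here is hypothetical: the SMTP host, the sender domain (the one a SPAM filter would need to whitelist), and the tracking URL. Commercial SAT products handle this through their own infrastructure.

```python
# A minimal sketch of a SAT platform's simulated phishing send. The SMTP
# host, sender domain, and tracking URL are hypothetical; the sender domain
# is what the SPAM filter would need to whitelist.
import smtplib
from email.message import EmailMessage

def send_simulated_phish(recipient, campaign_id):
    msg = EmailMessage()
    msg["From"] = "it-support@sat-training.example.com"  # whitelisted domain
    msg["To"] = recipient
    msg["Subject"] = "Action required: password expiry"
    # The link points at the SAT platform, which records who clicked
    msg.set_content(
        "Your password expires today. Review your account here:\n"
        f"https://sat-training.example.com/t/{campaign_id}"
    )
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)

# Example (hypothetical recipient and campaign):
# send_simulated_phish("employee@example.com", "campaign-042")
```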



Doug Laney – VP & Distinguished Analyst, Chief Data Officer Research, Gartner.



ON THE MONEY

An Interview with Infonomics Author Doug Laney

Doug Laney is vice president and distinguished analyst with the Gartner Chief Data Officer (CDO) research and advisory team. Doug researches and advises clients on information monetization and valuation, open and syndicated data, analytics centers of excellence, data governance, and big data-based innovation. He is the author of Gartner’s Enterprise Information Management Maturity Model and is a two-time recipient of Gartner’s annual thought leadership award.

Photography by Isi Akahome




Finding new value in information is a key aspect of Information Governance, so we were quite interested to learn more about Doug and his work.

InfoGov World: Where did you grow up? Go to school?

Doug: I grew up in Deerfield, Illinois, a suburb about 30 miles north of Chicago. Walgreens, Astellas Pharma, Baxter Labs, Fortune Brands, and now Caterpillar are headquartered there. Sara Lee’s bakery used to be there, so the town often smelled of cakes when I was young. I wanted to study math and computer science. Since the University of Illinois was a top school in the field and only $600 tuition per semester at the time, there was no question that’s where I was headed. But after having spent four years in Junior Achievement, I was really more interested in the business side of computing, so I designed my own curriculum and graduated with a degree in business administration and computer science. Subsequently, the curriculum was adopted as a formal offering.

What are your responsibilities at Gartner?

I’m one of a couple thousand IT industry analysts at Gartner. We’re expected to be the eyes, ears, and thought leaders in our respective corners of IT, publishing, speaking, and advising IT and business leaders on enterprise technologies, best practices, etc. I’m with Gartner’s Data & Analytics research and advisory team, a collection of some of the most brilliant folks in the field I’ve ever met.

How do you define infonomics?

Infonomics is the concept and practice of treating information as an actual corporate asset. Everyone talks about information as one of their organization’s most critical corporate assets, but scant few actually behave as if it is. That is, they don’t monetize, manage, or measure it with nearly the same discipline as their physical or financial assets––or even their human capital. In fact, most companies treat their office furniture with greater asset discipline than their information. Why? Perhaps because even in the midst of the Information Age, the accounting profession refuses to recognize information as a balance sheet asset. Yet, information easily meets the criteria established for being recognized as an asset.

When did you first discover infonomics and how has your interest in infonomics evolved?

I had always just assumed information was an asset and considered a form of property. But after the 9/11 terrorist attacks, some clients in the Twin Towers lamented to us not only the tragic loss of life, but also the loss of their data. Remember, this was in the days before the cloud, when many maintained onsite backups. Naturally, these businesses contacted their insurance companies, who wouldn’t honor the claims, arguing that electronic data wasn’t a type of property covered by their P&C policies. This led me to crack open my old accounting textbook to learn just what the definition of an asset is. Hint: information meets all the criteria of one. Then I popped onto the online financial database EDGAR to inspect some balance sheets, particularly those of data brokers and such. To my surprise, the value of the hardware their data sits on is recognized as an asset, but not the data. This, I thought, explains why organizations fail to manage their information well. They don’t measure its value. You know the old adage: ‘You can’t manage what you don’t measure.’ I also think it follows that you can’t monetize what you don’t manage. And this explains why so much information goes unutilized. Over time, my colleagues and I have developed and adapted models for measuring information’s value––in a variety of ways for different purposes. Also, we have explored how to manage information as an actual asset by applying asset management approaches from other fields. And we have laid out approaches to monetizing any information asset that take advantage of its unique economic properties. Finally, we have begun exploring how traditional economics concepts––like supply and demand, marginal utility, pricing and elasticity, and others––must be tweaked to be applied to information assets.

What or who are some of the major influences in developing your theories about infonomics?

In the 1960s, University of Chicago economist and future Nobel



CHECK OUT PAGE 50 FOR AN EXCERPT FROM DOUG LANEY’S NEW INFONOMICS BOOK



“Measure the difference between the potential and realized value of key information assets, then set upon a journey to close the gap.”

laureate Gary Becker devised the concept of “human capital.” Before that time, labor was just an expense, not really something to be managed or optimized. His ideas gave rise to the modern HR department and ultimately the chief human resource officer. Thankfully, due to the Thirteenth Amendment and similar anti-slavery laws in other countries, you can no longer own people. So, today a company cannot account for employees as assets, but that hasn’t stopped organizations from accounting for and managing them like assets internally. I think there are a lot of parallels here to information assets. Yet, as I mentioned, information does meet the criteria of a formal accounting asset. Other key influences on the topic: the renowned “father of the data warehouse concept” Bill Inmon; analytics thought leader Tom Davenport; data governance giant John Ladley; famous educator and author Paul Strassmann (who struggled for decades to value IT); and Claude Shannon, the father of information theory. And the infonomics concept has benefited immensely from the collaboration of numerous Gartner colleagues.

What changes and trends in infonomics have you seen in the last year?

Since the book was published, it seems most organizations are now seriously strategizing about how to generate greater economic benefits from their (and other) information assets. Some even have robust data monetization initiatives. There also seems to be more chatter about the possibility of recognizing and reporting on information on the balance sheet, but I think that’s still many years away. [Note: the Financial Accounting Standards Board (FASB) convened a group in 2016 to study this issue.] Lately, clients request meetings every week about how to help them quantify their information. And Gartner Consulting has done numerous infonomics-related projects for both

commercial and public sector organizations.

What skill sets are needed for someone to become an expert in infonomics and leveraging info assets?

The surprising answer to this is that these skills already exist in droves, but in other disciplines. There are experts in physical asset management, financial asset management, human capital management, accounting, economics, product innovation, and product management. Chief Data Officers and other leaders need to find these people and apply their skills to their organization’s information assets.

What advice would you have for companies wanting to manage and monetize their information assets?

Measure them. Measure the difference between the potential and realized value of key information assets, then set upon a journey to close the gap. Once business leaders and executives clearly see how much money they’re leaving on the table, these initiatives will gain support and momentum.

What hobby or special skill do you have that might surprise your colleagues?

Competitive tennis, non-competitive golf, biking, and cooking are my things. Other than that I juggle, can talk like Donald Duck, teach Junior Achievement at grammar schools, and have a collection of hundreds of wind-up toys. Nothing that out of the ordinary!

What is the best vacation spot you have found?

I’ll never tell. It’s serene and secluded and I want to keep it that way. Sometimes, too much information is a bad thing.
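Laney’s “measure the gap” advice lends itself to a very simple first pass. The Python sketch below compares each information asset’s potential value against the value actually realized; the asset names and figures are invented for illustration, and Gartner’s formal valuation models are considerably more involved.

```python
# A minimal sketch of the potential-vs-realized value gap. All figures are
# invented; real infonomics valuation models are much more elaborate.
assets = {
    "customer transaction history": {"potential": 5_000_000, "realized": 1_200_000},
    "product telemetry":            {"potential": 2_000_000, "realized":   150_000},
    "supplier master data":         {"potential":   800_000, "realized":   600_000},
}

# Largest untapped value first: a candidate priority order for initiatives
for name, v in sorted(assets.items(),
                      key=lambda kv: kv[1]["potential"] - kv[1]["realized"],
                      reverse=True):
    gap = v["potential"] - v["realized"]
    pct = v["realized"] / v["potential"]
    print(f"{name}: ${gap:,} of untapped value ({pct:.0%} realized)")
```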

The INFOrmation: Doug Laney

Doug Laney is a senior analyst and advisor with Gartner’s Chief Data Officer research group. He is an accomplished practitioner and recognized authority on information and analytics strategy. Doug researches, publishes, and consults to senior IT and business leaders on data monetization and valuation, open and syndicated data, data governance, and big data-based innovation. In the ’90s he coined the “3Vs” of volume, velocity, and variety, now commonly used in defining Big Data. More recently, Doug helped launch and manage the Deloitte Analytics Institute, has guest-lectured at major business schools, and has been published in the Wall Street Journal, Forbes, and the Financial Times, among other journals. Upon returning to Gartner he researched and published the book Infonomics, which was selected by CIO Magazine as a “must read” book of the year. And next year Doug will begin teaching a graduate-level business course on infonomics at the University of Illinois.





DATA ANALYTICS & INFONOMICS

WHY WILL ANALYTICS BE THE NEXT COMPETITIVE EDGE? BY GARY COKINS

Analytics is becoming a competitive edge for organizations. Once a “nice-to-have,” applying analytics is now becoming mission-critical. An August 6, 2009, New York Times article titled “For Today’s Graduate, Just One Word: Statistics”[1] reminds me of the famous quote of advice to Dustin Hoffman’s character in his career-breakthrough movie, The Graduate. It occurs when a self-righteous Los Angeles businessman takes aside the baby-faced Benjamin Braddock, played by Hoffman, and declares, “I just want to say one word to you––just one word––‘plastics.’” Perhaps a remake of this movie will be made and updated with the word “analytics” substituted for plastics. The use of analytics (which includes statistics) is a skill gaining mainstream value due to the increasingly thinner margin for decision error. There is a requirement to gain insights and inferences from the treasure chest of raw transactional data that so many organizations have now stored (and are continuing to store) in a digital format. Organizations are drowning in data, but starving for information. The application of analytics is becoming commonly accepted, but will senior executives realize it?


“The application of analytics is becoming commonly accepted, but will senior executives realize it?”


“There is always risk when decisions are based on intuition, gut feel, flawed & misleading data, or politics.” —Gary Cokins, Founder, Analytics-Based Performance Management LLC

How do executives and managers mature in applying accepted methods? Managers today are maturing in applying progressive managerial methods. Consider this: roughly 50 years ago, CEOs hired accountants to do the financial analysis of a company, because this was too complex for them to fully grasp. Today, all CEOs and mainstream businesspeople know what price-earnings (P/E) ratios and cash flow statements are, and that they are essential to interpreting a business’ financial health. They would not survive or get the job without this knowledge. Twenty years ago, CEOs of companies didn’t have computers on their desks. They didn’t have the time or skill to operate these complex machines and applications, so they had their secretaries and other staff do this for them. Today, you will become obsolete if you don’t at least personally possess multiple electronic devices (such as laptops, mobile phones, BlackBerrys, and PDAs) to have the information you need at your fingertips.

BUSINESS ANALYTICS IS THE NEXT WAVE

Today, many businesspeople don’t really know what predictive modeling, forecasting, design of experiments, or mathematical optimization mean or do, but over the next ten years, use of these powerful techniques will have to become mainstream––just as financial analysis and computers have––if businesses want to thrive in a highly competitive and regulated marketplace. Executives, managers, and employee teams who do not understand, interpret, and leverage these assets will be challenged to survive. A look at what kids are learning in school makes this clear. We were all taught mean, mode, range, and probability theory in our first-year university statistics course. Today, children have already learned these in the third grade! They are taught these methods in a very practical way: if you had x dimes, y quarters, and z nickels in your pocket, what is the chance of pulling a dime from your pocket? Learning about range, mode, median, interpolation, and extrapolation follows in short succession. We are already seeing the impact of this with Gen Y/Echo Boomers who are getting ready to enter the workforce: they are used to having easy access to information and are highly self-sufficient in understanding its utility. The next generation after that will not have any fear of analytics or look toward an “expert” to do the math. There is always risk when decisions are made based on intuition, gut feel, flawed and misleading data, or politics. In Babson College Professor Tom Davenport’s popular book, Competing on Analytics: The New Science of Winning,[2] he makes the case that, increasingly, the primary source of attaining a competitive advantage will be an organization’s competence in mastering all flavors of analytics. If your management team is analytics-impaired, then your organization is at risk. Analytics is arguably the next wave for organizations to successfully compete and optimize the use of their resources, assets, and trading partners. Substantial benefits are realized from applying a systematic exploration of quantitative relationships among performance management factors. When the primary factors that drive an organization’s success are measured, closely monitored, and predicted, that organization is in a much better situation to adjust in advance and mitigate risks. That is, if a company is able to know (not just guess) which nonfinancial performance variables directly influence financial results, then it has a leg up on its competitors.

REFERENCES

[1] http://www.nytimes.com/2009/08/06/technology/06stats.html?scp=1&sq=Graduate%20statistics&st=cse
[2] Thomas H. Davenport and Jeanne G. Harris, Competing on Analytics: The New Science of Winning. Boston: Harvard Business School Publishing, 2007.

GARY COKINS (CORNELL UNIVERSITY BS IE/OR, 1971; NORTHWESTERN UNIVERSITY KELLOGG MBA, 1974) IS AN INTERNATIONALLY RECOGNIZED EXPERT, SPEAKER, AND AUTHOR IN ENTERPRISE AND CORPORATE PERFORMANCE MANAGEMENT (EPM/CPM) SYSTEMS. HE IS THE FOUNDER OF ANALYTICS-BASED PERFORMANCE MANAGEMENT LLC (WWW.GARYCOKINS.COM). HE BEGAN HIS CAREER IN INDUSTRY WITH A FORTUNE 100 COMPANY IN CFO AND OPERATIONS ROLES, THEN SPENT 15 YEARS IN CONSULTING WITH DELOITTE, KPMG, AND EDS (NOW PART OF HP). FROM 1997 UNTIL 2013, GARY WAS A PRINCIPAL CONSULTANT WITH SAS, A BUSINESS ANALYTICS SOFTWARE VENDOR. HIS MOST RECENT BOOKS ARE PERFORMANCE MANAGEMENT: INTEGRATING STRATEGY EXECUTION, METHODOLOGIES, RISK, AND ANALYTICS AND PREDICTIVE BUSINESS ANALYTICS. GARY COKINS, CPIM, CAN BE REACHED AT GARYCOKINS.COM.



DATA ANALYTICS & INFONOMICS

THE ROLE OF ANALYTICS IN IG PROGRAMS
BY SAM FOSSETT



For nearly four decades, data analytics has been used by leading organizations to gain new insights and track emerging market trends. Now, in the era of Big Data, increasingly sophisticated analytics capabilities are being used to help guide and monitor Information Governance (IG) programs. There are four distinct types of analytics to explore. In order of increasing complexity and value added, they are: descriptive, diagnostic, predictive, and prescriptive. These analytics will become more important––and more difficult––as we continue to produce unprecedented amounts of data every year. Descriptive analytics tells you about information as it enters the system. Diagnostic analytics investigates larger data sets to understand past performance and tell you what has happened. Year-over-year or month-to-month data can be used to determine what will happen in the future: this is predictive analytics. Prescriptive analytics helps companies determine what actions to take on these predictions based on a variety of potential futures.

STRUCTURED V. UNSTRUCTURED DATA

Data analytics relies on structured data, which is stored in relational databases. When computers are fed data, it fits into a defined model. All the data within the model is structured. Unstructured information, on the other hand, is basically everything else—email messages, word processing and spreadsheet files, scanned images, PDFs, etc. Unstructured information lacks detailed and organized metadata. Structured data is more easily managed, analyzed, and monetized because it has rich and easily processed metadata. For example, a column titled “Name” will correspond to the name of the person linked to the rest of the data in the row. It may be unsurprising, but unstructured information is stored rather haphazardly. Every day, knowledge workers create documents and send emails to communicate with other knowledge workers. Our personally unique and inconsistent preferences for what we name our everyday office e-documents, and where we save them, make for a labyrinth of data. Even the nature of the information within them is rather chaotic. Free-flowing sentences do not make sense to computers the way databases full of 1s and 0s do. As a result, analysis is more difficult, at least until the proper metadata fields are created and leveraged––then the benefits are astronomical. Structured data is very useful for determining what is happening in the market or within your organization. However, relying on it alone will leave you missing the most important piece of the puzzle: why. Clearly, it is advantageous to know what is happening, but without the why it is impossible to act on. Historically, data analytics has been an imperfect science of observation. Systems produce massive amounts of data, and data scientists correlate data points to determine trends. Unfortunately, correlation does not imply causation. Think about all the information that isn’t included. Behind every one of these data points are emails and instant messages formulating ideas, as well as documents describing the thoughts and processes involved. There is a treasure trove of information to be found in the crucial unstructured data. Take, for example, a typical enterprise of 10,000 employees. On average, this organization will generate over 1 million email messages and nearly 250,000 IM chats every single day. During that same time, they will also create or update 100,000 e-documents. The problem is simply being able to corral and cull massive amounts of information into a usable data set. This is not an easy task, especially at scale. The challenge only increases with the production of more data. Not only are established technologies not designed to cope with this type of information, they’re also unable to function at such high volumes.
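To make the contrast concrete, here is a minimal Python sketch, with all names and figures invented: the structured rows answer a business question in one line, while even the basic metadata of an unstructured email must be parsed out before any analysis can begin.

```python
# A minimal sketch of the structured/unstructured contrast. All data here
# is invented for illustration.
import email

structured_rows = [
    {"Name": "A. Smith", "Region": "West", "Sales": 120_000},
    {"Name": "B. Jones", "Region": "East", "Sales": 95_000},
]
# Structured data: rich metadata (column names) makes queries trivial
west_total = sum(r["Sales"] for r in structured_rows if r["Region"] == "West")
print(f"West region sales: ${west_total:,}")

raw_email = """From: a.smith@example.com
To: b.jones@example.com
Subject: Q3 forecast

Draft numbers attached -- let's discuss before Friday.
"""
# Unstructured data: even basic metadata must be extracted before analysis
msg = email.message_from_string(raw_email)
print("Sender:", msg["From"], "| Subject:", msg["Subject"])
```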

IG IS KEY

Without Information Governance, all this relevant information remains hidden in desktop computers, file shares, content management systems, and mailboxes. This underutilized or unknown information is referred to as dark data. Not only does understanding and controlling this information add value to your analytics program, it also reduces risk across the enterprise. To control your information, you must own your information; when you own your information, you can utilize your information. This is much harder with unstructured information, because most organizations have environments full of standalone, siloed solutions that each solve their own designated issue. This is fine from a business perspective, but a nightmare to manage for RIM and IG professionals. A single document sent as an email attachment could be saved in a different location by every person included in the chain. Multiple copies of the same document make it difficult, if not impossible, to apply universal classification and retention policies. The same file may be stored and classified separately by legal, compliance, HR, and records––and no one would know! When this happens, organizations lose control and expose themselves to undue risks. Organizations have petabytes of dark data haphazardly stored throughout their file shares and collaboration software. Much of this information is ROT (redundant, obsolete, or trivial). ROT data hogs storage and can slow down systems and search capabilities––thus hindering business function. ROT may also be stored in retired legacy systems. These legacy systems can be a thorn in the side of IG professionals because of the amount of dark data and ROT intermingled with important business records. Mass deletion is not possible, but neither is the status quo. Implementing modern, proactive IG strategies can be a daunting task that requires input from a number of sources.

SO WHERE DO WE BEGIN?

Information Governance is not something to jump into all at once, but rather to ease into step by step. The best place to start is with file analysis. In short, file analysis is a series of system scans to index this dark data and bring it to light. A deep content inspection is conducted and metadata tags are inserted. File analysis can be performed on the metadata or the content of files, depending on the intricacy and accuracy needed. Metadata is information about who created the file, as well as where and when. Think of the information on the outside of an envelope as metadata, while the actual letter enclosed is the content. Performing file analysis helps determine what information is ROT and can be deleted, what can be utilized, and what needs to be preserved.
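A metadata-level pass can be surprisingly simple. The Python sketch below walks a hypothetical file share, flags files untouched for years as likely ROT, and uses content hashes to spot the duplicate copies described earlier. Commercial file analysis tools add content inspection, classification, and scale on top of this basic idea.

```python
# A minimal sketch of metadata-level file analysis: walk a file share,
# flag stale files as likely ROT, and detect duplicate content. The share
# path and age threshold are illustrative.
import hashlib
import os
import time

SHARE = "/mnt/fileshare"           # hypothetical file share mount
ROT_AGE_SECONDS = 7 * 365 * 86400  # untouched for roughly seven years

seen_hashes = {}
for root, _dirs, files in os.walk(SHARE):
    for name in files:
        path = os.path.join(root, name)
        info = os.stat(path)  # basic metadata: size, owner, timestamps
        if time.time() - info.st_mtime > ROT_AGE_SECONDS:
            print("Possible ROT (stale):", path)
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest in seen_hashes:
            print("Duplicate content:", path, "==", seen_hashes[digest])
        else:
            seen_hashes[digest] = path
```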

LEVERAGING NEWFOUND KNOWLEDGE

Analytics can help improve compliance functions by tracking and mapping communications. A communication heat map allows an administrator to view “who is communicating with whom about what” at a high level, while also having the granularity to drill down into any of the conversations that may set off compliance triggers. Beyond monitoring communications, tools are able to determine if there are sensitive



or potentially illegal communications being shared or stored in documents and files. Doing so proactively is an additional safeguard to keep an organization safe. These communication maps are also valuable to Human Resources. Knowing who communicates with whom, and about what, helps determine who the big players are within an organization. Understanding who knows and owns important information and tracking communication trends can help assess leadership potential and award promotions based on merit. It can also alert management to potential negative sentiments and potential insider threats within an organization. For legal teams, the data insight can drastically improve Early Case Assessment (ECA) abilities. Since the legal team knows what information the organization has and where it is stored, there’s no mad scramble to find information when litigation is initiated. Being able to analyze what information the organization holds saves time and effort in collection, while also providing a more accurate data set. It is not necessary to send massive amounts of information to outside counsel to be analyzed. When litigation does arise, the legal team can quickly and accurately determine what, if any, liability the organization faces and can make informed decisions on how to proceed. A process that used to take weeks or months can now be completed in hours or days. The benefits for records management teams are substantial as well. The insights gained from analysis provide important information about which documents are business records and which are unnecessary to retain. This goes beyond typical records, too. Items that are historically not considered records, such as private information discussed in an email, may now be discoverable for litigation. This means records managers need to be able to identify this information and apply retention to it. File analysis also makes compliance with new regulations (like the European Union privacy law, the General Data Protection Regulation [GDPR], and ePrivacy) much easier. Many vendors have promised one-stop GDPR solutions, but the truth is there really is no such thing. GDPR is not something you can solve with a single-point solution, but rather something that requires the implementation of proper IG tools, techniques, and policies. Having in-depth knowledge of the information within an organization makes GDPR DSARs (data subject access requests) a breeze.
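At its core, a communication heat map is just a tally of sender-recipient pairs. The following minimal Python sketch builds one from an invented message log; a real tool would render this as a color-coded matrix and let the administrator drill into flagged conversations.

```python
# A minimal sketch of a communication heat map: count messages between
# sender/recipient pairs so an administrator can see "who is communicating
# with whom" at a glance. The message log is invented for illustration.
from collections import Counter

message_log = [
    ("alice@example.com", "bob@example.com"),
    ("alice@example.com", "bob@example.com"),
    ("carol@example.com", "alice@example.com"),
    ("bob@example.com",   "dave@example.com"),
]

pair_counts = Counter(message_log)
for (sender, recipient), n in pair_counts.most_common():
    print(f"{sender} -> {recipient}: {'#' * n} ({n})")
```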

TAKE-HOME MESSAGE

If you’ve made it this far, then you care about analytics as much as I do––and see their utility in dealing with data. Information Governance is still a rather new concept in the business world and can be intimidating, but it is a game-changer. It is crucial to focus on the cross-functional benefits of IG in order to spur executives into action. The knowledge gained from analytics within IG helps create revenue and minimizes risk. It is a competitive advantage that will shape the next few decades in the corporate world, and I wouldn’t recommend being the laggard.

SAM FOSSETT IS AN INFORMATION GOVERNANCE SPECIALIST AT ZL TECHNOLOGIES, A LEADING IG COMPANY THAT CATERS TO LARGE ENTERPRISES. BORN IN OHIO, SAM MOVED TO SILICON VALLEY TO BE IN THE CENTER OF INNOVATION AND TECHNOLOGY AFTER GRADUATING FROM VANDERBILT UNIVERSITY. WHEN HE’S NOT SPREADING AWARENESS ABOUT THE BENEFITS OF IG, HE IS OUT HIKING, READING, EXPLORING LOCAL BREWERIES, OR HONING HIS PHOTOGRAPHY SKILLS. SFOSSETT@ZLTI.COM


Analytics 101
THE FOUR TYPES OF ANALYTICS AND THEIR USES

As this is a magazine about Information Governance, it behooves us to offer an introduction to analytics. Don’t worry, there won’t be a test. Let’s start with a term that is bandied about in academic and professional circles with reckless abandon: Big Data. It is consistently misused and woefully misunderstood, but understanding it can help you get the most out of the analytics we love so much.

BIG WHAT?

Big Data analytics has a lot of uses (real-time fraud detection, complex competitive analysis, call center optimization, consumer sentiment analysis, intelligent traffic management, etc.) and has three primary factors: high volume, high velocity, and high variety of data. Analysis of the appropriately named Big Data can provide the kind of insight into relationships and business patterns that can improve a business’ bottom line. The four types of analytics we are most interested in for the purposes of this introduction are:

1. Descriptive – Real-time analysis of incoming data
2. Diagnostic – Understanding past performance
3. Predictive – Forecast of what might happen
4. Prescriptive – Formation of rules and recommendations


DESCRIPTIVE ANALYTICS

In many ways, descriptive analytics (or data mining) is the least sophisticated of the bunch. This doesn’t mean that it can’t help provide valuable insight into patterns. Here, we are trying to answer the question: what happened? Not nearly as sexy as predictive or prescriptive analytics, but useful nevertheless. Just as with other types of analytics, raw data is collated from multiple sources in order to provide insight into past behavior. However, the difference here is that the output is binary: was it wrong or right? There is no depth to the data; it provides no explanation. Most data-driven companies go well beyond descriptive analytics, utilizing the other types to understand the data in order to effect change within their business.

DIAGNOSTIC ANALYTICS

Diagnostic analytics might be the most familiar to the modern marketer. Often, social media managers track success or failure based on number of posts, new followers (or unfollows), page-views, likes, shares, etc. This is as good a description of what diagnostic analytics is as anything else. Using this methodology, we are trying to discover why something happened. Comparative statistics are what drive diagnostic analytics, as we are comparing historical data with another set in order to understand why the data looks the way it does. This is why people love Google Analytics so much: it lets them drill down and identify patterns in how visitors interact with their site.

PREDICTIVE ANALYTICS

Predictive analytics utilizes Big Data to illustrate how patterns in the past can be used to predict the future. Sales is usually a beneficiary of predictive analytics, as lead generation and scoring, along with the sales process itself, is built out of a series of patterns and data points over time. Predictive analytics tells us what is likely to happen based on tendencies and trends, information that makes this forecasting tool so valuable. Bear in mind that statistical models are just estimates. Their accuracy lies in the continued accrual of data and the refinement of the model based on new information.

PRESCRIPTIVE ANALYTICS

Despite its clear value and sophistication, prescriptive analytics is not used as often or as widely as it should be. Perhaps it is an insistence on larger, system-wide analytics that makes this narrowly focused body of data so often overlooked. Its purpose is quite simple: what action should be taken in the future to correct a problem or take advantage of a trend? By looking at historical data and external information, a statistical algorithm is created that adds value to a company regardless of its industry.

[Figure: the four types of analytics plotted by value against difficulty, climbing from hindsight to insight to foresight, and from information to optimization: Descriptive Analytics (What happened?), Diagnostic Analytics (Why did it happen?), Predictive Analytics (When will it happen?), and Prescriptive Analytics (How can we make it happen?).]
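The four types can be illustrated on a single toy data set. In the Python sketch below, the sales figures, the “promotion” explanation, and the naive trend rule are all invented for illustration; real predictive and prescriptive work uses proper statistical models.

```python
# A minimal sketch of the four analytics types on one invented series of
# monthly sales figures.
sales = [100, 110, 125, 90, 140, 155]  # units sold, Jan-Jun

# Descriptive: what happened?
print("Average monthly sales:", sum(sales) / len(sales))

# Diagnostic: why did it happen? (compare the outlier month to its neighbor)
print("April dip vs March:", sales[3] - sales[2], "-- e.g., a promotion ended")

# Predictive: what is likely to happen? (naive linear trend extrapolation)
slope = (sales[-1] - sales[0]) / (len(sales) - 1)
forecast = sales[-1] + slope
print("July forecast:", round(forecast))

# Prescriptive: what should we do about it? (a simple rule on the forecast)
action = "increase inventory" if forecast > max(sales) else "hold inventory"
print("Recommended action:", action)
```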

TAKE-HOME MESSAGE

There is no easy conclusion about which one to use; it depends on your business scenario. Deploying a combination of analytics types is best to fit the needs of your company. Given the options, companies should choose the blend that provides the greatest return on their investment. Descriptive and diagnostic approaches are reactive; predictive and prescriptive are proactive. The real take-home message is to utilize data-driven analytics to first learn what is happening, using descriptive and diagnostic analytics, and then to move toward an understanding of what might happen, using predictive and prescriptive analytics. Leveraging Big Data analytics can provide dividends (pun intended) for companies by giving context to their business story and arming decision-makers with better information, helping to provide a sustainable competitive advantage.



DATA ANALYTICS & INFONOMICS

INFONOMICS
AN EXCERPT FROM DOUG LANEY’S NEW INFONOMICS BOOK
PHOTO BY ISI AKAHOME

Infonomics is the theory, study, and discipline of asserting economic significance to information. It provides the framework for businesses to monetize, manage, and measure information as an actual asset. Infonomics endeavors to apply both economic and asset management principles and practices to the valuation, handling, and deployment of information assets. As a business, information, or information technology (IT) leader, chances are you regularly talk about information as one of your most valuable assets. But do you value or manage your organization’s information like an actual asset? Consider your company’s well-honed supply chain and asset management practices for physical assets, or your financial management and reporting discipline. Do you have similar accounting and asset management practices in place for your “information assets”? Most organizations do not. When considering how to put information to work for your organization, it’s essential to go beyond thinking and talking about information as an asset, to actually valuing and treating it as one.


The discipline of infonomics provides organizations with a foundation and methods for quantifying information asset value and formal information asset management practices. Infonomics posits that information should be considered a new asset class, in that it has measurable economic value and other properties that qualify it to be accounted for and administered as any other recognized type of asset—and that there are significant strategic, operational, and financial reasons for doing so. Infonomics provides the framework businesses and governments need to value information, manage it, and wield it as a real asset. Aptly, the topic coincides with the objectives and responsibilities of one of the hottest roles in business today: the chief data officer, or CDO. Most of the thousands of CDOs appointed in the past few years have been chartered with improving the efficiency and value-generating capacity of their organization’s information ecosystem. That is, they’ve been asked to lead their organization in treating and leveraging information with the same discipline as its other, more traditional assets. This book is for them. This book also is for CEOs who want to guide their organizations from just using information to weaponizing it. It is for CIOs who want to transform their organizations from regarding information as “that stuff IT manages” into a critical business asset. It’s also for the CFO who is alert to the economic benefits of information, but is looking for ways to better understand, gauge, and financially leverage these benefits. And this book is also for the enterprise architect who wants a new set of tools to create novel information-based solutions for the organization, and for academics in business and computing sciences forming and shepherding the next generation of leaders into the Information Age.


ADVERTORIAL

IBM Security Guardium Assists in Data Protection

Guardium uses cognitive analytics and automation to help protect critical data in today’s heterogeneous environments.

These days, data security breaches are more common than ever—and more impactful. Global studies show that the average total cost of a data breach is now almost $4 million. What’s more, the loss of trade secrets, product designs, or other intellectual property can spell financial ruin for an organization. Because of its value, critical and sensitive data is at the core of business interactions—which also makes it a highly attractive target for attack. Traditionally, organizations have focused on “perimeter” defenses for protecting their critical information. But traditional tools, such as anti-virus software and firewalls, are not equipped for today’s advanced threats, which many times come from inside the organization. Plus, data is constantly growing, changing, and moving, so data protection measures must also be able to adapt to follow the data. Increasing numbers of users, applications, and systems need instant access to different types of sensitive data—residing in or replicating into databases, data warehouses, file shares, big-data platforms, cloud environments, and more. Keeping track of who has access to this dynamic, distributed, and disparate data, and who is sharing it (and with whom), can seem like an insurmountable task. IBM® Security Guardium® is designed to safeguard critical data, wherever it resides. This comprehensive data protection platform empowers security teams to automatically analyze what is happening across the data environment to help minimize risk, protect sensitive data from internal and external threats, and seamlessly adapt to changes that affect data security and compliance. Guardium provides a comprehensive approach to protecting an organization’s “crown jewels”—the critical data that is vital for business success and survival. Leveraging its end-to-end graphical user interface, security teams can identify and remediate risks to sensitive data, whether the data is in motion or at rest. And this unified approach extends to a broad range of both structured and unstructured data repositories, including databases, data warehouses, Hadoop, NoSQL, in-memory systems, file shares, and so on.

Analyze Threats to Sensitive Data

For effective data protection, organizations need to understand what exactly they need to protect and then thoroughly protect it. Guardium enables security teams to:

• Discover and classify sensitive data and entitlements—and uncover compliance risks—automatically
• Know who is accessing data, spot anomalies, and stop data loss
• Rapidly analyze data usage patterns to uncover and remediate risks
• Apply automated advanced analytics and machine learning to spot and stop unusual and risky behavior (a generic sketch of this idea follows the list)
• Leverage specialized threat-detection analytics to spot and stop breaches early—such as by finding and alerting on SQL injections or malicious stored procedures
• Provide a dashboard to help key stakeholders see data security and/or compliance status and progress over time, to better understand how the initiative is adding value to the business—and to understand gaps
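As a generic illustration of that machine-learning bullet (and not IBM’s actual API), the Python sketch below flags a user whose daily record access jumps far above their own historical baseline; all data is invented.

```python
# A generic sketch (not IBM's actual API) of usage-pattern analysis:
# flag users whose daily record access volume is far above their own
# historical baseline. All data is invented for illustration.
from statistics import mean, stdev

access_history = {  # records accessed per day, per user
    "analyst1": [120, 130, 110, 125, 118],
    "analyst2": [80, 85, 90, 78, 82],
}
today = {"analyst1": 127, "analyst2": 900}  # analyst2 looks anomalous

for user, history in access_history.items():
    mu, sigma = mean(history), stdev(history)
    if today[user] > mu + 3 * sigma:  # a simple three-sigma rule
        print(f"ALERT: {user} accessed {today[user]} records (baseline {mu:.0f})")
```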

ibm.com/Data-Security/IBM-Guardium


REGULATORY COMPLIANCE

AN INTERVIEW WITH

ARLETTE WALLS CHAIR OF THE PHARMACEUTICAL RECORDS AND INFORMATION MANAGEMENT ORGANIZATION (PRIMO)

Today we are talking with Arlette Walls, an Information & Compliance Manager based in Emeryville, CA. She has 13 years of experience in records management and Information Governance-related functions, including regulatory affairs, archiving, compliance, and information security/risk management. Disclaimer: Ms. Walls contributed to this article in her personal capacity. The views expressed herein are her own and do not represent the views of her employer.

InfoGov World: How did you get into the records management business?

Arlette: I actually started my career working for the Chief Information Officer at the American Embassy in Brussels, Belgium. I was a military spouse, and we moved every two to three years. Being adaptable and resourceful was essential when moving from station to station, as you had to reinvent yourself each time. I had worked in Regulatory Affairs at a biotech company in the Bay Area (Northern California) while we were stationed there. When I moved back to this area a few years ago, I went back to work for that same company, and played a multi-functional role in compliance, quality, information security, and business continuity. Then came an opportunity to combine my expertise and work as a records manager on global and cross-functional projects.

What is your current role?

I am an Information & Compliance Manager for a biomedical research site in Emeryville, CA, and handle mainly scientific data, in physical and electronic format. I am also chair of PRIMO (Pharmaceutical Records and Information Management Organization), a consortium


managed by Drinker Biddle & Reath LLP, comprised of pharmaceutical companies working collectively to solve common issues related to our industry.

How has records management evolved with the advent of Information Governance programs?

To allow a business to take smart, risk-based approaches in making decisions, information should now, by necessity, be governed through collaboration with various functions and in partnership with the business, which creates the information. I think records management is the tactical tool of Information Governance. In PRIMO, members share best practices on how to partner with others in their organization, strategically or tactically.

What are some unique InfoGov challenges for the biomedical/pharma segment?

Effective information storage and records management in the pharmaceutical industry is essential, and we strive to address challenges such as:

• Regulatory Requirements. Strict records retention requirements and penalties for non-compliance: This may not be a unique challenge to our industry, but our assets come in many types and formats, whether we are talking about histology slides, born-digital records, scanned documents, or raw data, etc.
• Legal Holds. Duty to preserve all information relevant to pending or anticipated litigation: An IG example would be to collaborate on a data-mapping exercise with IT. It is challenging to achieve, but having a data map will help increase the efficiency and effectiveness of discovery processes.
• Rapid Growth. Technology and communication practices are evolving, further compounding information growth: This industry generates huge amounts of data, so more than ever, we need to be strategic partners and collaborate with all IG-related functions to understand the impact of new strategies on this generation of data and help corral that data.
• Information Privacy. Restrictions on retention and use of personal data: Ensuring privacy protections has long been a critically important goal, with heightened visibility recently in light of the GDPR.


What advice or insights can you share with companies in meeting those challenges?

I am only speaking for myself here, but being part of a group that focuses on the same challenges you do, at the industry level, seems like the best approach. We focus on common issues, which means a cross-fertilization of ideas that generates new insights and perspectives. We participate in benchmarking exercises to work through complex issues, and the survey results can impact outcomes for our own challenges. It also supports professional development and broadens our network of IG and RIM experts in our industry.

What is your personal interest in your field?

My interests are very diverse, from professional development to training, metrics, and business continuity. Of course, we all have our pet projects: At this time, I am particularly interested in the challenges resulting from the multitude of mergers, acquisitions, and divestitures & closures in our industry, which constitute an IG challenge on their own.

ARLETTE WALLS IS AN INFORMATION & COMPLIANCE MANAGER BASED IN EMERYVILLE, CA. SHE HAS 13 YEARS OF EXPERIENCE IN RECORDS MANAGEMENT AND INFORMATION GOVERNANCE-RELATED FUNCTIONS, INCLUDING REGULATORY AFFAIRS, ARCHIVING, COMPLIANCE, AND INFORMATION SECURITY/RISK MANAGEMENT. HER ROLE RELIES ON COLLABORATIONS ACROSS SITES AND FUNCTIONS, AND A GOOD COMPREHENSION OF ARCHIVING PRINCIPLES, RECORDS ANALYSIS, LEGAL HOLDS, PRIVACY LAWS, COMPLIANCE, BUSINESS CONTINUITY, AND IT SOLUTIONS. ARLETTE ALSO SERVES AS THE CURRENT CHAIR OF THE PHARMACEUTICAL RECORDS AND INFORMATION MANAGEMENT ORGANIZATION (PRIMO), A CONSORTIUM OF PHARMACEUTICAL COMPANIES COMMITTED TO ESTABLISHING EFFECTIVE, LEGALLY COMPLIANT, QUALITY-DRIVEN INFORMATION MANAGEMENT PROGRAMS THROUGH COLLABORATIVE INDUSTRY EXCHANGES OF BEST PRACTICES.

The Relationship Between Audit and Compliance

In the IG world, audit and compliance have a unique relationship: one that ensures a business or other organization does not break any laws, regulations, rules, or standards. An auditor asks the question: Is the business doing what it said it would do? ASQ notes there are “3 discrete types of audits: product (including services), process, and system.”[1] In its simplest form, an audit can be something as straightforward as a quality control measure, potentially identifying risk before it harms stakeholder interests. In more complex situations that require the business or organization to protect PII, an audit can be a formal investigative process whereby a designated professional independently “reviews, verifies, evaluates, and reports on an organization.”[2] Given the EU’s new privacy laws, the audit and compliance relationship ensures information systems that utilize PII do not open up an organization or business to costly lawsuits. Although the audit process checks for specific compliance issues, compliance on its own has a formal place in an organization. Compliance can be used as a means of showing stakeholders that the PII in their control is safe from potential leaks. The audit/compliance relationship is particularly useful for ensuring electronic information inside information systems does what it should do according to standards such as DoD Standard 5015.2. With an audit trail of electronic records, an IG professional can ensure a records management system (RMS) is PII-protection compliant at each access point. Because many of these flows are automated, compliance as a function must be both forward- and outward-focused. This gives compliance a proactive function that mitigates risk. The Enron scandal is a case study in how the audit and compliance relationship breaks down when an incorrect or misleading audit function falsely inflates a business’ net worth. Arthur Andersen, once one of the most prestigious accounting firms in the business, conspired with some of Enron’s top executives to falsify earnings statements. Although the motive was greed, these top executives bankrupted the company and sent its stock value tumbling. Unfortunately, virtually every person who worked at Enron lost their pension as the company went bankrupt when Enron’s worth was revealed to be far less than what Arthur Andersen reported. In the wake of this scandal and the prosecutions of Enron’s executives, the audit/compliance relationship was strengthened by the passage of the Sarbanes-Oxley Act of 2002. This act brought transparency to the audit/compliance relationship by protecting investors from illegal and fraudulent corporate activities.

REFERENCES
1. http://asq.org/learn-about-quality/auditing/
2. Laura Millar. “Glossary of Terms,” International Records Management Trust, p. 24. http://www.irmt.org/documents/educ_training/term%20modules/IRMT%20TERM%20Glossary%20of%20Terms.pdf



REGULATORY COMPLIANCE

IMPLEMENTING GDPR AND THE NEED FOR DATA PROTECTION OFFICERS

BUT ARE THEY PAID ENOUGH? BY ANDREW HARVEY & BARRY MOULT

The European Union General Data Protection Regulation (GDPR) is being subsumed into British domestic legislation, and is now the basis for a new Data Protection Act, replacing the old 1998 Act, itself based on a 1995 EU Directive. For this reason, until the new Act receives Royal Assent, this piece continues to refer to the GDPR. The pending legislation is, overall, causing much generalised debate regarding its implications and where Data Protection practice in the UK is destined. There has been substantial specific debate and concern about who should be appointed as the Data Protection Officer (DPO) under the GDPR within healthcare organizations. In this section we will attempt to inject some order into the confusion. Our analysis has concentrated on the GDPR itself, along with guidance from the Article 29 Working Party (WP29) and the UK Information Commissioner’s Office (ICO), and significant discussion between the authors, both interpersonally and online with Information Governance (IG) and Data Protection professionals. The perspective here is mostly applicable to Acute trusts within the National Health Service (NHS), although its message is likely to be applicable more broadly across the UK healthcare sector.

IS A DPO REQUIRED?

GDPR Article 37 states that a DPO is needed in any case where:

• The processing is carried out by a public authority or body, except for courts; or
• The core activities of the Data Controller or the Data Processor consist of processing operations which, by virtue of their nature,


their scope and/or their purposes, require regular and systematic monitoring of data subjects on a large scale; or
• The core activities of the Data Controller or the Data Processor consist of processing large volumes of Special Categories of Data or information about criminal convictions and offences.[1]

Whereas it is common understanding that the NHS is a public body, the term “public authority or body” is, rather unhelpfully, not defined in the GDPR. For the sake of clarity, however, it is apparent by extrapolation from the definition in Schedule 1 of the Freedom of Information Act 2000 that the NHS is indeed included.

WHO SHOULD BE THE DPO?

It is perfectly acceptable for public bodies to appoint a single DPO to be shared between authorities.[2] It may be beneficial for the DPO to be shared between healthcare organizations working in close partnership with each other, or perhaps across several organizations within a localized partnership. GDPR Article 38 is clear about the position of the DPO, in that the Data Controller and Data Processor shall:

• Ensure that the DPO is involved, properly and in a timely manner, in all issues which relate to the protection of personal data.
• Support the DPO in performing their tasks by providing resources necessary to carry out those tasks and access to personal data and processing operations, and to maintain their expert knowledge.
• Ensure that the DPO does not receive any instructions regarding the exercise of those tasks. He or she shall not be dismissed or penalized by the Data Controller or the Data Processor for performing their tasks. The DPO shall report to the highest management level.[3]

DPO TASKS & DUTIES

The DPO shall be bound by secrecy or confidentiality concerning the performance of his or her tasks. The Data Controller or Data Processor shall ensure that any such tasks and duties do not result in a conflict of interests.[4] With regard to the last point, WP29 clarifies that:

As a rule of thumb, conflicting positions within the organization may include senior management positions (such as chief executive, chief operating, chief financial, chief medical officer, head of marketing department, head of Human Resources or head of IT departments) but also other roles lower down in the organisational structure if such positions or roles lead to the determination of purposes and means of processing. In addition, a conflict of interests may also arise for example if an external DPO is asked to represent the controller or processor before the Courts in cases involving data protection issues.[5]

DPOs do not have to be lawyers, but need expert knowledge of Data Protection law and practices. From a practical perspective, they must also have an excellent understanding of the organization’s governance structure and be familiar with its IT infrastructure and technology. The DPO role may be employed (“internal DPO”), or there may be circumstances where they may act under a service contract (“external DPO”). In both cases, they must be given the necessary resources to fulfill the relevant job functions and be granted a certain level of independence, to be able to act in the necessary “independent manner.” The DPO does not have to be a standalone role, and may have other tasks within the organization, so long as they do not interfere with the DPO role. WP29 has made it clear that the DPO “cannot hold a position within the organization that leads him or her to determine the purposes and the means of the processing of personal data.”[6] Many healthcare organizations already have staff in place who oversee most issues relating to Data Protection. These roles generally have titles such as Head of IG, IG Lead, IG Manager, or Privacy Officer. It is anticipated that these roles will be the most appropriate to undertake the DPO role within healthcare organizations with mature IG models.

WHAT ARE THE QUALIFICATIONS TO BE A DPO?

GDPR Article 37 does not absolutely define the credentials for a DPO beyond "expert knowledge of data protection law and practices."[7] The GDPR's Recitals add that this should be "determined in particular according to the data processing operations carried out and the protection required for the personal data processed by the controller or the processor."[8] Realistically, this is a member of staff with detailed expert knowledge and experience of applying IG and Data Protection principles within a healthcare environment. The WP29 guidance clarifies this further:

Although Article 37(5) does not specify the professional qualities that should be considered when designating the DPO, it is a relevant element that DPOs should have expertise in national and European data protection laws and practices and an in-depth understanding of the GDPR. It is also helpful if the supervisory authorities promote adequate and regular training for DPOs.

DPO QUALIFICATIONS & EXPERIENCE

Knowledge of the business sector and of the organization of the controller is useful. The DPO should also have sufficient understanding of the processing operations carried out, as well as the information systems, and data security and data protection needs of the controller. In the case of a public authority or body, the DPO should also have a sound knowledge of the administrative rules and procedures of the organization.[9]

WHAT ARE THE TASKS OF THE DPO?

The DPO's tasks are clearly delineated in GDPR Article 39, to:

• Inform and advise the Data Controller or Data Processor, and the employees who carry out processing, of their Data Protection obligations
• Monitor Data Protection compliance
• Assign responsibilities, and handle awareness-raising and training of staff involved in processing operations
• Undertake internal audits of Data Protection
• Provide advice on the need for, and completion of, Data Protection Impact Assessments
• Cooperate with the ICO and act as the contact point for any issues relating to processing
• Undertake or advise on assessing the potential risk of processing activities

WHAT ARE THE ORGANIZATION'S RESPONSIBILITIES?

The most essential requirement is that the DPO must be allowed to perform their tasks in an independent manner. They need to report to the highest management level in the organization and cannot be dismissed or penalized for doing their job (i.e., giving advice). This will require a robust governance reporting structure for the DPO to function within, and evidence that advice has been accepted or rejected. GDPR Article 38 requires the organization to support its DPO by "providing resources necessary to carry out [their] tasks and access to personal data and processing operations, and to maintain his or her expert knowledge."

The WP29 guidance adds that, depending on the nature of the processing operations and the activities and size of the organization, the following resources should be provided to the DPO:
• Active support of the DPO's function by senior management (such as at board level)
• Sufficient time for DPOs to fulfill their duties
• Adequate support
• Official communication of the designation of the DPO to all staff
• Necessary access to other services
• Continuous training

Given the size and structure of the organization, it may be necessary to set up a DPO team (a DPO and his or her staff). Similarly, when the function of the DPO is exercised by an external service provider, a team of individuals working for that entity may effectively carry out the tasks of a DPO as a team, under the responsibility of a designated lead contact for the client.[10]

Failure to appoint a DPO where required can lead to significant ramifications. Administrative fines can be as high as the equivalent of €10m (almost £9m at time of writing) or 2% of the organization's turnover, whichever is higher. The appointment of a DPO is not only a legal requirement; it must also be seen as an efficient way to ensure Data Protection compliance, something that is especially true when it comes to sophisticated Data Processing activities and cross-border data flows.
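To make the scale of that exposure concrete, here is a minimal sketch in Python of how the "whichever is higher" ceiling in GDPR Article 83(4) works; the turnover figure is invented purely for illustration:

def fine_ceiling_eur(annual_turnover_eur: float) -> float:
    """Maximum administrative fine for infringements such as failing to
    appoint a required DPO: the higher of EUR 10m or 2% of total
    worldwide annual turnover (GDPR Article 83(4))."""
    FIXED_CEILING_EUR = 10_000_000.0
    return max(FIXED_CEILING_EUR, 0.02 * annual_turnover_eur)

# Hypothetical turnover, for illustration only:
turnover = 750_000_000.0  # EUR 750m
print(f"Fine ceiling: EUR {fine_ceiling_eur(turnover):,.0f}")
# -> Fine ceiling: EUR 15,000,000 (2% of turnover exceeds the EUR 10m floor)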

ANDREW HARVEY IS HEAD OF IG, WESTERN SUSSEX HOSPITALS NHS FOUNDATION TRUST; CHAIR, SUSSEX-WIDE INFORMATION GOVERNANCE GROUP.

BARRY MOULT IS DIGITAL PROGRAMME MANAGER, SUFFOLK & NORTH EAST ESSEX SUSTAINABILITY TRANSFORMATION PARTNERSHIP; CHAIR, EAST OF ENGLAND INFORMATION GOVERNANCE FORUM; AND CHAIR, NATIONAL STRATEGIC INFORMATION GOVERNANCE NETWORK.



REGULATORY COMPLIANCE

A rising star in California’s Cannabis Regulatory Compliance efforts

For many cannabis-related business owners in Southern California, CPA Sonia Luna has been a guardian angel. She is arguably one of the country's leading experts on the complex tax environment of California's cannabis industry and its broader implications for the IRS. She won the award for Best Accountant at the California Cannabis Awards, an event industry insiders have called the Oscars of Cannabis. Luna is the founder and CEO of Aviva Spectrum, the "premier provider of cannabis accounting, internal audit, BlackLine implementation, and risk management services on the West Coast." Her insights and knowledge of federal tax laws have kept many cannabis business owners afloat, and in compliance with new regulations, over the two decades they have been allowed to sell medical cannabis to patients.

For more than twenty years, California contended with the ramifications of Proposition 215 (Cal. Health & Safety [H&S] § 11362.5), the law that created the medical cannabis industry in California––most notably, how to pay federal income tax. At the federal level, cannabis is a controlled substance and is illegal to possess and use; however, medical cannabis dispensaries in California still have to pay taxes on income derived from sales. And in November 2016, California voters passed Proposition 64, making recreational use of cannabis legal at the state level. Unfortunately, many dispensary owners cannot utilize the banking system, primarily because the federal government won't ensure that money derived from cannabis sales will be protected by the Federal Deposit Insurance Corporation (FDIC). As a result, those dispensaries must operate in a way that circumvents federal cannabis laws and find creative ways to pay their income taxes. As of 2016, there were more than 1,600 dispensaries statewide bringing in "close to $570 million of taxable income each year" (Turner, 2016). Temporary patches to this banking problem, such as getting money orders from the U.S. Postal Service, only help when dispensaries need to pay vendors in small amounts of $10,000 or less. Credit unions and banks have become more lenient in recent years about the financial activities they will handle: Financial Crimes Enforcement Network data indicate an increase in the use of credit unions and banking institutions to hold cannabis cash for fewer than 90 days, as reflected in suspicious activity reports (SARs). The 90-day window before a SAR must be filed has become a loophole, and the recreational market will only make things worse.

When Proposition 64 legalized cannabis for recreational use among adults, the state also created the Bureau of Cannabis Control (BCC) to regulate the industry. As of January 2018, the BCC had issued more than 400 licenses to sell recreational cannabis.

REFERENCES
Luna, Sonia. "About me." Aviva Spectrum. Retrieved from https://www.avivaspectrum.com/sonia-luna
Bureau of Cannabis Control. (2018, January). More than 400 state-licensed cannabis businesses operating on California's first day of legal commercial cannabis activity. Retrieved from https://cannabis.ca.gov/2018/01/01/more-than-400-state-licensed-cannabis-businesses-operating-on-californias-first-day-of-legal-commercial-cannabis-activity/
The Financial Crimes Enforcement Network. "Marijuana Banking Update." Retrieved from https://www.fincen.gov/sites/default/files/shared/Marijuana_Banking_Update_Through_Q1_2017.pdf
Turner, Emily. (2016, February 17). Medical marijuana dispensaries forced to truck bags of cash to IRS office to pay taxes. KPIX 5, CBS SF Bay Area. Retrieved from https://sanfrancisco.cbslocal.com/2016/02/17/medical-marijuana-dispensaries-forced-to-truck-bags-of-cash-to-irs-office-to-pay-taxes/


Luna is at the forefront, showing cannabis businesses how to comply with California's regulatory environment and reporting requirements. Although she is still active in her executive role at Aviva, Luna is a key advisor for the startup software company Webjoint, a Los Angeles-based company that specializes in cannabis software and point-of-sale (POS) hardware that streamlines the supply-chain accountability required by the BCC. This type of detailed focus on the cannabis industry represents the promise of transparency and accountability that California companies are bringing to the fight against the federal government's opposition to recreational cannabis use. Cannabis-specific information technology will be one part of an Information Governance framework that may be modeled in other states. As California readies itself to manage millions of new tax dollars from the recreational use of cannabis, Sonia Luna is there to help businesses meet their regulatory compliance requirements.

SONIA LUNA HAS A PROVEN TRACK RECORD OF 20 YEARS OF PUBLIC ACCOUNTING, AUDITING AND FINANCE EXPERIENCE. SHE IS A CANNABIS ACCOUNTING EXPERT, PERFORMING TRANSACTION ANALYSIS, MARIJUANA BOOKKEEPING AND 280E RECORD KEEPING. MS. LUNA IS IIA CERTIFIED, A TRAINED QUALITY ASSESSMENT REVIEWER, AND A CANNABIS FINANCIAL EXECUTIVE WITH BOARD EXPERIENCE IN ALL ASPECTS OF ACCOUNTING, AUDITING AND FINANCIAL MANAGEMENT. SHE ALSO HAS BOARD AND ADVISORY EXPERIENCE IN THE NON-PROFIT AND FOR-PROFIT SECTORS.


REGULATORY COMPLIANCE

News: Tesla Tweetstorm

Elon Musk and Tesla were in the news once more. Recent tweets about taking the company private with secured funding sparked head-scratching and questions about the implications a tweet might have if deemed a false statement made in pursuit of increasing Tesla's worth. Public tweets by principals of publicly-traded companies about finances pose interesting questions regarding compliance. Does sharing information about bankruptcy, or about the funds necessary to take a company private, constitute some kind of breach in the eyes of the SEC? That remains to be seen, as the agency has been mum on the topic thus far. However, if Musk or Tesla were unwilling, or unable, to provide regulatory filings outlining a deal that would allow them to go private, it would pose a quandary for how compliance regulation is carried out in this ever-evolving business landscape. Either way, we bet a lot of folks are wishing they had some Tesla stock right about now.

PCI-DSS COMPLIANCE

PCI-DSS is a term used in circles where personal and customer data is stored as part of the business process. The acronym abbreviates quite a mouthful: Payment Card Industry Data Security Standard. Developed by the PCI Security Standards Council, it is intended to help decrease fraud in the payments industry. While it is a global standard, it is by no means law here in the United States; each state has its own regulations regarding cardholder data and associated fines for non-compliance.

Compliance is validated by one of the following:
• A qualified security assessor (QSA)
• An internal security assessor (ISA)
• A self-assessment questionnaire (SAQ)

Compliance is of vital importance to any organization that stores cardholder or personal data, as these items are susceptible to theft and fraud. Given the regularity of data breaches and cyber-attacks, being compliant means protecting yourself from the loss of customer trust, revenue, and reputation. Achieving compliance is an involved process that should be led by someone in the organization who understands the full scope of what is involved. At its core, this "standard requires merchants and member service providers (MSPs) involved with storing, processing, or transmitting cardholder data to:"
• Build and maintain a secure IT network
• Protect cardholder data
• Maintain a vulnerability management program
• Implement strong access control measures
• Regularly monitor and test networks
• Maintain an information security policy

Twelve requirements, grouped under these goals, spell out what an organization must do to be compliant:
1. Install and maintain a firewall configuration to protect cardholder data.
2. Do not use vendor-supplied defaults for system passwords and other security parameters.
3. Protect stored cardholder data. This includes all policies, procedures, and processes used in the storage of data.
4. Encrypt transmission of cardholder data across open, public networks.
5. Use and regularly update anti-virus software. Since new malware is deployed all the time, protecting systems means regularly updating anti-virus programs to reflect new threats.
6. Develop and maintain secure systems and applications. Software updates help safeguard against the latest vulnerabilities.
7. Restrict access to cardholder data by business need-to-know.
8. Assign a unique ID to each person with computer access.
9. Restrict physical access to cardholder data.
10. Track and monitor all access to network resources and cardholder data.
11. Regularly test security systems and processes. Penetration testing is integral to security; it should be carried out at regular intervals and after changes to the network.
12. Maintain a policy that addresses information security for employees and contractors. This policy should be reviewed and updated based on new risks to your organization.
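As a practical illustration of how a team might track its own SAQ-style status against these twelve requirements, here is a minimal Python sketch; the statuses and notes are invented, and real assessments follow the Council's official worksheets:

from dataclasses import dataclass

@dataclass
class Requirement:
    number: int
    description: str
    compliant: bool
    notes: str = ""

# Abbreviated, hypothetical self-assessment record (statuses are invented):
checklist = [
    Requirement(1, "Install and maintain a firewall configuration", True),
    Requirement(2, "No vendor-supplied default passwords", True),
    Requirement(3, "Protect stored cardholder data", False, "legacy database unencrypted"),
    Requirement(11, "Regularly test security systems and processes", False, "no pen test this year"),
]

gaps = [r for r in checklist if not r.compliant]
print(f"{len(gaps)} of {len(checklist)} sampled requirements need remediation:")
for r in gaps:
    print(f"  Req {r.number}: {r.description} ({r.notes})")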



eDISCOVERY

AN INTERVIEW WITH

JOHN J. JABLONSKI MANAGING PARTNER, GERBER, CIANO, KELLY & BRADY LLP

Information Governance encompasses a variety of disciplines under one umbrella, and we here at InfoGov World realize we are attempting something groundbreaking by trying to pull them together. Among these disparate threads is eDiscovery: electronic discovery (also e-discovery or eDiscovery) is locating, securing, and searching for data and documents that are responsive in a legal matter, during the Early Case Assessment (ECA) stages of civil or criminal litigation. This process typically takes place online over a network, but at times it is conducted offline on storage devices. eDiscovery uncovers potential evidence in a legal matter and covers all electronically stored information (ESI): emails, e-documents, databases, voice mail, phone texts, audio and video files, social media, websites, and all associated metadata. The challenge of eDiscovery is securing the chain of custody while preserving and reviewing all the various types of ESI.

We like to reach out to those at the top of their field; with that in mind, today we are talking with John J. Jablonski, Managing Partner, Gerber, Ciano, Kelly & Brady LLP. John's passion for technology is apparent in his over 20 years of experience advising clients on enterprise-wide initiatives involving data privacy and security, Information Governance, IT governance, litigation readiness, privacy and security of enterprise technology, applications, and cloud-based systems (including clickwrap and browse-wrap agreements, terms and conditions, master services, service level and software licensing agreements, and website privacy statements), matter management software, enterprise-wide electronic evidence preservation, and eDiscovery solutions. As a frequent author, speaker, and national authority on preservation of evidence, Information Governance, data privacy, data security, and e-discovery in the United States, John is a leader in his field.



InfoGov World: You were a litigator in New York. How did that lead to your pioneering work in implementing legal hold notification programs and eDiscovery?
John: I was always involved in technology. Early in my career, I worked to help clients protect the attorney-client privilege when implementing email systems (yes, I am that old). Then, clients started asking how to avoid sanctions in eDiscovery cases. This led to a lot of research on how courts were handling eDiscovery sanctions. As a trial attorney, I came to the conclusion that clients needed a defensible story to tell about their preservation conduct. This led to developing legal hold policies and procedures, and to consulting with designers of technology systems to develop tools clients could use to implement legal holds.

What advice would you have for companies looking to revise and improve their legal hold processes?
It sounds cliché, but: people, processes, and technology. The best technology tools are worthless without the policies, procedures, and compliance processes. Outside counsel needs to be able to prove that a company took reasonable steps. Simply saying "our preservation process was reasonable" is not defensible. Getting IT involved is also key. You would be surprised how often my assessment of an existing legal hold process reveals that no one is doing what the general counsel's office thinks they are doing to implement a hold.

How has eDiscovery evolved in the last 10 years?
It is vastly more efficient and cost-effective. The true differentiator between teams that do eDiscovery well and those that don't is who has a better strategic approach specifically tailored to the case, the case theories, and the ESI involved.

How has the role of privacy increased in discovery preparedness and litigation?
Privacy is being honored in the breach. The industry talks about the need to have better MSAs, SOWs, protocols, and agreements, but I am not seeing it implemented as often as it should be.

Have you seen a burst of activity due to the push for GDPR compliance in your London office? And even in the U.S.?
Mature companies have been prepared for GDPR compliance for approximately two years now. We saw a flurry of activity in May (and continuing) from companies that believe they have zero exposure to GDPR but want to make sure they are complying. Assessments in the past few months have revealed that companies collect and process far more EU resident data than they realize. While the risk of GDPR sanctions may be small for these companies, the risks are very real without some steps to mitigate exposure to GDPR or achieve compliance.

What are the biggest issues facing litigators when it comes to eDiscovery today––and going forward?
Avoiding complacency. Many litigators hand over the eDiscovery process to others without enough guidance to allow eDiscovery teams to be more efficient and cost-effective. There is plenty of room to save clients money, including seeking court-imposed constraints on the scope of eDiscovery. Clients hate cost creep. A defined plan and a reasonable scope (which you are willing to defend) are key to controlling costs.

What is one secret talent or hobby you have that might surprise most of your colleagues in the legal profession?
I am still active in contact sports. I play in an ice hockey league in the winter and the occasional rugby game (both over-40 age brackets). I played rugby competitively through my early 30s with a nationally ranked team, with multiple trips to England, Scotland, and France.

Why Buffalo? What do you like most about living in Buffalo?
It has a great sense of community and we love to overcome adversity. It's not surprising that I would live here if you understand that companies ask me to tackle some of their most complex governance issues involving the use of technology.

Are you a Bills fan?
It's not always easy to be one, but it's our team.

e-Discovery Overview

We here at InfoGov World realize we are attempting something challenging: we are trying to unite disparate Information Governance elements under one umbrella. Part of that effort is to educate and illuminate areas that you might not otherwise be familiar with. With that in mind, let's briefly discuss eDiscovery.

Electronic discovery (also known as e-discovery or eDiscovery) refers to seeking out, locating, securing, and searching electronic data as foundational evidence for criminal or civil cases. The process is conducted over a network or offline, and the identification is often done in connection with a lawsuit or an ongoing investigation. Electronically stored information (ESI) can include:
• Emails
• Documents
• Presentations
• Databases
• Voicemail
• Audio and video files
• Social media
• Websites

While it would be simple to equate evidence from eDiscovery with its hardcopy counterpart, the complexity and volume of the data are an order of magnitude larger. Once the data is collected, it is placed under a legal hold, which means that modification, deletion, erasure, or destruction of the data is strictly prohibited.
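To illustrate the mechanics, here is a minimal Python sketch of how a hypothetical in-house records store might enforce a legal hold; the class, method names, and record IDs are ours, invented for illustration:

class RecordsStore:
    """Toy in-memory records store showing how a legal hold blocks
    destructive operations (hypothetical, for illustration)."""

    def __init__(self):
        self._records = {}   # record_id -> content
        self._holds = set()  # record_ids under legal hold

    def add(self, record_id, content):
        self._records[record_id] = content

    def place_hold(self, record_id):
        self._holds.add(record_id)

    def release_hold(self, record_id):
        self._holds.discard(record_id)

    def delete(self, record_id):
        # Deletion is refused while the record is subject to a hold.
        if record_id in self._holds:
            raise PermissionError(f"{record_id} is under legal hold; deletion prohibited")
        self._records.pop(record_id, None)

store = RecordsStore()
store.add("email-001", "Quarterly forecast discussion")
store.place_hold("email-001")
try:
    store.delete("email-001")
except PermissionError as e:
    print(e)  # -> email-001 is under legal hold; deletion prohibited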



eDISCOVERY Electronic Discovery or eDiscovery is the term used for the initial phase of litigation where the parties in a dispute are required to provide each other relevant information and records, along with all other evidence related to the case (AIIM).

e-Discovery 101 BY ANDREW YSASI

While discovery is not a new term, eDiscovery became a hot topic in the mid-2000s when Electronically Stored Information (ESI) was referenced in the 2006 amendments to the Federal Rules of Civil Procedure (FRCP). The FRCP were amended again in 2015, with eDiscovery a driving factor:

"The burden or expense of proposed discovery should be determined in a realistic way. This includes the burden or expense of producing electronically stored information. Computer-based methods of searching such information continue to develop, particularly for cases involving large volumes of electronically stored information. Courts and parties should be willing to consider the opportunities for reducing the burden or expense of discovery as reliable means of searching electronically stored information become available" (AboveTheLaw.com).

Additional amendments were made to ensure that the costs of acquiring and searching ESI are proportional to the amount of damages the litigation contemplates.
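As a back-of-the-envelope illustration of the proportionality reasoning those amendments invite, consider this Python sketch; every figure in it is invented for illustration, not drawn from any case or study:

# Does the expected cost of reviewing ESI dwarf the amount at stake?
REVIEW_COST_PER_GB = 18_000.0    # assumed all-in review cost, USD per GB
data_volume_gb = 40              # hypothetical collection size
amount_in_controversy = 250_000  # hypothetical damages claimed, USD

estimated_review_cost = data_volume_gb * REVIEW_COST_PER_GB
ratio = estimated_review_cost / amount_in_controversy
print(f"Estimated review cost: ${estimated_review_cost:,.0f}")
print(f"Cost-to-claim ratio: {ratio:.1f}x")
if ratio > 1:
    print("Cost exceeds the claim; the parties should consider narrowing scope.")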

The amendments also sought to ensure preservation and the avoidance of spoliation, which is the adulteration or possible adulteration of evidence. Further, the committee notes to the FRCP provide guidance on how to handle the newly amended rule:

"Due to the ever-increasing volume of electronically stored information and the multitude of devices that generate such information, perfection in preserving all relevant electronically stored information is often impossible…This rule recognizes that 'reasonable steps' to preserve suffice; it does not call for perfection. The court should be sensitive to the party's sophistication with regard to litigation in evaluating preservation efforts; some litigants, particularly individual litigants, may be less familiar with preservation obligations than others who have considerable experience in litigation" (AboveTheLaw.com).

Why does any of this matter? If you are reading this article, you're new to discovery or the law, or you just want a basic understanding of eDiscovery. Discovery is about evidence and building a case. The weight of the evidence is the dynamic whereby the fact finder compares information to determine what fits in his mental construct and what is rejected, and thus falls away (Foundations of Digital Evidence by George Paul).

George Paul goes on to explain how authenticity, integrity, time, identity, hearsay, and reliability provide the crucial foundation for building a case with ESI.

Now that we have the rules, where do we find this information? "The average employee creates roughly one gigabyte of data annually (and growing), and data volumes are expected to increase over the next decade…as much as 50 times what it is today" (Robert Smallwood, Information Governance: Concepts, Strategies, and Best Practices, Wiley, 2014). Smallwood goes on to reference a landmark eDiscovery case, Zubulake v. UBS Warburg:

A landmark case in e-discovery arose from the opinions rendered in Zubulake v. UBS Warburg, an employment discrimination case where the plaintiff, Laura Zubulake, sought access to e-mail messages involving or naming her. Although UBS produced over 100 pages of evidence, it was shown that employees intentionally deleted some relevant e-mail messages. The plaintiffs requested copies of e-mail from backup tapes, and the defendants refused to provide them, claiming it would be too expensive and burdensome to do so.

[Figures: The Electronic Discovery Reference Model (EDRM) – standards, guidelines, and practical resources for legal professionals and e-discovery practitioners – traces ESI from Information Governance through Identification, Preservation, Collection, Processing, Review, Analysis, Production, and Presentation, moving from high volume toward high relevance. Its companion Information Governance Reference Model (IGRM) links Business (profit), Legal (risk), RIM (risk), IT (efficiency), and Privacy & Security (risk) stakeholders in unified governance of information as value, duty, and asset across the create, use, retain, archive, and dispose lifecycle. Source: EDRM.net]


The judge ruled that UBS had not taken proper care in preserving the e-mail evidence, and ordered an adverse inference instruction (an assumption that the evidence was damaging) against UBS. Ultimately, the jury awarded Zubulake over $29 million in total compensatory and punitive damages. "The court looked at the proportionality test of Rule 26(b)(2) of the Federal Rules of Civil Procedure and applied it to the electronic communication at issue. Any electronic data that is as accessible as other documentation should have traditional discovery rules applied." Although Zubulake's award was later overturned on appeal, it is clear the stakes are huge in e-discovery and the preservation of ESI.

The EDRM model provides further insight on how to handle eDiscovery, breaking it into the stages below:
Information Governance – Getting your electronic house in order to mitigate risk & expenses should e-discovery become an issue, from initial creation of ESI (electronically stored information) through its final disposition.
Identification – Locating potential sources of ESI & determining its scope, breadth & depth.
Preservation – Ensuring that ESI is protected against inappropriate alteration or destruction.
Collection – Gathering ESI for further use in the e-discovery process (processing, review, etc.).
Processing – Reducing the volume of ESI and converting it, if necessary, to forms more suitable for review & analysis.
Review – Evaluating ESI for relevance & privilege.
Analysis – Evaluating ESI for content & context, including key patterns, topics, people & discussion.
Production – Delivering ESI to others in appropriate forms & using appropriate delivery mechanisms.
Presentation – Displaying ESI before audiences (at depositions, hearings, trials, etc.), especially in native & near-native forms, to elicit further information, validate existing facts or positions, or persuade an audience.

eDiscovery and the changes to the FRCP in 2006, and again in 2015, laid the foundation for, and further clarified, how ESI can be used in 21st-century cases. Additionally, the eDiscovery frameworks and the EDRM provide excellent guidance on what to keep and what not to keep. If you want specifics on how discovery and disposition are handled at the U.S. federal level, read the FRCP and consider this snippet from US Legal: Rules 26 to 37 of Title V of the Federal Rules of Civil Procedure (FRCP) deal with depositions and discovery. These rules guide the discovery process at the federal level. Most of the state courts have a similar version of the Federal Rules. A summary of rules 26 to 37 under Title V is given there.

ANDREW YSASI, MS, CRM, FIP, CIPM, CIPP, CISM, PMP, IGP, IS THE VICE PRESIDENT OF KENT RECORDS AND INFORMATION MANAGEMENT® AND PRESIDENT OF IG GURU™, A NEWS ORGANIZATION ENSURING RELEVANT IG NEWS IS SHARED WITH THE IG COMMUNITY. HE HAS VOLUNTEERED WITH ARMA AND THE ICRM, WORKED AS AN ADJUNCT PROFESSOR, AND FOUNDED A CAREER CONSULTING PRACTICE, ADMOVIO®. ANDREW HAS SPOKEN ACROSS THE UNITED STATES AND CONTRIBUTED TO ARMA'S INFORMATION GOVERNANCE BODY OF KNOWLEDGE (IGBOK) AND RECORDS MANAGEMENT CORE COMPETENCIES, 2ND ED.

e-DISCOVERY TRENDS

One of the biggest trends in e-discovery is finding ways to reduce the time and costs of conducting e-discovery activities. E-discovery can be a perplexing task given the multitude of electronic information systems in use today, so the trend over the last decade has been toward making data access quicker and easier.

Today's business environment is heavily dependent on electronic information systems and the automated transfer of information into and out of these systems. The type of system depends on the business function and the type of information inside it. The challenge for those tasked with e-discovery is the extraction of "information about information." Moreover, where the systems and data are located has changed dramatically now that cloud computing is the norm. Gaining access to encrypted repositories loosely tied to the organization in a third-party cloud environment can be a challenge. Metadata continues to be an issue because many businesses have not considered the costs of retrieving it; when a business faces litigation, it can be ordered to produce the relevant ESI, and the process can be costly because only dedicated lawyers and records managers can authenticate the information.

Another trend in e-discovery is the use of data analytics to reduce the costs of finding and producing electronically stored information (ESI) for review. A 2012 study by the RAND Institute for Civil Justice found that "about $0.73 of every dollar in e-discovery was spent on" producing ESI. Calculating the true costs of e-discovery is a challenge for businesses.

And there are more complexities. New privacy rules, such as the EU's GDPR, require that organizations take a substantially more proactive stance to protect consumer privacy. HIPAA required U.S. health systems and organizations to protect PHI; with GDPR, it is even more important for developers to design information systems that protect PII and PHI. Known as Privacy by Design, this approach can be embedded into organizational cultures to lower e-discovery costs, since confidential information is more easily located.

REFERENCES
Harrison, E. E. (2015). Future-proofing e-discovery (cover story). InsideCounsel, 26(278), 18-20.
Schaufenbuel, B. (2007). E-Discovery and the Federal Rules of Civil Procedure: A Pocket Guide. Ely: IT Governance Publishing.
Gordon, L. T. (2017). Looking toward a future of accurate, reliable, and available health information. Biomedical Instrumentation & Technology, 51(2), 153-156. doi:10.2345/0899-8205-51.2.153
Hall, B. D. (2017). The impact of smart and wearable technology on trade secret protection and e-discovery. ABA Journal of Labor & Employment Law, 33(1), 79-88.
Hernandez, A. (2016). Common problems with e-discovery––and their solutions. Federal Lawyer, 63(8), 62-68.
AI-powered platform for e-discovery. (2017). KM World, 26(7), 2-32.
Lamont, J. (2017). Emerging content formats challenge: e-discovery. KM World, 26(8), 28-30.
O'Neill, M. (2016). E-discovery and the Internet of Things. Intellectual Property Litigation, 27(4), 26-28.



RECORDS & INFORMATION MANAGEMENT

AN INTERVIEW WITH

PAULA LEDERMAN LEADING RIM EXPERT, PARTNER AT IMERGE CONSULTING

Today we are talking with Paula Lederman, Partner at IMERGE Consulting. Paula has over 15 years of consulting and training experience in the field of content and information management in both the public and private sectors. Her extensive education and experience with a wide variety of systems––including SharePoint, OpenText, and Documentum––make her a highly qualified addition to our interview series. She has been involved in a number of taxonomy and classification projects, as well as many records and electronic records management strategy, feasibility, and development projects. Her experience covers municipal, provincial/state, and federal government agencies, the financial regulatory sector, and the private sector. She excels in implementing practical, user-friendly solutions and in transferring knowledge during client projects.

InfoGov World: Where did you grow up? Go to school?
Paula: I grew up in Toronto, Canada, and went to school at the University of Toronto, where I completed a Bachelor of Science in Computer Science and then a Master's of Library and Information Science. While I was on maternity leave, I decided to complete a Master's of Business Administration at the Schulich School of Business, York University.

How did you get into the records management business?
I began my career as a computer programmer, working on automated library systems. As library automation was applied in corporate settings, those organizations' own records also came into play. Over about ten years, my work shifted from 90% library automation and 10% records management to the exact reverse––now 90% records and information management and only 10% library automation. What is so interesting to me is that in the last few years the issues of digital preservation, digitization of print collections, and archiving of electronic records are beginning to shift that balance back, as libraries find themselves holding a lot of electronic collections. Donations of electronic records to archives have increasingly come with processing funds, which is accelerating the involvement of archivists in managing electronic records.

What types of major projects have you worked on recently?
I've been involved in several archives and digitization projects, as well as Information Governance programs for regulated industries (pharma), financial investment firms, and government agencies. I also completed several projects for United Nations agencies, which offer their own complexities and challenges.

How has records management changed in the last five or ten years?
The shift from paper to electronic records management has always been the biggest challenge and continues to be, as the volume and velocity of content creation explode. And the paper issues haven't gone away. As organizations become more mobile and virtual, it's essential to have access to everything you need to do your job––from anywhere. This is driving many digitization initiatives, which, in turn, create more requirements for metadata and tools that make information findable and retrievable. Keyword searches where you get millions of hits aren't enough. Artificial intelligence and semantic processing have come a long way, and maturity in that field will be the major breakthrough required to make traditional electronic records management work properly. As well, we have seen a movement upwards on the corporate ladder. More and more, recordkeeping is no longer relegated to administrators who only worry about shifting around boxes; use of information has moved to the executive suite, where it is now recognized as a corporate asset. With that recognition comes the realization that it is also a risk, making privacy and security essential considerations for every organization. The other problem with IG is that it is a very politically charged term, and one that seems to imply more bureaucracy and overhead––at a time when organizations want to be agile, lean, and more decentralized. This is a dilemma that has to be resolved: an organization needs consistency in the way information is managed and shared. Connectivity is the key, with everyone being on the same page. As silos are created, the competitive value of information assets is diluted and risks are introduced to the organization.

Has GDPR had an impact in Canada?
Yes, it has for all of North America. Companies with any business activity in the European Union are required to follow the privacy and access requirements of GDPR; from an Information Governance point of view, that means knowing where their information is, how long it is kept, and how it is used and protected. It's pretty shocking that large multinational corporations cannot answer these questions easily or efficiently.

Are you still teaching?
Yes, I teach the certificate course in Records and Information Management online at the University of Toronto School of Continuing Studies. It consists of three twelve-week courses––all online: Fundamentals of Information and Records Management; Records and Information Management Practices; and Records and Information Management Strategy.

What primary texts do you use?
The primary textbook is Robert Smallwood's Managing Electronic Records: Methods, Best Practices, and Technologies (Wiley, 2013). It is well referenced, with contributions from many industry leaders. I supplement this with up-to-date articles, internet resources, or vendor demos based on the specific topic at the time. Standards are always being updated and new trends continue to appear, including treating social media as records, blockchain, digital preservation, and cloud repositories.

What is your favorite vacation spot?
A quiet beach on the Gulf Coast of Florida, where I watch the waves and the sunsets.

What is your greatest achievement?
Being able to raise my three wonderful kids while still having a very exciting and demanding career, which has taken me to many interesting organizations and places all over the world. I feel very privileged.

What hobby or special skill do you have that might surprise most of your colleagues?
I love knitting, which has a cult-like following. I was a bit overwhelmed when I attended a Vogue knitting conference in New York City last year and met 8,000 crazy knitters like me. Not like any Information Governance, ARMA, or AIIM meeting, that's for sure.

Do you like back bacon?
I'm Canadian and also a hockey mom. Of course!

(PHOTOS, CLOCKWISE FROM TOP LEFT): I love that my work requires travel, so I enjoy the beauty of nature wherever I go; this shot is from Alberta, Canada. Along the same theme, this is a shot with my husband on vacation in Newfoundland. My favorite place to relax is the Gulf Coast of Florida, and I never get tired of photographing the changing sunsets from the balcony. I have three grown children and two grandchildren, and I treasure time and holidays with family; here, shopping for new glasses with my adorable granddaughter.

PAULA LEDERMAN IS AN INFORMATION MANAGEMENT CONSULTANT WITH IMERGE CONSULTING INC. SHE HAS OVER 20 YEARS OF CONSULTING EXPERIENCE IN ALL ASPECTS OF INFORMATION MANAGEMENT. SHE HAS A BSC IN COMPUTER TECHNOLOGY AND BOTH AN MBA AND AN MLS IN LIBRARY AND INFORMATION SCIENCE. IN ADDITION TO CONSULTING, SHE IS AN INSTRUCTOR AT THE UNIVERSITY OF TORONTO SCHOOL OF CONTINUING STUDIES IN THE INFORMATION AND RECORDS MANAGEMENT CERTIFICATE PROGRAM. PAULA.LEDERMAN@IMERGECONSULT.COM



RECORDS & INFORMATION MANAGEMENT

Bringing your RIM program to the 21st Century

BY ANDREW YSASI, FIP, CIPM, CIPP, CISM, PMP, CRM, IGP

Competing corporate programs and priorities have often pushed the agenda of many Records and Information Management (RIM) programs aside. Cybersecurity programs, IT projects, cost-cutting initiatives, digital innovation, and regular RIM operations can make record managers feel like they are plugging holes in a leaky dam. If this feels familiar, then perhaps it is time to take a new tack and build a 21st-century RIM program that meshes with, and supports, the business objectives of the organization.

LEADERSHIP

Leadership and executive sponsorship (getting the right person with the budget on board) are important to ensure the success of a RIM program facelift. Sometimes the leadership is there, but only for emotional support; when it comes to actual financial support, there isn't much. If you're responsible for a RIM program, you may have to work with the resources you have, but be mindful that teaming up with other initiatives, like a data governance program or a conversion to a new document management system, can be a way to extend your resources. Regardless, it will take leadership to move any initiative.

INVENTORY AND APPRAISAL

Traditional RIM practices include inventorying and appraising records to ensure proper storage, access, protections, and lifecycle management are in place. Traditional practices still have merit, but the "old school" comfort of assuming offsite storage is the solution is outdated. Understanding the location of digital assets, who has access to them, and the value of that information is essential to 21st-century RIM program governance. It is also key to supporting users in their search for information. Furthermore, the potential loss of the information must be considered––beyond the loss of revenue to the organization. And regulatory fines aren't a trend; they're a reality. (Google "GDPR fines" or "HIPAA fines" if you're skeptical.)

RETENTION AND DISPOSITION

Organizations have worked for decades to roll out retention schedules that are clear, easy to implement, and followed consistently by stakeholders. As cloud technology booms, legacy systems hang on, and data is transferred as if it were currency, it is more important than ever to have retention and defensible disposition. I've spoken to RIM superheroes who created plans to remove terabytes of data and encrypt long-term storage repositories. These plans can save on IT storage costs and reduce the risk profile of the organization. Additionally, some organizations require that their vendors hold certifications to ensure the proper handling of information. In the past, vendors have skirted certification to offer a lower rate, but in the new risk landscape, those savings pale in comparison to regulatory fines.

The debate over strict adherence to retention schedules rages on among RIM professionals. Many feel that retention practices which over-retain information improve an organization's chance to defend itself (or go on the offensive in a lawsuit) because it has the information it needs. However, keeping large information repositories is an expensive and risky proposition for most organizations. Proving what you have (or why you don't have something) is more important than ever before.
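To make the mechanics of defensible disposition concrete, here is a minimal Python sketch, assuming a hypothetical retention schedule and a legal-hold flag; the series names and retention periods are invented, not drawn from any real schedule:

from datetime import date
from typing import Optional

# Hypothetical retention schedule (record series -> years to retain).
# Real schedules are set by counsel, regulators, and business need.
RETENTION_YEARS = {
    "invoices": 7,
    "employment-applications": 2,
    "safety-inspections": 30,
}

def eligible_for_disposition(series: str, created: date, on_legal_hold: bool,
                             today: Optional[date] = None) -> bool:
    """A record may be disposed of only when its retention period has
    elapsed and it is not subject to a legal hold."""
    if on_legal_hold:
        return False
    today = today or date.today()
    years = RETENTION_YEARS.get(series)
    if years is None:
        return False  # unknown series: retain until it is classified
    # Note: a production system would also handle Feb 29 edge cases.
    expiry = created.replace(year=created.year + years)
    return today >= expiry

print(eligible_for_disposition("invoices", date(2010, 5, 1), False,
                               today=date(2018, 9, 1)))  # True: 7 years elapsed
print(eligible_for_disposition("invoices", date(2010, 5, 1), True,
                               today=date(2018, 9, 1)))  # False: legal hold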

FINDING MONEY

Properly disposing of risky (or unused) data to save on storage isn't the only way a records manager can save an organization money. As the collectors and keepers of information for our organizations, we can use tools to mine that information to help the organization make better decisions, save money, and find opportunities to generate new revenue. Perhaps some of the information you collect has monetary value. Perhaps your organization will apply the concepts discussed by Gartner's Doug Laney in his seminal book Infonomics, which explains various ways to monetize information that may eventually end up on your balance sheet as an asset.

TAKE-HOME MESSAGE If your RIM program can help your organization save money, help the business operate more efficiently, and find additional information value in the process, you have yourself a 21st-century RIM program.


DEFINING VITAL RECORDS

Vital records are mission-critical records necessary for an organization to continue to operate in the event of disruption or disaster (e.g., fire, flood, hacker attack), and records that cannot be recreated from any other source. A disaster recovery (DR)/business continuity (BC) plan is developed, tested, and implemented with vital records recovery being paramount. Importantly, vital records include records that maintain and protect the rights of stakeholders, in addition to those needed for continuing or restarting operations in the event of a disaster or other business interruption.

TYPES OF VITAL RECORDS

For a lay audience, vital records are often thought of as those records in the public discourse: birth and death certificates, marriage licenses, and other official records. However, these represent only a subset of vital records for governments; and to be clear, it is critical for governments to maintain historical records. What is less intuitive is that every formal organization––business, nonprofit, or government––has vital records. These records are important for maintaining the operations and infrastructure of a going concern. For example, without payroll records and contact information, employees cannot be paid during emergencies; this very scenario played out in the aftermath of Hurricane Katrina and other disasters.

This is not to say that all records are vital. As little as 1%-7% of an organization's total records are considered vital. As such, each department within an organization needs to determine which records are vital: not just important records, but vital ones. Without identifying this subset of records, it is not possible to begin to prepare for a possible disaster. When business operations are interrupted, those that are ill-prepared suffer the greatest economic losses, and more than two-thirds of organizations that are hit by a disaster go out of business. Vital records can be paper, microfilm, audio/videotape, or electronic records––digital or analog. And appropriate steps must be taken to secure vital records, regardless of their media type.

IMPACT OF LOSING VITAL RECORDS

Vital records are critically important, and their loss can have disastrous effects. While a disaster easily demonstrates the importance of a sturdy building or durable equipment, it is the loss of vital records that proves most impactful. After all, you can always repair, lease, or purchase property, buildings, or hard assets once more. According to one university study, more than 70% of organizations go out of business within three years of suffering a fire that caused the loss of business records and software.

According to United Nations guidelines, the key points for managing risks and protecting vital records include:
• Small subset. Your vital records will be small in number: typically only about 2% to 5% of all business records are vital.
• Inventory and secure. Identify and protect them using IG policies, technologies, and physical security measures.
• Keep updated. Remember to exchange older security copies for current versions as necessary. Also, official copies of vital records need to be tested periodically to ensure they are readable and in the most current and prevalent electronic formats.
• Test the plan. Have a plan for accessing the security copies in the event of an emergency––and practice it.

This should serve as an important warning: understand your vital records, create and implement a vital records program, and limit the losses associated with a disaster.

U.S. NATIONAL ARCHIVES APPROACH TO IDENTIFYING VITAL RECORDS

The U.S. National Archives has created guidelines that American federal agencies should follow when identifying vital records and creating document inventories:
• Consult with the official responsible for emergency coordination.
• Review agency statutory and regulatory responsibilities and existing emergency plans for insights into the functions and records that may be included in the vital records inventory.
• Review documentation created for the contingency planning and risk assessment phase of emergency preparedness.
• Review current file plans of offices that are responsible for performing critical functions or may be responsible for preserving rights.
• Review the agency records manual or records schedule to determine which records series potentially qualify as vital.

Agencies must exercise caution in designating records as vital and in conducting the vital records inventory. Only those records series or electronic information systems (or portions of them) most critical to emergency operations or the preservation of legal or financial rights should be inventoried. The inventory of vital records should include:
• The name of the office responsible for the records series or electronic information system containing vital information
• The title of each records series or information system containing vital information
• Identification of each series or system that contains emergency-operating vital records or vital records relating to rights
• The medium on which the records are recorded
• The physical location for off-site storage of copies of the records series or system
• The frequency with which the records are to be cycled (updated)
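For teams that track this inventory electronically, here is a minimal Python sketch of what one inventory row might look like, loosely following the fields listed above; the field names and sample values are ours, not NARA's:

from dataclasses import dataclass

@dataclass
class VitalRecordEntry:
    """One row of a vital records inventory (illustrative structure)."""
    responsible_office: str
    series_title: str
    category: str          # "emergency-operating" or "rights"
    medium: str            # paper, microfilm, electronic, ...
    offsite_location: str
    cycle_frequency: str   # how often security copies are updated

inventory = [
    VitalRecordEntry("Payroll", "Employee payroll master file",
                     "emergency-operating", "electronic",
                     "Vendor vault, Region B", "weekly"),
]
for entry in inventory:
    print(f"{entry.series_title} ({entry.category}) -> cycled {entry.cycle_frequency}")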

REFERENCES
1. The University of Edinburgh, "Vital Records," March 22, 2011, www.recordsmanagement.ed.ac.uk/InfoStaff/RMstaff/VitalRecords/VitalRecords.htm.
2. NSW Government, "State Records," www.records.nsw.gov.au/recordkeeping/dirks-manual/doing-a-dirks-project/manage-your-vital-records (accessed September 2, 2018).
3. The University of Edinburgh, "Vital Records."
4. United Nations, Archives and Records Management, "Section 15 – Managing Risks and Protecting Vital Records," http://archives.un.org/unarms/en/unrecordsmgmt/unrecordsresources/managingrisksandprotectingvitalrecords.htm#main (accessed September 2, 2018).
5. The U.S. National Archives and Records Administration, "Vital Records and Records Disaster Mitigation and Recovery: An Instructional Guide," 1999, www.archives.gov/records-mgmt/vital-records/#Vital (accessed September 2, 2018).


RECORDS & INFORMATION MANAGEMENT

Tools for GDPR Compliance
THE RECORDS CONTINUUM AND TECHNOLOGY-ASSISTED REVIEW

With GDPR now fully implemented, there is no shortage of software offerings claiming to help businesses manage the complex regulatory environment it presents to businesses outside the EU. However, IG practitioners should remember that software is not a panacea. Despite the promises offered by technology, IG practitioners must be proactive in protecting PII across the entire information continuum––a continuum made more challenging by Big Data, the Internet of Things, and unstructured information that lacks proper metadata.

More than twenty years ago, as information scientists were grappling with the challenges presented by electronic information, researchers from Australia conceptualized the Records Continuum Model (RCM) as a way to view important records amongst other information. The location of information in the RCM determined critical aspects of its use (such as disposition). The RCM is presented here as a means of conceptualizing how PII is treated amongst a sea of unstructured data, and how it can be used to understand technology-assisted review (TAR) and other forms of predictive coding.

[Figure 1: The Records Continuum Model, with evidential, identity, transactional, and recordkeeping axes spanning four dimensions: (1) create, (2) capture, (3) organise, (4) pluralise.]

Given that the RCM "facilitates a proactive and holistic view of managing digital information,"[1] the model positions information along four states of existence: create, capture, organize, and pluralize. Frank Upward and colleagues created the RCM to aid in the recognition of electronic records amongst other electronic information (see Figure 1). This holistic view is needed now more than ever as businesses outside the EU begin to understand how GDPR affects them. Fortunately, software technology has advanced to the point that file analytics and machine learning can help compliance officers and records managers ensure the protection of PII across the entire information continuum. By viewing electronic information through the RCM, IG practitioners have a formidable tool to create, capture, organize, and pluralize PII in any business situation.

The concepts behind automation and machine learning are nothing new. At their core, these concepts involve artificial intelligence (AI), a concept that has piqued the imaginations of science-fiction fans for at least the last century. The RCM's usefulness as an IG tool for GDPR compliance comes from conceptualizing how PII moves through a business––the information flow. TAR is "a process whereby computers are programmed to search a large amount of data to find quickly and efficiently the data that meet a particular requirement."[2] With a detailed knowledge of how information is used in the business (aided by the RCM), IG practitioners can use TAR to identify hidden PII that may not be visible in large batches of unstructured information.
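Here is a toy Python sketch of the idea behind TAR, assuming the scikit-learn library is available; the documents and labels are invented, and a real TAR workflow trains iteratively on far larger reviewer-labeled samples:

# Train a simple classifier on reviewer-labeled documents (1 = contains PII),
# then score unreviewed documents so the likeliest hits are reviewed first.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_docs = [
    ("Attached is the customer list with names and home addresses.", 1),
    ("Lunch options for the offsite next week.", 0),
    ("Patient intake forms with dates of birth are in this folder.", 1),
    ("Reminder: parking garage closes early on Friday.", 0),
]
texts, labels = zip(*labeled_docs)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

unreviewed = [
    "Spreadsheet of employee names, emails, and phone numbers.",
    "Agenda for the quarterly all-hands meeting.",
]
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")  # higher score = review first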



REFERENCES
1. Svärd, Proscovia. "Enterprise Content Management and the Records Continuum Model as strategies for long-term preservation of digital information." Records Management Journal 23, no. 3 (2013): 161.
2. Carroll, L. (2013). The Grossman-Cormack glossary of technology-assisted review. Federal Courts Law Review, 7(1).
3. Svärd, 166.


IM vs. IG

WHAT IS THE DIFFERENCE BETWEEN INFORMATION MANAGEMENT AND IG?

Many professionals struggle to define the differences between Information Management (IM) and Information Governance (IG). We'll explain here, simply and clearly.

IM is the how of managing information, executed mostly by your IT department. It is the day-to-day management of information, which includes activities such as provisioning systems, backups, software development, data modeling, cybersecurity measures, and keeping the network running.

IG, on the other hand, focuses on the why of information. That is, when starting an IG program, you must first form business objectives, which are the "why" or purpose of the program. Then ask, "What information do we need to accomplish these business objectives? How long do we need this information? How secure does it need to be?" The IG program ensures that lifecycle requirements are established and met, and that old information with no business value is discarded to make room for new, high-value information. IG programs focus on reducing information risks and costs while finding new value.

For decades, organizations have implemented a myriad of IM systems and created massive amounts of information. However, the information was not well governed or controlled, and, with increasing volumes and the Big Data effect, knowledge workers became overwhelmed and often did not trust the reports they were getting. Sure, the numbers are there in the report, but are they true and accurate? At one point, the IG Lead for JPMorgan Chase stated that they surveyed their business unit leaders and that their top concern was accuracy in reports. If managers do not trust the information in their reports, it hampers their decision-making capability.

Now, what is the difference between records and information management (RIM) and IG? RIM programs are a key component of broader IG programs, but only one of a number of key functions. Looking at the IG Reference Model, we can see that other key areas include Business Units (typically those with the greatest IG or legal challenges), Legal (mostly focused on e-discovery), IT, and two areas which have gained significance in the last few years: Privacy and Security. When examining the 22 IG processes measured by the CGOC (Compliance, Governance, & Oversight Council) IG Process Maturity Model, it is clear that the Legal, Privacy, and Security functions are weighted much more heavily than RIM. So RIM is certainly one key component in IG programs––one that has gained in stature since the advent of GDPR––but other functional areas typically play a larger role.



DATA GOVERNANCE AN INTERVIEW WITH

BOB SEINER

PRESIDENT AND PRINCIPAL CONSULTANT OF KIK CONSULTING & EDUCATIONAL SERVICES

Today we caught up with Bob Seiner in Pittsburgh. Bob is President and Principal Consultant of KIK Consulting & Educational Services (KIKConsulting.com). Robert S. (Bob) Seiner is well-recognized and respected in the information asset management industry (covering data, information, content, and knowledge management) for his tremendous commitment to collecting, recording, and sharing his experience, and transferring how-to knowledge about successful practices. Seiner is the creator and implementer of the Non-Invasive Data Governance™ approach, which has been recognized and adopted as a practical, less-threatening, and successful alternative to traditional data governance methods by organizations worldwide.

InfoGov World: Where did you grow up? Go to school?

Bob: I have lived in and around Pittsburgh, Pennsylvania all my life. I grew up in the city and moved to the 'burbs many years ago. I graduated with a computer science degree and an MBA from the University of Pittsburgh after originally studying architecture.

How did you get into the data governance field?

I was in the data governance field long before they called it data governance. I met my future boss during my MBA program. During an evening class, he asked me if I wanted to be the first data administrator his company (in healthcare) ever had. I told him I'd think about it, and I proceeded to quickly research as much as I could about data management. The internet was not prevalent back then. After accepting my new role, I came across an article titled "Accountability to the Rescue" by data guru Larry English, who wrote that applying accountability for data was the best way to improve all aspects of data quality. He called it stewardship, and I thought: What a novel idea. I pitched the idea of stewardship to the brand-new, first-ever CIO of the company. His response was: "We need that here. Get started." Stewardship morphed into governance, and that is how I got my start.

How do you define data governance?

My definition has some intentional grit and teeth. Data governance is the "execution and enforcement of authority over the definition, production, and usage of data." I fully stand behind having a strong definition, especially if it catches people's attention and opens the door for greater discussion. At the end of the day, true governance over data or information


requires executed and enforced authority. Some of my clients feel the definition is too aggressive. These clients do not like the words "execution and enforcement," so they tone it down to something less aggressive, like "formalized behavior for the management of data." That is my definition of data stewardship.

How has the field of data governance changed in the last decade?

From a discipline perspective, I do not think data governance has changed that much in the past decade. There are certainly more software tools, regulations, and "experts" than ever. The tools can be very helpful when applied at the proper time, for the proper reason. The regulations are not optional, and the abundance of newer regulations gives many organizations the reason they need to implement formal governance over data. More experts lead to more competition. I find that my experience in the field, along with my innovative yet practical approach to data governance, sets me apart from much of the competition.

What is the most important factor in setting up and running an effective data governance program?

The most important factor is to begin, and stay, non-invasive in your approach to data governance. What I mean by "non-invasive" is that governance already exists in most organizations, albeit often informally, inefficiently, and ineffectively. For example, all people who handle sensitive data must be held accountable for how they handle that data. All people who define data must be held accountable for how the data is defined, and data producers must be held accountable for the data they produce. I've said before that, potentially, everybody is a data steward, meaning that they must be held accountable for their relationship to the data. This increases the number of stewards you have, which, in turn, presents challenges to the people running the data governance program. Formalizing accountability is much less invasive than giving people new roles and responsibilities. There are many ways to stay non-invasive. Just ask if you want to learn more.

You wrote a book titled Non-Invasive Data Governance. Could you describe the key concepts behind your approach?

The complete title is Non-Invasive Data Governance: The Path


of Least Resistance and Greatest Success. The book has been a data governance bestseller on Amazon since 2014. I shared some of the key concepts of my approach in my answer to the last question. Other key concepts include applying governance to existing processes rather than redefining them, and leveraging the governance that already takes place in many different forms: information security, compliance, protection, standardization, and more. The book is also available in audiobook format, but I suggest not listening to it while driving. I also have an online learning plan and a monthly webinar series with Dataversity, and I share information about my approach at events across the country and around the world each year. If you want more information on the concepts of Non-Invasive Data Governance, I have made certain that it is available.

What do you enjoy most about consulting work?

I enjoy solving problems with practical solutions, and I like working with people who are passionate about improving how their organizations operate. The issues associated with improving data management can be consistent from organization to organization, while the solutions may be completely different depending on the organization's culture and management acceptance. These things keep me very engaged.

Tell us about your TDAN.com newsletter. What does it cover, how often does it go out, how many readers do you have, and where can our readers sign up for it?

By now, you can probably tell that I am

fully invested in the data space. I have published my online newsletter, The Data Administration Newsletter (TDAN.com), since 1997; the publication is as old as my youngest daughter. It started quarterly, transitioned to monthly, and is now published twice a month, on the first and third Wednesdays. I average between 30,000 and 35,000 visitors a month, and my registered reader base numbers in the tens of thousands. Each issue contains articles, quarterly columns from industry leaders, blogs, and special features about all aspects of data management and data administration, including governance, big data, business intelligence, data analytics, data quality, and much more. Since data and Information Governance are favorite subjects of mine, you will find many articles on those subjects. People can sign up for the newsletter at no cost by clicking on the appropriate banner on TDAN.com's front page. You can expect emails a few times a month about new content.

What is your favorite sports team, and why?

Now you are talking my language. I am fortunate to live in Pittsburgh, where our professional and college sports teams almost always land near the top of the standings. I watch my Pirates, Penguins, Panthers, and obviously the Steelers with great interest. The first question I get from people who learn I am from Pittsburgh is: "Are you a Steelers fan?" Well, duh.

What is your favorite city to travel to, and why?

I think San Diego is one of my favorite cities in the U.S. The climate is typically perfect, and there is something for everybody, from the beach and the ocean to the desert, the zoo, and the parks. Well, you catch my gist. I am fortunate to speak at data conferences in San Diego often, and I always look forward to going there. Maybe I will see you there next time?

What special talent, skill, or hobby do you have that might surprise your colleagues?

I have always loved music and sports; I grew up on them. I was a DJ on the campus radio station in college, and you will always find music playing in my car and in my office. I listen to mainstream music, but add to it jazz, blues, big band, and good old progressive and classic rock. It seems I have a knack for picking out words of songs that are perfect for every situation. Data is fun and challenging stuff, sports are exciting, and music is the soundtrack of my life. Thanks for asking!



DATA GOVERNANCE

The Four Horsemen of the Data Apocalypse

Recently, I heard someone speak about the four horsemen of the apocalypse. The original four horsemen are described in the last book of the New Testament of the Bible, as death, famine, war, and conquest, a symbolic prophecy of the future. The same can be said about….

THE DATA APOCALYPSE
The message of the four horsemen of the data apocalypse, concerning the attitudes of ignorance, arrogance, obsolescence, and power, resonated with me, because it clearly describes the reasons I have seen for why organizations struggle to manage their data as a valued asset. The four horsemen describe the negative attitudes toward data that have prevented organizations from addressing the need to improve, and gain value from, their most valuable asset.

THE FIRST HORSEMAN IS IGNORANCE.
The ignorance attitude can best be described as thinking that seeking value from data is not that important. Organizations that carry this "ignorance toward data" attitude sit at the lowest end of the data-maturity spectrum, and they lag their competition when it comes to allocating resources to improving their data situation. Improvements can include better data quality, data understanding, and data protection, and improved regulatory and compliance-reporting capabilities. These organizations will be the last to hire Chief Data Officers (CDOs), the last to implement formal data governance programs, and at the end of the line when it comes to collecting and managing the information about their data, otherwise known as metadata.


You have heard the statement that “ignorance is bliss.” Well, not in this case. In this case, ignorance leads to organizations falling behind the times during the blossoming information age.

THE SECOND HORSEMAN IS ARROGANCE.
The arrogance attitude can be described as thinking that management knows more than the people who own and are responsible for the data. Organizations that maintain this attitude demonstrate the belief that management knows best. But management will not know the difficulties their teams are having if they do not communicate with the people who know the data best. Arrogance can be avoided by having an open dialogue with the people who define, produce, and use data as part of their daily (even hourly) routine, and by conducting internal assessments of how the organization governs its data against the industry's best practices. Unnamed philosophers speculate that "the difference between arrogance and confidence is performance." Management should be looking at the data they use to improve their organization's performance and be open-minded toward continuous data improvement.

THE THIRD HORSEMAN IS OBSOLESCENCE. The obsolescence attitude can be described as thinking that the present data, in the present systems, will never die; if it carried the organization this far, there is no reason to change. Organizations that carry this attitude are afraid to move out of the past and invest in the future. To stay one step ahead of the competition, organizations must continuously focus on improving data quality, data access, understanding, and protection—even if the present state allows the organization to get by. Organizations with obsolete data and systems become inefficient, ineffective, and

act very informally toward improving their data situation. As Andy Rooney, noted American radio and television personality, once said, “The fastest thing computers do is go obsolete.” The same can be said about the data housed on these computers, and the systems that manage the flow and use of data on these computers. Resting on your data laurels is the quickest way to become obsolete.

THE LAST HORSEMAN IS POWER.
The power attitude can best be described as the feeling that projects owned by the most influential members of management are more critical than other projects. Organizations in which power is the driving attitude have a difficult time getting out of their own way when prioritizing the activities that will lead to higher-quality data. Managers with the most seniority, or from the most profitable part of the business, typically see major investments in their own data infrastructure and their personal pet projects as most important, while the organization's most critical data needs are misunderstood or dismissed as less important. William Gaddis, the famous American author, once said that "power doesn't corrupt people, people corrupt power." The truth is that the most powerful people in the organization have a responsibility to know and understand the need to prioritize the projects that will have the most valuable impact on the organization. Power moves often lead to bad decision-making, which leads to the squeakiest wheels getting the grease while the other wheels fall off the axle.
The four horsemen of the data apocalypse are a simplified way of looking at the impediments to an organization's ability to improve its data situation. The better we recognize these attitudes in our organizations, the quicker and more effective we will be at addressing and managing the most important and valuable asset we own: our data. —Robert S. Seiner


DG vs IG: What’s the Difference? AN EXCERPT FROM ROBERT SMALLWOOD’S

UPCOMING BOOK, INFORMATION GOVERNANCE: CONCEPTS, STRATEGIES, & BEST PRACTICES, 2ND EDITION (WILEY, 2019)

There has been a great deal of confusion around the term Information Governance (IG) and how it is distinct from data governance. Some books, articles, and blogs have compounded the confusion by offering a limited definition of IG, or sometimes a definition that is just plain incorrect, often confusing it with data governance. Even so-called "experts" confuse the terms!

DATA GOVERNANCE
Data governance expert Robert Seiner, author of the book Non-Invasive Data Governance, offers this definition: "Data governance is the execution and enforcement of authority over the definition, production and usage of data." Data governance involves processes and controls to ensure that data at the most basic level, the raw data the organization is gathering and inputting, is true, accurate, and unique (not redundant). It involves data cleansing (or data scrubbing) to strip out corrupted, inaccurate, or extraneous data, and de-duplication to eliminate redundant occurrences. It also usually involves implementing Master Data Management (MDM) software and methods to ensure that applications reference a "single version of the truth." Data governance focuses on data quality "from the ground up," at the lowest or root level, so that subsequent reports, analyses, and conclusions are based on clean, reliable, trusted data (or records) in database tables. Data governance is the most fundamental level at which to implement Information Governance. Data governance efforts seek to assure that formal management controls (systems, processes, and policies) are implemented to govern critical data assets, to improve data quality, and to avoid the negative downstream effects of poor data. DG efforts also hold data stewards accountable. IG is new, but data governance is also a newer, hybrid quality control discipline


that includes elements of data quality, data management, IG policy development, business process improvement (BPI), and compliance and risk management.
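To make the data cleansing, de-duplication, and "single version of the truth" steps concrete, here is a minimal sketch using the pandas library; the table, column names, and matching rules are illustrative assumptions, not from the excerpt, and real MDM tooling is far more sophisticated:

import pandas as pd

# Illustrative customer table; the names and columns are hypothetical.
customers = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "name": ["Acme Corp", "acme corp ", "Globex", None],
    "email": ["info@acme.com", "info@acme.com", "sales@globex.com", "not-an-email"],
})

# Cleansing: normalize names, then drop rows that fail basic validity rules.
customers["name"] = customers["name"].str.strip().str.upper()
valid_email = customers["email"].str.contains(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", na=False)
cleansed = customers[valid_email & customers["name"].notna()]

# De-duplication: rows with the same normalized name and email collapse into one
# "golden record," the single version of the truth for downstream reporting.
golden = cleansed.drop_duplicates(subset=["name", "email"], keep="first")
print(golden)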

INFORMATION GOVERNANCE
Corporate governance is the highest level of governance in an organization, and a key aspect of it is Information Governance (IG). According to the Sedona Conference, IG programs are about minimizing information risks and costs and maximizing information value. This is a compact way to convey the key aims of IG programs, and it is what should be emphasized when the merits of an IG program are discussed. The definition can be distilled further: an even more succinct "elevator pitch" definition of IG is "security, control, and optimization" of information.
IG processes operate at a higher level than the details of data governance. The IG approach focuses not on detailed data capture, stewardship, and quality processes, but rather on controlling the information that is generated by IT, office systems, and external systems; that is, the output of IT. IG efforts seek to govern and control information assets to lower risk, ensure compliance with regulations, and improve information quality and accessibility, while implementing information security measures to protect and preserve information that has business value. IG programs focus on breaking down traditional functional-group "siloed" approaches to maximize the value of information. Mature IG programs employ the principles of infonomics to measure and monetize information. But these programs must also rely on robust, effective data governance programs to provide good, clean data, so that the calculations and analytics applied to it yield true and accurate results.

8 WAYS TO IDENTIFY PERSONAL DATA

GDPR was a tsunami for businesses across the globe. Now that it has crashed upon the shore, the search to locate and secure personal data has become paramount. Since many businesses are not quite up to the task, here are eight strategies that can assist in the identification of personal data (a simple pattern-scan sketch follows the list):
1. Looking for documentation. This might seem intuitive, and you would be right. The problem is that only the most basic of systems will let you find consumers' personal data this way.
2. Manual investigation. Again, smaller systems lend themselves to this; the larger the system, the more labor-intensive it becomes.
3. Turning to application or technical specialists. Since the application and its underlying data model are more technical than a manual investigation can handle, seeking out a specialist is the right move.
4. Hiring external consultants. As with technical specialists, you are outsourcing expertise. There can be a drawback, however: there is often a cost associated with a consultant getting up to speed on your particular data landscape.
5. Metadata-driven software approach. An intriguing approach is to use analytics on the metadata associated with personal data in order to locate it. This approach is often much quicker than the others.
6. Intranet or internal system search. Performing basic searches using existing tools in the applications that house customer/consumer data.
7. Best guess and hypothesis testing. While it sounds like statistical testing, this approach is predicated on observations and insights, and is frequently inaccurate as a result.
8. Turning to software vendors. Using new GDPR and privacy compliance tools for data mapping and data inventorying.
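As a tiny illustration of the pattern-based end of these strategies, the sketch below scans free text for two common personal-data patterns. The regexes and sample text are illustrative assumptions; a real GDPR effort would pair scanning like this with the metadata-driven and vendor approaches above.

import re

# Illustrative patterns; real personal-data discovery needs far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_for_pii(text):
    """Return each pattern name mapped to the matches found in the text."""
    return {name: rx.findall(text) for name, rx in PII_PATTERNS.items()}

sample = "Contact Jane at jane.doe@example.com or 555-867-5309 for the renewal."
print(scan_for_pii(sample))
# {'email': ['jane.doe@example.com'], 'us_phone': ['555-867-5309']}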



ECM & EFSS

Just Semantics? CONTENT SERVICES AND THE DEMISE OF ECM?

Enterprise content management (ECM) software (sometimes referred to as content management systems, or CMS) emerged in the mid-nineties to manage disparate content types, from web content to internal e-documents, reports, and business records. When a document is rendered in various forms (e.g., web, electronic, print), only one file of the content is needed, and it is rendered consistently in the format required. This one file is kept up-to-date for access across departments or the entire enterprise.
ECM can manage all types of content in the enterprise as objects, although in practice its focus is on managing unstructured content, while databases manage structured content. Structured content consists of data in rows and columns that can be manipulated arithmetically in calculations; this data is primarily financial and is often used in financial reports, business intelligence (BI), and analytics applications. Unstructured content is everything else, and by most estimates accounts for 80-90 percent of an organization's total information, including e-documents, e-records, and e-mail. This may include scanned copies of documents like contracts or customer letters, loan or insurance applications, bills of lading, and land deeds, or internally created documents, like letters and memos, spreadsheets, audiovisual presentations, and other common business outputs.

ECM’S ROOTS Document imaging was the first of the ECM suite to develop, and started as a sort of electronic filing cabinet—a very expensive one. The technology did not really take off until the mid-1990s, when workflow capabilities were added to move folders and documents through worksteps in an automated


way, capturing statistics along the way. Graphical workflow capabilities made designing the work process steps much easier, and some dedicated or "pure play" workflow companies emerged. But imaging and workflow software did not yet manage other types of e-documents, only images, so a market for document management software soon opened up, and those companies were quickly swallowed by the big document imaging players like Wang, FileNet, and IBM. Those players also purchased report output and management software companies, at the time called computer output to laser disk (COLD) and later renamed ERM (enterprise report management). So document imaging evolved into document management and included report management. Then the need for electronic records management (another ERM) capability became apparent. This was the marketplace's response to organizations demanding complete information management solutions. The major software firms in this marketplace developed complementary technology sets that became an integrated suite of ECM applications, which includes:
• Document imaging: scanning and digitizing paper documents.
• Document management: versioning, renditioning, check-in/check-out of documents, and search capabilities.
• Records management: formally declaring documents as business records and tracking them according to retention and disposition schedules.
• Collaboration: working in team workspaces; creating, sharing, and editing documents with physically remote users.
• Web content management: maintaining one copy of content and publishing it in multiple places across the Web and on intranets.
• Digital asset management: managing graphic files such as logos, artwork, advertisements, marketing collateral, and other digital assets.
• Enterprise report management: creating, publishing, and managing reports across the enterprise (which were formerly printed).
• Workflow: automated routing of documents through the worksteps of a business process to speed approvals and other decisions; workflow capabilities are often included in ECM suites or as add-ons.
ECM systems now provide powerful document management support for versioned e-documents, and ensure that users can easily retrieve the latest versions while tracking revisions. But many users will still use out-of-date versions they have stored locally, outside the repository (e.g., on a desktop PC, tablet, or smartphone, or in an e-mail inbox). This can result in costly errors, wasted work, and, most important, failure to comply with current regulations and operating procedures. Going to a cloud-based approach can address these issues.

THE ECM VS. CONTENT SERVICES DEBATE
A generation after ECM's beginnings, in 2017, Gartner Group announced a recasting of ECM as "Content Services" (CS), revolving around applications, platforms, and components (Woodbridge, 2017), and provided its rationale for doing so. That set off a virtual firestorm in the ECM marketplace, especially in the AIIM community. The renaming and recasting of ECM was due, in part, to the fact that the promise of ECM was never fully realized. David Jones, Director of Product Marketing at Nuxeo, noted earlier this year: "Traditional ECM vendors convinced enterprises to move to an approach focused around a single-repository with a suite of products built on top. What they actually delivered, however, was a monolithic architecture that was unable to grow and deliver against the customer expectation." In part because of dispersed and divergent electronic information repositories and cloud computing, businesses realized that what they really needed was access to information "intelligently and from anywhere." Enter CS. Rather than putting ECM architecture front and center, businesses are discovering that with CS they put applications and processes ahead of the ECM structure. AIIM's definition: CS is an alternative strategy to provide a more practical, multi-repository solution to achieve the benefits promised by the original vision of ECM: to intelligently capture information and disseminate it to the right people, processes, and departments, while ensuring compliance and creating process and cost efficiencies.
What do you think? Is the change from ECM to CS real, and needed? Will it help organizations achieve better results than in the past?

Eyes Wide Open EFSS BENEFITS AND RISKS FOR USERS

Teamwork in project management is key to successful project completion. Facilitated by information and communications technology (ICT), businesses with global footprints can strategize and collaborate in real time with professionals half a world away. Enterprise File Sync and Share (EFSS) vendors (e.g., Box, Dropbox) provide the ability to view and manipulate the same document regardless of the user's physical location. Using EFSS, virtual collaboration among team members across the globe is a reality that businesses depend on with growing frequency. Yet many users are not fully aware of the key benefits and risks of utilizing EFSS tools.

EFSS BENEFITS
EFSS is essentially a content service with built-in security and collaboration features that support today's electronically enabled world. One benefit is file synchronization across all devices, allowing access to the same content from disparate devices that may be scattered geographically, including the tablets, laptops, and smartphones that have become essential to our digital lives. This also means instant access anywhere there is an Internet connection. A key security benefit is encryption of data both at rest and in transit. Data can also be containerized and sent only to those allowed to see it.

EFSS RISKS The risks of using EFSS are often understated—mostly by the vendors offering these services. Certainly, they want a company to load as much content as possible into their

proprietary EFSS system. But most professionals utilizing these services never think about how they might port these e-documents and files to another EFSS provider, should the need arise. And if you think about it, it is not in the best interest of these vendors to make it easy for a customer to move their content to another platform. Maybe there are no "hostage fees" per se, but generally there are few, if any, tools provided to migrate e-documents, with their associated metadata intact, to another platform. Also, when comparing EFSS solutions to traditional enterprise content management (ECM) systems, the former are newer and not as mature and feature-rich when it comes to managing content through its lifecycle, from creation to final disposition. Placing and removing legal holds can also be an issue.
There are also security concerns that should be addressed. The easy-to-use functionality of an EFSS is usually not secure enough for mission-critical enterprise applications. But because it is so easy to use, many knowledge workers assume that their important e-documents are secure. If workers use a relatively insecure EFSS, they could open their organization to potential data breaches or compliance violations. So when moving toward using an EFSS, users must do so with their eyes wide open, and must conduct their own assessment of security and compliance risks. Generally, it is safest to use EFSS for collaborative functions that require a relatively short retention period, such as internal or marketing projects, and those with a low risk of e-documents being put on legal hold and requiring production during litigation.



ARCHIVING & LONG-TERM DIGITAL PRESERVATION

Backups, Archiving, Preservation—Oh My!

COMPARING ROUTINE BACKUPS, CONTENT ARCHIVING, AND LONG-TERM DIGITAL PRESERVATION

Digital information, which relies on complex computing platforms and networks, is created, received, and used daily to deliver services to citizens, consumers, customers, businesses, and government agencies. Organizations face tremendous challenges in the twenty-first century to manage, preserve, and provide access to electronic records for as long as they are needed. In the event of a catastrophe, a business needs a plan for remaining open and minimizing loss of capital. Cerullo & Cerullo (2004) noted that "a business continuity plan (BCP) seeks to eliminate or reduce the impact of a disaster condition before the condition occurs" (p. 70). A business continuity plan has three functions: contingency, resilience, and recovery. Although routine system backups, content archiving, and long-term digital preservation all preserve information, each provides a unique function.


BACKUPS
Most people are familiar with backups, which are done daily and often also weekly, when a full copy of daily transactions and system activity is made. Backups are written serially, usually to tape, so searches must be done linearly, making them slow and cumbersome for tasks like retrieving records during the e-discovery phase of litigation. Complex searches using multiple search terms, phrases, or dates are difficult or impossible. But when systems need to be restored, due to a system failure, data breach, or ransomware attack, an entire restoration of the system data can be accomplished for business continuity (BC) purposes.

CONTENT ARCHIVING
Content archiving is a newer concept, in which all content being created can be archived in real time. This includes email messages, which are captured, time-stamped, and archived, preserving evidence and helping to avoid spoliation claims (claims that content was changed or deleted after the fact) during litigation. Any email or content archiving solution must perform four key business functions:
1. Ensure archive completeness;
2. Provide efficient and reliable long-term storage;
3. Ensure security and integrity of content;
4. Provide immediate access to archived content for authorized users.

DIGITAL PRESERVATION
Long-term digital preservation (LTDP) is defined as: long-term, error-free storage of digital information, with means for retrieval and interpretation, for the entire time span the information is required to be retained. Digital preservation applies to content that is born digital as well as content that is converted to digital form.
Some digital information assets must be preserved permanently as part of an organization's documentary heritage. Dedicated repositories for historical and cultural memory, such as libraries, archives, and museums, need to put in place trustworthy digital repositories that can match the security, environmental controls, and wealth of descriptive metadata these institutions have created for analog assets (such as books and paper records). The digital challenges associated with records management affect all sectors of society (academic, government, private, and not-for-profit enterprises) and ultimately all citizens of all developed nations.
Fortunately, a few software and services vendors have developed a cloud-based approach to digital preservation, which makes the process of bringing LTDP expertise in-house much easier, faster, and more economical. This cloud-based approach also assures the durability of the digital information: five to six copies of any digital document or file are saved on different servers in different parts of the world, using major cloud providers such as Amazon and Microsoft. That geographic dispersion helps mitigate the risk associated with a disaster in any particular region. Files are stored in technology-neutral file formats, and the vendor takes care of any migrations to newer formats. The veracity and integrity of each file is tested regularly to ensure it has not been corrupted: a checksum algorithm is applied to confirm that no changes have occurred at the bit level, and if they have, the error is flagged and a new or updated copy is created.
A comprehensive business continuity, litigation readiness, and archiving plan must include all three of the above, backups, content archiving, and LTDP, under an IG program umbrella.
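To illustrate the fixity checking described above, here is a minimal sketch of a checksum audit using SHA-256 from Python's standard library; the file paths, manifest, and repair step are illustrative assumptions, not any particular vendor's implementation:

import hashlib
from pathlib import Path

def sha256_of(path):
    """Compute a file's SHA-256 digest, reading in chunks to handle large files."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit(manifest):
    """Compare each preserved file against the digest recorded at ingest;
    return the names of any files whose bits have changed."""
    return [name for name, stored in manifest.items() if sha256_of(name) != stored]

# A preservation system keeps digests captured at ingest, e.g.:
#   manifest = {"archive/deed-1932.pdf": "9f2c...", ...}
#   for damaged in audit(manifest):
#       restore_from_replica(damaged)   # hypothetical repair step using another copy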

ARCHIVING

ITS CHALLENGES AND VALUE

Generally, there are two reasons organizations undertake archiving. First, organizations must save and make available information that is evidence of a business function; such information is crucial to ensuring the organization adheres to its regulatory environment. Second, organizations should save specific information that has enduring value to the organization. Typically, this second reason for archiving aligns with a desire to ensure the organization's historical origins are documented. In much the same way a historical museum showcases artifacts, a corporate archive showcases artifacts specific to its origins. For example, Wells Fargo Bank has 12 museums that also double as corporate archives; perhaps their most recognizable artifacts are the old stagecoaches used while conducting business.
In the business world, "archiving" might seem straightforward. For example, Secure Data Management, an industry leader in offsite digital storage solutions, defines archiving as "the process by which inactive information, in any format, is securely stored for long periods of time. Such information may (or may not) be used again in the future, but nonetheless should be stored until the end of its retention schedule." Although this definition alludes to records management principles, i.e., the retention schedule, other references point to the complexity of the root word: archive. The Society of American Archivists defines archive as "to transfer records from the individual or office of creation to a repository authorized to appraise, preserve, and provide access to those records." This definition expands archiving by indicating that ownership and control of the information are vital to the business's functions. These two definitions also comport with the International Organization for Standardization's definition of a record. Known as ISO 15489-1:2001, this standard defines records as "information created, received, and maintained as evidence and information by an organization or person, in pursuance of legal obligations or in the transaction of business."
Of the three definitions listed above, this last one is the most crucial and presents the most challenges. Legal obligations and business transactions are often intertwined. In the healthcare industry, PHI and PII records must be used and stored in specific formats and locations that adhere to the regulatory environment created by laws such as HIPAA. The primary challenge is the volatility of today's electronic information. If records are not archived according to these laws, the entity using the essential or vital records can become subject to fines, sanctions, and other legal consequences.



EMERGING TECHNOLOGY

AI and Information Governance

CAN AI HELP YOUR ORGANIZATION ACHIEVE ITS IG GOALS? BY ROBERT SMALLWOOD

Artificial intelligence (AI) is the ability of software to "learn" and make decisions based on inputs and conditions. This creates intelligent computers that can reason on a fundamental level like humans, only much more rapidly and efficiently. The use of AI has increased drastically, in applications like robotics, complex classification, medical and maintenance diagnostics, advertising and pricing, and even compliance. AI solutions, if pundits are to be believed, will solve everything from data storage to transportation. And the use of AI to assist in Information Governance program tasks and activities is steadily rising.

FOSTERING GDPR COMPLIANCE The European Union’s new General Data Protection Regulation (GDPR) has left companies across the globe scrambling to gain control over the consumer data they have housed. Some software companies are offering AI tools to assist in this effort. One example is Informatica’s Compliance Data Lake, which uses machine learning technology to simplify compliance tasks. The software aims to give enterprises a holistic, comprehensive view of consumer data, regardless of where it is stored. It uses machine learning to identify relationships among data in different databases and data stores, including email, instant messages, social media, transactional data and other sources. The goal is to ensure compliance with GDPR by gathering technical, business, operational and usage metadata, and providing more accurate compliance analytics and reporting.

AUTO-CLASSIFICATION AND FILE REMEDIATION
AI is also being applied to large collections of unstructured information. Unstructured information lacks detailed metadata and must be classified and mapped to an organization's information taxonomy so it can be managed. AI can be used to inspect the contents of e-documents and e-mails and make a "best guess" at how they should be categorized. Some of the more sophisticated file analysis, classification, and remediation (FACR) software can actually insert basic metadata tags to help organize content. This is an essential task for executing defensible disposition, that is, following an established records retention schedule (RRS) and dispositioning (usually destroying) information that has met its lifecycle retention requirements.
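As a toy illustration of that "best guess" categorization, the sketch below assigns a category and builds a basic metadata tag from document text; the categories, keywords, and scoring are illustrative assumptions, far simpler than commercial FACR engines:

# Illustrative keyword model; real FACR tools use trained classifiers instead.
CATEGORIES = {
    "Contracts": {"agreement", "party", "term", "indemnify"},
    "Invoices": {"invoice", "amount due", "remit", "payment"},
    "HR": {"employee", "benefits", "performance", "leave"},
}

def best_guess(text):
    """Pick the category whose keywords appear most often in the document."""
    lowered = text.lower()
    scores = {cat: sum(lowered.count(kw) for kw in kws)
              for cat, kws in CATEGORIES.items()}
    top = max(scores, key=scores.get)
    return top if scores[top] > 0 else "Unclassified"

doc = "This Agreement is made between the parties; each party shall indemnify..."
tags = {"category": best_guess(doc)}   # the basic metadata tag to insert
print(tags)  # {'category': 'Contracts'}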

E-DISCOVERY COLLECTION AND REVIEW
AI is also commonly used to locate information that is responsive to a particular legal matter. Using predictive coding software, a human expert, usually an attorney working on the case, feeds the software examples of content (e.g., documents and emails) that are relevant. The software then goes out into the information stores and looks for similar content. It serves up candidates, and the expert reviewer goes through a sample, teaching the software "more like this" and "not like this" so it gets better at narrowing its searches. After a few iterations, the software becomes quite efficient at finding the relevant information. And it doesn't have to be perfect: courts in the U.S. have ruled that if predictive coding software locates 70% or more of the responsive information, that is acceptable, since that is about the accuracy rate of human reviewers, given fatigue and error.
AI has proven to be a good tool for IG programs to accomplish key tasks, and the use of AI in IG programs will continue to grow.
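Returning to the predictive coding loop described above, here is a bare-bones sketch of one iteration using the scikit-learn library; the seed documents, labels, and corpus are illustrative assumptions, and production technology-assisted review platforms are far more elaborate:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Attorney-labeled seed set: 1 = "more like this" (relevant), 0 = "not like this".
seed_docs = ["merger term sheet draft", "q3 cafeteria menu",
             "board minutes on acquisition", "holiday party flyer"]
seed_labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score the wider information store and surface likely-responsive documents.
corpus = ["due diligence checklist for acquisition", "parking garage notice"]
probs = model.predict_proba(vectorizer.transform(corpus))[:, 1]
for doc, p in zip(corpus, probs):
    print(f"{p:.2f}  {doc}")

# The reviewer labels a sample of the results, those labels are added to the
# seed set, and the model is refit; iterations continue until recall is acceptable.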



THE FATHER OF THE WEB TIM BERNERS-LEE INVENTOR OF THE WORLD WIDE WEB (THE WEB) BY MARK DRISKILL

Imagine growing up in a household that regularly discussed how computers would one day function like human brains. This was the case in the fifties and early sixties for Tim Berners-Lee, one of four children born to computer pioneers Mary Lee Woods and Conway Berners-Lee. Tim's parents were part of a team of engineers who worked on the Ferranti Mark 1, the first commercially available computer. The Ferranti Mark 1 was the culmination of over a decade of research and development begun during World War II and explicitly linked to Alan Turing and other computer science pioneers. For young Tim and his siblings, the family discussions were potent fodder for developing scientific inquiry, even if they did not know it.
Informed in part by his parents and the fertile imagination they instilled in him, Tim Berners-Lee went on to invent the World Wide Web (the Web). While he did not invent the Internet (that is another story entirely), he did invent what sits on top of it: the way people use information and communicate over the Internet as an infrastructure for electronic information flow. In the spirit of the remorse Oppenheimer felt after the war, Berners-Lee at least partially regrets aspects of his invention, most notably the effect the Web has had on personal privacy and the dissemination of fake news by foreign agents to influence elections.
While working as a contractor for the European Organization for Nuclear Research (CERN) in the eighties, Berners-Lee developed the concept of the Web and the Web browser: software that locates a server through the domain name system (DNS), connects to it using the transmission control protocol (TCP), and retrieves and displays hypertext pages via the hypertext transfer protocol (HTTP). The Web was born out of Berners-Lee's attempt to share information with other scientists at CERN and at other strategic places across the globe. A generation beyond that first use of the Web's connections to other scientists, Berners-Lee now realizes there was a crucial error in his thinking,

one that to this day businesses and other organizations that use the Web face on a regular basis: cybercrime. Berners-Lee did not anticipate that nefarious characters would one day use the Web for criminal purposes. In an interview conducted after his induction into the Academy of Achievement in 2007, he noted the Web "was designed to be a collaborative workspace for people to design on a system together." In his initial eureka moment, privacy was not his greatest concern. This phenomenon is not exclusive to Berners-Lee: in our attempts to manipulate our environments, humans often push forward

with innovation without understanding its social ramifications. This is certainly the case with the electronic information revolution, a revolution largely guided by the flow of information across the Internet using the Web and Web browsers. The unique aspect of the current revolution is its scope of influence across a very short period. In a little less than three decades, the revolution started and reached the phase where philosophers and historians measure its effects. This speed is beneficial for humanity, most notably because pioneers such as Berners-Lee survive to become advocates on the very aspects of their inventions they did not consider in the beginning. For Berners-Lee, it was the decision to give his invention to the world to facilitate "an open and democratic platform for all."

In this third phase of Berners-Lee's career, he has taken on the role of one of the Web's primary patriarchs, the parent charged with overseeing the authenticity and transparency of the Web. Darren Walker, president of the Ford Foundation, refers to him as "the Martin Luther King of our new digital world." Today, Berners-Lee is the founder and an active member of the World Wide Web Foundation, an organization dedicated to ensuring the Web is a free and open platform. He is also a leading scientist at MIT, where he is developing software that will help return the Web to his

original democratic ideals. Solid is being designed as a platform that takes control of the Web back from corporations and gives it to the people. Although it might be a coincidence, there are some parallels to the recent implementation of the EU's GDPR. The most notable parallel is the control of PII by the people who own it, not the corporations that use it. Solid is, in theory, a platform that uses GDPR as a foundation for protecting the right to privacy, a human right that corporations often ignore. The broader parallels, however, involve the ethical and philosophical conditions of the electronic information revolution. These conditions still need clarification as leaders such as Tim Berners-Lee define the moral constructs of our digital footprints.



EMERGING TECHNOLOGY

Machine Learning and Information Governance

THE ELECTRONIC SKIN OF EARTH

IG IS THE HEART OF THE MATTER

In a 1999 Business Week article, journalist Neil Gross offered a prediction that sounded like it came straight out of the latest science-fiction blockbuster: "In the next century, planet earth will don an electronic skin." Gross was part of a team that published 21 ideas they predicted would shape the 21st century's technological footprint. Gross predicted there would eventually be enough devices connected to each other and to the Internet to cover the earth in an electronic skin, noting that as of the close of the 20th century, "millions of embedded electronic measuring devices: thermostats, pressure gauges, pollution detectors, cameras, microphones" were already collecting and using electronic information.
We live in a world driven by Internet-enabled electronic devices that allow us to connect to each other in ways never thought possible. Many of us wake up in the morning, put on a Fitbit or Apple Watch, and head out to exercise. Meanwhile, refrigerators throughout the world send shopping lists to retailers as the family car messages the dealer that a routine oil change is due, triggering still another message back to the phone of the car's owner confirming that the oil change was scheduled. This data is largely unstructured and unmanaged, and it moves around a network that has come to be known as the Internet of Things (IoT). Although the information that travels around the IoT is invisible, it should still be monitored and managed, which presents challenges to IG and RIM professionals. An IG framework is like a heart, in that it controls the flow of information and makes sure it reaches the proper connections.
While twenty years ago tech journalists such as Gross informed our Y2K-focused world that technology and biology would one day merge, it was (and to some degree still is) science fiction. That such predictions appeared in a business-focused magazine was telling. Modern business depends on electronic connections between devices, and these connections carry ribbons of information streams that spread throughout the business. These streams are essentially the lifeblood of the business, and they should be managed within an Information Governance framework. That can be challenging, because IoT-connected devices use machine language to talk to each other. Standard recordkeeping practices still apply, but retention, deletion, storage, privacy, and destruction are problematic because the information is unstructured. As these unstructured information streams are utilized in the business world, they generate incidental information that can itself be valuable; in the tech world, this is known as metadata. As the IoT's skin gets stitched together, "it will combine the global reach of the Internet with a new ability to directly control the physical world, including the machines, factories, and infrastructure that define the modern landscape."

Gartner predicted that by 2020 there will be 20.8 billion devices connected to the IoT.


Information management issues plague every business, and Information Governance is almost always the solution. This can take the form of understanding what information to store, or how best to articulate Information Governance rules and best practices. Innovative IG program managers leverage new technologies to assist in tackling thorny IG challenges.

"Machine learning is a field of study that gives computers the ability to learn without being explicitly programmed." — Arthur Samuel

Can machine learning offer a solution?

Machine learning is best explained as a process that improves the performance of a system through experience and information; for the purposes of information management, we are talking about data. By working with data and processes, patterns are harvested in order to better analyze the data. Since machine learning depends on data, the most important consideration from an Information Governance standpoint is building an appropriate model: knowing which question to ask and what kind of data to use in the machine learning workflow. And since machine learning depends on data, and IG is all about quality data, the continued collection of good data is paramount for calibrating the model to make better decisions and predictions.
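As a minimal sketch of that workflow (frame the question, choose the data, fit and apply the model), here is an illustrative example using scikit-learn; the features, labels, and retention question are assumptions for demonstration only:

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Question: which documents are likely business records needing retention?
# Illustrative features: [age_in_days, times_accessed_last_year, has_signature]
X = [[30, 40, 1], [2000, 0, 0], [10, 55, 1], [1500, 1, 0],
     [45, 30, 1], [1800, 2, 0], [5, 60, 1], [2200, 0, 0]]
y = [1, 0, 1, 0, 1, 0, 1, 0]   # 1 = retain as record, 0 = eligible for disposition

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)

print("holdout accuracy:", model.score(X_test, y_test))
print("new document:", model.predict([[20, 35, 1]]))  # likely a record to retain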


THE MERGE THE INTERSECTION OF BLOCKCHAIN & INFORMATION GOVERNANCE

BLOCKCHAIN THE FUTURE IS HERE

If you follow emerging technology trends at all, or are a fan of John Oliver (who recently did a piece on blockchain), then you will have heard the term blockchain bandied about. The most visible use of blockchain right now is undoubtedly cryptocurrencies. More generally, a "blockchain" is a series of linked blocks of information secured through cryptography: each "block" in the chain carries a cryptographic hash of the previous block, in addition to a timestamp and transaction information. Because the information is distributed across many nodes rather than held as one copy, a blockchain is resistant to intrusion attempts. While Bitcoin and other digital currencies make the headlines, intrepid entrepreneurs are leveraging the technology for other uses.
A useful metaphor for blockchain is a spreadsheet duplicated over a network of computers with regular updates. This shared information makes for one continuous database. Since it is not held in a single location, attempts to corrupt the information are thwarted by the blockchain architecture. Ian Khan, TEDx speaker and author, offered this about blockchain: "As revolutionary as it sounds, blockchain truly is a mechanism to bring everyone to the highest degree of accountability. No more missed transactions, human or machine errors, or even an exchange that was not done with the consent of the parties involved. Above anything else, the most critical area where Blockchain helps is to guarantee the validity of a transaction by recording it not only on a main register but a connected distributed system of registers, all of which are connected through a secure validation mechanism. As an (seemingly) incorruptible and transparent process, blockchain offers a vision of the future that has accountability and responsibility at its center."
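To make the block-linking idea concrete, here is a minimal sketch in Python; the record contents are illustrative, and real blockchains add consensus, networking, and much more:

import hashlib, json, time

def make_block(data, prev_hash):
    """Create a block whose hash covers its timestamp, data, and the previous hash."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

# Build a tiny chain: each block commits to the one before it.
genesis = make_block("genesis", "0" * 64)
b1 = make_block("record: deed #1042 filed", genesis["hash"])
b2 = make_block("record: deed #1042 amended", b1["hash"])

# Tampering with b1 breaks the link, because b2 no longer points at b1's true hash.
b1["data"] = "record: deed #1042 deleted"
recomputed = hashlib.sha256(json.dumps(
    {k: b1[k] for k in ("timestamp", "data", "prev_hash")}, sort_keys=True).encode()).hexdigest()
print("chain valid:", recomputed == b2["prev_hash"])  # False after tampering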

“The blockchain is an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value.” ­— Don & Alex Tapscott, in Blockchain Revolution (2016)

Unless you've been living under a rock, you know that blockchain is one of the hottest new technologies. Its infrastructure allows for the execution and completion of transactions without the need for identity verification. While the financial focus often dominates the news, the transparent, secure, and immutable nature of blockchain makes it an interesting technology for governing the movement of data. The fact that the information cannot be changed is important from an Information Governance standpoint. The encryption and hashing mechanism is the backbone of blockchain's immutability; since a key is required to access the data, stealing or altering data becomes more difficult.
From an Information Governance perspective, blockchain allows users to better request data, get data back, revoke data, and determine the rights to data. Since the core of Information Governance concerns rules about the storage and transfer of information, blockchain affords a new way to do it. Blockchain is a nascent technology, and there is no way to predict the full effect it will have on Information Governance, but its core principles (transparency, security, and immutability) mark the most important aspects of IG best practices. IG's core principles are at odds with storing your data in an unsecured location or failing to maintain control of your information; blockchain would seem to offer a solution. By cleaning up the environment and controlling the information, you put yourself on the right side of Information Governance. By zeroing in on blockchain's three principles, IG projects will bring value to an organization's data.




INFORMATION GOVERNANCE TRADE SHOWS

AHIMA Annual Conference September 22-26, 2018 (Miami Beach)

The American Health Information Management Association (AHIMA) will hold its 90th annual Convention and Exhibit in Miami Beach, Florida, at the new Miami Beach Convention Center. The theme of the conference is "Inspiring Leadership, encouraging change." For several generations, AHIMA has taken "a leadership role in the effective management of health data and medical records." As Health Information Management (HIM) leaders, AHIMA has established a prestigious reputation as a provider of training and education for healthcare administrators.
This year's conference begins with a weekend-long preview (Sept. 22-23) and includes sessions on topics such as healthcare privacy and security, interoperability of electronic information management systems, and medical device security management, among many others. The weekend preview also includes extensive workshops that provide conference attendees insights into their specific professional foci, and give them the opportunity to interact and network with other professionals. In the ongoing battle among hackers, software vendors, and cybersecurity experts, AHIMA recognizes that Information Governance and transparency bring added security to electronic health records (EHR) management.
A key part of the AHIMA experience is the festive atmosphere of the main exhibit hall. Immersed in networking opportunities, attendees will experience a main exhibit hall containing state-of-the-art technology


and software suites conceptualized and created by industry leaders, all designed to manage the complex healthcare records environment. Through AHIMA's Sponsorship Program, a vendor's reach moves beyond the social environment of the exhibit hall. (Source: AHIMA, "Convention General Information," www.ahima.org/convention/geninfo.)

What to expect at PCIG at AHIMA, September 22-24 (Miami Beach)
AHIMA will host the Privacy, Cybersecurity, and Information Governance Institute's Confidentiality, Privacy, and Security meeting at the Miami Convention Center in Miami, Florida. Attendees "will have the chance to choose from new focus areas and tracks, gain new insights, and learn about the hottest topics in the industry." The meeting's objectives are detailed and should give attendees an increased awareness of the privacy issues surrounding the use of electronic information. One of the main objectives will be promoting an understanding of how Information Governance (IG) supports security and privacy outcomes. With the proliferation of foreign agents attempting to influence American elections, electronic data users must stay up to date and remain proactive in a compliance landscape defined by continually changing electronic and digital environments. Broadly, the Confidentiality, Privacy, and Security meeting is an opportunity to network with others who share compliance concerns regarding HIPAA, GDPR, and state laws like California's Consumer Privacy Act.

Attending AHIMA? What to do in Miami Beach

This year's AHIMA Conference will be held in one of America's premier tourism states: Florida. Attendees can visit some of South Florida's unique attractions while embracing the inviting Florida sun. Dining along Espanola Way is unmatched in its cultural appeal and modernity.

East of Miami, in the oceanfront city of Miami Beach, IG pioneer Robert Smallwood will hold a book signing and reception for his new book, Information Governance for Healthcare Professionals, from 6-8PM on Monday, September 24, at The Plymouth, one of South Beach's famous boutique hotels in historic Collins Park. This area of Miami Beach includes the new Miami Beach Convention Center yet retains the mid-20th-century charm of art deco architecture and pastel paint schemes.

Of course, Miami Beach is world-renowned for its shopping and Latin American cultural atmosphere. With Cuba just to the south, much of old Miami Beach retained its immigrant roots, which you can see by visiting Little Havana across the bay. Drop in at the historic Ball & Chain for a mojito and some live music. The juxtaposition of cultural heritage and luxury shopping is quite unique to the area. When the sun goes down and the temperature cools, Miami Beach's nightclub scene heats up, as club goers experience Latin American-influenced music and dancing.

If the nightlife is not your thing, there are Historic Art Deco District tours that highlight the best of more than 800 architectural landmarks, many of which have survived since the twenties. The New World Symphony at New World Center holds regular concerts that combine traditional orchestral music with state-of-the-art audio/visual technology and should not be missed. The historic Fillmore at Jackie Gleason Theater, renovated in 2007, retains some of its charm from the fifties, giving those who attend a show or concert an authentic South Beach entertainment experience. Jungle Island should also not be missed: with a large collection of rare and exotic birds, it has endured since 1936 and offers visitors a rainforest experience. Finally, enjoy the tamer atmosphere of the Miami Beach Botanical Garden, with its Japanese landscaping.

Source: visitflorida.com/en-us/cities/miamibeach/50-things-to-do-miami-beach.html


InfoGovCon (Information Governance Conference)

Sept. 25-28 (Providence, RI)

InfoGovCon is holding its fifth annual conference. #InfoGov18 attendees will enjoy "high-quality keynotes, interesting perspectives, quality panel discussions, and insightful fast-paced sessions that have continued to grow our audience as we enter our fifth year," says Nick Inglis, co-founder of InfoGovCon. It will be held at the Providence Convention Center from September 25-28, 2018. You can register at infogovcon.com.

IAPP 2018 - Privacy. Security. Risk.

Oct. 16-19, 2018 (Austin, TX)

In mid-October, the International Association of Privacy Professionals (IAPP) will hold its conference in Austin, Texas. This unique event gathers the best-trained experts in the fields of cybersecurity and privacy, with significant attention paid to cross-education and top-rate networking. As the cybersecurity industry takes on a more meaningful role in the development of emergent technologies such as AI, this network of professionals becomes a key strategic asset. There is a balance between what technology should do and what it can do; help other experts find this balance this fall in Austin. While the main conference takes place Oct. 18-19, there will be two days of workshops and certification training on Oct. 16 and 17. New to this year's conference is a Privacy Engineering Section Forum on the intersection of people, technology, and privacy. This can't-miss forum should offer insights rarely discussed outside of networking circles.

ARMA Live! 2018

October 22-24 (Anaheim, CA)

ARMA International's ARMA Live! 2018 conference will be held at the Anaheim Convention Center; the main conference runs October 22-24, with pre-conference events beginning October 20. With a rich history and a long-standing membership, the 2018 event is slated not to disappoint. Conference highlights are below and may also be found on the ARMA website:

• More featured sessions handpicked by industry experts
• Additional "in the moment" interactions
• More opportunities for peer-to-peer learning with industry groups throughout the event
• Career Advancement - The ARMA Live! Career Resource Center links job seekers with employers and industry recruiters. The ARMA Live! Career Center, located in the conference expo, is designed to connect attendees with experts in the profession, potential employers, and resources for career advancement. If you plan to advance your career by attaining continuing education credits or by finding a new job, come to ARMA Live! Experts in the profession and prospective employers will be available to review your resume, give you sage advice, or perhaps even offer you a role in their company.
• Unrivaled Networking - According to ARMA's past attendees, one of the greatest benefits of attending ARMA Live! is meeting and mingling with peers from around the world and from different industries. This year ARMA Live! will once again put an emphasis on networking opportunities, starting with the Sunday Night Welcome Party and including industry-specific networking events and roundtables throughout the event.

Also, InfoGov World will be hosting a Magazine Launch Fiesta and Networking Reception on the Platinum Patio of the Anaheim Marriott on Monday, October 22, from 6PM-9PM. Request an invitation at events@infogovworld.com.

If You Are Going to ARMA: What to Do in Anaheim

ARMA Live! 2018 will be held in one of Southern California's most famous yet curiously anonymous cities: Anaheim, the home of Disneyland. Understandably, many conference attendees will not have time to visit Disneyland or its neighbor, Knott's Berry Farm, in Buena Park. Not to worry: there are still plenty of things to do and see that will enhance the ARMA experience. It should also be noted that the Anaheim Convention Center is the largest on the West Coast.

Anaheim residents love their sports. If the Angels are in town, catch a night game at Angel Stadium. The Anaheim Ducks call the Honda Center home, so you can escape the heat by watching an NHL game. The city is also home to the Anaheim GardenWalk, an outdoor shopping center with a movie theater and a bowling alley, among many franchise dining options such as Bubba Gump Shrimp Company and The Cheesecake Factory. Every Thursday the Center Street Promenade hosts Downtown Anaheim's traditional Farmers Market, an opportunity to taste local produce grown in the area.

If the outdoors is your thing, visit Ralph B. Clark Regional Park. In addition to the usual park amenities, it contains an extensive network of trails, many of them shaded from the hot Southern California sun. Bird-watchers can enjoy around 130 different species of birds, an unusually high number given the park's urban surroundings and modest size. Yorba Regional Park, at the mouth of Santa Ana Canyon, offers more outdoor experiences; its network of lakes and streams offers boating and fishing options. Still another outdoor option is the Oak Canyon Nature Center. For an even more authentic California outdoor experience, visit any number of local beaches: from Huntington and Newport Beach west of Anaheim to Laguna Beach to the south, the coast is famous for surfing, dolphin and whale watching, and chilling out to that SoCal vibe.



INFORMATION GOVERNANCE EVENTS

October 3-5 - Privacy + Security Forum 2018 (Washington, D.C.)
October 4 - secureCISO (Munich)
October 8 - (ISC)2 Secure Event Series: Security Congress 2018 (New Orleans)
October 15 - CSX ISACA 2018 North America (Las Vegas)
October 17 - SecureWorld (Cincinnati)
October 18 - ISSA CISO Executive Forum Series: Security, Legal, and Privacy (Atlanta)
October 18-19 - IAPP Privacy. Security. Risk. 2018 (Austin)
October 21-24 - Association of Corporate Counsel (ACC) Annual Meeting (Austin)
October 22 - InfoGov World Magazine Launch Reception & Fiesta (Anaheim)
October 22-24 - ARMA LIVE! 2018 (Anaheim)
October 23 - Big Data Forum (Boston)
October 24 - The Masters Conference: 12th Annual International Legal Innovations Conference (D.C.)
October 25 - Predictive Analytics Innovation Summit (Chicago)
October 27 - 29th ISF Annual World Congress (Las Vegas)
October 29 - CSX ISACA Cyber Security Nexus (London)
October 30 - SecureWorld (Denver)
October - Data Connectors Cybersecurity (Vancouver)
October - Digital Analytics Association Atlanta Symposium

November 5 - The Sedona Conference Working Group 12 Inaugural Meeting
Nov. 5, 7, 9 - 2018 IGP Certification Exam Prep Crash Course (Online M-W-F, 1PM-4PM ET), Institute for IG, register at IGTraining.com
November 7 - 2018 Small Firm Conference (Santa Monica)
November 8 - SecureWorld (Seattle)
November 12 - Data Connectors Cybersecurity (Orlando)
November 13 - (ISC)2 Secure Event Series: IFINSEC Financial Sector IT Security Conference (Istanbul)
November 26 - The Masters Conference: No More Sleeping Beauty (Orlando)
Nov. 26, 28, 30 - Privacy in IG Programs (Online M-W-F, 1PM-4PM ET), Institute for IG, register at IGTraining.com
November 27 - IAPP Europe Data Protection Congress 2018 (Brussels)
November - Data Connectors Cybersecurity (Nashville)
November - CISO Executive Summit Series (Dallas)
November - CISO Executive Summit Series (Miami)
December 3 - Data Governance Winter Conference (Delray Beach), Dataversity/Debtech
Dec. 3, 5, 7 - AIIM CIP Exam Prep Crash Course (Online M-W-F, 1PM-4PM ET), Institute for IG, register at IGTraining.com
December 4 - CISO Executive Summit Series (St. Louis)
December 5 - 4th Annual E-Discovery Day (national)
December 6 - CISO Executive Summit Series (Calgary)
December 10 - Data Connectors Cybersecurity (Washington, D.C.)
Dec. 10, 12, 14 - Cybersecurity Basics (Online M-W-F, 1PM-4PM ET), Institute for IG, register at IGTraining.com
December - LunaMetrics Google Analytics & Tag Manager Workshops (Washington, D.C.)

January 28-31 - LegalWeek (New York Hilton Midtown)
January 29-31 - LegalTech (New York Hilton Midtown)
February 11-15 - HIMSS19 Global Conference & Exhibition, Orange County Convention Center (Orlando, FL)
March 5 - ARMA NYC Annual Conference
March 26-28 - AIIM Annual Conference (San Diego)
April 9-11 - IG Basics & Advanced Classroom Training (San Diego), Institute for IG, register at IGTraining.com
May 20-22 - MER Conference (Chicago)



Information Governance Doesn't Have to Be Challenging - It's Time You Took Control of Your Information

Everteam provides the tools to help you identify and clean up information assets, comply with regulations, decommission legacy applications, and preserve information, enabling you to minimize risk, reduce costs, and use your information for competitive advantage.

www.everteam.com info@everteam.com


