Cyberasia 003 web

Page 1

Issue 003 | April 2018

ArtiďŹ cial Intelligence and Cybersecurity Deep Learning

AI

Cybersecurity

What Does it ALL Mean? Machine Learning

GDPR

Cognitive Computing


EUROPE, ROME // 26-27.9.2018 MIDWEST, INDIANAPOLIS // KICKOFF23.10.2018 TOKYO // 29-30.11.2018 TEL AVIV // 28-30.1.2019 LATIN AMERICA, PANAMA // FEBRUARY 2019 MIDWEST, INDIANAPOLIS // CONFERENCE & EXHIBITION JUNE 2019

2

Issue: 003; April 2018


卷首语

美国物理学家,未来学家 Michio Kaku 博士在他的著作”心灵的未来” 一书中阐述了自然界中两个 最大的秘密: 心灵和宇宙。 当今的高科技,使我们在宏观层面,能够看到银河系中数十亿光年远, 在微 观层面,操纵控制生命的基因。 凭借超级的计算能力,我们的智能正朝着机器学习, 深度学习,人工智能(AI)多领域的方向发展, 成为缩小我们自身生物奥秘与外部(机械)世界之间差距的桥梁。 根据经济学家的报告, 2017年企业在人工智能方面的并购费用为220亿美元,比2015年增加26倍。 麦肯锡全球研究院认为,将人工智能应用于市场营销和供应链,可能会创造的经济价值超过27亿美元。 谷歌宣布,在未来的20年,人工智能为人类做出的贡献将要远远超过火力或电力。 在互联网时代,我们很高兴能够体验技术创新所带来的多项突破, 解开许多人生的奥秘。使我们的 生产和生活更高效,创造出更多的利润。 但同时这也使我们变得更具有依赖性和脆弱。网络威胁,安全/隐私, 已经成为以数据驱动的经济 体中, 不可忽视的重要一部分。 为了更多的理解网络危胁的突发事变,我们再次和CSA主办方合作编缉了这本专刊, 重点介绍人工 智能和安全隐私及法规。 我们幸运有机会采访了Abode 的副总裁和首席安全官。布拉德.艾金帮助我们全面的了解 Adobe 的 安全防范系统,在此与我们的读者分享。(请見第10页和第 46页) 另外,我们感谢 Verizon 授权我们分享他们的第11版,2018最新的数据泄露研究报告。感谢中电展 览公司王颖总经理和张园助理的大力协 助和支持,把全套的报告翻译成中文版,在CSA的展会上与大 家分享。 最后,我们也汇集许多行业专家对当前网络安全的一些见解。请大家关注。 希望大家在这春天万物发芽的季节,赏心悦目的阅读。 孙玉萍,王颖,张园,敬献给大家! 2018年4月

Issue: 003; April 2018

3


LET'S GET

SOCIAL! Let CYBER SPACE ASIA and CyberAsia360 help you improve your social network: global trade events, monthly print magazine, weekly email marke ng, video promo on, and daily social media posts.

4

Issue: 003; April 2018


Publisher’s note

Dear Readers: In his book, The Future of the Mind, Dr. Michio Kaku, an American physicist and futurist writes that the two greatest mysteries in all of nature are these: the mind and the universe. With today’s vast technology, we have been able to accomplish things only dreamed of decades ago, from getting a glimpse of galaxies billions of light years away to manipulating the very genes that control life. With the power of super computing, our intelligence is now rapidly moving toward (and beyond) machine learning, deep learning, and artificial intelligence (AI), gradually closing the gap between the biological and mechanical worlds in which we live. According to The Economist, in 2017, companies spent around $22 Bn on artificial intelligence relating to mergers and acquisitions, 26 times more than in 2015. The McKinsey Global Institute reckons that applying AI to marketing, sales and supply chains could create an economic value of $2.7 Bn over the next 20 years. Furthermore, Google believes that AI will do more for humanity than fire or electricity. In the Internet era, we have excitedly uncovered many of life’s mysteries and experience interruptions brought by accelerated innovations and technological breakthroughs. While this connectivity has enabled us to be more efficient and produce more profits, it has also made us more dependable and vulnerable. Security (privacy) has become an integral part of how to keep our increasingly data-driven economy functioning more secure and away from harmful breaches. In an effort to comprehend the unexpected “boom moments” that will interrupt business operations, we once again have collaborated with CEIEC - the organizer of Cyber Space Asia (CSA), and have compiled this special edition of CyberAsia360, focusing on Artificial Intelligence and cybersecurity/privacy. I was fortunate to have interviewed Mr. Brad Arkin, VP & Chief Cybersecurity Officer at Adobe Systems. Through our conversation, he helped me understand and thus share with our readers, how Adobe, a globally-known marketing firm, both safeguard their own data systems and equip their customers with safety proven software (Pages 10 and 46). Additionally, we are thrilled that Verizon Enterprise has authorized us to present you with its 11th edition of 2018 Data Breach Investigations Report. Thanks to CSA, we are able to present a translated Chinese version. According to this report, there have been over 53,000 incidents and 2,216 confirmed data breaches throughout this past year alone. You can find a summary of the findings on pages 13 and 49 or a full report at the upcoming CSA event, on April 26-28, 2018. Lastly, we have also compiled an array of industry experts’ insightful remarks that we hope will help you or your organization navigate well in this growing and increasingly complicated digitalized world. Happy reading and enjoy the spring, a season when all of nature is sprouting in full swing. Ms. Sunny Sun Publisher

Issue: 003; April 2018

5


内容概括

Table of Contents

人工智能

行业动态

网络安全周话题 公安部第三研究所 所长助理 金波

8

网络安全策略 Verizon 数据泄露报告的观点总结

13

人工智能: 你所需要知道的一些基本信息

32

人工智能,还是认知计算?

33

人机交互:人工智能进化的钥匙

34

人工智能用于网络安全:好主意

35

人工智能在网络安全中没有可替代品

35

电子疲劳,人工智能在网络安全中最大的挑战

36

我们正在使用人工智能的10种方式

37

网络安全中的人工智能

38

人工智能给网络空间安全带来的非连续性挑战

40

阿里巴巴集团首席风险官 郑俊芳

安全是我们的生命线, 将时刻保持敬畏心 《网络安全法》正式实施,你必须知道的几件事

14 16

王怀宾 上海市信息安全行业协会副秘书长 上海易念信息科技有限公司CEO

基于计算机的安全 意识教育的发展 什么是《通用数据保护条例》GDPR?

19 23

国际网络安全 纯干货! 深信服2017年 安全威胁分析报告之网站安全篇

对话绿盟科技 高级副总裁叶晓虎 数据泄露和黑客攻击, 2017年网络安全多事之秋

6

25 百度安全有AI更安全

43

大数据和人工智能构建智能 风控未来 京东金融 沈晓春

44

29 31

Issue: 003; April 2018


Cybersecurity Policy

What is the GDPR?

CEO风采 CEO Corner

58

Verizon’s 2018 Data Breach Inves ga on Report

49

布拉德 艾金

Featured Ar cle

Adobe副总裁&首席安全官

Brad Arkin Nvidia’s Vision for the AI Future

50

The Industries Nvidia’s GM Believes Future Will be Most Impacted by AI

54

AI and the Future of Work

55

VP & Chief Security Officer at Adobe Systems

10, 46

Ar ficial Intelligence & Cyber Security AI: The Basics You Need to Know AI or Cogni ve Compu ng?

Malware/Ransomware

Human-Computer Interac on: Key to AI Evolu on

Data Breaches and Hacks Mark an Even ul 2017 in Cybersecurity

60

Cybercrime Losses Hit $600 Billion

61

Cybersecurity in a Connected World

Issue: 003; April 2018

AI In Cyber Security Cyber Fa gue, AI’s Biggest Cybersecurity Challenges

Cybersecurity Advisory

Cybersecurity and the Automo ve Industry

10 Ways You Already Use AI

62 66

68 69 70 71 72

AI for Cybersecurity: A Good Idea

74 75

AI Doesn’t Have Alterna ves in Cybersecurity

75

Global CEOs Worried about Cybersecurity, AI

76

Ar ficial Intelligence and the A ack/Defense Balance

78 7


行业动态

公安部第三研究所所长助理、首席科学家金波宣读 《构建网络空间安全秩序》倡议书

网络安全周话题 公安部第三研究所 所长助理 金波

GDPR的深远的影响是什么? 数据是数字经济时代的基本要素,在 大数据时代数据“公众化”和“匿名化” 利用的背景下,GDPR重申并部分重塑了

企业当今面临最大网络安全挑战是什么? 一方面,日新月异的信息技术带来的 外部新威胁日趋隐蔽和复杂,对企业造成 的危害也不断扩大;另一方面,企业内部 软硬件或服务潜在的脆弱性也不断暴露。 不断变化中的外部新威胁和不断暴露的内 部潜在脆弱性,使得企业所面临的威胁 和漏洞的量级都极大提升。在这样的形势 下,威胁信息的不对称性与企业安全支出 的有限性这一固有矛盾持续深化,企业网 络安全风险形势严峻,疲于应对。

8

隐私和数据保护的个人价值(而非商业价 值),是截至目前,全球范围内对个人数 据保护水平最高的规范。GDPR的出台使 得全球诸多组织,即使没有在欧盟设立机 构,但只要涉及处理欧盟境内数据主体的 个人数据,均面临着GDPR带来的合规风 险;GDPR为数据主体增设了一系列的新 权利,为作为数据控制者或处理者的组织 增设了一系列的新义务,并设置了严厉的 惩罚机制。这意味着有处理欧盟境内数据 主体的个人数据相关业务的组织需要了解

Issue: 003; April 2018


和落实GDPR的规定,提升自身的个人数据

提高。另外,随着网络的快速发展和隐私

保护水平。总的来说,GDPR的监管合规

保护意识的觉醒,民众隐私保护需求也逐

需求将成为推动欧盟企业乃至全球企业安

步提升,可以说,网络化和全球化正在消

全支出的主要因素。

减部分差异。

亚洲(中国)对于个人隐私权观点有何不

当今最为毁坏性的攻击手段是什么? 不考虑国防、军事,也不考虑心理建

同?有多少是介于文化上的差异? 中、美、欧等国家整体信息技术发展

设的话,最为毁坏性的攻击手段一是对关

阶段、行业和企业商业模式、区域国情和

键信息基础设施产生破坏(从较早期的震

历史(负担)等都不相同,隐私所包含的

网病毒开始),二是综合多种损害后果的

内容不完全等同,对各自所关注隐私的用

行为(如勒索软件)。

力点亦有不同。相较于欧美,中国对于个 如何应对“爆炸性”的时刻?

人隐私的保护起步较晚,个人隐私保护意

秉持如履薄冰的危机感,并通过反

识和保护程度不高。近几年,通过立法和

复、迭代更新的推演、模拟、演练感知和

出台相关规范(例如《民法总则》《网络

应对,人员和网络、系统通过演练可以处

安全法》等),中国的隐私保护水平不断

于动态和适当紧张的状态,有利于对抗重

CEO借助《网络安全法》确立的契机,主

大事件和应急响应。

动参与企业网络安全的战略决策,充分考 虑跨部门的网络安全影响,将安全作为一

对待攻击的最佳防预方法是什么? 基于外部的攻击不可避免,因此威胁

项持续的投资和未来的收益,通过培训、

的主动识别、动态感知、信息共享、应急

演练和制度建设,建立符合企业自身要求

演练等自不待言。至于如何实现识别与感

的有效的网络安全管理机制。

知,具有安全意识经过培训的员工则是企 业最大的安全资产和防御保障,在业务运

你对2018的预見?

行和安全管理的节点上能实时产生警觉。

2018年,网络安全事件的潜在和爆发 仍将是社会和信息化面对的主要风险,短

你对CEO的建议? 对于企业而言,建立起有效的、以 业务为导向、以风险管理为中心的网络安 全管理机制,需要解决一系列的组织结构 和管理方面的问题。业务流程自动化所带 的数字安全风险影响往往是跨部门的,并 日益扩展至企业的用户乃至供应链。建议

Issue: 003; April 2018

期之内不会有大的改观,传统保护领域和 新兴技术行业均面临日趋复杂的网络安全 挑战;全球范围内,网络空间面临的安全 与发展、数据安全与数据分享、监管与被 监管、言论自由与政治安全等基本矛盾将 进一步加剧,国际社会的竞争与博弈成为 新常态。

9


CEO风采

我与Adobe副总裁&首席安全官

布拉德 艾金 的一席对话

by SUNNY SUN 编者按:三月份,我有机会参加了Adobe 在拉斯维加斯举 办的2018 年数字营销市场峰会。在会上,理查德・布兰 森,J.J・华特,和黄仁勋受邀做了主题演讲。他们的成就启 发了也感动了数百万人。峰会的主题是“用户体验”,如何 创造和传达一种个性化的、引人入胜的体验。Adobe创造的 技术让用户在数字经济的空间中获得更大的功效, 由此营造 一种对客户友好的商业环境,从而建立起更好的社群。几乎 每个人都熟悉或者使用过Adobe的一两种产品,比如Adobe Acrobat,Adobe Photoshop,Adobe Flash Player等等。不仅 如此,Adobe 在创造面向未来的市场解决方案,比如由AI操 作的Adobe Sensei,功能十分强大,令人叹服。我参会时是戴 了一顶网络安全的帽子,有幸与Adobe副总裁&首席安全官, 布拉德・艾金交谈。布拉德・艾金先生分享了他在Adobe网络 安全方面,不断进化发展的一些见解,以及依照规程保障系统

有已知攻击,或者未来将会发展出来的攻击的代码。我们能

和产品健康稳定性的实践,还有如何在系统和产品中加入内置

做的是试着理解这些攻击可能发生的场景,然后让我们的代

的控制框架,来适应当今越来越互联的世界。 下文是我与布拉德・艾金对话的简单文字整理。

码可以更好的防护这些攻击。 我们和公司里很多其他团队合作。这个过程我们叫做 安全产品生命周期(Security Product Life Cycle,SPLC),这 基本上就是我们建立代码的步骤性活动和工具。我们在做代

Adobe为了保证把安全防范设进系统的基础层面,都做了

码之前,都会去思考我们将要建造一个什么样的东西,然后

什么?

当我们真正为安全起见而创建代码的时候,我们也能尽力去

我十年前加入Adobe的时候,Adobe已经有软件安全工

10

了解其中的弱点或者容易受攻击的地方。

程团队(Adobe Security Software Engineering,缩写ASSE)负责

公司从卖装在光盘里的软件发展到了卖装在服务器上

软件安全的,主要集中在桌面电脑客户所使用的网络产品,

的软件,我们也为这方面的安全负责,我们把产品生命周期

比如Photoshop,Acrobat和Flash Player。我们的工作就是查找

的范围阔大了,我们不再只去想写在桌面电脑上的代码,我

可能出问题的方面,针对它们写出防御性的代码,让坏人们

们也要制作在各种状况下都能使用的代码和基础设施。在以

达到目的的成本变高。

前,网络服务器是在物理硬件服务器上运行的。现在我们做

但是无论我们怎么建立防护,总有坏人耍聪明,总之

的一切都在虚拟环境里,我们的编码就是管理这种环境本身

我们并不是想要人们觉得:我们能建立一种可以防止当前所

的一种典型的工具。我们给桌面产品写安全代码的类似手段

Issue: 003; April 2018


现在也适用在了基础设施层面上,我们会设想可能的失败模

制。Adobe的法务团队研究法律,然后把解释法律语言的意

式,什么地方可能出问题,过去发生过什么,能从里面接受

思,

什么教训,我们能从中学到什么以后做得更好。这就是安全

律翻译,然后我们会把它再翻译成一套控制规则和能力,

产品生命周期。这是我们最好的主意,我们从别人那里借鉴

再由我们的产品来支持。有些情况下,Adobe提供的是一个

来东西,还有我们自己发展出来的东西,帮助我们制造出了

数据控制服务,所以我们就对GDPR负全责;在另一些情况

我们能达到最安全的产品。

下,Adobe提供的是处理,也就是说控制要由客户执行。

应用在Adobe的环境里。法规含义的表达需要大量的法

安全是好产品的一部分。为了让人们相信我们的产

我们需要确保能给客户满足DDPR要求的能力。我们在这方

品是安全的,我们有一种叫做Adobe通用控制框架(Adobe

面研究了很长时间,目标就是,到五月份,GDPR生效的时

Common Control Framework, CCF)的东西。我们称呼的

候,我们能轻松的完成我们的义务,我们的客户也能达成遵

时候就是用的缩写CCF。我们当然在安全方面兼容了SOC 2

守GDPR的义务。

报告,ISO27001,FedRAMP,FERPA,GLBA,HIPPA[注]

在读GDPR的时候,每个人会看到不同的东西,理解不

等等,我们还看了不同具体行业的安全标准。我们把它们

同意思。行业中每个人都在讨论,尝试理解和解释某些特定

简化到只剩核心成分,然后就管它叫Adobe通用控制框架

的部分,试着构想出市场服务行业将会是什么情形。现在还

(CCF),然后在我们提供的所有服务中都执行了它。这是

有很多的讨论,我们也非常好奇想看到管理层会怎么解读我

第一步。第二步我们会在全公司的所有服务中都执行,第三

们正在看的法律。这就是我们的角度。

步我们会在公司内部的办公室执行,也就是不会有客户行为 掺进来。 通过坚持执行这些通用控制,我们可以让审计方看到

我们的团队也跟Adobe主要的隐私团队紧密合作,为的 是遵守执行管理规则。没有安全就没有隐私,所以我们一起 合作来保证我们能做好遵守规则的准备。

Adobe控制1对SOC2控制6,7,9的指示图。在这个过程里我 们可以让第三方确认我们一直遵守控制。至于SOC 2安全和

如何更好的平衡“方便”和“隐私”?Adobe工作环境中如

ISO27001,这两种东西可以让我们向用户们说明他们也不会

何实践这一点?

违反规则,比如FERPA,GLBA等等。我们遵守GLBA和我们

我想举例说一下我们对自己的员工是怎么做的。这个

没有关系,但是使用我们产品的客户是需要遵守它的。我们

大会的主题是关于“体验”,怎么让体验更好。我们在思考

的产品符合FERPA,也符合HIPPA,还有其他不断出现的特

Adobe员工与我们IT支持下的系统互动时,体验如何,比如

种规则。我们不停地研究这些规则,也不断的发现,我们的

他们怎样打卡,在食堂怎么看菜单。在以前,你可能有一批

产品已经符合这些规则了。因为我们把规则都简化到了只剩

不同的账号,一个登陆邮箱的账号,一个登录系统的账号,

核心内容,所以这些规则的控制内容都已经是CCF里的一部

等等,让人非常糊涂。我们研究了在这方面可以做的一些

分了。所以Adobe服务的潜在客户可以从我们这里得到一份

事,然后我们把所有登陆都整合成单独一个体验,每一个员

SOC

工都只有一个账号,一个密码,他们早上时要认证一下,在

2报告,不仅符合Adobe的承诺,还由独立审计方毕马

威联署,它们会进行测试,并且正式我们符合以上说的控制

之后就不需要再登陆了,因为系统已经能够识别了。

规则。有了我们的安全产品生命周期(我们叫它我们的私家

但是他们还是需要每天输入一次密码,然后再用另一

秘方),和通用控制框架,还有通用控制框架施加给我们的

个因素验证一下,比如一个验证码或者弹出信息会发到你手

ISO27001行业标准,我们感觉我们可以把事情做好。

机上,然后你必须确认。我们在研究过程中尝试了几种连这

我的工作从来没有这么愉快自信稳定过。但我担心的

个都可以跳过的方法。为了简化过程,我们推出了零信任企

是,下一步将会发生什么,还有我们应该如何做准备。上面

业网(Zero Trust Enterprise Network)。大致是,在使用设备

我们提到的东西给了我们一个好的基础,每个我们研究过的

(手机、平板电脑等)时,我们通过允许他登入公司无线网

规则都告诉我们,什么是好的,我们能学到什么,我们可以

就实现了管理。我们可以往你的手机上推送一个授权,你有

加入什么新的防护,有什么新技术,我们怎么准备,怎么保

了认证码就可以打开手机,只要我们知道这个设备在你手上

护我们自己。

就行了。之后因为你已经知道了认证码,在第一次验证手机 之后你就不用再做一次了。我们的目标是,一旦你登入,验

Adobe是如何帮助它的客户做好符合GDPR规则的准备? 就像通用控制框架一样,我们会执行一套类似的机

Issue: 003; April 2018

证,那九十天这么长的时间里就一直保持登入状态,而不是 每天早上都要登入。这样我们就能通过认证设备来得到更好

11


CEO风采

的安全保障,这比靠人类记密码的方式好得多。在这个例子

平均寿命有的时候可能是一两个小时;它们突然进了系统,

里,我们能得到更好的的安全,更平滑的用户体验,还能让

做一点工作,然后就走了。想完全追踪它们,每一个服务器

我们消灭代理服务器。因为我们了解了设备,我们有充分的

都追踪,每一分钟都追踪,这在今天的挑战性跟几年前是完

信心知道设备处在正常状态里,直接跟我们的资源相连接。

全不同的。我觉得并不是不可能,但是你需要非常灵活。如

这是一个具体的例子。我们正在试着了解终端用户对这个技

果你今天用10年或者15年前的方式追踪,你一定会失败。你

术会有什么感觉,还有我们怎么去除他们目的和前期安全手

需要想,在过去我们能完全追踪每一个服务器,今天这要难

段之间的摩擦和屏障。这是一个我们设想用户体验情景的例

得多。我们希望达到的目的就是能保持一致性,还有维护电

子。

脑的“干净”。 我还联想我自己在五金店里的体验,五金店的店员认

如果有一个图像从一个受过认证的来源发过来,那它

识我,这种体验很个人化,店员知道我需要什么,对我解释

在发出的时候是安全的。如果它只存在了八小时,这八小时

细节的时候很耐心。这些体验对我非常有利。我们生活在一

里也没有太大的几率会发生什么坏事。我们如果能通过小心

个到处发生大规模行为的世界里,获取更好的个人化体验的

照顾这个图像的安全,来确定它处在应有的状态里,就会觉

基础,是取决于根据你的喜好和选择你想看到山还是看到

得有有信心的多。但现在这些东西变化的范围太大,速度太

海。

快了。 我们的客户也要面对他们的客户,所以在为我们的客

我们今天的经验是,当你使用机器学习技术的时候,

户设计产品的方面,我们必须为他们提供工具,保证他们的

发现反常的点是很有用的。多数时候这些地方只是奇怪,但

体验是很好的,过程是平顺的。同时也需要确认客户对我们

是没有危害,不是恶性的。机器学习只能帮助找出反常,但

的行为有授权,不会让他们突然吓一跳。这些都是在设计用

是不是超出规则之外的东西。

户体验时很重要的的东西。我们给企业客户提供工具,有用 的工具,让他们可以为客户有效服务。

大数据已经成了非常重要的一种公司资产,所以任何一点 这方面的泄露都有可能威胁到信任,而且由于资料互联, 还有可能恶化成大规模的泄露。Adobe和其他的安全公司 合作吗? 我们和Adobe团队一起做的工作很多,我们也在合适的 时候和其他商家合作。我们还没有向第三方外包过安全类的 业务。我们会采购防火墙,使用不同公司的技术和产品,来 创造一层一层的安全防护。但是我们自己才是把各种插件放 到一起产生作用的建筑师。Adobe的雇员才是在安全工作的 中心工作的人,在有需要调查的警报或者事故的时候提供监 管和反应。因为我们规模很大,所以我们几乎从所有地方都 会进行采购,但是我们没有某一个单独的合作伙伴。安全是

[注] SOC 2报告是一种标准化的审计报告,针对的审计内容是 服务提供者对客户隐私安全的管理水平。 ISO27001类似于“ISO9001”,但不是质量标准,而是信息 安全管理标准。

Adobe工作的一个内化的组成部分。 FedRAMP美国联邦对“云”产品类别的授权和安全管理标

安全业面临的最大挑战是什么?

准。

有很多。现在在几乎所有安全从业人员里都有一种冲 动,就是要对所有电脑和所有资料都保有很高的认识性和可 见度。但是机器的数量增长太快,寿命又变得非常短。在以 前你买了机器,从包装里拿出来,五年之后它虽然有折旧,

FERPA即“家庭教育权利及隐私法案”,规范家庭教育类信 息的隐私安全管理。 GLBA是针对金融机构对客户金融信息隐私安全管理的法案。

但是你还是能根据你的记录追踪到它。在今天的环境里,有 几万或者几十万虚拟服务器在虚拟环境里运行,而且它们的

12

HIPPA是针对医院患者医疗类信息的隐私安全管理法案。

Issue: 003; April 2018


Verizon 数据泄露报 告的观点总结

安全事件的威胁行为种类TOP20 Top 20 action varieties in incidents

DoS (hacking) DoS(黑客攻击) 21,409

Loss (error) 丢失(人为错误) 3,740

编辑注明:

Phishing (social) 网络钓鱼(社交) 1,192

感谢 Verizon 公司给予我们这个机会与大家分享这份最新的2018数据泄露报告。我选出几

Misdelivery (error) 邮寄错误(人为错误) 973

个有总结性的数据分析分享给大家。希望能够带给你这一年度典型的安全案例,以便参考。

Ransomware (malware) 勒索软件(恶意软件)

这份整体的报告也会在CSA展会上发行,有兴趣的读者请关注。

C2 (malware) C2(恶意软件)

787 631

Use of stolen credentials (hacking) 盗用用户凭证(黑客攻击) 424

RAM scraper (malware) RAM擦除器(恶意软件)

Who’s behind the breaches? 数据泄露的幕后黑手?

318

What tactics are utilized? 使用什么策略?

73%

Privilege abuse (misuse) 特权滥用(误用) 233

48%

perpetrated by outsiders 由外部人员渗透

Use of backdoor or C2 (hacking) 使用后门或C2(黑客攻击)

of breaches featured hacking 涉及黑客攻击

28%

221

Backdoor (malware) 后门(恶意软件)

30%

207

involved internal actors 涉及内部人员

included malware 包含恶意软件

Theft (physical) 盗窃(实体设备)

2%

17%

Pretexting (social) 诈骗(社交)

190

involved partners 涉及合作伙伴

of breaches had errors as causal events 涉及偶发事件等人为错误

2%

17%

170

Skimmer (physical) 复制器(实体设备) 139

featured multiple parties 包含多方人员

were social attacks 为社交攻击

Data mishandling (misuse) 数据误操作(误用)

50%

12%

Spyware/Keylogger (malware) 间谍软件/键盘记录器(恶意软件)

of breaches were carried out by organized 有组织的犯罪集团

involved privilege misuse 涉及特权滥用

12%

11%

criminal groups

122 121

Brute force (hacking) 暴力破解(黑客攻击) 109

受害者是谁? Who are the victims?

Capture app data (malware) 获取APP数据(恶意软件) 102

of breaches involved physical actions 数据泄露涉及实体行为

安全事件 Incidents

of breaches involved actors identified as nation-state or 确定为国家附属或民族附属的威胁发起人 state-affiliated

有什么其他共同点? What are other commonalities?

24%

49%

of breaches affected healthcare organizations 数据泄露影响医疗组织

1

of breaches were financially motivated 的数据泄露旨在获取经济利益

14%

13%

58%

20%

40%

60%

80%

100%

威胁行为种类TOP20(安全事件)(n=30362) Figure 4. Top 20 threat action varieties (incidents) (n=30,362)

数据泄露的威胁行为种类TOP20 Top 20 action varieties in breaches

of breaches were motivated by the gain of strategic 的数据泄露旨在获取战略优势(间谍) advantage (espionage)

Use of stolen credentials (hacking) 盗用用户凭证(黑客攻击) 399

RAM scraper (malware) RAM擦除器(恶意软件)

68%

of victims are categorized as small businesses 受害者为小型企业

76

76%

of breaches involved accommodation and food services 数据泄露涉及酒店及餐饮服务

were breaches of public sector entities 属于公共行业实体组织

80

Publishing error (error) 公开错误(人为错误)

0%

1 of non-POS malware was installed via malicious 的非POS类型恶意软件通过恶意邮件 安装 email

15%

Misconfiguration (error) 误配置(人为错误)

312

of breaches took months or longer to discover 的数据泄露需要几个月或更长的时间才能被发现

Phishing (social) 网络钓鱼(社交) 236

Privilege abuse (misuse) 特权滥用(误用)

在这里我们列了本年度数据集中“最热门”的数据情况。读者可以利用下面这张表来快速了解一下情况(发起人、行为、资产、属性)。

201

Misdelivery (error) 邮寄错误(人为错误) 187

Use of backdoor or C2 (hacking) 使用后门或C2(黑客攻击)

今年发生了53,000多起事变, 2,216起确认违规侵犯。

148

Theft (physical) 盗窃(实体设备) 123

C2 (malware) C2(恶意软件) 117

Backdoor (malware) 后门(恶意软件) 115

Pretexting (social) 诈骗(社交)

DDoS放大攻击的相对普遍性 Realtive prevalence of amplified DDoS attacks

114

Skimmer (physical) 复制器(实体设备)

100%

109

Brute force (hacking) 暴力破解(黑客攻击)

放大

92

Amplified

Spyware/keylogger (malware) 间谍软件/键盘记录器(恶意软件) 74

75%

Misconfiguration (error) 误配置(人为错误) Publishing error (error) 公开错误(人为错误) 59

Data mishandling (misuse) 数据误操作(误用) 55

50%

Capture app data (malware) 获取APP数据(恶意软件) 54

Export data (malware) 导出数据(恶意软件)

未放大 Not amplified

25%

Issue: 003; April 2018

2014

51

SQLi (hacking) SQLi(黑客攻击) 45

Password dumper (malware) 密码破解器(恶意软件)

0%

图25 放大DDoS攻击(2013-2017)(n=3272) 2013

Breaches 数据泄露事件

PercentDDoS攻击比例 of DDoS attacks

66

2015

2016

2017

45

20%

40%

60%

80%

100%

威胁行为种类TOP20(已证实的数据泄露)(n=1,799)

13


网络安全策略

安全是我们的生命线, 将时刻保持敬畏心 阿里巴巴集团首席风险官 郑俊芳

果互联网是可视化的,网购、社交、送餐、出行等诸

多互联网服务有不同的色彩线,那么,我们能看到,

捍卫安全生命线:10年进化数千人护航

五彩斑斓的网络早已与生活的方方面面不可分割。在

过去的一年里,阿里巴巴集团共受到2015次DDOS攻击,最

互联网给生活带来便捷的同时,就像是每条道路都需设置安全线

大攻击流量777Gbps。这个数字意味着什么?打个比方,整个杭

一样,互联网自身也需建立强大的防护能力,以保障服务和所有

州城的网民同时在线所使用的带宽,都远不及此。实现这样的有

用户的安全。

效对抗,阿里安全走了10多年,从被动应对,到主动防御,从人

在过去的19年里,阿里巴巴构建起包括新零售、云计算、大 文娱、智慧物流等在内的庞大而复杂的生态体系,为数以亿计的 用户提供便捷服务,同时,对于安全的探索从未停止。 每天,在阿里生态体系里,数以万计的黑客通过4千万次的 恶意访问以寻找安全漏洞,网络黑灰产通过爬虫发起17亿次的恶 意访问试图窃取数据,仅在淘宝平台,每天会有近400万次恶意尝 试登录。 这些攻击,每天都在真实发生着。面对如此巨量、复杂的攻 击,阿里防住了。

14

肉,到技术、算法。 2005年前后,“阿里安全”还是集团技术团队下设的一支几 个人组成的小队,彼时,抵御DDOS攻击的手段还是靠人肉发现 和攻防。曾经有过一个阶段,A商家看到B商家销量大好,会买通 黑客对B发动DDOS攻击。 DDOS攻击的本质是消耗平台的带宽和服务器资源。阿里 技术和安全团队发现服务器运转迟滞,不得不人肉排查。那是个 互联网行业普遍未建立安全能力的时代,人才紧缺,安全技术攻 防能力不足。当时的解决方案是把受到攻击的B店铺采取屏蔽处

Issue: 003; April 2018


理,让攻击者失去目标,以恢复服务器正常运转。

御巨量攻击。随后阿里介入,斩断攻击。

虽然危机解除,但教训是惨痛的。没有任何一个商家的利

到了2017年,一家合作伙伴也被黑灰产团伙盯上,而这时,

益该被牺牲。于是,2009年,阿里正式设立安全部,如今,阿里

阿里的安全能力早已覆盖生态伙伴,攻击消息传来时,阿里安全

生态体系的网络安全有数千人的专业团队在守护。

的技术专家说,不用怕,让他来吧。后来,对方得知阿里在助力

在今天,对抗DDOS攻击的任务早已交给了“无人值守”的

防护,索性放弃了攻击。

自动化防控产品。阿里也通过阿里的云计算平台将我们的DDOS 防御能力提供给了数十万的云上客户,时刻包围着这些云上客户 的网站与服务的稳定安全。攻防的根本目的在于让攻击方成本上 升而放弃攻击,防控能力越高,黑客付出的成本就越高,举个例 子,过去,黑客发动一次攻击要花费1元钱,如今,黑客打开一 个保险箱的成本就要100元,而保险箱里可能只有50元,这样“得 不偿失”的事,很多黑客放弃了。

力推安全联合:开放能力赋能生态 就像是现实世界里没有绝对完美的面孔一样,网络世界的 漏洞永远存在。对于互联网公司而言,建立提早发现并迅速止血 的能力,遏断黑客利用漏洞获取用户数据的企图,是一条值得努 力追求的路。 2012年起,阿里安全的“听风者”在着力建立另一套防御体 系:联合阿里体系外的白帽黑客,建立ASRC(阿里安全应急响 应中心)。简单说来,这是一个平台,能够让外部的白帽黑客在 发现漏洞后第一时间通知阿里。这是国内最早的互联网应急响应

保持敬畏之心:不辍探索持续进化 从害怕发现漏洞,到主动建立ASRC来找漏洞,解决安全问 题于萌芽状态的探索不止于此,2016年,阿里筹建“红蓝攻防体 系”以主动挖掘安全风险点。这在当时的阿里内部存在争议。 攻击来得毫无征兆,一天快到中午的时候,阿里安全接到 信息,暗网在流传阿里的相关数据,数据做了加密处理,虽然暂 时不会对业务产生影响,但发布者同时发布的勒索信息说得明 白,预定时间内不付钱,将公布这些数据。 负责处置的数据安全团队到现在还清晰记得当时的场景, 安全大于天,在几分钟内,参与处置的安全技术人员挤满了项目 室,一个多小时,数据“泄露”的源头被排查出来,这时,他们 才知道自己是被蓝军“搞了”。 在专门设立蓝军之前,阿里安全已经反复对系统做过加固 检测,但不知道效果如何,这样“有剧本”的“攻击”给整个安 全团队提了醒,然后就是更加细致入微的排查。

平台之一,从最早只是设置一个通报漏洞邮箱,发展成今天国内

蓝军还在不断“搞事儿”。一开始,这样的攻防一周都要

上千名、国外数百名白帽黑客参与的真正意义的平台,阿里集合

有一两次,蓝军频频得手,后来,红军防线越来越紧,蓝军又朝

各方之力,将建立的能力服务于整个生态,用安全生态的能力赋

着更高层次发动“攻击”,但难度越来越高,现在,蓝军筹划一

能生态安全。

次攻击的时间可能是一个月甚至更长。

2013年,阿里发布500万元赏金计划,举办互联网安全沙

“红蓝对抗”,以及阿里建立的图灵、猎户座、双子座、

龙,2017年双十一购物狂欢节之前,阿里巴巴ASRC联合业内12

潘多拉、米诺斯、归零、钱盾和蚂蚁金服光年等八大安全实验

家SRC,进行了一次面向电商生态的安全众测,发现了大量有价

室,都是在以技术构筑安全防护墙。

值的安全漏洞并推动生态伙伴快速解决,有效拉升了整个生态的

智能数据模型也在无时无刻发挥安全防护的作用。在阿里

安全水位。2018年,阿里发起的SRC运营工作讨论会,腾讯、百

平台上,全网商品已超过10亿量级,如何对这些商品信息进行识

度、360、京东、滴滴等互联网公司都在参与共创共享。

别?如果用普通的A4纸把这些商品信息打印成册,假设一页一个

去年12月30日,阿里正式加入First(事件应急响应与安全小 组)国际组织,与85个国家的414个应急响应相关组织建立联系, 谷哥、微软、亚马逊等国际互联网公司都在。

商品,现在,阿里10分钟内分析完成的商品手册叠起来将有44000 米高,相当于近5个珠穆朗玛峰高度。 在阿里平台上下单购物,就在你按下按钮的一瞬,阿里安

所做的这一切,都在于阿里始终认为,安全领域不需竞

全大数据风控系统已作了近百项安全检测。安全就是这样,10余

争,而必须联合。阿里安全的能力开放心态正在显露效果。2014

年来,数千阿里安全人一砖一瓦搭建起来安全的水位线。但我们

年,一名合作伙伴向阿里紧急求助。一个互联网黑灰产团伙,开

深知,互联网时代,安全始终是我们的生命线,这世上没有绝对

始是故意跟这名合作伙伴套近乎,套出了合作伙伴的网络出口IP

的安全。因此,每一天每一刻,阿里安全人都在保持敬畏之心,

地址,之后的故事像电影那样,黑灰产团伙画风大变,发来短信

让自己更努力、让技术更进步、让模型更智能,只有不断探索世

称,打钱破财消灾,否则会用DDOS实施攻击,其嚣张程度令人

界级的风险控制体系,我们才能保护这个全球最大的电子商务平

气急,直接发来了攻击时间。合作伙伴想到防御,但根本无力抵

台,提供更可靠的服务,保护更多的消费者。

Issue: 003; April 2018

15


网络安全策略

《网络安全法》正式实施, 你必须知道的几件事 目前《网络安全法》已在2017年6月1日起正式施行,对企业 加强网络安全建设提出了要求和约束。网络安全法从首次审议到

全、国计民生、公共利益的基础信息网络和重要信息系统界定为 关键信息基础设施。

现在施行,一直备受各界关注。

明确企业角色 本次《网络安全法》根据企业用户网络的重要性分为网络运 营者和关键基础设施运营者。 网络运营者是指网络的所有者、管理者和网络服务提供者,

明确监管部门 管理归属网信部门,企业需积极配合。第八条明确规定了网 信部门是负责统筹和监督网络安全工作的机构。电信主管部门、 公安部门和其他机关部门在各自职责范围内负责网络安全保护和 监督管理工作。

这一界定范围十分广泛,几乎将涉及网络产品服务的主体都纳入

第四十九条明确规定网络运营者必须对网信部门和有关部门

其中,只要“在中华人民共和国境内建设、运营、维护和使用网

依法实施的监督检查予以配合。如果不配合将按六十九条处以个

络,以及网络安全的监督管理”,都适用《网络安全法》。当今

人和单位罚款。如果不作为、抵制和违反规定后果会更严重。因

互联网时代,几乎每个企业都拥有对外提供服务的网站,也就意

此,主管部门的检查必须积极配合。

味着几乎每个企业都应关注并遵守网络安全法中网络运营者的相 关法律规定。

明确责任人

《网络安全法》首次明确了关键信息基础设施的主要涉及行

责任人需明确。第二十一、三十四条要求企业要明确网络责

业领域和判定标准,将那些公共通信和信息服务、能源、交通、

任人。出现安全事故,直接负责人需要承担责任并且接受法律处

水利、金融、公共服务、电子政务等重要行业和领域,以及其他

罚。建议企业的IT负责人重点关注《网络安全法》法律法规,并

一旦遭到破坏、丧失功能或者数据泄露,可能严重危害国家安

且推动企业网络安全建设。

16

Issue: 003; April 2018


网络安全法有哪些要求

对该负责人和关键岗位的人员进行安全背景审查;定期对从业人 员进行网络安全教育、技术培训和技能考核;对重要系统和数据

基本安全义务

库进行容灾备份;个人信息和重要数据境内备份;制定网络安全

《网络安全法》明确网络运营者的安全义务。为符合网络安 事件应急预案,并定期组织演练等。 全法规,要求网络运营者:

《网络安全法》对于关键信息基础设施的运营者不履行网

1、保障网络安全、稳定运行、维护网络数据的完整性、保 络安全保护义务的,规定由有关主管部门责令改正,给予警告; 密性、可用性;

拒不改正或者导致危害网络安全等后果的,处十万元以上一百万

2、遵守法律、行政法规,履行网络安全保护义务;

元以下罚款;对直接负责的主管人员处一万元以上十万元以下罚

3、接受政府和社会的监督,承担社会责任;

款。此外,对于境外的个人或者组织从事攻击、侵入、干扰、破

其中,安全保护义务包括但不限于不得泄露、篡改、毁损 坏等危害中华人民共和国的关键信息基础设施的活动,造成严重 其收集的个人信息;未经被收集者同意,不得向他人提供个人信 后果的,规定了法律责任;国务院公安部门和有关部门并可以决 息;应当采取技术措施和其他必要措施,确保其收集的个人信息 定对该个人或者组织采取冻结财产或者其他必要的制裁措施。 安全,防止信息泄露、毁损、丢失;制定网络安全事件应急预

实行等级保护制度

案,及时处置系统漏洞、计算机病毒、网络攻击、网络侵入等安

全风险;按照网络安全等级保护制度的要求,履行21条规定的安 《网络安全法》明确国家实行网络安全等级保护制度 (图1.)。 全保护义务,包括但不限于制定安全制度和操作规则,确定网络 安全负责人;采取技术措施等。

由于《网络安全法》明确提出了实现网络安全等级保护制 度,也就意味着单位不做等级保护工作就是违法。建议企业都按

《网络安全法》明确规定,网络运营者不履行本法规定的网 照等级保护相关制度做好测评工作。 络安全保护义务的,由有关主管部门责令改正,给予警告;拒不

个人信息保护

改正或者导致危害网络安全等后果的,处一万元以上十万元以下 罚款,对直接负责的主管人员处五千元以上五万元以下罚款。 《网络安全法》明确关键基础设施的安全义务。 由于关键信息基础设施影响着国民生活、国民经济甚至国家 安全。因此网络安全法对关键信息基础设施运营者提出较高的网 络安全要求。关键信息基础设施的运营者不仅要承担网络运营者 的法律义务和责任,同时还要承担作为关键信息基础设施运营者 的特色法律义务和责任。网络安全建设将成为关键信息设施的运 营者运营过程中必须完成的任务。 《网络安全法》规定了关键信息基础设施运营者的特殊安全

《网络安全法》用不少篇幅的条文来规定网络产品、服务提 供商对用户信息资料的收集和使用。总结起来就是: 一是网络运营者收集、使用个人信息必须符合合法、正当、 必要原则。 二是网络运营商收集、使用公民个人信息的目的明确原则和 知情同意原则。 三是网络运营者应当对用户信息严格保密,并建立用户信息 保护制度。 四是公民个人信息的删除权和更正权制度,即个人发现网络

保护义务,包括:设置专门安全管理机构和安全管理负责人,并 运营者违反法律、行政法规的规定或者双方的约定收集、使用其

图1.

Issue: 003; April 2018

17


网络安全策略

个人信息的,有权要求网络运营者删除其个人信息;发现网络运 营者收集、存储的其个人信息有错误的,有权要求网络运营者予 以更正。网络运营者应当采取措施予以删除或者更正。

监控预警和应急处置 本次《网络安全法》确定了监测预警的风向标,并用了整个 章节来体 性 章节来体现重要性。

用户根据网络安全法完成以下事情: 1、完善安全管理制度。采取有效技术措施和网络安全防护 设备保障网络安全、稳定运行,能有效应对网络安全事 件,包括: ●

对系统及硬件供应商、服务商的资质审查、存档记录;

内部电子邮箱等通讯软件的安全管理;

系统定期升级、维护;

加强信息数据管理,对重要数据做备份、存档、加密等 处理。

2、建立审核监控制度,具体而言: ●

必须有较好的监控手段来有效监控网络系统,及时发现 漏洞和信息泄露等风险;

严格审核并保证网络服务提供者不会收集与其提供的服 务无关的信息,尤其是涉及国家秘密、个人隐私等重要 敏感信息;

内容识别审查,能够有效发现并识别鉴别用户传输内容 存在的敏感信息等,以便及时处置。

3、定期进行安全检查和评估,关键基础设施运营者每年必 须进行一次评估。 4、建立报告制度。及时报告系统漏洞、信息泄露等风险事 件。 5、建立保密制度。对个人信息、重要数据进行保密,全程 保证传输安全,防止中间人劫持等威胁。 6、建立安全责任制度。本次新法明确规定,在企业违反网 络安全保护义务情形下,可对直接负责的主管人员或责 任人员进行处罚。

《网络安全法》第五章整章节均在描述监测预警和应急处置 制度的作用和重要性,无论是国家层面,政府各职能机构层面, 还是单独的网络运营者,都有所涉及。需要网络运营商具有问题 发现和安全响应处置的能力,构建安全情报中心,第一时间预警响 应。

建议 本次《网络安全法》的出台给互联网相关企业从各方面都带 来不小的影响,企业今后将要在网络安全保护和管理等方面承担 更多的义务和责任。信服君提醒用户关注安全并重视安全,建议

18

Issue: 003; April 2018


基于计算机的 安全意识教育 的发展 王怀宾 上海市信息安全行业协会副秘书长 上海易念信息科技有限公司CEO

网络安全意识教育作为安全产业的重要分支,在我国 长期被忽视或低估。伴随着网络安全问题的不断扩张,出 了传统的攻防、软硬件保护,人的安全意识问题正不断被 重新定义和认识,其重要性不容忽视。 据Gartner等权威机构的报告显示,网络犯罪活动继 续推动着网络安全市场不断扩大,到2021年,网络犯罪造 成的破坏将让全球每年损失6万亿美元;培训员工识别和 防御网络攻击是网络安全行业最广泛的应用;在创新网络 安全企业名单中,有3家安全意识教育方案提供商位于前50 强。 为了更好的开展网络安全意识教育,产业先驱者创办 了基于计算机的安全意识教育(Computer Based Training, 简称CBT)。即,通过端点计算设备(例如笔记本电脑, 台式机或平板电脑)交付的标准化、交互式安全教育或行

网络安全意识教育的

为管理内容。培训内容侧重于IT的普通用户,而不是安全

现状与影响

或IT专业人员。 2017年,可以看作是“安全意识的元年”。在事件驱 动、政府引导、法律合规方面的促进下,通过勒索软件、

所谓网络安全意识,是指在头脑中建立起来的一种网 络安全观念,在日常工作生活中,遇到各种网络威胁时, 具备防范与警觉心理和基本应对素养。

数据泄露事件,企业普遍认识到安全意识教育的重要性,

但往往在实际的工作中,相当数量企事业单位员工的

国家设立法定“国家网络安全宣传周”活动传播安全意

网络安全意识淡薄,甚至是缺乏最基本的网络安全威胁识

识,《网络安全法》规定企业必须履行员工教育培训的安

别和应对能力。由此可能给各单位造成的经济、名誉方面

全职责。

的损失巨大、甚至是不可挽回。

一系列的产业发展和举措,都将网络安全意识教育的 重要性和突出性放在了一个历史新高度上。 于此同时,基于CBT的网络安全意识教育新模式正在 不断推广、普及和发展,积极推动这一领域的蓬勃发展。

Issue: 003; April 2018

2017年,勒索病毒大面积爆发。某企业的业务部门 员工,在使用工作电脑办公期间,登陆私人邮箱,误点含 有勒索病毒的邮件链接,直接导致该员工的电脑被锁死。 电脑内的重要业务合同没有备份、合同签署迫在眉睫、IT

19


网络安全策略

部门无法解决……最终,该企业无奈地向黑客缴纳了不

全宣传教育工作。这一条明确指出了维护网络安全是全社

菲的“学费”作为代价,换回了电脑里宝贵的文档合同资

会的共同责任,在我国,各级政府和有关部门根据各自职

料……

责大力开展网络安全宣传教育活动,国家网信部门设立了

好似一记警钟敲响,自事件爆发后,该企业从上至

网络安全宣传周,每年9月第三周集中开展宣传教育,对

下开展全员安全教育,企业老总也深刻意识到:网络安全

公共通信和信息服务、能源、交通、水利、金融、公共服

不局限于IT部门,更是面向本单位全体人员的网络安全教

务、电子政务等重要行业和领域的企事业单位,要经常性

育。

开展网络安全宣传教育。 据统计,在 2017年的网络安全事件抽样调查中显示,

(4)监管要求。

超过70%的安全问题直接原因是员工安全意识薄弱。人对

除了上述的国内外的标准和国家的法律,强调了网络

安全的影响远大于任何技术、制度与流程;员工是企业最

要开展安全意识教育之外,各行各业还出台了诸多相应的

大的资产,同时也是最大的安全漏洞。安全领导者必须投

行业规范,提出了这方面的监管要求:

资于提高安全意识和影响行为的工具,通过全员意识教育 支持关键安全业务目标。

《中央企业商业秘密信息系统安全技术指引》 《中央企业商业秘密保护暂行规定》

合法合规下的

《金融行业信息系统信息安全等级保护实施指引》

网络安全意识教育

《证券期货业信息系统安全等级保护基本要求(试行)》

做好安全意识教育,可以起到提高员工风险识别能 力、建立员工行为规则、企业遵从合规要求、建立企业安

《商业银行信息科技风险现场检查指南》 《保险公司信息系统安全管理指引(试行)》

全文化等诸多方面积极推动作用。 而在这其中,合法合规性是硬性标准。“无规矩,不

网络安全相关的法律法规和国际、国内标准,其制定

成方圆”。在相关的法律法规、标准中,将安全意识列为

遵循了网络安全事业发展的科学规律。安全意识的教育,

重要的网络安全防护工作内容之一。

是网络安全工作中不可或缺的一部分,不能被忽视,需要

(1)国际标准。

运用科学、合适的方式方法开展相关的合法、合规动作。

被业界奉为信息安全管理“圣经”《ISO

27001:2013

当前面临的困境与挑战

》中,将意识、资源、能力、沟通、文档信息化等作为支 持板块的重要组成部分。在规范性附录A.7.2.2 中指出:组

网络安全意识教育作为各企事业单位网络安全工作中

织内所有员工、相关合同人员及第三方人员应接受适当的

的重要闭环组成部分,成为了各单位组织安全培训的重点

意识培训,并定期更新与他们工作相关的组织策略及程

工作之一。这其中,既有合法、合规的要求,也有各单位

序。

对于安全意识教育重要性的再认识。

(2)国内标准。 由公安部和全国信息安全标准化技术委员会提出的《 信息安全技术信息系统安全等级保护基本要求(GB/T 22239-

传统的网络安全意识教育,包括邀请专家授课,印刷 海报、制作易拉宝、发放宣传手册,制作台历、鼠标垫等 衍生品馈赠。

2008)》规范性附录,在5.2.3.3、6.2.3.4、7.2.3.4、7.2.3.4等

传统的网络安全意识教育,其优点在于便于理解,易

章节中,等保四个级别均明确指出要开展“安全意识教育

于操作。但在实际开展工作中,往往容易遇到很多麻烦。

和培训: a) 应对各类人员进行安全意识教育和岗位技能培

一般来说,为了有效开展网络安全意识教育活动,存在着

训。”

以下的问题:

(3)法律法规。

(1)时间成本阻碍培训开展

2017年6月1日正式实施的《网络安全法》,第十九

首先是组织实施上的困难,一场100人以上的线下培

条规定,各级人民政府及其有关部门应当组织开展经常性

训活动,需要多部门的协调才能开展,涉及的具体工作纷

的网络安全宣传教育,并指导、督促有关单位做好网络安

繁复杂;其次成本代价高昂,参与培训的员工不得不因为

20

Issue: 003; April 2018


高雄大學資管系開發「享安全」App。

(吳江泉攝)

培训活动暂时中断手中的工作,其中涉及的误工损失对于

传统方式的培训,讲师难找,学习内容一时半会儿难以收

大型金融、证券等公司来说不可估量,至于为此培训安排

集、整理和使用。

的场地费用、讲师成本等等,种种因素叠加在一起,让一

对于特定问题,如,勒索软件、开发安全、高管风

般的企业主难以承受——即便花了大成本,也未必做到好

险、黑客文化等的内容素材学习,需要更加专业的讲师力

效果,甚至可能对于业务来说是“得不偿失”。

量参与,但一般的企业很难去调动和使用有这样的资源为

(2)培训场景需要针对性

自己的安全教育服务。

培训的过程中缺乏有效沟通,“台上在上课,台下玩

(4)效果没有可度量指标

手机“的情景十分普遍,培训的效果无法度量。往常的线

一般来说,对于培训的检测通常是一份随堂测试或者

下培训是单方面的灌输、输出,缺乏有效的互动和反馈,

心得体会,但这样的做法因为测试评估的题库不完善、考

这让学习氛围、学习效果大打折扣。合适的培训场景,需

核内容随意性大,最终收效甚微。

要与工作、生活、客户使用习惯和环境结合的方式。 (3)学习内容缺乏专业性

可度量的效果标准,应当从多个维度科学判定,而 判定又应当以一定的样本数作为参考,既对个人做学习反

针对不同的人群,其所需的知识层次、类型也是不

馈,也对整体有一个科学的评估。同时,应当以一个标准

同的。对于普通员工,他们并不需要学习深奥的IT运维技

的题库样作为蓝本形成检测考核的内容,最终形成科学的

术,而是侧重于一种普遍的安全认知:密码、邮件、诈骗

评估与反馈报告。

防范、办公环境、日常行为的规范等。那么至于管理层、 技术层,他们关心和需要了解的学习侧重点又是各自有所 不同的。

基于CBT模式的 网络安全意识发展与实践

同样的,针对不同行业、企业,其所需的网络安全

所谓的CBT,即Computer Based Test,是指通过端点

培训也存在差异性。金融、卫生、能源等行业,在企业管

计算设备(例如笔记本电脑,台式机或平板电脑)交付的

理规定与合规要求上不同。针对不同要求的专业性,组织

标准化、交互式安全教育或行为管理内容。培训内容侧重

Issue: 003; April 2018

21


网络安全策略

于IT的普通用户,而不是安全或IT专业人员。

单位的参与度还有待提升。

在传统的网络安全意识教育活动无法满足更多的培训

“网络安全行业知识赛”采用基于CBT模式下的“

学习需求和要求的状况下,基于CBT模式的网络安全意识

享安全平台”,该平台拥有移动网校、企业LMS、竞赛平

培训学习正在成为一种全新的实践选择,受到了网络安全

台和钓鱼仿真、知识库五大模块,支持公有云服务与大

意识教育引领者的欢迎和尝试。

型企业私有云部署,帮助企业便捷建立基于计算机培训

上海正在努力打造成为国际经济、国际金融、国际贸

(CBT)的员工安全意识教育方案,增强安全教育的趣味

易、国际航运、科技创新中心和国际文化大都市。网络在

性、时效性和针对性,降低教育成本和合规风险,提升企

各企事业单位生产劳动、市民日常生活娱乐中已经高度普

业安全管理水平。

及,随之而来的网络安全教育的普及却存在一定短板甚至 是缺失。 2017年,由上海市信息安全行业协会组织开展了面 向全市的“网络安全行业知识赛”。本次行业知识赛共有

据了解,“享安全”平台不仅基于CBT模式开展安全 意识教育活动,相匹配的采取包括“钓鱼测试”“案例体 验”“社工模拟”的活动运营模式,“数字化教学内容, 用于计算机、电视等多种数字平台”的数字内容模式。

八大行业参与,分别涵盖了上海地区的工商行政、卫生计

换言之,单纯的CBT模式是开展网络安全意识教育的

生、税务、国资企业、教育、证券期货、银行以及纺织业

基础和根基,有针对性的开展更加具有体验和操作性的“

等行业。据统计超过11万人次参与线上知识赛。

活动运营模式”,可以加强网络安全意识教育的效果。

此次“网络安全行业知识赛”正是基于CBT模式,借

部分企事业单位,在参加“网络安全行业知识赛”的

助“享安全”学习平台在微信端铺开,通过快捷、有效的

基础上,继续选择开展了线下安全周的活动运营模式,将

线上互动学习方式,完成了知识学习、考试测评工作。

好玩、好学、好用的安全案例体验环节融入到学习之中,

参与知识赛的企事业单位员工,通过微信端进入“享 安全”的学习测评界面,利用碎片化时间学习网络安全知

以互动有趣的方式让员工理解和直观感受到网络安全的危 害后果和防范办法。

识点,以游戏互动、模拟答题等方式方法强化学习记忆,

有别于传统的线下培训学习方式,此次CBT的大规模

通过最终的线上测试测评检验学习效果,形成相应的个人

尝试,即使对于这种学习模式成熟度的考验,也验证和总

报告作为学习反馈。

结了基于CBT模式开展网络安全意识教育培训可以继续发

“网络安全行业知识赛”根据各行业的行业特点、所 面临的网络安全问题和学习短板,专家力量参与其中,有 针对性地制定相应的知识测评题库,有效地帮助企事业单 位巩固和提升了本单位人员的网络安全水平。 不仅如此,利用CBT模式的学习,可以最大程度的扩 大受培训学习人群的覆盖面,特别是弥补了长久以来部分 企事业单位在全员信息安全教育上的空白。 此次“网络安全行业知识赛”覆盖人群超过11万人, 同等规模的培训学习人群,如果采取相传统的培训,不仅

挥的特点和优势。 (1)个性教育 结合开展“钓鱼测试”、模拟测试等环节,形成每个 员工的网络安全风险地图形和个性化标签,有针对性进行 内容推送。 (2)游戏化 采取部门排名、个人积分、答题抽奖等行为激励,激 发参与学习测试对象的积极性和主动性。 (3)技术联动

在数量上难以达到,而且在成本上恐怕也是难以承受。同

采取用户行为管理、病毒网关、威胁情报等技术

时,以科学、完善的题库进行线上测评考试,保证了线上

产品集成,构成行为介入的测试类技术联动,尝试与

学习的效果,也有助于个人和单位及时得到学习反馈,还

AI、VR、AR等技术应用集成,在科学性、参与体验趣味

有助于各行业和单位制定下一阶段安全意识教育工作的重

性上达成更好效果。

点和目标。

基于CBT模式下的网络安全意识教育活动,克服传统

员工作为本单位网络安全的第一道防线,防微杜渐,

积病,创新与突破,采取更多融合方式丰富CBT模式下的

补足在网络信息安全培训方面的短板,有助于促进提升本

安全意识教育普及活动,相信在未来将会成为主流的网络

单位的整体网络安全水平。从知识赛的数据来看,行业内

安全意识教育活动的方式方法。

22

Issue: 003; April 2018


什么是《通用数据保护条例》GDPR? 《通用数据保护条例》(GDPR)的条款规定企业要保护欧 盟公民在欧盟成员国境内交易的个人数据和隐私,并对欧盟之外 的个人数据输出进行监管。欧盟28个成员国的公民都受到这项法 律条例的保护,即使他们的数据是在其他地方被处理的。

为什么出台《通用数据保护条例》? 众所周知,欧洲对企业如何使用公民的个人数据一直有着 更为严格的规定。由于近年来数据泄露事件频发,人们对于个人 隐私的保护也极为重视。基于数据隐私和安全,RSA对5个国家的 7500人进行了调查研究。结果表明,80%的消费者最担心的是银行 和金融数据的丢失,76%的消费者对丢失的安全信息或识别信息感

《通用数据保护条例》将影响哪些企业? • 任何存储或处理欧盟国家内有关欧盟公民个人信息的公 司,即使在欧盟境内没有业务存在,也必须遵守《通用数 据保护条例》。有关必须遵守《通用数据保护条例》的公 司的具体标准如下所示:

• 在欧盟境内拥有业务; • 在欧盟境内没有业务,但是存储或处理欧盟公民的 个人信息;

• 超过250名员工; • 少于250名员工,但是其数据处理方式非偶然地影响 了数据主体的权利和自由,或是包含某些类型的敏

到担忧。

感个人数据。

哪些类型的隐私数据将受到GDPR保护?

这也就意味着,《通用数据保护条例》几乎适用于所有的公

• 基本的身份信息,如姓名、地址和身份证号码等

司。普华永道提供的调查结果显示,92%的美国公司认为这一条

• 网络数据,如位置、IP地址、Cookie数据和RFID标签等

例将成为最重要的数据保护措施。

• 医疗保健和遗传数据 • 生物识别数据 • 种族或民族数据 • 政治观点 • 性取向

Issue: 003; April 2018

《通用数据保护条例》会影响第三方和客户之间的契约吗? 《通用数据保护条例》规定数据控制方(掌握数据的组织) 和数据处理方(帮助管理数据的外部组织)负有同等的责任。

23


网络安全策略

数据处理方(例如,云提供商、SaaS供应商或工资单服务提 供商)和客户所有的现有契约都需要阐明相应的责任。修订后的 合同中还需要对一系列的处理流程进行说明,包括这些数据是如 何被管理和保护的,以及如何对违规行为进行投诉。

如果企业没有遵守《通用数据保护条例》 将导致什么后果? 每一单GDPR违规行为将受到高达2000万欧元的严重处罚,或 者上一年全球年营业额的4%,以较高者为准。 目前,一个悬而未决的问题是如何对惩罚进行评估。例如, 一个对个人影响最小的违规行为,和一个因暴露个人识别信息 (PII)而对个人造成实际损失的违规行为之间的惩罚力度是否存

• 处理数据时要满足较高的同意标准,任何时候,同意都是 处理数据的法律依据。

• 尽量减少处理个人数据的数量,这被称为数据最小化的原 则。

• 在个人数据泄露的情况下,控制者至少应当在知道之时起72 小时以内向监管机构进行通知。

在区别? 不要忘记移动设备:根据Lookout公司对信息技术和安全主管

• 根据第37条,管理者应当指派一名数据保护官,这名数据

的调查显示,64%的员工会使用移动设备访问客户、合作伙伴和员

保护官可以是某一组织的雇员、很多组织的共同代表或者

工的个人识别信息(PII)。这种行为为遵守GDPR的合规性造成了

是外聘顾问。

另一种威胁。例如,81%的受访者表示,大多数员工都被允许在办 公设备(也可能是员工自己的设备)上安装个人应用程序。如果这些 应用程序需要访问和存储个人识别信息(PII),则必须按照GDPR合 规要求进行操作。但这一过程是很难控制的,尤其是你需要将员 工使用的所有未经授权的应用程序都考虑在内。

• 根据第35条,在进行数据处理之前,控制者应当对就个人 数据保护所设想的处理操作方式的影响进行评估, 尤其是很 可能对自然人权利和自由带来高度风险。

• 应通过适当的技术和组织措施,确保在处理过程中对数据 的保护。

• 特殊种类的个人数据处理要遵循特定的条件。

《通用数据保护条例》一些关键条款 • 为存储和处理个人信息提供一个法律基础。 • 只为合法目的收集和处理个人数据,并且一直做好保护措 施。

• 管理好所有数据被处理过的记录。 • 包括第三方在内的所有数据处理者,都要对控制和处理个 人数据的权利和自由进行风险评估,并从组织和技术方面 减轻对已确定的风险。

• 管理者应采取适当的技术和组织措施,以及对这些措施的 力度和适用性进行的持续评估,以确保和证明处理数据时 是按照本法规进行的。

• 管理者在控制、处理及转移个人数据时,应当及时对数据 主体的要求做出回应。

• 根据第16条,控制者应当通过各种方式对有关数据主体的 不准确个人数据进行更新与纠正,包括通过数据主体提供 补充披露函的方式。

• 在一些特定条件下,控制者应当永久删除数据主体的有关 个人数据。

• 在特定条件下,数据主体应当有权限制控制者处理其个人 数据。

• 根据数据可移植性的要求,控制者应当以“结构化的、常 用的和机器可读的格式”提供有关数据主体的个人数据。

• 应当提供可供选择的替代方法,而不仅仅是利用自动化处 理和剖析数据,比如人为干涉。

• 防止数据被转移到欧盟以外的“第三国或国际组织”,除 非有特殊的保护措施。

• 在直接向儿童提供服务时,要确保有额外的限制条件来保 护儿童个人数据的处理。

24

Issue: 003; April 2018


国际网络安全

全面解密!深信服2017年安全威胁分 析报告之勒索病毒篇 摘要:深信服对全国11个行业超过10w个域名(或IP)做安全防 御,共阻断勒索软件攻击163250次。从被攻击流量数据来看,勒 索软件攻击流量最多的三个月分别是5月、6月和10月。 正如众多安全专家所预测的,勒索软件会在将来的网络犯罪

统。从此,勒索病毒开启了近30年的攻击历程。

中占据重要作用,这两年恶意软件的活动情况恰好验证此预测。 技术门槛低、低风险、高回报使“勒索即服务”发展迅猛。

首次使用RSA加密的勒索软件

所谓知己知彼,方能百战不殆。勒索病毒究竟有怎样的前世

Archievus是在2006年出现的勒索软件,勒索软件发作后,会

今生?该如何去防范?本文节选自《深信服2017年安全威胁分析

对系统中“我的文档”目录中所有内容进行加密,并要求用户从

报告》,全面为您解密勒索病毒。

特定网站购买,来获取密码解密文件。它在勒索软件的历史舞台

勒索病毒演进

上首次使用RSA加密算法。RSA是一种非对称加密算法,让加密的 文档更加难以恢复。

最早的勒索软件 已知最早的勒索软件出现于1989年。该勒索软件运行后,

国内首个勒索软件

连同C盘的全部文件名也会被加密(从而导致系统无法启动)。

2006年出现的Redplus勒索木马(Trojan/Win32.Pluder),是

此时,屏幕将显示信息,声称用户的软件许可已经过期,要求用

国内首个勒索软件。该木马会隐藏用户文档,然后弹出窗口要求

户向“PC Cyborg”公司位于巴拿马的邮箱寄189美元,以解锁系

用户将赎金汇入指定银行账号。

Issue: 003; April 2018

25


国际网络安全

图1.

勒索即服务诞生

发起勒索软件攻击。勒索即服务一般有三种交付方式,第一种是

2015年,第一款勒索即服务(RaaS)Tox曾风靡一时,Tox工

最常见的是直接付费购买,获得勒索软件的终身使用权利,并且

具包在暗网发售,Tox允许用户访问购买页面,创建一个自定义的

可以自定义勒索说明、赎金价格和验证码。另一种是提供免费的

勒索软件,在这个页面,用户可以自定义勒索说明文字、赎金价

勒索软件,但是开发者要从赎金中抽取一定比例的分成。最后一

格和验证码。之后,Tox服务会生成一个2MB大小的可执行文件,

种是定制化服务,根据付费者要求提供开发服务。

该文件包含勒索代码,伪装成屏幕保护程序。

传播方式

同年,Karmen勒索软件出现。这种勒索软件即服务(RaaS) 的新变体为买家提供了一个用户友好的图形化仪表板,让买家可

勒索软件的传播手段主要以成本较低的邮件传播为主。同时

以访问位于黑暗网络上基于Web的控制面板,从而允许买家配置

也有针对医院、企业等特定组织的攻击,通过入侵组织内部的服

个性化版本的Karmen勒索软件。图1.

务器,散播勒索软件。随着人们对勒索软件的警惕性提高,勒索

随后,其他RaaS工具包如雨后春笋般冒出,例如Fakben、En

者也增加了其他的传播渠道,具体的传播方式如下:

crytor、Raddamant、Cerber、Stampado等勒索服务不断冲击市场,

邮件传播:攻击者以广撒网的方式大量传播垃圾邮件、钓鱼

勒索软件工具包的价格也随之走低。

邮件,一旦收件人打开邮件附件或者点击邮件中的链接地

比如勒索软件Shark,软件本身免费,开发者从赎金中赚取

址,勒索软件会以用户看不见的形式在后台静默安装,实施

20% ;勒索软件Stampado,售价$39 ,准许购买者终身使用工具包

勒索;

。最近在恶意软件论坛上搜索 RaaS 工具包,价格范围在$15 到 $95

漏洞传播:当用户正常访问网站时,攻击者利用页面上的恶

之间。

意广告验证用户的浏览器是否有可利用的漏洞。如果存在,

勒索病毒获利模式

则利用漏洞将勒索软件下载到用户的主机; 捆绑传播:与其他恶意软件捆绑传播;

随着黑产市场的扩张,勒索市场也不断受到其他运营模式

僵尸网络传播:一方面僵尸网络可以发送大量的垃圾邮件,

的冲击。一种新的商业模式应运而生——勒索即服务(RaaS),

另一方面僵尸网络为勒索软件即服务(RaaS)的发展起到了支

勒索软件开发者将部分软件功能作为服务出售,持续提供更新和

撑作用;

免杀服务,小黑客和供应商通过网络钓鱼或其他攻击进行传播扩

可移动存储介质、本地和远程的驱动器(如C盘和挂载的磁

散,然后利益分成。运营模式如下图所示:图2.

盘)传播:恶意软件会自我复制到所有本地驱动器的根目录

在勒索即服务市场中,无论技术高低,能力大小,甚至毫无 编程经验的人只要愿意支付金钱,都可以得到相应的服务,从而

中,并成为具有隐藏属性和系统属性的可执行文件; 文件共享网站传播:勒索软件存储在一些小众的文件共享网 站,等待用户点击链接下载文件;

26

Issue: 003; April 2018


图2.

网页挂马传播:当用户不小心访问恶意网站时,勒索软件会

对称的复杂加密算法,受害者一旦中招,只有支付赎金才能安全

被浏览器自动下载并在后台运行;

找回数据。

社交网络传播:勒索软件以社交网络中的.JPG图片或者其他 恶意文件载体传播。

2017年1月1日开始至12月31日截止,深信服下一代防火墙、 信服云盾等安全防护产品对全国11个行业(其中包括政府、教 育、医疗、金融、企业、能源等)超过10w个域名(或IP)做安全

2017年,WannaCry和Petya的出现彻底打开了我国的勒索犯 罪市场,尤其是WannaCry,还针对中国人民开发汉语版界面,如

防御,共阻断勒索软件攻击163250次。每月拦截勒索软件攻击次 数走势图如下:

下图所示。图3. 这两个漏洞就是典型的通过漏洞和共享进行传播。Wanacry 勒索软件利用Microsoft

Windows中一个公开已知的安全漏

洞。Petya勒索软件通过ME文档软件更新的漏洞传播。 WannaCry 和Petya的出现也同时表明,利用全球范围内普遍漏洞产生的勒索 软件攻击活动会给全人类带来灾难性的后果。

勒索病毒全年态势 2017年,勒索软件数量暴增。勒索即服务的逐步发展,使勒 索软件成本越来越低,备受黑客喜爱。在过去的一年中,各产业 都曾遭受到勒索软件的攻击。在2017年间流行的勒索软件使用不

Issue: 003; April 2018

27


国际网络安全

图3. 2017年,WannaCry和Petya的出现彻底打开了我国的勒索犯罪市场,尤其是WannaCry, 还针对中国人民开发汉语版界面,如下图所示。

从被攻击流量数据来看,勒索软件攻击流量最多的三个月 分别是5月、6月和10月。由此可以看到,在重大安全事件爆发时 期,以及国家重大会议活动期间,恶意软件传播非常频繁。因 此,网络安全工作时刻不能松懈,在特殊时期更应该加固网络防 护,避免中招。

程序下载或运行; 6. 做好网络安全隔离,将您的网络隔离到安全区,确保某个 区域的感染不会轻易扩散到其他区域; 7. 建立并实施权限与特权制度,使无权限用户无法访问到关 键应用程序、数据、或服务;

应对建议 目前针对勒索软件的措施主要是:更新补丁、封锁恶意源。 但这些都只有在威胁已经引发了损害之后才能开始。反病毒和反

8. 建立并实施自带设备安全策略,检查并隔离不符合安全标 准(没有安装反恶意软件、反病毒文件过期、操作系统需 要关键性补丁等)的设备; 9. 部署鉴定分析工具,可以在攻击过后确认:

恶意软件产品虽然防护给力,但威胁发展太快,任何工具都无法

a)感染来自何处;

提供100%防护。因此,深信服提供以下10个建议保护您以及您的

b)病毒已经在您的环境中潜伏多长时间;

单位免受勒索软件伤害:

c)是否已经从所有设备移除了感染文件;

1. 制定备份与恢复计划。经常备份您的系统,并且将备份文 件离线存储到独立设备;

d)所有设备是否恢复正常。 10. 最关键的是:加强用户安全意识培训,不要下载不明文

2. 使用专业的电子邮件与网络安全工具,可以分析电子邮件

件、点击不明电子邮件附件、或点击电子邮件中来路不

附件、网页、或文件是否包含恶意软件,可以隔离没有业

明的网页链接;毕竟人是安全链中最薄弱的一环,需要

务相关性的潜在破坏性广告与社交媒体网站;

围绕他们制定计划。

3. 及时对操作系统、设备、以及软件进行打补丁和更新; 4. 确保您的安全设备及安全软件等升级到最新版本,包括网

除此之外,深信服在勒索病毒应对实践中积累大量实战经

络上的反病毒、入侵防护系统、以及反恶意软件工具等;

验,拥有勒索病毒检测解决方案,勒索病毒防御响应解决方案,

5. 在可能的情况下,使用应用程序白名单,以防止非法应用

28

以及体系化解决方案,帮助更多用户从容应对勒索病毒!

Issue: 003; April 2018


对话绿盟科技 高级副总裁叶晓虎

1

1、 当今网络安全领域面临着哪些新挑战? 近两年来网络安全领域产生了很多新挑 战,最典型的是在关键基础设施领域和物联网应 用领域。在关键基础设施尤其是在例如电厂、能 源、精密仪器等生产领域,安全体系的设计首先 要保障生产的稳定运营,2016年爆发的乌克兰电 厂事件带来了深刻的影响。这与过去在IT领域的 安全问题解决方法有着很大的区别,提出了很多 新的挑战。随着物联网的应用,越来越多的设备 连接到云端并与其他系统产生关联,互联应用呈 现复杂网状,关系错综复杂。安全隐私、对环境 的影响,可靠性这些因素,都要求传统的安全模 型和体系发生大的改变。 虽然随着网络安全法的实施,社会各界对 网络安全重视的程度在加强。但是中国的企业和 政府在安全方面的投入占IT投入的比例,在全球 范围内仍然处于比较低的水平,有待进一步的加

2

2、 回顾全年安全事件,让您印象最深刻的

强。另外,网络安全行业与企业、监管机构之间 的信息共享和协作,仍然存在很多障碍,降低了 网络安全防护的效能。我们认为,未来企业和安 全厂商之间的关系不能仅仅停留在提供产品和技 术支撑上,面对未来的安全威胁和风险,需要企 业和安全厂商之间进一步加强和深化在安全运营 上的协作。

事件是什么?为什么? 我认为当今最具毁坏性的攻击手段,勒索 病毒绝对是其中之一,以WannaCry勒索病毒的 爆发为例。从技术上讲,这是第一个利用系统漏 洞进行自动传播特性的勒索软件。此病毒共使 100多个国家的数十万用户遭到袭击,其中包括 医疗、教育等公共事业单位和一些有名的大公 司。WannaCry事件的影响波及到公务服务、企业 运营,造成了极为深刻的影响。 反思整个事件的过程,有几点需要特别的 关注:

Issue: 003; April 2018

29


国际网络安全

1. 这次事件影响到了部分公共服务部门,事 件发生期间部分部门暂停了对公众的服务,甚至 部分企业停止了生产以应对。如何在接受一定风 险的条件下减轻安全事件发生期间对生产、公众 服务的影响,保持生产服务的运行,对整个系统 安全设计以及应急响应体系进行改进是个值得深 入探讨的方面。 2. WannaCry传播利用的永恒之蓝漏洞,在 病毒爆发前一个多月,微软发布了操作系统的补

运营服务以及设备代维、威胁分析服务等。通过

丁,各安全厂商都发布了检测的规则更新。但是

运营体系的完善和技术的进步,这些企业的安全

大量的企业并没有及时进行系统的更新,在蠕虫

得到有效的保障,对风险的处置效率得到很大的

爆发后才开始下载升级包,没有在病毒爆发前采

提升。2017年绿盟科技在这个方向取得了长足的

取措施及时消除漏洞和威胁带来的风险。在此次

进步,获得很多企业客户的认可,使我们更加坚

蠕虫病毒爆发期间,绿盟科技为数百家企业和组

定对连接协同的认知。

织机构提供了应急响应支撑服务。从此次应急响

机器学习/人工智能在安全领域的应用,使

应的过程 和结果来看,当前中国企业和组织机构

我们对安全数据的理解更加深入。2017年绿盟科

在预防和阻止大规模安全灾害事件上仍然面临较

技通过自己建立的威胁数据收集和分析系统,利

多的问题,需要逐步加强和完善企业和组织机构

用机器学习的方法开发了新型的威胁检测引擎。

的安全运维与管理体系。

这个新的引擎提升了检测效率,降低了漏报和误

事件发生期间,很多企业都展开了应急

报。同时,通过大数据分析系统,我们发现多起

工作,但是在很多企业单位中没有专职的安全人

新的攻击手段和方法,及时更新了对应的防护和

员,甚至IT管理人员都是生产人员兼职。人员的

检测规则,生产有效的威胁情报并通过情报共享

数量和技能问题,在发生大规模安全事件时尤为

体系使部署在企业现场的安全管理平台和设备起

突出。

到对应的防护作用。绿盟科技的威胁情报中心得

3.

3

到国内外同行的认可,在RSA2017上被选为热点

3、 绿盟科技的安全防御理念是什么

产品之一,是唯一来自中国的安全产品。这套系

举例说明 绿盟科技一直在践行“连接协同”的理 念,网络安全的本质是攻防双方力量的较量。经 过这些年的努力,越来越多的企业认识到企业的

统初见成效,但还需要做大量的完善和改进。

4

4、 对数据安全未来的发展方向,包括攻击 态势和技术发展,您有何预测?

安全不是孤岛,单靠自己的力量不足以对抗威

未来数据安全对于个人、企业、社会、国

胁。从2010年开始绿盟科技为企业提供远程安全

家,乃至全世界都是最受关注的。随着数据价值 的提升,让很多不法人员利用各种手段对数据进 行着攻击窃取,在各个层面都造成了巨大的损 失。 人工智能已经加入了数据安全的保护行 列,利用机器学习的技术可以智能的对数据进行 价值评估,对威胁进行追踪定位,在未来会有 80%以上的企业选择带有AI的工具来保障关键业 务数据安全。

30

Issue: 003; April 2018


数据泄露和黑客攻击,2017年网络安 全多事之秋 网络安全专家不断警告,威胁网络安全的事件只会越来越频 繁,花样越发繁多。客观上这就导致我们在网络安全方面绝不会 我们先回 回顾 下 感到无聊,但这种不无聊绝对不等于有趣。让我们先回顾一下 2017年网络安全“大事件”。 网络 天才黑客攻击美国国土安全局,偷走“网络 武器”,致大规模数据泄露。此为一。 这一年伊始,媒体的精力集中在了“永 恒之蓝”漏洞工具上。这个漏洞工具针对的 是一种叫做Windows服务信息块的文件分享 规则,过去十五年每个版本的Window都内置 这种规则,“永恒之蓝”据报是由美国国土 安全局开发的网络武器,后被黑客窃取。黑客 攻 使用它制造了两个年内最恶名昭彰的勒索程序攻 击,“想哭”(WannaCry)和“NotPetya”。 罗斯几十万 这两次网络攻击总共涉及了乌克兰和俄罗斯几十万 家企业和政府机构。目前仍然没有找到确定的来源,但在诸多欧 洲国家和美国带有政治色的推测中,“想哭”(WannaCry)来 自朝鲜,“NotPetya”则被认为和俄罗斯有关。但并没有确凿证 据。 永恒之蓝其实并不是黑客使用的唯一一种针对Windows的工 具。2017年10月,一种叫“坏兔子“(BadRabit)的勒索程序攻

蓝牙系统里的错误。根据发现它的人说,黑客可以通过这个错误

击了几家俄罗斯新闻机构和乌克兰的运输系统,比如基辅的地铁

获取对设备的权限和控制,进行中间人攻击。已经有解决这些错

和奥迪撒机场。据报道,坏兔子(BadRabit)还袭击了土耳其和

误的补丁推出,但事实上,全世界每天使用的几十亿个互联的设

保加利亚。在这次攻击中,黑客使用的漏洞工具叫做“永恒之浪

备上可能还有其他漏洞。 2017年还有很多其他关于数据泄露的惊人消息,包括优步被

漫”(EternalRomance)。

Broadpwn漏洞让十亿多手机终端设备陷入安全威胁。 此为二。

披露在2016年曾经成为数据泄露的受害者,多达57000000条拼车服 务用户资料被黑客盗取,但优步没有自行公开。 此外,去年人们还发现2013年雅虎数据泄露的规模其实要

发现系统脆弱性漏洞Broadpwn成为2017年网络安全领域另一

比起初的预期大得多。2013年的泄露影响了雅虎的所有用户账

重大事件。全世界超过十亿个IOS和安卓系统面临着堆溢出漏洞

户——当时用户有三十亿。如果这不够令人震惊,你还需要知

威胁。这个发现提醒了网络安全行业和终端设备用户,并不是所

道:黑客有三年的时间去随意使用他们盗取的信息,因为泄露是

有的网络安全威胁都来自于软件。苹果和谷歌推出了针对这个漏

在2016年才被发现的。

洞的补丁,但世界上的任意一台设备都仍然有可能受到攻击,有 人警告说,黑客现在将会开始攻击硬件上的缺陷。 “Broadpwn”并不是独一无二的。“Blueborn”去年也被发 现,有超过53亿个设备存在这个漏洞。简单的说,它是一个设备

Issue: 003; April 2018

今年内截止到目前还比较平静,未发生大型攻击,也没有类 似于“想哭”和“NotPetya”的事件登上头条,但是从优步和雅 虎事件看来,我们可能会在几年后的某一天突然发现现在正在发 生的数据泄露。

31


人工智能

人工智能: 你所需要知道的 一些基本信息 人工智能和机器学习正在迅速成为热词,就像“打破格局”和“创新”一样讨 厌,但跟它们不同的是,“人工智能”和“机器学习”背后是有一定实质内容的 为此,麦肯锡[注]为这个领域里的初学者——也就 是我们中的多数——提供了一份概况,告诉你你需要知

量数据,进行真正的“学习”。这也就是为什么“大数 据”如今显得十分重要。

道的事。

根据分析过程,机器学习主要分为三类:描述性分

首先,什么是人工智能?简单的说,它就是机器

析,预测性分析及规范性分析。毋庸置疑,描述性分析

在表现典型的人类认知功能方面的能力,比如推理,理

是最简单的过程,并未有真正地“思考”。规范性分析

解,与环境互动。在已经存在的人工智能技术

过程是最复杂的。描述性分析只是描述所发生的

门类里,有自动驾驶汽车——虽然它还远 达不到完美,当然还有机器人,计算 机视觉,计算机语言,机器学习。

事。预测性分析——各种分析团体都在使用

与传统方法相

这一种方式——是通过分析已有数据和

比,深度学习在语音识别

电脑模型推演各种场景,预测可能发

方面准确度高25%,面部识别准

生的事。而规范性分析则能够告诉

算法会浏览巨量的固定数据,从

确度高25%,图像分类准确度则高

你该采取何种行动以达到某个特定

中发现规律,然后它们进行预测

出41%。通过深度学习,机器可以

目标。

及提供解决方案。这种规律总结

学习越来越复杂的信息,并且可

及预测的过程取代了一般的人机

以根据学习成果作出结论

机器学习主要是依靠算法。

关系,即人类只给机器提供编程指 令。新型人机关系则是让机器依据大

32

(决定)。

根据算法,机器学习的方式主 要有监督学习、无监督学习,强化学 习。 监督学习主要是给电脑进行一系

Issue: 003; April 2018


列“输入”,以得到某个特定的“输出”。这种算法是

神经网络——多层互联的软件计算程序。跟一般算法相

用已有数据,结合人类的思想,将“输入”(例如利

比,它可以处理的数据库更大,所以执行任务时也就能

率,一年中的特定时间,等)和输出(比如房价)联系

获得更为准确的结果。

起来。这种方式用于典型的预测性分析。

与传统方法相比,深度学习在语音识别方面准确度

非监督学习中,人类未设定某种特定目标。当人们 不知如何使用手中的数据时,则最适合采用此类方法, 通过算法帮助你发现有用的规律模型,从而对数据进行 分级和使用。

高25%,面部识别准确度高25%,图像分类准确度则高出 41%。通过深度学习,机器可以学习越来越复杂的信息, 并且可以根据学习成果作出结论(决定)。 例如,你给电脑出示一个图像,神经网络就会开

这是机器学习最接近人类学习的方式,是一种奖励 机制下的学习方式。它让算法执行一项任务,然后获取 奖励,算法会尽量在每次执行任务时都让奖励最大化。 这种方式让机器与环境互动,所以从本质上讲就更加宽

始进行分析,记忆,然后,当你在另一个环境中出示同 一个图像时,它就能识别这个图像。这听起来似乎很简 单,因为这对人类大脑来说是太容易的一件事情。但其 实,这显然很难。

泛。 机器人顾问就是一个例子:它通过输入的指令来与

我们离伊隆・马斯克和比尔盖茨警告过的那种超

环境互动,如果这个指令得到了好的结果,这个机器顾

级人工智能仍然很远,但是这个技术的进步很快,非常

问就会受到奖励,比如获得分数,而有的时候,得到最

快,所以这个很远可能并没有那么远。理论上,人工智

佳结果本身就是一种奖励。随后这个机器会不断自我纠

能的应用几乎是没有限制的。但是人工智能正日趋成

正,获取最佳行动路径,来取得最大的奖励。

熟,理解它的真正概念,它能做什么,它未来可以做什

让人工智能获得极大进步的是深度学习。它会利用

么,是很有意义的。

人工智能,还是认知计算? 这两种描述未来信息技术的说法容

个定义的一种技术。有人认为人工智能的

计算让人工智能更加强大,在解决复杂问

易让外行人迷惑。人工智能现在非常走

意义是伞状的,它现在的意义分支确实也

题中并不相互排斥。他对认知计算的定义

红,而且已经被过度引用了,比如连聊

越来越多,认知学习就是其中的一个,可

是,一种在计算模型中模拟人类思考过程

天机器人也被成说是人工智能。认知计算

以说它是一种能让电脑更好的模拟人类的

的尝试,使用自我学习算法,能够进行数

有一种《黑镜》[注]似的气息,但是它可

方式。

据挖掘,规律辨认和自然语言处理。

能是把超级人工智能变成现实的东西。那 么,这两者的区别在哪?

伊万思在其《认知语言学最新动

马尔认为深度学习就从这里切入。

向》一书中引用了VDC研究院的物联网分

认知计算使用神经网络处理数据,并在其

根据《图灵计算机历史档案》一书

析师史蒂夫霍芬伯格的理论,解释了现存

在过程中学习。数据越多,机器学习到的

的说法,人工智能最简单的定义为:“

的所谓人工智能系统和认知学习系统的不

就越多,它做的决定就越精确。马尔把神

一类让电脑执行需要人类智慧的任务的科

同:如果一个人工智能系统和一个认知计

经网络叫做机器作出的决定连成的树状结

学。”这有一点让人困惑,因为有很多我

算系统去分析同一系列医疗数据和其他资

构,机器会步步作出决定,直到交付的问

们以为不需要使用智力的任务其实是需要

料,以给某个病人找到一套最佳治疗方

题被解决。

智力的。所以科普作家,迪恩・伊万斯做

案,那么,人工智能系统会分析数据,然

换句话说,机器学习,尤其是深度

了这样的澄清:人工智能的目标是让电脑

后给医生提供一个最佳建议方案。而认知

学习,可以让认知计算更有效,认知计算

能够通过模仿人类思考过程来解决复杂问

计算系统只会把所有必要的信息提供给医

又能带来真正的人工智能,而不是Siri和

题,例如辨认规律。

生,把选择方案的权利留给医生。

Alexa(谷歌的人工智能助手)。这具体

而认知计算,是可以大范围延伸这

Issue: 003; April 2018

但福布斯的伯纳德・马尔认为认知

是好事还是坏事目前还看不出来。

33


人工智能

人机交互:人工智能进化的钥匙 人机交互,简称HCI,已经被认为是人工智能进化的关键。 有趣的是,人工智能的进化也是HCI的关键。 这两者的交汇点是对话

IBM认为,这也是现在对人工智能开发者最大 的挑战。 电脑对于自然语言的处理能力已经很不错了,

天,人类回家后可以跟墙壁和厨房家电进行有意义 的对话,讨论家里孩子今天过得怎么样,同时又告 诉洗衣机六点开始工作。

但是在自然语言理解方面电脑还不能达到人类的预

但这在变成现实

期。这方面的研究者承认,多数人对电脑的预期是

之前还有一段时间。人

过分的,不符合实际的,但是他们仍然在尽力达到

类一般能自然而然做成

人们的预期。

的事并不能由人工智能

对于人类,对话是很自然的。我们不用有意识

自然完成。但等到电脑

的费力,不用记忆信息的上下文,就能理解别人对 我们表达出的信息。但是电脑在理解上下文,和如

学会了所有它需要学习

Gary Bradski, CTO, Arraiy

何用上下文帮助理解方面需要很多协助。

并且开始掌握时,乐观

所以科学家们现在用监督学习和强化学习来让

的人相信我们就可以得到比现在有用的多的个人助

电脑得到所需的知识,让它们变得对上下文有“意

手,它们能够读懂我们的话,面部表情和肢体语

识”,从而开始能更好的了解人类对话。现在这还

言,程度足够让它们成为真正的谈话伙伴,让我们

都在试验阶段,但是一些研究者相信我们在五年之

的生活变简单,帮助我们通过它们掌握的所有背景

内就可以让电脑理解人类语言,并且作出比现在有

资料做重要决定。

意义得多的反应。

34

的关于上下文的信息,

关于人工智能可能很可怕的方面,专家似乎

这个过程中的一大助力就是物联网。各种物体

认为这种恐惧被夸大了。四个行业内人员告诉麦肯

或者基建设施中的传感器,可以把关于人和电脑周

锡:首先,人工智能还不如悲观主义者对你宣传的

遭的环境信息传给电脑,在它们已有的信息中加入

那么先进;第二,人工智能所做的事离自我意识还

特定的情景或地点,提供上下文。

差很远——他主要只是一个规律识别系统。至少现

有的人把未来看做一种良性循环,人工智能

在是这样。第三,一个专家在采访中对麦肯锡说,

变得越来越好用,所以我们的使用频率提高,它也

我们要能思考的人工智能来做什么呢?根据Arr人工

就会更好用。怀疑者可能会说这种良性循环根本不

智能y公司的首席技术官盖瑞・布拉德斯基说,一

良性,因为它会让我们越来越依赖人工智能,但是

个能思考的电脑唯一有意义的使用方向就是空间探

这些怀疑论者好像都是《黑镜》迷,而不是人工智

索,或者其他危险的行为,而不会有一个能思考的

能研究者。人工智能研究研究会很高兴的看到有一

洗衣机。

Issue: 003; April 2018


人工智能用于网络 安全:好主意

Daniel Miessler, writer and informa on security professional

人工智能在网络安全 中没有可替代品

有人坚定地认为人工智能

现在仍然有争论:应该让人

在网络安全中不该有地位,这种

工智能在网络安全中有一席之地,

怀疑原因多样,可能是因为对

还是应该防止人工智能进入网络安

人工智能的潜能有疑虑,也可能

全领域。但是对于一些人来说,没

是因为害怕人工智能会夺走人类

有什么可争论的。人工智能在网络

安全专家的工作,更不用说人

安全中应该有地位,而且应该是重

工智能本身就可能成为网络安全

要地位,因为根本没有足够的人类

漏洞。但是专家丹尼尔・米思乐

能去做这个工作,正在做这个工作

说,这并不一定是对的。

的人也有太多人类的问题——也就

他认为人工智能的本质就可以切实帮助人类安全人员:

Laurent Gill, co-founder and chief product officer of Zenedge

是容易犯错——不可能像网络安全行业要求的那么优秀。

它是一套可以快速处理大量信息的系统。这是人类做不到

反对把人工智能排除在网络安全之外的其中一人是劳

的,又是人工智能非常擅长的。米思乐说,公司现在会制造

伦・基尔,Zenendge公司的联合创始人和首席产品官。这是

许多个TB大小的数据,这些数据是没有人会看的,但其中又

一家网络安全服务提供商。在SC杂志的一片特别报道中,

可能包含重要内容。这时只有人工智能能在数据堆里发现这

基尔表示:行业现在最需要的是更多的人工智能,而不是

种潜能,所以为什么不用它呢?

更多工程师。在这一点上,他反驳的是谷歌的信息安全和

米思乐用五个原因来论证为什

隐私负责人席泽尔・阿特金斯的说法。阿特金到最近为止

么人工智能在网络安全中表现

都是坚决反对在信息安全中加入更多技术手段。但是今

更好。第一,能研究数据,找

年早些时候,谷歌发表了一个网络安全人工智能解决方

到其中泄露或者漏洞的网络安

案“Chronicle”。这是一个明显的转变。

全专家现在非常紧缺。第二,

基尔说这种转变是有道理的。人工智能是今天数

人类要经过训练才能成为网络

字环境中能保持比黑客领先一步的唯一手段。你可以

安全专家。每训练一个的成本

想雇多少工程师就雇多少,但是只要其中的一个犯

都和上一个一样高。人工智能不

了错误,就会出问题。就像Equifax发生的灾难性问题

需要这种训练,而且在一个已经受 过“训练”的系统里加入更多的人

一样[注]。人类会犯错误,事情就是这么简单。 此外,你不可能想雇多少工程师就雇多少,因为现

工智能运算能力也不需要投入与训

在网络安全专业人员已经短缺了,而且在未来几年中短缺

练人类相同的额外成本。第三,人

会更加严重,2022年缺口估计会达到180万。已经没有专业人

类训练很少能够统一到让每个人都有一样的高水平。第四,

士可雇了。

人类就是人类,他们会觉得枯燥,他们会分神,他们会累。

对这两个问题的解决方案是很明显的:自动化,具体说

第五,人类是有偏见的生物,他们的偏见可能会渗透到分析

就是人工智能——如果我们指的是最广义的人工智能的话。

中去,影响准确性。

人工智能不太可能犯错。可以通过补丁教给它新东西,它不

所以,综上所述,把人工智能用于网络安全听起来是一

会忘记。虽然像所有东西一样,它有自身天然的挑战,但它

个很好的主意。米思乐还说,这个主意可能在五年内变成现

对于所有人来说似乎还是解决问题的最好方案,除了那些财

实。这并不难想象,因为人工智能只会被用于过滤资料。在

富500强公司,因为它们有钱雇用和吸引最好的专家。如果这

很多行业内,比如银行和金融服务,算法已经在执行这种功

些最好的专家犯了错误导致损失,它们也承受得起。

能了。为什么不能教它们在网络安全中也做同样的事呢。

Issue: 003; April 2018

35


人工智能

电子疲劳,人工智 能在网络安全中最 大的挑战 网络安全最近登上头条的时候太多了,所 以人们开始觉得电子攻击和数据泄露是一件常 发生的事,甚至是正常的。当你被大量信息持 续冲刷的时候,任何一个话题都会出现这种问 题:你对它失去了敏感。这种不敏感的状态是 让人讨厌的,但讨厌只是它最小的问题。如果 你觉得网络安全对你无所谓,都懒得为你最新 的网络账号想出一个更好的密码,那它还是非 常危险的。 理查德・福特博士,雷神公司下属网络安

Dr. Richard Ford, Chief Scien st, Forcepoint

全公司“Forcepoint”的首席科学家,把这种特 殊的去中心化称为电子疲劳,并且认为它是网 络安全领域中不多但非常严重的问题之一。 在与TechRepublic网站的采访中,福特解释

福特表示,我们甚至可以说网络安全人员

说电子空间现在已经是每个人和公司生活中巨

和电子犯罪分子之间正在进行着利用人工智能

大的一部分,你不可能逃开它。一旦你对编写

的比赛。这个比赛会在电脑之间进行——智能

新密码感觉到疲倦,对保证你的网络安全觉得

电脑,它们接受了人类训练,比人类更善于解

疲倦,你就会开始做出坏的决定,影响到自己。

决特定的复杂安全问题,因为电脑在解决这种

避免这种疲劳的方法是要对你的网上交易

问题时速度更快。

和行为保持一种健康的怀疑精神,对在网上发

福特甚至把人工智能称为一个“有认知能

表的信息也保持小心。基本上,你要一直认识

力的假肢”,而不是一个反乌托邦似的智能机

到,坏人永远都在寻找伤害好人的方法,如果

器。他说人工智能是一个让人类做出更好决定

你不小心,这也会发生在你身上。

的工具,而不是替人做决定。但是人工智能自

福特认为网络安全中的第二大威胁是人工

身有一个问题:当他变得越来越复杂,在一个

智能。这不是指把人工智能用于网络安全会造

时间点之后你就不再知道它是如何工作的了。

成问题,而是因为电子犯罪分子也可以利用人

所以福特问:你怎么知道它正在正常工作?换

工智能的潜能。

句话说,你怎么能知道人工智能是不是成为了

就像网络安全人员正在寻找更有效的利用

电子攻击的目标?

人工智能的方法,确保组织个人的安全一样,

这个问题是很吸引人的,也是有些恐怖

电子犯罪分子也在寻求利用人工智能达到他们

的。福特相信我们将在未来一二十年内找到答

自己的目的,一般来说是为了获取经济利益。

案。

36

Issue: 003; April 2018


我们正在使用人工智能的

10种方式 个人助手:Siri等语音

Gm人工智能:是的,

识别系统是一种利用 识别系统是 种利用

谷歌是非常热衷于机器

深度学习和神经网络 的人工智能系统。它们 目前还不是人工智能,但正被 教育着了解人类声音中的细微 区别,上下文和语意,正在走 向有朝一日成为真人工智能的

学习的。至今为止三年

很多人不只是读过人工智能的文章或者听说过它,他 们正在日常生活中有意使用人工智能。如果我们最宽泛的 定义人工智能的话,比如把解决某个特殊问题的算法也算 作人工智能,那我们很可能会在未来几年更多的应用它。

来Gm人工智能l一直有 一个智能自动回复内容,你可 以从三个选项中选择一个作为 回复。

所有的人工智能专家都会反对这种宽泛的定义,但 是为了简单起见,我们暂时保留它——需要说明的是聊天

PayPal:这个在线支付

机器人和机器人投资顾问并不是人工智能专家——比如伊

巨人使用深度学习评估

脸书:还记得最近脸

隆・马斯克——认为的真人工智能。它们只是受过“训

风险和侦测诈骗。从它

书泄露了50000000

练”的算法,用来处理巨量数据和识别规律。

路上。

用户资料的丑闻吗?

说起机器人顾问,金融服务业是最早接受机器学习和

记得人们删除账号发现脸书

算法的行业之一,这是因为金融技术类初创企业的刺激。

掌握那么多个人资料时,有多

根据云服务提供商RedPixie的首席数据官米歇尔・费尔德

么震惊吗?一部分资料被用于

曼的说法,风险计算,客户满意度的衡量,市场走向的感

了这个社交网络的机器学习活

知,都是这个行业内算法的主要应用方向。

动,以使他们的服务个性化。 面部识别就是这些活动的例子 之一。 谷歌地图:因为谷歌 知道你的位置,所以

在机器学习和使用算法代替人力,或者解决人力无法 解决的问题方面,健康和零售业是另外两个的领先者。 机器学习对于网上零售业来说是一个重大利好,它允 许它们持续提高服务水平,并且用算法分析它们收集的用

从事的业务和现在网上 诈骗的数量来看,这是很合理 的做法。 网飞:算法让视频推荐 变为可能。这听起来可 能不太重要,但要注意 的是:网飞宣布的这些 算法给他们每年的回报为十 亿美元,而且还帮他们加强了 用户存留度。

户资料,让服务个性化。如果你从亚马逊购物过,你肯定

它可以通过从你的智

已经见识过了:你会根据你最近购买的东西收到建议,而

优步:你几乎可以说没

能手机中提取数据,分析

这只是算法帮助网上零售商的一个侧面。

有机器学习优步就不会

交通速度,来建议从A到B之间

费尔德曼提供了十个现实生活中多次或者经常使用,

存在。这个共享车程的

的最快路线。是的,这有些吓

或者未来将会使用的人工智能——至少是算法——的例子。

公司使用算法来估计到达

人,但是如果你需要知道从A到

但还需要重申的是:网飞,PayPal和声田都没有在

时间,地点,还计算优步送餐

B之间怎么走最快,这也是很方

使用真正的机器学习。他们是在用算法和机器学习向他们

便的。

的客户提供一种更好的服务。至于这些服务能变得多么烦

谷歌:这没有什么可 吃惊的。世界上最大

的送达时间。

人,任何一个被亚马逊推荐轰炸过的人都能告诉你。但是

Lyst:这个网上零售商

人工智能有巨大的潜力让我们的生活真的变好。也可能会

使用深度学习,根据服

征服世界。结果我们只能以后才知道了。

装之间的视觉比较向他 们的用户推荐商品。

的搜索引擎正在不断 通过之前的搜索结果改进 他们的推荐的搜索结果——虽

[注]麦肯锡文中没有使用过全文“麦肯锡公司”,但指的是麦肯

然这种改进可能让一部分用户

锡咨询公司;

觉得没什么用。他们现在甚至

文中多次提到的《黑镜》是一部连续剧,讲的是人类被技术奴役

可以通过谷歌知识图谱调节对

的故事。

语意的理解。

Equifax是美国最大的信用担保公司之一,它的丑闻是指它2016年

好恶或者至少是他们的搜索结

用户数据泄露,使美国一半人口的信用数据被黑客盗走。

果来进行推荐。

Issue: 003; April 2018

声田:这个音乐网站本 月公开说,他们以与网 飞类似的方式使用机器 学习:通过每个用户的

37


人工智能

网络安全中的 人工智能

工智能已经在塑造下一代的

及大数据分析在过去几年都实现了

一些偏差:中性的TA(即土耳其语的O)

工业革命。现在,我们在人

重要突破。人工智能技术已经从仅

在以下机器翻译的译文中出现了明显的性

工智能技术及其新兴企业中已

仅是学术研究的工具转变成了公

别取向,例如她是一名厨师,他是一名医

经投入了数百万美元。私人智能助理,如

司能够切实在商业产品中应用的东

生;她是一名老师,他是一名士兵;他很

Siri、Cortana及Alexa,都还处于发展的婴

西。但是我们能相信人工智能吗?

开心,她不开心。这并不是谷歌工程师带

儿期。但是,它们正逐渐成为我们真实的 伴侣,能够像人一样和我们对话。 不知道你有没有否察觉到,人工智能

有性别偏见,工程师们只是使用所有他们 这是一个很难回答的问题,简单地 说,在人工智能发展的早期,有利有弊。

能找到的预设文本来训练机器,让机器自 己去得到答案。

技术已经遍布我们现代生活的几乎每一个

例如,Tay

Bot是微软开发的一款基于人

因此我们可以说,目前我们距离距离

角落,甚至在没那么现代的生活中也有所

工智能技术的Twitter聊天机器人,在2016

那种能够学习数据并得出正确结论的神奇

出现。语音识别、图像识别以及依靠人工

年3月上线。当这款机器人在互联网世界

智能引擎仍然还有几十年的差距。

智能来保证安全的自动驾驶汽车。金融行

里遨游了几个小时之后,它就开始轻车熟

那是不是说人工智能就没用了呢?绝

业也朝着基于人工智能的风险分析、信用

路了。因为互联网充满了各种各样的“老

非如此。如果应用得当,人工智能技术的

评分及贷款审核等方向发展。我们也看到

师”,这款机器人迅速学会并尤为擅长的

作用很大,并且能给我们的生活带来极大

了基于人工智能的律师机器人和医生机器

是脏话和种族歧视。16个小时之后,微软

的影响。

人的出现。这些都仅仅是开始。

意识到了这个灾难,然后出于善意的目的

总的来说,人工智能的这些进步主要 得益于三大驱动力:

需要平衡两个最重要的元素:数据和专业

几个月前,互联网博客Mashable发表

知识。首先海量的数据是至关重要的,此

了一篇有关谷歌翻译的文章,也说明了人

外这些数据要覆盖人类想要解决的所有问

工智能带来了的一些问题。土耳其语是一

题范畴,只有这样才能得到正确的结论。

2. 计算机能力:凭借现在的计算机能

门中性的语言。男性用语和女性用语之间

另一方面,专业知识,不管是驱动人工智

力,我们能够处理海量的数据。

没有区别,都使用O来代表英文的“He”

能技术发展的数学知识还是特定领域的专

3. 数学算法:数学和算法驱动着人工

和“She”。但是当土耳其语通过人工智

业知识,是充分挖掘问题相关数据的核心

智能的发展。机器学习、深度学习

能被翻译为英语时,机器算法就体现出了

元素。

1. 存储数据:我们现在只需极少的成 本就能存储大量的数据。

38

关停了这个机器人。

人工智能要实现我们预期的效果,只

Issue: 003; April 2018


人工智能和网络安全 在网络安全领域,人工智能也可以 发挥很大的作用,当然它也存在一定的局

当人类分析师研究恶意元素时,可能通常

能技术的引擎,当然这个引擎已经经过了

会去追本溯源,然后将类似的恶意事例一

几百万个已知的良性及恶性可执行文件的

网打尽(例如,被同一个人在同一时间使

训练,因此能够让机器来对这些可执行文

用同一样的词汇模式注册的域名)。

件进行分类。

限,这点无疑与前面提到的先决条件是一 样的——没有足够的数据和足够的专业知 识。 目前能够用于网络安全训练的数据依 然稀少。此外,人工智能系统无法自我解 释,换句话说,你得手动得去验证机器得 出的每一个决定,或者你也可以盲目相信 机器的每一个决定,只是你会发现,人工 智能的错误分类比例特别高。所以从本质 上来说,人工智能似乎并不适合应用到网 络安全,因为我们都知道,漏检测和错误 的检测结果可能会导致灾难性的后果。 但是,让我们回到人工智能系统的 优点上。自从有了人工智能技术、机器学 习、深度学习以及大数据分析,我们现在 能够将那些以前只能由少数人——最聪明

通过人工智能技术,我们可以模

结果非常令人惊讶。我们最终得到了

拟——及从机器的角度来——分析师的直

一个动态引擎,能够检测那些杀毒软件和

觉,Check Point的算法现在能够分析数百

静态分析无法发现的恶意可执行文件。事

万已知的入侵痕迹,并搜寻其他类似的事

实上单凭这个引擎,我们检测出了13%的

件。因此,对于那些从未见过的攻击,我

恶意可执行文件。如果不是“女猎人”的

们也能通过机器提供的威胁反馈进行防御

存在,我们肯定发现不了。

保护。如今,仅仅基于这个技术为我们提

还有一个例子就是CADET,情景意

供的情报,我们阻止了超过10%的网络攻

识检测(Context Aware Detection)。在

击。

Check Point平台中我们能够访问和看到所 我们的第二个引擎叫“女猎人”,这

有的IT基础设施部件:网络、数据中心、

个引擎旨在猎杀恶意可执行文件,这可是

云环境、终端设备及移动设备。这意味

网络安全最棘手的问题之一。从本质上来

着,我们现在能够完整地去分析情景,而

说,一个可执行文件一旦运行简直无所不

不仅仅是检查独立的安全元素,并且能够

能,因为它没有越界,所以哪怕这种文件

发现该元素是通过电子邮件传播还是通过

正尝试做一些坏事,我们都很难察觉到。

网络下载,链接来源于接收的电子邮件还 是移动设备的文本信息,发送人,域名注

的人类分析师——来处理的任务交给机

不过好消息是,哪怕真的存在,那只

器。这些技术能够在我们庞大的数据日志

有极少数的网络攻击者会从零开始去编写

中找出规律,帮助我们开拓眼界。

病毒。换句话说,一个恶意的可执行文件

事实上,我们从检测的安全元素和情

随着Check Point在网络安全中越来越

通常都会与之前存在过的可执行文件存在

景中提取出了几千个参数。通过CADET人

多地思考人工智能发挥的作用,我们已经

共性,尽管这类文件经常隐藏在我们的眼

工智能引擎,我们得出了一个准确并覆盖

在威胁防御平台中开始探索基于人工智能

皮底下。

多元情景的结论。这真的很了不起。

的防御引擎。我们已经在一些引擎中应用

册时间以及注册用户等信息。

但是,当我们使用机器驱动的算法

到目前为止,我们的测试结果显示,

时,我们的分析范围就扩大了。使用沙盒

漏检测率提升了2倍,假阳性概率提升了

作为动态分析平台,我们可以运行系统中

10倍。你得记住:这些不仅仅是漂亮的数

Hunting)。这个引擎的

的可执行文件,并收集几百个实时参数。

字。在真实的网络安全世界里,引擎的准

目标是增强我们的威胁防御智能。例如,

然后,我们会将这些参数交给基于人工智

确性是至关重要的。

了人工智能技术。 第一个值得一提的引擎是“狩猎行 动”(Campaign

摘要: 总而言之,上面的例子已经为我们展示 了,通过所有可用的技术如何结合专业知识 和海量数据帮助我们找到应对网络安全的最 佳途径。 在Check Point,我们努力将人工智能与 其他所有现存的技术结合起来以改善重要的 安全指数。我们承认人工智能技术还不够成 熟,还不能独立应用,仍然需要大量的人为

Issue: 003; April 2018

输入来改善效力。但是,当人工智能技术能 够当做系统的一个额外应用层,并嵌入到覆 盖整个攻击图谱的专业引擎中时,它就能够 独当一面了。 网络安全行业一定要脚踏实地。随着我 们在人工智能的发展路程中不断前行,这 些技术会带领我们走得更远,朝着开发更智 能、更实用的威胁防御机制前进。

39


人工智能

Enigma(恩尼格玛)密码机

人工智能给网络空间安全带来的 非连续性挑战 人工智能、图灵和信息安全 人工智能在诞生之初就和信息安全具有割舍不掉的 紧密关系。

且能瞒过那些向它提问的人,使他们从谈话中误以为这是 一次人同人的对话。直到今天,图灵测试依然被视为是衡 量人类对智能机器追求的基本标准。

谈到人工智能,我们就不得不谈到艾伦・麦席森・

图灵和信息安全之间的联系发生在第二次世界大

图灵(Alan Mathison Turing)。图灵是英国数学家、逻辑

战。他成功破译了纳粹德国复杂严密的密码系统Enigma

学家,被称为计算机科学之父,人工智能之父。图灵之所

(恩尼格玛)密码机,让希特勒的战争部署赤裸裸暴露在盟

以被称为“人工智能之父”是因为他第一次提出了“机器

军面前。当时纳粹军方使用复杂而精密的通讯安全系统

思维”的概念。这被公认是人工智能的起点。

Enigma(恩尼格玛)密码机进行加密通信。这种当时先进

他提出一个连自己都很难回答的问题:如何去定义 他所创造的新一代“智能”?它会变得怎么样?它存在多 大的可能性?

的密码机由一系列不断随机变化的转子组成,其结果拥有 多达百万的三次方种不同可能性。 德国军方也因此自信地以为盟军无法在有限的时间

1950年,他那篇著名论文《计算机器与智能》

内破译他们的加密通讯系统。幸运的是英国人请来了图

(Computing Machinery and Intelligence)的正式发表。论

灵,图灵在得到的一份加密的文档中窥探到了常人难以察

文有史以来第一次提出了“人工智能”的概念,以及同样

觉的蛛丝马迹并依此建造起一部绝无仅有的巨型机器(解

广为人知的“模仿游戏”和“图灵测试”。后者旨在判断

码器)。果不其然,当时完成的第一台解码机通过极其复

计算机是否会有一天会变的像人类一样真正地进行思考并

杂、庞大的计算操作成功地破解了加密信息。

40

Issue: 003; April 2018


丘吉尔曾在回忆录中这样记载:“图灵作为破译了

那么,近年以深度学习为代表人工智能技术的爆

Enigma(恩尼格玛)密码机的英雄,为盟军最终成功取得

发,会不会成为一次新的非连续性技术创新而对信息安全

第二次世界大战的胜利做出了最大的贡献。”

产生致命的影响?

图灵在战争期间做出的第二个巨大贡献便是破解了 Tunny密码,同样是一种高度加密的代码,被用来让纳粹

人工智能对网络空间安全的非连续性挑战

元首希特勒和战场上的军官作直接通信。即便如此高度加

2018年2月,OpenAI联合牛津大学、剑桥大学等多家

密的通信,依旧在不久之后便被图灵的巨型解码机所破

机构于日前发布了《人工智能恶意使用报告:预测、预防

解。可以说,图灵用自己的天才,加速了二战的结束,也

和缓解》。报告全文长达101页,调查了恶意使用人工智

改变了整个英国、乃至全人类的命运走向。

能技术可能带来的安全威胁。报告中详细分析了 AI 可能

德国人的Enigma和Tunny被破解的命运是当时信息安

在物理安全、数字安全和政治安全等方面带来的威胁。从

全的核心密码体系遭遇的一次灭顶之灾。用当代的观点来

报告我们可以看到,人工智能带来的安全威胁已经远远超

看,这个事件的根本原因是他们遭遇了 “非连续性创新”。

出了传统的信息安全范畴,扩展到整个网络空间安全。这

图灵和图灵解码机的出现并不是基于一个持续改进的可

份报告可谓是人类针对人工智能给网络空间安全带来的风

以预测的技术基础,而是一个突然出现的、不可预料的

险的全面分析与思考。

事件。这就叫做非连续性创新。在图灵出现之前,德国人

正如我们所一直认为的,随着人工智能技术的发

的Enigma因为拥有多达百万的三次方种不同可能性而具备

展,人工智能这种技术就像是一把越来越有力量的锤子。

“确定安全”,但是遭遇了图灵的Enigma却变得及其脆弱

随着这把锤子的加强,随着机器人、3D打印、语音识

和不安全。可见,非连续性的技术创新对于信息安全具有

别、语义识别、视觉识别这些技术的逐步成熟,人工智能

极大的、甚至颠覆性的影响。

一定会对于人类的行为产生深远的影响。也正如人类历史

图灵建造的解码机

Issue: 003; April 2018

41


人工智能

中所有的技术一样,这个锤子也具有两面性:有人拿它为 全人类谋福利,也有人拿它做恶意活动。 这份报告将滥用人工智能的威胁分为了物理安全、 数字安全和政治安全三类。在物理安全方面,不法分子可 以入侵网络系统,将无人机或者其他无人设备变成攻击 的武器;在数字安全方面,人工智能可以被用来自动执行 网络攻击,它也能合成人类的指纹或声纹骗过识别系统; 在政治安全方面,人工智能可以用来进行监视、煽动和欺 骗,引发公众恐慌和社会动荡。 报告提出了几个有发展前景的领域,以便进一步的

Predictions-8 insights to shape business strategy》。报告根据

研究,从而能扩大防御的范围、使攻击效率降低或者更难

全球知名人工智能学者以及普华永道PWC为自身客户所

执行。

提供的人工智能融合服务建议集合而成,提供了八种在人

在报告的最后,作者还分析了攻击者和防御者之间

工智能领域的洞察预测。

的长期平衡问题,不过并没有明确地解决这个问题。虽然

其中,人工智能为网络空间战争(AI Cyberwar)带

这份报告看起来带着一些“人工智能威胁论”的色彩,但

来许多道高一尺魔高一丈的网络攻击及防御。报告提出:

是其所提到的各种威胁点却是的确存在于我们因为人工智

越来越多人工智能技术变成了恶意软件和勒索软件的帮

能而越来越便利的生活中的--邮件诈骗、恶意二维码、对

凶,而“空手夺白刃”也只存在于故事中。我们需要用

于各大公司机构进行的勒索病毒攻击……甚至美国大选都

AI的手段去防御AI的攻击。据报告调研数据,27%的企业

似乎被俄罗斯通过技术手段影响了。OpenAI成立的宗旨

表示,计划2018年开发基于AI和机器学习的网络防攻击系

之一恰恰是为人类赢得“人工智能保卫战”。

统。随之而来的,网络安全也许是许多企业人工智能变革 的开始。

用AI的手段去防御AI的攻击

人工智能时代已经呼啸而来。无论AI技术在网络空

我们直面人工智能带来的网络空间安全并非鼓吹

间层面究竟带来的是利大于弊还是弊大于利,我们只能坦

“人工智能威胁论”,也无意于为人工智能泼冷水,但是

然直面积极应对而无法选择。面对人工智能给网络空间安

我们希望当人工智能技术成为我们手中“于物无不陷”

全带来的非连续挑战,我们究竟能采用什么制衡措施,人

的“矛”的时候,我们也应该同时制作一面“物莫能陷

工智能自身是不是解决这个威胁的最佳选择,需要更多的

也”的坚盾。

专家学者的卓越智慧和工作。

如何预测预防这些对人工智能技术的恶意使用?《

正如登山家乔治马洛里说的:我之所以热爱登山是

人工智能恶意使用报告:预测、预防和缓解》报告提供了

因为“山就在那里”。人工智能带来的网络空间安全问

四个建议:

题“就在那里”。这可能是接下来激励无数勇于攀登网络

1.人工智能领域的决策者应与技术研究人员密切合 作;

空间安全高峰的精英和学者投身其中最简单又最朴素的原 因。

2.认真对待可能出现的各类型威胁;

人工智能产业技术创新战略联盟

3.像解决计算机安全一样制定成熟的实践方案; 4.积极寻求扩大参与讨论这些挑战的利益相关者和领 域专家的范围。

AI联盟成立于2016年底,旨在联合建设具有国际视 野和影响力的人工智能技术产业发展合作平台,推动人工

但是,这份报告并未给出任何现实结论以及实用条

智能产业技术创新和产品服务孵化,参与并逐步部分主导

款,只不无担忧的表示“如果没有制定出足够的防御措

国际人工智能技术标准,全力推动我国人工智能技术和

施,我们可能很快就会看到究竟哪类攻击会先出现了。”

产业竞争力的提升,目前有专家40余位,联盟成员100余

2018年3月,普华永道发布了一份报告:《2018

42

AI

家。

Issue: 003; April 2018


百度安全 有AI更安全 随着社会进入移动互联网时代,网络安全问题已经成为事关 国家安全的重大问题。如何应对AI时代的网络安全、保证国家和 社会的安全,也成为互联网公司们在人工智能时代下将要思考的 新命题。 作为全球最大的中文信息搜索引擎和信息服务公司,百度旗 下的百度安全正在这场AI安全保卫战中积极行动,展示着自己的 实力与社会责任。 作为百度公司旗下,基于人工智能、大数据等核心技术打造 的领先安全品牌,百度安全是百度在互联网安全18年最佳实践的 总结与提炼。百度安全旨在以AI为核心构建安全生态系统,依托 人工智能和大数据,面向企业及个人用户提供安全解决方案,实 现AI时代的产业共赢。

Baidu’s AI expert Lu Qi said the new open-source pla orm is a ‘win-win situa on’. PHOTO: HANDOUT

随着人工智能时代的到来,百度安全认为,当前的AI安全主 要集中在两个方面:第一是传统安全遇到的几乎所有问题在AI安 全时代都存在,而且云、管、端上的传统安全问题更加集中。二 是AI时代还诞生了新的安全挑战,即针对机器学习本身的安全问 题。 一方面,AI技术不断进化增大了层出不穷的IoT设备终端上

作为百度公司旗下,基于人工智能、大 数据等核心技术打造的领先安全品牌, 百度安全是百度在互联网安全18年最佳 实践的总结与提炼。

的安全风险,传统的程序和数据安全变成了用户人身安全;另一 方面,因为图像识别算法在安全上的设计不足,AI时代诞生了以

辆安全增加了入侵的途径,黑客甚至可以不经任何物理接触就破

机器视觉为主的新安全挑战。比如,自动驾驶汽车的判断力集体

解车载控制系统,从而严重威胁到驾乘人员的人身安全。目前,

失灵、IoT体系被黑客控制、金融服务中的AI服务突然瘫痪、企业

百度安全已经成功模拟出未来可能出现的针对车载系统的远程威

级服务的AI系统崩溃等情况一旦出现,必将对经济、社会、金融

胁,如定位篡改、速度控制破解等,从而可以指导车载系统的研

等各方面产生难以估计的负面影响。

发人员在系统推出之前就堵上相关漏洞,并防范可能出现的类似

万物互联时代,黑客可以利用各种漏洞劫持智能终端设备,

攻击路径,保证车载系统的严密、完整和安全。

智能门锁、网络摄像头、智能电视、温控器无一不能幸免。2017

百度“人脸识别技术“已经深入到人们日常生活的众多领

年7月5日,百度在AI开发者大会上推出了国内首个致力于提升智

域,人脸支付、人脸闸机让人们的生活变得便捷。在这些服务背

能终端安全的开放解决方案——百度锐眼。

后,百度通过人脸核身、活体检测、证件识别、人脸对比等多种

百度锐眼拥有自主研发的安全检测引擎和海量漏洞样本分析

技术能力,捕获当前用户照片并与公民身份信息进行比对,实现

经验,作为国内首款针对IoT设备系统安全的检测平台,满足各

在线用户身份验证。99%精确度帮助用户识别业务场景中的人是

种形态的智能设备的安全检测需求,深度保护接入DuerOS的智能

否为「真人」且为「本人」,从而更加安全有效地完成身份核

设备,并制定安全规范,保证DuerOS生态的安全性与稳定性。在

实,为客户提供便捷和高效。

百度未来智慧客厅,通过与小度在家简单的对话,即可唤醒IoT设 备,感受未来智慧生活的安全新体验。

从IoT安全,到自动驾驶安全,再到人体密码破解安全,百 度安全涉及的均是当前科技前沿的最新领域,就此而言,百度安

在百度最具影响力的自动驾驶安全领域,百度安全肩负着自

全在这些前沿领域的AI安全防护水平不仅在国内领先,技术水准

动驾驶AI安全的重任。如今,随着无人驾驶和车联网的发展,车

在国际上也是属于前列。百度安全,正在引领全球AI安全行业迈

联网应用的多样化、未来汽车车载系统的网络化和智能化也给车

向创新发展新阶段。

Issue: 003; April 2018

43


人工智能

京东金融风控体系架构

大数据和人工智能构建智能风控未来 京东金融 沈晓春 在这个技术和数据引领的互联网时代,支付风控面

在技术方面,作为一家科技公司,我们在前沿风控

临着诸多机遇和挑战:一方面海量的数据基础和日趋成熟

技术探索方面投入了巨大的精力和资源,例如人脸识别、

的技术手段给风控带来了更多的可能性,另一方面网络诈

生物探针、虹膜识别、掌纹识别等等。尤其在人脸识别领

骗、网络黑产的手段也随着技术进步不断升级迭代,给风

域目前技术已相对成熟,并在京东金融的交易场景、授信

控带来了新的挑战。在这样的环境下,京东金融作为一家

场景中得到了广泛应用。与此同时,我们也在不断深入研

服务金融机构的科技公司,一直在风控领域不断地探索,

究生物探针技术,通过用户使用手机的习惯,例如登录账

下文将针对大数据和人工智能在风控中应用,结合京东金

户时敲击密码的习惯、浏览网页时上下滑动的习惯等等,

融的实践经验进行分享。

来进行身份识别。这项技术在很多场景都可以得到应用:

在数据方面,首先依托于京东的零售业务,我们积

例如手机丢失情况下,可以防范手机被不良分子劫持、账

累了物流、消费、供应商、商户等诸多数据资源,为整个

户被他人操纵的风险;再如,在营销反欺诈的场景中,这

风控业务提供了坚实的基础;此外,金融服务也是京东基

项技术可以有效进行人机识别,防范羊毛党的机器攻击。

础业务的重要组成部分,在信贷风控中起到了至关重要的

除了应用广泛外,这项技术还有一个很重要的优势,就是

作用。这些数据在千万的场景中不断地更新迭代,形成多

保证用户使用产品过程的流畅性,达到无感知风控的目

维、动态的数据库资源。

的,大大提高用户使用体验。

44

Issue: 003; April 2018


下图是京东金融的风控体系架构,它是一套在四年

这个过程中,会有一些算法的嵌入,例如我们自主研发的

的经营中,经过不断迭代而形成的,构建在大数据平台上

基于大规模图计算的涉黑群体挖掘技术,可以基于已识别

的分布式计算体系。经过近年的发展,京东金融的前端

的黑名单,通过关系网络挖掘技术来实现对黑产群体的打

业务目前拓展出了11条业务线,风控体系也基于这些业务

击,这项技术可以让我们在具备较少客户信息的情况下,

线,完善形成了大数据平台上的安全、风险决策平台、风

凭借观测其关联群体所得到的结果,对客户进行正面、

险数据洞察及风险运营平台等模块,这些模块具备不同的

负面或可疑的评估。这种方法可以被广泛应用于交易反欺

功能,例如风险决策平台,会包含大量的反欺诈模型、反

诈、反洗钱领域。

洗钱模型;风控数据洞察体系,会根据其他平台的数据产

基于上文所陈述的所有数据基础和技术能力,以及

生相应的报告,提示我们需要关注的问题和需要完善的策

在各个应用场景中积累的经验、优化的算法,我们打造

略;风险运营平台,负责后端信息的输出、处理及查询,

了“安全魔方”这一反欺诈解决方案,可以向合作伙伴输

保证信息的回流,形成数据闭环,在这个过程中,我们可

出交易反欺诈、反洗钱、营销风控等多方面的安全防护,

以结合人工智能的算法,让所有模型进行自动的迭代和优

协助其提升风控能力。

化,应对前端场景中出现的新情况。

下面介绍一下我们风控体系架构中的安全模块。

京东金融风控团队一直致力于对黑产的打击,从 2016年12月至今,我们已经和各地警方合作,破获网络 黑产案件29起,打掉黑产团伙13个,抓获犯罪嫌疑人118

账户安全是所有业务的关键保障,是风控中至关重

人,避免用户损失上亿元。作为一家科技公司以及互联网

要的环节,全方位的账户安全分为前端和后端。在前端,

金融生态中的一员,京东金融愿意输出自身的科技力量,

需要具备有效的识别手段,包括设备识别、人机识别及生

联合产业各方共同创造更加稳定安全的金融环境。

物识别:设备识别就是针对每一个登录账户的设备建立独 特的ID,例如屏幕色深、屏幕分辨率、操作系统、操作 系统版本等,通过获取和分析这些特征数据,识别其中的 异常设备登录行为或可疑设备;人机识别,正如上文介绍 的,通过操作行为习惯分析识别机器攻击;生物识别,通 过比对用户指纹、面部特征等方式进行身份验证。经上述 技术手段输出的结果,会传输到后端的异常登录模型和账 户等级模型中,这两个模型可以在极短的时间内,对一次 账户登录的风险程度进行预测,得出相应的判断结果。在

个人简介

沈晓春女士,现任京东金融集团风险管理部总经理

沈晓春女士,曾先后获得北京大学英语与经济学双学士学位,美国斯坦 福大学管理科学与工程硕士学位。 毕业后,曾就职于美国信贷咨询公 司FICO、花旗银行、德意志银行和华夏银行,先后从事信贷和支付风 险政策、计量建模、资产管理、风险系统与机制建设等工作,拥有 十多年的国内外金融行业风险管理经验。 沈晓春女士于2014年8月加入京东金融集团,并担任京东金融集 团风险管理部总经理。在沈晓春女士的带领之下,京东金融构 建了完整的风控体系,为京东金融各项业务的开展打下了坚 实的风控安全基础。

Issue: 003; April 2018

45


CEO Corner

My Conversation with

Brad Arkin VP & Chief Security Officer at Adobe Systems by SUNNY SUN Editor’s note: In March, I had the opportunity to a end Adobe Summit 2018, the digital marke ng conference in Las Vegas, where Richard Branson, J.J. Wa , and Jensen Huang were invited to speak. Their works have inspired and touched millions of individuals. The theme of the summit was User Experience – geared toward crea ng and delivering personal and engaging experiences. The technologies Adobe has created are to empower customers in the digital space in order to create a user-friendly business environment and, in turn, build a be er community. Nearly everyone is familiar with, or uses one or two Adobe programs, such as Adobe Acrobat, Adobe Photoshop, Adobe Flash Player, etc. However, the marke ng solu ons Adobe is crea ng for the future, such as Adobe Sensei powered by AI, are powerful and awe-inspiring. I came to the Summit wearing a cyber-security hat and was lucky enough to chat with Mr. Brad Arkin, VP & Chief Security Officer at Adobe Systems. Mr. Arkin kindly shared his insights on Adobe’s evolving developments rela ng to cybersecurity, its studies and prac ces of compliance to ensure their systems and products are sound, solid, and have built-in control frameworks to fit this increasingly connected world. Below is a brief transcript of my conversa on with Brad Arkin.

What does Adobe do to ensure that safety measures are built into its infrastructure system? When I joined Adobe ten years ago, the Adobe Security Software Engineering Team (ASSET) was in place for the products’ safety, primarily focused on the desktop clients’ web-based products, such as Photoshop, Acrobat, and Flash Player. Our job was to look at what types of things could go wrong, build defense coding around them, and make it more expensive for the bad guys to achieve their desired outcomes. However, no matter what we do to build our defenses, there are always bad guys who are trying to be more clever than us, so we don’t want to give the impression that it is

46

possible to build software that will protect against all attacks known today, or those that may be developed in the future. What we can do is try to understand likely attack scenarios and make our software stronger and more robust to against those. We work with many different teams in the company. The process we use is called the Secure Product Lifecycle (SPLC), and basically it is the sequence of activities and tools we use during the process of building software. It starts before we type code, thinking about what we are going to build, then when we actually create code to use for security purposes, trying to understand any weak or vulnerable points. When the company evolved from shipping software on discs to hosting the software in the cloud, we were responsible for securing that so we extended the SPLC, thinking about not just code written on desktops, but creating code and infrastructure that are wholly utilized. In the old days, the web server ran on the physical hardware server. Nowadays, we do everything in the virtual environment, the tools we use to manage the environment it-

Issue: 003; April 2018


self representing software as well. The same technique we use to write secure software for desktop products, we now apply to the infrastructure layers--thinking about potential failure modes, such as things that may go wrong, and what has happened in the past, and how we can learn from them and do better. That is the secure product lifecycle. That is our best idea--things we borrowed from others, and things we developed ourselves, helping us create something that is as secure as we could possibly make it. <We also helped to offer secure coding trainings available for free through SAFECode. Helping the company be as secure as possible is only part of our work. To help give customers and key stakeholders what they’ve asked us for and help illustrate that we are secure, we have something called the Adobe Common Controls Framework (CCF). We take various industry standards like, SOC 2 for security and availability, ISO27001, FedRAMP, FERPA, GLBA, HIPPA, etc., , and boil them down to the essentials, finding commonalities and call it Common Controls Framework. We implement the CCF cross every service we offer, every service within the company, and also feature it in the “back office” traditional IT, for employees. By verifying that we are in compliance with these common industry standards, we can more easily show our auditors that a particular Adobe Control maps to corresponding SOC 2 controls and through that process we are able to achieve third party attestation that we are in compliance with each of these overlapping standards. For SOC 2 security and availability as well as ISO 27001, these allow us to demonstrate to our customers <particularly enterprise> that they can comply with their obligations, like FERPA, GLBA, and others. We continuously study new and emerging standards, frequently finding that we are already in compliance, because when we boil the standards down, they represent a set of controls that are a part of what we already have in the CCF. So to the prospective buyers of Abode services, we can give them an SOC 2 report that is not only comprised of Adobe claims, but is substantiated by independent auditors like, KPMG who tests and verifies that we are in compliance with these standards. Between our SPLC – we call it our homemade <security> recipe -- and our Common Controls Framework (CCF), which give us industry standards such as ISO27001, all together these make us feel like we have a good stance on our security posture. For your reference: Link to CCF: https://www.adobe.com/security/compliance.html Link to SPLC: https://www.adobe.com/security/engineering.html

We’ve also open sourced our CCF to companies who may find it helpful, as a starting point on their compliance journey, learn more here: http://blogs.adobe.com/security/2017/05/open-source-ccf.html My job will never be happier, more confident or

Issue: 003; April 2018

more settled. I do still think about what will come next, and how to be prepared. The items I mentioned above give us a good foundation, and each one we study tells us what is good, what we can learn, what new defenses we could build in, what new technologies are out there, and how we can prepare and protect ourselves. How does Adobe help its clients be GDPR-compliance ready? Like the common controls framework (CCF), we go through a similar mechanism. The Adobe legal team studies the laws and translates what the law says. What it means within our environment requires a lot of legal interpretation, what it means in the Adobe context, we then translate that into a set of controls and capabilities, which need to be supported by our products. In some cases, Abode is a data controller, and we have the full responsibility to live up to the GDPR, and in some cases, Adobe is a processor, which means our customers are the controller. We have to make sure we give our customers the capability of what GDPR requirements are. We have studied this for a long time, and the goal is that when May comes, GDPR is in effect, and that we will be in a good place for our own obligations, and for our customers to fulfill obligations to comply with GDPR. When reading the volumes of the GDPR, individuals sometimes see different things, what they might mean to them. Everybody in the industry is talking, trying to understand how to translate certain sections and how to map this to what the marketing service might look like. There are still lots of conversations going on, and we all are curious to see how the regulators interpret the same law we have been reading. That is the approach we have been taking, My team is also closely working with Adobe Chief Privacy Officer and her team on privacy regulatory obligation and compliance. You can’t have privacy without security; therefore, together we work through that process to be GDPR ready. Adobe GDPR: https://www.adobe.com/privacy/general-data-protection-regulation. html

How does one balance ‘Convenience’ vs ‘Privacy’? And how is it practiced within the Adobe Working Environment? Let me give you an example of what we do for our employees. The theme of this conference is about experiences, and how to make the experience better. When we think about the Adobe employees’ experience interaction with our IT backend system, such as how to fill out the time cards, or look at the lunch menu for the café. In the old days, you may have bunches of different accounts, an account to login to your email, login to the system, etc., --it gets very confusing. We studied what our options

47


CEO Corner were, then we converted them into one single sign on experience, each employee has a single account, single password, and when they authenticate something, first time in the morning, then they don’t have to login from then on because the system already recognizes it. However, they still have to type in their password once-a-day, which is then authenticated via a second factor, a code or a note pop-up on your phone, which must then be accepted. As we study this, we look for ways to even eliminate that function entirely. To simplify that, we have rolled out a Zero Trust Enterprise Network (ZEN). The idea is that when you use the device (laptop, phone, tablet, etc.), when we manage it by enabling it to connect to the corporate Wi-Fi. We could push a certificate onto the device via your phone and once you have the code to open up the phone, that is good enough for us to know that the device is in your possession. Therefore, since you know the code, after this first time enabling the phone, you don’t have to do it again. Our goal is that once you sign-in and authenticate, you will remain signed in for a good 90 days, instead of signing in every morning. This way, we get better security based on the authentication of the device, which is much better than having to remember passwords. In this example, we get better security, lower user friction, and it allows us to eventually eliminate VPNs. Because we know the device, we have lots of confidence that the device is in the status of where it should be, directly connected to our resources. That is a detailed example. We are trying to understand what it feels like for the end users of the technology and how can we remove friction or barriers between what they try to achieve and the steps they take in advance on the security side. That is an example of thinking through what that experience looks alike. Also, I was thinking of my own experience in the hardware store and the personal experience of the store staff knowing me, knowing what I need, being patient with me to explain details. All that greatly benefited me. We live in a world of mass actives going on and to get the better experience of a personal touch is dependent upon whether you want to be seeing the mountain or the ocean, with your own preferences and choices. When it comes to product design for our customers, who interact with their customers, we must provide the tools in order to make sure that experiences are really good and going well. At the same time, you need to verify consent so there are no surprises. These are very important in designing the experience. We give the tools to our enterprise customers, usable tools to make it work for their customers. Big Data has become such a corporate asset that any small breach could potentially jeopardize trust or snowball into a massive compromise, due to the connectivity. Does Adobe collaborate with any other security firms?

48

We do lots of work within the Adobe security team, and we work with vendors where it makes sense. We haven’t outsourced any internal security obligations to a third party. We buy firewalls and use the technology from different companies, using different products to create layers of security stacked from different companies, but we are the architects plugging in all these things together to make it work. It is Adobe security employees staffing the security operation center, doing the monitoring and responding, in the case that there might be an alert or incident that needs to be investigated. So probably every security company is in some way a provider to Adobe. Because we are so big, we buy stuff from almost everyone, but we don’t have a one single partnership that stands out. Our security as an integral part of what we do here at Adobe. More info on how Adobe engages with security community: https:// www.adobe.com/security/community.html

What are the big challenges facing the security industry? There are a lot. There is a desire, almost among all security guys, to have a perfect awareness and visibility into all the computers and all the data. The number of machines we have is growing so fast, and the lifespan of these machines is becoming so short. In the old days, you buy the machine and take out of the box, and five years later, it has depreciated but can be tracked via your book. In today’s environment, tens of thousands or hundreds of thousands of virtual servers are running in a virtual environment and the average lifespan, in some cases, may be one or two hours; they pump into the system, do some work, and go away. Being able to keep track of them, for every single thing, at every minute, presents a totally different challenge today from what it was a few years ago. I think it is manageable, but you need to be flexible. If you try today to use the same tracking method as 10 or 15 years ago, you are doomed to fail. You need to think about while in old days we kept track of everything perfectly, today that may be harder. What we are trying to achieve is to be consistent and maintain good computer hygiene. If a <virtual machine> image comes from an approved source, then it should be good when it goes out the door. Or if it only lives for eight hours, chances are nothing bad will happen within these short eight hours. We will be more confident it is in the state that it was intended because we have taken lots of care that the image is secure. There is just so much in the scale and velocity of how things are changing. Our experience today is when using machine learning techniques, it is good to spot things out of the ordinary. Most of time things are wired but benign, not malicious. Machine learning helps compliment our existing security techniques and provides additional layers to help us see the bigger picture.

Issue: 003; April 2018


2018 Data Breach Investigation Report

Top 20 action varieties in incidents DoS (hacking) 21,409

Loss (error) 3,740

Phishing (social) 1,192

Misdelivery (error)

by Verizon

973

Ransomware (malware) 787

C2 (malware) 631

Editor note: A most sincere thanks to Verizon for providing this opportunity to share with our readers their 2018 Data Breach InvesƟgaƟon Report. I have selected a few summary pages of key findings from the full 67-page report and am pleased to present them here. It is my hope that this will help bring more awareness and guidance to your organiza on in this increasingly connected and date-driven world in which we live. For the full report translated into Chinese, you may pick up a hard copy at the next CSA event. Sunny Sun, Editor

Use of stolen credentials (hacking) 424

RAM scraper (malware) 318

Privilege abuse (misuse) 233

Use of backdoor or C2 (hacking) 221

Backdoor (malware) 207

Theft (physical) 190

Pretexting (social)

1

Summary of findings

170

Skimmer (physical) 139

Who’s behind the breaches?

What tactics are utilized?

Data mishandling (misuse)

73%

48%

Spyware/Keylogger (malware)

122

perpetrated by outsiders

121

of breaches featured hacking

Brute force (hacking) 109

30%

involved internal actors

2%

involved partners

included malware

Capture app data (malware)

17%

Misconfiguration (error)

102

of breaches had errors as causal events

2%

17%

50%

12%

12%

11%

Who are the victims?

What are other commonalities?

24%

49%

15%

76%

featured multiple parties

80

Publishing error (error) 76

0%

were social attacks

of breaches were carried out by organized criminal groups

Incidents

28%

20%

40%

60%

80%

100%

Figure 4. Top 20 threat action varieties (incidents) (n=30,362)

involved privilege misuse

of breaches involved actors identified as nation-state or state-affiliated

This year, we have over 53,000 incidents and 2,216 confirmed breaches.

of breaches involved physical actions

of breaches affected healthcare organizations

of non-POS malware was installed via malicious email1

Top 20 action varieties in breaches of breaches involved accommodation and food services

Use of stolen credentials (hacking)

of breaches were financially motivated

399

RAM scraper (malware)

14%

312

13%

were breaches of public sector entities

of breaches were motivated by the gain of strategic advantage (espionage)

Phishing (social) 236

Privilege abuse (misuse)

58%

201

68%

of victims are categorized as small businesses

Misdelivery (error)

of breaches took months or longer to discover

187 1.

Use of backdoor or C2 (hacking)

We filtered out point-of-sale (POS) malware associated with a spree that affected numerous victims in the Accommodation and Food Services industry as it did not reflect the vector percentage across all industries.

148

Theft (physical) 123

C2 (malware) 117

Realtive Realtive prevalence prevalence of of amplified amplified DDoS DDoS attacks attacks

Backdoor (malware)

100%

115

Pretexting (social) 114

Skimmer (physical)

Amplified

109

Brute force (hacking) 92

75%

Spyware/keylogger (malware)

Percent of DDoS attacks

74

Misconfiguration (error) 66

Publishing error (error) 59

Data mishandling (misuse)

50%

55

Capture app data (malware) 54

Export data (malware)

Not amplified

25%

Breaches

51

SQLi (hacking) 45

Password dumper (malware)

0%

45

20%

40%

60%

80%

100%

Amplification DDoS attacks over time (n=3,272)

2013

Issue: 003; April 2018

2014

2015

2016

2017

Figure 5. Top 20 threat action varieties (confirmed data breaches) (n=1,799)

49


Feeaattu Featured urreed d Ar Ar cle cle le

Nvidia’s Vision for the

AI Future

By RYAN LUO

50

People say artificial intelligence will soon arrive. In reality, consumers and companies have been using artificial intelligence for quite some time. Siri is one example. Face recognition on Facebook is another. They are in a group called Artificial Narrow Intelligence, an AI focused on completing one specific task very well and often better than humans can. The program is limited, however, due to it’s narrow functionality, it’s lack of conscience and its inability to demonstrate genuine intelligence. So why all the hype? It is due to large breakthroughs in two subsets of AI: machine learning and deep learning. Here’s a great picture illustrating where machine learning and deep learning are located in the big circle that is AI: So let’s break those concepts down. Machine learning is a technique used to create Artificial intelligence. It essentially gives computers the ability to learn without specifically creating a code-base or giving specific instructions as is the case with traditional software programs. What happens is that an AI program is trained using machine learning, given large amounts of data and algorithms that “train” it how to accurately perform a task. Issue: 003; April 2018


Deep learning is a subset of machine learning, an area that has seen major breakthroughs, and a large reason why AI has become such a major discussion point. It involves a more rigorous training of an AI program, with the use of something called a neural network, inspired by how our own humans brains work. Until recently, neural networks were shunned by the AI community, largely because it required so much processing power and was therefore impractical. However, recent changes have made deep learning far more practical.

CPUs TO GPUs: First off, Moore’s Law has significantly slowed down. To explain briefly, this law observed that the number of transistors in a dense integrated circuit doubles approximately every two years. Basically, this meant the processing computing power would improve by 50% every year. You can thank Moore’s Law for the absurdly rapid development of new technol-

Issue: 003; April 2018

ogies ranging from smartphones, computers to the cloud, and software products. Having more processing power just meant that tech companies could do more, at a cheaper price and at a faster rate. But since the transistors in the circuits can’t get any smaller (because they’ve reached the atomic level), the level of improvement has dropped from 50% to 10%. So what does that mean? It simply means that chipmakers and companies will have to be more creative when it comes to producing greater processing power. That can mean changing the architecture of the processing units such as by adding more transistors. This is crucial as more and more companies place their bets on AI, which uses an enormous amount of processing power. Enter Nvidia, a company whose main product is GPU (Graphic Processing Units), made for PC hobbyists and gamers around the world and who have recently placed their bid in the race for AI. But how can a company that started off building GPU (Graphic Processing Units) for gamers compete in the AI space?

51


Featured Ar cle

Well, turns out that was the original intention of the founders from the beginning. Ok maybe not the original intention. The founders didn’t exactly set out to work on AI when they started this company, but they did recognize that the next wave of computing would rely more on GPUs rather than the standard CPUs. Why? Simply put, a GPU’s architecture is different than that of a CPU’s, and its function is different as well. GPUs have nearly 10 times the number of transistors than CPU’s and were originally created by Nvidia to help with 3D Rendering. If you ever tried rendering a long video with special features on a standard mac or pc, you’ll notice it takes forever. This is because it is graphically intensive. It didn’t take long for the founders to realize that it could be used in a multitude of applications such as financial modeling, cutting-edge scientific research, cryptocurrency mining and, yes, the testing and training of AI systems. Thus, the advent and creation of improved GPU hardware, especially from Nvidia, have made deep learning and neural nets more practical. GPUs are the perfect tool to help improve AI programs, an area in which Nvidia has extensive experience and market share. But why do AI programs need so much processing power? Two words: Big Data. The main difference between an AI program and a regular software program is that software programs are lines of code that usually follow a process or a set of rules to solve a problem or perform a function. Basically, the human is telling the computer exactly what to do. But the computer cannot think on it’s own, well at least not yet. But in order for an AI program to become an actual AI, it requires data, a lot of it. The way Deepu Talla, the VP and GM of Autonomous Machines at Nvidia tells it, deep learning is just software writing software. In order to create an AI program using deep learning, one must train it by presenting a lot of examples. For example, let’s say you wanted to teach the AI program to recognize a German Shepherd from images on Google. Essentially you would show pictures (data) to the program and have it submit a response. Every time the AI program gets it correct, it is logged so the AI knows that the picture has a German Shepherd. If it gets it wrong, it is also logged and the program learns that the other picture does not have a German Shepherd, but that it had a picture of another breed of dog. This way, the program learns. But in order for it to get smarter, thousands, even millions of images need to be tested with the AI program, and one can imagine this takes a long time. GPUs allows the program not only to filter enormous

52

amounts of data, but to also do it quickly. Currently, there are AI programs that have become more accurate than humans at classifying images (accurately determining what is in an image), and areas in voice recognition and map location have also vastly improved. Deepu calls this new era of reliance on GPUs a new way of computing and believes that the aggregate of deep learning and GPU computing will be the two key factors in improving AI.

HOW DEEP LEARNING PRODUCES INSIGHT: In order for deep learning to produce AI programs that provide reliable and accurate insight, two major operations must take place: training and inference. We already covered training a little earlier with the German Shepherd example. Inference, the next step, is the process of deploying and applying the newly-trained neural net to new sets of data to hopefully provide accurate and reliable insights. Oftentimes, these programs are deployed either through the cloud or through hardware devices like Nvidia’s GPU. Both have their advantages. Cloud, for instance, means that one can rent out GPU servers without having to purchase them outright (one can easily cost $5000). It also means that the program can access data from a data center, making use of the data already available and deploying it for consumer use. Deepu argues, however, that although tempting in a world of greater connectivity, the cloud is not the best place to deploy AI programs. He gives 4 reasons: He argues first that because the cloud is linked to the internet, the latency is high, and so programs are likely to lag. That would obviously be very unfavorable for a self-driving car, which must make decisions in literal seconds, and a 1-2 second lag could be potentially fatal. His second reason relates to connectivity. To access the cloud, one must be connected through the internet, but their are locations around the world that still do not or have poor access to internet connectivity, making such programs unreliable or even unusable. His third reason has to do with bandwidth. A good example is autonomous cars. When autonomous cars navigate the streets, they produce a ton of data, more bandwidth than any internet connection can provide reliably. And finally, there is the issue of privacy. Any data stored on the cloud is usually public, or at the very least, much more accessible to malignant parties. All these reasons are why Nvidia believes neural nets deployed onto actual hardware devices are the best option.

Issue: 003; April 2018


Jen-Hsun Huang, CEO of NVIDIA, launched the NVIDIA GPU Cloud (NGC) pla orm.

NVIDIA’S VISION: And that is where Nvidia will shine. As GPU’s become the go-to product to create and train AI programs, Nvidia has a unique advantage where they control the ecosystem, both the hardware and the software. Nvidia calls this platform CUDA, a single architecture, able to use either local servers or the cloud and have a consistent platform and software that everyone would be able to take advantage of. This was something the company saw early on as they noticed the transition to GPUs, and something that continues to fuel their continued success. Think of it as the new App store that Apple created long ago, a platform on which to create the next generation of AI programs. There Nvidia hopes to be at the forefront of AI development, and create a consistent and uniform roadmap that anyone, from industry professionals to aspiring high-schoolers, can utilize. That is Nvidia’s vision for the future of AI.

Issue: 003; April 2018

Of course, Nvidia has a long way to go. And not just them, the entire industry still sees challenges in deep learning. For example, deep learning in the area of speech and voice classification works very well but it is more difficult for those who have disabilities in those areas. Moreover, currently most programs are trained through supervised learning, where humans help tune the AI and make sure they have the right answers. Right now though, machines are pretty bad at unsupervised learning, but there is hope that one day the machines will be adequate enough to learn on their own. In addition, while the area of perception (where a program can identify what is around it or in front of it) has mostly been solved, programs still need to apply meaning to what they see and interact accordingly. Regardless of the industry though, Nvidia is in a good spot. As they continue to improve their hardware and software, they are poised to be an the center of AI development. Just so long as their clients can afford the $5,000 price tag.

53


Featured Ar cle

The Industries Nvidia’s GM Believes Future Will be Most Impacted by AI

the program, making it smarter as it learns more and more. And this is crucial, because when it comes to autonomous vehicles, Deepu states that the number one priority is safety. Doing so means that the software has to be perfect, and constantly updated in order to take into account scenarios that may never have been considered. For example, what happens if a car with a human driver stops at a light and the light is green, but because the driver is distracted doesn’t go? Does the autonomous car honk at the driver or drive around? These are the challenges developers working on AI must grapple with. With constant testing, however, these scenarios can be streamlined through simulations by companies like Nvidia. These simulations can be done through Nvidia Drive, Nvidia’s scalable AI platform for developing autonomous driving capabilities.

Autonomous Machines are the next industry to be significantly impacted by AI.

Deepu Talla, VP and GM of Autonomous Machines at Nvidia

T

he belief that Artificial Intelligence will impact all industries in the future is something generally held among AI experts. However, Deepu Talla believes there will be three markets where AI will have significant advances and a large impact: autonomous cars, autonomous machines, and autonomous cities. Most cars on the road today, in terms of autonomy, are level 2. These cars provide basic assisted driving such as automated braking and lane keeping using a combination of Lidar and Camera technology. Level 4 is where most of the industry wishes to head toward, where the car will be mostly in control but the steering wheel is still present for those who wish to take back control. Level 5, the highest level, aspires to have no steering wheel, and to simply transport a person from point A to point B. Level 5 is the game-changer that Deepu states will change the very culture of cars and how consumers view them. Before these cars take complete control from their human counterparts though, it will continue to require constant improvement in the software. More and more data from every test drive has to be fed into

54

For many, including Deepu, autonomous machines or robots are the ultimate incarnation of AI. When people first talked about AI, it was always C3PO from Star Wars or the T-1000 from Terminator. It is fitting that AI will help make great strides in the areas of robotics. However, the difficulty of robots is not to be underestimated. Autonomous cars are difficult enough, but compared functional robots, self-driving cars are a piece of cake. This is primarily because robots, especially those with arms or tactile features have to navigate a 3D plane with several degrees of freedom, as opposed to cars that simply go from Point A to Point B (and make sure they don’t hit anything). The benefits are numerous though. Just in time manufacturing is one area where robots could make a revolutionary change (less than 10% of the processes are actually automated). Pizza delivery is another area. Delivery of packages, especially in hard to reach areas, are great for those who live in more rural areas and wish to get the latest version of Alexa, for example. Agriculture is another aspect. According to Deepu, there are lots of areas suitable for farming that aren’t utilized simply because there are no people to farm them. Robots could help sort that out. One of the biggest areas of impact, though, is the creation of supportive robotics. As people live longer and longer, having robots take care of the elderly, checking up on them, supporting them with basic life functions (with endless patience and no judgement), will be a great benefit to society as a whole.

Issue: 003; April 2018


AI and the Future of Work “Software is eating the world.” Marc Andreessen spoke true as tech startups upended traditional companies in every industry. Even now, old corporations struggle as technology continues to improve and advance, and the threat of disruption is everywhere. Nowhere is that more felt than those who worked in blue collar industries, where many suddenly found themselves out of a job, replaced by machines and software. It was not a pleasant feeling, and with the inevitable advent of AI, even white collar jobs have reason to be concerned.

F

or a long time many people in industries like consulting, accounting, or law felt that software could never replace them. The ability to problem solve and develop human relationships was something many felt software could never do. AI is different though. AI is built with the intent of bridging gaps, training a program to come up with it’s own insights and to be able to think proactively. If these predictions turn out to be true, many industries believed to once be untouchable by technology, may not be so untouchable after all. On the other hand, however, there still exist many people who believe there is no cause for concern, such as Bridget Karlin, CTO & VP of IBM’s Global Technology Services, who heads IBM Watson, an AI platform for business. Built on the cloud and used for looking at data, then understanding, reasoning, and learning from it, IBM Watson gained prominence after beating two renowned Jeopardy winners. It was considered to

Issue: 003; April 2018

be one of the best use cases of AI, and is currently used as a platform for business. For Bridget, AI is a significant market opportunity that will augment humans, help them better solve problems, and open up new career opportunities. To make her point, Bridget cites a use case of IBM Watson, where a 66 year old woman was diagnosed with leukemia. Doctors put her through chemotherapy, yet despite the treatment the woman only got worse. They then brought in IBM Watson, and after filtering all the test data into the machine, within 10 minutes the system discovered that the woman had a different strain of leukemia and therefore required a different treatment. And IBM is not the only player to see AI as a new market opportunity. Nvidia has been increasing its efforts in the AI space, creating both the hardware and the software to create more accurate AI programs. Moreover, they have been heavily involved with their

55


Featured Ar cle

partner GE, who often bring products to hospitals. Almost 50,000 terad bytes of data is produced Bridget Karlin, CTO & VP of IBM’s Global Technology idwithin hospitals using NvidServices, Head of IBM 0% of ia’s hardware, although 90% Watson the data isn’t processed. However, the company hopes to usee and derive i tto speed d ttransinsight from the data, helping action times and learn to identify and more quickly diagnose diseases. Furthermore, with data from both government and industry, Bridget and Ned agree that even the chance to predict natural disasters could be possible with a well-trained AI Program. Now, although the potential for AI is huge, historically disruption has cost people jobs, especially throughout the tech world. Although there is a belief in economics that while emerging technologies eliminate jobs, they often in turn create new ones of a higher caliber and technical sophistication. In terms of how to navigate the new advent of AI, the National Telecommunications and Information Administration (NTIA) has some thoughts. The NTIA is not a regulatory agency but rather serves as an advisor on emerging technologies and how h the government and the rest of the US can ensure that emerging technologies ben t all Americans, not benefi ju just a select few. Over the p past couple of years, peop ple in the agency, like Deputy Associate Administrator Evelyn Remaley, have been studying the different aspects o of AI, trying to underst stand the technology and how to best position themEvelyn Remaley, selve selves to optimize it for both Deputy Associate public and private sectors. Administrator

nomenon, they’ve asked themselves questions such as what best the practices are that should be developed for the new industry and what commitments can be made to address issues such as job displacement or cybersecurity. Evelyn believes some ideas floating around are multi-state quarter policy processes, processes that will bring both civil society and industry together to help develop products that are safe, affordable, and competitive in the US. They also encourage policy makers to educate themselves on all aspect of AI and support public and private partnerships that use AI in positive ways. As it becomes more and more apparent in the business world, she states that it is important for agencies like the NTIA and those in Washington to be fully versed in the industry. This must be done to ensure that companies remain competitive but to also ensure that the government does not unfairly regulate the businesses that use AI in a business-friendly way. Evelyn does make it clear though that tech companies and those working on AI are responsible for guiding the development in a responsible and ethical way, ensuring that there are safeguards when AI programs are trained and protected from issues like bias. She and the NTIA have published 4 principles that should be observed when it comes to the development and uses of AI:

1.

AI must always augment human intelligence, with an emphasis that AI is used to help make humans smarter and better.

2.

There must be transparency within the industry, where AI is being applied and what data is being used is something that should be public knowledge

3.

Application and data usage must be public, but data and insights from AI processes may stay with clients. Essentially, as Enterprise businesses gather data and derive insights through their AI programs, the data they gather fundamentally belongs to their customers.

4.

There must be an industry commitment to helping students and workers at large develop skills necessary to succeed in this new industry.

Whil While studying this phe-

56

Issue: 003; April 2018


Principle number 4 is something Evelyn emphasizes, as those employees are living in a time when things are moving even faster than before. She recommends policies such as apprenticeships, helping people re-skill and re-learn in order to get new jobs as opportunities to help bring new jobs and ace, and enopportunities to the AI space, he skills suring that workers have the obs. to take advantage of such jobs. M Companies like IBM and Nvidia have done well to follow these 4 principles, according to Bridget from IBM Watson and Ned Finkle, VP of External Affairs at Nvidia. Per Bridget, the biggest users of IBM Watson is IBM themselves, whereby they were developed as service platforms to derive insightss elp from operational data to help with their business.

Ned Finkle, VP of External Affairs at Nvidia

One example is with their IT infrastructure, whereby they combine analytics with automation to keep it healthy and functional. In addition, they use it to predict incidents and catch issues within the company and with clients. One such issue is in the cybersecurity industry. As the world gets more and more connected and features like privacy and data become more vulnerable, AI can be used to not only observe malicious activity but also come up with potential solutions to deal with such issues.

products that augment humans, and providing a solution to employees who will be disrupted by this new technology. IBM is already taking steps to help educate and train the future workforce through their program called PTECH. This program allows high school students to gain a diploma at no cost to parents and assures them a job right after graduation. Bridget claims they have about 50,000 kids going through the program and will help fill the shortage of roles, such as those who need to train AI systems to make them smarter, and those who will explain the insights and verify the correct use of data by AI programs. Nvidia, on the other side, is working to create partnerships with colleges and employers around the country in an effort to help prepare them for the “new collar industry.” Surprisingly, Ned states that there is a shortage of data scientists at Nvidia, providing incentive for the company to invest in the education of new talent. As companies continue to advance their AI programs and make headways in the industry, government and businesses will continue to determine how best to bring all perspectives to the table. One such perspective involves the access of Big Data. Data is essential for AI, the question of how companies will use it without compromising its integrity is a huge topic. While companies feed more and more data into their AI programs, their data will become their competitive advantage. But that is not the only concern. An executive order was issued recently which found that the incentives in the marketplace weren’t always properly aligned to promote issues like security and privacy. As each company attempts to work fastest to get the best AI products to market, security may not be their priority, which is something Evelyn argues must change. Like it or not, it is clear that the AI revolution is coming. Whether the disruption will be good or bad and who ultimately benefits or not will ultimately be determined by the ecosystem private business and government decide to create.

In order for this happen successfully though, Bridget emphasizes that both the industry and government need to come together. The advent of AI is too great a problem for either party to solve on their own, and she believes it is their responsibility to come up with an ecosystem that ensures the marketplace is fairly and securely using data, creating competitive AI

Issue: 003; April 2018

57


Cybersecurity Policy

What is the GDPR? GDPR carries provisions that require businesses to protect the personal data and privacy of EU citizens for transactions that occur within EU member states, and regulates the exportation of personal data outside EU. The law protects individuals in the 28 member countries of the European Union, even if the data is processed elsewhere.

Why the GDPR? Europe has been known to have more stringent rules about how companies use the personal data of its citizens. The public views privacy seriously due to its recent year of increased high-profile data breach. Based on a data privacy and security studies by RSA, 80% of consumers consider a top concern for lost banking and financial data, 76% of consumer has concerns to the lost security information or identify information, among 7500 people surveyed in five countries.

Types of Privacy Data GDPR protect? • Basic identity information such as name, address and ID numbers • Web data such as location, IP address, cookie data and RFID tags • Health and genetic data • Biometric data • Racial or ethnic data • Political opinions • Sexual orientation

Companies to Be Affected by GDPR Any company that stores or processes personal information about EU citizens within EU states must comply with the GDPR, even if they do not have a business presence within the EU. Specific criteria for companies required to comply are: • A presence in an EU country. • No presence in the EU, but it processes personal data of European residents. • More than 250 employees. • Fewer than 250 employees but its data-processing impacts the rights and freedoms of data subjects, is not occasional, or includes certain types of sensitive personal data. That effectively means almost all companies. A PwC survey showed that 92 percent of U.S. companies consider GDPR a top data protection priority.

58

GDPR Affects Third-party and Customer Contracts? The GDPR places equal liability on data controllers (the organization that owns the data) and data processors (outside organizations that help manage that data). All existing contracts with processors (e.g., cloud providers, SaaS vendors, or payroll service providers) and customers need to spell out responsibilities. The revised contracts also need to define consistent processes for how data is managed and protected, and how breaches are reported.

What if Not Compliance with the GDPR? The GDPR allows for steep penalties of up to €20 million or 4 percent of global annual turnover, whichever is higher, for non-compliance. The big unanswered question is how penalties will be assessed. For example, how will fines differ for a breach that has minimal impact on individuals versus one where their exposed PII results in actual damage? Don’t forget about mobile: According to a survey of IT and security executives by Lookout, Inc., 64 percent of employees access customer, partner, and employee PII using mobile devices. That creates a unique set of risks for GDPR non-compliance. For example, 81 percent of the survey respondents said that most employees are approved to install personal apps on the devices used for work purposes, even if it’s their own device. If any of those apps access and store PII, they must do so in a GDPR-compliant manner. That’s tough to control, especially when you factor in all the unauthorized apps employees use.

Issue: 003; April 2018


The Essentials of GDPR • Have a legal basis for controlling and processing personal data • Collect and process personal data only for lawful purposes, and protect it at all times. • Maintain documentation of all data processing activities • Perform an assessment on the risks to the rights and freedoms of controlling and processing personal data, and develop organizational and technological mitigations for the identified risks., including the third party • Be able to demonstrate compliance with the GDPR, through organizational and technical • measures, and the on-going assessment of the strength and suitability of these measures • Meet the elevated standard of consent, anytime consent is the legal basis for processing data • Minimize the amount of personal data processed, a principle called data minimization • Notify the supervisory authority of a data breach within 72 hours of becoming aware of the breach • Appoint a data protection officer (Article 37), who can be an employee for one organization, a representative for a group of organizations, or an external consultant. • Carry out a data protection impact assessment (DPIA) for envisaged processings that are “likely to result in a high risk to the rights and freedoms” of data subjects, and secure the participation of the designated data protection officer in the assessment (Article 35). • Ensure the protection of data during processing activities, through the implementation of “appropriate technical and organizational measures” • Abide by specific conditions when processing special categories of data. • Respond promptly to requests from data subjects about the personal data you control, process, or transfer about him or her • Update and correct any inaccurate personal data held about a data subject, by various means including a supplementary disclosure from the data subject (Article 16). • Permanently erase any personal data about a data subject under specified conditions • Be able to temporarily restrict the processing of personal data on request from the data subject under certain conditions • Supply personal data concerning a data subject in a “structured, commonly used and machine-readable format” in response to a request for data portability

sions about people rather than just automated processing and profiling, such as human intervention • Prevent data from being transferred outside of the EU to “a third country or to an international organization” unless specific protections are in place • Ensure additional restrictions are in place to safeguard the handling of personal data of children when services are offered directly to children SOURCE: https://www.csoonline.com/article/3202771/data-protection/general-data-protection-regulation-gdpr-requirements-deadlines-and-facts.html https://www.rsa.com/content/dam/pdfs/7-2017/A-Practical-Guide-for-GDPRCompliance-Osterman-Research.pdf

REGULATORY COMPLIANCE • GLBA—The Gramm-Leach-Bliley Act (GLBA) requires financial institutions to safeguard their customers’ personal data. A “GLBA-Ready” Adobe service means that the service can be used in a way that enables the customer to help meet its GLBA Act obligations related to the use of service providers. • HIPAA—The Health Insurance Portability and Accountability Act (HIPAA) is legislation that governs the use of electronic medical records, and includes provisions to protect the security and privacy of personally identifiable health-related data called protected health information (PHI). By law, healthcare providers and insurance companies that have any sensitive PHI can only use products that are HIPAA-compliant. Certain Adobe services can be configured to be used in a way that supports HIPAA compliance by a customer that is a “covered entity” under HIPAA and signs Adobe’s Business Associate Agreement (BAA). • 21 CFR—The Code of Federal Regulation, Title 21, Part 11: Electronic Records; Electronic Signatures (21 CFR Part 11) establishes the U.S. Food and Drug Administration (FDA) regulations on electronic records and electronic signatures. Being 21 CFR Part 11 compliant means that Adobe services can be configured to be used in a way that allows pharmaceutical customers who engage with the FDA to comply with the 21 CFR Part 11 regulations. • FERPA—The U.S. Family Educational Rights and Privacy Act (FERPA) is designed to preserve the confidentiality of U.S. Student education records and directory information. Under FERPA guidelines, Adobe can contractually agree to act as a “school official” when it comes to handling regulated student data and therefore to enable our education customers to comply with FER- PA requirements. SOURCE: https://wwwimages2.adobe.com/content/dam/acom/en/ security/pdfs/AdobeCloudServices_ComplianceOverview.pdf

• Have alternative methods available for making deci-

Issue: 003; April 2018

59


Malware/Ransomware

Data Breaches and Hacks Mark an Eventful 2017 in Cybersecurity Cybersecurity experts never tire of warning that at cynd berthreats are only going to become more frequent and n more inventive. Effectively, this means that we’re in for a more and more excitement on the cybersecurity front though this excitement will not exactly be synonymous with fun. But let’s first take a look at what happened in 2017. eaches Massive data breaches sing and ingenious hacks using d cyberweapons pilfered from the NSA, that’s what happened. The star of the year, at least in terms of media attention, was the EternalBlue exploit of Windows — Server Message Block— coll a file-sharing protocol verpresent on every single ve sed over sion of Windows released the last 15 years. EternalBlue is reportedly a cyberweapon developed by the NSA and snatched by hackers, who then used it in two of the most notorious ransomware attacks of the year, WannaCry and NotPetya. ds The two attacks combined affected hundreds genthousands of businesses and several government agenusion as cies in Ukraine and Russia. There is still no conclusion ili to the source of the attacks, although based on the prevailing political sentiment in Europe and the United States, WannaCry was suggested to have been carried out by North Korea and NotPetya was attributed to Russia. However, no evidence has been found to prompt the making of any definite conclusion. EternalBlue turned out to not be the only Windows exploit used for cyberattacks. In October last year, a ransomware dubbed BadRabbit struck several Russian news agencies and Ukrainian transport infrastructure, including the Kiev underground and Odessa Airport. There were also reports of BadRabbit hitting targets in Turkey and Bulgaria. The attackers used an exploit named EternalRomance this time. Last year was also notable for vulnerability revelations like Broadpwn: the heap overflow vulnerability present on more than a billion iOS and Android devices across the world. The discovery of this vulnerability alerted the cybercommunity’s—and device users’—attention to the fact that not all cyberthreats originate in the software. Apple and Google made patches for the vulnerability but it remains unclear whether

60

any devices d were compromised and even if they weren’t, some wa warned that now hackers will aalso target hardware weak spots. Broadpwn was not the only one, either. BlueBorne was also discovered last year, with more than 5.3 b billion devices found to carry th the vulnerability, which was basic basically a bug in the Bluetooth system of the devices. According to the people who discovered it, hackers could d gain acc access to the device through the b ug and take control of it to carry out bug man-i man-in-the-middle attacks. Patches wer issued for the bug but the were fa remains that no device is fact s safe as there is no guarantee these are the only vulnerabilities in the billions of connected devices that are being used globally every day. There were also some st stunning news stories in the dat breach department last year, data inclu including the revelation that Uber had beco become a victim of a data breach i 2016, 6 which hi it did not disclose. Personal in information about as many as 57 million users of the ridesharing service was tapped by the attackers. Also last year, it became clear that the Yahoo data breach from 2013 was actually much more massive than initially thought. The 2013 breach had affected every single user account of Yahoo—and there were 3 billion of them at the time. If that’s not shocking enough, here’s more: the hackers that tapped this information could do whatever they wanted with it for three years. The breach was only discovered in late 2016. So far this year has been relatively quiet, with no major attacks like WannaCry or NotPetya making headlines. But judging by the Uber and Yahoo cases, we might wake up one day a couple of years from now with headlines of another massive data breach that is taking place right now somewhere. https://www.scmagazine.com/the-top-cybersecurity-threats-for-2017/article/720097/ https://www.calyptix.com/top-threats/biggest-cyber-attacks-2017-happened/

Issue: 003; April 2018 018


Cybercrime Losses Hit $600 Billion Business losses from cybercrimes have risen to $600 billion over the last three years, from $445 billion in 2014, a study from McAfee has found In percentage terms, the increase ase % of is of 0.1 percentage points, from 0.7% seGDP in 2014 to 0.8% in 2017, the cybersecurity services provider said. Among the factors behind the increase are the growing number of internet users—particularly from developing economies where cybersecurity measures tend to be laxer—which has increased cyber-breach opportunities for criminals; quicker access to new tech by the criminals; and growing sophistication in their approach to attacks. The centers of cybercrime are also growing: McAfee has identified India, Vietnam, North Korea, and Brazil among them. Generally, developed economies incur greater cybercrime-related losses as these are the top targets of the crimi-

nals. Yet the biggest losses in the threeyear period were incurred in mid-tier economies because of their lagging behind in cybersecurity. The fastest growing “genre” in cycy fastest-growing bercrime is ransomware. There has been a veritable boom in this segment. While in 2012-2015 only 33 ransomware programs were created, by the end of 2016 these had swelled to 70. Banks are cybercriminals’ favorite target but intellectual property theft is also high on the agenda. It is also the most important cause for concern McAfee noted, adding IP theft accounted for a quarter of cybersecurity losses in the period studied. http://www.livemint.com/Technology/fZn1bmxjRGM2Qwxo0ErEUK/Cybercrimes-cost-firms-600-billionlast-year-McAfee-report.html

Cybercrime Estimated Daily Activity Malicious scans

80 billion New malware

300,000 Phishing

33,000 Ransomware

4,000

Regional Distribution of Cybercrime 2017 Region (World Bank)

Region GDP ($, trillions)

Cybercrime Cost ($, billions)

Cybercrime Loss (% GDP)

North America

20.2

140 to 175

0.69 to 0.87%

Europe and Central Asia

20.3

160 to 180

0.79 to 0.89%

East Asia & the Pacific

22.5

120 to 200

0.53 to 0.89%

South Asia

2.9

7 to 15

0.24 to 0.52%

Latin America and the Caribbean

5.3

15 to 30

0.28 to 0.57%

Sub-Saharan Africa

1.5

1 to 3

0.07 to 0.20%

MENA

3.1

2 to 5

0.06 to 0.16%

World

$75.8

$445 to $608

0.59 to 0.80%

Issue: 003; April 2018

Records lost to hacking

780,000 SOURCE: McAfee report "Economic Impact of Cybercrime—No Slowing Down", Febrary 2018; https://www.mcafee.com/us/ resources/reports/restricted/ economic-impact-cybercrime.pdf

61


Cybersecurity Advisory

Cybersecurity and the Automotive Industry

Faye Francy, Executive Director of the Automotive Information Sharing and Analysis Center (AutoISAC), parallels this challenge to when she was an executive at Boeing, an airplane manufacturing company. She was present during the 9-11 terrorist attacks in New York City and emphasized that the company and even the industry as a whole had never conceived of the idea of using an airplane as a lethal weapon of mass destruction. Although governments and businesses have taken measures to prevent further hijackings of planes, she feels there is once

The automotive industry is a complicated ecosystem. Between all the moving parts, standards of safety and competition among companies, automotive businesses have a lot on their plate. Their lives don’t get any easier when things like self-driving cars and the Internet of Things start playing a larger role in their businesses. As things get more and more connected, experts are attempting to figure out how to best tackle cybersecurity issues.

62

Faye Francy, Execu ve Director of the Automo ve Informa on Sharing and Analysis Center

Issue: 003; April 2018


didn’t believe it was possible. So she brought in Matan Scharf, the CEO/ Founder of Israeli startup Cycuro LTD, who after two minutes demonstrated to Faye’s engineers how easy it would be for him to breach the system. As expected, her engineers were shocked. At that moment Faye recognized that although it would be difficult, there would have to be a huge culture shift within her company and her industry, to better understand cybersecurity and prepare for potential attacks. Now, as part of ISAC, she believes this discussion is paramount in the automotive industry as cyber threats become more prevalent in an era of connectivity. Geoff Wood, the Director of Business Development at Harman, a subsidiary of Samsung, wholly agrees. He says it is more important than ever to educate the industry to take action on improving their security systems, simply due to the acceleration of new technology. As tech companies begin producing more self-driving cars and connect cars to smart cities and devices, the automotive industry must play its part to keep up with the advances.

But what would a potential cyber attack look like?

again a shortsightedness, especially when it comes to cybersecurity. Faye recognized that the aviation industry did not have a cybersecurity culture. They did not understand it and needed to recognize that a “cyber 9-11” could be just as plausible. So she spoke with her engineers and asked them a theoretical question: If they decided to hijack the plane through a cyber attack, how would they do it? The engineers were confused and refused to do it. She was asking them to become black hat hackers but many of them

Why and what exactly do hackers look for when they conduct cyber attacks? Adam Pranter, Supervisory Special Agent of the FBI’s Las Vegas Division, shares his experiences working in law enforcement. He says the most recent issues he notices are bot-nets, which facilitate all sorts of crimes such as stealing data or sending spam but obfuscate a lot of activity. As such, it is often difficult to track or take down if organized crime or even nation states use them. Other areas are BEC (business email compromise), where malicious parties compromise corporate email accounts and redirect finan-

Matan Scharf, the CEO/Founder of Israeli startup Cycuro LTD

Geoff Wood, the Director of Business Development at Harman

Issue: 003; April 2018

63


Cybersecurity Advisory

cial transactions between two parties to their account or even gain access to financial systems. According to Adam, this is a multibillion dollar industry for criminal organizations. With respect to the automotive industry, Adam points out that many auto-dealers, when selling cars, often require personal info, such as when financing the sale. Systems like these are vulnerable to attacks. In addition, OEM manufacturers often store customer information and could be potential targets as well. Kevin Baltes, Director of Product Cybersecurity at General Motors, wants to emphasize that they at have a robust ecosystem. He does concede that there is still much to be done, since hackers are evolving with new techniques as quickly as businesses are. Kevin maintains that through all parts of their product, from the hardware to the software, they have implemented security systems that will limit a hacker’s ability to access sensitive data. Matan cautions that it is not always the car that hackers can breach. Many hackers can enter consumer electronics relatively easily, but the key difference with today’s technology innovations is the growing threat of infiltrating infrastructure and vehicles. Matan also points out that many of these businesses are unaware of the potential dangers and risks with integrating and developing such technologies. For example, one might create a great light sensor for a phone. That same light sen-

Kevin Baltes, Director of Product Cybersecurity at General Motors

sor might then end up in a vehicle, either through integration or connection, potentially rendering the vehicle vulnerable to breaches. Kevin also points out that many parts weren’t designed to connect with each other. As devices such as smartphones, bluetooth speakers and cars connect with each other, the different systems often do not

64

have universal protections against cyber attacks and sometimes provide no protection at all. Bluetooth inherently is very easy to hack. Furthermore, Geoff emphasizes that something as simple as charging your smartphone could provide an entry point for hackers.

Many hackers, however, are unlikely to hack cars even if they dedicate the resources to do so.

According to Matan, there are very few hackers who have the ability to hack into cars. There are some white hat hackers who post online proving that they have the ability to hack, say, a Jeep, but very few will invest the time and resources to do so for economic reasons. Matan states that if he himself wanted to hack a jeep, it would take about 2-3 years of research, studying findings and papers from DARPA. Investing this amount of time with so little financial gain means that hackers would likely target vulnerabilities elsewhere. Matan believes there are 3 types of hackers: researchers who attack systems for academic purposes, nation states who use hacking as weapons or censorship, and criminal hackers. Criminal hackers, of course, are the ones the public fears the most, although just because something can be hacked does not make it likely. Matan states that most hackers look for opportunities usually based on financial gain. It’s a cost-benefit analysis, and if the cost is too high, most hackers will look somewhere else. Adam and the FBI have also done a lot of research behind the motivations of hacking. He talks about a threat spectrum, ranging from Hacktivists who hack for non-financial reasons to criminal hackers (the vast majority) who hack purely for financial gain. There are also insiders, those who work inside the company who sometimes are the cause of breaches.

Issue: 003; April 2018


Other types of hackers could be state actors, those trying to sabotage or provide espionage.

They may not hack cars but may look to hack specific vehicles belonging to high profile individuals like politicians. In terms of probability though, he claims that while cyberterrorism is low, cyber warfare is more likely. Matan claims that if he were to be a state actor looking to cause damage to a single individual, instead of hacking a car, he would simply hack two traffic lights and set them to cause an accident. This would be much simpler than trying to attack the car itself. While the majority of the auto industry does not focus on cybersecurity, Kevin and Matan both state that GM is an outlier when it comes to emphasis on security. Kevin mentions that their CEO made it clear that all the parts they produce must be secure by design from the hardware to the software. But Matan makes it clear that although GM has made cybersecurity a priority, the rest of the industry needs to have a renewed focus. He states that there must be a cultural change, especially with OEMs and suppliers, who must think about the security architecture before they even start building the part.

So what steps are being made to cause this cultural shift?

Issue: 003; April 2018

Faye believes her organization, Auto-ISAC, holds the key. Auto-ISAC was designed by the Clinton policy directive with the intention of having private sector companies operating in that space collaborate and share key findings with each other. According to collaborative reports by the CIA and FBI, the bad guys are at least 10 years ahead simply because the criminal network shares all its data and information. To combat this, Faye maintains that the best move forward is to ensure that everyone works together. In other words, when a company detects a flaw in another company’s system, it should be noted and shared with the industry to help with additional preventative measures. According to Geoff, who is also a member of Auto-ISAC, ISAC originally had about 15 OEMs. One year later Tier-1 suppliers were allowed to become members, so currently the total number is 40 members.

The 4 pillars of the group are: intelligence sharing, providing analysis (determining the scope of the issue and its potential impact), recommendations on best practices, and partnerships between companies, researchers, and related parties.

Fundamentally, experts like Matan, Geoff, Kevin, and Faye all agree that the best way forward is not to compete on cybersecurity but to collaborate and share. Although it is a significant investment, the automotive industry is not the only industry hackers can exploit. It is a necessity as new technologies continue to rapidly develop. But there is hope. The Internet we know today was created on similar principles of sharing and opensourcing. There’s no reason to believe that the cybersecurity industry will be any different.

65


Cybersecurity Advisory

Cybersecurity in a Connected World 2017 had probably some of the most memorable data breaches in recent memory. From the breaches at Equifax and Yahoo, it seems hackers are becoming more successful at targeting large institutions and getting away with it. And as 2018 moves along, it looks like their life might be getting even easier.

W

ith the advent of smart devices and self-driving cars, the world is becoming more and more connected. The more connected devices are, the more opportunities there are for malicious hackers to steal precious data. Many experts confirm: companies and consumers can expect to see more and more data breaches, and the ability to provide preventative measures will increase in importance.

Unfortunately, privacy is a double-edged sword Privacy is a very specific concept to different people, and some individuals are more sensitive to it than others. Whereas in a connected world, a connected ecosystem only becomes strong based on how many individuals are willing to participate in such a network. The more data that is shared, the richer the analysis and the better and cheaper the service will be. As a result, there is a huge balancing act between maintaining customer privacy and providing better products, albeit at the cost of reduced security. This will especially be true when it comes to autonomous cars. According to Bryson Bort, CEO and founder of Scythe, a cybersecurity firm that produces attack platform products that business can use to test their security systems, cars are now computers on wheels. Being connected to devices in the city, to the Internet, and to other cars means that the entry points have increased on many levels. Aspects that determine the identity of a person, such as their name, social security

66

number and address could be at risk. According to Bryson, however, the threat of identity theft in the age of self-driving cars has less to do with a person’s name and other personal info than on who you are, what you’re doing, and what you prefer. Data has now become a valuable asset for businesses. It is becoming harder for individuals to secure their personal data because doing so means that they will miss out on the potential benefits of connected services--a dilemma that hackers can, and do, take advantage of. To add to this complexity is the concern of liability. Who ultimately owns the data? Once data is collected by devices and sent back to businesses, what are the principles on how best to responsibly use this data? Who’s to blame when a person’s data is stolen? These are the top questions being asked in the cybersecurity industry. Unfortunately, there is no clear answer. Consequently, experts believe that prevention measures, although difficult to create, are the best way to move forward. They cite the importance of understanding the mindset of Bryson Bort a hacker and their motivaCEO and founder of Scythe

Issue: 003; April 2018


tions behind it. Often, the motivation is financial, stealing funds from a bank or taking data like credit card info or social security numbers and selling it online. Thus, it is critical to first identify the inherent value of the data, determine what the hacker might use it for and then create the security architecture around it. Hackers, for example, could possibly lock you inside your car then hold you for ransom, asking for sensitive info or funds for your release. Their entry points are numerous, such as one’s mobile carrier, the cloud backend, or even weakness in the hardware. It is a complex landscape, and sometimes it isn’t possible to account for all the potential areas a hacker could potentially breach. As Bryson states, there are three guarantees in life: death, taxes, and vulnerabilities.

So, how does a company even start addressing this problem? Many experts say this is too large a problem for a single company to solve, as this is something the entire industry must address. To complicate matters even further, entry points and hackers are not the only areas of concern for companies. There are currently no universal industry standards or metrics to measure which cybersecurity systems are better (there is no, say, 5-star cybersecurity system). Moreover, when it comes to self-driving cars, or OEMs (Original Equipment Managers), many of them are not yet implementing hardware features which could better protect consumers from cyber attacks simply because consumers are unaware of these problems and, thus, not yet demanding them. In addition, because of the complexity of the systems and the tendencies of OEMs Simon Hartley to use third party items to VP BD of Unsafe build their parts, they often re-use things, specifically software. As a result, it makes the job of cybersecurity companies doubly difficult, as they often work with outdated software having limited information on the original writers. On top of this, the software is then connected to the internet. Many of these software programs were also written before the Internet of Things became a trend, and were never really meant to speak to each other in the first place. Consequently, the chances for error and weak points are high. Some experts even believe that given current manufacturing practices and standards, attempting to prevent cyber attacks is futile. Any solutions companies provide will have a half-life. Once the software program is out, because it is a stable enterprise product, it has to be predictable in order for

Issue: 003; April 2018

it to work. Then, once the hacker figures out the pattern, the hacker can avoid where the detection systems are and block them. It is the code that the hacker puts down next that causes harm, so detection of the malicious code is sometimes more important than detecting a breach.

There is also the question of hardware life. For example, the average life of a smartphone is only about 4 years. But the average life of a car is about 14 years. This means any update to the software that may improve it or protect it becomes difficult because the hardware is outdated. One potential solution is to bring the entire industry together to share resources and to determine universal standards and practices for dealing with the multiple issues that cybersecurity companies face in an era of greater connectivity. That is what experts like Anuja Sonalker, CEO of Steer, a self-parking car company and Vice Chair of SAE Vehicles, and Simon Hartley, VP BD of Unsafe, a cybersecurity startup and member of SAE’s IOT Cybersecurity Committee, believe is the best way forward. By providing industry knowledge and making it transparent to OEMs and other automotive companies, Anuja believes that the industry may be better prepared for malicious hackers in the future. One potential solution is have self-driving cars share information with other cars. For example, if two self-driving cars speak to each other and one of the cars notices the other has weaker encryption, it can use that information and reduce connectivity while also alerting all other selfdriving cars in the area. Furthermore, Simon believes that another solution is to have each car list the devices they are connected to and have a system of checks and balances. Basically, each device would require some type of signature to be verified, to ensure the software matches what is registered. The cybersecurity company should be notified of any inconsistencies indicating that the device may have been modified and bears closer inspection. This is only possible, however, with shared cooperation between all parties. Simon believes the first step lies with OEMs, which need to start writing and owning their own code and establishing stable architecture rather than relying on third party materials. Bryson feels there is also hope if OEMs begin establishing more robust architecture. He claims that most hackers are lazy and look for the lowest common denominator, seeking the easiest entry point. Anything more difficult and they’re not going to bother. Once that’s done, then the cybersecurity firms will be in a position to have a fighting chance. That is what Anuja and Simon hope to do at SAE, the Society of Automotive Engineers, as they try to bring greater awareness to the cybersecurity issues within the OEM industry. Only time will tell if this venture will ultimately work out. In the meantime, it’s probably best to change your password again.

67


Ar ficial Intelligence

AI: The Basics You Need to Know AI and machine learning are fast turning into buzzwords just as annoying as disruption or innovation. But unlike those, there is quite a lot of substance behind AI and machine learning. For the beginner in the field—which most of us are—McKinsey has devised a short overview of what you need to know.

F

irst of all, what is artificial intelligence? Simply put, this is the ability of machines to perform cognitive functions typical of humans, such as reasoning, perceiving, and interacting with their environment. Among the technologies already existent in the AI department are autonomous cars— far from perfect but still—robots, of course, computer vision and computer language, and machine learning. Machine learning is all about algorithms. Algorithms sift through massive data sets and detect patterns, which they then use to make predictions and recommendations. This pattern detection and prediction process replaces the standard human-computer relationship where the human simply feeds the machine with programming instructions. It allows the machines to perform what is essentially a learning process based on the data they have access to. That’s why big data is so important, by the way.

There three main types of machine There areare three main types of machine learndepending on the type of anaing,learning, depending on the type of analytical prolytical processes perform:predictive, descripcesses they perform:they descriptive, predictive, and prescriptive. andtive, prescriptive. Needless to say, the descriptive is the simplest process, there is no “thought” really involved. The prescriptive analytical process is the most complex. While the descriptive process simply describes what has happened, the predictive—employed by all sorts of ana-

68

lytical outlets—suggests what will happen based on available data and computer models looking into various scenarios, and the prescriptive actually tells you what needs to be done to achieve a specific goal. As regards types of approaches to machine learning, there are three dominant ones: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning involves feeding the computer a set of so-called inputs to produce a certain output. The algorithm uses the data fed to it as well as feedback from the humans to link the inputs (for instance, interest rates, time of the year, etc.) with the output, say, housing prices. This approach is used for predictive analytics typically. Unsupervised learning is essentially pattern detection without a specific goal set by the humans. This approach is best suited for cases when you don’t really know what to do with the data you have, so you use the algorithm to identify patterns that could prove useful in classifying and using the data. This is the closest machine learning comes to human learning. It is a reward-based approach that has the algorithm perform a task and receive a reward for it, aiming to maximize this reward every time it performs the task. This approach involves the machine interacting with the environment, which in this case is a much broader concept than nature. A robo-advisor is an example of this approach: it interacts with its environment by placing an order and

Issue: 003; April 2018


Deep if the order results in a gain, the robo-adsions (determinations) based on what it learning is visor is awarded, whether by points or by has learned. 25% better at voice simply achieving the best results. Then, For example, consider showrecognition than trathe machine self-corrects continually ing the computer an image. The neuditional methods; 27% to achieve the optimal course of action ral network processes it, remembers more accurate in facial ensuring the maximum reward. it and then, when presented with the recognition; and as much Taking AI a big step further is same image in a different environment, as 41% more accurate deep learning. Here we are dealing with it can recognize it. It may sound simple in image classifineural networks—interconnected layers because it comes so naturally to the hucation. of software calculators—that can process man brain but apparently, it isn’t. much bigger datasets than regular algorithms We are still a long way from artificial superand are consequently much more accurate in their intelligence—the kind Elon Musk and Bill Gates are performance of whatever tasks they are given. warning against—but the technology is developing Deep learning is 25% better at voice recognition fast, very fast and this “long way” might turn out to be than traditional methods; 27% more accurate in facial relatively short. The applications of AI are practically unlimited, if only theoretically for the time being. But recognition; and as much as 41% more accurate in imAI is coming of age and it pays to be aware of what it age classification. actually means and what it can do already and will be With deep learning machines can also learn inable to do in the future. creasingly complex information and make conclu-

AI or Cognitive Computing? Two terms describing the future if information technology are confusing the layperson. Artificial intelligence is all the buzz and is already being abused in reference to, say, chatbots. Cognitive computing has a Black Mirror-y aura around it, but it could be the thing that makes superAI a reality. So how do these two differ from each other? AI at its simplest definition, taken from the Turing Archive for the History of Computing, is “the science of making computers do things that require intelligence when done by humans.” It is a bit confusing, since so many tasks we don’t think require the conscious application of intelligence actually do require it. So here’s the clarification provided by a tech writer, Dean Evans: AI aims to enable computers to solve complex problems by mimicking human thought processes, such as pattern recognition. Cognitive computing, on the other hand, is the technology that could take this definition much

Issue: 003; April 2018

further. Some take AI as an umbrella term—which it is fast turning into—with cognitive learning an aspect of it, an enabler of machines to better mimic humans, if you will. One expert quoted by Evans, VDC Research IoT analyst Steve Hoffenberg, explains the difference between existing so-called AI systems and cognitive learning systems as follows. If an AI and a cognitive computing system both have to analyze a set of medical records and other data in order to find the best treatment for a patient, the AI system will analyze the data and suggest to the doctor the optimal course of action. The cognitive computing system, on the other hand, says Hoffenberg, will provide the doctor with all necessary information, leaving the choice of course of action to them. Another author, Forbes’ Bernard Marr, on the other hand, supports the cognitive computingas-AI-enabler definition, rather than the two being alternative

approaches to complex problemsolving. He defines cognitive computing as an attempt to simulate human thought processes in a computerized model utilizing selflearning algorithms capable of data mining, pattern recognition, and natural language processing. This is where deep learning comes in, according to Marr. Cognitive computing uses neural networks to process data and learn as it goes. The more data there is, the more the machine learns, and the more accurate its decisions become. Marr calls the neural network a tree of decisions that the machine makes every step of the way until it arrives at a solution to the problem it has been tasked with. In other words, machine learning, and especially deep learning, enables cognitive computing; which in turn is bringing true artificial intelligence, rather than Siri and Alexa, closer to reality. Whether this is a good or bad thing remains to be seen.

69


Ar ficial Intelligence

Human-Computer Interaction: Key to AI Evolution Human-computer interaction, or HCI, has been identified as crucial for the evolution of artificial intelligence. In an amusing twist, the evolution of AI is just as crucial for HCI. The meeting point of these two is conversation, which is also the greatest current challenge for AI developers, according to IBM

C

omputers are already pretty good at natural language processing, but natural language understanding is where they fall short of human expectations. Researchers in the area acknowledge that these expectations that most people have of computers are excessive and unrealistic, but they are nonetheless working to fulfill them. The keys: conversation and context. For humans, conversation comes naturally. We understand the message we are receiving from another person without making a conscious effort to identify and memorize the context in which this message is being given. Computers, however, need a lot of help to begin to understand context and how it guides understanding. So, scientists are now using supervised and reinforcement learning to equip computers with the knowledge they need in order to become “conscious” of context and begin understanding human conversation better. Right now, it’s all experimental but some researchers believe we could have computers understanding human language and interacting with us much more meaningfully than now within the next five years. One big helper in this endeavor is the Internet of Things. Sensors in various objects or infrastructure will feed contextual data about people and the environment to computers, adding to what they already have in the way of information about a certain situation or place, adding to context. Some see the future of AI as a virtuous cycle, where AI becomes more useful so we use it more and as we use it more, it becomes even more useful. Skeptics might say that this virtuous cycle is not so virtuous as it would make us much more dependent on computers than we already are but these skeptics are the

70

fans of Black Mirror, it seems, and not AI researchers, who rejoice at the prospect of one day humans being able to come home and talk—meaningfully—to the walls and the kitchen appliances about their children’s day as they tell the washing machine, say, to start working at 6 pm. There is still time before this becomes a reality, however. What comes naturally to humans cannot, by definition, come naturally to artificial intelligence. But when computers learn all they need to learn about context and start picking up on it, optimists believe we will be able to tap much more helpful personal assistants capable of reading our words, facial expressions and body language well enough to become true conversational partners that make our lives easier and help us make important decisions using all the contextual data they have. As regards the potentially scary aspect of AI, the experts seem to think the fears are exaggerated. Four industry insiders told McKinsey that, first of all, AI is not yet as advanced as the pessimists would like you to beGary Bradski, CTO, lieve; secondly, what AI does Arraiy is very far from sentience— it is simply a pattern recognition system for the most part. At least for now. Thirdly, as one expert noted in the McKinsey interview, what do we need actually thinking AI for? According to Arraiy’s CTO Gary Bradski, the only sensible applications of a thinking computer would be space exploration or other dangerous activities, rather than having a thinking washing machine.

Issue: 003; April 2018


10 Ways You Already Use AI PERSONAL ASSISTANTS: ASSISTANTS Voice recogni on systems, such as Siri, are a kind of ar ficial intelligence system u lizing deep learning and neural networks. They are not yet AI but they are being taught to understand the nuances in human voice, context, and seman cs, and are on their way of becoming true AI some day. FACEBOOK: Remember the recent scandal with the personal data of 50 million Facebook users? Remember how people started dele ng their accounts and were stunned by the amount of data Facebook actually has on them? Well, some of this data is used in the social network’s machine learning ac vi es to personalize the service. Face recogni on is one example of these ac vi es. GOOGLE MAPS: Google can suggest the fastest route from A to B by analyzing traffic speed data drawn from your smartphone because Google knows where you are. Yes, that’s creepy but it’s also useful if you need the fastest route from A to B. GOOGLE: No surprise there. The world’s top search engine is constantly improving—though some users have no use for these improvements—recommending results based on previous searches and now even tapping the seman c level with the Knowledge Graph.

Issue: 003; April 2018

A lot of users are not just reading and hearing about AI, they are using it unknowingly in their everyday lives. Chances are we will be using it even more in the coming years, as long as we define AI in the widest possible sense, including any algorithm set to perform a certain task. Some AI experts would disagree with this broad definition but let’s keep it for the sake of simplicity, with the note that chatbots and robo-advisors at investment firms are not really AI as the experts—and Elon Musk—see AI. They are algorithms “trained” to process huge amounts of data and spot patterns. Speaking of rob-advisors, the financial services industry has been an early adopter of machine learning and algorithms, spurred by fintech startups. Risk calculation, customer satisfaction measurement, and market trend detection are among the chief applications of algorithms in this industry, according to the CDO of cloud services provider RedPixie, Mitchell Feldman. Healthcare and retail are two other industries ahead of the game with machine learning and using algorithms for tasks previously performed by people—or not performed at all because they can’t be performed by humans. Machine learning has been a boon for e-retailers, allowing them to constantly improve and personalize their services based on customer data collected and analyzed by the algorithms. You’ve seen this if you have ever shopped from Amazon: you get recommendations sent to you based on your latest purchases, and this is just one facet of how algorithms are helping e-retailers. Feldman offers ten examples of real-life AI— or at least algorithms—that you may well have used more than once, are regularly using, or plan to use in the future. Again, a word of caution: neither Netflix nor PayPal nor Spotify are using real artificial intelligence. They are using algorithms and machine learning to offer their users a superior service. How annoying this service could become is something everyone flooded with Amazon recommendations as a result of a random search will tell you. Yet AI does have a massive potential to really make lives better. Or take over the world. That’s something that remains to be seen.

GMAIL: Google G l is i big bi on machine learning, yes. For three years now Gmail has had a func on of smart automa c reply where you can choose from three op ons. PAYPAL: The online payment giant uses deep learning for risk assessment and fraud detec on. It makes sense given its line of work and the abundance of online fraud schemes. NETFLIX: Video recommendaon is possible because of algorithms. It might sound insignificant but get this: Ne lix boasts more than $1 billion in annual return on its investment in these algorithms as they strengthen customer reten on rates. UBER: Arguably, Uber would not exist without machine learning. The ridesharing company uses algorithms to es mate arrival mes, loca ons, and delivery mes for its UberEATS service. LYST: The online retailer applies deep learning to make recommendaons to its clients based on visual comparisons between items of clothing. SPOTIFY: The music website uses machine learning in much the same way as Ne lix: to make recommenda ons based on every user’s likes and dislikes—or at least their searches.

71


AI & Cyber Security

AI In Cyber Security Artificial Intelligence is already shaping up to be next Industrial Revolution. Billions of dollars have been invested into AI technologies and the startups that build them. Personal helpers, such as Siri, Cortana, and Alexa are all still in their infancy stage. Yet, they are becoming actual companions, capable of human-like conversation.

W

hether you realize it or not, AI technologies are already present virtually everywhere you look, addressing almost every aspect of our modern and not-so-modern lives. Speech recognition, image recognition, autonomous cars that rely on AI-technology to keep them safe. The financial sector is moving to AI-based insurance risk analysis, credit scores, and loan eligibility. We’re also seeing the emergence of AI-based robot lawyers and AI-based medical diagnostics and prognoses. And these are all just the beginning. In general, there are three driving forces involved in this progression towards AI:

1.

Storage: We can now store enormous amounts of data at a fraction of what it used to cost.

2.

Compute Power: The capability now available lets us process mountains of data.

3. 72

Mathematics: Math and algorithms drive AI. Machine learn-

ing, deep learning, and big data analytics have all seen major breakthroughs in the past several years. AI technologies have moved from being purely a tool for academic research then to something practical that companies can actually build into their commercial products. But can we trust AI to make the right choices? This is a hard question to answer as, briefly put, in such early days as these it’s a mixed bag. For example, Tay Bot was an AI-based twitter chat bot by Microsoft, which went online in March 2016. It took a few hours of free chatting on the Internet for it to learn the drill. Since the internet has all sorts of ‘teachers,’ what this bot quickly learned and excelled at were profanity and racial bias. After 16 hours, Microsoft realized the catastrophe it created and shut it down for good. A few months ago, Mashable ran an article about another good example involving Google Translate. Turkish is a gender-neutral language. There is no distinction between male and female forms. They use ‘O’ for both ‘He’

and ‘She.’ But when translated to English through AI, the machine-driven algorithm shows bias: She is a cook, he is a doctor. She is a teacher, he is a soldier. And, seemingly apropos of nothing, He is happy, she’s unhappy. It’s not that Google engineers are sexist. They just fed their machines with all the pre-existing texts they could find and let the tool reach its own conclusions. It seems fair to say that we are still decades away from a magical engine that takes data in and spits the correct decision out. Does this make AI useless then? Not at all. For the right applications, it is far from useless and can make all the difference. It is just a matter of having the right balance of the two most crucial elements for AI to work as it should: data and expertise. Lots of data that covers the entire spectrum of the problem you are trying to solve is vital to having enough material upon which to derive the right conclusions. With regards to expertise, both in the mathematics that drives AI and in the specific domain be-

Issue: 003; April 2018


ing addressed, this element is the crucial ingredient needed to get the most out of the data in question.

AI and Cyber Security With regards to cyber security, AI too can be highly useful, though of course it does not come without limitations, which, unsurprisingly, are identical to the prerequisites mentioned above – not enough data and not enough expertise. Access to cybersecurity training data is anything but trivial. Furthermore, AI systems do not explain themselves, meaning you have to manually validate each decision or blindly trust it, only to then realize that this technology is notorious for having a fairly high false classification rate. Fundamentally, this is not an option in cybersecurity as we all know that missed detections and false positives can have disastrous consequences. But, let’s retrun to what these systems can do well. AI, machine learning, deep learning and big data analytics are letting us mechanize tasks previously only handled by our scarcest resources – the smartest human analysts. They can make sense of our gigantic mountains of data logs. They are opening our eyes in places where we were previously blind. As Check Point thinks more and more about AI’s role in cybersecurity, we’ve begun to explore AI-based engines across our threat prevention platform. We’re already using them in a few different capacities. The first one that’s worth mention-

ing is Campaign Hunting. The goal with this engine is to enhance our threat intelligence. For example, a human analyst looking at malicious elements would typically trace the origins of those elements and incriminate similar instances (e.g. domains registered by the same person at the same time with the same lexicographic pattern). By using AI technologies to emulate—and mechanize–an analyst’s intuition, Check Point’s algorithms can now analyze millions of known indicators of compromise, and hunt for additional similar ones. As a result, we’re able to produce an additional threat intelligence feed that offers first-time-prevention of attacks that we’ve never seen before. More than 10% of the cyber attacks we block today are based on intelligence gained solely through Campaign Hunting. A second engine, Huntress, looks for malicious executables, one of the toughest problems in cyber security. By nature, an executable can do anything when it’s running as it’s not breaching any boundaries. This makes it hard to figure out if it is trying to do something bad. The good news, though, is that cyber attackers rarely, if ever, write everything from scratch. That means similarities to previously known malicious executables are likely to surface, though they are often hidden to the human eye. But when we use a machine-driven algorithm, our scope of analysis broadens. Using a sandbox as a dynamic analysis platform, we let the executables run and collect hundreds of runtime

parameters. Then, we feed that data to the AI-based engine, previously trained by millions of known good and known bad executables, and ask it to categorize those executables. The results are quite astounding. We end up with a dynamic engine, capable of detecting malicious executables beyond what antivirus and static analysis would find. In fact, 13% of the detected malicious executables are based on findings solely from this engine. If it were not for Huntress, we would not have known to block them. Another example is CADET, Context Aware Detection. The Check Point platform gives us access and visibility into all parts of the IT infrastructure: networks, data centers, cloud environments, endpoint devices and mobile devices. This means that rather than inspecting isolated elements, we can look at the full session context and ask whether it came through email or as a web download, whether the link was sent in an email or a text message on a mobile device, who sent it, when was the domain registered and by whom? Essentially, we are extracting thousands of parameters from the inspected element and its context. By using the CADET AI engine, we can reach a single, accurate, context-informed verdict. That’s quite something. So far, our testing shows a two-fold improvement in our missed detections rate, and a staggering 10-fold reduction in the false-positive rate. You have to keep in mind: These are not just nice mathematical results. In real-life cybersecurity, engine accuracy is crucial.

SUMMARY To conclude then, the above examples illustrate how the combination of expertise and vast amounts of data can produce the best approach to make cybersecurity practical, using the entire arsenal of available technologies. At Check Point we combine AI with all of the other technologies we have in order to improve the metrics that actually matter. For now, we believe that AI technologies are still not mature enough to be used on their own and still need a

Issue: 003; April 2018

large amount of human input in order to be effective. When AI is used as an additional layer, added to a mixture of expert engines designed to cover the entire attack landscape however, it can really come into its own. Cybersecurity must be practical. And as we move farther along the AI continuum, those technologies are taking us farther along toward being able to develop smarter and more practical threat defense.

73


AI & Cyber Security

Cyber Fatigue, AI’s Biggest Cybersecurity Challenges Cybersecurity has been getting so much headline space recently that people are beginning to take cyberattacks and data breaches as something usual, even normal. This is what happens with every single topic if you are constantly flooded with information about it: you become desensitized. This desensitization is unpleasant at the very least but it can become plain dangerous when cybersecurity is concerned as you simply can’t be bothered to think up a better password for your latest online account. Dr. Richard Ford, Chief Scientist of Raytheon’s cybersecurity business Forcepoint, calls this particular desensitization cyber fatigue and counts it among the few but major challenges in the field of cybersecurity. In an interview with TechRepublic, Ford explains that cyberspace is now such a big part of everyone’s daily life—person or business—that there’s no escape from cyber risks. Once you start getting bored with making up new passwords, with ensuring your own safety online, you start making bad decisions that affect you.

The way around this fatigue is to adopt a degree of healthy skepticism in your online interactions and actions, and to be careful what you post online. Basically, it comes down to a constant awareness that there are bad guys out there looking for ways to harm other guys and one of these other guys might happen to be you, unless you’re careful. The second major challenge in cybersecurity, according to Ford, is artificial intelligence. Not because of the problems around its adoption as a cybersecurity tool but because of the fact that cybercriminals can also tap its potential. Just as cybersecurity agents are looking for ways of using AI more productively to ensure the safety of organizations and individuals, cybercriminals are looking for ways to use AI for their own purposes, which usually comes down to financial advantage.

74

Dr. Richard Ford, Chief Scien st, Forcepoint

One could even say there is a race between cybersecurity agents and cybercriminals to put AI to use and in the end, says Ford, the battle will be fought by computers—intelligent computers, which humans have trained to be better than them at solving specific complex problems relating to security because computers can do this more quickly than humans. Ford actually calls AI a cognitive prosthesis: rather than a dystopian-style intelligence machine, he sees

AI as a tool that enables humans to make better decisions, rather than the computers making decisions for the humans. There is one inherent problem with AI, however: as it becomes more and more complex, at one point you will stop understanding how the system works. So, asks Ford, how would you know that it is working properly? In other words, how can one know if their AI has not become the target of a cyberattack? The question is certainly fascinating, if a little scary. Ford believes we will find the answer in the next decade or two.

Issue: 003; April 2018


AI for Cybersecurity: A Good Idea

AI Doesn’t Have Alternatives in Cybersecurity

There are those who firmly believe AI has no place in cybersecurity, although the reasons for this belief may vary from a simple skepticism about the potential of AI to a fear that AI could eventually Daniel Miessler, take over the job from human writer and security experts, not to meninforma on security tion the simple fact that AI professional itself could become a cyber vulnerability. But this doesn’t have to be the case, says expert Daniel Miessler. He believes artificial intelligence could actually help human security agents simply because of what it is: a system capable of sorting through vast amounts of information very quickly. This is something humans are not capable of and AI is brilliant at. Businesses, says Miessler, produce terabytes of data that nobody looks at and this data could contain something important. Only AI can at this time find this potential something in the heaps of data, so why not use it? Miessler makes his case for AI in cybersecurity stronger by listing five reasons. First, there is a severe shortage of cybersecurity experts who study data that might contain evidence of a breach or a vulnerability. Second, humans need to be trained to become cybersecurity experts. Every next one needs to be trained at the same cost as the first one. AI does not need training as such and adding more AI capacity to an already “trained” system does not carry the same additional cost as training a human. Third, human training is rarely consistent enough to make everyone equally good at their job. Four, humans are simply humans: they get bored, they get distracted, and they get tired. Five, humans are biased creatures and their biases can seep through their analysis potentially compromising its accuracy. So, based on all this, AI for cybersecurity sounds like a very good idea and, says Miessler, it’s an idea that could become a reality in just five years. That’s not too hard to imagine given that AI will be used for sifting through data only. This is something that algorithms are already doing in various sectors, including banking and financial services. Why not teach them to do the same in cybersecurity?

The debate is still ongoing: does artificial intelligence have a place in cybersecurity or is it better kept out of it. For some, however, there is no debate. AI does have a place in cybersecurity and this is a big place because there Laurent Gill, are simply not enough humans to co-founder and chief do the job, and the ones that are product officer of available are simply too human— Zenedge read prone to make mistakes—to be as good as cybersecurity needs them to be. One proponent of the latter line of thinking is Laurent Gill, co-founder and chief product officer of Zenedge, a cybersecurity services provider. In a story for SC Magazine, Gill argues that what the industry needs right now is more artificial intelligence and not more engineers. In that, he counters statements made by Google’s director of infosec and privacy, Heather Adkins, who was until recently a staunch opponent of more tech in cybersecurity. Earlier this year, however, Google released a cybersecurity AI solution dubbed Chronicle, in an apparent about-turn. This about-turn makes sense, Gill says. AI is the only way to stay ahead of hackers in today’s digital environment. You can hire all the engineers you want, he argues, but all it takes is one person to make one mistake, which is what happened with Equifax, with disastrous consequences. Humans make mistakes, it’s as simple as that. What’s more, you actually can’t hire all the engineers you want because there is already a shortage of cybersecurity professionals and the sortage will get bigger in the coming years, projected to hit 1.8 million in 2022. There is simply no talent to employ. The solution to these two problems is evident: automation, and more specifically artificial intelligence, if we use the term in its widest sense. AI is not prone to mistakes. It can be taught to patch without forgetting. Though there are inherent challenges with that, as with anything, it seems to be the best course of action for everyone except the Fortune 500 companies who can afford to attract the best of the best. And risk these best of best making a mistake that can cost the company dearly.

Issue: 003; April 2018

75


AI & Cyber Security

Global CEOs Worried about Cybersecurity, AI Global chief executives are becoming increasingly worried about cybersecurity and they are wary of the advent of artificial intelligence, the latest PwC Annual Global CEO Survey has revealed

AI will some some AI generate will generate $15.7 trillion $15.7 trillionin global in global GDPbyby 2030 GDP 2030, which is a 14% increase for the period. Among the other things that bother global CEOs are geopolitical risks, which are a permanent threat, and the speed of technological change. Availability of key technical skills is also a problem for some, mostly in the Asia-Pacific, while in North America the biggest threat is seen to be cybersecurity.

One interesting outtake from the PwC survey is how CEOs see the world in terms of belief systems and business integration. The distribution of opinions clearly shows that the majority of CEOs see the world as becoming more fragmented in terms of beliefs and value systems, with

nationalism on the rise and multiple rules of law and liberties. At the same time, however, the majority sees the corporate world as becoming increasingly integrated. While one cannot draw a causeeffect line between the two, it’s nevertheless an interesting pattern of opinions.

The perception of top threats* Considering the following threats to your organisation’s growth prospects, how concerned are you about the following? 42% 41% 40% 40% 38% 38% 36% 35% 31% 29% 29% 29% 26% 26% 26%

Over-regulation Terrorism Geopolitical uncertainty Cyber threats Availability of key skills Speed of technological change Increasing tax burden Populism Climate change andenvironmental damage Exchange rate volatility Social instability Protectionism Uncertain economic growth Inadequate basic infrastructure Changing consumer behaviour

*Chart shows percentage of respondents answering ‘extremely concerned’.

76

Issue: 003; April 2018

Source: PwC, 21st Annual Global CEO Survey https://www.pwc.com/gx/en/ceo-survey/2018/pwc-ceo-survey-report-2018.pdf

While the overall sentiment among the more than 1,000 respondents in the survey is optimistic, there are some causes for concern. Chief among them is over-regulation but cybersecurity has also gone higher on the threat list. AI is not seen as a threat in itself but rather as a boon that will come at a price: millions of jobs will be lost, PwC says. The company has projected that


CYBERASIA360 MAGAZINE Cyber Space Asia (CSA) is an industry event focusing on the awareness, protec on, and solu ons in the Cyberspace. Our mission is to develop and build a bridge to facilitate collabora on at global level among companies and ins tu ons to protect valuable proper es: data! With increased connec vity and ar ficial intelligence, we come to a strong realiza on that cybersecurity should the first or foremost considera on in origina on’s system design, which is not the case most me. For the past three years, the CSA exhibi on has become a flagship event in the cybersecurity, a rac ng 300 major cyber security companies, and 12000 visitors, around the global.

MAGAZINE ADVERTISING OPPORTUNITIES: Display Adver sing Classified Adver sing Social Media Adver sing  Product-focused Content Wri ng Adver sing


AI & Cyber Security

Artificial Intelligence and the Attack/Defense Balance Bruce Schneier is writing about security issues on his blog since 2004, and in monthly newsletters since 1998. Is writing books, articles, and academic papers. Currently he is the Chief Technology Officer of IBM Resilient, a fellow at Harvard’s Berkman Center, and a board member of EFF.

By BRUCE SCHNEIER

A

rtificial intelligence technologies have the potential to upend the longstanding advantage that attack has over defense on the Internet. This has to do with the relative strengths and weaknesses of people and computers, how those all interplay in Internet security, and where AI technologies might change things. You can divide Internet security tasks into two sets: what humans do well and what computers do well. Traditionally, computers excel at speed, scale, and scope. They can launch attacks in milliseconds and infect millions of computers. They can scan computer code to look for particular kinds of vulnerabilities, and data packets to identify particular kinds of attacks. Humans, conversely, excel at thinking and reasoning. They can look at the data and distinguish a real attack from a false alarm, understand the attack as it’s happening, and respond to it. They can find new sorts of vulnerabilities in systems. Humans are creative and adaptive, and can understand context. Computers — so far, at least — are bad at what humans do well. They’re not creative or adaptive. They don’t understand context. They can behave irrationally because of those things. Humans are slow, and get bored at repetitive tasks. They’re terrible at big data analysis. They use cognitive shortcuts, and can only keep a few data points in their head at a time. They can also behave irrationally because of those things. AI will allow computers to take over Internet security tasks from humans, and then do them faster and at scale. Here are possible AI capabilities:

Discovering new vulnerabilities — and, more importantly, new types of vulnerabilities in systems, both by the offense to exploit and by the defense to patch, and then automatically exploiting or patching them.  Reacting and adapting to an adversary’s actions, again both on the offense and defense sides. This includes reasoning about those actions and what they mean in the context of the attack and the environment.  Abstracting lessons from individual incidents, generalizing them across systems and networks, and applying those lessons to increase attack and defense effectiveness elsewhere.  Identifying strategic and tactical trends from large datasets and using those trends to adapt attack and defense tactics. 

That’s an incomplete list. I don’t think anyone can predict what AI technologies will be capable of. But it’s not unreasonable to look at what humans do today and imagine a future where AIs are doing the same things, only at computer speeds, scale, and scope. Both attack and defense will benefit from AI technologies, but I believe that AI has the capability to tip the scales more toward defense. There will be better offensive and defensive AI techniques. But here’s the thing: defense is currently in a worse position than offense precisely because of the human components. Present-day attacks pit the relative advantages of computers and humans against the relative weaknesses of computers and humans. Computers moving into what are traditionally human areas will rebalance that equation. Roy Amara famously said that we overestimate the short-term effects of new technologies, but underestimate their long-term effects. AI is notoriously hard to predict, so many of the details I speculate about are likely to be wrong — and AI is likely to introduce new asymmetries that we can’t foresee. But AI is the most promising technology I’ve seen for bringing defense up to par with offense. For Internet security, that will change everything.

This essay previously appeared in the March/April 2018 issue of IEEE Security & Privacy. SOURCE: https://www.schneier.com/blog/archives/2018/03/artificial_inte.html

78

Issue: 003; April 2018


With a new look, revamped interface and added features,

rechargeasia.com is poised to become the industry’s leading web portal, connecting industry professionals on a global level and providing them with online tools to find new contacts and to forge new business relationships. Since its re-launch in early 2011, Rechargeasia.com has been experiencing rapid growth in site traffic and search engine rankings. To further boost our online presence, we have also cultivated a comprehensive email database comprised of close to 20,000 industry contacts from all over the world.

Online Advertising Opportunities

Showcase your latest new products -

E-newsletter

your product catalog

to promote your company and products via large aftermarket database

Recharge Asia Corporation For more info, contact: USA ph: 626-569-8238 Email: sunny@rechargeasia.com

www.facebook.com/rechargeasia www.weibo.com/rechargeasia www.twitter.t com/recharge_asia

Make an appointment with your

potential new customers


We are the Pla orm the Bridge

and the Communica on Channel

to promote your Business Solu ons in Cyber Space Asia

We offer: Latest Product News Cyber Security Solu ons Leading Industry Expert Insights Yearly Data Breach Inves ga on Report

CyberAsia360 is the media pla orm to give your voice the support and resonance to be heard. We are where your customers are - EVERYWHERE - Trade Shows & Conferences, Email Marke ng, Magazine Ads (Digital and/ or Print), Social Media, TV Commercials, Website.


Issuu converts static files into: digital portfolios, online yearbooks, online catalogs, digital photo albums and more. Sign up and create your flipbook.