
#21 WINTER 2012





In this Issue
• Editorial
• Be Our Guest
• The News
  - Patriot Act vs. Cloud?
  - Ghost Click
  - TOR and the Pedophiles
  - XML Encryption is eXtreMeLy Weak
  - Duqu: yet another Stuxnet?
  - WPS: the new WEP?
  - Carrier IQ
• Protecting Computer Generated 3D Graphics
  - Scrambling 3D Objects
  - Watermarking 3D Objects
  - Concluding Remarks
• Where Will We Be?
• Technicolor Sponsored Conferences

Published Quarterly By Technicolor Security & Content Protection Laboratories
Technical Editor: Eric Diehl
Editors: Sharon Ayalde, Patrice Auffret, Gwenaël Doerr, Alain Durand, Marc Eluard, Raphael Gelloz, Olivier Heen, Stéphane Onno, Yves Maetz, Xavier Rolland-Nevière, Gary Donnan, Rachel Orand

Subscribe to the newsletter: security.newsletter(at)
Report vulnerability: security(at)


Editorial

Anti-piracy is starting this new year with a good omen. The first event is the shutdown of MegaUpload on January 19th. What a surprise! For many months, piracy due to illegal streaming has exceeded piracy due to Peer-To-Peer (P2P). MegaUpload became the flagship of cyber lockers, as "The Pirate Bay" is the flagship of P2P. With servers spread all over the world and operators just as dispersed, their position seemed impregnable. Nevertheless, a US grand jury indicted seven individuals and two companies for engaging in a racketeering conspiracy, conspiring to commit copyright infringement, conspiring to commit money laundering, and two substantive counts of criminal copyright infringement. Following this decision, the FBI launched a vast operation ending with the arrest of four individuals in New Zealand, who should soon be transferred to the US, and the seizure of servers. Game over!

The second event is the voluntary shutdown of BTJunkie. Although not the largest torrent tracker site, BTJunkie was an important piece of the P2P landscape (in 5th position). The site has been closed since February 14th. Are these two events related? To the best of our knowledge, BTJunkie was not facing any legal suit.

Will that stop piracy? Of course, the answer is no. Still, we will see a serious slowdown of illegal downloading and streaming. As many people and promoting sites relied on MegaUpload, some time will be necessary to seed new cyber lockers with illegal content and to promote them. A successor to MegaUpload will most probably appear in the coming months. Was that operation useless, then? No. The content owners have demonstrated that there may be no such thing as an impregnable harbor for illegal content trading. In addition to the immediate temporary impact on piracy, it may send a strong, deterrent, pedagogical message to pirates (at least casual ones). A collateral effect may be that people will rethink the conditions under which they use free cloud storage.

MegaUpload was also used for legitimate content storage. All this information is now lost, as we may expect that its owners did not back it up locally. This is an interesting topic to explore. 2012 may be a very thrilling year in the security and content protection arena.

E. DIEHL Technical Editor  




BE OUR GUEST Refik Molva

Hello Refik, could you introduce yourself?

I am a security researcher in computer and communication systems. I am a full professor at EURECOM and in addition, I am in charge of the Networking and Security Department at EURECOM.

How did you get into security?

This goes back to the time when security was not yet seen as a full research topic. In 1989, I was a researcher at IBM Zurich and my main topic was networking. With my first line manager, Phil Janson, we wanted to explore a new domain, so we visited other IBM labs in the US. Network security simply emerged as the obvious choice. I started working with Moti Yung on the security of an authentication protocol, and we discovered and fixed vulnerabilities. This was seminal work, under the codename KryptoKnight, that paved the way for the outstanding security activity in IBM Zurich. That was my first contribution to network security, and I have never stopped since. Three years later, I joined EURECOM as the only security researcher. Security remained marginal, even deemed a bit suspicious, until the end of the nineties. With the advent of the Internet, everything changed, up to the point that security is now considered an "easy" research topic since there is so much to do.

What are the main research domains that you have explored?

In the early days, my focus was very broad: applied cryptography, network security, protocols. At that time, one could still afford to cover so many topics. Since 2003, I have focused on the design of security protocols using cryptographic techniques. Whenever a new communication or computing paradigm arises, I try to spot original security problems raised by that paradigm. For instance, when multicast was a popular topic in networking, authentication of multicast flows was a brand new problem. Unlike unicast, which can be addressed by symmetric message authentication techniques, multicast authentication inherently calls for asymmetric mechanisms. Asymmetric algorithms, on the other hand, are way too complex for real-time traffic, so one had to come up with a new solution; that is what we did with Alain Pannetrat, my Ph.D. student at the time.

Another example is the advent of ad hoc networks, which certainly raised several security requirements, but only very few actually called for novel solutions. We were among the few research groups that identified selfishness as a new problem in ad hoc networks and formalized it using game theory.

You did not mention privacy yet…

I was coming to this point. I currently investigate privacy problems in relation with cloud computing. Straightforward application of classical privacy mechanisms out of the crypto bag of tools is not sufficient: the additional difficulty comes from the very distributed nature of the cloud. Understanding this difficulty leads to new security protocol designs like, for instance, PRISM (Privacy-Preserving Searches in Map-Reduce). On that topic, I want to stress that privacy does not only involve confidentiality. Unlinkability, or even unobservability, are equally important privacy requirements. Especially when applied to the cloud, this might lead to exciting challenges. As an example, I can quote the apparent contradiction between cloud authentication and unlinkability.



Do you think there are unsolvable privacy issues?

No. At least not from a pure technology point of view. Problems arise from our capacity to state the real needs. For instance, there is an obvious contradiction between usage of social networks and the privacy of both personal data and Personally Identifiable Information. A related difficulty is that many users do not spontaneously request privacy. This is counterbalanced by the recent European directive, which makes administrations aware of privacy issues and responsibilities, as well as research funding bodies, and I personally find that a very good thing. Along the same lines, we were recently invited by public authorities to join a consortium investigating privacy issues in a project involving RFID tags. This would never have happened without this directive.

Do you have any thought about research that you would like to share?

I am concerned about the overall trend of increasing constraints and short-term expectations from research. Research, by definition, can barely yield any useful outcome if constrained by concrete objectives. In the long run, betting on open-ended research will be much more profitable. A noteworthy counter-example is European R&D programs, which claim to cover a broad range of goals, from fundamental research through prototyping and standardization to business exploitation, in projects within a 2-3 year timeframe. Programs and funding for fundamental research should be kept separate from those for technology transfer and innovation, as NSF and DARPA are in the US.

Thank you! R. MOLVA (EURECOM, Sophia Antipolis) Interview by A. DURAND and O. HEEN

THE NEWS

Patriot Act vs. Cloud?

Signed by President Bush shortly after the attacks of 9/11, the Patriot Act aims at easing the gathering of data by American federal agencies in order to fight terrorism. Many people understand it as: "federal agencies may silently get data". Some US cloud providers view it as a competitive drawback, as non-US customers may perceive it as a threat to their data. While this might make sense at first glance, a deeper analysis shows it is not rational. Non-US, security-aware customers would actually classify data according to its sensitivity. Non-sensitive data may go in any cloud. Regionally sensitive data should go in regional clouds (e.g. EU data with EU providers such as OVH, OBS, etc.), and sensitive data should stay out of the cloud anyway. In this context, the Patriot Act [1] is not really a problem. Since most data is non-sensitive, the leading cloud provider will gather the largest market share regardless of regional laws and security context. This means the market is fairly competitive as long as the bulk of the data in the cloud is non-sensitive, which seems to be currently the case. O. HEEN

Ghost Click

The first experiments with internet advertisement started in 1994 [2]. Many monetizing strategies and tools are possible. Internet advertisement has become a multi-billion-dollar industry and, therefore, a new target for attacks. Two years of collaboration between law enforcement and security researchers led to the arrest of six men who were operating a large fraud on internet advertisement: they are suspected of having made $14 million through click hijacking [3]. The attack targeted the click referral principle, where a host receives a small fee for redirecting a client to an advertised website. The attackers used a Domain Name System (DNS) changer malware. The malware redirected heavy traffic, like iTunes or Netflix, to other sites with which they had advertisement agreements. Four million computers were infected, and their users may not be aware of it. The rogue DNS servers have been shut down and replaced by legitimate ones; thus, the infected computers can still access the Internet. These replacement servers will be operated until this spring, leaving some time for deceived users to detect the infection and correct their DNS settings.

1. 'The USA PATRIOT Act: Preserving Life and Liberty', http://www.
2. Rachel Arandilla, 'Rise and Fall of Online Advertising', 1st Web Designer, March 1, 2011.
3. 'International Cyber Ring That Infected Millions of Computers Dismantled', FBI.




If this type of attack spreads, it could deeply impair the Internet advertisement industry. Such attacks exist in many forms, even low-tech ones. For instance, an attack may use human computing power, such as Amazon's Mechanical Turk [4], to inflate referral fees, or to increase the advertisement cost of a competitor… Y. MAETZ
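One of the rogue resolver ranges published in the Ghost Click case was 85.255.112.0/20. As a minimal sketch (Python standard library only, with an illustrative subset of the published ranges), a user could check the resolver address from their DNS settings against such a list:

```python
import ipaddress

# Illustrative subset of the rogue DNSChanger resolver ranges; the FBI
# published the complete list during the Ghost Click takedown.
ROGUE_RANGES = [ipaddress.ip_network("85.255.112.0/20")]

def is_rogue_resolver(ip: str) -> bool:
    """Return True if the given resolver IP falls inside a known rogue range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ROGUE_RANGES)

print(is_rogue_resolver("85.255.113.37"))  # inside the rogue /20
print(is_rogue_resolver("8.8.8.8"))        # a legitimate public resolver
```

In practice, a deceived user would compare the DNS servers configured on their machine or home router against the full published list.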

TOR and the Pedophiles

In October 2011, members of the Anonymous hacktivist collective claimed to have shut down more than 40 child pornography websites hidden inside the Tor network. The account details of about 1,600 members of Lolita City, the largest of those websites, were posted as well [5]. This takedown was part of a larger ongoing anti-child-pornography campaign launched by Anonymous called Operation Darknet (#OpDarknet) [6,7]. The Tor network provides anonymity to individuals seeking to evade government surveillance or censorship, but it is also used to conceal various illegal activities. Tor provides anonymization not only for clients but also for websites, through the unofficial DNS top-level domain (TLD) .onion [8]. Such websites are only reachable by Tor clients or through Tor gateways. This collection of onion websites constitutes what is called a darknet. Darknet websites are part of the Invisible Web (or Deep Web), a part of the Internet not indexed by standard search engines. In the case of Tor, a Hidden Wiki indexes hundreds of these onion sites [9]. However, onion websites are still vulnerable to conventional attacks, such as the ones launched by Anonymous. While browsing the Hidden Wiki, members of Anonymous discovered a section called Hard Candy linking to child pornography. Anonymous removed the links, but they were quickly reposted. Having determined that the hosting platform Freedom Hosting was used to store most of this content, the hacktivists unsuccessfully requested the removal of this content. Consequently, Anonymous launched denial-of-service attacks against Lolita City and Freedom Hosting. The hackers were also able to fetch Lolita City's account information by using a UTF-16 encoded SQL injection. Unsatisfied by these results and by the lack of involvement of the authorities, Anonymous pursued OpDarknet by disclosing the IP addresses of 190 consumers of child pornography ("Operation Paw Printing").
The hackers created a Firefox plugin similar to the Tor Button ("The Honey Pawt") and tricked the Hard Candy users into installing it under the guise of a phony Tor security update. Once installed, this plugin blocks and redirects all traffic going to child abuse websites back to Anonymous' IP logger. Many found such actions justified by the perceived lack of efficiency of the authorities, whose investigations take years before bringing a few pedophiles to justice. However, Anonymous does not have the resources to track the criminals down and bring them to justice. In this case, they tipped off criminals, who could destroy the evidence and learn to be more cautious. OpDarknet also possibly compromised an ongoing investigation by tampering (or being accused of tampering) with evidence. R. GELLOZ

4. 'Amazon Mechanical Turk - Welcome'.
5. #OpDarknet, 'Lolita City user dump', Pastebin, October 18, 2011.
6. 'Operation DarkNet (opdarknet)', Twitter.
7. 'Opdarknet's Pastebin', Pastebin.
8. Adrian Chen, 'The Underground Website Where You Can Buy Any Drug Imaginable', Gawker, June 1, 2011.
9. 'Tor: Hidden Service Protocol', Tor project, docs/hidden-services.html.en.

XML Encryption is eXtreMeLy Weak

The XML Encryption scheme defined by the W3C seems quite straightforward: it uses the well-known CBC (Cipher Block Chaining) mode with random IVs (Initialization Vectors) and a basic padding scheme consisting of adding (possibly zero) random bytes, with the last byte giving the total number of bytes that were padded. Nothing is very original and, therefore, the encryption scheme looks basically secure. However, two researchers from Ruhr University Bochum (Germany) have shown [10] that it is possible to recover the clear text from an encrypted text (but not the key), as long as the attacker has access to a device (called an oracle) able to determine whether a given ciphertext (and associated IV) corresponds to a well-formed ciphertext for a given, unknown key. Here, well-formed means that the clear text was correctly padded and only contains XML characters (more or less standard text characters). Their attack heavily relies on one property of the CBC mode: when used to decrypt, modifying one byte of the IV (or of a given cipher block) by some mask causes a modification by the same mask in the first decrypted block (or in the block following the modified block). Based on that property and on the responses from the oracle (i.e. well-formed or badly formed), it is possible to recover the full plain text in reasonable time. This new attack shows again that the security of cryptographic primitives heavily depends on the context: a perfectly secure cryptographic construction, like CBC, may be attacked if used in a bad context (only XML characters may be encrypted). The attack is another reminder [11] that it is generally bad for an implementation of a cryptographic primitive to be too talkative by revealing that some error occurred. A. DURAND

10. Tibor Jager and Juraj Somorovsky, 'How to break XML encryption', in Proceedings of the 18th ACM Conference on Computer and Communications Security, CCS '11 (New York, NY, USA: ACM, 2011), 413-422, doi:10.1145/2046707.2046756.
11. Daniel Bleichenbacher, 'Chosen ciphertext attacks against protocols based on the RSA encryption standard PKCS #1', in Advances in Cryptology, CRYPTO '98, ed. Hugo Krawczyk, vol. 1462 (Berlin/Heidelberg: Springer-Verlag), 1-12.
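The CBC property the attack exploits can be checked in a few lines of Python. The sketch below wraps a toy Feistel block cipher (a standard-library stand-in for AES, not secure, purely illustrative) in CBC mode, and shows that XORing a mask into one IV byte XORs the same mask into the corresponding byte of the first decrypted block, leaving the rest of the plaintext intact:

```python
import hashlib, hmac, os

BLOCK = 16  # bytes per block

def _f(key: bytes, half: bytes, rnd: int) -> bytes:
    # Feistel round function: HMAC-SHA256 truncated to a half block
    return hmac.new(key, bytes([rnd]) + half, hashlib.sha256).digest()[:BLOCK // 2]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def enc_block(key: bytes, block: bytes) -> bytes:
    left, right = block[:BLOCK // 2], block[BLOCK // 2:]
    for rnd in range(4):
        left, right = right, _xor(left, _f(key, right, rnd))
    return left + right

def dec_block(key: bytes, block: bytes) -> bytes:
    left, right = block[:BLOCK // 2], block[BLOCK // 2:]
    for rnd in reversed(range(4)):
        left, right = _xor(right, _f(key, left, rnd)), left
    return left + right

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(plaintext), BLOCK):
        prev = enc_block(key, _xor(plaintext[i:i + BLOCK], prev))
        out += prev
    return out

def cbc_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        out += _xor(dec_block(key, block), prev)
        prev = block
    return out

key, iv = os.urandom(16), os.urandom(16)
pt = b"attack at dawn!!" + b"0123456789abcdef"  # two full blocks
ct = cbc_encrypt(key, iv, pt)

# Flip mask 0x20 in the first IV byte: the same mask flips in byte 0 of the
# first decrypted block, and nothing else changes.
tampered_iv = bytes([iv[0] ^ 0x20]) + iv[1:]
pt2 = cbc_decrypt(key, tampered_iv, ct)
assert pt2[0] == pt[0] ^ 0x20 and pt2[1:] == pt[1:]
```

The oracle attack repeats exactly this manipulation, byte per byte, and watches whether the server still accepts the result as well-formed XML.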




Duqu: yet another Stuxnet?

The Duqu malware was discovered in September 2011. The name Duqu comes from the ~DQxxx.tmp files it drops, where xxx is a hexadecimal number. Duqu infects hosts and gathers sensitive information like keystrokes, digital certificates, etc. New malware appears regularly, but Duqu raised interest because it was technically close to the Stuxnet malware. Stuxnet was discovered in June 2010 and was very innovative, at least regarding its target, an industrial system, and the high number of vulnerabilities it used. Modifying an existing malware (Stuxnet) to make a new one (Duqu) is a common practice. Even if Stuxnet and Duqu have different targets, their specificity raises some questions: do they come from the same "lab", or even from the same "toolkit"? Shall we expect other worms from the same "family"? O. HEEN


WPS: the new WEP?

Wi-Fi Protected Setup (WPS) is a standard designed by the Wi-Fi Alliance (WFA) for easy connection of Wi-Fi enabled devices. The goal is to make pairing easy without sacrificing the security offered by a Wi-Fi protection mechanism such as Wi-Fi Protected Access (WPA). In this article, we describe the vulnerability disclosed by Viehböck [12], which shows that security has been sacrificed for usability. WPS works in three scenarios: Push Button Configuration (PBC), Internal Registrar (IR), and External Registrar (ER). PBC mode requires the end user to push a button before WPS authentication can proceed. IR mode also requires end-user interaction, via the user's own client station (STA). ER mode does not require end-user interaction.

The disclosed attack targets ER mode. Because there is no end-user interaction, an attacker can launch a brute-force attack to find the correct Personal Identification Number (PIN) code. The vulnerability comes from a bad design of the authentication protocol and a lack of anti-brute-force requirements in the specification; thus, every device implementing this WPS mode is vulnerable. The authentication algorithm uses an 8-digit PIN code, validated in a two-step process, each step separately validating one half of the PIN. When the first half (4 digits) is valid, the access point answers with a specific message. When the second half is valid, a different message is answered. Moreover, the last digit is a checksum of the seven previous digits. Consequently, the number of possible attempts is reduced from 100,000,000 to 11,000 (10,000 for the first half, followed by only 1,000 for the second half). Considering that a PIN code attempt takes two seconds to complete, the full range of possibilities can be tested in around six hours; on average, half that time is enough: three hours.

[Figure: an 8-digit PIN nominally offers 100,000,000 combinations; split validation and the checksum digit reduce the search to 11,000 attempts.]

Viehböck's paper is called "When poor design meets poor implementation". Our view is that the design is not the only guilty party here. WFA decided to build a protocol that recovers a wireless key, regardless of its size, from an 8-digit PIN code, i.e. less than 27 bits. Reducing the key space to make things easier for the end user is a lousy idea. In the title, we compared WPS to Wired Equivalent Privacy (WEP). We did that on purpose: before the WPS disclosure, pirates were trying to crack the weak WEP. Now, WPS is an even easier target. Today, the best tool to break WPS is Reaver [13]. It is time for WFA to rethink security. WFA has just recommended a permanent lockout after 10 failed attempts; of course, this lockout could be reset by rebooting the device. P. AUFFRET

PS: Technicolor DSL gateways implemented an anti-brute-force mechanism before this vulnerability was discovered: a five-minute lockout after five failed attempts.

12. Stephan Viehböck, 'Wi-Fi Protected Setup PIN brute force vulnerability', .braindump - RE and stuff, December 27, 2011, http://sviehb.wordpress.com/2011/12/27/wi-fi-protected-setup-pin-brute-force-vulnerability/.
13. 'reaver-wps - Brute force attack against WiFi Protected Setup - Google Project Hosting'.
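The search-space arithmetic can be reproduced in a few lines of Python. The checksum routine below follows the well-known WPS checksum algorithm (as found in open-source supplicants); the timing uses the two-seconds-per-attempt assumption from the article:

```python
def wps_pin_checksum(pin7: int) -> int:
    """Checksum digit computed over the first 7 digits of a WPS PIN."""
    accum = 0
    while pin7:
        accum += 3 * (pin7 % 10)  # odd-position digit (from the right)
        pin7 //= 10
        accum += pin7 % 10        # even-position digit
        pin7 //= 10
    return (10 - accum % 10) % 10

# The checksum removes one digit of freedom: only 10**7 PINs are valid.
# Split validation then cuts the brute force to 10**4 + 10**3 attempts.
attempts = 10**4 + 10**3            # 11,000 instead of 100,000,000
worst_case_h = attempts * 2 / 3600  # at 2 seconds per attempt

# A full PIN is the 7 free digits followed by their checksum,
# e.g. the classic test PIN 12345670.
full_pin = 1234567 * 10 + wps_pin_checksum(1234567)
print(attempts, round(worst_case_h, 1), full_pin)
```

The asymmetry (10,000 then 1,000) comes from the access point confirming each half of the PIN separately, exactly as described above.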





Carrier IQ

Carrier IQ is a California-based company founded in Mountain View in 2005. Carrier IQ's mission is to provide embedded analytics on mobile phones to telecom carriers. The goal is to improve the "user experience" by gathering information on when phone calls are dropped, where signal quality is poor, why applications crash, and which processes drain the battery. For instance, it could help carriers know whether additional cells are required when too many calls are dropped in a particular location. Trevor Eckhart, a system administrator, pointed out a Carrier IQ patent [14] that claimed a broader scope: "Carrier IQ is able to query any metric from a device. A metric can be a dropped call because of lack of service. The scope of the word metric is very broad though, including device type, such as manufacturer and model, available memory and battery life, the type of applications resident on the device, the geographical location of the device, the end user's pressing of keys on the device, usage history of the device, including those that characterize a user's interaction with a device". Once the Carrier IQ software was reverse-engineered, the truth appeared to be different on two key points. First, the Carrier IQ software acts as a rootkit [15], as it runs hidden from users and requires the device to be rooted to uninstall it. Uninstalling requires a strong technical background and often voids the phone's warranty. Second, the rootkit acts as spyware, since it may send every user activity to carriers.

Trevor reverse-engineered the Carrier IQ software. In September 2011, he disclosed his first results on the XDA developers forum [16]. He revealed that the software collects sensitive information despite the user never "opting in". On November 28th, he posted a video on YouTube [17] showing results from his Android application, named Logging TestApp, launched on an HTC Evo 3D/Sprint mobile phone, with detailed application logs and results [18]. Beyond the data that Carrier IQ admitted collecting, Trevor also observed additional data such as pressed keys and trigger actions, SMS in clear text, URLs dispatched to the carrier, and even HTTPS content intercepted and revealed in the clear. Even more, if the user resigns from his carrier and only uses a Wi-Fi connection, the phone continues to report Carrier IQ data. The Carrier IQ profiles list which data are sent, to whom, and when. In December 2011, the Electronic Frontier Foundation launched an initiative to collect and analyze Carrier IQ profiles, after Jered Wierzbicki et al. published a tool to convert Carrier IQ profiles to human-readable XML [19].

Carrier IQ software is suspected to be installed on more than 150 million phones running Android, Windows, or BlackBerry, and even on iPhones, on which it is off by default. Although Carrier IQ declined to disclose their customers, T-Mobile, Sprint, AT&T, Samsung, and HTC acknowledged having installed the Carrier IQ software and disclosed the list of affected phones [20]. A class action lawsuit [21] against Carrier IQ has been filed: by illegally collecting user information, Carrier IQ probably violated the Federal Wiretap Act. More recently, US representative Edward Markey proposed a mobile device privacy act to force carriers and phone makers to inform consumers about spying software, which could then only be installed with the express consent of the consumer. Similarly to Sony BMG [22] in 2005, telecom carriers installed rootkits on consumer devices without the consumers' authorization. Each time a company acted like that, it altered its brand image and damaged its relationship of trust with consumers. Such privacy leakage on consumer products is probably only the visible tip of the iceberg. Most data is now managed and kept in the cloud by operators, making such leaks hard to detect in practice. How can consumers know which information is collected? Which information leaks? Is our digital world still compatible with privacy? S. ONNO

14. Steve Roskowski et al., 'Data collection associated with components and services of a wireless communication network'.
15. 'Rootkit', Wikipedia, the free encyclopedia, wiki/Rootkit.
16. TrevE, 'Were you aware of logging before reading news?', xda-developers, September 7, 2011.
17. Carrier IQ Part #2, 2011, YouTube.
18. Trevor Eckhart, 'CarrierIQ Part 2'.
19. Dan Rosenberg, 'Unpacking Compressed Carrier IQ Profiles', It's Bugs All the Way Down, December 25, 2011.
20. FiWeBelize, 'The Complete List of All the Phones With Carrier IQ Spyware Installed', FiWeBelize, January 18, 2012.
21. 'Carrier IQ Class Action Lawsuit'.
22. Mark Russinovich, 'Sony, Rootkits and Digital Rights Management Gone Too Far', Mark's Blog, October 31, 2005.





Protecting Computer Generated 3D Graphics

Computer generated graphics have been steadily invading our daily lives over the last decade. Today, virtual objects are routinely used in a number of applications, including socializing metaverses, games, simulation tools, and user interfaces. Incorporating visual effects into movies is now considered the norm in the entertainment industry. Blockbuster animation movies inherently rely on synthesizing rich and complex 3D worlds. Creating these virtual objects and environments is a lengthy artistic process, and the resulting end product does have value. Losing control over the dissemination of such assets may lead to a number of uncomfortable situations for major Hollywood studios:

• Unfinished 3D assets that hit the Internet can generate bad press for the studio that created them;

• 3D components that somehow leaked out of the content production workflow can be reused in other, uncontrolled virtual contexts, potentially harming royalty revenues;

• 3D objects available to the public could be exploited as templates to manufacture derivative products, e.g. by using modern 3D printers.
In view of this situation, it is reasonable to envision a content protection framework for computer generated 3D graphics, and thus to review whether existing technologies could be adapted in a straightforward fashion or not. Content protection architectures conventionally include three main components. First, encryption techniques are applied to digital data in order to make it unusable to parties that do not hold the necessary credentials. Second, an access control framework is set in place to (i) attach usage rules to digital items in the form of a ticket/license/permit, (ii) assign credentials to individuals, and (iii) enforce the rules of the ecosystem. Third, a forensic strategy is deployed in order to be able to locate the source of a leak in the event that copyrighted assets were to appear on unauthorized distribution networks. This final component typically relies on the combination of digital watermarking technology and traitor tracing codes. Although it may be necessary to enrich existing rights expression languages to accommodate the specificities of the 3D graphics ecosystem, it will a priori be possible to readily reuse the standard baseline access control frameworks routinely employed in current Digital Rights Management (DRM) systems. In contrast, encryption and watermarking techniques need to be revisited to better fit the particular needs of 3D data. The remainder of the article will therefore focus on these two elements.

Scrambling 3D Objects

From the perspective of a cipher, bits are simply bits. It does not matter whether those bits represent audio, video, or 3D graphics: they will eventually be ciphered in the same way. As a result, the most straightforward strategy to encrypt a 3D object is simply to bulk-encrypt the binary file that describes it. The issue with such a direct approach is that the underlying structure of the file [23] is also lost at encryption time. As a result, rendering engines oblivious to encryption will fail to parse the file, which may lead to a critical crash of the attached player. A workaround consists of preserving the structure of the 3D file and bulk-encrypting the structural elements independently. This is typically the solution adopted for XML encryption [24]. This revised strategy also provides a finer granularity, e.g. one could decide to encrypt only a few objects in a complex 3D scene. Nonetheless, major shortcomings remain. On the one hand, a renderer that does not have the necessary credentials to decrypt the data may display nothing at the location of the protected object. Depending on the targeted use case, this may not be the ideal solution. In a metaverse, if the user has no means to notice that 'something' is missing, it is unlikely that she will miss anything. In other words, the incentive to pay extra money to access enriched premium environments is lacking. In an animation production workflow, the studio may not be willing to give a subcontractor access to the full scene. However, this subcontractor should be able to access visual cues in order to insert graphical elements without hitting existing ones that would be invisible due to the protection framework. On the other hand, if the renderer attempts to display a protected object based on its encrypted data, the object will 'spill out' and corrupt the whole 3D scene.
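The structure-preserving workaround can be sketched on an OBJ-like text format. In the toy Python example below, only the payload of vertex ('v') lines is encrypted while the 'v'/'f' structure survives, so an oblivious parser can still walk the file; the keystream here is a SHA-256 counter construction standing in for a real cipher, and all names are illustrative:

```python
import base64
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy stream cipher stand-in (NOT secure): SHA-256 in counter mode.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def encrypt_obj(text: str, key: bytes) -> str:
    # Encrypt only the payload of vertex lines; keep the line structure
    # intact so a structure-aware parser does not crash outright.
    out = []
    for line in text.splitlines():
        if line.startswith("v "):
            payload = line[2:].encode()
            out.append("v " + base64.b64encode(_xor(payload, key)).decode())
        else:
            out.append(line)
    return "\n".join(out)

def decrypt_obj(text: str, key: bytes) -> str:
    out = []
    for line in text.splitlines():
        if line.startswith("v "):
            payload = base64.b64decode(line[2:])
            out.append("v " + _xor(payload, key).decode())
        else:
            out.append(line)
    return "\n".join(out)

obj = "v 0.0 0.0 0.0\nv 1.0 0.0 0.0\nv 0.0 1.0 0.0\nf 1 2 3"
sealed = encrypt_obj(obj, b"secret")
assert decrypt_obj(sealed, b"secret") == obj   # credentials restore the scene
assert sealed.splitlines()[-1] == "f 1 2 3"    # connectivity left in the clear
```

Note the trade-off the article describes: the coordinates are unusable without the key, yet the file still reveals how many vertices and faces the object has.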

Geometry Preserving Encryption

In cryptography, the terminology 'format preserving encryption' refers to particular ciphers that have the property of producing an output (the ciphertext) in the same format as the input (the plaintext) [25]. Preserving format may mean preserving some geometric properties of the protected object. For instance, an encrypted object should remain within the bounding box (or the convex hull, or the inside, …) of the original object. A by-product of such a geometry preserving encryption would be that both protected and unprotected objects could be rendered in a virtual world without any undesired interference between the objects.

For illustrative purposes, let us consider a simple 3D object defined by a set of V vertices v_i = (x_i, y_i, z_i) and a set of F faces, each face being specified by an ordered list of the vertices that it is made of. In other words, the first component defines the control points of the 3D mesh, whereas the second one describes how the mesh is wired together. A simple geometry preserving scrambling strategy, hereafter referred to as PointShuffling, consists in shuffling the set of vertices of the 3D object, i.e. applying a key-seeded pseudo-random permutation σK(·) [26] to the indices i of the vertices v_i. The impact of this process is to change the set of vertices associated to a given face, while leaving the 3D coordinates of the vertices untouched. Inherently, it is equivalent to pseudo-randomly rewiring the cloud of points defining the object. As a result, if the protected object is rendered, it is unintelligible (as depicted in Figure 1), even if it does not spill outside of the convex hull of the original object. When having access to the secret key K, recovering the original object is then simply a matter of applying the inverse permutation to the set of vertices of the protected object.

Figure 1: Illustration of the visual impact of two alternate geometry preserving encryption techniques (PointShuffling vs. CoordShuffling) as well as their immunity against reconstruction attacks.

Assessing the security of geometry preserving encryption techniques will be critical. For instance, even though security requirements are not the same for the entertainment and military industries, the security of a newly proposed cipher still needs to be properly evaluated, and should in no way be reduced to the security of the underlying cryptographic primitive. Multimedia items have a lot of redundancy, and partial encryption of the data may not be enough to guarantee security, even if the content is unintelligible when it is rendered. Important information may still leak that would permit reconstructing the protected object to some extent, e.g. by exploiting a prior on the statistics of the underlying signal. This is a lesson learnt the hard way in selective encryption for multimedia content.

Looking back at the PointShuffling scrambling algorithm, it is actually possible to recover quite a decent looking 3D object. As mentioned earlier, the definition of the vertices themselves has not been modified, i.e. their positions in the 3D space remained unchanged. As a result, if the rendering engine ignores the face information and simply displays the vertices of the object, the resulting cloud of points provides a lot of information about the original shape of the object, as depicted in Figure 1. At this stage, the recovery process is then a simple matter of applying conventional surface reconstruction techniques [27]. The resulting reconstructed object will be very close to the original one, even if it is not exactly the same. A quick fix to address this security problem is to apply three different permutations to the individual coordinates of the vertices:

23. A number of formats have been proposed to describe 3D scenes. They all rely on some kind of structure to separate different types of data. For instance, in the simplest case, a 3D object is composed of a set of vertices with 3D coordinates (aka the geometry) and a set of faces obtained by connecting the previously defined vertices with edges (aka the connectivity or the topology). The structure of the file can be implicit through the use of a particular convention, or explicit with XML-like tags to separate the different pieces of data.
24. 'XML Encryption Syntax and Processing' (W3C, December 10, 2002).
25. Mihir Bellare et al., 'Format-Preserving Encryption', in Selected Areas in Cryptography, ed. Michael J. Jacobson, Vincent Rijmen, and Reihaneh Safavi-Naini, vol. 5867 (Berlin, Heidelberg: Springer Berlin Heidelberg, 2009), 295-312.

xi â†? xĎƒ K (i ) x yi â†? yĎƒ K (i ) y

CoordShuffling: zi â†? zĎƒ K z (i )



This creates new vertices, which are in the bounding box of the original object, hence granting the desired geometry preserving property.                            As        depicted                          in        Figure                  1,      this          protected                 object is unintelligible and the reconstruction attack is now ineffective.  



 Amir  Said,  ‘Measuring  the  strength  of  partial  encryption   schemes’,  in  IEEE  International  Conference  on  Image  Processing,   2005.  ICIP  2005,  vol.  2  (presented  at  the  IEEE  International   Conference  on  Image   Processing,  2005.  ICIP  2005,   IEEE,  2005),  II–          1126–9,  doi:10.1109/ICIP.2005.1530258.  



 Tony  Derose  et  al.,  Surface  reconstruction  from  unorganized   points  (CiteSeerX,  1992),                 35.  



Figure 1: Illustration of the visual impact of two alternate geometry preserving encryption techniques (PointShuffling vs. CoordShuffling) as well as their immunity against reconstructions attacks.

Assessing the security of geometry preserving encryption techniques will be critical. Even if security requirements are not the same for the entertainment and the military industries, the security of a newly proposed cipher still needs to be properly evaluated and should be in no way reduced to the security of the underlying cryptographic primitive. Multimedia items have a lot of redundancy and partial encryption of the data may not be enough to guarantee security, even if the content is unintelligible when it is rendered. Important information may still leak that would permit reconstructing the
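The two scrambling strategies discussed here can be sketched in a few lines of Python. This is a toy illustration only: numpy's seeded generator stands in for a proper key-seeded cryptographic permutation, and the function names are ours, not taken from any published implementation.

```python
import numpy as np

def point_shuffling(vertices, key):
    """PointShuffling sketch: permute the vertex list with a key-seeded
    permutation, leaving the 3D coordinates themselves untouched."""
    perm = np.random.default_rng(key).permutation(len(vertices))
    return vertices[perm], np.argsort(perm)  # protected vertices + inverse permutation

def coord_shuffling(vertices, keys):
    """CoordShuffling sketch: one independent permutation per coordinate,
    creating brand new points that stay inside the original bounding box."""
    out = np.empty_like(vertices)
    for axis, key in enumerate(keys):
        perm = np.random.default_rng(key).permutation(len(vertices))
        out[:, axis] = vertices[perm, axis]
    return out

# Toy object: 4 vertices (faces would reference these indices).
V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])

protected, inverse = point_shuffling(V, key=2012)
assert np.array_equal(protected[inverse], V)                   # key holder recovers exactly
assert {tuple(p) for p in protected} == {tuple(p) for p in V}  # cloud of points is unchanged

scrambled = coord_shuffling(V, keys=(1, 2, 3))
assert (scrambled >= V.min(0)).all() and (scrambled <= V.max(0)).all()  # bounding box preserved
```

The two assertions after `point_shuffling` capture both sides of the discussion: decryption is exact for the key holder, yet the unprotected cloud of points is identical to the original one, which is precisely what the surface reconstruction attack exploits.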

Still, the surface reconstruction attack is only one out of many yet unidentified threats against geometry preserving encryption schemes. 3D objects provide many side information channels that can be exploited by an adversary. For instance, should our basic 3D mesh be enriched with a texture28, there is a strong correlation between (i) the distance between 3D vertices and (ii) the distance between the corresponding 2D points in the texture image. Scrambling a single one of these two elements leaves the door open to a potential reconstruction attack. On another front, the connectivity of the object provides a lot of information on the actual geometry of the mesh and can thus help to reconstruct an object.29 Security evaluation relying on the identification of potential threats due to information leakage will be one of the most sensitive tasks when designing novel geometry preserving ciphers for 3D graphics and should therefore not be overlooked.

Challenges Beyond Security

Keeping in mind that such protection technologies for 3D graphics will need to be eventually integrated into a content production and/or distribution workflow, a number of non-security related challenges arise. Let us examine the question of distribution of 3D objects to begin with. In many application scenarios, the 3D mesh is streamed to the recipient for interactive visualization, e.g. for 3D games or metaverses. To save bandwidth, a routine trick consists in partitioning the data in bags of connected faces and sending a bag of faces only if it contains an element facing the virtual camera.30 In most cases, the whole object will not be received by the recipient but only the components that might be visible in the scene. If this mechanism has not been accounted for by the protection technique, the recipient will, in most cases, not be able to recover the original object, even if she possesses the corresponding credentials. For instance, with both PointShuffling and CoordShuffling algorithms, a single missing protected vertex results in being unable to invert the pseudo-random permutation(s). To support this use case, it would be necessary to protect each element of the partition independently, at the risk of decreasing security to a critical level when the granularity of the partition becomes too small. Even if this trade-off were to be appropriately tackled, there would still be a remaining issue. In general, the encryption techniques proposed so far have a strong impact on the orientation of the faces. As a result, in each bag of protected faces, it is very likely that there will be at least one element facing the virtual camera. Consequently, the whole object is streamed to the receiver, hence losing the expected benefit in terms of bandwidth. Compression performance will also be affected if no care is taken

28  Adding a texture to a 3D mesh is simply a matter of providing a 2D image and specifying where it is attached to the vertices defining the 3D object. The texture image is then stretched and shrunk as necessary in order to wrap it over the 3D faces accordingly.
29  M. Isenburg, S. Gumhold, and C. Gotsman, ‘Connectivity shapes’, in Visualization 2001 (VIS ’01) Proceedings, IEEE, 2001, 135–552, doi:10.1109/VISUAL.2001.964504.
30  Sheng Yang, Chang-Su Kim, and C.-C. J. Kuo, ‘A progressive view-dependent technique for interactive 3-D mesh transmission’, IEEE Transactions on Circuits and Systems for Video Technology 14, no. 11 (November 2004): 1249–1264, doi:10.1109/TCSVT.2004.835153.
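The per-partition protection evoked above can be sketched as follows. This is a hypothetical variant of PointShuffling of our own devising, not a published scheme: each bag of vertex indices gets its own sub-key, so a bag can be decrypted in isolation, at the cost of a much smaller permutation space per bag.

```python
import numpy as np

def shuffle_per_bag(vertices, bags, master_key):
    """Hypothetical per-partition PointShuffling: each bag of vertex indices
    is permuted with its own sub-key, so a received bag can be decrypted
    without the rest of the object. The smaller the bag, the smaller the
    permutation space, hence the weaker the scrambling."""
    out = vertices.copy()
    for bag_id, idx in enumerate(bags):
        idx = np.asarray(idx)
        perm = np.random.default_rng((master_key, bag_id)).permutation(len(idx))
        out[idx] = vertices[idx[perm]]
    return out

V = np.arange(18, dtype=float).reshape(6, 3)   # 6 toy vertices
bags = [[0, 1, 2], [3, 4, 5]]                  # two bags of connected faces' vertices
P = shuffle_per_bag(V, bags, master_key=7)

# Each bag only mixes its own vertices, so it is decryptable bag by bag.
for idx in bags:
    assert {tuple(v) for v in P[idx]} == {tuple(v) for v in V[idx]}
```

With bags of only three vertices, an adversary can brute-force the 3! = 6 permutations per bag, which is exactly the security/granularity trade-off described in the text.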

while integrating encryption techniques into a workflow. For instance, some 3D compression techniques rely on the same prediction framework as in audio and video.31 Based on some statistical prior about 3D data, the encoder makes an informed prediction and only entropy encodes the residual error. For instance, one could represent a 3D mesh as an arrangement of baseline patches. In this case, the encoder simply needs to send (i) a collection of baseline patches, (ii) some geometric parameters (translation, rotation, scaling) to position each patch, and (iii) the residual error for each patch in order for the decoder to recover the object. The issue with encryption is that it essentially produces a protected object that deviates from the statistical a priori model underpinning the prediction strategy. This subsequently results in unexpected residual errors that are poorly compressed by otherwise optimized entropy encoders. The compression ratio for protected objects is bound to be significantly lower than for original objects. A possible avenue could reside in incorporating the cryptographic components within the encoding architecture, e.g. by protecting the parameters obtained after prediction rather than the raw 3D mesh. The final issue with geometry preserving encryption is related to rendering, namely the impact of the protection on the number of frames being displayed per second (the frame rate). To display a 3D scene, the rendering engine goes through a process referred to as rasterisation. It essentially consists in identifying the position of the faces in the video frame captured by the camera, and drawing the resulting transformed faces when they are not occluded. To know whether a new incoming face is occluded, the rendering engine maintains a z-buffer which records for each pixel the depth of the currently drawn face.
If, for a given image pixel, the distance of the incoming face is smaller than the one currently registered in the z-buffer, the considered pixel is updated using the current face, and so is the z-buffer. Otherwise, the fragment is discarded. A geometry preserving encryption technique, for its part, distorts the original object into something that may no longer look like a regular 3D mesh. For instance, the protected objects output by the PointShuffling and CoordShuffling algorithms are made of faces that are on average much larger than those of their original counterparts. Displaying such objects will therefore require many more updates of the z-buffer than usually needed. As a result, the user experience may be degraded: users with proper credentials incur no penalty, whereas users without credentials may get a slower frame rate.
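The z-buffer update rule described above can be condensed into a minimal sketch. The fragment-level loop and the `updates` counter are our own simplification of what a real rasteriser does in hardware; the counter illustrates why scrambled meshes made of large overlapping faces trigger many more overwrites.

```python
import numpy as np

def rasterize(fragments, width, height):
    """Minimal z-buffer sketch: a fragment (x, y, depth) overwrites a pixel
    only when it is closer than the depth currently stored for that pixel."""
    zbuf = np.full((height, width), np.inf)   # everything starts infinitely far away
    updates = 0
    for x, y, z in fragments:
        if z < zbuf[y, x]:                    # incoming face is closer: overwrite
            zbuf[y, x] = z
            updates += 1
    return zbuf, updates

# Three faces covering the same pixel, drawn in arbitrary order.
zbuf, updates = rasterize([(0, 0, 5.0), (0, 0, 3.0), (0, 0, 7.0)], 4, 4)
assert zbuf[0, 0] == 3.0   # nearest face wins
assert updates == 2        # the farthest face never touches the buffer
```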

31  Jingliang Peng, Chang-Su Kim, and C.-C. Jay Kuo, ‘Technologies for 3D mesh compression: A survey’, Journal of Visual Communication and Image Representation 16, no. 6 (December 2005): 688–733, doi:10.1016/j. jvcir.2005.03.001, S1047320305000295.




Watermarking 3D Objects

Digital watermarking complements encryption and acts as a deterrent against unauthorized uses of copyrighted content. Essentially, watermarking consists in modifying multimedia content in an imperceptible manner in order to convey additional information in a robust fashion. The selling point of watermarking is that it survives the D/A-A/D conversion (the analog hole). To be understood by a human, multimedia content needs to be eventually decrypted. At this very moment, the content is left unprotected and may be subject to piracy attacks. In contrast, a watermark is inherently tied to the host signal and travels along with it. Relying on this property, one could exploit watermarking to serialize content at distribution time, i.e. each recipient receives a slightly different version of an asset, each one carrying a unique watermark. As a result, should a copy be illegally redistributed, the underlying watermark would allow pinpointing the source of the leak. There are mainly two communications systems routinely used as the baseline for digital watermarking, namely spread spectrum watermarking and watermarking with side information (a.k.a. dirty paper watermarking).32 A content adaptation layer, which is in charge of accommodating the particular characteristics of the host media, is then wrapped around this basis to obtain a full-fledged watermarking system. While the watermarking layer is relatively mature and stable nowadays, there is still lively research activity to devise better content adaptation layers, especially for exotic types of media, such as 3D (animated) meshes, that have only received marginal interest so far.

Fidelity Metrics for 3D Content

One of the requirements of watermarking is for the introduced modifications to remain imperceptible to a human observer. In the context of 3D meshes, it immediately raises the key question: how do you objectively measure the distortion between two 3D objects? Shall it relate to the geometric distortion of the mesh in the 3D space; or to the visual distortion that results after rendering from a number of viewpoints; or to the haptic sensation felt after 3D printing the objects? Fidelity metrics play a major part in watermarking and should be properly defined. They are indeed exploited to assess

32  Ingemar Cox et al., Digital Watermarking and Steganography, 2nd ed., The Morgan Kaufmann Series in Multimedia Information and Systems (Morgan Kaufmann, 2007).

the distortion introduced by the watermarking process as well as to perceptually shape the embedded watermark in order to make it less perceptible, e.g. by amplifying the watermark signal in regions where it will be less perceptible and attenuating it elsewhere. In 3D graphics, finding an appropriate objective distortion metric is still an open issue.33 The most intuitive and straightforward approach is the well-known Mean Squared Error (MSE), that aggregates the Euclidean distance between pairs of vertices, one vertex being taken from the original 3D mesh and the other being its watermarked counterpart. Another popular geometric distortion metric is the (asymmetric) Hausdorff distance, which is defined as the largest Euclidean distance from one surface to the other, as well as its symmetric version and numerous variants. Nevertheless, experimental results clearly showed that these quantitative geometric distortion metrics only exhibit marginal correlation with the perceptual distortion actually perceived by a human being. Similarly to what happened in audio and video, this situation motivated the definition of new metrics incorporating perceptual principles, rather than focusing solely on geometric information. Watermarking can indeed be assimilated to noise addition, which translates as the insertion of a rough component on the surface using 3D terminology. Since the introduction of roughness on a smooth surface is very noticeable, one could weigh the distortion measured locally on the surface of a 3D object with the roughness at this location in an attempt to better approach human perception. A major downside still remains: these metrics are usually defined for rigid 3D meshes and might not be fit to capture the distortion for animated meshes. A small distortion considered as imperceptible in a given pose may well be much more noticeable in another pose of a dynamic mesh. 
For instance, a small modification at the tip of the elbow may not have the same visibility depending on whether the arm is contracted or extended. An early work has recently been published to tackle this issue34 but fails to fully solve the problem. It provides means to measure distortion when considering a precise animation made of well identified poses but does not succeed in defining a universal metric that would grasp the distortion for any animation that could be generated from a rigged 3D mesh (a ‘rigged’ 3D mesh contains a skeleton which can be used to define a wide range of poses).

33  Abdullah Bulbul et al., ‘Assessing Visual Quality of 3D Polygonal Models’, IEEE Signal Processing Magazine, November 2011.
34  L. Vasa and V. Skala, ‘A Perception Correlated Comparison Method for Dynamic Meshes’, IEEE Transactions on Visualization and Computer Graphics 17, no. 2 (February 2011): 220–230, doi:10.1109/TVCG.2010.38.
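The two geometric metrics named in this section can be written down directly. The sketch below operates on vertex clouds; the true Hausdorff distance is defined between surfaces, so sampling only the vertices is an approximation.

```python
import numpy as np

def mse(original, watermarked):
    """Mean Squared Error: average squared Euclidean distance between
    paired original/watermarked vertices."""
    return float(np.mean(np.sum((original - watermarked) ** 2, axis=1)))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two vertex clouds."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    directed_ab = d.min(axis=1).max()   # farthest point of a from its nearest point in b
    directed_ba = d.min(axis=0).max()   # and vice versa
    return float(max(directed_ab, directed_ba))

A = np.array([[0., 0., 0.], [1., 0., 0.]])
B = A + np.array([0.1, 0., 0.])        # uniform 0.1 shift along x
assert abs(mse(A, B) - 0.01) < 1e-12
assert abs(hausdorff(A, B) - 0.1) < 1e-12
```

As the article stresses, both values stay identical whether the 0.1 shift lands on a smooth cheek or a rough patch of hair, which is exactly why such purely geometric metrics correlate poorly with perceived distortion.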




Manipulating 3D Objects

Fidelity metrics are also used to specify a distortion range within which watermarks embedded in 3D objects are expected to be retrieved successfully. The robustness of a watermarking system is indeed evaluated with respect to the ability of the watermark signal to survive even if the protected asset is further manipulated. Obviously, severe degradation of the content will make the detector miss the watermark and this is the reason why robustness is limited to attacks35 that preserve the usability of the content, said usability being typically defined in terms of distortion. For conventional media (audio, image, video), the watermarking community partitions attacks into two main categories. Synchronous attacks encompass all signal processing primitives that modify the sample values of the signal, e.g. noise addition, filtering, compression, etc. In contrast, desynchronization attacks refer to signal processing operations that modify the sample positions and thus disrupt the implicit alignment shared between the watermark embedder and detector.36 This well-established classification is however quite ill-fitted for 3D content since nearly all 3D operations modify the position of the vertices. In computer graphics, the distinction is rather made between operations that affect the geometry only (i.e. the position of the vertices) and the primitives that may also impact the topology (i.e. the connectivity between vertices). The first category covers standard signal processing operations such as noise addition, quantization, and surface smoothing, as well as basic similarity transformations such as rotation, scaling and translation. It is really with attacks on the topology that the specificities of 3D data are fully revealed. A 3D mesh can indeed be seen as a particular non-uniform sampling of a virtual 2D surface in a 3D space.
From this perspective, the mesh can be affected by a number of operations that, for instance, intend (i) to enrich the mesh with new vertices to refine the approximation of the surface, (ii) to simplify the mesh by discarding vertices while preserving the approximation of the surface, or even (iii) to define a brand new sampling of the surface. There is no equivalent to such attacks in conventional media such as audio, image, and video. This already large spectrum of potentially watermark-harming operations is further widened when the review is no longer restricted to rigid objects and is extended to animated 3D meshes. For instance, a skeleton-based animated object can be defined by a regular 3D mesh that is attached to a skeleton made of ‘bones’ hierarchically connected through joints, each of these joints having a number of degrees of freedom (constructing such a skeleton is called

35  Signal processing operations applied to the content after watermarking are routinely referred to as ‘attacks’, regardless of the intent of the person (regular casual use vs. hostile adversary) performing the operation.
36  Digital watermarking is a communications channel and as such relies on implicit conventions between the emitter and the receiver (length of the transmitted symbols, symbols codebook, synchronization). If the synchronization between the two parties is perturbed, the watermark message may no longer be retrieved even if it remains present in the host data in a somewhat latent state.

the ‘rigging’ of the object). The animation then consists in applying a transformation to the skeleton that naturally cascades to the ‘skin’. As a result, from a single animated 3D mesh, it is possible to generate a large collection of poses through isometric transformations of the surface.37 Nevertheless, this still does not cover the exhaustive bestiary of shapes that could be derived from a single animated object. Very realistic animations routinely include for instance a number of non-isometric deformations in order to model muscles and the associated skin stretching movements.

Limitations of Existing 3D Watermarking Schemes

When facing large variability, a routine watermarking practice consists in defining an embedding domain that will absorb most of the instability so that the baseline watermark channel only has to deal with marginal noise. Such an embedding domain typically relies on some signal processing transforms, which exhibit appealing properties to achieve robustness. Whether for audio, images or video, the transforms considered in watermarking have evolved over the years in a very similar pattern. Early algorithms essentially considered raw data, also referred to as the time or the spatial domain; subsequent works then relied on a frequency transform; finally, most recent proposals rely on some space-frequency representation of the host signal. 3D watermarking is no exception and also closely followed this trend, as can be inferred by reading recent comprehensive surveys.38 Still, 3D specific adjustments had to be incorporated here and there in order to obtain a dedicated content adaptation layer. Pioneering 3D watermarking algorithms essentially scanned through the faces of the object to be protected and sequentially modified a particular feature, e.g. the ratio of the edges, the area, the projection of one vertex onto the opposite edge, etc. These approaches have however rapidly been found to be extremely fragile, some of them being defeated in some cases with a simple reordering of the vertices. Significant effort has therefore been spent to devise alternate embedding domains that were directly derived from the raw representation of the 3D object and that offered superior stability against attacks. For instance, a very popular scheme, which has led to numerous variations afterwards, consists in modifying the histogram of the distances between the vertices and the center of mass of the object.39 While such techniques are often straightforward to implement and fast to execute, they usually lack robustness in many aspects, e.g.
with respect to cropping, smoothing, re-meshing, etc.

37  An isometry is a transformation that preserves the distance between points. For a 3D surface, it implies that the geodesic distance over the surface remains untouched. For instance, the distance from the tip of a finger to the shoulder is the same regardless of whether the arm is extended or contracted. 38  Kai Wang et al., ‘A Comprehensive Survey on Three-Dimensional Mesh Watermarking’, IEEE Transactions on Multimedia 10, no. 8 (December 2008): 1513–1527, doi:10.1109/TMM.2008.2007350. 39  J. -W Cho, R. Prost, and H. -Y Jung, ‘An Oblivious Watermarking for 3-D Polygonal Meshes Using Distribution of Vertex Norms’, IEEE Transactions on Signal Processing 55, no. 1 (January 2007): 142–155, doi:10.1109/ TSP.2006.882111.
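The vertex-norm family of schemes mentioned above can be caricatured in a few lines. This is a much-simplified, non-blind sketch of our own: bin selection, blind detection and the exact statistics of the cited algorithm are omitted, and `strength` is an illustrative parameter, not a value from the literature.

```python
import numpy as np

def embed_bit(vertices, bit, strength=0.02):
    """Push the distribution of vertex norms (distances to the center of
    mass) slightly outwards to embed a '1', inwards to embed a '0'."""
    center = vertices.mean(axis=0)
    scale = 1.0 + strength if bit else 1.0 - strength
    return center + (vertices - center) * scale

def detect_bit(original, suspect):
    """Non-blind detector: compare the mean vertex norm of the suspect
    mesh against that of the original."""
    def mean_norm(v):
        return np.linalg.norm(v - v.mean(axis=0), axis=1).mean()
    return int(mean_norm(suspect) > mean_norm(original))

rng = np.random.default_rng(0)
mesh = rng.normal(size=(100, 3))
assert detect_bit(mesh, embed_bit(mesh, 1)) == 1
assert detect_bit(mesh, embed_bit(mesh, 0)) == 0
```

Because embedding and detection both depend only on the set of vertex norms, this sketch is immune to a reordering of the vertices, yet, like its real counterparts, it is easily disturbed by cropping or re-meshing, which change the center of mass and the norm statistics.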




These shortcomings of the raw domain naturally led to investigating frequency representations of the 3D mesh.40 The motivation for considering such representations is threefold: (i) the spectral domain can naturally be robust to geometric attacks (smoothing, similarity transformations, etc.), (ii) the modifications introduced by the watermarking process are mechanically spread throughout the mesh at synthesis time (when performing the inverse transform from the spectral domain to the raw domain), and (iii) the spectral domain may offer better control over the perceptual distortion introduced by the watermarking process.41 Still, the 3D spectral representation has a notable difference compared to its audiovisual counterparts. There are no canonical reference basis vectors available to decompose input 3D objects in the spectral domain; they therefore have to be defined in a content dependent manner by computing the eigenvectors of some intermediary representation of the object, namely the Laplacian matrix.42 The issue with such a content-dependent spectral domain is that it makes the resulting watermarking algorithm inherently weak against topological attacks, e.g. re-sampling attacks. Due to the implementation constraints induced by the spectral domain computation, researchers suggested using a multi-resolution domain that is comparable to some extent to the wavelet transform for audiovisual content. The multi-resolution representation of a 3D mesh indeed consists of a collection of versions of the mesh at decreasing levels of detail, together with wavelet coefficients that enable going from one version of the mesh to another one of nearby level of detail. A state-of-the-art watermarking algorithm can then be applied to the coarsest wavelet coefficients for instance.43 While this alternate strategy is significantly less computationally demanding than the spectral domain, it still shows high sensitivity to topological attacks.
In summary, the vast majority of the published literature on 3D watermarking solely focuses on rigid meshes. In this context, improved robustness against geometric attacks has been reported but none of the embedding domains proposed so far offers verified robustness against topology attacks such as re-meshing. In addition, since these watermarking schemes were not considering animation in the first place, most of them will not be able to retrieve the

40  R. Ohbuchi, A. Mukaiyama, and S. Takahashi, ‘A Frequency-Domain Approach to Watermarking 3D Shapes’, Computer Graphics Forum 21, no. 3 (2002): 373–382, doi:10.1111/1467-8659.t01-1-00597.
41  In contrast with audiovisual content, one peculiar characteristic of 3D content is that watermarks inserted in low frequencies are reported to be less perceptible than the ones inserted in high frequencies.
42  For a 3D object made of N vertices, the Laplacian matrix has dimension N×N and rapidly raises computational complexity concerns for large 3D objects due to memory space constraints. A typical work-around consists in partitioning the mesh in elements for which the Laplacian matrix can be computed. It should be noted that various definitions of the Laplacian matrix exist, each one leading to a spectral domain having different properties. There is currently no firm evidence hinting that a particular definition would be more appropriate for watermarking purposes.
43  Kai Wang et al., ‘Hierarchical Blind Watermarking of 3D Triangular Meshes’, in 2007 IEEE International Conference on Multimedia and Expo, IEEE, 2007, 1235–1238, doi:10.1109/ICME.2007.4284880.
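The content-dependent spectral basis evoked above can be sketched with the combinatorial Laplacian L = D − A of the connectivity graph (one of the several Laplacian definitions footnote 42 alludes to), illustrated here on a toy tetrahedron with numpy's symmetric eigendecomposition.

```python
import numpy as np

def combinatorial_laplacian(n, edges):
    """L = D - A for the mesh connectivity graph: vertex degrees on the
    diagonal, -1 for each pair of connected vertices."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return L

def spectral_transform(vertices, L):
    """Project the x, y, z coordinate functions onto the Laplacian
    eigenbasis, sorted by eigenvalue (i.e. by 'mesh frequency');
    a watermark would then be added to selected coefficients."""
    _, basis = np.linalg.eigh(L)           # orthonormal eigenvectors
    return basis.T @ vertices, basis

# Tetrahedron connectivity; synthesis (the inverse transform) is exact.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
V = np.random.default_rng(1).normal(size=(4, 3))
coeffs, basis = spectral_transform(V, combinatorial_laplacian(4, edges))
assert np.allclose(basis @ coeffs, V)
```

Since the basis is computed from the connectivity itself, adding or removing a single vertex changes every eigenvector, which makes concrete why such schemes are inherently weak against re-sampling attacks.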

embedded watermark after the generation of a new pose. Only a handful of articles explicitly addressed the issue of animation e.g. by watermarking the individual temporal trajectories of the vertices.44 Although such an approach manages to protect a single animation generated from an animated 3D object, it fails to provide a watermark that could be reliably extracted from whichever arbitrary pose that could be generated with a rigged 3D mesh.

Concluding Remarks

Protecting computer generated 3D graphics has received marginal attention up to now. Both encryption and watermarking techniques may need to be revisited in order to better accommodate the specificities of 3D content. On the encryption side, the adaptation of format preserving encryption techniques offers intriguing prospects for protecting 3D content while geometrically controlling the visual distortion at rendering. However, this approach raises a number of challenges which will need to be addressed, including security evaluation due to information potentially leaking from unprotected 3D components and the interplay with other mechanisms for 3D content (compression, streaming, rendering, etc.). On the watermarking front, the current lack of robustness of the proposed solutions against re-meshing and pose generation calls for a drastic paradigm shift. In this perspective, 3D mesh intrinsic properties, such as geodesic distances or the distance to the medial axis, appear to be promising candidates for watermark embedding since they are expected to offer natural robustness against topology attacks. A first step in this direction has recently been taken.45

X. ROLLAND-NEVIÈRE, Y. MAETZ, M. ÉLUARD, and G. DOËRR

44  S. Yamazaki, ‘Watermarking motion data’, in Proc. Pacific Rim Workshop on Digital Steganography (STEG04), 2004, 177–185. 45  Jen-Sheng Tsai et al., ‘Geodesic distance-based pose-invariant blind watermarking algorithm for three-dimensional triangular mesh model’, in 2010 17th IEEE International Conference on Image Processing (ICIP) (presented at the 2010 17th IEEE International Conference on Image Processing (ICIP), IEEE, 2010), 209–212, doi:10.1109/ICIP.2010.5652120.




WHERE WILL WE BE?

1st International Workshop on Network Forensics, Security and Privacy (NFSP 2012), Macau, China, June 18-21, 2012

• Paper presentation: An empirical study of passive 802.11 device fingerprinting, by Christoph Neumann, Olivier Heen, and Stéphane Onno

10ième Symposium sur la Sécurité des Technologies de l’Information et des Communications (SSTIC 2012), Rennes, France, June 6-8, 2012

• Chairman: Olivier Heen

5th IFIP International Conference on New Technologies, Mobility and Security (NTMS 2012), Istanbul, Turkey, May 7-10, 2012

• Paper presentation: Improving the resistance to side-channel attacks on cloud storage services, by Olivier Heen, Christoph Neumann, Luis Montalvo, and Serge Defrance

Information Security Practice and Experience (ISPEC 2012), Hangzhou, China, April 9-12, 2012

• Paper presentation: Partial key exposure on RSA with private exponents larger than N, by Marc Joye, and Tancrède Lepoint

TECHNICOLOR SPONSORED CONFERENCES

• 10ième Symposium sur la Sécurité des Technologies de l’Information et des Communications (SSTIC 2012), Rennes, France, June 6-8, 2012. Ten year special edition.
• IH 2012, Information Hiding, Berkeley, CA, USA, May 15-18, 2012


















1, rue Jeanne d’Arc 92443 Issy-les-Moulineaux France Tel. : 33(0)1 41 86 50 00 - Fax : 33 (0) 1 41 86 58 59

© Copyright 2012 Technicolor. All rights reserved. All trade names referenced are service marks, trademarks, or registered trademarks of their respective companies.

Security Newsletter 21