
REQUIREMENTS SPECIFICATIONS
Human-enhanced time-aware multimedia search

CUbRIK Project IST-287704 Deliverable D2.2 WP2

Deliverable Version 1.0 – 30 June 2012
Document ref.: cubrik.D22.TUD.WP2.V1.0


Programme Name: IST
Project Number: 287704
Project Title: CUbRIK
Partners: Coordinator: ENG (IT); Contractors: UNITN, TUD, QMUL, LUH, POLMI, CERTH, NXT, MICT, ATN, FRH, INNEN, HOM, CVCE, EIPCM
Document Number: cubrik.D22.TUD.WP2.V1.0.doc
Work-Package: WP2
Deliverable Type: Document
Contractual Date of Delivery: 30 June 2012
Actual Date of Delivery: 30 June 2012
Title of Document: Requirements Specifications
Author(s): Martha Larson (TUD), Mark Melenhorst (TUD/Novay), Ruud Janssen (TUD/Novay), Raynor Vliegenthart (TUD), Markus Brenner (QMUL), Naeem Ramzan (QMUL), Piero Fraternali (POLMI), Alessandro Bozzon (POLMI), Luca Galli (POLMI), Davide Martinenghi (POLMI), Ilio Catallo (POLMI), Eleonora Ciceri (POLMI), Marco Tagliasacchi (POLMI), Patrick Aichroth (FRH), Christian Weigel (FRH), Paolo Salvatore (INNEN), Lars Wieneke (CVCE), Sascha Kaufmann (CVCE), Ghislain Sillaume (CVCE), Uladzimir Kharkevich (UNITN), María Menéndez (UNITN), Aliaksandr Autayeu (UNITN), Anastasios Drosou (CERTH/ITI), Dimitrios Tzovaras (CERTH/ITI), Theodoros Semertzidis (CERTH), Michalis Lazaridis (CERTH), Dimitris Rafailidis (CERTH), Ismail Sengor Altingovde (LUH), Sergiu Chelaru (LUH), Mihai Georgescu (LUH)
Approval of this report:
Summary of this report: Summary of the user requirements gathered from the Domains of Practice. Definition of the system requirement specifications (First Release).
History: Version 1 distributed for internal review; Version 2 (final) submitted for approval by EC
Keyword List:
Availability: This report is public

CUbRIK Requirements Specifications

D22 Version 1.0


Disclaimer

This document contains confidential information in the form of the CUbRIK project findings, work and products, and its use is strictly regulated by the CUbRIK Consortium Agreement and by Contract no. FP7-ICT-287704. Neither the CUbRIK Consortium nor any of its officers, employees or agents shall be responsible or liable in negligence or otherwise howsoever in respect of any inaccuracy or omission herein.

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7-ICT-2011-7) under grant agreement n° 287704. The contents of this document are the sole responsibility of the CUbRIK consortium and can in no way be taken to reflect the views of the European Union.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. This work is partially funded by the EU under grant IST-FP7-287704.



Table of Contents

EXECUTIVE SUMMARY  1

1. INTRODUCTION  2
   1.1 THE VERTICAL APPS  2
   1.2 THE HORIZONTAL DEMOS  3
   1.3 OUTLINE OF THIS DELIVERABLE  3

2. VERTICAL APPS: HISTORY OF EUROPE  5
   2.1 USER NEEDS AND SCENARIO DEVELOPMENT  5
      2.1.1 Outline of the approach  5
      2.1.2 Target user group  6
      2.1.3 Exploratory interviews  6
      2.1.4 Construction of user stories  8
      2.1.5 User story evaluation with end-users  12
      2.1.6 User story evaluation with technical experts  15
      2.1.7 From user stories to technical implementation  17
      2.1.8 Proposal for technical implementation  17
   2.2 USE CASES  18
      2.2.1 Image analyser  18
      2.2.2 Social Graph Manager  18
      2.2.3 Context expander  18
      2.2.4 Copyright Checker  19
   2.3 NEXT STEPS  19

3. VERTICAL APPS: SME INNOVATION (FASHION INDUSTRY)  21
   3.1 USER NEEDS AND EVALUATION  21
      3.1.1 Outline of the approach  21
      3.1.2 Target user group  23
      3.1.3 Foundational interviews  23
      3.1.4 Use story development  28
      3.1.5 User story development  29
      3.1.6 User story evaluation with technical experts  30
      3.1.7 Proposal for technical implementation  30
   3.2 USE CASES  31
      3.2.1 User Story 1a - Search Similar Images  31
      3.2.2 User Story 2a - Play Games  32
      3.2.3 What do I wear today  33
   3.3 NEXT STEPS  34

4. HORIZONTAL DEMOS  35
   4.1 INTRODUCTION  35
   4.2 MEDIA ENTITY ANNOTATION  37
      4.2.1 Application context  37
      4.2.2 Technical contribution  37
      4.2.3 Requirements and success criteria  38
      4.2.4 Use cases  40
      4.2.5 Release planning  43
   4.3 CROSSWORDS  44
      4.3.1 Application context  44
      4.3.2 Technical contribution  44
      4.3.3 Requirements and success criteria  44
      4.3.4 Use cases  46
      4.3.5 Release planning  47
   4.4 LOGO DETECTION  47
      4.4.1 Application context and data set  47
      4.4.2 Technical contribution  47
      4.4.3 Requirements and success criteria  47
      4.4.4 Use cases  48
      4.4.5 Release planning  50
   4.5 PEOPLE IDENTIFICATION  50
      4.5.1 Application context  50
      4.5.2 Technical contribution and success criteria  50
      4.5.3 Requirements and success criteria  51
      4.5.4 Use cases  52
      4.5.5 Release planning  52
   4.6 NEWS HISTORY PIPELINE  52
      4.6.1 Application context and data set  52
      4.6.2 Technical contribution  53
      4.6.3 Requirements and success criteria  53
      4.6.4 Use cases  54
      4.6.5 Release planning  59
   4.7 ACCESSIBILITY AWARE RELEVANCE FEEDBACK  59
      4.7.1 Application context  59
      4.7.2 Technical contribution  59
      4.7.3 Requirements and success criteria  61
      4.7.4 Release planning  62
   4.8 LIKELINES  63
      4.8.1 Application context  63
      4.8.2 Technical contribution  63
      4.8.3 Requirements and success criteria  63
      4.8.4 Use cases  64
      4.8.5 Release planning  65
   4.9 COPYRIGHT-RELATED REQUIREMENTS  65
   4.10 SUMMARY OF SUCCESS CRITERIA  66
   4.11 NEXT STEPS  67

5. FEATURE TO H-DEMO MAPPING  68
   5.1 FEATURE TO H-DEMO MAPPING  68
   5.2 REQUIREMENTS AND FEATURES IMPLEMENTED IN RELEASE 1  75

6. CONCLUSION AND OUTLOOK  76

7. REFERENCES  77

8. APPENDIX A: EU HISTORY SOCIAL GRAPHS  78

9. APPENDIX B: USE SCENARIOS DEVELOPED AS CANDIDATES FOR THE SME INNOVATION V-APP USE SCENARIO  81
   9.1 THE USAGE SCENARIO NUMBER #1: PACKAGING EQUIPMENT MANUFACTURER  81
   9.2 THE USAGE SCENARIO NUMBER #2: SME IN THE FASHION INDUSTRY  85
   9.3 EXAMPLE OF SPECIFIC USAGE SCENARIOS OF THE CUB SME APP  89

10. APPENDIX C: EXAMPLE REQUIREMENTS RELATED TO SME INNOVATION V-APP  91



Tables and Figures

Table 1 Requirements Planning  2
Table 2 Summary of the Success Criteria  67
Table 3 Feature to H-Demo mapping  74
Table 4 Release 1 Summary  75
Table 5 General requirements of the CUbRIK SME App  92
Table 6 Functionalities of the CUbRIK SME App  95
Table 7 Matching of TRM contents categories and Innovation content categories  96

Figure 1 Requirements Cycle  2
Figure 2: High-level schematic of the complementary relationship between the Vertical Applications (V-Apps) and the Horizontal Demonstrations (H-Demos), used to validate the CUbRIK platform  3
Figure 3: Outline of the approach  5
Figure 4 Outline of the approach for the SME Innovation app  22
Figure 5 How information supports SMEs innovation process  24
Figure 6 Information considered important for market analysis in SMEs innovation process  25
Figure 7 Information considered important for technology intelligence in SMEs innovation process  25
Figure 8 Used keywords by SMEs in searching the web during innovation process  25
Figure 9 Search functionalities interesting for SMEs during innovation process  26
Figure 10 Functionalities interesting for SMEs during innovation process  27
Figure 11 Examples of fashion sites reviewed  29
Figure 12 Examples of Creative Commons licensed fashion items on Flickr photo sharing website  29
Figure 13 Overall Vision of Fashion Industry  30
Figure 14 User Story 1a  31
Figure 15 User Story 2a  32
Figure 16 What do I wear today  33
Figure 18 Harvesting Representative Images of Named Entities  37
Figure 19 UML diagram for ‘Media Harvesting for Entities’  40
Figure 20 UML diagram 'Entity search by metadata'  42
Figure 21 UML diagram 'Entity search by multimedia'  43
Figure 22 UML diagram Crosswords  46
Figure 23 Use case diagram for H-Demo “News History”  54
Figure 24 Activity diagram for use case "Query" (UC NH1)  55
Figure 25 Activity diagram for use case "Content Provisioning" (UC NH2)  56
Figure 26 Activity diagram for use case “Collection Validation & Annotation" (UC NH3)  57
Figure 27 Activity diagram for use case “Model Update" (UC NH4)  58
Figure 28: Coffee Capsule (left image) and Coffee pods (right image)  82
Figure 30: Result of Google search over images for “Coffee Capsule Drawings”  83
Figure 31: Google Video search results for “Skirt colour summer 2012”  85
Figure 32: Google Image search results for “Skirt colour summer 2012”  86
Figure 33: Google Image search results for “Britney Spear trousers”  86
Figure 34: Yahoo Image search results for “Jennifer Aniston Skirt”  87
Figure 35: Google Video search results for “Jennifer Aniston Skirt”  87
Figure 36: Google Video search results for “America Music Awards 2011”  88
Figure 37: Umbrella decomposition in mechanism and function tree  90
Figure 38 Keywords (contents) portfolio map  96
Figure 39 Keywords (contents) relationship map  97
Figure 40 Keywords (contents) evolution map  97



Executive Summary

This document specifies the initial set of requirements for the two types of applications that will be developed within CUbRIK:

• The Vertical Applications (V-Apps): applications that are closely connected to a domain of practice and evaluated with end-users;

• The Horizontal Demos (H-Demos): demonstrators whose primary purpose is to provide compelling, concrete examples of state-of-the-art technologies in action, as well as of the concrete functionality that these technologies can provide.

The V-Apps requirements represent a co-ordinated effort between Task 2.1: “Domains of Practice” (M1-M6) and Task 10.1: “Application requirements and coordination” (M1-M36). For the V-Apps, user stories have been developed and evaluated with end-users, feedback on technical scope and feasibility has been collected, and an initial set of requirements has been specified. For each of the H-Demos, the application context, the technical contribution, the functional and non-functional requirements, and their success criteria are described. A compendium has been constructed that provides an overview of the features implemented by each of the H-Demos.

To conclude this deliverable, next steps have been identified. For the V-Apps this primarily involves the evaluation of the initial requirements by end-users, in preparation for the next intermediate release of the requirements in M18. For the H-Demos, the focus will be on the implementation of the features that will be released in R1. After R1, a second iteration of requirements specification needs to be done in preparation for the first version of the CUbRIK V-Apps.



1. Introduction

This deliverable describes the initial requirements of the CUbRIK Vertical Applications (V-Apps) and Horizontal Demos (H-Demos), which will be developed within CUbRIK for the purpose of validating the CUbRIK platform. In this chapter we describe the overall approach that was used to formulate the draft requirements for the V-Apps and the H-Demos.

1.1 The Vertical Apps

The V-Apps are end-to-end applications for the CUbRIK domains of practice, History of Europe and SME Innovation: two real-world domains chosen because they involve users with real-life needs for open search solutions such as those offered by the CUbRIK platform. The V-Apps are developed in multiple iterations, in which end-user feedback (user pull) is alternated with technical feedback (technology push) and refinement of the requirements. This cycle aims to construct requirements that are both technically feasible and grounded in user needs. The cycle is displayed in Figure 1.

Figure 1 Requirements Cycle

For the V-Apps, one cycle has been completed in M9. As a result, the requirements for the V-Apps are the initial requirements. As discussed in Sections 2.3 and 3.3, the next step for the V-Apps is to evaluate the initial requirements with end users, in order to make sure that the requirements address the needs of the end-user. To get the requirements into their final form, the following schedule is used; it is aligned with the release schedule for the WP10 prototypes and displayed in Table 1 below.

Month   Planning
M9      Initial requirements, first version of D2.2
M18     Final requirements for prototype, published as an intermediate release of D2.2
M22     Release of first prototype
M24     Final version of the V-Apps requirements in final version of D2.2
M27     Release of second prototype

Table 1 Requirements Planning



A WP2-WP10 Inter Work Package Task Group (consisting of CUbRIK partners CVCE, EIPCM, INNEN, TUD) was formed in order to coordinate the definition of requirements in WP2 and the development and evaluation of the demonstrator applications in WP10.

1.2 The Horizontal Demos

The H-Demos are technical and functional demonstrators of CUbRIK pipelines: frameworks for executing processes that consist of a workflow of tasks, each carried out by a task executor (i.e., a system component or a human contributor). The cycle displayed in Figure 1 will also be followed by the H-Demos, though to a lesser extent, as determined by their overall goal of demonstrating CUbRIK technology and functionality.

The primary difference between V-Apps and H-Demos lies in their genesis and purpose. V-Apps start with users under specific real-world conditions and move from the users’ needs to applications that can be implemented with CUbRIK technology. The V-Apps serve to validate the CUbRIK platform in a setting characterized by interdisciplinary collaboration (content owners, inventors of analysis methods, infrastructure engineers and end users), where it is used to create systems that apply open multimedia search technology to real-world problems and contribute to creating new business opportunities. The H-Demos start with CUbRIK technologies and move from technical components and functions to demonstrators that can then be shown to users, in order to illustrate the capacities of the underlying multimedia technologies and allow users to innovate new uses for the CUbRIK platform.

Figure 2: High-level schematic of the complementary relationship between the Vertical Applications (V-Apps) and the Horizontal Demonstrations (H-Demos), used to validate the CUbRIK platform.

Another difference between the V-Apps and the H-Demos lies in the way in which they cut across the layers of the CUbRIK platform. V-Apps are “vertical” in the sense that they use pipelines to carry out an end-to-end task, involving multiple individual pipelines. Although H-Demos can also be end-to-end solutions, their main focus is on demonstrating the functionality of particular CUbRIK pipelines (e.g., content processing, querying and feedback processing). Both the H-Demos and V-Apps provided initial input for D2.1 at M9. In D2.2 the goal will be a further convergence of the requirements with the specifications of the metadata models.
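The pipeline concept described above, a workflow of tasks each delegated to either an automatic component or a human contributor, can be illustrated with a minimal sketch. All class, function and task names below are hypothetical illustrations and do not correspond to the actual CUbRIK platform API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Minimal sketch of the pipeline idea: a workflow of tasks, each carried
# out by a task executor (a system component or a human contributor).
# All names here are hypothetical; they do not mirror the CUbRIK API.

@dataclass
class Task:
    name: str
    executor: Callable[[Any], Any]  # automatic component or human-in-the-loop stub
    human: bool = False             # marks tasks routed to human contributors

@dataclass
class Pipeline:
    tasks: list = field(default_factory=list)

    def run(self, item: Any) -> Any:
        # Each task transforms the item and passes it on to the next one.
        for task in self.tasks:
            item = task.executor(item)
        return item

# Example: a toy annotation pipeline mixing automatic and "human" steps.
def extract_keywords(doc):
    doc["keywords"] = [w for w in doc["text"].split() if len(w) > 6]
    return doc

def human_validation(doc):
    # Stand-in for a crowdsourcing step: a person would confirm the keywords.
    doc["validated"] = True
    return doc

pipeline = Pipeline([
    Task("keyword extraction", extract_keywords),
    Task("keyword validation", human_validation, human=True),
])

result = pipeline.run({"text": "European integration studies archive materials"})
print(result["keywords"], result["validated"])
```

The point of the sketch is the uniform treatment of executors: a human contribution step has the same interface as an automatic component, which is what allows V-Apps and H-Demos to compose the two freely within one workflow.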

1.3 Outline of this deliverable

Sections 2 and 3 of this deliverable deal with the V-Apps. These sections represent the co-ordinated efforts of Task 2.1: “Domains of Practice” (M1-M6) from WP2: “Requirements and Models” and Task 10.1: “Application requirements and coordination” (M1-M36) from WP10: “CUbRIK Apps and Evaluation”.

Section 4 covers the requirements for the H-Demos. The form of these requirements was defined in consultation with the entire CUbRIK consortium, and the requirement specifications were contributed by the consortium partner responsible for co-ordinating each demo. The subsection for each H-Demo specifies its requirements (both functional and non-functional) as well as its use cases. The requirements of each H-Demo include a list of success criteria according to which the demo will be evaluated.

Section 5 gives a summarizing view of Section 4: it recapitulates the requirements of the H-Demos and specifies the mapping between these requirements and the CUbRIK features. It includes a section that summarizes the requirements and features that will be covered by the first CUbRIK release, R1.

Further requirements will be specified in an intermediate version of this deliverable in M18, and in the next formal version of this deliverable, which is scheduled for M24.



2. Vertical Apps: History of Europe

History of Europe is the first domain of practice of CUbRIK. It is represented in the CUbRIK consortium by CVCE, a research and documentation centre for European studies, charged with the mandate of studying the process of European integration using an interdisciplinary approach. In the History of Europe domain, CUbRIK pursues the goal, described in Task 2.1 “Domains of Practice” of the DoW, of enabling the deep linking of media assets from private collections and the public Web, and space- and time-aware access to content.

2.1 User needs and scenario development

In this section, we report on the requirements for the History of Europe V-App and the approach we have taken to formulate them.

2.1.1 Outline of the approach

In this section we provide a high-level outline of the approach we took to select a target group, and to formulate and evaluate a number of user stories that fit well within the scope of CUbRIK while simultaneously supporting the target group in their work. Central to the approach is the development and evaluation of user stories. The diagram in Figure 3 summarizes our approach.

Figure 3: Outline of the approach.

The process depicted above is an elaboration of the first cycle of user pull – technology push – requirements specification that was discussed in the previous chapter. Below we summarize what each of the steps comprises; the steps are explained in more detail in the following sections.

1. Target group selection. A target user group needs to be selected that allows CUbRIK to develop a demonstrator that addresses user needs in a specific real-life context, while demonstrating the technological capabilities of the CUbRIK project regarding multimedia search and retrieval. For the History of Europe application domain, the team of domain experts in European Integration Studies at CUbRIK partner CVCE was selected, since multimedia search and retrieval is an inherent part of the experts’ daily work.

2. Exploratory interviews. Two interviews were conducted to get a specific idea of the work of the domain experts, its complexity, and the experts’ needs for support.

3. Construction of user stories. Five different user stories were developed in two iterations. Each user story describes a task the domain experts have to perform as part of their daily work, together with a technological solution to support the task at hand. In addition, Near2Me (Kofler et al., 2011), an existing user story from the field of tourism, was presented to the users, as it demonstrates what is technically possible in the near future, drawing on concepts that are strongly related to the support we think the domain experts may benefit from.

4. User story evaluation with end-users. A focus group with the domain experts was organized to evaluate the extent to which the scenarios match the experts’ needs.

5. User story evaluation with technical experts. An expert in the field of multimedia search and retrieval provided feedback on the user stories, as a first check on technical feasibility and on relevance to the CUbRIK project. In addition, a CUbRIK Apps workshop was organized with CUbRIK’s technical experts to evaluate whether the scenarios are technically feasible and fit well within the CUbRIK scope.

The sections below describe the approach we have taken in more detail, while the results and lessons learnt are discussed in the next chapter.

2.1.2 Target user group

The domain experts in European Integration Studies at CUbRIK partner CVCE were selected as an easily accessible group of representatives of a potentially much larger community. It is estimated that the number of European Integration Studies experts throughout Europe and the rest of the world exceeds one hundred. The number of teachers and students interested in the history of Europe will undoubtedly be much larger, possibly several thousand. The application domain itself can be seen as representative of the emerging field of Digital Humanities – “an area of research, teaching, and creation concerned with the intersection of computing and the disciplines of the humanities”1.

2.1.3 Exploratory interviews

To learn about the everyday work of the domain experts and to identify the tasks that may benefit from computer support, exploratory semi-structured interviews were held with two experts. The interviews took one hour each. They started with general questions about the expert’s work. Next, the experts were asked to describe typical projects they have been working on. Then questions were asked about search and retrieval tasks that require much time, are cumbersome to carry out, or would otherwise benefit from computer support. In one of the interviews the respondent was asked what he thought his work would look like in five years. The interviews were analysed with the purpose of understanding the nature of the experts’ work. The following lessons have been learnt:

• The domain experts are historians focusing on the topic of European Integration (1945 to present). They work on various research projects, with titles such as “Pierre Werner and the European Union”, “Spain and the European integration process”, or “The economic and monetary union”. Many projects are carried out in cooperation with historians at other institutes and universities throughout Europe.

• Their work can be compared to “detective work”. They visit archives and collect evidence to formulate or support a hypothesis. They interview eye witnesses, and digitize and annotate historical materials such as video, pictures, documents, letters, newspaper articles, books, cartoons, posters, flyers, etc. Much of their work is about interpretation, pattern recognition, and finding causal relations (“connecting the dots”), for instance when interpreting a series of meetings between people that ultimately led to an important historical event. The work therefore involves sensitivity and intuition. The interviewed experts were wary of computer support that would enforce a particular way of working.

• Because they have to be able to rely on their materials, the experts are very critical of their sources: these must be authentic and traceable and come from reputable sources (provenance2). Materials can be old and exist in various forms (for instance, old newspapers or documents on microfilm), related materials can be spread across various archives (fragmentation), and the “digital maturity” of archives varies enormously. Related materials can also be written in different languages, for instance in different accounts of the same event.

1 See http://en.wikipedia.org/wiki/Digital_humanities.

Typical tasks related to search and retrieval:

• Compiling sources
  o Finding archives: Iteratively create a corpus, using your own knowledge, by asking colleagues or peers, or by turning to Google. Sometimes a panel committee will validate the corpus and suggest improvements.
  o Collecting materials: The more advanced archives can be accessed online; others need to be visited in person. Permission to collect materials must usually be asked in advance. Often, a copyright agreement needs to be signed.
  o Technical treatment: Making digital photocopies (using a scanner) or photographs (using an HD photo camera) is not always allowed, and some archives will digitize materials upon request. In rare cases (e.g., family archives), materials can be picked up for digitization and returned later. In case this hasn’t happened yet, collected material needs to be OCR-ed3, the copyright needs to be verified, and metadata needs to be added (title, caption, origin, date, copyright information).

• Sense making
  o Finding additional materials: Finding other media covering the same event (e.g., other pictures, or text instead of pictures). Finding different versions of the same document, e.g., versions targeted at different audiences, draft versions (for instance of a meeting agenda) or versions containing handwritten notes.
  o Identifying people, objects or context: Finding information that helps to identify people, buildings or objects in a picture, or the context in which a picture was taken (e.g., at which event). For instance, identifying the brand and type of a car visible in the background can help to identify the time period in which a picture was taken. This is now done by asking colleagues or peers, by looking for clues in books, or by using an image matching service such as TinEye4. Other clues include: national symbols, flags, and buildings or landmarks.
  o Searching within recordings: Finding and interpreting relevant segments in eye witness accounts (audio or video recordings of interviews). Searching inside these recordings is currently hampered by the lack of an unambiguous subdivision into themes and/or inaccurate automatic speech-to-text transcription. Manual transcription, or correction of automatic transcription, is cumbersome and time consuming.
  o Identifying metadata: Finding information about a picture or document, such as the source, the author, or the copyright status, but also for whom a photo was taken or a text was written.

• Publishing
  o Materials are published as part of (research) papers or books, or they are made accessible via a file server or website (for instance the CVCE research corpus5).

2 For a definition, see http://en.wikipedia.org/wiki/Provenance.
3 For a definition, see http://en.wikipedia.org/wiki/Optical_character_recognition.
4 See http://www.tineye.com/.

Suggestions for improved computer support:

• Searching within materials: Recognizing a person in a picture, finding relevant segments in an interview recording, generating transcriptions of interview recordings.

• Finding references: Which other experts consulted or annotated this material, and what other materials did they use or annotate.

• Semantic interpretation: Suggest a time period for a picture or video (when it was taken or recorded).

• Visualization of the relations between documents: Visualize the relations in an interactive network (like a mind map), where relations can be added, deleted, or otherwise manually influenced.

• Visualization of multiple document versions: Visualize the differences between versions of a document, for instance the same document targeted at different audiences, or the same document collected from different archives.

• Integration of search results: Search results are usually returned as an un-integrated list of hits, for instance hits on websites that need to be visited manually, one after the other. A kind of integrated summary would be much more useful.

2.1.4 Construction of user stories

The interviews identified a number of opportunities to make the experts’ work easier by means of computer support. We needed a user-centered design technique that allows us to discuss these opportunities with the domain experts (“is this what you really need?”) on the one hand and with the technical experts within CUbRIK (“is this feasible?”) on the other. This is also necessary to evaluate whether the opportunities are well aligned with CUbRIK’s scope and priorities. Scenarios are a useful technique for these purposes6. A scenario is a narrative description of what people do and experience as they try to make use of a computer system (Carroll, 1995). It consists of a set of users, a context, and a set of tasks that users perform or want to perform. It blends a carefully researched description of some set of real ongoing activities with an imaginative futuristic look at how technology could support those activities better (Suri & Marsh, 2001, p. 152). Using scenarios offers several benefits for design teams. Scenarios help to integrate the user perspective with the technology perspective, and they place system use in the broader context of everyday use (Kuutti, 1995; Ludden, 2010). Finally, Suri & Marsh (2001) point out that using scenarios individualizes the user: scenarios bring to life specific imagined individuals rather than relying on abstract user characteristics only. In other words, this helps designers to empathize with the users. These characteristics of the scenario technique fit the purposes of CUbRIK quite well. Therefore, scenarios were used to connect the current activities of the domain experts with the opportunities for technological support that CUbRIK can offer.
To avoid ambiguous terminology we use the term scenario to refer to the History of Europe case (CUbRIK partner CVCE) and the SME Innovation case (CUbRIK partner INNEN), while specific combinations of users, contexts, and computer-supported tasks – which can be seen as sub-scenarios – are referred to as user stories. We developed a set of five user stories within the History of Europe scenario that were informed by user needs on the one hand and technology potential on the other hand. Below the final versions of the five user stories are presented.

5 See http://www.cvce.eu/recherche/corpus. 6 A template-based description of the scenario building technique can be found at http://knowledgecentre.openlivinglabs.eu/learn/techniques



User story 1: Collecting materials at the archive
Concept: Snapshot Matcher

Ibrahim visits the Dutch national archive to collect materials for his research project on the European Movement. In the archive he finds two unclassified photos with some handwritten text on the backside. Ibrahim takes a few snapshots of the photos using the camera on his smartphone. He then uploads the snapshots to the online European History research platform where he creates a new dossier for them. Next, he starts the snapshot matcher to find out if high-quality scans of the photos are available elsewhere. This will take a while, so he resumes his search for more material at the archive. The results will be waiting for him when he is back at his office. The next morning, he logs in to the research platform and opens the dossier he created earlier. He is lucky; the snapshot matcher indeed found a few high-resolution scans matching his snapshots. He confirms that the scans should be added to the dossier and linked to his snapshots.

User story 2A: Where was this photo taken?
Concept: Image Analyser

Ibrahim wants to learn more about the photos he found at the archive. He opens the scans and selects the image analyser. With this tool he can identify all kinds of things in a picture, such as faces, objects, symbols, or text. One photo shows a large hall with peculiar chandeliers. Ibrahim wants to know where the photo was taken so he selects a chandelier in the scan to search for matching images. To narrow the search and speed up the analysis, he also provides a few keywords describing the hall. After a while the image analyser returns some results: the chandeliers match those in several other pictures, most of which were taken inside the Ridderzaal in The Hague. This confirms his suspicion that his photos were taken at The Hague congress of 1948.

User story 2B: Who is this person? Tell me more!
Concepts: Image Analyser and Context Expander

The other photo shows a few people at a dining table. Amidst familiar faces Ibrahim notices one person that is unknown to him. He wants to know who this is. He therefore selects the person’s face in the scan and, to speed up the search, he provides the names of some of the people he already recognized. He also selects the handwritten text on the scan of the photo’s backside; it will be automatically recognized and used to narrow the search. After a while, the image analyser returns a few similar images. The text printed on one of these suggests that the unknown person might be a politician, but it doesn’t mention his name. To resolve this, Ibrahim starts the context expander. This tool will deliver the list of materials in which images of the unknown person appear, such as newspaper articles, interviews, and commentaries. He can then study these materials himself to find further clues about the person’s identity.

User story 3: Can I publish this?
Concept: Copyright Checker

Ibrahim considers using one of the photos in a publication he is working on, together with other images: a scanned newspaper article, two scanned photographs, and one picture he found during an internet search. To perform a copyright clearance for his publication, Ibrahim decides to use the copyright checker. The checker will provide him with a detailed list of the rights owners and the copyright status for each of the images he wishes to publish. He selects his images and starts the checker; this will take a while, as the images must be compared with copyrighted material in many different repositories. Later today he will take a look at the results.

User story 4: Documenting the results
Concept: Task Tracer

Ibrahim is satisfied with his findings. He opens the task tracer to select the highlights of his search, in particular the actions he performed to learn where the first photo was taken. The task tracer will store these actions, including the results, inside the dossier. In the future Ibrahim and his colleagues will be able to return to the dossier and trace back the tasks that he performed. On the one hand this makes it more transparent to himself and to others what he did to reach his conclusions. On the other hand his colleagues can use his searches as a starting point for their own research.

Comment

The five user stories demonstrate to the domain experts what is feasible in the near future and how it may support their work, but they do not offer insight into the long-term technical possibilities. To inspire the domain experts and to discuss these possibilities, an approximately ten-minute video was used. In this video7, a concept from the field of tourism was shown, called Near2Me (Kofler et al., 2011). From this concept we selected three advanced features that could be applied to the domain experts’ work: a flexible browsing environment with different but connected views on multimedia search results, expert chroniclers (heavy contributors), and personalized recommendations.

7 The video can be viewed on YouTube: http://www.youtube.com/watch?v=EFctxF4ElR8.


Advanced feature 1: Profiles and personalization
Description: Data about a person’s picture set (locations, annotations) are used to construct a profile and to personalize recommendations.

Advanced feature 2: Lenses and highlights
Description: Lenses show different but connected views on search results. When pointing at an item, related items are highlighted in the other lenses.

Advanced feature 3: Expert chroniclers
Description: People who are recommended because they contributed a lot of relevant material. Their perspective may be interesting to explore further.



2.1.5 User story evaluation with end-users

To get feedback from the domain experts, a focus group was organized with five domain experts at the CVCE in Sanem, Luxembourg. The focus group consisted of three parts:
1. Discussing the experts’ day-to-day work. A plenary discussion about their day-to-day work, with a focus on tasks that involve finding, collecting and analyzing information (text, pictures, video, audio) for the experts’ research. A flip-chart template was used to identify and prioritize difficult tasks, opportunities for improvement, and opportunities for computer support.
2. Evaluating the user stories. The participating experts were split into two groups to evaluate the scenarios. Scenarios were printed and distributed among the participants. A flip-chart template was used to record the evaluations of the user stories in terms of appeal to the experts, perceived usefulness, possible improvements, pitfalls to be avoided, and the experts’ personal favourites.
3. A look into the future. The Near2Me video was shown and then discussed, starting with how the video relates to examples from the experts’ current work practice. Next, the three advanced features were discussed with the experts, followed by an inventory of potential improvements and additions. Finally, the experts were asked for new ideas triggered by the video.

Below we summarize the feedback on the five user stories and the three advanced features in terms of appeal (what did the domain experts like), usefulness (what makes it useful for their work), improvements (additions or improvements to consider), and pitfalls (issues to avoid).

User story 1: Collecting materials at the archive
Concept: Snapshot Matcher

Appeal
• Ease of use.

Usefulness
• Efficiency improvement when collecting materials.

Improvements
• The results should be available immediately; at the least, the user should be notified when a search is completed.
• The results should be of high quality to be useful.
• The snapshot matcher should also work on other materials (e.g., microfilm, audio, video).
• Online sharing of snapshots with the research community, to gain additional information from peers.
• Provide alternative interfaces: e.g., a digital camera combined with access using a PC.
• Search results should also include related materials, e.g., text documents.
• Post-processing (e.g., optical character recognition, handwriting recognition).

Pitfalls
• Requiring the use of a smartphone; provide alternative interfaces such as a digital camera and PC.
• Obtaining permission to take snapshots.
• Copyright issues.
• Waiting should take no longer than two to three hours, so results can be used while one is still at the archive.
• The process should not interfere with the expert’s workflow at the archive.



User story 2A: Where was this picture taken?
Concept: Image Analyser

Appeal
• Ease of use, less manual labor.

Usefulness
• The use of additional information (picture and text; thus multimodal search) gives more trust in the results.
• Getting more images of the same location or event facilitates an interpretation.

Improvements
• Use two-stage search: first within reputable sources, then expand.
• Show alternatives for entered keywords (hints, synonyms).
• Expand the search: e.g., provide a list of events that took place at the same location.
• Use a handmade sketch to search for buildings with a particular shape.
• Batch processing of a set of related photos to improve the results.

Pitfalls
• The experts will not just rely on the outcome; there should be ways to verify the results.

User story 2B: Who is this person? Tell me more!
Concepts: Image Analyser and Context Expander

Appeal
• Especially the Context Expander is an appealing concept.

Usefulness
• It is vital for the experts’ work to get all related information for a particular picture.

Improvements
• Combine Image Analyser and Context Expander into one tool.
• Use multilingual search, e.g., based on automatic translation.
• Provide options to narrow down the search, for instance to a specific time interval, to increase the quality of the results.
• Search in two directions: provide text to find related images, provide an image to find related text.
• Provide contact information for the people who are recognized, so they can be contacted to identify the other people in a picture.

Pitfalls
• Text-based search is how it is usually done, and it is probably more efficient as a first step.

User story 3: Can I publish this?
Concept: Copyright Checker

Appeal
• Copyright checking is important for the experts’ work; a lot of work and money is involved.

Usefulness
• It is useful when publishing materials, e.g., in a book or presentation.

Improvements
• Find the lowest copyright charge.
• Find related pictures that are free of charge.

Pitfalls
• It takes a lot of legal expertise. Can the results be trusted?



User story 4: Documenting the results
Concept: Task Tracer

Appeal
• The concept is not appealing at first glance; each researcher uses his/her own work method.

Usefulness
• For oneself: a high-level log of search activities, e.g., to trace back the origins of a document.
• For others: documenting one’s findings, or to know who is working on what.

Improvements
• Provide some form of access control: what is shown, and to whom.

Pitfalls
• It should not cost any extra time and/or constrain one’s way of working.

Advanced feature 1: Profiles and personalization

Appeal
• Earlier image-based searches can be used for personalized suggestions, e.g., “related material that may interest you”.

Pitfalls
• One’s personal profile should be customizable and adaptable, e.g., tailored to the task at hand.

Advanced feature 2: Lenses and highlights

Appeal
• Being able to switch between the lenses “pictures”, “places” and “people”.

Improvements
• Include more lenses: sources, experts, bibliographies, networks of researchers, institutions, publication houses, journals, databases, geographical information.
• Other improvements: multilingual search, filters, highlight recent updates.

Advanced feature 3: Expert chroniclers

Appeal
• Checking the work and publications of experts that have worked on related topics.

Overall conclusions:
• The experts are, overall, positive about the user stories we developed.
• Their favourite concepts are:
  o Image analyser & Context expander: searching for more information about a picture (e.g., who is in it, when or where was it taken, what else is known about it) or finding related materials (e.g., other pictures, documents, etc. about the same event).
  o Snapshot matcher: quickly finding high-quality digital copies of sources found in an archive.
  o Lenses & highlights: providing context by showing relationships between sources, collections, experts, institutions, journals, etc.
• Challenges that were not addressed in the scenarios, but still are important to the experts’ work:
  o Searching within interview recordings (audio, video).



2.1.6 User story evaluation with technical experts

To get feedback from technical experts, a CUbRIK Apps workshop was organized during the general assembly meeting in Como, Italy. Approximately 10 technical experts from various CUbRIK partners participated in the workshop. During the workshop the same user stories were discussed as during the focus group. The purpose of this session was twofold: first, to discuss the technical feasibility of the concepts described in the scenario; second, to help the technical experts familiarize themselves with the work of the CVCE domain experts, its complexity, and the parts of the experts’ work that may benefit from computer support. To accomplish these purposes we presented the user stories one by one. Discussions focused on the snapshot matcher, the image analyser, and the context expander; due to time restrictions, only user stories 1, 2A, and 2B could be covered in depth.

User story 1: Collecting materials at the archive
Concept: Snapshot Matcher

Summary of technical feasibility discussion

Image type
The technical feasibility (R&D impact) will depend on the type of snapshot: is it a photo taken by a camera, or is it a scan of a text (and is this text typeset or handwritten)? For composite snapshots containing both images and text, commercial software exists to segment the snapshot, and correspondingly the R&D impact will be low. Text recognition (OCR) is commercially available for scans (low impact), although it may be more complicated for smartphone snapshots with distortions (medium impact).

Recognition of faces and buildings
For pictorial content, face extraction solutions exist (low to medium impact). Buildings may be easy to recognize (near-duplicate detection; low impact), but near-duplicate detection may be complicated when pictures were taken at different times of day.

Finding similar images vs. finding the same image
There is a difference between a search for similar images and finding the same image (perhaps in a different format or quality). Open source implementations may exist, and the impact is low to medium. There is potential for crowdsourcing-based solutions, e.g., for image segmentation.

Access to the collection
It is important to know against which datasets or collections the snapshots will be matched (the CVCE collection; Europeana; others?) and whether these collections can be accessed for indexing purposes. Copies will not be made; only metadata will be stored.
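The distinction drawn above between finding similar images and finding the same image in another format or quality can be illustrated with a perceptual hash. The following sketch is our own illustration, not part of the workshop discussion; the tiny hand-made pixel grids stand in for real downscaled grayscale images:

```python
def average_hash(pixels):
    """A simple perceptual hash: one bit per pixel, set when the pixel
    is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Tiny hand-made stand-ins for downscaled grayscale images (0-255).
original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
reencoded = [[190, 195, 20, 15],   # same image, re-encoded at lower quality
             [198, 192, 12, 18],
             [15, 12, 195, 190],
             [18, 20, 190, 198]]
other = [[10, 200, 10, 200],       # a different image altogether
         [200, 10, 200, 10],
         [10, 200, 10, 200],
         [200, 10, 200, 10]]

d_same = hamming(average_hash(original), average_hash(reencoded))
d_diff = hamming(average_hash(original), average_hash(other))
print(d_same, d_diff)  # prints: 0 8
```

A small Hamming distance flags a probable same-image match despite format or quality changes, while similar-image search would instead compare richer content descriptors.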



User story 2A: Where was this picture taken?
Concept: Image Analyser

Summary of technical feasibility discussion

Indoor scenes
It is very hard to detect where an image was taken if it was taken indoors, irrespective of any added keywords like “big room”, “medieval”, or “chandelier”. It is also hard to find the same picture taken in different conditions (different lighting, different angle, ...). There is potential for crowdsourcing-based solutions (image segmentation, identification of important objects). Landmark detection (outdoor material) will be much easier.

Semantic search
Alternatively, a search based on metadata could be performed (what is known about the picture; descriptions of the semantic context of the picture). The construction of an entitypedia could be useful for this purpose.

GWAPs
Games-with-a-purpose could also be useful, e.g., to find a missing image or a missing person. This should be further investigated. The R&D impact for these kinds of solutions is high.

User story 2B: Who is this person? Tell me more!
Concepts: Image Analyser and Context Expander

Summary of technical feasibility discussion

Face recognition
For face recognition, an image resolution of 12-15 pixels for the face suffices. A face model needs to be trained using a sufficiently large set of frontal faces of reasonable resolution [not specified further]. It is doable to detect images in which the same person appears by calculating face similarity. Conversely, one could also look for images in which a similar face appears. A good image of a person could be a starting point to find all related materials in which this person appears. Issues to take into consideration: matching good-quality input against a low-quality dataset; preferably more than one example picture should be used to get better precision; video will be easier because the face appears in an array of video frames (the latter case is also more useful since human inspection of video requires a lot of time). The task may be relatively easy because politicians usually appear in a similar pose (frontal, shaking hands, etc.). Provided there is a good collection of such images, work can be done to improve specific aspects of the image in order to improve the face matching solution. The R&D impact will be high.

Identifying a person
An alternative is to answer the question “Is this person X?” The implementation would then work much like logo detection: textual input will be used to retrieve example images from a collection, which can then be compared to the source image. With human computation (crowdsourcing) a selection could be made from a narrowed set of possible candidates, e.g., by suggesting 10 pictures and asking whether the same person is in these pictures.

Context expander
If similar images are found with metadata assigned to them (tags), the context expander could look for related content based on the same tags (or related tags, based on tag propagation).

Time-based search
Temporal reasoning is already part of WP4 of CUbRIK. For images (e.g., finding a person at a younger or older age), a set of annotated images is required showing the person at different ages. Based on metadata and face similarity, a ranked list of possibilities (more or less likely) could be produced. The R&D impact will be medium to high.

Generally, the experts judged all user stories to be relevant to CUbRIK and to their current research activities. The level of R&D impact will mostly depend on how good the solution to be developed should be (i.e., the required precision).
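The ranked list of face-similarity matches mentioned above can be sketched as follows. This is our illustration only: the person names and descriptor vectors are invented stand-ins, and a real system would extract descriptors with a trained face model:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two face descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_candidates(query, gallery):
    """Return (name, similarity) pairs, most similar first."""
    scored = [(name, cosine_similarity(query, vec))
              for name, vec in gallery.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Invented descriptors; a real system would compute these from the
# historical photos with a trained face model.
gallery = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.1, 0.8, 0.5],
    "person_c": [0.4, 0.4, 0.4],
}
query = [0.85, 0.15, 0.35]  # descriptor of the unknown face

ranking = rank_candidates(query, gallery)
print(ranking[0][0])  # prints: person_a
```

Averaging the descriptors of several example pictures of the same person, as suggested above, would simply replace `query` with the element-wise mean of those descriptors.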

2.1.7 From user stories to technical implementation

During the CUbRIK Apps workshop, relevant feedback on the user stories was elicited, though not all scenarios could be discussed in sufficient detail from the technical point of view. The technical experts needed more time to “digest” the scenarios and to identify the technical implications and/or the feasibility of implementations. Follow-up was therefore planned with the purpose of, on the one hand, gaining deeper insight into the technological feasibility and technical requirements of the scenarios and, on the other hand, aligning the scenarios and the technological development with each other. To this end, the results of the workshop were summarized and shared with the technical experts to allow them to perform a more detailed technical feasibility check of the user stories. The technical experts then focused on one of the user stories in order to come up with a proposal for how this user story could be technically implemented. This proposal is described in Section 2.1.8.

2.1.8 Proposal for technical implementation

Based on the outcome of the CUbRIK Apps workshop, the technical team identified technical aspects they considered feasible for inclusion in the first version of the History of Europe V-App, to be implemented in M22. The decision was made to focus on scenario 2B (“Who is this person? Tell me more!”), which describes the Image Analyser and Context Expander concepts. The result (briefly summarized below; for details see the appendix) is a proposal for a technical implementation of this user story, called the “EU History Social Graphs”. The proposal makes use of a combination of face recognition and crowdsourcing components. The proposal distinguishes the following steps:
1. The starting point is a collection of images with one or more people relevant to the history of the EU. Face similarity is computed between the people that appear in the images.
2. For low-confidence matches, crowdsourcing is used to manually verify whether the people who appear to be the same are indeed the same.
3. Near-duplicates are calculated by collecting a set of multiple images of the same person from an external resource pool (e.g., Google Images). Textual annotations or crowdsourcing can be the input for this step.
4. Based on the detection of co-occurrences of persons in the same image, a social graph can be constructed that portrays the strength (that is, the co-occurrence frequency) of the relationships.



5. The social graph can then be used to identify people in an image who are not yet recognized, using a two-step process:
o In the first step, face similarity is used to detect known persons in the image. Notice how the two blue arrows indicate that Romano Prodi and Franz Fischler have been identified in the image (see Appendix A).
o The second step assumes that a mapping can be made from EU historic figures onto History of Europe domain experts. If, for instance, it is known which History of Europe domain experts are particularly knowledgeable on Romano Prodi and/or Franz Fischler, then they would be candidates to identify the unknown person (“expert crowdsourcing”).

The illustrations corresponding to these steps can be found in Appendix A: EU History Social Graphs. On the basis of this proposal for technical implementation, use cases were then developed for the first version of the V-App.
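The co-occurrence step of the proposal (building a social graph whose edge weights are co-occurrence frequencies) can be sketched as follows. The image annotations here are invented examples; in the actual pipeline they would come from the face recognition and crowd verification steps:

```python
from collections import Counter
from itertools import combinations

def build_social_graph(image_annotations):
    """Count how often each pair of persons appears in the same image;
    the count is the strength (co-occurrence frequency) of the relationship."""
    edges = Counter()
    for persons in image_annotations.values():
        # Each unordered pair of persons in one image adds one co-occurrence.
        for pair in combinations(sorted(set(persons)), 2):
            edges[pair] += 1
    return edges

# Invented annotations: image id -> persons recognized in it.
annotations = {
    "img_001": ["Prodi", "Fischler"],
    "img_002": ["Prodi", "Fischler", "Unknown"],
    "img_003": ["Prodi", "Santer"],
}

graph = build_social_graph(annotations)
print(graph[("Fischler", "Prodi")])  # prints: 2
```

Edges touching the "Unknown" node are exactly the places where step 5’s expert crowdsourcing would be invoked.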

2.2 Use cases

The following use cases describe the current state of mediation between user requirements and technical possibilities in project month 8. These use cases will be further refined in the course of the project, but they already inform the required functionality for release R2 of the CUbRIK platform, which will be used to build the first prototype of the History of Europe application.

2.2.1 Image analyser

Identifier: ImageAnalyser
Actors: User and system
Pre-conditions: A social graph exists.
Basic Course of Action:
• The user generates a multimedia query (a photo and/or video including one or more persons) and provides optional keywords (e.g., some words found on the back of the original photo)
• The system detects the identity(ies) of the person(s), taking into account the existing social graph
• The system updates the social graph through the SocialGraphManager
• (optional) The system initiates the ContextExpander process

2.2.2 Social Graph Manager

Identifier: SocialGraphManager
Actors: System
Pre-conditions: An (initial) social graph exists.
Basic Course of Action:
• The social graph is updated with the results of the image analyser (a new relationship)
• The information of the social graph is enriched with external references (person identifiers, places, events)
Post-conditions: The social graph is updated in case new relationships are detected.
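The basic course of action can be sketched with a toy in-memory graph. The dictionary layout and the example identifier are illustrative assumptions on our part, not the CUbRIK data model:

```python
def update_graph(graph, person_a, person_b):
    """Record one more observed co-occurrence (a new relationship)."""
    pair = tuple(sorted((person_a, person_b)))
    edges = graph.setdefault("edges", {})
    edges[pair] = edges.get(pair, 0) + 1

def enrich_person(graph, person, references):
    """Attach external references (person identifiers, places, events)."""
    nodes = graph.setdefault("nodes", {})
    node = nodes.setdefault(person, {"references": []})
    node["references"].extend(references)

graph = {}
update_graph(graph, "Prodi", "Fischler")            # result from the image analyser
enrich_person(graph, "Prodi", ["viaf:example-id"])  # placeholder identifier

print(graph["edges"][("Fischler", "Prodi")])  # prints: 1
```

Sorting each pair gives the relationship a canonical key, so the same co-occurrence reported in either order updates the same edge.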

2.2.3 Context expander

Identifier: ContextExpander
Actors: User and system
Pre-conditions: A social graph exists.
Basic Course of Action:
• The user creates a multimedia query (video, image, sound, text)
• The system visualizes the social graph for the user

2.2.4 Copyright Checker

Identifier: CopyrightChecker
Actors: User and system
Basic Course of Action:
• The user creates a query with an image or video
• The system identifies the image/video
• The system retrieves copyright information for the identified image/video
• The system shows license and copyright owner information (contact data, etc.) to the user

2.3 Next steps

The approach described in this chapter demonstrates the iterative nature of specifying the requirements for both V-Apps. The user stories have been evaluated once with a group of CVCE users, and technical feedback has been collected once. The next step will be the definition of the requirements for the History of Europe application, to be implemented in Task 10.2: History of Europe application (M19-M36). The first prototype is scheduled for M22 and the second for M24. Following the cyclical approach, user feedback will be used as input for the requirements over the course of the release. Another important next step, currently underway, is a technical proof of concept for crowdsourcing. The design of the proof of concept is based both on user needs in the domain of practice and on the technical experts’ feedback; it will inform the use case development and is a way of deriving target-aimed requirements. The proof of concept pursues the implementation of a prototypical time-based person recognition application in the cultural heritage domain of practice. It validates the technical feasibility of face identification and recognition supported by human-based crowdsourcing mechanisms. Different solution paths with different intensities of computer-based, algorithmic detection and recognition are compared. At this time, a list of example requirements has been created for the “Image Analyser” and the “Context Expander”.

Image Analyser
• Faces must be detected in historical input pictures and matched based on similarity.
• Matching may take into account a social graph of actors and contextual information (e.g., date, time, place, event).
• Persons matched in images must be matched against person identifiers (e.g., VIAF, WorldCat).
• Best matches must be verified.
• Verification may take place in one or two steps: either a crowd-sourced assessment of best-match proposals from the automatic matching followed by an expert verification, or an expert verification only.
• To facilitate expert verification, a widget may be developed that allows fast association of person identities to images and that can be used on different social networks to crowd-source the identification task to expert networks.
• Based on the co-occurrence of persons in historical pictures, a social graph must be built up.

Context Expander
• The social graph must be extended by contextual information about the image (date, time, location, event).
• This information and the person identifiers must be linked to related information, which may in turn be linked to other related information (e.g., for persons: membership in an organization with start and end date, political offices with start and end date).
• Links to related information may be provided by Entitypedia.
• The social graph may be enriched by referencing entities found in oral history interviews and historical TV news.
• The social graph may be further enriched by external image collections (European Commission Audiovisual Services, European Parliament Audiovisual, Europeana).
• The context expander must offer non-technical users the ability to create complex queries (e.g., "Which of the members of the United Europe Movement met at the Congress of Europe in The Hague and occupied high political offices in conservative governments during the 1960s?").

In the revised version of this deliverable, to be issued as an intermediate release in M18, the requirements list will be extended. The requirements will take their final shape in M24, when the final version of this deliverable is due. The evaluation of the V-Apps will take place in Task 10.4: Evaluation of CUbRIK Apps (M19-M36). The V-Apps will be evaluated on the basis of the requirements.
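The complex-query requirement could, for instance, be served by filtering persons on attributes linked into the social graph. This sketch uses invented records and a deliberately simplified attribute model (events as strings, offices as year intervals); a real implementation would draw on the enriched graph and its linked person identifiers:

```python
def complex_query(persons, event, year):
    """Persons who attended a given event and held a political office
    in a given year (simplified attribute model)."""
    matches = []
    for name, attrs in persons.items():
        attended = event in attrs.get("events", [])
        in_office = any(start <= year <= end
                        for start, end in attrs.get("offices", []))
        if attended and in_office:
            matches.append(name)
    return sorted(matches)

# Invented records standing in for graph nodes with linked attributes.
persons = {
    "person_x": {"events": ["Congress of Europe 1948"], "offices": [(1958, 1969)]},
    "person_y": {"events": ["Congress of Europe 1948"], "offices": [(1945, 1951)]},
    "person_z": {"events": [], "offices": [(1960, 1968)]},
}

print(complex_query(persons, "Congress of Europe 1948", 1965))  # prints: ['person_x']
```

A non-technical user interface would assemble such attribute filters from form fields or guided natural-language input rather than expose them directly.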



3. Vertical Apps: SME Innovation (Fashion Industry)

SME Innovation is the second domain of practice of CUbRIK. It is represented in the CUbRIK consortium by INNEN, an SME that develops advanced solutions for managing knowledge and driving innovation. In the SME Innovation domain, CUbRIK pursues the goal, described in Task 2.1 “Domains of Practice” of the DoW, of tracking trends over space and time in order to support SMEs in their innovation processes.

3.1 User needs and evaluation

In this section, we report on the requirements for the SME Innovation V-App and the approach that we have taken to formulate them. It is informative to note the differences between the approaches taken for the SME Innovation V-App and for the History of Europe V-App, described in the previous section. The History of Europe V-App began with a well-defined user group whose needs arise in the course of their daily work processes at CVCE. The SME Innovation V-App, on the other hand, addresses three different groups of stakeholders (described further in 3.1.2), which were not all defined a priori. The CUbRIK project had limited prior knowledge of the processes that these groups engage in. Rather, the definition of specific target user groups and the elicitation of the processes they engage in constitute an integral, and substantial, part of the requirements definition work for the SME Innovation V-App. It is anticipated that, in some cases, the SME Innovation V-App will serve to fulfil needs that did not explicitly exist, or that users were unaware of, before the start of the CUbRIK project. This situation is characteristic of a domain of practice in which the overall goal is to promote innovation: the number of a priori assumptions made about how innovation arises should be kept to a minimum. A major challenge of the SME Innovation V-App is to define a productive setting in which the ability of CUbRIK technologies to support the multimedia search necessary for trend analysis can be validated. As reflected by the title of this section, initial work in this direction has led the SME Innovation domain of practice to focus on a scenario involving the analysis of trends for the fashion industry.

3.1.1 Outline of the approach

During this first stage of requirements development, we have focused on the needs of SMEs. In the conclusion of this section, we discuss the next steps for grounding the application requirements in the needs of the SMEs' customers as well. With a first set of requirements as a basis, it becomes easier to involve customers in the process. Figure 4 displays the approach that was taken to formulate a draft version of the requirements for the SME Innovation Fashion application. Central to the approach is the development of two scenarios, from which the Fashion scenario was eventually selected.

CUbRIK Metadata Models

Page 21

D22 Version 1.0


Figure 4 Outline of the approach for the SME Innovation app
The steps in the process are explained below:
1. Market analysis based on SME interviews. A user analysis was carried out by means of a series of interviews with SMEs about their search behaviour and their needs with respect to information that supports innovation processes. Interviews were carried out with SMEs belonging to the mechanical sector and to the fashion industry sector.
2. User story development. Two possible scenarios were developed. The first was based on SME innovation in the mechanical sector, making use of multimedia search (mainly image search) to find information useful for technology innovation, for example in patents or scientific papers. This scenario foresees as direct users of the system the SMEs that carry out searches in order to discover information useful for their innovation. The second scenario was based on fashion industry SMEs, where an application for citizens performing searches and playing games could provide interesting data to SMEs operating in the fashion industry for understanding trends (what people like and want to purchase).
3. Feedback on technical scope. During the meeting in Como, the two scenarios were analysed together with the technical experts from the CUbRIK consortium. From the analysis, some specific problems arose concerning the scenario with mechanical SMEs, mainly due to the difficulty of using crowdsourcing in such a case (it is almost impossible to have a crowd provide feedback on very specific mechanical problems) and to the fact that image analysis in patents is very difficult (or almost impossible) because of the noise created by the notes in patent images explaining the functioning of the invention. In contrast, the fashion industry scenario looked more appropriate for demonstrating CUbRIK's main results in multimedia search with humans in the loop, through the possible use of crowdsourcing.
4. Feedback on technical feasibility from partners. The fashion industry scenario was further developed, and the technical feedback was positive, with some specific suggestions provided, such as the possibility of detecting the lower/upper parts of bodies and understanding the texture of dresses. Some tests could also be done on the search for similar images of dresses.



5. Formulation of use cases and requirements. The specific use cases were then developed on the basis of the user stories.

3.1.2 Target user group

In the SME Innovation domain of practice, three different groups of stakeholders are distinguished:
1. B2B SMEs producing technology products, which need information about the behaviour, interests and opinions of their customers, as well as about their competitors, and which carry out technology intelligence actions to gather information for potential innovation of their products/processes. SMEs operating in the mechanical sector fall into this category.
2. B2C SMEs that make use of the products developed by the B2B SMEs, interested in information on customers' needs, but also in information on competitors and in technology intelligence for product innovation. The goal of these SMEs is to innovate new products in order to expand into new market sectors; they achieve this goal by using trend analysis to identify profitable new market sectors.
3. Individual customers (referred to as end users) from the general public who buy the goods and services of the B2C SMEs. These customers are the source of the trends that must be tracked in order for the B2C SMEs to innovate.
This list clarifies the difference between the History of Europe and the SME Innovation domains of practice. Whereas CVCE can be considered a direct end user of an application (a V-App) built on top of the CUbRIK platform, the INNEN case is to be considered a use case of CUbRIK as a platform for IT SMEs to develop applications for final end users. The two cases are quite useful for fully demonstrating the potential of the CUbRIK Ecosystem approach, since together they provide direct applications for users, as well as a platform for SMEs to develop their own applications making use of advanced multimedia search and crowdsourcing.

3.1.3 Foundational interviews

A market analysis was carried out by means of interviews with B2C SMEs about their search behaviour and their information needs in the innovation process. Interviews were carried out with SMEs belonging to the mechanical sector and to the fashion industry sector. The exploratory interviews reveal information about the importance of search in the SME Innovation domain of practice, and demonstrate the need for search applications to support innovation. These interviews provide the first foundation for deriving the requirements for a V-App in the SME Innovation domain of practice.
The importance of search technology in SME innovation processes
During the innovation process, SMEs carry out a set of actions to support human decision making:
1. Market analysis: this is mainly a view of market trends, competitors' actions, and customers' needs and preferences. During this analysis, the SME usually searches for a set of information in order to take decisions on possible innovations.
2. Technology analysis: this can be done prior to the generation of the technological innovation concept, for example by performing a technology intelligence analysis of new technological systems (technologies, materials, etc.) that can provide new or better performance, as well as by analysing the prior art, looking at patent databases to check whether already filed patents prevent certain innovations from being undertaken. The technology analysis can also be performed to check the technologies used by competitors.
These actions involve the process of search and have clear potential to be supported by multimedia search technology.



The importance of information derived from search
The search for market and technology information is one of the most important, and most frequently performed, activities that SMEs carry out when dealing with innovation processes, in support of decision making. In previous R&D projects in which INNEN has been involved, specific analyses of SME innovation processes were made (project INSEARCH, FP7 Research for the Benefit of SMEs, Deliverable 2.2, www.insearch-project.eu), interviewing 90 SMEs. The results of the analysis are summarised in the following:

• More than 90% of SMEs make use of market and/or technology information when planning a technological innovation, using such information as indicated in the figure below (in red, the SMEs that use such information; in pink, the ones that do not):

Figure 5 How information supports the SME innovation process

• Regarding the market, clients' opinions/comments and similar product functionalities are the information most sought by SMEs, as indicated in the figure below (SMEs were asked to rate the importance of each type of information):



Figure 6 Information considered important for market analysis in the SME innovation process

• Most SMEs use bookmarks in their browsers as the main way to check/monitor interesting websites during innovation/market analysis processes. Market research is carried out on demand, mostly by monitoring competitors' and clients' websites;

• When dealing with technology intelligence, scientific websites and competitors' websites are the most sought sites, as indicated in the figure below:

Figure 7 Information considered important for technology intelligence in the SME innovation process

• More than 90% of SMEs perform their searches through Google or similar generalist search engines;

• The keywords most used in searches relate to product type and product function, also when searching patent databases (see next figure).

Figure 8 Keywords used by SMEs when searching the web during the innovation process



• Search is mostly performed iteratively, evaluating search results from the first few lines of documents/websites;

• Most SMEs spend no more than one person-day searching for information related to a single innovation process, and visit no more than 10 websites.

Key search functionalities for SMEs in the innovation process
The following functionalities (see figure below) are all of medium/high importance for SMEs, each scoring more than 3.4 on a 1-5 scale (1 = low, 5 = high). Since all possible answers scored between 3 and 4, the figure reports details between 3.2 and 4 to better outline the differences among answers:

Figure 9 Search functionalities interesting to SMEs during the innovation process
For SMEs, features able to extract specific information, and in particular to identify specific patterns, are very important. Specifically, an SME is interested in identifying which "Tool" (any technological system) performs a certain "Action" (the function of the system) on a certain "Object". For example, for an umbrella, the Tool is the "Hydrophobic Sheet", which performs the Action "Deviate/Stop" on the Object "Rain". SMEs are interested in a system able to propose possible innovations, meant as innovative tools that perform the action of interest to the SME (in the umbrella example: which technological systems could deviate liquids?). This relates to the question "find pattern to propose possible innovation", shown in yellow in the figure below (to outline the differences among answers, the figure only reports the values between 3.7 and 3.85):



Figure 10 Functionalities interesting to SMEs during the innovation process
The insights about the importance of search for SMEs will be used as a basis for guiding the development of the V-App. Although it is clear that the insights will need to be adapted for the specific use scenario (the fashion industry), this understanding of the needs of B2C SMEs provides an important frame of reference for specifying particular requirements.
Two possible scenarios have been developed:
1. A scenario for SMEs operating in the mechanical industry;
2. A scenario for SMEs operating in the fashion industry.
The first was based on SME innovation in the mechanical sector, making use of multimedia search (mainly image search) to find information useful for technology innovation, for example in patents or scientific papers. This scenario foresees as direct users of the system the SMEs that perform the searches and discover information useful for their innovation. The second scenario was based on fashion industry SMEs, where an application for citizens performing searches and playing games could provide interesting data to SMEs operating in the fashion industry for understanding trends (what people like and want to purchase).
Taking the points above into consideration, the approaches of the two V-Apps are therefore different: whereas in the CVCE domain there is a well-defined user group with needs that arise from their work, the user needs in the case of the fashion industry app are less defined, as they have to take into consideration the need of the app developer (INNEN) to build a sustainable business, the needs of the SMEs that benefit from the data collected and analysed by the app, and the needs of the users of the app (the citizens). All of this makes the user needs more ambiguous than in the CVCE case, but could also provide more general proof that CUbRIK can become a platform for use in several different scenarios.
Also, the measurement of the success of the application differs in the two cases: whereas in the CVCE domain there are experts who need support for their daily work, in the SME Innovation domain a solution is proposed that may or may not be adopted by the SMEs' customers. Whether this will be the case is hard to predict, since measuring customers' need for something that does not yet exist is rather difficult. As mentioned above, in contrast to the History of Europe domain of practice, no specific use scenario was pre-specified in the CUbRIK DoW for the SME Innovation domain of practice. In order to develop the scenario, two candidate scenarios arising in the SME Innovation domain of practice were first suggested. A selection between them was then made in consultation with technical



experts from the CUbRIK consortium during the CUbRIK Apps workshop at the CUbRIK general assembly meeting in Como, Italy (24 January 2012).

3.1.4 Use scenario development

Input to the process of use scenario development was an overview of the desirable characteristics of the SME Innovation V-App contributed by INNEN; this input is summarised in the remainder of this subsection. The goal of INNEN is to develop specific applications within CUbRIK to support the SME innovation process using multimedia information. The starting point for this development is the InSearch project, which is the source of important contributing information such as the survey described in the previous section. The V-App for the SME Innovation domain of practice has the goal of providing a set of functionalities to SMEs operating in different sectors (manufacturing, services, etc.), namely:

• Supporting the search for multimedia information that could help humans perform market and technology analysis. For example, a company could search news channels or social channels (such as YouTube) for information on the attributes of a certain product (colour, dimensions, etc.);

• Supporting the search for multimedia information for technology intelligence. For example, SMEs could be supported in searching for pictures and drawings (such as patent drawings) related to specific equipment. Similarly, an SME could take pictures or videos of its own equipment and search for similar equipment (in videos, pictures or drawings) on the web;

• Supporting the use of the above-mentioned information to perform specific analyses. This could include product/attribute analysis (using a specific methodology to provide quantitative analysis of the attributes/functions of certain products) and trend analysis (in time and space) of attributes/product functions (see Annex I).
The application should be able to provide users with different possible usage modalities:

• A very simple usage mode, through a browser-like interface that supports the search and retrieval of information without requiring the SME to carry out complex modelling of the system;

• A more complex mode, able to support SMEs in modelling their own product/system so as to give the system more instructions for performing the search and analysis;

• Embedding the information in systems currently used by SMEs, for example providing info to email clients, or gathering search info from existing SME systems (e.g. PDL/PDM systems, as well as CAD/CAM systems that also contain images/drawings).
Moreover, the application should be easily applicable to different sectors: even within the same manufacturing sector, technologies/products differ considerably from one another, so the CUbRIK application should be conceived as one that can be used by SMEs with different products, belonging to different sectors, without requiring big customisation efforts. Consistent with this input, two use scenarios were developed by INNEN as candidates for the SME Innovation V-App use scenario. Use Scenario 1 related to the packaging equipment manufacturing industry and Use Scenario 2 related to the fashion industry. The proposals for these two scenarios are included in Appendix B, "Use scenarios developed as candidates for the SME Innovation V-App use scenario". As a result of the V-Apps workshop in Como, Use Scenario 2, involving the fashion industry, was identified as having the clearest potential from the technical vantage point and was singled out for specific focus. User stories were then developed on the basis of the fashion industry scenario; these are described in the following subsection. In both cases, INNEN as an SME had to think not only about the functionalities offered to the final user of the CUbRIK platform, but also about an application that could become financially profitable in the future.



3.1.5 User story development

The input from the foundational interviews and from the two candidate use scenarios was used as a basis for the development of user stories for the fashion industry scenario. Further input was gathered through an informal survey of existing practices involving fashion on social multimedia websites. A number of websites were reviewed (cf. Figure 11), including http://fashism.com/, www.wearingittoday.com and www.vogue.co.uk/photo-blogs.

Figure 11 Examples of fashion sites reviewed
Also, an informal review was carried out of general social websites on which users share fashion images. In particular, it was noted that many fashion images are shared on the general photo-sharing website Flickr (cf. Figure 12). The fact that users are motivated to share fashion images using general tools not specifically developed for that purpose suggests that there is strong motivation online to share fashion images.

Figure 12 Examples of Creative Commons licensed fashion items on Flickr photo sharing website



3.1.6 User story evaluation with technical experts

During the meeting in Como, the two scenarios were analysed together with the technical experts. During the analysis, some specific concerns arose in regard to the scenario with mechanical SMEs, mainly due to the difficulty of using crowdsourcing in such a case (it is almost impossible to have a crowd provide feedback on very specific mechanical problems) and to the fact that image analysis in patents is very difficult (or almost impossible) because of the noise created by the notes in patent images explaining the functioning of the invention. In contrast, the fashion industry scenario looked more appropriate for demonstrating CUbRIK's main results in multimedia search with humans in the loop, through the possible use of crowdsourcing.

3.1.7 Proposal for technical implementation

The fashion industry scenario is based on a specific need of SMEs: the information they would like to have concerns what people like, and the more statistically valuable and precise this information is, the better. For this reason, the fashion industry application should focus on a certain category of users, citizens between approximately 15 and 30 years of age, and any information categorising users (age, sex, geography) with respect to their tastes (what they like in terms of colour, shapes, etc.) is good to have. Such information could be derived from data analysis of a categorised and annotated data set, and could be used by SMEs to buy, commercialise or offer specific products in line with people's tastes. The needs of the users (citizens) who make use of the application are less clear, sometimes even latent, and are mostly linked to entertainment (playing games, gaining something, the social need of connecting with others and sharing information on dresses, pure fun, etc.).

The overall vision is a CUbRIK Fashion Industry Web Portal with two faces. On the end-user front end, citizens upload pictures, search dress images, play games, and vote/rank items ("I like it", "Is it similar?", "I prefer ..."), for info and entertainment; content comes from available sources (e.g. Flickr), pictures uploaded by users, and images provided by content providers (e.g. fashion industry companies). On the SME side, enterprise access provides trend analysis and input for marketing and product innovation, based on gathering and analysing data on user activities (votes, searches, etc.): what people like, and what people would like.

Figure 13 Overall Vision of Fashion Industry



Description of the technical contribution:
• Detection of similar images of specific dress items (socks, ties, hats);
• Detection of lower and upper body parts and of dress attributes (texture, colour);
• Experiments with engaging a crowd for the definition of similarity;
• Experiments with engaging a crowd for gathering information on specific dresses and labelling dresses with this information (I like it, I do not like it, funny, ugly, nice, ...);
• Evaluation of crowd responses to find trends in the sector, through mining of the information gathered and relating it to dress typology (example: most people like socks of a certain texture and red colour).
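To make the colour-attribute point concrete, a minimal illustrative sketch (not part of the specification; the coarse colour rules and the pixel representation are invented for illustration) of extracting a dominant colour attribute, assuming an image has already been decoded into RGB triples:

```python
from collections import Counter

def coarse_colour(rgb):
    """Map an RGB triple to a coarse colour name (illustrative thresholds)."""
    r, g, b = rgb
    if r > 150 and g < 100 and b < 100:
        return "red"
    if g > 150 and r < 100 and b < 100:
        return "green"
    if b > 150 and r < 100 and g < 100:
        return "blue"
    return "other"

def dominant_colour(pixels):
    """Return the most frequent coarse colour in a list of RGB triples."""
    counts = Counter(coarse_colour(p) for p in pixels)
    return counts.most_common(1)[0][0]

# Example: a mostly red "sock" image represented as raw pixels.
pixels = [(200, 30, 40)] * 70 + [(20, 20, 220)] * 30
print(dominant_colour(pixels))  # red
```

In a real pipeline the thresholds would be replaced by a learned or calibrated colour model, and texture attributes would require actual computer vision features.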

3.2 Use cases

3.2.1 User Story 1a - Search Similar Images

In User Story 1a, the user takes a picture (or uses an existing one), uploads it, and can do two things:
• Ask the crowd whether they like it (similar to Fashism.com), also asking about further attributes, such as "I like it, but in a different colour";
• Search for similar images related to dress items (similar socks, similar skirts, etc.).

As sketched in Figure 14, the CUbRIK Images Analyser (similar dress image search function) provides results and asks follow-up questions (Was it similar? Do you like it? Search again?); users provide input and finalise the search; the CUbRIK User Action Analyser then analyses the votes and searches and provides SMEs with aggregate info on what people like (colours, cut, etc.).

Figure 14 User Story 1a
The following use cases on the overall App have been identified:

• Name: Search similar images 1

Identifier: Search similar images 1a-1

Basic Course of Action:
o User takes a picture (or uses an existing one) and uploads it
o User asks the crowd whether they like it (similar to Fashism.com), also asking about further attributes such as "I like it, but in a different colour"
o Crowd provides feedback on the picture

• Name: Search similar images 2

Identifier: Search similar images 1a-2

Basic Course of Action:
o User looks at a picture in the portal (from other users, or one that he/she uploaded)
o User searches for similar images related to dress items (similar socks, similar skirts, etc.); the results are retrieved from the indexed collection of matches and displayed
o User ranks the retrieved images in terms of "Is it similar?" (to improve search quality), "I like it", "I do not like it", etc.
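As an illustrative sketch of the retrieval step in this use case, assuming each indexed dress image has already been described by a feature vector (e.g. colour/texture descriptors), similar-image search can be approximated by a cosine-similarity nearest-neighbour lookup; all names and vectors below are invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def search_similar(query_vec, index, top_k=3):
    """Rank indexed images (id -> feature vector) by similarity to the query."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [img_id for img_id, _ in ranked[:top_k]]

# Toy index of dress images described by 3-dimensional feature vectors.
index = {
    "red_sock_1": [0.9, 0.1, 0.0],
    "red_sock_2": [0.8, 0.2, 0.1],
    "blue_skirt": [0.1, 0.1, 0.9],
}
print(search_similar([1.0, 0.0, 0.0], index, top_k=2))  # ['red_sock_1', 'red_sock_2']
```

The crowd's "Is it similar?" votes would then feed back into the ranking, which is the humans-in-the-loop aspect the scenario aims to demonstrate.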

3.2.2 User Story 2a - Play Games

In User Story 2a, CUbRIK publishes images of famous people wearing something "strange", or focusing on details. Example: look at the ties of Fiorello (a famous Italian showman). Users are asked to:
• say whether they like the tie;
• upload an image of their preferred tie;
• search for other similar images of Fiorello's ties;
• upload strange ties of other famous people (to be checked whether this is possible from a legal standpoint).
Games could provide rewards to the winner; for example, people upload pictures of ties and the crowd votes for the best one: the user who uploaded the winning tie receives the prize. As sketched in the figure below, the CUbRIK Images Analyser (similar face and similar dress image search functions) provides results and asks questions (Is it Fiorello? Do you like the ties? Search for similar ties?); users vote for their best tie, and the CUbRIK User Action Analyser provides SMEs with aggregate info on what people like; from here the flow continues to User Story 1a.

Figure 15 User Story 2a

• Name: Play funny games

Identifier: FunnyGames2

Basic Course of Action:
o CUbRIK publishes images of famous artists or of funny dresses
o The system asks users whether they like them, and gives them the opportunity to search for similar images, to rank images (like it: yes/no, etc.) or to upload similar funny images
o If asked to search for similar images, CUbRIK performs face detection (in the case of similar images of famous people) or dress similarity detection on the image datasets



o Crowd ranks the retrieved images as similar or not, and votes whether they like them or not
o The CUbRIK analyser takes the votes into account with respect to the image data set

3.2.3 What do I wear today

Figure 16 What do I wear today

• Name: What do I wear?

Identifier: what do I wear

Basic Course of Action:
o Users take pictures of their clothes and share them on the Fashion portal
o Users ask the system to provide images of lower or upper body parts, to check whether they match what they are wearing today
o The system retrieves images and shows them to the users
o Users select the best matching lower/upper body parts and ask the crowd for its opinion
o The crowd votes for the best matches
o The CUbRIK analyser takes the votes into consideration to annotate the data sets for trend analysis

• Name: Trend Analyser

Identifier: Trend analyser

Basic Course of Action:
o An SME user asks the system to provide insights on fashion trends
o CUbRIK finds the data sets annotated through crowd voting
o CUbRIK performs analysis of the textures and colours of the data sets to extrapolate trends
o CUbRIK provides answers such as: the best-voted socks are red, etc.
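The trend analysis described in this use case amounts to aggregating crowd votes over the annotated data set, grouped by item attributes. A minimal sketch, assuming votes have been collected as records with item type, colour attribute and vote (all field names and data are invented for illustration):

```python
from collections import Counter

# Each record: one crowd vote on an annotated image of a dress item.
votes = [
    {"item": "sock", "colour": "red",   "vote": "like"},
    {"item": "sock", "colour": "red",   "vote": "like"},
    {"item": "sock", "colour": "green", "vote": "dislike"},
    {"item": "tie",  "colour": "blue",  "vote": "like"},
]

def best_voted(votes, item):
    """Return the colour with the most 'like' votes for a given item type."""
    likes = Counter(v["colour"] for v in votes
                    if v["item"] == item and v["vote"] == "like")
    return likes.most_common(1)[0][0] if likes else None

print(best_voted(votes, "sock"))  # red
```

In the actual V-App, the same aggregation would additionally be sliced by user category (age, sex, geography) and by time and space, as required for trend analysis.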



3.3 Next steps

At this time, several example lists of requirements for a V-App in the SME Innovation domain of practice have been created. It is anticipated that the set of requirements presented in the revised version of this deliverable, to be completed at M24, will build on and refine these example lists. One list of high-level requirements is drawn from the foundational interviews presented in 3.1.3 and involves the key search functionalities for SMEs in the innovation process:

• Results should be presented in the form of a ranking in terms of importance;

• Information from the Web should be crawled (refreshed) constantly;

• Keywords should be suggested to the V-App users;

• The application should include alert functionality that signals new developments.
Another list was developed in parallel with the initial candidate use scenarios and is included in Appendix C, "Example requirements related to the SME Innovation V-App".



4. Horizontal demos

4.1 Introduction

In this chapter we describe the H-Demos, each of which demonstrates a technological contribution to the CUbRIK platform, whose architecture was explained in Deliverable 9.1, "Integration Guidelines. Human-enhanced time-aware multimedia search". This chapter describes the implementation of various features related to the pipelines in the content processing tier, as well as crowdsourcing mechanisms (referred to as "human annotation apps"). To provide an overview of the relationships among the demos, and between the demos and the overall architecture, we mapped the demos onto the overall architecture; the result is displayed in Figure 17. In the following sections we describe each of the H-Demos and their requirements. The following approach was taken to formulate the requirements:
1. A template was constructed in collaboration with the CUbRIK consortium to systematically describe the H-Demos. The following aspects are covered by the template:
• Application context: even though the H-Demos are not tied to specific user needs, the context in which the H-Demo features are used is deemed important, since they will be part of the CUbRIK platform, which is aimed at serving the needs of users in a real-life (search and retrieval) setting.
• Technical contribution: the technological progress that is achieved by means of the demonstrator.
• Requirements and success criteria: the requirements that will be implemented by the H-Demos themselves, including criteria that can be used to measure whether the H-Demo was successful.
• Use cases: a description of all user-system interactions, consisting of a name, an identifier, actors, pre-conditions and the basic course of action.
• Release planning: a plan of when the features of the H-Demo will be released.
2. A feature list was constructed in order to give the full picture of which features will be available by R1, and also to start establishing the overall inventory of features that will be available in future releases. The H-Demos show what, in the end, the CUbRIK platform is capable of.



Figure 17 Mapping of H-Demos onto the CUbRIK Platform Architecture
Sections 4.2 to 4.8 each describe an H-Demo. One important requirement for all



cases (H-Demos, but also the vertical apps and the overall platform, for that matter) is the consideration of copyright aspects. We address this topic in Section 4.9. Chapter 5 describes a compendium that outlines which features are implemented by which H-Demo, together with a summary of the H-Demo parts that are delivered at R1.

4.2 Media Entity Annotation

4.2.1 Application context

Description of the application context: The media entity annotation demo could fit into search application contexts such as:
• entity→media: visualising named entities in entity search results;
• media→entity: providing additional information about entities as a result of content-based media search. For instance, a person takes a photo of a monument; content-based search is used to find images of the monument in the entity repository; the system then provides additional information about the monument and its relations to other entities.
Description of the data set:
Input: A set of famous Italian monuments with metadata was expert-generated. In total, experts collected a set of 100 monuments located in different Italian cities such as Rome, Florence and Milan. Entities related to the cities or monuments were also collected.
Output: Using different attribute values of the 100 monuments, a data set of about 30,000 images (100 monuments × 100 images × 3 sources) is to be generated. Images will be queried from Panoramio, Picasa and Flickr.

4.2.2 Technical contribution

Problem solved:
• Harvesting representative images of named entities.

The workflow shown in Figure 18 comprises four steps: 1. Harvest (HOMERIA), drawing on the Internet (social data and media; CERTH, L3S, UNITN) and on media uploaded through crowdsourcing; 2. Clean (HOMERIA); 3. Find, via Entity Search (UNITN) over the Entitypedia entity repository (UNITN-ALL), supported by GWAP entity games (UNITN-POLIMI); and 4. Show, via Entity Visualization (UNITN). If the confidence of a harvested image is medium or low, it is routed to crowdsourcing.

Figure 18 Harvesting Representative Images of Named Entities



Description of technical contribution:
Automatic:
• Automatic harvesting of images from the web (e.g., Picasa, Flickr, Panoramio) for a set of named entities.
• Automatic cleaning of low-quality images by applying state-of-the-art computer vision algorithms.
• Automatic ranking of images using social features (number of likes/dislikes/favourites, comments, etc.) to filter highly relevant and interesting images.
• Automatic selection of a small subset of representative images that depict diverse aspects of the entities.
• Automatic classification of harvested images into low, medium, and high confidence levels.
• Storing and retrieval of entities, relations between entities, entity attributes and multimedia in the specialised entity repository.
Humans in the loop:
• Improving the quality of the results by applying crowdsourcing techniques, e.g. by having humans verify images with low confidence.
• Improving the quality of the results by applying GWAP techniques, e.g. automatic generation of crosswords from the entity-games framework for images with medium confidence.
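The confidence-based routing described above (crowd validation for low confidence, crossword games for medium confidence) can be sketched as follows. This is an illustrative example only; the numeric thresholds and field names are assumptions, not project components:

```python
# Illustrative sketch: route each harvested image to the appropriate
# human-in-the-loop step based on its automatic confidence level.
# The numeric thresholds are assumptions for the example.

def confidence_level(score, low=0.4, high=0.8):
    """Map a confidence score in [0, 1] to a low/medium/high level."""
    if score < low:
        return "low"
    if score < high:
        return "medium"
    return "high"

def route(images):
    """Decide the follow-up step for each harvested image."""
    routes = {"crowdsourcing": [], "gwap_crossword": [], "accepted": []}
    for image in images:
        level = confidence_level(image["confidence"])
        if level == "low":
            routes["crowdsourcing"].append(image["id"])    # crowd validation
        elif level == "medium":
            routes["gwap_crossword"].append(image["id"])   # entity-games crossword
        else:
            routes["accepted"].append(image["id"])         # stored directly
    return routes
```

A medium-confidence image would thus be handed to the crossword generator, while a low-confidence one is queued for crowd validation.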

4.2.3

Requirements and success criteria

Functional requirements:
Automatic functionalities (R1:M12)
• Entities must be associated with unambiguously identifying metadata.
• Images must be associated with unambiguously identifying metadata.
• Images and entities must be automatically related by using their metadata.
• 100 high-quality images must be automatically crawled from each of the image search engines (e.g., Picasa, Flickr and Panoramio) by using (GPS) locations and other metadata.
• Images must be further ranked and filtered using social features to obtain relevant and socially attractive images.
• A small number (5) of representative images must be automatically selected.
• Entities, images, their metadata and relationships must be stored in the entity repository.
Crowdsourcing functionalities (R1:M12)
• There are no crowdsourcing components in the first release.
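The ranking and filtering of crawled images using social features, as required above, can be sketched as a weighted sum of social signals. The weights and feature names below are illustrative assumptions, not values specified by the pipeline:

```python
# Illustrative sketch: rank harvested images by a weighted combination of
# social features (likes, favourites, comments) and keep the best candidates.

def social_score(image, w_like=1.0, w_fav=2.0, w_comment=0.5):
    """Weighted social appeal of one image; missing features count as zero."""
    return (w_like * image.get("likes", 0)
            + w_fav * image.get("favourites", 0)
            + w_comment * image.get("comments", 0))

def rank_and_filter(images, top_k=5):
    """Sort images by descending social appeal and keep the top_k of them."""
    return sorted(images, key=social_score, reverse=True)[:top_k]
```

With top_k=5 this also realises the selection of the small number (5) of representative candidates per entity.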

Automatic functionalities (R3:M24)
• Each piece of metadata must be associated with a confidence level.
• Crosswords must be automatically generated for images and metadata with a medium level of confidence.
• Images must be indexed to allow content-based entity search.
• Images must also be indexed based on time information (time of creation, time of indexing) to support time queries directed at the multimedia indexer.
• Images of monuments must focus on the monuments and not on other entities; e.g., images of people in front of a monument must be filtered out. Any image that does not clearly and distinctively depict the monument must be discarded.
• Entities must be indexed to allow metadata-based entity search. Metadata-based search may be one of the following:



o keyword-based syntactic search,
o concept-based semantic search,
o entity search, which is based on relationships between entities.

• Search results must be ordered according to their relevance and also their confidence. A learning-to-rank framework must be constructed to obtain the final set of images using different types of image features (i.e., content-based, social, etc.).
• Entities in the search results may be described by their metadata, related media, or both.
• The GUI must have compact and extended representations for entities.

Crowdsourcing functionalities (R3:M24) •

For low confidence images, the crowd must validate an image with respect to its relevance to the entity. Validity of non-expert annotations must be evaluated.

The crowd may suggest a new image for an entity.

For medium-high confidence images, the crowd must play crossword games to validate metadata and media associated with entities.

The crowd may suggest errors in images used for crossword.

Crowdsourcing tasks may be allocated depending on users’ characteristics such as profile and previous performance.

Crosswords might use incentives in order to increase variables such as users’ performance and level of engagement.

Non-functional requirements:

R1:M12
• The entity repository must be able to store at least 1M entities on a single server.

R3:M24

The system must produce the results for entity search in less than 1 second.

Success criteria: The success criteria comprise: •

Quantitative demonstration of improvement of the resulting data set quality after applying crowdsourcing and human computation techniques.
o The goal of the first release (R1:M12) is to achieve state-of-the-art performance (in terms of precision and/or recall) in the task of fully automated media entity annotation (i.e. crawling representative images for given named entities).
o The goal of the second release (R3:M24) is to improve the precision of the automatic media entity annotation pipeline by applying crowdsourcing techniques for low-confidence images and human computation techniques (e.g. crosswords) for images with medium confidence.
o The overall goal is a combined human-computer media entity annotation pipeline that achieves results comparable to those produced by people (i.e. the resulting data set can be considered a gold standard), but with far fewer human resources involved.

Quantitative demonstration of improvement of precision and/or recall of entity search algorithms, using P@K and MAP as measures. It is hypothesized that state-of-the-art entity search algorithms will achieve better results on the data set generated by the media entity annotation pipeline with crowdsourcing and human computation components than without them.
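The P@K and MAP measures named above are standard information retrieval metrics; a minimal reference implementation for ranked result lists is:

```python
# Precision at K (P@K) and Mean Average Precision (MAP) over ranked results.

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k ranked items that are relevant."""
    return sum(1 for item in ranked[:k] if item in relevant) / k

def average_precision(ranked, relevant):
    """Average of precision values at the rank of each retrieved relevant item."""
    hits, total = 0, 0.0
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

def mean_average_precision(queries):
    """queries: list of (ranked_results, relevant_set) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in queries) / len(queries)
```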



4.2.4

Use cases

Media Harvesting for Entities •

Name: Media harvesting for entities

Identifier: UC harvesting1

Actors: Administrator

Figure 19 UML diagram for ‘Media Harvesting for Entities’ •

Pre-conditions: Entity repository has metadata which unambiguously identifies the entities, e.g. name and geo-coordinates of the monument.

Basic Course of Action:
o The administrator provides a set of entity ids or a search query.
o The list of entities with associated metadata is extracted from the entity repository.
o Media content and metadata are automatically harvested from popular social media sharing web sites, e.g. Panoramio, Picasa, and Flickr. The first platform is used to query images based on their (GPS) locations; the latter two platforms use query by keyword and relevance. The first 100 images are downloaded for every entity from every source.
o Crowd information is captured (author, user response, etc.) to allow research in image quality and user feedback.
o Images are further ranked and/or filtered with respect to their relevance to the initial query (i.e., Italian monuments), taking into account the community appeal (i.e., social features).
o Confidence levels (low, medium, high) are assigned to each media content item and its metadata. Media and metadata with a low level of confidence are validated by the crowd. The resulting media with metadata and confidence levels are stored in the entity repository.
o Crosswords are generated for media and metadata with medium confidence. The results are used to improve the quality of the harvested media and metadata in the entity repository.
o For every entity, a small number (5) of representative images that depict diverse aspects of the entity are automatically selected from the 300 downloaded images.

Post-conditions: None



Entity search by metadata •

Name: Entity search by metadata

Identifier: UC entity search1

Actors: User

Figure 20 UML diagram 'Entity search by metadata' •

Pre-conditions: Entities and related multimedia were imported and indexed.

Basic Course of Action:
o The user inputs a query describing properties of the desired entities. For instance, the user wants to find all pictures of monuments which are located in Florence and were built between 1500 and 1600.
o The system searches the entity repository for entities satisfying the given query criteria.
o The resulting entities are shown to the user with metadata and associated multimedia.
o The user can use image search to find images similar to the ones associated with the monuments.

Post-conditions: None
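The example query above (monuments located in Florence and built between 1500 and 1600) amounts to attribute filtering over the entity repository. A minimal sketch, with an assumed entity schema:

```python
# Illustrative sketch: metadata-based entity search over an in-memory
# repository. The attribute names (city, built) are assumptions.

def search_by_metadata(entities, city=None, built_from=None, built_to=None):
    """Return the names of entities matching all given criteria."""
    results = []
    for entity in entities:
        if city is not None and entity["city"] != city:
            continue
        if built_from is not None and entity["built"] < built_from:
            continue
        if built_to is not None and entity["built"] > built_to:
            continue
        results.append(entity["name"])
    return results
```

A call such as search_by_metadata(repo, city="Florence", built_from=1500, built_to=1600) would return only the Florentine monuments of that period.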



Entity search by multimedia •

Name: Entity search by multimedia

Identifier: UC entity search2

Actors: User

Figure 21 UML diagram 'Entity search by multimedia' •

Pre-conditions: Entities and related multimedia were imported and indexed.

Basic Course of Action:
o The user has an image of a monument and wants to find more information about it. He posts the image as a query. The user may also provide a time range in the query, e.g. to find images of monuments established between 1500 and 1600 A.D.
o The system runs a content similarity search for entities with images similar to the one provided by the user. Search results are ranked based on content similarity and community preferences.
o The resulting images and related entities are shown to the user with metadata.
o The user can navigate the entity graph, learning more about related entities.

Post-conditions: None.
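The content similarity search with an optional time range, as in the course of action above, can be sketched over plain descriptor vectors; in the real pipeline the descriptors would come from visual feature extraction:

```python
import math

# Illustrative sketch: rank entities by cosine similarity between a query
# image descriptor and indexed descriptors, with an optional time filter.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def search_by_image(query_vec, index, year_from=None, year_to=None):
    """index: list of (entity_id, descriptor, year). Returns ids, best first."""
    scored = []
    for entity_id, descriptor, year in index:
        if year_from is not None and year < year_from:
            continue
        if year_to is not None and year > year_to:
            continue
        scored.append((cosine(query_vec, descriptor), entity_id))
    return [entity_id for _, entity_id in sorted(scored, reverse=True)]
```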

4.2.5

Release planning

The automatic part of the pipeline will be completed by R1:M12. The extensions with humans in the loop will be completed by R3:M24.



4.3

Crosswords

4.3.1

Application context

Description of application context: The crosswords demo could fit in application contexts like: •

Assessing correctness of metadata of acquired entities;

Improving metadata of acquired entities;

The schema of the demo could be generalized to many other application contexts, for example, to educational context or human testing and evaluation context.

4.3.2

Technical contribution

Problem Solved:
• Collecting user feedback for improving the quality of entity metadata.

Description of technical contribution:

Creating a game with a purpose that collects feedback, relevance judgments and improvements for entity metadata.

Experiment with engagement of a crowd for human computation tasks, with the goal of improving the quality of the result.

Design of a reusable Game Framework enabling developers to easily create new word games with a purpose.

4.3.3

Requirements and success criteria

Functional requirements:

Automatic functionalities (R1:M12)
• Crosswords must be generated from the entity repository, using entity metadata.

Crowdsourcing functionalities (R1:M12)
• Users must be able to submit metadata error corrections as feedback.
• Crosswords must be playable, that is, the task must be crowdsourced and gamified.
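Crossword generation from entity metadata can be sketched as deriving (answer, clue) pairs, with the entity name as the answer and its attributes forming the clue. The schema below is an assumption for illustration, not the Game Framework API:

```python
# Illustrative sketch: derive crossword entries from entity metadata.

def crossword_entries(entities):
    """One entry per entity: the name is the answer, attributes form the clue."""
    entries = []
    for entity in entities:
        answer = entity["name"].replace(" ", "").upper()
        clue = "%s in %s, built in %d" % (
            entity["type"], entity["city"], entity["built"])
        entries.append({"answer": answer, "clue": clue, "length": len(answer)})
    return entries
```

An error spotted by a player (e.g. a wrong construction year appearing in a clue) would then be reported back as metadata feedback.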

Automatic functionalities (R3:M24) •

Crosswords must be implemented on top of a Game Framework;

Feedback evaluation may be automatic;

Users’ expertise may be collected through the Game Framework during the game;

Crowdsourcing functionalities (R3:M24) •

Users must be able to register or play as guests;

Crosswords must have good incentives embedded such as bonus and achievement systems;

Automatic functionalities (R5:M36) •

Crossword generation may be customizable and flexible;

Relevance evaluation may be automatic;



Users’ expertise levels and crossword topics may be automatically suggested;

Crowdsourcing functionalities (R5:M36) •

Users may be able to have a socially integrated experience (e.g. Facebook integration);

Non-functional requirements

R1 (M12)
• The game should be responsive, e.g. user input should be processed in less than a second.

R3 (M24)

The game should have an appealing user interface;

Success criteria: The success criteria comprise: •

Game Framework implemented and available to game developers, who can access the Game Framework API and its documentation and build new games on top of it, by the end of the project;

At least 1 game implemented on top of the framework by the end of the project;

Feedback collected for the collection of entities: correction of mistypes in textual metadata (such as name misspellings), correction of relational attributes in metadata (such as an association between an entity and its image; an association between two entities with a relation like “bornIn”);

Quantitative demonstration of metadata improvement thanks to crowd contributions. The quantitative metadata improvement is measured against a random subset of entities selected from the monuments data set and manually checked. If the error rate in the original data set is too low, or the errors do not vary widely enough, manual distortions may be introduced for testing and demonstration purposes.



4.3.4

Use cases

Figure 22 UML diagram Crosswords •

Name: Metadata improvement

Identifier: UC metadata1

Actors: User

Pre-conditions: Entity repository has metadata which identifies the entities, e.g. name and geo-coordinates of the monument with medium level of errors.

Basic Course of Action:
o The user plays a word game designed around the abovementioned set of entities;
o The user spots mistakes in metadata and submits feedback, motivated by bonus points and an achievement system;
o The Game Framework records, consolidates and submits the collected corrections to the entity repository.

Post-conditions: Entity repository contains entity metadata of a higher quality.

Name: Relevance improvement

Identifier: UC metadata2

Actors: User

Pre-conditions: Entity repository has metadata which identifies the entities, e.g. name and geo-coordinates of the monument and a set of associated images.

Basic Course of Action:
o The user plays a word and media game designed around the abovementioned set of entities;
o The user spots irrelevances in metadata and submits feedback, motivated by bonus points and an achievement system;
o The Game Framework records, consolidates and submits to the entity repository the relevance information collected during games.

Post-conditions: Entity repository is enriched with relevance information.



4.3.5

Release planning

The pipeline will be completed in several iterations of maturity and synchronized with platform releases: R1 (M12), R3 (M24) and R5 (M36).

4.4

Logo detection

4.4.1

Application context and data set

Description of application context: The logo detection demo could fit in application contexts like: •

Marketing firms assessing the relevance of an advertisement campaign (e.g., sport sponsorship)

A public authority wishing to track hidden advertisement in video

The schema of the demo could be generalized to many other application contexts, if the logo image matching component is replaced by another domain-specific object matching component or trained on a different object recognition task.

Description of data set: The provided dataset is Grozi-120 (http://grozi.calit2.net/grozi.html). Grozi-120 is a multimedia database of 120 grocery products. The database contains both a collection of images (representing the products as isolated objects in ideal imaging conditions) and a collection of 29 videos (taken in a shop). The matching phase will be performed against this video collection. The dataset comprises a ground truth, i.e., each video is provided with annotations about the possible occurrences of logos in each frame.

4.4.2

Technical contribution

Problem Solved: •

Brand logo detection in video collections, starting from an input textual keyword.

Description of technical contribution: •

Detection of trademark logos in videos, based on an open source implementation of SIFT visual features.

Experiment with engagement of a crowd for human computation tasks, with the goal of improving the quality of the result.

Definition of content processing human computation tasks at different levels of difficulty

Definition of human computation tasks for relevance feedback

Preliminary evaluation of the crowd response using an open social network (Facebook)

4.4.3

Requirements and success criteria

Automatic functionalities (R1:M12) •

Brand names must be mapped to brand logo images

Video files must be segmented

Video segments must be summarized by the most relevant fragment in the video (key frame)

Visual descriptors must be extracted from logo images and key frames

Visual descriptors may be indexed



The logo images must be matched against the key frames of a video segment

A match must be associated with the confidence value of the match

Low confidence matches must be detected based upon a threshold

The threshold must be configurable

The matches must be presented in a GUI; every match is a triple: (video, key frame, confidence)

Matches must be sorted in descending order of confidence

Crowd votes for correct and incorrect matches may be exploited for suggesting an update of the matching threshold value
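The configurable threshold and its crowd-driven update can be sketched as follows. The update rule used here (the midpoint between the weakest match voted correct and the strongest match voted incorrect) is an illustrative assumption:

```python
# Illustrative sketch: flag low-confidence logo matches for crowd validation
# and let crowd votes suggest a new value for the matching threshold.

def split_matches(matches, threshold):
    """Matches below the threshold are queued for crowd validation."""
    low = [m for m in matches if m["confidence"] < threshold]
    high = [m for m in matches if m["confidence"] >= threshold]
    return low, high

def suggest_threshold(votes, current):
    """votes: list of (confidence, is_correct) pairs collected from the crowd."""
    correct = [c for c, ok in votes if ok]
    incorrect = [c for c, ok in votes if not ok]
    if not correct or not incorrect:
        return current  # not enough evidence to move the threshold
    # place the threshold between the weakest accepted and strongest rejected match
    return (min(correct) + max(incorrect)) / 2.0
```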

Crowdsourcing functionalities (R1:M12) •

The crowd must validate a logo image with respect to its relevance to the brand name

The crowd must retrieve a new logo image relevant to the brand name

The crowd must validate (yes/no) if a match between a logo image and a key frame of a video is correct

Success criteria: The success criteria comprise: •

Demonstration of how the result set varies due to crowd contribution.

Qualitative elicitation of usability problems in interacting with the crowd on open social networks

Qualitative elicitation of technical problems in deploying asynchronous crowd tasks on top of SMILA

Non-functional requirements (R1:M12) •

The GUI must permit the access to a video in the time instant where a given match selected by the user appears

The system will produce the query results of a query for a brand in less than 1 second

The system will produce the query results of a query for a new brand in less than 60 seconds from the time a set of candidate logo images is available for matching.

The system will segment and index a new video of up to 60 seconds in less than 60 seconds

4.4.4

Use cases

Name: Query for a brand

Identifier: UC logo1

Actors: User

Pre-conditions: Matches of the queried brand in the available collection have been identified and indexed.

Basic Course of Action:
o The user inputs a keyword corresponding to a brand name
o The result is retrieved from the indexed collection of matches and displayed

Post-conditions: None



Name: Query for a new brand

Identifier: UC logo2

Actors: User

Pre-conditions: The video collection has been indexed.

Basic Course of Action:
o The user inputs a keyword corresponding to a non-indexed brand name
o The system asks the user to trigger the search for the new brand in the video collection
o A new collection of logo instances for the specified brand is downloaded from an external web service, e.g., Google Images
o The set of instances is validated by the crowd by means of the conflict-manager component
o The instances tagged by the crowd as ‘relevant’ compose the set of known instances for the specified brand
o The matching phase between the video collection and the logo instances is performed
o The result is stored in the search index
o The user is notified of the presence of results for the query

Post-conditions: Matches of the queried brand are added to the index.

Name: Insertion of a new video into the video collection

Identifier: UC logo3

Actors: Administrator

Pre-conditions: A set of logo instances is available (e.g., from previous queries)

Basic Course of Action:
o A new video is added to the video collection
o The file crawler notifies the system about the presence of a new video
o The new video is segmented, extracting the key frames
o The keypoints are extracted and the descriptors are computed
o Keypoints and descriptors are added to the data repository
o The matching phase between the video collection and the logo instances is performed
o The result is stored in the search index

Post-conditions: Matches of logo instances in the new video are added to the index.



[UML use case diagram for logo detection — actors: User, Administrator; use cases: ‘Query for a brand’, ‘Query for a new brand’ (<<extend>> of ‘Query for a brand’), ‘Add a new video’ (Administrator).]

4.4.5

Release planning

The component will be completed by R1: M12

4.5

People identification

4.5.1

Application context

Description of application context: The people identification demo could fit in application contexts like: •

Generally: o Identification of people in a personal photo collection o Detection and identification of people in social gatherings

Examples of use-cases: o Identification of people in historical news archives, e.g. association of depicted people in press photos that have accompanying text o Fashion-related pattern mining tasks, e.g. detecting human bodies for the task of analyzing their clothes

Description of data set: A dataset in the form of a photo collection (where photos mainly depict people) is to be used to evaluate the pipeline. Generally, the pipeline targets photo collections with few individual people that frequently appear (together). Thus, the dataset should preferably be from a single source (a personal photo collection). The dataset should comprise a ground truth: each photo should be provided with annotations (face markings and labels) with respect to the depicted people.

4.5.2

Technical contribution and success criteria

Problem Solved: •

Identification of people (based on their faces) in a (personal) photo collection

Description of technical contribution: •

Detection and identification of people in a photo collection based on their:



o Faces (predominantly)
o Possibly other contextual cues like time, social semantics and clothing (body patches)

Considering semantics of photos with multiple people

Relevance feedback mechanism for improved query-retrieval

4.5.3

Requirements and success criteria

Automatic functionalities (R2:M18) •

Instances of depicted people (or their faces) must be mapped to distinct individual labels

Faces must be detected

Face features must be extracted and face descriptors must be generated

Given a face descriptor, a nearest/best match among other face descriptors must be found

Exclusivity (no same people in a single photo) must be considered

Other social semantics may be considered

Other contextual cues like time or clothing may be considered

Confidence values of matches must be computed and exposed

The matches must be presented in a GUI

Matches must be sorted in descending order of confidence
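The exclusivity constraint above (the same person cannot appear twice in a single photo) can be sketched as a greedy assignment of detected faces to known individuals. Descriptors are plain vectors here, standing in for real face features:

```python
import math

# Illustrative sketch: assign each detected face in one photo to at most one
# known individual, enforcing exclusivity (each label used once per photo).

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assign_faces(face_descriptors, known_people):
    """known_people: dict label -> reference descriptor. Greedy best match."""
    candidates = []
    for i, face in enumerate(face_descriptors):
        for label, reference in known_people.items():
            candidates.append((euclidean(face, reference), i, label))
    candidates.sort()  # best (smallest-distance) pairs first
    assignment, used_faces, used_labels = {}, set(), set()
    for _, i, label in candidates:
        if i in used_faces or label in used_labels:
            continue  # exclusivity: each face and each label assigned once
        assignment[i] = label
        used_faces.add(i)
        used_labels.add(label)
    return assignment
```

Note that even when two faces are both closest to the same individual, exclusivity forces the second face onto the next-best label.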

Relevance feedback (R2:M18) •

When user selects/clicks on a person in the gallery overview, initial results are presented. Repeating this step refines the presented results by incorporating a relevance feedback-like mechanism.

Non-functional requirements (R2:M18) •

The GUI must permit access to a photo at the position where a given match selected by the user appears

The system will produce the query results of a query for a person in less than 10 seconds

Success criteria: The success criteria comprise: •

Detection and identification rate of people should be over 50% given a training set size of no more than 10%. For example, with 30 different people in a dataset of 800 face instances, the rate should be over 50% compared to chance level of about 3%.

Relevance feedback should further improve the identification rate by at least 10%.



4.5.4

Use cases

Name: Insertion of a new photo into the photo collection

Identifier: New Photo

Basic Course of Action:
o A new photo is added to the photo collection
o The file crawler notifies the system about the presence of a new photo
o The new photo is processed: faces are detected and face descriptors are extracted
o Intermediate results are stored in a database
o Re-clustering/inference of the people depicted in the new photo is performed
o The final results are stored in a database (search index)

Name: Query all instances (appearances) of an individual person

Identifier: Show person’s appearances

Basic Course of Action:
o An overview gallery of all individual people is shown
o The user clicks on an item in the shown overview gallery
o Based on pre-computed results, all appearances of the selected person are then displayed

Name: Query social connections of an individual person

Identifier: Social connections

Basic Course of Action:
o An overview gallery of all individual people is shown
o The user clicks on an item in the shown overview gallery
o Based on pre-computed results, the social connections of the selected person (the people the person is linked to through co-occurring depictions) are then displayed

4.5.5

Release planning

The pipeline will be completed by R2: M18

4.6

News History Pipeline

4.6.1

Application context and data set

Description of application context: The news history demo could fit in application contexts like: •

Broad news topic summary creation of different sources for end users

Historical and political research for interested citizens

Support for social / political / media research on different methods of news editing/coverage/manipulation



Description of data set: A set of video clips, associated metadata and feature fingerprints (extracted from the clips) are present within the system. A clip may have one or several associated clips, i.e. clips which are complete or partial near-duplicates. Dependencies between content segments are established using fingerprint-based robust video identification. The temporal order and provenance of the clips are determined and validated using extracted metadata, contextual information and user annotations. Moreover, approaches for detecting editing/coding footprints may be applied for this purpose.

The data set to be used for demo purposes consists of a collection of news clips from at least 7 different news topics, where each topic is covered by at least 10 clips from different sources. The data set might be updated/refined by H-Demo administrators during the development phase.

Important issue: Benchmarking the system performance requires ground truth data. First experiments showed that annotating overlaps of news clips is a hard task even for humans. The time effort is therefore quite high and might be difficult to handle within the project. One solution might be to use clips from a different domain, such as movie trailers.

Projected timing of the pipeline: The first version of the pipeline will be completed, and work on it concluded, by R3:M24. The second version will be completed by R4:M30.

4.6.2

Technical contribution

Problem Solved: •

Identification of common sequences among a collection of news clips on a specific topic, based on a keyword and/or content-based video query (detection of how content has been used/edited by various actors)

Description of technical contribution: •

Granular temporal video matching based on spatial and temporal video fingerprinting technology.

Experimental inclusion of human knowledge/computation for validation, annotation and content provision tasks, with the goal of improving the quality of the result.

Prototype of a visualization / interaction interface for result presentation and user feedback
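The granular temporal matching can be sketched as finding maximal common runs of per-frame fingerprint hashes between two clips. This is an illustrative simplification: the actual fingerprinting technology uses robust perceptual descriptors compared with a distance tolerance, not exact hash equality:

```python
# Illustrative sketch: detect shared segments between two clips represented
# as sequences of per-frame fingerprint hashes.

def shared_segments(fp_a, fp_b, min_len=3):
    """Return (start_a, start_b, length) for maximal common runs of hashes."""
    segments = []
    for i in range(len(fp_a)):
        for j in range(len(fp_b)):
            if fp_a[i] != fp_b[j]:
                continue
            # skip positions that lie inside a run already reported
            if i > 0 and j > 0 and fp_a[i - 1] == fp_b[j - 1]:
                continue
            length = 0
            while (i + length < len(fp_a) and j + length < len(fp_b)
                   and fp_a[i + length] == fp_b[j + length]):
                length += 1
            if length >= min_len:
                segments.append((i, j, length))
    return segments
```

Two news clips sharing a run of frames would yield one (start_a, start_b, length) segment, from which temporal-order and provenance hypotheses can be derived.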

4.6.3

Requirements and success criteria

Automatic functionalities (R3:M24) •

The system must provide a query interface

The system must provide file upload and download functionality

The system must be able to (temporarily) store content

The system must use a database to store fingerprints, metadata and relationships, i.e. the database contains the model

The system must detect perceptual temporal segment duplicates o The system may employ temporal pre-segmentation of video footage



o The system must extract spatial video descriptors
o The system must extract temporal video descriptors
o The system must create a fingerprint and may use hashing algorithms on perceptual descriptors

The system may extract metadata from video/ image files

The system may generate topic proposals based on keywords

The system must be able to trigger external search engines

The system must be able to process results of external search engines

• The system must generate, deploy and display result visualizations

Crowdsourcing functionalities (R3:M24)

The user must provide a query

The user/crowd may provide new content

The user/crowd may validate query results by confirming / establishing or rejecting relationships

The user/crowd may annotate query results by:
o providing additional topic keywords
o providing temporal relations
o providing the creation date of the news clip

Success criteria: The detection and identification rate of people should be over 50% given a training set size of no more than 10%. For example, with 30 different people in a dataset of 800 face instances, the rate should be over 50%, compared to a chance level of about 3%. Relevance feedback should further improve the identification rate by at least 10%.

4.6.4

Use cases

Figure 23 Use case diagram for H-Demo “News History”



Query •

Name: Query

Identifier: UC NH1

Actors: User, System

Pre-conditions: Videos & metadata of the queried topic keyword or video exist in the collection

Basic Course of Action:
o The user performs a textual query (using a keyword, topic, or specific metadata) or a content-based query (image/video, or URL to an image/video), or a combination of the aforementioned
o According to the query, a system action is triggered: video upload, download or metadata matching
o The model update use case is triggered
o Visualization data is generated and presented to the user

Post-conditions: None

Figure 24 Activity diagram for use case "Query" (UC NH1)



Content Provisioning •

Name: Content provisioning

Identifier: UC NH2

Actors: User / H-Demo Admin, System

Pre-conditions: None

Basic Course of Action:
o Topic choice:
A: The user requests existing topics (collected from previous user actions) by means of a keyword and selects the appropriate topic, and/or
B: The user provides a new/refined topic
o The user provides related video material via upload or download reference, and/or related keywords
o External search engines are triggered to retrieve related content
o The results of the external search engines are fused and validated by the user against the provided topic
o The result of that validation is sent into the model update use case

Post-conditions: None

Figure 25 Activity diagram for use case "Content Provisioning" (UC NH2)



Collection Validation & Annotation •

Name: Collection Validation & Annotation

Identifier: UC NH3

Actors: User / H-Demo Admin, System, External Video Search Engines

Pre-conditions: Videos & metadata exist in the collection

Basic Course of Action:
o A visualization is created for a set of relations to be validated/annotated
o The visualization is presented to the user
o The user interactively validates connections, provides new keywords and corrects/annotates wrong/missing/new relations
o The user-provided data is fed into the model update use case

Post-conditions: None

Figure 26 Activity diagram for use case “Collection Validation & Annotation" (UC NH3)



Model Update •

Name: Model Update

Identifier: UC NH4

Actors: System

Pre-conditions: Provision of new data (content, user annotation, user feedback) for the news history model

Basic Course of Action:
o Transmitted data is stored in the data storage
o Metadata are extracted from the data
o A fingerprint is created for the data
o Validation data is processed and integrated into the perceptual matching process
o Merged metadata, matching results and/or processed annotations update the index

Post-conditions: None

Figure 27 Activity diagram for use case “Model Update " (UC NH4)



4.6.5

Release planning

The first version of the pipeline will be completed, and work on it concluded, by R3:M24. The second version will be completed by R4:M30.

4.7

Accessibility aware relevance feedback

4.7.1

Application context

The Accessibility-aware Relevance Feedback H-Demo aims to enhance the process of multimedia content harvesting so that it will later be able to provide an accessibility-related indicator (i.e. a confidence factor) regarding the level of accessibility and usability offered to specific groups of users. Initially, the “Accessibility aware Relevance feedback” demo will be exhibited as a standalone horizontal demo that can handle various types of multimedia-containing web pages; later on it will be incorporated into the “History of Europe” CUbRIK App.

4.7.2

Technical contribution

Evaluation of the accessibility and usability of the multimedia content of web pages (i.e. images, sound, text, header tags and metadata), so as to provide accessibility relevance factors by executing extended multimedia analysis (see the list of features below).

An initial evaluation of the accessibility level of web sites will be based on existing accessibility tools (mostly open source), such as tools/APIs for creating accessible software and standardization efforts (e.g. the Java Accessibility API, GNOME, IBM's IAccessible2, etc.), as well as available accessibility standards and methodologies (e.g. WAI-ARIA, ISO, the UK Disability Discrimination Act (DDA), etc.).

Generation of user-specific profiles from information derived during the user’s registration phase and the follow-up updates of these profiles according to the future behaviour of each user.

The accessibility evaluation of each web page will be user-specific and will address the needs and disabilities recorded in the profile of the logged-in user (i.e. specific disabilities will be assigned to certain attributes of the multimedia objects of the web page). For this purpose, a list of actual disabilities will be constructed, taking into account visual, hearing and possibly even motor impairments. Similarly, all possible web-page data attributes should be studied in depth and collected. Multimedia content characteristics: (a) text (i.e. visual information, hard-coded fonts, text background, multi-column formats, styles and stylesheets, etc.), (b) audio (i.e. sound quality, volume control, captions and/or transcripts, etc.), (c) images (i.e. alternate text, image resolution, zoom feature, etc.) and (d) other multimedia content (i.e. captions, transcripts or audio descriptions in videos, timing of media delivery, etc.).

In order to adapt the accessibility distance measure to match the user's expectations (disabilities), users will be given the opportunity to provide relevance feedback regarding the accessibility level of the web page, via direct interaction with the query results. The relevance feedback they provide will refer either to the web page as a whole or to specific multimedia objects contained in it. Relevance feedback referring to single multimedia objects contained in the web page will be processed autonomously and will partially (e.g. via weighting factors) contribute to the final accessibility confidence factor of the full web page.
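The per-object contribution via weighting factors described above can be sketched as a weighted average whose weights are nudged by user feedback. The weight-update rule, learning rate and bounds are illustrative assumptions, not the project's actual method:

```python
def page_confidence(object_scores, weights):
    """Page-level accessibility confidence as a weighted average of
    per-object accessibility scores (scores and weights in [0, 1])."""
    total = sum(weights[k] for k in object_scores)
    return sum(object_scores[k] * weights[k] for k in object_scores) / total

def apply_feedback(weights, obj_id, relevant, lr=0.2):
    """Nudge one object's weight up or down after relevance feedback."""
    w = weights[obj_id] + (lr if relevant else -lr)
    weights[obj_id] = min(1.0, max(0.1, w))  # keep weights in a sane range
    return weights

scores = {"img-1": 0.9, "audio-1": 0.4, "text-1": 0.8}
weights = {"img-1": 0.5, "audio-1": 0.5, "text-1": 0.5}
# The user flags the audio object as inaccessible for their profile:
weights = apply_feedback(weights, "audio-1", relevant=False)
confidence = page_confidence(scores, weights)
```

Feedback on a single object thus lowers (or raises) that object's influence on the final page-level confidence factor.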

Advanced crowdsourcing techniques will be utilized to evaluate those attributes of multimedia data or multimedia metadata for which no automatic evaluation can be performed, or for which it is unacceptably expensive in terms of processing resources (e.g. useful navigation options, meaningful link text, meaningful tagging of tables, etc.). The functional requirements and features that will be supported in the corresponding CUbRIK platform releases R3 (M24) and R5 (M36) are the following:

(R3-M24) List of Features: •

Accessibility related descriptors/features concerning the visual objects in a web-page (i.e. images and video streaming media): o the colour histogram, o the colour layout, o areas with high luminance values, o the image resolution, o shape descriptors of the image, o the contrast of the image and o its texture, so as to address vision-related disabilities (e.g. colour blindness, etc.).

Accessibility related descriptors/features concerning the audio objects in a web-page (i.e. sounds and audio streaming media): o Frequency spectrum, o phase related features, o percentage of high-, low- frequencies, o duration, o mono/stereo/surround/etc., o bit rate, o loudness per frequency band (dB), o DC values, o (P)SNR, o etc.

Accessibility related features concerning the text (objects) in a web-page: o Font size, o Font colour, o Font contrast with respect to the background, o Text alignment, o Indentation/Spacing, o etc.

Accessibility related features concerning the metadata (for image/sound/text) in a webpage: o Any content in audio/visual format should also be available as a text transcript for hearing-impaired users. o Existence of images' "alt" attribute. o Existence of proper header tags (i.e. h1, h2, h3, etc.) that make site navigation easier for users of assistive technologies (e.g. screen readers). o Preservation of consistency in layout, colour, and terminology to reduce the cognitive load placed on users. o Absence of "hard-coded" text sizes that would defeat the use of standard browser controls.
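Two of the metadata checks above (missing "alt" attributes and presence of header tags) can be sketched with Python's standard html.parser; the class name and the exact rules counted are illustrative assumptions:

```python
from html.parser import HTMLParser

class AccessibilityAudit(HTMLParser):
    """Collect simple accessibility signals: images missing an "alt"
    attribute and the set of header tags (h1..h6) present."""
    def __init__(self):
        super().__init__()
        self.images_missing_alt = 0
        self.images_total = 0
        self.header_tags = set()

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.images_total += 1
            # An absent or empty alt attribute counts as missing.
            if not dict(attrs).get("alt"):
                self.images_missing_alt += 1
        elif tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            self.header_tags.add(tag)

audit = AccessibilityAudit()
audit.feed('<h1>News</h1><img src="a.png" alt="Chart"><img src="b.png">')
# audit.images_total == 2, audit.images_missing_alt == 1
```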



4.7.3

Requirements and success criteria

Functional Requirements (specifying functions that the system components must be able to perform): •

Input: web pages returned by the search results related to any possible topic, in addition to time-space information and multimedia content, as well as the current user's profile.

Output: a confidence factor indicating, for the specific user, the accessibility level of the content and the usability level of the web page.

Data storage: a database storing o the accessibility-related user profile, o the extracted features (see the feature list above), o potentially the multimedia data of the web page itself, o the search history, o the relevance feedback input and o the changes over time of the relevance-feedback-related weighting factors.

Processing: extraction of descriptor vectors per web page is a complex task that involves image, sound, text and metadata analysis, together with the incorporation of state-of-the-art relevance feedback techniques for multimedia data. For the simple case of examining whether the "alt" attribute is present for every image in a web page, the analysis takes only a few seconds on a Core i5 @ 2.8 GHz processor. For experimental purposes, a set of different descriptor extraction methods (i.e. header existence, image and audio analysis) will be used, and their combination will lead to the final assessment results. In general, the average system response time is expected to be proportional to the complexity of the objects being tested.

Timing and synchronization: no timing issues are involved in the current process, since each modality (i.e. text, image, audio, etc.) can be processed asynchronously on different processors, and the final result will be based on the fusion of these partial results.
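The asynchronous per-modality processing with a final fusion step can be sketched with concurrent.futures; the three analyzers, their scores and the mean as fusion rule are placeholders, not the actual CUbRIK analysis components:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_text(page):    return 0.8   # placeholder per-modality
def analyze_images(page):  return 0.6   # accessibility scores in [0, 1]
def analyze_audio(page):   return 0.9

def accessibility_score(page):
    """Run each modality analyzer concurrently, then fuse the results."""
    analyzers = [analyze_text, analyze_images, analyze_audio]
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda f: f(page), analyzers))
    return sum(scores) / len(scores)  # simple mean as the fusion rule
```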

Non-functional Requirements: •

The system will require users to log in, so as to load their accessibility profile (disabilities, elderly and situated disabilities, etc.), before performing the accessibility analysis.

The system will adapt the accessibility scores to each user's profile and display the final accessibility relevance factor as the final decision score.

The system will provide administrator functionality for setting permissions.

Where appropriate, the module shall be able to generate automated help wizards and error messages in case of system malfunctions and/or user mistakes.

A help menu shall exist to help the user understand the operation of the system and guide them through the process.

The system will provide diagnostic messages in case of unsuccessful or uncertain operations.

All software tools developed within the project will be released under an open source license.

All modules' components will be implemented in a modular, open-source system architecture.

The module will be based on a layered solution with a high level of encapsulation of tools to assure the maintainability of the infrastructure with future upgrades. •

The query result page will be able to switch between two functional modes: o a simple query-result display mode and o an interactive mode, where each registered user will be able to provide relevance feedback with respect to the accessibility level of the web page.

Success criteria: •

The accepted result for the user should lie within the first 10 query results (rank-10).

The processing time should be <~5 secs for each query.

(Relevance) Feedback should be collected for at least three disabilities and for at least 20 sites.

The accessibility score of each website should be derived from at least 50% of the objects it contains; these objects should be either automatically processed or evaluated via crowdsourcing techniques.

4.7.4

Release planning

The “Accessibility aware Relevance feedback” H-Demo (R3-M24) will be incorporated in the final CUbRIK “History of Europe” application (R5-M36).



4.8

Likelines

4.8.1

Application context

Description of application context: The LikeLines demo could fit in application contexts like: •

General online video websites, providing fragment-level access to videos

Providing analytics to content creators; e.g., explicitly liking particular time points could prove useful for fashion show videos

Summarization of (personal) video collections

Description of data set: A dataset in the form of a video collection. Further details like video type are to be determined.

4.8.2

Technical contribution

Problem Solved: •

Identification of the most interesting/relevant fragments in a video

Description of technical contribution: •

Open tool for collecting large amounts of timecode-level user-feedback data needed for research in the young area of timecode-level access to video

Identification of the most interesting/relevant fragments in a video based on a fusion of: o Natural implicit user interactions with the video (e.g., play, pause, rewind) o Explicit user interactions with the video (i.e., explicitly liking particular time points) o Multimedia content analysis of the video

Use of user interactions as feedback to the multimedia content analysis process

Use of multimedia content analysis as feedback to collected user interactions
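The fusion of implicit interactions, explicit likes and content analysis described above can be sketched as a per-second heat map. The event representation, the weights and the additive fusion rule are assumptions for illustration, not the actual LikeLines algorithm:

```python
def build_heatmap(duration_s, play_spans, likes, content_scores, w=(1.0, 3.0, 2.0)):
    """Per-second interest scores for a video.

    play_spans     -- list of (start, end) seconds the user actually watched
    likes          -- list of time points (seconds) explicitly liked
    content_scores -- per-second scores from multimedia content analysis
    w              -- weights for (implicit, explicit, content) evidence
    """
    heat = [0.0] * duration_s
    for start, end in play_spans:                 # implicit: watched seconds
        for t in range(start, min(end, duration_s)):
            heat[t] += w[0]
    for t in likes:                               # explicit: liked time points
        if 0 <= t < duration_s:
            heat[t] += w[1]
    for t, s in enumerate(content_scores[:duration_s]):
        heat[t] += w[2] * s                       # content analysis evidence
    return heat

# A 10-second video, watched up to second 5, with a like at second 3:
heat = build_heatmap(10, play_spans=[(0, 5)], likes=[3], content_scores=[0.1] * 10)
```

Rendering such a heat map over the player timeline yields the navigable controls described in use case LLload below.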

4.8.3

Requirements and success criteria

Automatic functionalities (R2:M18) •

The system must support at least one form of multimedia content analysis

The system must capture and store implicit and explicit user interactions

The system must be able to aggregate user interactions

The system must be able to fuse the output of multimedia content analysis and user interactions

The output of multimedia content analysis may be used to assess the quality of user interactions

User interactions may be used to gauge the performance of multimedia content analysis

Non-functional requirements •

The system must be open-source

The system must be able to operate on YouTube videos

The system may be able to operate on any video supported through HTML5's video capabilities

Success criteria: The success criteria comprise: •

Main component implemented and publicly available on e.g. GitHub by the end of July, 2012

Feedback collected for at least one (to be determined) video dataset by March 2013: o At least 30 user interaction sessions per video recorded for at least 30 videos in the dataset (at least 900 interaction sessions in total) o Formulated at least 10 hypotheses describing semantics of implicit user interactions based on the collected feedback

4.8.4

Use cases

Name: Adding a video to a system’s collection

Identifier: LLindex

Actors: Administrator or Indexer

Pre-conditions: None

Basic Course of Action: o A video is added to the system’s collection o The system runs various multimedia content analysis algorithms on the video and indexes the output.

Post-conditions: Video and the output of the analysis algorithms is indexed and added to the collection

Name: Loading a video in a system’s collection for consumption

Identifier: LLload

Actors: User

Pre-conditions: None

Basic Course of Action: o User selects a video from the collection (after, e.g., search) o The system loads the video o The system loads previously indexed user interactions o The system loads, if any, pre-computed content analyses o The system aggregates previously indexed user interactions and content analyses o The aggregated information is displayed as a navigable heat map

Post-conditions: The video and the navigable heat map controls are loaded

Name: Watching a video in a system’s collection

Identifier: LLimplicit

Actors: User

Pre-conditions: LLload

Basic Course of Action: o The user watches the video and interacts naturally (e.g., pausing/seeking/etc.) o The system records the implicit interactions and indexes them



Post-conditions: User’s implicit interactions are indexed

Name: Liking a time point in a video

Identifier: LLexplicit

Actors: User

Pre-conditions: LLload

Basic Course of Action: o The user clicks the “like” button while watching the video o The system records the explicit interactions and indexes them

Post-conditions: User’s explicit interactions are indexed

4.8.5

Release planning

The component will be completed by R2: M18

4.9

Copyright-related requirements

One of the requirements for all cases involving the use of A/V content is the consideration of copyright aspects (e.g., regarding right of reproduction, right of communication to the public) for content approval, storage, annotation, transformation, presentation and distribution. This applies not only to the H-Demos described in the previous section, but also to the V-Apps. The approach in the project will be to try to partially automate the determination of whether and how content is processed and used in the platform, using automatic annotation and rules, and user input. The aim is to maximize availability of content for users, while ensuring respect for copyright holders at the same time, especially respect for the rights of users that participate in crowdsourcing and content production processes in CUbRIK. This will be done by implementing components to: •

determine the content status and approval for the system. This will be done by using relevant information (contextual and otherwise) to determine content provenance, authenticity and trust in the source and the content provider;

use and interpret relevant metadata (including CC content licenses and other relevant information) and contextual information, derive permissions for how content (as well as derived information and metadata) should be handled in the system, and communicate those permissions to the relevant system domains (storage, annotation, transformation, presentation, delivery), where they will be interpreted;

communicate with rights holders and crowds to track, complement and modify rights, license and provenance information and resolve possible conflicts. These components are part of the CUbRIK platform architecture. As such, they will also be used by the V-Apps and potentially also by the H-Demos. After R1, more specific requirements will be formulated.
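The derivation of handling permissions from content license metadata can be illustrated with a toy lookup table; the license tags, the domain names and the mapping itself are assumptions for illustration, not the project's actual rights model:

```python
# Illustrative mapping from a few Creative Commons license tags to the
# system domains that may handle the content without further clearance.
LICENSE_PERMISSIONS = {
    "CC-BY":       {"storage", "annotation", "transformation",
                    "presentation", "delivery"},
    "CC-BY-ND":    {"storage", "annotation", "presentation",
                    "delivery"},                       # no derivative works
    "CC-BY-NC-ND": {"storage", "annotation", "presentation"},
    "unknown":     {"storage"},   # hold until rights holders or the crowd clarify
}

def permissions_for(license_tag: str) -> set:
    """Map a content license tag to the permitted system domains."""
    return LICENSE_PERMISSIONS.get(license_tag, LICENSE_PERMISSIONS["unknown"])
```

In such a scheme, content with an unrecognised license defaults to storage only until its rights status is resolved.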



4.10 Summary of success criteria

To conclude this chapter, we summarize the success criteria of the different H-Demos as an indication of their technical contribution to the state of the art in general and to the CUbRIK platform in particular. The summary is displayed in Table 2.

H-Demo

Criteria

Media Entity Annotation

o Quantitative demonstration of improvement of resulting data set quality after applying crowdsourcing and human computation techniques. 





The goal of the first release (R1:M12) is to achieve state-of-the-art performance (in terms of precision and/or recall) in the task of fully automated media entity annotation (i.e. crawling representative images for given named entities). The goal of the second release (R3:M24) is to improve the precision of the automatic media entity annotation pipeline by applying crowdsourcing techniques to low-confidence images and human computation techniques (e.g. creating crosswords) to images with medium confidence. The overall goal is a combined human-computer media entity annotation pipeline that achieves results comparable to those produced by people (i.e. the resulting data set can be considered a gold standard) but with far fewer human resources involved.

o Quantitative demonstration of improvement of precision and/or recall of entity search algorithms. o Quantitative demonstration of multimedia search time and retrieval accuracy using the time aware multimedia indexer. 

The testing aims to compare the method's performance vs. the exhaustive search approach as well as other state-of-the-art multimedia indexers such as LSH or M-tree.

Crosswords

o Game Framework implemented and available to game developers, who can access the Game Framework API and its documentation and build new games on top of it by the end of the project; o At least 1 game implemented on top of the framework by the end of the project; o Feedback collected for the collection of entities: correction of typos in textual metadata (such as name misspellings), correction of relational attributes in metadata (such as an association between an entity and its image, or an association between two entities with a relation like "bornIn"); o Quantitative demonstration of metadata improvement thanks to the crowd contribution. The quantitative metadata improvement is measured against a random subset of entities selected out of the monuments data set and manually checked. If the error rate in the original data set is too low or the errors do not vary widely enough, manual distortions may be introduced for testing and demonstration purposes.

Logo detection

o At least 50 responses per task collected from the crowd o Quantitative demonstration of improvement of precision and/or recall thanks to the crowd contribution o Qualitative elicitation of usability problems in interacting with the crowd on open social networks o Qualitative elicitation of technical problems in deploying asynchronous crowd tasks on top of SMILA

People Identification

o Detection and identification rate of people significantly larger than chance level o Relevance feedback will enhance the chances of identification

News History Pipeline

o Minor success criterion: Find all related news clips (true positives) with minimal wrong (false positive) or missing (false negative) matches. o Major success criterion: Meet minor success criterion and establish the correct temporal sequence relation among the clips

Accessibility aware relevance feedback

o The accepted result for the user should lie within the first 10 query results (rank-10). o The processing time should be <~5 secs for each query. o (Relevance) feedback should be collected for at least three disabilities and for at least 20 sites. o The accessibility score of each website should be derived from at least 50% of the objects it contains; these should be either automatically processed or evaluated via crowdsourcing techniques.

LikeLines

o Main component implemented and publicly available on e.g. GitHub by the end of July 2012 o Feedback collected for at least one (to be determined) video dataset by March 2013:

o At least 30 user interaction sessions per video recorded for at least 30 videos in the dataset (at least 900 interaction sessions in total) o Formulated at least 10 hypotheses describing the semantics of implicit user interactions based on the collected feedback

Table 2 Summary of the Success Criteria

4.11 Next steps

In this chapter we have described the current state of the H-Demo requirements and the features of the CUbRIK platform they will implement. For the H-Demos, the most important next steps entail the implementation of the features planned for R1. After that, a second iteration of requirements specification needs to be done. Apart from feedback from the CUbRIK consortium itself, experts in the field may be consulted. As R2:M18 is close to the release of the first prototype of the CUbRIK platform in M22, we will work towards an alignment of the H-Demo features and requirements with the needs of the two domains of practice.



5.

Feature to H-Demo Mapping

5.1

Feature to H-Demo mapping

In order to get the full picture of the features that will be implemented as part of the CUbRIK platform, we constructed a compendium that describes which H-Demo will implement which features.

Feature

Object detection (Detectable objects: rigid quasi-planar objects) Face detection Face recognition Face characteristics: age, gender, etc. Technical metadata (encoding, bitrate etc.) Error & technical quality issues #beach #city #desert #field #flowers

Media entity Crosswords annotation

Logo detection

UNITN

POLMI

UNITN

People News history Accessibility identification pipeline aware relevance feedback QMUL FRH CERTH

LikeLines

X

X

TUD

X

#greenery #lake #mountain #sunset #winter #snow labels: #sand #water #sky #clouds #sunset #mountain #ground #vegetation #grass #architecture #flowers #snow



Feature

Media entity Crosswords annotation

Logo detection

UNITN

POLMI

UNITN

People News history Accessibility identification pipeline aware relevance feedback QMUL FRH CERTH

LikeLines

TUD

Technical metadata (bitrate, encoding etc.) Speech-to-text transcription (language models: English, German) Local feature detectors Local feature descriptors MPEG-7 features Matching local descriptors (SIFT, SURF) Spatial verification Temporal Segmentation: Shot & scene detection Key frame extraction

X X

X X

X

X X

X

X X

Thumbnails generation from extracted key frames Thumbnails assignment to temporal segments Audio segmentation Speaker's turn Face similarity Face similarity by ID Image similarity by ID


X



Feature

Media entity Crosswords annotation

Logo detection

UNITN

POLMI

UNITN

Concept-based similarity Image similarity by content fingerprint Language detection Character normalization Spelling variant expansion, spellchecking Phonetic normalization Lemmatization Concept and entity annotation

People News history Accessibility identification pipeline aware relevance feedback QMUL FRH CERTH

LikeLines

TUD

X

Synonym, hyponym, and hypernym expansion Named entity extraction: places, events, people Relationship extraction: people to event participation; others Association of images to entities Geo entities extraction from structured resources. Geo concept and geo entity annotation. Geo tagging Geo entities extraction


X

X

X



Feature

Media entity Crosswords annotation

Logo detection

UNITN

POLMI

UNITN

Geo containment relations extraction Other geo-relations extraction Timeseries extraction from multimedia data Concept detection Timed event extraction Keyword search, match in text content Keyword-based external image/video search Keyword search, match in entity repository Entity search in entity repository Image/video-based video search Exact identification / integrity


LikeLines

TUD

X X

X

X X

X X

Perceptual video identification / fingerprinting Perceptual image identification / fingerprinting Geographic coordinate search Calendar date/interval search Time segment access

People News history Accessibility identification pipeline aware relevance feedback QMUL FRH CERTH

X

X



Feature

Result geo-positioning Result graph presentation Timeline presentation Image selection task Image discovery Binary relevance of matching rating Temporal sorting of two items

Media entity Crosswords annotation

Logo detection

UNITN

POLMI

LikeLines

TUD

X X X X X X

Confirmation of the temporal sorting of two items Temporal sorting of multiple items X BB creation for object detection BB adjustment for object detection Polygon creation for object detection Polygon adjustment for object detection X Mapping images to entities Query result validation Query result annotation Conflict to user matching Player achievement management Player profile management


UNITN

People News history Accessibility identification pipeline aware relevance feedback QMUL FRH CERTH

X X X X



Feature

Media entity Crosswords annotation

Logo detection

UNITN

POLMI

Automatic creation of entity games out of entity repository X Automatic feedback cross-validation X Bulk injection of content collections

UNITN

LikeLines

TUD

X X

Upload of media elements (with some extension) Crawling of images based on textual keywords

People News history Accessibility identification pipeline aware relevance feedback QMUL FRH CERTH

X X

X

Crawling of videos based on textual keywords Crawling of images based on geoposition or area X Extract generic metadata (containerlevel) Read and translate rights metadata and content licenses to permissions Temporal data storage Data base Feature User signs up explicitly to a CUbRIK App User signs up explicitly to a GWAP


X

X X

X



Feature

User signs in to a GWAP with a SN account (FB, twitter, LinkedIn)

Media entity Crosswords annotation

Logo detection

UNITN

POLMI

UNITN

People News history Accessibility identification pipeline aware relevance feedback QMUL FRH CERTH

LikeLines

TUD

X

User signs in to a CUbRIK App with a SN account (FB, twitter, LinkedIn) User signs in to CrowdSearcher with a SN account (FB, twitter, LinkedIn) Access control based on existing user /component profiles

Table 3 Feature to H-Demo mapping



5.2

Requirements and features implemented in Release 1

To conclude this chapter, Table 4 summarizes the parts of the H-Demos that will be released in R1:M12.

H-Demo

Status on R1:M12

Media entity annotation

The automatic part of the pipeline completed

Crosswords

Automatic functionality: Crosswords must be generated out of entity repository, using entity metadata;

Logo detection

Pipeline completed

People identification

Non-functional requirements completed

News history pipeline

None; first version due in R3:M24

Accessibility aware relevance feedback

None; will be released as part of the CUbRIK platform (R3:M24)

LikeLines

None; pipeline completed in R2:M18

Copyright processing

Status to be decided during the next plenary

Table 4 Release 1 Summary



6.

Conclusion and Outlook

In this deliverable we have provided the foundation for the development of applications that fulfil the purposes introduced in the DoW. We have outlined a number of different user stories and scenarios, in two domains of practice, which are relevant to end-users and have been evaluated with them. We have defined the features that are the basis for the CUbRIK platform and that enable us to build 'highly accurate search engines for different domains' (DoW, Section B1.1.6). This document has specified the initial requirements for CUbRIK V-Apps and H-Demos, with special emphasis on requirements that will be fulfilled by CUbRIK Release 1 (R1) at M12.

The initial V-App requirements are based on a first iteration of the User pull - Technology push - Requirements specification cycle (cf. Figure 1). It has been a co-ordinated effort between Task 2.1: "Domains of Practice" (M1-M6) and Task 10.1: "Application requirements and coordination" (M1-M36). Sections 2 and 3 have described the development and evaluation of user stories in two domains of practice (History of Europe and SME Innovation). Evaluation has taken place both with end-users and with CUbRIK's technical experts. We have stressed that the specification of the requirements of CUbRIK V-Apps is an ongoing, iterative process. As a next step, for both the History of Europe app and the SME Innovation app, feedback will have to be collected from the end-users in order to keep the requirements well aligned with the user needs and to make them more specific and ready for implementation. Finally, evaluation of the requirements in WP10 will help us to become more comprehensive with regard to the non-functional requirements.

For the H-Demos, requirements and use cases have been formulated, as well as a list of features that will make up the CUbRIK platform. This feature list is complete for R1 features and will be extended for the subsequent releases.
The requirements set out in this deliverable provide input to the "pipeline" work packages in CUbRIK (WP5: "Pipelines for Content Analysis and Enrichment", WP6: "Pipelines for Query Execution" and WP7: "Pipelines for Feedback Acquisition and Processing"). Further, this deliverable provides input to WP8 "Components and Tools". Finally, it contributes to defining the global view of the CUbRIK platform and in this way supports Task 9.1: "Architecture design and integration model" (M1-M12) as well as Task 9.2: "Delivery planning and management" (M1-M36). A revised version of this deliverable will be issued in M24. The goal of the revised version is to present a complete inventory of the requirements of the V-Apps and the H-Demos, along with the final list of features that are needed within the CUbRIK platform to fulfil these requirements. To align the progress of this requirements document with the release planning of the V-App prototypes, an intermediate document containing the requirements of the first prototype of the V-Apps will be issued.






8.

Appendix A: EU History Social Graphs



9. Appendix B: Use scenarios developed as candidates for the SME Innovation V-App use scenario

9.1 The usage scenario #1: Packaging equipment manufacturer

The reference framework

The SME manufactures equipment for the packaging industry, with a focus on the food industry. The interest of the SME is to collect information to innovate its packaging equipment for coffee capsules (like Nespresso), as well as market information on the coffee market. The company is interested in the following:

1. Technology information to support innovation: the interest is to check new features/machinery that can improve the performance of their equipment, for example checking new ways to perform hot welding, or finding new materials for the coffee capsule. There is also specific interest in monitoring patent databases, in particular checking the drawings of the patents.

2. Market information to support market analysis: the main interest is to gather relevant information on how the market evolves, in which geographical areas there are more opportunities, what the customers' needs are, and the competitors' activities.

As a specific side requirement, the company would like the system to be directly connected with the company's Document Management System (Microsoft SharePoint), to use the already available documents/information as source data for the searches and to instruct the system for new searches, as well as to be able to integrate the relevant information retrieved within the DMS in use.

The use case of the "CUB SME App" for technology intelligence

The specific innovation case: The industrial sector of packaging machines needs to overcome problems due to the hot plate welding process (in which a hot plate melts the edges of two plastic pieces, which are then pressed together to form a permanent bond). This is currently the most used welding system in the packaging industry (about 80% of the packaging sector). Current technologies present drawbacks in terms of efficiency (heat dispersion), precision (inhomogeneous temperature of the hot plates), and adaptability (long set-up times and trial and error to configure the equipment for different films). The main scope of the innovation is to find information that can support the definition of a new type of hot plate welding equipment.

The usage of the CUB-SME App: The user provides CUB-SME with a set of information on the hot welding process and the machine currently in use. In particular, the information provided is:
• A simple model of the process that identifies the "primary function" of the equipment, following a TRIZ (Theory of Inventive Problem Solving) modelling approach, as well as the components and subcomponents performing primary and secondary functions (decomposition of the system into a tree of mechanisms and functions);
• Pictures of the equipment, components and subcomponents;
• Videos of the equipment, components and subcomponents;
• Drawings of the equipment, components and subcomponents.
The CUB-SME App will be able to acquire this information (documents, videos, images, etc.) from the user's available documents in a semi-automatic way, as well as guide the user through providing the information needed to perform the searches, including lists of data sources already known to the user that may be of interest (channels for equipment manufacturers, online magazines, industrial association web sites, competitors' web sites, etc.). The CUB-SME App will then perform the following activities:

• Support the user in keyword definition/expansion;
• Provide the user with ranked links to videos, images and drawings that may be relevant for the equipment under study;
• Provide ranked images and videos;
• Provide the drawings of patents similar to the drawings of the equipment (or components or subcomponents) that the company wishes to innovate;
• Provide videos and news that could be relevant for the technology development, the welding equipment market, etc.;
• Perform analysis on the retrieved information to detect a set of "tools" that can perform the same "action" as the current system under innovation. In this case, one of the components of the welding equipment is the iron sheet that is heated to perform the welding process: the "tool" is the "iron sheet", the "action" is "welding". The CUB SME App will be able to provide the user with information (images, videos, other content) on other "tools" that perform the same action ("welding"), to suggest possible innovations, for example identifying other mechanisms that can perform the welding process with less heat.
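The tool/action analysis described above can be pictured as an inverted index from actions to the tools known to perform them. The following is a minimal illustrative sketch, not part of the CUbRIK specification; the class name and the welding examples are assumptions for demonstration only.

```python
# Hypothetical sketch of the TRIZ-style "tool/action" lookup described above.
# The index structure and the example entries are illustrative assumptions.
from collections import defaultdict


class ToolActionIndex:
    def __init__(self):
        self._by_action = defaultdict(set)

    def add(self, tool, action):
        """Register that a tool performs an action."""
        self._by_action[action].add(tool)

    def alternatives(self, action, current_tool):
        """Other tools performing the same action, i.e. innovation candidates."""
        return sorted(self._by_action[action] - {current_tool})


index = ToolActionIndex()
index.add("iron sheet", "welding")        # the current hot-plate component
index.add("ultrasonic horn", "welding")   # candidate alternatives (examples)
index.add("laser head", "welding")

print(index.alternatives("welding", "iron sheet"))
# ['laser head', 'ultrasonic horn']
```

In a real deployment the index would be populated from the retrieved multimedia metadata rather than hand-entered pairs.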

The use case of the INNEN-CUbRIK app for market analysis

The specific innovation case: The company's core business is to manufacture equipment for coffee capsules and coffee pods. The company needs to understand how the market is evolving, answering market/technology questions such as:

Figure 28: Coffee Capsule (left image) and Coffee pods (right image)

• What are the market trends in the usage of capsules versus pods?
• Which capsule colour/design is most requested, and in which locations?
• Are there new capsule technologies (e.g. "soluble material packaging technologies")?
• Is the overall market for capsule coffee machines growing or shrinking? And where?
• Are there patents, papers or new technologies of interest?



The usage of the CUbRIK SME INNEN App (CUB-SME App): The user provides CUB-SME with a set of information on the coffee capsule market and technology. In particular, the information provided is:
• A simple model of the process that identifies the "functions" of the capsule, its attributes, the names of competitors, the alternative products (for example, coffee pods), etc.;
• The attributes of the coffee capsule (colour, weight, etc.) and the specific customer requirements;
• Pictures of the capsule;
• Videos of the capsule;
• Drawings of the capsule.
A search launched on Google Videos retrieves the first results shown in Figure 29. Even from this very simple search, a set of interesting information can be gathered:
1. New machines have been launched that can use two coffee capsules at a time. Is that also possible with pods? If not, this is certainly a good argument for focusing on coffee capsules rather than coffee pods.
2. Timing trends: are videos on coffee capsules becoming more frequent over time, or less frequent (no longer interesting)?
3. Geography: where is the coffee capsule market spread? Where are capsules preferred, and where pods?

Figure 29: Result of Google Video search for "Coffee Capsule"

A search on Google Images with the keywords "coffee capsule drawings" returns, among several results, two images of interest: one from www.patentsonline.com (left) and an image of a patent for a coffee capsule piercing machine (right).

Figure 30: Result of Google search over images for “Coffee Capsule Drawings”



Searching for similar images, or being able to search for images similar to the machines produced by the SME, is highly interesting. The CUB-SME App will be able to acquire information (documents, videos, images, etc.) from the user's already available documents, as well as provide a simple guide that asks the user for the information needed to perform the search. Among the information retrieved there are also lists of data sources already known to the user that may be of interest (magazines specialised in coffee, industrial association web sites, competitors' web sites, etc.). The CUB-SME App will then perform the following activities:

• Support the user in keyword definition/expansion;
• Provide the user with links to videos, images and drawings that may be relevant for the equipment under study;
• Provide ranked images and videos;
• Provide the drawings/images of patents and other documents similar to the drawings/images of the coffee capsule equipment (or components or subcomponents) that the company wishes to innovate;
• Provide videos and news that could be relevant for the technology development, the welding equipment market, etc.
Moreover, the CUB-SME App will be able to support the user in performing specific analyses, in particular:

• Analyse the geographical usage of the keywords "pods" versus "capsule" to determine which markets are more inclined towards one product or the other;
• Applying the Technology Roadmap Management methodology (see Annex), evaluate the attributes of "coffee capsule" that matter to the user, determine where (spatial dimension) such attributes are requested, and identify the trend over time. For example, the CUB-SME App could provide the SME with the following information:
  - the capsule attributes most requested, for example weight, colour, reliability, the material from which they are made (recyclable or not, etc.);
  - where such attributes are most requested (which country/market);
  - whether such attributes change over time (trends);
  - information on patents for new coffee capsule equipment (drawings in patent databases, videos, etc.);
  - information on the overall coffee market, from news channels and from videos and images on association web sites or in online magazines.
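The geographical "pods versus capsule" analysis above amounts to counting keyword mentions per country in the retrieved content. The sketch below is an illustrative assumption, not CUbRIK code: the document structure and country tags are made-up stand-ins for the metadata of retrieved items.

```python
# Illustrative sketch of the "pods vs capsule by geography" analysis.
# The documents and country tags are invented stand-ins for retrieved content.
from collections import Counter


def keyword_share_by_country(documents, keywords):
    """Count keyword mentions per (country, keyword) pair."""
    counts = Counter()
    for doc in documents:
        text = doc["text"].lower()
        for kw in keywords:
            counts[(doc["country"], kw)] += text.count(kw)
    return counts


docs = [
    {"country": "IT", "text": "New coffee capsule machines on show"},
    {"country": "IT", "text": "Capsule sales grow; pods decline"},
    {"country": "DE", "text": "Coffee pods remain the favourite; pods everywhere"},
]

counts = keyword_share_by_country(docs, ["capsule", "pods"])
print(counts[("IT", "capsule")], counts[("DE", "pods")])
# 2 2
```

Comparing the per-country counts for the two keywords gives a first indication of which markets lean towards capsules and which towards pods.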



9.2 Usage scenario #2: SME in the fashion industry

The reference framework
The SME operates in the fashion industry. It can be either an SME producing its own products, or an SME that distributes the products of manufacturers (the latter are SMEs that, depending on the "feeling" they have for the market trend, buy products from producers and distribute them directly to commercial shops or to the final user). The company is interested in the following:
1. Market information to support the decision on which products to produce/acquire/distribute; here the main interest is to gather relevant information on the trends of the season, which colours/shapes are more in line with youth trends, the possible prices, etc.
2. Technology intelligence: if the SME is a producer, it could also be interested in information regarding new equipment for the dressmaking process, new materials, etc. A technology intelligence process on such items could be of interest for its innovation processes.

The use case of the INNEN-CUbRIK app for market analysis

The specific innovation case: The company's core business is to produce and distribute summer clothes for women. The company needs to understand the trends in the sector in order to decide its own production, and which dresses to acquire for distribution:
• Which colours does the market favour?
• Does the market prefer trousers or skirts this year?
• Closed or open shoes?
• At what price?
• Etc.

The usage of the CUbRIK SME INNEN App (CUB-SME App): The user provides CUB-SME with a set of information on the market. In particular:
• Images of dresses that the user believes are in line with market trends;
• Videos of fashion industry events;
• Drawings of new dresses created by the SME, to check for similarities and colours, to get ideas, etc.
As an example, a Google Video search for "skirt colour summer 2012" produced the following results:

Figure 31: Google Video search results for “Skirt colour summer 2012”



In a search for Images in Google with the same keywords, the following results are shown:

Figure 32: Google Image search results for "Skirt colour summer 2012"

As fashion is also driven by so-called "cool & trendy" people who influence how others dress, the system could also help identify what "cool people", such as famous actors, are wearing. This can be done by searching directly for what famous people are wearing; for example, the following images appear when searching for "Jennifer Aniston skirt" or "Britney Spears trousers":

Figure 33: Google Image search results for "Britney Spears trousers"



Figure 34: Yahoo Image search results for "Jennifer Aniston Skirt"

As can be seen from the above search results, some "fashion hints" could be taken from the results for "Jennifer Aniston skirt", while not much could be derived from the "Britney Spears trousers" search. Information could also be taken from videos, for example by searching for videos of cool people (see below the videos for "Jennifer Aniston skirts"), or by searching videos of "cool events", such as the American Music Awards 2011. The following provides a view of the above searches performed with Yahoo and Google:

Figure 35: Google Video search results for “Jennifer Aniston Skirt”



Figure 36: Google Video search results for "American Music Awards 2011"

As can be seen, interesting information could be retrieved: for example, the skirts worn by Jennifer Aniston during her appearance on the David Letterman show (one of the most popular shows in the US) could drive women's style, as could some of the "red carpet" styles worn by actors and musicians during the American Music Awards 2011. The CUB-SME App should also be able to search for relevant news on such "trend-driving" people in news channels, as well as in fashion magazines. The search can also be done by looking at web sites that already collect important information from all over the world, for example http://impresodigital.el-nacional.com/suplementos/2011/12/29/ or http://impresodigital.el-nacional.com/suplementos/2011/12/08/ (from Venezuela). The CUB-SME App will be able to acquire information (documents, videos, images, etc.) from the user's already available documents/images/videos, as well as provide a simple guide that asks the user for the information needed to perform the search. Among the information retrieved there are also lists of data sources already known to the user that may be of interest (magazines specialised in the fashion industry, news channels, competitors' web sites, etc.). The CUB-SME App will then perform the following activities:

• Provide the user with ranked videos, images and drawings;
• Provide videos and news that could be relevant for the market analysis under study;
• Gather information from social media channels and news channels, and provide insights on the age and location of people in relation to their dress preferences;



• Detect the most trendy styles for the season: skirt or trousers? Long skirt or short skirt? Which colour?
• Find specific attributes of dresses that could be important but initially unknown to the user (i.e. not indicated in the keywords, videos or images provided), for example: skirts with "strings", trousers with "horizontal pockets", etc.;
• Provide information on dresses worn by "cool and trendy people" previously defined by the user;
• Others...

9.3 Examples of specific usage scenarios of the CUB SME App

Example #1: User working for an SME in the fashion industry
John works for a small company in the fashion industry; he is responsible for deciding which products to buy and then distribute for the summer season. Walking around the city, he notices some shops showing short skirts in very strong colours. He takes a picture of one of them with his mobile phone and sends it to the CUB SME App, launching searches for:

• Similar images with such skirts;
• Videos where such skirts are worn;
• Images in fashion magazines;
• Searches on social communities to check what is said about such strongly coloured skirts.
When he goes back to the office on Monday, the CUB SME App shows him on his dashboard (with an alert in his mailbox) the results of the searches, as a ranked list of videos, images, etc. Moreover, the system provides a set of clustering options to aggregate the results by time, location, type of data source, etc. Once he has checked the results, John judges some of them quite interesting and discards others, and asks the CUB SME App to launch a new search based on his choices. The new search returns another set of multimedia content. At that point, John checks the retrieved content and decides to ask the system to perform some analyses:

• Which are the most used colours in the images and videos found?
• Who are the target users (young women, older women, etc.)?
• In which locations are certain colours most used (geographical coverage)?
• Have the "cool and trendy people" that John previously indicated to the system been wearing such colours?
The system provides the results; John checks them and decides that the strongly coloured skirts are a good choice for the year. At that point, John finds out that none of his suppliers has such skirts available, so he asks the CUB SME App to find all producers of skirts close to his location that have skirts similar to the ones he believes will be trendy this year. The CUB SME App retrieves a list of companies that have such skirts on offer, ranked by the similarity of the products they offer to the skirt images that John provided to the system.

Example #2: User working for an SME in the manufacturing industry
Roberto works for a company producing umbrellas. He is looking for innovations that could overcome the drawbacks of current portable umbrellas (unreliable, easily broken, short life cycle, not resistant to strong winds, etc.). He decides to use the CUB SME App to check possible innovations. First, he is guided by the application to decompose his equipment into a tree of functions and mechanisms, as follows:



Figure 37: Umbrella decomposition into a mechanism and function tree

Roberto also provides the CUB SME App with a set of images of components and subcomponents, as well as drawings. Roberto is then guided through the definition of his market, providing a list of quality characteristics of the product (weight, dimensions, etc.) and of customer requirements (reliability, strength, portability). He is also asked to provide information on the market (also by using information already available within the company), such as the types of competitors, the names of competitors, the types and names of clients and suppliers, etc. The CUB SME App contains a module able to acquire documentation relevant to the users' innovation processes (such as relevant images, drawings of patents, product brochures), which the system ingests in order to start building knowledge of the company's specific sector of interest. Also, to allow the system to detect trends and attributes, Roberto is guided through providing a further list of information regarding the attributes, customer requirements and quality characteristics of the product. Roberto then launches the CUB SME App to start looking for multimedia content that could support the innovation process. The system retrieves:

• Images of equipment similar to the ones provided;
• Patents with drawings similar to the ones provided;
• Videos.
More specifically, the system retrieves:
• A set of drawings from patents and scientific papers that can be interesting, with a text synthesising the patent/paper;
• Ranked images and videos relevant for Roberto.
Roberto evaluates the results, discards the non-relevant ones, and asks for a second search. On the retrieved results, Roberto asks the system to perform a set of analyses:
1. Indicate a set of the technological attributes important for innovation, and the related trends.
2. Provide the drawings of the patents most relevant for him.
3. Provide an aggregated view of the "mechanisms" that could perform the same "function" as the system under study.
Moreover, Roberto asks the system to alert him to any new patent, paper, or other information that is relevant for the product.
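The function/mechanism decomposition Roberto performs (Figure 37) can be represented as a simple tree of typed nodes. The sketch below is an illustrative assumption, not CUbRIK code; the umbrella node names are invented examples of such a decomposition.

```python
# Minimal sketch of a function/mechanism decomposition tree, as used in the
# TRIZ-style modelling step. The node names are illustrative umbrella examples.
class Node:
    def __init__(self, name, kind, children=None):
        self.name = name          # e.g. "canopy"
        self.kind = kind          # "mechanism" or "function"
        self.children = children or []

    def functions(self):
        """Collect all function nodes in the subtree, depth-first."""
        found = [self.name] if self.kind == "function" else []
        for child in self.children:
            found.extend(child.functions())
        return found


umbrella = Node("umbrella", "mechanism", [
    Node("canopy", "mechanism", [Node("shelter from rain", "function")]),
    Node("ribs", "mechanism", [Node("keep canopy open", "function")]),
    Node("shaft", "mechanism", [Node("hold canopy up", "function")]),
])

print(umbrella.functions())
# ['shelter from rain', 'keep canopy open', 'hold canopy up']
```

Listing the function nodes this way gives the "actions" against which alternative mechanisms can later be searched.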



10. Appendix C: Example requirements related to SME Innovation V-App

These requirements were generated in parallel with the candidate use scenarios listed in Appendix B. They will continue to provide input to the next iteration of requirements definition for the SME Innovation V-App, where they will again be combined with guidance from the technical partners of the CUbRIK consortium concerning technical feasibility.

General requirements of the application
In the following, a list of the main general requirements for the application is provided (the list is not meant to be structured at this stage), with a degree of importance (1 = low, 5 = high) and some explanatory notes:

Multilingual (importance: 4)
The importance of a multilingual product depends on the target user. For manufacturing or technology service companies, multilinguality is less important, as their main sources of information are in English (e.g. patent databases, scientific papers). For companies operating in mass markets, such as the fashion industry, multilinguality is a very important aspect. A company operating in Italy needs a view of what is said on Italian news channels, Italian social media channels, etc., and the language is certainly Italian. Moreover, as mass markets are mostly described in each country's own language, analyses such as geographical and time trends are also better realised with a multilingual system.

Embedded / integrated in SMEs' processes and systems (importance: 5)
Experience shows that it is very difficult to get SMEs to spend time and effort on specific systems that require a long learning period. Clearly, there are differences between SMEs with 10 to 20 employees and those with 200 employees. In any case, it is important that the CUB SME App can be used in a simple way and be integrated as much as possible into the daily operation of SMEs. This can be done, for example, by pushing information to email clients, and by integrating the CUB SME App with systems currently used by SMEs (CAD/CAM or PLM/PDM) as well as with Document Management Systems.

Modular system with different levels of usage possible (importance: 5)
The differences in size within the SME domain (companies with up to 10 employees versus companies with 200 employees) and the differences across sectors make it important that the app can be used in different modes: from a very simple system providing a set of basic functionalities, expected to be used by very small SMEs, to a more complex system that might require some modelling of the SME's product but is able to provide more sophisticated features.

Very simple to use, browser-like interface (importance: 4)
Related to the previous requirement, the SME needs an intuitive system that does not require a long learning process. The CUB SME App should therefore have a user interface that even SME employees without high IT proficiency can easily use. Ideally the CUB SME App should provide interfaces reminiscent of well-known web sites, such as Amazon's "recommended books", etc.

Easy to adapt to different sectors (importance: 5)
This requirement is not for the final user, but for the exploitation and impact of the results. It is important to avoid past market failures where very sophisticated systems were developed, but the effort needed to customise them for each user made the costs unaffordable for SMEs, substantially limiting their impact. The CUB SME App is supposed to be usable in different sectors (e.g. manufacturing, fashion) with very little customisation effort.

Collaborative environment (importance: 4)
Innovations are carried out as collaborations between (mostly) technical and marketing staff in the company. The CUB SME App should provide relevant ICT-based support for carrying out these activities, and should be multi-user capable and role-based.

Process orientation and project management support (importance: 3)
Innovations are carried out in SMEs as processes that should be flexible and provide guidance for the users. As innovation processes in SMEs are quite unstructured, the CUB SME App should support process orientation by guiding the user through the steps, but without putting too many restrictions on the sequence of activities. Proactive guidance by the system, followed by a task-based approach, is probably the best approach. At the same time, innovation processes are individual projects that are not carried out frequently. Therefore the CUB SME App should offer functionalities for project support and project management. It may be possible to have multiple innovation projects running concurrently, so multi-project support is desired.

Table 5: General requirements of the CUbRIK SME App

Description of the main functions of the application
In the following, a list of the main functions to be performed by the CUB SME App is presented, with a degree of importance (1 = low, 5 = high). The list is not meant to be exhaustive or structured at this stage; it is a list of functionalities intended to start providing a clear view of the application.

Monitoring web sites of interest (importance: 4)
This means continuous monitoring of the web sites of interest to the user, providing an alert whenever new content is considered relevant (relevancy to be defined later on). New web pages to monitor can be added whenever the user adds new sources (either because the CUB SME App finds new interesting sources, or because the user simply adds them). Monitoring includes constant crawling of web sites (such as patent databases and scientific paper repositories for technology intelligence) where new content can be uploaded. This functionality is described in the following point.

Identify web sites and related multimedia content of interest for the user (importance: 5)
This means the set of actions necessary to crawl, index and search information on the web in order to find relevant information for the users. The CUB SME App should allow actions to be performed in the back end, meaning constant crawling and searching to identify interesting web sites and multimedia content not requested by the user, as well as searches triggered by specific user requests. As output, the system should on the one hand proactively propose web sites and documents of interest to the user (with a frequency decided by the user), and on the other hand allow the user to launch specific searches on the web. As input, the system should have one or more of the following:
- the sources of information (web sites) provided at the initial stage by the user;
- a source of information, such as a specific paper or patent, for which the user wants to find similar relevant documentation (treating similar concepts);
- a set of concepts (keywords, images, videos) related to the user's technology/product (also provided at the initial stage).
As output, the system will provide the user, for example on a daily basis, with:
- suggestions of web sites (links and short descriptions) that could be of interest;
- suggestions of multimedia content to download, with short descriptions;
- downloaded multimedia content (with short descriptions);
- downloaded patents with direct links to their drawings (with short descriptions);
- etc.
Searches should also be launched directly by the user through keywords, images and videos. The search results are presented in a ranked way, to prioritise the most interesting results. The system might work in a similar way to Amazon's "recommendations", where as a user reads something, the system suggests other books that might be of interest. Users will be able to mark retrieved content as interesting or not, and by doing so train the system, which will then search for more relevant documents and web sites. Once web sites have been recognised as interesting, the user can decide to add them to the web sites to monitor, and they will also be added to the internal knowledge base of the CUB SME App.

Suggesting researches (importance: 3)
In the technology innovation domain, the knowledge necessary to perform the innovation very often lies outside the SME's internal knowledge. This means that the CUB SME App should be able to suggest new searches that expand the knowledge of the SME. For example, an innovation in a manufacturing SME could deal with new materials, which are certainly outside the internal knowledge of the SME. The CUB SME App could suggest to users ways to enlarge the scope of the search, expanding the user's knowledge. This functionality could also act in the background, directly launching searches and filtering the results, which, if interesting, will be presented to the user.

Gathering information to instruct the system (importance: n.a. — it is not an outcome; it is needed to make the other functions work)
The CUB SME App will gather relevant information from the user in order to perform the search and extraction functionalities. Such information can be gathered by having the system take documents, images and videos as input, and by having the system guide the user in providing information (modelling) about products (technological systems). The CUB SME App will provide very simple and intuitive functionalities that allow the user to upload a set of documents considered relevant (such as brochures with product images, drawings, images, pictures); such information might be relevant to general concepts (e.g. related to competitors or overall market trends), or relevant to one specific product only (e.g. images of one specific piece of equipment, and therefore not relevant for other products). In this sense, the CUB SME App will be able to label the information provided by the user as either information relevant for the company independently of a single product (and therefore always an object of attention), or "project-specific" information, relevant for one specific product only.

Filtering the multimedia contents (importance: 4)
The system should allow users to search among the multimedia content present in the company knowledge base (not only content retrieved by the CUB SME App, but also documents uploaded by the user, newsletters the user receives, or images present in the company document management system). The filtering actions should extract the content most relevant to the specific search launched by the user. The system returns a filtered and ranked list of content, providing a title and a short text description, and allowing the user to fine-tune the search or visualise the full content identified.

Finding patterns in multimedia contents (importance: 4)
The CUB SME App will be able to find patterns and aggregate them to provide the user with possible technological innovations, as well as with attributes of technological systems and products that could be interesting for users. For technology intelligence, the system could detect patterns in drawings of patents and in images and videos of equipment, as well as the related attributes and quality characteristics (colours, dimensions, performances, sounds, etc.). For market analysis, the patterns will mainly focus on the attributes of the products and the related customer requirements (e.g. shape of a skirt, colour, materials, etc.). This should also take time and space into account, to provide information on where certain attributes are most requested and with which trends. Ideally, the pattern analysis could also determine the target audience of certain attributes: for example, whether certain colours are preferred by younger or older women, etc.

Table 6: Functionalities of the CUbRIK SME App



Detecting trends and attributes for product innovation with TRM and TRIZ
This is done using the methodology developed by INNEN in previous projects, joining Technology Roadmap Management (the methodology proposed by Sungjoo Lee, Seonghoon Lee, Hyeonju Seol, and Yongtae Park in "Using patent information for designing new product and technology: keyword based technology roadmapping", R&D Management, 2008) with TRIZ modelling, synthesised in the following:

TRM contents categories    | Innovation contents categories
Market Attributes          | Customer Requirements in House of Quality
Product Attributes         | Quality Characteristics in House of Quality
Technology                 | Mechanism in Mechanics/Function Tree
-                          | Functions / Action

Table 7: Matching of TRM content categories and Innovation content categories

Of the methodology proposed by Sungjoo Lee, Seonghoon Lee, Hyeonju Seol, and Yongtae Park, the following steps are taken into consideration and adapted to multimedia content:
1. The elicitation action: a first phase in which the contents of the research the user would like to follow are defined.
2. The co-word analyses, in particular:
   o Keyword (contents for multimedia searches) portfolio map, to define trends in technologies: this is done by mapping the search results based on the frequency with which certain contents are found in the selected data. In particular, the absolute number of keywords/contents found and their rate of increase over time are taken into account. The following picture shows how the results obtained can be categorised:

Figure 38: Keywords (contents) portfolio map

   o Keyword (contents) relationship map: this establishes relationships among keywords across the different layers of keyword (contents) categories (technology layer, product attributes, and functions), to provide information on which product attributes could be related to or performed by which technologies, or to define which attributes are the most interesting for certain products. The figure below represents such a possible mapping.



Figure 39: Keywords (contents) relationship map

3. Keyword (contents) evolution map: to identify patterns of evolution in patents or other documents with a known date of production, related to specific attributes or technologies. The results are like the ones reported below, to which a further row needs to be added for the function layer:

Figure 40: Keywords (contents) evolution map

4. Finally, a keyword (content) analysis can be performed to verify all the relations among keywords/contents.
The suggested approach for the CUB SME App is to take some of the steps of the TRM technique to support SME users in performing analyses on market and technology data, using multimedia content.
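The keyword portfolio map described in step 2 places each keyword in a quadrant according to its absolute frequency and its rate of increase over time. The sketch below illustrates that idea under stated assumptions: the thresholds, quadrant labels and sample counts are invented for demonstration and are not taken from the TRM methodology or the CUbRIK specification.

```python
# Illustrative sketch of the keyword (contents) portfolio map: each keyword is
# placed in a quadrant by its absolute frequency and its rate of increase.
# The thresholds, labels and sample counts are assumptions for demonstration.
def portfolio_quadrant(freq, growth, freq_threshold=50, growth_threshold=0.0):
    high_freq = freq >= freq_threshold
    rising = growth > growth_threshold
    if high_freq and rising:
        return "key technology (frequent and growing)"
    if high_freq:
        return "mature technology (frequent, not growing)"
    if rising:
        return "emerging technology (rare but growing)"
    return "declining or marginal"


def growth_rate(counts_by_year):
    """Average year-on-year change of a keyword's frequency."""
    years = sorted(counts_by_year)
    deltas = [counts_by_year[b] - counts_by_year[a]
              for a, b in zip(years, years[1:])]
    return sum(deltas) / len(deltas)


counts = {2009: 10, 2010: 25, 2011: 70}   # e.g. mentions of one keyword per year
g = growth_rate(counts)                   # (15 + 45) / 2 = 30.0
print(portfolio_quadrant(counts[2011], g))
# key technology (frequent and growing)
```

Running this over every keyword in the selected data would populate the four quadrants of the portfolio map sketched in Figure 38.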


CUbRIK Requirements Specifications  

User requirements by domains of practice and system requirement specifications for CUbRIK Release 1