TAUS Review#2 - The Quality Issue - January 2015


TAUS REVIEW of language business and technology

The “Quality” Issue Reviews of Language Business & Technology in: • Europe: Digital Market • Translation in Africa • Asia • Americas: Transparency in Translation

Translation Quality: The Proof of the Pudding

Plus... columns by Nicholas Ostler, Lane Greene, Jost Zetzsche and Luigi Muzii

January 2015 - No. II



People-powered translation at machine-like speed. Translation should be fast, whether you’re ordering 1 word or 1 million, for yourself or for enterprise. Our platform, designed for speed, quality and high capacity, puts over 12,000 skilled translators right at your fingertips, both online and through the Gengo API.


Experience how fast for free at gengo.com/taus


Magazine with a Mission

How do we communicate in an ever more globalizing world? Will we all learn to speak the same language? A lingua franca, English, Chinese, Spanish? Or will we rely on translators to help us bridge the language divides? Language business and technology are core to the world economy and to the prevailing trend of globalization of business and governance. And yet, the language sector, its actors and innovations do not get much visibility in the media. Since 2005 TAUS has published numerous articles on translation automation and language business innovation on its web site. Now we are bundling them in TAUS Review, an online quarterly magazine.

TAUS Review is a magazine with a mission. We believe that a vibrant language and translation industry helps the world communicate better, become more prosperous and more peaceful. Communicating across hundreds – if not thousands – of languages requires adoption of technology. In the age of the Internet of Things and the internet of you, translation – in every language – becomes embedded in every app, on every screen, on every web site, in every thing. In TAUS Review reporters and columnists worldwide monitor how machines and humans work together to help the world communicate better. We tell the stories about the successes and the excitements, but also about the frustrations, the failures and shortcomings of technologies and innovative models. We are conscious of the pressure on the profession, but convinced that language and translation technologies lead to greater opportunities.

TAUS Review follows a simple and straightforward structure. In every issue we publish reports from four different continents – Africa, Americas, Asia and Europe – on new technologies, use cases and developments in language business and technology from these regions. In every issue we also publish perspectives from four different ‘personas’ – researcher, journalist, translator and language – by well-known writers from the language sector. This is complemented by features and conversations that are different in each issue.

The knowledge we share in TAUS Review is part of the ‘shared commons’ that TAUS develops as a foundation for the global language and translation market to lift itself to a high-tech sector. TAUS is a think tank and resource center for the global translation industry, offering access to best practices, shared translation data, metrics and tools for quality evaluation, training and research.

Colophon

TAUS Review is a free online magazine, published four times per year. TAUS members and non-members may distribute the magazine through their web sites and online media. Please write to editor@taus.net for the embed code. TAUS Review currently has about 5,000 readers globally.

Publisher & managing editor: Jaap van der Meer
Editor & publication manager: Anne-Maj van der Meer
Distribution and advertisements: Yulia Korobova

Enquiries about distribution and advertisements: review@taus.net

Editorial contributions and feedback can be sent to:
General: editor@taus.net

Continental reviews of language business and technology:
Africa review: africa@taus.net
Americas review: americas@taus.net
Asia review: asia@taus.net
Europe review: europe@taus.net

Persona’s perspectives of language business and technology:
Translator: translator@taus.net
Research: research@taus.net
Language: language@taus.net
Journalist: journalist@taus.net



Content

Leader
5. Leader by Jaap van der Meer

Reviews of language business & technologies
8. In the Americas by Brian McConnell
12. In Africa by Amlaku Eshetie
15. In Europe by Andrew Joscelyne
20. In Asia by Mike Tian-Jian Jiang

Columns
26. The Journalist’s Perspective by Lane Greene
28. The Language Perspective by Nicholas Ostler
30. The Research Perspective by Luigi Muzii
36. The Translator’s Perspective by Jost Zetzsche

Features
40. Translation in Africa by Simon Andriesen
44. The Proof of the Pudding by Attila Görög

50. Contributors
52. Directory of Distributors
53. Industry Agenda



Leader

by Jaap van der Meer

Hello World! How do we Trade and Communicate?

You could just stay where you are and live from what you have. Not bother anyone, and hope that nobody bothers you. Try it... You starve or get killed. Of all the species living on our planet Earth we – humans – are the most adventurous. We travel, trade and communicate. And that’s how we evolve and prosper, or not… We can only imagine how Columbus started his communication with the Native Americans in 1492. Gesturing, pointing at objects and helplessly speaking his own words. Or the Dutch seafarers who went ashore at Hirado and started trading with the Japanese in the early 1600s. They learned and found common words. Continents got connected and economies started to grow. As long as we communicate and trade, and do this in a fair way, we are fine. But if we are at our wits’ end and at a loss for words, trade loses out and weapons start to speak. War or terrorism seems the only answer if we are not communicating. The second issue of TAUS Review is about the quality of communication across languages or, in other words: Translation Quality.

As long as we communicate and trade, and do this in a fair way, we are fine.

You could say that we have come a long way since the years of discoveries by famous world explorers. Today we are überconnected. We have the internet, smart phones, wearables and soon implants that communicate for us and with us. We are drowning in information and everything is documented and accessible for everybody. Evaluating translation quality has turned into an art and a science. Columbus and Abel Tasman would be flabbergasted if they saw and understood today’s error typology, fluency and adequacy models and tools for the measurement of our communication. Read the articles by Luigi Muzii and Attila Görög about the state of the art in translation quality evaluation: theory and practice.

Evaluating translation quality has turned into an art and a science.

And yet, if you read the articles about translation in Africa by Amlaku Eshetie and Simon Andriesen, you realize that we are not all together on this path of evolution. Despite the efforts of the European colonists to teach all African citizens to speak English and French, the truth today is that most of them don’t. The one million Chinese who settled in Africa in the past decade find themselves stuttering and gesturing with people speaking more than a hundred different languages on local markets across the African continent, just like Columbus did more than 500 years ago. The big global IT companies are happy if they manage to translate at least some of their instructions for use in a handful of African languages.

Let’s face it: measuring translation quality is a luxury, a luxury invented by a professional translation industry that’s looking to justify its costs.

Let’s face it: measuring translation quality is a luxury, a luxury invented by a professional translation industry that’s looking to justify its costs. There is nothing wrong with that as long as we measure the right things. Read the article by Brian McConnell, a typical translation buyer, who questions very clearly whether the way we measure our communications today really makes sense. Just counting grammar errors and typos and not paying attention to the purpose of the communication, the type of information, the budget, the speed, does not make sense. He pleads for industry metrics and benchmarks. Hear, hear! The point is that there is not one quality that fits all needs and purposes. Sometimes – quite often actually – we are back to the very basics of people just trying to understand each other, whether it is in French, Russian, Swahili, Japanese, Bashkir (see TAUS meets Tars) or even English. And that’s where we – contemporaries – are so lucky to have technology that can help us get over the first hurdles: establishing a rapport, getting the gist or the essential information. When it comes to the nuances and critical matters – as both Lane Greene and Nicholas Ostler write in this TAUS Review – there is nothing better than learning to speak another language. There are things that no machine – or human for that matter – can translate very well. Life on Earth for us, adventurous species, is full of risks. A loss of words and a breakdown of communications leads to a loss of trade, or worse, the rattling of sabers and shooting of Kalashnikovs.

Life on Earth for us, adventurous species, is full of risks.

Welcome to TAUS Review of language business and technology, your best companion to manage your risks in life and to optimize your language business.



TAUS Community Discuss topics around translation quality, post-editing, data and translation automation with other professionals in the language industry. Log-in or sign-up on www.taus.net/community

Sign up for the TAUS Post-Editing Course today! Get an official TAUS Certificate + a listing in the Post-editors Directory



Review of language business & technologies in the Americas by Brian McConnell

Transparency in Translation

Hybrid translation systems and translation networks, such as Gengo, have done a great deal of work to integrate professional translation into highly automated systems. This enables software designers and systems to utilize professional translation in much the same way that they consume other cloud computing and information services.

While this enables whole new classes of applications, such as travel listing services where every listing is maintained concurrently in multiple languages, these services pose new problems, especially around the challenge of measuring translation quality. Measuring translation quality has typically centered on technical criteria, such as the BLEU scale for machine translation. These measurement scales have a number of problems, among them:

• Quality is often impossible to measure in an objective way for multiple customers (what is acceptable for one customer may be unacceptable for another).
• Quality of service (turnaround time, number of post-edits required) may be as important as the quality of the translation itself.
• Quality of the workforce employed by the agency is usually a complete unknown to the end customer. LSPs use certifications and testimonials as a global sort of “trust us!” label, but rarely provide any sort of analytic score based on customer feedback.

Quality of Translation Is Relative

While it never hurts to collect quality data across customers, my experience has been that one customer’s assessment of quality is a poor predictor of what to expect. Global quality scores are mostly useful as a filter, for avoiding LSPs that provide exceptionally poor quality output, and for highlighting those that are consistently good. They’re not so useful when ranking work that is neither bad nor excellent. One person’s good is another person’s excellent, and vice versa. What would be more useful is for LSPs to break out quality scores by customer, so other customers can see how they vary by customer type. I’m more interested in knowing how customers within my peer group are ranking work as it flows through a system.

One customer’s assessment of quality is a poor predictor of what to expect.

After all, a company that’s hiring for medical/technical translation will have different criteria than a company hiring translation for a consumer web service. An LSP doesn’t need to identify who its customers are; that’s obviously proprietary information. But labels like “translations for a medical device manufacturer”, “translations for a hotel booking service” or “translations for a car manual” will help determine the type of work being done, and how customer-assessed quality varies by category.

Quality of Service Is Often More Important

Once an LSP reaches a threshold for translation quality, quality of service becomes the key differentiating factor. This is mostly determined by how efficient the LSP is at assigning work, matching translators with projects, and how well designed their systems are. As a customer, I am primarily interested in two things: 1) reasonable turnaround time (not necessarily super fast, but predictable and consistent), and 2) minimizing the amount of re-translation or post-editing that needs to be done after delivery. This sort of information can be measured without requiring the customer to provide subjective feedback after the fact, especially if the LSP has automated much of their workflow management. Turnaround time at various stages in the process can be measured in fine granularity, so the customer can see not just an average, but rather a histogram that shows whether performance is consistent or varies widely. Post-edits are similarly easy to quantify, and should allow the customer to see what percentage of work units need to be touched up.
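As a rough sketch of how such service metrics could be derived from workflow records, the snippet below computes a turnaround-time histogram and a post-edit rate. This is my own illustration in Python, not a description of any particular LSP's system; the job records and the 24-hour buckets are invented.

```python
# A sketch of quality-of-service metrics from workflow records: a turnaround
# histogram (is delivery consistent or wildly variable?) and the share of
# delivered work units that needed post-editing. All data is invented.
from collections import Counter

# (job_id, turnaround_hours, units_delivered, units_post_edited)
jobs = [
    ("J-101", 18, 120, 4),
    ("J-102", 26, 300, 21),
    ("J-103", 70, 80, 2),
    ("J-104", 22, 150, 9),
]

# Histogram of turnaround times in 24-hour buckets.
histogram = Counter((hours // 24) * 24 for _, hours, _, _ in jobs)
for bucket in sorted(histogram):
    print(f"{bucket:>3}-{bucket + 23}h: {'#' * histogram[bucket]}")

# Percentage of delivered work units that had to be touched up after delivery.
delivered = sum(units for _, _, units, _ in jobs)
post_edited = sum(edited for _, _, _, edited in jobs)
print(f"post-edit rate: {100 * post_edited / delivered:.1f}%")
```

Plotted over time, numbers like these show a customer whether performance is consistent rather than just averaged, which is exactly the kind of evidence subjective feedback cannot provide.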

The goal isn’t to create a uniform standard for quality, but to measure many things, and make it easy for the customer to graph and compare the criteria that matter to them.

Each customer will have their criteria for defining success, so the goal isn’t to create a uniform standard for quality, but like we do for web analytics, to measure many things, and make it easy for the customer to graph and compare the criteria that matter to them (and to see for themselves that when you want speed, high quality and low cost, you can generally have any two of the three).

Quality of the Workforce Is Proportional to Quality of Pay

One area where LSPs are notoriously opaque is translator pay. It is difficult, if not impossible, to gauge from the retail per-word rate what the translators doing the work are actually being paid. I view translators as an essential part of our business, and want to make sure that we get the right people for the job, and that they are paid fairly. While I think automation is a great thing, and enables applications that were not possible just a few years ago, I’ve never liked the idea of treating translators like automatons. Language is hard in the same way that programming is hard. People with unique skills deserve good pay, and generally speaking seek out employers that see it this way. One metric I would really like to see in the graphs and reports I get from our LSPs is a breakdown of how much gets passed through to translators, and ideally an estimation of what the translators are making per hour, so that I can gauge if the translators are being paid fairly, and if the LSP is being efficient (not overloading the work with project management or overhead). Why is this important? Because I know that translators resent being treated like commodity labor, and will avoid LSPs who treat them this way, whereas they will work for years with companies that treat and pay them fairly. I’d rather pay a bit more for the work, and know it’s backed up by a happy and stable workforce, especially since the per-word cost of translation is a relatively small component of the cost of software localization (hint: refactoring and maintaining multilingual-ready software costs as much or more as the translation itself).

Marketplaces Are The Future

The features and reports I am describing are the type of thing you need in a good marketplace: a combination of metrics about quality of product and quality of service. So in a way, what I am describing in this article is what LSPs need to do and measure to function like, or participate in, a translation marketplace. Translation marketplaces are likely to play a prominent role in translation and localization going forward. The history of e-commerce provides some good examples of how this evolution is likely to unfold. The Internet has been very good at automating commercial transactions in two areas: small, specialized categories where the cost of sales and project management is high relative to the product or work sold, and at large scale where automation can greatly reduce operating costs. This is a threat for traditional LSPs because their businesses are optimized to produce profits for large, steady customers (exactly the type of customer that can benefit from process automation through a marketplace). LSPs lose money on small, one-off deals (marketplaces like Gengo, Rev and Express It are likely to capture most of that market).

What about translators? They’re really the primary sellers in a translation marketplace.

To compete, LSPs need to provide the same types of metrics customers can get from marketplaces, and to provide the same sorts of integration options that services like Gengo and Cloudwords do (if you don’t provide a web API for customers to hook into, you’re setting yourself up to lose business to providers that do). As a point of reference, our primary LSP invested in these things, and as a result has a solid lock on our business, as things run more or less on auto-pilot, so everybody is happy. What about translators? They’re really the primary sellers in a translation marketplace. In the long run, efficient markets for translation services should benefit translators. An efficient marketplace maximizes revenue opportunity, while minimizing the cut it takes from those revenues. LSPs used to be the marketplace, but in the future, I expect that LSPs will largely become project management companies that provide value on top of what highly automated marketplaces provide.

Brian McConnell is the head of localization at Insightly, the leading CRM service for Google Apps users.


AD

• The ribbon, particularly for new users and others who are not aware of all the settings in the old menu structure • Much more accurate Concordance search results • Speed improvements • Virtual merge and autosave • Improved display filter • Easier access to the various help resources • New TM fields and field values are immediately available • Very stable Nora Diaz Freelance Translator - Mexico

Join the conversation noradiaz.blogspot.co.uk @NoraDiazB #Studio2014

www.sdl.com/studio2014 www.translationzone.com/studio2014 Take it further, share projects with SDL Studio GroupShare www.translationzone.com/groupshare2014

Purchase or upgrade to SDL Trados Studio 2014 today


/sdltrados


Review of language business & technologies in Africa by Amlaku Eshetie

Translation and localization practices in Africa: a closer look through the eyes of a translation professional

In my previous article I gave a general overview of the history, development and context of the translation and localization industry in Africa. The industry developed only recently. It is dominated by a few languages and struggles with the lack of technical knowledge and technological infrastructure.

This article looks more closely at current translation and localization experiences, based on my own five years of experience and on available resources and facts.

Market

The innovation of computer technology and the Internet started, and blossomed, predominantly in English and in English-speaking countries. Continuous and significant developments have taken place in the USA, Western Europe, Canada, Japan, and recently in India and China. Africa used to be considered the ‘Dark Continent’ because a large number of African people were illiterate, the spoken languages were not technology-friendly, investment levels were very low and the economy was feeble. Africa’s status in the industry was non-existent. The few translation companies with a presence in Africa basically owned the market. According to the 2012 and 2013 Common Sense Advisory reports, Africa constituted only 0.27% of the market share and none of the top 100 Language Service Providers were from Africa.

Africa used to be considered the ‘Dark Continent’

However, the present situation is reversing what once seemed unchangeable. Change is emerging in almost every sector – booming construction, technological transformation and innovation, rising literacy levels, growing investment, and so on.


As a result, investors and leading economic organizations around the world are looking at Africa as their near-future niche to expand their businesses into. The population, filled with potential buyers and employees, is a major pull for businesses, companies and organizations of the West and the East. One view that reinforces my statements is the 2012 speech by the UK’s Minister for Trade and Investment, Lord Green, in which he says: “Sub-Saharan Africa is not only a trillion dollar market, but the IMF forecasts it will have seven of the world’s ten fastest-growing economies over the next five years...” Translation and localization is no different: it facilitates more investment. More specifically, document translation services play a great role in attracting foreign investors. Let me cite one translation and one localization example from Ethiopia. One of the world’s largest brewery companies, the British-based DIAGEO, bought a local brewery company and had to buy translation services (my own company, KHAABBA, was one of the service providers) to fit into local laws, rules and regulations, market systems and so on, while increasing its international market share. Similarly, the Chinese telecom company TECHNO Telecom has had to tap into the Ethiopian market by localizing its mobile operator software and applications into the three major local languages – Amharic, Oromiffaa and Tigirgna. Techno is now producing exclusively for the African market. They are present in 12 African countries, including Ethiopia, and have closed their Asian factories. Some African countries, such as South Africa and Egypt, have been able to nurture prominent language service providing companies locally as well as to host multinational LSPs that are capable of providing translation and localization services into tens and even hundreds of African languages. (STAR Group, one of the world’s top 10 LSPs, represented in over 50 locations across the world, is present in Africa only in Egypt.)

The population of Africa is growing and is now over a 7th of the world’s population.

The population of Africa has grown to over 1.1 billion (according to World Population Review 2014), more than a seventh of the world’s population. As globalization becomes a reality, this massive population needs access to information, needs to buy and sell products on the world market, needs professional mobility, and so forth – and this means translation and localization are vital.

Market Players

There are numerous market players in these sectors, most of them perhaps not of much concern for this article. The immediate players are the global actors in the translation and localization industry as well as the ICT sector. Some of them have already started penetrating the African market, though the extent and the share are still very low. I can tell you (from personal experience as well as public information that anyone could access) that Google is localizing its products into over a dozen African languages. Similarly, Microsoft has localized Windows 7, for example, into 10 African languages, including Amharic and Kiswahili. IBM, HP, Toshiba, Ericsson, Siemens, ZTE, and many more multinational ICT companies are the players who have entered (or are considering entering) the African market and essentially need translation and localization. Toshiba once started out with translations in three languages. Nowadays, this number has grown to 24.

Various browsers (Chrome, Firefox/Mozilla, IE, etc.), software and applications (Android, Windows, etc.) and hardware (keyboards, cash register machines, mobile phones, etc.) are all supplied in English from China, Europe and the US, and yet most of the African population does not understand English well. Therefore, we could say that not much has been done and everything is still there to consider.

Constraints

The translation and localization market faces several constraints in Africa. These include the lack of reliable infrastructure, such as electricity supply, internet access and connection capacity, the lack of highly trained technical people, the development level of the languages, cultural issues and, according to Osborne, the lack of means and strategies to exploit ICT. In Ethiopia, for example, there are just a handful of translators/localizers, and they are often not very fluent with or familiar with CAT tools. Not only are they less skilled in using CAT tools, they also do not have as much internet access as required.

Conclusions

As said by many, ‘the future business is Africa’. The human resource is relatively cheap and abundant; the land is vast with an unexploited wealth of resources, again in relative terms, and a growing market. This description of Africa applies to all sectors and industries. The translation and localization industry is uniquely strategic, as it is one of the means to propel other businesses and innovation by making information and communication accessible to most.

Africa has a promising future in all aspects. The issues that have limited Africa from expanding – ICT and other infrastructure – are gradually being resolved, and this will soon pave the way for development at full capacity. On the other hand, the West and other developed nations appear to have reached a level of saturation in resources, labour and the development of infrastructure. These two different situations are the ‘pull’ and ‘push’ factors into Africa. Everybody, from inside and outside of Africa, seems passionate about the potential and the developments in Africa. Welcome!

ACCURACY IS IMPORTANT. WE BELIEVE IT’S VITAL.

We combine the expertise of professionals with cutting edge technology to deliver translations that take quality to new heights. Because there’s accuracy. And there’s Accuracy.

www.lingo24.com


Review of language business & technologies in Europe by Andrew Joscelyne

Digital Single Market, Merging Translation Agendas

Last autumn, the European Union voted in a new President, finalized the members of its new Commission, and began to announce plans for encouraging growth in a beleaguered economy. From a translation perspective, the flagship item on the new five-year agenda will be setting in motion the Digital Single Market (DSM) – a continent-wide online market for goods and services worth €250B that would induce a paradigm shift in cross-border trading between consumers, business and governments – above all by trying to lift the barrier raised by the EU’s 24 official languages. So what’s the game plan for making the DSM truly multilingual? The DSM will round off the development of the existing single market that has mainly benefitted B2B trading. It was forged through the single currency of the euro, and supported by a trans-European rail service, the Schengen customs agreement for a subset of countries, and other forms of cross-border administrative and legal easing. Making the EU marketplace ‘digital’ looks like a much more complex enterprise than making it ‘single’. The idea is to stimulate a massive consumer marketplace on top of an IT and telecoms platform. The key drivers will be ubiquitous mobile communications and omnichannel retailing, fuelled in due course by the rollout of 5G telecoms networks within the next 10 years that will offer the bandwidth to generate far richer customer and citizen engagement than we have seen hitherto. Think of such emerging content technologies as integrated augmented and virtual reality, interactive ads, immersive video marketing, social robotics, in-car systems and wearable connectedness, and intelligent collaborative environments.

More CEX-y

In this kind of environment for a multilingual marketplace, the critical notion of ‘customer experience’ will involve far more than good old website localisation, linked government databases, and translating chat messages for after-sales services (see the Factbox about eBay’s European translation strategy).

How eBay translates for the EU Single Market

According to Malcolm Ishida of eBay and Paula Shannon of Lionbridge, speaking at Localization World in 2014, eBay is planning to sell all of its inventory anywhere in the EU at any time. This means translating all of the relevant content into multiple languages. eBay foresees e-commerce growth of some 30% between 2013 and 2017 in France, Germany and the UK, largely driven by the mobile experience but involving omnichannel capability. And machine translation is the game-changing engine that will drive this expansion. The complexity of the process of converting visitors into customers in milliseconds means that technology is more an advisor than just an enabler, as it provides the information needed to make decisions. The overall process involves translating intertwined commerce and content.


CEX must ultimately be able to embrace automated spoken-language communication (due to the importance of mobile – globally, 40% of all web traffic is now mobile) via personalized assistants, and large volumes of multilingual data from the internet of humans and things that will deliver yet more business intelligence to all parties. This suggests that the legacy EU recipe of funding research into MT and resource sharing will need to be radically updated. However, the European Commission (EC – the executive arm of the EU) will have to deal with more prosaic digital issues. Firstly, there is an ongoing argument over the potential loss of revenue for the continent’s major telecoms companies due to standardising roaming charges for mobile users. This could mean that countries, rather than the European DSM, remain the natural perimeter for commerce, dashing the hopes of ‘single’.

Language neutrality still ranks as the foremost challenge to the “single” marketplace.

Secondly, recent Eurostat figures show that Europe is in any case not yet a connected marketplace: one in five Europeans do not use the internet at all. But overall language neutrality (I use my language, you use yours and the system can handle them all) still ranks as the missing link in the construction of the DSM.

EU R&D

The language barrier has of course been addressed by the EC through a series of Framework Programmes for Research & Technology since the late 1980s (the 7th of these has just ended), funneling tens of millions of euros into funding projects covering most aspects of automated multilingual tech.


These have usually involved a combination of SMBs and academic researchers working on two- to five-year proof-of-concept initiatives. LetsMT (in Latvia), the Reverso Localize platform and the MOSES statistical machine translation community are just a few of the outcomes, along with dozens of other unsung or abandoned efforts at developing fragments of a technology stack that could help translate Europe. However, the policy makers changed direction just when a final funding thrust might have led to the creation of a scalable service layer to feed the digital marketplace with language tools, resources and services. The latest (voted in 2013) round of EC funding for the Horizon 2020 programme (heir to the former Framework Programmes) has significantly reduced the budget for developing translation technology – and language technology generally, including speech applications. It is now offering a meagre €15M for “Cracking the Language Barrier” via a call for automated translation proposals focused on 21 European languages identified by the META project as “endangered”.

Big data is inherently multilingual in Europe.

Big Data Wins

Ironically, a much larger amount will go to fund the more strategic Big Data category of projects. The language community has been quick to note that big data is inherently multilingual in Europe and therefore needs the appropriate tooling and processes to make text and speech data meaningful for analytics apps.

Solutions suppliers and technology developers solve real-world problems for their clients.

It would have been a good idea, therefore, to fund research into the practical pay-off between data analytics and the cross-lingual use of unstructured data such as social media. It seems more likely this will be carried out by commercial translation suppliers, given the growing importance of ‘intelligence’ for enterprises, governments and security agencies. But the jury is out on whether you can even translate social media data satisfactorily to derive market insights. At the same time, however, a new wide-ranging EC instrument worth €1 billion called the Connecting Europe Facility (CEF), covering five Digital Service Infrastructures, will be enabled by one obligatory building block that includes some funding for “automated translation.” This will be dedicated to exploring statistical machine translation solutions for services involving e-government content in the EU, which is currently almost always siloed in a single language.

There is naturally widespread disappointment in the academic and tech supplier community that language technology in general and translation automation in particular have lost much of their funding. But will these recent budget choices negatively impact the creation of the DSM just at the time it was poised to bridge the ‘missing link’ of multilingual capability? Not necessarily, if we think of the translation market as a whole. There will certainly be less money for academic translation research when compared with previous cash hand-outs, so it looks as if the best way to keep the translation flame alight going forward is to rethink the crucial second term of the new EC programme wording: the money now goes not to Research and Technology Development as of yore, but specifically to Research and Innovation.

The Innovation Game

Innovation is about markets, manufacturing processes and customer experiences, not just technology. It is best understood as being the product of an entire value chain, rather than as a particular widget or process. Simply put, solutions suppliers and technology developers solve real-world problems for their clients by calling where necessary on research to leverage knowledge that can bring engineering solutions to market, so that business clients or consumers can benefit more quickly from better and/or cheaper products and services, or improve their processes in various ways. Obviously the EC project model provides one way to drive innovation.

Europe has the densest and richest weave of translation support organizations in the world.


But it may have the effect of limiting innovation just to bits of the tech stack. It is ultimately language service providers who will apply the technology in an innovative way in the marketplace. So why can’t more of them be involved more closely in any innovation efforts, without necessarily having to be shackled by the EC’s project red tape and slow decision-making? One way to make the contribution of translation companies better known in the entire DSM debate would be to leverage the implicit but perhaps slightly dormant power of translation organizations. For example, the EC’s very own LIND-Web database of (non-commercial) translation organizations in Europe lists no fewer than 353 national, regional, and sectorial translation and language teaching bodies, of which at least 85% must be translation organizations. This suggests that Europe has the densest and richest weave of translation support organizations in the world. But, like the continent itself, they are fragmented and largely replicative. A DSM strategy would therefore provide a strong reason for these organizations, possibly led by the main industry bodies such as GALA, ELIA and EUATC, to collaborate more closely and intelligently on delivering multi- and cross-lingual services adapted to the digital market. They could also draw on national language funding instruments to support clever innovation projects.

Language Tech AND Translation Industry

The language technology community is clearly committed to providing the appropriate infrastructure for language-based resources, tools and other services, whether or not there is much EC funding for this immediately. What will be vital is that this sort of infrastructure serves translation bodies in the most effective way. It could encourage them to play a bigger role in promoting tech solutions to their members and seeking constructive feedback. They could also provide more and better training in a wider range of tech solutions to individual translation companies. A shared platform could also provide an objective data collection facility to give everyone in the DSM a better picture of global translation practices, needs and solutions than exists today.

The market potential of the emerging DSM would best be harnessed if the entire community joined forces to share agendas.

The best way to achieve this would be to develop a generally accessible dashboard that could inform market players and enable translation suppliers to demand relevant resources from the infrastructure. Close collaboration between associations might ensure that such business resources are reliable, well-maintained and shareable. In other words, the market potential of the emerging DSM would best be harnessed if the entire community – not just the technologists – joined forces to share innovation agendas. The challenge will be to use the combined forces of industry associations, pressure groups and a technology infrastructure to build the necessary momentum.




Review of language business & technologies in Asia by Mike Tian-Jian Jiang

Trinity isn't just three-dimensional and The Three Musketeers isn't just for three but for all
How I Learned to Stop Worrying and Love the Black Hole

I am fully aware of the confusion and inaccessibility of this article, from the topic to the cheesy metaphors. The intention is to show how ideas and visions can be so open to interpretation that the only way to know how real they are is to contact the person responsible and hear the whole story and its context.

Readers may find it hard to relate to the unfamiliar technical terms or to imagine uncommon phenomena such as high-dimensional theory, and may then ask for definitions or names. For what it is worth, I would recommend readers give the movie “Interstellar” a try, and think about how nameless a virtual high-dimensional space-time could be.

The quest from now on would be to go where no one has gone before. Such as a black hole, which carries the highest possible entropy. Entropy was a fancy term suggested by John von Neumann to Claude Shannon in order to replace “information” because “nobody knows what entropy really is”. I’d ask the reader to be kind enough to bear with this writing style, as there is a relation to translation. I will still provide a plain explanation as much as possible.

Introduction

Quality, speed, and cost are usually considered the three most important factors for translation services. Effectiveness, efficiency, and satisfaction are well known as the three goals for the usability of any service, and can probably be seen as a paraphrase of the former. In Mandarin Chinese, adequacy, fluency, and elegance (or the pertinence of pragmatics) are often argued to be the three criteria for translation quality.

Quality, speed, and cost are usually considered the three most important factors for translation services.


From the above three examples, one can easily notice how much people love three-degree perspectives. However, just as Newton’s laws of motion are inappropriate on very small scales, at very high speeds, or in very strong gravitational fields, the Information Age’s translation services are facing new challenges with massive content that requires time and money to localize. Due to these changes, it will be intriguing to see if the beloved 3-D aspects still hold. If I can insert a pun here, “When in doubt, C4!” This Mythbusters quote is similar to the idea behind this article. The difference here is that C4 is not necessarily a destructive explosive, but a complementary fourth character. A translation job’s characteristics are often described like the Christian doctrine of the Trinity – the Father is not the Son is not the Holy Spirit, but all are God – in that of quality, speed, and cost, at least one will be sacrificed if the other two are dominant. Unlike the Trinity, however, a Venn diagram is usually applied to indicate a small intersection of the characteristics, a so-called promised land. The concern of the editor of this magazine is that the recent trend seems to suggest that the faith is fading away and many clients who are in doubt might be ready to just give up on quality.



This article is an alternative view with a fourth character. For example, there could be more than one way to look at the Venn diagram of quality, speed, and cost. When an observer views it from the side, there is always an angle where two circles of the diagram perfectly overlap. Also, if the observer changes the viewing angle dynamically, all three circles of the Venn diagram are just the same thing. Although in the fields of physics and statistical machine learning, models such as string theory or support vector machines do not come with names of higher dimensions, readers might settle on a fourth character as the observer who sees everything relatively, just like the one from the theory of relativity. The following sections will try to sketch out how an observer can gain as much as possible from a translation job.

The algorithm-changed world changes algorithms

The algorithms of search engines enabled the world to perceive and conceive more information in a constantly shorter span of time, but the language barrier feels higher and higher once people escape their ignorance. Not so many years ago, a global company would have taken months to prepare the launch of a new local branch.

A single new product must have all localized content available online at the same time.

Now, a single new product from Apple, say the iPhone 6 Plus, must have all localized content available online at the same time. For modern translation services, machine translation appears to be the only choice that is fast enough. Since the quality of machine translation is usually unacceptable, a typical compromise is asking the customer to prioritize the content, sacrificing one part for another. The inconvenient truth is that, even with the help of translation memory and a term base, once the amount of requested content reaches a certain threshold, the translation task will exceed reasonable delivery time and budget anyway. Under such pressure, new algorithms for computer programs and human operations are desperately needed.

80-20 rule, dynamic programming, and divide-and-conquer

Without loss of generality or confidentiality, and based on personal experience, a 100-million-character project in 2 months can be a tough situation. At 10,000 characters per week per project, a rate of 3 Japanese yen per character with a simple estimate-execute-evaluate waterfall approach is reasonable. The operation can even take advantage of crowdsourcing for proofreading at an affordable cost. But when it comes to 100 million characters, all of a sudden the perception is beyond Hooke’s law, and not that linear any more.
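To make the scale concrete, here is a back-of-the-envelope sketch of why 100 million characters breaks the linear model. It is my own illustration in Python, built only on the figures quoted above; the assumption of roughly eight working weeks in two months is mine.

```python
# Rough arithmetic for the 100-million-character, 2-month scenario described
# above. Figures from the article: 10,000 characters per week per project
# stream, 3 JPY per character; "about 8 weeks in 2 months" is my assumption.
TOTAL_CHARS = 100_000_000
CHARS_PER_WEEK_PER_STREAM = 10_000
PRICE_JPY_PER_CHAR = 3
WEEKS_AVAILABLE = 8          # assumed working weeks in two months
REPETITION_RATE = 0.20       # the article's most optimistic prediction

project_weeks = TOTAL_CHARS / CHARS_PER_WEEK_PER_STREAM
parallel_streams = project_weeks / WEEKS_AVAILABLE
budget_jpy = TOTAL_CHARS * PRICE_JPY_PER_CHAR
unique_chars = TOTAL_CHARS * (1 - REPETITION_RATE)

print(f"{project_weeks:,.0f} project-weeks of work")                   # 10,000
print(f"{parallel_streams:,.0f} parallel streams needed")              # 1,250
print(f"{budget_jpy:,.0f} JPY at the quoted per-character rate")       # 300,000,000
print(f"{unique_chars:,.0f} characters remain even with 20% repetition")  # 80,000,000
```

Running over a thousand parallel streams under one waterfall is what makes the job feel anything but linear, which is the point the next paragraphs develop.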


Yet a typical misconception persists: when demand is this high, wouldn’t it be appropriate to show some price flexibility? Even with the most optimistic prediction of a 20% repetition rate in terms of characters, there will still be 80 million characters to go. Luckily, every cloud has its silver lining: the task is big enough to provide statistical and linguistic insights. Sometimes this even comes with behavioral psychology hints to bring the stimulus-response back to a bearable range. Otherwise, randomly distributed translation jobs will wander in the market forever without anyone to take them. The high demand also implies that low-hanging fruit will not be in short supply.

Randomly distributed translation jobs will wander in the market forever without anyone to take them.

Let’s revisit the workflow with a computer-assisted translation (CAT) system. It is intuitive to have a text translated from top to bottom, out of respect for context. However, a program should remember everything for you, back and forth. What if an algorithm could gather the 20% most similar items together with context for you? What if an algorithm could analyze the critical path to complete as much translation as possible, like the artificial intelligence of a chess game? What if the former two ifs could ultimately create positive feedback that would resolve the translation project in an exponential fashion based on the power of 20%? More importantly, the real objective, i.e. the fourth and unifying essence, should be to guarantee that each segment of text is tractable and remains intact once it is translated properly.


In this sense, a divide-and-conquer strategy is to find manageable jobs within a larger, seemingly intractable mission, where effectiveness/quality and efficiency/speed are merely due diligence embedded in its very nature. For instance, after analyzing the 100-million-character project with recent advanced natural language processing tools such as word2vec, similar phrases can be clustered and sorted by frequency, by length, by semantic distance, and so on. One may find “semantic distance” incomprehensible, hence a simple analogy: if the distance between “man” and “woman” is somewhat similar to the distance between “king” and “queen,” it is feasible to form a simple algebra like “king – man + woman = queen” for fun and for real-world problem solving. Once the project manager is equipped with all the above well-organized phrases, it is just a matter of strategy to find the right person or the right system to digest each job, based on its specificity and sensitivity, in terms of statistics.
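As a minimal sketch of the "gather the most similar items first" idea, the snippet below greedily buckets segments by similarity and ranks the buckets by size, so the most repetitive material can be handled first. It is my own illustration, not the author's system: it uses plain surface similarity from Python's standard library as a stand-in for the word2vec-style semantic distance mentioned above, and the sample segments and threshold are invented.

```python
# A toy version of "cluster the most similar items together": greedily bucket
# segments by surface similarity, then rank buckets by size so one translated
# representative covers the most repetitive content early. A real pipeline
# would swap SequenceMatcher for embedding-based (e.g. word2vec) distances.
from difflib import SequenceMatcher

segments = [
    "Press the power button to turn on the device.",
    "Press the power button to turn off the device.",
    "Battery life may vary depending on usage.",
    "Press and hold the power button to restart the device.",
    "Battery life varies depending on usage and settings.",
]

def similar(a, b, threshold=0.7):
    """True if two segments look alike enough to share one cluster."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

clusters = []
for seg in segments:
    for cluster in clusters:
        if similar(seg, cluster[0]):
            cluster.append(seg)
            break
    else:
        clusters.append([seg])

# Largest clusters first: the most repetitive material surfaces at the top.
for cluster in sorted(clusters, key=len, reverse=True):
    print(len(cluster), cluster[0])
```

Sorting the resulting clusters by frequency, length or distance is then a project-management decision about who, or what system, digests each bucket.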


So the next significant question is: who is going to be the wise guy?

D’Artagnan of the team

Computer programs and the people who develop them may have their potential, but a good team that can survive with low cost and high satisfaction is always more than that. It has already come to some, if not all, translation service providers’ attention that agile translation, lean localization, and many other buzzword combinations introduce all those waterfall vs. iteration comparisons to the translation industry. It is easier for the service provider to say no to more one-shot estimate-execute-evaluate waterfalls and to welcome an era of growing and reinforcing iterations.

Yet the stakeholder’s – i.e. the customer’s – point of view remains unclear and deserves emphasis. A highlight here will be the fourth and the one character: chicken.

In the metaphor of Scrum, a famous methodology for agile/lean software development management, a chicken asks a pig to be a business partner in a restaurant venture, Ham-n-Egg. When the pig (the developer) feels committed, the chicken (the client) only appears involved. Despite how weird it sounds, at least the chicken is involved. In reality, the relationship between a customer and a translation service provider is no better than the above story. Assuming there is a global business owner submitting several texts to a crowd translation platform, the customer and the service provider both may not notice whether the crucial information in the text is or is not going to be translated by the faceless translators.

Here’s a funny (and true) story: there was an app development company requesting localization for a short commercial. This commercial contained another organization’s name, and neither the customer nor the translator realized that the organization name looked like two common nouns with a colon in between, e.g. “X: Y”. The resulting translation was fluent and appropriate for an advertisement, but the organization name was totally ruined. What if the chicken, or more respectfully D’Artagnan, i.e. the customer, could be more involved than just in the content and the paycheck?

If one is willing to listen to the lesson learned from software development, test-driven development could be an effective attempt to fix it.

Test-driven translation

The previous story actually came with a regrettable twist. The one who finally spotted the issue was a reviewer responsible for vetting translators periodically. Since the reviewer is not the proofreader, nor has any other role in the project, the damage was already done. If one is willing to listen to the lesson learned from software development, test-driven development could be an effective attempt to fix it.

Although many CAT tools or translation management platforms provide basic quality assurance such as length checks, tag/symbol/punctuation validation, whitespace normalization and so on, there are more semi-automatic approaches worth pursuing. For example, under certain circumstances, many different Japanese expressions can be translated into the same English phrase. It sounds like good news when a translation service provider is recycling translation memory, but the whole document may disagree. Since such cases can be easily detected, a test of contextual relevancy can be conducted. The best part of test-driven development is that it is not only a reusable safeguard, but also a clue to enlighten the team towards a deeper understanding of a certain subject. Eventually, test cases will become patterns, templates and exemplars, feeding back into the algorithms of the computer and the people around it.
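As a rough illustration of what such semi-automatic, test-driven checks could look like, the sketch below covers a length-ratio check, placeholder validation, whitespace normalization and the "different source expressions recycled into one identical target" symptom described above. It is my own example in Python, not taken from any particular CAT tool; the segment pairs and thresholds are invented.

```python
# Test-style checks over (source, target) segment pairs. Sample data and
# thresholds are invented for illustration only.
import re
from collections import defaultdict

segment_pairs = [
    ("ご利用ありがとうございます。", "Thank you for using our service."),
    ("設定を保存するには {0} を押してください。", "Press {0} to save your settings."),
    ("誠にありがとうございました。", "Thank you for using our service."),
]

def length_ratio_ok(source, target, low=0.3, high=3.0):
    """Flag targets that are suspiciously short or long relative to the source."""
    ratio = len(target) / max(len(source), 1)
    return low <= ratio <= high

def placeholders_preserved(source, target):
    """Every {0}-style placeholder in the source must survive in the target."""
    pattern = re.compile(r"\{\d+\}")
    return sorted(pattern.findall(source)) == sorted(pattern.findall(target))

def whitespace_normalized(target):
    """No leading/trailing spaces and no doubled spaces inside the target."""
    return target == " ".join(target.split())

def duplicate_targets(pairs):
    """Different source segments that share an identical target translation."""
    by_target = defaultdict(list)
    for source, target in pairs:
        by_target[target].append(source)
    return {t: s for t, s in by_target.items() if len(s) > 1}

for source, target in segment_pairs:
    assert length_ratio_ok(source, target), f"Length check failed: {source!r}"
    assert placeholders_preserved(source, target), f"Placeholder lost: {source!r}"
    assert whitespace_normalized(target), f"Whitespace issue: {target!r}"

# The symptom described above: two different Japanese expressions recycled
# into one identical English phrase, which a contextual review should catch.
print(duplicate_targets(segment_pairs))
```

Once written, checks like these become the reusable safeguard and the growing set of exemplars the paragraph above describes.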

Another example in Mandarin Chinese is the Simplified-Traditional and the China-Taiwan-Hong Kong-Singapore situation. If the target languages are both Simplified and Traditional Chinese, different characters and terms can be looked up in a table. Ultimately, it is up to the customer to decide how to proceed. Convert Traditional Chinese into Simplified to keep the ambiguity low? Take the opposite direction to secure a higher supply of translators? Or do what Apple did: on the website oriented towards the mainland Chinese market, “Bigger than Bigger” used to be rendered more bluntly than on the Taiwan and Hong Kong sites. However, after receiving end users’ complaints, the former followed the same rendering as the latter, just with different characters. There’s probably no way to tell if Apple had them tested internally, but certainly many companies do not want a slogan out there and have it tested the hard way.

A job is a story

Yet another lesson learned is from a static HTML page translation job. After the application of the estimate-execute-evaluate waterfall approach, the customer realized that the texts of Open Graph Protocol (OGP) properties, which are usually used for Facebook optimization, were never extracted and translated, not to mention that they were not included as part of the quote in the first place. Since the customer was not involved until the evaluation phase, it is easy to imagine the awkwardness between all parties. At this point, quality, speed, and cost are not relevant anymore if the whole intention was to maximize the customer’s exposure on Facebook. Software development has had many catastrophic experiences of this kind, and finally came up with a treatment: the job story. In translation, counting the words/characters of a given job and multiplying by the unit price and the estimated time is seemingly a universal practice. The quality is then projected onto the former two factors with educated guesses. When it comes to the HTML page translation job, the quality might be just fine but the purpose is lost.
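Part of the fix is mechanical: the OGP texts have to be extracted and counted in the first place, alongside the visible content. Below is a minimal sketch of doing that with Python's standard library; it is my own illustration, and the sample page is invented.

```python
# A sketch of pulling Open Graph Protocol texts out of a static HTML page so
# they can be counted and quoted for translation instead of silently skipped.
from html.parser import HTMLParser

class OGPCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.ogp = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property") or ""
        if prop.startswith("og:") and "content" in attrs:
            self.ogp[prop] = attrs["content"]

page = """<html><head>
<meta property="og:title" content="Spring Sale - 20% off everything">
<meta property="og:description" content="Three days only. Free shipping worldwide.">
</head><body><h1>Spring Sale</h1></body></html>"""

collector = OGPCollector()
collector.feed(page)
for prop, text in collector.ogp.items():
    print(f"{prop}: {len(text)} chars -> {text}")
```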


Even though the character count of OGP properties is typically small or even zero – they are generally duplications of other common HTML texts that come with the page – a new paradigm, the job story, tries to prevent mistakes like this. Just write the true purpose down in this form: when (situation), I want to (motivation), so I can (expected outcome). The involvement of a customer can be as minimal as this simple statement, and it soon strangles many tragedies in their cradle.

There are always more dimensions outside of the box

The trinity of translation is quality, cost, and speed, but quality is neither cost nor speed; it is the box. Using machine or crowd translations to increase speed or to decrease cost might be good ideas inside the box, but the box always comes with boundaries. One of the final issues (if not the only one) is how much an extra dimension will cost. When a 100-million-character project comes to you, will it be fine to pay for research and development first, and then reduce the size of the work in return? Admittedly, the profit might not cover the loss on the very first try if the magic came with an outrageous price.



To boldly go where no one has gone before

Recently a Medium post reminded us of the great accomplishments of Margaret Hamilton, who is famous for working in an era “in which computer science and software engineering were not yet disciplines; instead learning was done on the job with hands on experience”, and who also invented the Universal System Language (USL), which has “taken on multiple dimensions as a systems engineering approach”. Looking at her portrait from the Apollo program might shed some light: programming at that time was more like a human translation task than programming today. There was such an incredible amount of cards for the Apollo computers that they needed the help of USL, and this eventually let people land on the moon. Before we all get there, I will just admit that I do not know what the silver bullet will be as the fourth character in a higher dimension. However, the beauty of the scientific approach is that at the very least we can all try, and every falsifiable hypothesis will help us to improve by negation. In the end, even theology needs to define (via negativa) what is not included in the Trinity.

Get your insights, tools, metrics, data, benchmarking, contacts and knowledge from a neutral and independent industry organization. Join TAUS!

TAUS is a think tank and resource center for the global translation industry. Open to all translation buyers and providers, from individual translators, language service providers & buyers to governments and NGOs. taus.net



The Journalist’s Perspective by Lane Greene

Please be more specific

Whether all the world’s languages are fundamentally one—a view associated with Noam Chomsky—or whether languages encode specific worldviews and are deeply different—a view associated with Benjamin Lee Whorf, an American linguist writing in the 1930s—is one of the most heated debates in language and thought, engaging some of the field’s biggest names.

Few today would agree with an extreme version of either side: either that speakers of Hopi and Hawaiian think completely differently as a result of their languages, or that all language differences are psychologically trivial. And so linguists have offered many compromise formulas between the two extreme positions. One of the most interesting and important, from a translator’s point of view, comes from Roman Jakobson, a Russian-American linguist, who wrote that languages differ not in what they can convey, but in what they must. Jakobson was answering, in part, those who felt that some languages were incapable of rendering thoughts easily available in other languages. Most linguists today agree with him; given some creativity and a willingness to work with neologisms, you can say pretty much anything in any language. But the second half of Jakobson’s dictum is the one that poses a dilemma for translators on a daily basis: namely, that translating between languages that grammatically require different kinds of detail is devilishly hard—and even harder for machine translation. Take Jakobson’s own language, Russian. Like most European languages, Russian has a formal “you” (vy) and an informal one (ty). Like other English-speakers, I find this forced choice annoying: usually I just want to ask someone a question without assessing our relative seniority, the other person’s desire for respect, and so on. I just want to say “Do you know where the bathroom is?”, but in Russian I can’t: Russian and English differ on what they must convey, namely our social relationship.


In a much more complicated example, Russian has no word for “go”—or rather, it has dozens. Whether a trip is round-trip or one-way, completed or in process, by vehicle or by foot, and of course conjugated across six persons, the Russian verbs of motion are notorious for foreign learners like me. English, of course, gives me the option to specify all of these things, or the much more common option to omit them if I don’t care to go into detail: you can go to school, go home, go to hell, go to the bathroom, go to Frankfurt, go west, go to the store or go away, all with one verb.

Languages differ not in what they can convey, but in what they must.

What is the translator going to do from Russian to English? Ezdili v Moskvu means “they went [by vehicle] to Moscow [and came back].” The full translation is awkward. In a narrative, context will usually take care of any ambiguity, so it would be best to leave it as They went to Moscow. The issue is one of omission, and it’s a matter of relatively simple taste and judgment. But what about the other way, going from the less- to the more-specific language? Take Harry went to school. This is impossible to translate



properly into Russian without more context: Are we talking about a repeated action? A single trip? Did he take the bus or walk? Did he just set out, or are we talking about a completed trip yesterday? We need all of this information. Again, in narrative, it will usually be available, so a translator can easily add into the Russian what was absent in the English. Commercial translation and localization, though, can be harder. In many localization contexts, the words to be translated have no narrative context. Think of the many English-language web pages that have you fill out a form and then click a button that says “Go!” What should the Russian button say? Idi! (Singular, informal, one-way?) Probably, but what if you want to be more formal? What if it’s a travel website and the “go” is meant to evoke starting your vacation? As of this moment, a bug in Google Translate is stymieing my attempt to find out what machine translation does with this problem: I entered “Go.” (period included) in the English, with Russian as the target language, and got back… “Go”. Not what I was hoping for. “Go” without a period returns the infinitive (by foot, one-way) verb idti. But the infinitive isn’t right either. I tried “Go, and don’t come back.” Now we get something accurate: Idi i ne vozvrashchaysya. This is accurate, so long as we’re talking to one person whom we would address informally. But what if we’re not? Localizers and translators have to make these kinds of tough calls all the

time. Should an advertisement address an individual or the crowd? (That is, grammatically singular or plural?) Should it have an informal or formal feel? Formal was the old default, when anyone reading this newsletter was learning German or French—“when in doubt, use vous…” But conventions are rapidly changing: Twitter is practically a vous-free zone, for example, and advertisers are always chasing youngsters.

In many localization contexts, the words to be translated have no narrative context.

Machine translation struggles with even simple grammatical context. Plug in “I saw the girl as she was walking down the street,” and Google returns a bit of a mess. Bing does better: Ich sah das Mädchen, als sie die Straße entlang ging. But I was trying to trick it, and I succeeded: a girl (das Mädchen) is neuter in German, so this should read Ich sah das Mädchen, als es die Straße entlang ging. Bing didn’t reach back just one clause to see that the antecedent of she was das Mädchen. Imagine how much context MT would have to scan and understand to make the much more subtle and complicated translation choices required in Russian but not in English: formal or informal you (does it scan back and see if “Mr.” or “Ms.” appears near the putative English antecedent?); the verb for going by foot or going by vehicle (does the engine scan for mention of a vehicle? How far away is the destination?). Now we are in the realm of serious artificial intelligence. Stephen Hawking recently repeated the common fear that artificial intelligence may one day threaten humans. But here, we see more proof of the old adage that computers find it easy to do things humans find hard to do, and vice-versa.



The Language Perspective by Nicholas Ostler

Fire and Water
Last quarter I looked at two kinds of blocks to language learning, each of which can be reversed with powerful effect. One was the cognitive struggle to learn words and patterns, all of them meaningless at first. “How do I learn this stuff? How can I get enough of it to be ready when somebody actually uses the language to me?” The other was affective and emotional. “How can I cope when I fail to understand? What to do about the humiliation, when I can’t even keep up with little children?”

These are the dangers of language learning; and correspondingly, there is a surge of triumph when they are overcome. “Yes I did understand, at least part of that!” “How attractive that person seems who said it!” Machine translation only addresses the cognitive side. It is all about equivalence between words and phrases, not about the intent of the text being processed, let alone the kind of rapport being offered by the text’s author. But can measures of translation quality afford to be so unemotional? Applying such measures is intended to assess how close the product was to “getting it right”. Does the output text say “the same” as the input?

“Yes I did understand, at least part of that!”


But what features go into the judgement of “the same”? Is it enough that the translated text has the same truth conditions as the original? What if the structure – and hence the order of elements – is completely different? What if the level of style diverges: might this not mislead a reader? A technical manual and advertising copy might describe the same feature of a product – but not in the same terms, surely.

Shouldn’t the output document preserve the target effect – on the reader, or even (more purely) of the author’s intent? This last is a high standard, and perhaps impractical. But it remains the ideal. And as it happens, there is now a more concrete way of seeing its implications. This is when we come across a “self-translating” author: someone at ease in many languages, who produces “the same book” in more than one. I was amazed this year when I heard of Elif Shafak, now the most popular Turkish female novelist. Reading “The Forty Rules of Love”, a romance that draws heavily on the career of the Persian mystic Rumi, I naturally assumed that it was a translation of the Turkish version “Aşk”. But then I read her account of how the two versions were written: In “The Forty Rules of Love” I tried a completely new technique. I wrote the novel in English first. Then it was translated into Turkish by an excellent translator. Then I took the translation and I rewrote it. When the Turkish version was ripe and ready, I went back to the English version and rewrote it with a new spirit. (Today’s Zaman)



So in a vivid sense, both versions are the work of the same (bilingual) author. Somehow, she has produced “the same” book twice, differing only in the language of the readers. This must be close to an ideal translation, in that both versions directly express the author’s intent. In the world of machine translation, this seems most analogous to multilingual document production. But in practice it is the polar opposite of any kind of MT, since the guiding spirit is the affective intent of the author, monitored as every word is produced, very different from a single input text in a controlled language. It is interesting too, in that the different language versions seem to have cross-fertilized in the writing.

The author chooses different metaphors with which to leave her audiences.

But what differences does the author permit in practice? There are none in the top-level structure of the book into parts and chapters, and hardly any in the number of paragraphs of which each chapter is composed. It is within the paragraphs that the author gives herself the right to change content, in order – apparently – to communicate with audiences with different hearts, as it may be. This shows up in the first and last pages. Here her focus is the impact of love within a life. The beginning contrasts the effects of a stone dropped into a stream – trivial and indiscernible – and into a still lake – complex and fundamental. The end is a

retrospect, the lover watching the fading indigo of the sky, like a whirling dervish. English tells of infinite possibilities in its dissolution, but there are none in Turkish, where instead “clouds like fine white tulle return heavily” (tül gibi beyaz ve ince bulutlar ağır ağır dönüyor). As the light fades, so does her wish to classify the kind of love she has had. The beginning is all about circles, how they self-generate. But in English they remain resolutely watery (‘ripples’) while in Turkish they transform into rings and buds and flowers. At the end, “a life without love is of no account” (aşksız geçen bir ömür beyhude yasanmıştır). That much is shared: but then the accounting for love is dismissed almost for the same reasons that the advent of love had been so profound. Especially in the Turkish, the phraseology is very similar: Çemberler çemberleri doğurur ~ ayrımlar ayrımları doğurur. Circles beget circles, divisions divisions, good or bad, creating or misleading. But then, at the very end, the author chooses different metaphors with which to leave her audiences. For the Turk, love is a world in itself: you’re either in the middle of it, or else you’re on the outside, longing for it. For the Anglo, “Love is the water of life” (that at least, as expected) but then “And a lover is a soul of fire! The universe turns differently when fire loves water.” What a contrast: “be at the centre,” or “embrace your own dousing”! How would a translation quality system assess this? But it is Google Translate’s literalism which has enabled my ready contrast of Turkish and English, and shown how fast and loose the author plays with her readers. The universe turns differently, when metaphors are so easily traded.



The Research Perspective by Luigi Muzii

Breakthroughs from Research
In the first issue of this journal, I wrote that we are on the exponential side of the growth curve, the second half of the chessboard, to quote Ray Kurzweil, where every change has a significant impact. In the last few weeks, an article caught my attention about how wearable devices are about to radically reshape the customer service experience in the banking industry.

I have never thought of Internet banking as a fashion accessory, even though, more than thirty years ago now, when I was in my late teens and an awkward user of the first home computers, I dreamt about home automation and home banking; both seemed unlikely to come true in the near future. Pocket translators made their first appearance at the time. They were little more than calculators with a basic dictionary of words and idioms. A decade or so later, I ran into the first affordable PC-based machine translation program. Since then I have been deeply immersed in the typical, archetypal approach to translation and translation quality. A few years later, I installed and launched Mosaic for the first time, and I suddenly realized that things would change dramatically.

The most commonly asked question about translation quality is: How can it be measured? To measure something, you must know what it is, and then you must develop metrics that measure it.

Things actually have changed, although not that much in the academic translation community. The overall theme for this issue, translation


quality, is an evergreen, an ever-beloved topic of the translation community, and probably the one that has changed least. Like rates and the systematic abuse of technology, with which it often goes hand in hand, it is just as emotional.

The gloomy legacy of positivism
This emotionality suggests that, despite the elaborate spirit of Übersetzungswissenschaft, positivism never permeated the solid fabric of the ivory towers of academia, where translation quality theories cannot be trivialized into something workable. It comes as no surprise, then, that the case against positivism and its gloomy legacy is being revived in certain circles. Nonetheless, I confess, I am a kind of positivist. I still trust in the intrinsic neutrality of science and technology, and I believe in the quantitative approach. All said, my view of the translation quality issue could be considered somewhat biased.

Measurability
In his lecture to the Institution of Civil Engineers of May 3, 1883, Sir William Thomson, first Baron Kelvin, stated: “When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your



knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of science.” The most commonly asked question about translation quality is: how can it be measured? To measure something, you must know what it is, and then you must develop metrics that measure it. Defining those metrics is the hardest part for people who have always treated quality in their deliverables as a matter of opinion.

Metrics
In most cases, still today, and especially in translation, the most common way to assess quality remains measuring the number and magnitude of defects, understood as the characteristics that cause a product or service to depart from a standard or process. The number of defects indicates how a product or service deviates from expectations, generally expressed in the form of requirements. When defects cannot be physically removed, their features and scope must be specified, and since a specification of requirements is a rare gem in this field, translation quality is commonly assessed by comparison with the source text. Unfortunately, because translation is human work, everyone is supposed to be able to assess a translation, and its intrinsic quality is often settled by personal taste. This is why, in Quality in Professional Translation, Joanna Drugan acknowledges that theorists and professionals overwhelmingly agree that there is no single objective way to measure quality, and that different models assess different things. And, as if to invoke Heisenberg’s uncertainty principle, she adds that any measurement will change the nature of the model itself.
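To make the defect-counting approach described above concrete, here is a minimal sketch, in Python, of a weighted error-typology scorecard. The categories, severity weights, normalization per 1,000 words and pass threshold are all illustrative assumptions, not values taken from the TAUS DQF or any published standard.

from collections import Counter

# Illustrative severity weights per (category, severity) pair; assumed values.
WEIGHTS = {
    ("terminology", "minor"): 1, ("terminology", "major"): 5,
    ("grammar", "minor"): 1, ("grammar", "major"): 5,
    ("style", "minor"): 0.5, ("style", "major"): 2,
    ("accuracy", "minor"): 2, ("accuracy", "major"): 10,
}

def quality_score(errors, word_count, pass_threshold=25.0):
    """errors is a list of (category, severity) tuples marked by a reviewer.
    Returns the penalty normalized per 1,000 words and a pass/fail verdict."""
    counts = Counter(errors)
    penalty = sum(WEIGHTS.get(key, 1) * n for key, n in counts.items())
    normalized = penalty * 1000.0 / max(word_count, 1)
    return normalized, normalized <= pass_threshold

# Example: a 2,400-word sample with a handful of reviewer-marked errors.
sample_errors = [("terminology", "major"), ("grammar", "minor"),
                 ("style", "minor"), ("accuracy", "major")]
score, passed = quality_score(sample_errors, word_count=2400)
print(f"penalty per 1000 words: {score:.1f}, pass: {passed}")

The point of the sketch is simply that such a scorecard measures deviation from a chosen standard; everything interesting is hidden in the choice of categories, weights and threshold.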

Establishing a model or definition of quality, and translating it into a set of parameters to measure each of its elements, is pivotal. However, anything that is not relevant to this model could make any measurement erroneous. Therefore, striving for a single, all-encompassing metric is not only troublesome, it can even be useless: a simple metric would not reveal all the problems, and multiple metrics would be needed to take all elements into account.

Quality in translation today
Nevertheless, quality is the unique selling proposition for the whole industry, which still lives, but maybe no longer thrives, on the idea of translation as an art.

Quality is the unique selling proposition for the whole industry.

In fact, for any business, quality is a prerequisite for existence on the market, and it is expected. This is especially true in the translation business, where buyers cannot possibly assess the quality of all the products or services they receive. So, when reasoning about translation quality today, one question comes to my mind: do the ever-increasing production speed and volume of content still leave room for quality as a major concern, or have time and price become the primary, if not the only, determining factors in business?

Quality management
Based on the principle that knowing which parts of a process work well and which ones don’t allows measures to be taken to correct the problems, in the second half of the 20th century the quality management approach moved from manufacturing into the service, healthcare, education and government sectors. Today quality standards generally target processes, considering a product as the result of a process.


In this perspective, quality has become a relative concept broadly corresponding to product suitability. Being relative to needs, it varies with tasks, each one having its own requirements. This view has extended to translation quality standards, which mostly rely on the idea that following certain procedures will increase the likelihood of good quality. Even though repeatability is no straight, fast, and safe way to quality, it at least makes it reasonable to expect the same results and, in turn, to spot the error-originating causes.

Counting errors is still the one known or assumed way to instruct students in a centuries-old educational model forged to feed a system that has been flaking off for at least two decades.

Unfortunately, all translation-related quality standards simply replicate the typical trial-and-error approach of teaching, with downstream rather than upstream adjustments, catching rather than preventing errors, and this exception-handling approach challenges QA principles. At the Localization World Dublin conference in June 2014, translation quality scholar Sharon O’Brien restated that “the localization industry is ‘a-theoretical’ and localization industry experts have no need for careful theoretical concepts,” but also that “academics have shown remarkably little interest in taking the localization industry seriously.” Unfortunately, the first serious attempt so far to line up theory and practice, the TAUS DQF, based on Sharon O’Brien’s dynamic quality evaluation model, still relies on the traditional error-catching approach.


In fact, Sharon O’Brien also reported that, to her knowledge, no translation educational institution has abandoned this approach. In other words, counting errors is still the one known or assumed way to instruct students in a centuries-old educational model forged to feed a system that has been flaking off for at least two decades. And, regrettably, it is affecting the machine translation community.

The red-pen syndrome
The error-catching/red-pen approach translates into a red-pen syndrome once students enter the translation business. Not surprisingly, the whole translation business still runs on the error-catching production model, through serial, non-collaborative, additional steps that could easily introduce costly new errors at every stage. In The Psychology of Science, Abraham Maslow wrote, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” The main problem with the error-catching model is that inspections are costly. Only very small, consistent samples make an otherwise impracticable 100 percent inspection sustainable. The error-catching model has its critics even among theorists, with some arguing that it is intrinsically flawed, as a focus on the language quality component in translation (register, grammar, idiom, collocations and spelling) allegedly neglects the important aspects of pragmatic and semantic meaning in language use. The problem with this model is that it only focuses on the final output rather than on the process, and this leads the players to add steps to the process to catch errors and improve quality. Unfortunately, besides adding costs by multiplying efforts, this exposes the chain to the

An error-based chain is neither efficient nor lean, nor is it cheap.



risk of introducing new errors at every stage. An error-based chain is neither efficient nor lean, nor is it cheap. In fact, in translation, quality costs are mostly costs of quality control: reviews, rejections, and repairs. A lot of time and money is regularly spent on linguistic quality, even though no one, especially among theorists, has so far been able to demonstrate that any bijection exists between price and translation quality.

Not everyone likes change. The translation community does not either.

Also, quality inspections are seldom based on any input from customers or end users. They are run at the end of the process following a standard definition of quality, which always matches only the prevailing view of the linguists involved. These, in turn, are taught more or less the same model, and apply it regardless of content type and market. In fact, translation students are seldom confronted with actual projects (especially in terms of volume and technical aspects), and even less are they taught to collect and analyze requirement specifications. Following this attitude, on the other hand, linguists regard themselves as the only ones entitled to set translation quality specifications and translation quality assessment criteria.

A change is needed
A further confirmation comes from the latest prominent works on translation quality from an academic perspective, Joanna Drugan’s above-cited Quality in Professional Translation and Ilse Depraetere’s Perspectives on Translation Quality. The latter in particular shows how contrastive analysis, and thus the error-catching approach, is still the dominant approach to quality assessment. Even Nathalie De Sutter’s proposal for a post-editing-based evaluation of MT output reveals the same legacy.

By the time this issue of TAUS Review is published, Juliane House’s latest work Translation Quality Assessment: Past and Present should be available for purchase. It will follow the same tracks as her fellow scholars, considering translation, at its core, as a linguistic art, thus reaffirming her own model of translation quality assessment, which relies on detailed textual and culturally informed contextual analysis and comparison. Things get even more intricate when any reference to socio-cultural and situational context is added. Not everyone likes change. The translation community does not either. After all, immutability can be extremely reassuring, especially when everything around is changing at a very fast pace that is hard, and can seem impossible, to keep up with. On the other hand, as Joanna Drugan says, academics and the industry are pursuing different goals, and it should not come as a surprise that academic efforts in the area of translation quality are still largely ignored, if not explicitly rejected, by the profession. Werner Heisenberg published his uncertainty principle in 1927. Today, although widely repeated in textbooks, his original physical argument for it is known to be fundamentally misleading. Maybe it is true that the scientific method, based on deduction and falsifiability, is better at proliferating questions than it is at answering them, but this is what has led us into the second half of the chessboard, by stimulating change.


If the statement “we cannot solve problems by using the same kind of thinking we used when we created them” is much too often wrongly attributed to Albert Einstein, it is possibly because it makes perfect sense. Actually, a new type of thinking is essential even when dealing with translation quality: it is essential for the whole community to adapt to new conditions, survive and move to higher levels.

A technology-driven paradigm shift
Reportedly, in 1998, speech recognition pioneer Frederick Jelinek said, “Anytime a linguist leaves the group the recognition rate goes up.” The most serious threat to the translation community as we know it comes from technology. The technological changes are so overwhelming in their reach that to defy adaptation is to risk irrelevance. A large part of the translation industry has already embraced the coming changes and will be able to adjust accordingly, on the assumption that automation is creating opportunities to translate documents that would never have been part of the regular translation/localization process due to volume, relevance or time sensitivity.

The technological changes are so overwhelming in their reach that to defy adaptation is to risk irrelevance.


At the same time, there is a widespread belief in the professional translation community that automation has not only affected jobs but is also endangering the nature of translating itself. This belief is supported by the daily proliferation of essays on the damage done even by long-established technologies like translation memories, while the truly valuable contribution from the academic translation community could be to produce innovation by disrupting the use of technology, not the technology itself.

Moving farther on
Automatic output evaluation has become important because statistical machine translation has become the dominant paradigm of machine translation. In her thesis on the human evaluation of statistical machine translation, recently presented at San Diego State University, Elisabeth Candy Stephens concluded that BLEU undervalues good translations and overvalues bad ones, and that a lot of work still needs to be done in order to standardize this metric. Interestingly, Stephens’ work restates the old-fashioned yet firmly established claim that human beings and machines can be assimilated to one another, thus forgetting, once again, that different systems and approaches require different assessment models. The bias in Stephens’ work possibly comes from the typical academic approach described before, which cannot help translation studies move further.

Automation helps lower costs by allowing humans to be engaged only in tasks to which they can add value. Students at the University of Zurich’s Institute of Computational Linguistics recently got the opportunity to look into the inner workings of machine translation in an introductory course on machine translation and parallel corpora. The idea was for the computational linguistics students to begin experimenting with a statistical MT system as quickly as possible, without worrying about the technical details, and to study MT in both theory and practice in order to become competent users. This pattern could help future translation providers develop an elastic mindset towards translation automation technology.
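As a rough illustration of the BLEU critique a few paragraphs above, here is a minimal, self-contained sketch of BLEU-style modified n-gram precision with a brevity penalty. It is a simplification of the real metric (single reference, crude smoothing), and the example sentences are invented; it only shows why a perfectly acceptable free translation can score far below a literal one.

import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU against a single reference."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_counts, ref_counts = Counter(ngrams(hyp, n)), Counter(ngrams(ref, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # Add-one smoothing so a single missing n-gram order does not zero the score.
        log_precisions.append(math.log((overlap + 1) / (total + 1)))
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return brevity * math.exp(sum(log_precisions) / max_n)

reference = "the committee approved the proposal yesterday"
literal   = "the committee approved the proposal yesterday"
free      = "yesterday the board gave the plan its approval"  # fine translation, little overlap

print(round(bleu(literal, reference), 3))  # close to 1.0
print(round(bleu(free, reference), 3))     # much lower, despite being acceptable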



Conclusion
In the first issue of the TAUS Review, I wrote that, in this column, I would be asking myself the same questions I have been asking for the last thirty years and more: what breakthrough can we expect from research? Can any breakthrough come from the academic world of translation? As a positivist, I am confident in scholars’ goodwill and keep thinking that the bad signs I read are not for real.

The bad signs
Translation scholars seem frightened by innovation, and their conservatism shows they are more comfortable with conformism, as if they were worried about singing out of tune. Yet, if it is true that deduction and falsifiability make the scientific method better at proliferating questions than at answering them, thinking outside the box and singing out of tune are the first steps in innovation.

A product is not quality because it is hard to make and costs a lot of money.

A radical turn
In Innovation and Entrepreneurship, management guru Peter Drucker wrote: “Quality in a product or service is not what the supplier puts in. It is what the customer gets out and is willing to pay for. A product is not quality because it is hard to make and costs a lot of money, as manufacturers typically believe. This is incompetence. Customers pay only for what is of use to them and gives them value. Nothing else constitutes quality.” The error-catching model is already barely sustainable, given its complexity, costliness and unreliability. Simplicity is key to sustainability, and agile is the new black.

A radical change is needed in translation studies. Rather than insisting on refining the traditional error-catching approach to quality, prone as it is to subjectivity and fallacy, translation scholars should focus on new error-prevention models based on a few simple quality criteria, which would be easy for customers and users to understand and would allow the straightforward definition of requirement specifications.

The next stage in quality assessment could then be the development of fully automatic tools to reduce human involvement, and thus subjectivity.

The next stage in quality assessment could then be, finally, the development of fully automatic tools to reduce human involvement, and thus subjectivity. These tools would enable vendors to collect and analyze requirements and help them prevent errors from the very beginning through compliance with criteria, while buyers could save time and money in sampling and assessing translations through checklists, ongoing audits, and objective additive scoring. However, the biggest achievement would be for translators to finally work with clear goals and disciplined procedures.
____________________________________
Send news, reports, comments, ideas, and recommendations to research@taus.net.

Besides being obsolete, overcomplicated, inefficient and thus unsuccessful, traditional theories and models are inadequate to respond to increasingly demanding and complex work.



The Translator’s Perspective by Jost Zetzsche

Mission Possible
I'm a man on a mission these days. Sort of. Here's my mission, as I've said in the previous column: it's in all our interests to find better ways of utilizing machine translation than we have so far with post-editing. And "all" really means all, including translation professionals and translation buyers. There is a lot of potential in harvesting data from machine translation suggestions, but overall I think we're going about it the wrong way.

Now, there are some machine-translated projects where post-editing is a good option. Those are projects with a highly trained MT engine in place for a well-suited language combination and text type, where the post-editor essentially only has to do touch-up work. Anything else but post-editing would seem silly in that case. But do you always work with engines and in situations like that? Nope, neither do I. What are other ways to access content that comes out of a machine translation? Internally, MT engines of course come up with many, many propositions as translations for the text to be translated, but typically they expose only one of those options -- that's the one that's supposed to be post-edited in the old, and very often tired, model. We all know that there will be some parts in that suggestion that will be OK, even really good, but we also know that in its entirety the suggestion often needs so much work that it becomes tedious to use it.

The machine translation engine holds in its dark recesses pretty much all or most of the translation elements we will end up using.

But here's the key: The machine translation


engine holds in its dark recesses pretty much all or most of the translation elements we will end up using; it just doesn't release them to us. You can expose some of those usually hidden "phrase tables" in Google Translate when you look at the different suggestions for each phrase by clicking on parts of a target sentence. You can also see it in the beta version of the Moses-based WIPO MT engine that I linked to in my last column. Some tools already have features that are moving in the right direction:
• Wordfast Classic (and presumably the long-announced new version of Wordfast Pro) has an AutoSuggest feature that displays both segments and terminology (which actually ends up being subsegments) as you type, from the various MT engines you can connect (under Setup > AS).
• Trados Studio offers two plugins that use AutoSuggest for whole and partial segments -- unfortunately only from one MT at a time (the now free -- and very helpful -- MT AutoSuggest, which suggests from one of the associated MT engines, and Google Translate AutoSuggest, which is also free and allows you to use phrase suggestions from Google Translate without paying the typical Google usage fee).
• Déjà Vu X3 offers AutoWrite options for all associated MTs for whole and partial segments (under File > Options > General).

I'm really excited about the progress that I've seen in these tools. However, none of these tools actually retrieves several suggestions from one MT engine per sub-segment (instead, they use several engines with one suggestion each). In addition, none of them is dynamic in the sense that the MT is continuously queried and re-queried on the basis of what has already been chosen as the right translation. See, the power of getting several suggestions from one MT makes so much sense with customized machine translation engines. It might be helpful to get lots of suggestions from a non-specialized MT, but from a specialized MT, it's kind of like TM on steroids. It provides all the different combinations of language that you trained it on. Will it come up with the right solution?

It provides all the different combinations of language that you trained it on.

Sure it will, if you can dig deep enough and look for fragments rather than the whole segment, and if the digging can happen purposefully through the keystrokes that you enter rather

than some awkward search functionality, thus placing the oversight and control squarely in your hands. Now couple that with an interactive reformulation of the suggestions based on what you previously decided on as your translation? (Please stop drooling on your keyboard!) I had a lovely talk with Spence Green the other day. Spence is just about to finish his PhD in computer science at Stanford. Among other things he's been working on a system that utilizes the Stanford MT system and does all those things that I've described above. He gave me access to his otherwise non-public system for a little while, and I captured myself doing some translation with his system (see video below). You can see a couple of things in the little video (regardless of whether you read German). First, there are several translations for each word that the MT suggests. You can see them by just hovering over the source text. Second, you can see that the machine translation suggests a translation for both of the segments (the grey text in the target fields), which continuously changes as I decide on my ongoing translation.
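For readers who like to see the mechanics, here is a minimal sketch of the interaction pattern described above: one engine returns several candidate phrases per source fragment, the candidates are filtered by the translator's keystrokes, and the lookup is conditioned on what has already been confirmed in the target. The get_nbest_phrases function and its tiny phrase table are hypothetical placeholders, not the Stanford system or any vendor's actual API.

def get_nbest_phrases(source_fragment, confirmed_target, n=5):
    """Hypothetical engine call: return up to n candidate translations for a
    source fragment, conditioned on what is already confirmed in the target."""
    # A real system would re-query a (customized) MT engine here; this stub fakes it.
    fake_phrase_table = {
        "gehen Sie": ["go", "please go", "you go", "walk", "proceed"],
        "zur Kasse": ["to the checkout", "to checkout", "to the cash desk"],
    }
    return fake_phrase_table.get(source_fragment, [])[:n]

def autosuggest(source_fragment, typed_prefix, confirmed_target):
    """Filter the engine's n-best list by the translator's keystrokes."""
    candidates = get_nbest_phrases(source_fragment, confirmed_target)
    return [c for c in candidates if c.lower().startswith(typed_prefix.lower())]

# The translator has confirmed "go" for the first fragment and has typed
# "to the" while working on the second one.
print(autosuggest("zur Kasse", "to the", confirmed_target="go"))
# ['to the checkout', 'to the cash desk']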



Open ‘Haus’ at TAUS: bring a snack and a drink and join the company. 29 January 2015 and 26 March 2015, Keizersgracht 74, Amsterdam

TAUS Executive Forum 9-10 April, 2015 Tokyo (Japan)

TAUS Industry Leaders Forum 1-2 June, 2015 Berlin (Germany)


Third, my keyboard entries amount to a search through the many suggestions (from one and not many MT systems) that are presented to me as AutoSuggest popups while I translate (and which I can accept with a hotkey). And fourth, every time I make a decision for a term or phrase, the system matches that with something in the source text, which is highlighted accordingly. This helps you to know what still needs to be translated, and it shows what parts the MT system is still going to retrieve. You can see (or you could see if you were a German reader) that, especially toward the end of the second segment, the system makes some erroneous choices about what is being translated and doesn't really do what I want it to do. (When I asked Spence about this, he ended his explanation with "Like I said, English-German shows MT in its worst light ;-)") But those hiccups are really beside the point. The system is not a mature system that is ready for commercial use, but it shows what's potentially possible and what really is right within our reach. This morning I talked to someone about what we currently do with machine translation versus what we should and will be doing, and I mentioned to him that in 2020 we'll look back to 2014 and giggle at the folly of our unproductive approach to something that would soon be so productive. (As Spence said, "There's a big disconnect between what's been done with

translation technology and what can be done.") Here is my -- and I hope our -- mission. Why wait until 2020? Let's lobby the translation environment tool vendors and the MT community (we need them both in this endeavor) to give us the keys to becoming more productive as translators. At the same time, let's encourage them to steer away from a development -- post-editing of machine translation -- that in many cases is not nearly as productive.

Let's lobby the translation environment tool vendors and the MT community to give us the keys to becoming more productive as translators.

I could see a tool like Déjà Vu, traditionally a pioneer with forward-reaching ideas about data assembly and text processing, leading the way in this. I could see developers developing apps for Trados Studio, or the memoQ team making yet another push to put some room between themselves and others. What about some of the SaaS-based solutions (Wordbee, XTM, Memsource, SmartCAT, etc.), or the open-source community around OmegaT? I bet CafeTran's (one-man) development team would be thrilled to work on this challenge. And ... At the same time, there are plenty of MT vendors -- like KantanMT, tauyou, PangeaMT, LetsMT, and others -- who have plenty of expertise to appropriately configure the MT backbone. Let's see who leads the crusade for making technology that we can all benefit from, and let's make sure that we keep on prodding them all forward.



Translation in Africa by Simon Andriesen

Many Africans speak at least 2, often many more, different languages. You would expect that language is a big thing, but for many people it seems to be a non-issue.

The Yellow Pages of Kenya (pop. 44 million, over 60 languages) has just 7 entries under ‘Translations’. Of all 400 GALA members there are a handful in Egypt and just 2 in South Africa; no GALA members in any of the other 51 African countries.

Translation simply does not seem to be big in Africa, probably because in countries with lots of people who don’t have much money, studying translation or language is seen as a luxury. And understandably so, as it is not at all certain that a language education will lead to any solid income. In fact, practically none of the 40 or so members of the Kenya Interpreters and Translators Association (KITA) is a full-time translator. They have a different ‘day job’, often not even related to language. Also remarkable is that most of them translate from English into languages such as French, German, or Russian, even though they are not native speakers of any of these languages. They feel that there is no market for, or rather no money in, translating into local languages such as Kikuyu, Kamba, or Swahili. The rates they command for the work they do are rather low: just 2 or 3 Kenya Shillings, which is roughly as many Euro cents. Just as in many other places, translation is undervalued and underrated. Companies, NGOs and governments do not see the importance of translation, they have no budget, and as a result hardly any form of translation infrastructure exists.

Practically none of the 40 or so members of the Kenya Interpreters and Translators Association is a full-time translator.


Translation should be key
Africa has 14% of the world’s population, 28% of the world’s health burden and just 3% of the world’s doctors and nurses. In such a setting access to health information is crucial. Health information is available, for sure, but it is in English or French, depending on whether you are in East or West Africa, and these languages are spoken by less than 20% of the population. Anyone not speaking English or French, respectively, has no access. And that while, in areas with many patients and few doctors, health literacy is key. It saves lives and human suffering. On top of that it has a high ROI: every euro spent on the translation of health information reduces national health expenditure by at least a thousand times that amount.

Anyone not speaking English or French has no access.

Ebola would not have become a deadly crisis and global health threat if people had been better informed. The problem, however, was that in the 3 ‘Ebola countries’ a total of 90 languages are spoken. English warning posters in Sierra Leone and Liberia, and French ones in Guinea, only reach a small part of the population. International organizations simply did not understand that translated warnings would be much more effective. If people had known right from the start that they should never touch the bodily fluids of an Ebola patient and that they should not touch, hug or kiss the body of an Ebola patient before the funeral, Ebola would not have become the devastating epidemic it has become. Translations help build health.

MT system for Swahili
On behalf of the language industries, Translators without Borders (TWB) decided that it is no longer acceptable that people suffer or die because of language. Translators cannot cure diseases, drill water wells, or send money. What



we as translators can do is provide language support where it is needed. That is why, on behalf of TWB, I have set up a health translators’ training center in Kenya in order to increase local translation capacity. We employ 10 of the best former trainees and they focus on Swahili and a few other East African languages. One of the projects we are involved in is the translation of a large corpus of crisis-related information. Whenever nature strikes, a relevant set of translated documents is available in the right language to support aid workers. We also translated many dozens of health articles from Wikipedia, training materials for nurses, subtitles for health videos, and much more.

So far, the center has translated several million words, primarily in the health and crisis intervention domains, and most of it in Swahili.

As can be expected in an immature translation sector, translation tools are not commonly used, and even language graduates of the University of Nairobi have not been exposed to translation memory tools. After our trainees had been introduced to memoQ, generously donated by Kilgray, and started working with it, they felt that they worked at a much higher professional level than before. A full week of training was generously provided by Marek Pawelec, a well-known memoQ trainer, who came to Nairobi on his own time, airfare donated by Kilgray. Since then, not a single paragraph has been translated outside this TM tool! So far, the center has translated several million words, primarily in the health and crisis intervention domains, and most of it in Swahili. All these translations are obviously used by the NGOs who asked for our support, but the corpora (and TMs) are also used to train

a machine translation engine that will eventually support the (raw) translation of additional information into Swahili, and which will eventually be publicly available. This MT engine is part of the crisis intervention translation project; the pilot involves Swahili and Somali, but the project was set up with the whole developing world in mind.

Terminology, or lack thereof
One of the issues for African translators is that for many languages no reliable dictionaries or documented grammar exist. Sometimes there is no translation for an English term. Take ‘cancer’. This is a relatively new concept that, until recently, was not relevant for Africa, as many people would simply not live long enough to develop cancer (as Johan Cruyff once said: ‘Every disadvantage has its benefit’). With increased life expectancy more people now get old enough to get cancer. In many African languages no term existed. Initially it was simply described, for example as ‘wound in the breast that does not go away’, but after some time a general term for cancer was adopted, for example (in Swahili) ‘saratani’. Another example is ‘kwashiorkor’, the term for a certain form of malnutrition. The term comes from the Ghanaian language Ga and literally means ‘sickness a baby gets when the new baby comes’. It is interesting how a term describes what must have gone on for


centuries, but the link between the new baby and the disease was not understood until recently, when it became clear that the birth of a new baby means that the older baby no longer gets the ever-important breast milk...

Another example is ‘aids’. Not sure if it is true, but I was told that in Luhya, one of the 50 or so Kenyan languages, the term aids was described as ‘disease that you get after you sleep with your neighbour’s widow and then the widow dies and eventually you die’. Again, the term illustrates that people knew exactly what happened, although nobody knew how or why.

For more information on TWB, visit www.translatorswithoutborders.org

Simon Andriesen
Simon is CEO of MediLingua, based in The Netherlands, which is fully focused on translation, localization and testing of pharmaceutical, clinical trial, biomedical and medical technology information. He is also a board member of the language NGO Translators without Borders and manages TWB’s translation center in Nairobi, Kenya, which he visits once every few months. He is an advisory board member of the Life Sciences Roundtable at Localization World, a series of high-level conferences about translation and localisation, and is a frequent speaker at conferences about language, medical translation, medical writing, and readability testing.

The last issue of Revista Tradumàtica focuses on translation quality. It highlights some of the most interesting topics of the area. The articles from the pen of renowned authors (both academics and industry professionals) lead us into the secrets of translation quality evaluation, post-editing, sampling and MT... and, of course, offer some more insights on new trends in the translation industry. Enjoy reading! See http://revistes.uab.cat/tradumatica



The Proof of the Pudding by Attila Görög

"The

proof of the pudding is in the eating."

This

old

17th century and is widely Spanish author Cervantes in his world famous novel “The Ingenious Gentleman Don Quixote”. It can be paraphrased with You can only saying dates back to the

translation quality evaluation needs to re-focus on a number of cost-effective, practical issues.

attributed to the

say something is a success after it has been tried out.

Applying

it to translations you could say: the

test of a translation is in its use.

The

question here

is: who is eating the pudding i.e. who is using the translation and for which purpose?

And

what taste

do they have?

The changing perception of translation quality has received much attention in both academia and industry. In the past, human or publication quality was the only target for translation buyers and vendors. One translator said on a translation forum a couple of years ago: "When I translate, my main aim is to produce a document which reads as if it were originally written in English."

Though high quality is the target for most translators, some of today’s customers may want something else.

Though high quality is the target for most translators, some of today’s customers may want something else. Also, a translator’s work might be excellent in terms of fluency (i.e. it sounds natural and intuitive), but how about the adequacy of the translation (i.e. fidelity to the source text), or errors counted against an error typology (terminology, country standards, formatting, etc.)? The translation can be of very high quality according to some criteria and a bad translation according to others. In other words: fluency is just one facet of a many-sided question.

Compliance vs. acceptance
Today, there is an increasing appetite for a new approach to quality within the industry. Quality is when the customer is satisfied. As a result, translation quality evaluation needs to re-focus on a number of cost-effective, practical issues.


First of all, a translation is expected to fulfill certain basic criteria (compliance) in order to satisfy the "average" user. For this reason, each evaluation project should measure the degree of compliance between translated content and a benchmark that is based on predefined (and hopefully in the future standardized) quality levels (e.g. publication quality, expert quality, human quality, transcreation, full post-edit, light post-edit, raw MT output, etc.).

It adds to the confusion that many of these quality levels are undefined, vague and hard to measure.

These quality levels (or quality types if you will) should be specified beforehand by the customer. It adds to the confusion that many of these quality levels are undefined, vague and hard to measure. Note that I’m now only focusing on compliance and not on acceptance. Compliance does not necessarily mean acceptance by the customer or the user. A pudding can be a perfect pudding according to standards but if the person eating this pudding is not satisfied with the taste, the smell, the packaging, the price or anything else because of some personal preference, the product I deliver is simply not satisfactory. It's a good pudding according to certain criteria but it's not according to others. In order for it to be accepted, it also needs to fulfill additional user specific requirements. In the past, compliance wasn't really a problem. Buyers paid for (and expected to receive) translations that “read as if they were originally written in the target language”. As a result, vendors could mainly focus on acceptance i.e. on the special requests: deliver quicker, use



short sentences, use the client-specific glossary, avoid negations, etc. Today, even compliance becomes an issue. How can we make sure we provide the right quality? A one-taste-fits-all approach to puddings, or a one-quality-fits-all approach to translations, is no longer a satisfactory model due to changing user needs, purposes, technology, budgets, etc.

Different tastes
One way of accounting for this is asking the consumers what they like. Let them taste the pudding first before we go on producing it in large volumes. The question is: who are they? The answer could be the customer (customer feedback is certainly valuable), an undefined crowd (a large and ad-hoc group of users and non-users), a community (a more specific group of users) or a selected user group (even more specific and smaller).

People, however, differ in their taste and you can only satisfy the majority. You need to find out about the taste of the “average” end user visiting your website, buying your product, reading your marketing text and using your software. We live in a personalized world. Through cookies (now that we’ve mentioned puddings) web content is packaged to our personal needs and preferences. Who will eat the pudding and what is their personal taste? And how can we satisfy them in the most cost-effective way? These are the main questions of today’s translation marketplace.

Bulbs and flowers
Living in the country of tulips, I’ve learnt one thing. The bulb of a tulip is a very important organ. It is the storage place where nutrients are retained and kept safe for the next year. Just like the plants themselves, bulbs also differ in form, size and quality: usually, the bigger the bulb, the more nutrients it contains and the larger the plant and the flower it produces. If you want to augment your chances of growing a nice tulip, get a big bulb. Size does matter… at least when it comes to tulips. And there is a minimum bulb size that can be sold on the market. This is measured in zift. This measurement is used internationally to determine the quality and the price of each bulb. And, of course, there is also an average size. An average zift size for tulips, for instance, is 11-12, which means that the circumference of the bulb is 11-12 centimeters. Do you see what I’m getting at here…?

There are two types of translation vendors today. The first type has a traditional view on quality. They advertise the quality level of their services with superlatives. They make sure that you know: they always provide top quality translations and their services conform to all existing quality standards. They are like the flower shop selling only the biggest and most expensive bulbs.

Nowadays, there is a second, emerging group of translation service providers offering a much more progressive view on quality. They provide multiple levels/types of quality and related pricing. Very often, they use various processes and even different personnel to ensure that the targeted quality is reached in the most effective way and, consequently, within the budgetary restrictions of the client. Translations are just like tulip bulbs, aren’t they? The problem is, common standards for measuring translation quality in a reliable way are missing. There are no internationally used


metrics or benchmarks to compare different levels of quality. Or are there? The industry standard DQF launched in 2011 by TAUS is definitely a good candidate for becoming an international standard. Chances are that this initiative will be internationally recognized in the near future. To draw a parallel with the zift method applied in the flower industry, we could define the minimum requirements for a translation (compliance) and add user preferences (acceptance). We could also specify an average score for translations by calculating industry averages for QE on translations passing the minimum threshold. This way, we would be able to deliver average, superb (maximum) and good enough (minimum) quality and everything in between. Back to puddings, user preferences are expressed in the form of specifications (e.g.: it should be sweeter than the norm, darker, thicker etc). When speaking about translations, we can specify the necessary criteria for compliance calling it an "average". But a better way is to apply a metric (e.g. a mixture of adequacy, fluency, error-typology, etc) and specify a minimum threshold and an average score for that type of content. Everything above the average score is personal preference and belongs to acceptance and should be paid extra for. Everything below qualifies for a discount. But how do we come up with the right metrics and the right benchmarking? And how do we provide thresholds for the different levels of quality: from raw MT output to publishable translation?
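Here is a minimal sketch of the pricing logic suggested above: a composite score built from adequacy, fluency and an error penalty is compared against a minimum (compliance) threshold and an industry average, and the distance from the average adjusts the price. The weighting, thresholds and percentages are invented for illustration and are not part of DQF or any existing benchmark.

def composite_score(adequacy, fluency, error_penalty):
    """Adequacy and fluency on a 0-100 scale; error_penalty is subtracted directly.
    The 50/50 weighting is an assumption, not an industry convention."""
    return 0.5 * adequacy + 0.5 * fluency - error_penalty

def price_adjustment(score, minimum=60.0, average=80.0):
    """Below the minimum: reject. Below average: discount. Above average: premium."""
    if score < minimum:
        return "reject (does not reach compliance threshold)"
    if score < average:
        return f"discount of {round((average - score) / average * 100)}%"
    return f"premium of {round((score - average) / average * 100)}%"

print(price_adjustment(composite_score(adequacy=85, fluency=90, error_penalty=3)))
# premium of 6%
print(price_adjustment(composite_score(adequacy=70, fluency=75, error_penalty=8)))
# discount of 19%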

How do we provide thresholds for the different levels of quality: from raw MT output to publishable translation?


Translation as a five-star hotel
Let's forget about puddings and take another

example, now from the hospitality industry: hotel star ratings. Usually, a national body defines the requirements a hotel needs to fulfill in order to obtain a number of stars. Under 1 star, we talk about a B&B, a hostel or a campsite, but not about a hotel. The same thing could be done for translations. The budget or basic quality translation is 1 star. Comprehensibility would be the right criterion for this basic level. Gisting is another word used in this context. There are different evaluation types that can be useful here, such as readability or usability. Under 1 star the translation is not a translation anymore, since a large part of the text is incomprehensible. To be at least 1 star you need to score at least X on some metrics. For 2 stars you need a higher score, etc. If customers want a 5-star translation, they need to pay for it. If they want it at record speed, they will pay more. They have a lower budget? You offer a 3-star translation. This may sound absurd, but that's where the industry is heading right now… and with good reason.

Content profiling and benchmarks
À propos evaluation types: one method of choosing the right type of evaluation, or the right mix of metrics, is profiling content based on three categories: utility, time and sentiment. Utility refers to the relative importance of the functionality of the translation; time to the speed with which the translation is required; and sentiment refers to the importance of impact on brand image, i.e. how damaging low


Content profiling and benchmarks

À propos evaluation types: one method of choosing the right type of evaluation, or the right mix of metrics, is profiling content based on three categories: utility, time and sentiment. Utility refers to the relative importance of the functionality of the translation; time to the speed with which the translation is required; and sentiment to the importance of the impact on brand image, i.e. how damaging a low-quality translation might be. "A dynamic Quality Evaluation model should cater for variability in content type, communicative function, end user requirements, context, perishability, or mode of translation generation." (O'Brien, 2012)

As mentioned above, once we have specified a basic level, we need to come up with an average score for translation quality. The only way to create an objective benchmark is by collecting a large amount of evaluation data obtained from different evaluators, domains, genres, language pairs and so on. Human evaluation remains subjective unless you have gathered data from a large number of users. Is community evaluation the way to go?
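A sketch of how such a benchmark could be derived from many evaluations follows; the record format (a content profile, an evaluator and a 0-100 score) is a hypothetical data structure, not the DQF schema.

# Hypothetical evaluation records: (content_profile, evaluator_id, score 0-100).
from collections import defaultdict
from statistics import mean

evaluations = [
    ("user-documentation", "eval-01", 78), ("user-documentation", "eval-02", 82),
    ("marketing", "eval-01", 88), ("marketing", "eval-03", 91),
]

def benchmarks(records):
    """Average scores per content profile; only meaningful with many evaluators."""
    by_profile = defaultdict(list)
    for profile, _evaluator, score in records:
        by_profile[profile].append(score)
    return {profile: mean(scores) for profile, scores in by_profile.items()}

print(benchmarks(evaluations))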

Certifying the LSP or standardizing the process is just not enough.

Community evaluation

Last June, a number of participants of the TAUS QE Summit took up the challenge to define best practices for community evaluation of translations. Community evaluation involves capturing the preferences of the target audience by crowdsourcing the quality evaluation process itself. You want the quality of the content to meet the expectations of customers and users, and you also want to engage these groups in the production cycle to increase brand loyalty and help sharpen the focus of your offering and content. This type of evaluation usually involves opening up an online collaboration platform where volunteers help review translated content. These volunteer evaluators build a collaborative community in which they participate as reviewers – of the content and often of one another – and their work becomes visible immediately. By the way, the term community seems to be preferred: it suggests coherence, and people like being part of a community. Even if you start with a "random" crowd, the objective should be to build a community.

But wait a second! Can I trust crowdsourcing? How do I make sure the evaluation provided is on par with professional evaluation? In a recent paper, Goto et al. (2014) from Kyoto University undertook a pilot for crowdsourcing MT quality evaluation for Chinese-to-English translations. They compared crowdsourcing scores to professional scores and found that the average score of crowdsourced (or community) evaluators matched the professional evaluation results.

OK, I get it, but how can I motivate the crowd to do this job? The reasons evaluators choose to join a community vary: an emotional bond with the content, pride in one's native language, or simply the chance to practice language skills. In big, international companies, communities can be formed from highly interested users. Evaluation can also be presented as a game. Knowing your community will enable you to choose the trigger that works best.
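A minimal sketch of the kind of sanity check Goto et al. describe: average the community scores per segment and compare them with a professional evaluator's scores. The data and the 5-point agreement tolerance are made up for illustration.

# Made-up scores per segment; the 5-point tolerance is an arbitrary choice.
from statistics import mean

crowd_scores = {"seg1": [70, 75, 72], "seg2": [88, 90, 85]}   # several volunteers
pro_scores = {"seg1": 74, "seg2": 86}                          # one professional

for seg, votes in crowd_scores.items():
    crowd_avg = mean(votes)
    agrees = abs(crowd_avg - pro_scores[seg]) <= 5
    print(seg, round(crowd_avg, 1), pro_scores[seg], "agree" if agrees else "diverge")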

How do I know that the delivered quality is what I pay for?

Why another standard?

Collecting evaluation data on as much content, and from as many evaluators and sources as possible, will give us an idea of the thresholds belonging to the different levels of quality. But how do you prove that your translation reaches a given level? Is a quality mark for translations a viable option in today's translation industry? Of course, ISO standards and certificates to ensure quality are available. Why should we certify the product itself (the translation) if the maker (the LSP) and the underlying translation process or quality management process already match certain standards?

My answer is: certifying the LSP or standardizing the process is just not enough. Technology changes, processes change and translators come and go. It might work if you repeated the certification every quarter or every month, but even then it remains unreliable at the product level. We need to know the exact quality of the final product. Moreover, customers need to be able to specify the quality level they desire and can afford – and most customers have no clue what to say when you ask them about the quality level they want. As I mentioned earlier, there are vendors today offering translation services "tailored to your needs": from budget translations through first-draft translations to professional translations and transcreation. Based on the content and the purpose, you pay for the quality level you choose. Some LSPs ask you to specify the quality level when requesting a quote; the levels may range from free to expert translation, each with its own definition. Yet another company introduced the translation strip ticket: "a quick and economical translation service for all of your short texts". The same company is now hiring translation students to do the translating. I like the idea of choice here. You can choose to pay less and translate more at a lower quality level; you can save on some content and invest more in other content. My only problem, again, is: how do I know that the delivered quality is what I pay for? Can you prove that? Budget translation is too broad a category. Light post-edit is too vague. Raw MT output varies from domain to domain. The price might be specific, but not the exact quality I obtain for that price. It should be quantifiable. It should be comparable.

You can't trust only one person's taste.


Note: I'm talking about individual translations here and different levels (types if you like) of quality. The process (use of MT, light vs full post-edit etc) as well as the pricing won't prove anything about the quality level of the end product.

Independent evaluation

In my opinion, an independent evaluation of translations is the only way to obtain a neutral quality mark for them. There are two options for doing that. The first is to use automatic evaluation that is based on a widely accepted standard and correlates well with human evaluation.

In real estate, you can ask an independent agency to help you assess the quality or value of a house. Why not do the same for translations?

Automated metrics like BLEU, TER, GTM, etc. compute how similar a translated sentence is to a reference translation or to previously translated data; the assumption is that the smaller the difference, the better the quality. Automated metrics emerged to address the need for objective, consistent, quick and cheap evaluations. The problem is that they require a reference translation, which is unrealistic in an industry setting. They might be useful when training engines, but for measuring and comparing the quality of the end product they are of very limited use. And what is quality in computer language? What exactly are these algorithms calculating?
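To illustrate what such algorithms actually calculate, here is a deliberately simplified, BLEU-like sketch in Python: clipped unigram precision of a candidate against a single reference. Real metrics such as BLEU add higher-order n-grams and a brevity penalty; this is not the real formula.

# Simplified illustration of a reference-based metric; not the real BLEU formula.
from collections import Counter

def unigram_precision(candidate, reference):
    """Share of candidate words that also occur in the reference (with clipping)."""
    cand, ref = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum(min(count, ref[word]) for word, count in cand.items())
    return overlap / max(sum(cand.values()), 1)

print(unigram_precision("the cat sat on the mat", "a cat sat on the mat"))  # ~0.83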


A better option is to send the translation to a third-party evaluation service: not an in-house reviewer or translator, nor another LSP, but a vendor-neutral party using standard QE tools. Such a service is now available in the TAUS DQF platform. I know, your first reaction is: this is too expensive. And I agree that, right now, it would be impossible to have each and every translation certified by a third party, but large and critical projects definitely deserve such treatment. There is also something called automatic sampling, which helps reduce the cost by reducing the volume in a systematic way. Automatic sampling will be added to the TAUS DQF platform in the near future.
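A sketch of the systematic-sampling idea follows; the sampling rate is an arbitrary choice of mine, and how DQF will implement sampling is not specified here.

# Select every n-th segment for independent review; rate chosen for illustration.
def systematic_sample(segments, every=10):
    """Return a predictable subset of segments for third-party evaluation."""
    return [seg for i, seg in enumerate(segments) if i % every == 0]

segments = [f"segment {i}" for i in range(1, 101)]
print(len(systematic_sample(segments)))  # 10 of 100 segments go to review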

Maybe not. But it’s the producer who decides when the recording is satisfactory and when the band is ready to leave the studio.

There is nothing new under the sun. This approach to quality has actually been with us for a long time. Just look at cars, houses and, of course, puddings: a price has been set for each type of product based on its quality level or extra features. In real estate, you can ask an independent agency to help you assess the quality or value of a house. Why not do the same for translations?

Evaluating puddings and evaluating translations have several things in common. You can't trust only one person's taste: you need many evaluators in order to avoid subjectivity and to gain valuable insight into basic and average levels of quality, minimum requirements, compliance and acceptance. And to be able to prove that you fulfill the minimum requirements of a quality product, you need to create benchmarks based on the different attributes of your product (be it translations or puddings).

TAUS took up the challenge to provide these benchmarks through its DQF initiative.

Translators as musicians

In a recent article, Nataly Kelly wrote the following: "Translators prioritize quality, whether the customer does or not. Ask a translator what kind of quality is acceptable, and a professional will tell you, 'Only the best.' Would a musician be happy with a sub-par performance? No, and neither are translators." Sure, translators might not like providing imperfect translations. But let's face it: most translations are imperfect, because there is a lack of time and resources, because we are all human, and because the customer doesn't want to pay for perfection. Would a musician be happy?

Creating such benchmarks is to be done by combining evaluation types and collecting huge amounts of evaluation data. Community evaluation might be one way for the industry to harvest evaluation data on a large scale and to create benchmarks, and TAUS took up the challenge to provide these benchmarks through its DQF initiative. Finally, to show that you provide the right quality, a translation quality mark acknowledged by the industry and provided by an independent third-party service would be the most credible way to go. Who knows, maybe one day we will be able to assign such a quality mark to translations automatically. But until that time, the proof of the pudding remains in the eating.

Attila Görög
Attila Görög has both business and research in his veins. Having been involved in various national and international language technology projects over the past 10 years, he has specialized in PEMT, terminology, quality evaluation, standardization and the re-use of translation material. Attila is interested in globalization issues and projects involving CAT tools, and in preparing translators and LSPs for the future through webinars and workshops. As a product manager at TAUS, he is responsible for the TAUS Evaluation platform, also referred to as the Dynamic Quality Framework or DQF.



Contributors

Reviews

Mike Tian-Jian Jiang
Mike was the core developer of GOING (Natural Input Method, http://iasl.iis.sinica.edu.tw/goingime.htm), one of the most famous intelligent Chinese phonetic input method products. He was also one of the core committers of OpenVanilla, one of the most active text input method and processing platforms. He has over 12, 10 and 8 years of experience with C++, Java and C#, respectively, and is also familiar with Lucene and Lemur/Indri. His most important skill set is natural language processing, especially Chinese word segmentation based on pattern generation/matching, n-gram statistical language modeling with SRILM, and conditional random fields with CRF++ or Wapiti. Specialties: Natural Language Processing, especially pattern analysis and statistical language modeling; Information Retrieval, especially tuning Lucene and Lemur/Indri; Text Entry (Input Method).

Andrew Joscelyne
Andrew Joscelyne has been reporting on language technology in Europe for well over 20 years. He has also been a market watcher for European Commission support programs devoted to mapping language technology progress and needs. Andrew has been especially interested in the changing translation industry, and began working with TAUS from its beginnings as part of the communication team. Today he sees language technologies (and languages themselves) as a collection of silos – translation, spoken interaction, text analytics, semantics, NLP and so on. Tomorrow, these will converge and interpenetrate, releasing new energies and possibilities for human communication.

Brian McConnell
An inventor, author and entrepreneur, Brian has founded four technology companies since moving to California in the mid 1990s. His current company, Worldwide Lexicon, focuses on translation and localization technology. In September 2012, his company launched xlatn.com, an online buyers guide and consultancy for translation and localization technology and services. In March 2013, they launched www.dermundo.com, a multilingual link sharing service that enables users to curate and share interesting content across language barriers. Specialties: Telecommunications system and software design with an emphasis on IVR, wireless and multi-modal communications; translation and localization technology.

Amlaku Eshetie
Amlaku earned a BA degree in Foreign Languages & Literature (English & French) in 1997, and an MA in Teaching English as a Foreign Language (TEFL) in 2005, both at Addis Ababa University, Ethiopia. He was a teacher of English at various levels until he switched to translation and localisation in 2009. Currently, Amlaku is the founder and manager of KHAABBA International Training and Language Services, where he has built a large client base for services such as localisation, translation, editing & proofreading, interpretation, voiceovers and copywriting.


Contributors

Perspectives

Jost Zetzsche
Jost Zetzsche is a certified English-to-German technical translator, a translation technology consultant, and a widely published author on various aspects of translation. Originally from Hamburg, Germany, he earned a Ph.D. in the field of Chinese translation history and linguistics. His computer guide for translators, A Translator's Tool Box for the 21st Century, is now in its eleventh edition and his technical newsletter for translators goes out to more than 10,000 translation professionals. In 2012, Penguin published his co-authored Found in Translation, a book about translation and interpretation for the general public. His Twitter handle is @jeromobot.

Luigi Muzii
Luigi Muzii has been working in the language industry for more than 30 years as a translator, localizer, technical writer, author, trainer, university teacher of terminology and localization, and consultant. He has authored books on technical writing and translation quality systems, and is a regular speaker at conferences.

Nicholas Ostler
Nicholas Ostler is author of three books on language history: Empires of the Word (2005), Ad Infinitum (on Latin, 2007) and The Last Lingua Franca (2010). He is also Chairman of the Foundation for Endangered Languages, a global charitable organization registered in England and Wales. A research associate at the School of Oriental and African Studies, University of London, he has also been a visiting professor at Hitotsubashi University in Tokyo and L.N. Gumilev University in Astana, Kazakhstan. He holds an M.A. from Oxford University in Latin, Greek, philosophy and economics, and a 1979 Ph.D. in linguistics from M.I.T. He is an academician in the Russian Academy of Linguistics.

Lane Greene
Lane Greene is a business and finance correspondent for The Economist based in Berlin, and he also writes frequently about language for the newspaper and online. His book on the politics of language around the world, You Are What You Speak, was published by Random House in Spring 2011. He contributed a chapter on culture to the Economist book "Megachange", and his writing has also appeared in many other publications. He is an outside advisor to Freedom House, and from 2005 to 2009 was an adjunct assistant professor in the Center for Global Affairs at New York University.


Directory of Distributors

Appen – Appen is an award-winning, global leader in language, search and social technology. Appen helps leading technology companies expand into new global markets.
BrauerTraining – Training a new generation of translators & interpreters for the Digital Age using a web-based platform + cafeteria-style modular workshops.
Capita TI – Capita TI offers translation and interpreting services in more than 150 languages to ensure that your marketing messages are heard – in any language.
Cloudwords – Cloudwords accelerates content globalization at scale, dramatically reducing the cost, complexity and turnaround time required for localization.
Concorde – Concorde is the largest LSP in the Netherlands. We believe in the empowering benefits of technology in multilingual services.
Crestec Europe B.V. – We provide complete technical documentation services in any language and format in a wide range of subjects. Whatever your needs are, we have the solution for you!
Global Textware – Expertise in many disciplines. From small quick turnaround jobs to complex translation. All you need to communicate about in any language.
Hunnect Ltd. – Hunnect Ltd. is an MLV with innovative thinking and a clear approach to translation automation and training post-editors. www.hunnect.hu
KantanMT.com – KantanMT.com is a leading SaaS-based statistical machine translation platform that enables users to develop and manage customized MT engines in the cloud.
Kawamura International – Based in Tokyo, KI provides language services to companies around the world, including MT and PE solutions to accelerate global business growth.
KHAABBA International Training and Language Services – KHAABBA is an LSP company for African languages based in Ethiopia.


Lingo24 – Lingo24 delivers a range of professional language services, using technologies to help our clients & linguists work more effectively.
Lionbridge – Lionbridge is the largest translation company and #1 localization provider in marketing services in the world, ensuring global success for over 800 leading brands.
Moravia – Flexible thinking. Reliable delivery. Under this motto, Moravia delivers multilingual language services for the world's brand leaders.
Rockant Consulting & Training – We provide consulting, training and managed services that transform your career from "localization guy/girl" to strategic adviser to management.
Safaba Translation Solutions, Inc. – A technology leader providing automated translation solutions that deliver superior quality and simplify the path to global presence unlike any other solution.
Sovee – Sovee is a premier provider of translation and video solutions. The Sovee Smart Engine "learns" translation preferences in 6800 languages.
SYSTRAN – SYSTRAN is the historic market provider of language translation software solutions for global corporations, public agencies and LSPs.
tauyou language technology – Machine translation and natural language processing solutions for the translation industry.

VTMT – As of 2013, VTMT sells translations made by man & machine. VTMT uses only PEMT, returning good translations quickly and for a fair price.
Welocalize – Welocalize offers innovative translation & localization solutions helping global brands grow & reach audiences around the world.


Industry Agenda

Upcoming TAUS Events
TAUS Executive Forum – 9-10 April, 2015 – Tokyo (Japan)
TAUS QE Summit Dublin – 28 May, 2015 – Dublin (Ireland), hosted by Microsoft
TAUS Industry Leaders Forum – 1-2 June, 2015 – Berlin (Germany)
TAUS Annual Conference – 12-13 October, 2015 – San Jose, CA (USA)
TAUS QE Summit San Jose – 14 October, 2015 – San Jose, CA (USA), hosted by eBay

Upcoming TAUS Webinars
TAUS Translation Quality Webinar: Benchmarking MT Engines – 21 January, 2015
TAUS Translation Technology Showcase: Iconic Translation Machines – 4 February, 2015
Content Profiling for QE – 18 March, 2015

TAUS HAUS
TAUS Office Amsterdam, Keizersgracht 74 – 29 January, 2015
TAUS Office Amsterdam, Keizersgracht 74 – 26 March, 2015

Industry Events
Localization World – 13-15 April, 2015 – Shanghai (China)
4-6 June, 2015 – Berlin (Germany)

Do you want to have your event listed here? Write to editor@taus.net for information.



MACHINE TRANSLATION FOR GLOBAL BUSINESS

ACCURATE. AGILE. ADAPTABLE. AUTOMATED. These are the elements that will ensure success for global business. Ensure YOUR success. Machine Translation is NOT a “one-size-fits-all” proposition. Choose a provider with proven expertise, industry leadership, proprietary technology, and happy clients.

CHOOSE SAFABA. Contact us to get started. +(412) 478-2408 • safaba.com


