Good translations start with good source content

If you’re surprised to hear that the biggest impact you can have on your translation budget lies with your source content, you’re not alone. Shoring up your source content seems counter-intuitive, but it’s exactly the right strategy to get the most value from your translation and localisation projects.

Organisations can achieve a 50% to 80% reduction in translation costs and a 30%-plus reduction in delivery time by implementing best practices around managing source content. The Chartered Institute of Procurement and Supply says that operating in an omnichannel environment is increasingly a part of supply chain challenges – so streamlining and cost reduction are as important as thinking about the customer experience.

Translation is the too-late phase

It’s hard for translators to work in today’s business environment. They sit perpetually at the end of the supply chain in any iteration of content production, and they are asked to produce localised content – that is, translations that have been adapted for suitability in local markets – within often impossibly short timeframes, which makes it increasingly difficult to meet expectations of quality.

More often than not, translators can see the problems upstream in the supply chain, but find themselves unable to effect any changes that would make the situation easier for themselves or their clients. They may not have direct access to the actual client, as a translation agency sits between them and acts as gatekeeper, and they may not even have reliable access to the tools that could improve the production process.

The client, meanwhile, struggles to get the content out to the translation agency and back again smoothly. There continually seem to be bumps in the process that cause delays, mistranslations, or increased administrative overhead – needless cutting and pasting, for example.

It may be of some comfort, then, to know that good practices start at home, to mangle a perfectly good saying. Having a sound production process and robust source-language ecosystem lays the foundation for smooth development of localised content. In turn, this makes it easier to integrate the localised content into your products, websites, apps, knowledge bases, and content hubs.

Ready to adopt good localisation practices

Companies become willing to invest in content as an asset only when they reach the stage in their content maturity model where they recognise that content is an important part of their product or communication strategy. For companies not at that level of awareness, the rest of this article will not resonate. This is an important assumption, as in-house practices often reveal that companies are willing to live with broken content processes all along the line. They may say that content is king, but the king is shackled in a dungeon, and the keys have gone missing.

For the companies that are at the level of the maturity model where they are ready to take action, we make the following assumptions.

  • The company values content for its business value. Content isn’t considered an afterthought that fills in the pretty design, but a work product in its own right. In other words, the company recognises that content is the way that customers understand the products, services, instructions for use, value proposition, and the brand itself.
  • The company recognises that content production is not a commodity, and so does not fit the traditional supply chain model. Content returns in various iterations – new version, new language, new revision, and so on – and needs to be managed with the same care as other work products with iterative processes, such as code.
  • The company recognises that content is intrinsically different from data, and manages content with checks and balances suitable to it.
  • The company takes an equal interest in customers across all of its markets, and aims to give them as much respect as the customers in the primary market.

Meeting these assumptions matters: organisations that have not reached this stage of awareness are unlikely to be willing or able to move to an operational model where they can optimise the management of localised content.

Put controls on source content

The single biggest impact you can have on your localisation efforts is to get your source content in order: managing source content well is the foundation of good translations. Ideally, an organisation creates a superset of its source content and re-uses it across all of its output channels. This model delivers a tremendous return on investment, and the more languages you produce in, the greater the return. Managing source content well means making the most of semantic structure and metadata tags, which help computer systems understand what the content is about and, as a result, how to translate it more effectively.
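
As a loose sketch of what that can look like in practice – the field names here are illustrative, not an industry schema – source content can be stored as structured units whose metadata tells downstream systems what each piece is, where it’s re-used, and how to treat it during translation:

    # A minimal sketch of semantically structured source content.
    # The field names are illustrative, not a standard schema.
    content_unit = {
        "id": "warning.battery.overheat",
        "type": "safety-warning",       # semantic role, not visual styling
        "lang": "en-GB",                # source language
        "text": "Do not leave the charger unattended while in use.",
        "metadata": {
            "translate": True,                     # machine-readable flag
            "protected_terms": ["PowerCell X2"],   # hypothetical brand name
            "channels": ["manual", "web", "app"],  # re-use targets
        },
    }

A translation pipeline can then decide, per unit, whether to send the text out for translation and which terms to leave untouched.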

Make your content translation-friendly

There are several writing theories with principles that apply to localised content. The principles of the Plain Language movement, for example, are a way to ensure that content is accessible to everyone. Controlled vocabulary is another technique you can borrow from to ease confusion when terms need to be translated. Both approaches agree on avoiding jargon, idiom, slang, and euphemism, which are harder to translate and often meaningless in the target language. Pay attention to colours, gestures, and images, too. For example, there is no hand gesture that is not offensive in some culture. (Even the Facebook “thumbs up” for Like is a rude gesture in an entire region of the world.) Professional writers and translators will spot these problems and point them out or correct them.
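
As a toy illustration of a controlled-vocabulary check – the term list and suggested replacements are invented for the example – a short script can flag hard-to-translate jargon before it ever reaches a translator:

    # A toy controlled-vocabulary check against a hand-maintained term list.
    # Real authoring-assistance tools are far more sophisticated.
    FLAGGED_TERMS = {
        "leverage": "use",
        "touch base": "contact",
        "ballpark figure": "estimate",
    }

    def check_terms(text: str) -> list[str]:
        """Return advice for jargon that is hard to translate."""
        lowered = text.lower()
        return [f"Replace '{term}' with '{plain}'"
                for term, plain in FLAGGED_TERMS.items()
                if term in lowered]

    print(check_terms("Let's touch base about a ballpark figure."))
    # ["Replace 'touch base' with 'contact'",
    #  "Replace 'ballpark figure' with 'estimate'"]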

The three most common techniques are:

  • translation – a faithful word-for-word rendition in another language
  • localisation – translation with additional compensation for differences in the target markets
  • transcreation – completely changing the message, if necessary, to make it meaningful to the target audience.

Transcreation is obviously the most resource-intensive, and is typically reserved for marketing and other persuasive content.

Make your content interoperable

Industry has hundreds, if not thousands, of content standards that help store content, move it between systems, move it through production, and so on. Your web or software developers may know the W3C standards that relate to the Open Web Platform, accessibility, the semantic web, and the Web of Devices (the Internet of Things). They may be less familiar with XLIFF, an interchange format commonly used to move content through the localisation process, or with image standards such as SVG, which has a handy text layer that can store translations in multiple languages on a single image as metadata. Knowing the standards, and deciding which ones apply to your projects, can dramatically ease workflows and save significant time and money.
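
To make XLIFF a little more concrete, here is a minimal sketch of generating an XLIFF 1.2 file with Python’s standard library; the file name and content are invented for the example:

    import xml.etree.ElementTree as ET

    # A minimal XLIFF 1.2 document: one translatable unit, English to German.
    xliff = ET.Element("xliff", version="1.2")
    file_ = ET.SubElement(xliff, "file", {
        "original": "homepage.html",
        "source-language": "en",
        "target-language": "de",
        "datatype": "html",
    })
    body = ET.SubElement(file_, "body")
    unit = ET.SubElement(body, "trans-unit", id="1")
    ET.SubElement(unit, "source").text = "Welcome to our store."
    ET.SubElement(unit, "target").text = ""  # filled in during translation

    ET.ElementTree(xliff).write("strings.xlf", encoding="utf-8",
                                xml_declaration=True)

Because the format is standardised, any competent agency or TMS can consume the resulting file without custom engineering on either side.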

Use established workflows

When you use industry-standard workflows for translations, your project can go around the world in a day or two and be translated with a minimum of drama. A typical workflow is to export well-formed content (going back to those interoperability standards) to a competent translation agency through a translation management system. The agency runs the content through your translation memory, subjects the new content to machine translation, and then has it post-edited by a qualified translator. The quality-checked content is pushed back into your content repository, ready for processing. Now you can see how managing your source content affects the production efficiency of your translated content.
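
As a sketch of that round trip – every function below is a hypothetical stand-in for your own CMS and TMS integrations, not a real API – the whole workflow reduces to a short pipeline:

    # A hypothetical sketch of the round trip described above.
    # Each function is a placeholder for a real CMS/TMS integration.

    def export_from_cms(doc_id: str) -> str:
        return f"<xliff><!-- content of {doc_id} --></xliff>"  # stub export

    def machine_translate(xliff: str, lang: str) -> str:
        return xliff  # stand-in for the TM pass plus an MT engine

    def post_edit(draft: str) -> str:
        return draft  # stand-in for a qualified translator's review

    def import_to_cms(doc_id: str, lang: str, content: str) -> None:
        print(f"Stored {lang} version of {doc_id}")

    def localise(doc_id: str, target_langs: list[str]) -> None:
        xliff = export_from_cms(doc_id)             # 1. export well-formed content
        for lang in target_langs:
            draft = machine_translate(xliff, lang)  # 2. translation memory + MT
            final = post_edit(draft)                # 3. human post-editing
            import_to_cms(doc_id, lang, final)      # 4. back into the repository

    localise("homepage", ["de", "fr", "ja"])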

Use the right tools

The operational overhead of managing translations manually can be significant. It’s possible to bring down that overhead by using some industry-standard tools. Translation processes have become very sophisticated, and these tools are at the heart of automation and scale.

  • Translation memory. At the most basic level, a translation memory is a must. A professional translation agency will use it to compare new source sentences with previously translated ones, so you never pay to translate the same sentence twice (a bare-bones sketch follows this list). You own your translation memory, though, and are entitled to have the file for your own use – with other agencies, for example.
  • Translation automation. At the next level is project automation. If you translate or localise content regularly, a translation management system (TMS) can dramatically improve your processes. Source-language files are passed to the TMS, which handles everything from calculating the number of words to be processed, to passing the files to translators and collecting the translated content, to calculating costs and generating invoices, to passing the translated content back into your content editing system in the appropriate file or database structure.
  • Machine translation. The larger your project, the more likely you are to use machine translation as the first pass at translating content. Machine translation happens before translators polish up the language in what is called the post-editing phase.
  • Content optimisation. The larger, more advanced organisations use software that scans the source content for not just spelling and grammar, but also for consistency, form, and harder-to-measure things like tone and voice. This sophisticated software can also offer authoring assistance to keep cost-sucking language problems from entering the body of content at the source.
  • Managing content as components. Organisations that produce masses of content use authoring environments called component content management systems (CCMS), where source content is managed at a granular level. Content gets created once and re-used wherever it’s needed – an approach known as CODA (Create Once, Deliver Anywhere), which began as a topic-based, modular way of developing content and has become the centre of multi-channel publishing strategies.
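
For the translation memory mentioned above, here is a bare-bones sketch – exact-match lookup only, with an invented example segment; commercial tools add fuzzy matching, segmentation rules, and much more:

    # A bare-bones translation memory: exact-match lookup only.
    tm: dict[str, str] = {
        "Click Save to continue.": "Klicken Sie auf Speichern, um fortzufahren.",
    }

    def translate_segment(source: str) -> str | None:
        """Return a previous translation if this exact sentence exists."""
        return tm.get(source)

    match = translate_segment("Click Save to continue.")
    print(match or "No match - send to a translator.")

Even this trivial lookup shows why consistent source sentences matter: rephrase a sentence and the match is lost, and you pay to translate it again.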

Up-skill your content developers

Using the right kind of content developers to manage your content is important, and investing in the right skill sets pays for itself in no time. The skills that content developers such as technical communicators, user assistance writers, and content designers bring to the table are often learned while working on larger teams with other skilled content professionals. At the least-suited end of the scale for writing content for translation sit product managers and software or web developers: they bring important skills to the table, but creating content isn’t one of them.

A word about Agile projects

Corporations using professional writers – that is, technical authors who understand how to manipulate the technical side of content to automate and scale – generally get source language delivered within the same sprint as the code, and translated content delivered one sprint later. This may seem like an over-generalisation, but the observation comes from years of experience and discussions with dozens of technical communication managers around the world. The work that makes this possible happens up front, in Sprint 0. This is where the story arc gets determined, based on the customer journeys, along with the work that projects the number of target languages, the output devices, the content connection points, and so on. That groundwork lets the team set up a content framework and lifecycle that anticipates those requirements.

What you can fix, what you can’t

Strategic management of content opens up real possibilities. These techniques will benefit larger companies that have:

  • translation and/or localisation needs
  • variants in language usage across multiple markets
  • cross-market content or native languages in alternative markets
  • cross-border commerce adaptation of language
  • usage differences, such as outputs to multiple devices
  • omnichannel marketing environments
  • rising use of social content
  • a strong need to respond to growth that involves more content

There are no silver bullets to solve localisation problems; to believe that would be naïve. Small companies that have limited translation needs, for example, would struggle to justify putting in a full-blown translation management system. They might need to find a hosted solution where a third party handles the management side of translation. Yet the same principles apply: localisation best practices begin with good source content.

 

Image copyright: Jayel Aheram, Flickr (CC)

What happens when content design crashes into the General Data Protection Regulation (GDPR)?

 

What would it be like to produce content in a total data vacuum? Picture yourself working in a soundproofed, blacked-out box with a computer that can only send but never receive information. You have a brief to design some content, but you haven’t been given much information about your users. You’re going to have to rely on intuition and assumptions about their needs, interests and behaviour. No matter – you’re a resourceful person, so you make the best of it and cobble together some best-guess content. It’s a relief to press send.

Off it goes into the ether and you’ll never have to think about it, the users or their needs again – because there won’t be any feedback. That includes all metrics, page views, click-throughs, bounces and everything else you’re used to for assessing whether your work is fulfilling its aims. It sounds like a recipe for awful content, doesn’t it? It must be – though of course you won’t get to know either way.

Data drives content

For content professionals, such a scenario in the real world is unthinkable. Content is driven by data and databases, from analytics to A/B testing. Data is the beating heart of how content designers think about user needs and what we do to deliver on them. It’s also the biggest weapon in our armoury when it comes to dealing with sceptical and obstructive forces in the organisations we work for.

And yet, the situation above isn’t just a thought exercise. Working in a data void – or at best with a seriously diminished data set – could well become a reality for many of us in a couple of years if we don’t take timely steps to stay compliant with imminent new data protection legislation, according to Hazel Southwell, Data Protection Consultant, speaking at a recent Content, Seriously meetup.

Ignore data protection at your peril

Content producers who ignore the new rules will be destined to launch their content into the void, she warned, like the Soviet scientists who shot Laika, a Moscow street dog, into space with scant means of monitoring her progress and no hope of her survival. The ill-fated dog died from overheating after only a couple of hours and the scientists learned next to nothing from the adventure. At least she got to be the first animal in orbit – which is far more than content producers can hope for in return for their doomed efforts.

Producing content without user research and analytics (both pre- and post-publication) makes it far more likely to be irrelevant to target audiences – and useless to our objectives. More than that, data is the trump card, the invincible ace of spades, in any argument about the direction that content should be taking.

How often does data come to our rescue when subject matter experts are blocking improvements to clarity and readability, or when managers are resistant to important content changes? They can’t argue with the data. Without data in the armoury, we’re fighting blindfolded with both arms tied behind our backs.

Say hello to the General Data Protection Regulation

On 25 May 2018, the EU General Data Protection Regulation (GDPR) will start to apply, making sweeping changes to the rules governing the way we collect, use and store data. It will have an impact on any organisation, whether based inside or outside the European Union, that processes the personal data of people in the EU.

Companies will no longer be able to sidestep data protection obligations because their head office is in the US, say, or their servers are in Vanuatu. If they’re dealing with the personal data of people in the EU, they must comply with the rules. So Brexit will not provide a way out for UK organisations either.

The UK currently has one of the toughest data regimes in the world in the Data Protection Act 1998, backed up by the enforcements of the Information Commissioner’s Office (ICO). But the GDPR knocks that into the shade, not least with sanctions that are designed to bring the global tech behemoths out in a cold sweat. Even the likes of Google and Facebook might think twice about transgressions, faced with fines totalling €20 million or 4% of worldwide annual turnover – whichever is greater.

Personal data will include photos, email addresses, bank details, social media posts, cookies and IP addresses – anything, in fact, that identifies you directly or indirectly in your private, professional or public life. And if you’re processing this data, whether you’re a multinational or working from your front room, whether you’re turning a profit or not, then you’ll need to comply.

It might be a shock for a humble WordPress blogger to find that their use of tools such as Google Analytics (much of which is based on monitoring IP addresses) could fall foul of the law. And their difficulties will be compounded if they deal in personalised content tailored to their audiences – for example, if they use a formula whereby two users might see different paragraphs within a single page depending on their age. It seems the quest to make highly relevant content is about to become even more tortuous.

So how do you comply with the GDPR?

You’ll have to get explicit consent for obtaining and keeping personal data, and that consent must be given freely, rather than extracted as a bargaining chip for accessing your services. You’ll need to ask for it in a clear and obvious way, not just imply that you’re taking it and go ahead.

Having obtained consent fair and square, you’ll have to store it – not only so the ICO can check you’re doing things right, but also so the individuals concerned can see what you have on them. They should be able to transfer their data to other data controllers if they want – what’s being described as a new right of ‘data portability’.

Consent can be withdrawn as well as given, and you’ll have to erase data or correct inaccurate data if requested, or restrict processing data if you get an objection. If the data you’re keeping gets compromised through a security breach you may have to notify the relevant authority, the individual concerned or the public at large.

You’ll have to demonstrate that you’re complying with the GDPR, through policies and procedures, staff training, monitoring, documentation – and if your organisation is large enough, with the appointment of a designated data protection officer and appropriate records of your data processing activities.

Privacy will be prioritised by better design (privacy by design) and through more stringent default settings (privacy by default), and you’ll be encouraged to use data only when strictly necessary for your services.

Privacy fights back

If it sounds tough, that’s because it is. There are some obvious exemptions to the rules – such as for national security, defence, law enforcement, public services, health and so on – but it seems the EU has had enough of companies storing and selling huge quantities of personal information: our interests, health, social background, jobs, wealth, education and much more – information that has very likely been obtained in ways we were not wholly aware of.

While we unwittingly surrender the details of our address books, calendars, emails and map co-ordinates to apps and companies that seem to have no call to know them, many of us are only dimly realising that our most private information is forming part of a vast global trade far beyond our control. Marketing giant Acxiom, for instance, is said to have stockpiled up to 3,000 separate nuggets of information on each of the 700 million people in its files.

In this context, the GDPR could be a welcome rebalancing in favour of the individual. Even so, EU member states still have some flexibility about how they implement many of the GDPR’s 99 Articles – not to mention the uncertainty of how a post-Brexit UK might slot into those arrangements.

There may also be ways to anonymise or ‘pseudonymise’ data so that it can be used without stepping on anyone’s toes, or to make the most of exemptions for statistical research that doesn’t rely on the identifying aspects of the data. The sweep of the legislation may be fixed, but the crispness of its final boundaries is still to be defined.
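
As a minimal sketch of one pseudonymisation approach – keyed hashing of a direct identifier, with invented field names – note that pseudonymised data generally still counts as personal data under the GDPR; the technique reduces risk rather than removing obligations:

    import hashlib
    import hmac

    # Secret key kept separately from the data store; destroying it
    # severs the link back to real identities.
    SECRET_KEY = b"store-this-in-a-secrets-manager"

    def pseudonymise(identifier: str) -> str:
        """Replace a direct identifier with a stable keyed hash, so records
        can still be joined for analytics without naming the person."""
        return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                        hashlib.sha256).hexdigest()

    record = {"email": "reader@example.com", "page_views": "42"}
    record["email"] = pseudonymise(record["email"])
    print(record)  # the analytics copy no longer names the user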

Respect privacy, improve content, win trust

However the cookie in your cache might crumble come May 2018, content strategists must start putting data protection much higher up the agenda now. Content professionals are creative people and will be able to conjure up inventive and unimposing ways for users to give consent about their personal data.

It’s in everyone’s interests that content is engaging and relevant, and it won’t take much for users to understand how important data is for the best in content creation. It will be even more important for content professionals to create the kind of compelling content that will make users care enough to click the consent button – in whatever form it takes – without a second thought.

Many thanks to Hazel Southwell for her contribution to the Content, Seriously meetup.

LinkedIn: https://uk.linkedin.com/in/hazel-southwell-55781412

 
