Summer of content – part 2

What’s in a professional name

In this 4-part series, Rahel Bailie explores and maps the various roles, skills and job titles in content today. Rahel is a renowned content strategist and part of Scroll’s management team.

Does it matter what a content person is called, as long as they get the job done? In the first part of this series, we asked how a person is supposed to make sense of the content landscape. When practitioners can’t even agree on terminology, it’s not surprising that trying to hire staff or contractors, or even commission work, can cause confusion. And looking for a good fit for a job or contract is even harder when companies create a job description for, say, a content marketer, and then put a content strategy title on the job ad. Everyone gets frustrated.

Content job titles are not standardised

One senior manager at a large agency said that she needs to see about 80 CVs before she finds a content strategist who has the skillset she feels should be standard for that role. It’s not that candidates are purposefully trying to inflate their CVs. Content is not a regulated profession, where job titles are attached to specific roles: a paediatrician, a corporate tax lawyer, or an electrical engineer. It does not have a guiding body that standardises practices, methods, and deliverables, such as the Project Management Institute or International Institute of Business Analysis, where you know what to expect when you ask for a PRINCE2 or Agile certification. Content is usually not even a category in professional lists. Content professionals need to shoehorn themselves into categories like ‘Technology’, ‘Consulting’, or the catch-all ‘Advertising, Editorial and Management’.

Job titles differ globally

When I joined Scroll, I struggled to understand how the role of a copywriter differed from a digital content manager. Or how a content designer differed from a technical communicator. For example, in North America, the Society for Technical Communication defined technical writing as “simplifying the complex. Inherent in such a concise and deceptively simple definition is a whole range of skills and characteristics that address nearly every field of human endeavour at some level.”

Technical communicators became synonymous with writers who wrote user-facing content (customers, administrative users, or technical users) for software or hardware, but in reality, they write any informational or enabling content for any audience. I’ve met technical communicators who write everything from consumer instructions, user guides, recipes, medical procedures, and policies and procedures, to documentation for APIs, engineering specifications, and technical marketing datasheets.

In the UK, technical authors seem to occupy a much more niche area: they are the communicators brought in to develop technical content for technical audiences, often using specialised authoring software that allows them to create output at great scale. In North American parlance, a technical writer used to mean a science writer: someone who had some domain knowledge and wrote technical content in that domain. But that’s changing. Technical authors are now more likely to be called technical writers or technical communicators, and the remit is more content development, where writing is a small part of a process that begins with user research and ends with user-centred content.

Guidance writers, technical writers, content designers…

In the UK, writing instructions for non-technical audiences is done by guidance writers, a designation I’d never heard outside of the UK. After some deductive reasoning, I determined that guidance is a combination of informational and instructional content – it ‘guides’ users to complete a task or understand information. Yet, a search for guidance writing seems to point to documents such as standard operating procedures, user guides, and so on.

So far, so good. Now let’s add content designers into the mix. Some searches for guidance writers point to content designers. The differences between content designers and technical communicators or guidance writers are subtle and not codified. So, is a content designer the same as a guidance writer the same as a technical communicator? Seems to be, but not so fast.

The UK government hires technical writers to write technical content for technical audiences – for example, API documentation for developers on their digital teams. There is no mention of the use of specialised software, though in my book, any technical writer worth their salt knows their way around a help authoring tool, even if they’re not provided access to that software in their job. There is also no mention of the methodology, which has multiple aspects, spelled out in the Technical Communication Body of Knowledge (TCBOK).

The UK government has a very clear definition of a content designer, which I’ve described as a writer focused on ‘the UX of content’. There is a prescribed process that starts with user research and evidence-based decisions, and results in user-centred content based on that research. Because of the clarity around the designation, it’s not surprising that industry is asking for “content designers with GDS experience”. There is a certain comfort level in knowing what is expected, both in terms of method and outputs.

To some professionals in the content industry, the content design process seems self-evident: every writer does that, right? After all, the expectations of a content designer are also part of the TCBOK, with a slightly different vocabulary and more variants to the methods. But to others, there is a world of difference: copywriters are given the mandate to “just write X”, whereas content designers are expected to question whether content X is even needed in the first place before starting to write (or rewrite), and then to deliver the content in a new way, if warranted. A content designer might request that a tool be created (what used to be called a wizard and, more recently, an assistant) to deliver the content in a more user-centred way, as technical communicators do.

A rich professional landscape

Once we fill out this cluster of professions with some of the other common designations we encounter in our field, we end up with a rich, though sometimes confusing, professional landscape. Given the breadth and variety of the naming conventions and practices across the content field, how can we navigate this complicated landscape? How do we know whether we’re rejecting a perfectly qualified candidate because of a difference in vocabulary? In the next instalment of the Summer of Content, I take a crack at creating a graphic representation of the various designations that content people wear. Fair warning, though: I’m a word nerd, so my graphic skills are limited. I’ll map out some of the more popular names on a basic grid with liberal annotation.

Summer of content – part 1

Exploring the content landscape

In this 4-part series, Rahel Bailie explores and maps the various roles, skills and job titles in content today. Rahel is a renowned content strategist and part of Scroll’s management team.

Talking about working in the field of content is a bit of a minefield. You can ask what someone does, but their job title, job description, or even self-perception may not match your mental model. You’re a “content evangelist”, you say? Why, how … interesting! And that means you – here you pause, hoping for some clarity about what a content evangelist job might actually entail – promote content, or do you do the copywriting as well?

Whatever the outcome of such awkward discussions, you can be sure of one thing: the answer is likely to be different from what you expected. As a seasoned content strategist, I can look at the job boards on any given day and easily spot a dozen content strategy jobs whose descriptions bear no resemblance to each other, let alone to what I’d describe as content strategy. I imagine it’s the same experience for many of the other content-related areas of expertise in the industry.

The maze of content roles, skills and titles

For example, what is the difference between a copywriter and a digital content developer? In my mind, copy is the editorial side of content – it’s what a content consumer reads, whether that is on paper or on a web page – so a copywriter is the creator of that copy. In the digital space, copy needs to be accompanied by an extra component: metadata.

Without proper metadata to help the copy be found in search, and without well-crafted search result titles and descriptions to entice content consumers to click through to your copy, the task of content creation is not complete. Digital content means taking care of not only the editorial side but also the metadata that makes copy into content.

When looking for digital content managers, is it common to ask for this expertise, or do the hiring agents even know that this is “a thing”, a very important thing, in fact?

Marketing v technical content – how things used to work

There used to be 2 general buckets into which most business content fell.

Persuasive content – that is, content meant to entice readers to buy, or at least to enter the sales funnel – was created by marketing or advertising departments.

Enabling content – that is, content that enabled readers to complete tasks – was created by technical communicators (guidance writers in the UK), or instructional designers, when that content involved training. Sometimes enabling content was created by subject matter experts in the departments themselves, for example, HR policies and procedures.

Changes to the business content landscape

The business content landscape has become large and varied as the genres of content used in business have multiplied. We now have more buckets: persuasive, enabling, social, and what I will call entertainment content. This is not the same genre as television shows or films, but corporate-produced content, such as YouTube videos created to entertain. Each of these genres has multiple sub-genres, and some defy categorisation – for example, is edutainment education, entertainment or marketing?

Shifts in terminology

Complicating this is the way that terminology shifts. Whether it is a human need to “claim and name” or a tendency to ignore history, it makes for some comical confusion around names.

For example, in the early days of the web, there was a transition from creating independent help files (not so affectionately called “help as tumour”) to embedding bits of the help directly in the interface. This became known as embedded assistance. That term is still in use, and academic programs teach methods and best practices for developing and maintaining embedded assistance.

As online interaction became more ubiquitous, the vernacular became “UI strings” or “string tables”, depending on how advanced the software developers were in cooperating with writers to store the content.

Developers who didn’t understand the pain of edits or translation would hardcode UI strings, whereas those who had been through the pain of a translation cycle or two quickly learned to put the embedded assistance into a table.

Later, as start-ups decided they couldn’t afford trained technical communicators, responsibility for UI strings shifted to marketing or UX staff, who did this off the side of their desks. As these companies grew, this type of work became a job in itself, and has been rebranded as UX writing.

Not only has the name changed, but the writing itself is now more brand-focused. To technical communicators with several decades of experience behind them, the work seems to be focused more on copy than content, and more on delight than comprehension. Meanwhile, those who have never heard of embedded assistance are creating their own best practices as they go along.

So, what’s next?

How is a person supposed to make sense of the content landscape? In the next instalment of the Summer of Content, I’ll discuss some of the differences in jobs, both geographically and in core competencies, that make this such an interesting, albeit sometimes frustrating, landscape to navigate.

GDPR: what we did and how

The General Data Protection Regulation (GDPR) comes into effect on 25 May 2018. That’s very soon and a lot of people are feeling understandably nervous about it…

Here’s what we’re doing at Scroll to prepare for GDPR – and why you should care about it.

What is GDPR?

GDPR is a piece of legislation that updates data protection law so it deals with the new ways we use data – like cookies and large-scale data collection. (The legislation GDPR is replacing came into effect in 1995: a lot has changed since then.)

GDPR makes data protection rules more or less the same throughout the EU. It gives data subjects – the people whose data is being held – a lot more rights over their data.

And, slightly terrifyingly, it means companies can be fined up to 4% of their annual global turnover or 20 million euros, whichever is greater, if they don’t comply.

Why should you care about GDPR?

Most people creating and editing content for an organisation will come across personal data (data that can directly or indirectly be used to identify someone) at some point.

As a professional, it’s important to understand a few of the issues and requirements around dealing with personal data, or you could unwittingly put your client and your reputation at risk. As you’ll see when you read further, not handling data properly can have serious consequences…

It’s also handy to have some kind of grasp of GDPR so if a client asks, you can look vaguely knowledgeable!

Some key bits of GDPR and what Scroll has done

GDPR is long and complex, so I can only give you a flavour of it here. Your best bet for comprehensive GDPR information is the Information Commissioner’s Office (ICO) website.

With that disclaimer out of the way, here are some key parts of GDPR and a bit about what we did to prepare.

Holding data lawfully

The GDPR says you have to have a legal basis for keeping or using data. There are a number of legal bases but the ones we use at Scroll are:

  • explicit, opt-in consent – gone are the tick boxes saying ‘tick here if you don’t want to hear from us’: people have to opt in to stuff now
  • to comply with a legal obligation – we keep data to show that a Scrollie has the right to work in the UK
  • to perform a contract or to take steps to enter into a contract – we keep data so we can search for roles for Scrollies

We’ve planned data audits once a year to ensure the only data we hold is covered by one of the legal bases above.

Interestingly, under GDPR, personal data is not just the usual things you’d think of, like name, address and email, etc. It now encompasses web data like location, IP address and cookie data, which makes things a bit trickier.

Being fair and transparent about data you hold

There are other rules about holding data, which are identical or very similar to the current Data Protection Act. They mostly come across as rather reasonable things to demand. For example, you must:

  • only collect data you need
  • tell people clearly why you’re collecting it (we do this via short privacy notices that we show at the point of collecting people’s details)
  • make sure it’s accurate and up to date
  • not use it for any other reason than the one you told people about
  • not keep it longer than you have to

At Scroll, we did a data audit so we now know what data we have and where it’s stored. This means we can keep track of how long we keep information for, who has access to it, why we collected it and who is responsible for it. That helps us stick to these rules.

Keeping personal data secure

Under GDPR, you must keep personal data secure, protecting it from unauthorised use, accidental loss, destruction or damage. Securing your data involves looking at whether your systems are secure and who has access to them, among other things.

As part of our data audit, we classified the data so we knew which was most important to protect – for example, we classified our newsletter sign up list as less sensitive than our Directory, which contains names, past projects, test results and interview details.

We could then make decisions as a team about how to protect the most important data first – we limited access to the sensitive stuff and we’ll be running training on how to handle data properly.

Acting quickly if you have a data breach

If your data is accidentally or unlawfully destroyed, lost, altered, disclosed or accessed, you’ve had a data breach (and you have a problem).

Carphone Warehouse was fined £400,000 when it happened to them. Wetherspoons deleted their entire database rather than risk another breach.

If you have a data breach, unless it’s a breach of data that can’t be used to identify people, you’ll have to report it to the ICO – and soon. If you don’t do it within 72 hours, you could face a fine. You may also have to inform all the individuals concerned, depending on what kind of data it was.

At Scroll, we’ve set up a data breach procedure and a notification form, so we quickly know what to do if it ever happens to us.

Respecting people’s rights around their data

Under GDPR, people have the right to:

  • access their personal data for free
  • have data corrected if it’s wrong
  • object to or stop you processing their data
  • be forgotten (a person can ask you to delete their personal data)
  • data portability (moving data seamlessly from one service provider to another, for example)

Most of these rights are the same ones they had under the Data Protection Act, but with some added extras – eg the right to data portability.

All of these rights make it imperative that you know what data you have on people and where it’s stored, which is why you need – yep – a data audit. If this blog makes you think we’re obsessed with data audits, it’s because we truly are!

A summary of what else Scroll has done

There’s far too much to go into in detail, but we have also:

  • documented our journey to compliance and why we made the decisions we did (GDPR is big on accountability) – this document has been really useful as a ‘to do’ list to check off
  • carried out a risk assessment
  • thought about cookies (still thinking about cookies…)
  • updated Scroll’s data protection policy and privacy policy
  • thought about what ‘privacy by design’ will mean for us if we get a new Customer Relationship Management (CRM) system
  • acted on Mailchimp’s recommendations for compliance (we have a Mailchimp mailing list)

What else you should do about GDPR

Most clients you work for will have data protection policies in place already under the Data Protection Act, and will be strengthening them in readiness for the GDPR. Make sure you’re up to speed with what’s expected of you.

You can also have a flick through the GDPR guidance from the ICO – it’s written in a fairly straightforward, easy-to-understand way and is pretty user friendly, with ‘at a glance’ summaries and checklists.

I hope you learnt something new about GDPR from this blog. If you didn’t… could you get in touch and make sure we’re doing it right?

Digital asset management (DAM) at Content, Seriously

Two industry experts presented on DAM and taxonomies at the latest meetup for people who take content seriously. In this special one-off event, participants got to experience an eclectic corner of London in an even more eclectic venue.

About the venue: Rotherhithe Picture Research Library

The Rotherhithe Picture Research Library is an extensive collection of visual media – photos, drawings, paintings, maps, video, and even costumes – that media producers use to study eras and areas when conducting background research for their films and plays.

What makes this venue and collection unique is the approach that its managing director, Olivier Stockman, takes in managing the collection: the index is completely analogue.

Stockman explains his philosophy behind the decision. He wanted to create an environment of discovery. The idea that someone would do an online search and settle on a single answer belies the richness of the material.

Researching a topic for a film, for example, could involve looking at streets and architecture, typical household items or clothing from that period, or typical work and holiday activities. Call this way of researching the equivalent of the slow food movement, where you’re expected to take the time to savour and digest what’s before you. But more on that later.

Digital asset management (DAM): Theresa Regli

The first speaker was Theresa Regli, one of the top DAM (Digital Asset Management) consultants in the industry and a new transplant to London. Hers was more of a conversation than a formal presentation, in which she answered questions about how DAM systems work and some of the challenges around managing digital assets. Here are some highlights.

About DAM systems

DAM systems are, in some respects, the new kids on the block, though their functionality is growing in sophistication quite rapidly. The need to manage digital assets grew out of organisations such as museums and corporations having large numbers of images that weren’t being stored in ways that were useful for finding and using them later on.

Digital assets aren’t just images

The notion of digital assets is expanding from static images, such as drawings and photos, to items such as video, 3D renderings, and other properties that contribute to virtual reality environments. This could range from gaming companies looking to manage all of the minutiae that get combined in a multitude of ways during the development of games, to multinational corporations managing virtual reality apps that let you see furniture at home before you buy.

Connecting digital assets with physical assets

In the more interesting projects that Regli has worked on, there has been a need to connect the digital assets with physical ones. For example, one multinational had an extensive collection of physical objects from their century-old corporate history, and what was displayed online had to be keyed to its physical location in a warehouse.

Categorisation and data modelling

Whether you’re a company trying to organise your website images or an organisation with complex digital asset needs, Regli warned of the dangers of thinking that a technology will fix what is essentially a categorisation problem. Before pouring data into a DAM system, the organisation must do the up-front work of thinking through the business problem to be solved, analysing the assets, and then creating a categorisation system – a taxonomy or ontology – that forms the foundation of the data modelling to be done within the system. Regli says it may be a hard conversation, but it’s a disservice not to tell clients that buying the system without having the right complement of people to do the preliminary and ongoing work will be a wasted, expensive exercise.

Treasure hunting in analogue

With Regli’s words of wisdom ringing in our ears, participants engaged in a treasure hunt through the stacks of the library. Regli provided a handful of topics to find, and participants could choose which topic to locate. Familiarisation with where the stacks were and how to look through them went fairly quickly, and several people chose “hops” as their topic. Hops played – and to an extent, still play – an important part in the British economy.

Soon the oversized, loosely-bound packets of photos appeared on the desks. One of the photos found was of families picking hops in Kent. This discovery led to a discussion about how families who wanted to take a vacation, but really couldn’t afford one, would go to Kent for a week and pick hops. It seems that Stockman’s discovery method proved itself that evening.

Creating successful taxonomies: Andreas Blumauer

Wrapping up the triple bill of DAM activities, Andreas Blumauer discussed the organisation at the heart of any digital asset management: taxonomies. Organising content for presentation is not as simple as it seems. Presentation needs to happen in context, and the relationships between entities are what provide enough context to give us a better understanding of a topic. Indeed, Blumauer introduced himself using an example of relationship categorisation to demonstrate the principles.

Creating successful taxonomies: Andreas Blumauer (slide 3 from his presentation)

Using recognised standards like SKOS (Simple Knowledge Organisation System)

There are a great number of factors that make a taxonomy successful, and a few of them stood out in Blumauer’s presentation. It’s important to keep in mind that a taxonomy is not meant for presentation, as an information architecture is. A taxonomy is meant for storage and classification, thereby contributing to knowledge.

First, effectiveness depends on the taxonomy being understood by systems, search engines, and so on. This means using recognised standards. SKOS (Simple Knowledge Organisation System) is the W3C standard for representing knowledge organisation systems – and the W3C has worked to ensure that there is alignment between the ISO 25964-1 thesaurus standard and SKOS.

Mapping to create context

Second, effectiveness depends on mappings to create context. Using a simple example, Blumauer demonstrated how connecting terms and labels creates a wider understanding of a topic.

A simple hierarchy is:

  • Glassware
    • Stemware
      • Champagne flute

Non-hierarchical connections would include:

  • Champagne flute is used for Bellinis (and Bellini gets connected back to champagne flutes)
  • Champagne flute is related to the champagne coupe
  • Champagne is served at Tony’s cocktail bar (and Tony’s Bar gets connected back to champagne cocktails)

Mapping is business dependent, so it’s important to build a solid foundation and then to maintain the taxonomy. Nothing stays static, and new connections need to be made on an ongoing basis.
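To make this concrete, here’s a minimal sketch of how that glassware example could be expressed in SKOS using Python’s rdflib library. The example.org URIs and concept names are invented for illustration; a real taxonomy would use the organisation’s own namespace:

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

g = Graph()
EX = Namespace("http://example.org/taxonomy/")
g.bind("skos", SKOS)

# Declare the concepts with their preferred labels
concepts = {
    "glassware": "Glassware",
    "stemware": "Stemware",
    "champagne-flute": "Champagne flute",
    "bellini": "Bellini",
}
for slug, label in concepts.items():
    g.add((EX[slug], RDF.type, SKOS.Concept))
    g.add((EX[slug], SKOS.prefLabel, Literal(label, lang="en")))

# The hierarchy: Glassware > Stemware > Champagne flute
g.add((EX["stemware"], SKOS.broader, EX["glassware"]))
g.add((EX["champagne-flute"], SKOS.broader, EX["stemware"]))

# A non-hierarchical mapping: champagne flutes are used for Bellinis.
# skos:related is symmetric, so Bellini connects back to the flute.
g.add((EX["champagne-flute"], SKOS.related, EX["bellini"]))

print(g.serialize(format="turtle"))
```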

Include semantics in content architecture

Third, it’s important to connect the content lifecycle with a four-layer content architecture that includes a semantic layer. The semantic layer contributes to the success of semantic search, recommendation systems and analytics, and to presenting the right content within content management systems: dynamic content publishing and automatic content authoring.

About the speakers

Theresa Regli worked for many years as a taxonomist and then as a DAM consultant for The Real Story in the US. She is now based in London, where she helps organisations turn content into digital assets, simplify complexity, and realise their potential in the digital world.

Theresa started as a journalist, transitioned to web development and taxonomy, and then became director of content management at a systems integration firm. She has advised over 100 businesses on their digital strategies, including 20% of the Fortune 500. She is the author of the definitive book on managing digital marketing and media assets, Digital & Marketing Asset Management.

Andreas Blumauer is managing partner of the Semantic Web Company, and has experience with large-scale semantic technology projects in various industry sectors. He is also responsible for the strategic development and product management of PoolParty Semantic Suite. Andreas has been a pioneer in the area of linked data and the semantic web since 2002; he is co-founder of the SEMANTiCS conference series, and editor of one of the first comprehensive books on the semantic web for the German-speaking community. Andreas holds a master’s degree in Computer Sciences and Business Administration from the University of Vienna, Austria.

Join one of our meetups

London Content Strategy meetup

A relaxed atmosphere where content professionals can learn about best practices and emerging trends, and network with their counterparts in related fields.

Content, Seriously meetup

This meetup offers in-depth presentations, short workshops, and interactive sessions for professionals who need a deeper understanding of a particular area. Suitable for content people and managers tasked with managing content.

 

Holes in the template: piping content into a web CMS

When companies have large quantities of content – for example, many products, where each one has several pieces of information – that product information probably doesn’t originate from their web content management system (WCMS). The WCMS acts as a ‘presentation layer’ – in other words, a mechanism to display content.

The content doesn’t have to live in a single system. In fact, there may be multiple systems that feed different types of information, both content and data, into the ‘presentation layer’ of the WCMS. This isn’t a bad thing. Different end systems are optimised to input, store and process particular kinds of content or data.

In this post, we look at how content works within a WCMS, and how a WCMS works with other systems to present content in ways that create richer information so that content consumers can make sense of, and make decisions with, that content.

What constitutes a product?

Before we talk about how content comes together, we need to ask what information is actually needed. This search on the staples.co.uk site for Post-it® Flags returned a typical result. The product consists of a few elements that we as content consumers recognise on the page:

  • images of the product
  • a short description
  • a price
  • quantity available
  • description
  • features
  • accessories
A product on the staples.com site

This is what gets displayed on the screen, but there’s a whole other layer of technical information behind the screen that makes it possible for us to search for a product and see all of the bits of information we need to make sense of it.

To make sense of the technical side of what makes up a product, we need to look at the markup language behind the scenes. Content that needs to work between systems – for example, on different sites – needs to use a standard that can be understood across those systems. This has led to using international standards. In this case, we would use the markup language, or ‘schema’, specific to a product.

The set of standards preferred by search engines, published at schema.org, defines the superset of elements that make up a product. There are 55 elements in that superset, with some of them (such as ‘Offer’) being schemas of their own.

This is important to our discussion, specifically because we need to realise that what happens behind the scenes becomes critical to automating the delivery of the on-page content.

Some of the elements from the product schema (schema.org)
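As an illustration, here’s a minimal sketch of what that behind-the-scenes markup might look like as schema.org JSON-LD, built in Python. The product values (name, SKU, price, URLs) are invented; a real page would embed the resulting JSON in its source for search engines to read:

```python
import json

# A hypothetical schema.org Product, with a nested Offer – one of the
# elements that is a schema in its own right.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Post-it Flags",
    "image": "https://example.com/images/post-it-flags.jpg",
    "description": "Small adhesive flags for marking pages.",
    "sku": "PI-FLAG-001",
    "offers": {
        "@type": "Offer",
        "price": "3.49",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product, indent=2))
```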

Presenting products in a WCMS

A typical WCMS is optimised for presenting content in complex ways. It doesn’t really ‘manage’ the content, but it does excel at presentation and at important post-publication functions, like providing a hook for gathering analytics. The WCMS allows all of the information to be aggregated into rich product descriptions that converge on a single page, and the front end is good at presenting all of this information so that consumers can understand the content in context.

For example, my favourite shoe store, Fluevog, can display shoes filtered by:

  • wearer type
  • size
  • style
  • colour
  • heel height
  • shoe family

A consumer can zoom in, check the fit (on a scale of narrow to wide), see the price, and see how many are left in stock.

A Fluevog search after refining the search using filters

Shoes are probably on the simpler side of the product spectrum. In B2B situations, products could:

  • come bundled with other products
  • be subject to bulk discounts
  • have geographic restrictions around where they can be sold
  • display company-specific information for buyers who have log-ins

A decent enterprise WCMS can calculate, based on programmed-in business logic, what gets shown where – which products, which currency, which bundling options, inventory levels, and so on.

Different systems, different functions

While a WCMS is sophisticated about the way it presents content, storing all of the content in a WCMS doesn’t make sense. The product information, such as attributes, pricing, and so on, needs to be stored in systems that are meant to manipulate content or data in particular ways.

Some systems have specialist functions to manipulate content at a granular level. Others have specialist functions for data – for example, a pricing tool may convert between currencies, round up or down, calculate volume discounts, and add the appropriate taxes for each country. These back-end systems are generally highly configured, and the content in them is highly structured and tagged so that it can automatically be pushed to the display layer in a WCMS.

This kind of content, and the data that goes with it, could be displayed in two ways: a view that looks relatively dry – think of an Excel spreadsheet view that lists sizes and colours – or a view that makes the information clearer and more enticing to whoever is consuming the content. Technologists call this “decorating the data”. Seth Gottlieb explains more about this in a post on his blog.

Seth Gottlieb article: The CMS Decorator Pattern

Multiple systems, each fit for purpose

For an organisation with any significant amount of product information, there is a high probability that the images come from one system, the descriptions live elsewhere, the attributes come from yet another system, the price comes from a dedicated pricing tool, the delivery information is calculated and delivered by another system, and the ratings are served up by yet another system.

Search result from Amazon for portable headphones

These elements can be recognised in the description of the headphones shown here, taken from an Amazon search. Again, this is to be expected – sometimes there are over 50 systems working together in complex enterprise solutions. It’s fine, as long as the systems are configured well and work together seamlessly to either push the content into the WCMS or to allow the WCMS to pull the content on demand.

Putting holes in the template

Multiple systems work together by putting ‘holes in the template’ and calling scripts to get the right information to populate those holes. It sounds simple, but there’s actually a lot of complexity to the equation.

A typical complement of systems that work together could be:

  • ERP (Enterprise Resource Planning) system, which pushes data (SKUs, prices, etc) into the WCMS
  • PIM (Product Information Management) system, which pushes product content and attributes into the WCMS
  • DAM (Digital Asset Management) system, which pushes binary files (images, video, audio, PDFs etc) into the WCMS
  • TrM (Translation Management) system, which manages language and other market variants behind the scenes
  • TxM (Taxonomy Management) system, which controls the terminology and tags to optimise search

These are parallel processes. And just as you can ‘tag up’ content in many different ways, these systems can deliver that same content according to many different criteria.

For example, the content can be shown according to specific reader profiles. This could mean that a content consumer logs in as a premium-package member and sees something different to a standard-package member. Or a corporate buyer sees something different to a retail shopper. Or that a reader chooses some filters (women’s shoes, red, heeled, size 8) and sees content specific to their criteria.
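To make the idea tangible, here’s a toy Python sketch of a template with holes being populated from different back-end systems. The fetch_* functions are hypothetical stand-ins for real PIM, DAM and pricing integrations, and the values they return are invented:

```python
# Each 'hole' in the template is filled by the system that owns that data.

PRODUCT_TEMPLATE = """
<h1>{name}</h1>
<img src="{image_url}" alt="{name}">
<p>{description}</p>
<p class="price">{price}</p>
"""

def fetch_from_pim(sku: str) -> dict:
    # Product Information Management: descriptions and attributes
    return {"name": "Post-it Flags", "description": "Small adhesive flags."}

def fetch_from_dam(sku: str) -> str:
    # Digital Asset Management: the product image
    return f"https://example.com/assets/{sku}.jpg"

def fetch_price(sku: str, currency: str = "GBP") -> str:
    # Pricing tool: currency conversion, discounts, taxes
    return "£3.49"

def render_product_page(sku: str) -> str:
    pim = fetch_from_pim(sku)
    return PRODUCT_TEMPLATE.format(
        name=pim["name"],
        description=pim["description"],
        image_url=fetch_from_dam(sku),
        price=fetch_price(sku),
    )

print(render_product_page("PI-FLAG-001"))
```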

The role of semantics

When we talk about content filters and ‘tagging up’ content, we’re actually talking about semantics. Creating content that has enough semantics to meet all of the demands on it is hard and complicated to do in a WCMS. The content has to have enough semantics, and the right semantics, for the underlying systems to understand under what conditions to display specific content.

That’s the difference between being shown the expected products and being shown something completely unexpected. Deane Barker, author of the O’Reilly book, Web Content Management, and blogger at Gadgetopia, describes the folly of not paying enough attention to what happens in the holes. And as the saying goes, therein lies the problem.

This is why companies that need to respond to market conditions in a hurry, or that want to output to multiple devices, channels, markets, or audiences, don’t put their content directly into a WCMS. They put their content into fit-for-purpose systems, and then let the WCMS do what it does best – pipe the content into the right holes.

Deane Barker article: Editors Live in the Holes

Making content more intelligent

There are structured authoring environments that extend the ability to manipulate content at more granular levels. These haven’t been as popular among digital agencies, but they have long been staples of organisations that have to control and publish vast amounts of product content, particularly content audited by regulators. These typically replace the tangle of tools (word processing, email, JIRA, and other clunky kludges):

  • a CCMS (Component Content Management System) in which authors use recognised schemas (DITA, DocBook, S1000D) to structure content
  • a HAT (Help Authoring Tool), which uses custom schemas to structure content
  • an XML editor, which works with a CCMS or, on occasion, a WCMS

In these cases, the authors take control of the content elements and attributes, which at delivery time get processed through a ‘build’ (much like a software build), and which then get pushed into downstream systems such as the WCMS.

Someone asked me whether using one of the popular content markup standards, specifically DITA, meant losing out on the ability to easily re-use and re-purpose content for different media and devices. Actually, it’s the other way around. Creating highly semantic content, or ‘intelligent content’, means being able to re-use and re-purpose content with ease and agility.

Ann Rockley article: What is intelligent content?

Content trade-offs

Intelligent content and schemas such as DITA are not for companies that have a few thousand pages of highly crafted marketing content that rarely changes. For those organisations, it may be enough to enter content into forms where, after clicking ‘Submit’, it eventually gets piped into the holes in the templates.

Intelligent content is for companies with enough content to warrant having content developers who are trained professionals. They need to understand:

  • the theory behind structured content
  • how to write for a structured authoring environment
  • how to apply semantics and metadata
  • how to craft content for a multichannel publishing environment

It’s important to know that both options exist, and when to use the right option. By understanding how content gets moved around by systems until it is presented to end users, we can make better decisions about how and where we should be creating content.

Trends in content strategy

The Content Strategy Applied 2017 conference (9-10 Feb 2017) ended with a trend-spotting presentation from organisers Rahel Anne Bailie and Lucie Hyde. They brought their collective experience, along with the insights from conference presenters, to the podium.

One of the points Rahel made in this presentation was that content professionals today need to constantly work on keeping their skills and knowledge up-to-date.

Content professionals – get skilled up!

Rahel says, “The divide between content professionals who are upgrading their skills and those who are hanging onto the status quo – writing in word processing programs and emailing documents for ‘someone else’ to deal with things like metadata – will become more apparent.” 

Rahel heads Scroll’s content strategy arm. Scroll actively sifts content professionals to see who has the kind of skills and experience that content projects today require. We need to be confident we have the people with the best skills available on our books.

Rahel says, “Already, digital agencies who use content strategists vet CVs in ways that weed out the writers from those with advanced skills. The content professionals who decide to upgrade their skills will find that more opportunities open up for them.”

Content trends you need to learn about

We asked Rahel what she’d picked up on at the CSApplied 2017 conference that indicated future trends. Here’s what she had to say. If you want to be a content pro who really knows their stuff, these are the trends to watch.

And for lots of content professionals, I’d guess that all of these things represent both a need and a chance to start getting skilled up.

Building bridges across silos

One of the trends showed itself in the shadow of an announcement from one of the conference sponsors, Adobe. First, a little background.

Content strategists who do cross-silo strategies for omnichannel projects know that marketing content tends to be a layer of content over a huge amount of enabling or technical content. For example, a single product may have a bit of marketing content associated with it. But it will probably have hundreds of pieces of content that enables customers to use the product: warranty info, help content, user guides, admin guides, training material, microcopy for the interfaces, knowledge base articles, and so on.

When there is no way of integrating the content, it gets developed in multiple silos, with the usual discrepancies, inaccuracies, and duplication of effort that come with a fragmented territory.

Adobe confirmed that the trend is for organisations that handle lots of content to want their CMS to be the repository for the content that gets delivered through multiple channels. Until now, content developers creating large-scale enabling content have done so in an external editing environment, and then had the content transferred into the web CMS. It seemed to Adobe that it was time to develop a technical solution to enable the integration of all customer content into a single place, while allowing authors to use their power editing tools. So, Adobe has created a new XML Documentation Add-on for AEM. This makes AEM DITA-aware, extends its capabilities and transforms it into a fully fledged, enterprise-class component content management system.

Structured content tools are game-changers

Rahel sees this changing the content landscape in a big way (read her white paper: Expanding content scope to drive customer information needs).

She has seen a lot of resistance from technology departments to supporting content developers, often because they don’t understand the commercial value of content. But with one of the largest CMSs on the market, AEM (Adobe Experience Manager), supporting a robust experience for content professionals who want to use the DITA standard for power-editing, it will be a huge game-changer.

This is a big deal for corporations, who increasingly accept that this kind of investment in content is vital for their bottom line. It’s also a big deal for content professionals, as relatively few know how to use a structured content tool or understand best practices in a collaborative writing environment. The content pros who upgrade their skills and knowledge to develop content that works for omnichannel delivery will be able to keep pace with these kinds of publishing environments.

Cognitive computing

For content people focusing on semantic content, cognitive computing came out of left field as the next big technology trend. Cognitive computing uses artificial intelligence to create self-learning programmes. And where there’s technology, there’s content, which means there will be a need for content meant for cognitive computing environments.

The technology side of the industry is moving much faster than the business side, which is creating an environment where technologists are looking to automate content. Sometimes that tactic works, but when it doesn’t, it can cause significant brand damage.

Increased automation of content delivery

There is a strong move to chatbots, the Internet of Things, voice search, and related technologies. Some of this is about delivering service at scale, but a lot of it is in response to customer desire for ease of interaction. Examples include assistants such as Siri, Alexa, and Cortana, where verbal search diverges from keyword search. This puts a higher demand on content, which has to sound conversational while being informative, and flow in particular patterns to make sense to humans while also making sense to the systems that serve it up.

Shared, semantic content

For content to work within automated, cognitive computing environments, it needs to have enough structure and semantics that computing systems know when to pull specific content. Adaptive content, which allows content authors to tag content for specific contexts, is quickly becoming a core skill for content professionals in any environment where content gets delivered into shared spaces.
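Here’s a toy Python sketch of that kind of adaptive content: the same message tagged for different contexts, with the delivery system picking the variant whose tags match. The content ID, tags and variant texts are invented for illustration:

```python
# One piece of content, with variants tagged for specific contexts.
message = {
    "id": "password-reset-confirmation",
    "variants": [
        {"tags": {"channel": "voice"}, "text": "Okay. I've reset your password."},
        {"tags": {"channel": "web"}, "text": "Your password has been reset."},
        {"tags": {"channel": "chatbot"}, "text": "Done! Your password is reset."},
    ],
}

def pick_variant(content: dict, context: dict) -> str:
    # Return the first variant whose tags are all satisfied by the context
    for variant in content["variants"]:
        if all(context.get(k) == v for k, v in variant["tags"].items()):
            return variant["text"]
    raise LookupError("no variant matches this context")

print(pick_variant(message, {"channel": "voice"}))
```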


Content audit: how to define goals and scope

A good content audit is the cornerstone of many web projects. But starting a content audit can be scary. It’s like standing on a high board, preparing to dive into a sea of raw data.

If you want to avoid drowning in data, you need to invest time in defining the goals and scope of the audit. Work out what you want the content audit to achieve, and what content you actually want to audit.

Define your goals

You can look for almost anything in a content audit – from how well content performs in search to how well it converts for sales. So, before you start, you need to agree what you’re trying to achieve with this specific audit.

If you’re not the content strategist on this project, start talking to them now. Get an overview of the project the content is meant to inform. There is no point carefully checking metadata if this is primarily going to be a rebranding exercise.

Ask a lot of questions

Organisations are not necessarily sure what they need or can get from a content audit. The best way to define goals is to ask a lot of questions and try to read between the lines.

What they say: “We want to know which content is performing well / where we’re getting ROI.”

What this could mean: “We want to know…

  • how many people are seeing which bits of content
  • if people are acting on the content (for example, following a call to action)
  • if people are reading or otherwise using the content (not just leaving straight away)
  • if the content is doing what it’s intended to do and meeting user needs – or if there are gaps
  • if people are sharing the content on social media
  • if content is performing well in search
  • if the content is meeting our KPIs/business requirements”

What they say: “We want to know if the content is in good shape”

What this could mean: “We want to know if the content…

  • meets editorial best practice
  • meets UX best practice
  • meets branding, style, tone and voice guidelines
  • has an owner and is up-to-date
  • is accurate and relevant – on message, factually correct
  • has correct tags and metadata
  • is in correct format
  • is well-organised in a good IA”

Use the goals to define the work you need to do

Once you have agreed the goals, you will have a much clearer idea of how to conduct your audit. For example, if one goal is to work out which marketing content is giving a good return on investment, you could:

  • use analytics data and other site metrics to see which content is most popular
  • check where traffic to that content is coming from and going next
  • see how the content is being used
  • count social shares of the content
  • count conversions from the content
  • use any KPIs set by the business to evaluate

and so on.
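As a sketch of what that might look like in practice, the following Python snippet scores an exported content inventory against some ROI-style metrics. The CSV file name, its column names and the weightings are all placeholders you’d agree with the client against their KPIs:

```python
import csv

def roi_score(row: dict) -> float:
    # Hypothetical weightings: conversions matter most, then shares, then views
    return (int(row["conversions"]) * 5
            + int(row["social_shares"]) * 2
            + int(row["pageviews"]) * 0.01)

with open("content_inventory.csv", newline="") as f:
    rows = sorted(csv.DictReader(f), key=roi_score, reverse=True)

# Surface the top performers first, so audit time goes where it matters
for row in rows[:20]:
    print(f'{roi_score(row):8.1f}  {row["url"]}')
```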

Define scope (and acknowledge you can’t do it all)

Got the goals? Now define the scope. Work hard on getting the scope focused properly. Content audits are time-consuming work. You want a tight and accurate brief. You don’t want to spend time auditing duplicate content, or old news stories, or following redirects down rabbit holes, or anything else that does not help the client achieve their goals.

Prioritise, prioritise, prioritise

Budgets and time will almost certainly be tight. That means that you almost certainly won’t be able to get eyes on every single URL. Prioritise ruthlessly.

Quick wins

Generally, start with indexable HTML pages that a visitor can find through search. Ignore the rest, unless you’re conducting (for example) a specifically technical or SEO audit.

Double-check which bits of the digital estate you’re auditing – you might be able to ignore whole blogs and microsites.

Also, check if there are any parts of a domain that are out of scope (for example all archived content, or all content in a certain /xxx/).

Find out how well the client knows their content:

  • Are there issues with URL duplication or other CMS-driven oddities you should know about?
  • Are there lists of content types or formats you can use?
  • Are there previous inventories or audits you can measure against?
  • Can you have access to someone who really knows the content and the CMS?

Look for representative samples

If there’s a repeated pattern in the content (for example annual reports, each of which comes with a standard set of links and assets) you can sometimes just audit a sample of these.

Have a think about what you need to know from these samples. Do you need to know how well a user journey is working? Or whether the assets are being downloaded? Or whether they are correctly branded and to style?

Site size rules of thumb

For sites under 500 pages, just check every page.

For sites of 500-1,000 pages, focus on the most important content for a full audit. This might be the business-critical content, the most-used content, the ‘top tasks’ content, or a combination of those things. It might be a few samples or patterns. Use the goals to inform this. Run a lighter audit of the rest of the content.

For massive sites, or if you need to do it all in a day, use the 80/20 rule. Identify the 20% of content that’s most important, and focus on that first. Make sure it includes:

  • representative samples of common content types and formats
  • representative samples of important user journeys
  • business-critical content
  • most-used content

Do what works

There are no hard rules about setting the scope. Successful audits depend on doing what works. Here’s one unconventional but effective solution by an anonymous content strategist.

“We divided all the content into 3 basic types: horrible, boring and important.

  • Horrible stuff. Content inside systems that could not practically be reorganised within the scope of the project. Solution: design around them and organise a future project to deal with them properly.
  • Boring stuff. Content that, due to its time-sensitive nature, was not worth spending effort on reorganising. Solution: created an archiving process that involved minimal metadata changes.
  • Important stuff. Existing or imminent content that either had a long shelf life or would have high visibility at the time of the relaunch.

The Horrible and the Boring content represented the vast majority of the system, and grouping them in this way allowed us to leave them until another day.”

Leave room for surprises

Leave a bit of space in your schedule. Because you will almost certainly find hidden microsites, translations into strange languages, stub pages, odd redirects, and in some cases, entire sunken cities of content.

Dive in and do it!

If you define the goals and scope of your audit before you start, you will save a lot of time and energy – and in some cases your sanity. Write the goals and scope on a Post-It and put it on your screen.

Every time you feel analysis paralysis setting in, or the dread hand of spreadsheet confusion, read the Post-It. It’s the lifebuoy that you can use to float happily through that sea of data.

How to cope with the increased demands on content

The complexity of producing and delivering content has grown exponentially over the past couple of decades, as the demands for content have grown. In simpler times, content was produced as a single-channel deliverable. We would write an article for a magazine, or a user guide, or a maintenance manual. There was one piece of content and one deliverable.

Writing content in simpler times

When the web came along, things changed considerably. We made the transition from writing in the book model for print and chunking the copy up for the web, to writing in topics for the web and then stitching the contents together for the print version that got delivered to customers.

For the most part, we still worked alone on a content deliverable. Each person on a team would be assigned an area to cover. For example, a company that produced a product would have:

  • marketing collateral in print done by a marketing team
  • marketing collateral on the website done by a digital marketing team
  • a user guide done by a technical communicator/technical author
  • a maintenance manual done by a different technical communicator
  • PDFs of the product material, uploaded (and forgotten) by a webmaster

Content got more complicated

As time went on, content got more complicated. The inconsistencies between digital and traditional channels became more apparent, and less tolerated, by customers. There were more demands on content, and more channels demanding content to fill them. There was not only the marketing funnel waiting to be filled, which makes up about 20% of any large website, but also product support material, the other 80%. Traditional product content was needed, such as quick start guides, user guides, training manuals, and service-center material. New channels also needed content: forums, knowledge bases, social, and so on. This didn’t account for the additional channels for that content, such as tablets, smartphones, wearables, and newer channels such as chat bots.

Multiplicity and the demands on content

Organisations are now in a situation where the volume of content and number of delivery variables means that the complexity of producing and delivering content has reached a tipping point. The demands on the business, the content developers, the technologies, and the content itself have grown exponentially, and it’s harder and harder to keep up.

For a moment, let’s picture 4 unique pieces of content that come together to describe a feature of a product. Now let’s say that that particular feature is used in 4 different product lines; that content is now being used 16 times. Now, imagine that each product line has 4 products that use that feature. Those 4 pieces of content get repeated 64 times. Now, multiply by 4 delivery channels, and that means those original 4 pieces of content are used a whopping 256 times. That’s a lot of copy-and-pasting.
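The arithmetic, as a quick Python sketch:

```python
pieces = 4             # unique pieces of content describing the feature
product_lines = 4      # product lines that use the feature
products_per_line = 4  # products in each line that use the feature
channels = 4           # delivery channels

print(pieces * product_lines)                                 # 16 uses
print(pieces * product_lines * products_per_line)             # 64 uses
print(pieces * product_lines * products_per_line * channels)  # 256 uses
```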

How 4 pieces of content can balloon to 256 different uses.

This example of multiplicity is not overstated. In fact, the phenomenon is all too common. As organisations develop more products and services, they create more content deliverables to support them, and deliver that content through multiple channels. At best, manual content re-use means laborious, time-consuming tracking of where content is used and re-used. At worst, the process of tracking content use becomes a maintenance nightmare.

Finding a way to cope

How are organisations coping with this explosion of content? In my experience, not well. Too many clients have finally broken down and sought help because they’ve run out of spreadsheet management capacity – even in environments with a web CMS. Yet the demands on content continue to grow, and a greater level of sophistication is needed to deliver on the value propositions anticipated by the business.

So how can organisations cope? With a CODA (Create Once, Deliver Anywhere) strategy, based on the COPE (Create Once, Publish Everywhere) strategy used by the US’s NPR (National Public Radio). The basic idea is that a piece of content can be created once, and then re-used through automation, instead of using a copy-and-paste approach.

By pulling content into the many places it gets used, content developers experience a marked decrease in maintenance effort. After all, CODA also means Fix Once, Fix Everywhere. This is because when content is re-used by ‘transclusion’, the original piece of content is the only actual instance of the content. All of the other ‘copies’ are actually only references to the original. Fix a typo in the original piece of content, and all of the derivative content is automatically fixed as well.
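Here’s a toy Python sketch of that reference-based re-use. The class, method names and content are invented for illustration, but the principle mirrors transclusion in a component content management system:

```python
class ContentStore:
    """Stores each piece of content exactly once; deliverables hold references."""

    def __init__(self):
        self._components = {}

    def create(self, component_id: str, text: str) -> None:
        self._components[component_id] = text

    def update(self, component_id: str, text: str) -> None:
        # Fix once: edit the single original instance
        self._components[component_id] = text

    def render(self, component_ids: list) -> str:
        # References are resolved at render time, never copy-pasted,
        # so every deliverable always reflects the current original
        return "\n".join(self._components[cid] for cid in component_ids)

store = ContentStore()
store.create("warranty", "Coverd for 12 months.")  # note the typo
store.create("returns", "Returns accepted within 30 days.")

user_guide = ["warranty", "returns"]  # two deliverables re-using
product_page = ["warranty"]           # the same warranty component

store.update("warranty", "Covered for 12 months.")  # fix once...
print(store.render(user_guide))                     # ...fixed everywhere
print(store.render(product_page))
```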

Multichannel content

How 4 pieces of content can exist in multiple channels, in multiple contexts.

 

What goes into CODA

Creating CODA content is based on the principles of intelligent content. This means that content is structurally rich and semantically categorised. The definition, created by Ann Rockley, goes on to say that this makes content automatically discoverable, re-usable, reconfigurable, and adaptable. Those may sound like technical benefits, so perhaps they are best rephrased in business terms.

  • Business efficiency. With less maintenance overhead, content developers can focus less on low-level tasks like searching for duplicate content and filling in spreadsheets, and spend more time on value-add activities. On one recent project, a particular task that took several staff several months to complete could have been completed in a matter of minutes, had the content been highly structured and semantically categorised.
  • Accountability. When a CODA framework is implemented well, there is a granular audit trail that would make any auditor swoon with delight.
  • Accuracy. Brand, marketing, legal, and compliance are all concerned with content accuracy. Having a single source of truth to draw from means fewer mistakes, fewer review cycles, and fewer legal checks before content goes live.
  • Personalisation. Whatever personalisation means to your organisation, it is more easily done within a CODA framework. The semantics added to content mean the content is adaptive – in other words, it’s easier to change a sentence or two within a message to reach a different audience, to vary an offering, to output specific parts of a block of content to different devices, and so on. This can be done without losing the context, and it makes maintenance so much easier.
  • Extension of reach. The idea that content can be produced in a tighter way also means that the company can leverage the content in new ways. Going into new markets, adding new product lines, taking new languages on board – all of these are possibilities that can be supported with content. No more lag between intent and action.
  • Dynamic publishing. In companies with large quantities of content, the ability to publish content on the fly, collect existing content into new contexts, and create new assets for customers, whether paid or promotional, becomes a competitive advantage.

Adopting CODA

A logical question is, “If CODA is so good, why isn’t everyone doing it?” The content developers who have been doing CODA for decades ask that question a lot. It’s a technique that has been used extensively for large bodies of content (in all fairness, the technique has traditionally been applied to post-sales content such as technical documentation, customer support content, and training material) to cope with demanding production schedules and a high likelihood of post-publication maintenance.

However, as the complexity of content delivery grows and the demand on content grows with it, the imperative for well-structured, highly semantic content will need to become the norm. It has implications for all areas of business, from how we create content to how we deliver it, and all the steps in between.

Resources

Rahel Bailie named in the top 25 content strategy influencers

Scroll’s own content strategy guru, Rahel Bailie, has been named one of the top 25 content strategist influencers 2016 by MindTouch.

MindTouch evaluated thousands of content strategists and created a measurement that took into account a wide range of metrics, including internet presence, influence, community engagement and participation. This is a snapshot of what’s happening (and who’s hot) in the world of content strategy today.

Read the list of top 25 content strategist influencers on the MindTouch site.

Good translations start with good source content

If you’re surprised to hear that the biggest impact you can have on your translation budget lies with your source content, you’re not alone. Shoring up your source content seems counter-intuitive, but it’s exactly the right strategy to get the most value from your translation and localisation projects.

Organisations can achieve a 50% to 80% reduction in translation costs and a 30%-plus reduction in delivery time by implementing best practices around managing source content. The Chartered Institute of Procurement and Supply says that operating in an omnichannel environment is increasingly a part of supply chain challenges – so streamlining and cost reduction are as important as thinking about the customer experience.

Translation is the too-late phase

It’s hard for translators to work in today’s business environment. They are perpetually at the end of the supply chain in any iteration of content production, and they are asked to produce localised content – that is, translations that have been adapted for suitability in local markets – in often impossibly short time periods, making it increasingly difficult to meet the expected standard of quality.

More often than not, translators can see the problems upstream in the supply chain, but find themselves unable to effect any changes that would make the situation easier for themselves or their clients. They may not have direct access to the actual client, as a translation agency sits between them and acts as gatekeeper, and they may not even have reliable access to the tools that could improve the production process.

The client, meanwhile, struggles to get the content out to the translation agency and back again in a smooth manner. There continually seem to be bumps in the process that cause delays, mistranslations, or increased administrative overhead – needless cutting and pasting, for example.

It may be of some comfort, then, to know that good practices start at home, to mangle a perfectly good saying. Having a sound production process and robust source-language ecosystem lays the foundation for smooth development of localised content. In turn, this makes it easier to integrate the localised content into your products, websites, apps, knowledge bases, and content hubs.

Ready to adopt good localisation practices

It’s when companies reach the stage in their content maturity model where they recognise that content is an important part of their product or communication strategy that they become willing to invest in content as an asset. For companies not at that level of awareness, the rest of this article will not resonate. This is an important assumption, as in-house practices often reveal that companies are willing to live with broken content processes all along the line. They may say that content is king, but the king is shackled in a dungeon, and the keys have gone missing.

For the companies that are at the level of the maturity model where they are ready to take action, we make the following assumptions.

  • The company values content for its business value. Content isn’t considered that afterthought that fills in the pretty design, but a work product in its own right. In other words, the company recognises that content is the way that customers understand the products, services, instructions for use, value proposition, and the brand itself.
  • The company recognises that content production is not a commodity, and so does not fit the traditional supply chain model. Content returns in various iterations – new version, new language, new revision, and so on – and needs to be managed with the same care as other work products with iterative processes, such as code.
  • The company recognises that content is intrinsically different from data, and manages content with checks and balances suitable to it.
  • The company has an equal interest in customers in all of its markets, and aims to give them as much respect as the customers in its primary market.

Meeting these assumptions is an important point, as organisations which have not reached this stage of awareness are likely not willing or able to move to an operational model where they are ready to optimise management of localised content.

Put controls on source content

The single biggest impact you can have on your localisation efforts is to get your source content in order. A foundational principle for producing good translations is managing your source content well. Ideally, an organisation would create a superset of its source content, and re-use it across all of its output channels. This model delivers a tremendous amount of ROI, and the more languages you produce, the more this applies. Managing source content well involves making the most of semantic structure and metadata tags to help computer systems understand what the content is about and, as a result, how to translate it more effectively.

Make your content translation-friendly

There are several writing theories with principles that apply to localised content. The principles of the Plain Language movement, for example, are a way to ensure that content is accessible to everyone. Controlled vocabulary is another technique from which you can borrow to ease confusion when terms need to be translated. Both of these theories agree on avoiding jargon, idiom, slang, and euphemisms, as they are harder to translate and often meaningless in the target language. Also pay attention to colours, gestures, and images. For example, there is no hand gesture that is not offensive in some culture. (Even the Facebook “thumbs up” for Like is a rude gesture in an entire area of the world.) Professional writers and translators will spot these errors and point them out or correct them.
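As a rough illustration, a controlled vocabulary can even be enforced mechanically at the source. Here’s a hedged Python sketch – the term list and function are invented for this example, not taken from any real tool:

```python
# Flag terms that are hard to translate (jargon, idiom) and suggest the
# approved equivalent from a controlled vocabulary. Terms are illustrative.
CONTROLLED_TERMS = {
    "leverage": "use",
    "touch base": "contact",
    "low-hanging fruit": "easy improvement",
}

def check_source(text: str) -> list[str]:
    """Return warnings for any non-approved terms found in source copy."""
    lowered = text.lower()
    return [
        f"Replace '{term}' with '{preferred}' before translation"
        for term, preferred in CONTROLLED_TERMS.items()
        if term in lowered
    ]

print(check_source("Let's touch base and leverage the new feature."))
```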

The 3 most common adaptation techniques are:

  • translation – a faithful word-for-word rendition in another language
  • localisation – translation with additional compensation for differences in the target markets
  • transcreation – completely changing the message, if necessary, to make it meaningful to the target audience.

Transcreation is obviously the most resource-intensive, and will likely get used for marketing and other persuasive content.

Make your content interoperable

Industry has hundreds, if not thousands, of content standards that help store content, move content between systems, move content through production, and so on. Your web or software developers may know about W3C standards that relate to the Open Web Platform, accessibility, the semantic web, and the Web of Devices (the Internet of Things). They may not be as familiar with XLIFF, an interchange format commonly used to move content through the localisation process, or with image standards such as SVG, which has a handy text layer that can store multiple language translations on a single image as metadata. Knowing the standards and deciding which ones apply to your projects can dramatically ease workflows and save significant time and money.
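To give a feel for what an interchange format actually looks like, here’s a hedged sketch that builds a minimal XLIFF 1.2 skeleton with Python’s standard library. The file name and strings are invented, and real localisation tooling would add segmentation, notes and context:

```python
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:xliff:document:1.2"
ET.register_namespace("", NS)  # serialise with XLIFF as the default namespace

xliff = ET.Element(f"{{{NS}}}xliff", version="1.2")
file_el = ET.SubElement(
    xliff,
    f"{{{NS}}}file",
    {"source-language": "en", "target-language": "de"},
    original="homepage.html",
    datatype="html",
)
body = ET.SubElement(file_el, f"{{{NS}}}body")
unit = ET.SubElement(body, f"{{{NS}}}trans-unit", id="1")
ET.SubElement(unit, f"{{{NS}}}source").text = "Create once, deliver anywhere."
ET.SubElement(unit, f"{{{NS}}}target").text = ""  # filled in by the translator

print(ET.tostring(xliff, encoding="unicode"))
```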

Use established workflows

When you use industry-standard workflows for translations, your project can go around the world in a day or two, and be translated with a minimum of drama. A typical workflow would be to export well-formed content (going back to interoperability standards) to a competent translation agency through a translation management system. The agency will run the content through your translation memory, subject the new content to machine translation, and then have it post-edited by a qualified translator. The quality-checked content is pushed back into your content repository and is ready for processing. Now you can see how managing your source content affects the production efficiency of your translated content.
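The decision at the heart of that workflow – reuse an exact translation-memory match, otherwise produce a machine draft and flag it for human post-editing – can be sketched in a few lines of Python. The data and function names here are invented stand-ins for a real TMS and MT engine:

```python
# A toy translation memory, keyed by (source language, target language, source text).
translation_memory = {
    ("en", "fr", "Sign in"): "Se connecter",
}

def machine_translate(text: str, src: str, tgt: str) -> str:
    # Placeholder for a call to a real machine translation engine.
    return f"[MT:{tgt}] {text}"

def translate(text: str, src: str = "en", tgt: str = "fr") -> tuple[str, bool]:
    """Return (translation, needs_post_editing)."""
    hit = translation_memory.get((src, tgt, text))
    if hit is not None:
        return hit, False  # exact TM match: no re-translation needed
    return machine_translate(text, src, tgt), True  # new content: route to a post-editor

print(translate("Sign in"))            # ('Se connecter', False)
print(translate("Create an account"))  # machine draft, flagged for post-editing
```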

Use the right tools

The operational overhead of managing translations manually can be significant. It’s possible to bring down that overhead by using some industry-standard tools. Translation processes have become very sophisticated, and these tools are at the heart of automation and scale.

  • Translation memory. At the very basic level, a translation memory is a must. If you use a professional translation agency, they will use the translation memory to compare new source content with previous translations, and avoid re-translating sentences that have already been translated. You own your translation memory, though, and are entitled to have the file for your own use – for example, with other agencies.
  • Translation automation. At the next level is project automation. If you translate or localise content regularly, a TMS (translation management system) can significantly improve your processes. Source-language files get passed to the TMS, which handles everything from calculating the number of words to be processed, to passing the files to translators and collecting the translated content, to calculating costs and generating invoices, to passing the translated content back into your content editing system in the appropriate file or database structure.
  • Machine translation. The larger your project, the more likely you are to use machine translation as the first pass at translating content. Machine translation happens before translators polish up the language in what is called the post-editing phase.
  • Content optimisation. The larger, more advanced organisations use software that scans the source content for not just spelling and grammar, but also for consistency, form, and harder-to-measure things like tone and voice. This sophisticated software can also offer authoring assistance to keep cost-sucking language problems from entering the body of content at the source.
  • Managing content as components. Organisations that produce masses of content have been using an authoring environment called a CCMS (component content management system), where the source content is managed at a granular level. This means that content gets created once and re-used wherever it’s needed. This approach is called CODA (Create Once, Deliver Anywhere), which began as a topic-based, modular way of developing content and has become the centre of multi-channel publishing strategies.

Up-skill your content developers

Using the right kind of content developers to manage your content is important. Investing in the right skill sets will pay itself back in no time. The skills that content developers such as technical communicators, user assistance writers, and content designers bring to the table are often learned while working on larger teams with other skilled content professionals. At the least-suited end of the scale for writing content for translation are product managers and software or web developers. They bring important skills to the table, but creating content isn’t one of them!

A word about Agile projects

Corporations using professional writers – that is, technical authors who understand how to manipulate the technical side of content to automate and scale – generally get source-language content delivered within the same sprint as the code, and translated content delivered one sprint later. This may seem like an over-generalisation, but the observation comes from years of experience and discussions with dozens of technical communication managers around the world. The work done up front to ensure that this can happen takes place in Sprint 0. This is where the story arc gets determined, based on the customer journeys, along with the projections of the number of target languages, the output devices, the content connection points, and so on. This allows content to be set up within a framework and lifecycle that anticipates those needs.

What you can fix, what you can’t

We can recognise the possibilities that strategic management of content can open up. These techniques will benefit larger companies that have:

  • translation and/or localisation needs
  • variants in language usage across multiple markets
  • cross-market content or native languages in alternative markets
  • cross-border commerce adaptation of language
  • usage differences, such as outputs to multiple devices
  • omnichannel marketing environments
  • rising use of social content
  • a strong need to respond to growth that involves more content

There are no silver bullets to solve localisation problems; to believe that would be naïve. Small companies that have limited translation needs, for example, would struggle to justify putting in a full-blown translation management system. They might need to find a hosted solution where a third party handles the management side of translation. Yet the same principles apply: localisation best practices begin with good source content.

 

Image copyright: Jayel Aheram, Flickr (CC)

What happens when content design crashes into the General Data Protection Regulation (GDPR)?

 

What would it be like to produce content in a total data vacuum? Picture yourself working in a soundproofed, blacked-out box with a computer that can only send but never receive information. You have a brief to design some content, but you haven’t been given much information about your users. You’re going to have to rely on intuition and assumption about their needs, interests and behaviour. No matter – you’re a resourceful person, so you make the best of it and cobble together some best-guess content. It’s a relief to press send.

Off it goes into the ether and you’ll never have to think about it, the users or their needs again – because there won’t be any feedback. That includes all metrics, page views, click-throughs, bounces and everything else you’re used to for assessing whether your work is fulfilling its aims. It sounds like a recipe for awful content, doesn’t it? It must be – though of course you won’t get to know either way.

Data drives content

For content professionals, such a scenario in the real world is unthinkable. Content is driven by data and databases, from analytics to A/B testing. Data is the beating heart of how content designers think about user needs and what we do to deliver on them. It’s also the biggest weapon in our armoury when it comes to dealing with sceptical and obstructive forces in the organisations we work for.

And yet, the situation above isn’t just a thought exercise. Working in a data void – or at best with a seriously diminished data set – could well become a reality for many of us in a couple of years if we don’t take timely steps to stay compliant with imminent new data protection legislation, according to Hazel Southwell, Data Protection Consultant, speaking at a recent Content, Seriously meetup.

Ignore data protection at your peril

Content producers who ignore the new rules will be destined to launch their content into the void, she warned, like the Soviet scientists who shot Laika, a Moscow street dog, into space with scant means of monitoring her progress and no hope of her survival. The ill-fated dog died from overheating after only a couple of hours and the scientists learned next to nothing from the adventure. At least she got to be the first animal in orbit – which is far more than content producers can hope for in return for their doomed efforts.

Producing content without user research and analytics (both pre- and post-publication) makes that content far more likely to be irrelevant to target audiences – and useless for our objectives. More than that, data is the trump card, the invincible ace of spades, in any argument about the direction that content should be taking.

How often does data come to our rescue when subject matter experts are blocking improvements to clarity and readability, or when managers are resistant to important content changes? They can’t argue with the data. Without data in the armoury, we’re fighting blindfolded with both arms tied behind our backs.

Say hello to the General Data Protection Regulation

On 25 May 2018, the EU General Data Protection Regulation (GDPR) will come into force, making sweeping changes to the rules governing the way we collect, use and store data. It will have an impact on any organisation, whether based inside or outside the European Union, that processes the personal data of people in the EU.

Companies will no longer be able to sidestep data protection obligations because their head office is in the US, say, or their servers are in Vanuatu. If they’re dealing with the personal data of EU citizens then they must comply with the rules. So Brexit will not provide a way out for UK organisations either.

The UK currently has one of the toughest data regimes in the world in the Data Protection Act 1998, backed up by the enforcements of the Information Commissioner’s Office (ICO). But the GDPR knocks that into the shade, not least with sanctions that are designed to bring the global tech behemoths out in a cold sweat. Even the likes of Google and Facebook might think twice about transgressions, faced with fines of up to €20 million or 4% of worldwide annual turnover – whichever is greater.

Personal data will include photos, email addresses, bank details, social media posts, cookies and IP addresses – anything, in fact, that identifies you directly or indirectly in your private, professional or public life. And if you’re processing this data, whether you’re a multinational or working from your front room, whether you’re turning a profit or not, then you’ll need to comply.

It might be a shock for a humble WordPress blogger to find their use of tools such as Google Analytics (much of which is based on monitoring IP addresses) could fall foul of the law. And their difficulties will be compounded if they deal with personalised content tailored to their audiences – for example, if they use a formula whereby 2 users might see a different paragraph within a single page depending on their age. It seems the quest for making highly relevant content is to become even more tortuous.

So how do you comply with the GDPR?

You’ll have to get explicit consent for obtaining and keeping personal data, which must be given to you freely, rather than as a bargaining chip for accessing your services. You’ll need to ask for it in a clear and obvious way, not just imply you’re taking it and going ahead.

Having obtained consent fair and square, you’ll have to store it – not only so the ICO can check you’re doing things right, but also so the individuals concerned can see what you have on them. They should be able to transfer their data to other data controllers if they want – what’s being described as a new right of ‘data portability’.

Consent can be withdrawn as well as given, and you’ll have to erase data or correct inaccurate data if requested, or restrict processing data if you get an objection. If the data you’re keeping gets compromised through a security breach you may have to notify the relevant authority, the individual concerned or the public at large.

You’ll have to demonstrate that you’re complying with the GDPR, through policies and procedures, staff training, monitoring, documentation – and if your organisation is large enough, with the appointment of a designated data protection officer and appropriate records of your data processing activities.

Privacy will be prioritised by better design (privacy by design) and through more stringent default settings (privacy by default), and you’ll be encouraged to use data only when strictly necessary for your services.
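To make those obligations a little more concrete, here’s a hedged Python sketch of what a minimal consent record might look like – explicit, purpose-specific, withdrawable and kept for audit. The field names are our own invention, not anything the regulation prescribes:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                          # e.g. "analytics", "personalisation"
    granted_at: datetime
    withdrawn_at: datetime | None = None  # consent can be withdrawn at any time

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

records: list[ConsentRecord] = []

def grant(user_id: str, purpose: str) -> None:
    records.append(ConsentRecord(user_id, purpose, datetime.now(timezone.utc)))

def withdraw(user_id: str, purpose: str) -> None:
    for record in records:
        if record.user_id == user_id and record.purpose == purpose and record.active:
            record.withdrawn_at = datetime.now(timezone.utc)

grant("u42", "analytics")
withdraw("u42", "analytics")
# Processing must stop, but the dated record stays behind as an audit trail.
assert not any(record.active for record in records)
```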

Privacy fights back

If it sounds tough, that’s because it is. There are some obvious exemptions to the rules – such as for national security, defence, law enforcement, public services, health and so on – but it seems the EU has had enough of companies storing and selling huge quantities of personal information: our interests, health, social background, jobs, wealth, education and much more – information that has very likely been obtained in ways of which we were not wholly aware.

While we unwittingly surrender the details of our address books, calendars, emails and map co-ordinates to apps and companies that seem to have no call to know them, many of us are only dimly realising that our most private information is forming part of a vast global trade far beyond our control. Marketing giant Acxiom, for instance, is said to have stockpiled up to 3,000 separate nuggets of information on each of the 700 million people in its files.

In this context, the GDPR could be a welcome rebalancing in favour of the individual. Even so, EU member states still have some flexibility about how they implement many of the GDPR’s 99 Articles – not to mention the uncertainty of how a post-Brexit UK might slot into those arrangements.

There may also be ways to anonymise or ‘pseudonymise’ data so that it can be used without stepping on anyone’s toes, or to make the most of exemptions for statistical research that doesn’t rely on the identifying aspects of the data. The sweep of the legislation may be fixed, but the crispness of its final boundaries is still to be defined.

Respect privacy, improve content, win trust

However the cookie in your cache might crumble come May 2018, content strategists must start putting data protection much higher up the agenda now. Content professionals are creative people and will be able to conjure up inventive and unimposing ways for users to give consent about their personal data.

It’s in everyone’s interests that content is engaging and relevant, and it won’t take much for users to understand how important data is for the best in content creation. It will be even more important for content professionals to create the kind of compelling content that will make users care enough to click the consent button – in whatever form it takes – without a second thought.

Many thanks to Hazel Southwell for her contribution to the Content, Seriously meetup.

LinkedIn https://uk.linkedin.com/in/hazel-southwell-55781412

 


Don’t miss this – advice, tips and tricks for content strategy and content design

Here’s what you might have missed last month while you were busy with Brexit and related drama…

This blog is a round-up of the best of the Digital Content Academy newsletter in June. The newsletter is itself a round-up of the best advice, thinking, news and events in content strategy and content design.

The newsletter goes out every second Thursday. Don’t miss out.

News, advice, thought pieces

Government digital needs you!
The Government Digital Trends survey shows that, while the digital transformation agenda is a growing force in government, lack of skills is a major blocker.

Data + narrative = user journey
This is a brilliant case study, showing how you need to understand the analytics and also the narrative, the story arc, if you really want to craft a journey that works for users. So imaginative.

How millennials behave online
Confident, error-prone, different to everyone else.

Practical advice and how-tos

How to do remote moderated user testing
Common excuses for not doing user research: 1. no budget 2. don’t know what to test 3. don’t know who to test it on 4. actually don’t really know how to do it. (If you want to keep using those excuses, don’t read this post.)

Link to 1 thing, once only
This is the user experience rule we probably don’t follow enough.

Google’s style tips for UI
Writing copy for user interfaces? These will really help you up your game. Sample: ‘Focus on the user and what they can do with your app, rather than what you or your app is doing for the user.’

Tools

Amazing visual search tool
We love this tool. It scrapes Google search suggestions to provide keywords, but powerfully grouped into question facets. And then beautifully visualised.

Exactly what people do on your website
HotJar is a brilliant little tool to help you (and your clients) understand how people are using your site. Heatmaps, visitor recordings, conversion funnels and form analytics. Free to try.

And finally…

Exit strategy
A level-headed look at why you need a strategy in case you need to exit a position – be that a CMS, a social media channel or, say, a political union of countries. Emphasises the need to plan carefully, to account for what could go wrong, and to be prepared to act if the worst happens. Deserves to be widely read.

Get these newsletters

Sign up here for the Digital Content Academy newsletters, every second Thursday.

Prevent content from being a project blocker

A common time for organisations to take a long, hard look at their content is during a ‘web refresh’ project. This is when an organisation wants to update the look and feel of its website. It’s usually prompted by a business need – new functionality, rebranding after an acquisition or merger, or a simple update to keep the brand fresh. 

Often now, the scope of such projects goes beyond the website – complexity grows as we see more mobile access, more personalised content delivery as part of omnichannel environments, and more connectivity between software systems. So the term ‘web refresh’ is showing its age – but that’s a whole different article.

One of the common choke points during a web refresh project is content. At the end of a conference presentation, it’s not uncommon to be approached by a developer, manager, or other project team member with tales of woe about the state of their content. These reveal common themes:

  • “It’s been two years since we finished our end of the work, but the site hasn’t launched yet because they don’t have the content for it.”
  • “We had our user experience guy do the information architecture, but migrating the content over from the old system is such a nightmare.”
  • “We wanted our bid to be competitive so we excluded content, and the client has no idea how to deal with it, and we’re not prepared to deal with it.”
  • “We did this great design, and now we have to make all these adjustments because the content doesn’t fit.”

The systemic bias against content

The industry adage is that ‘content is king’, yet experience shows that it more often gets treated like the court jester. This bias against content is real. On digital projects, the visual designers are asked to mock something up to show the client. They might even be asked to mock up some functionality – a slider or a carousel. The content that goes into that mockup is often some dummy Latin text as a placeholder. The assumption is that the client will be persuaded by the beauty of the container, no matter what goes inside.


To use a metaphor, let’s pretend that your company is a coffee chain, and you ask an agency to update your business presence. They obsess about the signage, the shop windows, the furniture, the fancy barista equipment, the colour of the coffee cups and the angle of the lids. But when it comes to the actual coffee? They’ve brought in a couple of teenagers, handed them a jar of instant and an electric kettle, and poured something brown into the cup.

This is too often the case with content.

Look inside the digital agencies that get the web refresh contracts, from the boutique micro-consultancy to the world’s largest and most reputable, and you’ll be hard-pressed to find qualified content professionals. In fact, you’ll be hard-pressed to find content professionals at all. You will find developers and designers, because they are perceived as specialists and their work has therefore become valued.

Content, however, is perceived as ‘that stuff that anyone can do’. Agencies are happy to leave content to the client – hoping that the client can figure it out on their own.

Content development as a business skill

If the business adage is ‘content is king’, there’s an adage among content pros that goes something like ‘just because you can write, it doesn’t mean you can write professionally’. We all learned to write in primary school, but that writing bears little resemblance to the work that content professionals do. You might enjoy your Sunday bike rides, but that’s got nothing to do with the Tour de France.


So, writing is no longer ‘just’ writing; it’s no longer adequate to simply create copy. The craft has become content development – and it can get complicated.

To give an example, the difference between writing for business communication and writing for digital delivery is like the difference between making a sandwich at home and running a restaurant. It’s not just the amount of content that differs. It’s the planning and scheduling; it’s understanding the differences between writing for desktop and writing for mobile; it’s the tagging and metadata that make sure the content can be processed properly and is findable by search engines. To quote a client, “this is what separates amateur speculators from professionals.”

Also, let’s not forget the external forces that content developers need to factor into their work. One example is organic search. A professional content developer pays attention to the changes to the algorithms that search engines, particularly Google, use to determine what is ‘good’ content. Content developers need to understand the implications so they can adjust their writing styles, metadata, and schema use, to help search engines find content.

Putting content to work

We have established that content is central to how you describe your products and services. It’s the articles that people read. It’s the instructions that people follow. It’s the photo and the description, the infographic or chart, the product specs, and the supporting material that persuades consumers to click the ‘Buy’ button. Copy is the content that consumers see, and metadata is the content that consumers don’t see. Together, the copy plus metadata comprise content that can be searched and found, delivered and viewed, understood, and acted on.

What goes into the making of digital content starts with a strategy and culminates in the content itself. Here are some of the basic considerations.

The content structure

The structure, codified in a content model, defines how content works within delivery systems, such as a CMS (content management system). The model is created by determining all the kinds of content that need to be created, and how they work together to meet the business requirements. A content strategist would create a domain model, content types, and content flows, and then consolidate them into a content model. The developers or CMS integrators use this model to build rules about how content gets transported through the system, and delivered to a publishing system or shared with other software systems.
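As a rough illustration – not any particular CMS’s schema language – a fragment of a content model might be expressed like this in Python, with content types, their fields, and a relation between types (all names invented):

```python
from dataclasses import dataclass

@dataclass
class Author:
    name: str
    role: str

@dataclass
class Article:
    title: str       # drives the page title and listings
    summary: str     # re-used on index pages and in search results
    body: str
    author: Author   # a relation to another content type
    tags: list[str]  # taxonomy terms, used for routing and discovery

article = Article(
    title="What goes into CODA",
    summary="Structurally rich, semantically categorised content.",
    body="...",
    author=Author(name="R. Bailie", role="Content strategist"),
    tags=["content-strategy", "structured-content"],
)
```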

The semantics

There are various standards that technologies such as a CMS use to deliver content to other technologies. The content needs to conform to these standards. A content strategist would work with the technologists to determine which schemas are used, how the taxonomy is set up, which metadata fields are required and how they will be configured, how many channels the content needs to get published to, and so on.
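For a feel of what that semantic layer can look like, here’s a hedged sketch of schema.org-style metadata emitted alongside the copy. The schema.org Article type and its properties are real, but the values are invented for illustration:

```python
import json

# Metadata a CMS might emit as JSON-LD in the page head, so search engines
# and other systems can interpret the content.
metadata = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Prevent content from being a project blocker",
    "inLanguage": "en-GB",
    "datePublished": "2016-07-01",  # invented date, for illustration
    "keywords": ["content strategy", "web refresh"],
}

print(json.dumps(metadata, indent=2))
```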

The content

The copy must engage consumers, and fulfil their expectations in terms of user experience. A content professional also writes the adaptive copy that will be delivered to specific channels or outputs. They then add the metadata that allows systems to automatically process the content and provides search engines with the right information. The content also needs to be checked for editorial quality, factual accuracy, consistency, and technical integrity.

Remove the blockers from your project

Given the prominent role that content plays, and the complexity involved in the setup and management of content, it is time for organisations and their agencies to step up their game. Rather than minimise the role of content in digital projects, and the role of the professionals who develop it, it is in everyone’s best interest to involve those professionals throughout the project.

Involve a content strategist while the vision and strategy are being formulated. Have the strategist work alongside the CMS integrators to develop the content model, or at least contribute to it. The content strategist will understand the vision for delivering content to meet business requirements, and their perspective will inform what the content model looks like.

Assign content strategists or content designers to work alongside the user experience team as they flesh out the presentation framework. Content takes time to develop, whether it’s new content or content rewritten to work in the new content model or on the new site, and this gives the content professional time to work on the launch-critical content.

Have a content professional work with the client-side writers to teach them how content will work in the new system, and what the expectations are around creating and maintaining the content. This is often a new experience for writers, who need some training around topics such as using formats and templates, semantic structures, metadata, and taxonomies. They may also need help with content governance, such as setting up and following workflows.

In the end, the best strategy toward removing blockers from content is to embrace the role of content and face the challenge head on: put content in the centre of your project. Getting your content in order is an integral part of the process – and integral suggests integrating content into the overall fabric of a project.

There is no magic bullet, but when done right, the result *is* magic.


Evidence-based content strategy and design

There is a lot of talk about evidence-based design these days. A quick search for evidence-based design, or EBD, returns results mostly focused on health care and the construction industry. Both of these professions have a vested interest in developing an empirical understanding of how people interact with their environments so that their practices can improve the effectiveness of project outcomes.

In healthcare, this means improving patient and staff well-being, patient healing, stress reduction, and safety.

In construction, evidence-based design aims to improve the performance of buildings; it looks not only at the ways people interact with the built environment, but also at how the various components of buildings interact as a complex system.

More

Evidence-based design method – Wikipedia

Evidence-Based Design Journal

Evidence-based design in digital services

In the realm of interactive digital services, the term evidence-based design has crept in, largely unheralded. Its main benefit is seen as credibility.

Evidence-based design bases decisions on research, both user and scholarly, and increases the likelihood of effectiveness and ultimately success. Human Factors International, a consultancy known for its scholarly contributions and its accreditation program, describes the process as:

  • clarify the question being asked regarding UX methods or design
  • identify sources of research or best practice to help answer the question
  • find available research or best practice
  • review for credibility and applicability
  • check to see if other research or practice has come to the same conclusions
  • save copies of the materials along with links or citations for future reference
  • communicate and apply what you have learned

More

Evidence-Based Best Practices and Research – Human Factors International

Evidence-based content strategy and design

The more research we do into evidence-based design, the more Scroll can attest that it has been using an evidence-based design approach to content strategy and content design all along.

The methodologies are quite similar.

Evidence-based content strategy

Content strategy recognises that an organisation is a complex system, where various components interact to optimise content performance. A successful project outcome requires foresight and planning.

The discovery phase of a content strategy involves making a diagnosis, and then finding the right prescription.

The steps are:

  1. Clarify the organisational problem that content is being asked to solve.
  2. Research the content requirements of the organisation, the content consumers, the content developers, the technologies used to manage content, and the content itself.
  3. Conduct a gap analysis by looking at the difference between the current state and the ideal state.
  4. Determine the gaps that have prevented the organisation from reaching their ideal future state.
  5. Research content lifecycles, and identify best practices for the context.
  6. Map out a high-level solution and validate for feasibility and applicability.
  7. Communicate findings and get buy-in to proceed with implementation.

Once there is organisational clarity and agreement around the roadmap to a solution, the evidence-based content design process takes over.

Evidence-based content design

Once the big-picture goals have been established, the implementation phase begins. This is where content design comes in.

The content has to work from an editorial perspective, a user experience perspective, a comprehension perspective, and a technical perspective before it’s fit for purpose. That doesn’t happen by accident:

  1. Use evidence from analytics, user research and elsewhere to clarify the problem the content is being asked to solve (the user need).
  2. Research the requirements that allow the content to make its users successful at their tasks (the acceptance criteria).
  3. Find the best practices for developing and delivering content in that context.
  4. Validate for credibility and applicability.
  5. Communicate findings and create the content.

Qualifying this approach as evidence-based design

Developing content and content systems is subject to the same rigour that goes into designing a healthcare environment or a building envelope that improves the performance of a complex system.

There is no room for opinions and conjecture.

An organisation must know they have a better system than before, and that their new system delivers better-performing content than before. They must be able to demonstrate this with data.

In content design, this is done through an empirical understanding of how people interact with content, combined with deep domain knowledge of editorial processes, learning theory, comprehension techniques, information architecture, and content development theories and practices. Once the content is live its performance can be measured using various metrics from web analytics, as well as through direct feedback from users.

In content strategy, this is done through a knowledge of content design combined with an understanding of the various ecosystems used for content development, management, and delivery.

In both disciplines, the experts at Scroll have a keen understanding of using content as a business asset to further organisational goals.