How do you know if your content is succeeding or failing?

You need to set up a system to measure your content, so you know if it is effective, says Andrew Charlesworth, content strategy consultant at Scroll.


If you have responsibility for creating content, managing it, or aligning web strategy with your organisational aims, you need to know if your content is doing its job.

Are your users and customers responding to your content as you planned? Do they need more of it, less of it, or something completely different from what you’re providing?

Performance measurement is the foundation of managing content to ensure your site provides a continuously improving experience for your users and customers. As the business aphorism says: what gets measured, gets managed.

How to measure content

The best way to measure the performance of your content is usability testing among your target audience.

Note “usability testing”, not “user research”. Conventional user research is too high level to give real insight into content. It asks broad questions such as ‘do you visit our site?’, ‘what do you visit our site for?’ and ‘do you find it easy or hard to use?’

Usability testing lets you observe how visitors behave on your site: what they click on, what they read, what they ignore, how they complete tasks — their journeys.

However, I accept that usability testing is expensive and time-consuming, even for digital leviathans that can afford to keep products in a state of permanent beta. So, for routine measurement, you will be limited to web analytics.

The basic metric still used for websites — unique visitors per month — tells you very little about whether your content is doing its job. Is the number rising, falling or staying steady over time? And what does a rise, fall or steady state even mean?

“Engagement” isn’t a metric. Nor is “raising awareness”. If 10,000 unique visitors a month view a page, so what? What do you want them to do? 

Couple metrics to desired outcomes

What you really need to know is: is your content motivating the actions you want from visitors to your site? For example: buying your product or service, complying with your regulations, donating to your cause, or completing your training course.

Find metrics that are proxies for the outcomes your content is intended to precipitate.

Focus on measuring the performance of user journeys (how users see your site), not pages (how the CMS organises your site).

No single number from Google Analytics is likely to suffice. Any one measurement is reductionist.

Don’t measure things just because they are easy to measure and expect them to have meaning: unique visitors, dwell time, exit rate, for example. A measure of your content’s performance may well comprise a blend of metrics.
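As a minimal sketch of what a blend might look like in practice — the metric names, weights and “worst/best” anchors here are illustrative assumptions, not a standard:

```python
# Illustrative sketch: blending several analytics metrics into a single
# content performance score. Metric names, weights and normalisation
# anchors are invented for the example.

def normalise(value, worst, best):
    """Map a raw metric onto 0-1, clamped; swap worst/best when lower is better."""
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))

WEIGHTS = {
    "task_completion_rate": 0.5,  # higher is better
    "search_exit_rate": 0.3,      # lower is better
    "bounced_returns": 0.2,       # lower is better
}

def blended_score(metrics):
    """Weighted blend of normalised metrics for one user journey."""
    normalised = {
        "task_completion_rate": normalise(metrics["task_completion_rate"], 0.0, 1.0),
        # "Lower is better" metrics: worst and best anchors are swapped.
        "search_exit_rate": normalise(metrics["search_exit_rate"], 0.4, 0.0),
        "bounced_returns": normalise(metrics["bounced_returns"], 0.3, 0.0),
    }
    return sum(WEIGHTS[name] * normalised[name] for name in WEIGHTS)

print(blended_score({
    "task_completion_rate": 0.72,
    "search_exit_rate": 0.08,
    "bounced_returns": 0.05,
}))  # one blended number to track over time, instead of three in isolation
```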

Get to the why

Whether one number or many, remember that analytics data requires interpretation. Don’t allow the metric to become more important than the thing measured, which can happen if management sets targets. Remember Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure.

Data tells you what is happening, not why it is happening. For example, GOV.UK content designers have an analytics tool in their CMS called Content Data. One of the metrics it provides is the proportion of users who leave any given page for on-site search.

This is reckoned to be a proxy for users who haven’t found what they are looking for on the page. Maybe the title is misleading and needs to change. Or maybe a link from another page is inappropriate. The data is a prompt to examine the page, to look beyond ‘what’ to ‘why’.
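If your own analytics tool doesn’t surface a metric like this, the underlying calculation is simple. Here’s a hypothetical sketch — the event-log shape and page names are invented, and this is not how Content Data itself is implemented:

```python
# Hypothetical sketch of a "searches from page" metric: the proportion
# of views of a page that are immediately followed by on-site search
# within the same session. The event format is an assumption.

from collections import defaultdict

# (session_id, event_type, page) in chronological order
events = [
    ("s1", "pageview", "/renew-passport"),
    ("s1", "search", None),
    ("s2", "pageview", "/renew-passport"),
    ("s2", "pageview", "/track-application"),
    ("s3", "pageview", "/renew-passport"),
    ("s3", "search", None),
]

views = defaultdict(int)
search_exits = defaultdict(int)

for (sid, etype, page), nxt in zip(events, events[1:] + [None]):
    if etype != "pageview":
        continue
    views[page] += 1
    # A search exit: the same session's very next event is a search.
    if nxt and nxt[0] == sid and nxt[1] == "search":
        search_exits[page] += 1

for page in views:
    rate = search_exits[page] / views[page]
    print(f"{page}: {rate:.0%} of views led straight to on-site search")
```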

In the middle of a multi-step journey, a large proportion of users reverting to on-site search might show a failure in the service at or near that point. “Might”, but not necessarily.

For example, in a service with a six-step user journey, if 10% of users abandon the journey before the second step, does that mean the content on step 1 is failing for those 10%? Or is the service inappropriate for them, and they realise it without going further? In that case the content on step 1 is succeeding: it deters ‘wrong’ users from wasting their time.

What if 50% of users abandon the service at step 4? Is the problem on step 4, or is it some non sequitur in step 3 that wrong-foots users?
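Analytics can at least tell you where the drop-offs are, even if not why. A minimal sketch, with made-up counts mirroring the examples above:

```python
# Made-up step counts for a linear six-step journey: 10% abandon before
# step 2, and half of those who reach step 4 get no further. The 25%
# "investigate" threshold is an arbitrary choice for the example.

step_counts = [
    ("Step 1", 1000),
    ("Step 2",  900),
    ("Step 3",  880),
    ("Step 4",  860),
    ("Step 5",  430),
    ("Step 6",  425),
]

for (prev_step, prev_n), (step, n) in zip(step_counts, step_counts[1:]):
    drop = 1 - n / prev_n
    flag = "  <-- investigate" if drop > 0.25 else ""
    print(f"{prev_step} -> {step}: {drop:.0%} drop-off{flag}")
```

Note that the output only flags where to look: whether the fault lies on the flagged step or the one before it is still a question of interpretation.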

This can be difficult enough to resolve in simple linear services. User journeys with multiple branching paths soon add up to bewildering complexity.

The best you can do is examine the steps and form a hypothesis as to the remedy. Test your solution with users in the live environment. Measure again after an agreed period of time, and iterate accordingly.

Ideally, you’d run iterative rounds of usability testing and watch how real users navigate the service with each change.

Prioritise

Few organisations are resourced to analyse their entire site in this much detail. So agree content performance indicators (CPIs) with stakeholders for the content that has the biggest impact on the greatest number of users. This will be the content required for the tasks that the majority of users come to the website to do.

Agree review dates or other triggers, such as external events or spikes in performance. Establish a schedule of analysis, iteration and testing. 

Use CPI reviews to improve the performance of your content. Update the out-of-date and retire the redundant. Apply lessons from well-performing content to improve the failing.

All new content should have CPIs, agreed before drafting, that reflect the job the content is there to do. Use those CPIs to shape the content to that job.

Resist stakeholders who want you to publish inappropriate content with weak CPIs. If content hasn’t got a meaningful CPI, then it doesn't have a job, and it shouldn’t be on your site.


Want to talk more about measuring content effectiveness? Give us a shout @ScrollUK
