Measuring Information Quality: What’s Missing?

A lot has been written about information quality recently. But how do we measure it?

There is a perception out there that information quality is generally very poor. But how do we know? There is all that digital content out there, yet most of it is never read, and only a fraction of that is ever translated (translators are often the only readers of that content). Just because a user guide isn't read doesn't mean the information within is poor quality. Perhaps the accompanying software's usability is so spectacularly good that the guide isn't needed? Who knows?

In the GILT industry, information quality is all too often only assessed in terms of the cost, time, and effort to produce and then translate content (and usually one function is measured in isolation of the other). We have all kinds of metrics about “spend” published, time-to-market statistics analyzed, the opinions of professional linguists and terminologists debated, complicated mathematical formulae promulgated (trust me, if you reach that level you’ve clearly no real work to do), QA checklists written, certifications from standards bodies waved under our noses, and all the rest, in an attempt to define information quality. All good, though how efficient or applicable to the real world some of these things are is debatable.

However, often there’s a decider of information quality that is missing from these methodologies: the user of the information.

We need to move the key determinant of information quality to the user community: engaging users, and researching, analyzing, and understanding how users search for, retrieve, and use information to solve problems. For example, what search keywords and phrases do they use? Which pages do they read the most? Which parts of those pages are read, and how? And so on.

The tools and opportunities for this are everywhere. Ever studied web server logs? Done any eye tracking studies (see image below) before and after an information quality project? Conducted comprehension studies on the outputs? Observed how real users consume information? Found out what terminology they actually use when helping each other, on support forums, and when they customize and extend software? Reviewed what keywords they use for searching or analyzed user comments?
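To make the first of those suggestions concrete, here is a minimal Python sketch of mining web server logs for the most-requested documentation pages. The log lines and page paths are invented for illustration; real logs in Apache's "combined" format would be read from a file rather than a list.

```python
import re
from collections import Counter

# Sample lines in Apache "combined" log format (invented for illustration).
LOG_LINES = [
    '10.0.0.1 - - [01/Mar/2011:10:00:00 +0000] "GET /docs/install.html HTTP/1.1" 200 5120 "-" "Mozilla/5.0"',
    '10.0.0.2 - - [01/Mar/2011:10:01:00 +0000] "GET /docs/errors.html HTTP/1.1" 200 2048 "-" "Mozilla/5.0"',
    '10.0.0.3 - - [01/Mar/2011:10:02:00 +0000] "GET /docs/errors.html HTTP/1.1" 200 2048 "-" "Mozilla/5.0"',
]

# Pull the requested path out of the quoted request portion of each log line.
REQUEST_RE = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"')

def top_pages(lines, n=5):
    """Return the n most-requested paths from web server log lines."""
    hits = Counter()
    for line in lines:
        match = REQUEST_RE.search(line)
        if match:
            hits[match.group(1)] += 1
    return hits.most_common(n)

print(top_pages(LOG_LINES))
```

Even a simple count like this tells you which topics users actually reach for; adding referrer and search-query fields to the analysis would answer the keyword questions above.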


So, let’s look at costs and translatability issues, post-editing metrics, number of flagged errors in QA, and so on, sure. But let’s connect it to the user experience too, regardless of language, and give the user the final say. Make users the arbiters of information quality.

Otherwise, we’re really just talking to ourselves.

4 thoughts on “Measuring Information Quality: What’s Missing?”

  1. Pingback: Tweets that mention Measuring Information Quality. What's Missing? (Blogos): -- Topsy.com

  2. Zachary Overline

    Tex Texin said something interesting about this in an interview we conducted with him about a month back. When we asked how a company’s money could be better spent on things other than strict content management, leverage-focused content creation, word-counting, and so on, he replied:

    “Well, take Google and other Web-based companies. They rely on analytics to track user behavior. By focusing on the types of user behavior that can be tracked, they follow trends and get immediate quantitative feedback on what works and what doesn’t. So instead of perfecting, say, the word-count of an explanation for a certain function, they throw the explanation online and track its effectiveness.

    From there, they can use these statistics to give feedback to their authors and translators, and to inform their processes, thus improving the content—and their approach to content—for future use. Focusing more on user experience through analytics will give organizations a lot more bang for their buck in the long run.”

    He also talked about how controlled language can nullify the clarity that comes from variety.

    I’m not trying to promote our blog here (honestly), but it is a really good interview, if you’re interested: http://bit.ly/bbrDBg

    1. ultan Post author

      Yes, I read the interview when it came out. Good points. I must RT the link on Twitter. Many thanks for contributing!

  3. Roman Mironov

    I agree with your opinion. The way users access and read information is changing, and those who create this information need to follow the trends closely. For instance, I sometimes find myself using a search engine to find a solution to my problem instead of searching in the thorough user guide that comes with a product, which potentially has a better/quicker solution than a search engine can provide. Even though I always try to read user guides in their entirety, when it comes to problems, the first, unconscious urge is often to look for a solution online. For me, it’s a sign that printed and CD-based user guides might eventually be replaced with online documentation created with a specific focus on the keywords users would search for to find that documentation through a search engine.

    Thank you,
    Roman

Comments are closed.