February 12, 2015

How Can We Help Users Judge the Relevance of Content?

It’s not difficult for a user to find information—it’s as simple as opening a user manual or performing a Google search. What users often find difficult is assessing whether the information they find is relevant to their specific information need. A crucial component of end user assistance is, therefore, supporting the user in judging the relevance of what they find.

Are you a technical communicator working as an information architect to increase your content’s findability, with the goal of creating a happy user experience for your customers? Then this article is for you. Read on to find out how you can better support users in making accurate relevance judgments by designing your content for enhanced findability.

My experience is that technical communicators generally don’t place much importance on supporting users in making relevance judgments. I am sad to conclude that many still believe we must reach our audience by “delivering” intelligent content.

Such a view implies that the user is passive, and thus undermines the importance of supporting relevance judgment. This is perhaps one reason why users often neglect technical information in favor of other information sources, which provide more clues in support of such judgments.

Why is judging relevance important?

When users are uncertain about something related to a product and its usage, they ask questions and search for answers. They jump around from page to page, skimming content to quickly judge what it is all about and how it’s related to their current concerns.

If a user cannot judge the relevance of content, they will not read it—which means their information need is left unsatisfied, and you end up with an unhappy customer.

If supporting users in judging relevance is important in the book paradigm, it is even more important in the context of the web, where searching users land on individual pages directly, as Mark Baker points out in his article, Subject First; Context Afterward.

Many think that Google will always take you to pages that are relevant, since pages are search engine optimized. The thing is that the pages you get from Google may be relevant to the query, but irrelevant to the information need.

This happens because humans are bad at expressing their information needs, which means that the query they type may not fully represent their information need. In information science, this distinction is captured by the terms relevance and pertinence.

Relevance depends on the searcher’s subjective perception of the degree to which content fulfills the information need. Pertinence means that a page is relevant to the query but may be irrelevant to the information need.

Before we begin examining aspects of relevance judgment, it is important to acknowledge that there are a number of cases I am not going to address here. One example is the user who is looking for something that does not exist because they believe the product can do something it cannot; such a user becomes frustrated even if the content they find provides clear clues for judging relevance.

Likewise, a user may judge a found text to be relevant to their concern (thus deeming a product suitable for accomplishing something it was not intended for), even if the writer never intended to address such a need.

What tricks do users employ to judge relevance?

There are many tricks users employ to judge the relevance of something they find. Let’s take a look at some of them:

1) The actual text. Once a user has found a page, they do not read it with the intention of thoroughly understanding it. Rather, they skim the text to judge what it is all about, assessing the relevance of the content in relation to the information they are seeking. The title, first sentences, and subsequent headlines all give important clues.

Users employ their prior experience and knowledge as a frame of reference for interpreting and understanding what they have read. A certain level of reading comprehension is required to be able to judge the relevance of a given text. The better reading skills a user has, the better they are at judging relevance. What reading skills does your target group have?

2) The metadata. Users look at a page from a broader perspective to identify clues that support relevance judgment. Metadata, such as tags, conveys the “about-ness” of content, and a text may be tagged with metadata values that do not appear in the text itself (see the sketch after this list).

3) The placement in a content hierarchy. When traversing a table of contents, users pick up clues from each content level. Breadcrumbs show the position in the hierarchy. When navigating a website, each page has a placement, and the site provides a hierarchical context.

But a static content hierarchy is not a good way to help users judge relevance. And, as people rely on the web more and more, users will be less and less exposed to tables of contents. Thus, content hierarchy literacy will soon be lost.

4) The placement in a product interface. When a user opens Help by clicking on part of the interface, they judge that the content is “about” that UI component. Thus, the product interface provides a powerful context for judging relevance (and that is why embedded user assistance is so popular).

5) The keywords used to search. The keywords used when searching the web or browsing an index provide a context.

6) The sources to which related links point. Users may need to follow a link to judge the relevance of content, as I describe in my article, Can Tech Writers Create Links without Even Writing Them?
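To make the metadata clue (point 2) concrete, here is a minimal sketch in Python; the facet names, values, and topic are invented for illustration. It shows how a topic can carry “about-ness” values that never appear in its body text, and how those values can be surfaced next to the title so a skimming user can judge relevance.

```python
# A minimal sketch: a topic carries facet tags ("about-ness") that may not
# appear anywhere in its body text. Facet names and values are illustrative.
from dataclasses import dataclass, field


@dataclass
class Topic:
    title: str
    body: str
    facets: dict[str, str] = field(default_factory=dict)


def aboutness_summary(topic: Topic) -> str:
    """Render the facet tags next to the title so a skimming user can judge relevance."""
    tags = ", ".join(f"{name}: {value}" for name, value in topic.facets.items())
    return f"{topic.title} [{tags}]"


topic = Topic(
    title="Preparing the installation",
    body="Unpack the unit and check the mains voltage before mounting.",
    facets={"product": "X2000", "user role": "service technician", "task": "installation"},
)
print(aboutness_summary(topic))
# Preparing the installation [product: X2000, user role: service technician, task: installation]
```

Note that none of the facet values occur in the body text; they come from classification, yet they are exactly the clues a skimming user needs.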

What outperforms all of the above tricks?

The dialog that takes place in the context of a human conversation provides arguably the most efficient means for humans to judge relevance.

Furthermore, the dialog provides us technical communicators with important knowledge when it comes to developing metadata and classifications with the intention of increasing findability. Let’s take a look before digging into how we can mimic parts of the dialog to increase findability.

Imagine you are a user who is stuck in product use. Seeking a solution, you ask your colleague a question such as, “Why doesn’t this thing work?” If your colleague deems the question to be unclear, they may ask a number of questions in response in order to understand what you are asking (following the Situation Assessment Dialog, as outlined in my article, Will an answer be easy to find if we mimic human dialog in user assistance?).

Once you have received an answer from your colleague, you put that answer through a relevance assessment. You access your experience and knowledge in order to assess the relevance of the answer. You might not understand the answer at all, and in that case you ask your colleague, “What do you mean?”

If you receive an answer you do understand, you might ask yourself, "Was this really what I was looking for?" If your colleague provides an answer you immediately deem irrelevant, you may respond, "I'm sorry, I was not looking for how to perform X – but instead for how to perform Y..."

If your colleague has quickly answered (without apparently assessing your situation) with something which appears on the surface to be correct, you might question their answer. You proceed by asking, "OK, but is what you’re saying applicable to my product?"

This dialog continues as long as it takes for you to assess the relevance of the answer you receive. I will refer to this dialog as the “Relevance Assessment Dialog.”

Why do users turn to online communities instead of the user manual?

An interesting observation is that these two dialogs often occur naturally in online communities. You can often observe that the user asking the question provides some of the data that an expert initiating the Situation Assessment Dialog would naturally ask for. The question from a user might include statements like, “I’m using Windows XP. I am trying to perform ABC, and I cannot use X to perform Y since Z happens,” and so on.

If someone from the online community is uncertain about the situation assessment information provided by the user, they might ask a follow-up question, such as, “What version of X are you using?” Now, if the user who posted the original question becomes uncertain about the relevance of someone’s answer, they might respond with, “Is what you’re saying valid for Windows XP?”

So, when one human asks another human questions related to product use over the phone or via text messaging, for example, the Situation Assessment and Relevance Assessment Dialogs are more likely to take place than when the inquirer and the respondent are physically in the same space. This is because the physical context often provides answers to many assessment questions.

So, how can we help users judge the relevance of content?

When designing end user assistance, consider the tricks humans use to judge relevance. You should, for example:

  • Pay special attention to the title and first sentences of each section or topic. Write them so that they reveal what the information is all about. Don’t start with background information; do as journalists do and put the most important information up front. Consider using images or even videos to reveal the “about-ness”.
  • Write content in reasonable chunks that cover one subject each. Writing long sections of lengthy paragraphs covering many subjects forces the user to skim a lot of content. What they are looking for may be buried in a big paragraph of text. I discuss this more here.
  • Use bullet lists, images, tables, and headlines that make the “about-ness” of the content stick out. This, too, makes content easier to skim.
  • When crafting a book, design the content structure so that it provides clues which lend to relevance judgment. A section placed in “Installation>Installation of hardware X>Preparation” obviously has to do with preparing the installation of hardware X. Such “about-ness” would not be as easy to judge if the section is placed in “Design descriptions>Installation of hardware X>Preparation.”
  • Provide links to related information that answers follow-up questions (as depicted in Can Tech Writers Create Links without Even Writing Them?).
  • When writing for the web, consider optimizing content for search engines (SEO). The purpose of SEO is not primarily to support users in judging relevance; rather, SEO makes pages visible in web searches so that they are found, which yields good pertinence. What I am talking about here could instead be called “user relevance judgment optimization”.
  • Display metadata on each page, section, or topic to reveal the “about-ness” and validity of the content (a small sketch follows this list).
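As an illustration of the last two bullets (placement in a hierarchy and displayed metadata), here is a small sketch. The breadcrumb, facet names, and values are invented, and the plain-text rendering stands in for whatever page template you actually use.

```python
# Sketch: a page header exposing both the hierarchical placement (breadcrumb)
# and the topic's metadata, so "about-ness" and validity are visible at a glance.
# All names and values are illustrative.
def render_header(breadcrumb: list[str], metadata: dict[str, str]) -> str:
    lines = [" > ".join(breadcrumb)]
    lines += [f"{name}: {value}" for name, value in metadata.items()]
    return "\n".join(lines)


print(render_header(
    breadcrumb=["Installation", "Installation of hardware X", "Preparation"],
    metadata={"Valid for product": "Hardware X", "Valid for versions": "2.1 and later"},
))
# Installation > Installation of hardware X > Preparation
# Valid for product: Hardware X
# Valid for versions: 2.1 and later
```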

Metadata, generated from content classification using multiple taxonomies, is of special interest—at least for me—since it can be used to mimic a Situation Assessment Dialog. I argue that this dialog is an efficient way to support users in judging relevance. Consider the following two scenarios as examples.

First, consider the scenario of a user who answers the Situation Assessment Questions and, through the process, almost automatically picks up clues that inform them of the content’s relevance. If the manual asks the user, “What type of product are you using?” the user is assured that the answers they receive are valid for the type of product they have selected.

Consider an e-commerce site where you find a consumer appliance, such as a dishwasher, by making a number of selections in various search filters (like price, color, type, brand, etc.). Once you have made certain selections, you know that the appliances displayed are applicable to your preferences.

Secondly, imagine a user who lands on a page without having gone through the Situation Assessment Dialog. The manual, as a web page, displays the answers to each Situation Assessment Question as metadata. These answers act as a “stand-in” for the relevance assessment questions.

This is analogous to a situation where a customer who wants to purchase an appliance conducts a Google search and lands on an e-commerce page that displays the filter selections that apply to the appliance they are searching for.
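To tie the two scenarios together, here is a minimal sketch of how the same classification metadata can both drive a facet filter that mimics the Situation Assessment Dialog (scenario one) and be displayed as stand-in answers when a searcher lands on a topic directly (scenario two). The facet names, values, and topics are invented for illustration.

```python
# Sketch: the same classification metadata serves both scenarios.
# Scenario 1: the user answers Situation Assessment Questions as facet selections,
#             and the topic list narrows accordingly (like an e-commerce filter).
# Scenario 2: a user who lands directly on a topic sees its facet values displayed
#             as stand-in answers to those questions.
# Facet names, values, and topics are illustrative.

topics = [
    {"title": "Preparing the installation", "product": "X2000", "task": "installation"},
    {"title": "Replacing the filter",       "product": "X2000", "task": "maintenance"},
    {"title": "Preparing the installation", "product": "X3000", "task": "installation"},
]


def narrow(topic_list: list[dict], selections: dict[str, str]) -> list[dict]:
    """Scenario 1: keep only topics matching every facet selection made so far."""
    return [t for t in topic_list
            if all(t.get(facet) == value for facet, value in selections.items())]


def stand_in_answers(topic: dict) -> str:
    """Scenario 2: display the topic's facet values as answers to the assessment questions."""
    return ", ".join(f"{facet}: {value}" for facet, value in topic.items() if facet != "title")


# Scenario 1: "What type of product are you using?" -> X2000; "What are you trying to do?" -> installation
hits = narrow(topics, {"product": "X2000", "task": "installation"})
print([t["title"] for t in hits])   # ['Preparing the installation']

# Scenario 2: a searcher lands directly on the first hit and sees the stand-in answers
print(stand_in_answers(hits[0]))    # product: X2000, task: installation
```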

As a final thought, why not include some aspects of the Situation and Relevance Assessment Dialogs as an introduction to each topic in your content? This way, each topic becomes more like an online knowledge-base article. What do you think?


SeSAM is a methodology for designing according to dialogism, or for the searching user. Contact Information Architect Jonatan Lundin at jonatan.lundin@excosoft.com to learn more.

Explore previous editions of the Designing for the Searching User article series via the links below, and stay tuned for upcoming ones!

Previous articles in series: 

Will an Answer be Easy to Find if We Mimic Human Dialog in User Assistance?

Should the Answer to a User Question be a Short or a Long Topic?

How Do We Make Short Answers Easy to Find?

How Do We Predict Use Questions?

What Types of Questions Do Users Ask?

How Do We Know Which User Tasks to Write in Manuals? 

Is Intelligent Content a Good Idea in Technical Communication?

Why is it Important to Design for the Searching User? 

Why Should Technical Communicators Avoid Target Group Analysis?

Can Tech Writers Create Links without Even Writing Them? 

About the author

Jonatan Lundin

Jonatan is a pioneering information architect with over 20 years of experience in XML documentation and designing for findability.
