Wednesday, August 27, 2008
Transparency in Information Retrieval
Today, I'd like to discuss transparency in the context of information retrieval. Transparency is an increasingly popular term in the search world these days--perhaps not surprising, since users are finally starting to question the idea of search as a black box.
The idea of transparency is simple: users should know why a search engine returns a particular response to their query. Note the emphasis on "why" rather than "how". Most users don't care what algorithms a search engine uses to compute a response. What they do care about is how the engine ultimately "understood" their query--in other words, what question the engine thinks it's answering.
Some of you might find this description too anthropomorphic. But a recent study reported that most users expect search engines to read their minds--never mind that the general case goes beyond AI-complete (should we create a new class of ESP-complete problems?). What frustrates users most, though, is when a search engine not only fails to read their minds, but also gives no indication of where the communication broke down, let alone how to fix it. In short, a failure to provide transparency.
What does this have to do with set retrieval vs. ranked retrieval? Plenty!
Set retrieval predates the Internet by a few decades, and was the first approach used to implement search engines. These search engines allowed users to enter queries by stringing together search terms with Boolean operators (AND, OR, etc.). Today, Boolean retrieval seems arcane, and most people see set retrieval as suitable for querying databases rather than for querying search engines.
The biggest problem with set retrieval is that users find it extremely difficult to compose effective Boolean queries. Nonetheless, there is no question that set retrieval offers transparency: what you ask is what you get. And, if you prefer a particular sort order for your results, you can specify it.
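To make that transparency concrete, here is a minimal sketch of set retrieval over a toy inverted index--my own illustrative example, not any particular engine's implementation. A Boolean query returns exactly the set of documents that satisfy it, nothing more and nothing less:

```python
# Toy inverted index: term -> set of ids of documents containing that term.
index = {
    "jaguar": {1, 2, 5},
    "car":    {2, 3, 5},
    "animal": {1, 4},
}

def boolean_and(*terms):
    """Documents containing ALL of the given terms."""
    result = None
    for term in terms:
        postings = index.get(term, set())
        result = postings if result is None else result & postings
    return result or set()

def boolean_or(*terms):
    """Documents containing ANY of the given terms."""
    result = set()
    for term in terms:
        result |= index.get(term, set())
    return result

print(boolean_and("jaguar", "car"))    # jaguar AND car -> {2, 5}
print(boolean_or("jaguar", "animal"))  # jaguar OR animal -> {1, 2, 4, 5}
```

The transparency lives in the semantics: every returned document provably satisfies the query expression, so the question "why is this result here?" always has an answer.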
In contrast, ranked retrieval makes it much easier for users to compose queries: users simply enter a few top-of-mind keywords. And for many use cases (in particular, known-item search), a state-of-the-art implementation of ranked retrieval yields results that are good enough.
But ranked retrieval approaches generally shed transparency. At best, they employ standard information retrieval models that, although published in all of their gory detail, are opaque to their users--who are unlikely to be SIGIR regulars. At worst, they employ secret, proprietary models, either to protect their competitive differentiation or to thwart spammers.
Either way, the only clues that most ranked retrieval engines provide to users are text snippets from the returned documents. Those snippets may validate the relevance of the results that are shown, but the user does not learn what distinguishes the top-ranked results from other documents that contain some or all of the query terms.
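To see what stays hidden, consider a minimal sketch of the sort of standard model mentioned above--plain TF-IDF scoring over a toy collection. The documents, query, and weighting here are my own illustrative assumptions, not any particular engine's formula:

```python
import math
from collections import Counter

docs = {
    1: "jaguar is a large cat native to the americas",
    2: "the jaguar car brand builds luxury sports cars",
    3: "luxury cars and sports cars on display",
}

tokenized = {doc_id: text.split() for doc_id, text in docs.items()}
# Document frequency: in how many documents does each term appear?
doc_freq = Counter(term for tokens in tokenized.values() for term in set(tokens))

def score(query, tokens):
    """Simple TF-IDF score: sum over query terms of tf(term) * idf(term)."""
    tf = Counter(tokens)
    total = 0.0
    for term in query.split():
        if doc_freq[term]:
            total += tf[term] * math.log(len(docs) / doc_freq[term])
    return total

query = "jaguar cars"
ranking = sorted(tokenized, key=lambda d: score(query, tokenized[d]), reverse=True)
print(ranking)  # [2, 3, 1] -- a ranked list; the scoring behind it is invisible
```

Even in this stripped-down form, the ordering depends on term statistics the user never sees, and a production engine layers many more signals on top--which is exactly where the transparency goes.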
If the user is satisfied with one of the top results, then transparency is unlikely to even come up. Even if the selected result isn't optimal, users may do well to satisfice. But when the search engine fails to read the user's mind, transparency offers the best hope of recovery.
But, as I mentioned earlier, users aren't great at composing queries for set retrieval, which is how ranked retrieval became so popular in the first place despite its lack of transparency. How do we resolve this dilemma?
To be continued...
8 comments:
- Max L. Wilson said...
I hear what you are saying here Daniel, but I think it needs to be considered carefully against the times when transparency has been considered negative. The examples almost unanimously note that users, when given transparency, overwork to get the system to 'better understand them'. Users were found to be refining their queries, despite having found the results they liked, in order to make sure the results were exactly relevant, with no errors. See a Semantic Web UI paper on recommenders and a CHI paper on image search.
I'm not saying it won't be helpful for search. Wordbars is a search UI that I find interesting for adding transparency.
Interesting posts! - August 28, 2008 at 2:55 AM
- Daniel Tunkelang said...
Max, first, thanks for the links!
I think you're making a few points, which I'll try to restate and address.
First, transparency isn't a panacea for the problems of information seeking, and systems that offer transparency often have worse performance than systems that don't. That does not mean that transparency is to blame; rather, it may be that systems offering transparency have had to make other design decisions to do so, such as limiting themselves to techniques they can easily explain to users.
Second, if the conclusions of the Cramer et al. paper are generalizable, then transparency matters most when users focus on recall and least when users focus on precision. That may be related to transparency being more valued in the enterprise than on the web. I'll have to stew on that one.
Third, as I read the CHI paper, it actually supports my case that users benefit from transparency--although in this case, the context is training a classifier rather than retrieving search results. Because users don't think like machine learning algorithms, they don't necessarily supply the best training examples, and can actually confuse the models because of their lack of visibility into the process.
Finally, I do like search UIs like Wordbars that offer transparency. I'm looking to see a lot more of these at HCIR! - August 28, 2008 at 11:11 AM
Max: At the risk of going off-topic, the Wordbars work that you point out reminds me of Marti Hearst's TileBars, from 1995:
http://www.sigchi.org/chi95/Electronic/documnts/papers/mah_bdy.htm
I'm also vaguely remembering other work, though I don't know by whom--only that it was from 10-15 years ago--where the distribution of query terms in the document was visually represented as a normalized, stacked bar. That is, imagine a horizontal bar with the first 34% in blue, the next 20% in green, and the final 46% in red, where blue, green, and red correspond to the 3 terms of your 3-word query (obviously, 2 colors for a 2-word query, etc.).
That way, the user could see the relative proportion of their query terms in the document. - August 28, 2008 at 11:55 AM
The following is not exactly the work I was referring to in my previous post, but it is very, very similar. Take a look at Figure 2 of this paper:
http://www.dlib.org/dlib/january97/retrieval/01shneiderman.html
See how you get a bar that represents both the scale of the retrieval score and the proportional distribution of the query terms in that document? - August 28, 2008 at 12:04 PM
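A minimal sketch of how such a normalized, stacked bar could be computed: given per-term match counts for a document (the counts below are made up to reproduce the 34/20/46 split described above), each query term's share of the total determines its segment of the bar:

```python
def term_shares(counts):
    """Normalize per-term match counts into proportions for a stacked bar."""
    total = sum(counts.values())
    return {term: count / total for term, count in counts.items()} if total else {}

# Hypothetical counts of three query terms in one document.
counts = {"jaguar": 17, "luxury": 10, "cars": 23}
bar = " | ".join(f"{term}: {share:.0%}" for term, share in term_shares(counts).items())
print(bar)  # jaguar: 34% | luxury: 20% | cars: 46%
```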
- David Fauth said...
Thinking out loud on the comment you made: "That may be related to transparency being more valued in the enterprise than on the web. I'll have to stew on that one."
Could it be that "enterprise" data is more likely to be structured, or to have some semantic tags around it (as with the sample set in the Cramer et al. paper), as opposed to the Web, which is less likely to be structured? - August 28, 2008 at 1:12 PM
- Daniel Tunkelang said...
Re TileBars: that's interesting, and I'll confess I wasn't familiar with it. I haven't seen many interfaces that address the challenge of communicating that a result's matches are scattered across passages within a long document.
And I do like how the WInquery interface in the D-Lib paper communicates the contribution of the various terms to each result. What I'm curious about is whether users understand this information. But in general I like their framework.
Finally, as to whether enterprise content is more structured than web content: there is some truth to that, though I think the bigger issue with web content is that the adversarial nature of web search prevents search engines from trusting whatever structure document authors offer. We have had meta tags for a while. - August 28, 2008 at 8:00 PM
Chapter 10 of Modern Information Retrieval by Ribeiro-Neto and Baeza-Yates (written by M. Hearst, and available online) gives a good overview of several UIs for IR...
- September 2, 2008 at 8:42 AM
- Daniel Tunkelang said...
Stefano, thanks for the link. I've skimmed that chapter before (I'm a big fan of Marti's work), but I'll read it again more closely. Indeed, I just noticed InfoCrystal, which Nick Belkin mentioned to me last week.
- September 2, 2008 at 9:27 AM