
Monday, May 5, 2008

Saracevic on Relevance and Interaction

There is no Nobel Prize in computer science, even though computer science has done more over the past fifty years to change the world than any other discipline. Instead, there is the Turing Award, which serves as the Nobel Prize of computing.

But the Turing Award has never been given to anyone in information retrieval. Instead, there is the Gerald Salton Award, which serves as the Turing Award of information retrieval. Its recipients represent an A-list of information retrieval researchers.

Last week, I had the opportunity to talk with Salton Award recipient Tefko Saracevic. If you are not familiar with Saracevic, I suggest you take an hour to watch his 2007 lecture on "Relevance in information science".

I won't try to capture an hour of conversation in a blog post, but here are a few highlights:
  • We learn from philosophers, particularly Alfred Schütz, that we cannot reduce relevance to a single concept, but rather have to consider a system of interdependent relevancies, such as topical relevance, interpretational relevance, and motivational relevance.

  • When we talk about relevance measures, such as precision and recall, we evaluate results from the perspective of a user. But information retrieval approaches necessarily take a systems perspective, making assumptions about what people will want and encoding those assumptions in models and algorithms. (For the standard definitions of precision and recall, see the sketch after this list.)

  • A major challenge in information retrieval is that users, particularly web search users, often formulate ineffective queries, largely because those queries are too short. Studies have shown that reference interviews can improve retrieval effectiveness, typically by eliciting longer, more informative queries. He said that automated systems could help too, but he wasn't aware of any that had achieved traction.

  • A variety of factors affect interactive information retrieval, including task context, intent, and expertise. Moreover, people respond to some relevance clues more than others, and those responses vary across populations.
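
To make the point about relevance measures concrete, here is a minimal sketch of the standard set-based definitions of precision and recall. The document IDs and the function name are invented for illustration; nothing below comes from Saracevic's talk:

    # Standard set-based definitions of precision and recall.
    # Document IDs are invented for illustration.
    def precision_recall(retrieved, relevant):
        retrieved, relevant = set(retrieved), set(relevant)
        hits = retrieved & relevant
        precision = len(hits) / len(retrieved) if retrieved else 0.0
        recall = len(hits) / len(relevant) if relevant else 0.0
        return precision, recall

    retrieved = ["d1", "d2", "d3", "d4"]  # what the system returned
    relevant = ["d2", "d4", "d5"]         # what the user actually needed

    p, r = precision_recall(retrieved, relevant)
    print(f"precision = {p:.2f}, recall = {r:.2f}")  # precision = 0.50, recall = 0.67

Note that both measures presuppose that someone, a user or an assessor, has already judged which documents are relevant; the system's job is to guess at that judgment in advance.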

As I expected, I walked away with more questions than answers. But I did walk away reassured that my colleagues and I at Endeca, along with others in the HCIR community, are attacking the right problem: helping users formulate better queries.

I'd like to close with an anecdote that Saracevic recounts in his 2007 lecture. Bruce Croft had just delivered an information retrieval talk, and Nick Belkin objected that such studies needed to incorporate users. Croft's conversation-ending response: "Tell us what to do, and we will do it."

We're halfway there. We've built interactive information retrieval systems, and we see from deployment after deployment that they work. Not that there isn't plenty of room for improvement, but the unmet challenge, as Ellen Voorhees makes clear, is evaluation. We need to address Nick Belkin's grand challenge and establish a paradigm suitable for evaluating interactive IR systems.

Sunday, April 6, 2008

Nick Belkin at ECIR '08

Last week, I had the pleasure of attending the 30th European Conference on Information Retrieval, chaired by Iadh Ounis at the University of Glasgow. The conference was outstanding in several respects, not least of which was a keynote address by Nick Belkin, one of the world's leading researchers on interactive information retrieval.

Nick's keynote, entitled "Some(what) Grand Challenges for Information Retrieval," was a full-frontal attack on the Cranfield evaluation paradigm that has dominated IR research for the past half century. I am hoping to see the keynote published and posted online, but in the meantime, here is a choice excerpt:
in accepting the [Gerald Salton] award at the 1997 SIGIR meeting, Tefko Saracevic stressed the significance of integrating research in information seeking behavior with research in IR system models and algorithms, saying: "if we consider that unlike art IR is not there for its own sake, that is, IR systems are researched and built to be used, then IR is far, far more than a branch of computer science, concerned primarily with issues of algorithms, computers, and computing."

...

Nevertheless, we can still see the dominance of the TREC (i.e. Cranfield) evaluation paradigm in most IR research, the inability of this paradigm to accommodate study of people in interaction with information systems (cf. the death of the TREC Interactive Track), and a dearth of research which integrates study of users’ goals, tasks and behaviors with research on models and methods which respond to results of such studies and supports those goals, tasks and behaviors.

This situation is especially striking for several reasons. First, it is clearly the case that IR as practiced is inherently interactive; secondly, it is clearly the case that the new models and associated representation and ranking techniques lead to only incremental (if that) improvement in performance over previous models and techniques, which is generally not statistically significant; and thirdly, that such improvement, as determined in TREC-style evaluation, rarely, if ever, leads to improved performance by human searchers in interactive IR systems.
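
To make concrete what "not statistically significant" means in a TREC-style evaluation, here is a minimal sketch of the usual paired significance test over per-topic scores. The per-topic average precision numbers below are hypothetical, invented purely for illustration:

    # Paired t-test over per-topic average precision: the usual way to check
    # whether one system's gain over another in a TREC-style evaluation is
    # statistically significant. Scores are invented for illustration.
    from scipy.stats import ttest_rel

    baseline = [0.21, 0.38, 0.10, 0.44, 0.28, 0.21, 0.52, 0.30]
    improved = [0.23, 0.35, 0.14, 0.43, 0.33, 0.19, 0.53, 0.33]  # one score per topic

    t_stat, p_value = ttest_rel(improved, baseline)
    print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
    # A small mean gain with p well above 0.05, as here, is exactly the
    # "incremental (if that) improvement" Belkin is describing.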

Nick has long been critical of the IR community's neglect of users and interaction. But this keynote was significant for two reasons. First, the ECIR program committee's decision to invite a keynote speaker from the information science community acknowledges the need for collaboration between these two communities. Second, Nick reciprocated this overture by calling for interdisciplinary efforts to bridge the gap between the formal study of information retrieval and the practical understanding of information behavior. As an avid proponent of HCIR, I am heartily encouraged by steps like these.