Sunday, April 6, 2008

Nick Belkin at ECIR '08

Last week, I had the pleasure of attending the 30th European Conference on Information Retrieval, chaired by Iadh Ounis at the University of Glasgow. The conference was outstanding in several respects, not least of which was a keynote address by Nick Belkin, one of the world's leading researchers on interactive information retrieval.

Nick's keynote, entitled "Some(what) Grand Challenges for Information Retrieval," was a full frontal attack on the Cranfield evaluation paradigm that has dominated IR research for the past half century. I am hoping to see his keynote published and posted online, but in the meantime here is a choice excerpt:
in accepting the [Gerald Salton] award at the 1997 SIGIR meeting, Tefko Saracevic stressed the significance of integrating research in information seeking behavior with research in IR system models and algorithms, saying: "if we consider that unlike art IR is not there for its own sake, that is, IR systems are researched and built to be used, then IR is far, far more than a branch of computer science, concerned primarily with issues of algorithms, computers, and computing."

...

Nevertheless, we can still see the dominance of the TREC (i.e. Cranfield) evaluation paradigm in most IR research, the inability of this paradigm to accommodate study of people in interaction with information systems (cf. the death of the TREC Interactive Track), and a dearth of research which integrates study of users’ goals, tasks and behaviors with research on models and methods which respond to results of such studies and supports those goals, tasks and behaviors.

This situation is especially striking for several reasons. First, it is clearly the case that IR as practiced is inherently interactive; secondly, it is clearly the case that the new models and associated representation and ranking techniques lead to only incremental (if that) improvement in performance over previous models and techniques, which is generally not statistically significant; and thirdly, that such improvement, as determined in TREC-style evaluation, rarely, if ever, leads to improved performance by human searchers in interactive IR systems.

Nick has long been critical of the IR community's neglect of users and interaction. But this keynote was significant for two reasons. First, the ECIR program committee's decision to invite a keynote speaker from the information science community acknowledges the need for collaboration between these two communities. Second, Nick reciprocated this overture by calling for interdisciplinary efforts to bridge the gap between the formal study of information retrieval and the practical understanding of information behavior. As an avid proponent of HCIR, I am heartily encouraged by steps like these.

7 comments:

Anonymous said...

Daniel -- thanks for posting this. Sounds like a fascinating talk.

Anonymous said...

From the perspective of a relatively green IR researcher, the IR community started as a combination of computer science and information/library science researchers. Nick's work—past and present—and the work of information and library scientists is extremely relevant to and, in my opinion, overlooked by the computer science IR community. I'd give my left toe for SIGIR to drop five papers that claim marginal DCG improvement—or better yet every "computational advertising" paper—for an equal number of information science papers.
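
For concreteness, here is a minimal sketch of the kind of DCG comparison at issue; the relevance grades and ranked lists below are invented purely for illustration:

```python
import math

def dcg(grades):
    """Discounted cumulative gain for a ranked list of relevance grades."""
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(grades))

def ndcg(grades):
    """DCG normalized by the ideal (descending) ordering, so scores fall in [0, 1]."""
    ideal = dcg(sorted(grades, reverse=True))
    return dcg(grades) / ideal if ideal > 0 else 0.0

baseline = [2, 0, 1, 0, 0]  # relevance grades of a baseline system's top 5 results
improved = [2, 1, 0, 0, 0]  # grades of a slightly "improved" system's top 5

print(ndcg(baseline), ndcg(improved))  # the gap here lands in the second decimal
```

A paper's headline number is often a gain of exactly this size, averaged over a topic set.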

Daniel Tunkelang said...

In fairness to the SIGIR community, the divergence in methodology between the information retrieval and information science communities has made it very hard for the two to collaborate. IR researchers want repeatable experiments, while information and library scientists emphasize user studies that are inherently not repeatable. It is, as Nick said, a grand challenge.
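
To make "repeatable experiment" concrete: a Cranfield-style comparison boils down to scoring two systems on the same fixed topics and judgments, which anyone can re-run. A minimal sketch, with invented per-topic average precision scores and scipy assumed to be available:

```python
# Cranfield-style batch comparison: same topics, same judgments, re-runnable
# by anyone. The per-topic average precision scores below are invented.
from scipy import stats

baseline_ap = [0.31, 0.22, 0.45, 0.18, 0.27, 0.39, 0.25, 0.33]
improved_ap = [0.35, 0.19, 0.44, 0.23, 0.26, 0.41, 0.24, 0.36]

result = stats.ttest_rel(improved_ap, baseline_ap)  # paired t-test over topics
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
# Small mean gains like this one often fail to reach p < 0.05, which is
# exactly the "not statistically significant" improvement Nick describes.
```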

Anonymous said...

Daniel -- I think the divergence in the two fields is quite a bit more complicated than simply a focus on repeatable experiments vs. user studies. There's baggage included with IS's close ties to LS; there's lots of appeal for CS students in manipulating formulae to eke out incremental improvements; the nuts-and-bolts software engineering expertise doesn't really exist in many IS departments; and the list goes on. Even though I came through an IS department on my way to CS, I haven't completely wrapped my head around all the reasons *why* this divide exists.

It is great to see venues like SIGIR and ECIR really elevating the role and visibility of good IS research in the IR community -- the most recent best papers at both conferences are perfect examples.

(and, as a side note, IMO if the conclusions of your user study aren't repeatable, then you're missing something in your study design or analysis... but that's probably another discussion)

Daniel Tunkelang said...

Jon, you're right--I am guilty of oversimplification. I think the crux of the problem, which is hardly unique to IR, is that it's easier--at least in the academic world--to propose solutions that incrementally improve on a well-accepted problem statement than it is to propose changing the problem statement. This conservative attitude has some merit: it certainly filters out a lot of cranks. But it also discourages radical innovation.

Anonymous said...

Accepting nominations for the SIGIR "Unfiltered Crank" award.

Daniel Tunkelang said...

Thanks to Jeff for posting a link to Nick's talk, recently published in the SIGIR Forum.
