Monday, April 28, 2008

Social Navigation

There has been a lot of recent buzz about social navigation, including some debate about what the phrase means. I dug into the archives and found a paper from the CHI '94 Conference on Human Factors in Computing Systems entitled "Running Out of Space: Models of Information Navigation". In it, Paul Dourish and Matthew Chalmers distinguish between semantic navigation and social navigation:
[semantic navigation offers] the ability to explore and choose perspectives of view based on knowledge of the semantically-structured information.
...
In social navigation, movement from one item to another is provoked as an artifact of the activity of another or a group of others.
Back in 1994, the Web was only starting to reach a broad audience. The authors cite two examples of social navigation: personal home pages, where people listed sites they found interesting, and collaborative filtering (specifically, the Information Tapestry project at Xerox PARC).

Today, a decade and a half later, the web has scaled by several orders of magnitude, search engines have largely obviated the listing of interesting sites on personal home pages, and collaborative filtering, while still going strong as a social influence on user experience, hardly feels like navigation. It does seem that the term "social navigation" deserves an update.

Following Dourish and Chalmers, let us define social navigation as the ability to explore and choose perspectives of view based on social information. Importantly, social navigation is user-controlled navigation just like semantic navigation--only that the user navigates by changing the social lens on the information rather than by specifying semantic constraints.

One example of social navigation is the ratings information at the Internet Movie Database (IMDB). For example, we can see from the ratings for Live Free or Die Hard that the movie appealed most to males under 18.

Fandango (an Endeca customer) takes this concept a step further, offering users faceted navigation of the space of movie reviews, where facets include age, gender, whether or not the reviewer has children, and whether the reviewer lives near the user.
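
To make the mechanics concrete, here is a minimal sketch in Python of how such a social lens might work; the review data and facet names are invented for illustration, not Fandango's actual implementation. Each facet is a predicate over the reviewer, and the user narrows the review set by composing them:

    from dataclasses import dataclass

    @dataclass
    class Review:
        rating: int           # 1-5 stars
        age: int
        gender: str
        has_children: bool
        distance_miles: float

    # Hypothetical review data.
    reviews = [
        Review(5, 16, "M", False, 3.0),
        Review(3, 42, "F", True, 250.0),
        Review(4, 25, "M", False, 12.0),
    ]

    # Each social facet is a predicate over a review's author.
    facets = {
        "under_18": lambda r: r.age < 18,
        "male": lambda r: r.gender == "M",
        "parent": lambda r: r.has_children,
        "nearby": lambda r: r.distance_miles <= 25,
    }

    def navigate(selected):
        """Return the reviews matching every selected facet."""
        return [r for r in reviews if all(facets[f](r) for f in selected)]

    subset = navigate(["male", "under_18"])
    print(sum(r.rating for r in subset) / len(subset))  # rating through this lens

Selecting and deselecting facets is precisely the user-controlled navigation described above: the documents stay fixed while the social lens changes.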

More sophisticated interfaces will intermingle semantic and social navigation. Here is a screen shot from a prototype some of my colleagues put together and demonstrated at HCIR '07:
[screenshot: HCIR '07 prototype combining semantic and social navigation]

Social navigation, defined as above, offers users more than just the ability to be influenced by other people. It offers users transparency and control over the social lens. It allows us to think outside the black box.

Sunday, April 27, 2008

Happy Rota Day!


Since this is a personal blog, I'd like to go a bit off-topic and recognize my late mentor Gian-Carlo Rota, whose birthday is today. While I and countless others recall Gian-Carlo most fondly as a mentor and teacher, his crowning achievement was to make combinatorics a respectable branch of modern mathematics. Indeed, combinatorics and probability theory have been instrumental to the progress of information retrieval and information science.

And this nugget of his advice about lecturing seems remarkably appropriate in the context of how information retrieval engines should work:
Every lecture should state one main point and repeat it over and over, like a theme with variations. An audience is like a herd of cows, moving slowly in the direction they are being driven towards. If we make one point, we have a good chance that the audience will take the right direction; if we make several points, then the cows will scatter all over the field. The audience will lose interest and everyone will go back to the thoughts they interrupted in order to come to our lecture.
Happy Birthday, Gian-Carlo.

Friday, April 25, 2008

Workshop on Ranked XML Querying

Thanks to an excellent blog written by Panos Ipeirotis at the NYU Stern School, I learned about a workshop held last month in Dagstuhl on ranked XML querying. Most of the presentations are available online, including one entitled DB & IR from a DB Viewpoint by Gerhard Weikum at the Max Planck Institut für Informatik. I'm excited to see these efforts to unify the DB and IR perspectives. So much more productive than the infamous MapReduce debate!

Thursday, April 24, 2008

Database Usability

Just as I was digesting Jeff Naughton's presentation at DB/IR day, a colleague at Endeca emailed me the keynote that H. V. Jagadish (University of Michigan) presented at SIGMOD '07 on making database systems usable. He enumerates the familiar pain points of today's database systems: confusing schemas, too many choices to make, unexpected--and unexplained--system behavior, and too high a cost for initial creation. He proposes "systems that reflect the user's model of the data, rather than forcing the data to fit a particular model."

As with Jeff's presentation, the main take-away here is a framework (though both he and Jeff have taken initial steps to address the problems they describe). As a practitioner, I'm most encouraged by the fact that database researchers, like information retrieval researchers, are increasingly recognizing the importance of users.

Wednesday, April 23, 2008

The Efficiency of Social Tagging

Credit to Kevin Duh by way of the natural language processing blog for highlighting recent work from PARC on understanding the efficiency of social tagging systems using information theory. The authors apply information theory to establish a framework for measuring the efficiency of social tagging systems, and then empirically observe that the efficiency of tagging on del.icio.us has been decreasing over time. They conclude by suggesting that current tagging interfaces may be at fault, through a positive feedback process that encourages popular tags.
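
To make the information-theoretic framing concrete, here is a small sketch in Python of the kind of quantity involved; the bookmark data is invented, and this is my gloss on the approach, not the authors' exact methodology. The conditional entropy H(D|T) measures how uncertain we remain about the target document once we know the tag; if it grows over time, tags are becoming a less efficient navigation aid:

    import math
    from collections import Counter

    # Hypothetical (tag, document) bookmark pairs.
    pairs = [("python", "d1"), ("python", "d1"), ("python", "d2"),
             ("nlp", "d2"), ("nlp", "d3"), ("web", "d4")]

    n = len(pairs)
    joint = Counter(pairs)                     # counts for p(t, d)
    tag_counts = Counter(t for t, _ in pairs)  # counts for p(t)

    # H(D|T) = -sum over (t, d) of p(t, d) * log2 p(d | t),
    # where p(d | t) = count(t, d) / count(t).
    h_doc_given_tag = -sum(
        (c / n) * math.log2(c / tag_counts[t])
        for (t, d), c in joint.items()
    )
    print(f"H(D|T) = {h_doc_given_tag:.3f} bits")

Tracking this quantity as the corpus of bookmarks grows is one way to operationalize the claim that tagging efficiency is declining.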

After seeing this and the TagMaps work at Yahoo Research Berkeley, I feel that the IR and HCI communities should join forces to understand social tagging in general terms that relate information, knowledge representation, and human beings. These concerns are hardly specific to the web or to what is now called "social media"--after all, media is social by definition. Indeed, there is no reason to confine this approach to human-tagged collections--why not consider automated tagging systems on the same playing field?

Tuesday, April 22, 2008

Accessibility in Information Retrieval

The other day, I was talking with Leif Azzopardi at the University of Glasgow about accessibility in information retrieval. Accessibility is a concept borrowed from land use and transportation planning: it measures the cost that people are willing to incur to reach opportunities (e.g., shopping, restaurants), weighted by the desirability of those opportunities.

What does accessibility mean in the context of information retrieval?
Instead of an actual physical space, in IR, we are predominately concerned with accessing information within a collection of documents (i.e., information space), and instead of a transportation system, we have an Information Access System (i.e., a means by which we can access the information in the collection, like a query mechanism, a browsing mechanism, etc). The accessibility of a document is indicative of the likelihood or opportunity of it being retrieved by the user in this information space given such a mechanism.
It's a very appealing way to measure the effectiveness with which an information retrieval system exposes a document collection--as well as the bias the system imposes. While the paper offers more questions than answers, I recommend it to anyone who is interested in thinking outside the box of traditional IR performance measures.
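
As a toy illustration of the idea (my own sketch in Python, not the paper's formulation), one can estimate a document's accessibility by summing, over a model of user query demand, the probability mass of queries for which the system surfaces the document within the top k results:

    def accessibility(doc, queries, retrieve, k=10):
        """Sum the probability mass of queries that surface doc in the top k."""
        return sum(p for q, p in queries if doc in retrieve(q)[:k])

    # A mock retrieval system and query-demand model, purely for illustration.
    def retrieve(query):
        index = {"jazz": ["d1", "d2"], "blues": ["d2", "d3"]}
        return index.get(query, [])

    queries = [("jazz", 0.7), ("blues", 0.3)]
    for d in ["d1", "d2", "d3", "d4"]:
        print(d, accessibility(d, queries, retrieve, k=2))
    # d2 is reachable from every query; d4 is invisible to this system,
    # which is exactly the kind of bias the measure is meant to expose.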

Sunday, April 20, 2008

North East DB / IR Day

Last Friday, I had the privilege to attend the Spring 2008 North East DB/IR Day, hosted by Columbia University:
The North East DB/IR Day brings together database and information retrieval researchers and students from both academic and research institutions in the Northeastern United States. The DB/IR Day is a semi-annual workshop that features an exciting technical program as well as informal discussion. The DB/IR Day provides a regular forum for presenting diverse viewpoints on database systems and information retrieval, addressing current topics as well as promoting information exchange among researchers.
The event lived up to its promise, and I was impressed with the quality of student posters. But my favorite part of the event was the keynote by Jeff Naughton entitled "Extracting Problems for Database and IR Researchers."

Jeff characterized the traditional philosophy of the database community as guaranteeing perfect outputs if the inputs are perfect. He argued that what we need more of today are databases that expect imperfection and try to help.

To summarize his talk:
  • Provide support for "learn schema as you go."
  • Develop techniques to explain inconsistency and let users reason about it.
  • Expect errors, provide tools for users to understand/debug them.
  • View task as helping user discover what they want in large space of potential queries.
It is encouraging to see such a prominent database researcher advocating this vision, especially since it aligns so well with the technology we are developing at Endeca.

Friday, April 18, 2008

The Search for Meaning

By a fortuitous coincidence, I had the opportunity to see two consecutive presentations from search engine companies banking on natural language processing (NLP) to power the next generation of search. The first was from Ron Kaplan, Chief Technology and Science Officer of Powerset, who presented at Columbia University. The second was from Christian Hempelmann, Chief Scientific Officer of hakia, who presented at the New York Semantic Web Meetup.

The Powerset talk was entitled "Deep natural language processing for web-scale indexing and retrieval." Jon Elsas, who attended the same talk earlier this week at CMU, did an excellent job summarizing it on his blog. I'll simply express my reaction: I don't get it. I have no reason to doubt that their NLP pipeline is best-in-class. The team has impressive credentials. But I see no evidence that they have produced better results than keyword search. After participating in their private beta for several months, I'd hoped that the presentation would help me see what I'd missed. I specifically asked Ron what measures they used to evaluate their system, and he was mum. So now I am more unconvinced than ever; to steal a line from a colleague, I cannot reconcile their enthusiasm with their results.

The hakia talk was entitled "Search for Meaning." Christian started by making the case for a semantic, rather than statistical, approach to NLP. He then presented hakia's technology in a fair amount of detail, including walking through examples of word sense disambiguation using context. I'm not convinced that semantics trump statistics, but I thoroughly enjoyed the presentation, and was intrigued enough to want to learn more. I find the company refreshingly open about its technology (not to mention that their beta is public), and I hope it works well enough to be practical.

Still, I'm not convinced that NLP is either the right answer or the right question. I'm no expert on the history of language, but it's clear that natural languages are hardly optimal means of communication, even among human beings. Rather, they are artifacts of our satisficing and resisting change. Since we are lucky enough not to have developed expectations that people can communicate with computers using natural language (HAL and Star Trek notwithstanding), why take a step backwards now? Rather than advocating for inefficient, unreliable communication mechanisms like natural language, we should be figuring out ways to make communication more efficient.

To use an analogy, there's a reason that programming languages have strict rules, and that compilers output errors rather than just trying to guess what you mean. The mild inconvenience upstream is a small cost, compared to the downstream benefits of unambiguous communication. I'm not suggesting that people start speaking in formal languages. But I do feel we should strive for a dialog-oriented approach where both the human and the computer have confidence in their mutual understanding. I can't resist a plug for HCIR.

Thursday, April 17, 2008

Ellen Voorhees defends Cranfield

I was extremely flattered to receive an email from Ellen Voorhees responding to my post about Nick Belkin's keynote. Then I was a little bit scared, since she is a strong advocate of the Cranfield tradition, and I braced myself for her rebuttal.

She pointed me to a talk she gave at the First International Workshop on Adaptive Information Retrieval (AIR) in 2006. I'd paraphrase her argument as follows: Nick and others (including me) are right to push for a paradigm that supports AIR research, but are being naïve regarding what is necessary for such research to deliver effective--and cost-effective--results. It's a strong case, and I'd be the first to concede that the advocates for AIR research have not (at least to my knowledge) produced a plausible abstract task that is amenable to efficient evaluation.

To quote Nick again, it's a grand challenge. And Ellen makes it clear that what we've learned so far is not encouraging.

Tuesday, April 15, 2008

Privacy and Information Theory

Privacy is an evergreen topic in technology discussions, and increasingly finds its way into the mainstream (cf. AOL, NSA, Facebook). My impression is that most people feel that companies and government agencies are amassing their "private" data to some nefarious end.

Let's forget about technology for a moment and subject the notion of privacy to basic examination. If I truly want to keep a secret, I don't tell anyone. If I want to share information with you but no one else, I only disclose the information under the proviso of a social or legal contract of non-disclosure.

But there's a major catch here: you--or I--may disclose the information involuntarily by our actions. The various establishments I frequent know my favorite foods, drinks, and even karaoke songs. More subtly, if I tell you in confidence that I don't like or trust someone, that information is likely to visibly affect your interaction with that person. Moreover, someone who knows that we are friends might even suspect me as the cause for your change in behavior.

What does this have to do with privacy of information? Everything! The mainstream debates treat information privacy as binary. Even when people discuss gradations of privacy, they tend to treat each particular disclosure (e.g., age, favorite flavor of ice cream) as binary. But if we take an information-theoretic look at disclosure, we immediately see that this binary view is illusory.

For example, if you know I work for a software company and live in New York City, you know more about my gender, education, and salary than if you only know that I live in the United States. We can quantify this information gain in bits of conditional entropy.
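
Here is a minimal sketch in Python of that computation, with made-up numbers purely for illustration: the information gained from a specific disclosure is the drop in entropy between the prior distribution over an attribute and the distribution conditioned on what was disclosed.

    import math

    def entropy(dist):
        """Shannon entropy, in bits, of a discrete distribution."""
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    # Made-up numbers: distribution of salary bands in the general
    # population vs. among people known to be NYC software workers.
    prior = {"low": 0.5, "mid": 0.3, "high": 0.2}
    given_disclosure = {"low": 0.1, "mid": 0.4, "high": 0.5}

    gain = entropy(prior) - entropy(given_disclosure)
    print(f"Disclosure reveals {gain:.2f} bits about salary")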

Information theory provides a unifying framework for thinking about privacy. We can answer questions like "if I disclose that I like bagels and smoked salmon, to what extent do I disclose that I live in New York?" Or: to what extent does an anonymized search log identify me personally?

If we can take this framework and make it accessible to non-information theorists, perhaps we can improve the quality of the privacy debate.

Saturday, April 12, 2008

Can Search be a Utility?

A recent lecture at the New York CTO club inspired a heated discussion on what is wrong with enterprise search solutions. Specifically, Jon Williams asked why search can't be a utility.

It's unfortunate when such a simple question calls for a complicated answer, but I'll try to tackle it.

On the web, almost all attempts to deviate even slightly from the venerable ranked-list paradigm have been resounding flops. More sophisticated interfaces, such as Clusty, receive favorable press coverage, but users don't vote for them with their virtual feet. And web search users seem reasonably satisfied with their experience.

Conversely, in the enterprise, there is widespread dissatisfaction with enterprise search solutions. A number of my colleagues have said that they installed a Google Search Appliance and "it didn't work." (Full disclosure: Google competes with Endeca in the enterprise).

While the GSA does have some significant technical limitations, I don't think the failures were primarily for technical reasons. Rather, I believe there was a failure of expectations. I believe the problem comes down to the question of whether relevance is subjective.

On the web, we get away with pretending that relevance is objective because there is so much agreement among users--particularly in the restricted class of queries that web search handles well, and that hence constitute the majority of actual searches.

In the enterprise, however, we not only lack the redundant and highly social structure of the web. We also tend to have more sophisticated information needs. Specifically, we tend to ask the kinds of informational queries that web search serves poorly, particularly when there is no Wikipedia page that addresses our needs.

It seems we can go in two directions.

The first is to make enterprise search more like web search by reducing the enterprise search problem to one that is user-independent and does not rely on the social generation of enterprise data. Such a problem encompasses such mundane but important tasks as finding documents by title or finding department home pages. The challenges here are fundamentally ones of infrastructure, reflecting the heterogeneous content repositories in enterprises and the controls mandated by business processes and regulatory compliance. Solving these problems is no cakewalk, but I think all of the major enterprise search vendors understand the framework for solving them.

The second is to embrace the difference between enterprise knowledge workers and casual web users, and to abandon the quest for an objective relevance measure. Such an approach requires admitting that there is no free lunch--that you can't just plug in a box and expect it to solve an enterprise's knowledge management problem. Rather, enterprise workers need to help shape the solution by supplying their proprietary knowledge and information needs. The main challenges for information access vendors are to make this process as painless as possible for enterprises, and to demonstrate the return so that enterprises make the necessary investment.

Thursday, April 10, 2008

Multiple-Query Sessions

As Nick Belkin pointed out in his recent ECIR 2008 keynote, a grand challenge for the IR community is to figure out how to bring the user into the evaluation process. A key aspect of this challenge is rethinking system evaluation in terms of sessions rather than queries.

Some recent work in the IR community is very encouraging:

- Work by Ryen White and colleagues at Microsoft Research that mines session data to guide users to popular web destinations. Their paper was awarded Best Paper at SIGIR 2007.

- Work by Nick Craswell and Martin Szummer (also at Microsoft Research, and also presented at SIGIR 2007) that performs random walks on the click graph to use click data effectively as evidence to improve relevance ranking for image search on the web (see the sketch after this list).

- Work by Kalervo Järvelin (at the University of Tampere in Finland) and colleagues on discounted cumulated gain based evaluation of multiple-query IR sessions that was awarded Best Paper at ECIR 2008.
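
To give a flavor of the click-graph idea, here is a simplified sketch in Python of the random walk mentioned above; the click data is invented, and this is my own distillation rather than Craswell and Szummer's exact model. Starting from a query, the walk alternates between query and document nodes along click edges, so a document can accumulate evidence for a query even if it was never clicked for that query directly:

    import random
    from collections import Counter

    # Hypothetical bipartite click graph: query -> documents clicked for it.
    clicks = {
        "jaguar": ["cat.jpg", "car.jpg"],
        "big cat": ["cat.jpg", "lion.jpg"],
    }
    # Reverse edges: document -> queries it was clicked for.
    clicked_for = {}
    for q, docs in clicks.items():
        for d in docs:
            clicked_for.setdefault(d, []).append(q)

    def walk(query, steps=10000, seed=0):
        """Count document visits on a walk alternating query/document nodes."""
        rng = random.Random(seed)
        visits = Counter()
        q = query
        for _ in range(steps):
            d = rng.choice(clicks[q])          # query -> document edge
            visits[d] += 1
            q = rng.choice(clicked_for[d])     # document -> query edge
        return visits

    # "lion.jpg" earns visits for "jaguar" despite never being clicked for
    # it, because the walk passes through the shared document "cat.jpg".
    print(walk("jaguar").most_common())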

This recent work--and the prominence it has received in the IR community--is refreshing, especially in light of the relative lack of academic work on interactive IR and the demise of the short-lived TREC interactive track. These are first steps, but hopefully IR researchers and practitioners will pick up on them.

Tuesday, April 8, 2008

Q&A with Amit Singhal

Amit Singhal, who is head of search quality at Google, gave a very entertaining keynote at ECIR '08 that focused on the adversarial aspects of Web IR. Specifically, he discussed some of the techniques used in the arms race to game Google's ranking algorithms. Perhaps he revealed more than he intended!

During the question and answer session, I reminded Amit of the admonition against security through obscurity that is well accepted in the security and cryptography communities. I questioned whether his team is pursuing the wrong strategy by failing to respect this maxim. Amit replied that a relevance analog to security by design was an interesting challenge (which he delegated to the audience), but he appealed to the subjectivity of relevance as a reason why it is harder to make relevance as transparent as security.

While I accept the difficulty of this challenge, I reject the suggestion that subjectivity makes it harder. To begin with, Google and other web search engines rank results objectively, rather than based on user-specific considerations. Furthermore, the subjectivity of relevance should make the adversarial problem easier rather than harder, as has been observed in the security industry.

But the challenge is indeed a daunting one. Is there a way we can give control to users and thus make the search engines objective referees rather than paternalistic gatekeepers?

At Endeca, we emphasize the transparency of our engine as a core value of our offering to enterprises. Granted, our clients generally do not have an adversarial relationship with their data. Still, I am convinced that the same approach not only can work on the web, but will be the only way to end the arms race between spammers and Amit's army of tweakers.

Sunday, April 6, 2008

Nick Belkin at ECIR '08

Last week, I had the pleasure to attend the 30th European Conference on Information Retrieval, chaired by Iadh Ounis at the University of Glasgow. The conference was outstanding in several respects, not least of which was a keynote address by Nick Belkin, one of the world's leading researchers on interactive information retrieval.

Nick's keynote, entitled "Some(what) Grand Challenges for Information Retrieval," was a full frontal attack on the Cranfield evaluation paradigm that has dominated IR research for the past half century. I am hoping to see his keynote published and posted online, but in the meantime here is a choice excerpt:
in accepting the [Gerald Salton] award at the 1997 SIGIR meeting, Tefko Saracevic stressed the significance of integrating research in information seeking behavior with research in IR system models and algorithms, saying: "if we consider that unlike art IR is not there for its own sake, that is, IR systems are researched and built to be used, then IR is far, far more than a branch of computer science, concerned primarily with issues of algorithms, computers, and computing."

...

Nevertheless, we can still see the dominance of the TREC (i.e. Cranfield) evaluation paradigm in most IR research, the inability of this paradigm to accommodate study of people in interaction with information systems (cf. the death of the TREC Interactive Track), and a dearth of research which integrates study of users’ goals, tasks and behaviors with research on models and methods which respond to results of such studies and supports those goals, tasks and behaviors.

This situation is especially striking for several reasons. First, it is clearly the case that IR as practiced is inherently interactive; secondly, it is clearly the case that the new models and associated representation and ranking techniques lead to only incremental (if that) improvement in performance over previous models and techniques, which is generally not statistically significant; and thirdly, that such improvement, as determined in TREC-style evaluation, rarely, if ever, leads to improved performance by human searchers in interactive IR systems.
Nick has long been critical of the IR community's neglect of users and interaction. But this keynote was significant for two reasons. First, the ECIR program committee's decision to invite a keynote speaker from the information science community acknowledges the need for collaboration between these two communities. Second, Nick reciprocated this overture by calling for interdisciplinary efforts to bridge the gap between the formal study of information retrieval and the practical understanding of information behavior. As an avid proponent of HCIR, I am heartily encouraged by steps like these.

Monday, April 28, 2008

Social Navigation

There has bit a lot of recent buzz about social navigation, including some debate about what the phrase means. I dug into the archives and found a paper from the CHI '94 Conference on Human Factors in Computing Systems entitled "Running Out of Space: Models of Information Navigation". In it, Paul Dourish and Matthew Chalmers distinguish between semantic navigation and social navigation:
[semantic navigation offers] the ability to explore and choose perspectives of view based on knowledge of the semantically-structured information.
...
In social navigation, movement from one item to another is provoked as an artifact of the activity of another or a group of others.
Back in 1994, the Web was only starting to reach a broad audience. The authors cite two examples of social navigation: personal home pages, where people listed sites they found interesting, and collaborative filtering (specifically, the Information Tapestry project at Xerox PARC).

Today, a decade and a half later, the web has scaled by several orders of magnitude, search engines have largely obviated the listing of interesting sites on personal home pages, and collaborative filtering, while still going strong as a social influence on user experience, hardly feels like navigation. It does seem that the term "social navigation" deserves an update.

Following Dourish and Chalmers, let us define social navigation as the ability to explore and choose perspectives of view based on social information. Importantly, social navigation is user-controlled navigation just like semantic navigation--only that the user is navigation by changing the social lens on the information rather than specifying semantic constraints.

One example of social navigation is the ratings information at the Internet Movie Database (IMDB). For example, we can see from the ratings for Live Free or Die Hard that the movie appealed most to males under 18.

Fandango (an Endeca customer) takes this concept a step further, offering users faceted navigation of the space of movie reviews, where facets include age, gender, whether or not the reviewer has children, and whether the reviewer lives near the user.

More sophisticated interfaces will intermingle semantic and social navigation. Here is a screen shot from a prototype some of my colleagues put together and demonstrated at HCIR '07:

Social navigation, defined as above, offers users more than just the ability to be influenced by other people. It offers users transparency and control over the social lens. It allows us to think outside the black box.

Sunday, April 27, 2008

Happy Rota Day!


Since this is a personal blog, I'd like to go a bit off-topic and take recognize my late mentor Gian-Carlo Rota, whose birthday is today. While I and countless others recall Gian-Carlo most fondly as a mentor and teacher, his crowning achievement was to make combinatorics a respectable branch of modern mathematics. Indeed, combinatorics and probability theory have been instrumental to the progress of information retrieval and information science.

And this nugget of his advice about lecturing seems remarkably appropriate in the context of how information retrieval engines should work:
Every lecture should state one main point and repeat it over and over, like a theme with variations. An audience is like a herd of cows, moving slowly in the direction they are being driven towards. If we make one point, we have a good chance that the audience will take the right direction; if we make several points, then the cows will scatter all over the field. The audience will lose interest and everyone will go back to the thoughts they interrupted in order to come to our lecture.
Happy Birthday, Gian-Carlo.

Friday, April 25, 2008

Workshop on Ranked XML Querying

Thanks to an excellent blog written by Panos Ipeirotis at the NYU Stern School, I learned about a workshop held last month in Dagstuhl on ranked XML querying. Most of the presentations are available online, including one entitled DB & IR from a DB Viewpoint by Gerhard Weikum at the Max Planck Institut für Informatik. I'm excited to see these efforts to unify the DB and IR perspectives. So much more productive than the infamous MapReduce debate!

Thursday, April 24, 2008

Database Usability

Just as I was digesting Jeff Naughton's presentation at DB/IR day, a colleague at Endeca emailed me the keynote that H. V. Jagadish (University of Michigan) presented at SIGMOD '07 on making database systems usable. He enumerates the familiar pain points of today's database systems: confusing schemas, too many choices to make, unexpected--and unexplained--system behavior, and too high a cost for initial creation. He proposes "systems that reflect the user's model of the data, rather than forcing the data to fit a particular model."

As with Jeff's presentation, the main take-away here is a framework (though both he and Jeff have taken initial steps to address the problems they describe). As a practitioner, I'm most encouraged by the fact that database researchers, like information retrieval researchers, are increasingly recognizing the importance of users.

Wednesday, April 23, 2008

The Efficiency of Social Tagging

Credit to Kevin Duh by way of the natural language processing blog for highlighting recent work from PARC on understanding the efficiency of social tagging systems using information theory. The authors apply information theory to establish a framework for measuring the efficiency social tagging systems, and then empirically observe that the efficiency of tagging on del.icio.us has been decreasing over time. They conclude by suggesting that current tagging interfaces may be at fault, through a positive feedback process of encouraging popular tags.

After seeing this and the TagMaps work at Yahoo Research Berkeley, I feel that the IR and HCI communities should join forces to understand social tagging in general terms that relate information, knowledge representation, and human beings. These concerns are hardly specific to the web or to what is now called "social media"--after all, media is social by definition. Indeed, there is no reason to confine this approach to human-tagged collections--why not consider automated tagging systems on the same playing field?

Tuesday, April 22, 2008

Accessibility in Information Retrieval

The other day, I was talking with Leif Azzopardi at the University of Glasgow about accessibility in information retrieval. Accessibility is a concept borrowed from land use and transportation planning: it measures the cost that people are willing to incur to reach opportunities (e.g., shopping, restaurants), weighted by the desirability of those opportunities.

What does accessibility mean in the context of information retrieval?
Instead of an actual physical space, in IR, we are predominately concerned with accessing information within a collection of documents (i.e., information space), and instead of a transportation system, we have an Information Access System (i.e., a means by which we can access the information in the collection, like a query mechanism, a browsing mechanism, etc). The accessibility of a document is indicative of the likelihood or opportunity of it being retrieved by the user in this information space given such a mechanism.
It's a very appealing way to measure the effectiveness with which the an information retrieval system exposes a document collection--as well as the bias the system imposes. While the paper offers more questions than answers, I recommend to anyone who is interested in thinking outside the box of the traditional IR performance measures.

Sunday, April 20, 2008

North East DB / IR Day

Last Friday, I had the privilege to attend the Spring 2008 North East DB/IR Day, hosted by Columbia University:
The North East DB/IR Day brings together database and information retrieval researchers and students from both academic and research institutions in the Northeastern United States. The DB/IR Day is a semi-annual workshop that features an exciting technical program as well as informal discussion. The DB/IR Day provides a regular forum for presenting diverse viewpoints on database systems and information retrieval, addressing current topics as well as promoting information exchange among researchers.
The event lived up to its promise, and I was impressed with the quality of student posters. But my favorite part of the event was the keynote by Jeff Naughton entitled "Extracting Problems for Database and IR Researchers."

Jeff characterized the traditional philosophy of the database community as guaranteeing perfect outputs is the inputs are perfect. He argues that what we need more of today are databases that expect imperfection, and try to help.

To summarize his talk:
  • Provide support for "learn schema as you go."
  • Develop techniques to explain inconsistency and let users reason about it.
  • Expect errors, provide tools for users to understand/debug them.
  • View task as helping user discover what they want in large space of potential queries.
It is encouraging to see such a prominent database researcher advocating this vision, especially since it aligns so well with the technology we are developing at Endeca.

Friday, April 18, 2008

The Search for Meaning

By a fortuitous coincidence, I had the opportunity to see two consecutive presentations from search engine companies banking on natural language processing (NLP) to power the next generation of search. The first was from Ron Kaplan, Chief Technology and Science Officer of Powerset, who presented at Columbia University. The second was from Christian Hempelmann, Chief Scientific Officer of hakia, who presented at New York Semantic Web Meetup.

The Powerset talk was entitled "Deep natural language processing for web-scale indexing and retrieval." Jon Elsas, who attended the same talk earlier this week at CMU, did an excellent job summarizing it on his blog. I'll simply express my reaction: I don't get it. I have no reason to doubt that their NLP pipeline is best-in-class. The team has impressive credentials. But I see no evidence that they have produced better results than keyword search. After participating in their private beta for several months, I'd hoped that the presentation would help me see what I'd missed. I specifically asked Ron what measures they used to evaluate their system, and he was mum. So now I am more unconvinced that ever, though, to steal a line from a colleague, I cannot reconcile their enthusiasm with their results.

The hakia talk was entitled "Search for Meaning." Christian started by making the case for a semantic, rather than statistical approach to NLP. He then presented hakia's technology in a fair amount of detail, including walking through examples of worse sense disambiguation using context. I'm not convinced that semantics trump statistics, but I thoroughly enjoyed the presentation, and was intrigued enough to want to learn more. I find the company refreshingly open about its technology (not to mention that their beta is public), and I hope it works well enough to be practical.

Still, I'm not convinced the NLP is either the right answer or the right question. I'm no expert on the history of language, but it's clear that natural languages are hardly optimal means of communication, even among human beings. Rather, they are artifacts of our satisficing and resisting change. Since we are lucky enough to not have developed expectations that people can communicate with computers using natural language (HAL and Star Trek notwithstanding), why take a step backwards now? Rather than advocating for inefficient, unreliable communication mechanisms like natural language, we should be figuring out ways to make communication more efficient.

To use an analogy, there's a reason that programming languages have strict rules, and that compilers output errors rather than just trying to guess what you mean. The mild inconvenience upstream is a small cost, compared to the downstream benefits of unambiguous communication. I'm not suggesting that people start speaking in formal languages. But I do feel we should strive for a dialog-oriented approach where both the human and the computer have confidence in their mutual understanding. I can't resist a plug for HCIR.

Thursday, April 17, 2008

Ellen Voorhees defends Cranfield

I was extremely flattered to receive an email from Ellen Voorhees responding to my post about Nick Belkin's keynote. Then I was a little bit scared, since she is a strong advocate of the Cranfield tradition, and I braced myself for her rebuttal.

She pointed me to a talk she gave at the First International Workshop on Adaptive Information Retrieval (AIR) in 2006. I'd paraphrase her argument as follows: Nick and others (including me) are right to push for a paradigm that supports AIR research, but are being naïve regarding what is necessary for such research to deliver effective--and cost-effective--results. It's a strong case, and I'd be the first to concede that the advocates for AIR research have not (at least to my knowledge) produced a plausible abstract task that is amenable to efficient evaluation.

To quote Nick again, it's a grand challenge. And Ellen makes it clear that what we've learned so far is not encouraging.

Tuesday, April 15, 2008

Privacy and Information Theory

Privacy is a evergreen topic in technology discussions, and increasingly finds its way into the mainstream (cf. AOL, NSA, Facebook). My impression is that most people feel that companies and government agencies are amassing their "private" data to some nefarious end.

Let's forget about technology for a moment and subject the notion of privacy to basic examination. If I truly want to keep a secret, I don't tell anyone. If I want to share information with you but no one else, I only disclose the information under the proviso of a social or legal contract of non-disclosure.

But there's a major catch here: you--or I--may disclose the information involuntarily by our actions. The various establishments I frequent know my favorite foods, drinks, and even karaoke songs. More subtly, if I tell you in confidence that I don't like or trust someone, that information is likely to visibly affect your interaction with that person. Moreover, someone who knows that we are friends might even suspect me as the cause for your change in behavior.

What does this have to do with privacy of information? Everything! The mainstream debates treat information privacy as binary. Even when people discuss gradations of privacy, they tend to think in terms of each particular disclosure (e.g., age, favorite flavor of ice cream) as binary. But, if we take an information-theoretic look at disclosure, we immediately see that this binary view of disclosure is illusory.

For example, if you know I work for a software company and live in New York City, you know more about my gender, education, and salary than if you only know that I live in the United States. We can quantify this information gain in bits of conditional entropy.

Information theory provides a unifying framework for thinking about privacy. We can answer questions like "if I disclose that I like bagels and smoked salmon, to what extent to I disclose that I live in New York?" Or to what extent does an anonymized search log identify me personally.

If we can take this framework and make it consumable to non-information theorists, perhaps we can improve the quality of the privacy debate.

Saturday, April 12, 2008

Can Search be a Utility?

A recent lecture at the New York CTO club inspired a heated discussion on what is wrong with enterprise search solutions. Specifically, Jon Williams asked why search can't be a utility.

It's unfortunate when such a simple question calls for a complicated answer, but I'll try to tackle it.

On the web, almost all attempts to deviate even slightly from the venerable ranked-list paradigm have been resounding flops. More sophisticated interfaces, such as Clusty, receive favorable press coverage, but users don't vote for them with their virtual feet. And web search users seem reasonably satisfied with their experience.

Conversely, in the enterprise, there is widespread dissatisfaction with enterprise search solutions. A number of my colleagues have said that they installed a Google Search Appliance and "it didn't work." (Full disclosure: Google competes with Endeca in the enterprise).

While the GSA does have some significant technical limitations, I don't think the failures were primarily for technical reasons. Rather, I believe there was a failure of expectations. I believe the problem comes down to the question of whether relevance is subjective.

On the web, we get away with pretending that relevance is objective because there is so much agreement among users--particularly in the restricted class of queries that web search handles well, and that hence constitute the majority of actual searches.

In the enterprise, however, we not only lack the redundant and highly social structure of the web. We also tend to have more sophisticated information needs. Specifically, we tend to ask the kinds of informational queries that web search serves poorly, particularly when there is no Wikipedia page that addresses our needs.

It seems we can go in two directions.

The first is to make enterprise search more like web search by reducing the enterprise search problem to one that is user-independent and does not rely the social generation of enterprise data. Such a problem encompasses such mundane but important tasks as finding documents by title or finding department home pages. The challenges here fundamentally ones of infrastructure, reflecting the heterogeneous content repositories in enterprises and the controls mandated by business processes and regulatory compliance. Solving these problems is no cakewalk, but I think all of the major enterprise search vendors understand the framework for solving them.

The second is to embrace the difference between enterprise knowledge workers and casual web users, and to abandon the quest for an objective relevance measure. Such an approach requires admitting that there is no free lunch--that you can't just plug in a box and expect it to solve an enterprise's knowledge management problem. Rather, enterprise workers need to help shape the solution by supplying their proprietary knowledge and information needs. The main challenges for information access vendors are to make this process as painless as possible for enterprises, and to demonstrate the return so that enterprises make the necessary investment.

Thursday, April 10, 2008

Multiple-Query Sessions

As Nick Belkin pointed out in his recent ECIR 2008 keynote, a grand challenge for the IR community is to figure out how to bring the user into the evaluation process. A key aspect of this challenge is rethinking system evaluation in terms of sessions rather than queries.

Some recent work in the IR community is very encouraging:

- Work by Ryen White and colleagues at Microsoft Research that mines session data to guide users to popular web destinations. Their paper was awarded Best Paper at SIGIR 2007.

- Work by Nick Craswell and Martin Szummer (also at Microsoft Research, and also presented at SIGIR 2007) that performs random walks on the click graph to use click data effectively as evidence to improve relevance ranking for image search on the web.

- Work by Kalervo Järvelin (at the University of Tampere in Finland) and colleagues on discounted cumulated gain based evaluation of multiple-query IR sessions that was awarded Best Paper at ECIR 2008.

This recent work--and the prominence it has received in the IR community--is refreshing, especially in light of the relative lack of academic work on interactive IR and the demise of the short-lived TREC interactive track. They are first steps, but hopefully IR researchers and practitioners will pick up on them.

Tuesday, April 8, 2008

Q&A with Amit Singhal

Amit Singhal, who is head of search quality at Google, gave a very entertaining keynote at ECIR '08 that focused on the adversarial aspects of Web IR. Specifically, he discussed some of the techniques used in the arms race to game Google's ranking algorithms. Perhaps he revealed more than he intended!

During the question and answer session, I reminded Amit of the admonition against security through obscurity that is well accepted in the security and cryptography communities. I questioned whether his team is pursuing the wrong strategy by failing to respect this maxim. Amit replied that a relevance analog to security by design was an interesting challenge (which he delegated to the audience), but he appealed to the subjectivity of relevance as a reason for it being harder to make relevance as transparent as security.

While I accept the difficulty of this challenge, I reject the suggestion that subjectivity makes it harder. To being with, Google and other web search engines rank results objectively, rather than based on user-specific considerations. Furthermore, the subjectivity of relevance should make the adversarial problem easier rather than harder, as has been observed in the security industry.

But the challenge is indeed a daunting one. Is there a way we can give control to users and thus make the search engines objective referees rather than paternalistic gatekeepers?

At Endeca, we emphasize the transparency of our engine as a core value of our offering to enterprises. Granted, our clients generally do not have an adversarial relationship with their data. Still, I am convinced that the same approach not only can work on the web, but will be the only way to end the arms race between spammers and Amit's army of tweakers.

Sunday, April 6, 2008

Nick Belkin at ECIR '08

Last week, I had the pleasure to attend the 30th European Conference on Information Retrieval, chaired by Iadh Ounis at the University of Glasgow. The conference was outstanding in several respects, not least of which was a keynote address by Nick Belkin, one the world's leading researchers on interactive information retrieval.

Nick's keynote, entitled "Some(what) Grand Challenges for Information Retrieval," was a full frontal attack on the Cranfield evaluation paradigm that has dominated IR research for the past half century. I am hoping to see his keynote published and posted online, but in the meantime here is a choice excerpt:
in accepting the [Gerald Salton] award at the 1997 SIGIR meeting, Tefko Saracevic stressed the significance of integrating research in information seeking behavior with research in IR system models and algorithms, saying: "if we consider that unlike art IR is not there for its own sake, that is, IR systems are researched and built to be used, then IR is far, far more than a branch of computer science, concerned primarily with issues of algorithms, computers, and computing."

...

Nevertheless, we can still see the dominance of the TREC (i.e. Cranfield) evaluation paradigm in most IR research, the inability of this paradigm to accommodate study of people in interaction with information systems (cf. the death of the TREC Interactive Track), and a dearth of research which integrates study of users’ goals, tasks and behaviors with research on models and methods which respond to results of such studies and supports those goals, tasks and behaviors.

This situation is especially striking for several reasons. First, it is clearly the case that IR as practiced is inherently interactive; secondly, it is clearly the case that the new models and associated representation and ranking techniques lead to only incremental (if that) improvement in performance over previous models and techniques, which is generally not statistically significant; and thirdly, that such improvement, as determined in TREC-style evaluation, rarely, if ever, leads to improved performance by human searchers in interactive IR systems.
Nick has long been critical of the IR community's neglect of users and interaction. But this keynote was significant for two reasons. First, the ECIR program committee's decision to invite a keynote speaker from the information science community acknowledges the need for collaboration between these two communities. Second, Nick reciprocated this overture by calling for interdisciplinary efforts to bridge the gap between the formal study of information retrieval and the practical understanding of information behavior. As an avid proponent of HCIR, I am heartily encouraged by steps like these.

Monday, April 28, 2008

Social Navigation

There has bit a lot of recent buzz about social navigation, including some debate about what the phrase means. I dug into the archives and found a paper from the CHI '94 Conference on Human Factors in Computing Systems entitled "Running Out of Space: Models of Information Navigation". In it, Paul Dourish and Matthew Chalmers distinguish between semantic navigation and social navigation:
[semantic navigation offers] the ability to explore and choose perspectives of view based on knowledge of the semantically-structured information.
...
In social navigation, movement from one item to another is provoked as an artifact of the activity of another or a group of others.
Back in 1994, the Web was only starting to reach a broad audience. The authors cite two examples of social navigation: personal home pages, where people listed sites they found interesting, and collaborative filtering (specifically, the Information Tapestry project at Xerox PARC).

Today, a decade and a half later, the web has scaled by several orders of magnitude, search engines have largely obviated the listing of interesting sites on personal home pages, and collaborative filtering, while still going strong as a social influence on user experience, hardly feels like navigation. It does seem that the term "social navigation" deserves an update.

Following Dourish and Chalmers, let us define social navigation as the ability to explore and choose perspectives of view based on social information. Importantly, social navigation is user-controlled navigation just like semantic navigation--only that the user is navigation by changing the social lens on the information rather than specifying semantic constraints.

One example of social navigation is the ratings information at the Internet Movie Database (IMDB). For example, we can see from the ratings for Live Free or Die Hard that the movie appealed most to males under 18.

Fandango (an Endeca customer) takes this concept a step further, offering users faceted navigation of the space of movie reviews, where facets include age, gender, whether or not the reviewer has children, and whether the reviewer lives near the user.

More sophisticated interfaces will intermingle semantic and social navigation. Here is a screen shot from a prototype some of my colleagues put together and demonstrated at HCIR '07:

Social navigation, defined as above, offers users more than just the ability to be influenced by other people. It offers users transparency and control over the social lens. It allows us to think outside the black box.

Sunday, April 27, 2008

Happy Rota Day!


Since this is a personal blog, I'd like to go a bit off-topic and take recognize my late mentor Gian-Carlo Rota, whose birthday is today. While I and countless others recall Gian-Carlo most fondly as a mentor and teacher, his crowning achievement was to make combinatorics a respectable branch of modern mathematics. Indeed, combinatorics and probability theory have been instrumental to the progress of information retrieval and information science.

And this nugget of his advice about lecturing seems remarkably appropriate in the context of how information retrieval engines should work:
Every lecture should state one main point and repeat it over and over, like a theme with variations. An audience is like a herd of cows, moving slowly in the direction they are being driven towards. If we make one point, we have a good chance that the audience will take the right direction; if we make several points, then the cows will scatter all over the field. The audience will lose interest and everyone will go back to the thoughts they interrupted in order to come to our lecture.
Happy Birthday, Gian-Carlo.

Friday, April 25, 2008

Workshop on Ranked XML Querying

Thanks to an excellent blog written by Panos Ipeirotis at the NYU Stern School, I learned about a workshop held last month in Dagstuhl on ranked XML querying. Most of the presentations are available online, including one entitled DB & IR from a DB Viewpoint by Gerhard Weikum at the Max Planck Institut für Informatik. I'm excited to see these efforts to unify the DB and IR perspectives. So much more productive than the infamous MapReduce debate!

Thursday, April 24, 2008

Database Usability

Just as I was digesting Jeff Naughton's presentation at DB/IR day, a colleague at Endeca emailed me the keynote that H. V. Jagadish (University of Michigan) presented at SIGMOD '07 on making database systems usable. He enumerates the familiar pain points of today's database systems: confusing schemas, too many choices to make, unexpected--and unexplained--system behavior, and too high a cost for initial creation. He proposes "systems that reflect the user's model of the data, rather than forcing the data to fit a particular model."

As with Jeff's presentation, the main take-away here is a framework (though both he and Jeff have taken initial steps to address the problems they describe). As a practitioner, I'm most encouraged by the fact that database researchers, like information retrieval researchers, are increasingly recognizing the importance of users.

Wednesday, April 23, 2008

The Efficiency of Social Tagging

Credit to Kevin Duh by way of the natural language processing blog for highlighting recent work from PARC on understanding the efficiency of social tagging systems using information theory. The authors apply information theory to establish a framework for measuring the efficiency social tagging systems, and then empirically observe that the efficiency of tagging on del.icio.us has been decreasing over time. They conclude by suggesting that current tagging interfaces may be at fault, through a positive feedback process of encouraging popular tags.

After seeing this and the TagMaps work at Yahoo Research Berkeley, I feel that the IR and HCI communities should join forces to understand social tagging in general terms that relate information, knowledge representation, and human beings. These concerns are hardly specific to the web or to what is now called "social media"--after all, media is social by definition. Indeed, there is no reason to confine this approach to human-tagged collections--why not consider automated tagging systems on the same playing field?

Tuesday, April 22, 2008

Accessibility in Information Retrieval

The other day, I was talking with Leif Azzopardi at the University of Glasgow about accessibility in information retrieval. Accessibility is a concept borrowed from land use and transportation planning: it measures the cost that people are willing to incur to reach opportunities (e.g., shopping, restaurants), weighted by the desirability of those opportunities.

What does accessibility mean in the context of information retrieval?
Instead of an actual physical space, in IR, we are predominately concerned with accessing information within a collection of documents (i.e., information space), and instead of a transportation system, we have an Information Access System (i.e., a means by which we can access the information in the collection, like a query mechanism, a browsing mechanism, etc). The accessibility of a document is indicative of the likelihood or opportunity of it being retrieved by the user in this information space given such a mechanism.
It's a very appealing way to measure the effectiveness with which the an information retrieval system exposes a document collection--as well as the bias the system imposes. While the paper offers more questions than answers, I recommend to anyone who is interested in thinking outside the box of the traditional IR performance measures.

Sunday, April 20, 2008

North East DB / IR Day

Last Friday, I had the privilege to attend the Spring 2008 North East DB/IR Day, hosted by Columbia University:
The North East DB/IR Day brings together database and information retrieval researchers and students from both academic and research institutions in the Northeastern United States. The DB/IR Day is a semi-annual workshop that features an exciting technical program as well as informal discussion. The DB/IR Day provides a regular forum for presenting diverse viewpoints on database systems and information retrieval, addressing current topics as well as promoting information exchange among researchers.
The event lived up to its promise, and I was impressed with the quality of student posters. But my favorite part of the event was the keynote by Jeff Naughton entitled "Extracting Problems for Database and IR Researchers."

Jeff characterized the traditional philosophy of the database community as guaranteeing perfect outputs if the inputs are perfect. He argued that what we need more of today are databases that expect imperfection and try to help.

To summarize his talk:
  • Provide support for "learn schema as you go."
  • Develop techniques to explain inconsistency and let users reason about it.
  • Expect errors, provide tools for users to understand/debug them.
  • View task as helping user discover what they want in large space of potential queries.
It is encouraging to see such a prominent database researcher advocating this vision, especially since it aligns so well with the technology we are developing at Endeca.

Friday, April 18, 2008

The Search for Meaning

By a fortuitous coincidence, I had the opportunity to see two consecutive presentations from search engine companies banking on natural language processing (NLP) to power the next generation of search. The first was from Ron Kaplan, Chief Technology and Science Officer of Powerset, who presented at Columbia University. The second was from Christian Hempelmann, Chief Scientific Officer of hakia, who presented at New York Semantic Web Meetup.

The Powerset talk was entitled "Deep natural language processing for web-scale indexing and retrieval." Jon Elsas, who attended the same talk earlier this week at CMU, did an excellent job summarizing it on his blog. I'll simply express my reaction: I don't get it. I have no reason to doubt that their NLP pipeline is best-in-class. The team has impressive credentials. But I see no evidence that they have produced better results than keyword search. After participating in their private beta for several months, I'd hoped that the presentation would help me see what I'd missed. I specifically asked Ron what measures they used to evaluate their system, and he was mum. So now I am more unconvinced than ever, and, to steal a line from a colleague, I cannot reconcile their enthusiasm with their results.

The hakia talk was entitled "Search for Meaning." Christian started by making the case for a semantic, rather than statistical, approach to NLP. He then presented hakia's technology in a fair amount of detail, including walking through examples of word sense disambiguation using context. I'm not convinced that semantics trump statistics, but I thoroughly enjoyed the presentation, and was intrigued enough to want to learn more. I find the company refreshingly open about its technology (not to mention that their beta is public), and I hope it works well enough to be practical.
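For readers unfamiliar with the technique, here is a generic sketch of context-based word sense disambiguation in the spirit of the simplified Lesk algorithm--my own toy illustration, in no way hakia's proprietary approach. The sense inventory and glosses are made up for the example.

```python
# Simplified Lesk: pick the sense whose gloss shares the most words
# with the surrounding context. A toy inventory stands in for a real lexicon.
SENSES = {
    "bank": {
        "financial institution": {"money", "deposit", "loan", "account"},
        "river edge": {"river", "water", "shore", "fishing"},
    }
}

def disambiguate(word, context_words):
    context = set(context_words)
    best_sense, best_overlap = None, -1
    for sense, gloss_words in SENSES[word].items():
        overlap = len(gloss_words & context)  # count shared words
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("bank", ["open", "an", "account", "at", "the", "bank"]))
# -> 'financial institution'
```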

Still, I'm not convinced that NLP is either the right answer or the right question. I'm no expert on the history of language, but it's clear that natural languages are hardly optimal means of communication, even among human beings. Rather, they are artifacts of our satisficing and resisting change. Since we are lucky enough to not have developed expectations that people can communicate with computers using natural language (HAL and Star Trek notwithstanding), why take a step backwards now? Rather than advocating for inefficient, unreliable communication mechanisms like natural language, we should be figuring out ways to make communication more efficient.

To use an analogy, there's a reason that programming languages have strict rules, and that compilers output errors rather than just trying to guess what you mean. The mild inconvenience upstream is a small cost, compared to the downstream benefits of unambiguous communication. I'm not suggesting that people start speaking in formal languages. But I do feel we should strive for a dialog-oriented approach where both the human and the computer have confidence in their mutual understanding. I can't resist a plug for HCIR.

Thursday, April 17, 2008

Ellen Voorhees defends Cranfield

I was extremely flattered to receive an email from Ellen Voorhees responding to my post about Nick Belkin's keynote. Then I was a little bit scared, since she is a strong advocate of the Cranfield tradition, and I braced myself for her rebuttal.

She pointed me to a talk she gave at the First International Workshop on Adaptive Information Retrieval (AIR) in 2006. I'd paraphrase her argument as follows: Nick and others (including me) are right to push for a paradigm that supports AIR research, but are being naïve regarding what is necessary for such research to deliver effective--and cost-effective--results. It's a strong case, and I'd be the first to concede that the advocates for AIR research have not (at least to my knowledge) produced a plausible abstract task that is amenable to efficient evaluation.

To quote Nick again, it's a grand challenge. And Ellen makes it clear that what we've learned so far is not encouraging.

Tuesday, April 15, 2008

Privacy and Information Theory

Privacy is an evergreen topic in technology discussions, and it increasingly finds its way into the mainstream (cf. AOL, NSA, Facebook). My impression is that most people feel that companies and government agencies are amassing their "private" data to some nefarious end.

Let's forget about technology for a moment and subject the notion of privacy to basic examination. If I truly want to keep a secret, I don't tell anyone. If I want to share information with you but no one else, I only disclose the information under the proviso of a social or legal contract of non-disclosure.

But there's a major catch here: you--or I--may disclose the information involuntarily by our actions. The various establishments I frequent know my favorite foods, drinks, and even karaoke songs. More subtly, if I tell you in confidence that I don't like or trust someone, that information is likely to visibly affect your interaction with that person. Moreover, someone who knows that we are friends might even suspect me as the cause for your change in behavior.

What does this have to do with privacy of information? Everything! The mainstream debates treat information privacy as binary. Even when people discuss gradations of privacy, they tend to treat each particular disclosure (e.g., age, favorite flavor of ice cream) as binary. But if we take an information-theoretic look at disclosure, we immediately see that this binary view is illusory.

For example, if you know I work for a software company and live in New York City, you know more about my gender, education, and salary than if you only know that I live in the United States. We can quantify this information gain in bits of conditional entropy.
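Here is a self-contained sketch of that calculation, with made-up numbers purely for illustration: the information disclosed is the drop in entropy between a prior belief about an attribute and the posterior belief after the disclosure.

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical prior over salary bands, knowing only "lives in the US".
prior = {"low": 0.5, "mid": 0.35, "high": 0.15}
# Hypothetical posterior after learning "works at a software company in NYC".
posterior = {"low": 0.05, "mid": 0.45, "high": 0.5}

gain = entropy(prior) - entropy(posterior)
print(f"information disclosed: {gain:.2f} bits")
```

The same arithmetic applies to any attribute: every disclosure shifts the distribution, and the shift can be measured in bits.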

Information theory provides a unifying framework for thinking about privacy. We can answer questions like "if I disclose that I like bagels and smoked salmon, to what extent do I disclose that I live in New York?" Or: to what extent does an anonymized search log identify me personally?

If we can take this framework and make it consumable to non-information theorists, perhaps we can improve the quality of the privacy debate.

Saturday, April 12, 2008

Can Search be a Utility?

A recent lecture at the New York CTO club inspired a heated discussion on what is wrong with enterprise search solutions. Specifically, Jon Williams asked why search can't be a utility.

It's unfortunate when such a simple question calls for a complicated answer, but I'll try to tackle it.

On the web, almost all attempts to deviate even slightly from the venerable ranked-list paradigm have been resounding flops. More sophisticated interfaces, such as Clusty, receive favorable press coverage, but users don't vote for them with their virtual feet. And web search users seem reasonably satisfied with their experience.

Conversely, in the enterprise, there is widespread dissatisfaction with enterprise search solutions. A number of my colleagues have said that they installed a Google Search Appliance and "it didn't work." (Full disclosure: Google competes with Endeca in the enterprise).

While the GSA does have some significant technical limitations, I don't think the failures were primarily for technical reasons. Rather, I believe there was a failure of expectations. I believe the problem comes down to the question of whether relevance is subjective.

On the web, we get away with pretending that relevance is objective because there is so much agreement among users--particularly in the restricted class of queries that web search handles well, and that hence constitute the majority of actual searches.

In the enterprise, however, we not only lack the redundant and highly social structure of the web; we also tend to have more sophisticated information needs. Specifically, we tend to ask the kinds of informational queries that web search serves poorly, particularly when there is no Wikipedia page that addresses our needs.

It seems we can go in two directions.

The first is to make enterprise search more like web search by reducing the enterprise search problem to one that is user-independent and does not rely on the social generation of enterprise data. Such a problem encompasses mundane but important tasks like finding documents by title or finding department home pages. The challenges here are fundamentally ones of infrastructure, reflecting the heterogeneous content repositories in enterprises and the controls mandated by business processes and regulatory compliance. Solving these problems is no cakewalk, but I think all of the major enterprise search vendors understand the framework for solving them.

The second is to embrace the difference between enterprise knowledge workers and casual web users, and to abandon the quest for an objective relevance measure. Such an approach requires admitting that there is no free lunch--that you can't just plug in a box and expect it to solve an enterprise's knowledge management problem. Rather, enterprise workers need to help shape the solution by supplying their proprietary knowledge and information needs. The main challenges for information access vendors are to make this process as painless as possible for enterprises, and to demonstrate the return so that enterprises make the necessary investment.

Thursday, April 10, 2008

Multiple-Query Sessions

As Nick Belkin pointed out in his recent ECIR 2008 keynote, a grand challenge for the IR community is to figure out how to bring the user into the evaluation process. A key aspect of this challenge is rethinking system evaluation in terms of sessions rather than queries.

Some recent work in the IR community is very encouraging:

- Work by Ryen White and colleagues at Microsoft Research that mines session data to guide users to popular web destinations. Their paper was awarded Best Paper at SIGIR 2007.

- Work by Nick Craswell and Martin Szummer (also at Microsoft Research, and also presented at SIGIR 2007) that performs random walks on the click graph to use click data effectively as evidence to improve relevance ranking for image search on the web.

- Work by Kalervo Järvelin (at the University of Tampere in Finland) and colleagues on discounted cumulated gain based evaluation of multiple-query IR sessions that was awarded Best Paper at ECIR 2008 (a minimal sketch of the idea appears below).
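Here is a minimal sketch of the session-based measure from that last paper, as I understand it, with parameters and details simplified from the original: each query's discounted cumulated gain is itself discounted by the query's position in the session, reflecting the extra effort of reformulating.

```python
import math

def dcg(gains, b=2):
    """Discounted cumulated gain for one ranked list of relevance grades."""
    return sum(g / max(1.0, math.log(rank, b))
               for rank, g in enumerate(gains, start=1))

def session_dcg(session, b=2, bq=4):
    """Session DCG: later queries in a session count for less; bq controls
    how steeply the per-query discount grows with query position."""
    return sum(dcg(gains, b) / (1 + math.log(pos, bq))
               for pos, gains in enumerate(session, start=1))

# Relevance grades (0-3) for the top results of three successive queries.
session = [[0, 1, 0], [2, 1, 0], [3, 2, 1]]
print(session_dcg(session))
```

The appeal is that two systems can now be compared on whole search sessions: a system that forces three reformulations to find the good documents scores lower than one that surfaces them on the first query.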

This recent work--and the prominence it has received in the IR community--is refreshing, especially in light of the relative lack of academic work on interactive IR and the demise of the short-lived TREC interactive track. They are first steps, but hopefully IR researchers and practitioners will pick up on them.

Tuesday, April 8, 2008

Q&A with Amit Singhal

Amit Singhal, who is head of search quality at Google, gave a very entertaining keynote at ECIR '08 that focused on the adversarial aspects of Web IR. Specifically, he discussed some of the techniques used in the arms race to game Google's ranking algorithms. Perhaps he revealed more than he intended!

During the question and answer session, I reminded Amit of the admonition against security through obscurity that is well accepted in the security and cryptography communities. I questioned whether his team is pursuing the wrong strategy by failing to respect this maxim. Amit replied that a relevance analog to security by design was an interesting challenge (which he delegated to the audience), but he appealed to the subjectivity of relevance as a reason why it is harder to make relevance as transparent as security.

While I accept the difficulty of this challenge, I reject the suggestion that subjectivity makes it harder. To begin with, Google and other web search engines rank results objectively, rather than based on user-specific considerations. Furthermore, the subjectivity of relevance should make the adversarial problem easier rather than harder, as has been observed in the security industry.

But the challenge is indeed a daunting one. Is there a way we can give control to users and thus make the search engines objective referees rather than paternalistic gatekeepers?

At Endeca, we emphasize the transparency of our engine as a core value of our offering to enterprises. Granted, our clients generally do not have an adversarial relationship with their data. Still, I am convinced that the same approach not only can work on the web, but will be the only way to end the arms race between spammers and Amit's army of tweakers.

Sunday, April 6, 2008

Nick Belkin at ECIR '08

Last week, I had the pleasure to attend the 30th European Conference on Information Retrieval, chaired by Iadh Ounis at the University of Glasgow. The conference was outstanding in several respects, not least of which was a keynote address by Nick Belkin, one of the world's leading researchers on interactive information retrieval.

Nick's keynote, entitled "Some(what) Grand Challenges for Information Retrieval," was a full frontal attack on the Cranfield evaluation paradigm that has dominated IR research for the past half century. I am hoping to see his keynote published and posted online, but in the meantime here is a choice excerpt:
in accepting the [Gerald Salton] award at the 1997 SIGIR meeting, Tefko Saracevic stressed the significance of integrating research in information seeking behavior with research in IR system models and algorithms, saying: "if we consider that unlike art IR is not there for its own sake, that is, IR systems are researched and built to be used, then IR is far, far more than a branch of computer science, concerned primarily with issues of algorithms, computers, and computing."

...

Nevertheless, we can still see the dominance of the TREC (i.e. Cranfield) evaluation paradigm in most IR research, the inability of this paradigm to accommodate study of people in interaction with information systems (cf. the death of the TREC Interactive Track), and a dearth of research which integrates study of users’ goals, tasks and behaviors with research on models and methods which respond to results of such studies and supports those goals, tasks and behaviors.

This situation is especially striking for several reasons. First, it is clearly the case that IR as practiced is inherently interactive; secondly, it is clearly the case that the new models and associated representation and ranking techniques lead to only incremental (if that) improvement in performance over previous models and techniques, which is generally not statistically significant; and thirdly, that such improvement, as determined in TREC-style evaluation, rarely, if ever, leads to improved performance by human searchers in interactive IR systems.
Nick has long been critical of the IR community's neglect of users and interaction. But this keynote was significant for two reasons. First, the ECIR program committee's decision to invite a keynote speaker from the information science community acknowledges the need for collaboration between these two communities. Second, Nick reciprocated this overture by calling for interdisciplinary efforts to bridge the gap between the formal study of information retrieval and the practical understanding of information behavior. As an avid proponent of HCIR, I am heartily encouraged by steps like these.