LIST OF TABLES

TABLE 4-1: HYPOTHETICAL RECOMMENDATION LISTS FOR ROBERT HEINLEIN
TABLE 4-2: PRECISION AND RECALL FOR NON-DIVERSIFIED CF
TABLE 4-3: MULTIPLE LINEAR REGRESSION RESULTS
TABLE 5-1: THE NUMBER OF SUBJECTS FOR EACH RECOMMENDER ALGORITHM
TABLE 5-2: OVERALL USER OPINION, IN PERCENTAGES
TABLE 5-3: USER PROFILING ALTERNATIVES
TABLE 5-4: NUMBER OF USERS PER ALGORITHM
TABLE 5-5: OVERALL USER SATISFACTION
TABLE 5-6: RECOMMENDED ALGORITHM BY PAPER CLASS
TABLE 5-7: DISTRIBUTION OF USERS IN ONLINE EXPERIMENT
TABLE 7-1: THE FLUIDITY OF 'FIND MORE REFERENCES'
TABLE 7-2: EXAMPLE USER TYPE/USER TASK MATRIX
TABLE 7-3: EXAMPLE HRI ASPECT MAPPINGS FOR USER TYPES
TABLE 7-4: EXAMPLE HRI ASPECT MAPPING FOR USER TASKS
TABLE 8-1: ALGORITHM AND METRIC LISTINGS
TABLE 8-2: SUMMARY OF RESULTS BY ALGORITHM
TABLE 8-3: HRI MAPPINGS FOR RECOMMENDER METRICS IN THE DOMAIN OF RESEARCH PAPERS
TABLE 9-1: SUMMARY OF RECOMMENDER ALGORITHM PROPERTIES
TABLE 9-2: AVAILABLE INFORMATION SEEKING TASKS
TABLE 9-3: SUMMARY OF ALL SURVEY QUESTIONS
TABLE 9-4: NUMBER OF USERS PER EXPERIMENTAL CONDITION
TABLE 9-5: OVERLAP OF TOP-10 RECOMMENDATION LISTS, BY BASKET SIZE
TABLE 9-6: ORDER EFFECT FOR QUESTION 2-2
LIST OF FIGURES

FIGURE 1-1: THE INTENTION GAP BETWEEN USERS AND RECOMMENDERS
FIGURE 1-2: THE HUMAN-RECOMMENDER INTERACTION PROCESS MODEL
FIGURE 3-1: A CONCEPTUAL RATINGS MATRIX
FIGURE 3-2: THE RECOMMENDATION PROCESS MODEL
FIGURE 3-3: AN INSTANCE OF THE RECOMMENDATION PROCESS MODEL
FIGURE 3-4: RATING MATRIX EXAMPLE WITH CARS AND PAINT
FIGURE 3-5: SELF-REFLECTIVE RATINGS MATRIX AS DIRECTED GRAPH
FIGURE 4-1: PRECISION (A) AND RECALL (B) FOR INCREASING DIVERSITY
FIGURE 4-2: INTRA-LIST SIMILARITY BEHAVIOR (A) AND OVERLAP WITH PURE CF LISTS (B)
FIGURE 4-3: RESULTS FOR SINGLE-VOTE AVERAGES (A), COVERED RANGE OF INTERESTS (B), AND OVERALL SATISFACTION (C)
FIGURE 5-1: OUR RATINGS MATRIX FOR RESEARCH PAPERS
FIGURE 5-2: FOR REMOVED CITATIONS THAT AN ALGORITHM WAS ABLE TO RECOMMEND, THE PERCENTAGE OF CITATIONS RECOMMENDED FIRST, AND IN THE TOP 10, 20, 30, OR 40 BY EACH ALGORITHM
FIGURE 5-3: FOR ALL REMOVED CITATIONS, THE PERCENTAGE OF CITATIONS RECOMMENDED FIRST, AND IN THE TOP 10, 20, 30, OR 40 BY EACH ALGORITHM
FIGURE 5-4: QUALITY OF INDIVIDUAL RECOMMENDATIONS
FIGURE 5-5: NOVELTY OF INDIVIDUAL RECOMMENDATIONS
FIGURE 5-6: FINDING RELATED WORK RESULTS
FIGURE 5-7: FINDING PAPERS TO READ RESULTS
FIGURE 5-8: MODEL INSTANCE FOR HYBRID RECOMMENDER ALGORITHMS
FIGURE 5-9: THE CBF-COMBINED AND CBF-SEPARATED ALGORITHMS
FIGURE 5-10: THE FUSION ALGORITHM
FIGURE 5-11: OFFLINE SUMMARY FOR TOP FIVE ALGORITHMS
FIGURE 5-13: OVERALL USER SATISFACTION BY ALGORITHM
FIGURE 6-1: THE PILLARS AND ASPECTS OF HRI
FIGURE 6-2: THE HRI ANALYTIC PROCESS MODEL
FIGURE 8-1: OVERVIEW OF THE ADAPTABILITY METRIC
FIGURE 8-2: POPULARITY RESULTS
FIGURE 8-3: RATABILITY RESULTS
FIGURE 8-4: ADJUSTED RANK RESULTS
FIGURE 8-5: ADAPTABILITY, EQUAL-SPLIT RESULTS
FIGURE 8-6: ADAPTABILITY, ADAPT-HEAVY RESULTS
FIGURE 8-7: POPULARITY NEIGHBORHOOD RESULTS
FIGURE 8-8: HYBRID POPULARITY NEIGHBORHOOD RESULTS
FIGURE 8-9: RATABILITY NEIGHBORHOOD RESULTS
FIGURE 8-10: HYBRID RATABILITY NEIGHBORHOOD RESULTS
FIGURE 8-11: ADAPTABILITY, EQUAL-SPLIT NEIGHBORHOOD RESULTS
FIGURE 8-12: HYBRID ADAPTABILITY, EQUAL-SPLIT NEIGHBORHOOD RESULTS
FIGURE 9-1: THE AUTHOR SELECTION PAGE
FIGURE 9-2: THE PAPER AND CITATION SELECTION PAGE
FIGURE 9-3: THE RECOMMENDATION LIST SCREEN
FIGURE 9-4: RESULTS FOR FIRST THREE SURVEY QUESTIONS
FIGURE 9-5: RESULTS FOR SECOND THREE SURVEY QUESTIONS
FIGURE 9-6: USER OPINION OF SUITABILITY FOR CHOSEN TASK
CHAPTER 1
INTRODUCTION

"Every day, approximately 20 million words of technical information are recorded. A reader capable of reading 1,000 words per minute would require 1.5 months, reading eight hours every day, to get through one day's output, and at the end of that period he would have fallen 5.5 years behind in his reading."

Hubert Murray Jr., Methods for Satisfying the Needs of the Scientist and the Engineer for Scientific and Technical Communication [100]

In 1996, Reuters performed a survey of IT managers and found that many of them suffered health problems from information overload. Other effects found in the study included "anxiety, poor decision-making, difficulties in memorizing and remembering, and reduced attention span" [56]. Information overload is real, and it has reached the point where people are becoming physically ill trying to keep up. Hubert Murray's quotation above is even more telling than it first appears. It is from 1966: before the introduction of the personal computer, before the explosion of the World Wide Web, before the Information Age. (In 1966, the ACM had around 10,000 members and had just awarded its first Turing Award.)

Fortunately, there are recommender systems. Recommenders quickly and efficiently sort through vast quantities of information and bring the relevant pieces to our attention. In doing so, recommenders help us navigate complex information spaces, allowing us to be more efficient and, well, healthier.
Humans have acted as recommenders for many years: newspaper editors, movie critics, Consumer Reports, and postings on Epinions.com are all examples of people acting as recommenders for others. People provide their opinions for others to use when making a similar decision. This form of recommendation is called collaborative filtering: a target person seeks out the opinions of other people who are similar to her (e.g., collaborators) and uses their opinions to inform her decision-making process.

Just as computers exacerbate the information overload problem, they also help solve it. In the last 12 years, collaborative filtering has been enhanced through computer algorithms. By combining the strengths of a computer (sifting through data) with the strengths of a person (interpreting people's opinions), automated collaborative filtering-based recommender systems help users receive high-quality, personalized recommendations.
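To make the idea concrete, the sketch below shows, in Python, one common way automated collaborative filtering can be realized: compute a similarity score between the target user and every other user from the ratings they share, then rank the items the target has not yet rated by a similarity-weighted average of the other users' ratings. The ratings data, user names, and function names are invented for illustration only; this is a minimal sketch of the general technique under those assumptions, not the implementation of any particular system discussed in this thesis.

```python
# Minimal, hypothetical sketch of user-user collaborative filtering.
# The ratings, users, and item names are made up for illustration.
from math import sqrt

ratings = {
    "alice": {"Star Wars": 5, "Blade Runner": 4, "Titanic": 1},
    "bob":   {"Star Wars": 4, "Blade Runner": 5, "Amelie": 3},
    "carol": {"Titanic": 5, "Amelie": 4, "Star Wars": 2},
}

def similarity(a, b):
    """Cosine similarity over the items two users have both rated."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    dot = sum(ratings[a][i] * ratings[b][i] for i in common)
    norm_a = sqrt(sum(ratings[a][i] ** 2 for i in common))
    norm_b = sqrt(sum(ratings[b][i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

def recommend(user, k=2):
    """Rank unseen items by the similarity-weighted ratings of other users."""
    scores, weights = {}, {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for item, r in ratings[other].items():
            if item in ratings[user]:
                continue  # only recommend items the target user has not rated
            scores[item] = scores.get(item, 0.0) + sim * r
            weights[item] = weights.get(item, 0.0) + sim
    predicted = {i: scores[i] / weights[i] for i in scores if weights[i] > 0}
    return sorted(predicted, key=predicted.get, reverse=True)[:k]

print(recommend("alice"))  # e.g. ['Amelie']
```

Real systems differ mainly in scale and in the choice of similarity measure and neighborhood size, but the basic pattern of "find people like me, then borrow their opinions" is the same.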
Because of these strengths, automated collaborative filtering (a.k.a. CF) is popular with many e-commerce websites, appearing at Amazon.com, Yahoo! Music, and Netflix, among others. In each case, CF recommends products the user might not have known about otherwise, enhancing the user's experience and improving the company's bottom line. However, recommenders, both computer and human, do not always generate good recommendations; sometimes things can go quite wrong. For example, here is a true anecdote:

John is the hero of our story. On a Friday evening he was going to take a date to see a movie. Unfortunately, he did not know what movie to see. He asked his closest friend for advice on which movie was the best one playing. His friend immediately answered, "A Clockwork Orange. You will love that movie." John was skeptical but followed his friend's advice. The date went horribly. While he did enjoy the movie himself, his date did not care for it at all.

What went wrong? The problem was not that the recommendation was bad; on the contrary, the recommendation was exactly right. The context was wrong: the friend assumed John was not going on a date, and John failed to mention it. This kind of miscommunication between recommenders and users still plagues us today. As computers have become recommenders, this problem has gotten worse. While it would have been easy for John to correct his friend, it is more difficult, for example, to correct your TiVo when it incorrectly thinks you are gay, or when Amazon.com incorrectly thinks you love all cartoons [160].

Yet recommenders have enjoyed great success in many domains, including movies, books, music, and jokes. It is not an issue of quality; recommenders make accurate recommendations. It is an issue of context and purpose: why does the user want a recommendation? What is he going to do with it? This problem has been discussed in the recommender systems literature. Herlocker et al. described it best in [51] when they state: "There is an emerging understanding that good recommendation accuracy alone does not give users of recommender systems an effective and satisfying experience. Recommender systems must provide not just accuracy, but also usefulness." (Emphasis in original.)

We believe that a deeper understanding of why users want recommendations can only help improve the quality and usefulness of recommenders. Much like the friend in our story, if computer recommenders knew why someone wanted a movie recommendation, they could tailor recommendations not just to the person, but also to the person's current need. Of course, understanding why someone wants a recommendation can be elusive.

Thesis statement

We believe information seeking theory provides a framework to answer this question of why [20]. We assume that the user has an information need and has come to a recommender as part of her information seeking behavior. Given this assumption, we state our thesis as the following: