Application of Dimensionality Reduction in Recommender System -- A Case Study

Badrul M. Sarwar, George Karypis, Joseph A. Konstan, John T. Riedl
Department of Computer Science and Engineering / Army HPC Research Center
University of Minnesota, Minneapolis, MN 55455
+1 612 625-4002
{sarwar, karypis, konstan, riedl}@cs.umn.edu

Abstract

We investigate the use of dimensionality reduction to improve performance for a new class of data analysis software called "recommender systems". Recommender systems apply knowledge discovery techniques to the problem of making product recommendations during a live customer interaction. These systems are achieving widespread success in E-commerce nowadays, especially with the advent of the Internet. The tremendous growth of customers and products poses three key challenges for recommender systems in the E-commerce domain. These are: producing high quality recommendations, performing many recommendations per second for millions of customers and products, and achieving high coverage in the face of data sparsity. One successful recommender system technology is collaborative filtering, which works by matching customer preferences to those of other customers when making recommendations. Collaborative filtering has been shown to produce high quality recommendations, but its performance degrades with the number of customers and products. New recommender system technologies are needed that can quickly produce high quality recommendations, even for very large-scale problems. This paper presents two different experiments where we have explored one technology called Singular Value Decomposition (SVD) to reduce the dimensionality of recommender system databases. Each experiment compares the quality of a recommender system using SVD with the quality of a recommender system using collaborative filtering.
The first experiment compares the effectiveness of the two recommender systems at predicting consumer preferences based on a database of explicit ratings of products. The second experiment compares the effectiveness of the two recommender systems at producing Top-N lists based on a real-life customer purchase database from an E-Commerce site. Our experience suggests that SVD has the potential to meet many of the challenges of recommender systems, under certain conditions.

1 Introduction

Recommender systems have evolved in the extremely interactive environment of the Web. They apply data analysis techniques to the problem of helping customers find which products they would like to purchase at E-Commerce sites. For instance, a recommender system on Amazon.com (www.amazon.com) suggests books to customers based on other books the customers have told Amazon they like. Another recommender system on CDnow (www.cdnow.com) helps customers choose CDs to purchase as gifts, based on other CDs the recipient has liked in the past. In a sense, recommender systems are an application of a particular type of Knowledge Discovery in Databases (KDD) (Fayyad et al. 1996) technique. KDD systems use many subtle data analysis techniques to achieve two unsubtle goals. They are: (i) to save money by discovering the potential for efficiencies, or (ii) to make more money by discovering ways to sell more products to customers. For instance, companies are using KDD to discover which products sell well at which times of year, so they can manage their retail store inventory more efficiently, potentially saving millions of dollars a year (Brachman et al. 1996). Other companies are using KDD to discover which customers will be most interested in a special offer, reducing the costs of direct mail or outbound telephone campaigns by hundreds of thousands of dollars a year (Bhattacharyya 1998, Ling et al. 1998).
These applications typically involve using KDD to discover a new model, and having an analyst apply the model to the application. However, the most direct benefit of KDD to businesses is increasing sales of existing products by matching customers to the products they will be most likely to purchase. The Web presents new opportunities for KDD, but challenges KDD systems to perform interactively. While a customer
is at the E-Commerce site, the recommender system must learn from the customer's behavior, develop a model of that behavior, and apply that model to recommend products to the customer. Recommender systems directly realize this benefit of KDD systems in E-Commerce. They help consumers find the products they wish to buy at the E-Commerce site. Collaborative filtering is the most successful recommender system technology to date, and is used in many of the most successful recommender systems on the Web, including those at Amazon.com and CDnow.com. The earliest implementations of collaborative filtering, in systems such as Tapestry (Goldberg et al., 1992), relied on the opinions of people from a close-knit community, such as an office workgroup. However, collaborative filtering for large communities cannot depend on each person knowing the others. Several systems use statistical techniques to provide personal recommendations of documents by finding a group of other users, known as neighbors, that have a history of agreeing with the target user. Usually, neighborhoods are formed by applying proximity measures such as the Pearson correlation between the opinions of the users. These are called nearest-neighbor techniques. Figure 1 depicts the neighborhood formation using a nearest-neighbor technique in a very simple two-dimensional space. Notice that each user's neighborhood is those other users who are most similar to him, as identified by the proximity measure. Neighborhoods need not be symmetric. Each user has the best neighborhood for him. Once a neighborhood of users is found, particular products can be evaluated by forming a weighted composite of the neighbors' opinions of that document. These statistical approaches, known as automated collaborative filtering, typically rely upon ratings as numerical expressions of user preference. Several ratings-based automated collaborative filtering systems have been developed. The GroupLens Research system (Resnick et al.
1994) provides a pseudonymous collaborative filtering solution for Usenet news and movies. Ringo (Shardanand et al. 1995) and Video Recommender (Hill et al. 1995) are email and web systems that generate recommendations on music and movies respectively.

[Figure 1: Illustration of the neighborhood formation process. The distance between the target user and every other user is computed and the closest-k users are chosen as the neighbors (for this diagram k = 5).]

[Figure 2: Recommender System Architecture.]

Here we present the schematic diagram of the architecture of the GroupLens Research collaborative filtering engine in figure 2. The user interacts with a Web interface. The Web server software communicates with the recommender system to
choose products to suggest to the user. The recommender system, in this case a collaborative filtering system, uses its database of ratings of products to form neighborhoods and make recommendations. The Web server software displays the recommended products to the user.

The largest Web sites operate at a scale that stresses the direct implementation of collaborative filtering. Model-based techniques (Fayyad et al., 1996) have the potential to contribute to recommender systems that can operate at the scale of these sites. However, these techniques must be adapted to the real-time needs of the Web, and they must be tested in realistic problems derived from Web access patterns. The present paper describes our experimental results in applying a model-based technique, Latent Semantic Indexing (LSI), that uses a dimensionality reduction technique, Singular Value Decomposition (SVD), to our recommender system. We use two data sets in our experiments to test the performance of the model-based technique: a movie dataset and an e-commerce dataset.

The contributions of this paper are:

1. The details of how one model-based technology, LSI/SVD, was applied to reduce dimensionality in recommender systems for generating predictions.
2. Using a low-dimensional representation to compute neighborhoods for generating recommendations.
3. The results of our experiments with LSI/SVD on two test data sets: our MovieLens test-bed and customer-product purchase data from a large E-commerce company.

The rest of the paper is organized as follows. The next section describes some potential problems associated with correlation-based collaborative filtering models. Section 3 explores the possibilities of leveraging the latent semantic relationship in the customer-product matrix as a basis for prediction generation. At the same time it explains how we can take advantage of reduced dimensionality to form a better neighborhood of customers.
The section following that delineates our experimental test-bed, experimental design, results and discussion about the improvement in quality and performance. Section 5 concludes the paper and provides directions for future research.

2 Existing Recommender Systems Approaches and their Limitations

Most collaborative filtering based recommender systems build a neighborhood of like-minded customers. The neighborhood formation scheme usually uses Pearson correlation or cosine similarity as a measure of proximity (Shardanand et al. 1995, Resnick et al. 1994). Once these systems determine the proximity neighborhood they produce two types of recommendations:

1. Prediction of how much a customer C will like a product P. In the case of a correlation-based algorithm, the prediction for product P and customer C is computed as a weighted sum of the neighbors' ratings of P, with C's average rating added back. This can be expressed by the following formula (Resnick et al., 1994):

   pred(C, P) = \bar{C} + \frac{\sum_{J \text{ rates } P} r_{CJ} (J_P - \bar{J})}{\sum_{J} |r_{CJ}|}

   Here, r_{CJ} denotes the correlation between user C and neighbor J, J_P is J's rating on product P, and \bar{J} and \bar{C} are J's and C's average ratings. The prediction is personalized for the customer C. There are, however, some naive non-personalized prediction schemes where the prediction, for example, is computed simply by taking the average rating of the item being predicted over all users (Herlocker et al., 1999).

2. Recommendation of a list of products for a customer C. This is commonly known as top-N recommendation. Once a neighborhood is formed, the recommender system algorithm focuses on the products rated by the neighbors and selects a list of N products that will be liked by the customer.
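The weighted-sum prediction above can be sketched in a few lines of Python. This is an illustrative reading of the Resnick et al. (1994) formula rather than the GroupLens implementation; the function names and the dense-matrix layout (with np.nan marking unrated products) are our own assumptions.

```python
import numpy as np

def pearson(u, v):
    """Pearson correlation of two rating vectors over co-rated products only."""
    both = ~np.isnan(u) & ~np.isnan(v)
    if both.sum() < 2 or u[both].std() == 0 or v[both].std() == 0:
        return 0.0  # correlation is undefined; treat as no relationship
    return float(np.corrcoef(u[both], v[both])[0, 1])

def predict(R, c, p):
    """Predict customer c's rating of product p.

    R: customers x products ratings array, np.nan where unrated.
    """
    c_mean = np.nanmean(R[c])
    num = den = 0.0
    for j in range(R.shape[0]):
        if j == c or np.isnan(R[j, p]):
            continue  # only neighbors who actually rated p contribute
        r_cj = pearson(R[c], R[j])
        num += r_cj * (R[j, p] - np.nanmean(R[j]))
        den += abs(r_cj)
    return c_mean if den == 0 else c_mean + num / den
```

For a neighborhood-based variant, the sum would run only over the closest-k users rather than over all raters of p.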
These systems have been successful in several domains, but the algorithm is reported to have shown some limitations, such as:

- Sparsity: Nearest neighbor algorithms rely upon exact matches, which cause the algorithms to sacrifice recommender system coverage and accuracy (Konstan et al., 1997; Sarwar et al., 1998). In particular, since the correlation coefficient is only defined between customers who have rated at least two products in common, many pairs of customers have no correlation at all (Billsus et al., 1998). In practice, many commercial recommender systems are used to
evaluate large product sets (e.g., Amazon.com recommends books and CDnow recommends music albums). In these systems, even active customers may have rated well under 1% of the products (1% of 2 million books is 20,000 books -- a large set on which to have an opinion). Accordingly, Pearson nearest neighbor algorithms may be unable to make many product recommendations for a particular user. This problem is known as reduced coverage, and is due to sparse ratings of neighbors. Furthermore, the accuracy of recommendations may be poor because fairly little ratings data can be included. An example of a missed opportunity for quality is the loss of neighbor transitivity. If customers Paul and Sue correlate highly, and Sue also correlates highly with Mike, it is not necessarily true that Paul and Mike will correlate. They may have too few ratings in common or may even show a negative correlation due to a small number of unusual ratings in common.

- Scalability: Nearest neighbor algorithms require computation that grows with both the number of customers and the number of products. With millions of customers and products, a typical web-based recommender system running existing algorithms will suffer serious scalability problems.

- Synonymy: In real-life scenarios, different product names can refer to similar objects. Correlation-based recommender systems can't find this latent association and treat these products differently. For example, consider two customers: one rates 10 different recycled letter pad products as "high" and another rates 10 different recycled memo pad products as "high". Correlation-based recommender systems would see no match between the product sets, and would be unable to discover the latent association that both of them like recycled office products.
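The sparsity and synonymy limitations above are easy to reproduce. The following NumPy sketch (a made-up example; the product layout is our assumption) shows two customers whose rating vectors share no co-rated products, so the Pearson correlation is simply undefined even though both plainly like recycled office products:

```python
import numpy as np

# Customers x products; np.nan marks unrated products. Customer 0 rates only
# letter pads (columns 0-2), customer 1 rates only memo pads (columns 3-5).
R = np.array([[5.0, 4.0, 5.0, np.nan, np.nan, np.nan],
              [np.nan, np.nan, np.nan, 5.0, 4.0, 5.0]])

co_rated = ~np.isnan(R[0]) & ~np.isnan(R[1])
print(co_rated.sum())  # 0: no co-rated products, so no correlation can be computed
```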
3 Applying SVD for Collaborative Filtering

The weakness of Pearson nearest neighbor for large, sparse databases led us to explore alternative recommender system algorithms. Our first approach attempted to bridge the sparsity by incorporating semi-intelligent filtering agents into the system (Sarwar et al., 1998; Good et al., 1999). These agents evaluated and rated each product using syntactic features. By providing a dense ratings set, they helped alleviate the coverage problem and improved quality. The filtering agent solution, however, did not address the fundamental problem of poor relationships among like-minded but sparse-rating customers. We recognized that the KDD research community had extensive experience learning from sparse databases. After reviewing several KDD techniques, we decided to try applying Latent Semantic Indexing (LSI) to reduce the dimensionality of our customer-product ratings matrix.

LSI is a dimensionality reduction technique that has been widely used in information retrieval (IR) to solve the problems of synonymy and polysemy (Deerwester et al. 1990). Given a term-document frequency matrix, LSI is used to construct two matrices of reduced dimensionality. In essence, these matrices represent latent attributes of terms, as reflected by their occurrence in documents, and of documents, as reflected by the terms that occur within them. We are trying to capture the relationships among pairs of customers based on ratings of products. By reducing the dimensionality of the product space, we can increase density and thereby find more ratings. Discovery of latent relationships from the database may potentially solve the synonymy problem in recommender systems. LSI, which uses singular value decomposition as its underlying matrix factorization algorithm, maps nicely into the collaborative filtering recommender algorithm challenge. Berry et al.
(1995) point out that the reduced orthogonal dimensions resulting from SVD are less noisy than the original data and capture the latent associations between the terms and documents. Earlier work (Billsus et al. 1998) took advantage of this semantic property to reduce the dimensionality of feature space. The reduced feature space was used to train a neural network to generate predictions. The rest of this section presents the construction of the SVD-based recommender algorithm for the purpose of generating predictions and top-N recommendations; the following section describes our experimental setup, evaluation metrics, and results.

3.1 Singular Value Decomposition (SVD)

SVD is a well-known matrix factorization technique that factors an m × n matrix R into three matrices as follows:

   R = U · S · V'

where U and V are two orthogonal matrices of size m × r and n × r respectively, and r is the rank of the matrix R. S is a diagonal matrix of size r × r having all singular values of matrix R as its diagonal entries. All the entries of matrix S are positive and stored in
decreasing order of their magnitude. The matrices obtained by performing SVD are particularly useful for our application because of the property that SVD provides the best lower-rank approximations of the original matrix R, in terms of the Frobenius norm. It is possible to reduce the r × r matrix S to keep only the k largest diagonal values, obtaining a matrix Sk, k < r. If the matrices U and V are reduced accordingly, then the reconstructed matrix Rk = Uk·Sk·Vk' is the closest rank-k matrix to R. In other words, Rk minimizes the Frobenius norm ||R − Rk|| over all rank-k matrices.

We use SVD in recommender systems to perform two different tasks. First, we use SVD to capture the latent relationships between customers and products that allow us to compute the predicted likeliness of a certain product by a customer. Second, we use SVD to produce a low-dimensional representation of the original customer-product space and then compute the neighborhood in the reduced space; we then use that neighborhood to generate a list of top-N product recommendations for customers. The following is a description of our experiments.

3.1.1 Prediction Generation

We start with a customer-product ratings matrix that is very sparse; we call this matrix R. To capture meaningful latent relationships we first removed the sparsity by filling in our customer-product ratings matrix. We tried two different approaches: using the average ratings for a customer and using the average ratings for a product. We found the product average to produce better results. We also considered two normalization techniques: conversion of ratings to z-scores and subtraction of the customer average from each rating. We found the latter approach to provide better results. After normalization we obtain a filled, normalized matrix Rnorm. Essentially, Rnorm = R + NPR, where NPR is the fill-in matrix that provides the naive non-personalized recommendations.
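The filling, normalization, and rank-k truncation steps described above can be sketched in a few lines of numpy. This is not the authors' code; it is a minimal reconstruction on a toy 4 × 3 ratings matrix (0 stands for "not rated", and the customer average is computed from the filled matrix, which is an assumption about a detail the text leaves open):

```python
import numpy as np

# Toy customer-product ratings matrix R; 0 denotes "not rated".
R = np.array([
    [5.0, 3.0, 0.0],
    [4.0, 0.0, 1.0],
    [0.0, 2.0, 5.0],
    [1.0, 4.0, 0.0],
])
mask = R > 0

# Fill missing entries with the product (column) average over rated entries ...
col_avg = R.sum(axis=0) / mask.sum(axis=0)
R_filled = np.where(mask, R, col_avg)

# ... then subtract each customer's (row) average, giving Rnorm = R + NPR.
cust_avg = R_filled.mean(axis=1, keepdims=True)
R_norm = R_filled - cust_avg

# Truncated SVD: keep only the k largest singular values.
k = 2
U, s, Vt = np.linalg.svd(R_norm, full_matrices=False)
R_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# R_k is the closest rank-k matrix to R_norm in Frobenius norm; the
# truncation error is exactly the norm of the discarded singular values.
err = np.linalg.norm(R_norm - R_k, "fro")
assert np.isclose(err, np.sqrt((s[k:] ** 2).sum()))
```

The closing assertion checks the Frobenius-optimality property the text relies on: the error of the best rank-k approximation equals the energy in the dropped singular values.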
We factor the matrix Rnorm and obtain a low-rank approximation by applying the following steps described in (Deerwester et al. 1990):

· factor Rnorm using SVD to obtain U, S and V
· reduce the matrix S to dimension k
· compute the square root of the reduced matrix Sk, to obtain Sk^1/2
· compute two resultant matrices: UkSk^1/2 and Sk^1/2Vk'

These resultant matrices can now be used to compute the recommendation score for any customer c and product p. Recall that the dimension of UkSk^1/2 is m × k and the dimension of Sk^1/2Vk' is k × n. To compute the prediction we simply calculate the dot product of the c-th row of UkSk^1/2 and the p-th column of Sk^1/2Vk', and add the customer average back, using the following:

    CP_pred = C̄ + UkSk^1/2(c) · Sk^1/2Vk'(p)

where C̄ is the customer average. Note that even though the Rnorm matrix is dense, the special structure of the matrix NPR allows us to use sparse SVD algorithms (e.g., Lanczos) whose complexity is almost linear in the number of non-zeros in the original matrix R.

3.1.2 Recommendation Generation

In our second experiment, we look into the prospects of using the low-dimensional space as a basis for neighborhood formation; using the neighbors' opinions on the products they purchased, we recommend a list of N products for a given customer. For this purpose we consider the customer preference data as binary by treating each non-zero entry of the customer-product matrix as 1. This means that we are only interested in whether a customer consumed a particular product, not in how much he/she liked that product.

Neighborhood formation in the reduced space: The fact that the reduced-dimensional representation of the original space is less sparse than its high-dimensional counterpart led us to form the neighborhood in that space. As before, we started with the original customer-product matrix A, and then used SVD to produce three decomposed matrices U, S, and V. We then reduced S by retaining only the k largest singular values and obtained Sk.
Accordingly, we performed dimensionality reduction to obtain Uk and Vk. As in the previous method, we finally computed the matrix product UkSk^1/2. This m × k matrix is the k-dimensional representation of the m customers. We then used vector similarity (cosine similarity) to form the neighborhood in that reduced space.

Top-N recommendation generation: Once the neighborhood is formed, we concentrate on the neighbors of a given customer and analyze the products they purchased to recommend the N products the target customer is most likely to purchase. After computing the neighborhood for a given customer C, we scan through the purchase record of each of the k neighbors and perform a frequency count on the products they purchased. The product list is then sorted, and the N most frequently purchased items are returned as recommendations for the target customer.
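The reduced-space neighborhood formation and frequency-count top-N steps can be sketched as follows. This is a minimal numpy reconstruction, not the authors' implementation; the 5 × 5 binary matrix, k = 2, the neighborhood size, and the exclusion of products the target already purchased are all illustrative assumptions:

```python
import numpy as np

# Toy binary customer-product purchase matrix (1 = purchased).
A = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
# k-dimensional representation of the m customers: Uk * Sk^1/2.
cust_repr = U[:, :k] * np.sqrt(s[:k])

def top_n(target, n_neighbors=2, N=2):
    """Recommend N products from the purchase frequencies of the
    target's nearest neighbors in the reduced space."""
    # Cosine similarity between the target and every customer.
    norms = np.linalg.norm(cust_repr, axis=1)
    sims = cust_repr @ cust_repr[target] / (norms * norms[target])
    sims[target] = -np.inf  # exclude the customer itself
    neighbors = np.argsort(sims)[::-1][:n_neighbors]
    # Frequency count over the neighbors' purchases; skipping products
    # the target already bought is an addition beyond the text.
    freq = A[neighbors].sum(axis=0)
    freq[A[target] > 0] = -np.inf
    return list(np.argsort(freq)[::-1][:N])

recs = top_n(0)
```

For customer 0, who has already purchased products 0, 1, and 4, the recommendations are drawn from the remaining products according to how often the two nearest reduced-space neighbors purchased them.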