Introduction: Problem Definition

Two Stages of Hash Function Learning
- Projection stage (dimension reduction): given a point x, each projected dimension i is associated with a real-valued projection function fi(x) (e.g., fi(x) = wi^T x).
- Quantization stage: turn the real-valued projections into binary codes.
- This quantization stage is the essential difference between metric learning and learning to hash.

Li (http://cs.nju.edu.cn/lwj), Learning to Hash, CS, NJU
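The two stages can be sketched as follows, assuming linear projections fi(x) = wi^T x and sign-based quantization; the random projection matrix W and the data here are purely illustrative:

```python
import numpy as np

def project(X, W):
    # Projection stage: column w_i of W gives f_i(x) = w_i^T x
    return X @ W

def quantize(Z):
    # Quantization stage: threshold real values into binary codes
    return (Z > 0).astype(np.uint8)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 16))   # 5 points, 16-dim features
W = rng.standard_normal((16, 8))   # 8 hash bits
codes = quantize(project(X, W))
print(codes.shape)                 # (5, 8): one 8-bit code per point
```

How W is chosen (randomly, or learned from data) is exactly what distinguishes the method families on the following slides.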
Introduction: Existing Methods — Data-Independent Methods

The hashing function family is defined independently of the training dataset:
- Locality-sensitive hashing (LSH) (Gionis et al., 1999; Andoni and Indyk, 2008) and its extensions (Datar et al., 2004; Kulis and Grauman, 2009; Kulis et al., 2009).
- Shift-invariant kernel hashing (SIKH) (Raginsky and Lazebnik, 2009).
- Hashing functions: random projections.
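A minimal sketch of the data-independent idea, using sign-random-projection LSH: the projection matrix is drawn once at random, with no reference to any training set, yet near neighbors still collide on most bits (the dimensions and test points below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d, b = 32, 64                        # input dimension, number of hash bits

# Data-independent: W is drawn once, without looking at any training data
W = rng.standard_normal((d, b))

def lsh(x):
    # Sign-random-projection LSH: h_i(x) = sign(w_i^T x)
    return (x @ W > 0).astype(np.uint8)

def hamming(a, c):
    return int(np.sum(a != c))

x = rng.standard_normal(d)
y = x + 0.1 * rng.standard_normal(d)  # a near neighbor of x
z = rng.standard_normal(d)            # an unrelated point

print(hamming(lsh(x), lsh(y)), hamming(lsh(x), lsh(z)))
```

For this family, the probability that two points share a bit grows as the angle between them shrinks, so the near pair (x, y) should differ on far fewer bits than the unrelated pair (x, z).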
Introduction: Existing Methods — Data-Dependent Methods

Hashing functions are learned from a given training dataset, yielding relatively short codes.
Seminal papers: (Salakhutdinov and Hinton, 2007, 2009; Torralba et al., 2008; Weiss et al., 2008)

Two categories:
- Unimodal
  - Supervised methods, given the labels yi or triplets (xi, xj, xk)
  - Unsupervised methods
- Multimodal
  - Supervised methods
  - Unsupervised methods
Introduction: Existing Methods — (Unimodal) Unsupervised Methods

No labels to denote the categories of the training points.
- PCAH: principal component analysis hashing.
- SH (Weiss et al., 2008): eigenfunctions computed from the data similarity graph.
- ITQ (Gong and Lazebnik, 2011): an orthogonal rotation matrix refines the initial projection matrix learned by PCA.
- AGH (Liu et al., 2011): graph-based hashing.
- IsoHash (Kong and Li, 2012b): projected dimensions with isotropic variances.
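As a concrete instance of the unsupervised, data-dependent family, PCAH can be sketched in a few lines: learn the top principal directions from the training data and binarize the projections. This is a minimal illustration on synthetic data, not a reference implementation of any of the cited papers:

```python
import numpy as np

def train_pcah(X, n_bits):
    # Data-dependent: projection directions are learned from X,
    # unlike LSH, which draws them at random.
    mean = X.mean(axis=0)
    Xc = X - mean
    # Top-n_bits principal directions via SVD of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_bits].T
    return mean, W

def pcah_encode(X, mean, W):
    # Quantize the PCA projections to binary codes
    return ((X - mean) @ W > 0).astype(np.uint8)

rng = np.random.default_rng(2)
X_train = rng.standard_normal((200, 32))
mean, W = train_pcah(X_train, n_bits=8)
codes = pcah_encode(X_train, mean, W)
print(codes.shape)  # (200, 8)
```

ITQ starts from exactly this PCA projection and then learns an orthogonal rotation of W to reduce the quantization error of the final sign step.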
Introduction: Existing Methods — (Unimodal) Supervised (Semi-Supervised) Methods

Methods using class labels or pairwise constraints:
- SSH: Semi-supervised hashing (Wang et al., 2010a,b) exploits both labeled and unlabeled data for hash function learning.
- MLH: Minimal loss hashing (Norouzi and Fleet, 2011), based on the latent structural SVM framework.
- KSH: Kernel-based supervised hashing (Liu et al., 2012).
- LDAHash: Linear discriminant analysis based hashing (Strecha et al., 2012).
- LFH: Supervised hashing with latent factor models (Zhang et al., 2014).

Triplet-based methods:
- Hamming distance metric learning (HDML) (Norouzi et al., 2012).
- Column generation based hashing (CGHash) (Li et al., 2013).
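The triplet-based methods above share a common constraint: for a triplet (xi, xj, xk) where xj is more similar to xi than xk is, the Hamming distance between the codes of xi and xj should be smaller than that between xi and xk. A minimal sketch of such a triplet hinge loss on binary codes follows; the margin value and exact loss form are illustrative, not the precise HDML or CGHash objective:

```python
import numpy as np

def hamming(a, b):
    return int(np.sum(a != b))

def triplet_loss(h_i, h_j, h_k, margin=1):
    # Hinge loss: zero unless d_H(h_i, h_j) + margin > d_H(h_i, h_k),
    # i.e. penalize triplets whose ordering constraint is violated.
    return max(0, hamming(h_i, h_j) + margin - hamming(h_i, h_k))

h_i = np.array([0, 1, 1, 0], dtype=np.uint8)
h_j = np.array([0, 1, 1, 1], dtype=np.uint8)  # close to h_i: d_H = 1
h_k = np.array([1, 0, 1, 1], dtype=np.uint8)  # far from h_i:  d_H = 3
print(triplet_loss(h_i, h_j, h_k))  # 0: the ranking constraint holds
```

In the actual methods, this non-differentiable objective over binary codes is handled with relaxations or structured-prediction machinery rather than optimized directly.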