DHTs in Context

Example: the layered architecture of the CFS distributed file system, top to bottom. Each layer invokes the interface (shown after the dash) exported by the layer below it:
⚫ User Application — store_file / load_file
⚫ CFS (File System): retrieve and store files, map files to blocks — store_block / load_block
⚫ DHash (Reliable Block Storage): replication, caching — lookup
⚫ Chord (DHT): lookup, routing — send / receive
⚫ TCP/IP (Transport): communication
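To make the layering concrete, here is a minimal runnable Python sketch of the stack above. The class and method names (ChordDHT, DHash, CFS, store_block, …) mirror the slide's labels but are illustrative stand-ins, not the real CFS/DHash/Chord APIs; the TCP/IP transport layer is elided and replaced by in-process dictionaries.

```python
import hashlib

def sha1_id(data: bytes) -> int:
    """Hash arbitrary data into the DHT's identifier space."""
    return int.from_bytes(hashlib.sha1(data).digest(), "big")

class ChordDHT:
    """Lookup/routing layer: maps a key ID to the responsible node."""
    def __init__(self, addresses):
        # Node ID = SHA-1(IP address), as in Chord.
        self.node_ids = sorted(sha1_id(a.encode()) for a in addresses)

    def lookup(self, key_id: int) -> int:
        # First node whose ID is equal to or follows the key ID,
        # wrapping around the ring.
        for nid in self.node_ids:
            if nid >= key_id:
                return nid
        return self.node_ids[0]

class DHash:
    """Reliable block storage on top of lookup (replication omitted)."""
    def __init__(self, dht: ChordDHT):
        self.dht = dht
        self.blocks = {}  # node_id -> {block_id: data}

    def store_block(self, data: bytes) -> int:
        block_id = sha1_id(data)
        node = self.dht.lookup(block_id)
        self.blocks.setdefault(node, {})[block_id] = data
        return block_id

    def load_block(self, block_id: int) -> bytes:
        node = self.dht.lookup(block_id)
        return self.blocks[node][block_id]

class CFS:
    """File system layer: maps files to fixed-size blocks."""
    def __init__(self, dhash: DHash, block_size: int = 8):
        self.dhash = dhash
        self.block_size = block_size
        self.files = {}  # filename -> list of block IDs

    def store_file(self, name: str, data: bytes) -> None:
        self.files[name] = [
            self.dhash.store_block(data[i:i + self.block_size])
            for i in range(0, len(data), self.block_size)
        ]

    def load_file(self, name: str) -> bytes:
        return b"".join(self.dhash.load_block(b) for b in self.files[name])

fs = CFS(DHash(ChordDHT(["10.0.0.1", "10.0.0.2", "10.0.0.3"])))
fs.store_file("hello.txt", b"hello, dht world")
assert fs.load_file("hello.txt") == b"hello, dht world"
```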
DHTs Support Many Applications
⚫ File sharing [CFS, OceanStore, PAST, …]
⚫ Web cache [Squirrel, …]
⚫ Censor-resistant stores [Eternity, FreeNet, …]
⚫ Application-layer multicast [Narada, …]
⚫ Event notification [Scribe]
⚫ Naming systems [ChordDNS, INS, …]
⚫ Query and indexing [Kademlia, …]
⚫ Communication primitives [I3, …]
⚫ Backup store [HiveNet]
⚫ Web archive [Herodotus]
DHT Case Studies
⚫ Case studies
  ⚫ Chord
  ⚫ Pastry
  ⚫ TOPLUS
⚫ Questions
  ⚫ How is the hash space divided evenly among nodes?
  ⚫ How do we locate a node?
  ⚫ How do we maintain routing tables?
  ⚫ How do we cope with (rapid) changes in membership?
Chord (MIT)
⚫ Circular m-bit ID space for both keys and nodes
⚫ Node ID = SHA-1(IP address)
⚫ Key ID = SHA-1(key)
⚫ A key is mapped to the first node whose ID is equal to or follows the key ID
⚫ Each node is responsible for O(K/N) keys
⚫ O(K/N) keys move when a node joins or leaves
[Figure: Chord ring with m=6 (IDs 0 to 2^m−1); nodes N1, N8, N14, N21, N32, N38, N42, N48, N51, N56; keys K10, K24, K30, K38, K54, each stored on its successor node]
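The mapping rule is consistent hashing, sketched below in Python. The helper names (chord_id, successor) are illustrative, not Chord's actual API, and the IDs are truncated to m = 6 bits to match the figure; real Chord keeps the full 160-bit SHA-1 output.

```python
import hashlib
from bisect import bisect_left

m = 6  # identifier space is 0 .. 2^m - 1, as in the figure

def chord_id(value: str) -> int:
    """SHA-1 hash truncated to the m-bit ring (real Chord keeps all 160 bits)."""
    return int.from_bytes(hashlib.sha1(value.encode()).digest(), "big") % (2 ** m)

def successor(node_ids: list, key_id: int) -> int:
    """First node whose ID is equal to or follows the key ID, wrapping around."""
    i = bisect_left(node_ids, key_id)
    return node_ids[i % len(node_ids)]

# Node ID = SHA-1(IP address); key ID = SHA-1(key).
nodes = sorted(chord_id(ip) for ip in ("10.0.0.1", "10.0.0.2", "10.0.0.3"))
key = chord_id("my-file")
print(f"key {key} -> node {successor(nodes, key)}")

# When a node joins, only the keys between its predecessor and itself
# move to it: O(K/N) keys on average.
nodes = sorted(nodes + [chord_id("10.0.0.4")])
print(f"after join: key {key} -> node {successor(nodes, key)}")
```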
Chord State and Lookup (1)
⚫ Basic Chord: each node knows only 2 other nodes on the ring
  ⚫ Successor
  ⚫ Predecessor (for ring management)
⚫ Lookup is achieved by forwarding requests around the ring through successor pointers
⚫ Requires O(N) hops
[Figure: Chord ring with m=6; lookup(K54) issued at N8 is forwarded node by node along successor pointers until it reaches N56, the successor of K54]
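A minimal sketch of this successor-pointer walk, assuming a toy Node class with only id and successor fields (illustrative, not Chord's RPC interface): the request is forwarded until the key falls between the current node and its successor, visiting O(N) nodes in the worst case.

```python
class Node:
    def __init__(self, node_id: int):
        self.id = node_id
        self.successor = None  # set when the ring is built

def in_half_open_interval(x: int, a: int, b: int) -> bool:
    """True if x lies in (a, b] on the circular ID space."""
    return (a < x <= b) if a < b else (x > a or x <= b)

def lookup(start: Node, key_id: int) -> Node:
    node, hops = start, 0
    # Forward along successor pointers until the key falls between
    # the current node and its successor.
    while not in_half_open_interval(key_id, node.id, node.successor.id):
        node = node.successor
        hops += 1
    print(f"resolved in {hops} hops")
    return node.successor

# Build the ring from the figure: N1, N8, ..., N56.
ids = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]
ring = [Node(i) for i in ids]
for a, b in zip(ring, ring[1:] + ring[:1]):
    a.successor = b

print(lookup(ring[1], 54).id)  # lookup(K54) starting at N8 -> 56
```

Starting at N8, this walk traverses N14, N21, N32, N38, N42, N48, and N51 before resolving to N56: with only successor pointers, every lookup may touch a constant fraction of the ring, which motivates the finger tables introduced next.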