Data Servers/Storage Systems (Cont.)
▪ Prefetching
• Prefetch items that may be used soon
▪ Data caching
• Cache coherence
▪ Lock caching
• Locks can be cached by client across transactions
• Locks can be called back by the server
▪ Adaptive lock granularity
• Lock granularity escalation
▪ Switch from finer granularity (e.g., tuple) lock to coarser
• Lock granularity de-escalation
▪ Start with coarse granularity to reduce overheads, switch to finer granularity in case of more concurrency conflicts at server
▪ Details in book
Database System Concepts - 7th Edition 20.12 ©Silberschatz, Korth and Sudarshan
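The escalation/de-escalation idea above can be sketched in a few lines. This is an illustrative toy (the class, threshold, and method names are our own, not from the book): a manager counts fine-granularity (tuple) locks per table, escalates to a single table lock past a threshold, and de-escalates back to tuple locks when the server reports a concurrency conflict.

```python
from enum import Enum

class Granularity(Enum):
    TUPLE = "tuple"
    TABLE = "table"

class AdaptiveLockManager:
    """Toy sketch of adaptive lock granularity (assumed design, not
    the book's algorithm): escalate past a threshold, de-escalate on
    conflict."""

    def __init__(self, escalation_threshold=100):
        self.escalation_threshold = escalation_threshold
        self.tuple_locks = {}      # table -> set of locked tuple ids
        self.table_locks = set()   # tables locked at coarse granularity

    def lock_tuple(self, table, tuple_id):
        if table in self.table_locks:
            return Granularity.TABLE       # covered by the coarse lock
        held = self.tuple_locks.setdefault(table, set())
        held.add(tuple_id)
        if len(held) > self.escalation_threshold:
            # Escalation: many fine-granularity locks -> one coarse lock
            self.table_locks.add(table)
            self.tuple_locks.pop(table)
            return Granularity.TABLE
        return Granularity.TUPLE

    def report_conflict(self, table):
        # De-escalation: a conflict at the server means the coarse lock
        # is too restrictive; fall back to fine-granularity locking
        if table in self.table_locks:
            self.table_locks.discard(table)
            self.tuple_locks.setdefault(table, set())
```

A real server would also track lock modes and waiting transactions; the sketch only shows the granularity switch itself.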
Data Servers (Cont.)
▪ Data Caching
• Data can be cached at client even in between transactions
• But check that data is up-to-date before it is used (cache coherency)
• Check can be done when requesting lock on data item
▪ Lock Caching
• Locks can be retained by client system even in between transactions
• Transactions can acquire cached locks locally, without contacting server
• Server calls back locks from clients when it receives a conflicting lock request. Client returns lock once no local transaction is using it.
▪ Similar to lock callback on prefetch, but across transactions
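The lock-caching protocol above can be sketched as follows. This is a minimal illustration (class and method names are our assumptions): a client reuses a cached lock without a server round trip, and the server calls the lock back from its current holder when a conflicting request arrives.

```python
class LockServer:
    """Toy sketch of server-side lock callback (assumed design)."""

    def __init__(self):
        self.holders = {}  # item -> client currently caching the lock

    def acquire(self, client, item):
        holder = self.holders.get(item)
        if holder is not None and holder is not client:
            holder.callback(item)   # ask the current holder to return it
        self.holders[item] = client

class Client:
    """Client that caches locks across transactions."""

    def __init__(self, name, server):
        self.name = name
        self.server = server
        self.cached = set()

    def lock(self, item):
        if item in self.cached:
            return "local"          # cached lock: no server contact needed
        self.server.acquire(self, item)
        self.cached.add(item)
        return "server"

    def callback(self, item):
        # A real client would wait until no local transaction is using
        # the item before returning the lock; here we return immediately.
        self.cached.discard(item)
```

Note how the second `lock` call on the same item is served locally, which is the whole point of caching locks across transactions.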
Parallel Systems
▪ Parallel database systems consist of multiple processors and multiple disks connected by a fast interconnection network.
▪ Motivation: handle workloads beyond what a single computer system can handle
▪ High-performance transaction processing
• E.g., handling user requests at web scale
▪ Decision support on very large amounts of data
• E.g., data gathered by large web sites/apps
Parallel Systems (Cont.)
▪ A coarse-grain parallel machine consists of a small number of powerful processors
▪ A massively parallel or fine-grain parallel machine utilizes thousands of smaller processors.
• Typically hosted in a data center
▪ Two main performance measures:
• throughput --- the number of tasks that can be completed in a given time interval
• response time --- the amount of time it takes to complete a single task from the time it is submitted
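The two measures are easy to conflate, so a small worked example helps. The numbers and function names below are ours, not from the slides: given (submit, finish) timestamps for completed tasks, throughput is tasks completed per unit time over an interval, while response time is averaged per task.

```python
def throughput(tasks, interval_start, interval_end):
    """Tasks finishing within [interval_start, interval_end], per unit time."""
    done = [t for t in tasks if interval_start <= t[1] <= interval_end]
    return len(done) / (interval_end - interval_start)

def mean_response_time(tasks):
    """Average of (finish - submit) over all tasks."""
    return sum(finish - submit for submit, finish in tasks) / len(tasks)

# (submit, finish) pairs in seconds -- made-up workload
tasks = [(0.0, 2.0), (1.0, 2.5), (2.0, 5.0), (4.0, 6.0)]
tp = throughput(tasks, 0.0, 5.0)   # 3 tasks finish in a 5 s window -> 0.6/s
rt = mean_response_time(tasks)     # (2.0 + 1.5 + 3.0 + 2.0) / 4 -> 2.125 s
```

A system can improve throughput (e.g., by batching) while making individual response times worse, which is why parallel systems are evaluated on both.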
Speed-Up and Scale-Up
▪ Speedup: a fixed-sized problem executing on a small system is given to a system which is N-times larger.
• Measured by:
speedup = small system elapsed time / large system elapsed time
• Speedup is linear if equation equals N.
▪ Scaleup: increase the size of both the problem and the system
• N-times larger system used to perform N-times larger job
• Measured by:
scaleup = small system small problem elapsed time / big system big problem elapsed time
• Scaleup is linear if equation equals 1.
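The two ratios above can be checked numerically. The timings below are invented for illustration; they show a system that achieves linear speedup (ratio equals N) and linear scaleup (ratio equals 1) for N = 4.

```python
def speedup(small_elapsed, large_elapsed):
    # Same problem, system N times larger: linear if result == N
    return small_elapsed / large_elapsed

def scaleup(small_sys_small_prob, big_sys_big_prob):
    # Problem and system both N times larger: linear if result == 1
    return small_sys_small_prob / big_sys_big_prob

N = 4
# Fixed-size problem: 100 s on the small system, 25 s on the 4x system
s = speedup(100.0, 25.0)    # 4.0 -> linear speedup
# 4x problem on a 4x system takes the same 100 s
u = scaleup(100.0, 100.0)   # 1.0 -> linear scaleup
```

In practice startup, interference, and skew keep real systems below these ideals, which is why linear speedup and scaleup serve as upper-bound yardsticks.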