64,19 €
incl. VAT
Free shipping*
Ready to ship in 6-10 days
  • Paperback

Product Description
The last decade has brought groundbreaking developments in transaction processing. This resurgence of an otherwise mature research area has been spurred by the diminishing cost per GB of DRAM, which allows many transaction processing workloads to become entirely memory-resident. This shift demanded a pause to fundamentally rethink the architecture of database systems. The data storage lexicon has now expanded beyond spinning disks and RAID levels to include the cache hierarchy, memory consistency models, cache coherence and write invalidation costs, NUMA regions, and coherence domains. New memory technologies promise fast non-volatile storage and expose uncharted trade-offs for transactional durability, such as exploiting byte-addressable hot and cold storage through persistent programming that promotes simpler recovery protocols. In the meantime, plateauing single-threaded processor performance has brought massive concurrency within a single node, first in the form of multi-core, and now with many-core and heterogeneous processors.
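
As an illustrative aside on the persistent-programming style mentioned above, the following is a minimal C sketch of durably publishing a single value on byte-addressable non-volatile memory. It assumes an x86 processor with CLWB support (compile with -mclwb); the helper name persist_u64 and the stand-in memory location are assumptions for illustration, not code from the book.

    #include <immintrin.h>
    #include <stdint.h>

    /* Hypothetical sketch: make a single 8-byte value durable on
     * byte-addressable persistent memory. Store it, write the cache line
     * back toward the persistence domain, then fence so later stores
     * (e.g., a commit flag) cannot be reordered ahead of it. */
    static void persist_u64(volatile uint64_t *slot, uint64_t value) {
        *slot = value;            /* ordinary store to NVM-mapped memory */
        _mm_clwb((void *)slot);   /* write the cache line back without evicting it */
        _mm_sfence();             /* order the writeback before subsequent stores */
    }

    int main(void) {
        /* In a real system this slot would live in a persistent-memory
         * mapping (e.g., a DAX-mapped file); a local variable stands in here. */
        uint64_t slot = 0;
        persist_u64(&slot, 42);
        return 0;
    }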

The exciting possibility of reshaping the storage, transaction, logging, and recovery layers of next-generation systems on emerging hardware has prompted the database research community to vigorously debate the trade-offs between specialized kernels that narrowly focus on transaction processing performance and designs that permit transactionally consistent data accesses from decision support and analytical workloads. In this book, we aim to classify and distill the new body of work on transaction processing that has surfaced in the last decade, to help researchers and practitioners navigate this intricate research subject.
About the Authors
Mohammad Sadoghi is an Assistant Professor in the Computer Science Department at the University of California, Davis. Formerly, he was an Assistant Professor at Purdue University. Prior to joining academia, he was a Research Staff Member at IBM T.J. Watson Research Center for nearly four years. He received his Ph.D. from the Computer Science Department at the University of Toronto in 2013. At UC Davis, Prof. Sadoghi leads the ExpoLab research group, which aims to pioneer a new exploratory data platform (referred to as ExpoDB), a distributed ledger that unifies secure transactional and real-time analytical processing, all centered around a democratic and decentralized computational model.

Spyros Blanas has been an Assistant Professor in the Department of Computer Science and Engineering at The Ohio State University since 2014. He received his Ph.D. at the University of Wisconsin-Madison, where he was a member of the Database Systems group and the Microsoft Jim Gray Systems Lab. Part of his Ph.D. dissertation was commercialized in Microsoft SQL Server 2014 as the Hekaton in-memory transaction processing engine. His research interest is in high-performance database systems, and his current goal is to build a data management system for high-end computing facilities.