  • Paperback

Product Description
Innovations in hardware architecture, like hyper-threading or multicore processors, mean that parallel computing resources are available for inexpensive desktop computers. In only a few years, many standard software products will be based on concepts of parallel programming implemented on such hardware, and the range of applications will be much broader than that of scientific computing, up to now the main application area for parallel computing.

Rauber and Rünger take up these recent developments in processor architecture by giving detailed descriptions of the parallel programming techniques that are necessary for developing efficient programs for multicore processors as well as for parallel cluster systems and supercomputers. Their book is structured in three main parts, covering all areas of parallel computing: the architecture of parallel systems, parallel programming models and environments, and the implementation of efficient application algorithms. The emphasis lies on parallel programming techniques needed for different architectures. For this second edition, all chapters have been carefully revised. The chapter on the architecture of parallel systems has been updated considerably, with a greater emphasis on the architecture of multicore systems and new material on the latest developments in computer architecture. Lastly, a completely new chapter on general-purpose GPUs and the corresponding programming techniques has been added.

The main goal of the book is to present parallel programming techniques that can be used in many situations for a broad range of application areas and which enable the reader to develop correct and efficient parallel programs. Many examples and exercises are provided to show how to apply the techniques. The book can be used as both a textbook for students and a reference book for professionals. The material presented has been used for courses in parallel programming at different universities for many years.
About the Authors
Thomas Rauber has been professor for parallel and distributed systems at the University of Bayreuth since 2002. His research focuses on algorithms and systems for distributed and parallel programming, on which he has published more than 80 papers in journals and conference proceedings. Gudula Rünger has been professor at the Chemnitz University of Technology since 2000. Her main research interests are parallel and distributed programming, both in theory and in applications, and she has published more than 80 conference and journal papers on these topics.
Reviews
From the book reviews:

"The book presents the current status of parallel programming. Well-organized and well-written, the textbook can be needed worldwide by computer science students that are enrolled in learning parallel programming. ... Each chapter presents in an accessible manner the complex theory behind parallel computing. The numerous figures and code fragments are very helpful. Moreover, each chapter ends with several exercises." (Dana Petcu, zbMATH, Vol. 1295, 2014)

"The authors provide an excellent introduction to the techniques needed to create and understand parallel programming. ... I recommend this book as a text for a course in parallel programming or for use by programmers learning about parallel programming. It provides a useful mix of theory and practice, with excellent introductions to pthreads and MPI, among others." (Charles Morgan, Computing Reviews, January, 2014)