Asia Pacific University Library catalogue


Shared-memory parallelism can be simple, fast, and scalable [electronic resource] / Julian Shun.

By: Shun, Julian
Material type: Text
Series: ACM books ; #15
Publication details: [New York, New York] : ACM ; [San Rafael, California] : Morgan & Claypool, c2017
Description: 1 online resource (xv, 426 pages) : illustrations, charts
ISBN: 9781970001907 (epub); 9781970001891 (pdf)
Subject(s): Parallel programming (Computer science) | Parallel computers -- Programming | Mehrkernprozessor | Multithreading | Nebenläufigkeit | Parallelverarbeitung | Computerarchitektur
DDC classification: 005.275
LOC classification: QA76.642 | .S48 2017eb
Online resources: ACM Digital Library (available in ACM Digital Library)
Contents:
Preliminaries and notation -- Internally deterministic parallelism : techniques and algorithms -- Deterministic parallelism in sequential iterative algorithms -- A deterministic phase-concurrent parallel hash table -- Priority updates : a contention-reducing primitive for deterministic programming -- Ligra : a lightweight graph processing framework for shared memory -- Ligra++ : adding compression to Ligra -- Linear-work parallel graph connectivity -- Parallel and cache-oblivious triangle computations -- Parallel Cartesian tree and suffix tree construction -- Parallel computation of longest common prefixes -- Parallel Lempel-Ziv factorization -- Parallel wavelet tree construction -- Conclusion and future work.
Summary: Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools to enable them to develop solutions easily, and at the same time emphasize the theoretical and practical aspects of algorithm design to allow the solutions developed to run efficiently under many different settings. This thesis addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The thesis provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results developed in this thesis serve to ease the transition into the multicore era.
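
Note: the contents above list "Priority updates : a contention-reducing primitive for deterministic programming." As a rough illustration only (this is not code from the book; the names priority_update_min, cell, and best are invented for the example), a write-with-min priority update can be sketched in C++ as a compare-and-swap loop:

    // Illustrative sketch of a "write-with-min" priority update: a shared cell
    // is overwritten only when the new value has higher priority (here, smaller
    // values win), retried via compare-and-swap.
    #include <atomic>
    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <thread>
    #include <vector>

    void priority_update_min(std::atomic<uint64_t>& cell, uint64_t val) {
        uint64_t cur = cell.load(std::memory_order_relaxed);
        // Retry until val is installed or the cell already holds an
        // equal-or-smaller value; compare_exchange_weak refreshes cur on failure.
        while (val < cur &&
               !cell.compare_exchange_weak(cur, val, std::memory_order_relaxed)) {
        }
    }

    int main() {
        std::atomic<uint64_t> best{UINT64_MAX};
        std::vector<std::thread> workers;
        // Several threads race to publish their value; only the minimum survives.
        for (uint64_t v : {42u, 7u, 19u, 3u, 88u})
            workers.emplace_back(priority_update_min, std::ref(best), v);
        for (auto& t : workers) t.join();
        std::cout << "minimum published value: " << best.load() << "\n";
        return 0;
    }

A thread whose value loses stops retrying as soon as it observes an equal-or-higher-priority value, so many concurrent writers to one location cause less contention than unconditional atomic writes, which is the contention-reducing behaviour the chapter title refers to.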
Holdings:
Item type: General Circulation
Current library: APU Library Online Database
Collection: E-Book
Call number: QA76.642 .S48 2017eb
Copy number: 1
Status: Available

"This is a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award."--Back cover.

Includes bibliographical references (pages 379-412) and index.

System requirements: Internet connectivity; World Wide Web browser; Adobe Digital Editions.

Mode of access: World Wide Web.
