000 03570nam a22004577a 4500
003 APU
005 20221028170038.0
008 210803t20172017nyuad b 001 0 eng d
015 _aGBB7B6640
_2bnb
020 _a9781970001907 (epub)
020 _a9781970001891 (pdf)
035 _a(OCoLC)ocn973138247
040 _aYDX
_beng
_cAPU
_dSF
042 _alccopycat
050 0 0 _aQA76.642
_b.S48 2017eb
082 0 4 _a005.275
_223
100 1 _aShun, Julian,
_947389
245 1 0 _aShared-memory parallelism can be simple, fast, and scalable
_h[electronic resource] /
_cJulian Shun.
260 _a[New York, New York] :
_bACM ;
_a[San Rafael, California] :
_bMorgan & Claypool,
_cc2017.
300 _a1 online resource (xv, 426 pages) :
_billustrations, charts.
490 1 _aACM books ;
_v#15
500 _a"This is a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award."--Back cover.
504 _aIncludes bibliographical references (pages 379-412) and index.
505 0 _aPreliminaries and notation -- Internally deterministic parallelism : techniques and algorithms -- Deterministic parallelism in sequential iterative algorithms -- A deterministic phase-concurrent parallel hash table -- Priority updates : a contention-reducing primitive for deterministic programming -- Ligra : a lightweight graph processing framework for shared memory -- Ligra++ : adding compression to Ligra -- Linear-work parallel graph connectivity -- Parallel and cache-oblivious triangle computations -- Parallel cartesian tree and suffix tree construction -- Parallel computation of longest common prefixes -- Parallel Lempel-Ziv factorization -- Parallel wavelet tree construction -- Conclusion and future work.
520 _aParallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools to enable them to develop solutions easily, and at the same time emphasize the theoretical and practical aspects of algorithm design to allow the solutions developed to run efficiently under many different settings. This thesis addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The thesis provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results developed in this thesis serve to ease the transition into the multicore era.--
538 _aSystem requirements: Internet connectivity; World Wide Web browser; Adobe Digital Editions.
538 _aMode of access: World Wide Web.
650 0 _aParallel programming (Computer science)
650 0 _aParallel computers
_xProgramming.
_911675
650 7 _aParallel computers
_xProgramming.
_2fast
_911675
650 7 _aParallel programming (Computer science)
_2fast
_947390
650 7 _aMehrkernprozessor
_2gnd
_947391
650 7 _aMultithreading
_2gnd
_947392
650 7 _aNebenläufigkeit
_2gnd
_947393
650 7 _aParallelverarbeitung
_2gnd
_947394
650 7 _aComputerarchitektur
_2gnd
_947395
830 0 _aACM books ;
_v#15.
_947379
856 4 0 _3ACM Digital Library
_uhttps://dl-acm-org.ezproxy.apiit.edu.my/doi/book/10.1145/3018787
_zAvailable in ACM Digital Library.
942 _2lcc
_cE-Book
999 _c383482
_d383482