A Parallel Algorithm Synthesis Procedure for High-Performance Computer Architectures

Author: Ian N. Dunn

Publisher: Springer Science & Business Media

Published: 2012-09-14

Total Pages: 114

ISBN-13: 1441986502

Despite five decades of research, parallel computing remains an exotic, frontier technology on the fringes of mainstream computing. Its much-heralded triumph over sequential computing has yet to materialize. This is in spite of the fact that the processing needs of many signal processing applications continue to eclipse the capabilities of sequential computing. The culprit is largely the software development environment. Fundamental shortcomings in the development environment of many parallel computer architectures thwart the adoption of parallel computing. Foremost, parallel computing has no unifying model to accurately predict the execution time of algorithms on parallel architectures. Cost and scarce programming resources prohibit deploying multiple algorithms and partitioning strategies in an attempt to find the fastest solution. As a consequence, algorithm design is largely an intuitive art form dominated by practitioners who specialize in a particular computer architecture. This, coupled with the fact that parallel computer architectures rarely last more than a couple of years, makes for a complex and challenging design environment. To navigate this environment, algorithm designers need a road map, a detailed procedure they can use to efficiently develop high performance, portable parallel algorithms. The focus of this book is to draw such a road map. The Parallel Algorithm Synthesis Procedure can be used to design reusable building blocks of adaptable, scalable software modules from which high performance signal processing applications can be constructed. The hallmark of the procedure is a semi-systematic process for introducing parameters to control the partitioning and scheduling of computation and communication. This facilitates the tailoring of software modules to exploit different configurations of multiple processors, multiple floating-point units, and hierarchical memories. To showcase the efficacy of this procedure, the book presents three case studies requiring various degrees of optimization for parallel execution.
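
The procedure's central idea, exposing partitioning and scheduling parameters so the same module can be retuned for different processor and memory configurations, can be illustrated with a small sketch. The Python example below is not taken from the book; it is a minimal, hypothetical illustration in which block_size and workers are the tuning knobs that control how a simple reduction is partitioned across a pool of worker processes.

    # Hypothetical sketch: tunable parameters (block_size, workers) control how
    # the work is partitioned and scheduled; the algorithm itself never changes.
    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))  # toy kernel: sum of squares

    def parallel_sum_squares(n, block_size=10_000, workers=4):
        # Partition [0, n) into contiguous blocks of at most block_size elements.
        blocks = [(lo, min(lo + block_size, n)) for lo in range(0, n, block_size)]
        with Pool(processes=workers) as pool:
            return sum(pool.map(partial_sum, blocks))

    if __name__ == "__main__":
        # Retuning for a different machine means changing parameters, not code.
        print(parallel_sum_squares(1_000_000, block_size=50_000, workers=8))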

Parallel Computing

Author: Christian Bischof

Publisher: IOS Press

Published: 2008

Total Pages: 824

ISBN-13: 158603796X

ParCo2007 marks a quarter of a century of the international conferences on parallel computing that started in Berlin in 1983. The aim of the conference is to give an overview of the developments, applications and future trends in high-performance computing for various platforms.

High Performance Computing

Author: Gary Sabot

Publisher: Addison Wesley Longman

Published: 1995

Total Pages: 280

ISBN-13:

This book shows by example how to solve complex scientific problems with programs that run on high-performance computers. Combining case studies from a variety of problem domains, it shows how to map or transform an abstract problem into concrete solutions that execute rapidly and efficiently on available high-performance hardware.

Parallel Processing and Parallel Algorithms

Author: Seyed H Roosta

Publisher: Springer Science & Business Media

Published: 2012-12-06

Total Pages: 579

ISBN-13: 1461212200

It is now possible to build powerful single-processor and multiprocessor systems and use them efficiently for data processing, which has seen an explosive expansion in many areas of computer science and engineering. One approach to meeting the performance requirements of applications has been to utilize the most powerful single-processor system available. When such a system does not provide the required performance, pipelined and parallel processing structures can be employed. The concept of parallel processing is a departure from sequential processing. In sequential computation one processor is involved and performs one operation at a time; in parallel computation several processors cooperate to solve a problem, which reduces computing time because several operations can be carried out simultaneously. Using several processors that work together on a given computation illustrates a new paradigm in computer problem solving, one that is completely different from sequential processing. From the practical point of view, this provides sufficient justification to investigate the concept of parallel processing and related issues, such as parallel algorithms. Parallel processing involves several strongly interrelated factors: parallel architectures, parallel algorithms, parallel programming languages, and performance analysis. In general, four steps are involved in performing a computational problem in parallel. The first step is to understand the nature of the computations in the specific application domain.
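
As a toy illustration of the sequential-versus-parallel distinction described above (not drawn from the book), the hypothetical Python snippet below runs the same CPU-bound function over a list of inputs first on one processor, one operation at a time, and then on several cooperating worker processes, reporting both wall-clock times.

    # Hypothetical illustration: the same workload, sequential vs. parallel.
    import time
    from concurrent.futures import ProcessPoolExecutor

    def count_primes(limit):
        # Deliberately naive, CPU-bound kernel.
        return sum(1 for n in range(2, limit)
                   if all(n % d for d in range(2, int(n ** 0.5) + 1)))

    if __name__ == "__main__":
        inputs = [200_000] * 8

        t0 = time.perf_counter()
        seq = [count_primes(n) for n in inputs]         # one operation at a time
        t1 = time.perf_counter()

        with ProcessPoolExecutor(max_workers=4) as ex:  # several processors cooperate
            par = list(ex.map(count_primes, inputs))
        t2 = time.perf_counter()

        assert seq == par
        print(f"sequential: {t1 - t0:.2f} s   parallel: {t2 - t1:.2f} s")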

Introduction to Parallel Computing

Author: Ananth Grama

Publisher: Pearson Education

Published: 2003

Total Pages: 664

ISBN-13: 9780201648652

A complete source of information on almost all aspects of parallel computing, from introduction to architectures, programming paradigms, algorithms, and programming standards. It covers traditional computer science algorithms, scientific computing algorithms, and data-intensive algorithms.

Dynamic Reconfiguration

Author: Ramachandran Vaidyanathan

Publisher: Springer Science & Business Media

Published: 2007-06-30

Total Pages: 525

ISBN-13: 0306484285

Dynamic Reconfiguration: Architectures and Algorithms offers a comprehensive treatment of dynamically reconfigurable computer architectures and the algorithms designed for them. The coverage is broad, starting from fundamental algorithmic techniques, ranging across algorithms for a wide array of problems and applications, and extending to simulations between models. The presentation employs a single reconfigurable model (the reconfigurable mesh) for most algorithms, enabling the reader to distill key ideas without the cumbersome details of a myriad of models. In addition to algorithms, the book discusses topics that provide a better understanding of dynamic reconfiguration, such as scalability and computational power, as well as more recent advances such as optical models, run-time reconfiguration (on FPGAs and related platforms), and implementing dynamic reconfiguration. The book, featuring many examples and a large set of exercises, is an excellent textbook or reference for a graduate course. It is also a useful reference for researchers and system developers in the area.

Hierarchical Scheduling in Parallel and Cluster Systems

Author: Sivarama Dandamudi

Publisher: Springer Science & Business Media

Published: 2012-12-06

Total Pages: 263

ISBN-13: 1461501334

Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high-performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory. Such systems, for example, are becoming commonplace in high-performance graphics workstations. They are called uniform memory access (UMA) multiprocessors because they provide uniform access to memory for all processors. These systems provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to a non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.
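
The shared-memory versus message-passing distinction drawn above can be made concrete with a small, hypothetical Python sketch (not taken from the book): the first worker updates a counter that lives in an address space shared with the parent, while the second has no shared state at all and must receive and return its data as explicit messages.

    # Hypothetical sketch: shared address space vs. explicit message passing.
    from multiprocessing import Process, Queue, Value

    def shared_memory_worker(counter):
        with counter.get_lock():        # every process sees the same memory word
            counter.value += 1

    def message_passing_worker(inbox, outbox):
        x = inbox.get()                 # no shared memory: data arrives as a message
        outbox.put(x + 1)

    if __name__ == "__main__":
        # UMA/NUMA style: a single address space visible to the worker.
        counter = Value("i", 0)
        p = Process(target=shared_memory_worker, args=(counter,))
        p.start()
        p.join()
        print("shared counter:", counter.value)

        # Distributed-memory style: communication only through messages.
        inbox, outbox = Queue(), Queue()
        q = Process(target=message_passing_worker, args=(inbox, outbox))
        q.start()
        inbox.put(41)
        print("message result:", outbox.get())
        q.join()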

Soft Real-Time Systems: Predictability vs. Efficiency

Author: Giorgio C Buttazzo

Publisher: Springer Science & Business Media

Published: 2006-07-02

Total Pages: 281

ISBN-13: 0387281479

Hard real-time systems are very predictable, but not sufficiently flexible to adapt to dynamic situations. They are built under pessimistic assumptions to cope with worst-case scenarios, so they often waste resources. Soft real-time systems are built to reduce resource consumption, tolerate overloads and adapt to system changes. They are also more suited to novel applications of real-time technology, such as multimedia systems, monitoring apparatuses, telecommunication networks, mobile robotics, virtual reality, and interactive computer games. This unique monograph provides concrete methods for building flexible, predictable soft real-time systems, in order to optimize resources and reduce costs. It is an invaluable reference for developers, as well as researchers and students in Computer Science.

Nearest Neighbor Search:

Author: Apostolos N. Papadopoulos

Publisher: Springer Science & Business Media

Published: 2006-11-22

Total Pages: 179

ISBN-13: 0387275444

Modern applications are both data- and computationally intensive and require the storage and manipulation of voluminous traditional (alphanumeric) and nontraditional data sets (images, text, geometric objects, time-series). Examples of such emerging application domains are: Geographical Information Systems (GIS), Multimedia Information Systems, CAD/CAM, Time-Series Analysis, Medical Information Systems, On-Line Analytical Processing (OLAP), and Data Mining. These applications pose diverse requirements with respect to the information and the operations that need to be supported. From the database perspective, new techniques and tools therefore need to be developed towards increased processing efficiency. This monograph explores the way spatial database management systems aim at supporting queries that involve the space characteristics of the underlying data, and discusses query processing techniques for nearest neighbor queries. It provides both basic concepts and state-of-the-art results in spatial databases and parallel processing research, and studies numerous applications of nearest neighbor queries.
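
To ground the term "nearest neighbor query" used above, here is a minimal, hypothetical brute-force version in Python; real spatial database systems answer such queries with index structures such as R-trees rather than a linear scan, but the semantics are the same: return the stored point closest to the query point.

    # Hypothetical brute-force nearest neighbor query over 2-D points.
    import math

    def nearest_neighbor(points, query):
        # Linear scan; an R-tree or similar spatial index would avoid visiting every point.
        return min(points, key=lambda p: math.dist(p, query))

    if __name__ == "__main__":
        data = [(1.0, 2.0), (4.0, 4.0), (0.5, 0.1), (3.0, -1.0)]
        print(nearest_neighbor(data, (3.5, 3.0)))  # -> (4.0, 4.0)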

Handbook of Parallel Computing

Author: Sanguthevar Rajasekaran

Publisher: CRC Press

Published: 2007-12-20

Total Pages: 1224

ISBN-13: 1420011294

The ability of parallel computing to process large data sets and handle time-consuming operations has resulted in unprecedented advances in biological and scientific computing, modeling, and simulations. Exploring these recent developments, the Handbook of Parallel Computing: Models, Algorithms, and Applications provides comprehensive coverage on a