Computer Solution of Large Linear Systems

Author: Gerard Meurant

Publisher: Elsevier

Published: 1999-06-16

Total Pages: 777

ISBN-10: 0080529518

This book deals with numerical methods for solving large sparse linear systems of equations, particularly those arising from the discretization of partial differential equations. It covers both direct and iterative methods. The direct methods considered are variants of Gaussian elimination and fast solvers for separable partial differential equations in rectangular domains. The book reviews the classical iterative methods, such as the Jacobi, Gauss-Seidel, and alternating direction algorithms. Particular emphasis is put on the conjugate gradient method, as well as conjugate gradient-like methods for nonsymmetric problems. The most efficient preconditioners used to speed up convergence are studied. A chapter is devoted to the multigrid method, and the book ends with domain decomposition algorithms that are well suited for solving linear systems on parallel computers.
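
To make the emphasis on conjugate gradient methods and preconditioning concrete, here is a minimal Jacobi-preconditioned conjugate gradient sketch in Python/NumPy. The function name, test problem, and tolerance are illustrative choices, not code or notation taken from the book.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=500):
    """Jacobi-preconditioned conjugate gradient for symmetric positive definite A (illustrative sketch)."""
    x = np.zeros_like(b)
    M_inv = 1.0 / np.diag(A)        # Jacobi preconditioner: inverse of diag(A)
    r = b - A @ x                   # initial residual
    z = M_inv * r                   # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example: 1-D Poisson problem (tridiagonal, symmetric positive definite)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = jacobi_pcg(A, np.ones(n))
```

Swapping the diagonal preconditioner for a stronger one, such as an incomplete factorization, is the kind of refinement the book's treatment of preconditioners addresses.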

Iterative Solution of Large Linear Systems

Author: David M. Young

Publisher: Elsevier

Published: 2014-05-10

Total Pages: 599

ISBN-10: 1483274136

Iterative Solution of Large Linear Systems describes the systematic development of a substantial portion of the theory of iterative methods for solving large linear systems, with emphasis on practical techniques. The focal point of the book is an analysis of the convergence properties of the successive overrelaxation (SOR) method as applied to a linear system whose matrix is "consistently ordered". Comprised of 18 chapters, this volume begins by showing how the solution of a certain partial differential equation by finite difference methods leads to a large linear system with a sparse matrix. The next chapter reviews matrix theory and the properties of matrices, stating several theorems without proof. A number of iterative methods, including the SOR method, are then considered. Convergence theorems are also given for various iterative methods under certain assumptions on the matrix A of the system. Subsequent chapters deal with the eigenvalues of the SOR iteration matrix for consistently ordered matrices; the optimum relaxation factor; nonstationary linear iterative methods; and semi-iterative methods. This book will be of interest to students and practitioners in the fields of computer science and applied mathematics.
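
As a small companion to the SOR analysis that the book centers on, the sketch below implements a basic SOR sweep in Python/NumPy. The relaxation factor omega and the tridiagonal test matrix are illustrative assumptions, not values from the text.

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-8, max_sweeps=5000):
    """Successive overrelaxation (SOR) for Ax = b (illustrative sketch)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_sweeps):
        for i in range(n):
            # Use already-updated entries x[:i] and not-yet-updated entries x[i+1:]
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            break
    return x

# Example: 1-D Poisson matrix, a consistently ordered case where the SOR theory applies
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = sor(A, np.ones(n))
```

Choosing omega is the crux: the book's analysis of the optimum relaxation factor is about selecting this parameter from the spectral properties of the iteration matrix rather than by trial and error.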

Iterative Methods and Preconditioning for Large and Sparse Linear Systems with Applications

Author: Daniele Bertaccini

Publisher: CRC Press

Published: 2018-02-19

Total Pages: 375

ISBN-10: 1498764177

This book describes, in a basic way, the most useful and effective iterative solvers and appropriate preconditioning techniques for some of the most important classes of large and sparse linear systems. The solution of large and sparse linear systems is the most time-consuming part of most scientific computing simulations. Indeed, mathematical models become more and more accurate by including a greater volume of data, but this requires the solution of larger and harder algebraic systems. In recent years, research has therefore focused on using iterative solvers to solve efficiently the large sparse and/or structured systems generated by the discretization of numerical models.
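
The workflow described here, pairing an iterative solver with a preconditioner for a large sparse system, might look roughly like the sketch below. It assumes SciPy's sparse solvers (spilu for an incomplete LU preconditioner and gmres for the iteration), an assumption about the reader's toolkit rather than software discussed in the book.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sparse 2-D Poisson-like test matrix (illustrative, not a problem from the book)
n = 50
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))   # 2500 x 2500, sparse
b = np.ones(A.shape[0])

# Incomplete LU factorization used as a preconditioner
ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

# Preconditioned GMRES
x, info = spla.gmres(A, b, M=M)
print("converged" if info == 0 else f"gmres info = {info}")
```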

Introduction to Parallel and Vector Solution of Linear Systems

Author: James M. Ortega

Publisher: Springer Science & Business Media

Published: 1988-04-30

Total Pages: 330

ISBN-13: 9780306428623

Although the origins of parallel computing go back to the last century, it was only in the 1970s that parallel and vector computers became available to the scientific community. The first of these machines, the 64-processor Illiac IV and the vector computers built by Texas Instruments, Control Data Corporation, and then Cray Research, had a somewhat limited impact. They were few in number and available mostly to workers in a few government laboratories. By now, however, the trickle has become a flood. There are over 200 large-scale vector computers now installed, not only in government laboratories but also in universities and in an increasing diversity of industries. Moreover, the National Science Foundation's Supercomputing Centers have made large vector computers widely available to the academic community. In addition, smaller, very cost-effective vector computers are being manufactured by a number of companies. Parallelism in computers has also progressed rapidly. The largest supercomputers now consist of several vector processors working in parallel. Although the number of processors in such machines is still relatively small (up to 8), it is expected that an increasing number of processors will be added in the near future (to a total of 16 or 32). Moreover, there are a myriad of research projects to build machines with hundreds, thousands, or even more processors. Indeed, several companies are now selling parallel machines, some with as many as hundreds, or even tens of thousands, of processors.

Introduction to Parallel and Vector Solution of Linear Systems

Author: James M. Ortega

Publisher: Springer Science & Business Media

Published: 2013-06-29

Total Pages: 309

ISBN-10: 1489921125

Templates for the Solution of Linear Systems

Author: Richard Barrett

Publisher: SIAM

Published: 1994-01-01

Total Pages: 141

ISBN-13: 9781611971538

In this book, which focuses on the use of iterative methods for solving large sparse systems of linear equations, templates are introduced to meet the needs of both the traditional user and the high-performance specialist. Templates, a description of a general algorithm rather than the executable object or source code more commonly found in a conventional software library, offer whatever degree of customization the user may desire. Templates offer three distinct advantages: they are general and reusable; they are not language specific; and they exploit the expertise of both the numerical analyst, who creates a template reflecting in-depth knowledge of a specific numerical technique, and the computational scientist, who then provides "value-added" capability to the general template description, customizing it for specific needs. For each template presented, the authors provide: a mathematical description of the flow of the algorithm; a discussion of convergence and the stopping criteria to use in the iteration; suggestions for applying the method to special matrix types; advice for tuning the template; tips on parallel implementations; and hints as to when and why a method is useful.
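
To show what a "template" means in practice, here is a hypothetical rendering of one simple method, a preconditioned Richardson iteration, written against user-supplied matrix-vector product and preconditioner-solve callables. The structure and names illustrate the template idea and are not code from the book.

```python
from typing import Callable
import numpy as np

def richardson_template(matvec: Callable[[np.ndarray], np.ndarray],
                        psolve: Callable[[np.ndarray], np.ndarray],
                        b: np.ndarray,
                        tol: float = 1e-8,
                        max_iter: int = 1000) -> np.ndarray:
    """Preconditioned Richardson iteration written as a 'template':
    the caller supplies the matrix-vector product and the preconditioner
    solve, so any matrix format (dense, sparse, matrix-free) can be used."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - matvec(x)            # current residual
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        x = x + psolve(r)            # correction from the preconditioner
    return x

# Plugging in a concrete problem: diagonally dominant matrix, Jacobi preconditioner
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = richardson_template(lambda v: A @ v, lambda r: r / np.diag(A), b)
```

Because only the two callables touch the matrix, the same routine runs unchanged on dense, sparse, or matrix-free problems, which reflects the kind of customization the templates described above are meant to offer.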