Computational Science and Engineering Introduction:
Computational Science and Engineering Applications: characteristics and requirements; Review of Computational Complexity; Performance: metrics and measurements; Granularity and Partitioning; Locality: temporal, spatial, stream, and kernel; Basic methods for parallel programming; Real-world case studies (drawn from multi-scale, multidisciplinary applications).
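A minimal sketch of the performance metrics listed above (speedup S_p = T_1 / T_p and efficiency E_p = S_p / p), measured with OpenMP wall-clock timers; the loop body, problem size, and thread counts are illustrative choices only, not prescribed by the syllabus:

#include <stdio.h>
#include <omp.h>

/* Sum of 1/i for i = 1..n; the reduction clause gives a parallel sum. */
static double harmonic(long n)
{
    double s = 0.0;
    #pragma omp parallel for reduction(+:s)
    for (long i = 1; i <= n; i++)
        s += 1.0 / (double)i;
    return s;
}

int main(void)
{
    const long N = 200000000L;          /* illustrative problem size */
    const int  p = omp_get_num_procs(); /* threads for the parallel run */

    omp_set_num_threads(1);
    double t  = omp_get_wtime();
    double r1 = harmonic(N);
    double T1 = omp_get_wtime() - t;    /* one-thread time T_1 */

    omp_set_num_threads(p);
    t = omp_get_wtime();
    double rp = harmonic(N);
    double Tp = omp_get_wtime() - t;    /* p-thread time T_p */

    double S = T1 / Tp;                 /* speedup    S_p = T_1 / T_p */
    double E = S / (double)p;           /* efficiency E_p = S_p / p   */
    printf("p = %d, T1 = %.3f s, Tp = %.3f s, speedup = %.2f, efficiency = %.2f\n",
           p, T1, Tp, S, E);
    printf("checksum: %.6f %.6f\n", r1, rp); /* keep both results live */
    return 0;
}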
High-End Computer Systems:
Memory Hierarchies; Multi-core Processors: Homogeneous and Heterogeneous; Shared-memory Symmetric Multiprocessors; Vector Computers; Distributed Memory Computers; Supercomputers and Petascale Systems; Application Accelerators / Reconfigurable Computing; Novel Computers: Stream, Multithreaded, and Purpose-built.
Parallel Algorithms:
Parallel models: ideal and real frameworks; Basic Techniques: Balanced Trees, Pointer Jumping, Divide and Conquer, Partitioning; Regular Algorithms: Matrix Operations and Linear Algebra; Irregular Algorithms: Lists, Trees, Graphs; Randomization: Parallel Pseudo-Random Number Generators, Sorting, Monte Carlo techniques.
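As one illustration of the basic techniques above, a sketch of pointer jumping (parallel list ranking) in C with OpenMP; the function name list_rank and the array layout are illustrative assumptions, not taken from the prescribed texts:

#include <stdlib.h>
#include <string.h>

#define NIL (-1)

/* Pointer jumping: for a linked list given by next[i] (NIL marks the last
   node), compute rank[i] = number of links from node i to the end of the
   list, in O(log n) parallel rounds. */
void list_rank(int n, const int *next_in, int *rank)
{
    int *next     = malloc(n * sizeof *next);
    int *next_new = malloc(n * sizeof *next_new);
    int *rank_new = malloc(n * sizeof *rank_new);
    memcpy(next, next_in, n * sizeof *next);

    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        rank[i] = (next[i] == NIL) ? 0 : 1;

    /* Each round doubles the distance covered by every pointer. */
    for (int hops = 1; hops < n; hops *= 2) {
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            if (next[i] != NIL) {            /* read old buffers, write new */
                rank_new[i] = rank[i] + rank[next[i]];
                next_new[i] = next[next[i]];
            } else {
                rank_new[i] = rank[i];
                next_new[i] = NIL;
            }
        }
        memcpy(rank, rank_new, n * sizeof *rank);
        memcpy(next, next_new, n * sizeof *next);
    }
    free(next); free(next_new); free(rank_new);
}

The double buffering (rank_new, next_new) keeps each round free of read/write races, which is what makes the simple parallel-for loop a valid realization of the synchronous PRAM formulation of the algorithm.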
Parallel Programming:
Revealing concurrency in applications; Task and Functional Parallelism; Task Scheduling; Synchronization Methods; Parallel Primitives (collective operations); SPMD Programming (threads, OpenMP, MPI); I/O and File Systems; Parallel MATLAB tools (Parallel MATLAB, Star-P, MatlabMPI); Partitioned Global Address Space (PGAS) languages (UPC, Titanium, Global Arrays).
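A minimal SPMD sketch with MPI, showing a block data decomposition followed by one of the collective primitives (MPI_Allreduce); the problem solved is illustrative only. Compile with mpicc and launch with mpirun or mpiexec:

#include <stdio.h>
#include <mpi.h>

/* SPMD pattern: every process runs the same program, works on its own
   block of the iteration space, then joins a collective operation. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long N = 1000000;                 /* illustrative problem size */
    long lo = (long)rank * N / size;        /* block decomposition */
    long hi = (long)(rank + 1) * N / size;

    double local = 0.0;
    for (long i = lo; i < hi; i++)
        local += 1.0 / (double)(i + 1);     /* arbitrary local work */

    /* Collective primitive: every rank receives the combined sum. */
    double global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %.6f computed by %d processes\n", global, size);

    MPI_Finalize();
    return 0;
}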
Achieving Performance:
Measuring performance; Identifying performance bottlenecks; Restructuring applications for deep memory hierarchies; Partitioning applications for heterogeneous resources; Using existing libraries, tools, and frameworks.
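One common restructuring for deep memory hierarchies is loop tiling; the sketch below blocks a dense matrix multiply so that a small working set of A, B, and C stays resident in cache (the tile size 64 and the function name matmul_tiled are illustrative assumptions, not prescribed values):

/* Cache-blocked (tiled) matrix multiply, C += A * B, all n-by-n, row-major. */
#define TILE 64   /* tuning parameter; best value depends on the cache sizes */

static int min(int a, int b) { return a < b ? a : b; }

void matmul_tiled(int n, const double *A, const double *B, double *C)
{
    for (int ii = 0; ii < n; ii += TILE)
        for (int kk = 0; kk < n; kk += TILE)
            for (int jj = 0; jj < n; jj += TILE)
                /* multiply one TILE x TILE block combination */
                for (int i = ii; i < min(ii + TILE, n); i++)
                    for (int k = kk; k < min(kk + TILE, n); k++) {
                        double a_ik = A[i * n + k];
                        for (int j = jj; j < min(jj + TILE, n); j++)
                            C[i * n + j] += a_ik * B[k * n + j];
                    }
}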
Course outcomes:
After studying this course, students will be able to:
1. Apply the concepts of high performance computing.
2. Develop various algorithms required for parallel computing.
3. Compare architectures for high performance computing.
Graduate Attributes:
Question paper pattern:
Textbooks:
1. A. Grama, A. Gupta, G. Karypis, V. Kumar, An Introduction to Parallel Computing: Design and Analysis of Algorithms, 2nd edition, Pearson Education India, 2004, ISBN-13: 978-8131708071.
2. G.E. Karniadakis, R.M. Kirby II, Parallel Scientific Computing in C++ and MPI: A Seamless Approach to Parallel Algorithms and their Implementation, Cambridge University Press, 2003.
Reference Books:
1. B. Wilkinson and M. Allen, Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, 2nd edition, Pearson, 2006, ISBN-13: 978-8131702390.
2. M.J. Quinn, Parallel Programming in C with MPI and OpenMP, 1st edition, McGraw-Hill, 2003, ISBN-13: 978-0070582019.
3. G.S. Almasi and A. Gottlieb, Highly Parallel Computing, 2nd edition, Addison-Wesley, 1994.
4. J. Dongarra, I. Foster, G. Fox, W. Gropp, K. Kennedy, L. Torczon, A. White, editors, The Sourcebook of Parallel Computing, Morgan Kaufmann, 2002.