High Performance Computing
High Performance Computing is a semester 8 subject in the final year of Computer Engineering at Mumbai University. The prerequisite for studying this subject is Computer Organization. The course objectives of High Performance Computing are to learn the concepts of parallel processing as they pertain to high-performance computing, and to design, develop, and analyze parallel programs on high-performance computing resources using parallel programming paradigms.
The course outcomes state that the learner will be able to: recall parallel processing approaches; describe the different parallel processing platforms involved in achieving high performance computing; discuss different design issues in parallel programming; develop efficient, high-performance parallel programs; and write parallel programs using the message-passing paradigm with open-source APIs.
High-performance computing (HPC) is the ability to process data and perform complex calculations at high speeds. To put it into perspective, a laptop or desktop with a 3 GHz processor can perform around 3 billion calculations per second. While that is much faster than any human can achieve, it pales in comparison to HPC solutions that can perform quadrillions of calculations per second. One of the best-known types of HPC solutions is the supercomputer. A supercomputer contains thousands of compute nodes that work together to complete one or more tasks. This is called parallel processing. It’s similar to having thousands of PCs networked together, combining compute power to complete tasks faster. High Performance Computing most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business.
Module Introduction to Parallel Computing consists of the following subtopics: Motivating Parallelism; Scope of Parallel Computing; Levels of Parallelism (instruction, transaction, task, thread, memory, function); Classification Models: architectural schemes (Flynn's, Shore's, Feng's, Handler's) and memory access (shared memory, distributed memory, hybrid distributed-shared memory); Parallel Architectures: pipeline architecture, array processor, multiprocessor architecture, systolic architecture, data flow architecture.
Module Pipeline Processing consists of the following subtopics: Introduction; Pipeline Performance; Arithmetic Pipelines; Pipelined Instruction Processing; Pipeline Stage Design; Hazards; Dynamic Instruction Scheduling.
Module Parallel Programming Platforms consists of the following subtopics: Implicit Parallelism: Trends in Microprocessor Architectures; Limitations of Memory System Performance; Dichotomy of Parallel Computing Platforms; Physical Organization of Parallel Platforms; Communication Costs in Parallel Machines.
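The pipeline performance topic above is usually summarized by the ideal speedup of a k-stage pipeline over a non-pipelined unit. A minimal sketch, using the standard textbook model (equal stage delays, no hazards or stalls; this formula is the common one, not taken verbatim from this syllabus):

```python
def pipeline_speedup(k: int, n: int) -> float:
    """Ideal speedup of a k-stage pipeline executing n instructions
    over an equivalent non-pipelined unit: (n * k) / (k + n - 1).
    Assumes equal stage delays and no hazards or stalls."""
    return (n * k) / (k + n - 1)

# As n grows, the speedup approaches k, the number of stages:
for n in (10, 100, 1000):
    print(n, round(pipeline_speedup(5, n), 2))
```

For a single instruction (n = 1) the pipeline gives no speedup at all, which is why pipelining pays off only on long instruction streams.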
Module Parallel Algorithm Design consists of the following subtopics: Principles of Parallel Algorithm Design: Preliminaries; Decomposition Techniques; Characteristics of Tasks and Interactions; Mapping Techniques for Load Balancing; Methods for Containing Interaction Overheads; Parallel Algorithm Models.
Module Performance Measures consists of the following subtopics: Speedup, Execution Time, Efficiency, Cost, Scalability; Effect of Granularity on Performance; Scalability of Parallel Systems; Amdahl's Law; Gustafson's Law; Performance Bottlenecks.
Module HPC Programming consists of the following subtopics: Programming Using the Message-Passing Paradigm: Principles of Message-Passing Programming; The Building Blocks: Send and Receive Operations; MPI: the Message Passing Interface; Topology and Embedding; Overlapping Communication with Computation; Collective Communication and Computation Operations; Introduction to OpenMP.
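Amdahl's Law, listed under Performance Measures, bounds the speedup achievable on p processors when part of the work is inherently serial. A small illustrative sketch (the 10% serial fraction is an example value, not from the syllabus):

```python
def amdahl_speedup(serial_fraction: float, p: int) -> float:
    """Amdahl's Law: overall speedup on p processors when
    serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

# With 10% serial work, 16 processors yield only 6.4x, and no
# processor count can push the speedup past 1 / 0.1 = 10x.
print(round(amdahl_speedup(0.10, 16), 2))  # prints 6.4
for p in (4, 16, 64, 1024):
    print(p, round(amdahl_speedup(0.10, p), 2))
```

This is also why the syllabus pairs Amdahl's Law with Gustafson's Law, which instead considers how the problem size can scale with the processor count.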
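The send/receive building blocks of message-passing programming can be illustrated, for intuition only, with Python's standard-library multiprocessing module standing in for MPI; the worker function and data here are hypothetical, and real MPI code would use MPI_Send/MPI_Recv (or mpi4py) instead:

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # Blocking receive, local computation, blocking send -- the
    # same pattern as MPI_Recv followed by MPI_Send in an MPI rank.
    data = conn.recv()
    conn.send(sum(data))
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send([1, 2, 3, 4])  # analogous to a point-to-point send
    print(parent_conn.recv())       # analogous to a blocking receive
    p.join()
```

The key idea carried over from MPI is that the two processes share no memory: all coordination happens through explicit, possibly blocking, send and receive operations.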
Suggested textbooks for High Performance Computing by Mumbai University are as follows:
- Ananth Grama, Anshul Gupta, George Karypis, Vipin Kumar, Introduction to Parallel Computing, Pearson Education, Second Edition, 2007.
- M. R. Bhujade, Parallel Computing, 2nd Edition, New Age International Publishers, 2009.
- Kai Hwang, Naresh Jotwani, Advanced Computer Architecture: Parallelism, Scalability, Programmability, McGraw Hill, Second Edition, 2010.
- Georg Hager, Gerhard Wellein, Introduction to High Performance Computing for Scientists and Engineers, Chapman & Hall / CRC Computational Science Series, 2011.

Suggested reference books for High Performance Computing by Mumbai University are as follows:
- Michael J. Quinn, Parallel Programming in C with MPI and OpenMP, McGraw-Hill International Editions, Computer Science Series, 2008.
- Kai Hwang, Zhiwei Xu, Scalable Parallel Computing: Technology, Architecture, Programming, McGraw Hill, 1998.
- Laurence T. Yang, Minyi Guo, High Performance Computing: Paradigm and Infrastructure, Wiley, 2006.
- Lectures: 8
- Quizzes: 0
- Duration: 50 hours
- Skill level: All levels
- Language: English
- Students: 76
- Certificate: No
- Assessments: Yes