CIS5930-02 High performance computing for scientific applications

Spring 2003, 3 Credit hours

Instructor: Ashok Srinivasan
Office hours: TF 1:00 pm - 2:00 pm, or by appointment.
Location: 169, Love Building
Phone: 644-0559, Email: asriniva@cs.fsu.edu
Course web site: Access through Blackboard at http://campus.fsu.edu

Lecture hours:
MW 3:35 pm - 4:50 pm, LOV 103

Textbook: Parallel Programming in C with MPI and OpenMP, M.J. Quinn, McGraw-Hill, manuscript (for use in this course only).

Reference books:

  1. Designing and Building Parallel Programs, Ian Foster. Available on-line at: http://www-unix.mcs.anl.gov/dbpp.
  2. Parallel Computer Architecture: A Hardware/Software Approach, D. E. Culler and J. P. Singh, Morgan Kaufmann, 1999.
  3. Using MPI: Portable Parallel Programming with the Message-Passing Interface, second edition, W. Gropp, E. Lusk, and A. Skjellum, The MIT Press, 1999.

Prerequisites:

You should be comfortable programming in C (alternatively, expertise in Fortran may be acceptable, if you are willing to work hard and learn to at least "read" C) and have good knowledge of basic linear algebra. No prior knowledge of parallel computing is assumed.

Course rationale:

This course is meant for graduate students in Computer Science, Engineering, Mathematics, and the sciences, especially those who need high performance computing in their research. It covers practical aspects of high performance computing on both sequential and parallel machines, so that you will be able to use high performance computers effectively in your research.

Course objectives:

You will get practical experience in obtaining good performance in sequential and parallel environments. By the end of the course, you should be able to optimize the performance of your code on sequential and parallel machines, and program in the message passing paradigm using MPI and in the shared memory paradigm using OpenMP. The practical aspects will be supplemented with sufficient theory on parallel algorithms. You will also read papers to get acquainted with current research on selected topics.
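
To give a concrete flavor of these two paradigms, here is a minimal sketch in C (for illustration only, not course material or an assignment): each MPI process runs its own copy of the program and communicates by passing messages, while the OpenMP pragma creates threads that share the memory of a single process.

    /* Illustrative sketch: the message passing (MPI) and shared memory
       (OpenMP) paradigms in one small C program.
       Compile with something like: mpicc -fopenmp hello.c */
    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI environment */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id         */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* number of processes       */

        /* Shared memory paradigm: OpenMP creates threads in this process */
        #pragma omp parallel
        {
            printf("Process %d of %d, thread %d of %d\n",
                   rank, size, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();                       /* shut down MPI             */
        return 0;
    }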

Course description:

Sequential computing: computer architectural features that support high performance computing, use of standard libraries, programming techniques, compiler optimization, exploiting the memory hierarchy, and other algorithmic issues.
Parallel computing: parallel machine and programming models, parallel algorithms, performance models, MPI, OpenMP.
Applications: numerical linear algebra, molecular dynamics, Monte Carlo, etc.
Project: You will also work on a project involving parallel programming. You are encouraged to choose your research project as the topic, if you are already involved in research.
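
As a small taste of the sequential part, the sketch below shows one way the memory hierarchy matters (the array a, the size N, and the function name are made up for illustration): looping over a matrix in the order C stores it in memory makes far better use of the cache than looping against it.

    /* C stores two-dimensional arrays row by row, so putting the row
       index in the outer loop walks memory contiguously and reuses
       each cache line before it is evicted. */
    #define N 1000
    double a[N][N];

    void scale_rowwise(double s)
    {
        for (int i = 0; i < N; i++)         /* row index outermost    */
            for (int j = 0; j < N; j++)     /* column index innermost */
                a[i][j] *= s;
    }

    /* Interchanging the two loops (j outer, i inner) accesses memory
       with a stride of N doubles and typically runs noticeably slower
       for large N. */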

Grading criteria:

Category               Weight (%)
Class participation    5
Project                20
Paper presentation     10
Homework assignments   15
Midterm                25
Final Exam             25

Your grade will be based on your scores in these categories, weighted as shown above.

Course average    Letter grade
90 - 100          A
87 - 90           A-
80 - 87           B
70 - 80           C
60 - 70           D
0 - 60            F

Attendance

While you will not be explicitly graded for attendance, you will be graded for class participation, and you will need to attend class in order to participate in it!

Course policies:

Honor code:

Plagiarism is "representing another's work or any part thereof, be it published or unpublished, as one's own. . . . For example, plagiarism includes failure to use quotation marks or other conventional markings around material quoted from any source" (Florida State University General Bulletin 1998-1999, p. 69). Failure to document material properly, that is, to indicate that the material came from another source, is also considered a form of plagiarism. Copying someone else's program, and turning it in as if it were your own work, is also considered plagiarism.

What I expect from the student:

Lecture plan:

Dates              Topic
6 Jan - 8 Jan      Optimizing sequential programs.
13 Jan - 15 Jan    Optimizing sequential programs.
22 Jan             Introduction to parallel computing.
27 Jan - 29 Jan    (i) Parallel architectures, (ii) parallel programming models, and (iii) performance analysis.
3 Feb - 5 Feb      OpenMP.
10 Feb - 12 Feb    Parallel algorithm design.
17 Feb - 19 Feb    (i) MPI and (ii) midterm.
24 Feb - 26 Feb    (i) MPI and (ii) parallel linear algebra.
3 Mar - 5 Mar      Parallel linear algebra.
10 Mar - 12 Mar    No class -- Spring break.
17 Mar - 19 Mar    Domain decomposition.
24 Mar - 26 Mar    (i) Domain decomposition and (ii) applications.
31 Mar - 2 Apr     Applications.
7 Apr - 9 Apr      Paper presentations.
14 Apr - 16 Apr    Paper presentations.
21 Apr - 23 Apr    Project presentations.
Fri, 2 May         Final exam, 3:00 pm - 5:00 pm.

Useful links:


Last modified: 9 Dec 2002