JSU Course Homework 3


High Performance Computing and Computational Science

  • Read Chapters 3 and 4 of Parallel Programming with MPI, Peter S. Pacheco, Morgan Kaufmann, 1997. Section 4.3 is not important here.
  • I will go through numerical integration (Chapter 4) on Monday, February 14.
  • Consider the code shown in http://www.old-npac.org/projects/cpsedu/summer98summary/examples/mpi-c/examples96/simpson-rule_with_mpi.c. It is a variant of the algorithm presented in Chapter 4 of Pacheco. Remove the MPI-specific parts not needed on a single CPU, run the code on a sequential machine, and measure its execution time. Do this as a function of N, the total number of points (make N large to get reliable times).
  • Now look at the MPI code and estimate the overhead for a computer where a single call to MPI_RECV or MPI_SEND takes 20 microseconds; assume the other MPI calls used take ZERO time. What value of N (now the number of points per processor) gives a speed-up greater than 8 on 10 processors, if each processor has the performance of the sequential machine you used? Note that the total problem size is now 10N, and the "reduction" algorithm in the code on the web is not optimal; however, use this simple approach.
  • Email your homework document or a link to it to me at gcf@indiana.edu.
  • Due: midnight, Sunday, February 20, 2005.