Parallelism

COS 597C, Fall 2010, Princeton University

Tuesday, Thursday 11:00-12:20, Room 302

Summary

The purpose of the course is to compare and contrast techniques for developing high-performance applications in today's and tomorrow's computational environments, where parallelism is ubiquitous.  Topics will include the basics of modern parallel hardware architectures; low-level parallel programming with threads, locks, and explicit synchronization; high-level programming in modern functional and data-parallel languages; functional reactive programming; software transactional memory; and semi-automatic and fully-automatic parallelization.  Both language design considerations and compiler implementation considerations will be studied.  Not only will students get to study a wide range of existing parallel programming and implementation techniques, but they will also generate some new ideas for research and have an opportunity to help teach and develop course materials.  A resume-builder for sure!

Professors

This course is co-taught by the Davids (August and Walker).  Office hours are by appointment.

David August, CS 209, august@cs.princeton.edu

David Walker, CS 211, dpw@cs.princeton.edu

Course Mailing List

All students must subscribe to this list.  See here:

https://lists.cs.princeton.edu/mailman/listinfo/cos597c

To post a message to all the list members, send email to cos597c@lists.cs.princeton.edu

Course Structure

The course will be arranged as a series of modules.  For each module, several students will volunteer or be assigned to prepare content, some as Leaders (L) and some as Developers (D).  Every student should be a leader once and a developer once (see the schedule below).  The leader will give a lecture on the topic and lead development of an exercise or assignment.  The developer will help pretest the assignment and/or help develop infrastructure for it.  Deadlines (except for the first two weeks of class) are given in the schedule below.

Tools

One of the tools we will be using in the class is the F# programming language.  You can run F# on your own Windows, Linux, or Mac machine.  We are having it installed on some of the department Linux machines.  To install it on Windows, simply download the latest version of Visual Studio and set the installation defaults to prefer F#.  Princeton students can download and use Visual Studio for academic purposes for free:

https://csguide.cs.princeton.edu/software/msdnaa

Once you have downloaded and installed F#, you will also need to download and install the F# powerpack:

http://fsharppowerpack.codeplex.com/

To run F# on Linux or a Mac, you must first install Mono -- the open-source implementation of the .NET framework.  Here are Nick Johnson's instructions for installing it on Ubuntu Linux.

If you have difficulties with F# installation, please send email to the mailing list.  If you successfully install F# on some other version of Linux and would like to post the instructions, please send them to David Walker.

Course Schedule

A preliminary course schedule follows.  The schedule will likely change and evolve to meet student interests in the course.

Week Module Prof Students Links & Resources Assignments
Sep 16 Introduction    

Intro slides

Assignment 0 [Due: Saturday, midnight]:  Email dpw@cs and august@cs the names of two modules you would like to participate in developing:  one that you think you know best and one that you would like to learn more about
Sep 21, 23 History, EPIC, Vector Parallelism August Oh (L), Prabhu (L) History Lecture, ILP Lecture, Pipeline Timing

Goals: Learn about the history of parallel computing and the evolution of parallel hardware architectures;  Learn about vector architectures, SIMD, pipelining and instruction level parallelism.

Assignment #1:  Due Sept 30; A1.tar.gz; A2.tar.gz
Sep 28, 30, Oct 5, 7 Thread Parallelism, Memory Models, Properties, Debugging Techniques August/Walker Ghosh (L), Beard (L), Zhang (L), Divjyot Sethi (L), Raman (D), Sinha (D) Threads Intro Lecture; Threads 2

Memory Models Lecture

Debugging Lecture

Tom Ball et al lectures

Goals: understand pitfalls of threads and locks; understand weak memory models; understand desirable properties of concurrent programs; understand selected bug-finding and bug-prevention technologies

Assignment #2: Due Oct 14
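As a small taste of the "pitfalls of threads and locks" goal above, here is a minimal sketch in plain Python (not the languages used in lecture) of the classic lost-update race and its lock-based fix; all names are illustrative.

```python
# Lost-update race: unsynchronized read-modify-write vs. a locked one.
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # "counter += 1" is not atomic: load, add, store can interleave,
    # so increments from other threads can be silently lost
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:   # the lock makes the read-modify-write atomic
            counter += 1

def run(worker, n=100_000, nthreads=4):
    global counter
    counter = 0
    ts = [threading.Thread(target=worker, args=(n,)) for _ in range(nthreads)]
    for t in ts: t.start()
    for t in ts: t.join()
    return counter

print(run(safe_increment))    # always 400000
print(run(unsafe_increment))  # may come up short on some runs
```

The unsafe version may still "work" on many runs, which is exactly what makes such bugs hard to find and motivates the debugging techniques in this module.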
Oct 12, 14, 19 Asynchronous, Reactive Programming & Applications

(F#, Monads)

(Routers, Robots & Animation)

Walker Dodds (L), Monsanto (L), Bell (L), Jablin (D), Huang (D), Prabhu (D) F# Asynch Lecture; asynch demo code; asynch program code

Explanations of asynch basics and the problems they solve here and here and here, Don Syme Blog, F# Language Reference, Functional Reactive Animation; FrTime

Goals: understand the problems asyncs solve and the difficulties of writing reactive programs without them; understand how to program with asyncs, resources, exceptions & cancellation; understand the underlying implementation in F#

Goals:  Understand the concept of behaviors and how to program with them, understand one application domain (GUIs, Animation, Robots) for FRP with behaviors, understand the basics of how to implement FRP with behaviors

Assignment #3:  Due Oct 28
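The heart of the async idea above is that a computation gives up its thread at each asynchronous point so other work can interleave.  A toy sketch using Python generators (standing in for F#'s async workflows; every name here is illustrative):

```python
# Cooperative interleaving: each task yields at its "async" points and a
# round-robin scheduler runs the tasks in turns, much as F# asyncs
# release the thread at each let!/do! point.
from collections import deque

def scheduler(tasks):
    trace = []
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            trace.append(next(task))  # run until the next yield point
            queue.append(task)        # re-queue so others get a turn
        except StopIteration:
            pass                      # task finished
    return trace

def fetch(name, steps):
    # stands in for an asynchronous I/O operation done in chunks
    for i in range(steps):
        yield f"{name}:{i}"

print(scheduler([fetch("a", 2), fetch("b", 3)]))
# -> ['a:0', 'b:0', 'a:1', 'b:1', 'b:2']
```

The point of the sketch is the control-flow inversion: the task reads sequentially, yet the scheduler interleaves many of them on one thread.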
Oct 21 Functional Message Passing Walker Sinha (L) Erlang Message Passing Lecture

Erlang Tutorial; F# Message Passing; F# Twitter App

Goals:  Understand how to program concurrent and distributed applications using purely functional message passing techniques
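The lecture's examples are in Erlang and F#; as a rough sketch of the same share-nothing style, here is a toy Erlang-like actor in Python, where the actor's state is private and reachable only through messages (names are illustrative):

```python
# An "actor" owning private state, driven entirely by its mailbox.
import threading
import queue

def counter_actor(mailbox, replies):
    total = 0                 # private state: no other thread touches it
    while True:
        msg = mailbox.get()   # block until a message arrives
        if msg == "stop":
            replies.put(total)
            return
        total += msg

mailbox, replies = queue.Queue(), queue.Queue()
threading.Thread(target=counter_actor, args=(mailbox, replies)).start()
for n in [1, 2, 3]:
    mailbox.put(n)            # asynchronous send, no shared memory
mailbox.put("stop")
result = replies.get()
print(result)                 # -> 6
```

Because no memory is shared, there is nothing to lock; all coordination happens through the message queues.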

 
Oct 26, 28 Haskell, Parallelism and Transactional Memory Walker Xi (L), Drutskoy (L), Johnson (D) F# STM library,

Haskell Concurrency Tutorial, Haskell STM paper, Lock-free data structures in Haskell STMs, Haskell STM wiki

Haskell STMs Lecture

Goals:  Understand problem solved, pros and cons of using STMs, understand how to program with STM in F# or Haskell

Assignment #4: Haskell 
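To make the STM readings above concrete, here is a deliberately tiny optimistic-concurrency sketch in Python -- NOT Haskell's or F#'s STM, just the read-log / validate / commit / retry skeleton behind them (all names invented for illustration):

```python
# Toy STM: transactions log reads, buffer writes, and commit only if no
# concurrent commit changed the versions of the TVars they read.
import threading

_commit_lock = threading.Lock()

class TVar:
    def __init__(self, value):
        self.value, self.version = value, 0

class Tx:
    def __init__(self):
        self.reads, self.writes = {}, {}
    def read(self, tvar):
        if tvar in self.writes:           # read your own buffered write
            return self.writes[tvar]
        self.reads.setdefault(tvar, tvar.version)  # log version seen
        return tvar.value
    def write(self, tvar, value):
        self.writes[tvar] = value         # buffered until commit

def atomically(action):
    while True:                           # retry until commit validates
        tx = Tx()
        result = action(tx)
        with _commit_lock:
            if all(tv.version == v for tv, v in tx.reads.items()):
                for tv, v in tx.writes.items():
                    tv.value, tv.version = v, tv.version + 1
                return result
            # a concurrent commit invalidated our reads: run again

a, b = TVar(10), TVar(0)

def transfer(tx, amount=4):
    tx.write(a, tx.read(a) - amount)      # the two writes commit
    tx.write(b, tx.read(b) + amount)      # together or not at all

atomically(transfer)
print(a.value, b.value)   # -> 6 4
```

Real STMs add much more (retry/orElse, contention management, fairness), but the shape of `atomically` -- run, validate, commit or retry -- is the core idea the papers above develop.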

Fall Break          
Nov 9, 11 Transactional Memory

(Hardware Support)

August Raman (L), Schwartz-Narbonne (L), Liu (D) Transactional Memory Lecture 1

Transactional Memory Lecture 2

Goals:  Understand how hardware can support transactional memory; understand the tension between hardware designers (speed) and language designers (semantics) in the transactional memory space

Assignment #5: Transactional Memory
Nov 16, 18 Languages:  StreamIt, Cilk August Liu (L), Pondicherry (L), Oh (D), Beard (D), Schlesinger (D) StreamIt Lecture

Cilk Lecture

Cilk and StreamIt Assignment (Due: Dec 14, 11:59PM)
Nov 23 Imperative Message Passing Parallelism August Kim (L),  Drutskoy (D), Schwartz-Narbonne (D) MPI Lecture MPI Assignment (Due: Dec 7, 11:59PM)
Thanksgiving          
Nov 30, Dec 2, 7 Parallelism on GPUs and Auto-Parallelization August Jablin (L), Johnson (L), Huang (L), Ghosh (D), Bell (D) Autoparallel 1

Autoparallel 2

CUDA, LLVM-Liberty

 
Dec 9 Nested Data Parallelism

(Complexity, Algorithms)

Walker Zoufaly (L), Kim (D), Dodds (D) Blelloch's CACM article; NESL pages and tutorial

Goals:  understand the pros and cons of nested data-parallel programming; understand the complexity model (work, depth, latency, relation to real machines); understand what computational problems can be solved with it; gain experience programming with it
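The work/depth model in these goals can be illustrated with a few lines of Python: a divide-and-conquer sum run sequentially, but instrumented to report the work (total operations) and depth (longest chain of dependent operations) an ideal parallel machine would see.

```python
# Work/depth accounting for a balanced divide-and-conquer sum.
def par_sum(xs):
    """Return (sum, work, depth) for a divide-and-conquer sum of xs."""
    if len(xs) == 1:
        return xs[0], 0, 0
    mid = len(xs) // 2
    ls, lw, ld = par_sum(xs[:mid])   # the two halves are independent,
    rs, rw, rd = par_sum(xs[mid:])   # so they could run in parallel
    # one addition combines them: work adds up, depth takes the longer path
    return ls + rs, lw + rw + 1, max(ld, rd) + 1

total, work, depth = par_sum(list(range(16)))
print(total, work, depth)   # -> 120 15 4
```

Summing n elements costs O(n) work but only O(log n) depth, which is why this computation parallelizes well even though its total operation count matches the sequential loop.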

 
Dec 14, 16 Languages: Massively Parallel Systems (Map-Reduce, Dryad, Azure) Walker Schlesinger (L), Xi (D), Zhang (D), Zoufaly (D) Map-Reduce, FlumeJava, Dryad  
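The MapReduce model these systems share fits in a few lines: the programmer supplies a mapper and a reducer, and the framework handles the shuffle between them.  A sequential sketch in Python (the real systems distribute each phase across a cluster):

```python
# Word count in the MapReduce model: map, shuffle, reduce.
from collections import defaultdict

def map_phase(doc):
    # mapper: emit (key, value) pairs for one input record
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # group every value by its key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # reducer: combine all values observed for one key
    return key, sum(values)

docs = ["to be or not to be", "to parallelize or not"]
pairs = [kv for doc in docs for kv in map_phase(doc)]
counts = dict(reduce_phase(k, vs) for k, vs in shuffle(pairs).items())
print(counts["to"], counts["or"], counts["be"])   # -> 3 2 2
```

Because the mapper and reducer are pure functions over independent keys, the framework is free to run them on thousands of machines; FlumeJava and Dryad generalize this pipeline shape.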

Grading

Grades will be assigned based on students' participation in class, their contributions to the larger group projects, and their completion of the short individual assignments.