An Empirical Comparison of Loop Scheduling Algorithms on a Shared Memory Multiprocessor

December 1991

This paper studies several methods of instruction-level parallelization
applied at the statement level on a shared-memory multiprocessor, and
reports the results of an empirical evaluation to determine which
method yields the best results. Using sequential code as a base
case, we compared doacross scheduling, list scheduling, greedy software
pipelining (a variant of perfect pipelining), and top-down scheduling. The
experiments were performed on loops both with and without loop-carried
dependencies. We find that statement-level parallelism does yield
speedups on the shared-memory multiprocessor. In addition, we observed
an interesting superlinearity effect for fully vectorized loops.
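To illustrate one of the compared techniques, the following is a minimal sketch of greedy list scheduling, not the paper's implementation: statements are assigned to processors in ready order, with a statement becoming ready once all of its predecessors have completed. Unit statement latencies and a simple dictionary-order priority are assumed for brevity; practical list schedulers typically prioritize by critical-path length.

```python
def list_schedule(deps, num_procs):
    """Greedily schedule statements onto processors (hypothetical sketch).

    deps: dict mapping each statement to the set of statements it depends on.
    Returns a dict mapping statement -> (processor, time_step),
    assuming every statement takes one time step.
    """
    schedule = {}
    done = set()   # statements completed by the current time step
    time = 0
    while len(schedule) < len(deps):
        # A statement is ready when all its predecessors have finished.
        ready = [s for s in deps
                 if s not in schedule and deps[s] <= done]
        # Issue at most num_procs ready statements this step.
        issued = ready[:num_procs]
        for proc, stmt in enumerate(issued):
            schedule[stmt] = (proc, time)
        done |= set(issued)  # unit latency: issued work finishes this step
        time += 1
    return schedule
```

For example, with statements `a` and `b` independent and `c` depending on both, two processors finish in two time steps, while one processor needs three, which is the kind of statement-level speedup the paper measures empirically.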
