Abstract
A systematic procedure for designing pipelined data-parallel algorithms that are suitable for execution on multicomputers is introduced. This procedure concentrates on grouping loops in the original program so as to reduce the number of communicating processors, control the granularity, and increase the degree of pipelining. The procedure starts with a nested-loop program, manipulates the dependencies between the loops, and groups related loops to obtain pipelined and data-parallel operations. Using this procedure, it is possible to parallelize a nested loop automatically.
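The abstract's grouping procedure is not reproduced here, but the kind of dependence pattern it targets can be illustrated. Below is a minimal sketch (the recurrence, names, and wavefront schedule are illustrative assumptions, not taken from the paper): a doubly nested loop whose body reads `a[i-1][j]` and `a[i][j-1]` cannot run its iterations in arbitrary order, yet all cells on one anti-diagonal `i + j = d` depend only on the previous diagonal, so the loop can be executed as a pipeline of independent diagonal groups.

```python
# Illustrative sketch (not the paper's algorithm): pipelining a nested
# loop with dependencies a[i][j] = a[i-1][j] + a[i][j-1] by regrouping
# iterations into anti-diagonal "wavefronts".

def sequential(n):
    # Original nested loop: each iteration depends on the cell above
    # and the cell to the left.
    a = [[1] * n for _ in range(n)]
    for i in range(1, n):
        for j in range(1, n):
            a[i][j] = a[i - 1][j] + a[i][j - 1]
    return a

def wavefront(n):
    # Regrouped loop: iterate over anti-diagonals d = i + j. All cells
    # on diagonal d read only cells on diagonal d - 1, so within one
    # diagonal the iterations are independent and could be assigned to
    # different processors; successive diagonals form the pipeline.
    a = [[1] * n for _ in range(n)]
    for d in range(2, 2 * n - 1):
        for i in range(max(1, d - n + 1), min(d, n)):
            j = d - i
            a[i][j] = a[i - 1][j] + a[i][j - 1]
    return a

# Both schedules compute the same values; only the iteration grouping
# (and hence the available parallelism) differs.
assert sequential(4) == wavefront(4)
```

The regrouping preserves every dependence edge (each cell is still computed after both of its inputs) while exposing the diagonal-parallel structure that a pipelined multicomputer schedule can exploit.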
| Original language | English |
|---|---|
| Pages | 653-656 |
| DOIs | |
| Publication status | Published - 1988 |
| Externally published | Yes |
| Event | Proceedings of the 2nd Symposium on the Frontiers of Massively Parallel Computation - Duration: 1 Jan 1988 → 1 Jan 1988 |