Parallel architectures for computer systems

MAY 01, 1984
Having several parts of a system simultaneously perform different parts of a task is an old notion that is proving more and more useful in the design of powerful computers.
James C. Browne

People frequently do more than one thing at a time: Driving a car while listening to the radio, cooking a meal so that several dishes are ready at once, or playing two lines of melody on a piano are all familiar examples. On a larger scale, many human activities, such as building a house or complicated experimental apparatus, or putting out a magazine, are separated into what might be called “units of activity” that are performed separately by people working in parallel. On a smaller scale, our brains control separately—but in a coordinated fashion—breathing, heartbeat and several different kinds of motor activity. In each of these cases, separate units of activity are carried out by separate processors (different people or different parts of the brain, for example) that work simultaneously (at the same time, but not in lockstep) and interact to produce the final effect or product.
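The same idea underlies a parallel computer: a problem is broken into units of activity that run on separate processors, each proceeding at its own pace, with their results combined at a coordination point. As a loose, purely illustrative sketch (not from the article, and written in modern Go), the example below divides a summation among several workers that run simultaneously but not in lockstep, then merges their partial results; all names in it are hypothetical.

```go
// Illustrative sketch only: several "units of activity" (goroutines) sum
// disjoint parts of a slice simultaneously, then a coordinator combines
// their partial results.
package main

import (
	"fmt"
	"sync"
)

func main() {
	data := make([]int, 1000)
	for i := range data {
		data[i] = i + 1 // 1, 2, ..., 1000
	}

	const workers = 4
	partial := make([]int, workers) // one slot per worker, so no sharing
	chunk := len(data) / workers

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) { // each worker proceeds at its own pace
			defer wg.Done()
			lo, hi := w*chunk, (w+1)*chunk
			if w == workers-1 {
				hi = len(data) // last worker picks up any remainder
			}
			for _, v := range data[lo:hi] {
				partial[w] += v
			}
		}(w)
	}
	wg.Wait() // coordination point: wait for every worker to finish

	total := 0
	for _, p := range partial {
		total += p
	}
	fmt.Println("sum =", total) // 500500
}
```

The goroutines here play the role of the "units of activity" described above: they work at the same time but not in lockstep, and the wait at the end is the point where their separate efforts are brought together into the final result.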

This article is only available in PDF format

More about the authors

James C. Browne, University of Texas, Austin.

This Content Appeared In

Volume 37, Number 5
