Page last updated on February 17, 2017

*Enrollment code:* 13482

*Prerequisite:* ECE 254A (can be waived, but ECE 154 is required)

*Class meetings:* MW 10:00-11:30, Phelps 1431

*Instructor:* Professor Behrooz Parhami

*Open office hours:* M 12:00-2:00, W 1:00-2:00, HFH 5155

**Course announcements:** Listed in reverse chronological order

**Course calendar:** Schedule of lectures, homework, exams, research

**Homework assignments:** Five assignments, worth a total of 30%

**Exams:** Closed-book midterm (30%) and final (40%)

**Research paper:** Report and short oral presentation (not this quarter)

**Research paper guidelines:** Brief guide to format and contents

**Poster presentation tips:** Brief guide to format and structure

**Grade statistics:** Range, mean, etc. for homework and exam grades

**References:** Textbook and other sources (Textbook's web page)

**Lecture slides:** Available on the textbook's web page

**Miscellaneous information:** Motivation, catalog entry, history

Course lectures, homework assignments, exams, and research milestones have been scheduled as follows. This schedule will be strictly observed. In particular, no extension is possible for homework due dates or research milestones. Each lecture covers topics in 1-2 chapters of the textbook. Chapter numbers are provided in parentheses, after day & date. PowerPoint and PDF files of the lecture slides can be found on the textbook's web page.

**Day & Date (book chapters) Lecture topic [Homework posted/due] {Special notes}**

M 01/09 (1) Introduction to parallel processing

W 01/11 (2) A taste of parallel algorithms

M 01/16 No lecture: Martin Luther King Holiday [HW1 posted, chs. 1-4]

W 01/18 (3-4) Complexity and parallel computation models

M 01/23 (5) The PRAM shared-memory model and basic algorithms

W 01/25 (6A) More shared-memory algorithms [HW1 due] [HW2 posted, chs. 5-6C]

M 01/30 (6B-6C) Shared memory implementations and abstractions

W 02/01 (7) Sorting and selection networks [HW3 posted, chs. 7-8C]

M 02/06 (8A) Search acceleration circuits [HW2 due]

W 02/08 (8B-8C) Other circuit-level examples

M 02/13 (9) Sorting on a 2D mesh or torus architecture [HW3 due]

W 02/15 (1-8) Midterm exam, 10:00-11:45 AM: closed-book; a simple calculator is permitted

M 02/20 No lecture: Presidents' Day Holiday

W 02/22 (10) Routing on a 2D mesh or torus architecture [HW4 posted, chs. 9-12]

M 02/27 (11) Other algorithms for mesh/torus architectures

W 03/01 (12) Mesh/torus variations and extensions

M 03/06 (13) Hypercubes and their algorithms [HW4 due] [HW5 posted, chs. 13-16]

W 03/08 (14) Sorting and routing on hypercubes

M 03/13 (15-16) Other interconnection architectures {Instructor/course evaluation surveys}

W 03/15 (17-18) Task scheduling and input/output [HW5 due]

M 03/20 (1-16) Final exam, 8:30-11:00 AM: closed-book; a simple calculator is permitted

T 03/28 {Course grades due by midnight}

-Turn in solutions in class before the lecture begins.

-Because solutions will be handed out on the due date, no extension can be granted.

-Use a cover page that includes your name, course name, and assignment number.

-Staple the sheets and write your name on top of each sheet in case they are separated.

-Although some cooperation is permitted, direct copying will have severe consequences.

**Homework 1: Introduction, models, and complexity** (chs. 1-4, due W 2017/01/25, 10:00 AM)

Do the following problems, from the textbook or as defined below: 1.11, 1.23, 2.5, 2.9, 3.2, 3.5

Read the report [Ceze16] and discuss, in one page of typeset text (single spacing okay, if needed), how the vision discussed might relate to or affect research in parallel processing.

[Ceze16] Ceze, L., M. D. Hill, and T. F. Wenisch, "Arch2030: A Vision of Computer Architecture Research over the Next 15 Years," Technical Report, Computing Community Consortium, 7 pp., 2016. [PDF]

**Homework 2: Shared memory model of parallel processing** (chs. 5-6C, due M 2017/02/06, 10:00 AM)

Do the following problems, from the textbook or as defined below: 5.2, 5.10, 5.20, 6.1, 6.9b, 16.29

Devise efficient PRAM algorithms for the following problems. In each case, the sequence

a. Find an element

b. Find the length
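For orientation on the PRAM style asked for above: PRAM algorithms are usually described as synchronous rounds in which all processors act in lockstep. A minimal sequential simulation of one classic pattern, a logarithmic-depth tree reduction for the maximum (a generic illustration only, not a solution to any assigned problem), might look like:

```python
def pram_max(a):
    """Sequential simulation of a logarithmic-depth PRAM reduction:
    in each synchronous round, "processor" i combines the pair at
    positions 2i and 2i+1, halving the number of live values."""
    vals = list(a)
    while len(vals) > 1:
        nxt = [max(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:          # a leftover value passes through unchanged
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]

print(pram_max([5, 1, 9, 3, 7, 2]))  # -> 9, after O(log n) rounds
```

With n processors this pattern runs in O(log n) parallel time; the loop here merely replays the rounds one after another.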

Read the paper [Sing16] and discuss, in one page of typeset text (single spacing okay, if needed), the kinds of multi-stage interconnection networks used in Google data centers, why they were chosen for the particular application, and their suitability for shared-memory architectures.

[Sing16] Singh, A. and 18 others, "Jupiter Rising: A Decade of Clos Topologies and Centralized Control in Google's Datacenter Network,"

**Homework 3: Circuit model of parallel processing** (chs. 7-8C, due M 2017/02/13, 10:00 AM)

Do the following problems from the textbook: 7.5, 7.6, 7.13, 8.9, 8.13

**Homework 4: Mesh- and torus-connected computers** (chs. 9-12, due M 2017/03/06, 10:00 AM)

Do the following problems, from the textbook or as defined below: 9.2, 9.21, 10.7, 11.1, 11.5, 12.2

Using the result of the analysis for optimized shearsort in Section 9.3, compare the speed of shearsort for sorting 4

a. Tall, 4

b. Square, 2

c. Wide,
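For reference on the algorithm being compared, plain (unoptimized) shearsort alternates snake-order row phases with ascending column phases. A sequential simulation sketch is below; it is an assumed minimal version that uses full row/column sorts in place of the mesh's odd-even transposition steps, so it shows the phase structure rather than the step counts analyzed in Section 9.3:

```python
import math

def shearsort(mesh):
    """Sequential simulation of shearsort on an r x c mesh: alternate
    snake-order row phases (even rows ascending, odd rows descending)
    with ascending column phases; ceil(log2(r)) + 1 phase pairs leave
    the mesh sorted in snake order."""
    r, c = len(mesh), len(mesh[0])
    phases = (math.ceil(math.log2(r)) + 1) if r > 1 else 1
    for _ in range(phases):
        for i in range(r):                 # row phase, snake directions
            mesh[i].sort(reverse=(i % 2 == 1))
        for j in range(c):                 # column phase, always ascending
            col = sorted(mesh[i][j] for i in range(r))
            for i in range(r):
                mesh[i][j] = col[i]
    return mesh

print(shearsort([[9, 4, 7], [1, 8, 2], [6, 3, 5]]))
# -> [[1, 2, 3], [6, 5, 4], [7, 8, 9]] (snake order)
```

The number of phase pairs depends only on the number of rows r, which is why tall, square, and wide meshes holding the same number of keys sort at different speeds.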

**Homework 5: Hypercubic and other networks** (chs. 13-16, due W 2017/03/15, 10:00 AM)

Do the following problems from the textbook: To be posted by 2017/03/06

Updated on January 31, 2017, for winter 2017.

The following sample exam using problems from the textbook is meant to indicate the types and levels of problems, rather than the coverage (which is outlined in the course calendar). Students are responsible for all sections and topics, in the textbook and class handouts, that are not explicitly excluded in the study guide that follows the sample exam, even if the material was not covered in class lectures.

*Sample Midterm Exam (105 minutes)*

Textbook problems 2.3, 3.5, 5.5 (with *i* + *s* corrected to *j* + *s*), 7.6a, and 8.4ac; note that problem statements might change a bit for a closed-book exam.

*Midterm Exam Study Guide*

The following sections of Chapters 1-8 of the textbook, including the six new chapters 6A-6C (expanding on Chapter 6) and 8A-8C (expanding on Chapter 8), are excluded from the midterm exam:

3.5, 4.5, 4.6, 6A.6, 6B.3, 6B.5, 6C.3, 6C.4, 6C.5, 6C.6, 7.6, 8A.5, 8A.6, 8B.2, 8B.5, 8B.6

*Sample Final Exam (150 minutes)*

Textbook problems 1.10, 6.14, 9.5, 10.5, 13.5a, 14.10, 16.1; note that problem statements might change a bit for a closed-book exam.

*Final Exam Study Guide*

The following sections of Chapters 1-16 of the textbook are excluded from the final exam: all midterm exclusions, plus 9.6, 12.6, 13.5, 15.5, 16.5, 16.6

Each student will review a subfield of parallel processing or do original research on a selected and approved topic. A tentative list of research topics is provided below; however, students should feel free to propose their own topics for approval. A publishable report earns an "A" for the course, regardless of homework grades. See the course calendar for the schedule and due dates, and the Research Paper Guidelines for formatting tips.

1. Shared Memory Consistency: Models and Implementations (Assigned to: TBD)

[Stei04] Steinke, R. C. and G. J. Nutt, "A Unified Theory of Shared Memory Consistency," *J. ACM*, Vol. 51, No. 5, pp. 800-849, September 2004.

[Adve10] Adve, S. V. and H.-J. Boehm, "Memory Models: A Case for Rethinking Parallel Languages and Hardware," *Communications of the ACM*, Vol. 53, No. 8, pp. 90-101, August 2010.

2. Area/Time/Power Trade-offs in Designing Universal Circuits (Assigned to: TBD)

[Bhat08] Bhatt, S. N., G. Bilardi, and G. Pucci, "Area-Time Tradeoffs for Universal VLSI Circuits," *Theoretical Computer Science*, Vol. 408, Nos. 2-3, pp. 143-150, November 2008.

[Leis85] Leiserson, C. E., "Fat-Trees: Universal Networks for Hardware-Efficient Supercomputing," *IEEE Trans. Computers*, Vol. 3, No. 10, pp. 892-901, October 1985.

3. Optimized Interconnection Networks for Parallel Processing (Assigned to: TBD)

[Gupt06] Gupta, A. K., and W. J. Dally, "Topology Optimization of Interconnection Networks," *IEEE Computer Architecture Letters*, Vol. 5, No. 1, pp. 10-13, January-June 2006.

[Ahon04] Ahonen, T., D. A. Siguenza-Tortosa, H. Bin, and J. Nurmi, "Topology Optimization for Application-Specific Networks-on-Chip," *Proc. Int'l Workshop System-Level Interconnect Prediction*, pp. 53-60, 2004.

4. Trade-offs in Low- vs High-Dimensional Meshes and Tori (Assigned to: TBD)

[Dall90] Dally, W. J., "Performance Analysis of *k*-ary *n*-cube Interconnection Networks," *IEEE Trans. Computers*, Vol. 39, No. 6, pp. 775-785, June 1990.

[Agar91] Agarwal, A., "Limits on Interconnection Network Performance," *IEEE Trans. Parallel and Distributed Systems*, Vol. 2, No. 4, pp. 398-412, October 1991.

5. Implementing Deadlock-Free Routing via Turn Prohibition (Assigned to: TBD)

[Glas94] Glass, C. J. and L. M. Ni, "The Turn Model for Adaptive Routing," *J. ACM*, Vol. 41, No. 5, pp. 874-902, September 1994.

[Levi10] Levitin, L., M. Karpovsky, and M. Mustafa, "Minimal Sets of Turns for Breaking Cycles in Graphs Modeling Networks," *IEEE Trans. Parallel and Distributed Systems*, Vol. 21, No. 9, pp. 1342-1353, September 2010.

6. Swapped and Biswapped Networks: A Comparative Study (Assigned to: TBD)

[Parh05] Parhami, B., "Swapped Interconnection Networks: Topological, Performance, and Robustness Attributes," *J. Parallel and Distributed Computing*, Vol. 65, No. 11, pp. 1443-1452, November 2005.

[Xiao10] Xiao, W. J., B. Parhami, W. D. Chen, M. X. He, and W. H. Wei, "Fully Symmetric Swapped Networks Based on Bipartite Cluster Connectivity," *Information Processing Letters*, Vol. 110, No. 6, pp. 211-215, 15 February 2010.

7. Robust Task Scheduling Algorithms for Parallel Processors (Assigned to: TBD)

[Ghos97] Ghosh, S., R. Melhem, and D. Mosse, "Fault Tolerance through Scheduling of Aperiodic Tasks in Hard Real-Time Multiprocessor Systems," *IEEE Trans. Parallel and Distributed Systems*, Vol. 8, No. 3, pp. 272-284, March 1997.

[Beno08] Benoit, A., M. Hakem, and Y. Robert, "Fault Tolerant Scheduling of Precedence Task Graphs on Heterogeneous Platforms," *Proc. Int'l. Symp. Parallel and Distributed Processing*, pp. 1-8, 2008.

8. Artificial Neural Networks as Parallel Systems and Algorithms (Assigned to: TBD)

[Take92] Takefuji, Y., *Neural Network Parallel Computing*, Kluwer, 1992.

[Yao99] Yao, X., "Evolving Artificial Neural Networks," *Proc. IEEE*, Vol. 87, No. 9, pp. 1423-1447, September 1999.

9. Adaptable Parallelism for Real-Time Performance and Reliability (Assigned to: TBD)

[Moro96] Moron, C. E., "Designing Adaptable Real-Time Fault-Tolerant Parallel Systems," *Proc. Int'l Parallel Processing Symp.*, pp. 754-758, 1996.

[Hsiu09] Hsiung, P.-A., C.-H. Huang, and Y.-H. Chen, "Hardware Task Scheduling and Placement in Operating Systems for Dynamically Reconfigurable SoC," *J. Embedded Computing*, Vol. 3, No. 1, pp. 53-62, 2009.

10. Distributed System-Level Malfunction Diagnosis in Multicomputers (Assigned to: TBD)

[Soma87] Somani, A. K., V. K. Agarwal, and D. Avis, "A Generalized Theory for System Level Diagnosis," *IEEE Trans. Computers*, Vol. 36, pp. 538-546, 1987.

[Pelc91] Pelc, A., "Undirected Graph Models for System-Level Fault Diagnosis," *IEEE Trans. Computers*, Vol. 40, No. 11, pp. 1271-1276, November 1991.

11. The MapReduce Approach to Parallel Processing (Assigned to: TBD)

[Dean10] Dean, J. and S. Ghemawat, "MapReduce: A Flexible Data Processing Tool," *Communications of the ACM*, Vol. 53, No. 1, pp. 72-77, January 2010.

[Ston10] Stonebraker, M., et al., "MapReduce and Parallel DBMSs: Friends or Foes?" *Communications of the ACM*, Vol. 53, No. 1, pp. 64-71, January 2010.

12. Transactional Memory: Concept and Implementations (Assigned to: TBD)

[Laru08] Larus, J. and C. Kozyrakis, "Transactional Memory," *Communications of the ACM*, Vol. 51, No. 7, pp. 80-88, July 2008.

[Dice09] Dice, D., Y. Lev, M. Moir, and D. Nussbaum, "Early Experience with a Commercial Hardware Transactional Memory Implementation," *Proc. 14th Int'l Conf. Architectural Support for Programming Languages and Operating Systems*, 2009, pp. 157-168.

13. The Notion of Reliability Wall in Parallel Computing (Assigned to: TBD)

[Yang12] Yang, X., Z. Wang, J. Xue, and Y. Zhou, "The Reliability Wall for Exascale Supercomputing," *IEEE Trans. Computers*, Vol. 61, No. 6, pp. 767-779, June 2012.

[Zhen09] Zheng, Z. and Z. Lan, "Reliability-Aware Scalability Models for High-Performance Computing," *Proc. Int'l Conf. Cluster Computing*, 2009, pp. 1-9.

14. FPGA-Based Implementation of Application-Specific Parallel Systems (Assigned to: TBD)

[Wang03] Wang, X. and S. G. Ziavras, "Parallel Direct Solution of Linear Equations on FPGA-Based Machines," *Proc. Int'l Parallel and Distributed Processing Symp.*, 2003.

[Wood08] Woods, R., J. McAllister, G. Lightbody, and Y. Yi, *FPGA-Based Implementation of Signal Processing Systems*, Wiley, 2008.

15. Biologically-Inspired Parallel Algorithms and Architectures (Assigned to: TBD)

[Furb09] Furber, S., "Biologically-Inspired Massively-Parallel Architectures—Computing Beyond a Million Processors," *Proc. 9th Int'l Conf. Application of Concurrency to System Design*, 2009, pp. 3-12.

[Lewi09] Lewis, A., S. Mostaghim, and M. Randall (eds.), *Biologically-Inspired Optimization Methods*, Springer, 2009.

Here are some guidelines for preparing your research poster. The idea of the poster is to present your research results and conclusions thus far, to get oral feedback during the session from the instructor and your peers, and to provide the instructor with something to comment on before your final report is due. Please send a PDF copy of the poster via e-mail by midnight on the poster presentation day.

Posters prepared for conferences must be colorful and eye-catching, as they typically compete with dozens of other posters for the attendees' attention. Here is an example of a conference poster. Such posters are often mounted on a colored cardboard base, even if the pages themselves are standard PowerPoint slides. In our case, you should aim for a "plain" poster (loose sheets, to be taped to the wall in our classroom) that conveys your message in a simple and direct way. Eight to ten pages, each resembling a PowerPoint slide, would be an appropriate goal. You can organize the pages into a 2 x 4 (2 columns, 4 rows), 2 x 5, or 3 x 3 array on the wall. The top two pages might contain the project title, your name, the course name and number, and a very short (50-word) abstract. The final two can contain your conclusions and directions for further work (including work that does not appear in the poster but will be included in your research report). The rest should contain brief descriptions of your ideas, with emphasis on diagrams, graphs, tables, and the like, rather than text, which is very difficult for a visitor to absorb in a limited time span.

HW1 grades: Range = [73, 94], Mean = 86, Median = 88

HW2 grades: Range = [51, 82], Mean = 67, Median = 64

HW3 grades: Range = [32, 95], Mean = 70, Median = 77

HW4 grades: Range = [00, 00], Mean = 00, Median = 00

HW5 grades: Range = [00, 00], Mean = 00, Median = 00

Overall HW grades: Range = [00,00], Mean = 00, Median = 00

Midterm exam grades: Range = [53, 93], Mean = 74, Median = 75

Final exam grades: Range = [00, 00], Mean = 00, Median = 00
*All grades listed above are in percent*.

Course letter grades: Range = [X, Y], Mean = 0.0, Median = 0.0

**Required text:** B. Parhami,

The following journals contain a wealth of information on new developments in parallel processing:

The following are the main conferences of the field: Int'l Symp. Computer Architecture (ISCA, since 1973), Int'l Conf. Parallel Processing (ICPP, since 1972), Int'l Parallel & Distributed Processing Symp. (IPDPS, formed in 1998 by merging IPPS/SPDP, which were held since 1987/1989), and ACM Symp. Parallel Algorithms and Architectures (SPAA, since 1988).

UCSB library's electronic journals, collections, and other resources

**Motivation:** The ultimate efficiency in parallel systems is to achieve a computation speedup factor of
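Presumably the ideal referred to here is a speedup factor of p with p processors. In practice, Amdahl's law caps the achievable speedup whenever a fraction f of the work is inherently sequential; a quick illustration (`amdahl_speedup` is an illustrative helper name, not from the textbook):

```python
def amdahl_speedup(p, f):
    """Amdahl's law: speedup with p processors when a fraction f
    of the work is inherently sequential."""
    return 1.0 / (f + (1.0 - f) / p)

# Even a small sequential fraction caps the speedup: as p grows,
# the speedup approaches 1/f (here, 20) no matter how many processors.
print(amdahl_speedup(16, 0.05))    # ~ 9.14
print(amdahl_speedup(1024, 0.05))  # ~ 19.64
```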

*Catalog entry:* 254B. Advanced Computer Architecture: Parallel Processing (4) PARHAMI. *Prerequisites: ECE 254A. Lecture, 4 hours.* The nature of concurrent computations. Idealized models of parallel systems. Practical realization of concurrency. Interconnection networks. Building-block parallel algorithms. Algorithm design, optimality, and efficiency. Mapping and scheduling of computations. Example multiprocessors and multicomputers.

**History:** The graduate course ECE 254B was created by Dr. Parhami shortly after he joined UCSB in 1988. It was first taught in spring 1989 as ECE 594L, Special Topics in Computer Architecture: Parallel and Distributed Computations. A year later, it was converted to ECE 251, a regular graduate course. In 1991, Dr. Parhami led an effort to restructure and update UCSB's graduate course offerings in the area of computer architecture. The result was the creation of the three-course sequence ECE 254A/B/C to replace ECE 250 (Adv. Computer Architecture) and ECE 251. The three new courses were designed to cover high-performance uniprocessing, parallel computing, and distributed computer systems, respectively. In 1999, based on a decade of experience in teaching ECE 254B, Dr. Parhami published the textbook

Offering of ECE 254B in winter 2016 (PDF file)

Offering of ECE 254B in winter 2014 (PDF file)

Offering of ECE 254B in winter 2013 (PDF file)

Offering of ECE 254B in fall 2010 (PDF file)

Offering of ECE 254B in fall 2008 (PDF file)

Offerings of ECE 254B from 2000 to 2006 (PDF file)