Professor Parhami's research activities and contributions can be divided into four areas, discussed below in current priority order. Due to circumstances and the special needs of Iran, where he lived until 1986, Professor Parhami devoted much time during the late 1970s and early 1980s to problems in computer technology transfer, including the solution of language-dependent problems (e.g., in information input and output). This fourth research focus has since been phased out in favor of the first three. For an overview of Professor Parhami's current research areas and ongoing projects, see his research program page.

Contributions to Computer Arithmetic

Contributions to Parallel Processing

Contributions to Dependable Computing

Contributions to Technology Transfer

**1990-92** PI, "Tradeoffs and Optimality Issues in Search Processors," NSF (CISE - MIPS Program)
**1989-xx** Annual travel grants for attending technical conferences, UCSB Academic Senate
**1988-90** Research Seed Funds, College of Engineering, Univ. of California, Santa Barbara
**1987-88** PI, "Systolic Computer Arithmetic," Carleton Univ. Research Award
**1978-83** PI, "Design & Application of Microcomputer-Based Systems," SHUT
**1977-81** PI, "Planning & Transfer of Informatics Technology," Government of Iran
**1976-79** PI, "Design & Implementation of Low-Cost Persian Output Displays," Government of Iran
**1975-77** PI, "Numerical Control of Machine Tools," Sharif Univ. of Technology (SHUT)
**1975-77** Co-PI with F. Mavaddat, "Generation of Classic Persian Scripts on Graphic Devices," SHUT
**1973-74** Faculty Associate, "Fault-Tolerant Computing," NSF, UCLA (A. Avizienis, PI)

In the following narrative, numbers in square brackets refer to items in B. Parhami's list of publications.

Professor Parhami was introduced to the field of computer arithmetic through the study of arithmetic error codes [3]. In 1976, borrowing ideas from low-cost arithmetic error codes, he proposed the class of low-cost residue number systems (RNS) [14], [16], with important simplifying properties for efficient high-speed arithmetic. Despite this long-standing interest, it was the 700 km weekly round-trip bus rides on the mountainous road between Tehran and Rasht in the spring of 1985 that got him hooked on this field. The bus trips were necessitated by his teaching duties at Guilan University, undertaken to satisfy a "national service" requirement for being granted a sabbatical leave outside Iran, and by the suspension of all westbound flights from Tehran following a sudden escalation in the war with Iraq. During a typical trip, Professor Parhami would fill many pages of a note pad with equations, numerical calculations, and designs for a particular problem that interested him at the time.
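The general idea behind low-cost RNS can be illustrated with a minimal Python sketch (an illustration of the concept only, not a reconstruction of any specific design in [14], [16]; the moduli choice and function names are assumptions made here). The pairwise relatively prime moduli 7, 15, and 16 are of the "low-cost" forms 2^k - 1 and 2^k; addition and multiplication then proceed independently in each residue channel, with no carries crossing channel boundaries, and decoding uses the Chinese Remainder Theorem.

```python
from math import prod

MODULI = (7, 15, 16)   # 2**3 - 1, 2**4 - 1, 2**4: pairwise relatively prime
M = prod(MODULI)       # dynamic range of the system: 1680

def to_rns(x):
    """Encode an integer as a tuple of residues, one per modulus."""
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    """Digit-parallel addition: each residue channel works independently."""
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def rns_mul(a, b):
    """Digit-parallel multiplication, again with no inter-channel carries."""
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    """Decode a residue tuple via the Chinese Remainder Theorem."""
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)   # modular inverse (Python 3.8+)
    return x % M

a, b = 123, 456
assert from_rns(rns_add(to_rns(a), to_rns(b))) == (a + b) % M
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % M
```

Because each channel is only a few bits wide, all channels can compute in parallel in the time of the slowest small operation, which is the source of the speed advantage; results are valid as long as they stay within the dynamic range M.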

The first paper resulting from these notes established the minimal lookup table size required to obtain *k* digits of convergence in the initial step of division by functional iteration [50]. Other publications have dealt with the design of high-speed systolic arithmetic engines, beginning with a fast addition scheme for binary signed-digit numbers [56]. This work in turn led to a new systolic up/down counter which extends the capabilities of previous designs and establishes a link between systolic design principles and redundant number representations [49]. From this latter work has grown a general theory of carry-free and limited-carry computer arithmetic [51], [66] which not only unifies all known redundant number representation schemes under the umbrella of Generalized Signed-Digit (GSD) representations but also leads to new forms with potential applications in the design of systolic arithmetic engines.
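The flavor of carry-free addition on binary signed-digit (BSD) numbers, with digits drawn from {-1, 0, 1}, can be conveyed by the following sketch. This is a textbook-style scheme under assumptions made here, not necessarily the algorithm of [56]: each position forms its digit sum, then selects a transfer digit and an interim sum digit using only the sign of the next lower position, so no carry ever propagates more than one place and all digits can be produced in parallel.

```python
def bsd_add(x, y):
    """Carry-free addition of binary signed-digit numbers.
    x, y: digit lists in {-1, 0, 1}, least significant digit first.
    Returns the BSD sum, one digit longer than the wider input."""
    n = max(len(x), len(y))
    x = x + [0] * (n - len(x))
    y = y + [0] * (n - len(y))
    p = [x[i] + y[i] for i in range(n)]   # position sums, each in -2..2
    t = [0] * (n + 1)                     # transfer digits (t[0] = 0)
    w = [0] * n                           # interim sum digits
    for i in range(n):
        # Is the incoming transfer guaranteed to lie in {0, 1}?
        low_nonneg = (i == 0) or (p[i - 1] >= 0)
        if p[i] == 2:
            t[i + 1], w[i] = 1, 0
        elif p[i] == -2:
            t[i + 1], w[i] = -1, 0
        elif p[i] == 1:
            t[i + 1], w[i] = (1, -1) if low_nonneg else (0, 1)
        elif p[i] == -1:
            t[i + 1], w[i] = (0, -1) if low_nonneg else (-1, 1)
        # p[i] == 0: transfer and interim digit both stay 0
    return [w[i] + t[i] for i in range(n)] + [t[n]]
```

The case analysis guarantees that every final digit w[i] + t[i] stays in {-1, 0, 1}; for example, bsd_add([1, 0, 1], [1, 1, 0]) adds 5 and 3 to give a representation of 8.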

The main results of the GSD line of research to date can be summarized as: (a) Formulation of necessary and sufficient conditions for a redundant number representation system to allow carry-free or limited-carry arithmetic [51], [66], [99], (b) Development of various arithmetic algorithms and their associated support functions — zero, sign, and overflow detection as well as radix and digit-set conversions — for practical use of the unifying theory [52], [55], [81], [86], [89], and (c) Application of insights gained from GSD theory to other problems in computer arithmetic such as the design of number radix converters, high-speed counters, multioperand adders, and fast area-efficient multipliers [58], [63], [75], [82], [93], [107], [124], [156], [Asilo00c].

Professor Parhami's work in lookup-table and multiplexer-based realizations of certain functions [59], [61], [85], [173] is closely linked to his interest in RNS arithmetic and its systolic or massively parallel implementations [81]. In this area, he has applied the lookup-table-based approach to the design of optimal binary-to-residue and residue-to-binary converters [91], [98], [100], [142]. He has also led an effort in devising novel highly efficient RNS division algorithms [92], [97] based on detailed error analysis of approximate Chinese-Remainder-Theorem decoding [108]. These results may help broaden the appeal of RNS arithmetic beyond known signal processing applications. Parallel and redundant RNS computations are also part of his contributions in this area [127], [129].

Professor Parhami's recent research in computer arithmetic has focused on VLSI implementation aspects and associated speed/compactness/flexibility/power tradeoffs. For VLSI-based designs, he has dealt with theoretical foundations [117], [174], [188], [190], [197], [198], [200], [207], building-block subsystems [134], [135], [136], [175], [187], [189], fault-tolerant arithmetic [201], [204], efficient encodings for redundant representations [196], [205], [219], [jvsp], and application-specific designs. In the latter area, which is intimately related to Professor Parhami's research on parallel processing, digital filtering [129], [143], [168], [206], data compression [144], [158], [194], and FFT processors [128], [130] have been considered.

Publication of a textbook on computer arithmetic [179], [180] can be considered the culmination of more than two decades of research and education in this field, as is a more recent encyclopedia chapter on the topic [203]. The aforementioned textbook is now in its second edition [oup1], [oup2].

In the following narrative, numbers in square brackets refer to items in B. Parhami's list of publications.

A significant part of Professor Parhami's parallel processing research has focused on associative or search processor architectures and their applications. In 1971-72, he designed a disk-based back-end string processor called RAPID (acronym for Rotating Associative Processor for Information Dissemination). RAPID and CASSM, a similar system coincidentally introduced in the same conference session, were the forerunners of special-purpose systems which later came to be known as "database machines," with numerous research projects and publications.

The RAPID paper [2] and the comprehensive survey resulting from the background research [4] have been referenced by scores of researchers and textbook authors. In particular, Kohonen extensively quotes these papers in his book *Content-Addressable Memories* (Springer-Verlag, 2nd Ed., 1987, pp. 4 & 166-172). Ozkarahan describes RAPID in *Database Machines and Database Management* (Prentice Hall, 1986, pp. 220-221). The survey paper [4] contains an original classification of associative devices that has been used extensively in the literature (e.g., see the books *Database Computers: Principles, Architectures, and Techniques*, by S. Y. W. Su, McGraw-Hill, 1988, pp. 182-184, and *A Hierarchical Associative Processing System*, by H. J. Stuttgen, Springer-Verlag, 1985, p. 32). Work on RAPID was partially funded by ONR, and later there was some interest in the construction of a prototype. However, this plan coincided with Professor Parhami's return to Iran and was not pursued.

Professor Parhami's other early publications on associative/array processing include [5-10], [12], [17], [25], [30]. Many of these publications deal with fault tolerance issues (see the next section on dependable computing). In [17], it is shown that certain table lookup applications can benefit from a two-level associative memory organized in much the same way as conventional two-level storage hierarchies. As a continuation of the above, more recent studies have dealt with speed/cost tradeoffs in the design of VLSI search processors. Unlike relatively small associative memories that have been implemented in the past, large systems cannot rely on simple operand-broadcasting and reduction-by-wired-logic paradigms. Hence, the search cycle time can become quite long, nullifying part of the advantage of massive parallelism. Professor Parhami has shown how large associative memories can be systematically built from smaller building-block elements used in a pipelined arrangement and discussed the speed/cost tradeoffs among different types of building blocks [67], [68], [70], [71], [74], [83], [147]. He has also dealt with other optimality and algorithm design issues in search processors [65], [81], [87], [95], [112], [115]. Other publications have dealt with search and computational algorithms [110], [127], [148].

Professor Parhami's recent work on array processing deals with data-driven architectures in which control signals are pipelined in the same way as data signals, leading to uniformly short wires and low fan-out (no broadcasting of control signals). This work is motivated by the observation that SIMD-type parallel processors, which for certain problems offer the most cost-effective form of supercomputing [104], [105], are limited in the long run by signal propagation delay due to long wires and broadcasting. Both general theory [122], [140], [178] and specific applications of the design method [130], [143], [144], [157], [158], [163], [168], [194], [206] have been considered. Related to the preceding contributions is Professor Parhami's research on digit-parallel and digit-serial systolic arithmetic computations (described under his work in computer arithmetic).

Alongside the work on associative and array processing, Professor Parhami has maintained an interest in general-purpose massively parallel computation, from dataflow systems [43] to architectures and algorithms for mesh-connected [79], [88], [90], [103] and chordal ring [124] networks. For chordal ring networks, he introduced and evaluated periodically regular architectures [102], [109], [113], [131], [153], [169] that combine low node degree with small network diameter, leading to cost-effective and scalable systems. This work led to the insight that certain other commonly used interconnection networks can be viewed as pruned versions of denser networks [137] and that systematic pruning is an effective strategy for reducing the implementation cost of interconnection networks, while keeping most of their desirable properties [146], [151], [160], [209]. Different ways of pruning a network, or the same pruning strategy applied to multiple source networks, produce different descendant networks. Professor Parhami's work has shown that uncovering such relationships often leads to better understanding [184], [191], [211], ease of comparison [176], [212], and algorithmic uniformity (or, at least, portability). These goals have led Professor Parhami to theoretical investigation of classes of interconnection networks, using algebraic structures [208], [215], [ipl] or number theoretic results [216], [217], [218], [220], [221].
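The pruning idea can be conveyed with a toy experiment (a generic sketch under assumptions made here, not a reconstruction of the networks in the cited papers): start from an 8 × 8 torus, keep all row links, but retain a node's upward column link only when its coordinate sum is even. Every node then has degree 3 instead of 4, cutting the link count by 25%, while breadth-first search shows the diameter grows only modestly.

```python
from collections import deque
from itertools import product

def diameter(adj):
    """Diameter of a connected undirected graph via BFS from every node."""
    best = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

n = 8
nodes = list(product(range(n), repeat=2))

def torus(prune):
    """Build an n x n torus; if prune is True, keep a node's upward
    column link only at nodes whose coordinate sum is even."""
    adj = {v: set() for v in nodes}
    for x, y in nodes:
        # row (horizontal) links are always kept
        adj[(x, y)].add(((x + 1) % n, y))
        adj[((x + 1) % n, y)].add((x, y))
        # column (vertical) links are pruned by node parity
        if not prune or (x + y) % 2 == 0:
            adj[(x, y)].add((x, (y + 1) % n))
            adj[(x, (y + 1) % n)].add((x, y))
    return adj

full, pruned = torus(False), torus(True)
```

Since the pruned network is a spanning subgraph of the torus, its diameter can only be at least as large; the experiment makes the cost/performance tradeoff of pruning concrete on a small example.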

Another thread in Professor Parhami's research on interconnection networks is the design and evaluation of hierarchical networks that offer advantages in cost-effectiveness and scalability, because they are modular and packageable. Examples include [114], [116], [119], [120], [123], [125], [126], [128], [132], [139], [141], [154], [161], [171], [172], [195], [210]. An interesting result, obtained recently, establishes that a certain method for constructing hierarchical networks leads to a Hamiltonian network if the component networks used are Hamiltonian [222], [jpdc]. Related to the preceding efforts is work that demonstrates advantages for the derived networks in terms of VLSI layout area and maximum wire length, a byproduct of which is the derivation of better layouts for previously proposed networks [131], [155], [159], [164], [167], [185], [186], [193]. Professor Parhami has also written papers that point out a need for shift of focus and discuss certain misconceptions in the field [181], [182].

Research issues for parallel architectures and algorithms are intertwined: parallel architectures are devised not just from hardware considerations but with an eye toward simplifying algorithm development or improving running time, and new results on parallel algorithms often lead to architectural enhancements that remove potential performance bottlenecks. In the area of parallel algorithms, Dr. Parhami has contributed to the development of new computational schemes [81], [103], [110], [115], [116], [119], [121], [138], [145], [148], [166], [192], [202], complexity or performance analysis [76], [87], [90], [112], and investigation of certain aspects of scheduling to meet combined correctness and timeliness requirements [69], [84], [177].

Publication of a textbook on parallel processing [162], [170] can be considered the culmination of more than three decades of research and education in this field.

In the following narrative, numbers in square brackets refer to items in B. Parhami's list of publications.

Professor Parhami's research in dependable computing started in 1969 with his master's thesis under Professor Robert A. Short. This work resulted in improved bounds for the reliability of sequential machines modeled as stochastic automata [1]. In 1973, he discovered the unidirectional error detection capabilities of low-cost arithmetic error codes and formulated the concept of two-dimensional burst errors for data stored on magnetic media [3], [28]. This work triggered interest in unidirectional error detecting (UED) codes and has been widely referenced (see, e.g., Theorem 2.4 in the text *Error Detecting Codes, Self-Checking Circuits and Applications* by Wakerly, North-Holland, 1978, p. 44). Applications of error codes in the design of self-checking logic circuits led to [13] and [15], with extensions and applications reported in [22] and [30]. The important design rule that "every data flow loop of a self-checking system must contain at least one checker" was the original contribution of [13]. Subsequently, Professor Parhami introduced the notions of order-preserving and difference-preserving UED codes which can be more efficient than general UED codes in some contexts [77].
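The appeal of low-cost residue checking in general can be seen in a small Python sketch (an illustration of the broad technique of residue check codes, under assumptions made here, not of the specific codes in [3]): the check symbol x mod (2^k - 1) is computed with nothing more than k-bit chunk additions with end-around carry, the check is preserved by addition, and a single flipped bit (an arithmetic change of ±2^j) is always detected because 2^j is never a multiple of 2^k - 1.

```python
def residue(x, k=4):
    """Compute x mod (2**k - 1) using only k-bit chunk additions with
    end-around carry, exploiting 2**k == 1 (mod 2**k - 1)."""
    m = (1 << k) - 1
    while x > m:
        x = (x & m) + (x >> k)   # fold the high chunk back into the low one
    return 0 if x == m else x

def checked_add(a, ca, b, cb, k=4):
    """Add two check-protected values; the check symbol of the sum is the
    residue of the sum of the two check symbols."""
    return a + b, residue(ca + cb, k)

x = 0b1011_0110                      # 182; its mod-15 check symbol is 2
assert residue(x) == x % 15
s, cs = checked_add(100, residue(100), 27, residue(27))
assert cs == residue(s)              # check symbols stay consistent
assert residue(x ^ (1 << 5)) != residue(x)   # a single flipped bit is caught
```

Because the reduction uses only narrow additions rather than division, the checker hardware is cheap, which is the sense in which such codes are "low-cost."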

Professor Parhami dealt with the incorporation of fault tolerance into associative memories and processors as part of his doctoral research under Professor Algirdas Avizienis [6], [7]. This work was later extended into an integrated cost/reliability model for highly parallel homogeneous array processors [9], [12]. Other contributions to this area include [8], [25], [36]. The signal processor described in [8] and some of the results in self-checking design resulted from consulting work with Ultrasystems, Inc. (Newport Beach, CA). Professor Parhami's work on interconnection redundancy [36] was the first formal publication of its kind and contained both a compendium of known results as well as new structures and analyses. While on sabbatical at the University of Waterloo, Professor Parhami participated in a project on improving the testability of digital systems [57].

Over the years, Professor Parhami has published a number of survey and tutorial papers in the area of dependable computing [19], [20], [27], [31], [53], [64]. This interest in compiling and systematizing various views of the field led to a comprehensive survey paper [96] that proposes a unified framework based on six levels of abstraction (defects, faults, errors, malfunctions, degradations, and failures) along with the associated terminology and techniques for describing, avoiding, and tolerating possible impairments to dependability at each level. Subsequently, a number of other overviews and methodological papers were published [106], [118], [133], [150], [199], [223]. Professor Parhami has also considered tradeoffs between correctness and timeliness and scheduling problems for replicated tasks run in multiprocessor environments [69], [84].

One focal point of Professor Parhami's research on dependable computing has been the introduction and refinement of a data-driven dependability assurance technique based on multichannel computations [60], [62], [72], [106], [118]. Additionally, he has dealt with voting algorithms and networks as important elements for synthesizing such schemes. For voting networks, he has shown that they are intimately related to selection and sorting networks and that in certain special cases, efficient designs can be obtained from arithmetic- or multiplexer-based decomposition [73], [75], [78]. For voting algorithms, he has presented a systematic view of various exact/inexact voting schemes, offered a new voting scheme dubbed "approval voting" in which each of several channels provides a set of "approved" values, rather than a single exact or approximate value, to the voter, and shown that voting algorithms belong to hierarchical complexity classes [76], [80], [94], [101], [121], [138], [223]. Professor Parhami proved, for example, that with an ordered object space, *n*-way plurality voting has the same complexity as sorting, whereas with unordered objects, the complexity is quadratic in *n*.
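The ordered-object-space case of this complexity result can be sketched in a few lines of Python (an illustration of the claim, not the algorithm of the cited papers): sorting brings equal channel outputs together, after which a single linear scan finds the plurality winner, so the whole procedure costs no more than a sort.

```python
def plurality_vote(values):
    """Plurality voting over an ordered object space.
    Sorting (O(n log n)) makes equal values adjacent; a linear scan then
    finds the longest run. Returns (winner, vote_count)."""
    s = sorted(values)
    best, best_count = s[0], 0
    run_val, run_count = s[0], 0
    for v in s:
        if v == run_val:
            run_count += 1
        else:
            run_val, run_count = v, 1
        if run_count > best_count:
            best, best_count = run_val, run_count
    return best, best_count
```

With unordered objects only pairwise equality tests are available, so each value must in the worst case be compared against every other, which is where the quadratic bound for the unordered case comes from.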

Professor Parhami's recent research on dependable computing has focused on fault-tolerance and graceful degradation schemes for highly parallel systems, particularly regular arrays of processors. One aspect of this line of research is the design of robust parallel algorithms. Professor Parhami has studied the design of robust algorithms that run at optimal or near-optimal speed when a limited number of faults exist (the most common case) and gracefully degrade with the occurrence of more faults [145], [152], [166], [192], [202]. Another aspect is the use of space and time redundancy to tolerate computation and communication faults [122], [140]. Related to this effort are studies that relate the computational capabilities and topological parameters of a parallel architecture to the number and distribution of faulty elements [183]. A separate track concerns fault tolerance methods in computer arithmetic [200], [201], [204].

Preparation of a forthcoming textbook on dependable computing can be considered the culmination of four decades of research and education in this field.

Professor Parhami's return to Iran in 1974 coincided with the Iranian Government's adoption of a policy for rapid computerization that created immense adjustment, training, and technology transfer problems [24]. An important aspect of the problem was a mismatch between the Latin-based I/O capabilities in imported machines and the requirements of the Persian language. Professor Parhami's contributions to the solution of these and related standardization problems [18], [23], [32], [33], [37], [38], [41], [45], [111] helped establish him as an international authority on Persian and Arabic language information processing. His pioneering work on identifying and categorizing language-dependent problems [23], [33], [38] has been widely referenced by researchers in this field. He developed the first Persian character recognition system [37], [41] and proposed the only keyboard and character-set standard that is applicable to both Persian and Arabic [45].

On the educational side of technology transfer, Professor Parhami authored two Persian-language textbooks. The first one, *Computer Appreciation* [44], was a best-selling book more than a decade after its initial release in 1984 (unfortunately, Professor Parhami did not benefit from the brisk sales, because he relinquished all rights upon leaving Iran in 1986!). The second book, *Computer Organization — Vol. 1: Basics of Hardware*, was part of a planned 3-volume series whose formal publication was scrapped due to Professor Parhami's departure from Iran. However, various drafts and the final manuscript for this book were used in several Iranian universities for over a decade. Professor Parhami also coauthored a 6000-term Glossary of Computers and Informatics [40], published a number of tutorial papers [11], [21], [26], [29], [34], [42], [48], [54], and held leadership positions in curriculum development activities that led to a standard nationwide computer science and engineering program [46], [47] offered by major universities in Tehran and other Iranian cities since 1983. Professor Parhami established the first numerical control laboratory facility in Iran [39], with help from a graduate student.

Professor Parhami's other publications with educational value include the tutorials and surveys mentioned in the previous sections, two encyclopedia articles [165], [203], a special-issue introduction [149], and a data structures paper [35] containing an "ingenious construct" (a referee's words) for dealing with multiple marriages that has both pedagogical and practical applications. A recent addition to these educational contributions is an undergraduate textbook on computer architecture [213], [214].