In the following descriptions, selected items from B. Parhami’s list of publications are provided in brackets.
Defining the Field
In the literal sense of the term, parallel processing (that is, using multiple
processors and/or controllers to handle various tasks concurrently) is found in
virtually every computer. Research in parallel processing, however, has a
somewhat narrower focus: that of multiple processors or computers cooperating to
execute a single computational problem with greater speed, throughput,
cost-effectiveness, or reliability compared to any uniprocessor. The
processors or computers, and their communication mechanisms, can be homogeneous
or heterogeneous. Control can be centralized or distributed. The programming
model can entail a shared address space or explicit message passing. Memory
access can be uniform or nonuniform in latency. Communication between processors
can be direct (point-to-point) or indirect (multilevel switched). These
variations, and their many possible combinations, are studied from the viewpoint
of unifying models or theories, fundamental limits, distinguishing properties,
and suitability to specific application domains. Both hardware
design/construction issues and software/algorithm aspects are actively pursued.
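Since the shared-address-space and message-passing programming models are two of the key distinctions above, a minimal sketch may help fix the difference. The example below is illustrative only (it is not drawn from Professor Parhami's publications; the function names are made up) and computes the same sum both ways using only the Python standard library:

```python
import threading
import multiprocessing as mp


def _partial_sum(chunk, queue):
    """Worker for the message-passing variant: send a partial sum back as a message."""
    queue.put(sum(chunk))


def shared_memory_sum(values, num_threads=4):
    """Shared address space: threads accumulate into one variable behind a lock."""
    total = 0
    lock = threading.Lock()

    def worker(chunk):
        nonlocal total
        s = sum(chunk)        # purely local computation
        with lock:            # synchronized update of shared state
            total += s

    chunks = [values[i::num_threads] for i in range(num_threads)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total


def message_passing_sum(values, num_procs=4):
    """Explicit message passing: no shared variables; results travel through a queue."""
    queue = mp.Queue()
    chunks = [values[i::num_procs] for i in range(num_procs)]
    procs = [mp.Process(target=_partial_sum, args=(c, queue)) for c in chunks]
    for p in procs:
        p.start()
    partials = [queue.get() for _ in procs]
    for p in procs:
        p.join()
    return sum(partials)


if __name__ == "__main__":
    data = list(range(1000))
    assert shared_memory_sum(data) == message_passing_sum(data) == sum(data)
```

In the first variant all workers see one address space and must synchronize their updates; in the second, each worker owns its data and cooperation happens only through explicit messages, which is the style that scales to multicomputers built of commodity components.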
Parallel processing, once viewed as an exotic technology, is now a pervasive one. Small
shared-memory multiprocessors are sprouting everywhere and large-scale
multicomputers built of commodity components have become quite cost-effective.
In the domain of massive parallelism, tens of thousands of processors already
appear in some systems and multimillion-node supercomputers are being
contemplated. With so many processors, optimization of processor design and its
various interfaces, scalability of interconnects, and tolerance to processor or
link failures are major issues. These problems have received a lot of attention
but much remains to be done. Professor Parhami’s work centers on the interface
between parallel architectures and algorithms. He studies the problem of
algorithm design for general-purpose parallel computers and its “converse”,
the provision of architectural features in systems to help improve computational
efficiency, economy, and reliability. These take the form of more efficient
communication or fault tolerance mechanisms and, at the extreme, the design of
algorithm-based special-purpose architectures.
Recently, Professor Parhami has also become involved in studying the theoretical foundations of large-scale and hierarchical interconnection networks for parallel processing.
Diagram depicting the Hamiltonicity of a biswapped network, a particular 2-level hierarchical architecture, built of Hamiltonian components (based on joint work with Dr. Wenjun Xiao and others).
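As a rough illustration of the construction named in the caption, the sketch below builds a small biswapped network from a Hamiltonian basis graph (a cycle) and searches for a Hamiltonian cycle by backtracking. The node labeling and swap rule follow the commonly cited biswapped definition and are an assumption here, not code taken from the joint papers; `biswapped` and `hamiltonian_cycle` are illustrative names.

```python
from itertools import product


def biswapped(basis_edges, n):
    """Assumed biswapped construction: nodes are (part, cluster, proc) with
    part in {0, 1}; each cluster is a copy of the basis graph on `proc`;
    swap edges join (0, c, p) to (1, p, c)."""
    adj = {(s, c, p): set() for s, c, p in product((0, 1), range(n), range(n))}
    for s, c in product((0, 1), range(n)):
        for (u, v) in basis_edges:              # intra-cluster edges from the basis
            adj[(s, c, u)].add((s, c, v))
            adj[(s, c, v)].add((s, c, u))
    for c, p in product(range(n), range(n)):    # inter-part swap edges
        adj[(0, c, p)].add((1, p, c))
        adj[(1, p, c)].add((0, c, p))
    return adj


def hamiltonian_cycle(adj):
    """Backtracking search; adequate for the small example used here."""
    nodes = list(adj)
    start = nodes[0]
    path = [start]
    visited = {start}

    def extend():
        if len(path) == len(nodes):
            return start in adj[path[-1]]        # can the cycle be closed?
        for nxt in adj[path[-1]]:
            if nxt not in visited:
                path.append(nxt)
                visited.add(nxt)
                if extend():
                    return True
                visited.remove(nxt)
                path.pop()
        return False

    return path if extend() else None


if __name__ == "__main__":
    n = 4                                        # basis graph: the 4-node cycle C4
    cycle_edges = [(i, (i + 1) % n) for i in range(n)]
    net = biswapped(cycle_edges, n)              # 2 * n * n = 32 nodes
    ham = hamiltonian_cycle(net)
    print("Nodes:", len(net), "| Hamiltonian cycle found:", ham is not None)
```

The cycle basis stands in for any Hamiltonian component; exhaustive search of this kind is only feasible for toy sizes, which is why structural results such as those depicted in the figure matter for large-scale networks.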