News

ECE Prof. Kaustav Banerjee among scientists named Clarivate Analytics’ “2019 Highly Cited Researchers”

November 19th, 2019

photo of Kaustav Banerjee
Clarivate Analytics names 17 UC Santa Barbara scientists and social scientists to its 2019 list of highly cited researchers

The researchers have been named among the most influential scientists in the world, according to the 2019 list released by Clarivate Analytics (formerly Thomson Reuters).

The annual list recognizes researchers in the sciences and social sciences who produced multiple papers ranking in the top 1% by citations for their field and year of publication, demonstrating significant research influence among their peers. The papers surveyed include those published and cited during the period 2008-2018.

The 2019 list contains 6,216 highly cited researchers in various fields from around the world, and 2,491 highly cited researchers are recognized for their cross-field performance.

The UCSB Current – "Citing Excellence" (full article)

Clarivate Analytics – "Highly Cited Researchers"

Banerjee's COE Profile

ECE Prof. Dan Blumenthal’s FRESCO project featured in The UCSB Current article “Quiet Light for Future Data Centers”

November 14th, 2019

illustration of a data center
Blumenthal’s project aims to bring the data center into an energy efficient scalable future

The deluge of data we transmit across the globe via the internet-enabled devices and services that come online every day has required us to become much more efficient with the power, bandwidth and physical space needed to maintain the technology of our modern online lives and businesses.

“Much of the world today is interconnected and relies on data centers for everything from business to financial to social interactions,” said Daniel Blumenthal, a professor of electrical and computer engineering at UC Santa Barbara. The amount of data now being processed is growing so fast that the power needed just to get it from one place to another along the so-called information superhighway constitutes a significant portion of the world’s total energy consumption, he said. This is particularly true of interconnects — the part of the internet infrastructure tasked with getting data from one location to another.

“Think of interconnects as the highways and the roads that move data,” Blumenthal said. There are several levels of interconnects, from the local types that move data from one device on a circuit to the next, to versions that are responsible for linkages between data centers. The energy required to power interconnects alone is 10% of the world’s total energy consumption and climbing, thanks to the growing amount of data that these components need to turn from electronic signals to light, and back to electronic signals. The energy needed to keep the data servers cool also adds to total power consumption.

“The amount of worldwide data traffic is driving up the capacity inside data centers to unprecedented levels and today’s engineering solutions break down,” Blumenthal explained. “Using conventional methods as this capacity explodes places a tax on the energy and cost requirements of optical communications between physical equipment, so we need drastically new approaches.”

As the demand for additional infrastructure to maintain the performance of the superhighways increases, the physical space needed for all these components and data centers is becoming a limiting factor, creating bottlenecks of information flow even as data processing chipsets increase their capacity to what could be a whopping 100 terabytes per second for a single chip in the not-too-distant future. This level of scaling was unheard of just a handful of years ago, and now it appears that is where the world is headed.

“The challenge we have is to ramp up for when that happens,” said Blumenthal, who also serves as director for UC Santa Barbara’s Terabit Optical Ethernet Center, and represents UC Santa Barbara in Microsoft’s Optics for the Cloud Research Alliance.

This challenge is now a job for Blumenthal’s ARPA-E project called FRESCO: FREquency Stabilized COherent Optical Low-Energy Wavelength Division Multiplexing DC Interconnects. Bringing the speed, high data capacity and low-energy use of light (optics) to advanced internet infrastructure architecture, the FRESCO team aims to solve the data center bottleneck while bringing energy usage and space needs to a sustainable and engineerable level.

The UCSB Current – "Quiet Light for Future Data Centers" (full article)

Blumenthal's COE Profile

Blumenthal's Optical Communications and Photonic Integration Group

ECE Assistant Professor Galan Moody receives Air Force Young Investigator Award for quantum computing

October 21st, 2019

illustration of an all-electric-all-on-chip quantum photonic platform
Moody aims to create an optical quantum computing platform in which all of the essential components are integrated onto a single semiconductor chip

Quantum computers use the fundamentals of quantum mechanics to potentially speed up the process of solving complex computations. Suppose you need to perform the task of searching for a specific number in a phone book. A classical computer will search each line of the phone book until it finds a match. A quantum computer could search the entire phone book at the same time by assessing each line simultaneously and return a result much faster.
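For context, the textbook result behind this analogy (a standard quantum-search bound, not a figure from the article) is Grover's algorithm: finding one marked entry among $N$ unsorted entries takes on the order of $\sqrt{N}$ quantum queries, versus roughly $N/2$ classical lookups on average:

$$k_{\text{quantum}} \approx \frac{\pi}{4}\sqrt{N} \qquad \text{vs.} \qquad k_{\text{classical}} \approx \frac{N}{2}$$

For a phone book with a million entries, that is on the order of 800 quantum queries instead of about 500,000 classical ones.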

The difference in speed is due to the computer’s basic unit for processing information. In a classical computer, that basic unit is called a bit, an electrical or optical pulse that represents either 0 or 1. A quantum computer’s basic unit is a qubit, which can represent many combinations of 0 and 1 at the same time. It is this characteristic that may allow quantum computers to speed up calculations. The downside of qubits is that they exist in a fragile quantum state that is vulnerable to environmental noise, such as changes in temperature. As a result, generating and managing qubits in a controlled environment poses significant challenges for researchers.
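As a brief aside in standard quantum-information notation (not specific to Moody's platform), a single qubit is written as a superposition

$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

where the complex amplitudes $\alpha$ and $\beta$ set the probabilities $|\alpha|^2$ and $|\beta|^2$ of reading out 0 or 1; environmental noise that disturbs these amplitudes is what makes qubits fragile.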

Moody, an assistant professor of electrical and computer engineering, has proposed a solution to overcome the poor efficiency and performance of existing quantum computing prototypes that use light to encode and process information. Optical systems are attractive because they naturally link quantum computing and networking in the same physical framework. However, existing technology still requires off-chip optical operations, which dramatically reduce efficiency, performance and scalability. In his project, “Heterogeneous III-V/Silicon Photonics for All-on-Chip Linear Optical Quantum Computing,” Moody aims to create an optical quantum computing platform in which all of the essential components are integrated onto a single semiconductor chip.

Moody is one of 40 early-career scientists selected for a 2019 Young Investigator Award from the Air Force Office of Scientific Research. Winners receive $450,000 over three years to support their work. The program is intended to foster research by young scientists that supports the Air Force’s mission to control and maximize utilization of air, space and cyberspace, as well as related challenges in science and engineering.

The UCSB Current – "Pushing Quantum Photonics" (full article)

Moody's COE Profile

Moody's Quantum Photonics Lab

ECE Professor Dmitri Strukov and researchers interviewed in Nature Communications article “Building Brain-Inspired Computing”

October 20th, 2019

photo of Dmitri Strukov
Strukov (electrical engineer, UCSB), Giacomo Indiveri (electrical engineer, U. of Zurich), Julie Grollier (materials physicist, UMPhy) and Stefano Fusi (neuroscientist, Columbia U.) talked to Nature Communications about the opportunities and challenges in developing brain-inspired computing technologies, namely neuromorphic computing, and advocated for effective collaborations across multidisciplinary research areas to support this emerging community

Please tell us about your research background and how it brought you to work on neuromorphic computing?
Dmitri Strukov (DS): I was trained as an electrical engineer and got interested in developing circuits and architectures using emerging electron devices during graduate school at Stony Brook University. Afterwards, I moved to Hewlett Packard Laboratories as a postdoctoral researcher and switched my attention to device physics. I spent most of my time developing models for mixed electronic-ionic conductors that could be used to implement resistive switching devices (known nowadays as memristors). This experience naturally led me to choose neuromorphic computing—one of the most promising applications of memristors—as my research area after I joined the University of California, Santa Barbara. My major focus now is on the development of practical mixed-signal circuits for artificial neural networks. This is a challenging topic because it spans a broad range of disciplines, from electron devices to algorithms. In the long term, I hope that our research will lead to practically useful neuromorphic systems that will be used in everyday life.

Why do we need neuromorphic computing?
DS: The answer is quite obvious if one interprets neuromorphic computing as a biologically inspired computing technology facilitated by powerful deep learning algorithms that have already shown a profound impact on science, technology, and our society. However, when considering the very original definition of neuromorphic computing coined by Carver Mead at Caltech, which can be loosely put as “analog computing hardware organized similarly to the brain”, the answer becomes less clear to me. This is in part because such a definition still leaves some ambiguity in how closely neuromorphic computing hardware should emulate the brain and what functionalities are expected from such systems. One could call neuromorphic computing hardware that merely borrows a few tricks from biology, such as perceptron-like distributed parallel information processing, to perform simple machine learning tasks. Conversely, should it also integrate more advanced functions (e.g. spike-time encoding, various types of plasticity, homeostasis, etc.) and be capable of realizing cognitive functions at higher levels? Nevertheless, the primary motivation is arguably to achieve the extreme energy efficiency of the brain using neuromorphic computing. In fact, this will be the main advantage of analog and mixed-signal implementations of simple perceptron networks as well as of advanced spiking neural networks. Some existing results, albeit for simple tasks like image classification, have shown many orders of magnitude improvement in energy and speed compared to purely digital computing, and some of them can even surpass the performance of the human brain.

What can we learn from our brain for information processing? How to emulate human brain using electronic devices and where are we now?
DS: There is a general consensus on the usefulness of some tricks that are employed by the brain, such as analog and in-memory computing, massively parallel processing, spike coding, and task-specific connectivity in neural networks. Many of these ideas have already been implemented in state-of-the-art neuromorphic systems. I do believe, however, that we should not blindly try to mimic all features of the brain—at least not without a good engineering reason—and we should consider simpler approaches based on more conventional technologies to achieve the same goal. On the other hand, we should also keep in mind that over millions of years the evolution of biological brains has been constrained to biomaterials optimized for specific tasks, while we now have a much wider range of material choices in the context of neuromorphic engineering. Therefore, there could be profound differences in design rules. For example, the brain has to rely on the poor conductors offered by biomaterials, which have presumably shaped the principles of brain structure and operation in ways that are not necessarily applicable to neuromorphic computing based on highly conducting materials.

What are the major hurdles to date towards realizing neuromorphic computing from your perspective?
DS: In my opinion, there are tough challenges at several levels. From a technology perspective, the foremost challenge is various device non-idealities, such as the notorious device-to-device variations in current-voltage characteristics and the poor yields of memory devices—one of the key components of neuromorphic circuits (I will elaborate more on these issues in the answer to question 6). In addition to these technological hurdles, I reckon that there might be other substantial economic and confidence barriers to achieving such a highly innovative yet high-risk technology. Ultimately, to be successful, neuromorphic computing hardware would have to win the competition against conventional digital circuits, which are supported by existing infrastructure and enormous investments accumulated over the years. Fortunately, this barrier does not appear to be as high as, say, 20 years ago, because of slowing innovation (mainly in feature-size scaling) in conventional CMOS technology, the very high development and production cost of sub-10-nm CMOS circuits, and the general trend towards more specialized computing hardware. Apart from hardware issues, the progress on the algorithmic front is clearly not sufficient to cope with the explosive growth in demand for neuromorphic computing either, especially for higher-cognition tasks. The lack of suitable algorithms, in turn, imposes large uncertainty on the design of neuromorphic hardware.

Additional Questions:

  • What is your vision to tackle these major hurdles? Any suggestions?
  • What could be the measure of when neuromorphic computing is ready to replace current digital computing?
  • Any suggestions on how researchers, including but not limited to materials scientists, device physicists, circuit engineers, computer scientists, neuroscientists or even policy makers, can better work together in this very multidisciplinary field?

Nature Communications – “Building Brain-Inspired Computing” (full article)

Strukov's COE Profile

Strukov Research Group

ECE Professor Yasamin Mostofi’s lab research in The UCSB Current article “Your Video Can ID You Through Walls”

October 7th, 2019

illustration from Mostofi video
Researchers in Mostofi’s lab have enabled, for the first time, determining whether the person behind a wall is the same individual who appears in given video footage, using only a pair of WiFi transceivers outside

This novel video-WiFi cross-modal gait-based person identification system, which they refer to as XModal-ID (pronounced Cross-Modal-ID), could have a variety of applications, from surveillance and security to smart homes. For instance, consider a scenario in which law enforcement has video footage of a robbery. They suspect that the robber is hiding inside a house. Can a pair of WiFi transceivers outside the house determine if the person inside the house is the same as the one in the robbery video? Questions such as this have motivated this new technology.

“Our proposed approach makes it possible to determine if the person behind the wall is the same as the one in video footage, using only a pair of off-the-shelf WiFi transceivers outside,” said Mostofi. “This approach utilizes only received power measurements of a WiFi link. It does not need any prior WiFi or video training data of the person to be identified. It also does not need any knowledge of the operation area.”

The proposed methodology and experimental results will appear at the 25th International Conference on Mobile Computing and Networking (MobiCom) on October 22. The project was funded by a pair of grants from the National Science Foundation that focus on through-wall imaging and occupancy assessment.

In the team’s experiments, one WiFi transmitter and one WiFi receiver are behind walls, outside a room where a person is walking. The transmitter sends a wireless signal whose received power is measured by the receiver. Then, given video footage of a person from another area — and by using only such received wireless power measurements — the receiver can determine whether the person behind the wall is the same person seen in the video footage.

This innovation builds on previous work in the Mostofi Lab, which has pioneered sensing with everyday radio frequency signals such as WiFi since 2009.

“However, identifying a person through walls, from candidate video footage, is a considerably challenging problem,” said Mostofi. Her lab’s success in this endeavor is due to the new methodology they developed.

“The way each one of us moves is unique. But how do we properly capture and compare the gait information content of the video and WiFi signals to establish if they belong to the same person?”

The researchers have proposed a new way that, for the first time, can translate the video gait content to the wireless domain.
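As a purely conceptual sketch of what "comparing gait content" might look like in code (a generic similarity measure, not the published XModal-ID method; the signal names and values are hypothetical placeholders), one could score two gait signatures with a normalized cross-correlation:

```python
import numpy as np

def gait_similarity(sig_a, sig_b):
    """Score how similar two 1-D gait signatures are (e.g., a cadence profile
    extracted from video vs. one derived from WiFi received-power measurements).
    Uses the peak of a normalized cross-correlation; values near 1.0 suggest the
    same walker. Illustrative stand-in only, not the XModal-ID algorithm."""
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-12)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full") / len(a)
    return float(corr.max())

# Hypothetical usage with synthetic signals standing in for real measurements
t = np.linspace(0, 20, 400)
video_sig = np.sin(t)                                      # placeholder video-derived cadence
wifi_sig = np.sin(t + 0.1) + 0.05 * np.random.randn(400)   # placeholder WiFi-derived cadence
print(gait_similarity(video_sig, wifi_sig))
```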

The UCSB Current – "Your Video Can ID You Through Walls" (full article)

Mostofi's COE Profile

Mostofi Group

ECE Prof. Dmitri Strukov receives UCSB Institute for Energy Efficiency (IEE) funding to make AI and Machine Learning more efficient

October 2nd, 2019

photo of Strukov
Strukov plans to make AI and machine learning more efficient by advancing computer architecture and hardware, using brain-inspired technology to solve some of the hardest optimization problems

Training artificial intelligence (AI), an activity that involves processing vast amounts of data, is an energy-intensive process. One recent estimate by a researcher at the University of Massachusetts Amherst suggested that training a single AI creates a carbon dioxide (CO2) footprint of 626,000 pounds, or five times the lifetime emissions of the average American car. That same paper asserted that using a state-of-the-art language model for natural language processing equals the CO2 emissions of one human for 30 years. Both findings provide a jarring quantification of AI’s environmental impact, which becomes more troublesome since nearly every industry uses AI and machine learning (ML) to improve decision making and problem solving.

Strukov’s project, “Energy-Efficient Mixed-Signal Neuro-Optimization Hardware,” employs mixed-signal neuromorphic circuits with integrated metal-oxide memristors, which are non-volatile memory devices that his group has been developing for the past ten years. Such circuits enable a very dense, fast, and energy-efficient implementation of a probabilistic vector-by-matrix multiplier, the most common operation in bio-inspired optimization algorithms. Preliminary results from his group show that the proposed hardware implementation is estimated to be 70 times faster and 20,000 times more energy efficient than the most efficient conventional approach.
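As a rough illustration of the core operation (a simplified numerical model, not Strukov's actual mixed-signal circuit; the noise model and parameter values are assumptions for illustration), a memristor crossbar performs a vector-by-matrix multiply in the analog domain: input voltages applied to the rows produce column currents I = V·G, with device-to-device variation appearing as spread in the programmed conductances G:

```python
import numpy as np

def crossbar_vmm(voltages, conductances, variation=0.05, rng=None):
    """Idealized analog vector-by-matrix multiply on a memristor crossbar.

    voltages:     1-D array of row input voltages (volts)
    conductances: 2-D array G of programmed memristor conductances (siemens), rows x columns
    variation:    relative device-to-device spread, modeled here as Gaussian noise on G
    Returns the column output currents I = V @ G_noisy (Kirchhoff current summation).
    """
    if rng is None:
        rng = np.random.default_rng()
    g_noisy = conductances * (1.0 + variation * rng.standard_normal(conductances.shape))
    return voltages @ g_noisy

# Toy example: 4 row inputs driving a 4 x 3 crossbar
rng = np.random.default_rng(0)
G = np.abs(rng.normal(1e-4, 2e-5, size=(4, 3)))   # conductances around 100 microsiemens
v = np.array([0.2, 0.0, 0.1, 0.3])                # input voltages
print(crossbar_vmm(v, G, rng=rng))                # column currents in amps
```

The appeal of doing this in analog hardware is that the multiply-accumulate happens "for free" in the physics of the array, rather than through many digital operations.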

The project received $50,000 in seed funding from the IEE with which Strukov plans to redesign existing circuits and build an interdisciplinary team of people who specialize in algorithms, application domain, and system integration.

The project was among the four awarded inaugural seed funding grants by the IEE. The awards were made possible through a gift from an anonymous donor. The selected projects are in the early stages of development and align with at least one of IEE’s three central themes: computing and communications, smart societal infrastructure, and the food-energy-water nexus. The seed funding is intended to produce preliminary results that UCSB scientists can use to apply for major external funding to further fund and expand their research.

IEE News – "Making AI More Energy Efficient" (full article)

Strukov's COE Profile

Strukov Research Group

ECE’s Prof. Dan Blumenthal represents UCSB in Microsoft’s Optics for the Cloud Research Alliance

August 27th, 2019

illustration of MS cloud
UCSB selected as charter member of Microsoft’s Optics for the Cloud Research Alliance

The world is moving to the cloud at an ever-increasing rate. The cloud allows people to store, access and process data on a worldwide distributed computer connected by the internet, rather than being limited to local computers and hard drives. The process, known as cloud computing, has evolved to deliver information at high speeds and performance levels while becoming simpler than ever for users to set up, maintain and access. It is also enabling new applications.

The cloud today is built with a sophisticated infrastructure of data centers connected by fiber optic networks. Due to the exploding demand for cloud computing, technology companies like Microsoft are researching ways to scale data centers and the networks that connect them to deliver increased capacity and faster communication speeds, while at the same time being cost effective and energy efficient. To address the continued growth of the cloud, Microsoft Research has assembled a cross-disciplinary team of scientists to explore how optics can revolutionize the next generation of the cloud. Optical technologies are used to encode data onto light and transmit this data over sophisticated fiber optic data communications networks at extremely high speeds.

UC Santa Barbara is one of six universities in the world — and the only institution from the United States — selected as inaugural members of Microsoft’s Optics for the Cloud Research Alliance. ECE Professor Daniel Blumenthal represents the university in the alliance.

The UCSB Current – "Next-Gen Cloud Computing" (full article)

Blumenthal's COE Profile

Blumenthal's Optical Communications and Photonic Integration Group (OCPI)

ECE postdocs Chunfeng Cui and Hongwei Zhao invited to participate in 2019 Rising Stars in EECS Workshop

August 13th, 2019

photos of Zhao and Cui
ECE postdoctoral researchers are identified among the most promising women in their field

UC Santa Barbara’s Chunfeng Cui and Hongwei Zhao are among roughly 70 women nationwide invited to participate in the 2019 Rising Stars in Electrical Engineering and Computer Science (EECS) Workshop hosted by the University of Illinois at Urbana-Champaign. Previously held at MIT, Carnegie Mellon, Stanford, and UC Berkeley, the Rising Stars in EECS workshop seeks the brightest and most promising women in the field during the early stages of their academic careers.

“It will be a great opportunity to learn from the best in academia and connect with other up-and-coming women,” said Zhao, who defended her PhD in electrical and computer engineering (ECE) at UCSB in June 2019.

Zhao will soon begin a postdoctoral research position at UCSB with her PhD advisor, Jonathan Klamkin, an associate professor of electrical and computer engineering. Prior to UCSB, Zhao received her master’s degree from the Institute of Semiconductors, Chinese Academy of Sciences, and completed her undergraduate studies in electronics at Huazhong University of Science and Technology.

The annual workshop unites women who are interested in pursuing academic careers in computer science, computer engineering, and electrical engineering. Participants will present their research, interact with faculty from top-tier universities, and receive advice for advancing their careers.

“I could not be more grateful or happy to be invited,” said Cui, a postdoctoral researcher at UCSB working with Zheng Zhang, a professor in the ECE Department. “I look forward to meeting my academic peers and sharing our experiences as female researchers.”

Cui received her PhD in computational mathematics, with a specialization in numerical optimization for tensor data analysis, from the Chinese Academy of Sciences. Cui’s research spans two main areas: uncertainty quantification for electronic and photonic design automation, and tensor methods for machine learning. A tensor is a mathematical object that generalizes matrices to multi-dimensional data in the context of machine learning.

Rising Stars 2019 in EECS Workshop

The UCSB Current – "Rising Stars" (full article)

Cui's Uncertainty- and Data-Driven Computing Laboratory Bio

Zhao's Integrated Photonics Laboratory (iPL) Bio

ECE Professor Dan Blumenthal’s research in COE Convergence article “Moving Precision Lasers from Bench Scale to Chip Scale”

August 7th, 2019

Artist’s concept of a Brillouin laser
In the cover article of the January 2019 issue of Nature Photonics, UCSB researchers and their collaborators at Honeywell, Yale, and Northern Arizona University describe a significant milestone in this pursuit

Spectrally pure lasers lie at the heart of precision high-end scientific and commercial applications, thanks to their ability to produce near-perfect single color light. A laser’s capacity to produce such light is measured in terms of its linewidth, or coherence, which is the ability to emit a constant frequency over a certain period of time before the frequency changes.

Researchers go to great lengths to build highly coherent, near-single-frequency lasers for high-end systems, such as atomic clocks. Today, however, because these lasers are large and occupy racks full of equipment, they are relegated to bench tops in the laboratory, limiting their application.

The challenge is to move the performance of high-end lasers onto photonic micro-chips, dramatically reducing cost and size while making the technology available to a wide range of applications including spectroscopy, navigation, quantum computation, and optical communications. Achieving such performance at the chip scale would also go a long way toward addressing the challenge posed by the Internet’s exploding data-capacity requirements and the resulting increase in world-wide energy consumption of data centers and their fiber-optic interconnects.

In the Nature Photonics article, the researchers describe the milestone of a chip-scale laser capable of emitting light with a fundamental linewidth of less than 1 Hz — quiet enough to move demanding scientific applications to the chip scale.
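To put that number in perspective (using a standard relation for a Lorentzian lineshape, not a figure from the paper), linewidth $\Delta\nu$ and coherence time $\tau_{\text{coh}}$ are inversely related:

$$\tau_{\text{coh}} \approx \frac{1}{\pi\,\Delta\nu}$$

so a 1 Hz fundamental linewidth corresponds to a coherence time of roughly a third of a second, whereas a typical chip-scale laser with a linewidth of order 100 kHz stays coherent for only a few microseconds.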

To be impactful, these low-linewidth lasers must be incorporated into photonic integrated circuits (PICs) — the equivalents of computer micro-chips for light — that can be fabricated at wafer scale in commercial micro-chip foundries. “To date, there hasn’t been a method for making a quiet laser with this level of coherence and narrow linewidth at the photonic-chip scale,” says co-author and team lead Dan Blumenthal, professor in the Department of Electrical and Computer Engineering at UC Santa Barbara. The current generation of chip-scale lasers is inherently noisy and has relatively large linewidths. New innovations have been needed that function within the fundamental physics associated with miniaturizing these high-quality lasers.

The project was funded under the Defense Advanced Research Projects Agency’s (DARPA) OwlG initiative.

COE Convergence – "Moving Precision Lasers from Bench Scale to Chip Scale" (full article on pg. 25)

Blumenthal's COE Profile

Blumenthal's Optical Communications and Photonic Integration Group (OCPI)

ECE Professor Kaustav Banerjee’s work in COE Convergence article “Paving the Way for Graphene”

August 3rd, 2019

illustration of a graphene highway
Electrical & Computer Engineering Prof. Banerjee’s Nanoelectronics Research Lab (NRL) develops an innovative synthesis process, overcoming a stubborn obstacle to wide-scale deployment of graphene in the semiconductor industry

Ever since graphene, the flexible, two-dimensional form of graphite (think of a 1-atom-thick sheet of pencil lead), was discovered in 2004, researchers around the world have been working to develop commercially scalable applications for this incredibly high-performance material.

Graphene is 100 to 300 times stronger than steel and has a maximum electrical current density orders of magnitude greater than that of copper, making it the strongest, thinnest, and, by far, the most reliable electrically conductive material in the world. This is why it is an extremely promising material for interconnects, the fundamental components that connect billions of transistors on microchips in computers and other electronic devices in the modern world.

For over two decades, interconnects have been made of copper, but that metal encounters fundamental physical limitations as electrical components that incorporate it shrink to the nanoscale. “As you reduce the dimensions of copper wires, their resistivity shoots up,” says Kaustav Banerjee, professor in the Department of Electrical and Computer Engineering at UC Santa Barbara’s College of Engineering. “Resistivity is a material property that is not supposed to change, but at the nanoscale, all properties change.”

As the resistivity increases, copper wires generate more heat, reducing their current carrying capacity. It’s a problem that poses a fundamental threat to the $500 billion semiconductor industry. Graphene has the potential to solve that and other issues, but a major obstacle is designing graphene micro-components that can be manufactured on-chip on a large scale in a commercial foundry.

“Whatever the component, be it inductors, interconnects, antennas, or anything else you want to do with graphene, industry will move forward with it only if you find a way to synthesize graphene directly onto silicon wafers,” Banerjee says. He explains that all the manufacturing processes related to the transistors, which are made first, are referred to as the ‘front end.’ “To synthesize something at the back end, that is, after the transistors are fabricated, you face a tight thermal budget, such that you cannot exceed a temperature of about 500 degrees Celsius. If the silicon wafer gets too hot during the back-end processes employed to fabricate the interconnects, the other elements that are already on the chip may get damaged, or some impurities may start diffusing, changing the characteristics of the transistors.”

Now, after a decade-long quest to achieve graphene interconnects, Banerjee’s lab has developed a method to implement high-conductivity nanometer-scale doped multilayer graphene (DMG) interconnects that are compatible with high-volume manufacturing of integrated circuits. A paper describing the novel process was selected as one of the top papers from more than 230 accepted for oral presentations at the 2018 IEEE International Electron Devices Meeting (IEDM), and was one of only two papers included in the first annual “IEDM Highlights” section of the December 2018 issue of the journal Nature Electronics.

COE Convergence – "Paving the Way for Graphene" (full article on pg. 31)

Banerjee's COE Profile

Banerjee's Nanoelectronics Research Lab (NRL)