The Association for Computing Machinery (ACM) interviews ECE Professor Yuan Xie in its November 2017 “People of ACM” bulletin

November 15th, 2017

photo of yuan xie
“People of ACM” highlights the unique scientific accomplishments and compelling personal attributes of ACM members who are making a difference in advancing computing as a science and a profession. These bulletins feature ACM members whose personal and professional stories are a source of inspiration for the larger computing community.

What research area(s) is receiving most of your attention right now?
I am looking at application-driven and technology-driven novel circuits/architectures and design methodologies. My current research projects include novel architecture with emerging 3D integrated circuit (IC) and nonvolatile memory, interconnect architecture, and heterogeneous system architecture. In particular, my students and I have put a lot of effort into novel architectures for emerging workloads with an emphasis on artificial intelligence (AI). These novel architectures include computer architectures for deep learning neural networks, neuromorphic computing, and bio-inspired computing.

In your recent book Die-Stacking Architecture co-authored with Jishen Zhao, you predict that 3D memory stacking will be a computer architecture design that will become prevalent in the coming years. Will you tell us a little about 3D memory stacking?
Die-stacking technology is also called three-dimensional integrated circuits (3D ICs). The concept is to stack multiple layers of integrated circuits vertically, and connect them together with vertical interconnections called through-silicon vias (TSVs). My research group has been working on die-stacking architecture for more than a decade. We’ve been looking at different ways to innovate the processor architecture designs with this revolutionary technology. Recently, memory vendors have developed multi-layer 3D stacked DRAM products, such as Samsung’s High-bandwidth Memory (HBM) and Micron’s Hybrid Memory Cube (HMC). Using interposer technologies, processors can be integrated with 3D stacked memory into the same package, increasing the in-package memory capacity dramatically. The first commercial die-stacking architecture is the AMD Fury X graphics processing unit (GPU) with 4GB HBM die-stacking memory, which was officially released in 2015. Since then, we have seen many other products that integrate 3D memory, such as Nvidia’s Volta GPU, Google’s TPU2, and, most recently, Intel and AMD’s partnership on Intel’s Kaby Lake G series, which integrates AMD’s Radeon GPU and 4GB HBM2.

More questions & answers and Xie’s ACM Bio

  • How might the introduction of radically new hardware impact the existing ecosystem of software?
  • What are the possible architectural innovations in the AI era?

The Association for Computing Machinery (ACM)

Xie's COE Profile

Xie's Scalable Energy-efficient Architecture Lab (SEAL)

ECE Professor Kaustav Banerjee and researchers reveal an advance in precision superlattice materials

September 28th, 2017

illustration of an electron beam creating a 2D superlattice
Illustration shows an electron beam (in purple) being used to create a 2D superlattice made up of quantum dots having extraordinary atomic-scale precision and placement

Control is a constant challenge for materials scientists, who are always seeking the perfect material — and the perfect way of treating it — to induce exactly the right electronic or optical activity required for a given application.

One key challenge to modulating activity in a semiconductor is controlling its band gap. When a material is excited with energy, say, a light pulse, the wider its band gap, the shorter the wavelength of the light it emits. The narrower the band gap, the longer the wavelength.
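The band-gap-to-wavelength relation described here follows directly from the photon-energy formula E = hc/λ. As a quick illustrative sketch (using the handy photonics constant hc ≈ 1239.84 eV·nm; the function name is our own, not from the research):

```python
HC_EV_NM = 1239.84  # Planck constant times the speed of light, in eV*nm


def emission_wavelength_nm(band_gap_ev: float) -> float:
    """Approximate emission wavelength (nm) for a given band gap (eV)."""
    return HC_EV_NM / band_gap_ev


# A wider gap emits shorter wavelengths; a narrower gap, longer ones.
print(round(emission_wavelength_nm(1.8)))  # ~689 nm (deep red)
print(round(emission_wavelength_nm(1.4)))  # ~886 nm (near infrared)
```

The 1.8 and 1.4 eV values are the span reported for the engineered superlattice in this work, so the tunable range corresponds to emission from deep red into the near infrared.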

As electronics and the devices that incorporate them — smartphones, laptops and the like — have become smaller and smaller, the semiconductor transistors that power them have shrunk to the point of being not much larger than an atom. They can’t get much smaller. To overcome this limitation, researchers are seeking ways to harness the unique characteristics of nanoscale atomic cluster arrays — known as quantum dot superlattices — for building next generation electronics such as large-scale quantum information systems. In the quantum realm, precision is even more important.

New research conducted by UC Santa Barbara’s Department of Electrical and Computer Engineering reveals a major advance in precision superlattice materials. The findings by Professor Kaustav Banerjee, his Ph.D. students Xuejun Xie, Jiahao Kang and Wei Cao, postdoctoral fellow Jae Hwan Chu and collaborators at Rice University appear in the journal Nature Scientific Reports.

Their team’s research uses a focused electron beam to fabricate a large-scale quantum dot superlattice in which each quantum dot has a specific, predetermined size and is positioned at a precise location on an atomically thin sheet of the two-dimensional (2-D) semiconductor molybdenum disulphide (MoS2). When the focused electron beam interacts with the MoS2 monolayer, it turns that area — which is on the order of a nanometer in diameter — from semiconducting to metallic. The quantum dots can be placed less than four nanometers apart, so that they become an artificial crystal — essentially a new 2-D material where the band gap can be specified to order, from 1.8 to 1.4 electron volts (eV).

This is the first time that scientists have created a large-area 2-D superlattice — nanoscale atomic clusters in an ordered grid — on an atomically thin material on which both the size and location of quantum dots are precisely controlled. The process not only creates several quantum dots, but can also be applied directly to large-scale fabrication of 2-D quantum dot superlattices. “We can, therefore, change the overall properties of the 2-D crystal,” Banerjee said.

Each quantum dot acts as a quantum well, where electron-hole activity occurs, and all of the dots in the grid are close enough to each other to ensure interactions. The researchers can vary the spacing and size of the dots to vary the band gap, which determines the wavelength of light it emits.

“Using this technique, we can engineer the band gap to match the application,” Banerjee said. Quantum dot superlattices have been widely investigated for creating materials with tunable band gaps, but all were made using “bottom-up” methods in which atoms naturally and spontaneously combine to form a macro-object. Those methods make it inherently difficult to design the lattice structure as desired and, thus, to achieve optimal performance.

The UCSB Current – “Band Gaps, Made to Order” (full article)

Nature Scientific Reports – "Designing artificial 2D crystals with site and size controlled quantum dots"

Banerjee's COE Profile

Banerjee's Nanoelectronics Research Lab (NRL)

Professor Shuji Nakamura receives the 2017 Mountbatten Medal from the Great Britain-based Institution of Engineering and Technology (IET)

September 26th, 2017

photo of shuji nakamura
Nakamura selected by IET “in recognition of his pioneering development of blue LEDs as high-efficiency, low-power light sources, and in particular their contribution to the reduction of the world’s carbon footprint”

“This year we had a large number of entries and the standard was extremely high,” said Tim Constandinou, chair of the IET Awards and Prizes Committee. “The Achievement Awards allow us to recognize the huge impact that engineers have on all our lives. The winners are extremely talented and have achieved great things in their careers, whether they are a young professional demonstrating outstanding ability at the start of their journey or an engineer at the pinnacle of their career.”

Nakamura, who joined the UCSB faculty in 2000, is most certainly in the latter category. He is best known for his invention of the bright blue LED, for which he was selected as one of three winners of the Nobel Prize in Physics in 2014. Considered at the time a holy grail of solid-state lighting, the invention of the blue LED paved the way for the creation of the white LED, which has since revolutionized the world of lighting with its energy efficiency, sustainability and durability.

“It is a great honor to receive the IET’s Mountbatten Medal award,” Nakamura said. “The blue LEDs have been used as an efficient solid state lighting, which has contributed to overcome the global warming issues by reducing the consumption of energy, and thereby reducing carbon containing greenhouse gases.”

Nakamura and colleagues at UCSB’s Solid State Lighting and Energy Electronics Center continue to develop high-efficiency, high-power lighting by refining fabrication techniques, and creating laser-based lighting. They are also developing extremely energy efficient power electronics that could in the future reduce the energy consumption and improve the performance of electronics from cell phones to computers to automotive equipment and even the power grid.

Nakamura will receive his medal November 15 at the 2017 IET awards ceremony in London.

The UCSB Current – “At the Pinnacle” (full article)

IET Awards – Mountbatten Medal

The Solid State Lighting & Energy Electronics Center (SSLEEC)

UCSB’s Bowers & Theogarajan (ECE) and Goard (MCDB) part of an optical brain-imaging research team awarded $9 million from NSF

September 20th, 2017

brain activity illustration
The National Science Foundation awards $9 million to the group of neuroscientists, electrical engineers, molecular biologists, neurologists, bioengineers and physicists behind the NEMONIC (NExt generation MultiphOton NeuroImaging Consortium) project, which aims to push the boundaries of brain imaging by developing and widely sharing state-of-the-art brain-imaging techniques.

“The limit to understanding the brain is no longer the ability to store, process and analyze data,” said B.N. Queenan, associate director of the UCSB Brain Initiative. “The fundamental barrier is the ability to see the brain in action. As neuroscientists, we would love to watch brain cells going about their daily business. We want to record all the cells all the time, but that’s just not possible with the existing technologies. Fundamentally, we need to invent new ways of seeing what brains are up to.”

The NEMONIC group uses light to measure brain activity. The wavelengths of light that the human eye processes do not pass through brain tissue easily. Instead, they bounce off the surface of the brain, the skull or the skin and appear opaque, limiting the human ability to see internal brain activity. However, longer wavelengths of light can pass through brain tissue unobstructed. NEMONIC employs strategic combinations of these longer wavelengths to reach deeper into the brain and image the activity of cells that have been engineered to glow when stimulated.

“This is a team that can do anything in multiphoton neuroimaging,” said NEMONIC team leader Spencer L. Smith, associate professor of cell biology and physiology at the University of North Carolina School of Medicine. “The NEMONIC team has exactly the expertise to engineer new, robust optical solutions to the problem of imaging the brain.”

To remove the technological bottlenecks to understanding the mind and the brain, the federal government launched the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative in 2013. As the name implies, the initiative is focused on developing new tools and strategies to image, map, diagnose, understand and repair the brain.

The NSF is one of the federal agencies leading the BRAIN Initiative. This year, the NSF gave 17 Next Generation Networks for Neuroscience (NeuroNex) awards to support the development of new experimental tools, theoretical frameworks and computational models that can be widely shared to advance neuroscience research. With this award, UCSB is now a designated NeuroNex Neurotechnology Hub, making it a critical part of the national neuroengineering network.

The three-part NEMONIC project first will develop new, streamlined multiphoton imaging approaches. Second, the team will widely share the newly engineered technologies and strategies to promote the free and productive acquisition and exchange of data across the international neuroscience community.

“Labs around the world are imaging the brains of a range of animal species, but multiphoton microscopy systems are expensive and require significant expertise to build and use,” said Michael Goard, an assistant professor in UCSB’s Department of Molecular, Cellular, and Developmental Biology and in its Department of Psychological & Brain Sciences. “We want to make the multiphoton imaging process easier, cheaper and more robust, so we can all combine and analyze our data more effectively.”

Lastly, the NEMONIC team will capitalize on UCSB’s expertise in photonics and super-resolution techniques to push the boundaries of what is possible with optical neuroimaging. “Current methods of peering into the brain use bulky expensive lasers to generate the narrow femtosecond pulses needed for multiphoton imaging,” said NEMONIC team member Theogarajan, a professor in the campus’s Department of Electrical and Computer Engineering. “We are proposing a miniaturized multiphoton microscope based on cutting-edge photonic integrated circuits developed at UCSB, enabling live animal imaging and making multiphoton imaging cheaper.”

“Bringing light and electronics together is what UCSB is known for,” said Rod Alferness, dean of the UCSB College of Engineering. “UCSB is the West Coast headquarters of the American Institute for Manufacturing Integrated Photonics (AIM Photonics), where we integrate light-based approaches with electronics to invent and manufacture new telecommunication technologies. We are thrilled that UCSB can now deploy its particular talents in integrated photonic technology toward the brain.”

The UCSB Current – “Shedding Light on Brain Activity” (full article)

UCSB Brain Initiative

Theogarajan's Biomimetic Circuits & Nanosystems Group

Bowers' Optoelectronics Research Group

U.S. News & World Report once again ranks UCSB number 8 among the country’s top public universities

September 19th, 2017

photo of storke tower and campus
In its 2018 listing of the “Top 30 Public National Universities” in the country, U.S. News & World Report has ranked UC Santa Barbara No. 8. In the “Best National Universities” ranking, which includes both public and private institutions, UCSB placed No. 37.

UCSB’s College of Engineering ranked No. 20 among public universities on the U.S. News & World Report list of “Best Programs at Engineering Schools Whose Highest Degree is a Doctorate.”

In addition, UCSB placed No. 14 among public universities in the “Least Debt” section of the magazine’s ranking of student debt load at graduation. Also among public universities, UCSB placed No. 13 on the “Best Ethnic Diversity” ranking.

The magazine has just released its annual college rankings online; the “Best Colleges 2018” guidebook goes on sale today.

To rank colleges and universities for the Best Colleges 2018 guidebook, U.S. News & World Report assigned institutions to categories developed by the Carnegie Foundation for the Advancement of Teaching. UCSB’s category of national universities includes only institutions that emphasize faculty research and offer a full range of undergraduate majors, plus master’s degrees and doctoral programs.

UCSB, which this year experienced the most competitive admissions process in campus history, continues to attract the best of the best. Among prospective freshmen and undergraduate transfer students, academic qualifications and diversity remain exceptionally high. For the 2017-18 academic year, the average high school grade-point average of admitted applicants is 4.25, and the average total score on the required SAT Reasoning Test is 1996 out of a possible 2400.

The unprecedented academic qualifications and diversity of applicants made fall 2017 admissions the most selective in campus history. With 11 national centers and institutes, and more than 100 research units, UCSB offers unparalleled learning opportunities for undergraduate students. The world-class faculty includes six Nobel laureates, two Academy and Emmy Award winners, and recipients of a Millennium Technology Prize, a National Medal of Technology and Innovation and a Breakthrough Prize in Fundamental Physics.

The UCSB Current – "Top Ten Again" (full article)

ECE Ph.D. student Abhishek Badki and Prof. Pradeep Sen’s research with NVIDIA called “Computational Zoom” featured in The UCSB Current article “Picture Perfect”

August 1st, 2017

computational zoom original image

Badki, Sen and NVIDIA researchers develop a new technique that enables photographers to adjust image compositions after capture

When taking a picture, a photographer must typically commit to a composition that cannot be changed after the shutter is released. For example, when using a wide-angle lens to capture a subject in front of an appealing background, it is difficult to include the entire background and still have the subject be large enough in the frame.

Positioning the subject closer to the camera will make it larger, but unwanted distortion can occur. This distortion is reduced when shooting with a telephoto lens, since the photographer can move back while maintaining the foreground subject at a reasonable size. But this causes most of the background to be excluded. In each case, the photographer has to settle for a suboptimal composition that cannot be modified later.

ECE Ph.D. student Abhishek Badki and his advisor Pradeep Sen, along with NVIDIA researchers Orazio Gallo and Jan Kautz, have developed a new system that addresses this problem. Specifically, it allows photographers to compose an image post-capture by controlling the relative positions and sizes of objects in the image.

Computational Zoom, as the system is called, allows photographers the flexibility to generate novel image compositions — even some that cannot be captured by a physical camera — by controlling the sense of depth in the scene, the relative sizes of objects at different depths and the perspectives from which the objects are viewed.

For example, the system makes it possible to automatically combine wide-angle and telephoto perspectives into a single multi-perspective image, so that the subject is properly sized and the full background is visible. In a standard image, the light rays travel in straight lines into the camera at an angle specified by the focal length of the lens (the field of view angle). However, this new functionality allows photographers to produce physically impossible images in which the light rays “bend,” changing from a telephoto to a wide angle as they go through the scene.
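The relation between focal length and field-of-view angle mentioned above is the standard pinhole-camera formula, FOV = 2·atan(w / 2f) for sensor width w and focal length f. A minimal sketch (the 36 mm sensor width is an illustrative full-frame assumption, not a parameter from the paper):

```python
import math


def field_of_view_deg(focal_length_mm: float, sensor_width_mm: float = 36.0) -> float:
    """Horizontal field of view (degrees) for an ideal pinhole camera."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))


# Shorter focal lengths see a wider angle; longer ones a narrower angle.
print(round(field_of_view_deg(24), 1))   # wide-angle lens: ~73.7 degrees
print(round(field_of_view_deg(200), 1))  # telephoto lens: ~10.3 degrees
```

A multi-perspective image, in effect, lets this angle vary with scene depth instead of being fixed for the whole frame.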

Achieving the custom composition is a three-step process. First, the photographer must capture a “stack” of multiple images, moving the camera gradually closer to the scene between shots without changing the focal length of the lens. The system then uses the captured image stack, and a standard structure-from-motion algorithm, to automatically estimate the camera position and orientation for each image. Next, a novel multi-view 3D reconstruction method estimates “depth maps” for each image in the stack. Finally, all of this information is used to synthesize multi-perspective images which have novel compositions through a user interface.

“This new framework really empowers photographers by giving them much more flexibility later on to compose their desired shot,” said Pradeep Sen. “It allows them to tell the story they want to tell.”

The group’s research will be presented at SIGGRAPH 2017 (Special Interest Group on Computer GRAPHics and Interactive Techniques), the premier conference for technical research in computer graphics, held July 31 to August 3 in Los Angeles.

The UCSB Current – "Picture Perfect" (full article)

Sen's COE Profile

Badki's ECE webpage

ECE Ph.D. student Steve Bako and Prof. Pradeep Sen’s research with Disney and Pixar featured in The UCSB Current article “Intelligent Animation”

July 26th, 2017

before and after denoise image
Bako and Sen work with researchers at Disney Research and Pixar Animation Studios to develop a new technology based on artificial intelligence (AI) and deep learning that eliminates rendering noise and enables production-quality rendering at much higher speeds

Modern films and TV shows are filled with spectacular computer-generated sequences computed by rendering systems that simulate the flow of light in a three-dimensional scene and convert the information into a two-dimensional image. But computing the thousands of light rays (per frame) to achieve accurate color, shadows, reflectivity and other light-based characteristics is a labor-intensive, time-consuming and expensive undertaking. An alternative is to render the images using only a few light rays. That saves time and labor but results in inaccuracies that show up as objectionable “noise” in the final image.
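The trade-off described here is the usual Monte Carlo one: a pixel value estimated by averaging N random light samples has error that shrinks roughly as 1/√N, so few rays means visible noise. A toy sketch (not the studios' renderer; `estimate_pixel` and the uniform "radiance" samples are illustrative assumptions):

```python
import random


def estimate_pixel(n_rays: int, rng: random.Random) -> float:
    """Toy Monte Carlo pixel estimate: average n_rays random light samples
    whose true mean radiance is 0.5."""
    return sum(rng.random() for _ in range(n_rays)) / n_rays


rng = random.Random(0)  # fixed seed so the run is reproducible
coarse = estimate_pixel(16, rng)       # few rays: fast but noisy
fine = estimate_pixel(100_000, rng)    # many rays: slow but converged near 0.5
print(f"16 rays: {coarse:.3f}, 100000 rays: {fine:.3f}")
```

A denoiser tries to get the "many rays" look from the "few rays" budget, which is why it saves so much render time.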

Bako spent a year working at Pixar. The team tested the software by using millions of examples from the film “Finding Dory” to train a deep-learning model known as a convolutional neural network. Through this process, the system learned to transform noisy images into noise-free versions that resemble those computed with significantly more light rays. Once trained, the system successfully removed the noise on test images from entirely different films, such as Pixar’s latest release, “Cars 3,” and their upcoming feature “Coco,” even though they had completely disparate styles and color palettes.

“Noise is a really big problem for production rendering,” said Tony DeRose, head of research at Pixar. “This new technology allows us to automatically remove the noise while preserving the detail in our scenes.”

The work presents a significant step forward over previous state-of-the-art denoising methods, which often left artifacts or residual noise that required artists to either render more light rays or to tweak the denoising filter to improve the quality of a specific image. Disney and Pixar plan to incorporate the technology in their production pipelines to accelerate the movie-making process.

Bako will present the findings at SIGGRAPH 2017 (Special Interest Group on Computer GRAPHics and Interactive Techniques), the premier conference for technical research in computer graphics, held July 31 to August 3 in Los Angeles.

The UCSB Current – "Intelligent Animation" (full article)

Sen's COE Profile

Bako's ECE webpage

UCSB Current article focuses on Prof. Yasamin Mostofi Lab’s research on 3D through-wall imaging

June 20th, 2017

youtube video of mostofi lab research

UCSB researchers propose a new method for 3D through-wall imaging that utilizes drones and WiFi

Researchers at UC Santa Barbara professor Yasamin Mostofi’s lab have given the first demonstration of three-dimensional imaging of objects through walls using ordinary wireless signals. The technique, which involves two drones working in tandem, could have a variety of applications, such as emergency search-and-rescue, archaeological discovery and structural monitoring.

“Our proposed approach has enabled unmanned aerial vehicles to image details through walls in 3D with only WiFi signals,” said Mostofi, a professor of electrical and computer engineering at UCSB. “This approach utilizes only WiFi RSSI measurements, does not require any prior measurements in the area of interest and does not need objects to move to be imaged.”

The proposed methodology and experimental results appeared in the Association for Computing Machinery/Institute of Electrical and Electronics Engineers International Conference on Information Processing in Sensor Networks (IPSN).

In their experiment, two autonomous octocopters take off and fly outside an enclosed, four-sided brick house whose interior is unknown to the drones. While in flight, one copter continuously transmits a WiFi signal, the received power of which is measured by the other copter for the purpose of 3D imaging.

After traversing a few proposed routes, the copters utilize the imaging methodology developed by the researchers to reveal the area behind the walls and generate 3D high-resolution images of the objects inside. The 3D image closely matches the actual area.

“High-resolution 3D imaging through walls, such as brick walls or concrete walls, is very challenging, and the main motivation for the proposed approach,” said Chitra R. Karanam, the lead Ph.D. student on this project.

This development builds on previous work in the Mostofi Lab, which has pioneered sensing and imaging with everyday radio frequency signals such as WiFi. The lab published the first experimental demonstration of imaging with only WiFi in 2010, followed by several other works on this subject.

While their previous 2D method utilized ground-based robots working in tandem, the success of the 3D experiments is due to the copters’ ability to approach the area from several angles, as well as to the new methodology developed by Mostofi’s lab.

The UCSB Current – "X-Ray Eyes in the Sky" (full article)

More info about Mostofi's 3D Through-Wall Imaging

Mostofi Lab

Vince Radzicki (EE) and Jenna Cryan (CE) receive “Outstanding Teaching Assistant” honors at the 2017 College of Engineering “Senior Send-Off”

June 20th, 2017

photos radzicki and cryan
College of Engineering (CoE) celebrates the undergraduate class of 2017 on June 16th at their annual “Senior Send-Off” event

The event program and reception included honoring seniors, teaching assistants and faculty members.

The following graduate students received “Outstanding Teaching Assistant (TA)” recognitions from the graduating seniors in their program:

Electrical Engineering (EE): Vince Radzicki
Computer Engineering (CE): Jenna Cryan

ECE Profs. Hua Lee (EE) and Luke Theogarajan & Forrest Brewer (CE) receive “Outstanding Faculty” honors at the 2017 College of Engineering “Senior Send-Off”

June 19th, 2017

photos lee, brewer and theogarajan
College of Engineering (CoE) celebrates the undergraduate class of 2017 on June 16th at their annual “Senior Send-Off” event

The event program and reception included honoring seniors, teaching assistants and faculty members.

The following ECE faculty received “Outstanding Faculty” recognitions from the graduating seniors in their program:

Electrical Engineering: Professor Hua Lee

Computer Engineering: Professors Forrest Brewer and Luke Theogarajan