See-Through Vision at UCSB

X-Ray Vision for Robots: Seeing Through Walls with Only WiFi

In the News: BBC Interview, Engadget, Gizmag, Daily Mail, Gizmodo, IDG (PC World, IT World, Computer World), International Business Times (Yahoo News), Headline and Global News, SD Times, I-Programmer, Investors Business Daily, The Verge, Ubergizmo, Outer Places, UCSB press release, and other outlets, Aug. 2014

Imagine unmanned vehicles arriving outside an area enclosed by thick concrete walls. They have no prior knowledge of the area behind these walls, yet they can see every square inch of it through the walls, fully discovering what is on the other side with high accuracy. The objects on the other side do not even have to move to be detected. Now, imagine robots doing all this with only WiFi RSSI signals and no other sensors. In this project, we have shown how to do this. Watch the video for more details and results. Note that the same approach can be implemented on a fixed WiFi network as well.

New: Check out our most recent paper on this topic:

S. Depatla, L. Buckland, and Y. Mostofi, "X-Ray Vision with Only WiFi Power Measurements Using Rytov Wave Models," IEEE Transactions on Vehicular Technology, special issue on Indoor Localization, Tracking, and Mapping, volume 64, issue 4, pp. 1376-1387, April 2015.[pdf][bibtex][Sample data]

Related Publications

Setup Summary

Our proposed approach enables seeing a completely unknown area behind thick walls, based only on wireless measurements made with WLAN cards. The figure below shows an example of the considered problem. The superimposed red volume marks the area that is completely unknown to an outside node (such as an unmanned vehicle or a WiFi-enabled smart node) and needs to be seen in detail, based on only WiFi measurements. Note that most of the area is blocked by the first wall, which is concrete and thus highly attenuating. The two unmanned vehicles are interested in fully seeing what is inside at the targeted resolution of 2 cm. Note that they know nothing about this area and have made no prior measurements here.

This figure is generated for illustrative purposes. For a true snapshot of the robots in operation, see our Video and the Sample Imaging Results section. Note that in the video, any marker on the floor is only for our evaluation purposes and is not used by the robots for positioning, navigation, or imaging.

Challenges: This is an extremely challenging multi-disciplinary problem, involving wireless communications, signal processing, and robotics. Consider the figure above, for instance. A horizontal cut of the area of interest is 5.26 m x 5.26 m. At our targeted resolution of 2 cm, this amounts to 69,169 unknown variables for just one horizontal cut (see the quick sanity check below), resulting in a considerably under-determined system, since making that many wireless measurements would simply be prohibitive for the robots. Furthermore, there are several propagation phenomena that the robots may not be able to include in their modeling. Finally, the robot positioning is also prone to error.
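
To make the scale concrete, the number of unknowns follows directly from the quoted dimensions. Below is a quick sanity check of this arithmetic in Python (the variable names are ours, for illustration only):

# Number of unknown pixels in one horizontal cut of the area of interest,
# using the dimensions quoted above: 5.26 m x 5.26 m at a 2 cm resolution.
side_m = 5.26         # side length of the horizontal cut (meters)
resolution_m = 0.02   # targeted imaging resolution (meters)

pixels_per_side = round(side_m / resolution_m)  # 263 cells per side
num_unknowns = pixels_per_side ** 2             # 263**2 = 69,169 unknowns
print(num_unknowns)                             # -> 69169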

Our Approach: One robot measures the wireless transmissions of the other robot, which is in broadcast mode. Each wireless transmission passes through the unknown area, and the objects attenuate the signal depending on their material properties and locations. By devising a framework based on proper wave propagation modeling and sparse signal processing, we have shown that it is indeed possible for the two unmanned vehicles to image the entire area at a high targeted resolution and see through highly-attenuating walls. More specifically, we formulate an approximate wave propagation model. Then, we exploit the sparsity of the map in the wavelet, total-variation, or space domain in order to solve this severely under-determined system (a minimal numerical sketch of this idea follows below). We have also taken advantage of directional antennas (see the figure above) in order to increase the imaging resolution. See our Video and Related Publications, where we introduce our framework, show the underlying tradeoffs of different sparsity/imaging approaches, discuss the impact of different motion patterns of the robots, and present a number of experimental results.
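
To illustrate the sparse-recovery step, below is a minimal, self-contained Python sketch of the general idea: model the RSSI measurements as approximately linear projections of the unknown attenuation map, y = Ax + noise, and solve the under-determined system by exploiting sparsity. This toy example uses space-domain l1 sparsity with the standard ISTA iteration on synthetic data; it is not the exact wave model or solver of our papers, which also consider wavelet and total-variation sparsity.

import numpy as np

rng = np.random.default_rng(0)
n = 400   # number of unknown pixels (a toy map, not the 69,169 above)
m = 80    # number of wireless measurements (20% of the unknowns)

# Synthetic sparse "map": a few attenuating cells in an otherwise empty area.
x_true = np.zeros(n)
x_true[rng.choice(n, size=10, replace=False)] = rng.uniform(0.5, 1.5, size=10)

# Stand-in for the linearized wave-model matrix; each row is one measurement.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy RSSI-like measurements

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA: gradient step on the data-fit term, then soft-thresholding.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    x = soft_threshold(x - step * (A.T @ (A @ x - y)), lam * step)

print("true support:     ", sorted(np.nonzero(x_true)[0].tolist()))
print("largest estimates:", sorted(np.argsort(-np.abs(x))[:10].tolist()))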

Note that the same approach can be used for imaging on a handheld WiFi-enabled node or on a fixed WiFi network.

Sample Imaging Results

Our initially proposed approach for see-through imaging was published in ACC 2009. Here, we show a few sample results where two unmanned vehicles see a 2D cut of a completely unknown area at the targeted resolution of 2 cm. A similar concept can be extended to full 3D imaging. Note that any marker on the ground is only used by us for assessing the accuracy of the operation and is not used by the robots for positioning or imaging. In the imaging results, the quoted percentages denote the number of gathered wireless measurements as a percentage of the total number of unknown pixels that need to be seen; this ratio shows how under-determined the considered problem is. Sample dimensions and their imaged versions are also provided on the figures in blue.

The left figure above shows the area of interest, which is completely unknown, while the middle figure shows a horizontal cut of it. The white areas indicate that there is an object, while the black areas denote that there is nothing in those spots. Two unmanned vehicles can see through the walls, and also see the walls themselves (right figure), based on only WiFi measurements. Check out our TMC Jan. 2012 paper for the first demonstrations of see-through imaging with RSSI signals, and our advisee's 2012 Ph.D. thesis and our Sensors Journal April 2013 paper for more results.


The left figure above shows the area of interest, which is completely unknown, while the middle figure shows a horizontal cut of it. The white areas indicate that there is an object, while the black areas denote that there is nothing in those spots. Two unmanned vehicles can see through the walls (as shown in the right figure) based on only WiFi measurements. Check out our latest paper in TVT 2015 for the approach that enabled this result.


The left figure above shows the area of interest, which is completely unknown and needs to be imaged, while the middle figure shows a horizontal cut of it (2.56 m x 2.56 m). The white areas indicate that there is an object, while the black areas denote that there is nothing in those spots. Two unmanned vehicles can image this area (as shown in the right figure) based on only WiFi measurements. Check out our Milcom 2010 and TMC 2011 papers for more details and the first demonstrations of imaging with WiFi RSSI.


Integration with a Laser Scanner: The figure above shows our proposed integrated framework, where both laser scanner and WiFi measurements are used. The first row shows the area of interest, which is completely unknown and needs to be imaged, while the first figure of the second row shows a horizontal cut of it (7.67 m x 7.67 m). The white areas indicate that there is an object, while the black areas denote that there is nothing in those spots. Two unmanned vehicles then move outside of the area of interest to image a horizontal cut. The second figure of the second row presents the case where only a laser scanner is used. As can be seen, the occluded parts of the structure behind the walls cannot be seen, as expected. The two right figures of the second row then show the performance of our proposed integrated framework, based on both WiFi measurements and laser scanner data. It can be seen that the details can now be clearly identified. Check out our Sensors journal 2014 paper for the approach that enabled this result.
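
For intuition on how such a fusion can work, below is a hedged Python sketch of one simple way to combine the two modalities (only illustrative; the actual fusion framework is the one described in our Sensors 2014 paper and may differ): pixels that the laser scanner sees directly are treated as known, and the WiFi inverse problem is then solved only over the occluded pixels, leaving far fewer unknowns for the same measurement budget.

import numpy as np

rng = np.random.default_rng(1)
n, m = 400, 80   # toy sizes: n unknown pixels, m WiFi measurements

# Synthetic sparse map and linearized WiFi measurement model y = A x.
x_true = np.zeros(n)
x_true[rng.choice(n, size=20, replace=False)] = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# Toy visibility mask: the laser scanner directly reveals ~40% of the pixels.
seen_by_laser = rng.random(n) < 0.4
x_known = x_true[seen_by_laser]                 # laser gives these directly
y_residual = y - A[:, seen_by_laser] @ x_known  # remove their contribution

# Solve for the occluded pixels only (here via ridge-regularized least
# squares); with fewer unknowns, the same measurement budget goes further.
A_occ = A[:, ~seen_by_laser]
x_occ = np.linalg.solve(A_occ.T @ A_occ + 1e-3 * np.eye(A_occ.shape[1]),
                        A_occ.T @ y_residual)

x_est = np.empty(n)
x_est[seen_by_laser] = x_known
x_est[~seen_by_laser] = x_occ
print("relative error:", np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))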

Summary of Key Features of Our Framework

Potential Applications

Other Acknowledgements