Diffraction Line Imaging

Mark Sheinin, Dinesh Reddy, Matthew O'Toole, and Srinivasa Narasimhan
TL;DR abstract

We present a novel approach for 2D light-source positioning based on line (1D) sensors and light diffraction.

[Narrated 2-minute overview video]
Abstract

We present a novel computational imaging principle that combines diffractive optics with line (1D) sensing. When light passes through a diffraction grating, it disperses as a function of wavelength. We exploit this principle to recover 2D and even 3D positions from only line images. We derive a detailed image formation model and a learning-based algorithm for 2D position estimation. We show several extensions of our system to improve the accuracy of the 2D positioning and expand the effective field of view. We demonstrate our approach in two applications: (a) fast passive imaging of sparse light sources like street lamps, headlights at night and LED-based motion capture, and (b) structured light 3D scanning with line illumination and line sensing. Line imaging has several advantages over 2D sensors: high frame rate, high dynamic range, high fill-factor with additional on-chip computation, low cost beyond the visible spectrum, and high energy efficiency when used with line illumination. Thus, our system is able to achieve high-speed and high-accuracy 2D positioning of light sources and 3D scanning of scenes.

Light sources as visual markers

In computer vision, we rely on visual markers (or features) to interpret the scene. These markers are areas of the image that stand out and can be easily (and repeatedly) detected. Due to their distinct visual appearance, light sources are often used as strong visual markers. As seen in the figure below, light-source markers can be found in night scenes, motion capture suits, and structured light 3D scanning. In these applications, our goal is to find the position of these markers in the 2D image plane. These 2D positions are then used for object tracking and 3D shape recovery (using triangulation).

[Figure: light-source markers in night scenes, motion capture, and structured light 3D scanning (figure_main_from_eccv.png)]

The problem is bandwidth

Now suppose we want to track these markers fast. How fast? Thousands of frames per second fast. Such high frame rates require very short exposures, but those are easy to achieve. Even your smartphone camera can take images at a 1/8000s exposure (try it yourself in manual mode). What's the problem then? It's bandwidth: at high frame rates, your camera can't transfer all the image data from the sensor to its memory fast enough. Moreover, even if it could, that memory would fill up very quickly.

So what can we do? Well, here is a question: do we really need to capture all the image pixels?

Looking at the high-speed camera frames above suggests that the answer is no, since our light-source markers occupy the image domain very sparsely. In fact, in the scene above, our two LEDs might occupy under 200 pixels of our 3.17MP camera. So, can we somehow get away with using far fewer pixels, say below 1% of the total 3.17MP, to extract the markers' 2D positions? The answer is yes, but we will need to encode the 2D positions in a novel way, using light diffraction.
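To get a feel for the numbers, here is a back-of-the-envelope bandwidth comparison (a sketch in Python; the 1000 fps target, 8-bit RGB readout, and 2048-pixel line length are illustrative assumptions, only the 3.17MP resolution comes from our setup):

fps = 1000                  # assumed target frame rate
full_frame = 3.17e6 * 3     # bytes per 8-bit RGB 2D frame
line_frame = 2048 * 3       # bytes per 8-bit RGB 1D readout (assumed line length)

print(f"2D sensor: {full_frame * fps / 1e9:.1f} GB/s")  # ~9.5 GB/s
print(f"1D sensor: {line_frame * fps / 1e6:.1f} MB/s")  # ~6.1 MB/s

Sustaining gigabytes per second is what overwhelms a camera's readout and memory; a line sensor's data rate is three orders of magnitude smaller.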

[Video (fan_with_tracking.gif): we attached two LEDs to a fan which, once on, rotates at 1000 RPM. Compared to imaging the fan at high speed with our 2D camera (the highest rate it supports), our system recovered the point positions 16x faster.]
The solution: Diffraction Line Imaging

The key idea behind Diffraction Line Imaging is this: our imaging system consists of a camera with a diffraction grating placed in its optical path. The diffraction grating disperses the light from the scene's sources, creating horizontal rainbow streaks on the 2D sensor plane. But instead of imaging the resulting streaks with a 2D sensor, we place a vertical 1D color sensor (also known as a line scan camera) in the sensor plane. Our 1D sensor intersects the horizontal streaks, which encode the 2D light-source positions. How are the positions encoded?

For every point, the vertical position of the streak's intersection with the 1D sensor gives the point's y-coordinate on the 2D image plane, while the point's x-coordinate is encoded by the color measured at that intersection.

[Animation: a source's 2D position encoded as an intersection position (y) and color (x) on the 1D sensor (explan_anim.gif)]
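To make the encoding concrete, here is a minimal decoding sketch. It assumes a first-order diffraction model at normal incidence; the grating pitch, focal length, and the crude RGB-to-wavelength step are illustrative assumptions (our actual system calibrates this mapping and uses a learning-based algorithm, see the paper):

import numpy as np

D = 1e-3 / 600    # grating pitch for an assumed 600 lines/mm grating [m]
F = 0.05          # assumed lens focal length [m]

def wavelength_to_x(lam, order=1):
    # Grating equation at normal incidence: sin(theta) = order * lambda / d.
    theta = np.arcsin(order * lam / D)
    return F * np.tan(theta)    # horizontal streak offset on the sensor plane [m]

def decode_position(line_rgb):
    # line_rgb: (H, 3) readout of the vertical 1D color sensor.
    y = int(np.argmax(line_rgb.sum(axis=1)))        # row where the streak crosses
    rgb = line_rgb[y] / (line_rgb[y].sum() + 1e-9)  # normalized measured color
    # Toy dominant-wavelength estimate: R ~ 700nm, G ~ 550nm, B ~ 450nm.
    lam = float(rgb @ np.array([700e-9, 550e-9, 450e-9]))
    return wavelength_to_x(lam), y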
Tracking point sources

Our method can be used for fast tracking of light-source markers. Unlike previous methods, the light-source markers require no temporal coding (blinking in a predefined sequence), and therefore our method also works for sources in the wild, such as car headlights.

[Videos: motion-capture suit marker tracking and raw detections; motion-capture glove marker tracking; car headlight tracking (results_img2_edited.png, suit3.gif, glove.gif, car.gif)]
Structured light 3D scanning with both 1D illumination and a 1D camera

Diffraction Line Imaging works just as well for lines, or more precisely curves. This is useful in structured light 3D scanning, where a projector sweeps the object with line illumination (a plane in space). The geometric deformation of the projected line as it hits the object's surface is captured by a camera and used to compute the object's shape. In prior methods, a 2D camera was used to image the line (1D) illumination as it reflects off the object. To the best of our knowledge, ours is the first structured light system that uses both 1D illumination and a 1D camera.
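For reference, the shape computation itself is standard light-plane triangulation: every detected stripe pixel back-projects to a viewing ray, which is intersected with the known plane of illumination. A minimal sketch, assuming a calibrated camera with intrinsics K and a calibrated light plane (names and conventions here are illustrative):

import numpy as np

def triangulate_stripe(pixels, K, plane_n, plane_d):
    # pixels: (N, 2) image coordinates of detected stripe points.
    # plane_n, plane_d: light plane {X : n.X + d = 0} in camera coordinates.
    # Returns (N, 3) 3D points on the illuminated surface.
    pts_h = np.column_stack([pixels, np.ones(len(pixels))])
    rays = pts_h @ np.linalg.inv(K).T   # back-projected viewing rays
    t = -plane_d / (rays @ plane_n)     # solve n.(t * ray) + d = 0 per ray
    return rays * t[:, None]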

[Figure: schematic of our system for sparse point positioning; the same design works for lines as well]

[Figure: experimental prototype (image_of_system.jpg); see paper for more details]

[Animation (line.gif): view of the 2D sensor plane from a 2D camera. For every projected line, only the center column is captured by the 1D camera. Stacking all the 1D measurements horizontally produces a dual image (an image from the projector's point of view). Also shown: the 3D shape of a head recovered using our system (see paper for scanning details)]
Fast line-illumination scanning
 
In the application above, the scanned object was static and the line illumination was 'moving' across the object's surface. A different configuration of line scanning involves a static light plane while the object moves through the plane of illumination. This configuration is useful for scanning objects on a turntable or a fast-moving conveyor belt. We built a prototype of such a system. Below, we test it by scanning the blades of the fan from above (this time without the LEDs).
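A minimal sketch of this configuration: every 1D frame yields one depth profile, and stacking the profiles along the motion direction gives the surface. The inverse-disparity depth model and its constants are illustrative assumptions, not our calibrated model:

import numpy as np

BASELINE_F = 0.1 * 800   # assumed baseline [m] times focal length [px]

def scan_moving_object(disparity_frames, speed_m_per_s, fps):
    # disparity_frames: (T, H) per-row disparities decoded from the 1D sensor.
    # Returns a (T, H) depth map and each slice's position along the motion axis.
    disparity = np.asarray(disparity_frames, dtype=float)
    depth = BASELINE_F / np.maximum(disparity, 1e-6)  # toy inverse-disparity model
    motion_axis = np.arange(len(disparity)) * speed_m_per_s / fps
    return depth, motion_axis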
 
 
[Figure: scanning objects on a commercial conveyor belt (conveyer.png)]

[Figure: our fast line-illumination scanning prototype (line_light.png)]

[Video (fan_line_disparity.gif): we scan the blades of a fan rotating at 1300 RPM. The b/w frames were captured with a 2D camera operating at 60 FPS, which is not fast enough to capture the blades' motion, making them appear to rotate backward. The red curves show the disparity recovered by our system, which is 30x faster than the 2D camera and hence fully captures the blades' fast motion.]

BibTeX

@inproceedings{Sheinin:2020:Diff,
  title={Diffraction Line Imaging},
  author={M. Sheinin and D. N. Reddy and M. O'Toole and S. G. Narasimhan},
  booktitle={Proc. ECCV},
  year={2020},
  organization={Springer}
}

Acknowledgments

We thank Aswin Sankaranarayanan and Vishwanath Saragadam for help with building the hardware prototype, and Stanislav Panev and Francesc Moreno for neural-network-related advice. We were supported in part by NSF Grants IIS-1900821 and CCF-1730147, DARPA REVEAL Contract HR0011-16-C-0025, and the Andrew and Erna Finci Viterbi Foundation.
