Single Pixel Camera Using an LED Matrix
by okooptics

What if there was a way to collect an image with a single detector, a single pixel? It doesn’t seem possible - images consist of 2D information - how could all the information be captured with a single point measurement?
One way to do this is to scan that point across the field of view, one point at a time, like building a 3D lidar map - a mirror scanner sweeps the photodetector's view over the scene.
But there's actually another way to solve this problem, and amazingly it doesn't require any moving parts. I've always been fascinated by this idea, so I decided to build it - a one-pixel camera with no moving parts, just an LED matrix and a single photoresistor.
All the ideas in this instructable come from the field of compressed sensing using structured illumination or detection.
Here are a couple references:
Graham M. Gibson, Steven D. Johnson, and Miles J. Padgett, "Single-pixel imaging 12 years on: a review," Opt. Express 28, 28190-28208 (2020). https://opg.optica.org/oe/fulltext.cfm?uri=oe-28-19-28190&id=437999
Zi-Hao Xu, Wen Chen, José Penuelas, Miles Padgett, and Ming-Jie Sun, "1000 fps computational ghost imaging using LED-based structured illumination," Opt. Express 26, 2427-2434 (2018). https://opg.optica.org/oe/fulltext.cfm?uri=oe-26-3-2427&id=381067
Supplies

Raspberry Pi 3B
64x64 LED matrix - https://www.adafruit.com/product/5362
Adafruit LED matrix bonnet - https://www.adafruit.com/product/3211
Nikon f/2.8 28mm SLR camera lens
Nikon f/1.4 50mm SLR camera lens
100mm plano-convex lens - https://www.thorlabs.com/thorproduct.cfm?partnumber=LA1050-A
50.8mm lens mount
Components listed for translation mount by Remi-Rafael: https://www.instructables.com/Build-a-3D-Printed-Linear-Stage/
M3 hardware and heat-set inserts - https://www.amazon.com/dp/B0BBSLL6G7
Blackout paper - https://www.amazon.com/dp/B0BWMRYJVZ
System Overview

Imagine an object illuminated with a unique pattern of light, with all the light reflected from the object collected onto a single photodetector to make a single measurement. The signal measured when the object is illuminated with this pattern depends on how reflective the object is at each point the pattern hits. In other words, the measurement for this pattern is a linear combination of the reflected light returning from the object.
With a single measurement, there is no way to determine how much each point contributes to the signal. But what if we then illuminated the object with a different pattern and made another measurement? And then another pattern, and another measurement - until we had a dataset of known illumination patterns and their corresponding single-photodetector measurements.
With this data, we can actually reconstruct an image using the mathematical framework developed in compressed sensing.
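To make the measurement process concrete, here is a toy simulation in Python. The random object and patterns are stand-ins, not the actual system; only the 64x64 shape matches my setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
reflectance = rng.random((N, N))           # unknown object (stand-in)
patterns = rng.integers(0, 2, (10, N, N))  # ten on/off illumination patterns

# Each single-pixel measurement is the pattern-weighted sum of the
# object's reflectance - one scalar per pattern.
measurements = np.einsum('pxy,xy->p', patterns, reflectance)
print(measurements.shape)  # (10,)
```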
The simplest implementation of a single-pixel camera that I read about used an LED matrix in a transmission configuration, so I purchased a 64x64 pixel Adafruit LED matrix controlled by a Raspberry Pi. The transmitted light is collected onto a photoresistor read out by an Arduino.
Optical Design

The LED matrix is quite large, and I needed to collect some light from every LED onto a single photodetector with a much smaller area. I ordered a large-area photodiode module that I hoped would help.
To understand this problem, it’s helpful to look at a ray diagram.
The system I built consists of two lenses from SLR cameras. These lenses consist of multiple elements to correct aberrations, and their prescriptions aren’t available. But you can approximate their performance using a paraxial lens and aperture stop.
The first lens demagnifies the LED matrix onto the object. The object blocks some of the light and the goal is then to collect this light onto a single photodetector. We need a second lens to do this, but instead of reimaging the LED matrix, the second lens images the aperture stop of the first lens. You can tell from the ray diagram that this is the position at which the light from all the different LEDs overlaps.
Because the light footprint grows after the first lens, it is also important to have a large-diameter lens close to the object to begin focusing the light down as soon as possible. I added a plano-convex lens to do this; it essentially acts as what is called a field lens.
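As a rough sanity check on a layout like this, you can run the paraxial thin-lens numbers in Python. The focal length and object distance below are placeholders for illustration, not my actual layout values:

```python
def image_distance(f, s):
    """Thin-lens equation 1/s + 1/s' = 1/f (distances in mm)."""
    return 1.0 / (1.0 / f - 1.0 / s)

f1 = 50.0      # placeholder focal length for the first lens, mm
s_obj = 400.0  # placeholder LED-matrix-to-lens distance, mm

s_img = image_distance(f1, s_obj)
mag = -s_img / s_obj  # transverse magnification
print(f"image forms {s_img:.1f} mm behind the lens, magnification {mag:.2f}")
# -> image forms 57.1 mm behind the lens, magnification -0.14 (demagnified)
```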
3D Print and Assemble Parts

Once I had a rough idea of the concept and the optics, it was time to start designing some mounts for the system. The Fusion 360 file is attached in this step for reference.
3D printed optical mounts concerned me a bit; I knew I would need some active alignment steps to make everything work. I found a remarkable 3D printable translation stage designed by remi.rafael, which ended up being critical for the photodetector alignment. Links for the design are in the supplies list.
After 3D printing everything, I started to assemble the system and checked for rough alignment. This took a few iterations, but I arrived at an optical mounting scheme that worked well enough for what I needed.
The 3D printed mounts require M3 bolts and heat-set inserts.
Run LED Matrix With Hadamard Matrices

The LED matrix and bonnet can be set up using the documentation provided by Adafruit: https://learn.adafruit.com/adafruit-rgb-matrix-bonnet-for-raspberry-pi/pinouts
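For reference, here is a minimal sketch of pushing one pattern to the panel, assuming the rpi-rgb-led-matrix Python bindings that Adafruit's guide installs. The display_pattern() name is my own, and the option values may need tuning for your panel:

```python
from rgbmatrix import RGBMatrix, RGBMatrixOptions

options = RGBMatrixOptions()
options.rows = 64
options.cols = 64
options.hardware_mapping = 'adafruit-hat'  # for the Adafruit matrix bonnet
matrix = RGBMatrix(options=options)

def display_pattern(pattern):
    """Show a 64x64 array of 0/1 values as white/off LEDs."""
    for y in range(64):
        for x in range(64):
            v = 255 if pattern[y][x] else 0
            matrix.SetPixel(x, y, v, v, v)
```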
An important question, then, is what is the best set of illumination patterns to use? How can we minimize the number of patterns required to form an image?
A common choice is the set of patterns derived from Hadamard matrices. If a display consists of NxN pixels, then there are N^2 Hadamard patterns - 4096 for the 64x64 LED matrix I have. Because the patterns are mutually orthogonal, each measurement contributes independent information, and you don't need all of them to reconstruct an image. This is where compressed sensing becomes even more impressive: a decent reconstruction may require fewer patterns than there are pixels in the output image. It's pretty amazing.
Attached is code for generating the Hadamard matrices.
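The attached code is what I ran; as a sketch of the same idea, such patterns can be generated in Python with SciPy, mapping the -1 entries to off LEDs since the matrix can't emit negative light:

```python
import numpy as np
from scipy.linalg import hadamard

N = 64               # panel is 64x64
H = hadamard(N * N)  # 4096 x 4096 matrix with +1/-1 entries

# Each row becomes one 64x64 illumination pattern; map -1 -> 0 (LED off).
patterns = ((H + 1) // 2).reshape(N * N, N, N).astype(np.uint8)
print(patterns.shape)  # (4096, 64, 64)
```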
Design and Print Objects

The system requires transmissive objects, so I 3D printed object masks that cover the field of view. The objects couldn't be too complex because the maximum resolution would be 64x64 pixels. I tried out a few ideas: numbers, the Oko Optics logo, a ship, and other things. The object mask slides into a slot so that it can easily be swapped out.
Other objects can be created by uploading an SVG file into Fusion 360 and adjusting the blank mask. The STL files for the objects I used are attached here.
Align Photodetector and Collect Data

I turned all the LEDs on to align the photoresistor using the translation stage. A sharp image of the first lens' aperture stop should be visible at the photoresistor.
I loaded one of the objects into place and then covered the system with blackout paper.
Unfortunately, the LED matrix takes up most of the Pi's pins, so I had to set up the analog measurements with an Arduino. Synchronizing the Arduino and the Pi took some time, but I got something working well enough. The first Hadamard pattern is triggered using a push button (pin 25 of the Pi). The Pi then sends a signal to the Arduino (Pi pin 19 to Arduino pin 2). The Arduino collects N samples from the photoresistor's analog input and averages them, then sends a signal back to the Pi to advance to the next Hadamard pattern (Arduino pin 3 to Pi pin 18).
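Here is a minimal Python sketch of the Pi side of that handshake, assuming BCM pin numbering and the display_pattern() helper and patterns array sketched earlier; my actual script differs in the details:

```python
import RPi.GPIO as GPIO

BUTTON = 25   # push button starts the sequence
TRIGGER = 19  # Pi -> Arduino: pattern is on screen, start sampling
DONE = 18     # Arduino -> Pi: averaged measurement finished

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(TRIGGER, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(DONE, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

GPIO.wait_for_edge(BUTTON, GPIO.RISING)  # wait for the start button

for i in range(4096):
    display_pattern(patterns[i])           # show Hadamard pattern i
    GPIO.output(TRIGGER, GPIO.HIGH)        # tell the Arduino to sample
    GPIO.wait_for_edge(DONE, GPIO.RISING)  # wait for the averaged reading
    GPIO.output(TRIGGER, GPIO.LOW)

GPIO.cleanup()
```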
Upload the code to the Arduino and clear the Serial Monitor. Run the Pi script in the terminal, making sure to update the file location of the Hadamard pattern images.
Reset the Arduino to get a fresh background measurement, then press the push button to start the Pi displaying Hadamard patterns. The data will be printed in the Arduino Serial Monitor. Once the measurements are complete, I copied the data into an Excel sheet.
Run Reconstruction

The data is loaded into a MATLAB script that runs the reconstruction. The inputs to the algorithm are the illumination patterns (a 64x64x4096 array) and the 4096 measurements collected by the photodetector (one measurement for each pattern).
Suppose we want to determine the value of one pixel in the reconstructed image. For every pattern, we take the LED value at that pixel's position, multiply it by that pattern's measurement, and sum over all the patterns. The procedure is then repeated for every pixel. Compared to other image reconstruction algorithms, this one is relatively intuitive: each measurement receives a weight for each pixel, and that weight comes straight from the known illumination patterns.
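My script is in MATLAB, but the same weighted-sum reconstruction fits in a few lines of Python. Array names follow the earlier sketches, and the background subtraction my script performs is omitted here:

```python
import numpy as np

def reconstruct(patterns, measurements):
    """patterns: (4096, 64, 64) array of 0/1 illumination patterns.
    measurements: (4096,) photodetector readings, one per pattern.
    Returns a 64x64 image."""
    # Each pixel is the measurement-weighted sum of that pixel's
    # LED values across all the patterns.
    return np.einsum('p,pxy->xy', measurements, patterns)
```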
There was a tremendous amount of troubleshooting required to get this to work, so when I first saw a reconstruction that resembled the object, I was so happy. The images are limited in resolution, and the objects aren't particularly interesting. But when I think that there is no 2D sensor in this camera, I feel pretty awestruck by the result.
Thanks for reading about the project!