New depth sensors may be sensitive enough for self-driving cars


Over the past decade, the Camera Culture group at the MIT Media Lab has been developing innovative imaging systems, from a camera that can see around corners to one that can read text in closed books. Many of these rely on "time of flight": measuring how long light projected into a scene takes to reflect back to a sensor, which indicates distance.

In a new paper in IEEE Access, members of the Camera Culture group present a new approach to time-of-flight imaging that increases its depth resolution by a factor of 1,000. That is the type of resolution that could make self-driving cars practical.

The new approach could also enable accurate distance measurements through fog, which has proved a major obstacle to the development of autonomous vehicles.

At a range of 2 meters, existing time-of-flight systems have a depth resolution of about a centimeter. That is good enough for today's assisted-parking and collision-detection systems.

But as Achuta Kadambi, a joint PhD student in electrical engineering and computer science and in media arts and sciences and first author of the paper, explains, "As you increase the range, your resolution goes down exponentially. Say you have a long-range scenario, and you want your car to detect an object farther away, so it can make a fast update decision. You may have started at 1 centimeter, but now you're back down to a foot or even 5 feet of resolution. And if you make a mistake, it could lead to loss of life."

At a range of 2 meters, by contrast, the MIT researchers' system has a depth resolution of 3 micrometers. Kadambi also ran tests in which he sent a light signal through 500 meters of optical fiber, to simulate the power attenuation of long-range operation, before feeding it to his system. Those tests suggest that at a range of 500 meters, the MIT system should still achieve a depth resolution of only a centimeter.

With time-of-flight imaging, a short burst of light is fired into a scene, and a camera measures the time it takes to return, which indicates the distance of the object that reflected it. The longer the light burst, the more ambiguous the measurement of how far it has traveled. So burst length is one of the factors that determine a system's resolution.
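The basic time-of-flight relation is simple enough to sketch: the light travels to the object and back, so the distance is the round-trip time multiplied by the speed of light, divided by two. A minimal illustration (not the paper's method, just the underlying principle):

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to a reflector, given the measured round-trip time.

    The light covers the distance twice (out and back), hence the
    division by two.
    """
    return C * round_trip_seconds / 2.0

# A return delayed by about 13.34 nanoseconds corresponds to an
# object roughly 2 meters away.
print(tof_distance(13.34e-9))
```

The tiny time scales involved are what make the measurement hard: resolving a centimeter of depth means resolving tens of picoseconds of delay.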

Another factor is detection rate. Modulators, which turn the light beam off and on, can switch a billion times a second, but today's detectors can make only about 100 million measurements a second. It is the detection rate that limits existing time-of-flight systems to centimeter-scale resolution.
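A back-of-the-envelope comparison of the two rates mentioned above (my framing, not a calculation from the paper) shows why the detector is the bottleneck: in one detector sample interval, light covers a round-trip distance of well over a meter, and only phase estimation across many modulation periods brings the effective resolution down to centimeters.

```python
C = 299_792_458.0  # speed of light, m/s

def distance_per_interval(rate_hz: float) -> float:
    """Round-trip distance light covers during one cycle at the given rate."""
    return C / (2.0 * rate_hz)

print(distance_per_interval(1e9))  # one 1 GHz modulator period: ~0.15 m
print(distance_per_interval(1e8))  # one 100 MHz detector sample: ~1.5 m
```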

But there is another imaging technique that offers much finer resolution, Kadambi says: interferometry. In interferometry, a light beam is split in two, and half of it is kept circulating locally while the other half, the "sample beam," is fired into the visual scene. The reflected sample beam is recombined with the locally circulated light, and the phase difference between the two beams, meaning the relative alignment of the troughs and crests of their electromagnetic waves, yields a very precise measure of the distance the sample beam has traveled.
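The reason phase is so precise is that it resolves path length as a fraction of one optical wavelength. A minimal sketch of that relation, assuming a hypothetical 1550 nm laser (a wavelength I chose for illustration; the paper does not specify one here):

```python
import math

WAVELENGTH = 1550e-9  # assumed laser wavelength for illustration, metres

def path_difference(phase_rad: float) -> float:
    """Optical path-length difference implied by a measured phase offset.

    Phase wraps every 2*pi radians, so this recovers the difference only
    modulo one wavelength: exquisitely precise at micrometre scales, but
    ambiguous over longer distances on its own.
    """
    return (phase_rad % (2 * math.pi)) / (2 * math.pi) * WAVELENGTH

# A quarter-cycle phase shift corresponds to a quarter wavelength of path.
print(path_difference(math.pi / 2))
```

The wrap-around in the comment is also why interferometry alone is not a ranging solution: it measures sub-wavelength differences, not absolute distance.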

But interferometry requires careful synchronization of the two beams. "You could never put an interferometer on a car because it's so sensitive to vibrations," Kadambi says. "We're using some ideas from interferometry and some ideas from LIDAR, and we're really combining the two here."

They are also, he explains, borrowing ideas from acoustics. Anyone who has performed in a musical ensemble is familiar with the phenomenon of beating. If two singers are slightly out of tune, one producing a pitch at 440 hertz and the other at 437 hertz, the interplay of their voices produces another tone whose frequency is the difference between those of the notes they are singing, in this case 3 hertz.
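The beat frequency is just the difference of the two frequencies, and it shows up as a slow rise and fall in the envelope of the summed waveform. A small sketch of the acoustics example above:

```python
import math

def beat_frequency(f1_hz: float, f2_hz: float) -> float:
    """Frequency of the beat heard when two nearby tones interfere."""
    return abs(f1_hz - f2_hz)

def summed_amplitude(t: float) -> float:
    """The two slightly detuned tones added together: their envelope
    swells and fades 3 times per second as they drift in and out of phase."""
    return math.sin(2 * math.pi * 440 * t) + math.sin(2 * math.pi * 437 * t)

print(beat_frequency(440.0, 437.0))  # 3.0 Hz
```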

The same is true of light pulses. If a time-of-flight imaging system fires light into a scene at a billion pulses a second, and the returning light is combined with light pulsing 999,999,999 times a second, the result is a signal pulsing once a second, a rate the camera can easily detect. And that slow "beat" contains all the phase information needed to gauge distance.
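A minimal numeric sketch of this down-conversion (an illustration of the mixing math, not the authors' hardware): multiplying a gigahertz return by a reference pulsing one hertz slower yields a 1 Hz component that still carries the round-trip phase. The phase value below is a made-up stand-in for whatever shift the round trip would impose.

```python
import math

F_SIG = 1.0e9        # pulse rate of the light fired into the scene, Hz
F_REF = 999_999_999  # rate the returning light is combined with, Hz
PHASE = 0.7          # hypothetical round-trip phase shift, radians

def mixed(t: float) -> float:
    """Product of the returning signal and the reference modulation.

    By the identity cos(a)*cos(b) = 0.5*cos(a - b) + 0.5*cos(a + b),
    this contains a slow component at F_SIG - F_REF = 1 Hz that still
    carries PHASE, so the distance information survives down-conversion.
    """
    return (math.cos(2 * math.pi * F_SIG * t + PHASE)
            * math.cos(2 * math.pi * F_REF * t))

print(F_SIG - F_REF)  # the 1 Hz beat the camera actually has to detect
```

The fast sum-frequency term averages away over any realistic exposure, leaving only the slow, phase-preserving beat.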

But rather than try to synchronize two high-frequency optical signals, as interferometric systems must, Kadambi and Ramesh Raskar, his thesis advisor and head of the Camera Culture group, simply modulate the returning signal, using the same technology that produced it in the first place. That is, they pulse the already-pulsed light. The result is the same, but the approach is much more practical for automotive systems.

"The fusion of the optical coherence and electronic coherence is very unique," Raskar says. "We're modulating the light at a few gigahertz, so it's like turning a flashlight on and off millions of times per second. But we're changing the electrons, not the optics. That combination is really where you get the power of this system."

