National Institute of Standards and Technology: Record-breaking camera keeps everything between 3 cm and 1.7 km in focus

Hamartia Antidote

hmm...computer vision systems....

In photography, depth of field refers to how much of a three-dimensional space the camera can focus on at once. A shallow depth of field, for example, would keep the subject sharp but blur out much of the foreground and background. Now, researchers at the National Institute of Standards and Technology have taken inspiration from ancient trilobites to demonstrate a new light field camera with the deepest depth of field ever recorded.

Trilobites, distant cousins of today's horseshoe crabs, swarmed the oceans about half a billion years ago. Their visual systems were quite complex, including compound eyes featuring anywhere from tens to thousands of tiny independent units, each with its own cornea, lens and photoreceptor cells.

One trilobite in particular, Dalmanitina socialis, captured the attention of NIST researchers due to its unique compound eye structure. Fossil record examination indicates that this little guy had double-layer lenses throughout its visual system, unlike anything else in today's arthropod kingdom, and that the upper layers of these lenses had a bulge in the middle that created a second point of focus. That meant Dalmanitina socialis was able to focus both on the prey right in front of it and on predators that might be approaching from farther off.

The research team decided to see whether it could apply this kind of idea to a light field camera. Where regular cameras basically take in light and record color and luminance information across a two-dimensional grid, light field cameras are much more complex, encoding not just color and luminance, but the direction of each ray of light that enters the sensor.

When the entire light field is captured in this way, you end up with enough information to reconstruct the scene in terms of color, depth, transparency, specularity, refraction and occlusion, and you can adjust things like focus, depth of field, tilt and perspective shift once the photo's already taken.
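
As a concrete illustration of what that extra directional information buys you, here is a minimal numpy sketch of the classic shift-and-add refocusing idea used with light field data. This is a toy example, not the NIST pipeline: the light field is synthetic, and the per-view shift parameter simply stands in for the chosen focal depth.

```python
# Toy shift-and-add refocusing over a 4D light field L(u, v, s, t).
# Each (u, v) sub-aperture view sees the scene from a slightly different
# direction; shifting views in proportion to their (u, v) offset and then
# averaging brings one depth plane into focus, after the photo is taken.
import numpy as np

def refocus(lf, slope):
    """lf: light field array of shape (U, V, S, T).
    slope: per-view shift in pixels; different slopes focus different
    depths (it plays the role of the usual alpha refocusing parameter)."""
    U, V, S, T = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round((u - U // 2) * slope))
            dv = int(round((v - V // 2) * slope))
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Synthetic 5 x 5 grid of 64 x 64 views, refocused at two different depths.
lf = np.random.rand(5, 5, 64, 64)
near = refocus(lf, slope=2.0)
far = refocus(lf, slope=-1.0)
```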

The trouble thus far, according to the NIST team, has been extending depth of field without losing spatial resolution or color information, or closing down the aperture so much that shutter speed becomes an issue. And that's where these bifocal trilobite lenses have inspired a breakthrough.

The team designed an array of metalenses, a flat surface of glass studded with a bunch of tiny, rectangular, nano-scale titanium dioxide pillars. Each of these pillars was precisely shaped and oriented to manipulate light in specific ways.

Polarization played a key role here – the nanopillars bend light by different amounts depending on whether it's left circularly polarized (LCP) or right circularly polarized (RCP). A different amount of bending leads to a different focal point, so the researchers already effectively had two focal points to work with. The problem was that a single sensor could only capture a focused image from one of these focal points.
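
One textbook way a nanopillar can bend the two circular polarizations by different amounts is the geometric (Pancharatnam-Berry) phase: a pillar that behaves like a tiny half-wave plate with its axis rotated by an angle theta imprints a phase of about 2*theta, with opposite signs for the two handednesses. The Jones-calculus sketch below checks that standard result; it illustrates the general mechanism, not necessarily the exact pillar design used in the paper.

```python
# Jones-calculus check of the geometric (Pancharatnam-Berry) phase:
# a half-wave-plate-like pillar with its fast axis at angle theta flips
# circular handedness and adds a spin-dependent phase of +/- 2*theta.
import numpy as np

def rotated_half_wave(theta):
    """Jones matrix of a half-wave retarder with fast axis at angle theta."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    HWP = np.array([[1, 0], [0, -1]])  # pi retardance between the two axes
    return R @ HWP @ R.T

# Circular polarization basis (handedness labels are convention-dependent).
lcp = np.array([1,  1j]) / np.sqrt(2)
rcp = np.array([1, -1j]) / np.sqrt(2)

theta = np.deg2rad(30)
J = rotated_half_wave(theta)

# Each input emerges as the opposite handedness times a phase of +/- 2*theta.
print(np.angle((J @ lcp) @ np.conj(rcp)))  # ~ +1.047 rad = +2*theta
print(np.angle((J @ rcp) @ np.conj(lcp)))  # ~ -1.047 rad = -2*theta
```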

So the researchers oriented those nanopillar metalenses so that some of the light entering each one passed through the long axis of the rectangle, and some through the shorter axis. Again, this bent the light by two different amounts and created two different focal points – one focused up close like a macro lens, the other focused way off in the distance like a telephoto lens. Between this and the polarization, the researchers had four images to deal with.
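
To see how a focal point turns into a pillar layout at all, it helps to know the standard hyperbolic phase profile a flat lens must impose to focus a plane wave at distance f: phi(r) = (2*pi/lam) * (f - sqrt(r^2 + f^2)). Below is a rough numpy sketch of two such target profiles, one short-focus ("macro" branch) and one longer-focus ("telephoto" branch); combined with the two polarization states, that gives the 2 x 2 = 4 images described above. The wavelength, lenslet size and focal lengths are illustrative numbers, not values from the paper.

```python
# Target phase profiles for a flat lens with two focal lengths.
# phi(r) = (2*pi/lam) * (f - sqrt(r**2 + f**2)) is the standard hyperbolic
# metalens profile; the polarization- and orientation-dependent pillar
# response selects which of the two profiles a given ray experiences.
import numpy as np

lam = 550e-9                           # green light, in meters (assumed)
r = np.linspace(-50e-6, 50e-6, 1001)   # a 100-um-wide lenslet (assumed)

def lens_phase(r, f):
    return (2 * np.pi / lam) * (f - np.sqrt(r**2 + f**2))

phi_near = lens_phase(r, f=100e-6)   # short focal length: "macro" branch
phi_far = lens_phase(r, f=300e-6)    # longer focal length: "telephoto" branch

# Fabricated pillars can only encode phase modulo 2*pi:
phi_near_wrapped = np.mod(phi_near, 2 * np.pi)
phi_far_wrapped = np.mod(phi_far, 2 * np.pi)
```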

If the math wasn't crazy enough up to that point, the researchers then figured out precise metalens geometries that caused the left circularly polarized version of the telephoto-focused light to focus at the exact same plane as the right circularly polarized version of the macro-focused light, enabling them both to be recorded simultaneously, in sharp focus, by a single light field sensor – without losing any spatial resolution.

The team designed and built a 39 x 39 metalens array, with the near focal point set at just 3 cm (1.2 in) and the far point set at 1.7 km (just over a mile). And it designed and coded a reconstruction algorithm using multi-scale convolutional neural networks to correct all the many aberrations introduced by those 1,521 tiny double-purpose metalenses, particularly given how hard it is to keep tight manufacturing tolerances at the nano-scale.

That reconstruction algorithm turned out to be a gem. After a simple calibration process and a training session, it can figure out exactly how and where a particular metalens array strays from perfection in terms of chromatic aberration, blurriness and other optical defects, and it can generate corrections that can then be easily applied to any image taken.
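
The article doesn't spell out the network's architecture, but as a loose sketch of the idea, here is a toy multi-scale CNN in PyTorch that learns to map an aberrated capture to a corrected one from calibration image pairs. Everything here (layer sizes, the two scales, the residual output, the training snippet) is an assumption for illustration, not the NIST team's actual model.

```python
# Toy multi-scale CNN for aberration correction (PyTorch): an assumed
# stand-in for the paper's network, not a reproduction of it.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAberrationNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.fine = nn.Sequential(       # full-resolution branch
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.coarse = nn.Sequential(     # half-resolution branch
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * ch, 3, 3, padding=1)

    def forward(self, x):
        f = self.fine(x)
        c = self.coarse(F.avg_pool2d(x, 2))
        c = F.interpolate(c, size=x.shape[-2:], mode="bilinear",
                          align_corners=False)
        # Predict a residual, so the net only has to learn the aberrations.
        return x + self.fuse(torch.cat([f, c], dim=1))

# One training step on stand-in calibration pairs (aberrated, ground truth).
net = TinyAberrationNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
aberrated = torch.rand(4, 3, 64, 64)
clean = torch.rand(4, 3, 64, 64)
loss = F.l1_loss(net(aberrated), clean)
loss.backward()
opt.step()
```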


A convolutional neural network is quickly trained to correct for aberrations in the lens array, and can then create completely sharp images with an extremely variable depth of field
NIST

What's more, while its two focal points are more than a mile apart, the reconstruction algorithm can sharply reconstruct any item placed between them, creating a final image that can be set to have the biggest depth of field ever demonstrated, in which objects just over an inch from the lens are as preternaturally clear and sharp as those way off on the horizon.

Indeed, the reconstruction algorithm does such a great job of correcting for errors that the research team says light field cameras using this technology won't have to be fabricated with extreme precision. That is to say, the team believes it should be relatively easy to manufacture.

As the study, published in Nature Communications, explains: "This bioinspired nanophotonic light-field camera, together with the computational post-processing, not only can achieve full-color imaging with extreme DoF, but is also able to eliminate the optical aberrations induced by the meta-optics."

From a series of aberration-corrected sub-images, the reconstruction algorithm is able to put together an image that's completely sharp, from the top-right NJU text at 3 cm to the high-rise building 1.7 km away
NIST

The team believes this technology could be useful in consumer photography, optical microscopy and machine vision, among other areas, but since it's pretty fresh research at this point, we wouldn't expect it to hit the shelves any time soon.
 
.

I knew this day would come. Wow!

Nowadays, with some newer cameras, you can even adjust depth of field in post-production...

But honestly - I am an old-fashioned sucker for bokeh effects, and the creamier the bokeh (with very few halo artifacts), the better for me.

Having everything in focus is good for certain uses (maybe military), but for pure aesthetics, I'll pass, thanks.
 
.

If you're into reading scientific journals, I suggest you read the work of Federico Capasso. He works on these types of metalenses with extended depth of focus. The problem with these types of lenses is that they have a softening effect at the edges, giving a blurred impression. So most of the images need heavy post-processing, and you end up losing some finer detail.

My current research is in optics, and we tried fabricating similar lenses but ended up with much lower efficiency.
 
.

I think most people expect completely focused views from playing video games, and probably wonder why cameras don't do it.

Call-of-Duty-Modern-Warfare-Sniper.jpg
 
.

Yeah maybe it has other uses (like vision systems), but not for actual photography as practiced today.

Look at these and tell me how everything in focus would be superior...

[Five bokeh example photos]
 
.

Actually, I think it would be superior. I'm liking the Call of Duty shot.
 
.

Unreal Engine 5 with everything in focus
 