#### MGrayson

##### Subscriber and Workshop Member


The GetDPI Photography Forum



> I think I can live with diffraction ...

Good. Because we're stuck with it!

I can't help myself. What is all this Fourier Transform Convolution nonsense about? It goes like this:

An image is just a bunch of numbers. And it's not just the numbers, it's where they are. THIS pixel has THIS red value, etc., etc. You know what else is just a bunch of numbers? A vector - an arrow in the plane. Or in space. Or in the 300,000,000-dimensional space that is a 100-megapixel image.

While this is true, it's not very useful. Blurring operates on these 300,000,000 numbers in a difficult-to-compute way (ok, it's a matrix with 90 quadrillion elements, although most of them are zero). Possible - after all, that's what Photoshop does when it applies a blur - but not helpful if you want to, say, de-blur an image.
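If you want to see the "blur is a giant matrix" idea in miniature, here's a toy numpy sketch. Everything in it is made up for illustration: a 5-pixel one-dimensional "image" and a 3-tap kernel, so the matrix is 5x5 instead of 300,000,000 x 300,000,000 - but notice that most of its entries are still zero.

```python
import numpy as np

# A 1-D "image" with a single bright pixel in the middle.
image = np.array([0.0, 0.0, 1.0, 0.0, 0.0])

# A made-up 3-tap blur kernel: each pixel keeps half its value and
# borrows a quarter from each neighbor.
kernel = np.array([0.25, 0.5, 0.25])

# Build the blur as an explicit 5x5 matrix: row i averages pixel i
# with its neighbors (edges just lose the out-of-range weight).
B = np.zeros((5, 5))
for i in range(5):
    for j, w in zip(range(i - 1, i + 2), kernel):
        if 0 <= j < 5:
            B[i, j] = w

blurred = B @ image  # one matrix-vector product = one blur
print(blurred)       # the point gets spread out: 0, 0.25, 0.5, 0.25, 0
```

Scale the same construction up to a 100-megapixel image and you get exactly the enormous, mostly-zero matrix described above.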

So here's the trick! We write the image not as a sum of pixel values, but as a sum of wiggles of many different spatial frequencies. It's just a high-dimensional "change of basis". Instead of saying "the red value of this pixel is 129", we say "the red value of this pixel is 2*wiggle1 + 3*wiggle2 - 47*wiggle3 + ... + 0.0000002*wiggle300000000." But these 300,000,000 coefficients {2, 3, -47, ..., 0.0000002} are the same for every point. Only the wiggles change from point to point. These wiggle coefficients are called the "Fourier Transform". Yes, I'm conflating a few related concepts here. Experts, please chuckle at my naivete to your heart's content.

"THAT", you say, "is the stupidest thing I've ever heard. I have to add up 300,000,000 wiggles to get the value at each point?" No, you don't. You *could*, but you don't have to. The reason you don't have to is that blurs act on THESE numbers - the wiggle coefficients - in a really simple way. You write the blur - what happens to a single point, in our case the Airy disk - as another 300,000,000 numbers representing it as a sum of wiggles, say {1, .5, .3, .2, .1, .05, ..., 0.00003}, and you **multiply** them by your image wiggle coefficients. In our example, {2, 3, -47, ..., 0.0000002} * {1, .5, .3, ..., 0.00003} = {2, 1.5, -14.1, ..., 0.000000000006}.

You're changing the wiggle coefficients independently - none of this "average with your neighbors in a particular way" (which is called convolution, BTW). This is *not* obvious, is a freaking *miracle*, and requires some (but not too much) calculation to prove. It's *why we do all this*.
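That "miracle" - averaging with your neighbors in pixel space equals plain multiplication of wiggle coefficients - is easy to check numerically. Here's a tiny numpy sketch with a made-up 8-sample "image" and a wrap-around (circular) blur, which is the case where the identity is exact:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random(8)                    # a made-up 8-pixel 1-D "image"

kernel = np.zeros(8)
kernel[[0, 1, -1]] = [0.5, 0.25, 0.25]   # 3-tap blur, circular

# Way 1: "average with your neighbors" - circular convolution, the slow way.
slow = np.array([sum(kernel[(i - j) % 8] * image[j] for j in range(8))
                 for i in range(8)])

# Way 2: transform to wiggle coefficients, multiply them one by one,
# transform back.
fast = np.fft.ifft(np.fft.fft(image) * np.fft.fft(kernel)).real

print(np.allclose(slow, fast))   # True - same blurred image either way
```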

In audio, a Low Pass filter is one that leaves the low frequency wiggles alone and decreases the high frequency wiggles. A blur generally does the same thing. High frequency wiggles get suppressed, low frequency wiggles barely change. After multiplying, when you add up all the new wiggle coefficients, you end up with the 300,000,000 numbers of ... the blurred photo. So such a blur might look like this: {1, 1, 1, ..., 0.99, 0.95, 0.9, ..., 0.52, 0.51, ..., 0.01, 0.005, 0.0001}. BTW, getting from the image to the wiggle coefficients, doing stuff to them, and then switching back seems to take a near infinite amount of work. You really would have to add up 300,000,000 different functions. But there is a sneaky method called (unimaginatively) the Fast Fourier Transform that does it really quickly.
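You can watch a blur act as a low-pass filter by looking at its own wiggle coefficients. A sketch (the 64-sample grid and the 5-tap moving-average blur are just illustrative choices):

```python
import numpy as np

n = 64
kernel = np.zeros(n)
kernel[[0, 1, 2, -2, -1]] = 1 / 5   # 5-tap moving-average blur, circular

# The blur's wiggle coefficients: how much of each frequency survives.
coeffs = np.abs(np.fft.fft(kernel))

print(round(float(coeffs[0]), 3))    # lowest frequency: passes untouched (1.0)
print(round(float(coeffs[3]), 3))    # low frequency: barely changed
print(round(float(coeffs[30]), 3))   # high frequency: strongly suppressed
```

The printed values trace out roughly the {1, ..., 0.9, ..., 0.005} taper described above (with some ripples, since a box blur isn't a perfectly smooth filter).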

Well, if all I'm doing to blur is *multiplying* each wiggle coefficient by some number, then all I have to do to de-blur is *divide* by those same numbers, right? Blammo! We're done. Unfortunately, those numbers can get VERY small, or even, in the case of the Airy disk, zero. When you apply the filter, it just completely zeroes out a whole ton of the wiggle coefficients. They're gone. No more wiggles of *this* particular frequency, or *that* one, or... So you CAN'T get them back. Dividing by your blur wiggle coefficients has a lot of divide-by-zeros, and you're stuck.
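Here's that failure mode in numpy form - a sketch with a made-up 64-sample "image" and a 4-tap box blur, chosen because its transform, like the Airy disk's, has (numerically) exact zeros:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random(64)            # a made-up 1-D "image"

kernel = np.zeros(64)
kernel[:4] = 0.25                 # 4-tap box blur (circular)

K = np.fft.fft(kernel)            # the blur's wiggle coefficients
blurred = np.fft.ifft(np.fft.fft(image) * K).real

# Some coefficients are essentially zero: those wiggles are simply gone.
print(np.min(np.abs(K)))          # ~0

# Naive deblur: divide the coefficients back out. Where K ~ 0 this
# blows up, and the reconstruction is garbage at those frequencies.
with np.errstate(divide="ignore", invalid="ignore"):
    naive = np.fft.ifft(np.fft.fft(blurred) / K).real

print(np.allclose(naive, image))  # False - the lost wiggles never come back
```

Real deconvolution tools dodge this with regularization (e.g. Wiener-style filtering, which only trusts the division where the coefficients are comfortably nonzero) - which is why deblurring can sharpen a little but can't resurrect what the blur truly zeroed out.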

And that's the problem with diffraction. It destroys information - it's literally lossy compression.

I need a nap...

Matt


> And that's the problem with diffraction. It destroys information - it's literally lossy compression.

If you do lots of clever math under specific conditions, you can use diffraction to store information!

> As the theoretical aperture gets smaller, and we get to the level of just one photon, forgive me for asking (I'm not a physicist), but what happens? I kept thinking of light as a wave, but does it not behave like a particle too? I was wondering about low-light situations… Darn you for opening this Pandora's box! But the graphics you supplied are wonderful, particularly in 3D.

The first and simplest case is where we forgot to put the lens on. We take the shot with a big hole in front of the sensor. What should happen? Waves of light flood in and expose every pixel evenly. This is an actual computer simulation, not just me drawing the answer. Light comes in the top and hits the sensor at the bottom. We're looking down on the space between the lens mount and the sensor. Flange distance is 20mm, sensor width is 44mm.

As you can see, light is hitting the sensor everywhere evenly. Not a great Star image.

What happens if we close down the aperture?

A circle of light hits the sensor, as we'd expect, but there's some stuff going on out at the fringes. Those are the boys jumping up and down at the end of the line. All we've done between the last image and this one is remove the boys from the outer parts of the line.

Great! As the aperture gets smaller, our circle gets smaller. Pinhole cameras here we come!

But as the aperture gets smaller still, the "off the end" stuff gets more important.

If we go even smaller, it gets much worse. Diffraction is just the word for what the missing boys in the line do to our waves.
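The "smaller hole, worse spreading" trade-off can also be sketched without the full wave simulation: under far-field (Fraunhofer) assumptions, the diffraction pattern of a slit is essentially the Fourier transform of the aperture. Toy numbers throughout - a 1024-point grid and slit widths of 128 and 16 samples are arbitrary choices for illustration:

```python
import numpy as np

n = 1024  # illustrative grid size

def pattern_width(slit_samples):
    """Count frequency bins brighter than half the central peak."""
    aperture = np.zeros(n)
    aperture[:slit_samples] = 1.0                  # the open part of the slit
    intensity = np.abs(np.fft.fft(aperture)) ** 2  # far-field intensity
    return int(np.sum(intensity > intensity[0] / 2))

wide, narrow = pattern_width(128), pattern_width(16)
print(wide, narrow)   # the narrow slit spreads its light over far more bins
```

Shrink the aperture by 8x and the central blob of the pattern gets roughly 8x wider - the pinhole camera's fundamental dilemma.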

I promised a few trillionths of a second at a time, but went to a full tenth of a nanosecond here, because nothing much happens other than what you see. When we introduce the lens, things will get radically different because the ends of the lines are closer to the sensor than the middle!

to be continued...

Then I began thinking of that lonely photon, did a search, and found this (unrelated, I know), but in the spirit of experimentation and the development of ultra-low-light photography I thought that I would share:

## Quantum camera takes images of objects that haven't been hit by light

A device uses quantum effects to create images of objects from light that never actually touched them.

www.newscientist.com

I will stop now


Yeah, I'm talking about good old-fashioned classical geometric optics. No quantum effects. If we wanted to talk about how lens coatings work, then we'd have to get into it. But everything I've been talking about has been known since the early 1800's.


Matt

> But everything I've been talking about has been known since the early 1800's.

Which is kind of neat.

> Which is kind of neat.

Those pesky telescopes!

(Imagine that our ancestors were already worrying about diffraction in anticipation of the invention of photography decades later!)

Victor B.

> ...avoid diffraction, which includes stacking if needed and can be utilized. Once diffraction takes its toll on an image, the sharpness can never be brought back. You may be able to trick the eye, but the image will never be as good as if diffraction was minimized.

I wonder if one f/22 frame could help with stacking artifacts - when two branches cross at different depths, there is no image in the stack showing the more distant branch's detail where it nearly crosses behind the front one. You could use some of the diffraction-degraded image for that small region.

Victor B.



> Mathematicians are not interested in the "real" world!

Matt,

This is all dovetailing nicely for me; a week ago, I picked up a copy of Feynman's QED, and I'm now halfway through. I have little arrows drawn on sheets of paper strewn about everywhere!

Dave

> I wonder if one f/22 frame could help with stacking artifacts - When two branches cross at different depths, there is no image in the stack showing the more distant branch's detail where it nearly crosses behind the front one. You could use some of the diffraction-degraded image for that small region.

That is a very good idea!

Matt