The GetDPI Photography Forum


Perspective and Distortion

MGrayson

Subscriber and Workshop Member
Just for fun:
An illustration of the effect of distance on perspective and distortion that I did a long time ago (still using the first Canon 5D). I used a zoom lens, changed the distance to the building, and cropped to identical framing:

View attachment 209588

The plane that I chose to keep constant through the sequence is the main facade. Interesting how the tympanum seems to be "pushed out". And the dome drops. When Foster and Partners planned the dome (you can visit it and walk above the parliamentarians, which is a great idea for a democracy), they had the problem that it sits above the middle of the building, not above the main facade. They chose a very steep profile (it's elliptical, not circular); otherwise it would only have been visible from a very long distance. The problem of sight lines and perspective is not exclusive to photography ;-)
Brilliant!

I love the growth of depth around the columns.
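Matt's zoom-and-step-back demo can be sketched in a few lines of toy geometry (my own illustrative numbers, not measurements of the building): hold the facade framing constant by scaling focal length with distance, and the relative size of anything behind the facade changes with distance alone.

```python
# Toy model of the zoom-and-step-back demo (my own numbers, not
# measurements of the Reichstag): keep a facade of height 15 framed
# identically by scaling focal length with camera distance d, and watch
# a dome of height 10, sitting 20 units behind the facade, change size.

def image_height(obj_height, distance, f):
    """Pinhole projection: image size of an object at a given distance."""
    return f * obj_height / distance

for d in (10.0, 40.0, 160.0):
    f = d / 15.0                          # keeps the facade at height 1.0
    facade = image_height(15.0, d, f)
    dome = image_height(10.0, d + 20.0, f)
    print(f"d={d:6.1f}  facade={facade:.2f}  dome={dome:.3f}")

# The facade stays the same size while the dome grows as we back up,
# which is why the dome nearly vanishes in the close-up frames.
```

The same logic applies to the tympanum and columns: anything not in the chosen constant plane changes size in the frame as the camera moves.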
 

Geoff

Well-known member
Thanks Matt for this - doing a great job for us all here!

(although maybe the definition of distortion might need a bit more tweaking! :))
 

MGrayson

Subscriber and Workshop Member
As I look over the examples above, I think that two points bear emphasis.

What you (or your camera) sees depends ONLY on where you are.

What your print looks like also depends on what plane you choose for the sensor, and what chunk of that plane you choose to include.


As a result, any two overlapping images taken from the same location can look identical if they are positioned with respect to your eye as they were when they were originally captured.
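A minimal pinhole sketch of the first point (my own toy numbers, not from Matt's write-up): with the camera fixed at the origin, changing the sensor plane's distance only rescales the image, so the perspective is unchanged.

```python
# Pinhole sketch (hypothetical toy scene): a point (X, Y, Z) seen from a
# pinhole at the origin lands at (f*X/Z, f*Y/Z) on a sensor plane at
# distance f. The rays depend only on camera position; f just rescales.

def project(point, f):
    """Project a 3D point through a pinhole at the origin onto z = f."""
    x, y, z = point
    return (f * x / z, f * y / z)

scene = [(1.0, 2.0, 10.0), (-3.0, 0.5, 20.0), (0.0, -1.0, 5.0)]

img_wide = [project(p, f=1.0) for p in scene]    # shorter focal length
img_long = [project(p, f=2.0) for p in scene]    # longer, same position

# Every image point is exactly doubled: the longer lens gives the same
# perspective, merely enlarged, like cropping a wide shot and printing
# it bigger.
for (xw, yw), (xl, yl) in zip(img_wide, img_long):
    assert abs(xl - 2 * xw) < 1e-12 and abs(yl - 2 * yw) < 1e-12
```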
 

cunim

Well-known member
The world we see never looks "wrong" because the position of the imaging device (the eye) and that of the observer are always the same. Photography gave us a completely new condition in which the point from which the image is taken and the point from which it is observed are decoupled. Only then do we notice the incongruity of an observation with projection effects, such as those from wide-angle lenses. Yet, at the same time, we can learn to accept images of the world that are outside our ability to experience them.
What a great way to put it. The visual system has a wired ability to correct what it sees and also an ability for interpretive learning. The wired component is both genetically determined and modulated by experience during critical periods of postnatal development. The wired component is dominant, but perception can be modulated by later experience. I wonder if what you describe is why distortions such as boat prow and looming effects are more bothersome in some orientations (where the distortion violates our predetermined expectations) and to some of us. Those who have less experience with architecture and with photos have not learned to make perceptual accommodations to these distortions. Would make for an interesting paper.

Matt, thanks for this topic. I need to read all of it half a dozen more times to stop the buzzing in my head.
 
Last edited:

Shashin

Well-known member
I enjoy making 180 degree and 360 degree view photos. It challenges the brain; or maybe I am distorted.
180
View attachment 209590
360
View attachment 209589
If you take those images and mount them on a curved 180 or 360 degree surface, then a viewer at the center of those curved prints would experience those images as natural--the curved lines would appear straight. The images are not distorted: the projection of them on a flat plane is the cause of their "unusual" appearance.
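Shashin's point about curved prints can be seen in a few lines (my own toy setup, not his actual stitching workflow): project a straight horizontal edge with a rectilinear camera and with a cylindrical "panorama" camera, then lay both out flat.

```python
import math

# Toy comparison (my own setup): project a straight horizontal edge,
# the points (x, 1, 5), with a rectilinear camera and with a cylindrical
# panorama camera, both at the origin.

def rectilinear(x, y, z, f=1.0):
    return (f * x / z, f * y / z)

def cylindrical(x, y, z, f=1.0):
    theta = math.atan2(x, z)      # azimuth: position along the curved strip
    r = math.hypot(x, z)          # horizontal distance to the point
    return (f * theta, f * y / r)

xs = [-5.0, -2.5, 0.0, 2.5, 5.0]
rect = [rectilinear(x, 1.0, 5.0) for x in xs]
cyl = [cylindrical(x, 1.0, 5.0) for x in xs]

# Rectilinear keeps the edge straight: v is constant across the frame.
assert all(abs(v - rect[0][1]) < 1e-12 for _, v in rect)

# The cylindrical image, laid out flat, bows: v peaks at the center and
# sags toward the ends. Wrapped around the viewer, it would look straight.
assert cyl[2][1] > cyl[0][1] and cyl[2][1] > cyl[4][1]
```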
 

Shashin

Well-known member
What a great way to put it. The visual system has a wired ability to correct what it sees and also an ability for interpretive learning. The wired component is both genetically determined and modulated by experience during critical periods of postnatal development. The wired component is dominant, but perception can be modulated by later experience. I wonder if what you describe is why distortions such as boat prow and looming effects are more bothersome in some orientations (where the distortion violates our predetermined expectations) and to some of us. Those who have less experience with architecture and with photos have not learned to make perceptual accommodations to these distortions. Would make for an interesting paper.

Matt, thanks for this topic. I need to read all of it half a dozen more times to stop the buzzing in my head.
Just like anything, you get used to things. It is kind of like cooking to taste: just a little bit more sugar. We have probably all put a bit too much contrast into an image simply because we got used to it.

There is certainly an acclimatization to different types of photography and techniques. The modernist movement started experimenting with the photograph, and those artists and their work caused a stir. Coburn made The Octopus in 1912 by pointing his camera down to create an abstract landscape. Around 1927, the Japanese photographer Yamamoto experimented with manipulating the plane of the printing paper to cause distortions. Kertész distorted nudes in flexible mirrors in 1933. These photographers were breaking rules and defying what people expected to see in photographs. Today, these images would be fairly run of the mill. We have incorporated that kind of visual vocabulary, or grammar, into our expectations. Does anyone get shocked by Picasso anymore?
 

MGrayson

Subscriber and Workshop Member
If you take those images and mount them on a curved 180 or 360 degree surface, then a viewer at the center of those curved prints would experience those images as natural--the curved lines would appear straight. The images are not distorted: the projection of them on a flat plane is the cause of their "unusual" appearance.
Exactly! Yes yes yes!!
 

marc aurel

Active member
As I look over the examples above, I think that two points bear emphasis.

What you (or your camera) sees depends ONLY on where you are.

What your print looks like also depends on what plane you choose for the sensor, and what chunk of that plane you choose to include.


As a result, any two overlapping images taken from the same location can look identical if they are positioned with respect to your eye as they were when they were originally captured.
So "what your print looks like" does not depend on sensor size, or on whether the lens used is a symmetrical or a retrofocus design. There were lengthy discussions about this in other threads. Good to have that sorted out.

I don't want to complicate things. But I would like to add:
The projection method of your photographic system has an influence on "what your print looks like" too. By photographic system I mean lens, sensor, and software correction. A fisheye lens will look different from a lens with rectilinear projection. And a lens that has, for example, some barrel distortion will look slightly different from a lens with perfect rectilinear projection. But by using software correction you can transform it to have perfect rectilinear projection, and then both look identical again.

An example: I took a photo of a church facade with a lens that is corrected to rectilinear projection in software using a profile. It's the GF 30mm TS with 15mm of shift, so a very wide lens with a lot of shift.

Image with perfect rectilinear projection, crop of the figure on the top of the church:
GF 30mm TS-rectilinear-projection-crop.jpg

Image with added barrel distortion, crop of the figure on the top of the church:
GF 30mm TS-with-some-barrel-distortion-crop.jpg

The second one looks better, doesn't it? Less stretched. So is the second lens better? No, it's not. It's the same image from the same lens and sensor, just with a software transformation, which means a different projection method is being used. And it is not one you would want to use for a photograph of architecture. This part of the image may look "more natural", but if you look at the full image it's awful, because of all the barrel distortion ;-)

Image with perfect rectilinear projection, full image:
GF 30mm TS-rectilinear-projection.jpg

Now the one with barrel distortion (I left the edges visible so you can see the amount of software transformation I used):
GF 30mm TS-barrel-distortion.jpg

This church has eaten too much and has grown fat ;-)

I think these slight differences that people perceive may lead to the idea that a certain lens or sensor size is somehow better than another with regard to distortion.
The question of image quality is a completely different one though...
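For what it's worth, the kind of software "projection change" described above is often modelled with a one-parameter radial term (a sketch under that assumption; the actual profile marc aurel's raw converter applies is not published in the thread, and the value of k here is arbitrary).

```python
# One-parameter radial distortion model (an illustration; the real lens
# profile is unknown, and k is an arbitrary value). Normalized image
# coordinates, with the optical center at (0, 0).

def barrel(x, y, k=-0.1):
    """Radial distortion r -> r * (1 + k * r^2); k < 0 gives barrel."""
    r2 = x * x + y * y
    s = 1.0 + k * r2
    return (x * s, y * s)

# The center is unmoved, and points are pulled inward more strongly the
# farther out they sit: exactly what relaxes the stretched corner figure
# while bending straight edges across the full frame.
assert barrel(0.0, 0.0) == (0.0, 0.0)
cx, cy = barrel(1.0, 1.0)                 # a corner point
assert cx < 1.0 and cy < 1.0              # pulled toward the center
assert (0.5 - barrel(0.5, 0.0)[0]) < (1.0 - barrel(1.0, 0.0)[0])
```

Inverting the same mapping (or applying it with k > 0 for pincushion) is what a correction profile does to restore rectilinear projection.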
 
Last edited:

MGrayson

Subscriber and Workshop Member
So "what your print looks like" does not depend on sensor size, or on whether the lens used is a symmetrical or a retrofocus design. There were lengthy discussions about this in other threads. Good to have that sorted out.

I don't want to complicate things. But I would like to add:
The projection method of your photographic system has an influence on "what your print looks like" too. By photographic system I mean lens, sensor, and software correction. A fisheye lens will look different from a lens with rectilinear projection. And a lens that has, for example, some barrel distortion will look slightly different from a lens with perfect rectilinear projection. But by using software correction you can transform it to have perfect rectilinear projection, and then both look identical again.

An example: I took a photo of a church facade with a lens that is corrected to rectilinear projection in software using a profile. It's the GF 30mm TS with 15mm of shift, so a very wide lens with a lot of shift.

Image with perfect rectilinear projection, crop of the figure on the top of the church:
View attachment 209604

Image with added barrel distortion, crop of the figure on the top of the church:
View attachment 209605

The second one looks better, doesn't it? Less stretched. So is the second lens better? No, it's not. It's the same image from the same lens and sensor, just with a software transformation, which means a different projection method is being used. And it is not one you would want to use for a photograph of architecture. This part of the image may look "more natural", but if you look at the full image it's awful, because of all the barrel distortion ;-)

Image with perfect rectilinear projection, full image:
View attachment 209606

Now the one with barrel distortion (I left the edges visible so you can see the amount of software transformation I used):
View attachment 209607

This church has eaten too much and has grown fat ;-)

I think these slight differences that people perceive may lead to the idea that a certain lens or sensor size is somehow better than another with regard to distortion.
The question of image quality is a completely different one though...
Apologies. I thought it was obvious that I was talking about "within the confines of a model".

Models are, of necessity, incomplete. Their purpose is to explain a small number of phenomena with as few assumptions as possible.

Constant vertical acceleration is not even useful for artillery, but it DOES explain why throwing a rock into the air makes a (pretty close to) parabolic trajectory. And by "explain", I mean that the model can be solved: d = 1/2 a t^2 + v t + h

Not discussed, solved.
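Matt's point that the toy model is solved, not merely discussed, can be made concrete (the throw speed and height below are my own arbitrary numbers): the landing time of d = 1/2 a t^2 + v t + h is just a quadratic root.

```python
import math

# The constant-acceleration model really is solved in closed form:
# d(t) = 1/2 a t^2 + v t + h, and the landing time is a quadratic root.
# Illustration values only (gravity, throw speed, hand height).

def height(t, a, v, h):
    return 0.5 * a * t * t + v * t + h

def landing_time(a, v, h):
    """Positive root of 1/2 a t^2 + v t + h = 0, for a < 0 (gravity)."""
    disc = v * v - 2.0 * a * h            # discriminant with A = a/2, C = h
    return (-v - math.sqrt(disc)) / a     # the later, positive root

a, v, h = -9.81, 12.0, 1.5
t_land = landing_time(a, v, h)
assert t_land > 0
assert abs(height(t_land, a, v, h)) < 1e-9    # lands exactly at d = 0
```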

We have a choice. Simplify and get an exact answer, or encompass reality and just talk about it. You may find no value in the former, but without it, we wouldn't have cameras at all.

My purpose in this thread has been to find the simplest assumptions to explain some phenomena. By choosing to isolate two types of - call them effects rather than distortions - I could take a toy universe where they were clear. "Light" means a straight line. "Image" comes from a zero width pinhole camera. "Print" means 2-dimensional projection. No offense to actual prints was intended. Of course, I'm not the first person to use this simplified world. It has been popular since perspective drawing was invented.

These are gross simplifications, yet they allow an exact solution which demonstrates the effects under discussion. You are welcome to find that worthless. I readily accept the model's limitations. They were deliberate.
 
Last edited:

Ben730

Active member
I am neither a mathematician nor a physicist, so please excuse my unprofessional attempt to explain this.
I have made a drawing to explain why the sensor format has an influence on the distortion of the image, given the same image section (subject) and the same camera position, and therefore different focal lengths.


The focal point (on the red line) shifts closer to the subject due to the longer focal length of the large sensor (blue).
The green triangle is not the same as the blue one. The angles are different.
From this I conclude that a larger sensor causes less distortion in the corners.

Bildschirmfoto 2024-01-09 um 13.41.20.png
 

MGrayson

Subscriber and Workshop Member
I am neither a mathematician nor a physicist, so please excuse my unprofessional attempt to explain this.
I have made a drawing to explain why the sensor format has an influence on the distortion of the image, given the same image section (subject) and the same camera position, and therefore different focal lengths.


The focal point (on the red line) shifts closer to the subject due to the longer focal length of the large sensor (blue).
The green triangle is not the same as the blue one. The angles are different.
From this I conclude that a larger sensor causes less distortion in the corners.

View attachment 209608
True as set up. But if you move the larger camera back so that the entrance pupils (or nodal points, I forget which) of the two systems match up, then the angles will be the same. This is a big problem with focus stacking in macro photography.
 

Ben730

Active member
Normally I can't move the camera further back. In any case, I usually stand with my back against a wall, a street lamp, a garbage bin or a white delivery van (I hate them!) when taking architecture and interior shots.
 

MGrayson

Subscriber and Workshop Member
Normally I can't move the camera further back. In any case, I usually stand with my back against a wall, a street lamp, a garbage bin or a white delivery van (I hate them!) when taking architecture and interior shots.
Then yes: if you can't back up, sensor size is a consideration. I mean it quite seriously when I say that phone cameras have their place - usually for their near-infinite DoF, but working distance counts, too.
 

dchew

Well-known member
Normally I can't move the camera further back. In any case, I usually stand with my back against a wall, a street lamp, a garbage bin or a white delivery van (I hate them!) when taking architecture and interior shots.
Hi Ben,
I think what Matt means is just the distance between the lens and the film plane. Let's assume you want the same scene in both formats. In that case, you will put the lens in the same place for both formats. A more realistic comparison is what I've drawn below. In order to get the same scene on two different formats, you need a different focal length lens (longer for the larger format).

Dave

1704811032253.png
 

Ben730

Active member
Hi Ben,
I think what Matt means is just the distance between the lens and the film plane. Let's assume you want the same scene in both formats. In that case, you will put the lens in the same place for both formats. A more realistic comparison is what I've drawn below. In order to get the same scene on two different formats, you need a different focal length lens (longer for the larger format).

Dave

View attachment 209613
So, this means that if you shoot the GFX + 30TS you only have to go 10mm forward with the camera/sensor to shoot exactly the same image as the IQ3 100 + 40HR?
 

pegelli

Well-known member
@Ben730, yes: you have to move the sensor back while keeping the place where the top/bottom rays cross in the same position. In your diagram one of the images is out of focus: the focal length for the large sensor needs to be 3x longer, so the lens also needs to sit three times further from the sensor, while in your diagram it's hardly twice as far (see Dave Chew's diagram for how the end state should look). The only way to do that (assuming the small image is in focus) is to move the big sensor backwards, and when everything is set up correctly the angles "alpha" and "beta" become equal, so the corner distortion becomes the same as well.
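The "angles become equal" claim can be checked with one formula (a minimal sketch with my own round numbers; the thread's diagrams are not reproduced here): the ray angle to the sensor corner is atan(half-diagonal / focal length), and scaling sensor and focal length together, which is what identical framing from one spot requires, leaves it unchanged.

```python
import math

# Corner-ray angle for a sensor of given width/height behind a lens of
# focal length f (thin-lens pinhole idealization, my own round numbers).

def corner_angle(width, height, f):
    """Angle, in degrees, between the lens axis and the corner ray."""
    half_diag = math.hypot(width, height) / 2.0
    return math.degrees(math.atan2(half_diag, f))

small = corner_angle(44.0, 33.0, 30.0)    # 44x33 sensor, 30 mm lens
large = corner_angle(132.0, 99.0, 90.0)   # 3x the sensor, 3x the lens
assert abs(small - large) < 1e-9          # same corner angle either way
```

With mismatched scaling, as in the original drawing, the two angles differ, which is exactly the green-vs-blue triangle discrepancy Ben drew.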
 

marc aurel

Active member
So, this means that if you shoot the GFX + 30TS you only have to go 10mm forward with the camera/sensor to shoot exactly the same image as the IQ3 100 + 40HR?
You have to align the locations of the entrance pupils of the two lenses. Only then do you have the same distance to the object. Where that plane lies depends on the lens construction and cannot simply be calculated from the difference in focal length between the two lenses, as far as I know.

The other part of your question refers to focal length / angle of view. A 30mm lens on a 44x33mm sensor is equivalent to a lens of about 36.8mm on a 54x40mm sensor (since both sensors have almost the same aspect ratio, I can just use the widths for the calculation: 30mm x 54mm / 44mm ≈ 36.8mm). So you would have to crop the GF 30mm TS a bit for the same angle of view. But otherwise yes – you will see the same image.
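The equivalence arithmetic above, spelled out (sensor widths are taken from the post; `equivalent_focal` is my own hypothetical helper name):

```python
# Equivalent focal length from sensor widths, as in the post's
# calculation. `equivalent_focal` is a hypothetical helper name.

def equivalent_focal(f_mm, width_from_mm, width_to_mm):
    """Focal length on the second sensor giving the same angle of view."""
    return f_mm * width_to_mm / width_from_mm

f_eq = equivalent_focal(30.0, 44.0, 54.0)   # 30 mm on 44 mm-wide sensor
assert round(f_eq, 1) == 36.8               # about 36.8 mm, as in the post
```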
 
Last edited:

cunim

Well-known member
This geometrically challenged photo unit (me) is slowly coming to grips with how a camera system functions. One interesting side effect of that is a greater appreciation of architectural painting. It seems to me that painters have the freedom to introduce localised distortions that make an image perceptually "better" even as they violate the model of what a projection should look like. @marc aurel 's church example is a photographic illustration of how that might work - but doesn't. The camera can't paint a bit of barrel distortion in at just the one place it looks better. In contrast, brush artists (the really good ones) can do barrel here and keystone there and .... you get the idea. Look at this image from Cooper. All sorts of subtle geometrical tweaking going on in there, and the result is pleasing. I need to try to find a photo of the same building to compare.

As @Shashin points out, there is a long history of using optical distortion in artistic photography, but I suppose those experiments are not particularly relevant to architecture. There, the best we can do as photographers is to apply movements, and we may have to learn to like the results because of the constraints of the optical model. At least that is what this novice comes away with.
 
