Well there are analog cameras
Also I agree that nearly every digital camera has to do some correction, and correcting for lighting / time of day makes our photos nicer. But shouldn’t the end goal be a photo that looks as close as possible to what we’d see naturally?
Analog cameras don’t have the dynamic range of human vision, fall quite short on gamut, use various grain sizes, and can take vastly different photos depending on aperture shape (bokeh), f-stop, shutter speed, the particular lens, focal plane alignment, and so on.
More basically, human eyes can change focus and aperture when looking at different parts of a scene, which photos don’t allow.
To take a “real photo”, one would have to capture an HDR light field, then present it in a way an eye could focus and adjust to any point of it. There used to be a light field digital camera, but the resolution was horrible, and there was no HDR.
https://en.m.wikipedia.org/wiki/Light_field_camera
Everything else is subject to more or less interpretation… and phone cameras in particular have to correct for some crazy diffraction effects because of the tiny sensors they use.
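To make the “focus and adjust to any point” idea above concrete: a light field camera records many slightly offset sub-aperture views, and refocusing is basically shift-and-add over those views. Here is a toy numpy sketch of that; the 3×3 grid of views is synthetic random data (a real camera would give you these views by grouping sensor pixels under each microlens), so it only illustrates the mechanics.

```python
import numpy as np

# Toy shift-and-add refocusing over a 3x3 grid of sub-aperture views.
# The views here are synthetic random images, not real captures.
rng = np.random.default_rng(0)
views = {(u, v): rng.random((32, 32)) for u in (-1, 0, 1) for v in (-1, 0, 1)}

def refocus(views, shift_px: int) -> np.ndarray:
    """Average the views after shifting each by its angular offset (u, v)
    times shift_px; different shift_px values focus at different depths."""
    acc = np.zeros((32, 32))
    for (u, v), img in views.items():
        acc += np.roll(img, (u * shift_px, v * shift_px), axis=(0, 1))
    return acc / len(views)

img_near = refocus(views, 0)  # focus at the captured plane
img_far = refocus(views, 2)   # synthetically refocused after the fact
```

With shift 0 this just averages the views; nonzero shifts align a different depth plane before averaging, which is why focus can be chosen after capture.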
It seems like Vision Pro allows selective focusing.
But then you’d have to use the Vision Pro…
Wouldn’t mind getting a second-hand “like new” one with a scratched front glass/plastic… for the right price, as long as the inner plastic lenses aren’t scratched.
(I know, there’s about no chance of that ever happening)
But not on a static image. They use eye tracking to figure out what you’re looking at and refocus the external cameras based on that.
It’s actually a great idea - an up-to-date light field camera combined with eye tracking to adjust focus. It could work right now in some VR, and presumably the same presentation could work without VR via a front-facing two-camera smartphone array (or maybe one camera with good calibration).
Yup, I was seriously considering getting the Lytro, just to mess around. The main problem is the resolution drop due to needing multiple sensor pixels per “image pixel”, but then having to store them all anyway. So if you wanted a 10 Mpx output image, you might need a 100 Mpx sensor, and shuffle around 100 Mpx of data… just for the result to look like 10 Mpx.
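The tradeoff is simple multiplication: each microlens spends an n × n patch of sensor pixels to get n² directional samples of one output pixel. The 10× figure above corresponds to roughly 3 × 3 angular samples per microlens; the exact factor depends on the microlens design, so treat these numbers as illustrative.

```python
# Sketch of the light-field resolution tradeoff: an n x n patch of
# sensor pixels is traded for n*n angular samples of one output pixel.

def sensor_mpx_needed(output_mpx: float, n: int) -> float:
    """Sensor megapixels needed for output_mpx of refocusable image,
    with n x n angular samples per microlens."""
    return output_mpx * n * n

# Already at ~3x3 angular samples, a 10 Mpx output needs a ~100 Mpx sensor.
print(sensor_mpx_needed(10, 3))  # 90
```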
If we aim at 4K (~8 Mpx) displays, it might still take some time for the sensors and the data processing capability on both ends to catch up. If we were to aim at something like an immersive 360° capture, it might take even longer. Adding HDR and 60 fps video recording would push things way beyond current hardware capabilities.
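A back-of-envelope data rate shows why video pushes things out of reach. The figures below (a 100 Mpx light-field sensor, 12-bit HDR samples, 60 fps) are illustrative assumptions, not specs of any real camera:

```python
# Back-of-envelope raw data rate for hypothetical light-field video.
# Assumed figures: 100 Mpx sensor, 12 bits per sample, 60 fps.

def raw_gbit_per_s(sensor_mpx: float, bits_per_sample: int, fps: int) -> float:
    """Uncompressed sensor readout rate in Gbit/s."""
    return sensor_mpx * 1e6 * bits_per_sample * fps / 1e9

print(raw_gbit_per_s(100, 12, 60))  # 72.0 (Gbit/s, before any compression)
```

Tens of Gbit/s of raw readout is far beyond what current phone-class storage and buses sustain, which is the point being made above.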
The end goal should be some kind of representation of reality, at the very least, even if it’s not “what we see naturally”. A camera can see some things that we can’t, and can’t see some things that we can - at least in a single exposure - so the image is never going to be a perfect visual representation of how anyone remembers the scene.
But to suggest that they don’t represent some aspect of reality because they’re a simulacrum generated by visual data is just self-indulgent too-convenient-to-not-embrace pseudo-philosophy coming from someone whose wealth is tied to selling such bullshit to the public.
The goal here is to make people feel like they’re good at something - taking photos - by manufacturing the result, which not only totally defeats the point of what most people take photos for, but has some incredibly dark and severe edge cases which they clearly haven’t considered (and are motivated to not consider).
Which is just par for the course for tech bros.
It depends on the artistic and technological intent, I think. Valve (tube) amplifiers are inferior to any modern amplifier in every way you could actually measure with an oscilloscope, yet people still build them, and valves are still produced the same way they were in the 1950s, because the imperfections they introduce can sound pleasant - which comes down to psychoacoustic factors with both subjective and objective components. A photo that looks exactly like what we’d see naturally is one potential goal, but it’s not the only one, in my opinion.