Serious answer: the sensors in telescopes and probes don’t work exactly like human eyes. They pick up a different range of frequencies than our cone cells in the first place, and don’t have the same kind of overlapping response curves. There are a lot of tricks and techniques involved in converting an image into something like what we’d see with the naked eye. You can sorta think of it like translating Japanese into English: there’s no perfect formula, and it requires some creative interpretation no matter what.
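If you want to see that in code, here’s a minimal sketch (Python, with invented filter wavelengths, cone curves, and pixel values; nothing from any real mission pipeline) of why mapping a few narrowband sensor readings onto the eye’s broad, overlapping responses involves judgment calls:

```python
# A minimal sketch (not any real mission pipeline) of why "true color"
# needs interpretation: a probe samples brightness through a few narrow
# filters, while the eye integrates broad, overlapping cone responses.
# All filter wavelengths and values below are invented for illustration.
import numpy as np

# Hypothetical narrowband filter centers (nm) and one pixel's measured
# brightness through each filter, in arbitrary units.
filter_nm = np.array([480.0, 550.0, 620.0])
measured  = np.array([0.80, 0.95, 0.40])

def cone(center, width, nm):
    # Gaussian stand-in for a cone cell's sensitivity curve; the real
    # curves are tabulated (e.g. CIE standard observer data).
    return np.exp(-0.5 * ((nm - center) / width) ** 2)

# Evaluate rough S/M/L cone sensitivities at the filter wavelengths.
S = cone(445.0, 30.0, filter_nm)
M = cone(540.0, 45.0, filter_nm)
L = cone(565.0, 50.0, filter_nm)

# Project the three sparse samples onto each cone curve, loosely treating
# L/M/S as R/G/B. Three narrow samples underdetermine a full spectrum, so
# any mapping like this bakes in assumptions -- that's the "creative
# interpretation" step.
rgb = np.array([L @ measured, M @ measured, S @ measured])
rgb /= rgb.max()  # crude normalization into a displayable 0..1 range
print("approximate linear RGB:", rgb.round(3))
```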
The popular images that get published all over are simplified composites and never really reflect the actual data astronomers rely on, so the inaccurate coloring was never a hindrance to scientific progress. It suddenly made the news because a research group decided to reevaluate the old data and reinterpret it against calibrations from other instruments (e.g. the Voyager probes vs. the Very Large Telescope here on Earth). There’s a general-interest factor in “wow, that looks so much different than the old pictures,” when the underlying data really hasn’t changed.
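The gist of the “recalibrate against other equipment” part is fitting correction factors so two instruments agree on shared targets, then applying those factors to the old images. A hedged toy version, with every number invented:

```python
# Hedged sketch of cross-calibration between instruments. The idea: fit
# per-band gain factors so the old camera's measurements of shared
# calibration targets agree with a better-characterized instrument, then
# apply those gains to the old images. Every number here is invented.
import numpy as np

# Rows: calibration targets; columns: three color bands.
probe = np.array([[0.62, 0.88, 0.35],      # old probe camera
                  [0.41, 0.57, 0.22],
                  [0.80, 1.10, 0.47]])
reference = np.array([[0.70, 0.92, 0.30],  # trusted ground instrument
                      [0.46, 0.60, 0.19],
                      [0.90, 1.15, 0.41]])

# Least-squares gain per band: g minimizes ||g * probe - reference||^2
# column-wise, giving g = sum(p*r) / sum(p*p).
gains = (probe * reference).sum(axis=0) / (probe ** 2).sum(axis=0)
print("per-band gains:", gains.round(3))

# Re-render an old pixel with the corrected gains: its apparent color
# shifts, but the underlying measurement is unchanged.
pixel = np.array([0.55, 0.78, 0.30])
print("recalibrated pixel:", (pixel * gains).round(3))
```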
To add to this, we apparently always knew. The famous blue image is more or less the correct hue, but the saturation has been absolutely blown out like a clickbait YouTube thumbnail in order to show faint features more clearly. Somewhere along the line we stopped mentioning that this had been done. Irwin and co just re-calculated it to get the most accurate version yet, because we’ve got a lot more data to work with now than we did back when Voyager 2 did its fly-by.
Sort of? My understanding from reading a handful of articles is that Neptune has a bluish haze layer that’s absent on Uranus, but it’s fairly subtle, and the overall color of both is a pretty similar frosty light green. So it’s not just that the image got oversaturated; that particular blue hue also got applied to the whole planet rather than just a thin layer.
Furthermore, it’s not that the original scientists failed to produce true-color images. The original published images of Neptune had deliberately enhanced colors to better show some features of the cloud tops, and the captions said as much. But that nuance was quickly forgotten, everybody took the deep blue coloring to be the planet’s actual color, and that spread to depictions of the planet everywhere.
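To make the “blown-out saturation” point above concrete, here’s a toy sketch: keep the hue, scale the saturation, and a made-up pale teal (not a measured Neptune value) turns vividly blue.

```python
# Toy illustration of the "blown-out saturation" step: the hue is
# preserved while saturation is scaled up, which is how a pale
# greenish-blue ends up looking vividly blue. The starting color is a
# made-up pale teal, not a measured Neptune value.
import colorsys

pale = (0.55, 0.78, 0.81)                    # (r, g, b) in 0..1
h, s, v = colorsys.rgb_to_hsv(*pale)

for boost in (1.0, 2.0, 4.0):
    r, g, b = colorsys.hsv_to_rgb(h, min(1.0, s * boost), v)
    print(f"saturation x{boost}: rgb = ({r:.2f}, {g:.2f}, {b:.2f})")
```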
As the white/gold versus blue/black dress debate showed, our perception of color is heavily influenced by context, and is more than just a simple algorithm of which rods and cone cells were activated while viewing an image.
How did it take until 2023 to discern the true color of a planet we’ve known about since before humans found Antarctica?
According to a study, 97% of scientists are color blind!
How would they even know, unless the ones doing the colour-blindness study are part of the 3%?
Similar reason why these three photos come out in slightly different colors: image sensor type, quality, and postprocessing.
Older images are less accurate, especially if the system capturing them isn’t designed to match the human eye’s response.
Even then, postprocessing is inevitable.
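As a quick illustration of that last point, here’s a toy “develop” step for two hypothetical cameras with different white-balance gains and gamma curves; all numbers are invented:

```python
# Small sketch of how identical scene data can come out in different
# colors: two hypothetical cameras apply different white-balance gains
# and gamma curves in postprocessing. All numbers are invented.
import numpy as np

scene = np.array([0.40, 0.60, 0.55])   # "true" linear RGB radiance

def develop(raw, wb_gains, gamma):
    # White balance, clip to the displayable range, then gamma-encode.
    out = np.clip(raw * wb_gains, 0.0, 1.0)
    return out ** (1.0 / gamma)

cam_a = develop(scene, np.array([1.00, 0.95, 1.10]), gamma=2.2)
cam_b = develop(scene, np.array([1.15, 1.00, 0.90]), gamma=1.8)
print("camera A renders:", cam_a.round(3))
print("camera B renders:", cam_b.round(3))
```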