Is the iPhone Camera Too Smart? Or Not Smart Enough?

What is a photograph? Technically and literally speaking, it’s a drawing (graph) of light (photo). Sentimentally speaking, it’s a moment in time, captured for all eternity, or until the medium itself rots away. Originally, these light-drawings were recorded on film that had to be developed with a chemical process; nowadays they are more often captured by a digital image sensor and available for instant admiration. Anyone can take a photograph, but producing a good one requires some skill: knowing how to use the light and the camera in concert to capture an image.

Eye-Dynamic Range

The point of a camera is to preserve what the human eye sees in a single moment in space-time. This is difficult because eyes have what is described as high dynamic range. Our eyes can process many exposure levels in real time, which is why we can look at a bright sky and still pick out details in the white fluffy clouds. But a camera sensor can only capture one exposure level at a time.

In the past, photographers would create high dynamic range (HDR) images by taking multiple exposures of the same scene and stitching them together. Done just right, each element in the image looks as it does in your mind’s eye. Done wrong, it robs the image of contrast and you end up with a murky, surreal soup.

Image via KubxLab
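For the curious, the classic bracketed-exposure merge is simple enough to sketch in code. Below is a minimal illustration using OpenCV’s exposure fusion tools; the file names are hypothetical, and this shows the general technique rather than what any particular camera actually runs.

```python
# A minimal sketch of classic multi-exposure HDR merging with OpenCV.
# The file names are hypothetical; any bracketed set of shots of the
# same scene (under-, normally-, and over-exposed) would do.
import cv2

# Load the bracketed exposures (darkest to brightest).
paths = ["under.jpg", "normal.jpg", "over.jpg"]
frames = [cv2.imread(p) for p in paths]

# Align the frames so hand-held shots line up before blending.
cv2.createAlignMTB().process(frames, frames)

# Mertens exposure fusion: weight each pixel by how well-exposed,
# contrasty, and saturated it is, then blend across the stack.
fused = cv2.createMergeMertens().process(frames)

# The result comes back as floats in [0, 1]; scale back to 8-bit.
cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```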

Newer iPhone Pro cameras attempt to do HDR, and much more, with every shot, whether the user wants it or not. It’s called computational photography: image capture and processing that relies on digital computation rather than optical processes. When the user presses the shutter button, the camera captures up to nine frames across its different lenses, each with a different exposure level. Then the “Deep Fusion” feature takes the cleanest parts of each shot and stitches them together into a tapestry of lies, er, an image with extremely high dynamic range. Specifically, the iPhone 13 Pro’s camera has three lenses and uses machine learning to automatically adjust lighting and focus. Sometimes it switches between them, sometimes it uses data from all of them. It’s arguably more software than hardware. So what is a camera, exactly? At this point, it’s a package deal.
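We don’t know exactly how Deep Fusion weighs its frames, since the pipeline is proprietary, but the general flavor of “keep the cleanest parts of each shot” can be approximated with a simple per-tile sharpness pick. The toy sketch below assumes a hypothetical burst of nine saved frames and scores tiles with the variance of the Laplacian, a cheap sharpness metric; it is an illustration of the idea, not Apple’s algorithm.

```python
# Toy approximation of "take the cleanest parts of each shot":
# score every frame's sharpness tile by tile and keep the sharpest
# tile from the burst. A real pipeline would also align the frames
# and do far more sophisticated denoising and tone mapping.
import cv2
import numpy as np

def fuse_burst(frames, tile=64):
    h, w = frames[0].shape[:2]
    out = np.zeros_like(frames[0])
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            best_patch, best_score = None, -1.0
            for f in frames:
                patch = f[y:y + tile, x:x + tile]
                gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
                # Variance of the Laplacian as a quick sharpness score.
                score = cv2.Laplacian(gray, cv2.CV_64F).var()
                if score > best_score:
                    best_patch, best_score = patch, score
            out[y:y + tile, x:x + tile] = best_patch
    return out

# Hypothetical burst of nine frames captured in quick succession.
burst = [cv2.imread(f"burst_{i}.jpg") for i in range(9)]
cv2.imwrite("fused_burst.jpg", fuse_burst(burst))
```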

Tarted-Up Toddlers

Various cameras, like the Polaroid, the Diana F, or the Brownie, have been prized over the years for the unique effects they lend to the light-drawings they produce. The newer iPhone cameras can wear all of these hats and still bring more to the table, but is that what we want? What if it comes at the cost of control over our own creations? Whereas the early iPhones would let the user shoot in RAW mode, the newer ones hide it away.

Just the other day, our own Tom Nardi received a picture from his daughter’s preschool. Nothing professional, just something one of the staff took with their phone, a practice that has become more common while COVID protocols remain in place. Tom was shocked to see his daughter looking lipsticked and rosy-cheeked, as though she’d been made up for some child beauty pageant or a “high-fashion” photo session at the mall. In reality, there was some kind of filter in place that turned her sweet little face into 3-going-on-30.

Whether the photographer was aware that this feature-altering filter was active is another matter. In this case, they had simply forgotten the filter was on, and turned it off for the rest of the pictures. The point is, cameras shouldn’t alter reality, at least not in ways that make us uncomfortable. What’s fine for an adult is usually not meant for children, and beauty filters definitely fall into that category. The ultimate issue here is the ubiquity of the iPhone: it has the power to shape the standard of ‘normal’ pictures going forward. And doing so by locking the user out of the choice is a huge problem.

Bad Apples?

Whereas the Polaroid et al. recorded reality in interesting ways, the iPhone camera distorts reality in creepy ways. Users have reported that images look odd and uncanny, or over-processed. The camera treats the low light of dusk as a problem to be solved or a blemish to be erased, rather than an interesting phenomenon worth recording. The difference is between using a camera to capture what the eye sees, and capturing reality only to push it toward some predetermined image ideal that no single traditional camera could have produced.

You can’t fault Apple for trying to get the absolute most they can out of tiny camera lenses that aren’t really supposed to bend light that way. But when the software they produce purposely distorts reality and removes the choice to see things as they really are, then we have a problem. With great power comes great responsibility and all that. In the name of smoothing out sensor noise, the camera already does a significant amount of guessing, painting in what it thinks is in your image. As the cameras do more processing and interpreting, Apple will either have to add more controls to manage these features, or keep the interface sleek, minimalist, and streamlined, taking even more control away from the user.

Where does it end? If Apple got enough pressure, would they build certain other distortions into the software? When the only control we have over a tool is whether or not to use it at all, we’ve lost something important. Cameras should keep striving to capture reality as the eye sees it, not massage it toward some ideal.

