Check Apple’s support page and you’ll see that the new iPhone SE supports portrait mode while the iPhone 8 does not, even though both phones have only one rear lens and their camera specifications are essentially identical.
Normally, a phone needs dual cameras to take bokeh photos like those of portrait mode. Much like human binocular vision, two lenses in different positions capture the scene from slightly different angles; the difference in perspective (parallax) is then used to estimate depth, so the background can be blurred while the subject stays sharp. The Plus series on today’s list, as well as the X, XS, and 11 of recent years, all rely on a multi-camera system for portrait bokeh.
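To make the parallax idea concrete, here is a minimal sketch of how disparity maps to depth. The values and the helper `depth_from_disparity` are hypothetical, not Apple’s implementation; the underlying relation is the standard stereo-geometry formula, depth = focal length × baseline ÷ disparity.

```python
def depth_from_disparity(focal_length_px, baseline_mm, disparity_px):
    """Estimate the depth (in mm) of a point from the pixel shift
    (disparity) it shows between two cameras a known distance apart."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_mm / disparity_px

# A nearby subject shifts a lot between the two lenses; a distant
# background barely shifts at all. That contrast in disparity is
# what lets the phone separate subject from background.
near = depth_from_disparity(focal_length_px=1500, baseline_mm=10, disparity_px=50)  # 300.0 mm
far = depth_from_disparity(focal_length_px=1500, baseline_mm=10, disparity_px=2)    # 7500.0 mm
```

With the depth of each pixel estimated this way, the camera software can keep near pixels sharp and blur far ones.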
So how does the iPhone’s single front-facing camera manage it? The key is the infrared dot projector in the Face ID system, which captures depth data accurate enough to stand in as a “secondary lens”.
As for the new iPhone SE, its sensor is too old to extract a parallax map on its own, so it relies almost entirely on the machine-learning algorithms of the A13 Bionic chip to estimate and generate a depth map.
The portrait bokeh the new iPhone SE achieves is about the limit of what a single-camera phone can do through software optimization. Strictly speaking, the credit goes to the A13 chip: without its machine-learning capabilities, an outdated camera alone would clearly deliver a compromised shooting experience.
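Whether the depth map comes from dual cameras or from a machine-learning estimate, the final bokeh step is the same: blur the pixels judged to be far away and keep the near subject sharp. The sketch below is a toy illustration of that compositing step; `fake_bokeh`, the box blur, and the threshold are all illustrative assumptions, not the iPhone’s actual pipeline.

```python
import numpy as np

def fake_bokeh(image, depth, threshold, blur_radius=1):
    """Toy portrait-bokeh compositor: blur pixels whose estimated
    depth exceeds `threshold`, leaving the nearer subject sharp.
    `image` is an HxW grayscale array, `depth` an HxW depth map
    (e.g. produced by an ML depth estimator)."""
    # Naive box blur of the whole frame (wraps at edges; fine for a sketch).
    blurred = np.zeros_like(image, dtype=float)
    for dy in range(-blur_radius, blur_radius + 1):
        for dx in range(-blur_radius, blur_radius + 1):
            blurred += np.roll(np.roll(image.astype(float), dy, axis=0), dx, axis=1)
    blurred /= (2 * blur_radius + 1) ** 2

    # Composite: background pixels take the blurred value.
    result = image.astype(float).copy()
    result[depth > threshold] = blurred[depth > threshold]
    return result
```

The quality of the result depends entirely on the depth map, which is why the SE’s bokeh stands or falls with the A13’s depth estimation.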
This more or less proves that it still makes sense to develop multi-camera systems for smartphones. An ultra-wide lens widens the field of view, a periscope telephoto enables lossless zoom, and “special lenses” such as ToF and LiDAR sensors assist augmented reality (AR) detection. These are features that no OTA update or algorithm alone can deliver.