The term 'computational photography' itself really came in with Nokia's 808 PureView, the idea being to take a huge sensor of tiny pixels and then combine their output into 'oversampled' average values for each lower resolution (5MP) 'super-pixel' - see the sketch after this list. Here, the computation happened in a dedicated processing chip, since the main smartphone processor was nowhere near powerful enough on its own. The system worked rather wonderfully, though with several downsides:
- the large sensor (1/1.2" in the 808's case) required a matching vertical depth for its optics, making the 808 'courageously' thick(!)
- the 2011-era sensor was relatively old tech, i.e. there was no Back Side Illumination (BSI) and no Optical Image Stabilisation (OIS), two essentials in the camera phones that were to follow.
- the 808 ran Symbian, a fine OS for the 'noughties' but one which was showing its age (and that of its ecosystem) by 2012, when the 808 finally went on sale.
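To make that oversampling idea concrete, here's a minimal sketch in Python - my own illustration, not Nokia's algorithm, assuming the sensor data arrives as a greyscale numpy array whose dimensions divide evenly by the binning factor. The 808's real ratio wasn't a neat square and its dedicated chip did far more, but the principle of averaging a block of tiny pixels into one cleaner 'super-pixel' is the same:

```python
import numpy as np

def oversample(sensor, n):
    """Average each n x n block of sensor pixels into one 'super-pixel'.

    Random noise is largely uncorrelated between neighbouring pixels,
    so each block average comes out visibly cleaner than any single
    pixel, at the cost of output resolution.
    """
    h, w = sensor.shape
    blocks = sensor.reshape(h // n, n, w // n, n)
    return blocks.mean(axis=(1, 3))
```

For example, with n=3 a (hypothetical) 45MP array comes out as a 5MP one, each output pixel being the average of nine sensor pixels.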
You couldn't fault the purity and quality of the 808's images, but the three caveats above meant that further progress was needed. The Lumia 1020, a year later, addressed all three, with:
- a slightly smaller (1/1.5") sensor, keeping the camera's vertical depth manageable.
- BSI and OIS both onboard for handheld low light shots par excellence...
- a modern, fully Internet-era OS in Windows Phone 8 (later updated to 8.1).
Nothing's perfect though, and the 2013 Lumia 1020 had its own caveat, namely that the oversampling down from the higher resolution sensor had to be done in the main processor, since there was no dedicated companion image processor (the 808's had been 'in development for five years' and could only be used with that particular phone). The result was that it took a full four seconds to oversample and save a JPG photo - and this happened in the 'foreground', meaning that the user had to sit around and wait. Plus Windows Phone itself was starting to look a little long in the tooth (with large tiles, a design for lower resolution screens, and so on), not to mention a fairly lowly market share, which meant that third party applications weren't always plentiful.
But the idea of PureView 'computational photography' was a good one: using digital means to make more of the physical light received. One approach would have been to take the 1020's PureView sensor and system and throw much faster chipsets at it - something I'd dearly like to have seen. Imagine a 1080p-screened, Snapdragon 820-powered Lumia 1020 successor!
However, Nokia (and then Microsoft, taking on the existing in-production designs when it bought up Nokia's phone hardware business) went a different way, with the Lumia 1520, 930 and then 950 and 950 XL all going for 'only' 20MP and a much reduced PureView oversampling ratio, down to 8MP for their output. The main benefit was speed, of course, with not only shot-to-shot times of less than a second but also the possibility of genuine multi-shot HDR (bracketing, something we'd first seen in the phone world on the iPhone 4S), with the digital processing (combining exposures) pushed into the background while the user got on with something else on the phone.
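For a flavour of the bracketing half of that, here's a sketch using OpenCV's off-the-shelf Mertens exposure fusion - an illustration of the principle only, not the Lumias' actual (and unpublished) pipeline:

```python
import cv2

def fuse_bracket(frames):
    """Fuse a list of differently-exposed 8-bit frames of one scene.

    Mertens fusion weights each pixel by contrast, saturation and
    'well-exposedness', so highlights come mostly from the shortest
    exposure and shadows from the longest, with no exposure metadata
    needed. The result is a float image in [0, 1], ready to scale to
    8-bit and save.
    """
    return cv2.createMergeMertens().process(frames)
```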
Results were good, though, on the whole, up with the Lumia 1020 (and 808 before it), as you'll see from my chart below, which looks at different ways of achieving ultimate image quality from a phone-sized camera:
The intriguing part of the chart is up at the top, where we have image quality that's supposed to be as good as that from the likes of the Nokia 808 and Lumia 950 (etc., watch this space for my feature comparisons!) but with more mundane specifications - the Google Pixel has a 'standard' sized 12.3MP sensor (1/2.3", apparently, so in the same ballpark as the Galaxy S7 and Lumia 950), no optical stabilisation and a relatively modest aperture of f/2.0. Yet at the launch presentation, this phone camera was rated higher by DxOMark than anything previously. Putting aside my own reservations about the DxOMark tests, it does seem as though the application of raw computing power (in typical Google fashion) to taking photos is yielding good results.
You see, rather than taking one huge shot and then (PureView) downsampling to reduce noise and improve purity (as on the 808/1020), computing power in a smartphone has now become so prodigious (in the Pixel's case, a Snapdragon 821 chipset with 4GB of RAM) that it's possible to take several RAW photos (as needed) rather than one every time you press the shutter control. The software can then do all manner of clever things to these huge 20MB un-encoded image files - auto-aligning, reducing noise, enhancing colours, white balancing, and more - before spitting out and saving a 'purer' JPG-encoded image, all within one second and in the background, so things feel instantaneous for the user and the UI.
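As a sketch of that multi-frame idea - and only a sketch, since Google's actual HDR+ pipeline aligns in tiles, handles subject motion and much more - here's the core 'align and average' step in Python, assuming a burst of same-sized greyscale numpy arrays and purely global, translational hand-shake between frames:

```python
import numpy as np

def align_and_merge(frames):
    """Align each frame of a burst to the first, then average the stack."""
    ref = frames[0].astype(np.float64)
    f_ref = np.conj(np.fft.fft2(ref))
    stack = [ref]
    for frame in frames[1:]:
        f = frame.astype(np.float64)
        # The peak of the circular cross-correlation gives the (dy, dx)
        # displacement of this frame relative to the reference.
        corr = np.fft.ifft2(f_ref * np.fft.fft2(f))
        dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        # Unwrap shifts of more than half a frame (i.e. negative motion).
        h, w = f.shape
        dy = dy - h if dy > h // 2 else dy
        dx = dx - w if dx > w // 2 else dx
        # Undo the displacement and add the aligned frame to the stack.
        stack.append(np.roll(f, (-dy, -dx), axis=(0, 1)))
    # Averaging N aligned frames cuts random noise by roughly sqrt(N).
    return np.mean(stack, axis=0)
```

Real bursts also contain rotation and moving subjects, which is where the per-tile cleverness (and that Snapdragon 821) earns its keep.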
Obviously, I need to test all this, and a Pixel XL is about to arrive at 'All About Towers', but the whole concept is enticing. Rather than throwing optical hardware at imaging, Google is throwing processing power at the same problem and in the process doing away with the need for OIS (though I still hold a candle for Xenon flash!).
I contend that you can think of Pixel-style 'computational photography' as the 2016 form of PureView. The idea's similar - using information from many sources to reduce random digital noise and improve dynamic range - except that the sources in this case are multiple frames (we don't know how many Google's proprietary HDR+ software demands; it probably varies according to conditions) rather than scattered parts of one shot from a higher resolution sensor. But the image data's real, it's RAW, and it's eminently suited to being worked with, away from the world of JPG compression artefacts.
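For the statistically minded, the underlying maths is the same at both ends of the curve: averaging N samples whose random noise is independent, each with standard deviation σ, leaves noise of

$$\sigma_{\text{avg}} = \frac{\sigma}{\sqrt{N}}$$

so a four-frame merge roughly halves the noise, just as averaging four sensor pixels into one PureView super-pixel does.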
Whichever end of my curve/chart a smartphone camera sits at, the end result should be similar in terms of image quality, i.e. what the user actually gets to see. Google calls the system in the Pixel range 'HDR+', but if Nokia had arrived at this point, in a parallel universe, it could equally well have been named 'PureView II'.
PS. There are benefits to being at the computing end of the curve rather than the physics/specs end, as Microsoft managed to exploit in a limited fashion with its clever 'Dynamic exposure' mode on the Lumia 950, used in low light with moving objects and blending multiple exposures to try and keep the moving subject crisp. Something with the Pixel's power should be able to go further. It remains to be seen whether Google's software engineers are as clever as the ex-Nokia imaging team at Microsoft was (many of its members have moved on now), but at the launch event the idea of micro-bursts of photos capturing action was mooted, with the software identifying the 'perfect' moment for the final JPG. I'll be testing this too, in due course, don't worry.
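As a flavour of how 'perfect moment' picking might work, here's a hedged sketch: score each frame of a burst by sharpness, with the variance of the Laplacian as a common proxy for crispness, and keep the best. Google's actual selection logic isn't public and doubtless also weighs faces, expressions, motion and exposure:

```python
import cv2

def sharpest_frame(frames):
    """Return the crispest frame from a burst of 8-bit BGR images."""
    def sharpness(frame):
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # The Laplacian responds to fine edge detail; a blurred frame
        # gives a flatter response and hence a lower variance.
        return cv2.Laplacian(grey, cv2.CV_64F).var()
    return max(frames, key=sharpness)
```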
PPS. All this raises the question of how far HP can push the camera in the Elite X3 - it has so far produced very average photos, but with a Snapdragon 820 and 4GB of RAM there's no reason why similar processing couldn't help the X3 produce better, clearer images. Maybe not to the level of the Pixel here, but up closer to the current gold standard, that 808/1020/950 trio.