In 1987, one of the most significant photos of the last century was taken on a beach in Tahiti. The photographer was John Knoll, who went on to co-create Adobe Photoshop – and his ‘Jennifer in Paradise’ snap would be used to demonstrate the incredible tools that would soon democratize photo editing.
35 years later, we’re seeing the next step in this photo editing revolution – but this time it’s Google, rather than Adobe, waving a mighty new magic wand. This week we saw the arrival of its new Photo Unblur feature, the latest in an increasingly impressive range of AI editing tools.
Much like Photoshop in the ‘90s, Google’s tricks are opening up image manipulation to a wider audience, while challenging our definition of what a photo actually is.
As the name suggests, Photo Unblur (currently exclusive to the Pixel 7 and Pixel 7 Pro) can automatically turn your old blurry snaps into crisp, shareable memories. And it’s the latest example of Google’s push to build a Photoshop-style bot into its phones (and, eventually, Google Photos).
Around this time last year, we saw the arrival of Magic Eraser, which lets you remove unwanted people or objects from your photos with the flick of a finger. Parents and pet owners were also treated to a new “Face Unblur” mode on the Pixel 6 and Pixel 6 Pro, which works differently from Photo Unblur.
Collectively, these modes are still not as revolutionary as the original Photoshop. Adobe also has enough of its own machine learning magic to ensure that its applications will remain essential among professionals for many years to come. But the big change is that Google’s tools are both automatic and live in the places where non-photographers take and store their photos.
For the average user of Google’s phones or its cloud photo apps, this will likely mean never having to deal with a program like Photoshop again.
Retouching robots
Photo Unblur and Magic Eraser may seem simple, but they rely on powerful machine learning. Another related feature, Face Unblur, is so demanding that it can only run on Google’s custom Tensor processor.
Both modes do much more than nudge a sharpness slider. The complex process of creating noise maps and applying “polyblur” techniques was explained recently by Isaac Reynolds, Google’s Product Manager for the Pixel Camera, during a conversation on the company’s new “Made by Google” podcast.
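Google hasn’t shared Photo Unblur’s actual code, but its researchers have described the “polyblur” idea Reynolds name-checks: rather than dividing out the blur (which amplifies noise), you approximate its inverse by re-blurring the photo a couple more times and combining the results. Here’s a minimal sketch of that trick in Python – the function name and the assumption of a known Gaussian blur are ours, for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def polyblur_sharpen(image, sigma=2.0):
    """Approximate deblurring by 'polynomial reblurring'.

    If G is a Gaussian blur, the truncated series G^-1 ~= 3I - 3G + G^2
    undoes mild blur without the noise blow-up of a true inverse filter.
    Works on a single-channel (grayscale) image; `sigma` is assumed to
    be a known, or separately estimated, blur width.
    """
    y = image.astype(np.float64)
    gy = gaussian_filter(y, sigma)     # G y: blur the photo once
    ggy = gaussian_filter(gy, sigma)   # G^2 y: blur it again
    sharp = 3 * y - 3 * gy + ggy       # polynomial in the blur operator
    return np.clip(sharp, 0, 255).astype(image.dtype)
```

On mildly blurred shots this recovers most of the lost crispness; the heavier the blur, the more a learned model has to invent detail – which is where the generative side comes in.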
Asked about Photo Unblur and Face Unblur, Reynolds said: “I think you’ve probably seen all these websites in the news where you can type in a sentence and it produces an image, right?” If you haven’t, those sites include the likes of DALL-E and Midjourney – and be warned, they’re a dangerously effective way to lose a weekend.
“There is a class of machine learning models in imagery that are known as generative networks,” Reynolds explained. “A generative network produces something out of nothing. Or produces something out of something else that’s inferior or less accurate than what you’re trying to produce. And the idea is that the model should fill in the details.”
Like those text-to-image generators, Google’s new photo rescue tools fill in those details by drawing on a ton of connected data – in Google’s case, an understanding of what human faces look like, what faces look like when blurred, and an analysis of the type of blur in your photo. “It takes all of these things and it says ‘this is what I think your face would look like after it wasn’t blurred,’” Reynolds said.
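That “analysis of the type of blur” is the measurable half of the recipe. As a toy illustration (our own heuristic, not Google’s), you can gauge how blurred a photo is from how soft its strongest edges are, then hand that estimate to whatever model fills in the detail:

```python
import numpy as np

def estimate_blur_strength(gray):
    """Crude blur gauge: sharp photos contain strong edges, blurred ones don't.

    Returns the strongest gradient in the image, normalized to 0..1 -
    low values suggest heavy blur. Illustrative only: real pipelines
    fit an actual blur kernel (its shape and size), not a single number.
    """
    g = gray.astype(np.float64) / 255.0
    gx = np.abs(np.diff(g, axis=1)).max()  # strongest vertical edge
    gy = np.abs(np.diff(g, axis=0)).max()  # strongest horizontal edge
    return max(gx, gy)
```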
If this sounds like a recipe for distorted faces and trips into the uncanny valley, Google is keen to stress that it doesn’t take liberties with your mug.
“It never changes what you look like, it never changes who you are. It never strays from the realm of authenticity,” Reynolds asserted. “But it can do a lot of good for photos that are blurred because your hands were shaking, or because the person you’re photographing was just slightly out of focus, or things like that,” he added.
We haven’t been able to fully test Photo Unblur yet, but the first demos are impressive. And alongside Google’s growing bag of tricks, it starts to make Photoshop look unnecessarily complex for everyday photo editing. Why bother with layer masks, pen tools, and adjustment layers when Google can do it all for you?
Restoring order
Of course, Google doesn’t have a monopoly on machine learning, and Adobe has plenty of powerful tools of its own. Adobe Sensei is the machine-learning technology that powers Photoshop’s Neural Filters, like Photo Restoration, which automatically restores your old family photos.
Because Google and Samsung (which has its own “Object eraser” tool) now build these features into their phones, Adobe’s machine learning is being pushed further toward professional creators. Google’s PhotoScan app already retouches your old photos as it scans them, so an automatic restoration tool for Google Photos is surely just around the corner. Let’s just hope it avoids slightly creepy “reviving” effects like Deep Nostalgia.
This growing distinction between Photoshop and new Google tools like Photo Unblur was summed up well earlier this year by Adobe VP and Fellow Marc Levoy, who was previously the driving force behind the Pixel phones’ cameras.
In an interview on the Adobe Life blog, Levoy said that while his role at Google was to “democratize good photography,” his mission at Adobe is more to “democratize creative photography.” That involves “marrying professional controls to computational photography image processing pipelines.”
In other words, Google and others have now taken over the “democratization of good photography,” while Adobe’s programs use machine learning to improve workflows for the amateurs and professionals who want the finer craftsmanship of manual photo editing. Photoshop, then, is far from dead, even if many of its consumer-level tools are being automated by our phones.
AI openers
We’re still in the early stages of the AI photo editing revolution, and Google’s new tools are far from the only ones. Desktop editors like Luminar Neo, ImagenAI, and Topaz Labs’ apps all offer impressive AI assistants that can improve (or save) your imperfect photos.
But Google’s Photo Unblur and Magic Eraser take things a step further by living on our Pixel phones – and they’ll surely migrate to our Google Photos libraries too. That move would really bring them to a mass audience, who could turn their vacation snaps or family photos into polished masterpieces without needing to know what an adjustment layer is.
When it comes to Adobe and Photoshop, we’ll likely see new Sensei announcements at the Adobe Max 2022 conference starting October 18. So while its recently updated Photoshop Elements app is starting to look a little dated, the full Photoshop should continue to be at the forefront of photo trickery.
It’s an impressive run, considering it’s now been 35 years since the Photoshop co-creator first started toying with his “Jennifer in Paradise” snap. But given the rapid development of text-to-image generators like DALL-E and of Google’s photo-editing tools, it looks like we’re on the cusp of another revolutionary leap for the humble art of photo editing.