Analysis |  Google's AI videos point to a machine-generated future

While self-driving car technology has gone nowhere fast, there has been a remarkable explosion of research around generative models, artificial intelligence systems capable of creating images from simple text prompts. Over the past week, AI researchers at Meta Platforms Inc. and Alphabet Inc.’s Google have taken an extraordinary leap forward, unveiling systems capable of generating video from just about any text prompt imaginable.

Videos from Facebook parent Meta look like trippy dream sequences, showing a teddy bear painting flowers or a horse with outstretched legs galloping across a field. They’re only a second or two long and have a glitchy quality that betrays their source, but they’re still remarkable. Videos generated by Google, showing coffee being poured into a cup or a flyover of a snow-capped mountain, look particularly realistic.

Google has also built a second, even more impressive system called Phenaki that can create longer videos, lasting two minutes or more. Here is an example of the prompt used by Google:

“Lots of traffic in a futuristic city. An alien spaceship is coming to the futuristic city. The camera goes inside the alien spaceship. The camera moves forward until it shows an astronaut in the blue room. The astronaut taps on the keyboard. The camera moves away from the astronaut. The astronaut leaves the keyboard and walks to the left…”

That’s less than a third of the entire prompt, which almost reads like a movie script with commands like “camera zooms”. And here is the resulting clip, posted on Twitter by Dumitru Erhan, one of Phenaki’s creators at Google Brain:

You might think this spells the end of Hollywood as we know it, or that anyone with a few brain cells and a computer will soon be able to produce feature films. That is actually what the researchers are hoping for. Erhan tweeted that he and his team wanted to empower people to “create their own visual stories… [to] make people’s creativity easier.”

It’s hard to see AI-generated videos hitting your local theater any time soon. But we’ll almost certainly see them posted to our social media feeds, especially on platforms like ByteDance Ltd.’s TikTok, Instagram’s Reels, or YouTube.

TikTok hasn’t said whether it’s building its own AI video-generation tool, but it would make sense for the platform to do so. TikTok users love adding stickers, text and green screens to their posts, and the platform has kept pace with new technology: in August, it added an AI image generator to its app for creating stylized green-screen backgrounds. Type in a prompt like “Boris Johnson” and TikTok will produce an abstract image vaguely reminiscent of the former British prime minister.

What happens when machines not only recommend the videos we scroll through but also play a greater role in creating them? Many of us enjoy watching clips of cute cats and people tripping over themselves, so an algorithm that could churn out fake montages of clumsy pratfalls or frisky kittens could rack up viral hits with little effort, so long as the clips look real.

Content creators on TikTok, and the platforms themselves, have a vested interest in any tool that can generate videos at scale, especially when it’s cheap and easy. For the rest of us, the result would be social media feeds more automated than ever before. Our feeds are already shaped by recommendation algorithms; AI-generated videos would add to the self-reinforcing feedback loops that scratch our cognitive itch.

The other looming consequence is a flood of misinformation, though there is perhaps less cause for alarm in the short term. Social media platforms have stepped up efforts to weed out fake content, and Google and Facebook are declining to release their video-creation tools to the public because of the potential for misuse (and, presumably, poor public relations). Google said its own system generated videos that were biased against women, even when the researchers tried to filter out stereotypical results. They said they will not release the model or its source code until those issues are resolved.

Of course, you’ll soon be able to use such tools with few restrictions, thanks to organizations like Stability AI. The British startup released an image-generation tool last August that let anyone generate striking art, as well as fake photos of celebrities, politicians and war zones, something the big AI companies have banned. I tried the tool and, within seconds, was able to concoct photos of former President Donald Trump playing golf with North Korean leader Kim Jong Un. Stability is working on a video-generation tool that it plans to release publicly when it’s ready.

But while greater access to these tools will lead to more fake content, it will also mean more people are aware the tools exist, and are more likely to suspect that a “photo” of President Joe Biden punching an old lady is AI-generated. That’s the hope, anyway.

Equally worrying is what these tools will do to people’s daily media diets. Google’s researchers say their tools will boost human creativity. But when making a video becomes so easy that you hardly have to think about it, does it really tap into our imagination? Maybe not in every case.

Coupled with the recommendation engines that already steer much of what we see online toward generating clicks, this makes our future much more machine-driven and, arguably, not very creative.

This column does not necessarily reflect the opinion of the Editorial Board or of Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former journalist for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous”.

More stories like this are available at bloomberg.com/opinion
