DeepMind, Google’s AI research org, has unveiled a model that can generate an “endless” variety of playable 3D worlds.
Called Genie 2, the model — the successor to DeepMind’s Genie, which was released earlier this year — can generate an interactive, real-time scene from a single image and text description (e.g. “A cute humanoid robot in the woods”). In this way, it’s similar to models under development by Fei-Fei Li’s company, World Labs, and Israeli startup Decart.
DeepMind claims that Genie 2 can generate a "vast diversity of rich 3D worlds," including worlds in which users can take actions like jumping and swimming using a mouse or keyboard. Trained on videos, the model is able to simulate object interactions, animations, lighting, physics, reflections, and the behavior of non-player characters (NPCs).
Many of Genie 2’s simulations look like AAA video games — and the reason could well be that the model’s training data contains playthroughs of popular titles. But DeepMind, like many AI labs, wouldn’t reveal many details about its data sourcing methods, for competitive reasons or otherwise.
One wonders about the IP implications. DeepMind — being a Google subsidiary — has unfettered access to YouTube, and Google has previously implied that its ToS gives it permission to use YouTube videos for model training. But is Genie 2 basically creating unauthorized copies of the video games it “watched”? That’s for the courts to decide.
DeepMind says that Genie 2 can generate consistent worlds from different perspectives, such as first-person and isometric views, for up to a minute, with the majority lasting 10 to 20 seconds.
“Genie 2 responds intelligently to actions taken by pressing keys on a keyboard, identifying the character and moving it correctly,” DeepMind wrote in a blog post. “For example, our model [can] figure out that arrow keys should move a robot and not trees or clouds.”
Most models like Genie 2 (world models, if you will) can simulate games and 3D environments, but they suffer from artifacts, inconsistencies, and hallucinations. For example, Decart's Minecraft simulator, Oasis, runs at a low resolution and quickly "forgets" the layout of levels.
Genie 2, however, can remember parts of a simulated scene that aren’t in view and render them accurately when they become visible again. (World Labs’ models can do this, too.)
Now, games created with Genie 2 wouldn't be all that fun, really, given that they'd erase your progress every minute or so. That's why DeepMind is positioning the model as more of a research and creative tool, one for prototyping "interactive experiences" and evaluating AI agents.
“Thanks to Genie 2’s out-of-distribution generalization capabilities, concept art and drawings can be turned into fully interactive environments,” DeepMind wrote. “And by using Genie 2 to quickly create rich and diverse environments for AI agents, our researchers can generate evaluation tasks that agents have not seen during training.”
Creatives may have mixed feelings — particularly those in the video game industry. A recent Wired investigation found that major players like Activision Blizzard, which has laid off scores of workers, are using AI to cut corners, ramp up productivity, and compensate for attrition.
Nevertheless, Google has poured increasing resources into world model research, a field that promises to be the next big thing in AI. In October, DeepMind hired Tim Brooks, who was leading development of OpenAI's Sora video generator, to work on video generation technologies and world simulators. And two years ago, the lab poached Tim Rocktäschel, best known for his "open-endedness" experiments with video games like NetHack, from Meta.