Google DeepMind launches Project Genie, stepping into AI-generated worlds

Google DeepMind is transitioning its groundbreaking Genie 3 “world model” from an internal research tool into a public-facing experience called Project Genie. Originally designed to train AI agents by simulating interactive environments, the system generates complex imagery and reacts in real time as a user “moves” through its simulated space. Starting today, users outside of Google can finally experience this bleeding-edge technology, provided they meet specific criteria.


Access to Project Genie currently requires a subscription to Google’s $250 per month AI Ultra plan. Additionally, the initial rollout is restricted to users in the United States who are 18 years or older. The platform debuts with three primary interaction modes: World Sketching, Exploration, and Remixing.

In the World Sketching mode, Google’s Nano Banana Pro model generates a foundational source image based on user descriptions of characters, camera perspectives (ranging from isometric to first-person), and preferred exploration styles. This allows for iterative tweaks before Genie 3 fully renders the interactive environment. Users can also remix existing worlds by writing their own prompts for environments generated by others.


It is important to note that Genie 3 is a world simulator rather than a traditional game engine. While it can simulate physical interactions and produce game-like visuals, it lacks conventional gaming mechanics. The current technical constraints reflect the massive computing power such simulations require: generations are limited to 60-second clips rendered at 720p resolution and 24 frames per second. Despite these limitations, Project Genie offers AI Ultra subscribers a rare, hands-on look at the frontier of generative physical AI.