Geoff Hinton and John Hopfield were awarded the 2024 Nobel Prize in Physics for their foundational contributions to artificial intelligence, specifically in machine learning and neural networks. Hinton, known as the “Godfather of AI,” helped lay the groundwork for modern deep learning, which underpins today’s most powerful AI systems. Hopfield’s work on Hopfield networks in the 1980s showed how neural networks can store and recall patterns in a way that mimics brain function. Their breakthroughs have significantly shaped AI’s development, influencing everything from natural language processing to image recognition. This Nobel recognition highlights the monumental impact AI has on science and society.
Meta unveils “Movie Gen,” an AI video generator designed to create high-quality video content with integrated sound. The tool lets users enter text prompts to generate immersive video scenes, opening exciting new opportunities for content creators and marketers. Despite the promise of this technology, Meta’s chief product officer, Chris Cox, cautions that the company “[isn’t] ready to release this as a product anytime soon,” citing high costs and long generation times as barriers to launch. OpenAI previewed its video generator, Sora, earlier this year; Google has previewed its Veo video generator; and China’s Kling and Vidu have demonstrated thirty- to sixty-second video generations. These, too, are not publicly available.
Spacetop AR Laptop Canceled, Shifting to Windows Software for Glasses. Sightful, the Israeli startup behind the Spacetop, has raised over $61 million. The $2,150 Spacetop, which began taking pre-orders earlier this year, coupled a lightweight keyboard and PC with an Xreal AR headset, giving users a simulated 100-inch screen on the go. With the cancellation of its hardware project, Sightful says it is returning customer deposits and shifting its focus to developing Windows software that will integrate with AR glasses.
Anything World’s new “Generate Anything” platform allows users to create 3D models from text descriptions or images; the assets can then be rigged and animated automatically. Today was the release day, and I was excited to try it. First of all, I love how insanely easy it is to use. It quickly generated the “Pixar style squirrel detective” I asked for. Sadly for me, I was not the first one there, and as a result, when I asked Generate Anything to animate the model, it took a bit longer than the advertised “five minutes.” Nonetheless, I’ll be back, because this all-in-one web app simplifies the 3D creation process by integrating tools that previously required multiple steps and software. The London-based company has raised $9.3 million from investors Alumni Ventures, Acrew Capital, Warner Music Group, NGC Ventures, Supernode Ventures, GameTech Ventures, and GFR Fund.
Hyperscape allows users to scan real-world environments with a mobile app and then recreate those spaces as high-fidelity virtual environments that others can be invited to explore. In his review of Meta’s Hyperscape, unveiled at Meta Connect 2024, Tony “Skarred Ghost” Vitillo highlights its potential to evolve social media interactions: just as people share photos today, users could one day share entire 3D environments. While still in beta, the feature offers a glimpse of how Meta envisions blending physical and virtual spaces, and Vitillo finds the concept exciting for its potential to reshape how people interact and share spaces online.
Apple Vision Pro’s First Scripted Film Locked Me in a WWII Submarine. CNET editor Scott Stein shared his experience with Edward Berger’s immersive short film Submerged, now available on Apple’s Vision Pro. Part of Apple’s effort to showcase content for its $3,499 mixed reality headset, Submerged uses 180-degree 3D video to create an intense, immersive experience. Stein called the film one of the most polished examples of Apple’s 3D video content, offering a vivid sense of presence, though he found the experience more theatrical than cinematic, partly because the 180-degree format captures so much environmental detail. The slower pacing and longer takes in the confined space made the submarine itself feel like an additional character in the film.
The Barrier is a new AI film by Munich-based Storybook Studios. Creative director Albert Bozesan wrote: “We used pretty much every current tool under the sun – the source images are all Stable Diffusion and Flux, the video is Runway, Luma, Kling, MiniMax, and Hedra. Additional explosions were made in After Effects.” Here’s a behind-the-scenes video showing how they did it.
Everything in this extraordinary faux music video from LA-based AI artist Kelly Boesch was generated by AI. The art was created with Midjourney’s style transfer feature and then animated on Runway. Suno AI generated the song. “It’s perfect for this video,” Boesch explains on her YouTube channel. “I love the artwork that Midjourney adds to the images. It’s so amazing how AI just spits out so many different and cool images so quickly.”
This column, formerly called “This Week in XR,” is also a podcast hosted by author Charlie Fink, Ted Schilowitz, former studio executive and co-founder of Red Camera, and Rony Abovitz, founder of Magic Leap. This week our guest is Justin Maier, co-founder and CEO of Civitai. We can be found on Spotify, iTunes, and YouTube.
What We’re Reading
Epic has a plan for the rest of the decade (Jay Peters, The Verge)
Walmart Reveals How It’s Using Gen AI and Augmented Reality to Change Shopping (David Cohen, Adweek)