Your memories in 3D 🌐

Good morning. Here’s what’s leading each section today 👇

  • Above the Fold: AI, Computer Vision and Metaverse valuations are skyrocketing

  • Headlines: Apple's iPhone 15 will let you capture “spatial video” that you can watch on the Apple Vision Pro headset (The Verge)

  • Fund Raises: Animoca Brands Raises $20M for Metaverse Project Mocaverse (CoinDesk)

  • Research: Typing on Any Surface: A Deep Learning-based Method for Real-Time Keystroke Detection in Augmented Reality (arXiv)

🔺 Above the fold

AI in computer vision market to witness massive growth by 2029. China’s Shandong province aims for $20B metaverse valuation by 2025. Apple’s Vision Pro developer kits are nowhere to be found. SpatialSC wants to put USC on the map for virtual and augmented reality. How AI is building the metaverse of the future. The rise of the full-body avatar.

📰 Headlines

SPATIAL COMPUTING

🌰 In a Nutshell

During its annual fall event, Apple unveiled a notable feature for the soon-to-launch iPhone 15 Pro: the ability to shoot 3D spatial videos and photos. These can be relived as a three-dimensional, immersive experience through the forthcoming Apple Vision Pro headset, slated for an early 2024 release. The iPhone 15 Pro harnesses a revamped camera system and the new A17 Pro chip to make this possible. While the Vision Pro headset will also be able to capture spatial video, recording with the headset itself may pull users out of the moment. The iPhone 15 Pro starts at $1000, while the Vision Pro is priced at $3500.

🔑 Key Takeaways

  • 3D Spatial Videos & Photos: The iPhone 15 Pro introduces the ability to record 3D spatial videos and photos, giving users a way to revisit moments with added depth and immersion.

  • Apple Vision Pro Compatibility: The captured spatial videos and photos can be viewed using the Apple Vision Pro headset, immersing users into a three-dimensional experience of their recorded moments.

  • Powerful Hardware: The feature is supported by the A17 Pro chip and an advanced camera system in the iPhone 15 Pro.

  • Release and Pricing: The Vision Pro headset is set for release in early 2024, priced at $3500. Meanwhile, the iPhone 15 Pro will be available later this year, with prices starting from $1000.

🎯 Why it matters:

This development marks a considerable leap toward more immersive personal tech, potentially changing how we capture and revisit personal memories. It also suggests a trend of smartphones becoming not just devices for communication and conventional media consumption, but tools for creating experiences that align with advances in mixed and augmented reality. Moreover, the feature hints at a future where technology blends past experiences seamlessly into the present, letting us relive memories with a depth and realism previously unattainable.

🎥 Rewatch the entire Apple Event here:

AI

London-based startup Stability AI has launched Stable Audio, a tool that generates high-quality, 44.1 kHz music for commercial use via a technique known as latent diffusion. The launch marks Stability AI’s renewed push into generative audio, following its earlier Dance Diffusion project, which never progressed to a polished release. It also comes as Stability AI seeks to turn more than $100 million in capital into revenue-generating products amid increasing investor pressure. Read more
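
For context on what “latent diffusion” means here: rather than denoising raw waveforms, the model denoises a compact latent representation conditioned on a prompt, then decodes the result back to audio. The sketch below is a minimal toy illustration of that generation loop in PyTorch; the tiny modules, shapes, beta schedule, and the random stand-in text embedding are all assumptions for illustration and do not reflect Stability AI’s actual Stable Audio architecture or code.

```python
# Toy latent-diffusion generation loop (illustrative only; not Stable Audio's code).
import torch
import torch.nn as nn

LATENT_DIM, TEXT_DIM, STEPS = 64, 32, 50

class TinyDenoiser(nn.Module):
    """Predicts the noise in a latent, conditioned on a text embedding and timestep."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + TEXT_DIM + 1, 256), nn.SiLU(),
            nn.Linear(256, LATENT_DIM),
        )

    def forward(self, z_t, text_emb, t):
        t_feat = torch.full((z_t.shape[0], 1), float(t) / STEPS)
        return self.net(torch.cat([z_t, text_emb, t_feat], dim=-1))

class TinyDecoder(nn.Module):
    """Stand-in for a trained VAE decoder: maps a clean latent to a short mono waveform."""
    def __init__(self, samples=44100):
        super().__init__()
        self.net = nn.Linear(LATENT_DIM, samples)

    def forward(self, z):
        return torch.tanh(self.net(z))

@torch.no_grad()
def generate(denoiser, decoder, text_emb, steps=STEPS):
    # DDPM-style linear beta schedule; real systems use carefully tuned schedules.
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    z = torch.randn(1, LATENT_DIM)  # start from pure noise in latent space
    for t in reversed(range(steps)):
        eps = denoiser(z, text_emb, t)  # predicted noise at this step
        z = (z - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            z = z + torch.sqrt(betas[t]) * torch.randn_like(z)  # sampling noise
    return decoder(z)  # decode the denoised latent to a waveform

if __name__ == "__main__":
    text_emb = torch.randn(1, TEXT_DIM)  # stand-in for a real text encoder's output
    audio = generate(TinyDenoiser(), TinyDecoder(), text_emb)
    print(audio.shape)  # torch.Size([1, 44100]): roughly one second at 44.1 kHz
```

The appeal of working in latent space is efficiency: denoising a small latent vector for a few dozen steps is far cheaper than denoising 44,100 raw samples per second of audio directly.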

🦄 Startups & Fund Raises

Animoca Brands, a metaverse and gaming venture capital firm, has raised $20 million from a group of prominent Web3 investors to advance its Mocaverse project. The round was led by CMCC Global and included contributions from Kingsway Capital, Liberty City Ventures and GameFi Ventures, the Hong Kong-based company said Monday. Read more

0xPass, a startup incubated at the Stanford Blockchain Club, has secured a $1.8 million pre-seed funding round to advance its development of secure and user-friendly login systems for web3, facilitating mass adoption of crypto wallets. The initiative is focused on enabling developers to integrate multiple authentication methods into non-custodial wallets, aiming to offer a seamless user experience akin to web2 counterparts such as Auth0 and 1Password. The investment round saw contributions from a blend of U.S. and Asian investors including AllianceDAO, Soma Capital, Alchemy Ventures, and notable individuals like Balaji Srinivasan, the former Coinbase CTO, and Cory Levy from Z Fellows. Read more

🔬 Research

In a groundbreaking stride towards enhancing user interaction in augmented reality (AR), the paper presents a deep-learning-based technique that promises to revolutionize text entry interfaces in AR environments. The proposed method leverages a two-stage model that integrates an adaptive Convolutional Recurrent Neural Network (C-RNN) and a hand landmark extractor. This initiative addresses the ergonomic and accuracy issues associated with current text entry options in AR, such as mid-air keyboards or voice input systems. By processing user-perspective video streams captured via AR headsets at approximately 32 frames per second, the system allows for real-time keystroke detection on any flat surface, achieving an impressive accuracy rate of 91.05% at an average typing speed of 40 words per minute.

🔑 Key Takeaways

  1. Innovative Keystroke Detection: The proposed method offers a novel solution for real-time keystroke detection on any flat surface, doing away with the necessity for physical or virtual keyboards in AR.

  2. Deep Learning at its Core: At the heart of the system is a deep learning-based approach, which combines a hand landmark extractor with a newly conceived adaptive C-RNN, powered by a freshly assembled dataset.

  3. Impressive Accuracy and Speed: The model achieves 91.05% accuracy and runs at approximately 32 FPS, fast enough to keep pace with an average typist (around 40 words per minute) on a standard keyboard.

  4. Potential for Real-World Applications: Apart from demonstrating promising results, the paper accentuates the real-world applicability of the approach, hinting at the potential for integration into a wide array of applications.
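
To make the two-stage design concrete, here is a minimal sketch of how such a pipeline could be wired together in PyTorch: a per-frame hand landmark stage followed by a convolutional-recurrent classifier over a short window of frames. The landmark stub, window size, layer sizes, and class count are assumptions for illustration; the paper’s adaptive C-RNN and hand landmark extractor are more sophisticated than this.

```python
# Minimal sketch of the two-stage idea: (1) per-frame hand landmarks,
# (2) a convolutional-recurrent classifier over a short window of frames.
# All module names, shapes, and the landmark stub are illustrative assumptions,
# not the paper's actual architecture or code.
import torch
import torch.nn as nn

NUM_LANDMARKS = 21 * 2  # e.g. 21 keypoints per hand, two hands (assumption)
WINDOW = 16             # frames per classification window (~0.5 s at 32 FPS)
NUM_KEYS = 27           # 26 letters + a "no keystroke" class (assumption)

def extract_landmarks(frame: torch.Tensor) -> torch.Tensor:
    """Stage 1 stand-in: a real system would run a hand-tracking model here;
    we return random (x, y) coordinates just to keep the sketch runnable."""
    return torch.rand(NUM_LANDMARKS, 2)

class ConvRecurrentKeystrokeNet(nn.Module):
    """Stage 2: 1D convolutions over each frame's landmark features,
    then a GRU across time, then a per-window keystroke classification."""
    def __init__(self, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.rnn = nn.GRU(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_KEYS)

    def forward(self, windows: torch.Tensor) -> torch.Tensor:
        # windows: (batch, time, landmarks, 2)
        b, t, n, c = windows.shape
        x = windows.reshape(b * t, n, c).transpose(1, 2)  # (b*t, 2, landmarks)
        x = self.conv(x).squeeze(-1)                      # (b*t, 64) per-frame features
        x = x.reshape(b, t, -1)
        _, h = self.rnn(x)                                # summarize the window over time
        return self.head(h[-1])                           # (batch, NUM_KEYS) logits

if __name__ == "__main__":
    frames = [extract_landmarks(torch.empty(0)) for _ in range(WINDOW)]
    window = torch.stack(frames).unsqueeze(0)             # (1, WINDOW, landmarks, 2)
    logits = ConvRecurrentKeystrokeNet()(window)
    print(logits.shape)                                    # torch.Size([1, 27])
```

The split is what makes real-time operation plausible: the landmark stage collapses each video frame into a few dozen coordinates, so the recurrent classifier stays light enough to keep up with the roughly 32 FPS the authors report.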

🎯 Why it matters:

The innovation heralds a significant shift in how we perceive and engage with AR environments, offering a practical solution to the longstanding challenges surrounding text entry in AR. It holds the promise to facilitate more natural and efficient user interaction within AR applications, fostering broader participation in social activities through AR mediums. Moreover, this research paves the way for future explorations and developments that could further hone this technology, potentially spearheading a new era of user-friendly and immersive AR experiences.

⚡️ Quick bytes

TOGETHER WITH METALYST
Hit the inbox of readers from Apple, Meta, Unity and more

Advertise with MetaLyst to get your brand or startup in front of the Who's Who of metaverse tech. Our readers are folks who love tech stuff – they create things and invest in cool ideas. Your product could be their next favorite thing! Get in touch today.

Comments, questions, tips?

Send a letter to the editor: email or tweet us

Was this newsletter forwarded to you, and would you like to see more?

Join the conversation
