Meta Introduces Meta 3D AssetGen 2.0
According to Meta, the new model offers significantly better detail and fidelity compared to the previous version.
Meta has introduced its latest 3D foundation model, AssetGen 2.0, which can generate high-quality 3D assets from text or image prompts. The company also claims that this updated 3D model generator sets a new industry standard and “pushes the boundaries of what’s possible” with generative AI.
AssetGen 2.0 is made up of two models:
One to generate the 3D mesh
Another to generate the textures
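Conceptually, the split looks something like this. It’s a minimal sketch with made-up class and function names; Meta hasn’t released code or an API for AssetGen 2.0:

```python
from dataclasses import dataclass

# Hypothetical sketch of AssetGen 2.0's two-model split.
# All names here are illustrative, not Meta's actual API.

@dataclass
class Mesh:
    vertices: list  # 3D positions
    faces: list     # triangle indices

class MeshModel:
    """Stand-in for model 1: prompt -> 3D geometry."""
    def generate(self, prompt: str) -> Mesh:
        return Mesh(vertices=[], faces=[])  # placeholder output

class TextureModel:
    """Stand-in for model 2: prompt + mesh -> texture maps."""
    def generate(self, prompt: str, mesh: Mesh) -> dict:
        return {"albedo": None, "normal": None}  # placeholder output

def generate_asset(prompt: str):
    mesh = MeshModel().generate(prompt)               # step 1: geometry
    textures = TextureModel().generate(prompt, mesh)  # step 2: textures
    return mesh, textures

mesh, textures = generate_asset("a weathered bronze dragon statue")
```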
Check out this example:
The model looks incredibly detailed, and the textures are high quality. Keep in mind, though, that these are cherry-picked examples from Meta. The company also showed animatable characters created using AssetGen 2.0.
Game and AR developers will probably love this. The generated assets are gameplay-ready and way better than what most 3D designers can make on their own. This could save time, cut costs, and open up new creative workflows for small teams and solo developers.
Meta describes AssetGen 2.0 as a “significant leap forward” compared to AssetGen 1.0, aiming for much higher quality and fidelity in generated 3D assets through a different technical approach, with the future potential to generate entire virtual worlds.
AssetGen 1.0 worked by first generating multiple 2D image views of the intended asset based on the prompt, then feeding those images to a second neural network that generates the 3D asset.
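In pseudocode, that two-stage flow might look like the sketch below. The function names are hypothetical; this just illustrates the pipeline as described, not Meta’s actual code:

```python
# Hypothetical sketch of AssetGen 1.0's two-stage pipeline as described
# above. The functions are stand-ins, not Meta's real implementation.

def text_to_views(prompt: str, num_views: int = 4) -> list:
    """Stage 1: a 2D image model renders several views of the asset."""
    return [f"view {i} of {prompt!r}" for i in range(num_views)]  # stand-in images

def views_to_asset(views: list) -> dict:
    """Stage 2: a second network lifts the 2D views into a 3D asset."""
    return {"mesh": None, "source_views": views}  # stand-in asset

asset = views_to_asset(text_to_views("a mossy stone archway"))
```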
Here’s a detailed architecture of how it works:
In contrast, AssetGen 2.0 is a single-stage 3D diffusion model that generates the asset directly from the text prompt, and it was trained on a large corpus of 3D assets.
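To contrast with the two-stage sketch above: a single-stage 3D diffusion loop conditions every denoising step on the prompt and never detours through 2D views. Again, this is a toy illustration; Meta hasn’t described AssetGen 2.0’s internals at this level of detail:

```python
import random

# Toy sketch of a single-stage, text-conditioned diffusion loop.
# Illustrative only; not AssetGen 2.0's actual implementation.

def denoise_step(latent: list, prompt: str, t: int) -> list:
    """Stand-in for one denoising pass conditioned on the text prompt."""
    return [x * 0.9 for x in latent]  # placeholder: shrink the noise

def generate_3d(prompt: str, steps: int = 50) -> list:
    latent = [random.gauss(0, 1) for _ in range(16)]  # start from noise
    for t in reversed(range(steps)):
        latent = denoise_step(latent, prompt, t)  # one pass, no 2D detour
    return latent  # the real system would decode this into a mesh

asset_latent = generate_3d("a carved wooden chess knight")
```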
Here are several key differences between Meta’s original AssetGen 1.0 and the new AssetGen 2.0:
Quality and Fidelity: AssetGen 2.0 aims to address the noticeable quality issues present in assets generated by AssetGen 1.0, especially when viewed close up in VR. AssetGen 2.0 provides “geometric consistency with very fine details”.
Technical Architecture: AssetGen 1.0 worked by first generating multiple 2D image views based on the text prompt, and then feeding these images to another neural network to generate the 3D asset. In contrast, AssetGen 2.0 has a completely different architecture; it is a single-stage 3D diffusion model that directly generates the asset from the text prompt.
Training: AssetGen 2.0 was directly trained on a large corpus of 3D assets.
Capability Expansion: While AssetGen 1.0 enabled AI generation of 3D assets, textures, and skyboxes within the Meta Horizon Desktop Editor…, Meta says AssetGen 2.0 will eventually be used for the AI generation of “entire 3D scenes” or worlds from text or image prompts, not just individual assets.
How to Get Access?
Unfortunately, Meta didn’t mention any website or guide on how to try AssetGen 2.0. There’s no public model, no beta signup, and no developer documentation. A Hugging Face demo would be nice, right?
I’d love to see how it stacks up against Tripo3D or Hunyuan 3D generators, both of which are already publicly available and working pretty well.
Meta has mentioned, though, that it’s already using AssetGen 2.0 internally for 3D world creation, and will roll it out to the Horizon Desktop Editor later this year.
If the model performs as well in real-world use as it does in the demo assets, I can see 3D AssetGen 2.0 replacing popular platforms like Tripo3D.
And oh, if you want to try generating your own 3D object for free, here’s a demo of Hi3DGen on Hugging Face.
You can also learn more about creating 3D game assets using ChatGPT and Tripo3D in the article below.
How To Create And Animate 3D Objects With ChatGPT and Tripo3D AI
Here’s a four-step process for generating and animating a 3D character using AI. (generativeai.pub)
Final Thoughts
I honestly don’t know what Meta’s thinking here. The sample 3D assets look great. But locking it to Horizon? Who actually uses that? Horizon feels like a failed project they keep trying to push. And making AssetGen 2.0 exclusive to it just makes everything less exciting.
Hunyuan 3D and Tripo3D are already out there doing the job — and doing it well. You give them a text or image, and boom, you get a solid 3D model. No waiting for some closed system to let you in.
Meta shared no demo, no signup, nothing. Just a few cherry-picked renders and a vague “coming later” promise. It’s hard to care when they don’t even let people try it.
What do you think of AssetGen 2.0? Is it a yay or a nah? Drop your thoughts in the comments.
Hi there! Thanks for making it to the end of this post! My name is Jim, and I’m an AI enthusiast passionate about exploring the latest news, guides, and insights in the world of generative AI. If you’ve enjoyed this content and would like to support my work, consider becoming a paid subscriber. Your support means a lot!