Mark Zuckerberg has had a couple of rough months, and his company’s stock is down nearly 50 percent, but he’s planning a comeback and isn’t worried about it. Zuckerberg’s metaverse is prepared to take revenge on Microsoft. You see, Zuck is in a unique position: even though Meta has lost hundreds of billions of dollars in value since the peak of the metaverse hype wave, the company isn’t taking its foot off the gas just because Microsoft is beating Facebook at the moment.
Mark Zuckerberg Reveals His Plan for Building the Metaverse
Zuck is extremely motivated to build the metaverse, and he knows it’s not going to happen overnight. He has a chip on his shoulder right now: if he can pull it off, he’ll prove every YouTube video and blog post mocking him wrong. So what’s his plan?
Well, Zuck has a few tricks left up his sleeve, but let’s start with artificial intelligence. If you ask a random person what technologies will be most important in the metaverse, they will almost always say virtual reality, and that’s not wrong. VR is going to be important, but a big part of the problem with VR right now is that there just isn’t much to do once you put on a headset.
The experiences aren’t very immersive and the amount of content is pretty meager, but AI can help here. We’re still years away from massive virtual worlds being built automatically by AI, but there are dozens of smaller areas where AI can speed up development and improve the user experience.
Bring Your Favorite Stuff Over To Another Game In Seconds With Facebook Metaverse
One of the big ideas underpinning this whole metaverse thing is the ability to move digital assets from one virtual world to another. But whenever I mention the idea of bringing a Fortnite skin into a different game like Minecraft, I immediately get tons of comments saying that it’s impossible because different games use different file formats and rendering technologies. That’s true, but AI could make it a lot easier to translate assets from one system to another.
This technology doesn’t exist yet, but AI is already handling other translation tasks incredibly well. So well, in fact, that when Zuck hosted the most recent Meta AI event, translation was one of the main topics. Zuck is planning on spending 180 billion dollars on the metaverse, and a decent chunk of that is going to be used to develop advanced AI to make the metaverse feel more immersive.
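To make the “asset translation” idea concrete, here is a toy sketch that converts a minimal OBJ-style mesh into a JSON structure loosely modeled on glTF. This is only an illustration of format mapping; real cross-game interoperability (materials, rigs, shaders) is far harder, and the AI-assisted version described above doesn’t exist yet.

```python
# Toy "asset translation": OBJ-style text in, glTF-flavored JSON out.
import json

OBJ_TEXT = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""

def obj_to_json(obj_text):
    vertices, faces = [], []
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":                      # vertex position
            vertices.append([float(x) for x in parts[1:4]])
        elif parts[0] == "f":                    # face (OBJ indices are 1-based)
            faces.append([int(i) - 1 for i in parts[1:4]])
    return json.dumps({"meshes": [{"positions": vertices, "indices": faces}]})

print(obj_to_json(OBJ_TEXT))
```

Even this trivial converter shows why the problem is tractable in principle: both formats ultimately describe the same vertices and faces, just with different conventions.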
Facebook’s Metaverse Adds Hand Tracking to Its VR Experience
We’ve all seen how important hand presence is in cinematic depictions of VR. The current Oculus Touch controllers are pretty good, but you’re never really fooled into thinking that your hands are actually there in the virtual world. It still feels like you’re holding a controller, but Zuck and his team are working on ways to change that using deep learning models.
They are already getting pretty good at hand tracking, but the Meta AI team isn’t stopping there. They recently developed a system called ReSkin, which uses a deformable membrane with embedded magnetic particles to detect movement. As the hand moves, the magnetic signal changes; AI then interprets that signal and detects where the glove is making contact.
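The core idea behind ReSkin can be sketched in a few lines: magnetometer readings change as the skin deforms, and a learned model maps those readings back to a contact location. In this toy version, a nearest-neighbor lookup stands in for the real deep learning model, and all the readings are made-up numbers, not ReSkin data.

```python
# Calibration data: (magnetometer reading, known contact point on a 2D patch).
# These values are invented for illustration only.
CALIBRATION = [
    ((0.9, 0.1, 0.0), (0.0, 0.0)),
    ((0.1, 0.8, 0.1), (1.0, 0.0)),
    ((0.0, 0.2, 0.9), (0.5, 1.0)),
]

def locate_contact(reading):
    """Return the contact point whose calibration reading is closest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, point = min(CALIBRATION, key=lambda pair: dist(pair[0], reading))
    return point

print(locate_contact((0.85, 0.15, 0.05)))
```

The real system replaces the lookup with a neural network trained on thousands of touches, which is what lets it generalize to readings it has never seen.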
Facebook Introduces Builderbot, A Tool To Build Virtual Worlds Easily
None of that matters, though, unless you actually have a virtual world that you want to interact with, and that’s where Zuck’s second big announcement comes in. Builderbot is a new AI that takes in simple descriptions of environments in plain English and then produces 3D worlds to match that input.
Building virtual worlds is getting easier every year, but even a state-of-the-art platform like Unreal Engine 5 still requires a decent amount of technical expertise to get up and running. By developing the Builderbot AI, the hope is that everyone, regardless of their background, will be able to create exactly what they want in the metaverse. Bringing other useful tools into the metaverse will be important as well, so the Meta AI team is working on next-generation virtual assistants.
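The interface Builderbot exposes, plain English in, a scene out, can be illustrated with a toy parser. Builderbot itself uses large AI models; this keyword-matching sketch is purely hypothetical and only shows the shape of the input and output.

```python
# Toy text-to-scene parser: turns "a house and two trees" into a placement list.
import re

KNOWN_OBJECTS = {"house", "tree", "beach", "island"}
NUMBER_WORDS = {"a": 1, "an": 1, "one": 1, "two": 2, "three": 3}

def parse_scene(description):
    words = re.findall(r"[a-z]+", description.lower())
    scene = []
    for count_word, noun in zip(words, words[1:]):   # sliding word pairs
        singular = noun[:-1] if noun.endswith("s") else noun  # crude plural
        if count_word in NUMBER_WORDS and singular in KNOWN_OBJECTS:
            scene.append({"object": singular, "count": NUMBER_WORDS[count_word]})
    return scene

print(parse_scene("Create a house and two trees"))
# → [{'object': 'house', 'count': 1}, {'object': 'tree', 'count': 2}]
```

The hard part, which the sketch skips entirely, is generating the actual 3D geometry for each object and arranging it plausibly, which is where the deep learning comes in.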
Facebook Reveals Plans to Use AI to Transform its Chatbot Platform for the Metaverse
Current chatbots are pretty bad, mostly because they use outdated technology. Machine learning is used to recognize speech, but pretty much everything else has to be coded by hand. Meta’s AI team is working on changing this by building a system called Project CAIRaoke that processes the entire bot interaction through machine learning.
This AI will still use traditional APIs to do specific tasks like fetching a weather forecast, but essentially your entire interaction will be fed through a deep learning algorithm. By layering in more and more artificial intelligence, these bots should become far more reliable and eventually appear in virtual worlds as avatars. Training these new models is no easy feat, though. Meta has an incredible amount of data, but actually using it all to train a new system requires a massive supercomputer.
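The hybrid design described above, a learned model deciding what the user wants while a traditional API performs the task, can be sketched as follows. Every function here is a stand-in: `classify_intent` fakes the neural model with keyword matching, and `fetch_forecast` is a hypothetical placeholder, not a real weather API.

```python
def classify_intent(utterance):
    # Stand-in for an end-to-end learned model: crude keyword matching.
    if "weather" in utterance.lower():
        return "get_weather"
    return "chitchat"

def fetch_forecast(city):
    # Hypothetical API call; a real bot would query a weather service here.
    return f"Sunny in {city}"

def respond(utterance, city="Menlo Park"):
    # The learned part routes; the API part executes.
    if classify_intent(utterance) == "get_weather":
        return fetch_forecast(city)
    return "Tell me more!"

print(respond("What's the weather like?"))  # Sunny in Menlo Park
```

In the real system, the routing, dialogue state, and response generation would all be handled by one model trained end to end rather than by hand-written branches like these.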
Facebook’s ‘AI Research SuperCluster’ Powers Deep Learning Models With Over a Trillion Parameters
Zuck has been investing in AI hardware for nearly a decade now and is on the cusp of rolling out the fastest AI supercomputer in the world. It’s called the AI Research SuperCluster, and it already has over 6,000 top-tier NVIDIA GPUs installed, with another 10,000 coming online soon. This will allow Meta to train models with more than a trillion parameters on data sets as large as an exabyte, which is the equivalent of 36,000 years of high-quality video. That might sound unnecessary, but deep learning thrives on massive data sets; there’s a reason it has become the go-to method for building state-of-the-art AI systems.
It scales incredibly well as you add more and more data: the results just keep getting more reliable, and you don’t really need to change much of the code to get those performance improvements. This computing power allows Meta’s AI team to tackle increasingly complex problems.
Facebook is Working on a New Translation Technology That Uses AI to Bridge Language Barriers
We’ve had decent translation for years now, but there have always been drawbacks to the automated approach. Google Translate, launched in 2006, used statistical machine translation to convert English to other languages, and it wasn’t great at grammar. Billions of people use Facebook, Instagram, and WhatsApp, but because of language barriers, they can’t all easily communicate with each other.
Zuck wants to change that by improving translation using AI, and his team is working on a unique approach. Instead of transcribing what someone said into text, translating that text, and then generating the resulting speech, his AI team is working on going directly from speech in one language to speech in another. They’re effectively cutting out the middleman, which has been a source of pesky errors for years. It’s a subtle change, but if it works, audio translation should get a lot better.
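The contrast between the two architectures can be sketched with placeholder functions. Nothing here does real translation; every function is hypothetical, and the point is only the shape of each pipeline: the cascaded version hands data through three separate models, each of which can introduce its own errors, while the direct version is a single model.

```python
def transcribe(audio):           # speech -> text (placeholder)
    return f"text({audio})"

def translate_text(text):        # text -> text (placeholder)
    return f"translated({text})"

def synthesize(text):            # text -> speech (placeholder)
    return f"audio({text})"

def cascaded_translation(audio):
    # Three separate models; errors compound at each hand-off.
    return synthesize(translate_text(transcribe(audio)))

def direct_translation(audio):
    # Stand-in for one model trained end to end: speech in, speech out.
    return f"audio(translated({audio}))"

print(cascaded_translation("hola"))
print(direct_translation("hola"))
```

Notice that the direct version never produces an intermediate transcript, which is exactly the middleman the cascaded approach depends on.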
Facebook’s Metaverse Could Revolutionize How We Use Technology
Zuck is in a strong position to do this precisely because Meta has so much training data and the supercomputer to process it all. But data and hardware are just one piece of the puzzle: all those resources are meaningless if you don’t have talented people working on these problems.
Fortunately for Zuck, he has one of the best researchers in the world leading his AI team. His name is Yann LeCun, and he’s one of the fathers of deep learning. He pioneered this approach alongside Geoffrey Hinton and Yoshua Bengio over the past few decades and is now recognized as one of the best computer scientists in history.
It wasn’t always so clear that LeCun’s approach would work, though. In traditional computing, a program directs the computer with explicit step-by-step instructions, and pretty much everyone thought this would always be the right way to do things. But this approach was stalling out: it was great when tasks were clearly defined, but when you gave a computer an entirely novel situation, it would struggle. Artificial intelligence research was going nowhere fast, and we entered an AI winter.
LeCun and his associates kept the candle burning and eventually turned things around. Advances in computer hardware and access to larger data sets finally came together to allow deep learning to take off, and the AI community hasn’t looked back. Everything from self-driving cars to tagging photos of cats taken with mobile phones uses deep learning these days. But recently, a new challenger has emerged.
Gary Marcus: AI Overhyped and Deep Learning is a Waste of Time
His name is Gary Marcus, and he believes that all this deep learning stuff is overhyped. Could Zuckerberg and LeCun be heading toward a dead end? That’s what Gary Marcus alleges in a recent article he published. But why does he think that? Well, for one, in 2014 he founded an AI company built around a different model of AI development called symbol manipulation. The basic idea is that instead of giving an AI model a big pile of data and just letting it figure out the important connections, engineers code specific symbols to represent important details about the real world.
That sounds good in theory, but things get a lot more complicated in practice. While it might be straightforward to hard-code symbols for everyday objects like tables and chairs, you’ll quickly run into problems as you try to scale up. With deep learning, going from 90% accuracy to 95% accuracy might require 10 times more data, but it probably won’t require 10 times the engineering effort. Getting that same improvement using symbol manipulation, however, might actually require 10 times the amount of human effort, which is just unrealistic.
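The scaling trade-off can be made concrete with a toy comparison. The symbolic classifier below needs a hand-written rule for every new case, while the “learned” classifier (a trivial nearest-neighbor stand-in for a real model) improves just by adding labeled examples, with no new code. All the data here is invented for illustration.

```python
# Symbolic approach: every fact is a rule an engineer typed in by hand.
SYMBOLIC_RULES = {"table": "furniture", "chair": "furniture"}

def symbolic_classify(word):
    return SYMBOLIC_RULES.get(word, "unknown")  # fails on anything unlisted

# Learned approach: coverage grows by appending examples, not editing code.
TRAINING_DATA = [("table", "furniture"), ("chair", "furniture"),
                 ("sofa", "furniture"), ("apple", "food")]

def learned_classify(word):
    # Stand-in for a trained model: pick the label of the most similar
    # example, measured by crude character overlap.
    def overlap(a, b):
        return len(set(a) & set(b))
    example, label = max(TRAINING_DATA, key=lambda pair: overlap(pair[0], word))
    return label

print(symbolic_classify("sofa"))   # unknown: nobody wrote a sofa rule
print(learned_classify("sofa"))    # furniture: covered by an added example
```

Scaling the symbolic side means writing ever more rules by hand; scaling the learned side mostly means collecting more data, which is the asymmetry Marcus’s critics point to.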
Conclusion: Is It Possible That the Deep Learning Community Is Missing Something?
We certainly don’t have human-level AI yet, and we don’t even really have fully self-driving cars. Even Yann LeCun admits that deep learning isn’t perfect. How is it that most teenagers can learn to drive a car in about 20 hours of practice, whereas even with millions of hours of simulated practice, a self-driving car can’t actually learn to drive itself properly? Obviously, we’re missing something. The immediate response you get from many people is: well, humans use their background knowledge to learn faster.
And they’re right. But how is that background knowledge acquired? That’s the big question. AI researchers are racing to develop new machine learning models that can include this background knowledge. Once a system has an accurate world model, teaching an AI to handle a new task should be much easier. Symbol manipulation feels like it could be the perfect fit for this. There’s only one problem, though: across the board, deep learning is absolutely crushing symbolic models. Gary Marcus claims that deep learning is hitting a wall.
As recent performance shows, if deep learning is hitting a wall, then that wall is getting completely demolished. Even still, both Yann LeCun and Gary Marcus might be biased in their outlooks. Both have built entire careers around their respective methodologies, so we need to dig deeper in order to get a straight answer.
As far as AI is concerned, talented engineers are hard to retain, and hiring the best of them is clearly a cornerstone of Mark Zuckerberg’s metaverse strategy. Hope you enjoyed the article!
M Hamza Malik is a writer, blogger, and engineer who loves to create, write, and share his insights about computers, products, and technology. Hamza has spent the last several years reading about books, tech, and computers, which brought him to writing and gave his character a spark. PCFIED is where he started his journey professionally.