Singapore Herald

New Meta AI Decodes Human Reactions To Audio And Visuals, Fuels SuperIntelligence Debate

Meta has recently released TRIBE v2 (Trimodal Brain Encoder), a foundation model built to predict how the human brain responds to almost any sight or sound. The model simulates how the brain reacts to sight, language, and sound. Meta claims it could help neuroscientists run new experiments more efficiently. The development could also bring the company closer to superintelligence, a stage of AI that surpasses human intelligence and responds to the physical world much as humans do.
For your information, understanding how the brain works has traditionally required new brain recordings for every experiment. Collecting that data makes research slow, expensive, and difficult to scale. With TRIBE v2, Meta is trying to overcome that challenge by turning months of lab work into seconds of computation.
Meta wrote in a blog post, ‘Today, we’re releasing TRIBE v2. This foundation model acts as a digital mirror of human brain activity in response to sight, sound and language – transforming months of lab work into seconds of computation.’
As for how it works, TRIBE v2 predicts brain activity via a three-stage pipeline. The first stage converts sounds, text, and visuals into numbers so that the model can analyse them. In the second stage, the model combines that information and identifies general patterns in how humans process it. The final stage lets the system predict which parts of the brain are most likely to activate when an individual sees, reads, or hears something, and connects those patterns to actual brain activity.
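The three-stage pipeline described above can be sketched in code. The snippet below is a toy illustration, not Meta's actual architecture: the simple statistical "encoders", the feature dimensionality, the random weight matrices, and the five "brain regions" are all assumptions made for demonstration. TRIBE v2 itself relies on large pretrained audio, text, and vision models and weights learned from real fMRI data.

```python
import numpy as np

DIM = 8  # toy feature dimensionality (assumption, not from the paper)

def encode_text(text: str) -> np.ndarray:
    """Stage 1 (language): map characters to a fixed-length numeric vector."""
    codes = np.frombuffer(text.encode("utf-8"), dtype=np.uint8).astype(float)
    return np.array([codes[i::DIM].mean() if len(codes[i::DIM]) else 0.0
                     for i in range(DIM)])

def encode_audio(waveform: np.ndarray) -> np.ndarray:
    """Stage 1 (sound): average amplitude over DIM equal chunks."""
    return np.array([c.mean() for c in np.array_split(waveform, DIM)])

def encode_video(frames: np.ndarray) -> np.ndarray:
    """Stage 1 (vision): mean brightness per frame, pooled to DIM values."""
    per_frame = frames.reshape(frames.shape[0], -1).mean(axis=1)
    return np.array([c.mean() for c in np.array_split(per_frame, DIM)])

def fuse(text_f, audio_f, video_f, W_fuse):
    """Stage 2: combine the three modalities into one shared representation."""
    combined = np.concatenate([text_f, audio_f, video_f])  # shape (3*DIM,)
    return np.tanh(W_fuse @ combined)

def predict_brain_response(fused, W_out):
    """Stage 3: map the fused representation to per-region activation scores."""
    return W_out @ fused  # one score per (toy) brain region

# Usage, with random weights standing in for trained parameters.
rng = np.random.default_rng(0)
W_fuse = rng.normal(size=(16, 3 * DIM))
W_out = rng.normal(size=(5, 16))  # 5 hypothetical brain regions

text_f = encode_text("a dog barks in the park")
audio_f = encode_audio(rng.normal(size=1000))          # 1000-sample waveform
video_f = encode_video(rng.normal(size=(24, 32, 32)))  # 24 frames of 32x32

response = predict_brain_response(fuse(text_f, audio_f, video_f, W_fuse), W_out)
print(response.shape)  # (5,) — one predicted activation per region
```

In the real model, the final stage's weights are the part fitted against recorded brain activity, which is what lets the system predict responses without new lab recordings.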
According to Meta, TRIBE v2 offers a far more detailed map of brain activity than its predecessor. Rather than capturing raw signals, it predicts what a typical brain's response should look like when someone hears or sees something. Meta has released the TRIBE v2 paper, code, and model weights as open source.
