A Medley of New Innovations Announced By Meta’s AI Lab
The company has announced it is working on an AI that can “hear” what someone is hearing by studying their brainwaves as well as a tool that can fact-check the entirety of Wikipedia.
Facebook-owned Meta has announced a flurry of new innovations emerging from its AI lab. At the end of August, the company announced that researchers had started developing the building blocks for the next generation of citation tools by training neural networks to pinpoint relevant source material in an internet-sized data pool, the end goal being an AI that can fact-check all 6.5 million articles on Wikipedia.
The machine-learning model works by analyzing blocks of text and ‘understanding’ their content using Natural Language Understanding (NLU) techniques, rather than simply checking whether two strings of text contain the same words.
As we previously reported, NLU belongs to the family of technologies known as Natural Language Processing (NLP), alongside Natural Language Generation (NLG). Of course, as with any model, such AI systems require a bucket-load of data to run. This particular AI is being trained on four million Wikipedia citations, with its creators hoping it will soon be able to recommend trusted sources.
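To make the distinction concrete, here is a minimal sketch of the difference between exact string matching and the kind of meaning-based comparison NLU enables. The open-source sentence-transformers library and the all-MiniLM-L6-v2 model are stand-ins chosen purely for illustration; Meta has not said they are part of its pipeline.

```python
# A toy contrast between exact string matching and meaning-based matching.
# The library and model here are illustrative stand-ins, not Meta's system.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

claim = "The Eiffel Tower was completed in 1889."
source = "Construction of Gustave Eiffel's wrought-iron tower in Paris finished in 1889."

# String matching fails: the two sentences share almost no words.
print(claim.lower() in source.lower())  # False

# Semantic matching succeeds: the sentences sit close together in the
# encoder's vector space because their meanings are similar.
emb = model.encode([claim, source], convert_to_tensor=True)
print(f"cosine similarity: {util.cos_sim(emb[0], emb[1]).item():.2f}")
```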
Those sources will be drawn from a large, constantly updated index of data, and Meta’s artificial intelligence research team plans to keep working on the tool in order to enhance the online encyclopedia continuously. Meta has open-sourced the model, and users can access a demo of the verification tool.
As Fabio Petroni, tech lead manager at Meta’s Fundamental AI Research (FAIR) team, said: “What we have done is to build an index of all these web pages by chunking them into passages and providing an accurate representation for each passage. That is not representing word-by-word in the passage, but the meaning of it. That means two chunks of texts with similar meanings will be represented in a very close position in the resulting n-dimensional space where all these passages are stored.”
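The passage-index idea Petroni describes can be sketched in a few lines. The snippet below is an assumption-laden illustration rather than Meta’s production system: it uses FAISS as the nearest-neighbour index and the same stand-in encoder as above.

```python
# A hedged sketch of the passage index described above: chunk pages into
# passages, embed each passage, and store the vectors in a nearest-neighbour
# index. FAISS and the encoder are illustrative choices, not Meta's stack.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def chunk(text: str, size: int = 50) -> list[str]:
    """Naive chunking: split a page into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

pages = ["full text of one web page", "full text of another web page"]  # placeholder corpus
passages = [p for page in pages for p in chunk(page)]

# Each passage becomes one point in the n-dimensional space; normalizing
# makes inner-product search equivalent to cosine similarity.
vectors = encoder.encode(passages, normalize_embeddings=True)
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(np.asarray(vectors, dtype="float32"))

# A claim's embedding retrieves the passages closest to it in meaning.
query = encoder.encode(["claim that needs a citation"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)  # top-2 passages
print([passages[i] for i in ids[0]])
```

Normalizing the embeddings is what puts “two chunks of texts with similar meanings” in “a very close position” in the index: with unit-length vectors, inner-product search behaves like cosine similarity.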
In other news, Meta also recently announced that its researchers are developing a tool that can, in effect, read people’s minds: research scientists in its AI lab have developed an AI that can “hear” what someone is hearing by studying their brainwaves.
While the research is still in very early stages, it is intended to be a building block for tech that could help people with traumatic brain injuries (of which there are an estimated 69 million annually) who cannot communicate by talking or typing. Crucially, the researchers are trying to record this brain activity without probing the brain with electrodes, which would require surgery.
As Meta’s website explains, the new model works by decoding speech “from noninvasive recordings of brain activity”: “From three seconds of brain activity, our results show that our model can decode the corresponding speech segments with up to 73 percent top-10 accuracy from a vocabulary of 793 words, i.e., a large portion of the words we typically use on a day-to-day basis.”
By feeding data into the model, patterns emerge, and the research shows that even when given only a snippet of brain activity, the AI can determine which clip from a large pool of new audio clips the patient was hearing. While the early results are encouraging, the next step will be testing whether the model can be extended to decode speech directly from brain activity, without needing a pool of candidate audio clips.
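That matching step can be illustrated schematically. In the sketch below, random vectors stand in for the outputs of the brain-activity and audio encoders (assumptions for illustration; Meta has not released its model in this form), and the pool size of 793 is borrowed from the vocabulary figure above purely for the sake of the example.

```python
# A schematic of selecting, from a pool of candidate audio clips, the one a
# brain-activity snippet corresponds to. Random vectors stand in for real
# encoder outputs; this is an illustration, not Meta's code.
import numpy as np

rng = np.random.default_rng(0)
n_clips, dim = 793, 256  # pool size borrows the 793 figure for illustration

# Unit-length embeddings: one per candidate audio clip.
clips = rng.normal(size=(n_clips, dim))
clips /= np.linalg.norm(clips, axis=1, keepdims=True)

# The brain-activity snippet is modeled as a noisy view of clip 42.
brain = clips[42] + 0.1 * rng.normal(size=dim)
brain /= np.linalg.norm(brain)

# Score every candidate against the brain snippet by cosine similarity and
# check whether the true clip lands in the top 10 -- the "top-10 accuracy"
# metric quoted from Meta's write-up.
scores = clips @ brain
top10 = np.argsort(scores)[::-1][:10]
print("true clip in top 10:", bool(42 in top10))
```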
In any case, using AI to understand the human brain is an increasingly important field, with applications that extend even to enterprises monitoring how engaged their employees are with their work.