Facebook and AI startup Hugging Face today open-sourced Retrieval-Augmented Generation (RAG), a natural language processing model that retrieves and interprets contextual information to accomplish a range of tasks.

By dynamically changing or supplementing its internal knowledge, RAG lets researchers control what the model knows, allowing them to achieve state-of-the-art results without spending the time and compute required to retrain the entire model.

Starting today, RAG is available as a component of the Hugging Face Transformers library, integrating with the new Datasets library to provide the indexed knowledge sources on which RAG depends.
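
As a rough illustration of that integration, the released checkpoints can be loaded through the Transformers RAG classes. The sketch below is a minimal example, not an official recipe; it uses the small dummy index bundled for demos so it can run without the full multi-gigabyte Wikipedia download:

```python
from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

# Load the pretrained RAG checkpoint released alongside the paper.
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")

# The retriever is backed by a Datasets-hosted Wikipedia index;
# use_dummy_dataset=True substitutes a tiny sample index so this
# sketch runs without downloading the full index.
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)

# The generator consults the retriever inside its forward pass.
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever
)
```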

RAG’s “late fusion” approach to integrating knowledge

Frontier work in natural language understanding has produced general-purpose models that, while often flawed, generalize well across tasks. So far, most models have been applied to tasks whose solutions can be generated without background knowledge, such as sentiment analysis.

RAG, by contrast, uses its input to retrieve relevant documents from a knowledge source such as Wikipedia. Given a question like “When did the first mammals appear on Earth?”, for instance, RAG might retrieve documents on “Mammal,” “History of Earth,” and “Evolution of mammals,” concatenate them with the input as context, and then feed the result to the model to generate the output text.
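
Continuing the loading sketch above, answering that example question is a single generate call; the retrieval step happens inside the model rather than in user code (the exact tokenizer call shown is an assumption based on the Transformers API):

```python
# Encode the question; the model retrieves supporting passages internally,
# conditions generation on them, and produces a free-form answer.
inputs = tokenizer("When did the first mammals appear on Earth?", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```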

According to Facebook, RAG uses a form of “late fusion” to integrate knowledge from the retrieved documents, meaning it makes answer predictions for individual document-question pairs before aggregating the final prediction scores. Its performance improves further when it has access to documents that contain clues to the answer but do not state it verbatim. In some cases, RAG even generates answers that appear in none of the retrieved documents.
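
To make the late-fusion idea concrete, here is an illustrative toy calculation (not Facebook’s code, and the numbers are invented): each retrieved document contributes its own answer score, weighted by how relevant the retriever judged that document, and the per-document scores are combined only at the end:

```python
import numpy as np

# Hypothetical scores for one candidate answer y to a question x.
retrieval_probs = np.array([0.5, 0.3, 0.2])       # p(z|x): relevance of 3 documents
answer_probs_per_doc = np.array([0.8, 0.6, 0.1])  # p(y|x, z): answer score per document

# Late fusion: marginalize over documents, p(y|x) = sum_z p(z|x) * p(y|x, z).
# Evidence spread across several documents can reinforce the same answer.
p_answer = float(np.sum(retrieval_probs * answer_probs_per_doc))
print(p_answer)  # 0.6
```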

RAG specializes in knowledge-intensive natural language problems

In benchmarks on open-domain datasets such as NaturalQuestions, which contains questions from Google Search users, RAG showed a knack for generating correct answers even when they did not appear in any retrieved document, Facebook said.

RAG also excels at knowledge-intensive natural language generation, which Facebook explored by having the model produce Jeopardy-style questions. The questions RAG generated were more specific, diverse, and factual than those of comparable models, likely owing to RAG’s ability to synthesize a response from different pieces of information drawn from multiple sources.

Sebastian Riedel, a research manager on the RAG project, says that while RAG is not used in production at Facebook, the team behind it is actively iterating on it to reduce potential bias. They restricted the documents in the training dataset to Wikipedia, which they believe is a safer source than the web crawls behind many of today’s language models.

RAG’s biggest advantage: flexibility

The researchers are exploring a version of RAG that minimizes residual risk in order to keep its output consistently safe. They are also investigating how to extend RAG to work multimodally and to operate over multiple knowledge sources simultaneously.

“The real advantage of RAG is its flexibility,” says Sebastian Riedel. “Changing what a pretrained language model knows normally means retraining the entire model on new documents. With RAG, we control what it knows simply by swapping out the documents it uses for knowledge retrieval. We got excellent results on NaturalQuestions, CuratedTrec, and WebQuestions with RAG, showing that you can achieve state-of-the-art machine reading performance with a generative rather than an extractive reader.”
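
In the Transformers integration, that swap is exposed through the retriever. The sketch below is a hedged illustration with hypothetical paths; it assumes a passage dataset and FAISS index have already been built (for example, with the helper scripts shipped alongside the RAG examples):

```python
from transformers import RagRetriever

# Point RAG at a custom knowledge source instead of the default Wikipedia
# index. The paths below are placeholders for a prebuilt Datasets-format
# passage store and its FAISS index.
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq",
    index_name="custom",
    passages_path="./my_knowledge/passages",  # hypothetical path
    index_path="./my_knowledge/index.faiss",  # hypothetical path
)
```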

Facebook sees vast potential in RAG, which it asserts will enable researchers to deploy solutions for knowledge-intensive tasks with just a few lines of code.

According to Facebook, “RAG allows NLP models to bypass the retraining step, accessing and drawing on the latest information and then using a generator to output the results. We foresee future research in which knowledge-intensive tasks become as straightforward and approachable as knowledge-light tasks like sentiment analysis are today.”