5 Advanced RAG Methods with LlamaIndex

Published: January 24, 2024

LlamaPacks: Building RAG in Fewer Lines of Code.

The comparison was conducted among the following advanced retrieval setups, each built with a LlamaPack.

  • Baseline: naive query engine with LlamaIndex

  • Pack 1: Hybrid Fusion Retriever Pack [Notebook] [Source]

Built on top of QueryFusionRetriever. Generates multiple queries from the input question, then ensembles a vector retriever and a BM25 retriever and fuses their results (vector + keyword search + reciprocal_rerank_fusion).
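The fusion step at the heart of this pack can be sketched in plain Python. This is a minimal, self-contained version of reciprocal rank fusion, not the pack's actual implementation; the function name and the toy document IDs are illustrative, and `k=60` is the constant from the original RRF paper (also LlamaIndex's default).

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse several ranked result lists into one.

    A document's fused score is the sum of 1 / (k + rank) over every
    list it appears in, so documents ranked well by both the vector
    and the keyword retriever rise to the top.
    """
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Ensemble a vector-search ranking with a BM25 (keyword) ranking.
vector_hits = ["d3", "d1", "d2"]
bm25_hits = ["d1", "d4", "d3"]
fused = reciprocal_rank_fusion([vector_hits, bm25_hits])
```

Note how `d1`, which appears near the top of both lists, outranks `d3`, which leads only the vector list.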

  • Pack 2: Query Rewriting Retriever Pack

Built on top of QueryFusionRetriever. Rewrites the input question into multiple queries, retrieves each with vector search, and reranks the combined results with reciprocal_rerank_fusion.

  • Pack 3: Auto Merging Retriever Pack

Built on top of AutoMergingRetriever. It loads a document, builds a hierarchical node graph (with bigger parent nodes and smaller child nodes), then checks whether enough child nodes of a parent node have been retrieved; if so, it merges them and replaces them with that parent.
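The merge decision can be sketched as a standalone function. This is a simplified illustration under assumed names (`auto_merge`, a parent→children dict, a 0.6 ratio threshold), not the retriever's real code.

```python
def auto_merge(retrieved_ids, parent_to_children, threshold=0.5):
    # If at least `threshold` of a parent's children were retrieved,
    # swap those small child nodes for the bigger parent node.
    retrieved = set(retrieved_ids)
    merged = set(retrieved)
    for parent, children in parent_to_children.items():
        hit = retrieved & set(children)
        if children and len(hit) / len(children) >= threshold:
            merged -= hit       # drop the small child nodes...
            merged.add(parent)  # ...and keep the bigger parent instead
    return sorted(merged)

hierarchy = {"P1": ["c1", "c2", "c3", "c4"], "P2": ["c5", "c6"]}
# 3 of P1's 4 children were retrieved (merge); only 1 of P2's 2 (keep).
result = auto_merge(["c1", "c2", "c3", "c5"], hierarchy, threshold=0.6)
```

The payoff is context: the LLM sees one coherent parent passage instead of several fragments.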

  • Pack 4: Small-to-Big Retrieval Pack

Built on top of RecursiveRetriever. Given input documents and an initial set of "parent" chunks, it subdivides each chunk into smaller "child" chunks, links each child chunk to its parent chunk, and indexes the child chunks.
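The "small-to-big" idea is: match against small chunks, but return the big chunk they belong to. Below is a minimal sketch with assumed names and a toy overlap-based matcher standing in for vector search; the character-based splitter is deliberately naive.

```python
def build_small_to_big(parent_chunks, child_size=20):
    # Index step: split each "parent" chunk into smaller "child"
    # chunks and remember which parent every child came from.
    child_to_parent = {}
    for parent in parent_chunks:
        for i in range(0, len(parent), child_size):
            child_to_parent[parent[i:i + child_size]] = parent
    return child_to_parent

def retrieve_small_to_big(query, child_to_parent):
    # Query step: match against the small child chunks (more precise),
    # but hand the bigger parent chunk to the LLM (more context).
    words = set(query.lower().split())
    best = max(child_to_parent,
               key=lambda c: len(words & set(c.lower().split())))
    return child_to_parent[best]

index = build_small_to_big(["alpha beta gamma delta epsilon",
                            "one two three four five"])
answer_chunk = retrieve_small_to_big("three four", index)
```

The query matches a 20-character child chunk, but the caller receives the full parent it was carved from.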

  • Pack 5: Sentence Window Retriever Pack

Built on top of SentenceWindowNodeParser. It loads a document and splits it into nodes, with each node being a single sentence. Each node also carries a window of the surrounding sentences in its metadata.
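The parsing step can be sketched as follows. This is a simplified stand-in, not the parser's real code: it splits naively on periods and uses plain dicts where LlamaIndex uses node objects.

```python
def sentence_window_nodes(text, window_size=1):
    # One node per sentence; each node's metadata carries a window of
    # +/- window_size neighbouring sentences. Retrieval embeds the
    # single sentence, but the LLM later sees the whole window.
    sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
    nodes = []
    for i, sentence in enumerate(sentences):
        lo = max(0, i - window_size)
        window = " ".join(sentences[lo:i + window_size + 1])
        nodes.append({"text": sentence, "metadata": {"window": window}})
    return nodes

nodes = sentence_window_nodes("Cats purr. Dogs bark. Birds sing.")
```

Embedding a single sentence keeps retrieval precise, while the metadata window restores the context around the hit.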

MODEL = "gpt-3.5-turbo"

So, which one will perform best? My answer is: "It depends! Different retrieval tasks and datasets have varying requirements, and there's no one-size-fits-all solution."

Source: https://blog.csdn.net/lichunericli/article/details/135779657