Summary
This chapter covers several skills related to Retrieval Augmented Generation (RAG) with LangChain and an open-source LLM. Here are some of the skills you can learn from this chapter:
- Run a VectorStore in a Docker container.
- Define a VectorStore as a query-only client.
- Run a full RAG pipeline to summarize entire documents loaded from one or more PDF files.
- Launch the Mistral LLM with Ollama.
- Execute everything locally with an open-source LLM, so that no data is exposed to the internet, which matters when you handle sensitive data such as client and business information.
- Enable debug and verbose modes to observe the detailed chain of events.
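The skills above can be sketched as one small local pipeline. This is a hedged, illustrative outline, not the chapter's exact code: it assumes the `langchain`, `langchain-community`, `chromadb`, and `pypdf` packages are installed, a local Ollama server is running with the `mistral` model already pulled, and a file named `report.pdf` exists (the file name is hypothetical); class and module paths follow the `langchain-community` API and may shift between versions.

```python
# Illustrative sketch: local RAG over PDFs with Mistral served by Ollama.
# Assumes `pip install langchain langchain-community chromadb pypdf`
# and a running Ollama server (`ollama pull mistral`).
from langchain.globals import set_debug, set_verbose
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_community.llms import Ollama

set_debug(True)    # print the detailed chain of events
set_verbose(True)  # show intermediate prompts and responses

# Load one or more PDFs and split them into overlapping chunks.
docs = PyPDFLoader("report.pdf").load()  # hypothetical file name
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# Embed the chunks locally and index them in a Chroma vector store;
# nothing leaves the machine.
store = Chroma.from_documents(chunks, OllamaEmbeddings(model="mistral"))

# Query the store through a retriever, with Mistral generating the answer.
chain = RetrievalQA.from_chain_type(
    llm=Ollama(model="mistral"),
    retriever=store.as_retriever(),
)
print(chain.invoke({"query": "Summarize this document."})["result"])
```

Because both the embeddings and the LLM are served by the local Ollama instance, the same sketch covers the privacy point above: no document text or query is sent to an external API.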