Chains and Retrievers
LangChain makes it easy to orchestrate interactions with language models, chain together various components, and incorporate resources such as databases and APIs. In this chapter we will examine two fundamental LangChain concepts: chains and retrievers.
Understanding Chains
Chains are central to LangChain: they are the logical links between one or more LLMs, and depending on the requirements and the LLMs involved, a chain can be simple or complex. A chain's structured configuration consists of a PromptTemplate, an LLM, and an optional output parser. The LLMChain in this configuration takes in a set of input variables and uses the PromptTemplate to turn them into a coherent prompt. That formatted prompt is then fed to the model. Once the output is received, the LLMChain uses the OutputParser, if one is supplied, to format and refine the result into its most usable form.
Figure: the structure of a chain, in which the input_variables flow into the PromptTemplate, which in turn feeds the LLM.
For example, the following chain describes a perfect day in a given city. It assumes an OpenAI chat model from the langchain_openai package; any LLM or chat model instance could take its place.

from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI  # illustrative choice; any chat or LLM model works

# Prompt template with a single input variable, "city"
prompt = PromptTemplate(
    input_variables=["city"],
    template="Describe a perfect day in {city}?"
)

llm = ChatOpenAI()  # assumed model instance; requires an OpenAI API key

# Compose the chain: PromptTemplate -> LLM
chain = prompt | llm
print(chain.invoke({"city": "Paris"}))
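If an output parser is supplied, it simply becomes the last step of the chain. The following is a minimal sketch, reusing the prompt and llm objects defined above together with LangChain's StrOutputParser, which converts the model's message output into a plain string.

from langchain_core.output_parsers import StrOutputParser

# PromptTemplate -> LLM -> OutputParser; the parser turns the model's
# message object into a plain string
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"city": "Paris"}))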