# ![Logo with chat chain links](./elixir-langchain-link-logo_32px.png) Elixir LangChain
Elixir LangChain enables Elixir applications to integrate AI services and self-hosted models.
Currently supported AI services:
- OpenAI ChatGPT
- OpenAI DALL-e 2 - image generation
- Anthropic Claude
- Google AI - https://generativelanguage.googleapis.com
- Google Vertex AI - Gemini
- Ollama
- Mistral
- Bumblebee self-hosted models - including Llama, Mistral and Zephyr
**LangChain** is short for Language Chain. An LLM, or Large Language Model, is the "Language" part. This library makes it easier for Elixir applications to "chain" or connect different processes, integrations, libraries, services, or functionality together with an LLM.
**LangChain** is a framework for developing applications powered by language models. It enables applications that are:
- **Data-aware:** connect a language model to other sources of data
- **Agentic:** allow a language model to interact with its environment
The main value props of LangChain are:
1. **Components:** abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
1. **Off-the-shelf chains:** a structured assembly of components for accomplishing specific higher-level tasks
Off-the-shelf chains make it easy to get started. For more complex applications and nuanced use-cases, components make it easy to customize existing chains or build new ones.
## What is this?
Large Language Models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.
This library is aimed at assisting in the development of those types of applications.
## Documentation
The online documentation can be [found here](https://hexdocs.pm/langchain).
## Demo
Check out the [demo project](https://github.com/brainlid/langchain_demo) that you can download and review.
## Relationship with JavaScript and Python LangChain
This library is written in [Elixir](https://elixir-lang.org/) and intended to be used with Elixir applications. The original libraries are [LangChain JS/TS](https://js.langchain.com/) and [LangChain Python](https://python.langchain.com/).
The JavaScript and Python projects aim to integrate with each other as seamlessly as possible. The intended integration is so strong that all objects (prompts, LLMs, chains, etc.) are designed in a way where they can be serialized and shared between the two languages.
This Elixir version does not aim for parity with the JavaScript and Python libraries. Why not?
- JavaScript and Python are both Object Oriented languages. Elixir is Functional. We're not going to force a design that doesn't apply.
- The JS and Python versions started before conversational LLMs were standard. They put a lot of effort into preserving history (like a conversation) when the LLM didn't support it. We're not doing that here.
This library was heavily inspired by, and based on, the way the JavaScript library actually worked and interacted with an LLM.
## Installation
The package can be installed by adding `langchain` to your list of dependencies
in `mix.exs`:
```elixir
def deps do
[
{:langchain, "0.2.0"}
]
end
```
The v0.3.0 Release Candidate includes many additional features and some breaking changes. To try it, depend on the release candidate instead:
```elixir
def deps do
[
{:langchain, "0.3.0-rc.0"}
]
end
```
## Configuration
Currently, the library is written to use the `Req` library for making API calls.
You can configure an _organization ID_, and _API key_ for OpenAI's API, but this library also works with [other compatible APIs](#alternative-openai-compatible-apis) as well as [local models running on Bumblebee](#bumblebee-chat-support).
`config/config.exs`:
```elixir
config :langchain, openai_key: System.get_env("OPENAI_API_KEY")
config :langchain, openai_org_id: System.get_env("OPENAI_ORG_ID")
# OR
config :langchain, openai_key: "YOUR SECRET KEY"
config :langchain, openai_org_id: "YOUR_OPENAI_ORG_ID"
```
It's possible to use a function or a tuple to resolve the secret:
```elixir
config :langchain, openai_key: {MyApp.Secrets, :openai_api_key, []}
config :langchain, openai_org_id: {MyApp.Secrets, :openai_org_id, []}
# OR
config :langchain, openai_key: fn -> System.get_env("OPENAI_API_KEY") end
config :langchain, openai_org_id: fn -> System.get_env("OPENAI_ORG_ID") end
```
## Usage
The central module in this library is `LangChain.Chains.LLMChain`. Most other pieces are either inputs to this, or structures used by it. For understanding how to use the library, start there.
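
To give a feel for that shape, here is a minimal sketch that mirrors the examples later in this README and assumes OpenAI is configured as shown above; the model name and prompt are placeholders:

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message

# Build a chain around a chat model, add a user message, and run it once.
{:ok, _updated_chain, response} =
  LLMChain.new!(%{llm: ChatOpenAI.new!(%{model: "gpt-4"})})
  |> LLMChain.add_message(Message.new_user!("Name the BEAM's concurrency model."))
  |> LLMChain.run()

IO.puts(response.content)
```

The sections below build on this same pattern with custom functions, custom context, and alternative backends.
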
### Exposing a custom Elixir function to ChatGPT
A really powerful feature of LangChain is making it easy to integrate an LLM into your application and expose features, data, and functionality _from_ your application to the LLM.
<img src="https://github.com/brainlid/langchain/blob/main/langchain_functions_overview_sm_v1.png" style="text-align: center;" width=50% height=50% alt="Diagram showing LLM integration to application logic and data through a LangChain.Function">
A `LangChain.Function` bridges the gap between the LLM and our application code. We choose what to expose and, using `context`, we can ensure any actions are limited to what the user has permission to do and access.
For an interactive example, refer to the project [Livebook notebook "LangChain: Executing Custom Elixir Functions"](notebooks/custom_functions.livemd).
The following is an example of a function that receives parameter arguments.
```elixir
alias LangChain.Function
alias LangChain.Message
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
# map of data we want to be passed as `context` to the function when
# executed.
custom_context = %{
  "user_id" => 123,
  "hairbrush" => "drawer",
  "dog" => "backyard",
  "sandwich" => "kitchen"
}

# a custom Elixir function made available to the LLM
custom_fn =
  Function.new!(%{
    name: "custom",
    description: "Returns the location of the requested element or item.",
    parameters_schema: %{
      type: "object",
      properties: %{
        thing: %{
          type: "string",
          description: "The thing whose location is being requested."
        }
      },
      required: ["thing"]
    },
    function: fn %{"thing" => thing} = _arguments, context ->
      # our context is a pretend item/location lookup map
      {:ok, context[thing]}
    end
  })

# create and run the chain
{:ok, updated_chain, %Message{} = message} =
  LLMChain.new!(%{
    llm: ChatOpenAI.new!(),
    custom_context: custom_context,
    verbose: true
  })
  |> LLMChain.add_functions(custom_fn)
  |> LLMChain.add_message(Message.new_user!("Where is the hairbrush located?"))
  |> LLMChain.run(mode: :while_needs_response)
# print the LLM's answer
IO.puts(message.content)
#=> "The hairbrush is located in the drawer."
```
### Alternative OpenAI compatible APIs
There are several services and self-hosted applications that provide an OpenAI-compatible API for ChatGPT-like behavior. To use one of them, point the `endpoint` of the `ChatOpenAI` struct at the service's compatible chat completions URL.
For example, if a locally running service provided that feature, the following code could connect to the service:
```elixir
{:ok, updated_chain, %Message{} = message} =
  LLMChain.new!(%{
    llm: ChatOpenAI.new!(%{endpoint: "http://localhost:1234/v1/chat/completions"})
  })
  |> LLMChain.add_message(Message.new_user!("Hello!"))
  |> LLMChain.run()
```
### Bumblebee Chat Support
Bumblebee hosted chat models are supported. There is built-in support for Llama 2, Mistral, and Zephyr models.
Currently, function calling is NOT supported with these models.
```elixir
ChatBumblebee.new!(%{
  serving: @serving_name,
  template_format: @template_format,
  receive_timeout: @receive_timeout,
  stream: true
})
```
The `serving` is the module name of the `Nx.Serving` that is hosting the model.
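
For orientation, the serving is typically started under the application's supervision tree and then referenced by its registered name. The following is only a hedged sketch: `MyApp.Models.build_serving/0` and the `MyApp.ChatModel` name are hypothetical placeholders for however you construct and register your Bumblebee-backed `Nx.Serving`.

```elixir
alias LangChain.ChatModels.ChatBumblebee

# In the application's supervision tree: start the serving under a known name.
# The helper function and the registered name are hypothetical placeholders.
children = [
  {Nx.Serving, serving: MyApp.Models.build_serving(), name: MyApp.ChatModel}
]

# Later, point the chat model at the named serving:
chat_model = ChatBumblebee.new!(%{serving: MyApp.ChatModel, stream: true})
```
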
See the [`LangChain.ChatModels.ChatBumblebee` documentation](https://hexdocs.pm/langchain/LangChain.ChatModels.ChatBumblebee.html) for more details.
## Testing
To run all the tests, including the ones that perform live calls against external APIs, use commands like the following:
```
mix test --include live_call
mix test --include live_open_ai
mix test --include live_ollama_ai
mix test --include live_anthropic
mix test test/tools/calculator_test.exs --include live_call
```
NOTE: These commands use the configured API credentials, which creates billable events.
Otherwise, running the following will only run local tests making no external API calls:
```
mix test
```
Executing a specific test, whether it is tagged as a `live_call` or not, will run it, potentially creating a billable event.
When doing local development on the `LangChain` library itself, rename `.envrc_template` to `.envrc` and populate it with your private API values. These values are only used when live tests are explicitly requested.
Use a tool like [Direnv](https://direnv.net/) or [Dotenv](https://github.com/motdotla/dotenv) to load the API values into the ENV when using the library locally.
## Development notes and TODOs

TODO: Add notebook example of multi-modal image+text classification.

TODO: Add a `run` mode of "evaluate the function until success." Don't give a response back to the LLM when successful. This allows our own custom code to generate changeset errors and return an ERROR, giving the LLM a chance to correct it.
TODO: Each model currently fires a `Utils.fire_callback` function, both for fake responses and for actual received responses.
- With the addition of message processing, a model should ONLY process messages. The chain should handle firing callbacks.
- What happens if I remove the callbacks from the models? Are they needed?
  - ANSWER: It's how deltas are received and handled.
- Could we change to ONLY firing callbacks for deltas?

TODO: solution to callbacks:
- Do it like the LangChain (JS) callbacks and have a set of defined callback events. Could have a "message received", "delta received", and "message ready" (after processing, and after deltas are combined into a message).
## LangChain callbacks
- https://js.langchain.com/v0.1/docs/modules/callbacks/
- https://v02.api.js.langchain.com/classes/langchain_core_tracers_console.ConsoleCallbackHandler.html
It's a list of callback objects, each with named fields representing specific events.

TODO: the callback object belongs to (and is scoped to) the object it's given to. Giving a callback object to the LLMChain means it will be used by the chain but NOT passed on to the model. The model has its own callback object.

In a LiveView, we have the LLMChain and an async LLMChain when running it.
- need the Delta received event on the async chain's model.
- could have the same setup on the LV owned one, but it wouldn't fire.
- "raw message received"
- "message processed"
Create a `Utils.fire_callback(handlers, event_name, event_data)` (rough sketch below):
- would fire the same callback on all the handlers (if multiple are given)
- Good way of getting token information out of the LLM through callbacks. Token data/structure will differ with each LLM. Emit token usage data as a map.
- Good way of getting API rate usage information out of the LLM too.

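A rough Elixir sketch of the idea captured in these notes; the module name, event names, and handler shape are assumptions for illustration, not a finalized API:

```elixir
defmodule MyApp.CallbackUtils do
  # Fire the named event on every handler that defines it; silently skip the rest.
  def fire_callback(handlers, event_name, event_data) do
    handlers
    |> List.wrap()
    |> Enum.each(fn handler ->
      case Map.get(handler, event_name) do
        fun when is_function(fun, 1) -> fun.(event_data)
        _ -> :ok
      end
    end)
  end
end

# Hypothetical handler: a plain map keyed by event name with 1-arity functions.
handler = %{
  on_delta_received: fn delta -> IO.write(delta.content || "") end,
  on_message_processed: fn message -> IO.puts("processed: #{message.content}") end
}

# Fire an event against one handler (or a list of handlers).
MyApp.CallbackUtils.fire_callback(handler, :on_message_processed, %{content: "Hello"})
```
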
**Properties** (of the JS callback handler):
- awaitHandlers
- ignoreAgent
- ignoreChain
- ignoreLLM
- ignoreRetriever
- name
- raiseError
**Methods**:
- copy
- getBreadcrumbs
- getParents
- handleAgentAction
- handleAgentEnd
- handleChainEnd
- handleChainError
- handleChainStart
- handleChatModelStart
- handleLLMEnd
- handleLLMError
- handleLLMNewToken
- handleLLMStart
- handleRetrieverEnd
- handleRetrieverError
- handleRetrieverStart
- handleText
- handleToolEnd
- handleToolError
- handleToolStart
- onAgentAction
- onChainEnd
- onChainError
- onChainStart
- onLLMEnd
- onLLMError
- onLLMStart
- onRetrieverEnd
- onRetrieverError
- onRetrieverStart
- onToolEnd
- onToolError
- onToolStart
- toJSON
- toJSONNotImplemented
- onAgentEnd?
- onLLMNewToken?
- onRunCreate?
- onRunUpdate?
- onText?
- fromMethods
With LLMChain being a struct, I can't connect the callback_handler types to the data. It means the types don't get used and won't help a developer.
`|> Utils.append_assistant_response(response, mode: :append | :replace)`
Need two tests around resetting the fail count on successful message processing and when a tool call is successfully completed.
Then add an Ecto processor (process the JSON into an Ecto schema)?
With or without the Ecto one, I can complete the image processing article.
A generic JS version of that, using LangChain on a privately hosted LLM Studio or Ollama, would be cool for a general blog post. AI blog? JS blog? Need JS help.
TODO: NOTE: Verbose logging could be replaced with a pre-built callback handler for console logging. That would remove all the verbose logging messages. Perhaps turning on verbose just adds the console logger callbacks. That may not be enough granularity on callbacks?
TODO: Left to update:
- OpenAI Image needs new callback setup: `%OpenAIImage{} = openai, callback_fn`
TODO: `run(until_success: true)` option? We want to run until a tool_call succeeds or a set of message processors succeed, then stop.
Make that the default mode? Then create a `run(once: true)`?
`run(mode: :until_success)` or `mode: :while_needs_response`. That way it's only one mode.
TODO: Publish as a version bump with RC
TODO: Add a "metadata" or "extra" field to a model or a chain? The struct gets passed through events, so it could make it easier to identify them.