# Core Concepts
## ReqLLM's purpose
ReqLLM normalizes the many ways LLM providers represent requests and responses into a small set of common data structures. You work with a single, canonical model for:
- specifying models across providers
- representing conversations (context and messages)
- handling tool calls
- consuming streaming and final results
For full type and field details, see the [Data Structures](data-structures.md) guide.
## What normalization means
- **One conversation model**: user/system/assistant messages with typed content parts (text, images, files, tool calls/results).
- **One model spec system**: strings and tuples resolve through LLMDB, while `%LLMDB.Model{}` values can carry a full explicit model specification when needed.
- **One streaming shape**: unified `StreamChunk` events for content, tool calls, and metadata across providers.
- **One response shape**: a `Response` that exposes text/object extraction and usage consistently.
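To see what the single response shape buys you, here is a minimal sketch. The accessor names used below (`ReqLLM.Response.text/1` and the `usage` field) are assumptions for illustration; the [Data Structures](data-structures.md) guide lists the canonical helpers.
```elixir
# Minimal sketch: whichever provider serves the request, the result is the
# same Response shape. The accessors below are assumptions; see the
# Data Structures guide for the canonical helpers.
{:ok, response} =
  ReqLLM.generate_text(
    "anthropic:claude-haiku-4-5",
    ReqLLM.Context.new([ReqLLM.Context.user("Say hello in one word.")])
  )

text = ReqLLM.Response.text(response)
usage = response.usage
```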
## 1) Model specification
Models can be specified as:
- **String**: `"provider:model"` (e.g., `"anthropic:claude-haiku-4-5"`)
- **Tuple**: `{:provider, "model", opt1: ..., opt2: ...}`
- **Struct**: `%LLMDB.Model{...}`
- **Map**: `%{provider: ..., id: ...}` when you need a fully explicit model specification
Example:
```elixir
# String form: resolves through LLMDB
{:ok, model} = ReqLLM.model("anthropic:claude-haiku-4-5")

# Map form: fully explicit model specification
model =
  ReqLLM.model!(%{
    provider: :openai,
    id: "gpt-6-mini",
    base_url: "http://localhost:8000/v1"
  })
```
**Normalization in practice**:
- Common options like `temperature` and `max_tokens` are normalized.
- Provider-specific options are translated by the provider adapter; you still pass them in one place.
- See the [Model Specs](model-specs.md) guide for when to use exact dated model IDs, `%LLMDB.Model{}` values, and explicit model metadata.
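For example, the normalized options ride along with the same call for every provider. A minimal sketch (the option values are illustrative):
```elixir
# :temperature and :max_tokens are passed once in the canonical call;
# the provider adapter translates them to whatever the HTTP API expects.
{:ok, response} =
  ReqLLM.generate_text(
    "anthropic:claude-haiku-4-5",
    ReqLLM.Context.new([ReqLLM.Context.user("Summarize the trade-offs of caching.")]),
    temperature: 0.2,
    max_tokens: 200
  )
```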
## 2) Providers
Providers are plugins that translate between ReqLLM's canonical data structures and provider-specific HTTP APIs.
- You use the same API regardless of provider.
- Provider adapters handle request encoding, response decoding, streaming event conversion, and usage extraction.
You rarely need provider internals to build applications. If you author providers, see the [Adding a Provider](adding_a_provider.md) guide.
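In practice, switching providers is a one-line change: the call shape stays the same and only the model string differs. A minimal sketch (the `openai:gpt-6-mini` id reuses the illustrative id from the earlier example):
```elixir
prompt = ReqLLM.Context.new([ReqLLM.Context.user("Name three sorting algorithms.")])

# Identical call shape; only the model string changes.
{:ok, anthropic_response} = ReqLLM.generate_text("anthropic:claude-haiku-4-5", prompt)
{:ok, openai_response} = ReqLLM.generate_text("openai:gpt-6-mini", prompt)
```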
## 3) Context (conversations)
A `Context` is a list of `Message` structs. Each `Message` has a role and a list of typed `ContentPart` items. This uniform design enables multimodal conversations across providers.
Example:
```elixir
alias ReqLLM.Message.ContentPart

messages = [
  ReqLLM.Context.system("You are a helpful assistant."),
  ReqLLM.Context.user([
    ContentPart.text("Analyze this image:"),
    ContentPart.image_url("https://example.com/chart.png")
  ])
]
```
**Normalization in practice**:
- Same structure for text, image, and file inputs.
- No provider-specific message formats to learn.
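Continuing the example above, the same `messages` list goes into the request unchanged for any provider whose model accepts image input:
```elixir
# Reuses the `messages` list built above; no provider-specific
# message format is needed.
{:ok, response} =
  ReqLLM.generate_text("anthropic:claude-haiku-4-5", ReqLLM.Context.new(messages))
```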
## 4) Tool calls
Define tools once; invoke across providers with a unified call/result shape.
- Define tools with a name, description, and a `NimbleOptions` schema for validated arguments.
- Tool call requests and results appear as typed `ContentPart`s and `StreamChunk`s.
Example:
```elixir
{:ok, tool} =
  ReqLLM.Tool.new(
    name: "get_weather",
    description: "Gets weather by city",
    parameter_schema: [city: [type: :string, required: true]],
    callback: fn %{city: city} -> {:ok, "Weather in #{city}: sunny"} end
  )

{:ok, response} =
  ReqLLM.generate_text(
    "anthropic:claude-haiku-4-5",
    ReqLLM.Context.new([ReqLLM.Context.user("Weather in NYC today?")]),
    tools: [tool]
  )
```
## Next steps
Learn the canonical types in detail in the [Data Structures](data-structures.md) guide.