README.md

# OpenAI Handler

[![Hex.pm](https://img.shields.io/hexpm/v/openai_handler.svg)](https://hex.pm/packages/openai_handler)
[![Hex Docs](https://img.shields.io/badge/hex-docs-blue.svg)](https://hexdocs.pm/openai_handler)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](LICENSE)

A simple, flexible Erlang library for interacting with the OpenAI API. It provides a clean interface for generating text, performing chat completions, and managing configuration for OpenAI models.

## Features

- 🚀 **Simple API** - Easy-to-use functions for text generation and chat
- ⚙️ **Flexible Configuration** - Support for default, environment, and custom configurations
- 🔄 **Multiple Endpoints** - Support for both generate and chat APIs
- 🛡️ **Error Handling** - Comprehensive error handling and type safety
- 📝 **Well Documented** - Complete type specifications and documentation
- 🔧 **Environment Variables** - Easy configuration through environment variables

## Prerequisites

- Erlang/OTP 24 or higher
- rebar3 (for building from source)

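## Installation

The Hex badge above suggests the package is published as `openai_handler`; assuming so, add it to your rebar3 dependencies:

```erlang
%% rebar.config
{deps, [openai_handler]}.
```
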
## Quick Start

### Basic Text Generation

```erlang
% In your Erlang shell, start the inets HTTP client first
1> application:start(inets).
ok
2> {ok, Response} = openai_handler:generate("Explain quantum computing in simple terms").
{ok, <<"Quantum computing is a revolutionary computing paradigm...">>}
3> openai_handler:print_result({ok, Response}).
Quantum computing is a revolutionary computing paradigm...
ok
```

### Chat Completion

```erlang
1> Messages = [
    #{role => <<"user">>, content => <<"Hello! How are you today?">>}
].
2> {ok, Response} = openai_handler:chat(Messages).
{ok, <<"Hello! I'm doing well, thank you for asking...">>}
```

## API Reference

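### Types

The specs below reference `openai_result()`, `messages()`, and `config()`. A sketch of their likely shapes, inferred from the examples in this README (not the library's authoritative definitions):

```erlang
-type openai_result() :: {ok, binary()} | {error, term()}.
-type message()       :: #{role := binary(), content := binary()}.
-type messages()      :: [message()].
-type config()        :: #{model => binary(),
                           temperature => float(),
                           max_tokens => pos_integer()}.
```
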
### Core Functions

#### `generate/1,2`
Generate text from a simple prompt.

```erlang
-spec generate(string() | binary()) -> openai_result().
-spec generate(string() | binary(), config()) -> openai_result().
```

**Examples:**
```erlang
openai_handler:generate("What is the meaning of life?").
openai_handler:generate("Explain AI", #{model => <<"phi3">>, temperature => 0.8}).
```

#### `chat/1,2`
Perform chat completion using the messages format.

```erlang
-spec chat(messages()) -> openai_result().
-spec chat(messages(), config()) -> openai_result().
```

**Examples:**
```erlang
Messages = [
    #{role => <<"system">>, content => <<"You are a helpful assistant">>},
    #{role => <<"user">>, content => <<"Hello!">>}
],
openai_handler:chat(Messages).
```

#### `generate_with_context/2,3`
Generate text with additional context.

```erlang
-spec generate_with_context(string() | binary(), string() | binary()) -> openai_result().
-spec generate_with_context(string() | binary(), string() | binary(), config()) -> openai_result().
```

**Examples:**
```erlang
Context = "You are a expert in mathematics",
Prompt = "Explain calculus",
openai_handler:generate_with_context(Context, Prompt).
```

### Configuration Functions

#### `default_config/0`
Get the default hardcoded configuration.

```erlang
Config = openai_handler:default_config().
```
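
To tweak a single field, merge an override into the defaults (a sketch using `maps:merge/2` and the config keys shown elsewhere in this README):

```erlang
Config = maps:merge(openai_handler:default_config(), #{temperature => 0.5}),
openai_handler:generate("Hello", Config).
```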

#### `get_env_config/0`
Get configuration from environment variables with fallback to defaults.

```erlang
Config = openai_handler:get_env_config().
```
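
The variable names are not listed here, but the resulting map can be passed anywhere a config is accepted:

```erlang
Config = openai_handler:get_env_config(),
openai_handler:generate("Hello!", Config).
```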

### Utility Functions

#### `print_result/1`
Print the result of an operation to stdout.

```erlang
Result = openai_handler:generate("Hello world"),
openai_handler:print_result(Result).
```

#### `format_prompt/2`
Format a prompt template with arguments.

```erlang
Prompt = openai_handler:format_prompt("Translate '~s' to ~s", ["hello", "French"]).
```
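
The formatted prompt can then be passed straight to `generate/1`:

```erlang
Prompt = openai_handler:format_prompt("Translate '~s' to ~s", ["hello", "French"]),
openai_handler:generate(Prompt).
```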

## Examples

### Building a Simple Chatbot

```erlang
-module(simple_chatbot).
-export([start/0, chat_loop/1]).

start() ->
    application:start(inets),
    InitialMessages = [
        #{role => <<"system">>, content => <<"You are a friendly chatbot">>}
    ],
    chat_loop(InitialMessages).

chat_loop(Messages) ->
    io:format("You: "),
    case io:get_line("") of
        eof -> ok;
        {error, _} = Error -> Error;
        Line ->
            UserMessage = #{role => <<"user">>, content => list_to_binary(string:trim(Line))},
            NewMessages = Messages ++ [UserMessage],
            
            case openai_handler:chat(NewMessages) of
                {ok, Response} ->
                    io:format("Bot: ~s~n", [Response]),
                    AssistantMessage = #{role => <<"assistant">>, content => Response},
                    chat_loop(NewMessages ++ [AssistantMessage]);
                {error, Reason} ->
                    io:format("Error: ~p~n", [Reason]),
                    chat_loop(Messages)
            end
    end.
```
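
Compile and run it from the shell (the `You:` prompt then waits for input):

```erlang
1> c(simple_chatbot).
{ok,simple_chatbot}
2> simple_chatbot:start().
You:
```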

### Text Summarization

```erlang
-module(text_summarizer).
-export([summarize/1]).

summarize(Text) ->
    %% Text is expected to be a string (list); convert binaries with
    %% unicode:characters_to_list/1 before calling.
    Context = "You are an expert at summarizing text. Provide a concise summary.",
    Prompt = "Summarize the following text:\n\n" ++ Text,
    
    Config = #{
        model => <<"llama2">>,
        temperature => 0.3,
        max_tokens => 200
    },
    
    openai_handler:generate_with_context(Context, Prompt, Config).
```
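
Example call (the actual summary depends on the configured model):

```erlang
text_summarizer:summarize("Erlang is a concurrent, functional language designed for building fault-tolerant, distributed systems...").
```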

### Code Generation

```erlang
-module(code_generator).
-export([generate_function/2]).

generate_function(Language, Description) ->
    %% io_lib:format/2 returns iodata; flatten it to match the string() spec.
    Prompt = lists:flatten(
        io_lib:format("Write a ~s function that ~s. Include comments and proper formatting.",
                      [Language, Description])),
    
    Config = #{
        model => <<"codellama">>,
        temperature => 0.2,
        max_tokens => 500
    },
    
    openai_handler:generate(Prompt, Config).
```
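
Example call:

```erlang
code_generator:generate_function("Erlang", "reverses a list without using lists:reverse/1").
```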

## Development

### Building

```bash
rebar3 compile
```

### Running Tests

```bash
rebar3 eunit
```

### Type Checking

```bash
rebar3 dialyzer
```

### Code Analysis

```bash
rebar3 xref
```

## License

This project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.

## Acknowledgments

- [OpenAI](https://openai.com/) for providing the API this library targets
- The Erlang/OTP team for the robust runtime system
- [jsx](https://github.com/talentdeficit/jsx) for JSON encoding/decoding