# libsql

Use [libSQL](https://github.com/tursodatabase/libsql) from Gleam!

This is a Gleam library for libSQL with the same ergonomics as
[sqlight](https://github.com/lpil/sqlight), built on a Rust NIF that wraps
the official `libsql` crate.

## Features

- In-memory (`:memory:`) and local file databases
- Remote `libsql://` connections (Turso, etc.)
- Embedded replica sync (local read replicas with remote Turso primary)
- Synced database — write offline locally, sync to remote later
- Type-safe parameter binding — positional `?` and named `:name`
- Batch execution (bulk inserts/updates in a single NIF roundtrip)
- Prepared statement caching (`prepare` / `exec_prepared` / `query_prepared`)
- Decoder-based query results (like sqlight)
- Convenience helpers: `query_one`, `query_first`, `last_insert_rowid`, `changes`
- Transactions (`BEGIN` / `COMMIT` / `ROLLBACK` + combinator)
- Connection control: `interrupt`, `total_changes`
- Full SQLite error code mapping
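
All operations return `Result` values, so errors can be handled with ordinary
pattern matching. A minimal sketch (only the `Ok`/`Error` wrapper is assumed;
the error type's fields are not shown here):

```gleam
import gleam/io
import libsql

pub fn main() {
  use conn <- libsql.with_connection(":memory:")

  // Querying a missing table returns Error rather than crashing the process.
  case libsql.exec("select * from no_such_table", conn) {
    Ok(Nil) -> io.println("ok")
    Error(_) -> io.println("exec failed; inspect the error for the SQLite code")
  }
}
```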

## Requirements

- [Gleam](https://gleam.run) >= 1.3.0
- [Erlang/OTP](https://www.erlang.org) >= 25

No Rust toolchain is required for end users: precompiled NIF binaries are
downloaded automatically for supported platforms (Linux x86_64, macOS x86_64,
macOS aarch64). On other platforms, Rust and Cargo are needed to build from
source.

## Building from source

If a precompiled binary is not available for your platform, compile the NIF
locally:

```sh
make build
```

Or manually:

```sh
cd native/libsql_nif && cargo build --release
cp target/release/liblibsql_nif.so ../../priv/libsql_nif.so  # Linux
# cp target/release/liblibsql_nif.dylib ../../priv/libsql_nif.dylib  # macOS
```

## Running tests

```sh
make test
```

## Usage

```gleam
import gleam/dynamic/decode
import libsql

pub fn main() {
  use conn <- libsql.with_connection(":memory:")

  let sql = "
  create table cats (name text, age int);

  insert into cats (name, age) values
  ('Nubi', 4),
  ('Biffy', 10),
  ('Ginny', 6);
  "
  let assert Ok(Nil) = libsql.exec(sql, conn)

  let cat_decoder = {
    use name <- decode.field(0, decode.string)
    use age <- decode.field(1, decode.int)
    decode.success(#(name, age))
  }

  let sql = "
  select name, age from cats
  where age < ?
  "
  let assert Ok([#("Nubi", 4), #("Ginny", 6)]) =
    libsql.query(sql, on: conn, with: [libsql.int(7)], expecting: cat_decoder)
}
```

### Remote connection

```gleam
import gleam/dynamic/decode
import libsql

pub fn main() {
  let url = "libsql://my-db.turso.io"
  let token = "my-auth-token"
  use conn <- libsql.with_remote_connection(url, token)

  let assert Ok([#("hello", 42)]) =
    libsql.query(
      "select 'hello', 42",
      on: conn,
      with: [],
      expecting: {
        use a <- decode.field(0, decode.string)
        use b <- decode.field(1, decode.int)
        decode.success(#(a, b))
      },
    )
}
```

### Transactions

```gleam
import gleam/result

pub fn insert_user(conn) {
  libsql.transaction(conn, fn() {
    use _ <- result.try(libsql.exec("insert into users (name) values ('Alice')", conn))
    use _ <- result.try(libsql.exec("insert into logs (action) values ('created')", conn))
    Ok(Nil)
  })
}
```

If the inner function returns `Error(...)`, the transaction is automatically
rolled back.
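
A sketch of that behaviour, using a statement that is guaranteed to fail (the
table does not exist) to trigger the rollback:

```gleam
import gleam/result

pub fn rollback_demo(conn) {
  libsql.transaction(conn, fn() {
    use _ <- result.try(libsql.exec("insert into users (name) values ('Eve')", conn))
    // This statement fails because the table does not exist, so the whole
    // transaction, including the insert above, is rolled back.
    libsql.exec("insert into missing_table (x) values (1)", conn)
  })
  // The result is an Error, and 'Eve' was never committed.
}
```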

### Named parameters

```gleam
libsql.query_named(
  "select * from cats where name = :name and age > :min_age",
  conn,
  [
    #(":name", libsql.text("Nubi")),
    #(":min_age", libsql.int(2)),
  ],
  expecting: cat_decoder,
)
```

### Batch execution

```gleam
libsql.exec_batch(
  "insert into users (name, age) values (?, ?)",
  conn,
  [
    [libsql.text("Alice"), libsql.int(30)],
    [libsql.text("Bob"), libsql.int(25)],
    [libsql.text("Carol"), libsql.int(35)],
  ],
)
```

Batch operations are typically wrapped in a transaction for atomicity:

```gleam
libsql.transaction(conn, fn() {
  libsql.exec_batch(
    "insert into logs (action) values (?)",
    conn,
    actions |> list.map(fn(a) { [libsql.text(a)] }),
  )
})
```

### Convenience helpers

**Single-row queries:**

```gleam
// Expect exactly one row — errors on 0 or multiple
let assert Ok(user) =
  libsql.query_one(
    "select * from users where id = ?",
    conn,
    [libsql.int(42)],
    user_decoder,
  )

// Get the first row (or None)
let assert Ok(option.Some(user)) =
  libsql.query_first(
    "select * from users where email = ?",
    conn,
    [libsql.text("alice@example.com")],
    user_decoder,
  )
```

**Insert metadata:**

```gleam
let assert Ok(Nil) = libsql.exec("insert into users (name) values ('Alice')", conn)
let assert Ok(id) = libsql.last_insert_rowid(conn)

let assert Ok(Nil) = libsql.exec("update users set active = 1", conn)
let assert Ok(1) = libsql.changes(conn)
```

### Prepared statements

```gleam
libsql.with_statement("insert into users (name, age) values (?, ?)", conn, fn(stmt) {
  let assert Ok(Nil) = libsql.exec_prepared(stmt, [libsql.text("Alice"), libsql.int(30)])
  let assert Ok(Nil) = libsql.exec_prepared(stmt, [libsql.text("Bob"), libsql.int(25)])
})
```

Prepared statements avoid re-parsing SQL on each execution, giving a
significant performance boost for repeated queries.
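
`query_prepared` gives the same benefit for reads. A sketch using the explicit
`prepare`/`finalize` pair (argument order assumed to match the API list in this
README):

```gleam
let assert Ok(stmt) = libsql.prepare("select name from users where age > ?", conn)

// Reuse the compiled statement with different bindings.
let assert Ok(adults) =
  libsql.query_prepared(stmt, [libsql.int(18)], decode.field(0, decode.string, decode.success))
let assert Ok(seniors) =
  libsql.query_prepared(stmt, [libsql.int(65)], decode.field(0, decode.string, decode.success))

let assert Ok(Nil) = libsql.finalize(stmt)
```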

### Embedded replica

```gleam
let db_path = "/tmp/my_replica.db"
let url = "libsql://my-db.turso.io"
let token = "my-auth-token"

let assert Ok(conn) = libsql.open_replica(db_path, url, token)

// Sync with remote primary
let assert Ok(replicated) = libsql.sync(conn)
// replicated.frame_no      -> Option(Int)
// replicated.frames_synced -> Int

// Queries are served from the local replica
let assert Ok([#("hello", 42)]) =
  libsql.query(
    "select 'hello', 42",
    conn,
    [],
    expecting: {
      use a <- decode.field(0, decode.string)
      use b <- decode.field(1, decode.int)
      decode.success(#(a, b))
    },
  )
```

### Synced database (offline writes)

Unlike a replica (which delegates writes to the remote), a synced database
keeps writes local and pushes them when you call `sync()`:

```gleam
let db_path = "/tmp/my_synced.db"
let url = "libsql://my-db.turso.io"
let token = "my-auth-token"

let assert Ok(conn) = libsql.open_synced_database(db_path, url, token)

// Pull remote state first (recommended)
let assert Ok(_) = libsql.sync(conn)

// Writes are local and work offline
let assert Ok(Nil) = libsql.exec("create table todos (id integer primary key, task text)", conn)
let assert Ok(Nil) = libsql.exec("insert into todos (task) values ('Buy milk')", conn)

// Read locally at full SQLite speed
let assert Ok(["Buy milk"]) =
  libsql.query(
    "select task from todos",
    conn,
    [],
    decode.field(0, decode.string, decode.success),
  )

// Push local changes to remote
let assert Ok(replicated) = libsql.sync(conn)
```

## API

The API mirrors `sqlight` closely:

- `libsql.open(path)` – open a local or in-memory database
- `libsql.open_remote(url, token)` – open a remote libSQL database
- `libsql.open_replica(path, url, token)` – open an embedded replica
- `libsql.open_synced_database(path, url, token)` – open a synced database (offline writes)
- `libsql.close(connection)` – close a connection
- `libsql.with_connection(path, fn)` – open local, run function, auto-close
- `libsql.with_remote_connection(url, token, fn)` – open remote, run function, auto-close
- `libsql.with_replica_connection(path, url, token, fn)` – open replica, run function, auto-close
- `libsql.with_synced_database(path, url, token, fn)` – open synced, run function, auto-close
- `libsql.sync(connection)` – sync replica or synced db with remote
- `libsql.replication_index(connection)` – current replication index
- `libsql.begin(on: connection)` – start a transaction
- `libsql.commit(on: connection)` – commit a transaction
- `libsql.rollback(on: connection)` – rollback a transaction
- `libsql.transaction(on: connection, run: fn)` – run a function inside a transaction
- `libsql.exec(sql, on: connection)` – execute SQL without returning rows
- `libsql.exec_batch(sql, on:, with:)` – execute a statement multiple times with different params
- `libsql.prepare(sql, on: connection)` – compile a prepared statement
- `libsql.exec_prepared(on: statement, with:)` – execute a prepared statement
- `libsql.query_prepared(on: statement, with:, expecting:)` – query via prepared statement
- `libsql.finalize(statement)` – release a prepared statement
- `libsql.with_statement(sql, on:, run:)` – prepare, run function, auto-finalize
- `libsql.query(sql, on:, with:, expecting:)` – execute SQL with positional params
- `libsql.query_named(sql, on:, with:, expecting:)` – execute SQL with named params
- `libsql.query_first(sql, on:, with:, expecting:)` – return first row or `None`
- `libsql.query_one(sql, on:, with:, expecting:)` – expect exactly one row
- `libsql.last_insert_rowid(connection)` – last auto-generated rowid
- `libsql.changes(connection)` – rows affected by last statement
- `libsql.total_changes(connection)` – total changes since db opened
- `libsql.interrupt(connection)` – cancel a long-running query
- `libsql.int/1`, `libsql.float/1`, `libsql.text/1`, `libsql.blob/1`,
  `libsql.bool/1`, `libsql.null/0`, `libsql.nullable/2` – value constructors
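
The constructors map onto SQLite's storage classes. A sketch of mixed parameter
binding (assuming `nullable` takes a constructor plus an `Option`, mirroring
sqlight's signature):

```gleam
let params = [
  libsql.text("Alice"),                         // TEXT
  libsql.nullable(libsql.int, option.Some(30)), // INTEGER or NULL
  libsql.bool(True),                            // stored as INTEGER 1
  libsql.blob(<<1, 2, 3>>),                     // BLOB
  libsql.null(),                                // NULL
]
```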

## Architecture

```
┌─────────────────┐
│   Gleam API     │  src/libsql.gleam
├─────────────────┤
│ Erlang FFI shim │  src/libsql_ffi.erl
├─────────────────┤
│   Rust NIF      │  native/libsql_nif/src/lib.rs
├─────────────────┤
│   libsql crate  │  (Rust) – wraps SQLite C + remote protocol
└─────────────────┘
```

## Roadmap

- [x] Remote `libsql://` connections
- [x] Named parameters (`:name`, `@name`, `$name`)
- [x] Transactions
- [x] Batch execution
- [x] Prepared statement caching
- [x] Embedded replica sync
- [x] Synced database (offline writes + push sync)
- [x] Precompiled NIF binaries (no local Rust needed for supported platforms)

## License

Apache-2.0