PEP 249 compliant interface for dqlite.
Install:

```shell
pip install dqlite-dbapi
```

Synchronous usage:

```python
import dqlitedbapi

conn = dqlitedbapi.connect("localhost:9001")
cursor = conn.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchone())
conn.close()
```

Asynchronous usage:

```python
import asyncio

from dqlitedbapi.aio import aconnect

async def main():
    conn = await aconnect("localhost:9001")
    cursor = conn.cursor()
    await cursor.execute("SELECT 1")
    print(await cursor.fetchone())
    await conn.close()

asyncio.run(main())
```

Module globals:

```python
apilevel = "2.0"
threadsafety = 1
paramstyle = "qmark"
```
dqlite-dbapi does not issue implicit BEGIN before DML — each
statement runs in the underlying SQLite engine's autocommit mode unless
the caller has explicitly opened a transaction. This deviates from
PEP 249 §6's prescribed implicit-transaction model and from stdlib
sqlite3 / psycopg, both of which auto-BEGIN on the first DML; users
porting from those drivers will see different behaviour and must add
explicit BEGIN calls (see below) to recover atomic multi-statement
semantics. The dqlite C and Go reference clients have the same opt-in
contract; this driver matches them rather than stdlib.
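The practical effect of the opt-in contract can be sketched with stdlib sqlite3 forced into the equivalent per-statement autocommit mode (`isolation_level=None`), standing in for this driver's default; the table and schema are illustrative:

```python
import sqlite3

# Stdlib sqlite3 with isolation_level=None runs every statement in
# autocommit mode -- the same per-statement contract dqlite-dbapi has
# by default. Without an explicit BEGIN there is nothing to roll back.
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER PRIMARY KEY)")
cur.execute("INSERT INTO t VALUES (1)")      # committed immediately
try:
    cur.execute("INSERT INTO t VALUES (1)")  # duplicate key: fails
except sqlite3.IntegrityError:
    conn.rollback()  # no transaction is open, so this is a no-op
print(cur.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1 -- first INSERT survived
```

Porting code that expects the first INSERT to be undone must wrap both statements in an explicit `BEGIN`.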
If you use SQLAlchemy via sqlalchemy-dqlite, the dialect emits
BEGIN for every engine.begin() / connection.begin() block — no
explicit BEGIN needed; the autocommit-by-default gap below applies
only to direct dbapi users.
Every write still goes through Raft consensus and every read is serializable; isolation is always SERIALIZABLE, but transaction grouping is opt-in.
To group statements into a transaction, issue an explicit `BEGIN`
through a cursor (or use `dqliteclient`'s `transaction()` async
context manager from the layer below):

```python
cur = conn.cursor()
cur.execute("BEGIN")
cur.execute("INSERT INTO t VALUES (?)", (1,))
cur.execute("INSERT INTO t VALUES (?)", (2,))
conn.commit()  # COMMIT
```

The bare `BEGIN` SQL is the SQLite default (`BEGIN DEFERRED`). dqlite's
Raft FSM serializes the transaction across the cluster regardless of
the DEFERRED / IMMEDIATE / EXCLUSIVE qualifier, so the qualifier
does not change isolation semantics — only the lock-acquisition timing
on the leader. There is no isolation_level attribute (cannot be
weakened on dqlite); the SQLAlchemy dialect rejects AUTOCOMMIT on
the same grounds.
Connection-level `commit()` / `rollback()` semantics:

- Calling `commit()`/`rollback()` before any query has run is a silent no-op (preserves the "no spurious connect" contract — we don't open a TCP connection just to send COMMIT).
- Calling `commit()`/`rollback()` with no active transaction (e.g. right after a DDL statement) is also silently successful, matching stdlib `sqlite3` semantics.
- The context-manager exit (`with conn: ...`) commits on clean exit and attempts rollback on exception. Commit failures propagate — silent swallowing was a footgun that could hide data loss.
- Operational caveat: a leader-flip during COMMIT raises `OperationalError` with a code in `dqlitewire.LEADER_ERROR_CODES`. The write may or may not have been persisted (Raft may already have replicated the commit log entry before the flip). Use idempotent DML (`INSERT OR REPLACE`, `UPDATE` on a unique key) or an out-of-band state-check before retrying.
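The leader-flip caveat suggests wrapping idempotent writes in a retry loop. Below is a runnable sketch of that shape only: `OperationalError` and `LEADER_ERROR_CODES` are stubs here (the real ones live in dqlitedbapi / dqlitewire), and the exception's `code` attribute is an assumption about the driver's error surface.

```python
# Stubs so the retry shape runs standalone; NOT the real driver types.
LEADER_ERROR_CODES = {1, 2}  # placeholder values, not real wire codes

class OperationalError(Exception):
    def __init__(self, code):
        super().__init__(f"leader error {code}")
        self.code = code

def commit_with_retry(execute_and_commit, attempts=3):
    """execute_and_commit() must issue IDEMPOTENT DML, per the caveat above."""
    for attempt in range(attempts):
        try:
            return execute_and_commit()
        except OperationalError as exc:
            if exc.code not in LEADER_ERROR_CODES or attempt == attempts - 1:
                raise  # not a leader flip, or out of retries

# Simulate: the first COMMIT hits a leader flip, the retry succeeds.
calls = []
def flaky_commit():
    calls.append(1)
    if len(calls) == 1:
        raise OperationalError(code=1)
    return "committed"

print(commit_with_retry(flaky_commit))  # committed
```

Because the first COMMIT may in fact have been persisted before the flip, the retried statement must be safe to re-issue — hence the idempotent-DML requirement above.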
This driver is PEP 249-shaped (sync) and exposes an async surface
under dqlitedbapi.aio that mirrors the sync dqlitedbapi surface
plus async-specific aconnect / AsyncConnection / AsyncCursor.
The full PEP 249 module attributes (apilevel, threadsafety,
paramstyle, sqlite_version, sqlite_version_info), exception
hierarchy (Warning, Error, InterfaceError, DatabaseError,
DataError, OperationalError, IntegrityError, InternalError,
ProgrammingError, NotSupportedError), type constructors
(Date, Time, Timestamp, DateFromTicks, TimeFromTicks,
TimestampFromTicks, Binary), type sentinels (STRING, BINARY,
NUMBER, DATETIME, ROWID), and stdlib-sqlite3-parity stubs
(register_adapter, register_converter, complete_statement,
enable_callback_tracebacks) are all re-exported under
dqlitedbapi.aio so cross-driver code porting from aiosqlite
imports them from one namespace. One notable deviation:
`AsyncConnection.__aexit__` does NOT close the connection. It commits / rolls back per the body outcome and leaves the connection reusable, matching stdlib `sqlite3.Connection.__exit__` and PEP 249 §7. `aiosqlite.Connection.__aexit__` DOES close (it closes its proxy thread and `sqlite3` connection). Cross-driver code porting from `aiosqlite` must add an explicit `await conn.close()` (or switch to a pool) for eager-close semantics.
```python
# aiosqlite-style (closes on aexit):
async with aiosqlite.connect(":memory:") as conn:
    ...

# dqlitedbapi.aio-style (does NOT close on aexit, matches stdlib sqlite3):
async with await dqlitedbapi.aio.connect(...) as conn:
    ...
await conn.close()  # explicit
```

Each `dqlitedbapi.Connection` (sync or async) owns exactly one
underlying dqliteclient.DqliteConnection. The dbapi layer does NOT
itself pool client connections — pooling lives one layer up:
- SQLAlchemy users: SA's `QueuePool` (and async siblings) over dbapi `Connection` objects is the production pool.
- Direct dbapi users: a custom pool over dbapi `Connection` objects is the supported pattern. `dqliteclient.ConnectionPool` is for direct-client usage and is unused by dqlitedbapi.
A dbapi Connection.close() always closes the underlying client
transport — not "return to a pool" — because the dbapi never
borrowed from one.
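The "custom pool over dbapi Connection objects" pattern can be sketched in a few lines. This is a minimal illustration, not driver API; `FakeConn` stands in for a `dqlitedbapi.connect(...)` factory so the sketch runs standalone.

```python
import queue
from contextlib import contextmanager

class SimplePool:
    """Fixed-size pool over dbapi Connection objects (sketch)."""

    def __init__(self, factory, size=4):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(factory())  # eagerly open `size` connections

    @contextmanager
    def connection(self):
        conn = self._q.get()   # blocks until a connection is free
        try:
            yield conn
        finally:
            self._q.put(conn)  # return to the pool; never conn.close() here

class FakeConn:  # real code: factory=lambda: dqlitedbapi.connect("host:9001")
    pass

pool = SimplePool(FakeConn, size=2)
with pool.connection() as conn:
    pass  # run queries here; the connection is checked back in on exit
```

Note that the pool, not `Connection.close()`, handles reuse — consistent with the rule above that `close()` always tears down the transport.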
Deviations from stdlib `sqlite3`:

- Multi-statement SQL is rejected. `cursor.execute("SELECT 1; SELECT 2;")` raises `ProgrammingError("You can only execute one statement at a time.")` (stdlib's wording, but client-side, before any wire round-trip). Split into separate `execute()` calls; there is no `executescript` (it raises `NotSupportedError`).
- `paramstyle = "qmark"` only. stdlib accepts both `?` (qmark) AND `:name` (named) within a single connection; this driver advertises only `qmark` and rejects mappings with `ProgrammingError`.
- Autocommit-by-default at the server. Opposite of stdlib `sqlite3`'s implicit-transaction model — see Transactions above.
- `Cursor.rowcount = len(rows)` after SELECT. stdlib returns `-1` (rowcount is undefined for queries). dqlite knows the full result set at execute time, so the literal-PEP-249 reading is honoured. `rowcount = -1` is preserved for true non-result paths (PRAGMA write, never-executed cursor).
- `Cursor.executemany` clears `lastrowid` to `None`. stdlib leaves the prior INSERT's rowid sticky across executemany. The driver diverges deliberately (which row's rowid is "the" rowid for a batch of N inserts is ambiguous).
- `Cursor.close()` scrubs `description` / `lastrowid` / `_rows`. stdlib preserves `description` and `lastrowid` post-close; dqlite scrubs to `None` to enforce the closed-cursor "no operation performed" surface.
- `Cursor.fetchmany(0)` returns `[]`. stdlib (verified 3.12.3) and psycopg3 both treat `size=0` as "fetch arraysize" or "fetch all remaining"; dqlite reads PEP 249 literally (size=0 → 0 rows). Pass `None` or omit the argument to default to `arraysize`.
- `cursor.execute("")` raises `ProgrammingError("empty statement")`. stdlib silently accepts empty / whitespace-only / comment-only SQL as a no-op; the driver pre-flight rejects per PEP 249 §7 to surface caller bugs at the call site.
- `threadsafety = 1` (stdlib reports `3`). Each Connection is thread-affine: methods called from a foreign thread raise `ProgrammingError`. Use one Connection per thread or use the async surface for a single-thread-per-loop model.
- No `executescript` / `create_function` / `create_aggregate` / `create_window_function` / `iterdump` / `backup` / `set_authorizer` / `serialize` / `blobopen`. These are stdlib-specific APIs that have no server-side counterpart in dqlite. Stubs raise `NotSupportedError`.
- SERIALIZABLE isolation only. Every statement is ordered by Raft; weaker isolation levels aren't exposed.
- PEP 249 type sentinels (`STRING`, `BINARY`, `NUMBER`, `DATETIME`, `ROWID`) are unhashable. Use chained equality against `description[i][1]`, NOT set/dict membership:

  ```python
  type_code = cur.description[i][1]
  if type_code == STRING or type_code == NUMBER:  # OK
      ...
  if type_code in {STRING, NUMBER}:  # raises TypeError
      ...
  ```

  The sentinels wrap multiple wire type codes (`NUMBER` covers INTEGER+FLOAT+BOOLEAN, `DATETIME` covers DATE+TIMESTAMP+ISO8601), so no canonical hash can satisfy the Python hash-eq invariant. Stdlib `sqlite3` doesn't export these sentinels at all, so the chained-equality form is the cross-driver-portable idiom.
- `WITH ... INSERT/UPDATE/DELETE` (CTE-prefixed pure DML) reports zero `rowcount` and no `lastrowid`. The driver dispatches between the row-returning and execute paths via a prefix-based heuristic; CTE-prefixed pure DML is misclassified as row-returning and the server's actual count / id is dropped. Stdlib `sqlite3` handles this correctly because it dispatches at the SQLite engine level. Workaround: rewrite as plain DML, or use `INSERT ... RETURNING id` to get the id back through the row-returning path. (PEP 249 doesn't mandate `lastrowid` correctness for INSERT-via-CTE.)
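The `threadsafety = 1` point above implies one Connection per thread; a common shape is a thread-local factory. This is a sketch with the factory stubbed (plain `object`) so it runs standalone — real code would pass `lambda: dqlitedbapi.connect(...)`.

```python
import threading

class PerThreadConnections:
    """Hand each thread its own Connection, as threadsafety = 1 requires."""

    def __init__(self, factory):
        self._factory = factory
        self._local = threading.local()

    def get(self):
        if not hasattr(self._local, "conn"):
            self._local.conn = self._factory()  # first use on this thread
        return self._local.conn

conns = PerThreadConnections(object)  # stub factory
a = conns.get()
assert conns.get() is a  # same thread -> same connection

other = []
t = threading.Thread(target=lambda: other.append(conns.get()))
t.start()
t.join()
assert other[0] is not a  # foreign thread -> its own connection
```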
Upstream dqlite commit f30fc99 (query: preserve SQLITE_NULL type for NULL values, 2026-01-25) changed the wire encoding of NULL cells
in columns declared BOOLEAN, DATE, DATETIME, or TIMESTAMP:
- Before f30fc99: a NULL cell was emitted with the column's coerced type — `BOOLEAN(0)` (decodes to `False`) or `ISO8601("")` (decodes to `""`), indistinguishable from a real `FALSE` / empty string.
- After f30fc99: a NULL cell is emitted with `SQLITE_NULL` and decodes to `None`.
Code that does `if cur.fetchone()[0] is None:` against an old-server
cluster will silently miss NULL rows. After a cluster upgrade past
f30fc99, the same code starts firing where it previously read
`False` / `""`. There is no driver-level handshake distinguishing the
two server versions — check your dqlite cluster version before relying
on `is None` for BOOLEAN / DATETIME columns.
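Until a cluster is known to be past f30fc99, one defensive option is to treat the old encoding's NULL stand-ins as ambiguous rather than as data. A sketch — the `AMBIGUOUS` sentinel and the declared-type strings are this example's own, not driver API:

```python
# Flag values that the pre-f30fc99 encoding produced for NULL cells in
# BOOLEAN / DATE / DATETIME / TIMESTAMP columns instead of trusting them.
AMBIGUOUS = object()

def maybe_null(value, declared_type):
    """Return None, AMBIGUOUS, or the value itself."""
    if value is None:  # post-f30fc99 servers emit SQLITE_NULL
        return None
    if declared_type == "BOOLEAN" and value is False:
        return AMBIGUOUS  # old servers encoded NULL as BOOLEAN(0)
    if declared_type in ("DATE", "DATETIME", "TIMESTAMP") and value == "":
        return AMBIGUOUS  # old servers encoded NULL as ISO8601("")
    return value
```

The cost is that genuine `False` / `""` values also come back `AMBIGUOUS`, which is exactly the indistinguishability the upstream fix addressed.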
Three related packages play different roles:
- `dqliteclient.DqliteConnection` — the low-level async wire client. Directly speaks the dqlite wire protocol.
- `dqlitedbapi.Connection` — a sync PEP 249 wrapper built on top; it runs a dedicated event-loop thread so sync code can use the async client transparently.
- `dqlitedbapi.aio.AsyncConnection` — the PEP 249–shaped async counterpart for code already running inside an event loop.
Use dqlitedbapi (sync or async) unless you specifically need
wire-level control.
See DEVELOPMENT.md for setup and contribution guidelines.
MIT