97 changes: 97 additions & 0 deletions fastapi-sqlalchemy-pg-catalog/README.md
# fastapi-sqlalchemy-pg-catalog

Minimal FastAPI + SQLAlchemy 2.x + psycopg2 + Postgres 13 sample that
reproduces the Postgres v3 dispatcher's simple-query `ClassCatalog`
asymmetry (keploy/integrations#193).

## What the bug looks like

At app boot, SQLAlchemy's `Base.metadata.create_all(engine)` issues a
`pg_catalog.pg_class` probe per declared table to decide whether to
skip `CREATE TABLE`. psycopg2 sends the probe over the
**simple-query** protocol (`Q` packet) even though the source SQL is
parameterized: it substitutes the `%(param)s` placeholders
client-side and emits the resulting inlined SQL as a single
statement, with no `Bind`/`Execute` frames. The wire shape is
therefore a simple Query carrying inlined bind values: the recorded
mock keeps the parameter list for matching, but the dispatcher's
classifier sees a simple-query CATALOG request.
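The client-side substitution step can be sketched in plain Python. This is a simplified stand-in for psycopg2's pyformat handling, not its actual quoting code, and `inline_params` is a hypothetical helper:

```python
# Illustrative sketch: pyformat placeholders are replaced client-side
# with quoted literals, so the server only ever receives one fully
# inlined simple-query statement (no Bind/Execute frames).

def inline_params(sql: str, params: dict) -> str:
    """Substitute %(name)s placeholders with SQL string literals."""
    quoted = {k: "'" + str(v).replace("'", "''") + "'" for k, v in params.items()}
    return sql % quoted

probe = (
    "SELECT pg_catalog.pg_class.relname FROM pg_catalog.pg_class "
    "WHERE pg_catalog.pg_class.relname = %(table_name)s"
)
print(inline_params(probe, {"table_name": "project"}))
# The output contains the quoted literal 'project' and no placeholders.
```

The real probe carries several parameters (table name, relkind characters, namespace), but the principle is the same: by the time the bytes hit the wire, nothing identifies them as bind values.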

In `pkg/postgres/v3/replayer/dispatcher/dispatcher.go`:

* The **extended-query** path (`runEngineForPortal`, `case
match.ClassCatalog`) consults the recorded transactional mock first
and only falls back to the synthetic `Engines.Catalog.Execute` on
miss.
* The **simple-query** path (`dispatchBySQLHash`, `case
match.ClassCatalog`) goes straight to the synthetic engine — even
though a recorded `type: query` mock with `class: CATALOG` and the
correct rows is sitting in `mocks.yaml`.
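The asymmetry between the two paths can be modeled in a few lines of Python (a toy illustration only; the real dispatcher is Go code in `pkg/postgres/v3/replayer/dispatcher/dispatcher.go`, and these function names are hypothetical):

```python
# Toy model of the two ClassCatalog dispatch paths.

def extended_query_dispatch(recorded_mocks, synthetic_rows, sql_hash):
    # Extended path: consult recorded transactional mocks first and
    # fall back to the synthetic catalog engine only on a miss.
    if sql_hash in recorded_mocks:
        return recorded_mocks[sql_hash]
    return synthetic_rows

def simple_query_dispatch(recorded_mocks, synthetic_rows, sql_hash):
    # Simple-query path (the bug): recorded mocks are never consulted,
    # even when a matching CATALOG mock exists in mocks.yaml.
    return synthetic_rows

mocks = {"probe-hash": [("project",)]}  # recorded CATALOG mock, one row
print(extended_query_dispatch(mocks, [], "probe-hash"))  # -> [('project',)]
print(simple_query_dispatch(mocks, [], "probe-hash"))    # -> []
```

Same recorded state, same query hash: the extended path returns the recorded row, while the simple path returns whatever the synthetic engine answers.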

With no `type: catalog` snapshot present, the synthetic engine
answers `rows: 0, cc: "SELECT 0"`. SQLAlchemy reads zero rows as
"table missing" and issues `CREATE TABLE project ...`, which misses
the transactional engine (the recording never captured a CREATE
TABLE because the table already existed at record time). The app
worker dies with `psycopg2.DatabaseError: keploy-pg-v3: no recorded
invocation matched`, and every HTTP testcase that follows fails with
a connection reset.
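The decision SQLAlchemy derives from the probe result reduces to a single predicate (simplified; the real check lives in the dialect's `has_table`, and `should_create_table` is a hypothetical name):

```python
def should_create_table(probe_rows: list) -> bool:
    # Zero rows from the pg_class probe means "table missing",
    # so create_all emits CREATE TABLE for that model.
    return len(probe_rows) == 0

# Record time: init.sql pre-created the table, the probe returns one row,
# and CREATE TABLE is skipped (and therefore never recorded).
assert should_create_table([("project",)]) is False

# Pre-fix replay: the synthetic engine answers zero rows, so create_all
# emits an unrecorded CREATE TABLE that the transactional engine rejects.
assert should_create_table([]) is True
```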

## Reproducing locally

```bash
cd fastapi-sqlalchemy-pg-catalog
docker compose build

# Baseline (no keploy) — should pass
docker compose up -d
bash flow.sh
docker compose down -v

# Record
# Both keploy invocations below pass `--container-name "${APP_CONTAINER:-pg-catalog-repro-app}"`
# so they track whatever name the compose file renders for the app
# service. If you've overridden APP_CONTAINER (e.g. to isolate
# concurrent runs), the same export reaches both keploy and compose.
( bash flow.sh > flow-record.log 2>&1 ) &
sudo -E keploy record \
-c "docker compose -f docker-compose.yml up" \
--container-name "${APP_CONTAINER:-pg-catalog-repro-app}" \
--cmd-type docker-compose \
--record-timer 60s

# Replay (pre-fix: FAILS with "no recorded invocation matched" on CREATE TABLE)
sudo -E keploy test \
-c "docker compose -f docker-compose.yml up" \
--container-name "${APP_CONTAINER:-pg-catalog-repro-app}" \
--cmd-type docker-compose \
--api-timeout 120 --delay 15
```

## Layout

| File | Purpose |
|-----------------------------|---------------------------------------------------------------------------|
| `app/main.py` | FastAPI app with one declarative `Project` model + lifespan create_all |
| `app/Dockerfile` | Python 3.12-slim + requirements |
| `app/requirements.txt` | fastapi, uvicorn, sqlalchemy 2.0.36, psycopg2-binary 2.9.10 |
| `docker-compose.yml` | postgres:13.22-alpine + app, app published at host port 8123 |
| `init.sql` | Pre-creates the `project` table so record-time create_all is a no-op |
| `flow.sh` | Drives `GET /health` and `GET /projects` against the app |

## Compose env knobs

Set these to isolate concurrent runs (the CI lane drives a 3-cell
matrix on one Docker daemon and overrides each of these per cell):

| Env var | Default | Purpose |
|------------------|-------------------------|------------------------------------------|
| `APP_CONTAINER` | `pg-catalog-repro-app` | App container name (keploy `--container-name`) |
| `DB_CONTAINER` | `pg-catalog-repro-db` | Postgres container name |
| `APP_HOST_PORT` | `8123` | Host-side port mapped to app's 8000 |
| `COMPOSE_NET` | `reprnet` | Docker network name |

## Used by

* `keploy/integrations` Woodpecker lane
`.woodpecker/sqlalchemy-pg-catalog-postgres.yml`
12 changes: 12 additions & 0 deletions fastapi-sqlalchemy-pg-catalog/app/Dockerfile
FROM python:3.12-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY main.py .

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--log-level", "info"]
114 changes: 114 additions & 0 deletions fastapi-sqlalchemy-pg-catalog/app/main.py
"""
Minimal FastAPI + SQLAlchemy + psycopg2 app that exercises the Postgres
v3 dispatcher's simple-query ClassCatalog branch via SQLAlchemy's
``Base.metadata.create_all`` table-existence probe.

Boot sequence:
1. SQLAlchemy creates an engine over psycopg2. psycopg2 sends queries
via the simple-Query protocol (``Q`` packet) even when the source
SQL is parameterized: it does client-side ``%(param)s`` substitution
and emits the resulting string as a single inlined statement
(no ``Bind``/``Execute`` frames).
2. ``Base.metadata.create_all(engine)`` issues one
``SELECT pg_catalog.pg_class.relname ...`` probe per declared table
to decide whether each ``CREATE TABLE`` should be skipped. The
probe SQL has 7 parameters (table name, relkind chars, namespace);
psycopg2 inlines them before the wire write, so the dispatcher sees
a simple-Query statement that classifies as ``ClassCatalog``.
3. FastAPI starts serving requests.

The probe is what hits the dispatcher's ``case match.ClassCatalog``
branch in ``pkg/postgres/v3/replayer/dispatcher/dispatcher.go``
(simple-query path, ``dispatchBySQLHash``).
"""

import asyncio
import logging
import os
import sys
from contextlib import asynccontextmanager

from fastapi import FastAPI
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

logging.basicConfig(
    level=logging.INFO,
    stream=sys.stdout,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
log = logging.getLogger("repro")

DATABASE_URL = os.getenv("DATABASE_URL")
if not DATABASE_URL:
    raise RuntimeError(
        "DATABASE_URL is required (e.g. postgresql+psycopg2://user:pass@host:5432/db). "
        "Set it in docker-compose env or in the host shell before launching uvicorn."
    )
# SQL echo is INTENTIONALLY on by default — this is a sample for
# demonstrating the dispatcher's simple-Query catalog path, and seeing
# the actual SQLAlchemy queries (pg_catalog.version, pg_class probe,
# CREATE TABLE on miss) in the app log is the load-bearing observation
# that lets a reader correlate the keploy agent log with what the app
# is doing. The trade-off: SQLAlchemy logs every statement at INFO,
# which is verbose in normal operation. Override SQL_ECHO=0 to quiet
# it down for unrelated investigations.
SQL_ECHO = os.environ.get("SQL_ECHO", "1") != "0"

Base = declarative_base()


class Project(Base):
    __tablename__ = "project"

    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)


# SQLAlchemy 2.x defaults to the future-2.0 behaviour, so no
# `future=True` is needed (and passing it can trip a deprecation
# warning depending on the installed minor version).
engine = create_engine(DATABASE_URL, echo=SQL_ECHO)


@asynccontextmanager
async def lifespan(_: FastAPI):
    # Wrap the startup work AND the yield in try/finally so
    # engine.dispose() runs even when create_all raises — which is
    # the exact failure mode this repro is built around (pre-fix
    # keploy makes create_all issue an unrecorded CREATE TABLE that
    # raises psycopg2.DatabaseError mid-startup; without the wrap,
    # the connection pool would leak on every replay attempt).
    try:
        log.info("startup: running Base.metadata.create_all (pg_class probe expected)")
        # create_all does synchronous psycopg2 I/O. Offload to a thread
        # so uvicorn's event loop stays responsive (otherwise any other
        # async work scheduled on startup would block until the pg_class
        # probe + any CREATE TABLE round-trips complete). For this
        # minimal repro the difference is small, but the pattern is the
        # right FastAPI shape for any startup that touches a sync DB
        # driver.
        await asyncio.to_thread(Base.metadata.create_all, engine)
        log.info("startup: create_all complete")
        yield
    finally:
        # Release pooled connections on shutdown so repeated
        # start/stop cycles (local repro loops, CI lanes) don't leak
        # half-open connections to postgres.
        engine.dispose()
        log.info("shutdown: engine pool disposed")


app = FastAPI(lifespan=lifespan)


@app.get("/health")
def health():
    return {"ok": True}


@app.get("/projects")
def list_projects():
    with Session(engine) as s:
        rows = s.execute(select(Project)).scalars().all()
        return [{"id": r.id, "name": r.name} for r in rows]
4 changes: 4 additions & 0 deletions fastapi-sqlalchemy-pg-catalog/app/requirements.txt
fastapi==0.115.0
uvicorn==0.30.6
sqlalchemy==2.0.36
psycopg2-binary==2.9.10
34 changes: 34 additions & 0 deletions fastapi-sqlalchemy-pg-catalog/docker-compose.yml
services:
  postgres:
    image: postgres:13.22-alpine
    container_name: ${DB_CONTAINER:-pg-catalog-repro-db}
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: testdb
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    networks:
      - reprnet
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d testdb"]
      interval: 2s
      timeout: 2s
      retries: 30

  app:
    build: ./app
    container_name: ${APP_CONTAINER:-pg-catalog-repro-app}
    environment:
      DATABASE_URL: postgresql+psycopg2://postgres:postgres@postgres:5432/testdb
    depends_on:
      postgres:
        condition: service_healthy
    ports:
      - "${APP_HOST_PORT:-8123}:8000"
    networks:
      - reprnet

networks:
  reprnet:
    name: ${COMPOSE_NET:-reprnet}
    driver: bridge
33 changes: 33 additions & 0 deletions fastapi-sqlalchemy-pg-catalog/flow.sh
#!/usr/bin/env bash
# Drives traffic during keploy record. Hits both endpoints.
set -Eeuo pipefail

APP_HOST_PORT="${APP_HOST_PORT:-8123}"
APP_URL="${APP_URL:-http://localhost:${APP_HOST_PORT}}"
READY_TIMEOUT_S="${READY_TIMEOUT_S:-60}"

echo "[flow] waiting for app at $APP_URL (ceiling ${READY_TIMEOUT_S}s) ..."
ready=0
for i in $(seq 1 "$READY_TIMEOUT_S"); do
  if curl -fsS --max-time 1 "$APP_URL/health" > /dev/null 2>&1; then
    echo "[flow] app ready after ${i}s"
    ready=1
    break
  fi
  sleep 1
done

if [ "$ready" -ne 1 ]; then
  echo "[flow] ERROR: app never became ready at $APP_URL/health within ${READY_TIMEOUT_S}s" >&2
  exit 1
fi

echo "[flow] GET /health"
curl -fsS "$APP_URL/health"
echo

echo "[flow] GET /projects"
curl -fsS "$APP_URL/projects"
echo

echo "[flow] done"
20 changes: 20 additions & 0 deletions fastapi-sqlalchemy-pg-catalog/init.sql
-- Pre-create the `project` table so SQLAlchemy's create_all() sees it
-- exists at record time and skips CREATE TABLE. This is what the bug
-- (keploy/integrations#193) requires: at record time the pg_class
-- probe answers "table exists", so CREATE TABLE is never sent and
-- never recorded. At replay time, if the simple-query dispatcher path
-- skips the recorded mock, the synthetic catalog engine returns zero
-- rows, SQLAlchemy concludes "table missing", and issues an
-- unrecorded CREATE TABLE -- which then misses the transactional
-- engine, raises a DatabaseError, and kills app boot.
CREATE TABLE IF NOT EXISTS project (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

-- Seed the table. The Postgres entrypoint runs scripts under
-- /docker-entrypoint-initdb.d only on first init (empty data dir),
-- so this is single-shot on a clean container. If you reuse a stale
-- data volume, this script doesn't run at all — re-create the
-- volume (`docker compose down -v`) for a deterministic repro.
INSERT INTO project (name) VALUES ('seed');