fix(api): require authentication on loom-download endpoint #1751
MinitJain wants to merge 3 commits
Conversation
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
```typescript
const user = await getCurrentUser();
if (!user) {
  return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
}
```
**No per-user rate limiting** (`apps/web/app/api/tools/loom-download/route.ts`, lines 164–167)

An authenticated user can still hammer this endpoint to exhaust server bandwidth (video streaming) and CPU (ffmpeg conversion). The auth gate prevents anonymous abuse but doesn't cap usage per account. Consider adding a rate limit middleware (e.g. a simple in-memory counter or an existing rate-limiter if the codebase has one) on this route to bound the blast radius of a compromised or misbehaving account.
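The reviewer's suggestion of "a simple in-memory counter" could be sketched roughly as below. The names (`rateLimitMap`, `checkRateLimit`) and the 10-requests-per-minute numbers are illustrative, not the PR's actual code; a fixed-window counter keyed by user id is one of several possible shapes.

```typescript
// Sketch of a fixed-window per-user rate limit (illustrative names).
type Window = { count: number; resetAt: number };

const rateLimitMap = new Map<string, Window>();
const WINDOW_MS = 60_000; // 1-minute window
const MAX_REQUESTS = 10;  // per user per window

function checkRateLimit(userId: string, now: number = Date.now()): boolean {
  const entry = rateLimitMap.get(userId);
  if (!entry || now >= entry.resetAt) {
    // No window yet, or the old window expired: start a fresh one.
    rateLimitMap.set(userId, { count: 1, resetAt: now + WINDOW_MS });
    return true;
  }
  if (entry.count >= MAX_REQUESTS) {
    return false; // over the limit; caller should respond with 429
  }
  entry.count += 1;
  return true;
}
```

In the route handler this would sit right after the auth check: if `checkRateLimit(user.id)` returns `false`, return a 429 response instead of proceeding to the download.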
please review the PR @greptileai
```typescript
const rateLimitMap = new Map<string, { count: number; resetAt: number }>();
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 10;
```
**In-memory rate limiter is ineffective in multi-instance or serverless deployments** (`apps/web/app/api/tools/loom-download/route.ts`, lines 10–12)

`rateLimitMap` is a module-level `Map` that lives only in a single Node.js process. In any deployment where multiple server instances or serverless workers handle requests (e.g. Vercel, PM2 clusters, Kubernetes), each worker has its own independent copy of the map. A user can exceed the 10-request limit by simply having their requests routed across different workers — each worker sees `count: 0` for that user and happily serves the request. Since the PR's stated goal is to prevent resource-exhaustion attacks by authenticated users, an approach that silently fails under standard Next.js deployment conditions doesn't achieve that goal. A shared external store (Redis, an existing rate-limiter middleware, or an upstash/KV integration) is needed for the limit to be reliable.
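The shared-store approach the reviewer asks for could look roughly like the sketch below: a fixed-window counter backed by Redis `INCR`/`EXPIRE`, which is atomic across all workers. The `RedisLike` interface, key prefix, and `allowRequest` name are assumptions for illustration; in production you would pass a real client (ioredis, `@upstash/redis`) or use an off-the-shelf limiter such as `@upstash/ratelimit`.

```typescript
// Minimal interface standing in for a real Redis client (assumption).
interface RedisLike {
  incr(key: string): Promise<number>;
  expire(key: string, seconds: number): Promise<void>;
}

const LIMIT = 10;    // requests per window (mirrors the PR's numbers)
const WINDOW_S = 60; // window length in seconds

async function allowRequest(redis: RedisLike, userId: string): Promise<boolean> {
  // One key per user per window; INCR is atomic, so every worker
  // increments the same shared counter.
  const windowId = Math.floor(Date.now() / 1000 / WINDOW_S);
  const key = `rl:loom-download:${userId}:${windowId}`;
  const count = await redis.incr(key);
  if (count === 1) {
    // First hit in this window: let the key expire with the window
    // so old counters clean themselves up.
    await redis.expire(key, WINDOW_S);
  }
  return count <= LIMIT;
}
```

Because the counter lives in Redis rather than in process memory, the limit holds even when requests land on different workers, and expiry doubles as eviction, addressing both review findings at once.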
Summary
- `GET /api/tools/loom-download` had no authentication check
- Added a `getCurrentUser()` check — unauthenticated requests get 401

Security Impact
Without auth, this endpoint was an open proxy. An attacker could repeatedly call it to burn server bandwidth (video streaming) and CPU (ffmpeg conversion via media server) at zero cost to themselves. This is a resource exhaustion / cost amplification attack.
Test plan
- Unauthenticated `GET /api/tools/loom-download?id=...` returns 401

Greptile Summary
This PR closes an unauthenticated open-proxy vulnerability on the `GET /api/tools/loom-download` endpoint by adding a `getCurrentUser()` guard and a per-user in-memory rate limiter.

- A module-level `Map` is used for per-user request counting. This works only within a single Node.js process; in multi-instance or serverless deployments each worker has an independent counter, so the limit is easily bypassed by natural request distribution.
- Entries in `rateLimitMap` are never proactively evicted — users who call the endpoint once and never return accumulate indefinitely.

Confidence Score: 4/5
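The unbounded-growth finding (stale entries never evicted) could be addressed with a periodic sweep. The sketch below mirrors the PR's `rateLimitMap` shape; the `evictExpired` helper and the interval wiring are assumptions, not code from the PR.

```typescript
// Same shape as the PR's module-level map of per-user windows.
const rateLimitMap = new Map<string, { count: number; resetAt: number }>();

// Delete every entry whose window has already ended; returns how many
// entries were removed (handy for tests and metrics).
function evictExpired(now: number = Date.now()): number {
  let evicted = 0;
  for (const [key, entry] of rateLimitMap) {
    if (now >= entry.resetAt) {
      rateLimitMap.delete(key);
      evicted++;
    }
  }
  return evicted;
}

// One possible wiring: sweep once a minute without keeping the
// process alive just for the timer.
// setInterval(() => evictExpired(), 60_000).unref();
```

This only bounds memory within a single process; it does not fix the multi-instance bypass, which still needs a shared store.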
The auth gate is solid and addresses the core vulnerability; the rate limiter added alongside it is unreliable in distributed deployments and should be revisited before it's relied upon for abuse prevention.
The primary security fix — requiring an authenticated session — is correct and effective. The accompanying rate limiter uses a module-level in-memory Map scoped to a single process instance. In any multi-worker or serverless deployment this counter resets per-instance, making the limit trivially bypassable by request distribution. Since the PR explicitly introduces the rate limiter as a second line of defense against resource exhaustion, having it silently fail under normal deployment conditions is a meaningful gap in the change.
apps/web/app/api/tools/loom-download/route.ts — specifically the in-memory rate-limiter logic added at the top of the file.
Important Files Changed
`apps/web/app/api/tools/loom-download/route.ts` — adds a `getCurrentUser()` auth gate (401 for anonymous) and an in-memory per-user rate limiter (10 req/min); the auth fix is correct, but the rate limiter is unreliable in multi-instance/serverless deployments and leaks memory over time.
Reviews (2): Last reviewed commit: "Merge remote-tracking branch 'upstream/m..."