
fix(api): require authentication on loom-download endpoint #1751

Open
MinitJain wants to merge 3 commits into CapSoftware:main from MinitJain:fix/loom-download-require-auth

Conversation

Contributor

@MinitJain MinitJain commented Apr 22, 2026

Summary

  • GET /api/tools/loom-download had no authentication check
  • Any unauthenticated user on the internet could call it to stream/convert Loom videos through Cap's server
  • Added getCurrentUser() check — unauthenticated requests get 401

Security Impact

Without auth, this endpoint was an open proxy. An attacker could repeatedly call it to burn server bandwidth (video streaming) and CPU (ffmpeg conversion via media server) at zero cost to themselves. This is a resource exhaustion / cost amplification attack.

Test plan

  • Unauthenticated request to /api/tools/loom-download?id=... returns 401
  • Logged-in user can still use the Loom import feature normally

Greptile Summary

This PR closes an unauthenticated open-proxy vulnerability on the GET /api/tools/loom-download endpoint by adding a getCurrentUser() guard and a per-user in-memory rate limiter.

  • Auth fix: Unauthenticated requests now receive a 401 before any Loom API or streaming work begins — this is correct and effective.
  • Rate limiting: A module-level Map is used for per-user request counting. This works only within a single Node.js process; in multi-instance or serverless deployments each worker has an independent counter, so the limit is easily bypassed by natural request distribution.
  • Memory growth: Entries in rateLimitMap are never proactively evicted — users who call the endpoint once and never return accumulate indefinitely.

Confidence Score: 4/5

The auth gate is solid and addresses the core vulnerability; the rate limiter added alongside it is unreliable in distributed deployments and should be revisited before it's relied upon for abuse prevention.

The primary security fix — requiring an authenticated session — is correct and effective. The accompanying rate limiter uses a module-level in-memory Map scoped to a single process instance. In any multi-worker or serverless deployment this counter resets per-instance, making the limit trivially bypassable by request distribution. Since the PR explicitly introduces the rate limiter as a second line of defense against resource exhaustion, having it silently fail under normal deployment conditions is a meaningful gap in the change.

Flagged location: apps/web/app/api/tools/loom-download/route.ts — specifically the in-memory rate-limiter logic added at the top of the file.

Important Files Changed

Filename: apps/web/app/api/tools/loom-download/route.ts
Overview: Adds a getCurrentUser() auth gate (401 for anonymous requests) and an in-memory per-user rate limiter (10 req/min). The auth fix is correct, but the rate limiter is unreliable in multi-instance/serverless deployments and leaks memory over time.
Prompt To Fix All With AI
Fix the following 2 code review issues. Work through them one at a time, proposing concise fixes.

---

### Issue 1 of 2
apps/web/app/api/tools/loom-download/route.ts:10-12
**In-memory rate limiter is ineffective in multi-instance or serverless deployments**

`rateLimitMap` is a module-level `Map` that lives only in a single Node.js process. In any deployment where multiple server instances or serverless workers handle requests (e.g. Vercel, PM2 clusters, Kubernetes), each worker has its own independent copy of the map. A user can exceed the 10-request limit by simply having their requests routed across different workers — each worker sees `count: 0` for that user and happily serves the request. Since the PR's stated goal is to prevent resource-exhaustion attacks by authenticated users, an approach that silently fails under standard Next.js deployment conditions doesn't achieve that goal. A shared external store (Redis, an existing rate-limiter middleware, or an upstash/KV integration) is needed for the limit to be reliable.
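One way to make the limit hold across workers is to route the counter through a shared store, as the issue suggests. The sketch below is illustrative and not code from this PR: `RateLimitStore` and `MemoryStore` are hypothetical names, and `MemoryStore` stands in for a Redis client (which would implement the same contract with `INCR` + `PEXPIRE`) purely so the fixed-window logic can be exercised locally.

```typescript
// Contract for a shared counter store. In production this would be backed
// by Redis/Upstash so every worker increments the same counter.
interface RateLimitStore {
	// Atomically increment the counter for `key`, creating it with the given
	// TTL (ms) if absent or expired; returns the post-increment count.
	incr(key: string, ttlMs: number): Promise<number>;
}

// In-process test double standing in for a Redis-backed implementation.
class MemoryStore implements RateLimitStore {
	private entries = new Map<string, { count: number; expiresAt: number }>();
	async incr(key: string, ttlMs: number): Promise<number> {
		const now = Date.now();
		const e = this.entries.get(key);
		if (!e || now >= e.expiresAt) {
			this.entries.set(key, { count: 1, expiresAt: now + ttlMs });
			return 1;
		}
		e.count += 1;
		return e.count;
	}
}

const WINDOW_MS = 60_000;
const MAX_REQUESTS = 10;

// Returns true when the request is allowed. Because the count lives in the
// shared store, the limit survives request distribution across workers.
async function allowRequest(
	store: RateLimitStore,
	userId: string,
	now: number = Date.now(),
): Promise<boolean> {
	const windowKey = `loom-download:${userId}:${Math.floor(now / WINDOW_MS)}`;
	const count = await store.incr(windowKey, WINDOW_MS);
	return count <= MAX_REQUESTS;
}
```

Swapping `MemoryStore` for a Redis client changes nothing in the handler; only the `incr` implementation differs.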

### Issue 2 of 2
apps/web/app/api/tools/loom-download/route.ts:10-12
**Unbounded growth of `rateLimitMap`**

The `rateLimitMap` grows without bound: entries for users who make one request and never return are never evicted. Over time this leaks memory proportional to the number of distinct authenticated users who have ever called the endpoint. Consider clearing stale entries on insertion, or using a structure with built-in TTL eviction.

```suggestion
const rateLimitMap = new Map<string, { count: number; resetAt: number }>();
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 10;

// Periodically evict expired entries to prevent unbounded map growth.
setInterval(() => {
	const now = Date.now();
	for (const [id, entry] of rateLimitMap) {
		if (now > entry.resetAt) rateLimitMap.delete(id);
	}
}, WINDOW_MS);
```
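Note that a background `setInterval` may never fire on serverless platforms, where the process is frozen between invocations. The other option the issue mentions — clearing stale entries on insertion — avoids the timer entirely. This variant is a sketch, not code from the PR:

```typescript
const rateLimitMap = new Map<string, { count: number; resetAt: number }>();
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 10;

// Record one request for `userId` and return the count in the current
// window. Whenever a fresh window starts for some user, sweep everyone
// else's expired entries, so the map never grows without bound even
// though no background timer runs.
function touch(userId: string, now: number = Date.now()): number {
	const entry = rateLimitMap.get(userId);
	if (entry && now < entry.resetAt) {
		entry.count += 1;
		return entry.count;
	}
	// New window for this user: piggyback eviction of all stale entries.
	for (const [id, e] of rateLimitMap) {
		if (now >= e.resetAt) rateLimitMap.delete(id);
	}
	rateLimitMap.set(userId, { count: 1, resetAt: now + WINDOW_MS });
	return 1;
}
```

The sweep is O(map size), but it only runs once per user per window, so the amortized cost per request stays small.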

Reviews (2). Last reviewed commit: "Merge remote-tracking branch 'upstream/m..."

Greptile also left 1 inline comment on this PR.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Comment on lines +164 to +167

```typescript
const user = await getCurrentUser();
if (!user) {
	return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
}
```

**P2: No per-user rate limiting**

An authenticated user can still hammer this endpoint to exhaust server bandwidth (video streaming) and CPU (ffmpeg conversion). The auth gate prevents anonymous abuse but doesn't cap usage per account. Consider adding a rate limit middleware (e.g. a simple in-memory counter or an existing rate-limiter if the codebase has one) on this route to bound the blast radius of a compromised or misbehaving account.
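For illustration, the "simple in-memory counter" this comment suggests could look roughly like the following. `checkRateLimit` is a hypothetical helper (the PR's actual implementation may differ), with the timestamp injected so the window logic is easy to test; the single-process caveat raised elsewhere in this review still applies.

```typescript
const rateLimitMap = new Map<string, { count: number; resetAt: number }>();
const WINDOW_MS = 60_000; // fixed one-minute window
const MAX_REQUESTS = 10;  // per user, per window

// Returns true if the request may proceed; false means the route should
// respond with 429 Too Many Requests.
function checkRateLimit(userId: string, now: number = Date.now()): boolean {
	const entry = rateLimitMap.get(userId);
	if (!entry || now >= entry.resetAt) {
		// First request in a fresh window for this user.
		rateLimitMap.set(userId, { count: 1, resetAt: now + WINDOW_MS });
		return true;
	}
	if (entry.count >= MAX_REQUESTS) return false;
	entry.count += 1;
	return true;
}
```

In the route handler this would sit immediately after the auth gate, e.g. `if (!checkRateLimit(user.id)) return NextResponse.json({ error: "Too many requests" }, { status: 429 });`.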


@brin-security-scanner brin-security-scanner Bot added contributor:verified Contributor passed trust analysis. pr:verified PR passed security analysis. labels May 12, 2026
@richiemcilroy
Member

please review the pr @greptileai

Comment on lines +10 to +12

```typescript
const rateLimitMap = new Map<string, { count: number; resetAt: number }>();
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 10;
```

**P1: In-memory rate limiter is ineffective in multi-instance or serverless deployments**

`rateLimitMap` is a module-level `Map` that lives only in a single Node.js process. In any deployment where multiple server instances or serverless workers handle requests (e.g. Vercel, PM2 clusters, Kubernetes), each worker has its own independent copy of the map. A user can exceed the 10-request limit by simply having their requests routed across different workers — each worker sees `count: 0` for that user and happily serves the request. Since the PR's stated goal is to prevent resource-exhaustion attacks by authenticated users, an approach that silently fails under standard Next.js deployment conditions doesn't achieve that goal. A shared external store (Redis, an existing rate-limiter middleware, or an upstash/KV integration) is needed for the limit to be reliable.

