```
# 1. Navigate to project directory
cd synapse-agent

# 2. Install dependencies
pip install -r requirements.txt
playwright install chromium

# 3. Create .env file with your API keys
cp .env.example .env   # (on Windows: copy .env.example .env) then edit with your actual keys

# 4. Start the server
python start_server.py
```

Once running, access these URLs:
- Web Interface: http://localhost:8000/web
- API Documentation: http://localhost:8000/docs
- Health Check: http://localhost:8000/health
Create a .env file in the synapse-agent directory:

```
# LinkedIn Authentication (Required)
LINKEDIN_SESSION_COOKIE=AQEDARXNaAAB4...your_long_session_cookie

# Google Custom Search (Required)
GOOGLE_API_KEY=AIzaSyD4...your_google_api_key
CUSTOM_SEARCH_ENGINE_ID=a1b2c3d4e5...your_search_engine_id

# AI Analysis (Required)
GEMINI_API_KEY=AIzaSyG9...your_gemini_api_key
```

What it looks like: `AQEDARXNaAAB4gAAAYUCtTgQAAABhQTC6FA4AAAB4wEAAA...` (very long string)
How to get it:
- Login to LinkedIn in Chrome/Edge browser
- Open Developer Tools (Press F12)
- Go to Application tab → Cookies → https://www.linkedin.com
- Find the cookie named `li_at`
- Copy the entire Value (it's a very long string starting with `AQE...`)
- Paste it as `LINKEDIN_SESSION_COOKIE` in your `.env` file
- This cookie expires after ~1 year, so you'll need to refresh it
- Keep this cookie private - it gives access to your LinkedIn account
- If you log out of LinkedIn, you'll need to get a new cookie
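Before starting the server, you can sanity-check the cookie's shape with a small script (a hypothetical helper, not part of the project; it only checks the format described above, not whether LinkedIn actually accepts the cookie):

```python
import re

def looks_like_li_at(cookie: str) -> bool:
    """Rough shape check for a LinkedIn li_at cookie: starts with
    'AQE' followed by a long run of URL-safe base64-ish characters.
    This does NOT prove the cookie is valid or unexpired."""
    return bool(re.fullmatch(r"AQE[A-Za-z0-9_\-]{50,}", cookie))

print(looks_like_li_at("AQE" + "x" * 80))   # True  (plausible shape)
print(looks_like_li_at("not-a-cookie"))     # False (wrong shape)
```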
What it looks like: AIzaSyDpGBslLkXrhqQOuqhLhA12345678Vdj2k (39 characters)
How to get it:
- Go to Google Cloud Console
- Create a new project or select existing one
- Go to APIs & Services → Credentials
- Click "+ CREATE CREDENTIALS" → API Key
- Copy the generated key (starts with `AIzaSy...`)
- Go to APIs & Services → Library
- Enable the "Custom Search API"
Cost: 100 free searches per day, then $5 per 1000 queries
What it looks like: a1b2c12345678:g8h9i0j1k2l3m4n (search engine identifier)
How to get it:
- Go to Google Custom Search
- Click "Get Started" and "Add"
- Sites to search: Enter `linkedin.com/in/*`
- Name your search engine: "LinkedIn Profile Search"
- Click "Create"
- In the "Setup" tab, turn ON "Search the entire web"
- Go to "Overview" tab
- Copy the Search Engine ID (shown in the details section)
Configuration:
- Search the entire web: ✅ Enabled
- Image search: ❌ Disabled
- Safe search: ❌ Disabled
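The API key and the Search Engine ID are used together against the Custom Search JSON API. A minimal sketch of how a profile search request URL might be built (the `site:` query shape is an assumption for illustration, not the agent's actual code):

```python
from urllib.parse import urlencode

def build_search_url(api_key: str, cse_id: str, query: str) -> str:
    """Build a Custom Search JSON API request URL; the site: filter
    narrows results to LinkedIn profile pages."""
    params = {
        "key": api_key,                          # Google API key
        "cx": cse_id,                            # Custom Search Engine ID
        "q": f"site:linkedin.com/in {query}",
        "num": 10,                               # max results per request
    }
    return "https://www.googleapis.com/customsearch/v1?" + urlencode(params)

url = build_search_url("AIzaSy...", "a1b2c3...", "senior python developer")
```

Each call like this counts against the 100-searches-per-day free quota mentioned above.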
What it looks like: AIzaSyG9p12345678nK2sT6uE1wP3hF5dC8b (39 characters)
How to get it:
- Go to Google AI Studio
- Sign in with your Google account
- Click "Get API Key" in the top navigation
- Click "Create API Key"
- Select your Google Cloud project (same as step 2 above)
- Copy the generated key (starts with `AIzaSy...`)
Free Tier: 15 requests per minute, 1500 requests per day
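To stay inside the 15-requests-per-minute free tier, a simple client-side throttle helps (a generic sketch, independent of any Gemini client library):

```python
import time
from collections import deque

class RateLimiter:
    """Block until a call is allowed, keeping at most `max_calls`
    calls inside any sliding `period`-second window."""
    def __init__(self, max_calls: int = 15, period: float = 60.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = deque()  # timestamps of recent calls

    def wait(self) -> None:
        now = time.monotonic()
        # Drop timestamps that have left the sliding window
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            time.sleep(self.period - (now - self.calls[0]))
        self.calls.append(time.monotonic())

limiter = RateLimiter()
# limiter.wait()  # call this before each Gemini request
```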
Your final .env file should look exactly like this:

```
# Real example format (replace with your actual keys)
LINKEDIN_SESSION_COOKIE=AQEDARXNaAAB4gAA1234567890AAB4wEAAWaFwtToUDgAAF2hQOBQOAABdoUDgUDgAAF2hQOBQOAAA
GOOGLE_API_KEY=AIzaSyDpGBsA1234567890hA-cCxjhW7Vdj2k
CUSTOM_SEARCH_ENGINE_ID=a1b2c3dA12345678902l3m4n
GEMINI_API_KEY=AIzaSyG9pL8mXYzQ4A12345678901wP3hF5dC8b
```

🎯 Pro Tips:
- All keys are case-sensitive
- No spaces around the `=` sign
- No quotes needed around the values
- Keep your `.env` file private (never commit to git)
```
cd synapse-agent
python start_server.py
```

Shows environment checks and detailed startup info.

```
cd synapse-agent
python -m uvicorn api:app --reload --host 127.0.0.1 --port 8000
```

```
cd synapse-agent
python api.py
```

Verify the server is up:

```
curl http://localhost:8000/health
```

- Open http://localhost:8000/web
- Use the pre-filled job description
- Click "Find & Score Candidates"
- Wait for results (2-5 minutes)
```
curl -X POST "http://localhost:8000/run-sourcing-job-sync/" \
  -H "Content-Type: application/json" \
  -d '{
    "job_description": "Senior Python Developer with FastAPI experience",
    "max_candidates": 5
  }'
```

- Ensure you're in the `synapse-agent` directory
- Check all environment variables in `.env`
- Install dependencies: `pip install -r requirements.txt`
- Try the enhanced start: `python start_server.py`
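The `/run-sourcing-job-sync/` request shown with curl above can also be scripted with only the Python standard library (a sketch; the endpoint and field names are taken from the curl example):

```python
import json
from urllib import request

def build_sourcing_request(job_description: str, max_candidates: int = 5,
                           base_url: str = "http://localhost:8000"):
    """Build (but do not send) the POST request for the sync endpoint."""
    payload = json.dumps({
        "job_description": job_description,
        "max_candidates": max_candidates,
    }).encode()
    return request.Request(
        f"{base_url}/run-sourcing-job-sync/",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_sourcing_request("Senior Python Developer with FastAPI experience")
# With the server running (expect 2-5 minutes for a real job):
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```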
If you see errors like `ModuleNotFoundError: No module named 'agent'`:
- Make sure you're running from the correct directory
- Try using the enhanced startup script: `python start_server.py`
- For Hugging Face deployment, the import structure has been fixed to handle multiple scenarios
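Handling "multiple scenarios" usually comes down to an import fallback. A hypothetical sketch of the pattern (module names assumed from the project layout in this README; this is not the project's actual code):

```python
import importlib

def load_agent_class():
    """Resolve SourcingAgent whether the app runs from the repo root
    (src.agent) or from inside src/ itself (agent)."""
    for name in ("src.agent", "agent"):
        try:
            return importlib.import_module(name).SourcingAgent
        except ImportError:
            continue
    raise ImportError("SourcingAgent not found; run from synapse-agent/")
```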
- Verify LinkedIn session cookie is valid and recent
- Check Google API quota limits in Google Cloud Console
- Test with simpler job descriptions first
- Ensure Gemini API key is valid
- Check API rate limits in Google AI Studio
- Monitor server logs for specific errors
```
# Reinstall browsers
playwright install --force chromium

# Windows-specific fix
pip install playwright --force-reinstall
```

If you encounter websockets version conflicts:

```
pip install "websockets>=13.0.0,<15.1.0"
```

```
synapse-agent/
├── api.py              # Main FastAPI application
├── start_server.py     # Enhanced startup script
├── requirements.txt    # Python dependencies
├── .env                # Environment variables (create this)
└── src/
    ├── agent.py        # Core SourcingAgent class
    └── tools.py        # LinkedIn scraping & AI tools
```
For demonstrations, use these optimized settings:

```
# Start with detailed logging
python start_server.py --verbose

# Use the web interface with pre-filled data
# Open: http://localhost:8000/web
# Click: "Find & Score Candidates"
```

The system will automatically:
- Generate search queries from job descriptions
- Find LinkedIn profiles via Google Search
- Scrape profile data with Playwright
- Score candidates with AI (Gemini)
- Generate personalized outreach messages
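The five steps above can be sketched end to end. Everything here is illustrative only: the real agent uses Google Search, Playwright, and Gemini, while this stub shows the data flow with a trivial keyword-overlap score standing in for the AI call:

```python
def generate_queries(job_description: str) -> list[str]:
    """Step 1 stand-in: derive search queries from the job description."""
    return [f'site:linkedin.com/in "{job_description}"']

def score_candidate(profile_text: str, job_description: str) -> float:
    """Step 4 stand-in: keyword overlap instead of a Gemini call,
    scaled into the 60-95% range quoted below."""
    job_terms = set(job_description.lower().split())
    hits = sum(1 for t in job_terms if t in profile_text.lower())
    return round(60 + 35 * hits / max(len(job_terms), 1), 1)

profile = "Senior Python developer, 5 years FastAPI and AWS"
print(score_candidate(profile, "Senior Python Developer with FastAPI experience"))
# 83.3
```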
Expected Results:
- 5-15 candidates found per job
- Fit scores: 60-95% range
- Processing time: 2-5 minutes
- Personalized outreach messages for each candidate
✅ You're ready to run the LinkedIn Sourcing Agent!