A role-playing chat application built with Next.js, TypeScript, and Redux on the frontend and Django with Celery on the backend. The application uses the Gemini 2.5 Flash API for AI responses and PostgreSQL for data storage.
- Two-column layout with chat window and character settings
- Real-time chat with AI character
- Character customization (name, description, personality, appearance)
- Chat history stored in PostgreSQL database
- Responsive design for mobile and desktop
- Next.js 14 with TypeScript
- Redux Toolkit for state management
- Tailwind CSS for styling
- Hybrid API design (REST for the chat stream + GraphQL for data management)
- Django 5.2 with Django REST Framework
- PostgreSQL database
- Celery for background tasks
- Redis as message broker
This project is the core platform of a larger GenAI ecosystem. To ensure modularity and quality, specific components were decoupled:
- **Inference Engine (Colab):** GPT-SOVITS-infer-Colab
  - Role: Acts as an ephemeral GPU worker for offloading heavy GPT-SOVITS text-to-speech inference tasks.
- **QA & Validation Pipeline:** speechbrain_Voiceprint_Recognition
  - Role: Automated voiceprint verification tool achieving 78% similarity scores.
  - Live Demo: https://huggingface.co/spaces/l73jiang/speechbrain_Voiceprint_Recognition
PROJECT STATUS: EXPERIMENTAL / WIP
This repository serves as an Architecture Proof-of-Concept for an end-to-end Generative AI platform. It focuses on System Design patterns (Microservices, Event-Driven Architecture) and Data Pipeline Validation, demonstrating how to decouple high-latency inference tasks from the main application loop.
Please view this as an Architecture Proof-of-Concept rather than production-ready code.
I am currently refactoring several core components. If you run the code, please be aware of the following known limitations:
- First-Turn UX (In Progress):
  - Issue: The initial role-play instruction is currently sent as a user message.
  - Roadmap: I am refactoring this to move the instructions into the System Prompt layer and implementing a "Proactive Greeting" pattern so the character initiates the chat flow smoothly (see the first sketch after this list).

- Context Management (V1):
  - Issue: The current version uses simple "Structured Data Injection" for character personas.
  - Roadmap: V2 is designed to integrate a Vector Database (RAG) to handle long-term dynamic memory and overcome token limits.

- Concurrency & Async Strategy:
  - Constraint: The system currently defaults to **Synchronous Execution (Direct Invocation)**.
  - Reasoning: The LLM provider (Gemini API Free Tier) enforces strict RPM quotas, and dispatching Celery tasks asynchronously triggered immediate HTTP 429 (Too Many Requests) errors. Synchronous execution therefore acts as a natural throttling mechanism that stays within the API limits without implementing a complex token-bucket rate limiter for this MVP (see the second sketch after this list).
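A minimal sketch of the planned System Prompt + Proactive Greeting flow, assuming the `google-generativeai` client; the model name, persona fields, and greeting prompt below are illustrative, not the repository's actual implementation:

```python
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

def start_character_session(character: dict):
    # Role-play instructions live in the system instruction instead of the
    # first user message, so the user never sees setup text.
    model = genai.GenerativeModel(
        model_name="gemini-2.5-flash",  # illustrative; use whichever Gemini model is configured
        system_instruction=(
            f"You are {character['name']}. "
            f"Personality: {character['personality']}. "
            f"Appearance: {character['appearance']}. "
            "Stay in character at all times."
        ),
    )
    chat = model.start_chat(history=[])
    # Proactive Greeting: the character opens the conversation rather than
    # waiting for the user's first message.
    greeting = chat.send_message("Greet the user in character and invite them to chat.")
    return chat, greeting.text
```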
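And a minimal sketch of the synchronous-versus-asynchronous trade-off, assuming a Celery task named `generate_ai_reply` in `chat/tasks.py` (the name and signature are illustrative):

```python
from chat.tasks import generate_ai_reply  # hypothetical @shared_task

def handle_send_message(chat_session_id: int, content: str) -> str:
    # Current MVP: direct invocation. The request blocks until Gemini responds,
    # which serializes calls per worker and keeps the free-tier RPM quota intact.
    reply_text = generate_ai_reply(chat_session_id, content)

    # Planned: asynchronous dispatch once proper rate limiting is in place, e.g.
    #   async_result = generate_ai_reply.delay(chat_session_id, content)
    #   reply_text = async_result.get(timeout=60)
    return reply_text
```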
This project uses environment templates to protect sensitive information:
- `backend/.env.template` - Template for backend environment variables
- `frontend/.env.local.template` - Template for frontend environment variables
Important Security Note:
- Never commit actual environment files (`.env`, `.env.local`) to version control
- Only commit the template files (`.env.template`, `.env.local.template`)
- Copy templates to actual environment files and fill in your values:

  ```bash
  # Backend
  cp backend/.env.template backend/.env

  # Frontend
  cp frontend/.env.local.template frontend/.env.local
  ```
The project includes .gitignore files to prevent accidental commits of sensitive environment files.
- Node.js 18+
- Python 3.10+
- PostgreSQL
- Redis
- Google AI API key for Gemini
This project includes Docker support for easy deployment of Redis and other services. To use Docker:
- Start Redis using Docker

  ```bash
  docker-compose up -d redis
  ```
This command will:
- Pull the latest Redis Docker image
- Start the Redis container named 'redis-server'
- Map port 6379 from the container to your local machine
- Configure the container to automatically restart unless stopped
- Make Redis available at redis://localhost:6379/0
- Verify Redis is running

  ```bash
  docker ps
  ```

- Restart Redis if needed

  ```bash
  docker-compose restart redis
  ```

- Stop Redis

  ```bash
  docker-compose stop redis
  ```
Note: The `docker-compose.yml` file is already configured with Redis settings; you can customize the Redis configuration by modifying that file.
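For reference, a minimal sketch of what the Redis service block described above can look like; the repository's actual `docker-compose.yml` may differ:

```yaml
services:
  redis:
    image: redis:latest         # latest Redis image
    container_name: redis-server
    ports:
      - "6379:6379"             # expose Redis on localhost:6379
    restart: unless-stopped     # auto-restart unless explicitly stopped
```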
- Navigate to the backend directory

  ```bash
  cd backend
  ```

- Create and activate a virtual environment

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies

  ```bash
  pip install -r requirements.txt
  ```
- Set up the PostgreSQL database

  Create a PostgreSQL database named `ai_character_chat` (make sure your PostgreSQL server is running).
- Set up environment variables

  Copy the template file and create your `.env` file:

  ```bash
  cp backend/.env.template backend/.env
  ```

  Edit the `backend/.env` file with your actual values:

  ```env
  SECRET_KEY=your-django-secret-key-here
  DEBUG=True
  GEMINI_API_KEY=your-gemini-api-key-here
  DATABASE_URL=postgresql://user:password@localhost:5432/ai_character_chat
  REDIS_URL=redis://localhost:6379/0
  ```

  Important: Never commit your actual `.env` file to version control. Only the template file `.env.template` should be committed.
- Run database migrations

  ```bash
  python manage.py migrate
  ```

- Create and apply chat app migrations

  ```bash
  python manage.py makemigrations chat
  python manage.py migrate chat
  ```

- Verify all migrations are complete

  ```bash
  python manage.py showmigrations
  ```

- Create a superuser (optional)

  ```bash
  python manage.py createsuperuser
  ```

- Start the Django development server

  ```bash
  python manage.py runserver
  ```
Note: The backend server will be available at http://127.0.0.1:8000/ once started. All migrations should be applied without errors for the application to function properly.
- Start the Celery worker (in a separate terminal)

  ```bash
  celery -A ai_character_chat worker --loglevel=info
  ```
- Navigate to the frontend directory

  ```bash
  cd frontend
  ```

- Install dependencies

  ```bash
  npm install
  ```

- Set up environment variables

  Copy the template file and create your `.env.local` file:

  ```bash
  cp frontend/.env.local.template frontend/.env.local
  ```

  Edit the `frontend/.env.local` file with your actual values:

  ```env
  NEXT_PUBLIC_API_URL=http://localhost:8000/api
  ```

  Important: Never commit your actual `.env.local` file to version control. Only the template file `.env.local.template` should be committed.

- Start the development server

  ```bash
  npm run dev
  ```
- Open your browser and navigate to `http://localhost:3000`
- Click on "Character Settings" to customize your AI character
- Start chatting with your character in the chat window
- Your conversations are automatically saved to the database
- `GET /api/characters` - List all characters
- `POST /api/characters` - Create a new character
- `GET /api/characters/{id}` - Get character details
- `PUT /api/characters/{id}` - Update a character

- `GET /api/sessions` - List chat sessions
- `POST /api/sessions` - Create a new chat session
- `GET /api/sessions/{id}` - Get session details

- `GET /api/messages?chat_session_id={id}` - Get messages for a session
- `POST /api/chat/send_message` - Send a message and get the AI response
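For example, a minimal sketch of a call to the send-message endpoint; the payload field names are assumptions, so check `chat/serializers.py` for the exact request schema:

```python
import requests

# Send a user message to an existing chat session and print the AI reply.
response = requests.post(
    "http://localhost:8000/api/chat/send_message",
    json={"chat_session_id": 1, "content": "Hello there!"},  # field names are assumed
    timeout=60,  # the synchronous Gemini call can take a while
)
response.raise_for_status()
print(response.json())
```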
```
AICharacterChat/
├── backend/                    # Django backend
│   ├── ai_character_chat/      # Django project settings
│   ├── chat/                   # Chat app
│   │   ├── models.py           # Database models
│   │   ├── views.py            # API views
│   │   ├── serializers.py      # DRF serializers
│   │   └── tasks.py            # Celery tasks
│   └── requirements.txt        # Python dependencies
├── frontend/                   # Next.js frontend
│   ├── src/
│   │   ├── app/                # Next.js app directory
│   │   ├── components/         # React components
│   │   ├── store/              # Redux store
│   │   ├── types/               # TypeScript types
│   │   └── utils/              # Utility functions
│   └── package.json            # Node.js dependencies
└── README.md
```
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License.