Development guidance for React AI tutoring application using OpenAI's Realtime API with multi-agent architecture for interactive math/science tutoring.
This skill provides context for working with a React application that delivers interactive math/science tutoring through voice conversations with visual feedback, powered by OpenAI's Realtime API and a specialized multi-agent system.
The application is built with React and Vite, linted with ESLint, and connects to OpenAI's `gpt-4o-realtime-preview` model through the Realtime API.
When working with this codebase:
1. **Start development server**: Run `npm run dev` to launch Vite development server
2. **Build for production**: Run `npm run build` to create optimized build in `dist/`
3. **Check code quality**: Run `npm run lint` to execute ESLint with React rules
4. **Preview production build**: Run `npm run preview` to serve production build locally
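The four commands above correspond to npm scripts; a typical `package.json` scripts section for a Vite + ESLint project looks like the sketch below (the exact flags in this repository may differ):

```json
{
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "lint": "eslint .",
    "preview": "vite preview"
  }
}
```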
The application uses six specialized agents, defined in `src/agents/tutor/`.
Agents chain conditionally based on the `isConceptIntroductionEnabled` flag in the problem data. The brainStormer agent can replace the initial agents for exploratory learning sessions.
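The conditional chaining can be pictured as a small chain-builder. This is an illustrative sketch only: apart from `brainStormer` and the `isConceptIntroductionEnabled` flag, the agent names and the `buildChain` function are hypothetical, not the repository's actual API.

```javascript
// Sketch of conditional agent chaining. Agent names other than
// brainStormer, and the buildChain signature, are hypothetical.
function buildChain(problemData, { exploratory = false } = {}) {
  if (exploratory) {
    // brainStormer replaces the initial agents for open-ended sessions
    return ['brainStormer', 'stepTutor', 'summarizer'];
  }
  const chain = [];
  if (problemData.isConceptIntroductionEnabled) {
    chain.push('conceptIntroducer'); // only when the problem opts in
  }
  chain.push('questionAsker', 'stepTutor', 'summarizer');
  return chain;
}

// Example: a problem that opts into a concept introduction
const chain = buildChain({ isConceptIntroductionEnabled: true });
```

The point of the sketch is that the chain is derived from problem data at session start, so new agents should be added by extending this selection logic rather than by hard-coding a fixed sequence.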
Key UI components:
1. **App.jsx**: Main application with session management, push-to-talk controls, and visual feedback
2. **StarBackground**: Animated star field responsive to connection state
3. **VisualFeedback**: Displays illustrations, hints, and success messages
4. **NotesArea**: Shows step-by-step progress tracking
Problem data is structured JSON containing:
- Topic information and learning objectives
- Questions with multiple choice options
- Step-by-step tutoring content with visuals
- Conceptual questions with illustrations and feedback
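A problem file might look like the following sketch. The top-level field names (topic, questions, steps, visuals) come from the schema described in the development guidelines; the nesting and the remaining field names are illustrative assumptions:

```json
{
  "topic": "Fractions",
  "learningObjectives": ["Add fractions with unlike denominators"],
  "isConceptIntroductionEnabled": true,
  "questions": [
    {
      "prompt": "What is 1/2 + 1/4?",
      "choices": ["1/6", "2/6", "3/4", "1/8"],
      "answer": "3/4"
    }
  ],
  "steps": [
    { "text": "Find a common denominator", "visual": "common-denominator.svg" }
  ],
  "visuals": { "intro": "fractions-intro.svg" }
}
```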
Agents communicate with the UI through global window functions:
- `window.handleStepCompletion()`: Updates progress
- `window.handleVisualFeedback()`: Shows illustrations/hints/success
- `window.handleIntroVisual()`: Displays concept introductions
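The UI registers these functions on `window` so that agent tool calls can drive the interface. The sketch below shows the registration pattern with plain variables standing in for React state; the handler signatures are assumptions, and in the browser `window` and `globalThis` are the same object:

```javascript
// Minimal sketch of the window-function bridge between agents and UI.
// Signatures are assumptions; the real app calls React state setters.
const progress = [];
let lastFeedback = null;

globalThis.handleStepCompletion = (stepIndex) => {
  progress.push(stepIndex);          // NotesArea reflects completed steps
};

globalThis.handleVisualFeedback = (kind, payload) => {
  lastFeedback = { kind, payload };  // kind: illustration | hint | success
};

globalThis.handleIntroVisual = (visual) => {
  lastFeedback = { kind: 'intro', payload: visual };
};

// An agent tool call would then trigger UI updates like:
globalThis.handleStepCompletion(0);
globalThis.handleVisualFeedback('hint', { text: 'Try a common denominator' });
```

In the React app these assignments would live in a `useEffect` in App.jsx so the handlers close over the current state setters and are cleaned up on unmount.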
Required environment variables are defined in `env.js`. The application uses OpenAI's `gpt-4o-realtime-preview` model.
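For orientation, the Realtime API is reached over a WebSocket whose URL carries the model as a query parameter. The endpoint shape below follows OpenAI's documented pattern but should be treated as an assumption and checked against current docs; note also that browser clients typically authenticate with a short-lived ephemeral token rather than a raw API key:

```javascript
// Builds a Realtime API WebSocket URL for a given model.
// Endpoint shape is an assumption; verify against OpenAI's current docs.
function realtimeUrl(model) {
  return `wss://api.openai.com/v1/realtime?model=${encodeURIComponent(model)}`;
}

const url = realtimeUrl('gpt-4o-realtime-preview');
// → "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
```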
1. **Agent modifications**: When updating agents, maintain the conditional chaining logic based on problem data flags
2. **Visual feedback**: Use the global window functions to trigger UI updates from agents
3. **Problem data**: Structure new problems following the JSON schema with required fields (topic, questions, steps, visuals)
4. **Voice interface**: Test push-to-talk functionality with spacebar key handling
5. **State management**: Session state flows through App.jsx to child components
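Push-to-talk (guideline 4) is easiest to test when the spacebar handling is isolated as a small state machine over keyboard events. The reducer below is a sketch under assumed names; the real App.jsx wires equivalent logic into its key handlers:

```javascript
// Returns the next push-to-talk state for a keyboard event.
// `state.talking` is true while the spacebar is held down.
function pushToTalkReducer(state, event) {
  if (event.code !== 'Space') return state;
  if (event.type === 'keydown') {
    if (event.repeat || state.talking) return state; // ignore key auto-repeat
    return { talking: true };                        // start streaming mic audio
  }
  if (event.type === 'keyup') {
    return { talking: false };                       // stop and commit audio
  }
  return state;
}

// Wiring (browser only):
// window.addEventListener('keydown', (e) => { state = pushToTalkReducer(state, e); });
// window.addEventListener('keyup',   (e) => { state = pushToTalkReducer(state, e); });
```

Filtering on `event.repeat` matters: holding the spacebar fires repeated keydown events, and without the guard each repeat would restart the audio stream.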