# Webarmonium Development Assistant
Expert assistant for the Webarmonium codebase - a real-time collaborative music platform that transforms gestures into algorithmic music.
## What This Skill Does
This skill helps you navigate and develop features for Webarmonium, a platform enabling multiple users to create music together through intuitive gestures on a shared canvas. It provides deep understanding of the three-tier audio architecture, gesture-to-music transformation, real-time synchronization, and algorithmic composition engine.
## Instructions
When assisting with Webarmonium development, follow these guidelines:
### 1. Understand the Core Architecture
Webarmonium uses a three-tier system:
- **Frontend**: Vanilla JavaScript + Canvas + Web Audio API
- **Backend**: Node.js + Express + Socket.io
- **Audio Engine**: Tone.js + Web Audio API integration

Key features include:
- Real-time multi-user collaboration (5-10 users per room)
- Gesture-to-music transformation with algorithmic composition
- 60fps canvas rendering with cursor synchronization
- Environmental memory for pattern learning (24-hour retention)
- Cross-platform support (mouse, touch, gyroscope)

### 2. Navigate the Codebase Structure
**Frontend** (`/frontend/src/`):
- Entry point: `main.js` → `WebarmoniumApp` class
- Core services: `AudioService.js`, `EnhancedGestureCapture.js`, `SocketService.js`, `CursorManager.js`, `DrawingRenderer.js`
- Patterns: Canvas-based 60fps rendering, event-driven architecture, modular service design

**Backend** (`/backend/src/`):
- Entry point: `server.js` → Express + Socket.io
- Core services: `RoomManager.js`, `GestureProcessor.js`, `BackgroundCompositionService.js`, `CompositionEngine.js`, `EnvironmentalMemoryCoordinator.js`, `HoverOrchestrator.js`, `DrawingSyncService.js`, `ColorAssignmentService.js`
- Data models: `Room.js`, `User.js`, `Gesture.js`, `SoundPattern.js`, `MemoryState.js`, `CursorPosition.js`, `DrawingStroke.js`

### 3. Understand the Three-Tier Audio System
The audio architecture is organized into three layers:
1. **Background Layer** - Ambient algorithmic music
2. **Remote Layer** - Other users' musical contributions
3. **Local Layer** - User's own gesture-generated audio
All audio processing flows through `AudioService.js` on the frontend, using Tone.js v14.7.77 and the Web Audio API.
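The sketch below illustrates one way the three layers could be kept on separate Tone.js gain nodes so each tier is mixed independently. It is a minimal, hypothetical sketch: the class and method names are illustrative and are not the actual `AudioService.js` API.

```javascript
// Illustrative sketch only; the real AudioService.js may differ.
import * as Tone from 'tone';

class AudioLayers {
  constructor() {
    // One gain node per tier so each layer can be mixed independently.
    this.background = new Tone.Gain(0.3).toDestination(); // ambient algorithmic music
    this.remote = new Tone.Gain(0.6).toDestination();     // other users' contributions
    this.local = new Tone.Gain(0.9).toDestination();      // this user's gesture audio
  }

  async init() {
    // Browsers require a user gesture before audio can start (see Constraints).
    await Tone.start();
  }

  playLocalNote(note = 'C4', duration = '8n') {
    // Route a simple synth through the local layer's gain node.
    const synth = new Tone.Synth().connect(this.local);
    synth.triggerAttackRelease(note, duration);
  }
}
```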
### 4. Follow the Gesture Processing Pipeline
```
Input (Mouse/Touch/Gyro) → EnhancedGestureCapture →
Classification → Musical Mapping → SocketService →
Backend Processing → Sound Generation → Audio Output
```
When working with gestures:
- Extend `EnhancedGestureCapture.js` for new gesture types
- Update the `Gesture.js` model with new properties
- Add musical mapping in `GestureToMusicService.js`
- Update frontend gesture event handlers
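As a rough illustration of the capture → classification → mapping steps, here is a hedged sketch. The function names, thresholds, and payload shape are hypothetical; the real `EnhancedGestureCapture.js` classifier is considerably richer.

```javascript
// Hypothetical sketch only; not the actual EnhancedGestureCapture.js implementation.

// points: [{ x, y, t }, ...] sampled from mouse, touch, or gyroscope input
function classifyGesture(points) {
  const first = points[0];
  const last = points[points.length - 1];
  const dx = last.x - first.x;
  const dy = last.y - first.y;

  if (Math.hypot(dx, dy) < 10) return 'tap';
  return Math.abs(dx) >= Math.abs(dy) ? 'swipe-horizontal' : 'swipe-vertical';
}

// Map the classified gesture to musical parameters before handing it to SocketService.
function toMusicalMapping(type, points, canvasHeight) {
  return {
    type,
    pitchHint: 1 - points[0].y / canvasHeight, // higher on the canvas → higher pitch
    energy: Math.min(points.length / 100, 1),  // longer gestures → more intensity
  };
}

// Example (the 'gesture:end' event name comes from api/socketHandlers.js):
// socket.emit('gesture:end', toMusicalMapping(classifyGesture(points), points, canvas.height));
```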
### 5. Handle WebSocket Events Correctly

Key events in `api/socketHandlers.js`:

- `user:join` / `user:leave` - Room management
- `gesture:start` / `gesture:end` - Gesture processing
- `cursor:move` - Cursor synchronization
- `drawing:start` / `drawing:stroke` / `drawing:end` - Canvas drawing
- `room:state` - Room state synchronization
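A minimal sketch of how handlers for these events might be registered with Socket.io follows. Only the event names come from the list above; the handler bodies and payload shapes are assumptions.

```javascript
// Sketch only; the real api/socketHandlers.js registration and payload shapes may differ.
// `io` is the Socket.io Server instance created in server.js.
function registerSocketHandlers(io) {
  io.on('connection', (socket) => {
    socket.on('user:join', ({ roomId, userId }) => {
      socket.join(roomId);
      // Push updated room state to everyone in the room.
      io.to(roomId).emit('room:state', { users: [userId] });
    });

    socket.on('cursor:move', ({ roomId, x, y }) => {
      // Relay cursor positions to the other clients, not back to the sender.
      socket.to(roomId).emit('cursor:move', { userId: socket.id, x, y });
    });

    socket.on('user:leave', ({ roomId, userId }) => {
      socket.leave(roomId);
      io.to(roomId).emit('user:leave', { userId });
    });
  });
}

module.exports = { registerSocketHandlers };
```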
### 6. Respect Performance Requirements

- API endpoints: <200ms p95 response time
- UI interactions: <100ms response time
- WebSocket latency: <100ms
- Memory usage: <100MB baseline, <500MB peak
- Canvas rendering: 60fps target
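To check the 60fps canvas target during development, a simple `requestAnimationFrame` counter like the hypothetical helper below can be dropped into the frontend; it is not part of the existing codebase.

```javascript
// Hypothetical dev-only helper: reports measured frames per second once per second.
function startFpsMonitor(onSample = (fps) => console.warn(`fps: ${fps}`)) {
  let frames = 0;
  let windowStart = performance.now();

  function tick(now) {
    frames += 1;
    if (now - windowStart >= 1000) {
      onSample(Math.round((frames * 1000) / (now - windowStart)));
      frames = 0;
      windowStart = now;
    }
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```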
### 7. Follow Development Setup

```bash
# Backend (port 3001)
cd backend
npm install
npm run dev   # Development with nodemon

# Frontend (port 3000)
cd frontend
npm install
npm start     # HTTP server

# Access: http://localhost:3000
```
### 8. Write Tests Following Standards
- **Test-Driven Development**: Tests before implementation
- **90%+ Code Coverage**: Comprehensive coverage required
- **Categories**: Unit, integration, performance, contract tests

```bash
# Run all tests
npm test

# Specific integration scenarios
NODE_ENV=test npm test -- tests/integration/multiuser-sync.test.js

# Coverage report
npm run test:coverage
```
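A unit test might take roughly the shape below. This assumes a Jest-style runner (the `*.test.js` naming suggests it, but that is an assumption), and the classifier is defined inline only to keep the sketch self-contained; a real test would import the service under test.

```javascript
// Hypothetical test shape, assuming a Jest-style runner.
// In the real codebase, import the classifier from its service module instead.
function classifyGesture(points) {
  const dx = points[points.length - 1].x - points[0].x;
  const dy = points[points.length - 1].y - points[0].y;
  return Math.hypot(dx, dy) < 10 ? 'tap' : 'stroke';
}

describe('gesture classification', () => {
  test('treats very short strokes as taps', () => {
    const points = [{ x: 0, y: 0, t: 0 }, { x: 2, y: 1, t: 16 }];
    expect(classifyGesture(points)).toBe('tap');
  });

  test('treats longer strokes as non-taps', () => {
    const points = [{ x: 0, y: 0, t: 0 }, { x: 120, y: 5, t: 200 }];
    expect(classifyGesture(points)).toBe('stroke');
  });
});
```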
### 9. Work with Algorithmic Music Generation
The composition system uses multiple engines:
- **CompositionEngine** - Form structure (ABA, rondo, sonata), voice generation
- **HarmonicEngine** - Progressions, voice leading, key/mode management
- **StyleAnalyzer** - Gesture pattern analysis for tempo and energy
- **MaterialLibrary** - Musical material organization
- **PhraseMorphology** - Gesture-to-melodic contour conversion
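The engines' real interfaces live in the backend services listed above. As a rough intuition for how gesture analysis might feed tempo and form decisions, here is a standalone, hypothetical sketch; the function names and thresholds are not the actual CompositionEngine or StyleAnalyzer APIs.

```javascript
// Hypothetical sketch; not the actual CompositionEngine/StyleAnalyzer interfaces.

// Derive a tempo from average gesture energy (0..1), clamped to a musical range.
function tempoFromEnergy(energy) {
  const minBpm = 60;
  const maxBpm = 140;
  const clamped = Math.min(Math.max(energy, 0), 1);
  return Math.round(minBpm + (maxBpm - minBpm) * clamped);
}

// Expand a form label into a sequence of section materials.
function expandForm(form, materials) {
  const layouts = {
    aba: ['A', 'B', 'A'],
    rondo: ['A', 'B', 'A', 'C', 'A'],
  };
  return (layouts[form] || layouts.aba).map((label) => materials[label]);
}

// Example: calm gestures pick a slow ABA structure.
const sections = expandForm('aba', { A: 'theme', B: 'contrast', C: 'episode' });
const bpm = tempoFromEnergy(0.25); // → 80
```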
### 10. Debug Common Issues

- **Audio Context**: Requires user interaction for browser autoplay
- **WebSocket Connection**: Check CORS configuration and port availability
- **Canvas Performance**: Monitor 60fps rendering under load
- **Memory Management**: 24-hour automatic cleanup processes

Use Chrome DevTools for audio context debugging, its Network tab (WS filter) for WebSocket traffic, and the Performance panel to profile canvas rendering.
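For the autoplay issue in particular, a common pattern (shown here as a sketch, not the project's exact bootstrap code) is to start the audio context from the first user interaction and log its state while debugging:

```javascript
// Sketch: unlock the audio context on the first user gesture (browser autoplay policy).
import * as Tone from 'tone';

function unlockAudioOnFirstInteraction() {
  const start = async () => {
    await Tone.start(); // resumes the underlying AudioContext
    console.warn('AudioContext state:', Tone.getContext().state); // expect "running" (debug only)
    document.removeEventListener('pointerdown', start);
    document.removeEventListener('keydown', start);
  };
  document.addEventListener('pointerdown', start);
  document.addEventListener('keydown', start);
}
```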
### 11. Follow Code Quality Standards
- No console.log in production code
- Prefer const/let over var
- Single responsibility principle
- No code duplication
- Consistent error handling patterns
- Clean architecture principles

### 12. Handle Multi-User Synchronization
When adding multi-user features:
1. Extend room management in `RoomManager.js`
2. Add synchronization events in `socketHandlers.js`
3. Update frontend state management
4. Test multi-user scenarios (5-10 users per room)
5. Ensure color-based user identification works
6. Verify real-time cursor position sharing
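To orient the first two steps, here is a hypothetical sketch of tracking per-room activity and broadcasting it; the class and method names are illustrative, not the real `RoomManager.js` API.

```javascript
// Hypothetical sketch; the real RoomManager.js API will differ.
class RoomActivityTracker {
  constructor() {
    this.rooms = new Map(); // roomId → Map(userId → lastActiveTimestamp)
  }

  touch(roomId, userId) {
    if (!this.rooms.has(roomId)) this.rooms.set(roomId, new Map());
    this.rooms.get(roomId).set(userId, Date.now());
  }

  activeUsers(roomId, withinMs = 30000) {
    const users = this.rooms.get(roomId) || new Map();
    const cutoff = Date.now() - withinMs;
    return [...users.entries()].filter(([, t]) => t >= cutoff).map(([id]) => id);
  }
}

// In a socket handler, update activity and broadcast the room state:
// tracker.touch(roomId, socket.id);
// io.to(roomId).emit('room:state', { active: tracker.activeUsers(roomId) });
```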
## Examples
### Example 1: Adding a New Gesture Type
When asked to add a spiral gesture:
1. Read `frontend/src/services/EnhancedGestureCapture.js` to understand classification
2. Extend classification logic to detect spiral patterns
3. Read `backend/src/models/Gesture.js` and add spiral properties
4. Update musical mapping in gesture-to-music service
5. Test gesture recognition and audio output
6. Write unit and integration tests
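Step 2 might start from a heuristic like the following (a hypothetical sketch, not the project's classifier): a spiral shows sustained rotation around the stroke's centroid. A real classifier would also track radius growth to separate spirals from repeated circles.

```javascript
// Hypothetical spiral heuristic; tune thresholds against real captured strokes.
function looksLikeSpiral(points) {
  if (points.length < 20) return false;

  // Centroid of the stroke.
  const cx = points.reduce((s, p) => s + p.x, 0) / points.length;
  const cy = points.reduce((s, p) => s + p.y, 0) / points.length;

  let totalTurn = 0;
  let prevAngle = Math.atan2(points[0].y - cy, points[0].x - cx);
  for (let i = 1; i < points.length; i++) {
    const angle = Math.atan2(points[i].y - cy, points[i].x - cx);
    let delta = angle - prevAngle;
    // Unwrap the angle so turns accumulate across the ±π boundary.
    if (delta > Math.PI) delta -= 2 * Math.PI;
    if (delta < -Math.PI) delta += 2 * Math.PI;
    totalTurn += delta;
    prevAngle = angle;
  }

  // More than ~1.5 full revolutions in one direction reads as a spiral.
  return Math.abs(totalTurn) > 3 * Math.PI;
}
```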
### Example 2: Optimizing Audio Performance
When asked to reduce audio latency:
1. Profile `AudioService.js` three-tier system
2. Check WebSocket latency metrics (<100ms requirement)
3. Review backend sound generation algorithms
4. Test with multiple concurrent users
5. Measure p95 response times
6. Implement optimizations while maintaining audio quality
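For step 5, a small hypothetical helper for computing p95 from collected latency samples (the project's performance tests may already have an equivalent):

```javascript
// Hypothetical helper: p-th percentile of an array of latency samples in milliseconds.
function percentile(samples, p = 95) {
  if (samples.length === 0) return NaN;
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[index];
}

// Example: assert the <200ms p95 API requirement against measured samples.
// expect(percentile(responseTimesMs, 95)).toBeLessThan(200);
```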
### Example 3: Implementing a New Multi-User Feature
When asked to add user presence indicators:
1. Extend `RoomManager.js` to track user activity
2. Add WebSocket event for presence updates
3. Update `CursorManager.js` to show presence state
4. Add frontend visual indicators
5. Test synchronization across multiple clients
6. Verify performance with 5-10 concurrent users
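Steps 3 and 4 could look roughly like this on the frontend. This is a hedged sketch: the `presence:update` event name, the payload shape, and the rendering hook are assumptions, and `socket` stands for the Socket.io client created by `SocketService.js`.

```javascript
// Hypothetical sketch; integrate with the real CursorManager.js rendering instead.
const presenceByUser = new Map(); // userId → { active, lastSeen }

// `socket` is assumed to be the Socket.io client from SocketService.js.
socket.on('presence:update', ({ userId, active }) => {
  presenceByUser.set(userId, { active, lastSeen: Date.now() });
});

// During the 60fps render pass, draw a small ring around active users' cursors.
function drawPresence(ctx, cursors) {
  for (const { userId, x, y, color } of cursors) {
    const presence = presenceByUser.get(userId);
    if (!presence || !presence.active) continue;
    ctx.beginPath();
    ctx.arc(x, y, 12, 0, 2 * Math.PI);
    ctx.strokeStyle = color; // reuse the user's assigned color (ColorAssignmentService)
    ctx.stroke();
  }
}
```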
## Constraints
- Maintain 60fps canvas rendering performance
- Keep WebSocket latency under 100ms
- Respect memory limits (500MB peak)
- Preserve 24-hour environmental memory retention
- Support 5-10 concurrent users per room
- Use Tone.js v14.7.77 (do not upgrade without testing)
- Maintain cross-platform compatibility (mouse, touch, gyroscope)
- Require user interaction before starting the audio context
- Follow test-driven development (tests before implementation)
- Achieve 90%+ code coverage for new features