Real-time facial tic detection using MediaPipe FaceMesh and OpenCV. Tracks mouth landmarks (left, right, upper, lower) from webcam feed to monitor facial movements. Built with Python 3.12, MediaPipe, and OpenCV.
1. **Virtual Environment**
- Project uses Python 3.12.7 in `.venv/` directory
- Windows path: `.venv/Scripts/python.exe`
- Linux/Mac path: `.venv/bin/python`
2. **Dependencies**
```bash
pip install mediapipe==0.10.14
pip install opencv-contrib-python==4.12.0.88
pip install numpy==2.2.6
```
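For reference, creating the environment from scratch might look like this (a sketch, assuming `python` resolves to a 3.12 interpreter on your PATH):

```shell
# Create the virtual environment in .venv/
python -m venv .venv

# Activate it:
#   Windows:    .venv\Scripts\activate
#   Linux/Mac:  source .venv/bin/activate
```

Run the `pip install` commands above inside the activated environment so the packages land in `.venv/` rather than the system Python.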
```bash
python main.py
```
**Controls:**
- `ESC`: exit the application
**Coordinate Conversion**
Landmarks are normalized (0.0-1.0) and must be scaled:
```python
x = int(landmark.x * w) # w = frame width
y = int(landmark.y * h) # h = frame height
```
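A self-contained sketch of this conversion; the `Landmark` class and `to_pixels` helper here are illustrative stand-ins for MediaPipe's normalized landmark objects, not names from `main.py`:

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    """Stand-in for a MediaPipe normalized landmark (coordinates in 0.0-1.0)."""
    x: float
    y: float

def to_pixels(landmark: Landmark, w: int, h: int) -> tuple[int, int]:
    """Scale normalized coordinates to integer pixel coordinates."""
    return int(landmark.x * w), int(landmark.y * h)

# Example: a landmark at the center of a 640x480 frame.
print(to_pixels(Landmark(0.5, 0.5), 640, 480))  # (320, 240)
```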
**MediaPipe Configuration**
```python
max_num_faces=1 # Single face tracking for performance
refine_landmarks=True # Enhanced lip/iris detail
min_detection_confidence=0.5 # Balanced sensitivity
min_tracking_confidence=0.5 # Balanced sensitivity
```
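In context, these options are passed to the `FaceMesh` constructor; a minimal initialization sketch (the variable names are illustrative):

```python
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

# Single-face tracking with refined lip/iris landmarks, as configured above.
face_mesh = mp_face_mesh.FaceMesh(
    max_num_faces=1,
    refine_landmarks=True,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5,
)
```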
**Main Processing Pipeline (`main.py`)**
1. Capture frame from webcam
2. Convert BGR → RGB (MediaPipe requirement)
3. Process with FaceMesh
4. Convert RGB → BGR (OpenCV display requirement)
5. Draw mesh tesselation and contours
6. Overlay landmark labels for filtered points
7. Display frame and check for ESC key
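Steps 2 and 4 are simple channel reorderings; a NumPy-only sketch of the same conversion (no webcam or OpenCV required, and `bgr_to_rgb` is an illustrative helper, not a function from `main.py`):

```python
import numpy as np

def bgr_to_rgb(frame: np.ndarray) -> np.ndarray:
    # Reversing the channel axis swaps B and R, equivalent to
    # cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) for 3-channel frames.
    return frame[..., ::-1]

frame_bgr = np.zeros((2, 2, 3), dtype=np.uint8)
frame_bgr[..., 0] = 255                  # pure blue in BGR order
frame_rgb = bgr_to_rgb(frame_bgr)
print(frame_rgb[0, 0].tolist())          # [0, 0, 255]: blue is now the last channel
```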
To monitor additional facial features:
1. Find the landmark index from [MediaPipe's face model visualization](https://github.com/google/mediapipe/blob/master/mediapipe/modules/face_geometry/data/canonical_face_model_uv_visualization.png)
2. Add the index to the `LANDMARK_NAMES` dictionary with a descriptive name:
```python
LANDMARK_NAMES = {
    61: "mouth_left",
    291: "mouth_right",
    13: "mouth_upper",
    14: "mouth_lower",
    # Add your new landmark here
    33: "right_eye_outer",  # Example
}
```
3. Add the name string to the `tick_landmarks` list to enable display:
```python
tick_landmarks = [
    "mouth_left",
    "mouth_right",
    "mouth_upper",
    "mouth_lower",
    "right_eye_outer",  # Example
]
```
```
tick-fixer/
├── main.py # Core application with webcam processing
├── main2.py # Empty placeholder for experiments
├── .venv/ # Virtual environment (Windows-specific paths)
└── requirements.txt # (Not present - dependencies managed via pip)
```
**Typical Use Case:**
Monitor facial tics by tracking mouth landmark positions over time. The application displays real-time coordinates of mouth corners and upper/lower lip positions.
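One way such monitoring might be sketched is to keep a short history of each landmark's pixel position and flag bursts of movement. The `TicMonitor` class, its window size, and its threshold below are illustrative assumptions, not values from `main.py`:

```python
from collections import deque
from math import hypot

class TicMonitor:
    """Flags a landmark whose recent movement exceeds a threshold (illustrative)."""
    def __init__(self, window: int = 5, threshold: float = 15.0):
        self.history = deque(maxlen=window)   # recent (x, y) pixel positions
        self.threshold = threshold            # pixels of travel per window

    def update(self, x: int, y: int) -> bool:
        self.history.append((x, y))
        points = list(self.history)
        travel = sum(
            hypot(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])
        )
        return travel > self.threshold        # True when movement looks tic-like

monitor = TicMonitor()
for pos in [(100, 100), (100, 101), (100, 100)]:
    flagged = monitor.update(*pos)
print(flagged)  # False: small jitter stays under the threshold
```

Feeding each frame's converted pixel coordinates into one monitor per tracked landmark would turn the displayed coordinates into a simple movement alarm.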
**Debugging Camera Issues:**
If no video feed appears, try a different camera index:
```python
cap = cv2.VideoCapture(0)  # 0 = default camera; try 1, 2, ... if the feed stays blank
```
**Adjusting Detection Sensitivity:**
Modify confidence thresholds in FaceMesh initialization:
```python
min_detection_confidence=0.7 # Higher = more strict
min_tracking_confidence=0.7 # Higher = less jitter
```
**Changing Display Colors:**
Modify drawing specifications:
```python
drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1, color=(0, 255, 0)) # Green
```