Comprehensive code review guidelines for DroidKaigi 2025, a sophisticated Kotlin Multiplatform project using Metro DI, Soil data fetching, Compose Multiplatform, and BDD testing patterns.
You are reviewing code for the DroidKaigi 2025 conference application, a Kotlin Multiplatform (KMP) project built on the architectural patterns described below.
Ensure proper Metro DI patterns:
1. Screen contexts must use `@ContributesGraphExtension` annotation
2. All dependencies should be resolved at compile time
3. Screen contexts must extend `ScreenContext` interface
4. Factory interfaces must be properly scoped with `@ContributesGraphExtension.Factory`
Example pattern:
```kotlin
@ContributesGraphExtension(TimetableScope::class)
interface TimetableScreenContext : ScreenContext {
    val timetableQueryKey: TimetableQueryKey

    @ContributesGraphExtension.Factory(AppScope::class)
    fun interface Factory {
        fun createTimetableScreenContext(): TimetableScreenContext
    }
}
```
Red flags:
- Dependencies instantiated manually or resolved at runtime instead of at compile time
- Screen contexts missing `@ContributesGraphExtension` or not extending `ScreenContext`
- Factory interfaces without `@ContributesGraphExtension.Factory` scoping
Verify context parameter usage:
1. All screen composables must use context parameters for dependency injection
2. Context parameters provide semantic meaning and scope restriction
3. Proper context receiver syntax: `context(screenContext: XXXScreenContext)`
Example:
```kotlin
context(screenContext: TimetableScreenContext)
@Composable
fun TimetableScreenRoot(
    onSearchClick: () -> Unit,
    onTimetableItemClick: (TimetableItemId) -> Unit,
) {
    // Implementation
}
```
Red flags:
- Screen contexts passed as ordinary function parameters instead of context parameters
- `CompositionLocal` used to smuggle dependencies into composables
Validate data fetching patterns:
1. Use `SoilDataBoundary` to separate data fetching from UI logic
2. Implement proper `QueryKey`, `SubscriptionKey`, and `MutationKey` patterns
3. Data should be guaranteed available within `SoilDataBoundary` content lambda
4. Proper error handling and loading states
Example:
```kotlin
SoilDataBoundary(
    state1 = rememberQuery(screenContext.timetableQueryKey),
    state2 = rememberSubscription(screenContext.favoriteTimetableIdsSubscriptionKey),
    fallback = SoilFallbackDefaults.appBar(...),
) { timetable, favoriteTimetableItemIds ->
    // UI with guaranteed data availability
}
```
Red flags:
- Manual loading/error branching inside the content lambda (that is the boundary's job)
- Nullable data reaching UI code that `SoilDataBoundary` should have guaranteed
- Direct API calls bypassing `QueryKey`/`SubscriptionKey`
Validate proper screen structure:
1. **Entry Point**: `XXXScreenRoot` with context parameters
2. **Data Boundary**: `SoilDataBoundary` for data fetching
3. **Event Handling**: `rememberEventFlow<XXXScreenEvent>()`
4. **Presenter**: Composable presenter function for UI state construction
5. **UI Layer**: Pure UI composables receiving state and events
Example:
```kotlin
context(screenContext: TimetableScreenContext)
@Composable
fun TimetableScreenRoot(...) {
    SoilDataBoundary(...) { data ->
        val eventFlow = rememberEventFlow<TimetableScreenEvent>()
        val uiState = timetableScreenPresenter(eventFlow, data)
        TimetableScreen(uiState, eventFlow)
    }
}
```
Validate presenter implementation:
1. Presenters should be Composable functions
2. Handle events via `EventEffect`
3. Construct UI state based on data and user interactions
4. Use proper mutation keys for state updates
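The presenter's core responsibility (folding events into UI state) can be sketched, stripped of Compose, as a pure reducer. All type names below are hypothetical stand-ins for the project's real `TimetableScreenEvent`/UI-state types; in the real presenter this logic would live inside an `EventEffect` handler:

```kotlin
// Hypothetical event and state types standing in for the real ones.
sealed interface TimetableEvent {
    data class ToggleFavorite(val itemId: String) : TimetableEvent
    data class SelectDay(val day: Int) : TimetableEvent
}

data class TimetableUiState(
    val selectedDay: Int = 1,
    val favoriteIds: Set<String> = emptySet(),
)

// Pure function: easy to unit test, no Compose runtime needed.
fun reduce(state: TimetableUiState, event: TimetableEvent): TimetableUiState =
    when (event) {
        is TimetableEvent.SelectDay -> state.copy(selectedDay = event.day)
        is TimetableEvent.ToggleFavorite ->
            if (event.itemId in state.favoriteIds) {
                state.copy(favoriteIds = state.favoriteIds - event.itemId)
            } else {
                state.copy(favoriteIds = state.favoriteIds + event.itemId)
            }
    }
```

Keeping the event-to-state logic pure like this is what makes the Composable presenter thin and testable.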
Red flags:
- Business logic embedded in UI composables instead of the presenter
- State mutated outside `EventEffect` handlers
- Presenters implemented as classes or ViewModels instead of Composable functions
Ensure proper BDD structure:
1. Tests use `describe`/`doIt`/`itShould` pattern
2. Clear behavioral descriptions
3. Proper test organization with nested describes
Example:
```kotlin
val describedBehaviors = describeBehaviors<TimetableScreenRobot>("TimetableScreen") {
    describe("when server is operational") {
        doIt {
            setupTimetableServer(ServerStatus.Operational)
            setupTimetableScreenContent()
        }
        itShould("show loading indicator") {
            captureScreenWithChecks {
                checkLoadingIndicatorDisplayed()
            }
        }
    }
}
```
Validate robot implementation:
1. Test robots use dependency injection (`@Inject`)
2. Robots implement interfaces for composability
3. Context parameters for UI test interactions
4. Screenshot testing with Roborazzi integration
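Point 2 — robots implementing interfaces for composability — can be illustrated with Kotlin interface delegation. The interface and class names below are hypothetical; the real robots are wired with Metro's `@Inject` and drive a Compose UI test plus Roborazzi, which is elided here:

```kotlin
// Hypothetical robot capability interfaces.
interface CaptureScreenRobot {
    fun captureScreenWithChecks(checks: () -> Unit)
}

interface TimetableServerRobot {
    fun setupTimetableServer(status: ServerStatus)
}

enum class ServerStatus { Operational, Error }

class RecordingCaptureScreenRobot : CaptureScreenRobot {
    val captured = mutableListOf<String>()
    override fun captureScreenWithChecks(checks: () -> Unit) {
        checks()                  // run assertions first
        captured += "screenshot"  // then record the capture (Roborazzi in the real robot)
    }
}

class FakeTimetableServerRobot : TimetableServerRobot {
    var status: ServerStatus? = null
    override fun setupTimetableServer(status: ServerStatus) { this.status = status }
}

// The screen robot composes smaller robots by interface delegation,
// so capabilities can be mixed and matched per screen.
class TimetableScreenRobot(
    capture: CaptureScreenRobot,
    server: TimetableServerRobot,
) : CaptureScreenRobot by capture, TimetableServerRobot by server
```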
Red flags:
- Test interaction logic duplicated across tests instead of shared via robot interfaces
- Robots constructed manually rather than injected with `@Inject`
- UI changes without corresponding Roborazzi screenshot verification
Verify multiplatform test setup:
1. Tests run on Android, JVM, and iOS using expect/actual
2. Test dependency graphs created with Metro
3. Proper platform-specific test runners
Review KMP implementation:
1. Proper expect/actual declarations
2. Platform-specific implementations where needed
3. Shared business logic in commonMain
4. Platform-specific UI adaptations when necessary
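The expect/actual mechanism in point 1 spreads one declaration across source sets. The following fragment (not compilable as a single file, and the function name is hypothetical) shows the shape reviewers should expect:

```kotlin
// commonMain: the shared declaration
expect fun platformName(): String

// androidMain: the Android actual
actual fun platformName(): String = "Android"

// iosMain: the iOS actual
actual fun platformName(): String = "iOS"
```

Shared callers in `commonMain` depend only on the `expect` declaration; each platform supplies its `actual` at compile time.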
Validate Compose usage:
1. Proper resource handling with Compose Resources
2. Platform-specific UI adaptations
3. Consistent design system usage
4. Accessibility considerations
Ensure robust error handling:
1. Proper error boundaries in data layer
2. User-friendly error states in UI
3. Graceful degradation for network issues
4. Proper loading states
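Points 2–4 can be modeled with a sealed UI state that maps raw failures to user-friendly, retryable error states instead of letting exceptions reach the UI. This is a generic sketch, not the project's actual types:

```kotlin
// Hypothetical sealed UI state covering loading, success, and error.
sealed interface LoadState<out T> {
    data object Loading : LoadState<Nothing>
    data class Success<T>(val data: T) : LoadState<T>
    data class Error(val userMessage: String, val retryable: Boolean) : LoadState<Nothing>
}

// Degrade gracefully: network failures become retryable, user-readable errors.
fun <T> fromResult(result: Result<T>): LoadState<T> =
    result.fold(
        onSuccess = { LoadState.Success(it) },
        onFailure = { e ->
            when (e) {
                is java.io.IOException ->
                    LoadState.Error("Network unavailable. Please try again.", retryable = true)
                else ->
                    LoadState.Error("Something went wrong.", retryable = false)
            }
        },
    )
```

A reviewer can then check that every screen renders all three branches rather than only the happy path.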
Validate API patterns:
1. Use `@ContributesBinding` for implementations
2. Proper QueryKey implementations with caching
3. Network error handling
4. Data transformation patterns
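Soil's real `QueryKey`/`SwrClient` APIs are not reproduced here; the following is a rough, hypothetical sketch of the shape such a key takes — a stable id used as a cache key plus a fetch function (suspending in the real library; plain here for brevity) — together with a toy cache loosely analogous to what a client manages for you:

```kotlin
// Hypothetical key interface: stable id + fetch.
interface QueryKey<T> {
    val id: String
    fun fetch(): T
}

data class Timetable(val sessionIds: List<String>)

class TimetableQueryKey(
    private val api: () -> Timetable,  // stands in for the injected network call
) : QueryKey<Timetable> {
    override val id = "timetable"      // stable id makes the result cacheable
    override fun fetch(): Timetable = api()
}

// Toy runtime cache keyed by QueryKey.id.
class QueryCache {
    private val cache = mutableMapOf<String, Any?>()
    fun <T> getOrFetch(key: QueryKey<T>): T {
        @Suppress("UNCHECKED_CAST")
        return cache.getOrPut(key.id) { key.fetch() } as T
    }
    fun invalidate(id: String) { cache.remove(id) }
}
```

The review point is the separation: keys describe *what* to fetch and how to cache it; callers never invoke the network directly.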
Review caching implementation:
1. Proper use of Soil's `SwrClient` for runtime caching
2. Preload data implementation where appropriate
3. Cache invalidation strategies
Check Compose optimizations:
1. Proper state management to avoid unnecessary recompositions
2. Use of `remember` and `derivedStateOf` appropriately
3. Lazy loading for large lists
4. Image loading optimizations
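Points 1–3 together look like the following Compose fragment (not runnable standalone; the `Session` model and composable names are hypothetical, and the usual Compose imports are assumed):

```kotlin
@Composable
fun SessionList(sessions: List<Session>, query: String) {
    // Recompute the filtered list only when its inputs change,
    // not on every recomposition.
    val filtered = remember(sessions, query) {
        sessions.filter { query in it.title }
    }
    LazyColumn {
        // Stable keys let Compose reuse item state and animations
        // when the list changes.
        items(filtered, key = { it.id }) { session ->
            Text(session.title)
        }
    }
}
```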
Review memory usage:
1. Proper lifecycle management
2. Resource cleanup in effects
3. Avoiding memory leaks in long-running operations
Ensure secure data handling:
1. No sensitive data in logs
2. Proper API key management
3. Secure network communications
Validate code structure:
1. Proper module organization following feature-based architecture
2. Clear separation of concerns
3. Consistent code style and formatting
Avoid these common anti-patterns:
1. **Repository Pattern**: Use Soil data fetching instead
2. **Composition Locals**: Use context parameters for DI
3. **Direct API Calls**: Use QueryKeys for data fetching
4. **Manual Caching**: Rely on Soil's automatic caching
5. **Imperative Navigation**: Use proper Navigation3 patterns
6. **Mixed UI/Business Logic**: Maintain clear separation with presenters
Quality gates — before approving changes, verify:
1. All tests passing across all platforms
2. No compiler warnings or linting errors
3. Screenshot tests verified with Roborazzi
4. Performance regression checks completed
5. Accessibility considerations addressed
Key review questions:
1. **Architecture**: Does this follow our Metro DI and Soil data patterns?
2. **Testing**: Are there comprehensive tests covering happy path and error states?
3. **Performance**: Could this cause performance issues or memory leaks?
4. **Maintainability**: Is the code easy to understand and modify?
5. **Multiplatform**: Does this work correctly across all target platforms?
6. **User Experience**: Does this provide a good user experience with proper loading/error states?
This review guide helps maintain consistent quality across the DroidKaigi 2025 codebase while preserving the architectural patterns that keep the project maintainable.