Problem:
- Multiple rapid calls to sendTranscript() created race conditions
- Old requests kept using their captured local `abortController` variable
- Responses from superseded requests could still be processed
- Session stop didn't reliably prevent pending responses
Solution:
- Changed abort checks from `abortController.signal.aborted` to
`abortControllerRef.current !== abortController`
- Ensures each request checks whether it is still the active one, not merely whether it was aborted (see the sketch after the Changes list)
- Added checks at 4 critical points: before API call, after API call,
before retry, and after retry
Changes:
- VoiceContext.tsx:268 - Check before initial API call
- VoiceContext.tsx:308 - Check after API response
- VoiceContext.tsx:344 - Check before retry
- VoiceContext.tsx:359 - Check after retry response
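A minimal sketch of the pattern at those four points; `callVoiceApi` and `processResponse` are hypothetical stand-ins for the app's helpers, and the ref is a `useRef` inside the real provider:

```typescript
// Hypothetical helpers standing in for the app's API layer.
declare function callVoiceApi(text: string, signal: AbortSignal): Promise<Response>;
declare function processResponse(res: Response): Promise<void>;

// A plain object here; a useRef inside the real VoiceContext provider.
const abortControllerRef: { current: AbortController | null } = { current: null };

async function sendTranscript(transcript: string): Promise<void> {
  const abortController = new AbortController();
  abortControllerRef.current = abortController; // this call becomes the active one

  // 1. Before the initial API call (other awaits may run before this point in the real code)
  if (abortControllerRef.current !== abortController) return;
  let response = await callVoiceApi(transcript, abortController.signal);

  // 2. After the API response: a newer sendTranscript may have superseded this one
  if (abortControllerRef.current !== abortController) return;

  if (!response.ok) {
    // 3. Before the retry
    if (abortControllerRef.current !== abortController) return;
    response = await callVoiceApi(transcript, abortController.signal);
    // 4. After the retry response
    if (abortControllerRef.current !== abortController) return;
  }

  await processResponse(response);
}
```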
Testing:
- Added Jest test configuration
- Added a test suite covering 5 race condition scenarios (one is sketched below)
- Added manual testing documentation
- Verified with TypeScript linting (no new errors)
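One of those scenarios, the superseded-request case, sketched as a self-contained Jest test (the harness below is illustrative, not the project's actual test file):

```typescript
describe('sendTranscript race conditions', () => {
  it('ignores responses from superseded requests', async () => {
    const ref: { current: AbortController | null } = { current: null };
    const processed: string[] = [];

    // Simplified stand-in for sendTranscript with a simulated API latency.
    async function sendTranscript(text: string, delayMs: number) {
      const controller = new AbortController();
      ref.current = controller; // this call becomes the active one
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      if (ref.current !== controller) return; // superseded: drop the response
      processed.push(text);
    }

    // 'first' resolves after 'second', so its response must be dropped.
    await Promise.all([sendTranscript('first', 50), sendTranscript('second', 10)]);
    expect(processed).toEqual(['second']);
  });
});
```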
- FAB button now correctly stops session during speaking/processing states
- Echo prevention: STT stopped during TTS playback, results ignored during speaking
- Chat TTS only speaks when voice session is active (no auto-speak for text chat)
- Session stop now aborts in-flight API requests and prevents race conditions
- STT restarts after TTS with an 800ms delay so iOS can release audio focus (see the sketch after this list)
- Pending interrupt transcript processed after TTS completion
- ChatContext added for message persistence across tab navigation
- VoiceFAB redesigned with state-based animations
- `console.error` replaced with `console.warn` across the voice pipeline
- `no-speech` STT errors silenced (normal silence behavior)
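A sketch of how the echo prevention, the 800ms restart delay, and the pending interrupt transcript sequence together; `stopSTT`, `startSTT`, `speak`, and `pendingTranscriptRef` are assumed names, not the app's actual API:

```typescript
declare function stopSTT(): Promise<void>;
declare function startSTT(): Promise<void>;
declare function speak(text: string): Promise<void>; // resolves when playback ends
declare function sendTranscript(text: string): Promise<void>;
const pendingTranscriptRef: { current: string | null } = { current: null };

async function speakResponse(text: string): Promise<void> {
  await stopSTT(); // echo prevention: don't transcribe our own TTS audio
  await speak(text);
  // 800ms pause lets iOS release audio focus before the mic reopens.
  await new Promise((resolve) => setTimeout(resolve, 800));
  await startSTT();
  // If the user interrupted during playback, process that transcript now.
  const pending = pendingTranscriptRef.current;
  if (pending) {
    pendingTranscriptRef.current = null;
    await sendTranscript(pending);
  }
}
```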
- Remove expo-speech (TTS) - not used
- Remove expo-speech-recognition (STT) - not used
- Delete dead code: hooks/useSpeechRecognition.ts
These packages add native audio modules that can conflict with
LiveKit's AudioSession management on iOS.
- Voice agent now extracts deploymentId and beneficiaryNamesDict from
participant metadata passed via the LiveKit token (see the token sketch after this list)
- WellNuoLLM class accepts dynamic deployment_id and beneficiary_names_dict
- API calls now include personalized beneficiary names for better responses
- Text chat already has this functionality (verified)
- Updated LiveKit agent deployed to cloud
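A sketch of the token side, assuming it is minted server-side with livekit-server-sdk; the env var names, function name, and room naming scheme here are illustrative:

```typescript
import { AccessToken } from 'livekit-server-sdk';

// Mint a LiveKit token whose participant metadata carries the per-user
// fields the Python agent parses and passes to WellNuoLLM.
async function createVoiceToken(
  userId: string,
  deploymentId: string,
  beneficiaryNamesDict: Record<string, string>,
): Promise<string> {
  const token = new AccessToken(
    process.env.LIVEKIT_API_KEY!,
    process.env.LIVEKIT_API_SECRET!,
    {
      identity: userId,
      metadata: JSON.stringify({ deploymentId, beneficiaryNamesDict }),
    },
  );
  token.addGrant({ roomJoin: true, room: `voice-${userId}` });
  return token.toJwt();
}
```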
Also includes:
- Speaker toggle button in voice call UI
- Keyboard controller integration for chat
- Various UI improvements
- Removed the speaker button's empty space (2-button centered layout)
- Removed "Asteria voice" text from the voice call screen
- Fixed chat input visibility when the keyboard is open
- Added a keyboard-show listener for auto-scroll (see the sketch below)
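The auto-scroll wiring, as a minimal hook; the hook and ref names are illustrative, not the app's actual code:

```typescript
import { useEffect, type RefObject } from 'react';
import { Keyboard, FlatList } from 'react-native';

// Scroll the chat list to the latest message whenever the keyboard appears.
export function useScrollOnKeyboardShow(listRef: RefObject<FlatList>) {
  useEffect(() => {
    const sub = Keyboard.addListener('keyboardDidShow', () => {
      listRef.current?.scrollToEnd({ animated: true });
    });
    return () => sub.remove();
  }, [listRef]);
}
```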
NOT TESTED ON REAL DEVICE - verified in the simulator only
Components:
- LiveKit Cloud agent deployment (julia-agent/julia-ai/)
- React Native LiveKit client (hooks/useLiveKitRoom.ts)
- Voice call screen with audio session management (see the sketch after the tech stack list)
- WellNuo voice_ask API integration in Python agent
Tech stack:
- LiveKit Cloud for agent hosting
- @livekit/react-native SDK
- Deepgram STT/TTS (via LiveKit Cloud)
- Silero VAD for voice activity detection
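A minimal shape of the client side, following the LiveKit React Native quickstart; the url/token props are assumed to come from the app's own token endpoint:

```typescript
import { useEffect } from 'react';
import { AudioSession, LiveKitRoom, registerGlobals } from '@livekit/react-native';

registerGlobals(); // install WebRTC globals once at app startup

export function VoiceCallScreen({ url, token }: { url: string; token: string }) {
  useEffect(() => {
    // Configure the iOS audio session before connecting; release it on unmount.
    const start = async () => {
      await AudioSession.startAudioSession();
    };
    start();
    return () => {
      AudioSession.stopAudioSession();
    };
  }, []);

  return (
    <LiveKitRoom serverUrl={url} token={token} connect audio video={false}>
      {/* FAB, call state UI, etc. */}
    </LiveKitRoom>
  );
}
```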
Known issues:
- Microphone permissions may need manual testing on a real device
- LiveKit audio playback not verified on physical hardware
- Agent greeting audio not confirmed working end-to-end
Next steps:
- Test on physical iOS device
- Verify microphone capture works
- Confirm TTS audio playback
- Test full conversation loop
- Removed voice input features
- Simplified profile page (only legal links and logout)
- Chat with AI context verified working
- Auto-selects the first beneficiary
- Dashboard WebView left intact