- Add onVoiceDetected callback to useSpeechRecognition hook
  - Triggered on first interim result (voice activity detected)
  - Uses voiceDetectedRef to ensure callback fires only once per session
  - Flag reset on session start/end
- Connect STT to VoiceContext in _layout.tsx (see sketch below)
  - Use useSpeechRecognition with onVoiceDetected callback
  - Call interruptIfSpeaking() when voice detected during 'speaking' state
  - Forward STT results to VoiceContext (setTranscript, sendTranscript)
  - Start/stop STT based on isListening state
- Export interruptIfSpeaking from VoiceContext provider
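A minimal sketch of the _layout.tsx wiring, assuming the hook and context expose the names listed above (file paths, the useVoice accessor, and exact callback signatures are illustrative, not the actual implementation):

```ts
// _layout.tsx (sketch) -- bridge STT results into VoiceContext
import { useEffect } from 'react';
import { useSpeechRecognition } from '../hooks/useSpeechRecognition'; // path illustrative
import { useVoice } from '../contexts/VoiceContext';                  // accessor name illustrative

function SpeechBridge() {
  const { state, isListening, interruptIfSpeaking, setTranscript, sendTranscript } = useVoice();

  const { start, stop } = useSpeechRecognition({
    // Fired by the hook on the first interim result of a session
    // (guarded internally by voiceDetectedRef).
    onVoiceDetected: () => {
      if (state === 'speaking') interruptIfSpeaking(); // barge-in
    },
    onResult: (text: string, isFinal: boolean) => {
      setTranscript(text);            // live interim transcript
      if (isFinal) sendTranscript();  // forward final result
    },
  });

  // Start/stop recognition whenever VoiceContext toggles listening.
  useEffect(() => {
    if (isListening) start();
    else stop();
  }, [isListening, start, stop]);

  return null;
}
```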
Create a reusable hook wrapping expo-speech that provides (sketch below):
- speak/stop controls
- isSpeaking state tracking
- Voice listing support
- Promise-based API for async flows
- Proper cleanup on unmount
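A minimal sketch of such a hook (the file and hook names are hypothetical; the expo-speech calls are the library's documented API):

```ts
// hooks/useTextToSpeech.ts (sketch) -- hook name is hypothetical
import { useCallback, useEffect, useRef, useState } from 'react';
import * as Speech from 'expo-speech';

export function useTextToSpeech() {
  const [isSpeaking, setIsSpeaking] = useState(false);
  const mounted = useRef(true);

  // Promise resolves when the utterance finishes or is stopped.
  const speak = useCallback((text: string, options?: Speech.SpeechOptions) => {
    return new Promise<void>((resolve, reject) => {
      setIsSpeaking(true);
      Speech.speak(text, {
        ...options,
        onDone: () => {
          if (mounted.current) setIsSpeaking(false);
          resolve();
        },
        onStopped: () => {
          if (mounted.current) setIsSpeaking(false);
          resolve();
        },
        onError: (e) => {
          if (mounted.current) setIsSpeaking(false);
          reject(e);
        },
      });
    });
  }, []);

  const stop = useCallback(() => Speech.stop(), []);
  const getVoices = useCallback(() => Speech.getAvailableVoicesAsync(), []);

  // Stop any in-flight speech and ignore late callbacks on unmount.
  useEffect(() => {
    return () => {
      mounted.current = false;
      Speech.stop();
    };
  }, []);

  return { speak, stop, getVoices, isSpeaking };
}
```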
Remove a LiveKit hook that was no longer used after switching to speech recognition, along with the outdated comment referencing it in _layout.tsx.
Root cause: audio from the remote participant (Julia AI) was not playing
because room.startAudio() was never called after connecting. LiveKit
requires this call to enable WebRTC audio playback; without it the remote
track is subscribed but stays silent. The fix matches the working
implementation in debug.tsx (Robert version).
Changes:
- Add room.startAudio() call after room.connect() (sketch below)
- Add canPlayAudio state tracking
- Add proper error handling for startAudio
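A sketch of the fix, assuming the hook tracks canPlayAudio in React state (the helper name and setter are illustrative; Room, startAudio, canPlaybackAudio, and AudioPlaybackStatusChanged are livekit-client APIs):

```ts
// Sketch -- enable remote audio after connecting
import { Room, RoomEvent } from 'livekit-client';

async function connectWithAudio(
  url: string,
  token: string,
  setCanPlayAudio: (ok: boolean) => void, // e.g. a setState from the hook
): Promise<Room> {
  const room = new Room();
  await room.connect(url, token);

  // Required before remote audio will play; may be rejected by the
  // platform's autoplay policy until there is a user gesture.
  try {
    await room.startAudio();
    setCanPlayAudio(true);
  } catch (err) {
    console.warn('room.startAudio() failed; retry on user interaction', err);
    setCanPlayAudio(false);
  }

  // Keep the flag in sync if playback status changes later.
  room.on(RoomEvent.AudioPlaybackStatusChanged, () => {
    setCanPlayAudio(room.canPlaybackAudio);
  });

  return room;
}
```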
- Remove expo-speech (TTS) - not used
- Remove expo-speech-recognition (STT) - not used
- Delete dead code: hooks/useSpeechRecognition.ts
These packages add native audio modules that can conflict with
LiveKit's AudioSession management on iOS.
- Voice agent now extracts deploymentId and beneficiaryNamesDict from participant metadata passed via the LiveKit token (token sketch below)
- WellNuoLLM class accepts dynamic deployment_id and beneficiary_names_dict
- API calls now include personalized beneficiary names for better responses
- Text chat already has this functionality (verified)
- Updated LiveKit agent deployed to cloud
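For reference, a sketch of how such metadata is typically attached when minting the token server-side with livekit-server-sdk (the helper, room name, and env var names are illustrative; the JSON field names follow the bullet above):

```ts
// Server-side token minting (sketch) -- the agent reads participant.metadata
import { AccessToken } from 'livekit-server-sdk';

async function mintToken(
  identity: string,
  deploymentId: string,
  beneficiaryNamesDict: Record<string, string>,
): Promise<string> {
  const at = new AccessToken(
    process.env.LIVEKIT_API_KEY!,
    process.env.LIVEKIT_API_SECRET!,
    {
      identity,
      // Metadata travels with the participant into the room.
      metadata: JSON.stringify({ deploymentId, beneficiaryNamesDict }),
    },
  );
  at.addGrant({ roomJoin: true, room: 'julia-call' }); // room name illustrative
  return await at.toJwt();
}
```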
Also includes:
- Speaker toggle button in voice call UI
- Keyboard controller integration for chat
- Various UI improvements
Changes:
- Add CallManager singleton to ensure only one call per device at a time (sketch below)
- Hide Debug tab from production (href: null)
- Remove speaker/earpiece toggle button (always use speaker)
- Agent uses voice_ask API (fast ~1 sec latency)
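A minimal sketch of the singleton (member names beyond CallManager are illustrative):

```ts
// CallManager.ts (sketch) -- enforce a single active call per device
class CallManager {
  private static instance: CallManager;
  private activeCallId: string | null = null;

  static shared(): CallManager {
    if (!CallManager.instance) CallManager.instance = new CallManager();
    return CallManager.instance;
  }

  /** Returns false if another call is already active. */
  tryStartCall(callId: string): boolean {
    if (this.activeCallId !== null) return false;
    this.activeCallId = callId;
    return true;
  }

  endCall(callId: string): void {
    if (this.activeCallId === callId) this.activeCallId = null;
  }
}

// Callers share one instance, so a second start attempt is rejected
// until the first call ends.
export default CallManager.shared();
```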
NOT TESTED ON REAL DEVICE - simulator-only verification
Components:
- LiveKit Cloud agent deployment (julia-agent/julia-ai/)
- React Native LiveKit client (hooks/useLiveKitRoom.ts; sketch below)
- Voice call screen with audio session management
- WellNuo voice_ask API integration in Python agent
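A minimal sketch of how such a hook typically brackets the connection with AudioSession management (assumes registerGlobals() has run at app startup; the hook body is illustrative, not the actual implementation):

```ts
// hooks/useLiveKitRoom.ts (sketch) -- audio session brackets the connection
import { useEffect, useState } from 'react';
import { AudioSession } from '@livekit/react-native';
import { Room } from 'livekit-client';

export function useLiveKitRoom(url: string, token: string) {
  const [room] = useState(() => new Room());

  useEffect(() => {
    let cancelled = false;
    (async () => {
      await AudioSession.startAudioSession(); // configure iOS/Android audio
      if (cancelled) return;
      await room.connect(url, token);
      await room.localParticipant.setMicrophoneEnabled(true);
    })();

    return () => {
      cancelled = true;
      room.disconnect();
      AudioSession.stopAudioSession();
    };
  }, [url, token]);

  return room;
}
```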
Tech stack:
- LiveKit Cloud for agent hosting
- @livekit/react-native SDK
- Deepgram STT/TTS (via LiveKit Cloud)
- Silero VAD for voice activity detection
Known issues:
- Microphone permissions may need manual testing on real device
- LiveKit audio playback not verified on physical hardware
- Agent greeting audio not confirmed working end-to-end
Next steps:
- Test on physical iOS device
- Verify microphone capture works
- Confirm TTS audio playback
- Test full conversation loop
- Removed voice input features
- Simplified profile page (only legal links and logout)
- Chat with AI context working
- Auto-select first beneficiary
- Dashboard WebView intact