- FAB button now correctly stops session during speaking/processing states
- Echo prevention: STT stopped during TTS playback, results ignored during speaking
- Chat TTS only speaks when voice session is active (no auto-speak for text chat)
- Session stop now aborts in-flight API requests and prevents race conditions
- STT restarts after TTS with 800ms delay for audio focus release
- Pending interrupt transcript processed after TTS completion
- ChatContext added for message persistence across tab navigation
- VoiceFAB redesigned with state-based animations
- console.error replaced with console.warn across voice pipeline
- no-speech STT errors silenced (normal silence behavior)
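A minimal sketch of the echo-prevention and delayed-restart behavior described above (hook and callback names are assumptions, not the actual implementation):

```typescript
// Hypothetical excerpt: stop STT while TTS plays, drop echoed results,
// and restart STT only after audio focus has been released.
import { useRef, useCallback } from 'react';

const STT_RESTART_DELAY_MS = 800; // give the native layer time to release audio focus

export function useEchoSafeRestart(startSTT: () => void, stopSTT: () => void) {
  const isSpeakingRef = useRef(false); // true while TTS is playing

  const onTTSStart = useCallback(() => {
    isSpeakingRef.current = true;
    stopSTT(); // don't let the mic hear Julia's own voice
  }, [stopSTT]);

  const onTTSEnd = useCallback(() => {
    isSpeakingRef.current = false;
    // Delay the restart so the TTS player can release audio focus first.
    setTimeout(() => startSTT(), STT_RESTART_DELAY_MS);
  }, [startSTT]);

  // STT results that arrive while TTS is still playing are echoes; drop them.
  const onSTTResult = useCallback((transcript: string, handle: (t: string) => void) => {
    if (isSpeakingRef.current) return;
    handle(transcript);
  }, []);

  return { onTTSStart, onTTSEnd, onSTTResult };
}
```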
- Enable STT listening during TTS playback to detect user interruption
- When voice detected during Julia's speech, immediately stop TTS
- Store interrupted transcript and process it after TTS stops
- Remove 'speaking' status check from STT watchdog to allow parallel STT+TTS
- Add pending transcript mechanism to handle race condition between
TTS stop and STT final result
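The pending-transcript race in the last bullet could be handled roughly like this (a sketch; ref and function names are assumed):

```typescript
import { useRef, useCallback } from 'react';

export function usePendingInterrupt(stopTTS: () => void, sendTranscript: (t: string) => void) {
  const pendingTranscriptRef = useRef<string | null>(null);
  const ttsActiveRef = useRef(false);

  // A final STT result may arrive while TTS is still winding down: park it.
  const onFinalResult = useCallback((transcript: string) => {
    if (ttsActiveRef.current) {
      pendingTranscriptRef.current = transcript;
      stopTTS();
      return;
    }
    sendTranscript(transcript);
  }, [stopTTS, sendTranscript]);

  // Once TTS has actually stopped, flush whatever the user said over it.
  const onTTSStopped = useCallback(() => {
    ttsActiveRef.current = false;
    const pending = pendingTranscriptRef.current;
    if (pending) {
      pendingTranscriptRef.current = null;
      sendTranscript(pending);
    }
  }, [sendTranscript]);

  const onTTSStarted = useCallback(() => { ttsActiveRef.current = true; }, []);

  return { onFinalResult, onTTSStopped, onTTSStarted };
}
```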
Add watchdog mechanism to ensure STT continues when switching tabs:
- Monitor STT state every 500ms and restart if session active but STT stopped
- Handle AppState changes to restart STT when app returns to foreground
- Prevents session interruption due to native audio session changes
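A sketch of such a watchdog, assuming a `sessionActiveRef` flag and start/stop helpers like those used elsewhere in the pipeline:

```typescript
import { useEffect, type MutableRefObject } from 'react';
import { AppState } from 'react-native';

export function useSTTWatchdog(
  sessionActiveRef: MutableRefObject<boolean>,
  isSTTRunning: () => boolean,
  startSTT: () => void,
) {
  useEffect(() => {
    // Poll every 500 ms: if the session is active but STT has stopped
    // (tab switch, native audio session change), restart it.
    const interval = setInterval(() => {
      if (sessionActiveRef.current && !isSTTRunning()) startSTT();
    }, 500);

    // Also restart when the app returns to the foreground.
    const sub = AppState.addEventListener('change', (state) => {
      if (state === 'active' && sessionActiveRef.current && !isSTTRunning()) startSTT();
    });

    return () => {
      clearInterval(interval);
      sub.remove();
    };
  }, [sessionActiveRef, isSTTRunning, startSTT]);
}
```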
Adds logic to detect when voice status transitions from 'speaking' to
'listening' and automatically restarts the speech recognition. This
enables continuous voice conversation flow - after Julia speaks her
response, the app immediately starts listening for the next user input.
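Roughly, with the status coming from VoiceContext (names assumed):

```typescript
import { useEffect, useRef } from 'react';

type VoiceStatus = 'idle' | 'listening' | 'processing' | 'speaking';

export function useAutoRelisten(status: VoiceStatus, startSTT: () => void) {
  const prevStatusRef = useRef<VoiceStatus>('idle');

  useEffect(() => {
    // Julia just finished speaking: go straight back to listening.
    if (prevStatusRef.current === 'speaking' && status === 'listening') {
      startSTT();
    }
    prevStatusRef.current = status;
  }, [status, startSTT]);
}
```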
- Add sessionActiveRef to track when voice session is active
- Add shouldRestartSTTRef to auto-restart STT after it ends
- STT now continues listening during TTS playback
- Voice detection callback checks both status and isSpeaking
- Final results during TTS are ignored (user interrupted)
- STT automatically restarts after ending if session is still active
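The two refs could interact like this (a sketch, assuming the STT engine exposes an `onEnd` callback):

```typescript
import { useRef, useCallback } from 'react';

export function useSTTSession(startSTT: () => void) {
  const sessionActiveRef = useRef(false);     // true between session start and stop
  const shouldRestartSTTRef = useRef(false);  // set whenever STT should come back up

  const startSession = useCallback(() => {
    sessionActiveRef.current = true;
    shouldRestartSTTRef.current = true;
    startSTT();
  }, [startSTT]);

  const stopSession = useCallback(() => {
    sessionActiveRef.current = false;
    shouldRestartSTTRef.current = false;
  }, []);

  // Native STT engines end recognition on their own (silence, tab switch, etc.);
  // if the session is still active, spin it back up.
  const onSTTEnd = useCallback(() => {
    if (sessionActiveRef.current && shouldRestartSTTRef.current) startSTT();
  }, [startSTT]);

  return { sessionActiveRef, startSession, stopSession, onSTTEnd };
}
```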
- Add onVoiceDetected callback to useSpeechRecognition hook
- Triggered on first interim result (voice activity detected)
- Uses voiceDetectedRef to ensure callback fires only once per session
- Reset flag on session start/end
- Connect STT to VoiceContext in _layout.tsx
- Use useSpeechRecognition with onVoiceDetected callback
- Call interruptIfSpeaking() when voice detected during 'speaking' state
- Forward STT results to VoiceContext (setTranscript, sendTranscript)
- Start/stop STT based on isListening state
- Export interruptIfSpeaking from VoiceContext provider
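Roughly how the hook option and the _layout.tsx wiring could fit together (a sketch; the real hook signature may differ):

```typescript
import { useRef, useCallback } from 'react';

// Hook side: fire onVoiceDetected once per recognition session,
// on the first interim result.
export function useVoiceDetection(onVoiceDetected?: () => void) {
  const voiceDetectedRef = useRef(false);

  const onInterimResult = useCallback(() => {
    if (!voiceDetectedRef.current) {
      voiceDetectedRef.current = true;
      onVoiceDetected?.();
    }
  }, [onVoiceDetected]);

  const resetDetection = useCallback(() => {
    voiceDetectedRef.current = false; // reset on session start/end
  }, []);

  return { onInterimResult, resetDetection };
}

// Caller side (_layout.tsx): interrupt Julia if the user starts talking over her.
export function makeOnVoiceDetected(
  getStatus: () => string,
  interruptIfSpeaking: () => void,
) {
  return () => {
    if (getStatus() === 'speaking') interruptIfSpeaking();
  };
}
```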
- FAB now toggles between idle and listening states
- Green background (idle) → Red background (listening)
- Icon switches between mic-outline and mic
- Connects to VoiceContext for state management
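A plausible shape for the toggle (icon names as described above; the exact colors and layout values are assumptions):

```tsx
import React from 'react';
import { Pressable, StyleSheet } from 'react-native';
import { Ionicons } from '@expo/vector-icons';

type Props = { isListening: boolean; onPress: () => void };

export function VoiceFAB({ isListening, onPress }: Props) {
  return (
    <Pressable
      onPress={onPress}
      style={[styles.fab, { backgroundColor: isListening ? '#E53935' : '#43A047' }]}
    >
      <Ionicons name={isListening ? 'mic' : 'mic-outline'} size={28} color="#fff" />
    </Pressable>
  );
}

const styles = StyleSheet.create({
  fab: {
    position: 'absolute',
    bottom: 90,
    right: 20,
    width: 56,
    height: 56,
    borderRadius: 28,
    alignItems: 'center',
    justifyContent: 'center',
    elevation: 4,
  },
});
```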
Add floating action button for voice calls to the tab bar layout:
- Import and render VoiceFAB component
- Add placeholder handler for voice call initiation
- Wrap Tabs in View to properly position FAB overlay
- FAB automatically hides when a call is active (via VoiceCallContext)
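The layout change amounts to something like this (a sketch; the import path and placeholder handler are assumptions):

```tsx
import React from 'react';
import { View } from 'react-native';
import { Tabs } from 'expo-router';
import { VoiceFAB } from '../components/VoiceFAB'; // path is an assumption

export default function TabLayout() {
  return (
    // Wrapping Tabs in a View lets the FAB render as an absolute overlay.
    <View style={{ flex: 1 }}>
      <Tabs>{/* ...tab screens... */}</Tabs>
      <VoiceFAB isListening={false} onPress={() => { /* placeholder handler */ }} />
    </View>
  );
}
```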
- Configure LiveKit Expo plugin with audioType: "media" in app.json
  (this forces speaker output on Android instead of the earpiece)
- Remove microphone icon from voice messages in chat
- Remove audio output picker button (no longer needed)
- Clean up audioSession.ts configuration
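For reference, the plugin entry from the first bullet, shown here as the TypeScript app.config.ts equivalent of the app.json change (nesting audioType under `android` follows the commit text and may differ from the plugin's current schema):

```typescript
import { ExpoConfig } from 'expo/config';

const config: ExpoConfig = {
  name: 'julia', // placeholder
  slug: 'julia', // placeholder
  plugins: [
    [
      '@livekit/react-native-expo-plugin',
      { android: { audioType: 'media' } }, // media stream -> speaker output on Android
    ],
  ],
};

export default config;
```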
- Add SafeAreaProvider wrapper in root layout for reliable insets on Android
- Add minimum 16px bottom padding on Android to prevent overlap with
gesture navigation or software buttons (Samsung, Pixel, etc.)
- Keep 10px minimum for other platforms
Use useSafeAreaInsets() to dynamically calculate tab bar height and bottom
padding instead of hardcoded values. This ensures the tab bar properly
accounts for Android navigation buttons (Samsung) and iOS home indicator.
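Both safe-area changes boil down to something like the following (a sketch; the base tab bar height is an assumption):

```typescript
import { Platform } from 'react-native';
import { useSafeAreaInsets } from 'react-native-safe-area-context';

const TAB_BAR_CONTENT_HEIGHT = 60; // assumed base height of the tab bar content

export function useTabBarMetrics() {
  const insets = useSafeAreaInsets();

  // Never less than 16px on Android (gesture nav / software buttons),
  // 10px elsewhere; otherwise follow the real inset (iOS home indicator, etc.).
  const minBottom = Platform.OS === 'android' ? 16 : 10;
  const bottomPadding = Math.max(insets.bottom, minBottom);

  return {
    height: TAB_BAR_CONTENT_HEIGHT + bottomPadding,
    paddingBottom: bottomPadding,
  };
}
```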
- Fix race condition where connect() was called before beneficiaryData loaded
- Add connectCalledRef to prevent duplicate connect calls
- Wait for beneficiaryData.deploymentId before initiating call
- Add 5s timeout fallback for edge cases (API failure/no beneficiaries)
- Hide Debug tab, show only Julia tab in navigation
- Add Android keyboard layout fix for password fields
- Bump version to 1.0.5
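The guarded connect could look roughly like this (a sketch; the data shape and the 5-second fallback behavior are inferred from the bullets above):

```typescript
import { useEffect, useRef } from 'react';

type BeneficiaryData = { deploymentId?: string } | null;

export function useGuardedConnect(
  beneficiaryData: BeneficiaryData,
  connect: (deploymentId?: string) => void,
) {
  const connectCalledRef = useRef(false);

  useEffect(() => {
    if (connectCalledRef.current) return;

    // Normal path: wait for the deployment id before dialing.
    if (beneficiaryData?.deploymentId) {
      connectCalledRef.current = true;
      connect(beneficiaryData.deploymentId);
      return;
    }

    // Fallback: if the API fails or there are no beneficiaries,
    // still attempt the call after 5 seconds.
    const timeout = setTimeout(() => {
      if (!connectCalledRef.current) {
        connectCalledRef.current = true;
        connect(undefined);
      }
    }, 5000);
    return () => clearTimeout(timeout);
  }, [beneficiaryData, connect]);
}
```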
Changes:
- Add CallManager singleton to ensure only 1 call per device at a time
- Hide Debug tab from production (href: null)
- Remove speaker/earpiece toggle button (always use speaker)
- Agent uses voice_ask API (fast ~1 sec latency)
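A minimal shape for such a singleton (a sketch; method names are assumed):

```typescript
// Ensures at most one active call per device at a time.
class CallManager {
  private static instance: CallManager | null = null;
  private activeCallId: string | null = null;

  static shared(): CallManager {
    if (!CallManager.instance) CallManager.instance = new CallManager();
    return CallManager.instance;
  }

  /** Returns false if another call is already running. */
  tryStartCall(callId: string): boolean {
    if (this.activeCallId) return false;
    this.activeCallId = callId;
    return true;
  }

  endCall(callId: string): void {
    if (this.activeCallId === callId) this.activeCallId = null;
  }

  get hasActiveCall(): boolean {
    return this.activeCallId !== null;
  }
}

export default CallManager;
```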
- Changed audioStreamType from 'voiceCall' to 'music' on Android
  - voiceCall stream defaults to earpiece
  - music stream defaults to speaker
- Added Debug tab to test voice calls with detailed logs
- Added speaker/earpiece toggle button with proper stream switching
- Full Android AudioSession support for LiveKit voice calls
audioSession.ts:
- configureAudioForVoiceCall: uses music/media for speaker output
- setAudioOutput: switches between music (speaker) and voiceCall (earpiece)
- reconfigureAudioForPlayback: ensures speaker output on Android
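A sketch of these three helpers, assuming @livekit/react-native's AudioSession API and its Android audio-type presets (option names may differ between SDK versions):

```typescript
import { AudioSession, AndroidAudioTypePresets } from '@livekit/react-native';
import { Platform } from 'react-native';

// Route audio through the 'media' stream so Android uses the speaker.
export async function configureAudioForVoiceCall(): Promise<void> {
  if (Platform.OS === 'android') {
    await AudioSession.configureAudio({
      android: { audioTypeOptions: AndroidAudioTypePresets.media },
    });
  }
  await AudioSession.startAudioSession();
}

// 'speaker' -> media stream, 'earpiece' -> communication (voice call) stream.
export async function setAudioOutput(output: 'speaker' | 'earpiece'): Promise<void> {
  if (Platform.OS !== 'android') return;
  await AudioSession.configureAudio({
    android: {
      audioTypeOptions:
        output === 'speaker'
          ? AndroidAudioTypePresets.media
          : AndroidAudioTypePresets.communication,
    },
  });
}

// Called before agent/TTS playback to make sure output is still on the speaker.
export async function reconfigureAudioForPlayback(): Promise<void> {
  await setAudioOutput('speaker');
}
```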
debug.tsx:
- Added platform info display
- Added speaker toggle with logging
- Improved UI with control rows
- Enable Chat tab (replace Debug) - text chat with Julia AI
- Add voice call button in chat header and input area
- Add speaker/earpiece toggle in voice-call screen
- setAudioOutput() function for switching audio output
NOT TESTED ON A REAL DEVICE - verified in the simulator only
Components:
- LiveKit Cloud agent deployment (julia-agent/julia-ai/)
- React Native LiveKit client (hooks/useLiveKitRoom.ts)
- Voice call screen with audio session management
- WellNuo voice_ask API integration in Python agent
Tech stack:
- LiveKit Cloud for agent hosting
- @livekit/react-native SDK
- Deepgram STT/TTS (via LiveKit Cloud)
- Silero VAD for voice activity detection
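A minimal shape for hooks/useLiveKitRoom.ts using the livekit-client Room API (a sketch; the real hook likely handles more room events, audio session setup, and cleanup):

```typescript
import { useCallback, useRef, useState } from 'react';
import { Room, RoomEvent } from 'livekit-client';
import { registerGlobals } from '@livekit/react-native';

registerGlobals(); // registers WebRTC globals once for React Native

export function useLiveKitRoom() {
  const roomRef = useRef<Room | null>(null);
  const [connected, setConnected] = useState(false);

  const connect = useCallback(async (url: string, token: string) => {
    const room = new Room();
    room.on(RoomEvent.Disconnected, () => setConnected(false));
    await room.connect(url, token);
    await room.localParticipant.setMicrophoneEnabled(true);
    roomRef.current = room;
    setConnected(true);
  }, []);

  const disconnect = useCallback(async () => {
    await roomRef.current?.disconnect();
    roomRef.current = null;
    setConnected(false);
  }, []);

  return { connect, disconnect, connected };
}
```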
Known issues:
- Microphone permissions may need manual testing on real device
- LiveKit audio playback not verified on physical hardware
- Agent greeting audio not confirmed working end-to-end
Next steps:
- Test on physical iOS device
- Verify microphone capture works
- Confirm TTS audio playback
- Test full conversation loop
Chat screen now supports both:
- Text messaging (keyboard input)
- High-quality Ultravox voice calls (WebRTC)
Features:
- Voice call button in input bar (phone icon)
- Green status bar when call is active
- Transcripts from voice calls appear in chat history
- Voice badge on messages from voice conversation
- Mute button during calls
- Auto-end call when leaving screen
Background audio configured for iOS (audio, voip modes)
- Removed voice input features
- Simplified profile page (only legal links and logout)
- Chat with AI context working
- Auto-select first beneficiary
- Dashboard WebView intact