7 Commits

dccf05947c Add Voice API type to debug UI and logs
Changes:
- Show API function (voice_ask or ask_wellnuo_ai) in Debug tab
- Update console.log to include API type in transcript log
- Add "API Function" field in Status Card with blue color

Now the Debug tab clearly shows which API function is in use, making it easy to verify that the Profile settings are working correctly.
2026-01-28 19:53:53 -08:00
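A minimal sketch of the Status Card row this commit describes. The component name, props, and exact hex color are assumptions; only the "API Function" label, the two possible values, and the blue highlight come from the commit message.

```tsx
// Hypothetical Debug-tab Status Card row. Only the field label, the two
// API values, and the blue color are described in the commit; everything
// else here is illustrative.
import React from 'react';
import { Text, View } from 'react-native';

type VoiceApiType = 'voice_ask' | 'ask_wellnuo_ai';

export function ApiFunctionRow({ apiType }: { apiType: VoiceApiType }) {
  return (
    <View style={{ flexDirection: 'row', justifyContent: 'space-between' }}>
      <Text>API Function</Text>
      {/* Blue per the commit, so the active API stands out */}
      <Text style={{ color: '#2196F3' }}>{apiType}</Text>
    </View>
  );
}
```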
0881a9565d Fix updateVoiceApiType function and add API logging
Fixes error: "updateVoiceApiType is not a function"

Changes:
- Add voiceApiType state to VoiceContext
- Implement updateVoiceApiType callback
- Load saved voice API type from SecureStore on mount
- Use voiceApiType in sendTranscript (instead of hardcoded 'ask_wellnuo_ai')
- Add console.log showing which API type is being used
- Export voiceApiType and updateVoiceApiType in context provider

Now the Voice API selector in Profile works correctly, and the logs show which API function (voice_ask or ask_wellnuo_ai) is being called.
2026-01-28 19:50:00 -08:00
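A sketch of what these VoiceContext additions could look like, assuming expo-secure-store and a 'voiceApiType' storage key (the key name and the default value are guesses); the state, callback, and exported names follow the commit message.

```tsx
// Sketch of the VoiceContext additions. SecureStore key and default are
// assumptions; voiceApiType / updateVoiceApiType come from the commit.
import React, { createContext, useCallback, useEffect, useState } from 'react';
import * as SecureStore from 'expo-secure-store';

type VoiceApiType = 'voice_ask' | 'ask_wellnuo_ai';
const VOICE_API_KEY = 'voiceApiType'; // assumed storage key

export const VoiceContext = createContext<{
  voiceApiType: VoiceApiType;
  updateVoiceApiType: (t: VoiceApiType) => Promise<void>;
} | null>(null);

export function VoiceProvider({ children }: { children: React.ReactNode }) {
  const [voiceApiType, setVoiceApiType] = useState<VoiceApiType>('ask_wellnuo_ai');

  // Load the saved voice API type once on mount
  useEffect(() => {
    SecureStore.getItemAsync(VOICE_API_KEY).then((saved) => {
      if (saved === 'voice_ask' || saved === 'ask_wellnuo_ai') setVoiceApiType(saved);
    });
  }, []);

  // Persist and expose the setter the Profile selector calls
  const updateVoiceApiType = useCallback(async (t: VoiceApiType) => {
    setVoiceApiType(t);
    await SecureStore.setItemAsync(VOICE_API_KEY, t);
  }, []);

  return (
    <VoiceContext.Provider value={{ voiceApiType, updateVoiceApiType }}>
      {children}
    </VoiceContext.Provider>
  );
}
```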
05f872d067 fix: voice session improvements - FAB stop, echo prevention, chat TTS
- FAB button now correctly stops session during speaking/processing states
- Echo prevention: STT stopped during TTS playback, results ignored during speaking
- Chat TTS only speaks when voice session is active (no auto-speak for text chat)
- Session stop now aborts in-flight API requests and prevents race conditions
- STT restarts after TTS with 800ms delay for audio focus release
- Pending interrupt transcript processed after TTS completion
- ChatContext added for message persistence across tab navigation
- VoiceFAB redesigned with state-based animations
- console.error replaced with console.warn across voice pipeline
- no-speech STT errors silenced (normal silence behavior)
2026-01-27 22:59:55 -08:00
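Two of these items, the abort-on-stop and the delayed STT restart, could be wired roughly as below. The AbortController approach and the names (requestRef, startListening, stopSession) are assumptions; only the 800ms delay and the described behaviors come from the commit.

```ts
// Sketch: abort in-flight API requests on session stop, and restart STT
// after TTS with the 800ms audio-focus delay the commit mentions.
import { useRef } from 'react';

export function useVoiceSession(startListening: () => void) {
  const requestRef = useRef<AbortController | null>(null);
  const restartTimer = useRef<ReturnType<typeof setTimeout> | null>(null);

  async function send(url: string, body: unknown) {
    requestRef.current?.abort();            // cancel any previous request
    const controller = new AbortController();
    requestRef.current = controller;
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
      signal: controller.signal,            // lets stopSession() cancel it
    });
    return res.json();
  }

  function onTtsDone() {
    // Wait ~800ms so the OS releases audio focus before STT resumes
    restartTimer.current = setTimeout(startListening, 800);
  }

  function stopSession() {
    requestRef.current?.abort();            // abort in-flight API request
    if (restartTimer.current) clearTimeout(restartTimer.current);
  }

  return { send, onTtsDone, stopSession };
}
```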
88d4afcdfd Display Julia's voice responses in chat
When the user speaks in voice mode, both their question and Julia's
response are now shown in the text chat. This provides a unified
conversation history for both voice and text interactions.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-27 16:41:14 -08:00
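An illustrative sketch of recording a voice turn into the shared history; the addMessage helper and the message shape are hypothetical, not taken from the app's ChatContext.

```ts
// Illustrative only: push both sides of a voice turn into the chat list
// so voice and text share one history. addMessage and ChatMessage are
// assumed shapes, not taken from the repo.
type ChatMessage = { role: 'user' | 'assistant'; text: string };

function recordVoiceTurn(
  addMessage: (m: ChatMessage) => void,
  transcript: string,
  reply: string,
) {
  addMessage({ role: 'user', text: transcript }); // the spoken question
  addMessage({ role: 'assistant', text: reply }); // Julia's response
}
```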
3c7a48df5b Integrate TTS interruption in VoiceFAB when voice detected
- Add onVoiceDetected callback to useSpeechRecognition hook
  - Triggered on first interim result (voice activity detected)
  - Uses voiceDetectedRef to ensure callback fires only once per session
  - Reset flag on session start/end

- Connect STT to VoiceContext in _layout.tsx
  - Use useSpeechRecognition with onVoiceDetected callback
  - Call interruptIfSpeaking() when voice detected during 'speaking' state
  - Forward STT results to VoiceContext (setTranscript, sendTranscript)
  - Start/stop STT based on isListening state

- Export interruptIfSpeaking from VoiceContext provider

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-27 16:34:07 -08:00
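The once-per-session guard could look like the following sketch; the hook plumbing around it is assumed, while voiceDetectedRef and the reset points follow the commit message.

```ts
// Sketch of the once-per-session voice-activity callback described above.
import { useRef } from 'react';

export function useVoiceDetection(onVoiceDetected?: () => void) {
  const voiceDetectedRef = useRef(false);

  function handleInterimResult(_partial: string) {
    // Fire only on the first interim result of the session
    if (!voiceDetectedRef.current) {
      voiceDetectedRef.current = true;
      onVoiceDetected?.();
    }
  }

  function resetSession() {
    // Called on session start/end so the next session can fire again
    voiceDetectedRef.current = false;
  }

  return { handleInterimResult, resetSession };
}
```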
356205d8c0 Remove LiveKit SDK and related code
- Remove @livekit/react-native-expo-plugin from app.json
- Remove @config-plugins/react-native-webrtc plugin
- Delete utils/audioSession.ts (depended on LiveKit AudioSession)
- Update VoiceCallContext.tsx comments
- Update callManager.ts comments
- Update _layout.tsx TODO comment
- Remove LiveKit documentation files
- Add interruptIfSpeaking to VoiceContext for TTS interrupt

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-27 16:28:23 -08:00
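With LiveKit removed, interruptIfSpeaking can plausibly fall back to expo-speech; a minimal sketch, assuming the function simply stops any in-progress TTS (the body is a guess; the name and purpose come from the commit).

```ts
// Sketch of interruptIfSpeaking using expo-speech. Exact body assumed.
import * as Speech from 'expo-speech';

export async function interruptIfSpeaking(): Promise<boolean> {
  if (await Speech.isSpeakingAsync()) {
    await Speech.stop(); // cut TTS off so the user's speech takes over
    return true;         // caller knows an interruption happened
  }
  return false;
}
```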
caf47ead9c Add VoiceContext with API integration and TTS
- Create VoiceContext.tsx with sendTranscript() for WellNuo API calls
- Integrate expo-speech TTS for response playback
- Add VoiceProvider to app layout
- Flow: transcript → normalize → API → response → TTS → continue listening

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-27 16:16:50 -08:00
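The described flow, sketched end to end. The endpoint URL, payload, and response shape are placeholders; only the step order and the expo-speech TTS usage follow the commit message.

```ts
// Sketch of the transcript → normalize → API → response → TTS pipeline.
// Endpoint, payload, and response shape are hypothetical placeholders.
import * as Speech from 'expo-speech';

async function sendTranscript(transcript: string, resumeListening: () => void) {
  const text = transcript.trim().toLowerCase(); // assumed normalization
  if (!text) return;

  // Hypothetical WellNuo call; the real endpoint is not in the commit
  const res = await fetch('https://example.invalid/wellnuo/ask', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question: text }),
  });
  const { answer } = (await res.json()) as { answer: string };

  // Speak the response, then continue listening once playback ends
  Speech.speak(answer, { onDone: resumeListening });
}
```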