Changes:
- Show API function (voice_ask or ask_wellnuo_ai) in Debug tab
- Update console.log to include API type in transcript log
- Add "API Function" field in Status Card with blue color
Now Debug tab clearly shows which API function is being used, making it easy to verify the Profile settings are working correctly.
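A minimal sketch of what the new Status Card row might look like, assuming a `useVoice()` hook that exposes `voiceApiType` (the hook name and import path are illustrative, not confirmed from the source):

```tsx
import React from 'react';
import { Text, View } from 'react-native';
import { useVoice } from '@/contexts/VoiceContext'; // assumed path and hook

// Status Card row that surfaces the active API function in the Debug tab
export function ApiFunctionRow() {
  const { voiceApiType } = useVoice(); // 'voice_ask' | 'ask_wellnuo_ai'
  return (
    <View style={{ flexDirection: 'row', justifyContent: 'space-between' }}>
      <Text>API Function</Text>
      {/* Blue makes the field easy to spot while verifying Profile settings */}
      <Text style={{ color: '#2196F3', fontWeight: '600' }}>{voiceApiType}</Text>
    </View>
  );
}
```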
Fixes error: "updateVoiceApiType is not a function"
Changes:
- Add voiceApiType state to VoiceContext
- Implement updateVoiceApiType callback
- Load saved voice API type from SecureStore on mount
- Use voiceApiType in sendTranscript (instead of hardcoded 'ask_wellnuo_ai')
- Add console.log showing which API type is being used
- Export voiceApiType and updateVoiceApiType in context provider
Now the Voice API selector in Profile works correctly, and the logs show which API function (voice_ask or ask_wellnuo_ai) is being called.
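A sketch of the context additions described above, shown as a standalone provider for brevity (in the app these fields live on the existing VoiceContext); the storage key and default value are assumptions:

```tsx
import React, { createContext, useCallback, useContext, useEffect, useState } from 'react';
import * as SecureStore from 'expo-secure-store';

type VoiceApiType = 'voice_ask' | 'ask_wellnuo_ai';
const STORAGE_KEY = 'voice_api_type'; // assumed storage key

const VoiceApiContext = createContext<{
  voiceApiType: VoiceApiType;
  updateVoiceApiType: (t: VoiceApiType) => Promise<void>;
} | null>(null);

export function VoiceApiProvider({ children }: { children: React.ReactNode }) {
  const [voiceApiType, setVoiceApiType] = useState<VoiceApiType>('ask_wellnuo_ai');

  // Load the saved API type once on mount
  useEffect(() => {
    SecureStore.getItemAsync(STORAGE_KEY).then((saved) => {
      if (saved === 'voice_ask' || saved === 'ask_wellnuo_ai') setVoiceApiType(saved);
    });
  }, []);

  // Persist and update state; consumed by the Profile selector
  const updateVoiceApiType = useCallback(async (t: VoiceApiType) => {
    setVoiceApiType(t);
    await SecureStore.setItemAsync(STORAGE_KEY, t);
  }, []);

  return (
    <VoiceApiContext.Provider value={{ voiceApiType, updateVoiceApiType }}>
      {children}
    </VoiceApiContext.Provider>
  );
}

export function useVoiceApi() {
  const ctx = useContext(VoiceApiContext);
  if (!ctx) throw new Error('useVoiceApi must be used within VoiceApiProvider');
  return ctx;
}
```

sendTranscript would then read voiceApiType from this context instead of the hardcoded 'ask_wellnuo_ai'.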
- FAB now correctly stops the session during speaking/processing states
- Echo prevention: STT stopped during TTS playback, results ignored during speaking
- Chat TTS only speaks when voice session is active (no auto-speak for text chat)
- Session stop now aborts in-flight API requests and prevents race conditions (see the sketch after this list)
- STT restarts 800 ms after TTS ends, giving the platform time to release audio focus
- Pending interrupt transcript processed after TTS completion
- ChatContext added for message persistence across tab navigation
- VoiceFAB redesigned with state-based animations
- console.error replaced with console.warn across voice pipeline
- no-speech STT errors silenced (normal behavior when the user is silent)
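A sketch of the abort and delayed-restart behavior from the list above, with a placeholder endpoint and an illustrative STT helper:

```ts
// One AbortController per in-flight request; stopping the session aborts it.
let inflight: AbortController | null = null;

async function sendToApi(body: unknown): Promise<Response> {
  inflight?.abort(); // cancel any previous request to avoid race conditions
  inflight = new AbortController();
  return fetch('https://example.invalid/api', { // placeholder URL
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
    signal: inflight.signal,
  });
}

function stopSession() {
  inflight?.abort();
  inflight = null;
}

// Restart STT ~800 ms after TTS ends so the platform can release audio
// focus first; restarting immediately risks re-capturing TTS audio (echo).
function onTtsFinished(startSTT: () => void) {
  setTimeout(startSTT, 800);
}
```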
When the user speaks in voice mode, both their question and Julia's response now appear in the text chat, providing a unified conversation history across voice and text interactions.
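A minimal ChatContext sketch consistent with the description above: one provider mounted above the tab navigator holds the message list, so both the voice pipeline and the text chat append to the same history (names and shapes are assumptions):

```tsx
import React, { createContext, useCallback, useContext, useState } from 'react';

type ChatMessage = { role: 'user' | 'assistant'; text: string; viaVoice?: boolean };

const ChatContext = createContext<{
  messages: ChatMessage[];
  addMessage: (m: ChatMessage) => void;
} | null>(null);

export function ChatProvider({ children }: { children: React.ReactNode }) {
  const [messages, setMessages] = useState<ChatMessage[]>([]);
  const addMessage = useCallback(
    (m: ChatMessage) => setMessages((prev) => [...prev, m]),
    []
  );
  return (
    <ChatContext.Provider value={{ messages, addMessage }}>
      {children}
    </ChatContext.Provider>
  );
}

export const useChat = () => {
  const ctx = useContext(ChatContext);
  if (!ctx) throw new Error('useChat must be used within ChatProvider');
  return ctx;
};
```

Because the provider sits above the tabs, the message list survives tab navigation, and voice transcripts are added with viaVoice so the UI can distinguish them if needed.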
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add onVoiceDetected callback to useSpeechRecognition hook
- Triggered on first interim result (voice activity detected)
- Uses voiceDetectedRef to ensure the callback fires only once per session (see the sketch after this list)
- Reset flag on session start/end
- Connect STT to VoiceContext in _layout.tsx
- Use useSpeechRecognition with onVoiceDetected callback
- Call interruptIfSpeaking() when voice is detected during the 'speaking' state
- Forward STT results to VoiceContext (setTranscript, sendTranscript)
- Start/stop STT based on isListening state
- Export interruptIfSpeaking from VoiceContext provider
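A sketch of the one-shot onVoiceDetected pattern, using a ref so the callback fires at most once per STT session (the helper names are illustrative; the actual hook is useSpeechRecognition):

```ts
import { useRef } from 'react';

// Fires onVoiceDetected once per session, on the first interim STT result.
export function useVoiceDetectedOnce(onVoiceDetected?: () => void) {
  const voiceDetectedRef = useRef(false);

  // Call from the STT interim-result handler
  const handleInterimResult = () => {
    if (!voiceDetectedRef.current) {
      voiceDetectedRef.current = true; // guard: at most one call per session
      onVoiceDetected?.();
    }
  };

  // Call on session start/end so the next session can fire again
  const resetVoiceDetected = () => {
    voiceDetectedRef.current = false;
  };

  return { handleInterimResult, resetVoiceDetected };
}
```

In _layout.tsx, the callback passed in would call interruptIfSpeaking(), so speech detected during the 'speaking' state interrupts TTS.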
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Create VoiceCallContext for global voice call state management
- Add FloatingCallBubble component with drag support
- Add minimize button to voice call screen
- Show bubble when call is minimized, tap to return to call
- Button turns green to indicate an active call
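A sketch of a draggable bubble using the core react-native PanResponder and Animated APIs; the actual FloatingCallBubble may differ in styling and behavior:

```tsx
import React, { useRef } from 'react';
import { Animated, PanResponder, Pressable } from 'react-native';

export function FloatingCallBubble({ onPress }: { onPress: () => void }) {
  const pan = useRef(new Animated.ValueXY()).current;

  const responder = useRef(
    PanResponder.create({
      // Treat small movements as taps, larger ones as drags
      onMoveShouldSetPanResponder: (_e, g) =>
        Math.abs(g.dx) > 4 || Math.abs(g.dy) > 4,
      onPanResponderMove: Animated.event([null, { dx: pan.x, dy: pan.y }], {
        useNativeDriver: false,
      }),
      onPanResponderRelease: () => pan.extractOffset(), // keep final position
    })
  ).current;

  return (
    <Animated.View
      style={[{ position: 'absolute' }, pan.getLayout()]}
      {...responder.panHandlers}
    >
      {/* Green signals an active call; tapping returns to the call screen */}
      <Pressable
        onPress={onPress}
        style={{ width: 56, height: 56, borderRadius: 28, backgroundColor: '#22c55e' }}
      />
    </Animated.View>
  );
}
```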
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add debugDeploymentId to BeneficiaryContext for sharing between screens
- Sync Debug tab's deploymentId state with global context
- voice-call.tsx now prioritizes debugDeploymentId when starting calls
- Enables testing voice calls with specific deployment IDs from the Debug screen
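The priority logic might look like this sketch; the useBeneficiary() shape and the import path are assumed from the bullets above:

```tsx
import { useBeneficiary } from '@/contexts/BeneficiaryContext'; // assumed path

// Resolve which deployment ID a new voice call should use:
// the Debug screen's ID wins so testers can target a specific deployment.
export function useCallDeploymentId(): string | undefined {
  const { debugDeploymentId, selectedBeneficiary } = useBeneficiary();
  return debugDeploymentId ?? selectedBeneficiary?.deploymentId;
}
```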
- Removed voice input features
- Simplified profile page (now only legal links and logout)
- Chat with AI context still working
- Auto-select first beneficiary
- Dashboard WebView intact
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>