- Hide voice-debug tab in production (href: null)
- Revert VoiceContext to stable version (no Samantha voice)
- Add platform badge to Debug screen (iOS/Android indicator)
- Remove PRD.md (moved to Ralphy workflow)
TTS voice quality improvements caused a 'speak function failed' error
on iOS Simulator (Samantha voice unavailable). Reverted to the working
baseline: rate 0.9, pitch 1.0, default system voice.
Changes to contexts/VoiceContext.tsx:
- Increase rate from 0.9 to 1.1 (faster, more natural)
- Increase pitch from 1.0 to 1.15 (slightly higher, less robotic)
- Add iOS premium voice (Samantha - Siri quality)
- Android continues to use default high-quality voice
This fixes the complaint that the voice sounded "backward/outdated"
and "harsh/stiff" on iOS. The tuned settings are sketched below.
Previously: API errors were silent, session stopped, user confused
Now: Julia speaks error message, adds to transcript, keeps listening
Changes:
- Catch block now speaks error via TTS: "Sorry, I encountered an error: [msg]"
- Error added to transcript (visible in chat history)
- Session continues in listening state (doesn't stop on error)
- User can try again immediately without restarting session
UX improvement: graceful error recovery instead of silent failure (sketched below)
contexts/VoiceContext.tsx:332-351
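A hedged sketch of this recovery path, with illustrative names (sendToApi, addMessage, setVoiceState) standing in for the real VoiceContext internals:

```typescript
import * as Speech from 'expo-speech';

// Illustrative shapes; the real context defines its own.
type Message = { role: 'user' | 'assistant'; text: string };
type VoiceState = 'idle' | 'listening' | 'processing' | 'speaking';

async function processTranscript(
  transcript: string,
  sendToApi: (t: string) => Promise<string>,
  addMessage: (m: Message) => void,
  setVoiceState: (s: VoiceState) => void,
) {
  setVoiceState('processing');
  try {
    const reply = await sendToApi(transcript);
    addMessage({ role: 'assistant', text: reply });
    Speech.speak(reply, { onDone: () => setVoiceState('listening') });
  } catch (err) {
    // Speak the error instead of failing silently, add it to the
    // transcript, and return to listening so the user can retry.
    const msg = err instanceof Error ? err.message : String(err);
    const spoken = `Sorry, I encountered an error: ${msg}`;
    addMessage({ role: 'assistant', text: spoken });
    Speech.speak(spoken, { onDone: () => setVoiceState('listening') });
  }
}
```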
User feedback: green speaker indicator was turning off too early,
creating an awkward visual gap during the 300ms audio focus delay.
Changes:
- Added 300ms delay to setIsSpeaking(false) in TTS onDone callback
- Keeps green "Speaking" indicator on until STT actually starts recording
- onError and onStopped still turn off immediately (user interruption)
- Smooth visual transition: TTS ends → 300ms delay → STT starts
UX improvement: eliminates the brief "nothing is happening" moment
between Julia finishing speech and the microphone starting to listen
(sketched below).
contexts/VoiceContext.tsx:393
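A minimal sketch of the delayed onDone handling, assuming expo-speech callbacks; setIsSpeaking is the context state setter mentioned above:

```typescript
import * as Speech from 'expo-speech';

function speakWithSmoothHandoff(
  text: string,
  setIsSpeaking: (v: boolean) => void,
) {
  setIsSpeaking(true);
  Speech.speak(text, {
    // Hold the green "Speaking" indicator through the ~300ms audio
    // focus gap so the UI goes straight from TTS into STT recording.
    onDone: () => setTimeout(() => setIsSpeaking(false), 300),
    // User interruptions drop the indicator immediately.
    onStopped: () => setIsSpeaking(false),
    onError: () => setIsSpeaking(false),
  });
}
```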
Changes:
- Show API function (voice_ask or ask_wellnuo_ai) in Debug tab
- Update console.log to include API type in transcript log
- Add "API Function" field in Status Card with blue color
Now the Debug tab clearly shows which API function is being used, making it easy to verify that the Profile settings are working correctly.
Fixes error: "updateVoiceApiType is not a function"
Changes:
- Add voiceApiType state to VoiceContext
- Implement updateVoiceApiType callback
- Load saved voice API type from SecureStore on mount
- Use voiceApiType in sendTranscript (instead of hardcoded 'ask_wellnuo_ai')
- Add console.log showing which API type is being used
- Export voiceApiType and updateVoiceApiType in context provider
Now the Voice API selector in Profile works correctly, and the logs show which API function (voice_ask or ask_wellnuo_ai) is being called. A sketch of this wiring follows.
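A sketch assuming expo-secure-store for persistence; the storage key and default value are assumptions, not confirmed from the real VoiceContext:

```typescript
import { useCallback, useEffect, useState } from 'react';
import * as SecureStore from 'expo-secure-store';

type VoiceApiType = 'voice_ask' | 'ask_wellnuo_ai';
const VOICE_API_KEY = 'voiceApiType'; // hypothetical storage key

function useVoiceApiType() {
  const [voiceApiType, setVoiceApiType] =
    useState<VoiceApiType>('ask_wellnuo_ai'); // assumed default

  // Load the saved voice API type from SecureStore on mount.
  useEffect(() => {
    SecureStore.getItemAsync(VOICE_API_KEY).then((saved) => {
      if (saved === 'voice_ask' || saved === 'ask_wellnuo_ai') {
        setVoiceApiType(saved);
      }
    });
  }, []);

  // Persist and apply changes from the Profile selector.
  const updateVoiceApiType = useCallback(async (next: VoiceApiType) => {
    setVoiceApiType(next);
    await SecureStore.setItemAsync(VOICE_API_KEY, next);
  }, []);

  return { voiceApiType, updateVoiceApiType };
}
```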
- FAB button now correctly stops session during speaking/processing states
- Echo prevention: STT stopped during TTS playback, results ignored during speaking
- Chat TTS only speaks when voice session is active (no auto-speak for text chat)
- Session stop now aborts in-flight API requests and prevents race conditions (see the sketch after this list)
- STT restarts after TTS with 800ms delay for audio focus release
- Pending interrupt transcript processed after TTS completion
- ChatContext added for message persistence across tab navigation
- VoiceFAB redesigned with state-based animations
- console.error replaced with console.warn across voice pipeline
- no-speech STT errors silenced (normal silence behavior)
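A sketch of the request-abort behavior from the list above, using the standard AbortController pattern; the endpoint and payload are illustrative:

```typescript
import { useCallback, useRef } from 'react';

function useAbortableAsk() {
  const abortRef = useRef<AbortController | null>(null);

  const ask = useCallback(async (transcript: string) => {
    abortRef.current?.abort(); // cancel any previous in-flight request
    const controller = new AbortController();
    abortRef.current = controller;
    const res = await fetch('https://api.example.com/ask', { // hypothetical URL
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ transcript }),
      signal: controller.signal, // aborting rejects this fetch
    });
    return res.json();
  }, []);

  const stopSession = useCallback(() => {
    abortRef.current?.abort(); // stopping the session cancels pending work
    abortRef.current = null;
  }, []);

  return { ask, stopSession };
}
```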
When the user speaks via voice mode, both their question and Julia's
response are now shown in the text chat. This provides a unified
conversation history for both voice and text interactions.
- Add onVoiceDetected callback to useSpeechRecognition hook
- Triggered on first interim result (voice activity detected)
- Uses voiceDetectedRef to ensure the callback fires only once per session (sketched after this list)
- Reset flag on session start/end
- Connect STT to VoiceContext in _layout.tsx
- Use useSpeechRecognition with onVoiceDetected callback
- Call interruptIfSpeaking() when voice detected during 'speaking' state
- Forward STT results to VoiceContext (setTranscript, sendTranscript)
- Start/stop STT based on isListening state
- Export interruptIfSpeaking from VoiceContext provider
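A sketch of the fire-once detection inside the hook; the handler names are illustrative, not the hook's real API:

```typescript
import { useRef } from 'react';

function useVoiceDetection(onVoiceDetected?: () => void) {
  const voiceDetectedRef = useRef(false);

  // Call from the STT interim-result handler: the first partial result
  // means voice activity, so fire the callback exactly once per session
  // (e.g. to run interruptIfSpeaking() during the 'speaking' state).
  const handleInterimResult = (_partial: string) => {
    if (!voiceDetectedRef.current) {
      voiceDetectedRef.current = true;
      onVoiceDetected?.();
    }
  };

  // Call on session start/end so the next session can fire again.
  const resetVoiceDetection = () => {
    voiceDetectedRef.current = false;
  };

  return { handleInterimResult, resetVoiceDetection };
}
```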
- Create VoiceCallContext for global voice call state management
- Add FloatingCallBubble component with drag support (sketched below)
- Add minimize button to voice call screen
- Show bubble when call is minimized, tap to return to call
- Button shows active call state with green color
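A sketch of the bubble's drag handling, assuming the standard Animated.ValueXY + PanResponder pattern; FloatingCallBubble's real implementation may differ:

```typescript
import { useRef } from 'react';
import { Animated, PanResponder } from 'react-native';

function useDraggableBubble() {
  const pan = useRef(new Animated.ValueXY()).current;

  const panResponder = useRef(
    PanResponder.create({
      onMoveShouldSetPanResponder: () => true,
      // Track the finger while dragging.
      onPanResponderMove: Animated.event([null, { dx: pan.x, dy: pan.y }], {
        useNativeDriver: false,
      }),
      // Keep the bubble where it was dropped.
      onPanResponderRelease: () => pan.extractOffset(),
    }),
  ).current;

  // Spread panResponder.panHandlers onto an Animated.View styled with
  // pan.getLayout() to make it draggable.
  return { pan, panResponder };
}
```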
- Add debugDeploymentId to BeneficiaryContext for sharing between screens
- Sync Debug tab's deploymentId state with global context
- voice-call.tsx now prioritizes debugDeploymentId when starting calls
- Enables testing voice calls with specific deployment IDs from the Debug screen (priority logic sketched below)
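A minimal sketch of the priority logic; debugDeploymentId comes from the commit, but the other field names are assumptions:

```typescript
import { createContext, useContext } from 'react';

// Minimal stand-in for the app's real BeneficiaryContext shape.
type BeneficiaryCtx = {
  debugDeploymentId: string | null;
  selectedBeneficiary: { deploymentId: string } | null;
};

const BeneficiaryContext = createContext<BeneficiaryCtx>({
  debugDeploymentId: null,
  selectedBeneficiary: null,
});

function useCallDeploymentId(): string | null {
  const { debugDeploymentId, selectedBeneficiary } =
    useContext(BeneficiaryContext);
  // The Debug tab's override wins when starting a call.
  return debugDeploymentId ?? selectedBeneficiary?.deploymentId ?? null;
}
```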
- Removed voice input features
- Simplified profile page (only legal links and logout)
- Chat with AI context working
- Auto-select first beneficiary
- Dashboard WebView intact