12 Commits

3731206546 Hide Debug tab and revert TTS voice changes
- Hide voice-debug tab in production (href: null)
- Revert VoiceContext to stable version (no Samantha voice)
- Add platform badge to Debug screen (iOS/Android indicator)
- Remove PRD.md (moved to Ralphy workflow)

TTS voice quality improvements caused a 'speak function failed' error
on the iOS Simulator (Samantha voice unavailable). Reverted to the working
baseline: rate 0.9, pitch 1.0, default system voice.
2026-01-29 10:05:10 -08:00
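
A minimal sketch of the reverted baseline described in this commit, assuming expo-speech; speakBaseline is an illustrative helper name, not the actual VoiceContext API:

```ts
import * as Speech from 'expo-speech';

// Reverted baseline: rate 0.9, pitch 1.0, no explicit voice (system default).
export function speakBaseline(text: string, onDone?: () => void): void {
  Speech.speak(text, {
    rate: 0.9,   // slightly slower than default
    pitch: 1.0,  // neutral pitch
    onDone,      // caller decides when to restart STT
    onError: (error) => console.warn('TTS error:', error),
  });
}
```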
f4a239ff43 Improve TTS voice quality - faster rate, higher pitch, iOS premium voice
Changes to contexts/VoiceContext.tsx:
- Increase rate from 0.9 to 1.1 (faster, more natural)
- Increase pitch from 1.0 to 1.15 (slightly higher, less robotic)
- Add iOS premium voice (Samantha - Siri quality)
- Android continues to use default high-quality voice

This fixes the complaint that the voice sounded "backward/outdated"
and "harsh/stiff" on iOS.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-29 09:46:38 -08:00
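
A sketch of the platform-specific options this commit describes, assuming expo-speech. The Samantha voice identifier varies by iOS version, so it is looked up at runtime here rather than hard-coded; findPremiumVoice is illustrative:

```ts
import { Platform } from 'react-native';
import * as Speech from 'expo-speech';

// Illustrative: find a "Samantha" voice on iOS, if one is installed.
async function findPremiumVoice(): Promise<string | undefined> {
  if (Platform.OS !== 'ios') return undefined; // Android keeps the default voice
  const voices = await Speech.getAvailableVoicesAsync();
  return voices.find((v) => v.name.includes('Samantha'))?.identifier;
}

export async function speakImproved(text: string): Promise<void> {
  const voice = await findPremiumVoice();
  Speech.speak(text, {
    rate: 1.1,    // faster, more natural
    pitch: 1.15,  // slightly higher, less robotic
    ...(voice ? { voice } : {}), // fall back to the system default if unavailable
  });
}
```

The commit listed first later reverted this change because Samantha is unavailable on the iOS Simulator; looking the voice up at runtime and falling back to the default, as sketched, would sidestep that failure.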
8c0e36cae3 Fix Android voice bugs - STT restart and token retry
Critical Android fixes:

BUG 1 - STT not restarting after TTS:
- Problem: isSpeaking delay (300ms iOS visual) blocked Android STT
- Android audio focus conflict: STT cannot start while isSpeaking=true
- Fix: Platform-specific isSpeaking timing
  - iOS: 300ms delay (smooth visual indicator)
  - Android: immediate (allows STT to restart)

BUG 2 - Session expired loop:
- Problem: 401 error → token reset → no retry → user hears error
- Fix: Automatic token refresh and retry on 401
- Flow: 401 → clear token → get new token → retry request
- User never hears "Session expired" unless retry also fails

contexts/VoiceContext.tsx:12-23,387-360
2026-01-28 20:43:42 -08:00
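
A sketch of the 401 refresh-and-retry flow from BUG 2 above. getAuthToken, clearAuthToken, and API_URL are hypothetical stand-ins for the app's real helpers; getAuthToken is assumed to fetch a fresh token when none is cached:

```ts
declare function getAuthToken(): Promise<string>;  // hypothetical helper
declare function clearAuthToken(): Promise<void>;  // hypothetical helper
declare const API_URL: string;                     // hypothetical endpoint

async function callVoiceApi(body: unknown, retried = false): Promise<Response> {
  const token = await getAuthToken();
  const res = await fetch(API_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
    body: JSON.stringify(body),
  });

  if (res.status === 401 && !retried) {
    await clearAuthToken();          // drop the expired token
    return callVoiceApi(body, true); // fetch a new token and retry once
  }
  return res; // the user only hears "Session expired" if the retry also fails
}
```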
8a6d9fa420 Add voice error handling - speak errors aloud and continue session
Previously: API errors were silent, the session stopped, and the user was left confused
Now: Julia speaks the error message, adds it to the transcript, and keeps listening

Changes:
- Catch block now speaks error via TTS: "Sorry, I encountered an error: [msg]"
- Error added to transcript (visible in chat history)
- Session continues in listening state (doesn't stop on error)
- User can try again immediately without restarting session

UX improvement: graceful error recovery instead of silent failure

contexts/VoiceContext.tsx:332-351
2026-01-28 20:07:51 -08:00
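
A sketch of the error-recovery path described above, assuming expo-speech; addToTranscript, setSessionState, and the wrapper shape are illustrative:

```ts
import * as Speech from 'expo-speech';

declare function addToTranscript(role: 'assistant', text: string): void;                  // hypothetical
declare function setSessionState(state: 'listening' | 'speaking' | 'processing'): void;  // hypothetical

async function sendTranscriptSafely(send: () => Promise<void>): Promise<void> {
  try {
    await send();
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err);
    const spoken = `Sorry, I encountered an error: ${msg}`;
    addToTranscript('assistant', spoken); // visible in chat history
    Speech.speak(spoken);                 // error is spoken aloud
    setSessionState('listening');         // session keeps going; user can retry immediately
  }
}
```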
9422c32926 Extend speaking indicator duration to cover STT restart delay
User feedback: green speaker indicator was turning off too early,
creating an awkward visual gap during the 300ms audio focus delay.

Changes:
- Added 300ms delay to setIsSpeaking(false) in TTS onDone callback
- Keeps green "Speaking" indicator on until STT actually starts recording
- onError and onStopped still turn off immediately (user interruption)
- Smooth visual transition: TTS ends → 300ms delay → STT starts

UX improvement: eliminates the brief "nothing is happening" moment
between Julia finishing speech and microphone starting to listen.

contexts/VoiceContext.tsx:393
2026-01-28 20:05:38 -08:00
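
A sketch of the timing change above, assuming expo-speech; setIsSpeaking and restartListening are illustrative stand-ins for the VoiceContext internals:

```ts
import * as Speech from 'expo-speech';

declare function setIsSpeaking(value: boolean): void;  // hypothetical state setter
declare function restartListening(): void;             // hypothetical STT restart

function speakWithIndicator(text: string): void {
  setIsSpeaking(true);
  Speech.speak(text, {
    onDone: () => {
      // Keep the green "Speaking" indicator on through the audio-focus gap,
      // so it only turns off once STT is actually recording again.
      setTimeout(() => setIsSpeaking(false), 300);
      restartListening();
    },
    onStopped: () => setIsSpeaking(false), // user interruption: turn off immediately
    onError: () => setIsSpeaking(false),
  });
}
```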
dccf05947c Add Voice API type to debug UI and logs
Changes:
- Show API function (voice_ask or ask_wellnuo_ai) in Debug tab
- Update console.log to include API type in transcript log
- Add "API Function" field in Status Card with blue color

Now the Debug tab clearly shows which API function is being used, making it easy to verify that the Profile settings are working correctly.
2026-01-28 19:53:53 -08:00
0881a9565d Fix updateVoiceApiType function and add API logging
Fixes error: "updateVoiceApiType is not a function"

Changes:
- Add voiceApiType state to VoiceContext
- Implement updateVoiceApiType callback
- Load saved voice API type from SecureStore on mount
- Use voiceApiType in sendTranscript (instead of hardcoded 'ask_wellnuo_ai')
- Add console.log showing which API type is being used
- Export voiceApiType and updateVoiceApiType in context provider

Now the Voice API selector in Profile works correctly, and the logs show which API function (voice_ask or ask_wellnuo_ai) is being called.
2026-01-28 19:50:00 -08:00
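
A sketch of the state, persistence, and load-on-mount pieces described above, assuming expo-secure-store. It is written as a standalone hook rather than the actual context provider, and the storage key name is assumed:

```ts
import { useCallback, useEffect, useState } from 'react';
import * as SecureStore from 'expo-secure-store';

type VoiceApiType = 'voice_ask' | 'ask_wellnuo_ai';
const STORAGE_KEY = 'voiceApiType'; // assumed key name

export function useVoiceApiType() {
  const [voiceApiType, setVoiceApiType] = useState<VoiceApiType>('ask_wellnuo_ai');

  // Load the saved selection from SecureStore once on mount.
  useEffect(() => {
    SecureStore.getItemAsync(STORAGE_KEY).then((saved) => {
      if (saved === 'voice_ask' || saved === 'ask_wellnuo_ai') setVoiceApiType(saved);
    });
  }, []);

  // Persist and update when the Profile selector changes.
  const updateVoiceApiType = useCallback(async (next: VoiceApiType) => {
    await SecureStore.setItemAsync(STORAGE_KEY, next);
    setVoiceApiType(next);
  }, []);

  return { voiceApiType, updateVoiceApiType };
}
```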
05f872d067 fix: voice session improvements - FAB stop, echo prevention, chat TTS
- FAB button now correctly stops session during speaking/processing states
- Echo prevention: STT stopped during TTS playback, results ignored during speaking
- Chat TTS only speaks when voice session is active (no auto-speak for text chat)
- Session stop now aborts in-flight API requests and prevents race conditions
- STT restarts after TTS with 800ms delay for audio focus release
- Pending interrupt transcript processed after TTS completion
- ChatContext added for message persistence across tab navigation
- VoiceFAB redesigned with state-based animations
- console.error replaced with console.warn across voice pipeline
- no-speech STT errors silenced (normal silence behavior)
2026-01-27 22:59:55 -08:00
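
A sketch of one of the items above, aborting in-flight API requests when the session stops, using a standard AbortController; the function names and request shape are illustrative:

```ts
let currentRequest: AbortController | null = null;

async function sendTranscript(url: string, text: string): Promise<string | null> {
  currentRequest?.abort();                  // cancel any previous in-flight call
  const controller = new AbortController();
  currentRequest = controller;
  try {
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text }),
      signal: controller.signal,
    });
    return await res.text();
  } catch (err) {
    if (err instanceof Error && err.name === 'AbortError') return null; // stopped by user
    throw err;
  }
}

function stopSession(): void {
  currentRequest?.abort(); // prevents a late response from racing a new session
  currentRequest = null;
}
```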
88d4afcdfd Display Julia's voice responses in chat
When user speaks via voice mode, both their question and Julia's
response are now shown in the text chat. This provides a unified
conversation history for both voice and text interactions.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-27 16:41:14 -08:00
3c7a48df5b Integrate TTS interruption in VoiceFAB when voice detected
- Add onVoiceDetected callback to useSpeechRecognition hook
  - Triggered on first interim result (voice activity detected)
  - Uses voiceDetectedRef to ensure callback fires only once per session
  - Reset flag on session start/end

- Connect STT to VoiceContext in _layout.tsx
  - Use useSpeechRecognition with onVoiceDetected callback
  - Call interruptIfSpeaking() when voice detected during 'speaking' state
  - Forward STT results to VoiceContext (setTranscript, sendTranscript)
  - Start/stop STT based on isListening state

- Export interruptIfSpeaking from VoiceContext provider

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-27 16:34:07 -08:00
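
A sketch of the once-per-session voiceDetectedRef pattern described above; the hook surface is illustrative and not the actual useSpeechRecognition API:

```ts
import { useRef } from 'react';

export function useVoiceDetection(onVoiceDetected: () => void) {
  const voiceDetectedRef = useRef(false);

  // Call this from the STT interim-result handler.
  const handleInterimResult = (_text: string) => {
    if (!voiceDetectedRef.current) {
      voiceDetectedRef.current = true; // fire only once per session
      onVoiceDetected();               // e.g. interruptIfSpeaking()
    }
  };

  // Call this on session start/end to re-arm the callback.
  const resetForNewSession = () => {
    voiceDetectedRef.current = false;
  };

  return { handleInterimResult, resetForNewSession };
}
```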
356205d8c0 Remove LiveKit SDK and related code
- Remove @livekit/react-native-expo-plugin from app.json
- Remove @config-plugins/react-native-webrtc plugin
- Delete utils/audioSession.ts (depended on LiveKit AudioSession)
- Update VoiceCallContext.tsx comments
- Update callManager.ts comments
- Update _layout.tsx TODO comment
- Remove LiveKit documentation files
- Add interruptIfSpeaking to VoiceContext for TTS interrupt

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-27 16:28:23 -08:00
caf47ead9c Add VoiceContext with API integration and TTS
- Create VoiceContext.tsx with sendTranscript() for WellNuo API calls
- Integrate expo-speech TTS for response playback
- Add VoiceProvider to app layout
- Flow: transcript → normalize → API → response → TTS → continue listening

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-27 16:16:50 -08:00
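
A sketch of the flow named above (transcript → normalize → API → response → TTS → continue listening), assuming expo-speech; callWellNuoApi and startListening are hypothetical stand-ins for the app's API wrapper and STT restart:

```ts
import * as Speech from 'expo-speech';

declare function callWellNuoApi(question: string): Promise<string>; // hypothetical API wrapper
declare function startListening(): void;                            // hypothetical STT start

export async function sendTranscript(rawTranscript: string): Promise<void> {
  const question = rawTranscript.trim().replace(/\s+/g, ' '); // normalize whitespace
  if (!question) return;

  const answer = await callWellNuoApi(question); // WellNuo API call
  Speech.speak(answer, {
    onDone: () => startListening(),              // continue listening after playback
  });
}
```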