// Copyright 2014 The Chromium Authors
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include "chrome/browser/speech/network_speech_recognizer.h"

#include <stddef.h>
#include <stdint.h>

#include <algorithm>
#include <memory>
#include <string>
#include <utility>
#include <vector>

#include "base/functional/bind.h"
#include "chrome/browser/speech/speech_recognizer_delegate.h"
#include "content/public/browser/browser_task_traits.h"
#include "content/public/browser/browser_thread.h"
#include "content/public/browser/child_process_host.h"
#include "content/public/browser/render_process_host.h"
#include "content/public/browser/speech_recognition_event_listener.h"
#include "content/public/browser/speech_recognition_manager.h"
#include "content/public/browser/speech_recognition_session_config.h"
#include "content/public/browser/speech_recognition_session_preamble.h"
#include "media/mojo/mojom/speech_recognition_error.mojom.h"
#include "services/network/public/cpp/shared_url_loader_factory.h"

// Invalid speech session.
static const int kInvalidSessionId = -1;

// Speech recognizer listener. This is separate from SpeechRecognizer because
// the speech recognition engine must function from the IO thread. Because of
// this, the lifecycle of this class must be decoupled from the lifecycle of
// SpeechRecognizer. To avoid circular references, this class has no reference
// to SpeechRecognizer. Instead, it has a reference to the
// SpeechRecognizerDelegate via a weak pointer that is only ever referenced from
// the UI thread.
class NetworkSpeechRecognizer::EventListener
    : public base::RefCountedThreadSafe<
          NetworkSpeechRecognizer::EventListener,
          content::BrowserThread::DeleteOnIOThread>,
      public content::SpeechRecognitionEventListener {
 public:
  EventListener(const base::WeakPtr<SpeechRecognizerDelegate>& delegate,
                std::unique_ptr<network::PendingSharedURLLoaderFactory>
                    pending_shared_url_loader_factory,
                const std::string& locale);

  EventListener(const EventListener&) = delete;
  EventListener& operator=(const EventListener&) = delete;

  void StartOnIOThread(
      const std::string& auth_scope,
      const std::string& auth_token,
      const scoped_refptr<content::SpeechRecognitionSessionPreamble>& preamble);
  void StopOnIOThread();

 private:
  friend struct content::BrowserThread::DeleteOnThread<
      content::BrowserThread::IO>;
  friend class base::DeleteHelper<NetworkSpeechRecognizer::EventListener>;

  ~EventListener() override;

  void NotifyRecognitionStateChanged(SpeechRecognizerStatus new_state);

  // Overridden from content::SpeechRecognitionEventListener:
  // These are always called on the IO thread.
  void OnRecognitionStart(int session_id) override;
  void OnRecognitionEnd(int session_id) override;
  void OnRecognitionResults(
      int session_id,
      const std::vector<media::mojom::WebSpeechRecognitionResultPtr>& results)
      override;
  void OnRecognitionError(
      int session_id,
      const media::mojom::SpeechRecognitionError& error) override;
  void OnSoundStart(int session_id) override;
  void OnSoundEnd(int session_id) override;
  void OnAudioLevelsChange(int session_id,
                           float volume,
                           float noise_volume) override;
  void OnAudioStart(int session_id) override;
  void OnAudioEnd(int session_id) override;

  // Only dereferenced from the UI thread, but copied on IO thread.
  base::WeakPtr<SpeechRecognizerDelegate> delegate_;

  // All remaining members only accessed from the IO thread.
  std::unique_ptr<network::PendingSharedURLLoaderFactory>
      pending_shared_url_loader_factory_;
  // Initialized from |pending_shared_url_loader_factory_| on first use.
  scoped_refptr<network::SharedURLLoaderFactory> shared_url_loader_factory_;
  std::string locale_;
  int session_;
  std::u16string last_result_str_;

  base::WeakPtrFactory<EventListener> weak_factory_{this};
};

NetworkSpeechRecognizer::EventListener::EventListener(
    const base::WeakPtr<SpeechRecognizerDelegate>& delegate,
    std::unique_ptr<network::PendingSharedURLLoaderFactory>
        pending_shared_url_loader_factory,
    const std::string& locale)
    : delegate_(delegate),
      pending_shared_url_loader_factory_(
          std::move(pending_shared_url_loader_factory)),
      locale_(locale),
      session_(kInvalidSessionId) {
  DCHECK_CURRENTLY_ON(content::BrowserThread::UI);
  NotifyRecognitionStateChanged(SPEECH_RECOGNIZER_READY);
}

NetworkSpeechRecognizer::EventListener::~EventListener() {
  // No more callbacks when we are deleting.
  delegate_.reset();
  if (session_ != kInvalidSessionId) {
    // Ensure the session is aborted.
    int session = session_;
    session_ = kInvalidSessionId;
    content::SpeechRecognitionManager::GetInstance()->AbortSession(session);
  }
}

void NetworkSpeechRecognizer::EventListener::StartOnIOThread(
    const std::string& auth_scope,
    const std::string& auth_token,
    const scoped_refptr<content::SpeechRecognitionSessionPreamble>& preamble) {
  DCHECK_CURRENTLY_ON(content::BrowserThread::IO);
  if (session_ != kInvalidSessionId)
    StopOnIOThread();

  // Don't filter profanities. NetworkSpeechRecognizer is currently used by
  // Dictation, which does not want to filter user input. If this needs to be
  // changed for other clients in the future, whether to filter should be passed
  // as a parameter to the speech recognizer instead of changed here.
  bool filter_profanities = false;

  content::SpeechRecognitionSessionConfig config;
  config.language = locale_;
  config.continuous = true;
  config.interim_results = true;
  config.max_hypotheses = 1;
  config.filter_profanities = filter_profanities;
  if (!shared_url_loader_factory_) {
    DCHECK(pending_shared_url_loader_factory_);
    shared_url_loader_factory_ = network::SharedURLLoaderFactory::Create(
        std::move(pending_shared_url_loader_factory_));
  }
  config.shared_url_loader_factory = shared_url_loader_factory_;
  config.event_listener = weak_factory_.GetWeakPtr();
  // kInvalidUniqueID is not a valid render process, so the speech permission
  // check allows the request through.
  config.initial_context.render_process_id =
      content::ChildProcessHost::kInvalidUniqueID;
  config.auth_scope = auth_scope;
  config.auth_token = auth_token;
  config.preamble = preamble;

  auto* speech_instance = content::SpeechRecognitionManager::GetInstance();
  session_ = speech_instance->CreateSession(config);
  speech_instance->StartSession(session_);
}

void NetworkSpeechRecognizer::EventListener::StopOnIOThread() {
  DCHECK_CURRENTLY_ON(content::BrowserThread::IO);
  if (session_ == kInvalidSessionId)
    return;

  // Clear |session_| first so that the OnRecognitionEnd callback triggered by
  // ending the session does not recurse into this method.
  int session = session_;
  session_ = kInvalidSessionId;
  content::SpeechRecognitionManager::GetInstance()->StopAudioCaptureForSession(
      session);
  // Since we no longer have access to this session ID, end the session
  // associated with it.
  content::SpeechRecognitionManager::GetInstance()->AbortSession(session);
  weak_factory_.InvalidateWeakPtrs();
}

void NetworkSpeechRecognizer::EventListener::NotifyRecognitionStateChanged(
    SpeechRecognizerStatus new_state) {
  content::GetUIThreadTaskRunner({})->PostTask(
      FROM_HERE,
      base::BindOnce(&SpeechRecognizerDelegate::OnSpeechRecognitionStateChanged,
                     delegate_, new_state));
}

void NetworkSpeechRecognizer::EventListener::OnRecognitionStart(
    int session_id) {
  NotifyRecognitionStateChanged(SPEECH_RECOGNIZER_RECOGNIZING);
}

void NetworkSpeechRecognizer::EventListener::OnRecognitionEnd(int session_id) {
  StopOnIOThread();
  NotifyRecognitionStateChanged(SPEECH_RECOGNIZER_READY);
}

void NetworkSpeechRecognizer::EventListener::OnRecognitionResults(
    int session_id,
    const std::vector<media::mojom::WebSpeechRecognitionResultPtr>& results) {
  std::u16string result_str;
  // The number of results with |is_provisional| false. If |final_count| ==
  // results.size(), then all results are non-provisional and the recognition is
  // complete.
  size_t final_count = 0;
  for (const auto& result : results) {
    if (!result->is_provisional)
      final_count++;
    result_str += result->hypotheses[0]->utterance;
  }

  // media::mojom::WebSpeechRecognitionResult doesn't have word offsets.
  content::GetUIThreadTaskRunner({})->PostTask(
      FROM_HERE,
      base::BindOnce(&SpeechRecognizerDelegate::OnSpeechResult, delegate_,
                     result_str, final_count == results.size(),
                     /* full_result = */ std::nullopt));

  last_result_str_ = result_str;
}

void NetworkSpeechRecognizer::EventListener::OnRecognitionError(
    int session_id,
    const media::mojom::SpeechRecognitionError& error) {
  StopOnIOThread();
  if (error.code == media::mojom::SpeechRecognitionErrorCode::kNetwork) {
    NotifyRecognitionStateChanged(SPEECH_RECOGNIZER_ERROR);
  }
  NotifyRecognitionStateChanged(SPEECH_RECOGNIZER_READY);
}

void NetworkSpeechRecognizer::EventListener::OnSoundStart(int session_id) {
  NotifyRecognitionStateChanged(SPEECH_RECOGNIZER_IN_SPEECH);
}

void NetworkSpeechRecognizer::EventListener::OnSoundEnd(int session_id) {
  StopOnIOThread();
  NotifyRecognitionStateChanged(SPEECH_RECOGNIZER_RECOGNIZING);
}

void NetworkSpeechRecognizer::EventListener::OnAudioLevelsChange(
    int session_id,
    float volume,
    float noise_volume) {
  // Both |volume| and |noise_volume| are defined to be in the range [0.0, 1.0].
  // See: content/public/browser/speech_recognition_event_listener.h
  DCHECK_LE(0.0, volume);
  DCHECK_GE(1.0, volume);
  DCHECK_LE(0.0, noise_volume);
  DCHECK_GE(1.0, noise_volume);
  volume = std::max(0.0f, volume - noise_volume);
  int16_t sound_level = static_cast<int16_t>(INT16_MAX * volume);
  content::GetUIThreadTaskRunner({})->PostTask(
      FROM_HERE,
      base::BindOnce(&SpeechRecognizerDelegate::OnSpeechSoundLevelChanged,
                     delegate_, sound_level));
}

void NetworkSpeechRecognizer::EventListener::OnAudioStart(int session_id) {}

void NetworkSpeechRecognizer::EventListener::OnAudioEnd(int session_id) {}

NetworkSpeechRecognizer::NetworkSpeechRecognizer(
    const base::WeakPtr<SpeechRecognizerDelegate>& delegate,
    std::unique_ptr<network::PendingSharedURLLoaderFactory>
        pending_shared_url_loader_factory,
    const std::string& locale)
    : SpeechRecognizer(delegate),
      speech_event_listener_(
          new EventListener(delegate,
                            std::move(pending_shared_url_loader_factory),
                            locale)) {
  DCHECK_CURRENTLY_ON(content::BrowserThread::UI);
}

NetworkSpeechRecognizer::~NetworkSpeechRecognizer() {
  DCHECK_CURRENTLY_ON(content::BrowserThread::UI);
  // Reset the delegate before calling Stop() to avoid any additional callbacks.
  delegate().reset();
  Stop();
}

void NetworkSpeechRecognizer::Start() {
  DCHECK_CURRENTLY_ON(content::BrowserThread::UI);
  content::GetIOThreadTaskRunner({})->PostTask(
      FROM_HERE,
      base::BindOnce(&NetworkSpeechRecognizer::EventListener::StartOnIOThread,
                     speech_event_listener_, std::string() /* auth_scope */,
                     std::string() /* auth_token */, /* preamble */ nullptr));
}

void NetworkSpeechRecognizer::Stop() {
  DCHECK_CURRENTLY_ON(content::BrowserThread::UI);
  content::GetIOThreadTaskRunner({})->PostTask(
      FROM_HERE,
      base::BindOnce(&NetworkSpeechRecognizer::EventListener::StopOnIOThread,
                     speech_event_listener_));
}