// Copyright 2020 The Chromium Authors
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#ifdef UNSAFE_BUFFERS_BUILD
// TODO(crbug.com/40285824): Remove this and spanify to fix the errors.
#pragma allow_unsafe_buffers
#endif

#include "media/gpu/v4l2/legacy/v4l2_video_decoder_backend_stateful.h"

#include <algorithm>
#include <cstddef>
#include <memory>
#include <optional>
#include <string>
#include <tuple>
#include <utility>

#include "base/containers/contains.h"
#include "base/functional/bind.h"
#include "base/logging.h"
#include "base/metrics/histogram_macros.h"
#include "base/sequence_checker.h"
#include "base/task/sequenced_task_runner.h"
#include "build/build_config.h"
#include "media/base/limits.h"
#include "media/base/platform_features.h"
#include "media/base/video_codecs.h"
#include "media/gpu/chromeos/dmabuf_video_frame_pool.h"
#include "media/gpu/chromeos/platform_video_frame_utils.h"
#include "media/gpu/chromeos/video_frame_resource.h"
#include "media/gpu/macros.h"
#include "media/gpu/v4l2/v4l2_device.h"
#include "media/gpu/v4l2/v4l2_vda_helpers.h"
#include "media/gpu/v4l2/v4l2_video_decoder_backend.h"
#include "media/gpu/v4l2/v4l2_vp9_helpers.h"

namespace media {
namespace {
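
// Returns true if |decoder_buffer| belongs to a VP9 k-SVC stream, i.e. the
// codec is VP9 and the buffer carries non-empty spatial layer side data.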
bool IsVp9KSVCStream(VideoCodecProfile profile,
const DecoderBuffer& decoder_buffer) {
return VideoCodecProfileToVideoCodec(profile) == VideoCodec::kVP9 &&
decoder_buffer.side_data() &&
!decoder_buffer.side_data()->spatial_layers.empty();
}
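
// Returns true if the driver identified by |driver_name| is known to support
// VP9 k-SVC decoding. Only the Qualcomm Venus driver is allowlisted for now.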
bool IsVp9KSVCSupportedDriver(const std::string& driver_name) {
const std::string kVP9KSVCSupportedDrivers[] = {"qcom-venus"};
return base::Contains(kVP9KSVCSupportedDrivers, driver_name);
}
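
// Maps |v4l2_pixelformat| to the bit depth of the corresponding video pixel
// format, or returns std::nullopt if the fourcc is not recognized.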
std::optional<uint8_t> V4L2PixelFormatToBitDepth(uint32_t v4l2_pixelformat) {
const auto fourcc = Fourcc::FromV4L2PixFmt(v4l2_pixelformat);
if (fourcc) {
return BitDepth(fourcc->ToVideoPixelFormat());
}
return std::nullopt;
}
} // namespace
V4L2StatefulVideoDecoderBackend::DecodeRequest::DecodeRequest(
scoped_refptr<DecoderBuffer> buf,
VideoDecoder::DecodeCB cb)
: buffer(std::move(buf)), decode_cb(std::move(cb)) {}
V4L2StatefulVideoDecoderBackend::DecodeRequest::DecodeRequest(DecodeRequest&&) =
default;
V4L2StatefulVideoDecoderBackend::DecodeRequest&
V4L2StatefulVideoDecoderBackend::DecodeRequest::operator=(DecodeRequest&&) =
default;
V4L2StatefulVideoDecoderBackend::DecodeRequest::~DecodeRequest() = default;
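
// A decode request is completed once every byte of its bitstream buffer has
// been copied into V4L2 input buffers.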
bool V4L2StatefulVideoDecoderBackend::DecodeRequest::IsCompleted() const {
return bytes_used == buffer->size();
}
V4L2StatefulVideoDecoderBackend::V4L2StatefulVideoDecoderBackend(
Client* const client,
scoped_refptr<V4L2Device> device,
VideoCodecProfile profile,
const VideoColorSpace& color_space,
scoped_refptr<base::SequencedTaskRunner> task_runner)
: V4L2VideoDecoderBackend(client, std::move(device)),
driver_name_(device_->GetDriverName()),
profile_(profile),
color_space_(color_space),
task_runner_(task_runner) {
DVLOGF(3);
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
weak_this_ = weak_this_factory_.GetWeakPtr();
}
V4L2StatefulVideoDecoderBackend::~V4L2StatefulVideoDecoderBackend() {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
if (flush_cb_ || current_decode_request_ || !decode_request_queue_.empty()) {
VLOGF(1) << "Should not destroy backend during pending decode!";
}
struct v4l2_event_subscription sub;
memset(&sub, 0, sizeof(sub));
sub.type = V4L2_EVENT_SOURCE_CHANGE;
if (device_->Ioctl(VIDIOC_UNSUBSCRIBE_EVENT, &sub) != 0) {
VLOGF(1) << "Cannot unsubscribe to event";
}
}
bool V4L2StatefulVideoDecoderBackend::Initialize() {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
if (!IsSupportedProfile(profile_)) {
VLOGF(1) << "Unsupported profile " << GetProfileName(profile_);
return false;
}
frame_splitter_ =
v4l2_vda_helpers::InputBufferFragmentSplitter::CreateFromProfile(
profile_);
if (!frame_splitter_) {
VLOGF(1) << "Failed to create frame splitter";
return false;
}
struct v4l2_event_subscription sub;
memset(&sub, 0, sizeof(sub));
sub.type = V4L2_EVENT_SOURCE_CHANGE;
if (device_->Ioctl(VIDIOC_SUBSCRIBE_EVENT, &sub) != 0) {
VLOGF(1) << "Cannot subscribe to event";
return false;
}
framerate_control_ = std::make_unique<V4L2FrameRateControl>(
base::BindRepeating(&V4L2Device::Ioctl, device_), task_runner_);
return true;
}
void V4L2StatefulVideoDecoderBackend::EnqueueDecodeTask(
scoped_refptr<DecoderBuffer> buffer,
VideoDecoder::DecodeCB decode_cb) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
if (!buffer->end_of_stream()) {
has_pending_requests_ = true;
}
decode_request_queue_.push(
DecodeRequest(std::move(buffer), std::move(decode_cb)));
DoDecodeWork();
}
void V4L2StatefulVideoDecoderBackend::DoDecodeWork() {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
// Do not decode if a flush or resolution change is in progress.
if (!client_->IsDecoding())
return;
if (need_resume_resolution_change_) {
need_resume_resolution_change_ = false;
ChangeResolution();
if (!client_->IsDecoding())
return;
}
// Get a new decode request if none is in progress.
if (!current_decode_request_) {
    // No more decode requests; nothing to do for now.
if (decode_request_queue_.empty())
return;
auto decode_request = std::move(decode_request_queue_.front());
decode_request_queue_.pop();
// Need to flush?
if (decode_request.buffer->end_of_stream()) {
InitiateFlush(std::move(decode_request.decode_cb));
return;
}
// This is our new decode request.
current_decode_request_ = std::move(decode_request);
DCHECK_EQ(current_decode_request_->bytes_used, 0u);
if (IsVp9KSVCStream(profile_, *current_decode_request_->buffer)) {
if (!IsVp9KSVCSupportedDriver(driver_name_)) {
DLOG(ERROR) << driver_name_ << " doesn't support VP9 k-SVC decoding";
client_->OnBackendError();
return;
}
if (!AppendVP9SuperFrameIndex(current_decode_request_->buffer)) {
LOG(ERROR) << "Failed to append superframe index for VP9 k-SVC frame";
client_->OnBackendError();
return;
}
}
}
// Get a V4L2 buffer to copy the encoded data into.
if (!current_input_buffer_) {
current_input_buffer_ = input_queue_->GetFreeBuffer();
// We will be called again once an input buffer becomes available.
if (!current_input_buffer_)
return;
// Record timestamp of the input buffer so it propagates to the decoded
// frames.
// TODO(mcasas): Consider using TimeDeltaToTimeVal().
const struct timespec timespec =
current_decode_request_->buffer->timestamp().ToTimeSpec();
struct timeval timestamp = {
.tv_sec = timespec.tv_sec,
.tv_usec = timespec.tv_nsec / 1000,
};
current_input_buffer_->SetTimeStamp(timestamp);
const int64_t flat_timespec =
base::TimeDelta::FromTimeSpec(timespec).InMilliseconds();
encoding_timestamps_[flat_timespec] = base::TimeTicks::Now();
}
// From here on we have both a decode request and input buffer, so we can
// progress with decoding.
DCHECK(current_decode_request_.has_value());
DCHECK(current_input_buffer_.has_value());
DCHECK_LT(current_decode_request_->bytes_used,
current_decode_request_->buffer.get()->size());
auto current_buffer_span = base::span(*current_decode_request_->buffer.get())
.subspan(current_decode_request_->bytes_used);
const uint8_t* const data = current_buffer_span.data();
const size_t data_size = current_buffer_span.size();
size_t bytes_to_copy = 0;
if (!frame_splitter_->AdvanceFrameFragment(data, data_size, &bytes_to_copy)) {
LOG(ERROR) << "Invalid bitstream detected.";
std::move(current_decode_request_->decode_cb)
.Run(DecoderStatus::Codes::kFailed);
current_decode_request_.reset();
current_input_buffer_.reset();
client_->OnBackendError();
return;
}
const size_t bytes_used = current_input_buffer_->GetPlaneBytesUsed(0);
if (bytes_used + bytes_to_copy > current_input_buffer_->GetPlaneSize(0)) {
LOG(ERROR) << "V4L2 buffer size is too small to contain a whole frame.";
std::move(current_decode_request_->decode_cb)
.Run(DecoderStatus::Codes::kFailed);
current_decode_request_.reset();
current_input_buffer_.reset();
client_->OnBackendError();
return;
}
uint8_t* dst =
static_cast<uint8_t*>(current_input_buffer_->GetPlaneMapping(0)) +
bytes_used;
memcpy(dst, data, bytes_to_copy);
current_input_buffer_->SetPlaneBytesUsed(0, bytes_used + bytes_to_copy);
current_decode_request_->bytes_used += bytes_to_copy;
  // Release |current_decode_request_| if we have reached the end of its
  // buffer.
if (current_decode_request_->IsCompleted()) {
std::move(current_decode_request_->decode_cb)
.Run(DecoderStatus::Codes::kOk);
current_decode_request_.reset();
}
// If we have a partial frame, wait before submitting it.
if (frame_splitter_->IsPartialFramePending()) {
VLOGF(4) << "Partial frame pending, not queueing any buffer now.";
return;
}
// The V4L2 input buffer contains a decodable entity, queue it.
if (!std::move(*current_input_buffer_).QueueMMap()) {
LOG(ERROR) << "Error while queuing input buffer!";
client_->OnBackendError();
}
current_input_buffer_.reset();
// If we can still progress on a decode request, do it.
if (current_decode_request_ || !decode_request_queue_.empty())
ScheduleDecodeWork();
}
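
// Posts DoDecodeWork() to |task_runner_| so that the remaining decode work
// continues once control returns to the task runner.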
void V4L2StatefulVideoDecoderBackend::ScheduleDecodeWork() {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
task_runner_->PostTask(
FROM_HERE, base::BindOnce(&V4L2StatefulVideoDecoderBackend::DoDecodeWork,
weak_this_));
}
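
// Dequeues all pending V4L2 events and triggers a resolution change whenever a
// SOURCE_CHANGE event signals a change of resolution.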
void V4L2StatefulVideoDecoderBackend::ProcessEventQueue() {
while (std::optional<struct v4l2_event> ev = device_->DequeueEvent()) {
if (ev->type == V4L2_EVENT_SOURCE_CHANGE &&
(ev->u.src_change.changes & V4L2_EVENT_SRC_CH_RESOLUTION)) {
ChangeResolution();
}
}
}
void V4L2StatefulVideoDecoderBackend::OnServiceDeviceTask(bool event) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
if (event)
ProcessEventQueue();
// We can enqueue dequeued output buffers immediately.
EnqueueOutputBuffers();
// Try to progress on our work since we may have dequeued input buffers.
DoDecodeWork();
}
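
// Queues as many free CAPTURE buffers to the driver as possible. With MMAP
// memory the buffers are queued directly; with DMABUF memory a frame is first
// obtained from the client's frame pool. Running out of buffers or frames is
// not an error: queueing resumes once more become available.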
void V4L2StatefulVideoDecoderBackend::EnqueueOutputBuffers() {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
const v4l2_memory mem_type = output_queue_->GetMemoryType();
while (true) {
bool ret = false;
bool no_buffer = false;
std::optional<V4L2WritableBufferRef> buffer;
switch (mem_type) {
case V4L2_MEMORY_MMAP:
buffer = output_queue_->GetFreeBuffer();
if (!buffer) {
no_buffer = true;
break;
}
ret = std::move(*buffer).QueueMMap();
break;
case V4L2_MEMORY_DMABUF: {
scoped_refptr<FrameResource> frame = GetPoolVideoFrame();
        // Running out of frames is not an error; we will be called again once
        // frames become available.
if (!frame) {
return;
}
buffer = output_queue_->GetFreeBufferForFrame(frame->tracking_token());
if (!buffer) {
no_buffer = true;
break;
}
framerate_control_->AttachToFrameResource(frame);
ret = std::move(*buffer).QueueDMABuf(std::move(frame));
break;
}
default:
NOTREACHED();
}
// Running out of V4L2 buffers is not an error, so just exit the loop
// gracefully.
if (no_buffer)
break;
if (!ret) {
LOG(ERROR) << "Error while queueing output buffer!";
client_->OnBackendError();
}
}
DVLOGF(3) << output_queue_->QueuedBuffersCount() << "/"
<< output_queue_->AllocatedBuffersCount()
<< " output buffers queued";
}
scoped_refptr<FrameResource>
V4L2StatefulVideoDecoderBackend::GetPoolVideoFrame() {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
DmabufVideoFramePool* pool = client_->GetVideoFramePool();
DCHECK_EQ(output_queue_->GetMemoryType(), V4L2_MEMORY_DMABUF);
DCHECK_NE(pool, nullptr);
scoped_refptr<FrameResource> frame = pool->GetFrame();
if (!frame) {
DVLOGF(3) << "No available VideoFrame for now";
// We will try again once a frame becomes available.
pool->NotifyWhenFrameAvailable(base::BindOnce(
base::IgnoreResult(&base::SequencedTaskRunner::PostTask), task_runner_,
FROM_HERE,
base::BindOnce(
base::IgnoreResult(
&V4L2StatefulVideoDecoderBackend::EnqueueOutputBuffers),
weak_this_)));
}
return frame;
}
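
// Destruction observer for wrapped MMAP output frames: hops back onto
// |task_runner| if needed and returns the V4L2 buffer reference to
// ReuseOutputBuffer().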
// static
void V4L2StatefulVideoDecoderBackend::ReuseOutputBufferThunk(
scoped_refptr<base::SequencedTaskRunner> task_runner,
std::optional<base::WeakPtr<V4L2StatefulVideoDecoderBackend>> weak_this,
V4L2ReadableBufferRef buffer) {
DVLOGF(3);
DCHECK(weak_this);
if (task_runner->RunsTasksInCurrentSequence()) {
if (*weak_this)
(*weak_this)->ReuseOutputBuffer(std::move(buffer));
} else {
task_runner->PostTask(
FROM_HERE,
base::BindOnce(&V4L2StatefulVideoDecoderBackend::ReuseOutputBuffer,
*weak_this, std::move(buffer)));
}
}
void V4L2StatefulVideoDecoderBackend::ReuseOutputBuffer(
V4L2ReadableBufferRef buffer) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3) << "Reuse output buffer #" << buffer->BufferId();
// Lose reference to the buffer so it goes back to the free list.
buffer.reset();
// Enqueue the newly available buffer.
EnqueueOutputBuffers();
}
void V4L2StatefulVideoDecoderBackend::OnOutputBufferDequeued(
V4L2ReadableBufferRef buffer) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
  // Zero-byte buffers are returned as part of a flush and can be dismissed.
if (buffer->GetPlaneBytesUsed(0) > 0) {
// TODO(mcasas): Consider using TimeValToTimeDelta().
const struct timeval timeval = buffer->GetTimeStamp();
const struct timespec timespec = {
.tv_sec = timeval.tv_sec,
#if defined(ARCH_CPU_ARM_FAMILY) && defined(ARCH_CPU_32_BITS)
.tv_nsec = static_cast<long>(timeval.tv_usec) * 1000,
#else
.tv_nsec = timeval.tv_usec * 1000,
#endif
};
const int64_t flat_timespec =
base::TimeDelta::FromTimeSpec(timespec).InMilliseconds();
// TODO(b/190615065) |flat_timespec| might be repeated with H.264
// bitstreams, investigate why, and change the if() to DCHECK().
if (base::Contains(encoding_timestamps_, flat_timespec)) {
UMA_HISTOGRAM_TIMES(
"Media.PlatformVideoDecoding.Decode",
base::TimeTicks::Now() - encoding_timestamps_[flat_timespec]);
encoding_timestamps_.erase(flat_timespec);
}
scoped_refptr<FrameResource> frame;
switch (output_queue_->GetMemoryType()) {
case V4L2_MEMORY_MMAP: {
// Wrap the frame into another one so we can be signaled when the
// consumer is done with it and reuse the V4L2 buffer.
scoped_refptr<FrameResource> origin_frame = buffer->GetFrameResource();
frame = origin_frame->CreateWrappingFrame();
frame->AddDestructionObserver(base::BindOnce(
&V4L2StatefulVideoDecoderBackend::ReuseOutputBufferThunk,
task_runner_, weak_this_, buffer));
break;
}
case V4L2_MEMORY_DMABUF:
// The frame from the frame pool that we passed to QueueDMABuf() has
// been decoded into. It can be output as-is.
frame = buffer->GetFrameResource();
break;
default:
NOTREACHED();
}
const base::TimeDelta timestamp = base::TimeDelta::FromTimeSpec(timespec);
// TODO(b/214190092): Get color space from the buffer.
client_->OutputFrame(std::move(frame), *visible_rect_, color_space_,
timestamp);
}
  // We may have been waiting for the last buffer before a resolution change.
  // The order here is important! A flush event may come after a resolution
  // change event (but not the opposite), so we must make sure both events
  // are processed in the correct order.
  if (buffer->IsLast()) {
// Check that we don't have a resolution change event pending. If we do
// then this LAST buffer was related to it.
ProcessEventQueue();
if (resolution_change_cb_) {
std::move(resolution_change_cb_).Run();
} else if (flush_cb_) {
// We were waiting for a flush to complete, and received the last buffer.
CompleteFlush();
}
}
EnqueueOutputBuffers();
}
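
// Starts a flush sequence: any partially filled input buffer is queued, then
// the driver is asked to drain the CAPTURE queue with a STOP command. The
// flush completes when a buffer with the LAST flag is dequeued (see
// OnOutputBufferDequeued()), or immediately if no decode request has been
// submitted yet.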
bool V4L2StatefulVideoDecoderBackend::InitiateFlush(
VideoDecoder::DecodeCB flush_cb) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
DCHECK(!flush_cb_);
// Submit any pending input buffer at the time of flush.
if (current_input_buffer_) {
if (!std::move(*current_input_buffer_).QueueMMap()) {
LOG(ERROR) << "Error while queuing input buffer!";
client_->OnBackendError();
}
current_input_buffer_.reset();
}
client_->InitiateFlush();
flush_cb_ = std::move(flush_cb);
// The stream could be stopped in the middle of the frame when the flush is
// being triggered. This makes sure there are no leftovers after the flush
// finishes.
frame_splitter_->Reset();
// Special case: if we haven't received any decoding request, we could
// complete the flush immediately.
if (!has_pending_requests_)
return CompleteFlush();
if (output_queue_->IsStreaming()) {
// If the CAPTURE queue is streaming, send the STOP command to the V4L2
// device. The device will let us know that the flush is completed by
// sending us a CAPTURE buffer with the LAST flag set.
return output_queue_->SendStopCommand();
} else {
// If the CAPTURE queue is not streaming, this means we received the flush
// request before the initial resolution has been established. The flush
// request will be processed in OnChangeResolutionDone(), when the CAPTURE
// queue starts streaming.
DVLOGF(2) << "Flush request to be processed after CAPTURE queue starts";
}
return true;
}
bool V4L2StatefulVideoDecoderBackend::CompleteFlush() {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
DCHECK(flush_cb_);
// Signal that flush has properly been completed.
std::move(flush_cb_).Run(DecoderStatus::Codes::kOk);
// If CAPTURE queue is streaming, send the START command to the V4L2 device
// to signal that we are resuming decoding with the same state.
  if (output_queue_->IsStreaming() && !output_queue_->SendStartCommand()) {
    LOG(ERROR) << "Failed to issue START command";
    // |flush_cb_| has already been run above, so only report the failure to
    // the client.
    client_->OnBackendError();
    return false;
  }
client_->CompleteFlush();
  // The Qualcomm Venus driver stops the CAPTURE queue after the LAST buffer is
  // dequeued; the stream needs restarting so decoding can resume in case the
  // driver was left in the EOS state.
client_->RestartStream();
// Resume decoding if data is available.
ScheduleDecodeWork();
has_pending_requests_ = false;
return true;
}
void V4L2StatefulVideoDecoderBackend::OnStreamStopped(bool stop_input_queue) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
// If we are resetting, also reset the splitter.
if (frame_splitter_ && stop_input_queue)
frame_splitter_->Reset();
}
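
// Handles a source change: queries the new coded size, visible rectangle, bit
// depth and reference frame count from the driver, then defers the client-side
// buffer reallocation until the CAPTURE queue has been drained (buffer with
// the LAST flag), or runs it immediately if the queue is not streaming yet.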
void V4L2StatefulVideoDecoderBackend::ChangeResolution() {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
// Here we just query the new resolution, visible rect, and number of output
// buffers before asking the client to update the resolution.
auto format = output_queue_->GetFormat().first;
if (!format) {
LOG(ERROR) << "Unable to get format when changing resolution.";
client_->OnBackendError();
return;
}
const gfx::Size pic_size(format->fmt.pix_mp.width, format->fmt.pix_mp.height);
auto visible_rect = output_queue_->GetVisibleRect();
if (!visible_rect) {
LOG(ERROR) << "Unable to get visible rectangle when changing resolution.";
client_->OnBackendError();
return;
}
if (!gfx::Rect(pic_size).Contains(*visible_rect)) {
LOG(ERROR) << "Visible rectangle (" << visible_rect->ToString()
<< ") is not contained by the picture rectangle ("
<< gfx::Rect(pic_size).ToString() << ").";
client_->OnBackendError();
return;
}
const auto bit_depth =
V4L2PixelFormatToBitDepth(format->fmt.pix_mp.pixelformat);
if (!bit_depth) {
LOG(ERROR) << "Unable to determine bitdepth of format ";
client_->OnBackendError();
return;
}
  // Estimate the number of buffers needed for the CAPTURE queue and for codec
  // reference requirements. For VP9 and AV1 the maximum number of reference
  // frames is a constant 8 (4 for VP8); for H.264 and other ITU-T codecs it
  // depends on the bitstream. Query the driver for it in all cases.
constexpr size_t kDefaultNumReferenceFrames = 8;
size_t num_codec_reference_frames = kDefaultNumReferenceFrames;
// On QC Venus, this control ranges between 1 and 32 at the time of writing.
auto ctrl = device_->GetCtrl(V4L2_CID_MIN_BUFFERS_FOR_CAPTURE);
if (ctrl) {
VLOGF(2) << "V4L2_CID_MIN_BUFFERS_FOR_CAPTURE = " << ctrl->value;
num_codec_reference_frames = std::max(
base::checked_cast<size_t>(ctrl->value), num_codec_reference_frames);
}
  // Verify that |num_codec_reference_frames| has a reasonable value.
  // Anecdotally, 16 is the largest number of reference frames seen, on an
  // ITU-T H.264 test vector (CAPCM*1_Sand_E.h264).
CHECK_LE(num_codec_reference_frames, 32u);
// Signal that we are flushing and initiate the resolution change.
// Our flush will be done when we receive a buffer with the LAST flag on the
// CAPTURE queue.
client_->InitiateFlush();
DCHECK(!resolution_change_cb_);
resolution_change_cb_ = base::BindOnce(
&V4L2StatefulVideoDecoderBackend::ContinueChangeResolution, weak_this_,
pic_size, *visible_rect, num_codec_reference_frames, *bit_depth);
// ...that is, unless we are not streaming yet, in which case the resolution
// change can take place immediately.
if (!output_queue_->IsStreaming())
std::move(resolution_change_cb_).Run();
}
void V4L2StatefulVideoDecoderBackend::ContinueChangeResolution(
const gfx::Size& pic_size,
const gfx::Rect& visible_rect,
const size_t num_codec_reference_frames,
const uint8_t bit_depth) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
// Flush is done, but stay in flushing state and ask our client to set the new
// resolution.
client_->ChangeResolution(pic_size, visible_rect, num_codec_reference_frames,
bit_depth);
}
bool V4L2StatefulVideoDecoderBackend::ApplyResolution(
const gfx::Size& pic_size,
const gfx::Rect& visible_rect) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
// Use the visible rect for all new frames.
visible_rect_ = visible_rect;
return true;
}
void V4L2StatefulVideoDecoderBackend::OnChangeResolutionDone(CroStatus status) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3) << "status=" << static_cast<int>(status.code());
if (status == CroStatus::Codes::kResetRequired) {
need_resume_resolution_change_ = true;
return;
}
if (status != CroStatus::Codes::kOk) {
LOG(ERROR) << "Backend failure when changing resolution ("
<< static_cast<int>(status.code()) << ").";
client_->OnBackendError();
return;
}
// Flush can be considered completed on the client side.
client_->CompleteFlush();
// Enqueue all available output buffers now that they are allocated.
EnqueueOutputBuffers();
// If we had a flush request pending before the initial resolution change,
// process it now.
if (flush_cb_) {
DVLOGF(2) << "Processing pending flush request...";
client_->InitiateFlush();
if (!output_queue_->SendStopCommand()) {
return;
}
}
// Also try to progress on our work.
DoDecodeWork();
}
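
// Aborts all in-flight work: runs any pending resolution change or flush
// callback, then completes the current and queued decode requests with
// |status|.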
void V4L2StatefulVideoDecoderBackend::ClearPendingRequests(
DecoderStatus status) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DVLOGF(3);
if (resolution_change_cb_)
std::move(resolution_change_cb_).Run();
if (flush_cb_) {
std::move(flush_cb_).Run(status);
}
current_input_buffer_.reset();
if (current_decode_request_) {
std::move(current_decode_request_->decode_cb).Run(status);
current_decode_request_.reset();
}
while (!decode_request_queue_.empty()) {
std::move(decode_request_queue_.front().decode_cb).Run(status);
decode_request_queue_.pop();
}
has_pending_requests_ = false;
}
// TODO(b:149663704) move into helper function shared between both backends?
bool V4L2StatefulVideoDecoderBackend::IsSupportedProfile(
VideoCodecProfile profile) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DCHECK(device_);
if (supported_profiles_.empty()) {
const std::vector<uint32_t> kSupportedInputFourccs = {
V4L2_PIX_FMT_H264,
V4L2_PIX_FMT_VP8,
V4L2_PIX_FMT_VP9,
};
auto device = base::MakeRefCounted<V4L2Device>();
VideoDecodeAccelerator::SupportedProfiles profiles =
device->GetSupportedDecodeProfiles(kSupportedInputFourccs);
for (const auto& entry : profiles)
supported_profiles_.push_back(entry.profile);
}
return base::Contains(supported_profiles_, profile);
}
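
// This backend keeps the input (V4L2 OUTPUT) queue streaming across resolution
// changes, so stopping it is never requested.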
bool V4L2StatefulVideoDecoderBackend::StopInputQueueOnResChange() const {
return false;
}
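
// Number of buffers to allocate on the input (V4L2 OUTPUT) queue. Secure
// playback is not supported by the stateful backend.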
size_t V4L2StatefulVideoDecoderBackend::GetNumOUTPUTQueueBuffers(
bool secure_mode) const {
CHECK(!secure_mode);
constexpr size_t kNumInputBuffers = 8;
return kNumInputBuffers;
}
} // namespace media