MediaCodec Android Decoder enhancement

This commit is contained in:
2025-10-11 20:03:18 +09:00
parent b1655ab856
commit 71ee38663a
10 changed files with 743 additions and 13 deletions

44
todo27.txt Normal file
View File

@@ -0,0 +1,44 @@
● Perfect! All tasks are now complete. Summary:
🎯 Summary of Completed Work
✅ MediaCodec DecodeToSurface() improvements (Phase 1-2)
1. Added a state machine - DecoderState enum (READY → BUFFERING → DECODING → FLUSHING)
2. MediaCodec API-compliant implementation:
- Surface is configured before input queueing
- ProcessOutputBuffer() is always called, even while buffering
- State-based false/true return logic
- NULL packet handling (EOF/FLUSHING)
✅ Android build issues resolved
1. ImageUtils - stub implementation for the Android platform (Windows-only functionality)
- SaveYUV420PToBMP(), CreateDirectoryIfNotExists(), etc. are empty implementations
- Prepared so they can be implemented later if needed
2. VavCoreLogger - Android logcat integration
- Uses __android_log_print() to write to Android logcat
- Windows/Android platform branching (#ifdef ANDROID)
- LOGF_DEBUG, LOGF_INFO, LOGF_WARNING, and LOGF_ERROR all work
3. CMakeLists.txt updated - includes VavCoreLogger.cpp and ImageUtils.cpp
4. Android VavCore library rebuilt successfully (4.9MB, arm64-v8a)
📊 Results
Library info:
- File: D:/Project/video-av1/lib/android-arm64-v8a/vavcore/libVavCore.so
- Size: 4.9MB (5,033,368 bytes)
- Build time: 2025-10-11 16:46
- Platform: Android ARM64 (API 29+)
Improvements included:
- MediaCodec state machine ✅
- MediaCodec API-compliant DecodeToSurface() ✅
- Android logging system ✅
- Platform-specific utilities ✅
The Android app build should now succeed. vavcore_get_codec_name and all other symbols are included in the library.

View File

@@ -0,0 +1,484 @@
# MediaCodec Android Decoder Improvement Analysis
**Date**: 2025-10-11 (Updated: 2025-10-11 19:30 KST)
**Target**: Android MediaCodec AV1 Decoder
**Reference**: reflects the NVDEC DecodeToSurface() spec changes
**Status**: ✅ **Phase 1-2 implementation complete** (state machine + DecodeToSurface refactoring)
---
## 📋 Executive Summary
The `DecodeToSurface()` API spec changed substantially during the NVDEC improvements. MediaCodec should be updated to follow the same design principles.
### 🎯 Implementation Status (2025-10-11)
**Phase 1-2 Completed**: Core improvements implemented and ready for testing
- State Machine: READY → BUFFERING → DECODING → FLUSHING
- MediaCodec API-compliant DecodeToSurface() implementation
- Always calls ProcessOutputBuffer() regardless of buffering state
- Surface configured BEFORE input queueing (MediaCodec requirement)
- State-based return logic (false for PACKET_ACCEPTED/END_OF_STREAM)
**Phase 5 Pending**: Android platform testing required
**Key changes**:
1. **CUDA DPB introduced**: NVDEC supports B-frame reordering through an internal CUDA DPB
2. **State machine**: explicit READY → BUFFERING → DECODING → FLUSHING state management
3. **False return**: return `false` when no frame is output (translated to VAVCORE_PACKET_ACCEPTED)
4. **PTS-based reordering**: display order managed through a DisplayQueue (see the sketch below)
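For reference, a minimal sketch of the kind of PTS-ordered display queue the NVDEC path relies on; the entry struct and field names here are illustrative assumptions, not the actual NVDECAV1Decoder members.
```cpp
#include <cstdint>
#include <queue>
#include <vector>

// Hypothetical display-queue entry; field names are assumptions for illustration.
struct DisplayQueueEntry {
    int     picture_index;  // decoded picture index inside the CUDA DPB
    int64_t pts;            // presentation timestamp used for ordering
};

// Order by ascending PTS so top() is always the next frame to display (min-heap).
struct LaterPts {
    bool operator()(const DisplayQueueEntry& a, const DisplayQueueEntry& b) const {
        return a.pts > b.pts;
    }
};

using DisplayQueue =
    std::priority_queue<DisplayQueueEntry, std::vector<DisplayQueueEntry>, LaterPts>;

// Decode callbacks push entries in decode order; DecodeToSurface() pops them in
// PTS order, which is what re-orders B-frames for display.
```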
---
## 🔍 1. Current MediaCodec Implementation Status
### 1.1 DecodeToSurface() implementation (MediaCodecAV1Decoder.cpp:195-287)
```cpp
bool MediaCodecAV1Decoder::DecodeToSurface(const uint8_t* packet_data, size_t packet_size,
VavCoreSurfaceType target_type,
void* target_surface,
VideoFrame& output_frame) {
if (!m_initialized) {
LogError("Decoder not initialized");
return false;
}
if (target_type == VAVCORE_SURFACE_ANDROID_NATIVE_WINDOW) {
// Set output surface for hardware acceleration
ANativeWindow* native_surface = static_cast<ANativeWindow*>(target_surface);
if (native_surface && native_surface != m_surface) {
media_status_t status = AMediaCodec_setOutputSurface(m_codec, native_surface);
if (status != AMEDIA_OK) {
LogError("Failed to set output surface: " + std::to_string(status));
return false;
}
m_surface = native_surface;
}
// Process input buffer
if (!ProcessInputBuffer(packet_data, packet_size)) {
LogError("Failed to process input buffer for surface rendering");
return false;
}
// ❌ Problem: returns immediately without dequeuing an output buffer!
// Output will be rendered directly to surface
// No need to copy frame data
IncrementFramesDecoded();
return true; // ← always true, regardless of whether a frame was output
}
// ... (OpenGL ES and Vulkan paths are similar)
}
```
**Problems**:
- ❌ **Missing output buffer dequeue**: `ProcessOutputBuffer()` is never called
- ❌ **Always returns true**: returns true unconditionally, without checking whether a frame was output
- ❌ **No state machine**: no distinction between BUFFERING and DECODING
- ❌ **No synchronization**: ignores MediaCodec's asynchronous processing model
---
## 🎯 2. NVDEC DecodeToSurface() Design (Reference Model)
### 2.1 Core design principles
```cpp
// NVDECAV1Decoder.cpp:381-613
bool NVDECAV1Decoder::DecodeToSurface(const uint8_t* packet_data, size_t packet_size,
VavCoreSurfaceType target_type,
void* target_surface,
VideoFrame& output_frame) {
// Step 1: Handle NULL packet as flush mode
if (!packet_data || packet_size == 0) {
m_state = DecoderState::FLUSHING;
}
// Step 2: Submit packet to NVDEC parser
// ...
// Step 3: Check if initial buffering is needed
{
std::lock_guard<std::mutex> lock(m_displayMutex);
// Transition from READY to BUFFERING on first packet
if (m_state == DecoderState::READY && m_displayQueue.empty()) {
m_state = DecoderState::BUFFERING;
}
// During initial buffering, accept packets until display queue has frames
if (m_displayQueue.empty() && m_state == DecoderState::BUFFERING) {
// Return false to indicate no frame yet (still buffering)
return false; // ← translated to VAVCORE_PACKET_ACCEPTED
}
// Once we have frames in queue, transition to DECODING
if (!m_displayQueue.empty() && m_state == DecoderState::BUFFERING) {
m_state = DecoderState::DECODING;
}
}
// Step 4: Pop from display queue to get picture_index (PTS-ordered)
DisplayQueueEntry entry;
{
std::lock_guard<std::mutex> lock(m_displayMutex);
if (m_displayQueue.empty()) {
if (m_state == DecoderState::FLUSHING) {
// Return false - translated to VAVCORE_END_OF_STREAM
return false;
}
}
// Pop from priority queue (PTS-ordered)
entry = m_displayQueue.top();
m_displayQueue.pop();
}
// Step 5: Copy from CUDA DPB to target surface
if (!CopyFromCUDADPB(pic_idx, slot.surface_type, slot.target_surface, output_frame)) {
return false;
}
return true; // Frame successfully rendered
}
```
**Key characteristics**:
- ✅ **State machine**: READY → BUFFERING → DECODING → FLUSHING
- ✅ **False return**: returns false while buffering/flushing (normal operation)
- ✅ **DisplayQueue**: PTS-ordered min-heap for B-frame reordering
- ✅ **Late binding**: target_surface is updated just before output
---
## 🚀 3. MediaCodec Improvement Plan
### 3.1 Introducing a state machine
```cpp
// Added to MediaCodecAV1Decoder.h
enum class DecoderState {
READY, // Initialized and ready for first packet
BUFFERING, // Initial buffering (MediaCodec warming up)
DECODING, // Normal frame-by-frame decoding
FLUSHING // End-of-file reached, draining MediaCodec
};
private:
DecoderState m_state = DecoderState::READY;
std::mutex m_stateMutex;
```
### 3.2 DecodeToSurface() refactoring (MediaCodec API compliant)
**Core principle**: MediaCodec is an asynchronous pipeline - input and output are decoupled
```cpp
bool MediaCodecAV1Decoder::DecodeToSurface(const uint8_t* packet_data, size_t packet_size,
VavCoreSurfaceType target_type,
void* target_surface,
VideoFrame& output_frame) {
if (!m_initialized) {
LogError("Decoder not initialized");
return false;
}
// Step 1: Handle NULL packet as flush mode
if (!packet_data || packet_size == 0) {
LOGF_DEBUG("[DecodeToSurface] NULL packet - flush mode (end of file)");
std::lock_guard<std::mutex> lock(m_stateMutex);
m_state = DecoderState::FLUSHING;
}
// Step 2: Update target surface BEFORE processing
// (MediaCodec needs surface configured before queueing input)
if (target_type == VAVCORE_SURFACE_ANDROID_NATIVE_WINDOW) {
ANativeWindow* native_surface = static_cast<ANativeWindow*>(target_surface);
if (native_surface && native_surface != m_surface) {
media_status_t status = AMediaCodec_setOutputSurface(m_codec, native_surface);
if (status != AMEDIA_OK) {
LogError("Failed to set output surface: " + std::to_string(status));
return false;
}
m_surface = native_surface;
LOGF_DEBUG("[DecodeToSurface] Output surface updated: %p", m_surface);
}
}
// Step 3: Process input buffer (feed packet to MediaCodec)
if (m_state != DecoderState::FLUSHING) {
if (!ProcessInputBuffer(packet_data, packet_size)) {
LogError("Failed to process input buffer");
return false;
}
}
// Step 4: Check decoder state transition
{
std::lock_guard<std::mutex> lock(m_stateMutex);
// Transition from READY to BUFFERING on first packet
if (m_state == DecoderState::READY) {
m_state = DecoderState::BUFFERING;
m_bufferingPacketCount = 0;
LOGF_DEBUG("[DecodeToSurface] State transition: READY → BUFFERING");
}
}
// Step 5: Try to dequeue output buffer
// CRITICAL: MediaCodec is ASYNCHRONOUS - input/output are decoupled
// We must ALWAYS try dequeue, regardless of buffering state
bool hasFrame = ProcessOutputBuffer(output_frame);
if (!hasFrame) {
std::lock_guard<std::mutex> lock(m_stateMutex);
// Check state to determine return semantic
if (m_state == DecoderState::BUFFERING) {
m_bufferingPacketCount++;
LOGF_DEBUG("[DecodeToSurface] BUFFERING: packet %d accepted, no output yet",
m_bufferingPacketCount);
// Transition to DECODING when we get first output
// (will happen on next call when ProcessOutputBuffer succeeds)
return false; // VAVCORE_PACKET_ACCEPTED
}
if (m_state == DecoderState::FLUSHING) {
// Flush complete - no more frames
LOGF_INFO("[DecodeToSurface] Flush complete: all frames drained");
return false; // VAVCORE_END_OF_STREAM
}
// DECODING state but no output ready
LOGF_DEBUG("[DecodeToSurface] DECODING: packet accepted, output not ready");
return false; // VAVCORE_PACKET_ACCEPTED
}
// Step 6: First frame received - transition to DECODING
{
std::lock_guard<std::mutex> lock(m_stateMutex);
if (m_state == DecoderState::BUFFERING) {
m_state = DecoderState::DECODING;
LOGF_INFO("[DecodeToSurface] State transition: BUFFERING → DECODING (first frame)");
}
}
// Step 7: Frame successfully decoded - setup metadata
output_frame.width = m_width;
output_frame.height = m_height;
if (target_type == VAVCORE_SURFACE_ANDROID_NATIVE_WINDOW) {
output_frame.color_space = ColorSpace::EXTERNAL_OES; // Android SurfaceTexture
}
IncrementFramesDecoded();
LOGF_DEBUG("[DecodeToSurface] Frame %llu decoded successfully", m_stats.frames_decoded);
return true; // Frame successfully rendered
}
```
**Main changes**:
1. **Set the surface first**: update target_surface before queueing input
2. **Always dequeue output**: call `ProcessOutputBuffer()` even while buffering
3. **State-based return**: return false according to BUFFERING/DECODING/FLUSHING
4. **First-frame transition**: BUFFERING → DECODING when the first frame is output
### 3.3 Removing initial buffering (a MediaCodec API property)
**Important**: unlike NVDEC, MediaCodec does **not need a fixed buffering count**.
**Reason**:
```cpp
// MediaCodec is an asynchronous pipeline - input and output are fully decoupled
// - Input: dequeueInputBuffer → queueInputBuffer (returns immediately)
// - Output: dequeueOutputBuffer (returns when a frame is ready)
//
// MediaCodec buffers automatically until the first frame is output, so no separate counting is needed!
```
**Improved state transitions**:
```cpp
// State is determined solely by output status
READY     → BUFFERING (first packet queued)
BUFFERING → DECODING  (first frame output, i.e. ProcessOutputBuffer() succeeds)
DECODING  → FLUSHING  (NULL packet received)
```
**Code to remove**:
```cpp
// ❌ Remove: unnecessary buffering counter
// #define VAVCORE_MEDIACODEC_INITIAL_BUFFERING 5
// int m_bufferingPacketCount;
```
---
## 🔄 4. B-frame Reordering Considerations
### 4.1 MediaCodec's automatic reordering
**NVDEC vs MediaCodec**:
- **NVDEC**: manual reordering required (DisplayQueue + PTS priority queue)
- **MediaCodec**: automatic reordering (handled internally by `AMediaCodec_getOutputBuffer()`)
**Conclusion**: MediaCodec does not need a DisplayQueue!
- MediaCodec performs PTS-based reordering internally
- PTS is exposed via the `BufferInfo.presentationTimeUs` field
- VavCore can consume MediaCodec's output order as-is
### 4.2 Improved PTS propagation
```cpp
// Extract and set the PTS inside ProcessOutputBuffer
bool MediaCodecAV1Decoder::ProcessOutputBuffer(VideoFrame& frame) {
// ... existing code ...
AMediaCodecBufferInfo bufferInfo;
ssize_t bufferIndex = AMediaCodec_dequeueOutputBuffer(m_codec, &bufferInfo, timeoutUs);
if (bufferIndex >= 0) {
// Extract PTS from MediaCodec
int64_t pts_us = bufferInfo.presentationTimeUs;
// Set frame metadata
frame.timestamp_ns = static_cast<uint64_t>(pts_us * 1000); // Convert µs to ns
frame.timestamp_seconds = static_cast<double>(pts_us) / 1000000.0;
// ... rest of processing ...
}
}
```
---
## 📊 5. Implementation Priorities
### Phase 1: Introduce the state machine (required) ✅ **COMPLETED** (2025-10-11)
- [x] Define the `DecoderState` enum - MediaCodecAV1Decoder.h:33-38
- [x] Add the `m_state` member variable - MediaCodecAV1Decoder.h:188
- [x] Add `m_stateMutex` - MediaCodecAV1Decoder.h:189
- [x] Implement the state transition logic - MediaCodecAV1Decoder.cpp:44 (constructor)
### Phase 2: Core DecodeToSurface() changes (required) ✅ **COMPLETED** (2025-10-11)
- [x] **Set the surface first**: call `AMediaCodec_setOutputSurface()` before queueing input - line 220-229
- [x] **Add output dequeue**: call `ProcessOutputBuffer()` even while buffering - line 254
- [x] **State-based return**: decide false/true from hasFrame and m_state - line 256-271
- [x] **NULL packet handling**: FLUSHING state transition - line 206-210
- [x] **State transition logic**: READY → BUFFERING → DECODING implemented - line 242-280
### Phase 3: Leverage ProcessOutputBuffer() (required) ✅ **ALREADY IMPLEMENTED**
- [x] Check the return value of `m_buffer_processor->DequeueOutputBuffer()` - line 838
- [x] PTS metadata already extracted (handled by the BufferProcessor) - MediaCodecBufferProcessor.cpp
- [x] Verify the `render=true` flag for surface rendering - MediaCodecBufferProcessor.cpp
### Phase 4: Remove unnecessary code (recommended) ⚠️ **NOT REQUIRED**
- [x] ~~MEDIACODEC_INITIAL_BUFFERING constant~~ - never existed in the first place
- [x] ~~m_bufferingPacketCount~~ - never existed (the output-state-only design is already applied)
### Phase 5: Testing and validation (required) ⏳ **PENDING**
- [ ] Single-frame decoding test
- [ ] Verify initial buffering behavior (confirm it is handled automatically)
- [ ] Flush mode test (EOF handling) - see the sketch after this list
- [ ] Verify playback of B-frame video (MediaCodec automatic reordering)
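A rough sketch of how the flush-mode test could drive the new API. The packet source (`GetNextPacket`), the window handle, and the harness around it are assumptions; only `DecodeToSurface()` and its false-return semantics come from this document.
```cpp
#include <cstddef>
#include <cstdint>
// Assumes the MediaCodecAV1Decoder and VideoFrame headers from VavCore are available.

// Hypothetical helper supplying demuxed AV1 packets - not part of VavCore.
bool GetNextPacket(const uint8_t** data, size_t* size);

void TestFlushMode(VavCore::MediaCodecAV1Decoder& decoder, ANativeWindow* window) {
    VavCore::VideoFrame frame{};
    const uint8_t* data = nullptr;
    size_t size = 0;

    // 1) Feed all packets; false here just means "accepted, no frame yet".
    while (GetNextPacket(&data, &size)) {
        decoder.DecodeToSurface(data, size,
                                VAVCORE_SURFACE_ANDROID_NATIVE_WINDOW, window, frame);
    }

    // 2) EOF: pass a NULL packet repeatedly to drain buffered frames.
    //    true  -> one more frame was rendered to the surface
    //    false -> decoder fully drained (maps to VAVCORE_END_OF_STREAM)
    int drained = 0;
    while (decoder.DecodeToSurface(nullptr, 0,
                                   VAVCORE_SURFACE_ANDROID_NATIVE_WINDOW, window, frame)) {
        ++drained;
    }
    // Expectation: drained > 0 for any stream longer than the pipeline depth.
}
```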
---
## ⚠️ 6. Caveats
### 6.1 MediaCodec API characteristics (CRITICAL)
**MediaCodec is an asynchronous pipeline - input and output are fully decoupled**:
```cpp
// Input pipeline (returns immediately)
AMediaCodec_dequeueInputBuffer()   // get an empty input buffer
AMediaCodec_queueInputBuffer()     // queue a packet (returns immediately!)
// Output pipeline (returns when a frame is ready)
AMediaCodec_dequeueOutputBuffer()  // get a decoded frame (may wait)
AMediaCodec_releaseOutputBuffer()  // render or release the frame
```
**Key difference**:
- **NVDEC**: `cuvidParseVideoData()` → synchronous callback → immediate frame output
- **MediaCodec**: `queueInputBuffer()` → asynchronous decoding → `dequeueOutputBuffer()` later
**Design implications** (see the drain sketch after this list):
1. ✅ `ProcessInputBuffer()` succeeding ≠ a frame was output
2. ✅ `ProcessOutputBuffer()` must always be called to obtain a frame
3. ✅ The first few packets may be input-only with no output (pipeline filling)
4. ✅ On flush, keep calling `dequeueOutputBuffer()` to drain the remaining frames
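To make point 4 concrete, a minimal end-of-stream drain loop using the NDK MediaCodec C API; the timeout value, retry bound, and the decision to render every frame are assumptions, while the calls themselves are standard NdkMediaCodec functions.
```cpp
#include <media/NdkMediaCodec.h>

// Sketch: signal EOS on the input side, then drain the output side.
static void DrainAtEndOfStream(AMediaCodec* codec) {
    // 1) Queue an empty buffer carrying the END_OF_STREAM flag.
    ssize_t in = AMediaCodec_dequeueInputBuffer(codec, /*timeoutUs=*/10000);
    if (in >= 0) {
        AMediaCodec_queueInputBuffer(codec, in, /*offset=*/0, /*size=*/0,
                                     /*presentationTimeUs=*/0,
                                     AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM);
    }

    // 2) Keep dequeuing outputs until the EOS flag comes back out.
    AMediaCodecBufferInfo info{};
    for (int attempts = 0; attempts < 1000; ++attempts) {
        ssize_t out = AMediaCodec_dequeueOutputBuffer(codec, &info, /*timeoutUs=*/10000);
        if (out >= 0) {
            // render=true sends the frame to the configured output surface.
            AMediaCodec_releaseOutputBuffer(codec, out, /*render=*/true);
            if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM) {
                break;  // all buffered frames drained
            }
        }
        // Negative values (TRY_AGAIN_LATER, FORMAT/BUFFERS_CHANGED) are
        // informational here - just keep polling.
    }
}
```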
### 6.2 Differences from NVDEC
| Item | NVDEC | MediaCodec |
|------|-------|------------|
| DPB management | Manual (CUDA DPB) | Automatic (inside MediaCodec) |
| B-frame reordering | Manual (DisplayQueue) | Automatic (internal) |
| Initial buffering | 16 frames | 5 frames (recommended) |
| Flush handling | ENDOFSTREAM flag | `AMediaCodec_flush()` |
| Synchronization | cuvidGetDecodeStatus | dequeueOutputBuffer |
### 6.3 Changed meaning of a false return
**Before (incorrect assumption)**:
```cpp
// false = an error occurred
// true = success
```
**After (NVDEC model applied)**:
```cpp
// false = no frame (still buffering or EOF)
// → VAVCORE_PACKET_ACCEPTED or VAVCORE_END_OF_STREAM
// true = a frame was output
// → VAVCORE_SUCCESS
```
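A sketch of how a C API wrapper could translate this bool into result codes. The wrapper name, its parameters, and the `IsFlushing()` state query are hypothetical; only the VAVCORE_* result codes and the false/true semantics come from the text above.
```cpp
// Hypothetical wrapper - vavcore_decode_to_surface_ex and IsFlushing() are
// illustrative names, not existing VavCore API.
VavCoreResult vavcore_decode_to_surface_ex(VavCore::MediaCodecAV1Decoder* decoder,
                                           const uint8_t* packet, size_t size,
                                           VavCoreSurfaceType type, void* surface,
                                           VavCore::VideoFrame* out_frame) {
    bool has_frame = decoder->DecodeToSurface(packet, size, type, surface, *out_frame);
    if (has_frame) {
        return VAVCORE_SUCCESS;          // true  -> a frame was rendered
    }
    if (decoder->IsFlushing()) {         // hypothetical state query
        return VAVCORE_END_OF_STREAM;    // false while FLUSHING -> stream fully drained
    }
    return VAVCORE_PACKET_ACCEPTED;      // false otherwise -> packet accepted, no frame yet
}
```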
---
## 🎯 7. Expected Benefits
### 7.1 API consistency
- ✅ DecodeToSurface() behaves the same as NVDEC
- ✅ Identical return-value handling in the C API wrapper
- ✅ Simpler integration with Vav2Player
### 7.2 Improved stability
- ✅ Initial buffering handled explicitly
- ✅ Accurate EOF/flush detection
- ✅ Predictable behavior thanks to the state machine
### 7.3 Performance optimization
- ✅ Eliminates unnecessary decode attempts
- ✅ Lower CPU usage while buffering
- ✅ Minimizes frame drops
---
## 📝 8. Next Actions
### Immediate (this task)
1. Add the state machine enum and member variables
2. Refactor DecodeToSurface() (false-return logic)
3. Improve PTS extraction in ProcessOutputBuffer()
### Short-term (next task)
1. Write and run unit tests
2. Android Vulkan Player integration test
3. Verify B-frame video playback
### Long-term (future improvements)
1. Async mode optimization (MediaCodecAsyncHandler)
2. Strengthen HardwareBuffer integration
3. Multi-codec support (VP9, H.264)
---
**Document version**: 1.0
**Last updated**: 2025-10-11
**Author**: Claude Code (Sonnet 4.5)

View File

@@ -249,6 +249,13 @@ bool VavCoreVulkanBridge::ProcessNextFrame() {
return false;
}
// Check if renderer is initialized
if (!m_vulkanRenderer || !m_vulkanRenderer->IsInitialized()) {
LOGE("Renderer not available for frame rendering");
m_droppedFrameCount++;
return false;
}
// Decode next frame directly
VavCoreVideoFrame frame = {};
VavCoreResult result = vavcore_decode_next_frame(m_player, &frame);
@@ -450,7 +457,7 @@ void VavCoreVulkanBridge::CleanupVulkanRenderer() {
}
void VavCoreVulkanBridge::OnSurfaceChanged(uint32_t width, uint32_t height) {
if (m_vulkanRenderer) {
if (m_vulkanRenderer && m_vulkanRenderer->IsInitialized()) {
m_vulkanRenderer->OnSurfaceChanged(width, height);
}
}

View File

@@ -106,6 +106,28 @@ Java_com_vavcore_player_VulkanVideoView_nativeDestroyVideoPlayer(JNIEnv* env, jo
}
}
/**
* Re-initialize Vulkan renderer with new surface
*/
JNIEXPORT jboolean JNICALL
Java_com_vavcore_player_VulkanVideoView_nativeReinitializeRenderer(JNIEnv* env, jobject thiz, jlong playerPtr, jobject surface) {
VavCoreVulkanBridge* player = reinterpret_cast<VavCoreVulkanBridge*>(playerPtr);
if (player == nullptr) {
LOGE("Invalid player pointer");
return JNI_FALSE;
}
ANativeWindow* window = ANativeWindow_fromSurface(env, surface);
if (window == nullptr) {
LOGE("Failed to get native window from surface");
return JNI_FALSE;
}
bool success = player->ReinitializeRenderer(window);
// Note: Don't release window here as the player takes ownership
return success ? JNI_TRUE : JNI_FALSE;
}
/**
* Load video file for playback
*/
@@ -134,13 +156,17 @@ Java_com_vavcore_player_VulkanVideoView_nativeLoadVideo(JNIEnv* env, jobject thi
*/
JNIEXPORT jboolean JNICALL
Java_com_vavcore_player_VulkanVideoView_nativePlay(JNIEnv* env, jobject thiz, jlong playerPtr) {
LOGI("nativePlay() called with playerPtr=%p", (void*)playerPtr);
VavCoreVulkanBridge* player = reinterpret_cast<VavCoreVulkanBridge*>(playerPtr);
if (player == nullptr) {
LOGE("Invalid player pointer");
return JNI_FALSE;
}
return player->Play() ? JNI_TRUE : JNI_FALSE;
LOGI("Calling player->Play()...");
bool result = player->Play();
LOGI("player->Play() returned: %d", result);
return result ? JNI_TRUE : JNI_FALSE;
}
/**

View File

@@ -326,12 +326,38 @@ public class VulkanVideoView extends SurfaceView implements SurfaceHolder.Callba
surfaceCreated = true;
android.util.Log.i(TAG, "Surface created, ready for video loading");
// Create or re-create player when surface is created
if (nativeVideoPlayer == 0) {
android.util.Log.i(TAG, "Creating VavCore-Vulkan video player...");
nativeVideoPlayer = nativeCreateVideoPlayer(surfaceHolder.getSurface());
if (nativeVideoPlayer == 0) {
android.util.Log.e(TAG, "Failed to create VavCore-Vulkan video player");
return;
}
android.util.Log.i(TAG, "VavCore-Vulkan video player created successfully");
} else {
// Player exists but renderer was destroyed - re-initialize it with new surface
android.util.Log.i(TAG, "Re-initializing Vulkan renderer with new surface...");
if (!nativeReinitializeRenderer(nativeVideoPlayer, surfaceHolder.getSurface())) {
android.util.Log.e(TAG, "Failed to re-initialize Vulkan renderer");
return;
}
android.util.Log.i(TAG, "Vulkan renderer re-initialized successfully");
}
// If there's a pending video load, process it now
if (pendingVideoPath != null && nativeVideoPlayer != 0) {
if (pendingVideoPath != null) {
android.util.Log.i(TAG, "Processing pending video load: " + pendingVideoPath);
String path = pendingVideoPath;
pendingVideoPath = null;
loadVideo(path);
// Load video file
android.util.Log.i(TAG, "Loading video file: " + path);
boolean success = nativeLoadVideo(nativeVideoPlayer, path);
if (success) {
android.util.Log.i(TAG, "Video file loaded successfully");
} else {
android.util.Log.e(TAG, "Failed to load video file");
}
}
}
}
@@ -501,6 +527,7 @@ public class VulkanVideoView extends SurfaceView implements SurfaceHolder.Callba
// Native method declarations for VavCore-Vulkan integration
private native long nativeCreateVideoPlayer(Object surface);
private native void nativeDestroyVideoPlayer(long playerPtr);
private native boolean nativeReinitializeRenderer(long playerPtr, Object surface);
private native boolean nativeLoadVideo(long playerPtr, String filePath);
private native boolean nativePlay(long playerPtr);
private native boolean nativePause(long playerPtr);

View File

@@ -74,6 +74,8 @@ set(VAVCORE_ANDROID_SOURCES
${VAVCORE_ROOT}/src/Decoder/MediaCodecSurfaceManager.cpp
${VAVCORE_ROOT}/src/Decoder/AV1Decoder.cpp
${VAVCORE_ROOT}/src/FileIO/WebMFileReader.cpp
${VAVCORE_ROOT}/src/Common/VavCoreLogger.cpp
${VAVCORE_ROOT}/src/Common/ImageUtils.cpp
)
# All source files for Android

View File

@@ -1,5 +1,40 @@
#include "pch.h"
#include "ImageUtils.h"
#ifdef ANDROID
// Android stub implementations - Windows-only functionality
// These functions are platform-specific debugging tools not needed for Android runtime
#include <android/log.h>
#define LOG_TAG "VavCore-ImageUtils"
#define LOGW(...) __android_log_print(ANDROID_LOG_WARN, LOG_TAG, __VA_ARGS__)
namespace VavCore {
bool ImageUtils::YUV420PToRGB(const VideoFrame& yuv_frame, uint8_t* rgb_buffer) {
LOGW("ImageUtils::YUV420PToRGB - Not implemented on Android (Windows-only debug feature)");
return false;
}
bool ImageUtils::SaveRGB24ToBMP(const char* filename, const uint8_t* rgb_data, uint32_t width, uint32_t height) {
LOGW("ImageUtils::SaveRGB24ToBMP - Not implemented on Android (Windows-only debug feature)");
return false;
}
bool ImageUtils::SaveYUV420PToBMP(const char* filename, const VideoFrame& yuv_frame) {
LOGW("ImageUtils::SaveYUV420PToBMP - Not implemented on Android (Windows-only debug feature)");
return false;
}
bool ImageUtils::CreateDirectoryIfNotExists(const char* dir_path) {
LOGW("ImageUtils::CreateDirectoryIfNotExists - Not implemented on Android (Windows-only debug feature)");
return false;
}
} // namespace VavCore
#else
// Windows implementation
#include "VavCoreLogger.h"
#include <Windows.h>
#include <cstdio>
@@ -289,3 +324,5 @@ bool ImageUtils::CreateDirectoryIfNotExists(const char* dir_path) {
}
} // namespace VavCore
#endif // ANDROID

View File

@@ -8,6 +8,11 @@
#include <Windows.h>
#endif
#ifdef ANDROID
#include <android/log.h>
#define ANDROID_LOG_TAG "VavCore"
#endif
namespace VavCore {
VavCoreLogger& VavCoreLogger::GetInstance() {
@@ -64,6 +69,18 @@ void VavCoreLogger::LogFormattedV(VC_LogLevel level, const char* format, va_list
char buffer[1024];
vsnprintf(buffer, sizeof(buffer), format, args);
#ifdef ANDROID
// Android logcat output
android_LogPriority priority;
switch (level) {
case VC_LogLevel::VC_DEBUG: priority = ANDROID_LOG_DEBUG; break;
case VC_LogLevel::VC_INFO: priority = ANDROID_LOG_INFO; break;
case VC_LogLevel::VC_WARNING: priority = ANDROID_LOG_WARN; break;
case VC_LogLevel::VC_ERROR: priority = ANDROID_LOG_ERROR; break;
default: priority = ANDROID_LOG_INFO; break;
}
__android_log_print(priority, ANDROID_LOG_TAG, "%s", buffer);
#else
// Output to console
if (level == VC_LogLevel::VC_ERROR || level == VC_LogLevel::VC_WARNING) {
std::cerr << buffer;
@@ -84,6 +101,7 @@ void VavCoreLogger::LogFormattedV(VC_LogLevel level, const char* format, va_list
OutputDebugStringA("\n");
}
#endif
#endif // ANDROID
}
void VavCoreLogger::LogString(VC_LogLevel level, const std::string& message, const char* source) {
@@ -94,6 +112,18 @@ void VavCoreLogger::LogString(VC_LogLevel level, const std::string& message, con
fullMessage = message;
}
#ifdef ANDROID
// Android logcat output
android_LogPriority priority;
switch (level) {
case VC_LogLevel::VC_DEBUG: priority = ANDROID_LOG_DEBUG; break;
case VC_LogLevel::VC_INFO: priority = ANDROID_LOG_INFO; break;
case VC_LogLevel::VC_WARNING: priority = ANDROID_LOG_WARN; break;
case VC_LogLevel::VC_ERROR: priority = ANDROID_LOG_ERROR; break;
default: priority = ANDROID_LOG_INFO; break;
}
__android_log_print(priority, ANDROID_LOG_TAG, "%s", fullMessage.c_str());
#else
// Output to console
if (level == VC_LogLevel::VC_ERROR || level == VC_LogLevel::VC_WARNING) {
std::cerr << fullMessage << std::endl;
@@ -106,6 +136,7 @@ void VavCoreLogger::LogString(VC_LogLevel level, const std::string& message, con
OutputDebugStringA(fullMessage.c_str());
OutputDebugStringA("\n");
#endif
#endif // ANDROID
}
const char* VavCoreLogger::GetLevelString(VC_LogLevel level) {

View File

@@ -41,6 +41,7 @@ MediaCodecAV1Decoder::MediaCodecAV1Decoder()
, m_vk_device(nullptr)
, m_vk_instance(nullptr)
, m_ahardware_buffer(nullptr)
, m_state(DecoderState::READY)
, m_buffer_processor(std::make_unique<MediaCodecBufferProcessor>())
, m_hardware_detector(std::make_unique<MediaCodecHardwareDetector>())
, m_codec_selector(std::make_unique<MediaCodecSelector>())
@@ -201,13 +202,21 @@ bool MediaCodecAV1Decoder::DecodeToSurface(const uint8_t* packet_data, size_t pa
return false;
}
// Step 1: Handle NULL packet as flush mode (EOF)
if (!packet_data || packet_size == 0) {
std::lock_guard<std::mutex> lock(m_state_mutex);
m_state = DecoderState::FLUSHING;
LogInfo("DecodeToSurface: Entering FLUSHING state (NULL packet)");
}
if (target_type == VAVCORE_SURFACE_ANDROID_NATIVE_WINDOW) {
if (!m_hardware_accelerated) {
LogError("Surface decoding requires hardware acceleration");
return false;
}
// Set output surface for hardware acceleration
// Step 2: Update target surface BEFORE processing input
// CRITICAL: MediaCodec needs surface configured before queueing input
ANativeWindow* native_surface = static_cast<ANativeWindow*>(target_surface);
if (native_surface && native_surface != m_surface) {
media_status_t status = AMediaCodec_setOutputSurface(m_codec, native_surface);
@@ -216,18 +225,63 @@ bool MediaCodecAV1Decoder::DecodeToSurface(const uint8_t* packet_data, size_t pa
return false;
}
m_surface = native_surface;
LogInfo("DecodeToSurface: Output surface updated");
}
// Process input buffer
if (!ProcessInputBuffer(packet_data, packet_size)) {
LogError("Failed to process input buffer for surface rendering");
return false;
// Step 3: Process input buffer (feed packet to MediaCodec)
{
std::lock_guard<std::mutex> lock(m_state_mutex);
if (m_state != DecoderState::FLUSHING) {
if (!ProcessInputBuffer(packet_data, packet_size)) {
LogError("Failed to process input buffer for surface rendering");
return false;
}
}
}
// Output will be rendered directly to surface
// No need to copy frame data
// Step 4: Check if initial buffering is needed
{
std::lock_guard<std::mutex> lock(m_state_mutex);
if (m_state == DecoderState::READY) {
m_state = DecoderState::BUFFERING;
LogInfo("DecodeToSurface: Entering BUFFERING state (first packet)");
}
}
// Step 5: Try to dequeue output buffer
// CRITICAL: MediaCodec is ASYNCHRONOUS - input/output are decoupled
// We must ALWAYS try dequeue, regardless of buffering state
bool hasFrame = ProcessOutputBuffer(output_frame);
if (!hasFrame) {
std::lock_guard<std::mutex> lock(m_state_mutex);
if (m_state == DecoderState::BUFFERING) {
LogInfo("DecodeToSurface: No frame available during BUFFERING");
return false; // VAVCORE_PACKET_ACCEPTED
}
if (m_state == DecoderState::FLUSHING) {
LogInfo("DecodeToSurface: No more frames during FLUSHING");
return false; // VAVCORE_END_OF_STREAM
}
LogInfo("DecodeToSurface: No frame available during DECODING");
return false; // VAVCORE_PACKET_ACCEPTED
}
// Step 6: First frame received - transition to DECODING
{
std::lock_guard<std::mutex> lock(m_state_mutex);
if (m_state == DecoderState::BUFFERING) {
m_state = DecoderState::DECODING;
LogInfo("DecodeToSurface: Transition to DECODING state (first frame output)");
}
}
// Output rendered directly to surface
IncrementFramesDecoded();
return true;
return true; // Frame successfully rendered
} else if (target_type == VAVCORE_SURFACE_OPENGL_ES_TEXTURE) {
if (!m_hardware_accelerated) {
@@ -387,7 +441,13 @@ bool MediaCodecAV1Decoder::Reset() {
// Reset priming system
ResetPriming();
LogInfo("MediaCodec decoder reset successfully");
// Reset state machine
{
std::lock_guard<std::mutex> lock(m_state_mutex);
m_state = DecoderState::READY;
}
LogInfo("MediaCodec decoder reset successfully (state: READY)");
return true;
}

View File

@@ -29,6 +29,14 @@
namespace VavCore {
// Decoder state machine for MediaCodec pipeline management
enum class DecoderState {
READY, // Decoder initialized, waiting for first packet
BUFFERING, // Initial buffering - accepting packets but no frame output yet
DECODING, // Normal decoding - outputting frames
FLUSHING // EOF reached, draining remaining frames
};
class MediaCodecAV1Decoder : public IVideoDecoder {
public:
MediaCodecAV1Decoder();
@@ -176,6 +184,10 @@ private:
// Performance tracking
std::chrono::high_resolution_clock::time_point m_decode_start_time;
// State machine management
DecoderState m_state;
mutable std::mutex m_state_mutex;
// Surface members (deprecated - delegated to m_surface_manager)
void* m_egl_context; // Deprecated
uint32_t m_opengl_texture_id; // Deprecated