
What Problem Does This Solve?

ILCKEncoder abstracts platform-specific video encoding so you don’t have to write a separate backend per platform:
  • Windows uses Media Foundation
  • Android uses NDK MediaCodec
  • Quest uses Vulkan texture interop
The interface lets LCK support multiple platforms with a single API. Most developers never interact with this directly—ULCKService handles it all.

When to Use This

Read this if:
  • Building a custom encoder (very advanced)
  • Debugging encoding issues
  • Understanding LCK’s internal architecture
  • Extending LCK to new platforms
Skip this if: You’re just using LCK for recording. Use ULCKService instead.

Interface Definition

class ILCKEncoder : public FRunnable
{
public:
    virtual ~ILCKEncoder() = default;
    
    // Lifecycle
    virtual bool Open() = 0;
    virtual void Close() = 0;
    virtual bool IsEncoding() const = 0;
    
    // Encoding
    virtual void EncodeTexture(FTextureRHIRef& Texture, float TimeSeconds) = 0;
    virtual void EncodeAudio(TArrayView<float> PCMData) = 0;
    
    // Finalization
    virtual void Save(TFunction<void(float)> ProgressCallback) = 0;
    
    // Queries
    virtual float GetAudioTime() const = 0;
    virtual FString GetOutputPath() const = 0;
};
Method           | What It Does                            | When It’s Called
-----------------|-----------------------------------------|------------------------------
Open()           | Initialize encoder, allocate resources  | Before first frame
Close()          | Shut down encoder, free resources       | After recording stops
IsEncoding()     | Check if encoder is active              | State queries
EncodeTexture()  | Encode a video frame                    | Every frame during recording
EncodeAudio()    | Encode audio samples                    | Audio callbacks
Save()           | Finalize and write MP4 file             | After last frame
GetAudioTime()   | Get current audio timestamp             | A/V sync
GetOutputPath()  | Get save file path                      | After save completes
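The call order implied by the table can be sketched as a standalone mock (hypothetical C++, not the real LCK types) that asserts the lifecycle invariants:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for ILCKEncoder's lifecycle (not the real LCK
// types). It asserts the call order from the table above:
// Open -> EncodeTexture/EncodeAudio -> Save -> Close.
class FMockEncoder
{
public:
    bool Open()
    {
        assert(!bEncoding);            // Open() runs before the first frame
        bEncoding = true;
        return true;
    }

    void EncodeTexture(float TimeSeconds)
    {
        assert(bEncoding && TimeSeconds >= 0.0f);
        ++FramesEncoded;
    }

    void EncodeAudio(const std::vector<float>& PCMData)
    {
        assert(bEncoding);
        SamplesEncoded += PCMData.size();
    }

    void Save()  { assert(bEncoding); bSaved = true; }   // after last frame
    void Close() { bEncoding = false; }                  // after recording stops
    bool IsEncoding() const { return bEncoding; }

    int FramesEncoded = 0;
    std::size_t SamplesEncoded = 0;
    bool bSaved = false;

private:
    bool bEncoding = false;
};
```

A caller drives it the same way the real encoder is driven: Open(), per-frame EncodeTexture() plus audio callbacks, then Save() and Close().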

Encoder Factory

Encoders are discovered and created via Unreal’s modular features system:
class ILCKEncoderFactory : public IModularFeature
{
public:
    static FName GetModularFeatureName()
    {
        return TEXT("LCKEncoderFactory");
    }
    
    virtual FString GetEncoderName() const = 0;
    
    virtual TSharedPtr<ILCKEncoder> CreateEncoder(
        int32 Width,          // Video width (e.g., 1920)
        int32 Height,         // Video height (e.g., 1080)
        int32 VideoBitrate,   // Video bitrate in bps (e.g., 8000000 = 8 Mbps)
        int32 Framerate,      // Target FPS (e.g., 30)
        int32 Samplerate,     // Audio sample rate (e.g., 48000 Hz)
        int32 AudioBitrate    // Audio bitrate in bps (e.g., 256000 = 256 Kbps)
    ) = 0;
};

Finding an Encoder at Runtime

ILCKEncoderFactory* Factory = nullptr;
auto& ModularFeatures = IModularFeatures::Get();

if (ModularFeatures.IsModularFeatureAvailable(ILCKEncoderFactory::GetModularFeatureName()))
{
    Factory = &ModularFeatures.GetModularFeature<ILCKEncoderFactory>(
        ILCKEncoderFactory::GetModularFeatureName()
    );
}

if (Factory)
{
    UE_LOG(LogLCK, Log, TEXT("Encoder available: %s"), *Factory->GetEncoderName());
    
    // Create encoder
    TSharedPtr<ILCKEncoder> Encoder = Factory->CreateEncoder(
        1920, 1080,   // Full HD resolution
        8000000,      // 8 Mbps video bitrate
        30,           // 30 FPS
        48000,        // 48 kHz audio
        256000        // 256 Kbps audio bitrate
    );
}

Platform Implementations

Windows: FLCKWindowsEncoder

Technologies:
  • IMFSinkWriter — MP4 muxing
  • IMFTransform — H.264 video encoding
  • IMFMediaType — AAC audio encoding
  • Direct3D 11 texture interop
Key features:
  • Hardware-accelerated encoding via GPU
  • Triple-buffered texture pool (avoids GPU stalls)
  • Async encoding thread
  • Supports DX11 render targets
Error handling example:
HRESULT hr = SinkWriter->WriteSample(VideoStreamIndex, Sample);
if (FAILED(hr))
{
    UE_LOG(LogLCKEncoding, Error, TEXT("WriteSample failed: 0x%08X"), hr);
    
    // Common errors:
    // 0x80070057 = Invalid parameter
    // 0xC00D36B4 = Codec not found
    // 0x8007000E = Out of memory
}
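When triaging these logs, a tiny lookup that turns the common HRESULTs into the descriptions above can save a trip to the headers (a hypothetical helper, not part of LCK; the codes and meanings are the ones listed in the comment):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Map the common Media Foundation HRESULTs listed above to readable
// descriptions for log output (hypothetical helper, not part of LCK).
inline std::string DescribeEncoderHResult(uint32_t Hr)
{
    switch (Hr)
    {
    case 0x80070057u: return "Invalid parameter";
    case 0xC00D36B4u: return "Codec not found";
    case 0x8007000Eu: return "Out of memory";
    default:          return "Unknown HRESULT";
    }
}
```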
Triple-buffered texture pool:
// Why triple buffering?
// 1. GPU is rendering to texture 0
// 2. Encoder is reading from texture 1  
// 3. Texture 2 is free for next frame
// Result: No GPU-CPU sync stalls

class FTexturePool
{
    static constexpr int32 PoolSize = 3;
    TArray<FTextureRHIRef> Textures;   // allocated up front (PoolSize entries)
    int32 CurrentIndex = 0;
    
public:
    FTextureRHIRef GetNextTexture()
    {
        FTextureRHIRef Texture = Textures[CurrentIndex];
        CurrentIndex = (CurrentIndex + 1) % PoolSize;
        return Texture;
    }
};

Android: FLCKAndroidEncoder

Technologies:
  • AMediaCodec — H.264/AAC hardware encoding
  • AMediaMuxer — MP4 container
  • Vulkan/EGL texture interop
  • Android Hardware Buffer
Key features:
  • Hardware-accelerated encoding on Quest
  • Vulkan texture export via EGL
  • Low-latency pipeline
  • Direct write to device storage
Vulkan interop flow:
// 1. Export Vulkan texture to Android Hardware Buffer
VkExternalMemoryHandleTypeFlagBits HandleType = 
    VK_EXTERNAL_MEMORY_HANDLE_TYPE_ANDROID_HARDWARE_BUFFER_BIT_ANDROID;

// 2. Import into MediaCodec surface
AMediaCodec_queueInputBuffer(Codec, BufferIndex, 0, Size, TimeUs, 0);

// 3. MediaCodec encodes directly from GPU memory
// No CPU-side texture readback needed!
Why Vulkan interop is critical:
  • Quest uses Vulkan for rendering
  • CPU readback would be too slow (kills performance)
  • EGL interop lets encoder access GPU memory directly
  • This is why LCKVulkan module must load at EarliestPossible phase
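Back-of-the-envelope arithmetic shows why: reading even a modest capture resolution back to the CPU moves hundreds of megabytes per second (illustrative numbers; 1080p RGBA8 at a 72 Hz Quest refresh rate is an assumption, not a measured LCK figure):

```cpp
#include <cassert>
#include <cstdint>

// Rough CPU-readback bandwidth for one second of capture (illustrative).
constexpr uint64_t Width         = 1920;
constexpr uint64_t Height        = 1080;
constexpr uint64_t BytesPerPixel = 4;    // RGBA8
constexpr uint64_t Fps           = 72;   // Quest refresh rate (assumed)

constexpr uint64_t BytesPerFrame  = Width * Height * BytesPerPixel; // ~8.3 MB
constexpr uint64_t BytesPerSecond = BytesPerFrame * Fps;            // ~597 MB/s

static_assert(BytesPerFrame == 8294400ULL, "one 1080p RGBA8 frame");
```

All of that would have to cross the GPU-CPU boundary every second, on top of the sync stalls it causes, which is why keeping frames in GPU memory matters so much on Quest.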

Data Flow

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Scene Capture  │────>│  Render Target  │────>│  Texture Pool   │
│  Component      │     │  (RenderTarget) │     │  (3 buffers)    │
└─────────────────┘     └─────────────────┘     └────────┬────────┘
                                                         │
                                                         v
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   MP4 File      │<────│  Video Encoder  │<────│  GPU Readback   │
│  (Movies dir)   │     │  (H.264, AAC)   │     │  (RHI Command)  │
└─────────────────┘     └────────┬────────┘     └─────────────────┘
                                 ^
                                 │
                     ┌───────────┴───────────┐
                     │   Audio Mixer         │
                     │   (Game, Mic, Vivox)  │
                     └───────────────────────┘

Audio Encoding

Audio flows from audio sources → mixer → encoder:
void ILCKEncoder::EncodeAudio(TArrayView<float> PCMData)
{
    // Input: 32-bit float, interleaved stereo
    // Sample rate: typically 48000 Hz
    // Format: [L, R, L, R, L, R, ...]
    
    // Convert float (-1.0 to 1.0) to int16
    TArray<int16> IntSamples;
    IntSamples.SetNum(PCMData.Num());
    
    for (int32 i = 0; i < PCMData.Num(); ++i)
    {
        float Sample = FMath::Clamp(PCMData[i], -1.0f, 1.0f);
        IntSamples[i] = static_cast<int16>(Sample * 32767.0f);
    }
    
    // Pass to platform encoder
    // Windows: IMFTransform (AAC)
    // Android: AMediaCodec (AAC)
}
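GetAudioTime() is what A/V sync compares against video timestamps; it is typically derived from the running count of encoded samples (a sketch assuming interleaved stereo as in the comment above; the real implementations may track time differently):

```cpp
#include <cassert>
#include <cstdint>

// Audio timestamp from the number of interleaved samples encoded so far.
// Interleaved stereo carries 2 samples per audio frame, so dividing by
// SampleRate * NumChannels yields elapsed seconds.
inline double AudioTimeSeconds(uint64_t InterleavedSamples,
                               uint32_t SampleRate  = 48000,
                               uint32_t NumChannels = 2)
{
    return static_cast<double>(InterleavedSamples) /
           (static_cast<double>(SampleRate) * static_cast<double>(NumChannels));
}
```

For example, 96,000 interleaved samples at 48 kHz stereo corresponds to exactly one second of audio.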

Thread Safety

Encoding runs on a dedicated background thread:
class ILCKEncoder : public FRunnable
{
protected:
    FRunnableThread* EncoderThread;
    FCriticalSection EncodingMutex;
    TQueue<FEncodingTask> TaskQueue;
    std::atomic<bool> bShouldRun;
    
public:
    virtual uint32 Run() override
    {
        while (bShouldRun)
        {
            FEncodingTask Task;
            if (TaskQueue.Dequeue(Task))
            {
                FScopeLock Lock(&EncodingMutex);
                ProcessTask(Task);
            }
            else
            {
                // Yield briefly so an empty queue doesn't spin a core
                FPlatformProcess::Sleep(0.001f);
            }
        }
        return 0;
    }
};
Why threading matters:
  • Encoding is CPU-intensive
  • Running on game thread would cause stuttering
  • Background thread keeps game smooth
  • Queue-based design prevents race conditions
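The same queue-and-worker shape can be sketched with standard C++ primitives, independent of the engine (a simplified stand-in for the FRunnable/TQueue machinery; a condition variable blocks the worker while the queue is empty):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Simplified encoder worker: the game thread enqueues tasks, a background
// thread drains them, so encoding never blocks the frame.
class EncoderWorker
{
public:
    EncoderWorker() : Worker([this] { Run(); }) {}
    ~EncoderWorker() { Stop(); }

    void Enqueue(int FrameIndex)
    {
        {
            std::lock_guard<std::mutex> Lock(Mutex);
            Tasks.push(FrameIndex);
        }
        Cond.notify_one();
    }

    // Drains any remaining tasks, then joins the worker thread.
    void Stop()
    {
        {
            std::lock_guard<std::mutex> Lock(Mutex);
            bShouldRun = false;
        }
        Cond.notify_one();
        if (Worker.joinable()) Worker.join();
    }

    std::vector<int> Processed;  // frames handled by the worker thread

private:
    void Run()
    {
        std::unique_lock<std::mutex> Lock(Mutex);
        while (bShouldRun || !Tasks.empty())
        {
            Cond.wait(Lock, [this] { return !Tasks.empty() || !bShouldRun; });
            while (!Tasks.empty())
            {
                int Task = Tasks.front();
                Tasks.pop();
                Processed.push_back(Task);  // stand-in for ProcessTask()
            }
        }
    }

    std::mutex Mutex;
    std::condition_variable Cond;
    std::queue<int> Tasks;
    bool bShouldRun = true;
    std::thread Worker;  // declared last so the other members exist first
};
```

Stop() guarantees queued tasks are flushed before the thread exits, mirroring why Save() must run after the last frame is enqueued.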

Creating a Custom Encoder

This is advanced usage. Most developers should use the built-in platform encoders. Only create a custom encoder if:
  • You need a different codec (HEVC, VP9)
  • You need a different container (WebM, AVI)
  • You’re porting LCK to a new platform

Step 1: Implement ILCKEncoder

class FMyCustomEncoder : public ILCKEncoder
{
public:
    virtual bool Open() override
    {
        // Initialize your encoder
        // Allocate buffers, set up codec
        return true;
    }
    
    virtual void Close() override
    {
        // Clean up resources
    }
    
    virtual bool IsEncoding() const override
    {
        return bIsActive;
    }
    
    virtual void EncodeTexture(FTextureRHIRef& Texture, float TimeSeconds) override
    {
        // 1. Read texture from GPU
        // 2. Convert to encoder's expected format
        // 3. Pass to codec
    }
    
    virtual void EncodeAudio(TArrayView<float> PCMData) override
    {
        // 1. Convert float to int16
        // 2. Pass to audio codec
    }
    
    virtual void Save(TFunction<void(float)> ProgressCallback) override
    {
        // 1. Flush encoder
        // 2. Write file
        // 3. Call ProgressCallback(0.0 to 1.0)
    }
    
    virtual float GetAudioTime() const override
    {
        return CurrentAudioTime;
    }
    
    virtual FString GetOutputPath() const override
    {
        return OutputFilePath;
    }
    
private:
    bool bIsActive = false;
    float CurrentAudioTime = 0.0f;
    FString OutputFilePath;
};

Step 2: Create Factory

class FMyEncoderFactory : public ILCKEncoderFactory
{
public:
    virtual FString GetEncoderName() const override
    {
        return TEXT("MyCustomEncoder");
    }
    
    virtual TSharedPtr<ILCKEncoder> CreateEncoder(
        int32 Width, int32 Height, int32 VideoBitrate,
        int32 Framerate, int32 Samplerate, int32 AudioBitrate) override
    {
        TSharedPtr<FMyCustomEncoder> Encoder = MakeShared<FMyCustomEncoder>();
        
        // Configure encoder
        Encoder->SetResolution(Width, Height);
        Encoder->SetBitrate(VideoBitrate, AudioBitrate);
        Encoder->SetFramerate(Framerate);
        
        return Encoder;
    }
};

Step 3: Register via Modular Features

class FMyEncoderModule : public IModuleInterface
{
private:
    FMyEncoderFactory EncoderFactory;
    
public:
    virtual void StartupModule() override
    {
        // Register encoder factory
        IModularFeatures::Get().RegisterModularFeature(
            ILCKEncoderFactory::GetModularFeatureName(),
            &EncoderFactory
        );
        
        UE_LOG(LogLCK, Log, TEXT("MyCustomEncoder registered"));
    }
    
    virtual void ShutdownModule() override
    {
        // Unregister
        IModularFeatures::Get().UnregisterModularFeature(
            ILCKEncoderFactory::GetModularFeatureName(),
            &EncoderFactory
        );
    }
};

IMPLEMENT_MODULE(FMyEncoderModule, MyEncoder)

Debugging Encoder Issues

Enable Verbose Logging

; DefaultEngine.ini
[Core.Log]
LogLCKEncoding=VeryVerbose
What you’ll see:
LogLCKEncoding: Encoder initialized: Windows Media Foundation
LogLCKEncoding: Video: 1920x1080 @ 30fps, 8 Mbps
LogLCKEncoding: Audio: 48000 Hz stereo, 256 Kbps
LogLCKEncoding: Frame 0 encoded (8.2ms)
LogLCKEncoding: Frame 30 encoded (7.9ms)
LogLCKEncoding: Audio buffer: 2048 samples, 42.7ms
LogLCKEncoding: Finalizing video file...
LogLCKEncoding: MP4 saved: C:/Users/.../recording_001.mp4

Common Encoder Errors

Windows:
LogLCKEncoding: Error: IMFSinkWriter creation failed (0xC00D36B4)
Cause: H.264 codec not installed (rare on Windows 10/11)
Android:
LogLCKEncoding: Error: AMediaCodec configure failed
Cause: Unsupported resolution or bitrate for the device; try lower values
Vulkan interop:
LogLCKEncoding: Error: Vulkan texture export failed
Fix: Ensure LCKVulkan module loads at EarliestPossible phase

Key Takeaways

Platform abstraction — One interface, multiple implementations
Modular features — Runtime encoder discovery and creation
Triple buffering — Prevents GPU stalls on texture readback
Background thread — Encoding doesn’t block game thread
Most devs don’t need this — Use ULCKService for recording