
What Problem Does This Solve?

Understanding LCK’s architecture helps you:
  • Debug issues by knowing which subsystem handles what
  • Extend the SDK with custom encoders or audio sources
  • Optimize performance by understanding the data flow
  • Build custom UI without breaking core functionality
This page maps out how LCK’s modules work together in Unreal Engine.

When to Read This

Read this when:
  • Integrating LCK for the first time
  • Building custom recording UI
  • Creating custom audio sources (FMOD, Wwise)
  • Debugging recording or encoding issues
  • Contributing to LCK development
Skip this if you’re just using the default tablet UI.

High-Level Overview

LCK is organized into runtime modules that load in specific phases:
LCKVulkan (EarliestPossible) ← Android Vulkan interop

LCKCore (PostDefault) ← Recording subsystem, encoders
    ├── LCKAudio ← Audio capture framework
    ├── LCKWindowsEncoder ← Windows Media Foundation
    ├── LCKAndroidEncoder ← Android MediaCodec
    └── LCKAndroidGallery ← Save to Android gallery

LCKTablet (Default) ← High-level service, tablet UI
    └── LCKUI ← 3D UI components (buttons, sliders)

Optional Audio Plugins:
    ├── LCKFMOD ← FMOD integration
    ├── LCKWwise ← Wwise integration
    ├── LCKVivox ← Vivox voice chat
    └── LCKOboe ← Android low-latency mic
Key principle: Lower modules (Core, Audio) know nothing about higher modules (Tablet, UI). This lets you build custom UI without modifying core functionality.

Subsystem Hierarchy

LCK uses Unreal’s subsystem architecture for lifetime management:
UWorld
├── ULCKRecorderSubsystem (TickableWorldSubsystem)
│   └── ILCKEncoder (platform-specific)
├── ULCKSubsystem (WorldSubsystem)
│   └── ULCKService (high-level API)
└── ALCKTablet (Actor)
    └── ULCKTabletDataModel (state management)

UGameInstance
└── ULCKTelemetrySubsystem (GameInstanceSubsystem)

ULCKRecorderSubsystem

Type: UTickableWorldSubsystem (ticks every frame)
Module: LCKCore
Purpose: Low-level recording control, frame capture, encoder lifecycle
UCLASS()
class LCKCORE_API ULCKRecorderSubsystem : public UTickableWorldSubsystem
{
    GENERATED_BODY()

public:
    // Recording control
    void SetupRecorder(const FLCKRecorderParams& Params, USceneCaptureComponent2D* Capture);
    void StartRecording();
    void StopRecording();
    void StartRecordingAsync(FOnLCKRecorderBoolResult Callback);
    void StopRecordingAsync(FOnLCKRecorderBoolResult Callback, FOnLCKRecorderProgress Progress);
    
    // Preview mode (camera without recording)
    void StartPreview();
    void StopPreview();
    
    // Photo capture
    void TakePhoto();
    
    // State queries
    bool IsRecording() const;
    float GetTime() const;
    float GetMicrophoneVolume() const;
};
When to use: Advanced scenarios where you need direct control over the encoder. Most developers should use ULCKService instead.

ULCKSubsystem

Type: UWorldSubsystem
Module: LCKTablet
Purpose: Provides access to ULCKService
UCLASS()
class LCKTABLET_API ULCKSubsystem : public UWorldSubsystem
{
    GENERATED_BODY()

public:
    UFUNCTION(BlueprintCallable, Category = "LCK")
    ULCKService* GetService();
};
When to use: This is your entry point. Get the service, use its methods.
// Access from C++
ULCKSubsystem* Subsystem = GetWorld()->GetSubsystem<ULCKSubsystem>();
ULCKService* Service = Subsystem->GetService();

// Access from Blueprint
ULCKService* Service = GetLCKService(); // Blueprint helper

ULCKTelemetrySubsystem

Type: UGameInstanceSubsystem
Module: LCKCore
Purpose: Analytics and usage tracking
UCLASS()
class LCKCORE_API ULCKTelemetrySubsystem : public UGameInstanceSubsystem
{
    GENERATED_BODY()

public:
    void SendEvent(ELCKTelemetryEventType EventType, const FString& Context = TEXT(""));
    FString GetCurrentTrackingId() const;
};
Automatically tracks SDK events like recording start/stop, errors, quality changes.

Encoder Architecture

Encoders implement the ILCKEncoder interface and are created via modular features:
class ILCKEncoder : public FRunnable
{
public:
    virtual bool Open() = 0;
    virtual bool IsEncoding() const = 0;
    virtual void EncodeTexture(FTextureRHIRef& Texture, float TimeSeconds) = 0;
    virtual void EncodeAudio(TArrayView<float> PCMData) = 0;
    virtual void Save(TFunction<void(float)> ProgressCallback) = 0;
    virtual float GetAudioTime() const = 0;
};
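To make the call sequence concrete, here is a minimal sketch of an encoder driven through that interface. The names (`FSketchEncoder`) and types are simplified stand-ins: plain ints and `std::vector` replace `FTextureRHIRef` and `TArrayView`, encoding is synchronous rather than running on an `FRunnable` thread, and a 48 kHz sample rate is assumed. It only illustrates the lifecycle a caller drives: `Open()`, then per-frame `EncodeTexture()`/`EncodeAudio()`, then `Save()`.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical stand-in for an ILCKEncoder implementation (illustrative only).
class FSketchEncoder
{
public:
    bool Open() { bEncoding = true; return true; }
    bool IsEncoding() const { return bEncoding; }

    // One video frame with its presentation timestamp
    void EncodeTexture(int /*TextureHandle*/, float TimeSeconds)
    {
        FrameTimestamps.push_back(TimeSeconds);
    }

    // Interleaved PCM samples for the same capture period
    void EncodeAudio(const std::vector<float>& PCMData)
    {
        AudioSamples += PCMData.size();
    }

    // Finalize the container, reporting progress in [0, 1]
    void Save(std::function<void(float)> ProgressCallback)
    {
        ProgressCallback(1.0f);
        bEncoding = false;
    }

    float GetAudioTime() const
    {
        return static_cast<float>(AudioSamples) / 48000.0f; // assumed sample rate
    }

private:
    bool bEncoding = false;
    std::vector<float> FrameTimestamps;
    std::size_t AudioSamples = 0;
};
```

Note how `GetAudioTime()` is derived from the number of samples fed in; the real recorder uses this to keep video timestamps aligned with the audio clock.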

Platform Implementations

Platform   Encoder               Technologies                           Features
Windows    FLCKWindowsEncoder    Windows Media Foundation, DirectX 11   H.264, AAC, MP4, hardware-accelerated
Android    FLCKAndroidEncoder    NDK MediaCodec, Vulkan/EGL             H.264, AAC, MP4, hardware-accelerated

Windows Encoder

  • Uses IMFSinkWriter for muxing
  • Uses IMFTransform for H.264 encoding
  • Triple-buffered texture pool to avoid GPU stalls
  • Direct3D 11 texture interop

Android Encoder

  • Uses AMediaCodec for H.264/AAC encoding
  • Uses AMediaMuxer for MP4 container
  • Vulkan texture export via EGL
  • Hardware-accelerated on Quest devices

Encoder Factory

Encoders are discovered and created via Unreal’s modular features system:
class ILCKEncoderFactory : public IModularFeature
{
public:
    static FName GetModularFeatureName() { return TEXT("LCKEncoderFactory"); }
    
    virtual FString GetEncoderName() const = 0;
    virtual TSharedPtr<ILCKEncoder> CreateEncoder(
        int32 Width, int32 Height, int32 VideoBitrate,
        int32 Framerate, int32 Samplerate, int32 AudioBitrate) = 0;
};
How to find an encoder:
auto& ModularFeatures = IModularFeatures::Get();
if (ModularFeatures.IsModularFeatureAvailable(ILCKEncoderFactory::GetModularFeatureName()))
{
    ILCKEncoderFactory* Factory = &ModularFeatures.GetModularFeature<ILCKEncoderFactory>(
        ILCKEncoderFactory::GetModularFeatureName()
    );
    
    TSharedPtr<ILCKEncoder> Encoder = Factory->CreateEncoder(
        1920, 1080, 8000000, 30, 48000, 256000
    );
}

Audio Architecture

Audio sources also use modular features for extensibility:
class ILCKAudioSource : public IModularFeature, public TSharedFromThis<ILCKAudioSource>
{
public:
    static FName GetModularFeatureName() { return TEXT("LCKAudioSource"); }
    
    // Single delegate (not multicast) - use BindLambda
    FOnRenderAudioDelegate OnAudioDataDelegate;
    
    // Control
    virtual bool StartCapture() noexcept = 0;
    virtual bool StartCapture(TLCKAudioChannelsMask Channels) noexcept = 0;
    virtual void StopCapture() noexcept = 0;
    
    // Query
    virtual float GetVolume() const noexcept = 0;
    virtual const FString& GetSourceName() const noexcept = 0;
    TLCKAudioChannelsMask GetSupportedChannels() const noexcept;
};

Audio Mixing

Multiple audio sources are combined via FLCKAudioMix:
class FLCKAudioMix
{
public:
    void AddSource(TSharedPtr<ILCKAudioSource> Source);
    void RemoveSource(TSharedPtr<ILCKAudioSource> Source);
    
    // Get mixed stereo audio for specified channels
    TArray<float> StereoMix(TLCKAudioChannelsMask Channels);
};
Example: Game audio + microphone:
FLCKAudioMix Mixer;

// Add game audio source
TSharedPtr<ILCKAudioSource> GameAudio = FindGameAudioSource();
Mixer.AddSource(GameAudio);

// Add microphone
TSharedPtr<ILCKAudioSource> Mic = FindMicrophoneSource();
Mixer.AddSource(Mic);

// Get mixed stereo output
TArray<float> MixedAudio = Mixer.StereoMix(
    ELCKAudioChannel::Game | ELCKAudioChannel::Microphone
);
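The core of the mixing step can be sketched without any Unreal types. `StereoMixSketch` below is a hypothetical, illustrative function: it sums interleaved stereo buffers sample-by-sample and clamps the result to [-1, 1], which is the basic operation a mixer like `FLCKAudioMix` performs. The real class additionally handles channel masks and source lifetime.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Sum each source into the output buffer, then clamp to avoid digital clipping.
std::vector<float> StereoMixSketch(const std::vector<std::vector<float>>& Sources)
{
    if (Sources.empty())
    {
        return {};
    }

    std::vector<float> Mixed(Sources[0].size(), 0.0f);
    for (const auto& Source : Sources)
    {
        for (std::size_t i = 0; i < Mixed.size() && i < Source.size(); ++i)
        {
            Mixed[i] += Source[i];
        }
    }

    for (float& Sample : Mixed)
    {
        Sample = std::clamp(Sample, -1.0f, 1.0f);
    }
    return Mixed;
}
```

Clamping after summation is the simplest policy; a production mixer may instead attenuate sources to preserve headroom.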

Data Flow

┌─────────────────────┐     ┌─────────────────────┐     ┌─────────────────────┐
│  Scene Capture      │────>│  Render Target      │────>│  Texture Pool       │
│  Component          │     │  (RenderTarget2D)   │     │  (3 buffers)        │
└─────────────────────┘     └─────────────────────┘     └──────────┬──────────┘

                                                                   v
┌─────────────────────┐     ┌─────────────────────┐     ┌─────────────────────┐
│  MP4 File           │<────│  Platform Encoder   │<────│  GPU Readback       │
│  (Movies folder)    │     │  (Windows/Android)  │     │  (RHI Command)      │
└─────────────────────┘     └─────────────────────┘     └─────────────────────┘


┌─────────────────────┐     ┌─────────────────────┐
│  Audio Sources      │────>│  Audio Mixer        │
│  (Game, Mic, Vivox) │     │  (Combine channels) │
└─────────────────────┘     └─────────────────────┘

Triple-Buffered Texture Pool

The encoder uses triple buffering to prevent GPU stalls:
class FTexturePool
{
    static constexpr int32 PoolSize = 3;
    TArray<FTextureRHIRef> Textures;
    int32 CurrentIndex = 0;
    
public:
    FTextureRHIRef GetNextTexture()
    {
        FTextureRHIRef Texture = Textures[CurrentIndex];
        CurrentIndex = (CurrentIndex + 1) % PoolSize;
        return Texture;
    }
};
Why triple buffering?
  1. GPU is rendering to texture 0
  2. Encoder is reading from texture 1
  3. Texture 2 is free for next frame
This prevents GPU-CPU synchronization stalls.
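The rotation can be demonstrated with just the index arithmetic. `FPoolIndexSketch` is a hypothetical, standalone reduction of the pool above: it shows that with a pool of 3, a slot is not handed out again until two other frames have been issued, so the GPU write, the encoder read, and the next frame each see a distinct buffer.

```cpp
#include <cassert>

// Same rotation as FTexturePool::GetNextTexture, minus the RHI textures.
struct FPoolIndexSketch
{
    static constexpr int PoolSize = 3;
    int CurrentIndex = 0;

    // Return the current slot, then advance the cursor modulo the pool size.
    int GetNextIndex()
    {
        const int Index = CurrentIndex;
        CurrentIndex = (CurrentIndex + 1) % PoolSize;
        return Index;
    }
};
```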

Thread Safety

LCK uses standard Unreal thread-safety patterns:
Mechanism                Usage
FCriticalSection         Protect shared data (audio buffers, encoder state)
FRunnableThread          Background encoding thread
Queue-based messaging    Thread-safe command passing
Atomic operations        State flags (IsRecording, IsPaused)
Audio callbacks arrive on non-game threads. If you need to access game-thread objects (such as UObject properties), copy the data and marshal it back with AsyncTask:
AudioSource->OnAudioDataDelegate.BindLambda([this](auto PCM, auto Channels, auto Source) {
    // Copy the PCM view; it is only valid for the duration of the callback
    AsyncTask(ENamedThreads::GameThread, [this, Data = TArray<float>(PCM.GetData(), PCM.Num())]() {
        ProcessAudioOnGameThread(Data);
    });
});
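The "queue-based messaging" row in the table above can be sketched in standalone C++. `FCommandQueueSketch` is a hypothetical, illustrative class using `std::mutex` in place of `FCriticalSection`: a producer thread (standing in for an audio callback) pushes buffers, and the consumer drains them under the same lock.

```cpp
#include <cassert>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>
#include <vector>

// Minimal thread-safe hand-off queue between a producer and a consumer thread.
class FCommandQueueSketch
{
public:
    void Push(std::vector<float> Data)
    {
        std::lock_guard<std::mutex> Lock(Mutex);
        Queue.push(std::move(Data));
    }

    // Returns false when the queue is empty instead of blocking.
    bool Pop(std::vector<float>& Out)
    {
        std::lock_guard<std::mutex> Lock(Mutex);
        if (Queue.empty())
        {
            return false;
        }
        Out = std::move(Queue.front());
        Queue.pop();
        return true;
    }

private:
    std::mutex Mutex;
    std::queue<std::vector<float>> Queue;
};
```

A non-blocking `Pop` keeps the consumer (e.g. the encoder thread's tick) from stalling when no audio has arrived yet.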

Modular Feature Discovery

Encoders and audio sources are discovered at runtime:
// Find default encoder
auto& ModularFeatures = IModularFeatures::Get();
if (ModularFeatures.IsModularFeatureAvailable(ILCKEncoderFactory::GetModularFeatureName()))
{
    ILCKEncoderFactory* Factory = &ModularFeatures.GetModularFeature<ILCKEncoderFactory>(
        ILCKEncoderFactory::GetModularFeatureName()
    );
}

// Find all audio sources
TArray<ILCKAudioSource*> AudioSources = 
    ModularFeatures.GetModularFeatureImplementations<ILCKAudioSource>(
        ILCKAudioSource::GetModularFeatureName()
    );

for (ILCKAudioSource* Source : AudioSources)
{
    UE_LOG(LogLCK, Log, TEXT("Audio source: %s"), *Source->GetSourceName());
}

Log Categories

Enable verbose logging for debugging:
; DefaultEngine.ini
[Core.Log]
LogLCK=VeryVerbose
LogLCKEncoding=VeryVerbose
LogLCKAudio=VeryVerbose
LogLCKUI=Verbose
LogLCKTablet=Verbose
Category         What It Logs
LogLCK           Core SDK operations (init, shutdown, state changes)
LogLCKEncoding   Video/audio encoding, frame capture, muxing
LogLCKAudio      Audio capture, mixing, source registration
LogLCKUI         UI component interactions (button presses, state updates)
LogLCKTablet     Tablet actor lifecycle, camera mode changes

Key Takeaways

Modular design — Core functionality (encoding, audio) is separate from UI (tablet)
Subsystem-based — Uses Unreal’s subsystem architecture for clean lifetime management
Platform abstraction — Encoder interface allows platform-specific implementations
Extensible audio — Audio sources register via modular features
Thread-safe — Encoding happens on a background thread; be careful with audio callbacks