What Problem Does This Solve?
Understanding LCK’s architecture helps you:
- Debug issues by knowing which subsystem handles what
- Extend the SDK with custom encoders or audio sources
- Optimize performance by understanding the data flow
- Build custom UI without breaking core functionality
This page maps out how LCK’s modules work together in Unreal Engine.
When to Read This
Read this when:
- Integrating LCK for the first time
- Building custom recording UI
- Creating custom audio sources (FMOD, Wwise)
- Debugging recording or encoding issues
- Contributing to LCK development
Skip this if you’re just using the default tablet UI.
High-Level Overview
LCK is organized into modular runtime modules that load in specific phases:
LCKVulkan (EarliestPossible) ← Android Vulkan interop
↓
LCKCore (PostDefault) ← Recording subsystem, encoders, streaming interfaces
├── LCKAudio (PostDefault) ← Audio capture framework
├── LCKWindowsEncoder (PostDefault) ← Windows Media Foundation
├── LCKAndroidEncoder (PostDefault) ← Android MediaCodec
└── LCKAndroidGallery (PostDefault) ← Save to Android gallery
↓
LCKTablet (Default) ← High-level service, tablet UI
└── LCKUI (Default) ← 3D UI components, streaming state, tablet modes
↓
Optional Plugins (Default):
├── LCKStreaming ← RTMP live streaming (implements ILCKStreamingFeature)
├── LCKUnrealAudio ← Unreal Engine audio capture
├── LCKFMOD ← FMOD integration
├── LCKWwise ← Wwise integration
├── LCKVivox ← Vivox voice chat
└── LCKOboe ← Android low-latency mic
Key principle: Lower modules (Core, Audio) know nothing about higher modules (Tablet, UI). This lets you build custom UI without modifying core functionality.
Module Dependency Map
This diagram shows which modules depend on which, and whether they are required or optional.
┌──────────────────────────────────────────────────────────────────────┐
│ REQUIRED MODULES │
├──────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ │
│ │ LCKVulkan │ (EarliestPossible) Android only │
│ └──────┬──────┘ │
│ ↓ │
│ ┌─────────────┐ ┌────────────────────┐ ┌───────────────────┐ │
│ │ LCKCore │←────│ LCKWindowsEncoder │ │ LCKAndroidEncoder │ │
│ │ (PostDefault)│←────│ (PostDefault) │ │ (PostDefault) │ │
│ └──────┬──────┘ │ Win64 only │ │ Android only │ │
│ │ └────────────────────┘ └───────────────────┘ │
│ ↓ │
│ ┌─────────────┐ ┌────────────────────┐ │
│ │ LCKAudio │ │ LCKAndroidGallery │ │
│ │ (PostDefault)│ │ (PostDefault) │ │
│ │ ← LCKCore │ │ Android only │ │
│ └──────┬──────┘ │ ← LCKCore │ │
│ │ └────────────────────┘ │
│ ↓ │
│ ┌─────────────┐ ┌────────────────────┐ │
│ │ LCKTablet │────>│ LCKUI │ │
│ │ (Default) │ │ (Default) │ │
│ │ ← LCKCore │ │ ← LCKCore │ │
│ │ ← LCKUI │ └────────────────────┘ │
│ └─────────────┘ │
│ │
├──────────────────────────────────────────────────────────────────────┤
│ OPTIONAL MODULES │
├──────────────────────────────────────────────────────────────────────┤
│ │
│ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ │
│ │ LCKStreaming │ │ LCKUnrealAudio │ │ LCKFMOD │ │
│ │ (Default) │ │ (Default) │ │ (Default) │ │
│ │ ← LCKCore │ │ ← LCKCore │ │ ← LCKCore │ │
│ │ ← LCKAudio │ │ ← LCKAudio │ │ ← LCKAudio │ │
│ └────────────────┘ └────────────────┘ └────────────────┘ │
│ │
│ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ │
│ │ LCKWwise │ │ LCKVivox │ │ LCKOboe │ │
│ │ (Default) │ │ (Default) │ │ (Default) │ │
│ │ ← LCKCore │ │ ← LCKCore │ │ ← LCKCore │ │
│ │ ← LCKAudio │ │ ← LCKAudio │ │ ← LCKAudio │ │
│ └────────────────┘ └────────────────┘ │ Android only │ │
│ └────────────────┘ │
│ │
└──────────────────────────────────────────────────────────────────────┘
Module Classification
| Module | Type | Loading Phase | Depends On | Purpose |
|---|---|---|---|---|
| LCKVulkan | Platform (auto) | EarliestPossible | None | Android Vulkan interop |
| LCKCore | Required | PostDefault | None | Recording subsystem, encoder factory, streaming interfaces |
| LCKAudio | Required | PostDefault | LCKCore | Audio source interface, mixing |
| LCKWindowsEncoder | Platform (auto) | PostDefault | LCKCore | Windows Media Foundation encoder |
| LCKAndroidEncoder | Platform (auto) | PostDefault | LCKCore | Android MediaCodec encoder |
| LCKAndroidGallery | Platform (auto) | PostDefault | LCKCore | Android gallery integration |
| LCKUI | Required | Default | LCKCore | 3D UI components, streaming state enum, tablet modes |
| LCKTablet | Required | Default | LCKCore, LCKUI | High-level service, tablet actor |
| LCKStreaming | Optional | Default | LCKCore, LCKAudio | RTMP streaming via ILCKStreamingFeature |
| LCKUnrealAudio | Optional | Default | LCKCore, LCKAudio | Unreal Engine audio capture |
| LCKFMOD | Optional | Default | LCKCore, LCKAudio | FMOD audio capture |
| LCKWwise | Optional | Default | LCKCore, LCKAudio | Wwise audio capture |
| LCKVivox | Optional | Default | LCKCore, LCKAudio | Vivox voice chat capture |
| LCKOboe | Optional | Default | LCKCore, LCKAudio | Android low-latency mic |
Load Order
Modules load in three phases, in this order:
1. EarliestPossible — Before engine initialization. Only LCKVulkan loads here (Android only). This phase exists because Vulkan interop must be established before the RHI initializes.
2. PostDefault — After engine init, before game modules. Core infrastructure loads here: LCKCore, LCKAudio, and all platform-specific encoder modules. These must be available before any game code tries to use the recording API.
3. Default — Standard game module loading. All high-level and optional modules load here: LCKTablet, LCKUI, LCKStreaming, and all audio plugins. By this point, core infrastructure is guaranteed to be available.
Platform-specific modules (LCKVulkan, LCKWindowsEncoder, LCKAndroidEncoder, LCKAndroidGallery) are auto-loaded by the engine based on the target platform. You never need to add them to your .Build.cs.
Runtime Discovery
Optional modules register themselves via Unreal’s IModularFeatures system at startup. Core modules discover them at runtime without compile-time coupling:
- Audio sources register as ILCKAudioSource modular features
- Encoder factories register as ILCKEncoderFactory modular features
- Streaming backends register as ILCKStreamingFeature modular features
- Packet sinks are passed directly to encoders via ILCKEncoder::AddPacketSink()
This means you can add or remove optional modules without recompiling core code.
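To make the discovery mechanism concrete, here is a minimal, self-contained C++ model of the registry pattern that IModularFeatures implements — a name-keyed registry that providers register into and consumers query by name, with no compile-time link between them. This is an illustrative sketch, not LCK's or Unreal's actual implementation.

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <vector>

// Toy model of a modular-feature registry: implementations register under a
// feature name; consumers look them up by name at runtime, so neither side
// needs to link against the other at compile time.
struct FeatureRegistry {
    std::map<std::string, std::vector<void*>> Features;

    void Register(const std::string& Name, void* Impl) {
        Features[Name].push_back(Impl);
    }
    void Unregister(const std::string& Name, void* Impl) {
        auto& Impls = Features[Name];
        Impls.erase(std::remove(Impls.begin(), Impls.end(), Impl), Impls.end());
    }
    std::vector<void*> GetAll(const std::string& Name) const {
        auto It = Features.find(Name);
        return It != Features.end() ? It->second : std::vector<void*>{};
    }
};
```

In the real SDK, an optional module's StartupModule() would register its implementation and ShutdownModule() would unregister it; core code only ever queries by feature name.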
Platform Support
| Module | Win64 | Android (Quest 2) | Android (Quest 3) | Linux |
|---|---|---|---|---|
| LCKCore | Yes | Yes | Yes | Yes |
| LCKAudio | Yes | Yes | Yes | Yes |
| LCKTablet | Yes | Yes | Yes | Yes |
| LCKUI | Yes | Yes | Yes | Yes |
| LCKWindowsEncoder | Yes | — | — | — |
| LCKAndroidEncoder | — | Yes | Yes | — |
| LCKAndroidGallery | — | Yes | Yes | — |
| LCKVulkan | — | Yes | Yes | — |
| LCKUnrealAudio | Yes | Yes | Yes | Yes |
| LCKFMOD | Yes | Yes | Yes | — |
| LCKWwise | Yes | Yes | Yes | — |
| LCKVivox | Yes | Yes | Yes | Yes |
| LCKOboe | — | Yes | Yes | — |
| LCKStreaming | Yes | Yes | Yes | — |
Linux support is limited to core, audio, UI, and Vivox modules. Encoding and streaming require Windows or Android.
Subsystem Hierarchy
LCK uses Unreal’s subsystem architecture for lifetime management:
UWorld
├── ULCKRecorderSubsystem (TickableWorldSubsystem)
│ └── ILCKEncoder (platform-specific)
├── ULCKSubsystem (WorldSubsystem)
│ └── ULCKService (high-level API)
└── ALCKTablet (Actor)
└── ULCKTabletDataModel (state management)
UGameInstance
└── ULCKTelemetrySubsystem (GameInstanceSubsystem)
ULCKRecorderSubsystem
Type: UTickableWorldSubsystem (ticks every frame)
Module: LCKCore
Purpose: Low-level recording control, frame capture, encoder lifecycle
UCLASS()
class LCKCORE_API ULCKRecorderSubsystem : public UTickableWorldSubsystem
{
    GENERATED_BODY()

public:
    // Recording control
    void SetupRecorder(const FLCKRecorderParams& Params, USceneCaptureComponent2D* Capture);
    bool StartRecording();
    bool StopRecording();
    void StartRecordingAsync(FOnLCKRecorderBoolResult Callback);
    void StopRecordingAsync(FOnLCKRecorderBoolResult Callback, FOnLCKRecorderProgress Progress);

    // Preview mode (camera without recording)
    void StartPreview();
    void StopPreview();

    // Photo capture
    bool TakePhoto();

    // State queries
    bool IsRecording() const;
    float GetTime() const;
    float GetMicrophoneVolume() const;
};
When to use: Advanced scenarios where you need direct control over the encoder. Most developers should use ULCKService instead.
ULCKSubsystem
Type: UWorldSubsystem
Module: LCKTablet
Purpose: Provides access to ULCKService
UCLASS()
class LCKTABLET_API ULCKSubsystem : public UWorldSubsystem
{
    GENERATED_BODY()

public:
    UFUNCTION(BlueprintCallable, Category = "LCK")
    ULCKService* GetService();
};
When to use: This is your entry point. Get the service, use its methods.
// Access from C++
ULCKSubsystem* Subsystem = GetWorld()->GetSubsystem<ULCKSubsystem>();
ULCKService* Service = Subsystem->GetService();
// Access from Blueprint
ULCKService* Service = GetLCKService(); // Blueprint helper
ULCKTelemetrySubsystem
Type: UGameInstanceSubsystem
Module: LCKCore
Purpose: Analytics and usage tracking
UCLASS()
class LCKCORE_API ULCKTelemetrySubsystem : public UGameInstanceSubsystem
{
    GENERATED_BODY()

public:
    void SendTelemetry(const FLCKTelemetryEvent& EventData);
    FString GetCurrentTrackingId() const;
};
Automatically tracks SDK events like recording start/stop, errors, quality changes.
Encoder Architecture
Encoders implement the ILCKEncoder interface and are created via modular features:
class ILCKEncoder : public TSharedFromThis<ILCKEncoder, ESPMode::ThreadSafe>, public FRunnable
{
public:
    virtual bool Open() noexcept = 0;
    virtual bool IsEncoding() const noexcept = 0;
    virtual void EncodeTexture(FTextureRHIRef& Texture, float TimeSeconds) = 0;
    virtual void EncodeAudio(TArrayView<float> PCMData) = 0;
    virtual void Save(TFunction<void(float)> ProgressCallback) = 0;
    [[nodiscard]] virtual float GetAudioTime() const noexcept = 0;

    // v1.0: Packet sink support for streaming
    virtual void SetRecordToDisk(bool bRecord) { bRecordToDisk = bRecord; }
    virtual void AddPacketSink(ILCKPacketSink* Sink) {}
    virtual void RemovePacketSink(ILCKPacketSink* Sink) {}
};
Encoders now support dual output: they can write to disk (MP4 file) and simultaneously route encoded packets to one or more ILCKPacketSink implementations (for RTMP streaming or other transports). Use SetRecordToDisk(false) for stream-only mode.
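The dual-output behavior can be sketched with a small, self-contained C++ model: every encoded packet is conditionally appended to a "disk" path and always fanned out to registered sinks. The types here (Encoder, PacketSink, EmitPacket) are illustrative stand-ins, not the real ILCKEncoder/ILCKPacketSink interfaces.

```cpp
#include <cstdint>
#include <vector>

// Stand-in for an ILCKPacketSink implementation (e.g. an RTMP transport).
struct PacketSink {
    std::vector<std::vector<uint8_t>> Received;
    void OnPacket(const std::vector<uint8_t>& Pkt) { Received.push_back(Pkt); }
};

// Toy encoder modeling the dual-output path described above.
struct Encoder {
    bool bRecordToDisk = true;
    std::vector<std::vector<uint8_t>> DiskFile;  // stands in for the MP4 muxer
    std::vector<PacketSink*> Sinks;

    void SetRecordToDisk(bool bRecord) { bRecordToDisk = bRecord; }
    void AddPacketSink(PacketSink* Sink) { Sinks.push_back(Sink); }

    void EmitPacket(const std::vector<uint8_t>& Pkt) {
        if (bRecordToDisk) DiskFile.push_back(Pkt);    // disk (MP4) path
        for (PacketSink* Sink : Sinks) Sink->OnPacket(Pkt);  // streaming path
    }
};
```

With bRecordToDisk set to false, packets bypass the disk path but still reach every sink — the stream-only mode mentioned above.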
| Platform | Encoder | Technologies | Features |
|---|---|---|---|
| Windows | FLCKWindowsEncoder | Windows Media Foundation, DirectX 11 | H.264, AAC, MP4, Hardware-accelerated |
| Android | FLCKAndroidEncoder | NDK MediaCodec, Vulkan/EGL | H.264, AAC, MP4, Hardware-accelerated |
Windows Encoder
- Uses IMFSinkWriter for muxing
- Uses IMFTransform for H.264 encoding
- Triple-buffered texture pool to avoid GPU stalls
- Direct3D 11 texture interop
Android Encoder
- Uses AMediaCodec for H.264/AAC encoding
- Uses AMediaMuxer for MP4 container
- Vulkan texture export via EGL
- Hardware-accelerated on Quest devices
Encoder Factory
Encoders are discovered and created via Unreal’s modular features system:
class ILCKEncoderFactory : public IModularFeature,
                           public TSharedFromThis<ILCKEncoderFactory, ESPMode::ThreadSafe>
{
public:
    static FName GetModularFeatureName() noexcept;
    virtual const FString& GetEncoderName() const noexcept = 0;
    virtual TSharedPtr<ILCKEncoder, ESPMode::ThreadSafe> CreateEncoder(
        uint32 Width, uint32 Height, uint32 VideoBitrate,
        uint32 Framerate, uint32 Samplerate, uint32 AudioBitrate) const noexcept = 0;
};
How to find an encoder:
auto& ModularFeatures = IModularFeatures::Get();
if (ModularFeatures.IsModularFeatureAvailable(ILCKEncoderFactory::GetModularFeatureName()))
{
    ILCKEncoderFactory* Factory = &ModularFeatures.GetModularFeature<ILCKEncoderFactory>(
        ILCKEncoderFactory::GetModularFeatureName()
    );
    TSharedPtr<ILCKEncoder, ESPMode::ThreadSafe> Encoder = Factory->CreateEncoder(
        1920, 1080, 12000000, 60, 48000, 256000
    );
}
Audio Architecture
Audio sources also use modular features for extensibility:
class ILCKAudioSource : public IModularFeature, public TSharedFromThis<ILCKAudioSource>
{
public:
    static FName GetModularFeatureName() noexcept;

    // v1.0: Multicast delegate with 4 params (added SampleRate)
    DECLARE_MULTICAST_DELEGATE_FourParams(FDelegateRenderAudio,
        TArrayView<const float>/*PCM*/, int32/*Channels*/,
        int32/*SampleRate*/, ELCKAudioChannel/*SourceChannel*/);
    typedef FDelegateRenderAudio::FDelegate FOnRenderAudioDelegate;

    FOnRenderAudioDelegate OnAudioDataDelegate;

    // Control
    virtual bool StartCapture() noexcept = 0;
    virtual bool StartCapture(TLCKAudioChannelsMask Channels) noexcept = 0;
    virtual void StopCapture() noexcept = 0;

    // Query
    virtual float GetVolume() const noexcept = 0;
    virtual const FString& GetSourceName() const noexcept = 0;
    TLCKAudioChannelsMask GetSupportedChannels() const noexcept;
};
In v1.0, FDelegateRenderAudio is the multicast parent delegate. FOnRenderAudioDelegate is a typedef for its inner FDelegate type. The delegate now includes SampleRate as a third parameter.
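For readers unfamiliar with Unreal's delegate macros, the four-parameter multicast shape can be modeled in plain C++ as a list of callables that all receive (PCM, Channels, SampleRate, SourceChannel). This is a conceptual stand-in, not the DECLARE_MULTICAST_DELEGATE_FourParams machinery itself; the channel enum is simplified to an int.

```cpp
#include <functional>
#include <vector>

// Each bound handler receives (PCM, Channels, SampleRate, SourceChannel);
// SourceChannel is simplified to int here in place of ELCKAudioChannel.
using AudioHandler =
    std::function<void(const std::vector<float>&, int, int, int)>;

struct RenderAudioDelegate {
    std::vector<AudioHandler> Handlers;

    void Add(AudioHandler Handler) { Handlers.push_back(std::move(Handler)); }

    // Invoke every bound handler with the same audio block.
    void Broadcast(const std::vector<float>& PCM, int Channels,
                   int SampleRate, int SourceChannel) {
        for (auto& Handler : Handlers)
            Handler(PCM, Channels, SampleRate, SourceChannel);
    }
};
```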
Audio Mixing
Multiple audio sources are combined via FLCKAudioMix:
class FLCKAudioMix
{
public:
    void AddSource(TWeakPtr<ILCKAudioSource> AudioSource) noexcept;

    // Get mixed stereo audio for specified channels
    TArray<float> StereoMix(TLCKAudioChannelsMask Channels);
};
Example: Game audio + microphone:
FLCKAudioMix Mixer;

// Add game audio source
TSharedPtr<ILCKAudioSource> GameAudio = FindGameAudioSource();
Mixer.AddSource(GameAudio);

// Add microphone
TSharedPtr<ILCKAudioSource> Mic = FindMicrophoneSource();
Mixer.AddSource(Mic);

// Get mixed stereo output
TArray<float> MixedAudio = Mixer.StereoMix(
    ELCKAudioChannel::Game | ELCKAudioChannel::Microphone
);
Data Flow
┌─────────────────────┐ ┌─────────────────────┐ ┌─────────────────────┐
│ Scene Capture │────>│ Render Target │────>│ Texture Pool │
│ Component │ │ (RenderTarget2D) │ │ (3 buffers) │
└─────────────────────┘ └─────────────────────┘ └──────────┬──────────┘
│
v
┌─────────────────────┐ ┌─────────────────────┐ ┌─────────────────────┐
│ MP4 File │<────│ Platform Encoder │<────│ GPU Readback │
│ (Movies folder) │ │ (Windows/Android) │ │ (RHI Command) │
└─────────────────────┘ └──────────┬──────────┘ └─────────────────────┘
│ ↑
Packet │ │
Sinks │ │
v │
┌─────────────────────┐ ┌─────────────────────┐
│ RTMP Stream │<────│ ILCKPacketSink │
│ (LCKStreaming) │ │ (encoded H.264/AAC)│
└─────────────────────┘ └─────────────────────┘
↑
│
┌─────────────────────┐ ┌─────────────────────┐
│ Audio Sources │────>│ Audio Mixer │
│ (Game, Mic, Vivox) │ │ (Combine channels) │
└─────────────────────┘ └─────────────────────┘
The encoder can output to both disk (MP4 file) and packet sinks (RTMP streaming) simultaneously. When streaming without recording, set SetRecordToDisk(false) on the encoder.
Triple-Buffered Texture Pool
The encoder uses triple buffering to prevent GPU stalls:
class FTexturePool
{
    static constexpr int32 PoolSize = 3;
    TArray<FTextureRHIRef> Textures;
    int32 CurrentIndex = 0;

public:
    FTextureRHIRef GetNextTexture()
    {
        FTextureRHIRef Texture = Textures[CurrentIndex];
        CurrentIndex = (CurrentIndex + 1) % PoolSize;
        return Texture;
    }
};
Why triple buffering?
- GPU is rendering to texture 0
- Encoder is reading from texture 1
- Texture 2 is free for next frame
This prevents GPU-CPU synchronization stalls.
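The rotation of the three roles can be verified with a minimal stand-alone version of the pool's index logic — same modulo arithmetic as FTexturePool above, stripped of the RHI types:

```cpp
// Round-robin index cycling, as used by the texture pool: with three
// slots, render, encode, and "free" each advance by one slot per frame,
// so no two roles ever touch the same buffer in the same frame.
struct RoundRobin {
    static constexpr int PoolSize = 3;
    int CurrentIndex = 0;

    int Next() {
        int Index = CurrentIndex;
        CurrentIndex = (CurrentIndex + 1) % PoolSize;
        return Index;
    }
};
```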
Thread Safety
LCK uses standard Unreal thread-safety patterns:
| Mechanism | Usage |
|---|---|
| FCriticalSection | Protect shared data (audio buffers, encoder state) |
| FRunnableThread | Background encoding thread |
| Queue-based messaging | Thread-safe command passing |
| Atomic operations | State flags (IsRecording, IsPaused) |
Audio callbacks come from different threads. If you need to access game-thread objects (such as UObject properties), marshal the data back to the game thread:
AudioSource->OnAudioDataDelegate.BindLambda([this](auto PCM, auto Channels, auto SampleRate, auto Source) {
    AsyncTask(ENamedThreads::GameThread, [this, Data = TArray<float>(PCM.GetData(), PCM.Num())]() {
        ProcessAudioOnGameThread(Data);
    });
});
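The "queue-based messaging" row in the table above follows a standard pattern: the producer (audio callback) pushes commands under a lock, and the consumer drains them on its own schedule. Here is a self-contained C++ sketch of that pattern — a generic illustration, not LCK's internal queue.

```cpp
#include <functional>
#include <mutex>
#include <queue>
#include <utility>

// Thread-safe command queue: producers enqueue under a lock; the consumer
// swaps the queue out under the lock, then executes commands lock-free.
struct CommandQueue {
    std::mutex Mutex;
    std::queue<std::function<void()>> Commands;

    void Enqueue(std::function<void()> Cmd) {
        std::lock_guard<std::mutex> Lock(Mutex);
        Commands.push(std::move(Cmd));
    }

    // Called from the consuming thread (e.g. a game-thread tick).
    void Drain() {
        std::queue<std::function<void()>> Local;
        {
            std::lock_guard<std::mutex> Lock(Mutex);
            std::swap(Local, Commands);  // hold the lock only for the swap
        }
        while (!Local.empty()) {
            Local.front()();
            Local.pop();
        }
    }
};
```

Swapping the whole queue keeps the critical section tiny, so the audio thread is never blocked while commands execute.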
Modular Feature Discovery
Encoders and audio sources are discovered at runtime:
// Find default encoder
auto& ModularFeatures = IModularFeatures::Get();
if (ModularFeatures.IsModularFeatureAvailable(ILCKEncoderFactory::GetModularFeatureName()))
{
    ILCKEncoderFactory* Factory = &ModularFeatures.GetModularFeature<ILCKEncoderFactory>(
        ILCKEncoderFactory::GetModularFeatureName()
    );
}

// Find all audio sources
TArray<ILCKAudioSource*> AudioSources =
    ModularFeatures.GetModularFeatureImplementations<ILCKAudioSource>(
        ILCKAudioSource::GetModularFeatureName()
    );

for (ILCKAudioSource* Source : AudioSources)
{
    UE_LOG(LogLCK, Log, TEXT("Audio source: %s"), *Source->GetSourceName());
}
Log Categories
Enable verbose logging for debugging:
; DefaultEngine.ini
[Core.Log]
LogLCK=VeryVerbose
LogLCKEncoding=VeryVerbose
LogLCKAudio=VeryVerbose
LogLCKUI=Verbose
LogLCKTablet=Verbose
| Category | What It Logs |
|---|---|
| LogLCK | Core SDK operations (init, shutdown, state changes) |
| LogLCKEncoding | Video/audio encoding, frame capture, muxing |
| LogLCKAudio | Audio capture, mixing, source registration |
| LogLCKUI | UI component interactions (button presses, state updates) |
| LogLCKTablet | Tablet actor lifecycle, camera mode changes |
| LogLCKStreaming | Streaming lifecycle, RTMP connection, auth flow |
Key Takeaways
- Modular design — Core functionality (encoding, audio) is separate from UI (tablet)
- Subsystem-based — Uses Unreal’s subsystem architecture for clean lifetime management
- Platform abstraction — The encoder interface allows platform-specific implementations
- Extensible audio — Audio sources register via modular features
- Thread-safe — Encoding happens on a background thread; be careful with audio callbacks