If you work with systems where response time is everything, PREEMPT_RT is the ingredient that turns a "normal" Linux into a real-time-ready system. We're talking about bounded latencies, strict-priority schedulers, and analysis tools for fine-tuning down to the last microsecond. In this tutorial dossier you will find, in a well-organized way, what PREEMPT_RT is, its status in the kernel, how to install or compile it, how to measure and optimize it, and even how to set it up in a real-time VM with ACRN.
In addition to the theory, I bring you practical instructions backed by scripts that automate the compilation of RT kernels, ready-to-use packages in popular distributions, and Yocto recipes. You'll also see how to verify that the system is running in RT mode, which kernel options to disable to avoid latency spikes, and how to fine-tune IRQs, CPUs, and services. We even cover NVIDIA driver compatibility in PREEMPT_RT environments and a real-world case with Clear Linux on an Intel NUC designed for mission-critical tasks.
What is PREEMPT_RT and where does it fit in the kernel?
PREEMPT_RT began as a series of patches that turn Linux into a real-time system, with the goal of reducing latency and ensuring predictability. The project started in 2005 under the Realtime-Preempt (-rt) umbrella, moved under the Linux Foundation in 2015, and has been key for sectors such as finance, professional audio/video, aviation, medicine, robotics, telecom, and industrial automation.
Since 2019, its code has been progressively merged into the mainline kernel. The 6.12 series enables the real-time configuration in the mainline kernel for x86, ARM64, and RISC-V, unlocked after the integration of the critical printk rework and atomic console support. The 8250 UART driver already provides an atomic console, while other architectures such as ARM and PowerPC still require essential pieces to be merged, so their full support may arrive somewhat later.
Although base support lands in 6.12, maintainers recommend following the latest PREEMPT_RT patches in the RT queue when looking for the best results (new architectures, tweaks for accelerated graphics, and improvements that always arrive first in the patch queue). In production environments, it is advisable to use the latest stable version of the RT tree.
Conceptually, the key change is the ability to preempt almost any part of the kernel, reducing uninterruptible windows. This translates to less jitter and more predictable responses compared to a generic kernel, something indispensable when a task cannot wait.

Essential kernel configuration for real-time
The main setting is to enable the fully preemptible kernel: CONFIG_PREEMPT_RT. In recent kernels it appears under "General Setup," and if you don't see it, enabling CONFIG_EXPERT usually reveals the option. In previous versions, PREEMPT_RT was located within the "Preemption Model" menu.
There are common debugging-oriented settings that increase latency and should be disabled when you're after real-time performance. Typical examples to avoid: DEBUG_LOCKDEP, DEBUG_PREEMPT, DEBUG_OBJECTS, and SLUB_DEBUG. If you start from a distribution's .config file, it's likely that some of these are active; check and clean them up to reduce jitter.
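As a sketch of that cleanup, the loop below rewrites the offending =y entries as "is not set" comments. It operates on a throwaway sample file here for illustration; against a real kernel tree you would point cfg at the tree's .config (or use the kernel's scripts/config helper):

```shell
# Demonstration on a sample config; in a kernel tree, set cfg=.config
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
CONFIG_DEBUG_PREEMPT=y
CONFIG_SLUB_DEBUG=y
CONFIG_PREEMPT_RT=y
EOF
# Turn each latency-hurting debug option off, kconfig-style
for opt in DEBUG_LOCKDEP DEBUG_PREEMPT DEBUG_OBJECTS SLUB_DEBUG; do
  sed -i "s/^CONFIG_${opt}=y/# CONFIG_${opt} is not set/" "$cfg"
done
cat "$cfg"   # PREEMPT_RT stays enabled; the debug options are now unset
```

After editing a real .config this way, run `make olddefconfig` so dependent options are resolved consistently.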
Building and booting a kernel with PREEMPT_RT is not too different from a standard kernel, except for the options mentioned above. Note that some build tools change subtly starting with Linux 6.x, and certain steps may require additional packages. (You will see practical details below during automatic compilation).
Quick installation on distributions and RT mode verification
Installation on Debian:
sudo apt-get install linux-image-rt-amd64
Yocto has a specific recipe for the RT kernel and another image that uses it by default. The kernel provider is usually set in local.conf, bblayers.conf, or $MACHINE.conf:
Yocto example:
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto-rt"
If you set up a BSP that should use linux-yocto-rt by default, also add this setting in a bbappend for linux-yocto-rt; it limits support to your machine and prevents unwanted compatibility issues:
Example bbappend:
COMPATIBLE_MACHINE:${MACHINE} = "${MACHINE}"
After booting, check that you are actually running in real time. Look for the PREEMPT_RT indicator in uname and validate /sys/kernel/realtime:
Check RT mode:
uname -a
cat /sys/kernel/realtime # should return 1
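For provisioning or CI scripts, both checks can be folded into a single flag. A minimal sketch that prints realtime=0 on a non-RT kernel instead of failing hard:

```shell
# 1 only if the kernel string says PREEMPT_RT and /sys/kernel/realtime agrees
rt=0
if uname -v | grep -q PREEMPT_RT && \
   [ "$(cat /sys/kernel/realtime 2>/dev/null)" = "1" ]; then
  rt=1
fi
echo "realtime=$rt"
```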
Another important point is the CPU time reserved for non-RT tasks, which by default prevents a runaway real-time thread from locking up the system. Adjust the global SCHED_FIFO/SCHED_RR limit in microseconds, or disable it if you know what you're doing:
RT time setting:
cat /proc/sys/kernel/sched_rt_runtime_us # default 950000 (RT gets 950 ms per second; ~50 ms is reserved for non-RT)
# To disable throttling (nothing reserved for non-RT tasks):
echo -1 | sudo tee /proc/sys/kernel/sched_rt_runtime_us
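To interpret the current values, note that the runtime is relative to sched_rt_period_us (1,000,000 µs by default). A small sketch that reports the resulting RT bandwidth:

```shell
# RT bandwidth = sched_rt_runtime_us / sched_rt_period_us
runtime=$(cat /proc/sys/kernel/sched_rt_runtime_us)
period=$(cat /proc/sys/kernel/sched_rt_period_us)
if [ "$runtime" -lt 0 ]; then
  echo "RT throttling disabled: RT tasks may use 100% of the CPU"
else
  echo "RT tasks may use $((100 * runtime / period))% of each period"
fi
```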
Automated compilation and deployment with scripts
If you prefer to compile and install your RT kernel, there are scripts that do it almost automatically, including version selection and additional support (Docker, NVIDIA, etc.). The typical flow begins by identifying your current kernel to choose a close RT version:
Detect your version:
uname -r # e.g. 5.15.XX-generic → choose 5.15.XX-rt-YY or the closest match
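A quick sketch to extract the base version for matching against the patch names published on kernel.org (the cut simply drops the distro suffix):

```shell
# e.g. 5.15.0-130-generic -> 5.15.0, then look for patch-5.15.*-rtNN
base=$(uname -r | cut -d- -f1)
echo "base kernel version: $base"
```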
Example of using a repository with scripts to compile and install PREEMPT_RT in a guided way on Debian/Ubuntu, within a local workspace. These steps automate dependencies, source downloads, and packaging:
cd your_workspace
git clone https://github.com/2b-t/docker-realtime.git
cd docker-realtime/src
chmod +x install_debian_preemptrt
chmod +x compile_kernel_preemptrt
mkdir tmp && cd tmp
./../compile_kernel_preemptrt
During execution you will be able to choose the kernel version and the installation mode (Debian packages). If the build fails, check and adjust the .config file; in some 6.1.x versions, for example, it was necessary to add packages and change the build target:
# For kernels >= 6 you may need:
sudo apt install debhelper
# Packaging as .deb from the kernel source tree
sudo make -j$(nproc) bindeb-pkg
After installation, create a group for RT permissions and add your user. This lets you assign priorities and lock memory without needing root privileges for every command:
sudo addgroup realtime
sudo usermod -a -G realtime $(whoami)
Configure the limits in /etc/security/limits.conf so that "realtime" members get appropriate rtprio and memlock values. This prevents user-limit failures when raising priorities or locking memory:
# Edit the limits file with your favorite editor
sudo editor /etc/security/limits.conf
@realtime soft rtprio 99
@realtime soft priority 99
@realtime soft memlock 102400
@realtime hard rtprio 99
@realtime hard priority 99
@realtime hard memlock 102400
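After logging back in, you can confirm that the new limits actually apply to your session; prlimit ships with util-linux:

```shell
# Show the real-time priority and locked-memory limits of this shell
prlimit --rtprio --memlock
```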
If you get missing-headers errors after installing the kernel, check /usr/src and, if necessary, install the corresponding headers package. Be sure to select the correct RT package:
cd /path/where/you/built/the/kernel
sudo dpkg -i linux-headers-*<TAB TAB> # choose the one ending in -rt
For NVIDIA drivers on RT, you can force the installation by ignoring PREEMPT_RT detection. This lets DKMS compile the modules against the real-time kernel:
export IGNORE_PREEMPT_RT_PRESENCE=1
sudo -E apt-get install nvidia-driver-XXX # e.g. XXX=535
If the driver was already installed before the RT patch, manually install the module for your version and kernel. Make sure you're pointing to the correct driver version number and kernel -rt:
ls /usr/src # identify nvidia/<version> and your kernel version
export IGNORE_PREEMPT_RT_PRESENCE=1
sudo -E dkms install nvidia/535.XX.XX -k 5.15.XX-rt
Assessment tools: cyclictest, timerlat, and more
To measure RT quality, the classic tool is cyclictest, part of the rt-tests package, available in most distros. In Debian/derivatives the installation is straightforward:
sudo apt-get install rt-tests
A test example launches one thread per CPU with SCHED_FIFO priority 98 and a 250 µs interval, and shows latencies in microseconds. This pattern simulates a periodic RT load to detect spikes and jitter:
sudo cyclictest -S -m -p98 -i250
In real time, two scheduling classes are used: SCHED_FIFO and SCHED_RR. FIFO executes at a fixed priority (1..99) until the CPU is released or a higher-priority thread arrives; RR time-slices when there are multiple threads at the same priority. Choosing the right class makes a clear difference in low-latency work queues.
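Both classes are managed from userspace with chrt (util-linux). The lines below only execute the harmless query; the launch commands are shown as comments because they need rtprio privileges, and the binary name myapp is hypothetical:

```shell
# Print the min/max priority supported by each scheduling class
chrt -m
# Launch under SCHED_FIFO at priority 80:
#   sudo chrt -f 80 ./myapp
# Same workload under SCHED_RR (round-robin among equal priorities):
#   sudo chrt -r 80 ./myapp
# Inspect the class/priority of a running process:
#   chrt -p <pid>
```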
The kernel incorporates tracers that help diagnose wake-up latencies. The timerlat tracer and the rtla userspace tool let you view and correlate delays across IRQs, kernel threads, and user threads. A typical use, automatically stopping when a threshold is exceeded, would be:
Typical use of rtla:
sudo rtla timerlat top -a 4000 -Pf:98
# ... when 4000 µs is exceeded, tracing stops and possible causes are shown
The OSADL community maintains useful patches for evaluating latencies using histograms in the kernel itself. From debugfs you can read CPU maximums and see which task was involved in the biggest delay:
Latency histogram:
cd /sys/kernel/debug/latency_hist/timerandwakeup
cat max_latency-CPU*
A practical note: in some distros there are system services (for example, certain NTP) that start with RT priority and can interfere with your critical threads. Run a priority-ordered top/ps to locate processes with active SCHED_FIFO/RR and readjust if necessary.
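One way to locate them is to list every process currently in an RT class, highest priority first (a sketch using procps field codes; TS is the normal class, FF/RR the real-time ones):

```shell
# Keep the header plus any SCHED_FIFO (FF) or SCHED_RR (RR) entries
ps -eo pid,class,rtprio,comm --sort=-rtprio | awk 'NR==1 || $2=="FF" || $2=="RR"'
```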
System tuning: interrupts, priorities, and core isolation
By default, interrupt threads run with SCHED_FIFO at priority 50. You can raise the priorities of critical IRQs (for example, those of a NIC) and coordinate with NAPI to reduce network latency:
Example IRQ settings:
# Locate the IRQ and NAPI threads for your interface (e.g. enp4s0)
ps aux | grep enp4s0
# Adjust priorities (example PIDs)
sudo chrt -p -f 98 658
sudo chrt -p -f 98 659
sudo chrt -p -f 97 752
sudo chrt -p -f 97 753
To dedicate entire cores to RT workloads, you can isolate CPUs from the general scheduler and interrupt path. These kernel parameters in the boot line help reduce interference from system tasks:
isolcpus=2,3 rcu_nocbs=2,3 nohz_full=2,3 irqaffinity=0
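On GRUB-based distributions these parameters typically live in /etc/default/grub; a config fragment, with CPU numbers you should adapt to your own topology:

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=2,3 rcu_nocbs=2,3 nohz_full=2,3 irqaffinity=0"
# Then regenerate the boot config and reboot:
#   sudo update-grub && sudo reboot
```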
Assign IRQ affinity:
echo 4 | sudo tee /proc/irq/<irq_number>/smp_affinity
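The value written is a hexadecimal bitmask in which bit N selects CPU N, so 4 (binary 100) pins the IRQ to CPU 2. A sketch to derive the mask for any CPU:

```shell
# Build the smp_affinity mask for a given CPU number
cpu=2
mask=$(printf '%x' $((1 << cpu)))
echo "CPU $cpu -> mask $mask"   # prints: CPU 2 -> mask 4
```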
To verify results, repeat tests with cyclictest/rtla and validate that your application's queues and their associated IRQs coexist with minimal contention. Remember that there will always be certain housekeeping tasks that you cannot keep entirely off the isolated cores.
Deploying a real-time VM with ACRN (Clear Linux on Intel NUC)
Another possibility is to run a real-time Linux guest on the ACRN hypervisor. For an RTVM (Real-Time VM) its passthrough devices must be dedicated and hang from PCI controllers different from those of the SOS (Service OS). An Intel KBL NUC (such as the NUC7ixDNHE) is very practical because it has separate NVMe and SATA drives.
An example workflow: install Clear Linux (v29400) on both the NVMe and SATA drives, configure the SATA drive as the SOS, and add the hypervisor to the EFI partition. Next, prepare and launch the RT guest on the NVMe with the appropriate bundles and modules.
Practical steps: add the kernel-lts2018-preempt-rt bundle, copy the preempt-rt module to the NVMe disk, and retrieve the PCI IDs for passthrough (e.g., [01:00.0] and [8086:f1a6]). Modify the launch_hard_rt_vm.sh script to hand the NVMe over to the guest and configure the network according to your needs:
Network Options:
# Option 1: virtio-net
# Option 2: passthrough of a PCIe NIC
Start the VM in real time and check the kernel with uname -a inside the guest. Once operational, install rt-tests and run cyclictest to validate the behavior:
sudo cyclictest -S -m -p98 -i250
To further optimize, adjust BIOS/UEFI by disabling technologies that save energy but introduce latency, and enabling virtualization capabilities. A BIOS reference guide for platforms of this type would include something like this:
| Item | Adjustment |
|---|---|
| VMX | Enabled |
| VT-d | Enabled |
| Hyper-Threading | Disabled |
| Speed Step | Disabled |
| Speed Shift | Disabled |
| C-State | Disabled |
| Voltage Optimization | Disabled |
| GT RC6 | Disabled |
| Gfx Low Power Mode | Disabled |
| SA GV | Disabled |
| Aggressive LPM Support | Disabled |
| ACPI S3 Support | Disabled |
| Native ASPM | Disabled |
Notes, references, and supporting material
If you want to delve deeper into the concepts, subsystems, and changes that enable RT mode (including locking, schedulers, and architectural details), there are very comprehensive training materials; for example, these slides dedicated to PREEMPT_RT may be very useful to you: Download PDF
Some distributions offer pre-built RT binaries or integrations in their build systems. That's a good starting point for evaluating without compiling from scratch and comparing results against your custom kernel.
Frequently asked questions: activation, distros, and kernel arguments
With the arrival of 6.12, the PREEMPT_RT option is integrated into the mainline kernel for several architectures. Whether it's enabled by default depends on the distribution: some maintain separate RT variants, others offer specific packages, and still others leave it for custom builds. Always check your distro's release notes; if there's a "linux-image-rt" package or similar, that's the recommended way to start.
Regarding the kernel argument "preempt=full": it is not equivalent to PREEMPT_RT, and its effect depends on the compiled configuration. If passing `preempt=full` on recent kernels (for example, from 6.10.6 onwards) prevents your system from booting, remove the parameter and check the actual kernel configuration. For strict real time, the way forward is to enable CONFIG_PREEMPT_RT or install the RT kernel for your distribution.
Always check that /sys/kernel/realtime is 1 and that uname displays PREEMPT_RT. Avoid conflating "low latency" with "real time"; they are distinct profiles with different objectives. If you need hard RT, prioritize a stable RT kernel and diagnostic tools (cyclictest/rtla) before touching aggressive bootloader arguments.
Setting up a real-time Linux system today is more straightforward thanks to PREEMPT_RT's arrival in the mainline, and there are packages, recipes, and scripts that save you hours. Start by validating with RT binaries where they exist, measure with cyclictest/rtla, disable debugging options that harm latency, adjust priorities/IRQs, and isolate CPUs when your workload demands it. If you compile, use scripts that generate .deb packages and set user limits for RT work; if you use an NVIDIA GPU, remember the IGNORE_PREEMPT_RT_PRESENCE variable. And if your case requires deterministic virtualization, ACRN with dedicated passthrough on a NUC with NVMe+SATA is a solid foundation for an RTVM that responds right out of the box.