This hotly debated topic has been around for decades, and it is just as alive today as it was 28 years ago. The truth is, there are fundamental differences in the theory which drives the design of a monolithic kernel versus a microkernel. In this post, I will draw on my knowledge of various kernel designs to explore what these two primary types are and what their features, benefits, drawbacks, and implications may be. I'll also briefly explore the extension of the theories that drive these designs into "fringe" types of kernels (such as a nanokernel).

Key Concepts

Before hopping into the discussion about the differences between these two kernel archetypes, I need to lay out some key concepts which will help bound the discussion.

Userspace vs. Kernelspace

Broadly, userspace is considered to be "outside" of the kernel, while kernelspace is "inside." This is typically implemented through the use of multiple address spaces, aided by the hardware (MMU, MPU). As a result, each userspace application believes it has the whole address space to itself, while in reality the MMU is remapping its Virtual Addresses (VAs) to the real Physical Addresses (PAs). Actions in userspace cannot (well, SHOULD not) perturb the operation of the core kernel. Contrast this with kernelspace, where code is free to poke and tinker with any part of the kernel.

This separation of spaces may also use other hardware features (Rings in x86-land, Exception Levels in ARM) to further represent the distinction and privilege of code running in each space.

An example of userspace code is your typical Linux application.

An example of kernelspace code is a Linux .ko kernel module.

Kernel-User Boundary

This boundary is the line which separates userspace from kernelspace. Crossing it involves a change in privilege level and is often accomplished by invoking a system call. Why and when we cross this boundary is a key point of interest in the discussion of kernel design, since crossing is usually an expensive operation.

Essential Operating System Functions

For the purpose of this discussion, I consider the following to be essential to any OS for it to be of any use.

  • Memory Management
    • Addresses the kernel’s need to map/unmap/remap virtual memory to physical memory
  • Task/Process Management
    • Addresses the kernel’s need to handle multi-tasking
  • Inter-process Communication (IPC)
    • Addresses the need of the kernel and applications to communicate via a common interface
  • Device Drivers
    • Addresses the need of applications to communicate with the outside world
  • Filesystems
    • Addresses the need of applications and the kernel to store/retrieve data via a common interface

Monolithic Kernels

A monolithic kernel, and the essential property which makes it monolithic, can be succinctly summed up in the following phrase:

Everything except the application exists in kernelspace.

In a monolithic kernel, all the essential components, and many other accessory components, live in kernelspace.

Figure 1. Monolithic Kernel

The design choice to bring all these functions and services into the kernelspace has several benefits, drawbacks, and implications.


Benefits

Since all the code that directly interacts with devices lives in the same address space, moving data around is an inexpensive operation. Much of what happens in a system falls into this category, so you end up with a relatively speedy and snappy kernel implementation. For example, if two network devices are bridged to transmit data from one network to another, and the network stacks and device drivers live in kernelspace, this operation can occur without any context switch, saving time and increasing throughput.

Another benefit is that userspace applications have a rich set of services available through a consistent interface (usually the system call interface). This allows rich applications to be developed using a standard scheme.


Drawbacks

The same feature which lets monolithic kernels execute kernelspace work quickly is also a drawback in some scenarios: if anything in kernelspace fails, it can potentially impact all of kernelspace. This is often addressed by hardware-enforced protection of kernelspace tasks as well as userspace tasks, which in many instances ameliorates the issue.

Another drawback is that monolithic kernels must be updated atomically. They are typically tightly coupled and updates can be difficult to introduce.


Implications

If speed is a primary concern, a monolithic kernel is a good choice.


Microkernels

Microkernels are the minimalist sibling of monolithic kernels. The essence of the microkernel can be summed up as:

Only the bare minimum to operate the kernel lives in kernelspace.

In a microkernel, nothing resides in kernelspace besides the minimal set of operations needed to consider the kernel operational. Many resources say this set is:

  • Memory Management
  • Process Management
  • IPC

I like to add one more to the list because I feel it is a critical resource in a real system: Interrupt Management. Either way, it is easily seen that the microkernel contains fewer components running in kernelspace, hence the “micro” name.

Figure 2. Microkernel

The services that were once part of the kernelspace in the monolithic design are now run as “servers” in the userspace. These servers interact with the applications and the other servers via IPC, which is facilitated by the kernel. This of course has benefits, drawbacks and implications as well.


Benefits

One obvious benefit over the monolithic kernel is that the servers can be taken online/offline independently of the kernel, and a failure in one does not necessarily induce a system-wide failure.


Drawbacks

As with the monolithic kernel, the primary benefit of a microkernel has an associated drawback. Because each server runs in userspace and relies on IPC as its primary means of communication, the constant context switching into and out of kernel mode means microkernels are typically slower than their monolithic counterparts.

Another drawback is that not all expected services are guaranteed to be available on a microkernel, since they are effectively decoupled from kernel execution.


Implications

If service independence is most important, a microkernel is a good choice.

Key Takeaways

There are some aspects of the two primary kernel types I didn’t dig into, and I want to address them specifically below.


Security

There is a strong case for both monolithic kernels and microkernels from a security perspective. In this realm, the truth is that both designs have strengths and weaknesses, and the kernel’s design is only one aspect of the security posture of the system.


Safety

Like security, safety has a place in both microkernels and monolithic kernels. The safety of each relies more on the implementation and usage of the particular kernel than on its design.

Absoluteness of Design

In today’s software architectures, we regularly see hybrids of these two concepts where we can meld the benefits of each in a blended fashion. This is a bit of a hot topic because when the lines are blurred it is harder to define what falls into each category. For examples of hybridization, see this article.

Extension of the Concepts

By taking the concepts which drive the monolithic and particularly the microkernel design to extremes, we see some interesting outcomes. Take for instance the nanokernel. This is the extreme version of a microkernel, where even fewer services may be available and the application must provide everything else itself. This tends toward being simply a hypervisor, however, and may not be a microkernel at all. Another example is the exokernel, where hardware resources are left almost entirely un-abstracted, so the application must make all of these decisions itself. Of note regarding exokernels is that they have yet to be developed into a commercial-grade offering and remain primarily a research concept.