Linux Kernel Programming

Redha Gouicem




Course Organisation

Overview

Who are we?

  • LuFG Betriebssysteme/Operating Systems
  • Teaching
  • Research


Linux Kernel Programming

  • Organisation
  • Evaluation
  • Links

Who Are We?

Lehr- und Forschungsgebiet Betriebssysteme
Operating Systems Research and Teaching Unit


Faculty

  • Prof. Redha Gouicem

Administration

  • Claudia Graf

Researchers

  • Jérôme Coquisart, M.Sc.
  • Mostafa Hadizadeh, M.Sc.

Research assistants - Tutors (HiWi)

  • 16 people, both bachelor and master students

Where Can You Find Us?

Our offices are in the UMIC building, 2nd floor

Teaching Curriculum


Past/current theses examples

  • Multi-Processing Support for Unikernels (M.Sc.)
  • Understanding the Performance Impact of Security Mitigations in the Linux Kernel (B.Sc.)
  • Automatic Caching for Cloud Object Storage Systems (B.Sc.)
  • Automatic Detection and Mitigation of Page Reclamation Overhead (M.Sc.)
  • Design and Implementation of an Evaluation Framework for a NUMA-Aware Page Cache (B.Sc.)
  • Evaluating Memory Prefetching Policies in Linux Systems (B.Sc.)
  • Implement Support for Instructions in the Front-End of a Hybrid Binary Translator (B.Sc.)
  • Implement a Shared Page Cache in a Virtualised Environment (B.Sc./M.Sc.)
  • Impact of State-of-the-Art Page Table Replication Schemes on the Efficiency of NUMA Systems (B.Sc./M.Sc.)
  • Evaluation and Improvement for NUMA Kernel Text Replication (B.Sc./M.Sc.)

Research Activities

As the name of the group suggests, operating systems!

In short: design, implementation and optimisation of the lower software stack, between the hardware and users.

The main goals are:

  • Enable users to have an efficient and reliable execution environment for their applications,
  • Allow developers to have an easy-to-use abstraction of the hardware and transparent resource management and isolation


In a nutshell, our topics revolve around:


“Classical” operating systems

  • Scheduling
  • Memory management
  • Storage systems
  • Unikernels

Virtualisation

  • Hypervisors
  • Containers
  • Deployment models

Emerging hardware

  • Non-volatile memory
  • FPGAs
  • CXL

Binary translation

  • Cross architecture emulators
  • Memory models
  • Correctness

Linux Kernel Programming: Team

Lecturer:

Prof. Redha Gouicem


Teaching assistant:

  • Jérôme Coquisart, M.Sc.

Contact emails

General Information

In this course, you will learn how to program in the Linux kernel.

This is a very practical course, where you will mostly write code.


Lectures

Time: Tuesdays @ 16:30 - 18:00

Location: Lecture hall AH VI

Lecturer: Me

Content:

  • Fundamental OS concepts
  • Overview of kernel APIs
  • Linux kernel subsystems/algorithms

Labs

Time: Mondays & Thursdays @ 10:30 - 12:00

Location: UMIC 025

Teaching assistant: Jérôme Coquisart, M.Sc.

Content:

  • Explore the Linux kernel code base
  • Use Linux kernel APIs to implement modules
  • Learn/use kernel debugging tools

Important information

Unfortunately, we cannot provide hardware, so you need to come with your laptop.
Hardware that runs Linux is best, but Windows with WSL should also work.
Apple devices with ARM-based processors (M1/M2) should also work, but not easily…

Course Content

Lectures

  1. History and Architecture of the Linux Kernel
  2. C Bootcamp and Kernel Programming
  3. Implementing Kernel Modules & Contributing to the Kernel
  4. User-Kernel Communication
  5. Memory Management
  6. The Virtual File System
  7. Tracing Facilities in the Kernel

Labs

  1. Dusting Off Your C Skills
  2. First Steps with the Kernel
  3. My First Modules
  4. Debugging in the Linux Kernel
  5. User/Kernel Communication Mechanisms
  6. Memory Management
  7. Virtual File System
  8. System Calls


Info

A couple more lectures and labs might be added this year.

Examination

Written exam (45% of the final grade)

  • General questions about the lecture, e.g., explain a mechanism in the kernel
  • General questions about the exercises, e.g., explain how some API works
  • No full coding exercise, but you need to understand code, and explain how to modify it

Project (45% of the final grade)

  • Write a set of features in the kernel
  • In groups of 2-3 students
  • Assignment will be given a few weeks before the end of the lecture period
  • Your submitted code will be evaluated, and you will need to make a very short presentation/demo

Weekly labs (10% of the final grade)

  • Every 1-2 weeks, you will get a new lab
  • You will have to submit some of them for evaluation
  • You will need to get enough points in the labs to register for the exam and project

Contact & Online Material

Contact e-mail

If you want to contact us, please use the following e-mail address: lkp@os.rwth-aachen.de

If you contact us directly, you might wait longer or get no answer.

Matrix server

We will set up a Matrix chat room with all students (and us).

If you already have an account on another server, you can use it.
Otherwise, you will be allowed to create one on ours.

Lectures and Labs

Lecture slides will be uploaded just before the lecture here: https://teaching.os.rwth-aachen.de/LKP/lecture

Labs will be available here: https://teaching.os.rwth-aachen.de/LKP

Lecture Live Q&A

During lectures, you can ask questions directly by raising your hand, or through an online Q&A tool:


Link: Claper room

Reading Material

Books

  • Linux Kernel Development (3rd Edition), Robert Love
  • Linux Device Drivers, Third Edition, Jonathan Corbet, Alessandro Rubini, and Greg Kroah-Hartman


Online material

Chapter 1: History and Architecture of the Linux Kernel

Operating System and Kernel

In this course, we will use the following definitions:

Definition

The operating system is the set of software components that enables applications to use the underlying hardware and provides APIs to ease development.

Definition

The kernel is the set of components of the operating system that are executed in a privileged mode, usually in supervisor mode.

Kernel Taxonomy

Kernels are usually classified in various types:

  • Monolithic kernels
  • Microkernels
  • Hybrid microkernels
  • Unikernels


Let’s have a quick recap of these kernel architectures!

Monolithic Kernels

A monolithic kernel embeds all the system functionalities in a single binary. It contains all the core features of an operating system (scheduling, memory management, etc…) as well as drivers for devices or less essential components.


Characteristics

  • Defines a high level interface through system calls
  • Good performance when kernel components communicate (regular function calls in kernel space)
  • Limited safety: if one kernel component crashes, the whole system crashes

Examples

  • Unix family: BSD, Solaris
  • Unix-like: Linux
  • DOS: MS-DOS
  • Critical embedded systems: Cisco IOS

Why ‘monolithic’?

Monolithic means it is built as a single binary and runs in the same address space. The source code can still be organised in a modular way (e.g., using libraries).

Source: Wikimedia

Modularity

Some monolithic kernels allow dynamic code loading as modules, e.g., for drivers. These are usually called modular monolithic kernels.

Microkernels

A microkernel contains only the minimal set of features needed in kernel space:
address-space management, basic scheduling and basic inter-process communication.
All other services are pushed in user space as servers:
file systems, device drivers, high level interfaces, etc.


Characteristics

  • Small memory footprint, making it a good choice for embedded systems
  • Enhanced safety: when a user space server crashes, it does not crash the whole system
  • Adaptability: servers can be replaced/updated easily, without rebooting
  • Limited performance: IPCs are costly and numerous in such an architecture

Examples

  • Minix
  • L4 family: seL4, OKL4, sepOS
  • Mach
  • Zircon

Source: Wikimedia

Hybrid Microkernels

The hybrid kernel architecture sits between monolithic kernels and microkernels.
It is a monolithic kernel where some components have been moved out of kernel space as servers running in user space.

While the structure is similar to microkernels, i.e., using user space servers, hybrid kernels do not provide the same safety guarantees, as most components still run in the kernel.

Controversial architecture

This architecture’s existence is controversial, as some just define it as a stripped down monolithic kernel.


Examples

  • Windows NT
  • XNU (Mach + BSD)

Source: Wikimedia

Unikernels

A unikernel, or library operating system, embeds all the software in supervisor mode.
The kernel as well as all user applications run in the same privileged mode.

It is used to build single application operating systems, embedding only the necessary set of applications in a minimal image.


Characteristics

  • High performance: system calls become regular function calls, and there are no copies between user and kernel space
  • Security: the attack surface is minimised, making it easier to harden
  • Limited usability: unikernels are hard to build due to the limited set of supported features


Examples

  • Unikraft
  • clickOS
  • IncludeOS

Comparison Between Kernel Architectures



The choice of architecture has various impacts on performance, safety and interfaces:

  • Switching modes is costly: minimizing mode switches improves performance
  • Supervisor mode is not safe: minimizing code in supervisor mode improves safety and reliability
  • High level interfaces for programmers are in different locations depending on the architecture
    i.e., in the kernel for monolithic, but in libraries or servers for microkernels

In this course, we will focus on a monolithic modular kernel: Linux.

A Brief History of the Linux Kernel

Unix Systems

In the 1960s, MIT, AT&T Bell Labs and General Electric built Multics (Multiplexed Information and Computing Service).

Multics is a time-sharing operating system for mainframes that introduced new concepts:

  • multitasking: multiple users can use the system simultaneously
  • hierarchical file system: files are organised as a tree with directories
  • single-level store: files on storage are all mapped in memory, thus not accessed with read/write primitives, but through regular memory accesses


In 1970, AT&T Bell Labs left the project and started Unix, led by Ken Thompson, with Dennis Ritchie, Brian Kernighan, Douglas McIlroy, and Joe Ossanna.

Unix kept the hierarchical file system but dropped the single-level store, going for an “everything is a file” philosophy.

Unix was originally a single-tasking OS.


Why ‘Unix’?

The name Unix is a pun on Multics/Unics. Kernighan came up with the name, but states that “no one can remember” who came up with the spelling.

Timeline of Unix Systems

Source: https://en.wikipedia.org/wiki/History_of_Unix

Linux: Origins

First public appearance on the Minix newsgroup

From: Linus Benedict Torvalds
To: comp.os.minix
Subject: What would you like to see most in minix?
Date: 25 August 1991,  22:57:08

Hello everybody out there using minix -

I'm doing a (free) operating system (just a hobby, won't be
big and professional like gnu) for 386(486) AT clones. This
has been brewing since april, and is starting to get ready.
I'd like any feedback on things people like/dislike in minix,
as my OS resembles it somewhat (same physical layout of the
file-system (due to practical reasons) among other things).

I've currently ported bash(1.08) and gcc(1.40), and things
seem to work. This implies that I'll get something practical
within a few months, and I'd like to know what features most
people would want. Any suggestions are welcome, but I won't
promise I'll implement them :-)

Linus (torv...@kruuna.helsinki.fi)

PS. Yes - it's free of any minix code, and it has a
multi-threaded fs. It is NOT portable (uses 386 task switching
etc), and it probably never will support anything other than
AT-harddisks, as that's all I have :-(.

Reply from Andrew Tanenbaum (creator of Minix)

From: Andrew S. Tanenbaum
To: comp.os.minix
Subject: What would you like to see most in minix?
Date: 30 January 1992, 09:04

/* blablabla */

I still maintain the point that designing a monolithic kernel
in 1991 is a fundamental error. Be thankful you are not my
student. You would not get a high grade for such a design :-)

/* blablabla */

Prof. Andrew S. Tanenbaum (a...@cs.vu.nl)

Chronology of the Linux Kernel

Year Version Features
1994 1.0 stable kernel with basic UNIX functionalities
1995 1.2–1.3 round-robin scheduler, loadable modules, /dev/random
1996 2.0 PowerPC support, multicore, improved networking, Tux
1999 2.2 frame buffer, NTFS, FAT32, IPv6, USB, SLAB allocator
2001 2.4 new file systems (ext3, XFS, tmpfs), netfilter
2003 2.6 preemptible kernel, O(1) scheduler, ALSA
2004 2.6.4–2.6.10 EFI support, x86-64, ARMv6, CFQ IO scheduler
2005 2.6.14 FUSE support
2007 2.6.20–2.6.23 KVM, tickless kernel, SLUB allocator, CFS scheduler
2008 2.6.24–2.6.28 cgroups, ext4
2011 2.6.39 removal of the Big Kernel Lock (BKL)
2014 3.14–3.18 OverlayFS, eBPF, kernel address space layout randomization (KASLR)
2015 4.0 live patching
2018 4.15 kernel page table isolation (security mitigations)
2019 5.1 io_uring
2020 5.6 wireguard
2022 6.1 multi-gen LRU eviction algorithm, initial Rust support
2023 6.6 new EEVDF scheduler
2024 6.12 PREEMPT_RT, sched_ext

Linux Kernel Architecture

Linux offers six main functions:

  1. Process management
  2. Memory management
  3. Network management
  4. Storage management
  5. System interface
  6. Human interface

through five abstraction layers:

  1. User space interfaces

    System calls, procfs, sysfs, device files, …

  2. Virtual subsystems

    Virtual memory, virtual filesystem, network protocols, …

  3. Functional subsystems

    Filesystems, memory allocators, scheduler, …

  4. Devices control

    Interrupts, generic drivers, block devices, …

  5. Hardware interfaces

    Device drivers, architecture-specific code, …

Linux Kernel Map

Source: https://makelinux.github.io/kernel/map/

Linux Kernel Source Tree Structure

1. Tools and environment

2. Core components

3. Specific subsystems

4. Drivers and architecture-specific code


arch/  block/  COPYING  CREDITS  crypto/  Documentation/  drivers/  fs/  include/  init/  ipc/  Kbuild  Kconfig  kernel/
lib/  MAINTAINERS  Makefile  mm/  net/  README  REPORTING-BUGS  samples/  scripts/  security/  sound/  tools/  usr/  virt/


Linux Kernel Source Tree Structure (2)

Tools and environment:

  • Documentation/: text documentation, in addition to comments
  • scripts/: scripts used for configuration, formatting, etc…
  • usr/: utilities to generate the Linux image
  • tools/: user space tools to interact with the kernel
  • samples/: code samples (a good place to start)

Core components:

  • init/: kernel start up code (including main.c)
  • kernel/: main kernel components code
  • lib/: libc used to build the kernel
  • include/: headers

Specific subsystems:

  • block/: drivers for block devices
  • crypto/: cryptographic algorithms, hashes, …
  • fs/: file systems
  • ipc/: inter-process communication
  • mm/: memory management
  • net/: network support
  • security/: kernel security mechanisms
  • sound/: sound drivers, audio support
  • virt/: virtualisation support (kvm)

Drivers:

  • arch/: architecture-specific code for each processor family
  • drivers/: drivers for various hardware

Chapter 2: C Bootcamp and Kernel Programming

C Bootcamp

Function Inlining

The inline keyword allows the compiler to replace a function call by the body of the called function.


Pros

  • Save the cost of a function call
  • Allow more optimisations

Cons

  • Increase the code size, thus more cache misses
  • More pressure on registers


Inlined function definition

inline int max(unsigned int a, unsigned int b)
{
    return (a > b) ? a : b;
}

Initial call location

int f(unsigned int y)
{
    return max(y, 2 * y);
}

After inlining

int f(unsigned int y)
{
    return (y > 2 * y) ? y : 2 * y;
}

After optimisations

int f(unsigned int y)
{
    return 2 * y;
}

Branch Prediction Annotations

gcc (and most compilers) allows programmers to hint at the expected outcome of a branch; in the kernel, this is done with the likely() and unlikely() annotations.

These annotations rely on a compiler built-in that is not part of standard C, but it is supported by gcc and clang (at least).

static void next_reap_node(void)
{
    int node = __this_cpu_read(slab_reap_node);

    node = next_node(node, node_online_map);

    if (unlikely(node >= MAX_NUMNODES))
        node = first_node(node_online_map);

    __this_cpu_write(slab_reap_node, node);
}
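
For reference, these annotations are macros around gcc's __builtin_expect() built-in, defined in include/linux/compiler.h (simplified here):

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)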

Enforcing a Calling Convention

The asmlinkage annotation tells the compiler to always place the arguments of a function on the stack.


Without it, gcc may try to optimise function calls by placing arguments in registers instead.

Using asmlinkage prevents this optimisation, simplifying calling this function from assembly code.


It is mainly used in system calls in order to enforce the calling convention.

asmlinkage long sys_close(unsigned int fd);


In practice, asmlinkage is a macro whose definition is architecture-specific, in asm/linkage.h, for example:

#define asmlinkage CPP_ASMLINKAGE __attribute__((syscall_linkage))

Unions

A union is a special type that allows storing different types of data at the same memory location.
Each member of a union is a typed alias of the same memory location.
The allocated size is equal to the size of the largest member of the union.

typedef union {
    short x;
    long y;
    float z;
} my_union_t;


Examples in the kernel

union thread_union {
    struct thread_info thread_info;
    unsigned long stack[THREAD_SIZE/sizeof(long)];
};




The struct page structure is one of the worst examples of union usage \(\rightarrow\)

struct page {
    unsigned long flags;        /* Atomic flags, some possibly
                     * updated asynchronously */
    /*
     * Five words (20/40 bytes) are available in this union.
     * WARNING: bit 0 of the first word is used for PageTail(). That
     * means the other users of this union MUST NOT use the bit to
     * avoid collision and false-positive PageTail().
     */
    union {
        struct {    /* Page cache and anonymous pages */
            /**
             * @lru: Pageout list, eg. active_list protected by
             * lruvec->lru_lock.  Sometimes used as a generic list
             * by the page owner.
             */
            union {
                struct list_head lru;

                /* Or, for the Unevictable "LRU list" slot */
                struct {
                    /* Always even, to negate PageTail */
                    void *__filler;
                    /* Count page's or folio's mlocks */
                    unsigned int mlock_count;
                };

                /* Or, free page */
                struct list_head buddy_list;
                struct list_head pcp_list;
            };
            /* See page-flags.h for PAGE_MAPPING_FLAGS */
            struct address_space *mapping;
            union {
                pgoff_t index;      /* Our offset within mapping. */
                unsigned long share;    /* share count for fsdax */
            };
            /**
             * @private: Mapping-private opaque data.
             * Usually used for buffer_heads if PagePrivate.
             * Used for swp_entry_t if PageSwapCache.
             * Indicates order in the buddy system if PageBuddy.
             */
            unsigned long private;
        };
        struct {    /* page_pool used by netstack */
            /**
             * @pp_magic: magic value to avoid recycling non
             * page_pool allocated pages.
             */
            unsigned long pp_magic;
            struct page_pool *pp;
            unsigned long _pp_mapping_pad;
            unsigned long dma_addr;
            atomic_long_t pp_ref_count;
        };
        struct {    /* Tail pages of compound page */
            unsigned long compound_head;    /* Bit zero is set */
        };
        struct {    /* ZONE_DEVICE pages */
            /** @pgmap: Points to the hosting device page map. */
            struct dev_pagemap *pgmap;
            void *zone_device_data;
            /*
             * ZONE_DEVICE private pages are counted as being
             * mapped so the next 3 words hold the mapping, index,
             * and private fields from the source anonymous or
             * page cache page while the page is migrated to device
             * private memory.
             * ZONE_DEVICE MEMORY_DEVICE_FS_DAX pages also
             * use the mapping, index, and private fields when
             * pmem backed DAX files are mapped.
             */
        };

        /** @rcu_head: You can use this to free a page by RCU. */
        struct rcu_head rcu_head;
    };

    union {     /* This union is 4 bytes in size. */
        /*
         * For head pages of typed folios, the value stored here
         * allows for determining what this page is used for. The
         * tail pages of typed folios will not store a type
         * (page_type == _mapcount == -1).
         *
         * See page-flags.h for a list of page types which are currently
         * stored here.
         *
         * Owners of typed folios may reuse the lower 16 bit of the
         * head page page_type field after setting the page type,
         * but must reset these 16 bit to -1 before clearing the
         * page type.
         */
        unsigned int page_type;

        /*
         * For pages that are part of non-typed folios for which mappings
         * are tracked via the RMAP, encodes the number of times this page
         * is directly referenced by a page table.
         *
         * Note that the mapcount is always initialized to -1, so that
         * transitions both from it and to it can be tracked, using
         * atomic_inc_and_test() and atomic_add_negative(-1).
         */
        atomic_t _mapcount;
    };

    /* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */
    atomic_t _refcount;

#ifdef CONFIG_MEMCG
    unsigned long memcg_data;
#elif defined(CONFIG_SLAB_OBJ_EXT)
    unsigned long _unused_slab_obj_exts;
#endif

    /*
     * On machines where all RAM is mapped into kernel address space,
     * we can simply calculate the virtual address. On machines with
     * highmem some memory is mapped into kernel virtual memory
     * dynamically, so we need a place to store that address.
     * Note that this field could be 16 bits on x86 ... ;)
     *
     * Architectures with slow multiplication can define
     * WANT_PAGE_VIRTUAL in asm/page.h
     */
#if defined(WANT_PAGE_VIRTUAL)
    void *virtual;          /* Kernel virtual address (NULL if
                       not kmapped, ie. highmem) */
#endif /* WANT_PAGE_VIRTUAL */

#ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
    int _last_cpupid;
#endif

#ifdef CONFIG_KMSAN
    /*
     * KMSAN metadata for this page:
     *  - shadow page: every bit indicates whether the corresponding
     *    bit of the original page is initialized (0) or not (1);
     *  - origin page: every 4 bytes contain an id of the stack trace
     *    where the uninitialized value was created.
     */
    struct page *kmsan_shadow;
    struct page *kmsan_origin;
#endif
} _struct_page_alignment;

Structures in Memory

A structure is a collection of one or more variables.

struct version {
    unsigned short major; // usually 2 bytes
    unsigned long minor;  // usually 8 bytes
    char flags;           // 1 byte
};


Memory alignment

A memory access is aligned if the accessed address is a multiple of the size of the access.

Example: an access to an unsigned long is aligned if the address is a multiple of 8 bytes.

Ordering and Padding

The only layout guarantee in the C language is the order of the members! The compiler can add padding between members to satisfy alignment constraints.


An optimised version of the structure:

struct version {
    unsigned long minor;  // usually 8 bytes
    unsigned short major; // usually 2 bytes
    char flags;           // 1 byte
};
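
A quick user space sketch to see the effect of member ordering (the exact sizes depend on the architecture and ABI; the comments assume a typical x86-64/Linux layout):

#include <stdio.h>

struct version_unordered {
    unsigned short major; /* 2 bytes, then 6 bytes of padding before minor */
    unsigned long minor;  /* 8 bytes */
    char flags;           /* 1 byte, then 7 bytes of tail padding */
};

struct version_ordered {
    unsigned long minor;  /* 8 bytes */
    unsigned short major; /* 2 bytes */
    char flags;           /* 1 byte, then 5 bytes of tail padding */
};

int main(void)
{
    printf("unordered: %zu bytes\n", sizeof(struct version_unordered)); /* 24 */
    printf("ordered:   %zu bytes\n", sizeof(struct version_ordered));   /* 16 */
    return 0;
}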

Variable-length Arrays

In C, an array must have a size. It is common to use a struct to keep it close to the array:

struct buf {
    char *buffer;
    size_t length;
};


This has several drawbacks:


Allocation is done in two steps (allocate the struct, then allocate the array)

struct buf *alloc_buffer(size_t length)
{
    struct buf *b = malloc(sizeof(struct buf));
    b->length = length;
    b->buffer = malloc(length);

    return b;
}

Freeing also requires two calls to free

void free_buffer(struct buf *b)
{
    free(b->buffer);
    free(b);
}


Copying requires a manual deep copy

struct buf *copy_buf(struct buf *b)
{
    struct buf *copy = alloc_buffer(b->length);
    memcpy(copy->buffer, b->buffer, b->length);
    
    return copy;
}

Tail-padded Structures

One way to overcome this is called tail-padded structures: placing an undefined size array as the last member of a structure.


struct buf {
    size_t length;
    char buffer[];
};

struct buf *alloc_buffer(size_t length)
{
    struct buf *b = malloc(sizeof(struct buf) + length);
    b->length = length;

    return b;
}


Allocation, free, and copy can be done in one go.
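
For completeness, a sketch of the matching free and copy helpers (hypothetical names; assumes the struct buf and alloc_buffer() definitions above, plus <stdlib.h> and <string.h>):

void free_buffer(struct buf *b)
{
    free(b);  /* a single allocation, so a single free */
}

struct buf *copy_buffer(struct buf *b)
{
    struct buf *copy = alloc_buffer(b->length);
    if (copy)
        memcpy(copy->buffer, b->buffer, b->length);  /* one contiguous block */
    return copy;
}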


Multiple implementations are possible:

  • int buffer[]: in the C99 standard (flexible array member), preferred form
  • int buffer[1]: non-standard, but supported by compilers
  • int buffer[0]: non-standard, but supported by compilers

Array vs Pointer

int main(void)
{
    char *yes = "da";
    char ja[3];

    yes = ja;
    ja = yes;
}


If you compile this code, you get this error:

foo.c: In function ‘main’:
foo.c:6:12: error: assignment to expression with array type
    6 |         ja = yes;
      |            ^


yes is a pointer to a char (here, the first character of the string "da").

ja is an array identifier, a symbolic constant.

Array Identifiers Ambiguity

#include <stdio.h>

int main(void)
{
    char *yes = "da";
    char ja[3];

    printf("yes: %p - %p\n", yes, &yes);
    printf("ja:  %p - %p\n", ja, &ja);

    return 0;
}


results in:

yes: 0x55b70c1d7004 - 0x7ffcf5fe4268
ja:  0x7ffcf5fe4275 - 0x7ffcf5fe4275


A symbolic constant's address doesn't really make sense, so the compiler gives it the value of the constant (hence ja == &ja).

Function Pointers (1)

Declaration

A function pointer is declared with the following syntax:

return_type(*function_name)(parameter_list);


Example 1: a function taking no parameters and returning nothing

void (*func_p)(void);

Example 2: a function taking an int and a char, and returning an int

int (*func_p)(int, char);


Addressing

You can get a function’s address with the & operator.

void my_func(int foo)
{
    // body
}

void (*func_ptr)(void);     // declaration
func_ptr = &my_func;        // assignment


Function identifiers are also symbolic constants, which means you can use the same naming ambiguity:

func_ptr = my_func;            // assignment

Function Pointers (2)

Calling a function pointer

#include <stdio.h>

void say_hello(char *name)
{
    printf("Hello %s\n", name);
}

int main(void)
{
    void (*func_ptr)(char *);   // declaration
    func_ptr = say_hello;       // assignment
    (*func_ptr)("zero");        // call

    return 0;
}


Since function pointers are symbolic constants, you can write:

func_ptr("zero");

Function Pointers (3)

As a function argument

Function pointers are frequently used in the kernel to set up callbacks.

void free_elem(struct elem *e)
{
    free(e);
}

void put_elem(struct elem *e, void (*release)(struct elem *))
{
    e->refcount--;
    if (!e->refcount)
        release(e);
}

int main(void)
{
    struct elem *e = malloc(sizeof(struct elem));
    put_elem(e, free_elem);
}


As a return value

int atoi(const char *nptr) { /* body */ }

int (*func_ptr(void)) (const char *)
{
    return atoi;
}

Macros

Macros for constants

#define MAX_CONNECTIONS 256


Macros as functions

#define max(a, b) ((a) > (b) ? (a) : (b))


Warning!

What happens with this code?

#define sqr(a) a * a
int a = 3;
int a_sqr = sqr(a + 1);  // expected: a_sqr = 4^2 = 16

sqr(a + 1) is expanded as 3 + 1 * 3 + 1, which returns 7.


#define sqr(a) ((a) * (a))
int a = 3;
int a_sqr = sqr(a + 1);  // expected: a_sqr = 4^2 = 16

sqr(a + 1) is expanded as ((3 + 1) * (3 + 1)), which returns 16.


Always put parentheses around the arguments, as the macro's expansion might otherwise provoke unwanted behaviour!

Macros (2)

Macros as code blocks

#define kthread_init_delayed_work(dwork, fn)        \
    do {                                            \
        kthread_init_work(&(dwork)->work, (fn));    \
        timer_setup(&(dwork)->timer,                \
                 kthread_delayed_work_timer_fn,     \
                 TIMER_IRQSAFE);                    \
    } while (0)


The do ... while(0) construct makes the macro expand to a single statement, which allows it to be used:

  • like a function call: just add a ; after
    e.g., kthread_init_delayed_work(dwork, fn);
  • safely as the body of an if, else, for or while without braces, since the whole expansion is guarded as one statement (see the sketch below)
  • note that, unlike an expression, a do ... while(0) block cannot be used as a condition or as a function argument; the kernel uses gcc statement expressions (({ ... })) for that
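
A minimal sketch of the problem the construct avoids (hypothetical helper functions, not kernel code):

/* Hypothetical helpers, only here to make the example self-contained. */
static void setup(int dev) { (void)dev; }
static void register_handler(int dev) { (void)dev; }
static void abort_init(int dev) { (void)dev; }

/* Without do { } while (0), only setup() would be guarded by the if below,
 * and the dangling else would not compile as intended. */
#define init_bad(dev)  setup(dev); register_handler(dev)

/* With do { } while (0), the expansion is a single statement. */
#define init_good(dev)          \
    do {                        \
        setup(dev);             \
        register_handler(dev);  \
    } while (0)

static void example(int ready, int dev)
{
    if (ready)
        init_good(dev);   /* replacing this with init_bad(dev) breaks the else */
    else
        abort_init(dev);
}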

Good Practices in the Kernel

Wise Use of the Stack

Kernel stack is small compared to user stack!


Stack size is statically defined at kernel compile time, cannot grow dynamically.


Usually fits on a few pages:

  • 8 KB for 32-bit architectures
  • 16 KB for 64-bit architectures


What to avoid?

  • Large allocations on the stack
  • Deep recursive call chains

Floating-Point Operations

Avoid floating point operations at all cost!


Why?


Extremely costly!

  • Enable the FPU (Floating-Point Unit)
  • Save all user space state related to the FPU (i.e., registers)
  • Disable the FPU


Not very useful!

  • No access to the libc, so no existing complex functions
  • You can only use inline functions from gcc
  • Most of the time, you can work around this with integer approximation

On the Dangers of Kernel Programming

Making changes to your kernel can render it unstable and lead to a kernel panic, i.e., a full system crash.


Keep a backup kernel

Never replace your running kernel with a new one!
Always keep a fully working backup kernel installed in your bootloader!


Work in modules

Always implement your changes as modules if possible.

  • That will limit the impact of some crashes in your code on the rest of the kernel.
  • Easier to test since you can load modules dynamically, test, unload, make changes and repeat

Important

Keep in mind that a bug can corrupt persistent data, e.g. on your hard drive. You could lose data for good if you work directly on your system!

Tip

Working in a virtual machine alleviates most of these issues!

Kernel APIs

Linux Kernel API

In the kernel, you won’t have access to the usual libraries like the libc.


Thankfully, the kernel provides its own internal “library” with basic functionalities.

They are described in Documentation/core-api/index.rst.


Let’s make a quick tour of some of these functionalities!

  • Generic base data types
  • Returning errors
  • Printing
  • Memory allocation
  • Waiting for resources
  • Task queues

Use Generic Types!

To ensure portability across architectures, the kernel offers generic types defined in include/linux/types.h


u8: unsigned byte (8 bits)
u16: unsigned word (16 bits)
u32: unsigned doubleword (32 bits)
u64: unsigned quadword (64 bits)
s8: signed byte (8 bits)
s16: signed word (16 bits)
s32: signed doubleword (32 bits)
s64: signed quadword (64 bits)


If a variable is visible from user space (e.g., ioctl), you must use types prefixed with __ (double underscore)

__u8        __s8
__u16       __s16
__u32       __s32
__u64       __s64
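
A small sketch of how these types are typically used (hypothetical structure names):

#include <linux/types.h>

/* Kernel-internal structure: fixed-width kernel types. */
struct dev_stats {
    u32 packets_rx;
    u32 packets_tx;
    u64 last_seen_ns;
};

/* Same layout shared with user space (e.g., through an ioctl):
 * the double-underscore variants must be used instead. */
struct dev_stats_user {
    __u32 packets_rx;
    __u32 packets_tx;
    __u64 last_seen_ns;
};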

Returning Errors

Functions in the kernel follow the same convention as system calls by returning an integer:

  • Success: a value \(\ge 0\)
  • Error: the negative value of the error code (i.e., -errno)

If the function returns a pointer:

  • Success: the pointer to return
  • Failure: two possibilities
    • Return NULL if there is only one reason to fail

    • Return the error code encoded with the ERR_PTR() macro.

      The calling function can check if there was an error with IS_ERR() and get the error code with PTR_ERR()

int do_shash(unsigned char *name, unsigned char *result, const u8 *data1, unsigned int data1_len,
          const u8 *data2, unsigned int data2_len, const u8 *key, unsigned int key_len)
{
    int rc;
    unsigned int size;
    struct crypto_shash *hash;
    struct sdesc *sdesc;

    hash = crypto_alloc_shash(name, 0, 0);
    if (IS_ERR(hash)) {
        rc = PTR_ERR(hash);
        pr_err("%s: Crypto %s allocation error %d\n", __func__, name, rc);
        return rc;
    }
    /* ... */
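
The producer side of this convention can be sketched like this (hypothetical type and function names):

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/slab.h>

struct my_ctx {
    int id;
};

/* On failure, encode the error code inside the returned pointer. */
static struct my_ctx *my_ctx_create(void)
{
    struct my_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

    if (!ctx)
        return ERR_PTR(-ENOMEM);
    return ctx;
}

/* Caller side: decode with IS_ERR()/PTR_ERR(), as in the example above. */
static int my_ctx_use(void)
{
    struct my_ctx *ctx = my_ctx_create();

    if (IS_ERR(ctx))
        return PTR_ERR(ctx);
    kfree(ctx);
    return 0;
}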

Printing

If you need to print information to be available from user space, e.g., tracing or debugging, you can use the printk() function.
It works similarly to printf(), with a couple of differences:

  • You should prefix your format string with a priority level defined by macros in include/linux/kern_levels.h, from KERN_EMERG to KERN_DEBUG.
  • The output doesn't go to stdout, but into the kernel ring buffer, which you can read from user space with the dmesg command, or with journalctl (and other commands)

Example:

printk(KERN_ERR "%s:%d: this shouldn't be reached...\n", __FILE__, __LINE__);

There are also predefined macros for each level:

pr_debug("debug message\n");
pr_info("info message\n");
pr_err("error message\n");

Tip

Formats are available at Documentation/printk-formats.txt.

Filtering your prints

You can define, at the top of your module, the following macro to add a prefix to all your prints:

#define pr_fmt(fmt) "%s:%s: " fmt, KBUILD_MODNAME, __func__

This will add your module name and the name of the function as a prefix to all your prints.

Memory Management

Memory allocation is done with the kmalloc() function, similar to malloc().
Some specific characteristics:

  • Fast (except if blocked waiting for pages)

  • Allocated memory is not initialised

  • Allocated memory is contiguous in physical memory

  • Memory is allocated by areas of \(2^n - k\) bytes (\(k\): a few metadata bytes).

    Do not allocate 1024 B if you need 1000 B, you will end up with 2048 B!

Example:

data = kmalloc(sizeof(*data), GFP_KERNEL);


kmalloc GFP flags

The second parameter of kmalloc() is a Get Free Pages (GFP) flag:

  • GFP_KERNEL: Regular kernel allocation.
    Can be blocking. Best choice for most cases.
  • GFP_ATOMIC: Non-blocking allocation.
    Use in atomic context (e.g., interrupt handlers or while holding a spinlock), where sleeping is not allowed.
  • GFP_USER: Allocate memory for a user space process.
    Can block. Lowest priority.
  • GFP_NOIO: Can block, but no I/O can be executed.
  • GFP_NOFS: Can block, but no file system operation can be executed.
  • GFP_HIGHUSER: Allocate memory in user space high memory (\(\gt 4\) GB). Can block. Low priority.

More combinations available in include/linux/gfp_types.h.
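
A short sketch of choosing the flag depending on the calling context (hypothetical function names):

#include <linux/slab.h>

/* Regular process context: the allocation may sleep. */
static void *alloc_in_process_context(size_t len)
{
    return kmalloc(len, GFP_KERNEL);
}

/* Atomic context (e.g., interrupt handler, spinlock held): must not sleep. */
static void *alloc_in_atomic_context(size_t len)
{
    return kmalloc(len, GFP_ATOMIC);
}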

Memory Management (2)

If you need large chunks of memory, you should not use kmalloc(); instead, request pages directly with one of these functions:


unsigned long get_zeroed_page(int flags);
    returns a pointer to a free page after filling it with zeros

unsigned long __get_free_page(int flags);
    returns a pointer to a free page

unsigned long __get_free_pages(int flags, unsigned long order);
    returns a pointer to a memory area with \(2^{order}\) contiguous pages


Virtual allocation

If you don’t need the memory to be contiguous, you can allocate in the virtual address space instead of physical:

void *vmalloc(unsigned long size);
void vfree(void *addr);


Mapping physical to virtual addresses

You can also map a physical memory location into the virtual address space:

void *ioremap(unsigned long phys_addr, unsigned long size);
void iounmap(void *addr);
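
A sketch combining both allocation styles (arbitrary sizes, hypothetical function names):

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/vmalloc.h>

static unsigned long pages;  /* physically contiguous */
static void *vbuf;           /* only virtually contiguous */

static int alloc_buffers(void)
{
    /* 2^2 = 4 physically contiguous pages. */
    pages = __get_free_pages(GFP_KERNEL, 2);
    if (!pages)
        return -ENOMEM;

    /* 1 MiB of virtually contiguous memory. */
    vbuf = vmalloc(1024 * 1024);
    if (!vbuf) {
        free_pages(pages, 2);
        return -ENOMEM;
    }
    return 0;
}

static void free_buffers(void)
{
    vfree(vbuf);
    free_pages(pages, 2);
}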

Waiting for Resources

If you need to wait for a resource (e.g., network packet, message), the interface should implement a wait queue to allow your thread to sleep and be woken up when the resource is available.


wait_event(wait_queue, condition);
    the thread sleeps and will be woken up if wake_up() is called on the wait queue and the condition is true

wait_event_interruptible(wait_queue, condition);
    same as wait_event(), but the thread can also be woken up by a signal


The resource handler calls wake_up() on the queue to wake up waiting threads.
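
A minimal producer/consumer sketch using a wait queue (hypothetical names):

#include <linux/errno.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(my_wq);
static int data_ready;

/* Consumer: sleeps until data_ready becomes true (or a signal arrives). */
static int wait_for_data(void)
{
    if (wait_event_interruptible(my_wq, data_ready))
        return -ERESTARTSYS;  /* woken up by a signal */
    return 0;
}

/* Producer: publishes the data and wakes up the waiters. */
static void publish_data(void)
{
    data_ready = 1;
    wake_up(&my_wq);
}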

Workqueues

Workqueues allow you to execute code asynchronously.


At creation time, a pool of threads is initialised.

Jobs can then be submitted in the form of a function pointer and a pointer to an argument.

A thread from the workqueue will, asynchronously, check the queue, pop a job and execute it.


Tip

Documentation available in Documentation/core-api/workqueue.rst.
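
A minimal sketch of submitting a job to the system workqueue (hypothetical names):

#include <linux/printk.h>
#include <linux/workqueue.h>

static void my_work_fn(struct work_struct *work)
{
    pr_info("running asynchronously in a kernel worker thread\n");
}

static DECLARE_WORK(my_work, my_work_fn);

static void submit_job(void)
{
    /* Queue the job; a worker thread will pick it up and run my_work_fn(). */
    schedule_work(&my_work);
}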

Generic Data Structures

The kernel also offers generic data structures to work with:

  • Linked lists
  • Maps
  • Circular buffers
  • Red-black trees

Important

Generic data structures in C are not obvious to build…

Generic Data Structures in C

Instead of having objects in a list, we have the list in the objects!


The “naive” version:

struct elem {
    struct object {
        int v0, v1;
    } obj;
    struct elem *next, *prev;
};



Not generic! You need one list type per object type.

The “good” version:

struct object {
    int v0, v1;
    struct list_head {
        struct list_head *next, *prev;
    } list;
};



You only need one list_head type to be defined, and you can reuse it for any object type!

When you iterate over the list, how do you get the containing object?

Container of

From the address of any member in a structure, how can we get the address of the structure?
e.g., from the address of a list_head element in a structure


Linux implements the container_of macro!

/**
 * container_of - cast a member of a structure out to the containing structure
 * @ptr:    the pointer to the member.
 * @type:   the type of the container struct this is embedded in.
 * @member: the name of the member within the struct.
 *
 * WARNING: any const qualifier of @ptr is lost.
 */
#define container_of(ptr, type, member) ({              \
    void *__mptr = (void *)(ptr);                   \
    static_assert(__same_type(*(ptr), ((type *)0)->member) ||   \
              __same_type(*(ptr), void),            \
              "pointer type mismatch in container_of()");   \
    ((type *)(__mptr - offsetof(type, member))); })


After expanding all macros, this looks like this:

#define offset_of(type, member) \
    ((size_t)&((type *)0)->member)

#define container_of(ptr, type, member) \
    ((type *)((void *)(ptr) - offset_of(type, member)))
  • offset_of: pretend a structure of this type sits at address 0 and take the address of the member, which is then its offset within the structure
  • container_of: subtract this offset from the address of the member

Generic Data Structure Helpers

For each generic data structure, the kernel provides helpers to use them.

Let’s see examples for circular doubly linked lists (list_head from include/linux/list.h):

Allocators:

LIST_HEAD(name)


Insert/delete:

static inline void list_add(struct list_head *new, struct list_head *head);
static inline void list_del(struct list_head *entry);


Iterators:

list_for_each(pos, head)
list_for_each_entry(pos, head, member)


And a lot more!


Tip

You can find similar helpers for all generic data structures.
Go check them out in the kernel sources!
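
A small sketch putting list_head and its helpers together (hypothetical object type):

#include <linux/errno.h>
#include <linux/list.h>
#include <linux/printk.h>
#include <linux/slab.h>

struct item {
    int value;
    struct list_head node;  /* the list lives inside the object */
};

static LIST_HEAD(item_list);

static int add_item(int value)
{
    struct item *it = kmalloc(sizeof(*it), GFP_KERNEL);

    if (!it)
        return -ENOMEM;
    it->value = value;
    list_add(&it->node, &item_list);
    return 0;
}

static void print_items(void)
{
    struct item *it;

    /* The iterator hands back the containing struct item directly
     * (it uses container_of() under the hood). */
    list_for_each_entry(it, &item_list, node)
        pr_info("item: %d\n", it->value);
}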

Concurrency in the Kernel

Resources/objects can be accessed concurrently in the kernel.


There are two reasons this can happen:

  • Preemption: Since version 2.6, the Linux kernel is preemptible.
    This means that kernel code can be interrupted by higher priority code, e.g., device interrupt.
  • Multi-core processors: With multi-core CPUs, two threads can execute kernel code in parallel, and thus access the same kernel data concurrently.


Possible solutions:

  • Mask interrupts
  • Big Kernel Lock
  • Synchronisation primitives: semaphores, spinlocks
  • Atomic operations

Masking Interrupts

Concurrency problems can arise due to preemption both on single- and multi-core systems.

In the case of single-core CPUs, it can be solved solely by disabling interrupts, making the kernel code non-preemptible.


In Linux, you can use the following macros:

  • local_irq_disable(): this uses the proper assembly instruction to disable interrupts on the current core, e.g., cli on x86.
  • local_irq_enable(): this uses the proper assembly instruction to enable interrupts on the current core, e.g., sti on x86.


Example: A driver for a joystick in drivers/input/joystick/analog.c

static int analog_cooked_read(struct analog_port *port)
{
    /* some code */
    local_irq_disable();
    this = gameport_read(gameport) & port->mask;
    now = ktime_get();
    local_irq_enable();
    /* some code */
}

Big Kernel Lock

On multi-core systems, disabling interrupts is not sufficient, as other cores might also access data concurrently.

One potential solution is to serialise all kernel code, allowing only one thread at a time to execute code in supervisor mode.


This was the initial solution used in Linux when support for multi-core CPUs was added.
The Big Kernel Lock (BKL) was taken when entering the kernel and released when exiting.
Only one thread at a time was running kernel code.


Pro: Extremely simple to implement and safe

Con: Large performance degradation due to the loss of parallelism for kernel code


Linux and the BKL

Linux had a Big Kernel Lock from the introduction of Symmetric Multi-Processor (SMP) support in version 2.0 in 1996 until its removal in 2.6.39 in 2011.

Synchronisation Primitives

Since the BKL removal, fine-grained synchronisation mechanisms are used in the kernel.


A non-exhaustive list of synchronisation mechanisms and their (partial) API:

  • Mutexes
void mutex_lock(struct mutex *lock);
int mutex_trylock(struct mutex *lock);
void mutex_unlock(struct mutex *lock);
  • Semaphores
void down(struct semaphore *sem);
void up(struct semaphore *sem);
  • Spinlocks
void spin_lock(spinlock_t *lock);
void spin_unlock(spinlock_t *lock);
  • Readers/writer locks
void read_lock(rwlock_t *lock);
void write_lock(rwlock_t *lock);
void read_unlock(rwlock_t *lock);
void write_unlock(rwlock_t *lock);

Note

Most of these have variations that also disable interrupts when taking a lock.
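
A short sketch of a counter protected by a spinlock, including the interrupt-disabling variant mentioned in the note (hypothetical names):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(counter_lock);
static unsigned long counter;

static void counter_inc(void)
{
    spin_lock(&counter_lock);
    counter++;
    spin_unlock(&counter_lock);
}

/* Variant for data also touched from interrupt context:
 * local interrupts are disabled while the lock is held. */
static void counter_inc_irqsafe(void)
{
    unsigned long flags;

    spin_lock_irqsave(&counter_lock, flags);
    counter++;
    spin_unlock_irqrestore(&counter_lock, flags);
}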

Atomic Operations

Concurrent access problems can also be solved by using atomic operations in some cases.

Atomic operations are architecture-specific, and are defined in include/linux/atomic/atomic-instrumented.h


These operations should be used on a specific type, atomic_t, to represent the atomic variable.

You can find the usual atomic operations, for example:

  • void atomic_add(int i, atomic_t *v);
  • void atomic_dec(atomic_t *v);
  • void atomic_or(int i, atomic_t *v);
  • And many others…
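
A minimal sketch of a reference counter built on atomic_t (hypothetical names; real kernel code would rather use refcount_t for this):

#include <linux/atomic.h>
#include <linux/printk.h>

static atomic_t refcount = ATOMIC_INIT(1);

static void obj_get(void)
{
    atomic_inc(&refcount);
}

static void obj_put(void)
{
    /* atomic_dec_and_test() returns true when the counter reaches zero. */
    if (atomic_dec_and_test(&refcount))
        pr_info("last reference dropped\n");
}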

Coding Style

The Linux Kernel Coding Style

Defined in Documentation/process/coding-style.rst.


It defines a set of rules that will be enforced when a patch is submitted:

  • Indentation
  • Line length
  • Spaces and braces
  • Error management
  • etc.


Tip

You can check if your patches are valid with regard to the coding style with the scripts/checkpatch.pl script!

Some Coding Style Rules

Indentation

Indentation is done with tabs, not spaces.
Tabs are 8 characters long.


Line length

For better readability, the preferred limit on the length of a line is 80 characters.
However, never break user-visible strings, as it also breaks the ability to grep them.

Since 2020, checkpatch.pl only complains about lines longer than 100 characters.


Too restrictive?

From the coding style documentation:

Now, some people will claim that having 8-character indentations makes the code
move too far to the right, and makes it hard to read on a 80-character terminal
screen. The answer to that is that if you need more than 3 levels of
indentation, you're screwed anyway, and should fix your program.

Some Coding Style Rules (2)

Braces

  • Opening braces are on the same line as the block they open, except for functions where they are on the next line
  • Closing braces go on a line of their own
  • Don’t use braces to surround single-line statements…
  • … except if another branch of a conditional has multiple statements
for (int i = 0; i < 10; i++) {
    printk("%d\n", i);
}
int inc(int x)
{
    return ++x;
}


if (!x)
    x++;
if (x > y)
    y++;
else
    x++;
if (x > y) {
    y++;
} else {
    x++;
    y--;
}

Some Coding Style Rules (3)

Spaces

The philosophy is based on function-versus-keyword usage.

  • No spaces after functions

  • Spaces after keywords (except if they are used like functions, e.g., sizeof)

    if, switch, case, for, do, while

For operators:

  • Spaces on both sides of binary and ternary operators

    = + - < > * / % | & ^ <= >= == != ? :

  • No space after unary operators or before/after postfix/prefix increment and decrement

    & * + - ~ ! ++ --

  • No space around structure member operators

    . ->

  • No trailing spaces at the end of lines

Chapter 3: Implementing Kernel Modules

A Quick Tour of the Kernel Configuration and Build System

Getting the Sources

From the official kernel website, kernel.org, download the tarball archive.

Or from the command line, for example:

$ wget https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.5.7.tar.xz


You can (and should) also check out the integrity of the tarball with the pgp signature:

$ wget https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.5.7.tar.sign
$ unxz linux-6.5.7.tar.xz    # the signature is done on the decompressed tarball
$ gpg --verify linux-6.5.7.tar.sign linux-6.5.7.tar


This will probably fail because you don’t have the public keys of the maintainers that generated the tarball.

Get them from the kernel’s key server (documentation):

$ gpg2 --locate-keys torvalds@kernel.org gregkh@kernel.org


You can also clone Linus Torvalds’ git tree:

$ git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/

Configuring the Kernel

The kernel configuration describes which features will be enabled in the built binary, and can also change their behaviour.
It also describes whether features should be built into the binary or compiled as modules.

By default, the Makefile-based build system uses the file .config located at the root of the kernel sources.


You can generate initial configurations with the following commands (non-exhaustive list):

$ make allnoconfig     # minimal, everything that can be disabled is disabled
$ make defconfig       # default configuration for the local architecture
$ make localmodconfig  # configuration based on the current state of the machine (plugged devices, etc.) and builds them as modules
$ make localyesconfig  # same but everything is built-in
$ make oldconfig       # keeps the values of the current .config and asks for the new options


Or you can copy the configuration of your running kernel:

$ cp /boot/config-$(uname -r)* .config  # available on some distros
$ zcat /proc/config.gz > .config        # available if CONFIG_IKCONFIG_PROC is enabled


If you need to know more about your hardware to generate your config, check out these commands:

  • lshw, lscpu, lspci, lsusb, …
  • dmidecode
  • hdparm
  • cat /proc/cpuinfo, cat /proc/meminfo, …
  • dmesg

Building the Kernel

The kernel build system is based on Makefiles.

Just run make to compile it.

$ time make
real 80m15.486s
user 74m54.606s
sys 5m32.300s

Compilation can take a long time, so do it in parallel!

$ make -j $(nproc)


The compilation produces the following important files:

  • vmlinux: the raw Linux kernel image. This ELF is used for debugging and profiling;
  • System.map: symbol table of the kernel. Not necessary to run the kernel, used for debugging;
  • arch/<arch>/boot/bzImage: compressed image of the kernel. This is the one that will be loaded and used.
$ du -sh vmlinux arch/x86/boot/bzImage
49M vmlinux
13M arch/x86/boot/bzImage


You can get some info on the image with the file command:

$ file arch/x86/boot/bzImage
arch/x86/boot/bzImage: Linux kernel x86 boot executable bzImage, version 6.5.7-lkp (redha@wano) \
#2 SMP PREEMPT_DYNAMIC Thu Oct 19 16:05:37 CEST 2023, RO-rootFS, swap_dev 0XC, Normal VGA

Installing a Kernel

Two main steps:

  1. Install the kernel image, the symbol map and the initrd
$ make install

This will copy the image and symbol map in /boot, and generate the initramfs.


  2. Install the modules
$ make modules_install

This will copy the modules (.ko files) into /lib/modules/<version>.

About the Symbol Map

The symbol map (System.map) provides the list of the symbols available in this kernel, their address and type.

$ head System.map
0000000000000000 D __per_cpu_start
0000000000000000 D fixed_percpu_data
0000000000001000 D cpu_debug_store
0000000000002000 D irq_stack_backing_store
0000000000006000 D cpu_tss_rw
000000000000b000 D gdt_page
000000000000c000 d exception_stacks
0000000000014000 d entry_stack_storage
0000000000015000 D espfix_waddr
0000000000015008 D espfix_stack


Check the manpage of the nm program for an explanation of the types.

As a rule of thumb (mostly true), lowercase means local scope while uppercase means global scope (i.e., exported symbol).

Linux Init Process

Back to the Lab

In Lab 2, task 3, you were asked to replace the init binary by a hello_world program, which led to a kernel panic. Why?


Roles of init

  • Initialise the system: start daemons/services, manage user sessions, mount partitions, etc.
  • Ancestor of all the processes on the system
  • Adopt all orphaned processes


Characteristics of init

  • Has PID 1
  • Cannot die


Demo time!

Linux Kernel Modules

Development Infrastructure

Multiple development methods:

  • Local setup
    Use your usual development software, compile and run your new modules/kernel on your machine.
    Pros: easy, quick
    Cons: if a crash occurs, you can do nothing
  • Remote machine
    If you have access to a separate testing machine, you can do your development on your machine and test remotely to avoid the crash issues. This machine is usually hooked through network/serial to the development machine to allow remote debugging and monitoring.
    Pros: good development setup, robust to crashes
    Cons: not always possible to have a second machine
  • Virtual machine
    Develop on your local setup and deploy on a virtual machine. This replaces the previous method well, while being faster to use, and doesn’t require a second machine.
    Pros: good development setup, robust to crashes, single machine
    Cons: doesn’t always perfectly capture real hardware, might be slow depending on the host and guest machines


In this course, we will use the last method with QEMU as a hypervisor.

Kernel Modules Interface

A module is a library dynamically loaded into the kernel. It triggers a call to a registered function when loaded and when unloaded.

The kernel provides two macros to register these functions: module_init() and module_exit().


static int my_init(void)
{
      /* ... */
      return 0;
}
module_init(my_init);
static void my_exit(void)
{
      /* ... */
}
module_exit(my_exit);


For these to work, you will need some header files included:

// contains the module API
#include <linux/module.h>

// contains the init and exit macros
#include <linux/init.h>

// if needed: base types, functions, macros...
#include <linux/kernel.h>

Module Information

You should also add some information about your module with some pre-defined macros, usually at the beginning of the file:

MODULE_DESCRIPTION("Hello world module");
MODULE_AUTHOR("Redha Gouicem, RWTH");
MODULE_LICENSE("GPL");


These can be checked on any module:

$ modinfo hello.ko
filename: hello.ko
description: Hello World module
author: Redha Gouicem, RWTH
license: GPL
vermagic: 6.5.7-ARCH 686 gcc-13.2.1
depends:


Warning

The license is not only informative. It is also used to check if you are allowed to use some symbols in the kernel.

Example: Hello World

#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>

MODULE_DESCRIPTION("Hello world module");
MODULE_AUTHOR("Redha Gouicem, RWTH");
MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
      pr_info("Hello World!\n");

      return 0;
}
module_init(hello_init);

static void __exit hello_exit(void)
{
      pr_info("Goodbye World...\n");
}
module_exit(hello_exit);


Annotations

The __init and __exit annotations are used to help the compiler optimise memory usage.
When a module is statically built into the kernel binary, functions tagged with these annotations are placed in specific segments:

  • .init.text, which is freed after the kernel has booted
  • .exit.text, which is never loaded in memory

Building a Module

The running kernel is deployed with a generic Makefile located in /lib/modules/$(uname -r)/build.

You can use it from anywhere like this:

$ make -C /lib/modules/$(uname -r)/build M=$PWD

This will generate your module as a .ko file (kernel object).


You can also use a custom Makefile like this one as a wrapper:

ifneq ($(KERNELRELEASE),)

  obj-m += hello.o

else

  KERNELDIR_LKP ?= /lib/modules/$(shell uname -r)/build
  PWD := $(shell pwd)

all:
        make -C $(KERNELDIR_LKP) M=$(PWD) modules

clean:
        make -C $(KERNELDIR_LKP) M=$(PWD) clean

endif

Loading/Unloading a Module

Loading a module can be done with insmod:

$ insmod hello.ko
$ dmesg
[177814.017370] Hello World!


Unloading a module can be done with rmmod:

$ rmmod hello
$ dmesg
[177919.956567] Goodbye World...

Module Parameters

#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

static char *month = "January";
module_param(month, charp, 0660);

static int day = 1;
module_param(day, int, 0000);

static int __init hello_init(void)
{
      pr_info("Hello ! We are on %d %s\n", day, month);
      return 0;
}
module_init(hello_init);

static void __exit hello_exit(void)
{
      pr_info("Goodbye, cruel world\n");
}
module_exit(hello_exit);


With default values:

$ insmod hello.ko
$ dmesg
[180525.067016] Hello ! We are on 1 January

With parameters:

$ insmod hello.ko month=December day=31
$ dmesg
[181086.216097] Hello ! We are on 31 December

Kernel Dynamic Linker

Like shared libraries, modules are dynamically loaded: they only have access to symbols explicitly exported to them!
By default, they have access to absolutely no variable or function from the kernel, even if they are not static!

Two macros allow to explicitly export symbols to modules:

  • EXPORT_SYMBOL(s) makes the symbol s visible to all loaded modules
  • EXPORT_SYMBOL_GPL(s) makes the symbol s visible to all modules with a license compatible with GPL (according to their MODULE_LICENSE)

Example: using the pm_power_off() function exported in arch/x86/kernel/reboot.c and available on my system:

$ grep pm_power_off /lib/modules/$(uname -r)/build/System.map
ffffffff810ed2f0 t legacy_pm_power_off
ffffffff8274d7d8 r __ksymtab_pm_power_off
ffffffff838a47f8 B pm_power_off


#include <linux/module.h>
#include <linux/kernel.h>

MODULE_DESCRIPTION("Power off module");
MODULE_LICENSE("GPL");

static int __init devil_init(void)
{
      pr_info("The end is nigh...\n");
      if (pm_power_off)
            pm_power_off();

      return 0;
}
module_init(devil_init);

Module Dependencies

If a module X uses at least one symbol from module Y, then X depends on Y.

Dependencies are not explicitly defined: they are automatically inferred during the kernel/module compilation.
You can find the list of dependencies in the file /lib/modules/<version>/modules.dep.

This file is generated by the depmod program, which checks which symbols are used by a module, and which modules provide these symbols.

You can also check the dependencies of a module with modinfo.


Automated dependency solving

Obviously, modules must be inserted in the proper order: if X depends on Y, Y needs to be inserted before X.

If you are using modprobe, it will automatically insert dependencies first.

This is also true for unloading modules (in the reverse order).

Contributing to the Kernel

Patch or Module?

When developing something in the kernel, the first design choice is “how?”


You have two choices:

  • Implement your code in the kernel through a patch
    Your code is then statically built-in the kernel binary
  • Implement your code in a module
    Your code can be dynamically loaded by the kernel at run time


Whenever possible, modules are the best choice, as they have a better chance of being merged into the mainline.

Modules: Pros and Cons

While using modules should be your first choice, it also has some drawbacks depending on what you are doing.


Pros:

  • Easier to develop
  • Easier to distribute
  • Avoids overloading the kernel
  • Lower chances of conflicts


Cons:

  • Internal kernel structures cannot be modified
    e.g., adding a field to the file descriptor structure
  • The behaviour of an existing kernel function cannot be replaced/changed
    e.g., changing the page frame allocator code

Patching the Kernel

If you need to modify the kernel and distribute your changes, you most likely will use patches.

A patch is the result of the diff command applied on the original files and your modified version.
It contains all the data for the patch to automatically apply the changes.


diff: Compares files line by line

  • Shows the added/modified/removed lines
  • Can ignore tabs/spaces
  • Can compare whole directory trees (-r)


patch: Apply changes to existing files

  • Can apply a patch on a file passed as an argument or stdin
  • Can apply a patch on a directory tree

Creating and Applying a Patch

Creating a patch for the kernel tree:

  • -r to enable recursive patch
  • -u to use the unified diff format (more compact and easier to read)
unxz linux-6.5.7.tar.xz
cp -r linux-6.5.7 linux-6.5.7-orig
cd linux-6.5.7
emacs kernel/sched/fair.c
emacs kernel/sched/sched.h
cd ..
diff -r -u linux-6.5.7-orig linux-6.5.7 > new_sched.patch
xz new_sched.patch


Applying a patch on the kernel tree:

  • -p 1 to omit the first level in all paths
  • --dry-run to only simulate the patch (for testing purposes)
unxz linux-6.5.7.tar.xz
cd linux-6.5.7
zcat new_sched.patch | patch -p 1 --dry-run

Submitting Your Work to the Kernel Community

When you think your code is ready for review by the kernel maintainer, you need to send it to them!

Note: This is just an overview of the process!


Ready your code for public eyes

  • Test your code first to avoid silly bugs (and being made fun of publicly)
  • Make sure that your code is compliant with the kernel coding style and is understandable (comments?)
  • Use tools to help you, e.g., checkpatch, clang-format


Prepare your patch(es)

  • Choose against which version of the kernel to generate your patches, usually the current mainline from Linus’ git tree (a -stable or -rc release)
  • Split your submission into a set of patches/commits, each being logically independent, and able to be built and run
  • You can manually create the patches with diff or use git format-patch to automatically generate patches from your commits (assuming you used git in the first place)

Submitting Your Work to the Kernel Community (2)

Format your patch series for emailing

  • Each patch email should have a one-line short description, followed by a blank line and a multi-line long description, then a list of tag lines specifying who co-authored and reviewed the patch, etc. Finally, the patch itself is appended.
  • All of this can be fairly well automated with git format-patch


Send your patch series to the mailing list

  • Find out which mailing list and maintainers the email should be sent to, using the scripts/get_maintainer.pl script on your patch. You should also CC anyone who might need to see this, e.g., because they work on something similar
  • You might need to write a first summary email (a cover letter) for your patch series
  • This can be fairly well automated with git send-email
  • The emails need to be written in plain text, no HTML!


Don’t rely on these slides only!

Go check the full version in the kernel documentation, starting with the kernel development process and patch submission process.

You can check the mailing list online at https://lore.kernel.org.