Under the hood

This (work-in-progress) section will help you navigate the design and implementation of the EVL core. It is written from a developer's perspective as a practical example of implementing a companion core living in the Linux kernel, with emphasis on how the Dovetail interface is leveraged for this purpose. This information is intended for anyone interested in, or simply curious about, the “other path to Linux real-time”, whether for developing your own Linux-based dual kernel system, contributing to EVL, or for educational purposes.

How do applications request services from the EVL core?

An EVL application interacts with so-called EVL elements. Each element usually exports a mix of in-band and out-of-band file operations implemented by the EVL core (in a few cases, only one side is implemented). Therefore, an EVL application sees every core element as a file, and uses a regular file descriptor to interact with it, either directly or indirectly by calling routines from the standard glibc or libevl whenever there is a real-time requirement.

To sum up, an application issues file I/O requests on element devices to obtain services from the EVL core. The system calls involved in the interface between an application and the EVL core are exclusively open(2), close(2), read(2), write(2), ioctl(2), mmap(2) for the in-band side, and oob_ioctl(), oob_read(), oob_write() for the out-of-band side.

Where are the EVL core services implemented?

The EVL core can be described as a multi-device driver, each device type representing an EVL element. From the standpoint of the Linux device driver model, the EVL core is composed of a set of character-based device drivers, one per element type. The core is implemented under the kernel/evl hierarchy.

Element creation

The basic operation every element driver implements is cloning, which creates a new instance of the element type upon request from the application. For instance, a thread element is created each time evl_attach_self() is invoked from libevl. To do so, create_evl_element() sends a request to the special clone device exported by the core at /dev/evl/thread/clone. Upon return, the new thread element is live, and can be accessed by opening the newly created device at /dev/evl/thread/&lt;name&gt;, where &lt;name&gt; is the thread name passed to evl_attach_self(). From then on, the file descriptor obtained this way can be used to issue file I/O requests for core services to that particular element instance.

The same logic applies to all other types of named elements, such as proxies, cross-buffers or monitors, which underlie mutexes, events, semaphores and flags.

Element factory

In order to avoid code duplication, the EVL core implements a so-called element factory. The factory refers to EVL class descriptors of type struct evl_factory, each of which describes how a particular element type should be handled by the generic factory code.

The factory performs the following tasks:

  • it populates the initial device hierarchy under /dev/evl so that applications can issue requests to the EVL core. The main aspect of this task is to register a Linux device class for each element type, creating the related clone device.

  • it implements the generic portion of the ioctl(EVL_IOC_CLONE) request, eventually calling the proper type-specific handler, such as thread_factory_build() for threads, monitor_factory_build() for monitors and so on.

  • it maintains a reference count on every element instantiated in the system, so as to automatically remove elements when they have no more referrers. Typically, closing the last file descriptor referring to the file underlying an element would cause such removal (unless some kernel code is still withholding references to the same element).

Unlike other elements, a thread may exist in the absence of any file reference. Disposal still happens automatically when the thread exits or voluntarily detaches by calling evl_detach_self().
