EVL Spinlock

The EVL core recognizes two kinds of spinlocks for serializing code running on the out-of-band stage:

  • hard spinlocks which exclude all other accesses from any CPU, including from the current one, by masking interrupts. These basic spinlocks are implemented by Dovetail. You can use them in the following cases:

    • you need to guard access to some resource from preemption by any out-of-band IRQ handler, so that the current out-of-band thread can access it safely. In this case, disabling interrupts in the CPU temporarily is required to achieve such protection.

    • some in-band code wants to guard against any preemption from any stage (in-band or out-of-band) on the current CPU. In this case too, disabling interrupts in the CPU is the only way to prevent the out-of-band scheduler from preempting the current in-band code, by locking out the out-of-band interrupts which might activate it in the first place.

Keep in mind that once Dovetail is enabled in the kernel, interrupt events are only virtually disabled for the in-band stage, so that out-of-band interrupts can still preempt the in-band code without delay. For this reason, masking interrupts directly in the CPU is the only way to guard against preemption by out-of-band events.

  • EVL-specific spinlocks which exclude all other accesses from any CPU, including from the current one, by disabling preemption for the current thread in the EVL core. Such a spinlock is useful when only EVL threads running out-of-band may compete for the lock, excluding out-of-band IRQ handlers. In this case, disabling preemption before attempting to grab the lock may be substituted for disabling interrupts in the CPU. In other words, if you can guarantee that only out-of-band EVL thread contexts can contend for lock X, then either of the two locking forms below guards the section safely, the second one without the extra cost of masking out-of-band IRQs:

Full serialization via raw interrupt masking in the CPU

    hard_spinlock_t X;
    ...
    unsigned long flags;
    raw_spin_lock_irqsave(&X, flags);
    (guarded section)
    raw_spin_unlock_irqrestore(&X, flags);

CAUTION: This code assumes no competition with out-of-band IRQ handlers!

    evl_spinlock_t X;
    ...
    evl_spin_lock(&X);     /* disables EVL preemption */
    (guarded section)
    evl_spin_unlock(&X);   /* re-enables EVL preemption, may reschedule */

ADDITIONAL NOTES

  • Calling evl_schedule() while holding a hard spinlock is invalid. EVL-specific spinlocks can be substituted when you need to traverse code paths which might invoke the EVL rescheduling procedure while holding a lock: in this case, the EVL core detects that preemption is disabled and postpones the effect of evl_schedule() until the (outermost) EVL lock is eventually dropped, at which point the rescheduling happens (see the sketch after these notes).

  • Picking the right type of lock is a trade-off between interrupt latency and scheduling latency: a hard spinlock delays out-of-band IRQ handlers for as long as it is held, whereas an EVL spinlock lets them run, but any rescheduling request such a handler might issue has to wait until the interrupted thread eventually drops the lock.

  • Since an EVL spinlock is a hard lock at its core, you may also use it to serialize access to data from the in-band context. However, such in-band code is subject to preemption by the in-band scheduler, which might impose a severe priority inversion on out-of-band threads spinning on the same lock from other CPU(s). For this reason, any attempt to grab an EVL lock from the in-band stage without stalling that stage or disabling hard irqs first is considered a bug.

  • Just like the in-band preempt_count(), the EVL preemption count which guards against unwanted rescheduling from the core allows evl_spinlock_t locks to be nested safely.
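
As a minimal sketch of the deferred rescheduling behavior mentioned in the first note above (foo_lock and foo_update() are made-up names for the sake of the example):

    static DEFINE_EVL_SPINLOCK(foo_lock);

    void foo_update(void)
    {
            evl_spin_lock(&foo_lock);    /* EVL preemption is disabled */
            /* ... update some shared state, possibly making a thread runnable ... */
            evl_schedule();              /* no effect yet: preemption is disabled */
            evl_spin_unlock(&foo_lock);  /* preemption re-enabled, the pending
                                            rescheduling takes place here */
    }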

EVL spinlock services

evl_spin_lock_init(lock)

A macro which initializes an EVL spinlock (i.e. with EVL preemption tracking).

  • lock

    A spinlock structure of type evl_spinlock_t.
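
For instance, an EVL spinlock embedded into some driver-private structure could be initialized at runtime this way (struct foo_state and foo_setup() are made-up names for the sake of the example):

    struct foo_state {
            evl_spinlock_t lock;
            int count;
    };

    static void foo_setup(struct foo_state *p)
    {
            evl_spin_lock_init(&p->lock);
            p->count = 0;
    }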


evl_spin_lock(lock)

A macro which disables EVL preemption for the current thread, then locks the spinlock, busy waiting until access is granted.

  • lock

    A spinlock structure of type evl_spinlock_t which must have been previously initialized by a call to evl_spin_lock_init().


evl_spin_lock_irqsave(lock, flags)

A macro which (hard) disables interrupts in the current CPU, then locks the spinlock, busy waiting until access is granted. Interrupts remain disabled while waiting for access, and until the lock is dropped at the end of the critical section. The lock is dropped by a converse call to evl_spin_unlock_irqrestore().

  • lock

    A spinlock structure of type evl_spinlock_t which must have been previously initialized by a call to evl_spin_lock_init().

  • flags

    An unsigned long variable which receives the saved interrupt state. This value is required by evl_spin_unlock_irqrestore() to drop the lock and restore the interrupt state.


evl_spin_unlock(lock)

A macro which unlocks the spinlock, re-enabling EVL preemption for the current thread, which may cause a rescheduling.

  • lock

    A spinlock structure of type evl_spinlock_t.


evl_spin_unlock_irqrestore(lock, flags)

A macro which unlocks the spinlock, then (hard) restores the interrupt state in the current CPU.

  • lock

    A spinlock structure of type evl_spinlock_t.

  • flags

    The interrupt state to restore as received from evl_spin_lock_irqsave().
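
For reference, the irq-safe form mirrors the hard lock example given earlier, pairing evl_spin_lock_irqsave() with evl_spin_unlock_irqrestore():

    evl_spinlock_t X;
    ...
    unsigned long flags;

    evl_spin_lock_irqsave(&X, flags);   /* hard irqs masked while the lock is held */
    (guarded section)
    evl_spin_unlock_irqrestore(&X, flags);

Since this form hard-disables interrupts before grabbing the lock, it also fits the requirement stated in the notes above for taking an EVL lock from the in-band stage.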


DEFINE_EVL_SPINLOCK(name)

A macro which expands to a C statement defining an initialized EVL spinlock.

  • name

    The C variable name of the spinlock to define.

    /*
     * The following expands to:
     * static evl_spinlock_t foo = __EVL_SPIN_LOCK_INITIALIZER(foo);
     */
    static DEFINE_EVL_SPINLOCK(foo);
    
