- Linux Scheduler
- Preemption
- Spinlocks and Semaphores
- System Clock: Of Time and Timers
- Summary
- Exercises
7.3 Spinlocks and Semaphores
When two or more processes require dedicated access to a shared resource, they need a way to ensure that only one of them executes a given section of code — the critical section — at a time. The basic form of locking in the Linux kernel is the spinlock.
Spinlocks take their name from the fact that they continuously loop, or spin, waiting to acquire a lock. Because spinlocks operate in this manner, it is imperative that code holding a spinlock never attempt to acquire that same lock a second time. Doing so results in deadlock.
Before operating on a spinlock, the spinlock_t structure must be initialized. This is done by calling spin_lock_init():
-----------------------------------------------------------------------
include/linux/spinlock.h
63 #define spin_lock_init(x) \
64   do { \
65     (x)->magic = SPINLOCK_MAGIC; \
66     (x)->lock = 0; \
67     (x)->babble = 5; \
68     (x)->module = __FILE__; \
69     (x)->owner = NULL; \
70     (x)->oline = 0; \
71   } while (0)
-----------------------------------------------------------------------
This macro sets the lock to "unlocked," or 0, on line 66 and initializes the other fields of the structure. The (x)->lock variable is the one we're concerned with here.
After a spinlock is initialized, it can be acquired by calling spin_lock() or spin_lock_irqsave(). The spin_lock_irqsave() function disables interrupts before locking, whereas spin_lock() does not. If you use spin_lock(), the process can be interrupted while holding the lock — which is dangerous if an interrupt handler might try to acquire the same lock.
To release a spinlock after executing the critical section of code, you need to call spin_unlock() or spin_unlock_irqrestore(). The spin_unlock_irqrestore() function restores the interrupt state to what it was when spin_lock_irqsave() was called.
Let’s examine the spin_lock_irqsave() and spin_unlock_irqrestore() calls:
-----------------------------------------------------------------------
include/linux/spinlock.h
258 #define spin_lock_irqsave(lock, flags) \
259   do { \
260     local_irq_save(flags); \
261     preempt_disable(); \
262     _raw_spin_lock_flags(lock, flags); \
263   } while (0)
...
321 #define spin_unlock_irqrestore(lock, flags) \
322   do { \
323     _raw_spin_unlock(lock); \
324     local_irq_restore(flags); \
325     preempt_enable(); \
326   } while (0)
-----------------------------------------------------------------------
Notice how preemption is disabled while the lock is held (line 261) and local interrupts are disabled on line 260. Together, these ensure that the critical section can be neither preempted nor interrupted on the local CPU. The IRQ flags saved on line 260 are restored on line 324.
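A typical calling pattern looks like the following kernel-style fragment. It is illustrative only and not a standalone program; my_lock and my_dev_count are hypothetical names standing in for whatever data a driver protects:

```c
#include <linux/spinlock.h>

static spinlock_t my_lock;        /* hypothetical lock protecting my_dev_count */
static int my_dev_count;

void my_init(void)
{
    spin_lock_init(&my_lock);     /* lock starts out "unlocked" (0) */
}

void my_update(void)
{
    unsigned long flags;

    spin_lock_irqsave(&my_lock, flags);       /* disable IRQs, take the lock */
    my_dev_count++;                           /* critical section: keep it short */
    spin_unlock_irqrestore(&my_lock, flags);  /* release, restore IRQ state */
}
```

Note that flags is passed by name, not by address — spin_lock_irqsave() is a macro, as the listing above shows, so it can write into the caller's variable directly.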
The drawback of spinlocks is that they busily loop, waiting for the lock to be freed. They are best used for critical sections of code that are fast to complete. For code sections that take time, it is better to use another Linux kernel locking utility: the semaphore.
Semaphores differ from spinlocks in that the task sleeps, rather than busy waits, when it attempts to obtain a contested resource. One of the main advantages is that a process holding a semaphore may safely block; semaphores are SMP and interrupt safe:
-----------------------------------------------------------------------
include/asm-i386/semaphore.h
44 struct semaphore {
45   atomic_t count;
46   int sleepers;
47   wait_queue_head_t wait;
48 #ifdef WAITQUEUE_DEBUG
49   long __magic;
50 #endif
51 };
-----------------------------------------------------------------------

-----------------------------------------------------------------------
include/asm-ppc/semaphore.h
24 struct semaphore {
25   /*
26    * Note that any negative value of count is equivalent to 0,
27    * but additionally indicates that some process(es) might be
28    * sleeping on 'wait'.
29    */
30   atomic_t count;
31   wait_queue_head_t wait;
32 #ifdef WAITQUEUE_DEBUG
33   long __magic;
34 #endif
35 };
-----------------------------------------------------------------------
Both architecture implementations provide a wait queue and a count. The count is the number of processes that can hold the semaphore at the same time; with semaphores, more than one process can be inside the critical section at once. If the count is initialized to 1, only one process can enter the critical section; a semaphore whose count is initialized to 1 is called a mutex.
Semaphores are initialized using sema_init() and are locked and unlocked by calling down() and up(), respectively. If a process calls down() on a locked semaphore, it blocks and ignores all signals sent to it. There also exists down_interruptible(), which returns 0 if the semaphore is obtained and -EINTR if the process was interrupted while blocking.
When a process calls down(), or down_interruptible(), the count field in the semaphore is decremented. If that field is less than 0, the process calling down() is blocked and added to the semaphore’s wait_queue. If the field is greater than or equal to 0, the process continues.
After executing the critical section of code, the process should call up() to signal that it has finished. Calling up() increments the count field in the semaphore and, if the resulting count is still less than or equal to 0 — meaning processes are sleeping on the semaphore — wakes a process waiting on the semaphore's wait_queue.