Podcast explores real-time features of Linux 2.6
Aug 3, 2007 — by LinuxDevices Staff — from the LinuxDevices Archive

TimeSys has produced another entertaining podcast on the always lively topic of real-time capabilities in the 2.6 kernel. Having defined a number of real-time-related terms in last week's episode, co-hosts Gene Sally and Maciej Halasz explore ways developers can avoid or control interrupt, scheduler, and kernel latency.
After some further definitions of terms such as “spin-lock” and “mutex,” the discussion gets underway with a rundown of various sources of latency. Those discussed include:
- Shared interrupts, in which multiple ISRs (interrupt service routines) are assigned to the same line (e.g., IRQ7). Halasz says this leads to ISRs being executed in “cascades,” so that “by the time the system gets to the ISR that's important to you, there are some other ISRs that get executed first.” He notes that cascading is sometimes unavoidable, due to a limited number of hardware interrupt lines. “It comes down to figuring out which interrupt lines are important, and mapping ISRs accordingly,” he said.
- Badly written device drivers that simply disable interrupts rather than trying to fairly schedule CPU time — Sally calls this the “sledgehammer method,” noting, “Disabling interrupts is the easy way, but you're going to suffer tremendous scheduling penalties.” (A driver sketch after this list illustrates the gentler alternative.)
- Forward-ported device drivers from 2.4 — Sally says that “In 2.4, even the best device drivers didn't care about areas that are important to real-time performance.” Halasz adds that macros aimed at helping authors port drivers “don't always do the right thing.”
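The podcast stays at the level of description, but the first two items translate fairly directly into driver practice. The kernel-style C sketch below is only an illustration under assumed names (the my_device structure, the register offset, and the "my_device" label are hypothetical): a handler registered on a shared line with IRQF_SHARED returns IRQ_NONE quickly when its hardware did not raise the interrupt, and protects shared state inside a brief spin_lock_irqsave() window rather than disabling interrupts wholesale.

```c
#include <linux/interrupt.h>
#include <linux/spinlock.h>
#include <linux/io.h>

/* Hypothetical device context and status register layout. */
struct my_device {
	void __iomem *regs;
	spinlock_t    lock;
};
#define MY_STATUS_REG  0x04
#define MY_IRQ_PENDING 0x01

/* ISR on a shared line: bail out fast if our device did not raise the
 * interrupt, so the other handlers in the cascade are reached sooner. */
static irqreturn_t my_isr(int irq, void *dev_id)
{
	struct my_device *dev = dev_id;
	unsigned long flags;
	u32 status;

	status = readl(dev->regs + MY_STATUS_REG);
	if (!(status & MY_IRQ_PENDING))
		return IRQ_NONE;

	/* Keep interrupts off only for this short critical section,
	 * instead of disabling them for long stretches. */
	spin_lock_irqsave(&dev->lock, flags);
	/* ... acknowledge the device and update driver state ... */
	spin_unlock_irqrestore(&dev->lock, flags);

	return IRQ_HANDLED;
}

/* Registration: IRQF_SHARED announces that other handlers may share the
 * line; dev_id lets both the ISR and free_irq() identify this device. */
static int my_setup_shared_irq(struct my_device *dev, int irq)
{
	return request_irq(irq, my_isr, IRQF_SHARED, "my_device", dev);
}
```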
Next, the duo looks at real-time improvements in the 2.6 kernel, starting with the O(1) scheduler, which was among the first changes to appear in the earliest 2.6 kernels. Sally explains that “O(1)” as a term refers to “constant time.” He notes, “As you add or remove tasks, or switch tasks, it takes the same amount of time, regardless of how many tasks are running. This makes the kernel more predictable.”
Halasz explains in more detail, “To achieve a constant interval [between task switches], [the scheduler] grabs the highest priority task from the 'active' area, executes for the time slice it has, and moves it to 'expired.' When all tasks in 'active' have been executed, it swaps 'active' with 'expired,' and starts all over again. Regardless of how many tasks are scheduled, the interval is constant.”
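The podcast does not get into data structures, but the constant-time behavior comes from keeping a run list per priority level plus a fixed-size bitmap, so the highest-priority runnable task is found with a find-first-bit scan rather than by walking all tasks. The sketch below is a deliberately simplified C illustration of that active/expired idea, not the kernel's actual implementation (2.6 uses 140 priority levels and considerably more bookkeeping).

```c
#include <stddef.h>

#define NR_PRIO  140                      /* 0-99 real-time, 100-139 normal */
#define NR_WORDS ((NR_PRIO + 63) / 64)

struct task {
	struct task *next;
	int prio;                         /* 0 = highest priority */
};

struct prio_array {
	unsigned long long bitmap[NR_WORDS];  /* bit set => priority has tasks */
	struct task *queue[NR_PRIO];          /* one run list per priority */
};

struct runqueue {
	struct prio_array arrays[2];
	struct prio_array *active;
	struct prio_array *expired;
};

/* Add a task to an array and mark its priority level as non-empty. */
static void enqueue(struct prio_array *a, struct task *t)
{
	t->next = a->queue[t->prio];
	a->queue[t->prio] = t;
	a->bitmap[t->prio / 64] |= 1ULL << (t->prio % 64);
}

/* Constant-time pick: scan a fixed-size bitmap for the first set bit,
 * independent of how many tasks happen to be runnable. */
static struct task *dequeue_highest(struct prio_array *a)
{
	for (int w = 0; w < NR_WORDS; w++) {
		if (a->bitmap[w]) {
			int prio = w * 64 + __builtin_ctzll(a->bitmap[w]);
			struct task *t = a->queue[prio];

			a->queue[prio] = t->next;
			if (!a->queue[prio])
				a->bitmap[w] &= ~(1ULL << (prio % 64));
			return t;
		}
	}
	return NULL;
}

/* One scheduling decision: run the next task from 'active' (the caller
 * re-enqueues it on 'expired' when its timeslice runs out); when 'active'
 * drains completely, swap the two arrays and start over. */
static struct task *schedule_next(struct runqueue *rq)
{
	struct task *t = dequeue_highest(rq->active);

	if (!t) {
		struct prio_array *tmp = rq->active;

		rq->active = rq->expired;
		rq->expired = tmp;
		t = dequeue_highest(rq->active);
	}
	return t;
}
```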
Sally notes, “There are corner cases where there are too many things running, such that no one process gets enough time to get much done.”
He also observes that “inexperienced engineers sometimes create a bunch of threads they think they can schedule.” Halasz agrees, noting, “For real-time, reduce the number of threads so you have a better understanding of how your system behaves. Scheduling safety is difficult with a large number of threads. You want to minimize the number of high-priority threads.”
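That advice is as much about application design as about the kernel. As a hedged illustration (not from the podcast), the snippet below shows how a single time-critical thread can be given an explicit SCHED_FIFO priority through the POSIX pthread API; rt_worker and the priority value 50 are hypothetical, and creating SCHED_FIFO threads requires root or CAP_SYS_NICE. Keeping the count of such threads small makes it far easier to reason about which one runs when.

```c
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Hypothetical real-time worker; the actual work is application-specific. */
static void *rt_worker(void *arg)
{
	(void)arg;
	/* ... time-critical loop ... */
	return NULL;
}

int main(void)
{
	pthread_t tid;
	pthread_attr_t attr;
	struct sched_param sp = { .sched_priority = 50 };  /* mid-range RT priority */

	pthread_attr_init(&attr);
	/* Explicitly request the real-time FIFO policy instead of
	 * inheriting the creator's (usually SCHED_OTHER) policy. */
	pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
	pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
	pthread_attr_setschedparam(&attr, &sp);

	if (pthread_create(&tid, &attr, rt_worker, NULL) != 0) {
		perror("pthread_create (SCHED_FIFO needs elevated privileges)");
		return 1;
	}
	pthread_join(tid, NULL);
	return 0;
}
```

Build against the pthread library (for example, gcc rt_demo.c -o rt_demo -lpthread, where rt_demo is just a placeholder name).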
Next, the discussion moves to preemption, and additional scheduling points added to 2.6 to improve SMP (symmetric multiprocessing) performance. Sally notes that with multiple processors, you “really do have concurrent processing,” so you need to be able to “stop at any point, and do a context switch.” Such context switches can be instigated by voluntary preemption or through priority inheritance, the pair suggests.
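Priority inheritance, mentioned here, is also visible to applications through the POSIX mutex protocol attributes, which glibc implements on Linux. A minimal sketch, assuming a lock shared between a high-priority real-time thread and lower-priority workers (the shared_lock name is hypothetical):

```c
#include <pthread.h>

/* A mutex configured with the priority-inheritance protocol: while a
 * low-priority thread holds it, any higher-priority waiter temporarily
 * boosts the holder so it cannot be stalled by medium-priority work. */
static pthread_mutex_t shared_lock;

static int init_pi_mutex(void)
{
	pthread_mutexattr_t attr;
	int err;

	err = pthread_mutexattr_init(&attr);
	if (err)
		return err;
	err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
	if (!err)
		err = pthread_mutex_init(&shared_lock, &attr);
	pthread_mutexattr_destroy(&attr);
	return err;
}
```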
The lively half-hour ends with a discussion about “soft” ISRs, which offer developers a way to schedule interrupts as kernel threads. Halasz explains, “Let's say you have an interrupt that arrives at one of the lines fairly frequently. The real-time task relies on two other interrupts that are arriving. Moving the ISRs into kernel threads allows you to compete for the CPU, because you can raise the priority of the real-time tasks above the priority of interrupts that do not interest you. You can control what kind of latency your application will experience.”
Halasz is quick to add, “On the downside, as you move the ISRs into kernel threads, you increase the amount of time they take to execute,” due to scheduling overhead. This frustrates many engineers who “want both — they want all the packets to arrive as fast as possible, and also to execute the real-time application in a reliable way,” he said.
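When this episode aired, threaded ISRs were a feature of the real-time patch set; mainline kernels later gained a similar split with request_threaded_irq(), merged in 2.6.30. The sketch below uses that later API with a hypothetical device, just to show the shape of the approach: a minimal hard handler that acknowledges the hardware, and a thread function that does the heavier work in a schedulable kernel thread (named irq/NN-name) whose priority can be adjusted relative to the real-time application.

```c
#include <linux/interrupt.h>

/* Hypothetical device context. */
struct my_device;

/* Hard (top-half) handler: runs in interrupt context, does the minimum
 * necessary and defers the rest to the handler thread. */
static irqreturn_t my_quick_check(int irq, void *dev_id)
{
	/* ... acknowledge the hardware ... */
	return IRQ_WAKE_THREAD;
}

/* Threaded handler: runs in a schedulable kernel thread, so its priority
 * can be raised or lowered relative to the real-time tasks it serves. */
static irqreturn_t my_thread_fn(int irq, void *dev_id)
{
	/* ... heavier processing that would otherwise add ISR latency ... */
	return IRQ_HANDLED;
}

static int my_setup_threaded_irq(struct my_device *dev, int irq)
{
	/* request_threaded_irq() entered mainline after this podcast aired;
	 * the RT patch set provided threaded ISRs earlier. */
	return request_threaded_irq(irq, my_quick_check, my_thread_fn,
				    0, "my_device", dev);
}
```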
With time running short, the two agree to wait until next week's episode to discuss two additional new real-time capabilities in Linux 2.6:
- High resolution timer support
- Mechanisms that allow you to track various latencies in the kernel by default, “so you can just enable them, and collect the data,” Halasz said
Also planned for next week's episode is a discussion of which real-time options are enabled by default.
You can listen to this informative LinuxLink Radio episode in MP3 or OGG formats here (MP3) and here (OGG). The complete list of LinuxLink Radio podcasts is here.
This article was originally published on LinuxDevices.com and has been donated to the open source community by QuinStreet Inc. Please visit LinuxToday.com for up-to-date news and articles about Linux and open source.