Event-Driven Multicore Processing

So, what gets us as close as possible to the ideal system?

A simpler software solution

The July 2006 issue of Embedded Systems Design provides a software solution close to our ideal. The article "Build a Super Simple Tasker" by Miro Samek and Robert Ward presents a simple software scheduler. A few major points from the article are listed here, but it is recommended reading for anyone planning to actually build the system. "Practical Statecharts in C/C++" by Miro Samek also has tons of useful information on just this sort of system.

  • The SST takes advantage of the fact that most embedded systems are event-driven. The forever loop is a bad fit for modeling the event-driven nature of embedded systems.
  • An SST task cannot be an endless loop. An event sets off a chain of function calls; once that single event has been processed, the chain returns and execution stops, and the SST goes back to an idle state.
  • All tasks are regular C functions. A task runs to completion and returns; as stated above, there can be no infinite loops. The SST does not preempt tasks, except during interrupt handling.
  • A single stack keeps track of the execution contexts. The SST does not manage a separate stack for each task; the one stack supported by the processor hardware is used for all tasks running on that processor.
  • All tasks have a priority, and the priorities are uniquely assigned. The lowest-priority task is the idle loop; it is the only infinite loop in the system.
  • All inter-process communication is handled by event queues. Events are queued until their task is the highest-priority ready task in the system. No globals, flags, pipes, or other mechanisms are provided.
  • The SST must ensure that at all times the CPU executes the highest-priority task that is ready to run; with more than one CPU, the highest-priority ready tasks. (A minimal sketch of this task model follows the list.)
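
To make the task model concrete, here is a minimal single-processor sketch of the idea, assuming priority 0 is the idle loop and higher numbers are more important. This is not the SST source from the article; the names (post, schedule, task_table) and the queue sizes are my own.

    #include <stdint.h>

    #define NUM_TASKS   4            /* priorities 1..NUM_TASKS; 0 is the idle loop */
    #define QUEUE_DEPTH 8

    typedef struct { uint8_t sig; } Event;
    typedef void (*TaskFn)(Event e); /* a task is an ordinary C function that
                                        runs to completion and returns */

    typedef struct {
        Event   buf[QUEUE_DEPTH];
        uint8_t head, tail, count;
    } EventQueue;

    static TaskFn     task_table[NUM_TASKS + 1]; /* task for each priority level */
    static EventQueue queues[NUM_TASKS + 1];     /* one event queue per task     */
    static uint8_t    current_prio;              /* priority of the running task */

    /* Run the highest-priority task that has a pending event, repeating until
       nothing above the preempted priority is ready. All of it happens on the
       one stack the processor already has. */
    static void schedule(void)
    {
        for (;;) {
            uint8_t p = NUM_TASKS;
            while (p > current_prio && queues[p].count == 0)
                --p;
            if (p <= current_prio)
                return;                      /* nothing higher priority is ready */

            Event e = queues[p].buf[queues[p].head];
            queues[p].head = (uint8_t)((queues[p].head + 1) % QUEUE_DEPTH);
            queues[p].count--;

            uint8_t saved = current_prio;
            current_prio  = p;
            task_table[p](e);                /* plain C call, no context switch */
            current_prio  = saved;
        }
    }

    /* Posting an event is the only inter-task communication mechanism. Real
       code must lock out interrupts around the queue update and handle a full
       queue; both are omitted to keep the sketch short. */
    void post(uint8_t prio, Event e)
    {
        queues[prio].buf[queues[prio].tail] = e;
        queues[prio].tail = (uint8_t)((queues[prio].tail + 1) % QUEUE_DEPTH);
        queues[prio].count++;
        schedule();
    }

The important property is that task_table[p](e) is an ordinary function call: a higher-priority task simply nests on top of the current one, on the same stack, which is why one stack per processor is enough.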

The SST is simple and easy to port. Because passing event messages is the only way tasks communicate, there is a single inter-process communication mechanism to handle when multiple processors are introduced. It also provides enough of an operating system that the changes for a multiprocessor system can be encapsulated: the ‘user’ tasks can be blissfully unaware of whether there are one, two, or seven processors in the system. For these reasons, and more that will become clear as we continue, it is ideally suited to a multiple-processor system.

Does all this scale?

From both the software and the hardware point of view, the system should also be scalable. If the software knows how many processors there are and uses that information for scheduling or performance, the system does not scale very well. The best possible system would let processors be added without any change to the software: the software can be tested on a single-processor system, and then, if greater performance is required, a second processor can be added.

If you have one processor per task, things are great. Each task can be tied to one processor. That is a very nice way to set up the system, but it does not scale, because it is not flexible: how do you add or remove tasks? Either that, or you have 100 processors and fewer than 100 tasks. Again, each task can be tied to a processor. If you happen to go over 100 tasks, well, then you are just screwed.

And, once again, we are confronted with how to handle interrupts.

Flexible hardware

The problem with the hardware is not finding the right processor. The SST can be easily ported to any processor that uses a stack. The hardware needs to be modified to support multiple processors. The exact changes are not yet apparent.

What about the processor stack?

In the description of the SST above, it was stated that there is only one stack. Two microcontrollers cannot share one stack! Each processor gets its own stack, and nothing in the processor has changed. Compare this to most operating systems, which have one stack per software task or process. The big advantage of the SST, not having to manage memory space for multiple stacks, has not been lost.

Wait, what about interrupts?

Let's suppose there is only one interrupt output from the interrupt controller, but there are two interrupt inputs, one on each processor. What do we do with the output of the interrupt controller?

The one output could be tied to both inputs. That is obviously a very bad idea: both processors would take the interrupt, and then some other mechanism would be needed to figure out exactly who does the interrupt handling. That is a lot of overhead that is just not needed.

One processor could be dedicated to handling all interrupts. But, what if that processor is bogged down with interrupts and the other processor is idle? The system is not sharing the load equally.

We should insert some new hardware between the interrupt controller and the processors. That custom bit of hardware can make sure the interrupt load is shared correctly. What is correct?

It could send one interrupt to processor 1, then one to processor 2, and so on. The number of interrupts per processor would be equal, but is that ideal? Not really. If processor 1 is doing the ‘life support’ calculation, and processor 2 is doing the idle loop, it would be better to interrupt processor 2.

So, really, what we want is interrupt-routing hardware that always interrupts the processor running the lowest-priority task.
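
As a sketch of the decision that routing hardware would have to make, here it is written in C for clarity. It assumes, and this is my assumption rather than anything from the SST article, that each processor exposes the priority of the task it is currently running, with 0 meaning the idle loop.

    #include <stdint.h>

    #define NUM_CPUS 2

    /* Written by each processor whenever it starts or finishes a task;
       0 means the processor is sitting in its idle loop. */
    volatile uint8_t running_prio[NUM_CPUS];

    /* Decide which processor should take the next interrupt: the one that
       is currently doing the least important work. */
    uint8_t pick_cpu_for_interrupt(void)
    {
        uint8_t best = 0;
        for (uint8_t cpu = 1; cpu < NUM_CPUS; ++cpu) {
            if (running_prio[cpu] < running_prio[best])
                best = cpu;    /* lower priority number = less important work */
        }
        return best;           /* assert this processor's interrupt input */
    }

With the priority numbering used earlier, the idle loop reports 0, so an otherwise idle processor always wins, which is exactly the behavior we wanted in the life-support example.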

We can go through that next time.

Update…

I am updating so this shows up again after the virtual conference post.
