Department of Computer Science
St. Louis, MO 63130
(TEL) (314) 935-4215
(FAX) (314) 935-7302
This paper makes three contributions to the design and performance measurement of object-oriented real-time systems. First, it illustrates how to extend the CORBA Event Service so that it is suitable for real-time systems. These extensions support periodic rate-based event processing and efficient event filtering and correlation. Second, it describes how to develop object-oriented event dispatching and scheduling mechanisms that can provide real-time guarantees. Finally, the paper presents benchmarks that demonstrate the performance tradeoffs of alternative concurrent dispatching mechanisms for real-time Event Services.
Figure 1 illustrates the primary components in the OMG Reference Model architecture [Vinoski:97]:
|Figure 1. OMG Reference Model Architecture.|
At the heart of the OMG reference model is the Object Request Broker (ORB). ORBs allow clients to invoke operations on target object implementations without concern for where the object resides, what language the object is written in, the OS/hardware platform, or the type of communication protocols and networks used to interconnect distributed objects [Schmidt:97].
This paper focuses on the CORBA Event Service, which is defined within the CORBA Object Services (COS) component in Figure 1. The COS specification [OMG:95b] presents architectural models and interfaces that factor out common services for developing distributed applications.
Many distributed applications exchange asynchronous requests using event-based execution models [Rajkumar:95]. To support these common use-cases, the CORBA Event Service defines supplier and consumer participants. Suppliers generate events and consumers process events received from suppliers. In addition, the CORBA Event Service defines an Event Channel, which is a mediator [Gamma:95] that propagates events to consumers on behalf of suppliers.
The OMG Event Service model simplifies application software by allowing decoupled suppliers and consumers, asynchronous event delivery, and distributed group communication [Maffeis:95a]. In theory, this model seems to address many common needs of event-based, real-time applications. In practice, however, the standard CORBA Event Service specification lacks several important features required by real-time applications, such as real-time event dispatching and scheduling, periodic event processing, and efficient event filtering and correlation mechanisms.
To alleviate the limitations with the standard COS Event Service, we have developed a Real-time Event Service (RT Event Service) as part of the TAO project [Schmidt:97] at Washington University. TAO is a real-time ORB endsystem that provides end-to-end quality of service guarantees to applications by vertically integrating CORBA middleware with OS I/O subsystems, communication protocols, and network interfaces. Figure 2 illustrates the key architectural components in TAO and their relationship to the real-time Event Service.
|Figure 2. TAO: An ORB Endsystem Architecture for High-Performance, Real-time CORBA.|
TAO's RT Event Service augments the CORBA Event Service model by providing source-based and type-based filtering, event correlations, and real-time dispatching. To facilitate real-time scheduling (e.g., rate monotonic [LiuLayland:73]), TAO's RT Event Channels can be configured to support various strategies for priority-based event dispatching and preemption. This functionality is implemented using a real-time dispatching mechanism that coordinates with a system-wide real-time Scheduling Service.
TAO's RT Event Service runs on real-time OS platforms (e.g., VxWorks and Solaris 2.x) that provide real-time scheduling guarantees to application threads. Windows NT also provides real-time threads, though it lacks certain features required for hard real-time systems [Jensen:97].
However, QoS research at the network and OS layers has not necessarily addressed key requirements and usage characteristics of distributed object computing middleware. For instance, research on QoS for network infrastructure has focused largely on policies for allocating bandwidth on a per-connection basis. Likewise, research on real-time operating systems has focused largely on avoiding priority inversions and non-determinism in synchronization and scheduling mechanisms. In contrast, the programming model for developers of OO middleware focuses on invoking remote operations on distributed objects. Determining how to map the results from the network and OS layers to OO middleware is a major focus of our research.
There are several commercial CORBA-compliant Event Service implementations available from multiple vendors (such as Expersoft, Iona, Sun Microsystems, and Visigenic Software). Iona also sells OrbixTalk, which is a messaging technology based on IP multicast. Unfortunately, since the CORBA Event Service specification does not address issues critical for real-time applications, these implementations are not acceptable solutions for many domains.
The OMG has issued a request for proposals (RFP) on a new Notification Service [OMG:97a] that has generated several responses [OMG:97b]. The RFP specifies that a proposed Notification Service must be a superset of the COS Event Service with interfaces for the following features: event filtering, event delivery semantics (e.g., at least once, at most once, etc), security, event channel federations, and event delivery Quality of Service (QoS). The organizations contributing to this effort have done some excellent work in addressing many of the shortcomings of the CORBA Event Service [Schmidt:97b]. However, the OMG RFP documents do not address the implementation issues related to the Notification Service.
Although there has been research on formalisms for real-time objects [Satoh:95], relatively little published research on the design and performance of real-time OO systems exists. Our approach is based on emerging distributed object computing standards (i.e., CORBA) -- we focus on the design and performance of various strategies for implementing Quality of Service (QoS) in real-time ORBs [Schmidt:97].
The QuO project at BBN [QuO:97] has defined a model for communicating changes in QoS characteristics between applications, middleware, and the underlying endsystems and network. The QuO architecture differs from our work on RT Event Channels, however, since QuO does not provide hard real-time guarantees of ORB endsystem CPU scheduling. SunSoft [Aahlad:96] describes techniques for optimizing the performance of CORBA Event Service implementations. As with QuO, their focus also was not on guaranteeing CPU processing for events with hard real-time deadlines.
[Rajkumar:95] describes a real-time publisher/subscriber prototype developed at the CMU Software Engineering Institute (SEI). Their Publisher/Subscriber model is functionally similar to the COS Event Service, though it uses real-time threads to prevent priority inversion within the communication framework. One interesting aspect of the CMU model is their separation of priorities for subscription and event transfer so that these activities can be handled by different threads with different priorities. However, their model does not utilize any QoS specifications from publishers (suppliers) or subscribers (consumers). As a result, their message delivery mechanism cannot distinguish participants with higher priorities. In contrast, the TAO Event Service utilizes QoS parameters from suppliers and consumers to guarantee the event delivery semantics determined by a real-time scheduling service.
Standard CORBA invocations are based on the object->operation() paradigm supported by OO languages. In principle, twoway invocations simplify the development of distributed applications by supporting an implicit request/response protocol that makes remote operation invocations transparent to the client. In practice, however, the standard CORBA operation invocation models are too restrictive for real-time applications. In particular, these models lack asynchronous message delivery, do not support timed invocations or group communication, and can lead to excessive polling by clients. Moreover, standard oneway invocations might not implement reliable delivery, and deferred synchronous invocations require the use of the CORBA Dynamic Invocation Interface (DII), which yields excessive overhead for most real-time applications [Schmidt:96].
The Event Service is a CORBA Object Service (COS) that is designed to alleviate some of the restrictions with standard CORBA invocation models. In particular, the COS Event Service supports asynchronous message delivery and allows one or more suppliers to send messages to one or more consumers. Event data can be delivered from suppliers to consumers without requiring these participants to know about each other explicitly.
|Figure 3. Participants in the COS Event Channel Architecture.|
The role of each participant is outlined below:
Suppliers use Event Channels to push data to consumers. Likewise, consumers can explicitly pull data from suppliers. The push and pull semantics of event propagation help to free consumers and suppliers from the overly restrictive synchronous semantics of the standard CORBA twoway communication model. In addition, Event Channels can implement group communication by serving as a replicator, broadcaster, or multicaster that forwards events from one or more suppliers to multiple consumers.
There are two models (i.e., push vs. pull) of participant collaborations in the COS Event Service architecture. This paper focuses on real-time enhancements to the push model, which allows suppliers of events to initiate the transfer of event data to consumers. Suppliers push events to the Event Channel, which in turn pushes the events to consumers.
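The push collaboration can be sketched as follows. This is a minimal illustration, and the C++ class and method names below are simplifications rather than the actual IDL-generated COS bindings.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the COS push model: names are illustrative,
// not the IDL-generated CORBA C++ bindings.
struct Event {
    std::string type;
    std::string data;
};

// A consumer exposes a push() operation that the channel invokes.
class PushConsumer {
public:
    virtual ~PushConsumer() = default;
    virtual void push(const Event& e) = 0;
};

// The Event Channel mediates: suppliers push to it, and it forwards
// the event to every connected consumer, so suppliers and consumers
// never reference each other directly.
class EventChannel {
public:
    void connect_push_consumer(PushConsumer* c) { consumers_.push_back(c); }
    // Called by suppliers; fans the event out to all consumers.
    void push(const Event& e) {
        for (PushConsumer* c : consumers_) c->push(e);
    }
private:
    std::vector<PushConsumer*> consumers_;
};
```

A supplier simply calls the channel's push; adding or removing consumers requires no supplier changes, which is the decoupling the Observer pattern describes.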
Tight coupling often yields highly efficient custom implementations. As the example below shows, however, the inflexibility of tightly coupled software can substantially increase the effort and cost of integrating new and improved avionics features. For example, navigation suites are a source of continual change, both across platforms and over time. The specific components that make up the navigation suite change frequently to improve accuracy and availability. Many conventional avionics systems treat each implementation as a ``point solution,'' with built-in dependencies on particular components. This tight coupling requires expensive and time consuming development effort to port systems to newer and more powerful navigation technologies.
|Figure 4. Example Avionics Mission Control Application.|
This example has the following participants:
In contrast, using a pull-driven model to design the mission control application would require I/O Facades that actively acquire data from the Sensor Proxies. If the data was not available to be pulled, the calling I/O Facade would need to block awaiting a result. In order for the I/O Facade to pull, the system must allocate additional threads to allow the application to make progress while the I/O Facade task is blocked. However, adding threads to the system has many negative consequences (such as increased context switching overhead, synchronization complexity, and complex real-time thread scheduling policies). Conversely, by using the push model, blocking is largely alleviated, which reduces the need for additional threads. Therefore, this paper focuses on the push model.
|Figure 5. Example Avionics Application with Event Channel.|
In Figure 5, Sensor Proxy objects are suppliers of I/O events that are propagated by an Event Channel to I/O Facades, which consume the demarshalled I/O data. Sensor Proxies push I/O events to the channel without having to know which I/O Facades depend on the data. The benefit of using the Event Channel is that Sensor Proxies are unaffected when I/O Facades are added or removed. This architectural decoupling is described concisely by the Observer pattern [Gamma:95].
Another benefit of an Event Channel-based architecture is that an I/O Facade need not know which Sensor Proxies supply its data. Since the channel mediates on behalf of the Sensor Proxies, I/O Facades can register for certain types of events (e.g., GPS and/or INS data arrival) without knowing which Sensor Proxies actually supply these types of events (Section 3.2 discusses type-based filtering). Once again, the use of an Event Channel makes it possible to add or remove Sensor Proxies without changing I/O Facades.
However, the standard COS Event Service Specification lacks several important features required by real-time applications. Chief among these missing features include real-time event dispatching and scheduling, periodic event processing, and centralized event filtering and correlations. To resolve these limitations, we have developed a Real-time Event Service (RT Event Service) as part of the TAO project [Schmidt:97]. TAO's RT Event Service extends the COS Event Service specification to satisfy the quality of service (QoS) needs of real-time applications in domains like avionics, telecommunications, and process control.
The following list summarizes the features missing in the COS Event Service and outlines how TAO's Real-time Event Service supports them:
The COS Event Service has no notion of QoS, however. In particular, there is no Event Channel interface that consumers can use to specify their execution and scheduling requirements. Therefore, standard COS Event Channels provide no guarantee that they will dispatch events from suppliers with the correct scheduling priority, relative to the consumers of these events.
TAO's RT Event Service extends the COS Event Service interfaces by allowing consumers and suppliers to specify their execution requirements and characteristics using QoS parameters. These parameters are used by the channel's dispatching mechanism to integrate with the system-wide real-time scheduling policy to determine event dispatch ordering and preemption strategies.
For instance, an I/O Facade may depend on data from a subset of all Sensor Proxies. Furthermore, it may use data from many Sensor Proxies in a single calculation of aircraft position. Therefore, the I/O Facade cannot make progress until all of the Sensor Proxy objects receive I/O from their external sensors.
It is possible to implement filtering using standard COS Event Channels, which can be chained to create an event filtering graph that allows consumers to register for a subset of the total events in the system. However, the filter graph defined in standard COS increases the number of hops that a message must travel between suppliers and consumers. The increased overhead incurred by traversing these hops is typically unacceptable for real-time applications with low latency requirements. Furthermore, the COS filtering model does not address the event correlation needs of consumers that must wait for multiple events to occur before they can execute.
To alleviate these problems, TAO's RT Event Service provides filtering and correlation mechanisms that allow consumers to specify logical OR and AND event dependencies. When those dependencies are met, the RT Event Service dispatches all events that satisfy the consumers' dependencies. For instance, the I/O Facade can specify its requirements to the RT Event Service so that the channel only notifies the Facade object after all its Sensor Proxies have received I/O. At that time, the I/O Facade receives an aggregate of all the Sensor Proxies it depends on via a single push operation.
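The correlation semantics described above can be sketched roughly as follows. The CorrelationGroup class and its methods are hypothetical illustrations, not TAO's actual interfaces.

```cpp
#include <set>
#include <string>
#include <utility>
#include <vector>

// Illustrative sketch of AND/OR event correlation. A consumer registers
// a dependency set plus a mode; the channel buffers matching events and
// dispatches the aggregate only when the dependency is satisfied.
enum class Correlation { AND_ALL, OR_ANY };

class CorrelationGroup {
public:
    CorrelationGroup(Correlation mode, std::set<std::string> deps)
        : mode_(mode), deps_(std::move(deps)) {}

    // Record an arriving event; return true when the consumer should
    // be dispatched with the buffered aggregate.
    bool arrive(const std::string& event_type) {
        if (!deps_.count(event_type)) return false;    // not a dependency
        pending_.push_back(event_type);
        seen_.insert(event_type);
        if (mode_ == Correlation::OR_ANY) return true; // any one suffices
        return seen_.size() == deps_.size();           // AND: need them all
    }

    // The aggregate delivered in a single dispatch; cleared afterwards.
    std::vector<std::string> take_aggregate() {
        std::vector<std::string> out;
        out.swap(pending_);
        seen_.clear();
        return out;
    }
private:
    Correlation mode_;
    std::set<std::string> deps_;
    std::set<std::string> seen_;
    std::vector<std::string> pending_;
};
```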
In both cases, consumers have strict deadlines by which time they must execute the requested C units of computation time. However, the COS Event Service does not permit consumers to specify their temporal execution requirements. Therefore, periodic processing is not supported in standard COS Event Service implementations.
TAO's RT Event Service allows consumers to specify event dependency timeouts. It uses these timeout requests to propagate temporal events in coordination with system scheduling policies. In addition to the canonical use of timeout events (i.e., receiving timeouts at some interval), a consumer can request to receive a timeout event if its dependencies are not satisfied within some time period (i.e., a real-time ``watchdog'' timer). For instance, an I/O Facade can register to receive a timeout event if its Sensor Proxy dependencies are not satisfied after some time interval. This way, it can make best-effort calculations on the older sensor data and notify interested higher level components.
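The watchdog behavior can be sketched as follows, using a simulated tick count in place of a real OS timer; the Watchdog class and its methods are hypothetical illustrations.

```cpp
#include <set>
#include <string>
#include <utility>

// Hypothetical "watchdog" sketch: a consumer asks for a timeout event
// if its dependencies are not satisfied within an interval. Time is a
// simulated tick count here, not a real OS timer.
class Watchdog {
public:
    Watchdog(std::set<std::string> deps, long deadline)
        : deps_(std::move(deps)), deadline_(deadline) {}

    // A dependency has been satisfied by an arriving event.
    void arrive(const std::string& event_type) { deps_.erase(event_type); }

    // Poll at the current time: "data" once all dependencies arrived,
    // "timeout" if the deadline passed first, "" while still waiting.
    std::string poll(long now) const {
        if (deps_.empty()) return "data";
        if (now >= deadline_) return "timeout";
        return "";
    }
private:
    std::set<std::string> deps_;
    long deadline_;
};
```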
Figure 6 shows the high-level architecture of TAO's RT Event Service implementation.
|Figure 6. RT Event Service Architecture.|
The role of each component in the RT Event Service is outlined below:
Event Channel: provides factory operations that return the ConsumerAdmin and SupplierAdmin interfaces, which allow applications to obtain consumer and supplier administration objects, respectively. These administration objects make it possible to connect and disconnect consumers and suppliers to the channel. Internally, the channel is comprised of several processing modules based on the ACE Streams Framework [Schmidt:94b]. As described below, each module encapsulates independent tasks of the channel.
Consumer Proxy Module: implements the ConsumerAdmin interface defined in the COS Event Service CosEventChannelAdmin module. It provides factory methods for creating objects that support the ProxyPushSupplier interface. In the COS model, the ProxyPushSupplier interface is used by consumers to connect and disconnect from the channel.
TAO's RT Event Service model extends the standard COS ProxyPushSupplier interface so that consumers can register their execution dependencies with a channel. Figure 7 shows the types of data exchanged and the inter-object collaborations involved when a consumer registers with the channel.
|Figure 7. Collaborations in the RT Event Service Architecture.|
Supplier Proxy Module: implements the SupplierAdmin interface defined in the COS Event Service CosEventChannelAdmin module. It provides factory methods for creating objects that support the ProxyPushConsumer interface. Suppliers use the ProxyPushConsumer interface to connect and disconnect from the channel.
TAO's RT Event Service model extends the standard COS
ProxyPushConsumer interface so that suppliers can
specify the types of events they generate. With this
information, the channel's Subscription and Filtering Module
can build data structures that allow efficient run-time lookups
of subscribed consumers.
ProxyPushConsumer objects also represent the entry point of events from suppliers into an Event Channel. When suppliers transmit an event to the ProxyPushConsumer interface via the proxy's push operation, the channel forwards this event to the push operation of interested consumer object(s).
To address these shortcomings, TAO's RT Event Service extends the COS interfaces to allow consumers to subscribe for particular subsets of events. The channel uses these subscriptions to filter supplier events, only forwarding them to interested consumers. There are several reasons why TAO implements filtering in the channel. First, the channel relieves consumers from implementing filtering semantics. Second, it reduces communication channel load by eliminating filtered events in the channel instead of at consumers. Third, implementing filtering at the suppliers would require each supplier to have knowledge of its consumers. Since this would violate one of the primary motivations for an event service (that is, decoupled consumers and suppliers), TAO integrates filtering into the channel.
Adding filtering to the Event Channel requires a well-defined type system for events. Although the complete schema for this type system is beyond the scope of this paper, it includes source id, type, data, and timestamp fields (the schema is fully described in [Schmidt:RFI]). The RT Event Channel uses the event type system in the following ways:
When an event enters the Subscription and Filtering Module, consumers that subscribe to combined supplier/type-based ids are located with two table lookups. The first lookup finds all the type-based subscription tables corresponding to the event's source id. The second lookup finds the consumers subscribed to the event's type id.
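The two-step lookup can be sketched as follows; the table types and lookup function are illustrative assumptions, not TAO's internal data structures.

```cpp
#include <map>
#include <string>
#include <vector>

// Sketch of the two-step subscription lookup: source id -> type id ->
// subscribed consumers. Names and types here are illustrative.
using ConsumerList = std::vector<std::string>;
using TypeTable    = std::map<int, ConsumerList>;  // type id -> consumers
using SourceTable  = std::map<int, TypeTable>;     // source id -> type table

// Two lookups locate every consumer subscribed to this (source, type) pair.
ConsumerList lookup(const SourceTable& subs, int source_id, int type_id) {
    auto s = subs.find(source_id);            // first lookup: by source id
    if (s == subs.end()) return {};
    auto t = s->second.find(type_id);         // second lookup: by type id
    if (t == s->second.end()) return {};
    return t->second;
}
```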
Consumers can temporarily stop receiving events via suspend and resume operations. These are lightweight operations that have essentially the same effect as de-registering and re-registering for events. Therefore, suspend and resume are suitable for frequent changes in consumer sets, which commonly occur during mode changes. By incorporating suspension and resumption in the module closest to the suppliers, event channel processing is minimized for suspended consumers.
The Priority Timers Proxy uses a heap-based callout queue [Barkley:88]. Therefore, in the average and worst case, the time required to schedule, cancel, and expire a timer is O(log N) (where N is the total number of timers). The timer mechanism preallocates all its memory, which eliminates the need for dynamic memory allocation at run-time. Therefore, this mechanism is well-suited for real-time systems requiring highly predictable and efficient timer operations.
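A heap-based callout queue of this kind can be sketched as follows; std::priority_queue is used purely to illustrate the O(log N) ordering (it allocates dynamically, whereas TAO's timer mechanism preallocates its memory, and timer cancellation is omitted here).

```cpp
#include <functional>
#include <queue>
#include <string>
#include <vector>

// Sketch of a heap-based callout queue: scheduling and expiring a timer
// each cost O(log N) in the number of pending timers.
struct Timer {
    long expiry;
    std::string action;
    bool operator>(const Timer& rhs) const { return expiry > rhs.expiry; }
};

class CalloutQueue {
public:
    void schedule(long expiry, const std::string& action) {
        heap_.push(Timer{expiry, action});               // O(log N)
    }
    // Pop every timer whose expiry has passed, earliest first.
    std::vector<std::string> expire(long now) {
        std::vector<std::string> fired;
        while (!heap_.empty() && heap_.top().expiry <= now) {
            fired.push_back(heap_.top().action);
            heap_.pop();                                 // O(log N) per pop
        }
        return fired;
    }
private:
    std::priority_queue<Timer, std::vector<Timer>, std::greater<Timer>> heap_;
};
```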
Mechanisms that perform filtering and correlation are called Event Filtering Discriminators (EFDs). EFDs allow the run-time infrastructure to handle dependency-based notifications that would otherwise be performed by each consumer as all events were pushed to it. Thus, EFDs provide a ``data reduction'' service that minimizes the number of events received by consumers so that they only receive events they are interested in.
The performance requirements of an RT Event Service may vary for different types of real-time applications. The primary motivation for basing the internal architecture of the TAO Event Channel on the ACE Streams Framework is to allow static and dynamic channel configurations. Each module shown in Figure 7 may contain multiple ``pluggable'' strategies, each optimized for different requirements. The Streams-based architecture allows independent processing modules to be added, removed, or modified without requiring changes to other modules.
TAO's Event Channel can be configured in the following ways to support different event dispatching, filtering, and dependency semantics:
The following configurations can be achieved by removing certain modules from an Event Channel:
This section discusses the Dispatching Module and Scheduling Service in TAO's RT Event Channel.
The following figure shows the structure and dynamics of the Dispatching Module in the context of the Event Channel:
|Figure 8. Event Channel Dispatching.|
The participants in Figure 8 include the following:
The motivation for decoupling the Run-time Scheduler from the Dispatching Module is to allow scheduling policies to evolve independently of the dispatching mechanism. TAO's Run-time Scheduler was initially implemented with a Rate Monotonic scheduling policy that used the consumer's rate to determine the tuple's priority. Subsequent Run-time Scheduler implementations use an Earliest Deadline First (EDF) policy, where the deadline of the event (or consumer) determines the priority of the tuple. Thus, by separating the responsibilities of scheduling from dispatching, the Run-time Scheduler can be replaced without affecting unrelated components in the channel.
The Dispatcher ultimately delivers each event by invoking the consumer's push operation. Depending on the placement of each tuple in the Priority Queues, the Dispatcher may preempt a running thread in order to dispatch the new tuple.
For instance, consider the arrival of an event/consumer tuple in a Dispatching Module implemented with real-time preemptive threads. If the Run-time Scheduler assigns the tuple a preemption priority higher than any currently running thread, the Dispatcher will preempt a running thread and dispatch the new tuple. Furthermore, assuming that lower numbers indicate higher priority, the Dispatcher in Figure 8 would dispatch all tuples on queue 0 before dispatching any on queue 1. Similarly, it would dispatch all tuples on queue 1 before those on queue 2, and so on.
To remove tuples from Priority Queues, the Dispatcher always dequeues from the head of the queue. The Run-time Scheduler can determine the order of dequeueing by returning different sub-priorities for different event/consumer tuples. For instance, assume that an implementation of the Run-time Scheduler must ensure that some event E1 is always dispatched before event E2, but does not require that the arrival of E2 preempt a thread dispatching E1. By assigning a higher sub-priority to event/consumer tuples containing E1, the tuple will always be queued before any tuples containing E2. Therefore, the Dispatcher will always dequeue and dispatch E1 events before E2 events.
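The priority and sub-priority queueing discipline described above can be sketched as follows; the fixed queue count and names are illustrative, not TAO's implementation.

```cpp
#include <array>
#include <deque>
#include <string>

// Sketch of the Dispatching Module's queueing discipline: an array of
// queues indexed by preemption priority (0 is highest), with
// sub-priority deciding a tuple's position within its queue.
struct Tuple {
    std::string event;
    int sub_priority;  // higher sub-priority dequeues first within a queue
};

class DispatchQueues {
public:
    void enqueue(int priority, const Tuple& t) {
        auto& q = queues_[priority];
        // Insert ahead of any queued tuple with a lower sub-priority.
        auto it = q.begin();
        while (it != q.end() && it->sub_priority >= t.sub_priority) ++it;
        q.insert(it, t);
    }
    // The Dispatcher always dequeues from the head of the highest
    // priority (lowest numbered) non-empty queue.
    std::string dequeue() {
        for (auto& q : queues_) {
            if (!q.empty()) {
                std::string e = q.front().event;
                q.pop_front();
                return e;
            }
        }
        return "";
    }
private:
    std::array<std::deque<Tuple>, 4> queues_;
};
```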
A benefit of separating the functionality of the Dispatcher from the Priority Queues is to allow the implementation of the Dispatcher to change independently of the other channel components. TAO's RT Event Channel has been implemented with four different dispatching mechanisms, as described in the following subsection.
|Figure 9. Dispatcher Implementations.|
The primary benefit of the RTU model is its ability to reduce the context switching, synchronization, and data movement overhead incurred by preemptive multi-threading implementations. However, preemption is delayed to the extent that consumers check to see if they must preempt themselves. This latency may be unacceptable in some real-time applications.
The advantage of this model is that the dispatcher can leverage kernel support for preemption by associating appropriate OS priorities to each thread. For instance, when a thread at the highest priority becomes ready to run, the OS will preempt any lower priority thread that is running and allow the higher priority thread to run. The disadvantages are that this preemption incurs thread context switching overhead, and that applications must identify, and synchronize access to, data that can be shared by multiple threads.
As with the RTU model, single-threaded dispatching exhibits lower context switching overhead than the real-time thread dispatching model. Moreover, since the channel maintains its own thread of control, it does not borrow supplier threads to propagate events. As a result, the channel is an asynchronous event delivery mechanism for suppliers. However, since the channel's dispatching thread does not implement preemption, consumers run to completion regardless of priority. As a result, single-threaded dispatching can suffer from priority inversion, which results in lower system utilization and non-determinism.
The x-axis denotes time in microseconds. Each consumer and supplier outputs a point when it receives an event from the Event Channel. Another point is output when it finishes processing the event. Suppliers receive timeouts and generate a single event for each timeout. Each consumer registers for events from a single supplier. A horizontal line indicates the time span when the respective consumer or supplier runs on the CPU.
Each figure is explained below:
|Figure 10. Timeline from Multi-Threaded Channel.|
|Figure 11. Timeline from Single-Threaded Channel.|
|Figure 12. Timeline from an EFD Channel.|
Our Real-time Scheduling Service requires that all operations export RT_Info data structures that describe the execution properties of the operation. During scheduling configuration runs (described in the discussion of the Off-line Scheduler below), RT_Infos contain execution times and rate requirements. At run-time, the static Scheduler need not know any information about an operation's execution characteristics. Only the operation's priority is needed, so the scheduler can determine how the operation should be dispatched. Thus, at run-time, each operation's RT_Info need only contain its priority values.
At run-time, the Dispatching Module queries the Run-time Scheduler for the priority of a consumer's push operation. The Run-time Scheduler uses a static repository that identifies the execution requirements (including priority) of each operation. The Event Channel's Dispatching Module uses the operation priority returned by the Run-time Scheduler to determine which priority queue an event/consumer tuple should be inserted onto.
All scheduling and priority computation is performed off-line. This allows priorities to be computed rapidly (i.e., looked up in O(1) time) at run-time. Thus, TAO's Run-time Scheduler simply provides an interface to the results of the Off-line Scheduler, discussed below.
|Figure 13. Scheduling Service Internal Repository.|
Once Task Interdependency Compilation is complete, the Off-line Scheduler assigns priorities to each object operation. The implementation of the Event Service described in this paper utilizes a rate monotonic scheduling (RMS) policy. Therefore, priorities are assigned based on task rates, i.e., higher priorities are assigned to threads with faster rates. For instance, a task that needs to execute at 30 Hz would be assigned to a thread with a higher priority than a task that needs to execute at 15 Hz.
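The rate monotonic priority assignment can be sketched as follows; the Task structure below is an illustrative stand-in for TAO's RT_Info repository, not its actual format.

```cpp
#include <algorithm>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Sketch of the Off-line Scheduler's rate monotonic assignment:
// higher rate => higher priority (lower priority number).
struct Task {
    std::string name;
    double rate_hz;
};

// Returns priority numbers, 0 being the highest priority.
std::map<std::string, int> assign_rms_priorities(std::vector<Task> tasks) {
    std::sort(tasks.begin(), tasks.end(),
              [](const Task& a, const Task& b) { return a.rate_hz > b.rate_hz; });
    std::map<std::string, int> prio;
    for (std::size_t i = 0; i < tasks.size(); ++i)
        prio[tasks[i].name] = static_cast<int>(i);
    return prio;
}
```

For instance, a 30 Hz task ends up with a higher priority (lower number) than a 15 Hz task, matching the example above.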
Most operating systems that support real-time threads guarantee higher priority threads will (1) preempt lower priority threads and (2) run to completion (or until higher priority threads preempt them). Therefore, object operations with higher priorities will preempt object operations with lower priorities. These priority values are computed by the Off-line Scheduler and are stored in a table that is queried by the Run-time Scheduler at execution time.
An important metric for evaluating the performance of the RT Event Service is the Schedulable Bound. The schedulable bound of a real-time schedule is the maximum resource utilization possible without deadlines being missed [Gopal:96]. Likewise, the schedulable bound of the RT Event Service is the maximum CPU utilization that supplier and consumers can achieve without missing deadlines.
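For reference, the classic Liu and Layland utilization bound for RMS [LiuLayland:73] can be checked as follows; this is the standard sufficient (not necessary) test, shown here for illustration rather than as TAO's actual scheduling analysis.

```cpp
#include <cmath>
#include <vector>

// Liu & Layland schedulability test for rate monotonic scheduling:
// n periodic tasks are guaranteed schedulable if total utilization
// U = sum(C_i / T_i) <= n * (2^(1/n) - 1).
struct Periodic {
    double compute_time;  // C_i, worst-case execution time
    double period;        // T_i
};

bool rms_schedulable(const std::vector<Periodic>& tasks) {
    double u = 0.0;
    for (const auto& t : tasks) u += t.compute_time / t.period;
    double n = static_cast<double>(tasks.size());
    return u <= n * (std::pow(2.0, 1.0 / n) - 1.0);
}
```

For three tasks the bound is about 78%, so a task set with 50% total utilization passes while one at 90% is not guaranteed by this test.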
For TAO's Real-time Scheduling Service to guarantee the schedulability of a system (i.e., all tasks meet their deadlines), high priority tasks must preempt lower priority tasks. Thus, in RMS terminology, higher rate tasks preempt lower rate tasks.
Each of the RT Event Channel's Dispatching Module strategies support varying degrees of preemption. The EFD and Single-Threaded implementations support no preemption; the RTU implementation supports deferred preemptions; and the multi-threaded version uses OS support for ``immediate'' preemption. The goal of the benchmarks described below is to measure the utilization implications of each approach.
The performance tests discussed below were conducted on a single-CPU Pentium Pro 200 MHz workstation with 128 MB RAM running Windows NT 4.0. Test configurations included 3 suppliers and 3 consumers. As shown in Figure 14, the Timeline tool can zoom out to show the periodic nature of the test participants.
|Figure 14. Wide view of test run.|
The view in Figure 14 shows the relative frequencies of the participants. Supplier2 generates events for consumer2 at the highest frequency (40 Hz). Likewise, supplier1 generates events for consumer1 at 20 Hz, and supplier0 generates events for consumer0 at 10 Hz.
Figure 15 shows the total CPU utilization achieved (y-axis) by each Event Channel implementation (i.e., multi-threaded, RTU, single-threaded, and EFD), as the workload configuration was changed (x-axis).
|Figure 15. CPU Utilization for RTU, Multi-Threaded, Single-Threaded, and EFD channel implementation.|
More specifically, the x-axis in Figure 15 represents the percentage workload given to the 40 Hz supplier and consumer. For instance, at the 10% x-axis column, the 40 Hz supplier and consumer were given relatively small amounts of work (10% of the total possible) to perform each iteration (40 times per second). Then the workload for the 20 Hz and 10 Hz participants was repeatedly increased (thus increasing overall CPU utilization) until deadlines started to be missed. The maximum utilization achieved was then plotted relative to the y-axis.
As the values along the x-axis increase, the workload of the 40 Hz participants increases and the workload of the 20 Hz and 10 Hz participants decreases. Likewise, for lower values on the x-axis, the workload of the 20 Hz and 10 Hz participants is larger. For each value on the x-axis, the maximum utilization achieved without any missed deadlines was then plotted on the y-axis. The graph in Figure 15 illustrates how the utilization of different channel implementations can vary as the configuration of the system changes.
The results of our performance benchmarks show that the RTU and multi-threaded implementations of the channel achieve approximately 95% utilization for all workload configurations. The 5% shortfall from maximum utilization results from the overhead imposed by the Event Channel. Although the RTU and multi-threaded implementations performed consistently for all configurations, utilizations for the single-threaded and EFD implementations vary significantly as the workload configurations change. These results show how increased support for preemption provides greater stability across workloads.
The differences between the single-threaded and EFD channels stem from the fact that the single-threaded channel provides minimal support for preemption. After each event is propagated to a consumer in the single-threaded channel, the channel's thread (in the Dispatching Module) dispatches the next highest priority event/consumer tuple. Thus, if a higher priority event/consumer tuple arrives in the channel while an event is being dispatched (e.g., a timeout for a high priority consumer), the new tuple will be dispatched as soon as the currently running event completes.
In contrast, when a supplier generates an event in the EFD channel, the event is dispatched immediately to all consumers. If the EFD channel is dispatching an event to consumers when a timeout occurs for a higher priority consumer, the timeout will not be dispatched until all other consumers have completed. In the single-threaded channel, the timeout would be dispatched after the next consumer completed. The EFD's semantics increase the chances of missed deadlines and consequently reduce utilization.
It is also instructive to note that the single-threaded implementation performs best when the workload of the 40 Hz participants is the greatest. For higher x-axis values, the workload of the 20 Hz and 10 Hz participants is lower. This reduces the demand for preemption, since lower priority suppliers and consumers hold the thread of control only briefly because they are doing less work. Therefore, the graph shows that as the demand for preemption decreases (x values become greater), the lack of support for preemption becomes less crucial.
The Minimum Event Spacing Test looks at the average event delivery time for all of the events that a supplier delivers to its consumers. As before, consumers do not do anything with events that are pushed to them. The average event delivery time includes the event interval (spacing) and Event Channel overhead. Ideally, it should be as close as possible to the event interval. As the event interval is reduced, however, the event channel overhead starts to become significant. This test finds that minimum event interval.
These tests were run on a Sun Ultra 2 with two 167 MHz CPUs, running SunOS 5.5.1. The Event Channel and test applications were built with g++ 2.7.2 with -O2 optimization. Consumers, suppliers, and the Event Channel were all in the same process, so there was no ORB remote communication overhead. Furthermore, there was no other significant activity on the workstation during testing. All tests were run in the Solaris real-time scheduling class, so they had the highest software priority (but below hardware interrupts).
With the single-threaded Event Channel, we measured a best-case supplier-to-consumer latency of approximately 140 microseconds. ``Best-case'' refers to a single supplier and single consumer registered with the Event Channel. The supplier received a timeout every 250 milliseconds and then sent a timestamped event to the consumer. As the number of suppliers and/or consumers was increased, the latency increased as well. With 50 suppliers and 1 consumer, the latency was about 1,136 microseconds. With 1 supplier and 50 consumers, it was about 1,303 microseconds. And with 50 suppliers and 50 consumers, equally distributed among the suppliers, it was about 1,283 microseconds.
Under these conditions, the average event delivery time was comparable to the event timeout interval of 250 milliseconds. The supplier timeout value was progressively reduced to find the point at which the Event Channel overhead significantly affected the average delivery time. That timeout interval was ~20 msec; below that value, the average event delivery time increased significantly.
We are currently investigating opportunities for enhancing our first implementation of the Event Channel to improve these performance numbers.
One advantage of our approach is that operation invocations pay only the overhead of a C++ virtual function call. If the schedule were not determined off-line, a run-time (dynamic) scheduler would need to intercede before any abstract operation is invoked, which incurs additional overhead. For instance, if a Rate Monotonic scheduling policy is used, the scheduler must determine the rate of each object in order to calculate the object's priority. Furthermore, this type of dynamic scheduler must make some type of guarantee, either weak or strong, that deadlines will be met.
One way a scheduler could make strong guarantees is to perform admission control, which permits operations to execute only when the necessary resources are available. Admission control requires that object operations export execution properties such as worst-case execution time. Alternatively, the scheduler might implement a weaker, ``best-effort'' admission policy. For example, if an Earliest Deadline First policy is used, object operations with the nearest deadlines are given priority over operations with later deadlines. Such a policy would require that object operation deadlines be exported or calculated by the scheduler.
In general, dynamic scheduling can incur significant overhead, decrease resource utilization, and fail to work correctly when faced with system overload. As a result, dynamic scheduling may not be a viable solution for systems with hard deadlines and constrained resources.
Since all objects and operations in TAO's Real-time Event Service are determined off-line, one could argue that no real polymorphism exists. Although this is true to a certain extent, dynamic binding offers more benefits than just changing behavior at run-time. In particular, we found that the ability to develop components independently of the applications that use them significantly increases the potential for reuse in the avionics domain.
The Event Channel pushes to abstract interfaces. Although the number and type of PushConsumers are determined off-line, the code for the Event Channel remains decoupled from application consumer objects.
Our performance results demonstrate that dispatching mechanisms with finer-grained support for preemption yield more consistent CPU utilization across different application configurations. These results also indicate that the dynamic binding mechanisms used by our C++ compilers are not fundamentally at odds with the deterministic execution behavior required by real-time applications. In addition, our results illustrate that it is feasible to apply CORBA Object Services to develop real-time systems. TAO's Real-time Scheduling Service architecture was submitted as a response to the OMG Real-time Special Interest Group Request for Information on Real-time CORBA [Schmidt:RFI].
The current implementation of TAO's Real-time Event Service is written in C++ using components from the ACE framework [Schmidt:94b]. ACE is a widely used communication framework that contains a rich set of high-performance components. These components automate common communication software tasks such as connection establishment, event demultiplexing and event handler dispatching, message routing, dynamic configuration of services, and flexible concurrency control for network services. ACE has been ported to a variety of real-time OS platforms including VxWorks, Solaris, Win32, and most POSIX 1003.1c implementations.
The RT Event Service is currently deployed at McDonnell Douglas (now Boeing) in St. Louis, MO, where it is being used to develop operation flight programs for next-generation avionics systems.
[Barkley:88] Ronald E. Barkley and T. Paul Lee, ``A Heap-Based Callout Implementation to Meet Real-time Needs,'' Proceedings of the USENIX Summer Conference, June, 1988, pp. 213--222.
[Blair:95] Geoff Coulson, Gordon Blair, Jean-Bernard Stefani, F. Horn, and L. Hazard, ``Supporting the Real-time Requirements of Continuous Media in Open Distributed Processing,'' Computer Networks and ISDN Systems, pages 1231-1246, 1995.
[Gopal:96] R. Gopalakrishnan and Gurudatta M. Parulkar, ``Bringing Real-time Scheduling Theory and Practice Closer for Multimedia Computing,'' Proceedings of the ACM SIGMETRICS Conference, Philadelphia, PA, May, 1996.
[Khanna:92] S. Khanna et al., ``Realtime Scheduling in SunOS 5.0,'' Proceedings of the USENIX Winter Conference, pp. 375--390, USENIX Association, 1992.
[LiuLayland:73] C. L. Liu and J. W. Layland, ``Scheduling Algorithms for Multiprogramming in a Hard Realtime Environment,'' JACM 20 (1), pages 46-61, 1973.
[Maffeis:95a] Silvano Maffeis, ``Adding Group Communication and Fault-Tolerance to CORBA,'' Proceedings of the 1st USENIX Conference on Object-Oriented Technologies, Monterey, CA, June, 1995.
[OMG:95a] Common Object Request Broker Architecture, OMG, July, 1995.
[OMG:95b] Common Object Services Specification, OMG 95-3-31, 1995.
[OMG:97a] OMG document telecom/97-01-03, Notification Service RFP, 1997.
[OMG:97b] OMG Notification Service document page.
[Porat] Porat et al., ``Compiler Optimization of C++ Virtual Function Calls,'' Proceedings of the Second USENIX Conference on Object-Oriented Technologies and Systems, Toronto, Canada, June, 1996.
[Rajkumar:95] Ragunathan Rajkumar, Mike Gagliardi, and Lui Sha, ``The Real-Time Publisher/Subscriber Inter-Process Communication Model for Distributed Real-Time Systems: Design and Implementation,'' First IEEE Real-Time Technology and Applications Symposium, May, 1995.
[QuO:97] John A. Zinky and David E. Bakken and Richard Schantz, ``Architectural Support for Quality of Service for CORBA Objects,'' Theory and Practice of Object Systems, Volume 3, No. 1, 1997.
[RMA_Handbook:93] Mark H. Klein, Thomas Ralya, Bill Pollak, Ray Obenza, and Michael González Harbour, ``A Practitioner's Handbook for Real-time Analysis: Guide to Rate Monotonic Analysis for Real-time Systems,'' Kluwer Academic Publishers, 1993.
[Satoh:95] Ichiro Satoh and Mario Tokoro, ``Time and Asynchrony in Interactions among Distributed Real-Time Objects,'' Proceedings of 9th European Conference on Object-Oriented Programming, August 1995.
[Schmidt:94] Douglas C. Schmidt ``ACE: an Object-Oriented Framework for Developing Distributed Applications,'' Proceedings of the 6th USENIX C++ Technical Conference, Cambridge, MA, April, 1994.
[Schmidt:96] Aniruddha Gokhale and Douglas C. Schmidt, ``Performance of the CORBA Dynamic Invocation Interface over ATM Networks,'' IEEE GLOBECOM, London, England, November, 1996.
[Schmidt:97] Douglas C. Schmidt, Aniruddha Gokhale, Tim Harrison, and Guru Parulkar, ``A High-performance Endsystem Architecture for Real-time CORBA,'' IEEE Communications Magazine, February, 1997.
[Schmidt:RFI] Douglas C. Schmidt, David L. Levine, and Timothy H. Harrison, ``An ORB Endsystem Architecture for Hard Real-Time Scheduling,'' response to the OMG RFI ORBOS/96-09-02, February, 1997.
[Schmidt:97b] Douglas C. Schmidt and Steve Vinoski, ``Object Interconnections: Overcoming Drawbacks in the OMG Events Service,'' C++ Report, June, 1997.
[ShaLehoczkyRamamritham:92] Lui Sha, Ragunathan Rajkumar, John Lehoczky, and Krithi Ramamritham, ``Mode Change Protocols for Priority-Driven Preemptive Scheduling,'' J. Real-Time Systems, Vol. 1, 1989, pp. 243-264, Reprinted in John A. Stankovic and Krithi Ramamritham, Advances in Real-Time Systems, IEEE Computer Society Press, 1992.
[Stefani:96] Jean-Bernard Stefani et al., ``Requirements for a Real-time ORB.'' ReTINA Technical Report, 1996.
[Tokuda:90] H. Tokuda, T. Nakajima, and P. Rao, ``Real-Time Mach: Towards Predictable Real-time Systems,'' USENIX Mach Workshop, 1990.
[Zhang:90b] Lixia Zhang, ``VirtualClock: A New Traffic Control Algorithm for Packet Switched Networks,'' Proceedings of the Symposium on Communications Architectures and Protocols (SIGCOMM), 1990.
[Vinoski:97] Steve Vinoski, ``CORBA: Integrating Diverse Applications Within Distributed Heterogeneous Environments,'' IEEE Communications Magazine, February, 1997.
[Gamma:95] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, ``Design Patterns: Elements of Reusable Object-Oriented Software,'' Addison-Wesley, 1995.
[Schmidt:94b] Douglas C. Schmidt, ``ASX: an Object-Oriented Framework for Developing Distributed Applications,'' Proceedings of the 6th USENIX C++ Technical Conference, Cambridge, MA, April, 1994.