Dynamic Scheduling of Distributed Method Invocations

V. Kalogeraki, P. M. Melliar-Smith and L. E. Moser
Department of Electrical and Computer Engineering
University of California, Santa Barbara, CA 93106
vana@alpha.ece.ucsb.edu, pmms@ece.ucsb.edu, moser@ece.ucsb.edu

Abstract

Distributed method invocations require dynamic scheduling algorithms and efficient resource projections to provide timeliness guarantees to application objects. In this paper we present a dynamic scheduling algorithm that examines the computation times, real times and resource requirements of the application tasks to determine a feasible schedule for the method invocations. The schedule is driven by the laxities of the tasks and the importance that the tasks have to the system. Tasks span processor boundaries, and request messages carry scheduling parameters (laxity values) from one processor to another, yielding a system-wide scheduling algorithm that requires only local computations. Experimental results validate our scheduling algorithm and show that it has minimal overhead.

1. Introduction

Distributed method invocations require flexible and dynamic scheduling algorithms that can provide timeliness guarantees to the application objects. Dynamic scheduling is more applicable to soft real-time systems, where missing a deadline is not catastrophic to the system. In soft real-time systems, the execution times of the methods can vary; therefore, it is difficult to estimate an effective schedule in advance. Scheduling distributed methods becomes more complicated when dynamically invoked methods interact with one another and compete for limited computing resources. Dynamic scheduling algorithms require accurate resource usage projections to meet timing constraints and achieve the best utilization of the resources. Worst-case allocations are usually not effective, because they trade resource utilization for accurate predictions and can result in underutilized processors.
This research has been supported by DARPA, AFOSR and ONR Contracts N00174-95-K-0083, F30602-97-1-0284 and N66001-00-1-8931.

In this paper we present a dynamic scheduling algorithm that examines the computation times, real times and resource requirements of the tasks to determine a feasible schedule for the method invocations on the processors. Tasks are modeled as sequences of method invocations of objects located on multiple processors. Our scheduling algorithm is based on the least-laxity scheduling algorithm, which has proven to be effective in multiprocessor environments [5]. In least-laxity scheduling, the laxity of a task is a measure of the task's urgency and is computed as the difference between the deadline and the estimated remaining computation time of the task. The schedule is driven by the laxity values of the tasks invoking methods on the objects and by the importance that the tasks have for the system. Scheduling decisions are made on a system-wide basis because the tasks consist of methods distributed over many processors. Our algorithm allows a request message containing a method invocation to carry the scheduling parameters of the task from one processor to another, yielding a system-wide scheduling algorithm that requires only local computations. The scheduling algorithm is implemented in the Realize Resource Management system [11, 13] and is used to schedule CORBA [16] objects. Experimental results show that the scheduling mechanism is efficient and has minimal overhead.

2. Scheduling Model

2.1. Tasks

A task is a sequence of method invocations of objects distributed across multiple processors. A task is executed by a single thread, or by multiple threads executing in sequence or in parallel on one or more processors. The execution of a task is triggered by a client thread and completes when the client thread finishes execution. Multiple tasks originating from different client threads can be executed concurrently.
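As a rough illustration, the laxity computation and least-laxity selection described above can be sketched as follows. This is a minimal sketch, not the paper's implementation: the Task fields, the importance-based tie-breaking rule, and the request-message format are our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A task competing for a processor (field names are illustrative)."""
    name: str
    deadline: float     # time remaining until the task's deadline
    remaining: float    # estimated remaining computation time
    importance: int = 0  # importance of the task to the system

    def laxity(self) -> float:
        # Laxity = deadline - estimated remaining computation time:
        # the slack left before the task can no longer finish on time.
        return self.deadline - self.remaining

def next_task(ready: list) -> Task:
    # Least-laxity-first: dispatch the most urgent (smallest-laxity) task,
    # breaking ties in favor of the more important task (an assumption).
    return min(ready, key=lambda t: (t.laxity(), -t.importance))

def make_request(task: Task) -> dict:
    # A request message carries the task's scheduling parameters so that
    # the receiving processor can schedule the remote method invocation
    # using only local computation.
    return {"task": task.name,
            "laxity": task.laxity(),
            "importance": task.importance}
```

For example, a task 8 time units from its deadline with 5 units of estimated work remaining has laxity 3, and would be dispatched before a task with laxity 6; the laxity travels with the request message when the next method of the task is invoked on another processor.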
Similar to our use of the term task, the OMG's Dynamic Scheduling proposal [18] introduces the notion of a