The purpose of the Distributed-SDF domain for Ptolemy II is to allow distributed simulation of SDF models. It builds on top of the existing SDF domain by extending it. From the user's point of view, using the Distributed-SDF director is sufficient to run the distributed version: the director provides options (as parameters in its configuration dialog) to generate sequential or parallel schedules and to execute the simulation locally or in a distributed manner. The director relies on the Distributed-SDF scheduler (which extends the existing SDF scheduler) to generate parallel schedules. The parallel schedule is computed by performing a topological sort of the acyclic precedence graph derived from the model's topology.

Distributed execution of the model requires a distributed platform; for this purpose, server processes must be running on the different machines that constitute it. To make this platform dynamic and transparent to the user, JINI is used as a peer discovery protocol. The server processes register with a lookup service, and the Distributed-SDF director discovers them whenever a simulation is to be performed. Once the director has gathered enough servers to run the simulation, it deploys the model's actors onto them: MoML descriptions of the pre-initialized actors are sent over the network to the different servers, and the corresponding classes are loaded dynamically from local storage. After the actors are loaded remotely, virtual connections (over the network) are made among them in a way that mimics the connections in the original model, making the communication between actors decentralized. One architectural issue is how to identify receivers in a distributed setup: a receiver that resides locally in a model can be identified unambiguously, but not when the model is distributed. To solve this, the receiver class has been extended with a tag.
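The level-based parallel schedule described above can be sketched as a topological sort that groups actors by depth in the precedence DAG, so that all actors in one level are mutually independent. This is a minimal illustration under assumed names (`LevelSchedule`, string-labelled actors), not the actual Distributed-SDF scheduler API:

```java
import java.util.*;

// Hypothetical sketch: group actors into parallel schedule levels by
// topological depth of the model's acyclic precedence graph.
public class LevelSchedule {
    // edges maps each actor to its downstream actors. Returns the schedule
    // levels; actors within one level have no interdependencies and may fire
    // in parallel.
    public static List<List<String>> levels(Map<String, List<String>> edges) {
        Map<String, Integer> indegree = new HashMap<>();
        for (String a : edges.keySet()) indegree.putIfAbsent(a, 0);
        for (List<String> outs : edges.values())
            for (String b : outs) indegree.merge(b, 1, Integer::sum);

        List<String> ready = new ArrayList<>();
        for (Map.Entry<String, Integer> e : indegree.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());

        List<List<String>> schedule = new ArrayList<>();
        while (!ready.isEmpty()) {
            Collections.sort(ready);  // deterministic ordering within a level
            schedule.add(ready);
            List<String> next = new ArrayList<>();
            for (String a : ready)
                for (String b : edges.getOrDefault(a, List.of()))
                    if (indegree.merge(b, -1, Integer::sum) == 0) next.add(b);
            ready = next;
        }
        return schedule;
    }

    public static void main(String[] args) {
        // A source feeding two independent filters that join at a sink.
        Map<String, List<String>> model = Map.of(
            "source", List.of("filterA", "filterB"),
            "filterA", List.of("sink"),
            "filterB", List.of("sink"),
            "sink", List.of());
        // prints [[source], [filterA, filterB], [sink]]
        System.out.println(levels(model));
    }
}
```

Here `filterA` and `filterB` land in the same level, which is exactly the property the director exploits: everything in one level can be fired simultaneously on different servers.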
Once the actors are distributed and virtually connected, the distributed platform is ready to start the simulation. The Distributed-SDF director orchestrates the execution of the schedule in a centralized manner. To allow parallel execution of actors, it creates a thread locally to control every distributed actor, which makes it possible to simultaneously fire those actors that are ready and have no interdependencies. Since several actors can be executing at the same time and can have different processing times, a synchronization step is introduced after every schedule level. When an actor is executed remotely, it is embedded in a model with a director, and its input and output ports are connected to modified relations that make the actor believe it is embedded in the original model. On the input side, these relations are in charge of receiving tokens and placing them at the right receiver; on the output side, they send tokens over the network to the distributed processes that are expecting them.

The Distributed-SDF domain provides certain advantages derived from its distributed nature. First, the known memory bounds of a single JVM can be overcome. Second, it yields shorter simulation times, mainly for models with a high degree of parallelism and granularity.
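The thread-per-actor execution with a synchronization step after each schedule level can be sketched with standard `java.util.concurrent` primitives. This is an assumed simplification (local `Runnable`s standing in for remote actor firings; `ParallelFiring` is a made-up name), not the director's actual implementation:

```java
import java.util.*;
import java.util.concurrent.*;

// Hypothetical sketch: fire each schedule level in parallel threads, with a
// synchronization step between levels. invokeAll blocks until every actor in
// the current level has finished firing, regardless of per-actor timing.
public class ParallelFiring {
    public static void fireSchedule(List<List<Runnable>> levels)
            throws InterruptedException {
        ExecutorService pool = Executors.newCachedThreadPool();
        try {
            for (List<Runnable> level : levels) {
                List<Callable<Void>> firings = new ArrayList<>();
                for (Runnable actor : level)
                    firings.add(() -> { actor.run(); return null; });
                pool.invokeAll(firings); // barrier: wait for the whole level
            }
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> log = Collections.synchronizedList(new ArrayList<>());
        fireSchedule(List.of(
            List.of(() -> log.add("source")),
            List.of(() -> log.add("filterA"), () -> log.add("filterB")),
            List.of(() -> log.add("sink"))));
        // "source" is always first and "sink" last; the two filters may
        // appear in either order, since they fire concurrently.
        System.out.println(log);
    }
}
```

The per-level barrier mirrors the synchronization step in the text: actors within a level may finish in any order, but no actor of the next level fires before the current level completes.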
|Title||The Sixth Biennial Ptolemy Miniconference Proceedings|
|Publisher||EECS Department, University of California|
|Status||Published - 2005|
|Event||Sixth Biennial Ptolemy Miniconference - , USA|
Duration: 12 May 2005 → 12 May 2005
|Conference||Sixth Biennial Ptolemy Miniconference|
|Period||12/05/2005 → 12/05/2005|