Autonomic Dynamic Parallel Computing
Overview

Parallel computing is widely adopted in scientific and engineering applications to enhance efficiency, and there is increasing research interest in utilizing distributed networked computers for parallel computing. The Message Passing Interface (MPI) standard was designed to support the portability and platform independence of a developed parallel program. However, the procedure for starting an MPI-based parallel computation among distributed computers lacks autonomicity and flexibility. An autonomic dynamic parallel computing framework is presented to provide the autonomicity and flexibility that are important and necessary to parallel computing applications involving resource-constrained and heterogeneous platforms. In this framework, an MPI parallel computing environment consisting of multiple computing entities is dynamically established through inter-agent communication using IEEE Foundation for Intelligent Physical Agents (FIPA) compliant Agent Communication Language (ACL) messages. For each computing entity in the MPI parallel computing environment, load-balanced MPI C source code, along with MPI environment configuration statements, is dynamically composed as mobile agent code. A mobile agent wrapping this code is created and sent to the computing entity, where the code is retrieved and interpretively executed.

Autonomic Dynamic Parallel Computing Framework

Figure 1: The stationary agent on Computing Entity 4 requests resource contributions from each of the stationary agents on Computing Entity 1 through Computing Entity 3 through ACL messages. The stationary agents on Computing Entity 1 through Computing Entity 3 respond to the request through ACL messages as well.
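In FIPA's standard string encoding, the request in Figure 1 and an agreeing reply might look like the following. The content expression, conversation identifier, and addressing details are illustrative assumptions, not the framework's actual messages.

```
(request
  :sender          (agent-identifier :name regulating_agent4@CE4_Agency)
  :receiver        (set (agent-identifier :name regulating_agent1@CE1_Agency))
  :content         "contribute computing resources"
  :protocol        fipa-request
  :conversation-id resource-contribution-1)

(agree
  :sender          (agent-identifier :name regulating_agent1@CE1_Agency)
  :receiver        (set (agent-identifier :name regulating_agent4@CE4_Agency))
  :content         "contribute computing resources"
  :protocol        fipa-request
  :conversation-id resource-contribution-1)
```

A declining entity would answer with a `refuse` performative instead of `agree`, which is how the requesting agent learns which computing entities to include in the MPI environment.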



Figure 2: The stationary agent on Computing Entity 4 sends dynamically composed mobile agents to the agencies on the computing entities that agreed to contribute, for performance evaluation. The mobile agents inform the stationary agent on Computing Entity 4 of the performance indices through ACL messages.



Figure 3: The stationary agent on Computing Entity 4 sends dynamically composed mobile agents to the local agency and to the remote agencies on the computing entities that agreed to contribute, to start the parallel MPI computation.
Example: Autonomic Parallel Matrix Multiplication

Four computing entities, each a Linux machine, are involved in this simulation. One of them autonomically forms a parallel computing environment consisting of itself and the other computing entities that agree to contribute their resources.

Four agencies, CE1_Agency, CE2_Agency, CE3_Agency, and CE4_Agency, represent the agencies running on Computing Entity 1, Computing Entity 2, Computing Entity 3, and Computing Entity 4, respectively. The hostnames of Computing Entity 1, Computing Entity 2, Computing Entity 3, and Computing Entity 4 are bird2, ch, phoenix, and shrimp, respectively. In addition, four agents, regulating_agent1, regulating_agent2, regulating_agent3, and regulating_agent4, represent the stationary agents running on agencies CE1_Agency, CE2_Agency, CE3_Agency, and CE4_Agency, respectively.
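As a concrete illustration of the dynamically composed MPI environment configuration, if Computing Entity 3 (phoenix) refuses to contribute, the resulting machine file would list only the contributing hostnames. The file name and layout below are assumptions; the exact configuration statements and launch options vary across MPI implementations.

```
# machinefile: hosts forming the MPI parallel computing environment
# (phoenix is absent because Computing Entity 3 refused to contribute)
bird2
ch
shrimp
```

The computation could then be launched across the three contributing entities with a command along the lines of `mpirun -np 3 -machinefile machinefile ./matrix_multiplication`, where the program name is hypothetical.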



Figure 4: The execution output from agency CE4_Agency where stationary agent regulating_agent4 requests resource contribution from other stationary agents.



Figure 5: The execution output from agency CE1_Agency where stationary agent regulating_agent1 agrees to the request for resource contribution.



Figure 6: The execution output from agency CE2_Agency where stationary agent regulating_agent2 agrees to the request for resource contribution.



Figure 7: The execution output from agency CE3_Agency where stationary agent regulating_agent3 refuses the request for resource contribution.

Source Code for the Example

Agencies View
Stationary agents View
Performance evaluation View
Matrix multiplication View
Download the sources