
International Journal of Advanced Computer Science, Vol. 3, No. 8, Pp. 400-408, Aug., 2013.

Development of Scientific Applications with High-Performance Computing through a Component-Based and Aspect-Oriented Methodology
Javier Corral-García, César Gómez-Martín, José-Luis González-Sánchez and David Cortés-Polo
Manuscript
Received: 30 Apr. 2013; Revised: 22 May 2013; Accepted: 2 Jul. 2013; Published: 2 Jul. 2013

Keywords
high performance computing, methodology, framework, reuse, HPC, AOP, CBSE.

Abstract
Scientific researchers face critical challenges which require an increased role of High-Performance Computing (HPC). In many cases, these users, who are specialists in their fields of action, have no previous training or lack the skills required to face these challenges, or just want to compile and run their programming codes as soon as possible. Sometimes this leads to the risk of being counterproductive in terms of efficiency because, after all, researchers may have to wait longer for the final result due to a wrong programming model, a wrong software architecture, or even errors in the parallelization of sequential code. However, there is a clear lack of approaches with specific methodologies or optimal working environments for the development of specific HPC software systems. Moreover, although there are several frameworks based on Aspect-Oriented and Component-Based Programming for supercomputing, they are focused on the design and implementation phases, while none is based on the reuse of components from the earliest stages of the development, which are defined in the Requirements Engineering. The aim of this proposal is to provide new solutions to the open challenges in high-performance computing, through a methodology and a new framework based on aspect-oriented components for the development of scientific applications for HPC environments. The objective is to allow researchers and users to create their HPC programs in a more efficient way, with greater reliance on their functionality, achieving a reduction of time, effort and cost in the processes of development and maintenance through the reuse of components (with already developed and tested parallel source codes) from the earliest stages of the development.

1. Introduction
Even outside supercomputing, continuous advances and improvements in hardware have been so rapid that, in just 20 iterations of Moore's Law1, micro-architecture is almost frozen and clock frequency is reaching its limits. Consumers are used to that pace and continuously demand
CénitS. Research, Technological Innovation and Supercomputing Center of Extremadura. { javier.corral, cesar.gomez, joseluis.gonzalez, david.cortes }@cenits.es

1 Moore's Law states that the number of transistors on an integrated circuit doubles approximately every two years.

faster computers. As the number of integrated transistors grows, manufacturers and hardware vendors have begun offering multi-core processor chips to be more competitive and to satisfy the demand, with dual- and quad-core computers becoming very common consumer products in recent years [1]. When an application uses only a single thread, there is only one computing unit working; the remaining cores are idle and do not improve execution time. Multi-threaded applications are nowadays more necessary because they use multi-core architectures in a more efficient manner [2]; however, parallel programming has several disadvantages. Programming sequential code is already a hard task for scientists and experts of different knowledge areas, so when those users have to face parallel programming they also have to deal with synchronization, data integrity or interdependencies, which are very specific problems that require complex debugging techniques and deep knowledge. There are other parallelization-related issues, such as the amount of existing sequential code that is difficult or even impossible to parallelize. Computational scientific research is faced with critical challenges that require High-Performance Computing (HPC). Nowadays, when users begin to work with HPC, their first problem is to face the parallelization of their own sequential codes. In some cases, creating sequential codes is already a complicated task for experts of specific branches of science which are very far from computer programming. Besides this, the learning curve becomes steeper when these users find themselves obliged to use the power offered by high-performance computers, so obtaining a solution becomes even more difficult.
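The paper itself contains no code, so the following sketch (ours, in Python; the function names are invented for illustration) makes the contrast concrete: a per-element kernel whose iterations are independent can be distributed over several cores, while the single-threaded version keeps only one core busy.

```python
# Illustrative sketch (not from the paper): the same per-element kernel run
# sequentially on one core and distributed over a pool of workers. Only
# loops whose iterations are independent can be split up this way.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def kernel(x):
    # Stand-in for an expensive, independent per-element computation.
    return x * x + 1

def run_sequential(data):
    # One thread, one busy core; the remaining cores stay idle.
    return [kernel(x) for x in data]

def run_parallel(data, pool_cls=ProcessPoolExecutor, workers=4):
    # Iterations are independent, so the pool may run them on separate
    # cores; map() still returns results in input order.
    with pool_cls(max_workers=workers) as pool:
        return list(pool.map(kernel, data))

if __name__ == "__main__":
    assert run_sequential(range(4)) == run_parallel(range(4))
```

A real HPC code would of course use MPI or OpenMP rather than a Python pool; the sketch only illustrates the independence requirement that the synchronization and interdependency problems mentioned above violate.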
In fact, one of the biggest problems in the development of high-performance scientific applications is the need for programming environments that allow source code development in an efficient way. Experience has shown that it would be very useful for researchers to follow a specific methodology and use a framework to make the development of their scientific codes and applications easier. However, users have to deal with the challenges of using these computers on their own, because of a clear lack of approaches with specific methodologies or optimal working environments. In many cases, those users, who are specialists in their fields of action, have no previous training or the required skills to face complex programming problems, or just want


to compile and run their programming codes as soon as possible. Sometimes this leads to the risk of being counterproductive in terms of efficiency because, after all, researchers may have to wait longer for the final result due to a wrong programming model, a wrong software architecture, or even errors in the parallelization of sequential code. The proposed approach aims to meet the identified needs and to provide new solutions to the open challenges in high-performance computing, through the creation of a methodology and a framework based on aspect-oriented components for the development of scientific applications for HPC environments. The objective is to allow researchers and users to create their programs in a more efficient way, with greater reliance on their functionality, achieving a reduction of time, effort and cost in the processes of development and maintenance through the reuse of components with already developed and tested parallel source codes. The rest of the paper is structured as follows: Section 2 outlines the main motivations and objectives taken into account in the approach. Section 3 presents some of the most important related works. Section 4 explains the benefits of using component-based and aspect-oriented software techniques from the earliest stages of the development. The way the composition methodology and the framework are developed is shown in Sections 5 and 6. Finally, the uses of the research proposal, the validation and future work, together with some conclusions, are sketched in Section 7.

2. Motivations and Objectives

Scientists spend an increasing amount of time building and using software. They typically develop their own software, and doing so requires substantial domain-specific knowledge. However, most scientists have never been taught how to do this efficiently. As a result, many of them are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. Recent studies have found that scientists typically spend 30% or more of their time on software development, yet 90% or more of them are primarily self-taught [3]. With regard to this fact, when dealing with supercomputers or clusters it is very important to accurately choose the programming model and architecture that best fit them, but deciding which programming model is better or faster is not a trivial task for scientists. There are many motivations to help researchers find the proper programming techniques, and most of them are related to optimal parallel scaling. It is understood, therefore, that future frameworks and layers of abstraction should help scientists to solve these computer-architecture and parallel-programming problems.
The proposed approach aims to allow the development of high-performance scientific codes by combining the benefits of Aspect-Oriented Software Development (AOSD) with those offered by Component-Based Software Development (CBSD), taking the Requirements Engineering into account during all the stages of the development. Although the amount of aspect-oriented techniques applied to component modelling is still relatively limited, they have already been explored with success in different approaches. On the one hand, Component-Based Programming (CBP) is getting closer to HPC parallel computing. However, despite its advantages, standard components and implementations usually show known handicaps when applied to parallel code development, due to a lack of the necessary abstraction and poor performance. On the other hand, Aspect-Oriented Programming (AOP) can make a substantial difference in distributed systems and high-performance computing, being especially useful for solving problems in a better and more efficient way while spending less time and effort. AOP makes it possible to encapsulate in well-defined entities the different concepts that compose an application and to remove dependencies between modules. Thereby, it is easier to reason about the concepts and to get rid of scattered code. Furthermore, implementations are more understandable, adaptable and reusable.

3. Related Work
Although parallel programming techniques have evolved greatly in recent years, the modern paradigms of Software Engineering are rarely applied to HPC [4]. There are several approaches to structured parallel programming, but they are based on object-oriented paradigms [5-6]. Proposals using aspects for concern modularization are usually based on AspectJ [7], an aspect-oriented extension of the Java programming language. Although the amount of work in this area is relatively limited, important works can be found in [8-10]. Aspect-oriented modelling techniques have been explored in several studies. In [11], the authors apply AOP techniques to the Enterprise Java Beans (EJB) component model. However, this approach is limited to improving the control over the calls. Other component-based and HPC-oriented approaches are CCA [12], ASSIST [13] and PaCO [14]. An Aspect-Oriented Framework is presented in [15] for the parallel and distributed solution of numerical problems, as an approach to applying the AOP paradigm to HPC. Besides achieving the usual advantages of improved modularity and a reusable code that is easier to develop and maintain, the proposal seeks to improve efficiency by means of dynamic changes of aspects at runtime. Among other approaches that have been found, the following works are also noteworthy: in [16], the programmer develops the functionality of the components and describes non-functional aspects in a declarative way, while the framework implements the requirements; in [4], a way to extend SBASCO is described. SBASCO is a component-based model focused on developing scientific
software, with the aim of defining new concepts and abstractions for handling crosscutting functionalities through AOP. An efficient implementation of high-level mechanisms is also introduced, based on MPI and focused on distributed-memory parallel systems. A quantitative study of the benefits of using AOP in component-based applications is proposed in [17]. In [18], a model is shown where the metadata of the proposed components are the core of the approach; it separates the details of the different aspects that make up each application and allows the development of code with a minimum set of generic methods. Finally, [19] proposes StGermain, a framework that greatly simplifies the development of HPC models by breaking up parallel scientific applications into hierarchical architectures and supporting applications developed collectively. Easy-to-use libraries of parallel algorithm implementations are another alternative. For instance, libraries such as STAPL [20,21] provide parallel container classes that allow writing scalable parallel programs on distributed-memory machines. There is a relatively large body of work with goals similar to STAPL's, but only STAPL is intended to automatically generate recursive parallelization without user intervention. However, judging from publications, only a few of the STL (Standard Template Library) algorithms have been implemented, and those that have been implemented sometimes deviate from the C++ STL semantics [22]. In summary, it is important to highlight that no approach has been found that defines a specific, updated and complete methodology for the development of specific software for HPC. Moreover, although there are several supercomputing frameworks based on AOP and CBP, they are focused on the design and implementation phases, while none is based on the reuse of components from the earliest stages of the development, which are defined in the Requirements Engineering.
A. Automatic and Computer-Aided Parallelization
Parallelism can be achieved implicitly or explicitly: the first is transparent to the programmer, but the second requires the inclusion of methods and directives to properly drive parallel execution. Implicit parallelism is automatic but less efficient, while explicit parallelism is more efficient but tougher. In 1993, an add-on was proposed to automate the construction of parallel nested loops [23]. The first step was the analysis of the source code to obtain an abstract syntax tree and its translation into a linear algebraic representation. The next step was the rearrangement of the code execution using an order function. This function generates a polyhedral model, which is the main contribution of the proposal, later adapted to distributed- [24] and shared-memory [25] approaches and also optimized for multi-core processors [26]. In [27], different optimizations of speculative loop fission, speculative pre-materialization, and isolation of infrequent dependence mechanisms are

proposed to discover parallelizable sequential code; a new tool is presented in [28], where a compiler improves the parallelization of irregular pointer-intensive codes and of coarse-grained loops operating on data structures; in [29], an affine dependence analysis of binary files is proposed. It is a complex method, restricted to affine loops, which obtains dependence vectors and recognizes affine low-level code to parallelize, for instance, registers or induction variables. In spite of the complexity and the limitations of this method, the authors point out that it can be applied independently of the compiler or the programming language. All the proposals mentioned above are especially suited to scientific applications with a high number of loops and references to vectors. General-purpose applications take advantage of parallelism by running independent methods or even entire source-code blocks on different cores [2]. Because of the difficulty involved in the process of parallelizing sequential source codes, and the advantages and drawbacks shown by implicit and explicit parallelization, several authors propose tools that combine both techniques. For instance, in [30] ParaGraph is presented to help programmers in the generation of parallel code; SUIF Explorer [31] combines static and dynamic compiler analysis with instructions introduced by the programmer; Polaris [32] uses directives introduced by the user in the parallelization process; ParaWise [33] allows code generation for multiprocessor and distributed systems. Finally, HTGviz [34] provides automatic and manual parallelization for Fortran applications. These proposals usually present worse results in automatic mode. For instance, ParaGraph only achieves a 1.25x speed-up in automatic mode, compared to the 3x speed-up of the hybrid case. Therefore, human intervention is still required, at least for now.
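The gap between automatic and hand-guided speed-ups can be read through Amdahl's law, which bounds the achievable speed-up by the fraction of the runtime that is actually parallelized. The sketch below is ours; the parallel fractions are illustrative assumptions, not figures reported for ParaGraph.

```python
# Hedged sketch: Amdahl's law, a standard way to reason about speed-ups
# like those quoted above. The parallel fractions are assumptions chosen
# for illustration, not measurements from any of the cited tools.
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speed-up when only part of the runtime parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# If an automatic tool only parallelizes ~25% of the runtime, even 16
# cores barely exceed 1.3x ...
print(round(amdahl_speedup(0.25, 16), 2))  # 1.31
# ... while hand-guided parallelization covering ~70% on 8 cores
# approaches the 3x regime.
print(round(amdahl_speedup(0.70, 8), 2))   # 2.58
```

This is one plausible explanation of why human intervention still pays off: the programmer enlarges the parallel fraction, which the bound rewards far more than adding cores does.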

4. Application of Component-Based and Aspect-Oriented Software Development from the Earliest Stages
Properties such as complexity, heterogeneity, scalability and code reuse are taken into account in Software Architecture, which is presented today as a solution for the design and development of complex computing systems. However, there is still no consensus on the different concepts and approaches that should be used in this area. Component-Based Software Development (CBSD) decomposes the system into reusable entities called components2 [35], providing services to the rest of the
2 Although there are many definitions of the term component, one of the most commonly used is Szyperski's: "A software component is a unit of composition with contractually specified interfaces and explicit context dependencies only. A software component can be deployed independently and is subject to composition by third parties."


system by encapsulating their functionalities. As mentioned above, reuse reduces development time while ensuring adequate quality and functionality. Furthermore, the establishment of COTS (Commercial Off-The-Shelf) components has become essential, because the tools that allow the reuse of components reach optimal outcomes [36]. Component-Based Programming (CBP) is being introduced into high-performance parallel computing. However, despite its advantages, standard components and implementations such as OMG CCM, DCOM or Sun/Enterprise Java Beans still share known deficiencies in the development of parallel scientific applications, due to the lack of the necessary abstraction and poor performance. They also have some difficulties when encapsulating components of already existing scientific applications [15]. Another approach to improving software reuse is Aspect-Oriented Software Development (AOSD), which allows the separation of concerns3 by modularizing crosscutting concerns4 in separate entities called aspects. As a result, the aspect itself can be reused by different software artefacts, which are usually objects [38]. In recent years, Aspect-Oriented Programming (AOP)5 has shown that, when crosscutting interests are modelled as aspects that are not decomposed into functional units, there are significant improvements in high-performance and distributed systems, especially in reducing implementation time and effort [39]. Searches, performance analysis or execution traces are simple examples where the improvements brought by the use of aspects in supercomputing are evident; they help to properly combine communications and computing capacity to achieve a better parallel execution time.
A. Benefits of using AOP
Several studies have stated the benefits of using AOP over other software development techniques. In addition, a paper about the methodology and results of a systematic review of empirical studies on the benefits and limitations of AOP was presented in [40].
Over three thousand papers were identified, and twenty-two were analyzed in depth. The paper shows the advantages and disadvantages of particular characteristics and concludes with the promising effects of AOP on performance, length of the code, modularity and evolution, highlighting the future potential of AOP. Only a few papers reported a negative impact of certain characteristics. For
instance, exception handling is unlikely to be improved under AOP models. After studying all the benefits, we can conclude that great benefits can be achieved using Aspect-Oriented Programming. Specifically, AOP provides advantages such as non-invasive changes when major changes have to be introduced, improvements in the comprehension of the source code, reduction of code complexity, and ease of development, integration, customization and reuse.
B. Benefits of combining AOP with CBSD
The combination of CBSD and AOSD achieves a significant time and effort reduction. AOP is especially useful for CBP in the situations, quite frequent in practice, in which a system cannot be built from existing components without interfering with them while maintaining the former design of their interfaces. Sometimes the implementation of the components is too far from what the programmers really need, and it is not correct to break the already existing modularity of the components. There are several examples of this in [41]. In those cases, AOP techniques have proven to be useful in modifying component behaviours, building a new system from a previous one with an appropriate interface. Thus, aspects allow programmers to have totally new capabilities on existing components. It is important to remark that, because this procedure can also be applied recursively, it could end up destroying the modularity of the system if the programmer does not use it properly [42].
C. Importance of the early stages of software development
Software systems are becoming more and more dynamic, in the sense that they require continuous changes and adaptations to face new situations. Those features, along with the division of interaction rules and component functionalities, will ease future modifications and software reuse, so that changes in the system structure will only affect the directly involved elements, without the need to modify the rest.
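The component-wrapping idea of subsection B can be sketched in a few lines. This is our own illustration, not the paper's framework: Python decorators stand in for real AOP weaving (e.g. AspectJ), and all names are invented. A crosscutting concern (timing) is attached to an already-tested component without touching its source or its interface.

```python
# Minimal sketch (ours): an "aspect" adds timing to an existing component
# without modifying it. Decorators approximate AOP weaving here.
import functools
import time

def timing_aspect(func):
    # Crosscutting concern kept out of the component itself.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        wrapper.last_elapsed = time.perf_counter() - start
        return result
    wrapper.last_elapsed = 0.0
    return wrapper

class Solver:
    """An already-developed and tested component; it knows nothing about timing."""
    def solve(self, n):
        return sum(i * i for i in range(n))

solver = Solver()
# Weave the aspect onto the existing component at runtime; the functional
# interface and result are unchanged.
solver.solve = timing_aspect(solver.solve)
print(solver.solve(1000))  # 332833500, same as the unwrapped component
```

Applied recursively and carelessly, exactly this kind of wrapping is what can end up destroying the modularity of the system, as noted above.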
Bearing that in mind, the division of aspects can be considered a trend in the early stages of software development [43-45]. With the aim of making the development of applications easier, so that the modification of the system gets simpler and the reuse of components is promoted, many programming languages and coordination models have been proposed [46-50]. However, those models focus on the design and implementation stages, relegating to these stages the separation of the functionality of the system components from the interaction aspects which determine the dependencies between them. Apart from hindering the reuse of already developed components, this represents a dramatic change for the conceptual model of the system, where software functionality and component dependencies are mixed. The solution to these problems is to bear in mind all the functional and coordination aspects separately from the earliest stages of development, so it is necessary to have modelling tools that allow describing, in a differentiated way, the

3 In software engineering, the term concern has been defined as "any matter of interest in a software system" [37], i.e. a concern is any functionality or necessary requirement of the system.
4 Some concerns can be easily encapsulated, but others (crosscutting concerns) affect already defined modules. Such concerns must be implemented across many classes or modules, producing tangled and/or scattered source code that is hard to modify or understand.
5 Within AOSD there are several approaches, including Adaptive Programming, Composition Filters, Multi-Dimensional Separation of Concerns, Subject-Oriented Programming and Aspect-Oriented Programming.


functional components and the existing dependencies between them. The validation of the requirements must also be considered from the earliest stages of development, to ensure that the software meets customer requirements; thus, validation has been studied in many recent methods and conceptual modelling tools, due to the significant impact that the definition of these requirements has on the final quality of the product [51]. An obvious benefit is the ability to avoid defects as soon as the initial requirement specifications are set, so that the costs derived from a possible defect do not affect the design and implementation stages [51-54]. In this regard, the behavioural simulation of the system, which has emerged as one of the most efficient techniques for validation, can also be considered the most appropriate to determine whether the overall system behaviour properly reflects the required dependencies between independent functional components.
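The behavioural-simulation idea can be illustrated with a deliberately tiny sketch of ours (not the paper's tool, and with invented component names): before any implementation exists, one can simulate the order in which components would fire and check it against their declared dependencies.

```python
# Illustrative sketch (ours): validate a proposed execution order against
# declared inter-component dependencies, before any code is written.
def simulate(order, dependencies):
    """dependencies maps each component to the set of components it requires.
    Returns True iff every component fires only after its requirements."""
    done = set()
    for component in order:
        if not dependencies.get(component, set()) <= done:
            return False  # a requirement has not run yet
        done.add(component)
    return True

# Hypothetical composition: a solver needs a mesh, and both need input data.
deps = {"solver": {"mesh", "input"}, "mesh": {"input"}}
print(simulate(["input", "mesh", "solver"], deps))  # True: dependencies respected
print(simulate(["solver", "input", "mesh"], deps))  # False: solver fires too early
```

Catching the second ordering at this stage is exactly the kind of early defect detection that keeps its cost from reaching the design and implementation stages.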

5. Composition Methodology
This paper proposes the definition of a methodology for the development of parallel applications for high-performance computers through functional components and the definition of the dependencies between them, according to the business rules set by users through a web service. The methodology will focus on the development of pattern-based scientific applications. The proposed methodology aims to provide the foundations to enable and ease the tracing of system requirements, identifying all software components involved in the development of each requirement during the different stages of the life cycle. Traceability mechanisms allow monitoring the configuration elements of the system from the baseline of each of the initial requirements, so any change in these conditions can be controlled without side effects on other components of the development. Similarly, the methodology allows the validation of the system behaviour in its different stages. It takes into account the mechanisms to follow when the rules (which represent the dependencies between the different components of the system) change, the system is extended, new facilities are added, or existing functional components are modified or even replaced. The use of this methodology will allow the development of software systems with greater reliance on their functionality and will achieve a reduction of time, effort and cost in the processes of development and maintenance of supercomputing-oriented software systems. The proposed methodology determines the activities to be performed at each stage of the development, covering the following phases:
- Choice of mechanisms to collect and specify the requirements.
- Definition of functionality and dependencies based on the requirements.
- Selection from the repository of the functional components and dependencies that can be reused.

- Identification, specification and development of new components and dependencies that will be incorporated into the repository once the project is finished.
- Description, at each stage, of objectives, elements and system configurations together with the necessary information.
- Determination of information flows between consecutive stages.
- Definition of validation mechanisms for each stage of the development.
- Identification of the technical tools needed to perform each specific activity.
With the aim of being able to reuse each component in the future, it is necessary to clearly define the information needed from the repository for each functional component and dependence. This information is structured according to the different stages of the development, in such a way that specific information about components can be obtained later, in the requirement definition, architectural design, detailed design and implementation. Based on the information in the project repository, processes will be defined to track the evolution of the software configuration elements from their liaison and their requirements baseline. As a result, it will be possible to identify the affected configuration items when a requirement changes. Although component-based software engineering has become very popular during the past few years, searching for and reusing suitable software components remains a difficult task, especially when facing a large collection of components and limited documentation about how they can and should be used.
To make the above task easier, the methodology includes the use of several techniques to shape the repository of components, such as: automatic indexation (based on previously obtained information about the component, to ensure consistency); the use of a high-level characterization of component capabilities (instead of taking into account names, formal specifications, comments, etc.); the use of the context in which the component is reused in the recovery tool (to guide query formulation); and automatic management of functions to configure and validate components (to ensure that components are initialized and correctly validated for the context in which they are reused). Thus, our representation model aims to integrate components coming from different models and domains. To make this task easier, each type of aspect will have a series of properties with specified and limited values, describing detailed information in such a way that each aspect can be retrieved later for reuse with other components. In this way, high-level information is stored about several kinds of component capacities and can also be used by end users in component-based applications. The information will be used not only to know the aspects which are required and provided by each component, but also to obtain rules to validate their configuration. In addition, it is considered that the aspects

of each component that are stored in the repository have a number of additional details in order to accurately describe the characteristics of the aspect-related component. Each detail will have additional information indicating its functional and non-functional features, so it can be used to describe the aspects and the links between them in a more formal way. Thus, each component will present its most outstanding features. In this way, components, aspects, details about provided or received aspects, and detailed properties with values or restrictions will be defined in the repository.
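A repository entry of the kind just described could take the form of an XML descriptor (Section 6 mentions XML-based files). The schema below is hypothetical, invented by us to illustrate the per-aspect metadata: the element names, attributes and values are not defined anywhere in the paper.

```python
# Hedged sketch (ours): a hypothetical XML descriptor for one repository
# component, listing its provided and required aspects and the limited-value
# properties attached to each aspect. The schema is invented for illustration.
import xml.etree.ElementTree as ET

DESCRIPTOR = """
<component name="matrix-solver" model="mpi">
  <aspect kind="provided" name="linear-solve">
    <property name="precision" value="double"/>
    <property name="max-ranks" value="128"/>
  </aspect>
  <aspect kind="required" name="mesh-partition"/>
</component>
"""

def load_descriptor(xml_text):
    # Extract the high-level capability information used to index the
    # component and to validate its configuration for reuse.
    root = ET.fromstring(xml_text)
    provided = [a.get("name") for a in root.findall("aspect")
                if a.get("kind") == "provided"]
    required = [a.get("name") for a in root.findall("aspect")
                if a.get("kind") == "required"]
    return root.get("name"), provided, required

name, provided, required = load_descriptor(DESCRIPTOR)
print(name, provided, required)  # matrix-solver ['linear-solve'] ['mesh-partition']
```

Indexing on such capability metadata, rather than on names or free-text comments, is what lets the recovery tool answer queries like "which components provide linear-solve?".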

6. Framework for the Composition of Components and Dependencies

The aim of the framework is to make the development of systems easier, following the methodology proposed as the first goal. The environment that we are developing will have functions for defining the system requirements, identifying and reusing components to build the system, defining new components to be incorporated into the system, and maintaining a repository of reusable components. The framework will ease the system trace mechanisms and will provide the necessary tools for system validation by behaviour simulation. Finally, the development of this environment will also test the proposed methodology, because it will be developed using the steps specified therein. To support the proposed methodology, a composition framework is proposed to make the development of software systems easier on high-performance computers. This environment aims to have functions in order to:
- Obtain and introduce components and dependencies into the repository, together with the associated information for each stage of the methodology.
- Define and specify the requirements.
- Identify functional components and dependencies.
- Compose the system from the identified components and dependencies.
- Generate configuration elements and update them in the repository.
- Trace the system requirements and manage project settings.
- Generate documentation, diagrams and specifications associated with each stage of the methodology.
- Validate system specifications by behaviour simulation.
- Generate the executable system.
In addition, trace mechanisms will be facilitated by providing tools for system validation through the simulation of its behaviour. We have already begun to develop a prototype of the framework, where parallel programming is being carried out with the MPI message-passing library on the LUSITANIA supercomputer. Functional parameters of each component and the dependencies between them will be described mainly with XML-based files. Several tools have been considered for the implementation of the framework; Fig. 1 shows the general scheme of the proposed framework.
A. Project and Components & Dependencies repositories
The framework outlined in Fig. 1 has two repositories, one for the project and the other for components and dependencies; both of them are generated from the information defined in the methodology, which is structured according to the stages of the development in order to obtain all the necessary information about each component.
International Journal Publishers Group (IJPG)

Fig. 1 General scheme of the proposed framework

Fig. 2 Framework for the Composition of Components and Dependencies

For this purpose, the information about the configuration elements involved is taken into account, determining which elements have to be established at each stage, as well as the representation of the relationships between the different elements and the baselines of each initial requirement. While the repository with project data has restricted access, the repository of components and dependencies is openly accessible. Fig. 2 shows the framework for the composition of the required components and dependencies.
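One simple way to picture the relationship between initial requirements, configuration elements and baselines is as a set of traceability records. The sketch below is a hypothetical illustration of this idea (the class, record fields and example identifiers are invented, not taken from the framework):

```python
# Hypothetical sketch of requirement-to-configuration-element traceability.
# The record fields and the example data are invented for illustration.
from collections import defaultdict

class TraceRepository:
    def __init__(self):
        # requirement id -> set of configuration elements established for it
        self._links = defaultdict(set)

    def link(self, requirement, element):
        self._links[requirement].add(element)

    def elements_for(self, requirement):
        """Configuration elements established for one initial requirement."""
        return sorted(self._links[requirement])

    def baseline(self):
        """A baseline: the frozen mapping of all traced elements at this point."""
        return {req: sorted(elems) for req, elems in self._links.items()}

repo = TraceRepository()
repo.link("REQ-01", "component:solver")
repo.link("REQ-01", "spec:solver-interface")
repo.link("REQ-02", "component:io-layer")
print(repo.elements_for("REQ-01"))  # ['component:solver', 'spec:solver-interface']
```

A record structure like this is also what makes it possible to notify developers when a change in one requirement affects already-composed elements.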



B. Development Tool Based on Component and Dependency Composition

Following the proposed methodology, the following main features are considered:
- Definition and specification of system requirements.
- Identification and selection of components and dependencies.
- Definition of new components and dependencies.
- Generation of the architectural structure of the system.
- Generation of the detailed specification of new components and dependencies.
- Generation of executable code.
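Generating the architectural structure of the system from the selected components requires, among other things, ordering them so that every component is composed after the components it depends on. A standard topological sort over the dependency graph, sketched below with hypothetical component names, is one way to do this:

```python
# Ordering selected components so each one is composed after its dependencies.
# A topological sort over the dependency graph; component names are hypothetical.
from graphlib import TopologicalSorter

# component -> set of components it depends on
dependencies = {
    "application": {"solver", "io-layer"},
    "solver":      {"mpi-runtime"},
    "io-layer":    {"mpi-runtime"},
    "mpi-runtime": set(),
}

build_order = list(TopologicalSorter(dependencies).static_order())
print(build_order)
# "mpi-runtime" always comes first and "application" always comes last;
# the relative order of "solver" and "io-layer" is not constrained.
```

TopologicalSorter also raises a CycleError on circular dependencies, which gives a natural early check before the system is composed.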

C. Validation and Simulation Behaviour Tool

The framework offers a tool for the validation of the initial specifications and of the architectural design based on system requirements; it has been incorporated into the aforementioned tool for the composition of components and dependencies. Among the particular features of this tool, the following can be highlighted:
- The plugins needed to work with simulation environments are added to the development tool.
- Use of validation tools at specific stages of the development.
- Inclusion of mechanisms to visualize results.
- Mechanisms to compare results against expected ones.
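The mechanisms to compare expected results could, for example, check simulated outputs against reference values within a numerical tolerance, since parallel executions may reorder floating-point operations. The following sketch illustrates the idea; the function name, data and tolerance are illustrative assumptions, not part of the tool:

```python
# Hypothetical sketch of a result-comparison mechanism for behaviour simulation.
# Parallel runs may reorder floating-point operations, so values are compared
# within a relative tolerance instead of exactly.
import math

def results_match(simulated, expected, rel_tol=1e-9):
    """True if every simulated value matches its expected value within rel_tol."""
    if len(simulated) != len(expected):
        return False
    return all(math.isclose(s, e, rel_tol=rel_tol)
               for s, e in zip(simulated, expected))

expected  = [1.0, 2.5, 4.0]
simulated = [1.0, 2.5000000000001, 4.0]  # tiny rounding difference

print(results_match(simulated, expected))  # True: difference is within tolerance
```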

D. Trace Tool

Finally, the proposed development framework adds a tool to define the mechanisms and processes that trace and monitor the requirements of each project from the stored and updated information of its corresponding repository. This tool makes it possible to track the trace of system configuration elements, to access and update project configuration elements, and to generate reports and notices that involve developers when changes in system requirements occur. As shown in Fig. 2, the end user accesses the system through a web service that connects to the framework; the framework is in turn connected to the repository and uses a tool to monitor the evolution of the components. Finally, the system behaviour is validated and simulated on the HPC architecture.

7. Uses of the Research Proposal

The methodology and the framework are going to be validated in a supercomputing centre whose aim is to promote and disseminate HPC services and advanced communications to the research community. The first version is being developed on an SMP-ccNUMA system with two HP Integrity SuperDome SX2000 nodes, although the final objective of this work is to extend the proposal to different programming models, platforms and architectures. To carry out their work in HPC, researchers and users have to learn how to exploit complicated aspects of the SuperDome such as workload balancing, data locality, memory footprint and fast communications. For instance, they have to use large shared-memory nodes and know how to maximize performance by combining OpenMP, for loop and thread parallelization, with MPI, for inter-node process communication. The research carried out in supercomputing centres is multidisciplinary and heterogeneous, with many researchers running their own codes every day. Experience has shown that following a specific methodology, and having a framework with tools that make the creation and development of their scientific applications easier, would be very useful for them. In this way, they could devote more effort to their main task which, in most cases, is to do research, not to develop software or run programming codes.

8. Validation and Future Work

Currently, validating the proposal properly is a complicated task, because neither the methodology nor the framework is finished yet. However, it is important to highlight that the proposal has been built on a substantial theoretical foundation supported by a thorough review of the state of the art. Although the presented approach is an ambitious project of remarkable complexity, future work is expected to validate the solution within a short period of time, also measuring its performance. The validation will be performed over several architectures and documented as soon as the initial version is ready, by a large group of researchers and HPC users with different skills and experience in specific branches of science. A prototype, in which the parallel programming is carried out by means of both the MPI and OpenMP standards, is currently being developed, but it is estimated that the first fully functional version will take around two years, while the possibilities of turning the proposal into a standard for high-performance computing are studied.

9. Conclusions

A methodology and a framework have been presented as an approach to help researchers and users create their HPC programs in a more efficient way, with greater confidence in their functionality, and to reduce the time, effort and cost of the development and maintenance processes through the reuse of components (with already developed and tested parallel source codes) from the earliest stages of the development. The proposal, built on a substantial theoretical foundation supported by a thorough review of the state of the art, explains the benefits of using component-based and aspect-oriented software development from the earliest stages of the development, and the way the composition methodology and the framework are being developed. The first version will be developed on an SMP-ccNUMA system with two HP Integrity SuperDome SX2000 nodes, although the final objective is to extend the proposal to different HPC programming models, platforms and architectures.

Acknowledgment
This research is part-financed by the ERDF Fund Programme: Extremadura Operational Programme 2007-2013, Development of the Knowledge Economy (R&D&I: Information Society and ICTs), Research and Technology Development (R&TD) Activities in Research Centres.

