
UNIVERSITY of CALIFORNIA, SAN DIEGO

A modular architecture for rapid development of model-based real-time systems.

A thesis submitted in partial satisfaction of the requirements for the degree Master of Science in Computer Science by Walter Phillips

Committee in charge: Professor Ingolf Krueger, Chair Professor Geoffrey M. Voelker Professor Stefan Savage

2006


The thesis of Walter Phillips is approved:

_____________________________________________
Geoffrey M. Voelker

_____________________________________________
Stefan Savage

_____________________________________________
Ingolf Krueger, Chair

University of California, San Diego 2006


Dedication

Harry J. Phillips 1932-2004


Table of Contents

Signature Page
Dedication
Table of Contents
List of Figures
Abstract of the Thesis
1. Introduction
    1.1. Inadequacies In The Traditional Software Development Process
    1.2. An Improved Approach to Software Development
    1.3. Applications in the Automotive Domain
        1.3.1. Case Study: The Automotive Central Locking System
    1.4. Model-based Development
    1.5. Tool-Chain Overview
    1.6. Related Work
    1.7. Chapter Overview
2. Model, Code Generation, and Runtime Architecture
    2.1. Goals
        2.1.1. Minimize complexity
        2.1.2. Straightforward Code Generation
        2.1.3. Code Modularization
    2.2. Code Generator Input
        2.2.1. Templates
        2.2.2. XML Model Specification
    2.3. Runtime System Foundation
        2.3.1. Runtime Framework
    2.4. Code Generator Output
        2.4.1. Interface Definition Language (IDL) File
        2.4.2. Common Library
        2.4.3. Runtime Components
            2.4.3.1. Environment
            2.4.3.2. Execution Components
            2.4.3.3. Monitor
3. Design Flow, Tools, and Artifacts
    3.1. Modeling Tool
        3.1.1. Common Model Specification
        3.1.2. Model Simulation & Validation
        3.1.3. Model Execution
    3.2. Runtime System Architecture
        3.2.1. CORBA Primer
        3.2.2. Runtime System Execution Framework
4. Running Example
    4.1. Central Locking System Review
    4.2. Development Process Overview: From Abstract Model to Executable
    4.3. Model and Code Examples
5. Implementation Platform, Runtime System, and Code Generator
    5.1. Code Generator Evolution
    5.2. CORBA Runtime Communication Middleware
        5.2.1. CORBA's IDL Compiler
        5.2.2. CORBA's Naming Service
        5.2.3. CORBA's Real-Time Event Service (RTES)
    5.3. A Modular Runtime Platform: Implementation Details
        5.3.1. Framework
        5.3.2. CommonLibrary
        5.3.3. Execution Components
            5.3.3.1. Model Components (Component[Name])
            5.3.3.2. ComponentEnvironment
            5.3.3.3. ComponentMonitor
    5.4. Code Generation Approach
    5.5. Code Generator Architecture
        5.5.1. Initial Code Generator: Comparison Architecture
        5.5.2. Initial Code Generator: Target Runtime Environment
        5.5.3. Current Code Generator: Target Runtime Environment (Overview)
6. Porting to Other Runtime Environments
    6.1. Porting Concepts: Basic Services
    6.2. Basic Services Comparison
    6.3. Web Service Integration
7. Design Analysis
    7.1. Design Motivation Analysis
    7.2. Execution Model Limitations
    7.3. CORBA Limitations
    7.4. Code Generator and Runtime System Redesign Analysis
    7.5. Runtime System Design Assessment
8. Future Work
9. Conclusion
Bibliography


List of Figures

Figure 1-1 Waterfall Approach
Figure 1-2 Cyclic Development
Figure 1-3 Concentric Development Cycle
Figure 1-4 Automotive Networks
Figure 1-5 Central Locking System
Figure 1-6 Code Generation Feedback Cycle
Figure 1-7 Tool-Chain Overview
Figure 1-8 Tool-Chain Entities
Figure 2-1 Code Generator - Overview
Figure 2-2 Example XML Specification
Figure 2-3 XML Component Node Specification
Figure 2-4 CommonLibrary Files
Figure 2-5 CORBA Component Inheritance
Figure 3-1 Code Generator Flow - Overview
Figure 3-2 Component View
Figure 3-3 CORBA
Figure 4-1 Central Locking System Review
Figure 4-2 Example Component State Machines
Figure 4-3 Tick Method Code
Figure 4-4 Template Example
Figure 4-5 Component Startup Code
Figure 4-6 Receive Event Code
Figure 4-7 Send Event Code
Figure 5-1 Example IDL File
Figure 5-2 Code Generation Flow - Detailed
Figure 5-3 Component Dependencies
Figure 5-4 Framework Class Diagram
Figure 5-5 Send Event - Overview
Figure 5-6 Framework Hierarchy: Multiple Platforms
Figure 5-7 Example Component State Transition
Figure 5-8 Component Startup: Message Sequence
Figure 5-9 Receive Event: Message Sequence
Figure 5-10 Send Event: Message Sequence
Figure 5-11 Interoperation Between C++ and C#
Figure 5-12 Runtime Component Hierarchy: General
Figure 6-1 Framework Porting Dependencies
Figure 6-2 Porting: Services and Libraries
Figure 6-3 Platform Specific Services Comparison
Figure 6-4 Web Service Based User Interface
Figure 6-5 Web Service Code
Figure 6-6 Example WSDL File
Figure 7-1 CORBA Limitation: Fully Interconnected Components
Figure 7-2 CORBA Limitation: Maximum Number of Components


ABSTRACT OF THE THESIS

A modular architecture for rapid development of model-based real-time systems

by

Walter Phillips

Master of Science in Computer Science

University of California, San Diego, 2006

Professor Ingolf Krueger, Chair

Increasing complexity in the design and implementation of distributed real-time systems motivates a reassessment of the tools and methodologies used to develop these software systems. We approach the problem by applying a systematic process of model-based development, which allows the developer to apply rigorous testing to an abstract system model. Although a wide range of modeling tools exists, these tools lack combined support for real-time properties and robust code generation. We address both concerns by developing a runtime Framework that supports real-time property specification and simplifies code generation through modularization and isolation of dynamic code.

Support for real-time property specification is accomplished through the use of the real-time middleware platform CORBA. This allows us to focus on using an API for real-time functionality rather than on how to implement that functionality. Our work automates the model-to-executable translation process, providing two benefits. First, code generation reduces coding and interpretation errors, allowing the developer to focus on the intricacies of the target platform rather than the general execution structure of the system. Second, automated translation of an abstract model into executable code allows us to take advantage of cyclic development. A wide range of systematic tests can be applied to different forms of the same model. Although validation and verification can be performed on the abstract model, some flaws can only be uncovered under real-world conditions. A concentric development cycle, as we propose, ultimately reduces production costs by eliminating many of the human-induced flaws in the resulting product.


1. Introduction

Distributed real-time embedded systems, such as those found in automotive systems, are challenging to develop. Real-time systems are typically safety-critical systems and, as such, must be subjected to rigorous validation, verification, and testing before they can be implemented in production systems. Complex real-time systems such as these are difficult enough to develop on their own; add strict time-to-market constraints and frequent modifications (in terms of both hardware and software requirements) and these systems become even more difficult to develop. Traditional software development techniques as applied to business software, for example, do not adequately address the difficulties that arise from real-time safety-critical systems. In order to address both safety and time-to-market constraints, we propose a methodology for rapid development through modeling. Modeling allows for a more rigorous and methodical application of validation, verification, and testing than can be attained from production executable code alone.

The traditional approach to software development includes steps for validation, verification, and testing, but they are often applied in a linear fashion and as a more informal process. In other words, changes applied in a traditional software development cycle may not be subjected to the rigorous validation process that was applied during initial development. Writing code, performing extensive testing, and implementing

modifications based on testing can be a time-intensive and very error-prone process. Testing, for example, can be performed very methodically, but as the complexity of the system increases, the complexity of and time spent on testing does not increase proportionally in typical software development practices. This is due in part to the amount of time spent manually implementing changes directly in source code. Making small changes in code requires lengthy testing. Because the availability of testing tools for the specific language and project under development may be limited, a great deal of time must be invested in developing custom testing suites, or the developer may simply fall back on old-fashioned trial-and-error testing.

A better approach to testing is to make use of existing simulation tools that provide a rigorous testing methodology. Production systems can be developed to mimic the testing and simulation environment. Although this constrains the developer in building the system, it provides a convenient and standardized testing environment. The simulation tool's execution environment is likely to be well structured compared to a freeform custom execution environment. Because of this well-structured execution, similar testing methodologies can be extended into the real system, beyond the simulation environment. This methodology is made possible by abstracting the system into a model that can be systematically tested. This is referred to as the model-driven development approach (MDA for short).

1.1. Inadequacies In The Traditional Software Development Process

The traditional approach to software development is error-prone for two primary reasons: first, the use of developer resources is inefficient at best, and second, human interactions can introduce errors. Writing code by hand introduces a number of potential problems. Simple syntactical errors, although caught by compilers, can lead to inefficient use of time and resources, especially if the project is large and slow to compile. Syntactical errors arise for a number of reasons, most of which stem from sloppy or poor coding practices. Although syntax errors are serious, they generally pose a minimal concern to the overall stability of a given system, as those errors must be repaired before the system can be compiled and deployed. Of far greater concern is the potential for human-induced logic errors. Any novice or professional developer will agree that logic errors are much harder to find and can lead to devastating consequences. A great deal of time and resources is devoted to testing and debugging in order to reduce or eliminate logic errors. Reducing the amount of handwritten code can, in turn, reduce the time spent debugging [GriffithsHedrick].

Our goal is to reduce the amount of manual coding in order to reduce the frequency of syntactical and logical coding errors. Logic errors can arise from incorrect translation or implementation of an abstract specification. An abstract specification, whether it be a formal model or an informal collection of requirements, can define tasks as simple as setting a variable or as complex as enabling an output state before a specified deadline while communicating success or failure. Logic errors can also arise from incorrect, incomplete, or flawed specifications. Much research has been devoted to formalizing specification methods [Romberg02], [Rumpe02], [OMG], [UML]. Tools have evolved out of formal specification that aid in the analysis of abstract specifications. The Unified Modeling Language (UML), for example, has been successful as a specification tool that enables visualization of complete systems and internal component interaction. Beyond simple visualization of systems, tools exist to apply systematic verification and validation to these abstract specifications. Unfortunately, the act of translating the abstract specification into executable code is typically still a manual, and hence error-prone, process. We integrate automated code generation into the abstract model specification to ensure that translation errors are minimized and the resulting product is as correct as the initial specification.
Figure 1-1 Waterfall Approach (stages: System Requirements Capture, Model Generation, Verification and Validation, Code Generation, Code Maintenance, and Documentation/Model Maintenance). Traditional software development can be visualized as a linear process, often referred to as the waterfall approach. Because of the linear top-down sequence, errors uncovered must be remedied from that point forward.

Model-driven development has seen a great deal of success in other engineering fields, due in part to its ability to abstract complex systems into a more manageable state. Models can then be subjected to rigorous testing, validation, and verification. The success of the model-driven approach in mechanical engineering, for example, can be attributed in large part to the advent of computerized machinery (CNC machines) that allows the engineer to rapidly produce prototype parts from an abstract model. The software engineering field has a notion of rapid prototyping through tools that can be collectively referred to as code generators. Code generators are often used to automate redundant tasks such as providing skeleton code. Many C++ programming environments will provide a skeleton code file when a new project is created. More robust code generators consider system behaviors, in addition to system structure, to produce code directly out of an abstract model rather than coding entirely by hand. Unlike the simple structural C++ skeleton code generator, this behavioral, model-based code generator is dynamic, constructing its output based on a robust specification. This allows the developer to focus on more specific tasks.
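
As a simple illustration of the contrast, a purely structural skeleton generator only emits an outline like the following and leaves all behavior to be written by hand (a minimal sketch; the class name and members are hypothetical and not the output of any particular environment):

    // Hypothetical structural skeleton: only the outline is generated;
    // every method body must still be written by the developer.
    #include <string>

    class SkeletonComponent
    {
    public:
        SkeletonComponent();
        void initialize();   // TODO: filled in by hand
        void run();          // TODO: filled in by hand
    private:
        std::string name_;
    };

A behavioral, model-based generator instead fills such outlines with logic derived from the model's state machines, as described in the following chapters.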

Code generators tend to occupy a distinctly different space in the development cycle of a piece of software as compared to the development cycle of, for example, a mechanical part. The development cycle of a mechanical part is deeply intertwined with the abstract model because the part generation phase is automated and dependent on the model specification. Circumventing the development process of a mechanical part is incredibly difficult, as the development process is well defined and tightly integrated. This automation reduces both development and production costs. Software engineering does not have a similar reproduction cost, as replication can be as simple as copying files. Update and deployment costs for software development should be considered, but are largely detached from development costs. The software development cycle, unlike that of mechanical engineering, tends to be more freeform, allowing (or at the very least, not forbidding) the developer to circumvent the model and implement changes directly in the resulting source code.

1.2. An Improved Approach to Software Development

Model-driven development has proven to be a very useful design approach in many engineering fields. Computer Aided Software Engineering tools (CASE for short) are a general class of tools that enable engineers to methodically and systematically develop complex systems through the use of abstract computer-based models. Robust validation, verification, and simulation tools ensure the correctness of systems before they are ever implemented in a tangible form. Although CASE tools have experienced much success in other disciplines, software engineering has been reluctant to accept them in any widespread or unified sense. Use of CASE tools is gaining acceptance, but they are typically utilized for initial design or documentation and are not tightly integrated into the development and code maintenance cycle as a whole. We aim to achieve a tightly integrated development cycle by automating the specific task of translating an abstract model specification into a practical executable system. This automated translation is accomplished via a tool appropriately called a Code Generator, not to be confused with the general class of tools of the same name.

Figure 1-2 Cyclic Development (a loop of code generation, testing/validation, and model changes). Cyclic development refers to the way in which code is developed, tested, and changed.

Advanced CASE modeling tools have received widespread acceptance from much of the general engineering community. Examples include computer-aided drafting, CNC machines, circuit design tools, and so on. Software-based models are ideal in that they are easy to build and easily modified. Beyond their simple prototyping utility, analysis and simulation tools can apply validation and verification as well as systematic testing procedures. Traditional software development can, and does, benefit from a systematic approach, as it yields improved confidence in the resulting product. The problem as we see it is that systematic approaches in traditional software design are fractured and loosely integrated throughout the development cycle.

A relatively new trend in the software engineering field, called eXtreme Programming (XP), seeks to uncover flaws through iterative development and unit testing. This form of unit testing greatly improves the ability to catch flaws and can be applied to both critical and non-critical system development. Rather than focusing on small unit tests, we take a more holistic approach to the design and testing of entire systems. Through the use of code generation, we enable complete systems to be tested incrementally through rapid deployment on platforms that more closely approximate the target system, and ultimately on the target system itself. This improves on the unit testing approach, as it includes system interactions. Traditional unit tests could still be applied to parts of the resulting system and would provide additional and repeatable testing tools. These tests must be implemented manually and specifically for the given project under development.

Our approach to model-driven development utilizes a cyclic pattern of verification, validation, and modification. This general approach should be followed for all software development practices, but is not always adhered to strictly. Individual tools for modeling and simulation have gained some acceptance, but a unifying development process that integrates these tools from initial design to final production and through subsequent development has yet to gain widespread acceptance. While we do not aim to achieve widespread acceptance, we do aim to explore and promote a unified development process based on the model-driven development pattern. In short, the model is the central entity of our development process, and all changes or modifications to the system under development must be made directly in the model. Subsequently, those changes must pass through the phases of model validation and verification, simulation and testing, and finally deployment and testing. We use the term concentric development cycle to denote the fact that the model is the central entity and development occurs in layered phases, each phase achieving specific goals, and each phase further approximating the real system. Modifications are applied to the model directly and, by forcing those changes to undergo the full testing and validation cycle, we methodically assure that the changes are sound.

Figure 1-3 Concentric Development Cycle (phases arranged around the model: initial requirements capture and model construction, model modifications, model validation/verification, simulation and testing, internal code generation, executable code generation, and deployment and testing). The concentric development cycle requires that all modifications pass through the model. Those changes are then subjected to rigorous validation, verification, and testing methods. These same tests can be applied for each minor change to the system.

The concentric development cycle, as we have defined it, has three main cycles, namely internal verification/validation, internal simulation, and external simulation. This notion of three cycles is by no means a limitation. In fact, it is envisioned that any number of additional cycles would be added to test various aspects of the system under development, culminating in the final production system. A notion of parallel cycles could be used to rapidly explore different execution platforms or communication environments. For example, it would be desirable for a production system to utilize a simple communications environment such as TCP/UDP, CAN, or even Web Services, as opposed to a more heavyweight middleware platform such as CORBA (CORBA is discussed in depth in Chapter 3.2.1 below). On the other hand, advanced platforms such as CORBA can provide many useful features. Traditionally, the costly decision of which communications environment to use must be made up front, as all subsequent development follows from that decision. Our approach removes, to some degree, the strict dependence on the underlying platform. This allows the developer to focus on the functionality of the system first and the underlying low-level implementation second.

Platform independence, as we have defined it, is by no means automatic. It requires limited development in the model-to-code translation tools (i.e., the Code Generator) and extensive changes to the execution Framework. Fortunately, code modularization minimizes the impact of the Framework changes on the Code Generator. Still, the ability to utilize the same fundamental system model in different environments is useful for making informed design decisions early in the development process. In particular, the modularization of system logic into an abstract model would be useful to industries where cost drives frequent vendor changes, and thus frequent hardware changes. A prime example of this environment can be found in the automotive industry.

1.3. Applications in the Automotive Domain

The automotive domain is a particularly interesting application for a robust development cycle aimed at reducing errors and expediting time to market. The automotive industry serves a mass market with deep social impacts in terms of safety and utility. In addition, automotive software and hardware is in constant flux. Automotive software requirements are constantly revised or changed to keep up with demand for added features and system interoperation. Maintainable code is particularly important in the automotive industry due to the use of many different hardware vendors. For example, a given electronic control unit (ECU) hardware may change within a given production run, but the tasks it performs are likely to change only through successive production runs and model years. Changing hardware often creates a requirement for changing code, which is fine if the set of tasks changes. If the set of tasks is unchanged, rewriting code can be wasteful, especially if the only reason for the rewrite is a hardware change. Code that is maintainable, portable, and stable beyond the initial production run is of great importance to the automotive industry.

Abstract modeling is particularly suitable in this environment of frequent hardware and software changes because models naturally abstract the hardware layer. An example in the automotive domain is the system controlling the power door locking mechanism, referred to as the Central Locking System, or CLS for short ([KrgerGuptaMathew04], [AhluwaliaKrgerMeisinger05]). This system, although simple on the surface, interfaces with a wide range of systems, and so it provides many distinct examples of the challenges facing automotive software development. The Central Locking System is distributed over a number of subsystems, the most interesting of which, collision detection, adds hard real-time constraints.
Figure 1-4 Automotive Networks (Door Locks and Dashboard on the chassis communications bus; Transmission and Engine Control on the drivetrain bus; Airbags and Crash Detection on the safety bus). Automotive systems are fundamentally distributed. Figure 1-4 shows a typical automotive network. Although systems may reside on different physical networks (or buses), they must interoperate where appropriate. For example, during a crash, the airbags must deploy with high priority. Dotted arrows demonstrate communication originating from the Crash Detector. Secondary to airbag deployment, the door locks must disengage and the engine must shut down. Some automotive systems utilize the CAN bus and associated protocols for communication between these distributed components. We utilize a different but loosely analogous communications medium called CORBA.

Production central locking systems are also faced with long-term support and maintenance issues. The initial design may require the locking system to take inputs from the internally mounted lock/unlock switch and the remote Key Fob (for keyless entry). Subsequent model years may require the locking system to take additional inputs from a cellular or satellite network, an externally mounted numeric keypad, or possibly even biometric sensors. Simply modifying the code to accept the additional input may cause adverse consequences in other aspects of the complete system. Model-driven testing of even a simple interface change would greatly reduce potential problems, and thus increase confidence in the resulting product. Because typical automotive ECUs are not field-programmable, the confidence in software validity also reflects a potential cost savings by avoiding flaws leading to recalls or customer dissatisfaction.

Addressing code changes through subsequent versions is far more difficult than simply modifying code directly. Uncovering the effects of these changes on the rest of the system can prove a daunting task. For example, a newly updated function may take slightly longer to execute, thus causing a dependent sequence of events to fail. More importantly, these logic changes can have adverse effects on safety-critical systems. System changes must be thoroughly tested before they can be deployed. The amount of time and resources devoted to this testing is often insufficient due to financial or political limitations (such as deliverable deadlines). Because of these resource constraints, it is extremely desirable to utilize the available development resources efficiently in order to maximize confidence in the final product. Automation can provide a great deal of help in finding and eliminating flaws. The tradeoff between development and testing resources requires careful and efficient use of these resources in order to ship the product with the minimum allowable number of general flaws and, because of the nature of automotive systems, absolutely no safety flaws. General (non-safety-related) flaws can be addressed through subsequent model years, as discussed above. Our work provides a means to test for these flaws and to address changes through different model years by utilizing and maintaining an accurate system model throughout the lifetime of the product.


1.3.1. Case Study: The Automotive Central Locking System

The running example we will discuss is the Central Locking System. This is an appropriate example from the automotive domain because it is component-based, physically distributed, and can be specified to contain real-time constraints. For this example the Central Locking System will consist of a number of interconnected components and subsystems. These include a Key Fob (KF), a Lock Manager (LM), a Lighting System (LS), a Crash Sensor (CS), etc. The Central Locking System's core component is the Control unit, which interacts with the above as well as with an entertainment system through a Database (DB), Tuner, and User Interface (UI).

Figure 1-5 Central Locking System. The Central Locking System (CLS) block diagram.

Each of these components logically maps to a component in the abstract model and, consequently, to an executable in the runtime system. Each executable contains an interface to the underlying runtime communications environment and the logical internal state machine defined in the corresponding model component. As in a real central locking system, the execution components are physically distributed. For the sake of simple testing and deployment, our runtime executable components can be run as separate processes on the same machine. In addition to the system components, we introduce a special monitoring component used for non-invasive testing and control of the runtime system; it is discussed in depth later.
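
As a rough illustration of that structure, each component process pairs a middleware interface with its local state machine, along the lines of the sketch below. The types and names here are invented for illustration and are not the actual generated code; real generated examples appear in Chapter 4.

    // Illustrative sketch only: a component process couples a runtime
    // communications interface with the component's local state machine.
    #include <string>

    struct RuntimeInterface {                 // stands in for the Framework/CORBA layer
        void connect(const std::string&) {}
        bool nextEvent(int& eventId) { eventId = 0; return false; }
        void sendEvent(int /*eventId*/) {}
    };

    struct KeyFobStateMachine {               // stands in for the generated state machine
        void tick(int /*eventId*/, RuntimeInterface& /*out*/) {}
    };

    int main()
    {
        RuntimeInterface runtime;
        runtime.connect("KeyFob");            // each component runs as its own process
        KeyFobStateMachine machine;
        int eventId = 0;
        while (true) {                        // receive an event, then advance the state machine
            if (runtime.nextEvent(eventId))
                machine.tick(eventId, runtime);
        }
    }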

1.4. Model-based Development

Models of software systems are abstract and, as such, look nothing like their implementation. In contrast, models in the mechanical, electrical, or structural engineering fields are intuitive and look very similar to their physical implementation. The difficulty of visualizing abstract models, and the dynamic nature of software in general, gives the developer an aversion to the modeling techniques that have proven useful in other disciplines. In some respects, programming is more of an art than a science. Programmers often develop an intuition for writing code as well as a deep knowledge of the system under development. This knowledge manifests itself in optimization and creative approaches towards solving particular problems. No automated code generation system can replace this creativity and detailed understanding of the systems. Still, software design and development can benefit from modeling for the same reasons other engineering disciplines benefit. These include increased productivity, reduced exposure to human-induced errors, cost reductions, etc. Modeling allows for the application of a systematic and unified approach to both design and testing.

Modeling techniques in general have been gaining acceptance in software development. Still, a problem with traditional software development arises once the system has been implemented in real code: the model and implementation are allowed to diverge. Successive changes are often implemented in the real code, and the model is then changed to reflect those code modifications. In this respect, the model often deteriorates into a documentation tool rather than an integral part of the future development cycle. As such, the model's utility as a testing tool diminishes.


Figure 1-6 Code Generation Feedback Cycle. With automated code generation, feedback is applied in a more efficient manner. Modifications experience the rigorous testing methods of the complete tool-chain in order to discover potential flaws.

We aim to change the traditional software development cycle to be more proactive in all stages of development. That is to say, the development cycle aims to uncover flaws in all phases, rather than just the deployment or testing stages. This is accomplished by altering the traditional flow of development. Rather than changing the model to reflect changes in the system, the changes should be made to the model first. This tight coupling of model and system means the system should always reflect the model, and not vice versa. Changing the model, rather than the code, adds an extra step that most programmers would find redundant. Manually changing the system to reflect the model, or vice versa, can introduce a number of coding and logical errors. Our goal is to reduce the burden on the programmer by automating the task of building the system from the model. The simplification of the system building process through automation is likely to increase the acceptance and use of our particular tool-chain, resulting in more efficient use of the programmer's skills. More importantly, the automation process should reduce the number of model translation errors (logical or otherwise).

The focus of this text is to demonstrate how the automated generation of executable code directly out of the model allows and reinforces a concentric development cycle. Without automatic code generation, the labor cost of implementing the system out of the model is prohibitive, and the translation is likely to take place only once: the initial translation. This is, in part, why the code-base and model may diverge in a traditional development cycle. With automation it becomes easier to change the model first, making the concentric development cycle feasible. The model can be immediately tested within the modeling tool using its native simulation environment. Once the model is thought to be reasonably correct, the system's code can be generated and tested on a larger scale using the same methodology. The generated executable code is designed to run in a distributed environment that more closely matches a real environment, if not the actual production environment itself. Under traditional software modeling techniques, the model and system could diverge at this point. Our tool-chain requires that inconsistencies uncovered in any testing phase be addressed, not in the real system code, but in the model itself.

1.5. Tool-Chain Overview


Figure 1-7 Tool-Chain Overview. The tool-chain consists of modeling, validation/verification, and code generation tools, with feedback closing the loop. The important contribution of this work is to close the loop in the development cycle. Feedback and subsequent changes must now be applied to the model, rather than directly in the deployment code, as a traditional development process would have allowed.

The fundamental purpose of this work is to demonstrate a tool called the Code Generator, which is an integral part of a much larger tool-chain [KrgerGuptaMathew04], [AhluwaliaKrgerMeisinger05]. In general terms, a code generator can be as simple as text replacement or as complex as a modern compiler (including intelligent optimizations, etc.). In either case, a code generator simply translates an abstract specification into a more detailed specification, one that is directly or indirectly executable. To accomplish this translation, the Code Generator requires detailed knowledge of both the input specification (i.e., the model) and the output architecture (i.e., the target runtime platform). We have chosen to use a specific modeling tool called AutoFOCUS (or a similar tool called M2Code) and a specific runtime platform based on the CORBA middleware ([Schmidt98], [Vinoski99]) to implement our Code Generator. It should be noted that these tools are used in the implementation of our Code Generator and impose no fundamental constraint on the concept of code generation in general. By carefully architecting a runtime Framework, we demonstrate that it is possible to interchange the underlying communication infrastructure. This allows us to target other environments while utilizing the same Code Generator.

In order to demonstrate the Code Generator, we utilize the running example of the Central Locking System and the specific aspects of this particular system. The running example is presented in general terms and is meant to describe the code generation process as a whole. In short, the larger tool-chain utilizes tools in three main areas: model specification, validation, and code generation. Model specification tools include AutoFOCUS [HuberSchaetzSchmidt96] and M2Code [KrgerGuptaMathew04].

Validation tools include AutoFOCUS [LtzbeyerPretschner00] and MSCCheck [KrgerGuptaMathew04]. Lastly, code generation tools include the Code Generator itself [Mller03] as well as an improved runtime environment and a new monitoring component. This work focuses on the area of code generation and its supporting target runtime environment.

Figure 1-8 Tool-Chain Entities (entities: M2Code as an MS Visio plugin, AutoFOCUS, a validation tools connector, XML model exchange, the RTCGenerator Code Generator, and testing and refinement). Our tool-chain workflow is dynamic. The cyclic flow of development tasks passes through the various tools. Changes are never applied directly to the generated code.

1.6. Related Work

In the current state of the art, tool-chains facilitating the development cycle outlined above do exist. The Rational Rose suite and the Nucleus BridgePoint suite from Mentor Graphics [MENTORWEB] facilitate model-to-executable development, but they do not provide explicit means to specify real-time properties to be enforced. These properties can, however, be observed and tested in the traditional sense. The goal of the tool-chain under development ([KrgerGuptaMathew04], [AhluwaliaKrgerMeisinger05]) is to provide a means to specify real-time properties as a part of the modeling process and to have these specifications mapped to an enforcing mechanism in the final executable. The purpose of this work is to bridge the gap between the model's notion of an execution environment and that of a physically distributed execution environment capable of handling the real-time property enforcement.

A great deal of research has been applied in the field of model-based development. Our approach is similar in spirit to Executable UML. Executable UML is a subset of the Unified Modeling Language [Rumpe02] that allows for clear generation of code from abstract specifications. Our approach uses a more rigid execution plan that constrains the model into a well-defined structure. From this well-defined structure it is possible to build an execution Framework that supports real-time properties, as those properties can be integrated into that specification structure. Platform-specific execution frameworks can also create a natural separation of code. Code emulating the general execution of the underlying model simulation environment can exist apart from code belonging to the logical execution of the particular system under development. This means the process of code generation can be localized to where it is needed. Redundant or static code can be encapsulated to further simplify the process of code generation.

A result of modularization is that dynamically generated blocks of code can be easily identified and transformed into a template file. Template files are standard C++ files with the addition of special tags that can be used to perform simple text replacement or more complex code block replacement. Code templates are used extensively for intermixing dynamic and static entities. Our inspiration for using templates derives from web programming languages that use a tag-based replacement scheme, such as PHP [PHPWEB] and ASP [ASPWEB]; unlike these languages, however, we do not place interpreted code in the template tags. Instead, tag identities and flow control constructs are kept to a minimum. In addition to simple text replacement, special looping constructs allow enumeration of component names or more complex code replacement over a range of values. With respect to execution and functionality, our template language is very different from ASP or PHP, but the basic idea of placing dynamic tags in a static template file in order to dynamically generate a statically viewable file is similar.

1.7. Chapter Overview

The remainder of this text is presented as follows. Chapter two presents an overview of the architectural approach to code generation. The goals for the architecture and the various tools contributing to this architecture are discussed. In addition, the runtime tools and artifacts are presented. Chapter three expands on the tools and artifacts presented in Chapter two. The specific tools and building blocks, from the modeling phase to the runtime system, are presented in depth. Chapter four discusses a running example from the automotive domain, namely the Central Locking System. Chapter five discusses the actual implementation of the runtime system by tracing the evolution of the code generator. The specific communications infrastructure, CORBA, is discussed in relation to the runtime Framework. The various pieces of the current modular runtime Framework are discussed in detail. Chapter six expands on the concept of a modular runtime Framework by discussing how it can be ported to other environments. This ability to port the runtime system to other platforms is a fundamental contribution of this work. Chapter seven presents a critical analysis of the design, the model limitations, and the tools used. Chapter eight discusses areas of future work, and Chapter nine outlines conclusions derived from this work.


2. Model, Code Generation, and Runtime Architecture

This chapter discusses the general architecture of the code generation process, from the initial modeling phase to the resulting code generation phase, focusing primarily on the latter. We have chosen to utilize modeling as the foundation of our code generation system. Generically speaking, in order to generate code, an abstract specification must be present. For our purposes we require that this abstract specification be rigidly defined and thoroughly tested. Modeling tools help create this abstract specification by allowing the user to visually design the various aspects of a system to be developed using our tool-chain. The structural and behavioral aspects of the system under development can be modeled and thoroughly tested early in the overall development process. Structural and behavioral models allow us to approach the code generation task from a similar perspective. We built a generic runtime Framework to support the structural model of components, their communication channels, and related supporting infrastructure. The behavioral models for each component, namely their state-machine definitions, are easily integrated into the executable system because all of the necessary support functions for initialization and message passing are already provided by the runtime system. The behavioral (state-machine) models are translated into if-else blocks and integrated, via code generation, into their corresponding component source code.


Figure 2-1 Code Generator - Overview (elements: Modeling Tool, XML Model Specification, Component Templates, Common Library Templates, Framework C++ Files, Code Generator, Component C++ Files, Common Library C++ Files, Project Specific Support Files (IDL File), C++ Compiler, Executable Components, CommonLibrary.dll, and Framework.dll). The code generation process is outlined from left to right, top to bottom. Our work focuses on the specification-to-executable translation and primarily on the supporting execution runtime environment.

For this discussion, we will assume the initial modeling task has already been performed and that its specification is available in the M2Code/AutoFOCUS XML file format. For more information on the modeling process itself, please refer to [HuberSchaetzSchmidt96], [Mller03], and [KrgerGuptaMathew04]. The Code Generator process utilizes a template-based approach. In short, the template-based approach to code generation uses a generic system, or template, to localize the code generation task to specific locations and blocks of code. Our code generation process combines the model specification with prewritten template files to produce C++ source code, which can be immediately compiled. The overall architecture of the generated code files is modularized to separate dynamically generated code from underlying platform dependencies. Figure 2-1 is a graphical representation of the workflow within our code generation process. Files are input into the Code Generator. The generated files can then be compiled by the C++ compiler. This in turn generates the executable components and the support libraries. Executable components are linked at runtime with the supporting libraries to produce a runtime executable system that emulates the model specification.

2.1. Goals

The primary goal of the Code Generator and supporting runtime execution environment is relatively simple: the runtime environment must emulate the simulation environment of M2Code/AutoFOCUS, and the code generation mechanism should be able to support any system specification to be run in this environment. The simulation execution environment can be broken into two distinct areas: the state machine and the communication infrastructure. The state-machine approach we chose to use was blocks of conditional statements. In general, the state machine alone does not require a complex execution environment since it operates on local (local to the execution thread) variables. Preparing input and handling output for the state machine is more complex. The communications infrastructure in our Framework should support preparing and reading input and output data for the state machine. The state-machine generation portion of the Code Generator can then be relatively simple.
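
To illustrate the conditional-block approach, a component's transition logic can be pictured roughly as in the sketch below. The component, states, and events are invented for illustration and do not reproduce the actual generated code; the real generated tick method is shown in Chapter 4.

    // Illustrative sketch: a state machine rendered as blocks of conditional
    // statements operating only on variables local to the execution thread.
    enum LockState { UNLOCKED, LOCKED };

    class LockManagerSketch
    {
    public:
        LockManagerSketch() : state_(UNLOCKED) {}

        // One tick: inspect the prepared inputs and take at most one transition.
        void tick(bool lockRequested, bool crashDetected)
        {
            if (state_ == UNLOCKED) {
                if (lockRequested)
                    state_ = LOCKED;        // transition: UNLOCKED -> LOCKED
            } else if (state_ == LOCKED) {
                if (crashDetected)
                    state_ = UNLOCKED;      // transition: LOCKED -> UNLOCKED on crash
            }
        }

    private:
        LockState state_;
    };

Preparing the inputs (lockRequested, crashDetected) and publishing any outputs is exactly the part handled by the communications infrastructure rather than by the generated state machine.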

Beyond the fundamental goal of emulating the simulation execution environment, a number of other important qualities were outlined. First, code generation and Code Generator complexity should be minimized; second, the code generation process should be straightforward; and third, the resulting generated code should be modular.

2.1.1. Minimize complexity

The complexity of the generated code should be kept to a minimum. This is the reason for the modularized libraries, as opposed to large generated code files. Simplicity in the generated code also improves readability and understandability, in the same way, for example, that string-handling libraries improve readability over handling character arrays directly. The person dealing with the higher-level code does not need an intimate knowledge of the underlying implementation. The supporting runtime Framework architecture's purpose is to provide a simple API for the generated code. Other benefits arise from this modularization, the most notable of which is platform independence of the actual generated code.

2.1.2. Straightforward Code Generation

The code generation process should be straightforward. It is difficult to produce completely usable code for every given system, and so it is desirable to allow for easy modification, both before and after the code generation process. A code generator can be a black box that takes input and generates output with no transparency. We decided, however, to expose as much of the code generation process as possible through ASCII text. The user should be able to easily understand where the code is coming from in order to make changes. To accomplish this we utilized code templates to localize the code generation to specific block-replacement tasks, rather than complete file generation. Standard C++ code files are modified to include tags. These tags can be easily identified by a parsing engine and replaced with generated code.

Through templates, users can intuitively integrate their custom code up front and have that code become part of the Code Generator output. Tag-based templates also allow for easy modification of execution flow or changes to the underlying runtime system. This approach removes the interdependence of the runtime system code and the Code Generator. The tag replacement mechanism tends to be a sort of black box, but to a lesser degree than non-template-based code generation systems. Code generation via tag replacement is targeted and limited. The user can easily trace where the tag generation code is within the Code Generator's source code. Furthermore, the user can implement new tags to perform new replacement operations.
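
The replacement step itself amounts to locating each tag and splicing in generated text. The sketch below shows one way such a pass could be written; it is not the Code Generator's actual implementation, and the "@@NAME@@" delimiter style is an assumption made purely for illustration.

    // Minimal sketch of a tag-replacement pass over a template's text.
    // The @@...@@ delimiters are hypothetical, not the real tag syntax.
    #include <map>
    #include <string>

    std::string expandTags(const std::string& templateText,
                           const std::map<std::string, std::string>& values)
    {
        std::string out = templateText;
        for (std::map<std::string, std::string>::const_iterator it = values.begin();
             it != values.end(); ++it)
        {
            const std::string tag = "@@" + it->first + "@@";
            std::string::size_type pos = 0;
            while ((pos = out.find(tag, pos)) != std::string::npos) {
                out.replace(pos, tag.size(), it->second);   // splice in the generated block
                pos += it->second.size();
            }
        }
        return out;
    }

For a component template, the map might bind a COMPONENT_NAME tag to "LockManager" and a STATE_MACHINE tag to the if-else block generated from that component's model.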

2.1.3. Code Modularization
The underlying runtime tools used in the runtime Framework should be isolated from the generated code through a simple API. The chosen communications platform for our purposes was Real-Time CORBA. The sheer complexity of the code required to bootstrap a CORBA component was motivation enough to institute an encapsulating API for the sake of simplifying the generated code. Of more importance for the long-term viability of this Code Generator is the ability to interchange the underlying execution tools without drastically affecting either the Code Generator or the generated code. Modular code provides an abstraction of the underlying execution system, allowing the underlying system to be changed with little or no effect on the generated code. The underlying execution system is also isolated from changes to the generated code, provided the changes to the generated code utilize the same set of functionalities provided by the abstraction layer.

2.2. Code Generator Input
The Code Generator takes input from two distinct sources: template files and an XML model specification. These files must be available to the Code Generator before it can execute. Generic template files are prepared in advance, while the XML specification is unique to the system under development. Template files can be altered to suit the particular system under development, but the generic files should be adequate for most systems.

2.2.1. Templates
Template files contain the basic skeleton code of a generic system. Templates are structured as standard C++ source code files with the addition of unique tag elements. These unique tag elements designate places for text replacement during code generation. The information embedded in these tags enables the Code Generator to construct the appropriate block of code to generate for that particular section of the output C++ source code file. In order to embed tags in C++ code, special character sequences that are not typically found in C++ source code are used to distinctly identify these tags.
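
To make the tag mechanism concrete, the fragment below is a hypothetical sketch of how such a tag might appear inside otherwise ordinary C++ template code. It reuses the <#component id/#> substitution syntax shown later in Figure 4-4; the class shape and member names are illustrative assumptions, not the actual template contents.

// Hypothetical fragment of a component template file (tag syntax as in Figure 4-4)
class <#component id/#>_i : public TickableComponent {
public:
    <#component id/#>_i();
    virtual void tick();        // body later replaced by the model-derived state machine
    virtual void handleEvent(EventNamespace::EventType type, CORBA::Any& payload);
};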

Template files can have a one-to-many relationship to their corresponding output code files. In other words, the same template can be used to generate a number of distinct files that belong to the same class of file. For example, a system will have a number of components. In the Central Locking System, execution components such as the Key Fob, Lock Manager, Crash Sensor, etc. all share the same fundamental skeleton code; only the names and state machines change. It is useful to apply a notion of inheritance to the template files and their resulting code files. This provides consistency between components of the same class and reduces the number of template files. This is particularly important since the number of components varies from system to system. Theoretically, the number of components is unbounded, although platform limitations impose reasonable limits. In our particular implementation the number of distinct components can range from 255 to 21840. This limitation on unique events in our particular version of CORBA will be discussed in Chapter 7.3 below. It would be inappropriate to construct, or even copy, this many template files since many, if not all, of them will be similar. As the system under development progresses, it may be necessary to copy template files to integrate custom code for a given component. For initial system development, however, it is sensible to minimize the number of template files and allow the developer to replicate templates as necessary.

2.2.2. XML Model Specification
The XML model specification we employ is based on the format used by AutoFOCUS [HuberSchaetzSchmidt96] and contains specifications for each component in the system, all communication channels, and each state machine. From this abstract specification, the Code Generator can construct the appropriate components utilizing the corresponding template files. Template tags are replaced with the information contained in the XML specification or with a composite of information drawn from throughout the specification. Simple name replacement, for things such as component names, uses the information contained directly in the specification. For more complex replacement the specification must be parsed extensively to extract the necessary information. One example of a more complex block replacement is the state-machine code block contained in each executable component's tick method.


Figure 2-2 Example XML Specification This is a view of the actual XML specification for the CLS project introduced in Chapter 1.3 above. The nodes in Figure 2-2 have been collapsed for readability. The environment, shown here, is a special component that contains the system's execution components. The system model is defined hierarchically, as is evident in the subcomponents node. Note the enumeration of the system subcomponents KF, CONTROL, LM, LS, SM, and DB contained in this node. Each of these subcomponents is represented in the graphical modeling tool. Figure 2-3 below shows the contents of a component node. A graphical depiction of these model components is given in Chapter 4.3 below.


Figure 2-3 XML Component Node Specification The KeyFob component node from the above system (Figure 2-2) has been expanded to illustrate its automaton, which will be translated by the Code Generator into an executable state machine. In the execution component, this state machine will be contained in the tick() method (discussed in Chapter 3.1.3 below). The XML specification is used to define, in a platform-independent manner, the structure and behavior of the system that is to be generated. In order to facilitate the execution of this system, the runtime environment must support the generic structure of the system. The system details in the XML model specification, such as the enumeration of components described above, are useful only when they can be successfully mapped into an execution environment. This execution environment is described in Chapter 3.1 below.
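
Since the figure itself cannot be reproduced here, the following is a rough, hypothetical sketch of what such a component node might contain, using the Lock Manager whose generated behavior appears later in Figure 4-3; the element and attribute names are illustrative assumptions and do not reproduce the exact AutoFOCUS/M2Code schema.

<component id="LM" name="LockManager">
  <channels>
    <input  id="CL1" from="CONTROL"/>
    <output id="LC2" to="CONTROL"/>
  </channels>
  <automaton initial="_js0_">
    <state id="_js0_"/>
    <state id="_js1_"/>
    <transition from="_js0_" to="_js1_" output="LC2!ok"/>
    <transition from="_js1_" to="_js0_" input="CL1?lck"/>
  </automaton>
</component>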

2.3. Runtime System Foundation
The purpose of the runtime system is to provide an execution foundation for the runtime components. We have additionally specified that this runtime system Framework abstract the underlying platform. How this abstraction is accomplished is irrelevant to the basic functionality of the Code Generator or the generated code, because the underlying platform implementation is encapsulated and exposed via a simple API. This encapsulation makes it possible to reuse or replace the underlying platform without severely affecting the generated code. A number of static handwritten files were developed to provide a basic runtime Framework that achieves this isolation between the generated code and the underlying platform. These handwritten Framework files can be rewritten to target other platforms while exposing the same API. The details of porting our runtime Framework can be found in Chapter 6.

2.3.1. Runtime Framework
The Framework contains code specific to the underlying platform. In our case, this underlying platform is Real-Time CORBA. The Framework does not contain any code pertaining to the specific system model under development; instead it contains only the code required to interface a generic system to the underlying platform. The reason for this code independence is twofold. First, a compilation and deployment optimization is gained if the Framework can be compiled as a library once and for all. Secondly, and more importantly, it provides a clear boundary of what code should go where. Ideally, the generated component code should have no notion of the underlying platform. Likewise, the Framework should have no notion of the specific system to be run on it.

The Framework's fundamental purpose is to provide simplified component configuration methods and to handle event sending and receiving. Accomplishing these tasks requires a great deal of code in CORBA, or any other communications platform. The underlying CORBA code is quite extensive and has fortunately been encapsulated in its own API. Nevertheless, it still requires much coding to build a CORBA application. Tightly integrating this code with the component would severely limit the ability to change platforms in the future. The Framework frees the component code from its dependence on the underlying platform.
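
As a rough illustration of the kind of API the Framework exposes, the sketch below simply collects the method names that appear in the generated code shown in later chapters (configureComponent, addSingleEventInterest, addConjunctiveEventInterests, registerEventTypeToPublish, prepareEventConfiguration, SubmitEvent, handleEvent); the grouping and exact signatures here are simplified assumptions, not the actual header.

// Assumed, simplified view of the Framework-facing API
class AbstractComponent {
public:
    // one-call setup that hides ORB, naming, and event-channel bootstrap
    void configureComponent(CORBA::ORB_ptr orb, CORBA::Object_ptr self,
                            const char* environmentName,
                            EventNamespace::EventSource source);

    // declare inputs (events of interest) and outputs (events to publish)
    void addSingleEventInterest(EventNamespace::EventType type);
    void addConjunctiveEventInterests(const EventNamespace::EventTypeSet& types);
    void registerEventTypeToPublish(EventNamespace::EventType type);
    void prepareEventConfiguration();

    // event I/O
    void SubmitEvent(CORBA::Any payload, EventNamespace::EventType type);
    virtual void handleEvent(EventNamespace::EventType type, CORBA::Any& payload) = 0;
};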

The Framework is compiled as a dynamically linked library to improve compilation and deployment times. This simply means that the compiled Framework exists in its own file and is linked to the component at runtime. Since the Framework is project independent, it is possible to deploy this DLL file along with the runtime support files for CORBA. In scenarios involving embedded devices, the Framework can be converted into a statically linked library, which can be compiled directly into the same binary, making the executable component self-contained. This would be particularly useful if a custom communication platform were implemented within the Framework code. Deployment would then involve simply copying the executable, rather than the deployment of CORBA, which involves a number of files and extensive configuration.

2.4. Code Generator Output
The Code Generator outputs three basic classes of files: an Interface Definition Language (IDL) file, the Common Library files, and the Execution Components. These files, combined with the static Framework Library introduced above (Chapter 2.3.1), constitute the runtime system that emulates the simulation environment.

2.4.1. Interface Definition Language (IDL) File
The Interface Definition Language file is produced by the Code Generator and is a CORBA-specific requirement. It should be noted that IDL files are not unique to CORBA. They are simply a means to abstractly define, or model, a system. The purpose of an IDL file in a CORBA project is to provide a specification for the objects that CORBA will manage. For our purposes, these CORBA objects map directly to components. We define a runtime component as simply an executable combination of driver code and a CORBA object. The IDL file is used by CORBA to produce the stub and skeleton code that will enable each defined object to interface with the CORBA backend. This stub and skeleton code is produced by means of an IDL compiler that is provided with CORBA's development tools.

The IDL compiler is a type of code generator in itself, as it takes an abstract specification and generates code that simplifies access to the complex CORBA backend. In addition to the stub and skeleton code, a number of client and server files are produced. These files provide access to the various services offered by CORBA. Access to CORBA can be attained through the stub/skeleton code, through the client/server interfaces, or through a combination of the two. Our runtime component objects inherit from CORBA objects contained in the client/server files, and so we simply need to compile them along with the components. We do not make use of the stub/skeleton code.

2.4.2. Common Library
The Common Library is a repository for code that is common to all components and that is both project-specific and platform-specific. The Code Generator dynamically produces its source files through the same template mechanism that produces the component files. The Common Library and Framework library could be compiled together, but were left as distinct libraries to clearly identify what code belongs where. The Common Library addresses the inability to completely detach the underlying platform from the execution components. It is possible to develop a communications platform that does allow for a clean separation of code; CORBA, unfortunately, does not. Furthermore, there is still a need for a repository of project-specific code that is common to all components.

The initial need for a Common Library came about from compilation inefficiency stemming from the IDL file presented above. The compilation process of CORBA involves generating a number of skeleton and client/server support code files from the given IDL file. These support files must be compiled into the component in order to gain access to the CORBA environment. Compiling each of these files into the executable components was time consuming and redundant. While this was not of critical concern, it was a motivating factor in building a library that contains code common to all components, but that is also dependent on the underlying platform. This allows the execution components to maintain their platform independence while bridging the gap between the platform independence desired by the components and the project independence desired by the Framework.

The Common Library contains common type definitions, such as enumerated types corresponding to event types. It also contains utility functions that aid in converting an event type from a number to a readable string. Interfaces for marshalling and unmarshalling event data via CORBA operations are provided, the implementation details of which will be discussed later. The Common Library is compiled as a dynamically linked library, in the same manner as the Framework. The Common Library could also be compiled as a statically linked library and then compiled directly into each executable component. It was left as a standalone DLL file solely to improve compilation time.
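
To give a sense of what such declarations might look like, the fragment below is an assumed sketch in the style of the generated CLS_common.h; the enumeration values are taken from Figures 4-4 and 4-5, while the exact function signatures and their placement are illustrative assumptions.

// Assumed sketch of Common Library declarations (simplified)
namespace EventNamespace {
    enum EventType  { STARTUP_CLS_ENV, STARTUP_KF, STARTUP_LM, /* ... */ ALL_COMPONENTS_STARTED };
    enum EventSource { SOURCE_MONITOR, SOURCE_CLS_ENV, SOURCE_KF, SOURCE_CONTROL,
                       SOURCE_LM, SOURCE_LS, SOURCE_SM, SOURCE_DB };
}

// Convert a numeric event type into a readable string for logging
const char* getEventTypeString(EventNamespace::EventType type);

// Marshal a project-specific event into a CORBA::Any for delivery via the Event Service
CORBA::Any getAnyFromProjectEvent(const CLS::Event& event);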

Figure 2-4 CommonLibrary Files The Common Library contains a mix of IDL-compiler-generated code files (SystemC.* and SystemS.*) and dynamically generated code files from the Code Generator (CLS_common.* and System.idl). During compilation, a custom build step on System.idl invokes CORBA's IDL compiler to generate the SystemC.* and SystemS.* files, which are subsequently compiled into the CommonLibrary.

2.4.3. Runtime Components

Figure 2-5 CORBA Component Inheritance This inheritance diagram was generated for the CLS example via Doxygen and illustrates how the project components (on the right) are related to the Framework (AbstractComponent and TickableComponent) and the CORBA-specific objects (POA_CLS::[ComponentName]) created via the IDL compiler discussed above. The execution components make use of the Framework and Common Library discussed above. The actual code contained in the execution components is fairly simple, as most of the complex operations have been pushed into the supporting libraries. Components are derived from the corresponding template files and are the result of the tag replacement performed by the Code Generator. Component types are similar in that they all utilize the common execution and communication Framework outlined above. They differ in their roles and the operations they perform during the execution of the system. There are three main types, or classes, of components: the environment, the execution components, and the monitor. With the exception of the monitor, all components are derived directly from components defined in the model.

2.4.3.1. Environment
The environment component is a specialized form of the model component. Its primary role in the runtime system is to manage control messages for startup events. As a result, the environment component contains specialized code for this purpose. The environment does not perform state transitions in the model, and so the executable does not contain a model-derived state machine. The processing of startup events can be considered a state machine, but it is specific to our execution Framework. Aside from the major distinction of lacking a state machine, the environment code base is largely identical to that of the other components.
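
A minimal sketch of this startup bookkeeping might look like the following; the member and helper names are assumptions, while the STARTUP_* and ALL_COMPONENTS_STARTED event types appear in the generated code of Figure 4-5.

// Assumed sketch of the environment's startup handling (not the generated source)
void CLS_ENV_i::handleEvent(EventNamespace::EventType type, CORBA::Any& payload) {
    if (isStartupEvent(type)) {                       // hypothetical predicate over STARTUP_* types
        m_startedComponents.insert(type);
        if (m_startedComponents.size() == m_expectedComponentCount) {
            // every component has announced itself: broadcast the system-wide start signal
            SubmitEvent(CORBA::Any(), EventNamespace::ALL_COMPONENTS_STARTED);
        }
    }
}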

2.4.3.2. Execution Components
Components are the executable units of operation that perform the specific tasks defined by the corresponding model component's state machine. The tick() method contains the state machine and is called upon receiving an input from all attached components. In order to call the tick method, the component must receive all input messages from all attached components. To do this, the component must be aware of the events it expects. Keeping track of the input messages is handled by two mechanisms. First, CORBA's conjunctive event groupings govern event delivery. This alone would be sufficient, but it requires a strict dependency on this CORBA-specific event-grouping feature. A second means for handling event groupings is contained in the component's handleEvent() method. Events are counted, and when the expected number of inputs is reached, the tick method is called. This very rudimentary implementation was included to demonstrate that input event management can be handled at different levels of the execution hierarchy: in the underlying platform, in the component, or at any point in between.
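
A minimal sketch of this counting mechanism is given below; the member names (m_expectedInputs, m_receivedInputs) and the buffering helper are assumptions, while handleEvent and tick are the methods discussed in this chapter.

// Assumed sketch of per-component input counting (not the generated source)
void Component_i::handleEvent(EventNamespace::EventType type, CORBA::Any& payload) {
    storeInputForPort(type, payload);   // hypothetical helper: demarshal and buffer the port value
    ++m_receivedInputs;
    if (m_receivedInputs == m_expectedInputs) {
        m_receivedInputs = 0;           // reset for the next tick-step cycle
        this->tick();                   // all inputs present: execute one enabled transition
    }
}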

2.4.3.3. Monitor
The model does not define the monitor component. It is purely a diagnostic tool used to interface with the underlying communications infrastructure. It does not contain a state machine because it does not perform any model-defined tasks. The monitor is unique in that it listens to all events individually. The execution components described above listen to a subset of events, which are tagged as grouped events. This allows the developer to monitor the internal system messages. The monitor also has the ability to actively submit events on behalf of other components. Although this injecting of events is useful, the monitor is primarily a passive component because the system can become unstable if events are inappropriately injected.

The monitor was developed with a user interface, and so it can be useful for kick-starting a system that has not yet implemented custom interfaces for the execution components. A primary example is the Central Locking System's Key Fob component. The Code Generator does not produce code to interface with user input. This task is left to the developer, as that code is specific to the hardware buttons. In lieu of Key Fob interface code, the monitor can provide a convenient surrogate input interface until a custom solution is created and integrated into the generated code.

Because the monitor has a user interface, its runtime architecture differs slightly from the standard component code. Any execution component described above could be modified to provide a similar windowed interface. The monitor demonstrates the flexibility and relative ease of integrating the generated component code with other technologies. The monitor component is made up of two code modules. The component code itself is almost identical to the other components and contains the methods to send and receive events. This component code is compiled into a dynamically linked library rather than a standalone executable. The second code module is a windowed executable written in C#. This executable invokes the methods exposed by the monitor component code contained in the DLL. Encapsulating the component code, as with the Framework code, affords the user interface a form of independence from the underlying component and, in turn, the underlying communications platform. The monitor component can be used as an example for constructing custom interfaces for the system components. Since our target application is in the automotive domain, windowed interfaces are not likely to be required. Instead, the hardware-based user interface triggers are likely to be integrated directly into the component source code itself.


3. Design Flow, Tools, and Artifacts
This chapter discusses the modeling tool and how the runtime architecture was designed to simplify the Code Generator architecture. The general design flow begins by capturing the requirements of the system to be developed. From these requirements, a model is constructed using a tool called M2Code. M2Code is a graphical modeling tool that allows the user to specify system structure and behavior. M2Code also shares a common model specification format with another modeling tool called AutoFOCUS. The model is then subjected to rigorous validation, verification, and testing to uncover flaws in the design. Once the model is prepared, and logically correct, it can be translated into executable code. Our tool takes the abstract model specification and translates it into a form that is executable on the Common Object Request Broker Architecture (CORBA).

[Figure 3-1: M2Code or AutoFOCUS -> XML Model Specification -> Code Generator -> Generated Source Code -> C++ Compiler -> Runtime Execution Environment]

Figure 3-1 Code Generator Flow - Overview This diagram shows the general linear flow path from model to executable. Rounded blocks denote tools while cornered blocks denote data artifacts.

3.1. Modeling Tool
To build the model itself, and produce the abstract specification, we utilize one of two modeling tools, M2Code and AutoFOCUS. Other comparable modeling systems mentioned in the Related Work Section 1.6 above do not easily provide access to their internal code generation and execution systems. For this reason we must reimplement the means to convert an abstract model specification to executable code, integrating the real-time specifications along the way. As a consequence, the code generation process is detached from the modeling tools. Abstract model specification, through the use of an exported XML file, allows our Code Generator to function independently from the modeling tool. Our desire to integrate real-time specifications on top of the model is the motivation for developing a custom Code Generator. Our Code Generator relies heavily on the AutoFOCUS specification and testing infrastructure, but is independent in terms of execution. In fact, the dependency on AutoFOCUS is being reduced by the concurrent development of a tool called M2Code [KrügerGuptaMathew04].

AutoFOCUS was developed at the Technische Universität München (TUM) [AFWEB] and is based on the semantics of FOCUS. Of particular interest is the fact that AutoFOCUS is freely available. Although more advanced tools such as Rational Rose, Rhapsody, Tau, and the Nucleus BridgePoint suite from Mentor Graphics exist, they are closed systems and as such are not suitable for our research goals. We aim to integrate real-time property specification early in the model development phase. In order to utilize existing modeling languages and their development tools, our real-time specifications must either be integrated into the language itself, or provided as a supplementary specification. While this integration need not be implemented at the level of the modeling language itself, we must have access to the internal specification and implement our own translation between the model specification and the executable.

3.1.1. Common Model Specification
A common specification format exists between the various tools in the tool-chain in the form of an XML model specification file, based on that defined by AutoFOCUS [KrügerGuptaMathew04] and modified to suit our purposes. Code generation is the process of transforming this abstract specification of the system into compilable and executable code. This abstract specification is somewhat analogous to a compiled language such as C or C++, in that a general specification is translated into machine-specific executable code. In effect, our Code Generator is a compiler for the abstract model specification [Müller03], and as such shares some similarity (symbol table, etc.) with actual compilers.

3.1.2. Model Simulation & Validation
We began using AutoFOCUS as the core modeling environment because it provides useful tools for simulation and validation, both of which are critical in uncovering bugs early in the development cycle. We have since employed our own modeling tool, namely M2Code, and our own extensions to the model specification format. Although not entirely analogous, model validation done by M2Code or AutoFOCUS can be thought of as similar to pre-compiler tools for C such as Lint [SUN05]. The purpose of the simulation and validation tools is to prepare a reasonably correct specification in both form and content. The simulator's fundamental execution model is fairly simple, and so we can make the following assumption: if the execution environment of our generated code emulates that of the simulation environment, we will have produced an executable system that is as correct as the system we observed in the simulation environment. In effect, our executable system is another sort of simulation environment, one that can be deployed in a production system, or used as a strict guide for creating a true production-quality system. It should be noted here that the creation of a suitable and correct execution environment is a critical step in building the final system.

3.1.3. Model Execution
The execution model of a system in AutoFOCUS consists of two major tasks repeated indefinitely. Each component's internal state machine executes an enabled transition (tick), and data is communicated between components (step). From the perspective of individual components, there are three phases. This is because the communication step phase is split into two distinct parts for sending and receiving messages. To complete a tick-step cycle, a component must first read all of its input ports, preparing the data for the state machine. The state machine then executes by selecting an available transition given the current state and the newly read data from the input ports. An enabled transition is executed by moving data to the component's output ports. Since the components are connected via a communications channel, the writing of an output port on one component and the reading of the input port on the attached component are seen as a single operation from the point of view of the simulation (a step, as discussed above).
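
The cycle just described can be summarized schematically; this is a sketch of the simulation semantics only, not code from AutoFOCUS or from our generated components.

// Schematic sketch of the tick-step execution model (assumed, simplified)
while (true) {
    for (Component& c : components) c.readInputPorts();     // step: receive phase
    for (Component& c : components) c.tick();                // tick: execute one enabled transition
    for (Component& c : components) c.writeOutputPorts();    // step: send phase
}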

[Figure 3-2: Inputs -> Component (Internal State Machine / tick() method) -> Outputs]

Figure 3-2 Component View This figure represents the general execution model from the component standpoint. Inputs are read, an enabled state transition is executed, and outputs are written. This process continually repeats for each component in the system. Communication channels in the AutoFOCUS execution environment allow buffering of only one value per channel at a time. Single-message buffering allows the system to proceed in an orderly fashion without the presence of an explicit synchronization mechanism. For this type of synchronization to be effective, a constraint must be placed on the communications model. During a step operation, all communications channels must carry a message. State machine transitions do not always affect all output ports. When a channel does not carry data during a step, null messages are introduced, indicating that no data is present.
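
The generated code reflects this null-message convention through a per-port value/flag pair, visible in the tick method of Figure 4-3 (e.g. LC2.val, CL1.hasValue); a rough, assumed sketch of such a port structure is shown below.

// Assumed shape of a buffered port in the generated code (member names simplified)
struct PortBuffer {
    CLS::MSGS val;       // the buffered message value for this channel
    bool      hasValue;  // false represents the "null message" (no data in this step)
    // timing-related fields (ServiceID, StartTime, Inaccuracy, deadlineReached) omitted
};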

3.2. Runtime System Architecture
As discussed earlier, the runtime architecture was designed to simplify the Code Generator architecture. The necessity for a complicated code generator is reduced because of the strict and well-defined execution model of the simulation tool used to validate the model (discussed earlier).

This text discusses the artifacts, both dynamically and statically generated, that are necessary to produce an executable system. For a more detailed description of the architecture of the Code Generator tool itself, please refer to [Ahluwalia05] and [Müller03]. The dynamically generated code in its current version targets CORBA as the communications environment. That is, the generated code makes use of CORBA to communicate between distributed components. A modularized runtime Framework provides a generic API to the generated code, effectively isolating the generated code from the underlying platform.

3.2.1. CORBA Primer
CORBA, the Common Object Request Broker Architecture, is a middleware that provides distributed access to objects. Writing distributed applications is greatly simplified through the use of CORBA because it removes the burden of coding low-level distributed communication from the developer. It is used primarily for business applications, but it has seen extensive use in industrial automation because of its Real-Time support. CORBA has largely been replaced by newer technologies such as the Simple Object Access Protocol (SOAP) and Web Services, but it remains a strong technology in terms of both performance and industry support.

CORBA is a specification [OMG02] that has been implemented by a number of vendors, all of whose implementations are designed to interoperate via the common specification. In addition to the core object request broker, CORBA provides a number of services and utilities, which we have put to use in exploring the requirements for a generic execution environment for the generated code. These requirements can be implemented in any number of other communications platforms, making the Framework fundamentally interchangeable. Real-Time support was the primary motivation in selecting CORBA as the experimentation platform. Additionally, the Naming Service, the Event Service, the Timing Service, and the Interface Definition Language compiler have aided in building a common execution Framework for the generated code and are discussed later.


Figure 3-3 CORBA (figure source: http://www.cs.wustl.edu/~schmidt/TAO-intro.html) CORBA is a complex specification. We have been careful to use only a distinct subset of tools that have analogous implementations in other environments, examples of which include naming and message delivery. Our generated code requires an execution environment that will emulate the execution model of AutoFOCUS while allowing us to implement advanced features such as timing constraint specification. A wide variety of potential execution environments exist, ranging from lower-level machine and compiled languages up to higher-level middleware. A higher-level middleware was ideal because it is likely to have already implemented useful advanced features. CORBA, the Common Object Request Broker Architecture, was chosen because of the availability of useful services such as name resolution and event delivery [Vinoski99], [OMG02], [Müller03]. CORBA offers a real-time subset allowing for specification and enforcement of timing constraints. In addition, this middleware has proven to be robust and widely available on different platforms [Schmidt98], [Vinoski99].

There are many flavors of CORBA, and so the decision of which specific one to use was made based on availability. The TAO (The ACE ORB) implementation is an open source project and, as such, is particularly suitable to a research environment [TAOWEB]. TAO is built on a framework called ACE (Adaptive Communication Environment), both of which were developed under the supervision of Douglas Schmidt [ACEWEB], [HustonJohnsonSyyid03], [SchmidtHuston02], [SchmidtHuston03]. ACE is a set of cross-platform APIs and modules enabling the programmer to write portable C/C++ communication code. Both ACE and TAO have been tested successfully on a number of different platforms, and because TAO is an open source C/C++ project, it can be ported to other platforms, particularly some embedded devices, making it suitable for a test-bed environment. The primary drawback of using CORBA (and specifically, TAO) is the required execution footprint and complex API. Large execution overhead makes this, and some other flavors of CORBA, unsuitable for many low-processing-power embedded devices. Nevertheless, CORBA is very useful in a test-bed environment, especially if this test-bed consists of a series of laptops and PCs, or simply multiple processes running on the same machine. Unlike the single-process, multi-threaded execution environment in the AutoFOCUS simulator, our generated code runs in a truly distributed environment, and so it is capable of more closely simulating a true production system while still adhering to the AutoFOCUS execution model.

We feel that existing development cycles, which move from the model straight to deployment, take too big a step. By introducing CORBA as an intermediate testing phase, we have created a concentric development cycle where different aspects of the system can be tested incrementally in more and more realistic environments. AutoFOCUS provides logical validation of the model in the very narrow scope of the multi-threaded simulation environment. Our CORBA-based test environment allows us to explore the effect of real-time property constraints on the system. The distributed nature of this test environment approximates the distributed nature of the automotive environment, while still maintaining the ease of deployment and observation provided by the PC environment. Future iterations of the tool-chain could include code generators for specialized embedded code (BASICStamp, for example) and even a means to produce production-quality code directly. Currently the runtime system is designed such that the underlying communications environment, namely CORBA, can be replaced.

3.2.2. Runtime System Execution Framework
The runtime system was designed to be modular. The reason for this was twofold. First, the dependencies on the underlying communications architecture should be localized. This would enable replacement or modification of the communications architecture with little effect on the dynamically generated code, reducing the effect on the Code Generator architecture itself. The second motivation for a modular architecture was to reduce the complexity of the dynamically generated code. In simple terms, complex tasks were identified and encapsulated into a simple API. This means fewer lines of code need to be dynamically generated. This encapsulation greatly simplifies the task of the Code Generator.

The runtime system consists of a communications environment, a static Framework to encapsulate the communications environment, and a number of executable components that approximate the components in the system model. The Framework is the key to the runtime system architecture because it isolates the Code Generator and generated code from the underlying communications environment.


4. Running Example
Applying our development tools to the Central Locking System example demonstrates how the model is transformed into an executable system and how our architecture is extendable. A simplified, but complete, code generation process consists of first capturing the system specifications in one of two modeling tools, AutoFOCUS or M2Code. Both modeling tools allow for a graphical specification of component interactions. The model created using these tools can then be tested and validated using the simulator in AutoFOCUS or MSCCheck. Once the modeled system is thoroughly tested, validated, and refined, the complete model specification is exported in XML form. The Code Generator can then process this XML specification file and output a number of C++ CORBA files, which are ready to be compiled by standard C++ compilers. The C++ files are compiled into executables and then deployed onto the target device(s) and tested.

4.1. Central Locking System Review
The Central Locking System (CLS), presented in Chapter 1.3.1 above, is a component-based, physically distributed system found in typical automobiles. In terms of modeling, the CLS components (Key Fob, Lock Manager, Lighting System, Crash Sensor, Control, Database, Tuner, and User Interface) map to model components and in turn to executable components.

Figure 4-1 Central Locking System Review Presented in Chapter 1.3.1, this figure represents the structural components of the Central Locking System.


4.2. Development Process Overview: From Abstract Model to Executable
In order to demonstrate the utility of the Code Generator we present a brief discussion of the overall process. First, the requirements of the system must be captured. In the Central Locking System example, this would include identifying the components in the system, their individual tasks, and their interactions. A thorough understanding of the Central Locking System is required in order to fully define these system components and their tasks. Next, the Central Locking System is graphically designed in the modeling tool by defining the components, their interconnections, and each individual component's task, or state machine. The model can then be simulated using the modeling tool's internal simulation and validation tools. Improvements are made to the model, and it is retested repeatedly. Once the model has been thoroughly tested in the simulation environment, it is ready to be tested on a larger scale.

The modeling tool, namely AutoFOCUS or M2Code, provides an export feature that converts the model specification into a simple and portable XML file. The Code Generator uses this XML model specification, along with a number of pre-defined code template files, to produce the runtime executable C++ code. This code is then compiled along with the runtime libraries into executable programs. These executable programs are then deployed on the target runtime system and tested as in the simulation environment. As in the model's simulation environment, the flaws that are uncovered must be addressed in the model directly. The Code Generator allows for rapid redeployment of executable code following any modification to the model. Furthermore, alternative runtime Frameworks can be developed to facilitate the deployment of executable code on different physical architectures, including the actual production architecture. The software development cycle can continue with the available simulation and runtime execution environments until the system is stable.

4.3. Model and Code Examples
This chapter presents model and code examples from the Central Locking System that have not been included elsewhere in this text.

[Figure 4-2: state transition diagrams for the KeyFob (KF), Control, and LockManager (LM) components; graphical content not reproducible here]

Figure 4-2 Example Component State Machines These state transition diagrams (STD) are from the CLS example introduced in Chapter 1.3 and were generated via M2Code. From left to right: the Key Fob, Control, and Lock Manager components. The state machine diagrams in Figure 4-2 were generated via M2Code, a tool used to model system structure and behavior. Although these state machines are graphically represented here, they are actually stored in an XML file, discussed in Chapter 2.2.2 above. An example of this XML file can be found in Figure 2-2 and Figure 2-3. The entire XML model specification is used as input to the Code Generator to produce, among other pieces of code, the state machine for each individual component. The state machine for each component is wrapped in a tick method, so named because the state machine executes one transition in every discrete clock cycle. A clock cycle is defined by the execution model, which is described in Chapter 3.1.3 above. Specifically, a clock cycle consists of the following: a component waits for data to arrive on all of its input channels, this data is then used in the execution of an enabled state transition, and lastly, all output channels are written. The tick method is called in each clock cycle after messages, which we refer to as events, have been received on all input channels. The generated output of the tick method is given in Figure 4-3 below.

1  void CLS_LM_i::tick () ACE_THROW_SPEC ((CORBA::SystemException)) {
2    int nTransition;
3    TQueue tq;
4    bool bDone = false;
5    while(bDone == false) {
6      tq.Clear();
7      switch (this->state){
8        case EventNamespace::STATE__JS0_26:
9          if (true) {
10           {tq.Push(5, 0);}
11         }
12         nTransition = tq.Pop();
13         if(nTransition == 0) {
14           LC2.val = CLS::OK;
15           LC2.hasValue = true;
16           LC2.ServiceID = m_ServiceID;
17           LC2.StartTime = m_StartTime;
18           LC2.Inaccuracy = m_Inaccuracy;
19           LC2.deadlineReached = m_DeadlineReached;
20           this->state = EventNamespace::STATE__JS1_27;
21           this->AllPortsWritten();
22           break;
23         }
24         bDone = true;
25         break;
26       case EventNamespace::STATE__JS1_27:
27         if (true) {
28           if ( (CL1.hasValue)&&
29                (CL1.val == CLS::LCK)) {
30             tq.Push(5, 0);
31           }
32         }
33         if (true) {
34           if ( (CL1.hasValue)&&
35                (CL1.val == CLS::UNLCK)) {
36             tq.Push(5, 1);
37           }
38         }
39         nTransition = tq.Pop();
40         if(nTransition == 0) {
41           CL1.hasValue = false;
42           m_ServiceID = CL1.ServiceID;
43           m_StartTime = CL1.StartTime;
44           m_Inaccuracy = CL1.Inaccuracy;
45           m_DeadlineReached = CL1.deadlineReached;
46           this->state = EventNamespace::STATE__JS0_26;
47           this->AllPortsWritten();
48           break;
49         }
50         if(nTransition == 1) {
51           CL1.hasValue = false;
52           m_ServiceID = CL1.ServiceID;
53           m_StartTime = CL1.StartTime;
54           m_Inaccuracy = CL1.Inaccuracy;
55           m_DeadlineReached = CL1.deadlineReached;
56           this->state = EventNamespace::STATE__JS0_26;
57           this->AllPortsWritten();
58           break;
59         }
60         bDone = true;
61         break;
62     }
63   }
64 } // tick()

Figure 4-3 Tick Method Code The listing above represents the resulting state-machine nested if-else block for the Lock Manager, found in CLS_LM_i.cpp.


Figure 4-3 is the resulting code generated from the diagram represented in Figure 4-2. Prior to executing the tick method, the component uses inputs, or received events, to prepare object-level data used in this method. Following the tick method, the component's outputs are written by submitting data to the event service with the appropriate event type for each of its attached destination components. The tick method in Figure 4-3 was generated for an alternate execution model that does not conform to the strict clock-cycle approach described in Chapter 3.1.3. For more details on this particular execution model, please refer to [Ahluwalia05]. Skipping over the code specific to the non-standard execution model, i.e., the code involving the queue and the while statement, the switch statement beginning on line 7 is common to both execution schemes. The state in this switch statement was initialized on startup or set on the previous tick execution. The state's type on line 8 (and lines 20, 26, 46 and 56) is defined in an enumerated type found in the Common Library (see Chapter 5.3.2 below). The actual name for this particular state comes directly from the model and can be seen in the top left state bubble of the Lock Manager in Figure 4-2. There are a number of other enumerated types that are defined in the Common Library. As with the state names, these type enumerations come directly from the model. The actual code replacement used to generate this sort of enumeration is illustrated in Figure 4-4 below.


enum EventSource {
  SOURCE_MONITOR,
  <@components@>
  SOURCE_<#component id case="upper"/#>,
  <@/components@>
}; // enum EventSource

enum EventSource {
  SOURCE_MONITOR,
  SOURCE_CLS_ENV,
  SOURCE_KF,
  SOURCE_CONTROL,
  SOURCE_LM,
  SOURCE_LS,
  SOURCE_SM,
  SOURCE_DB,
}; // enum EventSource

Figure 4-4 Template Example This is an example of a template tag and its corresponding generated code block. The example comes from the template file Template_Common.h and results in the generated Common Library file CLS_common.h. In Figure 4-4, the resulting code is an enumerated type containing a mix of names statically coded into the template and dynamically generated entries iterated over all component names. The SOURCE_MONITOR definition is hard-coded into the template. This is because the tag <@components@> only enumerates components present in the model. Since the monitor is not explicitly defined in the model, it must be included explicitly in the template. This particular example shows the extendibility of the template-based code generation scheme. Suppose, for example, the developer wishes to add a second monitor. This addition can be accomplished by simply adding a new source enumeration to the existing template code, above the replacement tag, as sketched below. Modifying the template code around the replacement tags has no effect on the Code Generator itself. The execution logic of the runtime system may be harmed, however, so these modifications should be done with care. Further discussion of the template-based code generation scheme can be found in Chapter 5.4 below.
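
As an illustration of the modification just described, the fragment below shows how the template of Figure 4-4 might look after a hypothetical second monitor source is added by hand above the replacement tag; SOURCE_MONITOR2 is an invented name used only for this example.

enum EventSource {
  SOURCE_MONITOR,
  SOURCE_MONITOR2,      // hand-added entry; not produced by the <@components@> tag
  <@components@>
  SOURCE_<#component id case="upper"/#>,
  <@/components@>
}; // enum EventSource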

1  int main (int argc, char* argv[]) {
2    try {
3      SetConsoleCtrlHandler((PHANDLER_ROUTINE)CtrlHandler, TRUE);
4      CORBA::ORB_var orb = CORBA::ORB_init(argc, argv); // Initialize orb
5      CLS_LM_i lm_servant;
6      lm_servant.configureComponent(orb, lm_servant._this(),
7        "_Environment", EventNamespace::SOURCE_LM);
8
9      //events of interest (input ports)
10     EventNamespace::EventTypeSet * portWriteSet = new EventNamespace::EventTypeSet();
11     lm_servant.addSingleEventInterest(EventNamespace::WRITEPORT_LM_FROM_CONTROL);
12
13     EventNamespace::EventTypeSet * startupSet = new EventNamespace::EventTypeSet();
14     startupSet->insert(EventNamespace::STARTUP_CLS_ENV);
15     startupSet->insert(EventNamespace::STARTUP_KF);
16     startupSet->insert(EventNamespace::STARTUP_CONTROL);
17     startupSet->insert(EventNamespace::STARTUP_LM);
18     startupSet->insert(EventNamespace::STARTUP_LS);
19     startupSet->insert(EventNamespace::STARTUP_SM);
20     startupSet->insert(EventNamespace::STARTUP_DB);
21     lm_servant.addConjunctiveEventInterests(*startupSet);
22     lm_servant.addSingleEventInterest(EventNamespace::ALL_COMPONENTS_STARTED);
23
24     //events to publish
25     lm_servant.registerEventTypeToPublish(EventNamespace::STARTUP_LM);
26     lm_servant.registerEventTypeToPublish(EventNamespace::FINISHED_LM);
27
28     // the environment is responsible for notifying everyone when everyone is started.
29     // we cant let each component do this on their own because the components that
30     // start late will miss the startup events from earlier components
31     lm_servant.registerEventTypeToPublish(EventNamespace::ALL_COMPONENTS_STARTED);
32
33     // output ports
34     lm_servant.registerEventTypeToPublish(EventNamespace::WRITEPORT_CONTROL_FROM_LM);
35     lm_servant.registerEventTypeToPublish(
36       EventNamespace::WRITEPORT_LM_FROM_CONTROL_TO_MONITOR);
37     lm_servant.prepareEventConfiguration();
38     CLS::Event event;
39     event.name = "hello from the LM";
40     lm_servant.SubmitEvent(getAnyFromProjectEvent(event), EventNamespace::STARTUP_LM);
41
42     cout << "--------------------Starting LM ORB Event Loop------------------" << endl;
43
44     orb->run();
45   } catch (CORBA::Exception & e) {
46     cerr << e << endl;
47     ACE_DEBUG((LM_DEBUG, DEFAULT_ERROR_STRING));
48   }
49   return 0;
50 }

Figure 4-5 Component Startup Code This is the startup code present in the generated CLS_LM (LockManager) component. This code resides in the generated driver file CLS_LM_main.cpp and is similar for other components. The goal of the Framework and Code Generator is to minimize the dependency on the underlying platform. In other words, the generated component code should not contain platform-specific code. There is, however, a minimal leftover dependency on CORBA in this file through the use of the CORBA::ORB_init() and orb->run() calls on lines 4 and 44, but these are easily removed or replaced in the template files when changing to a different supporting runtime system Framework. Line 3 is a means to retain flow control for the console application to enable a graceful shutdown of the orb and its resources. Killing the process leaves dangling references in the CORBA environment, and so this function is provided to monitor the closing state of the console application and gracefully shut down the CORBA environment. Lines 4-7 prepare the CORBA environment, while lines 10-40 prepare the event configuration for this component. Lines 13-22 prepare the events this component wishes to listen for and lines 25-35 prepare the events this component will be sending, otherwise thought of as inputs and outputs respectively. Lines 38-40 submit the initial event notifying the environment component that this component has started. Lastly, line 44 yields the execution flow to CORBA. This call effectively blocks the main thread and will never return. This blocking is the reason for the SetConsoleCtrlHandler() call on line 3.

Receiving an event:
// get the event(s)
1 void EventConsumer_i::push(const RtecEventComm::EventSet & data ACE_ENV_ARG_DECL_NOT_USED)
      throw (CORBA::SystemException) {
2   for(CORBA::ULong i = 0; i != data.length(); ++i) {
3     const RtecEventComm::Event &e = data[i];
4     cout << "Handling event " << this->callback_component->getEventTypeString(
          (EventNamespace::EventType)e.header.type) << endl;
5     CORBA::Any anyref = e.data.any_value;
6     this->callback_component->handleEvent(
          (EventNamespace::EventType)e.header.type, anyref);
7   }
8 }

Figure 4-6 Receive Event Code This shows the event handling code contained in the Framework file EventConsumer_i.cpp. Figure 4-6 is only part of the code required to receive an event. The full code path a received event follows involves a series of wrapper functions, which are left out for simplicity. Figure 4-6 shows the interesting part of the receive event process, specifically the decomposition of an event and the passing of its data to the callback function located in the component code. The event consumer's push method (shown in Figure 4-6) is called whenever an event is received. This push method is called automatically by CORBA. Recall that the main execution thread was given to CORBA by the call to orb->run() on initialization in Figure 4-5. An optimization on the event service allows multiple events to be delivered at the same time. The for loop on line 2 accounts for processing multiple events. A reference to an individual event is obtained on line 3. Line 4 is merely a debugging statement. Line 5 gets the actual event data as a CORBA::Any type, which will be demarshalled in the component code. The reason the payload data is not demarshalled here is to keep the Framework generic. The callback component's handleEvent call on line 6 passes the type and the CORBA::Any event payload. The reason CORBA delivers events to the EventConsumer object rather than directly to the receiving component (callback_component in this case) is a result of CORBA's structure. In order to receive events from the event channel, the object must inherit from a class called POA_RtecEventComm::PushConsumer. The AbstractComponent, from which all system components inherit, could itself inherit from the PushConsumer, but for the sake of simplicity, they have been left separate.

Sending an event:
// stick the source, type and event in a set then deliver it to the event channel
1  void AbstractComponent::SubmitEvent( CORBA::Any payload,
       EventNamespace::EventType type, EventNamespace::EventSource source) {
2    RtecEventComm::EventSet events(1);
3    events.length(1);
4
5    cout << "Submitting event "
         << this->getEventTypeString((EventNamespace::EventType)type) << endl;
6
7    // Initialize event header.
8    events[0].header.source = source; // user defined source
9    events[0].header.type = type;     // user defined type
10
11   // Initialize data field in the event.
13   events[0].data.any_value = payload; // RtecEventComm Event
14   this->consumer_proxy_->push(events);
15 }

Figure 4-7 Send Event Code Sending an event is done in AbstractComponent.cpp and consists of preparing a single event and pushing it onto the event channel.

The sending of an event involves far simpler logic in terms of execution flow. Unlike the receive-event code, which is called by the ORB, the writing of outputs, or sending of events, is done in response to the completion of the tick operation. The tick operation happens in response to the reading of inputs, or receiving of events. As a result of this chain of operations, our code has flow control following the tick operation. The sending of events is done simply by calling the SubmitEvent method. Prior to calling this method, the component must convert the event payload into a marshalled CORBA::Any type using the overloaded utility functions getAnyFromProjectEvent provided in the CommonLibrary. Lines 2-3 prepare a single event, while lines 8-13 configure the event source, type, and payload. Line 14 sends the event to the event service by way of a proxy (the interface between our code and the event service).


5. Implementation Platform, Runtime System, and Code Generator
The overall architecture of our system has been designed to reduce the implementation burden of both the Code Generator and the runtime system. That is not to say it made implementation easy, but rather that it made possible a clear definition of what was to be implemented in the Code Generator and what was to be implemented in the runtime system. For example, the Code Generator should not be tied to the runtime system, meaning that the Code Generator should be independent and not operate in a way that requires a particular runtime system. Although the Code Generator output may target specific runtime systems, its internal logic should not be dependent on any specific runtime system. For this purpose, two major implementation tasks were undertaken. First, a generic runtime Framework had to be developed. This particular Framework targets the Real-Time CORBA communications platform, but it can be augmented to support other environments. Secondly, the Code Generator was implemented in a way that minimized the dependency on the execution logic of our Framework code base. Rather than integrating entire code modules into our Code Generator, we approached the problem by identifying critical code blocks within a generic system and implemented the Code Generator to deal with specific blocks, rather than entire modules. In this manner, we detach the Code Generator from the logic of the resulting system and improve Code Generator extendibility by allowing the user to create custom block replacement code.

The implementation of the runtime system and Code Generator is discussed below. Provided first is an account of the communications platform and its tools used in the current runtime system. A detailed discussion of the runtime system we have developed follows. The runtime libraries (Framework and CommonLibrary) that were developed for this system are discussed, followed by the various runtime components (model components, environment, and monitor). The chapter concludes with a detailed account of the approach to code generation and runtime execution as it evolved from a previous version to the current one.


5.1. Code Generator Evolution
The initial conception of a Code Generator, implemented in 2003 by Oliver Müller [Müller03], provided a useful evolutionary step in the development of the tool-chain. This initial Code Generator was ultimately redesigned, yielding a more robust system, which is the focus of this work. The previous version will be discussed throughout this chapter as a means to compare and contrast different methods of achieving the same goal. More importantly, this comparison will be used to justify certain design and implementation decisions in the current version.

Among the key improvements made in the current version of the Code Generator and its target runtime Framework are:
- Simplified code generation process
- Easy integration of custom code
- Pluggable communications Framework
- Distributed synchronization
- Unification of the communication and synchronization mechanisms
- Integration of monitoring capabilities

These key concepts will be discussed in depth at the end of this chapter. A detailed view of the initial Code Generator is presented in Chapter 5.5.1, followed by a description of the changes in the current Code Generator. An analysis of the redesign of the Code Generator is provided in Chapter 7.4 below.

5.2. CORBA Runtime Communication Middleware
The use of CORBA code in the runtime system has been largely abstracted out of the generated code by way of the Framework and common code libraries, which were developed for this purpose. It is possible for a user/developer to generate executable code without an intricate knowledge of the CORBA environment, much in the same way a C programmer can generate executable code without an intricate knowledge of the underlying machine architecture. In both cases, such an intricate understanding of the underlying architecture certainly helps, but is not absolutely necessary. For more advanced systems, it is important that the user/developer understand the underlying Framework from its lowest levels. CORBA is a large and complex specification, only a subset of whose functionality is put to use in our runtime system. For a more complete CORBA reference please refer to [Vinoski99], [OCI], [TAOWEB], and [OMG02]. This chapter is provided as an overview of the specific parts of CORBA we have utilized to develop our runtime Framework.

5.2.1. CORBA's IDL Compiler

CORBA provides an interface definition language that is used for defining objects and for generating the stub and skeleton code interface to these objects. An interface definition language compiler is provided to generate the backend code and translate the specification file into stub code usable by the developer. This IDL compiler eliminates a vast amount of work by creating code for a robust communications environment while exposing a simple skeleton code interface to the user. In many respects, the Code Generator is analogous to the IDL compiler, or virtually any compiler for that matter, in that an abstract specification is translated into executable code. The generated code makes use of the code files generated by the IDL compiler.


#include <orbsvcs/orbsvcs/TimeService.idl>

module CLS {
  enum MSGS {
    ARM, DOOR_LCKD_SIG, DOOR_UNLD_SIG, GET_ID, HANDLE_ID,
    ID, LCK, OK, UNLCK, QOS_ERROR
  };

  struct PortWriteEventMSGS {
    CLS::MSGS msgs;
    boolean present;
    boolean deadlineReached;
    CLS::MSGS ServiceID;
    TimeBase::TimeT StartTime;
    TimeBase::InaccuracyT Inaccuracy;
  };

  struct Event {
    string name;
  };

  interface Monitor{};
  interface CLS_ENV{};
  interface KF{};
  interface CONTROL{};
  interface LM{};
  interface LS{};
  interface SM{};
  interface DB{};
};

Figure 5-1 Example IDL File. Example IDL file from the Central Locking System introduced in Chapter 1.3 above. The IDL file is a means to enumerate the components and data types, but is particularly important to CORBA because the IDL compiler uses it to create the backend code that these execution components use to communicate with the CORBA environment. In Figure 5-1, MSGS defines the data type that will be passed by the Event Service, and the interfaces at the bottom define the components that will interact via the Event Service. The IDL compiler requires an IDL file, which is created by the Code Generator using the XML model specification. The files generated by the IDL compiler are then utilized by the runtime system components.


[Figure 5-2 diagram: XML Model Specification -> Code Generator -> Generated IDL File (System.idl) and Generated C++ Source ([System]_common.*); the IDL file -> IDL Compiler -> IDL Generated CORBA Files (SystemC.* and SystemS.*); the generated sources -> C++ Compiler -> CommonLibrary.dll]

Figure 5-2 Code Generation Flow - Detailed This figure illustrates the full code generation path from the model to the CommonLibrary, which is discussed in depth later. Of importance here is the data flow through the IDL compiler. The utilization of the IDL-generated files has differed between the previous and current Code Generator versions. A more detailed discussion of how the initial Code Generator used these generated files is provided in Chapter 5.5.2 below. Currently, the files created by the IDL compiler are used during the final compilation phase and linked to the execution components at runtime. The stub and skeleton code created by the IDL compiler is not required because we are using a communication model that does not make use of the core CORBA shared object facility. The IDL compiler generated code is utilized by our runtime system in a way that makes use of only the backend code. The previous Code Generator version made use of CORBA's shared object facility, and so the code generation process involved an intermediate step to fill in the stub code.

5.2.2. CORBA's Naming Service:

An M2Code or AutoFOCUS model contains a system of named components, and likewise our runtime system utilizes a similar textual naming convention. A distributed environment requires a means of mapping these named components to physical, routable locations. CORBA offers a number of means to do this, the simplest of which is an identifier string called an IOR string. This string can be copied manually or via a shared file. This particular method is not scalable, nor is it easily automated. CORBA also offers a resource locator string similar to the URL (uniform resource locator) format found on the web, but more descriptive. Although the resource locator string is easier to use, it alone is not scalable because these locator strings must be static and distributed manually to each component. Recall that the AutoFOCUS model has many interconnected components, requiring that, in the worst case, every component know about every other component. Providing a series of URLs containing all components to every component can be very difficult, especially if components move around between or even during execution runs.

CORBA provides an automated means for the component-name-to-location mapping, which is very similar to the domain name system (DNS) found on the Internet. CORBA's Naming Service is a process with a well-known location. As components are started, they are given the location of this naming service, and are responsible for registering themselves with this directory. Components can then ask the naming service for the location of other registered components by name. Because the location of this Naming Service process is well-known, it must be the first process started in a given execution run. If clients are written correctly, it is possible for them to rebind their names on subsequent runs. This means that the Naming Service can be a persistent global service. More importantly, the Naming Service, as such, is not an absolute requirement for the target system. It is merely a simplification for a dynamic test bed environment and an example of CORBA's utility.

In a production automobile network, the locations of the various components are likely to be well-known and static. The locations of attached components can be hardcoded into each component, or some other means of basic name resolution could be implemented. The previous Code Generator's execution environment relied heavily on the Naming Service because each component needed to know the exact location of its attached components in order to obtain a direct reference to them for remote method invocation. The current CORBA-based execution environment registers each and every component with the Naming Service, but this information goes mostly unused at the component level. This is due to the use of the event service for message delivery. In other words, components no longer need to know the exact location of their attached components, as the event service takes care of this internally. Currently the Naming Service is only used to find the location of the Event Service. The ability of each component to find each of the other components is not explicitly required since the Event Service does this inherently. Individual component name registration is still supported for the sake of future expansion and experimentation.
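
As a rough sketch of how the current runtime uses the Naming Service, the Framework resolves the Event Service approximately as follows (assuming TAO's standard CosNaming and RtecEventChannelAdmin stubs; the registration name "EventService" and the helper function name are illustrative, not the exact Framework code):

    #include "orbsvcs/CosNamingC.h"
    #include "orbsvcs/RtecEventChannelAdminC.h"

    // Resolve the Event Service through the Naming Service; this is roughly
    // what configureComponent() performs inside the Framework.
    RtecEventChannelAdmin::EventChannel_var
    locate_event_channel(CORBA::ORB_ptr orb)
    {
        CORBA::Object_var ns_obj = orb->resolve_initial_references("NameService");
        CosNaming::NamingContext_var naming =
            CosNaming::NamingContext::_narrow(ns_obj.in());

        CosNaming::Name name;
        name.length(1);
        name[0].id = CORBA::string_dup("EventService");  // assumed registration name

        CORBA::Object_var ec_obj = naming->resolve(name);
        return RtecEventChannelAdmin::EventChannel::_narrow(ec_obj.in());
    }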

5.2.3. CORBA's Real-Time Event Service (RTES):

The event service provides a mechanism for channeling events, or messages. It allows for multicast (or filtered broadcast) delivery, message prioritization, and scheduling. Although many of CORBA's Event Service functions discussed here are dissimilar to other communication environments, the only basic requirement for our general runtime system is that messages be deliverable in a timely fashion. The core functionality of the Event Service could be re-implemented using direct point-to-point TCP connections, UDP multicast, CAN bus protocols, or any other communication mechanism. The benefit of using CORBA's Real-Time Event Service is its ability to specify scheduling algorithms and properties. The event service is simply a communications environment, and so its basic functionality can be re-implemented on any given communications platform. For experimentation purposes, it is convenient to utilize the features that are already implemented in the RTES.

The event channel is a broadcast/multicast medium, and so there is no explicit source-to-destination event mapping; specifically, there is no source-based routing. In other words, a component cannot send an arbitrary message to a named component. Two tasks must be completed before an event can be transmitted between two components. A sender must notify the event service that it will be publishing an event of a specific type. A recipient must also notify the event service that it is interested in either a single event or a group of events. If these two conditions are not fulfilled, the event service will filter out that event. According to the AutoFOCUS model, communication channels are static and clearly defined. At system startup, an implicit communications network of point-to-point message delivery channels is built by requiring that each component register for the events it wishes to listen to, and for the events it will send. This registration builds the event filters in the Event Service and creates the illusion of destination-specific delivery. Other communications platforms may not require explicit pre-registration of message endpoints, and so this is a CORBA-specific requirement.

Events contain a simple header and a payload. The header contains an event type, a source, TTL (time to live) value, creation time, and some unusable benchmarking data [TAODoxygen], [OCI]. Event types and sources are numeric values over a finite range of useable numbers, each represented by an unsigned short. Some values at the beginning of this range are reserved. For the sake of readability, our defined events are contained in an enumerated type starting with the constant value ACE_ES_EVENT_UNDEFINED [OCI].

The event source is present for the sake of expanding the number of events that can be addressed. The Code Generator ensures the source field is kept unique for each component. Currently there are no validation checks on the event source. The source is simply used as a debugging trace tool and is not currently used in any execution logic. Event validation and security is a subject of future work. Event names contained in the enumerated type are generated in the form event from source. This enumeration is defined in the CommonLibrary, discussed in depth in Chapter 5.3.2 below. Although the naming convention of the event types helps readability immensely, there is no validation of the event usage, and so we rely on the correctness of the model specification and Code Generator implementation to maintain system validity. The user/developer should never modify the event sending/receiving code without proper cause and knowledge. The event system resides on a thin line of stability and should be treated as such when modifying generated code. There should be no logical errors in the communication system as generated by the Code Generator. To aid in debugging errors that may arise, a monitoring tool (discussed in Chapter 5.3.3.3 below) was developed to debug event traces and diagnose event message errors.

Just as there is no logical validation on the event header after the modeling phase, there is no logical validation on the event payload either. The Code Generator is responsible for packing the appropriate payload with the appropriate event header before sending, and unpacking the payload into the appropriate type upon receiving. The model assumes that communication channels are uniform, whereas the implementation of the communication channel in the real system is not uniform. The required data type conversion is done at the sending and receiving ends so as to create the illusion of a uniform communications channel, a process that leaves room for future runtime and compile-time verifications.

Marshaling, or placing a data type into a given event payload, requires that the data type be serialized into a byte stream. There are many ways to accomplish this marshaling. CORBA provides a convenient mechanism: "Users wanting maximum flexibility can use an Any, users that only have one type of event may use structures, other users may prefer union, trying to strike a balance between performance and flexibility. Users willing to implement their own marshalling may use a sequence of octets." (http://www.dre.vanderbilt.edu/Doxygen/Current/html/tao/rtevent/) The insertion and extraction operators for CORBA::Any (<<= and >>=) are automatically created by CORBA's IDL compiler and are used to convert an IDL-specified data type into, and back out of, a CORBA::Any type. The Any type can be thought of as a generic data type that can hold anything, roughly similar to variants in some scripting languages. Any types are not analogous to objects because Any types contain only data, whereas objects are typically more complex, containing both data and code. Because these operators are automatically generated, we can reduce the problem of marshaling to simple variable assignment. The operators are a project/IDL-specific conversion, meaning that any code wishing to send or receive an event must be compiled with the definitions of these operators. For code generation, this means the entire event-handling infrastructure is regenerated for each component. Code redundancy is not particularly harmful, but it does unnecessarily increase the compile time and the complexity of the generated code. A Common Library (discussed in Chapter 5.3.2 below) is provided to reduce this redundant compilation.
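
As a rough sketch (assuming the IDL compiler has been run so that the Any operators for the Figure 5-1 types are generated; the include name follows the SystemC.* convention from Figure 5-2, and the helper functions are illustrative):

    #include "SystemC.h"  // IDL-compiler output; defines CLS::PortWriteEventMSGS and its Any operators

    // Marshal a payload struct into a CORBA::Any before submitting the event.
    CORBA::Any pack(const CLS::PortWriteEventMSGS &msg)
    {
        CORBA::Any payload;
        payload <<= msg;                      // insertion operator generated by the IDL compiler
        return payload;
    }

    // Demarshal the payload on the receiving side; returns false on a type mismatch.
    bool unpack(const CORBA::Any &payload, CLS::PortWriteEventMSGS &out)
    {
        const CLS::PortWriteEventMSGS *tmp = 0;
        if (payload >>= tmp)                  // extraction operator generated by the IDL compiler
        {
            out = *tmp;
            return true;
        }
        return false;
    }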


5.3. A Modular Runtime Platform Implementation Details

The event-handling infrastructure, as with the naming infrastructure, is inherently modular because each component makes use of the same subset of functionality. It makes sense to isolate static code from the dynamic code in order to accomplish two things. First, redundant code can be contained in library modules, reducing compile time. Secondly, and more importantly, wrapping complex code in simple APIs can greatly reduce the complexity of generated code. In an ideal scenario, code generation could be simplified to a point where it can be all but eliminated, with the exception of the state transition machine contained within each component, by simply utilizing very generic naming conventions. Our method of code generation uses intuitive and descriptive name replacement and more advanced constructs for the sake of readability and usability. The basic approach of utilizing libraries to simplify code generation is an important contribution of this work.

[Figure 5-3 diagram: three code layers. Framework Library (TickableComponent) - static, hand-generated library code containing non-project-specific CORBA code; Common Library (POA_[ProjectName]::[ComponentName]) - CORBA-generated project-specific and component-specific code; Execution Components (Project Components) - dynamically generated via the Code Generator, containing no CORBA-specific code.]

Figure 5-3 Component Dependencies The three basic units (or modules) of code exist to isolate platform-specific code (i.e. CORBA), project-specific code, and component-specific code. This isolation simplifies code generation by clearly identifying static and dynamic code.

5.3.1. Framework

The generated code requires a runtime system in which to execute. Although this runtime system could be generated dynamically along with the system code, it is desirable to separate the two. Deployment of multiple systems is simplified if there is a common execution environment. Modularization of this execution environment allows for future expansion, and even complete replacement, without significantly affecting the generated code or the Code Generator itself. This chapter discusses the Framework in terms of how functionality was encapsulated to provide a modular Framework.

There are certain basic tasks required to connect to the CORBA infrastructure. Tasks like connecting to the event service and binding a name are relatively generic, differing only in the name strings involved. A library called Framework was built to contain many of these basic and common tasks, exposing a simple API to call on these tasks from a higher level. The Framework was designed to be both generic and modular; that is, it contains no dynamically generated code and it encapsulates most of the CORBA API. As a result, the same Framework can be used for multiple different projects. More importantly, the encapsulation of the CORBA-specific code simplifies the dynamically generated code. Complex operations such as initialization of the component can be handled in the Framework and exposed through a set of clear, concise, and generic APIs. More than an optimization on compiling runtime code, the Framework is an optimization of the code generation process itself.

AbstractComponent
    public:
        AbstractComponent();
        ~AbstractComponent(void);
        configureComponent(CORBA::ORB_var orb, CORBA::Object_ptr objRef,
                           string componentname, EventNamespace::EventSource source);
        findPartners();
        handleEvent(EventNamespace::EventType type);
        prepareEventConfiguration();
        SubmitEvent(CORBA::Any payload, EventNamespace::EventType type,
                    EventNamespace::EventSource source);
        SubmitEvent(CORBA::Any payload, EventNamespace::EventType type);  // use default source
        addSingleEventInterest(EventNamespace::EventType type);
        addConjunctiveEventInterests(EventNamespace::EventTypeSet &set);
        registerEventTypeToPublish(EventNamespace::EventType type);
        // convert an event type to a string; defined in the inherited class
        std::string getEventTypeString(EventNamespace::EventType type);
    protected:
        CORBA::ORB_var orb;
        CosNaming::NamingContext_var naming_context;
        EventNamespace::EventSource myEventSource;
    private:
        PortableServer::POA_var poa;
        RtecEventChannelAdmin::EventChannel_var event_channel;
        EventNamespace::EventSetList setsToPublish;
        EventNamespace::EventTypeSet eventTypesToPublish;
        bool usingConsumer;
        bool usingSupplier;
        // used to talk to the consumer
        RtecEventChannelAdmin::ProxyPushConsumer_var consumer_proxy_;
        // used to get stuff from the supplier (i.e. this is the consumer)
        EventConsumer_i * theEventConsumer;
        connect_to_push_supplier(const RtecEventChannelAdmin::SupplierQOS &subscriptions);
        disconnect_from_push_supplier();
        EventNamespace::PublicationList * buildPublicationsForConsumer();
        ACE_SupplierQOS_Factory buildPublicationsForSupplier();
        createEventChannel();

TickableComponent
    public:
        TickableComponent();
        ~TickableComponent();
        tick();
        move();
    protected:
        EventNamespace::State state;

EventConsumer_i : POA_RtecEventComm::PushConsumer
    public:
        EventConsumer_i();
        connect(RtecEventChannelAdmin::EventChannel_ptr event_channel,
                const RtecEventChannelAdmin::ConsumerQOS subscriptions,
                AbstractComponent * registercomponentcallback);
        disconnect();
        push(const RtecEventComm::EventSet & data);
        disconnect_push_consumer();
    private:
        AbstractComponent * callback_component;
        RtecEventChannelAdmin::ProxyPushSupplier_var supplier_proxy_;

Project Components (outside of the Framework library)

Figure 5-4 Framework Class Diagram This diagram shows the internal architecture of the Framework library. As depicted in Figure 5-4, all project-level components inherit from the TickableComponent, which in turn inherits from the AbstractComponent. The event consumer delivers received events to the project component through the AbstractComponent. The monitor component inherits directly from the AbstractComponent since the monitor does not implement a tick() method. The move() method has been deprecated.

By encapsulating CORBA-specific code and operations, the Framework provides a convenient method for exploring other runtime platforms without compromising the integrity of the dynamically generated code. For example, the Framework exposes an inheritable object called AbstractComponent, from which the dynamically generated component object code ultimately inherits. AbstractComponent contains a number of exposed methods, the most notable of which are ConfigureComponent(), SubmitEvent(), and HandleEvent(). ConfigureComponent() is a sort of constructor that allows the child object to perform CORBA-specific operations such as Naming Service and Event Service initialization and binding. No CORBA code is directly called by the dynamically generated child component; only the call to the generic method ConfigureComponent() is needed. Similarly, the SubmitEvent() and HandleEvent() methods are generic wrappers for the more complex operations of sending and receiving CORBA event messages.
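
A minimal sketch of what this looks like from the generated component's side (the component name, event constants, and the start() helper are illustrative; the Framework calls are those listed in Figure 5-4):

    // Hypothetical skeleton of a generated component: only generic Framework
    // methods are called, and no CORBA API appears directly in this code.
    class ComponentLM : public TickableComponent /*, public POA_CLS::LM */
    {
    public:
        void start(CORBA::ORB_var orb, CORBA::Object_ptr poa_obj)
        {
            // Naming Service and Event Service setup happen inside the Framework.
            configureComponent(orb, poa_obj, "LM", LM_SOURCE);   // LM_SOURCE: assumed constant
        }

        void tick()
        {
            // ... nested if/else state machine generated from the model ...
            CORBA::Any payload;
            // payload <<= someOutputValue;                      // marshal an output port value
            SubmitEvent(payload, DOOR_LCKD_SIG_EVENT);           // delivery handled by the Framework
        }
    };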

[Figure 5-5 diagram: Component X on Processor 1 calls SubmitEvent; the event passes through the Custom Runtime Framework and the Real-Time ORB & Event Service before reaching Component Y on Processor 2.]

Figure 5-5 Send Event - Overview A message (event) sent from ComponentX to ComponentY does not travel directly between the two. The sending of an event is handled by the Framework and the underlying middleware, CORBA's Real-Time Event Service in this example.

Dynamically generated component code is not concerned with the fact that its messages are traveling over CORBA's event channel, only that the messages ultimately get to their destination. It is in fact possible to replace the underlying communications channel, say with TCP, by simply replacing the SubmitEvent() (and the more complex HandleEvent()) code. Changing the Framework code has little to no impact on the dynamically generated code, provided the changes are logically sound and do not modify the exposed Framework API. Clearly, introducing new communications mechanisms can have far-reaching effects on the system's execution. Problems arising from switching communications mechanisms can, however, be more predictable than the problems that surface when deploying a new system to any communications environment.

The ability to design multiple Frameworks, targeting multiple communications platforms, is useful to a concentric development cycle. Although CORBA is a useful test bed environment, it is not very practical in the automotive domain. In this case, a Framework targeting the CAN bus and the CANOpen protocol as the communication environment [CANWEB] could provide a further approximation of a production system. In fact, there is no reason why a carefully designed Framework, targeting embedded components, could not be used in a production vehicle. Deployment of executable code in this embedded environment may be time consuming, and so it would be beneficial to have worked out some bugs in the easier-to-deploy CORBA environment first.

The concentric development cycle outlined in Chapter 1.4 above allows us to test specific aspects of the system under development. A CORBA Framework could be used to test the behavior of a given system in a distributed environment using a particular scheduling behavior. To run on a TCP or CANOpen based Framework, we would have to implement a custom scheduling behavior. Knowing which is the appropriate scheduling algorithm from tests on the CORBA Framework can reduce the amount of time and effort spent implementing and experimenting with custom schedulers on the ultimate target system. Having multiple Frameworks in place allows for the testing of many distinct aspects of the system model under development in a more realistic environment.


[Figure 5-6 diagram: two parallel stacks. Left stack: Component, Code Generator Framework, Code Generator Common Library, Naming Service / Event Service / RT-CORBA, Operating System. Right stack: Component, Code Generator Framework, Code Generator Common Library, DNS / send() / recv() / TCP / UDP, Operating System.]

Figure 5-6 Framework Hierarchy - Multiple Platforms This shows the modular Framework hierarchy using two different communication platforms. The component and operating system remain the same, but the communications medium and tools are different. The Framework is an execution environment, not a communication environment. Although multiple Frameworks may exist, every component must utilize the same Framework in a given execution run for communication to be consistent. C++ is the language of the Framework, and so the device running the code must have the ability to compile and/or execute C++ code. It is possible for the generated C++ code to interoperate with other languages (as demonstrated by the monitor component discussed below) on the same system. Currently, the Code Generator is limited to directly outputting C++ code. It is possible, given the flexibility of the Framework, to implement a common communications environment in order to utilize non-C++ embedded devices such as a BASICStamp or Java-enabled devices. This would require reimplementation of both the Code Generator and Framework, which is no small undertaking. It is possible to make the code translation by hand, given the generated code and Framework (with a common communications medium) as a guide. Support for other languages is, of course, left to the developer implementing the custom system. C++ has a wide range of support and so it should suffice for most cases.

The modularity of the Framework is achieved through careful separation of code and through the use of a Dynamically Linked Library (DLL). A DLL allows multiple executables to externally link to a single library file, meaning it can be compiled once and linked at runtime to all dependent projects, reducing the size of each executable. It was noted that much of the debugging process would take place with multiple processes collocated on the same machine, all of which could bind to the same DLL file. The CORBA libraries must be deployed in any case, and so it is envisioned that the Framework would be deployed at the same time. In a deployment setting, the library type could also be changed to make the executable self-contained. The purpose of the Framework in this version of the Code Generator is to be as generic as possible. As a result, its code base is not likely to change, and so the Framework DLL could be deployed along with the standard CORBA deployment files.

5.3.2. CommonLibrary

The CommonLibrary was designed as a repository for all the utility code specific to the project. It contains event enumerations and data type conversion functions. More importantly, it contains the backend CORBA code generated via the IDL compiler. For the sake of expediting the compilation process, these files need only be compiled once and subsequently linked to the dependent components. The CommonLibrary is compiled as a statically linked library. Unlike a DLL, the static library code is compiled into the resulting executable (i.e. the component). Deployment is simplified in that the executables are the only dynamically created files that need to be distributed.

The CommonLibrary is not generic, and so its contents cannot reside in the Framework. The fundamental purpose of the CommonLibrary is to be a repository of the code which is common to all components and to eliminate redundant compilation. The statically linked library accomplishes these two tasks perfectly. All of the files in the Framework and CommonLibrary can be compiled directly into the component's executable, or can be compiled into the specific library types for other operating systems.

The Code Generator creates three dynamic files for the CommonLibrary: an IDL file, [project_name]_common.cpp, and [project_name]_common.h. As a pre-compilation step, the IDL compiler is invoked on the IDL file, creating a number of source files. These files contain code for the client and server implementations of the CORBA backend and are compiled into the CommonLibrary. The [project_name]_common.* files contain enumerated types for numeric value definitions (event numbers, etc.) as well as utility functions such as conversion from a numeric event type to a string for debugging purposes. These files are then compiled into [project_name].lib using a standard C++ compiler (Microsoft Visual C++ in this case). The specific project name is used for the file name to avoid accidental linking of different common libraries to the project components.
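
As a hypothetical illustration of what this utility header might contain (the event values and names below follow the CLS example and the "event from source" naming convention, but are not the actual generated output):

    // Hypothetical sketch of CLS_common.h: project-wide enumerations plus a
    // debugging helper mapping numeric event types back to readable names.
    #ifndef CLS_COMMON_H
    #define CLS_COMMON_H

    #include <string>

    namespace EventNamespace
    {
        enum EventType
        {
            // Values start after the range reserved by the event service
            // (ACE_ES_EVENT_UNDEFINED); the base value here is illustrative.
            EVENT_LCK_FROM_KF = 16,
            EVENT_UNLCK_FROM_KF,
            EVENT_DOOR_LCKD_SIG_FROM_LM,
            EVENT_STARTED,
            EVENT_ALL_COMPONENTS_STARTED
        };
    }

    // Convert an event type to a string for debugging and monitoring output.
    inline std::string getEventTypeString(EventNamespace::EventType type)
    {
        switch (type)
        {
        case EventNamespace::EVENT_LCK_FROM_KF:            return "LCK from KF";
        case EventNamespace::EVENT_UNLCK_FROM_KF:          return "UNLCK from KF";
        case EventNamespace::EVENT_DOOR_LCKD_SIG_FROM_LM:  return "DOOR_LCKD_SIG from LM";
        case EventNamespace::EVENT_STARTED:                return "STARTED";
        case EventNamespace::EVENT_ALL_COMPONENTS_STARTED: return "ALL_COMPONENTS_STARTED";
        default:                                           return "UNKNOWN";
        }
    }

    #endif // CLS_COMMON_H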

5.3.3. Execution Components

Components are the executable units of operation within the simulation environment, and so they are logically the executable units in the real execution environment as well. Components are compiled into executable binaries, which can be run in distinct processes and can be compiled to run on different machines. A component's fundamental task is analogous to the internal state machine defined in the model; that is, to receive inputs, execute an enabled transition, and write outputs.


[Figure 5-7 diagram: inside the Component X sandbox, once All_Inputs_Read? is answered YES, tick() performs an enabled transition, consuming the inputs and producing the outputs.]

Figure 5-7 Example Component State Transition All inputs have been read, the tick event is being executed and the outputs will subsequently be written. The developer has the option to integrate custom code before the generation process by modifying template files (advisable), or after the generation process by modifying the generated code (not advisable).

Component projects are prefixed with the word Component followed by the name of the component. This eases distinction from other files. Components are the executable units of operation in the runtime system and utilize the Framework and Common Libraries described above. They are relatively modest in their implementation, as much of the utility code has been moved to the libraries. A given component simply contains an object with some startup and configuration code as well as some other code specific to the class of the component. There are three distinct types of components: project components (denoted by Component[Name]), ComponentEnvironment, and ComponentMonitor.

Each component type shares a common ancestry in the Framework, and they all function in much the same way. Components make use of the multiple inheritance provided by C++. Components inherit from the Framework class TickableComponent. This in turn inherits from AbstractComponent, which contains the CORBA-specific code for initialization and for dealing with events. Components also inherit from the Portable Object Adaptor class for that specific component. Each POA class is produced from the component definition in the IDL file as part of the IDL compilation process. Inheriting from the POA class gains the component access to the CORBA backend and is important for sending and receiving events. In particular, CORBA produces some data marshaling code specific to the data type defined in the IDL file for this specific component. This data marshaling code greatly simplifies the process of sending and receiving event data. When porting the Framework to a different communication environment, data marshaling code will need to be implemented.

The details of each component type (model components, ComponentEnvironment, and ComponentMonitor) are discussed below:

5.3.3.1. Model Components (Component[Name])

There are multiple components in the model, and each terminal subcomponent logically maps to a component project. For simplicity and readability, the name of the component in the model is used in the component project name. Each component contains a single state machine derived from the corresponding component in the model. The state machine is a series of nested if/else statements containing the appropriate internal variable assignments for each given state. A component only has one state machine, and it is contained in the tick() method. This method is called only upon receiving all of the input values. With simple message delivery, keeping track of read inputs can be accomplished by maintaining state within each component. Recall that in AutoFOCUS, input ports can only buffer a single message at a time [HuberSchatz97]. Under this assumption, each component must create a means to notify the sending component that its input buffer is full.

CORBA's Real-Time Event Service offers a simplification on maintaining message input state. Events can be aggregated into a conjunct group, which means all events in that group must be present before they are delivered to the destination. Conjunct groups effectively push the responsibility of tracking input port status to the event service, reducing the task of the component to simply counting the number of events received. All events in the group will be delivered in sequence when they are available. Upon receiving a number of events equivalent to the number of input ports, the tick() method can be called. Tick is responsible for utilizing the read inputs to select an enabled transition, executing accordingly. This involves processing the nested if/else blocks and executing an available transition.

Many possible conditions can be satisfied, but only one transition can be selected. Unfortunately, the nature of the nested if/else block means that, given a set of available transitions, the first one through the block will always execute. Other transitions need to execute at some point, but they will not, due to the ordering of the if/else block. To avoid the deterministic behavior induced by a structured if/else block when selecting one of many available transitions, a randomness factor must be introduced. A number of possible solutions for this problem exist; the one that was settled on was a bag solution [Ahluwalia05]. Following the execution of the state machine (tick), all of the output ports are written by submitting each output value to the appropriate destination via the event service, concluding the discrete clock cycle.
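
A minimal sketch of the bag idea (the state names, guards, and member layout are illustrative, not the generated code): enabled transitions are collected first, and one is then picked at random instead of always taking the first match.

    #include <cstdlib>
    #include <vector>

    enum State { UNLOCKED, LOCKED };

    struct LockManagerSketch
    {
        State state;
        bool  lckPresent;          // buffered input ports (one message each)
        bool  unlckPresent;

        void tick()
        {
            std::vector<int> enabled;                       // the "bag" of enabled transitions
            if (state == UNLOCKED && lckPresent)   enabled.push_back(0);
            if (state == LOCKED   && unlckPresent) enabled.push_back(1);
            if (enabled.empty()) return;                    // no transition enabled this cycle

            // Pick one enabled transition at random rather than the first match.
            switch (enabled[std::rand() % enabled.size()])
            {
            case 0: state = LOCKED;   break;                // outputs would be written here
            case 1: state = UNLOCKED; break;
            }
        }
    };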

On startup, the component's object is configured to listen for events from attached components: those components that were directly or hierarchically attached via a communication channel in the model. A conjunct group of all input ports is built based on the events to be received from all attached components. Individual control message events to be listened for are also registered. The component's object is configured to publish events to those components to which it is attached. Notification is given that this component has started by submitting a STARTED event. When using the CORBA-based Framework, the object broker is configured and control is given to the ORB. At this point the component is started and waiting for events. The first event a standard model component should receive is the ALL_COMPONENTS_STARTED control message. This signifies that the system is ready to proceed and the component should force an execution of its tick method and write all of its outputs.
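
A minimal sketch of this startup registration, using the Framework calls listed in Figure 5-4 (the event constants, the set's API, and the choice of ports are illustrative):

    // Hypothetical startup registration for a generated component.
    void ComponentLM::configureEventsAndAnnounce()
    {
        // Group all input-port events conjunctively: the Event Service delivers
        // them only once every attached component has written its output.
        EventNamespace::EventTypeSet inputs;                 // assumed std::set-like container
        inputs.insert(EVENT_LCK_FROM_KF);
        inputs.insert(EVENT_UNLCK_FROM_KF);
        addConjunctiveEventInterests(inputs);

        // Control messages are registered individually.
        addSingleEventInterest(EVENT_ALL_COMPONENTS_STARTED);

        // Declare the events this component will publish on its output ports.
        registerEventTypeToPublish(EVENT_DOOR_LCKD_SIG_FROM_LM);

        // Announce that this component has started.
        CORBA::Any empty;
        SubmitEvent(empty, EVENT_STARTED);
    }
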
[Figure 5-8 message sequence chart: participants Main, Environment Servant, AbstractComponent, EventConsumer, ORB, and Event Channel. During component startup, Main calls CORBA::ORB_init and creates the Environment servant instance; configureComponent resolves the RootPOA, activates the poa_manager, resolves the NameService, and binds the component name; prepareEventConfiguration() builds the listen and publish EventTypeSets; a new EventConsumer is created and connected (connect_push_consumer() / connect_push_supplier()); SubmitEvent() pushes the startup event via the consumer proxy; finally Main hands control to orb->run().]

Figure 5-8 Component Startup Message Sequence This message sequence diagram shows the general event sequence for a given component on startup. In Figure 5-8, Main is the main execution thread of the initial execution. The CORBA environment is prepared, event delivery is configured, the initial startup event is submitted, and the execution flow of the main thread is given to orb->run(). At this point, execution is determined by the receiving and subsequent sending of events throughout the system.


[Figure 5-9 message sequence chart: participants Main, Environment Servant, AbstractComponent, EventConsumer, ORB, and Event Channel. A received event travels from the Event Channel through the ORB to the EventConsumer, which delivers it to the servant via handleEvent().]

Figure 5-9 Receive Event Message Sequence This message sequence diagram shows the general event sequence for receiving events from the event channel. Figure 5-9 shows the delivery path of a received event. Events are delivered to the EventConsumer, which resides in the Framework library. From there, events are handled by the component's servant, in this example the environment component. Execution flow control returns to CORBA, but before this happens, the component is free to send events back to the channel. Sample code for receiving an event is given in Figure 4-6.

[Figure 5-10 message sequence chart: participants Main, Environment Servant, AbstractComponent, EventConsumer, ORB, and Event Channel. The sending component calls SubmitEvent() on the AbstractComponent, which pushes the event onto the Event Channel via the consumer proxy.]

Figure 5-10 Send Event Message Sequence This message sequence diagram shows the general event sequence for sending an event to the event channel. Sending of events is accomplished by submitting the event through the abstract component, which in turn pushes the event onto the Event Channel. Execution control returns to the submitting component, which should in turn yield flow control back to CORBA. Sample code for sending an event is given in Figure 4-7.

5.3.3.2. ComponentEnvironment

The environment is a special case of the model's components. One important execution note is that the environment takes on a subset of the roles the synchronization component (Scheduler) held in the previous version of the Code Generator's execution environment. On startup, there is a problem determining when all components have successfully started. All components must be started and prepared for action before the system can start. A single component must be responsible for keeping track of all started components. More importantly, this component must be the first component started. Each subsequently started component is unaware of all components that have already started, and so the first component must maintain a list of components as they start. Upon receiving notice that each component has started, it must then send out a notification to all components, notifying them that all components have started. The environment is a logical component to start first, and so this startup management was delegated to the environment component. Upon hearing from every started component, the environment broadcasts the ALL_COMPONENTS_STARTED control message, forcing the initial clock cycle of the system as described above.
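
A minimal sketch of this bookkeeping (the counter fields and event constants are illustrative; handleEvent() and SubmitEvent() are the Framework methods from Figure 5-4):

    // Hypothetical startup bookkeeping inside the environment component.
    void ComponentEnvironment::handleEvent(EventNamespace::EventType type)
    {
        if (type == EVENT_STARTED)
        {
            ++startedCount_;                             // one STARTED event per component
            if (startedCount_ == expectedComponents_)
            {
                CORBA::Any empty;
                SubmitEvent(empty, EVENT_ALL_COMPONENTS_STARTED);  // forces the first clock cycle
            }
            return;
        }
        // ... all other events feed the environment's normal input-port logic ...
    }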

5.3.3.3. ComponentMonitor:

The monitor is an external component that is not defined in the model. Its purpose is to provide an interface into the system by listening for events, and by injecting arbitrary events on behalf of the other components. These functions are for debugging purposes and are not a critical part of the system. In fact, the monitor is not part of the startup components, and so it is free to start or stop repeatedly during an execution of the system. The monitor provides a convenient place to do centralized event traces, mapping message sequences and validating ordering if needed. In this respect, the monitor can be thought of as a watchdog, as in traditional real-time systems. Unlike a standard watchdog timer, this monitor can act on more complicated sequences of events.

The monitor component is a special case of executable components because it is compiled as a dynamically linked library and instantiated from a Visual C# program. The reason for this complexity is to simplify the creation of a user interface. It is far easier to build a window-based application using C# or Visual Basic. Dynamically linked libraries make it possible to combine the straightforward user interface development of C# with the power of a C++/CORBA backend by modularizing the two tasks. The exposed functionality of the monitor backend is simply to send or receive an event, making the integration of backend C++ code and front-end user interface code relatively simple. Other user interface tools can be developed to suit the specific deployment environment. Desktop PCs are the immediate deployment test bed environment, and so the ease of creating and modifying a user interface took precedence over the monitor's cross-platform utility. Further justification for an easy-to-update user interface is that it is not yet known what data the UI should contain and how the user/developer will interact with it. As a result, this monitor's purpose is as much to explore how it will be used to debug runtime applications as it is to actually debug those applications. Ease of modification is therefore a paramount concern in the early stages of developing the tool-chain itself.

The decision to use a DLL in the monitor is completely unrelated to the decision to use a DLL in the Framework. In the monitor, a DLL is used to cross language boundaries, whereas in the Framework a DLL is used as a compilation and distribution optimization. Language interoperation could have been accomplished through a number of mechanisms such as named pipes, shared memory, or other inter-process communication tools. In fact, future versions of the tool-chain should include monitors targeting the specific runtime environment of the target production system. Visual Studio, as the primary development environment for the executable code phase of the development process, provides a very clean interface between languages in the form of a platform invoke (pinvoke). In short, this pinvoke lets non-native code, such as Visual C#, directly call native code contained in a DLL [PINVOKEWEB]. This allows us to utilize the same backend event sending and receiving code found in each and every component. Sharing the same fundamental code base is important for maintenance purposes. The monitor component is fundamentally the same as every other component, and so changes to the component code can be centralized for all components (including the monitor).


// for pinvoke
using System.Runtime.InteropServices;

[DllImport("Monitor.dll")]
public static extern int PrepOrb(string c);

[DllImport("Monitor.dll")]
public static extern bool RunOrbAsynchronously();

[DllImport("Monitor.dll")]
public static extern bool InjectEvent(int eventtype, string port);

public delegate void StatusMessageDelegate(string strMessage);

[DllImport("Monitor.dll")]
public static extern bool setCallbackStatusMessage(StatusMessageDelegate function);

Figure 5-11 Interoperation Between C++ and C# These lines of C# code allow access to the API exposed by the C++ DLL. To begin using the monitor, the user calls PrepOrb() to configure the system. RunOrbAsynchronously() spawns a new thread in the DLL, allowing control to return to the window. The user can call InjectEvent() to send messages to the system. In order to receive events, the callback function should be registered with the DLL. Received events are delivered to the user interface through this thread, via the registered callback function. Preparing the standalone executable component code to be compiled into a DLL required a different approach to the execution flow. In a standard console application, the ORB takes the flow control from the program (through a call to orb->run();), listens for events, and calls the appropriate handler functions in the object. For a windowed application, flow control must be maintained, to some degree, by the window itself in order to process paint events and remain responsive to the user's interaction. To accomplish this, the monitor backend simply spawns a thread to run the ORB and returns the main thread to the window. Sending events is simple because a direct method invocation is used. The SendEvent() method was exposed via the DLL's export capability. Handling received events proved to be the most complex undertaking, as it is the ORB itself that invokes the event handler. This required a callback function into the windowed thread, effectively having the C++ code call C# code across a thread boundary. A combination of function pointers in C++ and function delegates in C# was used to accomplish the delivery of an event through the backend C++ code to the frontend C# code. Please refer to the monitor component code for details on the implementation.
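
A minimal sketch of the native side of this bridge (the export names mirror Figure 5-11, but the threading and callback plumbing shown here is illustrative, assumes ACE's thread wrapper, and is not the exact monitor implementation):

    // Hypothetical C++ side of Monitor.dll: a function-pointer callback carries
    // received events from the ORB thread back to the registered C# delegate.
    #include "ace/Thread.h"

    typedef void (__stdcall *StatusMessageCallback)(const char *message);
    static StatusMessageCallback g_statusCallback = 0;

    static ACE_THR_FUNC_RETURN orb_thread(void * /*arg*/)
    {
        // orb->run();  // blocks; received events are dispatched on this thread
        return 0;
    }

    extern "C" __declspec(dllexport)
    bool setCallbackStatusMessage(StatusMessageCallback function)
    {
        g_statusCallback = function;     // C# delegate marshaled as a function pointer
        return true;
    }

    extern "C" __declspec(dllexport)
    bool RunOrbAsynchronously()
    {
        // Run the ORB on its own thread so the windowed C# thread stays responsive.
        return ACE_Thread::spawn(orb_thread) != -1;
    }

    // Called from the event handler; forwards a readable trace line to the UI.
    static void reportEvent(const char *description)
    {
        if (g_statusCallback)
            g_statusCallback(description);
    }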

In addition to the simple monitoring and debugging capability, the monitor can generate extended event traces (EETs). Extended event traces can provide a useful view of the interactions between components for specific tasks through entire services [BroyHofmannKrüger97]. AutoFOCUS provides extended event traces during simulation [HuberSchaetzSchmidt96]. Simulation EETs from AutoFOCUS could be compared to the EETs collected in the executable environment to prove (or disprove) validity in the translation between simulation and executable environments. EETs in the execution environment can be further expanded by the addition of timing information. Timing information can aid in diagnosing problems in the system arising from timing conflicts.

5.4. Code Generation Approach

The approach to code generation in the current version is distinctly different from the previous version's approach. The components within a given system share common tasks such as initial configuration and message passing. Components are dissimilar in terms of internal data types, the state machine's nested if/else block, and the attachment of communication channels between connected components. The nested state machine code must be dynamically generated, but a sizable portion of the remaining component support code can be static. Code generation in the previous version was a fairly complex undertaking, combining static CORBA-specific code, dynamically generated body code for stub functions, and C++ code embedded in Java strings. As a result, the previous Code Generator was very rigid and not easily changed. In most cases this rigidity is not seen as a problem, since the Code Generator itself is designed to not require frequent changes. Generated code, on the other hand, is fully editable, and so it was taken for granted that minor changes could take place manually after code generation. It is desirable to reduce this post-generation code editing to an absolute minimum, since it is a step repeated each time code is generated. This repeated code editing/merging step is likely to discourage use of the system and so it should be eliminated.

Considering real-world use cases of the Code Generator, we find the generated code must be altered to accept interfaces to sensors and controllers. This interfacing code is entirely application-specific, and so the developer must integrate it with the component code. Custom interface code is not part of the component's state or communication logic, but it is an important part of making the component useful and functional in the real world. In other words, the dynamically generated code eliminates the busywork of coding the communications and state machine infrastructure, but there are aspects of the code that must be implemented by hand. In most cases the custom code can be cut and pasted into the final executable, but this could be a painstaking and error-prone step if numerous iterative changes are made to the system. Ease of use is a primary concern for promoting adherence to the development cycle, and so a simpler means of code generation, one that allows for easy integration of custom code, was devised.

The inspiration for the new code generation process was found in web server tag replacement languages such as PHP [PHPWEB] and ASP [ASPWEB]. PHP and ASP work by including special tags, such as <? . . . ?> or <% . . . %> respectively, inline with standard HTML code. These special tags are processed on the server and replaced with standard HTML. Server-side tags contain anything from simple string replacement to complex functions. Special tag constructs have proven intuitive and useful when generating complex dynamic HTML pages, as the text replacement is done inline with standard HTML code. An example of the tag-based replacement scheme is given in Figure 4-4.

The class of code generation we are targeting is simple name replacement and short block expansion, a very minimal subset of the features provided by PHP or ASP. With the exception of the state machine block, much of the dynamically generated code is fairly simple. Typical dynamic code consists of name replacement (done mostly for the sake of readability) and, for example, short iterative loops over all component names. Tag replacement works very well under these conditions. Ideally, existing tag parsers for XML could have been used, but unfortunately the standard angle bracket tag characters are found frequently in C++ code and easily confuse standard XML parsers. A custom tag parser and tag set was developed [Ahluwalia05] in order to process template files.

A template file is a standard C++ file with special tags of the form <#name#> and <@name@> for single string replacement and looping replacement, respectively. A number of template files were created to address the different component types and for the Common Library. Aside from being more readable, the tag/template approach allows the developer to integrate custom sensor/controller interface code as a part of the development cycle, rather than a post-generation step. The template files are basically C++ source files and so custom code can be pasted into the appropriate template file and section of the template. Each time code is generated from these templates, that custom code is automatically included in the resulting C++ file. This allows the developer to modify the model without having to reintegrate the custom code on each generation cycle.
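
As a hypothetical fragment (the tag names, the loop delimiters, and the surrounding code are illustrative; the actual tag set is defined by the custom parser in [Ahluwalia05]), a component template might contain something like:

    // Hypothetical excerpt of a component template file. <#...#> tags are
    // single string replacements; <@...@> tags mark blocks expanded per item.
    class Component<#ComponentName#> : public TickableComponent
    {
    public:
        void prepareEventConfiguration()
        {
            <@ForEachInputEvent@>
            addSingleEventInterest(<#InputEventType#>);   // expanded once per input port
            <@EndForEachInputEvent@>

            // Hand-written sensor/controller interface code can be placed here;
            // because it lives in the template, it survives every regeneration.
            initCustomSensorInterface();                   // custom, hand-written call
        }
    };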

5.5. Code Generator Architecture

The overall architecture of the Code Generator and supporting runtime Framework has evolved significantly. The most notable change to the fundamental way in which code is generated is outlined in Chapter 5.4 above: the use of a flexible template-based system. Although there are distinct differences in implementation, the Code Generator has maintained some basic architectural similarities through its evolution. The initial Code Generator was written in Java and targeted the C++ CORBA middleware. In other words, Java was used to parse and create C++ code. CORBA, as previously discussed, is a communications environment that offers a robust set of features. Java itself offers libraries to access CORBA objects, but it was not utilized in the target runtime system for two primary reasons: portability and the implementation of enforceable real-time properties. Although Java is generally thought to be portable, it is limited to systems to which the Java interpreter has been ported. C source code is more universally portable because, having been around longer, compilers exist for a wider range of platforms. In terms of real-time property enforcement, there has been a wealth of development from the Java community [J2EEWEB]. We feel, however, that Java was not suited for our target runtime system because C is generally accepted to be the language of choice for real-time system development. It is certainly possible to modify the Code Generator and target runtime Framework to implement other languages, but we feel C/C++ is the appropriate target language for exploring the runtime property specifications.

The mixing of languages between the Code Generator (Java) and the generated code (C++) is a logical decision because Java offers easy-to-use string handling libraries, and because performance of the generation phase is not critical. This separation of languages also helps to distinguish between the Code Generator's internal code and the generated output code. For the end user of the Code Generator, this code separation is irrelevant since the Code Generator is a stand-alone executable. During the development of the Code Generator, however, it was convenient to have a language separation: Java code is clearly part of the Code Generator, while C++ code is clearly part of the generated code.

A series of Java libraries was developed to process the model specification file and format the output code. The model specification file is an XML document exported from AutoFOCUS or M2Code, which is validated (in both form and logical content). Please see Chapter 2.2.2 above for more details on the XML-based model specification. The abstract model is specified as an XML file with an easy-to-navigate document object model. This hierarchically defined data is then used as a symbol table for the remainder of the code generation process [KrügerMüller03]. The ability to easily navigate through the complex model specification was a very important contribution of the initial Code Generator, and so this symbol table became an integral part of the current Code Generator as well. A number of Java-based text processing and formatting libraries were developed to handle file parsing and the indentation of output lines of code. Because the current Code Generator has moved to a template-based generation system, the complex formatting is easier to deal with. Only blocks of code need to be formatted, rather than entire files.

The process of generating dynamic code is relatively simple, given a rigid set of rules. Fortunately, the runtime execution model we are using provides us with such a rigid set of rules, making the task of generating dynamic code easier when compared to a more freeform approach. Integrating this notion of dynamically generated code into the logical process of developing a CORBA application is slightly more complicated. CORBA is a large and complicated middleware. In order to interface with the backend code, an Interface Definition Language (IDL) compiler is utilized to build stub and skeleton code, as well as code to interface with CORBA's backend. This generated stub/skeleton code can be filled in by the developer to easily gain access to the complex CORBA infrastructure. At a high level, the traditional CORBA development process is as follows: the programmer determines which CORBA objects will be used and creates an IDL file with the specifications for these objects. The programmer then runs this IDL file through the IDL compiler, and a number of C++ code files are produced. These files generated by the IDL compiler contain the backend client/server code as well as the stub and skeleton code used for integrating user code with the CORBA backend. The programmer fills in the appropriate stub code with the body code of that particular object function and integrates the necessary driver code. In the initial Code Generator, this process is called populating the stub code. Finally, the programmer compiles the populated source code files using a standard C++ compiler and the appropriate CORBA libraries.

5.5.1. Initial Code Generator Comparison Architecture

For the initial Code Generator, the generated code was produced through a logical mapping of the manual CORBA implementation tasks described above into a two-phase process. The first phase involved building the IDL file by processing the specification and extracting the logical pieces of the model. Because the model can be hierarchically defined, we are only concerned with those pieces that actually perform an executable task. Those model components that contain a state machine are referred to as terminal subcomponents and are the building blocks of the executable system. Each terminal subcomponent in the model maps to a CORBA object. The execution Framework of the generated code places each CORBA object in its own executable, thus making each executable roughly approximate a single terminal subcomponent in the model. Only the terminal subcomponents are present in the final executable system. The non-terminal subcomponents perform no executable tasks, and so they are logically reduced out of the final executable system. This reduction is logically sound since the communication channels are directly mapped between terminal subcomponents through the non-terminal subcomponent hierarchy.

The first phase of the initial Code Generator also generated the code that was ultimately used to populate the stub functions. This code was stored in temporary files for use in the second phase. This part of the code generation process could have been delayed to the second phase, where it would be more logically appropriate, but was left in the first phase for simple performance considerations. Briefly, the first phase parses the specification XML file and the second phase deals with building the completed code files. Moving the stub code generation to the second phase would require parsing the XML file a second time, something the second phase did not otherwise require. Keeping the body creation in the first phase reduced both the complexity of the Code Generator itself and the redundant execution time of generating the actual executable code.

Once the IDL file had been generated in the first phase, the user manually added the required data type definitions. This relatively simple process could have been automated, but due to time constraints and the complexity of the data, the translation of the appropriate data types into the interface definition language was left to the user. The IDL file at this point was manually compiled into the stub/skeleton source code files to be populated in phase two. Since the data type specification already required user interaction between phases one and two, it was left to the user to manually compile the IDL file. Automation could easily have been implemented with a shell command at the beginning of phase two, but it was decided against in order to force the user to make the required IDL file changes and to catch syntactical errors before proceeding to phase two.

After the IDL file was manually modified and compiled, phase two could execute. In this phase, the IDL-generated stub functions were split and then populated with the temp-file code generated in phase one. The Split and Populate operations illustrate the necessary steps as mapped from the manual code writing tasks outlined above. After phase two, there is executable source code for each of the components in the model. The user can then make specific, implementation-related modifications to this source code, and then compile it using a standard C++ compiler and the CORBA libraries. These various steps can be automated, and in fact were integrated via a Coordinator class in the last version of the previous Code Generator.

5.5.2. Initial Code Generator Target Runtime Environment

The execution Framework of this initial code generation bears some discussion, as it is the fundamental reason for the two-phase code generation process described above. As discussed earlier, the AutoFOCUS execution model requires both synchronization and communication channels. CORBA offers many mechanisms that address these two requirements. For the initial runtime Framework's synchronization, the mechanism of choice was CORBA's Real-Time Event Service (RTES, not to be confused with real-time embedded systems). The RTES provides one-way message (i.e., event) delivery and is discussed in greater depth in the Real-Time Event Service Section 5.2.3 above. Control was synchronized through what was improperly called a Central Scheduler, which was responsible for making sure all components had started, sending tick events to all components, and managing the finished events received from each component. No data was passed beyond the event type itself. The Central Scheduler component in the previous Code Generator is inappropriately named, as its primary function is to synchronize the system over a single clock cycle; there was no scheduling in terms of message or deadline prioritization. For the rest of this text, the previous version's Scheduler will be referred to as the synchronization component.

For communication channels and data transfer, the execution environment utilizes CORBA's Remote Method Invocation (RMI) to pass data across process and location boundaries. Each component operates on a discrete clock cycle established by the central synchronization component. In a given clock cycle (tick), each component must select an enabled transition, pass its data to the attached components, and notify the central synchronization component that it has finished the tick operation. Attached components are the terminal endpoints of a communication channel as mapped through the component hierarchy defined in the model. Communication of data to an attached component was done via the remote method call writePort_[portname]_in_in() on the attached component's shared object. The IDL compiler generates RMI stub functions, and so, to use the RMI facility, the appropriate code must be placed in the body of each stub function. Recall that this population was done during the second phase of the initial Code Generator process.
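To make the mapping concrete, the sketch below shows roughly what one tick amounted to in this initial, RMI-based environment. Apart from the writePort-style call described above, all class, member, and method names are illustrative rather than taken from the generated sources.

    // Sketch of one tick of a component in the initial (RMI-based) runtime.
    // All names are illustrative; only the writePort-style call mirrors the text above.
    #include <string>

    struct AttachedStub {                        // stand-in for the attached component's CORBA object
        void writePort_output_in_in(int value);  // generated RMI stub (body filled in during phase two)
    };
    struct Synchronizer {                        // stand-in for the central synchronization component
        void reportFinished(const std::string& component);
    };

    class Component {
    public:
        void executeTick()
        {
            int output = selectAndFireTransition();     // state machine step
            attached_->writePort_output_in_in(output);  // pass data to the attached component
            synchronizer_->reportFinished(name_);       // notify end-of-tick
        }
    private:
        int selectAndFireTransition();
        AttachedStub* attached_;
        Synchronizer* synchronizer_;
        std::string   name_;
    };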

Although the open-source implementation of TAO's IDL compiler could have been modified to insert the body code generated in phase one, this was decided against because of complexity and language choice. In retrospect it may have been fairly simple to insert text at the appropriate place in the IDL compiler, but for the sake of keeping the Code Generator self-contained it was decided to post-process the code files following the IDL compilation. In other words, the IDL compiler outputs source files, and the Code Generator then uses these files as input.

A great deal of the initial Code Generator development focused on the Code Generator itself, rather than on the targeted execution environment. A rudimentary execution environment was developed for the sake of getting the generated code to run. The use of RMI provided strongly typed data transfers through CORBA's object broker, the core CORBA feature. The central synchronization component provided clear synchronization, making the resulting system execution appear very orderly. These strengths are also weaknesses: the RMI facility is not readily exposed to external monitors, making debugging of the system execution difficult and invasive, and the central synchronization component limits the system by preventing any exploration of asynchronous execution models. The synchronization mechanism, as brought to light in [HuberSchatz97], could be distributed, making the resulting system slimmer and more visibly distributed. These key points were identified as areas for improvement and necessitated a redesign of the Code Generator and the execution Framework.

Having the initial Code Generator as a proof of concept was valuable in that it demonstrated the strengths and weaknesses of the runtime system as well as of the code generation process itself. The two-phase approach proved counterintuitive, requiring a great deal of interaction from the user. Although it clearly mapped to the logical (manual) implementation process described above, it was desirable to simplify this process.

5.5.3. Current Code Generator Target Runtime Environment (Overview)

The initial runtime system was used as a starting point. A fundamental part of the previous version's runtime system was a project called Framework, whose general purpose was to centralize common code. Unfortunately, this Framework contained a great deal of project-specific code and consequently required a great deal of user interaction. The new runtime system divides the Framework into two distinct projects: one that contains all the truly generic code required to bootstrap the runtime system, and another that centralizes the common project-specific code. These two projects were named Framework and CommonLibrary, respectively. The purpose of collecting the common project-specific code into a Common Library was to speed compilation. In the previous runtime system, the C++ code files output by the IDL compiler were included in each and every component project, requiring redundant compilation. While this redundancy is not a concern in small projects, it is in large projects with many components. The Framework and CommonLibrary are discussed in the Runtime Platform Section 5.3 above.

Figure 5-12 Runtime Component Hierarchy (General): This figure shows how the runtime components utilize our custom libraries. Components use the Code Generator Framework and the Code Generator Common Library and, in effect, the components are built on top of these libraries.


6. Porting to Other Runtime Environments

Key to the success of this framework-driven approach is the ability to switch runtime environments. Switching runtime environments allows the developer to test specific aspects of the system under development in incremental phases. As discussed earlier, this approach to testing can be useful if the production system is difficult to test directly, or if we wish to quickly test existing algorithms such as schedulers before implementing them on the target system. Having many environments in which to test a system expedites the exploration of new algorithms by exploiting existing code bases, rather than implementing experimental algorithms from scratch. This chapter discusses, in greater detail, what is required to implement a new target runtime system.
Figure 6-1 Framework Porting Dependencies: The XML Model Specification is processed by M2Code and the Code Generator, which produce the KeyFob, Lock Manager, Lighting System, Control, and Crash Sensor Components; these components sit on top of the Framework, which can be replaced or modified with little or no effect on the components, Code Generator, or model. The process of porting to other runtime environments is therefore focused on the Framework portion of this diagram; the component code, Code Generator, and model should be unaffected by a change in the underlying Framework.

Recall the general hierarchy of the runtime system Framework: CORBA is the base environment, the Framework and Common Library provide encapsulation of CORBA code, and the components reside on top of the Framework and Common Library. An analogy can be drawn with the n-tiered development approach [Steiert98] of separating data, business logic, and presentation into three (or more) distinct modular components. In the n-tiered approach, any of the tiers should be modifiable, or its internal logic completely changed (within reason), without affecting the other tiers. CORBA can be thought of as the database tier because it is the medium for the data; in fact, it would be possible to use a standard relational database as the communications medium for a new runtime system, although that would not be recommended. The Framework and Common Library can be thought of as the business logic tier, as their purpose is to interface between the presentation and data tiers. Components, because they are the interactive elements, can be thought of as the presentation tier. Ideally, components should not be affected by changes in the lower tiers; it would be preferable not to modify components at all, as that would require extensive changes to the Code Generator and/or the template files discussed above. This requires a very distinct and concise modularization between the tiers. For the data, or communications, tier we are likely to use some existing protocol such as TCP or CANOpen. Therefore the middle tier, otherwise known as the Framework and Common Library, is what should be modified in order to interface with a different communications medium.

Additional services such as naming and scheduling may exist for the new communications environment we choose, but these services may have to be custom implementations. The Framework is statically generated, while the Common Library is dynamically created through the use of the template files discussed in Chapter 5.4 above. For the most part, the dynamic nature of the Common Library templates should not be a problem when creating a new Framework. The Common Library's original purpose was to encapsulate all common code specific to the project under development and to allow the Framework to be completely generic. The Framework and Common Library can easily be merged if needed, but they should be kept separate for the purposes of encapsulation and modularization. The name-replacement and looping constructs used for project-specific code should remain useful to a new Common Library.
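To illustrate the kind of name replacement these templates rely on, the small routine below substitutes every occurrence of a placeholder with a component name taken from the model. The placeholder syntax shown is hypothetical and not the actual Code Generator tag syntax.

    // Sketch of template name replacement; the placeholder syntax is hypothetical.
    #include <cstddef>
    #include <string>

    std::string replaceAll(std::string text,
                           const std::string& tag, const std::string& value)
    {
        for (std::size_t pos = text.find(tag); pos != std::string::npos;
             pos = text.find(tag, pos + value.size()))
            text.replace(pos, tag.size(), value);   // substitute every occurrence
        return text;
    }

    // Example: expanding a template fragment for the Lock Manager component.
    // std::string out = replaceAll("class __COMPONENT__ : public ComponentBase {};",
    //                              "__COMPONENT__", "LockManager");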

In addition to the internal workings of the runtime system itself, other technologies can be introduced on the periphery in order to better interface with the environment or to act as a gateway to other environments. Automotive networks typically consist of a number of distinctly different networks and as such require gateways to pass relevant data between them. We present an example of integrating web services at the periphery of our system in order to extend the user interface capabilities. Web services have a great deal of potential utility in an increasingly connected automotive industry; they can be employed to handle diagnostic reporting or other information posting and retrieval operations. Currently, web services are inherently unreliable and not suited for a real-time environment. For this reason, it makes sense to deploy this technology at the periphery of our runtime system, partitioning the critical system behavior (airbag deployment) from the non-critical behavior (posting engine diagnostic data).

The remainder of this chapter presents the basic services utilized by our runtime system. A comparison of the basic services with analogs from different systems provides an understanding of the task of porting the overall runtime system to other systems. Finally, we present an example of integrating other technologies (specifically web services) on the periphery of our core runtime system to illustrate the extendibility, both internal and external, of our design.

6.1. Porting Concepts - Basic Services

The remainder of this chapter considers the steps required to implement a new Framework and Common Library. When considering a new Framework, it is very important to consider what functionality is to be supported. A general list of functionality in the current version includes dynamic name resolution, message scheduling, time synchronization [Ahluwalia05], message monitoring, message passing (multicast), and data-type serialization. Of these, only message passing and data-type serialization are fundamental requirements; a minimal sketch of this interface surface is given below.
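The sketch below outlines how small this fundamental surface is: a transport for one-way event delivery and a serializer for event data. The interface and type names are illustrative, not the actual Framework API.

    // Minimal sketch of the two fundamental services a replacement Framework must
    // provide. All names are illustrative, not the actual Framework API.
    #include <functional>
    #include <vector>

    using Payload = std::vector<unsigned char>;      // serialized event data

    class MessageTransport {                         // message passing (ideally multicast)
    public:
        virtual ~MessageTransport() {}
        virtual void send(int eventType, const Payload& data) = 0;
        virtual void subscribe(int eventType,
                               std::function<void(const Payload&)> handler) = 0;
    };

    class Serializer {                               // data-type serialization
    public:
        virtual ~Serializer() {}
        virtual Payload pack(int value) = 0;         // marshal a model data value
        virtual int     unpack(const Payload& data) = 0;
    };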


Figure 6-2 Porting Services and Libraries: When porting to other environments, note that the Component code is not fundamentally changed. The underlying services (for example, the Naming Service, Event Service, and RT-CORBA) are replaced (for example, by DNS, send()/recv(), and TCP/UDP), while the Code Generator Framework and Code Generator Common Library are altered to interface the components with the new underlying services on the given operating system.

Simplified name resolution could use fixed, well-known addresses or existing naming systems such as DNS (Domain Name System). Message scheduling is only required for the implementation of QoS (Quality of Service); QoS might not be a concern in certain test environments, for example early tests of general communications environments, or in systems that do not require prioritized message delivery. Time synchronization, like naming, could be entirely ignored (if distributed monitoring is unimportant), or other tools such as NTP (Network Time Protocol) could be used in place of CORBA's Time Service. NTP was briefly considered as the primary means of clock synchronization for the current Framework, but was decided against because of degraded accuracy over distance and time. Recently an efficient means to improve NTP has been developed [VeitchBabuPsztor04].

Briefly, NTP is used for absolute synchronization, and an averaged system clock counter is used to keep accuracy between NTP updates. This approach may prove useful in a future Framework implementation where clock synchronization is important and where CORBA's Time Service is not available. Message monitoring is not critical, but it is a desirable debugging tool. The choice of communication environment directly affects how message monitoring works: a communication environment that supports broadcast/multicast message delivery enables passive monitoring, while unicast communication requires explicit delivery of duplicate messages to the monitor.

6.2. Basic Services Comparison

Message passing is the critical foundation of the Framework's execution model. Picking the appropriate communication environment and deciding how messages are passed can have profound consequences on the execution of the system. Consider four communication environments, TCP, UDP, CANOpen, and Web Services, all of which are possible foundations for implementing message passing. It should be noted that TCP and Web Services are not particularly suited to a real-time environment, but they are discussed here because they are well-known communication protocols. TCP, CANOpen, and Web Services can provide reliable delivery. UDP can be extended to include reliability, but this is an added expense for the developer. UDP and CANOpen offer broadcast delivery. TCP and Web Services can be used to deliver multiple message copies in series; again an added expense for the developer, and also a potential performance loss. CANOpen inherently provides message prioritization where TCP/UDP and Web Services do not. If message scheduling is important, user-level schedulers can be implemented for these protocols, as is the case with CORBA's scheduler.
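As an example of how little is needed at the transport level, the sketch below implements broadcast message delivery over UDP with POSIX sockets; reliability and message prioritization would, as noted above, still have to be layered on top by the developer, and all names are illustrative.

    // Sketch: broadcast event delivery over UDP (POSIX sockets). Reliability and
    // message prioritization are left to the developer.
    #include <cstddef>
    #include <cstring>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int openBroadcastSocket()
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int enable = 1;
        setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &enable, sizeof(enable));
        return fd;
    }

    // Send one event to every node listening on the given port.
    void broadcastEvent(int fd, unsigned short port, const void* msg, std::size_t len)
    {
        sockaddr_in dest;
        std::memset(&dest, 0, sizeof(dest));
        dest.sin_family      = AF_INET;
        dest.sin_port        = htons(port);
        dest.sin_addr.s_addr = htonl(INADDR_BROADCAST);   // 255.255.255.255
        sendto(fd, msg, len, 0,
               reinterpret_cast<const sockaddr*>(&dest), sizeof(dest));
    }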


TCP - Reliable delivery: yes; Broadcast: user implemented; Message priority: user implemented; Existing name service: DNS; Existing time service: NTP
UDP - Reliable delivery: user implemented; Broadcast: yes; Message priority: user implemented; Existing name service: DNS; Existing time service: NTP
CANOpen - Reliable delivery: yes; Broadcast: yes; Message priority: yes; Existing name service: static node ID #s; Existing time service: time stamps
Web Services - Reliable delivery: WS-Reliable Messaging [WSRELIABLE]; Broadcast: user implemented; Message priority: user implemented; Existing name service: DNS & WSDL; Existing time service: NTP

Figure 6-3 Platform Specific Services Comparison: This table describes the main services required by the runtime system and their respective implementations in TCP, UDP, CANOpen, and Web Services [Stevens03] [CANWEB].

6.3. Web Service Integration

This chapter discusses the integration of our runtime system with web services. There are two approaches that can be taken with respect to the use of web services. The first approach would use web services to replace the communications infrastructure of our runtime environment in a fashion similar to that discussed in Chapter 6.1 above. This approach is not particularly appropriate given that we are targeting a real-time environment, because web services are built on HTTP using TCP as the transport protocol. Web services are better suited for user interface purposes, and so our preferred approach is to use web services for a purpose similar to that of the monitor component discussed earlier. This discussion will present how to alter the monitor component in order to expose a web service.

The monitor component, as discussed earlier, is written in C# and utilizes the Framework libraries (written in C++) through the platform invoke mechanism [PINVOKEWEB]. The Microsoft .NET framework provides a wealth of easy-to-use tools, one of which is greatly simplified web service development. The Web Service Definition Language (WSDL) generation tool that .NET provides makes the development of a simple web service almost trivial. The monitor component provides two basic functions, the sending and receiving of messages. Our web service simply performs a translation from one communications platform to another.

Figure 6-4 Web Service Based User Interface: Our proof-of-concept Web Service provides a bridge between HTTP/SOAP and our RT-CORBA Framework. The arrows illustrate the flow of an InjectEvent() message from the web client, via the Web Service and Monitor, over the RT Event Service to the Lock Manager (LM) Component, the Crash Sensor (CS) Component, and the other components in the runtime system.

The sending of messages via a web service is straightforward. The client calls the web service, passing it the message. The web service in turn takes that message, translates it into a CORBA message, and passes it to the event service.


    [WebMethod]
    public int PrepOrbWS(string c)
    {
        return(PrepOrb(c));
    }

    [WebMethod]
    public bool RunOrbAsynchronouslyWS()
    {
        return(RunOrbAsynchronously());
    }

    [WebMethod]
    public bool InjectEventWS( int eventtype, string port )
    {
        return(InjectEvent( eventtype, port ));
    }

Figure 6-5 Web Service Code: Much of the standard monitor user interface can be utilized. To expose the methods via a web service, the [WebMethod] attribute must be added to each wrapper function. From this, the Web Service Definition Language file is automatically generated.

Receiving messages via a web service is more complicated because traditional web services are invoked (or pulled) by the client, rather than pushed from the server. There are a number of ways to deal with such a model. The simplest approach would have the server queue messages, which the client would periodically poll for. As with any polling mechanism, this is inefficient and introduces unnecessary delay and burden on the system. A second approach would be to set up a second web service on the client to produce two-way communication. Although this approach is an improvement over the polling methodology, it does not scale well with multiple clients. Asynchronous web services allow the client to register callback functions, creating a client-side thread to handle the web service completion. With this asynchronous client-side behavior, the first approach of queuing messages on the server becomes more appropriate: the client can wait for a message to arrive without blocking the rest of the client. This approach is very different from the callback function in the standard monitor that delivers messages received in the C++ code up to the C# code.

The WSDL file used in the proof-of-concept web service is provided below (partially collapsed for readability):
    <?xml version="1.0" encoding="utf-8" ?>
    - <wsdl:definitions xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
        xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/"
        xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
        xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
        xmlns:tns="http://tempuri.org/" xmlns:s="http://www.w3.org/2001/XMLSchema"
        xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/"
        xmlns:http="http://schemas.xmlsoap.org/wsdl/http/"
        targetNamespace="http://tempuri.org/"
        xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/">
      - <wsdl:types>
        - <s:schema elementFormDefault="qualified" targetNamespace="http://tempuri.org/">
          - <s:element name="PrepOrbWS">
            - <s:complexType>
              - <s:sequence>
                  <s:element minOccurs="0" maxOccurs="1" name="c" type="s:string" />
                </s:sequence>
              </s:complexType>
            </s:element>
          + <s:element name="PrepOrbWSResponse">
          + <s:element name="RunOrbAsynchronouslyWS">
          + <s:element name="RunOrbAsynchronouslyWSResponse">
          + <s:element name="InjectEventWS">
          + <s:element name="InjectEventWSResponse">
          + <s:element name="GetMessageWS">
          + <s:element name="GetMessageWSResponse">
          </s:schema>
        </wsdl:types>
      - <wsdl:message name="PrepOrbWSSoapIn">
          <wsdl:part name="parameters" element="tns:PrepOrbWS" />
        </wsdl:message>
      + <wsdl:message name="PrepOrbWSSoapOut">
      + <wsdl:message name="RunOrbAsynchronouslyWSSoapIn">
      + <wsdl:message name="RunOrbAsynchronouslyWSSoapOut">
      + <wsdl:message name="InjectEventWSSoapIn">
      + <wsdl:message name="InjectEventWSSoapOut">
      + <wsdl:message name="GetMessageWSSoapIn">
      + <wsdl:message name="GetMessageWSSoapOut">
      - <wsdl:portType name="ServiceSoap">
        - <wsdl:operation name="PrepOrbWS">
            <wsdl:input message="tns:PrepOrbWSSoapIn" />
            <wsdl:output message="tns:PrepOrbWSSoapOut" />
          </wsdl:operation>
        + <wsdl:operation name="RunOrbAsynchronouslyWS">
        + <wsdl:operation name="InjectEventWS">
        + <wsdl:operation name="GetMessageWS">
        </wsdl:portType>
      - <wsdl:binding name="ServiceSoap" type="tns:ServiceSoap">
          <soap:binding transport="http://schemas.xmlsoap.org/soap/http" />
        - <wsdl:operation name="PrepOrbWS">
            <soap:operation soapAction="http://tempuri.org/PrepOrbWS" style="document" />
          - <wsdl:input>
              <soap:body use="literal" />
            </wsdl:input>
          - <wsdl:output>
              <soap:body use="literal" />
            </wsdl:output>
          </wsdl:operation>
        + <wsdl:operation name="RunOrbAsynchronouslyWS">
        + <wsdl:operation name="InjectEventWS">
        + <wsdl:operation name="GetMessageWS">
        </wsdl:binding>
      - <wsdl:binding name="ServiceSoap12" type="tns:ServiceSoap">
          <soap12:binding transport="http://schemas.xmlsoap.org/soap/http" />
        - <wsdl:operation name="PrepOrbWS">
            <soap12:operation soapAction="http://tempuri.org/PrepOrbWS" style="document" />
          - <wsdl:input>
              <soap12:body use="literal" />
            </wsdl:input>
          + <wsdl:output>
          </wsdl:operation>
        + <wsdl:operation name="RunOrbAsynchronouslyWS">
        + <wsdl:operation name="InjectEventWS">
        + <wsdl:operation name="GetMessageWS">
        </wsdl:binding>
      - <wsdl:service name="Service">
        - <wsdl:port name="ServiceSoap" binding="tns:ServiceSoap">
            <soap:address location="http://localhost/MonitorWS/Service.asmx" />
          </wsdl:port>
        - <wsdl:port name="ServiceSoap12" binding="tns:ServiceSoap12">
            <soap12:address location="http://localhost/MonitorWS/Service.asmx" />
          </wsdl:port>
        </wsdl:service>
      </wsdl:definitions>

Figure 6-6 Example WSDL File This WSDL file was automatically generated by Visual Studio using the directives from Figure 6-5.


7. Design analysis

The primary focus of this work is on the runtime system, and so much of the design analysis will focus on the Framework and component design. Before proceeding to the runtime system analysis, a discussion of the viability of the overall concentric development cycle and tools is provided. The fundamental concept of a concentric development cycle is sound, but for it to be accepted by the development community, it must also be robust and easy to use. The design of the current Code Generator attempts to minimize the burden on the programmer by automating many redundant tasks such as establishing a communications environment and integrating custom code. The tag-based template code generation system described in Chapter 2.2.1 above is intuitive, but the functionality of the specific tags presents a steep learning curve. Each individual tag requires a great deal of understanding of both the model under development and the runtime system itself. Extensive documentation of the runtime system and of the respective Code Generator template tags can mitigate this learning curve.

7.1. Design Motivation Analysis

The motivation for building a code generation system targeting a real-time execution environment is to explore how such a development platform would apply to real-world use cases. In terms of simulation, it has been argued that an appropriate testing environment can exist solely in the space of a single-process, multi-threaded application. In fact, such a simulation environment exists and is the basis of our work. This simulator is part of AutoFOCUS and is discussed in Chapter 3.1.2 above. Our work is the logical extension of the spatially limited simulator into a truly distributed environment.

This distributed environment provides two things the simulator cannot. First, real-life characteristics such as network traffic, partial network partitioning, delayed communication, etc. can be easily tested in a real network, whereas these characteristics would have to be implemented in the simulator. Experimentation and testing is the primary function of our Code Generator and runtime Framework; it provides a closer approximation of, and a more appropriate starting point for, implementing an actual production system. The second thing our work provides over the basic simulator is the ability to actually target a production system for code generation and deployment. In its current state, the Code Generator and runtime system are not particularly suited to a production environment, but they are a solid proof of concept. The overall framework for targeting a production system is in place. In order for this work to be truly suitable for a production environment such as the automotive industry, the execution model's limitations must be rectified and the communications platform must be changed to one more appropriate for automotive systems. Both of these limitations are discussed below.

7.2. Execution Model Limitations

In terms of the runtime environment, the execution model is very strict with regard to synchronization, but the overall code base is very extensible through editable templates and pluggable Frameworks. The strict execution model is likely to be a detriment to the acceptance of the development system. There are many execution assumptions held over from AutoFOCUS that may discourage the use of the runtime system. Examples include the discrete clock cycle for all components and the flooding of null messages for synchronization purposes. The discrete clock cycle creates concerns when implementing reactive systems such as airbag deployment and crash sensors. In many cases, it is unclear how a slow or stalled component in a connected, but unrelated, part of the system could affect the timely reaction to a crash. For this reason, a new reactive, or asynchronous, execution model is under development [Ahluwalia05].

7.3. CORBA Limitations

The messaging system we chose, namely CORBA's Real-Time Event Service, has a significant drawback in that events are weakly typed. Because of the serialization of data, events must be marshaled and un-marshaled using the << and >> operators [Vinoski99], discussed earlier in Chapter 5.2.3 above. Fortunately, the un-marshal operator >> has a rudimentary type-checking mechanism whose failure can be caught, although this check only applies when there is sufficient diversity between the data types to be marshaled and un-marshaled. A correctly implemented system should never encounter type mismatches in event types. To prevent this type of data conversion error, the developer should not alter any of the logic generated by the Code Generator. This requires that the code generation process properly maintain the appropriate pairing of data marshaling and un-marshaling operations on the sending and receiving ends.
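As an illustration of this pairing and of the recoverable failure mode, the sketch below uses the ACE CDR streams that underlie TAO's serialization; the plumbing that attaches the encoded buffer to the event payload and retrieves it on receipt is omitted, and the value being transferred is purely illustrative.

    // Sketch: paired marshal / un-marshal of an event payload value with ACE's CDR
    // streams (which underlie TAO's serialization). Moving the encoded buffer into
    // and out of the event itself is omitted.
    #include "ace/CDR_Stream.h"

    bool roundTripExample()
    {
        ACE_OutputCDR out;
        ACE_CDR::Long lockState = 1;
        out << lockState;                    // marshal on the sending side

        ACE_InputCDR in(out);                // the receiver would build this from the payload
        ACE_CDR::Long received;
        if (in >> received)                  // extraction reports failure instead of
            return received == lockState;    // silently delivering garbage
        return false;                        // rudimentary type/format check caught it
    }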

In order to be reasonably sure that the Code Generator enforces the appropriate operations when sending and receiving events, the monitor allows for thorough checking of event sequences. The monitor can also be used to verify the appropriate handling of events under a given set of conditions. Assuming the user does not alter the communications code, the system should function properly, emulating the simulation execution model. The lack of strongly typed events is not necessarily a problem: programming languages such as C and C++ allow type casting and, as such, are heavily dependent on the skill of the programmer. The use of weakly typed event data in our communication Framework is dependent on the correctness of the generated code. Thorough testing using the monitor component has demonstrated the correct pairing of event data.

Another issue that can arise with CORBA's Real-Time Event Service is the scalability of interconnections. Each communication channel links a pair of components and requires a unique event number for each distinct pair. CORBA specifies this unique event number with a 16-bit unsigned short, limiting the number of distinct event types to 65536 [OCI]. Communication via the event channel is one-way, meaning that unique event numbers must be allocated twice for each pair of components requiring two-way communication: one event number denotes the path from component A to component B, and another is required for the opposite direction, from component B to component A. For readability in the generated code, event values are specified in an enumerated type, and the naming convention is of the human-readable form WRITEPORT_A_FROM_B. Of the 16-bit event number space, the user-definable range starts at ACE_ES_EVENT_UNDEFINED, or 16 (0x10). In addition, one startup event and one tick completion event are required for each component. These internal system control events are not used for communication of data, but are required by the execution environment. The remaining addressable event space is used for designating communication paths between components.
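An illustrative (not generated) excerpt of such an enumeration, using components from the Central Locking System case study, might read as follows; the actual generated identifiers and ordering may differ.

    // Illustrative excerpt of the event-type enumeration. The user-definable range
    // begins at ACE_ES_EVENT_UNDEFINED (16); the literal 16 is used here to keep
    // the sketch self-contained.
    enum CentralLockingEvents
    {
        EVENT_STARTUP_LOCKMANAGER = 16,             // first user-definable event number
        EVENT_FINISHED_LOCKMANAGER,                 // tick completion
        EVENT_STARTUP_CRASHSENSOR,
        EVENT_FINISHED_CRASHSENSOR,
        WRITEPORT_LOCKMANAGER_FROM_CRASHSENSOR,     // data: CrashSensor -> LockManager
        WRITEPORT_CRASHSENSOR_FROM_LOCKMANAGER      // data: LockManager -> CrashSensor
    };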

Event numbers are finite, and so there is a limit on the number of components the system can support in this fashion. In the worst case, the number of unique pairings grows quadratically with the number of components (for n components in a directed graph, the number of links between component pairs is O(n^2), or more specifically n^2 - n). This yields a hard limit of 255 fully interconnected components in our current CORBA-based runtime framework. For a fully interconnected network of components, event numbers are distributed as follows:

- 16 reserved events for the event service (CORBA defined)
- one startup event for each component (255 max)
- one tick finished event for each component (255 max)
- n^2 - n directional communication events (64770 max)

This uses at most 65296 of the 65536 total events: 16 reserved + 255 startup + 255 finished + 64770 communication = 65296 events used, or 255 fully interconnected components at most. Additional, non-fully-interconnected components may be added to consume the remaining 240 events. A small check of this budget is sketched below.
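The budget can be verified mechanically: the short program below searches for the largest n for which 16 reserved, 2n control, and n(n-1) communication events still fit into the 16-bit space, and prints 255.

    // Check of the event-space budget for n fully interconnected components:
    // 16 reserved + 2n control (startup, finished) + n(n-1) communication <= 65536.
    #include <cstdio>

    int main()
    {
        int n = 1;
        while (16 + 2L * (n + 1) + 1L * (n + 1) * n <= 65536)
            ++n;                        // stop when n+1 components no longer fit
        std::printf("max fully interconnected components: %d\n", n);   // prints 255
        return 0;
    }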


Figure 7-1 CORBA Limitation - Fully Interconnected Components: This illustrates the directed graph produced by a fully interconnected network of components (A, B, C, D). The arrows between components represent event messages. In terms of the number of components in a given system, this graph represents the worst-case scenario, as it requires the maximum number of events between components.

Components are not always fully interconnected, so in the average case far fewer event numbers are likely to be used. A component such as a door lock actuator may be connected to only a single component, the Lock Manager. Although the door actuator is not directly connected to every component, it can propagate its information throughout the system via the Lock Manager. The maximum number of components that can exist in a minimally connected network (imagine a directed ring of components) is derived as follows:

- 16 reserved events for the event service (CORBA defined)
- one startup event for each component
- one tick finished event for each component
- one output event for each component (A to B, B to C, C to D, D to A in the figure below)

About 21840 unique components may exist: (65536 total events - 16 reserved) / 3 events per component = 21840 maximum components, where the minimum of 3 events per component comprises a startup, a finished, and a data communication event.


Figure 7-2 CORBA Limitation - Maximum Number of Components: This represents a minimally connected network of components (A, B, C, D). This best-case scenario allows the system to maximize the number of components.

If a given system does approach the finite limit on event numbers, there are measures that can be taken to mitigate this constraint. One example would be to create a user-defined field in the event payload and filter events explicitly in each component, in addition to the filtering done by the event channel. Another means to increase the number of unique event types would be to utilize the source field in the CORBA event header, another 16-bit value. The source field combined with the event type yields a full 32-bit event signature. Unfortunately, this use of the source field would again require a second level of event filtering in the component, but it allows for a much larger event space and may be easier to manage than a custom payload-based event field.

The event space numbering is only a limitation of CORBA's Event Service, and so developers requiring large numbers of fully interconnected components should consider implementing another Framework utilizing a communications platform suitable for their specific system. Event number limitations are not a fundamental constraint on the Code Generator; the limitation is discussed here to illustrate a weakness of a particular communications environment. The benefit of having an interchangeable Framework is that weaknesses, as well as strengths, can be identified and used as criteria for selecting the most appropriate communications environment or for further development of a custom communications environment.

7.4. Code Generator and Runtime System Redesign Analysis

The redesign of the initial Code Generator and runtime system set out to accomplish two distinct goals. First, the explicit centralized synchronization mechanism (referred to as scheduling) found in the previous Code Generator should be eliminated. Second, the resulting runtime system should allow for monitoring of performance specifications; this second goal should also anticipate the future implementation of real-time specification enforcement in addition to simple observation. These two goals are tightly interconnected through the communication and synchronization mechanisms described above. Synchronization and monitoring are closely related in that timing information can be gathered from the synchronization events (and the data communication channel activity) themselves. The monitoring of timing constraints can be done by observing event traces as they happen. As a simple example, the clock cycle time, and more specifically the slowest component, can be determined by observing the length of time between synchronization events and the data communication activity itself. This particular example only tells us the execution time of the slowest component, but it demonstrates how timing information can easily be gathered from the event traces, as sketched below. Events, unlike remote method invocations, are broadcast, allowing for noninvasive observation.
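The following sketch shows how a monitor could derive this timing from the observed trace: it stamps the tick broadcast and each finished event, and the largest gap within a cycle bounds the execution time of the slowest component. The event names and the assumption that finished events can be attributed to their components are illustrative.

    // Sketch: deriving per-cycle timing from an observed event trace.
    #include <algorithm>
    #include <chrono>
    #include <map>
    #include <string>

    class CycleTimer {
        using Clock = std::chrono::steady_clock;
    public:
        void onTick() { tickStart_ = Clock::now(); }              // tick broadcast observed
        void onFinished(const std::string& component)             // finished event observed
        {
            elapsed_[component] = std::chrono::duration_cast<std::chrono::microseconds>(
                Clock::now() - tickStart_);
        }
        std::chrono::microseconds slowestComponent() const        // bounds the cycle time
        {
            std::chrono::microseconds worst{0};
            for (const auto& entry : elapsed_)
                worst = std::max(worst, entry.second);
            return worst;
        }
    private:
        Clock::time_point tickStart_{};
        std::map<std::string, std::chrono::microseconds> elapsed_;
    };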

The execution environment of the initial Code Generator used two distinctly different mechanisms for communication (Remote Method Invocation, RMI) and for synchronization (the Real-Time Event Service, RTES). Although the two modes of communication are logically distinct, there is no reason they cannot be implemented using the same mechanism. The goals of decentralization and of creating a monitoring capability were accomplished by removing the RMI facility that was previously used for data communication. Data communication can instead be implemented through the payload of events passed through the Event Service, so that Event Service messages are used for both synchronization and communication.

An event, or message, in our system is simply a mechanism for one-way communication across process and location boundaries. CORBA's Real-Time Event Service has the added benefit of scheduling messages, making the long-term goal of real-time property enforcement possible. In short, the conversion to an entirely event/message-based system provided many important benefits, both immediate, as discussed above, and long-term, for the goal of real-time property integration.

The Real-Time Event Service provides broadcast communication. While this is not explicitly required for any of our basic needs, it does provide a great deal of utility. For the monitoring capability, having all communication messages broadcast allows for passive observation of message sequences. The unicast model of the RMI facility used by the initial Code Generator would require each component to perform the explicit action of retransmitting the information to be observed to the monitor. Worse yet, this duplicate transmission would introduce an overlay network solely for the purpose of monitoring. Redundant and explicit transmission of information could potentially affect the performance of the system being observed, yielding erroneous timing information.

Eliminating centralized synchronization could be accomplished through a number of other message-based approaches, all of which could be implemented via the RTES using the communication messages themselves or via explicit synchronization messages. A completely message-based runtime system has the advantage of being a simple concept that can be ported to other environments in which the standard CORBA implementation cannot execute, such as the low-processing-power embedded systems found in automotive applications. The ability to port the runtime system to these environments means that our tool-chain can be useful in an actual production environment.

The benefits of a completely message-based runtime system are clear, but it is not a fix-all solution. Because of CORBA's shared object feature, large data types could previously be passed by reference, cutting down on network load; with the exception of a few systems, however, large data types are not the common case, and this is especially true in automotive applications, which are our primary focus. Message passing in the RTES can be error-prone in that data is not strictly typed. RMI, on the other hand, allows for compiler checks on data by validating method calls at compile time; in this respect, the initial execution environment was inherently more stable. Runtime checks, and the fact that the system is based on a validated model, should reduce, if not eliminate, the likelihood of runtime data type errors on passed messages. In terms of security, the event service is inherently open, meaning that anyone can send anything to anyone. This open message sourcing is a useful property in that the monitor can actively inject events on behalf of any given component, which can aid in debugging logic as well as the data type errors discussed above. With this openness comes a great reduction in security, but for the common case in automotive systems this is of little concern.

The most notable gain in moving to an entirely message-passing system is the ability to utilize different communications architectures without fundamentally changing the Code Generator or the overall execution logic. Implementing this change required a major redesign of the runtime system and, in turn, a redesign of the Code Generator itself. Simplification of the Code Generator and of the code generation process was not a driving goal by itself, but it was easily accomplished because the RMI stub code dependency no longer exists. The event service can be accessed via external functions, making it only loosely dependent on the files the IDL compiler generates. This allowed the two-phase, IDL-compiler-dependent code generation process to be eliminated. A straightforward single-phase process, requiring no intermediate user interaction, is very important in promoting adherence to the concentric development cycle.

Design and implementation of the runtime system was the center of focus in the new Code Generator. The runtime system was developed from the ground up with the explicit purpose of simplifying the code generation process. This simplification seeks to minimize the number and complexity of dynamically generated files, reducing the code generation task to basic name replacement and line/block replacement. To make this text replacement robust, dynamic code was detached from static code. The target language, C++, provides a number of tools to isolate the dynamic code, primarily inheritance and libraries (both statically and dynamically linked). The resulting runtime system consists of two libraries and a number of components, each with clear tasks and boundaries, and each contained within its own Visual C++ project (easily ported to other compilers).

7.5. Runtime System Design Assessment

This chapter will present an overall assessment of the runtime system in relation to the Code Generator. Four key advantages were achieved in the chosen design of the runtime system. First, the fundamental concept of the message-passing architecture is easy to comprehend and to port to other environments. Second, the static portions of the code generation templates are intuitive to modify, simplifying the integration of custom code. Third, the use of open-source projects and freely available software makes this platform attractive for an academic environment. Lastly, the communication environment exposes messages, making it easy to monitor and debug both the runtime Framework and the system under development. This broadcast exposure of messages is ideal for the research goal of monitoring performance. The monitoring capability also allows for the generation of extended event traces as outlined in the ComponentMonitor Chapter 5.3.3.3 above. Extended event traces can be very useful when developing and debugging complex distributed systems.

With the advantages outlined above come disadvantages, one of which is that the very execution model we have chosen to follow can be a limiting factor. We have essentially required every system we wish to develop to conform to a rigid execution structure. Recall that this execution structure consists of a number of components (or state machines) over a distributed communications network. Each component reads its inputs, performs an enabled transition, and writes its outputs. The components, when viewed individually, are autonomous and asynchronous and execute in parallel; in fact, all components can be performing their operations simultaneously. This claim is, however, misleading, since the components are fundamentally interconnected and dependent on each other. A component cannot proceed until it has received input from all attached components. This dependency means that a given component is only asynchronous and parallel during a single tick of the system's inferred clock cycle. The system as a whole is synchronous and highly dependent on successful communication between components. Communication failure models between connected components have not been adequately addressed in this work.

A significant problem can arise as a result of failed communication in this synchronous execution pattern. Without safeguards for stalled, disconnected, or slow components, the entire system of components could become unresponsive. An example of a system that would not tolerate such behavior is an automotive airbag system. Consider a stalled component that is connected to the airbag system, say, for example, the component that notifies the airbag deployment mechanism that the ignition is on. Should the airbag deploy if the airbag system thinks the ignition is off? Clearly the answer is no; the airbag should only deploy when the car is running. The problem arises when the component fails to notify the system that the ignition is on. The actual requirements for airbag deployment are more complicated, but for the sake of this discussion, the critical flaw in our design and in our particular execution model is illustrated by a failed communication message from the ignition component.

Our current execution model was derived directly from the AutoFOCUS simulator discussed in Chapter 3.1.2 above. The failure of the ignition component to communicate its state will not simply cause the airbag to fail to deploy; it will cause the entire system to stall. Recall that the execution model requires each component to hear from each of its attached components before it can take action. The airbag component cannot do anything in our system until it hears from the ignition component. A more realistic approach to remedy such a flaw would be to treat multiple inputs as an OR condition on proceeding, rather than an AND condition, as sketched below. Unfortunately, the execution model is limited in this respect. We have determined that this limitation is acceptable for a research-based project; the rigidity of the execution model provides significant simplifications in other areas such as debugging execution. An asynchronous execution model has been explored [Ahluwalia05] to eliminate the deadlock effect. Alternate solutions to the deadlock problem have been suggested within the space of the synchronous model; these include surrogate components, assumed messages, heartbeat messages, etc.
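The difference between the two policies can be stated in a few lines; the port representation below is illustrative.

    // Sketch of the input-readiness predicate. The current execution model fires a
    // transition only when *all* attached inputs have arrived (AND), which is what
    // lets a single stalled sender block the system; an OR policy would not.
    #include <vector>

    struct InputPort { bool messageArrived; };

    bool readyAllInputs(const std::vector<InputPort>& inputs)   // current, deadlock-prone policy
    {
        for (const InputPort& p : inputs)
            if (!p.messageArrived) return false;
        return true;
    }

    bool readyAnyInput(const std::vector<InputPort>& inputs)    // relaxed policy suggested above
    {
        for (const InputPort& p : inputs)
            if (p.messageArrived) return true;
        return false;
    }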

The synchronous execution model requires that all input ports be populated before a transition can be enabled. Not all communication channels are actively used in each tick cycle, but they must still be activated as a consequence of the execution model. Null messages are introduced to explicitly maintain the system's synchronous execution. This creates an efficiency problem by flooding the communication channels with redundant, and seemingly unnecessary, messages. In the worst case, every component could be connected to every other component, yielding a fully connected directed graph with a total of n(n-1) communication messages during each tick cycle, where n is the number of components in the system.

It is conceivable that a system could be devised that would actively use every one of these messages for data transfer. In the normal case, however, components are not fully interconnected, and these communication channels are not always used to transfer data. Since the execution model requires communication even when no data is present, null messages are introduced. This results in inefficient use of the communication channels for the typical system. This is not necessarily a problem, since it is possible for a system to actively use all communication channels, but it is of some concern for high-performance communications, especially over a shared bus such as a Controller Area Network (CAN bus).

Systems are typically developed as a whole; that is, a collection of components makes up the system. At present, there is no mechanism to identify potential system reductions that could result in increased performance. The modeling tool does achieve a reduction in the number of terminal components, those components that actually perform a state transition. This, however, does not account for a more subtle subdivision of labor. Take, for example, crash detection and the Central Locking System. As we have modeled it, the crash detection and airbag deployment were designed as part of the overall Central Locking System. Communication between the crash detector and airbag deployment is therefore governed by the total system tick cycle time.

An alternative Central Locking System design would place the crash detector and airbag deployment components in their own subsystem, governed by their own tick cycle time. An interface from this safety subsystem to the larger Central Locking System would allow the two systems to operate nearly independently. Their clock rates would be independent, allowing the airbag to respond more quickly, since the crash detection and airbag deployment components would no longer be dependent on the other, possibly slower, Central Locking System components. Reductions of this type are left to the developer to uncover and implement.


8. Future work

The monitoring capability of the runtime execution environment enables the generation of extended event traces. As discussed in Chapter 5.3.3.3 above, these extended event traces (EETs) can help to illustrate the interactions between components for specific tasks or entire services [BroyHofmannKrger97]. EETs can be used as a basis for comparison between execution environments: simulation EETs from AutoFOCUS or other simulation tools could be compared to the EETs collected in the executable environment to prove (or disprove) the validity of the translation between simulation and executable runtime environments. EETs in the execution environment can be further enriched by the addition of timing information, which can aid in diagnosing problems arising from timing conflicts in addition to simple message ordering. Currently these event traces are used for documentation purposes.

Security is a fundamental concern in any system. Automotive systems enjoy limited immunity to malicious attacks because these systems are physically isolated. Still, the nature of our open messaging architecture leaves vulnerabilities to both eavesdropping and message injection. Both eavesdropping and message injection are integral parts of our monitoring and testing research platform, but special consideration must be given before this system can be used in a production environment. Mitigating the security threats must be carefully considered with respect to the deployment environment. Event messages that pass through an open medium should be encrypted to protect data. Unfortunately, this may not be enough to prevent potential compromises: erroneous messages could be placed on the network, amounting to a Denial of Service attack. Fortunately, this is an unlikely means of attack due to the isolated nature of automotive systems.

Denial of Service attacks have little purpose in an automotive environment. Nevertheless, as automobiles become increasingly connected to wide-area networks, the security concern for DoS-type attacks will grow. Faulty components could also inadvertently contribute to message flooding, resulting in an accidental Denial of Service on an otherwise functional system. Systems beyond the automotive domain should consider implementing message-flooding detection and trusted-source detection in a custom message passing architecture. CORBA does not inherently support either of these and would be extremely vulnerable to this sort of attack. In fact, it would be very easy in the current runtime system to simply place erroneous messages on the event channel on behalf of an arbitrary component; the monitor component does exactly this.

Systems developed using our Framework are vulnerable to reverse engineering because messages are inherently open. Although it is unlikely this would be a concern in our research environment, Extended Event Traces such as those described above could generate message sequence charts that could be used to visualize how the system works. Encrypting message data would limit the utility of such charts. Still, replay attacks or man-in-the-middle attacks could be mounted on components that have few message connections or that consistently retransmit the same data, particularly the null messages used in our execution model (see Chapter 3.1.3). To mitigate this, components could be required to communicate with every other component, even when not explicitly called for by the model. This was pointed out as a performance drawback earlier, but it could be used to place erroneous information in any extended event trace: a resulting message sequence chart would be useless, since each component would exhibit the same basic repeating sequence of n messages in, followed by n messages out. Random data in null messages, in addition to encrypted data, could severely reduce the ability to reverse engineer a system based solely on message eavesdropping.


9. Conclusion

This text has presented a model-driven design process for real-time systems. In particular, this work has contributed a robust and modular runtime execution environment suitable for code generation. This runtime Framework has succeeded in simplifying the code generation task by introducing a template-based system. As a result, the task of code generation has evolved into a focused process of name replacement and short code block generation. Custom code can be easily integrated into a template, making the cyclic model-driven approach more convenient by allowing custom code to be included in all subsequent code generation runs. This approach is an improvement over black-box code generators that require custom code to be manually and repeatedly integrated into the resulting generated source code.

The modularization of the runtime system we have developed accomplishes three goals. First, the size of the dynamically generated code blocks is minimized by encapsulating common, and often complex, tasks such as sending a message. Second, the compilation time is reduced by collecting common tasks into libraries rather than repeatedly compiling these definitions into each component executable. Third, and most importantly, the generated code is isolated from the underlying communications environment by encapsulating the platform-specific functions behind a generic Framework API. The generated code depends on this Framework API, which in turn depends on the underlying communications environment. In order to change communications environments, the Framework must be changed, but the generated code can remain largely untouched.

In conclusion, this work has accomplished the goals outlined in Chapter 2, namely the minimization of complexity, straightforward code generation, and code modularization. Template-based code generation, the encapsulation of complex tasks into libraries, and code separation and isolation were employed to accomplish these goals. The runtime Framework has also accomplished the lesser goals outlined in Chapter 7.4 of achieving distributed synchronization and external monitoring.


Both of these goals were attained by creating a runtime Framework based on the message-passing facility of CORBA's Event Service, which has analogies in other communications environments. From this modular execution Framework, the code generation tool's output can easily be extended to other execution environments.


Bibliography

[AhluwaliaKrgerMeisinger05] J. Ahluwalia, I. H. Krüger, M. Meisinger, W. Phillips: Model-Based Run-Time Monitoring of End-to-End Deadlines. In: Proceedings of the Conference on Embedded Systems Software (EMSOFT), 2005.

[GriffithsHedrick] Paul Griffiths, J. Karl Hedrick. Model-Based Integrated Embedded Systems for Automotive Applications. In: Advanced Simulation and Control for Automotive Applications, Oxford, UK, 24-26 Sept 2001. http://www-personal.engin.umich.edu/~paulgrif/mobies_oxford.pdf

[KrgerGuptaMathew04] I. H. Krüger, D. Gupta, R. Mathew, P. Moorthy, W. Phillips, S. Rittmann, J. Ahluwalia. Towards a Process and Tool-Chain for Service-Oriented Automotive Software Engineering. Proceedings of the ICSE 2004 Workshop on Software Engineering for Automotive Systems (SEAS), 2004. http://www.cse.ucsd.edu/~ikrueger/publications/SOASE_final.pdf

[KrgerGrosuScholz99] I. Krüger, R. Grosu, P. Scholz, M. Broy: From MSCs to Statecharts. In: Franz J. Rammig (ed.), Distributed and Parallel Embedded Systems, Kluwer Academic Publishers, 1999. http://www4.informatik.tu-muenchen.de/papers/KGSB99.html

[HuberSchaetzSchmidt96] Franz Huber, Bernhard Schätz, Alexander Schmidt, Katharina Spies. AutoFocus - A Tool for Distributed Systems Specification. Proceedings FTRTFT'96 - Formal Techniques in Real-Time and Fault-Tolerant Systems, 1996. http://www4.informatik.tu-muenchen.de/papers/HuberSchaetzSchmidtS.html

[LtzbeyerPretschner00] H. Lötzbeyer, A. Pretschner. Testing Concurrent Reactive Systems with Constraint Logic Programming. Proc. 2nd Workshop on Rule-Based Constraint Reasoning and Programming, Singapore, September 2000. http://www4.in.tum.de/~loetzbey/papers/cp00.pdf

[BroyHofmannKrger97] M. Broy, C. Hofmann, I. Krüger, M. Schmidt. Using Extended Event Traces to Describe Communication in Software Architectures. Proceedings of the Asia-Pacific Software Engineering Conference and International Computer Science Conference, IEEE Computer Society, 1997. http://wwwbroy.informatik.tu-muenchen.de/publ/papers/BHKS97.pdf

[LiuLayland73] C. L. Liu, James W. Layland. Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment. Journal of the Association for Computing Machinery, Vol. 20, No. 1, January 1973. http://portal.acm.org/citation.cfm?id=321743

114

115 [Steiert98] Hans-Peter Steiert. Towards a component-based n- Tier C/S-architecture. Univ. of Kaiserslauternd, Kaiserslautern, Germany. Proceedings of the third international workshop on Software architecture, 1998. http://portal.acm.org/citation.cfm?id=288443 [SchmidtLevineMungee98] D. C. Schmidt, D. L. Levine, and S. Mungee. The design of the TAO real-time object request broker. Computer Communications, 21(4), 1998. http://www.cs.wustl.edu/~schmidt/PDF/TAO.pdf [VeitchBabuPsztor04] Darryl Veitch, Satish Babu, Attila Psztor, Robust Remote Synchronisation of a New Clock for PCs, Internet Measurement Conference, Taormina, Italy, October 2004. http://www.cubinlab.ee.mu.oz.au/~darryl/synch_IMC-2004_camera.pdf [SchmidtKuhns00] Douglas C. Schmidt and Fred Kuhns. An Overview of the Real-time CORBA Specification, IEEE Computer special issue on Object-Oriented Real-time Distributed Computing. June 2000. http://www.cs.wustl.edu/~schmidt/PDF/orc.pdf [Schmidt98] Douglas C. Schmidt. An Architectural Overview of the ACE Framework: A Case-study of Successful Cross-platform Systems Software Reuse, USENIX login magazine, Tools special issue, November, 1998. http://www.cs.wustl.edu/~schmidt/PDF/login.pdf [Rumpe02] Bernhard Rumpe. Executable Modeling with UML. A Vision or a Nightmare? Issues & Trends of Information Technology Management in Contemporary Associations, Seattle. Idea Group Publishing, Hershey, London, pp. 697-701. 2002. http://www.sse.cs.tu-bs.de/~rumpe/publications/ps/IRMA.UML.pdf [OMG02] Object Management Group (OMG). Real-time CORBA specification, 2002: http://www.omg.org/cgi-bin/doc?formal/01-12-28.pdf [Romberg02] Jan Romberg. Model-Based Deployment With AutoFOCUS: A First Cut. 14th Euromicro 2002 Conference on Real-Time Systems, Work- in-Progress Session, Vienna, Austria, 2002. http://www4.in.tum.de/publ/papers/Romberg02FirstCut.pdf [HuberSchatz97] Franz Huber, Bernhard Schatz. Rapid Prototyping with AUTOFOCUS. 1997. http://www4.in.tum.de/publ/papers/GI-FDT-97-Final_huberf_1997_Conference.pdf [Krger00] I. Krger: Distributed System Design with Message Sequence Charts, Dissertation, Technische Universitt Mnchen, 2000. http://tumb1.biblio.tumuenchen.de/publ/diss/in/2000/krueger.htmlhttp://sosac/html/publications/Thesis.ps [Mller03] Oliver Mller. Generating RT-CORBA Components from Service Specification. Technische Universitt Mnchen Fakultt fr Informatik, 2003.

116 [KrgerMller03] Ingolf H. Krger, Oliver Mller. Reliable Code Generation for RT CORBA. University of California, San Diego, 2003 [Ahluwalia05] Jaswinder S. Ahluwalia. A code-generation approach to runtime monitoring of end-to-end real- time constraints. University of California, San Diego, 2005 [Vinoski99] Steve Vinoski and Michi Henning. Advanced CORBA Programming with C++. Addison-Wesley Professional, 1999. [OCI] OCI TAO Developers Guide version 1.3a, Volumes 1 & 2. http://www.theaceorb.com/ [HustonJohnsonSyyid03] Stephen D. Huston, James CE Johnson, Umar Syyid. The ACE Programmer's Guide: Practical Design Patterns for Network and Systems Programming. Addison-Wesley Professional, 2003. [SchmidtHuston02] Douglas C. Schmidt, Stephen D. Huston. C++ Network Programming: Mastering Complexity with ACE & Patterns. Addison-Wesley Professional, 2002. [SchmidtHuston03] Douglas C. Schmidt, Stephen D. Huston. C++ Network Programming: Systematic Reuse with ACE & Frameworks. Addison-Wesley Professional, 2003. [Stevens03] W. Richard Stevens, Bill Fenner, Andrew M. Rudoff, Richard W. Stevens. Unix Network Programming, Vol. 1: The Sockets Networking API, Third Edition. Addison-Wesley Professional 2003 [SUN05] Sun Microsystems Inc. C User's Guide, chapter 5. lint Source Code Checker. 2005 http://docs.sun.com/source/819-0494/lint.html [OMG] Object Management Group http://omg.org/ [UML] Unified Modeling Language http://www.uml.org/ [AFWEB] AutoFOCUS Webpage: http://autofocus.informatik.tu-muenchen.de/index-e.html [OCIWEB] OCI Inc. Webpage Object Computing. CORBA compliance information for TAO (OCI distribution of TAO, version 1.3a). http://www.theaceorb.com/compliance/ [MSNETWEB] Microsoft .NET Webpage. http://www.microsoft.com/net/ [J2EEWEB] Sun Microsystems. Java 2 platform, enterprise edition (j2ee): http://java.sun.com/j2ee/

117 [ACEWEB] ACE Adaptive Communication Environment Webpage: http://www.cs.wustl.edu/~schmidt/ACE.html [TAOWEB] TAO Real Time ORB Webpage: http://www.cs.wustl.edu/~schmidt/TAO.html Current status of TAO: http://www.cs.wustl.edu/~schmidt/TAO-status.html [TAODoxygen] TAO Doxygen documentation: http://www.dre.vanderbilt.edu/Doxygen/ http://www.dre.vanderbilt.edu/Doxygen/Current/html/tao/rtevent/structRtecEventComm_ 1_1EventHeader.html [CANWEB] Controller Area Network, CANOpen, and CAN in Automation Website: http://www.can-cia.org/ [PHPWEB] Hypertext Preprocessor (PHP) Website: http://www.php.net [ASPWEB] Active Server Pages Website: http://www.asp.net [MENTORWEB] Mentor Graphics Nucleus BridgePoint Website: http://www.mentor.com/products/embedded_software/nucleus_modeling/nucleus_bridge point/ [PINVOKEWEB] Platform Invoke Resources: http://pinvoke.net/ and http://msdn.microsoft.com/ [WSRELIABLE] Web Services Reliable Messaging, IBM http://www-128.ibm.com/developerworks/library/specification/ws-rm/
