
Fifth International Conference on Software Engineering Research, Management and Applications

Design of Real-Time Embedded Music System *


Sungmin Choi 1, Hoon Oh 2
School of Computer Engineering and Information Technology, University of Ulsan,
San 29, Mugeo 2-Dong, Nam-Gu, Ulsan 680-749, Korea
{gelfosman, hoonoh}@mail.ulsan.ac.kr

* This work was supported by grant No. R01-2005-000-10671-0287 from the Korea Science and Engineering Foundation, Ministry of Science & Technology.
1 Graduate student, University of Ulsan.
2 Assistant professor, University of Ulsan; corresponding author.

0-7695-2867-8/07 $25.00 © 2007 IEEE   DOI 10.1109/SERA.2007.76

Abstract

Embedded music software for a music embedded system is designed and analyzed by applying DARTS (Design Approach for Real-Time Systems), which appropriately deals with a concurrent tasking architecture. Existing approaches for music applications have not considered a real-time multitasking model, so they suffer from greater design complexity and less flexibility, as well as a lack of predictability for the timely execution of critical tasks. In this paper, we design a new concurrent tasking architecture for a real-time embedded music system and examine its feasibility; all real-time tasks are examined to verify that they satisfy their respective timing constraints. The design is implemented on the Linux-based Xhyper272 board that uses the Intel Bulverde microprocessor.

1. Introduction
With the cost reduction of microprocessors, special-purpose embedded systems are becoming more prevalent because they better satisfy the needs of users. A music embedded system, one such example, has recently been attracting the interest of music composers. Many types of audio applications have been designed and used so far [1, 3, 4]. Most of them suffer from the following restrictions: (1) the applications do not employ a real-time multitasking concept in their design, resulting in a complicated structure; (2) they do not provide an efficient mechanism to share resources with other concurrent modules; (3) they do not handle input sources with different processing rates efficiently; (4) they do not take resource constraints such as processor power and memory capacity into account, so their current design architecture can hardly be used in an embedded system; and (5) most of the existing music applications focus on the latency problem of a particular part but fail to provide a tasking model that guarantees quality of service (QoS).

In this paper, we attack the above-mentioned limitations of music applications by using the Design Approach for Real-Time Systems (hereafter referred to as DARTS) method [5], in which multiple concurrent tasks are modeled to deal with independent streams of inputs and the production of outputs. The timely execution of the tasks is guaranteed by the real-time scheduler, so the designer is freed from considering the complicated interactions among entities. Thus, we focus on modeling the embedded system and on building and optimizing the real-time tasking architecture by employing the task structuring criteria of DARTS, which help identify the tasks. These tasks are examined in detail in the later parts of this paper to verify that they satisfy their real-time constraints. In addition to the basic software design, we have designed a sound-source DSP chip to drive the sound engine that synthesizes the sounds of Korean traditional musical instruments.

The paper is organized as follows. Section 2 describes the system model and gives an overview of the well-known DARTS method, which is commonly used for the design of real-time embedded software. In Section 3, we analyze the music embedded system according to the analysis method suggested by RTSA [7]. Section 4 describes the identification of concurrent tasks, the task architecture with the necessary interfaces, and the design of each task. Section 5 presents the performance analysis, and Section 6 discusses the implementation. Finally, in Section 7, we make concluding remarks and outline further research directions.

2. Preliminary
2.1 A System Model
Fig. 1 illustrates a basic system model of an embedded music system. The system is divided into two major parts: the sequencer and the synthesizer.

Fig. 1 Abstract model of an embedded music system




The MIDI sequencer is a module that allows multipart music to be created, edited, stored, and played back (via a MIDI interface). The sequencer receives Midi messages from different Midi devices (e.g., a Midi keyboard) and either saves or processes them before sending them to the synthesizer. Using the USB interface, the synthesizer reads the Midi messages from the Music Controller and synthesizes the audio sound.
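To make this data path concrete, the sketch below shows how a sequencer module might forward a single Midi message to the synthesizer over a USB character device. The device path /dev/usb_synth0 and the three-byte message layout are illustrative assumptions; the paper does not specify the protocol used on the USB link between the two boards.

/*
 * Minimal sketch of the sequencer-to-synthesizer data path, assuming a
 * USB character device exposed by the synthesizer board (hypothetical).
 */
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* A standard 3-byte channel-voice MIDI message (e.g., Note On). */
struct midi_msg {
    uint8_t status;   /* message type and channel, e.g., 0x90 = Note On, channel 1 */
    uint8_t data1;    /* note number */
    uint8_t data2;    /* velocity */
};

/* Forward one message to the synthesizer over the USB link. */
static int send_to_synthesizer(int usb_fd, const struct midi_msg *m)
{
    uint8_t raw[3] = { m->status, m->data1, m->data2 };
    return write(usb_fd, raw, sizeof raw) == sizeof raw ? 0 : -1;
}

int main(void)
{
    /* Assumed device node; the real interface is not documented in the paper. */
    int usb_fd = open("/dev/usb_synth0", O_WRONLY);
    if (usb_fd < 0)
        return 1;

    struct midi_msg note_on = { 0x90, 60, 100 };  /* middle C, velocity 100 */
    send_to_synthesizer(usb_fd, &note_on);
    close(usb_fd);
    return 0;
}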

2.2 Notations

Fig. 2 Notations

The notations used in the data flow/control flow diagrams are shown in Fig. 2. Data transformations, drawn as solid circles, are referred to as processes. Control transformations, drawn as dashed circles, represent control activities and are defined by means of a state transition diagram. Discrete data arrives at specific time intervals and its flow is represented by a single arrowhead, while continuous data, drawn with a double arrowhead, flows continuously. There are three types of event flows: a triggered transformation is a one-shot function invoked when the corresponding state transition takes place, whereas an enabled or disabled transformation is invoked multiple times while the system stays in the transited state. A data store is represented by two parallel solid lines with the data parameter name inside, and tasks are represented by parallelograms. We also use the inter-process communication notations of [5].

2.3 Overview of DARTS

DARTS [5] is based on RTSA [7], which was proposed to provide a structured analysis method for real-time systems. DARTS uses the same structured analysis as RTSA for analyzing and modeling the problem domain, and it starts with the data flow/control flow diagram (abbreviated as DFD) developed using RTSA. However, DARTS provides the decomposition principles and steps to proceed from an RTSA specification to a design consisting of concurrent tasks. It also provides a set of task structuring criteria for structuring a real-time system into concurrent tasks, as well as guidelines for defining the interfaces between tasks. Each task is then designed using structured design. In the following sections, the design of our embedded music system is explained following DARTS.

3. Real-Time Structured Analysis

3.1 System Context Diagram

Fig. 3 Embedded Music System Context Diagram

The system context diagram for the proposed embedded music system is shown in Fig. 3. It shows the data flows, or interfaces, between the system to be developed (Control and Synthesize Midi Msg) and all external entities, or terminals, represented by rectangles.


The UI Device includes a touch screen and a mouse. The Midi Equipment can be any instrument that produces Midi data, such as a Midi keyboard or a Midi drum.

3.2 Decomposition of the System

Fig. 4 Decomposition into Subsystems

The embedded music system can be decomposed into two subsystems, Control Midi Msg and Synthesize Midi Msg, which are functionally independent of each other. Fig. 4 shows the top-level data flow/control flow diagram (DFD) for the embedded music system. In our implementation, the two subsystems were developed separately by different teams and are connected by the USB interface.

3.3 Control Midi Msg

Fig. 5 Control Midi Msg Data Flow Diagram

The Control Midi Msg subsystem in Fig. 4 can be further decomposed into four data transformations, as shown in Fig. 5. If the power button is pressed, the system is initialized, starting all necessary tasks. The Monitor Power data transformation senses the power signal and sends a Power On or Power Off signal to the Control System data transformation. Track Midi takes Midi from the keyboard or other Midi equipment and saves it in the Input Midi data store, which is accessed by the Control System data transformation. The Control System transformation receives Midi Control Requests from Process IO and performs actions according to these events. Current Parameters can be changed by users and, once changed, are immediately transformed into Midi format and sent to the Synthesizer; the Parameters may contain the instrument type, volume level, and sound effect. The Midi data are transformed into a format suitable for synthesizing, and the formatted Midi data are then sent to the Synthesizer through Sound I/F. Playback Information indicates the part of the Midi data in the Midi Data Repository to be played, defined by a track number and a range within the track. Recording Information contains the input channel number that identifies a specific Midi device.

3.4 Control System

Fig. 6 Control State Transition Diagram

The Control System data transformation in Fig. 5 is further decomposed into its own DFD. The control state transition diagram shown in Fig. 6 is executed by the Control control transformation shown in Fig. 7 and consists of six states, with their transitions and actions. The Initial state is entered on the Power On event, and all tasks and parameters are initialized there. Upon receiving Play On Req (Playback On Req), the system enters the Playing (Playingback) state. In the Playingback state the system plays Midi from the Midi Data Repository, while in the Playing state it plays Midi from the keyboard or other Midi equipment in real time. In the Playing & Playingback state, the system can therefore play Midi data from both a Midi device and the Midi Data Repository simultaneously.

Fig. 7 Control System Data Flow/Control Flow Diagram

The Control System DFD is shown in Fig. 7. The Control control transformation receives request events from the GUI transformation or the Monitor Power transformation and activates the corresponding functions depending on the current state and the request events. Before Midi data or Parameters are sent to the Synthesizer, they are transformed by the Synthesizer I/F data transformation into a format that can be recognized by the Synthesizer.
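As an illustration of how the state transition diagram of Fig. 6 can be encoded, the sketch below represents part of it as a two-dimensional table indexed by state and event. Only the states and events named in the text are included (the full diagram has six states), and the action functions are empty placeholders, so this is an assumed encoding rather than the paper's actual implementation.

/*
 * Partial, illustrative encoding of the Control state transition diagram.
 * The Initial state is assumed to be entered at power-on, before the
 * table is first consulted.
 */
enum ctrl_state { ST_INITIAL, ST_PLAYING, ST_PLAYINGBACK, ST_PLAYING_PLAYINGBACK, NUM_STATES };
enum ctrl_event { EV_PLAY_ON_REQ, EV_PLAYBACK_ON_REQ, NUM_EVENTS };

typedef void (*action_fn)(void);

struct transition {
    enum ctrl_state next;   /* state entered after the event */
    action_fn       action; /* one-shot action triggered by the transition */
};

static void start_play(void)     { /* enable real-time play from Midi devices */ }
static void start_playback(void) { /* enable playback from the Midi Data Repository */ }

/* transition_table[current_state][event] */
static const struct transition transition_table[NUM_STATES][NUM_EVENTS] = {
    [ST_INITIAL] = {
        [EV_PLAY_ON_REQ]     = { ST_PLAYING,     start_play },
        [EV_PLAYBACK_ON_REQ] = { ST_PLAYINGBACK, start_playback },
    },
    [ST_PLAYING] = {
        [EV_PLAYBACK_ON_REQ] = { ST_PLAYING_PLAYINGBACK, start_playback },
    },
    [ST_PLAYINGBACK] = {
        [EV_PLAY_ON_REQ]     = { ST_PLAYING_PLAYINGBACK, start_play },
    },
};

/* Called for every incoming request event. */
static enum ctrl_state handle_event(enum ctrl_state cur, enum ctrl_event ev)
{
    const struct transition *t = &transition_table[cur][ev];
    if (t->action)
        t->action();                   /* run the triggered action, if any */
    return t->action ? t->next : cur;  /* stay in place on undefined transitions */
}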

3.5 Process IO

Fig. 8 Process IO Data Flow/Control Flow Diagram

The Process IO data transformation in Fig. 5 can be decomposed into GUI and other related transformations, as shown in Fig. 8. The GUI data transformation takes UI Input and transforms it into requests. It sends Midi Control Requests to the Control control transformation in Fig. 7 and sends other request messages to the corresponding transformations. Edit Information describes the part of the Midi data in the Midi Data Repository to be edited and the actions to be applied, such as insertion or removal.

4. Task Structuring

4.1 Identifying Concurrent Tasks

We identify the tasks by using the task structuring criteria suggested by DARTS, based on the RTSA specification. In general, I/O transformations are mapped to asynchronous I/O tasks, periodic tasks, or resource monitor tasks, while internal transformations are mapped to control, periodic, or asynchronous tasks. Transformations can be combined with one another according to the sequential, temporal, or functional cohesion criteria [5]. The resulting tasks are listed in Table 1; the Synthesizer I/F data transformation is implemented as library functions.

Table 1. Identified Tasks

Monitor Power: senses the Power On and Power Off events. Since the system goes immediately into the Initial state when it is turned on, this task is not needed in an actual implementation.
GUI: the GUI transformation is mapped to an asynchronous GUI task that is activated by external interrupts when input is available for it. The interrupts come either from the touch screen or from a timer that indicates the progress of the play time span.
Control: the Control control transformation and the Update Parameters data transformation are mapped to the Control task, which executes the state transition diagram and updates the current parameters.
Edit Midi: the Edit Midi data transformation is mapped to the Edit task, which edits the Midi data according to the Edit Information transferred from the GUI task.
PR: the Play and Record data transformations are temporally and sequentially cohesive, since recording takes place right after tracking when recording is on; they are therefore mapped to the PR task.
Sound I/F: the Sound I/F data transformation is mapped to a periodic Sound I/F task that sends formatted Midi to the software synthesizer (for testing purposes only) or to the DSP sound engine.
Playback: a periodic task that reads a Midi record from the Midi Data Repository and dispatches it to the Synthesizer using the Synthesizer I/F library. This task is activated every 20 ms.
Track Midi: the Track Midi data transformation is mapped to the Track Midi task, which tracks Midi data from Midi devices and stores it in the Input Midi data store. It is activated asynchronously whenever Midi input is received (see the sketch after this table).
File Handler: the functionally cohesive Read File and Store File data transformations are mapped to the File Handler task, since both handle Midi data in secondary storage.
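The following sketch illustrates the Track Midi entry of Table 1: an asynchronous task that blocks until Midi input arrives and appends it to the Input Midi data store. The raw device path and the input_midi_store_append() helper are assumptions made for illustration; the actual system reads Midi input through the ALSA driver (Section 6).

/*
 * Sketch of the Track Midi task: activated asynchronously, exactly when
 * the Midi device delivers data.
 */
#include <fcntl.h>
#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

/* Hypothetical helper that appends bytes to the Input Midi data store. */
void input_midi_store_append(const uint8_t *buf, size_t len);

static void *track_midi_task(void *arg)
{
    (void)arg;
    int fd = open("/dev/midi0", O_RDONLY);   /* assumed raw Midi device node */
    if (fd < 0)
        return NULL;

    uint8_t buf[64];
    for (;;) {
        /* Blocks until Midi data arrives on the device. */
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            input_midi_store_append(buf, (size_t)n);
    }
    return NULL;
}

The task would be started once at initialization, e.g., with pthread_create(&tid, NULL, track_midi_task, NULL), and then spends its life blocked in read() until input is received.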

4.2 Software Architecture

Fig. 9 Software architecture diagram

Fig. 9 shows the software architecture diagram for the embedded music system. The GUI task is an asynchronous device input task that is activated by an external interrupt when input is available for it. PR and Playback periodically read Midi data from the Input Midi data store and from the Midi Data Repository data store, respectively, according to the Recording Information, and convert the Midi data into the Formatted Midi that can be recognized by the software synthesizer or by the DSP sound engine. The software synthesizer is needed to test the music sequencing part without being connected to the sound engine. Control is a high-priority control task that executes the state transition diagram.

Table 2. Task characteristics

Task                                  Type   Ci (ms)   CPU time incl. overhead (ms)
*GUI Interrupt (C1)                   ES     1
*GUI (C2)                             ES     2         2
PR (C3)                               P      3         4 = C3 + 2Cx
Playback (C4)                         P      2         3 = C4 + 2Cx
*Control (C6)                         ES     2         2
File Handler (C7)                     AP     6         6
Edit Midi (C8)                        AP     6         6
Track Midi (C9)                       AP     1         1
Sound I/F (C10)                       P      1         2 = C10 + 2Cx
Context Switching Overhead (Cx)              0.5
Message Communication Overhead (Cm)          1
*Event Sequence (Ce)                  ES               8.5 = C1 + C2 + C6 + 3Cx + 2Cm

*GUI Interrupt, GUI, and Control are modeled as the event sequence task Ce.
ES: Event Sequence, P: Periodic, AP: Aperiodic

4.3 Structured Design

Based on the tasks and the interfaces between them defined in Fig. 9, each task is presented as the execution of a sequential program by decomposing it into modules using the structured design method. Design examples of only two tasks are given here for lack of space. Fig. 10 shows the structure of the Control task. Whenever there is a new request event, the Control task calls the Control State Transition Module, which encapsulates the Control state transition table (derived from the state transition diagram), and passes the request event to it. The Control State Transition Module uses the event as an index into the state transition table to determine the new state, and returns the appropriate control action(s). Based on the desired control action, Control calls the appropriate subordinate module. For instance, if the Send Record Request module is called by the Control task, it sends the Record Req command to the PR task.

Fig. 11 Structure chart for the PR task

Two encapsulated information hiding modules (IHMs), Formatted Midi and Midi Repository, are shown in Fig. 10. These IHMs are also accessed by other tasks, so the Update and Get access procedures must ensure properly synchronized access to the data; this is done using semaphores.
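A minimal sketch of such an IHM is given below for the Formatted Midi data store, with semaphore-protected Update and Get access procedures as described above. The record size, buffer depth, and use of POSIX semaphores are illustrative assumptions; the paper only states that semaphores are used.

/*
 * Sketch of the Formatted Midi IHM as a bounded buffer guarded by
 * semaphores (assumed sizes, not taken from the paper).
 */
#include <semaphore.h>
#include <stdint.h>
#include <string.h>

#define FMT_MIDI_RECORD_SIZE 16            /* assumed record size  */
#define FMT_MIDI_DEPTH       32            /* assumed buffer depth */

static uint8_t  store[FMT_MIDI_DEPTH][FMT_MIDI_RECORD_SIZE];
static unsigned head, tail;
static sem_t lock;                         /* mutual exclusion on the buffer     */
static sem_t items;                        /* number of records ready to be read */
static sem_t space;                        /* number of free slots               */

void formatted_midi_init(void)
{
    sem_init(&lock, 0, 1);
    sem_init(&items, 0, 0);
    sem_init(&space, 0, FMT_MIDI_DEPTH);
}

/* Update: called by the PR and Playback tasks to deposit a formatted Midi record. */
void formatted_midi_update(const uint8_t rec[FMT_MIDI_RECORD_SIZE])
{
    sem_wait(&space);                      /* block if the buffer is full */
    sem_wait(&lock);
    memcpy(store[head], rec, FMT_MIDI_RECORD_SIZE);
    head = (head + 1) % FMT_MIDI_DEPTH;
    sem_post(&lock);
    sem_post(&items);
}

/* Get: called by the Sound I/F task to fetch the next record to dispatch. */
void formatted_midi_get(uint8_t rec[FMT_MIDI_RECORD_SIZE])
{
    sem_wait(&items);                      /* block until a record is available */
    sem_wait(&lock);
    memcpy(rec, store[tail], FMT_MIDI_RECORD_SIZE);
    tail = (tail + 1) % FMT_MIDI_DEPTH;
    sem_post(&lock);
    sem_post(&space);
}

Blocking a time-critical task such as Sound I/F on an empty buffer would have to be accounted for in the timing analysis of Section 5; a non-blocking variant that simply returns when no record is pending is an equally valid design choice.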

5. Performance Analysis
We analyze the performance of the design using real-time scheduling theory to see whether all tasks can meet their respective time constraints. Track Midi should take the highest priority since it must not miss any Midi data input from the instruments; activated by an external interrupt, it takes at most 1 ms to track Midi data and store it in the Input Midi data store. PR and Playback get Midi data every 20 ms from different sources, the music instrument and the Midi Data Repository data store, respectively. The two tasks convert the obtained Midi data to formatted Midi data and store it in the Formatted Midi data store in an interleaved fashion. Independently, Sound I/F dispatches the formatted Midi data periodically every 10 ms to generate sound. Only these three tasks are time-critical periodic tasks. The Control task should be assigned the fourth highest priority; it takes at most 2 ms to process one input event. GUI is given the fifth highest priority since it only needs to receive inputs while Control is not busy. File Handler and Edit Midi are both assigned the lowest priority, since it is enough for them to run whenever the CPU becomes idle. All of the time quanta assigned depend on the microprocessor power. A summary of the task characteristics is given in Table 2.

While the real-time periodic tasks are running, the following worst-case event sequence must also be handled:

(1) a GUI interrupt arrives (C1);
(2) the GUI task reads the UI input from the UI devices (C2);
(3) the GUI task sends a Control Req message to the Control task (Cm);
(4) the Control task receives the message, executes its state transition diagram, and changes state according to the type of input (C6);
(5) the Control task sends the corresponding request message to one of the tasks connected by outgoing edges (Cm).

Thus Ce = C1 + C2 + C6 + 3Cx + 2Cm = 1 + 2 + 2 + 3 * 0.5 + 2 * 1 = 8.5 ms. This event sequence, named ES, can be modeled as one periodic task that executes with a maximum response delay of 100 ms.

The worst-case execution time of the PR periodic task occurs as follows: the PR task checks its message queue and changes its mode from Play to Play & Record; it gets a Midi data item and converts it to formatted Midi by calling a Synthesizer I/F function; it writes the formatted Midi to the Formatted Midi data store; and it also writes the Midi data to the Midi Data Repository. Hence CPR = C3 + 2Cx = 3 + 2 * 0.5 = 4 ms. This task is invoked periodically every 20 ms.

The worst-case execution time of the Playback periodic task occurs as follows: the Playback task checks its message queue for a Playback On or Playback Off request; it gets a Midi data item from the Midi Repository and converts it to formatted Midi data by calling a Synthesizer I/F function; and it writes the formatted Midi data to the Formatted Midi data store. Hence CPlayback = C4 + 2Cx = 2 + 2 * 0.5 = 3 ms. This task is also invoked periodically every 20 ms.

We assume that Track Midi can always preempt the periodic tasks and the event sequence task so that no Midi input is missed, and we therefore add CTM to the execution times of the periodic tasks to reflect the worst-case scenario.
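A hedged sketch of how such a 20 ms periodic activation could be realized on the Linux platform is shown below, using an absolute-time sleep so that the release times do not drift. The pr_iteration() body (check the message queue, convert one Midi record, and write it to the Formatted Midi store) is assumed to exist elsewhere; the paper does not state which timing primitive its implementation uses.

/* Sketch of a 20 ms periodic release for the PR (or Playback) task. */
#include <time.h>

#define PERIOD_NS (20 * 1000 * 1000)   /* Ti = 20 ms */

extern void pr_iteration(void);        /* one activation, bounded by C3 = 3 ms */

void *pr_task(void *arg)
{
    (void)arg;
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        pr_iteration();

        /* Advance the release time by exactly one period. */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return NULL;
}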
Table 3. Task parameter values

Task Name            Task Type          Exe Time Ci (ms)        Period Ti   Pri.
*PR                  Periodic           5 = CPR + CTM           20 ms       2
*Playback            Periodic           4 = CPlayback + CTM     20 ms       2
*Sound I/F           Periodic           2                       20 ms       2
*ES (GUI, Control)   Modeled Periodic   9.5 = Ce + CTM          100 ms      3
Track Midi           Aperiodic          1                       -           1
File Handler         Aperiodic          -                       -           4
Edit Midi            Aperiodic          -                       -           4
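The fixed priorities of Table 3 could be applied to the tasks with the POSIX SCHED_FIFO policy, as sketched below. Note that POSIX uses larger numbers for higher priorities, so the table's ranks (1 = highest) are mapped onto descending sched_priority values; the concrete numbers 40, 30, 20, and 10 are arbitrary placeholders, and the task entry functions are assumed to exist elsewhere.

/* Sketch of fixed-priority task creation under SCHED_FIFO. */
#include <pthread.h>
#include <sched.h>

extern void *track_midi_task(void *);
extern void *pr_task(void *);

static int spawn_rt_task(pthread_t *tid, void *(*entry)(void *), int prio)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = prio };

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    return pthread_create(tid, &attr, entry, NULL);
}

int start_tasks(void)
{
    pthread_t track_midi, pr;

    /* Rank 1 (highest) gets the largest sched_priority, rank 2 the next, and so on. */
    if (spawn_rt_task(&track_midi, track_midi_task, 40) != 0)
        return -1;
    if (spawn_rt_task(&pr, pr_task, 30) != 0)
        return -1;
    /* ... Playback and Sound I/F at 30, ES at 20, File Handler / Edit Midi at 10 */
    return 0;
}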

In Table 3, the execution times of the tasks marked with * include the execution time of Track Midi (CTM), since Track Midi is an asynchronous task with the highest priority. For the periodic task set {PR, Playback, Sound I/F, ES}, the utilization U is 0.645 (= 5/20 + 4/20 + 2/20 + 9.5/100). According to the schedulability theorem [6], a task set is determined to be schedulable if the following condition is satisfied:

U = \sum_{i=1}^{n} \frac{C_i}{T_i} \le U(n) = n\left(2^{1/n} - 1\right)


If n = 4, then U(4) = 4(2^{1/4} - 1) ≈ 0.757. Thus, since U ≤ U(4), the given periodic task set is schedulable in any case. The system timing diagram is given in Fig. 12, in which each task completes before its deadline, the end of its period. During the CPU idle time slots, the non-time-critical tasks, File Handler and Edit Midi, are executed.

Fig. 12 System Timing Diagram
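The utilization test can be reproduced with a few lines of code; the sketch below plugs in the Ci and Ti values of Table 3 for the periodic task set {PR, Playback, Sound I/F, ES} and prints U = 0.645 against the bound U(4) ≈ 0.757.

/* Worked check of the rate-monotonic schedulability test (values from Table 3). */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* {PR, Playback, Sound I/F, ES}: execution time (ms), period (ms) */
    const double C[] = { 5.0, 4.0, 2.0, 9.5 };
    const double T[] = { 20.0, 20.0, 20.0, 100.0 };
    const int n = 4;

    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* Liu-Layland bound U(n) */

    printf("U = %.3f, U(%d) = %.3f -> %s\n", U, n, bound,
           U <= bound ? "schedulable" : "test inconclusive");
    return 0;
}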

6. Implementation

6.1 Environment

We implemented the embedded music software on the Linux 2.6 based Xhyper272 embedded board, which has 64 MB of flash memory. After installing the Linux kernel, fonts, and other system files, including the ALSA (Advanced Linux Sound Architecture) driver, about 5 MB of the remaining 30.8 MB was used to store the executable code of the embedded music software.

6.2 Task Implementation

The Monitor Midi task corresponds to the ALSA driver of the Linux system, and the PR task uses the ALSA Sequencer module to read Midi input from the Midi keyboard. To implement the GUI task, we use Qt/Embedded, a C++ GUI toolkit for Linux-based embedded development; the resulting GUI view is shown in Fig. 13. Since communication between tasks incurs considerable system overhead, we employed a memory management technique in which inter-task communication is performed through a predefined shared buffer rather than through a system call per message; a sketch of this scheme is given below.
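A minimal sketch of such a shared-buffer channel follows: a single-producer, single-consumer ring buffer through which one task posts requests to another without issuing a system call per message. The queue depth and the request layout are illustrative assumptions, and C11 atomics are used here for clarity even though the original 2007 implementation predates them.

/* Single-producer / single-consumer request queue in a shared buffer. */
#include <stdatomic.h>
#include <stdbool.h>

#define QUEUE_DEPTH 16                 /* must be a power of two */

struct request {
    int type;                          /* e.g., Record Req, Playback On (assumed codes) */
    int arg;
};

struct spsc_queue {
    struct request slots[QUEUE_DEPTH];
    _Atomic unsigned head;             /* written only by the producer */
    _Atomic unsigned tail;             /* written only by the consumer */
};

/* Producer side (e.g., the Control task posting a Record Req to PR). */
bool queue_put(struct spsc_queue *q, const struct request *r)
{
    unsigned head = atomic_load_explicit(&q->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (head - tail == QUEUE_DEPTH)
        return false;                  /* queue full; caller decides how to react */
    q->slots[head % QUEUE_DEPTH] = *r;
    atomic_store_explicit(&q->head, head + 1, memory_order_release);
    return true;
}

/* Consumer side (e.g., the PR task polling its queue once per period). */
bool queue_get(struct spsc_queue *q, struct request *out)
{
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&q->head, memory_order_acquire);
    if (head == tail)
        return false;                  /* nothing pending */
    *out = q->slots[tail % QUEUE_DEPTH];
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return true;
}

Polling such a queue once per task period keeps the per-message cost small and bounded, which is what the Cm term in the analysis of Section 5 assumes.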

6.3 Graphic User Interface

Fig. 13 View of GUI Task main window

The GUI task represents data in the form of graphical figures and thus provides an easy-to-use graphical user interface: users interact with the music system by clicking the corresponding figures to manipulate data or select functions. The GUI provides functions such as editing Midi data, changing the synthesizer parameters, playback, recording, and file management. Fig. 13 shows the GUI task main window. Midi data can be edited in the grid layer on the right; the horizontal and vertical lines stand for the time value and the pitch of a note, respectively, and the blue rectangles represent Midi data. Midi data can be added or removed with the Pencil or Eraser tool in the top menu bar. The red line in the grid layer indicates the position currently being played back. To hear the sound of a track that corresponds to a specific instrument, the corresponding check box in the track information table on the left is checked; a track can be turned on or off by clicking the red circled button. While recording, the Midi data being recorded can be seen in the grid layer. The synthesizer parameters, such as instrument, effect, and volume level, can also be changed in the bottom-left layer. The instrument and effect parameters can be adjusted separately for each output channel, whereas the volume level applies to all channels.

7. Concluding Remarks
In this paper, the DARTS approach was applied to the design of an audio application for embedded music systems. Even though an audio application includes many components that can be translated into corresponding concurrent tasks, existing music applications have not adopted the real-time design concept. The proposed design demonstrates the feasibility of such a real-time design for an embedded music system. Even though the many concurrent tasks in this system involve additional inter-task communication and synchronization, we were able to optimize and analyze the system design cleanly down to the level of the real-time requirements by using the DARTS approach. In future work, we will elaborate the current design and implementation by adding further interfaces for Korean traditional musical instruments.

8. References
[1] E. Brandt and R. B. Dannenberg, "Low-latency music software using off-the-shelf operating systems," in Proc. 1998 Int. Computer Music Conf. (ICMC-98), San Francisco, 1998, pp. 137-141.
[2] G. Buttazzo, "Research trends in real-time computing for embedded systems," ACM SIGBED Review, vol. 3, issue 3, pp. 1-10, July 2006.
[3] A. Chaudhary, A. Freed, and M. Wright, "An open architecture for real-time music software," in Proc. 2000 Int. Computer Music Conf., Berlin, 2000, ICMA, San Francisco, pp. 492-495.
[4] R. B. Dannenberg, "Aura II: making real-time systems safe for music," in Proc. Int. Conf. on New Interfaces for Musical Expression, Hamamatsu, Japan, 2004.
[5] H. Gomaa, Software Design Methods for Concurrent and Real-Time Systems, SEI Series in Software Engineering, Aug. 2001.
[6] C. L. Liu and J. W. Layland, "Scheduling algorithms for multiprogramming in a hard real-time environment," Journal of the ACM, vol. 20, no. 1, pp. 46-61, Jan. 1973.
[7] P. Ward and S. Mellor, Structured Development for Real-Time Systems, three volumes, Prentice Hall, Englewood Cliffs, NJ, 1985.

