
Project Overview

Much work has been done over the last several years on automatic approaches to code reorganization. Fundamental to any such attempt is the division of the software into modules, publication of an API for each module, and the requirement that the modules access each other's resources only through the published interfaces.

Our ongoing effort, from which we draw the work reported here, focuses on the reorganization of legacy software: millions of lines of non-object-oriented code that was either never modularized or poorly modularized to begin with. We can think of the problem as reorganizing millions of lines of code, residing in thousands of files across hundreds of directories, into modules, where each module is formed by grouping a set of entities such as files, functions, data structures, and variables into a logically cohesive unit. Furthermore, each module makes itself available to the other modules through a published API. The work we report here addresses the fundamental issue of how to measure the quality of a given modularization of the software.

Note that modularization quality is not synonymous with modularization correctness. Obviously, after software has been modularized and the API of each module published, correctness can be established by checking function-call dependencies at compile time and at runtime. If all intermodule function calls are routed through the published APIs, the modularization is correct. As a theoretical extreme, retaining all of the software in a single monolithic module is a correct modularization, though it is not an acceptable solution. The quality of a modularization, on the other hand, has more to do with partitioning the software into maintainable modules on the basis of the cohesiveness of the service provided by each module. Ideally, while containing all of the major functions that directly contribute to a specific service vis-a-vis the other modules, each module would also contain all of the ancillary functions and data structures that are needed only in that module. Capturing these cohesive-service and ancillary-support criteria in a set of metrics is an important goal of our research, and the work we report here is a step in that direction.

More specifically, we present in this work a set of metrics that measure, in different ways, the interactions between the different modules of a software system. It is important to realize that metrics that analyze only intermodule interactions cannot exist in isolation from other metrics that measure the quality of a given partitioning of the code. To see why, consider partitioning a software system of a couple of million lines of code into two modules of a million lines each: justifying the two large modules purely on the basis of function-call routing through their published APIs is not very useful, since each module would still be far too large from the standpoint of code maintenance and code extension. The module interaction metrics must therefore come with a sibling set of metrics that record other desirable properties of the code.
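As an illustration of the kind of intermodule interaction such metrics examine, the sketch below checks a modularization for correctness (every intermodule call routed through the callee module's published API) and computes a simple coupling ratio. The function name, data shapes, and the coupling ratio itself are illustrative assumptions, not the metric suite proposed in this work.

```python
# Hypothetical sketch: given a static call graph, a function-to-module
# assignment, and each module's published API, report (a) whether the
# modularization is correct and (b) what fraction of calls cross modules.
def analyze_modularization(calls, module_of, api_of):
    """calls: list of (caller, callee) function-name pairs.
    module_of: dict mapping function name -> module name.
    api_of: dict mapping module name -> set of published API functions."""
    inter = 0        # calls that cross a module boundary
    violations = 0   # inter-module calls that bypass the published API
    for caller, callee in calls:
        if module_of[caller] != module_of[callee]:
            inter += 1
            if callee not in api_of[module_of[callee]]:
                violations += 1
    correct = (violations == 0)
    coupling = inter / len(calls) if calls else 0.0
    return correct, coupling
```

For example, if a function in module A calls an unpublished function of module B directly, the modularization is flagged as incorrect even though the coupling ratio by itself might look acceptable.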

Description of the problem

Existing system:


The basic idea of the data-driven streaming protocol is very simple and similar to that of BitTorrent. The protocol consists of two steps. In the first step, each node independently selects its neighbors so as to form an unstructured overlay network; this is called gossip-style overlay construction or membership management. The second step is block scheduling: the live media content is divided into blocks (or segments or packets), and every node announces to its neighbors which blocks it has. Each node then explicitly requests the blocks of interest from its neighbors according to these announcements. Obviously, the performance of the data-driven protocol depends directly on the algorithms used in these two steps. To improve its performance, most recent papers have focused on the overlay-construction problem of the first step, proposing different schemes for building unstructured overlays with better efficiency or robustness. The scheduling methods used in most of the pioneering work on data-driven/swarming-based streaming are somewhat ad hoc; these conventional strategies mainly include the pure random strategy, the local rarest first (LRF) strategy, and the round-robin strategy. How to perform optimal block scheduling so as to maximize the throughput of data-driven streaming over a given overlay network remains a challenging issue.
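The conventional strategies mentioned above are easy to sketch. Below is a minimal, hypothetical implementation of local rarest first (LRF) selection: among the blocks a node still needs, it requests the one announced by the fewest neighbors, breaking ties at random. The function name and data shapes are assumptions for illustration only.

```python
import random

def pick_block_lrf(wanted, neighbor_maps):
    """Local rarest first: among the blocks this node still wants, pick the
    one held by the fewest neighbors (ties broken at random).
    wanted: set of block ids still missing at this node.
    neighbor_maps: list of sets, each the block ids one neighbor announced."""
    # A block is a candidate only if at least one neighbor can serve it.
    available = [b for b in wanted if any(b in m for m in neighbor_maps)]
    if not available:
        return None
    # Rarity = number of neighbors holding the block.
    rarity = {b: sum(b in m for m in neighbor_maps) for b in available}
    rarest = min(rarity.values())
    return random.choice([b for b in available if rarity[b] == rarest])
```

A pure random strategy would simply call `random.choice(available)`, and round-robin would cycle through blocks in sequence number order; LRF instead prioritizes blocks that are in danger of becoming unavailable.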

Proposed system:
In this paper, we study the scheduling problem in the data-driven/swarming-based protocol for peer-to-peer streaming. The contributions of this paper are twofold. First, to the best of our knowledge, we are the first to theoretically address the scheduling problem in the data-driven protocol. Second, we give the optimal scheduling algorithm under different bandwidth constraints, as well as a distributed heuristic algorithm that can be practically applied in real systems and that outperforms the conventional ad hoc strategies by about 10 to 70 percent in both single-rate and multi-rate scenarios. For future work, we will study how to maximize the number of blocks delivered over a horizon of several periods, taking into account the interdependence between periods, and will analyze the gap between the globally optimal solution and the proposed distributed algorithm. We would also like to study how to adapt the parameters of the proposed distributed algorithm to network conditions. In addition, based on our current results, we are interested in combining video coding techniques with our algorithm to further improve the user-perceived quality.
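To convey the flavor of scheduling under bandwidth constraints, the following greedy sketch assigns each missing block to one neighbor, placing the rarest blocks first and preferring the neighbor with the most remaining request slots. All names and the specific tie-breaking rules are our assumptions; this is not the paper's optimal algorithm or its distributed heuristic.

```python
def schedule_requests(wanted, neighbor_blocks, capacity):
    """Greedy scheduling sketch for one period.
    wanted: set of block ids this node is missing.
    neighbor_blocks: dict neighbor -> set of block ids it announced.
    capacity: dict neighbor -> max blocks we may request from it this period.
    Returns a dict block -> chosen neighbor."""
    remaining = dict(capacity)
    holders = {b: [n for n, blks in neighbor_blocks.items() if b in blks]
               for b in wanted}
    plan = {}
    # Rarest first: blocks with fewer holders are harder to place later.
    for b in sorted(wanted, key=lambda b: len(holders[b])):
        candidates = [n for n in holders[b] if remaining[n] > 0]
        if candidates:
            # Prefer the neighbor with the most spare upload capacity.
            n = max(candidates, key=lambda n: remaining[n])
            plan[b] = n
            remaining[n] -= 1
    return plan
```

A globally optimal schedule for this assignment problem could instead be computed with bipartite matching or a min-cost flow formulation; the greedy version trades optimality for the kind of purely local decisions a distributed heuristic requires.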

System Configuration

Hardware Specification
System : Pentium IV 2.4 GHz
Hard Disk : 40 GB
Floppy Drive : 1.44 MB
Monitor : 15 VGA colour
Mouse : Logitech
RAM : 256 MB

Software Specification


Operating system :- Windows XP Professional
Front End :- Microsoft Visual Studio .Net 2005
Coding Language :- Visual C# .Net
Back end :- Microsoft SQL Server 2000
