
Industrial Training Report

ON

HEDGE PORTAL

Submitted in partial fulfillment of the requirements for the award of the degree of MASTER OF COMPUTER APPLICATION
[2007-2010]

UPTU, LUCKNOW
IIMT ENGINEERING COLLEGE, MEERUT
MAY 2010

Processes

Processes transform incoming data flows into outgoing data flows. They are represented with a bubble or rounded square and named with a strong VERB/OBJECT combination; examples: create_exception_report, validate_input_characters, calculate_discount.


Data Stores

Data stores represent data at rest: holding areas for collections of data. Processes add data to or retrieve data from these stores. Name a data store using a noun (do not use "file"). Only processes are connected to data stores, and only the net flow of data between a data store and a process is shown. For instance, when accessing a DBMS, show only the result flow, not the request.


ARCHITECTURE

IMPLEMENTATION AND CODING

Implementation & Coding


Introduction

This phase includes coding, testing and installation of the system. After a concrete logical and physical design is in place, the researcher will write the programs that will make up the system. Complete details about system features and functionalities must be considered in completing system development. Once the system is finished, the project will be implemented on servers that run a Linux distribution and will be tested for a certain period of time. The system will be examined for errors and bugs, and comments and suggestions will also be collected from the sample users and can be discussed using an online forum. These data will then be used to further improve the system before the network and system administrators decide to finally adopt the proposed system.

INTRODUCTION TO JAVA
Java (programming language)

Java is a programming language originally developed by Sun Microsystems and released in 1995 as a core component of Sun's Java platform. The language derives much of its syntax from C and C++ but has a simpler object model and fewer low-level facilities. Java applications are typically compiled to byte code, which can run on any Java Virtual Machine (JVM) regardless of computer architecture. The original and reference implementations of the Java compilers, virtual machines, and class libraries were developed by Sun from 1995. Java's design, industry backing and portability have made it one of the fastest-growing and most widely used programming languages in the modern computing industry.

History: The Java language was created by James Gosling in June 1991 for use in a set-top box project. Gosling's goals were to implement a virtual machine and a language that had a familiar C/C++ style of notation. The first public implementation was Java 1.0 in 1995. It promised "Write Once, Run Anywhere" (WORA), providing no-cost runtimes on popular platforms. It was fairly secure and its security was configurable, allowing network and file access to be restricted. There were five primary goals in the creation of the Java language:
1. It should use the object-oriented programming methodology.
2. It should allow the same program to be executed on multiple operating systems.
3. It should contain built-in support for using computer networks.
4. It should be designed to execute code from remote sources securely.
5. It should be easy to use by selecting what were considered the good parts of other object-oriented languages.

Platform independence

One characteristic, platform independence, means that programs written in the Java language must run similarly on any supported hardware/operating-system platform. One should be able to write a program once, compile it once, and run it anywhere. This is achieved by most Java compilers by compiling the Java language code halfway, to Java byte code: simplified machine instructions specific to the Java platform. The code is then run on a virtual machine (VM), a program written in native code on the host hardware that interprets and executes generic Java byte code. (In some JVM versions, byte code can also be compiled to native code, resulting in faster execution.) Further, standardized libraries are provided to allow access to features of the host machine (such as graphics, threading and networking) in unified ways.

Some compilers instead translate Java source directly into native machine code. This achieves good performance, but at the expense of portability; the output of these compilers can only be run on a single architecture. Another technique, known as just-in-time compilation (JIT), translates Java byte code into native code at the time the program is run, which results in a program that executes faster than interpreted code but also incurs compilation overhead during execution. More sophisticated VMs use dynamic recompilation, in which the VM can analyze the behavior of the running program and selectively recompile and optimize critical parts of the program. Dynamic recompilation can achieve optimizations superior to static compilation because the dynamic compiler can base optimizations on knowledge about the runtime environment and the set of loaded classes, and can identify the hot spots (parts of the program, often inner loops, that take up most of the execution time). JIT compilation and dynamic recompilation allow Java programs to take advantage of the speed of native code without losing portability.

Automatic memory management

One of the ideas behind Java's automatic memory management model is that programmers should be spared the burden of having to perform manual memory management. In some languages the programmer allocates memory for the creation of objects stored on the heap, and the responsibility of later deallocating that memory also resides with the programmer. If the programmer forgets to deallocate memory or writes code that fails to do so, a memory leak occurs and the program can consume an arbitrarily large amount of memory. Additionally, if the program attempts to deallocate a region of memory more than once, the result is undefined and the program may become unstable and may crash. The use of garbage collection in a language can also affect programming paradigms. If, for example, the developer assumes that the cost of memory allocation/recollection is low, they may choose to more freely construct objects instead of pre-initializing, holding and reusing them. With the small cost of potential performance penalties (inner-loop construction of large/complex objects), this facilitates thread isolation (no need to synchronize as different threads work on different object instances) and data hiding. The use of transient immutable value objects minimizes side-effect programming. Java does not support pointer arithmetic as is supported in, for example, C++. This is because the garbage collector may relocate referenced objects, invalidating such pointers. Another reason that Java forbids this is that type safety and security could no longer be guaranteed if arbitrary manipulation of pointers were allowed.

Java syntax

The syntax of Java is largely derived from C++. However, unlike C++, which combines the syntax for structured, generic, and object-oriented programming, Java was built exclusively as an object-oriented language. As a result, almost everything is an object and all code is written inside a class. The exceptions are the intrinsic data types (ordinal and real numbers, Boolean values, and characters), which are not classes for performance reasons.
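As a small illustration of the points above, the following sketch (a hypothetical example, not taken from the project code) constructs objects freely inside a loop and never frees them explicitly; the garbage collector reclaims the unreachable instances automatically. It also shows an intrinsic (primitive) type alongside an object type.

// MemoryDemo.java -- hypothetical example illustrating automatic memory management
public class MemoryDemo {
    public static void main(String[] args) {
        int total = 0;                      // intrinsic (primitive) type, not an object
        for (int i = 0; i < 100000; i++) {
            // A new StringBuilder object is constructed on every iteration.
            // There is no explicit free/delete in Java; once the reference goes
            // out of scope, the object becomes garbage and is collected later.
            StringBuilder sb = new StringBuilder();
            sb.append("item ").append(i);
            total += sb.length();
        }
        System.out.println("Total characters built: " + total);
    }
}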

This is a minimal Hello world program in Java:

// Hello.java
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}

To execute a Java program, the code is saved in a file named Hello.java. It must first be compiled into byte code using a Java compiler, which produces a file named Hello.class that can then be run on the Java virtual machine.

CODING

Coding Style
Braces

For class, interface, method, and other similar declarations, place the beginning brace at the end of the first line of the declaration, and the ending brace on a separate line.

public class SomeClass extends Object {
    ...
}

public void someRoutine() {
    ...
}

Simple Statements

Each line should contain at most one statement.

String errMsg = null;
errMsg = "This is a Message";
counter++;
numErrors++;

if, if-else, if-else-if-else Statements

The if-else type of statements should have the following form:

if (condition) {
    statements;
} else if (condition) {
    statements;
} else {
    statements;
}

Always use braces with if statements. Avoid the following:

if (something is true) statement;

while Statements

The while statement should have the following form:

while (condition) {
    statements;
    statements;
}

Always use braces with while statements. Avoid the following:

while (something is true) statement;

do-while Statements

The do-while statement should have the following form:

do {
    statements;
    statements;
} while (condition);

for Loop Statements

The for statement should have the following form:

for (initialization; condition; increment) {
    statements;
    statements;
}

for (int i = 0; i < 100; i++) {
    statements;
    statements;
}

Always use braces with for loop statements. Avoid the following:

for (int i = 0; i < 100; i++) statement;

try-catch Statements

The try-catch-finally statements should have the following form:

try {
    statements;
} catch (Exception e) {
    statements;
} finally {
    statements;
}

Method Parameters

Each method parameter/argument should be on its own line for readability.

public void processObject(Integer objectAmount,
                          SomeDataObject someDo,
                          AnotherDataObject anotherDo) {
    statements;
}

Naming conventions
Packages

Use English descriptions, using all lower case.

com.seic.gwmp.amsvcs.manager

Classes and Interfaces

Class names should be nouns. Use a full English description, with the first letters of all words capitalized.

Customer
SavingsAccount

Constants

Use all uppercase letters with the words separated by underscores. An additional approach is to use final static getter member functions, because it greatly increases flexibility.

MIN_BALANCE

Method Arguments/Parameters

Use a full English description of the value/object being passed, optionally prefixing the name with "a" or "an" to prevent any variable name collisions with local variables.

aCustomer

Attributes/Variables

Use a full English description, with the first letter in lower case and the first letter of subsequent words in uppercase.

firstName
lastName

Methods/Member Functions

Method names should be verbs. Use a full English description of what the method does, with the first letter in lower case and the first letter of subsequent words in uppercase.

openFile()
addAccount()

Getter/Setter Member Functions

Prefix the name of the field being accessed with get or set appropriately.

getFirstName()
setLastName()

All Boolean getter member functions must be prefixed with the word is, followed by the name of the field or attribute.

isPersistent()
isReady()

Collections

Use the pluralized name representing the types of objects stored by the collection.

orderItems
customers

Use accessor member functions for collection objects.

getObjectItems()
setObjectItems()
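The sketch below (a hypothetical class, not part of the project code) pulls these conventions together in one place: a noun class name, an underscore-separated constant, camel-case attributes and methods, an "a"-prefixed parameter, and is/get/set prefixes for accessors.

// SavingsAccount.java -- hypothetical example of the naming conventions above
public class SavingsAccount {
    public static final double MIN_BALANCE = 100.0;    // constant: all uppercase with underscores

    private String firstName;                           // attribute: lower camel case
    private boolean persistent;

    public void openAccount(String aFirstName) {        // method: verb, lower camel case
        this.firstName = aFirstName;                     // parameter prefixed with "a"
        this.persistent = true;
    }

    public String getFirstName() {                       // getter: prefixed with get
        return firstName;
    }

    public void setFirstName(String aFirstName) {        // setter: prefixed with set
        this.firstName = aFirstName;
    }

    public boolean isPersistent() {                       // boolean getter: prefixed with is
        return persistent;
    }
}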

Formatting
Classes

Fields and methods of a class should be declared in the following order:

1. Static Variables/Attributes
2. Variables/Attributes
3. Constructors
4. finalize() Method
5. Static Public Methods
6. Static Protected Methods
7. Static Private Methods
8. Public Methods
9. Protected Methods
10. Private Methods
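A skeleton class following this ordering might look like the hypothetical sketch below (the member names are illustrative only; positions in the ordering are noted in the comments).

// OrderingExample.java -- hypothetical skeleton showing the declaration order above
public class OrderingExample {
    private static int instanceCount = 0;        // 1. static variables/attributes
    private String name;                          // 2. variables/attributes

    public OrderingExample(String name) {         // 3. constructors
        this.name = name;
        instanceCount++;
    }

    protected void finalize() throws Throwable {  // 4. finalize() method
        super.finalize();
    }

    public static int getInstanceCount() {        // 5. static public methods
        return instanceCount;
    }

    public String getName() {                     // 8. public methods
        return name;
    }

    private void reset() {                        // 10. private methods
        name = null;
    }
}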

Exception Hierarchy

SEI Logger
Overview

Logging can be done by the application for debug or informational purposes. It is also used by the exception handling framework to log exceptions. Never use System.out or printStackTrace() in your code! Logging is done via an SEI-written class which implements log4j. For an example of using logging, refer to VSS\enterprisearch\framework\src\system\com\seic\framework\logging\LoggingExample.java. The logging classes are part of the EA framework and are in eaframework-sys.jar.

Usage

Initial setup

To implement logging, perform the following steps:

1. In each class where logging should occur, add a class-level variable for an SeiLogger (make sure that the correct class name appears in the getSeiLogger argument):

public class MyClass {
    public static final SeiLogger logger = SeiLogger.getSeiLogger(MyClass.class);
    ...
}

2. To log a message, call the logger with the appropriate logging level and the value to log (Note: see usage standards below):

logger.debug("input string value was = " + stringInput);
logger.info("Successfully completed call, name = " + lastName);
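As a minimal sketch of the pattern described above, a service class might look like the following. The class and method names are hypothetical, only the getSeiLogger, debug and info calls shown in this document are used, and the code compiles only against the EA framework jar (eaframework-sys.jar) that provides SeiLogger.

// AccountService.java -- hypothetical usage of the SeiLogger pattern described above
// (import of SeiLogger from the EA framework omitted; its package is not given here)
public class AccountService {
    public static final SeiLogger logger = SeiLogger.getSeiLogger(AccountService.class);

    public void openAccount(String lastName, String stringInput) {
        // Debug-level message for developer diagnostics
        logger.debug("input string value was = " + stringInput);

        // ... business logic would go here ...

        // Info-level message recording a successful call
        logger.info("Successfully completed call, name = " + lastName);
    }
}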

USER MANUAL

User Manual

SNAPSHOT

TESTING

Software testing
Software testing is the process used to assess the quality of computer software. Software testing is an empirical technical investigation conducted to provide stakeholders with information about the quality of the product or service under test, with respect to the

context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding software bugs. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behavior of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing. A problem with software testing is that testing all combinations of inputs and preconditions is not feasible when testing anything other than a simple product. This means that the number of defects in a software product can be very large, and defects that occur infrequently are difficult to find in testing. More significantly, parafunctional dimensions of quality -- for example, usability, scalability, performance, compatibility, reliability -- can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another. Software testing is used in association with verification and validation: Verification: Have we built the software right? Validation: Have we built the right software? Testing is usually performed for the following purposes:

To improve quality.

As computers and software are used in critical applications, the outcome of a bug can be severe. Bugs can cause huge losses. Bugs in critical systems have caused airplane crashes, allowed space shuttle missions to go awry, halted trading on the stock market, and worse. Bugs can kill. Bugs can cause disasters. The so-called year 2000 (Y2K) bug gave birth to a cottage industry of consultants and programming tools dedicated to making sure the modern world would not come to a screeching halt on the first day of the new century. In a computerized, embedded world, the quality and reliability of software is a matter of life and death. Quality means conformance to the specified design requirement. Being correct, the minimum requirement of quality, means performing as required under specified circumstances. Debugging, a narrow view of software testing, is performed heavily by the programmer to find design defects. The imperfection of human nature makes it almost impossible to make a moderately complex program correct the first time. Finding the problems and getting them fixed is the purpose of debugging in the programming phase.

For Verification & Validation (V&V)

Just as the topic Verification and Validation indicated, another important purpose of testing is verification and validation (V&V). Testing can serve as a metric. It is heavily used as a tool in the V&V process. Testers can make claims based on interpretations of the testing

results, showing either that the product works under certain situations or that it does not work. We can also compare the quality among different products under the same specification, based on results from the same test. We cannot test quality directly, but we can test related factors to make quality visible. Quality has three sets of factors -- functionality, engineering, and adaptability. These three sets of factors can be thought of as dimensions in the software quality space. Each dimension may be broken down into its component factors and considerations at successively lower levels of detail. Table 1 illustrates some of the most frequently cited quality considerations.

For reliability estimation

Software reliability has important relations with many aspects of software, including its structure and the amount of testing it has been subjected to. Based on an operational profile (an estimate of the relative frequency of use of various inputs to the program), testing can serve as a statistical sampling method to gain failure data for reliability estimation.

Testing Methods
Black box testing

Black box testing treats the software as a black box, without any understanding of its internal behavior. It aims to test the functionality according to the requirements. Thus, the tester inputs data and only sees the output from the test object. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) is the same as the expected value specified in the test case. Black box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, the traceability matrix, etc. It is obvious that the more we have covered in the input space, the more problems we will find, and therefore the more confident we will be about the quality of the software. Ideally we would be tempted to exhaustively test the input space. But as stated above, exhaustively testing the combinations of valid inputs is impossible for most programs, let alone considering invalid inputs, timing, sequence, and resource variables. Combinatorial explosion is the major roadblock in functional testing. To make things worse, we can never be sure whether the specification is either correct or complete. Due to limitations of the language used in the specifications (usually natural language), ambiguity is often inevitable. Even if we use some type of formal or restricted language, we may still fail to write down all the possible cases in the specification. Sometimes, the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words. And people can seldom specify clearly what they want -- they usually can tell whether a prototype is, or is not, what they want only after it has been finished. Specification problems contribute approximately 30 percent of all bugs in software.
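As an illustration of equivalence partitioning and boundary value analysis, the hypothetical sketch below tests an assumed validateAge(int) rule purely through its inputs and outputs, picking one representative value from each partition plus the values around each boundary. The method and its limits (0 and 120) are assumptions made for the example, not part of the project.

// AgeValidatorBlackBoxTest.java -- hypothetical black box test using equivalence
// partitions and boundary values for an assumed rule: valid ages are 0..120.
public class AgeValidatorBlackBoxTest {

    // The unit under test (stands in for real project code).
    static boolean validateAge(int age) {
        return age >= 0 && age <= 120;
    }

    public static void main(String[] args) {
        // One representative from each equivalence partition:
        check(!validateAge(-5),  "below-range partition");
        check(validateAge(35),   "in-range partition");
        check(!validateAge(200), "above-range partition");

        // Boundary values around each edge of the valid range:
        check(!validateAge(-1),  "just below lower boundary");
        check(validateAge(0),    "lower boundary");
        check(validateAge(120),  "upper boundary");
        check(!validateAge(121), "just above upper boundary");

        System.out.println("All black box checks passed.");
    }

    private static void check(boolean condition, String caseName) {
        if (!condition) {
            throw new AssertionError("Failed case: " + caseName);
        }
    }
}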

White box testing

White box testing, however, is when the tester has access to the internal data structures, code, and algorithms. White box testing methods include creating tests to satisfy some code coverage criteria. For example, the test designer can create tests to cause all statements in the program to be executed at least once. Other examples of white box testing are mutation testing and fault injection methods. White box testing includes all static testing. There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of and attention to the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch statement (branch coverage), or covering all the possible combinations of true and false condition predicates (multiple condition coverage). Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software into a directed graph. Test cases are carefully selected based on the criterion that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code -- code that is of no use, or never gets executed at all, which cannot be discovered by functional testing. In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault. Each faulty version of the program is called a mutant. Test data are selected based on their effectiveness at failing the mutants. The more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use. The boundary between the black-box approach and the white-box approach is not clear-cut. Many of the testing strategies mentioned above may not be safely classified into black-box testing or white-box testing. This is also true for transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text. One reason is that all the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of specification itself is broad -- it may contain any requirement including the structure, programming language, and programming style as part of the specification content.
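The hypothetical sketch below shows the white-box idea of branch coverage: the tester, who can see the code of applyDiscount (an assumed method, not project code), deliberately chooses inputs so that both the true and the false outcomes of its single decision are executed.

// DiscountWhiteBoxTest.java -- hypothetical branch coverage example
public class DiscountWhiteBoxTest {

    // Unit under test: one Boolean decision, hence two branches to cover.
    static double applyDiscount(double amount) {
        if (amount >= 1000.0) {        // branch point
            return amount * 0.90;      // true branch: 10% discount
        }
        return amount;                 // false branch: no discount
    }

    public static void main(String[] args) {
        // Test 1 drives the true branch, test 2 drives the false branch,
        // so together they achieve full branch coverage of applyDiscount.
        assertEquals(900.0, applyDiscount(1000.0));
        assertEquals(500.0, applyDiscount(500.0));
        System.out.println("Both branches covered and checks passed.");
    }

    private static void assertEquals(double expected, double actual) {
        if (Math.abs(expected - actual) > 1e-9) {
            throw new AssertionError("Expected " + expected + " but was " + actual);
        }
    }
}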

Testing can be done on the following levels:

Unit testing

Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.

Limitations of unit testing

Testing cannot be expected to catch every error in the program. The same is true for unit testing. By definition, unit testing only tests the functionality of the units themselves. Therefore, it may not catch integration errors, performance problems, or other system-wide issues. Unit testing is more effective if it is used in conjunction with other software testing activities. Like all forms of software testing, unit tests can only show the presence of errors; they cannot show the absence of errors. Software testing is a combinatorial problem. For example, every Boolean decision statement requires at least two tests: one with an outcome of "true" and one with an outcome of "false". As a result, for every line of code written, programmers often need 3 to 5 lines of test code. To obtain the intended benefits from unit testing, a rigorous sense of discipline is needed throughout the software development process. It is essential to keep careful records not only of the tests that have been performed, but also of all changes that have been made to the source code of this or any other unit in the software. Use of a version control system is essential. If a later version of the unit fails a particular test that it had previously passed, the version-control software can provide a list of the source code changes (if any) that have been applied to the unit since that time. It is also essential to implement a sustainable process for ensuring that test case failures are reviewed daily and addressed immediately. If such a process is not implemented and ingrained into the team's workflow, the application will evolve out of sync with the unit test suite, increasing false positives and reducing the effectiveness of the test suite.
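A class-level unit test in the sense described above might look like the hypothetical sketch below: the constructor and one method of a small Account class (invented for this example) are exercised in isolation from the rest of the system.

// AccountUnitTest.java -- hypothetical class-level unit test
public class AccountUnitTest {

    // Unit under test (stands in for a real project class).
    static class Account {
        private double balance;

        Account(double openingBalance) {
            if (openingBalance < 0) {
                throw new IllegalArgumentException("Opening balance cannot be negative");
            }
            this.balance = openingBalance;
        }

        void deposit(double amount) {
            balance += amount;
        }

        double getBalance() {
            return balance;
        }
    }

    public static void main(String[] args) {
        // Constructor test: a valid opening balance is stored correctly.
        Account account = new Account(250.0);
        if (account.getBalance() != 250.0) {
            throw new AssertionError("Constructor did not set the opening balance");
        }

        // Constructor test: an invalid opening balance is rejected.
        try {
            new Account(-1.0);
            throw new AssertionError("Negative opening balance should have been rejected");
        } catch (IllegalArgumentException expected) {
            // expected: the constructor enforces its precondition
        }

        // Method test, still in isolation from any other component.
        account.deposit(50.0);
        if (account.getBalance() != 300.0) {
            throw new AssertionError("Deposit did not update the balance");
        }

        System.out.println("All unit checks passed.");
    }
}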

Integration testing

Integration testing exposes defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.

System testing

System testing tests a completely integrated system to verify that it meets its requirements. System integration testing verifies that a system is integrated with any external or third-party systems defined in the system requirements.

Alpha testing

Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

Beta testing

Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

Regression Testing

Regression testing is typically carried out at the end of the development cycle. During this testing, all bugs previously identified and fixed are tested, along with their impacted areas, to confirm the fix and its impact, if any. According to the IEEE Standard Computer Dictionary, regression testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.

Acceptance Testing

User acceptance testing (UAT) is one of the final stages of a software project and will often occur before the customer accepts a new system. Users of the system will perform these tests which, ideally, developers have derived from the User Requirements Specification, to which the system should conform.

Stress Testing

Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. For example, a web server may be stress tested using scripts, bots, and various denial-of-service tools to observe the performance of a web site during peak loads. Stress testing is a subset of load testing.

MAINTENANCE

Maintenance

Daily operations of the system/software may necessitate that maintenance personnel identify potential modifications needed to ensure that the system continues to operate as intended and produces quality data. Daily maintenance activities for the system take place to ensure that any previously undetected errors are fixed. Maintenance personnel may determine that modifications to the system and databases are needed to resolve errors or performance problems. Modifications may also be needed to provide new capabilities or to take advantage of hardware upgrades or new releases of the system software and application software used to operate the system. New capabilities may take the form of routine maintenance or may constitute enhancements to the system or database in response to user requests for new/improved capabilities. New capability needs may begin a new problem/modification process as described above.

The software will definitely undergo change once it is delivered to the customer. There can be many reasons for this change to occur. Change could happen because of some unexpected input values into the system. In addition, changes in the system could directly affect the software operations. The software should be developed to accommodate changes that could happen during the post-implementation period. Maintenance is the most important part of the software development life cycle, as the maintenance part of the project takes place when the project is implemented at the customer site. The maintenance of the project takes place after deployment of the project and plays an important part in the implementation of the project. Maintenance is keeping the system up to date with the changes in the organization and ensuring it meets the goals of the organization by: building a help desk to support the system users, having a team available to aid with technical difficulties and answer questions, and implementing changes to the system when necessary.

Software maintenance activities can be classified individually. In practice, however, they are often intertwined. For example, in the course of modifying a program due to the introduction of a new operating system (adaptive change), obscure 'bugs' may be introduced. The bugs have to be traced and dealt with (corrective maintenance). Similarly, the introduction of a more efficient sorting algorithm into a data processing package (perfective maintenance) may require that the existing program code be restructured (preventive maintenance). Figure 1.3 depicts the potential relations between the different types of software change. Despite the overlapping nature of these changes, there are several reasons why a good understanding of the distinction between them is important. Firstly, it allows management to set priorities for change requests. Some changes require a faster response than others. Secondly, there are limitations to software change. Ideally changes are implemented as the need for them arises. In practice, however, this is not always possible for several reasons:

Resource limitations: Some of the major hindrances to the quality and productivity of maintenance activities are the lack of skilled and trained maintenance programmers and of suitable tools and environments to support their work. Cost may also be an issue.

Quality of the existing system: In some 'old' systems, this can be so poor that any change can lead to unpredictable ripple effects and a potential collapse of the system.

Organizational strategy: The desire to be on a par with other organizations, especially rivals, can be a great determinant of the size of a maintenance budget.

Inertia: The resistance to change by users may prevent modification to a software product, however important or potentially profitable such change may be.

Fig 1.3. The relationship between the different types of software change

Thirdly, software is often subject to incremental release, where changes made to a software product are not always done all together. The changes take place incrementally, with minor changes usually implemented while a system is in operation. Major enhancements are usually planned and incorporated, together with other minor changes, in a new release or upgrade. The change introduction mechanism also depends on whether the software package is bespoke or off-the-shelf. With bespoke software, change can often be effected as the need for it arises. For off-the-shelf packages, users normally have to wait for the next upgrade. Swanson's definitions allow the software maintenance practitioner to tell the user that a certain portion of a maintenance organization's efforts is devoted to user-driven or environment-driven requirements. The user requirements should not be buried with other types of maintenance. The point here is that these types of updates are not corrective in nature; they are improvements, and no matter which definitions are used, it is imperative to discriminate between corrections and enhancements. By studying the types of maintenance activities above it is clear that, regardless of which tools and development model are used, maintenance is needed. The categories clearly indicate that maintenance is more than fixing bugs. This view is supported by Jones

(1991), who comments that organisations lump enhancements and the fixing of bugs together. He goes on to say that this distorts both activities and leads to confusion and mistakes in estimating the time it takes to implement changes and budgets. Even worse, this "lumping" perpetuates the notion that maintenance is fixing bugs and mistakes. Because many maintainers do not use maintenance categories, there is confusion and misinformation about maintenance. In order for a software system to remain useful in its environment it may be necessary to carry out a wide range of maintenance activities upon it. Swanson (1976) was one of the first to examine what really happens during maintenance and was able to identify three different categories of maintenance activity.

Corrective
Changes necessitated by actual errors (defects or residual "bugs") in a system are termed corrective maintenance. These defects manifest themselves when the system does not operate as it was designed or advertised to do. A defect or bug can result from design errors, logic errors and coding errors. Design errors occur when, for example, changes made to the software are incorrect, incomplete, wrongly communicated, or the change request is misunderstood. Logic errors result from invalid tests and conclusions, incorrect implementation of the design specification, faulty logic flow or incomplete test data. Coding errors are caused by incorrect implementation of the detailed logic design and incorrect use of the source code logic. Defects are also caused by data processing errors and system performance errors. All these errors, sometimes called residual errors or bugs, prevent the software from conforming to its agreed specification. In the event of a system failure due to an error, actions are taken to restore operation of the software system. The approach here is to locate the original specifications in order to determine what the system was originally designed to do. However, due to pressure from management, maintenance personnel sometimes resort to emergency fixes known as patching. The ad hoc nature of this approach often gives rise to a range of problems that include increased program complexity and unforeseen ripple effects. Increased program complexity usually arises from degeneration of program structure, which makes the program increasingly difficult, if not impossible, to comprehend. This state of affairs can be referred to as the spaghetti syndrome or software fatigue. Unforeseen ripple effects imply that a change to one part of a program may affect other sections in an unpredictable fashion. This is often due to lack of time to carry out a thorough impact analysis before effecting the change. Corrective maintenance has been estimated to account for 20% of all maintenance activities.

Adaptive

Any effort that is initiated as a result of changes in the environment in which a software system must operate is termed adaptive change. Adaptive change is a change driven by the need to accommodate modifications in the environment of the software system, without which the system would become increasingly less useful until it became obsolete. The term environment in this context refers to all the conditions and influences which act from outside upon the system, for example business rules, government policies, work patterns, software and hardware operating platforms. A change to the whole or part of this environment will warrant a corresponding modification of the software. Unfortunately, with this type of maintenance the user does not see a direct change in the operation of the system, but the software maintainer must expend resources to effect the change. This task is estimated to consume about 25% of the total maintenance activity.

Perfective
The third widely accepted task is that of perfective maintenance. This is actually the most common type of maintenance, encompassing enhancements both to the function and to the efficiency of the code, and includes all changes, insertions, deletions, modifications, extensions, and enhancements made to a system to meet the evolving and/or expanding needs of the user. A successful piece of software tends to be subjected to a succession of changes, resulting in an increase in its requirements. This is based on the premise that as the software becomes useful, the users tend to experiment with new cases beyond the scope for which it was initially developed. Expansion in requirements can take the form of enhancement of existing system functionality or improvement in computational efficiency. As the program continues to grow with each enhancement, the system evolves from an average-sized program of average maintainability to a very large program that offers great resistance to modification. Perfective maintenance is by far the largest consumer of maintenance resources; estimates of around 50% are not uncommon. The categories of maintenance above were further defined in the 1993 IEEE Standard on Software Maintenance (IEEE 1219 1993), which goes on to define a fourth category.

Preventive
The long-term effect of corrective, adaptive and perfective change is expressed in Lehman's law of increasing entropy: As a large program is continuously changed, its complexity, which reflects deteriorating structure, increases unless work is done to maintain or reduce it (Lehman 1985). The IEEE defined preventive maintenance as "maintenance performed for the purpose of preventing problems before they occur" (IEEE 1219 1993). This is the process of changing software to improve its future maintainability or to provide a better basis for future enhancements. The preventive change is usually initiated from within the maintenance organization with the intention of making programs easier to understand and hence facilitate future maintenance work. Preventive change does not usually give rise to a substantial increase in the baseline functionality. Preventive maintenance is rare (only about 5%), the reason being that other pressures tend to push it to the end of the queue. For instance, a demand may come to develop a new system that will improve the organization's competitiveness in the market. This will likely be seen as more desirable than spending time and money on a project that delivers no new function. Still, it is easy to see that if one considers the probability of a software unit needing change and the time pressures that are often present when the change is requested, it makes a lot of sense to anticipate change and to prepare accordingly. The most comprehensive and authoritative study of software maintenance was conducted by B. P. Lientz and E. B. Swanson (1980). The figure depicts the distribution of maintenance activities by category, by percentage of time, from the Lientz and Swanson study of some 487 software organizations. Clearly, corrective maintenance (that is, fixing problems and routine debugging) is a small percentage of overall maintenance costs; Martin and McClure (1983) provide similar data.

Future Scope

Conclusion

Bibliography

1. Head First Java by Kathy Sierra and Bert Bates
2. The Complete Reference by Herbert Schildt
3. Head First Servlets and JSP by Bryan Basham, Kathy Sierra and Bert Bates
4. Oracle by Ivan Bayross
5. DBMS by Ramez Elmasri and Shamkant B. Navathe
6. Professional JSP by Wrox Publishing
