
Master of Business Administration - MBA Semester III
MI0033 Software Engineering - 4 Credits
Assignment - Set-1 (60 Marks)

Answer all the questions


Q1. Quality and reliability are related concepts but are fundamentally different in a number of ways. Discuss them.
Q2. Discuss the Objective & Principles Behind Software Testing.
Q3. Discuss the CMM 5 Levels for Software Process.
Q4. Discuss the Waterfall model for Software Development.
Q5. Explain the Advantages of Prototype Model & Spiral Model in Contrast to the Waterfall model.
Q6. Explain the COCOMO Model & Software Estimation Technique.

Q1. Quality and reliability are related concepts but are fundamentally different in a number of ways. Discuss them.

A. Quality focuses on the software's conformance to explicit and implicit requirements. Reliability focuses on the ability of software to function correctly as a function of time or some other quantity. Safety considers the risks associated with failure of a computer-based system that is controlled by software. In most cases an assessment of quality considers many factors that are qualitative in nature. Assessment of reliability, and to some extent safety, is more quantitative, relying on statistical models of past events that are coupled with software characteristics in an attempt to predict the future operation of a program.

There is no doubt that the reliability of a computer program is an important element of its overall quality. If a program repeatedly and frequently fails to perform, it matters little whether other software quality factors are acceptable. Software reliability, unlike many other quality factors, can be measured directly and estimated using historical and developmental data. Software reliability is defined in statistical terms as "the probability of failure-free operation of a computer program in a specified environment for a specified time" [MUS87]. To illustrate, program X is estimated to have a reliability of 0.96 over eight elapsed processing hours. In other words, if program X were to be executed 100 times and require eight hours of elapsed processing time (execution time), it is likely to operate correctly (without failure) 96 times out of 100. Whenever software reliability is discussed, a pivotal question arises: what is meant by the term failure? In the context of any discussion of software quality and reliability, failure is nonconformance to software requirements. Yet, even within this definition, there are gradations. Failures can be merely annoying or catastrophic. One failure can be corrected within seconds while another requires weeks or even months to correct. Complicating the issue even further, the correction of one failure may in fact result in the introduction of other errors that ultimately result in other failures.
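To make the reliability arithmetic concrete, here is a minimal Python sketch of the program X example. The function names are illustrative, and the exponential failure model in the second function is a common assumption in reliability theory, not something the definition above requires.

```python
import math

# Minimal sketch of the reliability arithmetic above; names are illustrative.
# Reliability here is the empirical fraction of failure-free runs, plus the
# exponential model often assumed in reliability theory: R(t) = exp(-lambda*t).

def empirical_reliability(total_runs: int, failed_runs: int) -> float:
    """Fraction of runs that completed without failure."""
    return (total_runs - failed_runs) / total_runs

def failure_rate_from_reliability(reliability: float, hours: float) -> float:
    """Solve R(t) = exp(-lambda*t) for lambda, assuming an exponential model."""
    return -math.log(reliability) / hours

# Program X from the text: 96 failure-free runs out of 100, 8 hours each.
r = empirical_reliability(total_runs=100, failed_runs=4)   # 0.96
lam = failure_rate_from_reliability(r, hours=8.0)          # ~0.0051 failures/hour
print(f"Reliability: {r:.2f}, implied failure rate: {lam:.4f} per hour")
```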

Q2. Discuss the Objective & Principles Behind Software Testing.

A. Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.[1] Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects).
Software testing can be stated as the process of validating and verifying that a software program/application/product: meets the requirements that guided its design and development; works as expected; and can be implemented with the same characteristics.

Software testing, depending on the testing method employed, can be implemented at any time in the development process. However, most of the test effort occurs after the requirements have been defined and the coding process has been completed. As such, the methodology of the test is governed by the software development methodology adopted. Different software development models will focus the test effort at different points in the development process. Newer development models, such as Agile, often employ test-driven development and place an increased portion of the testing in the hands of the developer, before it reaches a formal team of testers. In a more traditional model, most of the test execution occurs after the requirements have been defined and the coding process has been completed.

The following can be described as testing principles:
1. All tests should be traceable to customer requirements.
2. Tests should be planned long before testing begins.
3. The Pareto principle applies to testing.
4. Testing should begin "in the small" and progress toward testing "in the large".
5. Exhaustive testing is not possible.
6. To be most effective, testing should be conducted by an independent third party.
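As a small illustration of the first principle, traceability, here is a hedged Python sketch in which each test names the customer requirement it verifies. The requirement ID REQ-007 and the compute_invoice_total function are invented for the example.

```python
# Hypothetical illustration of traceable tests: each test names the customer
# requirement it verifies. REQ-007 is an invented identifier, and
# compute_invoice_total is an invented function under test.

def compute_invoice_total(items, tax_rate: float) -> float:
    """Sum line-item prices and apply a flat tax rate."""
    subtotal = sum(price for _, price in items)
    return round(subtotal * (1 + tax_rate), 2)

def test_req_007_invoice_total_includes_tax():
    """REQ-007: invoice totals shall include sales tax at the configured rate."""
    items = [("widget", 10.00), ("gadget", 5.00)]
    assert compute_invoice_total(items, tax_rate=0.10) == 16.50
```

Run under pytest, a failing assertion here points directly back to the requirement named in the test, which is what traceability buys in practice.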

Q3. Discuss the CMM 5 Levels for Software Process.


A. Capability Maturity Model Integration (CMMI) is a process improvement approach whose goal is to help organizations improve their performance. CMMI can be used to guide process improvement across a project, a division, or an entire organization. The currently supported version is CMMI Version 1.3. CMMI in software engineering and organizational development is a process improvement approach that provides organizations with the essential elements of effective process improvement. CMMI is registered in the U.S. Patent and Trademark Office by Carnegie Mellon University. According to the Software Engineering Institute (SEI, 2008), CMMI helps "integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes."

CMMI currently addresses three areas of interest:
1. Product and service development: CMMI for Development (CMMI-DEV)
2. Service establishment, management, and delivery: CMMI for Services (CMMI-SVC)
3. Product and service acquisition: CMMI for Acquisition (CMMI-ACQ)

CMMI was developed by a group of experts from industry, government, and the Software Engineering Institute (SEI) at Carnegie Mellon University. CMMI models provide guidance for developing or improving processes that meet the business goals of an organization. A CMMI model may also be used as a framework for appraising the process maturity of the organization.[1] CMMI originated in software engineering but has been highly generalized over the years to embrace other areas of interest, such as the development of hardware products, the delivery of all kinds of services, and the acquisition of products and services. The word "software" does not appear in the definitions of CMMI. This generalization of improvement concepts makes CMMI extremely abstract. It is not as specific to software engineering as its predecessor, the Software CMM.

The staged representation of the model defines the five maturity levels the question asks about:
1. Initial: the process is ad hoc and occasionally chaotic; success depends on individual effort.
2. Managed (called Repeatable in the Software CMM): basic project management processes are established to track cost, schedule, and functionality, so earlier successes can be repeated.
3. Defined: processes are documented, standardized, and integrated into a standard process for the organization.
4. Quantitatively Managed (called Managed in the Software CMM): the process and products are quantitatively measured and controlled.
5. Optimizing: continuous process improvement is enabled by quantitative feedback and by piloting innovative ideas and technologies.

Q4. Discuss the Waterfall model for Software Development.


A. The waterfall model is a sequential design process, often used in software development, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design, Construction, Testing, Production/Implementation and Maintenance.

The waterfall development model originates in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development. The first known presentation describing the use of similar phases in software engineering was given by Herbert D. Benington at the Symposium on Advanced Programming Methods for Digital Computers on 29 June 1956.[1] This presentation was about the development of software for SAGE. In 1983 the paper was republished[2] with a foreword by Benington pointing out that the process was not in fact performed strictly top-down, but depended on a prototype.

The first formal description of the waterfall model is often cited as a 1970 article by Winston W. Royce,[3] though Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model.[4] This, in fact, is how the term is generally used in writing about software development: to describe a critical view of a commonly used software practice.[5]

In Royce's original waterfall model, the following phases are followed in order:
1. Requirements specification
2. Design
3. Construction (also known as implementation or coding)
4. Integration
5. Testing and debugging (also known as validation)
6. Installation
7. Maintenance

Thus the waterfall model maintains that one should move to a phase only when its preceding phase is completed and perfected. However, there are various modified waterfall models (including Royce's final model) that may include slight or major variations on this process.

Q5. Explain the Advantages of Prototype Model & Spiral Model in Contrast to the Waterfall model.

A. Advantages of prototyping:
1. May provide the proof of concept necessary to attract funding
2. Early visibility of the prototype gives users an idea of what the final system will look like
3. Encourages active participation between users and producer
4. Enables higher output for the user
5. Cost effective (development costs are reduced)
6. Increases system development speed
7. Helps to identify any problems with the efficacy of earlier design, requirements analysis and coding activities
8. Helps to refine the potential risks associated with the delivery of the system being developed
9. Various aspects can be tested and quicker feedback can be obtained from the user
10. Helps to deliver a quality product more easily
11. User interaction is available during the development cycle of the prototype

The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine the advantages of top-down and bottom-up concepts. Also known as the spiral lifecycle model (or spiral development), it is a systems development method (SDM) used in information technology (IT). This model combines features of the prototyping and waterfall models. The spiral model is intended for large, expensive and complicated projects. It should not be confused with the helical model of modern systems architecture, which uses a dynamic programming approach to optimize the system's architecture before design decisions are made by coders that would cause problems.

The spiral model combines the idea of iterative development (prototyping) with the systematic, controlled aspects of the waterfall model. It allows for incremental releases of the product, or incremental refinement through each trip around the spiral. The spiral model also explicitly includes risk management within software development. Identifying major risks, both technical and managerial, and determining how to lessen them helps keep the software development process under control.[3]

The spiral model is based on continuous refinement of key products for requirements definition and analysis, system and software design, and implementation (the code). At each iteration around the cycle, the products are extensions of an earlier product. This model uses many of the same phases as the waterfall model, in essentially the same order, separated by planning, risk assessment, and the building of prototypes and simulations. Documents are produced when they are required, and their content reflects the information necessary at that point in the process. All documents are not created at the beginning of the process, nor all at the end (hopefully). Like the product they define, the documents are works in progress. The idea is to have a continuous stream of products produced and available for user review.

The spiral lifecycle model allows elements of the product to be added when they become available or known. This assures that there is no conflict with previous requirements and design. This method is consistent with approaches that have multiple software builds and releases and allows an orderly transition to a maintenance activity.

Another positive aspect is that the spiral model forces early user involvement in the system development effort. For projects with heavy user interfacing, such as user application programs or instrument interface applications, such involvement is helpful.

Starting at the center, each turn around the spiral goes through several task regions:
1. Determine the objectives, alternatives, and constraints of the new iteration.
2. Evaluate alternatives and identify and resolve risk issues.
3. Develop and verify the product for this iteration.
4. Plan the next iteration.

Note that the requirements activity takes place in multiple sections and in multiple iterations, just as planning and risk analysis occur in multiple places. Final design, implementation, integration, and test occur in iteration 4. The spiral can be repeated multiple times for multiple builds. Using this method of development, some functionality can be delivered to the user faster than with the waterfall method. The spiral method also helps manage risk and uncertainty by allowing multiple decision points and by explicitly admitting that not everything can be known before the subsequent activity starts.
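One way to read the four task regions is as the body of a loop. The following Python sketch is purely illustrative; every function name and the risk-severity check are invented, and a real spiral process involves far richer activities at each step.

```python
# Illustrative sketch of the spiral's control flow; every name here is
# invented for the example, not part of any real process tool.

def spiral_development(max_iterations: int = 4) -> None:
    product = None
    for iteration in range(1, max_iterations + 1):
        objectives = determine_objectives(iteration)         # task region 1
        risks = evaluate_alternatives_and_risks(objectives)  # task region 2
        if any(r["severity"] == "unacceptable" for r in risks):
            print(f"Iteration {iteration}: stopping, risk cannot be lessened")
            break                                            # explicit decision point
        product = develop_and_verify(product, objectives)    # task region 3
        plan_next_iteration(iteration)                       # task region 4

# Stub implementations so the sketch runs end to end.
def determine_objectives(iteration):
    return f"objectives for iteration {iteration}"

def evaluate_alternatives_and_risks(objectives):
    return [{"name": "schedule slip", "severity": "acceptable"}]

def develop_and_verify(previous_product, objectives):
    return f"product extending ({previous_product}) to meet {objectives}"

def plan_next_iteration(iteration):
    print(f"Planned iteration {iteration + 1}")

spiral_development()
```

The point of the loop shape is that each product is an extension of the previous one, and risk is re-evaluated at every decision point rather than once up front.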

Q6. Explain the COCOMO Model & Software Estimation Technique.


A. The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry W. Boehm. The model uses a basic regression formula with parameters that are derived from historical project data and current project characteristics.

COCOMO was first published in Boehm's 1981 book Software Engineering Economics[1] as a model for estimating effort, cost, and schedule for software projects. It drew on a study of 63 projects at TRW Aerospace, where Boehm was Director of Software Research and Technology. The study examined projects ranging in size from 2,000 to 100,000 lines of code, and programming languages ranging from assembly to PL/I. These projects were based on the waterfall model of software development, which was the prevalent software development process in 1981. References to this model typically call it COCOMO 81. In 1995 COCOMO II was developed; it was finally published in 2000 in the book Software Cost Estimation with COCOMO II.[2] COCOMO II is the successor of COCOMO 81 and is better suited for estimating modern software development projects. It provides more support for modern software development processes and an updated project database. The need for the new model arose as software development technology moved from mainframe and overnight batch processing to desktop development, code reusability and the use of off-the-shelf software components. The discussion below refers to COCOMO 81.

COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first level, Basic COCOMO, is good for quick, early, rough order-of-magnitude estimates of software costs, but its accuracy is limited because it has no factors to account for differences in project attributes (cost drivers). Intermediate COCOMO takes these cost drivers into account, and Detailed COCOMO additionally accounts for the influence of individual project phases. A worked sketch of the Basic COCOMO equations appears after the list below.

The ability to accurately estimate the time and/or cost of bringing a project to a successful conclusion is a serious problem for software engineers. The use of a repeatable, clearly defined and well-understood software development process has, in recent years, shown itself to be the most effective method of gaining useful historical data that can be used for statistical estimation. In particular, the act of sampling more frequently, coupled with the loosening of constraints between parts of a project, has allowed more accurate estimation and more rapid development times.

Methods. Popular methods for estimation in software engineering include:
- Analysis Effort method
- COCOMO (this model is obsolete and should only be used for demonstration purposes)
- COCOMO II
- COSYSMO
- Evidence-based Scheduling - refinement of typical agile estimating techniques using minimal measurement and total time accounting
- Function Point Analysis
- Parametric Estimating
- PRICE Systems - founders of commercial parametric models that estimate the scope, cost, effort and schedule for software projects
- Proxy-based estimating (PROBE), from the Personal Software Process
- Program Evaluation and Review Technique (PERT)
- SEER-SEM - parametric estimation of effort, schedule, cost and risk; minimum time and staffing concepts based on Brooks's law
- SLIM
- The Planning Game (from Extreme Programming)
- Weighted Micro Function Points (WMFP)
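To make Basic COCOMO concrete, here is a minimal Python sketch using Boehm's published COCOMO 81 coefficients for the three project modes (organic, semi-detached, embedded); the function name and the example project size are illustrative.

```python
# Minimal sketch of Basic COCOMO 81. Effort E = a * (KLOC ** b) person-months,
# development time D = c * (E ** d) months; a, b, c, d are Boehm's published
# coefficients for the three project modes. Function names are illustrative.

COEFFICIENTS = {
    # mode: (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    """Return (effort in person-months, schedule in months, avg staffing)."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule, effort / schedule

# Example: a 32 KLOC organic-mode project.
effort, months, staff = basic_cocomo(32, "organic")
print(f"Effort: {effort:.1f} person-months, "
      f"schedule: {months:.1f} months, avg staff: {staff:.1f}")
```

For a 32 KLOC organic project this yields roughly 91 person-months over about a 14-month schedule, which illustrates the model's key property: effort grows slightly faster than linearly with size, since the exponent b exceeds 1.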

Master of Business Administration - MBA Semester III
MI0033 Software Engineering - 4 Credits
Assignment - Set-2 (60 Marks)

Answer the Following (6 x 10 = 60 Marks)
Q1. Write a note on myths of Software.
Q2. Explain Version Control & Change Control.
Q3. Discuss the SCM Process.
Q4. Explain i. Software doesn't Wear Out. ii. Software is engineered & not manufactured.
Q5. Explain the Different types of Software Measurement Techniques.
Q6. Write a Note on Spiral Model.

Q1. Write a note on myths of Software.

A. A myth is defined as a "widely held but false notion" by the Oxford dictionary, so, as in other fields, the software arena also has some myths to demystify. Pressman insists that "software myths, beliefs about software and the process used to build it, can be traced to the earliest days of computing. Myths have a number of attributes that have made them insidious." Software myths prevail, and though they are not clearly visible, they have the potential to harm all the parties involved in the software development process, mainly the developer team. Tom DeMarco observes, "In the absence of meaningful standards, a new industry like software comes to depend instead on folklore." This points out that the software industry gathered pace only a few decades ago, so it has not matured to a formidable level and there are no strict standards in software development. There does not exist one best method of software development, which ultimately gives rise to the ubiquitous software myths. Primarily, there are three types of software myths:
1. Management myths
2. Customer myths
3. Practitioner/developer myths
Before defining these three types one by one, let us scrutinize why these myths occur in the first place. Much of the complexity lies in requirements analysis, mainly between the developer team and the clients.

Q2. Explain Version Control & Change Control.

A. Version control. A version control system (or revision control system) is a combination of technologies and practices for tracking and controlling changes to a project's files, in particular to source code, documentation, and web pages. If you have never used version control before, the first thing you should do is find someone who has and get them to join your project. These days, everyone will expect at least your project's source code to be under version control, and will probably not take the project seriously if it doesn't use version control with at least minimal competence.

The reason version control is so universal is that it helps with virtually every aspect of running a project: inter-developer communication, release management, bug management, code stability, experimental development efforts, and attribution and authorization of changes by particular developers. The version control system provides a central coordinating force among all of these areas. The core of version control is change management: identifying each discrete change made to the project's files, annotating each change with metadata such as the change's date and author, and then replaying these facts to whoever asks, in whatever way they ask. It is a communications mechanism in which a change is the basic unit of information. Version control is all-encompassing, so this answer concentrates on choosing and setting up a version control system in a way that will foster cooperative development down the road.

Q3. Discuss the SCM Process.

A. In software engineering, software configuration management (SCM) is the task of tracking and controlling changes in the software. Configuration management practices include revision control and the establishment of baselines. SCM concerns itself with answering the question "Somebody did something; how can one reproduce it?" Often the problem involves not reproducing "it" identically, but making controlled, incremental changes. Answering the question thus becomes a matter of comparing different results and analyzing their differences. Traditional configuration management typically focused on the controlled creation of relatively simple products. Now, implementers of SCM face the challenge of dealing with relatively minor increments under their own control, in the context of the complex system being developed. According to another simple definition: software configuration management is how you control the evolution of a software project. The goals of SCM are generally:

- Configuration identification - identifying configurations, configuration items and baselines.

- Configuration control - implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests that are sent against any baseline.
- Configuration status accounting - recording and reporting all the necessary information on the status of the development process.
- Configuration auditing - ensuring that configurations contain all their intended parts and are sound with respect to their specifying documents, including requirements, architectural specifications and user manuals.
- Build management - managing the process and tools used for builds.
- Process management - ensuring adherence to the organization's development process.
- Environment management - managing the software and hardware that host the system.
- Teamwork - facilitating team interactions related to the process.
- Defect tracking - making sure every defect has traceability back to the source.
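Change control and status accounting can be made concrete with a small sketch. The ChangeRequest class and change_control_board function below are invented for illustration; no real SCM tool's API is being depicted.

```python
from dataclasses import dataclass, field
from datetime import date

# Invented-for-illustration model of a change request flowing through a
# change control board; not drawn from any real SCM tool.

@dataclass
class ChangeRequest:
    request_id: str
    baseline: str            # which baseline the change is raised against
    author: str
    submitted: date
    description: str
    status: str = "open"     # open -> approved / rejected
    history: list = field(default_factory=list)  # status accounting trail

def change_control_board(cr: ChangeRequest, approve: bool) -> ChangeRequest:
    """Approve or reject a change request and record the decision."""
    cr.status = "approved" if approve else "rejected"
    cr.history.append((date.today(), cr.status))
    return cr

cr = ChangeRequest("CR-101", baseline="release-1.0", author="dev-a",
                   submitted=date(2011, 10, 3),
                   description="Fix overflow in report module")
change_control_board(cr, approve=True)
print(cr.request_id, cr.status, cr.history)
```

The history list is the status-accounting trail: it lets anyone reconstruct what was decided about a baseline and when, which is exactly the "how can one reproduce it?" question SCM exists to answer.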

Q4. Explain i. Software doesn't Wear Out. ii. Software is engineered & not manufactured.

A. i. Software doesn't wear out. Hardware can wear out, whereas software cannot. For hardware there is a "bathtub" curve, which relates failure rate to time. On this curve there is a relatively high failure rate at the start of a component's life. After some period of time, defects get corrected and the failure rate drops to a steady state for some period. The failure rate then rises again due to the effects of rain, dust, temperature extremes and many other environmental effects: the hardware begins to wear out.

Software, however, is not subject to the environmental effects that cause hardware to fail. The failure rate of software follows an "idealized curve": the failure rate in the initial state is very high, but as errors in the software are corrected, the curve flattens. The clear implication is that software can "deteriorate" but it does not "wear out". Deterioration is explained by the actual curve: as soon as one error is corrected, the curve encounters another spike, that is, another error introduced by the change itself. After enough changes, the steady state no longer remains steady and the failure rate begins to rise. If hardware fails it can be replaced with a spare part, but there are no spare parts for software.

ii. Software is engineered and not manufactured. Although some similarities exist between software development and hardware manufacture, the two activities are fundamentally different. In both, high quality is achieved through good design, but the manufacturing phase for hardware can introduce quality problems that are nonexistent (or easily corrected) for software. Both activities depend on people, but the relationship between people applied and work accomplished is entirely different, so software projects cannot be managed as if they were manufacturing projects.
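The contrast between the two curves can be sketched numerically. The model below is purely illustrative: the decay constants, steady-state rates, and wear-out term are invented to give the curves their characteristic shapes, not taken from measured data.

```python
import math

# Purely illustrative failure-rate curves; constants are invented.
# Hardware "bathtub": high infant mortality, flat useful life, rising
# wear-out. Software "idealized" curve: high early rate that decays and
# then stays flat, because software has no wear-out term at all.

def hardware_failure_rate(t: float) -> float:
    infant_mortality = 5.0 * math.exp(-t / 2.0)   # early defects corrected
    constant = 0.5                                 # steady-state useful life
    wear_out = 0.01 * max(0.0, t - 30.0) ** 2      # environmental wear
    return infant_mortality + constant + wear_out

def software_failure_rate(t: float) -> float:
    early_errors = 5.0 * math.exp(-t / 2.0)       # errors found and fixed
    residual = 0.2                                 # flattens, never rises
    return early_errors + residual

for t in (0, 5, 15, 30, 45):
    print(f"t={t:2d}  hardware={hardware_failure_rate(t):6.2f}  "
          f"software={software_failure_rate(t):5.2f}")
```

Printing a few sample points shows the hardware rate climbing again after t = 30 while the software rate stays flat, which is the whole argument of "deteriorates but does not wear out" in miniature.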
