Section-A
Hardware:
Physical parts of the computer are called hardware.
You can touch, see and feel hardware.
Hardware is constructed using physical materials or components.
A computer is hardware, which operates under the control of software.
If hardware is damaged, it is replaced with a new one.
Hardware is not affected by computer viruses.
Hardware cannot be transferred from one place to another electronically
through a network.
User cannot make new duplicate copies of the hardware.
Software:
A set of instructions given to the computer is called software.
You cannot touch and feel software.
Software is developed by writing instructions in programming language.
The operations of computer are controlled through software.
If software is damaged or corrupted, its backup copy can be reinstalled.
Software is affected by computer viruses.
Software can be transferred from one place to another electronically through
a network.
User can make many new duplicate copies of the software.
1. FOD: The basic abstractions, which are given to the user, are real world functions.
OOD: The basic abstractions are not real-world functions but data abstractions in
which the real-world entities are represented.
2. FOD: Functions are grouped together, by which a higher-level function is
obtained; an example of this technique is SA/SD.
OOD: Functions are grouped together on the basis of the data they operate on, since
classes are associated with their methods.
3. FOD: In this approach the state information is often represented in a centralized
shared memory.
OOD: In this approach the state information is not represented in a centralized
memory but is implemented or distributed among the objects of the system.
4. FOD: This approach is mainly used for computation-intensive applications.
OOD: This approach is mainly used for evolving systems that mimic a business
process or business case.
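The contrast between centralized shared state (FOD) and state distributed among objects (OOD) can be sketched with a toy bank-account example; the account structure and class below are illustrative, not taken from any particular system:

```python
# Function-oriented style: state lives in a centralized shared
# structure, and free functions operate on it.
account = {"balance": 0}  # centralized shared state

def deposit(acct, amount):
    acct["balance"] += amount

def withdraw(acct, amount):
    acct["balance"] -= amount

# Object-oriented style: state is distributed among objects, and
# functions are grouped with the data they operate on as methods.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        self.balance -= amount
```

In the first style any part of the program can reach into the shared `account`; in the second, each `Account` object carries its own state and exposes it only through its methods.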
PTU Software Engineering Paper May 2013
Clear and Simple : A good user interface provides a clear understanding of what is
happening behind the scenes or provides visibility to the functioning of the system.
Creative but familiar: When the users are familiar with something and know how it
behaves, navigation becomes easier.
Intuitive and consistent: The controls and information must be laid out in an
intuitive and consistent way for an interface to be easy to use and navigate.
Responsive: If the interface fails to keep up with the demands of the user, this will
significantly diminish their experience and can result in frustration, particularly when
trying to perform basic tasks.
Maintainable: A UI should have the capacity for updates and changes to be integrated
without causing conflicts.
Verification vs. Validation
1. The objective of Verification is to make sure that the product being developed
is as per the requirements and design specifications. The objective of Validation
is to make sure that the product actually meets the user's requirements, and to
check whether the specifications were correct in the first place.
2. The Verification process checks whether the outputs are according to the inputs
or not. The Validation process checks whether the software is accepted by the
user or not.
3. Verification is carried out before Validation. Validation is carried out just
after Verification.
4. The cost of errors caught in Verification is less than that of errors found in
Validation; conversely, the cost of errors caught in Validation is more than that
of errors found in Verification.
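The distinction can be illustrated with a small, made-up pricing function (the function and the capping requirement below are hypothetical):

```python
# Hypothetical discount function used to illustrate the distinction.
def discounted_price(price, percent):
    return price * (1 - percent / 100)

# Verification: does the output match the design specification?
# The spec says a 20% discount on 100 yields 80.
assert discounted_price(100, 20) == 80

# Validation: does the product meet the user's actual need?
# Suppose the user wanted discounts capped at 50%, but the spec
# omitted this. Verification would pass, yet validation against the
# user's requirement would reveal the gap; the fix below adds the cap.
def capped_discounted_price(price, percent):
    return discounted_price(price, min(percent, 50))

assert capped_discounted_price(100, 80) == 50
```

The first assertion checks the product against its specification (verification); the second checks it against what the user actually needed (validation), which is exactly where errors are costlier to find.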
Reverse engineering is taking apart an object to see how it works in order to duplicate
or enhance the object. The practice, taken from older industries, is now frequently
used on computer hardware and software. Software reverse engineering involves
reversing a program's machine code (the string of 0s and 1s that are sent to the logic
processor) back into the source code that it was written in, using program language
statements.
Fault: A condition that causes the software to fail to perform its required
function.
Error: The difference between the actual output and the expected output.
Failure: The inability of a system or component to perform a required function
according to its specification.
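The three terms can be illustrated with a deliberately faulty function (a made-up example):

```python
# Fault: a defect in the code -- '+' is used where '-' was intended.
def subtract(a, b):
    return a + b  # the fault

expected = 2             # specified result of 5 - 3
actual = subtract(5, 3)  # actual result is 8
error = actual - expected  # Error: actual output minus expected output
# A nonzero error means the function did not perform its required
# function according to its specification -- i.e. a failure occurred.
print(error)  # prints 6
```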
Section-B
2. Discuss in detail the life journey of a software product.
SDLC, the Software Development Life Cycle, is a process used by the software industry to
design, develop and test high-quality software. The SDLC aims to produce high-quality
software that meets or exceeds customer expectations and reaches completion within time
and cost estimates.
SDLC is the acronym of Software Development Life Cycle.
It is also called as Software development process.
The software development life cycle (SDLC) is a framework defining tasks performed at each step in
the software development process.
ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to be the
standard that defines all the tasks required for developing and maintaining software.
A typical SDLC consists of the following stages.
Stage 1: Requirement Analysis
Requirement analysis is the most important and fundamental stage in SDLC. It is performed
by the senior members of the team with inputs from the customer, the sales department,
market surveys and domain experts in the industry. This information is then used to plan the
basic project approach and to conduct product feasibility study in the economical,
operational, and technical areas.
Stage 2: Defining Requirements
Once the requirement analysis is done the next step is to clearly define and document the
product requirements and get them approved by the customer or the market analysts. This
is done through the SRS (Software Requirement Specification) document, which consists of
all the product requirements to be designed and developed during the project life cycle.
Stage 3: Designing the product architecture
SRS is the reference for product architects to come out with the best architecture for the
product to be developed. Based on the requirements specified in SRS, usually more than one
design approach for the product architecture is proposed and documented in a DDS - Design
Document Specification.
Stage 4: Building or Developing the Product
In this stage of SDLC the actual development starts and the product is built. The
programming code is generated as per DDS during this stage. If the design is performed in a
detailed and organized manner, code generation can be accomplished without much hassle.
Stage 5: Testing the Product
This stage is usually a subset of all the stages as in the modern SDLC models, the testing
activities are mostly involved in all the stages of SDLC. However, this stage refers to the
testing-only stage of the product, where product defects are reported, tracked, fixed and
retested until the product reaches the quality standards defined in the SRS.
Stage 6: Deployment in the Market and Maintenance
Once the product is tested and ready to be deployed it is released formally in the
appropriate market. Sometimes product deployment happens in stages as per the
organization's business strategy. The product may first be released in a limited segment and
tested in the real business environment (UAT- User acceptance testing).
Functional View: This involves data flow diagrams, which define the work that has
been done and the flow of data between things done, thereby providing the primary
structure of a solution.
Data View: This comprises the entity relationship diagram and is concerned with what
exists outside the system that is being monitored.
Dynamic View: This includes state transition diagrams and defines when things
happen and the conditions under which they may happen.
For effective project planning, some principles are followed.
Ques 5. What are size metrics? How is the function point metric advantages over the LOC
metric? Explain.
Ans 5. Size‐oriented software metrics are derived by normalizing quality and/or productivity
measures by considering the size of the software that has been produced.
•A set of simple size‐oriented metrics can be developed for each project:
• Errors per KLOC (thousand lines of code).
• Defects per KLOC.
• $ per LOC.
• Pages of documentation per KLOC.
Size‐oriented metrics are not universally accepted as the best way to measure the process of
software development.
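The simple size-oriented metrics above can be computed directly from project data; the numbers below are made up purely for illustration:

```python
# Illustrative project data (made-up numbers).
loc = 12_500      # lines of code delivered
errors = 110      # errors found before release
defects = 18      # defects reported after release
cost = 168_000    # total cost in dollars
doc_pages = 365   # pages of documentation

kloc = loc / 1000  # size in thousands of lines of code

print(f"Errors per KLOC: {errors / kloc:.2f}")
print(f"Defects per KLOC: {defects / kloc:.2f}")
print(f"$ per LOC: {cost / loc:.2f}")
print(f"Pages of documentation per KLOC: {doc_pages / kloc:.2f}")
```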
Drawbacks of LOC
i. It is language dependent: the same functionality requires different numbers of
lines in different languages.
ii. It penalizes well-designed, shorter programs.
iii. An accurate count is available only late in a project, so early estimates
must be guessed.
Advantages of FP
i. It is not restricted to code
ii. Language independent
iii. The necessary data is available early in a project. We need only a detailed
specification.
iv. More accurate than estimated LOC.
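A rough sketch of the function point computation, using the average complexity weights from Albrecht's method; the component counts and the value-adjustment sum below are made-up inputs:

```python
# Average complexity weights for the five function point components.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def function_points(counts, vaf_sum):
    """counts: dict with the five component counts;
    vaf_sum: sum of the 14 general system characteristic
    ratings, each rated 0-5."""
    ufp = sum(counts[k] * WEIGHTS[k] for k in WEIGHTS)  # unadjusted FP
    return ufp * (0.65 + 0.01 * vaf_sum)                # adjusted FP

# Made-up counts from a hypothetical specification.
counts = {"external_inputs": 10, "external_outputs": 8,
          "external_inquiries": 6, "internal_files": 4,
          "external_interfaces": 2}
fp = function_points(counts, vaf_sum=42)
```

Note that every input here comes from the specification, not from code, which is exactly why FP is available earlier in the project than a LOC count.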
We must study the business domain, user requirements, business priorities, and technology
constraints to be able to choose the right SDLC against our selection criteria.
Is the SDLC appropriate for the size of our team and their skills?
Is the SDLC appropriate with the selected technology we use for implementing the
solution?
Is the SDLC appropriate for client and stakeholder needs and priorities?
Is the SDLC appropriate for the geographical situation (co-located or geographically
dispersed)?
Is the SDLC appropriate for the size and complexity of our software?
Is the SDLC appropriate for the type of projects we do?
Is the SDLC appropriate for our engineering capability?
Section-C
Ques.7 What is a DFD? Discuss various levels of DFD. Explain with the help of
some examples.
Ans. A data flow diagram (DFD) is a graphical representation of the "flow" of data through
an information system, modelling its process aspects. A DFD is often used as a preliminary
step to create an overview of the system without going into great detail, which can later be
elaborated. DFDs can also be used for the visualization of data processing (structured
design).
A DFD shows what kind of information will be input to and output from the system, how the
data will advance through the system, and where the data will be stored. Unlike a
flowchart, it does not show information about the timing of processes or whether
processes operate in sequence or in parallel.
Types of DFD
Data Flow Diagrams are either Logical or Physical.
Logical DFD - This type of DFD concentrates on the system process, and flow of
data in the system. For example, in a Banking software system, how data is moved
between different entities.
Physical DFD - This type of DFD shows how the data flow is actually implemented
in the system. It is more specific and close to the implementation.
DFD Components
DFD can represent Source, destination, storage and flow of data using the following set of
components -
Entities - Entities are source and destination of information data. Entities are
represented by a rectangle with their respective names.
Process - Activities and actions taken on the data are represented by circles or
round-edged rectangles.
Data Storage - There are two variants of data storage - it can either be represented
as a rectangle with both smaller sides missing, or as an open-sided rectangle with
only one side missing.
Data Flow - Movement of data is shown by pointed arrows. Data movement is shown
from the base of arrow as its source towards head of the arrow as destination.
Levels of DFD
Level 0 - Highest abstraction level DFD is known as Level 0 DFD, which depicts the
entire information system as one diagram concealing all the underlying details. Level
0 DFDs are also known as context level DFDs.
Level 1 - The Level 0 DFD is broken down into more specific, Level 1 DFD. Level 1
DFD depicts basic modules in the system and flow of data among various modules.
Level 1 DFD also mentions basic processes and sources of information.
Level 2 - At this level, DFD shows how data flows inside the modules mentioned in
Level 1.
Higher-level DFDs can be transformed into more specific lower-level DFDs with a
deeper level of understanding, until the desired level of specification is achieved.
The Food Order System Data Flow Diagram example contains three processes, four external
entities and two data stores.
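The Food Order System example can be captured as data; the entity, process, and store names below are illustrative, loosely following the example described:

```python
# A toy representation of the DFD components (names are illustrative).
entities = {"Customer", "Kitchen", "Manager", "Supplier"}
processes = {"Order Food", "Generate Reports", "Order Inventory"}
data_stores = {"Orders", "Inventory"}

# Data flows as (source, data, destination) triples -- each arrow
# in the diagram maps to one triple.
flows = [
    ("Customer", "food order", "Order Food"),
    ("Order Food", "order details", "Orders"),
    ("Order Food", "ticket", "Kitchen"),
    ("Orders", "order records", "Generate Reports"),
    ("Generate Reports", "reports", "Manager"),
    ("Order Inventory", "purchase order", "Supplier"),
    ("Order Inventory", "inventory details", "Inventory"),
]
```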
Ques.8 What is the need for software maintenance? How is software maintained in a
client-server architecture environment?
Ans. Software maintenance is a widely accepted part of the SDLC nowadays. It stands for
all the modifications and updates done after the delivery of a software product. There are a
number of reasons why modifications are required; some of them are briefly mentioned
below:
Market Conditions - Policies that change over time, such as taxation, and newly
introduced constraints, such as how to maintain bookkeeping, may trigger the need
for modification.
Client Requirements - Over the time, customer may ask for new features or
functions in the software.
Organization Changes - If there is any business-level change at the client end, such as
reduction of organization strength, acquiring another company, or the organization
venturing into a new business, the need to modify the original software may arise.
Types of maintenance
In a software lifetime, the type of maintenance may vary based on its nature. It may be
just a routine maintenance task, such as fixing a bug discovered by some user, or it may
be a large event in itself based on maintenance size or nature. Following are some types
of maintenance based on their characteristics:
Corrective Maintenance - modifications done in order to correct or fix problems
discovered by users or concluded from user error reports.
Adaptive Maintenance - modifications applied to keep the software product up to
date with the ever-changing technology and environment in which it operates.
Perfective Maintenance - modifications done to keep the software usable over a long
period of time, such as new features and new user requirements.
Preventive Maintenance - modifications that aim to prevent future problems, attending
to issues that are not significant now but may cause serious failures in the future.
Ques.9 Write short notes on the following:
a. Use-Case diagrams.
b. Software metrics.
c. Data Dictionary.
d. Feasibility Study.
Ans.
A. Use-Case diagrams:
Use case diagrams are considered for high level requirement analysis of a system. So when
the requirements of a system are analyzed the functionalities are captured in use cases.
So we can say that use cases are nothing but the system functionalities written in an
organized manner. Now the second things which are relevant to the use cases are the actors.
Actors can be defined as something that interacts with the system.
The actors can be human user, some internal applications or may be some external
applications. So, in brief, when we are planning to draw a use case diagram we should
have the following items identified:
Functionalities to be represented as use cases
Actors
Relationships among the use cases and actors
Use case diagrams are drawn to capture the functional requirements of a system. So after
identifying the above items we have to follow the following guidelines to draw an efficient
use case diagram.
The name of a use case is very important. So the name should be chosen in such a
way so that it can identify the functionalities performed.
Do not try to include all types of relationships. Because the main purpose of the
diagram is to identify requirements.
The following is a sample use case diagram representing the order management system. So
if we look into the diagram then we will find three use cases (Order, SpecialOrder and
NormalOrder) and one actor which is customer.
The SpecialOrder and NormalOrder use cases are extended from Order use case. So they
have extends relationship. Another important point is to identify the system boundary which
is shown in the picture. The actor Customer lies outside the system as it is an external user
of the system.
B. Software metrics:
A software metric is a quantitative measure of some property of a piece of software or of
its development process. Metrics are used to estimate effort and cost, track progress, and
assess quality; common examples include lines of code (LOC), function points, defect
density, and errors per KLOC.
C. Data Dictionary:
Data dictionary is the centralized collection of information about data. It stores meaning and
origin of data, its relationship with other data, data format for usage etc. Data dictionary has
rigorous definitions of all names in order to facilitate user and software designers.
Data dictionary is often referenced as meta-data (data about data) repository. It is created
along with DFD (Data Flow Diagram) model of software program and is expected to be
updated whenever DFD is changed or updated.
Data dictionary provides a way of documentation for the complete database system in one
place. Validation of DFD is carried out using data dictionary.
Contents
Data dictionary should contain information about the following
Data Flow
Data Structure
Data Elements
Data Stores
Data Processing
Data Flow is described by means of DFDs, as studied earlier, and represented in algebraic
form using the notation described below.
= Composed of
{} Repetition
() Optional
+ And
[/] Or
Example
Address = House No + (Street / Area) + City + State
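A minimal sketch of checking a record against this definition, reading () as "optional" and / as "a choice of one"; the field names below are assumptions, not from a real data dictionary:

```python
# Address = House No + (Street / Area) + City + State
def valid_address(record):
    # House No, City and State are joined with '+', so all are required.
    if not all(record.get(f) for f in ("house_no", "city", "state")):
        return False
    # (Street / Area): the part is optional, but Street and Area are
    # alternatives, so at most one of the two may be present.
    return not (record.get("street") and record.get("area"))

print(valid_address({"house_no": "221B", "street": "Baker St",
                     "city": "London", "state": "UK"}))  # True
print(valid_address({"street": "Baker St", "city": "London"}))  # False
```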
Data Elements
Data elements consist of Name and descriptions of Data and Control Items, Internal or
External data stores etc. with the following details:
Primary Name
Secondary Name (alias)
Use-case (how and where the data is used)
Content Description
Supplementary Information (preset values, constraints, etc.)
Files
o Internal to software.
o External to software but on the same machine.
o External to software and system, located on different machine.
Tables
o Naming convention
o Indexing property
Data Processing
There are two types of Data Processing:
Logical - the data processing as the user sees it.
Physical - the data processing as the software sees it.
D. Feasibility Study:
A feasibility study assesses whether a proposed software system is practical to build
within the given constraints. Typical concerns include economic feasibility (cost versus
benefit), technical feasibility (whether the required technology and skills are available),
operational feasibility (whether the system will actually be used effectively), and
integration, i.e. to determine whether the software can be integrated with other existing
software.