
FEBRUARY 2011

MCA 3RD SEMESTER ASSIGNMENT

HARVINDER SINGH

511025273

TABLE OF CONTENTS
I. MC0071 SOFTWARE ENGINEERING (BOOK ID: B0808 & B0809)
II. MC0072 COMPUTER GRAPHICS (BOOK ID: B0810)
III. MC0073 SYSTEM PROGRAMMING (BOOK ID: B0811)
IV. MC0074 STATISTICAL AND NUMERICAL METHODS USING C++ (BOOK ID: B0812)
V. MC0075 COMPUTER NETWORKS (BOOK ID: B0813 & B0814)

FACULTY: SHASHANK SIR

MC0071 SOFTWARE ENGINEERING


(Book ID: B0808 & B0809)
Assignment Set 1


1. DESCRIBE THE CONCURRENT DEVELOPMENT MODEL IN YOUR OWN WORDS.


The concurrent process model can be represented schematically as a series of major technical activities, tasks, and their associated states. For example, the engineering activity defined for the spiral model is accomplished by invoking the following tasks: prototyping and/or analysis modeling, requirements specification, and design. The figure below provides a schematic representation of one activity within the concurrent process model. The activity (analysis, in this case) may be in any one of the noted states at any given time. Similarly, other activities (e.g. design or customer communication) can be represented in an analogous manner. All activities exist concurrently but reside in different states.

For example, early in a project the customer communication activity has completed its first iteration and exists in the "awaiting changes" state. The analysis activity (which existed in the "none" state while initial customer communication was being completed) now makes a transition into the "under development" state. If the customer indicates that changes in requirements must be made, the analysis activity moves from the "under development" state into the "awaiting changes" state.

The concurrent process model defines a series of events that trigger transitions from state to state for each of the software engineering activities. For example, during the early stages of design an inconsistency in the analysis model may be uncovered. This generates the event "analysis model correction", which triggers the analysis activity from the "done" state into the "awaiting changes" state.

The concurrent process model is often used as the paradigm for the development of client/server applications. A client/server system is composed of a set of functional components. When applied to client/server, the concurrent process model defines activities in two dimensions: a system dimension and a component dimension. System-level issues are addressed using three activities: design, assembly, and use. The component dimension is addressed with two activities: design and realization. Concurrency is achieved in two ways: system and component activities occur simultaneously and can be modeled using the state-oriented approach, and a typical client/server application is implemented with many components, each of which can be designed and realized concurrently.

The concurrent process model is applicable to all types of software development and provides an accurate picture of the current state of a project. Rather than confining software engineering activities to a sequence of events, it defines a network of activities. Each activity on the network exists simultaneously with other activities. Events generated within a given activity, or at some other place in the activity network, trigger transitions among the states of an activity.

Component-based development model: This model incorporates the characteristics of the spiral model. It is evolutionary in nature, demanding an iterative approach to the creation of software. However, the component-based development model composes applications from prepackaged software components called classes.


Classes created in past software engineering projects are stored in a class library or repository. Once candidate classes are identified, the class library is searched to determine whether these classes already exist. If they do, they are extracted from the library and reused. If a candidate class does not reside in the library, it is engineered using object-oriented methods. The first iteration of the application to be built is then composed using classes extracted from the library and any new classes built to meet the unique needs of the application. Process flow then returns to the spiral and will ultimately re-enter the component assembly iteration during subsequent passes through the engineering activity. The component-based development model leads to software reuse, and reusability provides software engineers with a number of measurable benefits, although it is very much dependent on the robustness of the component library.

Figure: One element of the concurrent process model. The boxes represent the states of a software engineering activity: under development, awaiting changes, under revision, under review, baselined, done.
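To make the state/event mechanism concrete, here is a minimal C++ sketch of one activity's transitions. The state names follow the figure and the text above, but the event names and the transition function itself are illustrative assumptions, not part of the source material.

    #include <iostream>

    // States of a software engineering activity (from the figure above).
    enum class State { None, UnderDevelopment, AwaitingChanges,
                       UnderReview, UnderRevision, Baselined, Done };

    // Events that trigger transitions (names are illustrative assumptions).
    enum class Event { StartWork, CustomerChangeRequest, AnalysisModelCorrection };

    // One activity's transition function: events move it between states.
    State next(State s, Event e) {
        switch (e) {
            case Event::StartWork:
                return State::UnderDevelopment;
            case Event::CustomerChangeRequest:
                if (s == State::UnderDevelopment) return State::AwaitingChanges;
                break;
            case Event::AnalysisModelCorrection:
                if (s == State::Done) return State::AwaitingChanges;
                break;
        }
        return s; // no transition defined for this event: stay put
    }

    int main() {
        State analysis = State::None;
        analysis = next(analysis, Event::StartWork);             // under development
        analysis = next(analysis, Event::CustomerChangeRequest); // awaiting changes
        std::cout << (analysis == State::AwaitingChanges) << '\n'; // prints 1
    }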


2. EXPLAIN THE FOLLOWING CONCEPTS WITH RESPECT TO SOFTWARE RELIABILITY:


A) Software Reliability Metrics

Metrics which have been used for software reliability specification are shown in the table below. The choice of metric depends on the type of system to which it applies and on the requirements of the application domain. For some systems, it may be appropriate to use different reliability metrics for different sub-systems.

POFOD (Probability of Failure on Demand): A measure of the likelihood that the system will fail when a service request is made. For example, a POFOD of 0.001 means that 1 out of 1000 service requests may result in failure. Example systems: safety-critical and non-stop systems, such as hardware control systems.

ROCOF (Rate of Failure Occurrence): A measure of the frequency with which unexpected behaviour is likely to occur. For example, a ROCOF of 2/100 means that 2 failures are likely to occur in each 100 operational time units. This metric is sometimes called the failure intensity. Example systems: operating systems, transaction processing systems.

MTTF (Mean Time to Failure): A measure of the time between observed system failures, for example an MTTF of 500 time units. If the system is not being changed, it is the reciprocal of the ROCOF. Example systems: systems with long transactions, such as CAD systems; the MTTF must be greater than the transaction time.

AVAIL (Availability): A measure of how likely the system is to be available for use. For example, an availability of 0.998 means that in every 1000 time units the system is likely to be available for 998 of them. Example systems: continuously running systems, such as telephone switching systems.

Table: Reliability metrics

In some cases, system users are most concerned about how often the system will fail, perhaps because there is a significant cost in restarting the system. In those cases, a metric based on the rate of failure occurrence (ROCOF) or the mean time to failure should be used. In other cases, it is essential that the system should always meet a request for service, because there is some cost in failing to deliver the service, and the number of failures in some time period is less important; in those cases, a metric based on the probability of failure on demand (POFOD) should be used. Finally, users or system operators may be mostly concerned that the system is available when a request for service is made, since they will incur some loss if the system is unavailable. Availability (AVAIL), which takes into account repair or restart time, is then the most appropriate metric.
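To make the definitions concrete, the following minimal C++ sketch computes the four metrics from a set of made-up observations (all values here are invented for illustration). Note that the computed MTTF comes out as roughly the reciprocal of the ROCOF, as the text states for an unchanged system.

    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        // Hypothetical observations from one period of operation.
        const double demands         = 10000; // service requests made
        const double failedDemands   = 12;    // requests that ended in failure
        const double operationalTime = 5000;  // total operational time units
        const double downTime        = 14;    // time units spent unavailable
        // Times between successive observed failures (time units).
        std::vector<double> interFailureTimes{400, 350, 610, 520, 480};

        double pofod = failedDemands / demands;
        double rocof = interFailureTimes.size() / operationalTime;
        double mttf  = std::accumulate(interFailureTimes.begin(),
                                       interFailureTimes.end(), 0.0)
                       / interFailureTimes.size();
        double avail = (operationalTime - downTime) / operationalTime;

        std::cout << "POFOD = " << pofod << "\nROCOF = " << rocof
                  << "\nMTTF  = " << mttf  << "\nAVAIL = " << avail << '\n';
    }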

There are three kinds of measurement which can be made when assessing the reliability of a system:

1. The number of system failures given a number of system inputs. This is used to measure the POFOD.
2. The time (or number of transactions) between system failures. This is used to measure ROCOF and MTTF.
3. The elapsed repair or restart time when a system failure occurs. Given that the system must be continuously available, this is used to measure AVAIL.

Time is a factor in all of these reliability metrics. It is essential that appropriate time units are chosen if measurements are to be meaningful. Time units which may be used are calendar time, processor time, or some discrete unit such as number of transactions.

Software reliability specification

In many system requirements documents, reliability requirements are expressed in an informal, qualitative, untestable way. Ideally, the required level of reliability should be expressed quantitatively in the software requirement specification. Depending on the type of system, one or more of the metrics discussed in the previous section may be used for reliability specifications. Statistical testing techniques (discussed later) should be used to measure the system reliability. The software test plan should include an operational profile of the software to assess its reliability. The steps involved in establishing a reliability specification are as follows:

1. For each identified sub-system, identify the different types of system failure which may occur and analyze the consequences of these failures.
2. From the system failure analysis, partition failures into appropriate classes. A reasonable starting point is to use the failure types shown in the failure classification table below.
3. For each failure class identified, define the reliability requirement using the appropriate reliability metric. It is not necessary to use the same metric for different classes of failure. For example, where a failure requires some intervention to recover from it, the probability of that failure occurring on demand might be the most appropriate metric. Where automatic recovery is possible and the effect of the failure is some user inconvenience, ROCOF might be more appropriate.

The cost of developing and validating a reliability specification for a software system is very high.

Statistical testing

Statistical testing is a software testing process in which the objective is to measure the reliability of the software rather than to discover software faults. It uses different test data from defect testing, which is intended to find faults in the software.

Transient: occurs only with certain inputs.
Permanent: occurs with all inputs.
Recoverable: the system can recover without operator intervention.
Unrecoverable: operator intervention is needed to recover from the failure.
Non-corrupting: the failure does not corrupt system state or data.
Corrupting: the failure corrupts system state or data.

Table: Failure classification
The steps involved in statistical testing are:

1. Determine the operational profile of the software. The operational profile is the probable pattern of usage of the software. It can be determined by analyzing historical data to discover the different classes of input to the program and the probability of their occurrence.
2. Select or generate a set of test data corresponding to the operational profile.
3. Apply these test cases to the program, recording the amount of execution time between each observed system failure. It may not be appropriate to use raw execution time; as discussed in the previous section, the time units chosen should be appropriate for the reliability metric used.
4. After a statistically significant number of failures have been observed, the software reliability can be computed. This involves using the number of failures detected and the time between these failures to compute the required reliability metric.

This approach to reliability estimation is not easy to apply in practice. The difficulties which arise are due to:

- operational profile uncertainty;
- the high cost of operational profile generation;
- statistical uncertainty when high reliability is specified.
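As a rough sketch of steps 2 to 4, the loop below draws test cases, records the time between observed failures, and computes an MTTF estimate. runOneTestCase and its failure probability are invented stand-ins for executing real inputs drawn from an operational profile.

    #include <iostream>
    #include <random>
    #include <vector>

    // Hypothetical stand-in for running one test case drawn from the
    // operational profile; returns true if the run ends in a failure.
    bool runOneTestCase(std::mt19937& rng) {
        std::bernoulli_distribution failure(0.002); // assumed failure rate
        return failure(rng);
    }

    int main() {
        std::mt19937 rng(42);
        std::vector<double> interFailureTimes;
        double sinceLastFailure = 0.0;

        // Run until a statistically significant number of failures has
        // been observed (50 is an arbitrary choice for this sketch).
        while (interFailureTimes.size() < 50) {
            sinceLastFailure += 1.0; // one discrete time unit per test case
            if (runOneTestCase(rng)) {
                interFailureTimes.push_back(sinceLastFailure);
                sinceLastFailure = 0.0;
            }
        }

        double total = 0.0;
        for (double t : interFailureTimes) total += t;
        std::cout << "Estimated MTTF: "
                  << total / interFailureTimes.size() << " time units\n";
    }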

B) Programming for Reliability


There is a general requirement for more reliable systems in all application domains. Customers expect their software to operate without failures and to be available when it is required. Improved programming techniques, better programming languages and better quality management have led to very significant improvements in reliability for most software. However, for some systems, such as those which control unattended machinery, these normal techniques may not be enough to achieve the level of reliability required. In these cases, special programming techniques may be necessary. Some of these techniques are discussed in this chapter.

Reliability in a software system can be achieved using three strategies:

Fault avoidance: This is the most important strategy, applicable to all types of system. The design and implementation process should be organized with the objective of producing fault-free systems.

Fault tolerance: This strategy assumes that residual faults remain in the system. Facilities are provided in the software to allow operation to continue when these faults cause system failures.

Fault detection: Faults are detected before the software is put into operation. The software validation process uses static and dynamic methods to discover any faults which remain in a system after implementation.

Fault avoidance

A good software process should be oriented towards fault avoidance rather than fault detection and removal. It should have the objective of developing fault-free software. Fault-free software means software which conforms to its specification. Of course, there may be errors in the specification, or it may not reflect the real needs of the user, so fault-free software does not necessarily mean that the software will always behave as the user wants. Fault avoidance and the development of fault-free software rely on:

1. The availability of a precise system specification which is an unambiguous description of what must be implemented.
2. The adoption of an organizational quality philosophy in which quality is the driver of the software process. Programmers should expect to write bug-free programs.
3. The adoption of an approach to software design and implementation which is based on information hiding and encapsulation and which encourages the production of readable programs.
4. The use of a strongly typed programming language, so that possible errors are detected by the language compiler.
5. Restrictions on the use of programming constructs, such as pointers, which are inherently error-prone.


Achieving fault-free software is virtually impossible if low-level programming languages with limited type checking are used for program development.
Figure: The increasing cost of residual fault removal. As the number of residual errors falls from many to very few, the cost per error detected rises steeply.

We must be realistic and accept that human errors will always occur. Faults may remain in the software after development. Therefore, the development process must include a validation phase which checks the developed software for the presence of faults. This validation phase is usually very expensive. As faults are removed from a program, the cost of finding and removing the remaining faults tends to rise exponentially. As the software becomes more reliable, more and more testing is required to find fewer and fewer faults.

Structured programming and error avoidance

Structured programming is a term used to mean programming without go to statements, programming using only while loops and if statements as control constructs, and designing using a top-down approach. The adoption of structured programming was an important milestone in the development of software engineering because it was the first step away from an undisciplined approach to software development. The go to statement was an inherently error-prone programming construct. The disciplined use of control structures forces programmers to think carefully about their programs; hence they are less likely to make mistakes during development. Structured programming means programs can be read sequentially and are therefore easier to understand and inspect. However, avoiding unsafe control statements is only the first step in programming for reliability. Faults are less likely to be introduced into programs if the use of the following constructs is minimized:

1. Floating-point numbers: Floating-point numbers are inherently imprecise. They present a particular problem when they are compared, because representation imprecision may lead to invalid comparisons. Fixed-point numbers, where a number is represented to a given number of decimal places, are safer, as exact comparisons are possible.

2. Pointers: Pointers are low-level constructs which refer directly to areas of the machine memory. They are dangerous because errors in their use can be devastating and because they allow aliasing, which means the same entity may be referenced using different names. Aliasing makes programs harder to understand, so errors are more difficult to find. However, efficiency requirements mean that it is often impractical to avoid the use of pointers.

3. Dynamic memory allocation: Program memory is allocated at run-time rather than compile-time. The danger here is that the memory may not be de-allocated, so that the system eventually runs out of available memory. This can be a very subtle type of error to detect, as the system may run successfully for a long time before the problem occurs.

4. Parallelism: Parallelism is dangerous because of the difficulty of predicting the subtle effects of timing interactions between parallel processes. Timing problems cannot usually be detected by program inspection, and the peculiar combination of circumstances which causes a timing problem may not arise during system testing. Parallelism may be unavoidable, but its use should be carefully controlled to minimize inter-process dependencies. Programming language facilities, such as Ada tasks, help avoid some of the problems of parallelism, as the compiler can detect some kinds of programming error.

5. Recursion: Recursion is the situation in which a subroutine calls itself, or calls another subroutine which then calls the calling subroutine. Its use can result in very concise programs, but it can be difficult to follow the logic of recursive programs. Errors in using recursion may result in the allocation of all the system's memory as temporary stack variables are created.

6. Interrupts: Interrupts are a means of forcing control to transfer to a section of code irrespective of the code currently executing. The dangers of this are obvious, as the interrupt may cause a critical operation to be terminated.
Fault tolerance

A fault-tolerant system can continue in operation after some system failures have occurred. Fault tolerance is needed in situations where system failure would cause some accident, or where a loss of system operation would cause large economic losses. For example, the computers in an aircraft must continue in operation until the aircraft has landed, and the computers in an air traffic control system must be continuously available. Fault-tolerance facilities are required if the system is to cope with failure. There are four aspects to fault tolerance:

1. Failure detection: The system must detect that a particular state combination has resulted, or will result, in a system failure.
2. Damage assessment: The parts of the system state which have been affected by the failure must be detected.
3. Fault recovery: The system must restore its state to a known safe state. This may be achieved by correcting the damaged state or by restoring the system to a known safe state. Forward error recovery is more complex, as it involves diagnosing system faults and knowing what the system state should have been had the faults not caused a system failure.
4. Fault repair: This involves modifying the system so that the fault does not recur. In many cases, software failures are transient and due to a peculiar combination of system inputs; no repair is necessary, as normal processing can resume immediately after fault recovery. This is an important distinction between hardware and software faults.

There has been a need for many years to build fault-tolerant hardware. The most commonly used hardware fault-tolerance technique is based on the notion of triple-modular redundancy (TMR), shown in the figure below. The hardware unit is replicated three (or sometimes more) times. The output from each unit is compared; if one of the units fails and does not produce the same output as the others, its output is ignored, and the system functions with the two working units.
Figure: Triple modular redundancy to cope with hardware failure. Replicated units A1, A2 and A3 feed an output comparator.

The weakness of both of these approaches to fault tolerance is that they are based on the assumption that the specification is correct; they do not tolerate specification errors. There have been two comparable approaches to the provision of software fault tolerance, both derived from the hardware model where a component is replicated.

1) N-version programming: Using a common specification, the software system is implemented in a number of different versions by different teams. These versions are executed in parallel, their outputs are compared using a voting system, and inconsistent outputs are rejected. At least three versions of the system should be available.
Figure: N-version programming. Version 1, Version 2 and Version 3 feed an output comparator.
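The voting idea shared by TMR and N-version programming can be sketched as follows; the three "versions" here are toy stand-ins rather than independently developed implementations of a common specification.

    #include <functional>
    #include <iostream>
    #include <optional>
    #include <vector>

    // Majority vote over the outputs of the replicated versions. Returns
    // the agreed output, or nothing if no majority exists.
    std::optional<int> vote(const std::vector<std::function<int(int)>>& versions,
                            int input) {
        std::vector<int> outputs;
        for (const auto& v : versions) outputs.push_back(v(input));
        for (int candidate : outputs) {
            int agree = 0;
            for (int o : outputs)
                if (o == candidate) ++agree;
            if (2 * agree > static_cast<int>(outputs.size()))
                return candidate; // majority found; dissenting output ignored
        }
        return std::nullopt; // no majority: signal failure
    }

    int main() {
        // Stand-ins for three versions; the third contains an injected fault.
        std::vector<std::function<int(int)>> versions{
            [](int x) { return x * x; },
            [](int x) { return x * x; },
            [](int x) { return x * x + 1; }
        };
        if (auto r = vote(versions, 7))
            std::cout << "Agreed output: " << *r << '\n'; // prints 49
    }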


2) Recovery blocks: This is a finer-grained approach to fault tolerance. Each program component includes a test to check whether the component has executed successfully. It also includes alternative code which allows the system to back up and repeat the computation if the test detects a failure. Unlike N-version programming, the alternative implementations are deliberately different, rather than being independent implementations of the same specification, and they are executed in sequence rather than in parallel.
Figure: Recovery blocks. Algorithm 1 is tried and its result checked by an acceptance test; if the test fails, the state is restored and the next algorithm is retried. Execution continues if an acceptance test succeeds; an exception is signalled if all algorithms fail.
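A minimal C++ sketch of the recovery-block structure, assuming hypothetical algorithms and a simple acceptance test; a full implementation would also save the system state before each attempt and restore it before each retry.

    #include <iostream>
    #include <stdexcept>
    #include <vector>

    // Hypothetical alternative algorithms for the same task.
    int primaryAlgorithm(int x)   { return x - 1; } // contains an injected fault
    int alternateAlgorithm(int x) { return x + 1; }

    // Acceptance test: checks a property of the result, not how it was computed.
    bool acceptanceTest(int input, int result) { return result > input; }

    int recoveryBlock(int input) {
        std::vector<int (*)(int)> alternatives{primaryAlgorithm, alternateAlgorithm};
        for (auto algorithm : alternatives) {
            // State saved here would be restored before each retry.
            int result = algorithm(input);
            if (acceptanceTest(input, result))
                return result; // acceptance test succeeded: continue execution
        }
        throw std::runtime_error("all algorithms failed"); // signal exception
    }

    int main() {
        std::cout << recoveryBlock(10) << '\n'; // falls back to the alternate: 11
    }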

Exception Handling

When an error of some kind or an unexpected event occurs during the execution of a program, this is called an exception. Exceptions may be caused by hardware or software errors. When an exception has not been anticipated, control is transferred to a system exception-handling mechanism. If an exception has been anticipated, code must be included in the program to detect and handle that exception.

Most programming languages do not include facilities to detect and handle exceptions. In such languages, the normal decision constructs (if statements) must be used to detect the exception, and control constructs used to transfer control to the exception-handling code. When an exception occurs in a sequence of nested procedure calls, there is no easy way to transmit it from one procedure to another. Consider the example shown in the figure below, with a number of nested procedure calls where procedure A calls procedure B, which calls procedure C. If an exception occurs during the execution of C, this may be so serious that execution of B cannot continue. Procedure B has to return immediately to procedure A, which must also be informed that B has terminated abnormally and that an exception has occurred.
Figure: Exception return in embedded procedure calls. A calls B, which calls C; an exception occurring in C is returned back up the call sequence.



An exception handler is something like a case statement. It states exception names and appropriate actions for each exception.
    void Control_freezer (const float Danger_temp)
    {
        float Ambient_temp;
        // try means exceptions will be handled in this block.
        // Assume that Sensor, Temperature_dial, Pump and Alarm are
        // objects which have been declared elsewhere.
        try {
            while (true) {
                Ambient_temp = Sensor.Get_temperature ();
                if ((Ambient_temp > Temperature_dial.Setting ()) && (Pump.Status () == off)) {
                    Pump.Switch (on);
                    Wait (Cooling_time);
                }
                else if (Pump.Status () == on) {
                    Pump.Switch (off);
                }
                if (Ambient_temp > Danger_temp)
                    throw Freezer_too_hot ();
            } // end of while loop
        } // end of exception-handling try block
        // catch indicates the exception-handling code
        catch (Freezer_too_hot) {
            Alarm.Activate ();
        }
    }


Exceptions in a freezer temperature controller (C++)

The code above illustrates the use of exceptions and exception handling. These program fragments show the design of a temperature controller for a food freezer. The required temperature may be set between -18 and -40 degrees Celsius; food may start to defrost, and bacteria become active, at temperatures over -18 degrees. The control system maintains this temperature by switching a refrigerant pump on and off depending on the value of a temperature sensor. If the required temperature cannot be maintained, the controller sets off an alarm. The temperature of the freezer is discovered by interrogating an object called Sensor, and the required temperature by inspecting an object called Temperature_dial. The exceptions Freezer_too_hot and Control_problem and the type FREEZER_TEMP are declared in a separate header file, as there are no built-in exceptions in C++. The temperature controller tests the temperature and switches the pump as required. If the temperature is too hot, it transfers control to the exception handler, which activates an alarm. In C++, once an exception has been handled, it is not re-thrown.
Defensive programming


Defensive programming is an approach to program development whereby programmers assume that there may be undetected faults or inconsistencies in their programs. Redundant code is incorporated to check the system state after modifications and to ensure that the state change is consistent. If inconsistencies are detected, the state change is retracted or the state is restored to a known correct state. Defensive programming is an approach to fault tolerance which can be carried out without a fault-tolerant controller. The techniques used, however, are fundamental to the activities in the fault-tolerance process, namely detecting a failure, assessing the damage, and recovering from that failure.

Failure prevention

Programming languages such as Ada and C++ allow many errors which cause state corruption and system failure to be detected at compile-time. The compiler can detect those problems which violate the strict type rules of the language. Compiler checking is obviously limited to static values, but the compiler can also automatically add code to a program to perform run-time checks.

Damage assessment

Damage assessment involves analyzing the system state to gauge the extent of the state corruption. In many cases, corruption can be avoided by checking for fault occurrence before finally committing a change of state. If a fault is detected, the state change is not accepted, so no damage is caused. However, damage assessment may be needed when a fault arises because a sequence of state changes (all of which are individually correct) causes the system to enter an incorrect state. The role of the damage assessment procedures is not to recover from the fault but to assess what parts of the state space have been affected by it. Damage can only be assessed if it is possible to apply some validity function which checks whether the state is consistent. If inconsistencies are found, they are highlighted or signalled in some way.

Other techniques which can be used for fault detection and damage assessment depend on the system state representation and on the application. Possible methods are:

- the use of checksums in data exchange and check digits in numeric data;
- the use of redundant links in data structures which contain pointers;
- the use of watchdog timers in concurrent systems.

A checksum is a value computed by applying some mathematical function to the data. The function used should give a unique value for the packet of data which is exchanged. The sender computes the checksum by applying the checksum function to the data and appends that value to the data. The receiver applies the same function to the data and compares the checksum values; if they differ, some data corruption has occurred.
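As an illustration of the checksum technique, the sketch below uses a simple byte-sum function purely for demonstration; real systems would use a stronger function such as a CRC.

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Toy checksum: sum of all bytes. For illustration only.
    std::uint32_t checksum(const std::vector<std::uint8_t>& data) {
        std::uint32_t sum = 0;
        for (std::uint8_t b : data) sum += b;
        return sum;
    }

    int main() {
        std::vector<std::uint8_t> packet{'h', 'e', 'l', 'l', 'o'};
        std::uint32_t sent = checksum(packet); // sender appends this value

        packet[1] = 'a'; // simulate corruption in transit

        // Receiver recomputes the checksum and compares.
        if (checksum(packet) != sent)
            std::cout << "data corruption detected\n";
    }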

When linked data structures are used, the representation can be made redundant by including backward pointers: for every reference from A to B, there exists a comparable reference from B to A. It is also possible to keep a count of the number of elements in the structure. Checking can then determine whether or not all pointers have an inverse value, and whether or not the stored size and the computed structure size are the same.

When processes must react within a specific time period, a watchdog timer may be installed. A watchdog timer is a timer which must be reset by the executing process after its action is complete. It is started at the same time as a process and times the process execution. If, for some reason, the process fails to terminate, the watchdog timer is not reset. The controller can therefore detect that a problem has arisen and take action to force process termination.

Fault recovery

Fault recovery is the process of modifying the state space of the system so that the effects of the fault are minimized. The system can then continue in operation, perhaps in some degraded form. Forward recovery involves trying to correct the damaged system state. Backward recovery restores the system state to a known correct state. There are two general situations where forward error recovery can be applied:

1. When coded data is corrupted. The use of coding techniques which add redundancy to the data allows errors to be corrected as well as detected.
2. When linked structures are corrupted. If forward and backward pointers are included in the data structure, the structure can be recreated if enough pointers remain uncorrupted. This technique is frequently used for file system and database repair.
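The redundant-links idea can be illustrated with a doubly linked list whose validity check verifies that every forward pointer has a matching backward pointer and that the stored element count matches the computed one (a minimal sketch, not from the source text).

    #include <iostream>

    struct Node {
        int value;
        Node* next = nullptr; // forward reference A -> B
        Node* prev = nullptr; // redundant backward reference B -> A
    };

    // Validity check over the redundant representation.
    bool isConsistent(Node* head, int storedSize) {
        int counted = 0;
        for (Node* n = head; n != nullptr; n = n->next) {
            ++counted;
            if (n->next != nullptr && n->next->prev != n)
                return false; // broken inverse link: damage detected
        }
        return counted == storedSize;
    }

    int main() {
        Node a{1}, b{2}, c{3};
        a.next = &b; b.prev = &a;
        b.next = &c; c.prev = &b;
        std::cout << isConsistent(&a, 3) << '\n'; // 1: consistent

        c.prev = nullptr; // simulate corruption
        std::cout << isConsistent(&a, 3) << '\n'; // 0: damage detected
    }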

Backward error recovery is a simpler technique, which restores the state to a known safe state after an error has been detected. Most database systems include backward error recovery. When a user initiates a database computation, a transaction is initiated. Changes made during that transaction are not immediately incorporated in the database; the database is only updated after the transaction is finished and no problems are detected. If the transaction fails, the database is not updated.

Design by Contract

Meyer suggests an approach to design, called design by contract, to help ensure that a design meets its specifications. He begins by viewing a software system as a set of communicating components whose interaction is based on a precisely defined specification of what each component is supposed to do. These specifications, called contracts, govern how the component is to interact with other components and systems. Such specifications cannot guarantee correctness, but they form a good basis for testing and validation.

A contract is written between two parties when one commissions the other for a particular service or product. Each party expects some benefit for some obligation: the supplier produces a service or product in a given period of time in exchange for money, and the client accepts the service or product for the money. The contract makes the obligations and benefits explicit.

Meyer applies the notion of a contract to software. A software component, called a client, adopts a strategy to perform a set of tasks, t1, t2, ..., tn. In turn, each nontrivial subtask is executed when the client calls another component, the supplier, to perform it; that is, there is a contract between the two components to perform the subtask. Each contract covers mutual obligations (called preconditions), benefits (called postconditions), and consistency constraints (called invariants). Together, these contract properties are called assertions.

For example, suppose the client component has a table where each element is identified by a character string used as a key, and the supplier component's task is to insert an element from the table into a dictionary of limited size. We can describe the contract between the two components in the following way:

1. The client component ensures that the dictionary is not full and that the key is nonempty.
2. The supplier component records the element in the table.
3. The client component accesses the updated table, where the element appears.
4. If the table is full or the key is empty, no action is taken.
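This contract can be sketched in C++ using assertions as a stand-in for a full contract mechanism; the Dictionary class below is hypothetical, invented to mirror the four points above.

    #include <cassert>
    #include <map>
    #include <string>

    // Hypothetical bounded dictionary used to illustrate the contract.
    class Dictionary {
        std::map<std::string, int> table;
        std::size_t capacity;
    public:
        explicit Dictionary(std::size_t cap) : capacity(cap) {}
        bool full() const { return table.size() >= capacity; }
        bool has(const std::string& key) const { return table.count(key) > 0; }

        // Supplier operation. Precondition (the client's obligation): the
        // dictionary is not full and the key is nonempty. Postcondition
        // (the supplier's obligation): the element appears in the dictionary.
        void put(const std::string& key, int value) {
            assert(!full() && !key.empty()); // precondition
            table[key] = value;
            assert(has(key));                // postcondition
        }
    };

    int main() {
        Dictionary d(10);
        d.put("alpha", 1); // the client has ensured the preconditions hold
    }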



3. SUGGEST SIX REASONS WHY SOFTWARE RELIABILITY IS IMPORTANT. USING AN EXAMPLE, EXPLAIN THE DIFFICULTIES OF DESCRIBING WHAT SOFTWARE RELIABILITY MEANS.
The need for a means to objectively determine software reliability comes from the desire to apply the techniques of contemporary engineering fields to the development of software. That desire is a result of the common observation, by both lay-persons and specialists, that computer software does not work the way it ought to. In other words, software is seen to exhibit undesirable behaviour, up to and including outright failure, with consequences for the data which is processed, the machinery on which the software runs, and, by extension, the people and materials which those machines might negatively affect. The more critical the application of the software to economic and production processes, or to life-sustaining systems, the more important is the need to assess the software's reliability.

Regardless of the criticality of any single software application, it is also more and more frequently observed that software has penetrated deeply into almost every aspect of modern life through the technology we use. It is only expected that this infiltration will continue, along with an accompanying dependency on the software by the systems which maintain our society. As software becomes more and more crucial to the operation of the systems on which we depend, the argument goes, it only follows that the software should offer a concomitant level of dependability; in other words, the software should behave in the way it is intended, or, even better, in the way it should.

A software quality factor is a non-functional requirement for a software program which is not called up by the customer's contract but is nevertheless a desirable requirement which enhances the quality of the software program. Note that none of these factors is binary; they are not "either you have it or you don't" traits. Rather, they are characteristics that one seeks to maximize in one's software to optimize its quality. So rather than asking whether a software product has factor x, ask instead about the degree to which it does (or does not). Some software quality factors are listed here:

Understandability: Clarity of purpose. This goes further than just a statement of purpose; all of the design and user documentation must be clearly written so that it is easily understandable. This is obviously subjective, in that the user context must be taken into account: for instance, if the software product is to be used by software engineers, it is not required to be understandable to the layman.

Completeness: Presence of all constituent parts, with each part fully developed. This means that if the code calls a subroutine from an external library, the software package must provide a reference to that library, and all required parameters must be passed. All required input data must also be available.

Conciseness: Minimization of excessive or redundant information or processing. This is important where memory capacity is limited, and it is generally considered good practice to keep lines of code to a minimum. It can be improved by replacing repeated functionality with one subroutine or function which achieves that functionality. It also applies to documents.
Portability: Ability to be run well and easily on multiple computer configurations. Portability can mean both between different hardware (such as running on a PC as well as a smartphone) and between different operating systems (such as running on both Mac OS X and GNU/Linux).

Consistency: Uniformity in notation, symbology, appearance, and terminology within itself.

Maintainability: Propensity to facilitate updates to satisfy new requirements. Thus a software product that is maintainable should be well-documented, should not be complex, and should have spare capacity for memory, storage, processor utilization and other resources.

Testability: Disposition to support acceptance criteria and evaluation of performance. Such a characteristic must be built in during the design phase if the product is to be easily testable; a complex design leads to poor testability.

Usability: Convenience and practicality of use. This is affected by such things as the human-computer interface. The component of the software that has most impact on this is the user interface (UI), which for best usability is usually graphical (i.e. a GUI).

Reliability: Ability to be expected to perform its intended functions satisfactorily. This implies a time factor, in that a reliable product is expected to perform correctly over a period of time. It also encompasses environmental considerations, in that the product is required to perform correctly in whatever conditions it finds itself (sometimes termed robustness).

Efficiency: Fulfillment of purpose without waste of resources such as memory, space, processor utilization, network bandwidth, time, etc.

Security: Ability to protect data against unauthorized access and to withstand malicious or inadvertent interference with its operations. Besides the presence of appropriate security mechanisms such as authentication, access control and encryption, security also implies resilience in the face of malicious, intelligent and adaptive attackers.

Example

There are two major differences between the hardware and software failure-rate curves. One difference is that in the last phase, software does not have an increasing failure rate as hardware does. In this phase the software is approaching obsolescence, and there is no motivation for any upgrades or changes, so the failure rate does not change. The second difference is that in the useful-life phase, software experiences a drastic increase in failure rate each time an upgrade is made. The failure rate then levels off gradually, partly because of the defects found and fixed after the upgrades.

Figure: Software failure rate over time. Failure rate (y-axis) against time (x-axis) through the test/debug, useful-life and obsolescence phases, with a spike at each upgrade.



4. WHAT ARE THE ESSENTIAL SKILLS AND TRAITS NECESSARY FOR EFFECTIVE PROJECT MANAGERS IN SUCCESSFULLY HANDLING PROJECTS?
The Successful Project Manager: A successful project manager knows how to bring together the definition and control elements and operate them efficiently. That means you will need to apply the leadership skills you already apply in running a department, and practice the organizational abilities you need to constantly look to the future. In other words, if you're a qualified department manager, you already possess the skills and attributes for succeeding as a project manager. The criteria by which you will be selected will be similar. Chances are, the project you're assigned will have a direct relationship to the skills you need just to do your job. For example:

Organizational and leadership experience: An executive seeking a qualified project manager usually seeks someone who has already demonstrated the ability to organize work and to lead others. He or she assumes that you will succeed in a complicated long-term project primarily because you have already demonstrated the required skills and experience.

Contact with needed resources: For projects that involve a lot of coordination between departments, divisions, or subsidiaries, top management will look for a project manager who already communicates outside of a single department. If you have the contacts required for a project, it will naturally be assumed that you are suited to run a project across departmental lines.

Ability to coordinate a diverse resource pool: By itself, contact outside of your department may not be enough. You must also be able to work with a variety of people and departments, even when their backgrounds and disciplines are dissimilar. For example, as a capable project manager, you must be able to delegate and monitor work not only in areas familiar to your own department but also in areas that are alien to your background.

Communication and procedural skills: An effective project manager will be able to convey and receive information to and from a number of team members, even when particular points of view are different from his own. For example, a strictly administrative manager should understand the priorities of a sales department, and a customer service manager may need to understand what motivates a production crew.

Ability to delegate and monitor work: Project managers need to delegate the work that will be performed by each team member, and to monitor that work to stay on schedule and within budget. A contractor who builds a house has to understand the processes involved in the work done by each subcontractor, even if the work is highly specialized. The same is true for every project manager. It's not enough merely to assign someone else a task, complete with a schedule and a budget; delegation and monitoring are effective only if you're also able to supervise and assess progress.

Dependability: Your dependability can be tested in only one way: by being given responsibility and the chance to come through. Once you gain the reputation as a manager who can and does respond as expected, you're ready to take on a project.

These project management qualifications read like a list of evaluation points for every department manager. If you think of the process of running your department as a project of its own, then you already understand what it's like to organize a project, the difference, of course, being that a project takes place in a finite time period, whereas your departmental tasks are ongoing. Thus, every successful manager should be ready to tackle a project, provided it is related to his or her skills, resources, and experience.


5. WHICH ARE THE FOUR PHASES OF DEVELOPMENT ACCORDING TO RATIONAL UNIFIED PROCESS?
The Rational Unified Process is a software engineering process. It provides a disciplined approach to assigning tasks and responsibilities within a development organization. Its goal is to ensure the production of high-quality software that meets the needs of its end-users within a predictable schedule and budget. The Rational Unified Process divides the development lifecycle into four phases: inception, elaboration, construction, and transition.

The Rational Unified Process is a process product, developed and maintained by Rational Software. The development team for the Rational Unified Process works closely with customers, partners, Rational's product groups, and Rational's consultant organization to ensure that the process is continuously updated and improved upon to reflect recent experiences and evolving, proven best practices.

The Rational Unified Process enhances team productivity by providing every team member with easy access to a knowledge base with guidelines, templates and tool mentors for all critical development activities. By having all team members access the same knowledge base, no matter whether they work with requirements, design, test, project management, or configuration management, we ensure that all team members share a common language, process and view of how to develop software.

The Rational Unified Process activities create and maintain models. Rather than focusing on the production of large amounts of paper documents, the Unified Process emphasizes the development and maintenance of models: semantically rich representations of the software system under development. The Rational Unified Process is a guide for how to effectively use the Unified Modeling Language (UML). The UML is an industry-standard language that allows us to clearly communicate requirements, architectures and designs. The UML was originally created by Rational Software and is now maintained by the standards organization Object Management Group (OMG).

Effective Deployment of 6 Best Practices

The Rational Unified Process describes how to effectively deploy commercially proven approaches to software development for software development teams. These are called "best practices" not so much because one can precisely quantify their value, but rather because they are observed to be commonly used in industry by successful organizations. The Rational Unified Process provides each team member with the guidelines, templates and tool mentors necessary for the entire team to take full advantage of, among others, the following best practices:

1. Develop software iteratively
2. Manage requirements
3. Use component-based architectures
4. Visually model software
5. Verify software quality
6. Control changes to software

Develop Software Iteratively

Given today's sophisticated software systems, it is not possible to sequentially first define the entire problem, design the entire solution, build the software and then test the product at the end. An iterative approach is required that allows an increasing understanding of the problem through successive refinements, and that incrementally grows an effective solution over multiple iterations.

The Rational Unified Process supports an iterative approach to development that addresses the highest-risk items at every stage in the lifecycle, significantly reducing a project's risk profile. This iterative approach helps you attack risk through demonstrable progress: frequent, executable releases that enable continuous end-user involvement and feedback. Because each iteration ends with an executable release, the development team stays focused on producing results, and frequent status checks help ensure that the project stays on schedule. An iterative approach also makes it easier to accommodate tactical changes in requirements, features or schedule.

Manage Requirements

The Rational Unified Process describes how to elicit, organize, and document required functionality and constraints; track and document tradeoffs and decisions; and easily capture and communicate business requirements. The notions of use cases and scenarios prescribed in the process have proven to be an excellent way to capture functional requirements and to ensure that these drive the design, implementation and testing of software, making it more likely that the final system fulfills the end-user needs. They provide coherent and traceable threads through both the development and the delivered system.

Use Component-based Architectures

The process focuses on the early development and baselining of a robust executable architecture, prior to committing resources for full-scale development. It describes how to design a resilient architecture that is flexible, accommodates change, is intuitively understandable, and promotes more effective software reuse. The Rational Unified Process supports component-based software development. Components are non-trivial modules or subsystems that fulfill a clear function. The Rational Unified Process provides a systematic approach to defining an architecture using new and existing components. These are assembled in a well-defined architecture, either ad hoc or in a component infrastructure such as the Internet, CORBA, and COM, for which an industry of reusable components is emerging.

Visually Model Software

The process shows you how to visually model software to capture the structure and behavior of architectures and components. This allows you to hide the details and write code using graphical building blocks. Visual abstractions help you communicate different aspects of your software; see how the elements of the system fit together; make sure that the building blocks are consistent with your code; maintain consistency between a design and its implementation; and promote unambiguous communication. The industry-standard Unified Modeling Language (UML), created by Rational Software, is the foundation for successful visual modeling.

Verify Software Quality

Poor application performance and poor reliability are common factors which dramatically inhibit the acceptability of today's software applications. Hence, quality should be reviewed with respect to requirements based on reliability, functionality, application performance and system performance. The Rational Unified Process assists you in the planning, design, implementation, execution, and evaluation of these test types. Quality assessment is built into the process, in all activities, involving all participants, using objective measurements and criteria; it is not treated as an afterthought or a separate activity performed by a separate group.

Control Changes to Software

The ability to manage change, making certain that each change is acceptable and being able to track changes, is essential in an environment in which change is inevitable. The process describes how to control, track and monitor changes to enable successful iterative development. It also guides you in how to establish secure workspaces for each developer, by providing isolation from changes made in other workspaces and by controlling changes of all software artifacts (e.g. models, code, documents). And it brings a team together to work as a single unit by describing how to automate integration and build management.

The Rational Unified Process product consists of:

- A web-enabled, searchable knowledge base providing all team members with guidelines, templates, and tool mentors for all critical development activities. The knowledge base can further be broken down into:
  - Extensive guidelines for all team members and all portions of the software lifecycle. Guidance is provided for both the high-level thought process and the more tedious day-to-day activities. The guidance is published in HTML form for easy platform-independent access on your desktop.
  - Tool mentors providing hands-on guidance for tools covering the full lifecycle. The tool mentors are published in HTML form for easy platform-independent access on your desktop. See the section "Integration with Tools" for more details.
  - Rational Rose examples and templates providing guidance on how to structure the information in Rational Rose when following the Rational Unified Process (Rational Rose is Rational's tool for visual modeling).
  - SoDA templates: more than 10 SoDA templates that help automate software documentation (SoDA is Rational's document automation tool).
  - Microsoft Word templates: more than 30 Word templates assisting documentation in all workflows and all portions of the lifecycle.
  - Microsoft Project plans: many managers find it difficult to create project plans that reflect an iterative development approach. These templates jump-start the creation of project plans for iterative development according to the Rational Unified Process.
- A development kit describing how to customize and extend the Rational Unified Process to the specific needs of the adopting organization or project, together with tools and templates to assist the effort. (This development kit is described in more detail later in this section.)
- Access to a Resource Center containing the latest white papers, updates, hints, and techniques, as well as references to add-on products and services.
- A book, "Rational Unified Process: An Introduction" by Philippe Kruchten, published by Addison-Wesley. The book is 277 pages and provides a good introduction and overview of the process and the knowledge base.


6. DESCRIBE THE CAPABILITY MATURITY MODEL WITH SUITABLE REAL TIME EXAMPLES.
The Capability Maturity Model (CMM) is a multistaged process definition model intended to characterize and guide the engineering excellence or maturity of an organization's software development processes. The Capability Maturity Model: Guidelines for Improving the Software Process (1995) contains an authoritative description. See also Paulk et al. (1993), Curtis, Hefley, and Miller (1995) and, for general remarks on continuous process improvement, Somerville, Sawyer, and Viller (1999). The model prescribes practices for planning, engineering, and managing software development and maintenance, and addresses the usual goals of organizational system engineering processes: namely, quality improvement, risk reduction, cost reduction, predictable process, and statistical quality control (Oshana & Linger 1999).

However, the model is not merely a program for how to develop software in a professional, engineering-based manner; it prescribes an evolutionary improvement path from an ad hoc, immature process to a mature, disciplined process (Oshana & Linger 1999). Walnau, Hissam, and Seacord (2002) observe that the ISO and CMM process standards established the context for improving the practice of software development by identifying roles and behaviors that define a software factory.

The CMM identifies five levels of software development maturity in an organization:

At level 1, the organization's software development follows no formal development process.

The process maturity is said to be at level 2 if software management controls have been introduced and some software process is followed. A decisive feature of this level is that the organization's process is supposed to be such that it can repeat the level of performance that it achieved on similar successful past projects. This is related to a central purpose of the CMM: namely, to improve the predictability of the development process significantly. The major technical requirement at level 2 is the incorporation of configuration management into the process. Configuration management (or change management, as it is sometimes called) refers to the processes used to keep track of the changes made to the development product (including all the intermediate deliverables) and the multifarious impacts of these changes. These impacts range from the recognition of development problems, through identification of the need for changes and alteration of previous work, to verification that agreed-upon modifications have corrected the problem and that corrections have not had a negative impact on other parts of the system.

An organization is said to be at level 3 if the development process is standard and consistent. The project management practices of the organization are supposed to have been formally agreed on, defined, and codified at this stage of process maturity.

Organizations at level 4 are presumed to have put into place qualitative and quantitative measures of organizational process. These process metrics are intended to monitor development, to signal trouble, and to indicate where and how a development is going wrong when problems occur.

Organizations at maturity level 5 are assumed to have established mechanisms designed to ensure continuous process improvement and optimization. The metric feedback at this stage is not just applied to recognize and control problems with the current project, as in level-4 organizations; it is intended to identify possible root causes in the process that have allowed the problems to occur, and to guide the evolution of the process so as to prevent the recurrence of such problems in future projects, for example through the introduction of appropriate new technologies and tools.
The higher the CMM maturity level is, the more disciplined, stable, and well-defined the development process is expected to be; the environment is also assumed to make more use of automated tools and of the experience gained from many past successes (Zhiying 2003). The staged character of the model lets organizations progress up the maturity ladder by setting process targets for the organization. Each advance reflects a further degree of stabilization of an organization's development process, with each level "institutionaliz[ing] a different aspect of the process" (Oshana & Linger 1999).

Each CMM level has associated key process areas (KPAs) that correspond to activities that must be formalized to attain that level. For example, the KPAs at level 2 include configuration management, quality assurance, project planning and tracking, and effective management of subcontracted software. The KPAs at level 3 include intergroup communication, training, process definition, product engineering, and integrated software management. Quantitative process management and development quality define the required KPAs at level 4. Level 5 institutionalizes process and technology change management and optimizes defect prevention.

Bamberger (1997), one of the authors of the Capability Maturity Model, addresses what she believes are some misconceptions about the model. For example, she observes that the motivation for the second level, in which the organization must have a repeatable software process, arises as a direct response to the historical experience of developers when their software development is out of control (Bamberger 1997), often for reasons having to do with configuration management, or mismanagement. Among the many symptoms of configuration mismanagement are: confusion over which version of a file is the current official one; inadvertent side effects when repairs by one developer obliterate the changes of another developer; and inconsistencies among the efforts of different developers. A key appropriate response to such actual or potential disorder is to get control of the product and the product pieces under development (configuration management) by (Bamberger 1997):

- Controlling the feature set of the product so that the impacts of changes are more fully understood (requirements management)
- Using the feature set to estimate the budget and schedule while leveraging as much past knowledge as possible (project planning)
- Ensuring schedules and plans are visible to all the stakeholders (project tracking)
- Ensuring that the team follows its own plan and standards and corrects discrepancies when they occur (quality assurance)

Bamberger contends that this kind of process establishes the basic stability and visibility that are the essence of the CMM repeatable level.

FACULTY : KAMYA MAM

MC0072 COMPUTER GRAPHICS


(Book ID: B0810) Assignment Set 2


1. WRITE A SHORT NOTE ON THE FOLLOWING:


A) Video mixing

The video controller provides the facility of video mixing, in which it accepts information from two images simultaneously: one from the frame buffer and the other from a television camera, recorder, or other video source. The video controller merges the two received images to form a composite image, as illustrated in the figure below.

[Figure: Video mixing - the frame buffer and a video signal source both feed the video controller, which drives the monitor]

There are two types of video mixing. In the first, a graphics image is inset into a video image. Here, mixing is accomplished with hardware that treats a designated pixel value in the frame buffer as a flag to indicate that the video signal should be shown instead of the signal from the frame buffer; normally the designated pixel value corresponds to the background color of the frame buffer image. In the second type of mixing, the video image is placed on top of the frame buffer image. Here, whenever the background color of the video image appears, the frame buffer is shown; otherwise the video image is shown.
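The flag-pixel scheme of the first kind of mixing can be pictured with a minimal sketch (Python is used here purely for illustration; FLAG and the tiny image sizes are assumptions, not values from any real controller):

    # The designated "flag" pixel value in the frame buffer selects the
    # external video signal; every other value selects the frame buffer.
    FLAG = 0  # hypothetical background/flag pixel value

    def mix(frame_buffer, video_frame, flag=FLAG):
        """Return the composite image, pixel by pixel."""
        composite = []
        for fb_row, video_row in zip(frame_buffer, video_frame):
            composite.append([video_px if fb_px == flag else fb_px
                              for fb_px, video_px in zip(fb_row, video_row)])
        return composite

    # Example: a 2x3 frame buffer in which 0 marks "show the video signal".
    fb    = [[0, 7, 0], [7, 7, 0]]
    video = [[1, 2, 3], [4, 5, 6]]
    print(mix(fb, video))   # [[1, 7, 3], [7, 7, 6]]

A real video controller performs this selection in hardware, per pixel, as the composite signal is generated.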
B) Frame buffer

A frame buffer is a video output device that drives a video display from a memory buffer containing a complete frame of data. The information in the memory buffer typically consists of color values for every pixel (point that can be displayed) on the screen. Color values are commonly stored in 1-bit binary (monochrome), 4-bit palettized, 8-bit palettized, 16-bit high color and 24-bit true color formats. An additional alpha channel is sometimes used to retain information about pixel transparency. The total amount of memory required to drive the frame buffer depends on the resolution of the output signal, and on the color depth and palette size.

Frame buffers differ significantly from the vector displays that were common prior to the advent of the frame buffer. With a vector display, only the vertices of the graphics primitives are stored. The electron beam of the output display is then commanded to move from vertex to vertex, tracing an analog line across the area between these points. With a frame buffer, the electron beam (if the display technology uses one) is commanded to trace a left-to-right, top-to-bottom path across the entire screen, the way a television renders a broadcast signal. At the same time, the color information for each point on the screen is pulled from the frame buffer, creating a set of discrete picture elements (pixels).

The term "frame buffer" has also entered into colloquial usage to refer to any backing store of graphical information. In this usage, the key feature that differentiates a frame buffer from ordinary memory used to store graphics (namely, the attached display output circuitry) is lost.

Display modes

Frame buffers used in personal and home computing often had sets of defined "modes" under which the frame buffer could operate. These modes would automatically reconfigure the hardware to output different resolutions, color depths, memory layouts and refresh rate timings. In the world of Unix machines and operating systems, such conveniences were usually eschewed in favor of directly manipulating the hardware settings. This manipulation was far more flexible in that any resolution, color depth and refresh rate was attainable, limited only by the memory available to the frame buffer.

An unfortunate side effect of this method was that the display device could be driven beyond its capabilities. In some cases this resulted in hardware damage to the display; more commonly, it simply produced garbled and unusable output. Modern CRT monitors fix this problem through the introduction of "smart" protection circuitry. When the display mode is changed, the monitor attempts to obtain a signal lock on the new refresh frequency. If the monitor is unable to obtain a signal lock, or if the signal is outside the range of its design limitations, the monitor will ignore the frame buffer signal and possibly present the user with an error message. LCD monitors tend to contain similar protection circuitry, but for different reasons: since the LCD must digitally sample the display signal (thereby emulating an electron beam), any signal that is out of range cannot be physically displayed on the monitor.

Color palette

Frame buffers have traditionally supported a wide variety of color modes. Due to the expense of memory, most early frame buffers used 1-bit (2 color), 2-bit (4 color), 4-bit (16 color) or 8-bit (256 color) color depths. The problem with such small color depths is that a full range of colors cannot be produced. The solution to this problem was to add a lookup table to the frame buffers. Each "color" stored in frame buffer memory would act as a color index; this scheme was sometimes called "indexed color".

The lookup table served as a palette that contained data to define a limited number (such as 256) of different colors. However, each of those 256 colors was itself defined by more than 8 bits, such as 24 bits, eight of them for each of the three primary colors. With 24 bits available, colors can be defined far more subtly and exactly, as well as offering the full gamut which the display can show. While having a limited total number of colors in an image is somewhat restrictive, the colors can nevertheless be well chosen, and this scheme is markedly superior to 8-bit color. The data from the frame buffer in this scheme determined which of the 256 palette colors was used for the current pixel, and the data stored in the lookup table (sometimes called the "LUT") went to three digital-to-analog converters to create the video signal for the display. The frame buffer's output data, instead of providing relatively crude primary-color data, served as an index: a number choosing one entry in the lookup table. In other words, the index determined which color, and the data from the lookup table determined precisely what color, to use for the current pixel.

Memory access

While frame buffers are commonly accessed via a memory mapping directly to the CPU memory space, this is not the only method by which they may be accessed. Frame buffers have varied widely in the methods used to access memory. Some of the most common are:

- Mapping the entire frame buffer to a given memory range.
- Port commands to set each pixel, range of pixels or palette entry.
- Mapping a memory range smaller than the frame buffer memory, then bank switching as necessary.

The frame buffer organization may be chunky (packed pixel) or planar.

Virtual frame buffers

Many systems attempt to emulate the function of a frame buffer device, often for reasons of compatibility. The two most common "virtual" frame buffers are the Linux frame buffer device (fbdev) and the X Virtual Framebuffer (Xvfb). The X Virtual Framebuffer was added to the X Window System distribution to provide a method for running X without a graphical frame buffer. While the original reasons for this are lost to history, it is often used on modern systems to support programs such as the Sun Microsystems JVM that do not allow dynamic graphics to be generated in a headless environment. The Linux frame buffer device was developed to abstract the physical method for accessing the underlying frame buffer into a guaranteed memory map that is easy for programs to access. This increases portability, as programs are not required to deal with systems that have disjointed memory maps or require bank switching.

Page flipping

Since frame buffers are often designed to handle more than one resolution, they often contain more memory than is necessary to display a single frame at lower resolutions. Since this memory can be considerable in size, a trick was developed to allow new frames to be written to video memory without disturbing the frame that is currently being displayed. The concept works by telling the frame buffer to use a specific chunk of its memory to display the current frame. While that memory is being displayed, a completely separate part of memory is filled with data for the next frame. Once the secondary buffer is filled (often referred to as the "back buffer"), the frame buffer is instructed to look at the secondary buffer instead. The primary buffer (often referred to as the "front buffer") becomes the secondary buffer, and the secondary buffer becomes the primary. This switch is usually done during the vertical blanking interval to prevent the screen from "tearing" (i.e., half the old frame is shown, and half the new frame is shown). Most modern frame buffers are manufactured with enough memory to perform this trick even at high resolutions. As a result, it has become a standard technique used by PC game programmers.
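A minimal sketch of the flip itself (plain Python; draw_frame and the 16-pixel buffers are stand-ins for real rendering and real video memory):

    front = [0] * 16          # buffer currently being scanned out
    back  = [0] * 16          # buffer being drawn into

    def draw_frame(buffer, frame_number):
        """Stand-in for rendering: tag every pixel with the frame number."""
        for i in range(len(buffer)):
            buffer[i] = frame_number

    def flip():
        """Swap the roles of the buffers (done at the vertical blank)."""
        global front, back
        front, back = back, front

    for frame in range(3):
        draw_frame(back, frame)   # render the next frame off-screen
        flip()                    # display it; old front becomes new back
        print("displaying frame", front[0])

On real hardware the "swap" is just a pointer change inside the video controller, which is why it is cheap enough to do every frame.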
Graphics accelerators

As the demand for better graphics increased, hardware manufacturers created a way to decrease the amount of CPU time required to fill the frame buffer. This is commonly called a "graphics accelerator" in the Unix world. Common graphics drawing commands (many of them geometric) are sent to the graphics accelerator in their raw form. The accelerator then rasterizes the results of the command into the frame buffer. This method can save from thousands to millions of CPU cycles per command, as the CPU is freed to do other work. While early accelerators focused on improving the performance of 2D GUI systems, most modern accelerators focus on producing 3D imagery in real time. A common design is to send commands to the graphics accelerator using a library such as OpenGL. The OpenGL driver then translates those commands into instructions for the accelerator's graphics processing unit (GPU). The GPU uses those microinstructions to compute the rasterized results, which are bit blitted to the frame buffer. The frame buffer's signal is then produced in combination with built-in video overlay devices (usually used to produce the mouse cursor without modifying the frame buffer's data) and any analog special effects that are produced by modifying the output signal. An example of such analog modification was the anti-aliasing technique used by the 3dfx Voodoo cards: these cards add a slight blur to the output signal that makes aliasing of the rasterized graphics much less obvious. Popular manufacturers of 3D graphics accelerators are Nvidia and ATI Technologies.

C) Color table

In color displays, 24 bits per pixel are commonly used, where 8 bits represent 256 levels for each color. Here it is necessary to read 24 bits for each pixel from the frame buffer, which is very time consuming. To avoid this, the video controller uses a look-up table (LUT) to store many entries of pixel values in RGB format. With this facility, it is now necessary only to read the index into the look-up table from the frame buffer for each pixel. This index specifies one of the entries in the look-up table, and the specified entry is then used to control the intensity or color of the CRT. Usually, a look-up table has 256 entries; the index therefore has 8 bits, and for each pixel the frame buffer has to store only 8 bits instead of 24. Fig. 2.6 shows the organization of a color (video) look-up table.
[Figure 2.6: Organization of a video look-up table - an 8-bit index from the frame buffer (e.g. 01000001) selects one of 256 entries, each holding a full 24-bit color]

There are several advantages in storing color codes in a lookup table. Use of a color table can provide a "reasonable" number of simultaneous colors without requiring large frame buffers. For most applications, 256 or 512 different colors are sufficient for a single picture. Also, table entries can be changed at any time, allowing a user to experiment easily with different color combinations in a design, scene, or graph without changing the attribute settings for the graphics data structure. In visualization and image-processing applications, color tables are a convenient means for setting color thresholds so that all pixel values above or below a specified threshold can be set to the same color. For these reasons, some systems provide both capabilities for color-code storage, so that a user can elect either to use color tables or to store color codes directly in the frame buffer.
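As a minimal sketch of the indexed lookup (Python; the palette entries are illustrative values, not any device's defaults):

    palette = [(0, 0, 0)] * 256          # 256 entries of 24-bit color
    palette[1] = (255, 0, 0)             # index 1 -> red
    palette[2] = (0, 255, 0)             # index 2 -> green

    def to_rgb(frame_buffer):
        """Expand an indexed image into (R, G, B) triples via the LUT."""
        return [[palette[index] for index in row] for row in frame_buffer]

    indexed = [[0, 1], [2, 1]]           # 2x2 image of 8-bit indices
    print(to_rgb(indexed))
    # [[(0, 0, 0), (255, 0, 0)], [(0, 255, 0), (255, 0, 0)]]

Changing a single palette entry recolors every pixel carrying that index, which is exactly why the ability to change table entries at any time is so useful.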
Display technology

The image is shown on a screen (also called a monitor), which is an output peripheral device that allows a visual representation to be offered. This information comes from the computer, but in an indirect way: the processor does not send information directly to the monitor, but processes the information coming from its random access memory (RAM), then sends it to a graphics card that converts the information into electrical impulses, which it then sends to the monitor.

[Figure: the RGB color cube - Black (0,0,0), Red (255,0,0), Green (0,255,0), Blue (0,0,255), Yellow (255,255,0), Magenta (255,0,255), Cyan (0,255,255) and White (255,255,255) at its vertices]


Computer monitors are usually cathode ray tubes: a tube made of glass in which an electron gun emits electrons that are directed by a magnetic field towards a screen on which small phosphorescent elements (luminophores) are laid out, constituting points (pixels) that emit light when the electrons hit them.

The pixel concept

An image consists of a set of points called pixels (the word pixel is an abbreviation of PICture ELement). The pixel is thus the smallest component of a digital image. The entire set of these pixels is contained in a two-dimensional table constituting the image.

Since the screen sweep is carried out from left to right and from top to bottom, it is usual to indicate the pixel located at the top left-hand corner of the image using the coordinates [0,0]. This means that the directions of the image axes are the following: the direction of the x-axis is from left to right, and the direction of the y-axis is from top to bottom, contrary to the conventional notation in mathematics, where the direction of the y-axis is upwards.

Definition and resolution

The number of points (pixels) constituting the image, that is, its dimensions (the number of columns of the image multiplied by its number of rows) is known as the definition. An image 640 pixels wide and 480 pixels high has a definition of 640 by 480 pixels, written as 640x480. On the other hand, the resolution, a term often confused with the definition, is determined by the number of points per unit of area, expressed in dots per inch (DPI), an inch being equivalent to 2.54 cm. The resolution thus makes it possible to establish the relationship between the number of pixels of an image and the actual size of its representation on a physical support. A resolution of 300 dpi thus means 300 columns and 300 rows of pixels per square inch, which yields 90000 pixels per square inch. The 72 dpi reference resolution gives a pixel of 1/72 of an inch (an inch divided by 72), that is to say 0.353 mm, corresponding to a pica (Anglo-Saxon typographical unit).
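The definition/resolution relationship is easy to check numerically; a small sketch (Python, reusing the 640x480, 72 dpi and 300 dpi figures from the text):

    INCH_MM = 25.4

    def physical_size_mm(width_px, height_px, dpi):
        """Physical size of an image printed or shown at a given resolution."""
        return (width_px / dpi * INCH_MM, height_px / dpi * INCH_MM)

    w_mm, h_mm = physical_size_mm(640, 480, 72)
    print(round(w_mm, 1), "x", round(h_mm, 1), "mm")   # 225.8 x 169.3 mm

    print(300 * 300)   # 90000 pixels per square inch at 300 dpi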
Colour models


An image is thus represented by a two-dimensional table in which each cell is a pixel. To represent an image by computer, it is thus enough to create a pixel table in which each cell contains a value. The value stored in a cell is coded on a certain number of bits which determine the colour or the intensity of the pixel; this is called the coding depth (or sometimes the colour depth). There are several coding depth standards:

- Black and white bitmap: by storing one bit in each cell, it is possible to define two colours (black or white).
- Bitmap with 16 colours or 16 levels of grey: by storing 4 bits in each cell, it is possible to define 2^4 = 16 intensities for each pixel, that is, 16 degrees of grey ranging from black to white, or 16 different colours.
- Bitmap with 256 colours or 256 levels of grey: by storing a byte in each cell, it is possible to define 2^8 = 256 intensities, that is, 256 degrees of grey ranging from black to white, or 256 different colours.
- Colour palette (colourmap): thanks to this method it is possible to define a palette, or colour table, containing all the colours that can appear in the image, each with an associated index. The number of bits reserved for coding each index of the palette determines the number of colours which can be used. Thus, by coding the indexes on 8 bits it is possible to define 256 usable colours; each cell of the two-dimensional table that represents the image will contain a number indicating the index of the colour to be used. An image whose colours are coded according to this technique is called an indexed colour image.
- "True colours" or "real colours": this representation allows an image to be represented by defining each component (RGB, for red, green and blue). Each pixel is represented by a set comprising the three components, each one coded on a byte, that is, 24 bits in all (16 million colours). It is possible to add a fourth component, making it possible to add information regarding transparency or texture; each pixel is then coded on 32 bits.
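The coding depths above translate directly into raw image sizes; a quick sketch (Python; 640x480 is chosen only as a familiar definition):

    def image_bytes(width, height, bits_per_pixel):
        """Raw size in bytes of a width x height image at a given depth."""
        return width * height * bits_per_pixel // 8

    for name, bpp in [("black and white", 1), ("16 colours", 4),
                      ("indexed, 256 colours", 8), ("true colour", 24),
                      ("true colour + alpha", 32)]:
        print(f"640x480, {name}: {image_bytes(640, 480, bpp)} bytes")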


2. DESCRIBE THE FOLLOWING WITH RESPECT TO METHODS OF GENERATING CHARACTERS:


A) Stroke method

This method uses small line segments to generate a character. The small series of line segments are drawn like the strokes of a pen to form a character, as shown in the figure below.

[Figure: Stroke method - a character built from pen-like line segments]

We can build our own stroke-method character generator by calls to the line drawing algorithm. Here it is necessary to decide which line segments are needed for each character and then to draw these segments using the line drawing algorithm.
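A minimal sketch of the idea (Python; draw_line stands in for a real line drawing routine such as DDA or Bresenham's, and the stroke table for 'L' is invented for illustration):

    def draw_line(x1, y1, x2, y2):
        print(f"line ({x1},{y1}) -> ({x2},{y2})")   # stand-in for DDA/Bresenham

    # Hypothetical stroke table: 'L' as two pen strokes.
    STROKES = {
        "L": [((0, 8), (0, 0)),   # vertical stroke
              ((0, 0), (4, 0))],  # horizontal stroke
    }

    def draw_char(ch, ox=0, oy=0):
        """Draw each stroke of the character, offset to (ox, oy)."""
        for (x1, y1), (x2, y2) in STROKES[ch]:
            draw_line(ox + x1, oy + y1, ox + x2, oy + y2)

    draw_char("L")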

B) Starburst method

In this method a fixed pattern of line segments is used to generate characters. As shown in the figure, there are 24 line segments. Out of these 24 line segments, the segments required to display a particular character are highlighted. This method of character generation is called the starburst method because of its characteristic appearance.


The figure shows the starburst patterns for the characters A and M. The pattern for a particular character is stored in the form of a 24-bit code, each bit representing one line segment. A bit is set to one to highlight its line segment; otherwise it is set to zero. For example, the 24-bit code for character A is 0011 0000 0011 1100 1110 0001, and for character M it is 0000 0011 0000 1100 1111 0011.

This method of character generation has some disadvantages:

1. Twenty-four bits are required to represent a character, hence more memory is required.
2. Code conversion software is required to display a character from its 24-bit code.
3. Character quality is poor; it is worst for curve-shaped characters.
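A sketch of how such codes can be handled (Python; the codes are the two quoted above, while the segment numbering is an assumption, since the text does not fix one):

    CODES = {
        "A": 0b001100000011110011100001,   # 0011 0000 0011 1100 1110 0001
        "M": 0b000000110000110011110011,   # 0000 0011 0000 1100 1111 0011
    }

    def segments_on(code):
        """Indices (0..23, most significant bit first) of highlighted segments."""
        return [bit for bit in range(24) if code >> (23 - bit) & 1]

    print(segments_on(CODES["A"]))

The display routine would then draw, out of the fixed 24-segment pattern, exactly the segments whose bits are set.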

C) Bitmap method

The third method for character generation is the bitmap method. It is also called the dot matrix method because characters are represented by an array of dots in matrix form: a two-dimensional array having columns and rows. A 5 x 7 array is commonly used to represent characters, as shown in the figure below; however, 7 x 9 and 9 x 13 arrays are also used. Higher resolution devices such as inkjet or laser printers may use character arrays that are over 100 x 100.

[Figure: Character A in 5 x 7 dot matrix format]

Each dot in the matrix is a pixel. The character is placed on the screen by copying pixel values from the character array into some portion of the screen's frame buffer. The value of the pixel controls the intensity of the pixel.
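A minimal sketch of the copy (Python; the 5 x 7 pattern for 'A' is illustrative, not taken from a real font ROM):

    A_5x7 = [
        "01110",
        "10001",
        "10001",
        "11111",
        "10001",
        "10001",
        "10001",
    ]

    def blit_char(frame_buffer, pattern, ox, oy, intensity=1):
        """Copy the character's pixels into the frame buffer at (ox, oy)."""
        for row, bits in enumerate(pattern):
            for col, bit in enumerate(bits):
                if bit == "1":
                    frame_buffer[oy + row][ox + col] = intensity

    fb = [[0] * 10 for _ in range(10)]
    blit_char(fb, A_5x7, 2, 1)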
3. DISCUSS THE HOMOGENEOUS COORDINATES FOR TRANSLATION, ROTATION AND SCALING
For translation: The third 2D graphics transformation we consider is that of translating a 2D line drawing by an amount Tx along the x axis and Ty along the y axis. The translation equations may be written as:

    x' = x + Tx
    y' = y + Ty        (5)

We wish to write Equations 5 as a single matrix equation. This requires that we find a 2 by 2 matrix

    [ a  b ]
    [ c  d ]

such that x*a + y*c = x + Tx. From this it is clear that a = 1 and c = 0, but there is no way to obtain the Tx term required in the first equation of Equations 5. Similarly, we must have x*b + y*d = y + Ty, so b = 0 and d = 1, and there is no way to obtain the Ty term required in the second equation of Equations 5.

For Rotation: Suppose we wish to rotate a figure around the origin of our 2D coordinate system. The figure below shows the point (x,y) being rotated theta degrees (by convention, the counter-clockwise direction is positive) about the origin.

Rotating a Point About the Origin


The equations for the changes in the x and y coordinates are:

    x' = x*cos(theta) - y*sin(theta)
    y' = x*sin(theta) + y*cos(theta)        (1)

If we consider the coordinates of the point (x,y) as a one-row, two-column matrix [ x y ] and the rotation matrix

    R = [  cos(theta)  sin(theta) ]
        [ -sin(theta)  cos(theta) ]

then, given the J definition for matrix product, mp =: +/ . *, we can write Equations (1) as the matrix equation

    [ x' y' ] = [ x y ] mp R        (2)

We can define a J monad, rotate, which produces the rotation matrix. This monad is applied to an angle, expressed in degrees. Positive angles are measured in a counter-clockwise direction by convention.

    rotate =: monad def '2 2 $ 1 1 _1 1 * 2 1 1 2 o. (o. y.) % 180'
    rotate 90
     0 1
    _1 0
    rotate 360
              1 _2.44921e_16
    2.44921e_16            1

We can rotate the square of Figure 1 by:

    square mp rotate 90
      0  0
      0 10
    _10 10
    _10  0
      0  0

producing the rectangle shown in the figure below.

The Square, Rotated 90 Degrees

For Scaling: Next we consider the problem of scaling (changing the size of) a 2D line drawing. Size changes are always made from the origin of the coordinate system. The equations for the changes in the x and y coordinates are:

    x' = x * Sx
    y' = y * Sy        (3)

As before, we consider the coordinates of the point (x,y) as a one-row, two-column matrix [ x y ] and the scaling matrix

    S = [ Sx  0 ]
        [ 0  Sy ]

then we can write Equations (3) as the matrix equation

    [ x' y' ] = [ x y ] mp S        (4)

We next define a J monad, scale, which produces the scale matrix. This monad is applied to a list of two scale factors for x and y respectively.

    scale =: monad def '2 2 $ (0 { y.),0,0,(1 { y.)'
    scale 2 3
    2 0
    0 3

We can now scale the square of Figure 1 by:

    square mp scale 2 3
     0  0
    20  0
    20 30
     0 30
     0  0

producing the square shown in the figure below.

Scaling a Square
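The failure of the 2 by 2 form for translation is exactly what motivates homogeneous coordinates: each point (x, y) is carried as (x, y, 1), and translation, rotation and scaling all become 3 by 3 matrix products that compose freely. A minimal sketch (in Python with NumPy rather than the J used above, purely for illustration; the row-vector convention matches the J examples):

    import numpy as np

    def translate(tx, ty):
        return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1]], dtype=float)

    def rotate(deg):
        t = np.radians(deg)
        return np.array([[ np.cos(t), np.sin(t), 0],
                         [-np.sin(t), np.cos(t), 0],
                         [0, 0, 1]])

    def scale(sx, sy):
        return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

    # The square of Figure 1, one homogeneous point [x, y, 1] per row.
    square = np.array([[0, 0, 1], [10, 0, 1], [10, 10, 1],
                       [0, 10, 1], [0, 0, 1]], dtype=float)

    # Translation is now an ordinary matrix product, and transformations
    # compose by multiplying their matrices together.
    print(square @ translate(5, -3))
    print(square @ (scale(2, 3) @ translate(5, -3)))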
4. DESCRIBE THE FOLLOWING WITH RESPECT TO PROJECTION:


A) Parallel Projection

In parallel projection, the z coordinate is discarded and parallel lines from each vertex on the object are extended until they intersect the view plane. The point of intersection is the projection of the vertex. We connect the projected vertices by line segments which correspond to connections on the original object.

Parallel projection of an object to the view plane

As shown in the figure above, a parallel projection preserves relative proportions of objects but does not produce realistic views.

B) Perspective Projection

The perspective projection, on the other hand, produces realistic views but does not preserve relative proportions. In perspective projection, the lines of projection are not parallel. Instead, they all converge at a single point called the center of projection or projection reference point. The object positions are transformed to the view plane along these converging projection lines, and the projected view of an object is determined by calculating the intersection of the projection lines with the view plane, as shown in the figure below.

Perspective projection of an object to the view plane
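A minimal numerical sketch of the two projections (Python; placing the center of projection at the origin and the view plane at z = d is an assumed, conventional setup):

    def perspective(point, d=1.0):
        """Project (x, y, z) onto the view plane z = d through the origin."""
        x, y, z = point
        return (x * d / z, y * d / z)

    def parallel(point):
        """Orthographic parallel projection: simply discard z."""
        x, y, _ = point
        return (x, y)

    p = (4.0, 2.0, 8.0)
    print(perspective(p))   # (0.5, 0.25) - shrinks with distance (realistic)
    print(parallel(p))      # (4.0, 2.0)  - proportions preserved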



C) Types of Parallel Projections


Parallel projections are basically categorized into two types, depending on the relation between the direction of projection and the normal to the view plane. When the direction of projection is normal (perpendicular) to the view plane, we have an orthographic parallel projection; otherwise, we have an oblique parallel projection.

Orthographic Projection

An orthographic projection can display more than one face of an object; such an orthographic projection is called an axonometric orthographic projection. It uses projection planes (view planes) that are not normal to a principal axis. Axonometric projections resemble the perspective projection in this way, but differ in that the foreshortening is uniform rather than being related to the distance from the center of projection. Parallelism of lines is preserved but angles are not. The most commonly used axonometric orthographic projection is the isometric projection. The isometric projection can be generated by aligning the view plane so that it intersects each coordinate axis in which the object is defined at the same distance from the origin. As shown in the figure below, the isometric projection is obtained by aligning the projection vector with the cube diagonal. It uses the useful property that all three principal axes are equally foreshortened, allowing measurements along the axes to be made to the same scale (hence the name: iso for equal, metric for measure).

Isometric projection of an object onto a viewing plane


Oblique Projection


An oblique projection is obtained by projecting points along parallel lines that are not perpendicular to the projection plane; the view plane normal and the direction of projection are not the same. Oblique projections are further classified as cavalier and cabinet projections.

For the cavalier projection, the direction of projection makes a 45 degree angle with the view plane. As a result, the projection of a line perpendicular to the view plane has the same length as the line itself; that is, there is no foreshortening.

Cavalier projections of the unit cube

When the direction of projection makes an angle of arctan(2), approximately 63.4 degrees, with the view plane, the resulting view is called a cabinet projection. For this angle, lines perpendicular to the viewing surface are projected at one-half their actual length. Cabinet projections appear more realistic than cavalier projections because of this reduction in the length of perpendiculars. The figure below shows examples of cabinet projections of a cube.

Cabinet projections of the Unit Cube (shown for angles of 45 degrees in (a) and 30 degrees in (b); edges perpendicular to the view plane appear at one-half their length)
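The cavalier/cabinet distinction can be stated in a few lines of code (Python; the sample point and angles are examples, with L the foreshortening factor applied to the z direction):

    import math

    def oblique(point, L, alpha_deg):
        """Project (x, y, z) onto the z = 0 plane along oblique parallel rays."""
        x, y, z = point
        a = math.radians(alpha_deg)
        return (x + L * z * math.cos(a), y + L * z * math.sin(a))

    p = (1.0, 1.0, 1.0)
    print(oblique(p, 1.0, 45.0))    # cavalier: no foreshortening of z
    print(oblique(p, 0.5, 63.4))    # cabinet: z edges at one-half length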


FACULTY : PANKAJ SIR

MC0073 SYSTEM PROGRAMMING


(Book ID: B0811) Assignment Set 1


1. DESCRIBE THE FOLLOWING WITH RESPECT TO LANGUAGE SPECIFICATION:


A) Programming Language Grammars

The lexical and syntactic features of a programming language are specified by its grammar. This section discusses key concepts and notions from formal language grammars. A language L can be considered to be a collection of valid sentences. Each sentence can be looked upon as a sequence of words, and each word as a sequence of letters or graphic symbols acceptable in L. A language specified in this manner is known as a formal language. A formal language grammar is a set of rules which precisely specify the sentences of L. It is clear that natural languages are not formal languages, due to their rich vocabulary; however, programming languages are formal languages.

Terminal symbols, alphabet and strings

The alphabet of L, denoted by the Greek symbol Σ, is the collection of symbols in its character set. We will use lower case letters a, b, c, etc. to denote symbols in Σ. A symbol in the alphabet is known as a terminal symbol (T) of L. The alphabet can be represented using the mathematical notation of a set, e.g.

    Σ = {a, b, ..., z, 0, 1, ..., 9}

Here the symbols {, ',' and } are part of the notation; we call them metasymbols to differentiate them from terminal symbols. Throughout this discussion we assume that metasymbols are distinct from the terminal symbols. If this is not the case, i.e. if a terminal symbol and a metasymbol are identical, we enclose the terminal symbol in quotes to differentiate it from the metasymbol. For example, when the set of punctuation symbols of English is defined, ',' denotes the terminal symbol comma.

A string is a finite sequence of symbols. We will represent strings by Greek symbols α, β, γ, etc. Thus α = axy is a string over Σ. The length of a string is the number of symbols in it. Note that the absence of any symbol is also a string, the null string ε. The concatenation operation combines two strings into a single string, and is used to build larger strings from existing strings. Given two strings α and β, concatenation of α with β yields a string formed by putting the sequence of symbols of α before the sequence of symbols of β. For example, if α = ab and β = axy, then the concatenation of α and β, represented as α.β or simply αβ, gives the string abaxy. The null string can also participate in a concatenation: a.ε = ε.a = a.

Nonterminal symbols

A nonterminal symbol (NT) is the name of a syntax category of a language, e.g. noun, verb, etc. An NT is written as a single capital letter, or as a name enclosed between < and >, e.g. A or <Noun>. During grammatical analysis, a nonterminal symbol represents an instance of the category. Thus, <Noun> represents a noun.

Productions

A production, also called a rewriting rule, is a rule of the grammar. A production has the form

    <nonterminal symbol> ::= <string of Ts and NTs>

and defines the fact that the NT on the LHS of the production can be rewritten as the string of Ts and NTs appearing on the RHS. When an NT can be rewritten as one of many different strings, the symbol | (standing for "or") is used to separate the strings on the RHS, e.g.
    < Article > ::= a | an | the


The string on the RHS of a production can be a concatenation of component strings, e.g. the production

    < Noun Phrase > ::= < Article > < Noun >

expresses the fact that a noun phrase consists of an article followed by a noun.

Each grammar G defines a language LG. G contains an NT called the distinguished symbol, or the start NT, of G. Unless otherwise specified, we use the symbol S as the distinguished symbol of G. A valid string of LG is obtained by using the following procedure:

1. Let α = S.
2. While α is not a string of terminal symbols:
   a) Select an NT appearing in α, say X.
   b) Replace X by a string appearing on the RHS of a production of X.

Example: Grammar (1.1) defines a language consisting of noun phrases in English:

    < Noun Phrase > ::= < Article > < Noun >
    < Article >     ::= a | an | the
    < Noun >        ::= boy | apple

< Noun Phrase > is the distinguished symbol of the grammar; "the boy" and "an apple" are some valid strings in the language.

Definition (Grammar): A grammar G of a language LG is a quadruple (Σ, SNT, S, P) where Σ is the alphabet of LG, i.e. the set of Ts, SNT is the set of NTs, S is the distinguished symbol, and P is the set of productions.
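The generation procedure above is directly executable; a small sketch (Python, encoding grammar (1.1); the random choice of production stands in for "select an NT ... replace"):

    import random

    GRAMMAR = {
        "<Noun Phrase>": [["<Article>", "<Noun>"]],
        "<Article>": [["a"], ["an"], ["the"]],
        "<Noun>": [["boy"], ["apple"]],
    }

    def derive(start="<Noun Phrase>"):
        form = [start]                                      # step 1: alpha = S
        while any(sym in GRAMMAR for sym in form):          # step 2
            i = next(i for i, s in enumerate(form) if s in GRAMMAR)  # step 2a
            rhs = random.choice(GRAMMAR[form[i]])           # step 2b
            form = form[:i] + rhs + form[i + 1:]
        return " ".join(form)

    print(derive())   # e.g. "the boy" or "an apple"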
Derivation, reduction and parse trees


A grammar G is used for two purposes: to generate valid strings of LG and to recognize valid strings of LG. The derivation operation helps to generate valid strings, while the reduction operation helps to recognize valid strings. A parse tree is used to depict the syntactic structure of a valid string as it emerges during a sequence of derivations or reductions.

Derivation

Let a production p1 of grammar G be of the form

    p1 : A ::= γ

and let σ be a string containing A, say σ = αAβ. Then replacement of A by γ in σ constitutes a derivation according to production p1, yielding the string αγβ. We use the notation N => η to denote direct derivation of η from N, and N =>* η to denote transitive derivation of η (i.e. derivation in zero or more steps) from N. Thus, A => γ only if A ::= γ is a production of G, and A =>* δ if A => ... => δ. We can use this notation to define a valid string according to a grammar G as follows: α is a valid string according to G only if S =>* α, where S is the distinguished symbol of G.

Example: Derivation of the string "the boy" according to the grammar can be depicted as

    < Noun Phrase > => < Article > < Noun > => the < Noun > => the boy

A string α such that S =>* α is a sentential form of LG. The string α is a sentence of LG if it consists of only Ts.

Example: Consider the grammar G:

    < Sentence >    ::= < Noun Phrase > < Verb Phrase >
    < Noun Phrase > ::= < Article > < Noun >
    < Verb Phrase > ::= < Verb > < Noun Phrase >
    < Article >     ::= a | an | the
    < Noun >        ::= boy | apple
    < Verb >        ::= ate


The following strings are sentential forms of LG:

    < Noun Phrase > < Verb Phrase >
    the boy < Verb Phrase >
    < Noun Phrase > ate < Noun Phrase >
    the boy ate < Noun Phrase >
    the boy ate an apple

However, only "the boy ate an apple" is a sentence.

Reduction: To determine the validity of the string "the boy ate an apple" according to the grammar, we perform the following reductions:

    Step   String
    0      the boy ate an apple
    1      < Article > boy ate an apple
    2      < Article > < Noun > ate an apple
    3      < Article > < Noun > < Verb > an apple
    4      < Article > < Noun > < Verb > < Article > apple
    5      < Article > < Noun > < Verb > < Article > < Noun >
    6      < Noun Phrase > < Verb > < Article > < Noun >
    7      < Noun Phrase > < Verb > < Noun Phrase >
    8      < Noun Phrase > < Verb Phrase >
    9      < Sentence >

The string is a sentence of LG since we are able to construct the reduction sequence: the boy ate an apple =>* < Sentence >.
Parse trees


A sequence of derivations or reductions reveals the syntactic structure of a string with respect to G. We depict the syntactic structure in the form of a parse tree. Derivation according to a production A ::= γ gives rise to an elemental parse tree in which A is the parent node and the symbols of γ are its child nodes.

B) Classification of Grammars

Grammars are classified on the basis of the nature of the productions used in them (Chomsky, 1963). Each grammar class has its own characteristics and limitations.

Type 0 Grammars

These grammars, known as phrase structure grammars, contain productions of the form

    α ::= β

where both α and β can be strings of Ts and NTs. Such productions permit arbitrary substitution of strings during derivation or reduction; hence they are not relevant to the specification of programming languages.

Type 1 grammars

These grammars are known as context sensitive grammars because their productions specify that derivation or reduction of strings can take place only in specific contexts. A Type-1 production has the form

    α A β ::= α π β

Thus, a string π in a sentential form can be replaced by A (or vice versa) only when it is enclosed by the strings α and β. These grammars are also not particularly relevant for PL specification, since recognition of PL constructs is not context sensitive in nature.

Type 2 grammars

These grammars impose no context requirements on derivations or reductions. A typical Type-2 production is of the form

    A ::= π

which can be applied independent of its context. These grammars are therefore known as context free grammars (CFG). CFGs are ideally suited for programming language specification.

Type 3 grammars

Type-3 grammars are characterized by productions of the form

    A ::= tB | t    or    A ::= Bt | t

Note that these productions also satisfy the requirements of Type-2 grammars. The specific form of the RHS alternatives, namely a single T or a string containing a single T and a single NT, gives some practical advantages in scanning. Type-3 grammars are also known as linear grammars or regular grammars. These are further categorized into left-linear and right-linear grammars, depending on whether the NT in the RHS alternative appears at the extreme left or the extreme right.

Operator grammars

Definition (Operator grammar (OG)): An operator grammar is a grammar none of whose productions contain two or more consecutive NTs in any RHS alternative. Thus, nonterminals occurring in an RHS string are separated by one or more terminal symbols. All terminal symbols occurring in the RHS strings are called operators of the grammar.

C) Binding and Binding Times

Definition (Binding): A binding is the association of an attribute of a program entity with a value. Binding time is the time at which a binding is performed. Thus the type attribute of a variable var is bound to a type when its declaration is processed, and the size attribute of that type is bound to a value sometime prior to this binding. We are interested in the following binding times:

1. Language definition time of L
2. Language implementation time of L
3. Compilation time of P
4. Execution init time of proc
5. Execution time of proc

where L is a programming language, P is a program written in L, and proc is a procedure in P. Note that language implementation time is the time when a language translator is designed. The preceding list of binding times is not exhaustive; other binding times can be defined, e.g. binding at the linking time of P. The language definition of L specifies binding times for the attributes of the various entities of programs written in L.

Binding of the keywords of Pascal to their meanings is performed at language definition time. This is how keywords like program, procedure, begin and end get their meanings. These bindings apply to all programs written in Pascal. At language implementation time, the compiler designer performs certain bindings; for example, the size of type integer is bound to n bytes, where n is a number determined by the architecture of the target machine. Binding of the type attributes of variables is performed at the compilation time of P. The memory addresses of the local variables info and p of procedure proc are

bound at every execution init time of procedure proc. The value attributes of variables are bound (possibly more than once) during an execution of proc. A memory address is bound to the pointer p when the procedure call new(p) is executed.

Static and dynamic bindings

Definition (Static binding): A static binding is a binding performed before the execution of a program begins.

Definition (Dynamic binding): A dynamic binding is a binding performed after the execution of a program has begun.

2. WHAT IS RISC AND HOW IS IT DIFFERENT FROM CISC?


CISC: A Complex Instruction Set Computer (CISC) supplies a large number of complex instructions at the assembly language level. Assembly language is a low-level computer programming language in which each statement corresponds to a single machine instruction. CISC instructions facilitate the extensive manipulation of low-level computational elements and events such as memory, binary arithmetic, and addressing. The goal of the CISC architectural philosophy is to make microprocessors easy and flexible to program and to provide for more efficient memory use.

The CISC philosophy was unquestioned during the 1960s, when early computing machines such as the popular Digital Equipment Corporation PDP-11 family of minicomputers were being programmed in assembly language and memory was slow and expensive. CISC machines merely used the then-available technologies to optimize computer performance. Their advantages included the following:

1. A new processor design could incorporate the instruction set of its predecessor as a subset of an ever-growing language; there was no need to reinvent the wheel, code-wise, with each design cycle.
2. Fewer instructions were needed to implement a particular computing task, which led to lower memory use for program storage and fewer time-consuming instruction fetches from memory.
3. Simpler compilers sufficed, as complex CISC instructions could be written that closely resembled the instructions of high-level languages. In effect, CISC made a computer's assembly language more like a high-level language to begin with, leaving the compiler less to do.

Some disadvantages of the CISC design philosophy are as follows:

1. The first advantage listed above could be viewed as a disadvantage: the incorporation of older instruction sets into new generations of processors tended to force growing complexity.
2. Many specialized CISC instructions were not used frequently enough to justify their existence. The existence of each instruction needed to be justified because each one requires the storage of more microcode in the central processing unit (the final and lowest layer of code translation), which must be built in at some cost.
3. Because each CISC command must be translated by the processor into tens or even hundreds of lines of microcode, it tends to run slower than an equivalent series of simpler commands that do not require so much translation. All translation requires time.
4. Because a CISC machine builds complexity into the processor, where all its various commands must be translated into microcode for actual execution, the design of CISC hardware is more difficult and the CISC design cycle correspondingly long; this means delay in getting to market with a new chip.

The terms CISC and RISC (Reduced Instruction Set Computer) were coined at this time to reflect the widening split in computer-architectural philosophy.

RISC: The Reduced Instruction Set Computer, or RISC, is a microprocessor CPU design philosophy that favors a simpler set of instructions that all take about the same amount of time to execute. The most common RISC microprocessors are the AVR, PIC, ARM, DEC Alpha, PA-RISC, SPARC, MIPS, and IBM's PowerPC.
RISC, or Reduced Instruction Set Computer, is a type of microprocessor architecture that utilizes a small, highly optimized set of instructions, rather than the more specialized set of instructions often found in other types of architectures.

RISC characteristics:

- Small number of machine instructions: less than 150
- Small number of addressing modes: less than 4
- Small number of instruction formats: less than 4
- Instructions of the same length: 32 bits (or 64 bits)
- Single-cycle execution
- Load/store architecture
- Large number of GPRs (general purpose registers): more than 32
- Hardwired control
- Support for HLLs (high level languages)

RISC and x86

However, despite many successes, RISC has made few inroads into the desktop PC and commodity server markets, where Intel's x86 platform remains the dominant processor architecture (Intel is facing increased competition from AMD, but even AMD's processors implement the x86 platform, or a 64-bit superset known as x86-64). There are several reasons for this. One is the very large base of proprietary PC applications written for x86, whereas no RISC platform has a similar installed base; this meant PC users were locked into the x86. The second is that, although RISC was indeed able to scale up in performance quite quickly and cheaply, Intel took advantage of its large market by spending vast amounts of money on processor development. Intel could spend many times as much as any RISC manufacturer on improving low-level design and manufacturing. The same could not be said about smaller firms like Cyrix and NexGen, but they realized that they could apply pipelined design philosophies and practices to the x86 architecture, either directly as in the 686 and MII series, or indirectly (via extra decoding stages) as in the Nx586 and AMD K5. Later, more powerful processors such as the Intel P6 and AMD K6 had similar RISC-like units that executed a stream of micro-operations generated from decoding stages that split most x86 instructions into several pieces. Today, these principles have been further refined and are used by modern x86 processors such as the Intel Core 2 and AMD K8. The first available chip deploying such techniques was the NexGen Nx586, released in 1994 (while the AMD K5 was severely delayed and released in 1995). As of 2007, the x86 designs (whether Intel's or AMD's) are as fast as (if not faster than) the fastest true RISC single-chip solutions available.


Addressing Modes of CISC: the Motorola 68000

The 68000 provides register-to-register, register-to-memory, memory-to-register, and memory-to-memory operations, and supports a wide variety of addressing modes:

- Immediate mode: the operand immediately follows the instruction.
- Absolute address: the address (in either the "short" 16-bit form or the "long" 32-bit form) of the operand immediately follows the instruction.
- Program counter relative with displacement: a displacement value is added to the program counter to calculate the operand's address. The displacement can be positive or negative.
- Program counter relative with index and displacement: the instruction contains both the identity of an "index register" and a trailing displacement value. The contents of the index register, the displacement value, and the program counter are added together to get the final address.
- Register direct: the operand is contained in an address or data register.
- Address register indirect: an address register contains the address of the operand.
- Address register indirect with predecrement or postincrement: an address register contains the address of the operand in memory. With the predecrement option set, a predetermined value is subtracted from the register before the (new) address is used. With the postincrement option set, a predetermined value is added to the register after the operation completes.
- Address register indirect with displacement: a displacement value is added to the register's contents to calculate the operand's address. The displacement can be positive or negative.
- Address register relative with index and displacement: the instruction contains both the identity of an "index register" and a trailing displacement value. The contents of the index register, the displacement value, and the specified address register are added together to get the final address.
RISC vs CISC

CISC:
- Emphasis on hardware
- Includes multi-clock complex instructions
- Memory-to-memory: "LOAD" and "STORE" incorporated in instructions
- Small code sizes, high cycles per second
- Transistors used for storing complex instructions

RISC:
- Emphasis on software
- Single-clock, reduced instructions only
- Register-to-register: "LOAD" and "STORE" are independent instructions
- Low cycles per second, large code sizes
- Spends more transistors on memory registers


3. EXPLAIN THE FOLLOWING WITH RESPECT TO THE DESIGN SPECIFICATIONS OF AN ASSEMBLER:


A) Data Structures

The second step in our design procedure is to establish the databases that we have to work with.

Pass 1 data structures:

1. Input source program.
2. A Location Counter (LC), used to keep track of each instruction's location.
3. A table, the Machine-Operation Table (MOT), that indicates the symbolic mnemonic for each instruction and its length (two, four, or six bytes).
4. A table, the Pseudo-Operation Table (POT), that indicates the symbolic mnemonic and the action to be taken for each pseudo-op in pass 1.
5. A table, the Symbol Table (ST), that is used to store each label and its corresponding value.
6. A table, the Literal Table (LT), that is used to store each literal encountered and its corresponding assignment location.
7. A copy of the input to be used by pass 2.

Pass 2 data structures:

1. Copy of the source program input to pass 1.
2. Location Counter (LC).
3. A table, the Machine-Operation Table (MOT), that indicates for each instruction: symbolic mnemonic, length (two, four, or six bytes), binary machine opcode, and format of instruction.
4. A table, the Pseudo-Operation Table (POT), that indicates the symbolic mnemonic and the action to be taken for each pseudo-op in pass 2.
5. A table, the Symbol Table (ST), prepared by pass 1, containing each label and its corresponding value.
6. A table, the Base Table (BT), that indicates which registers are currently specified as base registers by USING pseudo-ops and what the specified contents of these registers are.
7. A workspace, INST, that is used to hold each instruction as its various parts are being assembled together.
8. A workspace, PRINT LINE, used to produce a printed listing.
9. A workspace, PUNCH CARD, used prior to actual output for converting the assembled instructions into the format needed by the loader.
10. An output deck of assembled instructions in the format needed by the loader.
Format of Data Structures

The third step in our design procedure is to specify the format and content of each of the data structures. Pass 2 requires a machine-operation table (MOT) containing the name, length, binary code and format; pass 1 requires only the name and length. Instead of using two different tables, we construct a single MOT. The MOT and the pseudo-operation table (POT) are examples of fixed tables: their contents are not filled in or altered during the assembly process. The following table depicts the format of the machine-op table (MOT).
    Mnemonic opcode        Binary opcode   Instruction length  Instruction format  Not used here
    (4 bytes, characters)  (1 byte, hex)   (2 bits, binary)    (3 bits, binary)    (3 bits)
    'Abbb'                 5A              10                  001
    'AHbb'                 4A              10                  001
    'ALbb'                 5E              10                  001
    'ALRb'                 1E              01                  000
    ...

(6 bytes per entry; 'b' represents a blank.)

B) Pass 1 & Pass 2 assembler flowcharts

Pass Structure of Assemblers

We discuss two-pass and single-pass assembly schemes in this section.

Two pass translation

Two pass translation of an assembly language program can handle forward references easily. LC processing is performed in the first pass, and symbols defined in the program are entered into the symbol table. The second pass synthesizes the target form using the address information found in the symbol table. In effect, the first pass performs analysis of the source program while the second pass performs synthesis of the target program. The first pass constructs an intermediate representation (IR) of the source program for use by the second pass. This representation consists of two main components: data structures, e.g. the symbol table, and a processed form of the source program. The latter component is called intermediate code (IC).

Single-pass translation
LC processing and construction of the symbol table proceed as in two-pass translation. The problem of forward references is tackled using a process called backpatching: the operand field of an instruction containing a forward reference is left blank initially, and the address of the forward-referenced symbol is put into this field when its definition is encountered. Consider the following program and its assembled form:

            START  101
            READ   N              101)  + 09 0 113
            MOVER  BREG, ONE      102)  + 04 2 115
            MOVEM  BREG, TERM     103)  + 05 2 116
    AGAIN   MULT   BREG, TERM     104)  + 03 2 116
            MOVER  CREG, TERM     105)  + 04 3 116
            ADD    CREG, ONE      106)  + 01 3 115
            MOVEM  CREG, TERM     107)  + 05 3 116
            COMP   CREG, N        108)  + 06 3 113
            BC     LE, AGAIN      109)  + 07 2 104
            MOVEM  BREG, RESULT   110)  + 05 2 114
            PRINT  RESULT         111)  + 10 0 114
            STOP                  112)  + 00 0 000
    N       DS     1              113)
    RESULT  DS     1              114)
    ONE     DC     '1'            115)  + 00 0 001
    TERM    DS     1              116)
            END

In the above program, the instruction corresponding to the statement
MOVER BREG, ONE can only be partially synthesized, since ONE is a forward reference. Hence the instruction opcode and the address of BREG are assembled to reside in location 102. The need for inserting the second operand's address at a later stage is recorded by adding an entry to the Table of Incomplete Instructions (TII). This entry is a pair (<instruction address>, <symbol>), here (102, ONE). By the time the END statement is processed, the symbol table contains the addresses of all symbols defined in the source program, and TII contains information describing all forward references. The assembler can now process each entry in TII to complete the concerned instruction. For example, the entry (102, ONE) is processed by obtaining the address of ONE from the symbol table (115) and inserting it in the operand address field of the instruction with assembled address 102. Alternatively, entries in TII can be processed in an incremental manner: when the definition of some symbol symb is encountered, all forward references to symb can be processed.

Design of a Two-Pass Assembler
Tasks performed by the passes of a two-pass assembler are as follows:
Pass I:
1. Separate the symbol, mnemonic opcode and operand fields.
2. Build the symbol table.
3. Perform LC processing.
4. Construct the intermediate representation.

Pass II:
1. Synthesize the target program.

Pass I performs analysis of the source program and synthesis of the intermediate representation, while Pass II processes the intermediate representation to synthesize the target program. The design details of the assembler passes are discussed after introducing advanced assembler directives and their influence on LC processing.
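Returning to the single-pass scheme above, a small C++ sketch of the TII backpatching step; the names and data layout are assumptions, and the machine code is simplified to one operand-address field per instruction:

    #include <string>
    #include <unordered_map>
    #include <vector>

    struct TiiEntry { int instrAddress; std::string symbol; };

    // After END: patch every incomplete instruction using the symbol table.
    void backpatch(const std::vector<TiiEntry>& tii,
                   const std::unordered_map<std::string, int>& symtab,
                   std::unordered_map<int, int>& operandField) {
        for (const auto& e : tii)
            // e.g. entry (102, "ONE"): put the address of ONE (115) into
            // the operand field of the instruction assembled at 102.
            operandField[e.instrAddress] = symtab.at(e.symbol);
    }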

4. DEFINE THE FOLLOWING:


A) Parsing
Parsing transforms input text or a string into a data structure, usually a tree, which is suitable for later processing and which captures the implied hierarchy of the input. Lexical analysis creates tokens from a sequence of input characters, and it is these tokens that are processed by a parser to build a data structure such as a parse tree or an abstract syntax tree. Parsing is the process of analyzing a sequence of tokens to determine its grammatical structure with respect to a given formal grammar. A parser is the component of a compiler that carries out this task. Conceptually, the parser accepts a sequence of tokens and produces a parse tree. In practice this might not occur:
1. The source program might have errors. Shamefully, we will do very little error handling.
2. Real compilers produce (abstract) syntax trees, not parse trees (concrete syntax trees). We don't do this, for the pedagogical reasons given previously.

There are three classes of grammar-based parsers:
1. Universal
2. Top-down
3. Bottom-up

The universal parsers are not used in practice as they are inefficient; we will not discuss them. As expected, top-down parsers start from the root of the tree and proceed downward, whereas bottom-up parsers start from the leaves and proceed upward. The commonly used top-down and bottom-up parsers are not universal; that is, there are (context-free) grammars that cannot be used with them. The LL and LR parsers are important in practice. Hand-written parsers are often LL; specifically, the predictive parsers we looked at in chapter two are for LL grammars. The LR grammars form a larger class. Parsers for this class are usually constructed with the aid of automatic tools.

Parse tree
A parse tree depicts the steps in parsing, hence it is useful for understanding the process of parsing.

A valid parse tree for a grammar G is a tree:
- whose root is the start symbol of G,
- whose interior nodes are nonterminals of G,
- whose children of a node T (from left to right) correspond to the symbols on the right-hand side of some production for T in G, and
- whose leaf nodes are terminal symbols of G.

Every sentence generated by a grammar has a corresponding parse tree, and every valid parse tree exactly covers a sentence generated by the grammar.
[Figure: Parse tree for the arithmetic expression 1 + 2*3 - the root is +, its left child is 1, and its right child is the subtree for 2*3 (* with children 2 and 3).]
Overview of the process
The following example demonstrates the common case of parsing a computer language with two levels of grammar: lexical and syntactic. The first stage is token generation, or lexical analysis, by which the input character stream is split into meaningful symbols defined by a grammar of regular expressions. For example, a calculator program would look at an input such as "12*(3+4)^2" and split it into the tokens 12, *, (, 3, +, 4, ), ^ and 2, each of which is a meaningful symbol in the context of an arithmetic expression. The scanner would contain rules to tell it that the characters *, +, ^, ( and ) mark the start of a new token, so meaningless tokens like "12*" or "(3" will not be generated. The next stage is syntactic parsing or syntactic analysis, which checks that the tokens form an allowable expression. This is usually done with reference to a context-free grammar, which recursively defines the components that can make up an expression and the order in which they must appear. However, not all rules defining programming languages can be expressed by context-free grammars alone, for example type validity and proper declaration of identifiers. These rules can be formally expressed with attribute grammars.
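A toy C++ scanner for expressions such as "12*(3+4)^2", illustrating the token-generation stage described above; the Token type here is an assumption made for the sketch:

    #include <cctype>
    #include <string>
    #include <vector>

    struct Token { char kind; std::string text; };  // kind: 'n' = number, else the symbol itself

    std::vector<Token> scan(const std::string& s) {
        std::vector<Token> out;
        for (std::size_t i = 0; i < s.size(); ) {
            if (std::isdigit(static_cast<unsigned char>(s[i]))) {
                std::size_t j = i;  // consume a maximal run of digits as one number token
                while (j < s.size() && std::isdigit(static_cast<unsigned char>(s[j]))) ++j;
                out.push_back({'n', s.substr(i, j - i)});
                i = j;
            } else {
                out.push_back({s[i], std::string(1, s[i])});  // *, +, (, ), ^ ...
                ++i;
            }
        }
        return out;  // "12*(3+4)^2" -> 12, *, (, 3, +, 4, ), ^, 2
    }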
Types of Parsers


The task of the parser is essentially to determine if and how the input can be derived from the start symbol of the grammar. This can be done in essentially two ways:

Top-down parsing: A parser can start with the start symbol and try to transform it to the input. Intuitively, the parser starts from the largest elements and breaks them down into incrementally smaller parts. LL parsers are examples of top-down parsers.

Bottom-up parsing: A parser can start with the input and attempt to rewrite it to the start symbol. Intuitively, the parser attempts to locate the most basic elements, then the elements containing these, and so on. LR parsers are examples of bottom-up parsers. Another term used for this type of parser is shift-reduce parsing.

Another important distinction is whether the parser generates a leftmost derivation or a rightmost derivation (see context-free grammar). LL parsers generate a leftmost derivation and LR parsers generate a rightmost derivation (although usually in reverse).

Top-down parsing
The compiler parses input from a programming language into assembly language or an internal representation by matching the incoming symbols to Backus-Naur form production rules. An LL parser, also called a top-down parser, applies each production rule to the incoming symbols by working from the left-most symbol yielded by a production rule and then proceeding to the next production rule for each non-terminal symbol encountered. In this way the parsing starts on the left of the result side (right side) of the production rule and evaluates non-terminals from the left first; thus it proceeds down the parse tree for each new non-terminal before continuing to the next symbol of a production rule.

Bottom-up parsing
Bottom-up parsing is a parsing method that works by identifying terminal symbols first, and combining them successively to produce nonterminals. The productions of the parser can be used to build a parse tree of a program written in human-readable source code that can be compiled to assembly language or pseudocode. Different computer languages require different parsing techniques, although it is not uncommon to use a parsing technique that is more powerful than that actually required. Bottom-up parsing methods have an advantage over top-down parsing in that they are less fussy about the grammars they can use. The most popular bottom-up technique is LALR(1). Every LL(1) grammar is also LALR(1), but many LALR(1) grammars, including the most natural grammars for a variety of common programming-language constructs, are not LL(1). Unfortunately, bottom-up techniques are more complicated to understand and to implement than top-down techniques.
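A minimal top-down (recursive-descent, LL-style) parser sketch in C++ for the illustrative grammar expr -> term { '+' term }, term -> digit { '*' digit }, evaluating as it parses; this grammar and code are assumptions for illustration, not taken from the course text:

    #include <string>

    // Parses and evaluates expressions over single digits, '+' and '*',
    // with '*' binding tighter than '+', e.g. "1+2*3" -> 7.
    struct Parser {
        std::string s;
        std::size_t pos = 0;

        int digit() { return s[pos++] - '0'; }  // terminal symbol

        int term() {                            // term -> digit { '*' digit }
            int v = digit();
            while (pos < s.size() && s[pos] == '*') { ++pos; v *= digit(); }
            return v;
        }
        int expr() {                            // expr -> term { '+' term }
            int v = term();
            while (pos < s.size() && s[pos] == '+') { ++pos; v += term(); }
            return v;
        }
    };

    // Usage: Parser p{"1+2*3"}; int value = p.expr();  // value == 7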
B) Scanning


Scanning and parsing are two important phases of compiler construction. A compiler is a program which converts the source program into machine-level language; it is a translator. Conceptually, there are three phases of analysis, with the output of one phase the input of the next; each of these phases changes the representation of the program being compiled. The phases are lexical analysis or scanning, which transforms the program from a string of characters to a string of tokens; syntax analysis or parsing, which transforms the program into some kind of syntax tree; and semantic analysis, which decorates the tree with semantic information. The character stream input is grouped into meaningful units called lexemes, which are then mapped into tokens, the latter constituting the output of the lexical analyzer. For example, any one of the following C statements

x3 = y + 3;
x3 = y + 3 ;
x3 = y+ 3 ;

but not

x 3 = y + 3;

would be grouped into the lexemes x3, =, y, +, 3, and ;. A token is a <token-name, attribute-value> pair. The hierarchical decomposition of the above statement is shown in the figure below.
[Figure: Hierarchical decomposition of the assignment statement - assignment statement -> identifier x3, =, expression; the expression -> expression (identifier y) + expression (number 3).]

C) Token
A token is a <token-name, attribute-value> pair. For example:
1. The lexeme x3 would be mapped to a token such as <id,1>. The name id is short for identifier. The value 1 is the index of the entry for x3 in the symbol table produced by the compiler. This table is used to gather information about the identifiers and to pass this information to subsequent phases.
2. The lexeme = would be mapped to the token <=>. In reality it is probably mapped to a pair whose second component is ignored. The point is that there are many different identifiers, so we need the second component, but there is only one assignment symbol =.
3. The lexeme y is mapped to the token <id,2>.
4. The lexeme + is mapped to the token <+>.
5. The number 3 is mapped to <number, something>, but what is the something? On the one hand there is only one 3, so we could just use the token <number,3>. However, there can be a difference between how this should be printed (e.g., in an error message produced by subsequent phases) and how it should be stored (fixed vs. float vs. double). Perhaps the token should point to the symbol table where an entry for this kind of 3 is stored. Another possibility is to have a separate numbers table.
6. The lexeme ; is mapped to the token <;>.

Note that non-significant blanks are normally removed during scanning. In C, most blanks are non-significant. That does not mean the blanks are unnecessary. Consider

int x;
intx;

The blank between int and x is clearly necessary, but it does not become part of any token. Blanks inside strings are an exception; they are part of the token (or, more likely, of the table entry pointed to by the second component of the token). Note that we can define identifiers, numbers, and the various symbols and punctuation without using recursion (compare with parsing below). Parsing involves a further grouping in which tokens are grouped into grammatical phrases, which are often represented in a parse tree.
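One way to represent the <token-name, attribute-value> pair in C++ (a sketch; real compilers use richer variants, and the tag set below is an assumption):

    #include <string>

    enum class Tag { Id, Number, Assign, Plus, Semi };

    // <token-name, attribute-value>: the attribute is a symbol-table index
    // for identifiers, and is unused for single-lexeme tokens like '=' or ';'.
    struct Token {
        Tag tag;
        int attribute;  // e.g. <id,1> -> {Tag::Id, 1}; <=> -> {Tag::Assign, 0}
    };

    // "x3 = y + 3 ;" scans to:
    // {Id,1} {Assign,0} {Id,2} {Plus,0} {Number,<entry for 3>} {Semi,0}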
For example, x3 = y + 3; would be parsed into a tree with = at the root, the identifier x3 as its left child, and the subtree for y + 3 (+ with children y and 3) as its right child. This parsing would result from a grammar containing rules such as:

asst-stmt -> id = expr ;
expr -> number | id | expr + expr

Note the recursive definition of expression (expr), and note also the hierarchical decomposition it produces. The division between scanning and parsing is somewhat arbitrary, in that some tasks can be accomplished by either. However, if a recursive definition is involved, it is considered parsing, not scanning.

5. DESCRIBE THE PROCESS OF BOOTSTRAPPING IN THE CONTEXT OF LINKERS.


In computing, bootstrapping refers to a process where a simple system activates another, more complicated system that serves the same purpose. It is a solution to the chicken-and-egg problem of starting a certain system without the system already functioning. The term is most often applied to the process of starting up a computer, in which a mechanism is needed to execute the software program that is responsible for executing software programs (the operating system).

Bootstrap loading
The discussions of loading up to this point have all presumed that there is already an operating system, or at least a program loader, resident in the computer to load the program of interest. The chain of programs being loaded by other programs has to start somewhere, so the obvious question is how the first program is loaded into the computer. In modern computers, the first program the computer runs after a hardware reset is invariably stored in a ROM known as the bootstrap ROM, as in "pulling one's self up by the bootstraps."

When the CPU is powered on or reset, it sets its registers to a known state. On x86 systems, for example, the reset sequence jumps to the address 16 bytes below the top of the system's address space. The bootstrap ROM occupies the top 64K of the address space, and the ROM code then starts up the computer. On IBM-compatible x86 systems, the boot ROM code reads the first block of the floppy disk into memory, or, if that fails, the first block of the first hard disk, into memory location zero and jumps to location zero. The program in block zero in turn loads a slightly larger operating-system boot program from a known place on the disk into memory and jumps to that program, which in turn loads in the operating system and starts it. (There can be even more steps, e.g., a boot manager that decides from which disk partition to read the operating-system boot program, but the sequence of increasingly capable loaders remains.)

Why not just load the operating system directly? Because you can't fit an operating-system loader into 512 bytes. The first-level loader typically is only able to load a single-segment program from a file with a fixed name in the top-level directory of the boot disk. The operating-system loader contains more sophisticated code that can read and interpret a configuration file, uncompress a compressed operating-system executable, and address large amounts of memory (on an x86 the loader usually runs in real mode, which means that it is tricky to address more than 1MB of memory). The full operating system can turn on the virtual memory system, load the drivers it needs, and then proceed to run user-level programs.

Many Unix systems use a similar bootstrap process to get user-mode programs running. The kernel creates a process, then stuffs a tiny little program, only a few dozen bytes long, into that process. The tiny program executes a system call that runs /etc/init, the user-mode initialization program that in turn runs configuration files and starts the daemons and login programs that a running system needs.

None of this matters much to the application-level programmer, but it becomes more interesting if you want to write programs that run on the bare hardware of the machine, since then you need to arrange to intercept the bootstrap sequence somewhere and run your program rather than the usual operating system. Some systems make this quite easy (just stick the name of your program in AUTOEXEC.BAT and reboot Windows 95, for example); others make it nearly impossible.
It also presents opportunities for customized systems. For example, a single-application system could be built over a Unix kernel by naming the application /etc/init.
Software Bootstrapping & Compiler Bootstrapping


Bootstrapping can also refer to the development of successively more complex, faster programming environments. The simplest environment will be, perhaps, a very basic text editor (e.g. ed) and an assembler program. Using these tools, one can write a more complex text editor and a simple compiler for a higher-level language, and so on, until one can have a graphical IDE and an extremely high-level programming language.

Compiler bootstrapping
In compiler design, a bootstrap or bootstrapping compiler is a compiler that is written in the target language, or a subset of the language, that it compiles. Examples include gcc, GHC, OCaml, BASIC, PL/I and, more recently, the Mono C# compiler.

6. DESCRIBE THE PROCEDURE FOR THE DESIGN OF A LINKER.


Design of a linker: Relocation and linking requirements in segmented addressing
The relocation requirements of a program are influenced by the addressing structure of the computer system on which it is to execute. Use of a segmented addressing structure reduces the relocation requirements of a program.

Implementation example: a linker for MS-DOS
Consider a program written in the assembly language of the Intel 8088. The ASSUME statement declares the segment registers CS and DS to be available for memory addressing; hence all memory addressing is performed by using suitable displacements from their contents. The translation-time address of A is 0196. In statement 16, a reference to A is assembled as a displacement of 0196 from the contents of the CS register. This avoids the use of an absolute address, hence the instruction is not address-sensitive. Now no relocation is needed if segment SAMPLE is to be loaded at address 2000 by a calling program (or by the OS): the effective operand address would be calculated as <CS> + 0196, which is the correct address 2196. A similar situation exists with the reference to B in statement 17. The reference to B is assembled as a displacement of 0002 from the contents of the DS register. Since the DS register would be loaded with the execution-time address of DATA_HERE, the reference to B would be automatically relocated to the correct address.

Though the use of segment registers reduces the relocation requirements, it does not completely eliminate the need for relocation. Consider statement 14,

MOV AX, DATA_HERE

which loads the segment base of DATA_HERE into the AX register preparatory to its transfer into the DS register. Since the assembler knows DATA_HERE to be a segment, it makes provision to load the higher-order 16 bits of the address of DATA_HERE into the AX register. However, it does not know the link-time address of DATA_HERE, so it assembles the MOV instruction in the immediate-operand format and puts zeroes in the operand field. It also makes an entry for this instruction in RELOCTAB so that the linker will put the appropriate address in the operand field. Inter-segment calls and jumps are handled in a similar way.

Relocation is somewhat more involved in the case of intra-segment jumps assembled in the FAR format. For example, consider the following program:

FAR_LAB EQU THIS FAR ; FAR_LAB is a FAR label
JMP FAR_LAB          ; a FAR jump

Here the displacement and the segment base of FAR_LAB are to be put in the JMP instruction itself. The assembler puts the displacement of FAR_LAB in the first two operand bytes of the instruction, and makes a RELOCTAB entry for the third and fourth operand bytes, which are to hold the segment base address. An address constant such as

ADDR_A DW OFFSET A

does not need any relocation, since the assembler can itself put the required offset in the bytes. In summary, the only RELOCTAB entries that must exist for a program using segmented memory addressing are for the bytes that contain a segment base address. For linking, however, both the segment base address and the offset of the external symbol must be computed by the linker. Hence there is no reduction in the linking requirements.
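An illustrative C++ sketch of the RELOCTAB idea for segment base addresses; the layout and names are assumptions, with each entry recording a 16-bit "hole" the linker must fill:

    #include <cstdint>
    #include <vector>

    // One RELOCTAB entry: the offset of a 16-bit hole that must receive
    // the link-time segment base address of a named segment.
    struct RelocEntry { std::uint32_t offsetInCode; int segmentIndex; };

    // Linker fix-up: write each segment's final base into the holes left
    // by the assembler (e.g. the zeroed operand of MOV AX, DATA_HERE).
    void applyRelocations(std::vector<std::uint8_t>& code,
                          const std::vector<RelocEntry>& reloctab,
                          const std::vector<std::uint16_t>& segmentBase) {
        for (const auto& r : reloctab) {
            std::uint16_t base = segmentBase[r.segmentIndex];
            code[r.offsetInCode]     = static_cast<std::uint8_t>(base & 0xFF);  // low byte (little-endian)
            code[r.offsetInCode + 1] = static_cast<std::uint8_t>(base >> 8);    // high byte
        }
    }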
FACULTY : KAMYA MAM

MC0074 STATISTICAL AND NUMERICAL METHODS USING C++


(Book ID: B0812) Assignment Set 1


1. A BOX CONTAINS 74 BRASS WASHERS, 86 STEEL WASHERS AND 40 ALUMINUM WASHERS. THREE WASHERS ARE DRAWN AT RANDOM FROM THE BOX WITHOUT REPLACEMENT. DETERMINE THE PROBABILITY THAT ALL THREE ARE STEEL WASHERS.
The box contains 74 + 86 + 40 = 200 washers in all.
Number of ways to succeed = 86C3 (choosing 3 of the 86 steel washers).
Number of possible outcomes = 200C3 (choosing any 3 of the 200 washers).
P(all three steel) = 86C3 / 200C3 = 102340 / 1313400 ≈ 0.0779
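The figure can be checked with a few lines of C++ by multiplying conditional probabilities, which avoids large factorials (a verification sketch, not part of the formal answer):

    #include <iostream>

    int main() {
        // P(all three steel) = 86/200 * 85/199 * 84/198
        double p = (86.0 / 200.0) * (85.0 / 199.0) * (84.0 / 198.0);
        std::cout << p << '\n';  // prints approximately 0.0779
        return 0;
    }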

2. DISCUSS AND DEFINE THE CORRELATION COEFFICIENT WITH A SUITABLE EXAMPLE.
Correlation is one of the most widely used statistical techniques. Whenever two variables are so related that a change in one variable results in a direct or inverse change in the other, and a greater magnitude of change in one variable corresponds to a greater magnitude of change in the other, the variables are said to be correlated, and the relationship between them is known as correlation.

We have been concerned with associating parameters such as E(X) and V(X) with the distribution of a one-dimensional random variable. If we have a two-dimensional random variable (X, Y), an analogous problem is encountered.

Definition: Let (X, Y) be a two-dimensional random variable. We define ρ_xy, the correlation coefficient between X and Y, as

ρ_xy = [E(XY) − E(X)E(Y)] / √(V(X)·V(Y))

The numerator of ρ_xy, E(XY) − E(X)E(Y), is called the covariance of X and Y. (Note that the correlation coefficient is a dimensionless quantity.)

Example 2.8: Suppose that the two-dimensional random variable (X, Y) is uniformly distributed over the triangular region

R = {(x, y) | 0 < x < y < 1}

The pdf is given as f(x, y) = 2 for (x, y) in R, and 0 elsewhere. Thus the marginal pdfs of X and of Y are

g(x) = 2(1 − x), 0 ≤ x ≤ 1
h(y) = 2y, 0 ≤ y ≤ 1

Therefore
E(X) = ∫ x·2(1 − x) dx = 1/3,  E(X²) = ∫ x²·2(1 − x) dx = 1/6
E(Y) = ∫ y·2y dy = 2/3,  E(Y²) = ∫ y²·2y dy = 1/2
V(X) = E(X²) − (E(X))² = 1/6 − 1/9 = 1/18
V(Y) = E(Y²) − (E(Y))² = 1/2 − 4/9 = 1/18
E(XY) = ∫∫_R 2xy dx dy = 1/4

(all single integrals taken over [0, 1]). Hence

ρ_xy = [1/4 − (1/3)(2/3)] / √((1/18)(1/18)) = (1/36) / (1/18) = 1/2

Degree of correlation
We can find the degree of correlation with the help of the coefficient of correlation. The following degrees of correlation are possible:
(a) Perfect correlation: When two variables change in the same direction and in the same ratio, there is perfect positive correlation; in this case the coefficient of correlation is +1. On the other hand, if two variables change in the same ratio but in opposite directions, there is perfect negative correlation, and the coefficient of correlation is −1.
(b) Absence of correlation: If a change in one variable has no effect on the other variable, correlation is completely absent. In this case the coefficient of correlation is 0.
(c) Limited correlation: If there is neither complete presence nor complete absence of correlation between two variables, we say that there is limited correlation; it can be positive as well as negative.
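For sample data, the coefficient is usually estimated as r = (nΣxy − ΣxΣy) / √[(nΣx² − (Σx)²)(nΣy² − (Σy)²)]; a small C++ sketch of that computation:

    #include <cmath>
    #include <vector>

    // Pearson correlation coefficient of paired samples x[i], y[i].
    double correlation(const std::vector<double>& x, const std::vector<double>& y) {
        double n = static_cast<double>(x.size());
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (std::size_t i = 0; i < x.size(); ++i) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i]; sxy += x[i] * y[i];
        }
        return (n * sxy - sx * sy) /
               std::sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));
    }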
3. IF X IS NORMALLY DISTRIBUTED WITH ZERO MEAN AND UNIT VARIANCE, FIND THE EXPECTATION AND VARIANCE OF X².

The equation of the normal curve is

y = (1/(σ√(2π))) e^(−(x − m)²/(2σ²))

If the mean is zero and the variance is unity, then putting m = 0 and σ = 1, the equation reduces to

f(x) = (1/√(2π)) e^(−x²/2)

Expectation of X²:

E(X²) = ∫ x² f(x) dx = 1

(integrating by parts, taking x as the first function; this also follows from V(X) = E(X²) − (E(X))² = 1 with E(X) = 0).

Fourth moment, integrating by parts and taking x³ as the first function:

E(X⁴) = ∫ x⁴ f(x) dx = 3 E(X²) = 3(1) = 3

Variance of X²:

V(X²) = E(X⁴) − (E(X²))² = 3 − (1)² = 2

(integrals taken over (−∞, ∞)). Hence the expectation of X² is 1 and its variance is 2.
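The result E(X²) = 1, V(X²) = 2 can be sanity-checked numerically; a C++ simulation sketch (not part of the analytic answer):

    #include <iostream>
    #include <random>

    int main() {
        std::mt19937 gen(42);
        std::normal_distribution<double> normal(0.0, 1.0);  // mean 0, variance 1
        double sum = 0, sumSq = 0;
        const int N = 1'000'000;
        for (int i = 0; i < N; ++i) {
            double y = normal(gen);
            y *= y;                          // y = x^2
            sum += y; sumSq += y * y;
        }
        double mean = sum / N;               // close to 1
        double var  = sumSq / N - mean * mean;  // close to 2
        std::cout << mean << ' ' << var << '\n';
        return 0;
    }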

4. THE SALES IN A PARTICULAR DEPARTMENT STORE FOR THE LAST FIVE YEARS ARE GIVEN IN THE FOLLOWING TABLE:

Year:              1974  1976  1978  1980  1982
Sales (in lakhs):   40    43    48    52    57

ESTIMATE THE SALES FOR THE YEAR 1979.
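A sketch of one standard solution, assuming Newton's forward difference interpolation over the equally spaced years (h = 2). The forward differences of 40, 43, 48, 52, 57 are

Δ: 3, 5, 4, 5;   Δ²: 2, −1, 1;   Δ³: −3, 2;   Δ⁴: 5

For 1979, u = (1979 − 1974)/2 = 2.5, so

f(1979) ≈ 40 + (2.5)(3) + (2.5·1.5/2!)(2) + (2.5·1.5·0.5/3!)(−3) + (2.5·1.5·0.5·(−0.5)/4!)(5)
        = 40 + 7.5 + 3.75 − 0.9375 − 0.1953 ≈ 50.12

so the estimated sales for 1979 are about 50.12 lakhs.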

5. FIND OUT THE GEOMETRIC MEAN OF THE FOLLOWING SERIES.


Class:      0-10  10-20  20-30  30-40  40-50
Frequency:   17    10     11     15     8

[To Come]
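A sketch of the standard grouped-data computation, GM = antilog(Σ f·log m / Σ f), using class midpoints m = 5, 15, 25, 35, 45 and Σf = 61 (values rounded):

Σ f·log₁₀ m = 17(0.6990) + 10(1.1761) + 11(1.3979) + 15(1.5441) + 8(1.6532) ≈ 75.41
GM = antilog(75.41 / 61) = antilog(1.2362) ≈ 17.2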

6. FIND THE EQUATION OF THE REGRESSION LINE OF X ON Y FROM THE FOLLOWING DATA.


x:   0   1   2   3   4
y:  10  12  27  10  30

[To Come]
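A sketch of the standard computation: the regression line of x on y is x − x̄ = b_xy (y − ȳ), with

n = 5, Σx = 10, Σy = 89, Σxy = 216, Σy² = 1973, x̄ = 2, ȳ = 17.8
b_xy = (nΣxy − ΣxΣy) / (nΣy² − (Σy)²) = (1080 − 890) / (9865 − 7921) = 190/1944 ≈ 0.0977

giving x − 2 = 0.0977(y − 17.8), i.e. x ≈ 0.098y + 0.26.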

FACULTY : SHASHANK SIR

MC0075 COMPUTER NETWORKS


(Book ID: B0813 & B0814) Assignment Set 1


1. DISCUSS THE ADVANTAGES AND DISADVANTAGES OF SYNCHRONOUS AND ASYNCHRONOUS TRANSMISSION.


There are different ways of transmitting information. In this section we study these various methods with their relative merits and demerits.

Serial & Parallel
Serial communication is the sequential transmission of the signal elements of a group representing a character or other entity of data. The characters are transmitted in a sequence over a single line, rather than simultaneously over two or more lines as in parallel transmission.

[Figure: Serial transmission - one bit at a time from transmitter (Tx) to receiver (Rx) over a single line, with a common ground.]
The sequential elements may be transmitted with or without interruption. Parallel communication refers to data transmitted byte by byte, i.e., all bits of one or more bytes are transmitted simultaneously over separate wires.

[Figure: Parallel transmission - several bits at a time over separate data lines (D0 onwards) between transmitter and receiver, with a clock line.]

Most transmission lines are serial, whereas information transfer within computers and communications devices is in parallel. Therefore there must be techniques for converting between parallel and serial, and vice versa; a Universal Asynchronous Receiver Transmitter (UART) usually accomplishes such data conversions. The serial and parallel transmission modes are compared below:

            Serial mode                    Parallel mode
Cost        Less costly (only one wire)    More costly (many wires)
Speed       Low (only 1 bit at a time)     High (more bits at a time)
Throughput  Low                            High
Used in     Longer-distance communication  Shorter-distance communication

Simplex, Half duplex & Full duplex
Simplex refers to communications in only one direction, from the transmitter to the receiver, as shown in figure (a). There is no acknowledgement of reception from the receiver, so errors cannot be conveyed to the transmitter. Half duplex refers to two-way communications, but in only one direction at a time, as shown in figure (b).

[Figure: (a) Simplex - A sends to B only; (b) Half duplex - A and B alternate, one direction at a time; (c) Full duplex - A and B transmit simultaneously.]

Full duplex refers to simultaneous two-way transmission, as shown in figure (c). For example, a radio is a simplex device, a walkie-talkie is a half-duplex device, and certain computer video cards are full-duplex devices. Similarly, a radio or TV broadcast is a simplex system, transfer of inventory data from a warehouse to an accounting office is a half-duplex system, and videoconferencing represents a full-duplex application. Full duplex provides maximum function and performance.

Synchronous & Asynchronous transmission
Synchronous transmission: Synchronous is any type of communication in which the parties communicating are "live", or present in the same space and time. A chat room where both parties must be at their computers, connected to the Internet, and using software to communicate in the chat-room protocols is a synchronous method of communication. (E-mail, by contrast, is an asynchronous mode of communication, where one party can send a note to another person and the recipient need not be online to receive it.) The synchronous mode of transmission is illustrated in the figure below.

[Figure: Synchronous serial data - a frame consisting of HEADER, DATA PACKET and TAIL, delimited by flag bytes 7E; idle line state = 7E.]

The two ends of a link are synchronized by carrying the transmitter's clock information along with the data. Bytes are transmitted continuously; if there are gaps, the transmitter inserts idle bytes as padding.
Advantages: This reduces overhead bits, and it overcomes the two main deficiencies of the asynchronous method: inefficiency and lack of error detection.
Disadvantage: For correct operation the receiver must start to sample the line at the correct instant.
Application: Used in high-speed transmission, for example HDLC.

Asynchronous transmission: Asynchronous refers to processes that proceed independently of each other until one process needs to "interrupt" the other with a request. In the client-server model, the server handles many asynchronous requests from its many clients; the client is often able to proceed with other work, or must wait on the service requested from the server.

[Figure: Asynchronous serial data - each character is framed by a start bit and a stop bit; idle line state shown as 7E.]
The asynchronous mode of transmission is illustrated in the figure above. Here a start and a stop signal are necessary before and after each character. The start signal is of the same length as an information bit; the stop signal is usually 1, 1.5 or 2 times the length of an information bit.

Advantages: Each character is self-contained; the transmitter and receiver need not be synchronized, and the transmitting and receiving clocks are independent of each other.
Disadvantages: Overhead of start and stop bits, and possible false recognition of these bits due to noise on the channel.
Application: If the channel is reliable, it is suitable for high-speed transmission, else for low-speed transmission; the most common use is in ASCII terminals.

Efficiency of transmission is the ratio of the actual message bits to the total number of bits, including message and control bits, as shown in equation (3.4). In any transmission, the synchronization, error detection, or any other bits that are not message bits are collectively referred to as overhead, as in equation (3.5). The higher the overhead, the lower the efficiency of transmission, as shown in equation (3.6).

Efficiency = M / (M + C) × 100%          (3.4)
Overhead = (1 − M / (M + C)) × 100%      (3.5)

where M = number of message bits and C = number of control bits. In other words,

Efficiency % = 100 − Overhead %          (3.6)
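As a worked illustration with assumed figures: sending one 8-bit character asynchronously with 1 start bit and 1 stop bit gives M = 8 and C = 2, so efficiency = 8/10 × 100% = 80% and overhead = 20%. The same arithmetic in C++:

    #include <iostream>

    // Efficiency and overhead per equations (3.4)-(3.6).
    int main() {
        double M = 8, C = 2;                      // 8 data bits, start + stop bit
        double efficiency = M / (M + C) * 100.0;  // 80%
        double overhead   = 100.0 - efficiency;   // 20%
        std::cout << efficiency << "% " << overhead << "%\n";
        return 0;
    }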

2. DESCRIBE THE ISO-OSI REFERENCE MODEL AND DISCUSS THE IMPORTANCE OF EVERY LAYER.
The OSI Reference Model: This reference model was proposed by the International Standards Organization (ISO) as a first step towards standardization of the protocols used in the various layers, in 1983, by Day and Zimmermann. The model is called the Open Systems Interconnection (OSI) reference model; it is referred to as OSI because it deals with connecting open systems, that is, systems that are open for communication with other systems. It consists of seven layers.

Layers of the OSI Model
The principles that were applied to arrive at the seven layers:
1. A layer should be created where a different level of abstraction is needed.
2. Each layer should perform a well-defined task.
3. The function of each layer should define internationally standardized protocols.
4. Layer boundaries should be chosen to minimize the information flow across the interfaces.
5. The number of layers should be neither too high nor too small.

7  Application
6  Presentation
5  Session
4  Transport
3  Network
2  Data Link
1  Physical

The ISO-OSI reference model is as shown in figure 2.5. As such, this model is not a network architecture, as it does not specify exact services and protocols; it just tells what each layer should do and where it lies. The bottom-most layer is referred to as the physical layer. ISO has produced standards for each layer, published separately. Each layer of the ISO-OSI reference model is discussed below.
1. Physical Layer

This is the bottom-most layer, concerned with transmitting raw bits over the communication channel (physical medium). The design issue here is making sure that when one side sends a 1 bit, it is received by the other side as a 1 bit, not as a 0 bit. The layer performs direct transmission of logical information, that is, digital bit streams, into physical phenomena in the form of electronic pulses; modulators/demodulators are used at this layer. The design issues largely deal with mechanical, electrical, and procedural interfaces, and with the physical transmission medium, which lies below this physical layer. In particular, it defines the relationship between a device and a physical medium, including the layout of pins, voltages, and cable specifications. Hubs, repeaters, network adapters and host bus adapters (HBAs, used in storage area networks) are physical-layer devices.

The major functions and services performed by the physical layer are:
- Establishment and termination of a connection to a communications medium.
- Participation in the process whereby the communication resources are effectively shared among multiple users, for example contention resolution and flow control.
- Modulation, the conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel. These signals operate over physical cabling (such as copper and fiber optic) or over a radio link.

Parallel SCSI buses operate in this layer. Various physical-layer Ethernet standards are also in this layer; Ethernet incorporates both this layer and the data link layer. The same applies to other local-area networks, such as Token Ring, FDDI, and IEEE 802.11, as well as personal-area networks such as Bluetooth and IEEE 802.15.4.

2. Data Link Layer

The Data Link layer provides the functional and procedural means to transfer data between network entities and to detect, and possibly correct, errors that may occur in the Physical layer. That is, it makes sure that the message indeed reaches the other end without corruption, signal distortion or noise. It accomplishes this task by having the sender break the input data up into frames called data frames. The DLL of the transmitter then transmits the frames sequentially and processes acknowledgement frames sent back by the receiver. After processing an acknowledgement frame, the transmitter may need to re-transmit a copy of the frame; therefore the DLL at the receiver is required to detect duplications of frames.
The best-known example of this layer is Ethernet. This layer manages the interaction of devices with a shared medium. Other examples of data link protocols are HDLC and ADCCP for point-to-point or packet-switched networks, and Aloha for local area networks. On IEEE 802 local area networks, and some non-IEEE-802 networks such as FDDI, this layer may be split into a Media Access Control (MAC) layer and the IEEE 802.2 Logical Link Control (LLC) layer. It arranges bits from the physical layer into logical chunks of data, known as frames. This is the layer at which bridges and switches operate. Connectivity is provided only among locally attached network nodes, forming layer-2 domains for unicast or broadcast forwarding. Other protocols may be imposed on the data frames to create tunnels and logically separated layer-2 forwarding domains.

The data link layer might implement a sliding-window flow control and acknowledgment mechanism to provide reliable delivery of frames; that is the case for SDLC and HDLC, and derivatives of HDLC such as LAPB and LAPD. In modern practice only error detection, not flow control using sliding windows, is present in data link protocols such as the Point-to-Point Protocol (PPP); on local area networks the IEEE 802.2 LLC layer is not used for most protocols on Ethernet, and on other local area networks its flow control and acknowledgment mechanisms are rarely used. Sliding-window flow control and acknowledgment are used at the transport layer by protocols such as TCP.

3. Network Layer

The Network layer provides the functional and procedural means of transferring variable-length data sequences from a source to a destination via one or more networks, while maintaining the quality of service requested by the Transport layer. The Network layer performs network routing functions, and might also perform fragmentation and reassembly and report delivery errors. Routers operate at this layer, sending data throughout the extended network and making the Internet possible. This is a logical addressing scheme; values are chosen by the network engineer, and the addressing scheme is hierarchical. The best-known example of a layer-3 protocol is the Internet Protocol (IP).

Perhaps it is easier to visualize this layer as managing the sequence of human carriers taking a letter from the sender to the local post office, trucks that carry sacks of mail to other post offices or airports, airplanes that carry airmail between major cities, trucks that distribute mail sacks in a city, and carriers that take a letter to its destination. Think of fragmentation as splitting a large document into smaller envelopes for shipping or, in the case of the network layer, splitting an application or transport record into packets.

The major tasks of the network layer are:
- It controls routes for individual messages through the actual topology.
- It finds the best route.
- It finds alternate routes.
- It accomplishes buffering and deadlock handling.
4. Transport Layer
The Transport layer provides transparent transfer of data between end users, providing reliable data transfer while relieving the upper layers of this concern. The transport layer controls the reliability of a given link through flow control, segmentation/de-segmentation, and error control. Some protocols are state- and connection-oriented; this means that the transport layer can keep track of the segments and retransmit those that fail. The best-known example of a layer-4 protocol is the Transmission Control Protocol (TCP); the transport layer is the layer that converts messages into TCP segments, or User Datagram Protocol (UDP), Stream Control Transmission Protocol (SCTP), etc. packets.

Perhaps an easy way to visualize the Transport layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. Do remember, however, that a post office manages the outer envelope of mail; higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read by the addressee only. Roughly speaking, tunneling protocols operate at the transport layer, such as carrying non-IP protocols like IBM's SNA or Novell's IPX over an IP network, or end-to-end encryption with IP security (IPsec). While Generic Routing Encapsulation (GRE) might seem to be a network-layer protocol, if the encapsulation of the payload takes place only at the endpoint, GRE becomes closer to a transport protocol that uses IP headers but contains complete frames or packets to deliver to an endpoint.

The major tasks of the Transport layer are:
- It locates the other party.
- It creates a transport pipe between both end users.
- It breaks the message into packets and reassembles them at the destination.
- It applies flow control to the packet stream.

5. Session Layer

The Session layer controls the dialogues/connections (sessions) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for either full-duplex or half-duplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. The OSI model made this layer responsible for "graceful close" of sessions, which is a property of TCP, and also for session checkpointing and recovery, which is not usually used in the Internet protocol suite.

The major tasks of the session layer are:
- It is responsible for the relation between two end users.
- It maintains the integrity of, and controls, the data exchanged between the end users.
- The end users are aware of each other when the relation is established (synchronization).
- It uses naming and addressing to identify a particular user.
- It makes sure that the lower layer guarantees delivery of the message (flow control).

6. Presentation Layer
The Presentation layer transforms the data to provide a standard interface for the Application layer. MIME encoding, data encryption and similar manipulation of the presentation are done at this layer to present the data as the service or protocol developer sees fit. Examples of this layer are converting an EBCDIC-coded text file to an ASCII-coded file, or serializing objects and other data structures into and out of XML.

The major tasks of the presentation layer are:
- It translates the language used by the application layer.
- It makes the users as independent as possible, so that they can concentrate on the conversation.

7. Application Layer (end users)

The application layer is the seventh level of the seven-layer OSI model. It interfaces directly to, and performs common application services for, the application processes; it also issues requests to the presentation layer. Note carefully that this layer provides services to user-defined application processes, and not to the end user; for example, it defines a file transfer protocol, but the end user must go through an application process to invoke file transfer. The OSI model does not include human interfaces. The common application services sublayer provides functional elements including the Remote Operations Service Element (comparable to Internet Remote Procedure Call), Association Control, and Transaction Processing (according to the ACID requirements). Above the common application services sublayer are functions meaningful to user application programs, such as messaging (X.400), directory (X.500), file transfer (FTAM), virtual terminal (VTAM), and batch job manipulation (JTAM).

A Comparison of the OSI and TCP/IP Reference Models
Concepts central to the OSI model are:
- Services: what the layer does.
- Interfaces: how the processes above the layer access it, what the parameters are, and what result to expect.
- Protocols: what provides the offered service; protocols are used within a layer and are the layer's own business.
TCP/IP did not originally distinguish clearly between service, interface and protocol. The only real services offered by the internet layer are SEND IP packet and RECEIVE IP packet. The OSI model was devised before the protocols were invented, and its data link layer originally dealt only with point-to-point networks; when broadcast networks came around, a new sublayer had to be hacked into the model. With TCP/IP the reverse was true: the protocols came first, and the model was really just a description of the existing protocols, so the TCP/IP model did not fit any other protocol stack. The OSI model has seven layers and TCP/IP has four layers, as shown below:
OSI                TCP/IP
7  Application     Application
6  Presentation    (not present in the model)
5  Session         (not present in the model)
4  Transport       Transport
3  Network         Internet
2  Data Link       Host-to-Network
1  Physical        Host-to-Network
Another difference is in the area of connectionless and connection-oriented services. The OSI model supports both services in the network layer, but only connection-oriented communication in the transport layer; TCP/IP supports only connectionless communication in the network layer, but both services in the transport layer.

A Critique of the OSI Model and Protocols
Why OSI did not take over the world:
- Bad timing
- Bad technology
- Bad implementations
- Bad politics

A Critique of the TCP/IP Reference Model
Problems:
- Service, interface, and protocol are not distinguished.
- It is not a general model.
- The host-to-network layer is not really a layer.
- There is no mention of the physical and data link layers.
- Minor protocols are deeply entrenched and hard to replace.

Network standardization
A network standard is a definition that has been approved by a recognized standards organization. Standards exist for programming languages, operating systems, data formats, communications protocols, and electrical interfaces. There are two categories of standards:

De facto (Latin for "from the fact") standards: those that have just happened, without any formal plan. These are formats that have become standard simply because a large number of companies agreed to use them; they have not been formally approved as standards. Examples: the IBM PC for small office computers, UNIX for operating systems in CS departments. PostScript is a good example of a de facto standard.

De jure (Latin for "by law") standards: formal legal standards adopted by some authorized standardization body.

Two classes of standards organizations:
- Organizations established by treaty among national governments.
- Voluntary, non-treaty organizations.

From a user's standpoint, standards are extremely important in the computer industry because they allow the combination of products from different manufacturers to create a customized system. Without standards, only hardware and software from the same company could be used together. In addition, standard user interfaces can make it much easier to learn how to use new applications. Most official computer standards are set by one of the following organizations:
- ANSI (American National Standards Institute)
- ITU (International Telecommunication Union)
- IEEE (Institute of Electrical and Electronics Engineers)
- ISO (International Standards Organization)
- VESA (Video Electronics Standards Association)

Benefits of standardization:
- Allows different computers to communicate.
- Increases the market for products adhering to the standard.

Who's who in the telecommunication world?
- Common carriers: private telephone companies (e.g., AT&T, USA).
- PTT (Post, Telegraph & Telephone) administrations: nationalized telecommunication companies (most of the world).
- ITU (International Telecommunication Union): an agency of the UN for international telecommunication coordination.
- CCITT (an acronym for its French name): one of the organs of the ITU (i.e., ITU-T), specialized for telephone and data communication systems.

3. EXPLAIN THE FOLLOWING WITH RESPECT TO DATA COMMUNICATIONS:


A) Fourier analysis
In the 19th century, the French mathematician Fourier proved that any periodic function of time g(t) with period T can be constructed by summing a number of sines and cosines:

g(t) = c/2 + Σ(n=1 to ∞) a_n sin(2πnft) + Σ(n=1 to ∞) b_n cos(2πnft)

where f = 1/T is the fundamental frequency, a_n and b_n are the sine and cosine amplitudes of the nth harmonic, and c is a constant. Such a decomposition is called a Fourier series.

B) Band-limited signals
Consider the signal shown in the figure below: the ASCII representation of the character "b", which consists of the bit pattern 01100010, along with its harmonics.

No transmission facility can pass all the harmonics, and hence some of the harmonics are diminished and distorted. When the bandwidth is restricted to the low frequencies consisting of the first 1, 2, 4, or 8 harmonics and the signal is then transmitted, the spectra and reconstructed functions for these band-limited signals show increasing fidelity as more harmonics are included. Limiting the bandwidth limits the data rate, even for perfect channels. However, complex coding schemes that use several voltage levels do exist and can achieve higher data rates.
C) Maximum data rate of a channel


In 1924, H. Nyquist realized the existence of this fundamental limit and derived an equation expressing the maximum data rate for a finite-bandwidth noiseless channel. In 1948, Claude Shannon carried Nyquist's work further and extended it to the case of a channel subject to random noise. In communications it is not really the amount of noise that concerns us, but rather the amount of noise compared to the level of the desired signal: it is the ratio of signal power to noise power that is important. This Signal-to-Noise Ratio (SNR), usually expressed in decibels (dB), is one of the most important specifications of any communication system. The decibel is a logarithmic unit used for comparisons of power or voltage levels. To appreciate the scale: a sound level of 0 dB corresponds to the threshold of hearing, the smallest sound that can be heard, while a normal speech conversation measures about 60 dB.

If an arbitrary signal is passed through a low-pass filter of bandwidth H, the filtered signal can be completely reconstructed by making only 2H samples per second; sampling the line faster than 2H per second is pointless. If the signal consists of V discrete levels, Nyquist's theorem states that, for a noiseless channel,

Maximum data rate = 2H log2(V) bits per second.          (3.2)

For a noisy channel of bandwidth H and signal-to-noise ratio S/N, the maximum data rate according to Shannon is

Maximum data rate = H log2(1 + S/N) bits per second.     (3.3)
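A worked C++ illustration with assumed figures: a 3000 Hz telephone channel with a 30 dB signal-to-noise ratio (S/N = 1000):

    #include <cmath>
    #include <iostream>

    int main() {
        double H = 3000.0;                          // bandwidth in Hz
        double snr = 1000.0;                        // 30 dB -> S/N = 10^(30/10)
        double nyquist = 2 * H * std::log2(2.0);    // noiseless, V = 2 levels: 6000 bps
        double shannon = H * std::log2(1.0 + snr);  // noisy bound: about 29,900 bps
        std::cout << nyquist << " " << shannon << '\n';
        return 0;
    }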


4. EXPLAIN THE FOLLOWING CONCEPTS OF INTERNETWORKING:


A) Internet architecture
The Internet is a worldwide, publicly accessible network of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It is a "network of networks" that consists of millions of smaller domestic, academic, business, and government networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked web pages and other documents of the World Wide Web.

How are networks interconnected to form an internetwork? The answer has two parts. Physically, two networks can only be connected by a computer that attaches to both of them; but a mere physical connection cannot provide interconnection where information can be exchanged, as there is no guarantee that the computer will cooperate with other machines that wish to communicate. An internet is not restricted in size: internets exist that contain a few networks, and internets exist that contain thousands of networks. Similarly, the number of computers attached to each network in an internet can vary; some networks have no computers attached, while others have hundreds. To have a viable internet, we need special computers that are willing to transfer packets from one network to another. Computers that interconnect two networks and pass packets from one to the other are called internet gateways or internet routers.

B) Protocols and their significance for internetworking
Many protocols have been designed for use in an internet, but one suite, known as the TCP/IP Internet protocols, stands out as the most widely used for internets; most networking professionals simply refer to this suite as TCP/IP. Work on the Transmission Control Protocol (TCP) began in the 1970s. The U.S. military funded the research in TCP/IP and internetworking through the Advanced Research Projects Agency, known as ARPA.


Significance of internetworking and TCP/IP


Internetworking has become one of the most important techniques in modern networking; Internet technology has revolutionized computer communication. The TCP/IP technology has made possible a global Internet that reaches millions of schools, commercial organizations, government and military sites around the world. The worldwide demand for internetworking products has affected most companies that sell networking technologies, and competition has increased among the companies that sell the hardware and software needed for internetworking. Companies have extended the designs in two ways:
- The protocols have been adapted to work with many network technologies.
- New features have been added that allow the protocols to transfer data across internets.

C) Internet Layering Model
The Internet uses the TCP/IP reference model, also called the Internet layering model or Internet reference model. This model consists of five layers, as listed below:

5  Application
4  Transport
3  Internet
2  Network Interface
1  Physical

A design goal was that conversations between source and destination should be able to continue even if part of the transmission path went out of operation. The reference model was named after two of its main protocols, TCP (Transmission Control Protocol) and IP (Internet Protocol). The purpose of each layer of TCP/IP is given below.




Layer 1: Physical. This layer corresponds to the basic network hardware.

Layer 2: Network interface.


This layer specifies how to organize data into frames and how a computer transmits frames over a network; it interfaces the TCP/IP protocol stack to the physical network.

Layer 3: Internet. This layer specifies the format of packets sent across an internet, as well as the mechanism used to forward packets from a computer through one or more routers to the final destination.

Layer 4: Transport. This layer deals with opening and maintaining connections and ensuring that packets are in fact received. The transport layer is the interface between the application layer and the complex hardware of the network; it is designed to allow peer entities on the source and destination hosts to carry on conversations.

Layer 5: Application. Each protocol of this layer specifies how one application uses an internet.


5. W HAT IS THE USE OF IDENTIFIER AND SEQUENCE NUMBER FIELDS OF ECHO REQUEST AND ECHO REPLY MESSAGE ? E XPLAIN .
The echo request contains an optional data area; the echo reply contains a copy of the data sent in the request message. The format of the echo request and echo reply is as shown in the figure below.

Echo request and echo reply message format

The OPTIONAL DATA field is of variable length and contains the data to be returned to the original sender; an echo reply always returns exactly the same data as was received in the request. The IDENTIFIER and SEQUENCE NUMBER fields are used by the sender to match replies to requests. The value of the TYPE field specifies whether the message is an echo request (TYPE = 8) or an echo reply (TYPE = 0).
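As a rough sketch (illustrative only; the helper names are invented, and the checksum is left at zero rather than computed), the header layout and the role of the IDENTIFIER and SEQUENCE NUMBER fields can be expressed in Python as follows:

    import struct

    ICMP_ECHO_REQUEST = 8    # TYPE field value for an echo request
    ICMP_ECHO_REPLY = 0      # TYPE field value for an echo reply

    def build_echo_request(identifier: int, sequence: int,
                           data: bytes) -> bytes:
        # TYPE (1 byte), CODE (1 byte), CHECKSUM (2 bytes),
        # IDENTIFIER (2 bytes), SEQUENCE NUMBER (2 bytes), then data.
        # A real implementation would fill in the Internet checksum.
        header = struct.pack("!BBHHH", ICMP_ECHO_REQUEST, 0, 0,
                             identifier, sequence)
        return header + data

    def reply_matches_request(request: bytes, reply: bytes) -> bool:
        # The sender matches a reply to its request by comparing the
        # IDENTIFIER and SEQUENCE NUMBER fields (bytes 4-7 of the header).
        return reply[0] == ICMP_ECHO_REPLY and request[4:8] == reply[4:8]

A typical ping program keeps the identifier fixed and increments the sequence number for each request it sends, so that concurrent ping sessions and successive probes can be told apart.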

Reports of unreachability

When a router cannot forward or deliver a datagram to the destination owing to various problems, it sends a destination unreachable message back to the original sender and then drops the datagram.

Destination unreachable message format


The format of the destination unreachable message is as shown in figure 5.3. The TYPE field of a destination unreachable message contains the integer 3. The CODE field contains an integer that describes why the datagram could not be delivered. The possible values of the CODE field are listed in the figure below.

CODE VALUE   MEANING
0            Network unreachable
1            Host unreachable
2            Protocol unreachable
3            Port unreachable
4            Fragmentation needed and DF set
5            Source route failed
6            Destination network unknown
7            Destination host unknown
8            Source host isolated
9            Communication with destination network administratively prohibited
10           Communication with destination host administratively prohibited
11           Network unreachable for type of service
12           Host unreachable for type of service

Possible problems reported in the destination unreachable message

Network unreachable errors imply routing failures, while host unreachable errors imply delivery failures. Because an ICMP error message contains a short prefix of the datagram that caused the problem, the source knows exactly which address is unreachable. The port is the destination point discussed at the transport layer. If the datagram contains a source route option with a wrong route, the router may report a source route failed message. If a router needs to fragment a datagram and the DF bit (the "don't fragment" bit in the IP header) is set, the router sends a fragmentation needed and DF set message back to the source. The remaining errors listed in figure 5.4 are self-explanatory.
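Since the error message carries a prefix of the offending datagram, a receiver can recover both the reason and the address concerned. A small illustrative parser (a hypothetical helper, assuming the byte string starts at the ICMP header) might look like this:

    import struct

    def parse_icmp_error(message: bytes):
        # TYPE (1), CODE (1), CHECKSUM (2), 4 unused bytes, then the
        # IP header and first 64 bits of the datagram that failed.
        icmp_type, code, _checksum = struct.unpack("!BBH", message[:4])
        datagram_prefix = message[8:]
        if icmp_type == 3:
            print("destination unreachable, CODE =", code)
        return icmp_type, code, datagram_prefix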
Obtaining a subnet mask

To participate in subnet addressing, a host needs to know which bits of the 32-bit Internet address correspond to the physical network and which correspond to host identifiers. The information needed to interpret the address is represented in a 32-bit quantity called the subnet mask. To learn the subnet mask used on the local network, a machine can send an address mask request message to a router and receive an address mask reply message.
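Before looking at the message format, here is a small illustrative computation (not taken from the text) showing how the mask is used: applying the mask to an address with a bitwise AND extracts the network portion.

    import ipaddress

    ip = ipaddress.IPv4Address("192.31.65.5")
    mask = ipaddress.IPv4Address("255.255.255.0")
    # Bits selected by the mask identify the physical network;
    # the remaining bits identify the host on that network.
    network = ipaddress.IPv4Address(int(ip) & int(mask))
    print(network)    # 192.31.65.0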

Address mask request or reply message format

The format of the address mask request or reply message is as shown in figure 5.10. A host broadcasts the request without knowing which specific router will respond. The TYPE field value is 17 for an address mask request and 18 for an address mask reply. A reply contains the network's subnet address mask in the ADDRESS MASK field. As with echo messages, the IDENTIFIER and SEQUENCE NUMBER fields allow the sender to associate replies with requests.
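Following the same pattern as the echo messages, the request can be sketched as a fixed-size header (illustrative packing only, with the checksum again left at zero rather than computed):

    import struct

    ADDRESS_MASK_REQUEST = 17   # TYPE value for a request
    ADDRESS_MASK_REPLY = 18     # TYPE value for a reply

    def build_mask_request(identifier: int, sequence: int) -> bytes:
        # TYPE, CODE, CHECKSUM, IDENTIFIER, SEQUENCE NUMBER, ADDRESS MASK.
        # The ADDRESS MASK field is zero in a request; the router fills
        # it in when it sends the reply.
        return struct.pack("!BBHHHI", ADDRESS_MASK_REQUEST, 0, 0,
                           identifier, sequence, 0)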


6. I N WHAT CONDITIONS IS ARP PROTOCOL USED ? E XPLAIN .


ARP protocol: In computer networking, the Address Resolution Protocol (ARP) is the standard method for finding a host's hardware address when only its network-layer address is known. ARP is primarily used to translate IP addresses to Ethernet MAC addresses. It is also used for IP over other LAN technologies, such as Token Ring, FDDI, or IEEE 802.11, and for IP over ATM.

ARP is used in four cases where two hosts communicate:

1. When two hosts are on the same network and one desires to send a packet to the other
2. When two hosts are on different networks and must use a gateway/router to reach the other host
3. When a router needs to forward a packet for one host through another router
4. When a router needs to forward a packet from one host to the destination host on the same network

The first case applies when two hosts are on the same physical network, that is, when they can communicate directly without going through a router. The last three cases are the most common on the Internet, as two computers on the Internet are typically separated by more than 3 hops. Imagine computer A sends a packet to computer D and there are two routers, B and C, between them: case 2 covers A sending to B, case 3 covers B sending to C, and case 4 covers C sending to D. ARP is defined in RFC 826 and is a current Internet Standard, STD 37.

ARP implementation

We will see the implementation with the help of an example. Consider a university with several class C (/24) networks, as illustrated in figure 3.1. Here we have two Ethernets: one in the computer science (CS) department with IP address 192.31.65.0 and one in the electrical engineering (EE) department with IP address 192.31.63.0. These are connected by a campus backbone FDDI ring with IP address 192.31.60.0. Each machine on an Ethernet has a unique physical address, labeled E1 through E6, and similarly each machine on the FDDI ring has a physical address, labeled F1 through F3.
[Figure 3.1: Campus network topology. The CS Ethernet (192.31.65.0) and the EE Ethernet (192.31.63.0) are joined by the campus FDDI ring (192.31.60.0), which also connects to the WAN. The CS router has two IP addresses (192.31.60.4 and 192.31.65.1) and the EE router has two IP addresses (192.31.60.7 and 192.31.63.3). Host 1 (192.31.65.7, Ethernet address E1) and host 2 (192.31.65.5, Ethernet address E2) sit on the CS Ethernet; machines carry Ethernet addresses E1-E6 and FDDI addresses F1-F3.]


Let us assume a sender on host 1 wants to send a packet to a receiver on host 2. The sender knows the name of the intended receiver, say mary@eagle.cs.uni.edu. The first step is to find the IP address for host 2, known as eagle.cs.uni.edu. This mapping of a name to an IP address is done by the Domain Name System (DNS). Here we will assume that DNS gives the IP address of host 2 as 192.31.65.5.

The upper-layer software on host 1 builds a packet with 192.31.65.5 in the destination address field and gives it to the IP software to transmit. The IP software can look at the address and see that the destination is on its own network, but it needs a way to find the destination's physical address. A mapping table could be used, as discussed under resolution by direct mapping. A better solution is for host 1 to output a broadcast packet onto the Ethernet asking "Who owns IP address 192.31.65.5?". The broadcast will arrive at every machine on Ethernet 192.31.65.0, and each one will check its IP address. Host 2 alone will respond with its physical address E2. The packet used for asking this question is called an ARP request, and the packet that answers it is called an ARP reply. The IP software on host 1 then builds an Ethernet frame addressed to E2, puts the IP packet addressed to 192.31.65.5 in the payload field, and dumps it onto the Ethernet. The Ethernet board of host 2 detects this frame, recognizes it as a frame for itself, scoops it up, and causes an interrupt. The Ethernet driver extracts the IP packet from the payload and passes it to the IP software, which sees that it is correctly addressed and processes it.

ARP frame format

The ARP protocol uses two frame formats, as seen in the above example: the ARP request and the ARP reply.

ARP request

An ARP request is structured in a particular way. As shown in figure 3.2, an ARP request frame consists of two fields:

1. Frame header
2. ARP request message

[Figure 3.2(a): ARP request frame, consisting of a frame header followed by the ARP request message ("May I know your physical address?").]

The frame header is subdivided into:

1. Physical address
2. IP address


A complete ARP request frame is as shown in figure 3.2(b). We have seen that a broadcast address consists of all 1s; hence the destination physical address in an ARP request frame is the broadcast address, written equivalently as FF-FF-FF-FF-FF-FF.

Physical address: destination FF-FF-FF-FF-FF-FF (broadcast), source E1
IP address: destination 192.31.65.5, source 192.31.65.7
Message: "May I know your physical address?"

(b) Complete ARP request frame
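To tie the frame layout to bytes on the wire, here is an illustrative Python construction of such a broadcast request (the byte layout follows RFC 826; the sender MAC used below is a made-up placeholder standing in for the label E1):

    import socket
    import struct

    def build_arp_request(sender_mac: bytes, sender_ip: str,
                          target_ip: str) -> bytes:
        broadcast = b"\xff" * 6                   # FF-FF-FF-FF-FF-FF
        # Ethernet header: destination, source, EtherType 0x0806 (ARP).
        eth_header = broadcast + sender_mac + struct.pack("!H", 0x0806)
        arp_message = struct.pack(
            "!HHBBH6s4s6s4s",
            1,                                    # hardware type: Ethernet
            0x0800,                               # protocol type: IPv4
            6, 4,                                 # MAC and IP address lengths
            1,                                    # operation 1 = request
            sender_mac, socket.inet_aton(sender_ip),
            b"\x00" * 6,                          # target MAC still unknown
            socket.inet_aton(target_ip))
        return eth_header + arp_message

    # Host 1 (192.31.65.7) asking who owns 192.31.65.5:
    frame = build_arp_request(b"\x02\x00\x00\x00\x00\x01",
                              "192.31.65.7", "192.31.65.5")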

ARP reply

An ARP reply frame is structured in a similar way to an ARP request frame. As shown in figure (a), an ARP reply frame also consists of two fields:

1. Frame header
2. ARP reply message

[Figure (a): ARP reply frame, consisting of a frame header followed by the ARP reply message ("This is my physical address").]

The frame header is again subdivided into:

1. Physical address
2. IP address
A complete ARP reply frame is as shown in figure (b). Since host 2 answers host 1 directly, the reply is addressed to the requester rather than broadcast:

Physical address: destination E1, source E2
IP address: destination 192.31.65.7, source 192.31.65.5
Message: "This is my physical address"

(b) Complete ARP reply frame


The Address Resolution Cache

Broadcasting an ARP request packet is too expensive to be used every time one machine wants to transmit a packet to another, because every machine on the network must receive and process the broadcast packet. To reduce this communication cost, computers that use ARP maintain a cache of recently acquired IP-to-physical address bindings. Whenever a computer sends an ARP request and receives an ARP reply, it saves the IP address and the corresponding hardware address in its cache for successive lookups. When transmitting a packet, a computer always looks in its cache for a binding before sending an ARP request; if it finds the desired binding in its ARP cache, it need not broadcast on the network. Thus when two computers on a network communicate, they begin with an ARP request and response and then repeatedly transfer packets without using ARP for each packet.

ARP cache timeouts

An ARP cache provides an example of soft state, a technique commonly used in network protocols. The name describes a situation in which information can become stale without warning. In the case of ARP, consider two computers A and B, both connected to an Ethernet. Assume A has sent an ARP request and B has replied, and that after the exchange computer B crashes. Computer A will not receive any notification of the crash, and because it already has binding information for B in its ARP cache, it will continue to send packets to B. The Ethernet hardware provides no indication that B is not online, because Ethernet does not guarantee delivery. Thus A has no way of knowing when information in its ARP cache has become incorrect. To cope with this, such protocols use timers, with the state information being deleted when the timer expires. That is, when a computer places an address binding in its cache, it also sets a timer, a typical timeout value being 20 minutes; when the timer expires, the address binding is deleted.
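The cache-with-timeout behaviour described above can be sketched in a few lines of Python (a minimal illustration, assuming the typical 20-minute timeout; real implementations live inside the operating system's networking stack):

    import time

    class ArpCache:
        TIMEOUT = 20 * 60            # typical value: 20 minutes, in seconds

        def __init__(self):
            self._entries = {}       # IP address -> (MAC address, time stored)

        def add(self, ip: str, mac: str) -> None:
            # Called after an ARP reply is received; (re)starts the timer.
            self._entries[ip] = (mac, time.time())

        def lookup(self, ip: str):
            entry = self._entries.get(ip)
            if entry is None:
                return None          # miss: caller must broadcast an ARP request
            mac, stored = entry
            if time.time() - stored > self.TIMEOUT:
                del self._entries[ip]   # soft state has gone stale: discard it
                return None
            return mac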
