
Paper Solution Subject: Convergence of Network Technologies (C.T.N.C.) Class: T.E. I.T. Sem: V (Rev).

Year: Summer-10 Paper Code: AN-4312.


Q1. a) Describe in brief the Binary Phase Shift Keying modulator and demodulator. Also calculate the probability of error for BPSK. (15 M)

Solution:
Marking Scheme:
1. Definition = 03 marks.
2. Role of the binary phase shift keying modulator and demodulator, with diagram = 10 marks.
3. Calculation of probability of error = 02 marks.

Detail Solution:
1. Definition:
Phase-shift keying (PSK) is a digital modulation scheme that conveys data by changing, or modulating, the phase of a reference signal (the carrier wave). Any digital modulation scheme uses a finite number of distinct signals to represent digital data. PSK uses a finite number of phases, each assigned a unique pattern of binary digits. Usually, each phase encodes an equal number of bits. Each pattern of bits forms the symbol that is represented by the particular phase. The demodulator, which is designed specifically for the symbol-set used by the modulator, determines the phase of the received signal and maps it back to the symbol it represents, thus recovering the original data. This requires the receiver to be able to compare the phase of the received signal to a reference signal; such a system is termed coherent (and referred to as CPSK).

Binary phase-shift keying (BPSK)

Constellation diagram example for BPSK.

BPSK (also sometimes called PRK, Phase Reversal Keying, or 2PSK) is the simplest form of phase shift keying (PSK). It uses two phases which are separated by 180° and so can also be termed 2-PSK. It does not particularly matter exactly where the constellation points are positioned, and in this figure they are shown on the real axis, at 0° and 180°. This modulation is the most robust of all the PSKs, since it takes the highest level of noise or distortion to make the demodulator reach an incorrect decision. It is, however, only able to modulate at 1 bit/symbol (as seen in the figure) and so is unsuitable for high data-rate applications when bandwidth is limited. In the presence of an arbitrary phase-shift introduced by the communications channel, the demodulator is unable to tell which constellation point is which. As a result, the data is often differentially encoded prior to modulation.
2. Role of the binary phase shift keying modulator and demodulator:

Implementation
The general form for BPSK follows the equation:

s_n(t) = √(2E_b/T_b) · cos(2πf_c·t + π(1 − n)),  n = 0, 1

This yields two phases, 0 and π. In the specific form, binary data is often conveyed with the following signals:

s_0(t) = √(2E_b/T_b) · cos(2πf_c·t + π) = −√(2E_b/T_b) · cos(2πf_c·t)  for binary "0"
s_1(t) = √(2E_b/T_b) · cos(2πf_c·t)  for binary "1"

where f_c is the frequency of the carrier-wave. Hence, the signal-space can be represented by the single basis function

φ(t) = √(2/T_b) · cos(2πf_c·t)

where 1 is represented by √(E_b)·φ(t) and 0 is represented by −√(E_b)·φ(t). This assignment is, of course, arbitrary.

The use of this basis function is shown at the end of the next section in a signal timing diagram. The topmost signal is a BPSK-modulated cosine wave that the BPSK modulator would produce. The bit-stream that causes this output is shown above the signal (the other parts of this figure are relevant only to QPSK).
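As a sketch of what such a modulator does, the mapping of bits to antipodal carrier bursts can be written in a few lines of Python. The carrier frequency, bit duration, bit energy and sampling values below are arbitrary illustration choices, not taken from the paper, and the function name is hypothetical.

```python
import math

def bpsk_modulate(bits, fc=4.0, tb=1.0, eb=1.0, samples_per_bit=100):
    """Map each bit to a burst of carrier: bit 1 -> +A*cos(2*pi*fc*t),
    bit 0 -> the pi-shifted (negated) carrier, with A = sqrt(2*Eb/Tb)."""
    amp = math.sqrt(2 * eb / tb)
    signal = []
    for b in bits:
        sign = 1.0 if b else -1.0   # phase pi*(1-n): n=1 -> 0, n=0 -> pi
        for i in range(samples_per_bit):
            t = i * tb / samples_per_bit
            signal.append(sign * amp * math.cos(2 * math.pi * fc * t))
    return signal

waveform = bpsk_modulate([1, 0, 1, 1])
```

A coherent demodulator would correlate each bit interval against the basis function and compare the result with a zero threshold to decide 1 or 0.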

Bit error rate


The bit error rate (BER) of BPSK in AWGN can be calculated as[5]:

P_b = Q(√(2E_b/N_0))

or, equivalently,

P_b = (1/2) · erfc(√(E_b/N_0)).

Since there is only one bit per symbol, this is also the symbol error rate.
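This closed-form BER is easy to evaluate numerically; a small sketch, with illustrative helper names, follows.

```python
import math

def q_function(x):
    """Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(eb_n0_db):
    """BPSK bit error rate in AWGN: Pb = Q(sqrt(2*Eb/N0))."""
    eb_n0 = 10 ** (eb_n0_db / 10)   # convert dB to linear ratio
    return q_function(math.sqrt(2 * eb_n0))

# The erfc form gives the same value: Pb = 0.5*erfc(sqrt(Eb/N0)).
print(bpsk_ber(10))   # roughly 3.9e-6 at Eb/N0 = 10 dB
```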

Quadrature phase-shift keying (QPSK)

Constellation diagram for QPSK with Gray coding. Each adjacent symbol only differs by one bit. Sometimes known as quaternary or quadriphase PSK, 4-PSK, or 4-QAM[6], QPSK uses four points on the constellation diagram, equispaced around a circle. With four phases, QPSK can encode two bits per symbol, shown in the diagram with Gray coding to minimize the BER (sometimes misperceived as twice the BER of BPSK). Analysis shows that this may be used either to double the data rate compared to a BPSK system while maintaining the bandwidth of the signal, or to maintain the data-rate of BPSK but halve the bandwidth needed. As with BPSK, there are phase ambiguity problems at the receiver, and differentially encoded QPSK is used more often in practice.

Implementation
The implementation of QPSK is more general than that of BPSK and also indicates the implementation of higher-order PSK. Writing the symbols in the constellation diagram in terms of the sine and cosine waves used to transmit them:

s_n(t) = √(2E_s/T_s) · cos(2πf_c·t + (2n − 1)π/4),  n = 1, 2, 3, 4

This yields the four phases π/4, 3π/4, 5π/4 and 7π/4 as needed. This results in a two-dimensional signal space with unit basis functions

φ_1(t) = √(2/T_s) · cos(2πf_c·t)
φ_2(t) = √(2/T_s) · sin(2πf_c·t)

The first basis function is used as the in-phase component of the signal and the second as the quadrature component of the signal. Hence, the signal constellation consists of the four signal-space points

(±√(E_s/2), ±√(E_s/2))

The factors of 1/2 indicate that the total power is split equally between the two carriers. Comparing these basis functions with that for BPSK shows clearly how QPSK can be viewed as two independent BPSK signals. Note that the signal-space points for BPSK do not need to split the symbol (bit) energy over the two carriers in the scheme shown in the BPSK constellation diagram. QPSK systems can be implemented in a number of ways. An illustration of the major components of the transmitter and receiver structure is shown below.

Conceptual transmitter structure for QPSK. The binary data stream is split into the in-phase and quadrature-phase components. These are then separately modulated onto two orthogonal basis functions. In this implementation, two sinusoids are used. Afterwards, the two signals are superimposed, and the resulting signal is the QPSK signal. Note the use of polar non-return-to-zero encoding. These encoders can be placed before the binary data source, but have been placed after to illustrate the conceptual difference between digital and analog signals involved with digital modulation.

Receiver structure for QPSK. The matched filters can be replaced with correlators. Each detection device uses a reference threshold value to determine whether a 1 or 0 is detected.
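The transmitter and receiver structures just described can be sketched end-to-end in Python. This is a minimal baseband illustration, assuming an arbitrary carrier of 4 cycles per symbol, the convention s(t) = I·cos(2πf_c·t) − Q·sin(2πf_c·t), and hypothetical function names; a real implementation would add pulse shaping and synchronization.

```python
import math

def qpsk_modulate(bits, fc=4.0, ts=1.0, samples_per_symbol=200):
    """Split the bit stream into I (odd-position) and Q (even-position) bits,
    polar-NRZ-encode each to +/-1, and modulate onto cos/sin carriers."""
    assert len(bits) % 2 == 0
    signal = []
    for k in range(0, len(bits), 2):
        i_lvl = 1.0 if bits[k] else -1.0      # in-phase bit
        q_lvl = 1.0 if bits[k + 1] else -1.0  # quadrature bit
        for n in range(samples_per_symbol):
            t = n * ts / samples_per_symbol
            signal.append(i_lvl * math.cos(2 * math.pi * fc * t)
                          - q_lvl * math.sin(2 * math.pi * fc * t))
    return signal

def qpsk_demodulate(signal, fc=4.0, ts=1.0, samples_per_symbol=200):
    """Correlate each symbol interval with the two basis functions and
    threshold at zero (the correlator form of the matched filter)."""
    bits = []
    for s in range(0, len(signal), samples_per_symbol):
        i_corr = q_corr = 0.0
        for n in range(samples_per_symbol):
            t = n * ts / samples_per_symbol
            x = signal[s + n]
            i_corr += x * math.cos(2 * math.pi * fc * t)
            q_corr += x * -math.sin(2 * math.pi * fc * t)
        bits += [1 if i_corr > 0 else 0, 1 if q_corr > 0 else 0]
    return bits

data = [1, 1, 0, 0, 0, 1, 1, 0]
recovered = qpsk_demodulate(qpsk_modulate(data))
```

Because the cosine and sine bursts are orthogonal over a whole number of carrier cycles, each correlator responds only to its own component, which is exactly the "two independent BPSK signals" view above.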

Bit error rate


Although QPSK can be viewed as a quaternary modulation, it is easier to see it as two independently modulated quadrature carriers. With this interpretation, the even (or odd) bits are used to modulate the in-phase component of the carrier, while the odd (or even) bits are used to modulate the quadrature-phase component of the carrier. BPSK is used on both carriers and they can be independently demodulated. As a result, the probability of bit-error for QPSK is the same as for BPSK:

P_b = Q(√(2E_b/N_0))

However, in order to achieve the same bit-error probability as BPSK, QPSK uses twice the power (since two bits are transmitted simultaneously). The symbol error rate is given by:

P_s = 1 − (1 − P_b)² = 2Q(√(E_s/N_0)) − [Q(√(E_s/N_0))]²

If the signal-to-noise ratio is high (as is necessary for practical QPSK systems) the probability of symbol error may be approximated:

P_s ≈ 2Q(√(E_s/N_0))

QPSK signal in the time domain


The modulated signal is shown below for a short segment of a random binary data stream. The two carrier waves are a cosine wave and a sine wave, as indicated by the signal-space analysis above. Here, the odd-numbered bits have been assigned to the in-phase component and the even-numbered bits to the quadrature component (taking the first bit as number 1). The total signal, the sum of the two components, is shown at the bottom. Jumps in phase can be seen as the PSK changes the phase on each component at the start of each bit-period. The topmost waveform alone matches the description given for BPSK above.

Timing diagram for QPSK. The binary data stream is shown beneath the time axis. The two signal components with their bit assignments are shown at the top, and the total, combined signal at the bottom. Note the abrupt changes in phase at some of the bit-period boundaries. The binary data that is conveyed by this waveform is: 1 1 0 0 0 1 1 0.

The odd bits (the 1st, 3rd, 5th and 7th: 1 0 0 1) contribute to the in-phase component. The even bits (the 2nd, 4th, 6th and 8th: 1 0 1 0) contribute to the quadrature-phase component.

Variants

Offset QPSK (OQPSK)

The signal does not cross zero, because only one bit of the symbol is changed at a time. Offset quadrature phase-shift keying (OQPSK) is a variant of phase-shift keying modulation using four different values of the phase to transmit. It is sometimes called staggered quadrature phase-shift keying (SQPSK).

Difference of the phase between QPSK and OQPSK. Taking four values of the phase (two bits) at a time to construct a QPSK symbol can allow the phase of the signal to jump by as much as 180° at a time. When the signal is low-pass filtered (as is typical in a transmitter), these phase-shifts result in large amplitude fluctuations, an undesirable quality in communication systems. By offsetting the timing of the odd and even bits by one bit-period, or half a symbol-period, the in-phase and quadrature components will never change at the same time. In the constellation diagram shown on the right, it can be seen that this will limit the phase-shift to no more than 90° at a time. This yields much lower amplitude fluctuations than non-offset QPSK and is sometimes preferred in practice.

The picture on the right shows the difference in the behavior of the phase between ordinary QPSK and OQPSK. It can be seen that in the first plot the phase can change by 180° at once, while in OQPSK the changes are never greater than 90°. The modulated signal is shown below for a short segment of a random binary data stream. Note the half symbol-period offset between the two component waves. The sudden phase-shifts occur about twice as often as for QPSK (since the signals no longer change together), but they are less severe. In other words, the magnitude of jumps is smaller in OQPSK when compared to QPSK.
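The claim that the half-symbol offset limits phase jumps to 90° can be checked numerically. The sketch below tracks the signal phase at half-symbol resolution for a small hand-picked bit pattern; the function name and the half-slot indexing convention are illustrative assumptions, not from the paper.

```python
import math

def max_phase_jump(bits, offset):
    """Track carrier phase per half-symbol slot. The I bit changes at even
    half-symbol boundaries; the Q bit changes there too (offset=False,
    plain QPSK) or one half-symbol later (offset=True, OQPSK).
    Returns the largest wrapped phase jump in radians."""
    i_bits, q_bits = bits[0::2], bits[1::2]
    phases = []
    for h in range(2 * len(i_bits)):
        i_lvl = 1 if i_bits[min(h // 2, len(i_bits) - 1)] else -1
        q_idx = (h - 1) // 2 if offset else h // 2
        q_lvl = 1 if q_bits[max(0, min(q_idx, len(q_bits) - 1))] else -1
        phases.append(math.atan2(q_lvl, i_lvl))
    jumps = []
    for a, b in zip(phases, phases[1:]):
        d = abs(b - a) % (2 * math.pi)
        jumps.append(min(d, 2 * math.pi - d))   # wrap to [0, pi]
    return max(jumps)

# Symbols (1,1) -> (0,0) flip both bits at once: a 180-degree jump in QPSK.
bits = [1, 1, 0, 0, 1, 0, 0, 1]
qpsk_jump = max_phase_jump(bits, offset=False)   # about pi (180 degrees)
oqpsk_jump = max_phase_jump(bits, offset=True)   # at most pi/2 (90 degrees)
```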

Timing diagram for offset-QPSK. The binary data stream is shown beneath the time axis. The two signal components with their bit assignments are shown at the top, and the total, combined signal at the bottom. Note the half-period offset between the two signal components.

π/4-QPSK

Dual constellation diagram for π/4-QPSK. This shows the two separate constellations with identical Gray coding but rotated by 45° with respect to each other.

This final variant of QPSK uses two identical constellations which are rotated by 45° (π/4 radians, hence the name) with respect to one another. Usually, either the even or odd symbols are used to select points from one of the constellations and the other symbols select points from the other constellation. This also reduces the phase-shifts from a maximum of 180°, but only to a maximum of 135°, and so the amplitude fluctuations of π/4-QPSK are between those of OQPSK and non-offset QPSK. One property this modulation scheme possesses is that if the modulated signal is represented in the complex domain, it does not have any paths through the origin. In other words, the signal does not pass through the origin. This lowers the dynamic range of fluctuations in the signal, which is desirable when engineering communications signals. On the other hand, π/4-QPSK lends itself to easy demodulation and has been adopted for use in, for example, TDMA cellular telephone systems. The modulated signal is shown below for a short segment of a random binary data stream. The construction is the same as above for ordinary QPSK. Successive symbols are taken from the two constellations shown in the diagram. Thus, the first symbol (1 1) is taken from the 'blue' constellation and the second symbol (0 0) is taken from the 'green' constellation. Note that the magnitudes of the two component waves change as they switch between constellations, but the total signal's magnitude remains constant. The phase-shifts are between those of the two previous timing-diagrams.

Timing diagram for π/4-QPSK. The binary data stream is shown beneath the time axis. The two signal components with their bit assignments are shown at the top, and the total, combined signal at the bottom. Note that successive symbols are taken alternately from the two constellations, starting with the 'blue' one.
3. Calculation of probability:

Bit error rate


For the general M-PSK there is no simple expression for the symbol-error probability if M > 4. Unfortunately, it can only be obtained from:

P_s = 1 − ∫_{−π/M}^{π/M} p_{θ_r}(θ) dθ

where θ_r = tan⁻¹(r_2/r_1) is the phase of the received vector, and r_1 ~ N(√E_s, N_0/2) and r_2 ~ N(0, N_0/2) are jointly-Gaussian random variables.

Bit-error rate curves for BPSK, QPSK, 8-PSK and 16-PSK, AWGN channel.

This may be approximated for high M and high E_b/N_0 by:

P_s ≈ 2Q(√(2E_s/N_0) · sin(π/M))

The bit-error probability for M-PSK can only be determined exactly once the bit-mapping is known. However, when Gray coding is used, the most probable error from one symbol to the next produces only a single bit-error and

P_b ≈ P_s / k,  where k = log₂(M).

(Using Gray coding allows us to approximate the Lee distance of the errors as the Hamming distance of the errors in the decoded bitstream, which is easier to implement in hardware.) The graph on the left compares the bit-error rates of BPSK, QPSK (which are the same, as noted above), 8-PSK and 16-PSK. It is seen that higher-order modulations exhibit higher error-rates; in exchange, however, they deliver a higher raw data-rate. Bounds on the error rates of various digital modulation schemes can be computed with application of the union bound to the signal constellation.
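The high-SNR symbol-error approximation and the Gray-coding bit-error estimate can be evaluated directly, as in this sketch (helper names are illustrative):

```python
import math

def q_function(x):
    """Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def mpsk_ser_approx(m, es_n0_db):
    """High-SNR approximation: Ps ~= 2 * Q(sqrt(2*Es/N0) * sin(pi/M))."""
    es_n0 = 10 ** (es_n0_db / 10)
    return 2 * q_function(math.sqrt(2 * es_n0) * math.sin(math.pi / m))

def mpsk_ber_approx(m, es_n0_db):
    """With Gray coding the dominant symbol errors cost one bit: Pb ~= Ps/k."""
    return mpsk_ser_approx(m, es_n0_db) / math.log2(m)

# Higher-order PSK trades a higher error rate for a higher raw bit rate.
for m in (4, 8, 16):
    print(m, mpsk_ber_approx(m, 12.0))
```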

b) Compare the evolution of the second, third and fourth generations of wireless communication systems. (5M)

Solution:
Marking Scheme:
1. Comparison points = 05 marks.
2. At least 3 points are required for each generation.

Detail Solution:
1. 2G (or 2-G) is short for second-generation wireless telephone technology. Second-generation (2G) cellular telecom networks were commercially launched on the GSM standard in Finland by Radiolinja[1] (now part of Elisa Oyj) in 1991. Three primary benefits of 2G networks over their predecessors were that phone conversations were digitally encrypted; 2G systems were significantly more efficient on the spectrum, allowing for far greater mobile phone penetration levels; and 2G introduced data services for mobile, starting with SMS text messages. After 2G was launched, the previous mobile telephone systems were retrospectively dubbed 1G. While radio signals on 1G networks are analog and on 2G networks are digital, both systems use digital signaling to connect the radio towers (which listen to the handsets) to the rest of the telephone system.

2G technologies:
2G technologies can be divided into TDMA-based and CDMA-based standards depending on the type of multiplexing used. The main 2G standards are:

- GSM (TDMA-based), originally from Europe but used in almost all countries on all six inhabited continents (Time Division Multiple Access). Today accounts for over 80% of all subscribers around the world. Over 60 GSM operators are also using CDMA2000 in the 450 MHz frequency band (CDMA450).[2]
- IS-95 aka cdmaOne (CDMA-based, commonly referred to as simply CDMA in the US), used in the Americas and parts of Asia. Today accounts for about 17% of all subscribers globally. Over a dozen CDMA operators have migrated to GSM, including operators in Mexico, India, Australia and South Korea.
- PDC (TDMA-based), used exclusively in Japan.
- iDEN (TDMA-based), a proprietary network used by Nextel in the United States and Telus Mobility in Canada.
- IS-136 aka D-AMPS (TDMA-based, commonly referred to as simply TDMA in the US), was once prevalent in the Americas, but most operators have migrated to GSM.

2G services are frequently referred to as Personal Communications Service, or PCS, in the United States. 2.5G services enable high-speed data transfer over upgraded existing 2G networks. Beyond 2G there is 3G, with higher data speeds, and even evolutions beyond 3G, such as 4G.
2. International Mobile Telecommunications-2000 (IMT-2000), better known as 3G or 3rd Generation, is a family of standards for mobile telecommunications fulfilling specifications by the International Telecommunication Union,[1] which includes UMTS and CDMA2000, as well as the non-mobile wireless standards DECT and WiMAX. While the GSM EDGE standard also fulfils the IMT-2000 specification, EDGE phones are typically not branded 3G. Services include wide-area wireless voice telephony, video calls, and wireless data, all in a mobile environment. Compared to 2G and 2.5G services, 3G allows simultaneous use of speech and data services and higher data rates (at least a 200 kbit/s peak bit rate to fulfil the IMT-2000 specification). Today's 3G systems can in practice offer up to 14.0 Mbit/s on the downlink and 5.8 Mbit/s on the uplink.

Applications
The bandwidth and location information available to 3G devices gives rise to applications not previously available to mobile phone users. Some of the applications are:

- Mobile TV: a provider redirects a TV channel directly to the subscriber's phone, where it can be watched.
- Video on demand: a provider sends a movie to the subscriber's phone.
- Video conferencing: subscribers can see as well as talk to each other.
- Tele-medicine: a medical provider monitors or provides advice to the potentially isolated subscriber.
- Location-based services: a provider sends localized weather or traffic conditions to the phone, or the phone allows the subscriber to find nearby businesses or friends.

3. 4G refers to the fourth generation of cellular wireless standards. It is a successor to the 3G and 2G families of standards. The nomenclature of the generations generally refers to a change in the fundamental nature of the service, non-backwards-compatible transmission technology and new frequency bands. The first was the move from 1981 analogue (1G) to digital (2G) transmission in 1992. This was followed, in 2002, by 3G multi-media support, spread spectrum transmission and at least 200 kbit/s, soon expected to be followed by 4G, which refers to all-IP packet-switched networks, mobile ultra-broadband (gigabit speed) access and multi-carrier transmission. Pre-4G technologies such as mobile WiMAX and first-release 3G Long Term Evolution (LTE) have been available on the market since 2005 and 2009 respectively.

Applications of 4G
LTE
The pre-4G technology 3GPP Long Term Evolution (LTE) is often branded "4G", but the first LTE release does not fully comply with the IMT-Advanced requirements. LTE has a theoretical net bit rate capacity of up to 100 Mbit/s in the downlink and 50 Mbit/s in the uplink if a 20 MHz channel is used, and more if Multiple-Input Multiple-Output (MIMO), i.e. antenna arrays, is used. Most major mobile carriers in the United States and several worldwide carriers have announced plans to convert their networks to LTE beginning in 2011. The world's first publicly available LTE service was opened in the two Scandinavian capitals Stockholm and Oslo on 14 December 2009, and branded 4G. The physical radio interface was at an early stage named High Speed OFDM Packet Access (HSOPA), now named Evolved UMTS Terrestrial Radio Access (E-UTRA). LTE Advanced (Long Term Evolution Advanced) is a candidate for the IMT-Advanced standard, formally submitted by the 3GPP organization to ITU-T in the fall of 2009, and expected to be released in 2012. The target of 3GPP LTE Advanced is to reach and surpass the ITU requirements. LTE Advanced should be compatible with first-release LTE equipment, and should share frequency bands with first-release LTE.[3]

WiMAX
The Mobile WiMAX (IEEE 802.16e-2005) mobile wireless broadband access (MWBA) standard is sometimes branded 4G, and offers peak data rates of 128 Mbit/s downlink and 56 Mbit/s uplink over 20 MHz wide channels. The IEEE 802.16m evolution of 802.16e is under development, with the objective to fulfil the IMT-Advanced criteria of 1 Gbit/s for stationary reception and 100 Mbit/s for mobile reception.[4] Sprint Nextel has announced that it will be using WiMAX, branded as a "4G" network.[5]

UMB (Formerly EV-DO Rev. C)


UMB (Ultra Mobile Broadband) was the brand name for a discontinued 4G project within the 3GPP2 standardization group to improve the CDMA2000 mobile phone standard for next generation applications and requirements. In November 2008, Qualcomm, UMB's lead sponsor, announced it was ending development of the technology, favouring LTE instead.[6] The objective was to achieve data speeds over 275 Mbit/s downstream and over 75 Mbit/s upstream.

Q2. a) Explain the digital signature. Also explain the firewall used for security purposes. (10M)

Solution:
Marking Scheme:
1. Definition of digital signature with suitable diagram = 03 marks.
2. Different reasons = 07 marks.

Detail Solution:
1. Definition of digital signature:
A digital signature or digital signature scheme is a mathematical scheme for demonstrating the authenticity of a digital message or document. A valid digital signature gives a recipient reason to believe that the message was created by a known sender, and that it was not altered in transit. Digital signatures are commonly used for software distribution, financial transactions, and in other cases where it is important to detect forgery and tampering. Digital signatures are often used to implement electronic signatures, a broader term that refers to any electronic data that carries the intent of a signature,[1] but not all electronic signatures use digital signatures.[2][3][4] In some countries, including the United States, India, and members of the European Union, electronic signatures have legal significance. However, laws concerning electronic signatures do not always make clear whether they are digital cryptographic signatures in the sense used here, leaving the legal definition, and so their importance, somewhat confused.
Digital signatures employ a type of asymmetric cryptography. For messages sent through an insecure channel, a properly implemented digital signature gives the receiver reason to believe the message was sent by the claimed sender. Digital signatures are equivalent to traditional handwritten signatures in many respects; properly implemented digital signatures are more difficult to forge than the handwritten type. Digital signature schemes in the sense used here are cryptographically based, and must be implemented properly to be effective. Digital signatures can also provide non-repudiation, meaning that the signer cannot successfully claim they did not sign a message while also claiming their private key remains secret; further, some non-repudiation schemes offer a time stamp for the digital signature, so that even if the private key is exposed, the signature is valid nonetheless. Digitally signed messages may be anything representable as a bitstring: examples include electronic mail, contracts, or a message sent via some other cryptographic protocol.
2. Different Reasons:

Uses of digital signatures


As organizations move away from paper documents with ink signatures or authenticity stamps, digital signatures can provide added assurance of the provenance, identity, and status of an electronic document, as well as acknowledging informed consent and approval by a signatory. The United States Government Printing Office (GPO) publishes electronic versions of the budget, public and private laws, and congressional bills with digital signatures. Universities including Penn State, University of Chicago, and Stanford are publishing electronic student transcripts with digital signatures. Below are some common reasons for applying a digital signature to communications:

Authentication
Although messages may often include information about the entity sending a message, that information may not be accurate. Digital signatures can be used to authenticate the source of messages. When ownership of a digital signature secret key is bound to a specific user, a valid signature shows that the message was sent by that user. The importance of high confidence in sender authenticity is especially obvious in a financial context. For example, suppose a bank's branch office sends instructions to the central office requesting a change in the balance of an account. If the central office is not convinced that such a message is truly sent from an authorized source, acting on such a request could be a grave mistake.

Integrity
In many scenarios, the sender and receiver of a message may have a need for confidence that the message has not been altered during transmission. Although encryption hides the contents of a message, it may be possible to change an encrypted message without understanding it. (Some encryption algorithms, known as non-malleable ones, prevent this, but others do not.) However, if a message is digitally signed, any change in the message after signature will invalidate the signature. Furthermore, there is no efficient way to modify a message and its signature to produce a new message with a valid signature, because this is still considered to be computationally infeasible by most cryptographic hash functions (see collision resistance).
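The integrity property rests on the hash that is actually signed. A minimal sketch with Python's hashlib shows that any change to the message changes the digest, so a signature computed over the original digest would no longer verify; the messages are made-up examples.

```python
import hashlib

def digest(message: bytes) -> str:
    """The value actually signed in most schemes is a hash of the message."""
    return hashlib.sha256(message).hexdigest()

original = b"Pay 100 to account 42"
tampered = b"Pay 900 to account 42"

# A one-character change produces an entirely different digest, so the
# signature over digest(original) fails to verify against the tampered text.
changed = digest(original) != digest(tampered)
```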

Additional security precautions


Putting the private key on a smart card
All public key / private key cryptosystems depend entirely on keeping the private key secret. A private key can be stored on a user's computer, and protected by a local password, but this has two disadvantages:

- the user can only sign documents on that particular computer
- the security of the private key depends entirely on the security of the computer

A more secure alternative is to store the private key on a smart card. Many smart cards are designed to be tamper-resistant (although some designs have been broken, notably by Ross Anderson and his students). In a typical digital signature implementation, the hash calculated from the document is sent to the smart card, whose CPU encrypts the hash using the stored private key of the user, and then returns the encrypted hash. Typically, a user must activate his smart card by entering a personal identification number or PIN code (thus providing two-factor authentication). It can be arranged that the private key never leaves the smart card, although this is not always implemented. If the smart card is stolen, the thief will still need the PIN code to generate a digital signature. This reduces the security of the scheme to that of the PIN system, although it still requires an attacker to possess the card. A mitigating factor is that private keys, if generated and stored on smart cards, are usually regarded as difficult to copy, and are assumed to exist in exactly one copy. Thus, the loss of the smart card may be detected by the owner and the corresponding certificate can be immediately revoked. Private keys that are protected by software only may be easier to copy, and such compromises are far more difficult to detect.

Using smart card readers with a separate keyboard


Entering a PIN code to activate the smart card commonly requires a numeric keypad. Some card readers have their own numeric keypad. This is safer than using a card reader integrated into a PC, and then entering the PIN using that computer's keyboard. Readers with a numeric keypad are meant to circumvent the eavesdropping threat where the computer might be running a keystroke logger, potentially compromising the PIN code. Specialized card readers are also less vulnerable to tampering with their software or hardware and are often EAL3 certified.

Other smart card designs


Smart card design is an active field, and there are smart card schemes which are intended to avoid these particular problems, though so far with few security proofs.

Using digital signatures only with trusted applications

One of the main differences between a digital signature and a written signature is that the user does not "see" what he signs. The user application presents a hash code to be encrypted by the digital signing algorithm using the private key. An attacker who gains control of the user's PC can possibly replace the user application with a foreign substitute, in effect replacing the user's own communications with those of the attacker. This could allow a malicious application to trick a user into signing any document by displaying the user's original on-screen, but presenting the attacker's own documents to the signing application. To protect against this scenario, an authentication system can be set up between the user's application (word processor, email client, etc.) and the signing application. The general idea is to provide some means for both the user app and signing app to verify each other's integrity. For example, the signing application may require all requests to come from digitally-signed binaries.

Some digital signature algorithms


- Full Domain Hash, RSA-PSS, etc., based on RSA
- DSA
- ECDSA
- ElGamal signature scheme
- Undeniable signature
- SHA (typically SHA-1) with RSA
- Rabin signature algorithm
- Pointcheval-Stern signature algorithm
- BLS (cryptography)

Digital Signature Algorithm


The Digital Signature Algorithm (DSA) is a United States Federal Government standard or FIPS for digital signatures. It was proposed by the National Institute of Standards and Technology (NIST) in August 1991 for use in their Digital Signature Standard (DSS), specified in FIPS 186,[1] adopted in 1993. A minor revision was issued in 1996 as FIPS 186-1.[2] The standard was expanded further in 2000 as FIPS 186-2 and again in 2009 as FIPS 186-3.[3]

Key generation
Key generation has two phases. The first phase is a choice of algorithm parameters which may be shared between different users of the system:

- Choose an approved cryptographic hash function H. In the original DSS, H was always SHA-1, but the stronger SHA-2 hash functions are approved for use in the current DSS. The hash output may be truncated to the size of a key pair.
- Decide on a key length L and N. This is the primary measure of the cryptographic strength of the key. The original DSS constrained L to be a multiple of 64 between 512 and 1024 (inclusive). NIST 800-57[7] recommends lengths of 2048 (or 3072) for keys with security lifetimes extending beyond 2010 (or 2030), using a correspondingly longer N. The latest revision[3] specifies L and N length pairs of (1024,160), (2048,224), (2048,256), and (3072,256).
- Choose an N-bit prime q. N must be less than or equal to the hash output length.
- Choose an L-bit prime modulus p such that p − 1 is a multiple of q.
- Choose g, a number whose multiplicative order modulo p is q. This may be done by setting g = h^((p−1)/q) mod p for some arbitrary h (1 < h < p − 1), and trying again with a different h if the result comes out as 1. Most choices of h will lead to a usable g; commonly h = 2 is used.

The algorithm parameters (p, q, g) may be shared between different users of the system. The second phase computes private and public keys for a single user:

- Choose x by some random method, where 0 < x < q.
- Calculate y = g^x mod p.
- Public key is (p, q, g, y). Private key is x.

There exist efficient algorithms for computing the modular exponentiations h^a mod p and g^x mod p, such as exponentiation by squaring.
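Exponentiation by squaring can be sketched as follows. Python's built-in pow(base, exp, mod) already implements this, so the explicit version is only for illustration.

```python
def pow_mod(base, exp, mod):
    """Square-and-multiply: scan the exponent bits from least significant,
    squaring the base each step; O(log exp) multiplications."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                   # this bit contributes base^(2^i)
            result = result * base % mod
        base = base * base % mod      # base^(2^i) -> base^(2^(i+1))
        exp >>= 1
    return result
```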

Signing
Let H be the hashing function and m the message:

- Generate a random per-message value k where 0 < k < q.
- Calculate r = (g^k mod p) mod q.
- Calculate s = (k^(−1) · (H(m) + x·r)) mod q.
- Recalculate the signature in the unlikely case that r = 0 or s = 0.
- The signature is (r, s).

The extended Euclidean algorithm can be used to compute the modular inverse k^(−1) mod q.
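A sketch of the extended Euclidean algorithm specialized to computing k^(−1) mod q (the function name is illustrative; since Python 3.8, pow(k, -1, q) does the same):

```python
def mod_inverse(k, q):
    """Run the extended Euclidean algorithm on (k, q), tracking only the
    coefficient of k; when gcd(k, q) == 1 that coefficient is k^-1 mod q."""
    old_r, r = k, q
    old_s, s = 1, 0
    while r != 0:
        quot = old_r // r
        old_r, r = r, old_r - quot * r
        old_s, s = s, old_s - quot * s
    if old_r != 1:
        raise ValueError("k and q are not coprime")
    return old_s % q
```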

Verifying

- Reject the signature if 0 < r < q or 0 < s < q is not satisfied.
- Calculate w = s^(−1) mod q.
- Calculate u1 = (H(m) · w) mod q.
- Calculate u2 = (r · w) mod q.
- Calculate v = ((g^u1 · y^u2) mod p) mod q.
- The signature is valid if v = r.

DSA is similar to the ElGamal signature scheme.
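The key generation, signing and verification steps above fit together as in the following toy sketch. The parameters are far too small to be secure and are chosen only so the example runs quickly; it assumes Python 3.8+ for pow(s, -1, q), and SHA-256 stands in for the approved hash H. All function names are illustrative.

```python
import hashlib, random

def is_prime(n):
    """Trial division; adequate only for the tiny demo parameters here."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def make_params():
    """Toy parameter generation: a small prime q, a prime p with q | p-1,
    then g = h^((p-1)/q) mod p of multiplicative order q."""
    q = next(n for n in range(100003, 110000) if is_prime(n))
    k = next(k for k in range(2, 5000) if is_prime(k * q + 1))
    p = k * q + 1
    h, g = 2, pow(2, (p - 1) // q, p)
    while g == 1:                       # retry with the next h if needed
        h += 1
        g = pow(h, (p - 1) // q, p)
    return p, q, g

def hash_mod_q(message, q):
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % q

def sign(message, p, q, g, x):
    while True:
        k = random.randrange(1, q)      # fresh per-message secret
        r = pow(g, k, p) % q
        s = pow(k, -1, q) * (hash_mod_q(message, q) + x * r) % q
        if r != 0 and s != 0:           # else redraw k
            return r, s

def verify(message, r, s, p, q, g, y):
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    u1 = hash_mod_q(message, q) * w % q
    u2 = r * w % q
    v = pow(g, u1, p) * pow(y, u2, p) % p % q
    return v == r

p, q, g = make_params()
x = random.randrange(1, q)              # private key
y = pow(g, x, p)                        # public key
r, s = sign(b"hello", p, q, g, x)
```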

Correctness of the algorithm


The signature scheme is correct in the sense that the verifier will always accept genuine signatures. This can be shown as follows. First, if g = h^((p−1)/q) mod p, it follows that g^q ≡ h^(p−1) ≡ 1 (mod p) by Fermat's little theorem. Since g > 1 and q is prime, g must have order q. The signer computes

s = k^(−1) · (H(m) + x·r) mod q

Thus

k ≡ H(m)·s^(−1) + x·r·s^(−1) ≡ H(m)·w + x·r·w (mod q)

Since g has order q (mod p) we have

g^k ≡ g^(H(m)·w) · g^(x·r·w) ≡ g^(H(m)·w) · y^(r·w) ≡ g^(u1) · y^(u2) (mod p)

Finally, the correctness of DSA follows from

r = (g^k mod p) mod q = (g^(u1) · y^(u2) mod p) mod q = v

b) Explain traffic management in ATM. Also explain ATM traffic policing. (10M)

Solution:
Marking Scheme:
1. Definition of ATM = 03 marks.
2. Explanation of traffic management = 04 marks.
3. Different policies = 03 marks.

Detail Solution:
1. Definition of ATM:
Asynchronous Transfer Mode (ATM) is a cell-based switching technique that uses asynchronous time-division multiplexing.[1][2] It encodes data into small fixed-sized cells (cell relay) and provides data link layer services that run over OSI Layer 1 physical links. This differs from other technologies based on packet-switched networks (such as the Internet Protocol or Ethernet), in which variable-sized packets (known as frames when referencing Layer 2) are used. ATM exposes properties of both circuit-switched and small-packet-switched networking, making it suitable for wide area data networking as well as real-time media transport.[3] ATM uses a connection-oriented model and establishes a virtual circuit between two endpoints before the actual data exchange begins.

Structure of an ATM cell


An ATM cell consists of a 5-byte header and a 48-byte payload. The payload size of 48 bytes was chosen as described above ("Why cell?"). ATM defines two different cell formats: NNI (Network-Network Interface) and UNI (User-Network Interface). Most ATM links use the UNI cell format.

UNI ATM cell:

GFC (4 bits) | VPI (8 bits) | VCI (16 bits) | PT (3 bits) | CLP (1 bit) | HEC (8 bits)
Payload and padding if necessary (48 bytes)

NNI ATM cell:

VPI (12 bits) | VCI (16 bits) | PT (3 bits) | CLP (1 bit) | HEC (8 bits)
Payload and padding if necessary (48 bytes)

GFC = Generic Flow Control (4 bits; default: four zero bits)
VPI = Virtual Path Identifier (8 bits UNI, 12 bits NNI)
VCI = Virtual Channel Identifier (16 bits)
PT = Payload Type (3 bits)
CLP = Cell Loss Priority (1 bit)
HEC = Header Error Control (8-bit CRC, polynomial x^8 + x^2 + x + 1)

ATM uses the PT field to designate various special kinds of cells for operations, administration, and maintenance (OAM) purposes, and to delineate packet boundaries in some AALs.
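The HEC computation over the four header bytes can be sketched as a bit-by-bit CRC-8 with the generator polynomial x^8 + x^2 + x + 1 given above (a simplified illustration: the full ITU-T specification additionally XORs the remainder with a fixed coset pattern, which is omitted here):

```python
def crc8_atm(header: bytes) -> int:
    """Raw CRC-8 over the header bytes, generator x^8 + x^2 + x + 1 (0x07)."""
    reg = 0
    for byte in header:
        reg ^= byte
        for _ in range(8):
            if reg & 0x80:                       # top bit set: reduce by generator
                reg = ((reg << 1) ^ 0x07) & 0xFF
            else:
                reg = (reg << 1) & 0xFF
    return reg

# A flipped header bit changes the checksum, which is how single-bit
# errors are detected (and, with the full algorithm, corrected).
print(crc8_atm(b"\x00\x00\x00\x00"))   # 0
print(crc8_atm(b"\x00\x00\x00\x01"))   # 7
```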

Several of ATM's link protocols use the HEC field to drive a CRC-based framing algorithm, which allows locating the ATM cells with no overhead required beyond what is otherwise needed for header protection. The 8-bit CRC is used to correct single-bit header errors and detect multi-bit header errors. When multi-bit header errors are detected, the current and subsequent cells are dropped until a cell with no header errors is found. A UNI cell reserves the GFC field for a local flow control/submultiplexing system between users. This was intended to allow several terminals to share a single network connection, in the same way that two ISDN phones can share a single basic rate ISDN connection. All four GFC bits must be zero by default. The NNI cell format replicates the UNI format almost exactly, except that the 4-bit GFC field is re-allocated to the VPI field, extending the VPI to 12 bits. Thus, a single NNI ATM interconnection is capable of addressing almost 2^12 VPs of up to almost 2^16 VCs each (in practice some of the VP and VC numbers are reserved).
2. Explanation of traffic management:

Why virtual circuits?


ATM operates as a channel-based transport layer, using virtual circuits (VCs). This is encompassed in the concepts of the Virtual Path (VP) and Virtual Channel. Every ATM cell has an 8- or 12-bit Virtual Path Identifier (VPI) and 16-bit Virtual Channel Identifier (VCI) pair defined in its header. Together, these identify the virtual circuit used by the connection. The length of the VPI varies according to whether the cell is sent on the user-network interface (on the edge of the network) or on the network-network interface (inside the network). As these cells traverse an ATM network, switching takes place by changing the VPI/VCI values (label swapping). Although the VPI/VCI values are not necessarily consistent from one end of the connection to the other, the concept of a circuit is consistent (unlike IP, where any given packet could get to its destination by a different route than the others). Another advantage of virtual circuits is the ability to use them as a multiplexing layer, allowing different services (such as voice, Frame Relay, n x 64 channels, and IP) to share the same connection.

Using cells and virtual circuits for traffic engineering


Another key ATM concept involves the traffic contract. When an ATM circuit is set up, each switch on the circuit is informed of the traffic class of the connection.

ATM traffic contracts form part of the mechanism by which "Quality of Service" (QoS) is ensured. There are four basic types (and several variants), each with a set of parameters describing the connection:

1. CBR - Constant bit rate: a Peak Cell Rate (PCR) is specified, which is constant.
2. VBR - Variable bit rate: an average cell rate is specified, which can peak at a certain level for a maximum interval before being problematic.
3. ABR - Available bit rate: a minimum guaranteed rate is specified.
4. UBR - Unspecified bit rate: traffic is allocated to all remaining transmission capacity.

VBR has real-time and non-real-time variants, and serves for "bursty" traffic. Non-real-time is usually abbreviated to vbr-nrt. Most traffic classes also introduce the concept of Cell Delay Variation Tolerance (CDVT), which defines the "clumping" of cells in time. To maintain traffic contracts, networks usually use "shaping", a combination of queuing and marking of cells. "Policing" generally enforces traffic contracts.
3. For different policies:

Traffic policing
To maintain network performance, networks may police virtual circuits against their traffic contracts. If a circuit is exceeding its traffic contract, the network can either drop the cells or mark the Cell Loss Priority (CLP) bit (to identify a cell as discardable farther down the line). Basic policing works on a cell by cell basis, but this is sub-optimal for encapsulated packet traffic (as discarding a single cell will invalidate the whole packet). As a result, schemes such as Partial Packet Discard (PPD) and Early Packet Discard (EPD) have been created that will discard a whole series of cells until the next frame starts. This reduces the number of useless cells in the network, saving bandwidth for full frames. EPD and PPD work with AAL5 connections as they use the frame end bit to detect the end of packets.
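The cell-by-cell policing described above is commonly formalized as the Generic Cell Rate Algorithm (GCRA); a minimal sketch of its virtual-scheduling formulation, assuming parameters T (the cell interval, 1/PCR) and tau (the CDVT tolerance):

```python
def make_gcra(T: float, tau: float):
    """Virtual-scheduling GCRA: a cell is conforming unless it arrives
    more than tau earlier than its theoretical arrival time (TAT)."""
    tat = 0.0
    def conforming(t: float) -> bool:
        nonlocal tat
        if t < tat - tau:
            return False            # too early: police (drop or mark CLP)
        tat = max(t, tat) + T       # schedule the next expected arrival
        return True
    return conforming

police = make_gcra(T=10.0, tau=0.0)
# Cells at t = 0, 10, 40 conform; the cell at t = 15 arrives early.
print([police(t) for t in (0, 10, 15, 40)])   # [True, True, False, True]
```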

Types of virtual circuits and paths


ATM can build virtual circuits and virtual paths either statically or dynamically. Static circuits (permanent virtual circuits or PVCs) or paths (permanent virtual paths or PVPs) require that the provisioner build the circuit as a series of segments, one for each pair of interfaces through which it passes. PVPs and PVCs, though conceptually simple, require significant effort in large networks. They also do not support the re-routing of service in the event of a failure. Dynamically built PVPs (soft PVPs or SPVPs) and PVCs (soft PVCs or SPVCs), in contrast, are built by specifying the characteristics of the circuit (the service "contract") and the two endpoints. Finally, ATM networks build and tear down switched virtual circuits (SVCs) on demand when requested by an end piece of equipment. One application for SVCs is to carry individual telephone calls when a network of telephone switches are inter-connected by ATM. SVCs were also used in attempts to replace local area networks with ATM.

Virtual circuit routing


Most ATM networks supporting SPVPs, SPVCs, and SVCs use the Private Network Node Interface or Private Network-to-Network Interface (PNNI) protocol. PNNI uses the same shortest-path-first algorithm used by OSPF and IS-IS to route IP packets to share topology information between switches and select a route through a network. PNNI also includes a very powerful summarization mechanism to allow construction of very large networks, as well as a call admission control (CAC) algorithm that determines whether sufficient bandwidth is available on a proposed route through a network to satisfy the service requirements of a VC or VP.

Call admission and connection establishment


A network must establish a connection before two parties can send cells to each other. In ATM this is called a virtual circuit (VC). It can be a permanent virtual circuit (PVC), which is created administratively on the endpoints, or a switched virtual circuit (SVC), which is created as needed by the communicating parties. SVC creation is managed by signaling, in which the requesting party indicates the address of the receiving party, the type of service requested, and whatever traffic parameters may be applicable to the selected service. "Call admission" is then performed by the network to confirm that the requested resources are available and that a route exists for the connection.

Q3. a) What is convergence? Also explain IEEE 802.11 standard? (07M) Solution: Marking Scheme: 1. For Definition of convergence= 02 marks. 2. For IEEE 802.11 standard with diagram = 05 marks. Detail Solution: 1. Definition of convergence: Convergence is the approach toward a definite value, a definite point, a common view or opinion, or toward a fixed or equilibrium state. In mathematics, it describes limiting behavior, particularly of an infinite sequence or series. Convergence, convergent, or converge may refer to:

Mathematics

Limit (mathematics), referring generally to the notion that certain objects approach a limit in some sense
Convergent series, a series whose sequence of partial sums has a limit
Convergent (continued fraction), a convergent of a (possibly infinite) continued fraction
Convergent sequence, a sequence which has a limit
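As a numeric illustration of a convergent series, the partial sums of the geometric series sum over k >= 0 of (1/2)^k approach its limit 2:

```python
# Accumulate partial sums of 1 + 1/2 + 1/4 + ... ; the limit is 2.
partial, term = 0.0, 1.0
for _ in range(50):
    partial += term
    term /= 2          # each term is half the previous one

print(partial)          # within floating-point precision of the limit 2
```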

Modes of convergence

Absolute convergence, which pertains to whether the absolute value of the limit of a series or integral is finite
Uniform convergence, a pointwise convergence where the speed of convergence is independent of any value in the domain
Pointwise convergence, the convergence of functions' values at each specific input individually

Properties of convergence

Rate of convergence, the "speed" at which a convergent sequence approaches its limit
Radius of convergence, a non-negative quantity that represents a domain interval over which a power series converges

Theorems and notions about convergence


Convergence of random variables, any one of several notions of convergence in probability theory
Gromov-Hausdorff convergence, which pertains to metric spaces and is a generalization of Hausdorff distance
Dominated convergence theorem, a theorem by Henri Lebesgue
Monotone convergence theorem, any one of several such theorems defined over a monotone sequence of numbers

2. IEEE 802.11 standard with diagram: IEEE 802.11 is a set of standards for carrying out wireless local area network (WLAN) computer communication in the 2.4, 3.6 and 5 GHz frequency bands. They are created and maintained by the IEEE LAN/MAN Standards Committee (IEEE 802). The current version of the standard is IEEE 802.11-2007.

Protocols

802.11 network standards:

Protocol | Release | Freq. (GHz) | Bandwidth (MHz) | Data rate per stream (Mbit/s) | Allowable MIMO streams | Modulation | Approx. indoor range | Approx. outdoor range
802.11 (legacy) | Jun 1997 | 2.4 | 20 | 1, 2 | 1 | DSSS | 20 m (66 ft) | 100 m (330 ft)
a | Sep 1999 | 5, 3.7 [y] | 20 | 6, 9, 12, 18, 24, 36, 48, 54 | 1 | OFDM | 35 m (115 ft) | 120 m (390 ft); 5,000 m (16,000 ft) at 3.7 GHz [y]
b | Sep 1999 | 2.4 | 20 | 1, 2, 5.5, 11 | 1 | DSSS | 38 m (125 ft) | 140 m (460 ft)
g | Jun 2003 | 2.4 | 20 | 1, 2, 6, 9, 12, 18, 24, 36, 48, 54 | 1 | OFDM, DSSS | 38 m (125 ft) | 140 m (460 ft)
n | Oct 2009 | 2.4/5 | 20 | 7.2, 14.4, 21.7, 28.9, 43.3, 57.8, 65, 72.2 [z] | 4 | OFDM | 70 m (230 ft) | 250 m (820 ft)[6]
n | Oct 2009 | 2.4/5 | 40 | 15, 30, 45, 60, 90, 120, 135, 150 [z] | 4 | OFDM | 70 m (230 ft) | 250 m (820 ft)[6]

[y] IEEE 802.11y-2008 extended operation of 802.11a to the licensed 3.7 GHz band. Increased power limits allow a range of up to 5,000 m. As of 2009, it is only being licensed in the United States by the FCC.
[z] Assumes Short Guard Interval (SGI) enabled; otherwise reduce each data rate by 10%.

802.11-1997 (802.11 legacy)


The original version of the standard IEEE 802.11 was released in 1997 and clarified in 1999, but is today obsolete. It specified two net bit rates of 1 or 2 megabits per second (Mbit/s), plus forward error correction code. It specified three alternative physical layer technologies: diffuse infrared operating at 1 Mbit/s; frequency-hopping spread spectrum operating at 1 Mbit/s or 2 Mbit/s; and direct-sequence spread spectrum operating at 1 Mbit/s or 2 Mbit/s. The latter two radio technologies used microwave transmission over the Industrial Scientific Medical frequency band at 2.4 GHz. Some earlier WLAN technologies used lower frequencies, such as the U.S. 900 MHz ISM band. Legacy 802.11 with direct-sequence spread spectrum was rapidly supplanted and popularized by 802.11b.

802.11a
The 802.11a standard uses the same data link layer protocol and frame format as the original standard, but an OFDM-based air interface (physical layer). It operates in the 5 GHz band with a maximum net data rate of 54 Mbit/s, plus error correction code, which yields a realistic net achievable throughput in the mid-20 Mbit/s range. Since the 2.4 GHz band is heavily used to the point of being crowded, using the relatively unused 5 GHz band gives 802.11a a significant advantage. However, this high carrier frequency also brings a disadvantage: the effective overall range of 802.11a is less than that of 802.11b/g. In theory, 802.11a signals are absorbed more readily by walls and other solid objects in their path due to their smaller wavelength and, as a result, cannot penetrate as far as those of 802.11b. In practice, 802.11b typically has a higher range at low speeds (802.11b will reduce speed to 5.5 Mbit/s or even 1 Mbit/s at low signal strengths). However, at higher speeds, 802.11a often has the same or greater range due to less interference.

802.11b
802.11b has a maximum raw data rate of 11 Mbit/s and uses the same media access method defined in the original standard. 802.11b products appeared on the market in early 2000, since 802.11b is a direct extension of the modulation technique defined in the original standard. The dramatic increase in throughput of 802.11b (compared to the original standard) along with simultaneous substantial price reductions led to the rapid acceptance of 802.11b as the definitive wireless LAN technology. 802.11b devices suffer interference from other products operating in the 2.4 GHz band. Devices operating in the 2.4 GHz range include: microwave ovens, Bluetooth devices, baby monitors and cordless telephones.

802.11g
In June 2003, a third modulation standard was ratified: 802.11g. This works in the 2.4 GHz band (like 802.11b), but uses the same OFDM based transmission scheme as 802.11a. It operates at a maximum physical layer bit rate of 54 Mbit/s exclusive of forward error correction codes, or about 22 Mbit/s average throughput.[7] 802.11g hardware is fully backwards compatible with 802.11b hardware and therefore is encumbered with legacy issues that reduce throughput when compared to 802.11a by ~21%.

The then-proposed 802.11g standard was rapidly adopted by consumers starting in January 2003, well before ratification, due to the desire for higher data rates as well as reductions in manufacturing costs. By summer 2003, most dual-band 802.11a/b products became dual-band/tri-mode, supporting a and b/g in a single mobile adapter card or access point. Details of making b and g work well together occupied much of the lingering technical process; in an 802.11g network, however, activity of an 802.11b participant will reduce the data rate of the overall 802.11g network. Like 802.11b, 802.11g devices suffer interference from other products operating in the 2.4 GHz band.

b) Explain Multiple Access Techniques TDMA, FDMA, SDMA and SSMA? Also explain Packet Radio Multiple Access (Slotted ALOHA/PURE ALOHA)? (13M)
Solution:
Marking Scheme:
1. Definition of Multiple access techniques with diagram = 03 marks.
2. For explanation of TDMA, FDMA, SDMA and SSMA with suitable diagram = 07 marks.
3. For explanation of packet radio multiple access = 03 marks.
Detail Solution:
1. Definition of Multiple access techniques:
Time division multiple access (TDMA) is a channel access method for shared medium networks. It allows several users to share the same frequency channel by dividing the signal into different time slots. The users transmit in rapid succession, one after the other, each using his own time slot. This allows multiple stations to share the same transmission medium (e.g. radio frequency channel) while using only a part of its channel capacity. TDMA is used in the digital 2G cellular systems such as Global System for Mobile Communications (GSM), IS-136, Personal Digital Cellular (PDC) and iDEN, and in the Digital Enhanced Cordless Telecommunications (DECT) standard for portable phones. It is also used extensively in satellite systems and combat-net radio systems. For usage of Dynamic TDMA packet mode communication, see below.

TDMA frame structure: a data stream is divided into frames, and those frames are divided into time slots.

TDMA is a type of time-division multiplexing, with the special point that instead of having one transmitter connected to one receiver, there are multiple transmitters. In the case of the uplink from a mobile phone to a base station this becomes particularly difficult because the mobile phone can move around and vary the timing advance required to make its transmission match the gap in transmission from its peers.

Frequency Division Multiple Access (FDMA) is a channel access method used in multiple-access protocols as a channelization protocol. FDMA gives users an individual allocation of one or several frequency bands, or channels. Multiple access systems coordinate access between multiple users. The users may also share access via different methods such as TDMA, CDMA, or SDMA. These protocols are utilized differently, at different levels of the theoretical OSI model. Disadvantage: crosstalk, which causes interference on the other frequency and may disrupt the transmission.

Space-Division Multiple Access (SDMA) is a channel access method based on creating parallel spatial pipes next to higher-capacity pipes through spatial multiplexing and/or diversity, by which it is able to offer superior performance in radio multiple access communication systems. In traditional mobile cellular network systems, the base station has no information on the position of the mobile units within the cell and radiates the signal in all directions within the cell in order to provide radio coverage. This results in wasting power on transmissions when there are no mobile units to reach, in addition to causing interference for adjacent cells using the same frequency, so-called co-channel cells. Likewise, in reception, the antenna receives signals coming from all directions, including noise and interference signals.
By using smart antenna technology and differing spatial locations of mobile units within the cell, space-division multiple access techniques offer attractive performance enhancements. The radiation pattern of the base station, both in transmission and reception, is adapted to each user to obtain highest gain in the direction of that user. This is often done using phased array techniques.

In GSM cellular networks, the base station is aware of the mobile phone's position by use of a technique called Timing Advance (TA). The Base Transceiver Station (BTS) can determine how distant the Mobile Station (MS) is by interpreting the reported TA. This information, along with other parameters, can then be used to power down the BTS or MS, if a power control feature is implemented in the network. Power control in either the BTS or the MS is implemented in most modern networks, especially on the MS, as this ensures a better battery life for the MS and thus a better user experience (in that the need to charge the battery becomes less frequent). This is why it may actually be safer to have a BTS close to you, as your MS will be powered down as much as possible; for example, more power is transmitted from the MS than is received from the BTS even when only 6 m away from a mast. However, this estimation might not consider all the MSs that a particular BTS is supporting with EM radiation at any given time.

Spread-spectrum techniques are methods by which a signal (e.g. an electrical, electromagnetic, or acoustic signal) generated in a particular bandwidth is deliberately spread in the frequency domain, resulting in a signal with a wider bandwidth. These techniques are used for a variety of reasons, including the establishment of secure communications, increasing resistance to natural interference and jamming, preventing detection, and limiting power flux density (e.g. in satellite downlinks).
2. For explanation of TDMA, FDMA, SDMA and SSMA with suitable diagram:

TDMA characteristics

Shares single carrier frequency with multiple users
Non-continuous transmission makes handoff simpler
Slots can be assigned on demand in dynamic TDMA
Less stringent power control than CDMA due to reduced intra-cell interference
Higher synchronization overhead than CDMA
Advanced equalization may be necessary for high data rates if the channel is "frequency selective" and creates intersymbol interference
Cell breathing (borrowing resources from adjacent cells) is more complicated than in CDMA
Frequency/slot allocation complexity
Pulsating power envelope: interference with other devices
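A fixed-assignment TDMA frame can be sketched as follows (a minimal illustration; user names and slot counts are arbitrary examples):

```python
def tdma_schedule(users, slots_per_frame):
    """Fixed-assignment TDMA: slot i of every frame belongs to users[i % U],
    so each of U users gets an equal share of the channel capacity."""
    return [users[i % len(users)] for i in range(slots_per_frame)]

frame = tdma_schedule(["A", "B", "C", "D"], 8)
print(frame)                               # ['A', 'B', 'C', 'D', 'A', 'B', 'C', 'D']
print(frame.count("A") / len(frame))       # 0.25: each user gets 1/4 of the capacity
```

Dynamic TDMA would instead reassign the slots frame by frame according to demand, rather than using this fixed round-robin mapping.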

TDMA in mobile phone systems


2G systems
Most 2G cellular systems, with the notable exception of IS-95, are based on TDMA. GSM, D-AMPS, PDC, iDEN, and PHS are examples of TDMA cellular systems. GSM combines TDMA with frequency hopping and wideband transmission to minimize common types of interference.

In the GSM system, the synchronization of the mobile phones is achieved by sending timing advance commands from the base station, which instruct the mobile phone to transmit earlier and by how much. This compensates for the propagation delay resulting from the finite speed of radio waves. The mobile phone is not allowed to transmit for its entire time slot; rather, there is a guard interval at the end of each time slot. As the transmission moves into the guard period, the mobile network adjusts the timing advance to synchronize the transmission.

Initial synchronization of a phone requires even more care. Before a mobile transmits there is no way to actually know the offset required. For this reason, an entire time slot has to be dedicated to mobiles attempting to contact the network (known as the RACH in GSM). The mobile attempts to broadcast at the beginning of the time slot, as received from the network. If the mobile is located next to the base station, there will be no time delay and this will succeed. If, however, the mobile phone is at just less than 35 km from the base station, the time delay will mean the mobile's broadcast arrives at the very end of the time slot. In that case, the mobile will be instructed to broadcast its messages starting nearly a whole time slot earlier than would be expected otherwise. Finally, if the mobile is beyond the 35 km cell range in GSM, then the RACH will arrive in a neighboring time slot and be ignored. It is this feature, rather than limitations of power, that limits the range of a GSM cell to 35 km when no special extension techniques are used. By changing the synchronization between the uplink and downlink at the base station, however, this limitation can be overcome.
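The 35 km figure above can be recovered from the timing-advance quantization: TA is an integer number of bit periods (0-63) at the GSM symbol rate of about 270.833 kbit/s, and the propagation delay is a round trip, so each TA step corresponds to roughly half a bit period of one-way distance. A quick check, assuming free-space propagation at 3 x 10^8 m/s:

```python
C = 3.0e8                       # radio propagation speed, m/s (free-space assumption)
bit_period = 1 / 270833.0       # GSM bit duration, ~3.69 microseconds

step_m = C * bit_period / 2     # one-way range per TA step (round trip halved)
max_range_km = 63 * step_m / 1000

print(round(step_m))            # ~554 m per timing-advance step
print(round(max_range_km, 1))   # ~34.9 km, matching the 35 km cell limit
```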

3G systems
Although most major 3G systems are primarily based upon CDMA, time division duplexing (TDD), packet scheduling (dynamic TDMA) and packet-oriented multiple access schemes are available in 3G form, combined with CDMA to take advantage of the benefits of both technologies. While the most popular form of the UMTS 3G system uses CDMA and frequency division duplexing (FDD) instead of TDMA, TDMA is combined with CDMA and time division duplexing in two standard UMTS UTRA modes, UTRA TDD-HCR (better known as TD-CDMA) and UTRA TDD-LCR (better known as TD-SCDMA). In each mode, more than one handset may share a single time slot. UTRA TDD-HCR is used most commonly by UMTS-TDD to provide Internet access, whereas UTRA TDD-LCR provides some interoperability with the forthcoming Chinese 3G standard.

Features

FDMA requires high-performing filters in the radio hardware, in contrast to TDMA and CDMA.
FDMA is not vulnerable to the timing problems that TDMA has.
Since a predetermined frequency band is available for the entire period of communication, stream data (a continuous flow of data that may not be packetized) can easily be used with FDMA.

Due to the frequency filtering, FDMA is not sensitive to the near-far problem, which is pronounced for CDMA.
Each user transmits and receives at different frequencies, as each user gets a unique frequency slot.

It is important to distinguish between FDMA and frequency-division duplexing (FDD). While FDMA allows multiple users simultaneous access to a certain system, FDD refers to how the radio channel is shared between the uplink and downlink (for instance, the traffic going back and forth between a mobile-phone and a base-station). Furthermore, frequency-division multiplexing (FDM) should not be confused with FDMA. The former is a physical layer technique that combines and transmits low-bandwidth channels through a high-bandwidth channel. FDMA, on the other hand, is an access method in the data link layer. FDMA also supports demand assignment in addition to fixed assignment. Demand assignment allows all users apparently continuous access of the radio spectrum by assigning carrier frequencies on a temporary basis using a statistical assignment process. The first FDMA demand-assignment system for satellite was developed by COMSAT for use on the Intelsat series IVA and V satellites.

The concept of SDMA:

Example of frequency reuse factor or pattern 1/4.

In a cellular radio system, a land area to be supplied with radio service is divided into regular shaped cells, which can be hexagonal, square, circular or some other irregular shapes, although hexagonal cells are conventional. Each of these cells is assigned multiple frequencies (f1 - f6) which have corresponding radio base stations. The group of frequencies can be reused in other cells, provided that the same frequencies are not reused in adjacent neighboring cells, as that would cause co-channel interference.

The increased capacity in a cellular network, compared with a network with a single transmitter, comes from the fact that the same radio frequency can be reused in a different area for a completely different transmission. If there is a single plain transmitter, only one transmission can be used on any given frequency. Unfortunately, there is inevitably some level of interference from the signal from the other cells which use the same frequency. This means that, in a standard FDMA system, there must be at least a one-cell gap between cells which reuse the same frequency.

In the simple case of the taxi company, each radio had a manually operated channel selector knob to tune to different frequencies. As the drivers moved around, they would change from channel to channel. The drivers know which frequency covers approximately what area. When they do not receive a signal from the transmitter, they will try other channels until they find one that works. The taxi drivers only speak one at a time, when invited by the base station operator (in a sense TDMA).

Cell signal encoding


To distinguish signals from several different transmitters, frequency division multiple access (FDMA) and code division multiple access (CDMA) were developed. With FDMA, the transmitting and receiving frequencies used in each cell are different from the frequencies used in each neighbouring cell. In a simple taxi system, the taxi driver manually tuned to a frequency of a chosen cell to obtain a strong signal and to avoid interference from signals from other cells. The principle of CDMA is more complex, but achieves the same result; the distributed transceivers can select one cell and listen to it. Other available methods of multiplexing such as polarization division multiple access (PDMA) and time division multiple access (TDMA) cannot be used to separate signals from one cell to the next since the effects of both vary with position and this would make signal separation practically impossible. Time division multiple access, however, is used in combination with either FDMA or CDMA in a number of systems to give multiple channels within the coverage area of a single cell.

Frequency reuse
The key characteristic of a cellular network is the ability to re-use frequencies to increase both coverage and capacity. As described above, adjacent cells must utilise different frequencies, however there is no problem with two cells sufficiently far apart operating on the same frequency. The elements that determine frequency reuse are the reuse distance and the reuse factor.

The reuse distance, D, is calculated as D = R * sqrt(3N),

where R is the cell radius and N is the number of cells per cluster. Cells may vary in radius in the ranges (1 km to 30 km). The boundaries of the cells can also overlap between adjacent cells and large cells can be divided into smaller cells [1] The frequency reuse factor is the rate at which the same frequency can be used in the network. It is 1/K (or K according to some books) where K is the number of cells which cannot use the same frequencies for transmission. Common values for the frequency reuse factor are 1/3, 1/4, 1/7, 1/9 and 1/12 (or 3, 4, 7, 9 and 12 depending on notation). In case of N sector antennas on the same base station site, each with different direction, the base station site can serve N different sectors. N is typically 3. A reuse pattern of N/K denotes a further division in frequency among N sector antennas per site. Some current and historical reuse patterns are 3/7 (North American AMPS), 6/4 (Motorola NAMPS), and 3/4 (GSM). If the total available bandwidth is B, each cell can only utilize a number of frequency channels corresponding to a bandwidth of B/K, and each sector can use a bandwidth of B/NK. Code division multiple access-based systems use a wider frequency band to achieve the same rate of transmission as FDMA, but this is compensated for by the ability to use a frequency reuse factor of 1, for example using a reuse pattern of 1/1. In other words, adjacent base station sites use the same frequencies, and the different base stations and users are separated by codes rather than frequencies. While N is shown as 1 in this example, that does not mean the CDMA cell has only one sector, but rather that the entire cell bandwidth is also available to each sector individually. Depending on the size of the city, a taxi system may not have any frequency-reuse in its own city, but certainly in other nearby cities, the same frequency can be used. In a big city, on the other hand, frequency-reuse could certainly be in use.
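Using the standard reuse-distance relation D = R * sqrt(3N) for hexagonal cells, the figures above can be evaluated directly (a quick sketch; the 25 MHz system bandwidth is an arbitrary illustrative number):

```python
import math

def reuse_distance(R_km: float, N: int) -> float:
    """Co-channel reuse distance for hexagonal cells: D = R * sqrt(3N)."""
    return R_km * math.sqrt(3 * N)

# Classic 7-cell cluster with 1 km cell radius
D = reuse_distance(1.0, 7)
print(round(D, 2))            # 4.58 km between co-channel cells

# With total bandwidth B and reuse factor K, each cell gets B / K
B_mhz, K = 25.0, 7            # illustrative system bandwidth
print(round(B_mhz / K, 2))    # ~3.57 MHz of spectrum per cell
```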

Commercial use Of SSMA:


The 1976 publication of Spread Spectrum Systems by Robert Dixon, ISBN 0-471-21629-1, was a significant milestone in the commercialization of this technology. Previous publications were either classified military reports or academic papers on narrow subtopics. Dixon's book was the first comprehensive unclassified review of the technology and set the stage for increasing research into commercial applications.

Initial commercial use of spread spectrum began in the 1980s in the US with three systems: Equatorial Communications System's very small aperture (VSAT) satellite terminal system for newspaper newswire services, Del Norte Technology's radio navigation system for navigation of aircraft for crop dusting and similar applications, and Qualcomm's OmniTRACS system for communications to trucks. In the Qualcomm and Equatorial systems, spread spectrum enabled small antennas that viewed more than one satellite to be used, since the processing gain of spread spectrum eliminated interference. The Del Norte system used the high bandwidth of spread spectrum to improve location accuracy.

In 1981, the Federal Communications Commission started exploring ways to permit more general civil uses of spread spectrum in a Notice of Inquiry docket.[3] This docket was proposed to the FCC and then directed by Michael Marcus of the FCC staff. The proposals in the docket were generally opposed by spectrum users and radio equipment manufacturers, although they were supported by the then Hewlett-Packard Corp. The laboratory group supporting the proposal would later become part of Agilent. The May 1985 decision[4] in this docket permitted unlicensed use of spread spectrum in 3 bands at powers up to 1 watt. The FCC said at the time that it would welcome additional requests for spread spectrum in other bands. The resulting rules, now codified as 47 CFR 15.247,[5] permitted Wi-Fi, Bluetooth, and many other products including cordless telephones. These rules were then copied in many other countries. Qualcomm was incorporated within 2 months after the decision, to commercialize CDMA.

Spread-spectrum telecommunications
This is a technique in which a (telecommunication) signal is transmitted on a bandwidth considerably larger than the frequency content of the original information. Spread-spectrum telecommunications is a signal-structuring technique that employs direct sequence, frequency hopping, or a hybrid of these, which can be used for multiple access and/or multiple functions. This technique decreases the potential interference to other receivers while achieving privacy. Spread spectrum generally makes use of a sequential noise-like signal structure to spread the normally narrowband information signal over a relatively wideband (radio) band of frequencies. The receiver correlates the received signals to retrieve the original information signal.

Originally there were two motivations: either to resist enemy efforts to jam the communications (anti-jam, or AJ), or to hide the fact that communication was even taking place, sometimes called low probability of intercept (LPI).

Frequency-hopping spread spectrum (FHSS), direct-sequence spread spectrum (DSSS), time-hopping spread spectrum (THSS), chirp spread spectrum (CSS), and combinations of these techniques are forms of spread spectrum. Each of these techniques employs pseudorandom number sequences, created using pseudorandom number generators, to determine and control the spreading pattern of the signal across the allotted bandwidth. Ultra-wideband (UWB) is another modulation technique that accomplishes the same purpose, based on transmitting short-duration pulses. The wireless Ethernet standard IEEE 802.11 uses either FHSS or DSSS in its radio interface.
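The direct-sequence idea described above can be sketched numerically. In the toy example below, every parameter (chip length, noise level, PN sequence) is an arbitrary assumption, not taken from any standard: each data bit is spread by a pseudorandom ±1 chip sequence, noise is added, and the receiver recovers the bits by correlating against the same sequence.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative parameters (assumptions, not from any particular standard)
chips_per_bit = 16                        # processing gain of 16 (~12 dB)
bits = rng.integers(0, 2, size=8)         # data bits to send

# Pseudorandom +/-1 chip sequence, known to transmitter and receiver
pn = rng.choice(np.array([-1.0, 1.0]), size=chips_per_bit)

# Spread: map each bit to +/-1 and multiply it onto the whole chip sequence
symbols = 2.0 * bits - 1.0
tx = np.concatenate([b * pn for b in symbols])

# Channel: additive white Gaussian noise
rx = tx + 0.8 * rng.standard_normal(tx.size)

# Despread: correlate each received chip block with the same PN sequence
rx_blocks = rx.reshape(-1, chips_per_bit)
decisions = (rx_blocks @ pn > 0).astype(int)

print("sent:     ", bits)
print("recovered:", decisions)
```

The correlation sums the desired signal coherently (growing with the number of chips) while the noise adds incoherently, which is exactly the processing gain that lets spread-spectrum links tolerate interference.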

3. Explanation of packet radio multiple access:

General Packet Radio Service:


General packet radio service (GPRS) is a packet-oriented mobile data service available to users of the 2G cellular communication system global system for mobile communications (GSM), as well as of the 3G systems. In 2G systems, GPRS provides data rates of 56–114 kbit/s.[1] GPRS data transfer is typically charged per megabyte of traffic transferred, while data communication via traditional circuit switching is billed per minute of connection time, independent of whether the user is actually using the capacity or is in an idle state. GPRS is a best-effort packet-switched service, as opposed to circuit switching, where a certain quality of service (QoS) is guaranteed during the connection for non-mobile users.

2G cellular systems combined with GPRS are often described as 2.5G, that is, a technology between the second (2G) and third (3G) generations of mobile telephony.[2] It provides moderate-speed data transfer by using unused time division multiple access (TDMA) channels in, for example, the GSM system. Originally there was some thought to extend GPRS to cover other standards, but instead those networks are being converted to use the GSM standard, so that GSM is the only kind of network where GPRS is in use. GPRS is integrated into GSM Release 97 and newer releases. It was originally standardized by the European Telecommunications Standards Institute (ETSI), but now by the 3rd Generation Partnership Project (3GPP).[3][4] GPRS was developed as a GSM response to the earlier CDPD and i-mode packet-switched cellular technologies.

Services offered
GPRS extends the GSM circuit switched data capabilities and makes the following services possible:

"Always on" internet access
Multimedia messaging service (MMS)
Push to talk over cellular (PoC/PTT)
Instant messaging and presence (wireless village)
Internet applications for smart devices through wireless application protocol (WAP)
Point-to-point (P2P) service: inter-networking with the Internet (IP)

If SMS over GPRS is used, an SMS transmission speed of about 30 SMS messages per minute may be achieved. This is much faster than using the ordinary SMS over GSM, whose SMS transmission speed is about 6 to 10 SMS messages per minute.

Protocols supported
GPRS supports the following protocols:

Internet protocol (IP). In practice, mobile built-in browsers use IPv4, since IPv6 is not yet popular.
Point-to-point protocol (PPP). In this mode PPP is often not supported by the mobile phone operator, but if the mobile is used as a modem for a connected computer, PPP is used to tunnel IP to the phone. This allows an IP address to be assigned dynamically to the mobile equipment.
X.25 connections. This is typically used for applications like wireless payment terminals, although it has been removed from the standard. X.25 can still be supported over PPP, or even over IP, but doing this requires either a network-based router to perform encapsulation or intelligence built into the end device/terminal, e.g., user equipment (UE).

When TCP/IP is used, each phone can have one or more IP addresses allocated. GPRS will store and forward the IP packets to the phone during cell handover (when the user moves from one cell to another). TCP handles any packet loss (e.g. due to a radio-noise-induced pause), resulting in a temporary throttling of transmission speed.

Hardware
Devices supporting GPRS are divided into three classes:

Class A: Can be connected to GPRS service and GSM service (voice, SMS), using both at the same time. Such devices are available today.

Class B: Can be connected to GPRS service and GSM service (voice, SMS), but can use only one or the other at a given time. During GSM service (a voice call or SMS), GPRS service is suspended, and then resumed automatically after the GSM service has concluded. Most GPRS mobile devices are Class B.

Class C: Can be connected to either GPRS service or GSM service (voice, SMS), and must be switched manually between the two.

A true Class A device may be required to transmit on two different frequencies at the same time, and thus needs two radios. To get around this expensive requirement, a GPRS mobile may implement the dual transfer mode (DTM) feature. A DTM-capable mobile may use simultaneous voice and packet data, with the network coordinating to ensure that it is not required to transmit on two different frequencies at the same time. Such mobiles are considered pseudo-Class A, sometimes referred to as "simple Class A". Some networks were expected to support DTM in 2007.

Q4. a) Compare the probability of error for an optimum filter? What is matched filter? (10M) Solution: Marking Scheme: 1. Definition of probability for error = 03 marks. 2. Explanation of matched filter with one example it carries= 07 marks. Detail Solution: 1. Definition of probability for error: Probability is a way of expressing knowledge or belief that an event will occur or has occurred. In mathematics the concept has been given an exact meaning in probability theory, that is used extensively in such areas of study as mathematics, statistics, finance, gambling, science, and philosophy to draw conclusions about the likelihood of potential events and the underlying mechanics of complex systems.

Definition
Probability distribution or random variable
Let X be a random variable with mean value μ:

    μ = E[X].

Here the operator E denotes the average or expected value of X. Then the standard deviation of X is the quantity

    σ = √( E[(X − μ)²] ).

That is, the standard deviation σ (sigma) is the square root of the average value of (X − μ)². In the case where X takes random values from a finite data set x₁, x₂, …, x_N, each value having the same probability, the standard deviation is

    σ = √( (1/N) [ (x₁ − μ)² + (x₂ − μ)² + ⋯ + (x_N − μ)² ] ),

with

    μ = (1/N)(x₁ + x₂ + ⋯ + x_N),

or, using summation notation,

    σ = √( (1/N) Σᵢ (xᵢ − μ)² ),   μ = (1/N) Σᵢ xᵢ.

The standard deviation of a (univariate) probability distribution is the same as that of a random variable having that distribution. Not all random variables have a standard deviation, since these expected values need not exist. For example, the standard deviation of a random variable which follows a Cauchy distribution is undefined because its expected value is undefined.
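The finite-data-set formulas can be checked numerically. The data set below is an arbitrary illustrative example, and the log-normal parameters are likewise assumed for illustration:

```python
import math

# Finite data set, each value equally likely
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
N = len(data)

mu = sum(data) / N                                       # mean
sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / N)  # population std dev

print(mu, sigma)   # 5.0 2.0

# Log-normal distribution with parameters mu_ln and sigma_ln:
# std dev from the closed form [(exp(sigma^2) - 1) exp(2 mu + sigma^2)]^(1/2)
mu_ln, sigma_ln = 0.0, 0.5
sd_lognormal = math.sqrt((math.exp(sigma_ln ** 2) - 1)
                         * math.exp(2 * mu_ln + sigma_ln ** 2))
print(sd_lognormal)
```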

Continuous random variable


The standard deviation of a continuous real-valued random variable X with probability density function p(x) is

    σ = √( ∫ (x − μ)² p(x) dx ),

where

    μ = ∫ x p(x) dx,

and where the integrals are definite integrals taken for x ranging over the sample space of X. In the case of a parametric family of distributions, the standard deviation can be expressed in terms of the parameters. For example, in the case of the log-normal distribution with parameters μ and σ², the standard deviation is [(exp(σ²) − 1) exp(2μ + σ²)]^(1/2).

2. Matched filter with one example:

In telecommunications, a matched filter (originally known as a North filter[1]) is obtained by correlating a known signal, or template, with an unknown signal to detect the presence of the template in the unknown signal. This is equivalent to convolving the unknown signal with a conjugated time-reversed version of the template (cross-correlation). The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio (SNR) in the presence of additive stochastic noise. Matched filters are commonly used in radar, in which a known signal is sent out, and the reflected signal is examined for common elements of the out-going signal. Pulse compression is an example of matched filtering. Two-dimensional matched filters are commonly used in image processing, e.g., to improve SNR for X-ray pictures.

Derivation of the matched filter

The following section derives the matched filter for a discrete-time system. The derivation for a continuous-time system is similar, with summations replaced with integrals. The matched filter is the linear filter, h, that maximizes the output signal-to-noise ratio.

Though we most often express filters as the impulse response of convolution systems, as above (see LTI system theory), it is easiest to think of the matched filter in the context of the inner product, which we will see shortly. We can derive the linear filter that maximizes output signal-to-noise ratio by invoking a geometric argument. The intuition behind the matched filter relies on correlating the received signal (a vector) with a filter (another vector) that is parallel with the signal, maximizing the inner product. This enhances the signal. When we consider the additive stochastic noise, we have the additional challenge of minimizing the output due to noise by choosing a filter that is orthogonal to the noise. Let us formally define the problem. We seek a filter, h, such that we maximize the output signal-to-noise ratio, where the output is the inner product of the filter and the observed signal x. Our observed signal consists of the desirable signal s and additive noise v:

    x = s + v.

Let us define the covariance matrix of the noise, reminding ourselves that this matrix has Hermitian symmetry, a property that will become useful in the derivation:

    R_v = E{ v v^H },

where ( )^H denotes the Hermitian (conjugate) transpose, and E denotes expectation. Let us call our output, y, the inner product of our filter and the observed signal such that

    y = h^H x = h^H s + h^H v = y_s + y_v.

We now define the signal-to-noise ratio, which is our objective function, to be the ratio of the power of the output due to the desired signal to the power of the output due to the noise:

    SNR = |y_s|² / E[|y_v|²].

We rewrite the above:

    SNR = |h^H s|² / E[|h^H v|²].

We wish to maximize this quantity by choosing h. Expanding the denominator of our objective function, we have

    E[|h^H v|²] = E[(h^H v)(h^H v)^H] = h^H E[v v^H] h = h^H R_v h.

Now, our SNR becomes

    SNR = |h^H s|² / (h^H R_v h).

We will rewrite this expression with some matrix manipulation. The reason for this seemingly counterproductive measure will become evident shortly. Exploiting the Hermitian symmetry of the covariance matrix R_v, we can write

    SNR = |(R_v^{1/2} h)^H (R_v^{−1/2} s)|² / ((R_v^{1/2} h)^H (R_v^{1/2} h)).

We would like to find an upper bound on this expression. To do so, we first recognize a form of the Cauchy-Schwarz inequality:

    |a^H b|² ≤ (a^H a)(b^H b),

which is to say that the square of the inner product of two vectors can only be as large as the product of the individual inner products of the vectors. This concept returns to the intuition behind the matched filter: this upper bound is achieved when the two vectors a and b are parallel. We resume our derivation by expressing the upper bound on our SNR in light of the geometric inequality above:

    SNR ≤ [(R_v^{1/2} h)^H (R_v^{1/2} h)] [(R_v^{−1/2} s)^H (R_v^{−1/2} s)] / ((R_v^{1/2} h)^H (R_v^{1/2} h)).

Our valiant matrix manipulation has now paid off. We see that the expression for our upper bound can be greatly simplified:

    SNR ≤ s^H R_v^{−1} s.

We can achieve this upper bound if we choose

    h = α R_v^{−1} s,

where α is an arbitrary real number. To verify this, we plug h into our expression for the output SNR:

    SNR = |α s^H R_v^{−1} s|² / (α² s^H R_v^{−1} R_v R_v^{−1} s) = s^H R_v^{−1} s.

Thus, our optimal matched filter is

    h = α R_v^{−1} s.

We often choose to normalize the expected value of the power of the filter output due to the noise to unity. That is, we constrain

    E[|y_v|²] = h^H R_v h = 1.

This constraint implies a value of α, for which we can solve:

    α² s^H R_v^{−1} s = 1,

yielding

    α = 1 / √(s^H R_v^{−1} s),

giving us our normalized filter,

    h = R_v^{−1} s / √(s^H R_v^{−1} s).

If we care to write the impulse response of the filter for the convolution system, it is simply the complex conjugate time reversal of h.

Though we have derived the matched filter in discrete time, we can extend the concept to continuous-time systems if we replace Rv with the continuous-time autocorrelation function of the noise, assuming a continuous signal s(t), continuous noise v(t), and a continuous filter h(t).
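The result can be verified with a small numerical sketch. The signal vector and noise covariance below are arbitrary assumptions chosen for illustration: the filter h = R_v⁻¹s, normalized so the output noise power is unity, attains the upper bound s^H R_v⁻¹ s, and a naive (unwhitened) filter does no better.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical known signal and noise covariance (arbitrary assumptions)
s = np.array([1.0, 0.5, -0.3, 0.2])
A = rng.standard_normal((4, 4))
Rv = A @ A.T + 4.0 * np.eye(4)            # symmetric positive-definite R_v

# Normalized matched filter: h = R_v^{-1} s / sqrt(s^T R_v^{-1} s)
Rv_inv_s = np.linalg.solve(Rv, s)
h = Rv_inv_s / np.sqrt(s @ Rv_inv_s)

# Achieved output SNR versus the Cauchy-Schwarz upper bound s^T R_v^{-1} s
snr = (h @ s) ** 2 / (h @ Rv @ h)
bound = s @ Rv_inv_s

print(snr, bound)   # the matched filter attains the bound
```

Note that for white noise (R_v proportional to the identity) this reduces to h being a scaled copy of s, i.e., a pure correlator.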

Example of matched filter in digital communications


The matched filter is also used in communications. In the context of a communication system that sends binary messages from the transmitter to the receiver across a noisy channel, a matched filter can be used to detect the transmitted pulses in the noisy received signal.

Imagine we want to send the sequence "0101100100" coded in unipolar non-return-to-zero (NRZ) through a certain channel. Mathematically, a sequence in NRZ code can be described as a sequence of unit pulses or shifted rect functions, each pulse being weighted by +1 if the bit is "1" and by 0 if the bit is "0". Formally, the scaling factor for the kth bit is

    a_k = 1 if bit k is "1",   a_k = 0 if bit k is "0".

We can represent our message, M(t), as the sum of shifted unit pulses:

    M(t) = Σ_k a_k Π((t − kT)/T),

where T is the time length of one bit and Π(·) is the unit rectangular pulse. Thus, the signal to be sent by the transmitter is M(t) itself.

If we model our noisy channel as an AWGN channel, white Gaussian noise is added to the signal. At the receiver end, for a signal-to-noise ratio of 3 dB, the received waveform is the transmitted pulse train buried in noise.

A first glance will not reveal the original transmitted sequence. There is a high power of noise relative to the power of the desired signal (i.e., there is a low signal-to-noise ratio). If the receiver were to sample this signal at the correct moments, the resulting binary message could belie the original transmitted one. To increase our signal-to-noise ratio, we pass the received signal through a matched filter. In this case, the filter should be matched to an NRZ pulse (equivalent to a "1" coded in NRZ code). Precisely, the impulse response of the ideal matched filter, assuming white (uncorrelated) noise, should be a time-reversed complex-conjugated scaled version of the signal that we are seeking. We choose

    h(t) = Π(t/T).

In this case, due to symmetry, the time-reversed complex conjugate of h(t) is in fact h(t) itself, allowing us to call h(t) the impulse response of our matched filter convolution system. After convolving with the correct matched filter, the resulting signal, M_filtered(t), is

    M_filtered(t) = M(t) * h(t),

where * denotes convolution.

This can now be safely sampled by the receiver at the correct sampling instants and compared to an appropriate threshold, resulting in a correct interpretation of the binary message.
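The whole example can be simulated in a few lines. The samples-per-bit count and noise level below are assumptions chosen for illustration; the receiver convolves with the rectangular pulse, samples at the end of each bit interval, and thresholds.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

bits = np.array([0, 1, 0, 1, 1, 0, 0, 1, 0, 0])   # the sequence "0101100100"
T = 8                                              # samples per bit (assumed)

# Unipolar NRZ: bit "1" -> rectangular pulse of amplitude 1, bit "0" -> 0
tx = np.repeat(bits.astype(float), T)

# AWGN channel (noise level chosen for illustration)
rx = tx + 0.3 * rng.standard_normal(tx.size)

# Matched filter for a rectangular NRZ pulse is the pulse itself
h = np.ones(T)
filtered = np.convolve(rx, h)

# Sample at the end of each bit interval, where the filter output peaks
samples = filtered[T - 1::T][:bits.size]

# Threshold midway between the two possible peak values (0 and T)
decisions = (samples > T / 2).astype(int)
print(decisions)
```

The filter integrates each bit over its full duration, so the signal contribution adds coherently while the noise averages out, raising the SNR at the sampling instants.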

b) Explain basic concept of cellular system? How frequency reused method is implemented? (10M) Solution: Marking Scheme: 1. For explanation of concept of cellular system = 05 marks. 2. For explanation of frequency reuse implementation with one example = 05 marks. 1. Explanation of concept of cellular system:

Advanced Mobile Phone System


Advanced Mobile Phone System (AMPS) was an analog mobile phone system standard developed by Bell Labs and officially introduced in the Americas in 1983,[1][2] Israel in 1986, and Australia in 1987.[3] It was the primary analog mobile phone system in North America (and other locales) through the 1980s and into the 2000s. As of February 18, 2008, carriers in the United States were no longer required to support AMPS, and companies such as AT&T and Verizon have discontinued this service permanently. AMPS was discontinued in Australia in September 2000.

Technology:
AMPS was a first-generation cellular technology that uses separate frequencies, or "channels", for each conversation (see FDMA). It therefore required considerable bandwidth for a large number of users. In general terms, AMPS was very similar to the older "0G" Improved Mobile Telephone Service, but used considerably more computing power to select frequencies, hand off conversations to PSTN lines, and handle billing and call setup. What really separated AMPS from older systems is the "back end" call setup functionality. In AMPS, the cell centers could flexibly assign channels to handsets based on signal strength, allowing the same frequency to be re-used in various locations without interference. This allowed a larger number of phones to be supported over a geographical area. AMPS pioneers coined the term "cellular" because of the system's use of small hexagonal "cells".

Digital AMPS

Later, many AMPS networks were partially converted to D-AMPS, often referred to as TDMA (though TDMA is a generic term that applies to many cellular systems). D-AMPS is a digital, 2G standard used mainly by AT&T Mobility and U.S. Cellular in the United States, Rogers Wireless in Canada, Telcel in Mexico, Vivo S.A. and Telecom Italia Mobile (TIM) in Brazil, VimpelCom in Russia, Movilnet in Venezuela. In Latin America, AMPS is no longer offered and has been replaced by GSM and new UMTS networks.

GSM and CDMA2000


AMPS and D-AMPS have now been phased out in favor of either CDMA2000 or GSM which allow for higher capacity data transfers for services such as WAP, Multimedia Messaging System (MMS), and wireless Internet access. There are some phones capable of supporting AMPS, D-AMPS and GSM all in one phone (using the GAIT standard).

Analog AMPS being replaced by digital


In 2002, the FCC decided to no longer require A and B carriers to support AMPS service as of February 18, 2008. Since the AMPS standard is analog technology, it suffers from an inherently inefficient use of the frequency spectrum. All AMPS carriers have converted most of their consumer base to a digital standard such as CDMA2000 or GSM and continue to do so at a rapid pace. Digital technologies such as GSM and CDMA2000 support multiple voice calls on the same channel and offer enhanced features such as two-way text messaging and data services. Unlike in the United States, the Canadian Radio-television and Telecommunications Commission (CRTC) and Industry Canada have not set any requirement for maintaining AMPS service in Canada. Rogers Wireless has dismantled their AMPS (along with IS-136) network; the networks were shut down May 31, 2007. Bell Mobility and Telus Mobility, who operated AMPS networks in Canada, announced that they would observe the same timetable as outlined by the FCC in the United States, and as a result would not begin to dismantle their AMPS networks until after February 2008.[6]

2. Frequency reuse implementation with one example:

Frequency reuse is implemented by dividing the service area into cells and assigning each cell in a cluster a different group of channels; the same groups of channels are then repeated in cells that are far enough apart that co-channel interference remains acceptable. In AMPS, for example, the available channels were commonly distributed over a 7-cell cluster, so any given channel group was reused only in cells separated by at least the reuse distance. Because each channel could then carry a different conversation in each reuse location, the same spectrum supported many more simultaneous calls than a single high-power transmitter could.

Q5. a) How channel capacity is improved by cell splitting and cell sectoring? (10M) Solution: Marking Scheme: 1. Definition of channel capacity= 03 marks. 2. Explanations of channel capacity by using splitting cell & sectoring = 07 marks. Detail Solution: 1. Definition of channel capacity:

Channel capacity
In electrical engineering, computer science and information theory, channel capacity is the tightest upper bound on the amount of information that can be reliably transmitted over a communications channel. By the noisy-channel coding theorem, the channel capacity of a given channel is the limiting information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.[1] [2] Information theory, developed by Claude E. Shannon during World War II, defines the notion of channel capacity and provides a mathematical model by which one can compute it. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.

Formal definition

Let X represent the space of signals that can be transmitted, and Y the space of signals received, during a block of time over the channel. Let

    p_{Y|X}(y|x)

be the conditional distribution function of Y given X. Treating the channel as a known statistical system, p_{Y|X}(y|x) is an inherent fixed property of the communications channel (representing the nature of the noise in it). Then the joint distribution

    p_{X,Y}(x, y)

of X and Y is completely determined by the channel and by the choice of

    p_X(x),

the marginal distribution of signals we choose to send over the channel. The joint distribution can be recovered by using the identity

    p_{X,Y}(x, y) = p_{Y|X}(y|x) p_X(x).

Under these constraints, we next maximize the amount of information, or the message, that we can communicate over the channel. The appropriate measure for this is the mutual information I(X;Y), and this maximum mutual information is called the channel capacity and is given by

    C = sup_{p_X} I(X; Y).
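As a concrete illustration of maximizing mutual information over the input distribution, the sketch below numerically evaluates the capacity of a binary symmetric channel (the crossover probability 0.1 is an arbitrary assumption) and compares the grid-search maximum with the known closed form C = 1 − H₂(p):

```python
import numpy as np

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) in bits for discrete X, Y; p_y_given_x[x][y] = P(Y=y | X=x)."""
    p_x = np.asarray(p_x, float)
    p_y_given_x = np.asarray(p_y_given_x, float)
    p_xy = p_x[:, None] * p_y_given_x          # joint: p(x,y) = p(x) p(y|x)
    p_y = p_xy.sum(axis=0)                     # marginal of Y
    mask = p_xy > 0
    return float((p_xy[mask]
                  * np.log2(p_xy[mask] / (p_x[:, None] * p_y[None, :])[mask])
                  ).sum())

# Binary symmetric channel with crossover probability 0.1
p = 0.1
channel = [[1 - p, p], [p, 1 - p]]

# Maximize I(X;Y) over input distributions [q, 1-q] by grid search
capacity = max(mutual_information([q, 1 - q], channel)
               for q in np.linspace(0.001, 0.999, 999))

# Closed form: C = 1 - H2(p), attained at the uniform input
h2 = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
print(capacity, 1 - h2)
```

The maximum occurs at the uniform input q = 1/2, as expected from the symmetry of the channel.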

2. Explanations of channel capacity by using splitting cell & sectoring:

Cellular network:
A cellular network is a radio network made up of a number of cells, each served by at least one fixed-location transceiver known as a cell site or base station. When joined together these cells provide radio coverage over a wide geographic area. This enables a large number of portable transceivers (mobile phones, pagers, etc) to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving through more than one cell during transmission. Cellular networks offer a number of advantages over alternative solutions:

increased capacity
reduced power usage
larger coverage area
reduced interference from other signals

An example of a simple non-telephone cellular system is an old taxi driver's radio system where the taxi company has several transmitters based around a city that can communicate directly with each taxi.

The concept

[Figure: example of frequency reuse factor or pattern 1/4]

In a cellular radio system, a land area to be supplied with radio service is divided into regular shaped cells, which can be hexagonal, square, circular or some other irregular shape, although hexagonal cells are conventional. Each of these cells is assigned multiple frequencies (f1–f6) which have corresponding radio base stations. The group of frequencies can be reused in other cells, provided that the same frequencies are not reused in adjacent neighboring cells, as that would cause co-channel interference.

The increased capacity in a cellular network, compared with a network with a single transmitter, comes from the fact that the same radio frequency can be reused in a different area for a completely different transmission. If there is a single plain transmitter, only one transmission can be used on any given frequency. Unfortunately, there is inevitably some level of interference from the signal from the other cells which use the same frequency. This means that, in a standard FDMA system, there must be at least a one-cell gap between cells which reuse the same frequency.

In the simple case of the taxi company, each radio had a manually operated channel selector knob to tune to different frequencies. As the drivers moved around, they would change from channel to channel. The drivers knew which frequency covered approximately what area. When they did not receive a signal from the transmitter, they would try other channels until they found one that worked. The taxi drivers only spoke one at a time, when invited by the base station operator (in a sense, TDMA).

Cell signal encoding


To distinguish signals from several different transmitters, frequency division multiple access (FDMA) and code division multiple access (CDMA) were developed.

With FDMA, the transmitting and receiving frequencies used in each cell are different from the frequencies used in each neighbouring cell. In a simple taxi system, the taxi driver manually tuned to a frequency of a chosen cell to obtain a strong signal and to avoid interference from signals from other cells. The principle of CDMA is more complex, but achieves the same result; the distributed transceivers can select one cell and listen to it. Other available methods of multiplexing such as polarization division multiple access (PDMA) and time division multiple access (TDMA) cannot be used to separate signals from one cell to the next since the effects of both vary with position and this would make signal separation practically impossible. Time division multiple access, however, is used in combination with either FDMA or CDMA in a number of systems to give multiple channels within the coverage area of a single cell.
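The code-separation principle behind CDMA can be sketched with short orthogonal Walsh codes. This is a simplified illustration: real systems use much longer pseudorandom or Walsh sequences and must cope with noise and asynchrony, neither of which is modeled here.

```python
import numpy as np

# Length-4 Walsh (orthogonal) codes - the principle behind CDMA separation
walsh = np.array([[ 1,  1,  1,  1],
                  [ 1, -1,  1, -1],
                  [ 1,  1, -1, -1],
                  [ 1, -1, -1,  1]])

# Two users in the same cell and band, transmitting simultaneously
data_a, data_b = +1, -1                            # one +/-1 symbol each
signal = data_a * walsh[1] + data_b * walsh[2]     # superposed on the air

# Each receiver correlates with its own code; cross-terms cancel exactly
recovered_a = np.sign(signal @ walsh[1])
recovered_b = np.sign(signal @ walsh[2])
print(recovered_a, recovered_b)
```

Because the codes are mutually orthogonal, each correlator nulls the other user's contribution, which is why CDMA cells can share the same frequency band.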

Frequency reuse
The key characteristic of a cellular network is the ability to re-use frequencies to increase both coverage and capacity. As described above, adjacent cells must utilise different frequencies, however there is no problem with two cells sufficiently far apart operating on the same frequency. The elements that determine frequency reuse are the reuse distance and the reuse factor. The reuse distance, D is calculated as

    D = R √(3N),

where R is the cell radius and N is the number of cells per cluster. Cells may vary in radius from 1 km to 30 km. The boundaries of the cells can also overlap between adjacent cells, and large cells can be divided into smaller cells.[1]

The frequency reuse factor is the rate at which the same frequency can be used in the network. It is 1/K (or K, according to some books), where K is the number of cells which cannot use the same frequencies for transmission. Common values for the frequency reuse factor are 1/3, 1/4, 1/7, 1/9 and 1/12 (or 3, 4, 7, 9 and 12, depending on notation).

In the case of N sector antennas on the same base station site, each with a different direction, the base station site can serve N different sectors; N is typically 3. A reuse pattern of N/K denotes a further division in frequency among N sector antennas per site. Some current and historical reuse patterns are 3/7 (North American AMPS), 6/4 (Motorola NAMPS), and 3/4 (GSM). If the total available bandwidth is B, each cell can only utilize a number of frequency channels corresponding to a bandwidth of B/K, and each sector can use a bandwidth of B/(NK).
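The reuse-distance and per-cell bandwidth relations can be illustrated numerically. All figures below (radius, cluster size, total bandwidth, sector count) are assumed for illustration and are not taken from any real deployment:

```python
import math

# Illustrative numbers (assumptions, not from a specific deployment)
R = 2.0       # cell radius in km
N = 7         # cells per cluster (reuse factor 1/7)
B = 25e6      # total available bandwidth in Hz
sectors = 3   # sector antennas per base station site

# Reuse distance for a hexagonal layout: D = R * sqrt(3 N)
D = R * math.sqrt(3 * N)

# Bandwidth available per cell (B/K with K = N) and per sector (B/(N*K))
per_cell = B / N
per_sector = B / (N * sectors)

print(f"D = {D:.2f} km, per cell = {per_cell / 1e6:.2f} MHz, "
      f"per sector = {per_sector / 1e6:.2f} MHz")
```

A smaller cluster size N packs the reuse sites closer together (larger per-cell bandwidth) at the cost of more co-channel interference, which is exactly the trade-off cell splitting and sectoring are used to manage.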

Code division multiple access-based systems use a wider frequency band to achieve the same rate of transmission as FDMA, but this is compensated for by the ability to use a frequency reuse factor of 1, for example using a reuse pattern of 1/1. In other words, adjacent base station sites use the same frequencies, and the different base stations and users are separated by codes rather than frequencies. While N is shown as 1 in this example, that does not mean the CDMA cell has only one sector, but rather that the entire cell bandwidth is also available to each sector individually. Depending on the size of the city, a taxi system may not have any frequency-reuse in its own city, but certainly in other nearby cities, the same frequency can be used. In a big city, on the other hand, frequency-reuse could certainly be in use.

Directional antennas

[Figure: cellular telephone frequency reuse pattern. See U.S. Patent 4,144,411]

Although the original 2-way-radio cell towers were at the centers of the cells and were omni-directional, a cellular map can be redrawn with the cellular telephone towers located at the corners of the hexagons where three cells converge.[2] Each tower has three sets of directional antennas aimed in three different directions and receiving/transmitting into three different cells at different frequencies. This provides a minimum of three channels for each cell. The numbers in the illustration are channel numbers, which repeat every 3 cells. Large cells can be subdivided into smaller cells for high volume areas.[3]

Broadcast messages and paging


Practically every cellular system has some kind of broadcast mechanism. This can be used directly for distributing information to multiple mobiles. Commonly, for example in mobile telephony systems, the most important use of broadcast information is to set up channels for one-to-one communication between the mobile transceiver and the base station. This is called paging. The details of the process of paging vary somewhat from network to network, but normally we know a limited number of cells where the phone is located (this group of cells is called a Location Area in the GSM or UMTS system, or a Routing Area if a data packet session is involved). Paging takes place by sending the broadcast message to all of those cells. Paging messages can be used for information transfer. This happens in pagers, in CDMA systems for sending SMS messages, and in the UMTS system, where it allows for low downlink latency in packet-based connections.

Movement from cell to cell and handover


In a primitive taxi system, when the taxi moved away from a first tower and closer to a second tower, the taxi driver manually switched from one frequency to another as needed. If a communication was interrupted due to a loss of a signal, the taxi driver asked the base station operator to repeat the message on a different frequency. In a cellular system, as the distributed mobile transceivers move from cell to cell during an ongoing continuous communication, switching from one cell frequency to a different cell frequency is done electronically without interruption and without a base station operator or manual switching. This is called the handover or handoff. Typically, a new channel is automatically selected for the mobile unit on the new base station which will serve it. The mobile unit then automatically switches from the current channel to the new channel and communication continues. The exact details of the mobile system's move from one base station to the other varies considerably from system to system (see the example below for how a mobile phone network manages handover).

Example of a cellular network: the mobile phone network

(Figure: GSM network architecture.)

The most common example of a cellular network is a mobile phone (cell phone) network. A mobile phone is a portable telephone which receives or makes calls through a cell site (base station), or transmitting tower. Radio waves are used to transfer signals to and from the cell phone. Modern mobile phone networks use cells because radio frequencies are a limited, shared resource. Cell sites and handsets change frequency under computer control and use low-power transmitters so that a limited number of radio frequencies can be used simultaneously by many callers with less interference. A cellular network is used by the mobile phone operator to achieve both coverage and capacity for their subscribers. Large geographic areas are split into smaller cells to avoid line-of-sight signal loss and to support a large number of active phones in that area. All of the cell sites are connected to telephone exchanges (or switches), which in turn connect to the public telephone network. In cities, each cell site may have a range of up to approximately ½ mile, while in rural areas the range could be as much as 5 miles. It is possible that in clear open areas a user may receive signals from a cell site 25 miles away. Since almost all mobile phones use cellular technology, including GSM, CDMA, and AMPS (analog), the term "cell phone" is in some regions, notably the US, used interchangeably with "mobile phone". However, satellite phones are mobile phones that do not communicate directly with a ground-based cellular tower, but may do so indirectly by way of a satellite.
There are a number of different digital cellular technologies, including: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), 3GSM, Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN).

b) How is the Tiger hash function used for encryption? Compare block ciphers with stream ciphers? (10M) Solution: Marking Scheme: 1. Definition of hash function = 03 marks. 2. Comparable points between block and stream ciphers = 07 marks. Detail Solution: 1. Definition of hash function: In computer science, a hash table or hash map is a data structure that uses a hash function to map identifying values, known as keys (e.g., a person's name), to their associated values (e.g., their telephone number). The hash function is used to transform the key into the index (the hash) of an array element (the slot or bucket) where the corresponding value is to be sought. Ideally, the hash function should map each possible key to a unique slot index, but this ideal is rarely achievable in practice (unless the hash keys are fixed; i.e. new entries are never added to the table after creation). Most hash table designs assume that hash collisions (the situation where different keys happen to have the same hash value) are unavoidable and must be accommodated in some way.
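The key-to-bucket mapping described above can be sketched in a few lines, using separate chaining to handle collisions. The class and method names are illustrative, and Python's built-in hash() stands in for a real hash function:

```python
# Minimal illustration of the key -> bucket mapping described above,
# with separate chaining to accommodate hash collisions.

class HashTable:
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def put(self, key, value):
        slot = hash(key) % len(self.buckets)     # key -> slot index
        bucket = self.buckets[slot]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)         # update existing key
                return
        bucket.append((key, value))              # chain on collision

    def get(self, key):
        for k, v in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return v
        raise KeyError(key)

t = HashTable()
t.put("Alice", "555-0101")
print(t.get("Alice"))                            # 555-0101
```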

2. Comparable points between block and stream ciphers:

Block cipher:
In cryptography, a block cipher is a symmetric-key cipher operating on fixed-length groups of bits, called blocks, with an unvarying transformation. A block cipher encryption algorithm might take (for example) a 128-bit block of plaintext as input, and output a corresponding 128-bit block of ciphertext. The exact transformation is controlled using a second input: the secret key. Decryption is similar: the decryption algorithm takes, in this example, a 128-bit block of ciphertext together with the secret key, and yields the original 128-bit block of plaintext. To encrypt messages longer than the block size (128 bits in the above example), a mode of operation is used. Block ciphers can be contrasted with stream ciphers: a stream cipher operates on individual digits one at a time, and the transformation varies during the encryption. The distinction between the two types is not always clear-cut: a block cipher, when used in certain modes of operation, acts effectively as a stream cipher. An early and highly influential block cipher design was the Data Encryption Standard (DES), developed at IBM and published as a standard in 1977. A successor to DES, the Advanced Encryption Standard (AES), was adopted in 2001.

Generalities
A block cipher consists of two paired algorithms, one for encryption, E, and the other for decryption, D = E^-1. Both algorithms accept two inputs: an input block of size n bits and a key of size k bits, yielding an n-bit output block. For any one fixed key, decryption is the inverse function of encryption, so that

D_K(E_K(M)) = M

for any block M and key K. Writing C = E_K(M), M is termed the plaintext and C the ciphertext. For each key K, E_K is a permutation (a bijective mapping) over the set of input blocks. Each key selects one permutation from the set of (2^n)! possible permutations (see Factorial). The block size, n, is typically 64 or 128 bits, although some ciphers have a variable block size. 64 bits was the most common length until the mid-1990s, when new designs began to switch to the longer 128-bit length. One of several modes of operation is generally used along with a padding scheme to allow plaintexts of arbitrary lengths to be encrypted. Each mode has different characteristics in regard to error propagation, ease of random access and vulnerability to certain types of attack. Typical key sizes (k) include 40, 56, 64, 80, 128, 192 and 256 bits. As of 2006, 80 bits is normally taken as the minimum key length needed to prevent brute force attacks. For creating ciphers with arbitrary block sizes (or on domains that aren't powers of two) see format-preserving encryption.
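The inverse relationship D_K(E_K(M)) = M, and the fact that E_K is a bijection on the set of n-bit blocks, can be illustrated with a deliberately toy (and completely insecure) 8-bit cipher:

```python
# A toy 8-bit "block cipher" (NOT secure) illustrating the definitions
# above: for a fixed key K, encryption E_K is a bijection on the set of
# n-bit blocks, and decryption is its inverse.

def rotl8(x, r):
    return ((x << r) | (x >> (8 - r))) & 0xFF

def rotr8(x, r):
    return ((x >> r) | (x << (8 - r))) & 0xFF

def encrypt_block(m, key):          # E_K
    return rotl8(m ^ key, 3)

def decrypt_block(c, key):          # D_K = E_K^-1
    return rotr8(c, 3) ^ key

key = 0x5A
for m in range(256):                # check D(E(m)) == m for every block
    assert decrypt_block(encrypt_block(m, key), key) == m
print("E_K is invertible for all 256 blocks")
```

With n = 8 there are 2^8 = 256 possible blocks, so each key selects one permutation out of 256! possibilities; a real 128-bit cipher selects one out of (2^128)!.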

Iterated block ciphers


Most block ciphers are constructed by repeatedly applying a simpler function. This approach is known as an iterated block cipher (see also product cipher). Each iteration is termed a round, and the repeated function is termed the round function; anywhere between 4 and 32 rounds are typical. Usually, the round function R takes different round keys K_i as a second input, which are derived from the original key:

M_i = R(K_i, M_{i-1})

where M_0 is the plaintext and M_r the ciphertext, with r being the round number. Frequently, key whitening is used in addition to this: at the beginning and the end, the data is modified with key material (often with XOR, but simple arithmetic operations like adding and subtracting are also used).
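The iterated structure with round keys and key whitening can be sketched as follows. The toy round function and key values are illustrative only, not any real cipher:

```python
# Sketch of an iterated block cipher with key whitening, assuming a toy
# 8-bit round function R(K_i, M) = rotl(M ^ K_i, 1); real ciphers use
# far stronger round functions, but the encrypt/decrypt symmetry is the same.

def rotl8(x, r): return ((x << r) | (x >> (8 - r))) & 0xFF
def rotr8(x, r): return ((x >> r) | (x << (8 - r))) & 0xFF

def encrypt(m, round_keys, pre, post):
    m ^= pre                         # whitening before the first round
    for k in round_keys:             # M_i = R(K_i, M_{i-1})
        m = rotl8(m ^ k, 1)
    return m ^ post                  # whitening after the last round

def decrypt(c, round_keys, pre, post):
    c ^= post
    for k in reversed(round_keys):   # undo the rounds in reverse order
        c = rotr8(c, 1) ^ k
    return c ^ pre

rks = [0x1F, 0xA4, 0x33, 0x70]       # 4 round keys, derived elsewhere
assert decrypt(encrypt(0xB7, rks, 0x55, 0xC3), rks, 0x55, 0xC3) == 0xB7
```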

Many block ciphers can be categorised as Feistel networks or, more generally, as substitution-permutation networks. Arithmetic operations, logical operations (especially XOR), S-boxes and various permutations are all frequently used as components.

Stream cipher:

(Figure: the operation of the keystream generator in A5/1, an LFSR-based stream cipher used to encrypt mobile phone conversations.)

In cryptography, a stream cipher is a symmetric-key cipher where plaintext bits are combined with a pseudorandom cipher bit stream (keystream), typically by an exclusive-or (XOR) operation. In a stream cipher the plaintext digits are encrypted one at a time, and the transformation of successive digits varies during the encryption. An alternative name is a state cipher, as the encryption of each digit is dependent on the current state. In practice, the digits are typically single bits or bytes. Stream ciphers represent a different approach to symmetric encryption from block ciphers. Block ciphers operate on large blocks of digits with a fixed, unvarying transformation. This distinction is not always clear-cut: in some modes of operation, a block cipher primitive is used in such a way that it acts effectively as a stream cipher. Stream ciphers typically execute at a higher speed than block ciphers and have lower hardware complexity. However, stream ciphers can be susceptible to serious security problems if used incorrectly (see stream cipher attacks); in particular, the same starting state must never be used twice.
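A minimal binary additive stream cipher can be sketched in a few lines. Here Python's random.Random, seeded with the key, stands in for a real keystream generator; it is deterministic but of course not cryptographically secure:

```python
# Minimal binary additive stream cipher: plaintext bytes XORed with a
# pseudorandom keystream. random.Random is a stand-in for a real
# keystream generator (it is NOT cryptographically secure).
import random

def keystream(key, n):
    rng = random.Random(key)
    return bytes(rng.randrange(256) for _ in range(n))

def xor_stream(data, key):
    ks = keystream(key, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

ct = xor_stream(b"attack at dawn", key=42)
pt = xor_stream(ct, key=42)          # XOR is its own inverse
print(pt)                            # b'attack at dawn'
```

Because encryption and decryption are the same XOR operation, reusing the same key (starting state) for two messages leaks their XOR, which is why the starting state must never be used twice.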

Types of stream ciphers


A stream cipher generates successive elements of the keystream based on an internal state. This state is updated in essentially two ways: if the state changes independently of the plaintext or ciphertext messages, the cipher is classified as a synchronous stream cipher. By contrast, self-synchronising stream ciphers update their state based on previous ciphertext digits.

Synchronous stream ciphers


In a synchronous stream cipher a stream of pseudo-random digits is generated independently of the plaintext and ciphertext messages, and then combined with the plaintext (to encrypt) or the ciphertext (to decrypt). In the most common form, binary digits (bits) are used, and the keystream is combined with the plaintext using the exclusive-or operation (XOR). This is termed a binary additive stream cipher. In a synchronous stream cipher, the sender and receiver must be exactly in step for decryption to be successful. If digits are added or removed from the message during transmission, synchronisation is lost. To restore synchronisation, various offsets can be tried systematically to obtain the correct decryption. Another approach is to tag the ciphertext with markers at regular points in the output. If, however, a digit is corrupted in transmission, rather than added or lost, only a single digit in the plaintext is affected and the error does not propagate to other parts of the message. This property is useful when the transmission error rate is high; however, it makes it less likely that the error will be detected without further mechanisms. Moreover, because of this property, synchronous stream ciphers are very susceptible to active attacks: if an attacker can change a digit in the ciphertext, he might be able to make predictable changes to the corresponding plaintext bit; for example, flipping a bit in the ciphertext causes the same bit to be flipped in the plaintext.
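The bit-flipping attack described above is easy to demonstrate with a toy XOR stream cipher (the keystream generator is illustrative, not secure):

```python
# Demonstrates the malleability noted above: flipping one bit of a
# synchronous stream cipher's ciphertext flips exactly the same bit of
# the recovered plaintext, with no error propagation.
import random

def xor_stream(data, key):
    rng = random.Random(key)
    return bytes(b ^ rng.randrange(256) for b in data)

ct = bytearray(xor_stream(b"pay 100 dollars", key=7))
ct[4] ^= 0x01                        # attacker flips one ciphertext bit
tampered = xor_stream(bytes(ct), key=7)
print(tampered)                      # b'pay 000 dollars'
```

Only the targeted byte changes ('1' becomes '0'); every other plaintext digit is recovered intact, exactly as the text predicts.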

Self-synchronizing stream ciphers


Another approach uses several of the previous N ciphertext digits to compute the keystream. Such schemes are known as self-synchronizing stream ciphers, asynchronous stream ciphers or ciphertext autokey (CTAK). The idea of self-synchronization was patented in 1946, and has the advantage that the receiver will automatically synchronise with the keystream generator after receiving N ciphertext digits, making it easier to recover if digits are dropped or added to the message stream. Single-digit errors are limited in their effect, affecting only up to N plaintext digits. An example of a self-synchronising stream cipher is a block cipher in cipher feedback (CFB) mode.
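The self-synchronising behaviour of CFB can be sketched with a toy byte-wide cipher. The function toy_e stands in for a real block cipher and offers no security; here N = 1 byte, so the receiver recovers after one garbled byte:

```python
# Sketch of ciphertext-autokey behaviour using a toy "cipher" in 8-bit
# cipher feedback (CFB) mode: the keystream for each byte is derived
# from the previous ciphertext byte, so the receiver resynchronises
# after one byte even if bytes were dropped in transit.

def toy_e(block, key):               # stand-in for a real block cipher
    return ((block * 167) ^ key) & 0xFF

def cfb_encrypt(pt, key, iv):
    prev, out = iv, bytearray()
    for b in pt:
        c = b ^ toy_e(prev, key)
        out.append(c)
        prev = c                     # feedback: next keystream from ciphertext
    return bytes(out)

def cfb_decrypt(ct, key, iv):
    prev, out = iv, bytearray()
    for c in ct:
        out.append(c ^ toy_e(prev, key))
        prev = c
    return bytes(out)

ct = cfb_encrypt(b"HELLO WORLD", key=0x3C, iv=0xAB)
# Drop the first ciphertext byte: one plaintext byte is garbled,
# then decryption recovers on its own.
print(cfb_decrypt(ct[1:], key=0x3C, iv=0xAB))
```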

Linear feedback shift register-based stream ciphers

Linear feedback shift registers (LFSRs) are popular components in stream ciphers: they can be implemented cheaply in hardware, they can be readily analysed mathematically, and their properties are well understood. The use of LFSRs on their own, however, is insufficient to provide good security, and various schemes have been proposed to increase the security of LFSR-based designs.

Q6. a) Explain general GSM architecture? Also explain different GSM channels used? (10M) Solution: Marking Scheme: 1. For explanation of GSM channels = 03 marks. 2. GSM architecture with suitable diagram = 07 marks. Detail Solution: 1. For explanation of GSM channels:

Broadcast Control Channel:


The broadcast control channel (BCCH) is a point-to-multipoint, unidirectional (downlink) channel used on the Um interface of the GSM cellular standard. The BCCH carries a repeating pattern of system information messages that describe the identity, configuration and available features of the base transceiver station (BTS). These messages also provide a list of absolute radio-frequency channel numbers (ARFCNs) used by neighbouring BTSs. This message pattern is synchronized to the BTS frame clock[1]. The minimum BCCH message set is system information messages 1-4, although other messages are normally present. The messages themselves are described in 3GPP Technical Specification 44.018[2]. Any GSM ARFCN that includes a BCCH is designated as a beacon channel and is required to transmit continuously at full power.

2. GSM architecture with suitable diagram:

1 GSM Network Infrastructure

The following figure depicts a typical GSM network (called a Public Land Mobile Network, or PLMN) infrastructure. Ref: Wireless Communications Systems and Networks, by Mullett, Thomson Publisher. Note: the solid lines are for user traffic plus control signalling, if any; the dotted lines represent control/management signalling/messaging only.

AUC - Authentication Center
BSC - Base Station Controller
BSS - Base Station Subsystem
BTS - Base Transceiver System (Antenna System + Radio Base Station)
EIR - Equipment Identification Register (for IMEI verification)
IMEI - International Mobile Equipment Identity
FNR - Flexible Numbering Register (for number portability)
GMSC - Gateway MSC
HLR - Home Location Register
ISDN - Integrated Services Digital Network
IWF - Interworking Function
ILR - Interworking Location Register (for roaming between AMPS and GSM systems)
IWMSC - Interworking MSC
MS - Mobile Station
MSC - Mobile Switching Center
NSS - Network Switching Subsystem
OSS - Operation and Support System
PDN - Public Data Network
PSTN - Public Switched Telephone Network
SMS - Short Message Service
VLR - Visitor Location Register

1. Home Location Register


The home location register (HLR) is a database used for storing and managing subscriptions. Generally a PLMN (Public Land Mobile Network) consists of several HLRs. The first two digits of the mobile directory number (e.g. 0171 2620757) identify the HLR where the mobile subscriber is stored. The data includes permanent data on subscribers (such as the subscriber's service profile) as well as dynamic data (such as current location and activity status). When an individual buys a subscription from one of the GSM operators, he or she is registered in the HLR of that operator.

Data elements (subscriber) - examples:
- Mobile station identities:
  o IMSI (International Mobile Subscriber Identity) (the primary key)
  o Current TMSI (Temporary IMSI)
  o IMEI (International Mobile Equipment Identity)
- Mobile station telephone numbers:
  o MSISDN (Mobile Station's ISDN number)
  o Current MSRN (Mobile Station Roaming Number), if assigned
- Name and address of the subscriber
- Current service subscription profile
- Current location (MSC/VLR address)
- Authentication and encryption keys:
  o Individual Subscriber Authentication Key (Ki)
- Mobile Country Code (MCC) and Mobile Network Code (MNC)
- List of MSC/VLRs that belong to this HLR
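Conceptually, the HLR behaves like a key-value store indexed by IMSI, holding both permanent subscription data and the dynamic pointer to the serving MSC/VLR. The sketch below is purely illustrative; the field names and values are invented, not taken from any 3GPP schema:

```python
# Hypothetical sketch of an HLR as a key-value store indexed by IMSI.
# Field names and values are illustrative only.

hlr = {
    "262019876543210": {                 # IMSI is the primary key
        "msisdn": "01712620757",         # directory number (permanent)
        "services": ["voice", "sms"],    # subscription profile (permanent)
        "current_vlr": None,             # dynamic: filled on location update
    }
}

def location_update(imsi, vlr_address):
    """Called when a mobile attaches to a new MSC/VLR."""
    hlr[imsi]["current_vlr"] = vlr_address

location_update("262019876543210", "vlr.mnc001.example")
print(hlr["262019876543210"]["current_vlr"])   # vlr.mnc001.example
```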

2. Mobile Switching Center and Visitor Location Register


The mobile switching center (MSC) performs the telephony switching function. A mobile station must be attached to a single MSC at a time (either its home MSC or a visited one), if it is currently active (not switched off). The visitor location register (VLR) is a database attached to an MSC to contain information about its currently associated mobile stations (not just visitors). Note: a basic switch (that is, a PSTN/ISDN switch) already has a database for its telephone connections. However, it is not designed to include visitors, since a visitor has a telephone number that does not belong to this switch. That is why a separate VLR is needed. An MSC, with the help of the HLR, allocates a visitor a local telephone number (the MSRN) which is not currently allocated to anyone. This allocation is temporary (like a visitor ID card). The VLR stores the MSRN as the mobile station's telephone number (along with other information). The VLR also stores information such as the security triple (authentication and encryption information) for each mobile station that is currently attached to the MSC. A VLR stores such information not only for its visitors but also for the homed mobile stations; from this perspective, the VLR is for homed mobile stations as well.

Data:
- Information on currently attached mobile stations:
  o IMSI/TMSI numbers
  o MSISDN/MSRN numbers
  o Security triple (authentication and encryption information)
  o Location Area Identity (where the mobile station is currently located)
- List of base stations that belong to this MSC/VLR (by their BSIC, or Base Station Identity Code)
- List of location areas that belong to this MSC/VLR (by their LAI, or Location Area Identity code)
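The temporary MSRN allocation described above can be sketched as a pool of unused local numbers lent to visitors and returned when they detach. The numbers, class and method names are invented for illustration:

```python
# Illustrative sketch of temporary MSRN allocation by a visited MSC/VLR:
# a roaming mobile is lent an unused local number so calls can be routed
# to it. Number ranges and names are invented.

class VLR:
    def __init__(self, msrn_pool):
        self.free = list(msrn_pool)      # unallocated local numbers
        self.visitors = {}               # IMSI -> MSRN

    def attach(self, imsi):
        msrn = self.free.pop(0)          # lend a temporary roaming number
        self.visitors[imsi] = msrn
        return msrn

    def detach(self, imsi):
        self.free.append(self.visitors.pop(imsi))  # return it to the pool

vlr = VLR(["0897000001", "0897000002"])
msrn = vlr.attach("262019876543210")
print(msrn)                              # 0897000001
vlr.detach("262019876543210")            # the number is free again
```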

3. Authentication Center
The authentication center (AUC) provides authentication and encryption parameters that verify the user's identity and ensure the confidentiality of each call. The AUC protects network operators from different types of fraud found in today's cellular world. GSM has standard encryption and authentication algorithms which are used to dynamically compute challenge keys and encryption keys for a call.

GSM divides the infrastructure into the following three parts:
- Network Switching Subsystem (NSS)
- Base Station Subsystem (BSS)
- Network Management Subsystem (NMS)
The Mobile Station (MS), or cell phone, can be counted as a fourth element. Any telecommunications network requires some kind of NMS, and part of the NMS is generic to any telecom system; billing and messaging are two examples. The core of the NSS is the MSC (Mobile Switching Center), which is basically a PSTN switch with mobility-management-related enhancements and add-ons. The BSS is entirely new (compared to the PSTN) and provides what is required for wireless access and mobility. The following sections of this document provide an overview of the network elements and their functions. The role of these elements will become clearer as we learn more.

4. Equipment Identity Register


The equipment identity register (EIR) is a database that contains information about the identity of mobile equipment that prevents calls from stolen, unauthorized, or defective mobile stations. The AUC and EIR can be implemented as stand-alone nodes or as a combined AUC/EIR node.

5. Gateway MSC
The Gateway MSC (GMSC) is an MSC that connects the PLMN (Public Land Mobile Network) to a PSTN/ISDN.

6. GSM Interworking Unit/Function


The GSM Interworking Function or Unit (GIWF/U) provides data communication (such as Internet access) support. Though the basic function of the MSC is voice traffic switching, the MSC has the additional capability to forward data between the mobile station and the GIWF/U.

7. Message Service Gateway


The NMS (Network Management Subsystem) includes a message center. This handles Short Message Service (SMS), Multimedia Message Service (MMS), fax, voice mail, e-mail and a variety of notifications. The MSC requires special capability to forward these messages between the message center and the mobile station.

8. Flexible Numbering Register


The local number portability (LNP) service is an advanced intelligent network (AIN) service of the telecommunications network. This service allows a person to move his or her residence to a new city/province and still retain the old telephone number. The local telephone service provider/switch will recognize the old telephone number, and no new number will be assigned. A cell phone with LNP service can do the same, and the Flexible Numbering Register (FNR) takes care of that.

9.Base Station Subsystem (BSS)


All radio-related functions between mobile stations and the network are performed in the base station subsystem (BSS). The BSS consists of one base station controller (BSC) and all base transceiver stations (BTSs) under that BSC. Further reading: http://en.wikipedia.org/wiki/Base_Station_Subsystem

10. Base Transceiver Station


A Base Transceiver Station (BTS) is a radio transceiver station that communicates with the mobile stations. Its backend is connected to the BSC. A BTS is usually placed at the center of a cell; its transmitting power defines the size of the cell. There is more detail on the BTS later.

11. Base Station Controller


A Base Station Controller (BSC) is a high-capacity switch with radio communication and mobility control capabilities. The functions of a BSC include radio channel allocation, location update, handover, timing advance, power control and paging. There is more on this later.

b) Explain the High-Level Data Link Control (HDLC) protocol? Also explain the frame structure in HDLC? (10M) Solution: Marking Scheme: 1. For explanation of HDLC protocol = 05 marks. 2. For explanation of frame structure using HDLC = 05 marks. Detail Solution: 1. For explanation of HDLC protocol:

High-Level Data Link Control (HDLC) is a bit-oriented synchronous data link layer protocol developed by the International Organization for Standardization (ISO). The original ISO standards for HDLC are:

ISO 3309 - Frame Structure
ISO 4335 - Elements of Procedure
ISO 6159 - Unbalanced Classes of Procedure
ISO 6256 - Balanced Classes of Procedure

The current standard for HDLC is ISO 13239, which replaces all of those standards. HDLC provides both connection-oriented and connectionless service. HDLC can be used for point-to-multipoint connections, but is now used almost exclusively to connect one device to another, using what is known as Asynchronous Balanced Mode (ABM). The original master-slave modes, Normal Response Mode (NRM) and Asynchronous Response Mode (ARM), are rarely used.

Framing
HDLC frames can be transmitted over synchronous or asynchronous links. Those links have no mechanism to mark the beginning or end of a frame, so the beginning and end of each frame has to be identified. This is done by using a frame delimiter, or flag, which is a unique sequence of bits that is guaranteed not to be seen inside a frame. This sequence is '01111110', or, in hexadecimal notation, 0x7E. Each frame begins and ends with a frame delimiter. A frame delimiter at the end of a frame may also mark the start of the next frame. A sequence of 7 or more consecutive 1-bits within a frame will cause the frame to be aborted. When no frames are being transmitted on a simplex or full-duplex synchronous link, a frame delimiter is continuously transmitted on the link. Using the standard NRZI encoding from bits to line levels (0 bit = transition, 1 bit = no transition), this generates one of two continuous waveforms, depending on the initial state.

This is used by modems to train and synchronize their clocks via phase-locked loops. Some protocols allow the 0-bit at the end of a frame delimiter to be shared with the start of the next frame delimiter, i.e. '011111101111110'.

For half-duplex or multi-drop communication, where several transmitters share a line, a receiver on the line will see continuous idling 1-bits in the inter-frame period when no transmitter is active. Actual binary data could easily have a sequence of bits that is the same as the flag sequence. So the data's bit sequence must be modified so that it doesn't appear to be a frame delimiter.
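The modification referred to here is bit stuffing, explained in the next section; a minimal sketch, modelling bits as a string of '0'/'1' characters for clarity:

```python
# Sketch of HDLC bit stuffing: after five consecutive 1-bits the
# transmitter inserts a 0, so the flag pattern 01111110 can never
# occur inside a frame.

def stuff(bits):
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:                 # five 1s in a row: insert a 0
            out.append("0")
            run = 0
    return "".join(out)

def destuff(bits):
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:                 # the next bit is a stuffed 0: drop it
            i += 1                   # (a 1 here would mean a flag or abort)
            run = 0
        i += 1
    return "".join(out)

data = "0111111011111"               # contains the flag pattern
assert stuff(data) == "011111010111110"
assert destuff(stuff(data)) == data
```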

Synchronous framing
On synchronous links, this is done with bit stuffing. Any time that 5 consecutive 1-bits appear in the transmitted data, the data is paused and a 0-bit is transmitted. This ensures that no more than 5 consecutive 1-bits will be sent. The receiving device knows this is being done, and after seeing 5 1-bits in a row, a following 0-bit is stripped out of the received data. If the following bit is a 1-bit, the receiver has found a flag. This also (assuming NRZI with transition-for-0 encoding of the output) provides a minimum of one transition per 6 bit times during transmission of data, and one transition per 7 bit times during transmission of a flag, so the receiver can stay in sync with the transmitter. Note, however, that for this purpose encodings such as 8b/10b are better suited. HDLC transmits bytes of data with the least significant bit first (little-endian bit order).

2. For explanation of frame structure using HDLC:

Structure
The contents of an HDLC frame are shown in the following table:

Field       | Length
------------|--------------------------------
Flag        | 8 bits
Address     | 8 or more bits
Control     | 8 or 16 bits
Information | Variable length, 0 or more bits
FCS         | 16 or 32 bits
Flag        | 8 bits

Note that the end flag of one frame may be (but does not have to be) the beginning (start) flag of the next frame. Data is usually sent in multiples of 8 bits, but only some variants require this; others theoretically permit data alignments on other than 8-bit boundaries. The frame check sequence (FCS) is a 16-bit CRC-CCITT or a 32-bit CRC-32 computed over the Address, Control, and Information fields. It provides a means by which the receiver can detect errors that may have been induced during the transmission of the frame, such as lost bits, flipped bits, and extraneous bits. However, given that the algorithms used to calculate the FCS are such that the probability of certain types of transmission errors going undetected increases with the length of the data being checked for errors, the FCS can implicitly limit the practical size of the frame. If the receiver's calculation of the FCS does not match the sender's, indicating that the frame contains errors, the receiver can either send a negative-acknowledge packet to the sender, or send nothing. After either receiving a negative-acknowledge packet or timing out waiting for a positive-acknowledge packet, the sender can retransmit the failed frame. The FCS was implemented because many early communication links had a relatively high bit error rate, and the FCS could readily be computed by simple, fast circuitry or software. More effective forward error correction schemes are now widely used by other protocols.
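A sketch of the 16-bit FCS computation: this implements the reflected CRC-CCITT variant (CRC-16/X.25, the form used by HDLC and PPP framing), checked against the standard test vector for that CRC. The example frame bytes are illustrative:

```python
# Bit-by-bit CRC-16/X.25 (reflected 0x1021 polynomial, init and final
# XOR 0xFFFF), computed over Address, Control and Information fields.

def fcs16(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):               # process bit by bit, LSB first
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

frame_body = bytes([0xFF, 0x03]) + b"hello"   # address, control, info
print(hex(fcs16(frame_body)))

# Standard check value for this CRC over the ASCII string "123456789":
assert fcs16(b"123456789") == 0x906E
```

The receiver simply recomputes the FCS over the received Address, Control and Information fields and compares it with the transmitted value.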

Types of Stations (Computers), and Data Transfer Modes


Synchronous Data Link Control (SDLC) was originally designed to connect one computer with multiple peripherals. The original "normal response mode" is a master-slave mode where the computer (or primary terminal) gives each peripheral (secondary terminal) permission to speak in turn. Because all communication is either to or from the primary terminal, frames include only one address, that of the secondary terminal; the primary terminal is not assigned an address. There is a strong distinction between commands sent by the primary to a secondary and responses sent by a secondary to the primary, but commands and responses are themselves indistinguishable on the wire; the only difference is the direction in which they are transmitted. Normal response mode allows operation over half-duplex communication links, as long as the primary is aware that it may not transmit when it has given permission to a secondary. Asynchronous response mode is an HDLC addition[1] for use over full-duplex links. While retaining the primary/secondary distinction, it allows the secondary to transmit at any time. Asynchronous balanced mode added the concept of a combined terminal which can act as both a primary and a secondary. There are some subtleties about this mode of operation; while many features of the protocol do not care whether they are in a command or response frame, some do, and the address field of a received frame must be examined to determine whether it contains a command (the address received is ours) or a response (the address received is that of the other terminal). Some HDLC variants extend the address field to include both source and destination addresses, or an explicit command/response bit.

HDLC Operations, and Frame Types


There are three fundamental types of HDLC frames.

Information frames, or I-frames, transport user data from the network layer. In addition they can also include flow and error control information piggybacked on data. Supervisory Frames, or S-frames, are used for flow and error control whenever piggybacking is impossible or inappropriate, such as when a station does not have data to send. S-frames do not have information fields. Unnumbered frames, or U-frames, are used for various miscellaneous purposes, including link management. Some U-frames contain an information field, depending on the type.

The general format of the control field is shown below. The least significant bit (bit 0, rightmost) is sent first.

Basic (8-bit) control field:

Frame type | bits 7-5 | bit 4 | bits 3-1 | bit 0
-----------|----------|-------|----------|------
I-frame    | N(R)     | P/F   | N(S)     | 0

Frame type | bits 7-5 | bit 4 | bits 3-2 | bits 1-0
-----------|----------|-------|----------|---------
S-frame    | N(R)     | P/F   | type     | 01
U-frame    | type     | P/F   | type     | 11

There are also extended (2-byte) forms of I and S frames. Again, the least significant bit is sent first.

Extended (16-bit) control field:

Frame type       | bits 15-9 | bit 8 | bits 7-1 | bit 0
-----------------|-----------|-------|----------|------
Extended I-frame | N(R)      | P/F   | N(S)     | 0

Frame type       | bits 15-9 | bit 8 | bits 7-4 | bits 3-2 | bits 1-0
-----------------|-----------|-------|----------|----------|---------
Extended S-frame | N(R)      | P/F   | 0000     | type     | 01
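Decoding a basic (1-byte) control field according to this layout can be sketched as follows; the returned dictionary layout is of course illustrative:

```python
# Decode a basic HDLC control byte: bit 0 distinguishes I-frames,
# bits 1-0 = '01'/'11' select S-/U-frames, with N(S), N(R) and P/F
# extracted by shifting and masking.

def decode_control(c):
    if c & 0x01 == 0:                          # ....0 -> I-frame
        return {"type": "I", "ns": (c >> 1) & 0x07,
                "pf": (c >> 4) & 1, "nr": (c >> 5) & 0x07}
    if c & 0x03 == 0x01:                       # ...01 -> S-frame
        s_types = ["RR", "RNR", "REJ", "SREJ"]
        return {"type": s_types[(c >> 2) & 0x03],
                "pf": (c >> 4) & 1, "nr": (c >> 5) & 0x07}
    return {"type": "U", "pf": (c >> 4) & 1}   # ...11 -> U-frame

# I-frame with N(S)=2, N(R)=5, P/F=1: bits 101 1 010 0 = 0xB4
print(decode_control(0xB4))   # {'type': 'I', 'ns': 2, 'pf': 1, 'nr': 5}
# RR with N(R)=3, P/F=0: bits 011 0 00 01 = 0x61
print(decode_control(0x61))   # {'type': 'RR', 'pf': 0, 'nr': 3}
```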

The P/F bit


Poll/Final is a single bit with two names. It is called Poll when set by the primary station to obtain a response from a secondary station, and Final when set by the secondary station to indicate a response or the end of transmission. In all other cases, the bit is clear. The bit is used as a token that is passed back and forth between the stations. Only one token should exist at a time. The secondary only sends a Final when it has received a Poll from the primary. The primary only sends a Poll when it has received a Final back from the secondary, or after a timeout indicating that the bit has been lost.

In NRM, possession of the poll token also grants the addressed secondary permission to transmit. The secondary sets the F-bit in its last response frame to give up permission to transmit. (It is equivalent to the word "Over" in radio voice procedure.) In ARM and ABM, the P bit forces a response. In these modes, the secondary need not wait for a poll to transmit, so need not wait to respond with a final bit. If no response is received to a P bit in a reasonable period of time, the primary station times out and sends P again. The P/F bit is at the heart of the basic checkpoint retransmission scheme that is required to implement HDLC; all other variants (such as the REJ S-frame) are optional and only serve to increase efficiency. Whenever a station receives a P/F bit, it may assume that any frames that it sent before it last transmitted the P/F bit and not yet acknowledged will never arrive, and so should be retransmitted.

When operating as a combined station, it is important to maintain the distinction between P and F bits, because there may be two checkpoint cycles operating simultaneously. A P bit arriving in a command from the remote station is not in response to our P bit; only an F bit arriving in a response is.

N(R), the receive sequence number


Both I and S frames contain a receive sequence number N(R). N(R) provides a positive acknowledgement for the receipt of I-frames from the other side of the link. Its value is always the first frame not received; it acknowledges that all frames with N(S) values up to N(R)-1 (modulo 8 or modulo 128) have been received and indicates the N(S) of the next frame it expects to receive. N(R) operates the same way whether it is part of a command or response. A combined station only has one sequence number space.

I-Frames (user data)


Information frames, or I-frames, transport user data from the network layer. In addition they also include flow and error control information piggybacked on data. The sub-fields in the control field define these functions. The least significant bit (first transmitted) defines the frame type: 0 means an I-frame. N(S) defines the sequence number of the sent frame. This is incremented for successive I-frames, modulo 8 or modulo 128. Depending on the number of bits in the sequence number, up to 7 or 127 I-frames may be awaiting acknowledgment at any time. The P/F and N(R) fields operate as described above. Except for the interpretation of the P/F field, there is no difference between a command I-frame and a response I-frame; when P/F is 0, the two forms are exactly equivalent.

S-Frames (control)
Supervisory Frames, or S-frames, are used for flow and error control whenever piggybacking is impossible or inappropriate, such as when a station does not have data to send. S-frames do not have information fields. The S-frame control field includes a leading "10" indicating that it is an S-frame. This is followed by a 2-bit type, a poll/final bit, and a sequence number. If 7-bit sequence numbers are used, there is also a 4-bit padding field. The first 2 bits mean it is an S-frame. All S frames include a P/F bit and a receive sequence number as described above. Except for the interpretation of the P/F field, there is no difference between a command S frame and a response S frame; when P/F is 0, the two forms are exactly equivalent. The 2-bit type field encodes the type of S frame.

Receive Ready (RR)


Indicates that the sender is ready to receive more data (cancelling the effect of a previous RNR). A station sends RR when it must acknowledge received frames but has no I-frame of its own to send. A primary station can send RR with the P bit set to solicit data from a secondary station; a secondary station can send RR with the F bit set to respond to a poll when it has no data to send.

Receive Not Ready (RNR)


Acknowledges frames up to N(R)-1 and requests that no more be sent until further notice. Like RR, it can be sent with the P bit set to solicit the status of a secondary station, or with the F bit set to respond to a poll when the station is busy.

Reject (REJ)

Requests immediate retransmission of all frames starting with N(R), sent in response to an observed gap in the sequence numbers. For example, after receiving I1, I2, I3 and then I5, the receiver sends REJ with N(R)=4. Generating REJ is optional; a working implementation can use RR alone.

Selective Reject (SREJ)


Requests retransmission of only the single frame with send sequence number N(R). SREJ is not supported by all HDLC variants. Generating it is optional; a working implementation can use RR alone, or only RR and REJ.
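The REJ behaviour described above can be sketched as the receive-side decision of a go-back-N receiver; the function name and return convention below are hypothetical, chosen only to make the example self-contained:

```python
def receive(expected_ns, arriving_ns, modulus=8):
    """Decide a go-back-N receiver's reaction to an arriving I-frame.
    If the frame carries the expected N(S), accept it and advance the
    expected counter (modulo the sequence space); otherwise answer with
    a REJ carrying the N(S) still awaited, asking the sender to back up."""
    if arriving_ns == expected_ns:
        return ("ACCEPT", (expected_ns + 1) % modulus)
    return ("REJ", expected_ns)

# After I1, I2, I3 the receiver expects N(S)=4; I5 arrives out of order:
print(receive(4, 5))  # ('REJ', 4)
print(receive(4, 4))  # ('ACCEPT', 5)
```

A selective-reject receiver would instead buffer the out-of-order frame and request only the missing one, which is exactly the SREJ case.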

U-Frames
Unnumbered frames, or U-frames, are used for link management, and can also be used to transfer user data. They exchange session management and control information between connected devices, and some U-frames contain an information field used for system management information or user data. The first 2 bits (11) identify a U-frame. The 5 type bits (2 before the P/F bit and 3 after it) allow 32 different types of U-frame:

Mode setting: SNRM, SNRME, SARM, SARME, SABM, SABME, UA, DM, RIM, SIM, RD, DISC
Information transfer: UP, UI
Recovery: FRMR (invalid control field, data field too long, data field not allowed with received frame type, invalid receive count), RSET
Miscellaneous: XID, TEST

Q7. Write short notes on (any four). (20M) 1. Bluetooth. Solution: Marking Scheme: a) Definition of Bluetooth= 02 marks. b) Bluetooth Architecture= 03 marks. Detail Solution:

Bluetooth
Bluetooth is an open wireless technology standard for exchanging data over short distances (using short-wavelength radio waves) between fixed and mobile devices, creating personal area networks (PANs) with high levels of security. Invented by telecoms vendor Ericsson in 1994,[1] it was originally conceived as a wireless alternative to RS-232 data cables. It can connect several devices, overcoming problems of synchronization. Today Bluetooth is managed by the Bluetooth Special Interest Group.

Implementation
Bluetooth uses a radio technology called frequency-hopping spread spectrum, which chops up the data being sent and transmits chunks of it on up to 79 bands of 1 MHz width in the range 2402-2480 MHz. This is in the globally unlicensed Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio frequency band.

In Classic Bluetooth, which is also referred to as basic rate (BR) mode, the modulation is Gaussian frequency-shift keying (GFSK), achieving a gross data rate of 1 Mbit/s. In enhanced data rate (EDR) mode, pi/4-DQPSK and 8-DPSK are used, giving 2 and 3 Mbit/s respectively.

Bluetooth is a packet-based protocol with a master-slave structure. One master may communicate with up to 7 slaves in a piconet; all devices share the master's clock. Packet exchange is based on the basic clock, defined by the master, which ticks at 312.5 µs intervals. Two clock ticks make up a slot of 625 µs; two slots make up a slot pair of 1250 µs. In the simple case of single-slot packets the master transmits in even slots and receives in odd slots; the slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3 or 5 slots long, but in all cases the master's transmission begins in an even slot and the slave's in an odd slot.

Bluetooth provides a secure way to connect and exchange information between devices such as faxes, mobile phones, telephones, laptops, personal computers, printers, Global Positioning System (GPS) receivers, digital cameras, and video game consoles. The Bluetooth specifications are developed and licensed by the Bluetooth Special Interest Group (SIG), which consists of more than 13,000 companies in the areas of telecommunication, computing, networking, and consumer electronics.[5] To be marketed as a Bluetooth device, a product must be qualified to standards defined by the SIG.
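The slot timing above can be expressed as a small sketch. The constants follow the figures in the text, while the function name is an illustrative invention:

```python
CLOCK_TICK_US = 312.5          # basic clock period, defined by the master
SLOT_US = 2 * CLOCK_TICK_US    # two ticks = one 625 us slot
SLOT_PAIR_US = 2 * SLOT_US     # two slots = one 1250 us slot pair

def single_slot_transmitter(slot_number):
    """Who transmits in a given slot under the simple single-slot scheme:
    the master uses even-numbered slots and the slave answers in odd ones."""
    return "master" if slot_number % 2 == 0 else "slave"

print(SLOT_US, SLOT_PAIR_US)  # 625.0 1250.0
print([single_slot_transmitter(n) for n in range(4)])
```

Multi-slot (3- or 5-slot) packets simply occupy the following slots as well, but still start on the transmitter's own parity.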

Communication and connection


A master Bluetooth device can communicate with up to seven devices; this network group of up to eight devices is called a piconet. The devices can switch roles by agreement, and a slave can become the master at any time. At any given time, data can be transferred between the master and one other device. The master switches rapidly from one device to another in a round-robin fashion. Simultaneous transmission from the master to multiple other devices is possible via broadcast mode, but this is not used much. The Bluetooth Core Specification allows connecting two or more piconets together to form a scatternet, with some devices acting as a bridge by simultaneously playing the master role in one piconet and the slave role in another. Many USB Bluetooth adapters or "dongles" are available, some of which also include an IrDA adapter. Older (pre-2003) Bluetooth dongles, however, have limited services, offering only the Bluetooth Enumerator and a less-powerful Bluetooth radio incarnation. Such devices can link computers with Bluetooth, but they do not offer many of the services that modern adapters do.

Uses
Bluetooth is a standard communications protocol primarily designed for low power consumption, with a short range (power-class-dependent; nominal ranges are shown below, but they vary in practice) based on low-cost transceiver microchips in each device.[6] Because the devices use a radio (broadcast) communications system, they do not have to be in line of sight of each other.[5]

Class     Maximum permitted power     Range (approximate)
Class 1   100 mW (20 dBm)             ~100 meters
Class 2   2.5 mW (4 dBm)              ~10 meters
Class 3   1 mW (0 dBm)                ~1 meter

In most cases the effective range of Class 2 devices is extended if they connect to a Class 1 transceiver, compared to a pure Class 2 network; this is accomplished by the higher sensitivity and transmission power of Class 1 devices.

Version             Data rate
Version 1.2         1 Mbit/s
Version 2.0 + EDR   3 Mbit/s
Version 3.0 + HS    24 Mbit/s

While the Bluetooth Core Specification does mandate minimums for range, the range of the technology is application specific and is not limited. Manufacturers may tune their implementations to the range needed to support individual use cases.

List of applications

A typical Bluetooth mobile phone headset.


- Wireless control of and communication between a mobile phone and a hands-free headset. This was one of the earliest applications to become popular.
- Wireless networking between PCs in a confined space where little bandwidth is required.
- Wireless communication with PC input and output devices, the most common being the mouse, keyboard and printer.
- Transfer of files, contact details, calendar appointments, and reminders between devices with OBEX.
- Replacement of traditional wired serial communications in test equipment, GPS receivers, medical equipment, bar code scanners, and traffic control devices.
- For controls where infrared was traditionally used.
- For low-bandwidth applications where higher USB bandwidth is not required and a cable-free connection is desired.
- Sending small advertisements from Bluetooth-enabled advertising hoardings to other, discoverable, Bluetooth devices.[8]
- Wireless bridge between two Industrial Ethernet (e.g., PROFINET) networks.
- Three seventh-generation game consoles, Nintendo's Wii[9] and Sony's PlayStation 3 and PSP Go, use Bluetooth for their respective wireless controllers.
- Dial-up internet access on personal computers or PDAs using a data-capable mobile phone as a wireless modem, like the Novatel MiFi.
- Short-range transmission of health sensor data from medical devices to a mobile phone, set-top box or dedicated telehealth device.

2. Binary frequency-shift keying. Solution: Marking Scheme: 1. Definition of binary frequency-shift keying = 02 marks. 2. Explanation of phase diagram = 03 marks. Detail Solution: Frequency-shift keying (FSK) is a frequency modulation scheme in which digital information is transmitted through discrete frequency changes of a carrier wave. The simplest FSK is binary FSK (BFSK). BFSK literally implies using a pair of discrete frequencies to transmit binary (0s and 1s) information. With this scheme, the "1" is called the mark frequency and the "0" is called the space frequency. The time domain of an FSK-modulated carrier is illustrated in the figures to the right.
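A minimal, illustrative BFSK modulator along the lines described. The 1200/2200 Hz tone pair (Bell 202 style) and the sample rate are assumed for the example only; BFSK itself does not mandate particular frequencies:

```python
import math

def bfsk_samples(bits, f_mark=1200.0, f_space=2200.0,
                 bit_rate=1200.0, sample_rate=48000.0):
    """Generate BFSK samples for a bit string: a '1' is sent at the mark
    frequency and a '0' at the space frequency. Phase is accumulated
    continuously across bit boundaries so the waveform has no jumps."""
    samples, phase = [], 0.0
    per_bit = int(sample_rate / bit_rate)
    for bit in bits:
        f = f_mark if bit == "1" else f_space
        step = 2 * math.pi * f / sample_rate
        for _ in range(per_bit):
            samples.append(math.sin(phase))
            phase += step
    return samples

wave = bfsk_samples("1011")
print(len(wave))  # 4 bits * 40 samples per bit = 160
```

A demodulator would recover the bits by deciding, per bit period, which of the two tones carries more energy.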

Other forms of FSK


Minimum-shift keying
Minimum frequency-shift keying or minimum-shift keying (MSK) is a particularly spectrally efficient form of coherent FSK. In MSK the difference between the higher and lower frequency is identical to half the bit rate. Consequently, the waveforms used to represent a 0 and a 1 bit differ by exactly half a carrier period. This is the smallest FSK modulation index that can be chosen such that the waveforms for 0 and 1 are orthogonal. A variant of MSK called GMSK is used in the GSM mobile phone standard. FSK is commonly used in Caller ID and remote metering applications.
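The MSK rule above (tone separation equal to half the bit rate) can be checked with a few lines of arithmetic; the 1700 Hz carrier in the example is an arbitrary illustrative choice:

```python
def msk_tone_pair(carrier_hz, bit_rate):
    """For minimum-shift keying the tone separation equals half the bit
    rate, the smallest spacing that keeps the 0 and 1 waveforms orthogonal
    over one bit period. Returns the (space, mark) frequencies placed
    symmetrically around the carrier."""
    deviation = bit_rate / 4  # half the separation on each side of the carrier
    return carrier_hz - deviation, carrier_hz + deviation

space, mark = msk_tone_pair(1700.0, 1200.0)
print(space, mark)    # 1400.0 2000.0
print(mark - space)   # 600.0 = half the 1200 bit/s rate
```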

Audio FSK
Audio frequency-shift keying (AFSK) is a modulation technique by which digital data is represented by changes in the frequency (pitch) of an audio tone, yielding an encoded signal suitable for transmission via radio or telephone. Normally, the transmitted audio alternates between two tones: one, the "mark", represents a binary one; the other, the "space", represents a binary zero. AFSK differs from regular frequency-shift keying in performing the modulation at baseband frequencies. In radio applications, the AFSK-modulated signal is normally used to modulate an RF carrier (using a conventional technique, such as AM or FM) for transmission. AFSK is not generally used for high-speed data communications, since it is far less efficient in both power and bandwidth than most other modulation modes. In addition to its simplicity, however, AFSK has the advantage that encoded signals will pass through AC-coupled links, including most equipment originally designed to carry music or speech.

Applications

Most early telephone-line modems used audio frequency-shift keying to send and receive data at rates up to about 300 bits per second; the common Bell 103 modem used this technique, for example. Even today, North American caller ID uses 1200 baud AFSK in the form of the Bell 202 standard. Some early microcomputers used a specific form of AFSK modulation, the Kansas City standard, to store data on audio cassettes. AFSK is still widely used in amateur radio, as it allows data transmission through unmodified voiceband equipment. Radio control gear uses FSK, but calls it FM and PPM instead. AFSK is also used in the United States' Emergency Alert System to transmit warning information. It is used at higher bitrates for Weathercopy on Weatheradio by NOAA in the U.S., and more extensively by Environment Canada. The CHU shortwave radio station in Ottawa, Canada broadcasts an exclusive digital time signal encoded using AFSK modulation.

3. Handoff Algorithm. Solution: Marking Scheme: 1. Use of handoff algorithm = 02 marks. 2. Explanation of algorithm = 03 marks. Detail Solution: In cellular telecommunications, the term handover or handoff refers to the process of transferring an ongoing call or data session from one channel connected to the core network to another. In satellite communications it is the process of transferring satellite control responsibility from one earth station to another without loss or interruption of service.

Handover or handoff
American English tends to use the term handoff, which is most common within American organizations such as 3GPP2 and in American-originated technologies such as CDMA2000. In British English the term handover is more common, and is used within international and European organisations such as ITU-T, IETF, ETSI and 3GPP, and standardised within European-originated standards such as GSM and UMTS. The term handover is more common than handoff in academic research publications and literature, while handoff is slightly more common within the IEEE and ANSI organisations.

Purpose
In telecommunications there may be different reasons why a handover might be conducted:

- When the phone is moving away from the area covered by one cell and entering the area covered by another cell, the call is transferred to the second cell to avoid call termination when the phone gets outside the range of the first cell.
- When the capacity for connecting new calls of a given cell is used up, an existing or new call from a phone located in an area overlapped by another cell is transferred to that cell, freeing up capacity in the first cell for users who can only be connected to it.
- In non-CDMA networks, when the channel used by the phone suffers interference from another phone using the same channel in a different cell, the call is transferred to a different channel in the same cell or to a channel in another cell to avoid the interference.
- Again in non-CDMA networks, when the user's behaviour changes: for example, when a fast-travelling user connected to a large, umbrella-type cell stops, the call may be transferred to a smaller macro cell or even a micro cell to free capacity on the umbrella cell for other fast-travelling users and to reduce potential interference to other cells or users. This works in reverse too: when a user is detected to be moving faster than a certain threshold, the call can be transferred to a larger umbrella-type cell to minimise the frequency of handovers due to this movement.
- In CDMA networks, a soft handoff (see further down) may be induced to reduce the interference to a smaller neighbouring cell due to the "near-far" effect, even when the phone still has an excellent connection to its current cell.

The most basic form of handover is when a phone call in progress is redirected from its current cell (called the source) and its channel in that cell to a new cell (called the target) and a new channel. In terrestrial networks the source and the target cells may be served from two different cell sites or from one and the same cell site (in the latter case the two cells are usually referred to as two sectors on that cell site). A handover in which the source and the target are different cells (even if they are on the same cell site) is called an inter-cell handover; its purpose is to maintain the call as the subscriber moves out of the area covered by the source cell and into the area of the target cell. A special case is possible in which the source and the target are one and the same cell and only the channel is changed during the handover. Such a handover, in which the cell is not changed, is called an intra-cell handover; its purpose is to replace a channel that is suffering interference or fading with a new, clearer or less-faded channel.

Types of handover
In addition to the above classification into inter-cell and intra-cell handovers, handovers can also be divided into hard and soft:

A hard handover is one in which the channel in the source cell is released and only then is the channel in the target cell engaged. The connection to the source is broken before the connection to the target is made; for this reason such handovers are also known as break-before-make. Hard handovers are intended to be instantaneous in order to minimize the disruption to the call, and are perceived by network engineers as an event during the call.

A soft handover is one in which the channel in the source cell is retained and used for a while in parallel with the channel in the target cell. In this case the connection to the target is established before the connection to the source is broken, hence this handover is called make-before-break. The interval during which the two connections are used in parallel may be brief or substantial, so a soft handover is perceived by network engineers as a state of the call rather than a brief event. A soft handover may involve connections to more than two cells: connections to three, four or more cells can be maintained by one phone at the same time. When a call is in a state of soft handover, the signal of the best of all used channels can be utilised for the call at a given moment, or all the signals can be combined to produce a clearer copy of the signal. The latter is more advantageous, and when such combining is performed in both the downlink (forward link) and the uplink (reverse link) the handover is termed softer. Softer handovers are possible when the cells involved share a single cell site.
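A common way to trigger a hard handover in practice is a hysteresis rule on received signal strength, which suppresses "ping-ponging" between two cells near the border. The sketch below is illustrative only; the 3 dB margin is an assumed parameter, not a standard value:

```python
def should_handover(serving_dbm, target_dbm, hysteresis_db=3.0):
    """Hysteresis rule for a hard handover decision: hand over only when
    the target cell's received signal exceeds the serving cell's by a
    fixed margin. Without the margin, a phone sitting exactly on the
    cell border would bounce between the two cells on every measurement."""
    return target_dbm > serving_dbm + hysteresis_db

print(should_handover(-85.0, -83.0))  # False: target is only 2 dB better
print(should_handover(-85.0, -80.0))  # True: target is 5 dB better
```

Real networks combine such a margin with time-to-trigger timers and, in CDMA systems, with the add/drop thresholds of the soft-handover active set.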

4. Telecommunications Management Network (TMN). Solution: Marking Scheme: 1. Use of TMN = 02 marks. 2. Explanation with one suitable example = 03 marks. Detail Solution:

The Telecommunications Management Network is a protocol model defined by ITU-T for managing open systems in a communications network. It is part of the ITU-T Recommendation series M.3000 and is based on the OSI management specifications in ITU-T Recommendation series X.700.

TMN provides a framework for achieving interconnectivity and communication across heterogeneous operations systems and telecommunication networks. To achieve this, TMN defines a set of interface points at which the elements performing the actual communications processing (such as a call-processing switch) can be accessed by elements, such as management workstations, that monitor and control them. The standard interface allows elements from different manufacturers to be incorporated into a network under a single management control. For communication between operations systems and NEs (network elements), it uses the Common Management Information Protocol (CMIP), or mediation devices when the Q3 interface is used.

TMN can be used in the management of ISDN, B-ISDN, ATM, and GSM networks; it is not as commonly used for purely packet-switched data networks. Modern telecom networks are automated and are run by OSS software, or operational support systems, which manage the networks and provide the data needed in their day-to-day running. OSS software is also responsible for issuing commands to the network infrastructure to activate new service offerings, commence services for new customers, and detect and correct network faults.

Logical layers
The framework identifies four logical layers of network management:

Business Management: functions related to business aspects; analyzes trends and quality issues, for example, or provides a basis for billing and other financial reports.
Service Management: handles services in the network, covering the definition, administration and charging of services.
Network Management: distributes network resources and performs tasks of configuration, control and supervision of the network.
Element Management: handles individual network elements, including alarm management, handling of information, backup, logging, and maintenance of hardware and software.

A Network Element provides agent services, mapping the physical aspects of the equipment into the TMN framework.

Recommendations
The TMN M.3000 series includes the following recommendations:

M.3000 - Tutorial Introduction to TMN
M.3010 - Principles for a TMN
M.3020 - TMN Interface Specification Methodology
M.3050 - Enhanced Telecommunications Operations Map (eTOM)
M.3060 - Principles for the Management of the Next Generation Networks
M.3100 - Generic Network Information Model for TMN
M.3200 - TMN Management Services Overview
M.3300 - TMN Management Capabilities at the F Interface

5. ISDN. Solution: Marking Scheme: 1. Definition of ISDN = 02 marks. 2. Applications of ISDN = 03 marks. Detail Solution: Integrated Services Digital Network (ISDN) is a set of communications standards for simultaneous digital transmission of voice, video, data, and other network services over the traditional circuits of the public switched telephone network. It was first defined in 1988 in the CCITT red book. Prior to ISDN, the phone system was viewed as a way to transport voice, with some special services available for data. The key feature of ISDN is that it integrates speech and data on the same lines, adding features that were not available in the classic telephone system. There are several kinds of access interfaces to ISDN, defined as Basic Rate Interface (BRI), Primary Rate Interface (PRI) and Broadband ISDN (B-ISDN). ISDN is a circuit-switched telephone network system, which also provides access to packet-switched networks, designed to allow digital transmission of voice and data over ordinary telephone copper wires, resulting in potentially better voice quality than an analog phone can provide. It offers circuit-switched connections (for either voice or data) and packet-switched connections (for data), in increments of 64 kbit/s. A major market application for ISDN in some countries is Internet access, where ISDN typically provides a maximum of 128 kbit/s in both upstream and downstream directions. ISDN B-channels

can be bonded to achieve a greater data rate, typically 3 or 4 BRIs (6 to 8 64 kbit/s channels) are bonded. ISDN should not be mistaken for its use with a specific protocol, such as Q.931 whereby ISDN is employed as the network, data-link and physical layers in the context of the OSI model. In a broad sense ISDN can be considered a suite of digital services existing on layers 1, 2, and 3 of the OSI model. ISDN is designed to provide access to voice and data services simultaneously. However, common use has reduced ISDN to be limited to Q.931 and related protocols, which are a set of protocols for establishing and breaking circuit switched connections, and for advanced call features for the user. They were introduced in 1986.[1] In a videoconference, ISDN provides simultaneous voice, video, and text transmission between individual desktop videoconferencing systems and group (room) videoconferencing systems.

ISDN elements

Integrated services refers to ISDN's ability to deliver at minimum two simultaneous connections, in any combination of data, voice, video, and fax, over a single line. Multiple devices can be attached to the line and used as needed. This means an ISDN line can take care of most people's complete communications needs at a much higher transmission rate, without forcing the purchase of multiple analog phone lines.

Basic Rate Interface


The entry-level interface to ISDN is the Basic Rate Interface (BRI), a service delivering 128 kbit/s of user data over a pair of standard telephone copper wires. The total 144 kbit/s rate is broken down into two 64 kbit/s bearer channels ('B' channels) and one 16 kbit/s signaling channel ('D' channel or delta channel). BRI is sometimes referred to as 2B+D. The interface specifies the following network interfaces:

The U interface is a two-wire interface between the exchange and a network terminating unit, which is usually the demarcation point in non-North American networks. The T interface is a serial interface between a computing device and a terminal adapter, which is the digital equivalent of a modem.

The S interface is a four-wire bus that ISDN consumer devices plug into; the S and T reference points are commonly implemented as a single interface labeled 'S/T' on an NT1. The R interface defines the point between a non-ISDN device and a terminal adapter (TA), which provides translation to and from such a device.
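The 2B+D arithmetic behind BRI rates and B-channel bonding can be sketched as follows (the helper function is illustrative):

```python
B_CHANNEL_KBITS = 64   # each bearer channel
D_CHANNEL_KBITS = 16   # signaling channel on a BRI

def bri_rates(bonded_bris=1):
    """Rates for one or more bonded Basic Rate Interfaces: each BRI is
    2B+D, so user data is 2 * 64 kbit/s per BRI, and the line total adds
    the 16 kbit/s D channel. Returns (user_data_kbits, line_total_kbits)."""
    data = bonded_bris * 2 * B_CHANNEL_KBITS
    total = bonded_bris * (2 * B_CHANNEL_KBITS + D_CHANNEL_KBITS)
    return data, total

print(bri_rates())   # (128, 144) - a single BRI
print(bri_rates(4))  # (512, 576) - four bonded BRIs, i.e. 8 B channels
```

This matches the figures in the text: 128 kbit/s of user data per BRI, a 144 kbit/s line rate, and 6 to 8 bonded B channels for higher-rate applications such as videoconferencing.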
