
CONTENTS

Chapter 1 Introduction
1.1 WHAT IS THE PROJECT ABOUT?
1.2 APPLICATIONS OF VIDEO CONFERENCING
1.3 MOTIVATION OF OUR PROJECT
1.4 HISTORY

Chapter 2 Theoretical Background
2.1 TECHNOLOGY USED
2.2 TYPES OF BAND
2.3 NBFM MODULATOR
2.4 PHASOR REPRESENTATION
2.5 BLOCK DIAGRAM OF NBFM GENERATION
2.6 GENERATION OF WIDEBAND FM
2.7 BESSEL'S PLOT

Chapter 3 Hardware & Software Description
3.1 VGA CAMERA
3.2 SIGNAL
3.3 WHAT IS A VGA CAMERA?
3.4 WHAT IS A VGA PICTURE?
3.5 DIFFERENCE BETWEEN VGA & MEGAPIXEL
3.6 CAMERA DRIVER
3.7 TECHNICAL DETAILS
3.8 COLORIMETRY
3.9 COLOURS OF THE SYSTEM
3.10 VARIANTS

Chapter 4 Implementation
4.1 FEATURES
4.2 POWER SUPPLY
4.3 ADVANTAGES

Chapter 5 Conclusion & Future Scope

Chapter 6 References

Chapter 1 Introduction

1.1 What is the project about?


Wireless video conferencing system with a secure communication link: The wireless video conferencing system with a secure link described in this report uses wireless video and audio to establish communication with a person inside a restricted zone. This type of conferencing system is well suited to isolated, restricted zones where tight security is required when setting up a communication link. The principle of a remotely established link is used here to carry the audio/video transmission through secure channels. A video camera collects the images at each end and converts them into electrical signals; that is, the visible light of the collected images is converted into invisible electrical signals. Similarly, a microphone collects the audio at each end and converts it into electrical signals. These electrical signals are then transmitted and received at both ends. The present wireless conferencing system is mainly designed to provide a secure communication link in rural areas, where we may want to hold a meeting or a conference between several individuals for a limited time. The conferencing system can transmit and receive both audio and video signals on both sides in real time. By installing this system we can connect several people at the same time over a secure link that cannot be hacked or detected by others, since the transmitted code is not known to them. The range of our conferencing system is about 100 metres, using the ISM band allocated by the FCC.

1.2 APPLICATIONS OF A VIDEO CONFERENCING SYSTEM


RETAIL
Monitoring for shoplifters and dishonest employees.
Compiling real-time evidence against bogus accident claims.
Monitoring merchandising displays in stores.

MANUFACTURERS, BANKS, CASINOS, HOSPITALS AND UNIVERSITIES
Identifying visitors and employees.
Thwarting theft.
Ensuring the security of premises and parking.

1.3 MOTIVATION OF OUR PROJECT


Simply speaking, video conferencing is the use of video cameras to transmit a signal to a specific, limited set of monitors. It differs from broadcast television in that the signal is not openly transmitted over the public airwaves. Usually a conferencing system uses a stand-mounted or hand-held video camera to project a magnified image onto a video monitor or a TV screen. A video conferencing system may operate continuously, or only as required to monitor a particular event. The technology of conferencing originated in the 1950s with low-resolution black-and-white cameras. Its main objective was to replace humans in visual security surveillance. In the 1980s, solid-state cameras and VCRs were introduced. The resolution was better, but the pictures were still black and white. In the 1990s came the transformation to colour, and eventually digital video recorders (DVRs) were introduced. In this project, we use a VGA camera for real-time transmission of audio and video, such as AV transmission in a VCR.

1.4 HISTORY
As technology continues to advance, it becomes more and more convenient for the everyday user. Developers recognize how important space is to us, and they're constantly inventing new ways for us to use smaller, more efficient devices. A wireless conferencing system is a perfect example of this, and it's an enormously convenient device for a number of purposes. Whether we're using it for video communication at work or for broadcasting our own small local TV station, a wireless audio/video transmitter can give us more freedom than we've ever experienced. The transmitter of a conferencing system does exactly what it sounds like: it transmits audio and video signals wirelessly. The signal we transmit can be almost any kind: we can transmit our computer's image onto monitors in different rooms, watch movies on multiple TVs, or control a stereo from a remote that's metres away. The number and types of media we can transmit depend on the exact model we have. Wireless conferencing systems are especially useful to people with bigger homes, for obvious reasons. The ability to transmit a signal through walls is particularly convenient to people in older homes as well, since every room might not have a cable or phone outlet, and wireless capability makes this a non-issue. However, placement is important, as the range of a transmitter isn't unlimited and can vary between models. Normally, a transmitter will reach between 400 and 1,000 square feet, getting weaker the farther away you get. Walls can also reduce the range of the signal.

Adapting to the distance limitations should be fairly easy, and simply requires good placement. Also, different transmitters have greater and lesser range, so if we're worried about it, we can just look at the specifications of the specific model we're considering. Of course, this step is important for more than just signal range: we'll always need to check the specifications of a transmitter to see whether we can use it. No wireless transmitter is going to cover every possible type of connection, and it's important to make sure that the transmitter we buy is compatible with the type of signal we want to transmit.

Chapter 2 Theoretical Background


2.2 TYPES OF BAND

Band             Channels         Frequency
Low-band VHF     2 through 6      54–72 MHz and 76–88 MHz
High-band VHF    7 through 13     174–216 MHz
UHF              14 through 69    470–806 MHz
UHF              70 through 83    806–890 MHz (currently assigned to land mobile radio services)


Bandwidth of FM and PM Signals

The bandwidth of the different AM modulation techniques ranges from the bandwidth of the message signal (for SSB) to twice the bandwidth of the message signal (for DSBSC and full AM). When FM signals were first proposed, it was thought that their bandwidth could be reduced to an arbitrarily small value. Compared to the bandwidth of the different AM modulation techniques, this would in theory be a big advantage. It was assumed that a signal with an instantaneous frequency that changes over a range of Δf Hz would have a bandwidth of Δf Hz. When experiments were done, it was discovered that this was not the case: the bandwidth of an FM signal for a specific message signal is at least equal to the bandwidth of the corresponding AM signal. In fact, FM signals can be classified into two types, narrowband and wideband FM signals, depending on the bandwidth of each of these signals.

Narrowband FM and PM

The general form of an FM signal that results when modulating a message signal m(t) is

    g_FM(t) = A cos( ω_c t + k_f ∫ m(α) dα ),

where the integral runs from −∞ to t.

A narrowband FM or PM signal satisfies the condition

    |k_f a(t)| << 1

for FM, and

    |k_p m(t)| << 1

for PM, where

    a(t) = ∫ m(α) dα (integrated from −∞ to t),

such that a change in the message signal does not result in a large change in the instantaneous frequency of the FM signal.

Now, we can write the above as

    g_FM(t) = A cos[ ω_c t + k_f a(t) ].

Starting with FM, to evaluate the bandwidth of this signal we need to expand it using a power-series expansion. So, we define a slightly different (complex) signal

    ĝ_FM(t) = A e^{j[ω_c t + k_f a(t)]} = A e^{jω_c t} e^{j k_f a(t)}.

Remember that

    ĝ_FM(t) = A e^{j[ω_c t + k_f a(t)]} = A cos[ω_c t + k_f a(t)] + jA sin[ω_c t + k_f a(t)],

so

    g_FM(t) = Re{ ĝ_FM(t) }.

Now we can expand the term e^{j k_f a(t)} in ĝ_FM(t), which gives

    ĝ_FM(t) = A e^{jω_c t} [ 1 + j k_f a(t) + j² k_f² a²(t)/2! + j³ k_f³ a³(t)/3! + j⁴ k_f⁴ a⁴(t)/4! + … ]
            = A [ e^{jω_c t} + j k_f a(t) e^{jω_c t} − (k_f² a²(t)/2!) e^{jω_c t} − j (k_f³ a³(t)/3!) e^{jω_c t} + (k_f⁴ a⁴(t)/4!) e^{jω_c t} + … ].

Since k_f and a(t) are real (a(t) is real because it is the integral of the real function m(t)), and since Re{e^{jω_c t}} = cos(ω_c t) and Re{j e^{jω_c t}} = −sin(ω_c t), then

    g_FM(t) = Re{ ĝ_FM(t) }
            = A [ cos(ω_c t) − k_f a(t) sin(ω_c t) − (k_f² a²(t)/2!) cos(ω_c t) + (k_f³ a³(t)/3!) sin(ω_c t) + (k_f⁴ a⁴(t)/4!) cos(ω_c t) + … ].


The assumption we made for narrowband FM is |k_f a(t)| << 1. This assumption makes all the terms with powers of k_f a(t) greater than 1 small compared to the first two terms. So, the following is a reasonable approximation:

    g_FM (Narrowband)(t) ≈ A [ cos(ω_c t) − k_f a(t) sin(ω_c t) ]

when |k_f a(t)| << 1.


It must be stressed that the above approximation is only valid for narrowband FM signals that satisfy the condition |k_f a(t)| << 1. The above signal is simply the addition (or actually the subtraction) of a cosine (the carrier) and a DSBSC signal (one that uses a sine as its carrier). The message signal that modulates the DSBSC signal is not m(t) but its integral a(t). One of the properties of the Fourier transform tells us that the bandwidths of a signal m(t) and of its integral a(t) (and of its derivative too) are the same (verify this). Therefore, the bandwidth of the narrowband FM signal is

    BW_FM (Narrowband) = BW_DSBSC = 2 · BW_m(t).

We will see later that when the condition |k_f a(t)| << 1 is not satisfied, the bandwidth of the FM signal becomes higher than twice the bandwidth of the message signal. Similar relationships hold for PM signals. That is,

    g_PM (Narrowband)(t) ≈ A [ cos(ω_c t) − k_p m(t) sin(ω_c t) ]

when |k_p m(t)| << 1, and

    BW_PM (Narrowband) = BW_DSBSC = 2 · BW_m(t).

Construction of Narrowband Frequency and Phase Modulators

The above approximations for narrowband FM and PM can be easily used to construct modulators for both types of signals.

[Block diagram, Narrowband FM Modulator: the message m(t) is passed through an integrator ∫( · )dα to give a(t); a(t) is scaled by k_f (with k_f << 1) and multiplied by sin(ω_c t), obtained from cos(ω_c t) via a π/2 phase shift; the product is subtracted from the carrier cos(ω_c t) to produce g_FM (Narrowband)(t).]
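As a numerical sanity check, the modulator above can be sketched in a few lines of Python. This is only an illustration, not part of the original design: the sample rate, carrier, tone frequency, and k_f below are arbitrary values chosen so that the narrowband condition |k_f a(t)| << 1 holds.

```python
import numpy as np

fs = 100_000                   # sample rate in Hz (illustrative)
t = np.arange(1000) / fs       # 10 ms of signal
fc, fm = 10_000, 500           # carrier and message-tone frequencies (Hz)
kf = 0.1                       # small, so that |kf * a(t)| << 1

m = np.cos(2 * np.pi * fm * t)     # message m(t)
a = np.cumsum(m) / fs              # a(t): running integral of m(t)
wc = 2 * np.pi * fc

exact = np.cos(wc * t + kf * a)                        # true FM signal
narrowband = np.cos(wc * t) - kf * a * np.sin(wc * t)  # NBFM approximation

# With |kf * a(t)| on the order of 1e-5 here, the two waveforms agree closely
print(np.max(np.abs(exact - narrowband)))
```

The printed maximum error is tiny, confirming that the DSBSC-plus-carrier structure reproduces the FM signal when the narrowband condition holds.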


Indirect Method for Wideband FM Generation:


Consider the following block diagram:

[Block diagram: the message m(t) drives a Narrowband FM Modulator, producing g_FM (NB)(t); this is passed through a nonlinear device ( · )^P to produce g_FM (WB)(t). A BPF is assumed to be included in this block to pass the signal at the highest carrier frequency and reject all others.]

A narrowband FM signal can be generated easily using the block diagram of the narrowband FM modulator that was described earlier. The narrowband FM modulator generates a


narrowband FM signal using simple components such as an integrator (an op-amp), oscillators, multipliers, and adders. The generated narrowband FM signal can be converted to a wideband FM signal by simply passing it through a nonlinear device with power P. Both the carrier frequency and the frequency deviation Δf of the narrowband signal are increased by a factor of P. Sometimes the desired increase in the carrier frequency and the desired increase in Δf are different. In this case, we increase Δf to the desired value and use a frequency shifter (multiplication by a sinusoid followed by a BPF) to change the carrier frequency to the desired value.
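The frequency-multiplying action of the nonlinear device can be illustrated with a bare carrier. This is a simplification (the real input is the narrowband FM signal, whose whole spectrum moves the same way), and all numeric values below are arbitrary illustrative choices:

```python
import numpy as np

fs = 1_000_000                 # sample rate (Hz), illustrative
t = np.arange(10_000) / fs     # 10 ms of signal
fc = 50_000                    # input carrier frequency (Hz)
P = 3                          # order of the nonlinear device ( . )^P

y = np.cos(2 * np.pi * fc * t) ** P   # output of the nonlinearity

# The BPF after the device would select the highest component, at P * fc
spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
significant = freqs[spectrum > 0.1 * spectrum.max()]
print(significant.max())       # 150000.0, i.e. P * fc
```

For P = 3 the output contains components at fc and 3·fc (from cos³θ = (3 cos θ + cos 3θ)/4); the BPF keeps only the highest one.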

Example 1: A narrowband FM modulator is modulating a message signal m(t) with bandwidth 5 kHz and is producing an FM signal with the following specifications: fc1 = 300 kHz, Δf1 = 35 Hz. We would like to use this signal to generate a wideband FM signal with the following specifications: fc2 = 135 MHz, Δf2 = 77 kHz. Show the block diagram of several systems that will perform this function and specify the characteristics of each system.

Solution: We see that the ratio of the carrier frequencies is


    fc2 / fc1 = (135 × 10^6) / (300 × 10^3) = 450,

and the ratio of the frequency deviations is

    Δf2 / Δf1 = (77 × 10^3) / 35 = 2200.

Therefore, we should feed the narrowband FM signal into a single nonlinear device (or multiple devices) with a combined nonlinearity order of Δf2/Δf1 = 2200. If we do this, the carrier frequency of the narrowband FM signal will also increase by a factor of 2200, which is higher than what is required. This can easily be corrected by frequency shifting. If instead we feed the narrowband FM signal into a nonlinear device of order fc2/fc1 = 450, we will get the correct carrier frequency but the wrong value for Δf, and there is no way of correcting the value of Δf for this signal without affecting the carrier frequency.

System 1:

[Block diagram, System 1: m(t) (BW_m = 5 kHz) → Narrowband FM Modulator → g_FM (NB)(t) with Δf1 = 35 Hz, fc1 = 300 kHz, BW = 2 × 5 = 10 kHz → nonlinear device ( · )^2200 → g_FM3 (WB)(t) with Δf3 = 77 kHz, fc3 = 660 MHz, BW3 = 2(Δf3 + BW_m) = 164 kHz → frequency shifter: multiply by cos(2π(525 MHz)t), then BPF with CF = 135 MHz, BW = 164 kHz → g_FM2 (WB)(t) with Δf2 = 77 kHz, fc2 = 135 MHz, BW2 = 2(Δf2 + BW_m) = 164 kHz.]

In this system, we are using a single nonlinear device with an order of 2200, or multiple devices with a combined order of 2200. It is clear that the output of the nonlinear device has the correct Δf but an incorrect carrier frequency, which is corrected using a frequency shifter with an oscillator frequency equal to the difference between the frequency of its input signal and the desired carrier frequency. We could also have used an oscillator with a frequency that is the sum of the frequencies of the input signal and the desired carrier frequency. This system is characterized by a frequency shifter whose oscillator frequency is relatively large.

System 2:

[Block diagram, System 2: m(t) (BW_m = 5 kHz) → Narrowband FM Modulator → g_FM (NB)(t) with Δf1 = 35 Hz, fc1 = 300 kHz, BW = 2 × 5 = 10 kHz → nonlinear device ( · )^44 → g_FM3 (WB)(t) with Δf3 = 1540 Hz, fc3 = 13.2 MHz, BW3 = 2(Δf3 + BW_m) = 13,080 Hz → frequency shifter: multiply by cos(2π(10.5 MHz)t), then BPF with CF = 2.7 MHz, BW = 13.08 kHz → g_FM4 (WB)(t) with Δf4 = 1540 Hz, fc4 = 135/50 = 2.7 MHz, BW4 = 2(Δf4 + BW_m) = 13,080 Hz → nonlinear device ( · )^50 → g_FM2 (WB)(t) with Δf2 = 77 kHz, fc2 = 135 MHz, BW2 = 2(Δf2 + BW_m) = 164 kHz.]

In this system, we are using two nonlinear devices (or two sets of nonlinear devices) with orders 44 and 50 (44 × 50 = 2200). There are other possibilities for factorizing 2200, such as 2 × 1100, 4 × 550, 8 × 275, 10 × 220, and so on. Depending on the available components, one of these factorizations may be better than the others. In fact, in this case we could have used the same factorization with the 50 first, followed by the 44. We want the output signal of the overall system to be as shown in the block diagram above, so we have to ensure that the input to the nonlinear device of order 50 has the correct carrier frequency, such that its output has a carrier frequency of 135 MHz. This is done by dividing the desired output carrier frequency by the nonlinearity order of 50, which gives 2.7 MHz. This allows us to figure out the frequency of the required oscillator, which in this case is either 13.2 − 2.7 = 10.5 MHz or 13.2 + 2.7 = 15.9 MHz. We are generally free to choose whichever we like, unless the available components dictate the use of one and not the other. Comparing this system with System 1 shows that the oscillator frequency required here is significantly lower (10.5 MHz compared to 525 MHz), which is generally an advantage.
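The arithmetic of Example 1 can be checked with a short script; every value below is taken directly from the example itself:

```python
# Example 1 parameters, from the text
fc1, df1 = 300e3, 35.0      # narrowband carrier frequency and deviation
fc2, df2 = 135e6, 77e3      # desired wideband carrier frequency and deviation
bw_m = 5e3                  # message bandwidth

P = df2 / df1               # required total nonlinearity order
print(P)                    # 2200.0

# System 2: split P = 44 * 50 and frequency-shift between the stages
P1, P2 = 44, 50
fc3 = fc1 * P1              # carrier after the first nonlinearity: 13.2 MHz
fc4 = fc2 / P2              # carrier needed at the input of ( . )^50: 2.7 MHz
lo = fc3 - fc4              # oscillator for the down-shift: 10.5 MHz
bw2 = 2 * (df2 + bw_m)      # output bandwidth 2(df2 + BWm): 164 kHz
print(fc3, fc4, lo, bw2)
```

The printed values reproduce the figures in the System 2 block diagram.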


Chapter 3 Hardware & Software Description

3.1 VGA (Video Graphics Array)


Video Graphics Array (VGA) refers specifically to the display hardware first introduced with the IBM PS/2 line of computers in 1987,[1] but through its widespread adoption has also come to mean either an analog computer display standard, the 15-pin D-subminiature VGA connector, or the 640×480 resolution itself. While this resolution was superseded in the personal computer market in the 1990s, it is becoming a popular resolution on mobile devices.[2] VGA was the last graphical standard introduced by IBM to which the majority of PC clone manufacturers conformed, making it today (as of 2010) the lowest common denominator that all PC graphics hardware can be expected to implement without device-specific driver software. For example, the Microsoft Windows splash screen appears while the machine is still operating in VGA mode, which is the reason that this screen always appears in reduced resolution and color depth. VGA was officially followed by IBM's Extended Graphics Array (XGA) standard, but it was effectively superseded by numerous slightly different extensions to VGA made by clone manufacturers, which came to be known collectively as Super VGA.

3.2 Signal
The intended value of the VGA horizontal frequency is exactly double the value used in the NTSC-M video system. The formula for the VGA horizontal frequency is thus (60/1001) × 525 kHz = 4500/143 kHz ≈ 31.4686 kHz. All other frequencies used by the VGA card are derived from this value by integer multiplication or division. Since the exactness of quartz oscillators is limited, real cards will have a slightly higher or lower frequency. For the most common VGA mode, 640×480 "60 Hz" non-interlaced, the horizontal timings are:[9]

Parameter                   Value      Unit
Pixel frequency             25.175     MHz
Horizontal frequency        31.4686    kHz
Horizontal pixels           640
Horizontal sync polarity    Negative
Total time for each line    31.77      µs
Front porch (A)             0.94       µs
Sync pulse length (B)       3.77       µs
Back porch (C)              1.89       µs
Active video (D)            25.17      µs
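These timing figures can be cross-checked with a few lines of Python (the interval values are the ones quoted in the table above; nothing new is assumed):

```python
# Horizontal frequency from the formula in the text: (60/1001) * 525 kHz
h_freq_khz = 60 / 1001 * 525
print(h_freq_khz)                  # 31.4685..., equal to 4500/143 kHz

# Period of one scan line
line_time_us = 1000 / h_freq_khz
print(round(line_time_us, 2))      # 31.78 µs, close to the table's 31.77 µs

# The four intervals in the table add up to one full line period
print(0.94 + 3.77 + 1.89 + 25.17)  # 31.77 µs
```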


Addressing details

[Figure: Examples of VGA images in 640×480×16 (top) and 320×200×256 (bottom) modes. Dithering is used to mask color limitations.]

The video memory of the VGA is mapped into the PC's memory via a window in the range 0xA0000–0xBFFFF of the PC's real-mode address space (A000:0000 to B000:FFFF in segment:offset notation). Typically, these starting segments are:

0xA0000 for EGA/VGA graphics modes (64 KB)
0xB0000 for monochrome text mode (32 KB)
0xB8000 for color text mode and CGA-compatible graphics modes (32 KB)

Due to the use of different address mappings for different modes, it is possible to have a monochrome adapter (i.e. MDA or Hercules) and a color adapter such as the VGA, EGA, or CGA installed in the same machine. At the beginning of the 1980s, this was typically used to display Lotus 1-2-3 spreadsheets in high-resolution text on a monochrome display and the associated graphics on a low-resolution CGA display simultaneously. Many programmers also used such a setup, with the monochrome card displaying debugging information while a program ran in graphics mode on the other card. Several debuggers, like Borland's Turbo Debugger, D86 (by Alan J. Cox) and Microsoft's CodeView, could work in a dual-monitor setup; either Turbo Debugger or CodeView could be used to debug Windows. There were also DOS device drivers such as ox.sys, which implemented a serial-interface simulation on the monochrome display and, for example, allowed the user to receive crash messages from debugging versions of Windows without using an actual serial terminal. It is also possible to use the "MODE MONO" command at the DOS prompt to redirect the output to the monochrome display. When a monochrome adapter was not present, it was possible to use the 0xB0000–0xB7FFF address space as additional memory for other programs (for example, by adding the line "DEVICE=EMM386.EXE I=B000-B7FF" to config.sys, this memory would be made available to programs that can be "loaded high", i.e. loaded into high memory).
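The segment:offset arithmetic behind these mappings is simply linear address = segment × 16 + offset, which a short sketch can verify:

```python
def linear(segment: int, offset: int) -> int:
    """Real-mode linear address: segment * 16 + offset."""
    return (segment << 4) + offset

print(hex(linear(0xA000, 0x0000)))  # 0xa0000: start of the EGA/VGA graphics window
print(hex(linear(0xB800, 0x0000)))  # 0xb8000: start of color text mode
print(hex(linear(0xB000, 0xFFFF)))  # 0xbffff: end of the video memory window
```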

In camera terms
VGA refers to a digital image with a resolution of 640x480 pixels. The term comes from the Video Graphics Array display hardware first introduced with the IBM PS/2 computer in 1987, which had a maximum resolution of 640x480 and 8-bit colour depth, giving 256 possible colours. This size was adopted as a standard by many other graphics boards. VGA also denotes the 15-pin D-connector which became standard for analog monitor connections.

3.3 What is a VGA Camera?


Digital cameras are typically identified by a megapixel rating that denotes the highest resolution, in pixels, of the images they can capture. Video Graphics Array resolution is 640 pixels wide by 480 pixels high. While most digital cameras and other electronic devices that contain cameras no longer use the outdated VGA display standard, there are still a few applications where it remains useful. VGA is an abbreviation for Video Graphics Array, and it is basic and almost outdated as far as cameras are concerned. Nowadays digital cameras are selected based on their megapixel count: mobile phones commonly come with 1.3 to 3 megapixel cameras, and dedicated cameras are available with 4, 5, 6, 7, 8 and up to 10 megapixels. The higher the megapixel number, the better the picture quality, and a photo taken with a higher-megapixel camera can be enlarged to a very large size without any distortion or visible pixelation.

A VGA camera gives you pictures of 640 x 480 resolution. A 1.3 MP camera gives you a resolution of 1280 x 1024 pixels, which means a far superior picture quality.
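The megapixel figures quoted here follow directly from the pixel counts:

```python
def megapixels(width: int, height: int) -> float:
    """Total pixel count expressed in millions of pixels."""
    return width * height / 1e6

print(megapixels(640, 480))    # 0.3072 -> the ~0.3 MP of a VGA camera
print(megapixels(1280, 1024))  # 1.31072 -> marketed as "1.3 MP"
```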

3.4 What Are VGA Pictures?


IBM developed the video graphics array technology for computer monitors. Other graphics technologies that can display videos at a higher resolution have replaced VGA as a graphics standard.


Today, the VGA standard is mostly used in some lower-end cellular phones, older digital cameras and handheld electronic products. Because of the low resolution produced by a VGA camera, you cannot enlarge VGA pictures without losing definition.

VGA
VGA technology converts a digital signal to an analog signal. The analog signal then passes through a VGA cable and is converted back to a digital signal. With older CRT monitors, this was the preferred method of displaying images. Pictures produced using this technology have a low resolution of 640 x 480. This does not, however, mean that the pictures lack clarity. The primary problem with a VGA picture is that, when you enlarge it, the image becomes blurred. For example, if you display a VGA picture on a 1280 x 1024 resolution monitor, the VGA image will appear distorted or blurred.

VGA Cameras
VGA cameras take pictures at the regular 640 x 480 VGA resolution. This size is ideal for small wallet-sized pictures and photos you want to store on your computer or send as an email attachment. VGA pictures have a small file size that does not require a significant amount of memory. This technology, however, rarely exists in newer cameras that primarily use megapixels to determine resolution capabilities. The megapixel cameras range from a one-megapixel capacity to over 200 megapixels.

3.5 Difference Between VGA and Megapixel


The 640 x 480 resolution of VGA contains 0.3 megapixels, or 307,200 pixels, of data within the picture. A single megapixel contains a million pixels. A pixel is a picture element, one of the tiny dots that make up a digital picture. The more pixels contained in a picture, the higher the resolution of the picture.

New Displays


The newer displays used by most smartphones, tablets, computer monitors and televisions use a variety of alternative technologies. These technologies include LCD, LED, OLED, AMOLED and Plasma and produce superior digital images when compared to the older VGA technology. These newer technologies also allow users to take advantage of specialized applications and capabilities that require a higher resolution image.

C328R JPEG camera and .NET Micro Framework

One of the cool devices I wanted to get working with the .NET Micro Framework was a camera. The problem with most cameras is that they use almost 20 pins when connecting to a CPU: eight pins for the Y bus (grey-scale image), another eight for the U bus (colour channel), and three pins for clock, frame sync and h-sync. In addition, the camera usually has to be connected over I2C for configuration settings (brightness, contrast, rotation, etc.). Luckily there is a company called COMedia Ltd. producing still-image cameras connected over a serial interface. The cameras can be purchased from Electronics 123.


[Figure: C328R camera module]

C328R JPEG Compression VGA Camera Module: the C328R is a neat camera which can be purchased with different lenses. These cameras are controlled by commands sent over a serial interface, so there are just four wires (Tx, Rx, +3.3 V, Gnd). The communication logic is TTL, so an additional circuit is required when connecting to an RS-232 port. The best feature of this camera is its ability to produce JPEG images directly, or 'raw' images in different colour depths and sizes. The camera resolution is VGA, so pictures from 80x60 up to 640x480 can be taken. Connecting the camera to the Tahoe development board is simple: Tahoe serial port two is used. The ground black wire is connected to the pins labelled 0V; the power red wire must be connected to the 3.3V pins. This is important, because connecting the camera to 5V may damage it. The yellow wire goes from Tx on the camera to Rx on the Tahoe, and the green wire connects Rx on the camera with Tx on the Tahoe. The figures show the connection.


[Figure: Connecting power]

[Figure: Connecting the serial interface]

3.6 Camera driver

The C328R is controlled by twelve commands, such as Sync, Initial, Snapshot, GetPicture and so on. Each command is represented by two bytes and has four arguments; the complete command packet is always six bytes long. I definitely recommend reading the C328R User Manual for a better understanding of the camera's functionality. The driver I wrote for the Micro Framework exposes the camera commands as a managed-code class. It is necessary to understand that this is a low-level driver for one specific piece of hardware; for general-purpose use in real-life projects, it should be wrapped in a more abstract camera class.


[Figure: C328R bottom view]

Initializing the camera driver: the constructor of the camera class takes as its argument a SerialPort.Configuration representing the communication parameters and the port to which the camera is connected. The first thing to do before talking to the camera is to call the Sync() method. Sync wakes the camera up and lets it detect the communication speed by sending the SYNC command (AA0Dh) up to sixty times. If the process is successful and the camera responds, the method returns true. Once the camera is up, the Initial command (AA01h) can be sent. This command configures picture quality and compression, and it is represented by the Initial(C328R.ColorType, C328R.PreviewResolution, C328R.JpegResolution) method. The first two attributes have no effect if only JPEG pictures will be taken. Everything is visible in the following listing; lines 9 to 12 are completely optional and not required in the initialization process.
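Based on the packet layout described above (a one-byte AAh prefix, a one-byte command ID, and four parameter bytes, six bytes in total), a packet builder can be sketched as follows. The helper name and the zero parameter defaults are illustrative, not the driver's actual API; the parameter meanings for each command are in the C328R User Manual.

```python
def c328r_packet(command: int, p1: int = 0, p2: int = 0, p3: int = 0, p4: int = 0) -> bytes:
    """Build a six-byte C328R command packet: AAh prefix, command ID, four parameters."""
    return bytes([0xAA, command, p1, p2, p3, p4])

SYNC = 0x0D     # AA0Dh, from the text
INITIAL = 0x01  # AA01h, from the text

sync_cmd = c328r_packet(SYNC)
print(sync_cmd.hex())       # 'aa0d00000000'
print(len(sync_cmd))        # 6: every packet is exactly six bytes
```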


NTSC, named for the National Television System Committee,[1] is the analog television system that is used in most of North America, parts of South America (except Brazil, Argentina, Uruguay, and French Guiana), Burma, South Korea, Taiwan, Japan, the Philippines, and some Pacific island nations and territories (see map). Most countries using the NTSC standard, as well as those using other analog television standards, are switching to newer digital television standards, of which at least four different ones are in use around the world. North America, parts of Central America, and South Korea are adopting the ATSC standards, while other countries are adopting or have adopted other standards. The first NTSC standard was developed in 1941 and had no provision for color television. In 1953 a second, modified version of the NTSC standard was adopted, which allowed color television broadcasting compatible with the existing stock of black-and-white receivers. NTSC was the first widely adopted broadcast color system and remained dominant where it had been adopted until the first decade of the 21st century, when it was replaced with digital ATSC. After nearly 70 years of use, the vast majority of over-the-air NTSC transmissions in the United States were turned off on June 12, 2009, and on August 31, 2011 in Canada and most other NTSC markets. Digital broadcasting permits higher-resolution television, but digital standard-definition television in these countries continues to use the frame rate and number of lines of resolution established by the analog NTSC standard. In the United States a small number of short-range local and TV relay stations continue to broadcast NTSC, as the FCC allows. NTSC baseband video signals are also still often used in video playback (typically of recordings from existing libraries using existing equipment) and in CCTV and surveillance video systems.

The National Television System Committee was established in 1940 by the United States Federal Communications Commission (FCC) to resolve the conflicts that arose between companies over the introduction of a nationwide analog television system in the United States. In March 1941, the committee issued a technical standard for black-and-white television that built upon a 1936 recommendation made by the Radio Manufacturers Association (RMA). Technical advancements of the vestigial sideband technique created the opportunity to increase the image resolution. The NTSC selected 525 scan lines as a compromise between RCA's 441-scan-line standard (already being used by RCA's NBC TV network) and the desire of Philco and DuMont to increase the number of scan lines to between 605 and 800. The standard recommended a frame rate of 30 frames (images) per second, consisting of two interlaced fields per frame at 262.5 lines per field and 60 fields per second. Other standards in the final recommendation were an aspect ratio of 4:3, and frequency modulation (FM) for the sound signal (which was quite new at the time). In January 1950, the Committee was reconstituted to standardize color television. In December 1953, it unanimously approved what is now called the NTSC color television standard (later defined as RS-170a). The "compatible color" standard retained full backward compatibility with existing black-and-white television sets. Color information was added to the black-and-white image by adding a color subcarrier of 4.5 MHz × 455/572 = 315/88 MHz (approximately 3.58 MHz) to the video signal. Reducing the visibility of interference between the chrominance signal and the FM sound carrier required a slight reduction of the frame rate from 30 frames per second to 30/1.001 (approximately 29.97) frames per second, and a change of the line frequency from 15,750 Hz to 15,750/1.001 Hz (approximately 15,734.26 Hz). The FCC had briefly approved a different color television standard, starting in October 1950, which was developed by CBS.[2] However, this standard was incompatible with black-and-white broadcasts. It used a rotating color wheel, reduced the number of scan lines from 525 to 405, and increased the field rate from 60 to 144, but had an effective frame rate of only 24 frames per second.
Legal action by rival RCA kept commercial use of the system off the air until June 1951, and regular broadcasts only lasted a few months before manufacture of all color television sets was banned by the Office of Defense Mobilization (ODM) in October, ostensibly due to the Korean War.[3] CBS rescinded its system in March 1953,[4] and the FCC replaced it on December 17, 1953 with the NTSC color standard, which was cooperatively developed by several companies, including RCA and Philco.[5] The first publicly announced network television broadcast of a program using the NTSC "compatible color" system was an episode of NBC's Kukla, Fran and Ollie on August 30, 1953, although it was viewable in color only at the network's headquarters.[6] The first nationwide view of NTSC color came on the following January 1 with the coast-to-coast broadcast of the Tournament of Roses Parade, viewable on prototype color receivers at special presentations across the country. The first color NTSC television camera was the RCA TK-40, used for experimental broadcasts in 1953; an improved version, the TK-40A, introduced in March 1954, was the first commercially available color television camera. Later that year, the improved TK-41 became the standard camera used throughout much of the 1960s. The NTSC standard has been adopted by other countries, including most of the Americas and Japan. With the advent of digital television, analog broadcasts


are being phased out. Most U.S. NTSC broadcasters were required by the FCC to shut down their analog transmitters in 2009. Low-power stations, Class A stations and translators were not immediately affected. An analog cut-off date for those stations was not set.

3.7 Technical details


Lines and refresh rate

NTSC color encoding is used with the System M television signal, which consists of 29.97 interlaced frames of video per second. Each frame is composed of two fields, each consisting of 262.5 scan lines, for a total of 525 scan lines. 483 scan lines make up the visible raster. The remainder (the vertical blanking interval) are used for synchronization and vertical retrace. This blanking interval was originally designed simply to blank the receiver's CRT, to allow for the simple analog circuits and slow vertical retrace of early TV receivers. However, some of these lines may now contain other data such as closed captioning and vertical interval timecode (VITC). In the complete raster (disregarding half lines due to interlacing), the even-numbered scan lines (every other line that would be even if counted in the video signal, e.g. {2, 4, 6, ..., 524}) are drawn in the first field, and the odd-numbered lines (e.g. {1, 3, 5, ..., 525}) are drawn in the second field, to yield a flicker-free image at the field refresh frequency of approximately 59.94 Hz (actually 60 Hz/1.001). For comparison, 576i systems such as PAL-B/G and SECAM use 625 lines (576 visible), and so have a higher vertical resolution, but a lower temporal resolution of 25 frames or 50 fields per second.

The NTSC field refresh frequency in the black-and-white system originally exactly matched the nominal 60 Hz frequency of alternating current power used in the United States. Matching the field refresh rate to the power source avoided intermodulation (also called beating), which produces rolling bars on the screen. When color was added to the system, the refresh frequency was shifted slightly downward to 59.94 Hz to eliminate stationary dot patterns in the difference frequency between the sound and color carriers, as explained below in "Color encoding".
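The exact rational values behind these nominal rates can be checked in a few lines. This is a quick sketch for verification, not part of the original report:

```python
from fractions import Fraction

# Exact rational forms of the NTSC rates quoted above.
field_rate = Fraction(60) / Fraction(1001, 1000)     # ≈ 59.94 Hz
frame_rate = field_rate / 2                          # ≈ 29.97 frames/s
line_rate = Fraction(15_750) / Fraction(1001, 1000)  # ≈ 15 734.27 Hz

# The color subcarrier is 4.5 MHz x 455/572 = 315/88 MHz ≈ 3.58 MHz.
color_subcarrier = Fraction(4_500_000) * Fraction(455, 572)
assert color_subcarrier == Fraction(315_000_000, 88)

print(float(field_rate), float(frame_rate), float(line_rate))
```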
Synchronization of the refresh rate to the power incidentally helped kinescope cameras record early live television broadcasts, as it was very simple to synchronize a film camera to capture one frame of video on each film frame by using the alternating current frequency to set the speed of the synchronous AC motor-drive camera. By the time the frame rate changed to


29.97 Hz for color, it was nearly as easy to trigger the camera shutter from the video signal itself.

The actual figure of 525 lines was chosen as a consequence of the limitations of the vacuum-tube-based technologies of the day. In early TV systems, a master voltage-controlled oscillator was run at twice the horizontal line frequency, and this frequency was divided down by the number of lines used (in this case 525) to give the field frequency (60 Hz in this case). This frequency was then compared with the 60 Hz power-line frequency and any discrepancy corrected by adjusting the frequency of the master oscillator. For interlaced scanning, an odd number of lines per frame was required in order to make the vertical retrace distance identical for the odd and even fields, which meant the master oscillator frequency had to be divided down by an odd number. At the time, the only practical method of frequency division was the use of a chain of vacuum-tube multivibrators, the overall division ratio being the mathematical product of the division ratios of the chain. Since all the factors of an odd number also have to be odd numbers, it follows that all the dividers in the chain also had to divide by odd numbers, and these had to be relatively small due to the problems of thermal drift with vacuum-tube devices. The closest practical sequence to 500 that meets these criteria was 3 × 5 × 5 × 7 = 525. (For the same reason, the 625-line PAL-B/G and SECAM systems use 5 × 5 × 5 × 5, the old British 405-line system used 3 × 3 × 3 × 3 × 5, the French 819-line system used 3 × 3 × 7 × 13, etc.)

3.8 Colorimetry

The original 1953 color NTSC specification, still part of the United States Code of Federal Regulations, defined the colorimetric values of the system as follows:[7]

Original NTSC colorimetry (1953)
                                    CIE 1931 x    CIE 1931 y
primary red                         0.67          0.33
primary green                       0.21          0.71
primary blue                        0.14          0.08
white point (CIE illuminant C)      0.310         0.316


Early color television receivers, such as the RCA CT-100, were faithful to this specification, having a larger gamut than most of today's monitors. Their low-efficiency phosphors, however, were dark and long-persistent, leaving trails after moving objects. Starting in the late 1950s, picture tube phosphors would sacrifice saturation for increased brightness; this deviation from the standard at both the receiver and broadcaster ends was the source of considerable color variation.[8]

Color correction in studio monitors and home receivers

To ensure more uniform color reproduction, receivers started to incorporate color correction circuits that converted the received signal, encoded for the colorimetric values listed above, into signals encoded for the phosphors actually used within the receiver.[8] Since such color correction cannot be performed accurately on the nonlinear (gamma-corrected) signals transmitted, the adjustment can only be approximated,[9] introducing both hue and luminance errors for highly saturated colors. Similarly, at the broadcaster stage, in 1968-69 the Conrac Corp., working with RCA, defined a set of controlled phosphors for use in broadcast color picture video monitors.[8] This specification survives today as the SMPTE "C" phosphor specification:

SMPTE "C" colorimetry
                                    CIE 1931 x    CIE 1931 y
primary red                         0.630         0.340
primary green                       0.310         0.595
primary blue                        0.155         0.070
white point (CIE illuminant D65)    0.3127        0.3290
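One way to see the gamut reduction is to compare the areas of the triangles that the two primary sets span in the CIE xy plane. A small illustrative sketch using the shoelace formula and the chromaticity values from the tables above (the comparison itself is not part of the report):

```python
def gamut_area(primaries):
    """Area of the triangle spanned by three (x, y) chromaticity points."""
    (xr, yr), (xg, yg), (xb, yb) = primaries
    return abs(xr * (yg - yb) + xg * (yb - yr) + xb * (yr - yg)) / 2

ntsc_1953 = [(0.67, 0.33), (0.21, 0.71), (0.14, 0.08)]
smpte_c = [(0.630, 0.340), (0.310, 0.595), (0.155, 0.070)]

a_1953 = gamut_area(ntsc_1953)  # ≈ 0.158
a_c = gamut_area(smpte_c)       # ≈ 0.104
print(f"SMPTE C covers {a_c / a_1953:.0%} of the 1953 NTSC gamut area")
```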

As with home receivers, it was further recommended that studio monitors incorporate similar color correction circuits so that broadcasters would transmit pictures encoded for the original 1953 colorimetric values, in accordance with FCC standards.


In 1987, the Society of Motion Picture and Television Engineers (SMPTE) Committee on Television Technology, Working Group on Studio Monitor Colorimetry, adopted the SMPTE C (Conrac) phosphors for general use in Recommended Practice 145,[11] prompting many manufacturers to modify their camera designs to directly encode for SMPTE "C" colorimetry without color correction,[12] as approved in SMPTE standard 170M, "Composite Analog Video Signal NTSC for Studio Applications" (1994). As a consequence, the ATSC digital television standard states that for 480i signals, SMPTE "C" colorimetry should be assumed unless colorimetric data is included in the transport stream.

3.9 Colours of the system
Japanese NTSC uses the same colorimetric values for red, blue, and green, but employs a different white point of CIE Illuminant D93 (x=0.285, y=0.293).[10] Both the PAL and SECAM systems used the original 1953 NTSC colorimetry as well until 1970;[10] unlike NTSC, however, the European Broadcasting Union (EBU) eschewed color correction in receivers and studio monitors that year and instead explicitly called for all equipment to directly encode signals for the "EBU" colorimetric values,[14] further improving the color fidelity of those systems.

Color encoding

For backward compatibility with black-and-white television, NTSC uses a luminance-chrominance encoding system invented in 1938 by Georges Valensi. Luminance (derived mathematically from the composite color signal) takes the place of the original monochrome signal. Chrominance carries color information. This allows black-and-white receivers to display NTSC signals simply by filtering out the chrominance. If it were not removed, the picture would be covered with dots (a result of chroma being interpreted as luminance). All black-and-white TVs sold in the US after the introduction of color broadcasting in 1953 were designed to filter chroma out, but early B&W sets did not do this, and chroma dots would show up in the picture.

In NTSC, chrominance is encoded using two 3.579545 MHz signals that are 90 degrees out of phase, known as I (in-phase) and Q (quadrature), a form of quadrature amplitude modulation (QAM). These two signals are each amplitude-modulated and then added together. The carrier is suppressed. Mathematically, the result can be viewed as a single sine wave with varying phase relative to a reference and varying amplitude. The phase represents


the instantaneous color hue captured by a TV camera, and the amplitude represents the instantaneous color saturation. For a TV to recover hue information from the I/Q phase, it must have a zero phase reference to replace the suppressed carrier. It also needs a reference for amplitude to recover the saturation information. So, the NTSC signal includes a short sample of this reference signal, known as the color burst, located on the 'back porch' of each horizontal line (the time between the end of the horizontal synchronization pulse and the end of the blanking pulse.) The color burst consists of a minimum of eight cycles of the unmodulated (fixed phase and amplitude) color subcarrier. The TV receiver has a "local oscillator", which it synchronizes to the color bursts and then uses as a reference for decoding the chrominance. By comparing the reference signal derived from color burst to the chrominance signal's amplitude and phase at a particular point in the raster scan, the device determines what chrominance to display at that point. Combining that with the amplitude of the luminance signal, the receiver calculates what color to make the point, i.e. the point at the instantaneous position of the continuously scanning beam. Note that analog TV is discrete in the vertical dimension (there are distinct lines) but continuous in the horizontal dimension (every point blends into the next with no boundaries), hence there are no pixels in analog TV. In CRT televisions, the NTSC signal is turned into RGB, which is then used to control the electron guns. Digital TV sets receiving analog signals instead convert the picture into discrete pixels. This process of discretization necessarily degrades the picture information somewhat, though with small enough pixels the effect may be imperceptible. Digital sets include all sets with a matrix of discrete pixels built into the display device, such as LCD, plasma, and DLP screens, but not CRTs, which do not have fixed pixels. 
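The hue/saturation geometry described above can be sketched numerically. The matrix coefficients below are the standard NTSC Y/I/Q weights; the helper names are illustrative, not from any particular library:

```python
import math

def rgb_to_yiq(r, g, b):
    """Gamma-corrected R'G'B' (0..1) to NTSC luminance Y and chrominance I, Q."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def chroma_phasor(i, q):
    """I and Q modulate two carriers 90 degrees apart; their sum is one
    subcarrier whose amplitude is saturation and whose phase (relative
    to the color burst) is hue."""
    return math.hypot(i, q), math.degrees(math.atan2(q, i))

# White carries no chrominance, so the subcarrier amplitude is ~0:
y, i, q = rgb_to_yiq(1.0, 1.0, 1.0)
saturation, hue = chroma_phasor(i, q)
print(round(y, 3), round(saturation, 3))  # 1.0 0.0
```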
This should not be confused with digital (ATSC) television signals, which are a form of MPEG video, but which still have to be converted into a format the TV can use. When a transmitter broadcasts an NTSC signal, it amplitude-modulates a radio-frequency carrier with the NTSC signal just described, while it frequency-modulates a carrier 4.5 MHz higher with the audio signal. If non-linear distortion happens to the broadcast signal, the 3.579545 MHz color carrier may beat with the sound carrier to produce a dot pattern on the screen. To make the resulting pattern less noticeable, designers adjusted the original 60 Hz field rate down by a factor of 1.001 (0.1%), to approximately 59.94 fields per second. This adjustment ensures that the sums and differences of the sound carrier and the color subcarrier and their multiples (i.e., the intermodulation products of the two carriers) are not exact multiples of the frame rate; being exact multiples is the condition for the dots to remain stationary on the screen, which would make them most noticeable.
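The effect of the 1.001 adjustment can be verified with exact rational arithmetic. A quick check, not part of the original report:

```python
from fractions import Fraction

# After the 1.001 adjustment, the 4.5 MHz sound carrier is an exact
# integer multiple of the line frequency, which is what keeps the
# intermodulation products from sitting still on the screen.
line_rate = Fraction(15_750) / Fraction(1001, 1000)  # ≈ 15 734.27 Hz
ratio = Fraction(4_500_000) / line_rate
assert ratio == 286                                  # exact integer multiple

field_rate = line_rate / Fraction(525, 2)            # 262.5 lines per field
print(float(field_rate))                             # ≈ 59.94 Hz
```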


The 59.94 rate is derived from the following calculations. Designers chose to make the chrominance subcarrier frequency an n + 0.5 multiple of the line frequency to minimize interference between the luminance signal and the chrominance signal. (Another way this is often stated is that the color subcarrier frequency is an odd multiple of half the line frequency.) They then chose to make the audio subcarrier frequency an integer multiple of the line frequency to minimize visible (intermodulation) interference between the audio signal and the chrominance signal. The original black-and-white standard, with its 15,750 Hz line frequency and 4.5 MHz audio subcarrier, does not meet these requirements, so designers had either to raise the audio subcarrier frequency or lower the line frequency. Raising the audio subcarrier frequency would prevent existing (black-and-white) receivers from properly tuning in the audio signal. Lowering the line frequency is comparatively innocuous, because the horizontal and vertical synchronization information in the NTSC signal allows a receiver to tolerate a substantial amount of variation in the line frequency. So the engineers chose to change the line frequency for the color standard.

In the black-and-white standard, the ratio of audio subcarrier frequency to line frequency is 4.5 MHz / 15,750 Hz ≈ 285.71. In the color standard, this is rounded to the integer 286, which means the color standard's line rate is 4.5 MHz / 286 ≈ 15,734 lines per second. Maintaining the same number of scan lines per field (and frame), the lower line rate must yield a lower field rate. Dividing (4,500,000 / 286) lines per second by 262.5 lines per field gives approximately 59.94 fields per second.

Transmission modulation scheme


Spectrum of a System M television channel with NTSC colour.

An NTSC television channel as transmitted occupies a total bandwidth of 6 MHz. The actual video signal, which is amplitude-modulated, is transmitted between 500 kHz and 5.45 MHz above the lower bound of the channel. The video carrier is 1.25 MHz above the lower bound of the channel. Like most AM signals, the video carrier generates two sidebands, one above the carrier and one below. The sidebands are each 4.2 MHz wide. The entire upper sideband is transmitted, but only 1.25 MHz of the lower sideband, known as a vestigial sideband, is transmitted. The color subcarrier, as noted above, is 3.579545 MHz above the video carrier, and is quadrature-amplitude-modulated with a suppressed carrier. The audio signal is frequency-modulated, like the audio signals broadcast by FM radio stations in the 88–108 MHz band, but with a 25 kHz maximum frequency swing, as opposed to the 75 kHz used on the FM band. The main audio carrier is 4.5 MHz above the video carrier, making it 250 kHz below the top of the channel. Sometimes a channel may contain an MTS signal, which offers more than one audio signal by adding one or two subcarriers on the audio signal, each synchronized to a multiple of the line frequency. This is normally the case when stereo audio and/or second audio program signals are used. The same extensions are used in ATSC, where the ATSC digital carrier is broadcast at 1.31 MHz above the lower bound of the channel.

The setup (a feature of the composite video baseband signal, CVBS) is a voltage offset between the "black" and "blanking" levels. It is unique to NTSC, and has the advantage of making NTSC video more easily separated from its primary sync signals.

Framerate conversion

There is a large difference in framerate between film, which runs at approximately 24.0 frames per second, and the NTSC standard, which runs at approximately 29.97 frames per second.
Unlike the 576i video formats, this difference cannot be overcome by a simple speed-up. A complex process called "3:2 pulldown" is used. One film frame is transmitted for three video fields (1.5 video frame times), and the next frame is transmitted for two video fields (one video frame time). Two film frames are therefore transmitted in five video fields, for an average of 2.5 video fields per film frame. The average frame rate is thus 60 / 2.5 = 24 frames/s, so the average film speed is exactly what it


should be. There are drawbacks, however. Still-framing on playback can display a video frame with fields from two different film frames, so any motion between the frames will appear as a rapid back-and-forth flicker. There can also be noticeable jitter or "stutter" during slow camera pans (telecine judder). To avoid 3:2 pulldown, film shot specifically for NTSC television is often taken at 30 frames/s.

For viewing native 576i material (such as European television series and some European movies) on NTSC equipment, a standards conversion has to take place. There are basically two ways to accomplish this: the framerate can be slowed from 25 to 23.976 frames per second (a slowdown of about 4%) so that 3:2 pulldown can subsequently be applied, or the contents of adjacent frames can be interpolated to produce new intermediate frames. Unless highly sophisticated motion-sensing computer algorithms are applied, interpolation introduces artifacts, and even a modestly trained eye can quickly spot video that has been converted between formats.
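The 3:2 pulldown cadence described above can be sketched in a few lines. This is an illustrative sketch, not broadcast code:

```python
def three_two_pulldown(film_frames):
    """Repeat each film frame for alternately 3 and 2 video fields."""
    fields = []
    for n, frame in enumerate(film_frames):
        fields.extend([frame] * (3 if n % 2 == 0 else 2))
    return fields

# Four film frames (A-D) occupy ten fields, i.e. five video frames,
# for an average of 2.5 fields per film frame: 24 x 2.5 = 60 fields/s.
fields = three_two_pulldown("ABCD")
print("".join(fields))  # AAABBCCCDD
assert len(fields) == 10
```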

Modulation for analog satellite transmission

Because satellite power is severely limited, analog video transmission through satellites differs from terrestrial TV transmission. AM is a linear modulation method, so a given demodulated signal-to-noise ratio (SNR) requires an equally high received RF SNR. The SNR of studio-quality video is over 50 dB, so AM would require prohibitively high powers and/or large antennas. Wideband FM is used instead to trade RF bandwidth for reduced power. Increasing the channel bandwidth from 6 to 36 MHz allows an RF SNR of only 10 dB or less. The wider noise bandwidth reduces this 40 dB power saving by 10·log10(36 MHz / 6 MHz) ≈ 8 dB, for a substantial net reduction of 32 dB.

Sound is on an FM subcarrier as in terrestrial transmission, but frequencies above 4.5 MHz are used to reduce aural/visual interference; 6.8, 5.8 and 6.2 MHz are commonly used. Stereo can be multiplex or discrete, and unrelated audio and data signals may be placed on additional subcarriers. A triangular 60 Hz energy dispersal waveform is added to the composite baseband signal (video plus audio and data subcarriers) before modulation. This limits the


satellite downlink power spectral density in case the video signal is lost. Otherwise the satellite might transmit all of its power on a single frequency, interfering with terrestrial microwave links in the same frequency band. In half-transponder mode, the frequency deviation of the composite baseband signal is reduced to 18 MHz to allow another signal in the other half of the 36 MHz transponder. This reduces the FM benefit somewhat, and the recovered SNRs are further reduced because the combined signal power must be "backed off" to avoid intermodulation distortion in the satellite transponder. A single FM signal is constant-amplitude, so it can saturate a transponder without distortion.

Field order

An NTSC "frame" consists of an "even" field followed by an "odd" field. As far as the reception of an analog signal is concerned, this is purely a matter of convention and makes no difference. It is rather like the broken lines running down the middle of a road: it does not matter whether a line/space pair or a space/line pair passes; the effect for a driver is exactly the same.

The introduction of digital television formats has changed things somewhat. Most digital TV formats, including the popular DVD format, record NTSC-originated video with the even field first in the recorded frame (the development of DVD took place in regions that traditionally utilize NTSC). However, this frame sequence has migrated through to the so-called PAL format (actually a technically incorrect description) of digital video, with the result that the even field is often recorded first in the frame (the European 625-line system is specified as odd field first). This is no longer a matter of convention, because a frame of digital video is a distinct entity on the recorded medium.
This means that when reproducing many non-NTSC-based digital formats (including DVD) it is necessary to reverse the field order, otherwise an unacceptable shuddering "comb" effect occurs on moving objects, as they are shown ahead in one field and then jump back in the next. This has also become a hazard where non-NTSC progressive video is transcoded to interlaced, and vice versa. Systems that recover progressive frames or transcode video should ensure that the field order is obeyed, otherwise the recovered frame will consist of a field from one frame and a field from an adjacent frame, resulting in "comb" interlacing artifacts. This can often be observed in PC-based video playing utilities if an inappropriate choice of de-interlacing algorithm is made.


Comparative quality

The SMPTE colour bars, an example of a test card.

Reception problems can degrade an NTSC picture by changing the phase of the color signal (actually differential phase distortion), so the color balance of the picture will be altered unless a compensation is made in the receiver. The vacuum-tube electronics used in televisions through the 1960s led to various technical problems. Among other things, the color burst phase would often drift when channels were changed, which is why NTSC televisions were equipped with a tint control. PAL and SECAM televisions had no need of one, and although it is still found on NTSC TVs, color drifting generally ceased to be a problem once solid-state electronics were adopted in the 1970s. When compared to PAL in particular, NTSC color accuracy and consistency is sometimes considered inferior, leading video professionals and television engineers to jokingly refer to NTSC as Never The Same Color, Never Twice the Same Color, or No True Skin Colors,[16] while for the more expensive PAL system it was necessary to Pay for Additional Luxury. PAL has also been referred to as Peace At Last or Perfection At Last in the color war. This mostly applied to vacuum-tube-based TVs, however, and solid-state sets show less of a quality difference between NTSC and PAL. The color phase, "tint", or "hue" control allows anyone skilled in the art to easily calibrate a monitor with SMPTE color bars, even on a set that has drifted in its color representation, allowing the proper colors to be displayed. Older PAL television sets did not come with a user-accessible "hue" control (it was set at the factory), which contributed to PAL's reputation for reproducible colors.

The use of NTSC-coded color in S-Video systems completely eliminates the phase distortions. As a consequence, the use of NTSC color encoding gives the highest


resolution picture quality (on the horizontal axis, and in frame rate) of the three color systems when used with this scheme. (The NTSC resolution on the vertical axis is lower than the European standards: 525 lines against 625.) However, it uses too much bandwidth for over-the-air transmission. Some home computers in the 1980s generated S-Video, but only for specially designed monitors, as no TV at the time supported it. In 1987, a standardized 4-pin DIN plug was introduced for S-Video input with the introduction of S-VHS players, which were the first devices produced to use the 4-pin plugs. However, S-VHS never became very popular. Video game consoles in the 1990s began offering S-Video output as well.

With the advent of DVD players in the 1990s, component video also began appearing. This provides separate lines for the luminance and the two color-difference signals. Thus, component produces near-RGB-quality video. It also allows 480p progressive-scan video due to the greater bandwidth offered. The mismatch between NTSC's 30 frames per second and film's 24 frames is overcome by a process that capitalizes on the field rate of the interlaced NTSC signal, thus avoiding the film playback speedup used for 576i systems at 25 frames per second (which causes the accompanying audio to increase in pitch slightly, sometimes rectified with the use of a pitch shifter), at the price of some jerkiness in the video. See "Framerate conversion" above.

3.10 Variants

NTSC-M

Unlike PAL, with its many varied underlying broadcast television systems in use throughout the world, NTSC color encoding is invariably used with broadcast system M, giving NTSC-M.

NTSC-J

Only Japan's variant "NTSC-J" is slightly different: in Japan, the black level and blanking level of the signal are identical (at 0 IRE), as they are in PAL, while in American NTSC, the black level is slightly higher (7.5 IRE) than the blanking level. Since the difference is quite small, a slight turn of the brightness knob is all that is required to correctly show the "other" variant of NTSC on any set, and most viewers might not even notice the difference in the first place. The channel encoding on NTSC-J differs slightly from NTSC-M: the Japanese VHF band runs from channels 1–12, while the American VHF band uses channels 2–13.
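The 7.5 IRE setup difference is indeed tiny in voltage terms. A quick illustrative calculation, assuming the common 1 V peak-to-peak composite signal in which 140 IRE spans sync tip to peak white (this convention is not stated in the report):

```python
# 1 IRE = 1/140 of a 1 V peak-to-peak composite video signal.
volts_per_ire = 1.0 / 140          # ≈ 7.14 mV per IRE
ntsc_m_setup = 7.5 * volts_per_ire # US black level sits this far above blanking
ntsc_j_setup = 0.0                 # Japan: black level equals blanking level
print(round(ntsc_m_setup * 1000, 1), "mV")  # 53.6 mV
```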


PAL-M (Brazil)

The Brazilian PAL-M system, introduced in 1972, uses the same lines/field as NTSC (525/60), and almost the same broadcast bandwidth and scan frequency (15.750 vs. 15.734 kHz). Prior to the introduction of color, Brazil broadcast in standard black-and-white NTSC. As a result, PAL-M signals are near-identical to North American NTSC signals, except for the encoding of the colour subcarrier (3.575611 MHz for PAL-M and 3.579545 MHz for NTSC). As a consequence of these close specifications, PAL-M will display in monochrome with sound on NTSC sets, and vice versa.

PAL-M (PAL = Phase Alternating Line) specs:
Transmission band: UHF/VHF
Frame rate: 29.97
Lines/field: 525/60
Horizontal frequency: 15.750 kHz
Vertical frequency: 60 Hz
Color subcarrier: 3.575611 MHz
Video bandwidth: 4.2 MHz
Sound carrier frequency: 4.5 MHz
Channel bandwidth: 6 MHz

NTSC (National Television System Committee) specs:
Transmission band: UHF/VHF
Lines/field: 525/60
Horizontal frequency: 15.734 kHz
Vertical frequency: 60 Hz
Color subcarrier frequency: 3.579545 MHz
Video bandwidth: 4.2 MHz
Sound carrier frequency: 4.5 MHz

PAL-N

This is used in Paraguay, Uruguay and Argentina. It is very similar to PAL-M (used in Brazil).


The similarities of NTSC-M and NTSC-N can be seen in the ITU identification scheme table: aside from the number of lines and frames per second, the systems are identical. NTSC-N/PAL-N are compatible with sources such as game consoles, VHS/Betamax VCRs, and DVD players. However, they are not compatible with broadcast signals received over an antenna, though some newer sets come with baseband NTSC 3.58 support (NTSC 3.58 being the frequency for color modulation in NTSC: 3.58 MHz).

NTSC 4.43

In what can be considered an opposite of PAL-60, NTSC 4.43 is a pseudo-color system that transmits NTSC encoding (525/29.97) with a color subcarrier of 4.43 MHz instead of 3.58 MHz. The resulting output is only viewable by TVs that support the resulting pseudo-system (usually multi-standard TVs). Using a native NTSC TV to decode the signal yields no color, while using a PAL TV to decode it yields erratic colors (observed to be lacking red and flickering randomly). The format is apparently limited to a few early laserdisc players and some game consoles sold in markets where the PAL system is used.

The NTSC 4.43 system, while not a broadcast format, appears most often as a playback function of PAL cassette-format VCRs, beginning with the Sony 3/4" U-Matic format and then following onto Betamax and VHS format machines. As Hollywood has the claim of providing the most cassette software (movies and television series) for the world's viewers, and as not all cassette releases were made available in PAL formats, a means of playing NTSC-format cassettes was highly desired. Multi-standard video monitors were already in use in Europe to accommodate broadcast sources in PAL, SECAM, and NTSC video formats. The heterodyne color-under process of U-Matic, Betamax and VHS lent itself to minor modification of VCR players to accommodate NTSC-format cassettes. The color-under format of VHS uses a 629 kHz subcarrier, while U-Matic and Betamax use a 688 kHz subcarrier, to carry an amplitude-modulated chroma signal for both NTSC and PAL formats. Since the VCR was ready to play the color portion of the NTSC recording using PAL color mode, the PAL scanner and capstan speeds had to be adjusted from PAL's 50 Hz field rate to NTSC's 59.94 Hz field rate, with a faster linear tape speed. The changes to the PAL VCR are minor thanks to the existing VCR recording formats. The output of the VCR when playing an NTSC cassette in NTSC 4.43 mode is 525 lines/29.97 frames per second with PAL-compatible heterodyned color.

The multi-standard receiver is already set to support the NTSC H & V frequencies; it just needs to do so while receiving PAL color. The existence of those multi-standard receivers was probably part of the drive for region coding of DVDs. As the color signals are component on disc for all display formats, almost no changes would be required for PAL DVD players to play NTSC (525/29.97) discs, as long as the display was frame-rate compatible.

NTSC-movie

NTSC with a frame rate of 23.976 frames/s is described in the NTSC-movie standard.

Canada/U.S. video game region

Sometimes NTSC-US or NTSC-U/C is used to describe the video gaming region of North America (the U/C refers to U.S. + Canada), as regional lockout usually restricts games released within a region to that region.

Vertical interval reference

The standard NTSC video image contains some lines (lines 1–21 of each field) that are not visible (this is known as the vertical blanking interval, or VBI); all are beyond the edge of the viewable image, but only lines 1–9 are used for the vertical-sync and equalizing pulses. The remaining lines were deliberately blanked in the original NTSC specification to provide time for the electron beam in CRT-based screens to return to the top of the display. VIR (vertical interval reference), widely adopted in the 1980s, attempts to correct some of the color problems with NTSC video by adding studio-inserted reference data for luminance and chrominance levels on line 19.[17] Suitably equipped television sets could then employ these data in order to adjust the display to a closer match of the original studio image. The actual VIR signal contains three sections: the first has 70 percent luminance and the same chrominance as the color burst signal, and the other two have 50 percent and 7.5 percent luminance respectively.


A less-used successor to VIR, GCR, also added ghost (multipath interference) removal capabilities. The remaining vertical blanking interval lines are typically used for datacasting or ancillary data such as video editing timestamps (vertical interval timecodes or SMPTE timecodes on lines 12–14), test data on lines 17–18, a network source code on line 20, and closed captioning, XDS, and V-chip data on line 21. Early teletext applications also used vertical blanking interval lines 14–18 and 20, but teletext over NTSC was never widely adopted by viewers.[21]

Many stations transmit TV Guide On Screen (TVGOS) data for an electronic program guide on VBI lines. The primary station in a market will broadcast four lines of data, and backup stations will broadcast one line. In most markets the PBS station is the primary host. TVGOS data can occupy any line from 10–25, but in practice it is limited to lines 11–18, 20 and 22. Line 22 is only used by two broadcasters, DirecTV and CFPL-TV. TiVo data is also transmitted on some commercials and program advertisements so that customers can auto-record the program being advertised; it is also used in weekly half-hour paid programs on Ion Television and the Discovery Channel which highlight TiVo promotions and advertisers.


Chapter 4 Implementation


4.1 Features
It works on the ISM band; the ISM band is licence-free, and the system operates at 4.45 GHz.
Its working range is about 100 feet.
It can work over a noisy channel.
The transmitter and receiver can work on a battery power supply.
Low power consumption.
It is a secure transmission; for security purposes, NBFM technology is employed.
Toggle switches are used for fast switching.
Error-free transmission.
Good audio and video transmission.
It works as a real-time transmission system.
Less space requirement.
High-speed transmission.
It employs a VGA camera.
Better picture quality.
Better S/N ratio at the receiving end.


4.2 Power supply


A power supply is a device that supplies electrical energy to one or more electric loads. The term is most commonly applied to devices that convert one form of electrical energy to another, though it may also refer to devices that convert another form of energy (e.g., mechanical, chemical, solar) to electrical energy. A regulated power supply is one that controls the output voltage or current to a specific value; the controlled value is held nearly constant despite variations in either load current or the voltage supplied by the power supply's energy source.

An AC-powered unregulated power supply usually uses a transformer to convert the voltage from the wall outlet (mains) to a different, nowadays usually lower, voltage. If it is used to produce DC, a rectifier converts the alternating voltage to a pulsating direct voltage, followed by a filter, comprising one or more capacitors, resistors, and sometimes inductors, to filter out (smooth) most of the pulsation. A small unwanted alternating voltage component at mains frequency or twice the mains frequency (depending on whether half- or full-wave rectification is used), known as ripple, is unavoidably superimposed on the direct output voltage.
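The ripple mentioned above can be estimated with the usual capacitor-input filter approximation, V_ripple ≈ I_load / (f_ripple × C). The sketch below uses assumed example values (50 Hz mains, 1 A load, 4700 µF reservoir capacitor), not figures from this design.

```python
def ripple_voltage(load_current_a, capacitance_f, mains_hz=50, full_wave=True):
    """Approximate peak-to-peak ripple of a capacitor-input filter.

    For full-wave rectification the capacitor is recharged at twice the
    mains frequency, halving the ripple compared with half-wave.
    """
    f_ripple = 2 * mains_hz if full_wave else mains_hz
    return load_current_a / (f_ripple * capacitance_f)

# Example: 1 A load, 4700 uF reservoir capacitor, 50 Hz mains
print(ripple_voltage(1.0, 4700e-6))                   # full-wave: ~2.13 V
print(ripple_voltage(1.0, 4700e-6, full_wave=False))  # half-wave: ~4.26 V
```

The halved ripple for full-wave rectification is exactly the "mains or twice mains frequency" distinction made in the paragraph above.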

4.3 Advantages
- It is a secure transmission.
- It works on a free spectrum band (ISM).
- It uses a coding scheme which makes the transmission safer.
- Only a dedicated receiver can decode the encoded data.
- It offers noise-free transmission.
- It can connect to any television receiver.
- It can even connect to a computer with the help of a TV tuner card.
- Both audio and video transmission are possible with a good signal-to-noise ratio.
- It transmits in real time from transmitter to receiver.
- Better picture quality.
- High-speed data transmission.


Chapter 5 Conclusion & Future Scope


CONCLUSION

Wireless technology is still in its formative stage. As it matures, we can expect substantial growth in the wireless industry within the next few years as more users adopt it. Wireless LAN technology will continue to grow, expand the reach of the Internet, and pave the way for the future.

FUTURE SCOPE
Although this is a real-time transmission system, it can be extended to store transmissions for future reference by interfacing the device with a storage element such as a hard drive. The range can also be increased by the use of a VoIP server.


Chapter 6 References


REFERENCES

www.google.com
www.1000projects.com
www.chetnasproject.com
www.ieee.gov.in
www.wikipaedia.com

www.indianengineer.wordpress.com

www.8051projects.info

www.eduplanets.com

www.firstranker.com
www.freelancer.com/

www.engineersgarage.com

