
2012 International Conference on Mechanical Engineering and Automation, Advances in Biomedical Engineering, Vol. 10

Video Image Processing System Based on FPGA

Xie Shui-Ying1, Han Cheng-Jiang1

1 Zhejiang Industry Polytechnic College, Shaoxing, Zhejiang 312000

Keywords: FPGA; Image Processing; VHDL

Abstract. A video image processing system must handle a large amount of data while maintaining good real-time performance. An FPGA used for real-time image processing has a speed advantage because every functional module can work at the same time. Compared with traditional approaches, a system based on an FPGA has the advantages of high processing speed and high reliability.

1. Introduction

Digital image processing systems usually adopt a digital signal processing chip, the DSP. A DSP is well suited to complex algorithms, but in high-speed signal processing it is difficult for a DSP to meet real-time requirements; even when a multiprocessor structure is used to share the load, the speed is limited and system reliability is easily reduced. Using an FPGA for real-time image processing brings a speed advantage: owing to the features of its internal structure, an FPGA can very easily implement a distributed algorithm structure in which every functional block works at the same time, which is very favorable for high-speed digital signal processing.

2. System structure

The video image processing system performs real-time video image capture, decoding, format conversion, display and related functions. The system captures video images with a CCD camera, then performs A/D conversion and decoding on the PAL-format analog video signal returned by the camera to obtain a digital image signal in YCrCb format. This signal undergoes interlaced-to-progressive conversion, and the processed image is sent to the VGA display after D/A conversion. The relationship between the modules is shown in Figure 1.1.
[CCD camera → video decoder ADV7181 → FPGA processor → DAC ADV7123 → VGA display]

Figure 1.1. Image processing system

The processor is a Cyclone II 2C35 FPGA. The chip has more than 64,000 logic units, 4 phase-locked loops and 1.1 Mbit of on-chip RAM; it uses a 1.2 V core supply, comes in a 484-pin BGA package, and its maximum operating frequency is up to 250 MHz. The system uses an active 50 MHz crystal as the system working frequency. For data storage, data can be written directly into the FPGA's internal block RAM, but the built-in RAM blocks are limited: when large-capacity data storage is required, the internal RAM blocks are far from satisfying the design requirement, so the FPGA is used together with an external RAM to provide the large-capacity data memory. SRAM offers high-speed access and convenient control, so this system uses SRAM as the image data cache. The SRAM chosen is the IS61LV25616 from ISSI, a 256K x 16 bit high-speed static memory based on CMOS technology, with a storage capacity of 4 Mbit, a 16-bit data bus and an 18-bit address bus.
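For illustration only (this is not code from the paper), the IS61LV25616 is an asynchronous SRAM, so the FPGA-side access path can be kept very simple. In the sketch below the lower-case sram_* signal names are assumptions, while the ADDR/DQ/CE/OE/WE/UB/LB pins follow the device datasheet.

module sram_if (
    output [17:0] SRAM_ADDR,   // to IS61LV25616 A[17:0]
    inout  [15:0] SRAM_DQ,     // to IS61LV25616 I/O[15:0]
    output        SRAM_CE_N, SRAM_WE_N, SRAM_OE_N, SRAM_UB_N, SRAM_LB_N,
    input  [17:0] sram_addr,   // word address from the cache logic (assumed name)
    input  [15:0] sram_wdata,  // data to be stored
    input         sram_wr      // high during a write access
);
    assign SRAM_ADDR = sram_addr;
    assign SRAM_DQ   = sram_wr ? sram_wdata : 16'hzzzz;  // release the bus on reads
    assign SRAM_CE_N = 1'b0;       // chip permanently enabled
    assign SRAM_WE_N = ~sram_wr;   // active-low write strobe
    assign SRAM_OE_N =  sram_wr;   // outputs disabled while the FPGA drives DQ
    assign SRAM_UB_N = 1'b0;       // both byte lanes active
    assign SRAM_LB_N = 1'b0;
endmodule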

The system uses the ADV7181 chip to convert the analog video signal into a digital signal. The ADV7181 is a low-power, multi-function digital video decoder chip produced by the American company ADI. It supports 6 analog video signal inputs and can automatically detect NTSC, PAL, SECAM and other standard composite television signals and convert them into 16-bit or 8-bit ITU-R BT.656 format YUV 4:2:2 video data. The ADV7181 is a 3.3 V CMOS device; the single-chip CMOS structure ensures high performance and low power consumption. ADI's ADV7123 performs the D/A conversion of the video signal after it has been processed by the FPGA, and the converted signal is connected to a 15-pin D-sub connector as the VGA output. The ADV7123 supports refresh rates up to 100 Hz and resolutions up to 1600 x 1200 pixels. Its D/A converter output current ranges from 2 to 26 mA, its inputs are TTL compatible, it runs from a single 5 V or 3.3 V supply, and its minimum power consumption at 3.3 V is 30 mW.

3. Technologies

3.1 Configuration

The video decoder chip ADV7181 automatically detects the PAL standard composite television signal and converts it into 16-bit YUV 4:2:2 video data. The ADV7181 contains a total of 240 control registers, through which the FPGA sets and queries the functions and status of the ADV7181. Each ADV7181 register has a default value and a set value: the default value is the value the register holds after the chip is reset, and the set value is written by the FPGA over the I2C bus to modify the defaults and reconfigure the ADV7181 for the various functional purposes. The configuration process is designed and implemented in the Verilog hardware description language, and the interface function is completed by two modules. The I2C_Controller module generates the timing required by the I2C bus specification, and the I2C_Config module generates the configuration register addresses and configuration parameters; this module is designed around a lookup table. In this design the ADV7181 samples the PAL standard analog video signal with a 27 MHz clock, performs A/D conversion, and outputs a 16-bit YUV 4:2:2 digital signal. For this purpose only 40 ADV7181 registers need to be configured; the remaining registers keep their default values after system reset. The ADV7181 I2C bus control frame consists of 3 bytes: S is the start condition and P the stop condition, SLAVE ADDRESS is the 7-bit device address (the write address of the ADV7181 is 40H), W/R is the read/write selection bit, ACK is the acknowledge signal issued by the slave device, SUB ADDRESS is the 8-bit register address, and DATA is the 8 data bits. The I2C_Config module requests the I2C_Controller module to transmit each frame according to the I2C bus protocol, exchanging data with the ADV7181 and thereby writing the specified configuration registers. Part of the key code of the configuration module is as follows:

case (LUT_INDEX)
    SET_VIDEO + 0:  LUT_DATA <= 16'h1500;
    SET_VIDEO + 1:  LUT_DATA <= 16'h1741;
    SET_VIDEO + 2:  LUT_DATA <= 16'h3a16;
    SET_VIDEO + 3:  LUT_DATA <= 16'h5004;
    // ... entries 4 to 36 omitted ...
    SET_VIDEO + 37: LUT_DATA <= 16'h2ff4;
    SET_VIDEO + 38: LUT_DATA <= 16'h30d2;
    SET_VIDEO + 39: LUT_DATA <= 16'h0e05;
endcase
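As a sketch only (not code from the paper), the following shows how the I2C_Config module might step through the lookup table above and hand each register word to the I2C_Controller module. The handshake names i2c_go, i2c_done and i2c_data, the clock clk and the reset rst_n are illustrative assumptions; LUT_DATA is the 16-bit word produced by the case table above.

// Sketch: write one ADV7181 register per I2C transfer until the
// whole lookup table has been sent.
reg  [5:0]  LUT_INDEX;    // index of the register entry being sent
reg  [23:0] i2c_data;     // {write address 40H, subaddress, data}
reg         i2c_go;       // request one I2C write frame
wire        i2c_done;     // asserted by I2C_Controller when the frame ends

localparam SET_VIDEO = 6'd0;
localparam LUT_SIZE  = 6'd40;    // 40 registers are configured

always @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
        LUT_INDEX <= SET_VIDEO;
        i2c_go    <= 1'b0;
    end else if (LUT_INDEX < LUT_SIZE) begin
        i2c_data <= {8'h40, LUT_DATA};   // ADV7181 write address + register word
        i2c_go   <= 1'b1;                // ask the controller to send the frame
        if (i2c_done) begin
            i2c_go    <= 1'b0;           // frame finished, advance to the next entry
            LUT_INDEX <= LUT_INDEX + 1'b1;
        end
    end else begin
        i2c_go <= 1'b0;                  // all registers configured
    end
end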


3.2 The decoding

In the video image processing system the ADV7181 takes on the task of decoding the analog camera data: the CVBS analog signal is converted into ITU-R BT.656 standard YUV signals. Figure 1.2 shows the ADV7181 function block diagram.
[CVBS analog video signal → ADV7181 → TD_DATA[15:0], TD_HS, TD_VS; reset input TD_RESET]

Figure 1.2. The ADV7181 function block diagram

As can be seen from the diagram, the CVBS analog signal captured by the CCD camera is converted by the ADV7181 into a YUV signal, together with the line synchronization signal HS and the frame synchronization signal VS that the digital video signal requires. The YUV signal format is shown in Figure 1.3. As the figure shows, every timing reference in the YUV stream begins with "FF, 00, 00", so a detection circuit is built around this pattern; note that SAV (start of active video) and EAV (end of active video) both begin with "FF, 00, 00" but have different XY values. According to the datasheet, bit XY[4] marks the dividing point between useful signal and blanking: if it is 0 the code word is an SAV, otherwise an EAV. Bit XY[6] is the field flag: 0 indicates an odd field and 1 an even field. Each video line is 1716 clocks long: first 4 clocks of EAV, then 268 clocks of blanking, then 4 clocks of SAV, and finally 1440 clocks of useful video data.

Figure 1.3. The YUV signal format

Every 2 clocks of the useful data form a group, giving 720 groups per line. Each group contains one Y (luminance) component, and two adjacent groups share one Cr (red chrominance) and one Cb (blue chrominance) component. In the acquisition and distribution process only the useful data needs to be captured, so it is necessary to use the detected SAV as a marker to start the distribution process. A signal selection circuit is built from a counter that separates the three YUV components: when the count is 0 or 2 the sample is a chrominance (Cb or Cr) value, and when the count is 1 or 3 it is a Y value; this effectively converts the serial component stream into parallel signals. Because the system only needs to display a black-and-white image, only the Y component is stored. The following is part of the key code of the decoding module.

// Cbb and Crr hold the incoming chrominance samples; YY, CCb and CCr hold the
// assembled pixel; YPix_clock is high whenever a Y sample has been captured.
reg [7:0] Cbb, Crr, YY, CCb, CCr;
reg       YPix_clock;

always @(posedge CLOCK) begin
    case (COUNTER)
        0: begin Cbb <= TD_D; YPix_clock <= 0; end
        1: begin YY  <= TD_D; CCr <= Crr; CCb <= Cbb; YPix_clock <= 1; end
        2: begin Crr <= TD_D; YPix_clock <= 0; end
        3: begin YY  <= TD_D; CCr <= Crr; CCb <= Cbb; YPix_clock <= 1; end
    endcase
end
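For completeness, a minimal sketch (not from the paper) of how the SAV detection described above could drive the demultiplexing counter is given below. It assumes TD_D carries one 8-bit code word per 27 MHz clock and uses the bit positions stated above (XY[6] as the field flag, XY[4] distinguishing SAV from EAV); a full implementation would also stop the counter during blanking.

// Sketch: detect "FF 00 00 XY" and restart the 2-clock demux counter at SAV.
reg [7:0] byte1, byte2, byte3;   // the last three bytes received
reg       field;                 // 0 = odd field, 1 = even field
reg [1:0] COUNTER;               // drives the Cb/Y/Cr/Y demultiplexer above

always @(posedge CLOCK) begin
    byte1 <= TD_D;
    byte2 <= byte1;
    byte3 <= byte2;
    if (byte3 == 8'hFF && byte2 == 8'h00 && byte1 == 8'h00) begin
        field <= TD_D[6];            // XY[6]: field flag
        if (TD_D[4] == 1'b0)         // XY[4] = 0: SAV, active video follows
            COUNTER <= 2'd0;         // restart the demux counter
    end else begin
        COUNTER <= COUNTER + 2'd1;   // step through Cb, Y, Cr, Y
    end
end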

3.3 The cache

Because the real-time video image processing system handles a large amount of data under strict real-time requirements, the system uses two 256K x 16 bit SRAM chips in a ping-pong arrangement, trading space for time, which guarantees the continuity of image reception and display. The ping-pong operation is designed as shown in Figure 1.4.
[According to the frame mark, the input data stream selector writes each incoming frame into one of the two SRAMs (SRAM1 or SRAM2) in turn, while the VGA display module reads the previously stored frame from the other SRAM and displays it]

Figure 1.4. Ping-pong storage
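The bank selection itself can be very small. The following is a minimal sketch (not the paper's code; signal names such as frame_start, wr_en, wr_addr and rd_addr are illustrative) of how the write and read roles of the two SRAMs might be exchanged once per frame:

// Sketch: swap the write and read roles of SRAM1/SRAM2 every frame.
reg         bank = 1'b0;         // 0: write SRAM1, read SRAM2; 1: the opposite
wire        frame_start;         // assumed pulse at the start of each input frame
wire        wr_en;               // high while a pixel is being stored
wire [17:0] wr_addr, rd_addr;    // write/read addresses from the other modules
wire [15:0] sram1_dq, sram2_dq;  // SRAM data buses (read side only shown)

always @(posedge CLOCK)
    if (frame_start)
        bank <= ~bank;           // ping-pong: exchange the two buffers

wire        SRAM1_WE_N = bank ?  1'b1    : ~wr_en;    // active-low write enables
wire        SRAM2_WE_N = bank ? ~wr_en   :  1'b1;
wire [17:0] SRAM1_ADDR = bank ?  rd_addr :  wr_addr;  // the writer gets one SRAM,
wire [17:0] SRAM2_ADDR = bank ?  wr_addr :  rd_addr;  // the VGA reader the other
wire [15:0] vga_data   = bank ?  sram1_dq : sram2_dq; // data read for display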

The ping-pong operation proceeds as follows. The input data stream passes through an input selection unit, which assigns it to one of the two data buffer modules, SRAM1 and SRAM2. In one period, while SRAM1 receives a frame of data and has it converted from interlaced to progressive form, the VGA display module reads the previous frame of image data from the SRAM2 cache and displays it; in the next period, while SRAM2 receives a frame and has it converted, the VGA display module reads that frame of image data from the SRAM1 cache and displays it, and the cycle repeats. Looking at the ping-pong unit as a whole from the outside, both the input data stream and the output data stream are continuous, without any pause, so the scheme is very suitable for seamless buffering and processing of a pipelined data stream; the use of ping-pong operation is an embodiment of the principle of trading area for speed. The ping-pong mode of operation, combined with the FPGA's inherent support for parallel working, guarantees that the image data has sufficient processing time. Because the captured video signal is interlaced, the system carries out the interlaced-to-progressive conversion with the following measures: a frame of data is stored in one SRAM, and the write address depends on the field parity (distinguished by XY[6]). Because the odd-field lines lie between the even-field lines, the write address jumps as it moves from line to line: at the end of each line an extra 640 is added to the address, leaving room for the even-field line that belongs between two odd-field lines. When the even field appears (XY[6] = 1), the address switches to the initial base address plus 640, and the remaining lines are handled in the same way. The specific address allocation is shown in Figure 1.5, and a sketch of the corresponding write-address computation follows the figure.


Odd 1st row    Base address
Even 1st row   Base address + 640
Odd 2nd row    Base address + 2 x 640
Even 2nd row   Base address + 3 x 640
Odd 3rd row    Base address + 4 x 640
Even 3rd row   Base address + 5 x 640
...
Odd nth row    Base address + (2n-2) x 640
Even nth row   Base address + (2n-1) x 640

Figure 1.5. The address allocation table
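For illustration only, the address mapping of Figure 1.5 can be written in closed form. In the sketch below, line_cnt, x and field are assumed to be maintained by the decoding logic (they are not part of the paper's listed code), and the packing of the 8-bit Y samples into the 16-bit SRAM words is omitted.

// Sketch: write-address computation matching Figure 1.5.
wire [8:0]  line_cnt;   // 0-based line number within the current field
wire [9:0]  x;          // 0-based pixel number within the current line (0..639)
wire        field;      // XY[6]: 0 = odd field, 1 = even field

localparam [18:0] BASE_ADDR = 19'd0;   // base address of the frame buffer

// Odd-field line n goes to base + (2n-2)*640 and even-field line n to
// base + (2n-1)*640, interleaving the two fields into one progressive frame.
wire [18:0] row_base = (line_cnt * 2 + field) * 640;
wire [18:0] wr_pixel = BASE_ADDR + row_base + x;   // pixel index to be stored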

Because the system displays only a black-and-white image, only the Y component of each frame is retained. Each frame is 640 x 480 pixels, or 307,200 bytes, while each SRAM holds 524,288 bytes, which meets the storage requirement. Since the interlaced-to-progressive conversion preserves the image data of both fields completely and without loss, this method keeps the image free of distortion. Another widespread interlaced-to-progressive method is given here for comparison: two caches store the odd-field and even-field data separately and, exploiting the eye's lower resolution during fast image switching, the VGA output displays only one field per frame, with the odd (even) lines simply substituted for the even (odd) lines so that each line is shown twice. Although this method saves resources, it sacrifices image resolution and fidelity, so this system does not use it.

3.4 Display

Finally, a VGA timing generator built to the industry VGA standard drives the digital-to-analog converter so that the video image is shown on the VGA display. Below is part of the vga_timing module code, used to generate the line synchronization signal hs.

process(vga_clk)
begin
    if rising_edge(vga_clk) then
        -- hs is active low during the 96-clock horizontal sync pulse that
        -- follows the 640 visible pixels and the front porch
        if (hcnt >= 640 + 8 + 8) and (hcnt < 640 + 8 + 8 + 96) then
            hs <= '0';
        else
            hs <= '1';
        end if;
    end if;
end process;

4. Conclusion

Real-time video processing places high demands on system performance, and a general-purpose DSP working in serial fashion has difficulty meeting the real-time requirement. This system uses the parallel processing capability of an FPGA to realize the video signal processing.

Using only a single FPGA, the system achieves the expected performance. DSP-based solutions usually require many DSPs embedded on a single board to obtain the required processing capacity, which not only increases the cost but also reduces the reliability of the system. An embedded system design based on an FPGA is flexible, easy to port and has a short development cycle; it is a highly efficient design approach. FPGAs will become a new direction of development in the field of video image processing and have very broad market prospects.
