
Video Revision Essays:

History of Film and the Digital Revolution

History: Until persistence of vision was understood, it was difficult to conceive of moving images. Studies by Eadweard Muybridge helped in understanding the nature of motion; he used still cameras activated by trip wires to carry out his experiments on the motion of horses. The theory behind these techniques is still used today, and this work began the understanding of the principle of visualising moving objects known as persistence of vision. This led to the development of celluloid film, which provided a convenient way of stringing together a series of images that could then be run at a suitable frame rate. One of the first devices to use this was the Kinetoscope, and the Lumière brothers were the first to project moving images onto a screen, in 1895. Special effects have also featured from the early days, for example split-screen and stop-motion techniques. Early compositing techniques were used as well, although these were all optical, including the blue-screen difference process, the blue-screen separation process and the sodium vapour process. These optical techniques were supplemented by more direct techniques, most importantly rear projection: foreground action was shot in front of a background thrown onto a screen by a projector behind it, making it appear as if the foreground action had been shot in the same location as the background. These were known as process shots, and Hitchcock used them in 1958's Vertigo. An alternative method, rotoscoping, involved hand-drawn mattes for each frame and could produce spectacular shots, for example in 1968's 2001: A Space Odyssey, where the rotoscoping took three months to create a shot of a space station orbiting the Earth. Further experiments with film formats also commenced in the 1950s, with the 3D boom dying out after 1954 following the creation of the CinemaScope format, which had an aspect ratio of 2.35:1 and was used in The Robe in 1953. Larger print sizes were also tested, using 70mm to improve the quality of the final image.

Sound (in case): The introduction of sound also meant massive technological and creative changes. Early sound equipment, such as the microphones, was not very good, so many recordings were done in post-production in a studio. Sound mixing facilities were also not available at the start.

Sound-on-disc processes such as Vitaphone were quickly superseded by sound-on-film processes such as Fox Movietone.

Digital Revolution: In the 1990s the digital revolution changed the face of film. By scanning film with a device such as a laser it became possible to convert picture information into data which could then be manipulated on a computer; the final print could then be created by writing the data back onto raw film stock using a laser. The next step in the development was to remove film from the process entirely, by shooting with a digital video camera and projecting the final movie with a digital projector as well. These advances have opened up a wide range of possibilities. Computer-based editing packages mean that the material can be edited without having to cut any film, so editing decisions are slightly less crucial, as it is always possible to go back and use any of the footage. Optical techniques were replaced with digital compositing, which is so good that it is difficult to see the joins in most cases, for example Gollum in The Lord of the Rings series, where several different methods of compositing achieved near-perfect results. Colour grading became much easier and more versatile, with the ability to grade only certain areas of interest also becoming possible. Special effects techniques became much easier to create, and sophisticated motion matching between layers of composites became possible, so advances in camera angles could be made; this also meant that integrating CGI was relatively easy compared to before. Collaborative editing in post-production between a range of professionals also became easier. This did have a slight negative impact, as visual spectacle began to overshadow storytelling within films.

History of 3D: In 1838 Charles Wheatstone published a paper which outlined the theory of stereoscopic vision. It made the crucial point that by presenting each eye with a slightly different version of a scene, an illusion of 3D can be perceived. In 1844 David Brewster used prismatic lenses to create practical stereoscopic viewing devices. Oliver Wendell Holmes improved this by adding convex lenses, which meant the eyes did not have to accommodate for the nearness of the image, thus improving depth perception, and in 1862 he and Joseph Bates marketed a cheap stereoscope which forms the basis of many modern designs. Several magic lantern shows created a stereoscopic effect by projecting two colour-coded images onto a screen; these were viewed using coloured glasses, and this method was known as the anaglyph image. The earliest film to use the technique was The Power of Love in 1922. The same year Hammond and Cassidy invented the Teleview system, but it was not popular because it was uncomfortable: audiences had to look through viewers which were synchronised to the projectors. In the early 1950s a boom began, with films using Polaroid technology to create stereoscopic effects. Screens needed to be silver rather than white to preserve the orientation of the polarised light, and two projectors with polarising filters had to be used. This was used in House of Wax in 1953, but problems occurred due to difficulties in synchronisation, which led to its decline, and by 1954 3D films had largely ceased due to the popularity of formats such as CinemaScope. The success of Avatar has revived 3D in the present day. However, 3D requires twice the number of images of a 2D film, increasing production costs, because films need specialised 3D cameras in order to shoot the 3D images. Cinemas also need revamping in order to show 3D films, and a large portion of income these days comes from home distribution, meaning low-cost 3D TVs are needed before film-makers can profit from developing 3D films.

Motion Graphics: In the early days, film titles were developed with techniques stemming from early animation. An example of this is Felix the Cat, which used cel animation to speed up the production of animation sequences. These animations showed the graphical possibilities of moving images, some even mixed with live action, which was pioneered by Max Fleischer; he also invented rotoscoping, tracing live action frame by frame. These techniques were not used straight away in film titles, which were dominated by text; font shapes changed, but variations were restrained to suit the style of the film.

However, these early titles basically listed the characters and did not convey the mood of the film or build an atmosphere. City Lights made a huge step forward for title sequences by using superimposition, by means of optical printers, in an attempt to create a story through the title cards. In 1940 the title sequence for Spook Sport used animation techniques and was quite remarkable for its time. Saul Bass, however, revolutionised the design of title sequences in the 1950s, as he saw the creative possibilities of establishing a mood in the context of the film within the title sequence. He designed the opening titles for films such as The Man with the Golden Arm, in which white lines move on screen until coming together to form a twisted arm; he also synchronised a silhouette of a body in time with music in 1959's Anatomy of a Murder and used text zooming effects in 1958's Vertigo. These techniques, although primitive now, were very advanced in their time; they were very time consuming and expensive to make, as they were done by hand. Friz Freleng furthered the use of animation techniques by creating the elegant Pink Panther title sequence. Robert Brownjohn developed title sequences further by actually projecting text onto dancers for the James Bond title sequence in From Russia with Love. The projection technique was then taken even further by Maurice Binder, who used an optical printer to project colours onto a tank in Thunderball. Although the James Bond title sequences had an uncanny ability to convey mood, the elegance and style were taken further when video software packages were used for the first time in a Bond title sequence by Daniel Kleinman in GoldenEye. Kyle Cooper, who is considered one of the most successful motion graphics artists, furthered the field again by combining the techniques used by Saul Bass with computer software, which allowed fewer creative limitations, and produced a stunning title sequence for the movie Se7en. Catch Me If You Can also used these animation principles for its title sequence, created with Adobe After Effects, which shows that even now the theory behind title sequences has not changed, just the speed and the creative possibilities.

Compositing

Formula: C = (M * FG) + ((1 - M) * BG)
where C = composite, M = matte, FG = foreground, BG = background.

This is an image-combining operation and consists of a foreground layer, a background layer and a matte to identify which area is which. The compositing operation consists of three steps. First, the foreground is scaled (multiplied) by the matte. Second, the background is scaled by the inverse of the matte. Then the scaled foreground and scaled background are summed to create the final composite. The scaled foreground needs to be transparent in the areas where the background is to be seen, and partially transparent in any semi-transparent regions, including the blending regions that surround the foreground object. In terms of the maths, scaling the background by the inverted matte punches a black hole where the scaled foreground object is to go.
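As a rough illustration, here is a minimal NumPy sketch of the three steps (the array names, shapes and sample values are my own assumptions, not from the notes):

```python
import numpy as np

def composite(fg, bg, matte):
    """Combine a foreground and background using a matte.

    fg, bg : float arrays of shape (H, W, 3), values in 0..1
    matte  : float array of shape (H, W, 1), 1 = foreground, 0 = background
    Implements C = (M * FG) + ((1 - M) * BG).
    """
    scaled_fg = matte * fg          # step 1: scale the foreground by the matte
    scaled_bg = (1.0 - matte) * bg  # step 2: punch a hole in the background
    return scaled_fg + scaled_bg    # step 3: sum the two scaled layers

# Tiny example: a 2x2 image with a half-transparent matte value.
fg = np.full((2, 2, 3), 0.8)
bg = np.full((2, 2, 3), 0.2)
matte = np.array([[[1.0], [0.5]],
                  [[0.5], [0.0]]])
print(composite(fg, bg, matte))
```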


Processed foreground: Used to get softer edge characteristics. A backing plate the same colour as the foreground's backing colour is created; this is then turned into a scaled backing plate by multiplying it by the inverted matte. The scaled backing plate is then subtracted from the foreground to create the processed foreground.
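A hedged sketch of the same idea in NumPy, assuming a uniform backing colour and normalised 0..1 values (the function and variable names are illustrative):

```python
import numpy as np

def processed_foreground_composite(fg, bg, matte, backing_colour):
    """Composite using a processed foreground.

    fg, bg         : (H, W, 3) float arrays, 0..1
    matte          : (H, W, 1) float array, 1 = foreground object
    backing_colour : length-3 sequence, e.g. the blue-screen colour
    """
    backing_plate = np.ones_like(fg) * np.asarray(backing_colour)
    scaled_backing = backing_plate * (1.0 - matte)   # backing colour everywhere the object is not
    pfg = np.clip(fg - scaled_backing, 0.0, 1.0)     # subtract to remove the backing colour
    return pfg + bg * (1.0 - matte)                  # add the hole-punched background
```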

Maximum operation: This is a matteless composite, created by choosing the brighter of the two pixels at each position in the two images. It is good for creating clean composites and is most useful when the foreground layer has a darker backing colour than the background layer over which the object of interest is being placed; the backing colour is removed from the composite because it is darker than the background's pixels.


Minimum operation: The opposite of the maximum operation: the darker of the two pixels is chosen at each position, which is useful when the backing colour is brighter than the background layer.
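A minimal sketch of both matteless operations in NumPy (the names are illustrative; both assume the two layers are the same size and in the 0..1 range):

```python
import numpy as np

def max_composite(fg, bg):
    # Keep the brighter pixel from each layer; works when the foreground's
    # backing colour is darker than the background it is placed over.
    return np.maximum(fg, bg)

def min_composite(fg, bg):
    # The opposite case: keep the darker pixel from each layer, useful when
    # the backing colour is brighter than the background layer.
    return np.minimum(fg, bg)
```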

Slice graph: The slice graph is a very important analysis tool. It is created by using a slice tool to draw a line across a region of interest in a picture; the second step is to plot the pixel values from under the line onto a graph. This shows the colour levels relative to each other, provides insight for pulling mattes and for despill operations, and can also reveal uneven lighting so that it can be fixed.
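A rough sketch of a slice graph, assuming a simple horizontal slice line and NumPy/Matplotlib (the synthetic gradient image and positions are placeholders):

```python
import numpy as np
import matplotlib.pyplot as plt

def slice_graph(image, row, col_start, col_end):
    """Plot R, G and B values along a horizontal line through the image.

    image : (H, W, 3) float array, values 0..1
    """
    segment = image[row, col_start:col_end, :]
    xs = np.arange(col_start, col_end)
    for channel, colour in zip(range(3), ("red", "green", "blue")):
        plt.plot(xs, segment[:, channel], color=colour, label=colour)
    plt.xlabel("pixel position along slice")
    plt.ylabel("pixel value")
    plt.legend()
    plt.show()

# Example with synthetic data: a horizontal gradient image.
img = np.dstack([np.tile(np.linspace(0, 1, 200), (100, 1))] * 3)
slice_graph(img, row=50, col_start=20, col_end=180)
```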

RM = G - max(R, B), where RM = raw matte, G = green, R = red and B = blue.


Raw Matte

Colour curve information: The first step is to scale the raw matte up to 100% white density, stopping as soon as it reaches full white because further scaling hardens the matte edges. If the black region contains any contaminated pixels, the blacks then have to be scaled as well; this is done little by little so as not to harden the matte, and the blacks and whites are touched up until a solid matte is created with the least possible amount of scaling.
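A tentative sketch of pulling and scaling a green-screen raw matte along these lines, assuming NumPy arrays in the 0..1 range (the white-point and black-point values are illustrative assumptions):

```python
import numpy as np

def raw_matte(fg):
    """Colour-difference raw matte for a green backing: RM = G - max(R, B).

    The result has partial density (roughly 0.2 to 0.4) over the backing
    region and is near 0 over the foreground object, i.e. white = backing.
    """
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    return np.clip(g - np.maximum(r, b), 0.0, 1.0)

def scale_matte(rm, white_point, black_point=0.0):
    """Scale the raw matte so the backing region reaches 100% white density.

    white_point : raw-matte value that should map to 1.0 (e.g. 0.3); stop
                  scaling as soon as full white is reached, because pushing
                  further hardens the matte edges.
    black_point : raw-matte value that should map to 0.0, nudged up little
                  by little only if the black region has contaminated pixels.
    """
    return np.clip((rm - black_point) / (white_point - black_point), 0.0, 1.0)
```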

Matte Creations

Rotoscoping: Rotoscoping is drawing mattes frame by frame and is a manual way of creating mattes. The matte alters from frame to frame and can be drawn by hand, but this is very time consuming. It can produce spectacular results, as shown in 1968's 2001: A Space Odyssey, where it took three months to create the scene of the space station orbiting the Earth. It can now be done in software packages, where the mattes are interpolated between keyframes.

Luma keys: A luma key takes an RGB image and computes a monochrome luminance version of it using the formula L = 0.3R + 0.59G + 0.11B; the matte is then pulled from this luminance information. Using a single threshold can lead to very hard edges, because pixel values greater than the threshold are set to 100% white and those below are set to black. A second threshold is therefore added to soften the edges: one threshold sets the 100% density edge and the other the 0% density edge, with a gradient between them to create softer edges (see the sketch below).

Chroma keying: This is based on colour information. It takes the RGB image and converts it into HSV (hue, saturation, value); the technique involves choosing a certain range of colours and using this as the basis on which to define a matte. It is also possible to select a brightness range.

Colour difference: This is the most popular keying method, as it gives superior edge characteristics to the matte. It relies on the backing being one of the primary colours. The basic theory is that the difference between the green record, for example, and the larger of the other two records is relatively large in the backing region but very small in the foreground region. The difference results in a raw matte of partial density, typically 0.2 to 0.4, in the backing region and near 0 in the foreground: RM = G - max(R, B).
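A minimal sketch of the double-threshold luma key described above, assuming NumPy and normalised 0..1 values (the threshold names are my own):

```python
import numpy as np

def luma_key(image, low, high):
    """Double-threshold luma key.

    image : (H, W, 3) float array, 0..1
    low   : luminance at or below which the matte has 0% density (black)
    high  : luminance at or above which the matte has 100% density (white)
    Values between the two thresholds ramp linearly, giving a softer edge
    than a single hard threshold would.
    """
    luminance = (0.3 * image[..., 0] +
                 0.59 * image[..., 1] +
                 0.11 * image[..., 2])
    return np.clip((luminance - low) / (high - low), 0.0, 1.0)
```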

Matching the Light Space: Matching the light space is the process of making two pictures that were photographed separately appear as if they were shot together once composited. The method approaches the matching tasks in a logical order: first the brightness and contrast are matched by lining up the luminance levels; then colour matching issues are addressed to fix any colour bias; after that the light direction, quality and interactions are fixed so that the lighting conditions appear to be the same and to interact with each other; then, to refine the composite, the shadows are examined and their edge characteristics adjusted; finally, atmospheric haze is addressed to handle depth perception.

Brightness and contrast: These are matched by working on a luminance version of the image, looking at the blacks, whites and midtones and adjusting them to line up using a curve tool, which allows input values to be mapped to new output values (for example, remapping a foreground black level of 0.15 onto the background's black level).

Measuring the black levels in the background takes place first, with the foreground then adjusted to match the background layer; the whites are matched in a similar fashion. Make sure not to select a specular highlight as the white reference when trying to match the whites of a regular diffuse surface. When increasing contrast, clipping of the blacks and whites can arise if a simple colour curve or contrast adjustment node is used; using an S-curve instead is more elegant and gently squeezes the blacks and whites. Even with the blacks and whites matched, the midtones may not line up. This is easy to fix without affecting the blacks and whites, as it is possible to clamp the control points that have already been set and then gently bend the middle of the curve until the midtones match.
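A hedged sketch of the level-matching and S-curve ideas, assuming luminance values in the 0..1 range (the measured black and white levels and the smoothstep-based S-curve are illustrative choices, not from the notes):

```python
import numpy as np

def match_levels(fg_luma, fg_black, fg_white, bg_black, bg_white):
    """Remap the foreground's measured black and white levels onto the
    background's, like setting two points on a colour curve."""
    t = (fg_luma - fg_black) / (fg_white - fg_black)
    return bg_black + t * (bg_white - bg_black)

def gentle_s_curve(x, strength=0.2):
    """A gentle S-curve: adds midtone contrast while easing into the blacks
    and whites instead of clipping them (blend of identity and smoothstep)."""
    x = np.clip(x, 0.0, 1.0)
    return (1.0 - strength) * x + strength * x * x * (3.0 - 2.0 * x)

fg_luma = np.array([0.15, 0.5, 0.92])                 # measured foreground samples
print(match_levels(fg_luma, 0.15, 0.92, 0.05, 0.85))  # now spans the background's range
print(gentle_s_curve(fg_luma))
```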

Colour: Colours can be very difficult to match. Grey cards can be used because they contain equal amounts of R, G and B, which provides a calibration reference. Because of how an image was shot, a slight colour bias may occur, so a grey object in the scene can be used to match the colour bias of the foreground to the background; this also works well when matching skin tones.
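As a rough illustration of the grey-card idea, here is a sketch that derives per-channel gains from grey samples taken in each plate (the sample values are made up):

```python
import numpy as np

def grey_card_gains(fg_grey_sample, bg_grey_sample):
    """Compute per-channel gains that pull the foreground's grey reference
    towards the background's, removing a colour bias between the plates.

    Each argument is an (R, G, B) sample taken from a grey (equal-RGB)
    object or card in the respective plate.
    """
    return np.asarray(bg_grey_sample, dtype=float) / np.asarray(fg_grey_sample, dtype=float)

gains = grey_card_gains((0.52, 0.48, 0.45), (0.50, 0.50, 0.50))
# Multiplying the whole foreground plate by these gains neutralises the bias:
# fg_balanced = fg * gains
print(gains)
```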

Light direction, quality and interactions: Light directions need to be matched so that the light appears to come from the same direction, with the shadows casting the same way. Light quality involves making the two images appear as if they were lit by the same kind of light source, for example matching the foreground to the background's soft lighting. Lighting interactions are also important: if a person in a white shirt is composited in front of a red-lit wall, the white shirt will need a slight red tint. Shadows: Shadows have edge characteristics that depend on the light source and take on colour from the light sources; the initial approach is to find the background's shadow edge characteristics and attempt to match the foreground's shadows to them. Atmospheric haze: Haze has the effect of blurring distant objects, which affects the perceived depth of objects. It is caused by the atmosphere scattering light as well as light scattering from the object, and it can be fixed with either a simple colour correction or by creating a separate haze plate.

Gamma: Gamma is a mathematical operation performed on image pixels that alters the brightness of the midtones without affecting the black and white points. In the early days, display devices had a non-linear characteristic, and studies of human perception showed that viewers prefer a little gamma, how much depending on the local lighting. This is why gamma is used instead of scaling operations for altering the brightness or darkness of images: it affects images in a way similar to the non-linear response of the eye, so it seems more natural, and it also avoids clipping the image, so no detail is lost. It is a power function applied to each normalised pixel value to get the new pixel value, O = I^G. For example, if a pixel value of 128 is represented as 0.5 and a gamma of 2.2 is applied, the new value is 0.5^2.2, roughly 0.22.

So values above 1 darken an image, values below 1 brighten it, and a value of exactly 1 returns the original value. Most software packages invert the gamma value so that values above 1 brighten the image; this is done by making the exponent of the power function 1/gamma, so entering a gamma of 2.0 actually applies an exponent of 0.5.
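A minimal sketch of applying gamma under both conventions, assuming normalised 0..1 values:

```python
import numpy as np

def apply_gamma(value, gamma, invert=True):
    """Apply a gamma adjustment to normalised pixel values (0..1).

    With invert=True (the convention most packages use) a gamma above 1
    brightens the image, because the exponent actually applied is 1/gamma.
    With invert=False the raw power function O = I**gamma is used, where
    values above 1 darken and values below 1 brighten.
    """
    exponent = 1.0 / gamma if invert else gamma
    return np.asarray(value, dtype=float) ** exponent

print(apply_gamma(0.5, 2.2, invert=False))  # ~0.22, midtone darkened
print(apply_gamma(0.5, 2.2, invert=True))   # ~0.73, midtone brightened
```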

Monitor gamma: The monitor's non-linear characteristic is an inherent feature of the cathode ray tube. Increasing the voltage to the electron gun increases the stream of electrons hitting the phosphors, which in turn glow brighter. However, the stream of electrons does not increase linearly with gun voltage; instead it has a characteristic corresponding to a gamma value between 2.35 and 2.5, depending on the CRT's construction.

Gamma correction: Gamma correction is used to compensate for the aforementioned non-linear characteristic. The correction is applied using a look-up table for the monitor, with values determined using the formula 1/gamma. The original image is first loaded into the workstation's frame buffer, an area of RAM that holds images for display on the monitor; the look-up table is then used to convert input pixel values to output pixel values. In practice, however, a gamma correction value of 2.2 is used instead of 2.5.
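A rough sketch of building and applying such a look-up table with NumPy (the 8-bit table size and example pixel values are assumptions):

```python
import numpy as np

def build_gamma_lut(monitor_gamma=2.2, size=256):
    """Build a gamma-correction look-up table for 8-bit pixel values.

    Each entry is input**(1/monitor_gamma), so that the monitor's own
    gamma of roughly 2.2 to 2.5 is largely cancelled out on display.
    """
    inputs = np.arange(size) / (size - 1)
    return np.round((inputs ** (1.0 / monitor_gamma)) * (size - 1)).astype(np.uint8)

lut = build_gamma_lut(2.2)
frame_buffer = np.array([[0, 64, 128, 255]], dtype=np.uint8)  # example image data in RAM
corrected = lut[frame_buffer]   # the LUT converts input pixels to output pixels
print(corrected)
```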

End-to-end gamma: Because displayed images look better with a slight gamma left in, the gamma correction value of 2.2 is applied against a display gamma of around 2.5, which gives an overall value of roughly 1.1 (2.5 / 2.2 is about 1.14). This value is known as the end-to-end gamma and is the most important figure, as it determines how an image will actually look on the display device. The formula is: end-to-end gamma = display gamma / gamma correction value.
