
What is photogrammetry?

Photogrammetry (Greek: phot- (light) + gramma (something drawn) + metrein (to measure)) is the science of making measurements from photographs. Basic example: the distance between two points that lie on a plane parallel to the photographic image plane can be determined by measuring their distance on the image and then multiplying the measured distance by the scale factor. Typical outputs: a map, a drawing or a 3D model of some real-world object or scene. Related fields: Remote Sensing, GIS.

Main task of photogrammetry


If one wants to measure the size of an object, let's say the length, width and height of a house, then normally one will carry this out directly at the object. However, the house may not exist anymore, e.g. it was destroyed, but some historic photos exist. Then, if one can determine the scale of the photos, it must be possible to get the desired data. Of course one can use photos to get other kinds of information about objects as well. So, for example, one may receive qualitative data (the house seems to be old, the walls are colored light green) from photo interpretation, or quantitative data like mentioned before (the house has a base size of 10 by 14 meters) from photo measurement, or information in addition to one's background knowledge (the house has elements of the classic style), and so on. Photogrammetry provides methods to get information of the second type: quantitative data. As the term already indicates, photogrammetry can be defined as the "science of measuring in photos", and is traditionally a part of geodesy, belonging to the field of remote sensing (RS). If one would like to determine distances, areas or anything else, the basic task is to get the object (terrain) coordinates of any point in the photo, from which one can then calculate geometric data or create maps. Obviously, from a single photo (a two-dimensional plane) one can only get two-dimensional coordinates. Therefore, if one needs three-dimensional coordinates, a way to get the third dimension has to be found.

This is a good moment to remember the properties of human vision. Humans are able to see objects in a spatial manner, and with this they are able to estimate the distance between an object and themselves. But how does it work? In fact, the human brain at any moment receives two slightly different images, resulting from the different positions of the left and the right eye and from the central perspective of the eye. Exactly this principle, the so-called stereoscopic viewing, is used to get three-dimensional information in photogrammetry: if there are two (or more) photos of the same object taken from different positions, one may calculate the three-dimensional coordinates of any point which is represented in both photos, by setting up the equations of the rays that originate in the image projections of the object point and pass through the object point itself, and then calculating their intersection. Therefore, the main task of photogrammetry can be defined in the following way: For any object point represented in at least two photos one has to calculate the three-dimensional object coordinates. If this task is fulfilled, it is possible to digitize points, lines and areas for map production, or to calculate distances, areas, volumes, slopes and much more.
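The ray intersection at the heart of this task can be sketched compactly. Below is a minimal sketch in Python; the camera positions and ray directions are invented for illustration (in practice they come from the orientation procedures described later). Because measured rays rarely meet exactly, the object point is computed as the least-squares point closest to all rays.

    import numpy as np

    def intersect_rays(centers, directions):
        """Least-squares point closest to all rays x = c_i + t * d_i."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for c, d in zip(centers, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
            A += P
            b += P @ c
        return np.linalg.solve(A, b)

    # Two camera positions 600 m apart, 1000 m above ground; both rays
    # aim at the same ground point (invented numbers).
    centers = [np.array([0.0, 0.0, 1000.0]), np.array([600.0, 0.0, 1000.0])]
    directions = [np.array([100.0, 200.0, -1000.0]),
                  np.array([-500.0, 200.0, -1000.0])]
    print(intersect_rays(centers, directions))  # -> approx. [100. 200. 0.]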

When do we need photogrammetry?


The first idea that comes to one's mind in association with measuring distances, areas and volumes is most likely a ruler or a foot rule. However, there are situations where this does not work, like one of the following: the object itself doesn't exist any more (but photos of the object are preserved), or the object cannot be reached (for example, areas far away or in countries without adequate infrastructure, which still can be photographed). Furthermore, photogrammetry is a perfect option for measuring "easily transformed" objects like liquids, sand or clouds, as it avoids physical contact. In addition, photogrammetry enables one to measure fast moving objects, for instance running or flying animals, or waves. In industry, high-speed cameras with simultaneous activation are used to get data about deformation processes (like car crash tests). Comparing photogrammetry with laser scanning techniques, which are widely used today both for generating terrain models and, in the close-range case, for acquiring large amounts of 3D point data (dense point clouds), one could note the following. The advantage of laser scanning is that the object can be low textured, a situation where photogrammetric matching techniques often fail. On the other hand, laser scanning cannot be used for fast moving objects. Moreover, laser scanning is time consuming and still very expensive compared with photogrammetric methods. Therefore, these methods may be considered complementary to each other.

Types of photogrammetry
Photogrammetry can be classified in a number of ways, but one standard method is to split the field based on camera location during photography. On this basis we have Aerial Photogrammetry and Close-Range Photogrammetry. In Aerial Photogrammetry the camera is mounted on an aircraft and is usually pointed vertically towards the ground. Multiple overlapping photos of the ground are taken as the aircraft flies along a flight path. In Close-Range Photogrammetry the camera is close to the subject and is typically hand-held or on a tripod. Usually this type of photogrammetry work is non-topographic: the output is not topographic products like terrain models or topographic maps, but drawings and 3D models. Everyday cameras are used to model buildings, engineering structures, vehicles, forensic and accident scenes, film sets, etc.

Short history
In fact, the development of photogrammetry reflects the general development of science and technology. Technological breakthroughs like the inventions of photography, airplanes, computers and electronics determined four major stages in the history of the science.

1. The invention of photography by L. Daguerre and N. Niepce in 1839 laid the grounds for photogrammetry to originate. The first phase of development (till the end of the XIXth century) was a period for pioneers to study an absolutely new field and to formulate the first methods and principles. The greatest achievements were made in terrestrial and balloon photogrammetry.

2. The second turning point was the invention of stereophotogrammetry (based on stereoscopic viewing, see the Main task of photogrammetry section) by C. Pulfrich (1901). During the First World War airplanes and cameras became operational, and just several years later the main principles of aerial survey were formulated. In fact, analog rectification and stereoplotting instruments, based on mechanical theory, were already known in those days, yet the amount of computation was prohibitive for numerical solutions. Not surprisingly, von Gruber called photogrammetry of the period 'the art of avoiding computations'.

3. The third phase started with the advent of the computer. The 1950s saw the birth of analytical photogrammetry, with matrix algebra forming the basis. For the first time a serious attempt was made to employ adjustment theory to photogrammetric measurements, yet the first operational computer programs became available only several years later. Brown developed the first block adjustment program based on bundles in the late sixties. As a result, the accuracy performance of aerial triangulation improved by a factor of ten. Apart from aerial triangulation, the analytical plotter is another major invention of the third generation.

4. The fourth generation, digital photogrammetry, emerged due to the invention of the digital photo and the availability of storage devices which permit rapid access to digital imagery. With hardware support from special CPUs and GPUs that speed up image data processing, digital photogrammetry has taken the leading position in the field.

Image sources
Analogue and digital cameras
The development of photogrammetry is closely connected with that of aviation and photography. For more than 100 years, photos have been taken on glass plates or film material (negative or positive). Despite the fact that such cameras are still in use, one has to admit that today we are living in the age of digital photography. Unlike traditional cameras that use film to capture and store an image, digital cameras use a solid-state device called an image sensor. These fingernail-sized silicon chips contain millions of photosensitive diodes called photosites. In the brief flickering instant that the shutter is open, each photosite records the intensity or brightness of the light that falls on it by accumulating a charge; the more light, the higher the charge. The brightness recorded by each photosite is then stored as a set of numbers that can be used to set the color and brightness of dots on the screen or ink on the printed page to reconstruct the image. The chief advantage of digital cameras over classical film-based cameras is the instant availability of images for further processing and analysis. This is essential in real-time applications (e.g. robotics, certain industrial applications, bio-mechanics, etc.). Another advantage is the increased spectral flexibility of digital cameras.

Digital cameras have been used for special photogrammetric applications since the early seventies. However, the vidicon-tube cameras available at that time were not very accurate because the imaging tubes were not stable. This disadvantage was eliminated with the appearance of solid-state cameras in the early eighties. The charge-coupled device provides high stability and is therefore the preferred sensing device in today's digital cameras.

Metric and digital consumer cameras


Metric cameras. In principle, specific photogrammetric cameras (also simply called metric cameras) work the same way as amateur cameras. The differences result from the high quality demands which the former must fulfill, first of all regarding high-precision optics and mechanics. Metric cameras are usually grouped into aerial cameras and terrestrial cameras. Aerial cameras are also called cartographic cameras. Panoramic cameras are examples of non-metric aerial cameras. The lens system of aerial cameras is constructed as a unit with the camera body; no lens change or "zoom" is possible, which provides high stability and a good lens correction. The focal length is fixed, and the cameras have a central shutter. Furthermore, aerial cameras use a large film format: while a size of 24 by 36 mm is typical for amateur cameras, aerial cameras normally use a size of 230 by 230 mm. As a result, the values of "wide angle", "normal" and "telephoto" focal lengths differ from those widely known; for example, a wide-angle aerial camera has a focal length of about 153 mm, a normal one a focal length of about 305 mm. Similarly, for close-range applications special cameras were developed with a medium or large film format and fixed lenses.

Digital consumer cameras. Nowadays digital consumer cameras have reached a high technical standard and good geometric resolution, so they can be successfully used for numerous photogrammetric tasks. The differences in construction between metric and consumer cameras lie, in general, in the quality and stability of the camera body and the lens. Furthermore, consumer cameras usually have a zoom ("vario") lens with larger distortions, which are not constant but vary, for instance, with the focal length, so it is difficult to correct them by calibration. Having decided to purchase a digital camera to use for photogrammetry, it is useful to take the following remarks into account:

1. General: It should be possible to set the parameters (focal length, focus, exposure time and f-number) manually, at least as an option.

2. Resolution (number of pixels): Decisive is the real (physical), not an interpolated, resolution. Generally, the higher the number of pixels the better, but not at any price. Small chips with a large number of pixels have a very small pixel size and are not very light sensitive; furthermore, the signal-to-noise ratio is worse. One will encounter this especially at higher ISO values (200 and more) and in dark parts of the image.

3. Focal length range (zoom): Decisive is the optical, not the digital (interpolated) range.

4. Distance setting (focus): It should be possible to deactivate the autofocus. If the camera has a macro option, it can also be used for small objects.

5. Exposure time / f-number: The maximum aperture should not be less than 1:2.8, and the exposure time should cover a range of at least 1 ... 1/1000 seconds.

6. Image formats: The digital images are stored in a customary format like JPEG or TIFF. Important: the image compression rate must be selectable or, even better, the compression can be switched off entirely, to minimize the loss of quality.

7. Others: Sometimes a tripod thread, a remote release and an adapter for an external flash are useful.

Camera calibration
During the process of camera calibration, the interior orientation of the camera is determined. The interior orientation data describe the metric characteristics of the camera needed for photogrammetric processes. There are several ways to calibrate a camera. After assembling the camera, the manufacturer performs the calibration under laboratory conditions. Cameras should be recalibrated once in a while, because the stress caused by the temperature and pressure differences experienced by an airborne camera may change some of the interior orientation elements. Laboratory calibrations are also performed by specialized government agencies. In in-flight calibration, a test field with targets of known positions is photographed. The photo coordinates of the targets are then precisely measured and compared with the control points. The interior orientation is found by a least-squares adjustment. The main purpose of interior orientation is to define the position of the perspective center and the radial distortion curve. Modern aerial cameras are virtually distortion free. Thus, a good approximation for the interior orientation is to assume that the perspective center is at a certain distance c (calculated during camera calibration) from the fiducial center.

Classification of aerial photographs


Aerial photography is the basic data source for making maps by photogrammetric means. Many factors determine the quality of aerial photography; first of all, these are the design and quality of the lens system, and the weather conditions and sun angle during the photo flight. Aerial photographs are usually classified according to the orientation of the camera axis, the focal length of the camera, and the spectral sensitivity.

Orientation of camera axis

True vertical photograph: a photograph with the camera axis perfectly vertical (identical to the plumb line through the exposure center). Such photographs hardly exist in reality.

Near vertical photograph: a photograph with the camera axis nearly vertical. The deviation from the vertical is called tilt. Gyroscopically controlled mounts provide stability of the camera so that the tilt is usually less than two to three degrees.

Oblique photograph: a photograph with the camera axis tilted between the vertical and the horizontal. A high oblique photograph is tilted so much that the horizon is visible on the photograph; a low oblique does not show the horizon. The total area photographed with obliques is much larger than that of vertical photographs.

Angular coverage

The angular coverage is a function of the focal length and the format size. Standard focal lengths and associated angular coverages are summarized in Table 1.

                          super-wide   wide-angle   intermediate   normal-angle   narrow-angle
    focal length [mm]         85          157            210            305            610
    angular coverage [°]     119           94             75             56             29

Table 1: Summary of photography with different angular coverage (for 9" × 9" format size).
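The coverage values in Table 1 can be checked, at least approximately, from the geometry of the central projection: the angle subtended by the format diagonal d at the projection center is 2·atan(d / (2c)). A short sketch, assuming the 230 by 230 mm format mentioned above:

    import math

    d = 230.0 * math.sqrt(2.0)  # diagonal of a 230 x 230 mm photograph [mm]
    for c in (85.0, 157.0, 210.0, 305.0, 610.0):
        coverage = 2.0 * math.degrees(math.atan(d / (2.0 * c)))
        print(f"c = {c:5.0f} mm -> coverage = {coverage:5.1f} deg")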

Spectral sensitivity

panchromatic black and white;
color (originally color photography was mainly used for interpretation purposes; recently, however, color is increasingly being used for mapping applications as well);
infrared black and white (since infrared is less affected by haze, it is used in applications where weather conditions may not be as favorable as for mapping missions);
false color (particularly useful for interpretation, mainly for analyzing vegetation (e.g. crop disease) and water pollution, e.g. a Green, Red, NIR single-sensor camera; see multispectral cameras).

Fiducial marks
Fiducial marks are fixed points in the image plane that serve as reference positions visible in the image. They are useful for setting up the image coordinate system in the case of analog photography. Generally they are several fixed points on the sides of an image that define the fiducial center as the intersection of the lines joining opposite fiducial marks. The fiducial center is used as the origin of the image coordinate system.

Image geometry modeling


Note: the information below is relevant to frame photography (photographs exposed in one instant) under the central projection assumption.

Object, camera and image spaces


To fulfill the task of geometric reconstruction it is necessary to represent points in the object coordinate system, i.e. a 3D local coordinate system related to the targeted object, or a geographical coordinate system.

At the same time, the input data (points on the photos) are referenced in the image coordinate system, i.e. a 2D sensor-related coordinate system with its origin at the position of pixel (0,0) (for digital frame cameras) or at the fiducial center (for analog images). Finally, a third space is determined by the camera itself: the camera coordinate system has its origin at the projection center (the center of the lens). Thus, certain relations have to be defined between these three spaces to allow photogrammetric procedures. Camera modeling, with intrinsic and extrinsic parameters being introduced, solves this problem.

Camera modeling
As the position of the camera in space varies much more quickly than the geometry and physics of the camera, it is logical to distinguish between two sets of parameters in modeling:

1) Extrinsic parameters describe the position of the camera in space. They are the six parameters of the exterior orientation: the three coordinates of the projection center and the three rotation angles around the three camera axes. The parameters of the exterior orientation may be measured directly (with GPS and IMU systems); however, they are usually also estimated during photogrammetric procedures.

2) Intrinsic parameters are all parameters necessary to model the geometry and physics of the camera. They allow one to determine the direction of the projection ray to an object point, given an image point and the exterior orientation data. The intrinsic parameters describe the interior orientation of the camera, which is determined by camera calibration.

For a pinhole camera, the projective mapping from 3D real-world coordinates (x, y, z) (object space) to 2D pixel coordinates (u, v) (image space) is simulated by the following linear model (in homogeneous coordinates notation):

(u, v, 1)^T = A [R T] (x, y, z, 1)^T

where

A = | fx  s   u0 |
    | 0   fy  v0 |
    | 0   0   1  |

is the intrinsic matrix containing 5 intrinsic parameters: fx, fy, the focal length in terms of pixels; u0, v0, the principal point coordinates; and s, the skew coefficient between the x and y axes. Other intrinsic camera parameters, such as lens distortion, are also important, but cannot be covered by the linear camera model. R and T are the extrinsic camera parameters: the rotation matrix and the translation vector, respectively, which denote the transformation from 3D object coordinates to 3D camera coordinates.
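A minimal sketch of this pinhole model in Python; all numbers are invented for illustration:

    import numpy as np

    fx, fy = 1200.0, 1200.0   # focal length in pixels
    u0, v0 = 640.0, 480.0     # principal point
    s = 0.0                   # skew coefficient
    A = np.array([[fx, s, u0],
                  [0., fy, v0],
                  [0., 0., 1.]])

    R = np.eye(3)                     # camera aligned with the object axes
    T = np.array([0.0, 0.0, 10.0])    # object origin 10 units in front of the camera

    def project(xyz):
        """(u, v, 1)^T ~ A [R T] (x, y, z, 1)^T"""
        cam = R @ xyz + T             # 3D object -> 3D camera coordinates
        uvw = A @ cam                 # camera -> homogeneous pixel coordinates
        return uvw[:2] / uvw[2]

    print(project(np.array([1.0, 0.5, 0.0])))  # -> [760. 540.]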

Orientation angles

To denote the camera orientation, two different sets of angles are used: the ω, φ, κ and the yaw, pitch, roll triplets. Both sets define the transformation between real-world and camera coordinates; the difference comes from how the georeferenced system is defined. If the reference system is a UTM projection, then the orientation parameters are ω, φ, κ; if the tangent plane is involved, yaw, pitch, roll are the relevant parameters. Most airborne measuring systems work with and save yaw, pitch, roll angles, while GIS systems operate with omega, phi, kappa angles. In the case of aerial photos, the values of φ and ω will normally be near zero. If they are exactly zero, it is a so-called nadir photo. In practice this will never happen, due to wind drift and small movements of the aircraft.

Some geometric principles


Photo scale

Fig. 1: Flight height, flight altitude and aerial photo scale.

The representative fraction is used for scale expressions, in the form of a ratio, e.g. 1 : 5,000. As illustrated in Fig. 1, the scale of a near vertical photograph can be approximated by

1 / mb = c / H

where mb is the photograph scale number, c the calibrated focal length, and H the flight height above mean ground elevation. Note that the flight height H refers to the average ground elevation. If it is given with respect to the datum, it is called the flight altitude HA, with HA = H + h. The photograph scale varies from point to point. For example, the scale for a point P can be determined as the ratio of the image distance CP' to the object distance CP:

1 / mP = CP' / CP

Clearly, this equation takes into account any tilt and any topographic variations of the surface (relief).
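As a quick numeric check of the scale formula (values chosen for illustration):

    # 1/mb = c/H: a 153 mm wide-angle camera flown 1530 m above mean
    # ground elevation yields a photo scale of 1 : 10,000.
    c = 0.153            # calibrated focal length [m]
    H = 1530.0           # flight height above mean ground elevation [m]
    mb = H / c           # photograph scale number
    print(f"photo scale = 1 : {mb:.0f}")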

Relief displacement

Fig. 2: Relief displacement.

The effect of relief does not only cause a change in scale; it can also be considered as a component of image displacement (see Fig. 2). Suppose point T is on top of a building and point B at its bottom. On a map, both points have identical X, Y coordinates; on the photograph, however, they are imaged at different positions, namely at T' and B'. The distance d between the two photo points is called relief displacement, because it is caused by the elevation difference dh between T and B. The magnitude of the relief displacement for a true vertical photograph can be determined by the following equation:

dr = rT · dh / H = rB · dh / (H - dh)

where dh is the elevation difference of the two points on a vertical, and rT, rB are the radial distances of T' and B' from the nadir point. The elevation h of a vertical object then follows as h = dr · H / r. The direction of relief displacement is radial with respect to the nadir point, independent of camera tilt.
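A direct transcription of this equation (assumed units: radial distance in millimeters on the photo, heights in meters):

    def relief_displacement(r_top_mm, dh_m, H_m):
        """dr = rT * dh / H for a true vertical photograph."""
        return r_top_mm * dh_m / H_m

    # A 30 m building imaged 80 mm from the nadir point, flown at 1500 m:
    print(relief_displacement(80.0, 30.0, 1500.0))  # -> 1.6 mm of displacement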

How do flight height and camera focal length influence the displacement? Let the goal be to take a photo of a house, filling the complete image area. There are several possibilities to do that: take the photo from a short distance with a wide-angle lens (like camera position 1 in the figure), or from a far distance with a small-angle lens (telephoto, like camera position 2), or from any position in between or outside. The results will differ in the following way: the smaller the camera-to-object distance and the wider the lens angle, the greater are the displacements due to the central perspective; vice versa, the greater the camera-to-object distance and the smaller the lens angle, the smaller are the displacements.

In an extreme (theoretical) case, if the camera could be infinitely far away from the object and if the angle were as small as possible ("super telephoto"), the projection rays would be nearly parallel and the displacements near zero. This is similar to the situation of images taken by a satellite orbiting some hundreds of kilometres above ground, where we have nearly parallel projection rays, yet influences come from the earth curvature. So, at first glance, it seems that if one would like to transform a single aerial image to a given map projection, it would be best to take the image from as high as possible with a small-angle camera, to have the lowest displacements. Yet the radial-symmetric displacements are a prerequisite for viewing and measuring image pairs stereoscopically, which is why in photogrammetric practice most aerial as well as terrestrial photos are taken with a wide-angle camera, showing relatively high relief-dependent displacements.

Relative camera positions

Fig. 3: Camera positions parallel (left) and convergent (right).

To get three-dimensional coordinates of object points, one needs at least two images of the object taken from different positions. A point P(x, y, z) is calculated as the intersection of the two rays [P'P] and [P''P]. One can easily imagine that the accuracy of the result depends, among other things, on the angle between the two rays: the smaller this angle, the lower the accuracy. Every measurement of the image points P' and P'' will have more or less small errors, and even very small errors here will lead to a large error, especially in z, when the angle is very small. This is one more reason why wide-angle cameras are preferred in photogrammetry (see Fig. 3). Let A be the distance between the cameras and the object and B the distance between the two cameras (or camera positions, when only a single camera is used); then the angle between the two projection rays (continuous lines) depends on the ratio A/B, in the aerial case called the height-to-base ratio. Obviously, it is possible to improve the accuracy of the calculated coordinates P(x, y, z) by increasing the distance B (also called the base). If the overlap area then becomes too small, convergent camera positions may be used ("squinting", in contrast to the parallel geometry of human vision); the disadvantage is that additional perspective distortions appear in the images. Note: the parallel (aerial) case is good for human stereo viewing and automatic surface reconstruction; the convergent case often leads to a higher precision, especially in the z direction.

Main photogrammetric procedures


Orientation of a stereo pair
The application of single photographs in photogrammetry is limited because they cannot be used for object space reconstruction: the depth information is lost when taking an image.

Even though the exterior orientation elements may be known, it will not be possible to determine ground points unless the scale factor of every bundle ray is known. This problem is solved by exploiting stereopsis, that is, by using a second photograph of the same scene taken from a different position. If the scene is static, the same camera may be used to obtain the two images, one after the other; otherwise, the two images must be taken simultaneously, which requires synchronizing two different cameras. Two photographs with different camera positions that show the same area, at least in part, are called a stereo pair. The images in general have different interior orientations and different exterior orientations, and even if corresponding points (images of the same object point) are measured on both images, their coordinates will be known in different systems, thus preventing the determination of the 3D coordinates of the object point. Consequently, a mathematical model of the stereo pair and a uniform coordinate system for the image pair (the model coordinate system) are needed. To define a stereo pair model, supposing that the camera(s) is (are) calibrated and the interior orientation parameters are known, one needs to determine:

1. the relative orientation of the two cameras;
2. the absolute orientation of the image model.

Relative orientation

The relative orientation of the two cameras is fixed by the following parameters:

1. the rotation of the second camera relative to the first (three parameters: the three relative orientation angles);
2. the direction of the base line connecting the two projection centers (two additional parameters; no constraint exists for the shift of the second camera toward or away from the first camera, so the length of the base line remains undetermined).

Therefore, the relative orientation of two calibrated cameras is characterized by five independent parameters. They can be determined if five corresponding image points are given. An object can be reconstructed from images of calibrated cameras only up to a spatial similarity transformation; the result is a photogrammetric model.

Absolute orientation

The orientation of the photogrammetric model in space is called absolute orientation. This is actually the task of applying a 7-parameter similarity transformation. The transformation can only be solved if a priori information about some of the parameters is introduced, which is most commonly done with control points. A control point is an object point with known real-world coordinates. A point with all three coordinates known is called a full control point. If only X and Y are known, then we have a planimetric control point. With an elevation control point, only the Z coordinate is known.

How many control points are needed? In order to calculate 7 parameters, at least seven equations must be available; for example, 2 full control points and one elevation control point would render a solution. If more equations (that is, more control points) are available, the parameters can be determined by a least-squares adjustment, the idea being to minimize the discrepancies between the transformed and the available control points.
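A minimal sketch of absolute orientation as such a least-squares problem, in Python with scipy; the model and control coordinates below are invented, and the rotation parameterization is just one common convention:

    import numpy as np
    from scipy.optimize import least_squares

    def rotation(omega, phi, kappa):
        """Rotation matrix built from the three angles (one common convention)."""
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def residuals(p, model_pts, control_pts):
        """7 parameters: scale, three angles, three translations."""
        s, om, ph, ka = p[0], p[1], p[2], p[3]
        t = p[4:7]
        transformed = s * (rotation(om, ph, ka) @ model_pts.T).T + t
        return (transformed - control_pts).ravel()

    # Three full control points give nine equations for seven unknowns.
    model = np.array([[0.1, 0.2, 0.05], [1.1, 0.1, 0.0], [0.4, 1.2, 0.1]])
    control = np.array([[10., 20., 5.], [110., 10., 0.], [40., 120., 10.]])
    p0 = np.array([1.0, 0, 0, 0, 0, 0, 0])  # crude initial values
    sol = least_squares(residuals, p0, args=(model, control))
    print(sol.x)  # estimated scale, angles and translation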

Aerial triangulation
Aerial triangulation (AT, aerotriangulation) is a complex photogrammetric production line. The main tasks to be carried out are the identification of tie points and ground control points, the transfer of these points between homologous image segments and the measurement of their image coordinates. Lastly, the image-to-object space transformation is performed by bundle block adjustment. The transition to digital imagery led to the appearance of the term digital aerial triangulation, which implies the selection, transfer and measurement of image tie points by digital image matching. Digital aerial triangulation is generally associated with automated aerial triangulation, thanks to the potential of the digital approach to be automated.

Bundle adjustment (bundle block adjustment) is the problem of refining a visual reconstruction to produce jointly optimal 3D structure and viewing parameter (camera pose and/or calibration) estimates. Optimal means that the parameter estimates are found by minimizing some cost function that quantifies the model fitting error; jointly means that the solution is simultaneously optimal with respect to both structure and camera variations. The name refers to the 'bundles' of light rays leaving each 3D feature and converging on each camera centre, which are 'adjusted' optimally with respect to both feature and camera positions. Equivalently, unlike independent model methods, which merge partial reconstructions without updating their internal structure, all structure and camera parameters are adjusted together 'in one bundle'. Bundle adjustment is really just a large sparse geometric parameter estimation problem, the parameters being the combined 3D feature coordinates, camera poses and calibrations. A toy numerical sketch is given after the following list of advantages of bundle block adjustment over other adjustment methods:

Flexibility: bundle adjustment gracefully handles a very wide variety of different 3D feature and camera types (points, lines, curves, surfaces, exotic cameras), scene types (including dynamic and articulated models, scene constraints), information sources (2D features, intensities, 3D information, priors) and error models (including robust ones). It has no problems with missing data.

Accuracy: bundle adjustment gives precise and easily interpreted results because it uses accurate statistical error models and supports a sound, well-developed quality control methodology.

Efficiency: mature bundle algorithms are comparatively efficient even on very large problems. They use economical and rapidly convergent numerical methods and make near-optimal use of problem sparseness.
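The toy sketch below (Python with scipy; all data invented) refines one camera pose and a handful of 3D points jointly by minimizing the reprojection error; the first camera and one ground point are held fixed to remove the gauge freedom. A real bundle adjuster would exploit the sparse structure of the Jacobian instead of a dense generic solver, and would use a proper rotation parameterization rather than the small-angle matrix used here.

    import numpy as np
    from scipy.optimize import least_squares

    F = 1000.0  # focal length in pixels, assumed known and identical

    def rot(w):
        """Linearized (small-angle) rotation matrix; adequate for a toy example."""
        wx, wy, wz = w
        return np.array([[1, -wz, wy], [wz, 1, -wx], [-wy, wx, 1]])

    def project(points, pose):
        cam = points @ rot(pose[:3]).T + pose[3:]
        return F * cam[:, :2] / cam[:, 2:3]

    def residuals(params, obs1, obs2, fixed_pt):
        pose2 = params[:6]
        pts = np.vstack([fixed_pt, params[6:].reshape(-1, 3)])
        r1 = project(pts, np.zeros(6)) - obs1   # camera 1 serves as the datum
        r2 = project(pts, pose2) - obs2
        return np.concatenate([r1.ravel(), r2.ravel()])

    rng = np.random.default_rng(0)
    true_pts = rng.uniform([-2, -2, 8], [2, 2, 12], (6, 3))
    true_pose2 = np.array([0.02, -0.01, 0.03, 1.0, 0.0, 0.5])
    obs1 = project(true_pts, np.zeros(6)) + rng.normal(0, 0.3, (6, 2))  # noisy measurements
    obs2 = project(true_pts, true_pose2) + rng.normal(0, 0.3, (6, 2))

    x0 = np.concatenate([np.zeros(6), (true_pts[1:] + 0.05).ravel()])   # perturbed start
    sol = least_squares(residuals, x0, args=(obs1, obs2, true_pts[0]))
    print(sol.x[:6])  # recovered pose of the second camera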

Systematic error corrections


Correction for lens distortion

Fig. 4: Barrel-shaped (left) and pincushion-shaped (right) distortions.

Lens irregularities and aberrations result in some image displacement. A typical effect with wide-angle lenses is barrel-shaped distortion: straight lines near the image borders appear bent towards the borders. This effect is usually smaller or zero at medium focal lengths and may turn into the opposite form (pincushion-shaped) with telephoto lenses (see Fig. 4). Besides these so-called radial-symmetric distortions, which have their maximum at the image borders, there are further systematic effects (affine, shrinking) and also non-systematic displacements. The distortions depend, among other things, on the focal length and the focus. To minimize the resulting geometric errors, efforts have been made to find suitable mathematical models (one of the most widely used is the Brown model). In most cases the radial-symmetric part has the largest effect of all; consequently, it is the main object of correction.

Distortion values are determined during the process of camera calibration. They are usually listed in tabular form, either as a function of the radius or of the angle at the perspective center. For aerial cameras the distortion values are very small, hence it is sufficient to interpolate the distortion linearly. Suppose one wants to determine the distortion for an image point (xp, yp). The radius is

rp = sqrt(xp² + yp²)

From the table we obtain the distortion dri for ri < rp and drj for rj > rp. The distortion for rp is linearly interpolated:

drp = dri + (drj - dri) · (rp - ri) / (rj - ri)

The corrections in the x- and y-directions are

drx = (xp / rp) · drp
dry = (yp / rp) · drp

Finally, the photo coordinates are corrected as follows:

xp' = xp - drx = xp · (1 - drp / rp)
yp' = yp - dry = yp · (1 - drp / rp)

The radial distortion can also be represented by an odd-power polynomial of the form

dr = p0·r + p1·r³ + p2·r⁵ + ...

The coefficients pi are found by fitting the polynomial curve to the distortion values. This is a linear observation equation; for every distortion value, one observation equation is obtained. In order to avoid numerical problems (an ill-conditioned normal equation system), the degree of the polynomial should not exceed nine.
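A small sketch of the table-based correction in Python; the calibration table below is invented, and np.interp performs exactly the linear interpolation written above:

    import numpy as np

    # Invented calibration table: radius [mm] -> radial distortion [micrometers]
    radius = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
    distortion_um = np.array([0.0, 1.2, 2.8, 4.1, 4.9, 5.2])

    def correct(xp, yp):
        """Correct photo coordinates [mm] for radial distortion (rp > 0 assumed)."""
        rp = np.hypot(xp, yp)
        drp = np.interp(rp, radius, distortion_um) * 1e-3  # um -> mm
        factor = 1.0 - drp / rp                            # xp(1 - drp/rp), yp(1 - drp/rp)
        return xp * factor, yp * factor

    print(correct(30.0, 40.0))  # rp = 50 mm, interpolated between the 40 and 60 mm entries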

Correction for refraction

Fig. 5: Correction for refraction.

Fig. 5 shows how an oblique light ray is refracted by the atmosphere. According to Snell's law, a light ray is refracted at the interface of two different media, and the density differences in the atmosphere act as such different media. The refraction causes the image to be displaced outwardly, quite similar to a positive radial distortion. The radial displacement caused by refraction can be computed by

dr = K · (r + r³ / c²)

K = [2410 H / (H² - 6H + 250) - 2410 h² / ((h² - 6h + 250) · H)] · 10⁻⁶

where c is the calibrated focal length. These equations are based on a model atmosphere defined by the US Air Force. The flying height H and the ground elevation h must be in units of kilometers.
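A direct transcription of these equations (assumption: r and c in millimeters give dr in millimeters; H and h in kilometers, as stated above):

    def refraction_displacement(r_mm, c_mm, H_km, h_km):
        K = (2410.0 * H_km / (H_km**2 - 6.0 * H_km + 250.0)
             - 2410.0 * h_km**2 / ((h_km**2 - 6.0 * h_km + 250.0) * H_km)) * 1e-6
        return K * (r_mm + r_mm**3 / c_mm**2)

    # e.g. a wide-angle camera, image point near the border:
    print(refraction_displacement(r_mm=140.0, c_mm=153.0, H_km=3.0, h_km=0.5))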

Correction for earth curvature

Fig. 6: Correction for earth curvature.

The mathematical derivation of the relationships between image and object space is based on the assumption that 3D Cartesian coordinate systems are employed for both spaces. Since ground control points may not be directly available in such a system, they must first be transformed, say from a State Plane coordinate system, to a Cartesian system. The X and Y coordinates of a State Plane system are Cartesian, but the elevations are not. Fig. 6 shows the relationship between elevations above a datum (h) and elevations in the 3D Cartesian system. If we approximate the datum by a sphere of radius R = 6372.2 km, then the radial displacement can be computed by

dr = r³ · (H - h) / (2 c² R)

Strictly speaking, the correction of photo coordinates due to earth curvature is not a refinement of the mathematical model. It is much better to eliminate the influence of earth curvature by transforming the object space into a 3D Cartesian system before establishing relationships with the ground system. This is always possible, except when compiling a map. A map generated on an analytical plotter, for example, is most likely plotted in a State Plane coordinate system; that is, the elevations refer to the datum and not to the XY plane of the Cartesian coordinate system. It would be quite awkward to produce the map in the Cartesian system and then transform it to the target system. Therefore, during map compilation, the photo coordinates are "corrected" so that conjugate bundle rays intersect in object space at positions related to the reference sphere.
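The displacement formula in code form (same unit assumptions as for the refraction correction: r and c in millimeters, heights and the sphere radius in kilometers):

    EARTH_RADIUS_KM = 6372.2  # radius of the datum sphere used above

    def earth_curvature_displacement(r_mm, c_mm, H_km, h_km):
        """dr = r^3 (H - h) / (2 c^2 R); r, c in mm and H, h, R in km give dr in mm."""
        return r_mm**3 * (H_km - h_km) / (2.0 * c_mm**2 * EARTH_RADIUS_KM)

    print(earth_curvature_displacement(r_mm=140.0, c_mm=153.0, H_km=3.0, h_km=0.5))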

Length and angle units


Normally, metric units according to the international standard are used for coordinates and distances in photogrammetry. In several cases, however, non-metric units can also be found:

Foot ( ' ): sometimes used to give the terrain height above mean sea level, for example in North American or British topographic maps, or the flying height above ground.
Inch ( " ): for instance used to define the resolution of printers and scanners (dots per inch).

1' = 12" = 30.48 cm; 1" = 2.54 cm; 1 m = 3.281'; 1 cm = 0.394"

Angles are normally given in degrees. In mathematics, radians are also common; in geodesy and photogrammetry, grads are used; in the military, the so-called mils are used. A full circle is: 360 degrees = 400 grads = 2π radians = 6400 mils.
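A tiny helper built on the full-circle equivalence above:

    import math

    FULL_CIRCLE = {"deg": 360.0, "grad": 400.0, "rad": 2.0 * math.pi, "mil": 6400.0}

    def convert_angle(value, src, dst):
        """Convert an angle between the units listed above."""
        return value * FULL_CIRCLE[dst] / FULL_CIRCLE[src]

    print(convert_angle(90.0, "deg", "grad"))  # 100.0
    print(convert_angle(90.0, "deg", "mil"))   # 1600.0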

Glossary
--B--
Base: the distance between the projection centers of neighboring photos.
Block: all images of all strips.
--D--

Datum: a set of parameters and control points used to accurately define the three-dimensional shape of the Earth. The datum is the basis for a planar coordinate system.
--F--
Flight altitude: flight height above the datum.
Flight height: flight height above mean ground elevation.
Fiducial marks: markers built into an aerial camera that register their images on an aerial photograph as fixed reference marks. There are usually four fiducial marks on a photograph; they are used to define the principal point of the photograph.
--I--
Image: the photo in digital representation, either the scanned film or a photo taken directly by a digital camera.
Image coordinates / pixel coordinates: in digital image processing, the expression image coordinates refers to pixel positions (row/column), while in classical photogrammetry it indicates the coordinates transformed to the fiducial mark nominal values. For differentiation, the expression pixel coordinates is sometimes used in the context of digital image processing.
Image refinement: the process of correcting photos for systematic errors, such as radial distortion, refraction and earth curvature.
--M--
Model (stereo model, image pair): two neighboring images within a strip.
Model area: the area covered by the stereo images (image pair).
--O--
Overlaps: an image flight is normally carried out in such a way that the area of interest is photographed strip by strip, the aircraft turning around after every strip, so that the strips are taken in a meander-like sequence. The two images of each model have a longitudinal overlap of approximately 60 to 80% (also called end lap); neighboring strips have a lateral overlap of normally about 30% (also called side lap). This is necessary not only for stereoscopic viewing but also for connecting all images of a block within an aerial triangulation.
--P--

Photo: the original photo on the sensor.
Plumb line: the vertical.
--R--
Relief: the topographic variations of the surface.
Resolution: the minimum distance between two adjacent features, or the minimum size of a feature, which can be detected by photogrammetric data acquisition systems.
--S--
Skew: a transformation of coordinates in which one coordinate is displaced in one direction in proportion to its distance from a coordinate plane or axis.
Stereoplotter: an instrument that lets an operator see two photos at once in a stereo view.
Strip: all overlapping images taken one after another within one flight line.
--T--
Tilt: the deviation of the camera axis from the vertical.

