For this project, you need to achieve 3D navigation on a static 2D image by using the camera
intrinsics and extrinsics. Given a pair of color and depth images captured by a Kinect, your job is
to reconstruct the 3D geometry with texture information using the inverse camera model, then
project the 3D scene into an arbitrary virtual camera and re-render it on a 2D image, as the
example below shows:
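The inverse camera model mentioned above can be sketched roughly as follows. This is a minimal illustration, not the required implementation: the struct and function names are my own, and the pinhole model (back-project with depth, then re-project) is the standard formulation you would adapt to your own calibration parameters.

```cpp
#include <cmath>

// Pinhole camera intrinsics: focal lengths and principal point, in pixels.
// These are placeholders; use your Kinect's actual calibration values.
struct Intrinsics { double fx, fy, cx, cy; };

struct Point3 { double x, y, z; };

// Inverse camera model: back-project pixel (u, v) with depth z into a
// 3D point in the camera coordinate frame.
Point3 backProject(const Intrinsics& K, double u, double v, double z) {
    return { (u - K.cx) * z / K.fx, (v - K.cy) * z / K.fy, z };
}

// Forward camera model: project a camera-frame 3D point onto the image plane.
void project(const Intrinsics& K, const Point3& p, double& u, double& v) {
    u = K.fx * p.x / p.z + K.cx;
    v = K.fy * p.y / p.z + K.cy;
}
```

To render the virtual view, you would additionally transform each back-projected point by the virtual camera's extrinsics (a rotation and translation) before calling the forward projection.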
Both the color and depth images are captured by the Kinect sensor at the same time, so their
pixels are already aligned: for a pixel at (x, y) in the color image, the corresponding depth value
is at (x, y) in the depth image. One thing you need to pay attention to is the data type of each
image. In the color image, each pixel value is stored as an "uchar"; in the depth image, however,
each pixel value is stored as a "short int". So be sure to use the corresponding pointer type to
access the individual pixels.
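One way to keep the element types straight is to hold the two aligned buffers behind typed accessors, as in the sketch below. The struct and function names are hypothetical, and a 3-bytes-per-pixel color layout is assumed; the point is only that depth must be read through a 16-bit pointer, not a "uchar" one.

```cpp
#include <cstdint>
#include <vector>

// Aligned color/depth pair. Color is 8-bit per channel ("uchar"); depth is
// 16-bit ("short int"). Reading depth through an unsigned char pointer
// would split every depth value into two meaningless bytes.
struct RGBD {
    int width, height;
    std::vector<unsigned char> color;  // 3 bytes per pixel, assumed layout
    std::vector<int16_t> depth;        // 1 short per pixel
};

// Because the images are aligned, the same (x, y) indexes both buffers.
int16_t depthAt(const RGBD& img, int x, int y) {
    return img.depth[y * img.width + x];
}

const unsigned char* colorAt(const RGBD& img, int x, int y) {
    return &img.color[(y * img.width + x) * 3];
}
```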
Submission
Please turn in your code (e.g., the .cpp file) and the PDF report through Isidore by the due time.
Make sure to submit them on time; no email submissions are accepted unless you have been
granted an extension.