
National Institute of Technology Karnataka


DIODE SIG NITK-IEEE

HOLOGRAPHIC TELEPRESENCE IN AUGMENTED REALITY

Submitted by:
Nishanth P (Project Lead)

User Point Cloud Extraction


The user's point cloud is extracted using the Point Cloud Library (PCL). This library provides a grabber interface from the OpenNI Grabber Framework, which makes it simple to request data streams from OpenNI-compatible cameras such as the Microsoft Kinect (MSK). The stream of points captured by the camera (MSK) is rendered using the visualization facilities provided by the framework.
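A minimal sketch of this capture loop, closely following PCL's OpenNI Grabber tutorial (the class name and window title are illustrative, not the project's actual code):

```cpp
#include <pcl/io/openni_grabber.h>
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/point_types.h>

// Stream RGBD point clouds from an OpenNI device (e.g. the Kinect)
// and render them with PCL's CloudViewer.
class KinectViewer
{
public:
  KinectViewer () : viewer ("User Point Cloud") {}

  // Called by the grabber every time a new cloud arrives from the camera.
  void cloud_cb (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr &cloud)
  {
    if (!viewer.wasStopped ())
      viewer.showCloud (cloud);
  }

  void run ()
  {
    pcl::Grabber *grabber = new pcl::OpenNIGrabber ();

    // Register the callback for the RGBD stream.
    boost::function<void (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr&)> f =
      boost::bind (&KinectViewer::cloud_cb, this, _1);
    grabber->registerCallback (f);

    grabber->start ();
    while (!viewer.wasStopped ())
      boost::this_thread::sleep (boost::posix_time::seconds (1));
    grabber->stop ();
  }

  pcl::visualization::CloudViewer viewer;
};
```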

Fig 1: Point Cloud (RGBD) visualization from a perspective different from that of the camera (MSK)

Fig 2: Point Cloud (Depth) visualization



The point cloud from the grabber interface is then downsampled using a VoxelGrid filter. The rationale behind downsampling is speed: fewer points mean less time spent within the segmentation loop. An ExtractIndices filter is then used to extract a subset of points from the point cloud based on the indices output by a segmentation algorithm. After extracting the indices, the cluster of points belonging to the user is obtained using Euclidean cluster extraction. A clustering method needs to divide an unorganized point cloud model P into smaller parts so that the overall processing time for P is significantly reduced. A simple data clustering approach in a Euclidean sense can be implemented by making use of a 3D grid subdivision of the space using fixed-width boxes, or more generally, an octree data structure. This representation is very fast to build and is useful when either a volumetric representation of the occupied space is needed, or the data in each resultant 3D box (or octree leaf) can be approximated with a different structure. Here a Kd-tree structure is used for finding the nearest neighbors. The algorithmic steps are as follows (a code sketch is given after the list):

1. Create a Kd-tree representation for the input point cloud dataset P.
2. Set up an empty list of clusters C, and a queue Q of the points that need to be checked.
3. For every point \boldsymbol{p}_i \in P, perform the following steps:
   - add \boldsymbol{p}_i to the current queue Q;
   - for every point \boldsymbol{p}_i \in Q:
     - search for the set P^i_k of point neighbors of \boldsymbol{p}_i in a sphere with radius r < d_{th};
     - for every neighbor \boldsymbol{p}^k_i \in P^i_k, check whether the point has already been processed, and if not, add it to Q;
   - when the list of all points in Q has been processed, add Q to the list of clusters C, and reset Q to an empty list.
4. The algorithm terminates when all points \boldsymbol{p}_i \in P have been processed and are now part of the list of point clusters C.
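A condensed sketch of this pipeline using PCL's VoxelGrid, EuclideanClusterExtraction, and ExtractIndices classes. The leaf size, cluster tolerance d_{th}, and cluster-size limits below are illustrative values, not the project's tuned parameters:

```cpp
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/search/kdtree.h>
#include <pcl/point_types.h>

typedef pcl::PointXYZRGBA PointT;

std::vector<pcl::PointCloud<PointT>::Ptr>
extractClusters (const pcl::PointCloud<PointT>::ConstPtr &input)
{
  // 1. Downsample with a VoxelGrid filter: fewer points, faster segmentation.
  pcl::PointCloud<PointT>::Ptr downsampled (new pcl::PointCloud<PointT>);
  pcl::VoxelGrid<PointT> vg;
  vg.setInputCloud (input);
  vg.setLeafSize (0.01f, 0.01f, 0.01f);   // 1 cm voxels (illustrative)
  vg.filter (*downsampled);

  // 2. Build a Kd-tree over the downsampled cloud for neighbor searches.
  pcl::search::KdTree<PointT>::Ptr tree (new pcl::search::KdTree<PointT>);
  tree->setInputCloud (downsampled);

  // 3. Euclidean cluster extraction: group points whose neighbors lie
  //    within the distance tolerance d_th.
  std::vector<pcl::PointIndices> cluster_indices;
  pcl::EuclideanClusterExtraction<PointT> ec;
  ec.setClusterTolerance (0.02);          // d_th = 2 cm (illustrative)
  ec.setMinClusterSize (1000);
  ec.setMaxClusterSize (100000);
  ec.setSearchMethod (tree);
  ec.setInputCloud (downsampled);
  ec.extract (cluster_indices);

  // 4. Copy each cluster's points out with ExtractIndices.
  std::vector<pcl::PointCloud<PointT>::Ptr> clusters;
  pcl::ExtractIndices<PointT> extract;
  extract.setInputCloud (downsampled);
  for (size_t i = 0; i < cluster_indices.size (); ++i)
  {
    pcl::PointIndices::Ptr idx (new pcl::PointIndices (cluster_indices[i]));
    pcl::PointCloud<PointT>::Ptr cluster (new pcl::PointCloud<PointT>);
    extract.setIndices (idx);
    extract.filter (*cluster);
    clusters.push_back (cluster);
  }
  return clusters;
}
```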

Using a threshold on the size of each point cloud and a z-threshold on its centroid, the point cloud corresponding to the user is identified and rendered using the visualizer; a selection sketch is given below.
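A sketch of this selection step, reusing the PointT typedef from the previous sketch; the size and z limits here are hypothetical placeholders for the project's calibrated thresholds:

```cpp
#include <pcl/common/centroid.h>

// Pick the cluster whose point count and centroid depth (z) fall within
// the user thresholds. Limits below are illustrative placeholders.
pcl::PointCloud<PointT>::Ptr
selectUserCluster (const std::vector<pcl::PointCloud<PointT>::Ptr> &clusters)
{
  for (size_t i = 0; i < clusters.size (); ++i)
  {
    Eigen::Vector4f centroid;
    pcl::compute3DCentroid (*clusters[i], centroid);

    const bool big_enough = clusters[i]->points.size () > 5000;        // size threshold
    const bool in_range   = centroid[2] > 0.5f && centroid[2] < 2.0f;  // z-threshold (m)
    if (big_enough && in_range)
      return clusters[i];   // assume the first match is the user
  }
  return pcl::PointCloud<PointT>::Ptr ();  // no user found
}
```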

Fig 3: User cluster extraction after downsampling and Euclidean cluster extraction.

Fig 4: User cluster after complete extraction.

AR and OpenGL Rendering


The vector of extracted points is then passed to Vertex Buffer Objects to be rendered as OpenGL points on an augmented-reality marker. A Vertex Buffer Object (VBO) is an OpenGL feature that provides methods for uploading data (vertices, normal vectors, colors, etc.) to the video device for non-immediate-mode rendering. VBOs offer substantial performance gains over immediate-mode rendering, primarily because the data resides in video device memory rather than system memory and so can be rendered directly by the video device. A frame rate of 40 fps was achieved using VBOs on an ATI Radeon 4330-series card with OpenGL 2.1.

ARToolKit is a computer tracking library for the creation of strong augmented reality applications that overlay virtual imagery on the real world. To do this, it uses video tracking to calculate the real camera's position and orientation relative to square physical markers in real time. Once the real camera's position is known, a virtual camera can be positioned at the same point and 3D computer graphics models drawn exactly overlaid on the real marker. The orientation of the marker is processed by ARToolKit, which sets the corresponding transformation matrix for the OpenGL rendering of the point cloud on it. ARToolKit's tracking works as follows (a rendering sketch is given after the list):

1. The camera captures video of the real world and sends it to the computer.
2. Software on the computer searches through each video frame for any square shapes.
3. If a square is found, the software calculates the position of the camera relative to the black square.
4. Once the position of the camera is known, a computer graphics model is drawn from that same position.
5. This model is drawn on top of the video of the real world and so appears stuck to the square marker.
6. The final output is shown back on the handheld display, so when users look through the display they see graphics overlaid on the real world.

Fig 5 summarizes these steps. ARToolKit performs this camera tracking in real time, ensuring that the virtual objects always appear overlaid on the tracking markers.
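A sketch of this rendering path under the ARToolKit 2.x C API and the OpenGL 2.1 fixed-function pipeline: the interleaved point data is uploaded to a VBO each frame, and the marker transform from arGetTransMat() is converted with argConvGlpara() and loaded as the modelview matrix before drawing. The buffer layout and helper names are assumptions; projection setup and marker-detection boilerplate are omitted:

```cpp
#include <GL/glew.h>
#include <AR/ar.h>
#include <AR/gsub.h>
#include <vector>

// Upload the extracted user points into a Vertex Buffer Object so the
// GPU can draw them without per-vertex CPU calls (non-immediate mode).
// The layout is assumed to be interleaved x,y,z,r,g,b floats.
GLuint uploadPointsVBO (const std::vector<float> &xyzrgb)
{
  GLuint vbo;
  glGenBuffers (1, &vbo);
  glBindBuffer (GL_ARRAY_BUFFER, vbo);
  glBufferData (GL_ARRAY_BUFFER, xyzrgb.size () * sizeof (float),
                xyzrgb.data (), GL_DYNAMIC_DRAW);  // re-uploaded every frame
  return vbo;
}

// Draw the cloud on the marker: patt_trans is the 3x4 marker-to-camera
// transform produced by arGetTransMat() after arDetectMarker() found
// the square marker in the current video frame.
void drawCloudOnMarker (GLuint vbo, GLsizei num_points, double patt_trans[3][4])
{
  double gl_para[16];
  argConvGlpara (patt_trans, gl_para);   // convert to an OpenGL 4x4 matrix

  glMatrixMode (GL_MODELVIEW);
  glLoadMatrixd (gl_para);               // virtual camera = real camera pose

  glBindBuffer (GL_ARRAY_BUFFER, vbo);
  glEnableClientState (GL_VERTEX_ARRAY);
  glEnableClientState (GL_COLOR_ARRAY);
  glVertexPointer (3, GL_FLOAT, 6 * sizeof (float), (void *) 0);
  glColorPointer  (3, GL_FLOAT, 6 * sizeof (float), (void *) (3 * sizeof (float)));

  glDrawArrays (GL_POINTS, 0, num_points);

  glDisableClientState (GL_COLOR_ARRAY);
  glDisableClientState (GL_VERTEX_ARRAY);
  glBindBuffer (GL_ARRAY_BUFFER, 0);
}
```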

Fig 5: Summary of ARToolKit's working.

Fig 6: A cube rendered in Augmented Reality.
