
International Journal of Computer Trends and Technology- volume3Issue1- 2012

An Overview of Point-Based Rendering Techniques


Mandakini Kaushik
M.Tech. (CSE) Scholar, Dept. of CSE, Rungta College of Engg. & Tech., Bhilai 490 024 (C.G.), INDIA

Kapil Kumar Nagwanshi
Reader, Dept. of CSE, Rungta College of Engg. & Tech., Bhilai 490 024 (C.G.), INDIA

Dr. Lokesh Kumar Sharma
Head, Dept. of CSE, Rungta College of Engg. & Tech., Bhilai 490 024 (C.G.), INDIA

Abstract— In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of highly complex 3D models. Traditional geometry-based rendering methods use triangles as primitives, which makes rendering complexity dependent on the complexity of the model to be rendered. Point-based models overcome that problem: points maintain no connectivity information and represent only surface information. Owing to their fundamental simplicity, points have motivated a variety of research on topics such as shape modeling, object capturing, simplification, rendering and hybrid point-polygon methods. However, the lack of connectivity that makes points simple also introduces several artifacts during rendering, such as aliasing and holes in the rendered model. Several algorithms have been proposed for rendering point models efficiently and with high quality. The major challenge of point-based rendering (PBR) algorithms is to achieve a continuous interpolation between discrete point samples that are irregularly distributed on a surface; furthermore, correct visibility must be supported, as well as efficient level-of-detail (LOD) rendering for large data sets. Our work is based on an understanding of popular point rendering algorithms such as QSplat and elliptical weighted average (EWA) splatting. We discuss the advantages and disadvantages of each approach and compare their results.

Keywords— computer graphics, point-based rendering, QSplat, surfel, surface splatting, randomized z-buffer algorithm

I. INTRODUCTION

Rendering is the process of generating an image from a model by means of a software program. The model is a description of three-dimensional objects in a strictly defined data structure; it contains geometry, viewpoint, texture and lighting information. Rendering is one of the major fields of 3D computer graphics. In the graphics pipeline it is the last major step, giving the final appearance to the models and animation. With the increasing sophistication of computer graphics from the 1970s onward, it has become a more distinct subject. It has uses in computer and video games, simulators, movie and TV special effects, and design visualization, each employing a different balance of features and techniques. A wide variety of renderers are available as products: some are integrated into larger modeling and animation packages, some are stand-alone, and some are free open-source projects. Internally, a renderer is a carefully engineered program, based on a selective mixture of disciplines related to light physics, visual perception, mathematics, and software development.

In rendering, a modeling primitive (an object representation) and a rendering primitive are different issues. An object can be stored in one format and converted into another for rendering; an example is parametric patches, which are tessellated into triangles for rendering. On the other hand, many representations, such as points and polygon meshes, can be rendered directly. We can distinguish two fundamental approaches to rendering: geometry-based and image-based rendering. In geometry-based rendering, a scene is described using geometrical primitives, which are discretized into points, lines and triangles for rendering. In image-based rendering, a scene is rendered from multiple input images, without any 3D object or scene information. Most rendering algorithms are a combination of these two paradigms, for example texture mapping (image-based) in polygon rendering (geometry-based).

II. OVERVIEW OF POINT-BASED RENDERING

A. Rendering Primitives

In computer graphics, rendered images are represented by a collection of objects. These objects are often composite, constructed from a number of more basic primitive objects. Typically these primitives have been based on volumes (Constructive Solid Geometry) or manifold representations (splines, polygons). Although these methods are good for large objects of reasonable complexity, they become somewhat inefficient at representing objects with high levels of detail (LOD). Recently another type of primitive has been introduced, which uses points. Using points as the rendering primitive, output images are constructed from a cloud of points; this is known as Point-Based Rendering (PBR). Comparing polygon meshes with points is analogous to comparing vector graphics with pixel graphics: points in 3D are analogous to pixels in 2D, replacing textured triangles or higher-order surfaces by zero-dimensional elements. Thus point-based rendering is a geometry-based rendering method, inspired by image-based rendering techniques, whose rendering cost is output-sensitive. Point-sample rendering can be done in two different approaches: point

ISSN: 2231-2803

http://www.internationaljournalssrg.org
Page 18



rendering and splatting. Point rendering uses zero-dimensional points as primitives, while in splatting the primitives are planar reconstruction kernels (for example, disks around points). Not all methods based on point sampling fit into these categories; the classification used here is mainly to bring out important differences between various point rendering algorithms.

B. Motivation

Interactive computer graphics has not reached the level of realism that allows true immersion into a virtual world. Traditionally, graphics has worked with triangles as the rendering primitive, but triangles are really just the lowest common denominator for surfaces. The basic idea of point-based rendering is: instead of drawing triangles, just draw lots of dots (or small circles or ellipses). The advantages gained by choosing points as primitives, and the problems faced in polygon-based rendering, are as follows:
- In polygon-based rendering, if the projected triangles are too small, many triangles may map onto the same pixel, resulting in inefficient triangle setup (initialization of texture filtering and rasterization). Processing many such small triangles leads to bandwidth bottlenecks and excessive floating-point and rasterization requirements.
- Creating meshes from datasets obtained from 3D laser scanners is hard and not robust enough, and generating a consistent triangle mesh or texture parameterization is time-consuming and difficult. This problem is absent in point-based models, as they store minimal surface information without connectivity.
- Point-based rendering simplifies the rendering pipeline by unifying vertex and fragment processing, allowing an efficient and flexible pipeline that avoids redundant functionality.
- Point-based rendering algorithms are output-sensitive, meaning that rendering complexity is only weakly dependent on scene complexity.
- Point-based representations of organic models such as feathers, trees or smoke are much more robust than polygonal models.
- Multiresolution operations on polygonal models are quite complex, as it is difficult to determine when or how to add or drop polygons from the mesh while zooming in and out; level-of-detail control for complex models in polygon-based rendering is inefficient in terms of resources and involves complex computations.
- Point-based rendering occupies less storage space than polygonal rendering due to the lack of connectivity information. Further, compression methods are available for storing point models, which reduces the required storage space.

Figure 1: Complex models with huge information.

C. Applications

Point-based rendering is a compact and efficient means of displaying complex geometry, and it has found a variety of applications. Points are an efficient surface representation for real 3D reconstruction and rendering environments for virtual reality. Several researchers have proposed using point clouds for the direct visualization of data acquired by 3D scanners. Various systems combine traditional rendering primitives with points to get the best of both worlds, and points have also been proposed for rendering large animated scenes. Points are well suited to building efficient hierarchies for huge scenes with millions of rendering primitives, which has been exploited to interactively display ecosystems with plants and trees, and other massive outdoor scenes. Points have also been investigated by several researchers as a primitive for modeling and editing. The use of multiresolution point-based rendering allows scenes described by several billion primitives to be displayed at interactive frame rates, and the compact structures defined for point-based rendering are particularly well adapted to devices with limited memory and display resolution, such as PDAs.
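Before turning to specific algorithms, the core operation shared by all point renderers — projecting each sample to the screen and resolving visibility per pixel — can be made concrete. The following is an illustrative Python sketch (not from any of the surveyed papers); the pinhole projection and the one-point-per-pixel rule are simplifying assumptions, and pixels that receive no sample are left as holes, the very artifact the algorithms below address:

```python
import math

def render_points(points, colors, width, height, focal):
    """Project 3D points (camera space, +z forward) to a width x height image.

    The nearest point per pixel wins via a z-buffer; pixels that receive
    no point remain holes (value None) -- the artifact discussed above.
    """
    image = [[None] * width for _ in range(height)]
    zbuf = [[math.inf] * width for _ in range(height)]
    for (x, y, z), c in zip(points, colors):
        if z <= 0:                               # behind the camera
            continue
        u = int(width / 2 + focal * x / z)       # perspective projection
        v = int(height / 2 + focal * y / z)
        if 0 <= u < width and 0 <= v < height and z < zbuf[v][u]:
            zbuf[v][u] = z                       # nearer point wins
            image[v][u] = c
    return image

# Two points along the same viewing ray: the nearer one claims the pixel.
img = render_points([(0.0, 0.0, 2.0), (0.0, 0.0, 4.0)], ["near", "far"],
                    8, 8, 8.0)
```

Splatting and hole-filling methods replace the bare per-pixel write in this sketch with reconstruction kernels, as described in the sections that follow.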

III. OVERVIEW OF POINT RENDERING APPROACH

As noted in the introduction, geometry-based and image-based rendering are the two fundamental paradigms. In geometry-based rendering, a scene is described using geometrical primitives, which are discretized into points, lines and triangles for rendering, and lighting is based on physical simulation. Geometry-based rendering provides interactivity, but capturing real-life scenes is difficult, and the rendering cost of most methods is proportional to the scene complexity. In image-based rendering (IBR), a scene is represented by a plenoptic function, a seven-dimensional function completely describing the illumination in a scene. The plenoptic function is discretized into n-dimensional samples and rendered by resampling the function. IBR provides easy capture and viewing of real-life scenes, and many IBR methods are output-sensitive, i.e. the rendering cost of a scene is proportional to the number of output pixels rather than to the scene complexity. The main drawbacks of IBR are the lack of interactivity and large memory requirements.
Most rendering algorithms combine these two paradigms, for example texture mapping (image-based) in polygon rendering (geometry-based). Point-based rendering is thus a geometry-based rendering method, inspired by image-based techniques, whose rendering cost is output-sensitive.

IV. AVAILABLE POINT RENDERING ALGORITHMS

Point rendering has received a lot of attention because of the need to store and render objects of high complexity; in this context, point rendering is potentially more efficient than other known rendering methods. Point rendering algorithms can be classified according to how they reconstruct the surface from the samples. At least four different approaches have been proposed:
1. Hole detection and filling in screen space: individual samples are projected onto the screen and the pixels not receiving samples are detected; the surface is then interpolated from the neighboring samples.
2. Generating more samples: a surface is adaptively interpolated in object space to guarantee that every pixel receives at least one sample.
3. Splatting: a surface sample is projected onto the screen and its contribution is spread into the neighboring pixels to guarantee coverage; higher-quality methods average the contributions of all splats contributing to a pixel.
4. Meshing: a polygon mesh is used for interpolating the surface samples. This is rather expensive, as the holes are usually only a couple of pixels wide and a full polygon rendering algorithm is needed.
Most point-based rendering algorithms designed so far fall into these four categories. We now discuss some of the important algorithms proposed for point-based rendering. In 1985, Levoy and Whitted presented their pioneering work on point rendering.
They note that when model complexity increases, the coherence provided by rendering polygons or other higher-level primitives becomes less beneficial. They also note that a surface may be represented by a set of zero-dimensional points by considering it differentiable and estimating the tangent plane and normal from a small neighborhood of points. Surface reconstruction is done by estimating coverage from the number of points projecting into the area of a filter kernel at each pixel; a pixel is considered fully covered if the coverage exceeds a threshold. Edge antialiasing can be done by treating partially covered pixels as semi-transparent. This method did not address filter normalization in the case of partially transparent surfaces, nor texture filtering. Grossman and Dally [2] proposed a point sample rendering technique in which point clouds are created from multiple orthogonal projections; a hierarchical z-buffer technique is used to resolve visibility and the surface reconstruction problem. However, this hierarchical algorithm is unfortunately prone to blocky artifacts. The year 2001 brought EWA surface splatting [8], dynamic (on-the-fly) object sampling during rendering [7], hybrid polygon-point rendering systems [1], [10], differential points [11], and point-based modeling and rendering using radial basis functions (RBFs). The EWA surface splatting of Zwicker et al. [8] combines the ideas of Levoy and Whitted with Heckbert's resampling framework to produce a high-quality splatting technique; Zwicker et al. also extended EWA splatting to volume rendering. Hybrid polygon-point rendering systems definitely leave the idea of points as a universal rendering primitive. They build on the observation that points are more efficient only



if they project to a small screen-space area; otherwise polygons perform better. Methods for dynamic (on-the-fly) object sampling produce point samples of rendered geometry in a view-dependent fashion as they are needed for rendering (i.e. during rendering, not as a preprocess). The randomized z-buffer algorithm [7] used randomized sampling of a triangle set to interactively display scenes consisting of up to 10^14 triangles. Kalaiah and Varshney's differential points [11] extend points with local differential surface properties to better model their neighborhood. The differential points are sampled from NURBS surfaces, but nothing prevents them from being sampled from any kind of differentiable surface; the initial point set is then reduced by eliminating redundant points. Later work on point-based rendering has focused mainly on hardware implementations of the splatting algorithms. After model acquisition, proper sampling should be done: the goal during sampling is to find an optimal surfel representation of the geometry with minimal redundancy. Once the surface is properly sampled, the point model (which may also contain some useful information such as edges, silhouettes or planes) is given as input to the rendering algorithm.

V. COMPARISON OF VARIOUS POINT-BASED RENDERING ALGORITHMS

QSplat: A Point Rendering Approach. Rusinkiewicz and Levoy [5] designed a system named QSplat for representing and progressively displaying large, complex models, which combines a multiresolution hierarchy based on bounding spheres with a rendering system based on points. A single data structure (see Figure 2) is used for view-frustum culling, backface culling, level-of-detail selection, and rendering. The representation is compact and can be computed quickly, making it suitable for large data sets. The QSplat implementation, written for use in a large-scale 3D digitization project, launches quickly, maintains a user-settable interactive frame rate regardless of object complexity or camera position, yields reasonable image quality during motion, and refines progressively when idle to a high final image quality.

Figure 2: QSplat data structure. Left: quadtree representation, where R is the radius of the bounding sphere. Right: corresponding division of the bunny model [5].

QSplat is mainly a two-step algorithm. First, the triangle mesh (a surface reconstructed over the input point model) is read and a hierarchy of bounding spheres is formed; this hierarchy is encoded and stored compactly in a file. This step also computes appropriate radii for the splats (disks centered around each point) and for the bounding spheres, using mesh connectivity information, so as to prevent any holes during rendering. Second, rendering is simply a level-order traversal of the preprocessed hierarchy.

Surfels as Rendering Primitives: Pfister et al. [4] use visibility splatting for hole detection and Gaussian filtering for image reconstruction. Though different at first sight, visibility splatting is very similar to Grossman's hierarchical z-buffer. Hole detection: visibility splatting assumes that each point represents a small circular area in object space, which projects to screen space as a shape similar to an ellipse. This ellipse is scan-converted into the z-buffer, with depth computed from the tangent plane at the point; the density of the point set is needed to set the size of the rasterized ellipses. Comparing visibility splatting with Grossman's hierarchical z-buffering reveals that both are similar in spirit: points are written to the z-buffer enlarged to a size that ensures no holes. Visibility splatting does this explicitly by rasterizing the projected surfel disc, while Grossman uses a lower-resolution z-buffer to effectively grow the projected surfels, which is equivalent to visibility splatting constrained to squares (the pixels of the lower-resolution z-buffer). Image reconstruction: a Gaussian filter is centered at each hole and a weighted average of the foreground point colors is computed; the density of the point set is needed to estimate the radius of the filter.
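The image-reconstruction step described above — centering a Gaussian filter at each hole and averaging the foreground colors — might be sketched as follows. This is an illustrative Python example (grayscale, fixed filter radius), not Pfister's implementation; in the actual method the filter radius is derived from the point-set density:

```python
import math

def fill_holes(image, sigma=1.0, radius=2):
    """Fill empty pixels (None) with a Gaussian-weighted average of
    nearby non-empty (foreground) pixels -- a sketch of the screen-space
    image-reconstruction step described above. Grayscale for brevity."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if image[y][x] is not None:          # not a hole
                continue
            total, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and image[ny][nx] is not None:
                        wgt = math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
                        total += wgt * image[ny][nx]
                        wsum += wgt
            if wsum > 0:
                out[y][x] = total / wsum         # weighted average fills the hole
    return out

# A single-pixel hole surrounded by gray (0.5) is filled with gray.
img_with_hole = [[0.5, 0.5, 0.5],
                 [0.5, None, 0.5],
                 [0.5, 0.5, 0.5]]
filled = fill_holes(img_with_hole)
```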



Surface Splatting: Zwicker et al. [8] proposed a framework for antialiased rendering coined surface splatting. Extending the 1985 approach of Levoy and Whitted, elliptical Gaussian splats around each sample point are projected to the screen and convolved with a unit Gaussian so that each splat covers at least one pixel. Occluded sample points are removed by visibility splatting [4] with a small z-threshold, to avoid deleting sample points that belong to the same surface. The attributes in the image are reconstructed as a weighted sum of the splat attributes, re-normalized by dividing by the sum of weights (i.e. the projected Gaussians). Small weight values are interpreted as object borders and translated into alpha values used during compositing. An A-buffer algorithm is used as the rendering back end to support transparency and edge antialiasing. The method differs from the approaches of Grossman and Dally [2] and Pfister et al. [4], in which visibility and image reconstruction are done simultaneously. Using resampling filters for point rendering, most sampling artifacts such as holes or aliasing can be effectively avoided.

Other Approaches to PBR: Wand et al. [7] take a triangle set as the initial scene representation. A spatial subdivision is created in a preprocess that sorts the triangles into groups showing a similar scaling factor when projected to the screen (i.e. their distance to the viewer is similar). At render time, a total number of samples is estimated for each group that would satisfy the given screen-space density; this is done using the distance of the group's bounding box from the viewer and the total area of the triangles within the bounding box. Within each group, the point samples are chosen randomly from the triangles, with a probability density function proportional to triangle surface area in object space. Hybrid polygon-point rendering systems [1], [10] definitely leave the idea of points as a universal rendering primitive: they build on the observation that points are more efficient only if they project to a small screen-space area, while otherwise polygons perform better.
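The area-proportional sampling described above for the randomized z-buffer can be sketched as follows. This is an illustrative Python example (not the authors' code): triangles are picked with probability proportional to their area, and a uniformly distributed point is then generated inside each chosen triangle:

```python
import math
import random

def triangle_area(a, b, c):
    """Triangle area from half the magnitude of the edge cross product."""
    ux, uy, uz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    vx, vy, vz = c[0] - a[0], c[1] - a[1], c[2] - a[2]
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def sample_points(triangles, n, seed=0):
    """Draw n surface points: pick triangles with probability proportional
    to their area, then a uniform random point inside each picked triangle."""
    rng = random.Random(seed)
    areas = [triangle_area(*t) for t in triangles]
    pts = []
    for _ in range(n):
        a, b, c = rng.choices(triangles, weights=areas)[0]
        r1, r2 = rng.random(), rng.random()
        if r1 + r2 > 1.0:                 # fold back into the triangle
            r1, r2 = 1.0 - r1, 1.0 - r2
        pts.append(tuple(a[i] + r1 * (b[i] - a[i]) + r2 * (c[i] - a[i])
                         for i in range(3)))
    return pts

tris = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),   # area 0.5
        ((0, 0, 0), (4, 0, 0), (0, 4, 0))]   # area 8 -> sampled ~16x as often
pts = sample_points(tris, 100)
```

The area-weighted choice is what makes the expected screen-space point density roughly uniform for groups at a similar distance from the viewer.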
Botsch et al. presented a highly efficient hierarchical representation for point-sampled geometry that automatically balances sampling density and point-coordinate quantization, together with an efficient rendering algorithm that exploits the hierarchical structure of the representation to perform fast 3D transformations and shading. However, it still suffers from the problem of smooth edges. Reuter presented an implicit representation of the surface through an unorganized set of points, minimizing the bending energy using radial basis functions while guaranteeing a specifiable continuity. A (radial) function is attached to each data point that describes how it influences space. The surface is rendered directly, view-dependently, in an output-sensitive multiresolution manner without the creation of a polygonal mesh representation; this is done by locally generating 3D surface points adapted to the output, using the implicit function computed during point modeling. However, the algorithm becomes time-consuming as the input model size increases.

Advantages of QSplat: The QSplat approach has its own advantages, which are listed as follows:
- It explores a simple hierarchical data structure for visualizing huge point clouds.
- It uses an efficient encoding scheme to store the preprocessed hierarchy in a file (the .qsn file format).
- It uses hierarchical visibility culling (backface culling, frustum culling), which excludes unnecessary visibility computation for each point.

Disadvantages of QSplat:

The QSplat approach has its own limitations and assumptions, which are listed as follows:
- Input models for preprocessing are triangle meshes, not raw point clouds.
- The calculation of an appropriate radius for each point, so that there won't be any holes in the model, is based on the connected triangle mesh supplied as input.
- Aliasing artifacts are not handled (these occur when splats overlap heavily).
- No proper texture filtering method is used.
- The input has to be sampled sufficiently.
- No surface reconstruction is handled.
Pfister et al. [4] propose a new data structure called the surfel. In this algorithm, surfels are regularly sampled from a given scene and subsequently form a hierarchy; however, the creation of the surfel data structure is itself a complex task. Zwicker et al. [8] extended the surfel data structure to handle high-quality texture filtering, using the surface splatting framework analogously to conventional texture mapping. The surface splatting approach provides the following benefits.
Advantages of Surface Splatting:
- It provides a mathematical formulation of the screen-space resampling filter, useful for efficient implementation.
- It handles both aliasing and hole artifacts.
- It supports volume rendering, which is integrated in the same framework.
- Effects like transparency and texture mapping can be handled effectively.
Disadvantages of Surface Splatting:
- Circular splats in object space are not adapted to surface orientation, which may lead to artifacts at the corners and edges of the rendered model.
- Intersecting surfaces cannot be rendered correctly; the problem can be alleviated, but not completely removed, by increasing the object sampling rate.
- Relatively high memory cost due to the use of an A-buffer.
- Application of the low-pass filter to magnified areas may lead to blurring artifacts.
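The weighted-sum-and-renormalize reconstruction at the heart of surface splatting, described above, might look like the sketch below. This illustrative Python example accumulates Gaussian-weighted color and weight per pixel and divides by the weight sum; visibility (z-thresholding) and the screen-space EWA filter derivation are omitted, so it shows the normalization idea only:

```python
import math

def splat(points, width, height, sigma=1.0, radius=2):
    """Accumulate Gaussian-weighted color and weight per pixel, then
    normalize by the weight sum -- the reconstruction step of surface
    splatting in sketch form (visibility and EWA filtering omitted)."""
    color = [[0.0] * width for _ in range(height)]
    weight = [[0.0] * width for _ in range(height)]
    for px, py, c in points:                     # px, py: screen position
        x0, y0 = int(px), int(py)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                x, y = x0 + dx, y0 + dy
                if 0 <= x < width and 0 <= y < height:
                    d2 = (x - px) ** 2 + (y - py) ** 2
                    w = math.exp(-d2 / (2 * sigma * sigma))
                    color[y][x] += w * c
                    weight[y][x] += w
    # Normalize; pixels with zero weight stay empty. In full EWA, small
    # weight sums would instead be interpreted as borders / alpha values.
    return [[color[y][x] / weight[y][x] if weight[y][x] > 0 else None
             for x in range(width)] for y in range(height)]

# Two coincident splats of the same color reconstruct that color exactly:
# the division by the accumulated weight removes the double contribution.
img_splat = splat([(3.0, 3.0, 0.8), (3.0, 3.0, 0.8)], 7, 7)
```

The final division is the step that keeps overlapping splats from brightening the image, which is why truncating or approximating the kernels without renormalizing produces visible artifacts.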



VI.
CONCLUSION

Having presented the motivation for and applications of points as a rendering primitive, we have given an overview of several popular rendering algorithms, including QSplat and surface splatting, and discussed the advantages and disadvantages of both approaches. Surface splatting, however, still cannot represent surfaces exactly and efficiently at corners and edges.

Figure 3: Rendering using the QSplat algorithm; from left to right: 16k, 7k and 1k points.

Figure 4: With simple point rendering and with surface splatting.

We have discussed various rendering algorithms, including their benefits and limitations.

VII. REFERENCES

[1] Baoquan Chen and Minh Xuan Nguyen, "POP: A hybrid point and polygon rendering system for large data", IEEE Visualization, 2001.
[2] J. P. Grossman and William J. Dally, "Point sample rendering", Proceedings of the 9th Eurographics Workshop on Rendering, 1998, pp. 181-192.

[3] Marc Levoy, Kari Pulli, Brian Curless, Szymon Rusinkiewicz, David Koller, Lucas Pereira, Matt Ginzton, Sean Anderson, James Davis, Jeremy Ginsberg, Jonathan Shade, and Duane Fulk, "The digital Michelangelo project: 3D scanning of large statues", SIGGRAPH 2000, Computer Graphics Proceedings (Kurt Akeley, ed.), ACM Press / ACM SIGGRAPH / Addison Wesley Longman, 2000, pp. 131-144.
[4] Hanspeter Pfister, Matthias Zwicker, Jeroen van Baar, and Markus Gross, "Surfels: Surface elements as rendering primitives", Proceedings of SIGGRAPH 2000 (2000), pp. 335-342.
[5] Szymon Rusinkiewicz and Marc Levoy, "QSplat: A multiresolution point rendering system for large meshes", Proceedings of SIGGRAPH 2000 (2000), pp. 343-352.
[6] R. Wahl, M. Guthe, and R. Klein, "Identifying planes in point-clouds for efficient hybrid rendering", The 13th Pacific Conference on Computer Graphics and Applications, October 2005.
[7] Michael Wand, Matthias Fischer, Ingmar Peter, Friedhelm Meyer auf der Heide, and Wolfgang Straßer, "The randomized z-buffer algorithm: Interactive rendering of highly complex scenes", SIGGRAPH 2001 Proceedings, 2001.
[8] Matthias Zwicker, Hanspeter Pfister, Jeroen van Baar, and Markus Gross, "Surface splatting", Proceedings of ACM SIGGRAPH 2001, Computer Graphics Proceedings, Annual Conference Series, ACM Press / ACM SIGGRAPH, August 2001, ISBN 1-58113-292-1, pp. 371-378.
[9] Matthias Zwicker, Jussi Räsänen, Mario Botsch, Carsten Dachsbacher, and Mark Pauly, "Perspective accurate splatting", Proceedings of Graphics Interface, 2004.
[10] Jonathan D. Cohen, Daniel G. Aliaga, and Weiqiang Zhang, "Hybrid simplification: Combining multi-resolution polygon and point rendering", Proceedings of IEEE Visualization, 2001.
[11] Aravind Kalaiah and Amitabh Varshney, "Differential point rendering", Proceedings of the 12th Eurographics Workshop on Rendering, August 2001.

