Handbook of Multimedia for Digital Entertainment and Arts - P19

23 Computer Graphics Using Raytracing

In vector form, the directions of the reflected and refracted rays can be determined as follows:

cos θ₁ = N · (−L)
cos θ₂ = √(1 − (n₁/n₂)² (1 − cos² θ₁))
V_reflect = L + 2 N cos θ₁
V_refract = (n₁/n₂) L + ((n₁/n₂) cos θ₁ − cos θ₂) N        (14)

As an optimization that may be made in the implementation of a raytracer, the medium in the absence of geometry may be assumed to have an index of refraction of one. Therefore, as n₂ is now one, the index of refraction n for each piece of geometry becomes equal to the ratio of the indices of refraction of the medium to that of its surroundings, simplifying (14) to

V_refract = n L + (n cos θ₁ − cos θ₂) N        (15)

Controlling Scene Complexity

Because each intersection of a ray with scene geometry may generate additional rays, the number of rays to be traced through the scene grows geometrically with the number of ray-geometry intersections, and therefore with scene complexity. In extreme cases, the number of rays traced, and therefore the number of intersection tests performed, can grow without bound. Fortunately, very few surfaces are perfectly reflective or transparent; most absorb some of the light energy falling on them. For this reason, the total light energy represented by the secondary rays diminishes with each intersection, and the effect of the secondary rays on the output image becomes less significant with each bounce. Note that once a ray escapes the volume enclosing the scene geometry, it can no longer generate any ray-geometry intersections and it will not contribute further to the output image.

To reduce the computational complexity, and therefore the calculation time required to generate the output image using the raytracing algorithm, various methods are usually employed to limit the number of recursions which may be generated by a single primary ray. In its simplest form, this can be achieved by placing an upper limit on the number of bounces allowed [Shirley 05].
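As an illustration of (14), the reflected and refracted directions can be computed as below. This is only a sketch under the chapter's conventions (L the incoming direction, N the unit surface normal, both assumed normalized); the vector helpers are our own, not from the text.

```python
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def axpy(s, x, y):
    # Componentwise s*x + y.
    return tuple(s * xi + yi for xi, yi in zip(x, y))

def reflect(L, N):
    # V_reflect = L + 2*N*cos(theta1), with cos(theta1) = N . (-L)
    cos1 = -dot(N, L)
    return axpy(2.0 * cos1, N, L)

def refract(L, N, n1, n2):
    # V_refract = (n1/n2)*L + ((n1/n2)*cos(theta1) - cos(theta2))*N
    eta = n1 / n2
    cos1 = -dot(N, L)
    k = 1.0 - eta * eta * (1.0 - cos1 * cos1)
    if k < 0.0:
        return None  # total internal reflection: no refracted ray exists
    cos2 = math.sqrt(k)
    return axpy(eta * cos1 - cos2, N, tuple(eta * li for li in L))
```

In a recursive tracer, each call to `reflect` or `refract` spawns a secondary ray, so the simplest complexity control mentioned above amounts to a one-line depth guard at the top of the trace function.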
In a more sophisticated system, the projected effect of the ray on the output image may be estimated after each intersection [Hall 83]. If the expected contribution of either a reflected or transmitted ray falls below a particular threshold, the new ray is not traced. By altering this threshold, a rough approximation of the final image may be generated quickly for preview purposes, or a more refined, final output may be produced.

Figure 3 shows examples of an output image generated using primary and shadow rays only, for recursion depths of zero to five. In Figure 3(a), no reflection is seen because the reflection rays are not generated at the primary ray intersection point. This causes the spheres to appear flat and dull. Figure 3(b) depicts reflections of the other spheres in the scene due to a first bounce. However, in the reflections of the spheres on the floor, they still have a dull and flat appearance. By adding successively more bounces, enhanced detail in the reflections can be seen, as demonstrated in Figures 3(c) to 3(f), and the differences between each successive recursion of the ray tracing algorithm become harder and harder to distinguish. Namely, as shown in Figure 4, very little additional information is added past the third bounce. Therefore, the raytracing process can be stopped in this case after three or four bounces.

G. Sellers and R. Lukac

Fig. 3 An image generated using primary and secondary rays for (a) zero, (b) one, (c) two, (d) three, (e) four and (f) five bounces

Fig. 4 Difference images between each successive bounce: (a) first, (b) second, (c) third and (d) fourth bounce

Image Quality Issues

The primary image quality concern with ray traced images is aliasing artifacts [Cook 86]. As ray tracing is a point-sampling technique, it is subject to spatial aliasing when it reaches or exceeds the sampling limit.
This limit is described by the Nyquist theorem [Nyquist 28], which states that the maximum frequency of a signal that may be represented by a point-sampled data set is half of the sampling frequency of that set. Any content of a signal beyond this limit will manifest as an aliased signal at a lower frequency. It follows from this theorem that, in the two-dimensional output image, the sampling rate of the image is determined by the spacing of the pixels, in other words, by the image resolution. However, the input signal is essentially the scene geometry, which may contain very fine details. Aliasing can appear in a number of ways. The most obvious effect is jagged edges of objects in the output image. Another serious artifact is the disappearance of fine details, including very small objects. Because objects in a ray-traced scene may be represented analytically, it is possible for a small object to fall between primary rays. In this case, no intersection will be detected; the small object will not have any effect on the output image and will therefore not be visible. In an animation where the small object moves between frames, in some frames the object will generate an intersection and in others it will not. It will therefore seem to appear and disappear as it moves.

Figure 5 shows an image with the checkered texture generated procedurally; it has infinite detail, and the smallest squares seen in the distance cover an area significantly smaller than a pixel. As can be seen, the image rendered with no antialiasing (Figure 5a) has noticeably lower quality compared to the image rendered with a moderate level of antialiasing (Figure 5b). To compensate for aliasing artifacts, various antialiasing techniques have been devised. Most methods for reducing aliasing artifacts are based on the concept of super-sampling, which refers to the computation of samples at a higher frequency than the output resolution [Dobkin 96].
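The folding behaviour the Nyquist theorem predicts is easy to verify numerically before turning to super-sampling. In the sketch below (illustrative frequencies, not from the text), a 9 Hz sinusoid point-sampled at 8 Hz, well beyond the 4 Hz Nyquist limit, yields exactly the same sample values as a 1 Hz sinusoid:

```python
import math

def point_samples(freq_hz, rate_hz, n):
    # Point-sample sin(2*pi*f*t) at t = k / rate for k = 0 .. n-1.
    return [math.sin(2.0 * math.pi * freq_hz * k / rate_hz) for k in range(n)]

above_nyquist = point_samples(9.0, 8.0, 16)  # 9 Hz signal, 8 Hz sampling rate
alias         = point_samples(1.0, 8.0, 16)  # the 1 Hz signal it folds down to
assert all(abs(a - b) < 1e-9 for a, b in zip(above_nyquist, alias))
```

A small object falling between primary rays is the spatial analogue of this: detail beyond the pixel-spacing limit either disappears or reappears as a lower-frequency pattern.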
A simple approach to super-sampling consists of taking multiple samples for each pixel in the output image and using the averaged value of these samples as the output value. This is equivalent to generating a higher resolution output image and then down-sampling it to the desired output resolution. Namely, each pixel is subdivided into several subpixels, for example into a three by three grid, producing nine subpixels per output pixel. A primary ray is then generated passing through each of the subpixels. If the center of each subpixel is used as the target for the primary ray, then aliasing artifacts can still be seen, albeit at higher frequencies. This is because the sampling pattern is still regular, and there may still be scene content that would generate image data beyond the new, higher sampling frequency. To compensate for this, more sample points can be added, thus dividing each output pixel into more and more subpixels. Alternatively, an irregular sampling pattern can be created by moving the sample positions within the subpixels. This is known as jittered grid super-sampling. Figure 6(a) shows the spacing of the three by three subpixels of the regular grid. The randomized positions used in a jittered grid are shown in Figure 6(b). The advantage of arranging the subsample positions on a pseudorandom grid is that not only are the effects of aliasing reduced, but straight lines at all angles are equally well represented, which is not the case when a regular grid pattern is used.

Fig. 5 An infinite checkered plane rendered (a) without antialiasing and (b) with modest antialiasing

Fig. 6 Grid sampling: (a) regular pattern and (b) pseudorandom pattern

Figure 7 shows the same image rendered with a regular sampling grid in Figure 7(a) and with a jittered grid in Figure 7(b). Inset at the top of each image is a magnified section of the upper edge of each square.
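A jittered grid of the kind shown in Figure 6(b) can be generated in a few lines. This is our own sketch, with the pixel taken as the unit square and sample positions expressed in pixel-relative coordinates:

```python
import random

def jittered_grid(n, rng=None):
    """Return n*n stratified sample positions inside a unit pixel.

    Each position is drawn uniformly from its own cell of a regular
    n-by-n grid, so the samples stay well distributed while the overall
    pattern is irregular (jittered grid super-sampling).
    """
    rng = rng or random
    step = 1.0 / n
    return [((ix + rng.random()) * step, (iy + rng.random()) * step)
            for iy in range(n) for ix in range(n)]
```

The colour of the output pixel is then the average of the colours returned by primary rays fired through these n*n positions; the regular grid of Figure 6(a) is recovered by replacing `rng.random()` with the constant 0.5.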
As can be seen in Figure 7, the appearance of jagged hard edges in the rendered image is reduced as the number of shades used to represent the edge increases. There are many other methods of implementing antialiasing. Many of these methods are adaptive [Painter 89], [Mitchell 90], allowing higher levels of antialiasing to be applied at edges or other areas with high frequency content. However, an in-depth analysis is beyond the scope of this article and we will not discuss the topic further here.

Fig. 7 Image rendered (a) with a regular sampling grid and (b) with a jittered grid

Acceleration of Raytracing

As already discussed, the computations involved in raytracing can quickly become extremely complex. To generate a high-resolution output with acceptable levels of antialiasing and detailed scene geometry, billions of calculations may need to be performed. Possibly the largest bottleneck in any raytracer is the intersection calculations. Turner Whitted estimated that for reasonably complex scenes, almost 95% of the computation time is spent performing intersection tests [Whitted 80]. As primary rays enter the scene, they must be tested for intersection with all elements of the scene geometry. The performance of a raytracer is thus heavily influenced by its ability to coarsely discard large parts of the scene from the set of geometry that must be tested for intersection against a given primary ray [Clark 76]. Furthermore, any secondary rays generated as a result of these intersections must also be tested against the scene geometry, potentially generating millions of additional intersection tests.

There has been a large amount of work performed in the area of acceleration and optimization of raytracing algorithms [Weghorst 84], [Kay 86]. Whilst optimizing the intersection test itself can produce reasonable results, it is important to try to minimize the number of tests that are performed.
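A rough count shows why minimizing the number of tests matters. With purely illustrative figures (not taken from the chapter), a 1920 by 1080 output with a three by three super-sampling grid already requires tens of millions of primary rays, and testing each one naively against every primitive of a million-primitive scene implies on the order of 10^13 intersection tests:

```python
width, height = 1920, 1080       # example output resolution
subsamples    = 9                # 3-by-3 super-sampling grid
primitives    = 1_000_000        # example scene size

primary_rays = width * height * subsamples
naive_tests  = primary_rays * primitives  # every ray against every primitive

print(primary_rays)  # 18662400
print(naive_tests)   # 18662400000000, before any secondary rays are counted
```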
This is usually achieved by employing hierarchical methods for culling, mostly those based on bounding volumes and space partitioning tree structures [Arvo 89].

Bounding Volumes

When using a bounding hierarchy approach to coarsely cull geometry, a tree is constructed with each node containing the bounding volumes of finer and finer pieces of the scene geometry [Rusinkiewicz 00]. At each of the leaf nodes of the tree, individual geometry elements of the scene are contained in a form that is directly tested for intersection with the ray being traced. Two important considerations must be made when choosing a structure for the tree representing the bounding hierarchy. First, the geometric primitive representing the bounding volume must be one that is simple to test for the presence of an intersection with a line segment in three-dimensional space. All that is important is a quick determination of whether there is an intersection with the bounding volume or not. If it can be determined that no intersection is made with the bounding volume, all of the geometry contained within that volume may be discarded from the potential set of geometry that may be intersected by the ray. As all that is desired is to know whether a ray may intersect geometry within the volume, the exact location of the intersection of the ray and the geometric primitive is not important; it is sufficient to know only whether there is an intersection with the primitive. Second, as the use of bounding volumes for culling geometry is an optimization strategy, the time taken to generate the bounding volumes should be less than the time saved by using them during the raytracing process. If it were not, then the total time required to generate the bounding hierarchy and then use it to render the image would be greater than the time required to render the image without the acceleration structure.
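A presence/absence test of exactly this kind can be written for a sphere, a common choice of bounding primitive, without ever solving for the hit point. The sketch below is our illustration (assuming a normalized ray direction); it only inspects the sign of the quadratic's discriminant:

```python
def ray_hits_sphere(origin, direction, center, radius):
    # Substituting the ray o + t*d into |p - c|^2 = r^2 gives
    # t^2 + 2*b*t + c = 0, with b = (o - c).d and c = |o - c|^2 - r^2.
    oc = tuple(o - ci for o, ci in zip(origin, center))
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    if c <= 0.0:
        return True          # ray origin is inside the sphere
    if b > 0.0:
        return False         # sphere lies entirely behind the ray
    return b * b - c >= 0.0  # real roots exist: the ray hits the sphere
```

If such a test returns False for a node of the hierarchy, the whole subtree beneath it is skipped; if it returns True, the children, and ultimately the enclosed geometry, must still be tested, since a hit on the bound does not guarantee a hit on its contents.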
For these reasons, spheres are often used as the geometric primitive representing bounding volumes [Thomas 01]. While finding a sphere that encloses a set of geometry reasonably tightly is straightforward, finding the smallest enclosing sphere for that geometry is a rather complex task. Although the bounding sphere does not necessarily represent the smallest possible bounding volume for a specific set of geometry, it is relatively trivial to test for the presence or absence of an intersection. The lack of an intersection between the ray and the bounding volume allows the volume to be skipped in the tree, but the presence of an intersection with a bounding volume does not necessarily indicate the existence of an intersection with part of the geometry contained within it. Therefore, if an intersection with part of the bounding hierarchy is detected, then the volumes and geometry within that bounding volume must also be tested. It is possible that a ray may intersect part of the bounding hierarchy yet not intersect any geometry within it.

Space Partitioning Tree Structures

Popular examples of space partitioning data structures are binary space partitioning trees [Fuchs 80], octrees, and kd-trees [Bentley 75]. These tree structures recursively subdivide volumes into smaller and smaller parts. The binary space partitioning tree was originally described by Henry Fuchs in 1980 [Fuchs 80]. In this method, a binary tree is constructed by recursively subdividing the three-dimensional space of the scene by planes. At each node, a new plane is constructed, with geometry that falls on one side of the plane placed in one child branch of the tree and geometry falling on the other side of the plane placed in the other. Eventually, each child node of the binary tree will contain a minimal number of geometric primitives, or it will no longer be possible to further subdivide the space without intersecting scene geometry.
In the latter case, a decision must be made as to whether to include the geometry in both children of the subdivision plane or to cease the subdivision process.

The octree is a space partitioning tree structure that operates on volumes of three-dimensional space. The entirety of the scene geometry is enclosed in an axis-aligned bounding box. At each node, the space is subdivided in half along each axis, giving the node eight child nodes. Any geometry that straddles more than one of the children is placed within the current node. Geometry that is not cut by the subdivision is carried further down the recursion until it too is cut. The recursion stops either when no geometry is left to carry to the next smaller level, or after some predefined number of steps to limit the recursion depth. Each node maintains a list of the geometry that falls within it. As rays are cast through the scene, a list of octree nodes through which the ray will pass is generated, and only the geometry contained within those nodes need be tested for intersections. This has the potential to greatly accelerate the tracing process, especially in cases where the scene is made up of a large amount of very fine geometry.

Hardware Accelerated Raytracing

Recently, interest has been shown in using hardware to accelerate ray tracing operations, particularly within the context of real-time raytracing. In some instances, standard PC hardware has been shown to have become fast enough to produce high quality raytraced images at interactive and real-time rates. For example, Intel has shown a demonstration of the popular video game Enemy Territory: Quake Wars rendered using raytracing [Phol 09]. In this project, the team was able to achieve frame rates of between 20 and 35 frames per second using four 2.66 GHz Intel Dunnington processors. While not technically hardware acceleration, this demonstrates what is currently possible using very high-end hardware.
Furthermore, Intel's upcoming Larrabee product is expected to be well suited to raytracing [Seiler 08].

Moving further along the spectrum of hardware acceleration, the Graphics Processing Unit (GPU) has been used for raytracing. At first, it was necessary to map the raytracing problem to existing graphics application programming interfaces, such as the Open Graphics Library (OpenGL), as in [Purcell 02]. More recently, the GPU has been used as a more general purpose processor, and applications such as raytracing have become more efficient on such systems. For example, NVidia presented an implementation of a real-time raytracing algorithm using their CUDA platform [Luebke 08].

Dedicated hardware has also been investigated as a means to achieve high performance in ray tracing applications. An example of such work can be found in [Schmittler 04], where a dedicated raytracing engine was prototyped using a field-programmable gate array (FPGA). The same team has continued to work on the system [Woop 05] and has achieved impressive results using only modest hardware resources. The outcome has been a device designed to accelerate the scene traversal and ray-geometry intersection tests. Using a 66 MHz FPGA, the project has shown raytraced scenes at more than 20 frames per second. It can be expected that, should such a device scale to the multi-gigahertz range seen in modern CPU implementations, significantly higher performance could be achieved.

Summary

This chapter presented the fundamentals of raytracing, an advanced computer graphics method used to render an image by tracing the path of light through pixels in an image plane. Raytracing is not a new field; however, there is still a large amount of active research being conducted on the subject, as well as in the related field of photon mapping.
Probably the main factor boosting this interest is that direct rendering techniques such as rasterization start to break down as the geometry used by artists becomes finer and more detailed, and polygons begin to cover screen areas smaller than a single pixel. Therefore, more sophisticated computer graphics solutions are called for. Micropolygon-based techniques, such as the well-known Reyes algorithm, constitute one such modern solution. Although these techniques have been used for some time to render extremely fine geometry and implicit surfaces in offline systems such as those used for movies, raytracing is believed to be a computer graphics method for tomorrow. With the continued increase in computing power available to consumers, it is quite possible that interactive and real-time raytracing could become a commonly used technique in video games, digital content creation, computer-aided design, and other consumer applications.

References

[Appel 68] Appel, A. (1968), Some Techniques for Shading Machine Renderings of Solids. In Proceedings of the Spring Joint Computer Conference, Volume 32, pp. 37–49.
[Whitted 80] Whitted, T. (1980), An Improved Illumination Model for Shaded Display. Communications of the ACM, Volume 23 (6), pp. 343–349.
[Phong 73] Bui, T. P. (1973), Illumination of Computer Generated Images. Department of Computer Science, University of Utah, July 1973.
[Perlin 01] Perlin, K. (2001), Improving Noise. Computer Graphics, Volume 35 (3).
[Jensen 96] Jensen, H. W. (1996), Global Illumination using Photon Maps. In Rendering Techniques '96, Springer Wien, X. Pueyo and P. Schröder, Eds., pp. 21–30.
[Phol 09] Phol, D. (2009), Light It Up! Quake Wars Gets Ray Traced. Intel Visual Adrenalin Magazine, Issue 2, 2009.
[Purcell 02] Purcell, T., Buck, I., Mark, W. R., and Hanrahan, P. (2002), Raytracing on Programmable Graphics Hardware. ACM Transactions on Graphics, 21 (3), pp. 703–712 (Proceedings of SIGGRAPH 2002).
[Luebke 08] Luebke, D. and Parker, S. (2008), Interactive Raytracing with CUDA. Presentation, NVidia Sponsored Session, SIGGRAPH 2008.
[Seiler 08] Seiler, L. et al. (2008), Larrabee: A Many-Core x86 Architecture for Visual Computing. ACM Transactions on Graphics, 27 (3), Article 18.
[Schmittler 04] Schmittler, J., Woop, S., Wagner, D., Paul, W., and Slusallek, P. (2004), Realtime Raytracing of Dynamic Scenes on an FPGA Chip. In Proceedings of Graphics Hardware 2004, Grenoble, France, August 28th–29th, 2004.
[Woop 05] Woop, S., Schmittler, J., and Slusallek, P. (2005), RPU: A Programmable Ray Processing Unit for Realtime Raytracing. ACM Transactions on Graphics, 24 (3), pp. 434–444 (Proceedings of SIGGRAPH 2005).
[Jarosz 08] Jarosz, W., Jensen, H. W., and Donner, C. (2008), Advanced Global Illumination using Photon Mapping. ACM SIGGRAPH 2008 Classes.
[Nyquist 28] Nyquist, H. (1928), Certain Topics in Telegraph Transmission Theory. Transactions of the AIEE, Volume 47, pp. 617–644. (Reprinted in Proceedings of the IEEE, Volume 90 (2), 2002.)
[Fuchs 80] Fuchs, H., Kedem, Z. M., and Naylor, B. F. (1980), On Visible Surface Generation by A Priori Tree Structures. In Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques, pp. 124–133.
[Cook 87] Cook, R. L., Carpenter, L., and Catmull, E. (1987), The Reyes Rendering Architecture. In Proceedings of SIGGRAPH '87, pp. 95–102.
[Foley 90] Foley, J. D., van Dam, A., Feiner, S. K., and Hughes, J. F. (1990), Computer Graphics: Principles and Practice, 2nd Ed.
[Hanrahan 93] Hanrahan, P. and Krueger, W. (1993), Reflection from Layered Surfaces due to Subsurface Scattering. In Proceedings of SIGGRAPH 1993, pp. 165–174.
[Weidlich 08] Weidlich, A. and Wilkie, A. (2008), Realistic Rendering of Birefringency in Uniaxial Crystals. ACM Transactions on Graphics, 27 (1), pp. 1–12.
[Henning 04] Henning, C. and Stephenson, P. (2004), Accelerating the Ray Tracing of Height Fields. In Proceedings of the 2nd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, pp. 254–258.
[Houston 06] Houston, B., Nielsen, M. B., Batty, C., and Museth, K. (2006), Hierarchical RLE Level Set: A Compact and Versatile Deformable Surface Representation. ACM Transactions on Graphics, Volume 25 (1), pp. 151–175.
[Jensen 01] Jensen, H. W. (2001), Realistic Image Synthesis Using Photon Mapping. AK Peters, Ltd., ISBN 978-1568811475.
[Glaeser 99] Glaeser, G. and Gröller, E. (1999), Fast Generation of Curved Perspectives for Ultra-wide-angle Lenses in VR Applications. The Visual Computer, Volume 15 (7–8), pp. 365–376.
[Glassner 89] Glassner, A. S. (1989), An Introduction to Ray Tracing. Morgan Kaufmann, ISBN 978-0122861604.
[Blinn 77] Blinn, J. F. (1977), Models of Light Reflection for Computer Synthesized Pictures. In Proceedings of the 4th Annual Conference on Computer Graphics and Interactive Techniques, pp. 192–198.
[Cook 82] Cook, R. L. and Torrance, K. E. (1982), A Reflectance Model for Computer Graphics. ACM Transactions on Graphics, Volume 1 (1), pp. 7–24.
[Oren 95] Oren, M. and Nayar, S. K. (1995), Generalization of the Lambertian Model and Implications for Computer Vision. International Journal of Computer Vision, Volume 14 (3), pp. 227–251.
[Kajiya 86] Kajiya, J. T. (1986), The Rendering Equation. Computer Graphics, Volume 20 (4), pp. 143–150 (Proceedings of SIGGRAPH '86).
[Reddy 97] Reddy, M. (1997), The Graphics File Formats Page, http://www.martinreddy.net/gfx/3d-hi.html, Updated June 1997, Retrieved February 8, 2009.
[Woo 02] Woo, M., Neider, J. L., Davis, T. R., and Shreiner, D. R. (2002), OpenGL Programming Guide, Third Edition, Addison Wesley, ISBN 0-201-60458-02, p. 667.