REALISTIC IMAGE SYNTHESIS WITH LIGHT TRANSPORT

HUA BINH SON
Bachelor of Engineering

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

DEPARTMENT OF COMPUTER SCIENCE
NATIONAL UNIVERSITY OF SINGAPORE
2015

Declaration

I hereby declare that this thesis is my original work and it has been written by me in its entirety. I have duly acknowledged all the sources of information which have been used in the thesis. This thesis has also not been submitted for any degree in any university previously.

Hua Binh Son
January 2015

Acknowledgements

I would like to express my sincere gratitude to Dr. Low Kok Lim for his continued guidance and support on every one of my projects during the last six years. He brought me to the world of computer graphics and taught me progressive radiosity, my very first lesson about global illumination, which was later set to be the research theme for this thesis. Great thanks also go to Dr. Ng Tian Tsong for his advice and collaboration on the work in Chapter 7, and to Dr. Imari Sato for her kind guidance and collaboration on the work in Chapter 6. I also thank Prof. Tan Tiow Seng for guiding the G3 lab students, including me, on how to commit to high standards in all of our work.

I would like to take this opportunity to thank my G3 lab mates for accompanying me on this long journey. I thank Cao Thanh Tung for occasional discussions about trending technologies, which kept my working days less monotonous; Rahul Singhal for discussions about the principles of life and work of a graduate student; Ramanpreet Singh Pahwa for collaborating on the depth camera calibration project; Cui Yingchao and Delia Sambotin for daring to experiment with my renderer and the interreflection reconstruction project; and Liu Linlin, Le Nguyen Tuong Vu, Wang Lei, Li Ruoru, Ashwin Nanjappa, and Conrado Ruiz for their company over the years of this journey.
Thanks also go to Le Duy Khanh, Le Ton Chanh, Ta Quang Trung, and my other friends for their help and encouragement. Lastly, I would like to express my heartfelt gratitude to my family for their continuous and unconditional support.

Abstract

In interior and lighting design, 3D animation, and computer games, there is a constant demand for producing visually pleasing content for users and audiences. A key to achieving this goal is to render scenes in a physically correct manner and to account for all types of light transport in the scenes, including direct and indirect illumination. Rendering from given scene data can be regarded as forward light transport. In augmented reality, it is often required to render a scene in which real and virtual objects are placed together. The real scene is often captured, and scene information is extracted to provide input for rendering. For this task, the light transport matrix can be used. Inverse light transport is the process of extracting scene information, e.g., geometry and materials, from a light transport matrix. Understanding both forward and inverse light transport is therefore important for producing realistic images.

This thesis is a two-part study of light transport. The first part is dedicated to forward light transport, focusing on global illumination and many-light rendering. First, a new importance sampling technique built upon virtual point lights and the Metropolis-Hastings algorithm is presented. Second, an approach to reduce artifacts in many-light rendering is proposed. Our experiments show that our techniques can improve the effectiveness of many-light rendering by reducing noise and visual artifacts. The second part of the thesis is a study of inverse light transport. First, an extension to compressive dual photography is presented to accelerate the demultiplexing of dual images, which is useful for previewing light transport capture.
Second, a new formulation to acquire geometry from radiometric data such as interreflections is presented. Our experiments with synthetic data show that depth and surface orientation can be reconstructed by solving a system of polynomials.

Contents

List of Figures
List of Tables
List of Algorithms

1 Introduction

2 Fundamentals of realistic image synthesis
  2.1 Radiometry
    2.1.1 Radiance
    2.1.2 Invariance of radiance in homogeneous media
    2.1.3 Solid angle
    2.1.4 The rendering equation
    2.1.5 The area integral
    2.1.6 The path integral
  2.2 Monte Carlo integration
    2.2.1 Monte Carlo estimator
    2.2.2 Solving the rendering equation with Monte Carlo estimators
  2.3 Materials
    2.3.1 The Lambertian model
    2.3.2 Modified Phong model
    2.3.3 Anisotropic Ward model
    2.3.4 Perfect mirror
    2.3.5 Glass
  2.4 Geometry
    2.4.1 Octree
    2.4.2 Sampling basic shapes
  2.5 Light
    2.5.1 Spherical light
    2.5.2 Rectangular light

3 Global illumination algorithms
  3.1 Direct illumination
    3.1.1 Multiple importance sampling
  3.2 Unidirectional path tracing
    3.2.1 Path tracing
    3.2.2 Light tracing
  3.3 Bidirectional path tracing
    3.3.1 State of the art in path tracing
  3.4 Photon mapping
  3.5 Many-light rendering
    3.5.1 Generating VPLs and VPSes
    3.5.2 Gathering illumination from VPLs
    3.5.3 Visibility query
    3.5.4 Progressive many-light rendering
    3.5.5 Bias in many-light rendering
    3.5.6 Clustering of VPLs
    3.5.7 Glossy surfaces
  3.6 Interactive and real-time global illumination
  3.7 Conclusions

4 Guided path tracing using virtual point lights
  4.1 Related works
    4.1.1 Many-light rendering
    4.1.2 Importance sampling with VPLs
  4.2 Our method
    4.2.1 Estimating incoming radiance
    4.2.2 Metropolis sampling
    4.2.3 Estimating the total incoming radiance
    4.2.4 Sampling the product of incoming radiance and BRDF
    4.2.5 VPL clustering
  4.3 Implementation details
  4.4 Experimental results
  4.5 Conclusions
5 Reducing artifacts in many-light rendering
  5.1 Related works
  5.2 Virtual point light
  5.3 Our method
    5.3.1 Generating the clamping map
    5.3.2 Analyzing the clamping map
    5.3.3 Generating extra VPLs
    5.3.4 Implementation details
  5.4 Experimental results
  5.5 Conclusions

6 Direct and progressive reconstruction of dual photography images
  6.1 Dual photography
  6.2 Related works
  6.3 Compressive dual photography
  6.4 Direct and progressive reconstruction
    6.4.1 Direct reconstruction
    6.4.2 Progressive reconstruction
  6.5 Implementation
  6.6 Experiments
    6.6.1 Running time analysis
  6.7 More results
  6.8 Discussion
  6.9 Conclusions

7 Reconstruction of depth and normals from interreflections
  7.1 Geometry from light transport
  7.2 Related works
    7.2.1 Conventional methods
    7.2.2 Hybrid methods
    7.2.3 Reconstruction in the presence of global illumination
  7.3 Interreflections in light transport
  7.4 Geometry reconstruction from interreflections
    7.4.1 Polynomial equations from interreflections
    7.4.2 Algorithm to recover location and orientation
    7.4.3 Implementation
  7.5 Experiments
  7.6 Conclusions

8 Conclusions

References

A More implementation details
  A.1 Probability density function
    A.1.1 Changing variables in probability density function
    A.1.2 Deriving cosine-weighted sampling formula
  A.2 Form factor
  A.3 Conversion between VPL and photon
    A.3.1 Reflected radiance using photons
    A.3.2 Reflected radiance using VPLs
    A.3.3 From photon to VPL
    A.3.4 From VPL to photon
  A.4 Hemispherical mapping

List of Figures

2.1 From left to right: flux, radiosity, and radiance.
2.2 Solid angle.
2.3 Three-point light transport.
2.4 Sampling the Phong BRDF model.
2.5 Sampling the Ward BRDF model based on the half vector ωh.
2.6 The modified Cornell box.
2.7 A 2D visualization of a quad-tree. Thickness of the border represents the level of a tree node.
The thickest border represents the root.
2.8 Sampling spherical and rectangular light.
3.1 Sampling points on the light sources vs. sampling directions from the BSDF. Figure derived from [Gruenschloss et al. 2012] (see page 14).
3.2 Multiple importance sampling. Images are rendered with 64 samples.
3.3 Path tracing.
3.4 Direct illumination and global illumination. The second row is generated by path tracing. The Sibenik and Sponza scenes are from [McGuire 2011].
3.5 The modified Cornell box rendered by (a) light tracing and (b) path tracing. Note the smoother caustics with fewer samples in (a).
3.6 Different ways to generate a complete light path.
3.7 The Cornell box rendered by many-light rendering.
3.8 Complex scenes rendered by many-light rendering. The Kitchen scene is from [Hardy 2012]; the Natural History and the Christmas scenes are from [Birn 2014].
3.9 The gathering process with VPLs generated by tracing (a) light paths and (c)-(e) eye paths of length two.
4.1 An overview of our approach. We sample directions based on the distribution of incoming radiance estimated by virtual point lights. The main steps of our approach are as follows. (a) A set of VPLs is first generated. (b) Surface points visible to the camera are generated and grouped into clusters based on their locations and orientations. The representatives of the clusters are used as cache points which store illumination from the VPLs and guide directional sampling. (c) The light transport from the VPLs to the cache points is computed. To support scalability, for each cache point the VPLs are clustered adaptively by following LightSlice [Ou and Pellacini 2011]. (d) We can now sample directions based on incoming radiance estimated by the VPL clusters. At each cache point, we store a sample buffer and fill it with directions generated by the Metropolis algorithm.
(e) In Monte Carlo path tracing, to sample at an arbitrary surface point, we query the nearest cache point and fetch a direction from its sample buffer.
4.2 Visualization of incoming radiance distributions at various points in the Cornell box scene, from left to right: (i) incoming radiance as seen from the nearest cache point; (ii) the density map; (iii) the histogram from the Metropolis sampler; (iv) the ground truth incoming radiance seen from the gather point.
4.3 Absolute error plots of the example scenes. While Metropolis sampling does not always outperform BRDF sampling, combining both techniques using MIS gives far more accurate results.
4.4 The results of our tested scenes. Odd rows: results by Metropolis sampling, BRDF sampling, MIS, and by Vorba et al. [2014]. Even rows: error heat maps of Metropolis sampling, BRDF sampling, MIS, and the ground truth.
5.1 Progressive rendering of the Kitchen scene [Hardy 2012]. Our method allows progressive rendering with fewer bright spots.
5.2 A clamping map from the Kitchen scene.
5.3 Extra VPLs are generated by sampling the cone subtended by a virtual sphere at the VPL that causes artifacts.
5.4 Progressive rendering of the Conference scene [McGuire 2011]. Similarly, our method allows progressive rendering with fewer bright spots.
5.5 The error plot of our tested scenes. The horizontal axis represents the total number of VPLs (in thousands). The vertical axis shows the absolute difference from the ground truth generated by path tracing.
6.1 Dual photography. (a) Camera view. (b) Dual image directly reconstructed from 16000 samples, which is not practical. (c) Dual image progressively reconstructed from only 1000 samples using our method with 64 basis dual images. (d) Dual image reconstructed with the same settings as in (c) but from 1500 samples. The Haar wavelet is used for the reconstruction.
6.2 Comparison between direct and progressive reconstruction.
Dual images (a), (b), and (c) are from direct reconstruction. Dual images (d) and (e) are from progressive reconstruction with 64 basis dual images. (f) The ground truth is generated from the light transport of 16000 samples by inverting the circulant measurement matrix. The Daubechies-8 wavelet is used for the reconstruction.
6.3 Progressive results of the dual image in Figure 6.1(d), obtained by accumulating the reconstructed basis dual images. Our projector-camera setup to acquire light transport is shown in the diagram.
6.4 Relighting of the dual image in Figure 6.2(e).
6.5 Dual photography. (a) Camera view and generated images for capturing light transport. The projector is on the right of the box. (b) Dual image and the progressive reconstruction (floodlit lighting) from 4000 samples using our method with 256 basis dual images. The Haar wavelet is used for the reconstruction. Image size is 256 × 256.
7.1 (a) Synthetic light transport using radiosity. (b) Reconstructed points from exact data by the form factor formula. (c) Reconstructed points from data by a radiosity renderer.
7.2 Reconstruction results with noise of variance 10^-2 and 10^-1 added to input images.

Figure 7.2: Reconstruction results with noise of variance 10^-2 (a) and 10^-1 (b) added to input images.

… steps are needed since the result can be refined in Step 4. We use Levenberg-Marquardt optimization [Moré 1978] in Step 4.

7.5 Experiments

We test our algorithm with a synthetic scene rendered by direct form factor calculation and by a progressive radiosity algorithm. We use 16 area light sources to individually illuminate a known plane Q. The light sources are distributed uniformly on an unknown plane P, and our goal is to reconstruct the locations and orientations of the light sources. For simplicity, we only render direct illumination and set the albedos of scene objects to one.
Therefore, the radiance observed at plane Q can be directly used to find the locations and orientations of the light sources on P. Figure 7.1 demonstrates that our algorithm can successfully reconstruct the location and orientation of each light source. We note that our synthetic example is sufficient to test our reconstruction from the system of polynomials. While our algorithm works with both exact form factor data and data generated by a radiosity renderer in this example, we did notice a slight shift in the geometry reconstructed from the latter as compared to the ground truth. This can be due to inaccuracy in the intensity values generated by radiosity methods.

In practice, captured images can be subject to noise. To test how our method behaves under noise in this synthetic scenario, we add Gaussian noise to the observed pixel values. Figure 7.2 shows that our solver can tolerate a certain amount of noise, with variance up to 10^-1.

We acknowledge that since our method relies on radiometric values, i.e., radiance, and on numerical solvers for reconstruction, our recovered geometry can be susceptible to noise and may not be as accurate as traditional methods based on triangulation.

7.6 Conclusions

We proposed a novel approach to acquire geometry from interreflections. A system of polynomial equations is established directly from the interreflection matrix, and we show that by solving this system, the geometry of the scene, i.e., surface depths and normal vectors, can be jointly reconstructed. Our experimental results demonstrate that our method works well on synthetic datasets up to a certain noise level. Our system is convenient since it does not require calibration.

Our system is limited by the following factors. First, while projector and camera calibration are not needed, a planar checkerboard must be placed in the scene and interact with scene objects in order to simplify the polynomial system.
This restricts how objects can be arranged in the scene. Second, our system can be susceptible to noise. The floating-point implementation of the polynomial solver may return wrong solutions when the input data is perturbed by even a small amount of noise. Third, our model is based on the Lambertian assumption. In practice, this assumption does not always hold: surfaces in the scene may exhibit some degree of glossiness, which violates the interreflection model and causes the system to fail to reconstruct the geometry. Finally, since we rely on acquiring light transport and solving polynomials for geometry reconstruction, our system is not fast enough for real-time reconstruction.

From this study, we recognize several open problems for future research. A potential direction is to design reconstruction methods for more general materials, e.g., glossy or sub-surface scattering surfaces. It is more challenging to fully model such effects than to model diffuse interreflections. Moreover, extracting the global illumination matrix in such cases can be more difficult if the first-bounce matrix is not given. One of the first works in this direction, shape from translucent surfaces, has been proposed in [Inoshita et al. 2012]. Another potential direction is to investigate the stability of the polynomial solver used in our approach. In this work, we only used the simplest floating-point implementation of a polynomial solver. We hypothesize that the solver can perform better if stabilization approaches are added [Byrod et al. 2009]. Finally, it is of great interest to study fast light transport acquisition to accelerate the data capturing stage and make the system more practical. We would also like to perform more physical experiments to test our whole proposed pipeline thoroughly, since in this work we only present synthetic examples.
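The Step 4 refinement mentioned in the experiments uses Levenberg-Marquardt optimization [Moré 1978]. As a minimal, self-contained illustration of that idea, the following pure-Python sketch refines a root of a small polynomial system with damped Gauss-Newton (Levenberg-Marquardt) steps. The system here is a toy stand-in, not the actual interreflection polynomials of Section 7.4, and all names are illustrative.

```python
# Toy polynomial system: f1 = x^2 + y^2 - 1, f2 = x - y.
# A Levenberg-Marquardt loop refines an initial guess toward a root.

def residuals(p):
    x, y = p
    return [x * x + y * y - 1.0, x - y]

def jacobian(p):
    x, y = p
    return [[2.0 * x, 2.0 * y],
            [1.0, -1.0]]

def levenberg_marquardt(p, lam=1e-3, iters=50):
    for _ in range(iters):
        r = residuals(p)
        J = jacobian(p)
        # Normal equations (J^T J + lam * I) dp = -J^T r, solved
        # explicitly for the 2x2 case.
        a = J[0][0] ** 2 + J[1][0] ** 2 + lam
        b = J[0][0] * J[0][1] + J[1][0] * J[1][1]
        d = J[0][1] ** 2 + J[1][1] ** 2 + lam
        g0 = -(J[0][0] * r[0] + J[1][0] * r[1])
        g1 = -(J[0][1] * r[0] + J[1][1] * r[1])
        det = a * d - b * b
        dp0 = (d * g0 - b * g1) / det
        dp1 = (a * g1 - b * g0) / det
        trial = [p[0] + dp0, p[1] + dp1]
        # Accept the step only if it reduces the squared residual.
        if sum(v * v for v in residuals(trial)) < sum(v * v for v in r):
            p, lam = trial, lam * 0.5   # success: relax damping
        else:
            lam *= 10.0                 # failure: increase damping
    return p

x, y = levenberg_marquardt([1.0, 0.2])
# Converges to the root near (1/sqrt(2), 1/sqrt(2)).
```

The damping parameter interpolates between gradient descent (large lam, robust far from the solution) and Gauss-Newton (small lam, fast near the solution), which is why such a refinement step can rescue a polynomial solution perturbed by noise.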
Chapter 8

Conclusions

This thesis explores forward and inverse light transport. In the first part, many-light rendering is studied. Two important problems in many-light rendering are investigated: importance sampling using virtual lights, and artifact removal. Our experiments demonstrate that our proposed solutions are effective. In the second part, two problems in light transport acquisition and analysis are addressed, and solutions to these problems were implemented successfully.

While this thesis studies both forward and inverse light transport, bridging the gap between these two areas needs further research. While both areas share light transport and the light transport matrix as a common factor, the problems in each area require different fundamental techniques. For example, in forward rendering, one generally uses Monte Carlo integration, the rendering equation, and rendering algorithms such as path tracing, photon mapping, and many-light rendering. In inverse light transport, one needs hierarchical clustering, compressive sensing, and optimization techniques. It is therefore quite challenging to bring such seemingly separate and independent problems into a unified framework. Forward rendering seldom uses the raw form of light transport acquired from the real world directly, and inverse light transport requires more technical advances to build scenes and render high-quality images from real-world light transport matrices efficiently.

This thesis leads to a few important open problems to explore. First, in forward light transport, many-light rendering can be integrated into existing Monte Carlo path tracing algorithms to guide them toward faster convergence. Adapting many-light rendering techniques to real-time applications is also challenging. Second, in inverse light transport, indirect illumination is a good source of information about geometry and materials.
It would be interesting to investigate material acquisition from indirect illumination. Finally, it is interesting to ask whether there exists a sampling approach that can be used to construct light transport in both forward and inverse rendering.

References

Aila, T. and Laine, S. 2009. Understanding the efficiency of ray traversal on GPUs. In Proceedings of the Conference on High Performance Graphics 2009. HPG '09.
Aliaga, D. G. and Xu, Y. 2008. Photogeometric structured light: A self-calibrating and multi-viewpoint framework for accurate 3D modeling. In Computer Vision and Pattern Recognition (CVPR).
Baraniuk, R. 2002. Rice wavelet toolbox.
Baraniuk, R. 2007. Compressive sensing. IEEE Signal Processing Magazine, 118–120.
Basri, R., Jacobs, D., and Kemelmacher, I. 2007. Photometric stereo with general, unknown lighting. International Journal of Computer Vision (IJCV) 72, 239–257.
Birn, J. 2014. Lighting challenges.
Burke, D., Ghosh, A., and Heidrich, W. 2005. Bidirectional importance sampling for direct illumination. In Proceedings of the Sixteenth Eurographics Conference on Rendering Techniques. EGSR '05.
Byrod, M., Josephson, K., and Astrom, K. 2009. Fast and stable polynomial equation solving and its application to computer vision. International Journal of Computer Vision (IJCV) 84, 237–256.
Chu, X., Ng, T.-T., Pahwa, R., Quek, T. Q., and Huang, T. 2011. Compressive inverse light transport. In Proceedings of the British Machine Vision Conference.
Cohen, M. F., Chen, S. E., Wallace, J. R., and Greenberg, D. P. 1988. A progressive refinement approach to fast radiosity image generation. In Proceedings of the 15th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '88.
Couture, V., Martin, N., and Roy, S. 2011. Unstructured light scanning to overcome interreflections. In International Conference on Computer Vision (ICCV). 1895–1902.
Dachsbacher, C., Křivánek, J., Hašan, M., Arbree, A., Walter, B., and Novák, J. 2014.
Scalable realistic rendering with many-light methods. Computer Graphics Forum 33, 1, 88–104.
Dachsbacher, C. and Stamminger, M. 2005. Reflective shadow maps. In Proceedings of the 2005 Symposium on Interactive 3D Graphics and Games. I3D '05.
Dachsbacher, C. and Stamminger, M. 2006. Splatting indirect illumination. In Proceedings of the 2006 Symposium on Interactive 3D Graphics and Games. I3D '06.
Dachsbacher, C., Stamminger, M., Drettakis, G., and Durand, F. 2007. Implicit visibility and antiradiance for interactive global illumination. In ACM SIGGRAPH 2007 Papers. SIGGRAPH '07.
Dammertz, H., Keller, A., and Lensch, H. P. A. 2010. Progressive point-light-based global illumination. Computer Graphics Forum 29, 8, 2504–2515.
Davidovič, T., Křivánek, J., Hašan, M., Slusallek, P., and Bala, K. 2010. Combining global and local virtual lights for detailed glossy illumination. In ACM SIGGRAPH Asia 2010 Papers. SIGGRAPH Asia '10.
Dutre, P., Bala, K., Bekaert, P., and Shirley, P. 2006. Advanced Global Illumination. AK Peters Ltd.
Engelhardt, T., Novák, J., Schmidt, T.-W., and Dachsbacher, C. 2012. Approximate bias compensation for rendering scenes with heterogeneous participating media. Computer Graphics Forum (Proceedings of Pacific Graphics 2012) 31, 7, 2145–2154.
Fischler, M. A. and Bolles, R. C. 1981. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24 (June), 381–395.
Georgiev, I., Křivánek, J., Davidovič, T., and Slusallek, P. 2012. Light transport simulation with vertex connection and merging. ACM Trans. Graph. 31 (Nov.), 192:1–192:10.
Georgiev, I., Křivánek, J., Popov, S., and Slusallek, P. 2012. Importance caching for complex illumination. Computer Graphics Forum 31, 2. EUROGRAPHICS 2012.
Georgiev, I. and Slusallek, P. 2010. Simple and robust iterative importance sampling of virtual point lights. In Proceedings of Eurographics 2010 (short papers). 57–60.
Goldstein, T. and Osher, S.
2009. The split Bregman method for L1-regularized problems. SIAM J. Img. Sci. 2, 323–343.
Gruenschloss, L., Keller, A., Premoze, S., and Raab, M. 2012. Advanced (quasi) Monte Carlo methods for image synthesis. In ACM SIGGRAPH 2012 Courses. SIGGRAPH '12.
Gupta, M., Agrawal, A., Veeraraghavan, A., and Narasimhan, S. 2013. A practical approach to 3D scanning in the presence of interreflections, subsurface scattering and defocus. International Journal of Computer Vision (IJCV) 102, 1-3, 33–55.
Gupta, M. and Nayar, S. K. 2012. Micro phase shifting. In Computer Vision and Pattern Recognition (CVPR). 813–820.
Gupta, M., Tian, Y., Narasimhan, S., and Zhang, L. 2012. A combined theory of defocused illumination and global light transport. International Journal of Computer Vision (IJCV) 98, 146–167.
Hachisuka, T., Jarosz, W., Weistroffer, R. P., Dale, K., Humphreys, G., Zwicker, M., and Jensen, H. W. 2008. Multidimensional adaptive sampling and reconstruction for ray tracing. In ACM SIGGRAPH 2008 Papers. SIGGRAPH '08.
Hachisuka, T. and Jensen, H. W. 2009. Stochastic progressive photon mapping. In ACM SIGGRAPH Asia 2009 Papers. SIGGRAPH Asia '09.
Hachisuka, T., Ogaki, S., and Jensen, H. W. 2008. Progressive photon mapping. In ACM SIGGRAPH Asia 2008 Papers. SIGGRAPH Asia '08.
Hachisuka, T., Pantaleoni, J., and Jensen, H. W. 2012. A path space extension for robust light transport simulation. ACM Trans. Graph. 31 (Nov.), 191:1–191:10.
Hardy, J. 2012. Country Kitchen - Cycles - Blender 2.62.
Hašan, M., Křivánek, J., Walter, B., and Bala, K. 2009. Virtual spherical lights for many-light rendering of glossy scenes. In ACM SIGGRAPH Asia 2009 Papers. SIGGRAPH Asia '09.
Hašan, M., Pellacini, F., and Bala, K. 2007. Matrix row-column sampling for the many-light problem. In ACM SIGGRAPH 2007 Papers. SIGGRAPH '07.
Hey, H. and Purgathofer, W. 2002. Importance sampling with hemispherical particle footprints. In Proceedings of the 18th Spring Conference on Computer Graphics. SCCG '02.
ACM, New York, NY, USA, 107–114.
Holroyd, M., Lawrence, J., and Zickler, T. 2010. A coaxial optical scanner for synchronous acquisition of 3D geometry and surface reflectance. ACM Trans. Graph.
Inoshita, C., Mukaigawa, Y., Matsushita, Y., and Yagi, Y. 2012. Shape from single scattering for translucent objects. In European Conference on Computer Vision (ECCV).
Iwahori, Y., Sugie, H., and Ishii, N. 1990. Reconstructing shape from shading images under point light source illumination. In International Conference on Pattern Recognition (ICPR). Vol. 1. 83–87.
Jakob, W. 2010. Mitsuba renderer. http://www.mitsuba-renderer.org.
Jakob, W. and Marschner, S. 2012. Manifold exploration: A Markov chain Monte Carlo technique for rendering scenes with difficult specular transport. ACM Trans. Graph. 31 (July), 58:1–58:13.
Jensen, H. W. 1995. Importance driven path tracing using the photon map. In Eurographics Rendering Workshop. 326–335.
Jensen, H. W. 1996. Global illumination using photon maps. In Proceedings of the Eurographics Workshop on Rendering Techniques '96. 21–30.
Kajiya, J. T. 1986. The rendering equation. In Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '86. 143–150.
Kaplanyan, A. and Dachsbacher, C. 2010. Cascaded light propagation volumes for real-time indirect illumination. In Proceedings of the 2010 ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games. I3D '10.
Kaplanyan, A. and Dachsbacher, C. 2013a. Path space regularization for holistic and robust light transport. Computer Graphics Forum 32, 2, 63–72.
Kaplanyan, A. S. and Dachsbacher, C. 2013b. Adaptive progressive photon mapping. ACM Trans. Graph. 32 (Apr.), 16:1–16:13.
Keller, A. 1997. Instant radiosity. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '97. 49–56.
Kim, B. and Burger, P. 1991. Depth and shape from shading using the photometric stereo method.
CVGIP: Image Understanding 54, 3, 416–427.
Knaus, C. and Zwicker, M. 2011. Progressive photon mapping: A probabilistic approach. ACM Trans. Graph. 30 (May), 25:1–25:13.
Kollig, T. and Keller, A. 2006. Illumination in the presence of weak singularities. In Monte Carlo and Quasi-Monte Carlo Methods 2004. Springer Berlin Heidelberg, 245–257.
Křivánek, J., Ferwerda, J. A., and Bala, K. 2010. Effects of global illumination approximations on material appearance. In ACM SIGGRAPH 2010 Papers. SIGGRAPH '10.
Lafortune, E. P. and Willems, Y. D. 1994. Using the modified Phong reflectance model for physically based rendering. Tech. rep., Department of Computer Science, K.U.Leuven.
Lehtinen, J., Karras, T., Laine, S., Aittala, M., Durand, F., and Aila, T. 2013. Gradient-domain Metropolis light transport. ACM Trans. Graph. 32, 4.
Li, T.-M., Wu, Y.-T., and Chuang, Y.-Y. 2012. SURE-based optimization for adaptive sampling and reconstruction. ACM Trans. Graph. (Proceedings of ACM SIGGRAPH Asia 2012) 31 (Nov.), 186:1–186:9.
Liu, S., Ng, T.-T., and Matsushita, Y. 2010. Shape from second-bounce of light transport. In European Conference on Computer Vision (ECCV).
Loos, B. J., Antani, L., Mitchell, K., Nowrouzezahrai, D., Jarosz, W., and Sloan, P.-P. 2011. Modular radiance transfer. In Proceedings of the 2011 SIGGRAPH Asia Conference. SIGGRAPH Asia '11.
Mara, M., Luebke, D., and McGuire, M. 2013. Toward practical real-time photon mapping: Efficient GPU density estimation. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games. I3D '13.
McGuire, M. 2011. Computer graphics archive.
MERL. 2006. MERL BRDF database. http://www.merl.com/brdf/.
Moré, J. J. 1978. The Levenberg-Marquardt algorithm: Implementation and theory. In Numerical Analysis, G. Watson, Ed. Lecture Notes in Mathematics, vol. 630. Springer Berlin Heidelberg, 105–116.
Nayar, S. K., Ikeuchi, K., and Kanade, T. 1991. Shape from interreflections.
International Journal of Computer Vision (IJCV) 6, 173–195.
Nayar, S. K., Krishnan, G., Grossberg, M. D., and Raskar, R. 2006. Fast separation of direct and global components of a scene using high frequency illumination. ACM Transactions on Graphics (Proc. ACM SIGGRAPH).
Ng, R., Ramamoorthi, R., and Hanrahan, P. 2004. Triple product wavelet integrals for all-frequency relighting. ACM Trans. Graph. 23, 477–487.
Nichols, G. and Wyman, C. 2009. Multiresolution splatting for indirect illumination. In Proceedings of the 2009 Symposium on Interactive 3D Graphics and Games. I3D '09.
Novák, J., Engelhardt, T., and Dachsbacher, C. 2011. Screen-space bias compensation for interactive high-quality global illumination with virtual point lights. In Symposium on Interactive 3D Graphics and Games. I3D '11.
Novák, J., Nowrouzezahrai, D., Dachsbacher, C., and Jarosz, W. 2012a. Progressive virtual beam lights. Computer Graphics Forum (Proceedings of EGSR 2012) 31 (July).
Novák, J., Nowrouzezahrai, D., Dachsbacher, C., and Jarosz, W. 2012b. Virtual ray lights for rendering scenes with participating media. ACM Trans. Graph. 31 (July), 60:1–60:11.
O'Toole, M. and Kutulakos, K. N. 2010. Optical computing for fast light transport analysis. In SIGGRAPH Asia.
Ou, J. and Pellacini, F. 2011. LightSlice: Matrix slice sampling for the many-lights problem. In Proceedings of the 2011 SIGGRAPH Asia Conference. SIGGRAPH Asia '11.
Peers, P., Mahajan, D. K., Lamond, B., Ghosh, A., Matusik, W., Ramamoorthi, R., and Debevec, P. 2009. Compressive light transport sensing. ACM Trans. Graph. 28, 1, 1–18.
Phong, B. T. 1975. Illumination for computer generated pictures. Commun. ACM 18 (June), 311–317.
Popov, S., Georgiev, I., Slusallek, P., and Dachsbacher, C. 2013. Adaptive quantization visibility caching. Computer Graphics Forum 32, 2, 399–408.
Ritschel, T., Engelhardt, T., Grosch, T., Seidel, H.-P., Kautz, J., and Dachsbacher, C. 2009. Micro-rendering for scalable, parallel final gathering.
In ACM SIGGRAPH Asia 2009 Papers. SIGGRAPH Asia '09.
Ritschel, T., Grosch, T., Kim, M. H., Seidel, H.-P., Dachsbacher, C., and Kautz, J. 2008. Imperfect shadow maps for efficient computation of indirect illumination. In ACM SIGGRAPH Asia 2008 Papers. SIGGRAPH Asia '08.
Ritschel, T., Grosch, T., and Seidel, H.-P. 2009. Approximating dynamic global illumination in image space. In Proceedings of the 2009 Symposium on Interactive 3D Graphics and Games. I3D '09.
Schechner, Y. Y., Nayar, S. K., and Belhumeur, P. N. 2003. A theory of multiplexed illumination. In IEEE International Conference on Computer Vision (ICCV). Vol. 2. 808–815.
Schlick, C. 1994. An inexpensive BRDF model for physically-based rendering. Computer Graphics Forum 13, 233–246.
Segovia, B., Iehl, J. C., Mitanchey, R., and Péroche, B. 2006. Bidirectional instant radiosity. In Proceedings of the 17th Eurographics Conference on Rendering Techniques. EGSR '06. 389–397.
Seitz, S. M., Matsushita, Y., and Kutulakos, K. N. 2005. A theory of inverse light transport. In International Conference on Computer Vision (ICCV).
Sen, P., Chen, B., Garg, G., Marschner, S. R., Horowitz, M., Levoy, M., and Lensch, H. P. A. 2005. Dual photography. In ACM SIGGRAPH. 745–755.
Sen, P. and Darabi, S. 2009. Compressive dual photography. Computer Graphics Forum 28, 2, 609–618.
Shirley, P. and Chiu, K. 1997. A low distortion map between disk and square. J. Graph. Tools 2 (Dec.), 45–52.
Shirley, P., Wang, C., and Zimmerman, K. 1996. Monte Carlo techniques for direct lighting calculations. ACM Trans. Graph. 15 (Jan.), 1–36.
Talbot, J. F., Cline, D., and Egbert, P. 2005. Importance resampling for global illumination. In Proceedings of the Sixteenth Eurographics Conference on Rendering Techniques. EGSR '05. 139–146.
Tokuyoshi, Y. and Ogaki, S. 2012. Real-time bidirectional path tracing via rasterization. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games. I3D '12.
Veach, E. 1998.
Robust Monte Carlo methods for light transport simulation. Ph.D. thesis, Stanford, CA, USA.
Veach, E. and Guibas, L. J. 1997. Metropolis light transport. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH ’97. 65–76.
Vorba, J., Karlík, O., Šik, M., Ritschel, T., and Křivánek, J. 2014. On-line learning of parametric mixture models for light transport simulation. ACM Trans. Graph. 33, (July), 101:1–101:11.
Walter, B. 2005. Notes on the Ward BRDF. Tech. rep., Program of Computer Graphics, Cornell University. April.
Walter, B., Arbree, A., Bala, K., and Greenberg, D. P. 2006. Multidimensional lightcuts. In ACM SIGGRAPH 2006 Papers. SIGGRAPH ’06.
Walter, B., Fernandez, S., Arbree, A., Bala, K., Donikian, M., and Greenberg, D. P. 2005. Lightcuts: A scalable approach to illumination. In ACM SIGGRAPH 2005 Papers. SIGGRAPH ’05.
Walter, B., Khungurn, P., and Bala, K. 2012. Bidirectional lightcuts. ACM Trans. Graph. 31, (July), 59:1–59:11.
Wang, C. A. 1994. The direct lighting computation in global illumination methods. Ph.D. thesis, Bloomington, IN, USA.
Wang, R. and Åkerlund, O. 2009. Bidirectional importance sampling for unstructured direct illumination. Computer Graphics Forum 28, 2, 269–278.
Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13, (Apr.), 600–612.
Ward, G. J. 1992. Measuring and modeling anisotropic reflection. In Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH ’92. 265–272.
Wayne, W. 2014. The breakfast room - Cycles - Blender 2.71.
Wu, Y.-T. and Chuang, Y.-Y. 2013. VisibilityCluster: average directional visibility for many-light rendering. IEEE Transactions on Visualization and Computer Graphics 19, (Sept.), 1566–1578.
Yamazaki, S., Mochimaru, M., and Kanade, T. 2011.
Simultaneous self-calibration of a projector and a camera using structured light. In Computer Vision and Pattern Recognition Workshops (CVPRW). 60–67.
Yin, W., Morgan, S., Yang, J., and Zhang, Y. 2010. Practical compressive sensing with Toeplitz and circulant matrices. Proceedings of Visual Communications and Image Processing (VCIP).
Yoon, K.-J., Prados, E., and Sturm, P. 2010. Joint estimation of shape and reflectance using multiple images with known illumination conditions. International Journal of Computer Vision (IJCV) 86, 2-3 (Jan.), 192–210.

Appendix A

More implementation details

A.1 Probability density function

A.1.1 Changing variables in a probability density function

The probabilities p(ω) and p(θ, φ) are related by

    p(\omega)\,d\omega = \bigl(p(\omega)\sin\theta\bigr)\,d\theta\,d\phi = p(\theta,\phi)\,d\theta\,d\phi,    (A.1)

which leads to

    p(\theta,\phi) = p(\omega)\sin\theta.    (A.2)

A.1.2 Deriving the cosine-weighted sampling formula

For cosine-weighted hemisphere sampling, p(ω) = cos θ/π, so p(θ, φ) = sin θ cos θ/π and the marginal probability p(θ) is

    p(\theta) = \int_0^{2\pi} p(\theta,\phi)\,d\phi = 2\sin\theta\cos\theta = \sin 2\theta.    (A.3)

In order to sample θ from a uniform variable δ₁, we simply set the cumulative distribution function F(θ) equal to δ₁:

    F(\theta) = \int_0^{\theta} p(\theta')\,d\theta' = \sin^2\theta = \delta_1.    (A.4)

From this we obtain

    \theta = \sin^{-1}\!\bigl(\sqrt{\delta_1}\bigr).    (A.5)

Given θ, we now proceed to sample φ. We have

    p(\phi \mid \theta) = \frac{p(\theta,\phi)}{p(\theta)} = \frac{1}{2\pi}.    (A.6)

The cumulative distribution F(φ | θ) is easily derived:

    F(\phi \mid \theta) = \int_0^{\phi} p(\phi' \mid \theta)\,d\phi' = \frac{\phi}{2\pi} = \delta_2.    (A.7)

Therefore, we have

    \phi = 2\pi\delta_2.    (A.8)

A.2 Form factor

In global illumination algorithms, the form factor is an important mathematical term for computing light bounces among surfaces. The form factor represents the fraction of power transferred from a patch at y to a patch at x and can be written as

    F(y, x) = \frac{1}{A_x} \int_{A_x} \int_{A_y} \frac{G(y, x)\,V(y, x)}{\pi}\,dA(y)\,dA(x).    (A.9)

The form factor can be interpreted as the average power from y to x per unit surface area at x. Note that

    A_x\,F(y, x) = A_y\,F(x, y),    (A.10)

which relates the form factor from y to x to the form factor from x to y, where A_x and A_y are the areas of the patches at x and y, respectively.
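The reciprocity relation (A.10) lends itself to a quick numerical sanity check. The sketch below is an illustrative Python snippet of ours (not code from the thesis): it estimates Eq. (A.9) for two mutually visible parallel patches one unit apart (so V = 1 and each cosine equals 1/r), using uniform area sampling, under which the 1/(A_x A_y) sample density cancels the 1/A_x prefactor and one area factor, leaving area_from · mean(G/π).

```python
import math
import random

def mc_form_factor(area_from, sample_from, sample_to, n, rng):
    """Monte Carlo estimate of F(from, to) per Eq. (A.9) for two parallel
    horizontal patches one unit apart with no occluder (V = 1).
    With uniform area sampling the estimator reduces to
    area_from * mean(G / pi), where G = cos*cos/r^2 = 1/r^4 here."""
    acc = 0.0
    for _ in range(n):
        fx, fy = sample_from(rng)          # point on the 'from' patch, z = 1
        tx, ty = sample_to(rng)            # point on the 'to' patch,  z = 0
        r2 = (fx - tx) ** 2 + (fy - ty) ** 2 + 1.0
        acc += 1.0 / (r2 * r2)             # G = (1/r)(1/r)/r^2 = 1/r^4
    return area_from * acc / (n * math.pi)

rng = random.Random(9)
on_y = lambda r: (2.0 * r.random(), r.random())   # patch at y: 2 x 1, area 2
on_x = lambda r: (r.random(), r.random())         # patch at x: 1 x 1, area 1
f_yx = mc_form_factor(2.0, on_y, on_x, 100_000, rng)
f_xy = mc_form_factor(1.0, on_x, on_y, 100_000, rng)
# Reciprocity, Eq. (A.10): A_x F(y, x) = A_y F(x, y), up to Monte Carlo noise.
assert abs(1.0 * f_yx - 2.0 * f_xy) < 0.01
```

Any unoccluded patch pair works the same way; only the cosine and distance terms change with the geometry.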
This relation is used in progressive radiosity [Cohen et al. 1988] to centralize form-factor computation from a patch x to all other patches at each iteration. The form factor can also be approximated by

    F(y, x) = \int_{A_y} \frac{G(y, x)\,V(y, x)}{\pi}\,dA(y),    (A.11)

if the patch at x is very small. If the patch at y is also small and far away, we can further approximate

    F(y, x) = \frac{G(y, x)\,V(y, x)}{\pi}\,A(y).    (A.12)

Form factors are computed numerically; closed-form formulas exist only for special configurations such as two parallel planes or a pair of perpendicular planes (see [Dutre et al. 2006], page 215). In Monte Carlo estimation, the form factor appears as the geometry term G: sampling over areas and solid angles "hides" the area term of the form factor in the probability term. In Chapter 7, the form factor is further analyzed for geometry reconstruction.

A.3 Conversion between VPLs and photons

A.3.1 Reflected radiance using photons

In photon mapping, each particle stores the flux of a photon, whereas in VPL rendering we often store the incident radiance from the previous virtual point light. Note that flux is defined at a single point, while incident radiance is defined from a point y to a point x. As stated by [Hachisuka et al. 2012; Georgiev et al. 2012], this is the one-bounce difference between photon mapping and bidirectional path tracing. Suppose that we need to evaluate the outgoing radiance L_o(x → ω). Photon mapping uses the photons in the local neighborhood of x to approximate the flux incident at x. To convert incident photon flux to reflected radiance, Jensen [1996] proposed the formula

    L_o(x \to \omega) = \sum_y \frac{\Phi_y}{dA_x}\, f_s(\omega_i(y) \to x \to \omega),    (A.13)

where dA_x = πr², r is the radius of the sphere centered at x that contains the nearest N photons, and ω_i(y) is the incident direction along which photon y received its flux from the previous photon. In other words, the area dA_x is approximated by the disk in which this sphere intersects the surface containing x.
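A minimal sketch of the density estimate in Eq. (A.13), assuming a purely diffuse surface with BRDF f_s = ρ/π (illustrative Python of ours; `estimate_radiance`, `dist2`, and the photon tuples are our own names, not from the thesis):

```python
import math

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def estimate_radiance(photons, x, albedo, n_nearest):
    """Photon-mapping radiance estimate, Eq. (A.13), for a diffuse surface:
    L_o = sum(flux_i * f_s) / (pi * r^2), with f_s = albedo / pi and
    r the radius enclosing the n_nearest photons around x.
    `photons` is a list of (position, flux) pairs on the surface."""
    nearest = sorted(photons, key=lambda p: dist2(p[0], x))[:n_nearest]
    r2 = dist2(nearest[-1][0], x)      # squared radius of the gather disk
    f_s = albedo / math.pi             # diffuse BRDF, independent of direction
    return sum(flux for _, flux in nearest) * f_s / (math.pi * r2)

# Four photons at unit distance from the query point, flux pi^2 each:
photons = [((1, 0, 0), math.pi ** 2), ((-1, 0, 0), math.pi ** 2),
           ((0, 1, 0), math.pi ** 2), ((0, -1, 0), math.pi ** 2)]
L = estimate_radiance(photons, (0, 0, 0), albedo=1.0, n_nearest=4)
print(round(L, 6))   # 4*pi^2 * (1/pi) / (pi * 1) = 4.0
```

A production gather would use a kd-tree instead of sorting all photons, but the estimate itself is the same.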
A.3.2 Reflected radiance using VPLs

A virtual point light stores the throughput and the probability of the light path that it represents. Given a VPL at y, the reflected radiance at a surface point x due to y can be calculated as

    L_o(x, \omega) = L_i(y \to x)\,G(y, x)\,f_s(y \to x \to \omega) = \frac{T(\bar{y})}{p(\bar{y})}\,f_s(\omega_i(y) \to y \to x)\,G(y, x)\,f_s(y \to x \to \omega),    (A.14)

where T(ȳ) is the throughput of the light path that ends at the location y, and p(ȳ) is the probability of the path.

A.3.3 From photon to VPL

Given the incident flux Φ_y, the radiant intensity from y to x can be calculated as

    I(y \to x) = L_o(y \to x)\,A_y \cos\theta_y
               = \bigl(L_i(z \to y)\,f_s(z \to y \to x)\cos\delta_y\,\Omega_z\bigr)\,A_y \cos\theta_y
               = \bigl(L_i(z \to y)\,A_y \cos\delta_y\,\Omega_z\bigr)\,f_s(z \to y \to x)\cos\theta_y
               = \Phi_y\,f_s(z \to y \to x)\cos\theta_y,    (A.15)

where δ and θ denote the angles between the surface normal and the incident and outgoing rays, respectively. Therefore, if we treat the photon as a VPL, the reflected radiance at x due to the photon can be derived as follows:

    L_o(x \to \omega) = L_i(y \to x)\,f_s(y \to x \to \omega)\cos\delta_x\,\Omega_y
                      = L_i(y \to x)\,f_s(y \to x \to \omega)\cos\delta_x\,\frac{A_y \cos\theta_y}{\|y - x\|^2}
                      = \bigl(L_o(y \to x)\,A_y \cos\theta_y\bigr)\,f_s(y \to x \to \omega)\,\frac{\cos\delta_x}{\|y - x\|^2}
                      = I(y \to x)\,f_s(y \to x \to \omega)\,\frac{\cos\delta_x}{\|y - x\|^2}
                      = \Phi_y\,f_s(z \to y \to x)\,\frac{\cos\theta_y \cos\delta_x}{\|y - x\|^2}\,f_s(y \to x \to \omega)
                      = \Phi_y\,f_s(z \to y \to x)\,G_{yx}\,f_s(y \to x \to \omega).    (A.16)

A.3.4 From VPL to photon

In order to use a VPL as a photon, it is necessary to evaluate the incident flux at the VPL. We provide a simple derivation as follows. Suppose that the photon at y represents a surface with area A_y and receives its flux from another surface with area A_z. The incident flux at y can be written as

    \Phi_y = \int_{A_y} \int_{A_z} L_i(z \to y)\,\frac{\cos\delta_y \cos\theta_z}{\|z - y\|^2}\,dA(y)\,dA(z) \approx \frac{L_i(z \to y)\,G_{zy}}{p(z)\,p(y)}.    (A.17)

By expanding the incident radiance L_i(z → y) recursively towards the light source, we obtain a general formula to approximate the incident flux:

    \Phi_y = \frac{T(\bar{y})}{p(\bar{y})}.    (A.18)

Therefore, the conversion from a virtual point light to a photon is straightforward.
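The algebra above can be sanity-checked numerically: evaluating Eq. (A.16) step by step through the radiant intensity of Eq. (A.15) must agree with the collapsed form Φ_y f_s G_yx f_s. The snippet below is an illustrative Python check of ours (not thesis code), assuming constant diffuse BRDFs ρ/π and an unoccluded path; all function names are our own.

```python
import math

def geometry_term(cos_theta_y, cos_delta_x, dist):
    """G_yx = cos(theta_y) * cos(delta_x) / ||y - x||^2, no occlusion."""
    return cos_theta_y * cos_delta_x / (dist * dist)

def radiance_via_intensity(flux, fs_at_y, fs_at_x, cos_theta_y, cos_delta_x, dist):
    """Photon-as-VPL route: radiant intensity I(y -> x), Eq. (A.15),
    then L_o = I * f_s * cos(delta_x) / ||y - x||^2, Eq. (A.16)."""
    intensity = flux * fs_at_y * cos_theta_y
    return intensity * fs_at_x * cos_delta_x / (dist * dist)

def radiance_via_geometry(flux, fs_at_y, fs_at_x, cos_theta_y, cos_delta_x, dist):
    """Collapsed final line of Eq. (A.16): L_o = Phi * f_s * G_yx * f_s."""
    return flux * fs_at_y * geometry_term(cos_theta_y, cos_delta_x, dist) * fs_at_x

# One concrete configuration with diffuse BRDFs f_s = rho / pi.
flux, rho_y, rho_x = 2.0, 0.8, 0.6
fs_y, fs_x = rho_y / math.pi, rho_x / math.pi
a = radiance_via_intensity(flux, fs_y, fs_x, 0.9, 0.7, 3.0)
b = radiance_via_geometry(flux, fs_y, fs_x, 0.9, 0.7, 3.0)
assert abs(a - b) < 1e-12   # the two derivations agree
```

The equality holds for any positive inputs, since both routes multiply exactly the same factors in Eq. (A.16).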
A.4 Hemispherical mapping

To map the unit hemisphere to the unit square, we first map the unit hemisphere to the unit disk such that a uniform distribution on the hemisphere becomes a uniform distribution on the disk:

    x = u\,\sqrt{2 - w^2},
    y = v\,\sqrt{2 - w^2},    (A.19)
    z = 1 - w^2,

where w² = u² + v², and (x, y, z) and (u, v) are points on the unit hemisphere and the unit disk, respectively. Points on the unit disk can then be mapped to the unit square using concentric mapping [Shirley and Chiu 1997]. After this mapping, incoming radiance estimation for a direction on the unit hemisphere can be cast as estimation for a point in the unit square. We have dω = 2π ds, or p(ω) = p(s)/(2π), where ω is a point on the unit hemisphere and s is a point in the unit square. The constant factor 2π is needed whenever an integral defined over the unit hemisphere is estimated in the unit-square domain.
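Equation (A.19) can be checked numerically: each mapped point must lie on the upper unit hemisphere, and because w² is uniform on [0, 1] for uniform disk samples, z = 1 − w² is uniform as well, giving E[z] = 1/2 — the mean height of uniformly distributed hemisphere directions. A small illustrative Python check (our code, not the thesis's):

```python
import math
import random

def disk_to_hemisphere(u, v):
    """Equal-area map from the unit disk to the unit hemisphere, Eq. (A.19):
    (u, v) -> (u * sqrt(2 - w2), v * sqrt(2 - w2), 1 - w2), w2 = u^2 + v^2."""
    w2 = u * u + v * v
    s = math.sqrt(2.0 - w2)
    return (u * s, v * s, 1.0 - w2)

random.seed(11)
n = 100_000
mean_z = 0.0
count = 0
while count < n:
    u, v = random.uniform(-1, 1), random.uniform(-1, 1)
    if u * u + v * v > 1.0:
        continue                      # rejection-sample the unit disk
    x, y, z = disk_to_hemisphere(u, v)
    # Every image point is a unit vector in the upper hemisphere.
    assert abs(x * x + y * y + z * z - 1.0) < 1e-12 and z >= 0.0
    mean_z += z
    count += 1
mean_z /= n
# Uniform hemisphere directions have E[z] = 1/2.
assert abs(mean_z - 0.5) < 0.01
```

In a renderer the disk samples would come from the inverse concentric map of unit-square points rather than rejection sampling; the hemisphere mapping itself is unchanged.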