Computer graphics lighting is the collection of techniques used to simulate light in computer graphics scenes. While lighting techniques offer flexibility in the level of detail and functionality available, they also operate at different levels of computational demand and complexity. Graphics artists can choose from a variety of light sources, models, shading techniques, and effects to suit the needs of each application.
In computer graphics, shading refers to the process of altering the color of an object, surface, or polygon in the 3D scene based on factors such as (but not limited to) the surface's angle to lights, its distance from lights, its angle to the camera, and its material properties (e.g., its bidirectional reflectance distribution function), in order to create a photorealistic effect. Shading is performed during the rendering process by a program called a shader.
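As a concrete illustration of those factors, here is a minimal sketch of a diffuse (Lambertian) shader in Python; the function and its parameters are our own illustrative names, not any particular API:

```python
import numpy as np

def lambert_shade(point, normal, light_pos, light_color, albedo):
    """Minimal diffuse (Lambertian) shading: the color depends on the
    surface's angle to the light and its distance from the light."""
    to_light = light_pos - point
    dist = np.linalg.norm(to_light)
    l = to_light / dist                      # unit direction toward the light
    cos_theta = max(np.dot(normal, l), 0.0)  # angle term, clamped below the horizon
    falloff = 1.0 / (dist * dist)            # inverse-square distance attenuation
    return albedo * light_color * cos_theta * falloff

# Example: a red surface at the origin, facing up, lit by a white light one unit above.
c = lambert_shade(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                  np.array([0.0, 0.0, 1.0]), np.ones(3), np.array([0.8, 0.2, 0.2]))
```

A production shader runs per fragment on the GPU, but the inputs are the same: surface position, normal, light position and color, and material properties.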
Computer Graphics: Principles and Practice, Section 27.5.3:
The computation of the amount of light reflected from a surface was sometimes called “lighting” or “illumination,” although the standard interpretation of these words as descriptions of the light ARRIVING at the surface was also common. The “lighting model” was typically evaluated at the vertices of a triangular mesh and then interpolated in some way to give values at points in the interior of the triangle. This latter interpolation process was known as shading, and you’ll sometimes read of Gouraud shading (barycentric interpolation of values at the vertices) or Phong shading. In Phong shading, rather than interpolating the values, the component parts were interpolated: the normal vector was re-estimated for each internal point of each triangle, and then the inner product with the incoming light vector was computed, etc.
Nowadays we refer to shading and lighting differently: The description of the outgoing light in response to the incoming light is called a reflection model or scattering model, and the program fragment that computes this is called a shader. Because of the highly parallel nature of most graphics processing, the scattering model is usually evaluated at every pixel, often multiple times, and the “shading” process (i.e., interpolation across triangles) is no longer necessary.
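The Gouraud/Phong distinction in the quote comes down to what gets interpolated across the triangle. A minimal sketch in Python (function names are illustrative; bary is a barycentric coordinate triple and light_dir is assumed to be a unit vector):

```python
import numpy as np

def diffuse(normal, light_dir):
    """Simple diffuse lighting model: clamped cosine of the angle to the light."""
    n = normal / np.linalg.norm(normal)
    return max(np.dot(n, light_dir), 0.0)

def gouraud(bary, vertex_normals, light_dir):
    """Gouraud: evaluate the lighting model at the vertices, then
    barycentrically interpolate the resulting *values* across the triangle."""
    vals = [diffuse(n, light_dir) for n in vertex_normals]
    return sum(b * v for b, v in zip(bary, vals))

def phong(bary, vertex_normals, light_dir):
    """Phong: barycentrically interpolate the *normals*, then evaluate
    the lighting model at the interior point."""
    n = sum(b * vn for b, vn in zip(bary, vertex_normals))
    return diffuse(n, light_dir)
```

Gouraud interpolates already-shaded values and can miss features interior to a triangle; Phong re-evaluates the model per point, which is essentially the per-pixel evaluation the next paragraph describes.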
We address the problem of recovering the shape and spatially-varying reflectance of an object from multi-view images (and their camera poses) of that object illuminated by one unknown lighting condition. This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties. The key to our approach, which we call Neural Radiance Factorization (NeRFactor), is to distill the volumetric geometry of a Neural Radiance Field (NeRF) [Mildenhall et al. 2020] representation of the object into a surface representation and then jointly refine the geometry while solving for the spatially-varying reflectance and environment lighting. Specifically, NeRFactor recovers 3D neural fields of surface normals, light visibility, albedo, and Bidirectional Reflectance Distribution Functions (BRDFs) without any supervision, using only a re-rendering loss, simple smoothness priors, and a data-driven BRDF prior learned from real-world BRDF measurements. By explicitly modeling light visibility, NeRFactor is able to separate shadows from albedo and synthesize realistic soft or hard shadows under arbitrary lighting conditions. NeRFactor is able to recover convincing 3D models for free-viewpoint relighting in this challenging and underconstrained capture setup for both synthetic and real scenes. Qualitative and quantitative experiments show that NeRFactor outperforms classic and deep learning-based state of the art across various tasks. Our videos, code, and data are available at people.csail.mit.edu/xiuming/projects/nerfactor/.
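Schematically, the factorization NeRFactor describes renders each surface point by summing, over incoming light directions, the product of light visibility, the BRDF, the incoming radiance, and a cosine term. A hedged sketch of that sum (a simplified signature of our own, not the paper's code):

```python
import numpy as np

def render_point(normal, view_dir, light_dirs, light_rgb, visibility, brdf, solid_angles):
    """Schematic factored rendering: outgoing radiance is the sum over
    sampled light directions of visibility * BRDF * incoming light *
    cosine * solid angle. `brdf` stands in for the recovered
    albedo/BRDF fields; `visibility` for the recovered visibility field."""
    out = np.zeros(3)
    for w_i, L_i, vis, dw in zip(light_dirs, light_rgb, visibility, solid_angles):
        cos_theta = max(np.dot(normal, w_i), 0.0)
        out += vis * brdf(w_i, view_dir) * L_i * cos_theta * dw
    return out
```

Making the visibility factor explicit is what lets shadows be separated from albedo: a dark pixel can be explained by vis = 0 rather than by a dark material.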
Recent advances in differentiable rendering have enabled high-quality reconstruction of 3D scenes from multi-view images. Most methods rely on simple rendering algorithms: pre-filtered direct lighting or learned representations of irradiance. We show that a more realistic shading model, incorporating ray tracing and Monte Carlo integration, substantially improves decomposition into shape, materials, and lighting. Unfortunately, Monte Carlo integration provides estimates with significant noise, even at large sample counts, which makes gradient-based inverse rendering very challenging. To address this, we incorporate multiple importance sampling and denoising in a novel inverse rendering pipeline. This substantially improves convergence and enables gradient-based optimization at low sample counts. We present an efficient method to jointly reconstruct geometry (explicit triangle meshes), materials, and lighting, which substantially improves material and light separation compared to previous work. We argue that denoising can become an integral part of high-quality inverse rendering pipelines.
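For reference, multiple importance sampling combines two sampling strategies (typically light sampling and BRDF sampling) with weights such as the balance heuristic. Below is a generic one-sample-per-strategy sketch of the estimator, not the paper's pipeline; all names are illustrative:

```python
def balance_heuristic(pdf_a, pdf_b):
    """MIS weight for a sample drawn from strategy A when strategy B
    could also have produced it."""
    return pdf_a / (pdf_a + pdf_b)

def mis_estimate(f, sample_light, sample_brdf, pdf_light, pdf_brdf, n=64):
    """Estimate the shading integral of f by drawing one light sample and
    one BRDF sample per iteration and combining them with balance-heuristic
    weights, which reduces variance when either strategy alone is poor."""
    total = 0.0
    for _ in range(n):
        # Sample the light; down-weight where the BRDF strategy is also good.
        x = sample_light()
        total += balance_heuristic(pdf_light(x), pdf_brdf(x)) * f(x) / pdf_light(x)
        # Sample the BRDF; down-weight where the light strategy is also good.
        x = sample_brdf()
        total += balance_heuristic(pdf_brdf(x), pdf_light(x)) * f(x) / pdf_brdf(x)
    return total / n
```

Even with MIS the estimates stay noisy at low sample counts, which is why the paper pairs the estimator with a denoiser before computing gradients.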
Related Work
Neural methods for multi-view reconstruction; BRDF and lighting estimation; image denoisers
A radiance environment map pre-integrates a constant surface reflectance with the lighting environment. It has been used to generate photo-realistic rendering at interactive speed. However, one of its limitations is that each radiance environment map can only render objects that have the same surface reflectance as the one it integrates. We present a ratio-image based technique that uses a radiance environment map to render diffuse objects with different surface reflectance properties. This method has the advantage that it does not require separating illumination from reflectance, and it is simple to implement and runs at interactive speed. To apply this technique to human face relighting, we have developed a method that uses spherical harmonics to approximate the radiance environment map for any given image of a face. Thus we are able to relight face images as the lighting environment rotates. Another benefit of the radiance environment map is that we can interactively modify lighting by changing the coefficients of the spherical harmonics basis. Finally, we can modify the lighting condition of one person's face so that it matches the new lighting condition of a different person's face image, assuming the two faces have similar skin albedos.
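The core ratio-image idea can be sketched compactly: scale each observed pixel by the ratio of the new and old radiance environment maps evaluated at that pixel's surface normal, with each map approximated by nine spherical-harmonic coefficients. A hypothetical Python sketch (names and signatures are ours, not the paper's):

```python
import numpy as np

def sh_basis(n):
    """Real spherical-harmonic basis up to order 2 (9 terms),
    evaluated at a unit surface normal n = (x, y, z)."""
    x, y, z = n
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def relight_pixel(pixel, normal, coeffs_old, coeffs_new, eps=1e-6):
    """Ratio-image relighting: scale the observed pixel by the ratio of
    the new and old radiance environment maps at the surface normal.
    `coeffs_*` are 9 SH coefficients per map (or (3, 9) for RGB)."""
    ratio = (coeffs_new @ sh_basis(normal)) / (coeffs_old @ sh_basis(normal) + eps)
    return pixel * ratio
```

Because the observed pixel already contains albedo times the old illumination, multiplying by this ratio swaps in the new illumination without ever separating the two factors, which is exactly why the method needs no illumination/reflectance decomposition.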