Tag: ECCV

Deep Reflectance Volumes: Relightable Reconstructions from Multi-View Photometric Images

Material Editing Material Estimation NeRF Shape Estimation

Sai Bi, Zexiang Xu, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, Ravi Ramamoorthi

University of California, San Diego; Adobe Research

Portals
  • pdf
  • arXiv
  • Paperswithcode
  • Publisher
Abstract

We present a deep learning approach to reconstruct scene appearance from unstructured images captured under collocated point lighting. At the heart of Deep Reflectance Volumes is a novel volumetric scene representation consisting of opacity, surface normal and reflectance voxel grids. We present a novel physically-based differentiable volume ray marching framework to render these scene volumes under arbitrary viewpoint and lighting. This allows us to optimize the scene volumes to minimize the error between their rendered images and the captured images. Our method is able to reconstruct real scenes with challenging non-Lambertian reflectance and complex geometry with occlusions and shadowing. Moreover, it accurately generalizes to novel viewpoints and lighting, including non-collocated lighting, rendering photorealistic images that are significantly better than state-of-the-art mesh-based methods. We also show that our learned reflectance volumes are editable, allowing for modifying the materials of the captured scenes.
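
At its core the method is a differentiable volume ray-marching renderer over opacity, surface-normal, and reflectance voxel grids. The sketch below is a minimal illustration of that idea under simplifying assumptions (a single collocated point light, plain Lambertian shading, toy grid resolutions); it is not the authors' implementation, but it shows how gradients can flow from a rendered pixel back into all three voxel grids.

```python
# Minimal sketch of differentiable ray marching over opacity / normal /
# reflectance voxel grids with a collocated point light (assumptions:
# Lambertian shading, toy resolution; not the paper's code).
import torch
import torch.nn.functional as F

def trilinear_sample(grid, pts):
    """grid: (C, D, H, W) voxel grid; pts: (N, 3) coords in [-1, 1]^3 -> (N, C)."""
    g = grid.unsqueeze(0)                          # (1, C, D, H, W)
    p = pts.view(1, -1, 1, 1, 3)                   # (1, N, 1, 1, 3)
    out = F.grid_sample(g, p, align_corners=True)  # (1, C, N, 1, 1)
    return out.view(grid.shape[0], -1).t()

def march_ray(origin, direction, opacity, normals, albedo, n_steps=64, t_max=2.0):
    ts = torch.linspace(0.0, t_max, n_steps)
    pts = origin + ts[:, None] * direction                     # samples along the ray
    alpha = torch.sigmoid(trilinear_sample(opacity, pts))[:, 0]
    n = F.normalize(trilinear_sample(normals, pts), dim=-1)
    rho = torch.sigmoid(trilinear_sample(albedo, pts))          # diffuse reflectance
    light_dir = -direction / direction.norm()                   # collocated light = camera
    shading = torch.clamp((n * light_dir).sum(-1, keepdim=True), min=0.0)
    radiance = rho * shading                                    # Lambertian term
    # Standard over-compositing: transmittance times per-sample alpha.
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = (trans * alpha)[:, None]
    return (weights * radiance).sum(0)                          # final pixel colour

# Toy usage: fit the grids to a single target pixel colour.
D = 32
opacity = torch.zeros(1, D, D, D, requires_grad=True)
normals = torch.randn(3, D, D, D, requires_grad=True)
albedo  = torch.zeros(3, D, D, D, requires_grad=True)
color = march_ray(torch.tensor([0.0, 0.0, -1.0]), torch.tensor([0.0, 0.0, 1.0]),
                  opacity, normals, albedo)
loss = ((color - torch.tensor([0.5, 0.4, 0.3])) ** 2).mean()
loss.backward()   # gradients reach opacity, normals and albedo grids
```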

Related Works

Geometry reconstruction; Reflectance acquisition; Relighting and view synthesis

Comparisons

DeepVoxels

2020 ECCV

PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo

General Relighting Lighting Estimation Material Editing Material Estimation NeRF Shape Estimation

Wenqi Yang, Guanying Chen, Chaofeng Chen, Zhenfang Chen, Kwan-Yee K. Wong

The University of Hong Kong; FNii and SSE, CUHK-Shenzhen; Nanyang Technological University; MIT-IBM Watson AI Lab

Portals
  • pdf
  • YouTube
  • Project
  • arXiv
  • Paperswithcode
Abstract

Traditional multi-view photometric stereo (MVPS) methods are often composed of multiple disjoint stages, resulting in noticeable accumulated errors. In this paper, we present a neural inverse rendering method for MVPS based on implicit representation. Given multi-view images of a non-Lambertian object illuminated by multiple unknown directional lights, our method jointly estimates the geometry, materials, and lights. Our method first employs multi-light images to estimate per-view surface normal maps, which are used to regularize the normals derived from the neural radiance field. It then jointly optimizes the surface normals, spatially-varying BRDFs, and lights based on a shadow-aware differentiable rendering layer. After optimization, the reconstructed object can be used for novel-view rendering, relighting, and material editing. Experiments on both synthetic and real datasets demonstrate that our method achieves far more accurate shape reconstruction than existing MVPS and neural rendering methods. Our code and model can be found at this https URL.
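
As a rough illustration of the joint objective the abstract describes, the sketch below (an illustrative stand-in, not the released PS-NeRF code) combines an image reconstruction term with a cosine-distance term that regularizes normals derived from the radiance field using the per-view normal maps estimated from the multi-light images; the weight `lambda_normal` and the tensor shapes are placeholder choices.

```python
# Sketch of a reconstruction-plus-normal-regularisation objective in the
# spirit of the abstract (hypothetical names and weights, not PS-NeRF's code).
import torch
import torch.nn.functional as F

def reconstruction_with_normal_guidance(rendered_rgb, gt_rgb,
                                        field_normals, guidance_normals,
                                        lambda_normal=0.1):
    rgb_loss = ((rendered_rgb - gt_rgb) ** 2).mean()          # photometric term
    f = F.normalize(field_normals, dim=-1)                    # normals from the field
    g = F.normalize(guidance_normals, dim=-1)                 # per-view PS normal maps
    normal_loss = (1.0 - (f * g).sum(-1)).mean()              # cosine distance
    return rgb_loss + lambda_normal * normal_loss

# Toy usage on a random batch of rays.
rendered = torch.rand(1024, 3, requires_grad=True)
gt = torch.rand(1024, 3)
n_field = torch.randn(1024, 3, requires_grad=True)
n_guide = torch.randn(1024, 3)
loss = reconstruction_with_normal_guidance(rendered, gt, n_field, n_guide)
loss.backward()
```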

Related Works

Single-view Photometric stereo (PS); Multi-view Photometric Stereo (MVPS); Neural Rendering

Comparisons

NeRF, PhySG, NeRFactor, NRF, UNISURF

2022 ECCV

R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis

Efficient NVS NeLF NeRF

Huan Wang, Jian Ren, Zeng Huang, Kyle Olszewski, Menglei Chai, Yun Fu, Sergey Tulyakov

Snap Inc.; Northeastern University

Portals
  • pdf
  • Project
  • r2l
  • arXiv
  • Paperswithcode
  • Publisher
Abstract

Recent research explosion on Neural Radiance Field (NeRF) shows the encouraging potential to represent complex scenes with neural networks. One major drawback of NeRF is its prohibitive inference time: Rendering a single pixel requires querying the NeRF network hundreds of times. To resolve it, existing efforts mainly attempt to reduce the number of required sampled points. However, the problem of iterative sampling still exists. On the other hand, Neural Light Field (NeLF) presents a more straightforward representation over NeRF in novel view synthesis -- the rendering of a pixel amounts to one single forward pass without ray-marching. In this work, we present a deep residual MLP network (88 layers) to effectively learn the light field. We show the key to successfully learning such a deep NeLF network is to have sufficient data, for which we transfer the knowledge from a pre-trained NeRF model via data distillation. Extensive experiments on both synthetic and real-world scenes show the merits of our method over other counterpart algorithms. On the synthetic scenes, we achieve 26-35x FLOPs reduction (per camera ray) and 28-31x runtime speedup, meanwhile delivering significantly better (1.4-2.8 dB average PSNR improvement) rendering quality than NeRF without any customized parallelism requirement.
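
The distillation recipe the abstract outlines can be sketched as follows; this is a toy stand-in, not the R2L release. A pretrained NeRF acts as a teacher that labels randomly sampled rays with pseudo-ground-truth colours, and a residual MLP student learns to map a flattened ray encoding to colour in a single forward pass. The ray encoding, width, and depth below are placeholders (the paper's network is an 88-layer residual MLP), and `teacher_nerf` is a dummy stand-in for a trained model.

```python
# NeRF-to-NeLF distillation sketch (toy scale; `teacher_nerf` is a stand-in
# for a pretrained NeRF renderer, not a real one).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(width, width), nn.ReLU(),
                                 nn.Linear(width, width))
    def forward(self, x):
        return torch.relu(x + self.net(x))             # residual connection

class NeLF(nn.Module):
    def __init__(self, in_dim, width=256, n_blocks=8):
        super().__init__()
        self.inp = nn.Linear(in_dim, width)
        self.blocks = nn.Sequential(*[ResBlock(width) for _ in range(n_blocks)])
        self.out = nn.Linear(width, 3)
    def forward(self, rays):                            # (B, in_dim) -> (B, 3)
        return torch.sigmoid(self.out(self.blocks(torch.relu(self.inp(rays)))))

def encode_ray(origins, dirs, n_pts=16, near=0.0, far=1.0):
    # Represent each ray by a handful of points along it, flattened.
    ts = torch.linspace(near, far, n_pts)
    pts = origins[:, None, :] + ts[None, :, None] * dirs[:, None, :]
    return pts.reshape(origins.shape[0], -1)

def teacher_nerf(origins, dirs):
    return torch.rand(origins.shape[0], 3)              # dummy pseudo-GT colours

student = NeLF(in_dim=16 * 3)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
for step in range(100):                                 # distillation loop (toy)
    o = torch.rand(4096, 3)
    d = F.normalize(torch.randn(4096, 3), dim=-1)
    with torch.no_grad():
        target = teacher_nerf(o, d)                     # labels from the teacher
    pred = student(encode_ray(o, d))
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```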

Related Works

Efficient neural scene representation and rendering; Neural light field (NeLF); Knowledge distillation (KD)

Comparisons

NeRF, DONeRF, NSVF, NeX, AutoInt, X-Fields, RSEN, KiloNeRF

2022 ECCV

Object-based Illumination Estimation with Rendering-aware Neural Networks

Lighting Estimation

Xin Wei, Guojun Chen, Yue Dong, Stephen Lin, Xin Tong

Microsoft Research Asia; Zhejiang University

Portals
  • pdf
  • arXiv
  • Publisher
Abstract

We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas. Conventional inverse rendering is too computationally demanding for real-time applications, and the performance of purely learning-based techniques may be limited by the meager input data available from individual objects. To address these issues, we propose an approach that takes advantage of physical principles from inverse rendering to constrain the solution, while also utilizing neural networks to expedite the more computationally expensive portions of its processing, to increase robustness to noisy input data as well as to improve temporal and spatial stability. This results in a rendering-aware system that estimates the local illumination distribution at an object with high accuracy and in real time. With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene, leading to improved realism.
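
For background, the inverse-rendering constraint such systems build on can be posed as a linear least-squares problem: under a roughly Lambertian assumption, low-order spherical-harmonic lighting coefficients are recoverable from per-pixel normals (e.g. derived from the depth channel) and observed intensities. The sketch below shows only this classical baseline, not the paper's rendering-aware networks, which the abstract says replace the expensive or brittle parts of such a pipeline for speed and robustness.

```python
# Classical SH lighting estimation by least squares (baseline illustration,
# not the paper's method): intensities ≈ sh_basis(normals) @ coeffs.
import numpy as np

def sh_basis(normals):
    """First 9 real spherical-harmonic basis functions at unit normals (N, 3) -> (N, 9)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        np.full_like(x, 0.282095),                      # l = 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,       # l = 1
        1.092548 * x * y, 1.092548 * y * z,             # l = 2
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ], axis=1)

def estimate_sh_lighting(normals, intensities):
    coeffs, *_ = np.linalg.lstsq(sh_basis(normals), intensities, rcond=None)
    return coeffs                                       # (9,) or (9, channels)

# Toy check: recover the lighting used to synthesise the observations.
rng = np.random.default_rng(0)
n = rng.normal(size=(5000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
true_coeffs = rng.normal(size=9)
obs = sh_basis(n) @ true_coeffs
print(np.allclose(estimate_sh_lighting(n, obs), true_coeffs, atol=1e-6))
```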

Related Works

Scene-based lighting estimation; Object-based lighting estimation

2020 ECCV

INS: Unified Implicit Neural Stylization

3D Neural Style Transfer Gram Matrix NeRF Scene Editing View Synthesis