Tag: 2020


Learning Deformable Tetrahedral Meshes for 3D Reconstruction

3D Reconstruction View Synthesis

Jun Gao, Wenzheng Chen, Tommy Xiang, Clement Fuji Tsang, Alec Jacobson, Morgan McGuire, Sanja Fidler

NVIDIA; University of Toronto; Vector Institute

Portals
  • pdf
  • Project
  • DefTet
  • arXiv
Abstract

3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics. Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations. We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem. Unlike existing volumetric approaches, DefTet optimizes for both vertex placement and occupancy, and is differentiable with respect to standard 3D reconstruction loss functions. It is thus simultaneously high-precision, volumetric, and amenable to learning-based neural architectures. We show that it can represent arbitrary, complex topology, is both memory- and computation-efficient, and can produce high-fidelity reconstructions with a significantly smaller grid size than alternative volumetric approaches. The predicted surfaces are also inherently defined as tetrahedral meshes and thus require no post-processing. We demonstrate that DefTet matches or exceeds both the quality of the previous best approaches and the performance of the fastest ones. Our approach obtains high-quality tetrahedral meshes computed directly from noisy point clouds, and is the first to showcase high-quality 3D tet-mesh results using only a single image as input.
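
For a concrete picture of the parameterization the abstract describes, here is a minimal PyTorch sketch of a deformable tetrahedral grid in which both the vertex offsets and the per-tet occupancies are learnable and differentiable. This is an illustration under stated assumptions, not the authors' implementation: the class name, the toy two-tet grid, and the toy loss are invented for the example.

```python
# Minimal sketch of a DefTet-style parameterization: a fixed tetrahedral
# grid whose vertex offsets and per-tet occupancies are both learnable.
# Illustrative only -- not the authors' code.
import torch

class DeformableTetGrid(torch.nn.Module):
    def __init__(self, rest_vertices, tets):
        super().__init__()
        # rest_vertices: (V, 3) float tensor; tets: (T, 4) long tensor.
        self.register_buffer("rest_vertices", rest_vertices)
        self.register_buffer("tets", tets)
        # Learnable per-vertex displacement and per-tet occupancy logit.
        self.offsets = torch.nn.Parameter(torch.zeros_like(rest_vertices))
        self.occ_logits = torch.nn.Parameter(torch.zeros(tets.shape[0]))

    def forward(self):
        verts = self.rest_vertices + self.offsets   # deformed vertices
        occupancy = torch.sigmoid(self.occ_logits)  # soft occupancy in [0, 1]
        return verts, occupancy

# Toy usage: pull occupied tet centroids toward a (hypothetical) target
# point set; gradients flow to both offsets and occupancies.
rest = torch.rand(8, 3)
tets = torch.tensor([[0, 1, 2, 3], [4, 5, 6, 7]])
grid = DeformableTetGrid(rest, tets)
target = torch.rand(2, 3)
opt = torch.optim.Adam(grid.parameters(), lr=1e-2)
for _ in range(10):
    verts, occ = grid()
    centroids = verts[grid.tets].mean(dim=1)        # (T, 3)
    loss = (occ * (centroids - target).pow(2).sum(-1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```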

Related Works

3D Reconstruction; Tetrahedral Meshing

Comparisons

3D-R2N2, DeepMCube, Pixel2Mesh, OccNet, MeshRCNN

2020 NeurIPS

Towards Material Digitization With a Dual-scale Optical System

SVBRDF

Elena Garces, Victor Arellano, Carlos Rodriguez-Pardo, David Pascual, Sergio Suja, Jorge Lopez-Moreno

Universidad Rey Juan Carlos; SEDDI

Portals
  • pdf
  • Project
  • Publisher
Abstract

Existing devices for measuring material appearance in spatially-varying samples are limited to a single scale, either micro or mesoscopic. This is a practical limitation when the material has a complex multi-scale structure. In this paper, we present a system and methods to digitize materials at two scales, designed to include high-resolution data in spatially-varying representations at larger scales. We design and build a hemispherical light dome able to digitize flat material samples up to 11x11 cm. We estimate geometric properties, anisotropic reflectance and transmittance at the microscopic level using polarized directional lighting with a single orthogonal camera. Then, we propagate this structured information to the mesoscale, using a neural network trained with the data acquired by the device and image-to-image translation methods. To maximize the compatibility of our digitization, we leverage standard BSDF models commonly adopted in the industry. Through extensive experiments, we demonstrate the precision of our device and the quality of our digitization process using a set of challenging real-world material samples and validation scenes. Further, we demonstrate the optical resolution and potential of our device for acquiring more complex material representations by capturing microscopic attributes which affect the global appearance: we characterize the properties of textile materials such as the yarn twist or the shape of individual fly-out fibers. We also release the SEDDIDOME dataset of materials, including raw data captured by the machine and optimized parameters.
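
One building block the abstract alludes to, polarized directional lighting for reflectance estimation, can be sketched with the textbook cross-polarization identity: specular reflection preserves the source polarization while diffuse reflection depolarizes. The snippet below illustrates that general technique, not the paper's actual pipeline; the function name, image layout, and factor-of-two convention are assumptions.

```python
# Hedged sketch of diffuse/specular separation from a polarized capture
# pair -- the standard cross-polarization trick, not this paper's method.
import numpy as np

def separate_diffuse_specular(parallel_img, cross_img):
    """parallel_img: capture with the analyzer parallel to the source
    polarizer; cross_img: capture with the analyzer perpendicular.
    Both are (H, W, 3) float arrays."""
    # Specular reflection keeps the source polarization, so it survives
    # the parallel capture and is blocked by the crossed analyzer; the
    # diffuse component is depolarized and splits roughly evenly.
    specular = np.clip(parallel_img - cross_img, 0.0, None)
    diffuse = 2.0 * cross_img
    return diffuse, specular

# Toy usage with random data standing in for two HDR captures.
par = np.random.rand(4, 4, 3)
crs = 0.5 * par
diffuse, specular = separate_diffuse_specular(par, crs)
```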

Related Works

Rigid Capture Devices; Lightweight Capture Systems; Fiber-Level Capture; Translucency

2020 SIGGRAPH

Deep Reflectance Volumes: Relightable Reconstructions from Multi-View Photometric Images

Material Editing Material Estimation NeRF Shape Estimation

Sai Bi, Zexiang Xu, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, Ravi Ramamoorthi

University of California, San Diego; Adobe Research

Portals
  • pdf
  • arXiv
  • Paperswithcode
  • Publisher
Abstract

We present a deep learning approach to reconstruct scene appearance from unstructured images captured under collocated point lighting. At the heart of Deep Reflectance Volumes is a novel volumetric scene representation consisting of opacity, surface normal and reflectance voxel grids. We present a novel physically-based differentiable volume ray marching framework to render these scene volumes under arbitrary viewpoint and lighting. This allows us to optimize the scene volumes to minimize the error between their rendered images and the captured images. Our method is able to reconstruct real scenes with challenging non-Lambertian reflectance and complex geometry with occlusions and shadowing. Moreover, it accurately generalizes to novel viewpoints and lighting, including non-collocated lighting, rendering photorealistic images that are significantly better than state-of-the-art mesh-based methods. We also show that our learned reflectance volumes are editable, allowing for modifying the materials of the captured scenes.
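
As a rough illustration of the differentiable volume ray marching the abstract describes, the sketch below alpha-composites opacity, normal, and reflectance samples along one ray under a collocated point light. It is a minimal sketch under simplifying assumptions, not the authors' renderer: Lambertian shading stands in for the paper's reflectance model, and sampling from the voxel grids is elided.

```python
# Minimal sketch of differentiable volume ray marching with a collocated
# light. Simplified stand-in, not the paper's physically-based renderer.
import torch

def march_ray(opacity, normals, albedo, view_dir):
    """opacity: (S,) alphas in [0, 1]; normals, albedo: (S, 3);
    view_dir: (3,) unit vector from the surface point toward the camera.
    With a collocated source, the light direction equals view_dir."""
    light_dir = view_dir
    # Lambertian shading per sample (assumption for this sketch).
    shaded = albedo * torch.clamp(
        (normals * light_dir).sum(-1, keepdim=True), min=0.0)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = torch.cumprod(
        torch.cat([opacity.new_ones(1), 1.0 - opacity[:-1]]), dim=0)
    weights = trans * opacity                       # (S,)
    return (weights.unsqueeze(-1) * shaded).sum(0)  # composited RGB

# Toy usage: 16 samples along one ray. Everything is differentiable, so
# a photometric loss could be backpropagated to the voxel grids.
S = 16
op = torch.rand(S, requires_grad=True)
n = torch.nn.functional.normalize(torch.randn(S, 3), dim=-1)
alb = torch.rand(S, 3)
rgb = march_ray(op, n, alb, torch.tensor([0.0, 0.0, 1.0]))
rgb.sum().backward()
```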

Related Works

Geometry reconstruction; Reflectance acquisition; Relighting and view synthesis

Comparisons

DeepVoxels

2020 ECCV

IDR: Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance

Reflectance Estimation SDF Shape Estimation Volume Rendering