Mariia Soroka,1,2
Christoph Peters,3,2
Steve Marschner1
1Cornell University, 2Intel, 3Delft University of Technology
We render the derivative w.r.t. horizontal translation of several objects observed through glossy reflections. In an equal-time comparison, we achieve much lower variance than prior work on edge sampling [Li et al. 2018]. Warped-area sampling [Bangaru et al. 2020] exhibits bias on reflections of branches and noise across object areas.
For each of the two mesh patches shown on the left, we show a plane sweeping through the scene volume. For each point on the plane, we perform either our rejection test or that of Li [2019] and color the point according to its output. For Li's [2019] method, we additionally show how the bug in redner (see Appendix G) degrades the rejection test quality.
Equal time comparison of our method with Li [2019]. Each primal rendering features an object observed through a rough mirror. Gradient images are computed with respect to vertical translation of the object. Construction and overall time for both methods are reported below.
Scene | Spp (Ours) | Spp (Li [2019]) | MSE (Ours) | MSE (Li [2019]) | Construction Time (Ours) | Construction Time (Li [2019]) | Overall Time
---|---|---|---|---|---|---|---
Bunny | 128 | 185 | 0.04 | 32.72 | 0.14 s | 0.02 s | ~130 s
Cube | 128 | 192 | 0.10 | 45.33 | 1.73 s | 0.14 s | ~140 s
Vase | 128 | 179 | 0.06 | 47.60 | 8.72 s | 0.19 s | ~126 s
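The MSE columns above compare gradient images against a reference. As a minimal sketch of how such a per-image error could be computed, assuming gradient and reference images are stored as NumPy arrays of equal shape (`gradient_mse` is a hypothetical helper, not part of either method's implementation):

```python
import numpy as np

def gradient_mse(grad_img, ref_img):
    """Mean squared error between a gradient image and a reference image.

    Both inputs are arrays of equal shape (e.g. H x W or H x W x C);
    the error is averaged over all entries.
    """
    diff = np.asarray(grad_img, dtype=np.float64) - np.asarray(ref_img, dtype=np.float64)
    return float(np.mean(diff * diff))
```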
An equal-time comparison of WAS methods [Bangaru et al. 2020; Xu et al. 2023] with our quadric-based silhouette sampling method. In the primal renderings, the camera observes a reflection of a moving object in a rough mirror. We compute gradients w.r.t. vertical translation of the object using our method, WAS (old), and WAS (new). For the WAS methods, we compare gradients evaluated with 8 and 32 auxiliary rays.
An equal-sample-budget comparison of our method and the projective sampling approach [Zhang et al. 2023]. Each primal rendering features a reflection of an object in a rough mirror. Our method uses 128 samples per pixel for gradient computation. We set up the projective sampling method so that it traverses the same number of light paths and makes the same number of attempts to find a silhouette point as our method. Since there are multiple sets of parameters that fulfill this condition, we try a few different setups and report the result that produces minimal MSE. See Sec. 5.2 for more details. The red colormap is used to visualize the squared image error.
An equal-sample comparison of gradients computed using either our importance function I(ω, p), which is oblivious to lighting, or I_light(ω, p), which explicitly accounts for polygonal lights. The primal image shows the shadow of an object illuminated by a small polygonal light source. Gradients are evaluated using 64 samples per pixel.
The scene features an object visible only through reflection in a smooth plane. Gradients of the image w.r.t. vertical translation of the object are evaluated using 16 samples per pixel. We compare derivative images computed with and without concave edge culling. Squared errors are plotted using a red colormap.
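Concave edge culling relies on a local geometric test over the two triangles adjacent to an edge. As a minimal sketch of one common way to classify an edge as concave, assuming a mesh with consistent counterclockwise winding (`is_concave_edge` is a hypothetical helper, not the paper's implementation):

```python
import numpy as np

def is_concave_edge(v0, v1, opp_a, opp_b):
    """Classify the edge (v0, v1) shared by two triangles as concave.

    opp_a and opp_b are the triangle vertices opposite the shared edge.
    Assumes triangle (v0, v1, opp_a) is wound counterclockwise as seen
    from outside the surface.
    """
    v0, v1 = np.asarray(v0, float), np.asarray(v1, float)
    opp_a, opp_b = np.asarray(opp_a, float), np.asarray(opp_b, float)
    # Outward normal of the first triangle (v0, v1, opp_a).
    n_a = np.cross(v1 - v0, opp_a - v0)
    # The edge is concave when the opposite vertex of the second
    # triangle lies on the front side of the first triangle's plane;
    # for a convex fold that vertex lies behind the plane.
    return float(np.dot(n_a, opp_b - v0)) > 0.0
```

Concave edges classified this way can be skipped during silhouette sampling, since they cannot form visibility boundaries when the object is viewed from outside.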
The Chandelier scene features an object with very complex geometry. Gradients are computed with 128 samples per pixel. Despite using as many samples as in Figs. 9, 10, and 11, our gradient image has significantly higher variance than in the previous experiments.
We reconstruct the shape of an object given a single reference image of the object and its two reflections. We show the shape evolution in the original scene setup.