Tuesday, December 8, 2009

A precomputed method for soft shadow generation in dynamic scenes

Abstract
Many applications in computer graphics and related fields can benefit from shadow rendering. Shadows are an important element in creating realistic images and in providing the user with visual cues about object placement. In this paper, I present a new precomputed method for soft shadow generation in dynamic scenes. The basic idea, inspired by the Shadow Fields technique, is to precompute the visibility function of an occluder as a function of viewing direction and distance. The function data is then compressed using PCA and stored in lookup textures. The technique is easy to implement and enables all-frequency shadow effects in dynamic scenes at interactive frame rates.

1. Introduction
In computer graphics, shadows are an important element of producing realistic images. Many applications in computer graphics and related fields, such as virtual reality, can benefit from shadow rendering. Without shadows as a visual cue, scenes are often unconvincing and more difficult to perceive. It is usually better to have an inaccurate shadow than no shadow at all.

There are two types of shadows: self-shadows and cast shadows. A self-shadow is formed when the shadow of the occluder is projected onto the occluder itself; in other words, the occluder and the receiver are the same object. A cast shadow is formed when the shadow is projected onto another object. A light source can be modeled as a point or as an area of finite size. A point light source generates a hard shadow, which is a fully shadowed region. An area light source generates a soft shadow, which has a fully shadowed region called the umbra and a partially shadowed region called the penumbra. (See figure 4)

There are many techniques to generate self-shadows, such as Precomputed Radiance Transfer (PRT). The technique presented in this paper generates only cast shadows, but it can be combined with a PRT method to obtain self-shadows as well. It is also assumed that a movable spherical light source is used as the area light source.

2. Related work
Soft shadow techniques are generally based on combining multiple shadow maps (e.g., [Heckbert and Herf 1997; Agrawala et al. 2000]), extending shadow volumes (e.g., [Assarsson and Akenine-Möller 2003]), or using Precomputed Radiance Transfer (PRT).

For a dynamic scene, shadow map and shadow volume computation escalates significantly with scene complexity and must be repeated for each frame. This rendering load cannot be alleviated by precomputation, since shadow maps and volumes are determined with respect to a specific arrangement of lights and occluders, for which numerous configurations are possible.

A radiosity-based approach [Drettakis and Sillion 1997] renders soft shadows and global illumination by identifying changes in hierarchical cluster links as an object moves. With this technique, only coarse shadows have been demonstrated at interactive rates, and it is unclear how to handle multiple moving objects that may affect the appearance of one another.

PRT for Static Scenes: PRT provides a means to efficiently render global illumination effects such as soft shadows and inter-reflections from an object onto itself. Most PRT algorithms facilitate evaluation of the shading integral by computing a double product between the BRDF and transferred lighting that incorporates visibility and global illumination [Sloan et al. 2002; Kautz et al. 2002; Lehtinen and Kautz 2003], or between direct environment lighting and a transport function that combines visibility and the BRDF [Ng et al. 2003]. Instead of employing double product integral approximations, Ng et al. [2004] propose an efficient triple product wavelet algorithm in which lighting, visibility, and reflectance properties are separately represented, allowing high-resolution lighting effects with view variations.

PRT for Dynamic Scenes: The idea of sampling occlusion information around an object was first presented by Ouhyoung et al. [1996], and later, for scenes with moving objects, Mei et al. [2004] efficiently rendered shadows using precomputed visibility information for each object with respect to uniformly sampled directions. By assuming parallel lighting from distant illuminants, some degree of shadow map precomputation becomes manageable for environment maps. For dynamic local light sources, however, precomputation of shadow maps remains infeasible.

Sloan et al. [2002] present a neighborhood-transfer technique that records the soft shadows and reflections cast from an object onto surrounding points in the environment. However, it is unclear how neighborhood transfers from multiple objects can be combined.

Shadow fields [Zhou et al. 2005] extend the method of Ng et al. [2004] to account for dynamic visibility changes by rotating each blocker's visibility function into the local coordinate frame and computing the spherical harmonic (SH) product over all blockers. SH rotations and products are very expensive, precluding GPU implementation and restricting real-time CPU implementation to a few precomputed blockers.

PRT methods for deformable objects have suggested partial solutions for dynamic lighting. James and Fatahalian [2003] compute and interpolate transfer vectors for several key frames of given animations and render the pre-animated models under environment maps in real time. This method, however, does not generalize well to dynamic scenes that contain moving local light sources and numerous degrees of freedom. Kautz et al. [2004] propose a hemispherical rasterizer that computes the self-visibility of each vertex on the fly; however, this approach incurs heavy rendering costs for a complex scene containing many objects.

3. Algorithm
3.1 Precomputed visibility function
For each occluder, we first compute the visibility of the light source from different viewpoints. The visibility function has six parameters and is defined as follows:

visibility(a, b, c, d, e, f) = (solid angle subtended by the visible part of the light source) / (solid angle subtended by the whole light source)

a -- longitude angle of the viewpoint in the occluder's local frame
b -- colatitude angle of the viewpoint in the occluder's local frame
c -- distance of the viewpoint from the origin of the occluder's local frame
d -- angle between the view direction and the light direction
e -- rotation angle of the light direction
f -- apex angle subtended by the light source

The ratio of the two solid angles is approximated by casting a uniform distribution of rays from the viewpoint toward the light source and counting the fraction that reach it (see the sketch after the parameter list below). We also assume that the apex angle is small and can be treated as a user-defined constant. The approximate visibility function therefore has five parameters and is defined as follows:

visibility(a, b, c, d, e) = (number of visible rays) / (total number of rays)

a -- longitude angle of the viewpoint in the occluder's local frame, with range [0, 2*pi)
b -- colatitude angle of the viewpoint in the occluder's local frame, with range [0, pi]
c -- distance of the viewpoint from the origin of the occluder's local frame, with range [dp, infinity)
d -- angle between the view direction and the light direction, with range [0, pi/2]
e -- rotation angle of the light direction, with range [0, 2*pi)
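
To make the sampling concrete, below is a minimal Python sketch of this Monte-Carlo estimate. The viewpoint and light direction are given here in Cartesian coordinates (converting from the spherical parameters a, b, c and the angles d, e is a standard change of coordinates), and `intersects(origin, direction)` stands in for a hypothetical ray test against the occluder geometry; all names and the default ray count are illustrative assumptions, not part of a fixed implementation.

import math
import random

def sample_cone(axis, half_angle):
    """Draw a direction uniformly (by solid angle) from the cone of the
    given half-angle around the unit axis vector."""
    cos_t = 1.0 - random.random() * (1.0 - math.cos(half_angle))
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    phi = 2.0 * math.pi * random.random()
    # Build an orthonormal basis (t1, t2, axis) around the cone axis.
    ax, ay, az = axis
    if abs(ax) < 0.9:
        t1 = (0.0, az, -ay)      # cross(axis, x-hat), unnormalized
    else:
        t1 = (-az, 0.0, ax)      # cross(axis, y-hat), unnormalized
    n = math.sqrt(t1[0]**2 + t1[1]**2 + t1[2]**2)
    t1 = (t1[0] / n, t1[1] / n, t1[2] / n)
    t2 = (ay * t1[2] - az * t1[1],
          az * t1[0] - ax * t1[2],
          ax * t1[1] - ay * t1[0])   # cross(axis, t1)
    return tuple(sin_t * math.cos(phi) * t1[i]
                 + sin_t * math.sin(phi) * t2[i]
                 + cos_t * axis[i] for i in range(3))

def visibility(viewpoint, light_dir, apex_angle, intersects, n_rays=256):
    """Monte-Carlo estimate of the five-parameter visibility function:
    the fraction of rays toward the light source that miss the occluder.
    `intersects(origin, direction)` is a hypothetical ray test against
    the occluder geometry."""
    visible = sum(1 for _ in range(n_rays)
                  if not intersects(viewpoint,
                                    sample_cone(light_dir, apex_angle / 2.0)))
    return visible / n_rays

For example, visibility((0.0, 0.0, 2.0), (0.0, 0.0, 1.0), math.radians(10), my_ray_test) would estimate the visible fraction of a light source subtending a 10-degree apex angle, given some ray test my_ray_test.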

Since the data will be stored in texture files, normalized parameters u, v, w, x, and y, computed with the following formulas, are used (a short code sketch follows the formulas):

u = a / (2*pi)
v = b / pi
w = (c - dp) / (c - dp + 1)
x = 2*d / pi
y = e / (2*pi)
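
A small sketch of this normalization, assuming dp is supplied as the minimum sampled distance; each output lands in [0, 1] (half-open for the periodic angles and the distance), so the five values can serve directly as texture coordinates:

import math

def normalize_params(a, b, c, d, e, dp):
    """Map the five visibility parameters to texture coordinates.
    a in [0, 2*pi), b in [0, pi], c in [dp, inf),
    d in [0, pi/2], e in [0, 2*pi)."""
    u = a / (2.0 * math.pi)         # longitude        -> [0, 1)
    v = b / math.pi                 # colatitude       -> [0, 1]
    w = (c - dp) / (c - dp + 1.0)   # distance         -> [0, 1)
    x = 2.0 * d / math.pi           # view/light angle -> [0, 1]
    y = e / (2.0 * math.pi)         # light rotation   -> [0, 1)
    return u, v, w, x, y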

3.2 Compression of visibility function
To be continued..
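
As a first illustration of the compression step described in the abstract (PCA compression of the tabulated visibility data, stored in lookup textures), here is a minimal sketch of what such a step could look like. The data layout assumed here (one row per (u, v, w) viewpoint cell, with columns enumerating the (x, y) light-direction samples) and the number of retained components are assumptions for the example, not a final design.

import numpy as np

def pca_compress(table, n_components=8):
    """Compress a visibility table with PCA.  `table` is an
    (n_samples, n_dims) array; each row is assumed to hold one
    viewpoint cell's visibility values over all light directions.
    Returns the mean, the principal basis vectors, and per-row
    coefficients, which together can be packed into lookup textures."""
    mean = table.mean(axis=0)
    centered = table - mean
    # SVD of the centered data yields the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]       # (n_components, n_dims)
    coeffs = centered @ basis.T     # (n_samples, n_components)
    return mean, basis, coeffs

def pca_reconstruct(mean, basis, coeffs):
    """Approximate the original table from its PCA representation."""
    return mean + coeffs @ basis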

