= Introduction =
Mesostructures are fine details on a surface, and rendering them has become a hot topic in computer graphics, chiefly because of the amount of realism and the visual effects (such as occlusion, self-shadow and correct silhouettes) they provide. The techniques most commonly used in games and implemented in most game engines (normal mapping, bump mapping) are not able to produce such rich visual effects. The traditional technique proposed for rendering fine structures is displacement mapping, but due to the large number of polygons involved it is very slow and nowhere close to real time. In recent years this problem has been approached from a different perspective via per-pixel displacement mapping, which tries to achieve the same visual effects and realism as displacement mapping but does so in the pixel domain, giving real-time performance. Many approaches have been proposed for per-pixel displacement mapping, such as relief mapping, view-dependent displacement mapping (VDM), generalized displacement mapping (GDM), cone step mapping, sphere tracing using distance functions, and pyramidal displacement mapping, to name a few.

The main project is to develop an easy-to-use C++ library for per-pixel displacement mapping. The main focus is to provide per-pixel displacement mapping support for height-field, non-height-field and dynamic height-field mesostructures. The effects being targeted are self-shadow, silhouettes and 3D impostors.
[[Image:Original.jpg|thumb|300px|Comparison between normal mapping and per-pixel displacement mapping]]


Support is provided for the following per-pixel displacement mapping techniques and effects –


# '''Techniques''' –
#* Relief mapping: height field and non-height field
#* Sphere tracing using distance functions
#* Pyramidal displacement mapping
#* Maximum mipmaps for dynamic height field rendering
# '''Effects''' –
#* 3D impostors
#* Self-shadow
#* Occlusion
#* Silhouette


= Basic algorithm = 
[[Image:Perpixel.jpg|thumb|140px|General Overview]]
We start at the input texture coordinate and follow the viewing ray in texture space. Instead of shading with this bump-mapped texture coordinate directly, we trace the ray from the starting texture coordinate to the point where it hits the height field. Based on the appropriate pre-processed data and rendering technique, we find the intersection of this ray with the mesostructure and use the texture coordinate of that point to access the color texture, normal texture, etc.
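The sketch below (plain C++, evaluated once per fragment on the CPU purely for illustration; names such as <code>shadeFragment</code> and <code>intersect</code> are hypothetical, not the actual library API) shows only this shared control flow, with the technique-specific intersection routine passed in as a parameter:

<source lang="cpp">
#include <functional>

// Illustrative sketch, not the actual library API: one fragment's worth of
// work, independent of the chosen intersection technique.
struct TexCoord { float u, v; };
struct ViewRay  { float du, dv, dheight; };   // tangent-space direction; dheight < 0 points into the surface
struct RGB      { float r, g, b; };

RGB shadeFragment(TexCoord entry, ViewRay ray,
                  const std::function<TexCoord(TexCoord, ViewRay)>& intersect,  // relief mapping, sphere tracing, ...
                  const std::function<RGB(TexCoord)>& colorTex,
                  const std::function<RGB(TexCoord)>& normalTex)
{
    TexCoord hit = intersect(entry, ray);   // technique-specific ray / mesostructure intersection
    RGB n = normalTex(hit);                 // normal fetched at the displaced coordinate (lighting omitted)
    (void)n;
    return colorTex(hit);                   // colour fetched at the displaced coordinate, not the input one
}
</source>

The sections below describe the intersection routines that would fill the <code>intersect</code> slot.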


= Detailed description =
Various per-pixel displacement mapping techniques have been proposed in the past few years. The most notable are Relief mapping, Cone step mapping, Pyramidal displacement mapping, View-dependent displacement mapping (VDM), Generalized displacement mapping (GDM), Per-Pixel Displacement Mapping with Distance Functions, and Maximum Mipmaps for Fast, Accurate, and Scalable Dynamic Height Field Rendering. All of these techniques have two things in common: a) they all require pre-processed data, and b) they all perform a ray/height-field intersection either at a pre-processing stage or during rendering.
== Relief mapping ==
===Height field===  
[[Image:ReliefMapping.jpg|thumb|Relief mapping]]
Relief mapping, in short, consists of two steps: a linear step and a binary step. In the linear step, we start from the input texture coordinates (A) and take constant-sized steps in the viewing direction until we find a point inside the surface (3), i.e. a point whose height along the ray is less than the height of the surface at that texel.
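A minimal CPU sketch of the linear step, assuming the height field is stored as a plain array with heights in [0, 1] and that the ray enters the relief volume at height 1 (the names <code>HeightField</code>, <code>linearSearch</code> etc. are illustrative, not the actual library API):

<source lang="cpp">
#include <cstddef>
#include <vector>

// Illustrative sketch, not the actual library API. Heights lie in [0, 1]:
// 1 is the top of the relief volume, 0 the deepest point. The viewing ray
// enters the volume at (u0, v0) with height 1 and descends.
struct HeightField {
    int n = 0;
    std::vector<float> h;                                 // n x n heights, row-major
    float at(float u, float v) const {                    // nearest-texel lookup, coordinates wrap
        int x = static_cast<int>(u * n) % n; if (x < 0) x += n;
        int y = static_cast<int>(v * n) % n; if (y < 0) y += n;
        return h[static_cast<std::size_t>(y) * n + x];
    }
};

struct RayPoint { float u, v, height; };

// Linear step: take constant-sized steps along the ray until the sample
// falls below the surface (ray height < stored height). The last point
// above the surface and the first point below it bracket the true
// intersection for the binary step.
bool linearSearch(const HeightField& hf,
                  float u0, float v0,                     // entry texture coordinate, height = 1
                  float du, float dv, float dh,           // per-step ray increments, dh < 0
                  int maxSteps,
                  RayPoint& above, RayPoint& below)
{
    RayPoint p{u0, v0, 1.0f};
    above = p;
    for (int i = 0; i < maxSteps; ++i) {
        p.u += du; p.v += dv; p.height += dh;
        if (p.height < hf.at(p.u, p.v)) {                 // first sample inside the surface
            below = p;
            return true;
        }
        above = p;
    }
    return false;                                         // ray left the volume without hitting anything
}
</source>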


Now we find the actual intersection point using a binary step. We take the starting point (A) as t_s and point 3 as t_e, and evaluate the height of the midpoint of t_s and t_e along the ray. If the height of this midpoint is greater than that of the surface underneath, t_s becomes this midpoint; if it is less, t_e becomes this midpoint; if it is equal, we have found the intersection point.
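Continuing the sketch above, the binary step simply halves the bracket returned by <code>linearSearch</code> a fixed number of times:

<source lang="cpp">
// Binary step, continuing the sketch above: repeatedly halve the bracket
// [t_s, t_e] returned by linearSearch. A midpoint above the stored height
// becomes the new t_s, otherwise it becomes the new t_e; each iteration
// halves the remaining error.
RayPoint binaryRefine(const HeightField& hf, RayPoint ts, RayPoint te, int iterations)
{
    for (int i = 0; i < iterations; ++i) {
        RayPoint mid{(ts.u + te.u) * 0.5f,
                     (ts.v + te.v) * 0.5f,
                     (ts.height + te.height) * 0.5f};
        if (mid.height > hf.at(mid.u, mid.v))
            ts = mid;                                     // midpoint still above the surface
        else
            te = mid;                                     // midpoint on or below the surface
    }
    return te;                                            // approximation of the intersection point
}
</source>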
===Non-height field===
In the case of a non-height-field mesostructure, the relief mapping intersection algorithm explained above is applied in parallel to relief maps at various depths, and we keep the intersection point that is closest to the viewer.
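Continuing the same sketch, a straightforward reading of this is to run the search against each layer and keep the hit nearest the viewer (a real non-height-field relief map packs several depth layers into the channels of one texture, which is omitted here for clarity):

<source lang="cpp">
// Illustrative sketch (not the actual library API), reusing HeightField,
// RayPoint, linearSearch and binaryRefine from the height-field sketch.
// Each layer is treated as an independent relief map; the hit found
// earliest along the descending ray (largest remaining height) wins.
bool intersectLayers(const std::vector<HeightField>& layers,
                     float u0, float v0, float du, float dv, float dh,
                     int linearSteps, int binarySteps, RayPoint& closest)
{
    bool hitAny = false;
    for (const HeightField& layer : layers) {
        RayPoint above{}, below{};
        if (!linearSearch(layer, u0, v0, du, dv, dh, linearSteps, above, below))
            continue;                                     // this layer is never hit
        RayPoint hit = binaryRefine(layer, above, below, binarySteps);
        if (!hitAny || hit.height > closest.height) {     // larger height = closer to the viewer
            closest = hit;
            hitAny = true;
        }
    }
    return hitAny;
}
</source>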


== Sphere tracing ==  
[[Image:Spheretracing.jpg|thumb|Sphere tracing]]
'''Preprocessing step''':
For each point in the 3D space we store the shortest distance to the nearest point on the surface; the resulting volume is the ''distance map''. This distance map is created from either a height field or a detailed mesh, and is passed to the GPU as a 3D texture to be used in the rendering stage.
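A brute-force CPU sketch of this preprocessing step, assuming the surface is given as a height field (a practical preprocessor would use a fast distance transform such as Danielsson's algorithm rather than the O(n^6) loops below; all names here are illustrative, not the actual library API):

<source lang="cpp">
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative sketch, not the actual library API.
struct DistanceMap {
    int n = 0;
    std::vector<float> d;                                 // n*n*n distances, x fastest
    float at(int x, int y, int z) const {
        return d[(static_cast<std::size_t>(z) * n + y) * n + x];
    }
};

// 'height' is an n x n height field with values in [0, 1]; cell (x, y, z) is
// treated as solid when its centre lies on or below the surface. Brute force
// is used only to keep the sketch short.
DistanceMap buildDistanceMap(const std::vector<float>& height, int n)
{
    std::vector<std::array<int, 3>> solid;
    for (int z = 0; z < n; ++z)
        for (int y = 0; y < n; ++y)
            for (int x = 0; x < n; ++x)
                if ((z + 0.5f) / n <= height[static_cast<std::size_t>(y) * n + x])
                    solid.push_back({x, y, z});

    DistanceMap dm;
    dm.n = n;
    dm.d.assign(static_cast<std::size_t>(n) * n * n, 1.0f);
    if (solid.empty())
        return dm;                                        // empty surface: everything is "far away"

    for (int z = 0; z < n; ++z)
        for (int y = 0; y < n; ++y)
            for (int x = 0; x < n; ++x) {
                float best = 1e9f;
                for (const std::array<int, 3>& s : solid) {
                    float dx = float(x - s[0]), dy = float(y - s[1]), dz = float(z - s[2]);
                    best = std::min(best, std::sqrt(dx * dx + dy * dy + dz * dz));
                }
                dm.d[(static_cast<std::size_t>(z) * n + y) * n + x] = best / n;   // normalised to texture units
            }
    return dm;
}
</source>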
 
'''Rendering step''': 
In the fragment shader, we start from texture coordinate '''p0''' in the direction '''r'''. We access the distance map at this point to find the distance '''d0''' to the nearest point on the surface. By moving to point '''p1 = p0 + d0*r''', we are guaranteed not to skip over any intersection. This step is repeated until we converge to the final intersection point.
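The same march written as plain C++ against the <code>DistanceMap</code> built in the preprocessing sketch above (the shipped version would live in the fragment shader, sampling the 3D texture instead):

<source lang="cpp">
#include <algorithm>   // std::min / std::max; other includes as in the preprocessing sketch

// Illustrative sketch, not the actual library API.
struct Vec3 { float x, y, z; };

// Sphere-trace through the distance map: each step advances by the stored
// distance, which by construction can never overshoot the surface.
Vec3 sphereTrace(const DistanceMap& dm, Vec3 p0, Vec3 r, int maxSteps)
{
    Vec3 p = p0;
    for (int i = 0; i < maxSteps; ++i) {
        // Nearest-cell lookup of the distance at the current point (coordinates in [0, 1]).
        int x = std::min(dm.n - 1, std::max(0, static_cast<int>(p.x * dm.n)));
        int y = std::min(dm.n - 1, std::max(0, static_cast<int>(p.y * dm.n)));
        int z = std::min(dm.n - 1, std::max(0, static_cast<int>(p.z * dm.n)));
        float d = dm.at(x, y, z);
        if (d < 1.0f / dm.n)            // within one cell of the surface: converged
            break;
        p.x += d * r.x;                 // p1 = p0 + d0 * r, repeated until convergence
        p.y += d * r.y;
        p.z += d * r.z;
    }
    return p;
}
</source>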
 
== Pyramidal displacement mapping ==
=== Preprocessing ===
In this step, a pyramid of height maps is created, with resolutions varying from 2^n x 2^n down to 1x1. Each texel at level i+1 stores the maximum height value of the four texels lying below it at level i; at resolution 1x1, the maximum height of the entire height map is stored.
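A CPU sketch of the pyramid construction for a power-of-two height field (illustrative names, not the actual library API):

<source lang="cpp">
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Illustrative sketch, not the actual library API. 'height' is a size x size
// height field (size must be a power of two). Level 0 is the original map;
// each texel of level i+1 stores the maximum of the four texels it covers at
// level i, ending with a single 1x1 texel that holds the global maximum.
std::vector<std::vector<float>> buildMaxPyramid(const std::vector<float>& height, int size)
{
    std::vector<std::vector<float>> levels;
    levels.push_back(height);
    for (int s = size; s > 1; s /= 2) {
        const std::vector<float>& prev = levels.back();
        int half = s / 2;
        std::vector<float> next(static_cast<std::size_t>(half) * half);
        for (int y = 0; y < half; ++y)
            for (int x = 0; x < half; ++x) {
                float a = prev[static_cast<std::size_t>(2 * y) * s + 2 * x];
                float b = prev[static_cast<std::size_t>(2 * y) * s + 2 * x + 1];
                float c = prev[static_cast<std::size_t>(2 * y + 1) * s + 2 * x];
                float d = prev[static_cast<std::size_t>(2 * y + 1) * s + 2 * x + 1];
                next[static_cast<std::size_t>(y) * half + x] = std::max(std::max(a, b), std::max(c, d));
            }
        levels.push_back(std::move(next));
    }
    return levels;   // levels.size() == log2(size) + 1
}
</source>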
 
=== Rendering ===
At the rendering step, we start at the MAX level and walk down the mipmap structure, tracing the ray and using the stored maxima to skip over empty space until it drops below the maximum height at level 0. At this point, we calculate the intersection of the ray with the height map.
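A deliberately simplified CPU sketch of the traversal over the pyramid built above: it only descends, jumping the ray down to the plane of each cell's stored maximum, and leaves the final exact hit to a relief-mapping style search at level 0. The full algorithm also clamps each jump to the current cell's boundary and can move back up a level; that bookkeeping is omitted here, and the names are illustrative, not the actual library API:

<source lang="cpp">
// Illustrative sketch, not the actual library API; uses the levels returned
// by buildMaxPyramid above. The ray enters at height 1 and descends
// (dheight < 0); u, v and height are all in [0, 1].
struct Ray { float u, v, height, du, dv, dheight; };

Ray descendPyramid(const std::vector<std::vector<float>>& levels, int size, Ray ray)
{
    // Walk from the coarsest (MAX) level down to level 1. Nothing under the
    // current cell is higher than its stored maximum, so the ray can safely
    // jump straight down to that plane before descending to the next level.
    for (int level = static_cast<int>(levels.size()) - 1; level > 0; --level) {
        int res = size >> level;                                   // this level's resolution
        int x = std::min(res - 1, std::max(0, static_cast<int>(ray.u * res)));
        int y = std::min(res - 1, std::max(0, static_cast<int>(ray.v * res)));
        float cellMax = levels[static_cast<std::size_t>(level)][static_cast<std::size_t>(y) * res + x];
        if (ray.height > cellMax) {
            float t = (cellMax - ray.height) / ray.dheight;        // number of unit steps to the plane
            ray.u += t * ray.du;
            ray.v += t * ray.dv;
            ray.height = cellMax;
        }
    }
    return ray;   // just above the full-resolution surface; refine at level 0
}
</source>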
 
= Results =
<gallery Caption="Relief mapping - height field">
Image:Relief_mapping_1.jpg|View 1
Image:Relief_mapping_2.jpg|View 2
Image:Relief_mapping_3.jpg|View 3
Image:Relief_mapping_4.jpg|View 4
Image:OriginalText.jpg|Original flat polygon without relief mapping
Image:Relief_mapping_5.jpg|View 1
Image:Relief_mapping_6.jpg|View 2
Image:Relief_mapping_7.jpg|View 3
</gallery>
 
<gallery Caption="Sphere tracing ">
Image:Sphere_tracing1.jpg|View 1
Image:Sphere_tracing2.jpg|View 2
Image:Sphere_tracing3.jpg|View 3
Image:Sphere_tracing4.jpg|View 4
Image:OriginalText.jpg|Original flat polygon without sphere tracing
Image:Sphere_tracing5.jpg|View 1
Image:Sphere_tracing6.jpg|View 2
Image:Sphere_tracing7.jpg|View 3
</gallery>
 
<gallery Caption="Relief mapping - non-height field ">
Image:Dog1.jpg|Original flat polygon mesh(two triangles) without relief mapping
Image:Dog2.jpg|Relief mapping - <br> Front view
Image:Dog3.jpg|Relief mapping - <br> Side view 1
Image:Dog4.jpg|Relief mapping - <br> Side view 2
Image:Dog5.jpg|Relief mapping - <br> Side view 3
Image:Dog6.jpg|Relief mapping - <br> Side view 4
Image:Dog7.jpg|Relief mapping - <br> Side view 5
Image:Dog8.jpg|Relief mapping - <br> Bottom view
Image:Dog9.jpg|Relief mapping - <br> Side view 6
Image:Dog10.jpg|Relief mapping - <br> Side view 7
Image:Dog11.jpg|Relief mapping - <br> Side view 8
Image:Dog12.jpg|Relief mapping - <br> Side view 9
</gallery>
 
 
<gallery Caption="Pyramidal mapping ">
Image:PDM1.jpg|View 1
Image:PDM2.jpg|View 2
Image:PDM3.jpg|View 3
Image:PDM4.jpg|View 4
</gallery>
 
<gallery Caption="Mesostructure rendering of a statue one a 3D model - cylinder ">
Image:Statue1.jpg|View 1
Image:Statue2.jpg|View 2
Image:Statue4.jpg|View 3
Image:Statue5.jpg|View 4
</gallery>
 
 
<gallery Caption="Mesostructure rendering of a roof tile on a 3D model - sphere ">
Image:Roof1.jpg|View 1
Image:Roof2.jpg|View 2
Image:Roof3.jpg|View 3
Image:Roof4.jpg|View 4
</gallery>
 
= Video =
 
<div align="center">Rendering of mesostructure<br>
 
<videoflash>joGWQiJBCcc</videoflash></div>


=References of related publications=
# '''More details''' : http://www.cs.unc.edu/~ravishm
# '''Relief mapping''' : http://www.inf.ufrgs.br/~oliveira/RTM.html
# '''Cone step mapping''': http://www.lonesock.net/files/ConeStepMapping.pdf
# '''VDM''' : http://research.microsoft.com/users/lfwang/vdm.pdf
# '''GDM''': http://research.microsoft.com/users/xtong/gdm_electronic.zip
# '''Pyramidal displacement mapping''' : http://ki-h.com/article/ipdm.html
# '''Sphere tracing''' : ftp://download.nvidia.com/developer/GPU_Gems_2/GPU_Gems2_ch08.pdf
# '''Maximum Mipmaps for Fast, Accurate, and Scalable Dynamic Height Field Rendering''' : http://www.tevs.eu/project_i3d08.html
