Photo-Realistic Computer Graphics
CIS 5930

 

Spring 2002
TTh 2:00
499 Dirac Science Library
Dr. David C. Banks

 

 
08 January

Create a notebook for the course. Put printed copies of papers in it.

Create a Web area for the course.

Find a Web log tool (such as blogger or slashcode or PHP-nuke) so you can add comments each week. Google info on "web logging".

(Jan 13, 2002) Note: The above assignment is cancelled. Do not attempt to use a Web logging tool for this course. Too much time and effort are required.

Homework 00
Visit POVray. Download POVray to a machine of your choice. Try out their examples. Create a scene of your own. Create an animation by moving the camera, rendering, and saving frames. Put your results under your course Web page.

Reading
An Improved Illumination Model
Distributed Ray Tracing
Framework for Realistic Image Synthesis
Distribution Ray Tracing [pdf version]

15 January

Homework 01 Visit radiance. Download. Create images. Put them on your Web page.

Part 2 Create a sphere in OpenInventor. Sample the sphere using rejection on a random vector p: draw p uniformly from the sphere's bounding cube, and if (0.0 < p.length() && p.length() < sphere.radius), then p.normalize(). (Note that the chained comparison 0.0 < a < b does not mean what you want in C++.) At each sample point, calculate the emittance in the direction of the camera. Put a small sphere at the sample point, with color given by the emittance.
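
A minimal C++ sketch of the rejection step (this is not the pointOnSphere.cxx mentioned below, and Vec3 is a stand-in for whatever vector class you use):

    #include <cstdlib>
    #include <cmath>

    struct Vec3 {
        double x, y, z;
        double length() const { return std::sqrt(x*x + y*y + z*z); }
        void normalize() { double L = length(); x /= L; y /= L; z /= L; }
    };

    // Uniform random value in [-r, r].
    double uniform(double r) { return r * (2.0 * std::rand() / RAND_MAX - 1.0); }

    // Rejection sampling: draw points in the sphere's bounding cube, keep
    // only those strictly inside the sphere, then project onto the surface.
    Vec3 randomDirection(double radius) {
        for (;;) {
            Vec3 p = { uniform(radius), uniform(radius), uniform(radius) };
            double len = p.length();
            if (0.0 < len && len < radius) {  // inside the sphere, not at the center
                p.normalize();                // project onto the unit sphere
                return p;                     // scale by radius, offset by center as needed
            }
        }
    }

Rejection keeps the directions uniformly distributed over the sphere; sampling two angles uniformly instead would cluster samples at the poles.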

You are welcome to use or modify my code. The header files are not included, nor is pointOnSphere.cxx included.
addSphereToScene.cxx
emittance.cxx
Sphere.cxx

Reading
Adaptive Quadrature
Monte Carlo Methods
Practitioner's assessment

22 January

Homework 02

Download BMRT. Make images for your Web page.

Gordon Erlebacher
Department of Mathematics
Florida State University
"On the Challenge of Visualizing Vector Fields"
Friday, January 25, 2002 4:30 P.M.
499 Dirac Science Library

Part 2 Make your Inventor program read the scene-description from a file. For example:

    Sphere
      {
      center    1.0 2.0 -3.0
      radius    1.5
      emittance
        [
          [ 1050.3 ],  # red
          [ 1692.8 ],  # green
          [ 2319.2 ]   # blue
        ]
      }

Make the number of samples a command-line flag. Distribute the samples across the entire scene. As the application runs, continually delete old samples and resample at new locations. The emittance can be an arbitrarily large value, so you must map it to something between 0.0 and 1.0: divide all the colors by maxEmittance, then raise the result to a power gamma (supplied on the command line).
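
One way to write that mapping in C++ (toDisplay is a hypothetical helper name; maxEmittance and gamma come from your scene and command line):

    #include <cmath>

    // Map an arbitrary non-negative emittance into [0, 1]: normalize by the
    // largest emittance in the scene, then apply a gamma curve.
    double toDisplay(double emittance, double maxEmittance, double gamma) {
        double normalized = emittance / maxEmittance;  // now in [0, 1]
        return std::pow(normalized, gamma);            // gamma < 1 brightens dark values
    }

A gamma below 1.0 lifts the dim samples, which helps when a few bright emitters dominate maxEmittance.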

Reading
Optical Models
Display of Surfaces
The Foundations of Photo-realistic Rendering

29 January

Homework 03

Download and run bv. Download and run brdfview.

Part 2 Create an isotropic emittance format. The emittance varies with the dot product D between the surface normal and the vector vOut. Then create a scene with some spheres having different emittance functions. Use linear interpolation between adjacent table entries, or else use some filter function of your choice (a lookup sketch follows the example below).

    DEF MY_EMITTANCE Emittance
      {
      distribution
        [                           # red
          [
          0.0, 0.0, 0.0, 0.0, 0.0,  # negative D
          0.1, 2.0, 3.0, 8.5, 9.0
          ],
                                    # green
          [
          0.0, 0.0, 0.0, 0.0, 0.0,  # negative D
          0.1, 1.0, 2.0, 3.0, 3.5
          ],
                                    # blue
          [
          0.0, 0.0, 0.0, 0.0, 0.0,  # negative D
          0.1, 1.5, 4.0, 6.0, 7.0
          ]
        ]
      }
    Sphere
      {
      emittance USE MY_EMITTANCE
      }
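
A minimal C++ sketch of the linear-interpolation lookup, assuming the ten samples of each channel span D evenly from -1 to 1 (the function name and layout here are illustrative, not part of the file format):

    #include <cmath>

    // Evaluate one tabulated emittance channel at dot product D in [-1, 1],
    // linearly interpolating between adjacent table entries.
    double lookupEmittance(const double table[], int numSamples, double D) {
        double t = 0.5 * (D + 1.0) * (numSamples - 1);  // map D onto [0, numSamples-1]
        int    i = (int)std::floor(t);
        if (i < 0)               return table[0];
        if (i >= numSamples - 1) return table[numSamples - 1];
        double frac = t - i;
        return (1.0 - frac) * table[i] + frac * table[i + 1];
    }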

Reading
Progressive radiosity.
Radiosity overview.
BMRT.
Comparison.
Codimension.
Surface-to-surface.
Photon mapping.
Radiosity.
Light field.

05 February

Homework 04
Download the Light field package and make some images.

Create rays around a sphere having center c. When the sphere is sampled at point p, create a cylinder through c and p. Assign the cylinder a length L and radius r according to some reasonable guess that depends on the number of samples (specified in the command line). Where a cylinder passes through another sphere, indicate the intersection by placing a small sphere at that point. Color the intersection-sphere marker according to the emittance function from the sphere that is stabbed.

You can use or modify the sphere-intersection code linked from the course page; a sketch of the standard test follows in case the link is unavailable.
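
This is the usual quadratic-formula test, not necessarily identical to the linked code; it assumes a unit-length ray direction:

    #include <cmath>

    // Does the ray p + t*d (d unit length) hit the sphere of radius r
    // centered at c? Returns the nearest t > 0, or a negative value on a miss.
    double intersectSphere(const double p[3], const double d[3],
                           const double c[3], double r) {
        double oc[3] = { p[0] - c[0], p[1] - c[1], p[2] - c[2] };
        double b  = oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2];        // dot(oc, d)
        double cc = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;
        double disc = b*b - cc;                                  // discriminant (a = 1)
        if (disc < 0.0) return -1.0;                             // ray misses the sphere
        double s = std::sqrt(disc);
        double t = -b - s;                                       // near root
        if (t < 0.0) t = -b + s;                                 // origin inside the sphere
        return t;
    }

For a thin cylinder of radius rCyl, one reasonable approximation is to reuse this test along the cylinder's axis with the target sphere's radius grown by rCyl.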

Reading
Light field
View Interpolation (Shenchang Eric Chen) (search the Web)

12 February

Homework 05
Download or implement a ray-triangle intersection routine.
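
If you implement it yourself, the Möller-Trumbore test is a common choice; a self-contained sketch (a downloaded routine will differ in details):

    #include <cmath>

    static void cross(const double a[3], const double b[3], double out[3]) {
        out[0] = a[1]*b[2] - a[2]*b[1];
        out[1] = a[2]*b[0] - a[0]*b[2];
        out[2] = a[0]*b[1] - a[1]*b[0];
    }
    static double dot(const double a[3], const double b[3]) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // Moller-Trumbore ray-triangle intersection. Ray is p + t*d; the triangle
    // has vertices v0, v1, v2. Returns true and the hit distance t on a hit.
    bool intersectTriangle(const double p[3], const double d[3],
                           const double v0[3], const double v1[3],
                           const double v2[3], double *t) {
        const double EPS = 1e-9;
        double e1[3] = { v1[0]-v0[0], v1[1]-v0[1], v1[2]-v0[2] };
        double e2[3] = { v2[0]-v0[0], v2[1]-v0[1], v2[2]-v0[2] };
        double h[3]; cross(d, e2, h);
        double det = dot(e1, h);
        if (std::fabs(det) < EPS) return false;    // ray parallel to triangle
        double s[3] = { p[0]-v0[0], p[1]-v0[1], p[2]-v0[2] };
        double u = dot(s, h) / det;
        if (u < 0.0 || u > 1.0) return false;
        double q[3]; cross(s, e1, q);
        double v = dot(d, q) / det;
        if (v < 0.0 || u + v > 1.0) return false;
        *t = dot(e2, q) / det;
        return *t > EPS;                           // hit in front of the ray origin
    }

Looping this test over every triangle in the isosurface is O(n) per ray, which is workable at this scale.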

Download or implement a marching-cubes triangle generator. Create a dragger to select the isovalue of the scalar function.

Download or create a 3D scalar field (such as a 512-byte header, 217x217x217 1-byte dataset of a human brain). Consult the man page for fopen() if you haven't done file I/O under unix before. Or ask fellow students for help.
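
A minimal sketch of reading that raw layout with fopen(); the filename is hypothetical, and the 512-byte header and one-byte samples are the ones quoted above:

    #include <cstdio>
    #include <vector>

    // Read a raw volume: headerBytes of header, then dim^3 one-byte samples,
    // stored as data[x + dim*(y + dim*z)].
    bool readVolume(const char *filename, int dim, long headerBytes,
                    std::vector<unsigned char> &data) {
        std::FILE *fp = std::fopen(filename, "rb");
        if (!fp) return false;
        std::fseek(fp, headerBytes, SEEK_SET);     // skip the header
        data.resize((size_t)dim * dim * dim);
        size_t got = std::fread(&data[0], 1, data.size(), fp);
        std::fclose(fp);
        return got == data.size();                 // false on a short read
    }

For the brain dataset the call would look like readVolume("brain.raw", 217, 512, data), with "brain.raw" standing in for whatever the file is actually named.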

Specify a viewpoint (from the command line, or via a dragger) in the scene with the brain isosurface. Randomly sample directions from a sphere around the viewpoint. Show where each ray intersects the isosurface (by looping over the triangles in the isosurface to see which one is stabbed).

Reading
Implicit Surfaces
Polygonizing a Scalar Field

19 February

Homework 06

Kiril Vidimce
"Normal Meshes"
11:00 am Monday, February 25, 2002
Dirac 499 Seminar Room

Part 1 Search the Web for "trilinear interpolation". This is a simple scheme for determining the value of a function f(x,y,z) at points that are not on the grid. Implement it for use with your scalar function on a 3D grid from last week (copy; paste).
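
One possible C++ implementation, assuming the volume is stored one byte per sample as f[x + xDim*(y + yDim*z)] (the layout from last week's sketch):

    #include <cmath>

    // Value of the field at an on-grid point (i, j, k).
    static double at(const unsigned char *f, int xDim, int yDim,
                     int i, int j, int k) {
        return f[i + xDim * (j + yDim * k)];
    }

    // Trilinear interpolation of a byte-valued field on a regular grid;
    // (x, y, z) may lie between grid points.
    double interpolate(const unsigned char *f, int xDim, int yDim, int zDim,
                       double x, double y, double z) {
        int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
        // Clamp so the cell [x0, x0+1] x [y0, y0+1] x [z0, z0+1] stays on the grid.
        if (x0 < 0) x0 = 0;  if (x0 > xDim - 2) x0 = xDim - 2;
        if (y0 < 0) y0 = 0;  if (y0 > yDim - 2) y0 = yDim - 2;
        if (z0 < 0) z0 = 0;  if (z0 > zDim - 2) z0 = zDim - 2;
        double dx = x - x0, dy = y - y0, dz = z - z0;
        // Interpolate along x on the cell's four x-aligned edges...
        double c00 = at(f,xDim,yDim, x0,y0,  z0  )*(1-dx) + at(f,xDim,yDim, x0+1,y0,  z0  )*dx;
        double c10 = at(f,xDim,yDim, x0,y0+1,z0  )*(1-dx) + at(f,xDim,yDim, x0+1,y0+1,z0  )*dx;
        double c01 = at(f,xDim,yDim, x0,y0,  z0+1)*(1-dx) + at(f,xDim,yDim, x0+1,y0,  z0+1)*dx;
        double c11 = at(f,xDim,yDim, x0,y0+1,z0+1)*(1-dx) + at(f,xDim,yDim, x0+1,y0+1,z0+1)*dx;
        // ...then along y, then along z.
        double c0 = c00*(1-dy) + c10*dy;
        double c1 = c01*(1-dy) + c11*dy;
        return c0*(1-dz) + c1*dz;
    }

The clamping keeps the interpolation cell on the grid, which also softens the off-grid probes you will make in Part 2.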

Part 2

The partial derivative d/dx of a scalar function can be approximated by the central difference of function values in the x direction (the division by 2 assumes unit grid spacing):

d/dx f(xi, yi, zi) = ( f(xi+1, yi, zi) - f(xi-1, yi, zi) ) / 2

The partial derivatives d/dy and d/dz are defined in a similar way.

Notice that you can compute these partials even when the point p=(x,y,z) is off the grid. Just use your interpolation function.

You must guard against going off the grid when you add or subtract 1 from the coordinate of the point.

Write a routine that computes the normal vector at a vertex on your isosurface. Use the gradient. The components of the gradient are the partials (df/dx,df/dy,df/dz). The normal of the isosurface through point p lies parallel to the gradient. Should the normal point in the positive or negative direction of the gradient? It depends on the dataset. Be prepared to try both ways.
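
A sketch of the normal routine, reusing interpolate() from the Part 1 sketch; the sign argument is there because, as noted above, the dataset determines which way the normal should point:

    #include <cmath>

    double interpolate(const unsigned char *f, int xDim, int yDim, int zDim,
                       double x, double y, double z);   // from the Part 1 sketch

    static double clampTo(double v, double lo, double hi) {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    // Estimate the unit normal at a (possibly off-grid) point as the
    // normalized gradient of the interpolated field, via central differences.
    // Pass sign = +1.0 or -1.0 to flip the normal for your dataset. Probes
    // are clamped to the grid; near a boundary the estimate degrades to a
    // one-sided difference.
    void isosurfaceNormal(const unsigned char *f, int xDim, int yDim, int zDim,
                          double x, double y, double z, double sign, double n[3]) {
        double xp = clampTo(x + 1, 0, xDim - 1), xm = clampTo(x - 1, 0, xDim - 1);
        double yp = clampTo(y + 1, 0, yDim - 1), ym = clampTo(y - 1, 0, yDim - 1);
        double zp = clampTo(z + 1, 0, zDim - 1), zm = clampTo(z - 1, 0, zDim - 1);
        n[0] = (interpolate(f, xDim, yDim, zDim, xp, y, z) -
                interpolate(f, xDim, yDim, zDim, xm, y, z)) / 2.0;
        n[1] = (interpolate(f, xDim, yDim, zDim, x, yp, z) -
                interpolate(f, xDim, yDim, zDim, x, ym, z)) / 2.0;
        n[2] = (interpolate(f, xDim, yDim, zDim, x, y, zp) -
                interpolate(f, xDim, yDim, zDim, x, y, zm)) / 2.0;
        double len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
        if (len > 0.0) { n[0] *= sign/len; n[1] *= sign/len; n[2] *= sign/len; }
    }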

For every vertex in the isosurface, sample some spherical directions in the hemisphere defined by the normal.

Part 3

To make sampling reasonable on the brain isosurface, first resample the data at a coarser resolution. This is easy to do. Create a new coordinate system x2,y2,z2 with dimensions x2Dim, y2Dim, z2Dim. Write a function to convert from x2,y2,z2 coordinates to x,y,z coordinates. Loop through the new grid. At each point p2 in the new coordinates, find the corresponding point p in the old coordinates. Use the interpolate routine to evaluate f(p).
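
A sketch of that loop, again reusing interpolate() from the Part 1 sketch; the linear map sends each new grid coordinate onto the old grid's range:

    double interpolate(const unsigned char *f, int xDim, int yDim, int zDim,
                       double x, double y, double z);   // from the Part 1 sketch

    // Resample the volume onto a coarser x2Dim x y2Dim x z2Dim grid.
    // f2 must hold x2Dim*y2Dim*z2Dim bytes.
    void resample(const unsigned char *f, int xDim, int yDim, int zDim,
                  unsigned char *f2, int x2Dim, int y2Dim, int z2Dim) {
        for (int z2 = 0; z2 < z2Dim; z2++)
          for (int y2 = 0; y2 < y2Dim; y2++)
            for (int x2 = 0; x2 < x2Dim; x2++) {
                double x = x2 * double(xDim - 1) / (x2Dim - 1);  // new -> old coords
                double y = y2 * double(yDim - 1) / (y2Dim - 1);
                double z = z2 * double(zDim - 1) / (z2Dim - 1);
                f2[x2 + x2Dim * (y2 + y2Dim * z2)] =
                    (unsigned char)(interpolate(f, xDim, yDim, zDim, x, y, z) + 0.5);
            }
    }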

Part 4

Make a globally illuminated image of the brain. Define 2 or 3 light sources (triangles that you put somewhere, with a simple cosine emittance distribution). For each vertex on the isosurface, sample some directions on the hemisphere around the vertex. Collect the radiance that is received from any emitter hit by a ray. Scale that radiance by albedo*cos(theta).

Reading

Paul S. Heckbert and Michael Garland, "Multiresolution Modeling for Fast Rendering".
He et al., "Voxel-based object simplification"

26 February

Homework 07

Download and run the Stanford VolPack volume rendering package. Try it on your scalar volume.

Part 2 For each ray coming from your viewing sphere, follow the ray through a scalar-valued volume. The points q on the ray are defined by the equation q = p + t*dp, where p is the viewpoint and dp is the unit vector defining the ray direction. Different values of t yield different points q along the ray. Specify (either in the command line or in your parameter file) how closely spaced the samples are along the ray. Call this distance dt.

Assume the volume is emissive, with emittance E given by the dot product between vOut and the normal (plus/minus the unit gradient), multiplied by a "transfer function" g(). Integrate the emittance along the ray.

Lin += E(q)*dt

To make a certain isovalue c be the dominant one, make the transfer function spike near c. For example, if h(x,y,z) is the volumetric scalar function, let g(h) = exp( -(h-c)*(h-c)/(s*s) ). The value of s determines the width of the spike.

Color the sphere sample according to the accumulated radiance.
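
Pulling Part 2 together, a sketch of the emission integral; interpolate() and isosurfaceNormal() are the Homework 06 sketches, and the fabs() sidesteps the plus/minus gradient choice:

    #include <cmath>

    double interpolate(const unsigned char *f, int xDim, int yDim, int zDim,
                       double x, double y, double z);              // HW 06 sketch
    void   isosurfaceNormal(const unsigned char *f, int xDim, int yDim, int zDim,
                            double x, double y, double z,
                            double sign, double n[3]);             // HW 06 sketch

    // March along the ray q = p + t*dp, accumulating Lin += E(q)*dt, where
    // E = |vOut . normal| * g(h(q)) and g(h) = exp(-(h-c)^2/(s*s)).
    double integrateEmission(const unsigned char *f, int xDim, int yDim, int zDim,
                             const double p[3], const double dp[3],
                             double tMax, double dt, double c, double s) {
        double Lin = 0.0;
        for (double t = 0.0; t < tMax; t += dt) {
            double q[3] = { p[0] + t*dp[0], p[1] + t*dp[1], p[2] + t*dp[2] };
            double h = interpolate(f, xDim, yDim, zDim, q[0], q[1], q[2]);
            double g = std::exp(-(h - c)*(h - c) / (s*s));  // spike near isovalue c
            double n[3];
            isosurfaceNormal(f, xDim, yDim, zDim, q[0], q[1], q[2], 1.0, n);
            double vOutDotN = -(dp[0]*n[0] + dp[1]*n[1] + dp[2]*n[2]);  // vOut = -dp
            Lin += std::fabs(vOutDotN) * g * dt;            // accumulate emittance
        }
        return Lin;
    }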

Part 3 Instead of being emissive, make each point in the volume have a transmittance function f(vIn, vOut). The function is purely transmissive in the forward direction, so f(v, v) = 1.0. That is, the radiance L coming into a point continues forward, so you can simply add the radiances along the ray. Single scattering is accomplished by adding L(vIn)*f(vIn, vOut) for light coming from a source. You can make the source one or two isolated points, or you can sample it from a region like a triangle or rectangle. Use f = normal.dot(-vIn)*g(h(q)), where g is the transfer function above and h(q) is the volumetric scalar function.

Reading
Debevec, Acquiring the Reflectance Field of a Human Face

05 March

Homework 08
Download Paul Heckbert's Radiosity Visualizer. Make images for your Web page.

Part 2 Add absorption to your volume renderer. Create a second transfer function tau(h), where h() is the scalar function on the 3D domain. The function tau is the extinction coefficient in Max's paper from Homework 02. When tau=0, no absorption occurs; when tau=infty, light is completely absorbed. The radiance L(pk) along the ray has two components: emission and scattering. Assume the emittance comes from a couple of polygons outside the volume. The scattering uses your scattering function from the previous homework, plus absorption.

If points p and p' lie on the ray from the viewpoint, with p closer than p', then you can implement absorption as follows. Let dp be the ray direction (passing through p and p'). When vIn = -dp, light is being transferred from p' to p. At this point and in this direction you get

L(p, vOut) = exp( -tau(p)*dt ) * L(p', vIn)

where dt is the length of the step from p to p', assuming tau is constant over the step.

When you select random directions about each point p for sampling the incoming radiance L, be sure that -dp is one of these directions. You can use different step sizes for different directions surrounding the point p. In particular, you are free to take big step sizes toward the lights.
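
One way to discretize the absorption step in C++; scatteredAtP is a placeholder for whatever your scattering function from the previous homework produces at p:

    #include <cmath>

    // One absorption step along the ray: the radiance already accumulated at
    // p' is attenuated by exp(-tau*dt) on its way to p, then the radiance
    // scattered at p toward the viewer is added.
    double absorbStep(double accumulatedAtPPrime,  // L(p', vIn)
                      double tauAtP,               // extinction coefficient tau(h(p))
                      double scatteredAtP,         // single scattering at p
                      double dt)                   // step length from p to p'
    {
        return std::exp(-tauAtP * dt) * accumulatedAtPPrime + scatteredAtP * dt;
    }

Marching back to front from the far end of the ray and applying this step repeatedly accumulates the full integral; a front-to-back march with an explicit accumulated transmittance works equally well and lets you stop early once the transmittance is negligible.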

Reading
SIGGRAPH 2002. Look over the list of courses. Make a short list of courses you want to attend. Some possibilities are listed below.

2 Advanced Global Illumination Dutré, Bala Sunday 8:30-12:15
5 Image-Based Lighting Debevec Sunday 1:30-5:15
7 Introducing X3D Daly Sunday 8:30-5:15
10 Level Set and PDE Methods for Computer Graphics Breen, Sapiro, Fedkiw, Osher Sunday 8:30-5:15
12 Modeling Techniques for Medical Applications Metaxas Sunday 8:30-5:15
16 RenderMan in Production Gritz Sunday 8:30-5:15
17 State of the Art in Hardware Shading Olano, Boyd, McCool, Mark, Mitchell Sunday 8:30-5:15
25 Using Tensor Diagrams to Represent and Solve Geometric Problems Blinn Monday 1:30-5:15
29 Beyond Blobs: Recent Advances in Implicit Surfaces Yoo, Turk, Dinh, Hart, O'Brien, Whitaker Monday 8:30-5:15
36 Real-Time Shading Languages Olano, Hart, Heidrich, Mark, Perlin Monday 8:30-5:15
39 Acquiring Material Models Using Inverse Rendering Marschner, Ramamoorthi Monday 8:30-12:15
43 A Practical Guide to Global Illumination Using Photon Mapping Jensen Tuesday 1:30-5:15
44 Image-Based Modeling Grzeszczuk Tuesday 1:30-5:15
54 Obtaining 3D Models With a Hand-Held Camera Pollefeys Wednesday 10:30-12:15

12 March

Semester break

19 March Homework 09

Download Garland's code; try it out.

They Might Be Giants
Friday, March 22 2002
10:30 Cow Haus
Tickets $16

Template Graphics Software has a visualization product called Amira. Install an evaluation copy on your machine at home, or use it from the Vis machines. Type "help" in the command window, then go through the list of demos. Under "geometry reconstruction", click the bottom example (surface simplification). Practice simplifying the mesh. Press the pencil icon to see what the buttons can do. Modify the vertices. Modify the edges.

  • Load Colin's brain and make an isosurface. Convert it to a surface and decimate it repeatedly. Save a sequence of simplified meshes.
  • Load your .iv file from your own isosurface tool. Let Amira decimate it. Save. Repeat. Produce a sequence of simpler meshes.
  • Use Garland's code to simplify your mesh. Save. Repeat. Produce a sequence of simpler meshes.
Create an LOD node (man SoLOD) for each sequence of simplified meshes. For example,

  #Inventor V2.1 ascii

  LOD
    {
    range
      [
        8.0,
       16.0,
       32.0,
       64.0,
      128.0
      ]

    File { name "data/brain-amiraIso-amiraSimplify.200000.iv" }
    File { name "data/brain-amiraIso-amiraSimplify.100000.iv" }
    File { name "data/brain-amiraIso-amiraSimplify.050000.iv" }
    File { name "data/brain-amiraIso-amiraSimplify.025000.iv" }
    File { name "data/brain-amiraIso-amiraSimplify.012500.iv" }
    }


Reading
MAPS
Surface Simplification Using Quadric Error Metrics

26 March Homework 10
Create two scenes of Colin's brain. Scene1 is a high-resolution isosurface. Scene2 is a decimated version of scene1. While you are debugging, let scene1 have maybe 5,000 polygons and scene2 about 1,000 polygons.

Part 1 Include a luminaire in the scene (maybe a couple of triangles, or maybe a triangulated sphere). The luminaire has an emittance distribution function of your choosing; start with a simple cosine distribution. For each vertex in scene1, randomly shoot rays (either in the whole sphere, or just in the hemisphere defined by the surface normal). If a ray hits an emitter, compute the contribution the incoming radiance makes to that vertex and accumulate it into the total incident radiance. The incident radiance is diminished as the incoming direction deviates from the surface normal, obeying a cosine law. Then increase the vertex's radiance by the reflected portion of what it gathered.

  gather(SceneGraph &scene1)
    foreach vert in scene1
      vert.reflective = 0                                  // start a new gather
      receivedRadiance = 0
      do NumSamples times                                  // gather radiance
        vIn   = randomSphereSample(vert.surfaceNormal());  // incident direction
        vert2 = nearestPoint(vert, scene1.intersect(vert, vIn));
        if (vert2 != Null)
          receivedRadiance += vert2.radiance(vIn)          // radiance = emissive + reflective
                              * (-vIn).dot(vert.normal);   // dot > 0
      vert.reflective += (receivedRadiance / NumSamples)   // average the samples
                         * vert.reflectance;


Initialize the scene by setting vert.emissive = vert.reflective = 0 except for the luminaires. In the pseudocode above, a vertex has a reflectance (the BRDF), a surfaceNormal, and radiance (composed of emissive radiance and reflective radiance).

You can do the gather() step just one time and produce a close approximation to the correct radiance. If you perform the gather step two or more times, you will account for inter-reflections in the scene.

Part 2 Accelerate the basic illumination scheme. Replace scene1 with the decimated scene2 in the intersection test. (Extra credit: to accelerate further, replace the original scene with the decimated scene2, compute one or more passes of light transport, then assign radiance to each vert of scene1 by interpolating. You only need to find the triangle in scene2 where the vert from scene1 should lie, then interpolate the three values from the vertices of the triangle in scene2.)
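
For the extra-credit interpolation, barycentric coordinates are the natural tool; a sketch, with ra, rb, rc the radiance values at the corners of the containing triangle in scene2:

    // Interpolate per-vertex radiance at a point p inside triangle (a, b, c)
    // using barycentric coordinates.
    double interpolateRadiance(const double p[3], const double a[3],
                               const double b[3], const double c[3],
                               double ra, double rb, double rc) {
        double v0[3] = { b[0]-a[0], b[1]-a[1], b[2]-a[2] };
        double v1[3] = { c[0]-a[0], c[1]-a[1], c[2]-a[2] };
        double v2[3] = { p[0]-a[0], p[1]-a[1], p[2]-a[2] };
        double d00 = v0[0]*v0[0] + v0[1]*v0[1] + v0[2]*v0[2];
        double d01 = v0[0]*v1[0] + v0[1]*v1[1] + v0[2]*v1[2];
        double d11 = v1[0]*v1[0] + v1[1]*v1[1] + v1[2]*v1[2];
        double d20 = v2[0]*v0[0] + v2[1]*v0[1] + v2[2]*v0[2];
        double d21 = v2[0]*v1[0] + v2[1]*v1[1] + v2[2]*v1[2];
        double denom = d00*d11 - d01*d01;
        double v = (d11*d20 - d01*d21) / denom;   // weight of vertex b
        double w = (d00*d21 - d01*d20) / denom;   // weight of vertex c
        double u = 1.0 - v - w;                   // weight of vertex a
        return u*ra + v*rb + w*rc;
    }

As a bonus, u, v, and w tell you whether the point actually lies inside a candidate triangle (all three non-negative), which helps when searching for the containing triangle.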

Part 3 Create an animation. When you render the scene, turn off the headlight (and all lights). Make each vertex have zero diffuse and specular components. Set the emissive component to be the vertex's radiance. Change the isovalue, make a new image, and repeat. Change the isovalue in steps from about value=70 to about value=90 for this dataset.

Part 4 In case your Web area for the course is not up to date, create Web pages for homework 0, homework 1, homework 2, and homework 3. Link them from your course Web page, putting a thumbnail image with each link. Put descriptive information on each homework page so that a casual visitor will understand what you did, what machine you used, how long it took to run, how much code was involved, and what steps were needed to get the code to compile. In general, make it the kind of page you yourself would like to visit.

Reading

02 April

Homework 11

Part 1 Update or create Web pages for homework 04, homework 05, homework 06, homework 07.

Part 2 Final Project. Choose a project from among the following, or propose your own. The final project will include a Web page that describes the project, with links to code and documentation, images, and animations. Project demos will be given during the final two weeks of classes, once or twice as a "work in progress" and once as a final presentation.

  • *Blurred radiance. Create a 3D array radianceMesh[] containing a radiance value and a weight at each location. Loop through isovalues of f(x,y,z) in Colin's brain data, from fMin to fMax, in increments of df. Maybe use df=0.01*(fMax-fMin) to produce 100 isosurfaces. For each isovalue, produce an isosurface and compute the radiance at each vertex (you will need to introduce one or more luminaires).

    Assign the vertex's radiance to nearby grid points in radianceMesh[]. Use a filter function filter() to weight the contribution.

      for (f = fMin; f < fMax; f += df)
        scene = scalar3Dmesh.getIsosurface(f)
        scene.globalIlluminate()
        foreach vert in scene
          foreach gridpoint in radianceMesh near vert
            gridpoint.value  += filter(|vert - gridpoint|) * vert.radiance
            gridpoint.weight += filter(|vert - gridpoint|)

      foreach gridpoint in radianceMesh
        if (gridpoint.weight > 0.0)
          gridpoint.value /= gridpoint.weight


    You now have produced (or estimated) the radiance at every point in the entire 3D volume.

    Combine the estimated radiance in radianceMesh with your isosurface routine for the brain data. When you interpolate a vertex location on an isosurface, interpolate the radiance from the corresponding radianceMesh. If your isosurface generation runs in real time, the illuminated isosurface will also be generated in real time.

  • Photon mapping. Search the Web, read the book on photon mapping. Implement photon mapping for some datasets we have used (seminar room; Colin's brain).

  • *Hybrid rendering. Use a commercial-grade renderer (such as BMRT or povray or radiance) to render images of the brain isosurfaces. Use equally-spaced isovalues from fMin to fMax. Take the image and project the colors onto the vertices in the mesh (ray-trace from the viewpoint to the vertex, finding where the ray intersects the image). Blend the vertex's color into the 3D volume's nearby grid points and save the resulting weighted RGB colors. Modify your isosurface program to interpolate the 3D grid of RGB colors and paint them on each isosurface when it is created.

  • *Acquiring reflectance. Use a digital camera to collect reflectance from a surface. Control the direction of the light and the camera to sample the 4-dimensional space of vectors vIn, vOut. Build a table containing the averages of these reflectances across many points on the surface. Then reconstruct the surface appearance using this reflectance function by rendering a polygonal mesh whose reflectance is determined by the table.

  • Light field. Use a digital camera to collect images from an array of viewpoints in a plane. Take pictures of one of the plastic brains in the Vis Lab, perhaps from a 10x10 array of viewpoints. Then use a commercial-grade renderer (your own, BMRT, povray, radiance, etc) to render a similar set of images of Colin's brain. Use the Stanford lightfield viewer to display the two scenes.

  • View-dependent illumination. Modify the gather() step from your previous homework so that the gathering is direction-dependent, and the reflectance function is direction-dependent as well.

  • Volume rendering. Download the open-source volumizer code from SGI onto a Vis Lab linux or irix machine. Modify their demo code so that it will read in a dataset like Colin's brain and display it volumetrically. Include command-line flags to specify the dimensions, etc. (xDim 217 yDim 217 zDim 217 byte 1 header 512 big-endian). Put draggers in the scene to allow the user to change the transfer function. If the transfer function is exp( -(f*f)/(s*s) ), let one dragger specify f and the other specify s.

  • * Volume/surface illumination. Define a transfer function exp( -(f*f)/(s*s) ) for a ray in the volume. The ray issues from a 3D grid point p0 in a scalar field f(p) (Colin's brain). Let f(p0) define the transfer function, so that the volume is transparent except for regions having the same value as p0. Apply opacity and single scattering to accumulate the radiance along the ray. Shoot multiple rays from each grid point in the volume. Collect the incident radiance into a 3D radiance grid. Combine it with your isosurface program so that each vertex in an isosurface has radiance interpolated from the radiance grid.

Reading

09 April

Homework 12

Part 1 Update or create Web pages for homework 08, homework 09, homework 10, homework 11.

Reading

16 April

Final Project Demos

Reading

22 April

Exam

Thursday April 25 7:30-9:30 am DSL 499.

Homework

Each assignment is due one week after it is given, unless otherwise noted. Make your Web assignments and programming assignments available for me to fetch via a script as of 11:59pm each Monday. They should be reachable at a URL such as www.server/~user/cis5930/hw00/hw00.tar.gz. Be sure to include a readme.txt text file that describes your project, gives credit for any code you copied, and explains how to compile and run your code.

This course requires a significant amount of reading. Be prepared to lead discussion on the reading during class.

Homework will be demonstrated in class each week, using the PowerWall in the Dirac Science Library Seminar Room. The programs are described informally below, and in more detail during class. The goal of these programs is to allow you to investigate and demonstrate aspects of global illumination and radiative heat transfer.

If I am invited to review an actual paper submitted to this year's SIGGRAPH conference, you will help with the process. The ethical aspects of the review process are important; they can be found at www.siggraph.org/s2001/review on the Web.