[ answers on web ]
 Name: _______________________________ Student ID: _______________________________

Total points: 70
1. In your own words, compare the complementary fields of computer graphics and image analysis. [3]
Synthesis vs. analysis: both are components of visual computing. Computer graphics starts with digital representations of objects (triangle meshes, lights, etc.) and produces images on screen. Image analysis starts with images (digital camera, satellite, MRI, etc.) and produces digital representations of the objects in the images.

2. What is colour? Is an RGB triple (e.g., "#00FF77") a colour? Discuss. [4]
Colour is a distribution of light energy across the visible spectrum ("frequency distribution"). An RGB triple is a point in a colour space, a combination of three chromaticities. Without specifying what those chromaticities are or how they are combined, an RGB triple is not a colour.

3. Contrast the goals of off-line rendering vs. real-time graphics. Which is OpenGL generally used for? [4]
Off-line rendering prioritizes photorealism and image quality over rendering time -- it's okay if each frame takes days to render, as long as it looks really good. Real-time graphics places a constraint on rendering time -- we still want it to look good, but not if that means our framerate drops below, say, 60 frames per second. OpenGL is generally used for real-time rendering.

4. What does it mean for a set of vectors to be linearly independent? [3]
A set of vectors is linearly independent if no one vector in the set can be expressed as a linear combination of the other vectors. If we were to remove any vector from the set, then the vector space spanned by the set would collapse by a dimension.

5. What are homogeneous coordinates, and how are they used in 3D computer graphics? [3]
A consistent representation of both points and vectors. We add a fourth component, w, which is 0 for vectors and non-zero for points. Every (x,y,z,w), with w != 0, represents the 3D point (x/w, y/w, z/w). We apply transforms in homogeneous coordinates via 4x4 matrices; e.g., the model-view matrix.

6. Describe the stages of the OpenGL rendering pipeline. There is some flexibility in how to divide the stages, but describe the operations that must happen to get from geometry (GL primitives) to what we see on screen. [6]
(1) Vertex processing:
Transform vertex positions and normals via the model-view matrix, perform per-vertex lighting calculations (for Gouraud shading), and apply the projection matrix
(2) Assembly/clipping:
Connect vertices into primitives, view frustum culling, clip against view volume
(3) Rasterizer:
Convert into pixel grid, interpolate per-vertex values across the primitive
(4) Fragment processing:
Per-fragment calculations, textures
(5) Blending:
Hidden-surface removal (depth buffer) and blending, final output to framebuffer

7. What are OpenGL display lists? Why use them? Describe an example situation well-suited to display lists. [4]
OpenGL display lists record a sequence of OpenGL commands (e.g., material definitions, lights, primitives, matrix transforms), which is then stored in graphics-hardware memory. The display list can be executed multiple times (perhaps with OpenGL state changes in between) without resending the data to the hardware, which can potentially speed up rendering. For instance, if a scene is to have many hundreds of teapots, it would make sense to store the teapot vertex data in a display list.

8. The two most important transform matrices in OpenGL are the model-view matrix and the projection matrix. Contrast the roles of these two matrices. [4]
The model-view matrix transforms from world coordinates to camera coordinates: it specifies the location of objects in the virtual world relative to the camera, and it is also what we use to position objects relative to each other. The projection matrix transforms from camera coordinates toward screen coordinates: it maps the 3D view volume onto the 2D image plane (retaining depth for hidden-surface removal).

9. Contrast orthographic projection with perspective projection. [3]
The projector lines in orthographic projection are all parallel, whereas the projector lines in perspective projection converge to a single point, the center of projection. Orthographic projection is equivalent to perspective projection with a center of projection infinitely far away. Perspective results in foreshortening: closer objects appear larger, farther objects appear smaller.

10. Contrast clipping with culling. Why are they important? [4]
Clipping:
Trims primitives to fit within the view volume; user-defined clip planes may also be specified. Clipping may reshape primitives that intersect the clipping planes, introducing new vertices.
Culling:
Removes primitives which are not visible, even though they may be within the viewport. For instance, back-face culling removes primitives that face away from the camera, which are generally not visible.

11. The GLUT function glutSolidTeapot(1.0) renders a Utah Teapot with size 1, centred at the origin. We want to render the teapot with size size (C++ float), centred at the point (5, 6, 7), and rotated counter-clockwise about its own z-axis by θ (C++ float theta) degrees.
1. Without changing the call to glutSolidTeapot(), write a block of C/C++ OpenGL code to accomplish this. (The parameter to glutSolidTeapot() is actually a size parameter, but I'm asking you not to use that for this question.) The exact naming/syntax of the functions is not as important as the ordering. [3]
```
glTranslatef( 5.0, 6.0, 7.0 );
glRotatef( theta, 0.0, 0.0, 1.0 );
glScalef( size, size, size );
glutSolidTeapot( 1.0 );
```

2. Write a single 4x4 matrix that converts from the coordinate system of the teapot (object coords) to the coordinate system of the world (world coords). Assume left multiplication; i.e., a point p in homogeneous object coords will be multiplied by this matrix M to get the world coords p' = M p. (Hint: construct 4x4s for each of the transforms, and multiply the matrices in the correct order. I will be lenient on minor mathematical errors as long as you show your work.) [4]
M = T(5,6,7) · Rz(θ) · S(size):

    [ size·cos(θ)   -size·sin(θ)   0      5 ]
    [ size·sin(θ)    size·cos(θ)   0      6 ]
    [ 0              0             size   7 ]
    [ 0              0             0      1 ]

12. Describe three representations of rotations in 3D that we talked about in class. What are pros/cons of each? [4]
Euler angles:
Three angles giving the amount of rotation about the x, y, and z axes, respectively. Simple and compact, but the order of application matters, and the representation suffers from gimbal lock.
Axis-angle:
Three numbers giving a unit axis of rotation, plus one number giving the angle of rotation about that axis. This is the format expected by glRotatef(). Composing two rotations directly in this form is awkward.
Unit quaternion:
Four numbers: a scalar w plus a 3D vector (x, y, z), constrained to unit length. Easy to derive from the axis-angle representation (w = cos(θ/2), vector = sin(θ/2) times the unit axis). Easy to compose multiple rotations via quaternion multiplication.

13. Name and describe the four terms in the OpenGL local illumination model. For each term, list what vectors, if any, are required to compute the term. [5]
Ambient:
Uniform light that shines everywhere. No vectors needed.
Diffuse:
Light that shines on a dull, non-shiny object. Need normal vector and vector to the light source.
Specular:
Highlights reflecting off a shiny object. Need the view vector (towards the camera) and the reflection vector (the light direction reflected about the surface normal).
Emissive:
Simulates a glowing object. No vectors needed.

14. How many numbers are needed to specify completely all the material properties provided by OpenGL for an object? Describe them. [3]
17 numbers: an RGBA quadruple for each of the ambient, diffuse, specular, and emissive colours (4 × 4 = 16), plus the shininess exponent.

15. Contrast Gouraud shading with Phong shading. Which is better and why? Describe a situation where the difference might be very pronounced. Which one is supported by the regular fixed-function OpenGL pipeline? [4]
Gouraud:
Lighting calculations done per-vertex; shaded vertex colours are then interpolated (rasterized) across the face of the polygon.
Phong:
Lighting calculations done per-fragment (per-pixel): more work, but more accurate.
The default OpenGL pipeline only does Gouraud. On a large polygon with high shininess, the small specular highlight may lie in the middle of the polygon, missing all the vertices. In that case, Gouraud shading would miss the highlight altogether, whereas Phong shading would capture it.

16. Describe bump mapping: what is its purpose and how does it achieve it? Describe an example situation where it might be useful.[4]
Emulating bumpy surfaces without actually creating more geometry (more vertices). Uses a bump map texture to indicate displacement of the surface (in or out), and calculates how that displacement would perturb the normal vectors. Per-fragment Phong shading then computes the shaded colour of each fragment using the perturbed normals. Example: rendering the dimpled skin of an orange on a smooth sphere.

17. Describe all the steps needed to create and apply a texture map in OpenGL. The exact name/syntax of function calls is not as important as the concepts. [5]
Load image:
Read from file or generate in program
Get new texture object:
glGenTextures() to get a texture ID
Select this texture:
glBindTexture() to make it current
Set options:
glTexParameter()
Load image to texture:
glTexImage2D() or gluBuild2DMipmaps()
Enable texturing:
glEnable( GL_TEXTURE_2D );
Assign texture coordinates:
glTexCoord() with each vertex, or glTexGen() to auto-generate texture coordinates