[ ans ]
Name: _______________________________
Student ID: _______________________________

Total points: 110
  1. Name at least three of the groundbreaking innovations in Engelbart's NLS prototype demonstration in 1968. [3]
    mouse, windowing system, hyperlinks, chording keyboard, collaborative work (email, IM, video conf)



  2. What are events and callbacks? Describe an example. [3]
    An event is an action, usually triggered by the user: e.g., clicking on a button, selecting a menu item, pressing Enter, moving the mouse. A callback is a procedure/function invoked when a corresponding event happens. For example, the exit() function might be run when the user clicks on a "Quit" button.
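    (For illustration, not required for the marks: a minimal GLUT sketch, assuming GLUT is available. A key press stands in for the "Quit" button, since plain GLUT has no button widgets.)

    #include <GL/glut.h>
    #include <stdlib.h>

    void display(void) { glClear(GL_COLOR_BUFFER_BIT); glFlush(); }

    /* Callback: invoked by the event loop whenever a key-press event occurs. */
    void keyboard(unsigned char key, int x, int y) {
        if (key == 'q')
            exit(0);                  /* the "quit" event triggers exit(), as above */
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutCreateWindow("events and callbacks");
        glutDisplayFunc(display);     /* register the display callback  */
        glutKeyboardFunc(keyboard);   /* register the keyboard callback */
        glutMainLoop();               /* event loop: dispatches events to callbacks */
        return 0;
    }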





  3. Describe Flynn's taxonomy of parallel computing. (Give more detail than just expanding the acronyms!) [6]
    SISD (single instruction, single data):
    regular serial uniprocessor computing
    SIMD (single instruction, multiple data):
    multiple processing elements each performing the same operation, but on different parts of the data. Vector processing.
    MISD (multiple instruction, single data):
    the same data being run through different operations in parallel. Certain kinds of image processing, perhaps encryption breaking. Rarely used.
    MIMD (multiple instruction, multiple data):
    multiple processors doing different tasks on different data. The most general form of parallel computing; also the most common nowadays.















  4. Compare the pros/cons of OpenMP vs. MPI. [4]
    OpenMP uses the threading model rather than the message-passing model; it is easier to program in and easier to add to existing serial code, and the programmer need not know how many processors the program is actually using. It is generally better suited to a shared-memory model. MPI is more complex to program in but provides more control over synchronization and communication. MPI often scales better to more processors and is more appropriate for distributed-memory models.
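    (Illustrative sketch of the "easy to add to serial code" point: one pragma parallelizes an existing loop, and the runtime picks the thread count. Compile with an OpenMP flag such as -fopenmp. An MPI version would instead split the index range across processes explicitly and combine partial sums with MPI_Reduce.)

    #include <stdio.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)   /* the only change from the serial version */
        for (int i = 0; i < n; i++)
            sum += 1.0 / (i + 1);
        printf("sum = %f\n", sum);
        return 0;
    }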







  5. Describe the (1) shared, (2) distributed, and (3) hybrid memory models of parallel computing. Draw the diagrams illustrating how memory and processors are tied together. What are the advantages/disadvantages of shared memory vs. distributed memory models? [6]
    Shared:
    All processors share the same address space for main memory. The assumption is that memory access times are the same for all processors. Easier to program: threads communicate simply by sharing data structures directly. For small numbers of processors, often faster due to lower overhead. Does not scale well.
    Distributed:
    Each processor has its own address space. No data structures can be shared with other processors. All communication must be explicit via networking, which makes programming more complicated due to synchronization and latency issues. Scales to hundreds of thousands of processors.
    Hybrid:
    Each node has a few (often 4-8) processors which share that node's memory. Nodes cannot directly see the memory of other nodes.

















  6. In your own words, compare the complementary fields of computer graphics and image analysis. [4]
    Synthesis vs. analysis: both are components of visual computing. Computer graphics starts with digital representations of objects (triangle meshes, lights, etc.) and produces images on screen. Image analysis starts with images (digital camera, satellite, MRI, etc.) and produces digital representations of the objects in the images.








  7. What is colour? Is an RGB triple (e.g., "#00FF77") a colour? Discuss. [3]
    Colour is a distribution of light energy across the visible spectrum ("frequency distribution"). An RGB triple is a point in a colour space, a combination of three chromaticities. Without specifying what those chromaticities are or how they are combined, an RGB triple is not a colour.






  8. Describe back-face culling. Why is it a good idea? [3]
    Back-face culling removes polygons whose normal vectors point away from the camera. The idea is that in a closed object, every polygon facing away from the camera lies on the back side and is occluded by front-facing polygons; since it can never be visible, it can be dropped from the graphics pipeline early, which speeds up rendering.
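    (Illustrative sketch; assumes an initialized OpenGL context. The fixed-function pipeline does the culling for us once the winding convention is declared.)

    glFrontFace(GL_CCW);      /* counter-clockwise winding marks the front face */
    glCullFace(GL_BACK);      /* discard back-facing polygons                   */
    glEnable(GL_CULL_FACE);
    /* Equivalently, by hand: a face is back-facing when its outward normal points
       away from the viewer, i.e. dot(normal, view_direction) > 0, where the view
       direction runs from the eye toward the face. */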




  9. Describe the four types of lights offered by OpenGL. [5]
    Point:
    has a location and colours (ambient, diffuse, specular). Similar to a candle.
    Directional:
    has a direction and colours, but no location. Similar to the sun. Specified in OpenGL the same way as point lights, but using homogeneous coordinates to specify a vector instead of a point (see the snippet after this list).
    Spot:
    has a location, direction, cutoff depth, falloff exponent, and colours. Similar to a theatre spotlight.
    Global ambient:
    We can specify an ambient light that uniformly illuminates the whole scene. Similar to a very evenly lit scene like an open field on a cloudy day.
    (Area light):
    (not offered by OpenGL, but we can model it using global illumination techniques like radiosity.)
    (Emissive):
    (Emissive OpenGL objects don't count as lights because they don't cast light on other objects.)
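    (Illustrative legacy-OpenGL sketch of the point-vs-directional trick noted under "Directional" above; assumes an initialized GL context.)

    GLfloat lamp_pos[4] = { 2.0f, 5.0f, 3.0f, 1.0f };  /* w = 1: a point light at a position                */
    GLfloat sun_dir[4]  = { 0.3f, 1.0f, 0.5f, 0.0f };  /* w = 0: directional; (x,y,z) points toward the sun */
    glLightfv(GL_LIGHT0, GL_POSITION, lamp_pos);
    glLightfv(GL_LIGHT1, GL_POSITION, sun_dir);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_LIGHT1);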













  10. Tell me everything you know about homogeneous coordinates. [4]
    Homogeneous coordinates are a compact way of representing both 3D points and 3D vectors using the same representation: a list of 4 numbers, [x y z w]. If w=0, then it represents a vector (direction and magnitude, but no location). If w=1, then it represents a point (position in space). If w is any other number, it represents the point (x/w, y/w, z/w). Most transforms used in 3D computer graphics are 4x4 matrices that multiply entities expressed in homogeneous coordinates.
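    (Illustrative sketch of the w = 1 vs. w = 0 distinction: a translation moves a point but leaves a vector unchanged.)

    #include <stdio.h>

    void xform(const float m[4][4], const float in[4], float out[4]) {
        for (int r = 0; r < 4; r++) {
            out[r] = 0.0f;
            for (int c = 0; c < 4; c++)
                out[r] += m[r][c] * in[c];
        }
    }

    int main(void) {
        /* Translate by (5, 0, 0); row-major, translation in the last column. */
        float T[4][4] = { {1,0,0,5}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1} };
        float point[4]  = { 1, 2, 3, 1 };   /* w = 1: a point   */
        float vector[4] = { 1, 2, 3, 0 };   /* w = 0: a vector  */
        float out[4];
        xform(T, point, out);    /* -> (6, 2, 3, 1): the point moves          */
        printf("point  -> (%g, %g, %g, %g)\n", out[0], out[1], out[2], out[3]);
        xform(T, vector, out);   /* -> (1, 2, 3, 0): the vector is unaffected */
        printf("vector -> (%g, %g, %g, %g)\n", out[0], out[1], out[2], out[3]);
        return 0;
    }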









  11. Write a C/C++ function that takes two points in normalized mouse coordinates (x0,y0,x1,y1) and finds the axis-angle (vector-angle) representation of the virtual trackball rotation. Normalize mouse coordinates are floats in the range 0 to 1. Such an axis-angle representation could, for example, be passed to glRotate() to create a 4x4 rotation matrix. You may assume the presence of 3D-vector helper functions normalize(), magnitude(), dotproduct(), and crossproduct(), as well as sqrt(). Syntax is not as important as concept/pseudocode. [6]
    void getAxisAngle(float x0, float y0, float x1, float y1, float axis[3], float& ang) {
    	float vec0[3], vec1[3], xprod[3];
    	// Lift each mouse point onto the unit trackball sphere: z = sqrt(1 - x^2 - y^2)
    	// (assumes the point lies inside the unit circle; clamp in practice).
    	vec0[0] = x0; vec0[1] = y0; vec0[2] = sqrt(1 - x0*x0 - y0*y0);
    	vec1[0] = x1; vec1[1] = y1; vec1[2] = sqrt(1 - x1*x1 - y1*y1);
    	// The rotation axis is perpendicular to both sphere vectors; for unit vectors
    	// |vec0 x vec1| = sin(angle), so asin() recovers the angle.
    	// (assumed helper signatures: crossproduct(a, b, out), normalize(v), magnitude(v))
    	crossproduct(vec0, vec1, xprod);
    	ang = asin(magnitude(xprod));   // radians; convert to degrees before passing to glRotate()
    	axis[0] = xprod[0]; axis[1] = xprod[1]; axis[2] = xprod[2];
    	normalize(axis);
    }
    













  12. How does Gouraud shading work? How does Phong shading work? What are pros/cons of each? [5]
    Gouraud shading:
    Calculate shades at each vertex, interpolate RGB shades across the polygon
    Phong shading:
    Interpolate normal vectors across polygon; calculate shades at each pixel
    Phong shading looks better but requires more computation: the local illumination model is evaluated at each pixel across the polygon instead of just once for each vertex. Gouraud shading is built into OpenGL.











  13. Contrast texture maps, bump maps, and environment maps. [6]
    Texture map:
    an image pasted on a surface in 3D. Colours of the surface are taken from the image.
    Bump map:
    the normal vectors on the surface are wobbled to simulate a perturbation of the surface (in/out) by an amount given in the bump map image.
    Environment map:
    reflections in specular surfaces use colours taken from an image, to simulate the reflection of a complex scene in a shiny object.












  14. What is mip-mapping? Why is it cool? [4]
    Mip-maps are precalculated smaller versions of the texture at various levels of detail: e.g., a version with half the length and half the width, another with one-quarter the length and one-quarter the width, etc. They are used in texture mapping to avoid aliasing: the unsightly shimmering and moiré artifacts that would otherwise occur when the texture is minified, i.e., when the textured surface projects to a very small area on screen.
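    (Illustrative legacy-OpenGL setup; assumes an initialized GL context and that pixels/width/height hold the base texture image.)

    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                      GL_RGB, GL_UNSIGNED_BYTE, pixels);        /* build the whole mip chain          */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);                   /* trilinear filtering when minified  */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);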





  15. Contrast interpolating cubic polynomial curves, Hermite curves, and Bezier curves. For each type of cubic polynomial curve, what information is needed to uniquely specify a curve? [6]
    Interpolating:
    Specify four points; the curve goes through all four.
    Hermite:
    Specify positions and tangent/velocity vectors at the start and end of the curve.
    Bezier:
    Specify four Bezier control points. Two are the start and end of the curve (interpolated). The other two control points are used to derive the velocity vectors (3 times the difference vector) for the start and end points.
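    (Illustrative sketch: evaluating a cubic Bezier curve in Bernstein form, one coordinate at a time. Note P'(0) = 3*(p1 - p0) and P'(1) = 3*(p3 - p2), which is the "3 times the difference vector" relation above.)

    float bezier(float p0, float p1, float p2, float p3, float t) {
        float s = 1.0f - t;                 /* t in [0,1] */
        return s*s*s*p0 + 3.0f*s*s*t*p1 + 3.0f*s*t*t*p2 + t*t*t*p3;
    }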










  16. Describe and contrast C0, C1, C2, and G1 continuity for splines. [4]
    C0:
    Curve segments touch; basic continuity.
    C1:
    Curves touch and velocity vectors also match at the joins
    C2:
    Curves touch, velocity vectors match, and even the curvatures match at the joins
    G1:
    Curves touch, and velocity vectors point in the same direction (but might not be the same magnitude). In between C0 and C1.
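    Equivalently, in symbols, at a join where segment P ends (u=1) and segment Q begins (u=0):
    C0:  P(1) = Q(0)
    C1:  C0, plus P'(1) = Q'(0)
    C2:  C1, plus P''(1) = Q''(0)
    G1:  C0, plus P'(1) = k*Q'(0) for some scalar k > 0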











  17. What is a NURBS? Explain each part of the acronym in detail. [6]
    NU (non-uniform):
    the knots (joins between Bezier segments) need not be uniformly spaced in u (the parameter space). For instance, this can be used to make the NURBS interpolate its endpoints, by repeating knots four times at the start (u=0) and end (u=1).
    R (rational):
    Each control point has a relative weighting associated with it which biases its influence on the curve.
    BS (B-spline, i.e., basis spline):
    A B-spline is a spline made up of cubic Bezier segments, joined in a particular way so as to get C2 continuity.












  18. Describe how to find all the intersection points between a ray and a sphere. [5]
    • Equation of a sphere with radius r and centre (xc, yc, zc): (x-xc)^2 + (y-yc)^2 + (z-zc)^2 = r^2.
    • Substitute the ray equations x = x0 + t*xv, etc.: (x0 + t*xv - xc)^2 + (y0 + t*yv - yc)^2 + (z0 + t*zv - zc)^2 = r^2.
    • Expand, collect terms in t, and solve for t using the quadratic formula.
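    A sketch of the above in code (the direction d need not be unit length):

    #include <math.h>

    /* Ray p(t) = o + t*d against a sphere with centre c and radius r.
       Returns the number of intersections (0, 1, or 2); writes t0 <= t1. */
    int raySphere(const float o[3], const float d[3],
                  const float c[3], float r, float *t0, float *t1) {
        float oc[3] = { o[0]-c[0], o[1]-c[1], o[2]-c[2] };
        float A = d[0]*d[0] + d[1]*d[1] + d[2]*d[2];
        float B = 2.0f * (d[0]*oc[0] + d[1]*oc[1] + d[2]*oc[2]);
        float C = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;
        float disc = B*B - 4.0f*A*C;                 /* discriminant           */
        if (disc < 0.0f) return 0;                   /* ray misses the sphere  */
        float s = sqrtf(disc);
        *t0 = (-B - s) / (2.0f*A);
        *t1 = (-B + s) / (2.0f*A);
        return (disc == 0.0f) ? 1 : 2;               /* tangent vs. two crossings */
    }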








  19. What are quadric surfaces, and why are they cool in raytracing? List some examples. [4]
    A quadric is any implicit surface that is defined by a quadratic (degree-2 polynomial) in x, y, and z. Examples include spheres, ellipsoids, cylinders, cones, and hyperboloids. The ray-surface intersection test, so critical in raytracing, is analytically solvable for quadrics using the quadratic formula. As a result, finding intersections with quadrics is relatively easy, which speeds up raytracing.
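    In general implicit form, with constant coefficients A..J:
    A*x^2 + B*y^2 + C*z^2 + D*x*y + E*x*z + F*y*z + G*x + H*y + I*z + J = 0
    Substituting the parametric ray p(t) = o + t*d (as in question 18) again gives a quadratic in t, solvable with the quadratic formula.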



  20. Describe and contrast: spatial grids, octrees, k-d trees, and BSP trees. [6]
    Grids:
    subdivide space into equal voxels, regularly spaced
    Octrees:
    Adaptive subdivision: where needed, each cell is subdivided into eight equal subcells along the coordinate axes.
    k-d trees:
    Also adaptive like octrees, but cells are subdivided along one axis at a time. Cells need not be split into equal-size subcells.
    BSP trees:
    Even more flexible than k-d trees: each time a cell is split using a (k-1)-dimensional hyperplane (a regular plane in 3D), oriented in any way (need not be along a coordinate axis).











  21. What are the basic assumptions for classical radiosity? [3]
    Most important assumption: Lambertian (perfectly diffuse) surfaces -- no specularity, no translucency. Other assumptions (not so important to get): light transfer from one element to another is linear, no fog, can solve for radiosity in RGB space, radiosity constant across each element.




  22. What is radiosity? (in physics/optics/radiometry) What are the SI units used to describe it? [3]
    Exitant flux density: radiant power per unit surface area. Radiant power is energy (J) per unit time (s), so the units of radiosity are J/(s*m^2), or W/m^2.



  23. Explain what a BRDF is. What simplifying assumption is made about the BRDF in classical radiosity? What sort of surfaces is radiosity unable to model as a result? [5]
    Bidirectional reflectance distribution function: for every combination of an incoming direction and an outgoing direction, it gives what fraction of the light arriving along the incoming direction is reflected along the outgoing direction (formally, outgoing radiance per unit incident irradiance). In general, the BRDF varies across the surface and also varies with both the incoming and outgoing direction vectors. Radiosity assumes Lambertian surfaces, which means the BRDF simplifies to a single constant proportional to the albedo; as a result, radiosity cannot model specular or glossy surfaces (mirrors, shiny highlights) or translucent materials.
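    In symbols (one standard formulation), for incoming direction w_i and outgoing direction w_o:
    f_r(w_i, w_o) = dL_o(w_o) / (L_i(w_i) * cos(theta_i) * dw_i)     [units: 1/sr]
    i.e., outgoing radiance per unit incident irradiance; for a Lambertian surface with albedo rho this is the constant rho/pi.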






  24. Contrast and describe pros/cons: realtime OpenGL pipeline, vs. ray tracing, vs. radiosity. [6]