This page does not represent the most current semester of this course; it is present merely as an archive.
You need to write every method you submit yourself. You cannot use other people's code or find the method in an existing library. For example, you should not use Java's `Graphics` class, PIL's `ImageDraw` module, the CImg `draw_*` methods, the Bitmap `Draw*` methods, etc.
You may use external vector product and matrix multiplication code if you wish. However, learning these libraries often requires as much time as it would take to write them yourself.
```python
def mult4by4(a, b):
    # 4x4 matrix product: answer = a * b (0-based indices).
    answer = [[0.0] * 4 for _ in range(4)]
    for outrow in range(4):
        for outcol in range(4):
            for i in range(4):
                answer[outrow][outcol] += a[outrow][i] * b[i][outcol]
    return answer

def multMbyV(mat, vec):
    # 4x4 matrix times a length-4 vector.
    answer = [0.0] * 4
    for outrow in range(4):
        for i in range(4):
            answer[outrow] += mat[outrow][i] * vec[i]
    return answer
```
You are welcome to seek conceptual help from any source, but cite it if you get it from something other than a TA, the instructor, or class-assigned readings. You may only get coding help from the instructor or TAs.
You’ll need to keep track of the current set of transformations. Model/View transformations are applied first, in reverse order; then perspective is applied. Thus, in the file snippet
rotate 90 1 0 0
frustum -1 1 -1 1 1 10
translate -3 2 1
trif 1 2 3
the triangle’s points are first translated, then rotated, then the frustum is applied.
Some Model/View commands replace the current Model/View; this means later drawing commands ignore any Model/View commands that came before the replacing command.
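One way to track this state, sketched in Python; the names `modelview`, `projection`, and the helpers are illustrative, not required, and the matrix routines are the ones sketched above:

```python
def identity4():
    return [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]

modelview = identity4()    # updated (or replaced) by Model/View commands
projection = identity4()   # updated by perspective commands like frustum

def add_modelview(m):
    # Post-multiplying makes the newest command apply to points first,
    # which gives the "reverse order" behavior described above.
    global modelview
    modelview = mult4by4(modelview, m)

def transform_point(p):
    # Model/View first, then perspective.
    return multMbyV(projection, multMbyV(modelview, p))
```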
The entire flow looks like this (required parts are in bold): [flowchart not reproduced in this archive]
Code for most of these algorithms is all over the Internet, and you probably have half a dozen versions already on your machine. This makes matters of plagiarism, etc., a bit tricky.
Action | Response
---|---
Using code from readings or lecture | OK
Using code from TA or instructor | OK, but cite
Using ideas from student, tutor, or website | OK, but cite
Using code from student, tutor, website, or program | Cheating
Using a drawing library instead of setting pixels manually | Cheating
The required part is worth 50%.
Apply `x += 1; x *= width/2` and likewise for `y` (with the image height); `z` and `w` are not changed.
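A minimal sketch of that step, assuming `y` is handled the same way with the image height (names illustrative):

```python
def to_screen(x, y, width, height):
    # Map x and y from [-1, 1] into pixel coordinates; z and w are untouched.
    return (x + 1) * width / 2, (y + 1) * height / 2
```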
Fill the triangle flat in the current color (as given by the last `color` command, or white if none was given). Apply any transformations and perspective that might exist, as outlined in the overview.
Defines the current color to be r g b, specified as floating-point values; (0, 0, 0) is black, (1, 1, 1) is white, (1, 0.5, 0) is orange, etc. You only need to track one color at a time.
You’ll probably need to map colors to bytes to set the image. All colors ≤ 0.0 should map to 0, all colors ≥ 1 should map to 255; map other numbers linearly (the exact rounding used is not important).
If you do lighting, you’ll need to store light values > 1 and not clamp to 1 until after lighting. If you do not do lighting, you can clamp colors when you read them.
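A sketch of the clamp-on-read variant (i.e., without lighting); as noted above, the exact rounding is unimportant:

```python
def to_byte(c):
    # Clamp to [0, 1], then scale linearly to [0, 255].
    c = max(0.0, min(1.0, c))
    return int(c * 255 + 0.5)
```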
You may get 30 points by implementing the following:
If (and only if) you implement the core perspective features, you may earn additional points as follows:
Linear interpolation isn't quite right for non-coordinate values. When interpolating some value $Q$ while making pixels, instead interpolate ${Q \over w}$; also interpolate ${1 \over w}$ and use ${Q \over w} \div {1 \over w}$ as the $Q$ value for each fragment.
You get 10 points for perspective-correct interpolation only if you also do either `trig` or `texture`. It applies to normals and lighting too, but is generally less obvious in its results.
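A sketch of the idea for one value interpolated between two endpoints (names illustrative; the same recipe applies along edges and scanlines):

```python
def perspective_correct(q0, w0, q1, w1, t):
    # Linearly interpolate Q/w and 1/w, then divide to recover Q.
    q_over_w = (1 - t) * (q0 / w0) + t * (q1 / w1)
    one_over_w = (1 - t) * (1 / w0) + t * (1 / w1)
    return q_over_w / one_over_w
```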
Unlike `loadmv`, this does not replace what was already there.
Vectors (sunlight, normal, clipplane) ought to be multiplied by a variant of the model/view matrix. If point $p$ is modified by the multiplication $M \vec{p}$, then vector $\vec{v}$ is modified by the multiplication $\vec{v} M^{-1}$. However, you do not need to compute the inverse: this is the same as saying that rotations (such as `rotate` and `lookat`) apply normally. You do not need to handle `loadmv` or `multmv`.
Sunlights should be modified when specified. Normals should be modified during the `trig`, `trif`, etc., commands. Be sure to re-normalize the normals after transformation.
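Re-normalizing is worth a small helper; a sketch:

```python
import math

def renormalize(v):
    # Scale a 3-vector back to unit length after transformation.
    length = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return [v[0] / length, v[1] / length, v[2] / length]
```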
Have `xyz` store the current color (as defined by the most recent `color` command, or white if there was no such command) with each vertex it defines. Draw a Gouraud-shaded triangle using those vertex colors.
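A sketch of the per-fragment color, assuming you already have barycentric-style weights `b1`, `b2`, `b3` that sum to 1 (how you obtain them is up to your rasterizer):

```python
def gouraud(c1, c2, c3, b1, b2, b3):
    # Weighted average of the three vertex colors.
    return [b1 * x + b2 * y + b3 * z for x, y, z in zip(c1, c2, c3)]
```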
When scan-converting triangles, have the first step on both edges and scanlines move not just to an integer, but to an integer inside the bounds of the screen, and stop when you leave the bounds of the screen.
This should allow you to fill gargantuan triangles quickly.
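A sketch of clamping the first step, assuming DDA-style stepping where each unit step adds a fixed delta to the interpolated values (names illustrative):

```python
import math

def clamped_steps(start, stop, bound):
    # First integer at or past `start` that is also on the screen; the values
    # being interpolated must be advanced by (first - start) * delta.
    first = max(math.ceil(start), 0)
    # Stop at whichever comes first: the edge's end or the screen border.
    last = min(math.ceil(stop), bound)
    return first, last
```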
Clip triangles before drawing them. Clipping a triangle along one plane might turn it into two triangles (by clipping off a corner).
Points that satisfy $(p_1, p_2, p_3, p_4) \cdot (x, y, z, w) \ge 0$ should be kept. Apply this to the vertices of a triangle after the Model/View transformations and before the Projection transformations.
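The per-vertex test is a 4D dot product; a sketch:

```python
def keep_vertex(plane, vertex):
    # plane = (p1, p2, p3, p4); vertex = (x, y, z, w) after Model/View.
    return sum(p * v for p, v in zip(plane, vertex)) >= 0
```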
After a `cull` command, only draw triangles if their vertices would appear in counter-clockwise order on the screen; otherwise simply ignore them.
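One way to test winding is the sign of twice the signed area of the on-screen triangle; which sign means counter-clockwise depends on whether your y axis grows up or down, so treat this as a sketch to adapt:

```python
def ccw(a, b, c):
    # a, b, c are (x, y) screen positions; positive when counter-clockwise
    # with y growing upward (flip the comparison if y grows downward).
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) > 0
```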
Subdivide the given cubic Bezier patch into $2^{subd}$ by $2^{subd}$ quads and then draw each quad as two triangles as if using `trif`; `subd` will always be an integer greater than zero.
Bezier patch subdivision is done much like curve subdivision: the point at $(s, t)$ is found by finding the $(s, t)$ point on each quad, using those points to make a lower-order Bezier patch, and iterating down to just a single point. Thus, in the illustration the 16 input points define 9 quads (black); the first interpolation defines 4 (red); then 1 (blue); and finally a single point on the surface (green). The normal at that point is the surface normal of the blue quad (i.e., the cross-product of the two diagonals of that quad), although the normal is not needed for flat shading.
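A sketch of that iterated interpolation for the rectangular case, assuming `ctrl` is a 4-by-4 grid of control points, each a list of coordinates:

```python
def lerp(p, q, t):
    return [(1 - t) * a + t * b for a, b in zip(p, q)]

def patch_point(ctrl, s, t):
    # Take the (s, t) point on each quad, shrinking the grid
    # 4x4 -> 3x3 -> 2x2 -> 1x1; the last point lies on the surface.
    grid = ctrl
    while len(grid) > 1:
        grid = [[lerp(lerp(grid[r][c], grid[r][c + 1], s),
                      lerp(grid[r + 1][c], grid[r + 1][c + 1], s), t)
                 for c in range(len(grid) - 1)]
                for r in range(len(grid) - 1)]
    return grid[0][0]
```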
Like `rectbezf` except it uses color interpolation instead of flat color.
Subdivide the given triangular cubic Bezier patch into smaller triangles and draw each as if using `trif`; `subd` will always be an integer greater than zero.
Bezier patch subdivision is done much like curve subdivision: the point at $(s, t)$ is found by finding the $(s, t)$ point on each triangle, using those points to make a lower-order Bezier patch, and iterating down to just a single point. Thus, in the illustration the 10 input points define 6 triangles (black); the first interpolation defines 3 (red); then 1 (blue); and finally a single point on the surface (green). The normal at that point is the surface normal of the blue triangle, although the normal is not needed for flat shading.
Like `rectbezf` except it uses color interpolation instead of flat color.
This is a combination of three commands:
`texcoord` stores a texture coordinate with each subsequent vertex (as `color` does with `trig`). If `texcoord` has not occurred prior to a vertex being specified, use (0, 0) for that vertex's texture coordinate.
`texture` adds a texture image to be used in subsequent drawing commands.
The example images on this page use splat2.png as the texture image. We will test your program with other images and other image sizes too. Notably, any of the images created by running your program should work too.
Draw a texture-mapped triangle. Interpolate the $(s, t)$ texture coordinates to each fragment; the color of each fragment is then the color from the texture at the texel closest to $(sw, th)$, where the texture is $w \times h$ texels in size. Texture coordinates should wrap; that is, treat -1.3, -0.3, 0.7, 1.7, etc., as all mapping to the same texel.
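A sketch of one reasonable reading of that lookup, with wrapping via a floating-point modulus; the row-major texture layout is an assumption:

```python
def lookup(texture, s, t):
    # texture[row][col] is one texel; the texture is h rows by w columns.
    h, w = len(texture), len(texture[0])
    s %= 1.0   # -1.3, -0.3, 0.7, and 1.7 all become 0.7
    t %= 1.0
    col = min(int(round(s * w)), w - 1)   # texel closest to (s*w, t*h)
    row = min(int(round(t * h)), h - 1)
    return texture[row][col]
```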
You may assume that `trit` will never be called without `texture` being called earlier in the file.
If the keyword `decals` has appeared in the input file, alpha-blend textures on top of the underlying object color (interpolated as with `trig`): transparent parts of the texture should show the underlying object color.
If `decals` has not appeared in the input file, just use textures' RGB values, not the underlying object color, ignoring alpha.
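A sketch of that per-fragment choice, assuming texture colors carry an alpha component in [0, 1]:

```python
def fragment_color(tex_rgba, obj_rgb, decals):
    if not decals:
        return tex_rgba[:3]    # RGB only; alpha is ignored
    a = tex_rgba[3]            # blend the texture over the object color
    return [a * t + (1 - a) * o for t, o in zip(tex_rgba[:3], obj_rgb)]
```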
This is a combination of the `normal` command and at least one lighting command.
Add a point light source centered on the vertex given by index $i_1$ (apply the current model/view matrix but not the current projection). Use the current color for its light.
Point lights require you to interpolate the world-coordinate (x, y, z) of each fragment as well as its screen coordinate (x, y, z) and its normal; the world coordinate is needed to compute the correct direction to the light.
Physically accurate point light intensity would fall with the square of the distance; we will not use falloff in this homework.
For each fragment, use Lambert's law to determine the amount of illumination. Let $c$ be the cosine of the angle between the surface normal and the light direction, $\vec{l}$ be the color of the light, and $\vec{o}$ be the color of the object. The color of the fragment is then $c \vec{l} \vec{o}$, where the vectors are multiplied element-wise (i.e., red times red, green times green, etc.). The dot product can efficiently compute $c$ provided you normalize vectors. Do not include lights behind the object (which can be detected by $c < 0$).
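A sketch of the per-fragment diffuse term, assuming both direction vectors are already unit length:

```python
def lambert(normal, to_light, light_rgb, obj_rgb):
    c = sum(n * l for n, l in zip(normal, to_light))
    if c < 0:
        return [0.0, 0.0, 0.0]   # the light is behind the surface
    return [c * l * o for l, o in zip(light_rgb, obj_rgb)]
```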
Add a specular component to lighting triangles. Use the Blinn-Phong model: the specular intensity is $(\vec{H} \cdot \vec{n})^{\alpha}$, where $\vec{n}$ is the surface normal and $\vec{H}$ is the normalized halfway vector between the direction to the light and the direction to the eye. Because the Model/View always moves the eye to the origin, the direction to the eye is simply the negative of the location of the point being lit.
The color added by specularity is the specular intensity times the color of the light. The color of the object is not used.
Only act based on the most recent `shininess` command. If the shininess is ≤ 0, do not add specularity.
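A sketch, reusing the `renormalize` helper from earlier and assuming the direction vectors are unit length:

```python
def blinn_phong(normal, to_light, to_eye, shininess, light_rgb):
    if shininess <= 0:
        return [0.0, 0.0, 0.0]
    halfway = renormalize([l + e for l, e in zip(to_light, to_eye)])
    spec = max(0.0, sum(n * h for n, h in zip(normal, halfway))) ** shininess
    return [spec * l for l in light_rgb]   # the object color is not used
```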
If `flatnormals` has appeared in the input file, ignore per-vertex `normal`s for `trif` commands and instead use the perpendicular vector of the triangle. This can be found as $(\vec{p_2}-\vec{p_1})\times(\vec{p_3}-\vec{p_1})$.
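A sketch of that perpendicular:

```python
def cross(u, v):
    # Cross product of two 3-vectors.
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def flat_normal(p1, p2, p3):
    # (p2 - p1) x (p3 - p1), the triangle's perpendicular.
    e1 = [b - a for a, b in zip(p1, p2)]
    e2 = [b - a for a, b in zip(p1, p3)]
    return cross(e1, e2)
```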