A raster is a rectangular grid of pixels. The rasterization of a scene consists of a single color at each pixel. Rasterizations are the principal output of canvas-style APIs, which themselves are the backing for most other graphics APIs.
A pixel can be thought of in several ways, but the two most common are as a small square (making the raster a tiling of adjacent pixels) or as a mathematical point (making the raster a void with points distributed evenly across it). These two models are not equivalent, and each has pros and cons.
Treating pixels as mathematical points creates aliasing, where the shape of the grid interacts with the shapes in the scene to create patterns that distract the eye. The most common aliasing effect is stair-stepped edges, which make smooth shapes appear jagged. Much more significant, however, is the display of scene objects that are narrower than a pixel: these can effectively hide between the points, vanishing entirely from the rasterization.
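To make the vanishing concrete, here is a minimal sketch (the scene, names, and one-row raster are my own invention, not any particular API): a bar four-tenths of a pixel wide either shows up or disappears entirely depending on where it sits relative to the pixel centers.

```python
WIDTH = 8  # a one-row raster is enough to show the effect

def rasterize_bar(x_left, x_right):
    """Return one row of pixels, sampling the bar at each pixel center."""
    return ''.join('#' if x_left <= px + 0.5 < x_right else '.'
                   for px in range(WIDTH))

print(rasterize_bar(2.1, 2.5))  # misses every pixel center: ........
print(rasterize_bar(2.4, 2.8))  # straddles one center:      ..#.....
```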
Treating pixels as square regions removes the worst kinds of aliasing: stair-stepped edges and thin scene objects instead look slightly blurred, but the blur is less than a pixel wide and generally does not distract the eye. However, it introduces a problem that point-like pixels do not have: the scene cannot be correctly rendered one piece at a time.
To see this problem, consider a scene containing a white background and two half-pixel-sized black rectangles, both within the same pixel. If those two black rectangles are side-by-side, together covering the full pixel, then the pixel should be black. If they are fully overlapping, both covering the same part of the pixel, then the pixel should be a gray half-way between black and white. If they are partly overlapping, the pixel should be a darker gray. But if we render them one at a time, the first works fine: we add half a pixel of black to a white pixel and get a 50/50 gray. The second rectangle then adds half a pixel of black to a gray pixel, producing a darker 25/75 gray. That could be the right result, but it likely isn't, and the only way to know is to check not just the rasterization of the scene so far but the geometry of the objects that make up the scene.
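Here is that arithmetic as a short sketch; the blend below is the standard coverage-weighted mix written out by hand, not any particular API's compositing call.

```python
white, black = 1.0, 0.0

def composite(pixel, color, coverage):
    """Blend `color` over `pixel`, weighted by the fraction of pixel area covered."""
    return pixel * (1 - coverage) + color * coverage

# One-at-a-time rendering: each black rectangle covers half the pixel.
pixel = white
pixel = composite(pixel, black, 0.5)   # first rectangle:  0.5, a 50/50 gray
pixel = composite(pixel, black, 0.5)   # second rectangle: 0.25, a 25/75 gray
print(pixel)                           # 0.25 no matter how the rectangles overlap

# Geometry-aware rendering: the combined coverage depends on the overlap.
print(composite(white, black, 1.0))    # side-by-side:      0.0, solid black
print(composite(white, black, 0.5))    # fully overlapping: 0.5, a 50/50 gray
```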
By contrast, point-like pixels don't have this problem. Points don't have dimensions, so nothing can cover half of a point. Yes, they tend to have aliasing, but they also let us render a scene one object at a time. One-at-a-time rendering lets us use a very simple API (one simple enough to encode in hardware) and lets us process each object in the scene in parallel on a different processor, possibly even slicing the objects into smaller pieces for more parallelism, without any change in the resulting render. That simplicity and robustness have fueled the development of dedicated graphics hardware and have made point-like pixels the dominant assumption in canvas APIs today.
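A toy demonstration of that robustness (the half-open rectangle test and helper names are my own): slicing a rectangle into two abutting pieces cannot change a point-sampled render, because each pixel-center sample falls inside exactly one of the pieces.

```python
WIDTH, HEIGHT = 6, 4

def render(shapes):
    """Render opaque axis-aligned rectangles by sampling pixel centers."""
    raster = [['.'] * WIDTH for _ in range(HEIGHT)]
    for (x0, y0, x1, y1, ink) in shapes:   # one shape at a time
        for py in range(HEIGHT):
            for px in range(WIDTH):
                cx, cy = px + 0.5, py + 0.5
                if x0 <= cx < x1 and y0 <= cy < y1:
                    raster[py][px] = ink
    return '\n'.join(''.join(row) for row in raster)

whole = [(1, 1, 5, 3, '#')]
sliced = [(1, 1, 3, 3, '#'), (3, 1, 5, 3, '#')]  # same rectangle, cut at x=3
assert render(whole) == render(sliced)           # identical, sample by sample
print(render(whole))
```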
Because APIs designed for point-like pixels can operate by creating a rasterization of each element of the scene independently, it is common to refer to all systems that implement this approach simply as rasterization, and to use a more specific term (raytracing, subdivision, etc.) for every other method of filling a raster with a representation of a scene. In some situations, point-like pixel APIs are named after their most popular algorithms, such as scan-converting, Bresenham, or DDA.
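For reference, here is the DDA in roughly its textbook form (endpoint and rounding conventions vary between implementations): step one unit along the longer axis and linearly interpolate the other coordinate, rounding to the nearest pixel.

```python
def dda_line(x0, y0, x1, y1):
    """Yield the pixels of a line segment via the digital differential analyzer."""
    steps = max(abs(x1 - x0), abs(y1 - y0))
    if steps == 0:
        yield (round(x0), round(y0))
        return
    dx, dy = (x1 - x0) / steps, (y1 - y0) / steps
    x, y = x0, y0
    for _ in range(steps + 1):
        yield (round(x), round(y))
        x, y = x + dx, y + dy

print(list(dda_line(0, 0, 5, 2)))
# [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```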
But what about the aliasing? Canvas APIs generally offer an anti-aliased mode that implements (some approximation of) square-like pixels without changing the simple point-like API design. The result is that turning on anti-aliasing will remove stair-stepped edges and keep thinner-than-a-pixel objects from vanishing, but it will also mean that slicing a shape into two abutting shapes that are mathematically equivalent to the original creates a visible stair-stepped line of semi-transparency along the cut.
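That seam is easy to reproduce in a one-row toy renderer; the per-shape area blend below is my own simplification of what anti-aliased canvas modes do, not any specific implementation.

```python
def coverage(px, x0, x1):
    """Fraction of the unit pixel [px, px+1) covered by the span [x0, x1)."""
    return max(0.0, min(px + 1, x1) - max(px, x0))

def render(spans, width=5):
    """Anti-aliased one-row render: blend each opaque black span over white."""
    row = [1.0] * width                      # 1.0 = white, 0.0 = black
    for (x0, x1) in spans:                   # one shape at a time
        for px in range(width):
            row[px] *= 1 - coverage(px, x0, x1)
    return [round(v, 2) for v in row]

print(render([(1.0, 4.0)]))              # whole:  [1.0, 0.0, 0.0, 0.0, 1.0]
print(render([(1.0, 2.5), (2.5, 4.0)]))  # halves: [1.0, 0.0, 0.25, 0.0, 1.0]
```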
There are several API and algorithm designs that handle square-like pixels correctly; raytracing is definitely the most popular, albeit only for 3D graphics. All correct square-pixel algorithms of which I am aware are dramatically slower than point-pixel algorithms.
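For the one-dimensional spans of the previous sketch, a correct square-pixel renderer is short to write, and it shows where the cost comes from: every pixel must consult the union of all the geometry overlapping it, rather than blending shapes in one at a time. This is a toy of my own, not a named algorithm.

```python
def union_coverage(px, spans):
    """Exact length of pixel [px, px+1) covered by the union of the spans."""
    clipped = sorted((max(px, x0), min(px + 1, x1)) for (x0, x1) in spans)
    covered, cursor = 0.0, float(px)
    for x0, x1 in clipped:
        if x1 > max(cursor, x0):         # skip spans already counted or empty
            covered += x1 - max(cursor, x0)
            cursor = x1
    return covered

def render(spans, width=5):
    """One-row render of opaque black spans over white, geometry-aware."""
    return [round(1 - union_coverage(px, spans), 2) for px in range(width)]

# The two abutting halves from before now merge seamlessly into solid black:
print(render([(1.0, 2.5), (2.5, 4.0)]))  # [1.0, 0.0, 0.0, 0.0, 1.0]
```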