3D Graphics Rendering and Rasterization in WebGL

3D images are rasterized, meaning they are turned into pixels (a 2D array of colors) in the frame buffer. For filled shapes, WebGL only draws triangles; the main reason is that a triangle's three vertices always lie on the same plane (a flat, two-dimensional surface that extends infinitely far). A pixel is represented by numeric values indicating a color, is identified by X/Y screen coordinates, and is held in a frame buffer. A frame buffer is an array (usually of integers): a piece of memory that holds the color of every pixel. It is the end result of rasterization. It has a resolution, which indicates how many pixels it holds, and a depth, which indicates how many bits each pixel's color occupies. For example, a 1920 x 1080 frame buffer at 32 bits per pixel occupies about 8 MB.

Vertices: A vertex is a 3D coordinate in an arbitrary space (a game’s “world”). Geometrically, it is the common endpoint of two or more rays or line segments: a corner point. Rendering is the act of transforming a scene consisting of many 3D vertices into a 2D image (pixels in a frame buffer); rendering and rasterization are terms that are often used interchangeably. We work with 3D objects made out of 3D vertices, which are projected onto our frame buffer, filling in our pixels.

To have an image, you need an object, but an object alone doesn’t make an image: you must also take into account lighting, the view, and so on. Objects are independent of viewers; an image depends on where the viewer is (angle, distance, etc.). Computation and memory bandwidth are going to be the bottlenecks.

Light and Math in Rendering

Light: If the lights are out, the image goes dark, but the object does not disappear; it just cannot be seen. Light sources affect the image we “draw”, and how light interacts with vertices also makes a difference. The math is based on the field of view, the object’s position, and the distance between the lens and the film (the focal length). In our case, the dimensions of the image plane establish the field of view (rather than a lens). The image plane also defines a clipping space: think of it as the image the user sees. The math entails geometry and trigonometry.
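The heart of that geometry is the similar-triangles relationship between a vertex and its projection onto the image plane. Below is a minimal sketch in plain JavaScript (not a WebGL call); it assumes the camera sits at the origin looking down the negative z-axis, and the names projectVertex and focalLength are illustrative only:

//Projects a 3D vertex onto the image plane using similar triangles.
//Assumes the camera is at the origin looking down -z; focalLength is
//the distance from the lens to the image plane.
function projectVertex(vertex, focalLength) {
    var scale = focalLength / -vertex.z;  //similar triangles: x'/focalLength = x/(-z)
    return {
        x: vertex.x * scale,  //x position on the image plane
        y: vertex.y * scale   //y position on the image plane
    };
}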

The Rasterization Process

Our job will be to take a set of givens (the position of a vertex, the position and size of the image plane, and the position and size of the “film”, where the film is the frame buffer and a position is a pixel) and compute what color and amount of light will hit each position of the frame. The key insight is that we apply the same mathematics to each vertex. How the rasterization process is actually implemented is not defined by OpenGL, but OpenGL does define the stages:

  • Vertex Processing: Transforming a vertex position in the world to coordinates relative to the camera.
  • Clipping and Primitive Assembly: Removal of vertices outside the image plane, connection between vertices to establish solid triangles/polygons.
  • Rasterization: The conversion of 3D camera coordinates to 2D pixel coordinates (fragments).
  • Fragment Processing: The coloring of each fragment.

The API defines an abstract idea of a graphics hardware pipeline, which performs the process. Implementers (GPU and software manufacturers) can deviate from the idea—but the results of an API call must be indistinguishable from the “reference”.

Shaders and the GPU

A shader is a program that is loaded onto the GPU and replaces one of the pipeline stages. The name “shader” might not have been the best. Shaders can be loaded to replace the vertex and fragment processors; the other stages remain fixed by the GPU (this is not a limitation). A shader API is a main reason DirectX ruled supreme in game development. Eventually, OpenGL caught up with OpenGL 2.0 and the OpenGL Shading Language (GLSL).
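In WebGL, loading a shader amounts to compiling its GLSL source and linking it into a program object. A minimal sketch, assuming vertexSource and fragmentSource hold GLSL source strings, with error handling omitted:

var vertexShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertexShader, vertexSource);
gl.compileShader(vertexShader);

var fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragmentShader, fragmentSource);
gl.compileShader(fragmentShader);

var program = gl.createProgram();   //the program holds both programmable stages
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
gl.useProgram(program);             //make it the active program for subsequent draws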

GPU memory is accessed through buffer objects, created with the gl.createBuffer() function. You can create any number of buffers, but WebGL only operates on buffers that are bound to one of its targets while it’s processing data. So, once a buffer is bound to the array-buffer target, WebGL assumes that calls referring to that target are talking about that buffer; if data is manipulated, it is the data bound to the array buffer that gets manipulated.

The first step is to create a buffer:

var bufferId = gl.createBuffer(); //Notice no size... it's not really being allocated.

This buffer is formally called a Vertex Buffer Object (VBO).
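Before any vertex data can be uploaded, the buffer must be bound to a target; calls that reference that target then operate on the bound buffer. A minimal sketch, where positions is assumed to be our application’s flat array of 2D vertex coordinates:

var positions = new Float32Array([
    -0.5, -0.5,
     0.5, -0.5,
     0.0,  0.5
]);

gl.bindBuffer(gl.ARRAY_BUFFER, bufferId);  //make this buffer the active array buffer
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);  //now GPU memory is allocated and filled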

Buffer Targets

There are several “active” targets used by OpenGL; for now, we are looking at just gl.ARRAY_BUFFER. The array buffer contains actual vertex data, as opposed to indices into vertex data held elsewhere. We will see indexing targets later.

Remember that the WebGL API allows code on the CPU to set up data and then tell code on the GPU how to process that data.

Describing the Vertices

We’ve uploaded a big array of data to the GPU. We’ve uploaded a vertex shader to process each vertex. How do we tell the GPU to extract vertices (and position attributes) from our array?

//Gives us a reference to the vposition attribute in our vertex shader. It's the input to the program.
var vposition = gl.getAttribLocation( program, "vposition");

//Now we tell WebGL that this attribute has 2 floats per vertex
//4th parameter says not to normalize the data
//5th parameter (the stride) says there is no gap between vertices
//6th parameter (the offset) says the first vertex starts at byte 0 in the array buffer
gl.vertexAttribPointer( vposition, 2, gl.FLOAT, false, 0, 0);

//now we enable this particular attribute
gl.enableVertexAttribArray( vposition);

This allows the GPU to populate vposition in each instance of the vertex shader.

Vertex Attributes

  • Position
  • Texture Coordinates
  • Color
  • Point Size
  • Normal

Data’s on GPU – Now What?

  • Vertex Processing: Convert Vertex from world coordinates to camera coordinates (More later, but for now, they are the same).
    • Programmable (Vertex Shader)
  • Clipping and Primitive Assembly: Removes vertices/edges that will not be in view, constructs lines/polygons as instructed (just doing points right now).
    • Fixed
  • Rasterization: Converts the 3D vertices (and the primitives they form) into pixels.
    • Fixed
  • Fragment Processing: Among other things, assigning color to the pixels.
    • Programmable (Fragment shader)

Vertex Shader

  • A vertex shader is called once for each vertex.
  • Each execution is independent of all others.
  • Each execution is potentially done in parallel.
  • The input to a vertex shader program is … a vertex.
  • In OpenGL, the programmer defines what a vertex actually consists of (it can have many properties attached to it), but the bare minimum is a position.
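For illustration, a minimal vertex shader might look like the following sketch, written as a GLSL source string of the kind a page could embed. It assumes the vposition attribute set up earlier and does nothing but pass the position through:

var vertexSource =
    "attribute vec2 vposition;"                    +  //per-vertex input, matching getAttribLocation
    "void main() {"                                +
    "    gl_Position = vec4(vposition, 0.0, 1.0);" +  //expand the 2D position to 4D clip coordinates
    "    gl_PointSize = 10.0;"                     +  //only meaningful when drawing gl.POINTS
    "}";

gl_Position and gl_PointSize are built-in GLSL outputs; every vertex shader must write gl_Position.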

Graphics Primitives

Not just Points

  • Drawing points can be very useful – but for most graphical programs, we need other things as well.
  • A vertex is a part of a primitive.
  • A point is a primitive with one vertex.
  • A line is a primitive with two vertices.
  • A triangle is a primitive with three vertices.

First off, how did we actually get points?

gl.drawArrays( gl.POINTS, 0, points.length); //1st parameter: the primitive mode; 2nd: the first vertex to draw; 3rd: how many vertices to draw

Other Options

  • gl.LINES
  • gl.LINE_STRIP
  • gl.LINE_LOOP
  • gl.TRIANGLES
  • gl.TRIANGLE_STRIP
  • gl.TRIANGLE_FAN

Different Primitives, Same Vertices

Vertices and the primitives they make up are decoupled.
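As a sketch of that decoupling, the same bound buffer of vertices can be drawn several ways just by changing the mode passed to gl.drawArrays (numVertices here stands for however many vertices the buffer holds):

gl.drawArrays( gl.POINTS, 0, numVertices);      //one point per vertex
gl.drawArrays( gl.LINE_LOOP, 0, numVertices);   //a closed outline through the vertices
gl.drawArrays( gl.TRIANGLES, 0, numVertices);   //filled triangles, three vertices at a time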

Line Loops vs. Polygons

  • Line loops appear to be a way of specifying polygons – or triangles.
  • In WebGL (and OpenGL), polygons are filled regions.
  • Line loops are not filled regions; they can be used for wireframes of objects.
  • WebGL is limited to triangles for good reason: triangles always form a plane.
    • Triangles can be (and are) optimized in hardware.
    • We can create any polygon out of a combination of triangles (see the sketch after this list).
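As a sketch of that last point, a filled square can be assembled from two triangles that share a diagonal, using the same 2-floats-per-vertex layout assumed earlier (the quad name is illustrative only):

//A square built from two triangles (six vertices, sharing the diagonal).
var quad = new Float32Array([
    -0.5, -0.5,   0.5, -0.5,   0.5,  0.5,   //first triangle
    -0.5, -0.5,   0.5,  0.5,  -0.5,  0.5    //second triangle
]);
gl.bufferData(gl.ARRAY_BUFFER, quad, gl.STATIC_DRAW);
gl.drawArrays( gl.TRIANGLES, 0, 6);  //two filled triangles make the square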