
Intro to WebGL using Angular – How to Build in 3D (Part 3)

Prerequisites for beginning a 3D build

  • You’ve completed part 1 (setting up a scene).
  • You’ve completed part 2 (setting up and compiling shaders).

Thinking in 3D with Angular & WebGL

So we’ve got a square, cool. Do you know what’s even better? A CUBE! Let’s build one!

We’ve already covered a lot of the fundamentals of building, binding and rendering simple geometry in WebGL. We’ll now extend what we’ve built to handle 3D.

The general process is simple: add additional vertex positions and colours to our existing buffers so that we visualize a 3D scene instead of a 2D one.

This means we need to add in a Z-axis to our vertex and colour points.

How to define positions for a 3D cube

In order to do this, we need to think about how to define these points.

First, how many faces are there on a cube? Six. Therefore, in order to build a 3D cube, we need to define vertex positions for all six cube faces. WebGL can then render the cube as expected.

E.g.

// illustrates points in 2D space
const positions2D = new Float32Array([
   // front face
   1.0,  1.0, 
  -1.0,  1.0, 
   1.0, -1.0, 
  -1.0, -1.0
]);

// illustrates points in 3D space
const positions3D = new Float32Array([
   // Front face
    -1.0, -1.0,  1.0,
     1.0, -1.0,  1.0,
    -1.0,  1.0,  1.0,

     1.0,  1.0,  1.0,
    -1.0,  1.0,  1.0,
     1.0, -1.0,  1.0,

    // Back face
    -1.0, -1.0, -1.0,
    -1.0,  1.0, -1.0,
     1.0,  1.0, -1.0,

     1.0,  1.0, -1.0,
     1.0, -1.0, -1.0,
    -1.0, -1.0, -1.0,

    // Top face
    -1.0,  1.0, -1.0,
    -1.0,  1.0,  1.0,
     1.0,  1.0,  1.0,

     1.0,  1.0,  1.0,
     1.0,  1.0, -1.0,
    -1.0,  1.0, -1.0,

    // Bottom face
    -1.0, -1.0, -1.0,
     1.0, -1.0, -1.0,
     1.0, -1.0,  1.0,

     1.0, -1.0,  1.0,
    -1.0, -1.0,  1.0,
    -1.0, -1.0, -1.0,

    // Right face
    1.0, -1.0, -1.0,
    1.0,  1.0, -1.0,
    1.0,  1.0,  1.0,

    1.0,   1.0,  1.0,
    1.0,  -1.0,  1.0,
    1.0,  -1.0, -1.0,

    // Left face
    -1.0, -1.0, -1.0,
    -1.0, -1.0,  1.0,
    -1.0,  1.0,  1.0,

    -1.0,  1.0,  1.0,
    -1.0,  1.0, -1.0,
    -1.0, -1.0, -1.0,
]);

In the array definition above, you can see that we’ve defined each point for each face of the cube we want rendered.

For each face we’ve defined six positions (two triangles of three vertices each), and each position is represented with an x, y, z coordinate. That’s a total of 36 points. Defining these points allows WebGL to build our cube in 3D space.
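
As a quick sanity check (a throwaway snippet, not part of the tutorial code), you can verify the vertex count directly:

// 3 components (x, y, z) per vertex; 6 faces * 2 triangles * 3 vertices = 36
const vertexTotal = positions3D.length / 3;
console.assert(vertexTotal === 36, `expected 36 vertices, got ${vertexTotal}`);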

How to define colours for a 3D cube

Let’s now update the way we define colours for our cube by implementing the code below in initialiseBuffers():

// Set up the colors for the vertices
const faceColors = [
  [1.0,  1.0,  1.0,  1.0],    // Front face: white
  [1.0,  0.0,  0.0,  1.0],    // Back face: red
  [0.0,  1.0,  0.0,  1.0],    // Top face: green
  [0.0,  0.0,  1.0,  1.0],    // Bottom face: blue
  [1.0,  1.0,  0.0,  1.0],    // Right face: yellow
  [1.0,  0.0,  1.0,  1.0],    // Left face: purple
];

// Convert the array of colors into a table for all the vertices.
let colors = [];
for (let j = 0; j < faceColors.length; ++j) {
  const c = faceColors[j];

  // Repeat each color six times: once for each of the three vertices
  // of the two triangles we render for each cube face
  colors = colors.concat(c, c, c, c, c, c);
}

In the code above, we define RGBA colours for each cube face.

The for loop we define iterates through the array of face colours and builds a table of array data that applies colour values for all of the cube points.

The usage of colors = colors.concat(c, c, c, c, c, c) might look confusing at first, but all it’s doing is appending six copies of the faceColor row we’ve retrieved.

It quickly builds up a buffer of colours for the six points of the two triangles that make up one face of the cube (e.g. Triangle 1 = TL, BL, BR and Triangle 2 = TL, TR, BR), adds the result to the colors array, and then continues onto the next faceColor item.

E.g.

TL _ _ _ _ _ TR
 | \        |
 |   \  T2  |    T1 = TL, BL, BR 
 |     \    |    T2 = TL, TR, BR
 |  T1   \  |    6 points that we need to color in
 |_ _ _ _ _\|
BL           BR

By the end of this process, we have a table of array data which represents the intended colours for every point we desire.

If we were to manually type this out, the result would look like this:

const colors = new Float32Array([
    1.0,  1.0,  1.0,  1.0,    // Front face: white
    1.0,  1.0,  1.0,  1.0,    // Front face: white
    1.0,  1.0,  1.0,  1.0,    // Front face: white
    1.0,  1.0,  1.0,  1.0,    // Front face: white
    1.0,  1.0,  1.0,  1.0,    // Front face: white
    1.0,  1.0,  1.0,  1.0,    // Front face: white
    1.0,  0.0,  0.0,  1.0,    // Back face: red
    1.0,  0.0,  0.0,  1.0,    // Back face: red
    1.0,  0.0,  0.0,  1.0,    // Back face: red
    1.0,  0.0,  0.0,  1.0,    // Back face: red
    1.0,  0.0,  0.0,  1.0,    // Back face: red
    1.0,  0.0,  0.0,  1.0,    // Back face: red
    0.0,  1.0,  0.0,  1.0,    // Top face: green
    0.0,  1.0,  0.0,  1.0,    // Top face: green
    0.0,  1.0,  0.0,  1.0,    // Top face: green
    0.0,  1.0,  0.0,  1.0,    // Top face: green
    0.0,  1.0,  0.0,  1.0,    // Top face: green
    0.0,  1.0,  0.0,  1.0,    // Top face: green
    0.0,  0.0,  1.0,  1.0,    // Bottom face: blue
    0.0,  0.0,  1.0,  1.0,    // Bottom face: blue
    0.0,  0.0,  1.0,  1.0,    // Bottom face: blue
    0.0,  0.0,  1.0,  1.0,    // Bottom face: blue
    0.0,  0.0,  1.0,  1.0,    // Bottom face: blue
    0.0,  0.0,  1.0,  1.0,    // Bottom face: blue
    1.0,  1.0,  0.0,  1.0,    // Right face: yellow
    1.0,  1.0,  0.0,  1.0,    // Right face: yellow
    1.0,  1.0,  0.0,  1.0,    // Right face: yellow
    1.0,  1.0,  0.0,  1.0,    // Right face: yellow
    1.0,  1.0,  0.0,  1.0,    // Right face: yellow
    1.0,  1.0,  0.0,  1.0,    // Right face: yellow
    1.0,  0.0,  1.0,  1.0,    // Left face: purple
    1.0,  0.0,  1.0,  1.0,    // Left face: purple
    1.0,  0.0,  1.0,  1.0,    // Left face: purple
    1.0,  0.0,  1.0,  1.0,    // Left face: purple
    1.0,  0.0,  1.0,  1.0,    // Left face: purple
    1.0,  0.0,  1.0,  1.0     // Left face: purple
]);

This would be pretty redundant to write out manually, so we use a for loop to help keep things simple and achieve our goal.
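
If you prefer a more functional style, the same table can be built with flatMap (assuming your TypeScript build targets ES2019 or later for flat/flatMap):

// Repeat each face colour six times, then flatten into one RGBA-per-vertex list
const colors: number[] = faceColors.flatMap((c) => Array(6).fill(c).flat());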

Update bindVertexPosition()

Let’s go back to our bindVertexPosition() function and update bufferSize from 2 to 3 in order to account for the Z-axis we’re now including as part of our position.

This small update tells WebGL to pull three values per vertex for the position attribute when rendering.
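
For reference, here’s what the updated method should now look like (identical to the part 2 version apart from bufferSize):

bindVertexPosition(programInfo: any, buffers: any) {
  // 3 components per vertex now: x, y and z
  const bufferSize = 3;
  const type = this.gl.FLOAT;
  const normalize = false;
  const stride = 0;
  const offset = 0;
  this.gl.bindBuffer(this.gl.ARRAY_BUFFER, buffers.position);
  this.gl.vertexAttribPointer(
    programInfo.attribLocations.vertexPosition,
    bufferSize,
    type,
    normalize,
    stride,
    offset
  );
  this.gl.enableVertexAttribArray(programInfo.attribLocations.vertexPosition);
}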

Cleaning up web-gl.service.ts

Create a new method in web-gl.service.ts and call it formatScene().

Add in the following:

/**
  * Formats the scene for rendering (by resizing the WebGL canvas and setting the defaults for WebGL drawing).
  */
public formatScene() {
    this.updateWebGLCanvas();
    this.resizeWebGLCanvas();
    this.updateViewport();
}

Create a getter for the modelViewMatrix property we have in the service:

/**
  * Gets the {@link modelViewMatrix}.
  *
  * @returns modelViewMatrix
  */
getModelViewMatrix(): mat4 {
    return this.modelViewMatrix;
}

We’ll need to reference this matrix when we want to render our cube and apply animation effects to it, such as rotation and translation.

Go to the prepareScene() method and update the entire implementation with the following:

/**
 * Prepares the WebGL context to render content.
 */
prepareScene() {
    // tell WebGL how to pull out the positions from the position
    // buffer into the vertexPosition attribute
    this.bindVertexPosition(this.programInfo, this.buffers);

    // tell WebGL how to pull out the colors from the color buffer
    // into the vertexColor attribute.
    this.bindVertexColor(this.programInfo, this.buffers);

    // tell WebGL to use our program when drawing
    this.gl.useProgram(this.programInfo.program);

    // set the shader uniforms
    this.gl.uniformMatrix4fv(
        this.programInfo.uniformLocations.projectionMatrix,
        false,
        this.projectionMatrix
    );
    this.gl.uniformMatrix4fv(
        this.programInfo.uniformLocations.modelViewMatrix,
        false,
        this.modelViewMatrix
    );
}

Essentially, we’ve just removed the lines where we were resizing and updating the WebGL canvas within this method, along with the matrix.mat4.translate(...) operation that moved the modelViewMatrix “backwards” so we could view the rendered square.

We’re moving that old code over to scene.component.ts so the component is responsible for performing the matrix translate, rotate and scale operations, instead of the service.

Updating drawScene() in scene.component.ts

Let’s update drawScene() in scene.component.ts with a bit of code to now render out our updated buffer data.

Add an import to gl-matrix at the top of the file.

import * as matrix from 'gl-matrix';

Add these two variables to the SceneComponent class underneath the private gl: WebGLRenderingContext definition, e.g.

...
private _60fpsInterval = 16.666666666666666667;
private gl: WebGLRenderingContext
private cubeRotation = 0;
private deltaTime = 0;
constructor(private webglService: WebGLService) {}
...

Great, let’s update ngAfterViewInit(): void with the following:

ngAfterViewInit(): void {
    if (!this.canvas) {
      alert('canvas not supplied! cannot bind WebGL context!');
      return;
    }
    this.gl = this.webglService.initialiseWebGLContext(
      this.canvas.nativeElement
    );
    // Set up to draw the scene periodically via deltaTime.
    const milliseconds = 0.001;
    this.deltaTime = this._60fpsInterval * milliseconds;
    const drawSceneInterval = interval(this._60fpsInterval);
    drawSceneInterval.subscribe(() => {
      this.drawScene();
      this.deltaTime = this.deltaTime + (this._60fpsInterval * milliseconds);
    });
}

We’ve added a little bit of code here which converts the ~16.67 ms frame interval into seconds (by multiplying by 0.001) and accumulates the result in deltaTime each time we render a frame.

We then set deltaTime as the value of the cubeRotation variable to specify, in radians, how much rotation to apply to the cube every time we render a frame.
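
To spell out the arithmetic (a standalone sketch, not code to add to the component):

// 16.667 ms per frame at 60fps; multiplying by 0.001 converts ms to seconds
const frameIntervalMs = 16.666666666666666667;
const secondsPerFrame = frameIntervalMs * 0.001; // ~0.01667 s
// deltaTime accumulates this every frame, so after one second of frames
// cubeRotation has advanced by roughly 1 radian (60 * 0.01667 ≈ 1)
console.log(`rotation per frame: ${secondsPerFrame.toFixed(5)} radians`);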

Update drawScene() with the following:

/**
 * Draws the scene
 */
private drawScene() {
    // prepare the scene and update the viewport
    this.webglService.formatScene();

    // draw the scene
    const offset = 0;
    // 2 triangles, 3 vertices, 6 faces
    const vertexCount = 2 * 3 * 6;

    // translate and rotate the model-view matrix to display the cube
    const mvm = this.webglService.getModelViewMatrix();
    matrix.mat4.translate(
        mvm,                    // destination matrix
        mvm,                    // matrix to translate
        [0.0, 0.0, -6.0]        // amount to translate
        );
    matrix.mat4.rotate(
        mvm,                    // destination matrix
        mvm,                    // matrix to rotate
        this.cubeRotation,      // amount to rotate in radians
        [1, 1, 1]               // rotate around X, Y, Z axis
    );

    this.webglService.prepareScene();

    this.gl.drawArrays(
        this.gl.TRIANGLES,
        offset,
        vertexCount
    );

    // rotate the cube
    this.cubeRotation = this.deltaTime;
}

Notice that we’re now calling webglService.formatScene() to set and update the viewport for rendering. We’ve also updated the vertexCount variable, which we had hardcoded to 4 in the last tutorial, to reflect what is now being rendered on screen: vertexCount = 2 * 3 * 6.

‘2’ represents the number of triangles we’re rendering per cube face, ‘3’ represents the number of vertices for each triangle, and ‘6’ represents the number of cube faces we’re rendering, for a total of 36 vertices. This matches the number of vertex positions we defined in initialiseBuffers().

In the next bit of code, we retrieve the modelViewMatrix and use it to perform a translate along the Z-axis, pushing the rendered cube backwards so we can view it. We then apply a rotation to it based on cubeRotation, which is set to the updated deltaTime after each render pass.

Finally, we call webglService.prepareScene() to bind all vertex and color buffers and then make a call to gl.drawArrays(this.gl.TRIANGLES, offset, vertexCount) to render the cube on screen!

If you did everything correctly, you should now see the following when you run npm start on the solution:

[Image: the rendered, rotating cube]

You’ve now successfully built a 3-dimensional cube in WebGL using Angular!

In part 4 we’ll look at indexed vertices and adding textures to our cube in WebGL using Angular!

As usual, the source code for this tutorial is available here: https://gitlab.com/MikeHewett/intro-webgl-part-3.git


Intro to WebGL using Angular – How to Set Up and Compile Shaders (Part 2)

Prerequisites for Setting up and Compiling Shaders

  • This tutorial assumes that you’ve completed part 1 (setting up a scene).

Setting up Shaders

At the end of the tutorial in part 1, I mentioned that we were going to set up shaders and a triangle in WebGL using Angular. However, we’re not going to create a triangle; we’re going to create a SQUARE instead!

In this part of the tutorial, we’re going to create shaders and compile them into our framework as part of the loading cycle for rendering content to screen.

WebGL requires two shaders each time you wish to draw something to screen. I mentioned briefly in part 1 what vertex and fragment shaders were and what they do, but for clarity here’s the description again:

  • vertex shader
    • responsible for computing vertex positions – based on the positions, WebGL can then rasterize primitives including points, lines, or triangles.
  • fragment shader
    • when primitives are being rasterized, WebGL calls the fragment shader to compute a colour for each pixel of the primitive that’s currently being drawn.

There’s a fair bit of technical reading in regards to how a vertex and fragment shader go about their business, and how they function in unison to rasterize and colour objects on screen.

One thing to know is that the language both shaders use is called the OpenGL Shading Language (GLSL). It has features that aren’t common in JavaScript, specialised for the math commonly needed to compute graphics rasterisation.

Further reading is available here: WebGL Shaders and GLSL

Creating Fragment and Vertex Shaders

Let’s define two files in our Assets folder:

  • toucan-fragment-shader.glsl
  • toucan-vertex-shader.glsl

Populate toucan-fragment-shader.glsl with the following:

varying lowp vec4 vColor;
void main(void) {
    gl_FragColor = vColor;
}

The above code assigns a color to gl_FragColor from vColor to be presented on screen. vColor is assigned a value in the vertex shader below. We will go more in-depth on how this process occurs soon.

Populate toucan-vertex-shader.glsl with the following:

attribute vec4 aVertexPosition;
attribute vec4 aVertexColor;
uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;
varying lowp vec4 vColor;
void main(void) {
    gl_Position = uProjectionMatrix * uModelViewMatrix * aVertexPosition;
    vColor = aVertexColor;
}

The above code computes a gl_Position by multiplying the projection matrix, model view matrix and the current vertex’s position. It also assigns vColor a color from aVertexColor which is computed from our app as part of rendering. Again, more on this process later.

Loading the shaders in TypeScript

We now have two glsl files that we need to load as strings into our Angular app. We need to download a few packages to enable us to do this, though.

Since we’re using an Angular project, we need to extend angular-cli’s existing webpack configuration and add a loader which will enable us to compile and load glsl shaders on demand.

Install the packages below:

npm i ts-loader --save-dev
npm i ts-shader-loader --save-dev
npm i @angular-builders/custom-webpack --save-dev

ts-loader is the TypeScript loader for webpack.
ts-shader-loader is a GLSL shader loader for webpack.
@angular-builders/custom-webpack is a builder that allows customising the build configuration without ejecting the webpack configuration.

Once you’ve downloaded these packages into your project, you’ll need to do the following:

  • add a glsl.d.ts file to the src folder of the project and populate it with the following code:
declare module '*.glsl' {
  const value: string; 
  export default value;
}

This declaration identifies all .glsl files as modules whose exported value is the file’s content, as a string.

This means that we can now do the following in our project:

import fragmentShaderSrc from '../../../assets/toucan-fragment-shader.glsl';
import vertexShaderSrc from '../../../assets/toucan-vertex-shader.glsl';

And the variables fragmentShaderSrc and vertexShaderSrc are immediately available as strings.
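
If you want to sanity-check the imports once the loader configuration below is in place, a temporary log will do (remove it afterwards):

// should print the beginning of the raw GLSL source as a plain string
console.log(vertexShaderSrc.substring(0, 40));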

  • Next, create a webpack.config.js file at the root directory of the solution.

We need to create an additional webpack.config.js to augment the existing angular-cli one in order to load in glsl files correctly with the underlying webpack config that angular-cli uses.

Populate the webpack.config.js file with the following:

module.exports = {
    module: {
      rules: [
        // all files with a `.ts` or `.tsx` extension will be handled by `ts-loader`
        { test: /\.tsx?$/, loader: "ts-loader" },
        { test: /\.(glsl|vs|fs)$/, loader: "ts-shader-loader" }
      ]
    }
  };

The code above uses ts-loader and ts-shader-loader to pick the appropriate loader based on the file type being processed, which is what allows us to compile and import glsl files as described earlier.

We still need to do one more thing: update the angular.json configuration to make use of our extra webpack.config.js file.

Open angular.json, navigate to the "serve" definition of the config file and update:

"builder": "@angular-devkit/build-angular:dev-server" 

to

"builder": "@angular-builders/custom-webpack:dev-server"

Next, navigate to the "build" definition of the config file and update:

"builder": "@angular-devkit/build-angular:browser"

to

"builder": "@angular-builders/custom-webpack:browser"

Finally, add in customWebpackConfig property in the "options" definition with the following:

"options": {
  "customWebpackConfig": {
    "path": "./webpack.config.js"
  },
  ...
}


Loading and Compiling the Shaders into WebGL

Great, we can now load shader scripts into our app using TypeScript, which Angular will be able to compile successfully. Now it’s time to load our shaders into memory so they can be associated with our WebGL context.

We need to do a few things first:

  1. Determine if the shader type we’re loading is supported
  2. Load the shaders into memory
  3. Check to see if the shaders were compiled/loaded successfully
  4. Finally, create a WebGLProgram, associate the shaders to it, and return the result.

For step 1, create a method called: determineShaderType(shaderType: string): number
Within this method we just check to see if the type we supply matches the known mime types for a vertex or fragment shader like so:

private determineShaderType(shaderMimeType: string): number {
  if (shaderMimeType) {
    if (shaderMimeType === 'x-shader/x-vertex') {
      return this.gl.VERTEX_SHADER;
    } else if (shaderMimeType === 'x-shader/x-fragment') {
      return this.gl.FRAGMENT_SHADER;
    } else {
      console.log('Error: could not determine the shader type');
    }
  }
  return -1;
}

For step 2, create a method called loadShader(shaderSource: string, shaderType: string): WebGLShader.
We’ll create a shader based on the shader type (using the code determined from step 1), load the shader source code into it, and compile it. Once it’s compiled, we run a check to see whether compilation succeeded and return the result.
e.g.

private loadShader(shaderSource: string, shaderType: string): WebGLShader {
  const shaderTypeAsNumber = this.determineShaderType(shaderType);
  if (shaderTypeAsNumber < 0) {
    return null;
  }
  // Create the gl shader
  const glShader = this.gl.createShader(shaderTypeAsNumber);
  // Load the source into the shader
  this.gl.shaderSource(glShader, shaderSource);
  // Compile the shader
  this.gl.compileShader(glShader);
  // Check the compile status and only return the shader if it compiled
  return this.checkCompiledShader(glShader) ? glShader : null;
}

For step 3, create a method called: checkCompiledShader(shader: WebGLShader): boolean.
This checks that we actually have a shader instance and queries its compile status. If compilation failed, it retrieves the error information from the shader’s info log, logs it, and deletes the shader.
It returns false if the shader was null or failed to compile. It returns true for everything else.
e.g.

private checkCompiledShader(shader: WebGLShader | null): boolean {
  if (!shader) {
    console.log("couldn't create the shader");
    return false;
  }
  // query the compile status of the shader
  const compiled = this.gl.getShaderParameter(shader, this.gl.COMPILE_STATUS);
  if (!compiled) {
    // shader failed to compile, get the last error
    const lastError = this.gl.getShaderInfoLog(shader);
    console.log("couldn't compile the shader due to: " + lastError);
    this.gl.deleteShader(shader);
    return false;
  }
  return true;
}

For step 4, create a method called initialiseShaders(): WebGLProgram.

We’ll use this method to do the following:

  1. Create a WebGLProgram
  2. Compile the vertex and fragment shader scripts we defined earlier
  3. Attach the compiled vertex and fragment shaders to the WebGLProgram using our WebGLContext
  4. Link our WebGLContext to the WebGLProgram
  5. Do a check to ensure that the shaders have been loaded successfully
  6. Return the resultant WebGLProgram

e.g.

initializeShaders(): WebGLProgram {
    // 1. Create the shader program
    let shaderProgram = this.gl.createProgram();
    // 2. compile the shaders
    const compiledShaders = [];
    let fragmentShader = this.loadShader(
      fragmentShaderSrc,
      'x-shader/x-fragment'
    );
    let vertexShader = this.loadShader(
      vertexShaderSrc,
      'x-shader/x-vertex'
    );
    compiledShaders.push(fragmentShader);
    compiledShaders.push(vertexShader);
    // 3. attach the shaders to the shader program using our WebGLContext
    if (compiledShaders && compiledShaders.length > 0) {
      for (let i = 0; i < compiledShaders.length; i++) {
        const compiledShader = compiledShaders[i];
        if (compiledShader) {
          this.gl.attachShader(shaderProgram, compiledShader);
        }
      }
    }
    // 4. link the shader program to our gl context
    this.gl.linkProgram(shaderProgram);
    // 5. check if everything went ok
    if (!this.gl.getProgramParameter(shaderProgram, this.gl.LINK_STATUS)) {
      console.log(
        'Unable to initialize the shader program: ' +
          this.gl.getProgramInfoLog(shaderProgram)
      );
    }
    // 6. return shader
    return shaderProgram;
}

Finally, back in initialiseWebGLContext(canvas: HTMLCanvasElement): any, add the call to initializeShaders() at the end of it.

It should now look like this:

initialiseWebGLContext(canvas: HTMLCanvasElement): any {
  // Try to grab the standard context. If it fails, fallback to experimental.
  this._renderingContext =
    canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  // If we don't have a GL context, give up now... only continue if WebGL is available and working...
  if (!this.gl) {
    alert('Unable to initialize WebGL. Your browser may not support it.');
    return;
  }
  this.setWebGLCanvasDimensions(canvas);
  this.initialiseWebGLCanvas();
  // initialise shaders into WebGL
  let shaderProgram = this.initializeShaders();
}

Creating ProgramInfo for Shaders

Yay, we’ve managed to compile and initialise shaders within WebGL. But we’re still only halfway to actually getting something rendering on screen.

We’ve currently defined a means of displaying and colouring content, but we haven’t created the content to render, nor have we created the means to bind and supply the necessary info to our GPU to actually render the content.

The next step to getting something displaying on screen is to create an object which will contain a definition of our shaderProgram and reference the shader information we’ve exposed in our .glsl files (attribs and uniforms).

This object is typically called ProgramInfo and describes the shader program to use, and the attribute and uniform locations that we want our shader program to be aware of when rendering content.

First, define the following variables at the top of the WebGLService class so we can reference them throughout the article:

/**
 * Gets the {@link gl.canvas} as a {@link Element} client.
 */
private get clientCanvas(): Element {
  return this.gl.canvas as Element
}
private fieldOfView = (45 * Math.PI) / 180; // in radians
private aspect = 1;
private zNear = 0.1;
private zFar = 100.0;
private projectionMatrix = matrix.mat4.create();
private modelViewMatrix = matrix.mat4.create();
private buffers: any
private programInfo: any

We’ll cover the details of these variables more later.

Now, at the bottom of initialiseWebGLContext(canvas: HTMLCanvasElement): any, add the following implementation:

this.programInfo = {
  program: shaderProgram,
  attribLocations: {
    vertexPosition: this.gl.getAttribLocation(
      shaderProgram,
      'aVertexPosition'
    ),
    vertexColor: this.gl.getAttribLocation(shaderProgram, 'aVertexColor'),
  },
  uniformLocations: {
    projectionMatrix: this.gl.getUniformLocation(
      shaderProgram,
      'uProjectionMatrix'
    ),
    modelViewMatrix: this.gl.getUniformLocation(
      shaderProgram,
      'uModelViewMatrix'
    ),
  },
};

Notice that aVertexPosition, aVertexColor, uModelViewMatrix and uProjectionMatrix were defined in the vertex shader earlier as:

attribute vec4 aVertexPosition;
attribute vec4 aVertexColor;
uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;

We’ve now referenced them within attribLocations and uniformLocations respectively.

Now what? Let’s create some buffers and data so we actually have content to render.
Then, we’ll set up a rendering scene to provide the foundation to render content.

Creating Content to Render (Buffers)

Let’s create a new method called initialiseBuffers(): any.

e.g.

initialiseBuffers(): any {
}

We’ll create two buffers to render content with:

  • one to store positional data (where to render)
  • the second to store colour data (what colour to render)

We’re going to keep things simple and limit our buffers to providing data on rendering a simple 2D square.

Initialising a buffer in WebGL is pretty simple: just call gl.createBuffer().

There are many different types of buffers available in WebGL. As such, we need to tell WebGL what we want to do with this buffer, how it should interpret it, and provide it the data to bind to.

Implement the following in initialiseBuffers():

// Create a buffer for the square's positions.
const positionBuffer = this.gl.createBuffer();
// bind the buffer to WebGL and tell it to accept an ARRAY of data
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, positionBuffer);
// create an array of positions for the square.
const positions = new Float32Array([
   1.0,  1.0, 
  -1.0,  1.0, 
   1.0, -1.0, 
  -1.0, -1.0
]);
// Pass the list of positions into WebGL to build the
// shape. We do this by creating a Float32Array from the
// array, then use it to fill the current buffer.
// We tell WebGL that the data supplied is an ARRAY and
// to handle the data as a statically drawn shape.
this.gl.bufferData(
  this.gl.ARRAY_BUFFER,
  positions,
  this.gl.STATIC_DRAW
);

As you’ve probably noticed, bindBuffer tells WebGL which buffer we want to provide data for. Defining, binding, and supplying buffer data is a procedural process, and whenever you’re adding buffers to WebGL you need to stick to this format to ensure the data you create is handled and assigned properly, allowing WebGL to render it correctly.

This approach isn’t limited to WebGL; it’s typical of OpenGL in general. It’s all about memory management. It’s good practice to be strict when creating, binding and applying buffer data so as not to introduce memory leaks. It also lets developers make better use of the available APIs and build their apps around these functions.
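
To make the create, bind, apply pattern explicit, here’s a small helper sketch (createStaticBuffer is a hypothetical name, not part of the tutorial code):

// Hypothetical helper illustrating the create -> bind -> supply pattern
private createStaticBuffer(data: Float32Array): WebGLBuffer {
  const buffer = this.gl.createBuffer();             // 1. create
  this.gl.bindBuffer(this.gl.ARRAY_BUFFER, buffer);  // 2. bind
  this.gl.bufferData(this.gl.ARRAY_BUFFER, data, this.gl.STATIC_DRAW); // 3. supply
  return buffer;
}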

But enough of that, let’s create another buffer to store colour data.

// Set up the colors for the vertices
const colors = new Float32Array([
  1.0, 1.0, 1.0, 1.0, // white
  1.0, 0.0, 0.0, 1.0, // red
  0.0, 1.0, 0.0, 1.0, // green
  0.0, 0.0, 1.0, 1.0, // blue
]);
const colorBuffer = this.gl.createBuffer();
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, colorBuffer);
this.gl.bufferData(
  this.gl.ARRAY_BUFFER,
  colors,
  this.gl.STATIC_DRAW
);

Pretty much the same deal with the colour buffer; create, bind and apply.

You’ll notice as well that colours are defined as R, G, B, A values in a 0-to-1 range. Note that we use a Float32Array (rather than an integer array) so fractional colour values survive intact.

Tip: You can look up any typical RGB colour and divide each number by 255 to get it into a 0 to 1 range.

E.g. RGB – 100/255, 32/255, 178/255 = 0.39, 0.12, 0.70 (~approximately)
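
As a quick sketch of that tip (toGlChannel is a hypothetical helper name):

// convert an 8-bit RGB(A) channel (0-255) into WebGL's 0-to-1 range
const toGlChannel = (channel: number): number => channel / 255;
console.log([100, 32, 178].map(toGlChannel)); // [0.392..., 0.125..., 0.698...]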

Finally, we can return the position and color buffers back as the result of this function.

return {
  position: positionBuffer,
  color: colorBuffer,
};

The final method should look like this:

initialiseBuffers(): any {
  // Create a buffer for the square's positions.
  const positionBuffer = this.gl.createBuffer();
  // bind the buffer to WebGL and tell it to accept an ARRAY of data
  this.gl.bindBuffer(this.gl.ARRAY_BUFFER, positionBuffer);
  // create an array of positions for the square.
  const positions = new Float32Array([
    1.0,  1.0, 
    -1.0,  1.0, 
    1.0, -1.0, 
    -1.0, -1.0
  ]);
  // set the list of positions into WebGL to build the
  // shape by passing it into bufferData.
  // We tell WebGL that the data supplied is an ARRAY and
  // to handle the data as a statically drawn shape.
  this.gl.bufferData(
    this.gl.ARRAY_BUFFER,
    positions,
    this.gl.STATIC_DRAW
  );
  // Set up the colors for the vertices
  const colors = new Float32Array([
    1.0, 1.0, 1.0, 1.0, // white
    1.0, 0.0, 0.0, 1.0, // red
    0.0, 1.0, 0.0, 1.0, // green
    0.0, 0.0, 1.0, 1.0, // blue
  ]);
  const colorBuffer = this.gl.createBuffer();
  this.gl.bindBuffer(this.gl.ARRAY_BUFFER, colorBuffer);
  this.gl.bufferData(
    this.gl.ARRAY_BUFFER,
    colors,
    this.gl.STATIC_DRAW
  );
  return {
    position: positionBuffer,
    color: colorBuffer,
  };
}

Go back to initialiseWebGLContext(canvas: HTMLCanvasElement): any and add in the call to initialiseBuffers() underneath programInfo like so:

// set up programInfo for buffers
this.programInfo = {
  ...
};
// initialise the buffers to define what we want to draw
this.buffers = this.initialiseBuffers();

Preparing the Scene for Rendering

To prepare the scene for rendering, we need to do the following:

  • Resize the WebGL canvas based on the browser’s size
  • Update the WebGL canvas to handle displaying content based on the browser’s size
  • Bind vertex position data
  • Bind vertex colour data
  • Tell WebGL to use the shader program for rendering
  • Set the vertex shader’s uniform matrices to be in sync with the projection and model-view matrices we’ve configured

The first two methods we create will ensure that all content rendered on screen is correctly positioned, and that the viewing perspective is maintained whenever the browser is resized on the fly.

Let’s create a method called resizeWebGLCanvas()

resizeWebGLCanvas() {
  const width = this.clientCanvas.clientWidth;
  const height = this.clientCanvas.clientHeight;
  if (this.gl.canvas.width !== width || this.gl.canvas.height !== height) {
    this.gl.canvas.width = width;
    this.gl.canvas.height = height;
  }
}

In the method above, we check the client canvas (the HTML canvas element), and if its width and height don’t match the WebGL canvas’s, we update the WebGL canvas so they do.

Next, create the updateWebGLCanvas() method:

updateWebGLCanvas() {
    this.initialiseWebGLCanvas();
    this.aspect = this.clientCanvas.clientWidth / this.clientCanvas.clientHeight;
    this.projectionMatrix = matrix.mat4.create();
    matrix.mat4.perspective(
      this.projectionMatrix,
      this.fieldOfView,
      this.aspect,
      this.zNear,
      this.zFar
    );
    // Set the drawing position to the "identity" point, which is the center of the scene.
    this.modelViewMatrix = matrix.mat4.create();
  }

Here, we make a call to initialiseWebGLCanvas() to ensure that the canvas is in a default state before updating it for rendering.
Next, we set up some variables to configure a perspective projection matrix, which establishes the boundaries for viewing rendered content in the scene (setting up a camera view).

Our field of view is 45 degrees, with a width-to-height ratio taken from the canvas’s current dimensions. We only want to see objects between 0.1 units and 100 units away from the camera. The perspective projection matrix is a special matrix used to simulate the distortion of perspective in a camera.

We ensure that the model-view matrix is set to what is known as an identity matrix. This places the drawing position at the origin, the center of the scene.
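
To see those pieces in isolation, here’s a minimal standalone sketch assuming the gl-matrix package (with a hard-coded 640:480 aspect purely for illustration):

import * as matrix from 'gl-matrix';

const fieldOfView = (45 * Math.PI) / 180;      // 45 degrees in radians
const aspect = 640 / 480;                      // width / height of the viewport
const projectionMatrix = matrix.mat4.create(); // starts out as the identity matrix
matrix.mat4.perspective(projectionMatrix, fieldOfView, aspect, 0.1, 100.0);

const modelViewMatrix = matrix.mat4.create();  // identity: drawing starts at the origin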

Methods to Bind Vertex and Colour Buffers

The next two methods will reference the vertex position and colour buffers we created earlier and tell WebGL how to consume them in order to render the data.

First, create a method to bind vertex positions: bindVertexPosition(programInfo: any, buffers: any).

bindVertexPosition(programInfo: any, buffers: any) {
  const bufferSize = 2;
  const type = this.gl.FLOAT;
  const normalize = false;
  const stride = 0;
  const offset = 0;
  this.gl.bindBuffer(this.gl.ARRAY_BUFFER, buffers.position);
  this.gl.vertexAttribPointer(
    programInfo.attribLocations.vertexPosition,
    bufferSize,
    type,
    normalize,
    stride,
    offset
  );
  this.gl.enableVertexAttribArray(programInfo.attribLocations.vertexPosition);
}

We bind the position buffer we created before and then tell WebGL how it should consume the position buffer.

Note that the bufferSize is 2. We are telling WebGL that the ARRAY_BUFFER we’ve bound needs to be interpreted two elements at a time. We don’t want to normalize the data in any way, so we set normalize to false, and we set the stride and offset to 0 so the buffer is read from start to finish.

Remember earlier we defined the position data like so:

const positions = new Float32Array([
   1.0,  1.0, 
  -1.0,  1.0, 
   1.0, -1.0, 
  -1.0, -1.0
]);

We want WebGL to interpret the array in twos, essentially reading our (x, y) coordinates before proceeding to the next pair in the array.

By doing this, we set the top-right, top-left, bottom-right, and bottom-left corners of the square.

The vertexAttribPointer is set up to be consumed by the vertex shader’s aVertexPosition via the programInfo.attribLocations.vertexPosition we defined earlier.

At the end of the method, we tell WebGL to enable the vertex attribute array for rendering via enableVertexAttribArray(...).
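
For contrast, here’s a hypothetical interleaved layout (not used in this tutorial) that shows what non-zero stride and offset values would mean:

// Hypothetical interleaved layout: [x, y, r, g, b, a] per vertex.
// A 32-bit float is 4 bytes, so one vertex spans 6 * 4 = 24 bytes.
const BYTES_PER_FLOAT = Float32Array.BYTES_PER_ELEMENT; // 4
const stride = 6 * BYTES_PER_FLOAT;       // bytes from one vertex to the next
const positionOffset = 0;                 // x, y start at byte 0 of each vertex
const colorOffset = 2 * BYTES_PER_FLOAT;  // r, g, b, a start right after x, y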

Next, create a method to bind the colour buffer: bindVertexColor(programInfo: any, buffers: any).

bindVertexColor(programInfo: any, buffers: any) {
  const bufferSize = 4;
  const type = this.gl.FLOAT;
  const normalize = false;
  const stride = 0;
  const offset = 0;
  this.gl.bindBuffer(this.gl.ARRAY_BUFFER, buffers.color);
  this.gl.vertexAttribPointer(
    programInfo.attribLocations.vertexColor,
    bufferSize,
    type,
    normalize,
    stride,
    offset
  );
  this.gl.enableVertexAttribArray(programInfo.attribLocations.vertexColor);
}

We do pretty much the same thing to bind the colour buffer as we did with the vertex positions, except that bufferSize is set to 4 instead of 2.

This is because colour data is interpreted in RGBA format: four components per vertex.

Here’s the colours we defined earlier for reference:

const colors = new Float32Array([
  1.0, 1.0, 1.0, 1.0, // white
  1.0, 0.0, 0.0, 1.0, // red
  0.0, 1.0, 0.0, 1.0, // green
  0.0, 0.0, 1.0, 1.0, // blue
]);

Putting it All Together to Prepare the Scene

Let’s now create another method called prepareScene() and tie everything together.

prepareScene() {
  this.resizeWebGLCanvas();
  this.updateWebGLCanvas();
  // move the camera position a bit backwards to a position where 
  // we can observe the content that will be drawn from a distance
  matrix.mat4.translate(
    this.modelViewMatrix, // destination matrix
    this.modelViewMatrix, // matrix to translate
    [0.0, 0.0, -6.0]      // amount to translate
  );
  // tell WebGL how to pull out the positions from the position
  // buffer into the vertexPosition attribute
  this.bindVertexPosition(this.programInfo, this.buffers);
  // tell WebGL how to pull out the colors from the color buffer
  // into the vertexColor attribute.
  this.bindVertexColor(this.programInfo, this.buffers);
  // tell WebGL to use our program when drawing
  this.gl.useProgram(this.programInfo.program);
  // set the shader uniforms
  this.gl.uniformMatrix4fv(
    this.programInfo.uniformLocations.projectionMatrix,
    false,
    this.projectionMatrix
  );
  this.gl.uniformMatrix4fv(
    this.programInfo.uniformLocations.modelViewMatrix,
    false,
    this.modelViewMatrix
  );
}

We call resizeWebGLCanvas() and updateWebGLCanvas() to ensure the HTML canvas and the WebGL canvas are kept in sync.

Next, we set up the model-view matrix by performing a translation on it: we move it six units backwards along the Z-axis, which allows us to observe any content we draw on the scene from a distance.

We then bind the vertex position and colour data via bindVertexPosition and bindVertexColor, and tell WebGL to use the shader program we built earlier.

Finally, we set the projection and model-view uniforms in our vertex shader from the projectionMatrix and modelViewMatrix that are maintained and updated within this service, so the shader and the service stay in sync.

Go back to the initialiseWebGLContext(...) method, update its signature to return WebGLRenderingContext, make the call to prepareScene() at the end of the method, and then return the this.gl context.

The method should now look like this:

initialiseWebGLContext(canvas: HTMLCanvasElement): WebGLRenderingContext {
  // Try to grab the standard context. If it fails, fallback to experimental.
  this._renderingContext =
    canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  // If we don't have a GL context, give up now... only continue if WebGL is available and working...
  if (!this.gl) {
    alert('Unable to initialize WebGL. Your browser may not support it.');
    return;
  }
  this.setWebGLCanvasDimensions(canvas);
  this.initialiseWebGLCanvas();
  // initialise shaders into WebGL
  let shaderProgram = this.initializeShaders();
  // set up programInfo for buffers
  this.programInfo = {
    program: shaderProgram,
    attribLocations: {
      vertexPosition: this.gl.getAttribLocation(
        shaderProgram,
        'aVertexPosition'
      ),
      vertexColor: this.gl.getAttribLocation(shaderProgram, 'aVertexColor'),
    },
    uniformLocations: {
      projectionMatrix: this.gl.getUniformLocation(
        shaderProgram,
        'uProjectionMatrix'
      ),
      modelViewMatrix: this.gl.getUniformLocation(
        shaderProgram,
        'uModelViewMatrix'
      ),
    },
  };
  // initialise the buffers to define what we want to draw
  this.buffers = this.initialiseBuffers();
  // prepare the scene to display content
  this.prepareScene();
  return this.gl
}

Displaying the Square!!

Head over to the scene.component.ts that we created in part 1.

Import the interval function from rxjs:

import { interval } from 'rxjs';

Define the two private class variables:

export class SceneComponent implements OnInit, AfterViewInit {
  ...
  private _60fpsInterval = 16.666666666666666667;
  private gl: WebGLRenderingContext
}

Create a method called drawScene() and implement the following:

private drawScene() {
  // prepare the scene and update the viewport
  this.webglService.updateViewport();
  this.webglService.prepareScene();
  // draw the scene
  const offset = 0;
  const vertexCount = 4;
  this.gl.drawArrays(
    this.gl.TRIANGLE_STRIP,
    offset,
    vertexCount
  );
}

This method does two things:

  • prepare the scene for rendering via prepareScene
  • draw all arrays that have been bound in the gl context.

NOTE: the vertexCount is 4. This matches the four corner vertices we defined for the square; TRIANGLE_STRIP turns them into the two triangles that fill it.

TRIANGLE_STRIP is one of the primitive drawing modes in OpenGL / WebGL: after the first two vertices, each additional vertex forms a triangle with the two before it. OpenGL renders everything as triangles (any surface can be broken down into triangles when rendering objects on screen).
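
Tracing our buffer through TRIANGLE_STRIP makes this concrete (vertex order taken from the positions array we defined earlier):

// Buffer order: TR(1, 1), TL(-1, 1), BR(1, -1), BL(-1, -1).
// TRIANGLE_STRIP forms a triangle from each vertex after the first two:
//   triangle 1: vertices 0, 1, 2 -> TR, TL, BR
//   triangle 2: vertices 1, 2, 3 -> TL, BR, BL
// Two triangles from four vertices = one square.
this.gl.drawArrays(this.gl.TRIANGLE_STRIP, 0, 4);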

Finally, to call drawScene() and animate it, we need to define a render loop and then call the method as desired.

In ngAfterViewInit() update the method with the following:

ngAfterViewInit(): void {
  if (!this.canvas) {
    alert('canvas not supplied! cannot bind WebGL context!');
    return;
  }
  this.gl = this.webglService.initialiseWebGLContext(
    this.canvas.nativeElement
  );
  // Set up to draw the scene periodically.
  const drawSceneInterval = interval(this._60fpsInterval);
  drawSceneInterval.subscribe(() => {
    this.drawScene();
  });
}

We use interval from rxjs to create a simple render loop, calling drawScene() within the interval’s subscription. Of course, you could use the setInterval JS function to achieve the same thing, but it’s fun to use rxjs conventions alongside rendering content in WebGL.

Run npm start and FINALLY, you can now see a SQUARE on the screen! Yay!

You can even resize the browser window and see that the rendering context updates and positions itself correctly!!

Congratulations! You made it! That was a lot of work… perhaps too much work just to set up something this simple. There are plenty of third-party libraries that reduce the boilerplate needed to create a scene and start rendering objects, but the aim of this tutorial is to show you, in as much detail as possible, what needs to be done to set up and compile shaders in WebGL using Angular.

That’s it for part 2! In part 3, we’ll look into animating and displaying a spinning 3D cube!!!

Stay tuned!

As usual, the source code for this tutorial is available at https://gitlab.com/MikeHewett/intro-webgl-part-2.git

Useful VS Code Extensions for GLSL

VS Code has some extensions that help to highlight and visualise GLSL code.
You can search up the following extensions in the Marketplace:

slevesque.shader
raczzalan.webgl-glsl-editor
circledev.glsl-canvas
boyswan.glsl-literal


Intro to WebGL Using Angular – How to Set Up a Scene (Part 1)

Overview

WebGL has to be one of the most under-used JavaScript APIs within modern web browsers.

It offers rendering of interactive 2D and 3D graphics and is fully integrated with other web standards, allowing GPU-accelerated use of physics and image processing effects as part of the web page canvas (Wikipedia, 2020).

In this article, we’re going to set up WebGL within a typical Angular app by utilising the HTML5 canvas element.

Prerequisites

Before starting, it’s worthwhile ensuring your system is set up with the following:

  • Node.js is installed
  • You have set up a new or existing Angular app
  • You are using a modern web browser (Chrome 56+, Firefox 51+, Opera 43+, Edge 10240+)

WebGL Fundamentals

There’s honestly a lot to take in regarding the fundamentals of WebGL (and, more specifically, OpenGL). It means you’ll need some basic understanding of linear algebra and 2D/3D rendering in general. WebGL Fundamentals does a great job of providing an introduction, and I’ll be referencing their documentation as we step through setting up our Angular app to use WebGL.

Before going any further, it’s important that you understand the following at a minimum.

WebGL is not a 3D API. You can’t just use it to instantly render objects and models and get them to do some awesome magic.

WebGL is just a rasterization engine. It draws points, lines and triangles based on the code you supply.

If you want WebGL to do anything else, it’s up to you to write code that uses points, lines and triangles to accomplish the task you want.

WebGL runs on the GPU and requires that you provide code that runs on the GPU.
The code that we need to provide is in the form of pairs of functions.

They are known as:

  • a vertex shader
    • responsible for computing vertex positions – based on the positions, WebGL can then rasterize primitives including points, lines, or triangles.
  • a fragment shader
    • when primitives are being rasterized, WebGL calls the fragment shader to compute a colour for each pixel of the primitive that’s currently being drawn.

Each shader is written in GLSL which is a strictly typed C/C++ like language.
When a vertex and fragment shader are combined, they’re collectively known as a program.

Nearly the entire WebGL API is about setting up state for these pairs of functions to run. For each thing you want to draw, you set up a bunch of state, then execute the pair by calling gl.drawArrays or gl.drawElements, which runs your shaders on the GPU.

Any data you want those functions to have access to must be provided to the GPU. There are four ways a shader can receive data; we’ll preview the matching API calls in a short sketch after this list.

  • Attributes and buffers
    • Buffers are arrays of binary data you upload to the GPU. Usually buffers contain things like positions, normals, texture coordinates, vertex colours, etc., although you’re free to put anything you want in them.
    • Attributes are used to specify how to pull data out of your buffers and provide it to your vertex shader. For example, you might put positions in a buffer as three 32-bit floats per position. You would tell a particular attribute which buffer to pull the positions out of, what type of data it should pull out (3-component 32-bit floating point numbers), what offset in the buffer the positions start at, and how many bytes to skip from one position to the next.
    • Buffers are not random access. Instead, a vertex shader is executed a specified number of times. Each time it’s executed, the next value from each specified buffer is pulled out and assigned to an attribute.
  • Uniforms
    • Uniforms are effectively global variables you set before you execute your shader program.
  • Textures
    • Textures are arrays of data you can randomly access in your shader program. The most common thing to put in a texture is image data but textures are just data and can just as easily contain something other than colours.
  • Varyings
    • Varyings are a way for a vertex shader to pass data to a fragment shader. Depending on what is being rendered, points, lines, or triangles, the values set on a varying by a vertex shader will be interpolated while executing the fragment shader.

(WebGL Fundamentals, 2015).

I’m glossing over a lot of technical detail here, but if you really want to know more, head over to WebGL Fundamentals lessons for more info.
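
To preview how each of these maps onto actual API calls (a rough sketch with placeholder names; we’ll build all of this properly in parts 2 and 3):

// Sketch only: assumes an existing rendering context and compiled shader program.
function illustrateDataPaths(gl: WebGLRenderingContext, program: WebGLProgram) {
  // Attributes and buffers: upload binary data, then describe how to read it
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([0, 0, 1, 0, 0, 1]), gl.STATIC_DRAW);
  const aPosition = gl.getAttribLocation(program, 'aVertexPosition');
  gl.vertexAttribPointer(aPosition, 2, gl.FLOAT, false, 0, 0);
  gl.enableVertexAttribArray(aPosition);

  // Uniforms: global values set before the draw call
  // (a real app would pass an actual matrix rather than zeroes)
  const uMatrix = gl.getUniformLocation(program, 'uModelViewMatrix');
  gl.uniformMatrix4fv(uMatrix, false, new Float32Array(16));

  // Textures: randomly accessible data arrays, uploaded with gl.texImage2D.
  // Varyings are declared in GLSL itself (e.g. "varying lowp vec4 vColor;") and
  // pass interpolated values from the vertex to the fragment shader; they are
  // never set directly from JavaScript.
}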

Setting up a playground

Let’s set up a playground so we have something that we can use in order to continue setting up WebGL.

First, create a component. You can create one by executing the following command within your Angular root (src) directory. I’ve gone ahead and named mine scene.

E.g. ng generate component scene

PS X:\...\toucan-webgl> ng generate component scene
CREATE src/app/scene/scene.component.html (20 bytes)
CREATE src/app/scene/scene.component.spec.ts (619 bytes)
CREATE src/app/scene/scene.component.ts (272 bytes)
CREATE src/app/scene/scene.component.scss (0 bytes)
PS X:\...\toucan-webgl>

Let’s also create a service for the component, and call it WebGL.

E.g. ng generate service scene/services/webGL

PS X:\...\toucan-webgl> ng generate service scene/services/webGL
CREATE src/app/scene/services/web-gl.service.spec.ts (352 bytes)
CREATE src/app/scene/services/web-gl.service.ts (134 bytes)
PS X:\...\toucan-webgl>

If you’re using a new Angular app, hopefully you’ve already configured it to use App Routing. If you haven’t, follow the next couple of steps.

ng generate module app-routing --flat --module=app

You’ll now have an app-routing.module.ts file, if you haven’t got one already.

Update the contents of the file with the following:

import { NgModule } from "@angular/core";
import { Routes, RouterModule } from "@angular/router";
import { SceneComponent } from "./scene/scene.component";
const routes: Routes = [{ path: "", component: SceneComponent }];
@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule],
})
export class AppRoutingModule {}

This will ensure that on app load, it’ll display the SceneComponent first.

Next, add the WebGLService to the SceneComponent‘s constructor like so:

import { Component, OnInit } from "@angular/core";
import { WebGLService } from "./services/web-gl.service";
@Component({
  selector: "app-scene",
  templateUrl: "./scene.component.html",
  styleUrls: ["./scene.component.scss"],
})
export class SceneComponent implements OnInit {
  // *** Update constructor here ***
  constructor(private webglService: WebGLService) {}
  ngOnInit(): void {}
}

Finally, run ng serve and check that the Angular app is running and displaying the SceneComponent.

Now, let’s move onto adding a WebGL context.

Setting up the WebGL context

Setting up the WebGL context is a little involved, but once we get the foundation going we can proceed to get something on the screen.

Let’s start by opening up scene.component.html and adding an HTML5 canvas element.

<div class="scene">
  <canvas #sceneCanvas>
    Your browser doesn't appear to support the
    <code><canvas></code> element.
  </canvas>
</div>

Open up scene.component.scss (or equivalent) and add in the following styles:

.scene {
  height: 100%;
  width: 100%;
}
.scene canvas {
  height: 100%;
  width: 100%;
  border-style: solid;
  border-width: 1px;
  border-color: black;
}

This CSS makes sure the canvas element extends to the size of the browser window. I added some border styling so you can explicitly see the canvas for yourself.

TIP: If you want, you can also update the global styles.scss so all content expands to the height of the window.

styles.scss

/* You can add global styles to this file, and also import other style files */
html,
body {
  height: 99%;
}

We’ll now embark on doing the following:

  1. Resolving the canvas element in TypeScript via the #sceneCanvas template reference
  2. Binding the canvas element to a WebGL rendering context
  3. Initialize the WebGL rendering canvas

Resolving the canvas element

Open scene.component.ts and add the following property (you’ll also need to import ElementRef and ViewChild from @angular/core; ElementRef is what gives us access to nativeElement later):

@ViewChild('sceneCanvas') private canvas: ElementRef<HTMLCanvasElement>;

Update the SceneComponent class to implement AfterViewInit; we’ll need this lifecycle hook to continue setting up the WebGL canvas.

Add in the following guard to the ngAfterViewInit method to ensure that we actually have the canvas element before attempting to bind it:

if (!this.canvas) {
  alert("canvas not supplied! cannot bind WebGL context!");
  return;
}

NOTE: If the alert is hit, it’s because the template reference ID you’re using doesn’t match between the HTML and the TS class. You need to ensure they match.

Your component implementation should now look like this:

import { AfterViewInit, Component, ElementRef, OnInit, ViewChild } from "@angular/core";
import { WebGLService } from "./services/web-gl.service";
@Component({
  selector: "app-scene",
  templateUrl: "./scene.component.html",
  styleUrls: ["./scene.component.scss"],
})
export class SceneComponent implements OnInit, AfterViewInit {
  @ViewChild("sceneCanvas") private canvas: HTMLCanvasElement;
  constructor(private webglService: WebGLService) {}
  ngAfterViewInit(): void {
    if (!this.canvas) {
      alert("canvas not supplied! cannot bind WebGL context!");
      return;
    }
  }
  ngOnInit(): void {}
}

Binding the canvas element to a WebGL rendering context

Open up the web-gl.service.ts file.

Create a method called initialiseWebGLContext with a parameter canvas: HTMLCanvasElement.

initialiseWebGLContext(canvas: HTMLCanvasElement) {
}

Go back to scene.component.ts and add in the following line after the guard check in ngAfterViewInit.

ngAfterViewInit(): void {
  if (!this.canvas) {
      alert('canvas not supplied! cannot bind WebGL context!');
      return;
  }
  this.webglService.initialiseWebGLContext(this.canvas.nativeElement);
}

Now, back in web-gl.service.ts, let’s retrieve a WebGL context from the canvas’s native element and expose it through a property that we’ll call gl.

private _renderingContext: RenderingContext;
private get gl(): WebGLRenderingContext {
  return this._renderingContext as WebGLRenderingContext;
}
constructor() {}
initialiseWebGLContext(canvas: HTMLCanvasElement) {
  // Try to grab the standard context. If it fails, fallback to experimental.
  this._renderingContext = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  // If we don't have a GL context, give up now... only continue if WebGL is available and working...
  if (!this.gl) {
      alert('Unable to initialize WebGL. Your browser may not support it.');
      return;
  }
}

Once we’ve retrieved the WebGLRenderingContext, we can then set the WebGL canvas’s height and width, and then finally proceed to initialise the WebGL canvas.

Let’s add two methods that do what I described above:

setWebGLCanvasDimensions(canvas: HTMLCanvasElement) {
  // set width and height based on canvas width and height - good practice to use clientWidth and clientHeight
  this.gl.canvas.width = canvas.clientWidth;
  this.gl.canvas.height = canvas.clientHeight;
}
initialiseWebGLCanvas() {
  // Set clear colour to black, fully opaque
  this.gl.clearColor(0.0, 0.0, 0.0, 1.0);
  // Enable depth testing
  this.gl.enable(this.gl.DEPTH_TEST);
  // Near things obscure far things
  this.gl.depthFunc(this.gl.LEQUAL);
  // Clear the colour as well as the depth buffer.
  this.gl.clear(this.gl.COLOR_BUFFER_BIT | this.gl.DEPTH_BUFFER_BIT);
}

Now finally call them at the end of the initialiseWebGLContext method.

initialiseWebGLContext(canvas: HTMLCanvasElement) {
  // Try to grab the standard context. If it fails, fallback to experimental.
  this._renderingContext =
    canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  // If we don't have a GL context, give up now... only continue if WebGL is available and working...
  if (!this.gl) {
    alert('Unable to initialize WebGL. Your browser may not support it.');
    return;
  }
  // *** set width, height and initialise the webgl canvas ***
  this.setWebGLCanvasDimensions(canvas);
  this.initialiseWebGLCanvas();
}

Run the app again; you should now see that the canvas is entirely black.

This shows that we’ve successfully initialised the WebGL context.

That’s it for part 1!

Next: Introduction to WebGL using Angular – Part 2 – Setting up shaders and a triangle

In part 2, we’ll proceed to add in shaders and start setting up content to render on screen!

Stay tuned!

The source code for this tutorial is available at https://gitlab.com/MikeHewett/intro-webgl-part-1.git
