This tutorial assumes that you’ve completed part 2 (setting up and compiling shaders).
Thinking in 3D with Angular & WebGL
So we’ve got a square, cool. Do you know what’s even better? A CUBE! Let’s build one!
We’ve already covered a lot of the fundamentals of building, binding and rendering simple geometry in WebGL. We’ll now extend what we’ve built so far to handle 3D.
The general process is simple: add additional vertex positions and colours to our existing buffers to visualise a 3D scene instead of a 2D one.
This means we need to add a Z-axis component to our vertex positions.
How to define positions for a 3D cube
In order to do this, we need to think about how to define these points.
First, how many faces are there on a cube? There’s a total of six faces. Therefore, in order to build a 3D cube, we need to define vertex positions for all six cube faces. WebGL can then render the cube as expected.
E.g.
// illustrates points in 2D space
const positions2D = new Float32Array([
// front face
1.0, 1.0,
-1.0, 1.0,
1.0, -1.0,
-1.0, -1.0
]);
// illustrates points in 3D space
const positions3D = new Float32Array([
// Front face
-1.0, -1.0, 1.0,
1.0, -1.0, 1.0,
-1.0, 1.0, 1.0,
1.0, 1.0, 1.0,
-1.0, 1.0, 1.0,
1.0, -1.0, 1.0,
// Back face
-1.0, -1.0, -1.0,
-1.0, 1.0, -1.0,
1.0, 1.0, -1.0,
1.0, 1.0, -1.0,
1.0, -1.0, -1.0,
-1.0, -1.0, -1.0,
// Top face
-1.0, 1.0, -1.0,
-1.0, 1.0, 1.0,
1.0, 1.0, 1.0,
1.0, 1.0, 1.0,
1.0, 1.0, -1.0,
-1.0, 1.0, -1.0,
// Bottom face
-1.0, -1.0, -1.0,
1.0, -1.0, -1.0,
1.0, -1.0, 1.0,
1.0, -1.0, 1.0,
-1.0, -1.0, 1.0,
-1.0, -1.0, -1.0,
// Right face
1.0, -1.0, -1.0,
1.0, 1.0, -1.0,
1.0, 1.0, 1.0,
1.0, 1.0, 1.0,
1.0, -1.0, 1.0,
1.0, -1.0, -1.0,
// Left face
-1.0, -1.0, -1.0,
-1.0, -1.0, 1.0,
-1.0, 1.0, 1.0,
-1.0, 1.0, 1.0,
-1.0, 1.0, -1.0,
-1.0, -1.0, -1.0,
]);
In the array definition above, you can see that we’ve defined each point for each face of the cube we want rendered.
For each face we’ve defined six positions (two triangles of three vertices each), and each position is represented with an x, y, z coordinate. A total of 36 vertices have now been defined. Defining these points allows WebGL to build our cube in 3D space.
How to define colours for a 3D cube
Lets now update the way we define colours for our cube by implementing the code below in initialiseBuffers():
// Set up the colors for the vertices
const faceColors = [
[1.0, 1.0, 1.0, 1.0], // Front face: white
[1.0, 0.0, 0.0, 1.0], // Back face: red
[0.0, 1.0, 0.0, 1.0], // Top face: green
[0.0, 0.0, 1.0, 1.0], // Bottom face: blue
[1.0, 1.0, 0.0, 1.0], // Right face: yellow
[1.0, 0.0, 1.0, 1.0], // Left face: purple
];
// Convert the array of colors into a table for all the vertices.
let colors = [];
for (let j = 0; j < faceColors.length; ++j) {
const c = faceColors[j];
// Repeat each color six times for the three vertices of each triangle
// since we're rendering two triangles for each cube face
colors = colors.concat(c, c, c, c, c, c);
}
In the code above, we define RGBA colours for each cube face.
The for loop we define iterates through the array of face colours and builds a table of array data that applies colour values for all of the cube points.
The usage of colors = colors.concat(c, c, c, c, c, c) might look confusing at first, but essentially all it’s doing is appending six copies of the faceColor row we’ve retrieved, one for each vertex of that face.
It quickly builds up a buffer of colours for the two triangles that make up one face of the cube (e.g. Triangle 1 = TL, BL, BR and Triangle 2 = TL, TR, BR), adds the result to the colors array, and then continues on to the next faceColor item.
E.g.
TL _ _ _ _ _ TR
 | \         |
 |  \    T2  |     T1 = TL, BL, BR
 |   \       |     T2 = TL, TR, BR
 | T1 \      |     6 points that we need to color in
 |_ _ _ \ _ _|
BL            BR
By the end of this process, we have a table of array data which represents the intended colours for every point we desire.
If we were to manually type this out, the result would look like this:
const colors = new Float32Array([
1.0, 1.0, 1.0, 1.0, // Front face: white
1.0, 1.0, 1.0, 1.0, // Front face: white
1.0, 1.0, 1.0, 1.0, // Front face: white
1.0, 1.0, 1.0, 1.0, // Front face: white
1.0, 1.0, 1.0, 1.0, // Front face: white
1.0, 1.0, 1.0, 1.0, // Front face: white
1.0, 0.0, 0.0, 1.0, // Back face: red
1.0, 0.0, 0.0, 1.0, // Back face: red
1.0, 0.0, 0.0, 1.0, // Back face: red
1.0, 0.0, 0.0, 1.0, // Back face: red
1.0, 0.0, 0.0, 1.0, // Back face: red
1.0, 0.0, 0.0, 1.0, // Back face: red
0.0, 1.0, 0.0, 1.0, // Top face: green
0.0, 1.0, 0.0, 1.0, // Top face: green
0.0, 1.0, 0.0, 1.0, // Top face: green
0.0, 1.0, 0.0, 1.0, // Top face: green
0.0, 1.0, 0.0, 1.0, // Top face: green
0.0, 1.0, 0.0, 1.0, // Top face: green
0.0, 0.0, 1.0, 1.0, // Bottom face: blue
0.0, 0.0, 1.0, 1.0, // Bottom face: blue
0.0, 0.0, 1.0, 1.0, // Bottom face: blue
0.0, 0.0, 1.0, 1.0, // Bottom face: blue
0.0, 0.0, 1.0, 1.0, // Bottom face: blue
0.0, 0.0, 1.0, 1.0, // Bottom face: blue
1.0, 1.0, 0.0, 1.0, // Right face: yellow
1.0, 1.0, 0.0, 1.0, // Right face: yellow
1.0, 1.0, 0.0, 1.0, // Right face: yellow
1.0, 1.0, 0.0, 1.0, // Right face: yellow
1.0, 1.0, 0.0, 1.0, // Right face: yellow
1.0, 1.0, 0.0, 1.0, // Right face: yellow
1.0, 0.0, 1.0, 1.0, // Left face: purple
1.0, 0.0, 1.0, 1.0, // Left face: purple
1.0, 0.0, 1.0, 1.0, // Left face: purple
1.0, 0.0, 1.0, 1.0, // Left face: purple
1.0, 0.0, 1.0, 1.0, // Left face: purple
1.0, 0.0, 1.0, 1.0 // Left face: purple
]);
This would be pretty redundant to write out manually, so we use a for loop to help keep things simple and achieve our goal.
Update bindVertexPosition()
Let’s go back to our bindVertexPosition() function and update bufferSize from 2 to 3 in order to account for the Z-axis we’re now including as part of our position.
This small update lets WebGL know to now pull 3 items per vertex attribute position for rendering.
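For reference, the relevant part of bindVertexPosition() might now look something like this, assuming the method follows the shape sketched in part 2:
// pull out 3 values (x, y, z) per vertex instead of 2
const bufferSize = 3;
this.gl.vertexAttribPointer(
programInfo.attribLocations.vertexPosition,
bufferSize,
this.gl.FLOAT, // the data is 32-bit floats
false, // don't normalize
0, // stride: use the natural packing
0 // offset: start at the beginning of the buffer
);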
Cleaning up web-gl.service.ts
Create a new method in web-gl.service.ts and call it formatScene().
Add in the following:
/**
* Formats the scene for rendering (by resizing the WebGL canvas and setting the defaults for WebGL drawing).
*/
public formatScene() {
this.updateWebGLCanvas();
this.resizeWebGLCanvas();
this.updateViewport();
}
Create a getter for the modelViewMatrix property we have in the service:
We’ll need to reference this matrix when we want to render our cube and apply some animation / rotational and translation effects to it.
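A simple accessor along these lines will do:
/**
* Gets the model-view matrix maintained by the service.
*/
public getModelViewMatrix() {
return this.modelViewMatrix;
}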
Go to the prepareScene() method and update the entire implementation with the following:
/**
* Prepares the WebGL context to render content.
*/
prepareScene() {
// tell WebGL how to pull out the positions from the position
// buffer into the vertexPosition attribute
this.bindVertexPosition(this.programInfo, this.buffers);
// tell WebGL how to pull out the colors from the color buffer
// into the vertexColor attribute.
this.bindVertexColor(this.programInfo, this.buffers);
// tell WebGL to use our program when drawing
this.gl.useProgram(this.programInfo.program);
// set the shader uniforms
this.gl.uniformMatrix4fv(
this.programInfo.uniformLocations.projectionMatrix,
false,
this.projectionMatrix
);
this.gl.uniformMatrix4fv(
this.programInfo.uniformLocations.modelViewMatrix,
false,
this.modelViewMatrix
);
}
Essentially we’ve just removed a few lines that we previously had where we were resizing and updating the WebGL Canvas within this method, and then applying the matrix.mat4.translate(...) operation to move the modelViewMatrix “backwards” so we could view the rendered square.
We’re moving that old code over to scene.component.ts so the component is responsible for the matrix translate, rotate, and scale operations, instead of defining them here in the service.
Updating drawScene() in scene.component.ts
Let’s update drawScene() in scene.component.ts with a bit of code to now render out our updated buffer data.
Add an import to gl-matrix at the top of the file.
import * as matrix from 'gl-matrix';
Add these two variables to the SceneComponent class underneath the private gl: WebGLRenderingContext definition: e.g.
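For example, a rotation value and a time accumulator, both assumed to start at zero:
// total rotation (in radians) to apply to the cube this frame
private cubeRotation = 0;
// accumulated time used to drive the rotation
private deltaTime = 0;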
Great, let’s update ngAfterViewInit(): void with the following:
ngAfterViewInit(): void {
if (!this.canvas) {
alert('canvas not supplied! cannot bind WebGL context!');
return;
}
this.gl = this.webglService.initialiseWebGLContext(
this.canvas.nativeElement
);
// Set up to draw the scene periodically via deltaTime.
const milliseconds = 0.001;
this.deltaTime = this._60fpsInterval * milliseconds;
const drawSceneInterval = interval(this._60fpsInterval);
drawSceneInterval.subscribe(() => {
this.drawScene();
this.deltaTime = this.deltaTime + (this._60fpsInterval * milliseconds);
});
}
We’ve added a little bit of code here which converts the 60fps frame interval from milliseconds to seconds (by multiplying it by 0.001) and increments deltaTime by that amount each time we render a frame.
We then assign deltaTime to the cubeRotation variable to specify the amount of rotation, in radians, that we want applied to the cube each time a frame is rendered.
Update drawScene() with the following:
/**
* Draws the scene
*/
private drawScene() {
// prepare the scene and update the viewport
this.webglService.formatScene();
// draw the scene
const offset = 0;
// 2 triangles, 3 vertices, 6 faces
const vertexCount = 2 * 3 * 6;
// translate and rotate the model-view matrix to display the cube
const mvm = this.webglService.getModelViewMatrix();
matrix.mat4.translate(
mvm, // destination matrix
mvm, // matrix to translate
[0.0, 0.0, -6.0] // amount to translate
);
matrix.mat4.rotate(
mvm, // destination matrix
mvm, // matrix to rotate
this.cubeRotation, // amount to rotate in radians
[1, 1, 1] // rotate around X, Y, Z axis
);
this.webglService.prepareScene();
this.gl.drawArrays(
this.gl.TRIANGLES,
offset,
vertexCount
);
// rotate the cube
this.cubeRotation = this.deltaTime;
}
Notice that we’re now calling webglService.formatScene() to set up and update the viewport for rendering. We’ve also updated the vertexCount variable that we had hardcoded to 4 in the last tutorial so it reflects what is now being rendered on screen: vertexCount = 2 * 3 * 6.
2 represents the number of triangles we’re rendering per cube face, 3 represents the number of vertices for each triangle, and 6 represents the number of cube faces we’re rendering. This gives a total of 36 vertices, which matches the number of vertex positions we defined in the positions array in initialiseBuffers().
In the next bit of code, we retrieve the modelViewMatrix and translate it along the Z-axis to push the rendered cube backwards so we can see it, and we then apply a rotation to it based on cubeRotation, which is set to the updated deltaTime once each frame has been drawn.
Finally, we call webglService.prepareScene() to bind all vertex and color buffers and then make a call to gl.drawArrays(this.gl.TRIANGLES, offset, vertexCount) to render the cube on screen!
If you did everything correctly, you should now see the following when you run npm start on the solution:
The cube
You’ve now successfully built a 3-dimensional cube in WebGL using Angular!
In part 4 we’ll look at indexed vertices and adding textures to our cube in WebGL using Angular!
Prerequisites for Setting up and Compiling Shaders
This tutorial assumes that you’ve completed part 1 (setting up a scene).
Setting up Shaders
At the end of the tutorial in part 1, I mentioned that we were going to set up shaders and a triangle in WebGL using Angular. However, we’re not going to create a triangle; we’re going to create a SQUARE instead!
In this part of the tutorial, we’re going to create shaders and compile them into our framework as part of the loading cycle for rendering content to screen.
WebGL requires two shaders each time you wish to draw something to screen. I mentioned briefly in part 1 what vertex and fragment shaders were and what they do, but for clarity here’s the description again:
vertex shader
responsible for computing vertex positions – based on the positions, WebGL can then rasterize primitives including points, lines, or triangles.
fragment shader
when primitives are being rasterized, WebGL calls the fragment shader to compute a colour for each pixel of the primitive that’s currently being drawn.
There’s a fair bit of technical reading in regards to how a vertex and fragment shader go about their business, and how they function in unison to rasterize and colour objects on screen.
One thing to know is that both shaders are written in the OpenGL Shading Language (GLSL). It has features that aren’t common in JavaScript and that are specialised for the math commonly needed to rasterise graphics.
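Create a file called toucan-fragment-shader.glsl in the assets folder of the project. A minimal fragment shader for our purposes simply outputs the colour handed to it by the vertex shader:
varying lowp vec4 vColor;
void main(void) {
// use the interpolated colour passed in from the vertex shader
gl_FragColor = vColor;
}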
The above code assigns a color to gl_FragColor from vColor to be presented on screen. vColor is assigned a value in the vertex shader below. We will go more in-depth on how this process occurs soon.
Populate toucan-vertex-shader.glsl with the following:
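A minimal version, using the attribute and uniform names that our program info object will look up later, is:
attribute vec4 aVertexPosition;
attribute vec4 aVertexColor;
uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;
varying lowp vec4 vColor;
void main(void) {
// project the vertex position into clip space
gl_Position = uProjectionMatrix * uModelViewMatrix * aVertexPosition;
// pass the vertex colour through to the fragment shader
vColor = aVertexColor;
}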
The above code computes a gl_Position by multiplying the projection matrix, model view matrix and the current vertex’s position. It also assigns vColor a color from aVertexColor which is computed from our app as part of rendering. Again, more on this process later.
Loading the shaders in Typescript
We now have two .glsl files that we need to load as strings into our Angular app, and we need to install a few packages to enable us to do this.
Since we’re using an Angular project, we need to extend the Angular CLI’s existing webpack configuration and add a loader which will enable us to compile and load GLSL shaders on demand.
Install the packages below:
npm i ts-loader --save-dev
npm i ts-shader-loader --save-dev
npm i @angular-builders/custom-webpack --save-dev
ts-loader is the TypeScript loader for webpack. ts-shader-loader is a GLSL shader loader for webpack. @angular-builders/custom-webpack is a package that allows customising the build configuration without ejecting the webpack configuration.
Once you’ve downloaded these packages into your project, you’ll need to do the following:
add a glsl.d.ts file to the src folder of the project and populate it with the following code:
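A declaration along these lines (matching the default imports we’ll use shortly) does the trick:
declare module '*.glsl' {
const value: string;
export default value;
}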
This declaration will identify all .glsl files as a module where the value exported is the file itself, as a string.
This means that we can now do the following in our project:
import fragmentShaderSrc from '../../../assets/toucan-fragment-shader.glsl';
import vertexShaderSrc from '../../../assets/toucan-vertex-shader.glsl';
And the variables fragmentShaderSrc and vertexShaderSrc are immediately available as strings.
Next, create a webpack.config.js file at the root directory of the solution.
e.g.
We need this additional webpack.config.js to augment the underlying webpack configuration that the Angular CLI uses, so that .glsl files are loaded correctly.
Populate the webpack.config.js file with the following:
module.exports = {
module: {
rules: [
// all files with a `.ts` or `.tsx` extension will be handled by `ts-loader`
{ test: /\.tsx?$/, loader: "ts-loader" },
{ test: /\.(glsl|vs|fs)$/, loader: "ts-shader-loader" }
]
}
};
The code above includes usage of ts-loader and ts-shader-loader to identify and use the appropriate loader based on the file type we’re dealing with.
Thus allowing us to compile and use glsl as import statements, as described earlier.
We still need to do one more thing: update the angular.json configuration to make use of our extra webpack.config.js file.
Open angular.json, navigate to the "serve" definition of the config file, and update it to use the @angular-builders/custom-webpack builders so that our webpack.config.js is picked up.
Great, we can now load shader scripts into our app using TypeScript, which Angular will be able to compile successfully. Now it’s time to load our shaders into memory so they can be associated with our WebGL context.
We need to do a few things first:
Determine if the shader type we’re loading is supported
Load the shaders into memory
Check to see if the shaders were compiled/loaded successfully
Finally, create a WebGLProgram, associate the shaders to it, and return the result.
For step 1, create a method called determineShaderType(shaderMimeType: string): number. Within this method, we check whether the type we supply matches the known MIME types for a vertex or fragment shader, like so:
private determineShaderType(shaderMimeType: string): number {
if (shaderMimeType) {
if (shaderMimeType === 'x-shader/x-vertex') {
return this.gl.VERTEX_SHADER;
} else if (shaderMimeType === 'x-shader/x-fragment') {
return this.gl.FRAGMENT_SHADER;
} else {
console.log('Error: could not determine the shader type');
}
}
return -1;
}
For step 2, create a method called loadShader(shaderSource: string, shaderType: string): WebGLShader. We’ll create a shader based on the shader type (using the code determined from step 1). We’ll take the shader source code, load it into the shader and compile it. Once it’s compiled, we run a check to see if it was successful and return the result. e.g.
private loadShader(shaderSource: string, shaderType: string): WebGLShader {
const shaderTypeAsNumber = this.determineShaderType(shaderType);
if (shaderTypeAsNumber < 0) {
return null;
}
// Create the gl shader
const glShader = this.gl.createShader(shaderTypeAsNumber);
// Load the source into the shader
this.gl.shaderSource(glShader, shaderSource);
// Compile the shaders
this.gl.compileShader(glShader);
// Check the compile status and return the shader if compilation succeeded
return this.checkCompiledShader(glShader) ? glShader : null;
}
For step 3, create a method called checkCompiledShader(glShader: WebGLShader | null): boolean. This checks that we actually have a shader instance and that it compiled successfully. If the shader is missing or compilation failed, it logs the compile error from the shader’s info log, deletes the shader, and returns false; otherwise it returns true. e.g.
private checkCompiledShader(glShader: WebGLShader | null): boolean {
if (!glShader) {
// no shader instance was created at all
console.log("couldn't create the shader");
return false;
}
// check the compile status of the shader
const compiled = this.gl.getShaderParameter(glShader, this.gl.COMPILE_STATUS);
if (!compiled) {
// shader failed to compile, get the last error
const lastError = this.gl.getShaderInfoLog(glShader);
console.log("couldn't compile the shader due to: " + lastError);
this.gl.deleteShader(glShader);
return false;
}
return true;
}
For step 4, create a method called initializeShaders(): WebGLProgram.
We’ll use this method to do the following:
Create a WebGLProgram
Compile the vertex and fragment shader scripts we defined earlier
Attach the compiled vertex and fragment shaders to the WebGLProgram using our WebGLContext
Link our WebGLContext to the WebGLProgram
Do a check to ensure that the shaders have been loaded successfully
Return the resultant WebGLProgram
e.g.
initializeShaders(): WebGLProgram {
// 1. Create the shader program
let shaderProgram = this.gl.createProgram();
// 2. compile the shaders
const compiledShaders = [];
let fragmentShader = this.loadShader(
fragmentShaderSrc,
'x-shader/x-fragment'
);
let vertexShader = this.loadShader(
vertexShaderSrc,
'x-shader/x-vertex'
);
compiledShaders.push(fragmentShader);
compiledShaders.push(vertexShader);
// 3. attach the shaders to the shader program using our WebGLContext
if (compiledShaders && compiledShaders.length > 0) {
for (let i = 0; i < compiledShaders.length; i++) {
const compiledShader = compiledShaders[i];
if (compiledShader) {
this.gl.attachShader(shaderProgram, compiledShader);
}
}
}
// 4. link the shader program to our gl context
this.gl.linkProgram(shaderProgram);
// 5. check if everything went ok
if (!this.gl.getProgramParameter(shaderProgram, this.gl.LINK_STATUS)) {
console.log(
'Unable to initialize the shader program: ' +
this.gl.getProgramInfoLog(shaderProgram)
);
}
// 6. return shader
return shaderProgram;
}
Finally, back in initialiseWebGLContext(canvas: HTMLCanvasElement): any, add the call to initializeShaders() at the end of it.
It should now look like this:
initialiseWebGLContext(canvas: HTMLCanvasElement): any {
// Try to grab the standard context. If it fails, fallback to experimental.
this._renderingContext =
canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
// If we don't have a GL context, give up now... only continue if WebGL is available and working...
if (!this.gl) {
alert('Unable to initialize WebGL. Your browser may not support it.');
return;
}
this.setWebGLCanvasDimensions(canvas);
this.initialiseWebGLCanvas();
// initialise shaders into WebGL
let shaderProgram = this.initializeShaders();
}
Creating ProgramInfo for Shaders
Yay, we’ve managed to compile and initialise shaders within WebGL. But we’re still only halfway there when it comes to actually getting something rendering on screen.
We’ve currently defined a means of displaying and colouring content, but we haven’t created the content to render, nor have we created the means to bind and supply the necessary info to our GPU to actually render the content.
The next step to getting something displaying on screen is to create an object which will contain a definition of our shaderProgram and reference the shader information we’ve exposed in our .glsl files (attribs and uniforms).
This object is typically called ProgramInfo and describes the shader program to use, and the attribute and uniform locations that we want our shader program to be aware of when rendering content.
First, define the following variables at the top of the WebGLService class so we can reference them throughout the article:
/**
* Gets the {@link gl.canvas} as a {@link Element} client.
*/
private get clientCanvas(): Element {
return this.gl.canvas as Element
}
private fieldOfView = (45 * Math.PI) / 180; // in radians
private aspect = 1;
private zNear = 0.1;
private zFar = 100.0;
private projectionMatrix = matrix.mat4.create();
private modelViewMatrix = matrix.mat4.create();
private buffers: any
private programInfo: any
We’ll cover the details of these variables more later.
Now, at the bottom of initialiseWebGLContext(canvas: HTMLCanvasElement): any, add the following implementation:
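// set up programInfo for buffers
this.programInfo = {
program: shaderProgram,
attribLocations: {
vertexPosition: this.gl.getAttribLocation(
shaderProgram,
'aVertexPosition'
),
vertexColor: this.gl.getAttribLocation(shaderProgram, 'aVertexColor'),
},
uniformLocations: {
projectionMatrix: this.gl.getUniformLocation(
shaderProgram,
'uProjectionMatrix'
),
modelViewMatrix: this.gl.getUniformLocation(
shaderProgram,
'uModelViewMatrix'
),
},
};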
We’ve now referenced the shader’s attributes and uniforms within attribLocations and uniformLocations respectively.
Now what? Let’s create some buffers and data so we actually have content to render. Then, we’ll set up a rendering scene to provide the foundation to render content.
Creating Content to Render (Buffers)
Let’s create a new method called initialiseBuffers(): any.
e.g.
initialiseBuffers(): any {
}
We’ll create two buffers to render content with:
one to store positional data (where to render)
the second to store colour data (what colour to render)
We’re going to keep things simple and limit our buffers to providing data on rendering a simple 2D square.
Initialising a buffer in WebGL is pretty simple: just call gl.createBuffer().
There are many different types of buffers available in WebGL. As such, we need to tell WebGL what we want to do with this buffer, how it should interpret it, and provide it the data to bind to.
Implement the following in initialiseBuffers():
// Create a buffer for the square's positions.
const positionBuffer = this.gl.createBuffer();
// bind the buffer to WebGL and tell it to accept an ARRAY of data
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, positionBuffer);
// create an array of positions for the square.
const positions = new Float32Array([
1.0, 1.0,
-1.0, 1.0,
1.0, -1.0,
-1.0, -1.0
]);
// Pass the list of positions into WebGL to build the
// shape. We do this by passing the Float32Array we
// created above into bufferData to fill the current buffer.
// We tell WebGL that the data supplied is an ARRAY and
// to handle the data as a statically drawn shape.
this.gl.bufferData(
this.gl.ARRAY_BUFFER,
positions,
this.gl.STATIC_DRAW
);
As you’ve probably noticed, bindBuffer tells WebGL which buffer we want to provide data for. Defining, binding, and supplying buffer data follows a procedural pattern, and whenever you’re adding buffers to WebGL you need to stick to this format to ensure that the data you create is handled and assigned properly, which allows WebGL to render it correctly.
This approach isn’t limited to WebGL; it’s typical of OpenGL in general. It’s all about memory management: it’s good practice to be strict when creating, binding and applying buffer data so as not to introduce memory leaks. It also allows developers to better use the available APIs and build their apps around these functions.
But enough of that, lets create another buffer to store colour data.
// Set up the colors for the vertices
let colors = new Float32Array([
1.0, 1.0, 1.0, 1.0, // white
1.0, 0.0, 0.0, 1.0, // red
0.0, 1.0, 0.0, 1.0, // green
0.0, 0.0, 1.0, 1.0, // blue
]);
const colorBuffer = this.gl.createBuffer();
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, colorBuffer);
this.gl.bufferData(
this.gl.ARRAY_BUFFER,
new Float32Array(colors),
this.gl.STATIC_DRAW
);
Pretty much the same deal with the colour buffer; create, bind and apply.
You’ll notice as well that colours are defined as R, G, B, A values in the 0 to 1 range.
Tip: You can look up any typical RGB colour and divide each number by 255 to get it into a 0 to 1 range.
initialiseBuffers(): any {
// Create a buffer for the square's positions.
const positionBuffer = this.gl.createBuffer();
// bind the buffer to WebGL and tell it to accept an ARRAY of data
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, positionBuffer);
// create an array of positions for the square.
const positions = new Float32Array([
1.0, 1.0,
-1.0, 1.0,
1.0, -1.0,
-1.0, -1.0
]);
// set the list of positions into WebGL to build the
// shape by passing it into bufferData.
// We tell WebGL that the data supplied is an ARRAY and
// to handle the data as a statically drawn shape.
this.gl.bufferData(
this.gl.ARRAY_BUFFER,
positions,
this.gl.STATIC_DRAW
);
// Set up the colors for the vertices
let colors = new Float32Array([
1.0, 1.0, 1.0, 1.0, // white
1.0, 0.0, 0.0, 1.0, // red
0.0, 1.0, 0.0, 1.0, // green
0.0, 0.0, 1.0, 1.0, // blue
]);
const colorBuffer = this.gl.createBuffer();
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, colorBuffer);
this.gl.bufferData(
this.gl.ARRAY_BUFFER,
new Float32Array(colors),
this.gl.STATIC_DRAW
);
return {
position: positionBuffer,
color: colorBuffer,
};
}
Go back to initialiseWebGLContext(canvas: HTMLCanvasElement): any and add in the call to initialiseBuffers() underneath programInfo like so:
// set up programInfo for buffers
this.programInfo = {
...
};
// initalise the buffers to define what we want to draw
this.buffers = this.initialiseBuffers();
Preparing the Scene for Rendering
To prepare the scene for rendering, we need to do the following:
Resize the WebGL canvas based on the browser’s size
Update the WebGL canvas to handle displaying content based on the browser’s size
Bind vertex position data
Bind vertex colour data
Tell WebGL to use the shader program for rendering
Set the vertex shader’s uniform matrices to be in sync with the projection and model-view matrices we’ve configured
The first two methods we will create will ensure that all content that is rendered on screen is correctly positioned and that the perspective of viewing content is maintained correctly whenever the browser’s size is changed on the fly.
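First, create a resizeWebGLCanvas() method. A minimal sketch, using the clientCanvas getter we defined earlier, could look like this:
resizeWebGLCanvas() {
// if the canvas element's size no longer matches the WebGL
// drawing buffer's size, bring the drawing buffer back in sync
if (
this.gl.canvas.width !== this.clientCanvas.clientWidth ||
this.gl.canvas.height !== this.clientCanvas.clientHeight
) {
this.gl.canvas.width = this.clientCanvas.clientWidth;
this.gl.canvas.height = this.clientCanvas.clientHeight;
}
}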
Based on the method above, we check the client canvas (the HTML canvas element) and, if its width and height don’t match the WebGL canvas’s, we update them so they do.
Next, create the updateWebGLCanvas() method:
updateWebGLCanvas() {
this.initialiseWebGLCanvas();
this.aspect = this.clientCanvas.clientWidth / this.clientCanvas.clientHeight;
this.projectionMatrix = matrix.mat4.create();
matrix.mat4.perspective(
this.projectionMatrix,
this.fieldOfView,
this.aspect,
this.zNear,
this.zFar
);
// Set the drawing position to the "identity" point, which is the center of the scene.
this.modelViewMatrix = matrix.mat4.create();
}
Here, we make a call to initialiseWebGLCanvas() to ensure that the canvas is in a default state before updating it for rendering. Next, we set up a perspective projection matrix, which establishes the viewing boundaries of the rendered content for the scene (effectively setting up a camera view).
Our field of view is 45 degrees, with the aspect ratio taken from the canvas’s current width and height. We only want to see objects between 0.1 units and 100 units away from the camera. The perspective projection matrix is a special matrix that is used to simulate the distortion of perspective in a camera.
We also ensure that the model-view matrix is set to what is known as an identity matrix, so that drawing starts from the centre of the scene with no translation, rotation or scaling applied.
Methods to Bind Vertex and Colour Buffers
The next two methods will reference the vertex position and colour buffers we created earlier and tell WebGL how to consume them in order to render the data.
First, create a method to bind vertex positions: bindVertexPosition(programInfo: any, buffers: any).
We bind the position buffer we created before and then tell WebGL how it should consume the position buffer.
Note, the bufferSize is 2. We are telling WebGL that the ARRAY_BUFFER we’ve bound needs to be interpreted in a group of two elements at a time. We don’t want to normalize the data in any way so we set it to false, and we set the stride and offset to 0 so the buffer is read from start to finish completely.
Remember earlier we defined the position data like so:
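const positions = new Float32Array([
1.0, 1.0,
-1.0, 1.0,
1.0, -1.0,
-1.0, -1.0
]);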
We want WebGL to interpret the array two values at a time, essentially reading our (x, y) coordinates before proceeding to the next pair in the array.
By doing this, we set the top-right, top-left, bottom-right, and bottom-left positions of the square.
The vertexAttribPointer is setup to be consumed by the vertex shader’s aVertexPosition via programInfo.attribLocations.vertexPosition we defined earlier.
At the end of the method, we tell WebGL to enable the vertex attribute array for rendering via enableVertexAttribArray(...).
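Put together, a sketch of the method (using the buffers and programInfo objects we created earlier) might look like this:
bindVertexPosition(programInfo: any, buffers: any) {
// read the position buffer two 32-bit floats (x, y) at a time
const bufferSize = 2;
const type = this.gl.FLOAT;
const normalize = false;
const stride = 0;
const offset = 0;
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, buffers.position);
this.gl.vertexAttribPointer(
programInfo.attribLocations.vertexPosition,
bufferSize,
type,
normalize,
stride,
offset
);
this.gl.enableVertexAttribArray(
programInfo.attribLocations.vertexPosition
);
}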
Next, create a method to bind the colour buffer: bindVertexColor(programInfo: any, buffers: any).
We do pretty much the same thing to bind the colour buffer as we did for the vertex positions, except that the bufferSize we read is set to 4 instead of 2.
This is because colour data is interpreted in RGBA format, so each vertex colour consists of four components.
Here’s the colours we defined earlier for reference:
let colors = new Float32Array([
1.0, 1.0, 1.0, 1.0, // white
1.0, 0.0, 0.0, 1.0, // red
0.0, 1.0, 0.0, 1.0, // green
0.0, 0.0, 1.0, 1.0, // blue
]);
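A matching sketch for bindVertexColor(), reading four components per vertex, might be:
bindVertexColor(programInfo: any, buffers: any) {
// read the colour buffer four 32-bit floats (r, g, b, a) at a time
const bufferSize = 4;
const type = this.gl.FLOAT;
const normalize = false;
const stride = 0;
const offset = 0;
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, buffers.color);
this.gl.vertexAttribPointer(
programInfo.attribLocations.vertexColor,
bufferSize,
type,
normalize,
stride,
offset
);
this.gl.enableVertexAttribArray(
programInfo.attribLocations.vertexColor
);
}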
Putting it All Together to Prepare the Scene
Let’s now create another method called prepareScene() and tie everything together.
prepareScene() {
this.resizeWebGLCanvas();
this.updateWebGLCanvas();
// move the camera position a bit backwards to a position where
// we can observe the content that will be drawn from a distance
matrix.mat4.translate(
this.modelViewMatrix, // destination matrix
this.modelViewMatrix, // matrix to translate
[0.0, 0.0, -6.0] // amount to translate
);
// tell WebGL how to pull out the positions from the position
// buffer into the vertexPosition attribute
this.bindVertexPosition(this.programInfo, this.buffers);
// tell WebGL how to pull out the colors from the color buffer
// into the vertexColor attribute.
this.bindVertexColor(this.programInfo, this.buffers);
// tell WebGL to use our program when drawing
this.gl.useProgram(this.programInfo.program);
// set the shader uniforms
this.gl.uniformMatrix4fv(
this.programInfo.uniformLocations.projectionMatrix,
false,
this.projectionMatrix
);
this.gl.uniformMatrix4fv(
this.programInfo.uniformLocations.modelViewMatrix,
false,
this.modelViewMatrix
);
}
We call resizeWebGLCanvas() and updateWebGLCanvas() to ensure the HTML canvas element and the WebGL canvas are kept in sync.
Next, we setup the model-view matrix by performing a translation on it. We move it six units backwards along the Z axis. This allows us to observe any content that we draw on the scene at a distance.
We then tell WebGL how to bind the vertex position and colour data via bindVertexPosition and bindVertexColor and then tell WebGL to use the shader program we built earlier.
Finally, we set the vertex shader’s projection and model-view uniforms from the projectionMatrix and modelViewMatrix that are maintained and updated within this service, so that the values the shader uses always reflect the matrices we’ve configured.
Go back to initialiseWebGLContext(...) method, update its signature to return WebGLRenderingContext, make the call to prepareScene() at the end of the method and then return the this.gl context.
The method should now look like this:
initialiseWebGLContext(canvas: HTMLCanvasElement): WebGLRenderingContext {
// Try to grab the standard context. If it fails, fallback to experimental.
this._renderingContext =
canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
// If we don't have a GL context, give up now... only continue if WebGL is available and working...
if (!this.gl) {
alert('Unable to initialize WebGL. Your browser may not support it.');
return;
}
this.setWebGLCanvasDimensions(canvas);
this.initialiseWebGLCanvas();
// initialise shaders into WebGL
let shaderProgram = this.initializeShaders();
// set up programInfo for buffers
this.programInfo = {
program: shaderProgram,
attribLocations: {
vertexPosition: this.gl.getAttribLocation(
shaderProgram,
'aVertexPosition'
),
vertexColor: this.gl.getAttribLocation(shaderProgram, 'aVertexColor'),
},
uniformLocations: {
projectionMatrix: this.gl.getUniformLocation(
shaderProgram,
'uProjectionMatrix'
),
modelViewMatrix: this.gl.getUniformLocation(
shaderProgram,
'uModelViewMatrix'
),
},
};
// initalise the buffers to define what we want to draw
this.buffers = this.initialiseBuffers();
// prepare the scene to display content
this.prepareScene();
return this.gl
}
Displaying the Square!!
Head over to the scene.component.ts that we created in part 1.
Create a method called drawScene() and implement the following:
private drawScene() {
// prepare the scene and update the viewport
this.webglService.updateViewport();
this.webglService.prepareScene();
// draw the scene
const offset = 0;
const vertexCount = 4;
this.gl.drawArrays(
this.gl.TRIANGLE_STRIP,
offset,
vertexCount
);
}
This method does two things:
prepare the scene for rendering via prepareScene
draw all arrays that have been bound in the gl context.
NOTE: the vertexCount is 4. This number matches the four vertices we defined for the square (top-right, top-left, bottom-right, bottom-left).
TRIANGLE_STRIP is one of the primitive modes WebGL can draw arrays with; it renders a connected strip of triangles, which suits our square nicely, since OpenGL/WebGL ultimately renders everything as triangles (everything can be defined as triangles when it comes to rendering objects on screen).
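One more thing to note: drawScene() also calls webglService.updateViewport(), which isn’t shown explicitly in this series. If the service doesn’t have it yet, a one-liner along these lines is enough:
updateViewport() {
// map WebGL's clip space onto the full size of the drawing buffer
this.gl.viewport(0, 0, this.gl.canvas.width, this.gl.canvas.height);
}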
Finally, to call drawScene() and animate it, we need to define a render loop and then call the method as desired.
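The render loop below assumes the component has a frame-interval field defined on it, for example:
// roughly 16.7 milliseconds per frame (~60 frames per second)
private readonly _60fpsInterval = 1000 / 60;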
In ngAfterViewInit() update the method with the following:
ngAfterViewInit(): void {
if (!this.canvas) {
alert('canvas not supplied! cannot bind WebGL context!');
return;
}
this.gl = this.webglService.initialiseWebGLContext(
this.canvas.nativeElement
);
// Set up to draw the scene periodically.
const drawSceneInterval = interval(this._60fpsInterval);
drawSceneInterval.subscribe(() => {
this.drawScene();
});
}
We use interval from rxjs to create a simple render loop and call drawScene() within the subscription. Of course, you could use the setInterval JS function to achieve the same thing, but this is a bit of fun using rxjs conventions alongside rendering content in WebGL.
Run npm start and FINALLY, you can now see a SQUARE on the screen! Yay!
You can even resize the browser window and see that the rendering context updates and positions itself correctly!!
Congratulations! You made it! That was a lot of work… too much work, perhaps, just to set up something this simple. There are a lot of third-party libraries that help reduce the amount of boilerplate you need to create a scene and start rendering objects, but the aim of this tutorial is to show you, in as much detail as possible, what needs to be done in order to set up and compile shaders in WebGL using Angular.
That’s it for part 2! In part 3, we’ll look into animating and displaying a spinning 3D cube!!!
Swift as a language has grown from infancy quite rapidly since its release in 2014. The programming language, initially a proprietary language, was released to the community as an open-source project with version 2.2 in late December 2015.
At the heart of the open-source language is the community’s goal for the language:
To create the best available language for uses ranging from systems programming to mobile and desktop apps, scaling up to cloud services.
Swift Language, source: http://swift.org/about/
So what’s the big deal?
Initially Swift only supported the development of desktop and native mobile applications for devices running iOS and Mac OS. However, over the years, support for Unix based architecture and more recently official support for windows in Swift 5.3 (latest at time of writing), has enabled the development of applications and solutions in Swift to reach a much wider audience and developer base.
If you would like to read more on the port of Swift to Windows, I suggest reading through their blog post on the official Swift site.
Why Swift?
So you may be asking, why should I be using Swift for my server-side development? Why is it any better than Java, C#, NodeJS (JS/TS), Python?
Swift has many advantages for development; some of these include prevention of uninitialised variables, overflow checking, and automated memory management thanks to ARC. As well as this, the language design and syntax promote swift (fast) development with high maintainability and readability. Swift is still a young language, so it has a long way to go and will continuously see improvements to its performance and feature base.
You can read up on the language itself and look into the pros and cons. But one of the main advantages of building in Swift for server-side and full-stack work is the reusability of code across your mobile, web, and server-side developments; this allows for the sharing of business logic, models, and validation across your project. It also brings language familiarity across your codebase, which supports working in a cross-functional development team.
Another big advantage is if you are already an experienced iOS developer, the transition to developing APIs and backend services for your application is seamless, with no need to learn a new language.
The transition from other languages such as JavaScript, TypeScript, Kotlin, etc. is also very simple as the language is designed for fast development and it is easy to learn, sharing a lot in common with those aforementioned.
Vapor
Vapor is a web framework built for server-side Swift development. It’s built on Apple’s SwiftNIO: a non-blocking, event-driven architecture written in Swift which encourages type-safe, expressive, and protocol-oriented development.
We will be covering Vapor in this tutorial. Vapor is one of the most used web frameworks for Swift server-side development. However, other options do exist but a lot of them are no longer supported or have died out. If you wish to explore them as well here are a couple: Kitura (IBM no longer supports this) and Perfect.
Prerequisites
Before we dive in and get our hands dirty, you will need to have the following setup on your system:
Swift > 5.2
IDE or Text Editor with Swift support (optional)
Setting up Vapor
We will be installing Vapor in the next section of the tutorial using Vapor Toolbox for CLI shortcuts, which is only available for macOS and Linux. If you are on Windows you can still use the Vapor framework but will have to manually set up your project. You can download the API Template from Vapor’s GitHub repository.
To install the Vapor Toolbox, you can use brew or apt-get.
# macOS
brew install vapor
# linux
git clone https://github.com/vapor/toolbox.git
cd toolbox
git checkout <desired version>
make install
# commands
vapor --help
Now that we have Vapor installed, we can use the vapor command. Let’s start by creating our project.
Out of the box, Vapor supports the use of database drivers. We will not be using these in this tutorial, so if you are prompted to install any extra packages when creating your new project, you can answer no.
vapor new shopApi
cd shopApi
vapor build
We are creating a new project here named shopApi. In this tutorial, we will be creating a simple shopping app which will display a list of products available to a user.
If you would like to read more about the differences between dynamic linking and static linking you can read further at https://swift.org/package-manager/.
The rest of the tutorial will focus on development using the macOS environment. All code snippets and Vapor CLI commands will carry across to Linux as well; however, running the application will be done through the terminal instead of through Xcode.
Running our Swift application from the command line:
vapor build && vapor run
If you are on macOS you can run:
vapor xcode
This will open the project in Xcode where you can use the normal build and run buttons you are familiar with for mobile development. You can also set breakpoints inside your code to debug your application.
Building your first route
It’s time to get our hands dirty. Let’s start by creating our first route for our basic shopApi.
Endpoints
First, let us define some constants for our endpoints. Create a new Constants.swift file with the following constants.
// Constants.swift
import Vapor
public enum Constants {
public enum Endpoints: String {
// MARK:- Shop Endpoints
case products = "products"
case singleProduct = "productId"
var path: PathComponent {
return PathComponent(
stringLiteral: self.rawValue
)
}
}
}
Here we define two constants for the routes we will be using. You will see later how we use the “:productId” to provide a path template for an endpoint to retrieve a single product.
Routes
Now open the routes.swift file and replace the existing routes with new routes for the two endpoints: products and products/:productId.
// routes.swift
func routes(_ app: Application) throws {
app.get(Constants.Endpoints.products.path) {
req -> String in
return "All Products"
}
app.get(Constants.Endpoints.products.path,
":\(Constants.Endpoints.singleProduct.path)") {
req -> String in
return "Single Product"
}
}
When we define the endpoint for products/:productId, Vapor uses the colon (:) as an identifier for a URL path parameter. We can access this inside the function using the following:
let param = req.parameters.get("productId")
or
// Using type casting
let param = req.parameters.get("productId", as: Int.self)
Now if we run the project (Run in Xcode) you should see the following:
vapor build && vapor run
Building project...
[8/8] Linking Run
Project built.
[ NOTICE ] Server starting on http://127.0.0.1:8080
Navigate in a browser or with Postman to http://127.0.0.1:8080/products and you should see the server respond with “All Products”.
Congratulations, you now have your first endpoints set up using Vapor.
Models and business logic
Let’s now explore the reusability of models and business logic across our server-side and application codebase. We discussed earlier that this is one of the main advantages of using Swift across your entire stack.
Create a new file called Product.swift in the Models folder (create this if it does not exist).
In a real-world scenario, this would be developed as part of a common framework or module and imported into the project for code re-usability across your Swift Stack. For simplicity in this tutorial, we will not cover creating a Swift package for use in your project.
// Product.swift
import Foundation
struct Product: Codable {
// MARK:- Properties
private var id: Int
private var name: String
private var price: Double
private var description: String
// MARK:- Lifecycle
init(id: Int, name: String,
price: Double, description: String) {
self.id = id
self.name = name
self.price = price
self.description = description
}
// MARK:- Codeable Methods
func asJsonString() -> String {
let codedProduct = try! JSONEncoder().encode(self)
return String(data: codedProduct, encoding: .utf8)!
}
}
In this product model file, we create a basic data structure for our products and conform it to the Codable protocol. This allows us to serialise our object in both our server-side and client-side applications.
We also have an instance helper method here to serialise our Codable product to a string for passing as a response body.
This product model will now be useable in both our server-side and client-side applications, and any changes that are made to this data structure in our common framework will be reflected across both applications, reducing the development effort when updating and ensuring alignment between client and server.
Services
Services in Vapor can be registered as a part of the application to act as the business logic layer. Now that we have created our model for the shop, let’s create a service which will serve some dummy products for our shop.
Create a new file called ProductService.swift inside a Services folder (Create this folder if it does not exist).
// ProductService.swift
import Foundation
import Vapor
class ProductService {
var products: [Product] = []
//
// Initialise our products.
// This is a mock that returns hard coded products
//
init() {
// Create some dummy products
let toucan = Product(
id: 1,
name: "Toucan",
price: 50.00,
description: "Famous bird of the zoo"
)
let elephant = Product(
id: 2,
name: "Elephant",
price: 85.00,
description: "Large creature from Africa"
)
let giraffe = Product(
id: 3,
name: "Giraffe",
price: 65.00,
description: "Long necked creature"
)
// Add them to our products array
products.append(toucan)
products.append(elephant)
products.append(giraffe)
}
//
// Filter our products array and get by matching id
//
func getProductById(id: Int) -> Product? {
return products.first(where: { $0.id == id })
}
//
// Return all products
//
func getProducts() -> [Product] {
return products
}
}
In this simple service, we are instantiating 3 products and storing them in our products array. This logic in practice would be replaced with a database implementation to store and access our products. However, for now, we are just hardcoding the values stored in our shop.
There are two methods in this service, one which we will use to return all the products that our store contains, and the other to return a product by its ID. These two methods match up with our current routes.
Registering our service
Now to access our service from our routes or controllers, we must register them in the Vapor application. To do this let’s add this extension to the bottom of our ProductService.swift file.
// MARK:- Services implementation
extension Application {
//
// Register our product service with the Vapor
// application.
//
var productService: ProductService {
.init()
}
}
In Vapor 4, you now register your services as extensions of either the Application or Request objects. This exposes the services as properties and allows for easier use in our routes and controllers.
This allows us to use our ProductService methods by calling:
app.productService.getProducts()
Codable Extension
Lastly, before we hook everything up and can start responding with products for our API, we must write an extension to serialise the list of our products to a JSON string. Open the Product.swift file and at the end of the file add the following extension.
// Product.swift
struct Product: Codable {
...
...
}
extension Array {
typealias CodableArray = [Product]
//
// Encode our array as a Json string.
//
func codableArrayAsJsonString() -> String {
if let array = self as? CodableArray {
let codedArray = try!
JSONEncoder().encode(array)
return String(
data: codedArray,
encoding: .utf8
)!
}
// This is where we can add some error handling,
// But for now we will just return blank
return ""
}
}
This extension allows us to encode a list of our products to a JSON string for use as the body in a response object.
Putting it all together
Now that we have built our business logic and models, we can now start responding to our client with the products our shop offers. Let’s open the routes.swift file again and modify our /products route.
All Products Route
//
// Register an endpoint for /products
//
app.get(Constants.Endpoints.products.path) {
req -> Response in
// Call our product service to get our products
let products = app.productService.getProducts()
// Return a serialised list of products
return .init(status: .ok,
version: req.version,
headers: [
"Content-Type":
"application/json"
],
body:
.init(string:
products.codableArrayAsJsonString()
))
}
Here we are calling our service that we registered earlier on our application object, and retrieving the list of products our shop offers.
We are then creating a response object which will return the encoded JSON list of our products in the body using our extension: codableArrayAsJsonString().
Single Product Route
Let’s now modify our final route, which takes in a url-path parameter for the productId and returns the product if it is found.
//
// Register an endpoint for /products/:productId
//
app.get(Constants.Endpoints.products.path,
":\(Constants.Endpoints.singleProduct.path)"
) {
req -> Response in
// Get our productId from the url-path parameter
let productId = req.parameters.get(
Constants.Endpoints.singleProduct.rawValue,
as: Int.self
) ?? 0
// Call our product service to get the product by id
if let product =
app.productService.getProductById(id: productId) {
return .init(status: .ok,
version: req.version,
headers: [
"Content-Type":
"application/json"],
body:
.init(
string: product.asJsonString()
))
}
// Return a 404 as no product was found
return .init(status: .notFound,
version: req.version,
headers: [
"Content-Type":
"application/json"
],
body: .init(string: "No product found."))
}
Here we get the productId from the url-path of the request and use it to call our service with the .getProductById() method. This returns a single product matching the ID. We then encode the product as a JSON String and set it as the response body.
If no product is found we return a 404 Product not found.
Running the application
Finally, let’s run our application to see our API in its finished state.
If you call the http://127.0.0.1:8080/products endpoint you should now see the following response:
[
{
"id":1,
"name":"Toucan",
"price":50,
"description":"Famous bird of the zoo"
},
{
"id":2,
"name":"Elephant",
"price":85,
"description":"Large creature from Africa"
},
{
"id":3,
"name":"Giraffe",
"price":65,
"description":"Long necked creature"
}
]
Now, if you call the single product endpoint with a product ID
http://127.0.0.1:8080/products/:productId
For example: /products/1 you should see the following response:
{
"id":1,
"name":"Toucan",
"price":50,
"description":"Famous bird of the zoo"
}
Congratulations, you now have a simple products API built entirely in Swift.
Next: Swift Stack: Become a Full Stack Swift Developer (Part Two). In the next part, we will look at deploying our Swift server into the cloud using a CI/CD pipeline. We will also look at how to write tests and how we can run them as part of our pipeline.
WebGL has to be one of the most under-used JavaScript APIs within modern web browsers.
It offers rendering of interactive 2D and 3D graphics and is fully integrated with other web standards, allowing GPU-accelerated use of physics and image processing effects as part of the web page canvas (Wikipedia, 2020).
In this article, we’re going to setup WebGL within a typical Angular app by utilising the HTML5 canvas element.
Prerequisites
Before starting, it’s worthwhile to ensure your system is set up with the following:
You are using a modern web browser (Chrome 56+, Firefox 51+, Opera 43+, Edge 10240+)
WebGL Fundamentals
There’s honestly a lot to take in regarding the fundamentals of WebGL (and more specifically OpenGL). It does mean that you’ll need to have some basic understanding of linear algebra and 2D/3D rendering in general. WebGL Fundamentals does a great job of providing an introduction to WebGL fundamentals, and I’ll be referencing their documentation as we step through setting up our Angular app to use WebGL.
Before going any further, it’s important that you understand the following at a minimum.
WebGL is not a 3D API. You can’t just use it to instantly render objects and models and get them to do some awesome magic.
WebGL is just a rasterization engine. It draws points, lines and triangles based on the code you supply.
If you want WebGL to do anything else, it’s up to you to write code that uses points, lines and triangles to accomplish the task you want.
WebGL runs on the GPU and requires that you provide code that runs on the GPU. The code that we need to provide is in the form of pairs of functions.
They are known as:
a vertex shader
responsible for computing vertex positions – based on the positions, WebGL can then rasterize primitives including points, lines, or triangles.
a fragment shader
when primitives are being rasterized, WebGL calls the fragment shader to compute a colour for each pixel of the primitive that’s currently being drawn.
Each shader is written in GLSL which is a strictly typed C/C++ like language. When a vertex and fragment shader are combined, they’re collectively known as a program.
Nearly all of the entire WebGL API is about setting up state for these pairs of functions to run. For each thing you want to draw, you setup a bunch of state then execute a pair of functions by calling gl.drawArrays or gl.drawElements which executes your shaders on the GPU.
Any data you want those functions to have access to, must be provided to the GPU. There are 4 ways a shader can receive data.
Attributes and buffers
Buffers are arrays of binary data you upload to the GPU. Usually buffers contain things like positions, normals, texture coordinates, vertex colours, etc although you’re free to put anything you want in them.
Attributes are used to specify how to pull data out of your buffers and provide them to your vertex shader. For example you might put positions in a buffer as three 32bit floats per position. You would tell a particular attribute which buffer to pull the positions out of, what type of data it should pull out (3 component 32 bit floating point numbers), what offset in the buffer the positions start, and how many bytes to get from one position to the next.
Buffers are not random access. Instead a vertex shader is executed a specified number of times. Each time it’s executed the next value from each specified buffer is pulled out and assigned to an attribute.
Uniforms
Uniforms are effectively global variables you set before you execute your shader program.
Textures
Textures are arrays of data you can randomly access in your shader program. The most common thing to put in a texture is image data but textures are just data and can just as easily contain something other than colours.
Varyings
Varyings are a way for a vertex shader to pass data to a fragment shader. Depending on what is being rendered, points, lines, or triangles, the values set on a varying by a vertex shader will be interpolated while executing the fragment shader.
(WebGL Fundamentals, 2015).
I’m glossing over a lot of technical detail here, but if you really want to know more, head over to WebGL Fundamentals lessons for more info.
Setting up a playground
Let’s set up a playground so we have something that we can use in order to continue setting up WebGL.
First, create a component. You can create one by executing the following command within your Angular root (src) directory. I’ve gone ahead and named mine scene.
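ng generate component scene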
Finally, run ng serve and check to see if the Angular app is running and displaying the SceneComponent. It should look like this:
Now, lets move onto adding a WebGL context.
Setting up the WebGL context
Setting up the WebGL context is a little bit involved but once we get the foundation going we can then proceed to start getting something on the screen.
Let’s start by opening up scene.component.html and add a HTML5 canvas element.
<div class="scene">
<canvas #sceneCanvas>
Your browser doesn't appear to support the
<code><canvas></code> element.
</canvas>
</div>
Open up scene.component.scss (or equivalent) and add in the following styles:
The following css should just make sure the canvas element extends to the size of the browser window. I just added some border styling so you can explicitly see it for yourself.
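Something along these lines works; the exact sizing and border are up to you:
.scene {
width: 100%;
height: 100%;
canvas {
width: 100%;
height: 100%;
border: 1px solid darkgrey;
}
}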
TIP: If you want, you can also update the global styles.scss so all content expands to the height of the window respectively.
styles.scss
/* You can add global styles to this file, and also import other style files */
html,
body {
height: 99%;
}
We’ll now embark on doing the following:
Resolving the canvas element in TypeScript via the #sceneCanvas template reference
Binding the canvas element to a WebGL rendering context
Initialize the WebGL rendering canvas
Resolving the canvas element
Open scene.component.ts and add the following property:
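Something like the following, where the 'sceneCanvas' string matches the #sceneCanvas template reference variable on our canvas element (you’ll also need ViewChild and ElementRef imported from @angular/core):
@ViewChild('sceneCanvas') private canvas: ElementRef<HTMLCanvasElement> | undefined;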
Update the SceneComponent class to implement AfterViewInit; we’ll need to hook into this lifecycle hook to continue setting up the WebGL canvas.
Add in the following guard to the ngAfterViewInit method to ensure that we actually have the canvas element before attempting to bind it:
if (!this.canvas) {
alert("canvas not supplied! cannot bind WebGL context!");
return;
}
NOTE: If the alert is hit, it’s because the template reference you’re using doesn’t match the one defined in the HTML and the TS class. You need to ensure they match.
Your component implementation should now look like this:
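Roughly like this; the webglService injected in the constructor is the WebGL service we’ll build out in a moment (generate it with ng generate service web-gl if you haven’t already), and the selector and file names are whatever the CLI generated for you:
@Component({
selector: 'app-scene',
templateUrl: './scene.component.html',
styleUrls: ['./scene.component.scss'],
})
export class SceneComponent implements AfterViewInit {
@ViewChild('sceneCanvas') private canvas: ElementRef<HTMLCanvasElement> | undefined;
constructor(private webglService: WebGLService) {}
ngAfterViewInit(): void {
if (!this.canvas) {
alert('canvas not supplied! cannot bind WebGL context!');
return;
}
}
}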
Go back to scene.component.ts and add in the following line after the guard check in ngAfterViewInit.
ngAfterViewInit(): void {
if (!this.canvas) {
alert('canvas not supplied! cannot bind WebGL context!');
return;
}
this.webglService.initialiseWebGLContext(this.canvas.nativeElement);
}
Now, back in web-gl.service.ts, let’s retrieve a WebGL context from the canvas’s native element and reference it via a property that we’ll call gl.
private _renderingContext: RenderingContext;
private get gl(): WebGLRenderingContext {
return this._renderingContext as WebGLRenderingContext;
}
constructor() {}
initialiseWebGLContext(canvas: HTMLCanvasElement) {
// Try to grab the standard context. If it fails, fallback to experimental.
this._renderingContext = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
// If we don't have a GL context, give up now... only continue if WebGL is available and working...
if (!this.gl) {
alert('Unable to initialize WebGL. Your browser may not support it.');
return;
}
}
Once we’ve retrieved the WebGLRenderingContext, we can then set the WebGL canvas’s height and width, and then finally proceed to initialise the WebGL canvas.
Let’s add the two methods that do what I described above:
setWebGLCanvasDimensions(canvas: HTMLCanvasElement) {
// set width and height based on canvas width and height - good practice to use clientWidth and clientHeight
this.gl.canvas.width = canvas.clientWidth;
this.gl.canvas.height = canvas.clientHeight;
}
initialiseWebGLCanvas() {
// Set clear colour to black, fully opaque
this.gl.clearColor(0.0, 0.0, 0.0, 1.0);
// Enable depth testing
this.gl.enable(this.gl.DEPTH_TEST);
// Near things obscure far things
this.gl.depthFunc(this.gl.LEQUAL);
// Clear the colour as well as the depth buffer.
this.gl.clear(this.gl.COLOR_BUFFER_BIT | this.gl.DEPTH_BUFFER_BIT);
}
Now finally call them at the end of the initialiseWebGLContext method.
initialiseWebGLContext(canvas: HTMLCanvasElement) {
// Try to grab the standard context. If it fails, fallback to experimental.
this._renderingContext =
canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
// If we don't have a GL context, give up now... only continue if WebGL is available and working...
if (!this.gl) {
alert('Unable to initialize WebGL. Your browser may not support it.');
return;
}
// *** set width, height and initialise the webgl canvas ***
this.setWebGLCanvasDimensions(canvas);
this.initialiseWebGLCanvas();
}
Run the app again; you should now see that the canvas is entirely black.
This shows that we’ve successfully initialised the WebGL context.
That’s it for part 1!
Next: Introduction to WebGL using Angular – Part 2 – Setting up shaders and a triangle
In part 2, we’ll proceed to add in shaders and start setting up content to render on screen!
Westpac is Australia’s first bank and oldest company, one of four major banking organisations in Australia and one of the largest banks in New Zealand.
Westpac provides a broad range of consumer, business and institutional banking and wealth management services through a portfolio of financial services brands and businesses.
In 2016, Westpac received the highest overall score in the Global Mobile Banking Functionality Benchmark report by Forrester, one of the world’s most prestigious research and advisory firms.
Westpac is at the forefront of digital banking, competing on a world stage and partnering with companies like 10x to launch a contemporary banking platform. We also have a long-standing relationship with them; after all, this is where the core team started.
Our role
Digizoo have been engaged to work with Westpac again, following their media release on a partnership with 10x. Digizoo will work with Westpac’s first client to test the APIs.
10x Future Technologies is a UK-based, cloud-native banking technology provider that Westpac is using to build a standalone BaaS (Banking as a Service) solution for new clients that rely on Westpac’s banking license.
Deliverables
Digizoo will complement the Westpac team(s) to:
Test the 10x APIs;
Manage the relationship between the Westpac/10x engineering team and their foundation partner.
Digizoo has partnered with Google Apigee, the industry-leading platform for API Management, to deliver world-recognised modern integration solutions for our clients, either in the cloud or on-premise.
The API Management landscape has undergone significant consolidation and change in recent years, and after the dust has settled, Apigee has emerged as a clear leader as shown by its clear dominance in the Gartner Magic Quadrant for API Management Platforms.
Digizoo has trained its software engineers to take full advantage of the Apigee suite as a key tenet of our digital platform architecture.