
How to Deploy Angular 8 App to Google App Engine

Hi there 👋. My name is Thomas Van Deun and I’m a certified Google Cloud Architect.

What we’ll do

  1. Set up a Google Cloud Repository on GCP 
  2. Scaffold an Angular 8 application 
  3. Integrate Cloud Build CI/CD for this project
  4. Automatically deploy to App Engine 
Architecture for deploying angular 8 to Google App Engine on GCP with CI/CD

What you’ll need

  • Google Cloud Platform account — free tier will do
    • Set up a new project if you’d like
  • Have the Cloud SDK installed (to connect with the repository)
  • Angular CLI tools installed
  • An IDE of your choice

Google Cloud Repository

Log in on the Google Cloud Console

Open the hamburger menu and navigate to Source Repositories under Tools. On the next screen, you’ll find the button Add Repository.

Here we will create a new repository. Give it a name and choose the project you want to work within. 

Creating an Angular 8 Application in Cloud Source Repositories

When you are happy with your name, click Create.

Now we need to connect to the repository locally. I’d recommend setting this up using the gcloud SDK, as you will need it in a later step anyway. If all goes well, you will have the repo cloned on your local machine. Step 1 complete 🎉
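
If you go the gcloud route, cloning looks something like this. The repository and project names below are placeholders, so swap in the ones you chose above:

# Authenticate and pick the project you created the repository in
gcloud init

# Clone the Cloud Source Repository (example names)
gcloud source repos clone my-angular-repo --project=my-gcp-project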

Scaffold Angular 8 project

Now, open the command line at the root of your cloned repository. Let’s scaffold an Angular project with the following command.

ng new angular-on-gcp

This will prompt you through the setup; change or accept the default settings.

Navigate to your newly created app and serve it up to see it locally.

cd angular-on-gcp
ng serve --open

Feel free to make more changes in this Angular project, but for tutorial purposes we will leave it like this.

Website running on localhost: 

Running Angular on gcp app with a localhost

More in-depth Angular documentation can be found over here.

Cloud Build Trigger

Go back to your GCP console and look for the Cloud Build tool. Start by enabling this API for your project. You should see this screen next:

GCP console cloud build tool

Click CREATE TRIGGER

On the next screen, fill in the name and select a repository. The name should represent what this pipeline does, so in this case I will name it BuildAndDeployAngularApplication. Select the repository you created in step 1.

You can leave the trigger on the master branch, but you can change this later to whatever fits your needs.

Leave the defaults for Build Configuration.

From now on, every push we make to master will trigger this pipeline. At the moment there is no logic defined, so nothing will actually happen. Let’s change that in the next step.

Creating a build trigger on gcp

CI / CD Pipeline

Google Cloud Build(ers)

Google Cloud Build makes use of Cloud Builders. Google provides a few of them out of the box over here.

Unfortunately, this list does not contain an Angular builder. We will create our own and store it in our project’s image registry, making use of the community-managed cloud builders.

Clone this repo to your local machine, build the Angular Docker image, and verify that it is now part of your available container images:

git clone https://github.com/GoogleCloudPlatform/cloud-builders-community
cd cloud-builders-community/ng
gcloud builds submit --config cloudbuild.yaml .  
gcloud container images list --filter ng

Now that we have the ng image in our project registry, we can make use of it in our pipeline.

Cloud Build Options

At the root of your Angular repo, create a new file named cloudbuild.yaml. Use these steps and save the file. Replace {projectId} with your own project ID; the build will then reference the image you created in the previous step.

steps:

  # Install node packages
  - name: "gcr.io/cloud-builders/npm:latest"
    args: ["install"]
    dir: "angular-on-gcp"

  # Build production package
  - name: "gcr.io/{projectId}/ng"
    args: ["build", "--prod"]
    dir: "angular-on-gcp"

  # Deploy to google cloud app engine
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy", "--version=prod"]
    dir: "angular-on-gcp"

This file means the following (a quick way to kick off the pipeline is sketched after the list):

  1. In the build environment, install all the NPM dependencies defined in the package.json
  2. Use the Angular cloud builder to create a production distribution with the ng Docker image
  3. Use the gcloud builder to deploy the app to the App Engine service
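
Once the App Engine pieces in the next sections are in place, committing this file to master is all it takes to run the pipeline. A minimal sketch, assuming the default remote and branch of the cloned repo:

# Commit the build config and push to master to trigger the Cloud Build pipeline
git add cloudbuild.yaml
git commit -m "Add Cloud Build pipeline"
git push origin master

# Or run the same steps without a push; note the deploy step only succeeds
# once App Engine exists and Cloud Build has the App Engine Admin role
gcloud builds submit --config cloudbuild.yaml .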

Google App Engine 

Now that we have set up our automatic build pipeline, we need to give the builder instructions on how to deploy the Angular application. 

At the source root of your Angular project, create a new file named app.yaml and paste the following contents (adjust the dist/ directory paths if your application has a different name):

runtime: python27
api_version: 1
threadsafe: yes

handlers:
  - url: /(.*\.(gif|png|jpg|css|js)(|\.map))$
    static_files: dist/angular-on-gcp/\1
    upload: dist/angular-on-gcp/(.*)(|\.map)

  - url: /(.*)
    static_files: dist/angular-on-gcp/index.html
    upload: dist/angular-on-gcp/index.html

skip_files:
  - e2e/
  - node_modules/
  - src/
  - ^(.*/)?\..*$
  - ^(.*/)?.*\.json$
  - ^(.*/)?.*\.md$
  - ^(.*/)?.*\.yaml$
  - ^LICENSE

Create an App Engine Application

Enable the App Engine API (App Engine Admin API) on the console and spin up an App Engine application by navigating to the App Engine tool in the console.

Enabling the App Engine API on gcp console
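
If you prefer the CLI, the equivalent is roughly the following. The region is an example, and note that an App Engine application’s region cannot be changed afterwards:

# Enable the App Engine Admin API and create the App Engine application
gcloud services enable appengine.googleapis.com
gcloud app create --region=europe-west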

Permissions for App Engine Deploy

You will need to give the Cloud Build service account access to App Engine. Go to the Settings menu of Cloud Build and enable the App Engine Admin role.

Providing service permissions in App Engine on gcp
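
The console toggle is the simplest route. If you would rather script it, granting the equivalent role to the Cloud Build service account looks roughly like this, with the project ID and project number as placeholders:

# Grant App Engine Admin to the Cloud Build service account
# (replace MY_PROJECT_ID and MY_PROJECT_NUMBER with your own values)
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member="serviceAccount:MY_PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/appengine.appAdmin"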

The Build History is a good starting point to debug any failures:

GCP Build history

The result 🥂

Now that everything is set up, every run of the pipeline will build your Angular 8 app and deploy it to Google App Engine. In the App Engine dashboard, you will see your deployed versions and the address of your application:

App Engine Dashboard on GCP

Angular on gcp is running
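
You can also check the deployment from the command line, assuming gcloud is pointed at the same project:

# List the deployed App Engine versions and open the app in your browser
gcloud app versions list
gcloud app browse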


How to Master Admission Webhooks In Kubernetes (GKE) (Part One)

I title it “Mastering”, but to be honest, I am still a fair way from getting there. There is still so much to learn and so much to worry about, essentially because the number of resource manifests your admission webhook sees means it can have a significant blast radius when things are not going well. The cluster control plane could go for a ride if the webhook intercepts too much and misbehaves, either by denying requests it shouldn’t or by slowing down because it can’t cope. And that is just validating webhooks; it gets trickier when you start to modify resource manifests using mutating webhooks.

What are admission webhooks? “Admission webhooks are HTTP callbacks that receive admission requests (for resources in a K8s cluster) and do something with them. You can define two types of admission webhooks, validating admission webhook and mutating admission webhook. Mutating webhooks are invoked first, and can modify objects sent to the API server to enforce custom defaults. After all object modifications are complete, and after the incoming object is validated by the API server, validating webhooks are invoked and can reject requests to enforce custom policies.”

I must say I was very fortunate to get a comprehensive head start on Kubernetes Admission Webhooks from Marton Sereg at Banzai Cloud. I strongly recommend reading that before carrying on here, especially if you are just getting started on this journey and want to keep it short and sweet initially. The article covers everything from concepts and definitions, to development in Go, to a neat in-cluster deployment using certificates issued by the cluster CA.

Now, let’s dive into the “learnings galore” from my journey so far. 

Make a pragmatic implementation choice

So people will tell you about initiatives like Gatekeeper and claim these make life easy, but really speaking, you have to make a choice between:

  • learning a totally new domain-specific language, and then possibly discovering that it is either too restrictive or too naive to debug, or
  • choosing one of the languages your development teams commonly use, making it a pragmatic choice for maintenance, even if you hate it to begin with. Which Node.js was in my case 😉

I am thinking the long-term strategy could be to leverage the best of both worlds: do the generic and simple ones using either an existing template from the open-policy-agent library or your own policy written in Rego, and BYO the ones with complex or business-specific logic in a general-purpose but lightweight language.

Minimize the blast radius

Our most complex use-case so far is (CI)CD driven. On our GKE clusters, we want to restrict groups to CRUD operations on their “owned” namespaces, e.g. only people in the pipeline team should be allowed to CRUD application namespaces, i.e. namespaces matching the pattern ^apps-.+. But GCP IAM does not let you fine-tune a permission like this; it applies cluster-wide. Understandably. Meaning, for instance, that if you grant your Cloud Identity group pipeline@digizoo.io “Kubernetes Engine Admin”, the lucky few get full management of the cluster and Kubernetes API objects. That’s it. No security control beyond that. To achieve what we wanted, we had to implement a ValidatingWebhookConfiguration that observes all operations, i.e. CREATE, UPDATE and DELETE, on “all” the relevant manifests. The trick to NOT simply intercept “all” manifests, but only those specific to the requirement at hand, was to extract the following two subsets from the universe of “all” manifests: all namespaced resources, and, among the non-namespaced resources, only Namespace operations, as shown in the two configurations below.

Subset of resource manifests to tackle
---
# Source: validation-admission-webhook-addon/templates/validatingwebhook.yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: myrelease-validation-admission-webhook-addon-namespaces-identities
  labels:
    app: myrelease-validation-admission-webhook-addon
webhooks:
  - name: myrelease-validation-admission-webhook-addon-namespaces-identities.digizoo.io
    clientConfig:
      service:
        name: myservice
        namespace: mynamespace
        path: "/validate/namespaces/identities"
      caBundle: mycacertbundle
    rules:
      - operations: [ "CREATE", "UPDATE", "DELETE" ]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["*/*"]
        scope: "Namespaced"
    failurePolicy: Ignore
---
# Source: validation-admission-webhook-addon/templates/validatingwebhook.yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: myrelease-validation-admission-webhook-addon-namespaces
  labels:
    app: myrelease-validation-admission-webhook-addon
webhooks:
  - name: myrelease-validation-admission-webhook-addon-namespaces.digizoo.io
    clientConfig:
      service:
        name: myservice
        namespace: mynamespace
        path: "/validate/namespaces/identities"
      caBundle: mycacertbundle
    rules:
      - operations: [ "CREATE", "UPDATE", "DELETE" ]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["namespaces"]
    failurePolicy: Ignore
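
These are Helm-rendered snippets; if you apply the rendered manifests directly, a quick sanity check looks like this (the file name is just an example):

# Register the webhook configurations and confirm they exist
kubectl apply -f validatingwebhook.yaml
kubectl get validatingwebhookconfigurations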

Deploy All-in-One

Over time, once you get addicted to solving every kube problem under the sun using webhooks, you will have many. Some simple, others hard. But from an operations perspective, it is best to bundle them up into one container image. You could then use something like Node.js Express routes to expose each of your many implementations. Referring to the snippet above, our namespace restriction webhook is available at “/validate/namespaces/identities”.

Test-Test-Test

Given the blast radius, it is easy to understand why you need to unit test your code against a lot of scenarios; otherwise you will have a frustratingly slow and repetitive development and test loop. For this reason, I started to appreciate Node.js for the admission webhooks use-case, as it allows a quick, iterative feedback loop of development and testing. What we have ended up with is lots of positive and negative unit tests for all the anticipated scenarios. And whenever we run into a new unhandled scenario, it is fairly quick to try a patch and re-test.

Err on the “safe” side, and “alert” enough

In the early stages of deploying the webhook in lower environments, we configure it with safe fallback options. So when it comes across an identity it is not configured for, or a manifest it cannot handle, the configuration allows it to err on the safe side and “let” the request in. But, equally importantly, the code generates INFO logs for those scenarios, so that we can alert on and hence action them. We also have cheat codes to exempt the god, system or rogue unseen identities, or to bypass it all entirely for when the cluster goes belly up.

{
  "bypass": false,
  "defaultMode": "allow",
  "exemptedMembers": ["superadmin@digizoo.io", "cluster-autoscaler"],
  "identitiesToNamespacesMapping": [
    {
      "identity": "gcp-pipeline-team@digizoo.io",
      "namespaces": [
        "^app-.+"
      ]
    }
  ]
}

Fail Safe if required

When the hook fails to provide a decision (an unhandled scenario, a bug in the implementation, or a timeout when the API server is unable to call the webhook), and the blast radius of your webhook is significantly large, it is a wise idea to fail safe: a soft “Ignore” failure policy instead of a hard “Fail”. But if you choose to “Ignore”, you also need a good alerting mechanism to deal with those failures, so you do not create a security hole. The way we have dealt with this is to monitor the webhook service. We run a Kubernetes CronJob to poll the service every minute and report if it cannot reach it. Be careful to choose the right parameters for your CronJob, like “concurrencyPolicy: Forbid” and “restartPolicy: Never”, and then monitor this monitor using Kubernetes Events published to Stackdriver. This tends to happen in our non-production deployments once a day, when our preemptible nodes are recycled and the service takes a minute or two to realise the pod is gone. (TODO: there should be a better way to fail fast with GCP preemptible node termination handlers.) (TODO: for defence-in-depth, it would be good to also monitor the API server logs for failures reaching the service, but at the moment Google is still working on access to master logs.)
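
A rough sketch of such a probe; the image, service name and schedule below are placeholders rather than our actual setup:

# Hypothetical probe: poll the webhook service every minute over HTTPS
kubectl create cronjob webhook-probe \
  --image=curlimages/curl \
  --schedule="*/1 * * * *" \
  -- curl -fsk https://myservice.mynamespace.svc/healthz

# concurrencyPolicy and restartPolicy cannot be set via the imperative command,
# so patch them afterwards (or define the CronJob in a manifest instead)
kubectl patch cronjob webhook-probe --type=merge \
  -p '{"spec":{"concurrencyPolicy":"Forbid","jobTemplate":{"spec":{"template":{"spec":{"restartPolicy":"Never"}}}}}}'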

Make it available “enough” with the “right” replicas

It is important to make sure your webhook service is available enough, with either of the fail-open or fail-close options above. Towards this, we are using pod anti-affinity with 3 replicas across the GCP zones. (TODO: we plan to leverage the guaranteed-scheduling pod annotations once they are available to us outside of the “kube-system” namespace, in cluster versions ≥ 1.17.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "validation-admission-webhook.fullname" . }}
  labels:
{{ include "validation-admission-webhook.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "validation-admission-webhook.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "validation-admission-webhook.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
      annotations:
        checksum/config: {{ include (print $.Chart.Name "/templates/configmap.yaml") . | sha256sum }}
    spec:
      securityContext:
        runAsUser: 65535
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ required "A valid image.repository entry required!" .Values.image.repository }}:{{ required "A valid image.tag entry required!" .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["node", "app.js", "{{ .Values.webhook_routes }}"]
          env:
            - name: DEBUG
              value: "{{ default "0" .Values.webhook_debug }}"
          ports:
            - name: https
              containerPort: 443
              protocol: TCP
          securityContext:
            capabilities:
              add:
                - NET_BIND_SERVICE
          livenessProbe:
            httpGet:
              path: /healthz
              port: 443
              scheme: HTTPS
          readinessProbe:
            httpGet:
              path: /healthz
              port: 443
              scheme: HTTPS
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: webhook-certs
              mountPath: /etc/webhook/certs
              readOnly: true
            - name: webhook-config
              mountPath: /usr/src/app/config
              readOnly: true
      volumes:
        - name: webhook-certs
          secret:
            secretName: {{ include "validation-admission-webhook.fullname" . }}-certs
        - name: webhook-config
          configMap:
            defaultMode: 420
            name: {{ include "validation-admission-webhook.fullname" . }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app.kubernetes.io/name
                    operator: In
                    values:
                      - {{ include "validation-admission-webhook.name" . }}
              topologyKey: "topology.kubernetes.io/zone"
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

A word about GKE Firewalling

By default, firewall rules restrict the cluster master’s communication with nodes to ports 443 (HTTPS) and 10250 (kubelet) only. Additionally, GKE by default enables the “enable-aggregator-routing” option, which makes the master bypass the service and communicate directly with the pod. Hence, either make sure to expose your webhook service and deployment on port 443, or poke a hole in the firewall for the port being used. It took me quite some time to work this out, as I was not using 443 originally, the failure policy was at the ValidatingWebhookConfiguration beta resource default of “Ignore”, and I did not have access to master logs in GKE.
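
For reference, poking that hole looks roughly like this; the rule name, network, master CIDR, node tag and port below are placeholders for your own cluster’s values:

# Allow the GKE master to reach the webhook on a non-443 port (example: 8443)
gcloud compute firewall-rules create allow-master-to-webhook \
  --network my-cluster-network \
  --source-ranges 172.16.0.0/28 \
  --target-tags my-cluster-node-tag \
  --allow tcp:8443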

Deployment Architecture

In summary, to help picture things, this is what the deployment architecture for the described suite of webhooks looks like. 

Validation Webhook Deployment Architecture

References