John Mann

Kubernetes - GKE - Ingress


I have spent some time recently working with Kubernetes on Google Cloud Platform (GCP) using Google Kubernetes Engine (GKE). I have learned a lot and have read SO many tutorials, documents, and blogs. I've watched videos, worked through tutorials, and even posted requests for help.

Here is what I have learned.

  1. Documentation is not up to date.
  2. Tutorials are great if you are doing exactly that tutorial.
  3. Everything has limits.
  4. Success is an amazing feeling after you struggle.

So I'm going to explain, in detail, what I have learned, and include actual sample code that works. If it doesn't, let me know and I'll fix it.

The first thing you need to do is build a Docker image. I'm going to assume you know how to do that; if not, maybe I'll write another article on setting up a Docker image. Here is what is important:

Know what port is exposed in the Dockerfile!

-john mann- :-)
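
To make that concrete, here is a bare-bones sketch of what such a Dockerfile might look like, assuming a Node app (the base image, file names, and the app_version build arg are all hypothetical; the important parts are the ARG lines matching your build args and the EXPOSE line documenting the listening port):

# Hypothetical Dockerfile sketch
FROM node:10-alpine
ARG docker_env=production
ARG app_version=v1-0
ENV NODE_ENV=$docker_env APP_VERSION=$app_version
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
# your app must actually listen on this port for EXPOSE to mean anything
EXPOSE 8080
CMD ["node", "server.js"]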

Ok, let's assume you have run a build command like this (note that each --build-arg needs a name=value pair):

docker build --build-arg docker_env=staging --build-arg app_version=v1-2 -t myapp .

Great, you now have a local image. This is wonderful and fun, and you can run your image locally:

docker run -d -p 80:8080 --name MyDockerApp myapp:latest

That 80:8080 maps the external port 80 on your machine to the port exposed by the Docker image. In my case it is 8080, but you can change that in your Dockerfile (see the blockquote above).
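
To sanity-check the mapping, you can inspect the running container and hit the mapped port:

docker ps --filter name=MyDockerApp   # PORTS should show 0.0.0.0:80->8080/tcp
curl -I http://localhost/             # should return response headers from your app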

Ok, great: all of that is running locally and the world is a happy place. But no one else can see it, and you need to push it somewhere so some magical cloud can connect your amazing image to the wonderful web.

For this I chose Google Cloud, not because I'm a huge Google fan (I'm actually a huge Microsoft fan), but because that was what everyone else was using at the time. :-)

To get started, you will need to create a project: go to console.cloud.google.com and create a New Project. To use GKE successfully, you will also need to enable a billing account for it. Sorry, this tutorial is not free on GCP.

Ok, let's assume the project is created. Yay for you. Let's keep going. What is your project ID? You are going to need that.

So let's push your new Docker image to your Google Container Registry now.
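
One note: if you haven't pushed to gcr.io from this machine before, you may need to register gcloud as a Docker credential helper first:

gcloud auth configure-docker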

docker tag myapp:latest gcr.io/Your-Project-ID/myapp:v1
docker tag myapp:latest gcr.io/Your-Project-ID/myapp:latest
docker push gcr.io/Your-Project-ID/myapp:v1
docker push gcr.io/Your-Project-ID/myapp:latest

Yes, I tagged it twice: once for a specific version and once for latest. Just something I do; it keeps latest always pointing at the newest build while a specific version tag sticks around too.

Ok, now let's configure gcloud to make sure we are working in the correct project, zone, and region.

gcloud config set project Your-Project-ID
gcloud config set compute/zone us-east1-b
gcloud config set compute/region us-east1

I picked us-east1-b, but you can pick any one you like (as long as it supports the GKE features you need).
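
If you want to see what's available before picking, you can list them:

gcloud compute zones list
gcloud compute regions list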

Ok, now for the cluster creation:

gcloud container --project "Your-Project-ID" clusters create "myapp-staging" --zone "us-east1-b" --username "admin" --cluster-version "1.11.7-gke.4" --machine-type "n1-standard-2" --image-type "COS" --disk-type "pd-standard" --disk-size "100" --scopes "https://www.googleapis.com/auth/cloud-platform" --num-nodes "3" --enable-cloud-logging --enable-cloud-monitoring --no-enable-ip-alias --network "projects/Your-Project-ID/global/networks/default" --subnetwork "projects/Your-Project-ID/regions/us-east1/subnetworks/default" --addons HorizontalPodAutoscaling,HttpLoadBalancing --enable-autoupgrade --enable-autorepair --enable-autoscaling --min-nodes 3 --max-nodes 6

There is a bit more to that, but you can see I am specifying the machine type, and myapp-staging is the cluster name. This will be very helpful when you upgrade your deployments later on.

Ok, this part will take a little while: it has to create the nodes, spin up the cluster, and verify it. Be patient, it'll get there.
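
When it finishes, the create command also points kubectl at the new cluster for you, so you can confirm everything is up with:

kubectl get nodes

You should see three nodes in the Ready state.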

Now you need to get Helm ready to install the nginx-ingress controller. I use Helm because it creates the default backend for you, along with some other nice features.

Here's how you install Helm:

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
kubectl create namespace staging
helm init --tiller-namespace staging

I should point out that I am using a namespace for this because I want everything for this environment tied together in one place.

Ok, now let's create a service account for Tiller and give it the permissions it needs.

kubectl create serviceaccount --namespace staging tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=staging:tiller
helm init --service-account tiller --tiller-namespace staging --upgrade
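
To confirm Tiller is up and reachable under the new service account, check that both the client and the server report a version:

helm version --tiller-namespace staging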

Ok, so we have configured everything and we have NOTHING deployed. Isn't that fun! LOL. Ok, let's keep going. The first thing we want to do is deploy our app. I have created an app-deploy.yaml for this.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: myapp-staging
  name: myapp-staging
  namespace: staging
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: myapp-staging
  strategy:
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 2
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: myapp-staging
    spec:
      containers:
      - image: gcr.io/Your-Project-ID/myapp:latest
        imagePullPolicy: IfNotPresent
        name: myapp-staging
        ports:
        - containerPort: 8080  # the port exposed in the Dockerfile
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

Now that we have an app-deploy.yaml, let's deploy it.

kubectl apply -f app-deploy.yaml
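
You can watch the rollout finish with:

kubectl rollout status deployment/myapp-staging -n staging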

That should create a deployment in GKE (you'll see it under Workloads in the console). Now we need to expose it... kind of. We are really going to expose it to the cluster, but not to the internet. This is kind of confusing, so let me explain. We could have multiple services, and we want a load balancer to handle the internet traffic and then route it using an ingress that maps hosts to services. That ingress needs to know the names of the services it is going to send traffic to. So for now we are simply exposing the app as a service inside the cluster.

kubectl expose deployment myapp-staging --port=8080 --namespace staging
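
You can confirm the service exists (it will only have a cluster IP, no external one):

kubectl get svc -n staging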

Ok, now we need to add our SSL cert to the cluster as a secret.
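
If you don't have a real cert handy and just want to test the plumbing, you can generate a self-signed pair first (browsers will warn about it, but SSL will work):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout yourdomain.com.key -out yourdomain.com.crt -subj "/CN=staging.yourdomain.com"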

kubectl create secret tls tls-secret-staging --key yourdomain.com.key --cert yourdomain.com.crt -n staging

Ok, now that we have the secret, let's add our ingress resource for the nginx controller (which we haven't created yet; don't worry, we'll get to that shortly).

Here is an ingress-nginx.yaml file:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-resource-staging
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: staging.yourdomain.com
    http:
      paths:
      - backend:
          serviceName: myapp-staging
          servicePort: 8080
        path: /
  - host: host2.yourdomain.com
    http:
      paths:
      - backend:
          serviceName: myapp-staging
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - staging.yourdomain.com
    - host2.yourdomain.com
    secretName: tls-secret-staging

Now, you may wonder why I have two hosts pointed at the exact same backend service. I wanted to test my SSL out locally, and after the next step I'll have an IP that I can put in my hosts file (Mac: /etc/hosts, Windows: c:\windows\system32\drivers\etc\hosts; you'll need to run Notepad as admin on Windows, or sudo vim on Mac).
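
Once you have that external IP, the hosts file entries would look something like this (203.0.113.10 is a placeholder; swap in your real IP):

203.0.113.10   staging.yourdomain.com
203.0.113.10   host2.yourdomain.com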

Ok, now that we have our ingress, it's time to use Helm. Before you do that, make sure the tiller-deploy pod is running:

kubectl get pods -n staging

This should return something like this:

NAME                          READY   STATUS    RESTARTS   AGE
myapp-staging-c56f35-8kshd    1/1     Running   0          3h
myapp-staging-c56f35-c29db    1/1     Running   0          3h
tiller-deploy-778c56f35-bb8   1/1     Running   0          8d

If tiller-deploy is not showing READY 1/1, wait a little bit before moving on.

Now that it is ready, let's install the nginx controller using Helm:

helm install --name nginx-ingress-myapp-staging stable/nginx-ingress --set rbac.create=true --tiller-namespace staging

That will spin up the nginx controller and a default backend. It will automatically pick up the ingress resource we created earlier (that's what the kubernetes.io/ingress.class: nginx annotation is for).

Now run the following (the release's resources land in the default namespace, since we didn't pass --namespace to helm install):

kubectl get svc

That should return your nginx-ingress-myapp-staging controller service with an external IP (it may show <pending> for a minute or two while GCP provisions the load balancer).
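
The output should look roughly like this (IPs and ports are placeholders; the chart names the services after your release):

NAME                                          TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)                      AGE
nginx-ingress-myapp-staging-controller        LoadBalancer   10.0.1.2     203.0.113.10   80:31234/TCP,443:31235/TCP   2m
nginx-ingress-myapp-staging-default-backend   ClusterIP      10.0.2.3     <none>         80/TCP                       2m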

That is the IP you can use to update your local hosts file (or the DNS for your domain), and it will serve your app over SSL. Notice I added the ssl-redirect annotation in the ingress YAML.

THAT WORKED FOR ME! I can also explain how to use third-party email (like SendGrid or MailChimp) through GKE by leveraging an ExternalName service so your SSL will still work (especially if you are using click tracking in emails).
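
As a teaser, an ExternalName service is only a few lines; this sketch (names hypothetical) maps a cluster-internal service name onto an outside DNS name:

apiVersion: v1
kind: Service
metadata:
  name: email-tracking
  namespace: staging
spec:
  type: ExternalName
  externalName: sendgrid.net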

I hope this helps someone.... It was so much work... But it was a great feeling to know it all works.
