
Deploying Fat-Free Kubernetes (K3s) with Dashboard on AWS EC2 ARM instances

Kubernetes is perhaps the de facto orchestrator if you have decided to deploy your containerized microservices in production. In this post I will talk about K3s, a stripped-down version of Kubernetes, and the steps to install it on Amazon EC2 A1 instances.


I chose A1 instances because they deliver significant cost savings for scale-out and Arm-based applications such as web servers, containerized microservices, caching fleets, and distributed data stores, all of which are supported by the extensive Arm ecosystem. This goes well with the lightweight K3s flavor of Kubernetes.




k3s is a fully compliant, production-grade Kubernetes distribution that maintains an absolutely tiny footprint. Weighing in at less than 40 MB, it only needs 512 MB of RAM to run. This makes it perfect for any kind of computing that requires a minimal amount of memory and disk space.

k3s is designed for Edge computing, IoT, CI, and ARM. Even if you're working with something as small as a Raspberry Pi, k3s allows developers to utilize Kubernetes for production workloads. It simplifies operations, reducing the dependencies and steps needed to run a production Kubernetes cluster.

Installation is a breeze, considering that k3s is packaged as a single binary of less than 40 MB. Security isn't an afterthought either: TLS certificates are generated at install time to make sure that all communication is secure by default.



Installing K3s on Amazon EC2 A1 instance type

 

Launch an Amazon EC2 A1 instance with Debian 9 as the operating system. Then SSH into the node and proceed with the following:


# Install K3s

curl -sfL https://get.k3s.io | sh -


# Check for a Ready node; this takes about 30 seconds

k3s kubectl get node

By default, k3s doesn't assign roles to the nodes and allows pods to be scheduled on the master. If you want, you can change that with the following commands:

# label node as master

kubectl label node mymasternode kubernetes.io/role=master
kubectl label node mymasternode node-role.kubernetes.io/master=""

# exclude master from scheduling pods

kubectl taint nodes mymasternode node-role.kubernetes.io/master=:NoSchedule
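If a workload does need to land on the tainted master later, a matching toleration in its pod spec will allow it. A minimal sketch (the taint key and effect are the ones set above):

```yaml
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
```

With `operator: Exists`, the toleration matches the taint regardless of its value, so the exact value used when tainting doesn't matter.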

On the slave nodes run the following commands:

sudo curl -fSL "https://github.com/rancher/k3s/releases/download/v0.1.0/k3s-armhf" \
  -o /usr/local/bin/k3s && \
sudo chmod +x /usr/local/bin/k3s

 

After that, start the agent:

# NODE_TOKEN comes from /var/lib/rancher/k3s/server/node-token on the master

sudo k3s agent --server https://myserver:6443 --token ${NODE_TOKEN} &

Optionally, you can also set a label for the node. The commands should be run from the master node:

kubectl label node mynode kubernetes.io/role=node
kubectl label node mynode node-role.kubernetes.io/node=""


You can add more slave nodes using the same steps.

You are now ready to run a pod. For the first pod to run, I chose Nginx. Create a file at /home/admin/nginx-test.yaml with the following content:

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-unprivileged-test
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx-unprivileged-test
  ports:
  - protocol: TCP
    nodePort: 30123
    port: 8080
    name: http
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-unprivileged-test
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-unprivileged-test
    spec:
      containers:
      - image: nginxinc/nginx-unprivileged
        name: nginx-unprivileged-test
        ports:
        - containerPort: 8080
          name: http
        livenessProbe:
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 3
          periodSeconds: 3

 

The next step is deploying it to the cluster:

kubectl apply -f /home/admin/nginx-test.yaml

Since this is a NodePort service, k3s will open port 30123 on the A1 instance and you will be able to see the default Nginx page in the browser.
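One thing to keep in mind: NodePort values must fall within Kubernetes' default service node port range of 30000-32767, which 30123 does. A quick shell sanity check for any port you pick:

```shell
port=30123   # the nodePort from the manifest above
if [ "$port" -ge 30000 ] && [ "$port" -le 32767 ]; then
  echo "ok: $port is inside the default NodePort range"
else
  echo "error: $port is outside 30000-32767" >&2
fi
```

Ports outside that range are rejected by the API server unless the cluster was started with a custom --service-node-port-range.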

Installing Kubernetes Dashboard on Amazon EC2 A1 instance type

 

So now we have installed Kubernetes and have a pod running. Let's move on to installing a dashboard and a load balancer.

 

admin@k3s-master-1:~ $ kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
k3s-master-1   Ready    master   4h11m   v1.13.5-k3s.1
k3s-node-1     Ready    node     129m    v1.13.5-k3s.1
k3s-node-2     Ready    node     118m    v1.13.5-k3s.1
k3s-node-3     Ready    node     119m    v1.13.5-k3s.1
admin@k3s-master-1:~ $

 

To install the Web UI (Dashboard), we will need to download kubernetes-dashboard.yaml:

curl -sfL https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml > kubernetes-dashboard.yaml

Then change the image: the manifest points to the amd64 build, so replace it with the arm build.

    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-arm:v1.10.1
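If you prefer to script the edit, the image swap can be done with sed. A sketch, assuming the manifest contains the stock image name k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 shipped in the v1.10.1 recommended yaml:

```shell
# Rewrite the image reference from the amd64 build to the arm build.
image="k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
arm_image=$(printf '%s' "$image" | sed 's/-amd64:/-arm:/')
echo "$arm_image"   # k8s.gcr.io/kubernetes-dashboard-arm:v1.10.1
```

Applied to the downloaded file, that would be: sed -i 's/-amd64:/-arm:/' kubernetes-dashboard.yaml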


After that I copied the yaml file to the /var/lib/rancher/k3s/server/manifests directory and the pod was created. To access the pod you have to run the command kubectl proxy. This makes the dashboard accessible from the local host only. To reach it from a machine outside the cluster, you have to set up an SSH tunnel:

ssh -L8001:localhost:8001 <ip-address of the master>

After that you can access the dashboard via this link: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

In my environment, I selected the Token option and followed the instructions for creating a token as described here. As mentioned there, it is a sample user with all permissions, so in production you would have to make other choices.
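For reference, that sample user amounts to a ServiceAccount bound to the cluster-admin role. A sketch of the manifests (the name admin-user follows the dashboard project's example and is not mandatory; in production, bind a narrower role):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
```

After applying it, the token for the login screen can be read from the service account's secret in the kube-system namespace.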

 

The next step is adding load balancing. Out of the box you can use a NodePort service to expose ports to the outside, but this has limitations. It is covered in a separate post here.

