How to install Python 3 on an AWS EC2 instance?


On Amazon Linux 1, yum cannot find a package named python3:

sudo yum install python3 -y
Loaded plugins: priorities, update-motd, upgrade-helper
No package python3 available.
Error: Nothing to do

If you get the above error, try the following commands to install Python 3 on an AWS EC2 instance:

~$ sudo yum list | grep python3
~$ sudo yum install python34
Loaded plugins: priorities, update-motd, upgrade-helper
Resolving Dependencies
--> Running transaction check
---> Package python34.x86_64 0:3.4.10-1.45.amzn1 will be installed
--> Processing Dependency: python34-libs(x86-64) = 3.4.10-1.45.amzn1 for package: python34-3.4.10-1.45.amzn1.x86_64
--> Processing Dependency: libpython3.4m.so.1.0()(64bit) for package: python34-3.4.10-1.45.amzn1.x86_64
--> Running transaction check
---> Package python34-libs.x86_64 0:3.4.10-1.45.amzn1 will be installed
--> Finished Dependency Resolution
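
Once the install finishes, verify it. On Amazon Linux 1 the python34 package provides a python3 binary; if your AMI differs, call python3.4 directly. The python34-pip package name below is an assumption based on Amazon Linux 1 naming:

~$ python3 --version
Python 3.4.10
~$ sudo yum install python34-pip   # pip ships as a separate package (name assumed)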

How Facebook’s Calibra works



Air Canada AC016, a non-stop flight from Hong Kong to Toronto


Air Canada AC 16

Creating a Kubernetes cluster


How to create a Kubernetes cluster?

Launch three EC2 instances on AWS, then work through the following steps.

  1. First, use SSH to log in to all three machines. Once logged in, elevate privileges using sudo.
    sudo su  
  2. Disable SELinux.
    setenforce 0
    sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
  3. Enable the br_netfilter module for cluster communication. (A sketch for making these settings survive a reboot follows this list.)
    modprobe br_netfilter
    echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
  4. Ensure that the Docker dependencies are satisfied.
    yum install -y yum-utils device-mapper-persistent-data lvm2
  5. Add the Docker repo and install Docker.
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum install -y docker-ce
  6. Set the cgroup driver for Docker to systemd, then reload systemd, enable and start Docker. (A verification one-liner follows this list.)
    sed -i '/^ExecStart/ s/$/ --exec-opt native.cgroupdriver=systemd/' /usr/lib/systemd/system/docker.service
    systemctl daemon-reload
    systemctl enable docker --now
  7. Add the repo for Kubernetes.
    cat << EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
    https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF
  8. Install Kubernetes.
    yum install -y kubelet kubeadm kubectl
  9. Enable the kubelet service. The kubelet service will fail to start until the cluster is initialized; this is expected.
    systemctl enable kubelet
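
The echo into /proc in step 3 takes effect immediately but does not survive a reboot. Here is a minimal sketch for persisting it, assuming a systemd-based distro that reads /etc/sysctl.d/ at boot:

    cat << EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system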
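
To confirm step 6 took effect, docker info reports the active cgroup driver once Docker is running:

    docker info | grep -i cgroup   # expect: Cgroup Driver: systemd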

Note: Complete the following section on the MASTER ONLY!

  1. Initialize the cluster using the IP range for Flannel.
    kubeadm init --pod-network-cidr=10.244.0.0/16
  2. Copy the kubeadm join command that is in the output. We will need it later. (A placeholder example of its shape follows this list.)
  3. Exit the root shell, then copy admin.conf to your home directory and take ownership of it.
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  4. Deploy Flannel.
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  5. Check the cluster state.
    kubectl get pods --all-namespaces

    Note: Complete the following steps on the NODES ONLY!

  6. Run the join command that you copied earlier; this requires running it with sudo on the nodes. Then check your nodes from the master:
    kubectl get nodes
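
The join command printed by kubeadm init has the following general shape; every angle-bracketed value below is a placeholder for output from your own init run, not something to copy verbatim:

    kubeadm join <master-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>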

Create and scale a deployment using kubectl
  1. Create a simple deployment.
    kubectl create deployment nginx --image=nginx
  2. Inspect the pod.
    kubectl get pods
  3. Scale the deployment.
    kubectl scale deployment nginx --replicas=4
  4. Inspect the pods. You should now have 4.
    kubectl get pods
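
If you want to reach those pods from outside the cluster, one possible next step is to expose the deployment as a Service; NodePort is assumed here since this cluster has no cloud load balancer integration:

    kubectl expose deployment nginx --port=80 --type=NodePort
    kubectl get service nginx   # note the mapped node port, e.g. 80:3xxxx/TCP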

Building a Kubernetes Cluster with Kubeadm


To build a cluster, you need a master node and worker nodes, and Docker must be installed on all of them.

1. Install Docker on all three nodes.
  1. Do the following on all three nodes:
    sudo apt-get install -y gpg-agent
    
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
    sudo apt-get update
    sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
    sudo apt-mark hold docker-ce
  2. Verify that Docker is up and running with:
    sudo systemctl status docker
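
As an extra sanity check beyond systemctl status, you can run a throwaway container; this assumes the nodes have outbound internet access to pull the image:

    sudo docker run --rm hello-world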

2. Install Kubeadm, Kubelet, and Kubectl on all three nodes.

  1. Install the Kubernetes components by running this on all three nodes:
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
    deb https://apt.kubernetes.io/ kubernetes-xenial main
    EOF
    sudo apt-get update
    sudo apt-get install -y kubelet=1.12.7-00 kubeadm=1.12.7-00 kubectl=1.12.7-00
    sudo apt-mark hold kubelet kubeadm kubectl
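
Before bootstrapping, it is worth confirming that all three binaries landed at the pinned version:

    kubeadm version
    kubectl version --client
    kubelet --version   # all three should report v1.12.7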
3. Bootstrap the cluster on the Kube master node.
  1. On the Kube master node, do this:
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    That command may take a few minutes to complete.

  2. When it is done, set up the local kubeconfig:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Take note that the kubeadm init command printed a long kubeadm join command to the screen.

    You will need that kubeadm join command in the next step!

  3. Run the following command on the Kube master node to verify it is up and running:
    kubectl version

    This command should return both a Client Version and a Server Version.

  4. Join the two Kube worker nodes to the cluster.

    1. Copy the kubeadm join command that was printed by the kubeadm init command earlier, with the token and hash. Run this command on both worker nodes, but make sure you add sudo in front of it:
      sudo kubeadm join $some_ip:6443 --token $some_token --discovery-token-ca-cert-hash $some_hash
    2. Now, on the Kube master node, make sure your nodes joined the cluster successfully:
      kubectl get nodes

      Verify that all three of your nodes are listed. It will look something like this:

      NAME            STATUS     ROLES    AGE   VERSION
      ip-10-0-1-101   NotReady   master   30s   v1.12.2
      ip-10-0-1-102   NotReady   <none>   8s    v1.12.2
      ip-10-0-1-103   NotReady   <none>   5s    v1.12.2

      Note that the nodes are expected to be in the NotReady state for now.
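
      If you are curious why a node reports NotReady, its conditions spell it out; the Ready condition stays False until a pod network plugin is installed. The node name below is taken from the example output above:

      kubectl describe node ip-10-0-1-102   # see the Conditions section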

  5. Set up cluster networking with flannel.

    1. Turn on iptables bridge calls on all three nodes:
      echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
      sudo sysctl -p
    2. Next, run this only on the Kube master node:
      kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

      Now flannel is installed! Make sure it is working by checking the node status again:

      kubectl get nodes

      After a short time, all three nodes should be in the Ready state. If they are not all Ready the first time you run kubectl get nodes, wait a few moments and try again. It should look something like this:

      NAME            STATUS   ROLES    AGE   VERSION
      ip-10-0-1-101   Ready    master   85s   v1.12.2
      ip-10-0-1-102   Ready    <none>   63s   v1.12.2
      ip-10-0-1-103   Ready    <none>   60s   v1.12.2
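
      As a final check that the flannel DaemonSet actually scheduled a pod on every node (one flannel pod per node is expected with this manifest):

      kubectl get pods -n kube-system -o wide | grep flannel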