How to automate Nginx Reverse Proxy and SSL creation

Reverse Proxy SSL Automation with Nginx, Docker, Letsencrypt and Cron

A Reverse Proxy with SSL encryption enables you to

  • pass requests addressed to your external server names (like www.example.com) to your upstream services (like http://localhost:3000) and
  • access your applications securely through the HTTPS protocol.

In the YouTube video below we show you how to build Reverse Proxy and SSL automation with Nginx, Docker, Letsencrypt and Cron.

We have also released this Nginx Reverse Proxy SSL Automation as a product, so you can either buy it or build it yourself.

It comes with a Dockerfile and scripts to build your Nginx Reverse Proxy Docker image, and you can run it with docker run or Docker Compose.

It enables you to map your server names to their upstream services, whether they run on your host, in your Docker containers, in your Docker Compose services or on other servers, with just one command.
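For reference, a single server-name-to-upstream mapping of the kind described above comes down to an Nginx server block similar to this sketch. The server name, upstream port and certificate paths are illustrative placeholders, not part of the product:

```nginx
# Hypothetical server block: proxy https://www.example.com to http://localhost:3000
server {
    listen 443 ssl;
    server_name www.example.com;

    # Certificate files as issued by Letsencrypt (paths are illustrative)
    ssl_certificate     /etc/letsencrypt/live/www.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3000;
        # Forward the original host and client IP to the upstream service
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The automation generates and renews one such block (plus the matching certificate) per server name, which is what makes the one-command workflow possible.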

How to run a Chainlink Node and Postgres Database with Docker

Problem Statement:

You want to set up a Chainlink node quickly with Docker, without paying additional fees for a cloud-based Postgres database service, e.g. on the Google Cloud Platform (GCP)?

Solution:

You can achieve this by running the Chainlink Node itself and also its PostgreSQL database in separate Docker containers.
Watch this YouTube video to see how to set up Chainlink and its PostgreSQL database with Docker:

With this approach you save the fees for the database service, and you also save time by utilizing a Container Optimized OS (COS) image with Docker already preinstalled.

Follow this 10-step process to set up the Chainlink node and its PostgreSQL database with Docker:

  1. Create a VM instance on the Google Cloud Platform (GCP). Choose a machine type with at least 2GB of memory and change the boot disk from Debian to Container Optimized OS. Then create the instance and SSH into it.
  2. Create the directories for the Chainlink Database and the Chainlink Node:
    mkdir -p chainlink/db
    mkdir -p chainlink/chainlink_rinkeby
  3. Create the container for the PostgreSQL database
    docker run --name postgres-chainlink -v $HOME/chainlink/db:/var/lib/postgresql/data -e POSTGRES_PASSWORD=myPostgresPW -d -p 5432:5432 postgres:11.12
  4. Create the chainlink Postgres user in the postgres database container:
    docker exec -it postgres-chainlink psql -U postgres -c "CREATE USER chainlink WITH PASSWORD 'myChainlinkPW';"
  5. Create the Chainlink database (for the Rinkeby test network in this sample)
    docker exec -it postgres-chainlink psql -U postgres -c 'CREATE DATABASE "chainlink_rinkeby";'
  6. Grant the privileges to the chainlink user
    docker exec -it postgres-chainlink psql -U postgres -c 'GRANT ALL PRIVILEGES ON DATABASE "chainlink_rinkeby" TO chainlink;'
  7. Create the .env file for the chainlink node and refer to the required Ethereum network and to our new Postgres Database
    vi chainlink/chainlink_rinkeby/.env

    and enter

    ROOT=/chainlink
    LOG_LEVEL=debug
    ETH_CHAIN_ID=4
    MIN_OUTGOING_CONFIRMATIONS=2
    LINK_CONTRACT_ADDRESS=0x01BE23585060835E02B77ef475b0Cc51aA1e0709
    CHAINLINK_TLS_PORT=0
    SECURE_COOKIES=false
    GAS_UPDATER_ENABLED=true
    ALLOW_ORIGINS=*
    ETH_URL=wss://rinkeby.infura.io/ws/v3/<YOUR_INFURA_PROJECT_ID>
    DATABASE_URL=postgresql://chainlink:myChainlinkPW@localhost:5432/chainlink_rinkeby?sslmode=disable

    For this demo we use Infura as external provider for the connectivity to the Ethereum blockchain.
    If you want to use Infura as well, make sure that you adapt the Infura Project ID accordingly.

    Also make sure that you use the same Chainlink Postgres password here that you have used to create the Chainlink Postgres User before.

  8. Create the .password file which holds the password for your node wallet
    vi chainlink/chainlink_rinkeby/.password

    Enter your password for your node wallet. This password
    – must be longer than 12 characters
    – must contain at least 3 uppercase characters
    – must contain at least 3 numbers
    – must contain at least 3 symbols

  9. Create the .api file which holds the credentials for the GUI interface of the node
    vi chainlink/chainlink_rinkeby/.api

    and enter your email address and password. This password must be 8 to 50 characters.

    <YOUR_EMAIL_ADDRESS>
    <YOUR_NODE_GUI_PASSWORD>

  10. Now we can create the container for the chainlink node itself
    docker run --name chainlink_rinkeby --network host -p 6688:6688 -v $HOME/chainlink/chainlink_rinkeby:/chainlink -it --env-file=$HOME/chainlink/chainlink_rinkeby/.env smartcontract/chainlink:0.10.8 local n -p /chainlink/.password -a /chainlink/.api

    Note that we have added "--network host" to the command since we run the database locally from the node's perspective. (With host networking, the "-p 6688:6688" mapping is effectively ignored; the node's port is reachable on the host directly.)
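As an aside, the docker run commands from steps 3 and 10 can also be expressed as a single Docker Compose file. This is only a sketch derived from the commands above, not part of the original walkthrough; paths are assumed to be relative to your home directory:

```yaml
version: "3"
services:
  postgres-chainlink:
    image: postgres:11.12
    volumes:
      - ./chainlink/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: myPostgresPW
    ports:
      - "5432:5432"
  chainlink_rinkeby:
    image: smartcontract/chainlink:0.10.8
    network_mode: host        # matches the --network host flag used above
    volumes:
      - ./chainlink/chainlink_rinkeby:/chainlink
    env_file:
      - ./chainlink/chainlink_rinkeby/.env
    command: local n -p /chainlink/.password -a /chainlink/.api
    depends_on:
      - postgres-chainlink
    stdin_open: true          # mirrors the -it flags of docker run
    tty: true
```

With this file in place, `docker-compose up -d` starts the database first and then the node, replacing the two separate docker run commands.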

Access the GUI of your new Chainlink node:

  1. Open a command prompt on your local machine and authenticate with gcloud, in order to be able to access the GUI of your new Chainlink node
    gcloud auth login

    Note: Alternatively, you can download your API keys and authenticate with them

  2. Create an SSH tunnel for port 6688
    gcloud compute ssh instance-1 --project <YOUR_GCP_PROJECT_ID> --zone=<YOUR_GCP_ZONE> -- -L 6688:localhost:6688

 

Maintenance – How to stop and start your chainlink containers from your VM SSH shell:

Stop and start the PostgreSQL database container:

docker stop postgres-chainlink
docker start postgres-chainlink

Stop and start the Chainlink Node container:

docker stop chainlink_rinkeby
docker start chainlink_rinkeby

(Start and attach: docker start -i chainlink_rinkeby)

Detach from and attach to the Chainlink Node container:

Ctrl-P, Ctrl-Q
docker attach chainlink_rinkeby

Need further support or consulting?

Please check out our Consulting hours.

 

 

Persistance of Linux Users in Kubernetes Pods

Problem statement:

Linux users which are dynamically created in Kubernetes containers are not persistent. Therefore, whenever a Kubernetes Pod gets restarted, the dynamically created users are lost.

For example, if you want to run multiple vhosts in a Kubernetes container under separate UIDs and GIDs with apache2-mpm-itk (an MPM, i.e. Multi-Processing Module, for the Apache web server), you need a solution for persisting Linux users like the following:

Solution:

This solution describes how to survive Kubernetes Pod restarts by dynamically recreating Linux users in a Kubernetes container whenever a new pod is created.

  1. Create a Kubernetes ConfigMap (or Secret) for storing custom users:
    apiVersion: v1
    kind: ConfigMap
    metadata:
     name: my-linux-users
    data:
     linux-users: ""
  2. Create a script which reads the custom users from the ConfigMap mounted in /etc/linux-users and which recreates the users when the pod is restarted. Store the script in another Kubernetes ConfigMap as follows:
    apiVersion: v1
    kind: ConfigMap
    metadata:
     name: my-sync-script
    data:
     install-linux-users.sh: |
       # Install existing linux users mounted from the ConfigMap in the /etc/linux-users file
       echo "Install Linux Users..."
       awk -F ':' '{ system("groupadd " $1); system("useradd -c " $5 " -s /usr/sbin/nologin -d " $6 " -m -g " $4 " " $1) }' /etc/linux-users
  3. Mount the Kubernetes ConfigMaps in your StatefulSet (or Deployment) where your container resides (excerpt)
    apiVersion: apps/v1
    kind: StatefulSet
    spec:
     containers:
     - image: ...
       name: ...
       volumeMounts:
       - name: linux-users
         mountPath: /etc/linux-users
         subPath: linux-users
         readOnly: true
       - name: install-conf
         mountPath: /usr/local/bin/install-linux-users.sh
         subPath: install-linux-users.sh
         readOnly: true
     volumes:
     - name: linux-users
       configMap:
         name: my-linux-users
         items:
         - key: linux-users
           path: linux-users
     - name: install-conf
       configMap:
         name: my-sync-script
         items:
         - key: install-linux-users.sh
           path: install-linux-users.sh
  4. Hook into the postStart event, which fires when a container is created, and run the script which recreates the users
    ....
     lifecycle:
       postStart:
         exec:
           command:
           - "/bin/bash"
           - "-eu"
           - "/usr/local/bin/install-linux-users.sh"
  5. Whenever you create a Linux user in your Kubernetes container, get the custom user list from the /etc/passwd file and save it in a temp directory.
     # Create linux user
     groupadd ${WWW_GROUP}
     useradd -c ${WWW_COMMENT} -s /bin/false -d /home/${WWW_USER} -m -g ${WWW_GROUP} ${WWW_USER}
     # Save custom linux users (filtered by a common comment) in a temp directory
     grep ${WWW_COMMENT} /etc/passwd >${LINUX_USERS_FILE}
  6. Get the list of your linux users from the container
    kubectl cp ${NAMESPACE}/${WEBSERVER_POD}:${LINUX_USERS_FILE} ${LOCAL_LINUX_USERS_FILE} -c ${WEBSERVER_CONTAINER}
  7. Update the ConfigMap with the linux-users file
    kubectl create configmap my-linux-users --namespace=${NAMESPACE} --from-file=${LOCAL_LINUX_USERS_FILE} --save-config -o=yaml --dry-run=client | kubectl apply -f -
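The awk logic from the install script in step 2 can be dry-run locally to see which commands it would issue for a single passwd-style line. The sample user below is made up for illustration, and print is used instead of system so nothing is actually executed:

```shell
# Sample /etc/passwd-style line: name:x:uid:gid:comment:home:shell
line='vhost1:x:2001:2001:www-user:/var/www/vhost1:/usr/sbin/nologin'

# Same field logic as install-linux-users.sh, but printing instead of executing
printf '%s\n' "$line" | awk -F ':' '{
  print "groupadd " $1;
  print "useradd -c " $5 " -s /usr/sbin/nologin -d " $6 " -m -g " $4 " " $1
}'
# prints:
#   groupadd vhost1
#   useradd -c www-user -s /usr/sbin/nologin -d /var/www/vhost1 -m -g 2001 vhost1
```

This makes it easy to verify the field mapping ($1 user/group, $4 GID, $5 comment, $6 home) before letting the postStart hook run the real commands.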

 

Need further support or Kubernetes Consulting?

Please check out our Consulting hours.

You don’t want to go into technical details anymore?

Check out the Blue Antoinette Commerce Cloud, which is built on and abstracts away the complexities of Kubernetes.