7  NGINX

7.0.1 Virtual Hosting via NGINX

NGINX, listening on ports 80 and 443, acts as the gateway to your server, handling all incoming web traffic. Its configuration is essential for directing traffic to the correct application containers managed by Docker Compose. Each container might host a distinct service, such as a database or a Shiny application, to which traffic must be routed.

Efficient Request Handling:

NGINX excels at handling many HTTP requests concurrently. It uses an event-driven, non-blocking architecture, which allows a single worker process to manage thousands of simultaneous connections. This efficiency is crucial for high-traffic websites.
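The event model is configured through a couple of top-level directives. A minimal sketch (the values here are illustrative defaults, not tuned recommendations):

worker_processes auto;        # spawn one worker process per CPU core

events {
    worker_connections 1024;  # max simultaneous connections per worker
}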

Load Balancing:

NGINX can distribute incoming HTTP requests across multiple backend servers. This process, known as load balancing, enhances the performance and reliability of web applications by preventing any single server from becoming a bottleneck.
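Round-robin is the default balancing method, but others are available. A hypothetical upstream (server names and ports are placeholders) showing two common directives:

upstream app_backend {
    least_conn;                    # prefer the server with the fewest active connections
    server localhost:9000;
    server localhost:9001;
    server localhost:9002 backup;  # only used when the others are unavailable
}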

Caching:

NGINX can store (cache) content from backend servers. When a similar HTTP request is made, NGINX serves the cached content instead of requesting it again from the backend server. This reduces latency and improves the speed of content delivery.
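A minimal proxy-cache sketch; the cache path, zone name, and backend port are assumptions for illustration:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g;

server {
    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 10m;         # keep successful responses for 10 minutes
        proxy_pass http://localhost:9000;  # placeholder backend
    }
}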

Security:

NGINX provides features like SSL/TLS termination, which secures data transmission by encrypting HTTP traffic. It also offers protection against various web attacks and can enforce strict HTTP security policies.
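Beyond SSL/TLS termination itself, strict policies are typically enforced with directives like these (illustrative values; adjust to your own security policy):

ssl_protocols TLSv1.2 TLSv1.3;                                   # refuse older, weaker protocols
add_header Strict-Transport-Security "max-age=31536000" always;  # force HTTPS on return visits
add_header X-Content-Type-Options "nosniff" always;              # disable MIME-type sniffing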

Content Compression:

NGINX can compress HTTP responses before sending them to the client. This reduces the amount of data transmitted over the network, speeding up the transfer of web pages, especially for users with slower internet connections.
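Compression is enabled with the gzip module; a typical sketch (the type list and level are illustrative):

gzip on;
gzip_comp_level 5;      # trade-off between CPU time and size reduction
gzip_min_length 256;    # skip responses too small to benefit
gzip_types text/css application/javascript application/json image/svg+xml;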

7.1 Default NGINX configuration for ndexr

The NGINX configuration must match the Docker Compose settings for connections to succeed. This means the ports, network settings, and service names defined in Docker Compose must correspond with the routes and proxy settings in NGINX. This alignment ensures that when a request hits your server, NGINX correctly routes it to the appropriate container managed by Docker Compose.

upstream shinyapp {
    server localhost:9000;
    server localhost:9001;
    server localhost:9002;
    server localhost:9003;
    server localhost:9004;
    server localhost:9005;
    server localhost:9006;
    server localhost:9007;
    server localhost:9008;
    server localhost:9009;
}


server {
    server_name koodle.io www.koodle.io;
    large_client_header_buffers 4 32k;

    location / {
        proxy_pass http://shinyapp/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 10m;
    }

    location /rstudio/ {
        # Use this line to allow a specific IP
        # allow 192.168.1.1;
        deny all;

        proxy_pass http://localhost:8787/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 10m;
    }

    location /glances/ {
        deny all;

        proxy_pass http://localhost:61208/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 10m;
        proxy_buffering off;
    }

    location /code/ {
        deny all;

        proxy_pass http://localhost:8080/;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
        proxy_set_header Accept-Encoding gzip;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/www.koodle.io/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/www.koodle.io/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

server {
    if ($host = koodle.io) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = www.koodle.io) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    server_name koodle.io www.koodle.io;
    listen 80;
    return 404; # managed by Certbot
}
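One detail worth noting: the $connection_upgrade variable used in the proxy settings above is not an NGINX built-in. It is conventionally defined with a map block in the http context (often in nginx.conf), along the lines of:

map $http_upgrade $connection_upgrade {
    default upgrade;   # when the client asks to upgrade (e.g., to WebSocket), pass it through
    ''      close;     # otherwise close the upstream connection normally
}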

7.1.1 Upstream for Load Balancing Default Incoming Traffic

The upstream block defines a group of server endpoints for an application (e.g., Shiny), and NGINX balances incoming requests among these clones of the application. Because languages like Python and R are effectively single-threaded, load balancing across multiple instances lets several users work in the same application without stepping on each other. In effect, your application gains many of the benefits of an asynchronous design without being rewritten as one: instead of one application instance serving n users, you run n instances, one per user.


upstream shinyapp {
    server localhost:9000; # first user goes here
    server localhost:9001; # then here 
    server localhost:9002; # if nothing is running here, NGINX skips it
    server localhost:9003; # and goes here etc.
    ...
}


server {
    ...
    location / {
        # Point to the defined upstream
        proxy_pass http://shinyapp/;
        ...
    }
    ...

}

7.1.2 Managing HTTPS Traffic

server {
    server_name koodle.io www.koodle.io;
    listen 443 ssl; # managed by Certbot
    ssl_certificate ...; # managed by Certbot
    ...
}

This server block manages HTTPS traffic, ensuring secure, encrypted communication, with Certbot automating SSL/TLS certificate handling. Remember: all of this lives on your server, because it is what you manage on your side to do your part in securing the web and your visitors' traffic and information.

7.1.3 Configuring Access with Location Blocks

location /rstudio/ { ... }
location /code/ { ... }
location /glances/ { ... }

The location blocks configure access to RStudio, VS Code, and Glances. deny all blocks external access by default. To allow specific IP addresses, add allow IPADDRESS; (replacing IPADDRESS with the actual address) before the deny all line. Any device behind an allowed IP can reach the URL; however, the password is still required for access.

location /rstudio/ {
    # Use this line to allow a specific IP
    allow 192.168.1.1;
    allow 203.0.113.45;  # example public address; 203.0.113.0/24 is reserved for documentation
    ... # etc. for all allowed IP addresses
    deny all;

    ...
}

7.1.4 Implementing HTTP to HTTPS Redirection

server {
    listen 80;
    ...
    return 301 https://$host$request_uri;
}

301 is an HTTP status code signifying a permanent redirect, ensuring anyone who visits your HTTP address gets forwarded to HTTPS. This means public access to the server requires going through HTTPS, with all incoming traffic encrypted and routed through port 443. The only other accessible port to the outside world is port 22 for SSH, accessible only with a key file.

The $host and $request_uri variables in NGINX dynamically determine where the visitor intended to go, ensuring they land on the right page in the HTTPS domain. This configuration consolidates traffic under a secure protocol, improving search engine ranking and ensuring data privacy.
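For a concrete sense of how the variables expand, here is how the redirect resolves for a sample request (URL chosen purely for illustration):

# Request:  http://koodle.io/rstudio/?tab=files
#   $host        expands to  koodle.io
#   $request_uri expands to  /rstudio/?tab=files
return 301 https://$host$request_uri;   # -> https://koodle.io/rstudio/?tab=files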

7.2 Understanding SSL/TLS Communication

SSL/TLS acts as a private communication channel between a computer and a website. The website presents a digital certificate to authenticate itself, and a unique secret key is generated for each session to encrypt messages. This encryption ensures that even if the data is intercepted, it remains unreadable. At the end of each session, the secret key is discarded and a new one is generated for every subsequent visit, safeguarding the integrity and confidentiality of the web communication.

7.2.1 Role of Certbot in Managing SSL/TLS Keys

Certbot is utilized by ndexr for managing the essential keys required for SSL/TLS security on a website. It handles two types of keys:

  1. Private Key: This key remains confidential, similar to a secret recipe. It is stored on the server with restricted access permissions. The private key is employed to decrypt the information sent by visitors to the website.

  2. Public Key: This key is publicly shared, acting like a padlock that visitors use to encrypt their messages to the website. It is visible to everyone as it is part of the website’s SSL certificate, but only the website has the corresponding private key to unlock these encrypted messages.

When a visitor sends a message to the website, they encrypt it using the public key. The website then uses its private key to decrypt these messages, ensuring a secure line of communication.

Certbot assists in acquiring and securing these keys, functioning like a vigilant assistant that maintains the confidentiality of the private key while facilitating secure communication with the public.

7.2.2 The Function of Certificate Authorities like Let’s Encrypt

Let’s Encrypt, a Certificate Authority (CA), is akin to a notary or passport office for the digital world. It issues digital certificates, notably SSL/TLS certificates, which are crucial for website security. Certificates from recognized CAs like Let’s Encrypt are trusted because they adhere to stringent issuance procedures, much like a passport from a reputable government is widely accepted. In contrast, self-signed certificates, created by individuals, may not be trusted by browsers due to the lack of external validation.

7.2.3 The Necessity of HTTPS for Secure Transactions

For services processing sensitive transactions, such as Stripe, the use of HTTPS is mandatory. HTTPS is essential not only as a technical measure but also as a fundamental aspect of a secure, trustworthy, and legally compliant online business. It ensures that sensitive information like credit card numbers and personal details are protected during online transactions, maintaining the security and privacy of user data. With your server now secured, the next step in application development involves integrating various tools, including NGINX, make, and Docker Compose. Each plays a pivotal role in building and maintaining your application.