10 Putting It All Together: Traffic Management
The previous chapters covered NGINX, Docker, and Shiny individually. This chapter shows how they connect to serve multiple websites from one Shiny application, and why that matters.
10.1 The problem
You have one Shiny app. You want it to:
- Serve multiple domains (`ndexr.io`, `console.ndexr.io`, `home.ndexr.io`)
- Handle many concurrent users without freezing
- Stay responsive even when some users run heavy computations
- Serve static files (images, CSS, docs) without wasting R processes
A single R process cannot do all of this. The solution is a layered architecture where each component handles what it does best.
10.2 The three layers
```
                Internet (users)
                       |
                       v
                +-------------+
                |    NGINX    |  Layer 1: Traffic cop
                |  :80/:443   |  - SSL termination
                |             |  - Domain routing
                |             |  - Static file serving
                |             |  - Load balancing
                +------+------+
                       |
                       v
                +-------------+
                |   Docker    |  Layer 2: Process manager
                |   Compose   |  - Runs multiple copies of the app
                |             |  - Each copy gets its own port
                |             |  - Manages lifecycle (restart, scale)
                +------+------+
                       |
           +-----+-----+-----+-----+
           v     v     v     v     v
         :9001 :9002 :9003  ...  :9010
           R     R     R           R    Layer 3: Shiny workers
```
Each layer has one job. NGINX never runs R code. Docker never routes HTTP. Shiny never terminates SSL. This separation is what makes the system manageable.
10.3 How traffic flows
Here is what happens when a user visits https://ndexr.io:
1. DNS resolves `ndexr.io` to your server's IP address
2. NGINX accepts the connection on port 443 and terminates TLS
3. NGINX checks the `Host` header; it matches the `ndexr.io` server block
4. The request path is `/`, which maps to `proxy_pass http://ndexr_app/`
5. `ndexr_app` is an upstream pool of 10 Shiny workers (ports 9001-9010)
6. NGINX uses `ip_hash` to pick a worker; the same user always hits the same one
7. The chosen Shiny worker processes the request and returns the response
8. NGINX forwards the response back to the user
For a static file request like /images/logo.png, step 4 changes: NGINX serves the file directly from disk, and steps 5-7 never happen. This is why static files are fast — they bypass R entirely.
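The static shortcut corresponds to an ordinary `location` block that serves files from disk ahead of the proxied catch-all. A minimal sketch; the `root` path and cache duration here are illustrative, not taken from the repo:

```nginx
server {
    server_name ndexr.io www.ndexr.io;

    # Static assets: served straight from disk, no R process involved
    location /images/ {
        root /usr/share/nginx/static/ndexr;  # hypothetical path
        expires 7d;                          # let browsers cache assets
    }

    # Everything else: proxied to the Shiny worker pool
    location / {
        proxy_pass http://ndexr_app/;
    }
}
```

Because NGINX picks the most specific matching location, `/images/logo.png` never touches the upstream.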
10.4 Docker Compose: port ranges and scaling
The compose.yml defines the console service with a port range:

```yaml
console:
  build:
    context: .
  command: ["R", "-e", "shiny::runApp('/root/src')"]
  ports:
    - "9000-9010:8000"
```

The port mapping `9000-9010:8000` means: each container instance listens on port 8000 internally, but Docker maps each instance to a different host port in the range 9000 to 9010. The range defines the maximum number of instances you can run (11 in this case).
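To see which host port a given replica actually received, `docker compose port` reports the mapping; the `--index` flag selects a replica when the service is scaled. These commands require a running stack, so the output is environment-specific:

```shell
# Host port assigned to the first and second console replicas
docker compose port --index=1 console 8000
docker compose port --index=2 console 8000
```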
10.4.1 Scaling from 3 to 10
You don't need to run all 11 instances. The `--scale` flag controls how many:

```shell
# Scale to 3 instances (ports 9000, 9001, 9002)
docker compose up -d --scale console=3 --no-recreate

# Scale to 5 instances (ports 9000-9004)
docker compose up -d --scale console=5 --no-recreate

# Scale to 10 instances (ports 9000-9009)
docker compose up -d --scale console=10 --no-recreate
```

Or using the npm scripts defined in package.json:

```shell
# Default: scale to 3
npm run scale:console

# Scale to 10
NUM=10 npm run scale:console
```

10.4.2 What each scale level gives you
| Scale | Ports used | Concurrent capacity | Memory (approx) |
|---|---|---|---|
| 3 | 9000-9002 | 3 independent sessions | ~1.5 GB |
| 5 | 9000-9004 | 5 independent sessions | ~2.5 GB |
| 10 | 9000-9009 | 10 independent sessions | ~5 GB |
Each instance is a separate R process with its own memory. More instances mean more concurrent users can interact without blocking each other, but each instance consumes server RAM. Choose the scale based on your server size:
- t3.medium (4 GB RAM): scale 3
- t3.large (8 GB RAM): scale 5-7
- t3.xlarge (16 GB RAM): scale 10
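The memory column is simple arithmetic over a per-worker budget. A quick sanity check, assuming roughly 500 MB per R process (an approximation consistent with the table above):

```shell
# Rough memory budget: each Shiny worker is an independent R process,
# so total RAM grows linearly with the scale count.
per_worker_mb=500
for scale in 3 5 10; do
  echo "scale=$scale => ~$(( scale * per_worker_mb )) MB"
done
```

Leave headroom for the OS, NGINX, and any other containers when matching a scale level to an instance type.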
10.4.3 Why each port means a process
Docker Compose creates one container per scale unit. Each container:
- Runs its own `R -e "shiny::runApp('/root/src')"` process
- Listens on internal port 8000
- Gets mapped to a unique host port (9000, 9001, 9002, etc.)
This is different from threads or async — each is a fully independent R session with its own memory space. One crashing container does not affect the others. NGINX automatically skips unresponsive workers (max_fails=3, fail_timeout=10s) and routes traffic to healthy ones.
10.5 NGINX upstream: connecting the pieces
The NGINX upstream block must cover the full port range so it can find every running worker:
```nginx
upstream ndexr_app {
    ip_hash;
    keepalive 32;
    server 127.0.0.1:9001 max_fails=3 fail_timeout=10s;
    server 127.0.0.1:9002 max_fails=3 fail_timeout=10s;
    server 127.0.0.1:9003 max_fails=3 fail_timeout=10s;
    server 127.0.0.1:9004 max_fails=3 fail_timeout=10s;
    server 127.0.0.1:9005 max_fails=3 fail_timeout=10s;
    server 127.0.0.1:9006 max_fails=3 fail_timeout=10s;
    server 127.0.0.1:9007 max_fails=3 fail_timeout=10s;
    server 127.0.0.1:9008 max_fails=3 fail_timeout=10s;
    server 127.0.0.1:9009 max_fails=3 fail_timeout=10s;
    server 127.0.0.1:9010 max_fails=3 fail_timeout=10s;
}
```
Key settings:
- `ip_hash`: Same client IP always reaches the same worker. Critical for Shiny because sessions are stateful; switching workers mid-session loses the user's state.
- `keepalive 32`: Caches up to 32 idle connections to the upstream, so repeated requests skip the TCP handshake.
- `max_fails=3`, `fail_timeout=10s`: If a worker fails 3 times within a 10-second window, NGINX stops sending it traffic for the next 10 seconds. This handles containers that are starting up or have crashed.
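One caveat worth knowing: upstream `keepalive` only takes effect when proxied requests use HTTP/1.1 and do not forward the client's `Connection` header. This is the kind of detail a shared include such as `conf.d/common_proxy.conf` typically handles; a minimal sketch of the required directives:

```nginx
# Inside each proxied location:
proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
proxy_set_header Connection "";  # drop the default "Connection: close"
```

Note that Shiny's WebSocket connections instead need `Connection: upgrade`; an NGINX `map` on `$http_upgrade` is the usual way to satisfy both cases.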
If you are running 3 instances but the upstream lists 10 servers, NGINX tries all 10 and skips the 7 that aren’t responding. There is no need to edit the upstream when you scale up or down.
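Since the entries differ only in the port number, the repetitive upstream block can be generated rather than hand-typed. A small sketch (not necessarily how the repo does it):

```shell
# Emit one upstream server line per worker port, 9001 through 9010
for port in $(seq 9001 9010); do
  echo "    server 127.0.0.1:${port} max_fails=3 fail_timeout=10s;"
done
```

Redirect the output into the upstream file between the `upstream ndexr_app {` and `}` lines if you ever change the port range.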
10.6 Multiple domains, one app pool
NGINX virtual hosting lets multiple domains share the same upstream pool:
```nginx
# Public site
server {
    server_name ndexr.io www.ndexr.io;
    location / {
        proxy_pass http://ndexr_app/;
    }
}

# Admin console (IP-restricted tools)
server {
    server_name console.ndexr.io;
    location / {
        proxy_pass http://ndexr_app/;
    }
    location /dev/ {
        include conf.d/admin_list.conf;
        proxy_pass http://127.0.0.1:8000/;
    }
}

# Documentation (static only, no R)
server {
    server_name docs.ndexr.io;
    root /usr/share/nginx/static/ndexr/_book;
}
```
All three domains resolve to the same server IP. NGINX reads the Host header and routes to the matching server block. The Shiny app itself does not know or care which domain the user came from — it receives the same proxied request regardless.
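You can observe this Host-based routing directly from a shell. Here `203.0.113.10` is a placeholder for your server's IP, and plain HTTP is used to sidestep SNI; output depends on your running stack:

```shell
# Same server IP, different Host header, different server block:
curl -s -H "Host: ndexr.io"      http://203.0.113.10/ | head -n 1
curl -s -H "Host: docs.ndexr.io" http://203.0.113.10/ | head -n 1
```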
This architecture means you can:
- Add new subdomains without changing the Shiny app
- Apply different access rules per domain (admin IPs on console, open access on public)
- Serve entirely static sites (docs) without touching the app pool
- Route specific paths to other services (RStudio on `/rstudio/`, Glances on `/glances/`)
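Path-based routing to another service is just one more `location` block pointing at a different backend. A sketch for the RStudio case; port 8787 is RStudio Server's default, not a value confirmed by this repo:

```nginx
location /rstudio/ {
    # 8787 is RStudio Server's default port (assumption; adjust to your setup)
    proxy_pass http://127.0.0.1:8787/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

The trailing slashes on the location and `proxy_pass` strip the `/rstudio/` prefix before the request reaches the backend.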
10.7 The deployment pipeline
The `npm run deploy` command ties everything together:

```shell
npm run commit          # 1. Commit all changes
git push                # 2. Push to remote
npm run build           # 3. Rebuild all Docker images (including nginx with new static files)
npm run down            # 4. Stop all running containers
npm run up              # 5. Start all containers
npm run scale:console   # 6. Scale console to 3 instances (default)
```

When you update documentation, CSS, or NGINX configuration, the `npm run build` step rebuilds the nginx image, which copies your updated files into the container. When you update the Shiny app code, it rebuilds the console image. The down and up cycle ensures old containers are replaced with new ones.
To deploy with a different scale:

```shell
NUM=10 npm run deploy
```

10.8 Practical example: adding a new site
To serve a new domain from the same Shiny app:
1. DNS: Add an A record for `newsite.ndexr.io` pointing to your server IP
2. SSL: Run `DOMAIN=newsite.ndexr.io npm run cert`
3. NGINX: Create a new server block in `src/nginx/sites-enabled/newsite.ndexr.io`:
```nginx
server {
    listen 443 ssl http2;
    server_name newsite.ndexr.io;
    ssl_certificate /etc/letsencrypt/live/newsite.ndexr.io/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/newsite.ndexr.io/privkey.pem;

    location / {
        proxy_pass http://ndexr_app/;
        include conf.d/common_proxy.conf;
    }
}

server {
    listen 80;
    server_name newsite.ndexr.io;
    include conf.d/acme.conf;
    location / { return 301 https://$host$request_uri; }
}
```
4. Deploy: Run `npm run deploy`
No changes to the Shiny app, Docker Compose, or upstream configuration needed. NGINX handles the new domain and routes it to the existing worker pool.
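Before (or instead of) a full `npm run deploy`, you can validate and apply the new block in place. This assumes the NGINX container runs as a compose service named `nginx`:

```shell
# Check that the new server block parses cleanly
docker compose exec nginx nginx -t

# Reload NGINX without dropping existing connections
docker compose exec nginx nginx -s reload
```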
10.9 Summary
| Component | Responsibility | Config location |
|---|---|---|
| Docker Compose | Run N copies of the Shiny app | compose.yml — ports and --scale |
| NGINX upstream | Load balance across those copies | src/nginx/conf.d/upstreams.conf |
| NGINX server blocks | Route domains to the upstream | src/nginx/sites-enabled/*.ndexr.io |
| npm scripts | Orchestrate build and deploy | package.json — deploy, scale:console |
The port range in Docker Compose defines the maximum scale. The NGINX upstream covers that full range. Scaling up or down is a single command — no configuration changes required.