Quickstart

  1. Launch a server in your AWS account (pre-configured R/Python, Docker, NGINX).
  2. Attach a domain from Route53 (or add one).
  3. Enable HTTPS via Certbot (auto-renew).
  4. Deploy apps/containers and map paths to upstreams.
  5. Add more domains to the same box as needed.

Servers & Environments

Spin up a single, production-ready instance and host multiple domains/apps from it.

  • Instance: preloaded with R/Python, Docker, NGINX, cert tooling.
  • Performance: start small; scale instance type when traffic or builds grow.
  • Backups: keep IaC, app source, and data snapshots in your account.

Domains & HTTPS

Bind Route53 zones to your server and issue TLS certificates with zero-downtime renewals.

  • Route53: create/verify A and AAAA records; support dual-stack.
  • Certificates: HTTP-01 challenge via /.well-known/acme-challenge/.
  • Multi-domain: consolidate multiple hostnames on one server to cut costs.
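Under the assumptions above (Certbot in webroot mode, an illustrative hostname and webroot path), the port-80 side of this setup might look like the following sketch:

```nginx
# Port 80: serve ACME challenges, redirect everything else to HTTPS.
server {
    listen 80;
    listen [::]:80;                       # dual-stack, matching the A/AAAA records
    server_name example.com;              # illustrative hostname

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;            # assumed Certbot webroot path
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```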

Routing & Apps

NGINX handles host/path routing to upstream apps and containers.

  • HTTP→HTTPS: force redirect, preserve $host$request_uri.
  • Upstreams: map location / or subpaths to app ports or Docker services.
  • Static & Binaries: serve assets or artifacts with explicit types and headers.
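A minimal HTTPS server block illustrating the bullets above (ports, paths, and hostnames are placeholders, not a prescribed layout):

```nginx
# HTTPS server: host/path routing to app upstreams.
server {
    listen 443 ssl;
    server_name example.com;                        # illustrative hostname

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3838;           # e.g. a Shiny app port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /api/ {
        proxy_pass http://127.0.0.1:8000/;          # e.g. a container-published API
    }
}
```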

Shiny Modules & App Reading

Anatomy of Shiny Modules

A practical guide to modular development in Shiny: structure, naming, wiring, and patterns that keep large apps comprehensible and testable.

1) What is a module (and why it matters)

A Shiny module is a pair of functions with a shared ID space:

  • ui_<feature>(id, ...) — returns pure tags (no side-effects).
  • server_<feature>(id, ...) — registers observers/outputs and wires IO at the edges.

Benefits: reuse, local namespace, simpler mental model, and easy composition.

2) File layout that scales

Recommended tree
src/
  r/
    # feature modules
    orders/ui_orders.R
    orders/server_orders.R
    billing/ui_billing.R
    billing/server_billing.R

    # shared building blocks
    inputs/inputs.R
    components/head.R
    services/        # pure domain services (no shiny)
    gateways/        # AWS/DB/HTTP etc
  app.r              # app entry, routing, composition
Naming symmetry
  • Folder orders/ contains ui_orders() and server_orders().
  • A feature name appears consistently in UI, server, service, and tests.
  • If you can locate a feature by grepping its name, the shape is good.

3) Module contracts (what goes in/out)

  • UI contract: only returns tags; params are cosmetic or provide IDs/placeholders.
  • Server contract: exposes a return value (reactive or list of reactives) and accepts injected dependencies (services/gateways) as arguments.
  • Never read global state inside the module: pass what you need explicitly.
Example return shape
server_orders <- function(id, svc_orders) {
  moduleServer(id, function(input, output, session) {
    data <- reactiveVal(svc_orders$list(limit = 50))   # service = pure function/gateway wrapper
    output$table <- renderTable(data())
    # expose what the parent may need (e.g., selection)
    list(rows = reactive(data()))
  })
}

4) Wiring pattern (thin server, pure UI)

  1. ui_*: build HTML and inputs; zero IO.
  2. server_*: observe events → call service/domain → render outputs.
  3. Services: pure domain or small gateway functions that return values/errors.
  4. Composition: parent collects module return values and coordinates siblings.
# parent.r
out_orders <- server_orders("orders", svc_orders = orders_service)
observeEvent(input$refresh_all, {
  # e.g., trigger multiple children or recompute aggregate
})

5) Communication patterns

  • Parent → Child: pass functions or services down as parameters.
  • Child → Parent: child returns reactive values; parent reads them.
  • Sibling ↔ Sibling: go through the parent; avoid direct coupling.

6) Reactive discipline (stay predictable)

  • One small reactive per concern; avoid giant observers that do everything.
  • Prefer eventReactive() to compute on demand; use req() to guard.
  • Render functions only render; data prep belongs before them.

7) Error handling & UX

  • Catch gateway errors at the edge and show actionable messages (inputs$pan(), showNotification()).
  • In long ops, show progress: modal_transition('Please wait').
  • Never fail silently; always explain the next step.

8) Testing strategy (fast feedback)

  • Pure services: unit test easily (no shiny).
  • Server logic: test with shinytest2 or call server functions with fake inputs.
  • Golden UI: keep UI pure and small; visual diffs remain stable.

9) Performance knobs

  • Debounce/throttle expensive reactives (shiny::debounce).
  • Cache pure results (memoise or your own in-module cache).
  • Stream big tables (DT proxy) and paginate.

10) Common anti-patterns (and fixes)

  • UI with side-effects: move IO to server or service.
  • God-observer: split into small eventReactives/observers per output.
  • Hidden globals: pass dependencies explicitly (services, config).
  • Implicit cross-module state: return values; parent coordinates.

11) Minimal scaffold you can copy

# ui_orders.R
ui_orders <- function(id) {
  ns <- NS(id)
  tags$div(
    tags$h5("Orders"),
    tags$button(id = ns("refresh"), class = "btn btn-sm btn-primary action-button", "Refresh"),
    tableOutput(ns("tbl"))
  )
}

# server_orders.R
server_orders <- function(id, svc_orders) {
  moduleServer(id, function(input, output, session) {
    rows <- eventReactive(input$refresh, ignoreInit = TRUE, {
      svc_orders$list(limit = 50)   # pure/gateway call
    })
    output$tbl <- renderTable(rows())
    list(rows = rows)               # expose to parent
  })
}

12) Deployment checklist for a module

  • UI has only tags; Server is thin; Services are pure.
  • Input/output names are namespaced (ns('x')).
  • Errors are human and actionable.
  • Feature name is symmetric across files/commands.

Modularity is a reader experience: predictable names, small surfaces, and explicit edges.

How I Read an App: a Field Guide

A practical lens for understanding applications that mix a Rust CLI (ops/orchestration) with Shiny modules (product/UI).

1) Mental model (map before territory)

  • Two halves: Rust CLI drives infra & workflows; Shiny modules present product value.
  • Edges & core: effects (IO/infra) live at the edges; domain stays pure in the middle.
  • Symmetry: same feature names reappear across layers (CLI → services → UI/server).

2) Layers I look for

  1. CLI Orchestrator (Rust, cedrus): declarative subcommands that call shell/AWS; errors must be human.
  2. Infra gateways (Rust/Shell): typed wrappers around aws, ssh/scp, docker compose.
  3. Domain services (R pure): business rules, no IO; easy to unit test.
  4. Shiny server modules: thin wiring: observe → call domain → render; effects at the edges only.
  5. Shiny UI modules: pure tags, no side effects; naming mirrors server and feature.

3) Naming & symmetry (how I navigate fast)

  • Feature-first: orders, billing, auth across layers.
  • Module pairs: ui_orders_panel() ↔ server_orders() (pure UI vs. thin logic).
  • Rust commands ↔ flows: cedrus ndexr wire ≈ “push nginx + DNS + certbot”.

4) Read-path (how I crack a new feature)

  1. Start at the button/entry: find the UI action; jump to the matching server_*.
  2. Find the domain call: identify the *_service.R function (pure).
  3. Locate gateways: any IO is in services/ or handled by the Rust CLI.
  4. Trace the CLI: in Rust, cmd_ndexr.rs → handle_ndexr.rs → small helpers (ssh, run_aws).

If I can follow a feature end-to-end by string searching its name, the design is working.

5) Invariants I keep enforcing

  • UI is pure: no IO, no mutation; only tags and inputs/outputs.
  • Server is thin: wire events to domain; never bury business rules in observers.
  • Domain is pure: deterministic; testable; returns typed values or structured errors.
  • Gateways isolated: all external calls (AWS, SSH, DB) in one place.
  • Human errors: CLI errors explain cause + next step (not just codes).

6) Failure & recovery (what I expect)

  • Discover vs require: the CLI tries to discover A/AAAA; if missing, prompts a fix.
  • Explain + suggest: e.g., “Could not detect public IPv4. Either associate an EIP or run: cedrus ndexr dns --fqdn ... --zone ... --a <ip>”.
  • Wait for eventual consistency: Route53 INSYNC waiters.
  • Certbot guidance: if AAAA blocks HTTP-01, retry v4-only or suggest DNS-01.

7) My quick checklists

Product (Shiny)
  • ui_* contains only tags.
  • server_* calls *_service then renders.
  • Feature name links UI ↔ server ↔ service.
  • Reactive graph small & obvious.
Ops (Rust CLI)
  • Subcommands read like runbooks.
  • Edge helpers: ssh, scp, run_aws are tiny & logged.
  • Validation before mutation (e.g., IPv4 sanity).
  • Actionable errors + examples.

8) Reading order I follow in a new repo

  1. src/app.r : routes & composition.
  2. ui/* for surface; then server/* for wiring.
  3. */services/*_service.R for domain rules.
  4. services/* gateways (DB/HTTP/AWS).
  5. cedrus-cli/src/entrypoint/cmd/cmd_ndexr.rs → handlers/handle_ndexr.rs for automation flows.

9) Minis that capture the style

Rust (human-first error):

if !validate_ipv4(&a) {
  // bail! takes format arguments directly; no format! wrapper needed
  bail!(
    "Could not detect a valid public IPv4 (got: {}). \
     Either associate an Elastic IP or run: cedrus ndexr dns --fqdn {} --zone {} --a <IPv4>",
    a, fqdn, zone
  );
}

Shiny (thin server):

server_orders <- function(id, orders_svc) {
  moduleServer(id, function(input, output, session) {
    rx <- reactiveVal(tibble::tibble())
    observeEvent(input$refresh, {
      rx(orders_svc$fetch(limit = 50))  # pure; no side effects here
    }, ignoreInit = TRUE)
    output$tbl <- renderTable(rx())
  })
}

10) North stars I keep in view

  • Predictability over cleverness.
  • One truth per concern; one obvious place to look.
  • The next person can fix it in 10 minutes.
  • Every error teaches the user what to do next.

From ops to product: design for the reader. Symmetry, thin edges, pure core.


Rust CLI Anatomy

Anatomy of a Rust CLI

Building scalable systems applications: structure, naming, error strategy, and operational ergonomics.

1) Why a Rust CLI for systems work

  • Predictable startup, small static binaries, low runtime overhead.
  • Ergonomics via clap subcommands → clean separation of concerns.
  • Reliability with anyhow + actionable messages; no panics for user flows.
  • Observability first: colored logs, progress spinners, tailers, timers.

2) Layout that scales

src/
  entrypoint/
    cli.rs                 # clap tree: cedrus … subcommands
    handle_entrypoint.rs   # match (cmd, submatches) → delegate
  entrypoint/cmd/
    cmd_ndexr.rs           # command definitions (flags, help)
  entrypoint/handlers/
    handle_ndexr.rs        # actions (dns/eip/nginx/certbot/status)
  jobs/
    common/…               # execute, logging, timers, models
    lsf/…                  # LSF orchestrator, params, monitor
    slurm/…                # SLURM orchestrator, params, monitor
  logger/…                 # env_logger + color formatting
  utils/…                  # tailers, spinner_cmd, git, net, paths
  printenv/…               # diagnostics (env/compilers/R/etc.)
  • Symmetry: entrypoint cmd_* ↔ handler handle_* ↔ helper in utils/ or jobs/
  • One feature, one name: grep-able across CLI → handler → utilities.

3) CLI contract (clap → handler → service)

// cli.rs
Command::new("cedrus")
  .subcommand(cmd_ndexr())
  .subcommand_required(true);

// handle_entrypoint.rs
pub fn handle_entrypoint((name, m): (&str, Option<&ArgMatches>)) -> Result<()> {
  match (name, m) {
    // all arms return Result<()>, so the match typechecks
    ("ndexr", Some(mm)) => handle_ndexr(mm.subcommand()),
    ("printenv", _)     => { printenv::display_compiler_and_env_info(); Ok(()) }
    _ => Ok(())
  }
}
  • Parse at the edge: only CLI concerns in cmd_* .
  • Handle in the middle: structure flow, validate inputs, call services.
  • Do at the leaves: IO/shell/AWS/SSH live in small helpers.

4) Errors users can act on

  • anyhow::Context to annotate each hop: what we tried + next step.
  • Validate early (IPv4, paths) and bail with hints.
  • Async/remote steps: surface AWS/SSH stderr verbatim, then summarize.
let out = Command::new("aws").args(args).output().context("spawn aws")?;
if !out.status.success() {
  let e = String::from_utf8_lossy(&out.stderr);
  bail!("aws {:?} failed: {}", args, e);
}
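The “validate early” step can be as small as a strict parse. A hypothetical `validate_ipv4` sketch (the name mirrors the helper used elsewhere in this document; the implementation here is an assumption, not the real code):

```rust
use std::net::Ipv4Addr;

// Hypothetical validator: accept only a strict dotted-quad IPv4 that is
// plausible for public DNS (rejects unparseable, unspecified, loopback).
fn validate_ipv4(s: &str) -> bool {
    match s.trim().parse::<Ipv4Addr>() {
        Ok(ip) => !ip.is_unspecified() && !ip.is_loopback(),
        Err(_) => false,
    }
}
```

Parsing with `std::net::Ipv4Addr` avoids hand-rolled octet checks and keeps the bail-with-hint path (above) as the only place that talks to the user.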

5) Observability by default

  • Colored logs: logger::init_logger sets level & prefix (INFO/WARN/DEBUG).
  • Long ops: utils::spinner_cmd wraps child processes with a spinner + live lines.
  • Silence/elapsed timers: jobs/common/line_timer.rs fires on quiet or time.
  • Tail logs: poll/notify strategies for streaming build output.
LineTimer::silence_any(300, || log::warn!("⏳ no output for 5 mins"));
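A minimal sketch of the silence-timer idea using only std, with a channel's receive timeout as the clock (`LineTimer`'s real API may differ; `watch_silence` is an assumed name):

```rust
use std::sync::mpsc::{channel, RecvTimeoutError, Sender};
use std::thread;
use std::time::Duration;

// Spawn a watcher: every line of output sends a ping on the returned
// Sender; if no ping arrives within `window`, `on_silent` fires.
// Dropping the Sender stops the watcher thread.
fn watch_silence<F: Fn() + Send + 'static>(window: Duration, on_silent: F) -> Sender<()> {
    let (tx, rx) = channel::<()>();
    thread::spawn(move || loop {
        match rx.recv_timeout(window) {
            Ok(()) => {}                                   // activity: window resets
            Err(RecvTimeoutError::Timeout) => on_silent(), // quiet too long
            Err(RecvTimeoutError::Disconnected) => break,  // producer gone: stop
        }
    });
    tx
}
```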

6) Schedulers as first-class citizens

Normalize scripts → parse params → submit → monitor.

  • jobs/lsf/rewrite.rs + params.rs normalize #BSUB headers & log paths.
  • jobs/lsf/monitor.rs tails logs + memory checks + lockfile reactions.
  • jobs/slurm/* mirrors sbatch/squeue with the same flow.
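The “parse params” step could be sketched as a tiny header scanner; flag handling and function name are assumptions, not the real params.rs:

```rust
use std::collections::HashMap;

// Hypothetical sketch: collect `#BSUB -<flag> <value>` directives from an
// LSF script into a flag -> value map (last occurrence wins).
fn parse_bsub_params(script: &str) -> HashMap<String, String> {
    let mut params = HashMap::new();
    for line in script.lines() {
        if let Some(rest) = line.trim().strip_prefix("#BSUB ") {
            // split "-o /path/log.%J" into the flag and the remainder
            let mut parts = rest.splitn(2, char::is_whitespace);
            if let (Some(flag), Some(value)) = (parts.next(), parts.next()) {
                params.insert(
                    flag.trim_start_matches('-').to_string(),
                    value.trim().to_string(),
                );
            }
        }
    }
    params
}
```

Keeping the parser pure (a string in, a map out) is what makes the rewrite/monitor stages independently testable.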

7) Thin, testable helpers

  • utils/net.rs : shared reqwest client with sane timeouts/UA.
  • utils/git_cmd.rs : quiet git by default; elevate on DEBUG .
  • printenv/envscan.rs : snapshot compilers/GLIBC/R/Python/paths for bug reports.

8) Case study: ndexr wire-up

  1. Push NGINX: ssh mkdir -p … → scp rendered template → make up .
  2. Discover IPv4: IMDS curl with fallback.
  3. Route53 UPSERT: A (and optional AAAA), then wait INSYNC .
  4. Certbot: run & tail logs; recycle gateway.

9) Configuration & secrets

  • Prefer environment first (e.g., LLM_KEY, AWS_PROFILE).
  • Keep config discovery explicit per command; avoid hidden global state.
  • Never log credentials; only the mechanism that was chosen.

10) Testing strategy

  • Unit: pure helpers (parsers/validators/formatters).
  • Integration: spawn the CLI in a temp dir; assert logs & artifacts.
  • Golden logs: freeze segments of output (strip colors) for regressions.
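Stripping colors for golden logs can be a small scan over ANSI CSI escape sequences; one possible sketch:

```rust
// Remove ANSI CSI sequences (e.g. color codes like "\x1b[31m") so
// golden-log comparisons are stable across colored/uncolored runs.
fn strip_ansi(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    let mut chars = input.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '\x1b' && chars.peek() == Some(&'[') {
            chars.next(); // consume '['
            // skip parameter/intermediate bytes until the final byte (@ ..= ~)
            while let Some(&n) = chars.peek() {
                chars.next();
                if ('\u{40}'..='\u{7e}').contains(&n) { break; }
            }
        } else {
            out.push(c);
        }
    }
    out
}
```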

11) Minimal scaffold to copy

// entrypoint/cmd/cmd_sys.rs
pub fn cmd_sys() -> Command {
  Command::new("sys")
    .about("System tasks")
    .subcommand(Command::new("diag").about("Show diagnostics"))
    .subcommand(Command::new("tail").about("Tail a file")
      .arg(Arg::new("path").required(true)))
}

// entrypoint/handlers/handle_sys.rs
pub fn handle_sys((name, m): (&str, Option<&ArgMatches>)) -> Result<()> {
  match (name, m) {
    ("diag", _) => { printenv::display_compiler_and_env_info(); Ok(()) }
    ("tail", Some(mm)) => {
      let p = mm.get_one::<String>("path").unwrap();
      utils::log_monitor_notify::watch_log_file(p)?;
      Ok(())
    }
    _ => Ok(())
  }
}

12) Release checklist

  • Commands are orthogonal; flags named consistently across subcommands.
  • Errors propose fixes; logs readable without --debug .
  • Long ops show progress and tail important logs.
  • Schedulers/cloud steps are idempotent and resumable.
  • One-liners for diagnostics: cedrus printenv , … ndexr status .

Principle: tiny, named building blocks—CLI parses; handlers orchestrate; helpers do.


Troubleshooting

  • 403 on app path: confirm NGINX location blocks and upstream; check ACLs or auth middleware.
  • Cert errors: ensure port 80 open to /.well-known/acme-challenge/; verify DNS A/AAAA.
  • Reverse proxy timeouts: set proxy buffers/timeouts in a common include; right-size upstream.
  • Multi-domain collisions: unique server_name blocks; separate access logs for sanity.
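The “common include” for proxy buffers/timeouts might look like this fragment (path and values are illustrative; tune per upstream):

```nginx
# /etc/nginx/includes/proxy_common.conf — shared proxy defaults (illustrative)
proxy_connect_timeout 5s;      # fail fast if the upstream is down
proxy_send_timeout    60s;
proxy_read_timeout    60s;     # raise for long-running app endpoints
proxy_buffers         16 16k;
proxy_buffer_size     32k;
```

Each proxying `location` can then pull it in with `include /etc/nginx/includes/proxy_common.conf;`, so timeout tuning lives in one place.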