
Bootstrapping an HTTPS-First Web App Without Relying on Managed TLS

A practical guide to delivering an HTTPS-first web app in a fully bootstrapped stack, with automated TLS and lean infrastructure.


I’m running a small, bootstrapped web app in the San Francisco Bay Area, and I’ve kept TLS entirely under my control—no vendor-managed TLS, no opaque CDNs. It’s not glamorous, but it’s predictable, cheap, and repeatable. If you’re building a product solo or with a tiny team, you don’t need a cloud TLS service to be secure and trustworthy. You need a solid, automated setup you understand end-to-end.

In this post I’ll walk through a practical, “bootstrapped” approach to delivering an HTTPS-first web app with automated TLS and lean infrastructure. I’ll show real commands, minimal configs, and the hard lessons I learned along the way.


Why HTTPS-First matters for bootstrapped projects

  • Users trust a site that loads securely. If you’re bootstrapping revenue or early customers, trust matters more than hype.
  • TLS isn’t a “nice-to-have” for indie projects; it’s table stakes. You don’t want a data leak or a credential exposure to torpedo growth.
  • You’ll sleep better knowing your certs aren’t tied to a single cloud account or a particular vendor’s console.

That said, I also believe in keeping things lean. HTTPS should be automatic, reliable, and cheap. The goal is not “invent the wheel” every time; it’s to automate the boring bits so you can focus on shipping.


The bootstrap architecture (a lean, reproducible stack)

  • Domain: example.com (and www.example.com)
  • DNS: any provider you can manage records with (e.g. Cloudflare in DNS-only mode, or your registrar’s DNS)
  • Host: a small VPS/VM (Ubuntu 22.04+). I regularly use $5–$20/mo VPSs; the math scales nicely as you add more services, but you don’t need a fancy cloud account to start.
  • Edge TLS termination: Nginx (reverse proxy) on port 443 with TLS certificates from Let’s Encrypt
  • App: a simple Node.js (Express) app listening on localhost:8080, behind Nginx
  • TLS automation: certbot with the nginx plugin for automated certificate issuance and renewal
  • Security hardening: HSTS, TLS 1.2/1.3, recommended ciphers, and redirect all HTTP to HTTPS
  • Observability: basic health checks, certificate expiry monitoring, logs

Important choice: you’re not using a “managed TLS” service. Let’s Encrypt certs are automated, but you own the private keys. The renewal logic is part of your deployment, not a quarterly vendor policy.


Step-by-step: how I set this up

Below are the concrete steps I follow. They’re written to be reproducible on a clean Ubuntu 22.04+ host. If you’re on Debian, the package names are similar; adjust accordingly.

  1. Prepare the host (hardening the basics)
  • Update and install a minimal stack:
    • Nginx (TLS terminator)
    • certbot (Let’s Encrypt client) and the nginx plugin
    • Node.js (or your preferred runtime)

Commands:

sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install -y nginx certbot python3-certbot-nginx
# Node.js (example)
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs
  • Firewall basics (allow SSH and HTTP/HTTPS only):
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
  2. Deploy a minimal app behind Nginx

I keep the app small and self-contained so I can reuse it later. Here’s a tiny Express app (for demonstration) that serves JSON and a friendly root page.

app.js:

const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('<h1>Hello HTTPS</h1><p>This is a bootstrapped, TLS-secured app.</p>');
});

app.get('/health', (req, res) => {
  res.json({ ok: true, ts: Date.now() });
});

const port = process.env.PORT || 8080;
app.listen(port, () => console.log(`App listening on port ${port}`));

package.json (optional, for quick start):

{
  "name": "secure-boot",
  "version": "0.1.0",
  "dependencies": {
    "express": "^4.18.2"
  },
  "scripts": {
    "start": "node app.js"
  }
}
  • Run it in the background (as a systemd service, see next snippet), or run manually with npm start.
  3. Put Nginx in front of the app (TLS terminator)

Create an Nginx config that forwards traffic to localhost:8080 and enforces TLS.

nginx.conf snippet (or your site config under /etc/nginx/sites-available/):

# Redirect HTTP to HTTPS (but keep ACME challenges reachable over plain HTTP)
server {
  listen 80;
  server_name example.com www.example.com;

  location /.well-known/acme-challenge/ {
    root /var/www/html;
  }

  # Note: a bare server-level "return 301" would run before location matching
  # and redirect the ACME path too, so the redirect lives in its own location.
  location / {
    return 301 https://$host$request_uri;
  }
}

# HTTPS server
server {
  listen 443 ssl http2;
  server_name example.com www.example.com;

  ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

  add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

  location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
  }

  location /health {
    proxy_pass http://127.0.0.1:8080/health;
  }
}

Notes:

  • Redirect port 80 to 443 so every request ends up encrypted.
  • The HSTS header helps browsers enforce HTTPS on future requests. Be careful: once enabled, it’s hard to roll back if you misconfigure.
  • The TLS cert paths are the standard Let’s Encrypt live directory. They (and the options-ssl-nginx.conf and ssl-dhparams.pem includes) exist only after the first certbot run, so if nginx -t fails before that, start with just the port-80 server block and let certbot --nginx add the TLS block.
  4. Run the app behind systemd (reliable startup)

Create a simple systemd service for the Node app:

/etc/systemd/system/secure-boot.service:

[Unit]
Description=Secure Bootstrapped App
After=network.target

[Service]
WorkingDirectory=/opt/secure-boot
ExecStart=/usr/bin/node /opt/secure-boot/app.js
Restart=always
User=nobody
Environment=PORT=8080
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
  • Start and enable:
sudo mkdir -p /opt/secure-boot
sudo cp app.js /opt/secure-boot/
sudo cp package.json /opt/secure-boot/   # if you use npm
cd /opt/secure-boot && sudo npm install --omit=dev   # installs express
sudo systemctl daemon-reload
sudo systemctl enable --now secure-boot
  5. Obtain the TLS certificate with Let’s Encrypt

With certbot and the nginx plugin, obtaining a certificate is straightforward.

Commands:

# Uses the nginx plugin installed earlier (python3-certbot-nginx)
sudo certbot --nginx -d example.com -d www.example.com

What certbot does:

  • It proves you control the domain (via the HTTP-01 challenge) and fetches a cert from Let’s Encrypt
  • It modifies your nginx config to use the new cert
  • It leaves you with a certificate that’s valid for ~90 days
  6. Automate renewal and reloading

Let’s Encrypt certs are short-lived by design. Automate renewal and ensure Nginx reloads on certificate updates.

# Renewal itself is handled by certbot's built-in timer/cron; running renew
# once with --deploy-hook persists the hook in the renewal config, so future
# automatic renewals also reload Nginx
sudo certbot renew --deploy-hook "systemctl reload nginx"

# Enable the renewal timer (systemd)
sudo systemctl enable certbot.timer
sudo systemctl start certbot.timer

This setup triggers renewals automatically, typically twice a day, and reloads Nginx after each successful renewal. You don’t need to babysit certs.
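An equivalent, more explicit option is to drop a script into certbot’s deploy-hook directory; certbot runs every executable in that directory after each successful renewal, no flags required:

```
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/reload-nginx.sh
# certbot executes scripts in this directory after every successful renewal.
# Remember to chmod +x this file.
systemctl reload nginx
```

I like this variant because the hook lives as a file you can version-control, rather than a flag buried in a renewal config.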

  7. Basic TLS hardening (practical defaults)

Inside the nginx HTTPS server block, you should harden TLS settings. The Let’s Encrypt install provides recommended default config files at /etc/letsencrypt/options-ssl-nginx.conf and /etc/letsencrypt/ssl-dhparams.pem. You can keep those, plus a few project-specific tweaks:

  • Enforce modern TLS:

    • ssl_protocols TLSv1.2 TLSv1.3;
    • ssl_ciphers with strong modern suites (the Let’s Encrypt options file ships a sane list)
    • ssl_prefer_server_ciphers off; (modern guidance: when every offered suite is strong, let clients choose)
    • enable HTTP/2 (already in the config above)
  • Enable HSTS cautiously:

    • add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
  • Consider OCSP stapling (enabled via Let’s Encrypt’s recommended config) to speed up TLS handshakes.

In practice, I start with the defaults Let’s Encrypt provides, then tune only if I need to support unusual clients or performance constraints.
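For reference, here is roughly what those directives look like if you manage them yourself instead of via the include. Treat this as a sketch along the lines of common “intermediate” TLS guidance, not a drop-in file:

```
# Hardening sketch for the HTTPS server block.
# If you set these directly, drop the options-ssl-nginx.conf include,
# or nginx will fail with duplicate-directive errors.
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;

# OCSP stapling (requires a DNS resolver nginx can reach)
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 8.8.8.8 valid=300s;
```

In most cases the certbot-provided include already covers this; only override it when you have a concrete compatibility or policy reason.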

  8. Quick verification

After the first run:

  • HTTP-to-HTTPS redirection: curl -I http://example.com   # should return 301 to https

  • TLS handshakes:
    curl -I https://example.com
    curl -IL https://www.example.com/health

  • Certificate details:
    openssl s_client -connect example.com:443 -servername example.com </dev/null | openssl x509 -noout -dates -subject

  • Health endpoint: curl -sS https://example.com/health

If anything looks off, check:

  • Nginx error logs: /var/log/nginx/error.log
  • Certbot logs: /var/log/letsencrypt/letsencrypt.log
  • Systemd status: systemctl status secure-boot
  9. A lean alternative: Caddy or Traefik for automated TLS (optional)

If you want even less manual TLS work, consider a small reverse proxy like Caddy (Traefik is a similar option). Caddy automatically provisions TLS certificates via Let’s Encrypt and keeps them renewed, without you having to run certbot. It’s nice for speed-to-production and reduces boilerplate for small apps. The trade-off is that you’re relying on Caddy’s defaults for TLS configuration and adding another moving part to your stack. For a bootstrapped solo project where you want to minimize maintenance while staying secure, it’s worth evaluating.
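To give a sense of scale, the entire Nginx + certbot setup above collapses to a few lines of Caddyfile. A sketch for the same app:

```
# Caddyfile sketch: automatic HTTPS for the app on localhost:8080.
# Caddy obtains and renews Let's Encrypt certs and redirects HTTP
# to HTTPS by default; no certbot needed.
example.com, www.example.com {
    reverse_proxy 127.0.0.1:8080
}
```

That brevity is the appeal; the cost is that the TLS details now live in Caddy’s defaults rather than in config you wrote.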

That said, I personally prefer the clarity and control of Nginx + certbot for a bootstrapped workflow. It’s explicit, reproducible, and you can audit every step.


Things that actually matter in practice

  • Ownership of keys and certs: store them under /etc/letsencrypt and back them up somewhere sensible. Losing your private key is not something you want to cope with later.
  • Automatic renewal: 90-day certs are intentionally short. If you rely on a one-off renewal script, you’ll forget to update when it matters.
  • HTTP-to-HTTPS redirection: non-negotiable for a clean HTTPS-first posture. If you miss redirects, some traffic remains unencrypted.
  • HSTS with care: once you enable preload, you’re locked into HTTPS for a long period. Ensure your site is truly HTTPS-only before enabling preload.
  • Lean runtime: keep the app stack minimal. You don’t need a Kubernetes cluster to ship a profitable product. A single server plus a small reverse proxy is enough to start.
  • Observability: you’ll want basic metrics and health checks, especially as you scale a bootstrapped product. TLS expiry is itself a reliability signal; watch it.
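For the expiry-monitoring point, a tiny cron-able check is enough. Here’s a sketch (assumes GNU date and openssl are installed; the 14-day threshold and the alert command are placeholders to adapt):

```shell
#!/usr/bin/env bash
# Warn when the certificate a host is serving gets close to expiry.

# Pure date math, split out so it is easy to test offline.
# Takes a date string like "Dec 31 23:59:59 2025 GMT" (openssl's format).
days_left() {
  local end_epoch now_epoch
  end_epoch=$(date -d "$1" +%s)
  now_epoch=$(date +%s)
  echo $(( (end_epoch - now_epoch) / 86400 ))
}

# Fetch the live cert's notAfter date and report days remaining.
check_cert_days() {
  local host="$1" not_after
  not_after=$(openssl s_client -connect "${host}:443" -servername "$host" \
    </dev/null 2>/dev/null | openssl x509 -noout -enddate)
  days_left "${not_after#notAfter=}"
}

# Example cron usage (replace echo with your alerting of choice):
# [ "$(check_cert_days example.com)" -lt 14 ] && echo "cert expiring soon!"
```

With certbot renewing at ~30 days remaining, a check that never fires is exactly what you want; if it ever does fire, renewal has been silently broken for weeks.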

Common pitfalls I’ve seen (and how to avoid them)

  • Cert renewal failure due to port 80 blocked by a firewall or a misconfigured server. Ensure port 80 is reachable during the HTTP-01 challenge, or switch to DNS-01 if you can’t expose port 80 at all.
  • Not reloading Nginx after certificate renewal. The renew hook is your friend; always hook certbot renew to systemctl reload nginx.
  • Underestimating the importance of a basic backup plan. Keep backups of your keys and config; a single server outage shouldn’t break TLS for days.
  • Relying on a cloud TLS feature for security without understanding the origin. Managed TLS can be convenient, but you’re giving up some control over rotation, metrics, and offline backups. If you want the “fully bootstrapped” experience, you should own the certificates and their lifecycle.
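For the DNS-01 route mentioned above, the manual flow looks like this (provider-specific certbot DNS plugins, such as python3-certbot-dns-cloudflare, can automate the TXT-record step):

```
# Issue via DNS-01: certbot prompts you to create a TXT record at
# _acme-challenge.example.com; no inbound port 80/443 required.
sudo certbot certonly --manual --preferred-challenges dns \
  -d example.com -d www.example.com
```

Note that the fully manual flow can’t auto-renew (someone has to create the TXT record each time), so for unattended renewal you’ll want one of the DNS plugins.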

Final takeaways

  • HTTPS-first doesn’t have to be complicated. A small VPS, Nginx as a TLS terminator, Let’s Encrypt certs, and certbot automation make a solid, bootstrapped baseline.
  • Treat TLS as code: commit the Nginx config parts you customize, and script the renewal flow so you’re not worrying about cert expiry.
  • Prefer explicit, auditable processes over “trust the vendor” defaults. You’ll be more confident shipping to paying customers when you know exactly how TLS is provisioned and renewed.
  • If you value simplicity and speed to production, consider a tool like Caddy for automatic TLS; otherwise, the classic Nginx + certbot stack remains a robust, battle-tested choice.

If you’re bootstrapping a new project, start here and iterate. You don’t need a complex cloud stack to ship a trustworthy, HTTPS-first product.

If you want to see a full working repo, I’ve included a minimal sample in this post’s notes. You’ll find the app code, a ready-to-use Nginx site config, and a systemd service you can adapt for your project. As always, I’d love to hear what worked for you in the wild and what didn’t.

Want more practical, boots-on-the-ground setups like this? Follow me on X @fullybootstrap for occasional updates and links to real-world infrastructure choices from my own projects.


  • If you’re curious about a production-grade self-hosted stack, I discuss self-hosting a web app in a separate post.
  • A deeper dive into Linux-based deployment workflows and dotfiles can be found in my earlier writeups.

Takeaways in a sentence

You can deliver a robust HTTPS-first web app without relying on vendor-managed TLS by using a lean Nginx front-end, Let’s Encrypt certs with certbot automation, and a small, reproducible app stack. It’s cheap, repeatable, and gives you full control over your TLS lifecycle—exactly what a bootstrapped indie hacker needs.