From 502s and Cert Errors to a Full HTTP Downgrade: My Cloudflare Tunnel WordPress Journey

Today was a journey, not a sprint. My initial plan was simple: move my self-hosted WordPress site from a traditional port-forwarding setup to the secure and modern world of Cloudflare Tunnels. My site was already running securely behind an Nginx reverse proxy with a valid HTTPS certificate. This, I thought, would be a straightforward switch.

I was wrong. What followed was a deep dive into redirect loops, certificate errors, and ultimately, a strategic decision to completely re-architect my site’s local protocol.

If you’ve ever been beaten down by a `tls: unrecognized name` error, this one’s for you.

### The Initial Setup & The First Problem: Endless Redirects

My setup was, I believed, solid:

- **WordPress:** Running in its own environment.
- **Nginx:** Acting as a reverse proxy, serving the site over HTTPS on port 443.
- **cloudflared:** Running in a Docker container on the same host, using `network_mode: "host"` to easily access host services (a minimal compose sketch follows this list).
- **Cloudflare Tunnel:** Configured in the Zero Trust dashboard to point `https://irlab.ca` to my Nginx server at `https://192.168.1.30:443`.
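
Here is that compose sketch: a minimal, illustrative definition assuming a dashboard-managed (token-based) tunnel. The token placeholder and image tag are not my exact values.

```yaml
# docker-compose.yml (sketch): cloudflared as a dashboard-managed tunnel connector.
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run
    environment:
      # The token comes from the Zero Trust dashboard when you create the tunnel.
      - TUNNEL_TOKEN=<your-tunnel-token>
    network_mode: "host"        # share the host network so local services are reachable directly
    restart: unless-stopped
```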

I flipped the switch, and… failure.

Accessing `https://irlab.ca` resulted in an endless redirect loop for the browser, which eventually timed out. A quick `curl` command gave a more direct, but equally unhelpful, error: a `502 Bad Gateway` from Cloudflare, with `tls: unrecognized name` buried in the `cloudflared` logs. The problem was the hop between `cloudflared` and Nginx: the tunnel was dialing my origin by IP address, so Nginx never saw the `irlab.ca` server name it needed to present the right certificate. Setting the tunnel's "Origin Server Name" (and, while debugging, "No TLS Verify") got the handshake working, but that only revealed the next problem: even with every hop speaking HTTPS (User -> Cloudflare (HTTPS) -> `cloudflared` -> Nginx (HTTPS) -> WordPress), links on the site were being generated as `http://`, and the browser flagged mixed content.
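
If you want to see the SNI mismatch for yourself from the tunnel host, a couple of standard commands make it obvious (illustrative, using my origin IP; your output will differ):

```bash
# Hitting the origin by bare IP sends no useful server name, so certificate validation fails.
curl -v https://192.168.1.30/

# Forcing the hostname to resolve to the origin sends the right SNI, and the handshake succeeds.
curl -v --resolve irlab.ca:443:192.168.1.30 https://irlab.ca/

# Or inspect the handshake directly; compare runs with and without -servername.
openssl s_client -connect 192.168.1.30:443 -servername irlab.ca </dev/null
```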

### The Strategic Decision: Abandon Local HTTPS

After more debugging, I had a realization. Why was I fighting to maintain HTTPS *inside* my network? The entire connection from `cloudflared` to Nginx is on the same host machine. It's not exposed to the internet. The *only* part that needs to be encrypted is the public-facing part, which Cloudflare Tunnel handles perfectly.

I decided to move my entire local WordPress site from HTTPS to HTTP.

This simplifies the chain: **User -> Cloudflare (HTTPS) -> `cloudflared` -> Nginx (HTTP) -> WordPress**

No local certificates, no SNI errors, no mixed content. But this was "not an easy job," as the entire site was built on an `https://` foundation.

### The Great HTTP Downgrade: How I Did It

This process had to be meticulous to avoid breaking the site in a new and exciting way.

#### Step 1: Web Server Reconfiguration (Nginx)

First, I had to teach Nginx to speak HTTP again. I edited my Nginx site configuration:

- Changed `listen 443 ssl;` to `listen 80;`.
- Removed all SSL-related lines: `ssl_certificate`, `ssl_certificate_key`, `ssl_protocols`, etc.
- Removed any `return 301 https://$host$request_uri;` redirect blocks (the old HTTPS setup is sketched below for comparison).
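
For context, the relevant parts of the old HTTPS configuration looked roughly like this. It is a sketch, and the certificate paths are placeholders rather than my real ones:

```nginx
# The old HTTPS setup (illustrative sketch of what I removed).
server {
    listen 80;
    server_name irlab.ca www.irlab.ca;
    return 301 https://$host$request_uri;   # the redirect block, now gone
}

server {
    listen 443 ssl;
    server_name irlab.ca www.irlab.ca;

    ssl_certificate     /path/to/fullchain.pem;   # removed
    ssl_certificate_key /path/to/privkey.pem;     # removed
    ssl_protocols       TLSv1.2 TLSv1.3;          # removed

    # ... the rest of the site config stayed the same ...
}
```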

My Nginx `server` block now looked refreshingly simple:

```nginx
server {
    listen 80;
    server_name irlab.ca www.irlab.ca;

    root /var/www/html;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        # ... (my fastcgi_pass directives)
    }
}
```


After a `sudo systemctl reload nginx`, my server was now serving on port 80.
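
To make sure no redirect was left behind anywhere else in the config, a quick header check against the origin does the trick (the Host header matters because the server block matches by name):

```bash
# Expect a 200 over plain HTTP and no "Location: https://..." header.
curl -I http://192.168.1.30/ -H "Host: irlab.ca"
```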

#### Step 2: WordPress Configuration (`wp-config.php`)

Next, I had to stop WordPress from *itself* trying to force HTTPS. I edited `wp-config.php` and commented out or removed these lines:

```php
// define( 'FORCE_SSL_ADMIN', true );
// define( 'FORCE_SSL_LOGIN', true );
```


To regain access to my admin dashboard (which was now in a redirect loop), I temporarily hard-coded the new HTTP URLs:

```php
define( 'WP_HOME', 'http://irlab.ca' );
define( 'WP_SITEURL', 'http://irlab.ca' );
```



#### Step 3: Database Domination (The "Better Search Replace" Plugin)

This was the most critical step. My posts, pages, widgets, and theme settings were all filled with `https://irlab.ca` links. This is what the **Better Search Replace** plugin was for.

1. After adding the `wp-config.php` lines, I was able to log into my admin panel at `http://irlab.ca/wp-admin/`.
2. I installed and activated "Better Search Replace."
3. I navigated to **Tools > Better Search Replace**.
4. **Search for:** `https://irlab.ca`
5. **Replace with:** `http://irlab.ca`
6. **Select tables:** I selected *all* tables.
7. I ran a "Dry Run" first to see how many changes were needed. It was thousands.
8. I unchecked "Dry Run" and ran the replacement for real.

This plugin meticulously went through the entire database and fixed every URL, solving the mixed content problem at its source.
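
If you prefer the command line, WP-CLI's `search-replace` does the same job and also handles serialized data. A sketch, assuming WP-CLI is installed and run from the WordPress root:

```bash
# Preview the changes first...
wp search-replace 'https://irlab.ca' 'http://irlab.ca' --all-tables --dry-run

# ...then run it for real once the counts look sane.
wp search-replace 'https://irlab.ca' 'http://irlab.ca' --all-tables
```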

#### Step 4: The Final Tunnel Configuration

With my local site now running happily on HTTP, I made the final changes in the Cloudflare Zero Trust dashboard:

- **Service URL:** `http://192.168.1.30:80` (pointing to my new HTTP port; the locally managed `config.yml` equivalent is sketched after this list)
- **TLS Settings:** I turned off "No TLS Verify" and cleared the "Origin Server Name" field. With a plain-HTTP origin, neither is needed anymore.
- **Cloudflare SSL/TLS -> Edge Certificates:** I turned **OFF** "Automatic HTTPS Rewrites." This is crucial. Since my origin was now correctly serving `http://` links, I didn't want Cloudflare "helpfully" rewriting them back to `https://` and re-creating the mixed content problem.
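
I manage this tunnel from the dashboard, but if you run a locally managed tunnel instead, the equivalent ingress rules in `cloudflared`'s `config.yml` would look roughly like this (tunnel ID and credentials path are placeholders):

```yaml
# config.yml (sketch): locally managed equivalent of the dashboard settings above.
tunnel: <TUNNEL-UUID>
credentials-file: /etc/cloudflared/<TUNNEL-UUID>.json

ingress:
  - hostname: irlab.ca
    service: http://192.168.1.30:80
  - hostname: www.irlab.ca
    service: http://192.168.1.30:80
  - service: http_status:404   # required catch-all rule
```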

### The Final Result: Success!

After a final cache clear, it worked. Perfectly.
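
The outside view is easy to spot-check; the `cf-ray` and `server: cloudflare` headers confirm the edge is handling TLS, and there is no redirect loop in sight:

```bash
# One clean response over HTTPS, served through Cloudflare.
curl -sI https://irlab.ca/ | grep -iE '^(HTTP|location|server|cf-ray)'
```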

- Visitors to `https://irlab.ca` get a full, secure HTTPS connection with the Cloudflare padlock.
- My server has **zero inbound ports** open on the firewall.
- My internal network is *simpler*, with `cloudflared` talking to Nginx over plain HTTP.
- All `tls: unrecognized name` and `502` errors are gone.
- All mixed content warnings are gone.

This journey was a classic case of debugging leading to a strategic pivot. The "fix" wasn't to force the square peg of local-HTTPS into the round hole of the tunnel, but to realize I didn't need that peg at all.