This tutorial comes to you directly from ChatGPT and myself. My contributions? The mistakes you will inevitably have to correct. Well, you have to do some work, don’t you? Find out where I inadvertently introduced errors and correct them. Look at the logs and the logic of these configuration examples. Or ask a real expert, or the AI in question.
Premise
The purpose of running nginx as a reverse proxy (rPROXY) is to receive traffic (frequently web traffic, but it can be anything) destined for IPFire and sort it to different local machines according to specific rules. For example, traffic coming from the Internet for different domain names, destined for different web servers running behind IPFire in the orange zone (here it will be 192.168.2.0/24).
Those servers, in nginx parlance, are called upstream servers and form the back-end of nginx. This follows from the definition of a rPROXY, which is a server that sits between a client and one or more servers. It receives requests from clients and forwards them to the appropriate server based on rules configured by an administrator. Here the client is on the Internet side, and the server is on the LAN side. The response from the server is then returned to the client through the reverse proxy. In this way, the client interacts with the reverse proxy as if it were the actual server, while the reverse proxy handles the details of connecting to the backend servers.
Steps common to all use cases
This tutorial will present several use cases. Common to all of them, you do this:
- Using pakfire, install nginx on IPFire.
- As root, edit the nginx configuration:
nano /etc/nginx/nginx.conf
- apply the changes for the specific case (see below);
- test the configuration file for syntax errors:
nginx -t
- restart nginx:
/etc/init.d/nginx restart
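Note that a stand-alone nginx.conf must also contain an events block, otherwise nginx -t will fail. The http blocks shown in the cases below are assumed to sit inside a file shaped roughly like this (a minimal sketch; the worker values are illustrative, not IPFire defaults):

```nginx
# Minimal nginx.conf skeleton; the http { ... } block from the
# specific case goes where indicated.
worker_processes auto;

events {
    worker_connections 1024;   # nginx refuses to start without an events block
}

http {
    # ... server/upstream blocks from the specific case go here ...
}
```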
Case Number 1
Now, let’s see the first situation. You have a domain name registered with a DNS provider, we will call it example.com, which points to your IPFire public IP address.
You want the traffic for http://www.example.com/blog to go to the web server running on the IP address 192.168.2.100, port 3000, and http://www.example.com/nextcloud to go to 192.168.2.101, port 4000. You write this configuration file:
http {
    server {
        listen 80;
        location /blog {
            proxy_pass http://192.168.2.100:3000;
        }
        location /nextcloud {
            proxy_pass http://192.168.2.101:4000;
        }
    }
}
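One subtlety worth knowing: with the configuration above, the full path (/blog or /nextcloud) is passed through unchanged, so the blog back-end must itself serve its content under /blog. If instead a back-end expects requests at its root, a trailing slash on both the location and the proxy_pass URL tells nginx to strip the prefix (a sketch, reusing the same example addresses):

```nginx
# Variant that strips the /blog prefix before forwarding:
# a request for /blog/post/1 reaches the back-end as /post/1.
location /blog/ {
    proxy_pass http://192.168.2.100:3000/;   # note the trailing slash
}
```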
Case Number 2
Second scenario. As in case number 1, but this time you also want to sort the encrypted traffic for the /nextcloud and /blog web applications (here the back-end servers listen on port 80). To enable encrypted traffic on port 443, you need to configure SSL certificates for the domain name. You can obtain SSL certificates from Let’s Encrypt or purchase them from a commercial CA.
Once you have obtained the SSL certificate, you need to configure NGINX to use it. You can do this by creating a directory to store the SSL certificate and key, and then adding the following lines to the NGINX configuration file:
ssl_certificate /path/to/example.com.crt;
ssl_certificate_key /path/to/example.com.key;
Replace “/path/to/example.com.crt” and “/path/to/example.com.key” with the path to your SSL certificate and key files for the domain name example.com.
Now you need to configure NGINX to act as a reverse proxy for each of the web applications. You can do this by creating a server block in the NGINX configuration file with two location blocks, one for each URL path. For example:
http {
    server {
        listen 80;
        listen [::]:80;
        server_name example.com;
        return 301 https://$server_name$request_uri;
    }
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name example.com;
        ssl_certificate /path/to/example.com.crt;
        ssl_certificate_key /path/to/example.com.key;
        location /blog {
            proxy_pass http://192.168.2.100:80;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location /nextcloud {
            proxy_pass http://192.168.2.101:80;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
In this example, we create two server blocks for the domain name example.com: one listening on port 80 and one on port 443. The “return 301” line redirects all HTTP traffic to HTTPS. The “ssl_certificate” and “ssl_certificate_key” lines specify the SSL certificate and key files for the domain name.
The location /blog and location /nextcloud blocks specify where requests will be proxied to. In this case, we’re proxying requests to web servers running on the IP addresses 192.168.2.100 and 192.168.2.101 respectively, at port 80. You can replace these with the IP address and port of the web server that you want to proxy traffic to.
The “listen 80;” directive instructs NGINX to listen for incoming IPv4 connections on port 80 on all available IPv4 addresses.
The “listen [::]:80;” directive instructs NGINX to listen for incoming IPv6 connections on port 80. The “[::]” syntax represents the unspecified IPv6 address, which means that NGINX will listen for connections on all available IPv6 addresses.
So, the main difference between “listen 80;” and “listen [::]:80;” is that the former listens on IPv4 addresses, while the latter listens on IPv6 addresses. Given that the current version of IPFire will not work with IPv6, the “listen [::]:80;” directive can be omitted.
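On a system where IPv6 is available, a server block that should answer over both protocols simply carries both directives side by side (a sketch, reusing the example.com redirect from above):

```nginx
# Reachable over both IPv4 and IPv6:
server {
    listen 80;        # IPv4, all addresses
    listen [::]:80;   # IPv6, all addresses
    server_name example.com;
    return 301 https://$server_name$request_uri;
}
```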
Case Number 3
Here you want to send traffic from two different domains, where example1.com goes to 192.168.2.100 and example2.com goes to 192.168.2.101, in the root directory (no /nextcloud or /blog here). Each domain has its own certificates. The two back-end servers are running on port 80, but client traffic arriving on port 80 will be redirected to port 443.
http {
    upstream example1 {
        server 192.168.2.100:80; # replace with your server IP
    }
    upstream example2 {
        server 192.168.2.101:80; # replace with your server IP
    }
    server {
        listen 80;
        server_name example1.com www.example1.com example2.com www.example2.com;
        return 301 https://$host$request_uri;
    }
    server {
        listen 443 ssl;
        server_name example1.com www.example1.com;
        ssl_certificate /etc/letsencrypt/live/example1.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example1.com/privkey.pem;
        location / {
            proxy_pass http://example1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
    server {
        listen 443 ssl;
        server_name example2.com www.example2.com;
        ssl_certificate /etc/letsencrypt/live/example2.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example2.com/privkey.pem;
        location / {
            proxy_pass http://example2;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
Here we used the upstream directive to give each back-end a short name that can then be referenced by proxy_pass inside the location blocks.
Case Number 4
Here’s how to set up a reverse proxy for the domain example.com that load balances across two servers using NGINX, with traffic on port 80 redirected to port 443, all in one configuration file called nginx.conf.
http {
    upstream example {
        server 192.168.2.100;
        server 192.168.2.101;
    }
    server {
        listen 80;
        server_name example.com;
        return 301 https://$server_name$request_uri;
    }
    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate /path/to/cert.pem;
        ssl_certificate_key /path/to/key.pem;
        location / {
            proxy_pass http://example;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
}
In the above example, the upstream block defines the two servers that NGINX will load balance traffic between. The server block listening on port 80 redirects all traffic to port 443 using a 301 redirect. Finally, the server block listening on port 443 is where we define the SSL certificate paths and set up the reverse proxy to the example upstream group.
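By default, nginx distributes requests to the servers in an upstream group in round-robin fashion. The upstream block also accepts a balancing method and per-server parameters; a sketch (the weight values are purely illustrative):

```nginx
upstream example {
    least_conn;                      # pick the server with the fewest active connections
    server 192.168.2.100 weight=2;   # receives roughly twice as many requests
    server 192.168.2.101;
    # server 192.168.2.102 backup;   # only used if the other servers are down
}
```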