I want to move away from Cloudflare tunnels, so I rented a cheap VPS from Hetzner and tried to follow this guide. Unfortunately, the WireGuard setup didn’t work. I’m trying to forward all traffic from the VPS to my homeserver and vice versa. Are there any other ways to solve this issue?
VPS Info:
OS: Debian 12
Architecture: ARM64 / aarch64
RAM: 4 GB
Traffic: 20 TB
You don’t want to forward all traffic. You can do SNAT port forwards across the VPN, but that requires the clients in your LAN to use the VPS as their gateway (I do this for a few services that I can’t run through a proxy; it’s clunky but works well).
Typically, you’ll want to proxy requests to your services rather than forwarding traffic.
- Set up WireGuard or OpenVPN on the VPS as a server VPN. Allow whatever listener port in the firewall (I use `ufw` on Debian, but you can use iptables if you want; there’s a ufw sketch after this list)
- Install HAProxy or Nginx (or Nginx Proxy Manager) on the VPS to act as your frontend. Those will listen on ports 80/443 and proxy requests to your backend servers. They’ll also be responsible for SSL termination, and your public-facing certs will be set there.
- Point your DNS records for your services to the VPS’s public IPv4
- On your LAN, configure your router to connect to the VPS as a VPN client and route into your LAN from the VPN subnet -or- install the VPN client (WG/OVPN) on each host
- In your VPS’s reverse proxy (HAProxy, etc), set the backend server address and port to the VPN address of your host
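As a rough illustration of the firewall step in the first bullet, here’s what it could look like with ufw (51820/udp is just WireGuard’s usual default; substitute whatever listener port and rules you actually use):

```bash
# Allow SSH first so you don't lock yourself out, then the VPN listener
# and the reverse-proxy ports, then turn ufw on.
sudo ufw allow OpenSSH
sudo ufw allow 51820/udp   # WireGuard listener (default port)
sudo ufw allow 80/tcp      # HTTP for the reverse proxy / ACME challenges
sudo ufw allow 443/tcp     # HTTPS for the reverse proxy
sudo ufw enable
```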
I’ve done this since ~2013 (before CF tunnels were even a product) and it has worked great.
My original use case was to set up direct connectivity between a Raspberry Pi with a 3G dongle and a server at home on satellite internet. Both ends of that were behind CG-NAT, so this was the solution I came up with.
Out of curiosity, why not a simple reverse proxy on the VPS (one that only adds the client’s real IP to the headers), tunneled to a full reverse proxy on the home server (which does host routing and everything else) through an SSH tunnel?
What would that kind of setup look like?
Variant 1:
- SSH tunnel established outgoing from home server to VPS_PUBLIC_IP:22, which makes an encrypted tunnel that “forwards” traffic from VPS_PUBLIC_IP:443 to HOME_LOCALHOST:443.
- Full reverse proxy listening on HOME_LOCALHOST:443 and does everything (TLS termination, host routing, 3rd-party auth etc.)
- Instead of running the home proxy on the host you can of course run it inside a container; you just need to also run the SSH tunnel from inside that container.
Pro: very secure. The VPS doesn’t store any sensitive data (no TLS certificates, only an SSH public key) and the client connections pass through the VPS double-encrypted (TLS between the client browser and the home proxy, wrapped inside SSH).
Con: you don’t get the client’s IP. When the home apps receive the connections they appear to originate at the home end of the SSH tunnel, which is a private interface on the home server.
Variant 2 (in case you need client IPs):
- SSH tunnel established same way as variant 1 but listens on VPS_LOCALHOST:PORT.
- Simple reverse proxy on VPS_PUBLIC_IP:443. It terminates the TLS connections (decrypts them) using each domain’s certificate. Adds the client IP to the HTTP headers. Forwards the connection into VPS_LOCALHOST:PORT which sends it to the home proxy.
- Full reverse proxy at home, set up the same way as in variant 1, except you can listen on 80 and skip TLS termination because it’s redundant at this point: the connection has already been decrypted and will arrive wrapped inside SSH.
Pro: by decrypting the TLS connection the simple proxy can add the client’s IP to the HTTP headers, making it available to logs and apps at home.
Con: the VPS needs to store the TLS certificates for all the domains you’re serving, you need to copy fresh certificates to the VPS whenever they’re renewed, and the connections exist unencrypted on the VPS between the exit from TLS and the entry into the SSH tunnel.
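To make variant 2 concrete, the “simple” VPS-side proxy could look roughly like this in nginx (an untested sketch: the domain, certificate paths and the tunnel port 8080 are placeholders for whatever you actually use):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # In variant 2 the certificates for this domain live on the VPS.
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # Record the real client IP in headers, then hand the decrypted
        # request to the local end of the SSH tunnel.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://127.0.0.1:8080;
    }
}
```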
Edit: Variant 3? proxy protocol
I’ve never tried this but apparently there’s a so-called proxy_protocol that can be used to attach information such as the client IP to TLS connections without terminating them.
You would still need a VPS proxy and a home proxy like in variant 2, and they both need to support proxy protocol.
The frontend (VPS) proxy would forward connections in stream mode and use proxy protocol to add client info on the outside.
The backend (home) proxy would terminate TLS and do host routing etc., but it can also unpack the client IP from the proxy protocol and place it in HTTP headers for apps and logs.
Pro: It’s basically the best of both variant 1 and 2. TLS connections don’t need to be terminated half-way, but you still get client IPs.
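In nginx terms, variant 3 might look roughly like this (untested sketch; 8443 stands in for the VPS end of the SSH tunnel, and both proxies must support the PROXY protocol):

```nginx
# On the VPS: pass the TLS stream through untouched, prepending PROXY
# protocol information with the real client address.
# (A stream block goes at the top level of nginx.conf, outside http {}.)
stream {
    server {
        listen 443;
        proxy_protocol on;
        proxy_pass 127.0.0.1:8443;   # local end of the tunnel to home
    }
}
```

```nginx
# At home: terminate TLS as usual, accept PROXY protocol on the listener
# and recover the real client IP from it for logs/headers.
server {
    listen 443 ssl proxy_protocol;
    set_real_ip_from 127.0.0.1;      # address the tunnelled connections arrive from
    real_ip_header proxy_protocol;
    # server_name, certificates and host routing as usual...
}
```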
Please note that it’s up to you to weigh the pros and cons of having the client IPs or not. In some circumstances it may actually be a feature not to log client IPs, for example if you expect you might be compelled to provide logs to someone.
Very interesting… How do I get started?
The SSH tunnel is just one command, but you may want to use autossh to restart it if it fails.
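For variant 1, the persistent tunnel might look something like this (a sketch with placeholder names; the flags are standard ssh/autossh options):

```bash
# Keep the reverse tunnel up permanently; autossh restarts ssh if it drops.
# -M 0 skips autossh's extra monitor port and relies on ssh's own keepalives.
autossh -M 0 -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -o "ExitOnForwardFailure yes" \
  -R 0.0.0.0:443:localhost:443 \
  tunnel-user@VPS_PUBLIC_IP
# Note: binding beyond loopback needs GatewayPorts set in the VPS's
# sshd_config, and port 443 requires logging in as root on the VPS
# (or forwarding to a high port and redirecting 443 to it).
```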
If you choose variant 2 you will need to configure a pass-through reverse proxy on the VPS that does TLS termination (uses correct certificates for each domain on 443). Look into nginx, caddy, traefik or haproxy.
For the full home proxy you will once again need a proxy but you’ll additionally need to do host routing to direct each (sub)domain to the correct app. You’ll probably want to use the same proxy as above to avoid learning two different proxies.
I would recommend either caddy (on both ends) or nginx (VPS) + Nginx Proxy Manager (home) if you’re a beginner.
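If you pick Caddy for the home side of variant 2, a minimal sketch could look like this (hostnames and backend ports are placeholders; the http:// prefix makes Caddy serve plain HTTP on port 80, since TLS was already terminated on the VPS):

```
http://app1.example.com {
    reverse_proxy 127.0.0.1:3000
}

http://app2.example.com {
    reverse_proxy 192.168.1.50:8096
}
```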
How do I make the SSH tunnel forward traffic? It can’t be as easy as just running `ssh user@SERVER_IP` in the terminal. (I only need variant 1, btw.)
You also add the `-R` parameter:

```bash
ssh -R SERVER_IP:443:HOME_PROXY_IP:HOME_PROXY_PORT user@SERVER_IP
```
https://linuxize.com/post/how-to-setup-ssh-tunneling/ (you want the “remote port forwarding” section). The ssh -R, -L and -D options are magical; more people should learn about them.
You may also need to open access to port 443 on the VPS. How you do that depends on the VPS service, check their documentation.
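Two OpenSSH gotchas to watch for with that command: by default sshd only lets remote forwards bind to the VPS’s loopback address (so nothing is reachable from outside), and binding a remote forward to a port below 1024 only works if you log in as root on the VPS. A sketch of the usual workarounds (the iptables redirect is just one option):

```bash
# On the VPS: allow remote forwards to bind to non-loopback addresses.
echo "GatewayPorts clientspecified" | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart ssh

# If you'd rather not log in as root, forward to a high port instead and
# redirect 443 to it:
#   ssh -R SERVER_IP:8443:HOME_PROXY_IP:HOME_PROXY_PORT user@SERVER_IP
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443
```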
The biggest obstacle for me is the connection between the VPS and my homeserver. I tried this today: pinging `10.0.0.2` (the homeserver IP via WireGuard) gives this as a result:

```
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
ping: sendmsg: Destination address required
From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
ping: sendmsg: Destination address required
^C
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1019ms
```
Not sure why though.
Can you post your WG config (masking the public IPs and private key if necessary)?
With WireGuard, the `allowed-ips` setting is basically the routing table for it. Also, you don’t want to set the endpoint address (on the VPS) for your homeserver peer since it’s behind NAT; you only want to set that on the ‘client’ side. Since you’re behind NAT, you’ll also want to set the persistent keepalive so the tunnel remains open.
Hi, thank you so much for trying to help me, I really appreciate it!
VPS `wg0.conf`:

```ini
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = REDACTED
PostUp = iptables -t nat -A PREROUTING -p tcp -i eth0 '!' --dport 22 -j DNAT --to-destination 10.0.0.2; iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source SERVER_IP
PostUp = iptables -t nat -A PREROUTING -p udp -i eth0 '!' --dport 55107 -j DNAT --to-destination 10.0.0.2;
PostDown = iptables -t nat -D PREROUTING -p tcp -i eth0 '!' --dport 22 -j DNAT --to-destination 10.0.0.2; iptables -t nat -D POSTROUTING -o eth0 -j SNAT --to-source SERVER_IP
PostDown = iptables -t nat -D PREROUTING -p udp -i eth0 '!' --dport 55107 -j DNAT --to-destination 10.0.0.2;

[Peer]
PublicKey = REDACTED
AllowedIPs = 10.0.0.2/32
```
Homeserver `wg0.conf`:

```ini
[Interface]
Address = 10.0.0.2/24
PrivateKey = REDACTED

[Peer]
PublicKey = REDACTED
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
Endpoint = SERVER_IP:51820
```
(REDACTED would’ve been the public / private keys, SERVER_IP would’ve been the VPS IP.)
On the surface, that looks like it should work (assuming all the keys are correct and 51820/udp is open to the world on your VPS).
Can you ping the VPS’s WG IP from your homeserver and get a response? If so, try pinging back from the VPS after that.
Until you get the bidirectional traffic going, you might try pulling out the iptables rules from your wireguard script and bringing everything back up clean.
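A couple of commands can help narrow down where it breaks (assuming the interface is called wg0 and tcpdump is installed on the VPS):

```bash
# If "latest handshake" never shows up here, the packets aren't arriving
# or the keys don't match; if it does, the tunnel itself is fine and the
# problem is routing or firewalling.
sudo wg show wg0

# On the VPS: watch for incoming WireGuard packets while pinging from home.
sudo tcpdump -ni eth0 udp port 51820
```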
I do not get a response when pinging the VPS’s WG IP from my homeserver. It might have something to do with the firewall that my VPS provider (Hetzner) is using. I’ve now allowed port `51820` on UDP and TCP and it’s still the same as before… This is weird.

I’m not familiar with Hetzner, but I know people use them; I haven’t heard of any kind of block on WG traffic (though I’ve read they do block outbound SMTP).
Maybe double-check your public and private WG keys on both ends. If the keys aren’t right, it doesn’t give you any kind of error; the traffic is just silently dropped if it doesn’t decrypt.
Hmm, the keys do match on the two different machines. I have no idea why this doesn’t work…
- Set up WireGuard or OpenVPN on the VPS as a server VPN. Allow whatever listener port in the firewall (I use `ufw` on Debian, but you can use iptables if you want)
Do you have a working WireGuard connection? If so, you can set up two reverse proxies.
Not really, pinging my homeserver via the VPS returns:
```
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
ping: sendmsg: Destination address required
From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
ping: sendmsg: Destination address required
^C
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1019ms
```
Forget iptables. You have a broken WireGuard setup. Did you verify that you have the proper keys and that WireGuard is allowed through the firewall?
I have no idea how to properly manage the firewall with Hetzner. I’ve opened the ports on the Hetzner management page and I ran several iptables commands to allow traffic from those ports. Still doesn’t work. This is weird!
For testing, just set all chains to allow and flush all the rules. Then ping the WireGuard IP of your VPS from your home server (the one where WireGuard is configured). This should work, and it should tell the VPS where it can find the other WireGuard endpoint. Pinging your home server from the VPS should work after that. If this makes the connection work properly, look into the WireGuard keepalive settings and make sure your home server regularly announces itself to your VPS.
After that, reload the netfilter/iptables rules on your VPS. You don’t need a firewall management tool; as long as there are no services running on the public interface, there are no open ports that would need filtering.
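Concretely, the “open everything up for testing” step could look like this on the VPS (careful: this temporarily disables all packet filtering):

```bash
# Default-accept everything, then flush the existing rules
# (including the NAT rules added by the wg-quick PostUp lines).
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -F
sudo iptables -t nat -F

# Then, from the home server, ping the VPS's WireGuard address:
ping 10.0.0.1
```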
What firewall are you using on the VPS? It will likely be firewalld or ufw.
Does iptables count as a firewall? You said that I should “forget” iptables. Is it that bad? It came preinstalled on the VPS. Should I switch? And if so, how?
iptables is the low-level mechanism that handles packet filtering and NAT. Firewall software just takes it up a layer so you can manage it without crazy long commands.
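For example, these two do roughly the same thing (51820/udp is just WireGuard’s default port used as an illustration):

```bash
# Raw iptables rule to accept WireGuard traffic on its listen port...
sudo iptables -A INPUT -p udp --dport 51820 -j ACCEPT

# ...versus the ufw front-end expressing the same intent.
sudo ufw allow 51820/udp
```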
Alright, sounds good. Which firewall would you recommend? I’d like to use one that’s easy to manage.

Edit: I went with `ufw`.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| CF | CloudFlare |
| CGNAT | Carrier-Grade NAT |
| DNS | Domain Name Service/System |
| HTTP | Hypertext Transfer Protocol, the Web |
| IP | Internet Protocol |
| NAT | Network Address Translation |
| Plex | Brand of media server package |
| SSH | Secure Shell for remote terminal access |
| SSL | Secure Sockets Layer, for transparent encryption |
| TCP | Transmission Control Protocol, most often over IP |
| TLS | Transport Layer Security, supersedes SSL |
| UDP | User Datagram Protocol, for real-time communications |
| VPN | Virtual Private Network |
| VPS | Virtual Private Server (opposed to shared hosting) |
| nginx | Popular HTTP server |
[Thread #635 for this sub, first seen 27th Mar 2024, 18:15]
I use rathole: https://github.com/rapiz1/rathole
It’s been solid.

I use this too, and it should be noted that this does not require WireGuard or any VPN solution. Rathole can be served publicly, allowing a machine behind a NAT or firewall to connect.
I like that it’s really simple and obvious, with a good config file structure.
Server forwards a port to a client.
Client forwards that to an ip:port.

If you need to know the real IP, it’s up to you to run reverse proxies that support PROXY protocol TCP headers or insert X-Forwarded-For, or whatever.
Rathole does its thing, only its thing, and does it well. The Linux way, as it was written.
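For reference, the configuration is just two small TOML files, roughly like this (a sketch from memory of the README; the service name, token and ports are placeholders, so check the repo for the exact field names):

```toml
# server.toml, on the VPS
[server]
bind_addr = "0.0.0.0:2333"        # control channel the clients connect to

[server.services.my_web]
token = "use_a_long_random_token"
bind_addr = "0.0.0.0:443"         # public port exposed on the VPS
```

```toml
# client.toml, on the home server
[client]
remote_addr = "VPS_PUBLIC_IP:2333"

[client.services.my_web]
token = "use_a_long_random_token"
local_addr = "127.0.0.1:443"      # where the local service actually listens
```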
This looks really interesting. I’ll check it out one of these days.
I managed this by using tailscale, with a kind of weird setup I think, but it just works.
I have tailscale on the VPS and my local server; let’s say its tailscale name is potatoserver.
Then with Caddy on the VPS I have something like:

```
mywebsite.com {
    reverse_proxy potatoserver:port
}
```
And so mywebsite.com is accessible on the clearnet through the VPS
Though given you’re getting rid of Cloudflare tunnels, I don’t know if you’d want to get into Tailscale. There’s Headscale too but I haven’t worked with it, so I can’t comment.
I only use headscale. It just works and does not complain.
Not sure exactly how good this would work for your use case of all traffic, but I use autossh and ssh reverse tunneling to forward a few local ports/services from my local machine to my VPS, where I can then proxy those ports in nginx or apache on the VPS. It might take a bit of extra configuration to go this route, but it’s been reliable for years for me. Wireguard is probably the “newer, right way” to do what I’m doing, but personally I find using ssh tunnels a bit simpler to wrap my head around and manage.
Technically wireguard would have a touch less latency, but most of the latency will be due to the round trip distance between you and your VPS and the difference in protocols is comparatively negligible.
I had a similar problem to yours. My provider only gives me a public IPv6 but no public IPv4. I’m using a VPS with an IPv4 with Jool to set up SIIT-DC: https://nicmx.github.io/Jool/en/siit-dc.html
This converts all IPv4 traffic arriving at the VPS to IPv6 traffic, which then gets routed directly to my homeserver.
Not sure if this setup would work for you. This is not a viable solution if you are completely behind a CGNAT without even a public IPv6.
Pro:
- Works without any sensitive data on the VPS (SSL certificates/passwords…)
- Works for all IP-based traffic (TCP, UDP, ICMP)
- The original source IPv4 can be restored by the homeserver

Contra:
- AFAIK you cannot choose to only forward some TCP ports. Everything gets redirected.
- You cannot access the VPS via IPv4 anymore since it gets redirected to your homeserver. (I only access my VPS via IPv6.)
- No (additional) encryption. (This is no problem for me since all my traffic is already e2e encrypted)
I did this. It has worked flawlessly for half a year now. I have an x86 thin client at home running all my stuff, and it creates a tunnel to my VPS (I use a free-tier Oracle VPS - yes, it is a shit company, I know, no need to let me know again in the comments). Works like a charm. This GitHub repo has an automated installer for Oracle, Amazon, etc.: https://github.com/mochman/Bypass_CGNAT/wiki/Oracle-Cloud-(Automatic-Installer-Script) - it installs and configures WireGuard on both the server (VPS) and the client (your home machine).
Yeah you probably need a second IP address on the VPS though.
I open a WireGuard tunnel from home to the VPS and then tunnel an Nginx ingress down to the VPS.
This one works; I’ve done it myself because my shitty ISP wants a huge payment for a static public IP. A $5 VPS was much cheaper. My server is behind NAT too; I can help if you have any doubts.