I am running a bare-metal Kubernetes cluster on k3s with Kube-VIP and Traefik. This works great for services that use SSL/TLS, since Server Name Indication (SNI) can be used to reverse proxy multiple services listening on the same port. Consequently, getting Traefik to route multiple web servers receiving traffic on ports 80 or 443 is not a problem at all. However, I am stuck trying to accomplish the same thing for services that use plain TCP or UDP without SSL/TLS, since SNI is not included in that traffic.

I tried to set up Forgejo, where clients will expect to use commands like git clone git@my.forgejo.instance.... which would ultimately use SSH on port 22. Since SSH uses TCP and Traefik supports TCPRoutes, I should be able to route traffic to Forgejo’s SSH entry point on port 22, but I ran into an issue where the node’s own SSH service would receive and process all traffic arriving on that port instead of letting Traefik receive and route it. I believe I should be able to change the port the node’s SSH service listens on, or restrict the IP address it binds to. That should allow Traefik to receive traffic on port 22 and route it to Forgejo’s SSH entry point while still letting me SSH directly into the node.
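For what it’s worth, a minimal sketch of how this could look with k3s’s bundled Traefik, assuming the node’s sshd has already been moved off port 22 (e.g. a different Port in /etc/ssh/sshd_config) and assuming made-up names such as an `ssh` entrypoint and a `forgejo-ssh` Service; the exact chart values and CRD apiVersion depend on the Traefik version k3s ships:

```yaml
# Hypothetical HelmChartConfig for k3s's packaged Traefik: adds an "ssh"
# entrypoint that Traefik's LoadBalancer Service exposes on port 22.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      ssh:
        port: 2222        # port Traefik listens on inside its pod
        exposedPort: 22   # port exposed by Traefik's LoadBalancer Service
        expose: true      # newer chart versions use an expose.default map
        protocol: TCP
---
# Hypothetical IngressRouteTCP sending everything on that entrypoint to
# Forgejo's SSH Service. Non-TLS TCP routers can only match HostSNI(`*`),
# which is exactly the "no SNI" limitation described above.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: forgejo-ssh
  namespace: forgejo
spec:
  entryPoints:
    - ssh
  routes:
    - match: HostSNI(`*`)
      services:
        - name: forgejo-ssh
          port: 22
```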

However, even if I get that working correctly, I will run into another issue when other services that typically run on port 22 are stood up. For example, I would not be able to have Traefik reverse proxy both Forgejo’s SSH entry point and an SFTP server’s entry point on port 22, since Traefik would only be able to route all port 22 traffic to a single service due to the lack of SNI details.

The only viable solution that I can find is to run one service’s entry point on port 22 and run each of the other services’ entry points on various other ports. For instance, Forgejo’s SSH entry point could be port 22 and the SFTP server’s entry point could be port 2222 (mapped to the pod’s port 22). This would require opening multiple additional ports on the firewall, and each client would need its configuration and/or commands modified to connect to the service’s non-standard port.
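As a rough sketch of that port-mapping workaround (the `sftp` names and selector are hypothetical), a plain LoadBalancer Service can expose the pod’s port 22 on an external port 2222:

```yaml
# Sketch only, with made-up names: the SFTP pod keeps listening on its
# normal port 22, but clients must connect to the non-standard port 2222.
apiVersion: v1
kind: Service
metadata:
  name: sftp
  namespace: sftp
spec:
  type: LoadBalancer
  selector:
    app: sftp
  ports:
    - name: sftp
      port: 2222       # external port clients must be reconfigured to use
      targetPort: 22   # the pod's real SSH/SFTP port
      protocol: TCP
```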

Another solution that I have seen is to use other services like stunnel to wrap traffic in TLS (similar to how HTTPS works), but I believe this will likely lead to even more problems than using multiple ports as every client would likely need to be compatible with those wrapper services.

Is there some other solution that I am missing? Is there something that I could do with Virtual IP addresses, multiple load balancer IP addresses, etc.? Maybe I could route traffic on Load_Balancer#1_IP_Address:22 to Forgejo’s SSH entry point and Load_Balancer#2_IP_Address:22 to SFTP’s entry point?

tl;dr: Is it possible to host multiple services that do not use SSL/TLS (ie: cannot use SNI) on the same port in a single Kubernetes cluster without using non-standard ports and port mapping?

  • Decronym@lemmy.decronym.xyz [bot]

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    ARP: Address Resolution Protocol, translates IPs to MAC addresses
    DNS: Domain Name Service/System
    Git: Popular version control system, primarily for code
    HTTP: Hypertext Transfer Protocol, the Web
    HTTPS: HTTP over SSL
    IP: Internet Protocol
    NAT: Network Address Translation
    SFTP: Secure File Transfer Protocol for encrypted file transfer, over SSH
    SSH: Secure Shell for remote terminal access
    SSL: Secure Sockets Layer, for transparent encryption
    TCP: Transmission Control Protocol, most often over IP
    TLS: Transport Layer Security, supersedes SSL
    UDP: User Datagram Protocol, for real-time communications
    VPN: Virtual Private Network
    VPS: Virtual Private Server (opposed to shared hosting)
    k8s: Kubernetes container management package


    • wireless_purposely832@lemmy.world (OP)

      I guess I need to dig in a little deeper. I am currently only using Kube-VIP to provide a single IP address for the control plane. I think I may have it configured wrong though since that same IP address is the single load balancer IP used by Traefik.

      I have struggled to find good documentation, hints, tutorials, etc. for setting up Kube-VIP with Virtual IPs. Is there anything that you are aware of that might provide some assistance in setting that up correctly?

      • simonmicro@programming.dev

        Okay, I’ll try explaining it. Yes, there is very little documentation for this in particular, so… yeah.

        You start by installing kube-vip into your cluster. Make sure to configure it correctly, so that the uplink interface of your workers is used for the VIP rather than, e.g., internal ones (see the env var “vip_interface”). Also make sure to enable the service-based functions and the corresponding leader election (“svc_enable”, “vip_leaderelection”). I would also recommend ARP mode, because I’ve never tested the others.
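For readers trying to follow along, those settings are environment variables on kube-vip’s DaemonSet container. A rough excerpt, assuming ARP mode and eth0 as the uplink interface (the full manifest in the kube-vip docs has more fields):

```yaml
# Excerpt only: the env block of the kube-vip DaemonSet container, showing
# the variables mentioned above. The interface name is a guess.
env:
  - name: vip_arp              # announce VIPs via ARP
    value: "true"
  - name: vip_interface        # the workers' uplink interface
    value: "eth0"
  - name: svc_enable           # watch Services of type LoadBalancer
    value: "true"
  - name: vip_leaderelection   # use leader election to decide which node holds a VIP
    value: "true"
```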

        Then you create a new “LoadBalancer” Service in k8s and set its “loadBalancerIP” field to the desired IPv4 VIP. Thanks to the previous kube-vip configuration, it should pick that up. You can take a look at its operator’s logs to learn more.

        Theoretically that’s it. One of your nodes will now start serving the service port under the VIP. The service can target any TCP/UDP workload, not only Traefik.

        One more thing: setting the “externalTrafficPolicy” field on the LB Service to “Local” disables the extra internal routing via your CNI, so you will even be able to see the real source IPv4 of your clients. Be careful with this on non-kube-vip services, as nodes that do not run the targeted pods will not be able to serve the traffic. Kube-vip only promotes a node to serve the VIP if that node also runs a pod targeted by the service (see its docs/config).
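Putting that together, a minimal sketch of such a Service (the name, selector, and VIP are examples; on newer Kubernetes releases the `loadBalancerIP` field is marked deprecated, though load-balancer implementations may still honour it):

```yaml
# Sketch only: a LoadBalancer Service that requests a specific VIP from
# kube-vip and preserves the real client source IP.
apiVersion: v1
kind: Service
metadata:
  name: forgejo-ssh
  namespace: forgejo
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.50.10   # the desired VIP for kube-vip to claim
  externalTrafficPolicy: Local    # see the caveat about pod placement above
  selector:
    app: forgejo
  ports:
    - name: ssh
      port: 22
      targetPort: 22
      protocol: TCP
```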

        • wireless_purposely832@lemmy.world (OP)

          I am unsure if I understood everything correctly, but I believe I am already doing everything that you mentioned. I followed Kube-VIP’s ARP DaemonSet documentation, and the leader election works. I am not using Kube-VIP for load balancing, though. Instead, I am using Traefik, which is using the same IP address that was assigned to the control plane during both k3s’s and Kube-VIP’s setup. However, I am unable to get any additional VIP addresses to properly route to Traefik.

          Even if I did get the additional VIP addresses working, I think I still have one last issue to overcome. I can control the local network’s DNS so that service#1 is assigned VIP#1 and service#2 is assigned VIP#2. However, how would this be handled for traffic received externally? If the external/public DNS has service#1 and service#2 assigned to the network’s public IP address, both services’ traffic would be received by the router/firewall on port 22. The router/firewall could forward traffic on port 22 to (presumably) a single IP address, which would only allow service#1 or service#2 (but not both) to receive traffic publicly, correct?

          • simonmicro@programming.dev

            Ah yes, I see. Because TCP has no SNI built in, this is not really possible.

            You could try IPv6, since even within a single routable /64 prefix you can choose the address portion freely. Also take a look at overlay VPN solutions like Netbird: they let you run multiple clients, which you could use to assign multiple IPv4 addresses to your server and then route them differently (you mentioned installing client software before)…

            Finally, I’m not sure why you would inject Traefik into the networking chain at all. In the end, a direct, kernel-space connection is always faster than having a user-space proxy in between.

            • wireless_purposely832@lemmy.world (OP)

              I had not thought about using IPv6 for this. It’s definitely something that I would need to research more, as I know that this would expand my attack surface and may require an overhaul of the network (or at least a very thorough review).

              I’m not sure I understand the concern about Traefik. I am using it as a reverse proxy and forcing HTTPS for all applicable services (which unfortunately does not apply to this particular situation). I am honestly a little confused about the control plane, tls-san, gateway, load balancer, ingress, etc. and how they all work together. I may not be using Traefik as the load balancer and may instead have Kube-VIP as the LoadBalancer. I did not configure Kube-VIP any particular way for load balancing, but I did configure Traefik with a few load-balancer-specific options. When I tried to set up Kube-VIP with the additional IP addresses for load balancing, I was unable to get k3s to work correctly, so I assumed that Traefik was my load balancer instead of Kube-VIP.

    • wireless_purposely832@lemmy.world (OP)

      I think I can avoid using metallb since I’m already using Kube-VIP.

      I do not think that I am using Virtual IPs correctly; I tried setting them up, but it did not work. I assume the Virtual IP would just become the load balancer IP, and I would need to configure Traefik and Kube-VIP (as well as my network) to use the Virtual IPs?

      • lemmyng@lemmy.ca

        Yeah, you’d have a LoadBalancer service for Traefik which gets assigned a VIP outside the cluster.

        • wireless_purposely832@lemmy.world (OP)

          I’m already doing that, but just for one VIP. I think I just need to get the additional VIPs working.

          I know that I will need to update my local network’s DNS so that, for example, service#1 = git.ssh.local.domain with git.ssh.local.domain = 192.168.50.10, and service#2 = sftp.local.domain with sftp.local.domain = 192.168.50.20. I would set up 192.168.50.10 as the load balancer IP address for Forgejo’s SSH entrypoint and 192.168.50.20 as the load balancer IP address for the SFTP server’s entrypoint. However, how would I handle requests/traffic received externally? The router/firewall would receive everything and can only port forward port 22 to a single IP address, which would prevent one (or more) of the services from being used externally, correct?
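Concretely, that plan could be expressed as two LoadBalancer Services that share port 22 but request different VIPs; the names and selectors below are placeholders, and the addresses are the ones from this comment:

```yaml
# Sketch: two Services on the same port (22) but with different VIPs, so
# local DNS can point each hostname at its own address.
apiVersion: v1
kind: Service
metadata:
  name: forgejo-ssh
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.50.10   # git.ssh.local.domain
  selector:
    app: forgejo
  ports:
    - port: 22
      targetPort: 22
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: sftp
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.50.20   # sftp.local.domain
  selector:
    app: sftp
  ports:
    - port: 22
      targetPort: 22
      protocol: TCP
```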

      • Findmysec@infosec.pub

        Ingress controllers like Traefik come across as LB services to IPAM modules like MetalLB (I’ve never used Kube-VIP but I suppose it’s the same story). These plug-ins assign IP addresses to these LB services.

        You can assign a specific IP to an instance of an “outward-facing route” with labels. I don’t remember technical terms relevant to Ingresses because I’ve been messing with the Gateway API recently.

        • wireless_purposely832@lemmy.world (OP)

          That all makes sense, and I tried setting it up that way but could not get it to work. I am not sure if it was an issue with my network, k3s, Kube-VIP, or Traefik (or some combination of them). I will try getting it to work again.

          Even if I do though, I would run into an issue if I publicly exposed these services (I understand there are security implications of doing so). How would I route traffic received externally/publicly on port 22 to more than one IP address? I think I would only be able to do this for local/internal traffic by managing the local DNS.

          • Findmysec@infosec.pub

            You’d receive traffic on IP:PORT, that’s segregation right there. Slap on a DNS name for convenience.

            I might have my MetalLB config lying around somewhere (it’s super easy, I copied most of it from their website), I can probably paste it here if you’d like.

            Exposing services publicly on the Internet is a L3-L4/L7 networking problem, unfortunately I don’t know enough about your situation to comment.

            Edit: the latter part of your post is correct. You could route to different endpoints that way.

            • wireless_purposely832@lemmy.world (OP)

              Maybe I was not clear, but I do not think that you understood what I was trying to say in the second part of my last message.

              Assume that multiple VIPs are set up and there is a load balancer IP for the SFTP entry point (eg: 192.168.1.40:22) and a different load balancer IP for the Forgejo SSH entry point (eg: 192.168.1.50:22). My local DNS can be set up so that sftp.my.domain points to 192.168.1.40 and ssh.forgejo.my.domain points to 192.168.1.50. When I make a request within my network, the DNS lookup will appropriately route sftp.my.domain:22 to 192.168.1.40:22 and ssh.forgejo.my.domain:22 to 192.168.1.50:22. I believe this is what you are recommending and exactly what I want. I will need to get the multiple-VIP part of this setup worked out so I can do this.

              However, this will not work when the traffic is received from outside of my network even if the above configuration is set up correctly. If you were to try to connect to either sftp.my.domain:22 or ssh.forgejo.my.domain:22, your traffic would be routed to my public IP address. My firewall/router would receive the traffic on port 22 and port forward the traffic to the single IP address assigned to that port forwarding rule. When k3s receives the traffic from my firewall/router, k3s will not have any SNI information (ie: it will not know whether you were using sftp.my.domain or ssh.forgejo.my.domain - or any other domain for that matter). Even if I were able to set up multiple port forwarding rules for port 22 on the firewall/router, I would still be unable to appropriately route the traffic because the firewall/router would also not know if the traffic was intended for sftp.my.domain or ssh.forgejo.my.domain. As a result, at most you would only be able to use one of the services because external traffic for both sftp.my.domain and ssh.forgejo.my.domain will be routed to the same IP address and k3s would have no idea what domain (if any) is being used.

              There are a few solutions (eg: use different ports for each SSH or non-TLS trafficked service, wrap the SSH traffic in TLS to give k3s SNI information to route traffic to the appropriate endpoint, configure SSH on the node to route traffic to the appropriate IP address based on SSH user, require each client to use the local network or VPN, etc.), but none of them are as seamless and easy as routing TLS traffic which can use SNI information.

              • Findmysec@infosec.pub

                In short, you need a reverse-proxy + traffic segregation with domain names (SNI).

                I don’t remember much about ingresses, but this can be super easy to set up with Gateway API (I’m looking at it right now).

                Basically, you can point sftp.my.domain/ssh to 192.168.1.40:22 and sftp.my.domain/sftp to 192.168.1.40:121 (for example). Same with Forgejo: forgejo.my.domain/ssh would point to 192.168.1.50:22 and forgejo.my.domain/gui would point to 192.168.1.50:443.

                The Gateway API will simply send it over to the right k8s service.
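For reference, a hedged sketch of what TCP routing looks like with the Gateway API today: the (still alpha) TCPRoute attaches to a Gateway listener and forwards to a Service, but listeners are distinguished by port rather than by hostname or path, so the SSH/SNI limitation discussed above still applies. The gateway class and resource names below are made up:

```yaml
# Sketch only: a Gateway with a TCP listener on port 22 and a TCPRoute
# forwarding that listener's traffic to Forgejo's SSH Service.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: tcp-gateway
spec:
  gatewayClassName: example-gateway-class   # depends on the controller used
  listeners:
    - name: ssh
      protocol: TCP
      port: 22
      allowedRoutes:
        kinds:
          - kind: TCPRoute
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: forgejo-ssh
spec:
  parentRefs:
    - name: tcp-gateway
      sectionName: ssh
  rules:
    - backendRefs:
        - name: forgejo-ssh
          port: 22
```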

                About your home network: I think you could in theory open up a DMZ and everything should work. I would personally use a cheap VPS as a VPN server and NAT all traffic through it. About traffic from your router maintaining the SNI, that’s a different problem depending on your network setup. Yes, you’ll have to deal with port-mapping because at the end of the day, even Gateway API is NodePort-esque when exposing traffic outside.

                • wireless_purposely832@lemmy.world (OP)

                  I am comfortable routing traffic via domain name through a reverse proxy. I am doing that via Traefik and can setup rules so that different sub-domains, domains, and/or path is routed to the appropriate end point (IP address and port). The issue is that k3s does not receive that information for SSH traffic since SNI (ie: the sub-domain, domain, etc.) is not included in SSH traffic. If SSH traffic provided SNI information, this issue would be much less complicated as I would only need to make sure that the port 22 traffic intended for k3s did not get processed by SSH or any other service on the node.

                  If I were to set up a DMZ, I think I would need to set up one unique public IP per SSH service. So I would need to create a public DNS record pointing sftp.my.domain to VPS#1’s public IP and ssh.forgejo.my.domain to VPS#2’s public IP. Assuming both VPS#1 and VPS#2 have some means of accessing the internal network (eg: VPN), then I could port forward traffic received by VPS#1 on port 22 to 192.168.1.40:22 and traffic received by VPS#2 on port 22 to 192.168.1.50:22. I had not considered this, and based on what I have seen so far, I think this would be the only solution that allows external traffic to reach these services on the normal port 22 when there is more than one service in k3s expecting traffic on port 22 (or any other port that receives traffic without SNI details).

  • just_another_person@lemmy.world

    Yes, it’s possible to host whatever you want.

    If you don’t use SSL, then your client is probably complaining silently. Get logs.

  • Findmysec@infosec.pub

    MetalLB + map new external IP to sub-domain == profit.

    Read some of the other comments: it’s not about your control plane. All you need is multiple external IPs, which an IPAM module/plug-in can provide (MetalLB, Cilium, and maybe Kube-VIP; I’ve never used it).
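If MetalLB ends up being the tool of choice, the address-pool side is indeed small; a minimal sketch of its L2 mode (pool name, namespace, and addresses are placeholders):

```yaml
# Sketch only: MetalLB in L2 mode handing out a small pool of external IPs
# that LoadBalancer Services can claim, one per sub-domain.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lan-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.50.10-192.168.50.30
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lan-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lan-pool
```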