Is it RDMA? Is it a modification of SR-IOV?
I’m having trouble even finding out more about this, since the definition of RDMA just says “remote access to device memory”, and I’d like to confirm whether that covers virtual instances of PCIe devices over the network.
Essentially, I’m looking for a way to share virtual instances of supported PCIe devices over IP. For example, if you have a GPU, you can create virtual slices of it with SR-IOV on KVM-based hypervisors; I’m looking for something that will take those slices and make them available over IP.
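For context on the SR-IOV half (so it’s clear what I mean by “slices”), here’s a rough sketch of how virtual functions usually get created on Linux via sysfs. The PCI address and the VF count are placeholders for whatever your card and driver actually support:

```c
/* Minimal sketch: ask an SR-IOV capable device to spawn virtual functions.
 * The PCI address 0000:03:00.0 and the count are placeholders; needs root. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path = "/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("open sriov_numvfs (needs root and SR-IOV support)");
        return EXIT_FAILURE;
    }
    /* Writing a count here asks the driver to create that many VFs;
     * each VF then appears as its own PCI function (visible in lspci). */
    fprintf(f, "4\n");
    fclose(f);
    return EXIT_SUCCESS;
}
```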
I have come across InfiniBand, QLogic, Mellanox, HP, IBM, RDMA support on Debian, and all of that. I just need someone to ELI5 this to me so I know where/what to search and whether what I want is even possible with FOSS.
I know that Nutanix allows one to serve PCIe hardware over IP on their hypervisor, but I plan to stick with FOSS as far as possible.
Thanks!
Edit: Please let me know what made my post so hard to grasp - the answer was simply RoCE/iWARP. RDMA is definitely the underlying technology that offers access to the memory of the device whilst bypassing the kernel for good performance; security considerations aside, this is a very good idea, since RoCE (v2) runs over UDP/IP and iWARP over TCP/IP, making them routable.
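For anyone landing here later, this is roughly what the kernel-bypass side looks like with libibverbs: just opening an RDMA device (an InfiniBand HCA or a RoCE/iWARP-capable NIC) and registering a buffer so the NIC can access it directly. It skips all of the queue-pair/connection setup, and assumes rdma-core is installed and you link with -libverbs:

```c
/* Sketch: open the first RDMA device and register a buffer for remote access. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return EXIT_FAILURE;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        fprintf(stderr, "failed to open %s\n", ibv_get_device_name(devs[0]));
        return EXIT_FAILURE;
    }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Registering gives the NIC a DMA mapping plus lkey/rkey; a remote peer
     * holding the rkey can then read/write this memory with one-sided verbs,
     * without the local kernel (or CPU) on the data path. */
    size_t len = 1 << 20;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr");
        return EXIT_FAILURE;
    }
    printf("registered %zu bytes, rkey=0x%x\n", len, mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return EXIT_SUCCESS;
}
```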
Apologies if my post didn’t make the most sense; I tried to describe it as best I could. Thanks!
I’m somewhat confused about what you’re asking here. To my knowledge, the two technologies you mentioned do not provide the ability to share a PCIe device, which is what I understand you wish to do. The first allows network cards to access host memory directly and perform data transfers without consulting the CPU, while the other allows for the sharing of a PCIe root or bus; it does not allow multiple systems to access the same hardware device at the same time.
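To illustrate the “without consulting the CPU” part: once a queue pair is connected and the peer has shared its buffer address and rkey (I’m assuming all of that setup is already done out of band), a one-sided RDMA write is a single work request handed to the local NIC, with no remote CPU involved. The function and parameter names here are just mine:

```c
/* Sketch: post one one-sided RDMA write on an already-connected queue pair. */
#include <stdint.h>
#include <infiniband/verbs.h>

int rdma_write_once(struct ibv_qp *qp, struct ibv_mr *local_mr,
                    void *local_buf, uint32_t len,
                    uint64_t remote_addr, uint32_t remote_rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = len,
        .lkey   = local_mr->lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,   /* one-sided: remote CPU not involved */
        .send_flags = IBV_SEND_SIGNALED,   /* request a completion entry */
    };
    wr.wr.rdma.remote_addr = remote_addr;  /* where to write on the peer */
    wr.wr.rdma.rkey        = remote_rkey;  /* permission key from the peer */

    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr);  /* 0 on success */
}
```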
I’ve heard of proprietary solutions, which makes sense, because if you want to virtualize multiple instances of one physical hardware device, I don’t see how you can do that efficiently without really intimate knowledge of device internals. You have to keep separate state for these things, and I think that would be really challenging for an open source project.
Anyway, just thought I would open up the discussion because I didn’t see any other comments. I hope to learn something.
It seems I have gaps in my understanding. I had assumed that SR-IOV allowed me to “break” PCIe devices (with firmware that supports it) into virtual functions (“slices”), which can then be passed through to VMs or used by containers like physical devices.
You’re right, in that I didn’t really see a mention of TCP/IP in the blogs I’ve read about RDMA. I understand what it is, but unless I can access host memory on other machines over the network while bypassing the kernel, this isn’t something I need to consider.
I think support for virtual functions on compatible PCIe devices is chugging along well in the Linux kernel: check out videos of an Nvidia P4 sliced into virtual functions and passed through to different VMs using KVM. It’s either that or I’m completely missing the point somewhere.
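For completeness, the passthrough step I mean usually looks something like this on the host: rebinding one VF to vfio-pci so KVM/libvirt can hand it to a guest. This is a rough sketch using the sysfs driver_override mechanism; the VF address is a placeholder and it needs root:

```c
/* Sketch: rebind a virtual function to vfio-pci for KVM passthrough. */
#include <stdio.h>

static int write_str(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fprintf(f, "%s\n", val);
    fclose(f);
    return 0;
}

int main(void)
{
    const char *vf = "0000:03:00.4";   /* placeholder VF address */
    char path[256];

    /* Tell the PCI core which driver this device should get next. */
    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/driver_override", vf);
    write_str(path, "vfio-pci");

    /* Detach it from whatever driver currently owns it. */
    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/driver/unbind", vf);
    write_str(path, vf);

    /* Reprobe; driver_override routes the device to vfio-pci. */
    write_str("/sys/bus/pci/drivers_probe", vf);
    return 0;
}
```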