From: Huang Zhiteng
Subject: Re: [SPDK] NBD with SPDK
Date: Mon, 23 Sep 2019 09:03:21 +0800
In-Reply-To: 9CD34198-6425-4D77-91D0-986720349148@ebay.com
To: spdk@lists.01.org

BTW, our attempt to run SPDK with Kata is currently blocked by this issue:
https://github.com/spdk/spdk/issues/946. It would be nice if the community
could take a look at it and see how we can fix this. Thank you.

On Sat, Sep 7, 2019 at 4:31 AM Kadayam, Hari wrote:
>
> Kata containers add an extra layer of indirection compared to Docker, which
> potentially affects performance, right?
>
> Also, the memory protection concern with virtio is valid, but we could
> possibly look at containing the accessible memory. In any case, I think an
> SPDK application wouldn't be accessing any buffer other than the IO buffers
> from that space.
>
> On 9/6/19, 10:13 AM, "Mittal, Rishabh" wrote:
>
> I am summarizing all the options. We have three options:
>
> 1. SPDK with NBD: It needs a few optimizations in spdk-nbd to reduce the
> system call overhead. We can also explore using multiple sockets, since that
> is supported by NBD. One disadvantage is that there will be a bcopy for
> every IO from kernel to SPDK, or vice versa for reads. The overhead of the
> bcopy compared to end-to-end latency is very low for a 4k workload, but we
> need to see its impact for larger read/write sizes.
>
> 2. SPDK with virtio: It doesn't require any changes in SPDK (assuming that
> the SPDK virtio code is written for performance), but we need a customized
> kernel module which can work with the SPDK virtio target. Its obvious
> advantage is that the kernel buffer cache will be shared with SPDK, so there
> will be no copy from kernel to SPDK. Another advantage is that there will be
> minimal system calls to ring the doorbell, since it will be using a shared
> ring queue (see the sketch below). My only concern here is that memory
> protection will be lost, as entire kernel buffers will be shared with SPDK.
>
> 3. SPDK with Kata containers: It doesn't require many changes (Xiaoxi can
> comment more on this). But our concern is that apps will not be moved to
> Kata containers, which will slow down its adoption rate.
>
> Please feel free to add pros/cons of any approach if I missed anything. It
> will help us decide.
>
>
> Thanks
> Rishabh Mittal
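>
> For reference on the "ring the doorbell" point in option 2: in a shared-ring
> design the kick is just an eventfd write, and when the target side polls the
> ring the kick can be suppressed entirely, which is where the syscall savings
> come from. A minimal, generic sketch of that doorbell path (plain Linux code,
> not SPDK source; the shared ring itself is omitted):
>
>     /* eventfd "doorbell": producer writes a counter, consumer reads it. */
>     #include <stdint.h>
>     #include <stdio.h>
>     #include <sys/eventfd.h>
>     #include <unistd.h>
>
>     int main(void)
>     {
>         int kick = eventfd(0, EFD_NONBLOCK);
>         if (kick < 0) {
>             perror("eventfd");
>             return 1;
>         }
>
>         /* Submitter side: ring the doorbell after placing a descriptor
>          * on the shared ring (ring omitted here). */
>         uint64_t one = 1;
>         if (write(kick, &one, sizeof(one)) != sizeof(one))
>             perror("kick");
>
>         /* Target side: read (or epoll) the eventfd; the read returns the
>          * accumulated count and resets it.  A polling target checks the
>          * ring directly and can skip the eventfd wakeup entirely. */
>         uint64_t n;
>         if (read(kick, &n, sizeof(n)) == sizeof(n))
>             printf("consumed %llu kick(s)\n", (unsigned long long)n);
>
>         close(kick);
>         return 0;
>     }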
>
> On 9/5/19, 7:14 PM, "Szmyd, Brian" wrote:
>
> I believe this option has the same number of copies, since you're still
> sharing the memory with the Kata VM kernel, not the application itself. This
> is an option that the development of a virtio-vhost-user driver does not
> prevent; it's merely an option to allow non-Kata containers to also use the
> same device.
>
> I will note that doing a virtio-vhost-user driver also allows one to project
> other device types than just block devices into the kernel device stack. One
> could also write a user application that exposed an input, network, console,
> gpu or socket device as well.
>
> Not that I have any interest in these...
>
> On 9/5/19, 8:08 PM, "Huang Zhiteng" wrote:
>
> Since this SPDK bdev is intended to be consumed by a user application
> running inside a container, we do have the possibility to run the user
> application inside a Kata container instead. A Kata container does introduce
> a layer of IO virtualization, so we convert a user-space block device on the
> host into a kernel block device inside the VM, but with fewer memory copies
> than NBD thanks to SPDK vhost. Kata containers might impose higher overhead
> than a plain container, but hopefully they are lightweight enough that the
> overhead is negligible.
>
> On Fri, Sep 6, 2019 at 5:22 AM Walker, Benjamin wrote:
> >
> > On Thu, 2019-09-05 at 19:47 +0000, Szmyd, Brian wrote:
> > > Hi Paul,
> > >
> > > Rather than put the effort into a formalized document, here is a brief
> > > description of the solution I have been investigating, just to get an
> > > opinion on feasibility or even workability.
> > >
> > > Some background and a reiteration of the problem to set things up. I
> > > apologize if I reiterate anything or include details that some may
> > > already know.
> > >
> > > We are looking for a solution that allows us to write a custom bdev for
> > > the SPDK bdev layer that distributes I/O between different NVMe-oF
> > > targets that we have attached, and then present that to our application
> > > as either a raw block device or a filesystem mountpoint.
> > >
> > > This is normally (as I understand it) done by exposing a device via QEMU
> > > to a VM using the vhost target. This SPDK target has implemented the
> > > virtio-scsi device (among others) according to this spec:
> > >
> > > https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-8300021
> > >
> > > The VM kernel then uses a virtio-scsi module to attach said device into
> > > its SCSI mid-layer and have the device enumerated as a /dev/sd[a-z]+
> > > device.
> > >
> > > The problem is that QEMU virtualizes a PCIe bus for the guest kernel
> > > virtio-pci driver to discover the virtio devices and bind them to the
> > > virtio-scsi driver. There really is no other way (other than platform
> > > MMIO type devices) to attach a device to the virtio-scsi driver.
> > >
> > > SPDK exposes the virtio device to the VM via QEMU, which has written a
> > > "user space" version of the vhost bus. This driver then translates the
> > > API into the virtio-pci specification:
> > >
> > > https://github.com/qemu/qemu/blob/5d0e5694470d2952b4f257bc985cac8c89b4fd92/docs/interop/vhost-user.rst
> > >
> > > This uses an eventfd descriptor for interrupting the non-polling side of
> > > the queue and a UNIX domain socket to set up (and control) the shared
> > > memory which contains the I/O buffers and virtio queues.
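> > >
> > > For anyone unfamiliar with that last mechanism: the "shared memory" part
> > > boils down to passing a file descriptor for the memory region over the
> > > Unix socket and mmap()ing it on the other side. A generic sketch of the
> > > fd-passing half (illustrative only - this is not QEMU or SPDK code, and
> > > the names are made up):
> > >
> > >     /* Hand a memory-region fd to another process over a Unix socket. */
> > >     #define _GNU_SOURCE
> > >     #include <stdio.h>
> > >     #include <string.h>
> > >     #include <sys/mman.h>
> > >     #include <sys/socket.h>
> > >     #include <sys/uio.h>
> > >     #include <unistd.h>
> > >
> > >     static int send_region_fd(int sock, int memfd)
> > >     {
> > >         struct msghdr msg = {0};
> > >         struct iovec iov;
> > >         char payload = 'M';      /* stand-in for a real vhost-user message */
> > >         char cbuf[CMSG_SPACE(sizeof(int))] = {0};
> > >
> > >         iov.iov_base = &payload;
> > >         iov.iov_len = sizeof(payload);
> > >         msg.msg_iov = &iov;
> > >         msg.msg_iovlen = 1;
> > >         msg.msg_control = cbuf;
> > >         msg.msg_controllen = sizeof(cbuf);
> > >
> > >         struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
> > >         cmsg->cmsg_level = SOL_SOCKET;
> > >         cmsg->cmsg_type = SCM_RIGHTS;            /* pass a file descriptor */
> > >         cmsg->cmsg_len = CMSG_LEN(sizeof(int));
> > >         memcpy(CMSG_DATA(cmsg), &memfd, sizeof(int));
> > >
> > >         return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
> > >     }
> > >
> > >     int main(void)
> > >     {
> > >         int sv[2];
> > >         int memfd;
> > >
> > >         if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
> > >             return 1;
> > >         memfd = memfd_create("guest-mem", 0);    /* stand-in for guest RAM */
> > >         if (memfd < 0 || ftruncate(memfd, 1 << 20) < 0)
> > >             return 1;
> > >         if (send_region_fd(sv[0], memfd) == 0)
> > >             printf("fd for the shared region sent\n");
> > >         /* The receiver would recvmsg() on sv[1], pull the fd out of the
> > >          * control message, and mmap() it - after which both processes
> > >          * see the same pages. */
> > >         close(sv[0]); close(sv[1]); close(memfd);
> > >         return 0;
> > >     }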
> > > This is documented in SPDK's own documentation and diagrammed here:
> > >
> > > https://github.com/spdk/spdk/blob/01103b2e4dfdcf23cc2125164aa116394c8185e8/doc/vhost_processing.md
> > >
> > > If we could implement this vhost-user QEMU target as a virtio driver in
> > > the kernel as an alternative to the virtio-pci driver, it could bind an
> > > SPDK vhost into the host kernel as a virtio device and have it enumerated
> > > in the /dev/sd[a-z]+ tree for our containers to bind. A draft block
> > > diagram is attached.
> >
> > If you think of QEMU as just another user-space process, and the SPDK vhost
> > target as a user-space process, then it's clear that vhost-user is simply a
> > cross-process IPC mechanism based on shared memory. The "shared memory" part
> > is the critical part of that description - QEMU pre-registers all of the
> > memory that will be used for I/O buffers (in fact, all of the memory that is
> > mapped into the guest) with the SPDK process by sending fds across a Unix
> > domain socket.
> >
> > If you move this code into the kernel, you have to solve two issues:
> >
> > 1) What memory is it registering with the SPDK process? The kernel driver
> > has no idea which application process may route I/O to it - in fact the
> > application process may not even exist yet - so it isn't memory allocated to
> > the application process. Maybe you have a pool of kernel buffers that get
> > mapped into the SPDK process, and when the application process performs I/O
> > the kernel copies into those buffers prior to telling SPDK about them? That
> > would work, but now you're back to doing a data copy. I do think you can get
> > it down to 1 data copy instead of 2 with a scheme like this.
> >
> > 2) One of the big performance problems you're seeing is syscall overhead in
> > NBD. If you still have a kernel block device that routes messages up to the
> > SPDK process, the application process is making the same syscalls because
> > it's still interacting with a block device in the kernel, but you're right
> > that the backend SPDK implementation could be polling on shared memory rings
> > and potentially run more efficiently.
> >
> > > Since we will not have a real bus to signal the driver to probe for new
> > > devices, we can use a sysfs interface for the application to notify the
> > > driver of a new socket and eventfd pair to set up a new virtio-scsi
> > > instance (see the sketch below). Otherwise the design simply moves the
> > > vhost-user driver from the QEMU application into the host kernel itself.
> > >
> > > It's my understanding that this will avoid a lot more system calls and
> > > copies compared to exposing an iSCSI device or NBD device as we're
> > > currently discussing. Does this seem feasible?
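> > >
> > > To make the sysfs idea concrete, here is a toy sketch of what that
> > > notification hook could look like (hypothetical module and attribute
> > > names; the real vhost-user negotiation and virtio-scsi registration are
> > > only hinted at in the comments):
> > >
> > >     /* Toy module: user space echoes "<socket-path> <eventfd>" into
> > >      * /sys/kernel/vhost_user_demo/attach to request a new instance. */
> > >     #include <linux/init.h>
> > >     #include <linux/kernel.h>
> > >     #include <linux/kobject.h>
> > >     #include <linux/module.h>
> > >     #include <linux/string.h>
> > >     #include <linux/sysfs.h>
> > >
> > >     static struct kobject *demo_kobj;
> > >
> > >     static ssize_t attach_store(struct kobject *kobj,
> > >                                 struct kobj_attribute *attr,
> > >                                 const char *buf, size_t count)
> > >     {
> > >         char path[108];
> > >         int efd;
> > >
> > >         if (sscanf(buf, "%107s %d", path, &efd) != 2)
> > >             return -EINVAL;
> > >
> > >         pr_info("vhost_user_demo: attach socket %s, eventfd %d\n", path, efd);
> > >         /* A real driver would connect to the vhost-user socket, resolve
> > >          * the eventfd with eventfd_ctx_fdget(), negotiate the virtqueues,
> > >          * and register a virtio-scsi device with the SCSI midlayer. */
> > >         return count;
> > >     }
> > >
> > >     static struct kobj_attribute attach_attr = __ATTR_WO(attach);
> > >
> > >     static int __init demo_init(void)
> > >     {
> > >         demo_kobj = kobject_create_and_add("vhost_user_demo", kernel_kobj);
> > >         if (!demo_kobj)
> > >             return -ENOMEM;
> > >         return sysfs_create_file(demo_kobj, &attach_attr.attr);
> > >     }
> > >
> > >     static void __exit demo_exit(void)
> > >     {
> > >         kobject_put(demo_kobj);
> > >     }
> > >
> > >     module_init(demo_init);
> > >     module_exit(demo_exit);
> > >     MODULE_LICENSE("GPL");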
> >
> > What you really want is a "block device in user space" solution that's
> > higher performance than NBD, and while that's been tried many, many times
> > in the past, I do think there is a great opportunity here for someone. I'm
> > not sure that the interface between the block device process and the kernel
> > is best done as a modification of NBD or a wholesale replacement by
> > vhost-user-scsi, but I'd like to throw in a third option to consider - use
> > NVMe queues in shared memory as the interface instead. The NVMe queues are
> > going to be much more efficient than virtqueues for storage commands.
> >
> > > Thanks,
> > > Brian
> > >
> > > On 9/5/19, 12:32 PM, "Mittal, Rishabh" wrote:
> > >
> > > Hi Paul.
> > >
> > > Thanks for investigating it.
> > >
> > > We have one more idea floating around. Brian is going to send you a
> > > proposal shortly. If the other proposal seems feasible to you, then we
> > > can evaluate the work required for both proposals.
> > >
> > > Thanks
> > > Rishabh Mittal
> > >
> > > On 9/5/19, 11:09 AM, "Luse, Paul E" wrote:
> > >
> > > Hi,
> > >
> > > So I was able to perform the same steps here, and I think one of the keys
> > > to really seeing what's going on is to start perf top like this:
> > >
> > > "perf top --sort comm,dso,symbol -C 0" to get a more focused view by
> > > sorting on command, shared object and symbol.
> > >
> > > Attached are 2 snapshots, one with a NULL back end for nbd and one with
> > > libaio/nvme. Some notes after chatting with Ben a bit; please read
> > > through and let us know what you think:
> > >
> > > * in both cases the vast majority of the highest-overhead activities are
> > > in the kernel
> > > * the "copy_user_enhanced" symbol on the NULL case (it shows up on the
> > > other as well, but you have to scroll way down to see it) is the
> > > user/kernel space copy; nothing SPDK can do about that
> > > * the syscalls that dominate in both cases are likely something that can
> > > be improved on by changing how SPDK interacts with nbd. Ben had a couple
> > > of ideas, including (a) using libaio to interact with the nbd fd as
> > > opposed to interacting with the nbd socket, and (b) "batching" wherever
> > > possible, for example on writes to nbd, investigate not ack'ing them
> > > until some number have completed (see the sketch below)
> > > * the kernel slab* commands are likely nbd kernel driver
> > > allocations/frees in the IO path; one possibility would be to look at
> > > optimizing the nbd kernel driver for this
> > > * the libc item on the NULL chart also shows up on the libaio profile,
> > > however it is again way down the scroll so it didn't make the screenshot
> > > :) This could be a zeroing of something somewhere in the SPDK nbd driver
> > >
> > > It looks like this data supports what Ben had suspected a while back:
> > > much of the overhead we're looking at is kernel nbd. Anyway, let us know
> > > what you think, and if you want to explore any of the ideas above any
> > > further or see something else in the data that looks worthy to note.
> > >
> > > Thx
> > > Paul
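> > >
> > > On idea (b): the NBD wire format makes batching the acks straightforward,
> > > since each completion is a fixed-size reply header, so several of them
> > > can be pushed with a single writev() instead of one write() per command.
> > > A simplified sketch (write completions only - read replies must also
> > > carry data; this is not SPDK's actual nbd module):
> > >
> > >     /* Batch NBD completion headers and flush them with one syscall. */
> > >     #include <arpa/inet.h>
> > >     #include <linux/nbd.h>
> > >     #include <string.h>
> > >     #include <sys/uio.h>
> > >
> > >     #define BATCH_MAX 32
> > >
> > >     struct reply_batch {
> > >         struct nbd_reply replies[BATCH_MAX];
> > >         struct iovec iov[BATCH_MAX];
> > >         int count;
> > >     };
> > >
> > >     /* Queue one completion; 'handle' is the 8-byte cookie from the request. */
> > >     static void batch_add(struct reply_batch *b, const char *handle, int error)
> > >     {
> > >         struct nbd_reply *r = &b->replies[b->count];
> > >
> > >         r->magic = htonl(NBD_REPLY_MAGIC);
> > >         r->error = htonl(error);
> > >         memcpy(r->handle, handle, sizeof(r->handle));
> > >
> > >         b->iov[b->count].iov_base = r;
> > >         b->iov[b->count].iov_len = sizeof(*r);
> > >         b->count++;
> > >     }
> > >
> > >     /* One writev() acknowledges up to BATCH_MAX commands at once. */
> > >     static ssize_t batch_flush(struct reply_batch *b, int nbd_sock)
> > >     {
> > >         ssize_t rc = 0;
> > >
> > >         if (b->count > 0) {
> > >             rc = writev(nbd_sock, b->iov, b->count);  /* short writes not handled */
> > >             b->count = 0;
> > >         }
> > >         return rc;
> > >     }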
> > >
> > > -----Original Message-----
> > > From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Luse, Paul E
> > > Sent: Wednesday, September 4, 2019 4:27 PM
> > > To: Mittal, Rishabh; Walker, Benjamin <benjamin.walker@intel.com>;
> > > Harris, James R; spdk@lists.01.org
> > > Cc: Chen, Xiaoxi; Szmyd, Brian; Kadayam, Hari
> > > Subject: Re: [SPDK] NBD with SPDK
> > >
> > > Cool, thanks for sending this. I will try and repro tomorrow here and see
> > > what kind of results I get.
> > >
> > > Thx
> > > Paul
> > >
> > > -----Original Message-----
> > > From: Mittal, Rishabh [mailto:rimittal@ebay.com]
> > > Sent: Wednesday, September 4, 2019 4:23 PM
> > > To: Luse, Paul E; Walker, Benjamin <benjamin.walker@intel.com>;
> > > Harris, James R; spdk@lists.01.org
> > > Cc: Chen, Xiaoxi; Kadayam, Hari <hkadayam@ebay.com>; Szmyd, Brian
> > > Subject: Re: [SPDK] NBD with SPDK
> > >
> > > Avg CPU utilization is very low when I am running this.
> > >
> > > 09/04/2019 04:21:40 PM
> > > avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> > >            2.59    0.00    2.57    0.00    0.00   94.84
> > >
> > > Device  r/s   w/s       rkB/s  wkB/s      rrqm/s  wrqm/s    %rrqm  %wrqm  r_await  w_await  aqu-sz  rareq-sz  wareq-sz  svctm  %util
> > > sda     0.00  0.20      0.00   0.80       0.00    0.00      0.00   0.00   0.00     0.00     0.00    0.00      4.00      0.00   0.00
> > > sdb     0.00  0.00      0.00   0.00       0.00    0.00      0.00   0.00   0.00     0.00     0.00    0.00      0.00      0.00   0.00
> > > sdc     0.00  28846.80  0.00   191555.20  0.00    18211.00  0.00   38.70  0.00     1.03     29.64   0.00      6.64      0.03   100.00
> > > nb0     0.00  47297.00  0.00   191562.40  0.00    593.60    0.00   1.24   0.00     1.32     61.83   0.00      4.05      0
> > >
> > > On 9/4/19, 4:19 PM, "Mittal, Rishabh" wrote:
> > >
> > > I am using this command:
> > >
> > > fio --name=randwrite --ioengine=libaio --iodepth=8 --rw=write
> > > --rwmixread=0 --bsrange=4k-4k --direct=1 -filename=/dev/nbd0 --numjobs=8
> > > --runtime 120 --time_based --group_reporting
> > >
> > > I have created the device using these commands:
> > > 1. ./root/spdk/app/vhost
> > > 2. ./rpc.py bdev_aio_create /dev/sdc aio0
> > > 3. ./rpc.py start_nbd_disk aio0 /dev/nbd0
> > >
> > > I am using "perf top" to get the performance data.
> > >
> > > On 9/4/19, 4:03 PM, "Luse, Paul E" wrote:
> > >
> > > Hi Rishabh,
> > >
> > > Maybe it would help (me at least) if you described the complete & exact
> > > steps for your test - both setup of the env & test and the command to
> > > profile. Can you send that out?
> > >
> > > Thx
> > > Paul
> > >
> > > -----Original Message-----
> > > From: Mittal, Rishabh [mailto:rimittal@ebay.com]
> > > Sent: Wednesday, September 4, 2019 2:45 PM
> > > To: Walker, Benjamin; Harris, James R; spdk@lists.01.org; Luse, Paul E
> > > <paul.e.luse@intel.com>
> > > Cc: Chen, Xiaoxi; Kadayam, Hari <hkadayam@ebay.com>; Szmyd, Brian
> > > Subject: Re: [SPDK] NBD with SPDK
> > >
> > > Yes, I am using 64 queue depth with one thread in fio. I am using AIO.
> > > This profiling is for the entire system. I don't know why the spdk
> > > threads are idle.
> > >
> > > On 9/4/19, 11:08 AM, "Walker, Benjamin" <benjamin.walker@intel.com> wrote:
> > >
> > > On Fri, 2019-08-30 at 22:28 +0000, Mittal, Rishabh wrote:
> > > > I ran it again. It is with 4k writes.
> > > >
> > > >  13.16%  vhost  [.] spdk_ring_dequeue
> > > >   6.08%  vhost  [.] rte_rdtsc
> > > >   4.77%  vhost  [.] spdk_thread_poll
> > > >   2.85%  vhost  [.] _spdk_reactor_run
> > >
> > > You're doing high queue depth for at least 30 seconds while the trace
> > > runs, right? Using fio with the libaio engine on the NBD device is
> > > probably the way to go. Are you limiting the profiling to just the core
> > > where the main SPDK process is pinned?
> > > I'm asking because SPDK still appears to be mostly idle, and I suspect
> > > the time is being spent in some other thread (in the kernel). Consider
> > > capturing a profile for the entire system. It will have fio stuff in it,
> > > but the expensive stuff should still generally bubble up to the top.
> > >
> > > Thanks,
> > > Ben
> > >
> > > > On 8/29/19, 6:05 PM, "Mittal, Rishabh" <rimittal@ebay.com> wrote:
> > > >
> > > > I got the profile from the first run.
> > > >
> > > >  27.91%  vhost               [.] spdk_ring_dequeue
> > > >  12.94%  vhost               [.] rte_rdtsc
> > > >  11.00%  vhost               [.] spdk_thread_poll
> > > >   6.15%  vhost               [.] _spdk_reactor_run
> > > >   4.35%  [kernel]            [k] syscall_return_via_sysret
> > > >   3.91%  vhost               [.] _spdk_msg_queue_run_batch
> > > >   3.38%  vhost               [.] _spdk_event_queue_run_batch
> > > >   2.83%  [unknown]           [k] 0xfffffe000000601b
> > > >   1.45%  vhost               [.] spdk_thread_get_from_ctx
> > > >   1.20%  [kernel]            [k] __fget
> > > >   1.14%  libpthread-2.27.so  [.] __libc_read
> > > >   1.00%  libc-2.27.so        [.] 0x000000000018ef76
> > > >   0.99%  libc-2.27.so        [.] 0x000000000018ef79
> > > >
> > > > Thanks
> > > > Rishabh Mittal
> > > >
> > > > On 8/19/19, 7:42 AM, "Luse, Paul E" <paul.e.luse@intel.com> wrote:
> > > >
> > > > That's great. Keep an eye out for the items Ben mentions below - at
> > > > least the first one should be quick to implement, and compare both
> > > > profile data and measured performance.
> > > >
> > > > Don't forget about the community meetings either; they're a great
> > > > place to chat about these kinds of things.
> > > > https://spdk.io/community/
> > > > The next one is tomorrow morning US time.
> > > >
> > > > Thx
> > > > Paul
> > > >
> > > > -----Original Message-----
> > > > From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Mittal,
> > > > Rishabh via SPDK
> > > > Sent: Thursday, August 15, 2019 6:50 PM
> > > > To: Harris, James R; Walker, Benjamin <benjamin.walker@intel.com>;
> > > > spdk@lists.01.org
> > > > Cc: Mittal, Rishabh; Chen, Xiaoxi <xiaoxchen@ebay.com>; Szmyd, Brian;
> > > > Kadayam, Hari <hkadayam@ebay.com>
> > > > Subject: Re: [SPDK] NBD with SPDK
> > > >
> > > > Thanks. I will get the profiling done by next week.
> > > >
> > > > On 8/15/19, 6:26 PM, "Harris, James R" <james.r.harris@intel.com> wrote:
> > > >
> > > > On 8/15/19, 4:34 PM, "Mittal, Rishabh" <rimittal@ebay.com> wrote:
> > > >
> > > > Hi Jim
> > > >
> > > > What tool do you use for profiling?
> > > >
> > > > Hi Rishabh,
> > > >
> > > > Mostly I just use "perf top".
> > > >
> > > > -Jim
> > > >
> > > > Thanks
> > > > Rishabh Mittal
> > > >
> > > > On 8/14/19, 9:54 AM, "Harris, James R" <james.r.harris@intel.com> wrote:
> > > >
> > > > On 8/14/19, 9:18 AM, "Walker, Benjamin" <benjamin.walker@intel.com> wrote:
> > > >
> > > > When an I/O is performed in the process initiating the I/O to a file,
> > > > the data goes into the OS page cache buffers at a layer far above the
> > > > bio stack (somewhere up in VFS). If SPDK were to reserve some memory
> > > > and hand it off to your kernel driver, your kernel driver would still
> > > > need to copy it to that location out of the page cache buffers. We
> > > > can't safely share the page cache buffers with a user space process.
> > > >
> > > > I think Rishabh was suggesting that SPDK reserve the virtual address
> > > > space only. Then the kernel could map the page cache buffers into that
> > > > virtual address space. That would not require a data copy, but it
> > > > would require the mapping operations.
> > > >
> > > > I think the profiling data would be really helpful - to quantify how
> > > > much of the 50us is due to copying the 4KB of data. That can help
> > > > drive the next steps on how to optimize the SPDK NBD module.
> > > >
> > > > Thanks,
> > > >
> > > > -Jim
> > > >
> > > > As Paul said, I'm skeptical that the memcpy is significant in the
> > > > overall performance you're measuring. I encourage you to go look at
> > > > some profiling data and confirm that the memcpy is really showing up.
> > > > I suspect the overhead is instead primarily in these spots:
> > > >
> > > > 1) Dynamic buffer allocation in the SPDK NBD backend.
> > > >
> > > > As Paul indicated, the NBD target is dynamically allocating memory for
> > > > each I/O. The NBD backend wasn't designed to be fast - it was designed
> > > > to be simple. Pooling would be a lot faster and is something fairly
> > > > easy to implement (see the pooling sketch below).
> > > >
> > > > 2) The way SPDK does the syscalls when it implements the NBD backend.
> > > >
> > > > Again, the code was designed to be simple, not high performance. It
> > > > simply calls read() and write() on the socket for each command. There
> > > > are much higher performance ways of doing this; they're just more
> > > > complex to implement.
> > > >
> > > > 3) The lack of multi-queue support in NBD.
> > > >
> > > > Every I/O is funneled through a single sockpair up to user space. That
> > > > means there is locking going on. I believe this is just a limitation
> > > > of NBD today - it doesn't plug into the block-mq stuff in the kernel
> > > > and expose multiple sockpairs. But someone more knowledgeable on the
> > > > kernel stack would need to take a look.
> > > >
> > > > Thanks,
> > > > Ben
> > > >
> > > > > A couple of things that I am not really sure about in this flow are:
> > > > > 1. How memory registration is going to work with the RDMA driver.
> > > > > 2. What changes are required in SPDK memory management.
> > > > >
> > > > > Thanks
> > > > > Rishabh Mittal
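> > > >
> > > > On Ben's point (1): "pooling" just means allocating the I/O buffers
> > > > once, up front, and recycling them, so the hot path never touches the
> > > > allocator. A plain-C sketch of the idea (illustrative only - inside
> > > > SPDK itself, spdk_mempool already provides this kind of pool):
> > > >
> > > >     /* Preallocated buffer pool: O(1) get/put, no malloc in the I/O path. */
> > > >     #include <stdlib.h>
> > > >
> > > >     #define POOL_BUFS 256
> > > >     #define BUF_SIZE  (128 * 1024)
> > > >
> > > >     struct buf_pool {
> > > >         void *bufs[POOL_BUFS];
> > > >         int   top;               /* number of free buffers on the stack */
> > > >     };
> > > >
> > > >     static int pool_init(struct buf_pool *p)
> > > >     {
> > > >         p->top = 0;
> > > >         for (int i = 0; i < POOL_BUFS; i++) {
> > > >             void *b = malloc(BUF_SIZE);      /* allocated once, up front */
> > > >             if (b == NULL)
> > > >                 return -1;
> > > >             p->bufs[p->top++] = b;
> > > >         }
> > > >         return 0;
> > > >     }
> > > >
> > > >     static void *pool_get(struct buf_pool *p)
> > > >     {
> > > >         return p->top > 0 ? p->bufs[--p->top] : NULL;   /* NULL: pool empty */
> > > >     }
> > > >
> > > >     static void pool_put(struct buf_pool *p, void *buf)
> > > >     {
> > > >         p->bufs[p->top++] = buf;
> > > >     }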
> >
> > _______________________________________________
> > SPDK mailing list
> > SPDK@lists.01.org
> > https://lists.01.org/mailman/listinfo/spdk
>
> --
> Regards
> Huang Zhiteng

--
Regards
Huang Zhiteng