From: 松本周平 / MATSUMOTO,SHUUHEI <shuhei.matsumoto.xt at hitachi.com>
To: spdk@lists.01.org
Subject: Re: [SPDK] Design policy to support extended LBA and T10 PI in bdevs.
Date: Thu, 27 Sep 2018 06:40:46 +0000

Hi Changpeng,

Thanks for your feedback.

> There are, of course, two ways to handle per-block metadata. The first is to make
> it transparent to the block layer by extending the block size (typically by 8
> bytes). For this case, which I'll call case #1, I don't necessarily think that
> any changes need to be made to the bdev layer - it should already support 520
> byte blocks, for example.
> Yes, no change is necessary in the bdev layer, but currently 520 byte blocks
> cannot be made visible from the nvmf-target to the nvmf-initiator.
>
> What I want to do first is to make the SPDK nvmf-target support (virtual) PI, and
> its expected backends will be nvme bdevs first.

Hi Shuhei, I understand your point here. The question I have is: how can we map the
virtual namespace bdev to the real backend, especially when the real backend can't
support an extended LBA format? Or the NVMe-oF virtual namespace can support
different types of PI, and if SPDK formats the NVMe backend with TYPE0, SPDK can
fill in the PI according to the specification.

I think one option we can take is a new bdev module that strips metadata from reads
and inserts metadata into writes. We can stack this bdev on top of a bdev which
doesn't support extended LBA.

Besides, new bdev APIs which
- support separate metadata, and
- convert separate metadata into extended LBA if the backend bdev is formatted with
  extended LBA
may be nice to have.

Separate metadata is easy to use for an upper layer that is not aware of PI, but
NVMe-oF supports only extended LBA, and the type of metadata is fixed at format time.
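To show what I mean, here is a very rough sketch of the conversion in the write
(insert) direction. The helper name is made up and the block and metadata sizes are
parameters; for a 512 + 8 format the extended block is 520 bytes:

#include <stdint.h>
#include <string.h>

/* Hypothetical helper: build an extended-LBA (interleaved) buffer from a data
 * buffer and a separate metadata buffer. This is what the new bdev module (or a
 * conversion API) would do for the write/insert direction.
 */
static void
interleave_md(uint8_t *ext_buf, const uint8_t *data, const uint8_t *md,
              uint32_t data_block_size, uint32_t md_size, uint64_t num_blocks)
{
        uint32_t ext_block_size = data_block_size + md_size; /* e.g. 512 + 8 = 520 */

        for (uint64_t i = 0; i < num_blocks; i++) {
                /* data part of the extended block */
                memcpy(ext_buf + i * ext_block_size,
                       data + i * data_block_size, data_block_size);
                /* metadata appended right after the data of the same block */
                memcpy(ext_buf + i * ext_block_size + data_block_size,
                       md + i * md_size, md_size);
        }
}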
I hope these align with Ben's feedback and your thoughts.

Thanks,
Shuhei

________________________________
From: SPDK on behalf of Liu, Changpeng
Sent: September 27, 2018 14:21
To: Storage Performance Development Kit
Subject: [!]Re: [SPDK] Design policy to support extended LBA and T10 PI in bdevs.

> -----Original Message-----
> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of 松本周平 /
> MATSUMOTO,SHUUHEI
> Sent: Wednesday, September 26, 2018 11:13 AM
> To: spdk(a)lists.01.org
> Subject: Re: [SPDK] Design policy to support extended LBA and T10 PI in bdevs.
>
> Hi Ben and All,
>
> I updated slightly. Will you read this mail and discard the previous one?
>
> There are, of course, two ways to handle per-block metadata. The first is to make
> it transparent to the block layer by extending the block size (typically by 8
> bytes). For this case, which I'll call case #1, I don't necessarily think that
> any changes need to be made to the bdev layer - it should already support 520
> byte blocks, for example.
> Yes, no change is necessary in the bdev layer, but currently 520 byte blocks
> cannot be made visible from the nvmf-target to the nvmf-initiator.
>
> What I want to do first is to make the SPDK nvmf-target support (virtual) PI, and
> its expected backends will be nvme bdevs first.

Hi Shuhei, I understand your point here. The question I have is: how can we map the
virtual namespace bdev to the real backend, especially when the real backend can't
support an extended LBA format? Or the NVMe-oF virtual namespace can support
different types of PI, and if SPDK formats the NVMe backend with TYPE0, SPDK can
fill in the PI according to the specification.

> Then I want to use crypto or raid bdevs on top of nvme bdevs as backends next.
> The bdev APIs shouldn't understand the meaning of the metadata at all (just that
> it exists and its size), and therefore can't strip or insert it. The bdev module
> implementing T10 PI should strip and insert automatically.
> OK, with this, bdevs will be able to report the existence and size of metadata,
> but additional information will be necessary to create virtual namespaces in
> which PI is enabled.
> Do you think adding optional parameters related to PI to the
> nvmf_subsystem_add_ns RPC is reasonable?
>
> Or should the bdev module provide detailed PI information at virtual namespace
> creation?
>
> The second case is as you said, where the metadata is stored in a separate place
> in memory. We support this in the NVMe driver via special command submission
> functions that additionally take a pointer to the associated metadata. I think
> support for this second case, case #2, is what you are interested in
> implementing.
> The nvmf-target supports only interleaved metadata and doesn't support a separate
> metadata buffer. Which option sounds reasonable to you?
>
> Option 1:
> The nvmf-target transfers interleaved metadata directly to the backend bdev
> through the current bdev APIs.
> This keeps zero-copy and is good for performance.
>
> Option 2:
> The nvmf-target separates interleaved metadata into two buffers and then
> transfers the two buffers to the backend bdev through new bdev APIs.
>
> > ELBA awareness means that the upper layer must extract data from interleaved
> > extended blocks.
> >
> > Q3. To support any application which requires separate metadata, is providing
> > the following APIs reasonable?
> > - add md_buf and md_len to parameters
> > - add the suffix "_md" to the function name.
> Sounds good.
> OK, I will do.
> But NVMe provides the PI check (PRCHK), app_tag/mask, and ref_tag fields in the
> read/write command to applications.
> The nvme bdev will be able to hold the PRCHK setting internally, but it will have
> to provide app_tag/mask and ref_tag in the bdev APIs from an end-to-end
> perspective.
> Would adding app_tag/mask and ref_tag to the new APIs be reasonable too?
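To make this concrete, the new variants might be declared roughly as follows. The
names, the type of md_len, and whether the PI fields belong in the parameter list or
in a per-I/O options structure are all tentative; only the existing parameters and
spdk_bdev_io_completion_cb come from the current bdev API.

#include "spdk/bdev.h"

/* Tentative prototypes only - names and parameter order are not final. */
int spdk_bdev_read_blocks_md(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
                             void *buf, void *md_buf, uint64_t md_len,
                             uint64_t offset_blocks, uint64_t num_blocks,
                             spdk_bdev_io_completion_cb cb, void *cb_arg);

int spdk_bdev_write_blocks_md(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
                              void *buf, void *md_buf, uint64_t md_len,
                              uint64_t offset_blocks, uint64_t num_blocks,
                              spdk_bdev_io_completion_cb cb, void *cb_arg);

/* If the end-to-end PI fields are exposed as well, possibly something like: */
int spdk_bdev_write_blocks_md_pi(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
                                 void *buf, void *md_buf, uint64_t md_len,
                                 uint64_t offset_blocks, uint64_t num_blocks,
                                 uint16_t app_tag, uint16_t app_tag_mask, uint32_t ref_tag,
                                 spdk_bdev_io_completion_cb cb, void *cb_arg);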
> There is also the question of what is actually stored in the metadata. T10 PI is
> the obvious choice, but really anything can be put there. Below you mention
> writing a bdev module that transparently handles T10 PI and I think that's the
> right design choice. Basic metadata support can be in the bdev layer, but the
> T10 PI code can be in its own bdev module.
> I got it.
> I haven't thought through all of the ramifications here, but I think it might
> depend on whether we're doing case #1 or case #2 from above. In case #1, I'd
> expect it to return the total length of data + metadata (the user may not even
> be using the extra 8 bytes for metadata at all). For case #2, I'd expect it to
> return just the length of the data.
> I got it.
>
> Thanks,
> Shuhei
>
> ________________________________
> From: 松本周平 / MATSUMOTO,SHUUHEI
> Sent: September 26, 2018 8:24
> To: spdk(a)lists.01.org
> Subject: RE: [SPDK] Design policy to support extended LBA and T10 PI in bdevs.
>
> Hi Ben,
>
> Thanks for your feedback. It made things much clearer for me.
> I have a few additional questions. Will you provide your feedback again?
>
> There are, of course, two ways to handle per-block metadata. The first is to make
> it transparent to the block layer by extending the block size (typically by 8
> bytes). For this case, which I'll call case #1, I don't necessarily think that
> any changes need to be made to the bdev layer - it should already support 520
> byte blocks, for example.
> Yes, no change is necessary in the bdev layer, but currently 520 byte blocks
> cannot be made visible from the nvmf-target to the nvmf-initiator.
>
> What I want to do first is to make the SPDK nvmf-target support (virtual) PI, and
> its expected backends will be nvme bdevs first.
> Then I want to use crypto or raid bdevs on top of nvme bdevs as backends next.
> The bdev APIs shouldn't understand the meaning of the metadata at all (just that
> it exists and its size), and therefore can't strip or insert it. The bdev module
> implementing T10 PI should strip and insert automatically.
> OK, with this, bdevs will be able to report the existence and size of metadata,
> but additional information will be necessary to create virtual namespaces with PI.
> Do you think adding optional parameters related to PI to the
> nvmf_subsystem_add_ns RPC is reasonable?
> Or will the bdev module have detailed PI information and provide it at virtual
> namespace creation?
>
> The second case is as you said, where the metadata is stored in a separate place
> in memory. We support this in the NVMe driver via special command submission
> functions that additionally take a pointer to the associated metadata. I think
> support for this second case, case #2, is what you are interested in
> implementing.
> The nvmf-target supports only interleaved metadata and doesn't support a separate
> metadata buffer. Which option sounds reasonable to you?
>
> Option 1:
> The nvmf-target separates interleaved metadata into two buffers and then
> transfers the two buffers to the backend bdev through new bdev APIs.
>
> Option 2:
> The nvmf-target transfers interleaved metadata directly to the backend bdev
> through the current bdev APIs.
> But separate metadata may be better for the crypto bdev and raid bdev.
> Hence for this option I may have to provide a special bdev module to convert
> between interleaved metadata and separate metadata.
>
> There is also the question of what is actually stored in the metadata. T10 PI is
> the obvious choice, but really anything can be put there. Below you mention
> writing a bdev module that transparently handles T10 PI and I think that's the
> right design choice. Basic metadata support can be in the bdev layer, but the
> T10 PI code can be in its own bdev module.
> I got it.
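(For reference, when T10 PI is what is stored there, the 8 bytes of per-block
metadata carry the DIF tuple. The struct below is only illustrative; the field
widths follow the T10 spec and the fields are big-endian on the wire.)

#include <stdint.h>

/* Illustrative layout of the 8-byte T10 DIF tuple stored in the per-block
 * metadata. */
struct t10_dif_tuple {
        uint16_t guard_tag; /* CRC-16 of the data block */
        uint16_t app_tag;   /* application tag, checked against app_tag/mask */
        uint32_t ref_tag;   /* reference tag, typically the lower 32 bits of the LBA */
};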
> I haven't thought through all of the ramifications here, but I think it might
> depend on whether we're doing case #1 or case #2 from above. In case #1, I'd
> expect it to return the total length of data + metadata (the user may not even
> be using the extra 8 bytes for metadata at all). For case #2, I'd expect it to
> return just the length of the data.
> I got it.
>
> > ELBA awareness means that the upper layer must extract data from interleaved
> > extended blocks.
> >
> > Q3. To support any application which requires separate metadata, is providing
> > the following APIs reasonable?
> > - add md_buf and md_len to parameters
> > - add the suffix "_md" to the function name.
> Sounds good.
> OK, I will do.
>
> Thanks,
> Shuhei
>
> ________________________________
> From: SPDK on behalf of Walker, Benjamin
> Sent: September 26, 2018 5:13
> To: spdk(a)lists.01.org
> Subject: [!]Re: [SPDK] Design policy to support extended LBA and T10 PI in bdevs.
>
> On Tue, 2018-09-25 at 00:27 +0000, 松本周平 / MATSUMOTO,SHUUHEI wrote:
> > Sorry for the resend, I revised it a little.
> >
> > Hi All,
> >
> > I've worked on extended LBA in bdevs first.
> > I will do T10 PI on top of the extended LBA next.
> > I expect some applications or users will need separate metadata and will do
> > this too.
>
> There are, of course, two ways to handle per-block metadata. The first is to make
> it transparent to the block layer by extending the block size (typically by 8
> bytes). For this case, which I'll call case #1, I don't necessarily think that
> any changes need to be made to the bdev layer - it should already support 520
> byte blocks, for example.
>
> The second case is as you said, where the metadata is stored in a separate place
> in memory. We support this in the NVMe driver via special command submission
> functions that additionally take a pointer to the associated metadata. I think
> support for this second case, case #2, is what you are interested in
> implementing.
>
> There is also the question of what is actually stored in the metadata. T10 PI is
> the obvious choice, but really anything can be put there. Below you mention
> writing a bdev module that transparently handles T10 PI and I think that's the
> right design choice. Basic metadata support can be in the bdev layer, but the
> T10 PI code can be in its own bdev module.
>
> > About extended LBA in bdevs, I would like to hear any feedback before
> > submitting patches.
> > Any feedback is very much appreciated.
> >
> > Q1. Which length should spdk_bdev_get_block_size(bdev) describe?
> >   Option 1: length of the extended block (data + metadata)
> >   Option 2: length of the data block only; the user will get the length of the
> >   metadata from spdk_bdev_get_md_size(bdev)
> >   Or any other idea?
>
> I haven't thought through all of the ramifications here, but I think it might
> depend on whether we're doing case #1 or case #2 from above. In case #1, I'd
> expect it to return the total length of data + metadata (the user may not even
> be using the extra 8 bytes for metadata at all). For case #2, I'd expect it to
> return just the length of the data.
>
> > The current implementation is Option 1, but the NVMe-oF target cuts off the
> > size of the metadata now even if metadata is enabled.
> > Keeping the current implementation, Option 1, sounds reasonable to me.
> >
> > Even if we take Option 1, we will need spdk_bdev_get_md_size(bdev) anyway.
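To make the difference concrete for callers, buffer sizing under the two options
would look roughly like this (spdk_bdev_get_md_size() is the accessor proposed in
Q1 and does not exist yet; the helper name is made up):

#include "spdk/bdev.h"
#include <stdbool.h>

/* Proposed accessor from Q1 - it does not exist in the bdev API today. */
uint32_t spdk_bdev_get_md_size(const struct spdk_bdev *bdev);

/* Size of the buffer needed for num_blocks extended (data + metadata) blocks.
 * Which expression is right depends on the answer to Q1. */
static uint64_t
ext_io_buffer_size(const struct spdk_bdev *bdev, uint64_t num_blocks,
                   bool block_size_includes_md)
{
        if (block_size_includes_md) {
                /* Option 1: spdk_bdev_get_block_size() already returns 520 for a
                 * 512 + 8 format, so nothing has to be added. */
                return num_blocks * spdk_bdev_get_block_size(bdev);
        }

        /* Option 2: spdk_bdev_get_block_size() returns 512 and the metadata size
         * is reported separately. */
        return num_blocks *
               (spdk_bdev_get_block_size(bdev) + spdk_bdev_get_md_size(bdev));
}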
> > Q2. Which behavior should the bdev IO APIs have by default?
> >   Option 1: READ_PASS and WRITE_PASS (the upper layer must be aware of extended
> >   LBA by default)
> >   Option 2: READ_STRIP and WRITE_INSERT (extended LBA is transparent to the
> >   upper layer by default)
> >   Or any other idea?
> >
> > READ_STRIP reads data and metadata from the target, discards the metadata, and
> > transfers only data to the upper layer.
> > WRITE_INSERT transfers only data from the upper layer, adds metadata, and
> > writes both data and metadata to the target.
> > READ_PASS reads data and metadata from the target and transfers both data and
> > metadata to the upper layer.
> > WRITE_PASS transfers data and metadata from the upper layer and writes both
> > data and metadata to the target.
> >
> > Option 1 looks reasonable to me. I will be able to provide a new bdev module
> > to use extended LBA bdevs transparently.
> > The new bdev module will do READ_STRIP and WRITE_INSERT internally.
> > If we take Option 2, we will have to provide the set of ELBA-aware APIs.
>
> The bdev APIs shouldn't understand the meaning of the metadata at all (just that
> it exists and its size), and therefore can't strip or insert it. The bdev module
> implementing T10 PI should strip and insert automatically.
>
> > ELBA awareness means that the upper layer must extract data from interleaved
> > extended blocks.
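For illustration, the extraction that an ELBA-aware upper layer (or the new bdev
module doing READ_STRIP) has to perform is roughly the following; the helper name
is made up and the block and metadata sizes are parameters:

#include <stdint.h>
#include <string.h>

/* READ_STRIP direction: copy only the data portion of each extended block out of
 * an interleaved buffer, discarding the per-block metadata. */
static void
strip_md(uint8_t *data_out, const uint8_t *ext_buf,
         uint32_t data_block_size, uint32_t md_size, uint64_t num_blocks)
{
        for (uint64_t i = 0; i < num_blocks; i++) {
                memcpy(data_out + i * data_block_size,
                       ext_buf + i * (data_block_size + md_size),
                       data_block_size);
        }
}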
> > Q3. To support any application which requires separate metadata, is providing
> > the following APIs reasonable?
> > - add md_buf and md_len to parameters
> > - add the suffix "_md" to the function name.
>
> Sounds good.
>
> > About T10 PI, I will send questions as separate mail later.
> >
> > Bdev provides the following APIs:
> > - int spdk_bdev_read(desc, ch, buf, offset, nbytes, cb, cb_arg)
> > - int spdk_bdev_readv(desc, ch, iov, iovcnt, offset, nbytes, cb, cb_arg)
> > - int spdk_bdev_write(desc, ch, buf, offset, nbytes, cb, cb_arg)
> > - int spdk_bdev_writev(desc, ch, iov, iovcnt, offset, nbytes, cb, cb_arg)
> > - int spdk_bdev_write_zeroes(desc, ch, offset, len, cb, cb_arg)
> > - int spdk_bdev_unmap(desc, ch, offset, nbytes, cb, cb_arg)
> > - int spdk_bdev_reset(desc, ch, cb, cb_arg)
> > - int spdk_bdev_flush(desc, ch, offset, length, cb, cb_arg)
> > - int spdk_bdev_nvme_admin_passthru(desc, ch, cmd, buf, nbytes, cb, cb_arg)
> > - int spdk_bdev_nvme_io_passthru(bdev_desc, ch, cmd, buf, nbytes, cb, cb_arg)
> > - int spdk_bdev_nvme_io_passthru_md(bdev_desc, ch, cmd, buf, nbytes, md_buf,
> >   md_len, cb, cb_arg)
> > - uint32_t spdk_bdev_get_block_size(bdev)
> > - uint64_t spdk_bdev_get_num_blocks(bdev)
> > - int spdk_bdev_read_blocks(desc, ch, buf, offset_blocks, num_blocks, cb, cb_arg)
> > - int spdk_bdev_readv_blocks(desc, ch, iov, iovcnt, offset_blocks, num_blocks,
> >   cb, cb_arg)
> > - int spdk_bdev_write_blocks(desc, ch, buf, offset_blocks, num_blocks, cb, cb_arg)
> > - int spdk_bdev_writev_blocks(desc, ch, iov, iovcnt, offset_blocks, num_blocks,
> >   cb, cb_arg)
> > - int spdk_bdev_write_zeroes_blocks(desc, ch, offset_blocks, num_blocks, cb,
> >   cb_arg)
> > - int spdk_bdev_unmap_blocks(desc, ch, offset_blocks, num_blocks, cb, cb_arg)
> > - int spdk_bdev_flush_blocks(desc, ch, offset_blocks, num_blocks, cb, cb_arg)
> >
> > Thanks,
> > Shuhei
> >
> > ________________________________
> > From: 松本周平 / MATSUMOTO,SHUUHEI
> > Sent: September 25, 2018 9:14
> > To: spdk(a)lists.01.org
> > Subject: Design policy to support extended LBA and T10 PI in bdevs.
> >
> > Hi All,
> >
> > I've worked on extended LBA in bdevs first.
> > I will do T10 PI on top of the extended LBA next.
> > I expect some applications or users will need separate metadata and will do
> > this too.
> >
> > About extended LBA in bdevs, I would like to hear any feedback before
> > submitting patches.
> > Any feedback is very much appreciated.
> >
> > Q1. Which length should spdk_bdev_get_block_size(bdev) describe?
> >   Option 1: length of the extended block (data + metadata)
> >   Option 2: length of the data block only; the user will get the length of the
> >   metadata from spdk_bdev_get_md_size(bdev)
> >   Or any other idea?
> >
> > The current implementation is Option 1, but the NVMe-oF target cuts off the
> > size of the metadata now even if metadata is enabled.
> > Keeping the current implementation, Option 1, sounds reasonable to me.
> > Even if we take Option 1, we will need spdk_bdev_get_md_size(bdev) anyway.
> >
> > Q2. Which behavior should the bdev IO APIs have by default?
> >   Option 1: READ_PASS and WRITE_PASS (the upper layer must be aware of extended
> >   LBA by default)
> >   Option 2: READ_STRIP and WRITE_INSERT (extended LBA is transparent to the
> >   upper layer by default)
> >   Or any other idea?
> >
> > READ_STRIP reads data and metadata from the target, discards the metadata, and
> > transfers only data to the upper layer.
> > WRITE_INSERT transfers only data from the upper layer, adds metadata, and
> > writes both data and metadata to the target.
> > READ_PASS reads data and metadata from the target and transfers both data and
> > metadata to the upper layer.
> > WRITE_PASS transfers data and metadata from the upper layer and writes both
> > data and metadata to the target.
> > Option 1 looks reasonable to me. I will be able to provide a new bdev module
> > to use extended LBA bdevs transparently.
> > The new bdev module will do READ_STRIP and WRITE_INSERT internally.
> > If we take Option 2, we will have to provide the set of ELBA-aware APIs.
> >
> > Q3. To support any application which requires separate metadata, is providing
> > the following APIs reasonable?
> > - add md_buf and md_len to parameters
> > - add the suffix "_md" to the function name.
> >
> > About T10 PI, I will send questions as separate mail later.
> >
> > Bdev provides the following APIs:
> > - int spdk_bdev_read(desc, ch, buf, offset, nbytes, cb, cb_arg)
> > - int spdk_bdev_readv(desc, ch, iov, iovcnt, offset, nbytes, cb, cb_arg)
> > - int spdk_bdev_write(desc, ch, buf, offset, nbytes, cb, cb_arg)
> > - int spdk_bdev_writev(desc, ch, iov, iovcnt, offset, nbytes, cb, cb_arg)
> > - int spdk_bdev_write_zeroes(desc, ch, offset, len, cb, cb_arg)
> > - int spdk_bdev_unmap(desc, ch, offset, nbytes, cb, cb_arg)
> > - int spdk_bdev_reset(desc, ch, cb, cb_arg)
> > - int spdk_bdev_flush(desc, ch, offset, length, cb, cb_arg)
> > - int spdk_bdev_nvme_admin_passthru(desc, ch, cmd, buf, nbytes, cb, cb_arg)
> > - int spdk_bdev_nvme_io_passthru(bdev_desc, ch, cmd, buf, nbytes, cb, cb_arg)
> > - int spdk_bdev_nvme_io_passthru_md(bdev_desc, ch, cmd, buf, nbytes, md_buf,
> >   md_len, cb, cb_arg)
> > - uint32_t spdk_bdev_get_block_size(bdev)
> > - uint64_t spdk_bdev_get_num_blocks(bdev)
> > - int spdk_bdev_read_blocks(desc, ch, buf, offset_blocks, num_blocks, cb, cb_arg)
> > - int spdk_bdev_readv_blocks(desc, ch, iov, iovcnt, offset_blocks, num_blocks,
> >   cb, cb_arg)
> > - int spdk_bdev_write_blocks(desc, ch, buf, offset_blocks, num_blocks, cb, cb_arg)
> > - int spdk_bdev_writev_blocks(desc, ch, iov, iovcnt, offset_blocks, num_blocks,
> >   cb, cb_arg)
> > - int spdk_bdev_write_zeroes_blocks(desc, ch, offset_blocks, num_blocks, cb,
> >   cb_arg)
> > - int spdk_bdev_unmap_blocks(desc, ch, offset_blocks, num_blocks, cb, cb_arg)
> > - int spdk_bdev_flush_blocks(desc, ch, offset_blocks, num_blocks, cb, cb_arg)
> >
> > Thanks,
> > Shuhei
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk