* Re: [PATCH v3 23/26] xprtrdma: Move cqe to struct rpcrdma_mr
@ 2021-04-21 16:01 kernel test robot
From: kernel test robot @ 2021-04-21 16:01 UTC
To: kbuild
CC: kbuild-all@lists.01.org
In-Reply-To: <161885544311.38598.17973108446796585522.stgit@manet.1015granger.net>
References: <161885544311.38598.17973108446796585522.stgit@manet.1015granger.net>
TO: Chuck Lever <chuck.lever@oracle.com>
TO: trondmy@hammerspace.com
CC: linux-nfs@vger.kernel.org
CC: linux-rdma@vger.kernel.org
Hi Chuck,
I love your patch! Perhaps something to improve:
[auto build test WARNING on tip/perf/core]
[also build test WARNING on v5.12-rc8]
[cannot apply to nfs/linux-next next-20210421]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Chuck-Lever/NFS-RDMA-client-patches-for-next/20210420-020504
base: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 5deac80d4571dffb51f452f0027979d72259a1b9
:::::: branch date: 2 days ago
:::::: commit date: 2 days ago
config: x86_64-randconfig-m001-20210421 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
If you fix the issue, kindly add the following tags as appropriate
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
smatch warnings:
net/sunrpc/xprtrdma/frwr_ops.c:546 frwr_unmap_sync() error: potentially dereferencing uninitialized 'last'.
net/sunrpc/xprtrdma/frwr_ops.c:647 frwr_unmap_async() error: potentially dereferencing uninitialized 'last'.
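The warning is mechanical: 'last' is declared without an initializer and is
only assigned inside the while loop that pops MRs off req->rl_registered, so
if that loop body never ran, the assignment through 'last' after the loop
would read an uninitialized pointer. A minimal reduction of the pattern
(illustrative C only, not the kernel code):

struct wr {
	struct wr *next;
	void (*done)(struct wr *);
};

static void finish(struct wr *w)
{
	(void)w;
}

void chain(struct wr *head)
{
	struct wr *last;		/* no initializer, like 'last' above */

	while (head) {			/* zero iterations when head == NULL... */
		last = head;
		head = head->next;
	}
	last->done = finish;		/* ...making this dereference undefined */
}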
vim +/last +546 net/sunrpc/xprtrdma/frwr_ops.c
847568942f93e0 Chuck Lever 2019-06-19 493
847568942f93e0 Chuck Lever 2019-06-19 494 /**
847568942f93e0 Chuck Lever 2019-06-19 495 * frwr_unmap_sync - invalidate memory regions that were registered for @req
847568942f93e0 Chuck Lever 2019-06-19 496 * @r_xprt: controlling transport instance
847568942f93e0 Chuck Lever 2019-06-19 497 * @req: rpcrdma_req with a non-empty list of MRs to process
9d6b0409788287 Chuck Lever 2016-06-29 498 *
847568942f93e0 Chuck Lever 2019-06-19 499 * Sleeps until it is safe for the host CPU to access the previously mapped
d8099feda4833b Chuck Lever 2019-06-19 500 * memory regions. This guarantees that registered MRs are properly fenced
d8099feda4833b Chuck Lever 2019-06-19 501 * from the server before the RPC consumer accesses the data in them. It
d8099feda4833b Chuck Lever 2019-06-19 502 * also ensures proper Send flow control: waking the next RPC waits until
d8099feda4833b Chuck Lever 2019-06-19 503 * this RPC has relinquished all its Send Queue entries.
c9918ff56dfb17 Chuck Lever 2015-12-16 504 */
847568942f93e0 Chuck Lever 2019-06-19 505 void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
c9918ff56dfb17 Chuck Lever 2015-12-16 506 {
d34ac5cd3a73aa Bart Van Assche 2018-07-18 507 struct ib_send_wr *first, **prev, *last;
5ecef9c8436695 Chuck Lever 2020-11-09 508 struct rpcrdma_ep *ep = r_xprt->rx_ep;
d34ac5cd3a73aa Bart Van Assche 2018-07-18 509 const struct ib_send_wr *bad_wr;
ce5b3717828356 Chuck Lever 2017-12-14 510 struct rpcrdma_frwr *frwr;
96ceddea3710f6 Chuck Lever 2017-12-14 511 struct rpcrdma_mr *mr;
847568942f93e0 Chuck Lever 2019-06-19 512 int rc;
c9918ff56dfb17 Chuck Lever 2015-12-16 513
451d26e151f079 Chuck Lever 2017-06-08 514 /* ORDER: Invalidate all of the MRs first
c9918ff56dfb17 Chuck Lever 2015-12-16 515 *
c9918ff56dfb17 Chuck Lever 2015-12-16 516 * Chain the LOCAL_INV Work Requests and post them with
c9918ff56dfb17 Chuck Lever 2015-12-16 517 * a single ib_post_send() call.
c9918ff56dfb17 Chuck Lever 2015-12-16 518 */
ce5b3717828356 Chuck Lever 2017-12-14 519 frwr = NULL;
a100fda1a2e1fa Chuck Lever 2016-11-29 520 prev = &first;
265a38d4611360 Chuck Lever 2019-08-19 521 while ((mr = rpcrdma_mr_pop(&req->rl_registered))) {
96ceddea3710f6 Chuck Lever 2017-12-14 522
d379eaa838f181 Chuck Lever 2018-10-01 523 trace_xprtrdma_mr_localinv(mr);
847568942f93e0 Chuck Lever 2019-06-19 524 r_xprt->rx_stats.local_inv_needed++;
a100fda1a2e1fa Chuck Lever 2016-11-29 525
847568942f93e0 Chuck Lever 2019-06-19 526 frwr = &mr->frwr;
ce5b3717828356 Chuck Lever 2017-12-14 527 last = &frwr->fr_invwr;
847568942f93e0 Chuck Lever 2019-06-19 528 last->next = NULL;
c974afbfa9803a Chuck Lever 2021-04-19 529 last->wr_cqe = &mr->mr_cqe;
847568942f93e0 Chuck Lever 2019-06-19 530 last->sg_list = NULL;
847568942f93e0 Chuck Lever 2019-06-19 531 last->num_sge = 0;
a100fda1a2e1fa Chuck Lever 2016-11-29 532 last->opcode = IB_WR_LOCAL_INV;
847568942f93e0 Chuck Lever 2019-06-19 533 last->send_flags = IB_SEND_SIGNALED;
96ceddea3710f6 Chuck Lever 2017-12-14 534 last->ex.invalidate_rkey = mr->mr_handle;
c9918ff56dfb17 Chuck Lever 2015-12-16 535
c974afbfa9803a Chuck Lever 2021-04-19 536 last->wr_cqe->done = frwr_wc_localinv;
c974afbfa9803a Chuck Lever 2021-04-19 537
a100fda1a2e1fa Chuck Lever 2016-11-29 538 *prev = last;
a100fda1a2e1fa Chuck Lever 2016-11-29 539 prev = &last->next;
c9918ff56dfb17 Chuck Lever 2015-12-16 540 }
c9918ff56dfb17 Chuck Lever 2015-12-16 541
c9918ff56dfb17 Chuck Lever 2015-12-16 542 /* Strong send queue ordering guarantees that when the
c9918ff56dfb17 Chuck Lever 2015-12-16 543 * last WR in the chain completes, all WRs in the chain
c9918ff56dfb17 Chuck Lever 2015-12-16 544 * are complete.
c9918ff56dfb17 Chuck Lever 2015-12-16 545 */
c974afbfa9803a Chuck Lever 2021-04-19 @546 last->wr_cqe->done = frwr_wc_localinv_wake;
ce5b3717828356 Chuck Lever 2017-12-14 547 reinit_completion(&frwr->fr_linv_done);
8d38de65644d90 Chuck Lever 2016-11-29 548
c9918ff56dfb17 Chuck Lever 2015-12-16 549 /* Transport disconnect drains the receive CQ before it
c9918ff56dfb17 Chuck Lever 2015-12-16 550 * replaces the QP. The RPC reply handler won't call us
93aa8e0a9de80e Chuck Lever 2020-02-21 551 * unless re_id->qp is a valid pointer.
c9918ff56dfb17 Chuck Lever 2015-12-16 552 */
8d75483a232aea Chuck Lever 2017-06-08 553 bad_wr = NULL;
5ecef9c8436695 Chuck Lever 2020-11-09 554 rc = ib_post_send(ep->re_id->qp, first, &bad_wr);
c9918ff56dfb17 Chuck Lever 2015-12-16 555
847568942f93e0 Chuck Lever 2019-06-19 556 /* The final LOCAL_INV WR in the chain is supposed to
847568942f93e0 Chuck Lever 2019-06-19 557 * do the wake. If it was never posted, the wake will
847568942f93e0 Chuck Lever 2019-06-19 558 * not happen, so don't wait in that case.
c9918ff56dfb17 Chuck Lever 2015-12-16 559 */
847568942f93e0 Chuck Lever 2019-06-19 560 if (bad_wr != first)
847568942f93e0 Chuck Lever 2019-06-19 561 wait_for_completion(&frwr->fr_linv_done);
847568942f93e0 Chuck Lever 2019-06-19 562 if (!rc)
d7a21c1bed54ad Chuck Lever 2016-05-02 563 return;
c9918ff56dfb17 Chuck Lever 2015-12-16 564
38bc000697649a Chuck Lever 2021-04-19 565 /* On error, the MRs get destroyed once the QP has drained. */
36a55edfc3d5b1 Chuck Lever 2020-11-09 566 trace_xprtrdma_post_linv_err(req, rc);
c9918ff56dfb17 Chuck Lever 2015-12-16 567 }
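Note that the kerneldoc above documents @req as carrying a non-empty list of
MRs, so the loop runs at least once and the dereference at line 546 is safe
by contract; smatch simply cannot see that invariant. To make it explicit to
static checkers, one possible guard (a sketch of an assumption, not a fix
taken from this series) could reuse the existing NULL-initialized 'frwr' as
a sentinel:

	/* Hypothetical guard, illustrative only: frwr starts out NULL
	 * and is set on every loop iteration, so it is NULL exactly
	 * when no LOCAL_INV WRs were chained and 'last' is unset.
	 */
	if (!frwr)
		return;

	last->wr_cqe->done = frwr_wc_localinv_wake;
	reinit_completion(&frwr->fr_linv_done);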
d8099feda4833b Chuck Lever 2019-06-19 568
d8099feda4833b Chuck Lever 2019-06-19 569 /**
d8099feda4833b Chuck Lever 2019-06-19 570 * frwr_wc_localinv_done - Invoked by RDMA provider for a signaled LOCAL_INV WC
d6ccebf956338e Chuck Lever 2020-02-21 571 * @cq: completion queue
d6ccebf956338e Chuck Lever 2020-02-21 572 * @wc: WCE for a completed LocalInv WR
d8099feda4833b Chuck Lever 2019-06-19 573 *
d8099feda4833b Chuck Lever 2019-06-19 574 */
d8099feda4833b Chuck Lever 2019-06-19 575 static void frwr_wc_localinv_done(struct ib_cq *cq, struct ib_wc *wc)
d8099feda4833b Chuck Lever 2019-06-19 576 {
d8099feda4833b Chuck Lever 2019-06-19 577 struct ib_cqe *cqe = wc->wr_cqe;
c974afbfa9803a Chuck Lever 2021-04-19 578 struct rpcrdma_mr *mr = container_of(cqe, struct rpcrdma_mr, mr_cqe);
ecad5869d5026a Chuck Lever 2021-04-19 579 struct rpcrdma_rep *rep;
d8099feda4833b Chuck Lever 2019-06-19 580
d8099feda4833b Chuck Lever 2019-06-19 581 /* WARNING: Only wr_cqe and status are reliable at this point */
afff2279b1fb45 Chuck Lever 2021-04-19 582 trace_xprtrdma_wc_li_done(wc, &mr->mr_cid);
6dc6ec9e04c468 Chuck Lever 2019-08-19 583
ecad5869d5026a Chuck Lever 2021-04-19 584 /* Ensure that @rep is generated before the MR is released */
ecad5869d5026a Chuck Lever 2021-04-19 585 rep = mr->mr_req->rl_reply;
6dc6ec9e04c468 Chuck Lever 2019-08-19 586 smp_rmb();
ecad5869d5026a Chuck Lever 2021-04-19 587
3ec9bbac836408 Chuck Lever 2021-04-19 588 if (wc->status != IB_WC_SUCCESS) {
3ec9bbac836408 Chuck Lever 2021-04-19 589 if (rep)
3ec9bbac836408 Chuck Lever 2021-04-19 590 rpcrdma_unpin_rqst(rep);
f423f755f41e49 Chuck Lever 2020-06-15 591 rpcrdma_flush_disconnect(cq->cq_context, wc);
3ec9bbac836408 Chuck Lever 2021-04-19 592 return;
3ec9bbac836408 Chuck Lever 2021-04-19 593 }
3ec9bbac836408 Chuck Lever 2021-04-19 594 frwr_mr_put(mr);
3ec9bbac836408 Chuck Lever 2021-04-19 595 rpcrdma_complete_rqst(rep);
d8099feda4833b Chuck Lever 2019-06-19 596 }
d8099feda4833b Chuck Lever 2019-06-19 597
d8099feda4833b Chuck Lever 2019-06-19 598 /**
d8099feda4833b Chuck Lever 2019-06-19 599 * frwr_unmap_async - invalidate memory regions that were registered for @req
d8099feda4833b Chuck Lever 2019-06-19 600 * @r_xprt: controlling transport instance
d8099feda4833b Chuck Lever 2019-06-19 601 * @req: rpcrdma_req with a non-empty list of MRs to process
d8099feda4833b Chuck Lever 2019-06-19 602 *
d8099feda4833b Chuck Lever 2019-06-19 603 * This guarantees that registered MRs are properly fenced from the
d8099feda4833b Chuck Lever 2019-06-19 604 * server before the RPC consumer accesses the data in them. It also
d8099feda4833b Chuck Lever 2019-06-19 605 * ensures proper Send flow control: waking the next RPC waits until
d8099feda4833b Chuck Lever 2019-06-19 606 * this RPC has relinquished all its Send Queue entries.
d8099feda4833b Chuck Lever 2019-06-19 607 */
d8099feda4833b Chuck Lever 2019-06-19 608 void frwr_unmap_async(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
d8099feda4833b Chuck Lever 2019-06-19 609 {
d8099feda4833b Chuck Lever 2019-06-19 610 struct ib_send_wr *first, *last, **prev;
5ecef9c8436695 Chuck Lever 2020-11-09 611 struct rpcrdma_ep *ep = r_xprt->rx_ep;
d8099feda4833b Chuck Lever 2019-06-19 612 struct rpcrdma_frwr *frwr;
d8099feda4833b Chuck Lever 2019-06-19 613 struct rpcrdma_mr *mr;
d8099feda4833b Chuck Lever 2019-06-19 614 int rc;
d8099feda4833b Chuck Lever 2019-06-19 615
d8099feda4833b Chuck Lever 2019-06-19 616 /* Chain the LOCAL_INV Work Requests and post them with
d8099feda4833b Chuck Lever 2019-06-19 617 * a single ib_post_send() call.
d8099feda4833b Chuck Lever 2019-06-19 618 */
d8099feda4833b Chuck Lever 2019-06-19 619 frwr = NULL;
d8099feda4833b Chuck Lever 2019-06-19 620 prev = &first;
265a38d4611360 Chuck Lever 2019-08-19 621 while ((mr = rpcrdma_mr_pop(&req->rl_registered))) {
d8099feda4833b Chuck Lever 2019-06-19 622
d8099feda4833b Chuck Lever 2019-06-19 623 trace_xprtrdma_mr_localinv(mr);
d8099feda4833b Chuck Lever 2019-06-19 624 r_xprt->rx_stats.local_inv_needed++;
d8099feda4833b Chuck Lever 2019-06-19 625
d8099feda4833b Chuck Lever 2019-06-19 626 frwr = &mr->frwr;
d8099feda4833b Chuck Lever 2019-06-19 627 last = &frwr->fr_invwr;
d8099feda4833b Chuck Lever 2019-06-19 628 last->next = NULL;
c974afbfa9803a Chuck Lever 2021-04-19 629 last->wr_cqe = &mr->mr_cqe;
d8099feda4833b Chuck Lever 2019-06-19 630 last->sg_list = NULL;
d8099feda4833b Chuck Lever 2019-06-19 631 last->num_sge = 0;
d8099feda4833b Chuck Lever 2019-06-19 632 last->opcode = IB_WR_LOCAL_INV;
d8099feda4833b Chuck Lever 2019-06-19 633 last->send_flags = IB_SEND_SIGNALED;
d8099feda4833b Chuck Lever 2019-06-19 634 last->ex.invalidate_rkey = mr->mr_handle;
d8099feda4833b Chuck Lever 2019-06-19 635
c974afbfa9803a Chuck Lever 2021-04-19 636 last->wr_cqe->done = frwr_wc_localinv;
c974afbfa9803a Chuck Lever 2021-04-19 637
d8099feda4833b Chuck Lever 2019-06-19 638 *prev = last;
d8099feda4833b Chuck Lever 2019-06-19 639 prev = &last->next;
d8099feda4833b Chuck Lever 2019-06-19 640 }
d8099feda4833b Chuck Lever 2019-06-19 641
d8099feda4833b Chuck Lever 2019-06-19 642 /* Strong send queue ordering guarantees that when the
d8099feda4833b Chuck Lever 2019-06-19 643 * last WR in the chain completes, all WRs in the chain
d8099feda4833b Chuck Lever 2019-06-19 644 * are complete. The last completion will wake up the
d8099feda4833b Chuck Lever 2019-06-19 645 * RPC waiter.
d8099feda4833b Chuck Lever 2019-06-19 646 */
c974afbfa9803a Chuck Lever 2021-04-19 @647 last->wr_cqe->done = frwr_wc_localinv_done;
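frwr_unmap_async() at line 647 has the identical shape: 'last' is assigned
only inside the MR-popping loop, so the same non-empty-list precondition
covers it, and the same style of guard sketched above would silence the
second warning.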
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
* [PATCH v3 23/26] xprtrdma: Move cqe to struct rpcrdma_mr
2021-04-19 18:01 [PATCH v3 00/26] NFS/RDMA client patches for next Chuck Lever
@ 2021-04-19 18:04 ` Chuck Lever
From: Chuck Lever @ 2021-04-19 18:04 UTC
To: trondmy; +Cc: linux-nfs, linux-rdma
Clean up.
- Simplify variable initialization in the completion handlers.
- Move another field out of struct rpcrdma_frwr.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
net/sunrpc/xprtrdma/frwr_ops.c | 35 +++++++++++++++--------------------
net/sunrpc/xprtrdma/xprt_rdma.h | 2 +-
2 files changed, 16 insertions(+), 21 deletions(-)
diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index d3c18c776bf9..2a886a28d82b 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -366,9 +366,7 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt,
static void frwr_wc_fastreg(struct ib_cq *cq, struct ib_wc *wc)
{
struct ib_cqe *cqe = wc->wr_cqe;
- struct rpcrdma_frwr *frwr =
- container_of(cqe, struct rpcrdma_frwr, fr_cqe);
- struct rpcrdma_mr *mr = container_of(frwr, struct rpcrdma_mr, frwr);
+ struct rpcrdma_mr *mr = container_of(cqe, struct rpcrdma_mr, mr_cqe);
/* WARNING: Only wr_cqe and status are reliable at this point */
trace_xprtrdma_wc_fastreg(wc, &mr->mr_cid);
@@ -405,9 +403,9 @@ int frwr_send(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
trace_xprtrdma_mr_fastreg(mr);
frwr = &mr->frwr;
- frwr->fr_cqe.done = frwr_wc_fastreg;
+ mr->mr_cqe.done = frwr_wc_fastreg;
frwr->fr_regwr.wr.next = post_wr;
- frwr->fr_regwr.wr.wr_cqe = &frwr->fr_cqe;
+ frwr->fr_regwr.wr.wr_cqe = &mr->mr_cqe;
frwr->fr_regwr.wr.num_sge = 0;
frwr->fr_regwr.wr.opcode = IB_WR_REG_MR;
frwr->fr_regwr.wr.send_flags = 0;
@@ -463,9 +461,7 @@ static void frwr_mr_done(struct ib_wc *wc, struct rpcrdma_mr *mr)
static void frwr_wc_localinv(struct ib_cq *cq, struct ib_wc *wc)
{
struct ib_cqe *cqe = wc->wr_cqe;
- struct rpcrdma_frwr *frwr =
- container_of(cqe, struct rpcrdma_frwr, fr_cqe);
- struct rpcrdma_mr *mr = container_of(frwr, struct rpcrdma_mr, frwr);
+ struct rpcrdma_mr *mr = container_of(cqe, struct rpcrdma_mr, mr_cqe);
/* WARNING: Only wr_cqe and status are reliable at this point */
trace_xprtrdma_wc_li(wc, &mr->mr_cid);
@@ -484,9 +480,8 @@ static void frwr_wc_localinv(struct ib_cq *cq, struct ib_wc *wc)
static void frwr_wc_localinv_wake(struct ib_cq *cq, struct ib_wc *wc)
{
struct ib_cqe *cqe = wc->wr_cqe;
- struct rpcrdma_frwr *frwr =
- container_of(cqe, struct rpcrdma_frwr, fr_cqe);
- struct rpcrdma_mr *mr = container_of(frwr, struct rpcrdma_mr, frwr);
+ struct rpcrdma_mr *mr = container_of(cqe, struct rpcrdma_mr, mr_cqe);
+ struct rpcrdma_frwr *frwr = &mr->frwr;
/* WARNING: Only wr_cqe and status are reliable at this point */
trace_xprtrdma_wc_li_wake(wc, &mr->mr_cid);
@@ -529,16 +524,17 @@ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
r_xprt->rx_stats.local_inv_needed++;
frwr = &mr->frwr;
- frwr->fr_cqe.done = frwr_wc_localinv;
last = &frwr->fr_invwr;
last->next = NULL;
- last->wr_cqe = &frwr->fr_cqe;
+ last->wr_cqe = &mr->mr_cqe;
last->sg_list = NULL;
last->num_sge = 0;
last->opcode = IB_WR_LOCAL_INV;
last->send_flags = IB_SEND_SIGNALED;
last->ex.invalidate_rkey = mr->mr_handle;
+ last->wr_cqe->done = frwr_wc_localinv;
+
*prev = last;
prev = &last->next;
}
@@ -547,7 +543,7 @@ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
* last WR in the chain completes, all WRs in the chain
* are complete.
*/
- frwr->fr_cqe.done = frwr_wc_localinv_wake;
+ last->wr_cqe->done = frwr_wc_localinv_wake;
reinit_completion(&frwr->fr_linv_done);
/* Transport disconnect drains the receive CQ before it
@@ -579,9 +575,7 @@ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
static void frwr_wc_localinv_done(struct ib_cq *cq, struct ib_wc *wc)
{
struct ib_cqe *cqe = wc->wr_cqe;
- struct rpcrdma_frwr *frwr =
- container_of(cqe, struct rpcrdma_frwr, fr_cqe);
- struct rpcrdma_mr *mr = container_of(frwr, struct rpcrdma_mr, frwr);
+ struct rpcrdma_mr *mr = container_of(cqe, struct rpcrdma_mr, mr_cqe);
struct rpcrdma_rep *rep;
/* WARNING: Only wr_cqe and status are reliable at this point */
@@ -630,16 +624,17 @@ void frwr_unmap_async(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
r_xprt->rx_stats.local_inv_needed++;
frwr = &mr->frwr;
- frwr->fr_cqe.done = frwr_wc_localinv;
last = &frwr->fr_invwr;
last->next = NULL;
- last->wr_cqe = &frwr->fr_cqe;
+ last->wr_cqe = &mr->mr_cqe;
last->sg_list = NULL;
last->num_sge = 0;
last->opcode = IB_WR_LOCAL_INV;
last->send_flags = IB_SEND_SIGNALED;
last->ex.invalidate_rkey = mr->mr_handle;
+ last->wr_cqe->done = frwr_wc_localinv;
+
*prev = last;
prev = &last->next;
}
@@ -649,7 +644,7 @@ void frwr_unmap_async(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
* are complete. The last completion will wake up the
* RPC waiter.
*/
- frwr->fr_cqe.done = frwr_wc_localinv_done;
+ last->wr_cqe->done = frwr_wc_localinv_done;
/* Transport disconnect drains the receive CQ before it
* replaces the QP. The RPC reply handler won't call us
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 0cf073f0ee64..f72b69c3f0ea 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -231,7 +231,6 @@ struct rpcrdma_sendctx {
*/
struct rpcrdma_frwr {
struct ib_mr *fr_mr;
- struct ib_cqe fr_cqe;
struct completion fr_linv_done;
union {
struct ib_reg_wr fr_regwr;
@@ -247,6 +246,7 @@ struct rpcrdma_mr {
struct scatterlist *mr_sg;
int mr_nents;
enum dma_data_direction mr_dir;
+ struct ib_cqe mr_cqe;
struct rpcrdma_frwr frwr;
struct rpcrdma_xprt *mr_xprt;
u32 mr_handle;
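For readers unfamiliar with the idiom the patch leans on: embedding the
ib_cqe directly in struct rpcrdma_mr lets each completion handler recover
the MR with a single container_of() call instead of the old two-hop
cqe -> frwr -> mr walk. A self-contained sketch of that pointer arithmetic
(plain C, not kernel code):

#include <stddef.h>
#include <stdio.h>

/* Same arithmetic as the kernel's container_of(): step back from a
 * member pointer to the enclosing structure.
 */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct cqe {
	int id;
};

struct mr {
	int handle;
	struct cqe cqe;		/* embedded, like mr_cqe after this patch */
};

int main(void)
{
	struct mr m = { .handle = 42 };
	struct cqe *c = &m.cqe;	/* what a completion handler receives */
	struct mr *back = container_of(c, struct mr, cqe);

	printf("handle = %d\n", back->handle);	/* prints: handle = 42 */
	return 0;
}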