netdev.vger.kernel.org archive mirror
From: shaobingqing <shaobingqing-Gb3srWounXyPt1CcHtbs0g@public.gmane.org>
To: Trond Myklebust
	<trond.myklebust-7I+n7zu2hftEKMMhf/gKZA@public.gmane.org>
Cc: bfields-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	davem-fT/PcQaiUtIeIZ0/mPfg9Q@public.gmane.org,
	linux-nfs-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	netdev-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: Re: [PATCH] SUNRPC: Allow one callback request to be received from two sk_buff
Date: Tue, 21 Jan 2014 18:08:05 +0800	[thread overview]
Message-ID: <CALrKORrZ3Kcuqc1RajQKkZcot0yiswh4VR_WuXHqfRTjn9oGQQ@mail.gmail.com> (raw)
In-Reply-To: <1390259843.2501.2.camel-5lNtUQgoD8Pfa3cDbr2K10B+6BGkLq7r@public.gmane.org>

2014/1/21 Trond Myklebust <trond.myklebust-7I+n7zu2hftEKMMhf/gKZA@public.gmane.org>:
> On Mon, 2014-01-20 at 14:59 +0800, shaobingqing wrote:
>> In the current code, only one struct rpc_rqst is preallocated. If one
>> callback request is received in two sk_buffs, xprt_alloc_bc_request
>> is executed twice with the same transport->xid. The first call
>> allocates one struct rpc_rqst, and the TCP_RCV_COPY_DATA bit of
>> transport->tcp_flags is not cleared. The second call cannot allocate
>> another struct rpc_rqst, so NULL is returned and xprt_force_disconnect
>> occurs. I think one callback request should be allowed to be received
>> from two sk_buffs.
>>
>> Signed-off-by: shaobingqing <shaobingqing-Gb3srWounXyPt1CcHtbs0g@public.gmane.org>
>> ---
>>  net/sunrpc/xprtsock.c |   11 +++++++++--
>>  1 files changed, 9 insertions(+), 2 deletions(-)
>>
>> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
>> index ee03d35..606950d 100644
>> --- a/net/sunrpc/xprtsock.c
>> +++ b/net/sunrpc/xprtsock.c
>> @@ -1271,8 +1271,13 @@ static inline int xs_tcp_read_callback(struct rpc_xprt *xprt,
>>       struct sock_xprt *transport =
>>                               container_of(xprt, struct sock_xprt, xprt);
>>       struct rpc_rqst *req;
>> +     static struct rpc_rqst *req_partial;
>> +
>> +     if (req_partial == NULL)
>> +             req = xprt_alloc_bc_request(xprt);
>> +     else if (req_partial->rq_xid == transport->tcp_xid)
>> +             req = req_partial;
>
> What happens here if req_partial->rq_xid != transport->tcp_xid? AFAICS,
> req will be undefined. Either way, you cannot use a static variable for
> storage here: that isn't re-entrant.

Because the metadata server only has one slot for backchannel requests,
req_partial->rq_xid == transport->tcp_xid always holds when the callback
request is simply split across two sk_buffs. But req_partial->rq_xid !=
transport->tcp_xid may still happen in some special cases, such as when a
retransmission occurs.
If one callback request is split across two sk_buffs, xs_tcp_read_callback
will be executed twice. req_partial should be a static variable, because
the second execution of xs_tcp_read_callback must reuse the rpc_rqst
allocated by the first one, which holds the data already copied from the
first sk_buff.
I think perhaps the code should be modified as below:

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 606950d..02dbb82 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1273,10 +1273,14 @@ static inline int xs_tcp_read_callback(struct rpc_xprt *xprt,
        struct rpc_rqst *req;
        static struct rpc_rqst *req_partial;

-       if (req_partial == NULL)
+       if (req_partial == NULL) {
                req = xprt_alloc_bc_request(xprt);
-       else if (req_partial->rq_xid == transport->tcp_xid)
+       } else if (req_partial->rq_xid == transport->tcp_xid) {
                req = req_partial;
+       } else {
+               xprt_free_bc_request(req_partial);
+               req = xprt_alloc_bc_request(xprt);
+       }

        if (req == NULL) {
                printk(KERN_WARNING "Callback slot table overflowed\n");
@@ -1303,8 +1307,9 @@ static inline int xs_tcp_read_callback(struct rpc_xprt *xprt,
                list_add(&req->rq_bc_list, &bc_serv->sv_cb_list);
                spin_unlock(&bc_serv->sv_cb_lock);
                wake_up(&bc_serv->sv_cb_waitq);
-       } else
+       } else {
                req_partial = req;
+       }

        req->rq_private_buf.len = transport->tcp_copied;


>
>> -     req = xprt_alloc_bc_request(xprt);
>>       if (req == NULL) {
>>               printk(KERN_WARNING "Callback slot table overflowed\n");
>>               xprt_force_disconnect(xprt);
>> @@ -1285,6 +1290,7 @@ static inline int xs_tcp_read_callback(struct rpc_xprt *xprt,
>>
>>       if (!(transport->tcp_flags & TCP_RCV_COPY_DATA)) {
>>               struct svc_serv *bc_serv = xprt->bc_serv;
>> +             req_partial = NULL;
>>
>>               /*
>>                * Add callback request to callback list.  The callback
>> @@ -1297,7 +1303,8 @@ static inline int xs_tcp_read_callback(struct rpc_xprt *xprt,
>>               list_add(&req->rq_bc_list, &bc_serv->sv_cb_list);
>>               spin_unlock(&bc_serv->sv_cb_lock);
>>               wake_up(&bc_serv->sv_cb_waitq);
>> -     }
>> +     } else
>> +             req_partial = req;
>>
>>       req->rq_private_buf.len = transport->tcp_copied;
>>
>
>
> --
> Trond Myklebust
> Linux NFS client maintainer
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

  parent reply	other threads:[~2014-01-21 10:08 UTC|newest]

Thread overview: 8+ messages
2014-01-20  6:59 [PATCH] SUNRPC: Allow one callback request to be received from two sk_buff shaobingqing
2014-01-20 14:27 ` Sergei Shtylyov
2014-01-20 23:17 ` Trond Myklebust
     [not found]   ` <1390259843.2501.2.camel-5lNtUQgoD8Pfa3cDbr2K10B+6BGkLq7r@public.gmane.org>
2014-01-21 10:08     ` shaobingqing [this message]
     [not found]       ` <CALrKORrZ3Kcuqc1RajQKkZcot0yiswh4VR_WuXHqfRTjn9oGQQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-01-21 15:35         ` Trond Myklebust
     [not found]           ` <1BFACA51-087E-4945-851A-FBF0F108604C-7I+n7zu2hftEKMMhf/gKZA@public.gmane.org>
2014-01-22 23:52             ` J. Bruce Fields
2014-01-23  1:42               ` shaobingqing
2014-01-23  2:23           ` shaobingqing
