* [PATCH] ipvs: fix the connection sync failed in some cases
@ 2020-07-15 6:53 guodeqing
2020-07-15 17:35 ` Julian Anastasov
From: guodeqing @ 2020-07-15 6:53 UTC (permalink / raw)
To: wensong
Cc: horms, ja, pablo, kadlec, fw, davem, kuba, netdev, lvs-devel,
netfilter-devel, geffrey.guo
The sync_thread_backup() thread only checks whether sk_receive_queue is
empty. There is a situation where the connection entries cannot be
synced: when sk_receive_queue is empty but sk_rmem_alloc is larger than
sk_rcvbuf, the sync packets are dropped in __udp_enqueue_schedule_skb().
This happens because the packets sitting in reader_queue are never read,
so the rmem is not reclaimed.
Add a check of whether the reader_queue of the UDP sock is empty as
well, to solve this problem.
Fixes: 7c13f97ffde6 ("udp: do fwd memory scheduling on dequeue")
Reported-by: zhouxudong <zhouxudong8@huawei.com>
Signed-off-by: guodeqing <geffrey.guo@huawei.com>
---
net/netfilter/ipvs/ip_vs_sync.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
index 605e0f6..abe8d63 100644
--- a/net/netfilter/ipvs/ip_vs_sync.c
+++ b/net/netfilter/ipvs/ip_vs_sync.c
@@ -1717,6 +1717,8 @@ static int sync_thread_backup(void *data)
{
struct ip_vs_sync_thread_data *tinfo = data;
struct netns_ipvs *ipvs = tinfo->ipvs;
+ struct sock *sk = tinfo->sock->sk;
+ struct udp_sock *up = udp_sk(sk);
int len;
pr_info("sync thread started: state = BACKUP, mcast_ifn = %s, "
@@ -1724,12 +1726,14 @@ static int sync_thread_backup(void *data)
ipvs->bcfg.mcast_ifn, ipvs->bcfg.syncid, tinfo->id);
while (!kthread_should_stop()) {
- wait_event_interruptible(*sk_sleep(tinfo->sock->sk),
- !skb_queue_empty(&tinfo->sock->sk->sk_receive_queue)
- || kthread_should_stop());
+ wait_event_interruptible(*sk_sleep(sk),
+ !skb_queue_empty(&sk->sk_receive_queue) ||
+ !skb_queue_empty(&up->reader_queue) ||
+ kthread_should_stop());
/* do we have data now? */
- while (!skb_queue_empty(&(tinfo->sock->sk->sk_receive_queue))) {
+ while (!skb_queue_empty(&sk->sk_receive_queue) ||
+ !skb_queue_empty(&up->reader_queue)) {
len = ip_vs_receive(tinfo->sock, tinfo->buf,
ipvs->bcfg.sync_maxlen);
if (len <= 0) {
--
2.7.4
* Re: [PATCH] ipvs: fix the connection sync failed in some cases
2020-07-15 6:53 [PATCH] ipvs: fix the connection sync failed in some cases guodeqing
@ 2020-07-15 17:35 ` Julian Anastasov
2020-07-16 8:16 ` Re: " Guodeqing (A)
From: Julian Anastasov @ 2020-07-15 17:35 UTC (permalink / raw)
To: guodeqing
Cc: wensong, horms, pablo, kadlec, fw, davem, kuba, netdev,
lvs-devel, netfilter-devel
Hello,
On Wed, 15 Jul 2020, guodeqing wrote:
> The sync_thread_backup() thread only checks whether sk_receive_queue
> is empty. There is a situation where the connection entries cannot be
> synced: when sk_receive_queue is empty but sk_rmem_alloc is larger
> than sk_rcvbuf, the sync packets are dropped in
> __udp_enqueue_schedule_skb(). This happens because the packets sitting
> in reader_queue are never read, so the rmem is not reclaimed.
Good catch. We missed this change in UDP...
> Add a check of whether the reader_queue of the UDP sock is empty as
> well, to solve this problem.
>
> Fixes: 7c13f97ffde6 ("udp: do fwd memory scheduling on dequeue")
Why this commit and not 2276f58ac589, which adds
reader_queue to udp_poll()? Maybe both?
> Reported-by: zhouxudong <zhouxudong8@huawei.com>
> Signed-off-by: guodeqing <geffrey.guo@huawei.com>
> ---
> net/netfilter/ipvs/ip_vs_sync.c | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
> index 605e0f6..abe8d63 100644
> --- a/net/netfilter/ipvs/ip_vs_sync.c
> +++ b/net/netfilter/ipvs/ip_vs_sync.c
> @@ -1717,6 +1717,8 @@ static int sync_thread_backup(void *data)
> {
> struct ip_vs_sync_thread_data *tinfo = data;
> struct netns_ipvs *ipvs = tinfo->ipvs;
> + struct sock *sk = tinfo->sock->sk;
> + struct udp_sock *up = udp_sk(sk);
> int len;
>
> pr_info("sync thread started: state = BACKUP, mcast_ifn = %s, "
> @@ -1724,12 +1726,14 @@ static int sync_thread_backup(void *data)
> ipvs->bcfg.mcast_ifn, ipvs->bcfg.syncid, tinfo->id);
>
> while (!kthread_should_stop()) {
> - wait_event_interruptible(*sk_sleep(tinfo->sock->sk),
> - !skb_queue_empty(&tinfo->sock->sk->sk_receive_queue)
> - || kthread_should_stop());
> + wait_event_interruptible(*sk_sleep(sk),
> + !skb_queue_empty(&sk->sk_receive_queue) ||
> + !skb_queue_empty(&up->reader_queue) ||
Maybe we should use skb_queue_empty_lockless() for 5.4+
and skb_queue_empty() for backports to 4.14 and 4.19...
> + kthread_should_stop());
>
> /* do we have data now? */
> - while (!skb_queue_empty(&(tinfo->sock->sk->sk_receive_queue))) {
> + while (!skb_queue_empty(&sk->sk_receive_queue) ||
> + !skb_queue_empty(&up->reader_queue)) {
Here too
> len = ip_vs_receive(tinfo->sock, tinfo->buf,
> ipvs->bcfg.sync_maxlen);
> if (len <= 0) {
> --
> 2.7.4
Regards
--
Julian Anastasov <ja@ssi.bg>
* Re: [PATCH] ipvs: fix the connection sync failed in some cases
2020-07-15 17:35 ` Julian Anastasov
@ 2020-07-16 8:16 ` Guodeqing (A)
From: Guodeqing (A) @ 2020-07-16 8:16 UTC (permalink / raw)
To: Julian Anastasov
Cc: wensong, horms, pablo, kadlec, fw, davem, kuba, netdev,
lvs-devel, netfilter-devel
I ran an IPVS connection sync test on a 3.10-based kernel which carries the 7c13f97ffde6 commit, and the sync succeeded. I will update the Fixes information of the patch and replace skb_queue_empty() with skb_queue_empty_lockless().
Thanks.
-----Original Message-----
From: Julian Anastasov [mailto:ja@ssi.bg]
Sent: Thursday, July 16, 2020 1:36
To: Guodeqing (A) <geffrey.guo@huawei.com>
Cc: wensong@linux-vs.org; horms@verge.net.au; pablo@netfilter.org; kadlec@netfilter.org; fw@strlen.de; davem@davemloft.net; kuba@kernel.org; netdev@vger.kernel.org; lvs-devel@vger.kernel.org; netfilter-devel@vger.kernel.org
Subject: Re: [PATCH] ipvs: fix the connection sync failed in some cases
Hello,
On Wed, 15 Jul 2020, guodeqing wrote:
> The sync_thread_backup() thread only checks whether sk_receive_queue
> is empty. There is a situation where the connection entries cannot be
> synced: when sk_receive_queue is empty but sk_rmem_alloc is larger
> than sk_rcvbuf, the sync packets are dropped in
> __udp_enqueue_schedule_skb(). This happens because the packets sitting
> in reader_queue are never read, so the rmem is not reclaimed.
Good catch. We missed this change in UDP...
> Add a check of whether the reader_queue of the UDP sock is empty as
> well, to solve this problem.
>
> Fixes: 7c13f97ffde6 ("udp: do fwd memory scheduling on dequeue")
Why this commit and not 2276f58ac589, which adds reader_queue to udp_poll()? Maybe both?
> Reported-by: zhouxudong <zhouxudong8@huawei.com>
> Signed-off-by: guodeqing <geffrey.guo@huawei.com>
> ---
> net/netfilter/ipvs/ip_vs_sync.c | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/net/netfilter/ipvs/ip_vs_sync.c
> b/net/netfilter/ipvs/ip_vs_sync.c index 605e0f6..abe8d63 100644
> --- a/net/netfilter/ipvs/ip_vs_sync.c
> +++ b/net/netfilter/ipvs/ip_vs_sync.c
> @@ -1717,6 +1717,8 @@ static int sync_thread_backup(void *data) {
> struct ip_vs_sync_thread_data *tinfo = data;
> struct netns_ipvs *ipvs = tinfo->ipvs;
> + struct sock *sk = tinfo->sock->sk;
> + struct udp_sock *up = udp_sk(sk);
> int len;
>
> pr_info("sync thread started: state = BACKUP, mcast_ifn = %s, "
> @@ -1724,12 +1726,14 @@ static int sync_thread_backup(void *data)
> ipvs->bcfg.mcast_ifn, ipvs->bcfg.syncid, tinfo->id);
>
> while (!kthread_should_stop()) {
> - wait_event_interruptible(*sk_sleep(tinfo->sock->sk),
> - !skb_queue_empty(&tinfo->sock->sk->sk_receive_queue)
> - || kthread_should_stop());
> + wait_event_interruptible(*sk_sleep(sk),
> + !skb_queue_empty(&sk->sk_receive_queue) ||
> + !skb_queue_empty(&up->reader_queue) ||
Maybe we should use skb_queue_empty_lockless() for 5.4+ and skb_queue_empty() for backports to 4.14 and 4.19...
> + kthread_should_stop());
>
> /* do we have data now? */
> - while (!skb_queue_empty(&(tinfo->sock->sk->sk_receive_queue))) {
> + while (!skb_queue_empty(&sk->sk_receive_queue) ||
> + !skb_queue_empty(&up->reader_queue)) {
Here too
> len = ip_vs_receive(tinfo->sock, tinfo->buf,
> ipvs->bcfg.sync_maxlen);
> if (len <= 0) {
> --
> 2.7.4
Regards
--
Julian Anastasov <ja@ssi.bg>