* [PATCH net 1/2] net/sched: act_ct: fix restore the qdisc_skb_cb after defrag
@ 2020-06-29  9:16 wenxu
  2020-06-29  9:16 ` [PATCH net 2/2] net/sched: act_ct: add missing tcf_lastuse_update wenxu
  2020-07-01 22:21 ` [PATCH net 1/2] net/sched: act_ct: fix restore the qdisc_skb_cb after defrag David Miller
  0 siblings, 2 replies; 5+ messages in thread
From: wenxu @ 2020-06-29  9:16 UTC (permalink / raw)
  To: paulb; +Cc: netdev

From: wenxu <wenxu@ucloud.cn>

Defragmenting fragmented packets in tcf_ct_handle_fragments()
clears skb->cb, which wipes the qdisc_skb_cb as well and sets
pkt_len to 0, so the byte counter is always 0 when the filter
is dumped. Save and restore the qdisc_skb_cb around the defrag
call, and also update pkt_len once all fragments have been
reassembled into one packet so that the counters of the
following actions are correct.

filter protocol ip pref 2 flower chain 0 handle 0x2
  eth_type ipv4
  dst_ip 1.1.1.1
  ip_flags frag/firstfrag
  skip_hw
  not_in_hw
 action order 1: ct zone 1 nat pipe
  index 2 ref 1 bind 1 installed 11 sec used 11 sec
 Action statistics:
 Sent 0 bytes 11 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 cookie e04106c2ac41769b278edaa9b5309960
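
For reviewers unfamiliar with the cb layout, a minimal sketch of why
pkt_len ends up as zero (simplified from include/net/sch_generic.h,
not a verbatim copy): qdisc_skb_cb() is only an overlay on the
48-byte skb->cb[] area, and the IPv4/IPv6 defrag code reuses that
same area for its own control block, so pkt_len is lost unless it is
saved and restored as the hunks below do.

/* Simplified sketch; field list trimmed, not the full definition. */
struct qdisc_skb_cb {
	u16		pkt_len;	/* reported as "Sent ... bytes" */
	u16		slave_dev_queue_mapping;
	u16		tc_classid;
#define QDISC_CB_PRIV_LEN 20
	unsigned char	data[QDISC_CB_PRIV_LEN]; /* scheduler private area */
};

static inline struct qdisc_skb_cb *qdisc_skb_cb(const struct sk_buff *skb)
{
	/* shares storage with IPCB(skb)/IP6CB(skb) on the defrag path */
	return (struct qdisc_skb_cb *)skb->cb;
}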

Fixes: b57dc7c13ea9 ("net/sched: Introduce action ct")
Signed-off-by: wenxu <wenxu@ucloud.cn>
---
 net/sched/act_ct.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
index e9f3576..2eaabdc 100644
--- a/net/sched/act_ct.c
+++ b/net/sched/act_ct.c
@@ -676,6 +676,7 @@ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb,
 				   u8 family, u16 zone)
 {
 	enum ip_conntrack_info ctinfo;
+	struct qdisc_skb_cb cb;
 	struct nf_conn *ct;
 	int err = 0;
 	bool frag;
@@ -693,6 +694,7 @@ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb,
 		return err;
 
 	skb_get(skb);
+	cb = *qdisc_skb_cb(skb);
 
 	if (family == NFPROTO_IPV4) {
 		enum ip_defrag_users user = IP_DEFRAG_CONNTRACK_IN + zone;
@@ -717,6 +719,7 @@ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb,
 #endif
 	}
 
+	*qdisc_skb_cb(skb) = cb;
 	skb_clear_hash(skb);
 	skb->ignore_df = 1;
 	return err;
@@ -1012,6 +1015,7 @@ static int tcf_ct_act(struct sk_buff *skb, const struct tc_action *a,
 
 out:
 	tcf_action_update_bstats(&c->common, skb);
+	qdisc_skb_cb(skb)->pkt_len = skb->len;
 	return retval;
 
 drop:
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* [PATCH net 2/2] net/sched: act_ct: add missing tcf_lastuse_update
  2020-06-29  9:16 [PATCH net 1/2] net/sched: act_ct: fix restore the qdisc_skb_cb after defrag wenxu
@ 2020-06-29  9:16 ` wenxu
  2020-07-01 22:21 ` [PATCH net 1/2] net/sched: act_ct: fix restore the qdisc_skb_cb after defrag David Miller
  1 sibling, 0 replies; 5+ messages in thread
From: wenxu @ 2020-06-29  9:16 UTC (permalink / raw)
  To: paulb; +Cc: netdev

From: wenxu <wenxu@ucloud.cn>

tcf_lastuse_update() should be called when tcf_ct_act() executes;
otherwise the "used" statistic of the action is never updated.

filter protocol ip pref 3 flower chain 0
filter protocol ip pref 3 flower chain 0 handle 0x1
  eth_type ipv4
  dst_ip 1.1.1.1
  ip_flags frag/firstfrag
  skip_hw
  not_in_hw
 action order 1: ct zone 1 nat pipe
  index 1 ref 1 bind 1 installed 103 sec used 103 sec
 Action statistics:
 Sent 151500 bytes 101 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 cookie 4519c04dc64a1a295787aab13b6a50fb
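
For context, tcf_lastuse_update() from include/net/act_api.h is
roughly the following (paraphrased here, not quoted verbatim); if an
action's act() path never calls it, tm->lastuse keeps its
install-time value and the "used" column above never moves:

static inline void tcf_lastuse_update(struct tcf_t *tm)
{
	unsigned long now = jiffies;

	if (tm->lastuse != now)
		tm->lastuse = now;
	if (unlikely(!tm->firstuse))
		tm->firstuse = now;
}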

Signed-off-by: wenxu <wenxu@ucloud.cn>
---
 net/sched/act_ct.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
index 2eaabdc..ec0250f 100644
--- a/net/sched/act_ct.c
+++ b/net/sched/act_ct.c
@@ -928,6 +928,8 @@ static int tcf_ct_act(struct sk_buff *skb, const struct tc_action *a,
 	force = p->ct_action & TCA_CT_ACT_FORCE;
 	tmpl = p->tmpl;
 
+	tcf_lastuse_update(&c->tcf_tm);
+
 	if (clear) {
 		ct = nf_ct_get(skb, &ctinfo);
 		if (ct) {
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH net 1/2] net/sched: act_ct: fix restore the qdisc_skb_cb after defrag
  2020-06-29  9:16 [PATCH net 1/2] net/sched: act_ct: fix restore the qdisc_skb_cb after defrag wenxu
  2020-06-29  9:16 ` [PATCH net 2/2] net/sched: act_ct: add missing tcf_lastuse_update wenxu
@ 2020-07-01 22:21 ` David Miller
  2020-07-02  9:17   ` wenxu
  1 sibling, 1 reply; 5+ messages in thread
From: David Miller @ 2020-07-01 22:21 UTC (permalink / raw)
  To: wenxu; +Cc: paulb, netdev

From: wenxu@ucloud.cn
Date: Mon, 29 Jun 2020 17:16:17 +0800

> From: wenxu <wenxu@ucloud.cn>
> 
> Defragmenting fragmented packets in tcf_ct_handle_fragments()
> clears skb->cb, which wipes the qdisc_skb_cb as well and sets
> pkt_len to 0, so the byte counter is always 0 when the filter
> is dumped. Save and restore the qdisc_skb_cb around the defrag
> call, and also update pkt_len once all fragments have been
> reassembled into one packet so that the counters of the
> following actions are correct.
> 
> filter protocol ip pref 2 flower chain 0 handle 0x2
>   eth_type ipv4
>   dst_ip 1.1.1.1
>   ip_flags frag/firstfrag
>   skip_hw
>   not_in_hw
>  action order 1: ct zone 1 nat pipe
>   index 2 ref 1 bind 1 installed 11 sec used 11 sec
>  Action statistics:
>  Sent 0 bytes 11 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>  cookie e04106c2ac41769b278edaa9b5309960
> 
> Fixes: b57dc7c13ea9 ("net/sched: Introduce action ct")
> Signed-off-by: wenxu <wenxu@ucloud.cn>

This is a much larger and more serious problem IMHO.  And this fix is
not sufficient.

Nothing can clobber the qdisc_skb_cb like this in these packet flows
otherwise we will have serious crashes and problems.  Some packet
schedulers store pointers in the qdisc CB private area, for example.

We need to somehow elide the CB clear when packets are defragmented by
connection tracking.
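
To make that concrete, here is a generic sketch of the pattern (the
my_* names are hypothetical, not taken from any particular
scheduler): a qdisc can stash its own per-packet state, including raw
pointers, in the private area of qdisc_skb_cb, which is why a blanket
clear or save/restore of skb->cb around defrag is not safe in
general.

struct my_sched_skb_cb {		/* hypothetical per-packet state */
	u64	enqueue_time;
	void	*flow;			/* pointer into the qdisc's own tables */
};

static inline struct my_sched_skb_cb *my_sched_cb(struct sk_buff *skb)
{
	/* must fit inside QDISC_CB_PRIV_LEN */
	qdisc_cb_private_validate(skb, sizeof(struct my_sched_skb_cb));
	return (struct my_sched_skb_cb *)qdisc_skb_cb(skb)->data;
}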

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH net 1/2] net/sched: act_ct: fix restore the qdisc_skb_cb after defrag
  2020-07-01 22:21 ` [PATCH net 1/2] net/sched: act_ct: fix restore the qdisc_skb_cb after defrag David Miller
@ 2020-07-02  9:17   ` wenxu
  2020-07-02 21:25     ` David Miller
  0 siblings, 1 reply; 5+ messages in thread
From: wenxu @ 2020-07-02  9:17 UTC (permalink / raw)
  To: David Miller; +Cc: netdev


On 7/2/2020 6:21 AM, David Miller wrote:
> From: wenxu@ucloud.cn
> Date: Mon, 29 Jun 2020 17:16:17 +0800
>
>> From: wenxu <wenxu@ucloud.cn>
>>
>> Defragmenting fragmented packets in tcf_ct_handle_fragments()
>> clears skb->cb, which wipes the qdisc_skb_cb as well and sets
>> pkt_len to 0, so the byte counter is always 0 when the filter
>> is dumped. Save and restore the qdisc_skb_cb around the defrag
>> call, and also update pkt_len once all fragments have been
>> reassembled into one packet so that the counters of the
>> following actions are correct.
>>
>> filter protocol ip pref 2 flower chain 0 handle 0x2
>>   eth_type ipv4
>>   dst_ip 1.1.1.1
>>   ip_flags frag/firstfrag
>>   skip_hw
>>   not_in_hw
>>  action order 1: ct zone 1 nat pipe
>>   index 2 ref 1 bind 1 installed 11 sec used 11 sec
>>  Action statistics:
>>  Sent 0 bytes 11 pkt (dropped 0, overlimits 0 requeues 0)
>>  backlog 0b 0p requeues 0
>>  cookie e04106c2ac41769b278edaa9b5309960
>>
>> Fixes: b57dc7c13ea9 ("net/sched: Introduce action ct")
>> Signed-off-by: wenxu <wenxu@ucloud.cn>
> This is a much larger and more serious problem IMHO.  And this fix is
> not sufficient.
>
> Nothing can clobber the qdisc_skb_cb like this in these packet flows
> otherwise we will have serious crashes and problems.  Some packet
> schedulers store pointers in the qdisc CB private area, for example.
Why can't saving all of the cb private area and restoring it fully fix this?
>
> We need to somehow elide the CB clear when packets are defragmented by
> connection tracking.
Do you mean adding a new function like ip_defrag_nocb for the qdisc cb case?
>

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH net 1/2] net/sched: act_ct: fix restore the qdisc_skb_cb after defrag
  2020-07-02  9:17   ` wenxu
@ 2020-07-02 21:25     ` David Miller
  0 siblings, 0 replies; 5+ messages in thread
From: David Miller @ 2020-07-02 21:25 UTC (permalink / raw)
  To: wenxu; +Cc: netdev

From: wenxu <wenxu@ucloud.cn>
Date: Thu, 2 Jul 2020 17:17:47 +0800

> On 7/2/2020 6:21 AM, David Miller wrote:
>> From: wenxu@ucloud.cn
>> Date: Mon, 29 Jun 2020 17:16:17 +0800
>>
>> Nothing can clobber the qdisc_skb_cb like this in these packet flows
>> otherwise we will have serious crashes and problems.  Some packet
>> schedulers store pointers in the qdisc CB private area, for example.
> Why can't saving all of the cb private area and restoring it fully fix this?

Parallel accesses to the SKB.

^ permalink raw reply	[flat|nested] 5+ messages in thread
