From: John Hurley <john.hurley@netronome.com>
To: vladbu@mellanox.com
Cc: jiri@mellanox.com, netdev@vger.kernel.org,
	simon.horman@netronome.com, jakub.kicinski@netronome.com,
	oss-drivers@netronome.com,
	John Hurley <john.hurley@netronome.com>
Subject: [RFC net-next 1/2] net: sched: add tp_op for pre_destroy
Date: Thu,  3 Oct 2019 00:14:31 +0100
Message-ID: <1570058072-12004-2-git-send-email-john.hurley@netronome.com>
In-Reply-To: <1570058072-12004-1-git-send-email-john.hurley@netronome.com>

A race condition can occur when a tcf_proto is destroyed. Several actions
take place before the destroy() tcf_proto_op is called, so if no higher
level locking (e.g. RTNL) is in use, other rules may be received and
processed in parallel before the classifier's own destroy functions have
completed.

Add a new tcf_proto_op called pre_destroy that is triggered when a
tcf_proto is signalled to be destroyed. This gives classifiers the option
of carrying out their own preparatory work at this hook, before the
teardown sequence begins.
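
For illustration only, here is a rough sketch of how a classifier might
implement the new op. The foo_* names, the FOO_DELETING flag and the
foo_head structure are hypothetical and not part of this patch; the point
is simply to flag the instance as going away so that concurrent rule
updates can back off before destroy() runs:

        static void foo_pre_destroy(struct tcf_proto *tp, bool rtnl_held,
                                    struct netlink_ext_ack *extack)
        {
                /* tp->root holds the classifier's private head; the tp
                 * reference held by the caller keeps it valid here.
                 */
                struct foo_head *head = rcu_dereference_protected(tp->root, 1);

                /* Mark the instance as being torn down so add/replace
                 * requests running without RTNL can bail out early.
                 */
                if (head)
                        set_bit(FOO_DELETING, &head->flags);
        }

        static struct tcf_proto_ops cls_foo_ops __read_mostly = {
                .kind           = "foo",
                .pre_destroy    = foo_pre_destroy,
                .destroy        = foo_destroy,
                /* remaining ops elided */
        };

The core then only needs to invoke the op from cls_api.c ahead of the
existing destroy paths, which is what the hunks below do.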

Fixes: 1d965c4def07 ("Refactor flower classifier to remove dependency on rtnl lock")
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reported-by: Louis Peens <louis.peens@netronome.com>
---
 include/net/sch_generic.h |  3 +++
 net/sched/cls_api.c       | 29 ++++++++++++++++++++++++++++-
 2 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 637548d..e458d6f 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -294,6 +294,9 @@ struct tcf_proto_ops {
 					    const struct tcf_proto *,
 					    struct tcf_result *);
 	int			(*init)(struct tcf_proto*);
+	void			(*pre_destroy)(struct tcf_proto *tp,
+					       bool rtnl_held,
+					       struct netlink_ext_ack *extack);
 	void			(*destroy)(struct tcf_proto *tp, bool rtnl_held,
 					   struct netlink_ext_ack *extack);
 
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index 64584a1..aecf716 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -222,6 +222,13 @@ static void tcf_proto_get(struct tcf_proto *tp)
 
 static void tcf_chain_put(struct tcf_chain *chain);
 
+static void tcf_proto_pre_destroy(struct tcf_proto *tp, bool rtnl_held,
+				  struct netlink_ext_ack *extack)
+{
+	if (tp->ops->pre_destroy)
+		tp->ops->pre_destroy(tp, rtnl_held, extack);
+}
+
 static void tcf_proto_destroy(struct tcf_proto *tp, bool rtnl_held,
 			      struct netlink_ext_ack *extack)
 {
@@ -534,9 +541,18 @@ static void tcf_chain_flush(struct tcf_chain *chain, bool rtnl_held)
 
 	mutex_lock(&chain->filter_chain_lock);
 	tp = tcf_chain_dereference(chain->filter_chain, chain);
+	chain->flushing = true;
+	mutex_unlock(&chain->filter_chain_lock);
+
+	while (tp) {
+		tcf_proto_pre_destroy(tp, rtnl_held, NULL);
+		tp = rcu_dereference_protected(tp->next, 1);
+	}
+
+	mutex_lock(&chain->filter_chain_lock);
+	tp = tcf_chain_dereference(chain->filter_chain, chain);
 	RCU_INIT_POINTER(chain->filter_chain, NULL);
 	tcf_chain0_head_change(chain, NULL);
-	chain->flushing = true;
 	mutex_unlock(&chain->filter_chain_lock);
 
 	while (tp) {
@@ -1636,6 +1652,8 @@ static void tcf_chain_tp_delete_empty(struct tcf_chain *chain,
 	struct tcf_proto **pprev;
 	struct tcf_proto *next;
 
+	tcf_proto_pre_destroy(tp, rtnl_held, extack);
+
 	mutex_lock(&chain->filter_chain_lock);
 
 	/* Atomically find and remove tp from chain. */
@@ -2164,6 +2182,15 @@ static int tc_del_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
 		err = -EINVAL;
 		goto errout_locked;
 	} else if (t->tcm_handle == 0) {
+		mutex_unlock(&chain->filter_chain_lock);
+
+		tcf_proto_pre_destroy(tp, rtnl_held, extack);
+		tcf_proto_put(tp, rtnl_held, NULL);
+
+		mutex_lock(&chain->filter_chain_lock);
+		/* Lookup again and take new ref in case of parallel update. */
+		tp = tcf_chain_tp_find(chain, &chain_info, protocol, prio,
+				       false);
 		tcf_chain_tp_remove(chain, &chain_info, tp);
 		mutex_unlock(&chain->filter_chain_lock);
 
-- 
2.7.4



Thread overview: 11+ messages
2019-10-02 23:14 [RFC net-next 0/2] prevent sync issues with hw offload of flower John Hurley
2019-10-02 23:14 ` John Hurley [this message]
2019-10-02 23:14 ` [RFC net-next 2/2] net: sched: fix tp destroy race conditions in flower John Hurley
2019-10-03 16:18   ` Vlad Buslov
2019-10-03 16:39     ` John Hurley
2019-10-03 16:26 ` [RFC net-next 0/2] prevent sync issues with hw offload of flower Vlad Buslov
2019-10-03 16:59   ` John Hurley
2019-10-03 17:19     ` Vlad Buslov
2019-10-04 15:39       ` John Hurley
2019-10-04 15:58         ` Vlad Buslov
2019-10-04 16:06           ` John Hurley
