From: Vlad Buslov <vladbu@mellanox.com>
To: netdev@vger.kernel.org
Cc: jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us,
	davem@davemloft.net, Vlad Buslov <vladbu@mellanox.com>
Subject: [PATCH net-next 05/12] net: sched: flower: add reference counter to flower mask
Date: Thu, 14 Feb 2019 09:47:05 +0200
Message-Id: <20190214074712.17846-6-vladbu@mellanox.com>
In-Reply-To: <20190214074712.17846-1-vladbu@mellanox.com>
References: <20190214074712.17846-1-vladbu@mellanox.com>

Extend the fl_flow_mask structure with a reference counter to allow
parallel modification without relying on rtnl lock. Use rcu read lock to
safely look up a mask and increment its reference counter in order to
accommodate concurrent deletes.
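
To make the locking scheme easier to review, here is a minimal standalone
userspace sketch of the same get/put pattern. It is an illustration only,
not kernel code: the struct, the single-slot "table" and the helper names
are invented for the example, C11 atomics stand in for refcount_t, and a
pthread rwlock stands in for rcu_read_lock()/rcu_read_unlock() plus the
rhashtable. Build with "cc -pthread".

#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct mask {
	atomic_int refcnt;	/* counts the filters using this mask */
	int key;
};

static struct mask *slot;	/* one-entry stand-in for the mask hashtable */
static pthread_rwlock_t slot_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Rough analogue of refcount_inc_not_zero(): only take a reference if the
 * object still has one, i.e. a concurrent put has not already committed
 * to freeing it.
 */
static bool mask_get_unless_zero(struct mask *m)
{
	int old = atomic_load(&m->refcnt);

	while (old != 0)
		if (atomic_compare_exchange_weak(&m->refcnt, &old, old + 1))
			return true;
	return false;
}

/* Rough analogue of fl_mask_put(): drop a reference; on the last one,
 * unlink the mask and free it.
 */
static void mask_put(struct mask *m)
{
	if (atomic_fetch_sub(&m->refcnt, 1) != 1)
		return;
	pthread_rwlock_wrlock(&slot_lock);
	if (slot == m)
		slot = NULL;
	pthread_rwlock_unlock(&slot_lock);
	free(m);
}

/* Rough analogue of fl_check_assign_mask(): look up an existing mask and
 * take a reference, or create a new one with refcnt == 1.  Returns -EAGAIN
 * when the found mask is being deleted concurrently, so the caller retries.
 */
static int mask_get_or_create(int key, struct mask **res)
{
	struct mask *m;

	pthread_rwlock_rdlock(&slot_lock);
	m = slot;
	if (m && m->key == key) {
		bool ok = mask_get_unless_zero(m);

		pthread_rwlock_unlock(&slot_lock);
		if (!ok)
			return -EAGAIN;
		*res = m;
		return 0;
	}
	pthread_rwlock_unlock(&slot_lock);

	m = calloc(1, sizeof(*m));
	if (!m)
		return -ENOMEM;
	m->key = key;
	atomic_init(&m->refcnt, 1);	/* creator holds the first reference */

	pthread_rwlock_wrlock(&slot_lock);
	if (slot) {			/* lost the race to another creator */
		pthread_rwlock_unlock(&slot_lock);
		free(m);
		return -EAGAIN;
	}
	slot = m;
	pthread_rwlock_unlock(&slot_lock);
	*res = m;
	return 0;
}

int main(void)
{
	struct mask *m = NULL;

	if (mask_get_or_create(42, &m) == 0) {
		printf("mask %d refcnt %d\n", m->key, atomic_load(&m->refcnt));
		mask_put(m);	/* last reference: unlinks and frees */
	}
	return 0;
}

The property the patch relies on is that the increment can fail: when it
does, fl_check_assign_mask() returns -EAGAIN and the caller retries the
lookup instead of taking a reference on a mask that a concurrent
fl_mask_put() has already started to free.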
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Jiri Pirko
---
 net/sched/cls_flower.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index fa5465f890e1..b41b72e894a6 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -76,6 +76,7 @@ struct fl_flow_mask {
 	struct list_head filters;
 	struct rcu_work rwork;
 	struct list_head list;
+	refcount_t refcnt;
 };
 
 struct fl_flow_tmplt {
@@ -320,6 +321,7 @@ static int fl_init(struct tcf_proto *tp)
 
 static void fl_mask_free(struct fl_flow_mask *mask)
 {
+	WARN_ON(!list_empty(&mask->filters));
 	rhashtable_destroy(&mask->ht);
 	kfree(mask);
 }
@@ -335,7 +337,7 @@ static void fl_mask_free_work(struct work_struct *work)
 static bool fl_mask_put(struct cls_fl_head *head, struct fl_flow_mask *mask,
 			bool async)
 {
-	if (!list_empty(&mask->filters))
+	if (!refcount_dec_and_test(&mask->refcnt))
 		return false;
 
 	rhashtable_remove_fast(&head->ht, &mask->ht_node, mask_ht_params);
@@ -1298,6 +1300,7 @@ static struct fl_flow_mask *fl_create_new_mask(struct cls_fl_head *head,
 
 	INIT_LIST_HEAD_RCU(&newmask->filters);
 
+	refcount_set(&newmask->refcnt, 1);
 	err = rhashtable_insert_fast(&head->ht, &newmask->ht_node,
 				     mask_ht_params);
 	if (err)
@@ -1321,9 +1324,13 @@ static int fl_check_assign_mask(struct cls_fl_head *head,
 				struct fl_flow_mask *mask)
 {
 	struct fl_flow_mask *newmask;
+	int ret = 0;
 
+	rcu_read_lock();
 	fnew->mask = rhashtable_lookup_fast(&head->ht, mask, mask_ht_params);
 	if (!fnew->mask) {
+		rcu_read_unlock();
+
 		if (fold)
 			return -EINVAL;
 
@@ -1332,11 +1339,15 @@ static int fl_check_assign_mask(struct cls_fl_head *head,
 			return PTR_ERR(newmask);
 
 		fnew->mask = newmask;
+		return 0;
 	} else if (fold && fold->mask != fnew->mask) {
-		return -EINVAL;
+		ret = -EINVAL;
+	} else if (!refcount_inc_not_zero(&fnew->mask->refcnt)) {
+		/* Mask was deleted concurrently, try again */
+		ret = -EAGAIN;
 	}
-
-	return 0;
+	rcu_read_unlock();
+	return ret;
 }
 
 static int fl_set_parms(struct net *net, struct tcf_proto *tp,
@@ -1473,6 +1484,7 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 		list_replace_rcu(&fold->list, &fnew->list);
 		fold->deleted = true;
 
+		fl_mask_put(head, fold->mask, true);
 		if (!tc_skip_hw(fold->flags))
 			fl_hw_destroy_filter(tp, fold, NULL);
 		tcf_unbind_filter(tp, &fold->res);
@@ -1522,7 +1534,7 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 	if (!tc_skip_hw(fnew->flags))
 		fl_hw_destroy_filter(tp, fnew, NULL);
 errout_mask:
-	fl_mask_put(head, fnew->mask, false);
+	fl_mask_put(head, fnew->mask, true);
 errout:
 	tcf_exts_destroy(&fnew->exts);
 	kfree(fnew);
-- 
2.13.6