From: Vlad Buslov <vladbu@mellanox.com>
To: netdev@vger.kernel.org
Cc: jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us,
    davem@davemloft.net, Vlad Buslov <vladbu@mellanox.com>
Subject: [PATCH net-next 06/12] net: sched: flower: handle concurrent mask insertion
Date: Thu, 14 Feb 2019 09:47:06 +0200
Message-Id: <20190214074712.17846-7-vladbu@mellanox.com>
In-Reply-To: <20190214074712.17846-1-vladbu@mellanox.com>
References: <20190214074712.17846-1-vladbu@mellanox.com>

Without rtnl lock protection, masks with the same key can be inserted
concurrently. To prevent that, insert a temporary mask with reference
count zero into the masks hashtable; any concurrent modification that
finds it will retry. After removing the temporary mask from the masks
hashtable, wait for an RCU grace period to complete in order to
accommodate concurrent readers.

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Jiri Pirko
Suggested-by: Jiri Pirko
---
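[ Note for reviewers, placed after the '---' separator so git-am drops
  it. The sketch below condenses the writer-side protocol this patch
  spreads across fl_check_assign_mask() and fl_create_new_mask() into
  one function, to make the placeholder/replace/grace-period sequence
  easier to follow. It is illustrative only: the ex_* types, the
  ex_ht_params layout, and the helper name are simplified stand-ins
  invented for the example, not the actual cls_flower structures. ]

#include <linux/err.h>
#include <linux/rcupdate.h>
#include <linux/refcount.h>
#include <linux/rhashtable.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Simplified stand-ins for cls_fl_head/fl_flow_mask (illustration). */
struct ex_mask {
        struct rhash_head ht_node;
        refcount_t refcnt;      /* zero while the entry is a placeholder */
        u32 key;
};

struct ex_head {
        struct rhashtable ht;
};

static const struct rhashtable_params ex_ht_params = {
        .head_offset = offsetof(struct ex_mask, ht_node),
        .key_offset  = offsetof(struct ex_mask, key),
        .key_len     = sizeof(u32),
};

/* Writer side of the protocol: 'tmp' is caller-owned (stack) storage
 * whose only job is to reserve the hashtable slot while the real mask
 * is allocated. Returns the live mask with a reference taken, or an
 * ERR_PTR; -EAGAIN means another writer currently owns a placeholder
 * for this key and the whole operation should be retried.
 */
static struct ex_mask *ex_get_or_create(struct ex_head *head,
                                        struct ex_mask *tmp)
{
        struct ex_mask *found, *newmask;
        int err;

        rcu_read_lock();
        /* Atomically: insert 'tmp' (returns NULL), or get the existing
         * entry with the same key (returns it), or fail (ERR_PTR).
         */
        found = rhashtable_lookup_get_insert_fast(&head->ht, &tmp->ht_node,
                                                  ex_ht_params);
        if (IS_ERR(found)) {
                rcu_read_unlock();
                return found;
        }
        if (found) {
                /* refcnt == 0 => concurrent writer's placeholder: retry. */
                err = refcount_inc_not_zero(&found->refcnt) ? 0 : -EAGAIN;
                rcu_read_unlock();
                return err ? ERR_PTR(err) : found;
        }
        rcu_read_unlock();

        /* Slot reserved; build the permanent mask and swap it in. */
        newmask = kzalloc(sizeof(*newmask), GFP_KERNEL);
        if (!newmask) {
                err = -ENOMEM;
                goto errout_cleanup;
        }
        newmask->key = tmp->key;
        refcount_set(&newmask->refcnt, 1);
        err = rhashtable_replace_fast(&head->ht, &tmp->ht_node,
                                      &newmask->ht_node, ex_ht_params);
        if (err) {
                kfree(newmask);
                goto errout_cleanup;
        }
        /* Concurrent readers may still hold pointers into 'tmp'. */
        synchronize_rcu();
        return newmask;

errout_cleanup:
        rhashtable_remove_fast(&head->ht, &tmp->ht_node, ex_ht_params);
        /* Same grace period before stack-allocated 'tmp' leaves scope. */
        synchronize_rcu();
        return ERR_PTR(err);
}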
 net/sched/cls_flower.c | 41 ++++++++++++++++++++++++++++++++++-------
 1 file changed, 34 insertions(+), 7 deletions(-)

diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index b41b72e894a6..2b032303f8d5 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -1301,11 +1301,14 @@ static struct fl_flow_mask *fl_create_new_mask(struct cls_fl_head *head,
 	INIT_LIST_HEAD_RCU(&newmask->filters);
 
 	refcount_set(&newmask->refcnt, 1);
-	err = rhashtable_insert_fast(&head->ht, &newmask->ht_node,
-				     mask_ht_params);
+	err = rhashtable_replace_fast(&head->ht, &mask->ht_node,
+				      &newmask->ht_node, mask_ht_params);
 	if (err)
 		goto errout_destroy;
 
+	/* Wait until any potential concurrent users of mask are finished */
+	synchronize_rcu();
+
 	list_add_tail_rcu(&newmask->list, &head->masks);
 
 	return newmask;
@@ -1327,19 +1330,36 @@ static int fl_check_assign_mask(struct cls_fl_head *head,
 	int ret = 0;
 
 	rcu_read_lock();
-	fnew->mask = rhashtable_lookup_fast(&head->ht, mask, mask_ht_params);
+
+	/* Insert mask as temporary node to prevent concurrent creation of mask
+	 * with same key. Any concurrent lookups with same key will return
+	 * -EAGAIN because mask's refcnt is zero. It is safe to insert
+	 * stack-allocated 'mask' to masks hash table because we call
+	 * synchronize_rcu() before returning from this function (either in case
+	 * of error or after replacing it with heap-allocated mask in
+	 * fl_create_new_mask()).
+	 */
+	fnew->mask = rhashtable_lookup_get_insert_fast(&head->ht,
+						       &mask->ht_node,
+						       mask_ht_params);
 	if (!fnew->mask) {
 		rcu_read_unlock();
 
-		if (fold)
-			return -EINVAL;
+		if (fold) {
+			ret = -EINVAL;
+			goto errout_cleanup;
+		}
 
 		newmask = fl_create_new_mask(head, mask);
-		if (IS_ERR(newmask))
-			return PTR_ERR(newmask);
+		if (IS_ERR(newmask)) {
+			ret = PTR_ERR(newmask);
+			goto errout_cleanup;
+		}
 
 		fnew->mask = newmask;
 		return 0;
+	} else if (IS_ERR(fnew->mask)) {
+		ret = PTR_ERR(fnew->mask);
 	} else if (fold && fold->mask != fnew->mask) {
 		ret = -EINVAL;
 	} else if (!refcount_inc_not_zero(&fnew->mask->refcnt)) {
@@ -1348,6 +1368,13 @@ static int fl_check_assign_mask(struct cls_fl_head *head,
 	}
 	rcu_read_unlock();
 	return ret;
+
+errout_cleanup:
+	rhashtable_remove_fast(&head->ht, &mask->ht_node,
+			       mask_ht_params);
+	/* Wait until any potential concurrent users of mask are finished */
+	synchronize_rcu();
+	return ret;
 }
 
 static int fl_set_parms(struct net *net, struct tcf_proto *tp,
-- 
2.13.6
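
[ P.S., illustrative only: this patch makes concurrent writers observe
  -EAGAIN; the retry itself belongs to the filter update path elsewhere
  in this series. Against the hypothetical ex_get_or_create() helper
  sketched above, a retrying caller would look roughly like this. ]

/* Hypothetical caller of the sketch above; in flower the retry lives
 * in the classifier's change path, not in a loop like this one.
 */
static struct ex_mask *ex_get_or_create_retry(struct ex_head *head, u32 key)
{
        struct ex_mask *mask;
        struct ex_mask tmp = {};        /* refcnt stays zero: placeholder */

        tmp.key = key;
        do {
                /* -EAGAIN: a concurrent writer holds the slot with a
                 * zero-refcnt node; it will shortly publish a real mask
                 * or remove the placeholder, so simply try again.
                 */
                mask = ex_get_or_create(head, &tmp);
        } while (IS_ERR(mask) && PTR_ERR(mask) == -EAGAIN);

        return mask;
}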