Date: Tue, 23 Jul 2019 11:14:31 -0400
From: Neil Horman
To: Ido Schimmel
Cc: netdev@vger.kernel.org, davem@davemloft.net, dsahern@gmail.com,
        roopa@cumulusnetworks.com, nikolay@cumulusnetworks.com,
        jakub.kicinski@netronome.com, toke@redhat.com, andy@greyhouse.net,
        f.fainelli@gmail.com, andrew@lunn.ch, vivien.didelot@gmail.com,
        mlxsw@mellanox.com, Ido Schimmel
Subject: Re: [RFC PATCH net-next 10/12] drop_monitor: Add packet alert mode
Message-ID: <20190723151431.GA8419@localhost.localdomain>
References: <20190722183134.14516-1-idosch@idosch.org>
 <20190722183134.14516-11-idosch@idosch.org>
 <20190723124340.GA10377@hmswarspite.think-freely.org>
 <20190723141625.GA8972@splinter>
In-Reply-To: <20190723141625.GA8972@splinter>

On Tue, Jul 23, 2019 at 05:16:25PM +0300, Ido Schimmel wrote:
> On Tue, Jul 23, 2019 at 08:43:40AM -0400, Neil Horman wrote:
> > On Mon, Jul 22, 2019 at 09:31:32PM +0300, Ido Schimmel wrote:
> > > +static void net_dm_packet_work(struct work_struct *work)
> > > +{
> > > +	struct per_cpu_dm_data *data;
> > > +	struct sk_buff_head list;
> > > +	struct sk_buff *skb;
> > > +	unsigned long flags;
> > > +
> > > +	data = container_of(work, struct per_cpu_dm_data, dm_alert_work);
> > > +
> > > +	__skb_queue_head_init(&list);
> > > +
> > > +	spin_lock_irqsave(&data->drop_queue.lock, flags);
> > > +	skb_queue_splice_tail_init(&data->drop_queue, &list);
> > > +	spin_unlock_irqrestore(&data->drop_queue.lock, flags);
> > > +
> > These functions are all executed in a per-cpu context. While there's nothing
> > wrong with using a spinlock here, I think you can get away with just doing
> > local_irq_save and local_irq_restore.
>
> Hi Neil,
>
> Thanks a lot for reviewing. I might be missing something, but please
> note that this function is executed from a workqueue and therefore the
> CPU it is running on does not have to be the same CPU to which 'data'
> belongs. If so, I'm not sure how I can avoid taking the spinlock, as
> otherwise two different CPUs can modify the list concurrently.
>
Ah, my bad, I was under the impression that the schedule_work() call for
that particular work queue was actually a call to schedule_work_on(),
which would have affined it to a specific CPU. That said, looking at it,
I think using schedule_work_on() was my initial intent, since the work
queue is registered per CPU, and converting it to schedule_work_on()
would allow you to reduce the spin_lock to a faster
local_irq_save/local_irq_restore pair (a rough sketch of what I mean is
at the bottom of this mail).

Otherwise though, this looks really good to me.

Neil

> >
> > Neil
> >
> > > +	while ((skb = __skb_dequeue(&list)))
> > > +		net_dm_packet_report(skb);
> > > +}
>
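For illustration only, here is a rough, untested sketch of the direction
I'm suggesting, not a patch against your series; it reuses the structure
and field names from your code (per_cpu_dm_data, drop_queue,
dm_alert_work), and the queueing-side call is just my assumption about
where schedule_work() is invoked on the drop path today:

	/*
	 * On the drop path, queue the alert work on the CPU that owns
	 * 'data', e.g.:
	 *
	 *	schedule_work_on(smp_processor_id(), &data->dm_alert_work);
	 *
	 * The handler and the enqueue path then never run on different
	 * CPUs, so the queue spinlock can be replaced with plain local
	 * interrupt disabling.
	 */
	static void net_dm_packet_work(struct work_struct *work)
	{
		struct per_cpu_dm_data *data;
		struct sk_buff_head list;
		struct sk_buff *skb;
		unsigned long flags;

		data = container_of(work, struct per_cpu_dm_data, dm_alert_work);

		__skb_queue_head_init(&list);

		/* Work is affined to this CPU, so only the local drop
		 * path can touch drop_queue concurrently; disabling
		 * local interrupts is sufficient here.
		 */
		local_irq_save(flags);
		skb_queue_splice_tail_init(&data->drop_queue, &list);
		local_irq_restore(flags);

		while ((skb = __skb_dequeue(&list)))
			net_dm_packet_report(skb);
	}

Whether disabling local interrupts is really enough also depends on the
context the drop path runs in, so take the above with a grain of salt.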