From mboxrd@z Thu Jan 1 00:00:00 1970
From: Eric Dumazet
Subject: Re: [PATCH net-next v6 2/3] udp: implement memory accounting helpers
Date: Fri, 21 Oct 2016 05:24:02 -0700
Message-ID: <1477052642.7065.63.camel@edumazet-glaptop3.roam.corp.google.com>
References: <4d2e0fc8f5c3d1309b0fb71bc65a2719a8e82825.1477043395.git.pabeni@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
To: Paolo Abeni
Cc: netdev-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, "David S. Miller",
 James Morris, Trond Myklebust, Alexander Duyck, Daniel Borkmann,
 Eric Dumazet, Tom Herbert, Hannes Frederic Sowa, Edward Cree,
 linux-nfs-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Return-path:
In-Reply-To: <4d2e0fc8f5c3d1309b0fb71bc65a2719a8e82825.1477043395.git.pabeni-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Sender: linux-nfs-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
List-Id: netdev.vger.kernel.org

On Fri, 2016-10-21 at 13:55 +0200, Paolo Abeni wrote:
> Avoid using the generic helpers.
> Use the receive queue spin lock to protect the memory
> accounting operation, both on enqueue and on dequeue.
>
> On dequeue perform partial memory reclaiming, trying to
> leave a quantum of forward allocated memory.
>
> On enqueue use a custom helper, to allow some optimizations:
> - use a plain spin_lock() variant instead of the slightly
>   costly spin_lock_irqsave(),
> - avoid dst_force check, since the calling code has already
>   dropped the skb dst
> - avoid orphaning the skb, since skb_steal_sock() already did
>   the work for us
>
> +static void udp_rmem_release(struct sock *sk, int size, int partial)
> +{
> +	int amt;
> +
> +	atomic_sub(size, &sk->sk_rmem_alloc);
> +
> +	spin_lock_bh(&sk->sk_receive_queue.lock);
> +	sk->sk_forward_alloc += size;
> +	amt = (sk->sk_forward_alloc - partial) & ~(SK_MEM_QUANTUM - 1);
> +	sk->sk_forward_alloc -= amt;
> +	spin_unlock_bh(&sk->sk_receive_queue.lock);
> +
> +	if (amt)
> +		__sk_mem_reduce_allocated(sk, amt >> SK_MEM_QUANTUM_SHIFT);
> +}
> +
> +static void udp_rmem_free(struct sk_buff *skb)
> +{
> +	udp_rmem_release(skb->sk, skb->truesize, 1);
> +}
> +

It looks like you are acquiring/releasing sk_receive_queue.lock twice
per packet in recvmsg() (the second time in the destructor above).

We could do slightly better if:

We do not set skb->destructor at all, and manage
sk_rmem_alloc/sk_forward_alloc changes at the time we dequeue the skb
(if !MSG_PEEK), before the copy to user space.

--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at http://vger.kernel.org/majordomo-info.html