From: Xin Long
To: network dev <netdev@vger.kernel.org>, linux-sctp@vger.kernel.org
Cc: Marcelo Ricardo Leitner, Neil Horman, davem@davemloft.net,
	Matteo Croce, Vladis Dronov
Subject: [PATCH net-next 2/2] sctp: implement memory accounting on rx path
Date: Sun, 31 Mar 2019 16:53:47 +0800
X-Mailer: git-send-email 2.1.0
In-Reply-To: <57b7c29e160acf1a7e5f86ae8549b23ba8946c4b.1554022192.git.lucien.xin@gmail.com>
References: <57b7c29e160acf1a7e5f86ae8549b23ba8946c4b.1554022192.git.lucien.xin@gmail.com>

sk_forward_alloc is also updated on the rx path, but for consistency
we change to use sk_mem_charge() in sctp_skb_set_owner_r().

In sctp_eat_data(), checking sctp_memory_pressure alone is not enough,
as it does not cover the mem_cgroup_sockets_enabled case, so we change
to use sk_under_memory_pressure().

When the socket is under memory pressure, sk_mem_reclaim() and
sk_rmem_schedule() should be called on both the RENEGE and the CHUNK
DELIVERY paths, so that the memory pressure state is exited as soon
as possible. Note that sk_rmem_schedule() uses datalen rather than
skb->truesize to keep things simple there.

Signed-off-by: Xin Long
---
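
Not part of the change, just a reviewer's aid: below is a minimal,
compilable user-space model of the accounting pattern the rx path now
follows: prepay in page-sized quanta (sk_rmem_schedule()), consume the
prepaid quota when an event is queued (sk_mem_charge()), and hand
unused whole pages back (sk_mem_reclaim()) so the protocol can leave
memory pressure early. The model_* names, the single global pool and
the hard-coded limit are simplifications of mine, not the kernel API.

#include <stdbool.h>
#include <stdio.h>

#define QUANTUM 4096			/* one page, as in the kernel */

static long proto_memory;		/* pages charged by all sockets */
static const long proto_limit = 1024;	/* stand-in for the sysctl limit */

struct sk_model {
	long forward_alloc;		/* prepaid bytes not yet consumed */
	long rmem_alloc;		/* bytes sitting in the rx queue */
};

/* like sk_rmem_schedule(): prepay at least `size` bytes, whole pages */
static bool model_rmem_schedule(struct sk_model *sk, long size)
{
	long pages;

	if (sk->forward_alloc >= size)
		return true;
	pages = (size - sk->forward_alloc + QUANTUM - 1) / QUANTUM;
	if (proto_memory + pages > proto_limit)
		return false;		/* caller must drop or renege */
	proto_memory += pages;
	sk->forward_alloc += pages * QUANTUM;
	return true;
}

/* like sk_mem_charge(): consume prepaid quota for queued data */
static void model_mem_charge(struct sk_model *sk, long size)
{
	sk->forward_alloc -= size;
	sk->rmem_alloc += size;
}

/* like sk_mem_reclaim(): return unused whole pages to the pool */
static void model_mem_reclaim(struct sk_model *sk)
{
	long pages = sk->forward_alloc / QUANTUM;

	proto_memory -= pages;
	sk->forward_alloc -= pages * QUANTUM;
}

int main(void)
{
	struct sk_model sk = { 0, 0 };

	/* rx path: prepay first, then charge the queued chunk */
	if (model_rmem_schedule(&sk, 1400))
		model_mem_charge(&sk, 1400);
	printf("charged:   forward_alloc=%ld rmem=%ld pool=%ld pages\n",
	       sk.forward_alloc, sk.rmem_alloc, proto_memory);

	/* the reader consumed the data, so the charge flows back ... */
	sk.rmem_alloc -= 1400;
	sk.forward_alloc += 1400;

	/* ... and reclaiming returns the whole page to the pool, which
	 * is what lets the protocol leave memory pressure early
	 */
	model_mem_reclaim(&sk);
	printf("reclaimed: forward_alloc=%ld pool=%ld pages\n",
	       sk.forward_alloc, proto_memory);
	return 0;
}
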
 include/net/sctp/sctp.h |  2 +-
 net/sctp/sm_statefuns.c |  6 ++++--
 net/sctp/ulpevent.c     | 19 ++++++++-----------
 net/sctp/ulpqueue.c     |  3 ++-
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h
index 1d13ec3..eefdfa5 100644
--- a/include/net/sctp/sctp.h
+++ b/include/net/sctp/sctp.h
@@ -421,7 +421,7 @@ static inline void sctp_skb_set_owner_r(struct sk_buff *skb, struct sock *sk)
 	/*
 	 * This mimics the behavior of skb_set_owner_r
 	 */
-	sk->sk_forward_alloc -= event->rmem_len;
+	sk_mem_charge(sk, event->rmem_len);
 }
 
 /* Tests if the list has one and only one entry. */
diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
index c9ae340..7dfc34b 100644
--- a/net/sctp/sm_statefuns.c
+++ b/net/sctp/sm_statefuns.c
@@ -6412,13 +6412,15 @@ static int sctp_eat_data(const struct sctp_association *asoc,
 	 * in sctp_ulpevent_make_rcvmsg will drop the frame if we grow our
 	 * memory usage too much
 	 */
-	if (*sk->sk_prot_creator->memory_pressure) {
+	if (sk_under_memory_pressure(sk)) {
 		if (sctp_tsnmap_has_gap(map) &&
 		    (sctp_tsnmap_get_ctsn(map) + 1) == tsn) {
 			pr_debug("%s: under pressure, reneging for tsn:%u\n",
 				 __func__, tsn);
 			deliver = SCTP_CMD_RENEGE;
-		}
+		} else {
+			sk_mem_reclaim(sk);
+		}
 	}
 
 	/*
diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c
index 8cb7d98..c2a7478 100644
--- a/net/sctp/ulpevent.c
+++ b/net/sctp/ulpevent.c
@@ -634,8 +634,9 @@ struct sctp_ulpevent *sctp_ulpevent_make_rcvmsg(struct sctp_association *asoc,
 						gfp_t gfp)
 {
 	struct sctp_ulpevent *event = NULL;
-	struct sk_buff *skb;
-	size_t padding, len;
+	struct sk_buff *skb = chunk->skb;
+	struct sock *sk = asoc->base.sk;
+	size_t padding, datalen;
 	int rx_count;
 
 	/*
@@ -646,15 +647,12 @@ struct sctp_ulpevent *sctp_ulpevent_make_rcvmsg(struct sctp_association *asoc,
 	if (asoc->ep->rcvbuf_policy)
 		rx_count = atomic_read(&asoc->rmem_alloc);
 	else
-		rx_count = atomic_read(&asoc->base.sk->sk_rmem_alloc);
+		rx_count = atomic_read(&sk->sk_rmem_alloc);
 
-	if (rx_count >= asoc->base.sk->sk_rcvbuf) {
+	datalen = ntohs(chunk->chunk_hdr->length);
 
-		if ((asoc->base.sk->sk_userlocks & SOCK_RCVBUF_LOCK) ||
-		    (!sk_rmem_schedule(asoc->base.sk, chunk->skb,
-				       chunk->skb->truesize)))
-			goto fail;
-	}
+	if (rx_count >= sk->sk_rcvbuf || !sk_rmem_schedule(sk, skb, datalen))
+		goto fail;
 
 	/* Clone the original skb, sharing the data.  */
 	skb = skb_clone(chunk->skb, gfp);
@@ -681,8 +679,7 @@ struct sctp_ulpevent *sctp_ulpevent_make_rcvmsg(struct sctp_association *asoc,
 	 * The sender should never pad with more than 3 bytes.  The receiver
 	 * MUST ignore the padding bytes.
 	 */
-	len = ntohs(chunk->chunk_hdr->length);
-	padding = SCTP_PAD4(len) - len;
+	padding = SCTP_PAD4(datalen) - datalen;
 
 	/* Fixup cloned skb with just this chunks data.  */
 	skb_trim(skb, chunk->chunk_end - padding - skb->data);
diff --git a/net/sctp/ulpqueue.c b/net/sctp/ulpqueue.c
index 5dde921..770ff1f 100644
--- a/net/sctp/ulpqueue.c
+++ b/net/sctp/ulpqueue.c
@@ -1106,7 +1106,8 @@ void sctp_ulpq_renege(struct sctp_ulpq *ulpq, struct sctp_chunk *chunk,
 		freed += sctp_ulpq_renege_frags(ulpq, needed - freed);
 	}
 	/* If able to free enough room, accept this chunk. */
-	if (freed >= needed) {
+	if (sk_rmem_schedule(asoc->base.sk, chunk->skb, needed) &&
+	    freed >= needed) {
 		int retval = sctp_ulpq_tail_data(ulpq, chunk, gfp);
 		/*
 		 * Enter partial delivery if chunk has not been
-- 
2.1.0