From: Christoph Paasch <christoph.paasch@gmail.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: David Miller <davem@davemloft.net>,
	gregkh@linuxfoundation.org, netdev <netdev@vger.kernel.org>,
	stable@vger.kernel.org, Eric Dumazet <edumazet@google.com>,
	soheil@google.com, weiwan@google.com, willemb@google.com
Subject: Re: [PATCH v3 26/30] net: sk_buff rbnode reorg
Date: Thu, 18 Oct 2018 09:01:57 -0700	[thread overview]
Message-ID: <CALMXkpYjzyMqyYMqddt5a2ChBoMmrPg5eBhs5SSb3BS2PH5d8g@mail.gmail.com> (raw)
In-Reply-To: <20180913145902.17531-27-sthemmin@microsoft.com>

Hello,

On Thu, Sep 13, 2018 at 8:00 AM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> From: Eric Dumazet <edumazet@google.com>
>
> commit bffa72cf7f9df842f0016ba03586039296b4caaf upstream
>
> skb->rbnode shares space with skb->next, skb->prev and skb->tstamp
>
> Current uses (TCP receive ofo queue and netem) need to save/restore
> tstamp, while skb->dev is either NULL (TCP) or a constant for a given
> queue (netem).
>
> Since we plan using an RB tree for TCP retransmit queue to speedup SACK
> processing with large BDP, this patch exchanges skb->dev and
> skb->tstamp.
>
> This saves some overhead in both TCP and netem.
>
> v2: removes the swtstamp field from struct tcp_skb_cb
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Cc: Soheil Hassas Yeganeh <soheil@google.com>
> Cc: Wei Wang <weiwan@google.com>
> Cc: Willem de Bruijn <willemb@google.com>
> Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
> Signed-off-by: David S. Miller <davem@davemloft.net>
> ---
>  include/linux/skbuff.h                  |  24 ++--
>  include/net/inet_frag.h                 |   3 +-
>  net/ipv4/inet_fragment.c                |  16 ++-
>  net/ipv4/ip_fragment.c                  | 182 +++++++++++++-----------
>  net/ipv6/netfilter/nf_conntrack_reasm.c |   1 +
>  net/ipv6/reassembly.c                   |   1 +
>  6 files changed, 129 insertions(+), 98 deletions(-)

This change is missing the sch_netem.c pieces that Eric's original commit also touched.

As a result, there is a panic that can be hit, as reported in:
https://github.com/multipath-tcp/mptcp/issues/285#issuecomment-431001559


The following diff fixes it (I can submit a proper patch later today).
Storing of the tstamp can be changed as well (as Eric did in the
original patch), but that's not causing any issues. I see a backport
to v4.9 has been submitted as well...
---
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 2a2ab6bfe5d8..3d325b840802 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -624,6 +624,10 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
  skb->next = NULL;
  skb->prev = NULL;
  skb->tstamp = netem_skb_cb(skb)->tstamp_save;
+ /* skb->dev shares skb->rbnode area,
+ * we need to restore its value.
+ */
+ skb->dev = qdisc_dev(sch);

 #ifdef CONFIG_NET_CLS_ACT
  /*





>
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 2837e55df03e..f64e88444082 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -663,23 +663,27 @@ struct sk_buff {
>                         struct sk_buff          *prev;
>
>                         union {
> -                               ktime_t         tstamp;
> -                               u64             skb_mstamp;
> +                               struct net_device       *dev;
> +                               /* Some protocols might use this space to store information,
> +                                * while device pointer would be NULL.
> +                                * UDP receive path is one user.
> +                                */
> +                               unsigned long           dev_scratch;
>                         };
>                 };
> -               struct rb_node  rbnode; /* used in netem & tcp stack */
> +               struct rb_node          rbnode; /* used in netem, ip4 defrag, and tcp stack */
> +               struct list_head        list;
>         };
> -       struct sock             *sk;
>
>         union {
> -               struct net_device       *dev;
> -               /* Some protocols might use this space to store information,
> -                * while device pointer would be NULL.
> -                * UDP receive path is one user.
> -                */
> -               unsigned long           dev_scratch;
> +               struct sock             *sk;
>                 int                     ip_defrag_offset;
>         };
> +
> +       union {
> +               ktime_t         tstamp;
> +               u64             skb_mstamp;
> +       };
>         /*
>          * This is the control buffer. It is free to use for every
>          * layer. Please put your private variables there. If you
> diff --git a/include/net/inet_frag.h b/include/net/inet_frag.h
> index ed07e3786d98..e4c71a7644be 100644
> --- a/include/net/inet_frag.h
> +++ b/include/net/inet_frag.h
> @@ -75,7 +75,8 @@ struct inet_frag_queue {
>         struct timer_list       timer;
>         spinlock_t              lock;
>         refcount_t              refcnt;
> -       struct sk_buff          *fragments;
> +       struct sk_buff          *fragments;  /* Used in IPv6. */
> +       struct rb_root          rb_fragments; /* Used in IPv4. */
>         struct sk_buff          *fragments_tail;
>         ktime_t                 stamp;
>         int                     len;
> diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
> index c9e35b81d093..6904cbb7de1a 100644
> --- a/net/ipv4/inet_fragment.c
> +++ b/net/ipv4/inet_fragment.c
> @@ -136,12 +136,16 @@ void inet_frag_destroy(struct inet_frag_queue *q)
>         fp = q->fragments;
>         nf = q->net;
>         f = nf->f;
> -       while (fp) {
> -               struct sk_buff *xp = fp->next;
> -
> -               sum_truesize += fp->truesize;
> -               kfree_skb(fp);
> -               fp = xp;
> +       if (fp) {
> +               do {
> +                       struct sk_buff *xp = fp->next;
> +
> +                       sum_truesize += fp->truesize;
> +                       kfree_skb(fp);
> +                       fp = xp;
> +               } while (fp);
> +       } else {
> +               sum_truesize = skb_rbtree_purge(&q->rb_fragments);
>         }
>         sum = sum_truesize + f->qsize;
>
> diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
> index 960bf5eab59f..0e8f8de77e71 100644
> --- a/net/ipv4/ip_fragment.c
> +++ b/net/ipv4/ip_fragment.c
> @@ -136,7 +136,7 @@ static void ip_expire(struct timer_list *t)
>  {
>         struct inet_frag_queue *frag = from_timer(frag, t, timer);
>         const struct iphdr *iph;
> -       struct sk_buff *head;
> +       struct sk_buff *head = NULL;
>         struct net *net;
>         struct ipq *qp;
>         int err;
> @@ -152,14 +152,31 @@ static void ip_expire(struct timer_list *t)
>
>         ipq_kill(qp);
>         __IP_INC_STATS(net, IPSTATS_MIB_REASMFAILS);
> -
> -       head = qp->q.fragments;
> -
>         __IP_INC_STATS(net, IPSTATS_MIB_REASMTIMEOUT);
>
> -       if (!(qp->q.flags & INET_FRAG_FIRST_IN) || !head)
> +       if (!qp->q.flags & INET_FRAG_FIRST_IN)
>                 goto out;
>
> +       /* sk_buff::dev and sk_buff::rbnode are unionized. So we
> +        * pull the head out of the tree in order to be able to
> +        * deal with head->dev.
> +        */
> +       if (qp->q.fragments) {
> +               head = qp->q.fragments;
> +               qp->q.fragments = head->next;
> +       } else {
> +               head = skb_rb_first(&qp->q.rb_fragments);
> +               if (!head)
> +                       goto out;
> +               rb_erase(&head->rbnode, &qp->q.rb_fragments);
> +               memset(&head->rbnode, 0, sizeof(head->rbnode));
> +               barrier();
> +       }
> +       if (head == qp->q.fragments_tail)
> +               qp->q.fragments_tail = NULL;
> +
> +       sub_frag_mem_limit(qp->q.net, head->truesize);
> +
>         head->dev = dev_get_by_index_rcu(net, qp->iif);
>         if (!head->dev)
>                 goto out;
> @@ -179,16 +196,16 @@ static void ip_expire(struct timer_list *t)
>             (skb_rtable(head)->rt_type != RTN_LOCAL))
>                 goto out;
>
> -       skb_get(head);
>         spin_unlock(&qp->q.lock);
>         icmp_send(head, ICMP_TIME_EXCEEDED, ICMP_EXC_FRAGTIME, 0);
> -       kfree_skb(head);
>         goto out_rcu_unlock;
>
>  out:
>         spin_unlock(&qp->q.lock);
>  out_rcu_unlock:
>         rcu_read_unlock();
> +       if (head)
> +               kfree_skb(head);
>         ipq_put(qp);
>  }
>
> @@ -231,7 +248,7 @@ static int ip_frag_too_far(struct ipq *qp)
>         end = atomic_inc_return(&peer->rid);
>         qp->rid = end;
>
> -       rc = qp->q.fragments && (end - start) > max;
> +       rc = qp->q.fragments_tail && (end - start) > max;
>
>         if (rc) {
>                 struct net *net;
> @@ -245,7 +262,6 @@ static int ip_frag_too_far(struct ipq *qp)
>
>  static int ip_frag_reinit(struct ipq *qp)
>  {
> -       struct sk_buff *fp;
>         unsigned int sum_truesize = 0;
>
>         if (!mod_timer(&qp->q.timer, jiffies + qp->q.net->timeout)) {
> @@ -253,20 +269,14 @@ static int ip_frag_reinit(struct ipq *qp)
>                 return -ETIMEDOUT;
>         }
>
> -       fp = qp->q.fragments;
> -       do {
> -               struct sk_buff *xp = fp->next;
> -
> -               sum_truesize += fp->truesize;
> -               kfree_skb(fp);
> -               fp = xp;
> -       } while (fp);
> +       sum_truesize = skb_rbtree_purge(&qp->q.rb_fragments);
>         sub_frag_mem_limit(qp->q.net, sum_truesize);
>
>         qp->q.flags = 0;
>         qp->q.len = 0;
>         qp->q.meat = 0;
>         qp->q.fragments = NULL;
> +       qp->q.rb_fragments = RB_ROOT;
>         qp->q.fragments_tail = NULL;
>         qp->iif = 0;
>         qp->ecn = 0;
> @@ -278,7 +288,8 @@ static int ip_frag_reinit(struct ipq *qp)
>  static int ip_frag_queue(struct ipq *qp, struct sk_buff *skb)
>  {
>         struct net *net = container_of(qp->q.net, struct net, ipv4.frags);
> -       struct sk_buff *prev, *next;
> +       struct rb_node **rbn, *parent;
> +       struct sk_buff *skb1;
>         struct net_device *dev;
>         unsigned int fragsize;
>         int flags, offset;
> @@ -341,58 +352,58 @@ static int ip_frag_queue(struct ipq *qp, struct sk_buff *skb)
>         if (err)
>                 goto err;
>
> -       /* Find out which fragments are in front and at the back of us
> -        * in the chain of fragments so far.  We must know where to put
> -        * this fragment, right?
> -        */
> -       prev = qp->q.fragments_tail;
> -       if (!prev || prev->ip_defrag_offset < offset) {
> -               next = NULL;
> -               goto found;
> -       }
> -       prev = NULL;
> -       for (next = qp->q.fragments; next != NULL; next = next->next) {
> -               if (next->ip_defrag_offset >= offset)
> -                       break;  /* bingo! */
> -               prev = next;
> -       }
> +       /* Note : skb->rbnode and skb->dev share the same location. */
> +       dev = skb->dev;
> +       /* Makes sure compiler wont do silly aliasing games */
> +       barrier();
>
> -found:
>         /* RFC5722, Section 4, amended by Errata ID : 3089
>          *                          When reassembling an IPv6 datagram, if
>          *   one or more its constituent fragments is determined to be an
>          *   overlapping fragment, the entire datagram (and any constituent
>          *   fragments) MUST be silently discarded.
>          *
> -        * We do the same here for IPv4.
> +        * We do the same here for IPv4 (and increment an snmp counter).
>          */
>
> -       /* Is there an overlap with the previous fragment? */
> -       if (prev &&
> -           (prev->ip_defrag_offset + prev->len) > offset)
> -               goto discard_qp;
> -
> -       /* Is there an overlap with the next fragment? */
> -       if (next && next->ip_defrag_offset < end)
> -               goto discard_qp;
> +       /* Find out where to put this fragment.  */
> +       skb1 = qp->q.fragments_tail;
> +       if (!skb1) {
> +               /* This is the first fragment we've received. */
> +               rb_link_node(&skb->rbnode, NULL, &qp->q.rb_fragments.rb_node);
> +               qp->q.fragments_tail = skb;
> +       } else if ((skb1->ip_defrag_offset + skb1->len) < end) {
> +               /* This is the common/special case: skb goes to the end. */
> +               /* Detect and discard overlaps. */
> +               if (offset < (skb1->ip_defrag_offset + skb1->len))
> +                       goto discard_qp;
> +               /* Insert after skb1. */
> +               rb_link_node(&skb->rbnode, &skb1->rbnode, &skb1->rbnode.rb_right);
> +               qp->q.fragments_tail = skb;
> +       } else {
> +               /* Binary search. Note that skb can become the first fragment, but
> +                * not the last (covered above). */
> +               rbn = &qp->q.rb_fragments.rb_node;
> +               do {
> +                       parent = *rbn;
> +                       skb1 = rb_to_skb(parent);
> +                       if (end <= skb1->ip_defrag_offset)
> +                               rbn = &parent->rb_left;
> +                       else if (offset >= skb1->ip_defrag_offset + skb1->len)
> +                               rbn = &parent->rb_right;
> +                       else /* Found an overlap with skb1. */
> +                               goto discard_qp;
> +               } while (*rbn);
> +               /* Here we have parent properly set, and rbn pointing to
> +                * one of its NULL left/right children. Insert skb. */
> +               rb_link_node(&skb->rbnode, parent, rbn);
> +       }
> +       rb_insert_color(&skb->rbnode, &qp->q.rb_fragments);
>
> -       /* Note : skb->ip_defrag_offset and skb->dev share the same location */
> -       dev = skb->dev;
>         if (dev)
>                 qp->iif = dev->ifindex;
> -       /* Makes sure compiler wont do silly aliasing games */
> -       barrier();
>         skb->ip_defrag_offset = offset;
>
> -       /* Insert this fragment in the chain of fragments. */
> -       skb->next = next;
> -       if (!next)
> -               qp->q.fragments_tail = skb;
> -       if (prev)
> -               prev->next = skb;
> -       else
> -               qp->q.fragments = skb;
> -
>         qp->q.stamp = skb->tstamp;
>         qp->q.meat += skb->len;
>         qp->ecn |= ecn;
> @@ -414,7 +425,7 @@ static int ip_frag_queue(struct ipq *qp, struct sk_buff *skb)
>                 unsigned long orefdst = skb->_skb_refdst;
>
>                 skb->_skb_refdst = 0UL;
> -               err = ip_frag_reasm(qp, prev, dev);
> +               err = ip_frag_reasm(qp, skb, dev);
>                 skb->_skb_refdst = orefdst;
>                 return err;
>         }
> @@ -431,15 +442,15 @@ static int ip_frag_queue(struct ipq *qp, struct sk_buff *skb)
>         return err;
>  }
>
> -
>  /* Build a new IP datagram from all its fragments. */
> -
> -static int ip_frag_reasm(struct ipq *qp, struct sk_buff *prev,
> +static int ip_frag_reasm(struct ipq *qp, struct sk_buff *skb,
>                          struct net_device *dev)
>  {
>         struct net *net = container_of(qp->q.net, struct net, ipv4.frags);
>         struct iphdr *iph;
> -       struct sk_buff *fp, *head = qp->q.fragments;
> +       struct sk_buff *fp, *head = skb_rb_first(&qp->q.rb_fragments);
> +       struct sk_buff **nextp; /* To build frag_list. */
> +       struct rb_node *rbn;
>         int len;
>         int ihlen;
>         int err;
> @@ -453,25 +464,20 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *prev,
>                 goto out_fail;
>         }
>         /* Make the one we just received the head. */
> -       if (prev) {
> -               head = prev->next;
> -               fp = skb_clone(head, GFP_ATOMIC);
> +       if (head != skb) {
> +               fp = skb_clone(skb, GFP_ATOMIC);
>                 if (!fp)
>                         goto out_nomem;
> -
> -               fp->next = head->next;
> -               if (!fp->next)
> +               rb_replace_node(&skb->rbnode, &fp->rbnode, &qp->q.rb_fragments);
> +               if (qp->q.fragments_tail == skb)
>                         qp->q.fragments_tail = fp;
> -               prev->next = fp;
> -
> -               skb_morph(head, qp->q.fragments);
> -               head->next = qp->q.fragments->next;
> -
> -               consume_skb(qp->q.fragments);
> -               qp->q.fragments = head;
> +               skb_morph(skb, head);
> +               rb_replace_node(&head->rbnode, &skb->rbnode,
> +                               &qp->q.rb_fragments);
> +               consume_skb(head);
> +               head = skb;
>         }
>
> -       WARN_ON(!head);
>         WARN_ON(head->ip_defrag_offset != 0);
>
>         /* Allocate a new buffer for the datagram. */
> @@ -496,24 +502,35 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *prev,
>                 clone = alloc_skb(0, GFP_ATOMIC);
>                 if (!clone)
>                         goto out_nomem;
> -               clone->next = head->next;
> -               head->next = clone;
>                 skb_shinfo(clone)->frag_list = skb_shinfo(head)->frag_list;
>                 skb_frag_list_init(head);
>                 for (i = 0; i < skb_shinfo(head)->nr_frags; i++)
>                         plen += skb_frag_size(&skb_shinfo(head)->frags[i]);
>                 clone->len = clone->data_len = head->data_len - plen;
> -               head->data_len -= clone->len;
> -               head->len -= clone->len;
> +               skb->truesize += clone->truesize;
>                 clone->csum = 0;
>                 clone->ip_summed = head->ip_summed;
>                 add_frag_mem_limit(qp->q.net, clone->truesize);
> +               skb_shinfo(head)->frag_list = clone;
> +               nextp = &clone->next;
> +       } else {
> +               nextp = &skb_shinfo(head)->frag_list;
>         }
>
> -       skb_shinfo(head)->frag_list = head->next;
>         skb_push(head, head->data - skb_network_header(head));
>
> -       for (fp=head->next; fp; fp = fp->next) {
> +       /* Traverse the tree in order, to build frag_list. */
> +       rbn = rb_next(&head->rbnode);
> +       rb_erase(&head->rbnode, &qp->q.rb_fragments);
> +       while (rbn) {
> +               struct rb_node *rbnext = rb_next(rbn);
> +               fp = rb_to_skb(rbn);
> +               rb_erase(rbn, &qp->q.rb_fragments);
> +               rbn = rbnext;
> +               *nextp = fp;
> +               nextp = &fp->next;
> +               fp->prev = NULL;
> +               memset(&fp->rbnode, 0, sizeof(fp->rbnode));
>                 head->data_len += fp->len;
>                 head->len += fp->len;
>                 if (head->ip_summed != fp->ip_summed)
> @@ -524,7 +541,9 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *prev,
>         }
>         sub_frag_mem_limit(qp->q.net, head->truesize);
>
> +       *nextp = NULL;
>         head->next = NULL;
> +       head->prev = NULL;
>         head->dev = dev;
>         head->tstamp = qp->q.stamp;
>         IPCB(head)->frag_max_size = max(qp->max_df_size, qp->q.max_size);
> @@ -552,6 +571,7 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *prev,
>
>         __IP_INC_STATS(net, IPSTATS_MIB_REASMOKS);
>         qp->q.fragments = NULL;
> +       qp->q.rb_fragments = RB_ROOT;
>         qp->q.fragments_tail = NULL;
>         return 0;
>
> diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
> index 1d2f07cde01a..82ce0d0f54bf 100644
> --- a/net/ipv6/netfilter/nf_conntrack_reasm.c
> +++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
> @@ -471,6 +471,7 @@ nf_ct_frag6_reasm(struct frag_queue *fq, struct sk_buff *prev,  struct net_devic
>                                           head->csum);
>
>         fq->q.fragments = NULL;
> +       fq->q.rb_fragments = RB_ROOT;
>         fq->q.fragments_tail = NULL;
>
>         return true;
> diff --git a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c
> index afaad60dc2ac..ede0061b6f5d 100644
> --- a/net/ipv6/reassembly.c
> +++ b/net/ipv6/reassembly.c
> @@ -472,6 +472,7 @@ static int ip6_frag_reasm(struct frag_queue *fq, struct sk_buff *prev,
>         __IP6_INC_STATS(net, __in6_dev_get(dev), IPSTATS_MIB_REASMOKS);
>         rcu_read_unlock();
>         fq->q.fragments = NULL;
> +       fq->q.rb_fragments = RB_ROOT;
>         fq->q.fragments_tail = NULL;
>         return 1;
>
> --
> 2.18.0
>

Thread overview: 45+ messages
2018-09-13 14:58 [PATCH v3 00/30] backport of IP fragmentation fixes Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 01/30] inet: frags: change inet_frags_init_net() return value Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 02/30] inet: frags: add a pointer to struct netns_frags Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 03/30] inet: frags: refactor ipfrag_init() Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 04/30] inet: frags: Convert timers to use timer_setup() Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 05/30] inet: frags: refactor ipv6_frag_init() Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 06/30] inet: frags: refactor lowpan_net_frag_init() Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 07/30] ipv6: export ip6 fragments sysctl to unprivileged users Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 08/30] rhashtable: add schedule points Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 09/30] inet: frags: use rhashtables for reassembly units Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 10/30] inet: frags: remove some helpers Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 11/30] inet: frags: get rif of inet_frag_evicting() Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 12/30] inet: frags: remove inet_frag_maybe_warn_overflow() Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 13/30] inet: frags: break the 2GB limit for frags storage Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 14/30] inet: frags: do not clone skb in ip_expire() Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 15/30] ipv6: frags: rewrite ip6_expire_frag_queue() Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 16/30] rhashtable: reorganize struct rhashtable layout Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 17/30] inet: frags: reorganize struct netns_frags Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 18/30] inet: frags: get rid of ipfrag_skb_cb/FRAG_CB Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 19/30] inet: frags: fix ip6frag_low_thresh boundary Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 20/30] ip: discard IPv4 datagrams with overlapping segments Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 21/30] net: speed up skb_rbtree_purge() Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 22/30] net: modify skb_rbtree_purge to return the truesize of all purged skbs Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 23/30] ipv6: defrag: drop non-last frags smaller than min mtu Stephen Hemminger
2019-01-10 19:30   ` Tom Herbert
2019-01-10 22:22     ` Florian Westphal
2019-01-11 10:57       ` Eric Dumazet
2019-01-11 12:21         ` Michal Kubecek
2019-01-11 12:27           ` Eric Dumazet
2019-01-11 12:52             ` Michal Kubecek
2019-01-11 13:07               ` Eric Dumazet
     [not found]                 ` <CAOSSMjUODMbBuW=GgwcEt6avKoyYD5A9CzdBtE6NR6dz4pnD6w@mail.gmail.com>
2019-01-11 14:09                   ` Eric Dumazet
2019-01-11 14:21                   ` Michal Kubecek
     [not found]                     ` <CAOSSMjVMVWxzkT5M2LHgf0+GPHdaWHV01a6mBqbGRVXOaQ04PQ@mail.gmail.com>
2019-01-11 17:09                       ` Peter Oskolkov
2019-01-11 18:10                         ` Michal Kubecek
2019-01-12  3:21                           ` Tom Herbert
2018-09-13 14:58 ` [PATCH v3 24/30] net: pskb_trim_rcsum() and CHECKSUM_COMPLETE are friends Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 25/30] net: add rb_to_skb() and other rb tree helpers Stephen Hemminger
2018-09-13 14:58 ` [PATCH v3 26/30] net: sk_buff rbnode reorg Stephen Hemminger
2018-10-18 16:01   ` Christoph Paasch [this message]
2018-09-13 14:58 ` [PATCH v3 27/30] ipv4: frags: precedence bug in ip_expire() Stephen Hemminger
2018-09-13 14:59 ` [PATCH v3 28/30] ip: add helpers to process in-order fragments faster Stephen Hemminger
2018-09-13 14:59 ` [PATCH v3 29/30] ip: process in-order fragments efficiently Stephen Hemminger
2018-09-13 14:59 ` [PATCH v3 30/30] ip: frags: fix crash in ip_do_fragment() Stephen Hemminger
2018-09-17 12:47 ` [PATCH v3 00/30] backport of IP fragmentation fixes Greg KH
