Netdev Archive on lore.kernel.org
* Bug: ip utility fails to show routes with large # of multipath next-hops
@ 2020-07-29  0:52 Ashutosh Grewal
From: Ashutosh Grewal @ 2020-07-29  0:52 UTC (permalink / raw)
  To: davem, netdev

Hello David and all,

I hope this is the correct way to report a bug.

I observed this problem with 256 IPv4 next-hops or 128 IPv6 next-hops
(or around 128 IPv4 next-hops with labels).

Here is an example -

root@a6be8c892bb7:/# ip route show 2.2.2.2
Error: Buffer too small for object.
Dump terminated

Kernel details (though I recall running into the same problem on 4.4*
kernel as well) -
root@ubuntu-vm:/# uname -a
Linux ch1 5.4.0-33-generic #37-Ubuntu SMP Thu May 21 12:53:59 UTC 2020
x86_64 x86_64 x86_64 GNU/Linux

I think the problem may have to do with the size of the skb being
allocated as part of servicing the netlink dump request.

static int netlink_dump(struct sock *sk)
{
  <snip>

                skb = alloc_skb(...)

Thanks,
Ashutosh


* Re: Bug: ip utility fails to show routes with large # of multipath next-hops
@ 2020-07-29 11:43 ` Ido Schimmel
From: Ido Schimmel @ 2020-07-29 11:43 UTC (permalink / raw)
  To: Ashutosh Grewal, dsahern; +Cc: davem, netdev

On Tue, Jul 28, 2020 at 05:52:44PM -0700, Ashutosh Grewal wrote:
> Hello David and all,
> 
> I hope this is the correct way to report a bug.

Sure

> 
> I observed this problem with 256 IPv4 next-hops or 128 IPv6 next-hops
> (or around 128 IPv4 next-hops with labels).
> 
> Here is an example -
> 
> root@a6be8c892bb7:/# ip route show 2.2.2.2
> Error: Buffer too small for object.
> Dump terminated
> 
> Kernel details (though I recall running into the same problem on 4.4*
> kernel as well) -
> root@ubuntu-vm:/# uname -a
> Linux ch1 5.4.0-33-generic #37-Ubuntu SMP Thu May 21 12:53:59 UTC 2020
> x86_64 x86_64 x86_64 GNU/Linux
> 
> I think the problem may have to do with the size of the skb being
> allocated as part of servicing the netlink dump request.
> 
> static int netlink_dump(struct sock *sk)
> {
>   <snip>
> 
>                 skb = alloc_skb(...)

Yes, I believe you are correct. You will get an skb of size 4K and it
can't fit the entire RTA_MULTIPATH attribute with all the nested
nexthops. Since it's a single attribute it cannot be split across
multiple messages.

Looking at the code, I think a similar problem was already encountered
with IFLA_VFINFO_LIST. See commit c7ac8679bec9 ("rtnetlink: Compute and
store minimum ifinfo dump size").

Maybe we can track the maximum number of IPv4/IPv6 nexthops during
insertion and then consult it to adjust 'min_dump_alloc' for
RTM_GETROUTE.

It's a bit complicated for IPv6 because you can append nexthops, but I
believe anyone using so many nexthops is already using RTA_MULTIPATH to
insert them, so we can simplify.

David, what do you think? Do you have a better / simpler idea? Maybe
one day everyone will be using the new nexthop API and this won't be
needed :)

> 
> Thanks,
> Ashutosh


* Re: Bug: ip utility fails to show routes with large # of multipath next-hops
@ 2020-07-29 15:17   ` David Ahern
From: David Ahern @ 2020-07-29 15:17 UTC (permalink / raw)
  To: Ido Schimmel, Ashutosh Grewal; +Cc: davem, netdev

On 7/29/20 5:43 AM, Ido Schimmel wrote:
> On Tue, Jul 28, 2020 at 05:52:44PM -0700, Ashutosh Grewal wrote:
>> Hello David and all,
>>
>> I hope this is the correct way to report a bug.
> 
> Sure
> 
>>
>> I observed this problem with 256 IPv4 next-hops or 128 IPv6 next-hops
>> (or around 128 IPv4 next-hops with labels).
>>
>> Here is an example -
>>
>> root@a6be8c892bb7:/# ip route show 2.2.2.2
>> Error: Buffer too small for object.
>> Dump terminated
>>
>> Kernel details (though I recall running into the same problem on 4.4*
>> kernel as well) -
>> root@ubuntu-vm:/# uname -a
>> Linux ch1 5.4.0-33-generic #37-Ubuntu SMP Thu May 21 12:53:59 UTC 2020
>> x86_64 x86_64 x86_64 GNU/Linux
>>
>> I think the problem may have to do with the size of the skb being
>> allocated as part of servicing the netlink dump request.
>>
>> static int netlink_dump(struct sock *sk)
>> {
>>   <snip>
>>
>>                 skb = alloc_skb(...)
> 
> Yes, I believe you are correct. You will get an skb of size 4K and it
> can't fit the entire RTA_MULTIPATH attribute with all the nested
> nexthops. Since it's a single attribute it cannot be split across
> multiple messages.

Yep, well-known problem.

> 
> Looking at the code, I think a similar problem was already encountered
> with IFLA_VFINFO_LIST. See commit c7ac8679bec9 ("rtnetlink: Compute and
> store minimum ifinfo dump size").
> 
> Maybe we can track the maximum number of IPv4/IPv6 nexthops during
> insertion and then consult it to adjust 'min_dump_alloc' for
> RTM_GETROUTE.

That seems better than the current design for GETLINK which walks all
devices to determine max dump size. Not sure how you will track that
efficiently though - add is easy, delete is not.

> 
> It's a bit complicated for IPv6 because you can append nexthops, but I
> believe anyone using so many nexthops is already using RTA_MULTIPATH to
> insert them, so we can simplify.

I hope so.

> 
> David, what do you think? Do you have a better / simpler idea? Maybe
> one day everyone will be using the new nexthop API and this won't be
> needed :)

exactly. You won't have this problem with separate nexthops since each
one is small (< 4k) and the group (multipath) is a set of ids, not the
full set of attributes.


* Re: Bug: ip utility fails to show routes with large # of multipath next-hops
@ 2020-07-31 22:26     ` Ashutosh Grewal
From: Ashutosh Grewal @ 2020-07-31 22:26 UTC (permalink / raw)
  To: David Ahern; +Cc: Ido Schimmel, davem, netdev

Thanks Ido and David for your confirmation and insight.

-- Ashutosh

On Wed, Jul 29, 2020 at 8:17 AM David Ahern <dsahern@gmail.com> wrote:
>
> On 7/29/20 5:43 AM, Ido Schimmel wrote:
> > On Tue, Jul 28, 2020 at 05:52:44PM -0700, Ashutosh Grewal wrote:
> >> Hello David and all,
> >>
> >> I hope this is the correct way to report a bug.
> >
> > Sure
> >
> >>
> >> I observed this problem with 256 IPv4 next-hops or 128 IPv6 next-hops
> >> (or around 128 IPv4 next-hops with labels).
> >>
> >> Here is an example -
> >>
> >> root@a6be8c892bb7:/# ip route show 2.2.2.2
> >> Error: Buffer too small for object.
> >> Dump terminated
> >>
> >> Kernel details (though I recall running into the same problem on 4.4*
> >> kernel as well) -
> >> root@ubuntu-vm:/# uname -a
> >> Linux ch1 5.4.0-33-generic #37-Ubuntu SMP Thu May 21 12:53:59 UTC 2020
> >> x86_64 x86_64 x86_64 GNU/Linux
> >>
> >> I think the problem may have to do with the size of the skb being
> >> allocated as part of servicing the netlink dump request.
> >>
> >> static int netlink_dump(struct sock *sk)
> >> {
> >>   <snip>
> >>
> >>                 skb = alloc_skb(...)
> >
> > Yes, I believe you are correct. You will get an skb of size 4K and it
> > can't fit the entire RTA_MULTIPATH attribute with all the nested
> > nexthops. Since it's a single attribute it cannot be split across
> > multiple messages.
>
> Yep, well-known problem.
>
> >
> > Looking at the code, I think a similar problem was already encountered
> > with IFLA_VFINFO_LIST. See commit c7ac8679bec9 ("rtnetlink: Compute and
> > store minimum ifinfo dump size").
> >
> > Maybe we can track the maximum number of IPv4/IPv6 nexthops during
> > insertion and then consult it to adjust 'min_dump_alloc' for
> > RTM_GETROUTE.
>
> That seems better than the current design for GETLINK which walks all
> devices to determine max dump size. Not sure how you will track that
> efficiently though - add is easy, delete is not.
>
> >
> > It's a bit complicated for IPv6 because you can append nexthops, but I
> > believe anyone using so many nexthops is already using RTA_MULTIPATH to
> > insert them, so we can simplify.
>
> I hope so.
>
> >
> > David, what do you think? Do you have a better / simpler idea? Maybe
> > one day everyone will be using the new nexthop API and this won't be
> > needed :)
>
> exactly. You won't have this problem with separate nexthops since each
> one is small (< 4k) and the group (multipath) is a set of ids, not the
> full set of attributes.

