* re-routing multicast pkts after mangle table marking
@ 2015-03-23 20:14 Brian Aanderud
  2020-12-01 18:49 ` Marcin Szewczyk
  0 siblings, 1 reply; 12+ messages in thread
From: Brian Aanderud @ 2015-03-23 20:14 UTC (permalink / raw)
  To: netfilter

What must I do to get multicast frames routed out a 'different'
interface from the default one after applying a fwmark in iptables?
I am able to do this with unicast with a combination of 'ip rule',
'ip route' to a different table, and iptables to apply a 'mark'.
But the marked multicast frames never seem to follow the other
routing table's routes.

I have two interfaces eth1 and internal0 on different IP subnets.  By
default I route stuff out 'eth1', but I want to route some traffic out
'internal0'.  I plan to use iptables to put a MARK on the pkts I want
to go out 'internal0', then use an 'ip rule' to route it out the other
interface.  I've been doing this for years with unicast data, but now
I want to do it with SOME multicast data.  I'm not running pimd or
anything for true multiple-hop multicast routing/forwarding, I am
simply generating the frames locally and want SOME of them to go out
this other interface, and I want to be able to specify when they go
out by adding/removing iptables rules to mark them.

Here's my ifconfig output for the 2 interfaces:

  eth1    inet addr:192.168.12.100  Bcast:192.168.12.255  Mask:255.255.255.0

  internal0    inet addr:192.168.80.65  Bcast:192.168.80.255  Mask:255.255.255.0

Some info that might be useful from my system:

  $ iptables --version
  iptables v1.4.1.1
  $ uname -a
  Linux ubuntu_node12 2.6.28-18-generic #59-Ubuntu SMP Thu Jan 28 01:23:03 UTC 2010 i686 GNU/Linux


IP routes are shown here - note the default out to a host on 'eth1' --
again, I normally want to use that interface, and only sometimes use
'internal0':

  $ ip route show
  192.168.80.0/24 dev internal0  proto kernel  scope link  src 192.168.80.65
  192.168.12.0/24 dev eth1       proto kernel  scope link  src 192.168.12.100
  default via 192.168.12.1 dev eth1

So, I start sending frames, and of course they go out 'eth1' by
default.  Both unicast and multicast go out 'eth1', as one would
expect.  Now it's time to get them re-routed.

In this example I'll mark the frames I want to send out 'internal0'
with fwmark 12 in iptables, and route them using table 12.  I add
iptables rules to mark pkts with destination port 5001 (the iperf
default port) with fwmark 12:

  $ sudo iptables -t mangle -A OUTPUT -p udp --dport 5001 -j MARK --set-mark 12
  $ sudo iptables -t mangle -L OUTPUT -n -v
  Chain OUTPUT (policy ACCEPT 34M packets, 1666M bytes)
   pkts bytes target     prot opt in     out     source               destination
     17 25466 MARK       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:5001 MARK xset 0xc/0xffffffff

I add the 'ip rule' to route those frames using 'table 12':

  $ sudo ip rule add fwmark 12 table 12
  $ sudo ip rule show
  0:      from all lookup local
  32765:  from all fwmark 0xc lookup 12
  32766:  from all lookup main
  32767:  from all lookup default


I add a couple of routes to table 12.  Note I'm explicitly adding a
route to 224.1.1.1 plus a default.  More on that explicit route later.

  $ sudo ip route add default via 192.168.80.2 table 12
  $ sudo ip route add 224.1.1.1/32 dev internal0 table 12
  $ sudo ip route flush cache
  $ sudo ip route show table 12
  224.1.1.1 dev internal0  scope link
  default via 192.168.80.2 dev internal0


Now, I send UNICAST data using iperf.

        iperf -c 192.168.12.1 -u --ttl 10 -b 10k -t 10000

My unicast data now goes out on 'internal0', as expected.  I see the
frames with 'tcpdump' on interface 'internal0'.  They are correctly
"rerouted" to use that interface via table 12 after the mangle chain
sets their fwmark -- and I'm happy that I can force them out the
'wrong' interface (their destination IP is on the subnet that 'eth1'
uses, but I've forced them out 'internal0').

However, now I try sending multicast data using iperf:

        iperf -c 224.1.1.1 -u --ttl 10 -b 10k -t 10000

My multicast data does NOT get re-routed out 'internal0', even though
I can verify with iptables stats that the frames ARE being marked (the
pkt/byte counters go up when I run 'iptables -L -n -v').  No matter
what I try, the traffic still goes out 'eth1'.  I've tried 'ip route
flush cache', which doesn't help.  Here's what the route cache shows,
even after a cache flush:

  ubuntu_node12 : ~ $ sudo ip route show cache
  multicast 224.1.1.1 from 192.168.12.100 dev eth1
      cache <mc>  mtu 1500 advmss 1460 hoplimit 64
  multicast 224.1.1.1 from 192.168.12.100 dev eth1
      cache <mc>  mtu 1500 advmss 1460 hoplimit 64

I can watch the 'mark' rule get hit, even AFTER I flush the cache, so
I know the frames are being marked in the mangle table, but nothing
changes.  Here are two snapshots of the 'iptables' rule hit counts:

Chain OUTPUT (policy ACCEPT 34M packets, 1670M bytes)
 pkts bytes target     prot opt in     out     source               destination
  144  216K MARK       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:5001 MARK xset 0xc/0xffffffff


Chain OUTPUT (policy ACCEPT 34M packets, 1670M bytes)
 pkts bytes target     prot opt in     out     source               destination
  217  325K MARK       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:5001 MARK xset 0xc/0xffffffff


None of the 60+ frames went out 'internal0'.

I even added the explicit 224.1.1.1/32 route through 'internal0' in
the table 12 routing table, as you can see above, to try to get around
this, but multicast frames are never 're-routed'.  I also tried adding
a masquerade rule, thinking it might possibly help, but it doesn't
seem to change anything.
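For completeness: the one socket-level knob I know of that pins the egress interface for locally generated multicast, sidestepping the routing tables entirely, is IP_MULTICAST_IF -- though that means changing the application, which I was hoping to avoid.  A rough sketch (Python, hypothetical, not something I have tried on this kernel):

```python
import socket
import struct

def multicast_sender(ifaddr: str, ttl: int = 10) -> socket.socket:
    """Return a UDP socket whose multicast frames leave via the
    interface owning `ifaddr` (e.g. internal0's 192.168.80.65),
    regardless of what the routing tables say."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # IP_MULTICAST_IF names the outgoing interface by one of its local
    # addresses; the kernel pins this socket's multicast egress to it.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                 socket.inet_aton(ifaddr))
    # Match iperf's --ttl 10 so the frames can cross routers if needed.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                 struct.pack("b", ttl))
    return s

# e.g.: multicast_sender("192.168.80.65").sendto(b"x", ("224.1.1.1", 5001))
```

That of course moves the decision out of iptables, so it doesn't give me the add/remove-a-rule control I described above.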

The only way I've been able to get the multicast frames out
'internal0' is if I put a route in the MAIN routing table that says to
use internal0 for address 224.1.1.1/32.  But even that was funky,
because in order to do that, I had to STOP iperf and RESTART it -- "ip
route flush cache" didn't do the trick to get them re-routed.  It's
like the socket had a cached route that was maintained through a route
cache flush event.  At any rate, using the main table with a route to
this particular interface for 224.1.1.1 isn't really an option I'd
want to pursue, as I have only a subset of 224.1.1.1 frames that I
want routed out 'internal0', with most using 'eth1'.
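Related, and again only a hypothetical sketch I have not tested: SO_MARK sets the fwmark on the socket itself, so the very first route lookup -- the one that seems to pick 'eth1' before the mangle OUTPUT chain ever runs -- would already see mark 12 and could match my 'ip rule'.  It requires CAP_NET_ADMIN and, like any socket option, a change to the sending application:

```python
import socket

# SO_MARK is Linux-specific; older Pythons may not expose the constant.
SO_MARK = getattr(socket, "SO_MARK", 36)  # 36 in <asm-generic/socket.h>

def marked_udp_socket(mark: int) -> socket.socket:
    """Return a UDP socket whose packets carry `mark` from the start,
    so 'ip rule add fwmark <mark> table <t>' can steer them at the
    initial route lookup.  Requires CAP_NET_ADMIN (EPERM otherwise)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.setsockopt(socket.SOL_SOCKET, SO_MARK, mark)
    except OSError:
        s.close()
        raise
    return s

# e.g.: marked_udp_socket(12).sendto(b"x", ("224.1.1.1", 5001))
```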

Thank you for any help you can provide!  Again, this all works fine
with unicast; it's only multicast that won't work.

Thanks!
-Brian

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: re-routing multicast pkts after mangle table marking
  2015-03-23 20:14 re-routing multicast pkts after mangle table marking Brian Aanderud
@ 2020-12-01 18:49 ` Marcin Szewczyk
       [not found]   ` <CAF4tN+_YNz+0iCQzW7Fd+P5ZkDkU5g95Jv5fdHPNcqhzWtOOng@mail.gmail.com>
  0 siblings, 1 reply; 12+ messages in thread
From: Marcin Szewczyk @ 2020-12-01 18:49 UTC (permalink / raw)
  To: netfilter; +Cc: Brian Aanderud

Brian Aanderud on 23 Mar 2015 wrote:
> What must I do to get the multicast frames routed out a 'different'
> interface from the default one after applying a fwmark in iptables?
> I am able to do this with unicast with a combination of 'ip rule',
> 'ip route' to a different table, and iptables to apply a 'mark'.
> But the marked multicast frames never seem to follow the other
> routing table's routes.
> [...]

Hi,

I've stumbled upon the same problem as the one discussed over 5 years
ago (with no answer) on this mailing list[1], i.e. locally generated
multicast and broadcast traffic does not seem to follow policy routing
when it is constructed using `iptables --set-mark` and `ip rule fwmark`.

The iptables counter is incremented, so the rule matches.  It looks
as if routing occurred before mangling, when the mark had not yet been
set, and re-routing did not occur after mangling, as it does for
unicast traffic according to the diagram[2].

The same set of routing rules and tables works if `fwmark` is
replaced with some other criterion, e.g. `dport`.

Can anyone suggest whether I am trying to do something that just
should not work, whether I am missing some small but vital detail, or
whether it is some kind of bug?

On Debian Buster I can use:

    ip rule add to 255.255.255.255 dport 5001 table foo

which works, but I would like to be able to use fwmark for this on
Debian Jessie, for example, which doesn't have the 2018 additions
like dport.

As for the reason: I want to be able to send packets to
255.255.255.255 on two different interfaces (one tagged with a VLAN)
depending on dport, because of some legacy software and hardware I
cannot modify.

I have also experimented successfully with veth and putting one of
the applications into a separate network namespace, but that feels
like overkill.

I am interested both in a solution and in an explanation of why the
thing I am trying to do does not work.


[1]: https://marc.info/?l=netfilter&m=142714167809246&w=2
[2]: https://upload.wikimedia.org/wikipedia/commons/3/37/Netfilter-packet-flow.svg

-- 
Marcin Szewczyk
http://wodny.org


* Re: re-routing multicast pkts after mangle table marking
       [not found]       ` <CAN_K0LS+U95rxmhCtiE3sX_hdjEQySnV1HBB9sF1qrrVAz0n-w@mail.gmail.com>
@ 2020-12-02 11:23         ` Marcin Szewczyk
  2020-12-02 12:10           ` Eliezer Croitor
  0 siblings, 1 reply; 12+ messages in thread
From: Marcin Szewczyk @ 2020-12-02 11:23 UTC (permalink / raw)
  To: Fatih USTA; +Cc: Netfilter Users Mailing list

On Wed, Dec 02, 2020 at 07:26:46AM +0300, Fatih USTA wrote:
> > > On Tue, Dec 1, 2020 at 1:19 PM Marcin Szewczyk <marcin.szewczyk@wodny.org> wrote:
> > > > Brian Aanderud on 23 Mar 2015 wrote:
> > > > > What must I do to get the multicast frames routed out a 'different'
> > > > > interface from the default one after applying a fwmark in iptables?
> > > > > I am able to do this with unicast with a combination of 'ip rule',
> > > > > 'ip route' to a different table, and iptables to apply a 'mark'.
> > > > > But the marked multicast frames never seem to follow the other
> > > > > routing table's routes.
> > > > > [...]

> > > > I've stumbled upon the same problem as the one discussed over 5 years
> > > > ago (with no answer) on this mailing list[1], i.e. locally generated
> > > > multicast and broadcast traffic does not seem to follow policy routing
> > > > when it is constructed using `iptables --set-mark` and `ip rule fwmark`.
> > > > [...]
> > > > Can anyone suggest whether I am trying to do something that just
> > > > should not work, whether I am missing some small but vital detail,
> > > > or whether it is some kind of bug?
> > > >
> > > > [...]
> > > > [1]: https://marc.info/?l=netfilter&m=142714167809246&w=2
> > > > [2]: https://upload.wikimedia.org/wikipedia/commons/3/37/Netfilter-packet-flow.svg

> By default, the multicast packet TTL is 1. Have you changed the TTL?
> Otherwise, the routing cannot happen because the packet's life is over.

Increasing TTL doesn't seem to change anything. For broadcast it was
already 64.

Testing with:
- nc -nvbu 255.255.255.255 2000
- nc -nvbu 255.255.255.255 3000
- nc -nvbu 239.1.1.1 2000
- nc -nvbu 239.1.1.1 3000
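
For reference, those `nc -nvbu` senders amount to roughly the following
Python sketch (the `-b` flag corresponds to SO_BROADCAST; the raised
multicast TTL mirrors the iptables TTL rule):

```python
import socket
import struct

def test_sender(ttl: int = 32) -> socket.socket:
    """Roughly what `nc -nvbu <dst> <port>` sets up: a UDP socket that
    is allowed to send to 255.255.255.255 (SO_BROADCAST, nc's -b flag)
    and has a multicast TTL high enough to rule out TTL problems."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                 struct.pack("b", ttl))
    return s

# e.g.: test_sender().sendto(b"bar\n", ("239.1.1.1", 3000))
```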

This works:

    # ip rule
    0:	from all lookup local
    30000:	from all dport 3000 lookup test
    32766:	from all lookup main
    32767:	from all lookup default

This does not:

    # ip rule
    0:	from all lookup local
    30000:	from all fwmark 0x1 lookup test
    32766:	from all lookup main
    32767:	from all lookup default

For both, the iptables rules are:

    # iptables -L OUTPUT -n -v -t mangle
    Chain OUTPUT (policy ACCEPT 12M packets, 1507M bytes)
     pkts bytes target     prot opt in     out     source               destination
        4   128 TTL        udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:3000 TTL set to 32
        4   128 MARK       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:3000 MARK set 0x1

The test table and interfaces:

    # ip route show table test
    default dev wlp2s0.2 scope link

    # ip -br -4 addr | grep wlp2s0
    wlp2s0           UP             192.168.1.32/24 
    wlp2s0.2@wlp2s0  UP             192.168.20.20/24

    # ip -d link show dev wlp2s0
    3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DORMANT group default qlen 1000
        link/ether 00:c2:c6:1d:39:e5 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 256 maxmtu 2304 addrgenmode none numtxqueues 4 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 

    # ip -d link show dev wlp2s0.2
    10: wlp2s0.2@wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
        link/ether 00:c2:c6:1d:39:e5 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 0 maxmtu 65535 
        vlan protocol 802.1Q id 2 <REORDER_HDR> addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 
    



tcpdump results for the working `ip rule dport` solution (dport 3000 is
sent on VLAN 2):

    11:59:23.301939 00:c2:c6:1d:39:e5 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 46: (tos 0x0, ttl 64, id 15506, offset 0, flags [DF], proto UDP (17), length 32)
        192.168.1.32.37387 > 255.255.255.255.2000: UDP, length 4
            0x0000:  4500 0020 3c92 4000 4011 3c73 c0a8 0120  E...<.@.@.<s....
            0x0010:  ffff ffff 920b 07d0 000c ceb8 666f 6f0a  ............foo.
    11:59:26.485132 00:c2:c6:1d:39:e5 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 50: vlan 2, p 0, ethertype IPv4, (tos 0x0, ttl 32, id 44759, offset 0, flags [DF], proto UDP (17), length 32)
        192.168.20.20.51634 > 255.255.255.255.3000: UDP, length 4
            0x0000:  4500 0020 aed7 4000 2011 d739 c0a8 1414  E.....@....9....
            0x0010:  ffff ffff c9b2 0bb8 000c 8143 6261 720a  ...........Cbar.

    12:05:42.945432 00:c2:c6:1d:39:e5 > 01:00:5e:01:01:01, ethertype IPv4 (0x0800), length 46: (tos 0x0, ttl 1, id 20557, offset 0, flags [DF], proto UDP (17), length 32)
        192.168.1.32.51491 > 239.1.1.1.2000: UDP, length 4
            0x0000:  4500 0020 504d 4000 0111 77b5 c0a8 0120  E...PM@...w.....
            0x0010:  ef01 0101 c923 07d0 000c a79d 666f 6f0a  .....#......foo.
    12:05:56.710881 00:c2:c6:1d:39:e5 > 01:00:5e:01:01:01, ethertype 802.1Q (0x8100), length 50: vlan 2, p 0, ethertype IPv4, (tos 0x0, ttl 32, id 62586, offset 0, flags [DF], proto UDP (17), length 32)
        192.168.20.20.40605 > 239.1.1.1.3000: UDP, length 4
            0x0000:  4500 0020 f47a 4000 2011 a193 c0a8 1414  E....z@.........
            0x0010:  ef01 0101 9e9d 0bb8 000c bc55 6261 720a  ...........Ubar.



tcpdump results for the non-working `ip rule fwmark` solution
(everything goes without VLAN tag):

    11:57:54.899126 00:c2:c6:1d:39:e5 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 46: (tos 0x0, ttl 64, id 60815, offset 0, flags [DF], proto UDP (17), length 32)
        192.168.1.32.42413 > 255.255.255.255.2000: UDP, length 4
            0x0000:  4500 0020 ed8f 4000 4011 8b75 c0a8 0120  E.....@.@..u....
            0x0010:  ffff ffff a5ad 07d0 000c bb16 666f 6f0a  ............foo.
    11:57:57.499439 00:c2:c6:1d:39:e5 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 46: (tos 0x0, ttl 32, id 61431, offset 0, flags [DF], proto UDP (17), length 32)
        192.168.1.32.33153 > 255.255.255.255.3000: UDP, length 4
            0x0000:  4500 0020 eff7 4000 2011 a90d c0a8 0120  E.....@.........
            0x0010:  ffff ffff 8181 0bb8 000c dc68 6261 720a  ...........hbar.

    12:06:35.215904 00:c2:c6:1d:39:e5 > 01:00:5e:01:01:01, ethertype IPv4 (0x0800), length 46: (tos 0x0, ttl 1, id 31632, offset 0, flags [DF], proto UDP (17), length 32)
        192.168.1.32.49461 > 239.1.1.1.2000: UDP, length 4
            0x0000:  4500 0020 7b90 4000 0111 4c72 c0a8 0120  E...{.@...Lr....
            0x0010:  ef01 0101 c135 07d0 000c af8b 666f 6f0a  .....5......foo.
    12:06:39.911621 00:c2:c6:1d:39:e5 > 01:00:5e:01:01:01, ethertype IPv4 (0x0800), length 46: (tos 0x0, ttl 32, id 31890, offset 0, flags [DF], proto UDP (17), length 32)
        192.168.1.32.46976 > 239.1.1.1.3000: UDP, length 4
            0x0000:  4500 0020 7c92 4000 2011 2c70 c0a8 0120  E...|.@...,p....
            0x0010:  ef01 0101 b780 0bb8 000c b666 6261 720a  ...........fbar.




-- 
Marcin Szewczyk
http://wodny.org


* RE: re-routing multicast pkts after mangle table marking
  2020-12-02 11:23         ` Marcin Szewczyk
@ 2020-12-02 12:10           ` Eliezer Croitor
  2020-12-02 12:36             ` Marcin Szewczyk
  0 siblings, 1 reply; 12+ messages in thread
From: Eliezer Croitor @ 2020-12-02 12:10 UTC (permalink / raw)
  To: 'Marcin Szewczyk', 'Fatih USTA'
  Cc: 'Netfilter Users Mailing list'

Hey,

Just wondering, how can I reproduce this issue on my local network?
First, a broadcast address cannot be "routed" anywhere other than a
connected network; there is no real "routing" so to speak.
I do not know how the kernel looks at it, but I assume it can only be
sent towards a connected device such as:
* ethernet
* tunnel (those which support broadcast)

Multicast is a whole other story..

Also, I am missing some of the thread emails, so: what kernel are we
talking about?
Say Debian/Ubuntu/RHEL/CentOS -- what version?  Etc.

Reproducing is important to understand what the issue is.

Thanks,
Eliezer

----
Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1ltd@gmail.com


* Re: re-routing multicast pkts after mangle table marking
  2020-12-02 12:10           ` Eliezer Croitor
@ 2020-12-02 12:36             ` Marcin Szewczyk
  2020-12-02 15:57               ` Eliezer Croitor
  0 siblings, 1 reply; 12+ messages in thread
From: Marcin Szewczyk @ 2020-12-02 12:36 UTC (permalink / raw)
  To: Eliezer Croitor
  Cc: 'Fatih USTA', 'Netfilter Users Mailing list'

On Wed, Dec 02, 2020 at 02:10:00PM +0200, Eliezer Croitor wrote:
> On Tue, Dec 01, 2020 at 07:49:43PM +0100, Marcin Szewczyk wrote:
> > Brian Aanderud on 23 Mar 2015 wrote:
> > > What must I do to get the multicast frames routed out a 'different'
> > > interface from the default one after applying a fwmark in iptables?
> > > I am able to do this with unicast with a combination of 'ip rule',
> > > 'ip route' to a different table, and iptables to apply a 'mark'.
> > > But the marked multicast frames never seem to follow the other
> > > routing table's routes.
> > > [...]
> > 
> > I've stumbled upon the same problem as the one discussed over 5 years
> > ago (with no answer) on this mailing list[1], i.e. locally generated
> > multicast and broadcast traffic does not seem to follow policy routing
> > when it is constructed using `iptables --set-mark` and `ip rule fwmark`.

> Just wondering, how can I reproduce this issue on my local network?

The configuration is actually in the message you quoted[1]. You do not
need a network adhering to any specific configuration to reproduce it.
It's a local phenomenon.

> First, a broadcast address cannot be "routed" anywhere other than a
> connected network; there is no real "routing" so to speak.

Here routing is limited to the local machine.  `ip rule dport`
actually produces positive results, so there is a way to force
broadcast traffic to adhere to local routing rules even though the
traffic is not going to be routed at the next layer-3 hop.

In general -- broadcast and multicast traffic adheres to rules in any
table not guarded by the fwmark criterion.

> I do not know how the kernel looks at it but I assume it can only be sent
> towards a connected device such as:
> * ethernet
> * tunnel(these which support broadcast)

I am using my wifi interface but one can reproduce the experiments I
mentioned with a veth device as well:

    ip link add ve1 type veth peer ve2

> Also, I am missing some of the thread emails, so: what kernel are we
> talking about?
> Say Debian/Ubuntu/RHEL/CentOS -- what version?  Etc.

I mentioned Debian Jessie (kernel 4.9) and Debian Buster (kernel 4.19).

> Reproducing is important to understand what the issue is.

As I mentioned -- configuration is in the quoted email[1] and in the
original email[2] (not mine) from 2015.

[1]: https://marc.info/?l=netfilter&m=160690828202259
[2]: https://marc.info/?l=netfilter&m=142714167809246

-- 
Marcin Szewczyk
http://wodny.org


* RE: re-routing multicast pkts after mangle table marking
  2020-12-02 12:36             ` Marcin Szewczyk
@ 2020-12-02 15:57               ` Eliezer Croitor
  2020-12-02 16:12                 ` Marcin Szewczyk
  0 siblings, 1 reply; 12+ messages in thread
From: Eliezer Croitor @ 2020-12-02 15:57 UTC (permalink / raw)
  To: 'Marcin Szewczyk'
  Cc: 'Fatih USTA', 'Netfilter Users Mailing list'

Thanks for taking the time to reply.

I have seen a similar "issue" with outgoing traffic generated locally.
From what I understand, the diagram:
* https://upload.wikimedia.org/wikipedia/commons/3/37/Netfilter-packet-flow.svg

doesn't talk about locally generated traffic.
I can try to reproduce this issue to understand it better.

The Linux kernel routing cache has changed a great deal since the time
of the original test...
If you want to reproduce this issue you can try to use iperf3 instead
of iperf:

    iperf3 -c 224.1.1.1 -u -b 10k

Can you create a test lab using netns?
You can see a fully automated example lab that I wrote at:
https://github.com/elico/mwan-nft-lb-example/blob/main/run-lab.sh

Other example labs can be seen in Vincent Bernat's blog post:
https://vincent.bernat.ch/en/blog/2018-route-based-vpn-wireguard

I don't yet understand whether there is something to verify at all.

Thanks,
Eliezer

----
Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1ltd@gmail.com


* Re: re-routing multicast pkts after mangle table marking
  2020-12-02 15:57               ` Eliezer Croitor
@ 2020-12-02 16:12                 ` Marcin Szewczyk
  2020-12-02 16:30                   ` Fatih USTA
  2020-12-02 17:35                   ` Eliezer Croitor
  0 siblings, 2 replies; 12+ messages in thread
From: Marcin Szewczyk @ 2020-12-02 16:12 UTC (permalink / raw)
  To: Eliezer Croitor
  Cc: 'Fatih USTA', 'Netfilter Users Mailing list'

On Wed, Dec 02, 2020 at 05:57:25PM +0200, Eliezer Croitor wrote:
> I have seen a similar "issue" with outgoing traffic generated locally.
> From what I understand the diagram:
> * https://upload.wikimedia.org/wikipedia/commons/3/37/Netfilter-packet-flow.svg
> 
> Doesn't talk about locally generated traffic..

I am quite sure that it is not true.

Take a look at the simplified chart:
https://stuffphilwrites.com/2014/09/iptables-processing-flowchart/

OUTPUT chains are specifically for locally generated traffic, not for
forwarded traffic.

Also see:
https://wiki.nftables.org/wiki-nftables/index.php/Configuring_chains#Base_chain_hooks
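As a side note on the output hook: in nftables, a base chain of type "route" on the output hook is what makes the kernel re-evaluate the route after a rule changes the packet mark. A minimal sketch (group address and mark value are assumptions for illustration):

```shell
# Create a route-type output chain; rules here that change the mark
# trigger a fresh routing decision for locally generated packets.
nft add table ip mangle
nft add chain ip mangle output '{ type route hook output priority mangle; }'
nft add rule ip mangle output ip daddr 225.1.2.3 meta mark set 12
```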

> There is a big difference in the linux kernel routing cache since the time
> of the test...

My test is fresh. The tcpdump output I pasted was created today.

> If you want to re-produce this issue you can try to use iperf3 instead of
> iperf.
> iperf3 -c 224.1.1.1 -u  -b 10k

I do not use iperf at all. I am using netcat.

> Can you create a test lab using netns ?
> You can see a fully automated example lab that I wrote at:
> https://github.com/elico/mwan-nft-lb-example/blob/main/run-lab.sh
> 
> Or another lab examples can be seen at Vincent blog posts github repository:
> https://vincent.bernat.ch/en/blog/2018-route-based-vpn-wireguard

I will take a look later to check if those are relevant.

-- 
Marcin Szewczyk
http://wodny.org


* Re: re-routing multicast pkts after mangle table marking
  2020-12-02 16:12                 ` Marcin Szewczyk
@ 2020-12-02 16:30                   ` Fatih USTA
  2020-12-02 17:03                     ` Marcin Szewczyk
  2020-12-02 17:35                   ` Eliezer Croitor
  1 sibling, 1 reply; 12+ messages in thread
From: Fatih USTA @ 2020-12-02 16:30 UTC (permalink / raw)
  To: Marcin Szewczyk, Eliezer Croitor; +Cc: 'Netfilter Users Mailing list'

I'm not sure whether I understand you correctly.
But I created a test topology with network namespaces for multicast
routing and it worked.

I tested on Ubuntu 18.04 but it probably works on Debian Buster too.

Install smcroute (https://github.com/troglobit/smcroute):
apt install smcroute

Get the testing tool (https://github.com/troglobit/mcjoin):
wget https://deb.troglobit.com/debian/pool/main/m/mcjoin/mcjoin_2.7_amd64.deb

Install the tool:
apt install ./mcjoin_2.7_amd64.deb

Create the network namespaces:
ip netns add client
ip netns add server

Create the veth interfaces, assigning one end to each namespace:
ip link add name c-eth10 type veth peer name eth0 netns client
ip link add name s-eth10 type veth peer name eth0 netns server

Bring up the local veth interfaces:
ip link set dev c-eth10 up
ip link set dev s-eth10 up

Bring up the namespaces' interfaces:
ip netns exec client ip link set dev lo up
ip netns exec client ip link set dev eth0 up
ip netns exec server ip link set dev lo up
ip netns exec server ip link set dev eth0 up

Assign IP addresses to the host veth interfaces:
ip addr add 10.0.0.1/24 dev c-eth10 brd +
ip addr add 10.0.1.1/24 dev s-eth10 brd +

Assign IP addresses to the namespace interfaces:
ip netns exec client ip addr add 10.0.0.2/24 dev eth0 brd +
ip netns exec server ip addr add 10.0.1.2/24 dev eth0 brd +

Set the default gateway in the namespaces:
ip netns exec client ip route add default via 10.0.0.1
ip netns exec server ip route add default via 10.0.1.1

Enable IP forwarding:
sysctl -w net.ipv4.ip_forward=1

Prepare the multicast routing daemon:
cat >> /etc/smcroute.conf <<EOF
mgroup from s-eth10 group 225.1.2.3
mroute from s-eth10 group 225.1.2.3 to c-eth10
EOF

Restart the service:
systemctl restart smcroute

Watch multicast packets being forwarded per interface:
watch -td -n 1 "cat /proc/net/ip_mr_vif"

or
tcpdump -i c-eth10 -nn multicast -c 10

Open two new terminals.

Listen in the client namespace:
ip netns exec client mcjoin 225.1.2.3 -p 3000

Send multicast packets from the server namespace:
ip netns exec server mcjoin -t 5 -s 225.1.2.3 -p 3000


Read these pages (the -t option) for using a different routing table:
ip mrule help
https://manpages.debian.org/buster/smcroute/smcroute.8.en.html#OPTIONS
https://github.com/troglobit/smcroute#multiple-routing-tables

This may help you with broadcast relay:
http://manpages.ubuntu.com/manpages/trusty/man8/bcrelay.8.html


Fatih USTA


* Re: re-routing multicast pkts after mangle table marking
  2020-12-02 16:30                   ` Fatih USTA
@ 2020-12-02 17:03                     ` Marcin Szewczyk
  0 siblings, 0 replies; 12+ messages in thread
From: Marcin Szewczyk @ 2020-12-02 17:03 UTC (permalink / raw)
  To: Fatih USTA; +Cc: Eliezer Croitor, 'Netfilter Users Mailing list'

On Wed, Dec 02, 2020 at 07:30:11PM +0300, Fatih USTA wrote:
> I'm not sure whether I understand you correctly.

I sincerely have no idea which part of the thread you are referencing.

> But I created testing topology with namespace for multicast routing and it
> worked.

And it is not a surprise. I have already mentioned at the beginning that
I have experimented with success using namespaces, see:
https://marc.info/?l=netfilter&m=160685018618860
And also mentioned veth:
https://marc.info/?l=netfilter&m=160691271903616
It worked even without smcroute.

> I tested on ubuntu 18.04 but probably works on debian buster too.

This does not answer the main question: why doesn't fwmark work with `ip
rule` for multicast and broadcast traffic, see:
- https://marc.info/?l=netfilter&m=142714167809246
- https://marc.info/?l=netfilter&m=160685018618860

-- 
Marcin Szewczyk
http://wodny.org


* RE: re-routing multicast pkts after mangle table marking
  2020-12-02 16:12                 ` Marcin Szewczyk
  2020-12-02 16:30                   ` Fatih USTA
@ 2020-12-02 17:35                   ` Eliezer Croitor
  2020-12-02 18:04                     ` Marcin Szewczyk
  1 sibling, 1 reply; 12+ messages in thread
From: Eliezer Croitor @ 2020-12-02 17:35 UTC (permalink / raw)
  To: 'Marcin Szewczyk'; +Cc: 'Netfilter Users Mailing list'

Just to be accurate,

There is a difference between packets which are handed directly to the NIC
itself and traffic which is bound to a specific IP address.
From what I remember (and my memory is not as good as it was..), the last
time I checked, on Debian Jessie you couldn't make any routing
decision on a bound socket.
Maybe on newer kernel versions or another OS it's not the same.

Eliezer

----
Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1ltd@gmail.com




* Re: re-routing multicast pkts after mangle table marking
  2020-12-02 17:35                   ` Eliezer Croitor
@ 2020-12-02 18:04                     ` Marcin Szewczyk
  2020-12-03  8:39                       ` Fatih USTA
  0 siblings, 1 reply; 12+ messages in thread
From: Marcin Szewczyk @ 2020-12-02 18:04 UTC (permalink / raw)
  To: Eliezer Croitor; +Cc: 'Netfilter Users Mailing list'

On Wed, Dec 02, 2020 at 07:35:07PM +0200, Eliezer Croitor wrote:
> There is a difference between packets which are handed directly to the NIC
> itself and traffic which is bound to a specific IP address.
> From what I remember (and my memory is not as good as it was..), the last
> time I checked, on Debian Jessie you couldn't make any routing
> decision on a bound socket.
> Maybe on newer kernel versions or another OS it's not the same.

But remember that `ip rule dport…` works (it is available in Buster) and
it interacts with the same sockets as fwmark does, yet `ip rule fwmark…`
doesn't work. So the evidence suggests that routing decisions are indeed
being made for those sockets.

Also note that I have done tests on sockets using sendto() without
explicitly binding to any address or interface.
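For reference, illustrative commands for the two cases being compared; the port, group address, interface name and table number are my assumptions, not exact values from the tests:

```shell
# Routing by destination port, which reportedly does work
# (iproute2 >= 4.17 supports the ipproto/dport selectors):
ip rule add ipproto udp dport 3000 table 12
ip route add 225.1.2.3/32 dev internal0 table 12

# Sending from an unbound socket with netcat, as in the tests:
echo test | nc -u -w1 225.1.2.3 3000
```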

-- 
Marcin Szewczyk
http://wodny.org


* Re: re-routing multicast pkts after mangle table marking
  2020-12-02 18:04                     ` Marcin Szewczyk
@ 2020-12-03  8:39                       ` Fatih USTA
  0 siblings, 0 replies; 12+ messages in thread
From: Fatih USTA @ 2020-12-03  8:39 UTC (permalink / raw)
  To: Marcin Szewczyk; +Cc: 'Netfilter Users Mailing list'

I looked at your old mail and I tested on Debian Buster.

"ip rule dport" is not working like fwmark here. How did you test it?
Share your setup step by step with example commands. (Please use veth
with namespaces.)

The kernel doesn't forward multicast traffic to another interface
without a user-level multicast routing daemon (mrouted, smcroute, pimd,
etc.). Am I wrong? Can someone correct me?

The kernel enables the multicast forwarding feature together with a
multicast daemon; otherwise you don't have permission for this:

echo 1 > /proc/sys/net/ipv4/conf/all/mc_forwarding
-bash: /proc/sys/net/ipv4/conf/all/mc_forwarding: Permission denied

When the daemon is started you will see forwarding enabled.
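One can watch the flag from userspace (the path is standard on IPv4-enabled kernels):

```shell
# mc_forwarding is read-only from userspace; it shows 1 only while a
# daemon (e.g. smcrouted) holds the multicast routing socket open.
cat /proc/sys/net/ipv4/conf/all/mc_forwarding
```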

The routing daemon will add this route automatically to the routing table:
ip route show table all type multicast
multicast 225.1.2.3/32 from 10.0.1.2/32 table default proto 17 iif s-eth10
     nexthop dev c-eth10 weight 2


Anyway, if you want an example of "fwmark" with a mcast daemon, then
follow these steps.

iptables -t mangle -A PREROUTING -d 225.1.2.3/32 -p udp -m pkttype \
    --pkt-type multicast -m udp --dport 3000 -m mark --mark 0x0 \
    -j MARK --set-mark 12

ip mrule add fwmark 12 table 12 pref 12  # don't use 0 for pref.

cat > /etc/mrt12.conf <<EOF
mgroup from s-eth10 group 225.1.2.3
mroute from s-eth10 group 225.1.2.3 to c-eth10
EOF

smcrouted -n -I mrt12 -t 12

-I instance name
-t routing table id

ip route show table all type multicast
multicast 225.1.2.3/32 from 10.0.1.2/32 table 12 proto 17 iif s-eth10
     nexthop dev c-eth10 weight 2


Regards.

Fatih USTA



end of thread, other threads:[~2020-12-03  8:39 UTC | newest]

Thread overview: 12+ messages
2015-03-23 20:14 re-routing multicast pkts after mangle table marking Brian Aanderud
2020-12-01 18:49 ` Marcin Szewczyk
     [not found]   ` <CAF4tN+_YNz+0iCQzW7Fd+P5ZkDkU5g95Jv5fdHPNcqhzWtOOng@mail.gmail.com>
     [not found]     ` <X8bE6GMR6U37cfH/@flatwhite>
     [not found]       ` <CAN_K0LS+U95rxmhCtiE3sX_hdjEQySnV1HBB9sF1qrrVAz0n-w@mail.gmail.com>
2020-12-02 11:23         ` Marcin Szewczyk
2020-12-02 12:10           ` Eliezer Croitor
2020-12-02 12:36             ` Marcin Szewczyk
2020-12-02 15:57               ` Eliezer Croitor
2020-12-02 16:12                 ` Marcin Szewczyk
2020-12-02 16:30                   ` Fatih USTA
2020-12-02 17:03                     ` Marcin Szewczyk
2020-12-02 17:35                   ` Eliezer Croitor
2020-12-02 18:04                     ` Marcin Szewczyk
2020-12-03  8:39                       ` Fatih USTA
