* [RFC] iproute2/tc caching proposal
@ 2009-05-06 22:03 Denys Fedoryschenko
  2009-05-07 18:44 ` Jarek Poplawski
  0 siblings, 1 reply; 6+ messages in thread
From: Denys Fedoryschenko @ 2009-05-06 22:03 UTC (permalink / raw)
  To: Patrick McHardy, Stephen Hemminger, Jarek Poplawski, netdev

[-- Attachment #1: Type: text/plain, Size: 631 bytes --]

Since someone already implemented caching in iproute2, my change is very 
trivial, but it gives a huge improvement in batch performance (30k rules: 
10 minutes vs. 30 seconds).

ll_init_map is called in many places in tc, but since tc does not change 
anything that could alter this map, I think it is enough to call it only once 
at the beginning, after rtnl_open().

The only exception is tc monitor: because it runs for a long time and things 
can change during that time, we call ll_init_map on each received rtnetlink 
event.
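
For reference, a rough sketch of the resulting flow (not part of the patch;
error handling is trimmed, "eth0" is just a placeholder, and it assumes being
built inside the iproute2 tree with its own libnetlink/ll_map helpers):

#include <stdio.h>
#include "libnetlink.h"		/* rtnl_open(), rtnl_close() */
#include "ll_map.h"		/* ll_init_map(), ll_name_to_index() */

static struct rtnl_handle rth;

int main(void)
{
	if (rtnl_open(&rth, 0) < 0)
		return 1;

	/* Dump the links once and fill the name<->index map. */
	ll_init_map(&rth);

	/* ... batch loop: thousands of tc commands, each resolving
	 * device names against the already-built map ... */
	unsigned idx = ll_name_to_index("eth0");
	if (idx == 0)
		fprintf(stderr, "Cannot find device \"eth0\"\n");

	rtnl_close(&rth);
	return 0;
}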

Also, please check "[RFC] [IPROUTE2] Filter class output by classid" to see if 
it is OK. Many people have told me it is a useful patch.

[-- Attachment #2: tc_caching.diff --]
[-- Type: text/x-diff, Size: 3662 bytes --]

index 67dd49c..38569ca 100644
--- a/iproute2/tc/f_route.c
+++ b/iproute2-test1/tc/f_route.c
@@ -83,7 +83,6 @@ static int route_parse_opt(struct filter_util *qu, char *handle, int argc, char
 		} else if (matches(*argv, "fromif") == 0) {
 			__u32 id;
 			NEXT_ARG();
-			ll_init_map(&rth);
 			if ((id=ll_name_to_index(*argv)) <= 0) {
 				fprintf(stderr, "Illegal \"fromif\"\n");
 				return -1;
index 226df4d..d7a9897 100644
--- a/iproute2/tc/m_mirred.c
+++ b/iproute2-test1/tc/m_mirred.c
@@ -146,8 +146,6 @@ parse_egress(struct action_util *a, int *argc_p, char ***argv_p, int tca_id, str
 
 	if (d[0])  {
 		int idx;
-		ll_init_map(&rth);
-
 		if ((idx = ll_name_to_index(d)) == 0) {
 			fprintf(stderr, "Cannot find device \"%s\"\n", d);
 			return -1;
index 8e362d2..0f26ab7 100644
--- a/iproute2/tc/tc.c
+++ b/iproute2-test1/tc/tc.c
@@ -235,6 +235,9 @@ static int batch(const char *name)
 		return -1;
 	}
 
+	ll_init_map(&rth);
+
+
 	cmdlineno = 0;
 	while (getcmdline(&line, &len, stdin) != -1) {
 		char *largv[100];
@@ -299,6 +302,7 @@ int main(int argc, char **argv)
 		argc--;	argv++;
 	}
 
+
 	if (do_batching)
 		return batch(batchfile);
 
@@ -313,6 +317,8 @@ int main(int argc, char **argv)
 		exit(1);
 	}
 
+	ll_init_map(&rth);
+
 	ret = do_cmd(argc-1, argv+1);
 	rtnl_close(&rth);
 
index 774497a..917f65c 100644
--- a/iproute2/tc/tc_class.c
+++ b/iproute2-test1/tc/tc_class.c
@@ -130,8 +130,6 @@ int tc_class_modify(int cmd, unsigned flags, int argc, char **argv)
 	}
 
 	if (d[0])  {
-		ll_init_map(&rth);
-
 		if ((req.t.tcm_ifindex = ll_name_to_index(d)) == 0) {
 			fprintf(stderr, "Cannot find device \"%s\"\n", d);
 			return 1;
@@ -273,8 +271,6 @@ int tc_class_list(int argc, char **argv)
 		argc--; argv++;
 	}
 
- 	ll_init_map(&rth);
-
 	if (d[0]) {
 		if ((t.tcm_ifindex = ll_name_to_index(d)) == 0) {
 			fprintf(stderr, "Cannot find device \"%s\"\n", d);
index 919c57c..91a1333 100644
--- a/iproute2/tc/tc_filter.c
+++ b/iproute2-test1/tc/tc_filter.c
@@ -159,8 +159,6 @@ int tc_filter_modify(int cmd, unsigned flags, int argc, char **argv)
 
 
 	if (d[0])  {
- 		ll_init_map(&rth);
-
 		if ((req.t.tcm_ifindex = ll_name_to_index(d)) == 0) {
 			fprintf(stderr, "Cannot find device \"%s\"\n", d);
 			return 1;
@@ -326,8 +324,6 @@ int tc_filter_list(int argc, char **argv)
 
 	t.tcm_info = TC_H_MAKE(prio<<16, protocol);
 
- 	ll_init_map(&rth);
-
 	if (d[0]) {
 		if ((t.tcm_ifindex = ll_name_to_index(d)) == 0) {
 			fprintf(stderr, "Cannot find device \"%s\"\n", d);
index bf58744..b2e6ec3 100644
--- a/iproute2/tc/tc_monitor.c
+++ b/iproute2-test1/tc/tc_monitor.c
@@ -39,6 +39,8 @@ int accept_tcmsg(const struct sockaddr_nl *who, struct nlmsghdr *n, void *arg)
 {
 	FILE *fp = (FILE*)arg;
 
+	ll_init_map(&rth);
+
 	if (n->nlmsg_type == RTM_NEWTFILTER || n->nlmsg_type == RTM_DELTFILTER) {
 		print_filter(who, n, arg);
 		return 0;
@@ -98,7 +100,7 @@ int do_tcmonitor(int argc, char **argv)
 	if (rtnl_open(&rth, groups) < 0)
 		exit(1);
 
-	ll_init_map(&rth);
+	
 
 	if (rtnl_listen(&rth, accept_tcmsg, (void*)stdout) < 0) {
 		rtnl_close(&rth);
index c7f2988..c86e52c 100644
--- a/iproute2/tc/tc_qdisc.c
+++ b/iproute2-test1/tc/tc_qdisc.c
@@ -177,8 +177,6 @@ int tc_qdisc_modify(int cmd, unsigned flags, int argc, char **argv)
 	if (d[0])  {
 		int idx;
 
- 		ll_init_map(&rth);
-
 		if ((idx = ll_name_to_index(d)) == 0) {
 			fprintf(stderr, "Cannot find device \"%s\"\n", d);
 			return 1;
@@ -308,8 +306,6 @@ int tc_qdisc_list(int argc, char **argv)
 		argc--; argv++;
 	}
 
- 	ll_init_map(&rth);
-
 	if (d[0]) {
 		if ((t.tcm_ifindex = ll_name_to_index(d)) == 0) {
 			fprintf(stderr, "Cannot find device \"%s\"\n", d);


* Re: [RFC] iproute2/tc caching proposal
  2009-05-06 22:03 [RFC] iproute2/tc caching proposal Denys Fedoryschenko
@ 2009-05-07 18:44 ` Jarek Poplawski
  2009-05-07 19:27   ` Jarek Poplawski
  2009-05-07 19:41   ` Denys Fedoryschenko
  0 siblings, 2 replies; 6+ messages in thread
From: Jarek Poplawski @ 2009-05-07 18:44 UTC (permalink / raw)
  To: Denys Fedoryschenko; +Cc: Patrick McHardy, Stephen Hemminger, netdev

Denys Fedoryschenko wrote, On 05/07/2009 12:03 AM:

> Since someone already implemented caching in iproute2, my change is very
> trivial, but it gives a huge improvement in batch performance (30k rules:
> 10 minutes vs. 30 seconds).
> 
> ll_init_map is called in many places in tc, but since tc does not change
> anything that could alter this map, I think it is enough to call it only once
> at the beginning, after rtnl_open().
> 
> The only exception is tc monitor: because it runs for a long time and things
> can change during that time, we call ll_init_map on each received rtnetlink
> event.


Do you mean 30 sec. is too short for a change? I don't know these things well
enough; your idea looks very nice, but I wonder whether you have tested how it
behaves if, e.g., after 15k rules some dev goes away which is used in the
next 15k?

> 
> Also, please check "[RFC] [IPROUTE2] Filter class output by classid" to see
> if it is OK. Many people have told me it is a useful patch.
> 


I agree it's a useful and quite natural option.

Thanks,
Jarek P.


* Re: [RFC] iproute2/tc caching proposal
  2009-05-07 18:44 ` Jarek Poplawski
@ 2009-05-07 19:27   ` Jarek Poplawski
  2009-05-07 19:49     ` Denys Fedoryschenko
  2009-05-07 19:41   ` Denys Fedoryschenko
  1 sibling, 1 reply; 6+ messages in thread
From: Jarek Poplawski @ 2009-05-07 19:27 UTC (permalink / raw)
  To: Denys Fedoryschenko; +Cc: Patrick McHardy, Stephen Hemminger, netdev

Jarek Poplawski wrote, On 05/07/2009 08:44 PM:

> Do you mean 30 sec. is too short for a change? I don't know these things well
> enough; your idea looks very nice, but I wonder whether you have tested how it
> behaves if, e.g., after 15k rules some dev goes away which is used in the
> next 15k?
 
Hmm... actually, it seems there should be no problem, except less info on the
reason of the failure.

 
Jarek P.


* Re: [RFC] iproute2/tc caching proposal
  2009-05-07 18:44 ` Jarek Poplawski
  2009-05-07 19:27   ` Jarek Poplawski
@ 2009-05-07 19:41   ` Denys Fedoryschenko
  1 sibling, 0 replies; 6+ messages in thread
From: Denys Fedoryschenko @ 2009-05-07 19:41 UTC (permalink / raw)
  To: Jarek Poplawski; +Cc: Patrick McHardy, Stephen Hemminger, netdev

On Thursday 07 May 2009 21:44:52 Jarek Poplawski wrote:
> Denys Fedoryschenko wrote, On 05/07/2009 12:03 AM:
> > Since someone already implemented caching in iproute2, my change is very
> > trivial, but it gives a huge improvement in batch performance (30k rules:
> > 10 minutes vs. 30 seconds).
> >
> > ll_init_map is called in many places in tc, but since tc does not change
> > anything that could alter this map, I think it is enough to call it only
> > once at the beginning, after rtnl_open().
> >
> > The only exception is tc monitor: because it runs for a long time and
> > things can change during that time, we call ll_init_map on each received
> > rtnetlink event.
>
> Do you mean 30 sec. is too short for a change? I don't know these things well
> enough; your idea looks very nice, but I wonder whether you have tested how it
> behaves if, e.g., after 15k rules some dev goes away which is used in the
> next 15k?
Well, I think if this point is critical, it is not a good idea to use batching
at all.
A bad condition can happen with 1k interfaces even in the "old style", where
each line takes hundreds of milliseconds: there is a chance that when the root
qdisc is created in the batch the interface doesn't exist, and by the time you
start creating classes it has appeared.

In my case, even if the device disappeared and its id is no longer valid, it
will just not add the shaper (an error, which with the -force flag is not
critical), because the id of the interface will be different even if a device
with the same name appears again, since the device id is incremented, as I
understand it.

For example:
ppp100 is assigned to john, who has 100 Mbit unlimited. Let's say ppp100 has
been assigned id 12345.
The batch starts, and before it reaches john he disconnects; another user,
let's say willy (32 Kbit/s), connects and gets ppp100, but its id will be
different, so the command will just give an error. I use the "-force" flag for
that.
If I refreshed the "interface-id" map on each batch line then, first, on a
system with 1k+ interfaces it would take around 10 minutes instead of 30
seconds to load a 20-40k line batch file; second, willy could get 100 Mbit by
mistake.
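
To illustrate that with a standalone sketch (plain <net/if.h> helpers, not code
from the patch or from iproute2): a recycled name gets a fresh index, so
anything built against the stale cached index simply fails instead of shaping
the new user.

#include <stdio.h>
#include <net/if.h>

int main(void)
{
	/* Index cached when the batch started (ppp100 = john at that moment). */
	unsigned cached = if_nametoindex("ppp100");
	char name[IF_NAMESIZE];

	/* Later: if ppp100 is torn down and re-created for another user, the
	 * kernel hands out a new index, so the cached one no longer resolves
	 * and the corresponding rule is simply rejected. */
	if (cached == 0 || if_indextoname(cached, name) == NULL)
		fprintf(stderr, "stale or unknown ifindex %u, rule skipped\n", cached);
	else
		printf("ifindex %u still maps to %s\n", cached, name);
	return 0;
}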



>
> > Also, please check "[RFC] [IPROUTE2] Filter class output by classid" to
> > see if it is OK. Many people have told me it is a useful patch.
>
> I agree it's a useful and quite natural option.
>
> Thanks,
> Jarek P.




* Re: [RFC] iproute2/tc caching proposal
  2009-05-07 19:27   ` Jarek Poplawski
@ 2009-05-07 19:49     ` Denys Fedoryschenko
  2009-05-07 20:01       ` Jarek Poplawski
  0 siblings, 1 reply; 6+ messages in thread
From: Denys Fedoryschenko @ 2009-05-07 19:49 UTC (permalink / raw)
  To: Jarek Poplawski; +Cc: Patrick McHardy, Stephen Hemminger, netdev

On Thursday 07 May 2009 22:27:02 Jarek Poplawski wrote:
> Jarek Poplawski wrote, On 05/07/2009 08:44 PM:
> > Do you mean 30 sec. is too short for a change? I don't know these things
> > well enough; your idea looks very nice, but I wonder whether you have tested
> > how it behaves if, e.g., after 15k rules some dev goes away which is used
> > in the next 15k?
>
> Hmm... actually, it seems there should be no problem, except less info on the
> reason of the failure.
The info will be completely the same. The case of changing interfaces has to be
handled correctly anyway when batching. It is difficult to explain; each person
builds shapers in his own way. I can even explain why caching is better in my
case, and probably for all other properly done shapers for such cases.

But what is critical for me is that when I load the shaper, the machine is
eating dust for 10 minutes (CPU utilisation is high, fans turning like hell
:-))) ), and some users have bandwidth without restrictions. 30 seconds is much
better, and there is still room for improvement.


>
>
> Jarek P.




* Re: [RFC] iproute2/tc caching proposal
  2009-05-07 19:49     ` Denys Fedoryschenko
@ 2009-05-07 20:01       ` Jarek Poplawski
  0 siblings, 0 replies; 6+ messages in thread
From: Jarek Poplawski @ 2009-05-07 20:01 UTC (permalink / raw)
  To: Denys Fedoryschenko; +Cc: Patrick McHardy, Stephen Hemminger, netdev

On Thu, May 07, 2009 at 10:49:27PM +0300, Denys Fedoryschenko wrote:
> On Thursday 07 May 2009 22:27:02 Jarek Poplawski wrote:
> > Jarek Poplawski wrote, On 05/07/2009 08:44 PM:
> > > Do you mean 30 sec. is too short for a change? I don't know these things
> > > well enough; your idea looks very nice, but I wonder whether you have
> > > tested how it behaves if, e.g., after 15k rules some dev goes away which
> > > is used in the next 15k?
> >
> > Hmm... actually, it seems there should be no problem, except less info on
> > the reason of the failure.
> The info will be completely the same. The case of changing interfaces has to be

If a device was removed, you wouldn't get e.g. "Cannot find device ..."
from mirred, I guess.

> handled correctly anyway when batching. It is difficult to explain; each person
> builds shapers in his own way. I can even explain why caching is better in my
> case, and probably for all other properly done shapers for such cases.
> 
> But what is critical for me is that when I load the shaper, the machine is
> eating dust for 10 minutes (CPU utilisation is high, fans turning like hell
> :-))) ), and some users have bandwidth without restrictions. 30 seconds is
> much better, and there is still room for improvement.

I agree the gains look very attractive here.

Jarek P.
