Netfilter-Devel Archive on lore.kernel.org
From: Phil Sutter <phil@nwl.cc>
To: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: netfilter-devel@vger.kernel.org
Subject: Re: [iptables PATCH v3 04/11] nft-cache: Introduce cache levels
Date: Fri, 11 Oct 2019 13:24:52 +0200
Message-ID: <20191011112452.GS12661@orbyte.nwl.cc> (raw)
In-Reply-To: <20191011102052.77s5ujrdb3ficddo@salvia> <20191011092823.dfzjjxmmgqx63eae@salvia>

Hi,

On Fri, Oct 11, 2019 at 11:28:23AM +0200, Pablo Neira Ayuso wrote:
[...]
> You could also just parse the ruleset twice in userspace, once to
> calculate the cache you need and another to actually create the
> transaction batch and push it into the kernel. That's a bit of a poor
> man's approach, but it might work. You would need to invoke
> xtables_restore_parse() twice.

The problem with parsing twice is having to cache the input, which may
be huge for xtables-restore.

On Fri, Oct 11, 2019 at 12:20:52PM +0200, Pablo Neira Ayuso wrote:
> On Fri, Oct 11, 2019 at 12:09:11AM +0200, Phil Sutter wrote:
> [...]
> > Maybe we could go with a simpler solution for now, which is to check
> > kernel genid again and drop the local cache if it differs from what's
> > stored. If it doesn't, the current cache is still up to date and we may
> > just fetch what's missing. Or does that leave room for a race condition?
> 
> My concern with this approach is that, in the dynamic ruleset update
> scenarios, assuming very frequent updates, you might lose the race
> when building the cache in stages, forcing you to restart from
> scratch in the middle of transaction handling.

In a very busy environment there's always trouble, simply because we
can't atomically fetch the ruleset from the kernel and adjust and
submit our batch. Dealing with that means we're back at xtables-lock.

> I prefer to calculate the cache that is needed in one go by analyzing
> the batch, it's simpler. Note that we might still lose the race,
> since the kernel might tell us we're working on a cache with an
> obsolete generation number ID, forcing us to restart.

My idea for a conditional cache reset is based on the assumption that
conflicts are rare and we want to optimize for the non-conflict case.
So the core logic would be:

1) fetch kernel genid into genid_start
2) if cache level > NFT_CL_NONE and cache genid != genid_start:
   2a) drop local caches
   2b) set cache level to NFT_CL_NONE
3) call cache fetchers based on cache level and desired level
4) fetch kernel genid into genid_end
5) if genid_start != genid_end goto 1

So this is basically the old algorithm but with (2) added. What do you
think?

Thanks, Phil

Thread overview: 22+ messages
2019-10-08 16:14 [iptables PATCH v3 00/11] Improve iptables-nft performance with large rulesets Phil Sutter
2019-10-08 16:14 ` [iptables PATCH v3 01/11] nft: Pass nft_handle to flush_cache() Phil Sutter
2019-10-09  9:30   ` Pablo Neira Ayuso
2019-10-08 16:14 ` [iptables PATCH v3 02/11] nft: Avoid nested cache fetching Phil Sutter
2019-10-09  9:30   ` Pablo Neira Ayuso
2019-10-08 16:14 ` [iptables PATCH v3 03/11] nft: Extract cache routines into nft-cache.c Phil Sutter
2019-10-09  9:32   ` Pablo Neira Ayuso
2019-10-08 16:14 ` [iptables PATCH v3 04/11] nft-cache: Introduce cache levels Phil Sutter
2019-10-09  9:37   ` Pablo Neira Ayuso
2019-10-09 10:29     ` Pablo Neira Ayuso
2019-10-10 22:09       ` Phil Sutter
2019-10-11  9:28         ` Pablo Neira Ayuso
2019-10-11 11:24           ` Phil Sutter [this message]
2019-10-14 10:00             ` Pablo Neira Ayuso
2019-10-11 10:20         ` Pablo Neira Ayuso
2019-10-08 16:14 ` [iptables PATCH v3 05/11] nft-cache: Fetch only chains in nft_chain_list_get() Phil Sutter
2019-10-08 16:14 ` [iptables PATCH v3 06/11] nft-cache: Cover for multiple fetcher invocation Phil Sutter
2019-10-08 16:14 ` [iptables PATCH v3 07/11] nft-cache: Support partial cache per table Phil Sutter
2019-10-08 16:14 ` [iptables PATCH v3 08/11] nft-cache: Support partial rule cache per chain Phil Sutter
2019-10-08 16:14 ` [iptables PATCH v3 09/11] nft: Reduce cache overhead of nft_chain_builtin_init() Phil Sutter
2019-10-08 16:14 ` [iptables PATCH v3 10/11] nft: Support nft_is_table_compatible() per chain Phil Sutter
2019-10-08 16:14 ` [iptables PATCH v3 11/11] nft: Optimize flushing all chains of a table Phil Sutter

