From: Dominique Martinet <>
To: David Howells <>
Cc: Eric Van Hensbergen <>,
	Latchesar Ionkov <>
Subject: Re: [PATCH] 9p: Convert to new fscache API
Date: Wed, 18 Nov 2020 15:16:49 +0100	[thread overview]
Message-ID: <20201118141649.GA14211@nautica> (raw)
In-Reply-To: <>

David Howells wrote on Wed, Nov 18, 2020:
> > What's the current schedule/plan for the fscache branch merging? Will
> > you be trying this merge window next month?
> That's the aim.  afs, ceph and nfs are about ready; I've had a go at doing
> the 9p conversion, which I'll have to leave to you now, I think, and am
> having a poke at cifs.

Ok, will try to polish it up by then.
Worst case, as discussed, is to have fscache be an alias for cache=loose
until it's ready, but with the first version you gave me that hopefully
won't be needed.

> > >  (*) I have made an assumption that 9p_client_read() and write can handle I/Os
> > >      larger than a page.  If this is not the case, v9fs_req_ops will need
> > >      clamp_length() implementing.
> > 
> > There's a max driven by the client's msize
> The netfs read helpers provide you with a couple of options here:
>  (1) You can use ->clamp_length() to do multiple slices of at least 1 byte
>      each.  The assumption being that these represent parallel operations.  A
>      new subreq will be generated for each slice.
>  (2) You can go with large slices that are larger than msize, and just read
>      part of it with each read.  After reading, the netfs helper will keep
>      calling you again to read some more of it, provided you didn't return an
>      error and you at least read something.

clamp_length looks good for that; if we can get parallel requests out,
it'll all come back faster.

> > (client->msize - P9_IOHDRSZ; unfortunately msize is only guaranteed to be
> > >= 4k, so the actual IO size would be smaller in that case, even if that's
> > not intended to be common)
> Does that mean you might be limited to reads of less than PAGE_SIZE on some
> systems (ppc64 for example)?  This isn't a problem for the read helper, but
> might be an issue for writing from THPs.

Quite likely; the actual size used varies depending on the backend (64k
for tcp, 500k for virtio) but can definitely be less than PAGE_SIZE.

I take it the read helper would just iterate as long as there's data
still required to read, but writing from THPs wouldn't do that?

> > >  (*) The cache needs to be invalidated if a 3rd-party change happens, but I
> > >      haven't done that.
> > 
> > There's no concurrent access logic in 9p as far as I'm aware (like NFS
> > does if the mtime changes for example), so I assume we can keep ignoring
> > this.
> By that, I presume you mean concurrent accesses are just not permitted?

Sorry - I meant that coherency isn't guaranteed if files are modified on
multiple clients - there are voluntary locks, but that's about it; nothing
will detect e.g. remote file size modifications.
Concurrency on a given client works fine and should be used if possible.



Thread overview: 8+ messages
2020-11-18 11:02 David Howells
2020-11-18 11:43 ` David Howells
2020-11-18 12:00 ` David Howells
2020-11-18 12:48 ` Dominique Martinet
2020-11-18 13:38 ` David Howells
2020-11-18 14:16   ` Dominique Martinet [this message]
2020-11-18 15:02   ` David Howells
2020-11-18 14:59 ` David Howells
