* [PATCH] NFSv4.2: Fix an error code in nfs4_xattr_cache_init()
@ 2020-07-27 11:23 ` Dan Carpenter
  0 siblings, 0 replies; 20+ messages in thread
From: Dan Carpenter @ 2020-07-27 11:23 UTC (permalink / raw)
  To: Trond Myklebust, Frank van der Linden
  Cc: Anna Schumaker, linux-nfs, kernel-janitors

This should return -ENOMEM on failure instead of success.

Fixes: 95ad37f90c33 ("NFSv4.2: add client side xattr caching.")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
---
---
 fs/nfs/nfs42xattr.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
index 23fdab977a2a..e75c4bb70266 100644
--- a/fs/nfs/nfs42xattr.c
+++ b/fs/nfs/nfs42xattr.c
@@ -1040,8 +1040,10 @@ int __init nfs4_xattr_cache_init(void)
 		goto out2;
 
 	nfs4_xattr_cache_wq = alloc_workqueue("nfs4_xattr", WQ_MEM_RECLAIM, 0);
-	if (nfs4_xattr_cache_wq == NULL)
+	if (nfs4_xattr_cache_wq == NULL) {
+		ret = -ENOMEM;
 		goto out1;
+	}
 
 	ret = register_shrinker(&nfs4_xattr_cache_shrinker);
 	if (ret)
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH] NFSv4.2: Fix an error code in nfs4_xattr_cache_init()
@ 2020-07-27 16:34   ` Frank van der Linden
  0 siblings, 0 replies; 20+ messages in thread
From: Frank van der Linden @ 2020-07-27 16:34 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: Trond Myklebust, Anna Schumaker, linux-nfs, kernel-janitors

Hi Dan,

On Mon, Jul 27, 2020 at 02:23:44PM +0300, Dan Carpenter wrote:
> 
> 
> This should return -ENOMEM on failure instead of success.
> 
> Fixes: 95ad37f90c33 ("NFSv4.2: add client side xattr caching.")
> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
> ---
> ---
>  fs/nfs/nfs42xattr.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
> index 23fdab977a2a..e75c4bb70266 100644
> --- a/fs/nfs/nfs42xattr.c
> +++ b/fs/nfs/nfs42xattr.c
> @@ -1040,8 +1040,10 @@ int __init nfs4_xattr_cache_init(void)
>                 goto out2;
> 
>         nfs4_xattr_cache_wq = alloc_workqueue("nfs4_xattr", WQ_MEM_RECLAIM, 0);
> -       if (nfs4_xattr_cache_wq == NULL)
> +       if (nfs4_xattr_cache_wq == NULL) {
> +               ret = -ENOMEM;
>                 goto out1;
> +       }
> 
>         ret = register_shrinker(&nfs4_xattr_cache_shrinker);
>         if (ret)
> --
> 2.27.0
> 

Thanks for catching that one. Since this is against linux-next via Trond,
I assume Trond will add it to his tree (right?)

In any case:


Reviewed-by: Frank van der Linden <fllinden@amazon.com>


- Frank

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH] NFSv4.2: Fix an error code in nfs4_xattr_cache_init()
@ 2020-07-28 15:17     ` Trond Myklebust
  0 siblings, 0 replies; 20+ messages in thread
From: Trond Myklebust @ 2020-07-28 15:17 UTC (permalink / raw)
  To: fllinden, dan.carpenter; +Cc: linux-nfs, kernel-janitors, anna.schumaker

On Mon, 2020-07-27 at 16:34 +0000, Frank van der Linden wrote:
> Hi Dan,
> 
> On Mon, Jul 27, 2020 at 02:23:44PM +0300, Dan Carpenter wrote:
> > 
> > This should return -ENOMEM on failure instead of success.
> > 
> > Fixes: 95ad37f90c33 ("NFSv4.2: add client side xattr caching.")
> > Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
> > ---
> > ---
> >  fs/nfs/nfs42xattr.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
> > index 23fdab977a2a..e75c4bb70266 100644
> > --- a/fs/nfs/nfs42xattr.c
> > +++ b/fs/nfs/nfs42xattr.c
> > @@ -1040,8 +1040,10 @@ int __init nfs4_xattr_cache_init(void)
> >                 goto out2;
> > 
> >         nfs4_xattr_cache_wq = alloc_workqueue("nfs4_xattr",
> > WQ_MEM_RECLAIM, 0);
> > -       if (nfs4_xattr_cache_wq == NULL)
> > +       if (nfs4_xattr_cache_wq == NULL) {
> > +               ret = -ENOMEM;
> >                 goto out1;
> > +       }
> > 
> >         ret = register_shrinker(&nfs4_xattr_cache_shrinker);
> >         if (ret)
> > --
> > 2.27.0
> > 
> 
> Thanks for catching that one. Since this is against linux-next via
> Trond,
> I assume Trond will add it to his tree (right?)
> 
> In any case:
> 
> 
> Reviewed-by: Frank van der Linden <fllinden@amazon.com>
> 
> 
> - Frank

Frank, why do we need a workqueue here at all?

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH] NFSv4.2: Fix an error code in nfs4_xattr_cache_init()
@ 2020-07-28 16:09       ` Frank van der Linden
  0 siblings, 0 replies; 20+ messages in thread
From: Frank van der Linden @ 2020-07-28 16:09 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: dan.carpenter, linux-nfs, kernel-janitors, anna.schumaker

Hi Trond,

On Tue, Jul 28, 2020 at 03:17:12PM +0000, Trond Myklebust wrote:
> 
> On Mon, 2020-07-27 at 16:34 +0000, Frank van der Linden wrote:
> > Hi Dan,
> >
> > On Mon, Jul 27, 2020 at 02:23:44PM +0300, Dan Carpenter wrote:
> > >
> > > This should return -ENOMEM on failure instead of success.
> > >
> > > Fixes: 95ad37f90c33 ("NFSv4.2: add client side xattr caching.")
> > > Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
> > > ---
> > > ---
> > >  fs/nfs/nfs42xattr.c | 4 +++-
> > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
> > > index 23fdab977a2a..e75c4bb70266 100644
> > > --- a/fs/nfs/nfs42xattr.c
> > > +++ b/fs/nfs/nfs42xattr.c
> > > @@ -1040,8 +1040,10 @@ int __init nfs4_xattr_cache_init(void)
> > >                 goto out2;
> > >
> > >         nfs4_xattr_cache_wq = alloc_workqueue("nfs4_xattr",
> > > WQ_MEM_RECLAIM, 0);
> > > -       if (nfs4_xattr_cache_wq == NULL)
> > > +       if (nfs4_xattr_cache_wq == NULL) {
> > > +               ret = -ENOMEM;
> > >                 goto out1;
> > > +       }
> > >
> > >         ret = register_shrinker(&nfs4_xattr_cache_shrinker);
> > >         if (ret)
> > > --
> > > 2.27.0
> > >
> >
> > Thanks for catching that one. Since this is against linux-next via
> > Trond,
> > I assume Trond will add it to his tree (right?)
> >
> > In any case:
> >
> >
> > Reviewed-by: Frank van der Linden <fllinden@amazon.com>
> >
> >
> > - Frank
> 
> Frank, why do we need a workqueue here at all?

The xattr caches are per-inode, and get created on demand. Invalidating
a cache is done by setting the invalidate flag (as it is for other
cached attributes and data).

When nfs4_xattr_get_cache() sees an invalidated cache, it will just unlink it
from the inode, and create a new one if needed.

The old cache then still needs to be freed. Theoretically, there can be
quite a few entries in it, and nfs4_xattr_get_cache() will be called in
the get/setxattr system call path. So my reasoning here was that it's better
to use a workqueue to free the old invalidated cache instead of wasting
cycles in the I/O path.

- Frank

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH] NFSv4.2: Fix an error code in nfs4_xattr_cache_init()
@ 2020-07-28 17:10         ` Trond Myklebust
  0 siblings, 0 replies; 20+ messages in thread
From: Trond Myklebust @ 2020-07-28 17:10 UTC (permalink / raw)
  To: fllinden; +Cc: dan.carpenter, linux-nfs, kernel-janitors, anna.schumaker

On Tue, 2020-07-28 at 16:09 +0000, Frank van der Linden wrote:
> Hi Trond,
> 
> On Tue, Jul 28, 2020 at 03:17:12PM +0000, Trond Myklebust wrote:
> > On Mon, 2020-07-27 at 16:34 +0000, Frank van der Linden wrote:
> > > Hi Dan,
> > > 
> > > On Mon, Jul 27, 2020 at 02:23:44PM +0300, Dan Carpenter wrote:
> > > > This should return -ENOMEM on failure instead of success.
> > > > 
> > > > Fixes: 95ad37f90c33 ("NFSv4.2: add client side xattr caching.")
> > > > Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
> > > > ---
> > > > ---
> > > >  fs/nfs/nfs42xattr.c | 4 +++-
> > > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
> > > > index 23fdab977a2a..e75c4bb70266 100644
> > > > --- a/fs/nfs/nfs42xattr.c
> > > > +++ b/fs/nfs/nfs42xattr.c
> > > > @@ -1040,8 +1040,10 @@ int __init nfs4_xattr_cache_init(void)
> > > >                 goto out2;
> > > > 
> > > >         nfs4_xattr_cache_wq = alloc_workqueue("nfs4_xattr",
> > > > WQ_MEM_RECLAIM, 0);
> > > > -       if (nfs4_xattr_cache_wq == NULL)
> > > > +       if (nfs4_xattr_cache_wq == NULL) {
> > > > +               ret = -ENOMEM;
> > > >                 goto out1;
> > > > +       }
> > > > 
> > > >         ret = register_shrinker(&nfs4_xattr_cache_shrinker);
> > > >         if (ret)
> > > > --
> > > > 2.27.0
> > > > 
> > > 
> > > Thanks for catching that one. Since this is against linux-next
> > > via
> > > Trond,
> > > I assume Trond will add it to his tree (right?)
> > > 
> > > In any case:
> > > 
> > > 
> > > Reviewed-by: Frank van der Linden <fllinden@amazon.com>
> > > 
> > > 
> > > - Frank
> > 
> > Frank, why do we need a workqueue here at all?
> 
> The xattr caches are per-inode, and get created on demand.
> Invalidating
> a cache is done by setting the invalidate flag (as it is for other
> cached attributes and data).
> 
> When nfs4_xattr_get_cache() sees an invalidated cache, it will just
> unlink it
> from the inode, and create a new one if needed.
> 
> The old cache then still needs to be freed. Theoretically, there can
> be
> quite a few entries in it, and nfs4_xattr_get_cache() will be called
> in
> the get/setxattr systemcall path. So my reasoning here was that it's
> better
> to use a workqueue to free the old invalidated cache instead of
> wasting
> cycles in the I/O path.
> 
> - Frank

I think we might want to explore the reasons for this argument. We do
not offload any other cache invalidations, and that includes the case
when we have to invalidate the entire inode data cache before reading.

So what is special about xattrs that causes invalidation to be a
problem in the I/O path? Why do we expect them to grow so large that
they are more unwieldy than the inode data cache?

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH] NFSv4.2: Fix an error code in nfs4_xattr_cache_init()
@ 2020-07-28 18:00           ` Frank van der Linden
  0 siblings, 0 replies; 20+ messages in thread
From: Frank van der Linden @ 2020-07-28 18:00 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: dan.carpenter, linux-nfs, kernel-janitors, anna.schumaker

On Tue, Jul 28, 2020 at 05:10:34PM +0000, Trond Myklebust wrote:
> On Tue, 2020-07-28 at 16:09 +0000, Frank van der Linden wrote:
> > Hi Trond,
> >
> > On Tue, Jul 28, 2020 at 03:17:12PM +0000, Trond Myklebust wrote:
> > > On Mon, 2020-07-27 at 16:34 +0000, Frank van der Linden wrote:
> > > > Hi Dan,
> > > >
> > > > On Mon, Jul 27, 2020 at 02:23:44PM +0300, Dan Carpenter wrote:
> > > > > This should return -ENOMEM on failure instead of success.
> > > > >
> > > > > Fixes: 95ad37f90c33 ("NFSv4.2: add client side xattr caching.")
> > > > > Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
> > > > > ---
> > > > > ---
> > > > >  fs/nfs/nfs42xattr.c | 4 +++-
> > > > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
> > > > > index 23fdab977a2a..e75c4bb70266 100644
> > > > > --- a/fs/nfs/nfs42xattr.c
> > > > > +++ b/fs/nfs/nfs42xattr.c
> > > > > @@ -1040,8 +1040,10 @@ int __init nfs4_xattr_cache_init(void)
> > > > >                 goto out2;
> > > > >
> > > > >         nfs4_xattr_cache_wq = alloc_workqueue("nfs4_xattr",
> > > > > WQ_MEM_RECLAIM, 0);
> > > > > -       if (nfs4_xattr_cache_wq == NULL)
> > > > > +       if (nfs4_xattr_cache_wq == NULL) {
> > > > > +               ret = -ENOMEM;
> > > > >                 goto out1;
> > > > > +       }
> > > > >
> > > > >         ret = register_shrinker(&nfs4_xattr_cache_shrinker);
> > > > >         if (ret)
> > > > > --
> > > > > 2.27.0
> > > > >
> > > >
> > > > Thanks for catching that one. Since this is against linux-next
> > > > via
> > > > Trond,
> > > > I assume Trond will add it to his tree (right?)
> > > >
> > > > In any case:
> > > >
> > > >
> > > > Reviewed-by: Frank van der Linden <fllinden@amazon.com>
> > > >
> > > >
> > > > - Frank
> > >
> > > Frank, why do we need a workqueue here at all?
> >
> > The xattr caches are per-inode, and get created on demand.
> > Invalidating
> > a cache is done by setting the invalidate flag (as it is for other
> > cached attributes and data).
> >
> > When nfs4_xattr_get_cache() sees an invalidated cache, it will just
> > unlink it
> > from the inode, and create a new one if needed.
> >
> > The old cache then still needs to be freed. Theoretically, there can
> > be
> > quite a few entries in it, and nfs4_xattr_get_cache() will be called
> > in
> > the get/setxattr system call path. So my reasoning here was that it's
> > better
> > to use a workqueue to free the old invalidated cache instead of
> > wasting
> > cycles in the I/O path.
> >
> > - Frank
> 
> I think we might want to explore the reasons for this argument. We do
> not offload any other cache invalidations, and that includes the case
> when we have to invalidate the entire inode data cache before reading.
> 
> So what is special about xattrs that causes invalidation to be a
> problem in the I/O path? Why do we expect them to grow so large that
> they are more unwieldy than the inode data cache?

In the case of inode data, you should probably invalidate it
immediately, or accept that you're serving up known-stale data. So
offloading it doesn't seem like a good idea, and you'll just have to accept
the extra cycles you're using to do it.

For this particular case, you're just reaping a cache that is no longer
being used. There is no correctness gain in doing it in the I/O path -
the cache has already been orphaned and new getxattr/listxattr calls
will not see it. So there doesn't seem to be a reason to do it in the
I/O path at all.

The caches shouldn't become very large, no. In the normal case, there
shouldn't be much of a performance difference.

Then again, what do you gain by doing the reaping of the cache in the I/O path,
instead of using a work queue? I concluded that there wasn't an upside, only
a downside, so that's why I implemented it that way.

If you think it's better to do it inline, I'm happy to change it, of course.
It would just mean getting rid of the work queue and the reap_cache function,
and calling discard_cache directly, instead of reap_cache.

- Frank

^ permalink raw reply	[flat|nested] 20+ messages in thread


* Re: [PATCH] NFSv4.2: Fix an error code in nfs4_xattr_cache_init()
@ 2020-07-28 18:04             ` Trond Myklebust
  0 siblings, 0 replies; 20+ messages in thread
From: Trond Myklebust @ 2020-07-28 18:04 UTC (permalink / raw)
  To: fllinden; +Cc: dan.carpenter, linux-nfs, kernel-janitors, anna.schumaker

On Tue, 2020-07-28 at 18:00 +0000, Frank van der Linden wrote:
> On Tue, Jul 28, 2020 at 05:10:34PM +0000, Trond Myklebust wrote:
> > On Tue, 2020-07-28 at 16:09 +0000, Frank van der Linden wrote:
> > > Hi Trond,
> > > 
> > > On Tue, Jul 28, 2020 at 03:17:12PM +0000, Trond Myklebust wrote:
> > > > On Mon, 2020-07-27 at 16:34 +0000, Frank van der Linden wrote:
> > > > > Hi Dan,
> > > > > 
> > > > > On Mon, Jul 27, 2020 at 02:23:44PM +0300, Dan Carpenter
> > > > > wrote:
> > > > > > This should return -ENOMEM on failure instead of success.
> > > > > > 
> > > > > > Fixes: 95ad37f90c33 ("NFSv4.2: add client side xattr
> > > > > > caching.")
> > > > > > Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
> > > > > > ---
> > > > > > ---
> > > > > >  fs/nfs/nfs42xattr.c | 4 +++-
> > > > > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > > > > > 
> > > > > > diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
> > > > > > index 23fdab977a2a..e75c4bb70266 100644
> > > > > > --- a/fs/nfs/nfs42xattr.c
> > > > > > +++ b/fs/nfs/nfs42xattr.c
> > > > > > @@ -1040,8 +1040,10 @@ int __init
> > > > > > nfs4_xattr_cache_init(void)
> > > > > >                 goto out2;
> > > > > > 
> > > > > >         nfs4_xattr_cache_wq = alloc_workqueue("nfs4_xattr",
> > > > > > WQ_MEM_RECLAIM, 0);
> > > > > > -       if (nfs4_xattr_cache_wq == NULL)
> > > > > > +       if (nfs4_xattr_cache_wq == NULL) {
> > > > > > +               ret = -ENOMEM;
> > > > > >                 goto out1;
> > > > > > +       }
> > > > > > 
> > > > > >         ret =
> > > > > > register_shrinker(&nfs4_xattr_cache_shrinker);
> > > > > >         if (ret)
> > > > > > --
> > > > > > 2.27.0
> > > > > > 
> > > > > 
> > > > > Thanks for catching that one. Since this is against linux-
> > > > > next
> > > > > via
> > > > > Trond,
> > > > > I assume Trond will add it to his tree (right?)
> > > > > 
> > > > > In any case:
> > > > > 
> > > > > 
> > > > > Reviewed-by: Frank van der Linden <fllinden@amazon.com>
> > > > > 
> > > > > 
> > > > > - Frank
> > > > 
> > > > Frank, why do we need a workqueue here at all?
> > > 
> > > The xattr caches are per-inode, and get created on demand.
> > > Invalidating
> > > a cache is done by setting the invalidate flag (as it is for
> > > other
> > > cached attributes and data).
> > > 
> > > When nfs4_xattr_get_cache() sees an invalidated cache, it will
> > > just
> > > unlink it
> > > from the inode, and create a new one if needed.
> > > 
> > > The old cache then still needs to be freed. Theoretically, there
> > > can
> > > be
> > > quite a few entries in it, and nfs4_xattr_get_cache() will be
> > > called
> > > in
> > > the get/setxattr system call path. So my reasoning here was that
> > > it's
> > > better
> > > to use a workqueue to free the old invalidated cache instead of
> > > wasting
> > > cycles in the I/O path.
> > > 
> > > - Frank
> > 
> > I think we might want to explore the reasons for this argument. We
> > do
> > not offload any other cache invalidations, and that includes the
> > case
> > when we have to invalidate the entire inode data cache before
> > reading.
> > 
> > So what is special about xattrs that causes invalidation to be a
> > problem in the I/O path? Why do we expect them to grow so large
> > that
> > they are more unwieldy than the inode data cache?
> 
> In the case of inode data, so you should probably invalidate it
> immediately, or accept that you're serving up known-stale data. So
> offloading it doesn't seem like a good idea, and you'll just have to
> accept
> the extra cycles you're using to do it.
> 
> For this particular case, you're just reaping a cache that is no
> longer
> being used. There is no correctness gain in doing it in the I/O path
> -
> the cache has already been orphaned and new getxattr/listxattr calls
> will not see it. So there doesn't seem to be a reason to do it in the
> I/O path at all.
> 
> The caches shouldn't become very large, no. In the normal case, there
> shouldn't be much of a performance difference.
> 
> Then again, what do you gain by doing the reaping of the cache in the
> I/O path,
> instead of using a work queue? I concluded that there wasn't an
> upside, only
> a downside, so that's why I implemented it that way.
> 
> If you think it's better to do it inline, I'm happy to change it, of
> course.
> It would just mean getting rid of the work queue and the reap_cache
> function,
> and calling discard_cache directly, instead of reap_cache.
> 
> - Frank

I think we should start with doing the freeing of the old cache inline.
If it turns out to be a real performance problem, then we can later
revisit using a work queue; however, in that case, I'd prefer to use
nfsiod rather than adding a special workqueue that is reserved for
xattrs.

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH] NFSv4.2: Fix an error code in nfs4_xattr_cache_init()
  2020-07-28 18:04             ` Trond Myklebust
@ 2020-07-28 18:13               ` Frank van der Linden
  -1 siblings, 0 replies; 20+ messages in thread
From: Frank van der Linden @ 2020-07-28 18:13 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: dan.carpenter, linux-nfs, kernel-janitors, anna.schumaker

On Tue, Jul 28, 2020 at 06:04:21PM +0000, Trond Myklebust wrote:
> On Tue, 2020-07-28 at 18:00 +0000, Frank van der Linden wrote:
> > On Tue, Jul 28, 2020 at 05:10:34PM +0000, Trond Myklebust wrote:
> > > On Tue, 2020-07-28 at 16:09 +0000, Frank van der Linden wrote:
> > > > Hi Trond,
> > > >
> > > > On Tue, Jul 28, 2020 at 03:17:12PM +0000, Trond Myklebust wrote:
> > > > > On Mon, 2020-07-27 at 16:34 +0000, Frank van der Linden wrote:
> > > > > > Hi Dan,
> > > > > >
> > > > > > On Mon, Jul 27, 2020 at 02:23:44PM +0300, Dan Carpenter
> > > > > > wrote:
> > > > > > > This should return -ENOMEM on failure instead of success.
> > > > > > >
> > > > > > > Fixes: 95ad37f90c33 ("NFSv4.2: add client side xattr
> > > > > > > caching.")
> > > > > > > Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
> > > > > > > ---
> > > > > > > ---
> > > > > > >  fs/nfs/nfs42xattr.c | 4 +++-
> > > > > > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > > > > > >
> > > > > > > diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
> > > > > > > index 23fdab977a2a..e75c4bb70266 100644
> > > > > > > --- a/fs/nfs/nfs42xattr.c
> > > > > > > +++ b/fs/nfs/nfs42xattr.c
> > > > > > > @@ -1040,8 +1040,10 @@ int __init
> > > > > > > nfs4_xattr_cache_init(void)
> > > > > > >                 goto out2;
> > > > > > >
> > > > > > >         nfs4_xattr_cache_wq = alloc_workqueue("nfs4_xattr",
> > > > > > > WQ_MEM_RECLAIM, 0);
> > > > > > > > -       if (nfs4_xattr_cache_wq == NULL)
> > > > > > > > +       if (nfs4_xattr_cache_wq == NULL) {
> > > > > > > +               ret = -ENOMEM;
> > > > > > >                 goto out1;
> > > > > > > +       }
> > > > > > >
> > > > > > > >         ret =
> > > > > > > > register_shrinker(&nfs4_xattr_cache_shrinker);
> > > > > > >         if (ret)
> > > > > > > --
> > > > > > > 2.27.0
> > > > > > >
> > > > > >
> > > > > > Thanks for catching that one. Since this is against linux-
> > > > > > next
> > > > > > via
> > > > > > Trond,
> > > > > > I assume Trond will add it to his tree (right?)
> > > > > >
> > > > > > In any case:
> > > > > >
> > > > > >
> > > > > > Reviewed-by: Frank van der Linden <fllinden@amazon.com>
> > > > > >
> > > > > >
> > > > > > - Frank
> > > > >
> > > > > Frank, why do we need a workqueue here at all?
> > > >
> > > > The xattr caches are per-inode, and get created on demand.
> > > > Invalidating
> > > > a cache is done by setting the invalidate flag (as it is for
> > > > other
> > > > cached attributes and data).
> > > >
> > > > When nfs4_xattr_get_cache() sees an invalidated cache, it will
> > > > just
> > > > unlink it
> > > > from the inode, and create a new one if needed.
> > > >
> > > > The old cache then still needs to be freed. Theoretically, there
> > > > can
> > > > be
> > > > quite a few entries in it, and nfs4_xattr_get_cache() will be
> > > > called
> > > > in
> > > > the get/setxattr system call path. So my reasoning here was that
> > > > it's
> > > > better
> > > > to use a workqueue to free the old invalidated cache instead of
> > > > wasting
> > > > cycles in the I/O path.
> > > >
> > > > - Frank
> > >
> > > I think we might want to explore the reasons for this argument. We
> > > do
> > > not offload any other cache invalidations, and that includes the
> > > case
> > > when we have to invalidate the entire inode data cache before
> > > reading.
> > >
> > > So what is special about xattrs that causes invalidation to be a
> > > problem in the I/O path? Why do we expect them to grow so large
> > > that
> > > they are more unwieldy than the inode data cache?
> >
> > In the case of inode data, so you should probably invalidate it
> > immediately, or accept that you're serving up known-stale data. So
> > offloading it doesn't seem like a good idea, and you'll just have to
> > accept
> > the extra cycles you're using to do it.
> >
> > For this particular case, you're just reaping a cache that is no
> > longer
> > being used. There is no correctness gain in doing it in the I/O path
> > -
> > the cache has already been orphaned and new getxattr/listxattr calls
> > will not see it. So there doesn't seem to be a reason to do it in the
> > I/O path at all.
> >
> > The caches shouldn't become very large, no. In the normal case, there
> > shouldn't be much of a performance difference.
> >
> > Then again, what do you gain by doing the reaping of the cache in the
> > I/O path,
> > instead of using a work queue? I concluded that there wasn't an
> > upside, only
> > a downside, so that's why I implemented it that way.
> >
> > If you think it's better to do it inline, I'm happy to change it, of
> > course.
> > It would just mean getting rid of the work queue and the reap_cache
> > function,
> > and calling discard_cache directly, instead of reap_cache.
> >
> > - Frank
> 
> I think we should start with doing the freeing of the old cache inline.
> If it turns out to be a real performance problem, then we can later
> revisit using a work queue, however in that case, I'd prefer to use
> nfsiod rather than adding a special workqueue that is reserved for
> xattrs.

Sure, I can do that.

Do you want me to send a new version of the patch series, or an
incremental patch?

- Frank

^ permalink raw reply	[flat|nested] 20+ messages in thread



* Re: [PATCH] NFSv4.2: Fix an error code in nfs4_xattr_cache_init()
@ 2020-07-28 18:21                 ` Trond Myklebust
  0 siblings, 0 replies; 20+ messages in thread
From: Trond Myklebust @ 2020-07-28 18:21 UTC (permalink / raw)
  To: fllinden; +Cc: dan.carpenter, linux-nfs, kernel-janitors, anna.schumaker

On Tue, 2020-07-28 at 18:13 +0000, Frank van der Linden wrote:
> On Tue, Jul 28, 2020 at 06:04:21PM +0000, Trond Myklebust wrote:
> > On Tue, 2020-07-28 at 18:00 +0000, Frank van der Linden wrote:
> > > On Tue, Jul 28, 2020 at 05:10:34PM +0000, Trond Myklebust wrote:
> > > > On Tue, 2020-07-28 at 16:09 +0000, Frank van der Linden wrote:
> > > > > Hi Trond,
> > > > > 
> > > > > On Tue, Jul 28, 2020 at 03:17:12PM +0000, Trond Myklebust
> > > > > wrote:
> > > > > > On Mon, 2020-07-27 at 16:34 +0000, Frank van der Linden
> > > > > > wrote:
> > > > > > > Hi Dan,
> > > > > > > 
> > > > > > > On Mon, Jul 27, 2020 at 02:23:44PM +0300, Dan Carpenter
> > > > > > > wrote:
> > > > > > > > This should return -ENOMEM on failure instead of
> > > > > > > > success.
> > > > > > > > 
> > > > > > > > Fixes: 95ad37f90c33 ("NFSv4.2: add client side xattr
> > > > > > > > caching.")
> > > > > > > > Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
> > > > > > > > ---
> > > > > > > > ---
> > > > > > > >  fs/nfs/nfs42xattr.c | 4 +++-
> > > > > > > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > > > > > > > 
> > > > > > > > diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
> > > > > > > > index 23fdab977a2a..e75c4bb70266 100644
> > > > > > > > --- a/fs/nfs/nfs42xattr.c
> > > > > > > > +++ b/fs/nfs/nfs42xattr.c
> > > > > > > > @@ -1040,8 +1040,10 @@ int __init
> > > > > > > > nfs4_xattr_cache_init(void)
> > > > > > > >                 goto out2;
> > > > > > > > 
> > > > > > > >         nfs4_xattr_cache_wq =
> > > > > > > > alloc_workqueue("nfs4_xattr",
> > > > > > > > WQ_MEM_RECLAIM, 0);
> > > > > > > > -       if (nfs4_xattr_cache_wq == NULL)
> > > > > > > > +       if (nfs4_xattr_cache_wq == NULL) {
> > > > > > > > +               ret = -ENOMEM;
> > > > > > > >                 goto out1;
> > > > > > > > +       }
> > > > > > > > 
> > > > > > > >         ret =
> > > > > > > > register_shrinker(&nfs4_xattr_cache_shrinker);
> > > > > > > >         if (ret)
> > > > > > > > --
> > > > > > > > 2.27.0
> > > > > > > > 
> > > > > > > 
> > > > > > > Thanks for catching that one. Since this is against
> > > > > > > linux-
> > > > > > > next
> > > > > > > via
> > > > > > > Trond,
> > > > > > > I assume Trond will add it to his tree (right?)
> > > > > > > 
> > > > > > > In any case:
> > > > > > > 
> > > > > > > 
> > > > > > > Reviewed-by: Frank van der Linden <fllinden@amazon.com>
> > > > > > > 
> > > > > > > 
> > > > > > > - Frank
> > > > > > 
> > > > > > Frank, why do we need a workqueue here at all?
> > > > > 
> > > > > The xattr caches are per-inode, and get created on demand.
> > > > > Invalidating
> > > > > a cache is done by setting the invalidate flag (as it is for
> > > > > other
> > > > > cached attributes and data).
> > > > > 
> > > > > When nfs4_xattr_get_cache() sees an invalidated cache, it
> > > > > will
> > > > > just
> > > > > unlink it
> > > > > from the inode, and create a new one if needed.
> > > > > 
> > > > > The old cache then still needs to be freed. Theoretically,
> > > > > there
> > > > > can
> > > > > be
> > > > > quite a few entries in it, and nfs4_xattr_get_cache() will be
> > > > > called
> > > > > in
> > > > > the get/setxattr system call path. So my reasoning here was
> > > > > that
> > > > > it's
> > > > > better
> > > > > to use a workqueue to free the old invalidated cache instead
> > > > > of
> > > > > wasting
> > > > > cycles in the I/O path.
> > > > > 
> > > > > - Frank
> > > > 
> > > > I think we might want to explore the reasons for this argument.
> > > > We
> > > > do
> > > > not offload any other cache invalidations, and that includes
> > > > the
> > > > case
> > > > when we have to invalidate the entire inode data cache before
> > > > reading.
> > > > 
> > > > So what is special about xattrs that causes invalidation to be
> > > > a
> > > > problem in the I/O path? Why do we expect them to grow so large
> > > > that
> > > > they are more unwieldy than the inode data cache?
> > > 
> > > In the case of inode data, you should probably invalidate it
> > > immediately, or accept that you're serving up known-stale data.
> > > So
> > > offloading it doesn't seem like a good idea, and you'll just have
> > > to
> > > accept
> > > the extra cycles you're using to do it.
> > > 
> > > For this particular case, you're just reaping a cache that is no
> > > longer
> > > being used. There is no correctness gain in doing it in the I/O
> > > path
> > > -
> > > the cache has already been orphaned and new getxattr/listxattr
> > > calls
> > > will not see it. So there doesn't seem to be a reason to do it in
> > > the
> > > I/O path at all.
> > > 
> > > The caches shouldn't become very large, no. In the normal case,
> > > there
> > > shouldn't be much of a performance difference.
> > > 
> > > Then again, what do you gain by doing the reaping of the cache in
> > > the
> > > I/O path,
> > > instead of using a work queue? I concluded that there wasn't an
> > > upside, only
> > > a downside, so that's why I implemented it that way.
> > > 
> > > If you think it's better to do it inline, I'm happy to change it,
> > > of
> > > course.
> > > It would just mean getting rid of the work queue and the
> > > reap_cache
> > > function,
> > > and calling discard_cache directly, instead of reap_cache.
> > > 
> > > - Frank
> > 
> > I think we should start with doing the freeing of the old cache
> > inline.
> > If it turns out to be a real performance problem, then we can later
> > revisit using a work queue, however in that case, I'd prefer to use
> > nfsiod rather than adding a special workqueue that is reserved for
> > xattrs.
> 
> Sure, I can do that.
> 
> Do you want me to send a new version of the patch series, or an
> incremental patch?

Please just send as an incremental patch.

Thanks!

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH] NFSv4.2: Fix an error code in nfs4_xattr_cache_init()
  2020-07-28 18:21                 ` Trond Myklebust
@ 2020-07-28 20:18                   ` Frank van der Linden
  -1 siblings, 0 replies; 20+ messages in thread
From: Frank van der Linden @ 2020-07-28 20:18 UTC (permalink / raw)
  To: Trond Myklebust; +Cc: dan.carpenter, linux-nfs, kernel-janitors, anna.schumaker

On Tue, Jul 28, 2020 at 06:21:19PM +0000, Trond Myklebust wrote:
> 
> On Tue, 2020-07-28 at 18:13 +0000, Frank van der Linden wrote:
> > On Tue, Jul 28, 2020 at 06:04:21PM +0000, Trond Myklebust wrote:
> > > On Tue, 2020-07-28 at 18:00 +0000, Frank van der Linden wrote:
> > > > On Tue, Jul 28, 2020 at 05:10:34PM +0000, Trond Myklebust wrote:
> > > > > On Tue, 2020-07-28 at 16:09 +0000, Frank van der Linden wrote:
> > > > > > Hi Trond,
> > > > > >
> > > > > > On Tue, Jul 28, 2020 at 03:17:12PM +0000, Trond Myklebust
> > > > > > wrote:
> > > > > > > On Mon, 2020-07-27 at 16:34 +0000, Frank van der Linden
> > > > > > > wrote:
> > > > > > > > Hi Dan,
> > > > > > > >
> > > > > > > > On Mon, Jul 27, 2020 at 02:23:44PM +0300, Dan Carpenter
> > > > > > > > wrote:
> > > > > > > > > This should return -ENOMEM on failure instead of
> > > > > > > > > success.
> > > > > > > > >
> > > > > > > > > Fixes: 95ad37f90c33 ("NFSv4.2: add client side xattr
> > > > > > > > > caching.")
> > > > > > > > > Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
> > > > > > > > > ---
> > > > > > > > > ---
> > > > > > > > >  fs/nfs/nfs42xattr.c | 4 +++-
> > > > > > > > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > > > > > > > >
> > > > > > > > > diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
> > > > > > > > > index 23fdab977a2a..e75c4bb70266 100644
> > > > > > > > > --- a/fs/nfs/nfs42xattr.c
> > > > > > > > > +++ b/fs/nfs/nfs42xattr.c
> > > > > > > > > @@ -1040,8 +1040,10 @@ int __init
> > > > > > > > > nfs4_xattr_cache_init(void)
> > > > > > > > >                 goto out2;
> > > > > > > > >
> > > > > > > > > >         nfs4_xattr_cache_wq =
> > > > > > > > > > alloc_workqueue("nfs4_xattr",
> > > > > > > > > > WQ_MEM_RECLAIM, 0);
> > > > > > > > > > -       if (nfs4_xattr_cache_wq == NULL)
> > > > > > > > > > +       if (nfs4_xattr_cache_wq == NULL) {
> > > > > > > > > > +               ret = -ENOMEM;
> > > > > > > > > >                 goto out1;
> > > > > > > > > > +       }
> > > > > > > > > >
> > > > > > > > > >         ret =
> > > > > > > > > > register_shrinker(&nfs4_xattr_cache_shrinker);
> > > > > > > > >         if (ret)
> > > > > > > > > --
> > > > > > > > > 2.27.0
> > > > > > > > >
> > > > > > > >
> > > > > > > > Thanks for catching that one. Since this is against
> > > > > > > > linux-
> > > > > > > > next
> > > > > > > > via
> > > > > > > > Trond,
> > > > > > > > I assume Trond will add it to his tree (right?)
> > > > > > > >
> > > > > > > > In any case:
> > > > > > > >
> > > > > > > >
> > > > > > > > Reviewed-by: Frank van der Linden <fllinden@amazon.com>
> > > > > > > >
> > > > > > > >
> > > > > > > > - Frank
> > > > > > >
> > > > > > > Frank, why do we need a workqueue here at all?
> > > > > >
> > > > > > The xattr caches are per-inode, and get created on demand.
> > > > > > Invalidating
> > > > > > a cache is done by setting the invalidate flag (as it is for
> > > > > > other
> > > > > > cached attributes and data).
> > > > > >
> > > > > > When nfs4_xattr_get_cache() sees an invalidated cache, it
> > > > > > will
> > > > > > just
> > > > > > unlink it
> > > > > > from the inode, and create a new one if needed.
> > > > > >
> > > > > > The old cache then still needs to be freed. Theoretically,
> > > > > > there
> > > > > > can
> > > > > > be
> > > > > > quite a few entries in it, and nfs4_xattr_get_cache() will be
> > > > > > called
> > > > > > in
> > > > > > the get/setxattr system call path. So my reasoning here was
> > > > > > that
> > > > > > it's
> > > > > > better
> > > > > > to use a workqueue to free the old invalidated cache instead
> > > > > > of
> > > > > > wasting
> > > > > > cycles in the I/O path.
> > > > > >
> > > > > > - Frank
> > > > >
> > > > > I think we might want to explore the reasons for this argument.
> > > > > We
> > > > > do
> > > > > not offload any other cache invalidations, and that includes
> > > > > the
> > > > > case
> > > > > when we have to invalidate the entire inode data cache before
> > > > > reading.
> > > > >
> > > > > So what is special about xattrs that causes invalidation to be
> > > > > a
> > > > > problem in the I/O path? Why do we expect them to grow so large
> > > > > that
> > > > > they are more unwieldy than the inode data cache?
> > > >
> > > > In the case of inode data, you should probably invalidate it
> > > > immediately, or accept that you're serving up known-stale data.
> > > > So
> > > > offloading it doesn't seem like a good idea, and you'll just have
> > > > to
> > > > accept
> > > > the extra cycles you're using to do it.
> > > >
> > > > For this particular case, you're just reaping a cache that is no
> > > > longer
> > > > being used. There is no correctness gain in doing it in the I/O
> > > > path
> > > > -
> > > > the cache has already been orphaned and new getxattr/listxattr
> > > > calls
> > > > will not see it. So there doesn't seem to be a reason to do it in
> > > > the
> > > > I/O path at all.
> > > >
> > > > The caches shouldn't become very large, no. In the normal case,
> > > > there
> > > > shouldn't be much of a performance difference.
> > > >
> > > > Then again, what do you gain by doing the reaping of the cache in
> > > > the
> > > > I/O path,
> > > > instead of using a work queue? I concluded that there wasn't an
> > > > upside, only
> > > > a downside, so that's why I implemented it that way.
> > > >
> > > > If you think it's better to do it inline, I'm happy to change it,
> > > > of
> > > > course.
> > > > It would just mean getting rid of the work queue and the
> > > > reap_cache
> > > > function,
> > > > and calling discard_cache directly, instead of reap_cache.
> > > >
> > > > - Frank
> > >
> > > I think we should start with doing the freeing of the old cache
> > > inline.
> > > If it turns out to be a real performance problem, then we can later
> > > revisit using a work queue, however in that case, I'd prefer to use
> > > nfsiod rather than adding a special workqueue that is reserved for
> > > xattrs.
> >
> > Sure, I can do that.
> >
> > Do you want me to send a new version of the patch series, or an
> > incremental patch?
> 
> Please just send as an incremental patch.
> 
> Thanks!

Made the change, re-tested it. Patch sent - thanks!

- Frank

^ permalink raw reply	[flat|nested] 20+ messages in thread


end of thread, other threads:[~2020-07-28 20:19 UTC | newest]

Thread overview: 20+ messages
-- links below jump to the message on this page --
2020-07-27 11:23 [PATCH] NFSv4.2: Fix an error code in nfs4_xattr_cache_init() Dan Carpenter
2020-07-27 11:23 ` Dan Carpenter
2020-07-27 16:34 ` Frank van der Linden
2020-07-27 16:34   ` Frank van der Linden
2020-07-28 15:17   ` Trond Myklebust
2020-07-28 15:17     ` Trond Myklebust
2020-07-28 16:09     ` Frank van der Linden
2020-07-28 16:09       ` Frank van der Linden
2020-07-28 17:10       ` Trond Myklebust
2020-07-28 17:10         ` Trond Myklebust
2020-07-28 18:00         ` Frank van der Linden
2020-07-28 18:00           ` Frank van der Linden
2020-07-28 18:04           ` Trond Myklebust
2020-07-28 18:04             ` Trond Myklebust
2020-07-28 18:13             ` Frank van der Linden
2020-07-28 18:13               ` Frank van der Linden
2020-07-28 18:21               ` Trond Myklebust
2020-07-28 18:21                 ` Trond Myklebust
2020-07-28 20:18                 ` Frank van der Linden
2020-07-28 20:18                   ` Frank van der Linden
