ceph-devel.vger.kernel.org archive mirror
From: Jeff Layton <jlayton@kernel.org>
To: xiubli@redhat.com
Cc: idryomov@gmail.com, pdonnell@redhat.com, ukernel@gmail.com,
	ceph-devel@vger.kernel.org
Subject: Re: [PATCH] ceph: make the lost+found dir accessible by kernel client
Date: Mon, 19 Apr 2021 12:09:17 -0400	[thread overview]
Message-ID: <02cc34a899aab7169ecfdc9b15bb5dcb3d19edd8.camel@kernel.org> (raw)
In-Reply-To: <20210419023237.1177430-1-xiubli@redhat.com>

On Mon, 2021-04-19 at 10:32 +0800, xiubli@redhat.com wrote:
> From: Xiubo Li <xiubli@redhat.com>
> 
> Inode number 0x4 is reserved for the lost+found dir, and applications
> and test tools need to be able to access it.
> 
> URL: https://tracker.ceph.com/issues/50216
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> ---
>  fs/ceph/super.h              | 3 ++-
>  include/linux/ceph/ceph_fs.h | 7 ++++---
>  2 files changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/ceph/super.h b/fs/ceph/super.h
> index 4808a1458c9b..0f38e6183ff0 100644
> --- a/fs/ceph/super.h
> +++ b/fs/ceph/super.h
> @@ -542,7 +542,8 @@ static inline int ceph_ino_compare(struct inode *inode, void *data)
>  
>  static inline bool ceph_vino_is_reserved(const struct ceph_vino vino)
>  {
> -	if (vino.ino < CEPH_INO_SYSTEM_BASE && vino.ino != CEPH_INO_ROOT) {
> +	if (vino.ino < CEPH_INO_SYSTEM_BASE && vino.ino != CEPH_INO_ROOT &&
> +	    vino.ino != CEPH_INO_LOST_AND_FOUND) {
>  		WARN_RATELIMIT(1, "Attempt to access reserved inode number 0x%llx", vino.ino);
>  		return true;
>  	}
> diff --git a/include/linux/ceph/ceph_fs.h b/include/linux/ceph/ceph_fs.h
> index e41a811026f6..57e5bd63fb7a 100644
> --- a/include/linux/ceph/ceph_fs.h
> +++ b/include/linux/ceph/ceph_fs.h
> @@ -27,9 +27,10 @@
>  #define CEPH_MONC_PROTOCOL   15 /* server/client */
>  
>  
>  
> -#define CEPH_INO_ROOT   1
> -#define CEPH_INO_CEPH   2       /* hidden .ceph dir */
> -#define CEPH_INO_DOTDOT 3	/* used by ceph fuse for parent (..) */
> +#define CEPH_INO_ROOT           1
> +#define CEPH_INO_CEPH           2 /* hidden .ceph dir */
> +#define CEPH_INO_DOTDOT         3 /* used by ceph fuse for parent (..) */
> +#define CEPH_INO_LOST_AND_FOUND 4 /* lost+found dir */
>  
>  /* arbitrary limit on max # of monitors (cluster of 3 is typical) */
>  #define CEPH_MAX_MON   31

Thanks Xiubo,

For some background, apparently cephfs-data-scan can create this
directory, and the clients do need access to it. I'll fold this into the
original patch that makes these inodes inaccessible (ceph: don't allow
access to MDS-private inodes).

Cheers!
-- 
Jeff Layton <jlayton@kernel.org>



Thread overview: 6+ messages
2021-04-19  2:32 xiubli
2021-04-19 16:09 ` Jeff Layton [this message]
2021-04-20  0:23   ` Xiubo Li
2021-04-20  2:02   ` Xiubo Li
2021-04-20 12:51     ` Jeff Layton
2021-04-20 13:30       ` Xiubo Li
