linux-kernel.vger.kernel.org archive mirror
From: Roman Zippel <zippel@linux-m68k.org>
To: Eric Sandeen <sandeen@redhat.com>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH] UPDATED: hfs: handle more on-disk corruptions without oopsing
Date: Mon, 24 Dec 2007 03:16:41 +0100	[thread overview]
Message-ID: <200712240316.43308.zippel@linux-m68k.org> (raw)
In-Reply-To: <476A8F36.2050007@redhat.com>

Hi,

On Thursday 20 December 2007, Eric Sandeen wrote:

> Index: linux-2.6.24-rc3/fs/hfs/brec.c
> ===================================================================
> --- linux-2.6.24-rc3.orig/fs/hfs/brec.c
> +++ linux-2.6.24-rc3/fs/hfs/brec.c
> @@ -44,10 +44,21 @@ u16 hfs_brec_keylen(struct hfs_bnode *no
>  		recoff = hfs_bnode_read_u16(node, node->tree->node_size - (rec + 1) * 2);
>  		if (!recoff)
>  			return 0;
> -		if (node->tree->attributes & HFS_TREE_BIGKEYS)
> +		if (node->tree->attributes & HFS_TREE_BIGKEYS) {
>  			retval = hfs_bnode_read_u16(node, recoff) + 2;
> -		else
> +			if (retval > node->tree->max_key_len + 2) {
> +				printk(KERN_ERR "hfs: keylen %d too large\n",
> +					retval);
> +				retval = HFS_BAD_KEYLEN;
> +			}
> +		} else {
>  			retval = (hfs_bnode_read_u8(node, recoff) | 1) + 1;
> +			if (retval > node->tree->max_key_len + 1) {
> +				printk(KERN_ERR "hfs: keylen %d too large\n",
> +					retval);
> +				retval = HFS_BAD_KEYLEN;
> +			}
> +		}
>  	}
>  	return retval;
>  }

You can reuse 0 as the failure value; a key has to be of nonzero size.
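
To illustrate the suggestion, the validation could collapse into a single helper that returns the key length on success and 0 on corruption, with no need for an HFS_BAD_KEYLEN sentinel. This is a standalone sketch, not the kernel code; the helper name and the userspace fprintf are illustrative only:

```c
#include <stdio.h>

/* Sketch of the zero-as-failure convention: a valid HFS key is never
 * zero-length, so 0 is an unambiguous corruption indicator. */
static unsigned short check_keylen(unsigned short keylen,
				   unsigned short max_key_len)
{
	if (keylen == 0 || keylen > max_key_len + 2) {
		fprintf(stderr, "hfs: keylen %u too large\n", keylen);
		return 0;	/* caller treats 0 as on-disk corruption */
	}
	return keylen;
}
```

Callers of hfs_brec_keylen() already have to handle a 0 return for the !recoff case, so no new error path is needed.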

> Index: linux-2.6.24-rc3/fs/hfs/btree.c
> ===================================================================
> --- linux-2.6.24-rc3.orig/fs/hfs/btree.c
> +++ linux-2.6.24-rc3/fs/hfs/btree.c
> @@ -81,6 +81,17 @@ struct hfs_btree *hfs_btree_open(struct
>  		goto fail_page;
>  	if (!tree->node_count)
>  		goto fail_page;
> +	if ((id == HFS_EXT_CNID) && (tree->max_key_len != HFS_MAX_EXT_KEYLEN)) {
> +		printk(KERN_ERR "hfs: invalid extent max_key_len %d\n",
> +			tree->max_key_len);
> +		goto fail_page;
> +	}
> +	if ((id == HFS_CAT_CNID) && (tree->max_key_len != HFS_MAX_CAT_KEYLEN)) {
> +		printk(KERN_ERR "hfs: invalid catalog max_key_len %d\n",
> +			tree->max_key_len);
> +		goto fail_page;
> +	}
> +
>  	tree->node_size_shift = ffs(size) - 1;
>  	tree->pages_per_bnode = (tree->node_size + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
>

I'd prefer a switch statement here.
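
A switch over the tree id would look roughly like this. The sketch below is self-contained for illustration; the constant values are made up here, and the real CNID and key-length definitions live in fs/hfs/hfs.h:

```c
#include <stdio.h>

/* Illustrative constants only; consult fs/hfs/hfs.h for the real values. */
#define HFS_EXT_CNID		3
#define HFS_CAT_CNID		4
#define HFS_MAX_EXT_KEYLEN	7
#define HFS_MAX_CAT_KEYLEN	37

/* Returns 0 if max_key_len is valid for the given tree id, -1 otherwise. */
static int check_max_key_len(unsigned int id, unsigned int max_key_len)
{
	switch (id) {
	case HFS_EXT_CNID:
		if (max_key_len != HFS_MAX_EXT_KEYLEN) {
			fprintf(stderr, "hfs: invalid extent max_key_len %u\n",
				max_key_len);
			return -1;
		}
		break;
	case HFS_CAT_CNID:
		if (max_key_len != HFS_MAX_CAT_KEYLEN) {
			fprintf(stderr, "hfs: invalid catalog max_key_len %u\n",
				max_key_len);
			return -1;
		}
		break;
	}
	return 0;
}
```

In hfs_btree_open() the -1 return would map onto the existing goto fail_page path, and the switch also makes it obvious that other tree ids are deliberately left unchecked.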

It would be nice if you could do the same changes for hfsplus, so both stay in 
sync.
Thanks.

bye, Roman


Thread overview: 6+ messages
2007-12-19 22:33 [PATCH] hfs: handle more on-disk corruptions without oopsing Eric Sandeen
2007-12-20 15:50 ` [PATCH] UPDATED: " Eric Sandeen
2007-12-24  2:16   ` Roman Zippel [this message]
2007-12-24  5:07     ` Eric Sandeen
2008-01-02 17:38     ` [PATCH] UPDATED2: " Eric Sandeen
2008-01-07  3:13       ` Roman Zippel
