* [PATCH] libfdisk: fix gpt for 32bit systems
@ 2017-04-05  9:56 Ruediger Meier
  2017-04-05 10:50 ` Karel Zak
  0 siblings, 1 reply; 8+ messages in thread
From: Ruediger Meier @ 2017-04-05  9:56 UTC (permalink / raw)
  To: util-linux

From: Ruediger Meier <ruediger.meier@ga-group.nl>

The libfdisk/gpt test has failed since bb676203 because UINT32_MAX
does not fit into ssize_t on 32bit systems.

This patch rewrites parts of commit f71b96bf. It also handles
multiplication overflow and fixes some typos. Note that it still looks
questionable that we would try to read() SSIZE_MAX bytes in
gpt_read_entries().

Signed-off-by: Ruediger Meier <ruediger.meier@ga-group.nl>
---
 libfdisk/src/gpt.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/libfdisk/src/gpt.c b/libfdisk/src/gpt.c
index c99b16f..2091673 100644
--- a/libfdisk/src/gpt.c
+++ b/libfdisk/src/gpt.c
@@ -820,19 +820,19 @@ static ssize_t read_lba(struct fdisk_context *cxt, uint64_t lba,
 static struct gpt_entry *gpt_read_entries(struct fdisk_context *cxt,
 					 struct gpt_header *header)
 {
-	ssize_t sz;
+	uint64_t sz;
 	struct gpt_entry *ret = NULL;
 	off_t offset;
 
 	assert(cxt);
 	assert(header);
 
-	sz = (ssize_t) le32_to_cpu(header->npartition_entries) *
+	sz = (uint64_t) le32_to_cpu(header->npartition_entries) *
 	     le32_to_cpu(header->sizeof_partition_entry);
 
-	if (sz == 0 || sz >= (ssize_t) UINT32_MAX ||
+	if (sz == 0 || sz >= UINT32_MAX || sz > SSIZE_MAX ||
 	    le32_to_cpu(header->sizeof_partition_entry) != sizeof(struct gpt_entry)) {
-		DBG(LABEL, ul_debug("GPT entreis array size check failed"));
+		DBG(LABEL, ul_debug("GPT entries array size check failed"));
 		return NULL;
 	}
 
@@ -844,7 +844,7 @@ static struct gpt_entry *gpt_read_entries(struct fdisk_context *cxt,
 
 	if (offset != lseek(cxt->dev_fd, offset, SEEK_SET))
 		goto fail;
-	if (sz != read(cxt->dev_fd, ret, sz))
+	if ((ssize_t)sz != read(cxt->dev_fd, ret, sz))
 		goto fail;
 
 	return ret;
@@ -2522,10 +2522,10 @@ static int gpt_check_table_overlap(struct fdisk_context *cxt,
 int fdisk_gpt_set_npartitions(struct fdisk_context *cxt, uint32_t entries)
 {
 	struct fdisk_gpt_label *gpt;
-	size_t old_size, new_size;
+	size_t old_size;
 	uint32_t old;
 	struct gpt_entry *ents;
-	uint64_t first_usable, last_usable;
+	uint64_t first_usable, last_usable, new_size;
 	int rc;
 
 	assert(cxt);
@@ -2541,10 +2541,10 @@ int fdisk_gpt_set_npartitions(struct fdisk_context *cxt, uint32_t entries)
 		return 0;	/* do nothing, say nothing */
 
 	/* calculate the size (bytes) of the entries array */
-	new_size = entries * le32_to_cpu(gpt->pheader->sizeof_partition_entry);
-	if (new_size >= UINT32_MAX) {
-		fdisk_warnx(cxt, _("The number of the partition has be smaller than %zu."),
-				UINT32_MAX / le32_to_cpu(gpt->pheader->sizeof_partition_entry));
+	new_size = (uint64_t) entries *
+	           le32_to_cpu(gpt->pheader->sizeof_partition_entry);
+	if (new_size >= UINT32_MAX || new_size > SIZE_MAX) {
+		fdisk_warnx(cxt, _("The number of partitions is too large."));
 		return -EINVAL;
 	}
 
-- 
1.8.5.6


^ permalink raw reply related	[flat|nested] 8+ messages in thread
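The failure is easy to reproduce in isolation: on a 32-bit system ssize_t is a 32-bit signed type, so (ssize_t) UINT32_MAX wraps to -1 and the old bound check rejected every size. A minimal sketch, with int32_t standing in for a 32-bit ssize_t (illustrative code, not util-linux source):

```c
#include <assert.h>
#include <stdint.h>

/* int32_t plays the role of ssize_t on a 32-bit system (an assumption
 * of this sketch).  (int32_t) UINT32_MAX is -1 on the usual two's
 * complement targets, so the old check "sz >= (ssize_t) UINT32_MAX"
 * compared against -1 and rejected every entries-array size. */
static int old_check_rejects(int32_t sz)
{
	return sz == 0 || sz >= (int32_t) UINT32_MAX;
}
```

Any sane size fails the old check under this model, which is exactly what the test suite tripped over.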

* Re: [PATCH] libfdisk: fix gpt for 32bit systems
  2017-04-05  9:56 [PATCH] libfdisk: fix gpt for 32bit systems Ruediger Meier
@ 2017-04-05 10:50 ` Karel Zak
  2017-04-05 11:14   ` Ruediger Meier
  0 siblings, 1 reply; 8+ messages in thread
From: Karel Zak @ 2017-04-05 10:50 UTC (permalink / raw)
  To: Ruediger Meier; +Cc: util-linux

On Wed, Apr 05, 2017 at 11:56:29AM +0200, Ruediger Meier wrote:
> From: Ruediger Meier <ruediger.meier@ga-group.nl>
> 
> test libfdisk/gpt failed since bb676203 because UINT32_MAX does
> not fit into ssize_t on 32bit systems.
> 
> This patch rewrites parts of commit f71b96bf. Also handling
> multiplication overflow and some typos. Note that it still looks
> questionable that we would try to read() SSIZE_MAX bytes in
> gpt_read_entries().

Maybe it would be better to have

  #define FDISK_GPT_NENTRIES_MAX  (SIZE_MAX/sizeof(struct gpt_entry))

to avoid all the checks. We have to be able to store the entries
array in memory, so everything should be based on that fact. Things like

    uint64_t sz;
    ...
    ret = calloc(1, sz);

are just crazy fun...
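The hazard in that pattern: when size_t is 32 bits, the uint64_t count passed to calloc() is silently truncated. A stand-alone illustration of the conversion, with uint32_t playing the 32-bit size_t (hypothetical helper, not util-linux code):

```c
#include <stdint.h>

/* Mimics the implicit uint64_t -> size_t conversion that happens when
 * a 64-bit byte count is handed to calloc() on a 32-bit system: the
 * high 32 bits are silently dropped, so the allocation can be far
 * smaller than the caller believes. */
static uint32_t as_32bit_size(uint64_t sz)
{
	return (uint32_t) sz;	/* modular reduction, no diagnostic */
}
```

A 4 GiB + 512 byte request would quietly become a 512 byte allocation, followed by out-of-bounds writes.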

It also seems that the code does not care whether header->sizeof_partition_entry
is the same as sizeof(struct gpt_entry), yet we use the array as "struct gpt_entry".

The really correct way is to use

    (char *)buf + i * header->sizeof_partition_entry

to get the entry.
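In stand-alone form the indexing rule looks like this (gpt_entry_ptr is a made-up name for the sketch; the point is that the stride comes from the on-disk header, not from the struct the code happens to know):

```c
#include <stddef.h>
#include <stdint.h>

/* Index entry i using the header-provided stride rather than
 * sizeof(struct gpt_entry); sizeof_partition_entry may legitimately
 * be larger than the struct our code models.  (Sketch only.) */
static unsigned char *gpt_entry_ptr(unsigned char *buf, uint32_t stride,
				    size_t i)
{
	return buf + i * stride;
}
```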

I'll play with it and send a patch for review.

    Karel

-- 
 Karel Zak  <kzak@redhat.com>
 http://karelzak.blogspot.com


* Re: [PATCH] libfdisk: fix gpt for 32bit systems
  2017-04-05 10:50 ` Karel Zak
@ 2017-04-05 11:14   ` Ruediger Meier
  2017-04-05 16:50     ` Karel Zak
  0 siblings, 1 reply; 8+ messages in thread
From: Ruediger Meier @ 2017-04-05 11:14 UTC (permalink / raw)
  To: Karel Zak; +Cc: util-linux

On Wednesday 05 April 2017, Karel Zak wrote:
> On Wed, Apr 05, 2017 at 11:56:29AM +0200, Ruediger Meier wrote:
> > From: Ruediger Meier <ruediger.meier@ga-group.nl>
> >
> > test libfdisk/gpt failed since bb676203 because UINT32_MAX does
> > not fit into ssize_t on 32bit systems.
> >
> > This patch rewrites parts of commit f71b96bf. Also handling
> > multiplication overflow and some typos. Note that it still looks
> > questionable that we would try to read() SSIZE_MAX bytes in
> > gpt_read_entries().
>
> Maybe it would be better to have
>
>   #define FDISK_GPT_NENTRIES_MAX  (SIZE_MAX/sizeof(struct gpt_entry))

Yes, I was not sure whether f71b96bf really cares for UINT32_MAX or if 
SIZE_MAX... would be good enough as the only limit.

> to avoid all the checks. We have to be able to store the entries
> array to the memory, so all should be based on this fact. Things like
>
>     uint64_t sz;
>     ...
>     ret = calloc(1, sz);
>
> are just crazy fun...
>
> It also seems that the code does not care if
> header->sizeof_partition_entry is the same as sizeof(struct
> gpt_entry), but we use the array as "struct gpt_entry".
>
> The really correct way is to use
>
>     (char *)buf + i * header->sizeof_partition_entry
>
> to get the entry.
>
> I'll play with it and send a patch for review.
>
>     Karel


* Re: [PATCH] libfdisk: fix gpt for 32bit systems
  2017-04-05 11:14   ` Ruediger Meier
@ 2017-04-05 16:50     ` Karel Zak
  2017-04-05 17:58       ` Ruediger Meier
  0 siblings, 1 reply; 8+ messages in thread
From: Karel Zak @ 2017-04-05 16:50 UTC (permalink / raw)
  To: Ruediger Meier; +Cc: util-linux

On Wed, Apr 05, 2017 at 01:14:07PM +0200, Ruediger Meier wrote:
> On Wednesday 05 April 2017, Karel Zak wrote:
> > On Wed, Apr 05, 2017 at 11:56:29AM +0200, Ruediger Meier wrote:
> > > From: Ruediger Meier <ruediger.meier@ga-group.nl>
> > >
> > > test libfdisk/gpt failed since bb676203 because UINT32_MAX does
> > > not fit into ssize_t on 32bit systems.
> > >
> > > This patch rewrites parts of commit f71b96bf. Also handling
> > > multiplication overflow and some typos. Note that it still looks
> > > questionable that we would try to read() SSIZE_MAX bytes in
> > > gpt_read_entries().
> >
> > Maybe it would be better to have
> >
> >   #define FDISK_GPT_NENTRIES_MAX  (SIZE_MAX/sizeof(struct gpt_entry))
> 
> Yes, I was not sure whether f71b96bf really cares for UINT32_MAX or if 
> SIZE_MAX... would be good enough as the only limit.

I made more changes to the code to consolidate all the work with ents[].

 https://github.com/karelzak/util-linux/commit/b28df75eece3b65b79b2e56f1c197c2f128ac3d9
 https://github.com/karelzak/util-linux/commit/b683c081085381401f1f777d076d073759cdb964
 https://github.com/karelzak/util-linux/commit/9e320545bb6186b14aa9339cdcb83cfce1d9221b

The limit is calculated by the patch below (the last patch). Not sure if it's
paranoid enough ;-)
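In stand-alone form, the guard introduced by that last patch behaves roughly like this (re-typed as a sketch with a simplified signature, not copied verbatim):

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the check in gpt_calculate_sizeof_ents(): refuse
 * nents * esz whenever the product cannot be represented in size_t,
 * and reject the degenerate zero cases up front. */
static int sizeof_ents_checked(uint32_t nents, uint32_t esz, size_t *sz)
{
	if (nents == 0 || esz == 0 || SIZE_MAX / esz < nents)
		return -1;	/* the real code returns -ERANGE */

	*sz = (size_t) nents * esz;	/* now provably no overflow */
	return 0;
}
```

Dividing SIZE_MAX by one factor before multiplying is the standard way to detect the overflow without performing it.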

    Karel



From 9e320545bb6186b14aa9339cdcb83cfce1d9221b Mon Sep 17 00:00:00 2001
From: Karel Zak <kzak@redhat.com>
Date: Wed, 5 Apr 2017 18:40:56 +0200
Subject: [PATCH] libfdisk: (gpt) make entries array size calculation more
 robust

* use the same function everywhere
* keep calculation based on size_t

Signed-off-by: Karel Zak <kzak@redhat.com>
---
 libfdisk/src/gpt.c | 81 ++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 51 insertions(+), 30 deletions(-)

diff --git a/libfdisk/src/gpt.c b/libfdisk/src/gpt.c
index 09c90a4..047ba59 100644
--- a/libfdisk/src/gpt.c
+++ b/libfdisk/src/gpt.c
@@ -453,6 +453,24 @@ static inline size_t gpt_get_nentries(struct fdisk_gpt_label *gpt)
 	return (size_t) le32_to_cpu(gpt->pheader->npartition_entries);
 }
 
+static inline int gpt_calculate_sizeof_ents(struct gpt_header *hdr, uint32_t nents, size_t *sz)
+{
+	uint32_t esz = le32_to_cpu(hdr->sizeof_partition_entry);
+
+	if (nents == 0 || esz == 0 || SIZE_MAX/esz < nents) {
+		DBG(LABEL, ul_debug("GPT entries array size check failed"));
+		return -ERANGE;
+	}
+
+	*sz = nents * esz;
+	return 0;
+}
+
+static inline int gpt_sizeof_ents(struct gpt_header *hdr, size_t *sz)
+{
+	return gpt_calculate_sizeof_ents(hdr, le32_to_cpu(hdr->npartition_entries), sz);
+}
+
 static inline int partition_unused(const struct gpt_entry *e)
 {
 	return !memcmp(&e->type, &GPT_UNUSED_ENTRY_GUID,
@@ -468,7 +486,6 @@ static char *gpt_get_header_id(struct gpt_header *header)
 	return strdup(str);
 }
 
-
 /*
  * Builds a clean new valid protective MBR - will wipe out any existing data.
  * Returns 0 on success, otherwise < 0 on error.
@@ -845,31 +862,30 @@ static ssize_t read_lba(struct fdisk_context *cxt, uint64_t lba,
 static unsigned char *gpt_read_entries(struct fdisk_context *cxt,
 					 struct gpt_header *header)
 {
-	ssize_t sz;
+	size_t sz;
+	ssize_t ssz;
+
 	unsigned char *ret = NULL;
 	off_t offset;
 
 	assert(cxt);
 	assert(header);
 
-	sz = (ssize_t) le32_to_cpu(header->npartition_entries) *
-	     le32_to_cpu(header->sizeof_partition_entry);
-
-	if (sz == 0 || sz >= (ssize_t) UINT32_MAX ||
-	    le32_to_cpu(header->sizeof_partition_entry) != sizeof(struct gpt_entry)) {
-		DBG(LABEL, ul_debug("GPT entreis array size check failed"));
+	if (gpt_sizeof_ents(header, &sz))
 		return NULL;
-	}
 
 	ret = calloc(1, sz);
 	if (!ret)
 		return NULL;
+
 	offset = (off_t) le64_to_cpu(header->partition_entry_lba) *
 		       cxt->sector_size;
 
 	if (offset != lseek(cxt->dev_fd, offset, SEEK_SET))
 		goto fail;
-	if (sz != read(cxt->dev_fd, ret, sz))
+
+	ssz = read(cxt->dev_fd, ret, sz);
+	if (ssz < 0 || (size_t) ssz != sz)
 		goto fail;
 
 	return ret;
@@ -897,8 +913,8 @@ static inline uint32_t gpt_entryarr_count_crc32(struct gpt_header *header, unsig
 {
 	size_t arysz = 0;
 
-	arysz = (size_t) le32_to_cpu(header->npartition_entries) *
-		le32_to_cpu(header->sizeof_partition_entry);
+	if (gpt_sizeof_ents(header, &arysz))
+		return 0;
 
 	return count_crc32(ents, arysz, 0, 0);
 }
@@ -1093,9 +1109,7 @@ static int gpt_locate_disklabel(struct fdisk_context *cxt, int n,
 		gpt = self_label(cxt);
 		*offset = (uint64_t) le64_to_cpu(gpt->pheader->partition_entry_lba) *
 				     cxt->sector_size;
-		*size = gpt_get_nentries(gpt) *
-				le32_to_cpu(gpt->pheader->sizeof_partition_entry);
-		break;
+		return gpt_sizeof_ents(gpt->pheader, size);
 	default:
 		return 1;			/* no more chunks */
 	}
@@ -1847,18 +1861,22 @@ static int gpt_write_partitions(struct fdisk_context *cxt,
 				struct gpt_header *header, unsigned char *ents)
 {
 	off_t offset = (off_t) le64_to_cpu(header->partition_entry_lba) * cxt->sector_size;
-	uint32_t nparts = le32_to_cpu(header->npartition_entries);
-	uint32_t totwrite = nparts * le32_to_cpu(header->sizeof_partition_entry);
-	ssize_t rc;
+	size_t towrite;
+	ssize_t ssz;
+	int rc;
+
+	rc = gpt_sizeof_ents(header, &towrite);
+	if (rc)
+		return rc;
 
 	if (offset != lseek(cxt->dev_fd, offset, SEEK_SET))
-		goto fail;
+		return -errno;
 
-	rc = write(cxt->dev_fd, ents, totwrite);
-	if (rc > 0 && totwrite == (uint32_t) rc)
-		return 0;
-fail:
-	return -errno;
+	ssz = write(cxt->dev_fd, ents, towrite);
+	if (ssz < 0 || (ssize_t) towrite != ssz)
+		return -errno;
+
+	return 0;
 }
 
 /*
@@ -2471,8 +2489,9 @@ static int gpt_create_disklabel(struct fdisk_context *cxt)
 	if (rc < 0)
 		goto done;
 
-	esz = (size_t) le32_to_cpu(gpt->pheader->npartition_entries) *
-	      le32_to_cpu(gpt->pheader->sizeof_partition_entry);
+	rc = gpt_sizeof_ents(gpt->pheader, &esz);
+	if (rc)
+		goto done;
 	gpt->ents = calloc(1, esz);
 	if (!gpt->ents) {
 		rc = -ENOMEM;
@@ -2602,14 +2621,16 @@ int fdisk_gpt_set_npartitions(struct fdisk_context *cxt, uint32_t entries)
 		return 0;	/* do nothing, say nothing */
 
 	/* calculate the size (bytes) of the entries array */
-	new_size = entries * le32_to_cpu(gpt->pheader->sizeof_partition_entry);
-	if (new_size >= UINT32_MAX) {
+	rc = gpt_calculate_sizeof_ents(gpt->pheader, entries, &new_size);
+	if (rc) {
 		fdisk_warnx(cxt, _("The number of the partition has be smaller than %zu."),
 				UINT32_MAX / le32_to_cpu(gpt->pheader->sizeof_partition_entry));
-		return -EINVAL;
+		return rc;
 	}
 
-	old_size = old * le32_to_cpu(gpt->pheader->sizeof_partition_entry);
+	rc = gpt_calculate_sizeof_ents(gpt->pheader, old, &old_size);
+	if (rc)
+		return rc;
 
 	/* calculate new range of usable LBAs */
 	first_usable = (uint64_t) (new_size / cxt->sector_size) + 2ULL;
-- 
2.9.3



* Re: [PATCH] libfdisk: fix gpt for 32bit systems
  2017-04-05 16:50     ` Karel Zak
@ 2017-04-05 17:58       ` Ruediger Meier
  2017-04-06 10:28         ` Karel Zak
  0 siblings, 1 reply; 8+ messages in thread
From: Ruediger Meier @ 2017-04-05 17:58 UTC (permalink / raw)
  To: Karel Zak; +Cc: util-linux

On Wednesday 05 April 2017, Karel Zak wrote:
> On Wed, Apr 05, 2017 at 01:14:07PM +0200, Ruediger Meier wrote:
> > On Wednesday 05 April 2017, Karel Zak wrote:
> > > On Wed, Apr 05, 2017 at 11:56:29AM +0200, Ruediger Meier wrote:
> > > > From: Ruediger Meier <ruediger.meier@ga-group.nl>
> > > >
> > > > test libfdisk/gpt failed since bb676203 because UINT32_MAX does
> > > > not fit into ssize_t on 32bit systems.
> > > >
> > > > This patch rewrites parts of commit f71b96bf. Also handling
> > > > multiplication overflow and some typos. Note that it still
> > > > looks questionable that we would try to read() SSIZE_MAX bytes
> > > > in gpt_read_entries().
> > >
> > > Maybe it would be better to have
> > >
> > >   #define FDISK_GPT_NENTRIES_MAX  (SIZE_MAX/sizeof(struct
> > > gpt_entry))
> >
> > Yes, I was not sure whether f71b96bf really cares for UINT32_MAX or
> > if SIZE_MAX... would be good enough as the only limit.
>
> I did more changes to the code to consolidate all the work with
> ents[].
>
> 
> https://github.com/karelzak/util-linux/commit/b28df75eece3b65b79b2e56f1c197c2f128ac3d9
> https://github.com/karelzak/util-linux/commit/b683c081085381401f1f777d076d073759cdb964
> https://github.com/karelzak/util-linux/commit/9e320545bb6186b14aa9339cdcb83cfce1d9221b
>
> The limit is calculated by the patch below (the last patch). Not sure
> if it's paranoid enough ;-)

Thanks, it fixes the original "(ssize_t)UINT32_MAX" bug on my test 
systems.

I have one "paranoid" comment below.

>
>
>
> From 9e320545bb6186b14aa9339cdcb83cfce1d9221b Mon Sep 17 00:00:00 2001
> From: Karel Zak <kzak@redhat.com>
> Date: Wed, 5 Apr 2017 18:40:56 +0200
> Subject: [PATCH] libfdisk: (gpt) make entries array size calculation
> more robust
>
> * use the same function everywhere
> * keep calculation based on size_t
>
> Signed-off-by: Karel Zak <kzak@redhat.com>
> ---
>  libfdisk/src/gpt.c | 81 ++++++++++++++++++++++++++++++++++--------------------
>  1 file changed, 51 insertions(+), 30 deletions(-)
>
> diff --git a/libfdisk/src/gpt.c b/libfdisk/src/gpt.c
> index 09c90a4..047ba59 100644
> --- a/libfdisk/src/gpt.c
> +++ b/libfdisk/src/gpt.c
> @@ -453,6 +453,24 @@ static inline size_t gpt_get_nentries(struct fdisk_gpt_label *gpt)
>  	return (size_t) le32_to_cpu(gpt->pheader->npartition_entries);
>  }
>
> +static inline int gpt_calculate_sizeof_ents(struct gpt_header *hdr, uint32_t nents, size_t *sz)
> +{
> +	uint32_t esz = le32_to_cpu(hdr->sizeof_partition_entry);
> +
> +	if (nents == 0 || esz == 0 || SIZE_MAX/esz < nents) {
> +		DBG(LABEL, ul_debug("GPT entries array size check failed"));
> +		return -ERANGE;
> +	}
> +
> +	*sz = nents * esz;
> +	return 0;
> +}
> +
> +static inline int gpt_sizeof_ents(struct gpt_header *hdr, size_t *sz)
> +{
> +	return gpt_calculate_sizeof_ents(hdr, le32_to_cpu(hdr->npartition_entries), sz);
> +}
> +
>  static inline int partition_unused(const struct gpt_entry *e)
>  {
>  	return !memcmp(&e->type, &GPT_UNUSED_ENTRY_GUID,
> @@ -468,7 +486,6 @@ static char *gpt_get_header_id(struct gpt_header *header)
>  	return strdup(str);
>  }
>
> -
>  /*
>   * Builds a clean new valid protective MBR - will wipe out any existing data.
>   * Returns 0 on success, otherwise < 0 on error.
> @@ -845,31 +862,30 @@ static ssize_t read_lba(struct fdisk_context *cxt, uint64_t lba,
>  static unsigned char *gpt_read_entries(struct fdisk_context *cxt,
>  					 struct gpt_header *header)
>  {
> -	ssize_t sz;
> +	size_t sz;
> +	ssize_t ssz;
> +
>  	unsigned char *ret = NULL;
>  	off_t offset;
>
>  	assert(cxt);
>  	assert(header);
>
> -	sz = (ssize_t) le32_to_cpu(header->npartition_entries) *
> -	     le32_to_cpu(header->sizeof_partition_entry);
> -
> -	if (sz == 0 || sz >= (ssize_t) UINT32_MAX ||
> -	    le32_to_cpu(header->sizeof_partition_entry) != sizeof(struct gpt_entry)) {
> -		DBG(LABEL, ul_debug("GPT entreis array size check failed"));
> +	if (gpt_sizeof_ents(header, &sz)) 
>  		return NULL;
> -	}
>
>  	ret = calloc(1, sz);
>  	if (!ret)
>  		return NULL;
> +
>  	offset = (off_t) le64_to_cpu(header->partition_entry_lba) *
>  		       cxt->sector_size;
>
>  	if (offset != lseek(cxt->dev_fd, offset, SEEK_SET))
>  		goto fail;
> -	if (sz != read(cxt->dev_fd, ret, sz))
> +
> +	ssz = read(cxt->dev_fd, ret, sz);

The read(2) Linux manpage says: "If count is greater than SSIZE_MAX 
(signed!), the result is unspecified."

So maybe we should limit gpt_sizeof_ents() regarding SSIZE_MAX rather 
than SIZE_MAX. I guess that even smaller sizes would be impossible to 
load into memory anyway.

I'm also not sure that such big reads (without using read() in a loop) 
are portable at all.
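A short-read-safe loop, in the spirit of the read_all() helper from include/all-io.h that the thread mentions, would look roughly like this (a sketch, not the actual util-linux implementation):

```c
#include <errno.h>
#include <unistd.h>

/* Wrapper around read(2) that keeps reading until 'count' bytes have
 * arrived, EOF is hit, or a real error occurs.  Handles the short
 * reads and EINTR that a single large read(2) does not. */
static ssize_t read_full(int fd, unsigned char *buf, size_t count)
{
	size_t done = 0;

	while (done < count) {
		ssize_t n = read(fd, buf + done, count - done);

		if (n < 0) {
			if (errno == EINTR)
				continue;	/* interrupted, retry */
			return -1;
		}
		if (n == 0)
			break;			/* EOF before count bytes */
		done += (size_t) n;
	}
	return (ssize_t) done;
}
```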


> +	if (ssz < 0 || (size_t) ssz != sz)
>  		goto fail;
>
>  	return ret;
> @@ -897,8 +913,8 @@ static inline uint32_t gpt_entryarr_count_crc32(struct gpt_header *header, unsig
>  {
>  	size_t arysz = 0;
>
> -	arysz = (size_t) le32_to_cpu(header->npartition_entries) *
> -		le32_to_cpu(header->sizeof_partition_entry);
> +	if (gpt_sizeof_ents(header, &arysz))
> +		return 0;
>
>  	return count_crc32(ents, arysz, 0, 0);
>  }
> @@ -1093,9 +1109,7 @@ static int gpt_locate_disklabel(struct fdisk_context *cxt, int n,
>  		gpt = self_label(cxt);
>  		*offset = (uint64_t) le64_to_cpu(gpt->pheader->partition_entry_lba) *
>  				     cxt->sector_size;
> -		*size = gpt_get_nentries(gpt) *
> -				le32_to_cpu(gpt->pheader->sizeof_partition_entry);
> -		break;
> +		return gpt_sizeof_ents(gpt->pheader, size);
>  	default:
>  		return 1;			/* no more chunks */
>  	}
> @@ -1847,18 +1861,22 @@ static int gpt_write_partitions(struct fdisk_context *cxt,
>  				struct gpt_header *header, unsigned char *ents)
>  {
>  	off_t offset = (off_t) le64_to_cpu(header->partition_entry_lba) * cxt->sector_size;
> -	uint32_t nparts = le32_to_cpu(header->npartition_entries);
> -	uint32_t totwrite = nparts * le32_to_cpu(header->sizeof_partition_entry);
> -	ssize_t rc;
> +	size_t towrite;
> +	ssize_t ssz;
> +	int rc;
> +
> +	rc = gpt_sizeof_ents(header, &towrite);
> +	if (rc)
> +		return rc;
>
>  	if (offset != lseek(cxt->dev_fd, offset, SEEK_SET))
> -		goto fail;
> +		return -errno;
>
> -	rc = write(cxt->dev_fd, ents, totwrite);
> -	if (rc > 0 && totwrite == (uint32_t) rc)
> -		return 0;
> -fail:
> -	return -errno;
> +	ssz = write(cxt->dev_fd, ents, towrite);
> +	if (ssz < 0 || (ssize_t) towrite != ssz)
> +		return -errno;
> +
> +	return 0;
>  }
>
>  /*
> @@ -2471,8 +2489,9 @@ static int gpt_create_disklabel(struct fdisk_context *cxt)
>  	if (rc < 0)
>  		goto done;
>
> -	esz = (size_t) le32_to_cpu(gpt->pheader->npartition_entries) *
> -	      le32_to_cpu(gpt->pheader->sizeof_partition_entry);
> +	rc = gpt_sizeof_ents(gpt->pheader, &esz);
> +	if (rc)
> +		goto done;
>  	gpt->ents = calloc(1, esz);
>  	if (!gpt->ents) {
>  		rc = -ENOMEM;
> @@ -2602,14 +2621,16 @@ int fdisk_gpt_set_npartitions(struct fdisk_context *cxt, uint32_t entries)
>  		return 0;	/* do nothing, say nothing */
>
>  	/* calculate the size (bytes) of the entries array */
> -	new_size = entries * le32_to_cpu(gpt->pheader->sizeof_partition_entry);
> -	if (new_size >= UINT32_MAX) {
> +	rc = gpt_calculate_sizeof_ents(gpt->pheader, entries, &new_size);
> +	if (rc) {
>  		fdisk_warnx(cxt, _("The number of the partition has be smaller than %zu."),
>  				UINT32_MAX / le32_to_cpu(gpt->pheader->sizeof_partition_entry));
> -		return -EINVAL;
> +		return rc;
>  	}
>
> -	old_size = old * le32_to_cpu(gpt->pheader->sizeof_partition_entry);
> +	rc = gpt_calculate_sizeof_ents(gpt->pheader, old, &old_size);
> +	if (rc)
> +		return rc;
>
>  	/* calculate new range of usable LBAs */
>  	first_usable = (uint64_t) (new_size / cxt->sector_size) + 2ULL;




* Re: [PATCH] libfdisk: fix gpt for 32bit systems
  2017-04-05 17:58       ` Ruediger Meier
@ 2017-04-06 10:28         ` Karel Zak
  2017-04-06 10:43           ` Ruediger Meier
  0 siblings, 1 reply; 8+ messages in thread
From: Karel Zak @ 2017-04-06 10:28 UTC (permalink / raw)
  To: Ruediger Meier; +Cc: util-linux

On Wed, Apr 05, 2017 at 07:58:00PM +0200, Ruediger Meier wrote:
> > +	ssz = read(cxt->dev_fd, ret, sz);
> 
> The read(2) Linux manpage says: "If count is greater than SSIZE_MAX 
> (signed!), the result is unspecified."
> 
> So maybe we should limit gpt_sizeof_ents() regarding SSIZE_MAX rather 
> than SIZE_MAX. I guess that even smwaller sizes would not be possible 
> to load into memory.

OK, I have added an SSIZE_MAX check before the read.

> I'm also not sure that such big-reads (without using read() in a loop) 
> are portable at all.

The area on disk is pretty small, and we read the entries array after
header checksum verification. So the read(2) should not be affected by
a corrupted disk, and if someone has 44+ million partitions then a random
read(2) issue is probably the smallest issue in their life... (we could
use read_all() from include/all-io.h, but I think it's overkill).

    Karel

-- 
 Karel Zak  <kzak@redhat.com>
 http://karelzak.blogspot.com


* Re: [PATCH] libfdisk: fix gpt for 32bit systems
  2017-04-06 10:28         ` Karel Zak
@ 2017-04-06 10:43           ` Ruediger Meier
  2017-04-06 11:32             ` Karel Zak
  0 siblings, 1 reply; 8+ messages in thread
From: Ruediger Meier @ 2017-04-06 10:43 UTC (permalink / raw)
  To: Karel Zak; +Cc: util-linux

On Thursday 06 April 2017, Karel Zak wrote:
> On Wed, Apr 05, 2017 at 07:58:00PM +0200, Ruediger Meier wrote:
> > > +	ssz = read(cxt->dev_fd, ret, sz);
> >
> > The read(2) Linux manpage says: "If count is greater than SSIZE_MAX
> > (signed!), the result is unspecified."
> >
> > So maybe we should limit gpt_sizeof_ents() regarding SSIZE_MAX
> > rather than SIZE_MAX. I guess that even smwaller sizes would not be
> > possible to load into memory.
>
> OK, I have added SSIZE_MAX check before the read.
>
> > I'm also not sure that such big-reads (without using read() in a
> > loop) are portable at all.
>
> The area on disk is pretty small, and we read the entries array after
> header checksum verification. So, the read(2) should no be affected
> by corrupted disk and if someone has 44+ millions partitions then a
> random read(2) issue is probably the smallest issue in his live...
> (we can use read_all() from include/all-io.h, but I think it's
> overkill).

Yes, no real problem I guess. I'm just curious what would happen if we 
had at least a few thousand partitions. Or whether we should make the 
limit much smaller somehow, to avoid the OOM killer in case somebody 
reads a corrupted gpt table.

BTW we could also generally add more tests for broken devices using 
scsi_debug or libfiu. Maybe I will try this when I feel bored next 
time. But I'm already stuck with these fuzzing tests.

cu,
Rudi



* Re: [PATCH] libfdisk: fix gpt for 32bit systems
  2017-04-06 10:43           ` Ruediger Meier
@ 2017-04-06 11:32             ` Karel Zak
  0 siblings, 0 replies; 8+ messages in thread
From: Karel Zak @ 2017-04-06 11:32 UTC (permalink / raw)
  To: Ruediger Meier; +Cc: util-linux

On Thu, Apr 06, 2017 at 12:43:39PM +0200, Ruediger Meier wrote:
> On Thursday 06 April 2017, Karel Zak wrote:
> > On Wed, Apr 05, 2017 at 07:58:00PM +0200, Ruediger Meier wrote:
> > > > +	ssz = read(cxt->dev_fd, ret, sz);
> > >
> > > The read(2) Linux manpage says: "If count is greater than SSIZE_MAX
> > > (signed!), the result is unspecified."
> > >
> > > So maybe we should limit gpt_sizeof_ents() regarding SSIZE_MAX
> > > rather than SIZE_MAX. I guess that even smwaller sizes would not be
> > > possible to load into memory.
> >
> > OK, I have added SSIZE_MAX check before the read.
> >
> > > I'm also not sure that such big-reads (without using read() in a
> > > loop) are portable at all.
> >
> > The area on disk is pretty small, and we read the entries array after
> > header checksum verification. So, the read(2) should no be affected
> > by corrupted disk and if someone has 44+ millions partitions then a
> > random read(2) issue is probably the smallest issue in his live...
> > (we can use read_all() from include/all-io.h, but I think it's
> > overkill).
> 
> Yes, no real problem I guess. I'm just curious what would happen if we 
> have at least a few thousand partitions. Or whether we shouldn't make 
> the limit much smaller somehow to avoid OOM killer in case somebody 
> reads a corrupted gpt table.

we have the 'l' command in the expert menu ('git pull'; I found and fixed
a bug there) to reallocate the partition array, for example to create an
entries array for 10000 partitions and use the first and last partition:

 modprobe scsi_debug dev_size_mb=1000
 echo -e "g\nx\nl\n10000\nr\nn\n1\n\n+1MiB\nn\n10000\n\n+1MiB\np\nw\nq" | ./fdisk /dev/sdc
 ...
 Device        Start   End Sectors Size Type
 /dev/sdc1      4096  6143    2048   1M Linux filesystem
 /dev/sdc10000  6144  8191    2048   1M Linux filesystem

but:

 * the kernel really doesn't like it (kmalloc problem in the kernel's is_gpt_valid())
 * GNU Parted's last message is "Aborted (core dumped)"

If you allocate space for only 1000 partitions then all seems better,
except the kernel creates only the first partition (the kernel limit is
254 or so).

> BTW we could also generally add more tests for broken devices using 
> scsi_debug or libfiu. Maybe I will try this when I feel boring next 
> time. But I'm already stuck with these fuzzing tests.

It would be useful.

    Karel

-- 
 Karel Zak  <kzak@redhat.com>
 http://karelzak.blogspot.com

