* [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid
@ 2019-08-21 14:09 Aaron Williams
2019-08-22 1:40 ` Bin Meng
0 siblings, 1 reply; 9+ messages in thread
From: Aaron Williams @ 2019-08-21 14:09 UTC (permalink / raw)
To: u-boot
From: Aaron Williams <aaron.williams@cavium.com>
When large writes take place, I saw a Samsung EVO 970+ return a status
value of 0x13, PRP Offset Invalid. I tracked this down to the
improper handling of PRP entries. The blocks the PRP entries are
placed in cannot cross a page boundary and thus should be allocated
on page boundaries. This is how the Linux kernel driver works.
With this patch, the PRP pool is allocated on a page boundary and,
other than the very first allocation, the pool size is a multiple of
the page size. Each page can hold (4096 / 8) - 1 entries since the
last entry must point to the next page in the pool.
Change-Id: I8df66c87d6a6105da556d327d4cc5148e444d20e
Signed-off-by: Aaron Williams <awilliams@marvell.com>
---
drivers/nvme/nvme.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index 7008a54a6d..71ea226820 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -74,6 +74,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
u64 *prp_pool;
int length = total_len;
int i, nprps;
+ u32 prps_per_page = (page_size >> 3) - 1;
+ u32 num_pages;
+
length -= (page_size - offset);
if (length <= 0) {
@@ -90,15 +93,16 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
}
nprps = DIV_ROUND_UP(length, page_size);
+ num_pages = DIV_ROUND_UP(nprps, prps_per_page);
if (nprps > dev->prp_entry_num) {
free(dev->prp_pool);
- dev->prp_pool = malloc(nprps << 3);
+ dev->prp_pool = memalign(page_size, num_pages * page_size);
if (!dev->prp_pool) {
printf("Error: malloc prp_pool fail\n");
return -ENOMEM;
}
- dev->prp_entry_num = nprps;
+ dev->prp_entry_num = ((page_size >> 3) - 1) * num_pages;
}
prp_pool = dev->prp_pool;
@@ -791,12 +795,6 @@ static int nvme_probe(struct udevice *udev)
}
memset(ndev->queues, 0, NVME_Q_NUM * sizeof(struct nvme_queue *));
- ndev->prp_pool = malloc(MAX_PRP_POOL);
- if (!ndev->prp_pool) {
- ret = -ENOMEM;
- printf("Error: %s: Out of memory!\n", udev->name);
- goto free_nvme;
- }
ndev->prp_entry_num = MAX_PRP_POOL >> 3;
ndev->cap = nvme_readq(&ndev->bar->cap);
@@ -808,6 +806,13 @@ static int nvme_probe(struct udevice *udev)
if (ret)
goto free_queue;
+ ndev->prp_pool = memalign(ndev->page_size, MAX_PRP_POOL);
+ if (!ndev->prp_pool) {
+ ret = -ENOMEM;
+ printf("Error: %s: Out of memory!\n", udev->name);
+ goto free_nvme;
+ }
+
ret = nvme_setup_io_queues(ndev);
if (ret)
goto free_queue;
--
2.16.4
* [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid
2019-08-21 14:09 [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid Aaron Williams
@ 2019-08-22 1:40 ` Bin Meng
2019-08-22 2:48 ` Aaron Williams
0 siblings, 1 reply; 9+ messages in thread
From: Bin Meng @ 2019-08-22 1:40 UTC (permalink / raw)
To: u-boot
Hi Aaron,
On Wed, Aug 21, 2019 at 10:09 PM Aaron Williams <awilliams@marvell.com> wrote:
>
> From: Aaron Williams <aaron.williams@cavium.com>
>
> When large writes take place I saw a Samsung EVO 970+ return a status
> value of 0x13, PRP Offset Invalid. I tracked this down to the
> improper handling of PRP entries. The blocks the PRP entries are
> placed in cannot cross a page boundary and thus should be allocated
> on page boundaries. This is how the Linux kernel driver works.
>
> With this patch, the PRP pool is allocated on a page boundary and
> other than the very first allocation, the pool size is a multiple of
> the page size. Each page can hold (4096 / 8) - 1 entries since the
> last entry must point to the next page in the pool.
>
> Change-Id: I8df66c87d6a6105da556d327d4cc5148e444d20e
> Signed-off-by: Aaron Williams <awilliams@marvell.com>
> ---
> drivers/nvme/nvme.c | 21 +++++++++++++--------
> 1 file changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
> index 7008a54a6d..71ea226820 100644
> --- a/drivers/nvme/nvme.c
> +++ b/drivers/nvme/nvme.c
> @@ -74,6 +74,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
> u64 *prp_pool;
> int length = total_len;
> int i, nprps;
> + u32 prps_per_page = (page_size >> 3) - 1;
> + u32 num_pages;
> +
> length -= (page_size - offset);
>
> if (length <= 0) {
> @@ -90,15 +93,16 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
> }
>
> nprps = DIV_ROUND_UP(length, page_size);
> + num_pages = DIV_ROUND_UP(nprps, prps_per_page);
>
> if (nprps > dev->prp_entry_num) {
> free(dev->prp_pool);
> - dev->prp_pool = malloc(nprps << 3);
> + dev->prp_pool = memalign(page_size, num_pages * page_size);
> if (!dev->prp_pool) {
> printf("Error: malloc prp_pool fail\n");
> return -ENOMEM;
> }
> - dev->prp_entry_num = nprps;
> + dev->prp_entry_num = ((page_size >> 3) - 1) * num_pages;
This should be: dev->prp_entry_num = prps_per_page * num_pages;
When you respin the patch, please add the version number to the email
subject so that we can track it better. Thanks!
> }
[snip]
Regards,
Bin
* [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid
2019-08-22 1:40 ` Bin Meng
@ 2019-08-22 2:48 ` Aaron Williams
0 siblings, 0 replies; 9+ messages in thread
From: Aaron Williams @ 2019-08-22 2:48 UTC (permalink / raw)
To: u-boot
From: Aaron Williams <aaron.williams@cavium.com>
When large writes take place, I saw a Samsung EVO 970+ return a status
value of 0x13, PRP Offset Invalid. I tracked this down to the
improper handling of PRP entries. The blocks the PRP entries are
placed in cannot cross a page boundary and thus should be allocated
on page boundaries. This is how the Linux kernel driver works.
With this patch, the PRP pool is allocated on a page boundary and,
other than the very first allocation, the pool size is a multiple of
the page size. Each page can hold (4096 / 8) - 1 entries since the
last entry must point to the next page in the pool.
Change-Id: I8df66c87d6a6105da556d327d4cc5148e444d20e
Signed-off-by: Aaron Williams <awilliams@marvell.com>
---
drivers/nvme/nvme.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index 7008a54a6d..71ea226820 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -74,6 +74,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
u64 *prp_pool;
int length = total_len;
int i, nprps;
+ u32 prps_per_page = (page_size >> 3) - 1;
+ u32 num_pages;
+
length -= (page_size - offset);
if (length <= 0) {
@@ -90,15 +93,16 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
}
nprps = DIV_ROUND_UP(length, page_size);
+ num_pages = DIV_ROUND_UP(nprps, prps_per_page);
if (nprps > dev->prp_entry_num) {
free(dev->prp_pool);
- dev->prp_pool = malloc(nprps << 3);
+ dev->prp_pool = memalign(page_size, num_pages * page_size);
if (!dev->prp_pool) {
printf("Error: malloc prp_pool fail\n");
return -ENOMEM;
}
- dev->prp_entry_num = nprps;
+ dev->prp_entry_num = ((page_size >> 3) - 1) * num_pages;
}
prp_pool = dev->prp_pool;
@@ -791,12 +795,6 @@ static int nvme_probe(struct udevice *udev)
}
memset(ndev->queues, 0, NVME_Q_NUM * sizeof(struct nvme_queue *));
- ndev->prp_pool = malloc(MAX_PRP_POOL);
- if (!ndev->prp_pool) {
- ret = -ENOMEM;
- printf("Error: %s: Out of memory!\n", udev->name);
- goto free_nvme;
- }
ndev->prp_entry_num = MAX_PRP_POOL >> 3;
ndev->cap = nvme_readq(&ndev->bar->cap);
@@ -808,6 +806,13 @@ static int nvme_probe(struct udevice *udev)
if (ret)
goto free_queue;
+ ndev->prp_pool = memalign(ndev->page_size, MAX_PRP_POOL);
+ if (!ndev->prp_pool) {
+ ret = -ENOMEM;
+ printf("Error: %s: Out of memory!\n", udev->name);
+ goto free_nvme;
+ }
+
ret = nvme_setup_io_queues(ndev);
if (ret)
goto free_queue;
--
2.16.4
* [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid
2019-08-22 9:12 ` [U-Boot] [PATCH] " Aaron Williams
2019-08-22 9:17 ` Aaron Williams
@ 2019-08-22 14:25 ` Bin Meng
1 sibling, 0 replies; 9+ messages in thread
From: Bin Meng @ 2019-08-22 14:25 UTC (permalink / raw)
To: u-boot
Hi Aaron,
On Thu, Aug 22, 2019 at 5:12 PM Aaron Williams <awilliams@marvell.com> wrote:
>
> When large writes take place I saw a Samsung EVO 970+ return a status
> value of 0x13, PRP Offset Invalid. I tracked this down to the
> improper handling of PRP entries. The blocks the PRP entries are
> placed in cannot cross a page boundary and thus should be allocated
> on page boundaries. This is how the Linux kernel driver works.
>
> With this patch, the PRP pool is allocated on a page boundary and
> other than the very first allocation, the pool size is a multiple of
> the page size. Each page can hold (4096 / 8) - 1 entries since the
> last entry must point to the next page in the pool.
>
> Signed-off-by: Aaron Williams <awilliams@marvell.com>
> ---
> drivers/nvme/nvme.c | 28 ++++++++++++++++++----------
> 1 file changed, 18 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
> index d4965e2ef6..bc4cf40b40 100644
> --- a/drivers/nvme/nvme.c
> +++ b/drivers/nvme/nvme.c
> @@ -73,6 +73,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
> u64 *prp_pool;
> int length = total_len;
> int i, nprps;
> + u32 prps_per_page = (page_size >> 3) - 1;
> + u32 num_pages;
> +
> length -= (page_size - offset);
>
> if (length <= 0) {
> @@ -89,15 +92,19 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
> }
>
> nprps = DIV_ROUND_UP(length, page_size);
> + num_pages = DIV_ROUND_UP(nprps, prps_per_page);
>
> if (nprps > dev->prp_entry_num) {
> free(dev->prp_pool);
> - dev->prp_pool = malloc(nprps << 3);
> + /* Always increase in increments of pages. It doesn't waste
nits: please use the correct multi-line comment format.
> + * much memory and reduces the number of allocations.
> + */
> + dev->prp_pool = memalign(page_size, num_pages * page_size);
> if (!dev->prp_pool) {
> printf("Error: malloc prp_pool fail\n");
> return -ENOMEM;
> }
> - dev->prp_entry_num = nprps;
> + dev->prp_entry_num = prps_per_page * num_pages;
> }
>
> prp_pool = dev->prp_pool;
> @@ -788,14 +795,6 @@ static int nvme_probe(struct udevice *udev)
> }
> memset(ndev->queues, 0, NVME_Q_NUM * sizeof(struct nvme_queue *));
>
> - ndev->prp_pool = malloc(MAX_PRP_POOL);
> - if (!ndev->prp_pool) {
> - ret = -ENOMEM;
> - printf("Error: %s: Out of memory!\n", udev->name);
> - goto free_nvme;
> - }
> - ndev->prp_entry_num = MAX_PRP_POOL >> 3;
> -
> ndev->cap = nvme_readq(&ndev->bar->cap);
> ndev->q_depth = min_t(int, NVME_CAP_MQES(ndev->cap) + 1, NVME_Q_DEPTH);
> ndev->db_stride = 1 << NVME_CAP_STRIDE(ndev->cap);
> @@ -805,6 +804,15 @@ static int nvme_probe(struct udevice *udev)
> if (ret)
> goto free_queue;
>
> + /* Allocate after the page size is known */
> + ndev->prp_pool = memalign(ndev->page_size, MAX_PRP_POOL);
> + if (!ndev->prp_pool) {
> + ret = -ENOMEM;
> + printf("Error: %s: Out of memory!\n", udev->name);
> + goto free_nvme;
> + }
> + ndev->prp_entry_num = MAX_PRP_POOL >> 3;
> +
> ret = nvme_setup_io_queues(ndev);
> if (ret)
> goto free_queue;
> --
Other than the above nits, you can include my:
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
in your next version of the patch. Thanks!
Regards,
Bin
* [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid
2019-08-22 9:12 ` [U-Boot] [PATCH] " Aaron Williams
@ 2019-08-22 9:17 ` Aaron Williams
2019-08-22 14:25 ` Bin Meng
1 sibling, 0 replies; 9+ messages in thread
From: Aaron Williams @ 2019-08-22 9:17 UTC (permalink / raw)
To: u-boot
I'm sorry about the messed-up subject saying [PATCH]. For some reason git
send-email is mangling the subject line. I'm new to using this method
to send out patches. This is version 2 of my patch.
-Aaron
On Thursday, August 22, 2019 2:12:32 AM PDT Aaron Williams wrote:
> When large writes take place I saw a Samsung EVO 970+ return a status
> value of 0x13, PRP Offset Invalid. I tracked this down to the
> improper handling of PRP entries. The blocks the PRP entries are
> placed in cannot cross a page boundary and thus should be allocated
> on page boundaries. This is how the Linux kernel driver works.
>
> With this patch, the PRP pool is allocated on a page boundary and
> other than the very first allocation, the pool size is a multiple of
> the page size. Each page can hold (4096 / 8) - 1 entries since the
> last entry must point to the next page in the pool.
>
> Signed-off-by: Aaron Williams <awilliams@marvell.com>
> ---
> drivers/nvme/nvme.c | 28 ++++++++++++++++++----------
> 1 file changed, 18 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
> index d4965e2ef6..bc4cf40b40 100644
> --- a/drivers/nvme/nvme.c
> +++ b/drivers/nvme/nvme.c
> @@ -73,6 +73,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
> u64 *prp_pool;
> int length = total_len;
> int i, nprps;
> + u32 prps_per_page = (page_size >> 3) - 1;
> + u32 num_pages;
> +
> length -= (page_size - offset);
>
> if (length <= 0) {
> @@ -89,15 +92,19 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
> }
>
> nprps = DIV_ROUND_UP(length, page_size);
> + num_pages = DIV_ROUND_UP(nprps, prps_per_page);
>
> if (nprps > dev->prp_entry_num) {
> free(dev->prp_pool);
> - dev->prp_pool = malloc(nprps << 3);
> + /* Always increase in increments of pages. It doesn't waste
> + * much memory and reduces the number of allocations.
> + */
> + dev->prp_pool = memalign(page_size, num_pages * page_size);
> if (!dev->prp_pool) {
> printf("Error: malloc prp_pool fail\n");
> return -ENOMEM;
> }
> - dev->prp_entry_num = nprps;
> + dev->prp_entry_num = prps_per_page * num_pages;
> }
>
> prp_pool = dev->prp_pool;
> @@ -788,14 +795,6 @@ static int nvme_probe(struct udevice *udev)
> }
> memset(ndev->queues, 0, NVME_Q_NUM * sizeof(struct nvme_queue *));
>
> - ndev->prp_pool = malloc(MAX_PRP_POOL);
> - if (!ndev->prp_pool) {
> - ret = -ENOMEM;
> - printf("Error: %s: Out of memory!\n", udev->name);
> - goto free_nvme;
> - }
> - ndev->prp_entry_num = MAX_PRP_POOL >> 3;
> -
> ndev->cap = nvme_readq(&ndev->bar->cap);
> ndev->q_depth = min_t(int, NVME_CAP_MQES(ndev->cap) + 1, NVME_Q_DEPTH);
> ndev->db_stride = 1 << NVME_CAP_STRIDE(ndev->cap);
> @@ -805,6 +804,15 @@ static int nvme_probe(struct udevice *udev)
> if (ret)
> goto free_queue;
>
> + /* Allocate after the page size is known */
> + ndev->prp_pool = memalign(ndev->page_size, MAX_PRP_POOL);
> + if (!ndev->prp_pool) {
> + ret = -ENOMEM;
> + printf("Error: %s: Out of memory!\n", udev->name);
> + goto free_nvme;
> + }
> + ndev->prp_entry_num = MAX_PRP_POOL >> 3;
> +
> ret = nvme_setup_io_queues(ndev);
> if (ret)
> goto free_queue;
* [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid
2019-08-21 22:06 [U-Boot] [EXT] Re: [PATCH 1/1] " Aaron Williams
@ 2019-08-22 9:12 ` Aaron Williams
2019-08-22 9:17 ` Aaron Williams
2019-08-22 14:25 ` Bin Meng
0 siblings, 2 replies; 9+ messages in thread
From: Aaron Williams @ 2019-08-22 9:12 UTC (permalink / raw)
To: u-boot
When large writes take place, I saw a Samsung EVO 970+ return a status
value of 0x13, PRP Offset Invalid. I tracked this down to the
improper handling of PRP entries. The blocks the PRP entries are
placed in cannot cross a page boundary and thus should be allocated
on page boundaries. This is how the Linux kernel driver works.
With this patch, the PRP pool is allocated on a page boundary and,
other than the very first allocation, the pool size is a multiple of
the page size. Each page can hold (4096 / 8) - 1 entries since the
last entry must point to the next page in the pool.
Signed-off-by: Aaron Williams <awilliams@marvell.com>
---
drivers/nvme/nvme.c | 28 ++++++++++++++++++----------
1 file changed, 18 insertions(+), 10 deletions(-)
diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index d4965e2ef6..bc4cf40b40 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -73,6 +73,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
u64 *prp_pool;
int length = total_len;
int i, nprps;
+ u32 prps_per_page = (page_size >> 3) - 1;
+ u32 num_pages;
+
length -= (page_size - offset);
if (length <= 0) {
@@ -89,15 +92,19 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
}
nprps = DIV_ROUND_UP(length, page_size);
+ num_pages = DIV_ROUND_UP(nprps, prps_per_page);
if (nprps > dev->prp_entry_num) {
free(dev->prp_pool);
- dev->prp_pool = malloc(nprps << 3);
+ /* Always increase in increments of pages. It doesn't waste
+ * much memory and reduces the number of allocations.
+ */
+ dev->prp_pool = memalign(page_size, num_pages * page_size);
if (!dev->prp_pool) {
printf("Error: malloc prp_pool fail\n");
return -ENOMEM;
}
- dev->prp_entry_num = nprps;
+ dev->prp_entry_num = prps_per_page * num_pages;
}
prp_pool = dev->prp_pool;
@@ -788,14 +795,6 @@ static int nvme_probe(struct udevice *udev)
}
memset(ndev->queues, 0, NVME_Q_NUM * sizeof(struct nvme_queue *));
- ndev->prp_pool = malloc(MAX_PRP_POOL);
- if (!ndev->prp_pool) {
- ret = -ENOMEM;
- printf("Error: %s: Out of memory!\n", udev->name);
- goto free_nvme;
- }
- ndev->prp_entry_num = MAX_PRP_POOL >> 3;
-
ndev->cap = nvme_readq(&ndev->bar->cap);
ndev->q_depth = min_t(int, NVME_CAP_MQES(ndev->cap) + 1, NVME_Q_DEPTH);
ndev->db_stride = 1 << NVME_CAP_STRIDE(ndev->cap);
@@ -805,6 +804,15 @@ static int nvme_probe(struct udevice *udev)
if (ret)
goto free_queue;
+ /* Allocate after the page size is known */
+ ndev->prp_pool = memalign(ndev->page_size, MAX_PRP_POOL);
+ if (!ndev->prp_pool) {
+ ret = -ENOMEM;
+ printf("Error: %s: Out of memory!\n", udev->name);
+ goto free_nvme;
+ }
+ ndev->prp_entry_num = MAX_PRP_POOL >> 3;
+
ret = nvme_setup_io_queues(ndev);
if (ret)
goto free_queue;
--
2.16.4
* [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid
@ 2019-08-21 13:40 Aaron Williams
0 siblings, 0 replies; 9+ messages in thread
From: Aaron Williams @ 2019-08-21 13:40 UTC (permalink / raw)
To: u-boot
From: Aaron Williams <aaron.williams@cavium.com>
When large writes take place, I saw a Samsung EVO 970+ return a status
value of 0x13, PRP Offset Invalid. I tracked this down to the
improper handling of PRP entries. The blocks the PRP entries are
placed in cannot cross a page boundary and thus should be allocated
on page boundaries. This is how the Linux kernel driver works.
With this patch, the PRP pool is allocated on a page boundary and,
other than the very first allocation, the pool size is a multiple of
the page size. Each page can hold (4096 / 8) - 1 entries since the
last entry must point to the next page in the pool.
Change-Id: I8df66c87d6a6105da556d327d4cc5148e444d20e
Signed-off-by: Aaron Williams <awilliams@marvell.com>
---
drivers/nvme/nvme.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index 7008a54a6d..ae64459edf 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -75,6 +75,8 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
int length = total_len;
int i, nprps;
length -= (page_size - offset);
+ u32 prps_per_page = (page_size >> 3) - 1;
+ u32 num_pages;
if (length <= 0) {
*prp2 = 0;
@@ -90,15 +92,16 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
}
nprps = DIV_ROUND_UP(length, page_size);
+ num_pages = (nprps + prps_per_page - 1) / prps_per_page;
if (nprps > dev->prp_entry_num) {
free(dev->prp_pool);
- dev->prp_pool = malloc(nprps << 3);
+ dev->prp_pool = memalign(page_size, num_pages * page_size);
if (!dev->prp_pool) {
printf("Error: malloc prp_pool fail\n");
return -ENOMEM;
}
- dev->prp_entry_num = nprps;
+ dev->prp_entry_num = ((page_size >> 3) - 1) * num_pages;
}
prp_pool = dev->prp_pool;
@@ -791,7 +794,7 @@ static int nvme_probe(struct udevice *udev)
}
memset(ndev->queues, 0, NVME_Q_NUM * sizeof(struct nvme_queue *));
- ndev->prp_pool = malloc(MAX_PRP_POOL);
+ ndev->prp_pool = memalign(1 << 12, MAX_PRP_POOL);
if (!ndev->prp_pool) {
ret = -ENOMEM;
printf("Error: %s: Out of memory!\n", udev->name);
--
2.16.4
* [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid
2019-08-21 11:23 ` [U-Boot] [PATCH 1/1][nvme] " Aaron Williams
@ 2019-08-21 11:23 ` Aaron Williams
0 siblings, 0 replies; 9+ messages in thread
From: Aaron Williams @ 2019-08-21 11:23 UTC (permalink / raw)
To: u-boot
From: Aaron Williams <aaron.williams@cavium.com>
When large writes take place, I saw a Samsung EVO 970+ return a status
value of 0x13, PRP Offset Invalid. I tracked this down to the
improper handling of PRP entries. The blocks the PRP entries are
placed in cannot cross a page boundary and thus should be allocated
on page boundaries. This is how the Linux kernel driver works.
With this patch, the PRP pool is allocated on a page boundary and,
other than the very first allocation, the pool size is a multiple of
the page size. Each page can hold (4096 / 8) - 1 entries since the
last entry must point to the next page in the pool.
Change-Id: I8df66c87d6a6105da556d327d4cc5148e444d20e
Signed-off-by: Aaron Williams <awilliams@marvell.com>
---
drivers/nvme/nvme.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index 7008a54a6d..ae64459edf 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -75,6 +75,8 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
int length = total_len;
int i, nprps;
length -= (page_size - offset);
+ u32 prps_per_page = (page_size >> 3) - 1;
+ u32 num_pages;
if (length <= 0) {
*prp2 = 0;
@@ -90,15 +92,16 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
}
nprps = DIV_ROUND_UP(length, page_size);
+ num_pages = (nprps + prps_per_page - 1) / prps_per_page;
if (nprps > dev->prp_entry_num) {
free(dev->prp_pool);
- dev->prp_pool = malloc(nprps << 3);
+ dev->prp_pool = memalign(page_size, num_pages * page_size);
if (!dev->prp_pool) {
printf("Error: malloc prp_pool fail\n");
return -ENOMEM;
}
- dev->prp_entry_num = nprps;
+ dev->prp_entry_num = ((page_size >> 3) - 1) * num_pages;
}
prp_pool = dev->prp_pool;
@@ -791,7 +794,7 @@ static int nvme_probe(struct udevice *udev)
}
memset(ndev->queues, 0, NVME_Q_NUM * sizeof(struct nvme_queue *));
- ndev->prp_pool = malloc(MAX_PRP_POOL);
+ ndev->prp_pool = memalign(1 << 12, MAX_PRP_POOL);
if (!ndev->prp_pool) {
ret = -ENOMEM;
printf("Error: %s: Out of memory!\n", udev->name);
--
2.16.4
* [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid
@ 2019-08-20 7:18 Aaron Williams
0 siblings, 0 replies; 9+ messages in thread
From: Aaron Williams @ 2019-08-20 7:18 UTC (permalink / raw)
To: u-boot
When large writes take place, I saw a Samsung EVO 970+ return a status
value of 0x13, PRP Offset Invalid. I tracked this down to the
improper handling of PRP entries. The blocks the PRP entries are
placed in cannot cross a page boundary and thus should be allocated
on page boundaries. This is how the Linux kernel driver works.
With this patch, the PRP pool is allocated on a page boundary and,
other than the very first allocation, the pool size is a multiple of
the page size. Each page can hold (4096 / 8) - 1 entries since the
last entry must point to the next page in the pool.
Change-Id: I8df66c87d6a6105da556d327d4cc5148e444d20e
Signed-off-by: Aaron Williams <awilliams@marvell.com>
---
drivers/nvme/nvme.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index 7008a54a6d..c94a6d0654 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -75,6 +75,8 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
int length = total_len;
int i, nprps;
length -= (page_size - offset);
+ u32 prps_per_page = (page_size >> 3) - 1;
+ u32 num_pages;
if (length <= 0) {
*prp2 = 0;
@@ -90,15 +92,16 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
}
nprps = DIV_ROUND_UP(length, page_size);
+ num_pages = (nprps + prps_per_page - 1) / prps_per_page;
if (nprps > dev->prp_entry_num) {
free(dev->prp_pool);
- dev->prp_pool = malloc(nprps << 3);
+ dev->prp_pool = memalign(page_size, num_pages * page_size);
if (!dev->prp_pool) {
printf("Error: malloc prp_pool fail\n");
return -ENOMEM;
}
- dev->prp_entry_num = nprps;
+ dev->prp_entry_num = ((page_size >> 3) - 1) * num_pages;
}
prp_pool = dev->prp_pool;
@@ -115,6 +118,7 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
nprps--;
}
*prp2 = (ulong)dev->prp_pool;
+ flush_dcache_range(*prp2, *prp2 + (num_pages * page_size));
return 0;
}
@@ -791,7 +795,7 @@ static int nvme_probe(struct udevice *udev)
}
memset(ndev->queues, 0, NVME_Q_NUM * sizeof(struct nvme_queue *));
- ndev->prp_pool = malloc(MAX_PRP_POOL);
+ ndev->prp_pool = memalign(1 << 12, MAX_PRP_POOL);
if (!ndev->prp_pool) {
ret = -ENOMEM;
printf("Error: %s: Out of memory!\n", udev->name);
--
2.16.4