* [PATCH] mmc: sd: Meet alignment requirements for raw_ssr DMA
@ 2016-11-11 14:22 Paul Burton
  2016-11-17 12:51 ` Ulf Hansson
  2016-12-21  7:42 ` Ulf Hansson
  0 siblings, 2 replies; 8+ messages in thread
From: Paul Burton @ 2016-11-11 14:22 UTC (permalink / raw)
  To: linux-mmc; +Cc: Paul Burton, Ulf Hansson

The mmc_read_ssr() function results in DMA to the raw_ssr member of
struct mmc_card, which is not guaranteed to be cache line aligned & thus
might not meet the requirements set out in Documentation/DMA-API.txt:

  Warnings:  Memory coherency operates at a granularity called the cache
  line width.  In order for memory mapped by this API to operate
  correctly, the mapped region must begin exactly on a cache line
  boundary and end exactly on one (to prevent two separately mapped
  regions from sharing a single cache line).  Since the cache line size
  may not be known at compile time, the API will not enforce this
  requirement.  Therefore, it is recommended that driver writers who
  don't take special care to determine the cache line size at run time
  only map virtual regions that begin and end on page boundaries (which
  are guaranteed also to be cache line boundaries).

On some systems where DMA is non-coherent this can lead to us losing
data that shares cache lines with the raw_ssr array.

Fix this by kmalloc'ing a temporary buffer to perform DMA into. kmalloc
will ensure the buffer is suitably aligned, allowing the DMA to be
performed without any loss of data.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: linux-mmc@vger.kernel.org
---
 drivers/mmc/core/sd.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
index 73c762a..f6f40a1 100644
--- a/drivers/mmc/core/sd.c
+++ b/drivers/mmc/core/sd.c
@@ -223,6 +223,7 @@ static int mmc_decode_scr(struct mmc_card *card)
 static int mmc_read_ssr(struct mmc_card *card)
 {
 	unsigned int au, es, et, eo;
+	u32 *raw_ssr;
 	int i;
 
 	if (!(card->csd.cmdclass & CCC_APP_SPEC)) {
@@ -231,14 +232,21 @@ static int mmc_read_ssr(struct mmc_card *card)
 		return 0;
 	}
 
-	if (mmc_app_sd_status(card, card->raw_ssr)) {
+	raw_ssr = kmalloc(sizeof(card->raw_ssr), GFP_KERNEL);
+	if (!raw_ssr)
+		return -ENOMEM;
+
+	if (mmc_app_sd_status(card, raw_ssr)) {
 		pr_warn("%s: problem reading SD Status register\n",
 			mmc_hostname(card->host));
+		kfree(raw_ssr);
 		return 0;
 	}
 
 	for (i = 0; i < 16; i++)
-		card->raw_ssr[i] = be32_to_cpu(card->raw_ssr[i]);
+		card->raw_ssr[i] = be32_to_cpu(raw_ssr[i]);
+
+	kfree(raw_ssr);
 
 	/*
 	 * UNSTUFF_BITS only works with four u32s so we have to offset the
-- 
2.10.2
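
For reference, here is a reconstruction of mmc_read_ssr() as it would read
after this patch - assembled purely from the hunks above, with everything
the diff does not show marked as elided:

  static int mmc_read_ssr(struct mmc_card *card)
  {
          unsigned int au, es, et, eo;
          u32 *raw_ssr;
          int i;

          if (!(card->csd.cmdclass & CCC_APP_SPEC)) {
                  /* ... branch body not shown in the hunks ... */
                  return 0;
          }

          raw_ssr = kmalloc(sizeof(card->raw_ssr), GFP_KERNEL);
          if (!raw_ssr)
                  return -ENOMEM;

          if (mmc_app_sd_status(card, raw_ssr)) {
                  pr_warn("%s: problem reading SD Status register\n",
                          mmc_hostname(card->host));
                  kfree(raw_ssr);
                  return 0;
          }

          for (i = 0; i < 16; i++)
                  card->raw_ssr[i] = be32_to_cpu(raw_ssr[i]);

          kfree(raw_ssr);

          /*
           * UNSTUFF_BITS only works with four u32s so we have to offset the
           * ... (comment truncated; the SSR decoding that follows is elided)
           */
  }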


* Re: [PATCH] mmc: sd: Meet alignment requirements for raw_ssr DMA
  2016-11-11 14:22 [PATCH] mmc: sd: Meet alignment requirements for raw_ssr DMA Paul Burton
@ 2016-11-17 12:51 ` Ulf Hansson
  2016-11-17 13:19   ` Paul Burton
  2016-12-21  7:42 ` Ulf Hansson
  1 sibling, 1 reply; 8+ messages in thread
From: Ulf Hansson @ 2016-11-17 12:51 UTC (permalink / raw)
  To: Paul Burton; +Cc: linux-mmc

On 11 November 2016 at 15:22, Paul Burton <paul.burton@imgtec.com> wrote:
> The mmc_read_ssr() function results in DMA to the raw_ssr member of
> struct mmc_card, which is not guaranteed to be cache line aligned & thus
> might not meet the requirements set out in Documentation/DMA-API.txt:
>
>   Warnings:  Memory coherency operates at a granularity called the cache
>   line width.  In order for memory mapped by this API to operate
>   correctly, the mapped region must begin exactly on a cache line
>   boundary and end exactly on one (to prevent two separately mapped
>   regions from sharing a single cache line).  Since the cache line size
>   may not be known at compile time, the API will not enforce this
>   requirement.  Therefore, it is recommended that driver writers who
>   don't take special care to determine the cache line size at run time
>   only map virtual regions that begin and end on page boundaries (which
>   are guaranteed also to be cache line boundaries).

This is about virtual regions, not about physically contiguous
allocated memory, such as what you get via kmalloc(n, GFP_KERNEL). Right?

>
> On some systems where DMA is non-coherent this can lead to us losing
> data that shares cache lines with the raw_ssr array.
>
> Fix this by kmalloc'ing a temporary buffer to perform DMA into. kmalloc
> will ensure the buffer is suitably aligned, allowing the DMA to be
> performed without any loss of data.
>
> Signed-off-by: Paul Burton <paul.burton@imgtec.com>
> Cc: Ulf Hansson <ulf.hansson@linaro.org>
> Cc: linux-mmc@vger.kernel.org
> ---
>  drivers/mmc/core/sd.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
> index 73c762a..f6f40a1 100644
> --- a/drivers/mmc/core/sd.c
> +++ b/drivers/mmc/core/sd.c
> @@ -223,6 +223,7 @@ static int mmc_decode_scr(struct mmc_card *card)
>  static int mmc_read_ssr(struct mmc_card *card)
>  {
>         unsigned int au, es, et, eo;
> +       u32 *raw_ssr;
>         int i;
>
>         if (!(card->csd.cmdclass & CCC_APP_SPEC)) {
> @@ -231,14 +232,21 @@ static int mmc_read_ssr(struct mmc_card *card)
>                 return 0;
>         }
>
> -       if (mmc_app_sd_status(card, card->raw_ssr)) {

So the struct mmc_card, which contains "u32 raw_ssr[16]", is already
allocated using kzalloc().
As kzalloc() returns physically contiguous memory, using DMA to read
the data into the memory pointed to by card->raw_ssr should be safe.

Unless I am missing something, of course.

> +       raw_ssr = kmalloc(sizeof(card->raw_ssr), GFP_KERNEL);
> +       if (!raw_ssr)
> +               return -ENOMEM;
> +
> +       if (mmc_app_sd_status(card, raw_ssr)) {
>                 pr_warn("%s: problem reading SD Status register\n",
>                         mmc_hostname(card->host));
> +               kfree(raw_ssr);
>                 return 0;
>         }
>
>         for (i = 0; i < 16; i++)
> -               card->raw_ssr[i] = be32_to_cpu(card->raw_ssr[i]);
> +               card->raw_ssr[i] = be32_to_cpu(raw_ssr[i]);
> +
> +       kfree(raw_ssr);
>
>         /*
>          * UNSTUFF_BITS only works with four u32s so we have to offset the
> --
> 2.10.2
>

Kind regards
Uffe


* Re: [PATCH] mmc: sd: Meet alignment requirements for raw_ssr DMA
  2016-11-17 12:51 ` Ulf Hansson
@ 2016-11-17 13:19   ` Paul Burton
  2016-11-23 12:36     ` Ulf Hansson
  0 siblings, 1 reply; 8+ messages in thread
From: Paul Burton @ 2016-11-17 13:19 UTC (permalink / raw)
  To: Ulf Hansson; +Cc: linux-mmc

Hi Ulf,

On Thursday, 17 November 2016 13:51:02 GMT Ulf Hansson wrote:
> On 11 November 2016 at 15:22, Paul Burton <paul.burton@imgtec.com> wrote:
> > The mmc_read_ssr() function results in DMA to the raw_ssr member of
> > struct mmc_card, which is not guaranteed to be cache line aligned & thus
> > might not meet the requirements set out in Documentation/DMA-API.txt:
> >
> >   Warnings:  Memory coherency operates at a granularity called the cache
> >   line width.  In order for memory mapped by this API to operate
> >   correctly, the mapped region must begin exactly on a cache line
> >   boundary and end exactly on one (to prevent two separately mapped
> >   regions from sharing a single cache line).  Since the cache line size
> >   may not be known at compile time, the API will not enforce this
> >   requirement.  Therefore, it is recommended that driver writers who
> >   don't take special care to determine the cache line size at run time
> >   only map virtual regions that begin and end on page boundaries (which
> >   are guaranteed also to be cache line boundaries).
> 
> This is about virtual regions, not about physically contiguous
> allocated memory, such as what you get via kmalloc(n, GFP_KERNEL). Right?

This is about memory being cache-line aligned. Since the mapping from virtual
to physical operates on pages, which will always be some multiple of the cache
line size, it doesn't matter whether we're talking about virtual or physical
addresses (& that's just as well, because the semantics get awkward between
architectures - from my MIPS perspective I'd say kmalloc returns a virtual
address).

This has nothing to do with the memory mapped for DMA being physically
contiguous (which, as you say, it typically needs to be); it has to do with how
it is kept coherent with caches that the device performing DMA has no
knowledge of or access to.
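
As an aside, a minimal sketch of the guarantee being relied on here -
illustrative only, using the macro names from include/linux/slab.h of this
era:

  /* On architectures that define ARCH_DMA_MINALIGN (those with
   * non-coherent DMA), slab.h defines ARCH_KMALLOC_MINALIGN to match it,
   * so a kmalloc() buffer always begins on a DMA-safe boundary: */
  u32 *buf = kmalloc(sizeof(card->raw_ssr), GFP_KERNEL);
  /* buf starts on a cache-line boundary & the slab size classes are
   * rounded up so that no other object shares its cache lines. */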

> 
> > On some systems where DMA is non-coherent this can lead to us losing
> > data that shares cache lines with the raw_ssr array.
> > 
> > Fix this by kmalloc'ing a temporary buffer to perform DMA into. kmalloc
> > will ensure the buffer is suitably aligned, allowing the DMA to be
> > performed without any loss of data.
> > 
> > Signed-off-by: Paul Burton <paul.burton@imgtec.com>
> > Cc: Ulf Hansson <ulf.hansson@linaro.org>
> > Cc: linux-mmc@vger.kernel.org
> > ---
> > 
> >  drivers/mmc/core/sd.c | 12 ++++++++++--
> >  1 file changed, 10 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
> > index 73c762a..f6f40a1 100644
> > --- a/drivers/mmc/core/sd.c
> > +++ b/drivers/mmc/core/sd.c
> > @@ -223,6 +223,7 @@ static int mmc_decode_scr(struct mmc_card *card)
> >  static int mmc_read_ssr(struct mmc_card *card)
> >  {
> >         unsigned int au, es, et, eo;
> > +       u32 *raw_ssr;
> >         int i;
> >
> >         if (!(card->csd.cmdclass & CCC_APP_SPEC)) {
> > @@ -231,14 +232,21 @@ static int mmc_read_ssr(struct mmc_card *card)
> >                 return 0;
> >         }
> >
> > -       if (mmc_app_sd_status(card, card->raw_ssr)) {
> 
> So the struct mmc_card, which contains "u32 raw_ssr[16]", is already
> allocated using kzalloc().
> As kzalloc() returns physically contiguous memory, using DMA to read
> the data into the memory pointed to by card->raw_ssr should be safe.
> 
> Unless I am missing something, of course.

Sadly you are. This is incorrect because the raw_ssr field of struct mmc_card
may not begin or end at a cache-line boundary. On systems where mapping a
buffer for DMA requires cache invalidation we may therefore lose data if it
shares a cache line with some other data. In this case let's say we have
something like this:

0x1000: u32 raw_scr[2];
0x1008: u32 raw_ssr[16];

Now let's say we have 16-byte cache lines and we try to map raw_ssr for DMA.
If the architecture requires that we invalidate the memory being mapped in
caches (in order to avoid any writebacks clobbering the DMA'd data) then we
have a problem because our choice is either to:

- Writeback the line that overlaps with raw_scr, then invalidate it along with
  the rest of the cache lines covering raw_ssr. This won't do because there's
  nothing stopping us from pulling the line back into the cache whilst the DMA
  occurs on some systems - say we access raw_scr whilst the DMA to raw_ssr
  happens or the CPU just speculatively prefetches something. The DMA API
  cannot detect the former & would have no way to deal with it if it could,
  and on systems where the latter is possible we have no choice but to
  invalidate the cache lines again after the DMA is complete. What would we do
  with the first line then? We can't write it back to preserve raw_scr because
  it includes some potentially stale copy of the start of raw_ssr, which we
  would then clobber the DMA'd data with.

- Or, we just invalidate the line which overlaps with raw_scr. We can't do
  this because we might throw away some part of raw_scr.

So we have no choice but to align the buffers used for DMA at a cache line
boundary. This is what the paragraph in Documentation/DMA-API.txt is about &
the MMC code currently fails to meet the requirements it sets out. Allocating
the buffer to be DMA'd to with kmalloc will meet that alignment requirement.
One possible alternative would be to ensure that the raw_ssr field of struct
mmc_card is always aligned at an ARCH_DMA_MINALIGN boundary within the struct,
with nothing after it until another ARCH_DMA_MINALIGN boundary. That seems
like it'd be a bit messy though, and probably easy for someone to break.

Of course when a system has DMA that is coherent with caches there's generally
no need to worry about any of this, but users of the DMA API generally can't
rely on that, again as Documentation/DMA-API.txt says. Any buffers mapped for
DMA should be ensured to be cache-line aligned at minimum, giving the
architecture code backing the DMA API a chance to do the right thing for the
given system.
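
Purely for illustration, the ARCH_DMA_MINALIGN alternative mentioned above
might look something like the following sketch - hypothetical, not what
this patch does, and the member names around raw_ssr are placeholders:

  struct mmc_card {
          /* ... earlier members ... */
          u32 raw_ssr[16] __aligned(ARCH_DMA_MINALIGN);
          /* whatever follows must itself start on an ARCH_DMA_MINALIGN
           * boundary, so nothing shares raw_ssr's final cache line: */
          u32 hypothetical_next_member __aligned(ARCH_DMA_MINALIGN);
          /* ... */
  };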

Thanks,
    Paul

> 
> > +       raw_ssr = kmalloc(sizeof(card->raw_ssr), GFP_KERNEL);
> > +       if (!raw_ssr)
> > +               return -ENOMEM;
> > +
> > +       if (mmc_app_sd_status(card, raw_ssr)) {
> >                 pr_warn("%s: problem reading SD Status register\n",
> >                         mmc_hostname(card->host));
> > +               kfree(raw_ssr);
> >                 return 0;
> >         }
> >
> >         for (i = 0; i < 16; i++)
> > -               card->raw_ssr[i] = be32_to_cpu(card->raw_ssr[i]);
> > +               card->raw_ssr[i] = be32_to_cpu(raw_ssr[i]);
> > +
> > +       kfree(raw_ssr);
> >
> >         /*
> >          * UNSTUFF_BITS only works with four u32s so we have to offset the
> > --
> > 2.10.2
> 
> Kind regards
> Uffe




* Re: [PATCH] mmc: sd: Meet alignment requirements for raw_ssr DMA
  2016-11-17 13:19   ` Paul Burton
@ 2016-11-23 12:36     ` Ulf Hansson
  2016-12-20 18:45       ` Lars-Peter Clausen
  0 siblings, 1 reply; 8+ messages in thread
From: Ulf Hansson @ 2016-11-23 12:36 UTC (permalink / raw)
  To: Paul Burton; +Cc: linux-mmc

[...]

>> > diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
>> > index 73c762a..f6f40a1 100644
>> > --- a/drivers/mmc/core/sd.c
>> > +++ b/drivers/mmc/core/sd.c
>> > @@ -223,6 +223,7 @@ static int mmc_decode_scr(struct mmc_card *card)
>> >  static int mmc_read_ssr(struct mmc_card *card)
>> >  {
>> >         unsigned int au, es, et, eo;
>> > +       u32 *raw_ssr;
>> >         int i;
>> >
>> >         if (!(card->csd.cmdclass & CCC_APP_SPEC)) {
>> > @@ -231,14 +232,21 @@ static int mmc_read_ssr(struct mmc_card *card)
>> >                 return 0;
>> >         }
>> >
>> > -       if (mmc_app_sd_status(card, card->raw_ssr)) {
>>
>> So the struct mmc_card, which contains "u32 raw_ssr[16]", is already
>> allocated using kzalloc().
>> As kzalloc() returns physically contiguous memory, using DMA to read
>> the data into the memory pointed to by card->raw_ssr should be safe.
>>
>> Unless I am missing something, of course.
>
> Sadly you are. This is incorrect because the raw_ssr field of struct mmc_card
> may not begin or end at a cache-line boundary. On systems where mapping a
> buffer for DMA requires cache invalidation we may therefore lose data if it
> shares a cache line with some other data. In this case let's say we have

Yes, of course - you are right! Apologies for my ignorance!

> something like this:
>
> 0x1000: u32 raw_scr[2];
> 0x1008: u32 raw_ssr[16];
>
> Now let's say we have 16-byte cache lines and we try to map raw_ssr for DMA.
> If the architecture requires that we invalidate the memory being mapped in
> caches (in order to avoid any writebacks clobbering the DMA'd data) then we
> have a problem because our choice is either to:
>
> - Writeback the line that overlaps with raw_scr, then invalidate it along with
>   the rest of the cache lines covering raw_ssr. This won't do because there's
>   nothing stopping us from pulling the line back into the cache whilst the DMA
>   occurs on some systems - say we access raw_scr whilst the DMA to raw_ssr
>   happens or the CPU just speculatively prefetches something. The DMA API
>   cannot detect the former & would have no way to deal with it if it could,
>   and on systems where the latter is possible we have no choice but to
>   invalidate the cache lines again after the DMA is complete. What would we do
>   with the first line then? We can't write it back to preserve raw_scr because
>   it includes some potentially stale copy of the start of raw_ssr, which we
>   would then clobber the DMA'd data with.
>
> - Or, we just invalidate the line which overlaps with raw_scr. We can't do
>   this because we might throw away some part of raw_scr.
>
> So we have no choice but to align the buffers used for DMA at a cache line
> boundary. This is what the paragraph in Documentation/DMA-API.txt is about &
> the MMC code currently fails to meet the requirements it sets out. Allocating
> the buffer to be DMA'd to with kmalloc will meet that alignment requirement.
> One possible alternative would be to ensure that the raw_ssr field of struct
> mmc_card is always aligned at an ARCH_DMA_MINALIGN boundary within the struct,
> with nothing after it until another ARCH_DMA_MINALIGN boundary. That seems
> like it'd be a bit messy though, and probably easy for someone to break.
>
> Of course when a system has DMA that is coherent with caches there's generally
> no need to worry about any of this, but users of the DMA API generally can't
> rely on that, again as Documentation/DMA-API.txt says. Any buffers mapped for
> DMA should be ensured to be cache-line aligned at minimum, giving the
> architecture code backing the DMA API a chance to do the right thing for the
> given system.

Thanks a lot for providing this detailed description of the problem.
It really helped refresh my mind on how this works!

>
> Thanks,
>     Paul
>
>>
>> > +       raw_ssr = kmalloc(sizeof(card->raw_ssr), GFP_KERNEL);
>> > +       if (!raw_ssr)
>> > +               return -ENOMEM;
>> > +

Creating this bounce buffer does of course come with a small cost.
Maybe we could neglect its impact!?

However, we could also make card->raw_ssr cacheline aligned by
using the __cacheline_aligned attribute for it.

Also, this isn't the only case where the mmc core creates bounce
buffers while reading card registers during card initialization.

Perhaps a better method is to use __cacheline_aligned for those
buffers that need to be DMA-able (requests which are
MMC_DATA_READ|WRITE). Of course that change would increase the
number of bytes allocated for each card struct (due to padding), but
since this is just a single struct being allocated per card, this
shouldn't be an issue.

What do you think?
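
A short sketch of the __cacheline_aligned idea, for illustration only (it
was never applied):

  struct mmc_card {
          /* ... */
          u32 raw_ssr[16] __cacheline_aligned;
          /* ... */
  };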

>> > +       if (mmc_app_sd_status(card, raw_ssr)) {
>> >                 pr_warn("%s: problem reading SD Status register\n",
>> >                         mmc_hostname(card->host));
>> > +               kfree(raw_ssr);
>> >                 return 0;
>> >         }
>> >
>> >         for (i = 0; i < 16; i++)
>> > -               card->raw_ssr[i] = be32_to_cpu(card->raw_ssr[i]);
>> > +               card->raw_ssr[i] = be32_to_cpu(raw_ssr[i]);
>> > +
>> > +       kfree(raw_ssr);
>> >
>> >         /*
>> >          * UNSTUFF_BITS only works with four u32s so we have to offset the
>> > --
>> > 2.10.2
>>

Kind regards
Uffe


* Re: [PATCH] mmc: sd: Meet alignment requirements for raw_ssr DMA
  2016-11-23 12:36     ` Ulf Hansson
@ 2016-12-20 18:45       ` Lars-Peter Clausen
  2016-12-20 20:41         ` Ulf Hansson
  0 siblings, 1 reply; 8+ messages in thread
From: Lars-Peter Clausen @ 2016-12-20 18:45 UTC (permalink / raw)
  To: Ulf Hansson, Paul Burton; +Cc: linux-mmc, Maarten ter Huurne

On 11/23/2016 01:36 PM, Ulf Hansson wrote:
> [...]
> 
>>>> diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
>>>> index 73c762a..f6f40a1 100644
>>>> --- a/drivers/mmc/core/sd.c
>>>> +++ b/drivers/mmc/core/sd.c
>>>> @@ -223,6 +223,7 @@ static int mmc_decode_scr(struct mmc_card *card)
>>>>  static int mmc_read_ssr(struct mmc_card *card)
>>>>  {
>>>>         unsigned int au, es, et, eo;
>>>> +       u32 *raw_ssr;
>>>>         int i;
>>>>
>>>>         if (!(card->csd.cmdclass & CCC_APP_SPEC)) {
>>>> @@ -231,14 +232,21 @@ static int mmc_read_ssr(struct mmc_card *card)
>>>>                 return 0;
>>>>         }
>>>>
>>>> -       if (mmc_app_sd_status(card, card->raw_ssr)) {
>>>
>>> So the struct mmc_card, which contains "u32 raw_ssr[16]", is already
>>> allocated using kzalloc().
>>> As kzalloc() returns physically contiguous memory, using DMA to read
>>> the data into the memory pointed to by card->raw_ssr should be safe.
>>>
>>> Unless I am missing something, of course.
>>
>> Sadly you are. This is incorrect because the raw_ssr field of struct mmc_card
>> may not begin or end at a cache-line boundary. On systems where mapping a
>> buffer for DMA requires cache invalidation we may therefore lose data if it
>> shares a cache line with some other data. In this case let's say we have
> 
> Yes, of course - you are right! Apologies for my ignorance!
> 
>> something like this:
>>
>> 0x1000: u32 raw_scr[2];
>> 0x1008: u32 raw_ssr[16];
>>
>> Now let's say we have 16-byte cache lines and we try to map raw_ssr for DMA.
>> If the architecture requires that we invalidate the memory being mapped in
>> caches (in order to avoid any writebacks clobbering the DMA'd data) then we
>> have a problem because our choice is either to:
>>
>> - Writeback the line that overlaps with raw_scr, then invalidate it along with
>>   the rest of the cache lines covering raw_ssr. This won't do because there's
>>   nothing stopping us from pulling the line back into the cache whilst the DMA
>>   occurs on some systems - say we access raw_scr whilst the DMA to raw_ssr
>>   happens or the CPU just speculatively prefetches something. The DMA API
>>   cannot detect the former & would have no way to deal with it if it could,
>>   and on systems where the latter is possible we have no choice but to
>>   invalidate the cache lines again after the DMA is complete. What would we do
>>   with the first line then? We can't write it back to preserve raw_scr because
>>   it includes some potentially stale copy of the start of raw_ssr, which we
>>   would then clobber the DMA'd data with.
>>
>> - Or, we just invalidate the line which overlaps with raw_scr. We can't do
>>   this because we might throw away some part of raw_scr.
>>
>> So we have no choice but to align the buffers used for DMA at a cache line
>> boundary. This is what the paragraph in Documentation/DMA-API.txt is about &
>> the MMC code currently fails to meet the requirements it sets out. Allocating
>> the buffer to be DMA'd to with kmalloc will meet that alignment requirement.
>> One possible alternative would be to ensure that the raw_ssr field of struct
>> mmc_card is always aligned at an ARCH_DMA_MINALIGN boundary within the struct,
>> with nothing after it until another ARCH_DMA_MINALIGN boundary. That seems
>> like it'd be a bit messy though, and probably easy for someone to break.
>>
>> Of course when a system has DMA that is coherent with caches there's generally
>> no need to worry about any of this, but users of the DMA API generally can't
>> rely on that, again as Documentation/DMA-API.txt says. Any buffers mapped for
>> DMA should be ensured to be cache-line aligned at minimum, giving the
>> architecture code backing the DMA API a chance to do the right thing for the
>> given system.
> 
> Thanks a lot for providing this detailed description of the problem.
> It really helped refresh my mind on how this works!
> 
>>
>> Thanks,
>>     Paul
>>
>>>
>>>> +       raw_ssr = kmalloc(sizeof(card->raw_ssr), GFP_KERNEL);
>>>> +       if (!raw_ssr)
>>>> +               return -ENOMEM;
>>>> +
> 
> Creating this bounce buffer does of course come with a small cost.
> Maybe we could neglect its impact!?
>
> However, we could also make card->raw_ssr cacheline aligned by
> using the __cacheline_aligned attribute for it.
>
> Also, this isn't the only case where the mmc core creates bounce
> buffers while reading card registers during card initialization.
>
> Perhaps a better method is to use __cacheline_aligned for those
> buffers that need to be DMA-able (requests which are
> MMC_DATA_READ|WRITE). Of course that change would increase the
> number of bytes allocated for each card struct (due to padding), but
> since this is just a single struct being allocated per card, this
> shouldn't be an issue.
> 
> What do you think?

The issue is now in v4.9 and is causing problems on some platforms, with
the reported card name becoming corrupted (since cid.prodname is on the
same cacheline).

I'd say go with the simple solution for the fix, which is using the bounce
buffer. If somebody wants to rework this (and all the other bounce buffers)
to use __cacheline_aligned or something else that can be done as a separate
patch series on top of this.

When you apply the patch can you add

Fixes: 5275a652d296 ("mmc: sd: Export SD Status via “ssr” device attribute")

and tag it for stable?

Thanks,
- Lars


* Re: [PATCH] mmc: sd: Meet alignment requirements for raw_ssr DMA
  2016-12-20 18:45       ` Lars-Peter Clausen
@ 2016-12-20 20:41         ` Ulf Hansson
  2016-12-20 20:54           ` Lars-Peter Clausen
  0 siblings, 1 reply; 8+ messages in thread
From: Ulf Hansson @ 2016-12-20 20:41 UTC (permalink / raw)
  To: Lars-Peter Clausen; +Cc: Paul Burton, linux-mmc, Maarten ter Huurne

[...]

>>>>> +       raw_ssr = kmalloc(sizeof(card->raw_ssr), GFP_KERNEL);
>>>>> +       if (!raw_ssr)
>>>>> +               return -ENOMEM;
>>>>> +
>>
>> Creating this bounce buffer does of course come with a small cost.
>> Maybe we could neglect its impact!?
>>
>> However, we could also make card->raw_ssr cacheline aligned by
>> using the __cacheline_aligned attribute for it.
>>
>> Also, this isn't the only case where the mmc core creates bounce
>> buffers while reading card registers during card initialization.
>>
>> Perhaps a better method is to use __cacheline_aligned for those
>> buffers that need to be DMA-able (requests which are
>> MMC_DATA_READ|WRITE). Of course that change would increase the
>> number of bytes allocated for each card struct (due to padding), but
>> since this is just a single struct being allocated per card, this
>> shouldn't be an issue.
>>
>> What do you think?
>
> The issue is now in v4.9 and is causing problems on some platforms, with
> the reported card name becoming corrupted (since cid.prodname is on the
> same cacheline).
>
> I'd say go with the simple solution for the fix, which is using the bounce
> buffer. If somebody wants to rework this (and all the other bounce buffers)
> to use __cacheline_aligned or something else that can be done as a separate
> patch series on top of this.
>
> When you apply the patch can you add
>
> Fixes: 5275a652d296 ("mmc: sd: Export SD Status via “ssr” device attribute")

Ahh, I didn't realize this was a fix for a regression. I thought it
was a long-standing issue that more or less never occurred.

>
> and tag it for stable?

Yes, of course. I will deal with it tomorrow morning. Thanks for
pinging me on this!

Kind regards
Uffe


* Re: [PATCH] mmc: sd: Meet alignment requirements for raw_ssr DMA
  2016-12-20 20:41         ` Ulf Hansson
@ 2016-12-20 20:54           ` Lars-Peter Clausen
  0 siblings, 0 replies; 8+ messages in thread
From: Lars-Peter Clausen @ 2016-12-20 20:54 UTC (permalink / raw)
  To: Ulf Hansson; +Cc: Paul Burton, linux-mmc, Maarten ter Huurne

On 12/20/2016 09:41 PM, Ulf Hansson wrote:
> [...]
> 
>>>>>> +       raw_ssr = kmalloc(sizeof(card->raw_ssr), GFP_KERNEL);
>>>>>> +       if (!raw_ssr)
>>>>>> +               return -ENOMEM;
>>>>>> +
>>>
>>> Creating this bounce buffer does of course come with a small cost.
>>> Maybe we could neglect its impact!?
>>>
>>> However, we could also make card->raw_ssr cacheline aligned by
>>> using the __cacheline_aligned attribute for it.
>>>
>>> Also, this isn't the only case where the mmc core creates bounce
>>> buffers while reading card registers during card initialization.
>>>
>>> Perhaps a better method is to use __cacheline_aligned for those
>>> buffers that need to be DMA-able (requests which are
>>> MMC_DATA_READ|WRITE). Of course that change would increase the
>>> number of bytes allocated for each card struct (due to padding), but
>>> since this is just a single struct being allocated per card, this
>>> shouldn't be an issue.
>>>
>>> What do you think?
>>
>> The issue is now in v4.9 and is causing problems on some platforms, with
>> the reported card name becoming corrupted (since cid.prodname is on the
>> same cacheline).
>>
>> I'd say go with the simple solution for the fix, which is using the bounce
>> buffer. If somebody wants to rework this (and all the other bounce buffers)
>> to use __cacheline_aligned or something else that can be done as a separate
>> patch series on top of this.
>>
>> When you apply the patch can you add
>>
>> Fixes: 5275a652d296 ("mmc: sd: Export SD Status via “ssr” device attribute")
> 
> Ahh, I didn't realize this was a fix for a regression. I thought it
> was a long-standing issue that more or less never occurred.
> 
>>
>> and tag it for stable?
> 
> Yes, of course. I will deal with it tomorrow morning. Thanks for
> pinging me on this!

Thanks for the quick reply!



* Re: [PATCH] mmc: sd: Meet alignment requirements for raw_ssr DMA
  2016-11-11 14:22 [PATCH] mmc: sd: Meet alignment requirements for raw_ssr DMA Paul Burton
  2016-11-17 12:51 ` Ulf Hansson
@ 2016-12-21  7:42 ` Ulf Hansson
  1 sibling, 0 replies; 8+ messages in thread
From: Ulf Hansson @ 2016-12-21  7:42 UTC (permalink / raw)
  To: Paul Burton; +Cc: linux-mmc, Lars-Peter Clausen, # 4.0+, Maarten ter Huurne

+stable, Lars, Maarten

On 11 November 2016 at 15:22, Paul Burton <paul.burton@imgtec.com> wrote:
> The mmc_read_ssr() function results in DMA to the raw_ssr member of
> struct mmc_card, which is not guaranteed to be cache line aligned & thus
> might not meet the requirements set out in Documentation/DMA-API.txt:
>
>   Warnings:  Memory coherency operates at a granularity called the cache
>   line width.  In order for memory mapped by this API to operate
>   correctly, the mapped region must begin exactly on a cache line
>   boundary and end exactly on one (to prevent two separately mapped
>   regions from sharing a single cache line).  Since the cache line size
>   may not be known at compile time, the API will not enforce this
>   requirement.  Therefore, it is recommended that driver writers who
>   don't take special care to determine the cache line size at run time
>   only map virtual regions that begin and end on page boundaries (which
>   are guaranteed also to be cache line boundaries).
>
> On some systems where DMA is non-coherent this can lead to us losing
> data that shares cache lines with the raw_ssr array.
>
> Fix this by kmalloc'ing a temporary buffer to perform DMA into. kmalloc
> will ensure the buffer is suitably aligned, allowing the DMA to be
> performed without any loss of data.
>
> Signed-off-by: Paul Burton <paul.burton@imgtec.com>
> Cc: Ulf Hansson <ulf.hansson@linaro.org>
> Cc: linux-mmc@vger.kernel.org

Sorry for the delay!

I dropped these Cc lines, but added the tags below instead.

Fixes: 5275a652d296 ("mmc: sd: Export SD Status via “ssr” device attribute")
Cc: <stable@vger.kernel.org>

Thanks, applied for fixes!

Kind regards
Uffe

> ---
>  drivers/mmc/core/sd.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
> index 73c762a..f6f40a1 100644
> --- a/drivers/mmc/core/sd.c
> +++ b/drivers/mmc/core/sd.c
> @@ -223,6 +223,7 @@ static int mmc_decode_scr(struct mmc_card *card)
>  static int mmc_read_ssr(struct mmc_card *card)
>  {
>         unsigned int au, es, et, eo;
> +       u32 *raw_ssr;
>         int i;
>
>         if (!(card->csd.cmdclass & CCC_APP_SPEC)) {
> @@ -231,14 +232,21 @@ static int mmc_read_ssr(struct mmc_card *card)
>                 return 0;
>         }
>
> -       if (mmc_app_sd_status(card, card->raw_ssr)) {
> +       raw_ssr = kmalloc(sizeof(card->raw_ssr), GFP_KERNEL);
> +       if (!raw_ssr)
> +               return -ENOMEM;
> +
> +       if (mmc_app_sd_status(card, raw_ssr)) {
>                 pr_warn("%s: problem reading SD Status register\n",
>                         mmc_hostname(card->host));
> +               kfree(raw_ssr);
>                 return 0;
>         }
>
>         for (i = 0; i < 16; i++)
> -               card->raw_ssr[i] = be32_to_cpu(card->raw_ssr[i]);
> +               card->raw_ssr[i] = be32_to_cpu(raw_ssr[i]);
> +
> +       kfree(raw_ssr);
>
>         /*
>          * UNSTUFF_BITS only works with four u32s so we have to offset the
> --
> 2.10.2
>

