* Question about (and problem with) pflash data access
@ 2020-02-12 18:46 Guenter Roeck
  2020-02-12 21:39 ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 10+ messages in thread
From: Guenter Roeck @ 2020-02-12 18:46 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Philippe Mathieu-Daudé, qemu-block, Max Reitz

Hi,

I have been playing with pflash recently. For the most part it works,
but I do have an odd problem when trying to instantiate pflash on sx1.

My data file looks as follows:

0000000 0001 0000 aaaa aaaa 5555 5555 0000 0000
0000020 0000 0000 0000 0000 0000 0000 0000 0000
*
0002000 0002 0000 aaaa aaaa 5555 5555 0000 0000
0002020 0000 0000 0000 0000 0000 0000 0000 0000
*
0004000 0003 0000 aaaa aaaa 5555 5555 0000 0000
0004020 0000 0000 0000 0000 0000 0000 0000 0000
...

In the sx1 machine, this becomes:

0000000 6001 0000 aaaa aaaa 5555 5555 0000 0000
0000020 0000 0000 0000 0000 0000 0000 0000 0000
*
0002000 6002 0000 aaaa aaaa 5555 5555 0000 0000
0002020 0000 0000 0000 0000 0000 0000 0000 0000
*
0004000 6003 0000 aaaa aaaa 5555 5555 0000 0000
0004020 0000 0000 0000 0000 0000 0000 0000 0000
*
...

pflash is instantiated with "-drive file=flash.32M.test,format=raw,if=pflash".

I haven't had much success with pflash tracing: data accesses don't
show up there.
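
For completeness, the command line looks roughly like this (other
options trimmed):

    qemu-system-arm -M sx1 \
        -drive file=flash.32M.test,format=raw,if=pflash \
        -trace 'pflash*'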

I did find a number of problems with the sx1 emulation, but I have no clue
what is going on with pflash. As far as I can see, pflash works fine on
other machines. Can someone give me a hint about what to look out for?

Thanks,
Guenter


* Re: Question about (and problem with) pflash data access
  2020-02-12 18:46 Question about (and problem with) pflash data access Guenter Roeck
@ 2020-02-12 21:39 ` Philippe Mathieu-Daudé
  2020-02-12 23:09   ` Guenter Roeck
  0 siblings, 1 reply; 10+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-02-12 21:39 UTC (permalink / raw)
  To: Guenter Roeck, qemu-devel, Peter Maydell,
	Jean-Christophe PLAGNIOL-VILLARD
  Cc: Kevin Wolf, qemu-arm, qemu-block, Max Reitz

Cc'ing Jean-Christophe and Peter.

On 2/12/20 7:46 PM, Guenter Roeck wrote:
> Hi,
> 
> I have been playing with pflash recently. For the most part it works,
> but I do have an odd problem when trying to instantiate pflash on sx1.
> 
> My data file looks as follows.
> 
> 0000000 0001 0000 aaaa aaaa 5555 5555 0000 0000
> 0000020 0000 0000 0000 0000 0000 0000 0000 0000
> *
> 0002000 0002 0000 aaaa aaaa 5555 5555 0000 0000
> 0002020 0000 0000 0000 0000 0000 0000 0000 0000
> *
> 0004000 0003 0000 aaaa aaaa 5555 5555 0000 0000
> 0004020 0000 0000 0000 0000 0000 0000 0000 0000
> ...
> 
> In the sx1 machine, this becomes:
> 
> 0000000 6001 0000 aaaa aaaa 5555 5555 0000 0000
> 0000020 0000 0000 0000 0000 0000 0000 0000 0000
> *
> 0002000 6002 0000 aaaa aaaa 5555 5555 0000 0000
> 0002020 0000 0000 0000 0000 0000 0000 0000 0000
> *
> 0004000 6003 0000 aaaa aaaa 5555 5555 0000 0000
> 0004020 0000 0000 0000 0000 0000 0000 0000 0000
> *
> ...
> 
> pflash is instantiated with "-drive file=flash.32M.test,format=raw,if=pflash".
> 
> I don't have much success with pflash tracing - data accesses don't
> show up there.
> 
> I did find a number of problems with the sx1 emulation, but I have no clue
> what is going on with pflash. As far as I can see pflash works fine on
> other machines. Can someone give me a hint what to look out for ?

This is specific to the SX1, introduced in commit 997641a84ff:

  static uint64_t static_read(void *opaque, hwaddr offset,
                              unsigned size)
  {
      uint32_t *val = (uint32_t *) opaque;
      uint32_t mask = (4 / size) - 1;

      return *val >> ((offset & mask) << 3);
  }

Only guessing, but this looks like some hardware parity scheme; I imagine 
you would need to write the parity bits into your flash.32M file before 
starting QEMU, and then it would appear "normal" within the guest.
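
For illustration, here is my reading of the sub-word behaviour of that
handler, as a standalone sketch (the backing value is made up; in the
board code, opaque points at cs0val):

  #include <stdint.h>
  #include <stdio.h>

  /* Standalone copy of static_read(): opaque points at a fixed 32-bit
   * backing value, and sub-word reads select bytes out of it. */
  static uint64_t static_read(void *opaque, uint64_t offset, unsigned size)
  {
      uint32_t *val = (uint32_t *) opaque;
      uint32_t mask = (4 / size) - 1;

      return *val >> ((offset & mask) << 3);
  }

  int main(void)
  {
      uint32_t cs0val = 0x12345678;  /* made-up value */
      unsigned off;

      /* size=1: mask=3, so the low two offset bits select a byte. */
      for (off = 0; off < 4; off++)
          printf("size=1 off=%u -> 0x%02x\n", off,
                 (unsigned)(static_read(&cs0val, off, 1) & 0xff));
      /* size=4: mask=0, so the whole value is returned. */
      printf("size=4 off=0 -> 0x%08x\n",
             (unsigned)static_read(&cs0val, 0, 4));
      return 0;
  }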



* Re: Question about (and problem with) pflash data access
  2020-02-12 21:39 ` Philippe Mathieu-Daudé
@ 2020-02-12 23:09   ` Guenter Roeck
  2020-02-12 23:50     ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 10+ messages in thread
From: Guenter Roeck @ 2020-02-12 23:09 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: Kevin Wolf, Peter Maydell, qemu-block, qemu-devel, Max Reitz,
	qemu-arm, Jean-Christophe PLAGNIOL-VILLARD

On Wed, Feb 12, 2020 at 10:39:30PM +0100, Philippe Mathieu-Daudé wrote:
> Cc'ing Jean-Christophe and Peter.
> 
> On 2/12/20 7:46 PM, Guenter Roeck wrote:
> > Hi,
> > 
> > I have been playing with pflash recently. For the most part it works,
> > but I do have an odd problem when trying to instantiate pflash on sx1.
> > 
> > My data file looks as follows.
> > 
> > 0000000 0001 0000 aaaa aaaa 5555 5555 0000 0000
> > 0000020 0000 0000 0000 0000 0000 0000 0000 0000
> > *
> > 0002000 0002 0000 aaaa aaaa 5555 5555 0000 0000
> > 0002020 0000 0000 0000 0000 0000 0000 0000 0000
> > *
> > 0004000 0003 0000 aaaa aaaa 5555 5555 0000 0000
> > 0004020 0000 0000 0000 0000 0000 0000 0000 0000
> > ...
> > 
> > In the sx1 machine, this becomes:
> > 
> > 0000000 6001 0000 aaaa aaaa 5555 5555 0000 0000
> > 0000020 0000 0000 0000 0000 0000 0000 0000 0000
> > *
> > 0002000 6002 0000 aaaa aaaa 5555 5555 0000 0000
> > 0002020 0000 0000 0000 0000 0000 0000 0000 0000
> > *
> > 0004000 6003 0000 aaaa aaaa 5555 5555 0000 0000
> > 0004020 0000 0000 0000 0000 0000 0000 0000 0000
> > *
> > ...
> > 
> > pflash is instantiated with "-drive file=flash.32M.test,format=raw,if=pflash".
> > 
> > I don't have much success with pflash tracing - data accesses don't
> > show up there.
> > 
> > I did find a number of problems with the sx1 emulation, but I have no clue
> > what is going on with pflash. As far as I can see pflash works fine on
> > other machines. Can someone give me a hint what to look out for ?
> 
> This is specific to the SX1, introduced in commit 997641a84ff:
> 
>  64 static uint64_t static_read(void *opaque, hwaddr offset,
>  65                             unsigned size)
>  66 {
>  67     uint32_t *val = (uint32_t *) opaque;
>  68     uint32_t mask = (4 / size) - 1;
>  69
>  70     return *val >> ((offset & mask) << 3);
>  71 }
> 
> Only guessing, this looks like some hw parity, and I imagine you need to
> write the parity bits in your flash.32M file before starting QEMU, then it
> would appear "normal" within the guest.
> 
I thought this might be related, but that is not the case. I added log
messages, and even ran the code in gdb. static_read() and static_write()
are not executed.

Also,

    memory_region_init_io(&cs[0], NULL, &static_ops, &cs0val,
                          "sx1.cs0", OMAP_CS0_SIZE - flash_size);
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^
    memory_region_add_subregion(address_space,
                                OMAP_CS0_BASE + flash_size, &cs[0]);
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^

suggests that the code is only executed for memory accesses _after_
the actual flash. The memory tree is:

memory-region: system
  0000000000000000-ffffffffffffffff (prio 0, i/o): system
    0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0-1
    0000000000000000-0000000001ffffff (prio 0, rom): omap_sx1.flash0-0
    0000000002000000-0000000003ffffff (prio 0, i/o): sx1.cs0

I thought that the dual memory assignment (omap_sx1.flash0-1 and
omap_sx1.flash0-0) might play a role, but removing it didn't make
a difference either (not that I have any idea what it is supposed
to be used for).

Thanks,
Guenter


* Re: Question about (and problem with) pflash data access
  2020-02-12 23:09   ` Guenter Roeck
@ 2020-02-12 23:50     ` Philippe Mathieu-Daudé
  2020-02-13  7:40       ` Alexey Kardashevskiy
  0 siblings, 1 reply; 10+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-02-12 23:50 UTC (permalink / raw)
  To: Guenter Roeck
  Cc: Kevin Wolf, Peter Maydell, qemu-block, Alexey Kardashevskiy,
	qemu-devel, Max Reitz, qemu-arm, Paolo Bonzini,
	Jean-Christophe PLAGNIOL-VILLARD

Cc'ing Paolo and Alexey.

On 2/13/20 12:09 AM, Guenter Roeck wrote:
> On Wed, Feb 12, 2020 at 10:39:30PM +0100, Philippe Mathieu-Daudé wrote:
>> Cc'ing Jean-Christophe and Peter.
>>
>> On 2/12/20 7:46 PM, Guenter Roeck wrote:
>>> Hi,
>>>
>>> I have been playing with pflash recently. For the most part it works,
>>> but I do have an odd problem when trying to instantiate pflash on sx1.
>>>
>>> My data file looks as follows.
>>>
>>> 0000000 0001 0000 aaaa aaaa 5555 5555 0000 0000
>>> 0000020 0000 0000 0000 0000 0000 0000 0000 0000
>>> *
>>> 0002000 0002 0000 aaaa aaaa 5555 5555 0000 0000
>>> 0002020 0000 0000 0000 0000 0000 0000 0000 0000
>>> *
>>> 0004000 0003 0000 aaaa aaaa 5555 5555 0000 0000
>>> 0004020 0000 0000 0000 0000 0000 0000 0000 0000
>>> ...
>>>
>>> In the sx1 machine, this becomes:
>>>
>>> 0000000 6001 0000 aaaa aaaa 5555 5555 0000 0000
>>> 0000020 0000 0000 0000 0000 0000 0000 0000 0000
>>> *
>>> 0002000 6002 0000 aaaa aaaa 5555 5555 0000 0000
>>> 0002020 0000 0000 0000 0000 0000 0000 0000 0000
>>> *
>>> 0004000 6003 0000 aaaa aaaa 5555 5555 0000 0000
>>> 0004020 0000 0000 0000 0000 0000 0000 0000 0000
>>> *
>>> ...
>>>
>>> pflash is instantiated with "-drive file=flash.32M.test,format=raw,if=pflash".
>>>
>>> I don't have much success with pflash tracing - data accesses don't
>>> show up there.
>>>
>>> I did find a number of problems with the sx1 emulation, but I have no clue
>>> what is going on with pflash. As far as I can see pflash works fine on
>>> other machines. Can someone give me a hint what to look out for ?
>>
>> This is specific to the SX1, introduced in commit 997641a84ff:
>>
>>   64 static uint64_t static_read(void *opaque, hwaddr offset,
>>   65                             unsigned size)
>>   66 {
>>   67     uint32_t *val = (uint32_t *) opaque;
>>   68     uint32_t mask = (4 / size) - 1;
>>   69
>>   70     return *val >> ((offset & mask) << 3);
>>   71 }
>>
>> Only guessing, this looks like some hw parity, and I imagine you need to
>> write the parity bits in your flash.32M file before starting QEMU, then it
>> would appear "normal" within the guest.
>>
> I thought this might be related, but that is not the case. I added log
> messages, and even ran the code in gdb. static_read() and static_write()
> are not executed.
> 
> Also,
> 
>      memory_region_init_io(&cs[0], NULL, &static_ops, &cs0val,
>                            "sx1.cs0", OMAP_CS0_SIZE - flash_size);
>                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^
>      memory_region_add_subregion(address_space,
>                                  OMAP_CS0_BASE + flash_size, &cs[0]);
>                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> suggests that the code is only executed for memory accesses _after_
> the actual flash. The memory tree is:
> 
> memory-region: system
>    0000000000000000-ffffffffffffffff (prio 0, i/o): system
>      0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0-1
>      0000000000000000-0000000001ffffff (prio 0, rom): omap_sx1.flash0-0

Eh, two memory regions with the same size and the same priority... Is this 
legal?

(qemu) info mtree -f -d
FlatView #0
  AS "memory", root: system
  AS "cpu-memory-0", root: system
  Root memory region: system
   0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0-1
   0000000002000000-0000000003ffffff (prio 0, i/o): sx1.cs0
   0000000004000000-0000000007ffffff (prio 0, i/o): sx1.cs1
   0000000008000000-000000000bffffff (prio 0, i/o): sx1.cs3
   0000000010000000-0000000011ffffff (prio 0, ram): omap1.dram
   0000000020000000-000000002002ffff (prio 0, ram): omap1.sram
   ...
   Dispatch
     Physical sections
       #0 @0000000000000000..ffffffffffffffff (noname) [unassigned]
       #1 @0000000000000000..0000000001ffffff omap_sx1.flash0-1 [not dirty]
       #2 @0000000002000000..0000000003ffffff sx1.cs0 [ROM]
       #3 @0000000004000000..0000000007ffffff sx1.cs1 [watch]
       #4 @0000000008000000..000000000bffffff sx1.cs3
       #5 @0000000010000000..0000000011ffffff omap1.dram
       #6 @0000000020000000..000000002002ffff omap1.sram
       ...
     Nodes (9 bits per level, 6 levels) ptr=[3] skip=4
       [0]
           0       skip=3  ptr=[3]
           1..511  skip=1  ptr=NIL
       [1]
           0       skip=2  ptr=[3]
           1..511  skip=1  ptr=NIL
       [2]
           0       skip=1  ptr=[3]
           1..511  skip=1  ptr=NIL
       [3]
           0       skip=1  ptr=[4]
           1       skip=1  ptr=[5]
           2       skip=2  ptr=[7]
           3..13   skip=1  ptr=NIL
          14       skip=2  ptr=[9]
          15       skip=2  ptr=[11]
          16..511  skip=1  ptr=NIL
       [4]
           0..63   skip=0  ptr=#1
          64..127  skip=0  ptr=#2
         128..255  skip=0  ptr=#3
         256..383  skip=0  ptr=#4
         384..511  skip=1  ptr=NIL

So the romd wins.

>      0000000002000000-0000000003ffffff (prio 0, i/o): sx1.cs0
> 
> I thought that the dual memory assignment (omap_sx1.flash0-1 and
> omap_sx1.flash0-0) might play a role, but removing that didn't make
> a difference either (not that I have any idea what it is supposed
> to be used for).
> 
> Thanks,
> Guenter
> 



* Re: Question about (and problem with) pflash data access
  2020-02-12 23:50     ` Philippe Mathieu-Daudé
@ 2020-02-13  7:40       ` Alexey Kardashevskiy
  2020-02-13  9:51         ` Paolo Bonzini
  0 siblings, 1 reply; 10+ messages in thread
From: Alexey Kardashevskiy @ 2020-02-13  7:40 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé, Guenter Roeck
  Cc: Kevin Wolf, Peter Maydell, qemu-block, qemu-devel, Max Reitz,
	qemu-arm, Paolo Bonzini, Jean-Christophe PLAGNIOL-VILLARD



On 13/02/2020 10:50, Philippe Mathieu-Daudé wrote:
> Cc'ing Paolo and Alexey.
> 
> On 2/13/20 12:09 AM, Guenter Roeck wrote:
>> On Wed, Feb 12, 2020 at 10:39:30PM +0100, Philippe Mathieu-Daudé wrote:
>>> Cc'ing Jean-Christophe and Peter.
>>>
>>> On 2/12/20 7:46 PM, Guenter Roeck wrote:
>>>> Hi,
>>>>
>>>> I have been playing with pflash recently. For the most part it works,
>>>> but I do have an odd problem when trying to instantiate pflash on sx1.
>>>>
>>>> My data file looks as follows.
>>>>
>>>> 0000000 0001 0000 aaaa aaaa 5555 5555 0000 0000
>>>> 0000020 0000 0000 0000 0000 0000 0000 0000 0000
>>>> *
>>>> 0002000 0002 0000 aaaa aaaa 5555 5555 0000 0000
>>>> 0002020 0000 0000 0000 0000 0000 0000 0000 0000
>>>> *
>>>> 0004000 0003 0000 aaaa aaaa 5555 5555 0000 0000
>>>> 0004020 0000 0000 0000 0000 0000 0000 0000 0000
>>>> ...
>>>>
>>>> In the sx1 machine, this becomes:
>>>>
>>>> 0000000 6001 0000 aaaa aaaa 5555 5555 0000 0000
>>>> 0000020 0000 0000 0000 0000 0000 0000 0000 0000
>>>> *
>>>> 0002000 6002 0000 aaaa aaaa 5555 5555 0000 0000
>>>> 0002020 0000 0000 0000 0000 0000 0000 0000 0000
>>>> *
>>>> 0004000 6003 0000 aaaa aaaa 5555 5555 0000 0000
>>>> 0004020 0000 0000 0000 0000 0000 0000 0000 0000
>>>> *
>>>> ...
>>>>
>>>> pflash is instantiated with "-drive
>>>> file=flash.32M.test,format=raw,if=pflash".
>>>>
>>>> I don't have much success with pflash tracing - data accesses don't
>>>> show up there.
>>>>
>>>> I did find a number of problems with the sx1 emulation, but I have
>>>> no clue
>>>> what is going on with pflash. As far as I can see pflash works fine on
>>>> other machines. Can someone give me a hint what to look out for ?
>>>
>>> This is specific to the SX1, introduced in commit 997641a84ff:
>>>
>>>   64 static uint64_t static_read(void *opaque, hwaddr offset,
>>>   65                             unsigned size)
>>>   66 {
>>>   67     uint32_t *val = (uint32_t *) opaque;
>>>   68     uint32_t mask = (4 / size) - 1;
>>>   69
>>>   70     return *val >> ((offset & mask) << 3);
>>>   71 }
>>>
>>> Only guessing, this looks like some hw parity, and I imagine you need to
>>> write the parity bits in your flash.32M file before starting QEMU,
>>> then it
>>> would appear "normal" within the guest.
>>>
>> I thought this might be related, but that is not the case. I added log
>> messages, and even ran the code in gdb. static_read() and static_write()
>> are not executed.
>>
>> Also,
>>
>>      memory_region_init_io(&cs[0], NULL, &static_ops, &cs0val,
>>                            "sx1.cs0", OMAP_CS0_SIZE - flash_size);
>>                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^
>>      memory_region_add_subregion(address_space,
>>                                  OMAP_CS0_BASE + flash_size, &cs[0]);
>>                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
>>
>> suggests that the code is only executed for memory accesses _after_
>> the actual flash. The memory tree is:
>>
>> memory-region: system
>>    0000000000000000-ffffffffffffffff (prio 0, i/o): system
>>      0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0-1
>>      0000000000000000-0000000001ffffff (prio 0, rom): omap_sx1.flash0-0
> 
> Eh two memory regions with same size and same priority... Is this legal?


I'd say yes, if it is used with memory_region_set_enabled() to make sure
only one of them is enabled at a time. Having both enabled is weird, and
we should print a warning.
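
Something along these lines, maybe (just a sketch; the variable names
are made up, not what the sx1 code actually uses):

  /* Both regions stay mapped at the same address; toggle them so
   * that exactly one is enabled (i.e. visible) at any time. */
  memory_region_set_enabled(&flash0_romd, true);
  memory_region_set_enabled(&flash0_rom, false);

Thanks,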



> 
> (qemu) info mtree -f -d
> FlatView #0
>  AS "memory", root: system
>  AS "cpu-memory-0", root: system
>  Root memory region: system
>   0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0-1
>   0000000002000000-0000000003ffffff (prio 0, i/o): sx1.cs0
>   0000000004000000-0000000007ffffff (prio 0, i/o): sx1.cs1
>   0000000008000000-000000000bffffff (prio 0, i/o): sx1.cs3
>   0000000010000000-0000000011ffffff (prio 0, ram): omap1.dram
>   0000000020000000-000000002002ffff (prio 0, ram): omap1.sram
>   ...
>   Dispatch
>     Physical sections
>       #0 @0000000000000000..ffffffffffffffff (noname) [unassigned]
>       #1 @0000000000000000..0000000001ffffff omap_sx1.flash0-1 [not dirty]
>       #2 @0000000002000000..0000000003ffffff sx1.cs0 [ROM]
>       #3 @0000000004000000..0000000007ffffff sx1.cs1 [watch]
>       #4 @0000000008000000..000000000bffffff sx1.cs3
>       #5 @0000000010000000..0000000011ffffff omap1.dram
>       #6 @0000000020000000..000000002002ffff omap1.sram
>       ...
>     Nodes (9 bits per level, 6 levels) ptr=[3] skip=4
>       [0]
>           0       skip=3  ptr=[3]
>           1..511  skip=1  ptr=NIL
>       [1]
>           0       skip=2  ptr=[3]
>           1..511  skip=1  ptr=NIL
>       [2]
>           0       skip=1  ptr=[3]
>           1..511  skip=1  ptr=NIL
>       [3]
>           0       skip=1  ptr=[4]
>           1       skip=1  ptr=[5]
>           2       skip=2  ptr=[7]
>           3..13   skip=1  ptr=NIL
>          14       skip=2  ptr=[9]
>          15       skip=2  ptr=[11]
>          16..511  skip=1  ptr=NIL
>       [4]
>           0..63   skip=0  ptr=#1
>          64..127  skip=0  ptr=#2
>         128..255  skip=0  ptr=#3
>         256..383  skip=0  ptr=#4
>         384..511  skip=1  ptr=NIL
> 
> So the romd wins.
> 
>>      0000000002000000-0000000003ffffff (prio 0, i/o): sx1.cs0
>>
>> I thought that the dual memory assignment (omap_sx1.flash0-1 and
>> omap_sx1.flash0-0) might play a role, but removing that didn't make
>> a difference either (not that I have any idea what it is supposed
>> to be used for).
>>
>> Thanks,
>> Guenter
>>
> 

-- 
Alexey


* Re: Question about (and problem with) pflash data access
  2020-02-13  7:40       ` Alexey Kardashevskiy
@ 2020-02-13  9:51         ` Paolo Bonzini
  2020-02-13 14:26           ` Guenter Roeck
  0 siblings, 1 reply; 10+ messages in thread
From: Paolo Bonzini @ 2020-02-13  9:51 UTC (permalink / raw)
  To: Alexey Kardashevskiy, Philippe Mathieu-Daudé, Guenter Roeck
  Cc: Kevin Wolf, Peter Maydell, qemu-block, qemu-devel, Max Reitz,
	qemu-arm, Jean-Christophe PLAGNIOL-VILLARD

On 13/02/20 08:40, Alexey Kardashevskiy wrote:
>>>
>>> memory-region: system
>>>    0000000000000000-ffffffffffffffff (prio 0, i/o): system
>>>      0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0-1
>>>      0000000000000000-0000000001ffffff (prio 0, rom): omap_sx1.flash0-0
>> Eh two memory regions with same size and same priority... Is this legal?
> 
> I'd say yes if used with memory_region_set_enabled() to make sure only
> one is enabled. Having both enabled is weird and we should print a
> warning.

Yeah, it's undefined which one becomes visible.
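
If both really have to stay mapped, one way to make the result
well-defined is to add them as overlapping subregions with distinct
priorities, along these lines (a sketch, with hypothetical names):

  /* The subregion with the higher priority value is the one that
   * becomes visible in the flat view. */
  memory_region_add_subregion_overlap(sysmem, 0, &flash0_romd, 1);
  memory_region_add_subregion_overlap(sysmem, 0, &flash0_rom,  0);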

Paolo



* Re: Question about (and problem with) pflash data access
  2020-02-13  9:51         ` Paolo Bonzini
@ 2020-02-13 14:26           ` Guenter Roeck
  2020-02-13 14:39             ` Peter Maydell
  0 siblings, 1 reply; 10+ messages in thread
From: Guenter Roeck @ 2020-02-13 14:26 UTC (permalink / raw)
  To: Paolo Bonzini, Alexey Kardashevskiy, Philippe Mathieu-Daudé
  Cc: Kevin Wolf, Peter Maydell, qemu-block, qemu-devel, Max Reitz,
	qemu-arm, Jean-Christophe PLAGNIOL-VILLARD

On 2/13/20 1:51 AM, Paolo Bonzini wrote:
> On 13/02/20 08:40, Alexey Kardashevskiy wrote:
>>>>
>>>> memory-region: system
>>>>     0000000000000000-ffffffffffffffff (prio 0, i/o): system
>>>>       0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0-1
>>>>       0000000000000000-0000000001ffffff (prio 0, rom): omap_sx1.flash0-0
>>> Eh two memory regions with same size and same priority... Is this legal?
>>
>> I'd say yes if used with memory_region_set_enabled() to make sure only
>> one is enabled. Having both enabled is weird and we should print a
>> warning.
> 
> Yeah, it's undefined which one becomes visible.
> 

I have a patch fixing that, resulting in:

(qemu) info mtree -f
FlatView #0
  AS "I/O", root: io
  Root memory region: io
   0000000000000000-000000000000ffff (prio 0, i/o): io

FlatView #1
  AS "memory", root: system
  AS "cpu-memory-0", root: system
  Root memory region: system
   0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0
   0000000002000000-0000000003ffffff (prio 0, i/o): sx1.cs0
   0000000004000000-0000000007ffffff (prio 0, i/o): sx1.cs1
   0000000008000000-000000000bffffff (prio 0, i/o): sx1.cs2
   000000000c000000-000000000fffffff (prio 0, i/o): sx1.cs3
   ...

but unfortunately that doesn't fix my problem. The data in the
omap_sx1.flash0 region is as wrong as before.

What really puzzles me is that there is no trace output for flash data
accesses (trace_pflash_data_read and trace_pflash_data_write), meaning
the actual flash data accesses must be handled elsewhere. Can someone
give me a hint where that might be? Clearly I am missing something
about the inner workings of QEMU.

Thanks,
Guenter


* Re: Question about (and problem with) pflash data access
  2020-02-13 14:26           ` Guenter Roeck
@ 2020-02-13 14:39             ` Peter Maydell
  2020-02-13 15:24               ` Philippe Mathieu-Daudé
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Maydell @ 2020-02-13 14:39 UTC (permalink / raw)
  To: Guenter Roeck
  Cc: Kevin Wolf, Qemu-block, Alexey Kardashevskiy,
	Philippe Mathieu-Daudé,
	QEMU Developers, Max Reitz, qemu-arm, Paolo Bonzini,
	Jean-Christophe PLAGNIOL-VILLARD

On Thu, 13 Feb 2020 at 14:26, Guenter Roeck <linux@roeck-us.net> wrote:
> What really puzzles me is that there is no trace output for
> flash data accesses (trace_pflash_data_read and trace_pflash_data_write),
> meaning the actual flash data access must be handled elsewhere.
> Can someone give me a hint where that might be ?
> Clearly I am missing something about inner workings of qemu.

Probably the device is in 'romd' mode. A QEMU MemoryRegion
can be:
 * RAM (includes ROM for these purposes) -- backed by host
   memory, reads and writes (if permitted) go straight to
   the host memory via fastpath accesses
 * MMIO -- backed by a read and write accessor function,
   all accesses go to these functions
 * "ROM device" -- a mix of the above where there is a
   backing bit of host memory but also accessor functions.
   When the device is in "romd" mode, reads go direct to
   host memory, and writes still go to the accessor function.
   When the device is not in "romd" mode, reads also go
   to the accessor function.

We use this in the pflash devices to make the common case
("just read the flash") fast. When the guest makes a write
to flash that puts it into programming mode, we call
memory_region_rom_device_set_romd(..., false) so we can
intercept reads and make them do the right thing for
programming mode.
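
As a rough sketch of that flow in the flash model (from memory, so
check hw/block/pflash_cfi01.c for the real thing):

  /* Guest wrote a command that leaves read-array mode: start
   * intercepting reads as well. */
  memory_region_rom_device_set_romd(&pfl->mem, false);

  /* Guest put the chip back into read-array mode: return to fast,
   * host-memory-backed reads. */
  memory_region_rom_device_set_romd(&pfl->mem, true);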

thanks
-- PMM


* Re: Question about (and problem with) pflash data access
  2020-02-13 14:39             ` Peter Maydell
@ 2020-02-13 15:24               ` Philippe Mathieu-Daudé
  2020-02-13 16:21                 ` Guenter Roeck
  0 siblings, 1 reply; 10+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-02-13 15:24 UTC (permalink / raw)
  To: Peter Maydell, Guenter Roeck
  Cc: Kevin Wolf, Qemu-block, Alexey Kardashevskiy, QEMU Developers,
	Max Reitz, qemu-arm, Paolo Bonzini,
	Jean-Christophe PLAGNIOL-VILLARD

On 2/13/20 3:39 PM, Peter Maydell wrote:
> On Thu, 13 Feb 2020 at 14:26, Guenter Roeck <linux@roeck-us.net> wrote:
>> What really puzzles me is that there is no trace output for
>> flash data accesses (trace_pflash_data_read and trace_pflash_data_write),
>> meaning the actual flash data access must be handled elsewhere.
>> Can someone give me a hint where that might be ?

If you can share a built kernel/dtb/rootfs for this machine, I can have a 
look at it.

>> Clearly I am missing something about inner workings of qemu.

You can see all the pflash events using '-trace pflash*'.

> 
> Probably the device is in 'romd' mode. A QEMU MemoryRegion
> can be:
>   * RAM (includes ROM for these purposes) -- backed by host
>     memory, reads and writes (if permitted) go straight to
>     the host memory via fastpath accesses

No tracing here.

>   * MMIO -- backed by a read and write accessor function,
>     all accesses go to these functions
>   * "ROM device" -- a mix of the above where there is a
>     backing bit of host memory but also accessor functions.
>     When the device is in "romd" mode, reads go direct to
>     host memory, and writes still go to the accessor function.
>     When the device is not in "romd" mode, reads also go
>     to the accessor function.
> 
> We use this in the pflash devices to make the common case
> ("just read the flash") fast. When the guest makes a write
> to flash that puts it into programming mode, we call
> memory_region_rom_device_set_romd(..., false) so we can
> intercept reads and make them do the right thing for
> programming mode.
> 
> thanks
> -- PMM
> 



* Re: Question about (and problem with) pflash data access
  2020-02-13 15:24               ` Philippe Mathieu-Daudé
@ 2020-02-13 16:21                 ` Guenter Roeck
  0 siblings, 0 replies; 10+ messages in thread
From: Guenter Roeck @ 2020-02-13 16:21 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: Kevin Wolf, Peter Maydell, Qemu-block, Alexey Kardashevskiy,
	QEMU Developers, Max Reitz, qemu-arm, Paolo Bonzini,
	Jean-Christophe PLAGNIOL-VILLARD

On Thu, Feb 13, 2020 at 04:24:24PM +0100, Philippe Mathieu-Daudé wrote:
> On 2/13/20 3:39 PM, Peter Maydell wrote:
> > On Thu, 13 Feb 2020 at 14:26, Guenter Roeck <linux@roeck-us.net> wrote:
> > > What really puzzles me is that there is no trace output for
> > > flash data accesses (trace_pflash_data_read and trace_pflash_data_write),
> > > meaning the actual flash data access must be handled elsewhere.
> > > Can someone give me a hint where that might be ?
> 
> If you can share built kernel/dtb/rootfs for this machine I can have a look
> at it.
> 
> > > Clearly I am missing something about inner workings of qemu.
> 
> You can see all the pflash events using '-trace pflash*'.
> 
Yes, I got that much ;-).

> > 
> > Probably the device is in 'romd' mode. A QEMU MemoryRegion
> > can be:
> >   * RAM (includes ROM for these purposes) -- backed by host
> >     memory, reads and writes (if permitted) go straight to
> >     the host memory via fastpath accesses
> 
> No tracing here.
> 
> >   * MMIO -- backed by a read and write accessor function,
> >     all accesses go to these functions
> >   * "ROM device" -- a mix of the above where there is a
> >     backing bit of host memory but also accessor functions.
> >     When the device is in "romd" mode, reads go direct to
> >     host memory, and writes still go to the accessor function.
> >     When the device is not in "romd" mode, reads also go
> >     to the accessor function.
> > 
> > We use this in the pflash devices to make the common case
> > ("just read the flash") fast. When the guest makes a write
> > to flash that puts it into programming mode, we call
> > memory_region_rom_device_set_romd(..., false) so we can
> > intercept reads and make them do the right thing for
> > programming mode.
> > 

Disabling the calls to memory_region_rom_device_set_romd(..., true)
got me the trace output I was looking for. It turns out that reads
which are supposedly from the beginning of the flash actually start at
offset 0x180000 instead of 0. This explains the "corruption", since
that is exactly the data in my test file at that offset. Adding debug
output to the Linux kernel confirms that this offset originates there.

Taking a closer look at the kernel source shows that the flash partitions
for the SX1 indeed start at offset 0x180000 in the flash, not at 0. Bummer.

Sorry for all the noise. I should have paid closer attention to the
kernel source. Oh well, at least I learned a lot about QEMU.

Thanks,
Guenter

