From: "Philippe Mathieu-Daudé" <philmd@redhat.com>
To: Guenter Roeck <linux@roeck-us.net>
Cc: Kevin Wolf <kwolf@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	qemu-block@nongnu.org, Alexey Kardashevskiy <aik@ozlabs.ru>,
	qemu-devel@nongnu.org, Max Reitz <mreitz@redhat.com>,
	qemu-arm <qemu-arm@nongnu.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Subject: Re: Question about (and problem with) pflash data access
Date: Thu, 13 Feb 2020 00:50:20 +0100	[thread overview]
Message-ID: <560224fe-f0a3-c64a-6689-e824225cfbb9@redhat.com> (raw)
In-Reply-To: <20200212230918.GA27242@roeck-us.net>

Cc'ing Paolo and Alexey.

On 2/13/20 12:09 AM, Guenter Roeck wrote:
> On Wed, Feb 12, 2020 at 10:39:30PM +0100, Philippe Mathieu-Daudé wrote:
>> Cc'ing Jean-Christophe and Peter.
>>
>> On 2/12/20 7:46 PM, Guenter Roeck wrote:
>>> Hi,
>>>
>>> I have been playing with pflash recently. For the most part it works,
>>> but I do have an odd problem when trying to instantiate pflash on sx1.
>>>
>>> My data file looks as follows.
>>>
>>> 0000000 0001 0000 aaaa aaaa 5555 5555 0000 0000
>>> 0000020 0000 0000 0000 0000 0000 0000 0000 0000
>>> *
>>> 0002000 0002 0000 aaaa aaaa 5555 5555 0000 0000
>>> 0002020 0000 0000 0000 0000 0000 0000 0000 0000
>>> *
>>> 0004000 0003 0000 aaaa aaaa 5555 5555 0000 0000
>>> 0004020 0000 0000 0000 0000 0000 0000 0000 0000
>>> ...
>>>
>>> In the sx1 machine, this becomes:
>>>
>>> 0000000 6001 0000 aaaa aaaa 5555 5555 0000 0000
>>> 0000020 0000 0000 0000 0000 0000 0000 0000 0000
>>> *
>>> 0002000 6002 0000 aaaa aaaa 5555 5555 0000 0000
>>> 0002020 0000 0000 0000 0000 0000 0000 0000 0000
>>> *
>>> 0004000 6003 0000 aaaa aaaa 5555 5555 0000 0000
>>> 0004020 0000 0000 0000 0000 0000 0000 0000 0000
>>> *
>>> ...
>>>
>>> pflash is instantiated with "-drive file=flash.32M.test,format=raw,if=pflash".
>>>
>>> I don't have much success with pflash tracing - data accesses don't
>>> show up there.
>>>
>>> I did find a number of problems with the sx1 emulation, but I have no clue
>>> what is going on with pflash. As far as I can see pflash works fine on
>>> other machines. Can someone give me a hint about what to look out for?
>>
>> This is specific to the SX1, introduced in commit 997641a84ff:
>>
>>   64 static uint64_t static_read(void *opaque, hwaddr offset,
>>   65                             unsigned size)
>>   66 {
>>   67     uint32_t *val = (uint32_t *) opaque;
>>   68     uint32_t mask = (4 / size) - 1;
>>   69
>>   70     return *val >> ((offset & mask) << 3);
>>   71 }
>>
>> Only guessing, but this looks like some hw parity, and I imagine you need to
>> write the parity bits into your flash.32M file before starting QEMU; then it
>> would appear "normal" within the guest.
>>
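
For reference, here is a standalone sketch of what that read path computes for
each access size. It is plain C, not QEMU code, and the cs0val value is made up
for illustration:

  /* Standalone model of the quoted static_read() arithmetic, for
   * illustration only. It mirrors the mask/shift computation so the
   * returned lanes can be inspected per access size. */
  #include <stdint.h>
  #include <stdio.h>

  static uint64_t model_static_read(uint32_t val, uint64_t offset, unsigned size)
  {
      uint32_t mask = (4 / size) - 1;       /* 0 for 32-bit, 1 for 16-bit, 3 for 8-bit */

      return val >> ((offset & mask) << 3); /* shift by whole bytes */
  }

  int main(void)
  {
      const uint32_t cs0val = 0x44332211;   /* made-up chip-select value */
      const unsigned sizes[] = { 1, 2, 4 };

      for (unsigned i = 0; i < 3; i++) {
          for (uint64_t off = 0; off < 4; off += sizes[i]) {
              printf("size=%u offset=%u -> 0x%x\n", sizes[i], (unsigned)off,
                     (unsigned)model_static_read(cs0val, off, sizes[i]));
          }
      }
      return 0;
  }

The model just prints the raw shifted value without truncating it to the
access size.
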
> I thought this might be related, but that is not the case. I added log
> messages, and even ran the code in gdb. static_read() and static_write()
> are not executed.
> 
> Also,
> 
>      memory_region_init_io(&cs[0], NULL, &static_ops, &cs0val,
>                            "sx1.cs0", OMAP_CS0_SIZE - flash_size);
>                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^
>      memory_region_add_subregion(address_space,
>                                  OMAP_CS0_BASE + flash_size, &cs[0]);
>                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> suggests that the code is only executed for memory accesses _after_
> the actual flash. The memory tree is:
> 
> memory-region: system
>    0000000000000000-ffffffffffffffff (prio 0, i/o): system
>      0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0-1
>      0000000000000000-0000000001ffffff (prio 0, rom): omap_sx1.flash0-0

Eh, two memory regions with the same size and the same priority... Is this legal?

(qemu) info mtree -f -d
FlatView #0
  AS "memory", root: system
  AS "cpu-memory-0", root: system
  Root memory region: system
   0000000000000000-0000000001ffffff (prio 0, romd): omap_sx1.flash0-1
   0000000002000000-0000000003ffffff (prio 0, i/o): sx1.cs0
   0000000004000000-0000000007ffffff (prio 0, i/o): sx1.cs1
   0000000008000000-000000000bffffff (prio 0, i/o): sx1.cs3
   0000000010000000-0000000011ffffff (prio 0, ram): omap1.dram
   0000000020000000-000000002002ffff (prio 0, ram): omap1.sram
   ...
   Dispatch
     Physical sections
       #0 @0000000000000000..ffffffffffffffff (noname) [unassigned]
       #1 @0000000000000000..0000000001ffffff omap_sx1.flash0-1 [not dirty]
       #2 @0000000002000000..0000000003ffffff sx1.cs0 [ROM]
       #3 @0000000004000000..0000000007ffffff sx1.cs1 [watch]
       #4 @0000000008000000..000000000bffffff sx1.cs3
       #5 @0000000010000000..0000000011ffffff omap1.dram
       #6 @0000000020000000..000000002002ffff omap1.sram
       ...
     Nodes (9 bits per level, 6 levels) ptr=[3] skip=4
       [0]
           0       skip=3  ptr=[3]
           1..511  skip=1  ptr=NIL
       [1]
           0       skip=2  ptr=[3]
           1..511  skip=1  ptr=NIL
       [2]
           0       skip=1  ptr=[3]
           1..511  skip=1  ptr=NIL
       [3]
           0       skip=1  ptr=[4]
           1       skip=1  ptr=[5]
           2       skip=2  ptr=[7]
           3..13   skip=1  ptr=NIL
          14       skip=2  ptr=[9]
          15       skip=2  ptr=[11]
          16..511  skip=1  ptr=NIL
       [4]
           0..63   skip=0  ptr=#1
          64..127  skip=0  ptr=#2
         128..255  skip=0  ptr=#3
         256..383  skip=0  ptr=#4
         384..511  skip=1  ptr=NIL

So the romd region wins in the flatview.
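
If the overlap is intentional, one way to make the outcome deterministic would
be to map the two siblings with explicit, distinct priorities via
memory_region_add_subregion_overlap(). A rough sketch only, not the actual
sx1/pflash code; the function and region names are illustrative:

  /* Rough sketch: overlapping sibling regions are normally mapped with
   * explicit, distinct priorities so the flatview result does not depend
   * on insertion order. Names below are made up for illustration. */
  #include "exec/memory.h"

  static void map_overlapping_flash(MemoryRegion *sysmem,
                                    MemoryRegion *flash_romd,
                                    MemoryRegion *flash_rom_alias,
                                    hwaddr base)
  {
      /* The higher priority wins where the two regions overlap. */
      memory_region_add_subregion_overlap(sysmem, base, flash_romd, 1);
      memory_region_add_subregion_overlap(sysmem, base, flash_rom_alias, 0);
  }
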

>      0000000002000000-0000000003ffffff (prio 0, i/o): sx1.cs0
> 
> I thought that the dual memory assignment (omap_sx1.flash0-1 and
> omap_sx1.flash0-0) might play a role, but removing that didn't make
> a difference either (not that I have any idea what it is supposed
> to be used for).
> 
> Thanks,
> Guenter
> 



Thread overview: 10 messages
2020-02-12 18:46 Question about (and problem with) pflash data access Guenter Roeck
2020-02-12 21:39 ` Philippe Mathieu-Daudé
2020-02-12 23:09   ` Guenter Roeck
2020-02-12 23:50     ` Philippe Mathieu-Daudé [this message]
2020-02-13  7:40       ` Alexey Kardashevskiy
2020-02-13  9:51         ` Paolo Bonzini
2020-02-13 14:26           ` Guenter Roeck
2020-02-13 14:39             ` Peter Maydell
2020-02-13 15:24               ` Philippe Mathieu-Daudé
2020-02-13 16:21                 ` Guenter Roeck
