From: Jan Beulich <jbeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Wei Liu" <wl@xen.org>, "Roger Pau Monné" <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: Ping²: [PATCH v2] x86emul: de-duplicate scatters to the same linear address
Date: Mon, 18 Oct 2021 10:10:17 +0200	[thread overview]
Message-ID: <3b1b038d-3cba-f947-3458-297ca21a262b@suse.com> (raw)
In-Reply-To: <26495a3f-d250-4445-ca91-0d0aae336fe7@suse.com>

On 28.06.2021 14:13, Jan Beulich wrote:
> On 20.05.2021 15:34, Jan Beulich wrote:
>> The SDM specifically allows for earlier writes to fully overlapping
>> ranges to be dropped. If a guest did so, hvmemul_phys_mmio_access()
>> would crash it if varying data was written to the same address. Detect
>> overlaps early, as doing so in hvmemul_{linear,phys}_mmio_access() would
>> be quite a bit more difficult. To maintain proper faulting behavior,
>> instead of dropping earlier write instances of fully overlapping slots
>> altogether, write the data of the final of these slots multiple times.
>> (We also can't pull ahead the [single] write of the data of the last of
>> the slots, clearing all involved slots' op_mask bits together, as this
>> would yield incorrect results if there were intervening partially
>> overlapping ones.)
>>
>> Note that due to cache slot use being linear address based, there's no
>> similar issue with multiple writes to the same physical address (mapped
>> through different linear addresses).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> As indicated before, this is an issue which - afaict - would be a
> security issue if introspection was security supported. As such I
> find it highly irritating that this has now been pending for well
> over half a year (counting from the submission of v1).

This continues to be pending, despite having got submitted well in time
for 4.15 (Nov 2020). Are we meaning to ship another major release with
this bug unaddressed?

Jan

>> ---
>> v2: Maintain correct faulting behavior.
>>
>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>> @@ -10073,15 +10073,36 @@ x86_emulate(
>>  
>>          for ( i = 0; op_mask; ++i )
>>          {
>> -            long idx = b & 1 ? index.qw[i] : index.dw[i];
>> +            long idx = (b & 1 ? index.qw[i]
>> +                              : index.dw[i]) * (1 << state->sib_scale);
>> +            unsigned long offs = truncate_ea(ea.mem.off + idx);
>> +            unsigned int j, slot;
>>  
>>              if ( !(op_mask & (1 << i)) )
>>                  continue;
>>  
>> -            rc = ops->write(ea.mem.seg,
>> -                            truncate_ea(ea.mem.off +
>> -                                        idx * (1 << state->sib_scale)),
>> -                            (void *)mmvalp + i * op_bytes, op_bytes, ctxt);
>> +            /*
>> +             * hvmemul_linear_mmio_access() will find a cache slot based on
>> +             * linear address.  hvmemul_phys_mmio_access() will crash the
>> +             * domain if observing varying data getting written to the same
>> +             * cache slot.  Utilize that squashing earlier writes to fully
>> +             * overlapping addresses is permitted by the spec.  We can't,
>> +             * however, drop the writes altogether, to maintain correct
>> +             * faulting behavior.  Instead write the data from the last of
>> +             * the fully overlapping slots multiple times.
>> +             */
>> +            for ( j = (slot = i) + 1; j < n; ++j )
>> +            {
>> +                long idx2 = (b & 1 ? index.qw[j]
>> +                                   : index.dw[j]) * (1 << state->sib_scale);
>> +
>> +                if ( (op_mask & (1 << j)) &&
>> +                     truncate_ea(ea.mem.off + idx2) == offs )
>> +                    slot = j;
>> +            }
>> +
>> +            rc = ops->write(ea.mem.seg, offs,
>> +                            (void *)mmvalp + slot * op_bytes, op_bytes, ctxt);
>>              if ( rc != X86EMUL_OKAY )
>>              {
>>                  /* See comment in gather emulation. */
>>
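[Editorial illustration, not part of the original mail: a minimal standalone C sketch of the de-duplication approach the patch describes. do_write(), N_ELEMS and the sample index/data values are placeholders; truncate_ea() and the real hvmemul paths are omitted. The point shown: every pending element still issues its own write (to preserve faulting behavior), but the data is taken from the last pending element targeting the same effective address, so repeated writes to one address all carry identical bytes.]

#include <stdio.h>

#define N_ELEMS 4

/* Placeholder for ops->write(): here it just logs the access. */
static void do_write(unsigned long offs, unsigned int data)
{
    printf("write 0x%lx <- 0x%08x\n", offs, data);
}

int main(void)
{
    /* Elements 1 and 3 scatter to the same effective address. */
    long index[N_ELEMS] = { 0, 2, 4, 2 };
    unsigned int data[N_ELEMS] = { 0x11, 0x22, 0x33, 0x44 };
    unsigned int op_mask = 0xf, scale = 2;
    unsigned long base = 0x1000;
    unsigned int i, j, slot;

    for ( i = 0; op_mask; ++i )
    {
        unsigned long offs = base + index[i] * (1 << scale);

        if ( !(op_mask & (1u << i)) )
            continue;

        /*
         * Pick the last still-pending element with the same effective
         * address; architecturally its data is what survives.
         */
        for ( j = (slot = i) + 1; j < N_ELEMS; ++j )
            if ( (op_mask & (1u << j)) &&
                 base + index[j] * (1 << scale) == offs )
                slot = j;

        /* A write is still issued per element, but with consistent data. */
        do_write(offs, data[slot]);

        op_mask &= ~(1u << i);
    }

    return 0;
}

With the sample values above, both writes to 0x1008 (elements 1 and 3) carry 0x44, matching what the patch arranges for fully overlapping scatter slots.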


Thread overview: 6+ messages
2021-05-20 13:34 [PATCH v2] x86emul: de-duplicate scatters to the same linear address Jan Beulich
2021-06-28 12:13 ` Ping: " Jan Beulich
2021-10-18  8:10   ` Jan Beulich [this message]
2021-10-18 11:19 ` Roger Pau Monné
2021-10-18 12:17   ` Jan Beulich
2021-10-18 12:52     ` Roger Pau Monné
