From: Tom Roeder <tmroeder@google.com>
To: Keith Busch <kbusch@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@fb.com>,
	Sagi Grimberg <sagi@grimberg.me>, Peter Gonda <pgonda@google.com>,
	Marios Pomonis <pomonis@google.com>,
	linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] nvme: Cache DMA descriptors to prevent corruption.
Date: Mon, 30 Nov 2020 10:55:00 -0800	[thread overview]
Message-ID: <20201130185500.GB744128@google.com> (raw)
In-Reply-To: <20201120142954.GC2855047@dhcp-10-100-145-180.wdc.com>

On Fri, Nov 20, 2020 at 06:29:54AM -0800, Keith Busch wrote:
>On Fri, Nov 20, 2020 at 09:02:43AM +0100, Christoph Hellwig wrote:
>> On Thu, Nov 19, 2020 at 05:27:37PM -0800, Tom Roeder wrote:
>> > This patch changes the NVMe PCI implementation to cache host_mem_descs
>> > in non-DMA memory instead of depending on descriptors stored in DMA
>> > memory. This change is needed under the malicious-hypervisor threat
>> > model assumed by the AMD SEV and Intel TDX architectures, which encrypt
>> > guest memory to make it unreadable. Some versions of these architectures
>> > also make it cryptographically hard to modify guest memory without
>> > detection.
>>
>> I don't think this is a useful threat model, and I've not seen a
>> discussion on lkml where we had any discussion on this kind of threat
>> model either.
>>
>> Before you start sending patches that regress optimizations in various
>> drivers (and there will be lots with this model) we need to have a
>> broader discussion first.
>>
>> And HMB support, which is for low-end consumer devices that are usually
>> not directly assigned to VMs aren't a good starting point for this.
>
>Yeah, while doing this for HMB isn't really a performance concern, this
>method for chaining SGL/PRP lists would be.

I see that this answers a question I just asked in my reply to the 
previous message. Sorry about that. Can you please point me to the code 
in question?

>
>And perhaps more importantly, the proposed mitigation only lets the
>guest silently carry on from such an attack while the device is surely
>corrupting something. I think we'd rather free the wrong address since
>that may at least eventually raise an error.

From a security perspective, I'd rather not free the wrong address, 
since that could lead to an attack on the guest (use-after-free). But I 
agree with the concern about fixing the problem silently. Maybe this 
code should instead compare the cached values with the values stored in 
the DMA memory and raise an error if they differ?
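
Something like the following, as an untested sketch only: it assumes the 
patch keeps a plain-kernel-memory copy of the descriptors in a field I'm 
calling host_mem_descs_cache here (an illustrative name, not necessarily 
what the posted patch uses), and compares that copy against the 
DMA-visible descriptors before teardown:

/*
 * Sketch only: compare the DMA-visible HMB descriptors, which a
 * malicious device or hypervisor could have rewritten, against a copy
 * cached in ordinary kernel memory (hypothetical field
 * host_mem_descs_cache).  Return an error on mismatch instead of
 * silently trusting either copy.
 */
static int nvme_check_host_mem_descs(struct nvme_dev *dev)
{
	int i;

	for (i = 0; i < dev->nr_host_mem_descs; i++) {
		const struct nvme_host_mem_buf_desc *dma =
			&dev->host_mem_descs[i];
		const struct nvme_host_mem_buf_desc *cached =
			&dev->host_mem_descs_cache[i];

		if (dma->addr != cached->addr || dma->size != cached->size) {
			dev_err(dev->ctrl.device,
				"HMB descriptor %d was modified\n", i);
			return -EIO;
		}
	}

	return 0;
}

The caller (e.g. nvme_free_host_mem()) could then fail or skip the free 
path when this returns an error, rather than freeing whatever address the 
descriptor now contains.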
