From: Steven Price <steven.price@arm.com>
To: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>, Will Deacon <will@kernel.org>,
	James Morse <james.morse@arm.com>,
	Julien Thierry <julien.thierry.kdev@gmail.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, Dave Martin <Dave.Martin@arm.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	qemu-devel@nongnu.org, Juan Quintela <quintela@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	Haibo Xu <Haibo.Xu@arm.com>, Andrew Jones <drjones@redhat.com>
Subject: Re: [PATCH v12 4/8] arm64: kvm: Introduce MTE VM feature
Date: Fri, 21 May 2021 10:28:52 +0100	[thread overview]
Message-ID: <21629fb6-4287-88b2-2287-2a63496cff79@arm.com> (raw)
In-Reply-To: <20210520175037.GG12251@arm.com>

On 20/05/2021 18:50, Catalin Marinas wrote:
> On Thu, May 20, 2021 at 04:05:46PM +0100, Steven Price wrote:
>> On 20/05/2021 12:54, Catalin Marinas wrote:
>>> On Mon, May 17, 2021 at 01:32:35PM +0100, Steven Price wrote:
>>>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>>>> index c5d1f3c87dbd..8660f6a03f51 100644
>>>> --- a/arch/arm64/kvm/mmu.c
>>>> +++ b/arch/arm64/kvm/mmu.c
>>>> @@ -822,6 +822,31 @@ transparent_hugepage_adjust(struct kvm_memory_slot *memslot,
>>>>  	return PAGE_SIZE;
>>>>  }
>>>>  
>>>> +static int sanitise_mte_tags(struct kvm *kvm, unsigned long size,
>>>> +			     kvm_pfn_t pfn)
>>>> +{
>>>> +	if (kvm_has_mte(kvm)) {
>>>> +		/*
>>>> +		 * The page will be mapped in stage 2 as Normal Cacheable, so
>>>> +		 * the VM will be able to see the page's tags and therefore
>>>> +		 * they must be initialised first. If PG_mte_tagged is set,
>>>> +		 * tags have already been initialised.
>>>> +		 */
>>>> +		unsigned long i, nr_pages = size >> PAGE_SHIFT;
>>>> +		struct page *page = pfn_to_online_page(pfn);
>>>> +
>>>> +		if (!page)
>>>> +			return -EFAULT;
>>>
>>> IIRC we ended up with pfn_to_online_page() to reject ZONE_DEVICE pages
>>> that may be mapped into a guest and we have no idea whether they support
>>> MTE. It may be worth adding a comment, otherwise, as Marc said, the page
>>> wouldn't disappear.
>>
>> I'll add a comment.
>>
>>>> +
>>>> +		for (i = 0; i < nr_pages; i++, page++) {
>>>> +			if (!test_and_set_bit(PG_mte_tagged, &page->flags))
>>>> +				mte_clear_page_tags(page_address(page));
>>>
>>> We started the page->flags thread and ended up fixing it for the host
>>> set_pte_at() as per the first patch:
>>>
>>> https://lore.kernel.org/r/c3293d47-a5f2-ea4a-6730-f5cae26d8a7e@arm.com
>>>
>>> Now, can we have a race between the stage 2 kvm_set_spte_gfn() and a
>>> stage 1 set_pte_at()? Only the latter takes a lock. Or between two
>>> kvm_set_spte_gfn() in different VMs? I think in the above thread we
>>> concluded that there's only a problem if the page is shared between
>>> multiple VMMs (MAP_SHARED). How realistic is this and what's the
>>> workaround?
>>>
>>> Either way, I think it's worth adding a comment here on the race on
>>> page->flags as it looks strange that here it's just a test_and_set_bit()
>>> while set_pte_at() uses a spinlock.
>>>
>>
>> Very good point! I should have thought about that. I think splitting the
>> test_and_set_bit() in two (as with the cache flush) is sufficient. While
>> there technically still is a race which could lead to user space tags
>> being clobbered:
>>
>> a) It's very odd for a VMM to be doing an mprotect() after the fact to
>> add PROT_MTE, or to be sharing the memory with another process which
>> sets PROT_MTE.
>>
>> b) The window for the race is incredibly small and the VMM (generally)
>> needs to be robust against the guest changing tags anyway.
>>
>> But I'll add a comment here as well:
>>
>> 	/*
>> 	 * There is a potential race between sanitising the
>> 	 * flags here and user space using mprotect() to add
>> 	 * PROT_MTE to access the tags, however by splitting
>> 	 * the test/set the only risk is user space tags
>> 	 * being overwritten by the mte_clear_page_tags() call.
>> 	 */
> 
> I think (well, I haven't re-checked), an mprotect() in the VMM ends up
> calling set_pte_at_notify() which would call kvm_set_spte_gfn() and that
> will map the page in the guest. So the problem only appears between
> different VMMs sharing the same page. In principle they can be
> MAP_PRIVATE but they'd be CoW so the race wouldn't matter. So it's left
> with MAP_SHARED between multiple VMMs.

mprotect.c only has a call to set_pte_at(), not set_pte_at_notify(). And
AFAICT the MMU notifiers are only called to invalidate, in
change_pmd_range(). So the stage 2 mappings would be invalidated rather
than populated. However, I believe the invalidation still provides the
necessary synchronisation because of the KVM mmu_lock, so from my
reading you are right: an mprotect() can't race.
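
Roughly, the path I have in mind is the below (an assumption from my
reading of the current code rather than something I've traced in detail):

  mprotect()
    change_protection()
      ...
        change_pmd_range()
          mmu_notifier_invalidate_range_start()
            kvm_mmu_notifier_invalidate_range_start()   <- takes kvm->mmu_lock
              (stage 2 unmap of the affected range)

and the stage 2 fault path which would call sanitise_mte_tags() also
serialises on the same mmu_lock (plus the mmu_notifier_seq retry check),
so the invalidate and the repopulate shouldn't be able to interleave.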

MAP_SHARED between multiple VMs is then the only potential problem.
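
(To spell out the problematic case: two separate VMM processes hand the
same MAP_SHARED mapping to their respective VMs, along the lines of the
sketch below. The names here (vm_fd, ram_size, the shm file) are made up
purely for illustration:)

	/* In each of the two VMM processes: */
	int fd = open("/dev/shm/guest-ram", O_RDWR);	/* same file in both */
	void *mem = mmap(NULL, ram_size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);

	struct kvm_userspace_memory_region region = {
		.slot            = 0,
		.guest_phys_addr = 0x40000000,		/* arbitrary IPA */
		.memory_size     = ram_size,
		.userspace_addr  = (unsigned long)mem,
	};
	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);

	/*
	 * If both guests then fault the same page into stage 2, both VMs
	 * can reach sanitise_mte_tags() for that page concurrently and
	 * race on PG_mte_tagged.
	 */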

> I think we should just state that this is unsafe and they can delete
> each-others tags. If we are really worried, we can export that lock you
> added in mte.c.
> 

I'll just update the comment for now.
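
Concretely, I'm thinking of something along these lines for the loop
(just a sketch of the split test/set, not the final patch):

		for (i = 0; i < nr_pages; i++, page++) {
			/*
			 * There is a potential race between sanitising the
			 * flags here and user space using mprotect() to add
			 * PROT_MTE to access the tags, however by splitting
			 * the test/set the only risk is user space tags
			 * being overwritten by the mte_clear_page_tags()
			 * call.
			 */
			if (!test_bit(PG_mte_tagged, &page->flags)) {
				mte_clear_page_tags(page_address(page));
				set_bit(PG_mte_tagged, &page->flags);
			}
		}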

Thanks,

Steve

Thread overview:

2021-05-17 12:32 [PATCH v12 0/8] MTE support for KVM guest Steven Price
2021-05-17 12:32 ` [PATCH v12 1/8] arm64: mte: Handle race when synchronising tags Steven Price
2021-05-17 14:03   ` Marc Zyngier
2021-05-17 14:56     ` Steven Price
2021-05-19 17:32   ` Catalin Marinas
2021-05-17 12:32 ` [PATCH v12 2/8] arm64: Handle MTE tags zeroing in __alloc_zeroed_user_highpage() Steven Price
2021-05-17 12:32 ` [PATCH v12 3/8] arm64: mte: Sync tags for pages where PTE is untagged Steven Price
2021-05-17 16:14   ` Marc Zyngier
2021-05-19  9:32     ` Steven Price
2021-05-19 17:48       ` Catalin Marinas
2021-05-19 18:06   ` Catalin Marinas
2021-05-20 11:55     ` Steven Price
2021-05-20 12:25       ` Catalin Marinas
2021-05-20 13:02         ` Catalin Marinas
2021-05-20 13:03         ` Steven Price
2021-05-17 12:32 ` [PATCH v12 4/8] arm64: kvm: Introduce MTE VM feature Steven Price
2021-05-17 16:45   ` Marc Zyngier
2021-05-19 10:48     ` Steven Price
2021-05-20  8:51       ` Marc Zyngier
2021-05-20 14:46         ` Steven Price
2021-05-20 11:54   ` Catalin Marinas
2021-05-20 15:05     ` Steven Price
2021-05-20 17:50       ` Catalin Marinas
2021-05-21  9:28         ` Steven Price [this message]
2021-05-17 12:32 ` [PATCH v12 5/8] arm64: kvm: Save/restore MTE registers Steven Price
2021-05-17 17:17   ` Marc Zyngier
2021-05-19 13:04     ` Steven Price
2021-05-20  9:46       ` Marc Zyngier
2021-05-20 15:21         ` Steven Price
2021-05-17 12:32 ` [PATCH v12 6/8] arm64: kvm: Expose KVM_ARM_CAP_MTE Steven Price
2021-05-17 17:40   ` Marc Zyngier
2021-05-19 13:26     ` Steven Price
2021-05-20 10:09       ` Marc Zyngier
2021-05-20 10:51         ` Steven Price
2021-05-17 12:32 ` [PATCH v12 7/8] KVM: arm64: ioctl to fetch/store tags in a guest Steven Price
2021-05-17 18:04   ` Marc Zyngier
2021-05-19 13:51     ` Steven Price
2021-05-20 12:05   ` Catalin Marinas
2021-05-20 15:58     ` Steven Price
2021-05-20 17:27       ` Catalin Marinas
2021-05-21  9:42         ` Steven Price
2021-05-24 18:11           ` Catalin Marinas
2021-05-27  7:50             ` Steven Price
2021-05-27 13:08               ` Catalin Marinas
2021-05-17 12:32 ` [PATCH v12 8/8] KVM: arm64: Document MTE capability and ioctl Steven Price
2021-05-17 18:09   ` Marc Zyngier
2021-05-19 14:09     ` Steven Price
2021-05-20 10:24       ` Marc Zyngier
2021-05-20 10:52         ` Steven Price
