From: Elliot Berman <quic_eberman@quicinc.com>
To: Alex Elder <elder@linaro.org>,
	Srinivas Kandagatla <srinivas.kandagatla@linaro.org>,
	Prakruthi Deepak Heragu <quic_pheragu@quicinc.com>
Cc: Murali Nalajala <quic_mnalajal@quicinc.com>,
	Trilok Soni <quic_tsoni@quicinc.com>,
	Srivatsa Vaddagiri <quic_svaddagi@quicinc.com>,
	Carl van Schaik <quic_cvanscha@quicinc.com>,
	Dmitry Baryshkov <dmitry.baryshkov@linaro.org>,
	Bjorn Andersson <andersson@kernel.org>,
	"Konrad Dybcio" <konrad.dybcio@linaro.org>,
	Arnd Bergmann <arnd@arndb.de>,
	"Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
	Rob Herring <robh+dt@kernel.org>,
	Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Bagas Sanjaya <bagasdotme@gmail.com>,
	Will Deacon <will@kernel.org>, Andy Gross <agross@kernel.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Jassi Brar <jassisinghbrar@gmail.com>,
	<linux-arm-msm@vger.kernel.org>, <devicetree@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <linux-doc@vger.kernel.org>,
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH v11 12/26] gunyah: vm_mgr: Add/remove user memory regions
Date: Tue, 11 Apr 2023 14:04:12 -0700	[thread overview]
Message-ID: <1276ec4c-e177-aeb2-d493-93bd48634ee8@quicinc.com> (raw)
In-Reply-To: <3b4e230c-6635-43f6-99ce-1ed51b55a450@linaro.org>



On 3/31/2023 7:26 AM, Alex Elder wrote:
> On 3/3/23 7:06 PM, Elliot Berman wrote:
>> +
>> +    mem_entries = kcalloc(mapping->npages, sizeof(*mem_entries), GFP_KERNEL);
>> +    if (!mem_entries) {
>> +        ret = -ENOMEM;
>> +        goto reclaim;
>> +    }
>> +
>> +    /* reduce number of entries by combining contiguous pages into single memory entry */
> 
> Are you sure you need to do this?  I.e., does pin_user_pages_fast()
> already take care of consolidating these pages?
> 

pin_user_pages_fast() wouldn't consolidate the page entries. Sharing 
memory is faster when pages are contiguous because fewer memory entries 
need to be transmitted to Gunyah to describe the same region; for 
example, 256 contiguous 4 KiB pages collapse into a single 1 MiB entry.

>> +    prev_page = page_to_phys(mapping->pages[0]);
>> +    mem_entries[0].ipa_base = cpu_to_le64(prev_page);
>> +    entry_size = PAGE_SIZE;
>> +    for (i = 1, j = 0; i < mapping->npages; i++) {
>> +        curr_page = page_to_phys(mapping->pages[i]);
> 
> I think you can actually use the page frame numbers
> here instead of the addresses.  If they are consecutive,
> they are contiguous.  See pages_are_mergeable() for an
> example of that.  Using PFNs might simplify this code.
> 

It did, thanks for the suggestion!
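
Roughly, the comparison becomes something like this (just a sketch, the 
actual code in the next revision may differ a bit):

    /* PFNs are consecutive exactly when the pages are physically contiguous */
    if (page_to_pfn(mapping->pages[i]) == page_to_pfn(mapping->pages[i - 1]) + 1)
        entry_size += PAGE_SIZE;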

>> +        if (curr_page - prev_page == PAGE_SIZE) {
>> +            entry_size += PAGE_SIZE;
>> +        } else {
>> +            mem_entries[j].size = cpu_to_le64(entry_size);
>> +            j++;
>> +            mem_entries[j].ipa_base = cpu_to_le64(curr_page);
>> +            entry_size = PAGE_SIZE;
>> +        }
>> +
>> +        prev_page = curr_page;
>> +    }
>> +    mem_entries[j].size = cpu_to_le64(entry_size);
> 
> It might be messier, but it seems like you could scan the pages to
> see how many you'll need (after combining), then allocate the array
> of mem entries based on that.  That is, do that rather than allocating,
> filling, then duplicating and freeing.
> 
>      count = 1;
>      curr_page = mapping->pages[0];
>      for (i = 1; i < mapping->npages; i++) {
>          next_page = mapping->pages[i];
>          if (page_to_pfn(next_page) !=
>                  page_to_pfn(curr_page) + 1)
>              count++;
>          curr_page = next_page;
>      }
>      parcel->n_mem_entries = count;
>      parcel->mem_entries = kcalloc(count, ...);
>      /* Then fill them up */
> 
> (Not tested, but you get the idea.)
> 

It wasn't too messy, in my opinion; counting the entries first actually 
ended up simplifying the loop.
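
For illustration, the reworked two-pass flow looks roughly like this (a 
sketch only, reusing the names from the hunk quoted above, not the exact 
code from the next revision):

    /* first pass: count the runs of physically contiguous pages */
    count = 1;
    for (i = 1; i < mapping->npages; i++)
        if (page_to_pfn(mapping->pages[i]) !=
            page_to_pfn(mapping->pages[i - 1]) + 1)
            count++;

    parcel->n_mem_entries = count;
    parcel->mem_entries = kcalloc(count, sizeof(*parcel->mem_entries), GFP_KERNEL);
    if (!parcel->mem_entries) {
        ret = -ENOMEM;
        goto reclaim;
    }

    /* second pass: fill in one entry per contiguous run */
    j = 0;
    entry_size = PAGE_SIZE;
    parcel->mem_entries[0].ipa_base = cpu_to_le64(page_to_phys(mapping->pages[0]));
    for (i = 1; i < mapping->npages; i++) {
        if (page_to_pfn(mapping->pages[i]) ==
            page_to_pfn(mapping->pages[i - 1]) + 1) {
            entry_size += PAGE_SIZE;
        } else {
            parcel->mem_entries[j].size = cpu_to_le64(entry_size);
            j++;
            parcel->mem_entries[j].ipa_base =
                cpu_to_le64(page_to_phys(mapping->pages[i]));
            entry_size = PAGE_SIZE;
        }
    }
    parcel->mem_entries[j].size = cpu_to_le64(entry_size);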

>> +
>> +    parcel->n_mem_entries = j + 1;
>> +    parcel->mem_entries = kmemdup(mem_entries, sizeof(*mem_entries) * parcel->n_mem_entries,
>> +                    GFP_KERNEL);
>> +    kfree(mem_entries);
>> +    if (!parcel->mem_entries) {
>> +        ret = -ENOMEM;
>> +        goto reclaim;
>> +    }
>> +
>> +    mutex_unlock(&ghvm->mm_lock);
>> +    return 0;
>> +reclaim:
>> +    gh_vm_mem_reclaim(ghvm, mapping);
>> +free_mapping:
>> +    kfree(mapping);
>> +    mutex_unlock(&ghvm->mm_lock);
>> +    return ret;
>> +}
>> +
>> +int gh_vm_mem_free(struct gh_vm *ghvm, u32 label)
>> +{
>> +    struct gh_vm_mem *mapping;
>> +    int ret;
>> +
>> +    ret = mutex_lock_interruptible(&ghvm->mm_lock);
>> +    if (ret)
>> +        return ret;
>> +
>> +    mapping = __gh_vm_mem_find_by_label(ghvm, label);
>> +    if (!mapping)
>> +        goto out;
>> +
>> +    gh_vm_mem_reclaim(ghvm, mapping);
>> +    kfree(mapping);
>> +out:
>> +    mutex_unlock(&ghvm->mm_lock);
>> +    return ret;
>> +}
>> diff --git a/include/uapi/linux/gunyah.h b/include/uapi/linux/gunyah.h
>> index 10ba32d2b0a6..a19207e3e065 100644
>> --- a/include/uapi/linux/gunyah.h
>> +++ b/include/uapi/linux/gunyah.h
>> @@ -20,4 +20,33 @@
>>    */
>>   #define GH_CREATE_VM            _IO(GH_IOCTL_TYPE, 0x0) /* Returns a Gunyah VM fd */
>> +/*
>> + * ioctls for VM fds
>> + */
>> +
> 
> I think you should define the following three values in an enum.
> 
>> +#define GH_MEM_ALLOW_READ    (1UL << 0)
>> +#define GH_MEM_ALLOW_WRITE    (1UL << 1)
>> +#define GH_MEM_ALLOW_EXEC    (1UL << 2)
>> +
>> +/**
>> + * struct gh_userspace_memory_region - Userspace memory descripion for GH_VM_SET_USER_MEM_REGION
>> + * @label: Unique identifer to the region.
> 
> Unique with respect to what?  I think it's unique among memory
> regions defined within a VM.  And I think it's arbitrary and
> defined by the caller (right?).
> 
>> + * @flags: Flags for memory parcel behavior
>> + * @guest_phys_addr: Location of the memory region in guest's memory space (page-aligned)
>> + * @memory_size: Size of the region (page-aligned)
>> + * @userspace_addr: Location of the memory region in caller (userspace)'s memory
>> + *
>> + * See Documentation/virt/gunyah/vm-manager.rst for further details.
>> + */
>> +struct gh_userspace_memory_region {
>> +    __u32 label;
>> +    __u32 flags;
> 
> Add a comment to indicate what types of values "flags" can have.
> Maybe "flags" should be called "perms" or something?
> 

I added documentation for the valid values of flags. I'm anticipating 
the need to add other flag values beyond the permission bits, which is 
why I've kept the name "flags" rather than "perms".
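
The @flags kernel-doc now reads roughly along these lines (illustrative 
wording only, not a verbatim quote from the next revision):

     * @flags: Flags for memory parcel behavior. Must be a combination of
     *         GH_MEM_ALLOW_READ, GH_MEM_ALLOW_WRITE and GH_MEM_ALLOW_EXEC;
     *         all other bits are reserved for future use and must be zero.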

>> +    __u64 guest_phys_addr;
>> +    __u64 memory_size;
>> +    __u64 userspace_addr;
> 
> Why isn't userspace_addr just a (void *)?  That would be a more natural
> thing to pass to the kernel.  Is it to avoid 32-bit/64-bit pointer
> differences in the API?
> 

Yes, it's to avoid 32-bit/64-bit pointer differences in the API.
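
From userspace, any pointer width works the same way. A hypothetical 
caller (vm_fd, buf, guest_base and size are placeholders for values a 
VMM would already have) would do something like:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/types.h>
    #include <linux/gunyah.h>

    static int share_region(int vm_fd, void *buf, __u64 guest_base, __u64 size)
    {
        struct gh_userspace_memory_region region = {
            .label           = 1,
            .flags           = GH_MEM_ALLOW_READ | GH_MEM_ALLOW_WRITE,
            .guest_phys_addr = guest_base,
            .memory_size     = size,
            /* cast through uintptr_t so both 32-bit and 64-bit userspace
             * hand the kernel a zero-extended 64-bit value */
            .userspace_addr  = (__u64)(uintptr_t)buf,
        };

        return ioctl(vm_fd, GH_VM_SET_USER_MEM_REGION, &region);
    }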

>> +};
>> +
>> +#define GH_VM_SET_USER_MEM_REGION    _IOW(GH_IOCTL_TYPE, 0x1, \
>> +                        struct gh_userspace_memory_region)
>> +
> 
> I think it's nicer to group the definitions of these IOCTL values.
> Then in the struct definitions that follow, you can add comment that
> indicates which IOCTL the struct is used for.
> 
>>   #endif
> 
