From: Alex Elder <elder@linaro.org>
To: Elliot Berman <quic_eberman@quicinc.com>,
	Srinivas Kandagatla <srinivas.kandagatla@linaro.org>,
	Prakruthi Deepak Heragu <quic_pheragu@quicinc.com>
Cc: Murali Nalajala <quic_mnalajal@quicinc.com>,
	Trilok Soni <quic_tsoni@quicinc.com>,
	Srivatsa Vaddagiri <quic_svaddagi@quicinc.com>,
	Carl van Schaik <quic_cvanscha@quicinc.com>,
	Dmitry Baryshkov <dmitry.baryshkov@linaro.org>,
	Bjorn Andersson <andersson@kernel.org>,
	Konrad Dybcio <konrad.dybcio@linaro.org>,
	Arnd Bergmann <arnd@arndb.de>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Rob Herring <robh+dt@kernel.org>,
	Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Bagas Sanjaya <bagasdotme@gmail.com>,
	Will Deacon <will@kernel.org>, Andy Gross <agross@kernel.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Jassi Brar <jassisinghbrar@gmail.com>,
	linux-arm-msm@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v13 09/24] gunyah: rsc_mgr: Add RPC for sharing memory
Date: Mon, 5 Jun 2023 14:48:09 -0500
Message-ID: <9747a71d-c6d0-b67b-a3b1-c84848268f46@linaro.org>
In-Reply-To: <20230509204801.2824351-10-quic_eberman@quicinc.com>

On 5/9/23 3:47 PM, Elliot Berman wrote:
> Gunyah resource manager provides API to manipulate stage 2 page tables.
> Manipulations are represented as a memory parcel. Memory parcels

Not a huge deal, but maybe:
   The Gunyah resource manager provides an API for manipulating stage 2
   page tables.  The API uses "memory parcels" to represent regions of
   memory affected by API calls.

> describe a list of memory regions (intermediate physical address and

...(intermediate physical address--IPA--and size)...

> size), a list of new permissions for VMs, and the memory type (DDR or
> MMIO). Memory parcels are uniquely identified by a handle allocated by

s/Memory parcels are/Each memory parcel is/

Also, as I recall, a memory parcel is contiguous memory in
the guest address space.  If that's true, it might be worth
mentioning (here and/or in the code).
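
If it is, maybe a comment along these lines above the
gh_rm_mem_parcel definition (the wording is just a sketch, and
the claim itself is only as good as my recollection):

   /*
    * A memory parcel describes a range that is contiguous in the
    * guest's intermediate physical address space, even though the
    * regions listed in @mem_entries backing it need not be.
    */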

> Gunyah. There are a few types of memory parcel sharing which Gunyah
> supports:
> 
>   - Sharing: the guest and host VM both have access
>   - Lending: only the guest has access; host VM loses access
>   - Donating: Permanently lent (not reclaimed even if guest shuts down)
> 
> Memory parcels that have been shared or lent can be reclaimed by the
> host via an additional call. The reclaim operation restores the original
> access the host VM had to the memory parcel and removes the access to
> other VM.
> 
> One point to note that memory parcels don't describe where in the guest
> VM the memory parcel should reside. The guest VM must accept the memory
> parcel either explicitly via a "gh_rm_mem_accept" call (not introduced
> here) or be configured to accept it automatically at boot. As the guest
> VM accepts the memory parcel, it also mentions the IPA it wants to place
> memory parcel.

I have quite a few small comments and questions.  Some of the
questions arise because I haven't done a very deep review this
time, so I might just be missing or forgetting bits of the
bigger picture.

My feedback is down to nits though, for the most part.  Consider
what I say, but even if you ignore much of it:

Reviewed-by: Alex Elder <elder@linaro.org>

> Co-developed-by: Prakruthi Deepak Heragu <quic_pheragu@quicinc.com>
> Signed-off-by: Prakruthi Deepak Heragu <quic_pheragu@quicinc.com>
> Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
> ---
>   drivers/virt/gunyah/rsc_mgr_rpc.c | 227 ++++++++++++++++++++++++++++++
>   include/linux/gunyah_rsc_mgr.h    |  48 +++++++
>   2 files changed, 275 insertions(+)
> 
> diff --git a/drivers/virt/gunyah/rsc_mgr_rpc.c b/drivers/virt/gunyah/rsc_mgr_rpc.c
> index a4a9f0ba4e1f..4f25f07400b3 100644
> --- a/drivers/virt/gunyah/rsc_mgr_rpc.c
> +++ b/drivers/virt/gunyah/rsc_mgr_rpc.c
> @@ -6,6 +6,12 @@
>   #include <linux/gunyah_rsc_mgr.h>
>   #include "rsc_mgr.h"
>   
> +/* Message IDs: Memory Management */
> +#define GH_RM_RPC_MEM_LEND			0x51000012
> +#define GH_RM_RPC_MEM_SHARE			0x51000013
> +#define GH_RM_RPC_MEM_RECLAIM			0x51000015
> +#define GH_RM_RPC_MEM_APPEND			0x51000018

These definitions seem to be permanent, unchanging, definitional
values for the Gunyah RM API.  It seems like they could reside
in a file that reinforces that--like "gh_rm_api.h" or something.

That said, nobody else will be using these, so I guess defining
them here makes sense.

> +
>   /* Message IDs: VM Management */
>   #define GH_RM_RPC_VM_ALLOC_VMID			0x56000001
>   #define GH_RM_RPC_VM_DEALLOC_VMID		0x56000002
> @@ -22,6 +28,46 @@ struct gh_rm_vm_common_vmid_req {
>   	__le16 _padding;
>   } __packed;
>   
> +/* Call: MEM_LEND, MEM_SHARE */
> +#define GH_MEM_SHARE_REQ_FLAGS_APPEND		BIT(1)
> +
> +struct gh_rm_mem_share_req_header {
> +	u8 mem_type;
> +	u8 _padding0;
> +	u8 flags;
> +	u8 _padding1;
> +	__le32 label;
> +} __packed;
> +
> +struct gh_rm_mem_share_req_acl_section {
> +	__le32 n_entries;
> +	struct gh_rm_mem_acl_entry entries[];
> +};
> +
> +struct gh_rm_mem_share_req_mem_section {
> +	__le16 n_entries;
> +	__le16 _padding;
> +	struct gh_rm_mem_entry entries[];
> +};
> +
> +/* Call: MEM_RELEASE */
> +struct gh_rm_mem_release_req {
> +	__le32 mem_handle;
> +	u8 flags; /* currently not used */
> +	u8 _padding0;
> +	__le16 _padding1;
> +} __packed;
> +
> +/* Call: MEM_APPEND */
> +#define GH_MEM_APPEND_REQ_FLAGS_END		BIT(0)
> +
> +struct gh_rm_mem_append_req_header {
> +	__le32 mem_handle;
> +	u8 flags;
> +	u8 _padding0;
> +	__le16 _padding1;
> +} __packed;
> +
>   /* Call: VM_ALLOC */
>   struct gh_rm_vm_alloc_vmid_resp {
>   	__le16 vmid;
> @@ -51,6 +97,8 @@ struct gh_rm_vm_config_image_req {
>   	__le64 dtb_size;
>   } __packed;
>   
> +#define GH_RM_MAX_MEM_ENTRIES	512
> +
>   /*
>    * Several RM calls take only a VMID as a parameter and give only standard
>    * response back. Deduplicate boilerplate code by using this common call.
> @@ -64,6 +112,185 @@ static int gh_rm_common_vmid_call(struct gh_rm *rm, u32 message_id, u16 vmid)
>   	return gh_rm_call(rm, message_id, &req_payload, sizeof(req_payload), NULL, NULL);
>   }
>   
> +static int _gh_rm_mem_append(struct gh_rm *rm, u32 mem_handle, bool end_append,
> +			struct gh_rm_mem_entry *mem_entries, size_t n_mem_entries)
> +{
> +	struct gh_rm_mem_share_req_mem_section *mem_section;
> +	struct gh_rm_mem_append_req_header *req_header;
> +	size_t msg_size = 0;
> +	void *msg;
> +	int ret;
> +
> +	msg_size += sizeof(struct gh_rm_mem_append_req_header);
> +	msg_size += struct_size(mem_section, entries, n_mem_entries);
> +
> +	msg = kzalloc(msg_size, GFP_KERNEL);
> +	if (!msg)
> +		return -ENOMEM;
> +
> +	req_header = msg;
> +	mem_section = (void *)req_header + sizeof(struct gh_rm_mem_append_req_header);

You could use req_header + 1.  Even if not, use sizeof(*req_header).
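I.e., either of these (equivalent, untested):

   	mem_section = (void *)(req_header + 1);

or:

   	mem_section = (void *)req_header + sizeof(*req_header);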

> +
> +	req_header->mem_handle = cpu_to_le32(mem_handle);
> +	if (end_append)
> +		req_header->flags |= GH_MEM_APPEND_REQ_FLAGS_END;
> +
> +	mem_section->n_entries = cpu_to_le16(n_mem_entries);
> +	memcpy(mem_section->entries, mem_entries, sizeof(*mem_entries) * n_mem_entries);
> +
> +	ret = gh_rm_call(rm, GH_RM_RPC_MEM_APPEND, msg, msg_size, NULL, NULL);
> +	kfree(msg);
> +
> +	return ret;
> +}
> +
> +static int gh_rm_mem_append(struct gh_rm *rm, u32 mem_handle,
> +			struct gh_rm_mem_entry *mem_entries, size_t n_mem_entries)
> +{
> +	bool end_append;
> +	int ret = 0;
> +	size_t n;
> +
> +	while (n_mem_entries) {
> +		if (n_mem_entries > GH_RM_MAX_MEM_ENTRIES) {
> +			end_append = false;
> +			n = GH_RM_MAX_MEM_ENTRIES;
> +		} else {
> +			end_append = true;
> +			n = n_mem_entries;
> +		}
> +
> +		ret = _gh_rm_mem_append(rm, mem_handle, end_append, mem_entries, n);
> +		if (ret)
> +			break;
> +
> +		mem_entries += n;
> +		n_mem_entries -= n;
> +	}
> +
> +	return ret;
> +}
> +
> +static int gh_rm_mem_lend_common(struct gh_rm *rm, u32 message_id, struct gh_rm_mem_parcel *p)
> +{
> +	size_t msg_size = 0, initial_mem_entries = p->n_mem_entries, resp_size;
> +	size_t acl_section_size, mem_section_size;
> +	struct gh_rm_mem_share_req_acl_section *acl_section;
> +	struct gh_rm_mem_share_req_mem_section *mem_section;
> +	struct gh_rm_mem_share_req_header *req_header;
> +	u32 *attr_section;
> +	__le32 *resp;
> +	void *msg;
> +	int ret;
> +
> +	if (!p->acl_entries || !p->n_acl_entries || !p->mem_entries || !p->n_mem_entries ||
> +	    p->n_acl_entries > U8_MAX || p->mem_handle != GH_MEM_HANDLE_INVAL)
> +		return -EINVAL;
> +
> +	if (initial_mem_entries > GH_RM_MAX_MEM_ENTRIES)
> +		initial_mem_entries = GH_RM_MAX_MEM_ENTRIES;

Is it OK to truncate the number of entries silently?

> +
> +	acl_section_size = struct_size(acl_section, entries, p->n_acl_entries);

Is there a limit on the number of ACL entries (as there is for
the number of mem entries)?

> +	mem_section_size = struct_size(mem_section, entries, initial_mem_entries);
> +	/* The format of the message goes:
> +	 * request header
> +	 * ACL entries (which VMs get what kind of access to this memory parcel)
> +	 * Memory entries (list of memory regions to share)
> +	 * Memory attributes (currently unused, we'll hard-code the size to 0)
> +	 */
> +	msg_size += sizeof(struct gh_rm_mem_share_req_header);
> +	msg_size += acl_section_size;
> +	msg_size += mem_section_size;
> +	msg_size += sizeof(u32); /* for memory attributes, currently unused */
> +
> +	msg = kzalloc(msg_size, GFP_KERNEL);
> +	if (!msg)
> +		return -ENOMEM;
> +
> +	req_header = msg;
> +	acl_section = (void *)req_header + sizeof(*req_header);
> +	mem_section = (void *)acl_section + acl_section_size;
> +	attr_section = (void *)mem_section + mem_section_size;
> +
> +	req_header->mem_type = p->mem_type;
> +	if (initial_mem_entries != p->n_mem_entries)
> +		req_header->flags |= GH_MEM_SHARE_REQ_FLAGS_APPEND;
> +	req_header->label = cpu_to_le32(p->label);
> +
> +	acl_section->n_entries = cpu_to_le32(p->n_acl_entries);
> +	memcpy(acl_section->entries, p->acl_entries,
> +		flex_array_size(acl_section, entries, p->n_acl_entries));
> +
> +	mem_section->n_entries = cpu_to_le16(initial_mem_entries);
> +	memcpy(mem_section->entries, p->mem_entries,
> +		flex_array_size(mem_section, entries, initial_mem_entries));
> +
> +	/* Set n_entries for memory attribute section to 0 */
> +	*attr_section = 0;
> +
> +	ret = gh_rm_call(rm, message_id, msg, msg_size, (void **)&resp, &resp_size);
> +	kfree(msg);
> +
> +	if (ret)
> +		return ret;
> +
> +	p->mem_handle = le32_to_cpu(*resp);
> +	kfree(resp);
> +
> +	if (initial_mem_entries != p->n_mem_entries) {
> +		ret = gh_rm_mem_append(rm, p->mem_handle,
> +					&p->mem_entries[initial_mem_entries],
> +					p->n_mem_entries - initial_mem_entries);

Will there always be at most one gh_rm_mem_append() call?

> +		if (ret) {
> +			gh_rm_mem_reclaim(rm, p);
> +			p->mem_handle = GH_MEM_HANDLE_INVAL;
> +		}
> +	}
> +
> +	return ret;
> +}
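
FWIW, to make sure I understand the API, I think a caller ends up
looking roughly like the sketch below.  This is hypothetical and
untested; the gh_rm_mem_share() name, the GH_RM_MEM_TYPE_NORMAL and
GH_RM_ACL_* symbols, and the acl/mem entry field names are my
assumptions from the rest of the series, not from this hunk:

   	/* Sketch: share one DDR region with one guest VM */
   	struct gh_rm_mem_acl_entry acl = {
   		.vmid = cpu_to_le16(guest_vmid),
   		.perms = GH_RM_ACL_R | GH_RM_ACL_W,
   	};
   	struct gh_rm_mem_entry entry = {
   		.phys_addr = cpu_to_le64(base),
   		.size = cpu_to_le64(size),
   	};
   	struct gh_rm_mem_parcel parcel = {
   		.mem_type = GH_RM_MEM_TYPE_NORMAL,
   		.label = 0,
   		.acl_entries = &acl,
   		.n_acl_entries = 1,
   		.mem_entries = &entry,
   		.n_mem_entries = 1,
   		.mem_handle = GH_MEM_HANDLE_INVAL,
   	};
   	int ret;

   	ret = gh_rm_mem_share(rm, &parcel);
   	if (!ret) {
   		/* parcel.mem_handle now identifies the parcel */
   	}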

. . .

