From: Stephen Boyd <swboyd@chromium.org>
To: Kalesh Singh <kaleshsingh@google.com>
Cc: will@kernel.org, maz@kernel.org, qperret@google.com,
	tabba@google.com, surenb@google.com, kernel-team@android.com,
	James Morse <james.morse@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Mark Brown <broonie@kernel.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Peter Collingbourne <pcc@google.com>,
	"Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>,
	Andrew Scull <ascull@google.com>,
	linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 1/8] KVM: arm64: Introduce hyp_alloc_private_va_range()
Date: Tue, 8 Mar 2022 12:21:40 -0800	[thread overview]
Message-ID: <CAE-0n52LmVRkrSNN=eJf+TYYnmesVjFv99nnetYvRWshm82rOg@mail.gmail.com> (raw)
In-Reply-To: <20220307184935.1704614-2-kaleshsingh@google.com>

Quoting Kalesh Singh (2022-03-07 10:48:59)
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index bc2aba953299..ccb2847ee2f4 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -457,22 +457,17 @@ int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
>         return 0;
>  }
>
> -static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
> -                                       unsigned long *haddr,
> -                                       enum kvm_pgtable_prot prot)
> +
> +/**
> + * hyp_alloc_private_va_range - Allocates a private VA range.
> + * @size:      The size of the VA range to reserve.
> + *
> + * The private VA range is allocated below io_map_base and
> + * aligned based on the order of @size.

Add what it returns?

Return: Start address of the allocated VA range or some error value... (I
don't understand what the error value is supposed to be here).

It may also be a good idea to spell out what VA stands for in the description:

The private virtual address (VA) range is allocated below io_map_base
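
Putting the two together, the kernel-doc could end up looking something
like this (just a sketch, assuming the error stays encoded as an
ERR_PTR() value cast to unsigned long):

	/**
	 * hyp_alloc_private_va_range - Allocate a private VA range.
	 * @size: The size of the VA range to reserve.
	 *
	 * The private virtual address (VA) range is allocated below
	 * io_map_base and aligned based on the order of @size.
	 *
	 * Return: The start address of the allocated VA range, or an
	 * ERR_PTR() value cast to unsigned long on failure.
	 */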

> + */
> +unsigned long hyp_alloc_private_va_range(size_t size)
>  {
>         unsigned long base;
> -       int ret = 0;
> -
> -       if (!kvm_host_owns_hyp_mappings()) {
> -               base = kvm_call_hyp_nvhe(__pkvm_create_private_mapping,
> -                                        phys_addr, size, prot);
> -               if (IS_ERR_OR_NULL((void *)base))
> -                       return PTR_ERR((void *)base);
> -               *haddr = base;
> -
> -               return 0;
> -       }
>
>         mutex_lock(&kvm_hyp_pgd_mutex);
>
> @@ -484,29 +479,53 @@ static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
>          *
>          * The allocated size is always a multiple of PAGE_SIZE.
>          */
> -       size = PAGE_ALIGN(size + offset_in_page(phys_addr));
> -       base = io_map_base - size;
> +       base = io_map_base - PAGE_ALIGN(size);
> +
> +       /* Align the allocation based on the order of its size */
> +       base = ALIGN_DOWN(base, PAGE_SIZE << get_order(size));
>
>         /*
>          * Verify that BIT(VA_BITS - 1) hasn't been flipped by
>          * allocating the new area, as it would indicate we've
>          * overflowed the idmap/IO address range.
>          */
> -       if ((base ^ io_map_base) & BIT(VA_BITS - 1))
> -               ret = -ENOMEM;
> +       if (!base || (base ^ io_map_base) & BIT(VA_BITS - 1))
> +               base = (unsigned long)ERR_PTR(-ENOMEM);

It looks odd to use an error pointer cast to unsigned long as the return
value of an address allocation function. Why not pass a pointer for base,
as the function was written before, and return an int from this function,
0 for success and a negative error value on failure? Otherwise, some sort
of define should be made, like DMA_MAPPING_ERROR, that can be used to
indicate to the caller that the allocation failed, or maybe a simple zero
would work?
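
For example, roughly (untested, just to illustrate the int-returning form
of the allocator):

	int hyp_alloc_private_va_range(size_t size, unsigned long *haddr)
	{
		unsigned long base;
		int ret = 0;

		mutex_lock(&kvm_hyp_pgd_mutex);

		/* Allocate below io_map_base, aligned on the order of size */
		base = io_map_base - PAGE_ALIGN(size);
		base = ALIGN_DOWN(base, PAGE_SIZE << get_order(size));

		/* Bail out if BIT(VA_BITS - 1) flipped, i.e. we overflowed */
		if (!base || (base ^ io_map_base) & BIT(VA_BITS - 1)) {
			ret = -ENOMEM;
		} else {
			io_map_base = base;
			*haddr = base;
		}

		mutex_unlock(&kvm_hyp_pgd_mutex);

		return ret;
	}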

>         else
>                 io_map_base = base;
>
>         mutex_unlock(&kvm_hyp_pgd_mutex);
>
> -       if (ret)
> -               goto out;
> +       return base;
> +}
> +
> +static int __create_hyp_private_mapping(phys_addr_t phys_addr, size_t size,
> +                                       unsigned long *haddr,
> +                                       enum kvm_pgtable_prot prot)
> +{
> +       unsigned long addr;
> +       int ret = 0;
> +
> +       if (!kvm_host_owns_hyp_mappings()) {
> +               addr = kvm_call_hyp_nvhe(__pkvm_create_private_mapping,
> +                                        phys_addr, size, prot);
> +               if (IS_ERR((void *)addr))

IS_ERR_VALUE()?
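
i.e. something like this, given that addr is an unsigned long rather than
a pointer (sketch):

	if (IS_ERR_VALUE(addr))
		return PTR_ERR((void *)addr);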

> +                       return PTR_ERR((void *)addr);
> +               *haddr = addr;
> +
> +               return 0;
> +       }
> +
> +       size += offset_in_page(phys_addr);
> +       addr = hyp_alloc_private_va_range(size);
> +       if (IS_ERR((void *)addr))

IS_ERR_VALUE()?

> +               return PTR_ERR((void *)addr);
>
> -       ret = __create_hyp_mappings(base, size, phys_addr, prot);
> +       ret = __create_hyp_mappings(addr, size, phys_addr, prot);
>         if (ret)
>                 goto out;
>
> -       *haddr = base + offset_in_page(phys_addr);
> +       *haddr = addr + offset_in_page(phys_addr);
>  out:
>         return ret;

It would be simpler to remove the goto, or to return early:

	if (!ret)
		*haddr = addr + offset_in_page(phys_addr);

	return ret;
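
Or, returning early (also just a sketch):

	ret = __create_hyp_mappings(addr, size, phys_addr, prot);
	if (ret)
		return ret;

	*haddr = addr + offset_in_page(phys_addr);

	return 0;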

>  }

Thread overview:
2022-03-07 18:48 [PATCH v5 0/8] KVM: arm64: Hypervisor stack enhancements Kalesh Singh
2022-03-07 18:48 ` [PATCH v5 1/8] KVM: arm64: Introduce hyp_alloc_private_va_range() Kalesh Singh
2022-03-08 20:21   ` Stephen Boyd [this message]
2022-03-08 23:09     ` Kalesh Singh
2022-03-09 16:50       ` Quentin Perret
2022-03-09 17:04         ` Kalesh Singh
2022-03-07 18:49 ` [PATCH v5 2/8] KVM: arm64: Introduce pkvm_alloc_private_va_range() Kalesh Singh
2022-03-07 18:49 ` [PATCH v5 3/8] KVM: arm64: Add guard pages for KVM nVHE hypervisor stack Kalesh Singh
2022-03-07 18:49 ` [PATCH v5 4/8] KVM: arm64: Add guard pages for pKVM (protected nVHE) " Kalesh Singh
2022-03-07 18:49 ` [PATCH v5 5/8] KVM: arm64: Detect and handle hypervisor stack overflows Kalesh Singh
2022-03-07 18:49 ` [PATCH v5 6/8] KVM: arm64: Add hypervisor overflow stack Kalesh Singh
2022-03-07 18:49 ` [PATCH v5 7/8] KVM: arm64: Unwind and dump nVHE HYP stacktrace Kalesh Singh
2022-03-07 18:49 ` [PATCH v5 8/8] KVM: arm64: Symbolize the nVHE HYP backtrace Kalesh Singh
