* [RFC 08/14] s390/mm: Make gmap_read_table EDAT1 compatible
From: Janosch Frank @ 2018-09-19  8:47 UTC
  To: linux-s390, kvm

For the upcoming support of VSIE guests on huge-page-backed hosts, we
need gmap_read_table() to be able to read guest memory that is mapped
through large (EDAT1) segments, not only through 4 KB page table
entries.
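
As a minimal illustration of the new read path (a sketch only, not the
patch itself; the masks and the pte/pmd accessors are the usual s390
definitions), the host address is derived from the PMD for a large
segment instead of from a PTE:

	if (!pmd_large(*pmdp)) {
		/* 4 KB mapping: frame address comes from the PTE */
		address = pte_val(pte) & PAGE_MASK;
		address += gaddr & ~PAGE_MASK;
	} else {
		/* EDAT1 1 MB segment: frame address comes from the PMD */
		address = pmd_val(*pmdp) & HPAGE_MASK;
		address += gaddr & ~HPAGE_MASK;
	}
	*val = *(unsigned long *) address;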

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/mm/gmap.c | 43 ++++++++++++++++++++++++++-----------------
 1 file changed, 26 insertions(+), 17 deletions(-)

diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 26cc6ce19afb..ba0425f1c2c0 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -1274,35 +1274,44 @@ EXPORT_SYMBOL_GPL(gmap_mprotect_notify);
 int gmap_read_table(struct gmap *gmap, unsigned long gaddr, unsigned long *val)
 {
 	unsigned long address, vmaddr;
-	spinlock_t *ptl;
+	spinlock_t *ptl_pmd = NULL, *ptl_pte = NULL;
+	pmd_t *pmdp;
 	pte_t *ptep, pte;
 	int rc;
 
-	if (gmap_is_shadow(gmap))
-		return -EINVAL;
+	BUG_ON(gmap_is_shadow(gmap));
 
 	while (1) {
 		rc = -EAGAIN;
-		ptep = gmap_pte_op_walk(gmap, gaddr, &ptl);
-		if (ptep) {
-			pte = *ptep;
-			if (pte_present(pte) && (pte_val(pte) & _PAGE_READ)) {
-				address = pte_val(pte) & PAGE_MASK;
-				address += gaddr & ~PAGE_MASK;
+		vmaddr = __gmap_translate(gmap, gaddr);
+		if (IS_ERR_VALUE(vmaddr))
+			return vmaddr;
+		pmdp = gmap_pmd_op_walk(gmap, gaddr, vmaddr, &ptl_pmd);
+		if (pmdp && !(pmd_val(*pmdp) & _SEGMENT_ENTRY_INVALID)) {
+			if (!pmd_large(*pmdp)) {
+				ptep = gmap_pte_from_pmd(gmap, pmdp, vmaddr, &ptl_pte);
+				if (ptep) {
+					pte = *ptep;
+					if (pte_present(pte) && (pte_val(pte) & _PAGE_READ)) {
+						address = pte_val(pte) & PAGE_MASK;
+						address += gaddr & ~PAGE_MASK;
+						*val = *(unsigned long *) address;
+						pte_val(*ptep) |= _PAGE_YOUNG;
+						/* Do *NOT* clear the _PAGE_INVALID bit! */
+						rc = 0;
+					}
+				}
+				gmap_pte_op_end(ptl_pte);
+			} else {
+				address = pmd_val(*pmdp) & HPAGE_MASK;
+				address += gaddr & ~HPAGE_MASK;
 				*val = *(unsigned long *) address;
-				pte_val(*ptep) |= _PAGE_YOUNG;
-				/* Do *NOT* clear the _PAGE_INVALID bit! */
 				rc = 0;
 			}
-			gmap_pte_op_end(ptl);
+			gmap_pmd_op_end(ptl_pmd);
 		}
 		if (!rc)
 			break;
-		vmaddr = __gmap_translate(gmap, gaddr);
-		if (IS_ERR_VALUE(vmaddr)) {
-			rc = vmaddr;
-			break;
-		}
 		rc = gmap_fixup(gmap, gaddr, vmaddr, PROT_READ);
 		if (rc)
 			break;
-- 
2.14.3


* Re: [RFC 08/14] s390/mm: Make gmap_read_table EDAT1 compatible
       [not found] <979ea4cc-8a85-479c-a03d-9f8b9a9b3487@linux.ibm.com>
From: David Hildenbrand @ 2018-10-16  8:57 UTC
  To: linux-s390, kvm

>>>  	while (1) {
>>>  		rc = -EAGAIN;
>>> -		ptep = gmap_pte_op_walk(gmap, gaddr, &ptl);
>>> -		if (ptep) {
>>> -			pte = *ptep;
>>> -			if (pte_present(pte) && (pte_val(pte) & _PAGE_READ)) {
>>> -				address = pte_val(pte) & PAGE_MASK;
>>> -				address += gaddr & ~PAGE_MASK;
>>> +		vmaddr = __gmap_translate(gmap, gaddr);
>>> +		if (IS_ERR_VALUE(vmaddr))
>>> +			return vmaddr;
>>> +		pmdp = gmap_pmd_op_walk(gmap, gaddr, vmaddr, &ptl_pmd);
>>> +		if (pmdp && !(pmd_val(*pmdp) & _SEGMENT_ENTRY_INVALID)) {
>>> +			if (!pmd_large(*pmdp)) {
>>> +				ptep = gmap_pte_from_pmd(gmap, pmdp, vmaddr, &ptl_pte);
>>> +				if (ptep) {
>>> +					pte = *ptep;
>>> +					if (pte_present(pte) && (pte_val(pte) & _PAGE_READ)) {
>>> +						address = pte_val(pte) & PAGE_MASK;
>>> +						address += gaddr & ~PAGE_MASK;
>>> +						*val = *(unsigned long *) address;
>>> +						pte_val(*ptep) |= _PAGE_YOUNG;
>>> +						/* Do *NOT* clear the _PAGE_INVALID bit! */
>>> +						rc = 0;
>>> +					}
>>> +				}
>>> +				gmap_pte_op_end(ptl_pte);
>>
>> I'm confused that we have a gmap_pte_op_end() followed by a
>> gmap_pmd_op_end() although we never started a gmap_pte_op_walk() ... I
>> assume this is due to gmap_pte_from_pmd()? We should find better names
>> for these functions, otherwise this is pure magic.
>>
>> e.g. gmap_pte_op_walk_pmd() instead of gmap_pte_from_pmd()
> 
> Hrm, in my opinion pte_from_pmd is very specific, although it lacks the
> op part. How about gmap_pte_op_map? That would be closer to the
> pte_alloc/offset_map naming on the kernel side.

Yes, as long as there is "op" in there, one can see in the code how it
all plays together.
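
For illustration, the walk/end pairing with the suggested rename would
look roughly like this (gmap_pte_op_map is only the proposed name and is
assumed to keep the signature of gmap_pte_from_pmd from the patch):

	pmdp = gmap_pmd_op_walk(gmap, gaddr, vmaddr, &ptl_pmd);
	if (pmdp) {
		/* maps and locks the PTE underneath the locked PMD */
		ptep = gmap_pte_op_map(gmap, pmdp, vmaddr, &ptl_pte);
		if (ptep) {
			/* ... access *ptep ... */
		}
		gmap_pte_op_end(ptl_pte);	/* tolerates a NULL ptl_pte */
		gmap_pmd_op_end(ptl_pmd);
	}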

> 
>>
>> ... and shouldn't "gmap_pte_op_end(ptl_pte)" be inside the "if (ptep)" block?
> 
> It doesn't matter, as we check the ptl for NULL in the op_end
> functions. I use that scheme wherever it's nicer to read, like in the
> gmap protection functions.
> 
> I'll have a look if I can make that consistent either way.

Yes, as long as it's consistent it's fine for me.
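
A sketch of the pattern being described, assuming the op_end helpers in
this series simply skip a NULL ptl (not the actual gmap code):

	static inline void gmap_pte_op_end(spinlock_t *ptl)
	{
		/* nothing to unlock if the PTE walk never took a lock */
		if (ptl)
			spin_unlock(ptl);
	}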


-- 

Thanks,

David / dhildenb

