From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Michael Ellerman <mpe@ellerman.id.au>
Cc: "linuxppc-dev@ozlabs.org" <linuxppc-dev@ozlabs.org>
Subject: Re: C vdso
Date: Fri, 23 Oct 2020 08:28:55 +0200	[thread overview]
Message-ID: <be21c7c8-6828-b757-064d-20f74e5c1a31@csgroup.eu> (raw)
In-Reply-To: <cc532aa8-a9e0-a105-b7b1-ee8d723b7ed6@csgroup.eu>

Hi Michael,

On 24/09/2020 at 15:17, Christophe Leroy wrote:
> Hi Michael
> 
> On 17/09/2020 at 14:33, Michael Ellerman wrote:
>> Hi Christophe,
>>
>> Christophe Leroy <christophe.leroy@csgroup.eu> writes:
>>> Hi Michael,
>>>
>>> What is the status of the generic C vdso merge?
>>> In some mail you mentioned having difficulties getting it working on
>>> ppc64. Any progress? What's the problem? Can I help?
>>
>> Yeah sorry I was hoping to get time to work on it but haven't been able
>> to.
>>
>> It's causing crashes on ppc64, i.e. big endian.
>>
> 
>>
>> As you can see from the instruction dump we have jumped into the weeds somewhere.
>>
>> We also had the report from the kbuild robot about rela.opd being
>> discarded, which I think is indicative of a bigger problem, i.e. we don't
>> process relocations for the VDSO, but opds require relocations (they
>> contain an absolute pointer).
>>
>> I thought we could get away with that, because the VDSO entry points
>> aren't proper functions (so they don't have opds), and I didn't think
>> we'd be calling via function pointers in the VDSO code (which would
>> require opds). But it seems something is not working right.
>>
>> Sorry I haven't got back to you with those details. Things are a bit of
>> a mess inside IBM at the moment (always?), and I've been trying to get
>> everything done before I take a holiday next week.
>>
> 
> 
> Can you tell what defconfig you are using? I have been able to set up a full glibc PPC64 cross-
> compilation chain and to test it successfully under QEMU, using Nathan's vdsotest tool.


What config are you using?

Christophe

> 
> I tested with both ppc64_defconfig and pseries_defconfig.
> 
> The only problem I got is with getcpu, which segfaults both before and after applying my series,
> so I guess this is unrelated.
> 
> Not sure we can pay too much attention to the exact measurements, as this is a ppc64 QEMU running on
> x86 Linux, which itself runs in a VirtualBox VM on an x86 Windows laptop, but at least it works:
> 
> Without the series:
> 
> clock-getres-monotonic:    vdso: 389 nsec/call
> clock-gettime-monotonic:    vdso: 781 nsec/call
> clock-getres-monotonic-coarse:    vdso: 13715 nsec/call
> clock-gettime-monotonic-coarse:    vdso: 312 nsec/call
> clock-getres-monotonic-raw:    vdso: 13589 nsec/call
> clock-getres-tai:    vdso: 13827 nsec/call
> clock-gettime-tai:    vdso: 14846 nsec/call
> clock-getres-boottime:    vdso: 13596 nsec/call
> clock-gettime-boottime:    vdso: 14758 nsec/call
> clock-getres-realtime:    vdso: 327 nsec/call
> clock-gettime-realtime:    vdso: 717 nsec/call
> clock-getres-realtime-coarse:    vdso: 14102 nsec/call
> clock-gettime-realtime-coarse:    vdso: 299 nsec/call
> gettimeofday:    vdso: 771 nsec/call
> 
> With the series:
> 
> clock-getres-monotonic:    vdso: 350 nsec/call
> clock-gettime-monotonic:    vdso: 726 nsec/call
> clock-getres-monotonic-coarse:    vdso: 356 nsec/call
> clock-gettime-monotonic-coarse:    vdso: 423 nsec/call
> clock-getres-monotonic-raw:    vdso: 349 nsec/call
> clock-getres-tai:    vdso: 419 nsec/call
> clock-gettime-tai:    vdso: 724 nsec/call
> clock-getres-boottime:    vdso: 352 nsec/call
> clock-gettime-boottime:    vdso: 752 nsec/call
> clock-getres-realtime:    vdso: 351 nsec/call
> clock-gettime-realtime:    vdso: 733 nsec/call
> clock-getres-realtime-coarse:    vdso: 356 nsec/call
> clock-gettime-realtime-coarse:    vdso: 367 nsec/call
> gettimeofday:    vdso: 796 nsec/call
> 
> 
> Thanks
> Christophe
