From mboxrd@z Thu Jan  1 00:00:00 1970
From: Omkar Bolla
Subject: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Thu, 27 Sep 2018 11:14:41 +0530
To: xen-devel@lists.xensource.com

Hi,

I am using Debian as Domain-0 and Debian as Domain-U on a Hikey960 platform (ARMv8), with the Xen 4.8 stable release. I want to create a PV frontend/backend pair to share memory between Domain-0 and Domain-U.

I used the link below to create the frontend and backend:
https://fnordig.de/2016/12/02/xen-a-backend-frontend-driver-example/

But I am hitting the error below while adding the device "vdevb" to Xenstore. It is reported from xenbus_switch_state():

vdevb vdevb-0: failed to write error node for device/vdevb/0 (13 writing new state)

Please suggest how to create PV drivers.

Thanks,
Omkar B

--
This message contains confidential information and is intended only for the individual(s) named. If you are not the intended recipient, you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this mail and attached file/s is strictly prohibited. Please notify the sender immediately and delete this e-mail from your system. E-mail transmission cannot be guaranteed to be secured or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the contents of this message, which arise as a result of e-mail transmission.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
From mboxrd@z Thu Jan  1 00:00:00 1970
From: Lars Kurth
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Thu, 27 Sep 2018 10:39:36 +0100
To: Omkar Bolla
Cc: Julien Grall, xen-devel@lists.xensource.com, Stefano Stabellini, Oleksandr Andrushchenko

Adding a few people who have recently been working on PV drivers, as well as Julien.
Lars

> On 27 Sep 2018, at 06:44, Omkar Bolla <omkar.bolla@pathpartnertech.com> wrote:
>
> Hi,
>
> I am using Debian as Domain-0 and Debian as Domain-U on Hikey960
> platform (ARMv8) and using Xen-4.8 stable release. Here I want to
> create a PV frontend and backend to share memory between Domain-0 and
> Domain-U.
>
> I used below link to create frontend and backend,
> https://fnordig.de/2016/12/02/xen-a-backend-frontend-driver-example/
>
> But I am facing below errors while adding device vdevb to xenstore.
> Below errors I am getting from xenbus_switch_state().
> vdevb vdevb-0: failed to write error node for device/vdevb/0 (13 writing new state)
>
> Please suggest me, How to create PV drivers.
>
> Thanks,
> Omkar B

From mboxrd@z Thu Jan  1 00:00:00 1970
From: Oleksandr Andrushchenko
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Thu, 27 Sep 2018 13:07:04 +0300
To: Lars Kurth, Omkar Bolla
Cc: Julien Grall, xen-devel@lists.xensource.com, Stefano Stabellini, Oleksandr Andrushchenko

Hi,
On 09/27/2018 12:39 PM, Lars Kurth wrote:
> Adding a few people who have recently been working on PV drivers, as
> well as Julien
> Lars
>
>> On 27 Sep 2018, at 06:44, Omkar Bolla wrote:
>>
>> I used below link to create frontend and backend,
>> https://fnordig.de/2016/12/02/xen-a-backend-frontend-driver-example/
The link above has another link to github [1] with 2 chapters. And it looks like you have already modified the sources ("mydevice" -> "vdevb" at least). So, what are the sources you are using?

You could probably take a look at the relatively small vkbd frontend driver [2] to get some hints.

>> But I am facing below errors while adding device vdevb to xenstore.
>> Below errors I am getting from xenbus_switch_state().
>> vdevb vdevb-0: failed to write error node for device/vdevb/0 (13 writing new state)
If the sources are known then we would need the full scenario which leads to the failure. Could you please also add some debug logs into every function of the driver so we see what and when happens on both backend and frontend sides?

>> Please suggest me, How to create PV drivers.
I would go with any existing driver in the kernel as an example.

[1] https://github.com/badboy/xen-split-driver-example
[2] https://elixir.bootlin.com/linux/latest/source/drivers/input/misc/xen-kbdfront.c
[3] https://github.com/badboy/xen-split-driver-example/blob/master/chapter02/activate.sh
From mboxrd@z Thu Jan  1 00:00:00 1970
From: Juergen Gross
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Thu, 27 Sep 2018 12:16:29 +0200
To: Oleksandr Andrushchenko, Lars Kurth, Omkar Bolla
Cc: Julien Grall, xen-devel@lists.xensource.com, Stefano Stabellini

On 27/09/2018 12:07, Oleksandr Andrushchenko wrote:
>>> But I am facing below errors while adding device vdevb to xenstore.
>>> Below errors I am getting from xenbus_switch_state().
>>> vdevb vdevb-0: failed to write error node for device/vdevb/0 (13
>>> writing new state)

Error 13 is EACCES. I guess the access rights of the Xenstore nodes are not sufficient to write the needed entries.

Did you modify the Xen tools (xl/libxl) for adding the new device type? If not, you need to set up the Xenstore nodes manually.

Juergen

From mboxrd@z Thu Jan  1 00:00:00 1970
From: Oleksandr Andrushchenko
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Thu, 27 Sep 2018 13:20:14 +0300
To: Juergen Gross, Lars Kurth, Omkar Bolla
Cc: Julien Grall, xen-devel@lists.xensource.com, Stefano Stabellini

On 09/27/2018 01:16 PM, Juergen Gross wrote:
> Did you modify Xen tools (xl/libxl) for adding the new device type?
> If not you need to setup the Xenstore nodes manually.
There is a script [1] which comes with the example implementation, so I believe Omkar uses it with the "mydevice" -> "vdevb" change.

[1] https://github.com/badboy/xen-split-driver-example/blob/master/chapter02/activate.sh
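To narrow down the permission failure (error 13 when writing device/vdevb/0/state), the Xenstore access rights of both ends can be inspected from Domain-0. The sketch below only *prints* the inspection commands via a `run` wrapper, so it can be read as a plan anywhere; drop the `echo` to execute it on a real Xen host. The device name `vdevb` and domU id `1` are assumptions taken from this thread, not a fixed interface.

```shell
#!/bin/sh
# Dry-run sketch: print the commands that would show the Xenstore
# permissions for both ends of the (hypothetical) "vdevb" device.
run() { echo "$@"; }   # print only; remove the echo on a real Xen host

DOMU=1
run xenstore-ls -p "/local/domain/$DOMU/device/vdevb/0"
run xenstore-ls -p "/local/domain/0/backend/vdevb/$DOMU/0"
# In the -p output, permissions print as a letter plus a domid, e.g.
# "n0" (owner dom0, no access for others) or "r1" (dom1 may read).
# If the frontend area is owned by dom0 with no write access for domU,
# xenbus_switch_state() in the guest fails with EACCES (13).
```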
From mboxrd@z Thu Jan  1 00:00:00 1970
From: Julien Grall
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Thu, 27 Sep 2018 11:34:50 +0100
To: Oleksandr Andrushchenko, Juergen Gross, Lars Kurth, Omkar Bolla
Cc: xen-devel@lists.xensource.com, Stefano Stabellini

Hi,

On 09/27/2018 11:20 AM, Oleksandr Andrushchenko wrote:
>>>>> I am using Debian as Domain-0 and Debian as Domain-U on Hikey960
>>>>> platform (ARMv8) and using Xen-4.8 stable release. Here I want to
>>>>> create a PV frontend and backend to share memory between Domain-0
>>>>> and Domain-U.

Do you need to share the buffer dynamically? If not, you may want to have a look at "Allow setting up shared memory areas between VMs from xl config files" [2]. We aim to merge it in the next Xen release.

Cheers,

[2] https://lists.xen.org/archives/html/xen-devel/2018-08/msg00883.html

--
Julien Grall

From mboxrd@z Thu Jan  1 00:00:00 1970
From: Omkar Bolla
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Thu, 27 Sep 2018 16:05:24 +0530
To: Oleksandr_Andrushchenko@epam.com
Cc: jgross@suse.com, xen-devel@lists.xensource.com, andr2000@gmail.com, lars.kurth.xen@gmail.com, Julien Grall, Stefano Stabellini

Hi,

Sorry, I forgot: I used the code from chapter 02 of the GitHub example linked from that page, and I just changed the name "mydevice" to "vdevb".

> Error 13 is EACCES. I guess the access rights of the Xenstore nodes
> are not sufficient to write the needed entries.
Where do I have to provide the access rights: from kernel code, from the command line in Domain-0, or by modifying the Xen sources?
Is there anything I have to change in the xenbits Xen 4.8 source code to add a new PV device?

> Did you modify Xen tools (xl/libxl) for adding the new device type?
No. Is it necessary to modify something in xl/libxl to add a new device type?

> If not you need to setup the Xenstore nodes manually.
Does setting up Xenstore manually mean using commands?

Thanks,
Omkar B
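On the "using commands?" question: yes, the nodes can be created from Domain-0 with the xenstore command-line tools, which is what the example's activate.sh does. The sketch below only *prints* the commands via a `run` wrapper so it is safe to read anywhere; drop the `echo` to execute on a Xen host. The device name `vdevb`, domU id `1`, and the exact node layout are assumptions modelled on the linked split-driver example, not an official interface.

```shell
#!/bin/sh
# Dry-run sketch: print the xenstore commands that would create the
# frontend/backend node areas for a hypothetical "vdevb" device.
run() { echo "$@"; }   # print only; remove the echo on a real Xen host

DOMU=1
FE="/local/domain/$DOMU/device/vdevb/0"
BE="/local/domain/0/backend/vdevb/$DOMU/0"

# Backend area: owned by dom0, readable by the frontend domain.
run xenstore-write "$BE/frontend" "$FE"
run xenstore-write "$BE/frontend-id" "$DOMU"
run xenstore-write "$BE/state" 1          # XenbusStateInitialising
# Frontend area: must end up writable by the frontend domain.
run xenstore-write "$FE/backend" "$BE"
run xenstore-write "$FE/backend-id" 0
run xenstore-write "$FE/state" 1
# Permissions: first entry is the owner, later entries are grants
# (n = none, r = read). Without this, state switches fail with EACCES.
run xenstore-chmod -r "$BE" n0 "r$DOMU"
run xenstore-chmod -r "$FE" "n$DOMU" r0
```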
--00000000000031cf0b0576d7e6c5-- --===============4428069996853167348== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: base64 Content-Disposition: inline X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVs IG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVucHJvamVjdC5vcmcKaHR0cHM6Ly9saXN0 cy54ZW5wcm9qZWN0Lm9yZy9tYWlsbWFuL2xpc3RpbmZvL3hlbi1kZXZlbA== --===============4428069996853167348==-- From mboxrd@z Thu Jan 1 00:00:00 1970 From: Juergen Gross Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains Date: Thu, 27 Sep 2018 13:03:11 +0200 Message-ID: References: Mime-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: base64 Return-path: In-Reply-To: Content-Language: en-US List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" To: Omkar Bolla , Oleksandr_Andrushchenko@epam.com Cc: andr2000@gmail.com, lars.kurth.xen@gmail.com, Julien Grall , xen-devel@lists.xensource.com, Stefano Stabellini List-Id: xen-devel@lists.xenproject.org T24gMjcvMDkvMjAxOCAxMjozNSwgT21rYXIgQm9sbGEgd3JvdGU6Cj4gSGksCj4gCj4gU29ycnks IEkgZm9yZ290LCBJIHVzZWQgY29kZSBmcm9tIGdpdGh1YiBjaGFwdGVyIFsyXSBmcm9tIHRoYXQg bGluaywgYW5kCj4gSSBqdXN0IGNoYW5nZWQgbmFtZSAibXlkZXZpY2UiwqAgdG8gInZkZXZiIgoK T2theS4KCj4gCj4+IEVycm9yIDEzIGlzIEVBQ0NFU1MuIEkgZ3Vlc3MgdGhlIGFjY2VzcyByaWdo dHMgb2YgdGhlIFhlbnN0b3JlIG5vZGVzCj4+IGFyZSBub3Qgc3VmZmljaWVudCB0byB3cml0ZSB0 aGUgbmVlZGVkIGVudHJpZXMuCj4gV2hlcmUgSSBoYXZlIHRvIHByb3ZpZGUgYWNjZXNzIHJpZ2h0 cywgaS5lIGZyb20gS2VybmVsIGNvZGUgb3IgZnJvbSBmcm9tCj4gY29tbWFuZCBsaW5lIGluIGRv bWFpbi0wIG9yIG1vZGlmeSBpbiB4ZW4gc291cmNlPwoKSSBndWVzcyB5b3UgaGF2ZSB5b3VyIGZy b250ZW5kIGFscmVhZHkgbG9hZGVkIHdoZW4gcnVubmluZyB0aGUKc2NyaXB0IGNyZWF0aW5nIHRo ZSBYZW5zdG9yZSBlbnRyaWVzPwoKVGhpcyB3b3VsZCBleHBsYWluIHRoZSBwcm9ibGVtOiBhcyBz b29uIGFzIHRoZSBlbnRyaWVzIGFyZSB3cml0dGVuCnRoZSBmcm9udGVuZCB3aWxsIHJlYWN0LiBB 
dCB0aGlzIHBvaW50IHRoZSBhY2Nlc3MgcmlnaHRzIGFyZSBub3Qgc2V0dXAKcHJvcGVybHksIHRo aXMgaXMgZG9uZSBhIGxpdHRsZSBiaXQgbGF0ZXIgaW4gdGhlIHNjcmlwdC4KClBvc3NpYmxlIHNv bHV0aW9ucyBhcmUgdG8gZWl0aGVyIGxvYWQgdGhlIGZyb250ZW5kIGRyaXZlciBvbmx5IGFmdGVy CnNldHRpbmcgdXAgdGhlIFhlbnN0b3JlIGVudHJpZXMsIG9yIHRvIHBhdXNlIHRoZSBkb21haW4g d2hpbGUgZG9pbmcKc28gYW5kIHVucGF1c2UgaXQgYWZ0ZXJ3YXJkcyAob3Igc3RhcnQgdGhlIGRv bWFpbiBwYXVzZWQsIGNyZWF0ZSB0aGUKWGVuc3RvcmUgZW50cmllcywgYW5kIHVucGF1c2UgdGhl IGRvbWFpbikuCgpUaGUgcmVhbGx5IGNvcnJlY3Qgd2F5IHRvIGRvIGl0IHdvdWxkIGJlIHRvIHNl dHVwIFhlbnN0b3JlIGluIGEgc2luZ2xlCnRyYW5zYWN0aW9uICh3cml0ZSBhbGwgdGhlIGVudHJp ZXMgYW5kIHNldCBhY2Nlc3MgcmlnaHRzKS4KCj4gQW55IHRoaW5nIHRoYXQgSSBoYXZlIHRvIGRv L2NoYW5nZSBpbiB4ZW5iaXRzIHhlbi00Ljggc291cmNlcyBjb2RlLCB0bwo+IGFkZCBuZXcgUFYg ZGV2aWNlPwoKT25seSBpZiB5b3Ugd2FudCB0byBpbmNsdWRlIGRvbWFpbiBjb25maWcgZmlsZSBl bnRyaWVzIGZvciB5b3VyIGRldmljZS4KCj4gCj4+IERpZCB5b3UgbW9kaWZ5IFhlbiB0b29scyAo eGwvbGlieGwpIGZvciBhZGRpbmcgdGhlIG5ldyBkZXZpY2UgdHlwZT8KPiBObywgaXMgaXQgbmVl ZGVkIHRvIG1vZGlmeSBzb21lIHRoaW5nIGluIHhsL2xpYnhsIGZvciBhZGRpbmcgbmV3IGRldmlj ZQo+IHR5cGU/CgpUaGlzIHdhcyBqdXN0IGEgcXVlc3Rpb24gdG8gbGVhcm4gaG93IFhlbnN0b3Jl IGlzIGJlaW5nIGluaXRpYWxpemVkCmluIHlvdXIgc2NlbmFyaW8uCgo+IAo+PiBJZiBub3QgeW91 IG5lZWQgdG8gc2V0dXAgdGhlIFhlbnN0b3JlIG5vZGVzIG1hbnVhbGx5Lgo+IFNldHVwIG1hbnVh bGx5IFhlbnN0b3JlIG1lYW5zLCB1c2luZyBjb21tYW5kcz8KClllcywgbGlrZSB5b3VyIHNjcmlw dCBkb2VzLgoKCkp1ZXJnZW4KCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbnByb2pl Y3Qub3JnCmh0dHBzOi8vbGlzdHMueGVucHJvamVjdC5vcmcvbWFpbG1hbi9saXN0aW5mby94ZW4t ZGV2ZWw= From mboxrd@z Thu Jan 1 00:00:00 1970 From: Omkar Bolla Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains Date: Fri, 28 Sep 2018 18:25:06 +0530 Message-ID: References: Mime-Version: 1.0 Content-Type: multipart/mixed; boundary="000000000000a0fa920576edf7f3" Return-path: In-Reply-To: List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , 
To: jgross@suse.com
Cc: xen-devel@lists.xensource.com, Oleksandr_Andrushchenko@epam.com, Oleksandr Andrushchenko, Lars Kurth, Julien Grall, Stefano Stabellini
List-Id: xen-devel@lists.xenproject.org

Hi,

I tried running the script after pausing the domain, and unpaused the
domain after the script had run, but I ended up with the same error.

Below is the PV device log; my FE and BE drivers and the script I used
are attached.

root@hikey960:/debian# [XEN_BUF]xen_vdevb_be_probe(): 124: Probe called. We are good to go.
[  165.087419] [XEN_BUF]xen_vdevb_be_probe(): 125:  ffffffc017fb7000 1
[  165.093759] [XEN_BUF]xen_vdevb_be_probe(): 137: info->domid: 1
[  165.099641] [XEN_BUF]xen_vdevb_be_probe(): 138: devicetype: vdevb, nodename: backend/vdevb/1/0, otherend: /local/domain/1/device/vdevb/0
[  165.112939] [XEN_BUF]xen_vdevb_be_frontend_changed(): 177: dev->state: XenbusStateInitialising-1, frontend_state: XenbusStateInitialising-1

root@hikey960:/debian# xl console debian
[   22.243570] [XEN_BUF]xen_vdevb_fe_probe(): 24: Probe called. We are good to go.
[   22.243606] [XEN_BUF]xen_vdevb_fe_probe(): 25:  ffffffc0160b4000 0
[   22.243620] [XEN_BUF]xen_vdevb_fe_probe(): 38: info->domid: 0
[   22.243633] [XEN_BUF]xen_vdevb_fe_probe(): 39: devicetype: vdevb, nodename: device/vdevb/0, otherend: /local/domain/0/backend/vdevb/1/0
[   22.244669] [XEN_BUF]xen_vdevb_fe_backend_changed(): 64: dev->state: XenbusStateInitialising-1, backend_state: XenbusStateInitWait-2
[   22.244701] [XEN_BUF]frontend_connect(): 53: Connecting the frontend now
[   22.245866] vdevb vdevb-0: 13 writing new state
[   22.246085] vdevb vdevb-0: failed to write error node for device/vdevb/0 (13 writing new state)
[   22.250005] vdevb vdevb-0: 13 writing new state
[   22.250220] vdevb vdevb-0: failed to write error node for device/vdevb/0 (13 writing new state)
root@hikey960:~#

Thanks,
Omkar B

On Thu, Sep 27, 2018 at 4:33 PM Juergen Gross wrote:
> On 27/09/2018 12:35, Omkar Bolla wrote:
> > Hi,
> >
> > Sorry, I forgot, I used code from github chapter [2] from that link, and
> > I just changed name "mydevice" to "vdevb"
>
> Okay.
>
> >> Error 13 is EACCES. I guess the access rights of the Xenstore nodes
> >> are not sufficient to write the needed entries.
> > Where I have to provide access rights, i.e from Kernel code or from
> > command line in domain-0 or modify in xen source?
>
> I guess you have your frontend already loaded when running the
> script creating the Xenstore entries?
>
> This would explain the problem: as soon as the entries are written
> the frontend will react. At this point the access rights are not setup
> properly, this is done a little bit later in the script.
>
> Possible solutions are to either load the frontend driver only after
> setting up the Xenstore entries, or to pause the domain while doing
> so and unpause it afterwards (or start the domain paused, create the
> Xenstore entries, and unpause the domain).
>
> The really correct way to do it would be to setup Xenstore in a single
> transaction (write all the entries and set access rights).
>
> > Any thing that I have to do/change in xenbits xen-4.8 sources code, to
> > add new PV device?
>
> Only if you want to include domain config file entries for your device.
>
> >> Did you modify Xen tools (xl/libxl) for adding the new device type?
> > No, is it needed to modify some thing in xl/libxl for adding new device
> > type?
>
> This was just a question to learn how Xenstore is being initialized
> in your scenario.
>
> >> If not you need to setup the Xenstore nodes manually.
> > Setup manually Xenstore means, using commands?
>
> Yes, like your script does.
>
>
> Juergen
[Attachment: xen_buf_frontend.c -- the vdevb frontend driver (xenbus probe/remove,
frontend_connect, backend_changed state handling, xenbus_register_frontend);
base64 body omitted here]

[Attachment: xen_buf_backend.c -- the vdevb backend driver (set_backend_state
state machine, frontend_changed handling, xenbus_register_backend); base64 body
omitted here]

[Attachment: xen_buf.h -- shared header defining the pr_log() macro and the
xenbus_state_array[] of state names; base64 body omitted here]

Attachment activate.sh (decoded from base64):

DOMU_ID=$1

if [ -z "$DOMU_ID" ]; then
  echo "Usage: $0 [domU ID]]"
  echo
  echo "Connects the new device, with dom0 as backend, domU as frontend"
  exit 1
fi

DEVICE=vdevb
DOMU_KEY=/local/domain/$DOMU_ID/device/$DEVICE/0
DOM0_KEY=/local/domain/0/backend/$DEVICE/$DOMU_ID/0

# Tell the domU about the new device and its backend
xenstore-write $DOMU_KEY/backend-id 0
xenstore-write $DOMU_KEY/backend "/local/domain/0/backend/$DEVICE/$DOMU_ID/0"

# Tell the dom0 about the new device and its frontend
xenstore-write $DOM0_KEY/frontend-id $DOMU_ID
xenstore-write $DOM0_KEY/frontend "/local/domain/$DOMU_ID/device/$DEVICE/0"

# Make sure the domU can read the dom0 data
xenstore-chmod $DOM0_KEY r

xl pause debian
# Activate the device, dom0 needs to be activated last
xenstore-write $DOMU_KEY/state 1
xenstore-write $DOM0_KEY/state 1
xl unpause debian

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Juergen Gross <jgross@suse.com>
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Fri, 28 Sep 2018 15:42:52 +0200
To: Omkar Bolla
Cc: xen-devel@lists.xensource.com, Oleksandr_Andrushchenko@epam.com, Oleksandr Andrushchenko, Lars Kurth, Julien Grall, Stefano Stabellini
List-Id: xen-devel@lists.xenproject.org
On 28/09/2018 14:55, Omkar Bolla wrote:
> Hi,
> I tried to run script after pause the domain and unpause domain after
> run script. But I ended up with same error

I looked at the script again, it is wrong. The permissions should
be set for each node under the root path of the respective domains.
The first permission should be "n$domid" ($domid is the owner, who
can always read/write; the "n" means "no access" for all domains not
explicitly listed), and the second permission should be "r$domid", as
the other side should be able to read only.

In order to do it correctly the script should be:

#!/bin/bash

DOMU_ID=$1

if [ -z "$DOMU_ID" ]; then
  echo "Usage: $0 [domU ID]]"
  echo
  echo "Connects the new device, with dom0 as backend, domU as frontend"
  exit 1
fi

# Pause domU as a script can't write an entry and set permission
# in a single operation.
xl pause $DOMU_ID

DEVICE=mydevice
DOMU_KEY=/local/domain/$DOMU_ID/device/$DEVICE/0
DOM0_KEY=/local/domain/0/backend/$DEVICE/$DOMU_ID/0

# Tell the domU about the new device and its backend
xenstore-write $DOMU_KEY/backend-id 0
xenstore-write $DOMU_KEY/backend "/local/domain/0/backend/$DEVICE/$DOMU_ID/0"

# Tell the dom0 about the new device and its frontend
xenstore-write $DOM0_KEY/frontend-id $DOMU_ID
xenstore-write $DOM0_KEY/frontend "/local/domain/$DOMU_ID/device/$DEVICE/0"

# Activate the device, dom0 needs to be activated last
xenstore-write $DOMU_KEY/state 1
xenstore-write $DOM0_KEY/state 1

# Make sure the domU can read the dom0 data
xenstore-chmod -r $DOM0_KEY n0 r$DOMU_ID
xenstore-chmod -r $DOMU_KEY n$DOMU_ID r0

xl unpause $DOMU_ID


Juergen

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Omkar Bolla
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Tue, 2 Oct 2018 15:33:12 +0530
To: jgross@suse.com
Cc: xen-devel@lists.xensource.com, Oleksandr_Andrushchenko@epam.com, Oleksandr Andrushchenko, Lars Kurth, Julien Grall, Stefano Stabellini
List-Id: xen-devel@lists.xenproject.org

Hi,

Thanks, the basic state change is working now, after using the above script.

As I said, I want to share a buffer between two domains.
Could you please suggest an outline of how I can share a buffer between
two domains (guest and host)?
Thanks,
Omkar B

On Fri, Sep 28, 2018 at 7:12 PM Juergen Gross wrote:

> On 28/09/2018 14:55, Omkar Bolla wrote:
> > Hi,
> > I tried to run the script after pausing the domain, and unpaused the
> > domain after running the script. But I ended up with the same error.
>
> I looked at the script again, it is wrong. The permissions should
> be set for each node under the root path of the respective domains,
> the first permission should be "n$domid" ($domid is the owner who
> can always read/write, the n is "no access" for all domains not
> explicitly listed), the second permission should be "r$domid" as
> the other side should be able to read only.
>
> In order to do it correctly the script should be:
>
> #!/bin/bash
>
> DOMU_ID=$1
>
> if [ -z "$DOMU_ID" ]; then
>   echo "Usage: $0 [domU ID]"
>   echo
>   echo "Connects the new device, with dom0 as backend, domU as frontend"
>   exit 1
> fi
>
> # Pause domU as a script can't write an entry and set permission
> # in a single operation.
> xl pause $DOMU_ID
>
> DEVICE=mydevice
> DOMU_KEY=/local/domain/$DOMU_ID/device/$DEVICE/0
> DOM0_KEY=/local/domain/0/backend/$DEVICE/$DOMU_ID/0
>
> # Tell the domU about the new device and its backend
> xenstore-write $DOMU_KEY/backend-id 0
> xenstore-write $DOMU_KEY/backend "/local/domain/0/backend/$DEVICE/$DOMU_ID/0"
>
> # Tell the dom0 about the new device and its frontend
> xenstore-write $DOM0_KEY/frontend-id $DOMU_ID
> xenstore-write $DOM0_KEY/frontend "/local/domain/$DOMU_ID/device/$DEVICE/0"
>
> # Activate the device, dom0 needs to be activated last
> xenstore-write $DOMU_KEY/state 1
> xenstore-write $DOM0_KEY/state 1
>
> # Make sure the domU can read the dom0 data
> xenstore-chmod -r $DOM0_KEY n0 r$DOMU_ID
> xenstore-chmod -r $DOMU_KEY n$DOMU_ID r0
>
> xl unpause $DOMU_ID
>
> Juergen

--
This message contains confidential information and is intended only for the individual(s) named. If you are not the intended recipient, you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this mail and attached file/s is strictly prohibited. Please notify the sender immediately and delete this e-mail from your system. E-mail transmission cannot be guaranteed to be secured or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the contents of this message, which arise as a result of e-mail transmission.
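[Editor's note: the xenstore layout and permissions that the script above sets up can be sanity-checked offline with a small helper. This is only an illustrative sketch, not part of any Xen tooling; the device name `mydevice` and the domid are the example values from the script.]

```python
# Model of the xenstore nodes and permissions set up by the script above.
# Offline sanity check only -- it does not talk to a real xenstore.

def pv_device_nodes(domu_id, device="mydevice"):
    """Return the frontend/backend paths and permission lists the script sets.

    Permission format follows xenstore-chmod: the first entry names the owner
    ('n<domid>' = full access for the owner, no access for unlisted domains),
    the second grants the peer domain read-only access ('r<domid>').
    """
    domu_key = f"/local/domain/{domu_id}/device/{device}/0"
    dom0_key = f"/local/domain/0/backend/{device}/{domu_id}/0"
    return {
        "frontend": domu_key,
        "backend": dom0_key,
        "frontend_perms": [f"n{domu_id}", "r0"],
        "backend_perms": ["n0", f"r{domu_id}"],
    }

nodes = pv_device_nodes(1)
print(nodes["frontend"])       # /local/domain/1/device/mydevice/0
print(nodes["backend_perms"])  # ['n0', 'r1']
```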
From: Julien Grall
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Wed, 3 Oct 2018 10:53:20 +0100
To: Omkar Bolla, jgross@suse.com
Cc: Oleksandr Andrushchenko, Lars Kurth, xen-devel@lists.xensource.com, Stefano Stabellini

On 10/02/2018 11:03 AM, Omkar Bolla wrote:
> Hi,
>
> Thanks,
> Basic state change is working now, after using above script.
>
> As I said, I want to share buffer between two domains.
> Could you please suggest outlines, how can I share buffer between 2
> domains(Guest and Host)?

My question on a previous e-mail was left unanswered. Do you have
requirements to share the buffer dynamically?

If not, you may want to have a look at "Allow setting up shared memory
areas between VMs from xl config files" [2]. We aim to merge it in the
next Xen release.

Cheers,

[2] https://lists.xen.org/archives/html/xen-devel/2018-08/msg00883.html

Please configure your client to remove your company disclaimer.

--
Julien Grall

From: Omkar Bolla
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Mon, 8 Oct 2018 14:42:46 +0530
To: Julien Grall
Cc: jgross@suse.com, xen-devel@lists.xensource.com, Oleksandr Andrushchenko, Lars Kurth, Stefano Stabellini

Hi,

Sorry for the late response.

On Wed, Oct 3, 2018 at 3:23 PM Julien Grall wrote:

> My question on a previous e-mail was left unanswered. Do you have
> requirements to share the buffer dynamically?

Yes, I want to share a buffer dynamically, but not more than 1024 bytes.

> If not, you may want to have a look at "Allow setting up shared memory
> areas between VMs from xl config files" [2]. We aim to merge it in the
> next Xen release.
>
> [2] https://lists.xen.org/archives/html/xen-devel/2018-08/msg00883.html

This is also okay, but the problem here is that I am using the Xen 4.8 stable release, because that is what works on the Hikey960 (ARMv8).
Is there any other way to share a buffer dynamically?

> Please configure your client to remove your company disclaimer.

I am looking into the settings to remove the disclaimer.

Thanks,
Omkar B
From: Julien Grall
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Mon, 8 Oct 2018 11:30:21 +0100
To: Omkar Bolla
Cc: jgross@suse.com, xen-devel@lists.xensource.com, Oleksandr Andrushchenko, Lars Kurth, Stefano Stabellini

On 08/10/2018 10:12, Omkar Bolla wrote:
> This is also okay, but problem here is I am using 4.8 stable xen
> because it is working on Hkey960(ArmV8)

This is because you can't bring up secondary CPUs on the Hikey with Xen
4.11 [1], right? It would be nice to find where the bug was introduced,
because Xen 4.8 is out of support and does not contain the latest fixes
(such as Meltdown/Spectre).

> Is there any other way to share buffer dynamically?

You would have to write your own PV drivers or port the series to Xen 4.8.

Cheers,

[1] https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg21576.html

--
Julien Grall

From: Omkar Bolla
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Wed, 17 Oct 2018 17:54:46 +0530
To: Julien Grall
Cc: jgross@suse.com, xen-devel@lists.xensource.com, Oleksandr Andrushchenko, Lars Kurth, Stefano Stabellini

Hi,

I have started looking for the patch that introduced the ARMv8 secondary CPUs issue.

I just want to start the PV vdevb before the domU Debian rootfs is mounted. Is that possible?

Thanks,
Omkar B

On Mon, Oct 8, 2018 at 4:00 PM Julien Grall wrote:

> On 08/10/2018 10:12, Omkar Bolla wrote:
> > This is also okay, but problem here is I am using 4.8 stable xen
> > because it is working on Hkey960(ArmV8)
>
> This is because you can't bring up secondary CPUs on the Hikey with Xen
> 4.11 [1], right? It would be nice to find where the bug was introduced
> because Xen 4.8 is out of support and does not contain the latest fixes
> (such as Meltdown/Spectre).
>
> > Is there any other way to share buffer dynamically?
> You would have to write your own PV drivers or port the series to Xen 4.8.
>
> Cheers,
>
> [1] https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg21576.html
>
> --
> Julien Grall
From: Julien Grall
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Wed, 31 Oct 2018 19:41:45 +0000
To: Omkar Bolla
Cc: jgross@suse.com, xen-devel@lists.xensource.com, Oleksandr Andrushchenko, Lars Kurth, Stefano Stabellini

On 10/17/18 1:24 PM, Omkar Bolla wrote:
> Hi,

Hi Omkar,

> I have started finding which patch introduced Armv8 Secondary CPUs issue.
>
> I just want to start PV vdevb before domainU debian rootfs mount. Is it
> possible?

May I ask why you need the dependency on the rootfs?

Cheers,

--
Julien Grall

From: Omkar Bolla
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Thu, 1 Nov 2018 14:45:05 +0530
To: Julien Grall
Cc: jgross@suse.com, xen-devel@lists.xensource.com, Oleksandr Andrushchenko, Lars Kurth, Stefano Stabellini

Hi,

> May I ask why you need the dependency on the rootfs?

I am trying to pass through the display to the guest domain, and to do that the display driver needs clocks. I have written a simple, basic clock PV frontend and backend, so I thought these clocks had to be initialised before the display driver initialises. But if I start the domain and the clock script one after the other, the clocks get initialised properly, so that problem is solved.

But I still have one doubt: is it possible to do something in the xenbits sources so that this starts automatically when the unprivileged domain is started?
I have one more question, about pass-through. To implement pass-through I took reference from the link below:

https://wiki.xen.org/images/1/17/Device_passthrough_xen.pdf

I added 'xen-passthrough' to the actual dom0 dtb and created a new dtb with the below nodes in the passthrough node:

============================================================================
dpe: dpe@10004000 {
	compatible = "hisilicon,hi3660-dpe";
	status = "ok";
#if 0	//ACTUAL REG PROPERTY of DISPLAY
	reg = <0x0 0xE8600000 0x0 0x80000>,
	      <0x0 0xFFF35000 0 0x1000>,
	      <0x0 0xFFF0A000 0 0x1000>,
	      <0x0 0xFFF31000 0 0x1000>,
	      <0x0 0xE86C0000 0 0x10000>;
#endif
	//reg = <0x0 0x10004000 0x0 0x80000>,
	reg = <0x0 0x10004000 0x0 0x80000>,
	      <0x0 0x10084000 0 0x1000>,
	      <0x0 0x10085000 0 0x1000>,
	      <0x0 0x10086000 0 0x1000>,
	      <0x0 0x100C4000 0 0x10000>;
	//    <0x0 0x10087000 0 0x10000>;
	interrupts = <0 245 4>;
	clocks = <&clk_xen HI3660_ACLK_GATE_DSS>,
		 <&clk_xen HI3660_PCLK_GATE_DSS>,
		 <&clk_xen HI3660_CLK_GATE_EDC0>,
		 <&clk_xen HI3660_CLK_GATE_LDI0>,
		 <&clk_xen HI3660_CLK_GATE_LDI1>,
		 <&clk_xen HI3660_CLK_GATE_DSS_AXI_MM>,
		 <&clk_xen HI3660_PCLK_GATE_MMBUF>;
	clock-names = "aclk_dss",
		      "pclk_dss",
		      "clk_edc0",
		      "clk_ldi0",
		      "clk_ldi1",
		      "clk_dss_axi_mm",
		      "pclk_mmbuf";
	dma-coherent;

	port {
		dpe_out: endpoint {
			remote-endpoint = <&dsi_in>;
		};
	};
};

dsi: dsi@10097000 {
	compatible = "hisilicon,hi3660-dsi";
	status = "ok";
#if 0	//ACTUAL REG PROPERTY of DISPLAY
	reg = <0 0xE8601000 0 0x7F000>,
	      <0 0xFFF35000 0 0x1000>;
#endif
	// reg = <0 0x10097000 0 0x7F000>,
	//       <0 0x10116000 0 0x1000>;
	reg = <0 0x10004000 0 0x80000>,
	      <0 0x10084000 0 0x1000>;
	clocks = <&clk_xen HI3660_CLK_GATE_TXDPHY0_REF>,
		 <&clk_xen HI3660_CLK_GATE_TXDPHY1_REF>,
		 <&clk_xen HI3660_CLK_GATE_TXDPHY0_CFG>,
		 <&clk_xen HI3660_CLK_GATE_TXDPHY1_CFG>,
		 <&clk_xen HI3660_PCLK_GATE_DSI0>,
		 <&clk_xen HI3660_PCLK_GATE_DSI1>;
	clock-names = "clk_txdphy0_ref",
		      "clk_txdphy1_ref",
		      "clk_txdphy0_cfg",
		      "clk_txdphy1_cfg",
		      "pclk_dsi0",
		      "pclk_dsi1";
	#address-cells = <1>;
	#size-cells = <0>;
};
#endif

clocks {
	compatible = "simple-bus";
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	clk_xen: xen_clk@0 {
		compatible = "xen,xen-vclk";
		#clock-cells = <1>;
	};
};
============================================================================

Below is my 'debian.cfg' file:

============================================================================
kernel = "/debian/Image"
device_tree = "/debian/domu.dtb"
memory = 512
vcpus = 8
cpus = "all"
name = "debian"

################# DPE ################
#iomem = [ "0xE8600,0x80@0x10004", "0xFFF35,1@0x10084", "0xFFF0A,1@0x10085", "0xFFF31,1@0x10086", "0xE86C0,10@0x10087"]
#iomem = [ "0xE8600,0x80", "0xFFF35,1", "0xFFF0A,1", "0xFFF31,1", "0xE86C0,10"]
irqs = [ 277 ]
iomem = [ "0xE8600,80@0x10004" ]
iomem = [ "0xFFF35,1@0x10084" ]
iomem = [ "0xFFF0A,1@0x10085" ]
iomem = [ "0xFFF31,1@0x10086" ]
iomem = [ "0xE86C0,10@0x100C4" ]
#iomem = [ "0xE86C0,10@0x10087"]
#iomem = [ "0xE8600,80@0x00000" ]
################# DPE ################

################# DSI ################
#iomem = [ "0xE8601,0x7F", "0xFFF35,1"]
#iomem = [ "0xE8601,0x7F@0x10097", "0xFFF35,1@0x10116", "0xE8601,0x7F@0x10195"]
#iomem = [ "0xE8601,7F@0x10097" ]
#iomem = [ "0xFFF35,1@0x10116" ]
iomem = [ "0xE8601,7F@0x10005" ]
iomem = [ "0xFFF35,1@0x10084" ]
################# DSI ################

#vif = ['mac=00:16:3e:64:b8:40,bridge=xenbr0']
#nics = 1
#vif = [ 'eth0=00:60:00:00:00:01' ]
disk = ['/dev/loop1,raw,xvda,w']
extra = "earlyprintk=xenboot console=hvc0 root=/dev/xvda rootfstype=ext4 rw video=HDMI-A-1:1280x720@60"
============================================================================

Here I am using the same IO space (GFNs) for the DPE and DSI nodes and get the error below; I also tried with different GFNs and get the same error.
But after adding this, everything is good until I try to remap the iomem a second time, at which point I get the error below:

============================================================================
[ 3.215021] OF: rrrrrrrrrrrr: start: 0x10004000, sz = 0x80000
[ 3.215062] [DISPLAY] dsi_parse_dt(): 1536: of device: /passthrough/dsi@10097000
[ 3.215083] [DISPLAY] dsi_parse_dt(): 1537: +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[ 3.215108] [DISPLAY] dsi_parse_dt(): 1540: ctx->base: ffffff800bd01000
[ 3.215126] [DISPLAY] dsi_parse_dt(): 1541:
[ 3.215136] OF: rrrrrrrrrrrr: start: 0x10084000, sz = 0x1000
[ 3.215169] [DISPLAY] dsi_parse_dt(): 1548:
[ 4.159087] [DISPLAY] dsi_parse_dt(): 1563:
[ 4.159092] [DISPLAY] dsi_parse_dt(): 1568:
[ 4.159132] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: clk_txdphy0_ref,1
[ 4.159163] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
[ 4.159399] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
[ 4.159626] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
[ 4.160218] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
[ 4.160359] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3
[ 4.160532] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0
[ 4.160542] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done
[ 4.160545] [D][XEN_VCLK]xen_of_clk_src_onecell_get(): 286: Xfer done...
[ 4.160554] [DISPLAY] dsi_parse_dt(): 1575:
[ 4.160560] [D][XEN_VCLK]vclk_round_rate(): 224: called...
[ 4.160567] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: clk_txdphy0_ref,4,19200000
[ 4.160570] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
[ 4.161095] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
[ 4.161331] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
[ 4.161946] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
[ 4.162120] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3
[ 4.162284] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0
[ 4.162295] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done
[ 4.162301] [DISPLAY] dsi_parse_dt(): 1583:
[ 4.162314] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: clk_txdphy0_cfg,1
[ 4.162316] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
[ 4.162641] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
[ 4.162984] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
[ 4.163596] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
[ 4.167753] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3
[ 4.167955] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0
[ 4.167968] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done
[ 4.167971] [D][XEN_VCLK]xen_of_clk_src_onecell_get(): 286: Xfer done...
[ 4.167979] [DISPLAY] dsi_parse_dt(): 1593:
[ 4.167982] [D][XEN_VCLK]vclk_round_rate(): 224: called...
[ 4.167985] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: clk_txdphy0_cfg,4,19200000
[ 4.167992] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
[ 4.168244] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
[ 4.168476] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
[ 4.169101] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
[ 4.169262] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3
[ 4.169448] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0
[ 4.169491] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done
[ 4.169510] [DISPLAY] dsi_parse_dt(): 1601:
[ 4.169535] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: pclk_dsi0,1
[ 4.169554] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
[ 4.169803] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
[ 4.170019] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
[ 4.170619] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
[ 4.170779] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3
[ 4.170965] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0
[ 4.170978] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done
[ 4.170981] [D][XEN_VCLK]xen_of_clk_src_onecell_get(): 286: Xfer done...
[ 4.170989] [DISPLAY] dsi_parse_dt(): 1611:
[ 4.170992] [DISPLAY] dsi_probe(): 1654: Before component add
[ 4.170997] [DISPLAY] compare_of(): 242:
[ 4.171002] [DISPLAY] kirin_drm_bind(): 257:
[ 4.171004] [drm] +.
[ 4.171386] [DISPLAY] kirin_drm_kms_init(): 105:
[ 4.171391] [drm] +.
[ 4.212543] [DISPLAY] kirin_drm_mode_config_init(): 91:
[ 4.212547] [DISPLAY] dss_drm_init(): 638:
[ 4.212563] [drm] +.
[ 4.212585] [DISPLAY] dss_dts_parse(): 513:
[ 4.212603] [DISPLAY] dss_dts_parse(): 530: of device: /passthrough/dpe@10004000
[ 4.212635] [DISPLAY] dss_dts_parse(): 531: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[ 4.212661] [DISPLAY] dss_dts_parse(): 532: ctx->base: ffffff800bd00000
[ 4.212688] Unhandled fault: ttbr address size fault (0x96000000) at 0xffffff800bd01000
[ 4.212708] Mem abort info:
[ 4.212720]   Exception class = DABT (current EL), IL = 32 bits
[ 4.212738]   SET = 0, FnV = 0
[ 4.212751]   EA = 0, S1PTW = 0
[ 4.212763] Data abort info:
[ 4.212776]   ISV = 0, ISS = 0x00000000
[ 4.212789]   CM = 0, WnR = 0
[ 4.212806] Internal error: : 96000000 [#1] PREEMPT SMP
[ 4.212821] Modules linked in:
[ 4.212839] CPU: 7 PID: 99 Comm: kworker/7:1 Tainted: G S 4.14.0-rc7 #456
[ 4.212857] Hardware name: XENVM-4.8 (DT)
[ 4.212878] Workqueue: events deferred_probe_work_func
[ 4.212893] task: ffffffc01abe6000 task.stack: ffffff8009878000
[ 4.212916] PC is at dss_drm_init+0x1a8/0x680
[ 4.212931] LR is at dss_drm_init+0x1a0/0x680
[ 4.212945] pc : [] lr : [] pstate: 40000045
[ 4.212963] sp : ffffff800987ba20
[ 4.212973] x29: ffffff800987ba30 x28: ffffffc01bff42e8
[ 4.212990] x27: ffffff800bd01000 x26: ffffffc018d25760
[ 4.213006] x25: ffffff80090f8c70 x24: ffffffc017212800
[ 4.213023] x23: ffffff8008e32000 x22: ffffff80090f8000
[ 4.213039] x21: ffffff8008e32748 x20: ffffffc018d25018
[ 4.213055] x19: ffffffc01abdf810 x18: 0000000000000010
[ 4.213071] x17: 000000000000000e x16: 0000000000000020
[ 4.213087] x15: ffffffffffffffff x14: ffffff80894c6157
[ 4.213104] x13: ffffff80094c6165 x12: ffffff8009379000
[ 4.213120] x11: 0000000005f5e0ff x10: ffffff800987b6f0
[ 4.257555] x9 : 00000000ffffffd0 x8 : 000000000000004b
[ 4.257573] x7 : 000000000000000c x6 : 00000000000001ee
[ 4.257591] x5 : 0000000000007ceb x4 : 0000000000000000
[ 4.257608] x3 : ffffff800934a000 x2 : 0000000000000000
[ 4.257625] x1 : 0000000000000000 x0 : 000000000000003b
[ 4.257644] Process kworker/7:1 (pid: 99, stack limit = 0xffffff8009878000)
[ 4.257661] Call trace:
[ 4.257672] Exception stack(0xffffff800987b8e0 to 0xffffff800987ba20)
[ 4.257691] b8e0: 000000000000003b 0000000000000000 0000000000000000 ffffff800934a000
[ 4.257713] b900: 0000000000000000 0000000000007ceb 00000000000001ee 000000000000000c
[ 4.257734] b920: 000000000000004b 00000000ffffffd0 ffffff800987b6f0 0000000005f5e0ff
[ 4.257756] b940: ffffff8009379000 ffffff80094c6165 ffffff80894c6157 ffffffffffffffff
[ 4.257777] b960: 0000000000000020 000000000000000e 0000000000000010 ffffffc01abdf810
[ 4.257799] b980: ffffffc018d25018 ffffff8008e32748 ffffff80090f8000 ffffff8008e32000
[ 4.257821] b9a0: ffffffc017212800 ffffff80090f8c70 ffffffc018d25760 ffffff800bd01000
[ 4.257842] b9c0: ffffffc01bff42e8 ffffff800987ba30 ffffff80087061c8 ffffff800987ba20
[ 4.257864] b9e0: ffffff80087061d0 0000000040000045 0000000000000214 ffffff800bd00000
[ 4.257885] ba00: ffffffffffffffff 0000000000007c9f ffffff800987ba30 ffffff80087061d0
[ 4.257908] [] dss_drm_init+0x1a8/0x680
[ 4.257926] [] kirin_drm_bind+0x128/0x310
[ 4.257945] [] try_to_bring_up_master+0x180/0x1e0
[ 4.257965] [] component_add+0xa4/0x170
[ 4.257981] [] dsi_probe+0x52c/0x5a0
[ 4.258000] [] platform_drv_probe+0x60/0xc0
[ 4.258018] [] driver_probe_device+0x234/0x2e0
[ 4.258037] [] __device_attach_driver+0xa0/0xe8
[ 4.258056] [] bus_for_each_drv+0x58/0xa8
[ 4.258072] [] __device_attach+0xc8/0x138
[ 4.302470] [] device_initial_probe+0x24/0x30
[ 4.302490] [] bus_probe_device+0x9c/0xa8
[ 4.302506] [] deferred_probe_work_func+0xac/0x150
[ 4.302528] [] process_one_work+0x1d8/0x490
[ 4.302547] [] worker_thread+0x248/0x478
[ 4.302565] [] kthread+0x138/0x140
[ 4.302584] [] ret_from_fork+0x10/0x1c
[ 4.302601] Code: f90037a4 97e8a1ea f943a69b 9140077b (b940037b)
[ 4.302621] ---[ end trace d64c23a811313502 ]---
[ 4.302638] Kernel panic - not syncing: Fatal exception
[ 4.302656] SMP: stopping secondary CPUs
[ 4.332694] Kernel Offset: disabled
[ 4.332708] CPU features: 0x002004
[ 4.332720] Memory Limit: none
[ 4.332736] Rebooting in 5 seconds..
============================================================================

How do I fix this IO address size fault? I think there is some problem in the dts or in debian.cfg with mapping physical addresses to GFNs. Please help me to get past this problem.

Thanks,
Omkar B

On Thu, Nov 1, 2018 at 1:11 AM Julien Grall wrote:

> On 10/17/18 1:24 PM, Omkar Bolla wrote:
> > I have started finding which patch introduced Armv8 Secondary CPUs issue.
> >
> > I just want to start PV vdevb before domainU debian rootfs mount. Is it
> > possible?
>
> May I ask why you need the dependency on the rootfs?
>
> Cheers,
>
> --
> Julien Grall
Hi,

> May I ask why you need the dependency on the rootfs?

I am trying to pass-through the display to the guest domain. To do that, the
display driver needs clocks, so I have written a simple basic clock PV
frontend and backend.
So I thought these clocks must be initialised before the display driver
initialisation.

But if I start both the domain and the clocks script one after another, the
clock gets initialised properly. Problem solved.
But I still have one doubt: is it possible to do something in the xenbits
sources so that this starts automatically when we start the unprivileged
domain?

I have one more question, about pass-through.
To implement pass-through I took reference from the link below:
https://wiki.xen.org/images/1/17/Device_passthrough_xen.pdf

I added 'xen-passthrough' to the actual dom0 dtb and created a new dtb with
the nodes below in the passthrough node:
============================================================================
dpe: dpe@10004000 {
	compatible = "hisilicon,hi3660-dpe";
	status = "ok";
#if 0
	//ACTUAL REG PROPERTY of DISPLAY
	reg = <0x0 0xE8600000 0x0 0x80000>,
	      <0x0 0xFFF35000 0 0x1000>,
	      <0x0 0xFFF0A000 0 0x1000>,
	      <0x0 0xFFF31000 0 0x1000>,
	      <0x0 0xE86C0000 0 0x10000>;
#endif
	//reg = <0x0 0x10004000 0x0 0x80000>,
	reg = <0x0 0x10004000 0x0 0x80000>,
	      <0x0 0x10084000 0 0x1000>,
	      <0x0 0x10085000 0 0x1000>,
	      <0x0 0x10086000 0 0x1000>,
	      <0x0 0x100C4000 0 0x10000>;
	//    <0x0 0x10087000 0 0x10000>;

	interrupts = <0 245 4>;

	clocks = <&clk_xen HI3660_ACLK_GATE_DSS>,
		 <&clk_xen HI3660_PCLK_GATE_DSS>,
		 <&clk_xen HI3660_CLK_GATE_EDC0>,
		 <&clk_xen HI3660_CLK_GATE_LDI0>,
		 <&clk_xen HI3660_CLK_GATE_LDI1>,
		 <&clk_xen HI3660_CLK_GATE_DSS_AXI_MM>,
		 <&clk_xen HI3660_PCLK_GATE_MMBUF>;

	clock-names = "aclk_dss",
		      "pclk_dss",
		      "clk_edc0",
		      "clk_ldi0",
		      "clk_ldi1",
		      "clk_dss_axi_mm",
		      "pclk_mmbuf";

	dma-coherent;

	port {
		dpe_out: endpoint {
			remote-endpoint = <&dsi_in>;
		};
	};
};

dsi: dsi@10097000 {
	compatible = "hisilicon,hi3660-dsi";
	status = "ok";
#if 0
	//ACTUAL REG PROPERTY of DISPLAY
	reg = <0 0xE8601000 0 0x7F000>,
	      <0 0xFFF35000 0 0x1000>;
#endif
	//reg = <0 0x10097000 0 0x7F000>,
	//      <0 0x10116000 0 0x1000>;
	reg = <0 0x10004000 0 0x80000>,
	      <0 0x10084000 0 0x1000>;

	clocks = <&clk_xen HI3660_CLK_GATE_TXDPHY0_REF>,
		 <&clk_xen HI3660_CLK_GATE_TXDPHY1_REF>,
		 <&clk_xen HI3660_CLK_GATE_TXDPHY0_CFG>,
		 <&clk_xen HI3660_CLK_GATE_TXDPHY1_CFG>,
		 <&clk_xen HI3660_PCLK_GATE_DSI0>,
		 <&clk_xen HI3660_PCLK_GATE_DSI1>;

	clock-names = "clk_txdphy0_ref",
		      "clk_txdphy1_ref",
		      "clk_txdphy0_cfg",
		      "clk_txdphy1_cfg",
		      "pclk_dsi0",
		      "pclk_dsi1";

	#address-cells = <1>;
	#size-cells = <0>;

};
#endif

clocks {
	compatible = "simple-bus";
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	clk_xen: xen_clk@0 {
		compatible = "xen,xen-vclk";
		#clock-cells = <1>;
	};
};
============================================================================
Below is my 'debian.cfg' file:
============================================================================
kernel = "/debian/Image"
device_tree = "/debian/domu.dtb"
memory = 512
vcpus = 8
cpus = "all"
name = "debian"

################# DPE ################
#iomem = [ "0xE8600,0x80@0x10004", "0xFFF35,1@0x10084", "0xFFF0A,1@0x10085", "0xFFF31,1@0x10086", "0xE86C0,10@0x10087" ]
#iomem = [ "0xE8600,0x80", "0xFFF35,1", "0xFFF0A,1", "0xFFF31,1", "0xE86C0,10" ]
irqs = [ 277 ]

iomem = [ "0xE8600,80@0x10004" ]
iomem = [ "0xFFF35,1@0x10084" ]
iomem = [ "0xFFF0A,1@0x10085" ]
iomem = [ "0xFFF31,1@0x10086" ]
iomem = [ "0xE86C0,10@0x100C4" ]
#iomem = [ "0xE86C0,10@0x10087" ]
#iomem = [ "0xE8600,80@0x00000" ]
################# DPE ################

################# DSI ################
#iomem = [ "0xE8601,0x7F", "0xFFF35,1" ]
#iomem = [ "0xE8601,0x7F@0x10097", "0xFFF35,1@0x10116", "0xE8601,0x7F@0x10195" ]
#iomem = [ "0xE8601,7F@0x10097" ]
#iomem = [ "0xFFF35,1@0x10116" ]

iomem = [ "0xE8601,7F@0x10005" ]
iomem = [ "0xFFF35,1@0x10084" ]
################# DSI ################

#vif = [ 'mac=00:16:3e:64:b8:40,bridge=xenbr0' ]
#nics = 1
#vif = [ 'eth0=00:60:00:00:00:01' ]

disk = [ '/dev/loop1,raw,xvda,w' ]
extra = "earlyprintk=xenboot console=hvc0 root=/dev/xvda rootfstype=ext4 rw video=HDMI-A-1:1280x720@60"
============================================================================
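The `iomem` entries above use xl's page-frame notation: each entry is
`IOMEM_START,NUM_PAGES[@GFN]`, with all values read as hexadecimal 4 KiB page
frame numbers, so `"0xE8600,80@0x10004"` maps machine address 0xE8600000
(0x80000 bytes) at guest address 0x10004000. Below is a small helper (my own
sketch, not part of xl) to expand an entry into byte addresses so it can be
cross-checked against the `reg` properties in the dts:

```shell
# Expand an xl.cfg iomem entry ("MFN,NPAGES[@GFN]", values are hexadecimal
# 4 KiB page frame numbers) into byte addresses, so the mapping can be
# compared against the dts reg properties above.
expand_iomem() {
    entry="$1"
    mfn=${entry%%,*}                   # machine page frame, e.g. 0xE8600
    rest=${entry#*,}
    npages=${rest%%@*}                 # number of pages
    case "$rest" in
        *@*) gfn=${rest#*@} ;;         # guest page frame after '@'
        *)   gfn=$mfn ;;               # no @GFN means a 1:1 mapping
    esac
    case "$npages" in 0x*|0X*) ;; *) npages="0x$npages" ;; esac  # xl reads it as hex
    printf 'machine 0x%X -> guest 0x%X, size 0x%X\n' \
        $(( mfn << 12 )) $(( gfn << 12 )) $(( npages << 12 ))
}

# DPE base registers from the cfg above:
expand_iomem "0xE8600,80@0x10004"
# -> machine 0xE8600000 -> guest 0x10004000, size 0x80000
#    (matches reg = <0x0 0x10004000 0x0 0x80000> in the dpe node)
```

Expanding every active entry this way makes overlaps easy to spot; for
instance, the DPE entry above covers guest frames 0x10004-0x10083 while the
DSI entry starts at guest frame 0x10005, and both nodes map 0xFFF35 at guest
frame 0x10084.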
Here I am using the same IO space (GFNs) for the DPE and DSI nodes, and I get
the same error below; I also tried different GFNs and still got the same
error.

But after adding this, everything is good, except that when I try to remap
the iomem a second time, I get the error below:
============================================================================
[    3.215021] OF: rrrrrrrrrrrr: start: 0x10004000, sz = 0x80000
[    3.215062] [DISPLAY] dsi_parse_dt(): 1536: of device: /passthrough/dsi@10097000
[    3.215083] [DISPLAY] dsi_parse_dt(): 1537: +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[    3.215108] [DISPLAY] dsi_parse_dt(): 1540: ctx->base: ffffff800bd01000
[    3.215126] [DISPLAY] dsi_parse_dt(): 1541:
[    3.215136] OF: rrrrrrrrrrrr: start: 0x10084000, sz = 0x1000
[    3.215169] [DISPLAY] dsi_parse_dt(): 1548:
[    4.159087] [DISPLAY] dsi_parse_dt(): 1563:
[    4.159092] [DISPLAY] dsi_parse_dt(): 1568:
[    4.159132] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: clk_txdphy0_ref,1
[    4.159163] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
[    4.159399] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
[    4.159626] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
[    4.160218] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
[    4.160359] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3
[    4.160532] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0
[    4.160542] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done
[    4.160545] [D][XEN_VCLK]xen_of_clk_src_onecell_get(): 286: Xfer done...
[    4.160554] [DISPLAY] dsi_parse_dt(): 1575:
[    4.160560] [D][XEN_VCLK]vclk_round_rate(): 224: called...
[    4.160567] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: clk_txdphy0_ref,4,19200000
[    4.160570] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
[    4.161095] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
[    4.161331] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
[    4.161946] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
[    4.162120] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3
[    4.162284] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0
[    4.162295] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done
[    4.162301] [DISPLAY] dsi_parse_dt(): 1583:
[    4.162314] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: clk_txdphy0_cfg,1
[    4.162316] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
[    4.162641] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
[    4.162984] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
[    4.163596] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
[    4.167753] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3
[    4.167955] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0
[    4.167968] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done
[    4.167971] [D][XEN_VCLK]xen_of_clk_src_onecell_get(): 286: Xfer done...
[    4.167979] [DISPLAY] dsi_parse_dt(): 1593:
[    4.167982] [D][XEN_VCLK]vclk_round_rate(): 224: called...
[    4.167985] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: clk_txdphy0_cfg,4,19200000
[    4.167992] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
[    4.168244] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
[    4.168476] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
[    4.169101] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
[=C2=A0 =C2= =A0 4.169262] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3
[=C2=A0 =C2=A0 4.169448] [D][XEN_VCLK]vclk_fe_bh(): 40= 7: ACK Recieved from dom-0
[=C2=A0 =C2=A0= 4.169491] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done
[=C2=A0 =C2=A0 4.169510] [DISPLAY] dsi_parse_dt(): 1601:=C2= =A0
[=C2=A0 =C2=A0 4.169535] [D][XEN_VCLK= ]xen_vclk_xfer(): 163: buffer: pclk_dsi0,1
[=C2=A0 =C2=A0 4.169554] [D][XEN_VCLK]xen_vclk_xfer(): 164: *************= ******************************
[=C2=A0 = =C2=A0 4.169803] [D][XEN_VCLK]xen_vclk_xfer(): 170: ***********************= ********************
[=C2=A0 =C2=A0 4.170= 019] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
<= div class=3D"gmail_default">[=C2=A0 =C2=A0 4.170619] [D][XEN_VCLK]xen_vclk_= interrupt(): 446: IRQ(13) from domain 0 fired!!!
[=C2=A0 =C2=A0 4.170779] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status= : 0x3
[=C2=A0 =C2=A0 4.170965] [D][XEN_VC= LK]vclk_fe_bh(): 407: ACK Recieved from dom-0
[=C2=A0 =C2=A0 4.170978] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done<= /div>
[=C2=A0 =C2=A0 4.170981] [D][XEN_VCLK]xen= _of_clk_src_onecell_get(): 286:=C2=A0 Xfer done...
[=C2=A0 =C2=A0 4.170989] [DISPLAY] dsi_parse_dt(): 1611:=C2=A0
[=C2=A0 =C2=A0 4.170992] [DISPLAY] dsi_prob= e(): 1654: Before component add
[=C2=A0 = =C2=A0 4.170997] [DISPLAY] compare_of(): 242:=C2=A0
[=C2=A0 =C2=A0 4.171002] [DISPLAY] kirin_drm_bind(): 257:=C2=A0<= /div>
[=C2=A0 =C2=A0 4.171004] [drm] +.=C2=A0
[=C2=A0 =C2=A0 4.171386] [DISPLAY] kirin_d= rm_kms_init(): 105:=C2=A0
[=C2=A0 =C2=A0 = 4.171391] [drm] +.
[=C2=A0 =C2=A0 4.21254= 3] [DISPLAY] kirin_drm_mode_config_init(): 91:=C2=A0
[=C2=A0 =C2=A0 4.212547] [DISPLAY] dss_drm_init(): 638:=C2=A0
[=C2=A0 =C2=A0 4.212563] [drm] +.
[=C2=A0 =C2=A0 4.212585] [DISPLAY] dss_dts_parse(= ): 513:=C2=A0
[=C2=A0 =C2=A0 4.212603] [D= ISPLAY] dss_dts_parse(): 530: of device: /passthrough/dpe@10004000
[=C2=A0 =C2=A0 4.212635] [DISPLAY] dss_dts_parse(= ): 531: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^= ^^^^^^^^^
[=C2=A0 =C2=A0 4.212661] [DISPL= AY] dss_dts_parse(): 532: ctx->base: ffffff800bd00000
[=C2=A0 =C2=A0 4.212688] Unhandled fault: ttbr address size= fault (0x96000000) at 0xffffff800bd01000
[=C2=A0 =C2=A0 4.212708] Mem abort info:
[=C2=A0 =C2=A0 4.212720]=C2=A0 =C2=A0Exception class =3D DABT (current EL)= , IL =3D 32 bits
[=C2=A0 =C2=A0 4.212738]= =C2=A0 =C2=A0SET =3D 0, FnV =3D 0
[=C2=A0= =C2=A0 4.212751]=C2=A0 =C2=A0EA =3D 0, S1PTW =3D 0
[=C2=A0 =C2=A0 4.212763] Data abort info:
[=C2=A0 =C2=A0 4.212776]=C2=A0 =C2=A0ISV =3D 0, ISS =3D 0x00000= 000
[=C2=A0 =C2=A0 4.212789]=C2=A0 =C2=A0= CM =3D 0, WnR =3D 0
[=C2=A0 =C2=A0 4.2128= 06] Internal error: : 96000000 [#1] PREEMPT SMP
[=C2=A0 =C2=A0 4.212821] Modules linked in:
[=C2=A0 =C2=A0 4.212839] CPU: 7 PID: 99 Comm: kworker/7:1 Tainted= : G S=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 4.14.0-rc7 #456
=
[=C2=A0 =C2=A0 4.212857] Hardware name: XENVM-= 4.8 (DT)
[=C2=A0 =C2=A0 4.212878] Workque= ue: events deferred_probe_work_func
[=C2= =A0 =C2=A0 4.212893] task: ffffffc01abe6000 task.stack: ffffff8009878000
[=C2=A0 =C2=A0 4.212916] PC is at dss_drm_i= nit+0x1a8/0x680
[=C2=A0 =C2=A0 4.212931] = LR is at dss_drm_init+0x1a0/0x680
[=C2=A0= =C2=A0 4.212945] pc : [<ffffff80087061d0>] lr : [<ffffff80087061c= 8>] pstate: 40000045
[=C2=A0 =C2=A0 4.= 212963] sp : ffffff800987ba20
[=C2=A0 =C2= =A0 4.212973] x29: ffffff800987ba30 x28: ffffffc01bff42e8=C2=A0
[=C2=A0 =C2=A0 4.212990] x27: ffffff800bd01000 x26: = ffffffc018d25760=C2=A0
[=C2=A0 =C2=A0 4.2= 13006] x25: ffffff80090f8c70 x24: ffffffc017212800=C2=A0
[=C2=A0 =C2=A0 4.213023] x23: ffffff8008e32000 x22: ffffff8= 0090f8000=C2=A0
[=C2=A0 =C2=A0 4.213039] = x21: ffffff8008e32748 x20: ffffffc018d25018=C2=A0
[=C2=A0 =C2=A0 4.213055] x19: ffffffc01abdf810 x18: 00000000000000= 10=C2=A0
[=C2=A0 =C2=A0 4.213071] x17: 00= 0000000000000e x16: 0000000000000020=C2=A0
[=C2=A0 =C2=A0 4.213087] x15: ffffffffffffffff x14: ffffff80894c6157=C2= =A0
[=C2=A0 =C2=A0 4.213104] x13: ffffff8= 0094c6165 x12: ffffff8009379000=C2=A0
[= =C2=A0 =C2=A0 4.213120] x11: 0000000005f5e0ff x10: ffffff800987b6f0=C2=A0
[=C2=A0 =C2=A0 4.257555] x9 : 00000000ffff= ffd0 x8 : 000000000000004b=C2=A0
[=C2=A0 = =C2=A0 4.257573] x7 : 000000000000000c x6 : 00000000000001ee=C2=A0
[=C2=A0 =C2=A0 4.257591] x5 : 0000000000007ceb x4= : 0000000000000000=C2=A0
[=C2=A0 =C2=A0 = 4.257608] x3 : ffffff800934a000 x2 : 0000000000000000=C2=A0
[=C2=A0 =C2=A0 4.257625] x1 : 0000000000000000 x0 : 0000= 00000000003b=C2=A0
[=C2=A0 =C2=A0 4.25764= 4] Process kworker/7:1 (pid: 99, stack limit =3D 0xffffff8009878000)
<= div class=3D"gmail_default">[=C2=A0 =C2=A0 4.257661] Call trace:
[=C2=A0 =C2=A0 4.257672] Exception stack(0xffffff80= 0987b8e0 to 0xffffff800987ba20)
[=C2=A0 = =C2=A0 4.257691] b8e0: 000000000000003b 0000000000000000 0000000000000000 f= fffff800934a000
[=C2=A0 =C2=A0 4.257713] = b900: 0000000000000000 0000000000007ceb 00000000000001ee 000000000000000c
[=C2=A0 =C2=A0 4.257734] b920: 00000000000= 0004b 00000000ffffffd0 ffffff800987b6f0 0000000005f5e0ff
[=C2=A0 =C2=A0 4.257756] b940: ffffff8009379000 ffffff80094= c6165 ffffff80894c6157 ffffffffffffffff
[= =C2=A0 =C2=A0 4.257777] b960: 0000000000000020 000000000000000e 00000000000= 00010 ffffffc01abdf810
[=C2=A0 =C2=A0 4.2= 57799] b980: ffffffc018d25018 ffffff8008e32748 ffffff80090f8000 ffffff8008e= 32000
[=C2=A0 =C2=A0 4.257821] b9a0: ffff= ffc017212800 ffffff80090f8c70 ffffffc018d25760 ffffff800bd01000
[=C2=A0 =C2=A0 4.257842] b9c0: ffffffc01bff42e8 ffff= ff800987ba30 ffffff80087061c8 ffffff800987ba20
[=C2=A0 =C2=A0 4.257864] b9e0: ffffff80087061d0 0000000040000045 0000= 000000000214 ffffff800bd00000
[=C2=A0 =C2= =A0 4.257885] ba00: ffffffffffffffff 0000000000007c9f ffffff800987ba30 ffff= ff80087061d0
[=C2=A0 =C2=A0 4.257908] [&l= t;ffffff80087061d0>] dss_drm_init+0x1a8/0x680
[=C2=A0 =C2=A0 4.257926] [<ffffff8008705490>] kirin_drm_bind+= 0x128/0x310
[=C2=A0 =C2=A0 4.257945] [<= ;ffffff8008740c88>] try_to_bring_up_master+0x180/0x1e0
[=C2=A0 =C2=A0 4.257965] [<ffffff8008740d8c>] comp= onent_add+0xa4/0x170
[=C2=A0 =C2=A0 4.257= 981] [<ffffff800870b574>] dsi_probe+0x52c/0x5a0
[=C2=A0 =C2=A0 4.258000] [<ffffff8008749d60>] platform_d= rv_probe+0x60/0xc0
[=C2=A0 =C2=A0 4.25801= 8] [<ffffff8008747a94>] driver_probe_device+0x234/0x2e0
[=C2=A0 =C2=A0 4.258037] [<ffffff8008747cb0>] __= device_attach_driver+0xa0/0xe8
[=C2=A0 = =C2=A0 4.258056] [<ffffff80087459d0>] bus_for_each_drv+0x58/0xa8
[=C2=A0 =C2=A0 4.258072] [<ffffff80087476e= 8>] __device_attach+0xc8/0x138
[=C2=A0= =C2=A0 4.302470] [<ffffff8008747d74>] device_initial_probe+0x24/0x30=
[=C2=A0 =C2=A0 4.302490] [<ffffff8008= 746ae4>] bus_probe_device+0x9c/0xa8
[= =C2=A0 =C2=A0 4.302506] [<ffffff8008746fcc>] deferred_probe_work_func= +0xac/0x150
[=C2=A0 =C2=A0 4.302528] [<= ;ffffff80080efd98>] process_one_work+0x1d8/0x490
[=C2=A0 =C2=A0 4.302547] [<ffffff80080f0298>] worker_threa= d+0x248/0x478
[=C2=A0 =C2=A0 4.302565] [&= lt;ffffff80080f6728>] kthread+0x138/0x140
[=C2=A0 =C2=A0 4.302584] [<ffffff8008084d7c>] ret_from_fork+0x10/= 0x1c
[=C2=A0 =C2=A0 4.302601] Code: f9003= 7a4 97e8a1ea f943a69b 9140077b (b940037b)=C2=A0
[=C2=A0 =C2=A0 4.302621] ---[ end trace d64c23a811313502 ]---
<= div class=3D"gmail_default">[=C2=A0 =C2=A0 4.302638] Kernel panic - not syn= cing: Fatal exception
[=C2=A0 =C2=A0 4.30= 2656] SMP: stopping secondary CPUs
[=C2= =A0 =C2=A0 4.332694] Kernel Offset: disabled
[=C2=A0 =C2=A0 4.332708] CPU features: 0x002004
[=C2=A0 =C2=A0 4.332720] Memory Limit: none
[=C2=A0 =C2=A0 4.332736] Rebooting in 5 seconds..
= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D

How to fix this IO addr size fault, = I think some problem in dts or debian.cfg to map Physical addresses to GFNs=
Please he= lp me to come out of this problem.

Thanks
Omkar B

On Thu, Nov 1, 2018 at 1:11 AM Julien Gra= ll <julien.gra= ll@arm.com> wrote:


On 10/17/18 1:24 PM, Omkar Bolla wrote:
> Hi,

Hi Omkar,

> I have started finding which patch introduced Armv8 Secondary CPUs iss= ue.
>
> I just want to start PV vdevb before domainU debian rootfs mount. Is i= t
> possible?

May I ask why you need the dependency on the rootfs?

Cheers,

--
Julien Grall

This message contains confidential information and is intended only for the individual(s) named. If you are not the = intended recipient, you are notified that disclosing, copying, distributing or takin= g any action in reliance on the contents of this mail and attached file/s is stri= ctly prohibited. Please notify the sender immediately and delete this e-mail from your system. E-mail transmis= sion cannot be guaranteed to be secured or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or cont= ain viruses. The sender therefore does not accept liability for any errors or omissions in the contents of this message, which arise as a result of e-mai= l transmission.

--0000000000006a8932057996dbaf--

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

From mboxrd@z Thu Jan 1 00:00:00 1970
From: Julien Grall
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Thu, 1 Nov 2018 21:49:13 +0000
To: Omkar Bolla
Cc: jgross@suse.com, xen-devel@lists.xensource.com, Wei Liu, Oleksandr_Andrushchenko@epam.com, Oleksandr Andrushchenko, Lars Kurth, Stefano Stabellini
List-Id: xen-devel@lists.xenproject.org

(+ Wei)

On 11/1/18 9:15 AM, Omkar Bolla wrote:
> Hi,
>
>> May I ask why you need the dependency on the rootfs?
>
> I am trying to pass-through the display to the guest domain. The display
> driver needs clocks, so I have written a simple basic clock PV frontend
> and backend.
> So I thought these clocks must be initialised before the display driver
> initialisation.

The graphic driver should request the clock, right? So Linux will make
sure to have the clock before initializing the display.

> But if I start both the domain and the clocks script one after another,
> the clock gets initialised properly. Problem solved.
> But I still have one doubt: is it possible to do something in the
> xenbits sources so that this starts automatically when we start the
> unprivileged domain?

I am not entirely sure if we have a way to run a script during domain
creation. Wei, do you know if that's possible?

A workaround would be to create the domain paused, call the script and
then unpause it.

42sh> xl create -p ...
42sh> ./myscript.sh
42sh> xl unpause <myguest>

> [...]
>
> But after adding this, everything is good, except that when I try to
> remap the iomem a second time, I get the error below.

Who is doing the remap? The guest? Also, can you expand on what you mean
by it crashing the second time: is it during the remap, or on access to
the newly mapped region?

> [...]
ODI0NF0gW0RdW1hFTl9WQ0xLXXhlbl92Y2xrX3hmZXIoKTogMTcwOiAKPiAqKioqKioqKioqKioq KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqCj4gW8KgIMKgIDQuMTY4NDc2XSBbRF1bWEVO X1ZDTEtdeGVuX3ZjbGtfeGZlcigpOiAxNzY6IFNlbmRpbmcgSVJRX0RBVEEgdG8gCj4gZG9tYWlu LTAKPiBbwqAgwqAgNC4xNjkxMDFdIFtEXVtYRU5fVkNMS114ZW5fdmNsa19pbnRlcnJ1cHQoKTog NDQ2OiBJUlEoMTMpIGZyb20gCj4gZG9tYWluIDAgZmlyZWQhISEKPiBbwqAgwqAgNC4xNjkyNjJd IFtEXVtYRU5fVkNMS112Y2xrX2ZlX2JoKCk6IDM5NDogaXJxX3N0YXR1czogMHgzCj4gW8KgIMKg IDQuMTY5NDQ4XSBbRF1bWEVOX1ZDTEtddmNsa19mZV9iaCgpOiA0MDc6IEFDSyBSZWNpZXZlZCBm cm9tIGRvbS0wCj4gW8KgIMKgIDQuMTY5NDkxXSBbRF1bWEVOX1ZDTEtdeGVuX3ZjbGtfeGZlcigp OiAxNzk6IFhmZXIgRG9uZQo+IFvCoCDCoCA0LjE2OTUxMF0gW0RJU1BMQVldIGRzaV9wYXJzZV9k dCgpOiAxNjAxOgo+IFvCoCDCoCA0LjE2OTUzNV0gW0RdW1hFTl9WQ0xLXXhlbl92Y2xrX3hmZXIo KTogMTYzOiBidWZmZXI6IHBjbGtfZHNpMCwxCj4gW8KgIMKgIDQuMTY5NTU0XSBbRF1bWEVOX1ZD TEtdeGVuX3ZjbGtfeGZlcigpOiAxNjQ6IAo+ICoqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKioqKioqKioKPiBbwqAgwqAgNC4xNjk4MDNdIFtEXVtYRU5fVkNMS114ZW5fdmNsa194 ZmVyKCk6IDE3MDogCj4gKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq Kgo+IFvCoCDCoCA0LjE3MDAxOV0gW0RdW1hFTl9WQ0xLXXhlbl92Y2xrX3hmZXIoKTogMTc2OiBT ZW5kaW5nIElSUV9EQVRBIHRvIAo+IGRvbWFpbi0wCj4gW8KgIMKgIDQuMTcwNjE5XSBbRF1bWEVO X1ZDTEtdeGVuX3ZjbGtfaW50ZXJydXB0KCk6IDQ0NjogSVJRKDEzKSBmcm9tIAo+IGRvbWFpbiAw IGZpcmVkISEhCj4gW8KgIMKgIDQuMTcwNzc5XSBbRF1bWEVOX1ZDTEtddmNsa19mZV9iaCgpOiAz OTQ6IGlycV9zdGF0dXM6IDB4Mwo+IFvCoCDCoCA0LjE3MDk2NV0gW0RdW1hFTl9WQ0xLXXZjbGtf ZmVfYmgoKTogNDA3OiBBQ0sgUmVjaWV2ZWQgZnJvbSBkb20tMAo+IFvCoCDCoCA0LjE3MDk3OF0g W0RdW1hFTl9WQ0xLXXhlbl92Y2xrX3hmZXIoKTogMTc5OiBYZmVyIERvbmUKPiBbwqAgwqAgNC4x NzA5ODFdIFtEXVtYRU5fVkNMS114ZW5fb2ZfY2xrX3NyY19vbmVjZWxsX2dldCgpOiAyODY6wqAg WGZlciBkb25lLi4uCj4gW8KgIMKgIDQuMTcwOTg5XSBbRElTUExBWV0gZHNpX3BhcnNlX2R0KCk6 IDE2MTE6Cj4gW8KgIMKgIDQuMTcwOTkyXSBbRElTUExBWV0gZHNpX3Byb2JlKCk6IDE2NTQ6IEJl Zm9yZSBjb21wb25lbnQgYWRkCj4gW8KgIMKgIDQuMTcwOTk3XSBbRElTUExBWV0gY29tcGFyZV9v 
ZigpOiAyNDI6Cj4gW8KgIMKgIDQuMTcxMDAyXSBbRElTUExBWV0ga2lyaW5fZHJtX2JpbmQoKTog MjU3Ogo+IFvCoCDCoCA0LjE3MTAwNF0gW2RybV0gKy4KPiBbwqAgwqAgNC4xNzEzODZdIFtESVNQ TEFZXSBraXJpbl9kcm1fa21zX2luaXQoKTogMTA1Ogo+IFvCoCDCoCA0LjE3MTM5MV0gW2RybV0g Ky4KPiBbwqAgwqAgNC4yMTI1NDNdIFtESVNQTEFZXSBraXJpbl9kcm1fbW9kZV9jb25maWdfaW5p dCgpOiA5MToKPiBbwqAgwqAgNC4yMTI1NDddIFtESVNQTEFZXSBkc3NfZHJtX2luaXQoKTogNjM4 Ogo+IFvCoCDCoCA0LjIxMjU2M10gW2RybV0gKy4KPiBbwqAgwqAgNC4yMTI1ODVdIFtESVNQTEFZ XSBkc3NfZHRzX3BhcnNlKCk6IDUxMzoKPiBbwqAgwqAgNC4yMTI2MDNdIFtESVNQTEFZXSBkc3Nf ZHRzX3BhcnNlKCk6IDUzMDogb2YgZGV2aWNlOiAKPiAvcGFzc3Rocm91Z2gvZHBlQDEwMDA0MDAw Cj4gW8KgIMKgIDQuMjEyNjM1XSBbRElTUExBWV0gZHNzX2R0c19wYXJzZSgpOiA1MzE6IAo+IF5e Xl5eXl5eXl5eXl5eXl5eXl5eXl5eXl5eXl5eXl5eXl5eXl5eXl5eXl5eXl5eXl5eXl5eXl5eXl5e Xl5eXl5eXl5eXl5eXl5eXl4KPiBbwqAgwqAgNC4yMTI2NjFdIFtESVNQTEFZXSBkc3NfZHRzX3Bh cnNlKCk6IDUzMjogY3R4LT5iYXNlOiBmZmZmZmY4MDBiZDAwMDAwCj4gW8KgIMKgIDQuMjEyNjg4 XSBVbmhhbmRsZWQgZmF1bHQ6IHR0YnIgYWRkcmVzcyBzaXplIGZhdWx0ICgweDk2MDAwMDAwKSBh dCAKPiAweGZmZmZmZjgwMGJkMDEwMDAKCklJUkMsIHRoaXMgZXJyb3IgdXN1YWxseSBoYXBwZW4g d2hlbiB0aGUgcmVnaW9uIGlzIG5vdCBtYXBwZWQgaW4gCnN0YWdlLTIuIE9uIFhlbiBkZWJ1Zy1i dWlsZCAoQ09ORklHX0RFQlVHPXkgaW4gLmNvbmZpZykgeW91IHNob3VsZCBnZXQgCnNvbWUgbG9n IGlmIHRoZXJlIHdhcyBhIGRhdGEgYWJvcnQgaW4gc3RhZ2UtMi4KCkNoZWVycywKCi0tIApKdWxp ZW4gR3JhbGwKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f Clhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbnByb2plY3Qub3JnCmh0 dHBzOi8vbGlzdHMueGVucHJvamVjdC5vcmcvbWFpbG1hbi9saXN0aW5mby94ZW4tZGV2ZWw= From mboxrd@z Thu Jan 1 00:00:00 1970 From: Omkar Bolla Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains Date: Fri, 2 Nov 2018 10:55:15 +0530 Message-ID: References: <074697de-7265-a1fb-2970-4128a58f09ca@arm.com> Mime-Version: 1.0 Content-Type: multipart/mixed; boundary="===============4094633085009688257==" Return-path: In-Reply-To: List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: 
xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" To: Julien Grall Cc: jgross@suse.com, xen-devel@lists.xensource.com, wei.liu2@citrix.com, Oleksandr_Andrushchenko@epam.com, Oleksandr Andrushchenko , Lars Kurth , Stefano Stabellini List-Id: xen-devel@lists.xenproject.org --===============4094633085009688257== Content-Type: multipart/alternative; boundary="0000000000004dfac50579a7c382" --0000000000004dfac50579a7c382 Content-Type: text/plain; charset="UTF-8"

Hi,

> > I am trying to pass through the display to the guest domain. To do that, the
> > display driver needs clocks, so I have written a simple, basic clock PV
> > frontend and backend.
> > I thought these clocks had to be initialised before the display driver
> > initialises.
>
> The graphics driver should request the clock, right? So Linux will make
> sure to have the clock before initializing the display.

We are not using graphics (the GPU); I think DRM takes care of graphics. I
enabled all the clocks needed for the display through the clock PV driver,
and I checked in the host domain that they all got enabled properly.

> > But if I start both the domain and the clock script one after another, the
> > clocks get initialised properly. Problem solved.
> > But I still have one doubt: is it possible to do something in the xenbits
> > sources so that this starts automatically when we start the unprivileged
> > domain?
>
> I am not entirely sure if we have a way to run a script during domain
> creation. Wei, do you know if that's possible?
>
> A workaround would be to create the domain paused, call the script and
> then unpause it.
>
> 42sh> xl create -p ...
> 42sh> ./myscript.sh
> 42sh> xl unpause <myguest>

Now I am doing it that same way, pausing and unpausing the domain to start
the PV driver, and it is working.
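For reference, the create-paused workaround described above fits in one small wrapper script. This is only a sketch: the config path, the setup-script name and the domain name below are placeholders of mine, not names from this thread, and `DRY_RUN=1` makes it merely print the commands so the sequence can be inspected on a machine without Xen.

```shell
#!/bin/sh
# Sketch of the create-paused / setup / unpause sequence.
# CFG, SETUP_SCRIPT and DOMNAME are hypothetical placeholders.
CFG="${CFG:-/debian/debian.cfg}"
SETUP_SCRIPT="${SETUP_SCRIPT:-./start-vclk-backend.sh}"
DOMNAME="${DOMNAME:-debian}"
DRY_RUN="${DRY_RUN:-1}"          # set DRY_RUN=0 on a real Xen host

run() {
    # Echo instead of executing unless explicitly enabled.
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run xl create -p "$CFG"          # -p leaves the new domU paused
run "$SETUP_SCRIPT"              # PV backend / xenstore setup happens here
run xl unpause "$DOMNAME"        # only now does the domU start booting
```

The point of `-p` is that the domU does not execute a single instruction until `xl unpause`, so the backend is guaranteed to be ready before the frontend probes.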
> > > > > I have one more question about pass-through > > To implement pass through I took reference from below link > > https://wiki.xen.org/images/1/17/Device_passthrough_xen.pdf > > > > I added 'xen-passthrough' to actual dom0 dtb and created new dtb with > > below nodes in passthrough node > > > ============================================================================ > > dpe: dpe@10004000 { > > compatible = "hisilicon,hi3660-dpe"; > > status = "ok"; > > #if 0 > > //ACTUAL REG PROPERTY of DISPLAY > > reg = <0x0 0xE8600000 0x0 0x80000>, > > <0x0 0xFFF35000 0 0x1000>, > > <0x0 0xFFF0A000 0 0x1000>, > > <0x0 0xFFF31000 0 0x1000>, > > <0x0 0xE86C0000 0 0x10000>; > > #endif > > //reg = <0x0 0x10004000 0x0 0x80000>, > > reg = <0x0 0x10004000 0x0 0x80000>, > > <0x0 0x10084000 0 0x1000>, > > <0x0 0x10085000 0 0x1000>, > > <0x0 0x10086000 0 0x1000>, > > <0x0 0x100C4000 0 0x10000>; > > // <0x0 0x10087000 0 0x10000>; > > > > interrupts = <0 245 4>; > > > > clocks = <&clk_xen HI3660_ACLK_GATE_DSS>, > > <&clk_xen HI3660_PCLK_GATE_DSS>, > > <&clk_xen HI3660_CLK_GATE_EDC0>, > > <&clk_xen HI3660_CLK_GATE_LDI0>, > > <&clk_xen HI3660_CLK_GATE_LDI1>, > > <&clk_xen HI3660_CLK_GATE_DSS_AXI_MM>, > > <&clk_xen HI3660_PCLK_GATE_MMBUF>; > > clock-names = "aclk_dss", > > "pclk_dss", > > "clk_edc0", > > "clk_ldi0", > > "clk_ldi1", > > "clk_dss_axi_mm", > > "pclk_mmbuf"; > > > > dma-coherent; > > > > port { > > dpe_out: endpoint { > > remote-endpoint = <&dsi_in>; > > }; > > }; > > }; > > > > dsi: dsi@10097000 { > > compatible = "hisilicon,hi3660-dsi"; > > status = "ok"; > > #if 0 > > //ACTUAL REG PROPERTY of DISPLAY > > reg = <0 0xE8601000 0 0x7F000>, > > <0 0xFFF35000 0 0x1000>; > > #endif > > //reg = <0 0x10097000 0 0x7F000>, > > //<0 0x10116000 0 0x1000>; > > reg = <0 0x10004000 0 0x80000>, > > <0 0x10084000 0 0x1000>; > > > > clocks = <&clk_xen HI3660_CLK_GATE_TXDPHY0_REF>, > > <&clk_xen HI3660_CLK_GATE_TXDPHY1_REF>, > > <&clk_xen HI3660_CLK_GATE_TXDPHY0_CFG>, > > <&clk_xen 
HI3660_CLK_GATE_TXDPHY1_CFG>, > > <&clk_xen HI3660_PCLK_GATE_DSI0>, > > <&clk_xen HI3660_PCLK_GATE_DSI1>; > > clock-names = "clk_txdphy0_ref", > > "clk_txdphy1_ref", > > "clk_txdphy0_cfg", > > "clk_txdphy1_cfg", > > "pclk_dsi0", > > "pclk_dsi1"; > > > > #address-cells = <1>; > > #size-cells = <0>; > > > > }; > > #endif > > clocks { > > compatible = "simple-bus"; > > #address-cells = <2>; > > #size-cells = <2>; > > ranges; > > clk_xen: xen_clk@0 { > > compatible = "xen,xen-vclk"; > > #clock-cells = <1>; > > }; > > }; > > > ============================================================================ > > Below is my 'debian.cfg' file: > > > ============================================================================ > > kernel = "/debian/Image" > > device_tree="/debian/domu.dtb" > > memory = 512 > > vcpus = 8 > > cpus = "all" > > name="debian" > > > > ################# DPE ################ > > #iomem = [ "0xE8600,0x80@0x10004", "0xFFF35,1@0x10084", > > "0xFFF0A,1@0x10085", "0xFFF31,1@0x10086", "0xE86C0,10@0x10087"] > > #iomem = [ "0xE8600,0x80", "0xFFF35,1", "0xFFF0A,1", "0xFFF31,1", > > "0xE86C0,10"] > > irqs = [ 277 ] > > > > iomem = [ "0xE8600,80@0x10004" ] > > > > iomem = [ "0xFFF35,1@0x10084" ] > > iomem = [ "0xFFF0A,1@0x10085" ] > > iomem = [ "0xFFF31,1@0x10086" ] > > iomem = [ "0xE86C0,10@0x100C4"] > > #iomem = [ "0xE86C0,10@0x10087"] > > #iomem = [ "0xE8600,80@0x00000" ] > > > > ################# DPE ################ > > ################# DSI ################ > > #iomem = [ "0xE8601,0x7F", "0xFFF35,1"] > > #iomem = [ "0xE8601,0x7F@0x10097", "0xFFF35,1@0x10116", > > "0xE8601,0x7F@0x10195"] > > > > #iomem = [ "0xE8601,7F@0x10097" ] > > #iomem = [ "0xFFF35,1@0x10116" ] > > > > > > iomem = [ "0xE8601,7F@0x10005" ] > > iomem = [ "0xFFF35,1@0x10084" ] > > ################# DSI ################ > > > > #vif = ['mac=00:16:3e:64:b8:40,bridge=xenbr0'] > > #nics = 1 > > #vif = [ 'eth0=00:60:00:00:00:01' ] > > > > disk = ['/dev/loop1,raw,xvda,w'] > > extra = 
"earlyprintk=xenboot console=hvc0 root=/dev/xvda rootfstype=ext4
> > rw video=HDMI-A-1:1280x720@60"
> > ============================================================================
> > Here I am using the same IO space (GFNs) for the DPE and DSI nodes and I
> > get the same error below; I also tried with different GFNs, and that
> > gives the same error.
> >
> > But with this added, everything is fine until the iomem is remapped a
> > second time, at which point I get the error below.
>
> Who is doing the remap? The guest? Also, can you expand on what you mean by
> "it crashes the second time"? Is it during the remap, or when accessing the
> newly mapped region?

Here, the guest (domain-U) does the remap in the display driver. The remap
itself succeeds; yesterday I found that it is when the remapped region is
accessed that domain-U crashes with
"Unhandled fault: ttbr address size fault".

> >
> > ============================================================================
> > [ 3.215021] OF: rrrrrrrrrrrr: start: 0x10004000, sz = 0x80000
> > [ 3.215062] [DISPLAY] dsi_parse_dt(): 1536: of device: /passthrough/dsi@10097000
> > [ 3.215083] [DISPLAY] dsi_parse_dt(): 1537: +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > [ 3.215108] [DISPLAY] dsi_parse_dt(): 1540: ctx->base: ffffff800bd01000
> > [ 3.215126] [DISPLAY] dsi_parse_dt(): 1541:
> > [ 3.215136] OF: rrrrrrrrrrrr: start: 0x10084000, sz = 0x1000
> > [ 3.215169] [DISPLAY] dsi_parse_dt(): 1548:
> > [ 4.159087] [DISPLAY] dsi_parse_dt(): 1563:
> > [ 4.159092] [DISPLAY] dsi_parse_dt(): 1568:
> > [ 4.159132] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: clk_txdphy0_ref,1
> > [ 4.159163] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
> > [ 4.159399] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
> > [ 4.159626] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
> > [ 4.160218] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
> > [ 4.160359] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3 > > [ 4.160532] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0 > > [ 4.160542] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done > > [ 4.160545] [D][XEN_VCLK]xen_of_clk_src_onecell_get(): 286: Xfer > done... > > [ 4.160554] [DISPLAY] dsi_parse_dt(): 1575: > > [ 4.160560] [D][XEN_VCLK]vclk_round_rate(): 224: called... > > [ 4.160567] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: > > clk_txdphy0_ref,4,19200000 > > [ 4.160570] [D][XEN_VCLK]xen_vclk_xfer(): 164: > > ******************************************* > > [ 4.161095] [D][XEN_VCLK]xen_vclk_xfer(): 170: > > ******************************************* > > [ 4.161331] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to > > domain-0 > > [ 4.161946] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from > > domain 0 fired!!! > > [ 4.162120] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3 > > [ 4.162284] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0 > > [ 4.162295] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done > > [ 4.162301] [DISPLAY] dsi_parse_dt(): 1583: > > [ 4.162314] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: > clk_txdphy0_cfg,1 > > [ 4.162316] [D][XEN_VCLK]xen_vclk_xfer(): 164: > > ******************************************* > > [ 4.162641] [D][XEN_VCLK]xen_vclk_xfer(): 170: > > ******************************************* > > [ 4.162984] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to > > domain-0 > > [ 4.163596] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from > > domain 0 fired!!! > > [ 4.167753] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3 > > [ 4.167955] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0 > > [ 4.167968] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done > > [ 4.167971] [D][XEN_VCLK]xen_of_clk_src_onecell_get(): 286: Xfer > done... > > [ 4.167979] [DISPLAY] dsi_parse_dt(): 1593: > > [ 4.167982] [D][XEN_VCLK]vclk_round_rate(): 224: called... 
> > [ 4.167985] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: > > clk_txdphy0_cfg,4,19200000 > > [ 4.167992] [D][XEN_VCLK]xen_vclk_xfer(): 164: > > ******************************************* > > [ 4.168244] [D][XEN_VCLK]xen_vclk_xfer(): 170: > > ******************************************* > > [ 4.168476] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to > > domain-0 > > [ 4.169101] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from > > domain 0 fired!!! > > [ 4.169262] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3 > > [ 4.169448] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0 > > [ 4.169491] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done > > [ 4.169510] [DISPLAY] dsi_parse_dt(): 1601: > > [ 4.169535] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: pclk_dsi0,1 > > [ 4.169554] [D][XEN_VCLK]xen_vclk_xfer(): 164: > > ******************************************* > > [ 4.169803] [D][XEN_VCLK]xen_vclk_xfer(): 170: > > ******************************************* > > [ 4.170019] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to > > domain-0 > > [ 4.170619] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from > > domain 0 fired!!! > > [ 4.170779] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3 > > [ 4.170965] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0 > > [ 4.170978] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done > > [ 4.170981] [D][XEN_VCLK]xen_of_clk_src_onecell_get(): 286: Xfer > done... > > [ 4.170989] [DISPLAY] dsi_parse_dt(): 1611: > > [ 4.170992] [DISPLAY] dsi_probe(): 1654: Before component add > > [ 4.170997] [DISPLAY] compare_of(): 242: > > [ 4.171002] [DISPLAY] kirin_drm_bind(): 257: > > [ 4.171004] [drm] +. > > [ 4.171386] [DISPLAY] kirin_drm_kms_init(): 105: > > [ 4.171391] [drm] +. > > [ 4.212543] [DISPLAY] kirin_drm_mode_config_init(): 91: > > [ 4.212547] [DISPLAY] dss_drm_init(): 638: > > [ 4.212563] [drm] +. 
> > [ 4.212585] [DISPLAY] dss_dts_parse(): 513:
> > [ 4.212603] [DISPLAY] dss_dts_parse(): 530: of device: /passthrough/dpe@10004000
> > [ 4.212635] [DISPLAY] dss_dts_parse(): 531: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > [ 4.212661] [DISPLAY] dss_dts_parse(): 532: ctx->base: ffffff800bd00000
> > [ 4.212688] Unhandled fault: ttbr address size fault (0x96000000) at 0xffffff800bd01000
>
> IIRC, this error usually happens when the region is not mapped in
> stage-2. On a Xen debug build (CONFIG_DEBUG=y in .config) you should get
> some log output if there was a data abort in stage-2.

I enabled CONFIG_DEBUG=y in .config in the xenbits sources, but I do not get
any log from Xen; I also checked 'xl dmesg', and there is nothing from Xen
there either.
How can I find the GFNs from Xen? I mean, does Xen map the region into the
RAM of domain-U, or does it map Domain-0's memory and give domain-U access
to it?
I am sharing the 'xl dmesg' log after enabling the guest log level:
======================================
root@hikey960:/debian# xl dmesg
(XEN) Checking for initrd in /chosen
(XEN) RAM: 0000000000000000 - 000000001abfffff
(XEN) RAM: 000000001ad88000 - 0000000031ffffff
(XEN) RAM: 0000000032101000 - 000000003dffffff
(XEN) RAM: 0000000040000000 - 000000004aee9fff
(XEN) RAM: 0000000089cc0000 - 00000000b8427fff
(XEN) RAM: 00000000b9af0000 - 00000000b9baffff
(XEN) RAM: 00000000b9c50000 - 00000000b9c54fff
(XEN) RAM: 00000000b9c56000 - 00000000b9d4ffff
(XEN) RAM: 00000000ba114000 - 00000000ba11bfff
(XEN) RAM: 00000000ba11c000 - 00000000bdbf1fff
(XEN) RAM: 00000000bdbf2000 - 00000000bdca2fff
(XEN) RAM: 00000000bdca3000 - 00000000bdd58fff
(XEN) RAM: 00000000bdd59000 - 00000000bef4ffff
(XEN) RAM: 00000000bef50000 - 00000000bef54fff
(XEN) RAM: 00000000bef55000 - 00000000bf0dffff
(XEN) RAM: 00000000bf0e0000 - 00000000bf12ffff
(XEN) RAM: 00000000bf180000 - 00000000bf188fff
(XEN) RAM: 00000000bf189000 - 00000000bfffffff
(XEN) RAM: 00000000c0000000 - 00000000dfffffff
(XEN) RAM: 0000000200000000 -
000000021fffffff (XEN) (XEN) MODULE[0]: 00000000b8428000 - 00000000b8436000 Device Tree (XEN) MODULE[1]: 00000000b8544000 - 00000000b997ca00 Kernel console=tty0 console=hvc0 root=/dev/sdd14 rootwait rw rootfstype=ext4 efi=noruntime video=HDMI-A -1:1280x720@60 (XEN) (XEN) Command line: loglvl=all console=dtuart dtuart=/soc/serial@fff32000 dom0_mem=512M efi=no-rs guest_loglvl=all (XEN) Placing Xen at 0x000000001aa00000-0x000000001ac00000 (XEN) Update BOOTMOD_XEN from 00000000b8436000-00000000b8536d81 => 000000001aa00000-000000001ab00d81 (XEN) Domain heap initialised (XEN) Platform: Generic System (XEN) Looking for dtuart at "/soc/serial@fff32000", options "" Xen 4.8.5-pre (XEN) Xen version 4.8.5-pre (omkar.bolla@) (aarch64-linux-gnu-gcc (Linaro GCC 7.1-2017.05) 7.1.1 20170510) debug=n Fri Nov 2 10:40:45 IST 2018 (XEN) Latest ChangeSet: (XEN) Processor: 410fd034: "ARM Limited", variant: 0x0, part 0xd03, rev 0x4 (XEN) 64-bit Execution: (XEN) Processor Features: 0000000000002222 0000000000000000 (XEN) Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32 (XEN) Extensions: FloatingPoint AdvancedSIMD (XEN) Debug Features: 0000000010305106 0000000000000000 (XEN) Auxiliary Features: 0000000000000000 0000000000000000 (XEN) Memory Model Features: 0000000000001122 0000000000000000 (XEN) ISA Features: 0000000000011120 0000000000000000 (XEN) 32-bit Execution: (XEN) Processor Features: 00000131:00011011 (XEN) Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle (XEN) Extensions: GenericTimer Security (XEN) Debug Features: 03010066 (XEN) Auxiliary Features: 00000000 (XEN) Memory Model Features: 10201105 40000000 01260000 02102211 (XEN) ISA Features: 02101110 13112111 21232042 01112131 00011142 00011121 (XEN) Using PSCI-1.1 for SMP bringup (XEN) SMP: Allowing 8 CPUs (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 1920 KHz (XEN) GICv2 initialization: (XEN) gic_dist_addr=00000000e82b1000 (XEN) gic_cpu_addr=00000000e82b2000 (XEN) gic_hyp_addr=00000000e82b4000 (XEN) 
gic_vcpu_addr=00000000e82b6000 (XEN) gic_maintenance_irq=25 (XEN) GICv2: 384 lines, 8 cpus, secure (IID 0200143b). (XEN) Using scheduler: SMP Credit Scheduler (credit) (XEN) Allocated console ring of 64 KiB. (XEN) Bringing up CPU1 (XEN) CPU 1 booted. (XEN) Bringing up CPU2 (XEN) CPU 2 booted. (XEN) Bringing up CPU3 (XEN) CPU 3 booted. (XEN) Bringing up CPU4 (XEN) CPU 4 booted. (XEN) Bringing up CPU5 (XEN) CPU 5 booted. (XEN) Bringing up CPU6 (XEN) CPU 6 booted. (XEN) Bringing up CPU7 (XEN) CPU 7 booted. (XEN) Brought up 8 CPUs (XEN) P2M: 40-bit IPA with 40-bit PA (XEN) P2M: 3 levels with order-1 root, VTCR 0x80023558 (XEN) I/O virtualisation disabled (XEN) build-id: baea0accc1a90dbddfdab8bcec069906a9b45ba6 (XEN) alternatives: Patching with alt table 00000000400a4868 -> 00000000400a4c04 (XEN) *** LOADING DOMAIN 0 *** (XEN) Loading kernel from boot module @ 00000000b8544000 (XEN) Allocating 1:1 mappings totalling 512MB for dom0: (XEN) BANK[0] 0x000000c0000000-0x000000e0000000 (512MB) (XEN) Grant table range: 0x0000001aa00000-0x0000001aa5f000 (XEN) /framebuffer@E8600000 passthrough = 0 nirq = 5 naddr = 7 (XEN) - MMIO: 00e8600000 - 00e8680000 P2MType=5 (XEN) Device Node: /framebuffer@E8600000 (XEN) - MMIO: 00fff35000 - 00fff36000 P2MType=5 (XEN) Device Node: /framebuffer@E8600000 (XEN) - MMIO: 00fff0a000 - 00fff0b000 P2MType=5 (XEN) Device Node: /framebuffer@E8600000 (XEN) - MMIO: 00e8a09000 - 00e8a0a000 P2MType=5 (XEN) Device Node: /framebuffer@E8600000 (XEN) - MMIO: 00e86c0000 - 00e86d0000 P2MType=5 (XEN) Device Node: /framebuffer@E8600000 (XEN) - MMIO: 00fff02000 - 00fff03000 P2MType=5 (XEN) Device Node: /framebuffer@E8600000 (XEN) - MMIO: 00fff31000 - 00fff32000 P2MType=5 (XEN) Device Node: /framebuffer@E8600000 (XEN) /mali@E82C0000 passthrough = 0 nirq = 3 naddr = 1 (XEN) - MMIO: 00e82c0000 - 00e82c4000 P2MType=5 (XEN) Device Node: /mali@E82C0000 (XEN) /dpe@E8600000 passthrough = 0 nirq = 1 naddr = 5 (XEN) - MMIO: 00e8600000 - 00e8680000 P2MType=5 (XEN) Device 
Node: /dpe@E8600000 (XEN) - MMIO: 00fff35000 - 00fff36000 P2MType=5 (XEN) Device Node: /dpe@E8600000 (XEN) - MMIO: 00fff0a000 - 00fff0b000 P2MType=5 (XEN) Device Node: /dpe@E8600000 (XEN) - MMIO: 00fff31000 - 00fff32000 P2MType=5 (XEN) Device Node: /dpe@E8600000 (XEN) - MMIO: 00e86c0000 - 00e86d0000 P2MType=5 (XEN) Device Node: /dpe@E8600000 (XEN) /dsi@E8601000 passthrough = 0 nirq = 0 naddr = 2 (XEN) - MMIO: 00e8601000 - 00e8680000 P2MType=5 (XEN) Device Node: /dsi@E8601000 (XEN) - MMIO: 00fff35000 - 00fff36000 P2MType=5 (XEN) Device Node: /dsi@E8601000 (XEN) Loading zImage from 00000000b8544000 to 00000000c0080000-00000000c14b8a00 (XEN) Allocating PPI 16 for event channel interrupt (XEN) Loading dom0 DTB to 0x00000000c8000000-0x00000000c800a8a1 (XEN) Scrubbing Free RAM on 1 nodes using 8 CPUs (XEN) .........done. (XEN) Initial low memory virq threshold set at 0x4000 pages. (XEN) Std. Loglevel: All (XEN) Guest Loglevel: All (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen) (XEN) Freed 272kB init memory. 
(XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER4 (XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER8 (XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER12 (XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER16 (XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER20 (XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER24 (XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER28 (XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER32 (XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER36 (XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER40 (XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER44 (XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER0 (XEN) d0v1: vGICD: unhandled word write 0xffffffff to ICACTIVER0 (XEN) d0v2: vGICD: unhandled word write 0xffffffff to ICACTIVER0 (XEN) d0v3: vGICD: unhandled word write 0xffffffff to ICACTIVER0 (XEN) d0v4: vGICD: unhandled word write 0xffffffff to ICACTIVER0 (XEN) d0v5: vGICD: unhandled word write 0xffffffff to ICACTIVER0 (XEN) d0v6: vGICD: unhandled word write 0xffffffff to ICACTIVER0 (XEN) d0v7: vGICD: unhandled word write 0xffffffff to ICACTIVER0 ------------------------------------------------------------------------------- (XEN) memory_map:add: dom1 gfn=10084 mfn=fff35 nr=1 ------------------------------------------------------------------------------- (XEN) d1v0: vGICD: unhandled word write 0xffffffff to ICACTIVER0 (XEN) d1v1: vGICD: unhandled word write 0xffffffff to ICACTIVER0 (XEN) d1v2: vGICD: unhandled word write 0xffffffff to ICACTIVER0 (XEN) d1v3: vGICD: unhandled word write 0xffffffff to ICACTIVER0 (XEN) d1v4: vGICD: unhandled word write 0xffffffff to ICACTIVER0 (XEN) d1v5: vGICD: unhandled word write 0xffffffff to ICACTIVER0 (XEN) d1v6: vGICD: unhandled word write 0xffffffff to ICACTIVER0 (XEN) d1v7: vGICD: unhandled word write 0xffffffff to ICACTIVER0 
======================================
After enabling the guest log level, it shows only one IO region mapped
instead of all of them. Why?

> Cheers,
>
> --
> Julien Grall

Thanks,
Omkar B
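On the GFN question: xl's `iomem` entries are hexadecimal 4 KiB page-frame numbers in the form `MFN,NR[@GFN]`, i.e. `NR` machine pages starting at machine frame `MFN`, mapped into the guest at guest frame `GFN`. A small illustrative helper (mine, not part of any Xen tooling) expands the entries used in the debian.cfg above into byte ranges; note that the one range Xen did log, `memory_map:add: dom1 gfn=10084 mfn=fff35 nr=1`, corresponds exactly to the entry `iomem = [ "0xFFF35,1@0x10084" ]`.

```python
PAGE = 0x1000  # xl iomem values are hexadecimal 4 KiB page-frame numbers

def expand_iomem(entry: str) -> dict:
    """Expand an xl.cfg iomem entry "MFN,NR[@GFN]" into byte ranges.

    MFN is the machine (host) page frame, NR the number of pages, and
    GFN the guest frame the range appears at (it defaults to MFN, i.e.
    a 1:1 mapping, when the @GFN part is omitted).
    """
    pages, _, gfn_s = entry.partition("@")
    mfn_s, _, nr_s = pages.partition(",")
    mfn, nr = int(mfn_s, 16), int(nr_s, 16)
    gfn = int(gfn_s, 16) if gfn_s else mfn
    return {
        "machine": (mfn * PAGE, (mfn + nr) * PAGE),
        "guest": (gfn * PAGE, (gfn + nr) * PAGE),
    }

# The one range Xen reported ("memory_map:add: dom1 gfn=10084 mfn=fff35 nr=1"):
m = expand_iomem("0xFFF35,1@0x10084")
print(hex(m["machine"][0]), "->", hex(m["guest"][0]))

# The DPE register bank: machine 0xE8600000..0xE8680000 at guest 0x10004000,
# matching the passthrough node's reg = <0x0 0x10004000 0x0 0x80000>.
d = expand_iomem("0xE8600,80@0x10004")
print(hex(d["guest"][0]), "-", hex(d["guest"][1]))
```

Since `NR` is a page count, `"0xE8600,80"` covers 0x80 pages (512 KiB), which lines up with the Xen boot log line `MMIO: 00e8600000 - 00e8680000` for the dpe node.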


Hi,

> I am trying to pass-through the display to guest domain. to do through=
> driver needs clocks. I have written simple basic clock pv frontend and=
> backend.
> So I thought these clocks must be initialised before display driver > initialisation.

The graphic driver should request the clock, right? So Linux will make
sure to have the clock before initializing the display.

We are not using graphics(GPU), I think drm takes care of graphics. And a= ll
clocks = needed for display I enabled using clocks PV. Clocks i have checked
in host domain al= l got enabled properly.


>
> But if I start both domain and clocks script one after another, clock =
> got initialised properly. Problem solved.
> But still i have some doubt, is it possible to do some thing in xenbit= s
> src to start automatically when we start underprivileged domain?

I am not entirely sure if we have a way to run a script during domain
creation. Wei, do you know if that's possible?

A workaround would be to create the domain paused, call the script and
then unpause it.

42sh> xl create -p ...
42sh> ./myscript.sh
42sh> xl unpause <myguest>

Now, I am doing same wa= y pause and unpauase domain to start PV and it is working.

>
> I have one more question about pass-through
> To implement pass through I took reference from below link
> https://wiki.xen.org/images/1/17/Dev= ice_passthrough_xen.pdf
>
> I added 'xen-passthrough' to actual dom0 dtb and created new d= tb with
> below nodes in passthrough node
> ================================================================
> dpe: dpe@10004000 {
>     compatible = "hisilicon,hi3660-dpe";
>     status = "ok";
> #if 0
>     //ACTUAL REG PROPERTY of DISPLAY
>     reg = <0x0 0xE8600000 0x0 0x80000>,
>           <0x0 0xFFF35000 0 0x1000>,
>           <0x0 0xFFF0A000 0 0x1000>,
>           <0x0 0xFFF31000 0 0x1000>,
>           <0x0 0xE86C0000 0 0x10000>;
> #endif
>     //reg = <0x0 0x10004000 0x0 0x80000>,
>     reg = <0x0 0x10004000 0x0 0x80000>,
>           <0x0 0x10084000 0 0x1000>,
>           <0x0 0x10085000 0 0x1000>,
>           <0x0 0x10086000 0 0x1000>,
>           <0x0 0x100C4000 0 0x10000>;
>     //    <0x0 0x10087000 0 0x10000>;
>
>     interrupts = <0 245 4>;
>
>     clocks = <&clk_xen HI3660_ACLK_GATE_DSS>,
>              <&clk_xen HI3660_PCLK_GATE_DSS>,
>              <&clk_xen HI3660_CLK_GATE_EDC0>,
>              <&clk_xen HI3660_CLK_GATE_LDI0>,
>              <&clk_xen HI3660_CLK_GATE_LDI1>,
>              <&clk_xen HI3660_CLK_GATE_DSS_AXI_MM>,
>              <&clk_xen HI3660_PCLK_GATE_MMBUF>;
>     clock-names = "aclk_dss",
>                   "pclk_dss",
>                   "clk_edc0",
>                   "clk_ldi0",
>                   "clk_ldi1",
>                   "clk_dss_axi_mm",
>                   "pclk_mmbuf";
>
>     dma-coherent;
>
>     port {
>         dpe_out: endpoint {
>             remote-endpoint = <&dsi_in>;
>         };
>     };
> };
>
> dsi: dsi@10097000 {
>     compatible = "hisilicon,hi3660-dsi";
>     status = "ok";
> #if 0
>     //ACTUAL REG PROPERTY of DISPLAY
>     reg = <0 0xE8601000 0 0x7F000>,
>           <0 0xFFF35000 0 0x1000>;
> #endif
>     //reg = <0 0x10097000 0 0x7F000>,
>     //<0 0x10116000 0 0x1000>;
>     reg = <0 0x10004000 0 0x80000>,
>           <0 0x10084000 0 0x1000>;
>
>     clocks = <&clk_xen HI3660_CLK_GATE_TXDPHY0_REF>,
>              <&clk_xen HI3660_CLK_GATE_TXDPHY1_REF>,
>              <&clk_xen HI3660_CLK_GATE_TXDPHY0_CFG>,
>              <&clk_xen HI3660_CLK_GATE_TXDPHY1_CFG>,
>              <&clk_xen HI3660_PCLK_GATE_DSI0>,
>              <&clk_xen HI3660_PCLK_GATE_DSI1>;
>     clock-names = "clk_txdphy0_ref",
>                   "clk_txdphy1_ref",
>                   "clk_txdphy0_cfg",
>                   "clk_txdphy1_cfg",
>                   "pclk_dsi0",
>                   "pclk_dsi1";
>
>     #address-cells = <1>;
>     #size-cells = <0>;
>
> };
> #endif
> clocks {
>     compatible = "simple-bus";
>     #address-cells = <2>;
>     #size-cells = <2>;
>     ranges;
>     clk_xen: xen_clk@0 {
>         compatible = "xen,xen-vclk";
>         #clock-cells = <1>;
>     };
> };
> ================================================================
> Below is my 'debian.cfg' file:
> ================================================================
> kernel = "/debian/Image"
> device_tree = "/debian/domu.dtb"
> memory = 512
> vcpus = 8
> cpus = "all"
> name = "debian"
>
> ################# DPE ################
> #iomem = [ "0xE8600,0x80@0x10004", "0xFFF35,1@0x10084", "0xFFF0A,1@0x10085", "0xFFF31,1@0x10086", "0xE86C0,10@0x10087" ]
> #iomem = [ "0xE8600,0x80", "0xFFF35,1", "0xFFF0A,1", "0xFFF31,1", "0xE86C0,10" ]
> irqs = [ 277 ]
>
> iomem = [ "0xE8600,80@0x10004" ]
>
> iomem = [ "0xFFF35,1@0x10084" ]
> iomem = [ "0xFFF0A,1@0x10085" ]
> iomem = [ "0xFFF31,1@0x10086" ]
> iomem = [ "0xE86C0,10@0x100C4" ]
> #iomem = [ "0xE86C0,10@0x10087" ]
> #iomem = [ "0xE8600,80@0x00000" ]
>
> ################# DPE ################
> ################# DSI ################
> #iomem = [ "0xE8601,0x7F", "0xFFF35,1" ]
> #iomem = [ "0xE8601,0x7F@0x10097", "0xFFF35,1@0x10116", "0xE8601,0x7F@0x10195" ]
>
> #iomem = [ "0xE8601,7F@0x10097" ]
> #iomem = [ "0xFFF35,1@0x10116" ]
>
>
> iomem = [ "0xE8601,7F@0x10005" ]
> iomem = [ "0xFFF35,1@0x10084" ]
> ################# DSI ################
>
> #vif = ['mac=00:16:3e:64:b8:40,bridge=xenbr0']
> #nics = 1
> #vif = [ 'eth0=00:60:00:00:00:01' ]
>
> disk = ['/dev/loop1,raw,xvda,w']
> extra = "earlyprintk=xenboot console=hvc0 root=/dev/xvda rootfstype=ext4 rw video=HDMI-A-1:1280x720@60"
> ================================================================
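One thing worth noting about a config written this way: in an xl config file, each `iomem =` assignment replaces the previous value rather than appending to it, so only the last list takes effect. That would explain why the hypervisor log shows a single `memory_map:add` line for the last range only. Assuming the standard xl.cfg `iomem` syntax (`"IOMEM_START,NUM_PAGES[@GFN]"`, with values given as hexadecimal page frame numbers, i.e. physical address right-shifted by 12), all ranges would need to go into one list. A sketch based on the DPE values from the config, not a verified working configuration:

```
# All DPE ranges in a single iomem list (last assignment wins otherwise).
iomem = [ "0xE8600,80@0x10004",
          "0xFFF35,1@0x10084",
          "0xFFF0A,1@0x10085",
          "0xFFF31,1@0x10086",
          "0xE86C0,10@0x100C4" ]
```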
> Here I am using the same IO space (GFNs) for the DPE and DSI nodes, and I get
> the same error below; I also tried with different GFNs and still get the same error.
>
> But after adding this, everything is good until I try to remap the iomem
> a second time, which gives the error below.

Who is doing the remap? The guest? Also, can you expand on what you mean by it crashing the second time? Is it during the remap, or when accessing the newly
mapped region?
Here, the guest (domain-U) is remapping in the display driver.
In the display driver the remapping is successful, but yesterday I found that when I try to access the
remapped region, domain-U gives that crash: "Unhandled fault: ttbr address size fault".
> ================================================================
> [    3.215021] OF: rrrrrrrrrrrr: start: 0x10004000, sz = 0x80000
> [    3.215062] [DISPLAY] dsi_parse_dt(): 1536: of device: /passthrough/dsi@10097000
> [    3.215083] [DISPLAY] dsi_parse_dt(): 1537: +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> [    3.215108] [DISPLAY] dsi_parse_dt(): 1540: ctx->base: ffffff800bd01000
> [    3.215126] [DISPLAY] dsi_parse_dt(): 1541:
> [    3.215136] OF: rrrrrrrrrrrr: start: 0x10084000, sz = 0x1000
> [    3.215169] [DISPLAY] dsi_parse_dt(): 1548:
> [    4.159087] [DISPLAY] dsi_parse_dt(): 1563:
> [    4.159092] [DISPLAY] dsi_parse_dt(): 1568:
> [    4.159132] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: clk_txdphy0_ref,1
> [    4.159163] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
> [    4.159399] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
> [    4.159626] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
> [    4.160218] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
> [    4.160359] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3
> [    4.160532] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0
> [    4.160542] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done
> [    4.160545] [D][XEN_VCLK]xen_of_clk_src_onecell_get(): 286: Xfer done...
> [    4.160554] [DISPLAY] dsi_parse_dt(): 1575:
> [    4.160560] [D][XEN_VCLK]vclk_round_rate(): 224: called...
> [    4.160567] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: clk_txdphy0_ref,4,19200000
> [    4.160570] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
> [    4.161095] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
> [    4.161331] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
> [    4.161946] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
> [    4.162120] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3
> [    4.162284] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0
> [    4.162295] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done
> [    4.162301] [DISPLAY] dsi_parse_dt(): 1583:
> [    4.162314] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: clk_txdphy0_cfg,1
> [    4.162316] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
> [    4.162641] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
> [    4.162984] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
> [    4.163596] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
> [    4.167753] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3
> [    4.167955] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0
> [    4.167968] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done
> [    4.167971] [D][XEN_VCLK]xen_of_clk_src_onecell_get(): 286: Xfer done...
> [    4.167979] [DISPLAY] dsi_parse_dt(): 1593:
> [    4.167982] [D][XEN_VCLK]vclk_round_rate(): 224: called...
> [    4.167985] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: clk_txdphy0_cfg,4,19200000
> [    4.167992] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
> [    4.168244] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
> [    4.168476] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
> [    4.169101] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
> [    4.169262] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3
> [    4.169448] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0
> [    4.169491] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done
> [    4.169510] [DISPLAY] dsi_parse_dt(): 1601:
> [    4.169535] [D][XEN_VCLK]xen_vclk_xfer(): 163: buffer: pclk_dsi0,1
> [    4.169554] [D][XEN_VCLK]xen_vclk_xfer(): 164: *******************************************
> [    4.169803] [D][XEN_VCLK]xen_vclk_xfer(): 170: *******************************************
> [    4.170019] [D][XEN_VCLK]xen_vclk_xfer(): 176: Sending IRQ_DATA to domain-0
> [    4.170619] [D][XEN_VCLK]xen_vclk_interrupt(): 446: IRQ(13) from domain 0 fired!!!
> [    4.170779] [D][XEN_VCLK]vclk_fe_bh(): 394: irq_status: 0x3
> [    4.170965] [D][XEN_VCLK]vclk_fe_bh(): 407: ACK Recieved from dom-0
> [    4.170978] [D][XEN_VCLK]xen_vclk_xfer(): 179: Xfer Done
> [    4.170981] [D][XEN_VCLK]xen_of_clk_src_onecell_get(): 286: Xfer done...
> [    4.170989] [DISPLAY] dsi_parse_dt(): 1611:
> [    4.170992] [DISPLAY] dsi_probe(): 1654: Before component add
> [    4.170997] [DISPLAY] compare_of(): 242:
> [    4.171002] [DISPLAY] kirin_drm_bind(): 257:
> [    4.171004] [drm] +.
> [    4.171386] [DISPLAY] kirin_drm_kms_init(): 105:
> [    4.171391] [drm] +.
> [    4.212543] [DISPLAY] kirin_drm_mode_config_init(): 91:
> [    4.212547] [DISPLAY] dss_drm_init(): 638:
> [    4.212563] [drm] +.
> [    4.212585] [DISPLAY] dss_dts_parse(): 513:
> [    4.212603] [DISPLAY] dss_dts_parse(): 530: of device: /passthrough/dpe@10004000
> [    4.212635] [DISPLAY] dss_dts_parse(): 531: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> [    4.212661] [DISPLAY] dss_dts_parse(): 532: ctx->base: ffffff800bd00000
> [    4.212688] Unhandled fault: ttbr address size fault (0x96000000) at 0xffffff800bd01000

IIRC, this error usually happens when the region is not mapped in
stage-2. On a Xen debug build (CONFIG_DEBUG=y in .config) you should get
some log output if there was a data abort in stage-2.

I enabled CONFIG_DEBUG=y in the xenbits src, but I don't have any log from Xen.
I also checked 'xl dmesg', but there is no log from Xen.

How do I find the GFNs from Xen? I mean, does Xen map the region from the RAM memory of domain-U,
or does it map to the memory of Domain-0 and give access to domain-U?

I am sharing the log of xl dmesg after enabling the guest log level:
=======================================
root@hikey960:/debian# xl dmesg
(XEN) Checking for initrd in /chosen
(XEN) RAM: 0000000000000000 - 000000001abfffff
(XEN) RAM: 000000001ad88000 - 0000000031ffffff
(XEN) RAM: 0000000032101000 - 000000003dffffff
(XEN) RAM: 0000000040000000 - 000000004aee9fff
(XEN) RAM: 0000000089cc0000 - 00000000b8427fff
(XEN) RAM: 00000000b9af0000 - 00000000b9baffff
(XEN) RAM: 00000000b9c50000 - 00000000b9c54fff
(XEN) RAM: 00000000b9c56000 - 00000000b9d4ffff
(XEN) RAM: 00000000ba114000 - 00000000ba11bfff
(XEN) RAM: 00000000ba11c000 - 00000000bdbf1fff
(XEN) RAM: 00000000bdbf2000 - 00000000bdca2fff
(XEN) RAM: 00000000bdca3000 - 00000000bdd58fff
(XEN) RAM: 00000000bdd59000 - 00000000bef4ffff
(XEN) RAM: 00000000bef50000 - 00000000bef54fff
(XEN) RAM: 00000000bef55000 - 00000000bf0dffff
(XEN) RAM: 00000000bf0e0000 - 00000000bf12ffff
(XEN) RAM: 00000000bf180000 - 00000000bf188fff
(XEN) RAM: 00000000bf189000 - 00000000bfffffff
(XEN) RAM: 00000000c0000000 - 00000000dfffffff
(XEN) RAM: 0000000200000000 - 000000021fffffff
(XEN)
(XEN) MODULE[0]: 00000000b8428000 - 00000000b8436000 Device Tree
(XEN) MODULE[1]: 00000000b8544000 - 00000000b997ca00 Kernel       console=tty0 console=hvc0 root=/dev/sdd14 rootwait rw rootfstype=ext4 efi=noruntime video=HDMI-A-1:1280x720@60
(XEN)
(XEN) Command line: loglvl=all console=dtuart dtuart=/soc/serial@fff32000 dom0_mem=512M efi=no-rs guest_loglvl=all
(XEN) Placing Xen at 0x000000001aa00000-0x000000001ac00000
(XEN) Update BOOTMOD_XEN from 00000000b8436000-00000000b8536d81 => 000000001aa00000-000000001ab00d81
(XEN) Domain heap initialised
(XEN) Platform: Generic System
(XEN) Looking for dtuart at "/soc/serial@fff32000", options ""
 Xen 4.8.5-pre
(XEN) Xen version 4.8.5-pre (omkar.bolla@) (aarch64-linux-gnu-gcc (Linaro GCC 7.1-2017.05) 7.1.1 20170510) debug=n  Fri Nov  2 10:40:45 IST 2018
(XEN) Latest ChangeSet:
(XEN) Processor: 410fd034: "ARM Limited", variant: 0x0, part 0xd03, rev 0x4
(XEN) 64-bit Execution:
(XEN)   Processor Features: 0000000000002222 0000000000000000
(XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
(XEN)     Extensions: FloatingPoint AdvancedSIMD
(XEN)   Debug Features: 0000000010305106 0000000000000000
(XEN)   Auxiliary Features: 0000000000000000 0000000000000000
(XEN)   Memory Model Features: 0000000000001122 0000000000000000
(XEN)   ISA Features:  0000000000011120 0000000000000000
(XEN) 32-bit Execution:
(XEN)   Processor Features: 00000131:00011011
(XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
(XEN)     Extensions: GenericTimer Security
(XEN)   Debug Features: 03010066
(XEN)   Auxiliary Features: 00000000
(XEN)   Memory Model Features: 10201105 40000000 01260000 02102211
(XEN)   ISA Features: 02101110 13112111 21232042 01112131 00011142 00011121
(XEN) Using PSCI-1.1 for SMP bringup
(XEN) SMP: Allowing 8 CPUs
(XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 1920 KHz
(XEN) GICv2 initialization:
(XEN)         gic_dist_addr=00000000e82b1000
(XEN)         gic_cpu_addr=00000000e82b2000
(XEN)         gic_hyp_addr=00000000e82b4000
(XEN)         gic_vcpu_addr=00000000e82b6000
(XEN)         gic_maintenance_irq=25
(XEN) GICv2: 384 lines, 8 cpus, secure (IID 0200143b).
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Allocated console ring of 64 KiB.
(XEN) Bringing up CPU1
(XEN) CPU 1 booted.
(XEN) Bringing up CPU2
(XEN) CPU 2 booted.
(XEN) Bringing up CPU3
(XEN) CPU 3 booted.
(XEN) Bringing up CPU4
(XEN) CPU 4 booted.
(XEN) Bringing up CPU5
(XEN) CPU 5 booted.
(XEN) Bringing up CPU6
(XEN) CPU 6 booted.
(XEN) Bringing up CPU7
(XEN) CPU 7 booted.
(XEN) Brought up 8 CPUs
(XEN) P2M: 40-bit IPA with 40-bit PA
(XEN) P2M: 3 levels with order-1 root, VTCR 0x80023558
(XEN) I/O virtualisation disabled
(XEN) build-id: baea0accc1a90dbddfdab8bcec069906a9b45ba6
(XEN) alternatives: Patching with alt table 00000000400a4868 -> 00000000400a4c04
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Loading kernel from boot module @ 00000000b8544000
(XEN) Allocating 1:1 mappings totalling 512MB for dom0:
(XEN) BANK[0] 0x000000c0000000-0x000000e0000000 (512MB)
(XEN) Grant table range: 0x0000001aa00000-0x0000001aa5f000
(XEN) /framebuffer@E8600000 passthrough = 0 nirq = 5 naddr = 7
(XEN)   - MMIO: 00e8600000 - 00e8680000 P2MType=5
(XEN) Device Node: /framebuffer@E8600000
(XEN)   - MMIO: 00fff35000 - 00fff36000 P2MType=5
(XEN) Device Node: /framebuffer@E8600000
(XEN)   - MMIO: 00fff0a000 - 00fff0b000 P2MType=5
(XEN) Device Node: /framebuffer@E8600000
(XEN)   - MMIO: 00e8a09000 - 00e8a0a000 P2MType=5
(XEN) Device Node: /framebuffer@E8600000
(XEN)   - MMIO: 00e86c0000 - 00e86d0000 P2MType=5
(XEN) Device Node: /framebuffer@E8600000
(XEN)   - MMIO: 00fff02000 - 00fff03000 P2MType=5
(XEN) Device Node: /framebuffer@E8600000
(XEN)   - MMIO: 00fff31000 - 00fff32000 P2MType=5
(XEN) Device Node: /framebuffer@E8600000
(XEN) /mali@E82C0000 passthrough = 0 nirq = 3 naddr = 1
(XEN)   - MMIO: 00e82c0000 - 00e82c4000 P2MType=5
(XEN) Device Node: /mali@E82C0000
(XEN) /dpe@E8600000 passthrough = 0 nirq = 1 naddr = 5
(XEN)   - MMIO: 00e8600000 - 00e8680000 P2MType=5
(XEN) Device Node: /dpe@E8600000
(XEN)   - MMIO: 00fff35000 - 00fff36000 P2MType=5
(XEN) Device Node: /dpe@E8600000
(XEN)   - MMIO: 00fff0a000 - 00fff0b000 P2MType=5
(XEN) Device Node: /dpe@E8600000
(XEN)   - MMIO: 00fff31000 - 00fff32000 P2MType=5
(XEN) Device Node: /dpe@E8600000
(XEN)   - MMIO: 00e86c0000 - 00e86d0000 P2MType=5
(XEN) Device Node: /dpe@E8600000
(XEN) /dsi@E8601000 passthrough = 0 nirq = 0 naddr = 2
(XEN)   - MMIO: 00e8601000 - 00e8680000 P2MType=5
(XEN) Device Node: /dsi@E8601000
(XEN)   - MMIO: 00fff35000 - 00fff36000 P2MType=5
(XEN) Device Node: /dsi@E8601000
(XEN) Loading zImage from 00000000b8544000 to 00000000c0080000-00000000c14b8a00
(XEN) Allocating PPI 16 for event channel interrupt
(XEN) Loading dom0 DTB to 0x00000000c8000000-0x00000000c800a8a1
(XEN) Scrubbing Free RAM on 1 nodes using 8 CPUs
(XEN) .........done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 272kB init memory.
(XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER4
(XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER8
(XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER12
(XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER16
(XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER20
(XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER24
(XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER28
(XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER32
(XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER36
(XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER40
(XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER44
(XEN) d0v0: vGICD: unhandled word write 0xffffffff to ICACTIVER0
(XEN) d0v1: vGICD: unhandled word write 0xffffffff to ICACTIVER0
(XEN) d0v2: vGICD: unhandled word write 0xffffffff to ICACTIVER0
(XEN) d0v3: vGICD: unhandled word write 0xffffffff to ICACTIVER0
(XEN) d0v4: vGICD: unhandled word write 0xffffffff to ICACTIVER0
(XEN) d0v5: vGICD: unhandled word write 0xffffffff to ICACTIVER0
(XEN) d0v6: vGICD: unhandled word write 0xffffffff to ICACTIVER0
(XEN) d0v7: vGICD: unhandled word write 0xffffffff to ICACTIVER0
-------------------------------------------------------------------------------
(XEN) memory_map:add: dom1 gfn=10084 mfn=fff35 nr=1
-------------------------------------------------------------------------------
(XEN) d1v0: vGICD: unhandled word write 0xffffffff to ICACTIVER0
(XEN) d1v1: vGICD: unhandled word write 0xffffffff to ICACTIVER0
(XEN) d1v2: vGICD: unhandled word write 0xffffffff to ICACTIVER0
(XEN) d1v3: vGICD: unhandled word write 0xffffffff to ICACTIVER0
(XEN) d1v4: vGICD: unhandled word write 0xffffffff to ICACTIVER0
(XEN) d1v5: vGICD: unhandled word write 0xffffffff to ICACTIVER0
(XEN) d1v6: vGICD: unhandled word write 0xffffffff to ICACTIVER0
(XEN) d1v7: vGICD: unhandled word write 0xffffffff to ICACTIVER0
=======================================
After enabling the guest log level, it shows only one IO region mapped instead of all of them. Why?

Cheers,

--
Julien Grall

Thanks,
Omkar B


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

From: Wei Liu
Subject: Re: Xen PV: Sample new PV driver for buffer sharing between domains
Date: Fri, 2 Nov 2018 08:47:49 +0000
Message-ID: <20181102084749.snkpmg4fjmryseww@zion.uk.xensource.com>

On Thu, Nov 01, 2018 at 09:49:13PM +0000, Julien Grall wrote:
> (+ Wei)
>
> On 11/1/18 9:15 AM, Omkar Bolla wrote:
> > Hi,
> >
> > > May I ask why you need the dependency on the rootfs?
> >
> > I am trying to pass-through the display to the guest domain; to do that the
> > driver needs clocks. I have written a simple basic clock PV frontend and
> > backend.
> > So I thought these clocks must be initialised before display driver
> > initialisation.
>
> The graphic driver should request the clock, right? So Linux will make sure
> to have the clock before initializing the display.
>
> >
> > But if I start both the domain and the clocks script one after another, the
> > clock got initialised properly. Problem solved.
> > But still I have some doubt: is it possible to do something in the xenbits
> > src to start a script automatically when we start an unprivileged domain?
>
> I am not entirely sure if we have a way to run a script during domain
> creation. Wei, do you know if that's possible?

There is the hotplug script mechanism which may or may not be what you
need.

Wei.