From: "Deucher, Alexander"
Subject: Re: FW: [PATCH v2 2/2] drm/amdgpu: Move to gtt before cpu accesses dma buf.
Date: Wed, 13 Dec 2017 19:49:29 +0000
To: "Li, Samuel" <Samuel.Li@amd.com>; "Koenig, Christian"; amd-gfx@lists.freedesktop.org

Please send the drm prime patch to dri-devel if you haven't already.

Alex

________________________________
From: amd-gfx on behalf of Samuel Li <Samuel.Li@amd.com>
Sent: Wednesday, December 13, 2017 2:17:49 PM
To: Koenig, Christian; amd-gfx@lists.freedesktop.org
Subject: Re: FW: [PATCH v2 2/2] drm/amdgpu: Move to gtt before cpu accesses dma buf.

For the record.

On 2017-12-13 01:26 PM, Christian König wrote:
> Actually, we try to avoid drivers defining their own dma_buf_ops in DRM.
>
> That's why you have all those callbacks in drm_driver which just mirror the dma_buf interface but unpack the GEM object from the dma-buf object.
>
> There are quite a number of exceptions, but those drivers then implement everything on their own because the DRM marshalling doesn't make sense for them.
>
> Christian.
>
> On 13.12.2017 at 19:01, Samuel Li wrote:
>> That is an approach. The cost is adding a new callback, which is not strictly necessary, since a driver can always define its own dma_buf_ops.
>> The intention here is to let a driver reuse drm_gem_prime_dmabuf_ops{}. If you would like to go that far, maybe a more straightforward way is to export those ops, e.g. drm_gem_map_attach, so that a driver can use them in its own definitions.
>>
>> Sam
>>
>> On 2017-12-13 05:23 AM, Christian König wrote:
>>> Something like the attached patch. Not even compile tested.
>>>
>>> Christian.
>>>
>>> On 12.12.2017 at 20:13, Samuel Li wrote:
>>>> Not sure I understand your comments correctly. Currently amdgpu prime reuses drm_gem_prime_dmabuf_ops{}, which is defined as static, and reasonably so. I do not see an easier way to introduce amdgpu_gem_begin_cpu_access().
>>>>
>>>> Sam
>>>>
>>>> On 2017-12-12 01:30 PM, Christian König wrote:
>>>>>> +	while (amdgpu_dmabuf_ops.begin_cpu_access != amdgpu_gem_begin_cpu_access)
>>>>> I would rather just add the four-liner to drm core that forwards the begin_cpu_access callback into a drm_driver callback instead of all this.
>>>>>
>>>>> But apart from that it looks good to me.
>>>>>
>>>>> Christian.
>>>>>
>>>>> On 12.12.2017 at 19:14, Li, Samuel wrote:
>>>>>> A gentle ping on this one, Christian, can you take a look at this?
>>>>>>
>>>>>> Sam
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Li, Samuel
>>>>>> Sent: Friday, December 08, 2017 5:22 PM
>>>>>> To: amd-gfx@lists.freedesktop.org
>>>>>> Cc: Li, Samuel <Samuel.Li@amd.com>
>>>>>> Subject: [PATCH v2 2/2] drm/amdgpu: Move to gtt before cpu accesses dma buf.
>>>>>>
>>>>>> To improve CPU read performance; this is currently implemented for APUs.
>>>>>>
>>>>>> v2: Adapt to change https://lists.freedesktop.org/archives/amd-gfx/2017-October/015174.html
>>>>>>
>>>>>> Change-Id: I7a583e23a9ee706e0edd2a46f4e4186a609368e3
>>>>>> ---
>>>>>>  drivers/gpu/drm/amd/amdgpu/amdgpu.h       |  2 ++
>>>>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  2 +-
>>>>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c | 58 ++++++++++++++++++++++++++++++
>>>>>>  3 files changed, 61 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>>>>> index f8657c3..193db70 100644
>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>>>>> @@ -417,6 +417,8 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
>>>>>>  struct dma_buf *amdgpu_gem_prime_export(struct drm_device *dev,
>>>>>>  					struct drm_gem_object *gobj,
>>>>>>  					int flags);
>>>>>> +struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
>>>>>> +					       struct dma_buf *dma_buf);
>>>>>>  int amdgpu_gem_prime_pin(struct drm_gem_object *obj);
>>>>>>  void amdgpu_gem_prime_unpin(struct drm_gem_object *obj);
>>>>>>  struct reservation_object *amdgpu_gem_prime_res_obj(struct drm_gem_object *);
>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>>>>> index 31383e0..df30b08 100644
>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>>>>> @@ -868,7 +868,7 @@ static struct drm_driver kms_driver = {
>>>>>>  	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
>>>>>>  	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
>>>>>>  	.gem_prime_export = amdgpu_gem_prime_export,
>>>>>> -	.gem_prime_import = drm_gem_prime_import,
>>>>>> +	.gem_prime_import = amdgpu_gem_prime_import,
>>>>>>  	.gem_prime_pin = amdgpu_gem_prime_pin,
>>>>>>  	.gem_prime_unpin = amdgpu_gem_prime_unpin,
>>>>>>  	.gem_prime_res_obj = amdgpu_gem_prime_res_obj,
>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
>>>>>> index ae9c106..de6f599 100644
>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_prime.c
>>>>>> @@ -26,6 +26,7 @@
>>>>>>  #include <drm/drmP.h>
>>>>>>
>>>>>>  #include "amdgpu.h"
>>>>>> +#include "amdgpu_display.h"
>>>>>>  #include <drm/amdgpu_drm.h>
>>>>>>  #include <linux/dma-buf.h>
>>>>>>
>>>>>> @@ -164,6 +165,33 @@ struct reservation_object *amdgpu_gem_prime_res_obj(struct drm_gem_object *obj)
>>>>>>  	return bo->tbo.resv;
>>>>>>  }
>>>>>>
>>>>>> +static int amdgpu_gem_begin_cpu_access(struct dma_buf *dma_buf,
>>>>>> +				       enum dma_data_direction direction)
>>>>>> +{
>>>>>> +	struct amdgpu_bo *bo = gem_to_amdgpu_bo(dma_buf->priv);
>>>>>> +	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>>>>>> +	struct ttm_operation_ctx ctx = { true, false };
>>>>>> +	u32 domain = amdgpu_framebuffer_domains(adev);
>>>>>> +	long ret = 0;
>>>>>> +	bool reads = (direction == DMA_BIDIRECTIONAL ||
>>>>>> +		      direction == DMA_FROM_DEVICE);
>>>>>> +
>>>>>> +	if (!reads || !(domain & AMDGPU_GEM_DOMAIN_GTT) || bo->pin_count)
>>>>>> +		return 0;
>>>>>> +
>>>>>> +	/* move to gtt */
>>>>>> +	ret = amdgpu_bo_reserve(bo, false);
>>>>>> +	if (unlikely(ret != 0))
>>>>>> +		return ret;
>>>>>> +
>>>>>> +	amdgpu_ttm_placement_from_domain(bo, AMDGPU_GEM_DOMAIN_GTT);
>>>>>> +	ret = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
>>>>>> +
>>>>>> +	amdgpu_bo_unreserve(bo);
>>>>>> +	return ret;
>>>>>> +}
>>>>>> +
>>>>>> +static struct dma_buf_ops amdgpu_dmabuf_ops;
>>>>>> +static atomic_t aops_lock;
>>>>>> +
>>>>>>  struct dma_buf *amdgpu_gem_prime_export(struct drm_device *dev,
>>>>>>  					struct drm_gem_object *gobj,
>>>>>>  					int flags)
>>>>>> @@ -178,5 +206,35 @@ struct dma_buf *amdgpu_gem_prime_export(struct drm_device *dev,
>>>>>>  	buf = drm_gem_prime_export(dev, gobj, flags);
>>>>>>  	if (!IS_ERR(buf))
>>>>>>  		buf->file->f_mapping = dev->anon_inode->i_mapping;
>>>>>> +
>>>>>> +	while (amdgpu_dmabuf_ops.begin_cpu_access != amdgpu_gem_begin_cpu_access) {
>>>>>> +		if (!atomic_cmpxchg(&aops_lock, 0, 1)) {
>>>>>> +			amdgpu_dmabuf_ops = *(buf->ops);
>>>>>> +			amdgpu_dmabuf_ops.begin_cpu_access = amdgpu_gem_begin_cpu_access;
>>>>>> +		}
>>>>>> +	}
>>>>>> +	buf->ops = &amdgpu_dmabuf_ops;
>>>>>> +
>>>>>>  	return buf;
>>>>>>  }
>>>>>> +
>>>>>> +struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
>>>>>> +					       struct dma_buf *dma_buf)
>>>>>> +{
>>>>>> +	struct drm_gem_object *obj;
>>>>>> +
>>>>>> +	if (dma_buf->ops == &amdgpu_dmabuf_ops) {
>>>>>> +		obj = dma_buf->priv;
>>>>>> +		if (obj->dev == dev) {
>>>>>> +			/*
>>>>>> +			 * Importing a dmabuf exported from our own GEM increases
>>>>>> +			 * the refcount on the GEM object itself instead of the
>>>>>> +			 * f_count of the dmabuf.
>>>>>> +			 */
>>>>>> +			drm_gem_object_get(obj);
>>>>>> +			return obj;
>>>>>> +		}
>>>>>> +	}
>>>>>> +
>>>>>> +	return drm_gem_prime_import(dev, dma_buf);
>>>>>> +}
>>>>>> --
>>>>>> 2.7.4
>>>>>>
>
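For illustration, the "four-liner" forwarding Christian suggests above could look roughly like the sketch below: a generic handler in drm_prime.c that unpacks the GEM object from the dma-buf and forwards into the driver. The gem_prime_begin_cpu_access hook on struct drm_driver is hypothetical (no such callback existed in DRM at the time of this thread), and the sketch is untested:

static int drm_gem_dmabuf_begin_cpu_access(struct dma_buf *dma_buf,
					   enum dma_data_direction direction)
{
	/* Unpack the GEM object, mirroring the other prime callbacks. */
	struct drm_gem_object *obj = dma_buf->priv;
	struct drm_device *dev = obj->dev;

	/* Drivers that do not care leave the (hypothetical) hook NULL. */
	if (!dev->driver->gem_prime_begin_cpu_access)
		return 0;

	return dev->driver->gem_prime_begin_cpu_access(obj, direction);
}

Installed as .begin_cpu_access in drm_gem_prime_dmabuf_ops, this would let amdgpu implement the GTT move as an ordinary drm_driver callback instead of patching the shared ops at export time.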
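For readers following along: the exporter's begin_cpu_access runs when userspace brackets its CPU access with DMA_BUF_IOCTL_SYNC, so the GTT move in the patch above is triggered by the start of a read. A minimal user-side sequence looks like this (error handling elided; fd is assumed to be a dma-buf file descriptor obtained through the PRIME handle-to-fd ioctl):

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/dma-buf.h>

/* Bracketed CPU read of an exported dma-buf; the SYNC_START ioctl is
 * what ends up calling the exporter's begin_cpu_access. */
static int read_dmabuf(int fd, size_t size)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ,
	};
	void *ptr = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);

	if (ptr == MAP_FAILED)
		return -1;

	ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);	/* -> begin_cpu_access */
	/* ... CPU reads from ptr happen here, against the GTT placement ... */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ;
	ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);	/* -> end_cpu_access */

	munmap(ptr, size);
	return 0;
}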
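One aside on the one-time ops override in amdgpu_gem_prime_export, should it survive review in this form: a thread that loses the atomic_cmpxchg race spins in the while loop until the winner completes the copy, and nothing orders the structure copy against those readers. If the override stays, a plain mutex states the intent more directly. The following is only a sketch of that variant, not what was submitted:

/* Sketch: serialize the one-time copy of the default prime ops with a
 * mutex instead of spinning on atomic_cmpxchg. amdgpu_gem_prime_export
 * would call this once before assigning buf->ops = &amdgpu_dmabuf_ops. */
static struct dma_buf_ops amdgpu_dmabuf_ops;
static DEFINE_MUTEX(amdgpu_dmabuf_ops_lock);

static void amdgpu_dmabuf_ops_init(const struct dma_buf_ops *defaults)
{
	mutex_lock(&amdgpu_dmabuf_ops_lock);
	if (amdgpu_dmabuf_ops.begin_cpu_access != amdgpu_gem_begin_cpu_access) {
		amdgpu_dmabuf_ops = *defaults;
		amdgpu_dmabuf_ops.begin_cpu_access = amdgpu_gem_begin_cpu_access;
	}
	mutex_unlock(&amdgpu_dmabuf_ops_lock);
}

Either way, the drm_driver forwarding sketched earlier avoids the override, and its races, entirely.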