Subject: Re: [PATCH 6/8] drm: Add a drm_get_unmapped_area() helper
To: Christian König, linux-mm@kvack.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org
Cc: pv-drivers@vmware.com, linux-graphics-maintainer@vmware.com, Thomas Hellstrom, Andrew Morton, Michal Hocko, "Matthew Wilcox (Oracle)", "Kirill A. Shutemov", Ralph Campbell, Jérôme Glisse
References: <20191203132239.5910-1-thomas_os@shipmail.org> <20191203132239.5910-7-thomas_os@shipmail.org>
From: Thomas Hellström (VMware)
Organization: VMware Inc.
Message-ID: <98af5b11-1034-91fa-aa38-5730f116d1cd@shipmail.org>
Date: Wed, 4 Dec 2019 12:36:47 +0100

On 12/4/19 12:11 PM, Christian König wrote:
> On 03.12.19 at 14:22, Thomas Hellström (VMware) wrote:
>> From: Thomas Hellstrom
>>
>> This helper is used to align user-space buffer object addresses to
>> huge page boundaries, minimizing the chance of alignment mismatch
>> between user-space addresses and physical addresses.
>
> Mhm, I'm wondering if that is really such a good idea.

Could you elaborate? What drawbacks do you see? Note that this is the
way other subsystems are doing it. Take a look at
shmem_get_unmapped_area() for instance.

>
> Wouldn't it be sufficient if userspace uses MAP_HUGETLB?

MAP_HUGETLB is something different and appears to be tied to the
kernel's persistent huge page mechanism, whereas the TTM huge pages are
tied to the THP functionality (although skipped by khugepaged).

Thanks,
Thomas

>
> Regards,
> Christian.
>
>>
>> Cc: Andrew Morton
>> Cc: Michal Hocko
>> Cc: "Matthew Wilcox (Oracle)"
>> Cc: "Kirill A. Shutemov"
>> Cc: Ralph Campbell
>> Cc: "Jérôme Glisse"
>> Cc: "Christian König"
>> Signed-off-by: Thomas Hellstrom
>> ---
>>  drivers/gpu/drm/drm_file.c | 130 +++++++++++++++++++++++++++++++++++
>>  include/drm/drm_file.h     |   5 ++
>>  2 files changed, 135 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
>> index ea34bc991858..e5b4024cd397 100644
>> --- a/drivers/gpu/drm/drm_file.c
>> +++ b/drivers/gpu/drm/drm_file.c
>> @@ -31,6 +31,8 @@
>>   * OTHER DEALINGS IN THE SOFTWARE.
>>   */
>>
>> +#include <...>
>> +
>>  #include <...>
>>  #include <...>
>>  #include <...>
>> @@ -41,6 +43,7 @@
>>  #include <...>
>>  #include <...>
>>  #include <...>
>> +#include <...>
>>
>>  #include "drm_crtc_internal.h"
>>  #include "drm_internal.h"
>> @@ -754,3 +757,130 @@ void drm_send_event(struct drm_device *dev,
>> struct drm_pending_event *e)
>>  	spin_unlock_irqrestore(&dev->event_lock, irqflags);
>>  }
>>  EXPORT_SYMBOL(drm_send_event);
>> +
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> +/*
>> + * drm_addr_inflate() attempts to construct an aligned area by inflating
>> + * the area size and skipping the unaligned start of the area.
>> + * adapted from shmem_get_unmapped_area()
>> + */
>> +static unsigned long drm_addr_inflate(unsigned long addr,
>> +				      unsigned long len,
>> +				      unsigned long pgoff,
>> +				      unsigned long flags,
>> +				      unsigned long huge_size)
>> +{
>> +	unsigned long offset, inflated_len;
>> +	unsigned long inflated_addr;
>> +	unsigned long inflated_offset;
>> +
>> +	offset = (pgoff << PAGE_SHIFT) & (huge_size - 1);
>> +	if (offset && offset + len < 2 * huge_size)
>> +		return addr;
>> +	if ((addr & (huge_size - 1)) == offset)
>> +		return addr;
>> +
>> +	inflated_len = len + huge_size - PAGE_SIZE;
>> +	if (inflated_len > TASK_SIZE)
>> +		return addr;
>> +	if (inflated_len < len)
>> +		return addr;
>> +
>> +	inflated_addr = current->mm->get_unmapped_area(NULL, 0, inflated_len,
>> +						       0, flags);
>> +	if (IS_ERR_VALUE(inflated_addr))
>> +		return addr;
>> +	if (inflated_addr & ~PAGE_MASK)
>> +		return addr;
>> +
>> +	inflated_offset = inflated_addr & (huge_size - 1);
>> +	inflated_addr += offset - inflated_offset;
>> +	if (inflated_offset > offset)
>> +		inflated_addr += huge_size;
>> +
>> +	if (inflated_addr > TASK_SIZE - len)
>> +		return addr;
>> +
>> +	return inflated_addr;
>> +}
>> +
>> +/**
>> + * drm_get_unmapped_area() - Get an unused user-space virtual memory area
>> + * suitable for huge page table entries.
>> + * @file: The struct file representing the address space being mmap()'d.
>> + * @uaddr: Start address suggested by user-space.
>> + * @len: Length of the area.
>> + * @pgoff: The page offset into the address space.
>> + * @flags: mmap flags
>> + * @mgr: The address space manager used by the drm driver. This argument can
>> + * probably be removed at some point when all drivers use the same
>> + * address space manager.
>> + *
>> + * This function attempts to find an unused user-space virtual memory area
>> + * that can accommodate the size we want to map, and that is properly
>> + * aligned to facilitate huge page table entries matching actual
>> + * huge pages or huge page aligned memory in buffer objects. Buffer objects
>> + * are assumed to start at huge page boundary pfns (io memory) or be
>> + * populated by huge pages aligned to the start of the buffer object
>> + * (system- or coherent memory). Adapted from shmem_get_unmapped_area.
>> + *
>> + * Return: aligned user-space address.
>> + */
>> +unsigned long drm_get_unmapped_area(struct file *file,
>> +				    unsigned long uaddr, unsigned long len,
>> +				    unsigned long pgoff, unsigned long flags,
>> +				    struct drm_vma_offset_manager *mgr)
>> +{
>> +	unsigned long addr;
>> +	unsigned long inflated_addr;
>> +	struct drm_vma_offset_node *node;
>> +
>> +	if (len > TASK_SIZE)
>> +		return -ENOMEM;
>> +
>> +	/* Adjust mapping offset to be zero at bo start */
>> +	drm_vma_offset_lock_lookup(mgr);
>> +	node = drm_vma_offset_lookup_locked(mgr, pgoff, 1);
>> +	if (node)
>> +		pgoff -= node->vm_node.start;
>> +	drm_vma_offset_unlock_lookup(mgr);
>> +
>> +	addr = current->mm->get_unmapped_area(file, uaddr, len, pgoff, flags);
>> +	if (IS_ERR_VALUE(addr))
>> +		return addr;
>> +	if (addr & ~PAGE_MASK)
>> +		return addr;
>> +	if (addr > TASK_SIZE - len)
>> +		return addr;
>> +
>> +	if (len < HPAGE_PMD_SIZE)
>> +		return addr;
>> +	if (flags & MAP_FIXED)
>> +		return addr;
>> +	/*
>> +	 * Our priority is to support MAP_SHARED mapped hugely;
>> +	 * and support MAP_PRIVATE mapped hugely too, until it is COWed.
>> +	 * But if caller specified an address hint, respect that as before.
>> +	 */
>> +	if (uaddr)
>> +		return addr;
>> +
>> +	inflated_addr = drm_addr_inflate(addr, len, pgoff, flags,
>> +					 HPAGE_PMD_SIZE);
>> +
>> +	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) &&
>> +	    len >= HPAGE_PUD_SIZE)
>> +		inflated_addr = drm_addr_inflate(inflated_addr, len, pgoff,
>> +						 flags, HPAGE_PUD_SIZE);
>> +	return inflated_addr;
>> +}
>> +#else /* CONFIG_TRANSPARENT_HUGEPAGE */
>> +unsigned long drm_get_unmapped_area(struct file *file,
>> +				    unsigned long uaddr, unsigned long len,
>> +				    unsigned long pgoff, unsigned long flags,
>> +				    struct drm_vma_offset_manager *mgr)
>> +{
>> +	return current->mm->get_unmapped_area(file, uaddr, len, pgoff, flags);
>> +}
>> +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>> +EXPORT_SYMBOL_GPL(drm_get_unmapped_area);
>> diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
>> index 67af60bb527a..4719cc80d547 100644
>> --- a/include/drm/drm_file.h
>> +++ b/include/drm/drm_file.h
>> @@ -386,5 +386,10 @@ void drm_event_cancel_free(struct drm_device *dev,
>>  				  struct drm_pending_event *p);
>>  void drm_send_event_locked(struct drm_device *dev, struct drm_pending_event *e);
>>  void drm_send_event(struct drm_device *dev, struct drm_pending_event *e);
>> +struct drm_vma_offset_manager;
>> +unsigned long drm_get_unmapped_area(struct file *file,
>> +				    unsigned long uaddr, unsigned long len,
>> +				    unsigned long pgoff, unsigned long flags,
>> +				    struct drm_vma_offset_manager *mgr);
>>
>>  #endif /* _DRM_FILE_H_ */