linux-fsdevel.vger.kernel.org archive mirror
* [PATCH v5 00/10] NTFS read-write driver GPL implementation by Paragon Software
@ 2020-09-11 14:10 Konstantin Komarov
  2020-09-11 14:10 ` [PATCH v5 01/10] fs/ntfs3: Add headers and misc files Konstantin Komarov
                   ` (7 more replies)
  0 siblings, 8 replies; 23+ messages in thread
From: Konstantin Komarov @ 2020-09-11 14:10 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: viro, linux-kernel, pali, dsterba, aaptel, willy, rdunlap, joe,
	mark, nborisov, Konstantin Komarov

This patch series adds an NTFS read-write driver in fs/ntfs3.

Having decades of expertise in commercial file system development and extensive
test coverage, we at Paragon Software GmbH want to make our contribution to
the Open Source community by providing an implementation of an NTFS read-write
driver for the Linux kernel.

This is a fully functional NTFS read-write driver. The current version works
with NTFS (including v3.1) and normal/compressed/sparse files, and supports
journal replaying.

We plan to maintain this codebase once it is merged, adding new features and
fixing bugs. For example, full journaling support over JBD will be added in
later updates.
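
A minimal usage sketch (not part of the series), assuming the driver
registers the "ntfs3" filesystem type and accepts the nls/prealloc mount
options listed in the change log below; the device and mount point paths
and the option string are only illustrative:

	#include <stdio.h>
	#include <sys/mount.h>

	int main(void)
	{
		/* "nls=" (added in v5) and "prealloc" (added in v3) are
		 * driver mount options; paths here are placeholders.
		 */
		if (mount("/dev/sdb1", "/mnt/ntfs", "ntfs3", 0,
			  "nls=utf8,prealloc")) {
			perror("mount");
			return 1;
		}
		return 0;
	}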

v2:
 - patch split into chunks (file-wise)
 - build issues fixed
 - sparse and checkpatch.pl errors fixed
 - NULL pointer dereference on mkfs.ntfs-formatted volume mount fixed
 - cosmetics + code cleanup

v3:
 - added acl, noatime, no_acs_rules, prealloc mount options
 - added fiemap support
 - fixed encodings support
 - removed typedefs
 - adopted kernel-style logging mechanisms
 - fixed typos and corner-case issues

v4:
 - atomic_open() refactored
 - code style updated
 - bugfixes

v5:
 - nls/nls_alt mount options added
 - Unicode conversion fixes
 - improved operations with fragmented sparse files
 - logging cosmetics

Konstantin Komarov (10):
  fs/ntfs3: Add headers and misc files
  fs/ntfs3: Add initialization of super block
  fs/ntfs3: Add bitmap
  fs/ntfs3: Add file operations and implementation
  fs/ntfs3: Add attrib operations
  fs/ntfs3: Add compression
  fs/ntfs3: Add NTFS journal
  fs/ntfs3: Add Kconfig, Makefile and doc
  fs/ntfs3: Add NTFS3 in fs/Kconfig and fs/Makefile
  fs/ntfs3: Add MAINTAINERS

 Documentation/filesystems/ntfs3.rst |  107 +
 MAINTAINERS                         |    7 +
 fs/Kconfig                          |    1 +
 fs/Makefile                         |    1 +
 fs/ntfs3/Kconfig                    |   23 +
 fs/ntfs3/Makefile                   |   11 +
 fs/ntfs3/attrib.c                   | 1300 +++++++
 fs/ntfs3/attrlist.c                 |  462 +++
 fs/ntfs3/bitfunc.c                  |  137 +
 fs/ntfs3/bitmap.c                   | 1516 ++++++++
 fs/ntfs3/debug.h                    |   59 +
 fs/ntfs3/dir.c                      |  631 ++++
 fs/ntfs3/file.c                     | 1224 +++++++
 fs/ntfs3/frecord.c                  | 2363 ++++++++++++
 fs/ntfs3/fslog.c                    | 5222 +++++++++++++++++++++++++++
 fs/ntfs3/fsntfs.c                   | 2210 ++++++++++++
 fs/ntfs3/index.c                    | 2639 ++++++++++++++
 fs/ntfs3/inode.c                    | 2004 ++++++++++
 fs/ntfs3/lznt.c                     |  452 +++
 fs/ntfs3/namei.c                    |  542 +++
 fs/ntfs3/ntfs.h                     | 1256 +++++++
 fs/ntfs3/ntfs_fs.h                  | 1001 +++++
 fs/ntfs3/record.c                   |  612 ++++
 fs/ntfs3/run.c                      | 1156 ++++++
 fs/ntfs3/super.c                    | 1430 ++++++++
 fs/ntfs3/upcase.c                   |   78 +
 fs/ntfs3/xattr.c                    |  996 +++++
 27 files changed, 27440 insertions(+)
 create mode 100644 Documentation/filesystems/ntfs3.rst
 create mode 100644 fs/ntfs3/Kconfig
 create mode 100644 fs/ntfs3/Makefile
 create mode 100644 fs/ntfs3/attrib.c
 create mode 100644 fs/ntfs3/attrlist.c
 create mode 100644 fs/ntfs3/bitfunc.c
 create mode 100644 fs/ntfs3/bitmap.c
 create mode 100644 fs/ntfs3/debug.h
 create mode 100644 fs/ntfs3/dir.c
 create mode 100644 fs/ntfs3/file.c
 create mode 100644 fs/ntfs3/frecord.c
 create mode 100644 fs/ntfs3/fslog.c
 create mode 100644 fs/ntfs3/fsntfs.c
 create mode 100644 fs/ntfs3/index.c
 create mode 100644 fs/ntfs3/inode.c
 create mode 100644 fs/ntfs3/lznt.c
 create mode 100644 fs/ntfs3/namei.c
 create mode 100644 fs/ntfs3/ntfs.h
 create mode 100644 fs/ntfs3/ntfs_fs.h
 create mode 100644 fs/ntfs3/record.c
 create mode 100644 fs/ntfs3/run.c
 create mode 100644 fs/ntfs3/super.c
 create mode 100644 fs/ntfs3/upcase.c
 create mode 100644 fs/ntfs3/xattr.c

-- 
2.25.4


^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH v5 01/10] fs/ntfs3: Add headers and misc files
  2020-09-11 14:10 [PATCH v5 00/10] NTFS read-write driver GPL implementation by Paragon Software Konstantin Komarov
@ 2020-09-11 14:10 ` Konstantin Komarov
  2020-09-12  2:28   ` Joe Perches
  2020-09-11 14:10 ` [PATCH v5 03/10] fs/ntfs3: Add bitmap Konstantin Komarov
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 23+ messages in thread
From: Konstantin Komarov @ 2020-09-11 14:10 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: viro, linux-kernel, pali, dsterba, aaptel, willy, rdunlap, joe,
	mark, nborisov, Konstantin Komarov

This adds the headers and misc files for the NTFS3 driver.

Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
---
 fs/ntfs3/debug.h   |   59 +++
 fs/ntfs3/ntfs.h    | 1256 ++++++++++++++++++++++++++++++++++++++++++++
 fs/ntfs3/ntfs_fs.h | 1001 +++++++++++++++++++++++++++++++++++
 fs/ntfs3/upcase.c  |   78 +++
 4 files changed, 2394 insertions(+)
 create mode 100644 fs/ntfs3/debug.h
 create mode 100644 fs/ntfs3/ntfs.h
 create mode 100644 fs/ntfs3/ntfs_fs.h
 create mode 100644 fs/ntfs3/upcase.c

diff --git a/fs/ntfs3/debug.h b/fs/ntfs3/debug.h
new file mode 100644
index 000000000000..38ec7420017b
--- /dev/null
+++ b/fs/ntfs3/debug.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ *  linux/fs/ntfs3/debug.h
+ *
+ * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved.
+ *
+ * useful functions for debugging
+ */
+
+#ifndef Add2Ptr
+#define Add2Ptr(P, I) (void *)((u8 *)(P) + (I))
+#define PtrOffset(B, O) ((size_t)((size_t)(O) - (size_t)(B)))
+#endif
+
+#define QuadAlign(n) (((n) + 7u) & (~7u))
+#define IsQuadAligned(n) (!((size_t)(n)&7u))
+#define Quad2Align(n) (((n) + 15u) & (~15u))
+#define IsQuad2Aligned(n) (!((size_t)(n)&15u))
+#define Quad4Align(n) (((n) + 31u) & (~31u))
+#define IsSizeTAligned(n) (!((size_t)(n) & (sizeof(size_t) - 1)))
+#define DwordAlign(n) (((n) + 3u) & (~3u))
+#define IsDwordAligned(n) (!((size_t)(n)&3u))
+#define WordAlign(n) (((n) + 1u) & (~1u))
+#define IsWordAligned(n) (!((size_t)(n)&1u))
+
+#ifdef CONFIG_PRINTK
+__printf(2, 3) void ntfs_printk(const struct super_block *sb, const char *fmt,
+				...);
+__printf(2, 3) void ntfs_inode_printk(struct inode *inode, const char *fmt,
+				      ...);
+#else
+static inline __printf(2, 3) void ntfs_printk(const struct super_block *sb,
+					      const char *fmt, ...)
+{
+}
+
+static inline __printf(2, 3) void ntfs_inode_printk(struct inode *inode,
+						    const char *fmt, ...)
+{
+}
+#endif
+
+/*
+ * Logging macros (thanks to Joe Perches <joe@perches.com> for the implementation)
+ */
+
+#define ntfs_err(sb, fmt, ...) ntfs_printk(sb, KERN_ERR fmt, ##__VA_ARGS__)
+#define ntfs_warn(sb, fmt, ...) ntfs_printk(sb, KERN_WARNING fmt, ##__VA_ARGS__)
+#define ntfs_notice(sb, fmt, ...)                                              \
+	ntfs_printk(sb, KERN_NOTICE fmt, ##__VA_ARGS__)
+
+#define ntfs_inode_err(inode, fmt, ...)                                        \
+	ntfs_inode_printk(inode, KERN_ERR fmt, ##__VA_ARGS__)
+#define ntfs_inode_warn(inode, fmt, ...)                                       \
+	ntfs_inode_printk(inode, KERN_WARNING fmt, ##__VA_ARGS__)
+
+#define ntfs_alloc(s, z) kmalloc(s, z ? (GFP_NOFS | __GFP_ZERO) : GFP_NOFS)
+#define ntfs_free(p) kfree(p)
+#define ntfs_memdup(src, len) kmemdup(src, len, GFP_NOFS)
diff --git a/fs/ntfs3/ntfs.h b/fs/ntfs3/ntfs.h
new file mode 100644
index 000000000000..3d917dd6017a
--- /dev/null
+++ b/fs/ntfs3/ntfs.h
@@ -0,0 +1,1256 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ *  linux/fs/ntfs3/ntfs.h
+ *
+ * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved.
+ *
+ * on-disk ntfs structs
+ */
+
+/* TODO:
+ * - Check 4K mft record and 512 bytes cluster
+ */
+
+/*
+ * Activate this define to use binary search in indexes
+ */
+#define NTFS3_INDEX_BINARY_SEARCH
+
+/*
+ * Check each run for marked clusters
+ */
+#define NTFS3_CHECK_FREE_CLST
+
+#define NTFS_NAME_LEN 255
+
+/*
+ * ntfs.sys uses 500 as the maximum number of links,
+ * while the on-disk struct allows up to 0xffff
+ */
+#define NTFS_LINK_MAX 0x400
+//#define NTFS_LINK_MAX 0xffff
+
+/*
+ * Activate to use 64-bit clusters instead of the 32 bits used in ntfs.sys.
+ * Logical and virtual cluster numbers
+ * may, if needed, be redefined to use a 64-bit value.
+ */
+//#define NTFS3_64BIT_CLUSTER
+
+#define NTFS_LZNT_MAX_CLUSTER 4096
+#define NTFS_LZNT_CUNIT 4
+
+struct GUID {
+	__le32 Data1;
+	__le16 Data2;
+	__le16 Data3;
+	u8 Data4[8];
+};
+
+/*
+ * This struct repeats the layout of ATTR_FILE_NAME
+ * at offset 0x40.
+ * It is used to store the global constants NAME_MFT/NAME_MIRROR...
+ * Most constant names are shorter than 10 characters.
+ */
+struct cpu_str {
+	u8 len;
+	u8 unused;
+	u16 name[10];
+};
+
+struct le_str {
+	u8 len;
+	u8 unused;
+	__le16 name[1];
+};
+
+static_assert(SECTOR_SHIFT == 9);
+
+#ifdef NTFS3_64BIT_CLUSTER
+typedef u64 CLST;
+static_assert(sizeof(size_t) == 8);
+#else
+typedef u32 CLST;
+#endif
+
+#define SPARSE_LCN ((CLST)-1)
+#define RESIDENT_LCN ((CLST)-2)
+#define COMPRESSED_LCN ((CLST)-3)
+
+#define COMPRESSION_UNIT 4
+#define COMPRESS_MAX_CLUSTER 0x1000
+#define MFT_INCREASE_CHUNK 1024
+
+enum RECORD_NUM {
+	MFT_REC_MFT = 0,
+	MFT_REC_MIRR = 1,
+	MFT_REC_LOG = 2,
+	MFT_REC_VOL = 3,
+	MFT_REC_ATTR = 4,
+	MFT_REC_ROOT = 5,
+	MFT_REC_BITMAP = 6,
+	MFT_REC_BOOT = 7,
+	MFT_REC_BADCLUST = 8,
+	MFT_REC_QUOTA = 9,
+	MFT_REC_SECURE = 9, // NTFS 3.0
+	MFT_REC_UPCASE = 10,
+	MFT_REC_EXTEND = 11, // NTFS 3.0
+	MFT_REC_RESERVED = 11,
+	MFT_REC_FREE = 16,
+	MFT_REC_USER = 24,
+};
+
+enum ATTR_TYPE {
+	ATTR_ZERO = cpu_to_le32(0x00),
+	ATTR_STD = cpu_to_le32(0x10),
+	ATTR_LIST = cpu_to_le32(0x20),
+	ATTR_NAME = cpu_to_le32(0x30),
+	// ATTR_VOLUME_VERSION on Nt4
+	ATTR_ID = cpu_to_le32(0x40),
+	ATTR_SECURE = cpu_to_le32(0x50),
+	ATTR_LABEL = cpu_to_le32(0x60),
+	ATTR_VOL_INFO = cpu_to_le32(0x70),
+	ATTR_DATA = cpu_to_le32(0x80),
+	ATTR_ROOT = cpu_to_le32(0x90),
+	ATTR_ALLOC = cpu_to_le32(0xA0),
+	ATTR_BITMAP = cpu_to_le32(0xB0),
+	// ATTR_SYMLINK on Nt4
+	ATTR_REPARSE = cpu_to_le32(0xC0),
+	ATTR_EA_INFO = cpu_to_le32(0xD0),
+	ATTR_EA = cpu_to_le32(0xE0),
+	ATTR_PROPERTYSET = cpu_to_le32(0xF0),
+	ATTR_LOGGED_UTILITY_STREAM = cpu_to_le32(0x100),
+	ATTR_END = cpu_to_le32(0xFFFFFFFF)
+};
+
+static_assert(sizeof(enum ATTR_TYPE) == 4);
+
+enum FILE_ATTRIBUTE {
+	FILE_ATTRIBUTE_READONLY = cpu_to_le32(0x00000001),
+	FILE_ATTRIBUTE_HIDDEN = cpu_to_le32(0x00000002),
+	FILE_ATTRIBUTE_SYSTEM = cpu_to_le32(0x00000004),
+	FILE_ATTRIBUTE_ARCHIVE = cpu_to_le32(0x00000020),
+	FILE_ATTRIBUTE_DEVICE = cpu_to_le32(0x00000040),
+
+	FILE_ATTRIBUTE_TEMPORARY = cpu_to_le32(0x00000100),
+	FILE_ATTRIBUTE_SPARSE_FILE = cpu_to_le32(0x00000200),
+	FILE_ATTRIBUTE_REPARSE_POINT = cpu_to_le32(0x00000400),
+	FILE_ATTRIBUTE_COMPRESSED = cpu_to_le32(0x00000800),
+
+	FILE_ATTRIBUTE_OFFLINE = cpu_to_le32(0x00001000),
+	FILE_ATTRIBUTE_NOT_CONTENT_INDEXED = cpu_to_le32(0x00002000),
+	FILE_ATTRIBUTE_ENCRYPTED = cpu_to_le32(0x00004000),
+
+	FILE_ATTRIBUTE_VALID_FLAGS = cpu_to_le32(0x00007fb7),
+
+	FILE_ATTRIBUTE_DIRECTORY = cpu_to_le32(0x10000000),
+};
+
+static_assert(sizeof(enum FILE_ATTRIBUTE) == 4);
+
+extern const struct cpu_str NAME_MFT; // L"$MFT"
+extern const struct cpu_str NAME_MIRROR; // L"$MFTMirr"
+extern const struct cpu_str NAME_LOGFILE; // L"$LogFile"
+extern const struct cpu_str NAME_VOLUME; // L"$Volume"
+extern const struct cpu_str NAME_ATTRDEF; // L"$AttrDef"
+extern const struct cpu_str NAME_ROOT; // L"."
+extern const struct cpu_str NAME_BITMAP; // L"$Bitmap"
+extern const struct cpu_str NAME_BOOT; // L"$Boot"
+extern const struct cpu_str NAME_BADCLUS; // L"$BadClus"
+extern const struct cpu_str NAME_QUOTA; // L"$Quota"
+extern const struct cpu_str NAME_SECURE; // L"$Secure"
+extern const struct cpu_str NAME_UPCASE; // L"$UpCase"
+extern const struct cpu_str NAME_EXTEND; // L"$Extend"
+extern const struct cpu_str NAME_OBJID; // L"$ObjId"
+extern const struct cpu_str NAME_REPARSE; // L"$Reparse"
+extern const struct cpu_str NAME_USNJRNL; // L"$UsnJrnl"
+extern const struct cpu_str NAME_UGM; // L"$UGM"
+
+extern const __le16 I30_NAME[4]; // L"$I30"
+extern const __le16 SII_NAME[4]; // L"$SII"
+extern const __le16 SDH_NAME[4]; // L"$SDH"
+extern const __le16 SO_NAME[2]; // L"$O"
+extern const __le16 SQ_NAME[2]; // L"$Q"
+extern const __le16 SR_NAME[2]; // L"$R"
+
+extern const __le16 BAD_NAME[4]; // L"$Bad"
+extern const __le16 SDS_NAME[4]; // L"$SDS"
+extern const __le16 EFS_NAME[4]; // L"$EFS"
+extern const __le16 WOF_NAME[17]; // L"WofCompressedData"
+extern const __le16 J_NAME[2]; // L"$J"
+extern const __le16 MAX_NAME[4]; // L"$Max"
+
+/* MFT record number structure */
+struct MFT_REF {
+	__le32 low; // The low part of the number
+	__le16 high; // The high part of the number
+	__le16 seq; // The sequence number of MFT record
+};
+
+static_assert(sizeof(__le64) == sizeof(struct MFT_REF));
+
+static inline CLST ino_get(const struct MFT_REF *ref)
+{
+#ifdef NTFS3_64BIT_CLUSTER
+	return le32_to_cpu(ref->low) | ((u64)le16_to_cpu(ref->high) << 32);
+#else
+	return le32_to_cpu(ref->low);
+#endif
+}
+
+struct NTFS_BOOT {
+	u8 jump_code[3]; // 0x00: Jump to boot code
+	u8 system_id[8]; // 0x03: System ID, equals "NTFS    "
+
+	// NOTE: this member is not aligned(!)
+	// bytes_per_sector[0] must be 0
+	// bytes_per_sector[1] must be multiplied by 256
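+	// e.g. bytes {0x00, 0x02} encode 0x200 = 512 bytes per sector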
+	u8 bytes_per_sector[2]; // 0x0B: Bytes per sector
+
+	u8 sectors_per_clusters; // 0x0D: Sectors per cluster
+	u8 unused1[7];
+	u8 media_type; // 0x15: Media type (0xF8 - harddisk)
+	u8 unused2[2];
+	__le16 secotrs_per_track; // 0x18: number of sectors per track
+	__le16 heads; // 0x1A: number of heads per cylinder
+	__le32 hidden_sectors; // 0x1C: number of 'hidden' sectors
+	u8 unused3[4];
+	u8 bios_drive_num; // 0x24: BIOS drive number =0x80
+	u8 unused4;
+	u8 signature_ex; // 0x26: Extended BOOT signature =0x80
+	u8 unused5;
+	__le64 sectors_per_volume; // 0x28: size of volume in sectors
+	__le64 mft_clst; // 0x30: first cluster of $MFT
+	__le64 mft2_clst; // 0x38: first cluster of $MFTMirr
+	s8 record_size; // 0x40: size of MFT record in clusters(sectors)
+	u8 unused6[3];
+	s8 index_size; // 0x44: size of INDX record in clusters(sectors)
+	u8 unused7[3];
+	__le64 serial_num; // 0x48: Volume serial number
+	__le32 check_sum; // 0x50: Simple additive checksum of all of the u32's which
+		// precede the 'check_sum'
+
+	u8 boot_code[0x200 - 0x50 - 2 - 4]; // 0x54:
+	u8 boot_magic[2]; // 0x1FE: Boot signature =0x55 + 0xAA
+};
+
+static_assert(sizeof(struct NTFS_BOOT) == 0x200);
+
+enum NTFS_SIGNATURE {
+	NTFS_FILE_SIGNATURE = cpu_to_le32(0x454C4946), // 'FILE'
+	NTFS_INDX_SIGNATURE = cpu_to_le32(0x58444E49), // 'INDX'
+	NTFS_CHKD_SIGNATURE = cpu_to_le32(0x444B4843), // 'CHKD'
+	NTFS_RSTR_SIGNATURE = cpu_to_le32(0x52545352), // 'RSTR'
+	NTFS_RCRD_SIGNATURE = cpu_to_le32(0x44524352), // 'RCRD'
+	NTFS_BAAD_SIGNATURE = cpu_to_le32(0x44414142), // 'BAAD'
+	NTFS_HOLE_SIGNATURE = cpu_to_le32(0x454C4F48), // 'HOLE'
+	NTFS_FFFF_SIGNATURE = cpu_to_le32(0xffffffff),
+};
+
+static_assert(sizeof(enum NTFS_SIGNATURE) == 4);
+
+/* MFT Record header structure */
+struct NTFS_RECORD_HEADER {
+	/* Record magic number, equals 'FILE'/'INDX'/'RSTR'/'RCRD' */
+	enum NTFS_SIGNATURE sign; // 0x00:
+	__le16 fix_off; // 0x04:
+	__le16 fix_num; // 0x06:
+	__le64 lsn; // 0x08: Log file sequence number
+};
+
+static_assert(sizeof(struct NTFS_RECORD_HEADER) == 0x10);
+
+static inline int is_baad(const struct NTFS_RECORD_HEADER *hdr)
+{
+	return hdr->sign == NTFS_BAAD_SIGNATURE;
+}
+
+/* Possible bits in struct MFT_REC.flags */
+enum RECORD_FLAG {
+	RECORD_FLAG_IN_USE = cpu_to_le16(0x0001),
+	RECORD_FLAG_DIR = cpu_to_le16(0x0002),
+	RECORD_FLAG_SYSTEM = cpu_to_le16(0x0004),
+	RECORD_FLAG_UNKNOWN = cpu_to_le16(0x0008),
+};
+
+/* MFT Record structure */
+struct MFT_REC {
+	struct NTFS_RECORD_HEADER rhdr; // 'FILE'
+
+	__le16 seq; // 0x10: Sequence number for this record
+	__le16 hard_links; // 0x12: The number of hard links to record
+	__le16 attr_off; // 0x14: Offset to attributes
+	__le16 flags; // 0x16: 1=in_use, 2=dir. See RECORD_FLAG_XXX
+	__le32 used; // 0x18: The size of used part
+	__le32 total; // 0x1C: Total record size
+
+	struct MFT_REF parent_ref; // 0x20: Parent MFT record
+	__le16 next_attr_id; // 0x28: The next attribute Id
+
+	//
+	// NTFS of version 3.1 uses this record header
+	// if fix_off >= 0x30
+
+	__le16 Res; // 0x2A: ? High part of MftRecord
+	__le32 MftRecord; // 0x2C: Current record number
+	__le16 Fixups[1]; // 0x30:
+};
+
+#define MFTRECORD_FIXUP_OFFSET_1 offsetof(struct MFT_REC, Res)
+#define MFTRECORD_FIXUP_OFFSET_3 offsetof(struct MFT_REC, Fixups)
+
+static_assert(MFTRECORD_FIXUP_OFFSET_1 == 0x2A);
+static_assert(MFTRECORD_FIXUP_OFFSET_3 == 0x30);
+
+static inline bool is_rec_base(const struct MFT_REC *rec)
+{
+	const struct MFT_REF *r = &rec->parent_ref;
+
+	return !r->low && !r->high && !r->seq;
+}
+
+static inline bool is_mft_rec5(const struct MFT_REC *rec)
+{
+	return le16_to_cpu(rec->rhdr.fix_off) >=
+	       offsetof(struct MFT_REC, Fixups);
+}
+
+static inline bool is_rec_inuse(const struct MFT_REC *rec)
+{
+	return rec->flags & RECORD_FLAG_IN_USE;
+}
+
+static inline bool clear_rec_inuse(struct MFT_REC *rec)
+{
+	return rec->flags &= ~RECORD_FLAG_IN_USE;
+}
+
+/* Possible values of ATTR_RESIDENT.flags */
+#define RESIDENT_FLAG_INDEXED 0x01
+
+struct ATTR_RESIDENT {
+	__le32 data_size; // 0x10: The size of data
+	__le16 data_off; // 0x14: Offset to data
+	u8 flags; // 0x16: resident flags ( 1 - indexed )
+	u8 res; // 0x17:
+}; // sizeof() = 0x18
+
+struct ATTR_NONRESIDENT {
+	__le64 svcn; // 0x10: Starting VCN of this segment
+	__le64 evcn; // 0x18: End VCN of this segment
+	__le16 run_off; // 0x20: Offset to packed runs
+	//  Unit of Compression size for this stream, expressed
+	//  as a log of the cluster size.
+	//
+	//      0 means file is not compressed
+	//      1, 2, 3, and 4 are potentially legal values if the
+	//          stream is compressed, however the implementation
+	//          may only choose to use 4, or possibly 3.  Note
+	//          that 4 means cluster size times 16.  If convenient
+	//          the implementation may wish to accept a
+	//          reasonable range of legal values here (1-5?),
+	//          even if the implementation only generates
+	//          a smaller set of values itself.
+	u8 c_unit; // 0x22
+	u8 res1[5]; // 0x23:
+	__le64 alloc_size; // 0x28: The allocated size of attribute in bytes
+		// (multiple of cluster size)
+	__le64 data_size; // 0x30: The size of attribute  in bytes <= alloc_size
+	__le64 valid_size; // 0x38: The size of valid part in bytes <= data_size
+	__le64 total_size; // 0x40: The sum of the allocated clusters for a file
+		// (present only for the first segment (0 == vcn)
+		// of compressed attribute)
+
+}; // sizeof()=0x40 or 0x48 (if compressed)
+
+/* Possible values of ATTRIB.flags: */
+#define ATTR_FLAG_COMPRESSED cpu_to_le16(0x0001)
+#define ATTR_FLAG_COMPRESSED_MASK cpu_to_le16(0x00FF)
+#define ATTR_FLAG_ENCRYPTED cpu_to_le16(0x4000)
+#define ATTR_FLAG_SPARSED cpu_to_le16(0x8000)
+
+struct ATTRIB {
+	enum ATTR_TYPE type; // 0x00: The type of this attribute
+	__le32 size; // 0x04: The size of this attribute
+	u8 non_res; // 0x08: Is this attribute non-resident ?
+	u8 name_len; // 0x09: This attribute name length
+	__le16 name_off; // 0x0A: Offset to the attribute name
+	__le16 flags; // 0x0C: See ATTR_FLAG_XXX
+	__le16 id; // 0x0E: unique id (per record)
+
+	union {
+		struct ATTR_RESIDENT res; // 0x10
+		struct ATTR_NONRESIDENT nres; // 0x10
+	};
+};
+
+/* Define attribute sizes */
+#define SIZEOF_RESIDENT 0x18
+#define SIZEOF_NONRESIDENT_EX 0x48
+#define SIZEOF_NONRESIDENT 0x40
+
+#define SIZEOF_RESIDENT_LE cpu_to_le16(0x18)
+#define SIZEOF_NONRESIDENT_EX_LE cpu_to_le16(0x48)
+#define SIZEOF_NONRESIDENT_LE cpu_to_le16(0x40)
+
+static inline u64 attr_ondisk_size(const struct ATTRIB *attr)
+{
+	return attr->non_res ? ((attr->flags &
+				 (ATTR_FLAG_COMPRESSED | ATTR_FLAG_SPARSED)) ?
+					le64_to_cpu(attr->nres.total_size) :
+					le64_to_cpu(attr->nres.alloc_size)) :
+			       QuadAlign(le32_to_cpu(attr->res.data_size));
+}
+
+static inline u64 attr_size(const struct ATTRIB *attr)
+{
+	return attr->non_res ? le64_to_cpu(attr->nres.data_size) :
+			       le32_to_cpu(attr->res.data_size);
+}
+
+static inline bool is_attr_encrypted(const struct ATTRIB *attr)
+{
+	return attr->flags & ATTR_FLAG_ENCRYPTED;
+}
+
+static inline bool is_attr_sparsed(const struct ATTRIB *attr)
+{
+	return attr->flags & ATTR_FLAG_SPARSED;
+}
+
+static inline bool is_attr_compressed(const struct ATTRIB *attr)
+{
+	return attr->flags & ATTR_FLAG_COMPRESSED;
+}
+
+static inline bool is_attr_ext(const struct ATTRIB *attr)
+{
+	return attr->flags & (ATTR_FLAG_SPARSED | ATTR_FLAG_COMPRESSED);
+}
+
+static inline bool is_attr_indexed(const struct ATTRIB *attr)
+{
+	return !attr->non_res && (attr->res.flags & RESIDENT_FLAG_INDEXED);
+}
+
+static inline const __le16 *attr_name(const struct ATTRIB *attr)
+{
+	return Add2Ptr(attr, le16_to_cpu(attr->name_off));
+}
+
+static inline u64 attr_svcn(const struct ATTRIB *attr)
+{
+	return attr->non_res ? le64_to_cpu(attr->nres.svcn) : 0;
+}
+
+/* the size of resident attribute by its resident size */
+#define BYTES_PER_RESIDENT(b) (0x18 + (b))
+
+static_assert(sizeof(struct ATTRIB) == 0x48);
+static_assert(sizeof(((struct ATTRIB *)NULL)->res) == 0x08);
+static_assert(sizeof(((struct ATTRIB *)NULL)->nres) == 0x38);
+
+static inline void *resident_data_ex(const struct ATTRIB *attr, u32 datasize)
+{
+	u32 asize, rsize;
+	u16 off;
+
+	if (attr->non_res)
+		return NULL;
+
+	asize = le32_to_cpu(attr->size);
+	off = le16_to_cpu(attr->res.data_off);
+
+	if (asize < datasize + off)
+		return NULL;
+
+	rsize = le32_to_cpu(attr->res.data_size);
+	if (rsize < datasize)
+		return NULL;
+
+	return Add2Ptr(attr, off);
+}
+
+static inline void *resident_data(const struct ATTRIB *attr)
+{
+	return Add2Ptr(attr, le16_to_cpu(attr->res.data_off));
+}
+
+static inline void *attr_run(const struct ATTRIB *attr)
+{
+	return Add2Ptr(attr, le16_to_cpu(attr->nres.run_off));
+}
+
+/* Standard information attribute (0x10) */
+struct ATTR_STD_INFO {
+	__le64 cr_time; // 0x00: File creation time
+	__le64 m_time; // 0x08: File modification time
+	__le64 c_time; // 0x10: Last time any attribute was modified.
+	__le64 a_time; // 0x18: File last access time
+	enum FILE_ATTRIBUTE fa; // 0x20: Standard DOS attributes & more
+	__le32 max_ver_num; // 0x24: Maximum Number of Versions
+	__le32 ver_num; // 0x28: Version Number
+	__le32 class_id; // 0x2C: Class Id from bidirectional Class Id index
+};
+
+static_assert(sizeof(struct ATTR_STD_INFO) == 0x30);
+
+#define SECURITY_ID_INVALID 0x00000000
+#define SECURITY_ID_FIRST 0x00000100
+
+struct ATTR_STD_INFO5 {
+	__le64 cr_time; // 0x00: File creation time
+	__le64 m_time; // 0x08: File modification time
+	__le64 c_time; // 0x10: Last time any attribute was modified.
+	__le64 a_time; // 0x18: File last access time
+	enum FILE_ATTRIBUTE fa; // 0x20: Standard DOS attributes & more
+	__le32 max_ver_num; // 0x24: Maximum Number of Versions
+	__le32 ver_num; // 0x28: Version Number
+	__le32 class_id; // 0x2C: Class Id from bidirectional Class Id index
+
+	__le32 owner_id; // 0x30: Owner Id of the user owning the file. This Id is a key
+		// in the $O and $Q Indexes of the file $Quota. If zero, then
+		// quotas are disabled
+	__le32 security_id; // 0x34: The Security Id is a key in the $SII Index and $SDS
+		// Data Stream in the file $Secure.
+	__le64 quota_charge; // 0x38: The number of bytes this file uses from the user's
+		// quota. This should be the total data size of all streams.
+		// If zero, then quotas are disabled.
+	__le64 usn; // 0x40: Last Update Sequence Number of the file. This is a direct
+		// index into the file $UsnJrnl. If zero, the USN Journal is
+		// disabled.
+};
+
+static_assert(sizeof(struct ATTR_STD_INFO5) == 0x48);
+
+/* attribute list entry structure (0x20) */
+struct ATTR_LIST_ENTRY {
+	enum ATTR_TYPE type; // 0x00: The type of attribute
+	__le16 size; // 0x04: The size of this record
+	u8 name_len; // 0x06: The length of attribute name
+	u8 name_off; // 0x07: The offset to attribute name
+	__le64 vcn; // 0x08: Starting VCN of this attribute
+	struct MFT_REF ref; // 0x10: MFT record number with attribute
+	__le16 id; // 0x18: struct ATTRIB ID
+	__le16 name[3]; // 0x1A: Just to align. To get the real name, use name_off
+
+}; // sizeof(0x20)
+
+static_assert(sizeof(struct ATTR_LIST_ENTRY) == 0x20);
+
+static inline u32 le_size(u8 name_len)
+{
+	return QuadAlign(offsetof(struct ATTR_LIST_ENTRY, name) +
+			 name_len * sizeof(short));
+}
+
+/* returns 0 if 'attr' has the same type and name */
+static inline int le_cmp(const struct ATTR_LIST_ENTRY *le,
+			 const struct ATTRIB *attr)
+{
+	return le->type != attr->type || le->name_len != attr->name_len ||
+	       (le->name_len &&
+		memcmp(Add2Ptr(le, le->name_off),
+		       Add2Ptr(attr, le16_to_cpu(attr->name_off)),
+		       le->name_len * sizeof(short)));
+}
+
+static inline const __le16 *le_name(const struct ATTR_LIST_ENTRY *le)
+{
+	return Add2Ptr(le, le->name_off);
+}
+
+/* File name types (the field type in struct ATTR_FILE_NAME ) */
+#define FILE_NAME_POSIX 0
+#define FILE_NAME_UNICODE 1
+#define FILE_NAME_DOS 2
+#define FILE_NAME_UNICODE_AND_DOS (FILE_NAME_DOS | FILE_NAME_UNICODE)
+
+/* Filename attribute structure (0x30) */
+struct NTFS_DUP_INFO {
+	__le64 cr_time; // 0x00: File creation time
+	__le64 m_time; // 0x08: File modification time
+	__le64 c_time; // 0x10: Last time any attribute was modified
+	__le64 a_time; // 0x18: File last access time
+	__le64 alloc_size; // 0x20: Data attribute allocated size, multiple of cluster size
+	__le64 data_size; // 0x28: Data attribute size <= alloc_size
+	enum FILE_ATTRIBUTE fa; // 0x30: Standard DOS attributes & more
+	__le16 ea_size; // 0x34: Packed EAs
+	__le16 reparse; // 0x36: Used by Reparse
+
+}; // 0x38
+
+struct ATTR_FILE_NAME {
+	struct MFT_REF home; // 0x00: MFT record for directory
+	struct NTFS_DUP_INFO dup; // 0x08
+	u8 name_len; // 0x40: File name length in words
+	u8 type; // 0x41: File name type
+	__le16 name[1]; // 0x42: File name
+};
+
+static_assert(sizeof(((struct ATTR_FILE_NAME *)NULL)->dup) == 0x38);
+static_assert(offsetof(struct ATTR_FILE_NAME, name) == 0x42);
+#define SIZEOF_ATTRIBUTE_FILENAME 0x44
+#define SIZEOF_ATTRIBUTE_FILENAME_MAX (0x42 + 255 * 2)
+
+static inline struct ATTRIB *attr_from_name(struct ATTR_FILE_NAME *fname)
+{
+	return (struct ATTRIB *)((char *)fname - SIZEOF_RESIDENT);
+}
+
+static inline u16 fname_full_size(const struct ATTR_FILE_NAME *fname)
+{
+	return offsetof(struct ATTR_FILE_NAME, name) +
+	       fname->name_len * sizeof(short);
+}
+
+static inline u8 paired_name(u8 type)
+{
+	if (type == FILE_NAME_UNICODE)
+		return FILE_NAME_DOS;
+	if (type == FILE_NAME_DOS)
+		return FILE_NAME_UNICODE;
+	return FILE_NAME_POSIX;
+}
+
+/* Index entry defines ( the field flags in NtfsDirEntry ) */
+#define NTFS_IE_HAS_SUBNODES cpu_to_le16(1)
+#define NTFS_IE_LAST cpu_to_le16(2)
+
+/* Directory entry structure */
+struct NTFS_DE {
+	union {
+		struct MFT_REF ref; // 0x00: MFT record number with this file
+		struct {
+			__le16 data_off; // 0x00:
+			__le16 data_size; // 0x02:
+			__le32 Res; // 0x04: must be 0
+		} View;
+	};
+	__le16 size; // 0x08: The size of this entry
+	__le16 key_size; // 0x0A: The key size: file name length in bytes + 0x42
+	__le16 flags; // 0x0C: Entry flags, 1=subnodes, 2=last
+	__le16 res; // 0x0E:
+
+	// Here any indexed attribute can be placed
+	// One of them is:
+	// struct ATTR_FILE_NAME AttrFileName;
+	//
+
+	// The last 8 bytes of this structure contains
+	// the VBN of subnode
+	// !!! Note !!!
+	// This field is presented only if (flags & NTFS_IE_HAS_SUBNODES)
+	// __le64 vbn;
+};
+
+static_assert(sizeof(struct NTFS_DE) == 0x10);
+
+static inline void de_set_vbn_le(struct NTFS_DE *e, __le64 vcn)
+{
+	__le64 *v = Add2Ptr(e, le16_to_cpu(e->size) - sizeof(__le64));
+
+	*v = vcn;
+}
+
+static inline void de_set_vbn(struct NTFS_DE *e, CLST vcn)
+{
+	__le64 *v = Add2Ptr(e, le16_to_cpu(e->size) - sizeof(__le64));
+
+	*v = cpu_to_le64(vcn);
+}
+
+static inline __le64 de_get_vbn_le(const struct NTFS_DE *e)
+{
+	return *(__le64 *)Add2Ptr(e, le16_to_cpu(e->size) - sizeof(__le64));
+}
+
+static inline CLST de_get_vbn(const struct NTFS_DE *e)
+{
+	__le64 *v = Add2Ptr(e, le16_to_cpu(e->size) - sizeof(__le64));
+
+	return le64_to_cpu(*v);
+}
+
+static inline struct NTFS_DE *de_get_next(const struct NTFS_DE *e)
+{
+	return Add2Ptr(e, le16_to_cpu(e->size));
+}
+
+static inline struct ATTR_FILE_NAME *de_get_fname(const struct NTFS_DE *e)
+{
+	return le16_to_cpu(e->key_size) >= SIZEOF_ATTRIBUTE_FILENAME ?
+		       Add2Ptr(e, sizeof(struct NTFS_DE)) :
+		       NULL;
+}
+
+static inline bool de_is_last(const struct NTFS_DE *e)
+{
+	return e->flags & NTFS_IE_LAST;
+}
+
+static inline bool de_has_vcn(const struct NTFS_DE *e)
+{
+	return e->flags & NTFS_IE_HAS_SUBNODES;
+}
+
+static inline bool de_has_vcn_ex(const struct NTFS_DE *e)
+{
+	return (e->flags & NTFS_IE_HAS_SUBNODES) &&
+	       (u64)(-1) != *((u64 *)Add2Ptr(e, le16_to_cpu(e->size) -
+							sizeof(__le64)));
+}
+
+#define MAX_BYTES_PER_NAME_ENTRY                                               \
+	QuadAlign(sizeof(struct NTFS_DE) +                                     \
+		  offsetof(struct ATTR_FILE_NAME, name) +                      \
+		  NTFS_NAME_LEN * sizeof(short))
+
+struct INDEX_HDR {
+	// The offset from the start of this structure to the first NtfsDirEntry
+	__le32 de_off; // 0x00:
+	// The size of this structure plus all entries (quad-word aligned)
+	__le32 used; // 0x04
+	// The allocated size for this structure plus all entries
+	__le32 total; // 0x08:
+	// 0x00 = Small directory, 0x01 = Large directory
+	u8 flags; // 0x0C
+	u8 res[3];
+
+	//
+	// de_off + used <= total
+	//
+};
+
+static_assert(sizeof(struct INDEX_HDR) == 0x10);
+
+static inline struct NTFS_DE *hdr_first_de(const struct INDEX_HDR *hdr)
+{
+	u32 de_off = le32_to_cpu(hdr->de_off);
+	u32 used = le32_to_cpu(hdr->used);
+	struct NTFS_DE *e = Add2Ptr(hdr, de_off);
+	u16 esize;
+
+	if (de_off >= used || de_off >= le32_to_cpu(hdr->total))
+		return NULL;
+
+	esize = le16_to_cpu(e->size);
+	if (esize < sizeof(struct NTFS_DE) || de_off + esize > used)
+		return NULL;
+
+	return e;
+}
+
+static inline struct NTFS_DE *hdr_next_de(const struct INDEX_HDR *hdr,
+					  const struct NTFS_DE *e)
+{
+	size_t off = PtrOffset(hdr, e);
+	u32 used = le32_to_cpu(hdr->used);
+	u16 esize;
+
+	if (off >= used)
+		return NULL;
+
+	esize = le16_to_cpu(e->size);
+
+	if (esize < sizeof(struct NTFS_DE) ||
+	    off + esize + sizeof(struct NTFS_DE) > used)
+		return NULL;
+
+	return Add2Ptr(e, esize);
+}
+
+static inline bool hdr_has_subnode(const struct INDEX_HDR *hdr)
+{
+	return hdr->flags & 1;
+}
+
+struct INDEX_BUFFER {
+	struct NTFS_RECORD_HEADER rhdr; // 'INDX'
+	__le64 vbn; // 0x10: vcn if index block >= cluster, vsn if index block < cluster
+	struct INDEX_HDR ihdr; // 0x18:
+};
+
+static_assert(sizeof(struct INDEX_BUFFER) == 0x28);
+
+static inline bool ib_is_empty(const struct INDEX_BUFFER *ib)
+{
+	const struct NTFS_DE *first = hdr_first_de(&ib->ihdr);
+
+	return !first || de_is_last(first);
+}
+
+static inline bool ib_is_leaf(const struct INDEX_BUFFER *ib)
+{
+	return !(ib->ihdr.flags & 1);
+}
+
+/* Index root structure ( 0x90 ) */
+enum COLLATION_RULE {
+	NTFS_COLLATION_TYPE_BINARY = cpu_to_le32(0),
+	NTFS_COLLATION_TYPE_FILENAME = cpu_to_le32(0x01),
+	// $SII of $Secure / $Q of Quota
+	NTFS_COLLATION_TYPE_UINT = cpu_to_le32(0x10),
+	// $O of Quota
+	NTFS_COLLATION_TYPE_SID = cpu_to_le32(0x11),
+	// $SDH of $Secure
+	NTFS_COLLATION_TYPE_SECURITY_HASH = cpu_to_le32(0x12),
+	// $O of ObjId and "$R" for Reparse
+	NTFS_COLLATION_TYPE_UINTS = cpu_to_le32(0x13)
+};
+
+static_assert(sizeof(enum COLLATION_RULE) == 4);
+
+//
+struct INDEX_ROOT {
+	enum ATTR_TYPE type; // 0x00: The type of attribute to index on
+	enum COLLATION_RULE rule; // 0x04: The rule
+	__le32 index_block_size; // 0x08: The size of index record
+	u8 index_block_clst; // 0x0C: The number of clusters per index
+	u8 res[3];
+	struct INDEX_HDR ihdr; // 0x10:
+};
+
+static_assert(sizeof(struct INDEX_ROOT) == 0x20);
+static_assert(offsetof(struct INDEX_ROOT, ihdr) == 0x10);
+
+#define VOLUME_FLAG_DIRTY cpu_to_le16(0x0001)
+#define VOLUME_FLAG_RESIZE_LOG_FILE cpu_to_le16(0x0002)
+
+struct VOLUME_INFO {
+	__le64 res1; // 0x00
+	u8 major_ver; // 0x08: NTFS major version number (before .)
+	u8 minor_ver; // 0x09: NTFS minor version number (after .)
+	__le16 flags; // 0x0A: Volume flags, see VOLUME_FLAG_XXX
+
+}; // sizeof=0xC
+
+#define SIZEOF_ATTRIBUTE_VOLUME_INFO 0xc
+
+#define NTFS_LABEL_MAX_LENGTH (0x100 / sizeof(short))
+#define NTFS_ATTR_INDEXABLE cpu_to_le32(0x00000002)
+#define NTFS_ATTR_DUPALLOWED cpu_to_le32(0x00000004)
+#define NTFS_ATTR_MUST_BE_INDEXED cpu_to_le32(0x00000010)
+#define NTFS_ATTR_MUST_BE_NAMED cpu_to_le32(0x00000020)
+#define NTFS_ATTR_MUST_BE_RESIDENT cpu_to_le32(0x00000040)
+#define NTFS_ATTR_LOG_ALWAYS cpu_to_le32(0x00000080)
+
+/* $AttrDef file entry */
+struct ATTR_DEF_ENTRY {
+	__le16 name[0x40]; // 0x00: Attr name
+	enum ATTR_TYPE type; // 0x80: struct ATTRIB type
+	__le32 res; // 0x84:
+	enum COLLATION_RULE rule; // 0x88:
+	__le32 flags; // 0x8C: NTFS_ATTR_XXX (see above)
+	__le64 min_sz; // 0x90: Minimum attribute data size
+	__le64 max_sz; // 0x98: Maximum attribute data size
+};
+
+static_assert(sizeof(struct ATTR_DEF_ENTRY) == 0xa0);
+
+/* Object ID (0x40) */
+struct OBJECT_ID {
+	struct GUID ObjId; // 0x00: Unique Id assigned to file
+	struct GUID
+		BirthVolumeId; // 0x10: Birth Volume Id is the Object Id of the Volume on
+	// which the Object Id was allocated. It never changes
+	struct GUID
+		BirthObjectId; // 0x20: Birth Object Id is the first Object Id that was
+	// ever assigned to this MFT Record. I.e. If the Object Id
+	// is changed for some reason, this field will reflect the
+	// original value of the Object Id.
+	struct GUID
+		DomainId; // 0x30: Domain Id is currently unused but it is intended to be
+	// used in a network environment where the local machine is
+	// part of a Windows 2000 Domain. This may be used in a Windows
+	// 2000 Advanced Server managed domain.
+};
+
+static_assert(sizeof(struct OBJECT_ID) == 0x40);
+
+/* O Directory entry structure ( rule = 0x13 ) */
+struct NTFS_DE_O {
+	struct NTFS_DE de;
+	// See struct OBJECT_ID (0x40) for details
+	struct GUID ObjId; // 0x10: Unique Id assigned to file
+	struct MFT_REF ref; // 0x20: MFT record number with this file
+	struct GUID
+		BirthVolumeId; // 0x28: Birth Volume Id is the Object Id of the Volume on
+	// which the Object Id was allocated. It never changes
+	struct GUID
+		BirthObjectId; // 0x38: Birth Object Id is the first Object Id that was
+	// ever assigned to this MFT Record. I.e. If the Object Id
+	// is changed for some reason, this field will reflect the
+	// original value of the Object Id.
+	// This field is valid if data_size == 0x48
+	struct GUID
+		BirthDomainId; // 0x48: Domain Id is currently unused but it is intended
+	// to be used in a network environment where the local
+	// machine is part of a Windows 2000 Domain. This may be
+	// used in a Windows 2000 Advanced Server managed domain.
+
+	// The last 8 bytes of this structure contains
+	// the VCN of subnode
+	// !!! Note !!!
+	// This field is presented only if (flags & 0x1)
+	// __le64             SubnodesVCN;
+};
+
+static_assert(sizeof(struct NTFS_DE_O) == 0x58);
+
+#define NTFS_OBJECT_ENTRY_DATA_SIZE1                                           \
+	0x38 // struct NTFS_DE_O.BirthDomainId is not used
+#define NTFS_OBJECT_ENTRY_DATA_SIZE2                                           \
+	0x48 // struct NTFS_DE_O.BirthDomainId is used
+
+/* Q Directory entry structure ( rule = 0x11 ) */
+struct NTFS_DE_Q {
+	struct NTFS_DE de;
+	__le32 owner_id; // 0x10: Unique Id assigned to file
+	__le32 Version; // 0x14: 0x02
+	__le32 flags2; // 0x18: Quota flags, see above
+	__le64 BytesUsed; // 0x1C:
+	__le64 ChangeTime; // 0x24:
+	__le64 WarningLimit; // 0x28:
+	__le64 HardLimit; // 0x34:
+	__le64 ExceededTime; // 0x3C:
+
+	// SID is placed here
+
+	// The last 8 bytes of this structure contains
+	// the VCN of subnode
+	// !!! Note !!!
+	// This field is presented only if (flags & 0x1)
+	// __le64             SubnodesVCN;
+
+}; // __attribute__ ((packed)); // sizeof() = 0x44
+
+#define SIZEOF_NTFS_DE_Q 0x44
+
+#define SecurityDescriptorsBlockSize 0x40000 // 256K
+#define SecurityDescriptorMaxSize 0x20000 // 128K
+#define Log2OfSecurityDescriptorsBlockSize 18
+
+struct SECURITY_KEY {
+	__le32 hash; //  Hash value for descriptor
+	__le32 sec_id; //  Security Id (guaranteed unique)
+};
+
+/* Security descriptors (the content of $Secure::SDS data stream) */
+struct SECURITY_HDR {
+	struct SECURITY_KEY key; // 0x00: Security Key
+	__le64 off; // 0x08: Offset of this entry in the file
+	__le32 size; // 0x10: Size of this entry, 8 byte aligned
+	//
+	// Security descriptor itself is placed here
+	// Total size is 16 byte aligned
+	//
+
+} __packed;
+
+#define SIZEOF_SECURITY_HDR 0x14
+
+/* SII Directory entry structure */
+struct NTFS_DE_SII {
+	struct NTFS_DE de;
+	__le32 sec_id; // 0x10: Key: sizeof(security_id) = wKeySize
+	struct SECURITY_HDR sec_hdr; // 0x14:
+
+} __packed;
+
+#define SIZEOF_SII_DIRENTRY 0x28
+
+/* SDH Directory entry structure */
+struct NTFS_DE_SDH {
+	struct NTFS_DE de;
+	struct SECURITY_KEY key; // 0x10: Key
+	struct SECURITY_HDR sec_hdr; // 0x18: Data
+	__le16 magic[2]; // 0x2C: 0x00490049 "I I"
+};
+
+#define SIZEOF_SDH_DIRENTRY 0x30
+
+struct REPARSE_KEY {
+	__le32 ReparseTag; // 0x00: Reparse Tag
+	struct MFT_REF ref; // 0x04: MFT record number with this file
+
+}; // sizeof() = 0x0C
+
+static_assert(offsetof(struct REPARSE_KEY, ref) == 0x04);
+#define SIZEOF_REPARSE_KEY 0x0C
+
+/* Reparse Directory entry structure */
+struct NTFS_DE_R {
+	struct NTFS_DE de;
+	struct REPARSE_KEY Key; // 0x10: Reparse Key (Tag + struct MFT_REF)
+
+	// The last 8 bytes of this structure contains
+	// the VCN of subnode
+	// !!! Note !!!
+	// This field is presented only if (flags & 0x1)
+	// __le64             SubnodesVCN;
+
+}; // sizeof() = 0x1C
+
+#define SIZEOF_R_DIRENTRY 0x1C
+
+/* CompressReparseBuffer.WofVersion */
+#define WOF_CURRENT_VERSION cpu_to_le32(1)
+/* CompressReparseBuffer.WofProvider */
+#define WOF_PROVIDER_WIM cpu_to_le32(1)
+/* CompressReparseBuffer.WofProvider */
+#define WOF_PROVIDER_SYSTEM cpu_to_le32(2)
+/* CompressReparseBuffer.ProviderVer */
+#define WOF_PROVIDER_CURRENT_VERSION cpu_to_le32(1)
+
+#define WOF_COMPRESSION_XPRESS4K 0 // 4k
+#define WOF_COMPRESSION_LZX 1 // 32k
+#define WOF_COMPRESSION_XPRESS8K 2 // 8k
+#define WOF_COMPRESSION_XPRESS16K 3 // 16k
+
+/*
+ * ATTR_REPARSE (0xC0)
+ *
+ * The reparse struct GUID structure is used by all 3rd party layered drivers to
+ * store data in a reparse point. For non-Microsoft tags, the struct GUID field
+ * cannot be GUID_NULL.
+ * The constraints on reparse tags are defined below.
+ * Microsoft tags can also be used with this format of the reparse point buffer.
+ */
+struct REPARSE_POINT {
+	__le32 ReparseTag; // 0x00:
+	__le16 ReparseDataLength; // 0x04:
+	__le16 Reserved;
+
+	struct GUID Guid; // 0x08:
+
+	//
+	// Here GenericReparseBuffer is placed
+	//
+};
+
+static_assert(sizeof(struct REPARSE_POINT) == 0x18);
+
+//
+// Maximum allowed size of the reparse data.
+//
+#define MAXIMUM_REPARSE_DATA_BUFFER_SIZE (16 * 1024)
+
+//
+// The value of the following constant needs to satisfy the following
+// conditions:
+//  (1) Be at least as large as the largest of the reserved tags.
+//  (2) Be strictly smaller than all the tags in use.
+//
+#define IO_REPARSE_TAG_RESERVED_RANGE 1
+
+//
+// The reparse tags are a ULONG. The 32 bits are laid out as follows:
+//
+//   3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1
+//   1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+//  +-+-+-+-+-----------------------+-------------------------------+
+//  |M|R|N|R|     Reserved bits     |       Reparse Tag Value       |
+//  +-+-+-+-+-----------------------+-------------------------------+
+//
+// M is the Microsoft bit. When set to 1, it denotes a tag owned by Microsoft.
+//   All ISVs must use a tag with a 0 in this position.
+//   Note: If a Microsoft tag is used by non-Microsoft software, the
+//   behavior is not defined.
+//
+// R is reserved.  Must be zero for non-Microsoft tags.
+//
+// N is name surrogate. When set to 1, the file represents another named
+//   entity in the system.
+//
+// The M and N bits are OR-able.
+// The following macros check for the M and N bit values:
+//
+
+//
+// Macro to determine whether a reparse point tag corresponds to a tag
+// owned by Microsoft.
+//
+#define IsReparseTagMicrosoft(_tag) (((_tag)&IO_REPARSE_TAG_MICROSOFT))
+
+//
+// Macro to determine whether a reparse point tag is a name surrogate
+//
+#define IsReparseTagNameSurrogate(_tag) (((_tag)&IO_REPARSE_TAG_NAME_SURROGATE))
+
+//
+// The following constant represents the bits that are valid to use in
+// reparse tags.
+//
+#define IO_REPARSE_TAG_VALID_VALUES 0xF000FFFF
+
+//
+// Macro to determine whether a reparse tag is a valid tag.
+//
+#define IsReparseTagValid(_tag)                                                \
+	(!((_tag) & ~IO_REPARSE_TAG_VALID_VALUES) &&                           \
+	 ((_tag) > IO_REPARSE_TAG_RESERVED_RANGE))
+
+//
+// Microsoft tags for reparse points.
+//
+
+enum IO_REPARSE_TAG {
+	IO_REPARSE_TAG_SYMBOLIC_LINK = cpu_to_le32(0),
+	IO_REPARSE_TAG_NAME_SURROGATE = cpu_to_le32(0x20000000),
+	IO_REPARSE_TAG_MICROSOFT = cpu_to_le32(0x80000000),
+	IO_REPARSE_TAG_MOUNT_POINT = cpu_to_le32(0xA0000003),
+	IO_REPARSE_TAG_SYMLINK = cpu_to_le32(0xA000000C),
+	IO_REPARSE_TAG_HSM = cpu_to_le32(0xC0000004),
+	IO_REPARSE_TAG_SIS = cpu_to_le32(0x80000007),
+	IO_REPARSE_TAG_DEDUP = cpu_to_le32(0x80000013),
+	IO_REPARSE_TAG_COMPRESS = cpu_to_le32(0x80000017),
+
+	//
+	// The reparse tag 0x80000008 is reserved for Microsoft internal use
+	// (may be published in the future)
+	//
+
+	//
+	// Microsoft reparse tag reserved for DFS
+	//
+	IO_REPARSE_TAG_DFS = cpu_to_le32(0x8000000A),
+
+	//
+	// Microsoft reparse tag reserved for the file system filter manager
+	//
+	IO_REPARSE_TAG_FILTER_MANAGER = cpu_to_le32(0x8000000B),
+
+	//
+	// Non-Microsoft tags for reparse points
+	//
+
+	//
+	// Tag allocated to CONGRUENT, May 2000. Used by IFSTEST
+	//
+	IO_REPARSE_TAG_IFSTEST_CONGRUENT = cpu_to_le32(0x00000009),
+
+	//
+	// Tag allocated to ARKIVIO
+	//
+	IO_REPARSE_TAG_ARKIVIO = cpu_to_le32(0x0000000C),
+
+	//
+	//  Tag allocated to SOLUTIONSOFT
+	//
+	IO_REPARSE_TAG_SOLUTIONSOFT = cpu_to_le32(0x2000000D),
+
+	//
+	//  Tag allocated to COMMVAULT
+	//
+	IO_REPARSE_TAG_COMMVAULT = cpu_to_le32(0x0000000E),
+
+	// OneDrive??
+	IO_REPARSE_TAG_CLOUD = cpu_to_le32(0x9000001A),
+	IO_REPARSE_TAG_CLOUD_1 = cpu_to_le32(0x9000101A),
+	IO_REPARSE_TAG_CLOUD_2 = cpu_to_le32(0x9000201A),
+	IO_REPARSE_TAG_CLOUD_3 = cpu_to_le32(0x9000301A),
+	IO_REPARSE_TAG_CLOUD_4 = cpu_to_le32(0x9000401A),
+	IO_REPARSE_TAG_CLOUD_5 = cpu_to_le32(0x9000501A),
+	IO_REPARSE_TAG_CLOUD_6 = cpu_to_le32(0x9000601A),
+	IO_REPARSE_TAG_CLOUD_7 = cpu_to_le32(0x9000701A),
+	IO_REPARSE_TAG_CLOUD_8 = cpu_to_le32(0x9000801A),
+	IO_REPARSE_TAG_CLOUD_9 = cpu_to_le32(0x9000901A),
+	IO_REPARSE_TAG_CLOUD_A = cpu_to_le32(0x9000A01A),
+	IO_REPARSE_TAG_CLOUD_B = cpu_to_le32(0x9000B01A),
+	IO_REPARSE_TAG_CLOUD_C = cpu_to_le32(0x9000C01A),
+	IO_REPARSE_TAG_CLOUD_D = cpu_to_le32(0x9000D01A),
+	IO_REPARSE_TAG_CLOUD_E = cpu_to_le32(0x9000E01A),
+	IO_REPARSE_TAG_CLOUD_F = cpu_to_le32(0x9000F01A),
+
+};
+
+/* Microsoft reparse buffer. (see DDK for details) */
+struct REPARSE_DATA_BUFFER {
+	__le32 ReparseTag; // 0x00:
+	__le16 ReparseDataLength; // 0x04:
+	__le16 Reserved;
+
+	union {
+		// If ReparseTag == 0
+		struct {
+			__le16 SubstituteNameOffset; // 0x08
+			__le16 SubstituteNameLength; // 0x0A
+			__le16 PrintNameOffset; // 0x0C
+			__le16 PrintNameLength; // 0x0E
+			__le16 PathBuffer[1]; // 0x10
+		} SymbolicLinkReparseBuffer;
+
+		// If ReparseTag == 0xA0000003U
+		struct {
+			__le16 SubstituteNameOffset; // 0x08
+			__le16 SubstituteNameLength; // 0x0A
+			__le16 PrintNameOffset; // 0x0C
+			__le16 PrintNameLength; // 0x0E
+			__le16 PathBuffer[1]; // 0x10
+		} MountPointReparseBuffer;
+
+		// If ReparseTag == IO_REPARSE_TAG_SYMLINK2       (0xA000000CU)
+		// https://msdn.microsoft.com/en-us/library/cc232006.aspx
+		struct {
+			__le16 SubstituteNameOffset; // 0x08
+			__le16 SubstituteNameLength; // 0x0A
+			__le16 PrintNameOffset; // 0x0C
+			__le16 PrintNameLength; // 0x0E
+			// 0 - absolute path, 1 - relative path
+			__le32 Flags; // 0x10
+			__le16 PathBuffer[1]; // 0x14
+		} SymbolicLink2ReparseBuffer;
+
+		// If ReparseTag == 0x80000017U
+		struct {
+			__le32 WofVersion; // 0x08 == 1
+			/* 1 - WIM backing provider ("WIMBoot"),
+			 * 2 - System compressed file provider
+			 */
+			__le32 WofProvider; // 0x0C
+			__le32 ProviderVer; // 0x10: == 1 WOF_FILE_PROVIDER_CURRENT_VERSION == 1
+			__le32 CompressionFormat; // 0x14: 0, 1, 2, 3. See WOF_COMPRESSION_XXX
+		} CompressReparseBuffer;
+
+		struct {
+			u8 DataBuffer[1]; // 0x08
+		} GenericReparseBuffer;
+	};
+};
+
+static inline u32 ntfs_reparse_bytes(u32 uni_len)
+{
+	/* two unicode strings + header */
+	return sizeof(short) * (2 * uni_len + 4) +
+	       offsetof(struct REPARSE_DATA_BUFFER,
+			SymbolicLink2ReparseBuffer.PathBuffer);
+}
+
+/* ATTR_EA_INFO (0xD0) */
+
+#define FILE_NEED_EA 0x80 // See ntifs.h
+/* FILE_NEED_EA, indicates that the file to which the EA belongs cannot be
+ * interpreted without understanding the associated extended attributes.
+ */
+struct EA_INFO {
+	__le16 size_pack; // 0x00: Size of buffer to hold in packed form
+	__le16 count; // 0x02: Count of EA's with FILE_NEED_EA bit set
+	__le32 size; // 0x04: Size of buffer to hold in unpacked form
+};
+
+static_assert(sizeof(struct EA_INFO) == 8);
+
+/* ATTR_EA (0xE0) */
+struct EA_FULL {
+	__le32 size; // 0x00: (not in packed)
+	u8 flags; // 0x04
+	u8 name_len; // 0x05
+	__le16 elength; // 0x06
+	u8 name[1]; // 0x08
+};
+
+static_assert(offsetof(struct EA_FULL, name) == 8);
+
+#define MAX_EA_DATA_SIZE (256 * 1024)
diff --git a/fs/ntfs3/ntfs_fs.h b/fs/ntfs3/ntfs_fs.h
new file mode 100644
index 000000000000..80ec4ff7ab33
--- /dev/null
+++ b/fs/ntfs3/ntfs_fs.h
@@ -0,0 +1,1001 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ *  linux/fs/ntfs3/ntfs_fs.h
+ *
+ * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved.
+ *
+ */
+
+/* "true" when [s,s+c) intersects with [l,l+w) */
+#define IS_IN_RANGE(s, c, l, w)                                                \
+	(((c) > 0 && (w) > 0) &&                                               \
+	 (((l) <= (s) && (s) < ((l) + (w))) ||                                 \
+	  ((s) <= (l) && ((s) + (c)) >= ((l) + (w))) ||                        \
+	  ((l) < ((s) + (c)) && ((s) + (c)) < ((l) + (w)))))
+
+/* "true" when [s,se) intersects with [l,le) */
+#define IS_IN_RANGE2(s, se, l, le)                                             \
+	(((se) > (s) && (le) > (l)) &&                                         \
+	 (((l) <= (s) && (s) < (le)) || ((s) <= (l) && (se) >= (le)) ||        \
+	  ((l) < (se) && (se) < (le))))
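+/* e.g. IS_IN_RANGE2(10, 20, 15, 30) is true, IS_IN_RANGE2(10, 20, 20, 30) is false */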
+
+#define MINUS_ONE_T ((size_t)(-1))
+/* Biggest MFT / smallest cluster */
+#define MAXIMUM_BYTES_PER_MFT 4096 // ??
+#define NTFS_BLOCKS_PER_MFT_RECORD (MAXIMUM_BYTES_PER_MFT / 512)
+
+#define MAXIMUM_BYTES_PER_INDEX 4096 // ??
+#define NTFS_BLOCKS_PER_INODE (MAXIMUM_BYTES_PER_INDEX / 512)
+
+struct ntfs_inode;
+struct ntfs_sb_info;
+struct lznt;
+
+struct mount_options {
+	kuid_t fs_uid;
+	kgid_t fs_gid;
+	u16 fs_fmask_inv;
+	u16 fs_dmask_inv;
+
+	unsigned uid : 1, /* uid was set */
+		gid : 1, /* gid was set */
+		fmask : 1, /* fmask was set */
+		dmask : 1, /*dmask was set*/
+		sys_immutable : 1, /* set = system files are immutable */
+		discard : 1, /* issue discard requests on deletions */
+		sparse : 1, /*create sparse files*/
+		showmeta : 1, /*show meta files*/
+		nohidden : 1, /*do not show hidden files*/
+		force : 1, /*rw mount dirty volume*/
+		no_acs_rules : 1, /*exclude acs rules*/
+		prealloc : 1 /*preallocate space when file is growing*/
+		;
+};
+
+struct ntfs_run;
+
+/* TODO: use rb tree instead of array */
+struct runs_tree {
+	struct ntfs_run *runs_;
+	size_t count; // Currently used size of the ntfs_run storage.
+	size_t allocated; // Currently allocated ntfs_run storage size.
+};
+
+struct ntfs_buffers {
+	/* Biggest MFT / smallest cluster = 4096 / 512 = 8 */
+	/* Biggest index / smallest cluster = 4096 / 512 = 8 */
+	struct buffer_head *bh[PAGE_SIZE >> SECTOR_SHIFT];
+	u32 bytes;
+	u32 nbufs;
+	u32 off;
+};
+
+#define NTFS_FLAGS_NODISCARD 0x00000001
+#define NTFS_FLAGS_NEED_REPLAY 0x04000000
+
+enum ALLOCATE_OPT {
+	ALLOCATE_DEF = 0, // Allocate all clusters
+	ALLOCATE_MFT = 1, // Allocate for MFT
+};
+
+enum bitmap_mutex_classes {
+	BITMAP_MUTEX_CLUSTERS = 0,
+	BITMAP_MUTEX_MFT = 1,
+};
+
+struct wnd_bitmap {
+	struct super_block *sb;
+	struct rw_semaphore rw_lock;
+
+	struct runs_tree run;
+	size_t nbits;
+
+	u16 free_holder[8]; // holder for free_bits
+
+	size_t total_zeroes; // total number of free bits
+	u16 *free_bits; // free bits in each window
+	size_t nwnd;
+	u32 bits_last; // bits in last window
+
+	struct rb_root start_tree; // extents, sorted by 'start'
+	struct rb_root count_tree; // extents, sorted by 'count + start'
+	size_t count; // extents count
+	int uptodated; // -1 Tree is activated but not updated (too many fragments)
+		// 0 - Tree is not activated
+		// 1 - Tree is activated and updated
+	size_t extent_min; // Minimal extent used while building
+	size_t extent_max; // Upper estimate of biggest free block
+
+	bool set_tail; // not necessary in driver
+	bool inited;
+
+	/* Zone [bit, end) */
+	size_t zone_bit;
+	size_t zone_end;
+};
+
+typedef int (*NTFS_CMP_FUNC)(const void *key1, size_t len1, const void *key2,
+			     size_t len2, const void *param);
+
+enum index_mutex_classed {
+	INDEX_MUTEX_I30 = 0,
+	INDEX_MUTEX_SII = 1,
+	INDEX_MUTEX_SDH = 2,
+	INDEX_MUTEX_SO = 3,
+	INDEX_MUTEX_SQ = 4,
+	INDEX_MUTEX_SR = 5,
+	INDEX_MUTEX_TOTAL
+};
+
+/* This struct works with indexes */
+struct ntfs_index {
+	struct runs_tree bitmap_run;
+	struct runs_tree alloc_run;
+
+	/*TODO: remove 'cmp'*/
+	NTFS_CMP_FUNC cmp;
+
+	u8 index_bits; // log2(root->index_block_size)
+	u8 idx2vbn_bits; // log2(root->index_block_clst)
+	u8 vbn2vbo_bits; // index_block_size < cluster? 9 : cluster_bits
+	u8 changed; // set when tree is changed
+	u8 type; // index_mutex_classed
+};
+
+/* Set when $LogFile is replaying */
+#define NTFS_FLAGS_LOG_REPLAYING 0x00000008
+
+/* Set when the first MFT records changed and their copy must be updated in $MftMirr */
+#define NTFS_FLAGS_MFTMIRR 0x00001000
+
+/* Minimum mft zone */
+#define NTFS_MIN_MFT_ZONE 100
+
+struct COMPRESS_CTX {
+	u64 chunk_num; // Number of chunk cmpr_buffer/unc_buffer
+	u64 first_chunk, last_chunk, total_chunks;
+	u64 chunk0_off;
+	void *ctx;
+	u8 *cmpr_buffer;
+	u8 *unc_buffer;
+	void *chunk_off_mem;
+	size_t chunk_off;
+	u32 *chunk_off32; // pointer inside chunk_off_mem
+	u64 *chunk_off64; // pointer inside chunk_off_mem
+	u32 compress_format;
+	u32 offset_bits;
+	u32 chunk_bits;
+	u32 chunk_size;
+};
+
+/* ntfs file system in-core superblock data */
+struct ntfs_sb_info {
+	struct super_block *sb;
+
+	u32 discard_granularity;
+	u64 discard_granularity_mask_inv; // ~(discard_granularity - 1)
+
+	u32 cluster_size; // bytes per cluster
+	u32 cluster_mask; // == cluster_size - 1
+	u64 cluster_mask_inv; // ~(cluster_size - 1)
+	u32 block_mask; // sb->s_blocksize - 1
+	u32 blocks_per_cluster; // cluster_size / sb->s_blocksize
+
+	u32 record_size;
+	u32 sector_size;
+	u32 index_size;
+
+	u8 sector_bits;
+	u8 cluster_bits;
+	u8 record_bits;
+
+	u64 maxbytes; // Maximum size for normal files
+	u64 maxbytes_sparse; // Maximum size for sparse file
+
+	u32 flags; // See NTFS_FLAGS_XXX
+
+	CLST bad_clusters; // The count of marked bad clusters
+
+	u16 max_bytes_per_attr; // maximum attribute size in record
+	u16 attr_size_tr; // attribute size threshold (320 bytes)
+
+	/* Records in $Extend */
+	CLST objid_no;
+	CLST quota_no;
+	CLST reparse_no;
+	CLST usn_jrnl_no;
+
+	struct ATTR_DEF_ENTRY *def_table; // attribute definition table
+	u32 def_entries;
+
+	struct MFT_REC *new_rec;
+
+	u16 *upcase;
+
+	struct nls_table *nls[2];
+
+	struct {
+		u64 lbo, lbo2;
+		struct ntfs_inode *ni;
+		struct wnd_bitmap bitmap; // $MFT::Bitmap
+		ulong reserved_bitmap;
+		size_t next_free; // The next record to allocate from
+		size_t used;
+		u32 recs_mirr; // Number of records in MFTMirr
+		u8 next_reserved;
+		u8 reserved_bitmap_inited;
+	} mft;
+
+	struct {
+		struct wnd_bitmap bitmap; // $Bitmap::Data
+		CLST next_free_lcn;
+	} used;
+
+	struct {
+		u64 size; // in bytes
+		u64 blocks; // in blocks
+		u64 ser_num;
+		struct ntfs_inode *ni;
+		__le16 flags; // see VOLUME_FLAG_XXX
+		u8 major_ver;
+		u8 minor_ver;
+		char label[65];
+		bool real_dirty; /* real fs state*/
+	} volume;
+
+	struct {
+		struct ntfs_index index_sii;
+		struct ntfs_index index_sdh;
+		struct ntfs_inode *ni;
+		u32 next_id;
+		u64 next_off;
+
+		__le32 def_file_id;
+		__le32 def_dir_id;
+	} security;
+
+	struct {
+		struct ntfs_index index_r;
+		struct ntfs_inode *ni;
+		u64 max_size; // 16K
+	} reparse;
+
+	struct {
+		struct ntfs_index index_o;
+		struct ntfs_inode *ni;
+	} objid;
+
+	struct {
+		/*protect 'frame_unc' and 'ctx'*/
+		spinlock_t lock;
+		u8 *frame_unc;
+		struct lznt *ctx;
+	} compress;
+
+	struct mount_options options;
+	struct ratelimit_state msg_ratelimit;
+};
+
+struct mft_inode {
+	struct rb_node node;
+	struct ntfs_sb_info *sbi;
+
+	CLST rno;
+	struct MFT_REC *mrec;
+	struct ntfs_buffers nb;
+
+	bool dirty;
+};
+
+#define NI_FLAG_DIR 0x00000001
+#define NI_FLAG_RESIDENT 0x00000002
+#define NI_FLAG_UPDATE_PARENT 0x00000004
+
+/* Data attribute is compressed in a special way */
+#define NI_FLAG_COMPRESSED_MASK 0x00000f00 //
+/* Data attribute is deduplicated */
+#define NI_FLAG_DEDUPLICATED 0x00001000
+#define NI_FLAG_EA 0x00002000
+
+/* ntfs file system inode data memory */
+struct ntfs_inode {
+	struct mft_inode mi; // base record
+
+	loff_t i_valid; /* valid size */
+	struct timespec64 i_crtime;
+
+	struct mutex ni_lock;
+
+	/* file attributes from std */
+	enum FILE_ATTRIBUTE std_fa;
+	__le32 std_security_id;
+
+	// subrecords tree
+	struct rb_root mi_tree;
+
+	union {
+		struct ntfs_index dir;
+		struct {
+			struct rw_semaphore run_lock;
+			struct runs_tree run;
+		} file;
+	};
+
+	struct {
+		struct runs_tree run;
+		struct ATTR_LIST_ENTRY *le; // 1K aligned memory
+		size_t size;
+		bool dirty;
+	} attr_list;
+
+	size_t ni_flags; // NI_FLAG_XXX
+
+	struct inode vfs_inode;
+};
+
+struct indx_node {
+	struct ntfs_buffers nb;
+	struct INDEX_BUFFER *index;
+};
+
+struct ntfs_fnd {
+	int level;
+	struct indx_node *nodes[20];
+	struct NTFS_DE *de[20];
+	struct NTFS_DE *root_de;
+};
+
+enum REPARSE_SIGN {
+	REPARSE_NONE = 0,
+	REPARSE_COMPRESSED = 1,
+	REPARSE_DEDUPLICATED = 2,
+	REPARSE_LINK = 3
+};
+
+/* functions from attrib.c*/
+int attr_load_runs(struct ATTRIB *attr, struct ntfs_inode *ni,
+		   struct runs_tree *run);
+int attr_allocate_clusters(struct ntfs_sb_info *sbi, struct runs_tree *run,
+			   CLST vcn, CLST lcn, CLST len, CLST *pre_alloc,
+			   enum ALLOCATE_OPT opt, CLST *alen, const size_t fr,
+			   CLST *new_lcn);
+int attr_set_size(struct ntfs_inode *ni, enum ATTR_TYPE type,
+		  const __le16 *name, u8 name_len, struct runs_tree *run,
+		  u64 new_size, const u64 *new_valid, bool keep_prealloc,
+		  struct ATTRIB **ret);
+int attr_data_get_block(struct ntfs_inode *ni, CLST vcn, CLST clen, CLST *lcn,
+			CLST *len, bool *new);
+int attr_load_runs_vcn(struct ntfs_inode *ni, enum ATTR_TYPE type,
+		       const __le16 *name, u8 name_len, struct runs_tree *run,
+		       CLST vcn);
+int attr_is_frame_compressed(struct ntfs_inode *ni, struct ATTRIB *attr,
+			     CLST frame, CLST *clst_data, bool *is_compr);
+int attr_allocate_frame(struct ntfs_inode *ni, CLST frame, size_t compr_size,
+			u64 new_valid);
+
+/* functions from attrlist.c*/
+void al_destroy(struct ntfs_inode *ni);
+bool al_verify(struct ntfs_inode *ni);
+int ntfs_load_attr_list(struct ntfs_inode *ni, struct ATTRIB *attr);
+struct ATTR_LIST_ENTRY *al_enumerate(struct ntfs_inode *ni,
+				     struct ATTR_LIST_ENTRY *le);
+struct ATTR_LIST_ENTRY *al_find_le(struct ntfs_inode *ni,
+				   struct ATTR_LIST_ENTRY *le,
+				   const struct ATTRIB *attr);
+struct ATTR_LIST_ENTRY *al_find_ex(struct ntfs_inode *ni,
+				   struct ATTR_LIST_ENTRY *le,
+				   enum ATTR_TYPE type, const __le16 *name,
+				   u8 name_len, const CLST *vcn);
+int al_add_le(struct ntfs_inode *ni, enum ATTR_TYPE type, const __le16 *name,
+	      u8 name_len, CLST svcn, __le16 id, const struct MFT_REF *ref,
+	      struct ATTR_LIST_ENTRY **new_le);
+bool al_remove_le(struct ntfs_inode *ni, struct ATTR_LIST_ENTRY *le);
+bool al_delete_le(struct ntfs_inode *ni, enum ATTR_TYPE type, CLST vcn,
+		  const __le16 *name, size_t name_len,
+		  const struct MFT_REF *ref);
+int al_update(struct ntfs_inode *ni);
+static inline size_t al_aligned(size_t size)
+{
+	return (size + 1023) & ~(size_t)1023;
+}
+
+/* globals from bitfunc.c */
+bool are_bits_clear(const ulong *map, size_t bit, size_t nbits);
+bool are_bits_set(const ulong *map, size_t bit, size_t nbits);
+size_t get_set_bits_ex(const ulong *map, size_t bit, size_t nbits);
+
+/* globals from dir.c */
+int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const struct le_str *uni,
+		      u8 *buf, int buf_len);
+int ntfs_nls_to_utf16(struct ntfs_sb_info *sbi, const u8 *name, u32 name_len,
+		      struct cpu_str *uni, u32 max_ulen,
+		      enum utf16_endian endian);
+struct inode *dir_search_u(struct inode *dir, const struct cpu_str *uni,
+			   struct ntfs_fnd *fnd);
+struct inode *dir_search(struct inode *dir, const struct qstr *name,
+			 struct ntfs_fnd *fnd);
+bool dir_is_empty(struct inode *dir);
+extern const struct file_operations ntfs_dir_operations;
+
+/* globals from file.c*/
+int ntfs_getattr(const struct path *path, struct kstat *stat, u32 request_mask,
+		 u32 flags);
+void ntfs_sparse_cluster(struct inode *inode, struct page *page0, loff_t vbo,
+			 u32 bytes);
+int ntfs_file_fsync(struct file *filp, loff_t start, loff_t end, int datasync);
+void ntfs_truncate_blocks(struct inode *inode, loff_t offset);
+int ntfs_setattr(struct dentry *dentry, struct iattr *attr);
+int ntfs_file_open(struct inode *inode, struct file *file);
+int ntfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+		__u64 start, __u64 len);
+extern const struct inode_operations ntfs_special_inode_operations;
+extern const struct inode_operations ntfs_file_inode_operations;
+extern const struct file_operations ntfs_file_operations;
+
+/* globals from frecord.c */
+void ni_remove_mi(struct ntfs_inode *ni, struct mft_inode *mi);
+struct ATTR_STD_INFO *ni_std(struct ntfs_inode *ni);
+void ni_clear(struct ntfs_inode *ni);
+int ni_load_mi_ex(struct ntfs_inode *ni, CLST rno, struct mft_inode **mi);
+int ni_load_mi(struct ntfs_inode *ni, struct ATTR_LIST_ENTRY *le,
+	       struct mft_inode **mi);
+struct ATTRIB *ni_find_attr(struct ntfs_inode *ni, struct ATTRIB *attr,
+			    struct ATTR_LIST_ENTRY **entry_o,
+			    enum ATTR_TYPE type, const __le16 *name,
+			    u8 name_len, const CLST *vcn,
+			    struct mft_inode **mi);
+struct ATTRIB *ni_enum_attr_ex(struct ntfs_inode *ni, struct ATTRIB *attr,
+			       struct ATTR_LIST_ENTRY **le);
+struct ATTRIB *ni_load_attr(struct ntfs_inode *ni, enum ATTR_TYPE type,
+			    const __le16 *name, u8 name_len, CLST vcn,
+			    struct mft_inode **pmi);
+int ni_load_all_mi(struct ntfs_inode *ni);
+bool ni_add_subrecord(struct ntfs_inode *ni, CLST rno, struct mft_inode **mi);
+int ni_remove_attr(struct ntfs_inode *ni, enum ATTR_TYPE type,
+		   const __le16 *name, size_t name_len, bool base_only,
+		   const __le16 *id);
+int ni_create_attr_list(struct ntfs_inode *ni);
+int ni_expand_list(struct ntfs_inode *ni);
+int ni_insert_nonresident(struct ntfs_inode *ni, enum ATTR_TYPE type,
+			  const __le16 *name, u8 name_len,
+			  const struct runs_tree *run, CLST svcn, CLST len,
+			  __le16 flags, struct ATTRIB **new_attr,
+			  struct mft_inode **mi);
+int ni_insert_resident(struct ntfs_inode *ni, u32 data_size,
+		       enum ATTR_TYPE type, const __le16 *name, u8 name_len,
+		       struct ATTRIB **new_attr, struct mft_inode **mi);
+int ni_remove_attr_le(struct ntfs_inode *ni, struct ATTRIB *attr,
+		      struct ATTR_LIST_ENTRY *le);
+int ni_delete_all(struct ntfs_inode *ni);
+struct ATTR_FILE_NAME *ni_fname_name(struct ntfs_inode *ni,
+				     const struct cpu_str *uni,
+				     const struct MFT_REF *home,
+				     struct ATTR_LIST_ENTRY **entry);
+struct ATTR_FILE_NAME *ni_fname_type(struct ntfs_inode *ni, u8 name_type,
+				     struct ATTR_LIST_ENTRY **entry);
+u16 ni_fnames_count(struct ntfs_inode *ni);
+int ni_init_compress(struct ntfs_inode *ni, struct COMPRESS_CTX *ctx);
+enum REPARSE_SIGN ni_parse_reparse(struct ntfs_inode *ni, struct ATTRIB *attr,
+				   void *buffer);
+int ni_write_inode(struct inode *inode, int sync, const char *hint);
+#define _ni_write_inode(i, w) ni_write_inode(i, w, __func__)
+int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+	      __u64 vbo, __u64 len);
+int ni_readpage_cmpr(struct ntfs_inode *ni, struct page *page);
+int ni_writepage_cmpr(struct page *page, int sync);
+
+/* globals from fslog.c */
+int log_replay(struct ntfs_inode *ni);
+
+/* globals from fsntfs.c */
+bool ntfs_fix_pre_write(struct NTFS_RECORD_HEADER *rhdr, size_t bytes);
+int ntfs_fix_post_read(struct NTFS_RECORD_HEADER *rhdr, size_t bytes,
+		       bool simple);
+int ntfs_extend_init(struct ntfs_sb_info *sbi);
+int ntfs_loadlog_and_replay(struct ntfs_inode *ni, struct ntfs_sb_info *sbi);
+const struct ATTR_DEF_ENTRY *ntfs_query_def(struct ntfs_sb_info *sbi,
+					    enum ATTR_TYPE Type);
+int ntfs_look_for_free_space(struct ntfs_sb_info *sbi, CLST lcn, CLST len,
+			     CLST *new_lcn, CLST *new_len,
+			     enum ALLOCATE_OPT opt);
+int ntfs_look_free_mft(struct ntfs_sb_info *sbi, CLST *rno, bool mft,
+		       struct ntfs_inode *ni, struct mft_inode **mi);
+void ntfs_mark_rec_free(struct ntfs_sb_info *sbi, CLST nRecord);
+int ntfs_clear_mft_tail(struct ntfs_sb_info *sbi, size_t from, size_t to);
+int ntfs_refresh_zone(struct ntfs_sb_info *sbi);
+int ntfs_update_mftmirr(struct ntfs_sb_info *sbi, int wait);
+enum NTFS_DIRTY_FLAGS {
+	NTFS_DIRTY_CLEAR = 0,
+	NTFS_DIRTY_DIRTY = 1,
+	NTFS_DIRTY_ERROR = 2,
+};
+int ntfs_set_state(struct ntfs_sb_info *sbi, enum NTFS_DIRTY_FLAGS dirty);
+int ntfs_sb_read(struct super_block *sb, u64 lbo, size_t bytes, void *buffer);
+int ntfs_sb_write(struct super_block *sb, u64 lbo, size_t bytes,
+		  const void *buffer, int wait);
+int ntfs_sb_write_run(struct ntfs_sb_info *sbi, struct runs_tree *run, u64 vbo,
+		      const void *buf, size_t bytes);
+struct buffer_head *ntfs_bread_run(struct ntfs_sb_info *sbi,
+				   struct runs_tree *run, u64 vbo);
+int ntfs_read_run_nb(struct ntfs_sb_info *sbi, struct runs_tree *run, u64 vbo,
+		     void *buf, u32 bytes, struct ntfs_buffers *nb);
+int ntfs_read_bh(struct ntfs_sb_info *sbi, struct runs_tree *run, u64 vbo,
+		 struct NTFS_RECORD_HEADER *rhdr, u32 bytes,
+		 struct ntfs_buffers *nb);
+int ntfs_get_bh(struct ntfs_sb_info *sbi, struct runs_tree *run, u64 vbo,
+		u32 bytes, struct ntfs_buffers *nb);
+int ntfs_write_bh(struct ntfs_sb_info *sbi, struct NTFS_RECORD_HEADER *rhdr,
+		  struct ntfs_buffers *nb, int sync);
+int ntfs_vbo_to_lbo(struct ntfs_sb_info *sbi, struct runs_tree *run, u64 vbo,
+		    u64 *lbo, u64 *bytes);
+struct ntfs_inode *ntfs_new_inode(struct ntfs_sb_info *sbi, CLST nRec,
+				  bool dir);
+extern const u8 s_dir_security[0x50];
+extern const u8 s_file_security[0x58];
+int ntfs_security_init(struct ntfs_sb_info *sbi);
+int ntfs_get_security_by_id(struct ntfs_sb_info *sbi, u32 security_id,
+			    void **sd, size_t *size);
+int ntfs_insert_security(struct ntfs_sb_info *sbi, const void *sd, u32 size,
+			 __le32 *security_id, bool *inserted);
+int ntfs_reparse_init(struct ntfs_sb_info *sbi);
+int ntfs_objid_init(struct ntfs_sb_info *sbi);
+int ntfs_objid_remove(struct ntfs_sb_info *sbi, struct GUID *guid);
+int ntfs_insert_reparse(struct ntfs_sb_info *sbi, __le32 rtag,
+			const struct MFT_REF *ref);
+int ntfs_remove_reparse(struct ntfs_sb_info *sbi, __le32 rtag,
+			const struct MFT_REF *ref);
+void mark_as_free_ex(struct ntfs_sb_info *sbi, CLST lcn, CLST len, bool trim);
+int run_deallocate(struct ntfs_sb_info *sbi, struct runs_tree *run, bool trim);
+
+/* globals from index.c */
+int indx_used_bit(struct ntfs_index *indx, struct ntfs_inode *ni, size_t *bit);
+void fnd_clear(struct ntfs_fnd *fnd);
+struct ntfs_fnd *fnd_get(struct ntfs_index *indx);
+void fnd_put(struct ntfs_fnd *fnd);
+void indx_clear(struct ntfs_index *idx);
+int indx_init(struct ntfs_index *indx, struct ntfs_sb_info *sbi,
+	      const struct ATTRIB *attr, enum index_mutex_classed type);
+struct INDEX_ROOT *indx_get_root(struct ntfs_index *indx, struct ntfs_inode *ni,
+				 struct ATTRIB **attr, struct mft_inode **mi);
+int indx_read(struct ntfs_index *idx, struct ntfs_inode *ni, CLST vbn,
+	      struct indx_node **node);
+int indx_find(struct ntfs_index *indx, struct ntfs_inode *dir,
+	      const struct INDEX_ROOT *root, const void *Key, size_t KeyLen,
+	      const void *param, int *diff, struct NTFS_DE **entry,
+	      struct ntfs_fnd *fnd);
+int indx_find_sort(struct ntfs_index *indx, struct ntfs_inode *ni,
+		   const struct INDEX_ROOT *root, struct NTFS_DE **entry,
+		   struct ntfs_fnd *fnd);
+int indx_find_raw(struct ntfs_index *indx, struct ntfs_inode *ni,
+		  const struct INDEX_ROOT *root, struct NTFS_DE **entry,
+		  size_t *off, struct ntfs_fnd *fnd);
+int indx_insert_entry(struct ntfs_index *indx, struct ntfs_inode *ni,
+		      const struct NTFS_DE *new_de, const void *param,
+		      struct ntfs_fnd *fnd);
+int indx_delete_entry(struct ntfs_index *indx, struct ntfs_inode *ni,
+		      const void *key, u32 key_len, const void *param);
+int indx_update_dup(struct ntfs_inode *ni, struct ntfs_sb_info *sbi,
+		    const struct ATTR_FILE_NAME *fname,
+		    const struct NTFS_DUP_INFO *dup, int sync);
+
+/* globals from inode.c */
+struct inode *ntfs_iget5(struct super_block *sb, const struct MFT_REF *ref,
+			 const struct cpu_str *name);
+int ntfs_set_size(struct inode *inode, u64 new_size);
+int reset_log_file(struct inode *inode);
+int ntfs_get_block(struct inode *inode, sector_t vbn,
+		   struct buffer_head *bh_result, int create);
+int ntfs_write_inode(struct inode *inode, struct writeback_control *wbc);
+int ntfs_sync_inode(struct inode *inode);
+int ntfs_flush_inodes(struct super_block *sb, struct inode *i1,
+		      struct inode *i2);
+int inode_write_data(struct inode *inode, const void *data, size_t bytes);
+int ntfs_create_inode(struct inode *dir, struct dentry *dentry,
+		      struct file *file, umode_t mode, dev_t dev,
+		      const char *symname, unsigned int size, int excl,
+		      struct ntfs_fnd *fnd, struct inode **new_inode);
+int ntfs_link_inode(struct inode *inode, struct dentry *dentry);
+int ntfs_unlink_inode(struct inode *dir, const struct dentry *dentry);
+void ntfs_evict_inode(struct inode *inode);
+int ntfs_readpage(struct file *file, struct page *page);
+extern const struct inode_operations ntfs_link_inode_operations;
+extern const struct address_space_operations ntfs_aops;
+extern const struct address_space_operations ntfs_aops_cmpr;
+
+/* globals from namei.c */
+int fill_name_de(struct ntfs_sb_info *sbi, void *buf, const struct qstr *name);
+struct dentry *ntfs_get_parent(struct dentry *child);
+
+extern const struct inode_operations ntfs_dir_inode_operations;
+
+/* globals from record.c */
+int mi_get(struct ntfs_sb_info *sbi, CLST rno, struct mft_inode **mi);
+void mi_put(struct mft_inode *mi);
+int mi_init(struct mft_inode *mi, struct ntfs_sb_info *sbi, CLST rno);
+int mi_read(struct mft_inode *mi, bool is_mft);
+struct ATTRIB *mi_enum_attr(struct mft_inode *mi, struct ATTRIB *attr);
+// TODO: id?
+struct ATTRIB *mi_find_attr(struct mft_inode *mi, struct ATTRIB *attr,
+			    enum ATTR_TYPE type, const __le16 *name,
+			    size_t name_len, const __le16 *id);
+static inline struct ATTRIB *rec_find_attr_le(struct mft_inode *rec,
+					      struct ATTR_LIST_ENTRY *le)
+{
+	return mi_find_attr(rec, NULL, le->type, le_name(le), le->name_len,
+			    &le->id);
+}
+int mi_write(struct mft_inode *mi, int wait);
+int mi_format_new(struct mft_inode *mi, struct ntfs_sb_info *sbi, CLST rno,
+		  __le16 flags, bool is_mft);
+void mi_mark_free(struct mft_inode *mi);
+struct ATTRIB *mi_insert_attr(struct mft_inode *mi, enum ATTR_TYPE type,
+			      const __le16 *name, u8 name_len, u32 asize,
+			      u16 name_off);
+
+bool mi_remove_attr(struct mft_inode *mi, struct ATTRIB *attr);
+bool mi_resize_attr(struct mft_inode *mi, struct ATTRIB *attr, int bytes);
+int mi_pack_runs(struct mft_inode *mi, struct ATTRIB *attr,
+		 struct runs_tree *run, CLST len);
+static inline bool mi_is_ref(const struct mft_inode *mi,
+			     const struct MFT_REF *ref)
+{
+	if (le32_to_cpu(ref->low) != mi->rno)
+		return false;
+	if (ref->seq != mi->mrec->seq)
+		return false;
+
+#ifdef NTFS3_64BIT_CLUSTER
+	return le16_to_cpu(ref->high) == (mi->rno >> 32);
+#else
+	return !ref->high;
+#endif
+}
+
+/* globals from run.c */
+bool run_lookup_entry(const struct runs_tree *run, CLST vcn, CLST *lcn,
+		      CLST *len, size_t *index);
+void run_truncate(struct runs_tree *run, CLST vcn);
+void run_truncate_head(struct runs_tree *run, CLST vcn);
+bool run_lookup(const struct runs_tree *run, CLST Vcn, size_t *Index);
+bool run_add_entry(struct runs_tree *run, CLST vcn, CLST lcn, CLST len);
+bool run_get_entry(const struct runs_tree *run, size_t index, CLST *vcn,
+		   CLST *lcn, CLST *len);
+bool run_is_mapped_full(const struct runs_tree *run, CLST svcn, CLST evcn);
+
+int run_pack(const struct runs_tree *run, CLST svcn, CLST len, u8 *run_buf,
+	     u32 run_buf_size, CLST *packed_vcns);
+int run_unpack(struct runs_tree *run, struct ntfs_sb_info *sbi, CLST ino,
+	       CLST svcn, CLST evcn, const u8 *run_buf, u32 run_buf_size);
+
+#ifdef NTFS3_CHECK_FREE_CLST
+int run_unpack_ex(struct runs_tree *run, struct ntfs_sb_info *sbi, CLST ino,
+		  CLST svcn, CLST evcn, const u8 *run_buf, u32 run_buf_size);
+#else
+#define run_unpack_ex run_unpack
+#endif
+int run_get_highest_vcn(CLST vcn, const u8 *run_buf, u64 *highest_vcn);
+
+/* globals from super.c */
+void *ntfs_set_shared(void *ptr, u32 bytes);
+void *ntfs_put_shared(void *ptr);
+void ntfs_unmap_meta(struct super_block *sb, CLST lcn, CLST len);
+int ntfs_discard(struct ntfs_sb_info *sbi, CLST Lcn, CLST Len);
+
+/* globals from bitmap.c */
+void wnd_close(struct wnd_bitmap *wnd);
+static inline size_t wnd_zeroes(const struct wnd_bitmap *wnd)
+{
+	return wnd->total_zeroes;
+}
+void wnd_trace(struct wnd_bitmap *wnd);
+void wnd_trace_tree(struct wnd_bitmap *wnd, u32 nExtents, const char *Hint);
+int wnd_init(struct wnd_bitmap *wnd, struct super_block *sb, size_t nBits);
+int wnd_set_free(struct wnd_bitmap *wnd, size_t FirstBit, size_t Bits);
+int wnd_set_used(struct wnd_bitmap *wnd, size_t FirstBit, size_t Bits);
+bool wnd_is_free(struct wnd_bitmap *wnd, size_t FirstBit, size_t Bits);
+bool wnd_is_used(struct wnd_bitmap *wnd, size_t FirstBit, size_t Bits);
+
+/* Possible values for 'flags' 'wnd_find' */
+#define BITMAP_FIND_MARK_AS_USED 0x01
+#define BITMAP_FIND_FULL 0x02
+size_t wnd_find(struct wnd_bitmap *wnd, size_t to_alloc, size_t hint,
+		size_t flags, size_t *allocated);
+int wnd_extend(struct wnd_bitmap *wnd, size_t new_bits);
+void wnd_zone_set(struct wnd_bitmap *wnd, size_t Lcn, size_t Len);
+int ntfs_trim_fs(struct ntfs_sb_info *sbi, struct fstrim_range *range);
+
+/* globals from upcase.c */
+int ntfs_cmp_names(const __le16 *s1, size_t l1, const __le16 *s2, size_t l2,
+		   const u16 *upcase);
+int ntfs_cmp_names_cpu(const struct cpu_str *uni1, const struct le_str *uni2,
+		       const u16 *upcase);
+
+/* globals from xattr.c */
+struct posix_acl *ntfs_get_acl(struct inode *inode, int type);
+int ntfs_set_acl(struct inode *inode, struct posix_acl *acl, int type);
+int ntfs_acl_chmod(struct inode *inode);
+int ntfs_permission(struct inode *inode, int mask);
+ssize_t ntfs_listxattr(struct dentry *dentry, char *buffer, size_t size);
+int ntfs_init_acl(struct inode *inode, struct inode *dir);
+extern const struct xattr_handler *ntfs_xattr_handlers[];
+
+/* globals from lznt.c */
+struct lznt *get_compression_ctx(bool std);
+size_t compress_lznt(const void *uncompressed, size_t uncompressed_size,
+		     void *compressed, size_t compressed_size,
+		     struct lznt *ctx);
+ssize_t decompress_lznt(const void *compressed, size_t compressed_size,
+			void *uncompressed, size_t uncompressed_size);
+
+char *attr_str(const struct ATTRIB *attr, char *buf, size_t buf_len);
+
+static inline bool is_ntfs3(struct ntfs_sb_info *sbi)
+{
+	return sbi->volume.major_ver >= 3;
+}
+
+/*(sb->s_flags & SB_ACTIVE)*/
+static inline bool is_mounted(struct ntfs_sb_info *sbi)
+{
+	return !!sbi->sb->s_root;
+}
+
+static inline bool ntfs_is_meta_file(struct ntfs_sb_info *sbi, CLST rno)
+{
+	return rno < MFT_REC_FREE || rno == sbi->objid_no ||
+	       rno == sbi->quota_no || rno == sbi->reparse_no ||
+	       rno == sbi->usn_jrnl_no;
+}
+
+static inline void ntfs_unmap_page(struct page *page)
+{
+	kunmap(page);
+	put_page(page);
+}
+
+static inline struct page *ntfs_map_page(struct address_space *mapping,
+					 unsigned long index)
+{
+	struct page *page = read_mapping_page(mapping, index, NULL);
+
+	if (!IS_ERR(page)) {
+		kmap(page);
+		if (!PageError(page))
+			return page;
+		ntfs_unmap_page(page);
+		return ERR_PTR(-EIO);
+	}
+	return page;
+}
+
+static inline size_t wnd_zone_bit(const struct wnd_bitmap *wnd)
+{
+	return wnd->zone_bit;
+}
+
+static inline size_t wnd_zone_len(const struct wnd_bitmap *wnd)
+{
+	return wnd->zone_end - wnd->zone_bit;
+}
+
+static inline void run_init(struct runs_tree *run)
+{
+	run->runs_ = NULL;
+	run->count = 0;
+	run->allocated = 0;
+}
+
+static inline struct runs_tree *run_alloc(void)
+{
+	return ntfs_alloc(sizeof(struct runs_tree), 1);
+}
+
+static inline void run_close(struct runs_tree *run)
+{
+	ntfs_free(run->runs_);
+	memset(run, 0, sizeof(*run));
+}
+
+static inline void run_free(struct runs_tree *run)
+{
+	if (run) {
+		ntfs_free(run->runs_);
+		ntfs_free(run);
+	}
+}
+
+static inline bool run_is_empty(struct runs_tree *run)
+{
+	return !run->count;
+}
+
+/* NTFS uses quad aligned bitmaps */
+static inline size_t bitmap_size(size_t bits)
+{
+	return QuadAlign((bits + 7) >> 3);
+}
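+
+/*
+ * Worked example (illustration only, assuming QuadAlign() rounds up to a
+ * multiple of 8 bytes): bitmap_size(1) == 8 and bitmap_size(65) == 16,
+ * since 65 bits need 9 bytes, rounded up to the next 8-byte boundary.
+ */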
+
+#define _100ns2seconds 10000000
+#define SecondsToStartOf1970 0x00000002B6109100
+
+#define NTFS_TIME_GRAN 100
+
+/*
+ * kernel2nt
+ *
+ * converts in-memory kernel timestamp into nt time
+ */
+static inline __le64 kernel2nt(const struct timespec64 *ts)
+{
+	// 10^7 units of 100 nanoseconds in one second
+	return cpu_to_le64(_100ns2seconds *
+				   (ts->tv_sec + SecondsToStartOf1970) +
+			   ts->tv_nsec / NTFS_TIME_GRAN);
+}
+
+/*
+ * nt2kernel
+ *
+ * converts on-disk nt time into kernel timestamp
+ */
+static inline void nt2kernel(const __le64 tm, struct timespec64 *ts)
+{
+	u64 t = le64_to_cpu(tm) - _100ns2seconds * SecondsToStartOf1970;
+
+	// WARNING: do_div changes its first argument(!)
+	ts->tv_nsec = do_div(t, _100ns2seconds) * 100;
+	ts->tv_sec = t;
+}
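+
+/*
+ * Worked example (illustration only): the Unix epoch 1970-01-01 00:00:00 UTC
+ * is ts = {0, 0}, so kernel2nt() yields
+ * _100ns2seconds * SecondsToStartOf1970 = 10000000 * 11644473600
+ * = 0x019DB1DED53E8000 units of 100 ns since 1601-01-01, and nt2kernel()
+ * maps that value back to {0, 0}.
+ */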
+
+static inline struct ntfs_sb_info *ntfs_sb(struct super_block *sb)
+{
+	return sb->s_fs_info;
+}
+
+/* Align up on cluster boundary */
+static inline u64 ntfs_up_cluster(const struct ntfs_sb_info *sbi, u64 size)
+{
+	return (size + sbi->cluster_mask) & ~((u64)sbi->cluster_mask);
+}
+
+/* Align up on block boundary */
+static inline u64 ntfs_up_block(const struct super_block *sb, u64 size)
+{
+	return (size + sb->s_blocksize - 1) & ~(u64)(sb->s_blocksize - 1);
+}
+
+static inline CLST bytes_to_cluster(const struct ntfs_sb_info *sbi, u64 size)
+{
+	return (size + sbi->cluster_mask) >> sbi->cluster_bits;
+}
+
+static inline u64 bytes_to_block(const struct super_block *sb, u64 size)
+{
+	return (size + sb->s_blocksize - 1) >> sb->s_blocksize_bits;
+}
+
+/* calculates ((bytes + frame_size - 1)/frame_size)*frame_size; */
+static inline u64 ntfs_up_frame(const struct ntfs_sb_info *sbi, u64 bytes,
+				u8 c_unit)
+{
+	u32 bytes_per_frame = 1u << (c_unit + sbi->cluster_bits);
+
+	return (bytes + bytes_per_frame - 1) & ~(u64)(bytes_per_frame - 1);
+}
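+
+/*
+ * Example (illustration only): with 4K clusters (cluster_bits == 12) and the
+ * common compression unit c_unit == 4, a frame is 1 << 16 == 64K bytes, so
+ * ntfs_up_frame(sbi, 100000, 4) == 131072.
+ */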
+
+static inline struct buffer_head *ntfs_bread(struct super_block *sb,
+					     sector_t block)
+{
+	struct buffer_head *bh;
+
+	bh = sb_bread(sb, block);
+	if (bh)
+		return bh;
+
+	ntfs_err(sb, "failed to read volume at offset 0x%llx",
+		 (u64)block << sb->s_blocksize_bits);
+	return NULL;
+}
+
+static inline bool is_power_of2(size_t v)
+{
+	return v && !(v & (v - 1));
+}
+
+static inline struct ntfs_inode *ntfs_i(struct inode *inode)
+{
+	return container_of(inode, struct ntfs_inode, vfs_inode);
+}
+
+static inline bool is_compressed(const struct ntfs_inode *ni)
+{
+	return (ni->std_fa & FILE_ATTRIBUTE_COMPRESSED) ||
+	       (ni->ni_flags & NI_FLAG_COMPRESSED_MASK);
+}
+
+static inline bool is_dedup(const struct ntfs_inode *ni)
+{
+	return ni->ni_flags & NI_FLAG_DEDUPLICATED;
+}
+
+static inline bool is_encrypted(const struct ntfs_inode *ni)
+{
+	return ni->std_fa & FILE_ATTRIBUTE_ENCRYPTED;
+}
+
+static inline bool is_sparsed(const struct ntfs_inode *ni)
+{
+	return ni->std_fa & FILE_ATTRIBUTE_SPARSE_FILE;
+}
+
+static inline void le16_sub_cpu(__le16 *var, u16 val)
+{
+	*var = cpu_to_le16(le16_to_cpu(*var) - val);
+}
+
+static inline void le32_sub_cpu(__le32 *var, u32 val)
+{
+	*var = cpu_to_le32(le32_to_cpu(*var) - val);
+}
+
+static inline void nb_put(struct ntfs_buffers *nb)
+{
+	u32 i, nbufs = nb->nbufs;
+
+	if (!nbufs)
+		return;
+
+	for (i = 0; i < nbufs; i++)
+		put_bh(nb->bh[i]);
+	nb->nbufs = 0;
+}
+
+static inline void put_indx_node(struct indx_node *in)
+{
+	if (!in)
+		return;
+
+	ntfs_free(in->index);
+	nb_put(&in->nb);
+	ntfs_free(in);
+}
+
+static inline void mi_clear(struct mft_inode *mi)
+{
+	nb_put(&mi->nb);
+	ntfs_free(mi->mrec);
+	mi->mrec = NULL;
+}
+
+static inline void ni_lock(struct ntfs_inode *ni)
+{
+	mutex_lock(&ni->ni_lock);
+}
+
+static inline void ni_unlock(struct ntfs_inode *ni)
+{
+	mutex_unlock(&ni->ni_lock);
+}
+
+static inline int ni_trylock(struct ntfs_inode *ni)
+{
+	return mutex_trylock(&ni->ni_lock);
+}
+
+static inline int ni_has_resident_data(struct ntfs_inode *ni)
+{
+	return ni->ni_flags & NI_FLAG_RESIDENT;
+}
+
+static inline int attr_load_runs_attr(struct ntfs_inode *ni,
+				      struct ATTRIB *attr,
+				      struct runs_tree *run, CLST vcn)
+{
+	return attr_load_runs_vcn(ni, attr->type, attr_name(attr),
+				  attr->name_len, run, vcn);
+}
+
+static inline void le64_sub_cpu(__le64 *var, u64 val)
+{
+	*var = cpu_to_le64(le64_to_cpu(*var) - val);
+}
diff --git a/fs/ntfs3/upcase.c b/fs/ntfs3/upcase.c
new file mode 100644
index 000000000000..0bb8d75b8abb
--- /dev/null
+++ b/fs/ntfs3/upcase.c
@@ -0,0 +1,78 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  linux/fs/ntfs3/upcase.c
+ *
+ * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved.
+ *
+ */
+#include <linux/blkdev.h>
+#include <linux/buffer_head.h>
+#include <linux/module.h>
+#include <linux/nls.h>
+
+#include "debug.h"
+#include "ntfs.h"
+#include "ntfs_fs.h"
+
+static inline u16 upcase_unicode_char(const u16 *upcase, u16 chr)
+{
+	if (chr < 'a')
+		return chr;
+
+	if (chr <= 'z')
+		return (u16)(chr - ('a' - 'A'));
+
+	return upcase[chr];
+}
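+
+/*
+ * Example (illustration only): ASCII 'q' (0x71) is folded to 'Q' by the fast
+ * path above, while any code point above 'z' is mapped through the supplied
+ * upcase table (normally loaded from the volume's $UpCase file).
+ */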
+
+int ntfs_cmp_names(const __le16 *s1, size_t l1, const __le16 *s2, size_t l2,
+		   const u16 *upcase)
+{
+	int diff;
+	size_t len = l1 < l2 ? l1 : l2;
+
+	if (upcase) {
+		while (len--) {
+			diff = upcase_unicode_char(upcase, le16_to_cpu(*s1++)) -
+			       upcase_unicode_char(upcase, le16_to_cpu(*s2++));
+			if (diff)
+				return diff;
+		}
+	} else {
+		while (len--) {
+			diff = le16_to_cpu(*s1++) - le16_to_cpu(*s2++);
+			if (diff)
+				return diff;
+		}
+	}
+
+	return (int)(l1 - l2);
+}
+
+int ntfs_cmp_names_cpu(const struct cpu_str *uni1, const struct le_str *uni2,
+		       const u16 *upcase)
+{
+	const u16 *s1 = uni1->name;
+	const __le16 *s2 = uni2->name;
+	size_t l1 = uni1->len;
+	size_t l2 = uni2->len;
+	size_t len = l1 < l2 ? l1 : l2;
+	int diff;
+
+	if (upcase) {
+		while (len--) {
+			diff = upcase_unicode_char(upcase, *s1++) -
+			       upcase_unicode_char(upcase, le16_to_cpu(*s2++));
+			if (diff)
+				return diff;
+		}
+	} else {
+		while (len--) {
+			diff = *s1++ - le16_to_cpu(*s2++);
+			if (diff)
+				return diff;
+		}
+	}
+
+	return l1 - l2;
+}
-- 
2.25.4


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v5 03/10] fs/ntfs3: Add bitmap
  2020-09-11 14:10 [PATCH v5 00/10] NTFS read-write driver GPL implementation by Paragon Software Konstantin Komarov
  2020-09-11 14:10 ` [PATCH v5 01/10] fs/ntfs3: Add headers and misc files Konstantin Komarov
@ 2020-09-11 14:10 ` Konstantin Komarov
  2020-09-13 18:43   ` Joe Perches
  2020-09-11 14:10 ` [PATCH v5 05/10] fs/ntfs3: Add attrib operations Konstantin Komarov
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 23+ messages in thread
From: Konstantin Komarov @ 2020-09-11 14:10 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: viro, linux-kernel, pali, dsterba, aaptel, willy, rdunlap, joe,
	mark, nborisov, Konstantin Komarov

This adds the bitmap code: bitfunc.c (low-level helpers that test ranges of
bits) and bitmap.c (the wnd_bitmap window bitmap used for the volume
free-space bitmap and the $MFT bitmap).
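
A minimal usage sketch of the window bitmap API added here (illustration
only, not part of the patch; the helper name example_alloc is made up and
error handling is trimmed):

	static int example_alloc(struct wnd_bitmap *wnd, size_t want,
				 size_t hint, size_t *start)
	{
		/*
		 * Find a free run and mark it used in one step; without
		 * BITMAP_FIND_FULL the returned run may be shorter than
		 * 'want'.
		 */
		size_t got = wnd_find(wnd, want, hint,
				      BITMAP_FIND_MARK_AS_USED, start);

		if (!got)
			return -ENOSPC; /* no free run found */

		/* ... use bits [*start, *start + got) ... */

		return wnd_set_free(wnd, *start, got); /* release again */
	}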

Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
---
 fs/ntfs3/bitfunc.c |  137 ++++
 fs/ntfs3/bitmap.c  | 1516 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1653 insertions(+)
 create mode 100644 fs/ntfs3/bitfunc.c
 create mode 100644 fs/ntfs3/bitmap.c

diff --git a/fs/ntfs3/bitfunc.c b/fs/ntfs3/bitfunc.c
new file mode 100644
index 000000000000..b19972177535
--- /dev/null
+++ b/fs/ntfs3/bitfunc.c
@@ -0,0 +1,137 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  linux/fs/ntfs3/bitfunc.c
+ *
+ * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved.
+ *
+ */
+#include <linux/blkdev.h>
+#include <linux/buffer_head.h>
+#include <linux/fs.h>
+#include <linux/nls.h>
+#include <linux/sched/signal.h>
+
+#include "debug.h"
+#include "ntfs.h"
+#include "ntfs_fs.h"
+
+#define BITS_IN_SIZE_T (sizeof(size_t) * 8)
+
+/*
+ * fill_mask[i] - first i bits are '1' , i = 0,1,2,3,4,5,6,7,8
+ * fill_mask[i] = 0xFF >> (8-i)
+ */
+static const u8 fill_mask[] = { 0x00, 0x01, 0x03, 0x07, 0x0F,
+				0x1F, 0x3F, 0x7F, 0xFF };
+
+/*
+ * zero_mask[i] - first i bits are '0' , i = 0,1,2,3,4,5,6,7,8
+ * zero_mask[i] = 0xFF << i
+ */
+static const u8 zero_mask[] = { 0xFF, 0xFE, 0xFC, 0xF8, 0xF0,
+				0xE0, 0xC0, 0x80, 0x00 };
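+
+/*
+ * Example (illustration only): for pos == 3, fill_mask[3] == 0x07 selects the
+ * three low bits and zero_mask[3] == 0xF8 selects the five high bits, so
+ * fill_mask[pos + nbits] & zero_mask[pos] isolates bits [pos, pos + nbits)
+ * within a byte.
+ */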
+
+/*
+ * are_bits_clear
+ *
+ * Returns true if all bits [bit, bit+nbits) are zeros "0"
+ */
+bool are_bits_clear(const ulong *lmap, size_t bit, size_t nbits)
+{
+	size_t pos = bit & 7;
+	const u8 *map = (u8 *)lmap + (bit >> 3);
+
+	if (pos) {
+		if (8 - pos >= nbits)
+			return !nbits || !(*map & fill_mask[pos + nbits] &
+					   zero_mask[pos]);
+
+		if (*map++ & zero_mask[pos])
+			return false;
+		nbits -= 8 - pos;
+	}
+
+	pos = ((size_t)map) & (sizeof(size_t) - 1);
+	if (pos) {
+		pos = sizeof(size_t) - pos;
+		if (nbits >= pos * 8) {
+			for (nbits -= pos * 8; pos; pos--, map++) {
+				if (*map)
+					return false;
+			}
+		}
+	}
+
+	for (pos = nbits / BITS_IN_SIZE_T; pos; pos--, map += sizeof(size_t)) {
+		if (*((size_t *)map))
+			return false;
+	}
+
+	for (pos = (nbits % BITS_IN_SIZE_T) >> 3; pos; pos--, map++) {
+		if (*map)
+			return false;
+	}
+
+	pos = nbits & 7;
+	if (pos && (*map & fill_mask[pos]))
+		return false;
+
+	// All bits are zero
+	return true;
+}
+
+/*
+ * are_bits_set
+ *
+ * Returns true if all bits [bit, bit+nbits) are ones "1"
+ */
+bool are_bits_set(const ulong *lmap, size_t bit, size_t nbits)
+{
+	u8 mask;
+	size_t pos = bit & 7;
+	const u8 *map = (u8 *)lmap + (bit >> 3);
+
+	if (pos) {
+		if (8 - pos >= nbits) {
+			mask = fill_mask[pos + nbits] & zero_mask[pos];
+			return !nbits || (*map & mask) == mask;
+		}
+
+		mask = zero_mask[pos];
+		if ((*map++ & mask) != mask)
+			return false;
+		nbits -= 8 - pos;
+	}
+
+	pos = ((size_t)map) & (sizeof(size_t) - 1);
+	if (pos) {
+		pos = sizeof(size_t) - pos;
+		if (nbits >= pos * 8) {
+			for (nbits -= pos * 8; pos; pos--, map++) {
+				if (*map != 0xFF)
+					return false;
+			}
+		}
+	}
+
+	for (pos = nbits / BITS_IN_SIZE_T; pos; pos--, map += sizeof(size_t)) {
+		if (*((size_t *)map) != MINUS_ONE_T)
+			return false;
+	}
+
+	for (pos = (nbits % BITS_IN_SIZE_T) >> 3; pos; pos--, map++) {
+		if (*map != 0xFF)
+			return false;
+	}
+
+	pos = nbits & 7;
+	if (pos) {
+		u8 mask = fill_mask[pos];
+
+		if ((*map & mask) != mask)
+			return false;
+	}
+
+	// All bits are ones
+	return true;
+}
diff --git a/fs/ntfs3/bitmap.c b/fs/ntfs3/bitmap.c
new file mode 100644
index 000000000000..82ab2d70bfa8
--- /dev/null
+++ b/fs/ntfs3/bitmap.c
@@ -0,0 +1,1516 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  linux/fs/ntfs3/bitmap.c
+ *
+ * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved.
+ *
+ */
+
+#include <linux/blkdev.h>
+#include <linux/buffer_head.h>
+#include <linux/fs.h>
+#include <linux/nls.h>
+#include <linux/sched/signal.h>
+
+#include "debug.h"
+#include "ntfs.h"
+#include "ntfs_fs.h"
+
+struct rb_node_key {
+	struct rb_node node;
+	size_t key;
+};
+
+/*
+ * Tree is sorted by start (key)
+ */
+struct e_node {
+	struct rb_node_key start; /* Tree sorted by start */
+	struct rb_node_key count; /* Tree sorted by len*/
+};
+
+static int wnd_rescan(struct wnd_bitmap *wnd);
+static struct buffer_head *wnd_map(struct wnd_bitmap *wnd, size_t iw);
+static bool wnd_is_free_hlp(struct wnd_bitmap *wnd, size_t bit, size_t bits);
+
+static inline u32 wnd_bits(const struct wnd_bitmap *wnd, size_t i)
+{
+	return i + 1 == wnd->nwnd ? wnd->bits_last : wnd->sb->s_blocksize * 8;
+}
+
+/*
+ * wnd_scan
+ *
+ * Scans the window 'buf' in the bit range [wpos, wend) for a run of
+ * 'to_alloc' free bits; 'b_pos'/'b_len' track the biggest free fragment
+ * seen so far.
+ * Returns the starting bit of a suitable run, or -1 if none is found.
+ */
+static size_t wnd_scan(const ulong *buf, size_t wbit, u32 wpos, u32 wend,
+		       size_t to_alloc, size_t *prev_tail, size_t *b_pos,
+		       size_t *b_len)
+{
+	while (wpos < wend) {
+		size_t free_len;
+		u32 free_bits, end;
+		u32 used = find_next_zero_bit(buf, wend, wpos);
+
+		if (used >= wend) {
+			if (*b_len < *prev_tail) {
+				*b_pos = wbit - *prev_tail;
+				*b_len = *prev_tail;
+			}
+
+			*prev_tail = 0;
+			return -1;
+		}
+
+		if (used > wpos) {
+			wpos = used;
+			if (*b_len < *prev_tail) {
+				*b_pos = wbit - *prev_tail;
+				*b_len = *prev_tail;
+			}
+
+			*prev_tail = 0;
+		}
+
+		/*
+		 * Now we have a fragment [wpos, wend) starting with a zero bit
+		 */
+		end = wpos + to_alloc - *prev_tail;
+		free_bits = find_next_bit(buf, min(end, wend), wpos);
+
+		free_len = *prev_tail + free_bits - wpos;
+
+		if (*b_len < free_len) {
+			*b_pos = wbit + wpos - *prev_tail;
+			*b_len = free_len;
+		}
+
+		if (free_len >= to_alloc)
+			return wbit + wpos - *prev_tail;
+
+		if (free_bits >= wend) {
+			*prev_tail += free_bits - wpos;
+			return -1;
+		}
+
+		wpos = free_bits + 1;
+
+		*prev_tail = 0;
+	}
+
+	return -1;
+}
+
+/*
+ * wnd_close
+ *
+ * Releases everything owned by the bitmap descriptor: the per-window
+ * free-bits array, the run and the cached free-extent trees.
+ */
+void wnd_close(struct wnd_bitmap *wnd)
+{
+	struct rb_node *node, *next;
+
+	if (wnd->free_bits != wnd->free_holder)
+		ntfs_free(wnd->free_bits);
+	run_close(&wnd->run);
+
+	node = rb_first(&wnd->start_tree);
+
+	while (node) {
+		next = rb_next(node);
+		rb_erase(node, &wnd->start_tree);
+		ntfs_free(rb_entry(node, struct e_node, start.node));
+		node = next;
+	}
+}
+
+static struct rb_node *rb_lookup(struct rb_root *root, size_t v)
+{
+	struct rb_node **p = &root->rb_node;
+	struct rb_node *r = NULL;
+
+	while (*p) {
+		struct rb_node_key *k;
+
+		k = rb_entry(*p, struct rb_node_key, node);
+		if (v < k->key) {
+			p = &(*p)->rb_left;
+		} else if (v > k->key) {
+			r = &k->node;
+			p = &(*p)->rb_right;
+		} else {
+			return &k->node;
+		}
+	}
+
+	return r;
+}
+
+/*
+ * rb_insert_count
+ *
+ * Helper function to insert an element into the 'count' tree
+ * (sorted by fragment length, longest first; ties broken by start)
+ */
+static inline bool rb_insert_count(struct rb_root *root, struct e_node *e)
+{
+	struct rb_node **p = &root->rb_node;
+	struct rb_node *parent = NULL;
+	size_t e_ckey = e->count.key;
+	size_t e_skey = e->start.key;
+
+	while (*p) {
+		struct e_node *k =
+			rb_entry(parent = *p, struct e_node, count.node);
+
+		if (e_ckey > k->count.key) {
+			p = &(*p)->rb_left;
+		} else if (e_ckey < k->count.key) {
+			p = &(*p)->rb_right;
+		} else if (e_skey < k->start.key) {
+			p = &(*p)->rb_left;
+		} else if (e_skey > k->start.key) {
+			p = &(*p)->rb_right;
+		} else {
+			WARN_ON(1);
+			return false;
+		}
+	}
+
+	rb_link_node(&e->count.node, parent, p);
+	rb_insert_color(&e->count.node, root);
+	return true;
+}
+
+/*
+ * rb_insert_start
+ *
+ * Helper function to insert an element into the 'start' tree
+ * (sorted by the first bit of the extent)
+ */
+static inline bool rb_insert_start(struct rb_root *root, struct e_node *e)
+{
+	struct rb_node **p = &root->rb_node;
+	struct rb_node *parent = NULL;
+	size_t e_skey = e->start.key;
+
+	while (*p) {
+		struct e_node *k;
+
+		parent = *p;
+
+		k = rb_entry(parent, struct e_node, start.node);
+		if (e_skey < k->start.key) {
+			p = &(*p)->rb_left;
+		} else if (e_skey > k->start.key) {
+			p = &(*p)->rb_right;
+		} else {
+			WARN_ON(1);
+			return false;
+		}
+	}
+
+	rb_link_node(&e->start.node, parent, p);
+	rb_insert_color(&e->start.node, root);
+	return true;
+}
+
+#define NTFS_MAX_WND_EXTENTS (32u * 1024u)
+
+/*
+ * wnd_add_free_ext
+ *
+ * adds a new extent of free space to the cached trees
+ * 'build' is true while the trees are being built in wnd_rescan()
+ */
+static void wnd_add_free_ext(struct wnd_bitmap *wnd, size_t bit, size_t len,
+			     bool build)
+{
+	struct e_node *e, *e0 = NULL;
+	size_t ib, end_in = bit + len;
+	struct rb_node *n;
+
+	if (build) {
+		/* Use extent_min to filter too short extents */
+		if (wnd->count >= NTFS_MAX_WND_EXTENTS &&
+		    len <= wnd->extent_min) {
+			wnd->uptodated = -1;
+			return;
+		}
+	} else {
+		/* Try to find extent before 'bit' */
+		n = rb_lookup(&wnd->start_tree, bit);
+
+		if (!n) {
+			n = rb_first(&wnd->start_tree);
+		} else {
+			e = rb_entry(n, struct e_node, start.node);
+			n = rb_next(n);
+			if (e->start.key + e->count.key == bit) {
+				/* Remove left */
+				bit = e->start.key;
+				len += e->count.key;
+				rb_erase(&e->start.node, &wnd->start_tree);
+				rb_erase(&e->count.node, &wnd->count_tree);
+				wnd->count -= 1;
+				e0 = e;
+			}
+		}
+
+		while (n) {
+			size_t next_end;
+
+			e = rb_entry(n, struct e_node, start.node);
+			next_end = e->start.key + e->count.key;
+			if (e->start.key > end_in)
+				break;
+
+			/* Remove right */
+			n = rb_next(n);
+			len += next_end - end_in;
+			end_in = next_end;
+			rb_erase(&e->start.node, &wnd->start_tree);
+			rb_erase(&e->count.node, &wnd->count_tree);
+			wnd->count -= 1;
+
+			if (!e0)
+				e0 = e;
+			else
+				ntfs_free(e);
+		}
+
+		if (wnd->uptodated != 1) {
+			/* Check bits before 'bit' */
+			ib = wnd->zone_bit == wnd->zone_end ||
+					     bit < wnd->zone_end ?
+				     0 :
+				     wnd->zone_end;
+
+			while (bit > ib && wnd_is_free_hlp(wnd, bit - 1, 1)) {
+				bit -= 1;
+				len += 1;
+			}
+
+			/* Check bits after 'end_in' */
+			ib = wnd->zone_bit == wnd->zone_end ||
+					     end_in > wnd->zone_bit ?
+				     wnd->nbits :
+				     wnd->zone_bit;
+
+			while (end_in < ib && wnd_is_free_hlp(wnd, end_in, 1)) {
+				end_in += 1;
+				len += 1;
+			}
+		}
+	}
+	/* Insert new fragment */
+	if (wnd->count >= NTFS_MAX_WND_EXTENTS) {
+		if (e0)
+			ntfs_free(e0);
+
+		wnd->uptodated = -1;
+
+		/* Compare with smallest fragment */
+		n = rb_last(&wnd->count_tree);
+		e = rb_entry(n, struct e_node, count.node);
+		if (len <= e->count.key)
+			goto out; /* Do not insert small fragments */
+
+		if (build) {
+			struct e_node *e2;
+
+			n = rb_prev(n);
+			e2 = rb_entry(n, struct e_node, count.node);
+			/* smallest fragment will be 'e2->count.key' */
+			wnd->extent_min = e2->count.key;
+		}
+
+		/* Replace smallest fragment by new one */
+		rb_erase(&e->start.node, &wnd->start_tree);
+		rb_erase(&e->count.node, &wnd->count_tree);
+		wnd->count -= 1;
+	} else {
+		e = e0 ? e0 : ntfs_alloc(sizeof(struct e_node), 0);
+		if (!e) {
+			wnd->uptodated = -1;
+			goto out;
+		}
+
+		if (build && len <= wnd->extent_min)
+			wnd->extent_min = len;
+	}
+	e->start.key = bit;
+	e->count.key = len;
+	if (len > wnd->extent_max)
+		wnd->extent_max = len;
+
+	rb_insert_start(&wnd->start_tree, e);
+	rb_insert_count(&wnd->count_tree, e);
+	wnd->count += 1;
+
+out:;
+}
+
+/*
+ * wnd_remove_free_ext
+ *
+ * removes a run from the cached free space
+ */
+static void wnd_remove_free_ext(struct wnd_bitmap *wnd, size_t bit, size_t len)
+{
+	struct rb_node *n, *n3;
+	struct e_node *e, *e3;
+	size_t end_in = bit + len;
+	size_t end3, end, new_key, new_len, max_new_len;
+
+	/* Try to find extent before 'bit' */
+	n = rb_lookup(&wnd->start_tree, bit);
+
+	if (!n)
+		return;
+
+	e = rb_entry(n, struct e_node, start.node);
+	end = e->start.key + e->count.key;
+
+	new_key = new_len = 0;
+	len = e->count.key;
+
+	/* Range [bit,end_in) must be inside 'e' or outside 'e' and 'n' */
+	if (e->start.key > bit)
+		;
+	else if (end_in <= end) {
+		/* Range [bit,end_in) inside 'e' */
+		new_key = end_in;
+		new_len = end - end_in;
+		len = bit - e->start.key;
+	} else if (bit > end) {
+		bool bmax = false;
+
+		n3 = rb_next(n);
+
+		while (n3) {
+			e3 = rb_entry(n3, struct e_node, start.node);
+			if (e3->start.key >= end_in)
+				break;
+
+			if (e3->count.key == wnd->extent_max)
+				bmax = true;
+
+			end3 = e3->start.key + e3->count.key;
+			if (end3 > end_in) {
+				e3->start.key = end_in;
+				rb_erase(&e3->count.node, &wnd->count_tree);
+				e3->count.key = end3 - end_in;
+				rb_insert_count(&wnd->count_tree, e3);
+				break;
+			}
+
+			n3 = rb_next(n3);
+			rb_erase(&e3->start.node, &wnd->start_tree);
+			rb_erase(&e3->count.node, &wnd->count_tree);
+			wnd->count -= 1;
+			ntfs_free(e3);
+		}
+		if (!bmax)
+			return;
+		n3 = rb_first(&wnd->count_tree);
+		wnd->extent_max =
+			n3 ? rb_entry(n3, struct e_node, count.node)->count.key :
+			     0;
+		return;
+	}
+
+	if (e->count.key != wnd->extent_max) {
+		;
+	} else if (rb_prev(&e->count.node)) {
+		;
+	} else {
+		n3 = rb_next(&e->count.node);
+		max_new_len = len > new_len ? len : new_len;
+		if (!n3) {
+			wnd->extent_max = max_new_len;
+		} else {
+			e3 = rb_entry(n3, struct e_node, count.node);
+			wnd->extent_max = max(e3->count.key, max_new_len);
+		}
+	}
+
+	if (!len) {
+		if (new_len) {
+			e->start.key = new_key;
+			rb_erase(&e->count.node, &wnd->count_tree);
+			e->count.key = new_len;
+			rb_insert_count(&wnd->count_tree, e);
+		} else {
+			rb_erase(&e->start.node, &wnd->start_tree);
+			rb_erase(&e->count.node, &wnd->count_tree);
+			wnd->count -= 1;
+			ntfs_free(e);
+		}
+		goto out;
+	}
+	rb_erase(&e->count.node, &wnd->count_tree);
+	e->count.key = len;
+	rb_insert_count(&wnd->count_tree, e);
+
+	if (!new_len)
+		goto out;
+
+	if (wnd->count >= NTFS_MAX_WND_EXTENTS) {
+		wnd->uptodated = -1;
+
+		/* Get minimal extent */
+		e = rb_entry(rb_last(&wnd->count_tree), struct e_node,
+			     count.node);
+		if (e->count.key > new_len)
+			goto out;
+
+		/* Replace minimum */
+		rb_erase(&e->start.node, &wnd->start_tree);
+		rb_erase(&e->count.node, &wnd->count_tree);
+		wnd->count -= 1;
+	} else {
+		e = ntfs_alloc(sizeof(struct e_node), 0);
+		if (!e)
+			wnd->uptodated = -1;
+	}
+
+	if (e) {
+		e->start.key = new_key;
+		e->count.key = new_len;
+		rb_insert_start(&wnd->start_tree, e);
+		rb_insert_count(&wnd->count_tree, e);
+		wnd->count += 1;
+	}
+
+out:
+	if (!wnd->count && 1 != wnd->uptodated)
+		wnd_rescan(wnd);
+}
+
+/*
+ * wnd_rescan
+ *
+ * Scans the whole bitmap. Used during initialization.
+ */
+static int wnd_rescan(struct wnd_bitmap *wnd)
+{
+	int err = 0;
+	size_t prev_tail = 0;
+	struct super_block *sb = wnd->sb;
+	struct ntfs_sb_info *sbi = sb->s_fs_info;
+	u64 lbo, len = 0;
+	u32 blocksize = sb->s_blocksize;
+	u8 cluster_bits = sbi->cluster_bits;
+	const u32 ra_bytes = 512 * 1024;
+	const u32 ra_pages = ra_bytes >> PAGE_SHIFT;
+	u32 wbits = 8 * sb->s_blocksize;
+	u32 ra_mask = (ra_bytes >> sb->s_blocksize_bits) - 1;
+	struct address_space *mapping = sb->s_bdev->bd_inode->i_mapping;
+	u32 used, frb;
+	const ulong *buf;
+	size_t wpos, wbit, iw, vbo;
+	struct buffer_head *bh = NULL;
+	CLST lcn, clen;
+
+	wnd->uptodated = 0;
+	wnd->extent_max = 0;
+	wnd->extent_min = MINUS_ONE_T;
+	wnd->total_zeroes = 0;
+
+	vbo = 0;
+
+	for (iw = 0; iw < wnd->nwnd; iw++) {
+		if (iw + 1 == wnd->nwnd)
+			wbits = wnd->bits_last;
+
+		if (wnd->inited) {
+			if (!wnd->free_bits[iw]) {
+				/* all ones */
+				if (prev_tail) {
+					wnd_add_free_ext(wnd,
+							 vbo * 8 - prev_tail,
+							 prev_tail, true);
+					prev_tail = 0;
+				}
+				goto next_wnd;
+			}
+			if (wbits == wnd->free_bits[iw]) {
+				/* all zeroes */
+				prev_tail += wbits;
+				wnd->total_zeroes += wbits;
+				goto next_wnd;
+			}
+		}
+
+		if (!len) {
+			if (!run_lookup_entry(&wnd->run, vbo >> cluster_bits,
+					      &lcn, &clen, NULL)) {
+				err = -ENOENT;
+				goto out;
+			}
+
+			lbo = (u64)lcn << cluster_bits;
+			len = (u64)clen << cluster_bits;
+		}
+
+		if (!(iw & ra_mask))
+			page_cache_readahead_unbounded(
+				mapping, NULL, lbo >> PAGE_SHIFT, ra_pages, 0);
+
+		bh = ntfs_bread(sb, lbo >> sb->s_blocksize_bits);
+		if (!bh) {
+			err = -EIO;
+			goto out;
+		}
+
+		buf = (ulong *)bh->b_data;
+
+		used = __bitmap_weight(buf, wbits);
+		if (used < wbits) {
+			frb = wbits - used;
+			wnd->free_bits[iw] = frb;
+			wnd->total_zeroes += frb;
+		}
+
+		wpos = 0;
+		wbit = vbo * 8;
+
+		if (wbit + wbits > wnd->nbits)
+			wbits = wnd->nbits - wbit;
+
+		do {
+			used = find_next_zero_bit(buf, wbits, wpos);
+
+			if (used > wpos && prev_tail) {
+				wnd_add_free_ext(wnd, wbit + wpos - prev_tail,
+						 prev_tail, true);
+				prev_tail = 0;
+			}
+
+			wpos = used;
+
+			if (wpos >= wbits) {
+				/* No free blocks */
+				prev_tail = 0;
+				break;
+			}
+
+			frb = find_next_bit(buf, wbits, wpos);
+			if (frb >= wbits) {
+				/* keep last free block */
+				prev_tail += frb - wpos;
+				break;
+			}
+
+			wnd_add_free_ext(wnd, wbit + wpos - prev_tail,
+					 frb + prev_tail - wpos, true);
+
+			/* Skip free block and first '1' */
+			wpos = frb + 1;
+			/* Reset previous tail */
+			prev_tail = 0;
+		} while (wpos < wbits);
+
+next_wnd:
+
+		if (bh)
+			put_bh(bh);
+		bh = NULL;
+
+		vbo += blocksize;
+		if (len) {
+			len -= blocksize;
+			lbo += blocksize;
+		}
+	}
+
+	/* Add last block */
+	if (prev_tail)
+		wnd_add_free_ext(wnd, wnd->nbits - prev_tail, prev_tail, true);
+
+	/*
+	 * Before the init cycle wnd->uptodated was 0.
+	 * If any error occurs or a limit is hit during initialization,
+	 * wnd->uptodated becomes -1.
+	 * If 'uptodated' is still 0, the extents tree is fully up to date.
+	 */
+	if (!wnd->uptodated)
+		wnd->uptodated = 1;
+
+	if (wnd->zone_bit != wnd->zone_end) {
+		size_t zlen = wnd->zone_end - wnd->zone_bit;
+
+		wnd->zone_end = wnd->zone_bit;
+		wnd_zone_set(wnd, wnd->zone_bit, zlen);
+	}
+
+out:
+	return err;
+}
+
+/*
+ * wnd_init
+ */
+int wnd_init(struct wnd_bitmap *wnd, struct super_block *sb, size_t nbits)
+{
+	int err;
+	u32 blocksize = sb->s_blocksize;
+	u32 wbits = blocksize * 8;
+
+	init_rwsem(&wnd->rw_lock);
+
+	wnd->sb = sb;
+	wnd->nbits = nbits;
+	wnd->total_zeroes = nbits;
+	wnd->extent_max = MINUS_ONE_T;
+	wnd->zone_bit = wnd->zone_end = 0;
+	wnd->nwnd = bytes_to_block(sb, bitmap_size(nbits));
+	wnd->bits_last = nbits & (wbits - 1);
+	if (!wnd->bits_last)
+		wnd->bits_last = wbits;
+
+	if (wnd->nwnd <= ARRAY_SIZE(wnd->free_holder)) {
+		wnd->free_bits = wnd->free_holder;
+	} else {
+		wnd->free_bits = ntfs_alloc(wnd->nwnd * sizeof(u16), 1);
+		if (!wnd->free_bits)
+			return -ENOMEM;
+	}
+
+	err = wnd_rescan(wnd);
+	if (err)
+		return err;
+
+	wnd->inited = true;
+
+	return 0;
+}
+
+/*
+ * wnd_map
+ *
+ * Reads the buffer head that backs the requested bitmap window
+ */
+static struct buffer_head *wnd_map(struct wnd_bitmap *wnd, size_t iw)
+{
+	size_t vbo;
+	CLST lcn, clen;
+	struct super_block *sb = wnd->sb;
+	struct ntfs_sb_info *sbi;
+	struct buffer_head *bh;
+	u64 lbo;
+
+	sbi = sb->s_fs_info;
+	vbo = (u64)iw << sb->s_blocksize_bits;
+
+	if (!run_lookup_entry(&wnd->run, vbo >> sbi->cluster_bits, &lcn, &clen,
+			      NULL)) {
+		return ERR_PTR(-ENOENT);
+	}
+
+	lbo = ((u64)lcn << sbi->cluster_bits) + (vbo & sbi->cluster_mask);
+
+	bh = ntfs_bread(wnd->sb, lbo >> sb->s_blocksize_bits);
+
+	if (!bh)
+		return ERR_PTR(-EIO);
+
+	return bh;
+}
+
+/*
+ * wnd_set_free
+ *
+ * Marks the bits range from bit to bit + bits as free
+ */
+int wnd_set_free(struct wnd_bitmap *wnd, size_t bit, size_t bits)
+{
+	int err = 0;
+	struct super_block *sb = wnd->sb;
+	size_t bits0 = bits;
+	u32 wbits = 8 * sb->s_blocksize;
+	size_t iw = bit >> (sb->s_blocksize_bits + 3);
+	u32 wbit = bit & (wbits - 1);
+	struct buffer_head *bh;
+
+	while (iw < wnd->nwnd && bits) {
+		u32 tail, op;
+		ulong *buf;
+
+		if (iw + 1 == wnd->nwnd)
+			wbits = wnd->bits_last;
+
+		tail = wbits - wbit;
+		op = tail < bits ? tail : bits;
+
+		bh = wnd_map(wnd, iw);
+		if (IS_ERR(bh)) {
+			err = PTR_ERR(bh);
+			break;
+		}
+
+		buf = (ulong *)bh->b_data;
+
+		lock_buffer(bh);
+
+		__bitmap_clear(buf, wbit, op);
+
+		wnd->free_bits[iw] += op;
+
+		set_buffer_uptodate(bh);
+		mark_buffer_dirty(bh);
+		unlock_buffer(bh);
+		put_bh(bh);
+
+		wnd->total_zeroes += op;
+		bits -= op;
+		wbit = 0;
+		iw += 1;
+	}
+
+	wnd_add_free_ext(wnd, bit, bits0, false);
+
+	return err;
+}
+
+/*
+ * wnd_set_used
+ *
+ * Marks the bits range from bit to bit + bits as used
+ */
+int wnd_set_used(struct wnd_bitmap *wnd, size_t bit, size_t bits)
+{
+	int err = 0;
+	struct super_block *sb = wnd->sb;
+	size_t bits0 = bits;
+	size_t iw = bit >> (sb->s_blocksize_bits + 3);
+	u32 wbits = 8 * sb->s_blocksize;
+	u32 wbit = bit & (wbits - 1);
+	struct buffer_head *bh;
+
+	while (iw < wnd->nwnd && bits) {
+		u32 tail, op;
+		ulong *buf;
+
+		if (unlikely(iw + 1 == wnd->nwnd))
+			wbits = wnd->bits_last;
+
+		tail = wbits - wbit;
+		op = tail < bits ? tail : bits;
+
+		bh = wnd_map(wnd, iw);
+		if (IS_ERR(bh)) {
+			err = PTR_ERR(bh);
+			break;
+		}
+		buf = (ulong *)bh->b_data;
+
+		lock_buffer(bh);
+
+		__bitmap_set(buf, wbit, op);
+		wnd->free_bits[iw] -= op;
+
+		set_buffer_uptodate(bh);
+		mark_buffer_dirty(bh);
+		unlock_buffer(bh);
+		put_bh(bh);
+
+		wnd->total_zeroes -= op;
+		bits -= op;
+		wbit = 0;
+		iw += 1;
+	}
+
+	if (!RB_EMPTY_ROOT(&wnd->start_tree))
+		wnd_remove_free_ext(wnd, bit, bits0);
+
+	return err;
+}
+
+/*
+ * wnd_is_free_hlp
+ *
+ * Returns true if all clusters [bit, bit+bits) are free (bitmap only)
+ */
+static bool wnd_is_free_hlp(struct wnd_bitmap *wnd, size_t bit, size_t bits)
+{
+	struct super_block *sb = wnd->sb;
+	size_t iw = bit >> (sb->s_blocksize_bits + 3);
+	u32 wbits = 8 * sb->s_blocksize;
+	u32 wbit = bit & (wbits - 1);
+
+	while (iw < wnd->nwnd && bits) {
+		u32 tail, op;
+
+		if (unlikely(iw + 1 == wnd->nwnd))
+			wbits = wnd->bits_last;
+
+		tail = wbits - wbit;
+		op = tail < bits ? tail : bits;
+
+		if (wbits != wnd->free_bits[iw]) {
+			bool ret;
+			struct buffer_head *bh = wnd_map(wnd, iw);
+
+			if (IS_ERR(bh))
+				return false;
+
+			ret = are_bits_clear((ulong *)bh->b_data, wbit, op);
+
+			put_bh(bh);
+			if (!ret)
+				return false;
+		}
+
+		bits -= op;
+		wbit = 0;
+		iw += 1;
+	}
+
+	return true;
+}
+
+/*
+ * wnd_is_free
+ *
+ * Returns true if all clusters [bit, bit+bits) are free
+ */
+bool wnd_is_free(struct wnd_bitmap *wnd, size_t bit, size_t bits)
+{
+	bool ret;
+	struct rb_node *n;
+	size_t end;
+	struct e_node *e;
+
+	if (RB_EMPTY_ROOT(&wnd->start_tree))
+		goto use_wnd;
+
+	n = rb_lookup(&wnd->start_tree, bit);
+	if (!n)
+		goto use_wnd;
+
+	e = rb_entry(n, struct e_node, start.node);
+
+	end = e->start.key + e->count.key;
+
+	if (bit < end && bit + bits <= end)
+		return true;
+
+use_wnd:
+	ret = wnd_is_free_hlp(wnd, bit, bits);
+
+	return ret;
+}
+
+/*
+ * wnd_is_used
+ *
+ * Returns true if all clusters [bit, bit+bits) are used
+ */
+bool wnd_is_used(struct wnd_bitmap *wnd, size_t bit, size_t bits)
+{
+	bool ret = false;
+	struct super_block *sb = wnd->sb;
+	size_t iw = bit >> (sb->s_blocksize_bits + 3);
+	u32 wbits = 8 * sb->s_blocksize;
+	u32 wbit = bit & (wbits - 1);
+	size_t end;
+	struct rb_node *n;
+	struct e_node *e;
+
+	if (RB_EMPTY_ROOT(&wnd->start_tree))
+		goto use_wnd;
+
+	end = bit + bits;
+	n = rb_lookup(&wnd->start_tree, end - 1);
+	if (!n)
+		goto use_wnd;
+
+	e = rb_entry(n, struct e_node, start.node);
+	if (e->start.key + e->count.key > bit)
+		return false;
+
+use_wnd:
+	while (iw < wnd->nwnd && bits) {
+		u32 tail, op;
+
+		if (unlikely(iw + 1 == wnd->nwnd))
+			wbits = wnd->bits_last;
+
+		tail = wbits - wbit;
+		op = tail < bits ? tail : bits;
+
+		if (wnd->free_bits[iw]) {
+			bool ret;
+			struct buffer_head *bh = wnd_map(wnd, iw);
+
+			if (IS_ERR(bh))
+				goto out;
+
+			ret = are_bits_set((ulong *)bh->b_data, wbit, op);
+			put_bh(bh);
+			if (!ret)
+				goto out;
+		}
+
+		bits -= op;
+		wbit = 0;
+		iw += 1;
+	}
+	ret = true;
+
+out:
+	return ret;
+}
+
+/*
+ * wnd_find
+ * - flags - BITMAP_FIND_XXX flags
+ *
+ * looks for free space
+ * Returns 0 if not found
+ */
+size_t wnd_find(struct wnd_bitmap *wnd, size_t to_alloc, size_t hint,
+		size_t flags, size_t *allocated)
+{
+	struct super_block *sb;
+	u32 wbits, wpos, wzbit, wzend;
+	size_t fnd, max_alloc, b_len, b_pos;
+	size_t iw, prev_tail, nwnd, wbit, ebit, zbit, zend;
+	size_t to_alloc0 = to_alloc;
+	const ulong *buf;
+	const struct e_node *e;
+	const struct rb_node *pr, *cr;
+	u8 log2_bits;
+	bool fbits_valid;
+	struct buffer_head *bh;
+
+	/* fast checking for available free space */
+	if (flags & BITMAP_FIND_FULL) {
+		size_t zeroes = wnd_zeroes(wnd);
+
+		zeroes -= wnd->zone_end - wnd->zone_bit;
+		if (zeroes < to_alloc0)
+			goto no_space;
+
+		if (to_alloc0 > wnd->extent_max)
+			goto no_space;
+	} else {
+		if (to_alloc > wnd->extent_max)
+			to_alloc = wnd->extent_max;
+	}
+
+	if (wnd->zone_bit <= hint && hint < wnd->zone_end)
+		hint = wnd->zone_end;
+
+	max_alloc = wnd->nbits;
+	b_len = b_pos = 0;
+
+	if (hint >= max_alloc)
+		hint = 0;
+
+	if (RB_EMPTY_ROOT(&wnd->start_tree)) {
+		if (wnd->uptodated == 1) {
+			/* extents tree is updated -> no free space */
+			goto no_space;
+		}
+		goto scan_bitmap;
+	}
+
+	e = NULL;
+	if (!hint)
+		goto allocate_biggest;
+
+	/* Use hint: enumerate extents by start >= hint */
+	pr = NULL;
+	cr = wnd->start_tree.rb_node;
+
+	for (;;) {
+		e = rb_entry(cr, struct e_node, start.node);
+
+		if (e->start.key == hint)
+			break;
+
+		if (e->start.key < hint) {
+			pr = cr;
+			cr = cr->rb_right;
+			if (!cr)
+				break;
+			continue;
+		}
+
+		cr = cr->rb_left;
+		if (!cr) {
+			e = pr ? rb_entry(pr, struct e_node, start.node) : NULL;
+			break;
+		}
+	}
+
+	if (!e)
+		goto allocate_biggest;
+
+	if (e->start.key + e->count.key > hint) {
+		/* We have found an extent containing 'hint' */
+		size_t len = e->start.key + e->count.key - hint;
+
+		if (len >= to_alloc && hint + to_alloc <= max_alloc) {
+			fnd = hint;
+			goto found;
+		}
+
+		if (!(flags & BITMAP_FIND_FULL)) {
+			if (len > to_alloc)
+				len = to_alloc;
+
+			if (hint + len <= max_alloc) {
+				fnd = hint;
+				to_alloc = len;
+				goto found;
+			}
+		}
+	}
+
+allocate_biggest:
+
+	/* Allocate from biggest free extent */
+	e = rb_entry(rb_first(&wnd->count_tree), struct e_node, count.node);
+	if (e->count.key != wnd->extent_max)
+		wnd->extent_max = e->count.key;
+
+	if (e->count.key < max_alloc) {
+		if (e->count.key >= to_alloc) {
+			;
+		} else if (flags & BITMAP_FIND_FULL) {
+			if (e->count.key < to_alloc0) {
+				/* Biggest free block is less than requested */
+				goto no_space;
+			}
+			to_alloc = e->count.key;
+		} else if (-1 != wnd->uptodated) {
+			to_alloc = e->count.key;
+		} else {
+			/* Check if we can use more bits */
+			size_t op, max_check;
+			struct rb_root start_tree;
+
+			memcpy(&start_tree, &wnd->start_tree,
+			       sizeof(struct rb_root));
+			memset(&wnd->start_tree, 0, sizeof(struct rb_root));
+
+			max_check = e->start.key + to_alloc;
+			if (max_check > max_alloc)
+				max_check = max_alloc;
+			for (op = e->start.key + e->count.key; op < max_check;
+			     op++) {
+				if (!wnd_is_free(wnd, op, 1))
+					break;
+			}
+			memcpy(&wnd->start_tree, &start_tree,
+			       sizeof(struct rb_root));
+			to_alloc = op - e->start.key;
+		}
+
+		/* Prepare to return */
+		fnd = e->start.key;
+		if (e->start.key + to_alloc > max_alloc)
+			to_alloc = max_alloc - e->start.key;
+		goto found;
+	}
+
+	if (wnd->uptodated == 1) {
+		/* extents tree is updated -> no free space */
+		goto no_space;
+	}
+
+	b_len = e->count.key;
+	b_pos = e->start.key;
+
+scan_bitmap:
+	sb = wnd->sb;
+	log2_bits = sb->s_blocksize_bits + 3;
+
+	/* At most two ranges [hint, max_alloc) + [0, hint) */
+Again:
+
+	/* TODO: optimize request for case nbits > wbits */
+	iw = hint >> log2_bits;
+	wbits = sb->s_blocksize * 8;
+	wpos = hint & (wbits - 1);
+	prev_tail = 0;
+	fbits_valid = true;
+
+	if (max_alloc == wnd->nbits) {
+		nwnd = wnd->nwnd;
+	} else {
+		size_t t = max_alloc + wbits - 1;
+
+		nwnd = likely(t > max_alloc) ? (t >> log2_bits) : wnd->nwnd;
+	}
+
+	/* Enumerate all windows */
+	for (; iw < nwnd; iw++) {
+		wbit = iw << log2_bits;
+
+		if (!wnd->free_bits[iw]) {
+			if (prev_tail > b_len) {
+				b_pos = wbit - prev_tail;
+				b_len = prev_tail;
+			}
+
+			/* Skip fully used window */
+			prev_tail = 0;
+			wpos = 0;
+			continue;
+		}
+
+		if (unlikely(iw + 1 == nwnd)) {
+			if (max_alloc == wnd->nbits) {
+				wbits = wnd->bits_last;
+			} else {
+				size_t t = max_alloc & (wbits - 1);
+
+				if (t) {
+					wbits = t;
+					fbits_valid = false;
+				}
+			}
+		}
+
+		if (wnd->zone_end > wnd->zone_bit) {
+			ebit = wbit + wbits;
+			zbit = max(wnd->zone_bit, wbit);
+			zend = min(wnd->zone_end, ebit);
+
+			/* Here we have a window [wbit, ebit) and zone [zbit, zend) */
+			if (zend <= zbit) {
+				/* Zone does not overlap window */
+			} else {
+				wzbit = zbit - wbit;
+				wzend = zend - wbit;
+
+				/* Zone overlaps window */
+				if (wnd->free_bits[iw] == wzend - wzbit) {
+					prev_tail = 0;
+					wpos = 0;
+					continue;
+				}
+
+				/* Scan two ranges window: [wbit, zbit) and [zend, ebit) */
+				bh = wnd_map(wnd, iw);
+
+				if (IS_ERR(bh)) {
+					/* TODO: error */
+					prev_tail = 0;
+					wpos = 0;
+					continue;
+				}
+
+				buf = (ulong *)bh->b_data;
+
+				/* Scan range [wbit, zbit) */
+				if (wpos < wzbit) {
+					/* Scan range [wpos, zbit) */
+					fnd = wnd_scan(buf, wbit, wpos, wzbit,
+						       to_alloc, &prev_tail,
+						       &b_pos, &b_len);
+					if (fnd != MINUS_ONE_T) {
+						put_bh(bh);
+						goto found;
+					}
+				}
+
+				prev_tail = 0;
+
+				/* Scan range [zend, ebit) */
+				if (wzend < wbits) {
+					fnd = wnd_scan(buf, wbit,
+						       max(wzend, wpos), wbits,
+						       to_alloc, &prev_tail,
+						       &b_pos, &b_len);
+					if (fnd != MINUS_ONE_T) {
+						put_bh(bh);
+						goto found;
+					}
+				}
+
+				wpos = 0;
+				put_bh(bh);
+				continue;
+			}
+		}
+
+		/* Current window does not overlap zone */
+		if (!wpos && fbits_valid && wnd->free_bits[iw] == wbits) {
+			/* window is empty */
+			if (prev_tail + wbits >= to_alloc) {
+				fnd = wbit + wpos - prev_tail;
+				goto found;
+			}
+
+			/* Increase 'prev_tail' and process next window */
+			prev_tail += wbits;
+			wpos = 0;
+			continue;
+		}
+
+		/* read window */
+		bh = wnd_map(wnd, iw);
+		if (IS_ERR(bh)) {
+			// TODO: error
+			prev_tail = 0;
+			wpos = 0;
+			continue;
+		}
+
+		buf = (ulong *)bh->b_data;
+
+		/* Scan range [wpos, wbits) */
+		fnd = wnd_scan(buf, wbit, wpos, wbits, to_alloc, &prev_tail,
+			       &b_pos, &b_len);
+		put_bh(bh);
+		if (fnd != MINUS_ONE_T)
+			goto found;
+	}
+
+	if (b_len < prev_tail) {
+		/* The last fragment */
+		b_len = prev_tail;
+		b_pos = max_alloc - prev_tail;
+	}
+
+	if (hint) {
+		/*
+		 * We have scanned the range [hint, max_alloc).
+		 * Prepare to scan the range [0, hint + to_alloc).
+		 */
+		size_t nextmax = hint + to_alloc;
+
+		if (likely(nextmax >= hint) && nextmax < max_alloc)
+			max_alloc = nextmax;
+		hint = 0;
+		goto Again;
+	}
+
+	if (!b_len)
+		goto no_space;
+
+	wnd->extent_max = b_len;
+
+	if (flags & BITMAP_FIND_FULL)
+		goto no_space;
+
+	fnd = b_pos;
+	to_alloc = b_len;
+
+found:
+	if (flags & BITMAP_FIND_MARK_AS_USED) {
+		/* TODO optimize remove extent (pass 'e'?) */
+		if (wnd_set_used(wnd, fnd, to_alloc))
+			goto no_space;
+	} else if (wnd->extent_max != MINUS_ONE_T &&
+		   to_alloc > wnd->extent_max) {
+		wnd->extent_max = to_alloc;
+	}
+
+	*allocated = fnd;
+	return to_alloc;
+
+no_space:
+	return 0;
+}
+
+/*
+ * wnd_extend
+ *
+ * Extend bitmap ($MFT bitmap)
+ */
+int wnd_extend(struct wnd_bitmap *wnd, size_t new_bits)
+{
+	int err;
+	struct super_block *sb = wnd->sb;
+	struct ntfs_sb_info *sbi = sb->s_fs_info;
+	u32 blocksize = sb->s_blocksize;
+	u32 wbits = blocksize * 8;
+	u32 b0, new_last;
+	size_t bits, iw, new_wnd;
+	size_t old_bits = wnd->nbits;
+	u16 *new_free;
+
+	if (new_bits <= old_bits)
+		return -EINVAL;
+
+	/* align to 8 byte boundary */
+	new_wnd = bytes_to_block(sb, bitmap_size(new_bits));
+	new_last = new_bits & (wbits - 1);
+	if (!new_last)
+		new_last = wbits;
+
+	if (new_wnd != wnd->nwnd) {
+		if (new_wnd <= ARRAY_SIZE(wnd->free_holder)) {
+			new_free = wnd->free_holder;
+		} else {
+			new_free = ntfs_alloc(new_wnd * sizeof(u16), 0);
+			if (!new_free)
+				return -ENOMEM;
+		}
+
+		if (new_free != wnd->free_bits)
+			memcpy(new_free, wnd->free_bits,
+			       wnd->nwnd * sizeof(short));
+		memset(new_free + wnd->nwnd, 0,
+		       (new_wnd - wnd->nwnd) * sizeof(short));
+		if (wnd->free_bits != wnd->free_holder)
+			ntfs_free(wnd->free_bits);
+
+		wnd->free_bits = new_free;
+	}
+
+	/* Zero bits [old_bits, new_bits) */
+	bits = new_bits - old_bits;
+	b0 = old_bits & (wbits - 1);
+
+	for (iw = old_bits >> (sb->s_blocksize_bits + 3); bits; iw += 1) {
+		u32 op;
+		size_t frb;
+		u64 vbo, lbo, bytes;
+		struct buffer_head *bh;
+		ulong *buf;
+
+		if (iw + 1 == new_wnd)
+			wbits = new_last;
+
+		op = b0 + bits > wbits ? wbits - b0 : bits;
+		vbo = (u64)iw * blocksize;
+
+		err = ntfs_vbo_to_lbo(sbi, &wnd->run, vbo, &lbo, &bytes);
+		if (err)
+			break;
+
+		bh = ntfs_bread(sb, lbo >> sb->s_blocksize_bits);
+		if (!bh)
+			return -EIO;
+
+		lock_buffer(bh);
+		buf = (ulong *)bh->b_data;
+
+		__bitmap_clear(buf, b0, blocksize * 8 - b0);
+		frb = wbits - __bitmap_weight(buf, wbits);
+		wnd->total_zeroes += frb - wnd->free_bits[iw];
+		wnd->free_bits[iw] = frb;
+
+		set_buffer_uptodate(bh);
+		mark_buffer_dirty(bh);
+		unlock_buffer(bh);
+		/*err = sync_dirty_buffer(bh);*/
+
+		b0 = 0;
+		bits -= op;
+	}
+
+	wnd->nbits = new_bits;
+	wnd->nwnd = new_wnd;
+	wnd->bits_last = new_last;
+
+	wnd_add_free_ext(wnd, old_bits, new_bits - old_bits, false);
+
+	return 0;
+}
+
+/*
+ * wnd_zone_set
+ */
+void wnd_zone_set(struct wnd_bitmap *wnd, size_t lcn, size_t len)
+{
+	size_t zlen;
+
+	zlen = wnd->zone_end - wnd->zone_bit;
+	if (zlen)
+		wnd_add_free_ext(wnd, wnd->zone_bit, zlen, false);
+
+	if (!RB_EMPTY_ROOT(&wnd->start_tree) && len)
+		wnd_remove_free_ext(wnd, lcn, len);
+
+	wnd->zone_bit = lcn;
+	wnd->zone_end = lcn + len;
+}
+
+int ntfs_trim_fs(struct ntfs_sb_info *sbi, struct fstrim_range *range)
+{
+	int err = 0;
+	struct super_block *sb = sbi->sb;
+	struct wnd_bitmap *wnd = &sbi->used.bitmap;
+	u32 wbits = 8 * sb->s_blocksize;
+	CLST len = 0, lcn = 0, done = 0;
+	CLST minlen = bytes_to_cluster(sbi, range->minlen);
+	CLST lcn_from = bytes_to_cluster(sbi, range->start);
+	size_t iw = lcn_from >> (sb->s_blocksize_bits + 3);
+	u32 wbit = lcn_from & (wbits - 1);
+	const ulong *buf;
+	CLST lcn_to;
+
+	if (!minlen)
+		minlen = 1;
+
+	if (range->len == (u64)-1)
+		lcn_to = wnd->nbits;
+	else
+		lcn_to = bytes_to_cluster(sbi, range->start + range->len);
+
+	down_read_nested(&wnd->rw_lock, BITMAP_MUTEX_CLUSTERS);
+
+	for (; iw < wnd->nwnd; iw++, wbit = 0) {
+		CLST lcn_wnd = iw * wbits;
+		struct buffer_head *bh;
+
+		if (lcn_wnd > lcn_to)
+			break;
+
+		if (!wnd->free_bits[iw])
+			continue;
+
+		if (iw + 1 == wnd->nwnd)
+			wbits = wnd->bits_last;
+
+		if (lcn_wnd + wbits > lcn_to)
+			wbits = lcn_to - lcn_wnd;
+
+		bh = wnd_map(wnd, iw);
+		if (IS_ERR(bh)) {
+			err = PTR_ERR(bh);
+			break;
+		}
+
+		buf = (ulong *)bh->b_data;
+
+		for (; wbit < wbits; wbit++) {
+			if (!test_bit(wbit, buf)) {
+				if (!len)
+					lcn = lcn_wnd + wbit;
+				len += 1;
+				continue;
+			}
+			if (len >= minlen) {
+				err = ntfs_discard(sbi, lcn, len);
+				if (err) {
+					put_bh(bh);
+					goto out;
+				}
+				done += len;
+			}
+			len = 0;
+		}
+		put_bh(bh);
+	}
+
+	/* Process the last fragment */
+	if (len >= minlen) {
+		err = ntfs_discard(sbi, lcn, len);
+		if (err)
+			goto out;
+		done += len;
+	}
+
+out:
+	range->len = (u64)done << sbi->cluster_bits;
+
+	up_read(&wnd->rw_lock);
+
+	return err;
+}
-- 
2.25.4


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v5 05/10] fs/ntfs3: Add attrib operations
  2020-09-11 14:10 [PATCH v5 00/10] NTFS read-write driver GPL implementation by Paragon Software Konstantin Komarov
  2020-09-11 14:10 ` [PATCH v5 01/10] fs/ntfs3: Add headers and misc files Konstantin Komarov
  2020-09-11 14:10 ` [PATCH v5 03/10] fs/ntfs3: Add bitmap Konstantin Komarov
@ 2020-09-11 14:10 ` Konstantin Komarov
  2020-09-11 14:10 ` [PATCH v5 06/10] fs/ntfs3: Add compression Konstantin Komarov
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Konstantin Komarov @ 2020-09-11 14:10 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: viro, linux-kernel, pali, dsterba, aaptel, willy, rdunlap, joe,
	mark, nborisov, Konstantin Komarov

This adds attrib operations

Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
---
 fs/ntfs3/attrib.c   | 1300 +++++++++++++++++++++++++++++++++++++++++++
 fs/ntfs3/attrlist.c |  462 +++++++++++++++
 fs/ntfs3/xattr.c    |  996 +++++++++++++++++++++++++++++++++
 3 files changed, 2758 insertions(+)
 create mode 100644 fs/ntfs3/attrib.c
 create mode 100644 fs/ntfs3/attrlist.c
 create mode 100644 fs/ntfs3/xattr.c

diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c
new file mode 100644
index 000000000000..6f8c33bf9d54
--- /dev/null
+++ b/fs/ntfs3/attrib.c
@@ -0,0 +1,1300 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  linux/fs/ntfs3/attrib.c
+ *
+ * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved.
+ *
+ * TODO: merge attr_set_size/attr_data_get_block/attr_allocate_frame?
+ */
+
+#include <linux/blkdev.h>
+#include <linux/buffer_head.h>
+#include <linux/fs.h>
+#include <linux/hash.h>
+#include <linux/nls.h>
+#include <linux/ratelimit.h>
+#include <linux/sched/signal.h>
+#include <linux/slab.h>
+
+#include "debug.h"
+#include "ntfs.h"
+#include "ntfs_fs.h"
+
+/*
+ * External NTFS_MIN_LOG2_OF_CLUMP/NTFS_MAX_LOG2_OF_CLUMP can be set to
+ * tune the preallocation algorithm
+ */
+#ifndef NTFS_MIN_LOG2_OF_CLUMP
+#define NTFS_MIN_LOG2_OF_CLUMP 16
+#endif
+
+#ifndef NTFS_MAX_LOG2_OF_CLUMP
+#define NTFS_MAX_LOG2_OF_CLUMP 26
+#endif
+
+// 16M
+#define NTFS_CLUMP_MIN (1 << (NTFS_MIN_LOG2_OF_CLUMP + 8))
+// 16G
+#define NTFS_CLUMP_MAX (1ull << (NTFS_MAX_LOG2_OF_CLUMP + 8))
+
+/*
+ * get_pre_allocated
+ *
+ * Returns 'size' rounded up to the preallocation clump
+ */
+static inline u64 get_pre_allocated(u64 size)
+{
+	u32 clump;
+	u8 align_shift;
+	u64 ret;
+
+	if (size <= NTFS_CLUMP_MIN) {
+		clump = 1 << NTFS_MIN_LOG2_OF_CLUMP;
+		align_shift = NTFS_MIN_LOG2_OF_CLUMP;
+	} else if (size >= NTFS_CLUMP_MAX) {
+		clump = 1 << NTFS_MAX_LOG2_OF_CLUMP;
+		align_shift = NTFS_MAX_LOG2_OF_CLUMP;
+	} else {
+		align_shift = NTFS_MIN_LOG2_OF_CLUMP - 1 +
+			      __ffs(size >> (8 + NTFS_MIN_LOG2_OF_CLUMP));
+		clump = 1u << align_shift;
+	}
+
+	ret = (((size + clump - 1) >> align_shift)) << align_shift;
+
+	return ret;
+}
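+
+/*
+ * For example, with the default limits above a size of 1000000 bytes is
+ * below NTFS_CLUMP_MIN, so get_pre_allocated() rounds it up to the next
+ * 64K clump and returns 1048576; sizes of NTFS_CLUMP_MAX and above are
+ * rounded up to a 64M clump.
+ */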
+
+/*
+ * attr_must_be_resident
+ *
+ * returns true if attribute must be resident
+ */
+static inline bool attr_must_be_resident(struct ntfs_sb_info *sbi,
+					 enum ATTR_TYPE type)
+{
+	const struct ATTR_DEF_ENTRY *de;
+
+	switch (type) {
+	case ATTR_STD:
+	case ATTR_NAME:
+	case ATTR_ID:
+	case ATTR_LABEL:
+	case ATTR_VOL_INFO:
+	case ATTR_ROOT:
+	case ATTR_EA_INFO:
+		return true;
+	default:
+		de = ntfs_query_def(sbi, type);
+		if (de && (de->flags & NTFS_ATTR_MUST_BE_RESIDENT))
+			return true;
+		return false;
+	}
+}
+
+/*
+ * attr_load_runs
+ *
+ * load all runs stored in 'attr'
+ */
+int attr_load_runs(struct ATTRIB *attr, struct ntfs_inode *ni,
+		   struct runs_tree *run)
+{
+	int err;
+	CLST svcn = le64_to_cpu(attr->nres.svcn);
+	CLST evcn = le64_to_cpu(attr->nres.evcn);
+	u32 asize;
+	u16 run_off;
+
+	if (svcn >= evcn + 1 || run_is_mapped_full(run, svcn, evcn))
+		return 0;
+
+	asize = le32_to_cpu(attr->size);
+	run_off = le16_to_cpu(attr->nres.run_off);
+	err = run_unpack_ex(run, ni->mi.sbi, ni->mi.rno, svcn, evcn,
+			    Add2Ptr(attr, run_off), asize - run_off);
+	if (err < 0)
+		return err;
+
+	return 0;
+}
+
+/*
+ * run_deallocate_ex
+ *
+ * Deallocate clusters
+ */
+static int run_deallocate_ex(struct ntfs_sb_info *sbi, struct runs_tree *run,
+			     CLST vcn, CLST len, CLST *done, bool trim)
+{
+	int err = 0;
+	CLST vcn0 = vcn, lcn, clen, dn = 0;
+	size_t idx;
+
+	if (!len)
+		goto out;
+
+	if (!run_lookup_entry(run, vcn, &lcn, &clen, &idx)) {
+failed:
+		run_truncate(run, vcn0);
+		err = -EINVAL;
+		goto out;
+	}
+
+	for (;;) {
+		if (clen > len)
+			clen = len;
+
+		if (!clen) {
+			err = -EINVAL;
+			goto out;
+		}
+
+		if (lcn != SPARSE_LCN) {
+			mark_as_free_ex(sbi, lcn, clen, trim);
+			dn += clen;
+		}
+
+		len -= clen;
+		if (!len)
+			break;
+
+		if (!run_get_entry(run, ++idx, &vcn, &lcn, &clen)) {
+			// save memory - don't load entire run
+			goto failed;
+		}
+	}
+
+out:
+	if (done)
+		*done = dn;
+
+	return err;
+}
+
+/*
+ * attr_allocate_clusters
+ *
+ * find free space, mark it as used and store in 'run'
+ */
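+/*
+ * A rough usage sketch (this mirrors the call in attr_set_size_res()):
+ *
+ *	CLST alen;
+ *
+ *	err = attr_allocate_clusters(sbi, run, 0, 0, len, NULL,
+ *				     ALLOCATE_DEF, &alen, 0, NULL);
+ *
+ * On failure everything allocated so far is freed again via
+ * run_deallocate_ex() before the error is returned.
+ */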
+int attr_allocate_clusters(struct ntfs_sb_info *sbi, struct runs_tree *run,
+			   CLST vcn, CLST lcn, CLST len, CLST *pre_alloc,
+			   enum ALLOCATE_OPT opt, CLST *alen, const size_t fr,
+			   CLST *new_lcn)
+{
+	int err;
+	CLST flen, vcn0 = vcn, pre = pre_alloc ? *pre_alloc : 0;
+	struct wnd_bitmap *wnd = &sbi->used.bitmap;
+	size_t cnt = run->count;
+
+	for (;;) {
+		err = ntfs_look_for_free_space(sbi, lcn, len + pre, &lcn, &flen,
+					       opt);
+
+		if (err == -ENOSPC && pre) {
+			pre = 0;
+			if (*pre_alloc)
+				*pre_alloc = 0;
+			continue;
+		}
+
+		if (err)
+			goto out;
+
+		if (new_lcn && vcn == vcn0)
+			*new_lcn = lcn;
+
+		/* Add new fragment into run storage */
+		if (!run_add_entry(run, vcn, lcn, flen)) {
+			down_write_nested(&wnd->rw_lock, BITMAP_MUTEX_CLUSTERS);
+			wnd_set_free(wnd, lcn, flen);
+			up_write(&wnd->rw_lock);
+			err = -ENOMEM;
+			goto out;
+		}
+
+		vcn += flen;
+
+		if (flen >= len || opt == ALLOCATE_MFT ||
+		    (fr && run->count - cnt >= fr)) {
+			*alen = vcn - vcn0;
+			return 0;
+		}
+
+		len -= flen;
+	}
+
+out:
+	/* undo */
+	run_deallocate_ex(sbi, run, vcn0, vcn - vcn0, NULL, false);
+	run_truncate(run, vcn0);
+
+	return err;
+}
+
+/*
+ * attr_set_size_res
+ *
+ * helper for attr_set_size
+ */
+static int attr_set_size_res(struct ntfs_inode *ni, struct ATTRIB *attr,
+			     struct ATTR_LIST_ENTRY *le, struct mft_inode *mi,
+			     u64 new_size, struct runs_tree *run,
+			     struct ATTRIB **ins_attr)
+{
+	int err = 0;
+	struct ntfs_sb_info *sbi = mi->sbi;
+	struct MFT_REC *rec = mi->mrec;
+	u32 used = le32_to_cpu(rec->used);
+	u32 asize = le32_to_cpu(attr->size);
+	u32 aoff = PtrOffset(rec, attr);
+	u32 rsize = le32_to_cpu(attr->res.data_size);
+	u32 tail = used - aoff - asize;
+	char *next = Add2Ptr(attr, asize);
+	int dsize = QuadAlign(new_size) - QuadAlign(rsize);
+	CLST len, alen;
+	struct ATTRIB *attr_s = NULL;
+	bool is_ext;
+
+	if (dsize < 0) {
+		memmove(next + dsize, next, tail);
+	} else if (dsize > 0) {
+		if (used + dsize > sbi->max_bytes_per_attr)
+			goto resident2nonresident;
+
+		memmove(next + dsize, next, tail);
+		memset(next, 0, dsize);
+	}
+
+	rec->used = cpu_to_le32(used + dsize);
+	attr->size = cpu_to_le32(asize + dsize);
+	attr->res.data_size = cpu_to_le32(new_size);
+	mi->dirty = true;
+	*ins_attr = attr;
+
+	return 0;
+
+resident2nonresident:
+	len = bytes_to_cluster(sbi, rsize);
+
+	run_init(run);
+
+	is_ext = is_attr_ext(attr);
+
+	if (!len) {
+		alen = 0;
+	} else if (is_ext) {
+		if (!run_add_entry(run, 0, SPARSE_LCN, len)) {
+			err = -ENOMEM;
+			goto out;
+		}
+		alen = len;
+	} else {
+		err = attr_allocate_clusters(sbi, run, 0, 0, len, NULL,
+					     ALLOCATE_DEF, &alen, 0, NULL);
+		if (err)
+			goto out;
+
+		err = ntfs_sb_write_run(sbi, run, 0, resident_data(attr),
+					rsize);
+		if (err)
+			goto out;
+	}
+
+	attr_s = ntfs_memdup(attr, asize);
+	if (!attr_s) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	/*verify(mi_remove_attr(mi, attr));*/
+	used -= asize;
+	memmove(attr, Add2Ptr(attr, asize), used - aoff);
+	rec->used = cpu_to_le32(used);
+	mi->dirty = true;
+	if (le)
+		al_remove_le(ni, le);
+
+	err = ni_insert_nonresident(ni, attr_s->type, attr_name(attr_s),
+				    attr_s->name_len, run, 0, alen,
+				    attr_s->flags, &attr, NULL);
+	if (err)
+		goto out;
+
+	ntfs_free(attr_s);
+	attr->nres.data_size = cpu_to_le64(rsize);
+	attr->nres.valid_size = attr->nres.data_size;
+
+	*ins_attr = attr;
+
+	if (attr_s->type == ATTR_DATA && !attr_s->name_len &&
+	    run == &ni->file.run) {
+		ni->ni_flags &= ~NI_FLAG_RESIDENT;
+	}
+
+	/* Resident attribute becomes non resident */
+	return 0;
+
+out:
+	/* undo: do not trim new allocated clusters */
+	run_deallocate(sbi, run, false);
+	run_close(run);
+
+	if (attr_s) {
+		memmove(next, Add2Ptr(rec, aoff), used - aoff);
+		memcpy(Add2Ptr(rec, aoff), attr_s, asize);
+		rec->used = cpu_to_le32(used + asize);
+		mi->dirty = true;
+		ntfs_free(attr_s);
+		/*reinsert le*/
+	}
+
+	return err;
+}
+
+/*
+ * attr_set_size
+ *
+ * changes the size of an attribute
+ * Extend:
+ *   - sparse/compressed: no allocated clusters
+ *   - normal: append allocated and preallocated new clusters
+ * Shrink:
+ *   - no deallocation if keep_prealloc is set
+ */
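+/*
+ * E.g. shrinking a normal (non-sparse) $DATA attribute with keep_prealloc
+ * set only lowers nres.data_size and keeps the clusters already allocated,
+ * while growing it appends new clusters plus a preallocation tail from
+ * get_pre_allocated() when the 'prealloc' mount option is enabled.
+ */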
+int attr_set_size(struct ntfs_inode *ni, enum ATTR_TYPE type,
+		  const __le16 *name, u8 name_len, struct runs_tree *run,
+		  u64 new_size, const u64 *new_valid, bool keep_prealloc,
+		  struct ATTRIB **ret)
+{
+	int err = 0;
+	struct ntfs_sb_info *sbi = ni->mi.sbi;
+	u8 cluster_bits = sbi->cluster_bits;
+	bool is_mft =
+		ni->mi.rno == MFT_REC_MFT && type == ATTR_DATA && !name_len;
+	u64 old_valid, old_size, old_alloc, new_alloc, new_alloc_tmp;
+	struct ATTRIB *attr, *attr_b;
+	struct ATTR_LIST_ENTRY *le, *le_b;
+	struct mft_inode *mi, *mi_b;
+	CLST alen, vcn, lcn, new_alen, old_alen, svcn, evcn;
+	CLST next_svcn, pre_alloc = -1, done = 0;
+	bool is_ext;
+	u32 align;
+	struct MFT_REC *rec;
+
+again:
+	le_b = NULL;
+	attr_b = ni_find_attr(ni, NULL, &le_b, type, name, name_len, NULL,
+			      &mi_b);
+	if (!attr_b) {
+		err = -ENOENT;
+		goto out;
+	}
+
+	if (!attr_b->non_res) {
+		err = attr_set_size_res(ni, attr_b, le_b, mi_b, new_size, run,
+					&attr_b);
+		if (err || !attr_b->non_res)
+			goto out;
+
+		/* layout of records may be changed, so do a full search */
+		goto again;
+	}
+
+	is_ext = is_attr_ext(attr_b);
+
+again_1:
+
+	if (is_ext) {
+		align = 1u << (attr_b->nres.c_unit + cluster_bits);
+		if (is_attr_sparsed(attr_b))
+			keep_prealloc = false;
+	} else {
+		align = sbi->cluster_size;
+	}
+
+	old_valid = le64_to_cpu(attr_b->nres.valid_size);
+	old_size = le64_to_cpu(attr_b->nres.data_size);
+	old_alloc = le64_to_cpu(attr_b->nres.alloc_size);
+	old_alen = old_alloc >> cluster_bits;
+
+	new_alloc = (new_size + align - 1) & ~(u64)(align - 1);
+	new_alen = new_alloc >> cluster_bits;
+
+	if (keep_prealloc && is_ext)
+		keep_prealloc = false;
+
+	if (keep_prealloc && new_size < old_size) {
+		attr_b->nres.data_size = cpu_to_le64(new_size);
+		mi_b->dirty = true;
+		goto ok;
+	}
+
+	vcn = old_alen - 1;
+
+	svcn = le64_to_cpu(attr_b->nres.svcn);
+	evcn = le64_to_cpu(attr_b->nres.evcn);
+
+	if (svcn <= vcn && vcn <= evcn) {
+		attr = attr_b;
+		le = le_b;
+		mi = mi_b;
+	} else if (!le_b) {
+		err = -EINVAL;
+		goto out;
+	} else {
+		le = le_b;
+		attr = ni_find_attr(ni, attr_b, &le, type, name, name_len, &vcn,
+				    &mi);
+		if (!attr) {
+			err = -EINVAL;
+			goto out;
+		}
+
+next_le_1:
+		svcn = le64_to_cpu(attr->nres.svcn);
+		evcn = le64_to_cpu(attr->nres.evcn);
+	}
+
+next_le:
+	rec = mi->mrec;
+
+	err = attr_load_runs(attr, ni, run);
+	if (err)
+		goto out;
+
+	if (new_size > old_size) {
+		CLST to_allocate;
+		size_t free;
+
+		if (new_alloc <= old_alloc) {
+			attr_b->nres.data_size = cpu_to_le64(new_size);
+			mi_b->dirty = true;
+			goto ok;
+		}
+
+		to_allocate = new_alen - old_alen;
+add_alloc_in_same_attr_seg:
+		lcn = 0;
+		if (is_mft) {
+			/* MFT allocates clusters from the MFT zone */
+			pre_alloc = 0;
+		} else if (is_ext) {
+			/* no preallocation for sparse/compressed files */
+			pre_alloc = 0;
+		} else if (pre_alloc == -1) {
+			pre_alloc = 0;
+			if (type == ATTR_DATA && !name_len &&
+			    sbi->options.prealloc) {
+				CLST new_alen2 = bytes_to_cluster(
+					sbi, get_pre_allocated(new_size));
+				pre_alloc = new_alen2 - new_alen;
+			}
+
+			/* Get the last lcn to allocate from */
+			if (old_alen &&
+			    !run_lookup_entry(run, vcn, &lcn, NULL, NULL)) {
+				lcn = SPARSE_LCN;
+			}
+
+			if (lcn == SPARSE_LCN)
+				lcn = 0;
+			else if (lcn)
+				lcn += 1;
+
+			free = wnd_zeroes(&sbi->used.bitmap);
+			if (to_allocate > free) {
+				err = -ENOSPC;
+				goto out;
+			}
+
+			if (pre_alloc && to_allocate + pre_alloc > free)
+				pre_alloc = 0;
+		}
+
+		vcn = old_alen;
+
+		if (is_ext) {
+			if (!run_add_entry(run, vcn, SPARSE_LCN, to_allocate)) {
+				err = -ENOMEM;
+				goto out;
+			}
+			alen = to_allocate;
+		} else {
+			/* ~3 bytes per fragment */
+			err = attr_allocate_clusters(
+				sbi, run, vcn, lcn, to_allocate, &pre_alloc,
+				is_mft ? ALLOCATE_MFT : 0, &alen,
+				is_mft ? 0 :
+					 (sbi->record_size -
+					  le32_to_cpu(rec->used) + 8) /
+							 3 +
+						 1,
+				NULL);
+			if (err)
+				goto out;
+		}
+
+		done += alen;
+		vcn += alen;
+		if (to_allocate > alen)
+			to_allocate -= alen;
+		else
+			to_allocate = 0;
+
+pack_runs:
+		err = mi_pack_runs(mi, attr, run, vcn - svcn);
+		if (err)
+			goto out;
+
+		next_svcn = le64_to_cpu(attr->nres.evcn) + 1;
+		new_alloc_tmp = (u64)next_svcn << cluster_bits;
+		attr_b->nres.alloc_size = cpu_to_le64(new_alloc_tmp);
+		mi_b->dirty = true;
+
+		if (next_svcn >= vcn && !to_allocate) {
+			/* Normal way. update attribute and exit */
+			attr_b->nres.data_size = cpu_to_le64(new_size);
+			goto ok;
+		}
+
+		/* at least two MFT records to avoid a recursive loop */
+		if (is_mft && next_svcn == vcn &&
+		    ((u64)done << sbi->cluster_bits) >= 2 * sbi->record_size) {
+			new_size = new_alloc_tmp;
+			attr_b->nres.data_size = attr_b->nres.alloc_size;
+			goto ok;
+		}
+
+		if (le32_to_cpu(rec->used) < sbi->record_size) {
+			old_alen = next_svcn;
+			evcn = old_alen - 1;
+			goto add_alloc_in_same_attr_seg;
+		}
+
+		if (type == ATTR_LIST) {
+			err = ni_expand_list(ni);
+			if (err)
+				goto out;
+			if (next_svcn < vcn)
+				goto pack_runs;
+
+			/* layout of records is changed */
+			goto again;
+		}
+
+		if (!ni->attr_list.size) {
+			err = ni_create_attr_list(ni);
+			if (err)
+				goto out;
+			/* layout of records is changed */
+		}
+
+		if (next_svcn >= vcn) {
+			/* this is mft data, repeat */
+			goto again;
+		}
+
+		/* insert new attribute segment */
+		err = ni_insert_nonresident(ni, type, name, name_len, run,
+					    next_svcn, vcn - next_svcn,
+					    attr_b->flags, &attr, &mi);
+		if (err)
+			goto out;
+
+		if (!is_mft)
+			run_truncate_head(run, evcn + 1);
+
+		svcn = le64_to_cpu(attr->nres.svcn);
+		evcn = le64_to_cpu(attr->nres.evcn);
+
+		le_b = NULL;
+		/* layout of records may be changed */
+		/* find base attribute to update */
+		attr_b = ni_find_attr(ni, NULL, &le_b, type, name, name_len,
+				      NULL, &mi_b);
+		if (!attr_b) {
+			err = -ENOENT;
+			goto out;
+		}
+
+		attr_b->nres.alloc_size = cpu_to_le64((u64)vcn << cluster_bits);
+		attr_b->nres.data_size = attr_b->nres.alloc_size;
+		attr_b->nres.valid_size = attr_b->nres.alloc_size;
+		mi_b->dirty = true;
+		goto again_1;
+	}
+
+	if (new_size != old_size ||
+	    (new_alloc != old_alloc && !keep_prealloc)) {
+		vcn = max(svcn, new_alen);
+		new_alloc_tmp = (u64)vcn << cluster_bits;
+
+		err = run_deallocate_ex(sbi, run, vcn, evcn - vcn + 1, &alen,
+					true);
+		if (err)
+			goto out;
+
+		run_truncate(run, vcn);
+
+		if (vcn > svcn) {
+			err = mi_pack_runs(mi, attr, run, vcn - svcn);
+			if (err < 0)
+				goto out;
+		} else if (le && le->vcn) {
+			u16 le_sz = le16_to_cpu(le->size);
+
+			/*
+			 * NOTE: list entries for one attribute are always
+			 * the same size. We deal with the last entry
+			 * (vcn==0) and it is not the first one in the
+			 * entries array (the list entry for the std
+			 * attribute is always first), so it is safe to
+			 * step back.
+			 */
+			mi_remove_attr(mi, attr);
+
+			if (!al_remove_le(ni, le)) {
+				err = -EINVAL;
+				goto out;
+			}
+
+			le = (struct ATTR_LIST_ENTRY *)((u8 *)le - le_sz);
+		} else {
+			attr->nres.evcn = cpu_to_le64((u64)vcn - 1);
+			mi->dirty = true;
+		}
+
+		attr_b->nres.alloc_size = cpu_to_le64(new_alloc_tmp);
+
+		if (vcn == new_alen) {
+			attr_b->nres.data_size = cpu_to_le64(new_size);
+			if (new_size < old_valid)
+				attr_b->nres.valid_size =
+					attr_b->nres.data_size;
+		} else {
+			if (new_alloc_tmp <=
+			    le64_to_cpu(attr_b->nres.data_size))
+				attr_b->nres.data_size =
+					attr_b->nres.alloc_size;
+			if (new_alloc_tmp <
+			    le64_to_cpu(attr_b->nres.valid_size))
+				attr_b->nres.valid_size =
+					attr_b->nres.alloc_size;
+		}
+
+		if (is_ext)
+			le64_sub_cpu(&attr_b->nres.total_size,
+				     ((u64)alen << cluster_bits));
+
+		mi_b->dirty = true;
+
+		if (new_alloc_tmp <= new_alloc)
+			goto ok;
+
+		old_size = new_alloc_tmp;
+		vcn = svcn - 1;
+
+		if (le == le_b) {
+			attr = attr_b;
+			mi = mi_b;
+			evcn = svcn - 1;
+			svcn = 0;
+			goto next_le;
+		}
+
+		if (le->type != type || le->name_len != name_len ||
+		    memcmp(le_name(le), name, name_len * sizeof(short))) {
+			err = -EINVAL;
+			goto out;
+		}
+
+		err = ni_load_mi(ni, le, &mi);
+		if (err)
+			goto out;
+
+		attr = mi_find_attr(mi, NULL, type, name, name_len, &le->id);
+		if (!attr) {
+			err = -EINVAL;
+			goto out;
+		}
+		goto next_le_1;
+	}
+
+ok:
+	if (new_valid) {
+		__le64 valid = cpu_to_le64(min(*new_valid, new_size));
+
+		if (attr_b->nres.valid_size != valid) {
+			attr_b->nres.valid_size = valid;
+			mi_b->dirty = true;
+		}
+	}
+
+out:
+	if (!err && attr_b && ret)
+		*ret = attr_b;
+
+	/* update inode_set_bytes*/
+	if (!err && attr_b && attr_b->non_res &&
+	    ((type == ATTR_DATA && !name_len) ||
+	     (type == ATTR_ALLOC && name == I30_NAME))) {
+		ni->vfs_inode.i_size = new_size;
+		inode_set_bytes(&ni->vfs_inode,
+				le64_to_cpu(attr_b->nres.alloc_size));
+		ni->ni_flags |= NI_FLAG_UPDATE_PARENT;
+		mark_inode_dirty(&ni->vfs_inode);
+	}
+
+	return err;
+}
+
+int attr_data_get_block(struct ntfs_inode *ni, CLST vcn, CLST clen, CLST *lcn,
+			CLST *len, bool *new)
+{
+	int err = 0;
+	struct runs_tree *run = &ni->file.run;
+	struct ntfs_sb_info *sbi;
+	u8 cluster_bits;
+	struct ATTRIB *attr, *attr_b;
+	struct ATTR_LIST_ENTRY *le, *le_b;
+	struct mft_inode *mi, *mi_b;
+	CLST hint, svcn, to_alloc, evcn1, new_evcn1, next_svcn;
+	u64 new_size, total_size;
+	u32 clst_per_frame;
+	bool ok;
+
+	if (new)
+		*new = false;
+
+	down_read(&ni->file.run_lock);
+	ok = run_lookup_entry(run, vcn, lcn, len, NULL);
+	up_read(&ni->file.run_lock);
+
+	if (ok && (*lcn != SPARSE_LCN || !new)) {
+		/* normal way */
+		return 0;
+	}
+
+	if (!clen)
+		clen = 1;
+
+	if (ok && clen > *len)
+		clen = *len;
+
+	sbi = ni->mi.sbi;
+	cluster_bits = sbi->cluster_bits;
+	new_size = ((u64)vcn + clen) << cluster_bits;
+
+	ni_lock(ni);
+	down_write(&ni->file.run_lock);
+
+again:
+	le_b = NULL;
+	attr_b = ni_find_attr(ni, NULL, &le_b, ATTR_DATA, NULL, 0, NULL, &mi_b);
+	if (!attr_b) {
+		err = -ENOENT;
+		goto out;
+	}
+
+	if (!attr_b->non_res) {
+		if (!new) {
+			*lcn = RESIDENT_LCN;
+			goto out;
+		}
+
+		err = attr_set_size_res(ni, attr_b, le_b, mi_b, new_size, run,
+					&attr_b);
+		if (err)
+			goto out;
+
+		if (!attr_b->non_res) {
+			/* Resident attribute remains resident */
+			*lcn = RESIDENT_LCN;
+			goto out;
+		}
+
+		/* Resident attribute becomes non resident */
+		goto again;
+	}
+
+	clst_per_frame = 1u << attr_b->nres.c_unit;
+	to_alloc = (clen + clst_per_frame - 1) & ~(clst_per_frame - 1);
+
+	svcn = le64_to_cpu(attr_b->nres.svcn);
+	evcn1 = le64_to_cpu(attr_b->nres.evcn) + 1;
+
+	attr = attr_b;
+	le = le_b;
+	mi = mi_b;
+
+	if (le_b && (vcn < svcn || evcn1 <= vcn)) {
+		attr = ni_find_attr(ni, attr_b, &le, ATTR_DATA, NULL, 0, &vcn,
+				    &mi);
+		if (!attr) {
+			err = -EINVAL;
+			goto out;
+		}
+		svcn = le64_to_cpu(attr->nres.svcn);
+		evcn1 = le64_to_cpu(attr->nres.evcn) + 1;
+	}
+
+	err = attr_load_runs(attr, ni, run);
+	if (err)
+		goto out;
+
+	if (!ok) {
+		ok = run_lookup_entry(run, vcn, lcn, len, NULL);
+		if (ok && (*lcn != SPARSE_LCN || !new)) {
+			/* normal way */
+			err = 0;
+			goto out;
+		}
+
+		if (!ok && !new) {
+			*len = 0;
+			err = 0;
+			goto out;
+		}
+
+		if (ok && clen > *len) {
+			clen = *len;
+			new_size = ((u64)vcn + clen) << cluster_bits;
+			to_alloc = (clen + clst_per_frame - 1) &
+				   ~(clst_per_frame - 1);
+		}
+	}
+
+	if (!is_attr_ext(attr_b)) {
+		err = -EINVAL;
+		goto out;
+	}
+
+	/* Get the last lcn to allocate from */
+	hint = 0;
+
+	if (vcn > evcn1) {
+		if (!run_add_entry(run, evcn1, SPARSE_LCN, vcn - evcn1)) {
+			err = -ENOMEM;
+			goto out;
+		}
+	} else if (vcn && !run_lookup_entry(run, vcn - 1, &hint, NULL, NULL)) {
+		hint = -1;
+	}
+
+	err = attr_allocate_clusters(
+		sbi, run, vcn, hint + 1, to_alloc, NULL, 0, len,
+		(sbi->record_size - le32_to_cpu(mi->mrec->used) + 8) / 3 + 1,
+		lcn);
+	if (err)
+		goto out;
+	*new = true;
+
+	new_evcn1 = vcn + *len;
+	if (new_evcn1 < evcn1)
+		new_evcn1 = evcn1;
+
+	total_size = le64_to_cpu(attr_b->nres.total_size) +
+		     ((u64)*len << cluster_bits);
+
+repack:
+
+	err = mi_pack_runs(mi, attr, run, new_evcn1 - svcn);
+	if (err < 0)
+		goto out;
+
+	attr_b->nres.total_size = cpu_to_le64(total_size);
+	inode_set_bytes(&ni->vfs_inode, total_size);
+
+	mi_b->dirty = true;
+	ni->ni_flags |= NI_FLAG_UPDATE_PARENT;
+	mark_inode_dirty(&ni->vfs_inode);
+
+	next_svcn = le64_to_cpu(attr->nres.evcn) + 1;
+	if (next_svcn >= evcn1) {
+		/* Normal way. update attribute and exit */
+		goto out;
+	}
+
+	if (!ni->attr_list.le) {
+		err = ni_create_attr_list(ni);
+		if (err)
+			goto out;
+		/* layout of records is changed */
+		le_b = NULL;
+		attr_b = ni_find_attr(ni, NULL, &le_b, ATTR_DATA, NULL, 0, NULL,
+				      &mi_b);
+		if (!attr_b) {
+			err = -ENOENT;
+			goto out;
+		}
+
+		attr = attr_b;
+		le = le_b;
+		mi = mi_b;
+		goto repack;
+	}
+
+	/* Estimate next attribute */
+	attr = ni_find_attr(ni, attr, &le, ATTR_DATA, NULL, 0, &evcn1, &mi);
+
+	if (attr && le32_to_cpu(mi->mrec->used) + 8 <= sbi->record_size) {
+		svcn = next_svcn;
+		evcn1 = le64_to_cpu(attr->nres.evcn) + 1;
+
+		err = attr_load_runs(attr, ni, run);
+		if (err)
+			goto out;
+
+		attr->nres.svcn = cpu_to_le64(svcn);
+		err = mi_pack_runs(mi, attr, run, evcn1 - svcn);
+		if (err < 0)
+			goto out;
+
+		le->vcn = cpu_to_le64(svcn);
+		ni->attr_list.dirty = true;
+
+		mi->dirty = true;
+
+		next_svcn = le64_to_cpu(attr->nres.evcn) + 1;
+
+		if (next_svcn >= evcn1) {
+			/* Normal way. update attribute and exit */
+			goto out;
+		}
+	}
+
+	err = ni_insert_nonresident(ni, ATTR_DATA, NULL, 0, run, next_svcn,
+				    evcn1 - next_svcn, attr_b->flags, &attr,
+				    &mi);
+	if (err)
+		goto out;
+
+	run_truncate_head(run, vcn);
+
+out:
+	up_write(&ni->file.run_lock);
+	ni_unlock(ni);
+
+	return err;
+}
+
+/*
+ * attr_load_runs_vcn
+ *
+ * load runs with vcn
+ */
+int attr_load_runs_vcn(struct ntfs_inode *ni, enum ATTR_TYPE type,
+		       const __le16 *name, u8 name_len, struct runs_tree *run,
+		       CLST vcn)
+{
+	struct ATTRIB *attr;
+	int err;
+	CLST svcn, evcn;
+	u16 ro;
+
+	attr = ni_find_attr(ni, NULL, NULL, type, name, name_len, &vcn, NULL);
+	if (!attr)
+		return -ENOENT;
+
+	svcn = le64_to_cpu(attr->nres.svcn);
+	evcn = le64_to_cpu(attr->nres.evcn);
+
+	if (evcn < vcn || vcn < svcn)
+		return -EINVAL;
+
+	ro = le16_to_cpu(attr->nres.run_off);
+	err = run_unpack_ex(run, ni->mi.sbi, ni->mi.rno, svcn, evcn,
+			    Add2Ptr(attr, ro), le32_to_cpu(attr->size) - ro);
+	if (err < 0)
+		return err;
+	return 0;
+}
+
+/*
+ * attr_is_frame_compressed
+ *
+ * This function is used to detect compressed frame
+ */
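+/*
+ * E.g. with a 16-cluster compression unit: a frame mapped to 10 real
+ * clusters followed by sparse clusters is compressed (*clst_data == 10,
+ * *is_compr == true); a frame mapped to 16 real clusters is not
+ * compressed; a frame that starts with a sparse run is treated as a
+ * fully sparsed frame (*is_compr == true, *clst_data == 0).
+ */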
+int attr_is_frame_compressed(struct ntfs_inode *ni, struct ATTRIB *attr,
+			     CLST frame, CLST *clst_data, bool *is_compr)
+{
+	int err;
+	u32 clst_frame;
+	CLST len, lcn, vcn, alen, slen, vcn1;
+	size_t idx;
+	struct runs_tree *run;
+
+	*clst_data = 0;
+	*is_compr = false;
+
+	if (!is_attr_compressed(attr))
+		return 0;
+
+	if (!attr->non_res)
+		return 0;
+
+	clst_frame = 1u << attr->nres.c_unit;
+	vcn = frame * clst_frame;
+	run = &ni->file.run;
+
+	if (!run_lookup_entry(run, vcn, &lcn, &len, &idx)) {
+		err = attr_load_runs_vcn(ni, attr->type, attr_name(attr),
+					 attr->name_len, run, vcn);
+		if (err)
+			return err;
+
+		if (!run_lookup_entry(run, vcn, &lcn, &len, &idx))
+			return -ENOENT;
+	}
+
+	if (lcn == SPARSE_LCN) {
+		/* The frame is sparse if "clst_frame" clusters are sparse */
+		*is_compr = true;
+		return 0;
+	}
+
+	if (len >= clst_frame) {
+		/*
+		 * The frame is not compressed because
+		 * it does not contain any sparse clusters
+		 */
+		*clst_data = clst_frame;
+		return 0;
+	}
+
+	alen = bytes_to_cluster(ni->mi.sbi, le64_to_cpu(attr->nres.alloc_size));
+	slen = 0;
+	*clst_data = len;
+
+	/*
+	 * The frame is compressed if *clst_data + slen >= clst_frame
+	 * Check next fragments
+	 */
+	while ((vcn += len) < alen) {
+		vcn1 = vcn;
+
+		if (!run_get_entry(run, ++idx, &vcn, &lcn, &len) ||
+		    vcn1 != vcn) {
+			err = attr_load_runs_vcn(ni, attr->type,
+						 attr_name(attr),
+						 attr->name_len, run, vcn1);
+			if (err)
+				return err;
+			vcn = vcn1;
+
+			if (!run_lookup_entry(run, vcn, &lcn, &len, &idx))
+				return -ENOENT;
+		}
+
+		if (lcn == SPARSE_LCN) {
+			slen += len;
+		} else {
+			if (slen) {
+				/*
+				 * data_clusters + sparse_clusters are
+				 * not enough for the frame
+				 */
+				return -EINVAL;
+			}
+			*clst_data += len;
+		}
+
+		if (*clst_data + slen >= clst_frame) {
+			if (!slen) {
+				/*
+				 * There are no sparse clusters in this frame,
+				 * so it is not compressed
+				 */
+				*clst_data = clst_frame;
+			} else {
+				*is_compr = *clst_data < clst_frame;
+			}
+			break;
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * attr_allocate_frame
+ *
+ * allocate/free clusters for 'frame'
+ */
+int attr_allocate_frame(struct ntfs_inode *ni, CLST frame, size_t compr_size,
+			u64 new_valid)
+{
+	int err = 0;
+	struct runs_tree *run = &ni->file.run;
+	struct ntfs_sb_info *sbi = ni->mi.sbi;
+	struct ATTRIB *attr, *attr_b;
+	struct ATTR_LIST_ENTRY *le, *le_b;
+	struct mft_inode *mi, *mi_b;
+	CLST svcn, evcn1, next_svcn, lcn, len;
+	CLST vcn, clst_data;
+	u64 total_size, valid_size, data_size;
+	bool is_compr;
+
+	le_b = NULL;
+	attr_b = ni_find_attr(ni, NULL, &le_b, ATTR_DATA, NULL, 0, NULL, &mi_b);
+	if (!attr_b)
+		return -ENOENT;
+
+	if (!is_attr_ext(attr_b))
+		return -EINVAL;
+
+	vcn = frame << NTFS_LZNT_CUNIT;
+	total_size = le64_to_cpu(attr_b->nres.total_size);
+
+	svcn = le64_to_cpu(attr_b->nres.svcn);
+	evcn1 = le64_to_cpu(attr_b->nres.evcn) + 1;
+	data_size = le64_to_cpu(attr_b->nres.data_size);
+
+	if (svcn <= vcn && vcn < evcn1) {
+		attr = attr_b;
+		le = le_b;
+		mi = mi_b;
+	} else if (!le_b) {
+		err = -EINVAL;
+		goto out;
+	} else {
+		le = le_b;
+		attr = ni_find_attr(ni, attr_b, &le, ATTR_DATA, NULL, 0, &vcn,
+				    &mi);
+		if (!attr) {
+			err = -EINVAL;
+			goto out;
+		}
+		svcn = le64_to_cpu(attr->nres.svcn);
+		evcn1 = le64_to_cpu(attr->nres.evcn) + 1;
+	}
+
+	err = attr_load_runs(attr, ni, run);
+	if (err)
+		goto out;
+
+	err = attr_is_frame_compressed(ni, attr_b, frame, &clst_data,
+				       &is_compr);
+	if (err)
+		goto out;
+
+	total_size -= (u64)clst_data << sbi->cluster_bits;
+
+	len = bytes_to_cluster(sbi, compr_size);
+
+	if (len == clst_data)
+		goto out;
+
+	if (len < clst_data) {
+		err = run_deallocate_ex(sbi, run, vcn + len, clst_data - len,
+					NULL, true);
+		if (err)
+			goto out;
+
+		if (!run_add_entry(run, vcn + len, SPARSE_LCN,
+				   clst_data - len)) {
+			err = -ENOMEM;
+			goto out;
+		}
+	} else {
+		CLST alen, hint;
+		/* Get the last lcn to allocate from */
+		if (vcn + clst_data &&
+		    !run_lookup_entry(run, vcn + clst_data - 1, &hint, NULL,
+				      NULL)) {
+			hint = -1;
+		}
+
+		err = attr_allocate_clusters(sbi, run, vcn + clst_data,
+					     hint + 1, len - clst_data, NULL, 0,
+					     &alen, 0, &lcn);
+		if (err)
+			goto out;
+	}
+
+	total_size += (u64)len << sbi->cluster_bits;
+
+repack:
+	err = mi_pack_runs(mi, attr, run, evcn1 - svcn);
+	if (err < 0)
+		goto out;
+
+	attr_b->nres.total_size = cpu_to_le64(total_size);
+	inode_set_bytes(&ni->vfs_inode, total_size);
+
+	mi_b->dirty = true;
+	mark_inode_dirty(&ni->vfs_inode);
+
+	next_svcn = le64_to_cpu(attr->nres.evcn) + 1;
+
+	if (next_svcn >= evcn1) {
+		/* Normal way. update attribute and exit */
+		goto out;
+	}
+
+	if (!ni->attr_list.size) {
+		err = ni_create_attr_list(ni);
+		if (err)
+			goto out;
+		/* layout of records is changed */
+		le_b = NULL;
+		attr_b = ni_find_attr(ni, NULL, &le_b, ATTR_DATA, NULL, 0, NULL,
+				      &mi_b);
+		if (!attr_b) {
+			err = -ENOENT;
+			goto out;
+		}
+
+		attr = attr_b;
+		le = le_b;
+		mi = mi_b;
+		goto repack;
+	}
+
+	/* Estimate next attribute */
+	attr = ni_find_attr(ni, attr, &le, ATTR_DATA, NULL, 0, &evcn1, &mi);
+
+	if (attr && le32_to_cpu(mi->mrec->used) + 8 <= sbi->record_size) {
+		svcn = next_svcn;
+		evcn1 = le64_to_cpu(attr->nres.evcn) + 1;
+
+		err = attr_load_runs(attr, ni, run);
+		if (err)
+			goto out;
+
+		attr->nres.svcn = cpu_to_le64(svcn);
+		err = mi_pack_runs(mi, attr, run, evcn1 - svcn);
+		if (err < 0)
+			goto out;
+
+		le->vcn = cpu_to_le64(svcn);
+		ni->attr_list.dirty = true;
+		mi->dirty = true;
+
+		next_svcn = le64_to_cpu(attr->nres.evcn) + 1;
+
+		if (next_svcn >= evcn1) {
+			/* Normal way. update attribute and exit */
+			goto out;
+		}
+	}
+
+	err = ni_insert_nonresident(ni, ATTR_DATA, NULL, 0, run, next_svcn,
+				    evcn1 - next_svcn, attr_b->flags, &attr,
+				    &mi);
+	if (err)
+		goto out;
+
+	run_truncate_head(run, vcn);
+
+out:
+	if (new_valid > data_size)
+		new_valid = data_size;
+
+	valid_size = le64_to_cpu(attr_b->nres.valid_size);
+	if (new_valid != valid_size) {
+		attr_b->nres.valid_size = cpu_to_le64(new_valid);
+		mi_b->dirty = true;
+	}
+
+	return err;
+}
diff --git a/fs/ntfs3/attrlist.c b/fs/ntfs3/attrlist.c
new file mode 100644
index 000000000000..3739572a4eca
--- /dev/null
+++ b/fs/ntfs3/attrlist.c
@@ -0,0 +1,462 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  linux/fs/ntfs3/attrlist.c
+ *
+ * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved.
+ *
+ */
+
+#include <linux/blkdev.h>
+#include <linux/buffer_head.h>
+#include <linux/fs.h>
+#include <linux/nls.h>
+#include <linux/sched/signal.h>
+
+#include "debug.h"
+#include "ntfs.h"
+#include "ntfs_fs.h"
+
+/* Returns true if le is valid */
+static inline bool al_is_valid_le(const struct ntfs_inode *ni,
+				  struct ATTR_LIST_ENTRY *le)
+{
+	if (!le || !ni->attr_list.le || !ni->attr_list.size)
+		return false;
+
+	return PtrOffset(ni->attr_list.le, le) + le16_to_cpu(le->size) <=
+	       ni->attr_list.size;
+}
+
+void al_destroy(struct ntfs_inode *ni)
+{
+	run_close(&ni->attr_list.run);
+	ntfs_free(ni->attr_list.le);
+	ni->attr_list.le = NULL;
+	ni->attr_list.size = 0;
+	ni->attr_list.dirty = false;
+}
+
+/*
+ * ntfs_load_attr_list
+ *
+ * This method makes sure that the ATTRIB list, if present,
+ * has been properly set up.
+ */
+int ntfs_load_attr_list(struct ntfs_inode *ni, struct ATTRIB *attr)
+{
+	int err;
+	size_t lsize;
+	void *le = NULL;
+
+	if (ni->attr_list.size)
+		return 0;
+
+	if (!attr->non_res) {
+		lsize = le32_to_cpu(attr->res.data_size);
+		le = ntfs_alloc(al_aligned(lsize), 0);
+		if (!le) {
+			err = -ENOMEM;
+			goto out;
+		}
+		memcpy(le, resident_data(attr), lsize);
+	} else if (attr->nres.svcn) {
+		err = -EINVAL;
+		goto out;
+	} else {
+		u16 run_off = le16_to_cpu(attr->nres.run_off);
+
+		lsize = le64_to_cpu(attr->nres.data_size);
+
+		run_init(&ni->attr_list.run);
+
+		err = run_unpack_ex(&ni->attr_list.run, ni->mi.sbi, ni->mi.rno,
+				    0, le64_to_cpu(attr->nres.evcn),
+				    Add2Ptr(attr, run_off),
+				    le32_to_cpu(attr->size) - run_off);
+		if (err < 0)
+			goto out;
+
+		le = ntfs_alloc(al_aligned(lsize), 0);
+		if (!le) {
+			err = -ENOMEM;
+			goto out;
+		}
+
+		err = ntfs_read_run_nb(ni->mi.sbi, &ni->attr_list.run, 0, le,
+				       lsize, NULL);
+		if (err)
+			goto out;
+	}
+
+	ni->attr_list.size = lsize;
+	ni->attr_list.le = le;
+
+	return 0;
+
+out:
+	ni->attr_list.le = le;
+	al_destroy(ni);
+
+	return err;
+}
+
+/*
+ * al_enumerate
+ *
+ * Returns the next list le
+ * if le is NULL then returns the first le
+ */
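+/*
+ * Typical use, as in al_find_ex() below:
+ *
+ *	struct ATTR_LIST_ENTRY *le = NULL;
+ *
+ *	while ((le = al_enumerate(ni, le))) {
+ *		... examine 'le' ...
+ *	}
+ */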
+struct ATTR_LIST_ENTRY *al_enumerate(struct ntfs_inode *ni,
+				     struct ATTR_LIST_ENTRY *le)
+{
+	size_t off;
+	u16 sz;
+
+	if (!le) {
+		le = ni->attr_list.le;
+	} else {
+		sz = le16_to_cpu(le->size);
+		if (sz < sizeof(struct ATTR_LIST_ENTRY)) {
+			/* Impossible because we should not return such le */
+			return NULL;
+		}
+		le = Add2Ptr(le, sz);
+	}
+
+	/* Check boundary */
+	off = PtrOffset(ni->attr_list.le, le);
+	if (off + sizeof(struct ATTR_LIST_ENTRY) > ni->attr_list.size) {
+		// The regular end of list
+		return NULL;
+	}
+
+	sz = le16_to_cpu(le->size);
+
+	/* Check le for errors */
+	if (sz < sizeof(struct ATTR_LIST_ENTRY) ||
+	    off + sz > ni->attr_list.size ||
+	    sz < le->name_off + le->name_len * sizeof(short)) {
+		return NULL;
+	}
+
+	return le;
+}
+
+/*
+ * al_find_le
+ *
+ * finds the first le in the list which matches type, name and vcn
+ * Returns NULL if not found
+ */
+struct ATTR_LIST_ENTRY *al_find_le(struct ntfs_inode *ni,
+				   struct ATTR_LIST_ENTRY *le,
+				   const struct ATTRIB *attr)
+{
+	CLST svcn = attr_svcn(attr);
+
+	return al_find_ex(ni, le, attr->type, attr_name(attr), attr->name_len,
+			  &svcn);
+}
+
+/*
+ * al_find_ex
+ *
+ * finds the first le in the list which matches type, name and vcn
+ * Returns NULL if not found
+ */
+struct ATTR_LIST_ENTRY *al_find_ex(struct ntfs_inode *ni,
+				   struct ATTR_LIST_ENTRY *le,
+				   enum ATTR_TYPE type, const __le16 *name,
+				   u8 name_len, const CLST *vcn)
+{
+	struct ATTR_LIST_ENTRY *ret = NULL;
+	u32 type_in = le32_to_cpu(type);
+
+	while ((le = al_enumerate(ni, le))) {
+		u64 le_vcn;
+		int diff;
+
+		/* List entries are sorted by type, name and vcn */
+		diff = le32_to_cpu(le->type) - type_in;
+		if (diff < 0)
+			continue;
+
+		if (diff > 0)
+			return ret;
+
+		if (le->name_len != name_len)
+			continue;
+
+		if (name_len &&
+		    memcmp(le_name(le), name, name_len * sizeof(short)))
+			continue;
+
+		if (!vcn)
+			return le;
+
+		le_vcn = le64_to_cpu(le->vcn);
+		if (*vcn == le_vcn)
+			return le;
+
+		if (*vcn < le_vcn)
+			return ret;
+
+		ret = le;
+	}
+
+	return ret;
+}
+
+/*
+ * al_find_le_to_insert
+ *
+ * finds the position at which a new list entry with the given
+ * type, name and vcn should be inserted
+ */
+static struct ATTR_LIST_ENTRY *
+al_find_le_to_insert(struct ntfs_inode *ni, enum ATTR_TYPE type,
+		     const __le16 *name, u8 name_len, const CLST *vcn)
+{
+	struct ATTR_LIST_ENTRY *le = NULL, *prev;
+	u32 type_in = le32_to_cpu(type);
+	int diff;
+
+	/* List entries are sorted by type, name, vcn */
+next:
+	le = al_enumerate(ni, prev = le);
+	if (!le)
+		goto out;
+	diff = le32_to_cpu(le->type) - type_in;
+	if (diff < 0)
+		goto next;
+	if (diff > 0)
+		goto out;
+
+	if (ntfs_cmp_names(name, name_len, le_name(le), le->name_len, NULL) > 0)
+		goto next;
+
+	if (!vcn || *vcn > le64_to_cpu(le->vcn))
+		goto next;
+
+out:
+	if (!le)
+		le = prev ? Add2Ptr(prev, le16_to_cpu(prev->size)) :
+			    ni->attr_list.le;
+
+	return le;
+}
+
+/*
+ * al_add_le
+ *
+ * adds an "attribute list entry" to the list.
+ */
+int al_add_le(struct ntfs_inode *ni, enum ATTR_TYPE type, const __le16 *name,
+	      u8 name_len, CLST svcn, __le16 id, const struct MFT_REF *ref,
+	      struct ATTR_LIST_ENTRY **new_le)
+{
+	int err;
+	struct ATTRIB *attr;
+	struct ATTR_LIST_ENTRY *le;
+	size_t off;
+	u16 sz;
+	size_t asize, new_asize;
+	u64 new_size;
+	typeof(ni->attr_list) *al = &ni->attr_list;
+
+	/*
+	 * Compute the size of the new le and the new length of the
+	 * list with the new le added.
+	 */
+	sz = le_size(name_len);
+	new_size = al->size + sz;
+	asize = al_aligned(al->size);
+	new_asize = al_aligned(new_size);
+
+	/* Scan forward to the point at which the new le should be inserted. */
+	le = al_find_le_to_insert(ni, type, name, name_len, &svcn);
+	off = PtrOffset(al->le, le);
+
+	if (new_size > asize) {
+		void *ptr = ntfs_alloc(new_asize, 0);
+
+		if (!ptr)
+			return -ENOMEM;
+
+		memcpy(ptr, al->le, off);
+		memcpy(Add2Ptr(ptr, off + sz), le, al->size - off);
+		le = Add2Ptr(ptr, off);
+		ntfs_free(al->le);
+		al->le = ptr;
+	} else {
+		memmove(Add2Ptr(le, sz), le, al->size - off);
+	}
+
+	al->size = new_size;
+
+	le->type = type;
+	le->size = cpu_to_le16(sz);
+	le->name_len = name_len;
+	le->name_off = offsetof(struct ATTR_LIST_ENTRY, name);
+	le->vcn = cpu_to_le64(svcn);
+	le->ref = *ref;
+	le->id = id;
+	memcpy(le->name, name, sizeof(short) * name_len);
+
+	al->dirty = true;
+
+	err = attr_set_size(ni, ATTR_LIST, NULL, 0, &al->run, new_size,
+			    &new_size, true, &attr);
+	if (err)
+		return err;
+
+	if (attr && attr->non_res) {
+		err = ntfs_sb_write_run(ni->mi.sbi, &al->run, 0, al->le,
+					al->size);
+		if (err)
+			return err;
+	}
+
+	al->dirty = false;
+	*new_le = le;
+
+	return 0;
+}
+
+/*
+ * al_remove_le
+ *
+ * removes 'le' from attribute list
+ */
+bool al_remove_le(struct ntfs_inode *ni, struct ATTR_LIST_ENTRY *le)
+{
+	u16 size;
+	size_t off;
+	typeof(ni->attr_list) *al = &ni->attr_list;
+
+	if (!al_is_valid_le(ni, le))
+		return false;
+
+	/* Save on stack the size of le */
+	size = le16_to_cpu(le->size);
+	off = PtrOffset(al->le, le);
+
+	memmove(le, Add2Ptr(le, size), al->size - (off + size));
+
+	al->size -= size;
+	al->dirty = true;
+
+	return true;
+}
+
+/*
+ * al_delete_le
+ *
+ * deletes from the list the first le which matches its parameters.
+ */
+bool al_delete_le(struct ntfs_inode *ni, enum ATTR_TYPE type, CLST vcn,
+		  const __le16 *name, size_t name_len,
+		  const struct MFT_REF *ref)
+{
+	u16 size;
+	struct ATTR_LIST_ENTRY *le;
+	size_t off;
+	typeof(ni->attr_list) *al = &ni->attr_list;
+
+	/* Scan forward to the first le that matches the input */
+	le = al_find_ex(ni, NULL, type, name, name_len, &vcn);
+	if (!le)
+		return false;
+
+	off = PtrOffset(al->le, le);
+
+	if (!ref)
+		goto del;
+
+	/*
+	 * The caller specified a segment reference, so we have to
+	 * scan through the matching entries until we find that segment
+	 * reference or we run out of matching entries.
+	 */
+next:
+	if (off + sizeof(struct ATTR_LIST_ENTRY) > al->size)
+		goto del;
+	if (le->type != type)
+		goto del;
+	if (le->name_len != name_len)
+		goto del;
+	if (name_len &&
+	    memcmp(name, Add2Ptr(le, le->name_off), name_len * sizeof(short)))
+		goto del;
+	if (le64_to_cpu(le->vcn) != vcn)
+		goto del;
+	if (!memcmp(ref, &le->ref, sizeof(*ref)))
+		goto del;
+
+	off += le16_to_cpu(le->size);
+	le = Add2Ptr(al->le, off);
+	goto next;
+
+del:
+	/*
+	 * If we've gone off the end of the list, or if the type, name,
+	 * and vcn don't match, then we don't have any matching records.
+	 */
+	if (off >= al->size)
+		return false;
+	if (le->type != type)
+		return false;
+	if (le->name_len != name_len)
+		return false;
+	if (name_len &&
+	    memcmp(name, Add2Ptr(le, le->name_off), name_len * sizeof(short)))
+		return false;
+	if (le64_to_cpu(le->vcn) != vcn)
+		return false;
+
+	/* Save on stack the size of le */
+	size = le16_to_cpu(le->size);
+	/* Delete the le. */
+	memmove(le, Add2Ptr(le, size), al->size - (off + size));
+
+	al->size -= size;
+	al->dirty = true;
+	return true;
+}
+
+/*
+ * al_update
+ *
+ * writes the attribute list back to the MFT record or its data stream
+ */
+int al_update(struct ntfs_inode *ni)
+{
+	int err;
+	struct ntfs_sb_info *sbi = ni->mi.sbi;
+	struct ATTRIB *attr;
+	typeof(ni->attr_list) *al = &ni->attr_list;
+
+	if (!al->dirty)
+		return 0;
+
+	err = attr_set_size(ni, ATTR_LIST, NULL, 0, &al->run, al->size, NULL,
+			    false, &attr);
+	if (err)
+		goto out;
+
+	if (!attr->non_res)
+		memcpy(resident_data(attr), al->le, al->size);
+	else {
+		err = ntfs_sb_write_run(sbi, &al->run, 0, al->le, al->size);
+		if (err)
+			goto out;
+
+		attr->nres.valid_size = attr->nres.data_size;
+	}
+
+	ni->mi.dirty = true;
+	al->dirty = false;
+
+out:
+	return err;
+}
diff --git a/fs/ntfs3/xattr.c b/fs/ntfs3/xattr.c
new file mode 100644
index 000000000000..c5359fb17306
--- /dev/null
+++ b/fs/ntfs3/xattr.c
@@ -0,0 +1,996 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  linux/fs/ntfs3/xattr.c
+ *
+ * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved.
+ *
+ */
+
+#include <linux/blkdev.h>
+#include <linux/buffer_head.h>
+#include <linux/fs.h>
+#include <linux/nls.h>
+#include <linux/posix_acl.h>
+#include <linux/posix_acl_xattr.h>
+#include <linux/xattr.h>
+
+#include "debug.h"
+#include "ntfs.h"
+#include "ntfs_fs.h"
+
+#define SYSTEM_DOS_ATTRIB "system.dos_attrib"
+#define SYSTEM_NTFS_ATTRIB "system.ntfs_attrib"
+#define SYSTEM_NTFS_ATTRIB_BE "system.ntfs_attrib_be"
+#define USER_DOSATTRIB "user.DOSATTRIB"
+
+static inline size_t unpacked_ea_size(const struct EA_FULL *ea)
+{
+	return !ea->size ? DwordAlign(offsetof(struct EA_FULL, name) + 1 +
+				      ea->name_len + le16_to_cpu(ea->elength)) :
+			   le32_to_cpu(ea->size);
+}
+
+static inline size_t packed_ea_size(const struct EA_FULL *ea)
+{
+	return offsetof(struct EA_FULL, name) + 1 -
+	       offsetof(struct EA_FULL, flags) + ea->name_len +
+	       le16_to_cpu(ea->elength);
+}
+
+/*
+ * find_ea
+ *
+ * assume there is at least one xattr in the list
+ */
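+/*
+ * A typical lookup, as in ntfs_get_ea() below:
+ *
+ *	u32 off;
+ *
+ *	if (find_ea(ea_all, le32_to_cpu(info->size), name, name_len, &off))
+ *		ea = Add2Ptr(ea_all, off);
+ */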
+static inline bool find_ea(const struct EA_FULL *ea_all, u32 bytes,
+			   const char *name, u8 name_len, u32 *off)
+{
+	*off = 0;
+
+	if (!ea_all || !bytes)
+		return false;
+
+	for (;;) {
+		const struct EA_FULL *ea = Add2Ptr(ea_all, *off);
+		u32 next_off = *off + unpacked_ea_size(ea);
+
+		if (next_off > bytes)
+			return false;
+
+		if (ea->name_len == name_len &&
+		    !memcmp(ea->name, name, name_len))
+			return true;
+
+		*off = next_off;
+		if (next_off >= bytes)
+			return false;
+	}
+}
+
+/*
+ * ntfs_read_ea
+ *
+ * reads all extended attributes
+ * ea - newly allocated memory
+ * info - pointer into resident data
+ */
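+/*
+ * Callers do roughly:
+ *
+ *	err = ntfs_read_ea(ni, &ea_all, 0, &info);
+ *	...
+ *	ntfs_free(ea_all);
+ */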
+static int ntfs_read_ea(struct ntfs_inode *ni, struct EA_FULL **ea,
+			size_t add_bytes, const struct EA_INFO **info)
+{
+	int err;
+	struct ATTR_LIST_ENTRY *le = NULL;
+	struct ATTRIB *attr_info, *attr_ea;
+	void *ea_p;
+	u32 size;
+
+	static_assert(le32_to_cpu(ATTR_EA_INFO) < le32_to_cpu(ATTR_EA));
+
+	*ea = NULL;
+	*info = NULL;
+
+	attr_info =
+		ni_find_attr(ni, NULL, &le, ATTR_EA_INFO, NULL, 0, NULL, NULL);
+	attr_ea =
+		ni_find_attr(ni, attr_info, &le, ATTR_EA, NULL, 0, NULL, NULL);
+
+	if (!attr_ea || !attr_info)
+		return 0;
+
+	*info = resident_data_ex(attr_info, sizeof(struct EA_INFO));
+	if (!*info)
+		return -EINVAL;
+
+	/* Check Ea limit */
+	size = le32_to_cpu((*info)->size);
+	if (size > MAX_EA_DATA_SIZE || size + add_bytes > MAX_EA_DATA_SIZE)
+		return -EINVAL;
+
+	/* Allocate memory for packed Ea */
+	ea_p = ntfs_alloc(size + add_bytes, 0);
+	if (!ea_p)
+		return -ENOMEM;
+
+	if (attr_ea->non_res) {
+		struct runs_tree run;
+
+		run_init(&run);
+
+		err = attr_load_runs(attr_ea, ni, &run);
+		if (!err)
+			err = ntfs_read_run_nb(ni->mi.sbi, &run, 0, ea_p, size,
+					       NULL);
+		run_close(&run);
+
+		if (err)
+			goto out;
+	} else {
+		void *p = resident_data_ex(attr_ea, size);
+
+		if (!p) {
+			err = -EINVAL;
+			goto out;
+		}
+		memcpy(ea_p, p, size);
+	}
+
+	memset(Add2Ptr(ea_p, size), 0, add_bytes);
+	*ea = ea_p;
+	return 0;
+
+out:
+	ntfs_free(ea_p);
+	*ea = NULL;
+	return err;
+}
+
+/*
+ * ntfs_listxattr_hlp
+ *
+ * copy a list of xattr names into the buffer
+ * provided, or compute the buffer size required
+ */
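+/*
+ * Passing a NULL buffer only computes the required size, e.g. as done in
+ * ntfs_set_ea() to check whether any xattr is left:
+ *
+ *	size_t bytes;
+ *
+ *	err = ntfs_listxattr_hlp(ni, NULL, 0, &bytes);
+ */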
+static int ntfs_listxattr_hlp(struct ntfs_inode *ni, char *buffer,
+			      size_t bytes_per_buffer, size_t *bytes)
+{
+	const struct EA_INFO *info;
+	struct EA_FULL *ea_all = NULL;
+	const struct EA_FULL *ea;
+	u32 off, size;
+	int err;
+
+	*bytes = 0;
+
+	err = ntfs_read_ea(ni, &ea_all, 0, &info);
+	if (err)
+		return err;
+
+	if (!info || !ea_all)
+		return 0;
+
+	size = le32_to_cpu(info->size);
+
+	/* Enumerate all xattrs */
+	for (off = 0; off < size; off += unpacked_ea_size(ea)) {
+		ea = Add2Ptr(ea_all, off);
+
+		if (buffer) {
+			if (*bytes + ea->name_len + 1 > bytes_per_buffer) {
+				err = -ERANGE;
+				goto out;
+			}
+
+			memcpy(buffer + *bytes, ea->name, ea->name_len);
+			buffer[*bytes + ea->name_len] = 0;
+		}
+
+		*bytes += ea->name_len + 1;
+	}
+
+out:
+	ntfs_free(ea_all);
+	return err;
+}
+
+/*
+ * ntfs_get_ea
+ *
+ * reads xattr
+ */
+static int ntfs_get_ea(struct ntfs_inode *ni, const char *name, size_t name_len,
+		       void *buffer, size_t bytes_per_buffer, u32 *len)
+{
+	const struct EA_INFO *info;
+	struct EA_FULL *ea_all = NULL;
+	const struct EA_FULL *ea;
+	u32 off;
+	int err;
+
+	*len = 0;
+
+	if (name_len > 255) {
+		err = -ENAMETOOLONG;
+		goto out;
+	}
+
+	err = ntfs_read_ea(ni, &ea_all, 0, &info);
+	if (err)
+		goto out;
+
+	if (!info)
+		goto out;
+
+	/* Enumerate all xattrs */
+	if (!find_ea(ea_all, le32_to_cpu(info->size), name, name_len, &off)) {
+		err = -ENODATA;
+		goto out;
+	}
+	ea = Add2Ptr(ea_all, off);
+
+	*len = le16_to_cpu(ea->elength);
+	if (!buffer) {
+		err = 0;
+		goto out;
+	}
+
+	if (*len > bytes_per_buffer) {
+		err = -ERANGE;
+		goto out;
+	}
+	memcpy(buffer, ea->name + ea->name_len + 1, *len);
+	err = 0;
+
+out:
+	ntfs_free(ea_all);
+
+	return err;
+}
+
+static noinline int ntfs_getxattr_hlp(struct inode *inode, const char *name,
+				      void *value, size_t size,
+				      size_t *required)
+{
+	struct ntfs_inode *ni = ntfs_i(inode);
+	int err;
+	u32 len;
+
+	if (!(ni->ni_flags & NI_FLAG_EA))
+		return -ENODATA;
+
+	if (!required)
+		ni_lock(ni);
+
+	err = ntfs_get_ea(ni, name, strlen(name), value, size, &len);
+	if (!err)
+		err = len;
+	else if (-ERANGE == err && required)
+		*required = len;
+
+	if (!required)
+		ni_unlock(ni);
+
+	return err;
+}
+
+static noinline int ntfs_set_ea(struct inode *inode, const char *name,
+				const void *value, size_t val_size, int flags,
+				int locked)
+{
+	struct ntfs_inode *ni = ntfs_i(inode);
+	struct ntfs_sb_info *sbi = ni->mi.sbi;
+	int err;
+	struct EA_INFO ea_info;
+	const struct EA_INFO *info;
+	struct EA_FULL *new_ea;
+	struct EA_FULL *ea_all = NULL;
+	size_t name_len, add;
+	u32 off, size;
+	__le16 size_pack;
+	struct ATTRIB *attr;
+	struct ATTR_LIST_ENTRY *le;
+	struct mft_inode *mi;
+	struct runs_tree ea_run;
+	u64 new_sz;
+	void *p;
+
+	if (!locked)
+		ni_lock(ni);
+
+	run_init(&ea_run);
+	name_len = strlen(name);
+
+	if (name_len > 255) {
+		err = -ENAMETOOLONG;
+		goto out;
+	}
+
+	add = DwordAlign(offsetof(struct EA_FULL, name) + 1 + name_len +
+			 val_size);
+
+	err = ntfs_read_ea(ni, &ea_all, add, &info);
+	if (err)
+		goto out;
+
+	if (!info) {
+		memset(&ea_info, 0, sizeof(ea_info));
+		size = 0;
+		size_pack = 0;
+	} else {
+		memcpy(&ea_info, info, sizeof(ea_info));
+		size = le32_to_cpu(ea_info.size);
+		size_pack = ea_info.size_pack;
+	}
+
+	if (info && find_ea(ea_all, size, name, name_len, &off)) {
+		struct EA_FULL *ea;
+		size_t ea_sz;
+
+		if (flags & XATTR_CREATE) {
+			err = -EEXIST;
+			goto out;
+		}
+
+		/* Remove current xattr */
+		ea = Add2Ptr(ea_all, off);
+		if (ea->flags & FILE_NEED_EA)
+			le16_add_cpu(&ea_info.count, -1);
+
+		ea_sz = unpacked_ea_size(ea);
+
+		le16_add_cpu(&ea_info.size_pack, 0 - packed_ea_size(ea));
+
+		memmove(ea, Add2Ptr(ea, ea_sz), size - off - ea_sz);
+
+		size -= ea_sz;
+		memset(Add2Ptr(ea_all, size), 0, ea_sz);
+
+		ea_info.size = cpu_to_le32(size);
+
+		if ((flags & XATTR_REPLACE) && !val_size)
+			goto update_ea;
+	} else {
+		if (flags & XATTR_REPLACE) {
+			err = -ENODATA;
+			goto out;
+		}
+
+		if (!ea_all) {
+			ea_all = ntfs_alloc(add, 1);
+			if (!ea_all) {
+				err = -ENOMEM;
+				goto out;
+			}
+		}
+	}
+
+	/* append new xattr */
+	new_ea = Add2Ptr(ea_all, size);
+	new_ea->size = cpu_to_le32(add);
+	new_ea->flags = 0;
+	new_ea->name_len = name_len;
+	new_ea->elength = cpu_to_le16(val_size);
+	memcpy(new_ea->name, name, name_len);
+	new_ea->name[name_len] = 0;
+	memcpy(new_ea->name + name_len + 1, value, val_size);
+
+	le16_add_cpu(&ea_info.size_pack, packed_ea_size(new_ea));
+	size += add;
+	ea_info.size = cpu_to_le32(size);
+
+update_ea:
+
+	if (!info) {
+		/* Create xattr */
+		if (!size) {
+			err = 0;
+			goto out;
+		}
+
+		err = ni_insert_resident(ni, sizeof(struct EA_INFO),
+					 ATTR_EA_INFO, NULL, 0, NULL, NULL);
+		if (err)
+			goto out;
+
+		err = ni_insert_resident(ni, 0, ATTR_EA, NULL, 0, NULL, NULL);
+		if (err)
+			goto out;
+	}
+
+	new_sz = size;
+	err = attr_set_size(ni, ATTR_EA, NULL, 0, &ea_run, new_sz, &new_sz,
+			    false, NULL);
+	if (err)
+		goto out;
+
+	le = NULL;
+	attr = ni_find_attr(ni, NULL, &le, ATTR_EA_INFO, NULL, 0, NULL, &mi);
+	if (!attr) {
+		err = -EINVAL;
+		goto out;
+	}
+
+	if (!size) {
+		/* delete xattr, ATTR_EA_INFO */
+		err = ni_remove_attr_le(ni, attr, le);
+		if (err)
+			goto out;
+	} else {
+		p = resident_data_ex(attr, sizeof(struct EA_INFO));
+		if (!p) {
+			err = -EINVAL;
+			goto out;
+		}
+		memcpy(p, &ea_info, sizeof(struct EA_INFO));
+		mi->dirty = true;
+	}
+
+	le = NULL;
+	attr = ni_find_attr(ni, NULL, &le, ATTR_EA, NULL, 0, NULL, &mi);
+	if (!attr) {
+		err = -EINVAL;
+		goto out;
+	}
+
+	if (!size) {
+		/* delete xattr, ATTR_EA */
+		err = ni_remove_attr_le(ni, attr, le);
+		if (err)
+			goto out;
+	} else if (attr->non_res) {
+		err = ntfs_sb_write_run(sbi, &ea_run, 0, ea_all, size);
+		if (err)
+			goto out;
+	} else {
+		p = resident_data_ex(attr, size);
+		if (!p) {
+			err = -EINVAL;
+			goto out;
+		}
+		memcpy(p, ea_all, size);
+		mi->dirty = true;
+	}
+
+	if (ea_info.size_pack != size_pack)
+		ni->ni_flags |= NI_FLAG_UPDATE_PARENT;
+	mark_inode_dirty(&ni->vfs_inode);
+
+	/* Check if we have deleted the last xattr */
+	if (val_size || flags != XATTR_REPLACE ||
+	    ntfs_listxattr_hlp(ni, NULL, 0, &val_size) || val_size) {
+		ni->ni_flags |= NI_FLAG_EA;
+	} else {
+		ni->ni_flags &= ~NI_FLAG_EA;
+	}
+
+out:
+	if (!locked)
+		ni_unlock(ni);
+
+	run_close(&ea_run);
+	ntfs_free(ea_all);
+
+	return err;
+}
+
+static inline void ntfs_posix_acl_release(struct posix_acl *acl)
+{
+	if (acl && refcount_dec_and_test(&acl->a_refcount))
+		kfree(acl);
+}
+
+static struct posix_acl *ntfs_get_acl_ex(struct inode *inode, int type,
+					 int locked)
+{
+	struct ntfs_inode *ni = ntfs_i(inode);
+	const char *name;
+	struct posix_acl *acl;
+	size_t req;
+	int err;
+	void *buf;
+
+	buf = __getname();
+	if (!buf)
+		return ERR_PTR(-ENOMEM);
+
+	/* Possible values of 'type' were already checked above */
+	name = type == ACL_TYPE_ACCESS ? XATTR_NAME_POSIX_ACL_ACCESS :
+					 XATTR_NAME_POSIX_ACL_DEFAULT;
+
+	if (!locked)
+		ni_lock(ni);
+
+	err = ntfs_getxattr_hlp(inode, name, buf, PATH_MAX, &req);
+
+	if (!locked)
+		ni_unlock(ni);
+
+	/* Translate extended attribute to acl */
+	if (err > 0) {
+		acl = posix_acl_from_xattr(&init_user_ns, buf, err);
+		if (!IS_ERR(acl))
+			set_cached_acl(inode, type, acl);
+	} else {
+		acl = err == -ENODATA ? NULL : ERR_PTR(err);
+	}
+
+	__putname(buf);
+
+	return acl;
+}
+
+/*
+ * ntfs_get_acl
+ *
+ * inode_operations::get_acl
+ */
+struct posix_acl *ntfs_get_acl(struct inode *inode, int type)
+{
+	return ntfs_get_acl_ex(inode, type, 0);
+}
+
+static noinline int ntfs_set_acl_ex(struct inode *inode, struct posix_acl *acl,
+				    int type, int locked)
+{
+	const char *name;
+	size_t size;
+	void *value = NULL;
+	int err = 0;
+
+	if (S_ISLNK(inode->i_mode))
+		return -EOPNOTSUPP;
+
+	switch (type) {
+	case ACL_TYPE_ACCESS:
+		if (acl) {
+			umode_t mode = inode->i_mode;
+
+			err = posix_acl_equiv_mode(acl, &mode);
+			if (err < 0)
+				return err;
+
+			if (inode->i_mode != mode) {
+				inode->i_mode = mode;
+				mark_inode_dirty(inode);
+			}
+
+			if (!err) {
+				/*
+				 * acl can be exactly represented in the
+				 * traditional file mode permission bits
+				 */
+				acl = NULL;
+				goto out;
+			}
+		}
+		name = XATTR_NAME_POSIX_ACL_ACCESS;
+		break;
+
+	case ACL_TYPE_DEFAULT:
+		if (!S_ISDIR(inode->i_mode))
+			return acl ? -EACCES : 0;
+		name = XATTR_NAME_POSIX_ACL_DEFAULT;
+		break;
+
+	default:
+		return -EINVAL;
+	}
+
+	if (!acl)
+		goto out;
+
+	size = posix_acl_xattr_size(acl->a_count);
+	value = ntfs_alloc(size, 0);
+	if (!value)
+		return -ENOMEM;
+
+	err = posix_acl_to_xattr(&init_user_ns, acl, value, size);
+	if (err)
+		goto out;
+
+	err = ntfs_set_ea(inode, name, value, size, 0, locked);
+	if (err)
+		goto out;
+
+out:
+	if (!err)
+		set_cached_acl(inode, type, acl);
+
+	kfree(value);
+
+	return err;
+}
+
+/*
+ * ntfs_set_acl
+ *
+ * inode_operations::set_acl
+ */
+int ntfs_set_acl(struct inode *inode, struct posix_acl *acl, int type)
+{
+	return ntfs_set_acl_ex(inode, acl, type, 0);
+}
+
+static int ntfs_xattr_get_acl(struct inode *inode, int type, void *buffer,
+			      size_t size)
+{
+	struct super_block *sb = inode->i_sb;
+	struct posix_acl *acl;
+	int err;
+
+	if (!(sb->s_flags & SB_POSIXACL))
+		return -EOPNOTSUPP;
+
+	acl = ntfs_get_acl(inode, type);
+	if (IS_ERR(acl))
+		return PTR_ERR(acl);
+
+	if (!acl)
+		return -ENODATA;
+
+	err = posix_acl_to_xattr(&init_user_ns, acl, buffer, size);
+	ntfs_posix_acl_release(acl);
+
+	return err;
+}
+
+static int ntfs_xattr_set_acl(struct inode *inode, int type, const void *value,
+			      size_t size)
+{
+	struct super_block *sb = inode->i_sb;
+	struct posix_acl *acl;
+	int err;
+
+	if (!(sb->s_flags & SB_POSIXACL))
+		return -EOPNOTSUPP;
+
+	if (!inode_owner_or_capable(inode))
+		return -EPERM;
+
+	if (!value)
+		return 0;
+
+	acl = posix_acl_from_xattr(&init_user_ns, value, size);
+	if (IS_ERR(acl))
+		return PTR_ERR(acl);
+
+	if (acl) {
+		err = posix_acl_valid(sb->s_user_ns, acl);
+		if (err)
+			goto release_and_out;
+	}
+
+	err = ntfs_set_acl(inode, acl, type);
+
+release_and_out:
+	ntfs_posix_acl_release(acl);
+	return err;
+}
+
+/*
+ * ntfs_acl_chmod
+ *
+ * helper for 'ntfs_setattr'
+ */
+int ntfs_acl_chmod(struct inode *inode)
+{
+	struct super_block *sb = inode->i_sb;
+	int err;
+
+	if (!(sb->s_flags & SB_POSIXACL))
+		return 0;
+
+	if (S_ISLNK(inode->i_mode))
+		return -EOPNOTSUPP;
+
+	err = posix_acl_chmod(inode, inode->i_mode);
+
+	return err;
+}
+
+/*
+ * ntfs_permission
+ *
+ * inode_operations::permission
+ */
+int ntfs_permission(struct inode *inode, int mask)
+{
+	struct super_block *sb = inode->i_sb;
+	struct ntfs_sb_info *sbi = sb->s_fs_info;
+	int err;
+
+	if (sbi->options.no_acs_rules) {
+		/* "no access rules" mode - allow all changes */
+		return 0;
+	}
+
+	err = generic_permission(inode, mask);
+
+	return err;
+}
+
+/*
+ * ntfs_listxattr
+ *
+ * inode_operations::listxattr
+ */
+ssize_t ntfs_listxattr(struct dentry *dentry, char *buffer, size_t size)
+{
+	struct inode *inode = d_inode(dentry);
+	struct ntfs_inode *ni = ntfs_i(inode);
+	ssize_t ret = -1;
+	int err;
+
+	if (!(ni->ni_flags & NI_FLAG_EA)) {
+		ret = 0;
+		goto out;
+	}
+
+	ni_lock(ni);
+
+	err = ntfs_listxattr_hlp(ni, buffer, size, (size_t *)&ret);
+
+	ni_unlock(ni);
+
+	if (err)
+		ret = err;
+out:
+
+	return ret;
+}
+
+static int ntfs_getxattr(const struct xattr_handler *handler, struct dentry *de,
+			 struct inode *inode, const char *name, void *buffer,
+			 size_t size)
+{
+	int err;
+	struct ntfs_inode *ni = ntfs_i(inode);
+	size_t name_len = strlen(name);
+
+	/* Dispatch request */
+	if (name_len == sizeof(SYSTEM_DOS_ATTRIB) - 1 &&
+	    !memcmp(name, SYSTEM_DOS_ATTRIB, sizeof(SYSTEM_DOS_ATTRIB))) {
+		/* system.dos_attrib */
+		if (!buffer) {
+			err = sizeof(u8);
+		} else if (size < sizeof(u8)) {
+			err = -ENODATA;
+		} else {
+			err = sizeof(u8);
+			*(u8 *)buffer = le32_to_cpu(ni->std_fa);
+		}
+		goto out;
+	}
+
+	if (name_len == sizeof(SYSTEM_NTFS_ATTRIB) - 1 &&
+	    !memcmp(name, SYSTEM_NTFS_ATTRIB, sizeof(SYSTEM_NTFS_ATTRIB))) {
+		/* system.ntfs_attrib */
+		if (!buffer) {
+			err = sizeof(u32);
+		} else if (size < sizeof(u32)) {
+			err = -ENODATA;
+		} else {
+			err = sizeof(u32);
+			*(u32 *)buffer = le32_to_cpu(ni->std_fa);
+		}
+		goto out;
+	}
+
+	if (name_len == sizeof(SYSTEM_NTFS_ATTRIB_BE) - 1 &&
+	    !memcmp(name, SYSTEM_NTFS_ATTRIB_BE,
+		    sizeof(SYSTEM_NTFS_ATTRIB_BE))) {
+		/* system.ntfs_attrib_be */
+		if (!buffer) {
+			err = sizeof(u32);
+		} else if (size < sizeof(u32)) {
+			err = -ENODATA;
+		} else {
+			err = sizeof(u32);
+			*(__be32 *)buffer =
+				cpu_to_be32(le32_to_cpu(ni->std_fa));
+		}
+		goto out;
+	}
+
+	if (name_len == sizeof(USER_DOSATTRIB) - 1 &&
+	    !memcmp(name, USER_DOSATTRIB, sizeof(USER_DOSATTRIB))) {
+		/* user.DOSATTRIB */
+		if (!buffer) {
+			err = 5;
+		} else if (size < 5) {
+			err = -ENODATA;
+		} else {
+			err = sprintf((char *)buffer, "0x%x",
+				      le32_to_cpu(ni->std_fa) & 0xff) +
+			      1;
+		}
+		goto out;
+	}
+
+	if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
+	     !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
+		     sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
+	    (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
+	     !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
+		     sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
+		err = ntfs_xattr_get_acl(
+			inode,
+			name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 ?
+				ACL_TYPE_ACCESS :
+				ACL_TYPE_DEFAULT,
+			buffer, size);
+	} else {
+		err = ntfs_getxattr_hlp(inode, name, buffer, size, NULL);
+	}
+
+out:
+	return err;
+}
+
+/*
+ * ntfs_setxattr
+ *
+ * inode_operations::setxattr
+ */
+static noinline int ntfs_setxattr(const struct xattr_handler *handler,
+				  struct dentry *de, struct inode *inode,
+				  const char *name, const void *value,
+				  size_t size, int flags)
+{
+	int err = -EINVAL;
+	struct ntfs_inode *ni = ntfs_i(inode);
+	size_t name_len = strlen(name);
+	u32 attrib = 0; /* not necessary just to suppress warnings */
+	__le32 new_fa;
+
+	/* Dispatch request */
+	if (name_len == sizeof(SYSTEM_DOS_ATTRIB) - 1 &&
+	    !memcmp(name, SYSTEM_DOS_ATTRIB, sizeof(SYSTEM_DOS_ATTRIB))) {
+		if (sizeof(u8) != size)
+			goto out;
+		attrib = *(u8 *)value;
+		goto set_dos_attr;
+	}
+
+	if (name_len == sizeof(SYSTEM_NTFS_ATTRIB) - 1 &&
+	    !memcmp(name, SYSTEM_NTFS_ATTRIB, sizeof(SYSTEM_NTFS_ATTRIB))) {
+		if (sizeof(u32) != size)
+			goto out;
+		attrib = *(u32 *)value;
+		goto set_dos_attr;
+	}
+
+	if (name_len == sizeof(SYSTEM_NTFS_ATTRIB_BE) - 1 &&
+	    !memcmp(name, SYSTEM_NTFS_ATTRIB_BE,
+		    sizeof(SYSTEM_NTFS_ATTRIB_BE))) {
+		if (sizeof(u32) != size)
+			goto out;
+		attrib = be32_to_cpu(*(__be32 *)value);
+		goto set_dos_attr;
+	}
+
+	if (name_len == sizeof(USER_DOSATTRIB) - 1 &&
+	    !memcmp(name, USER_DOSATTRIB, sizeof(USER_DOSATTRIB))) {
+		if (size < 4 || ((char *)value)[size - 1])
+			goto out;
+
+		/*
+		 * The input value must be a string in the form 0x%x with a
+		 * trailing zero. This means that 'size' must be 4, 5, ...
+		 *  E.g: 0x1 - 4 bytes, 0x20 - 5 bytes
+		 */
+		if (sscanf((char *)value, "0x%x", &attrib) != 1)
+			goto out;
+
+set_dos_attr:
+		if (!value)
+			goto out;
+
+		/*
+		 * Thanks Mark Harmstone:
+		 * keep directory bit consistency
+		 */
+		new_fa = cpu_to_le32(attrib);
+		if (S_ISDIR(inode->i_mode))
+			new_fa |= FILE_ATTRIBUTE_DIRECTORY;
+		else
+			new_fa &= ~FILE_ATTRIBUTE_DIRECTORY;
+
+		if (ni->std_fa != new_fa) {
+			ni->std_fa = new_fa;
+			mark_inode_dirty(inode);
+		}
+		err = 0;
+
+		goto out;
+	}
+
+	if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
+	     !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
+		     sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
+	    (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
+	     !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
+		     sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
+		err = ntfs_xattr_set_acl(
+			inode,
+			name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 ?
+				ACL_TYPE_ACCESS :
+				ACL_TYPE_DEFAULT,
+			value, size);
+	} else {
+		err = ntfs_set_ea(inode, name, value, size, flags, 0);
+	}
+
+out:
+	return err;
+}
+
+/*
+ * Initialize the ACLs of a new inode. Called from ntfs_create_inode.
+ */
+int ntfs_init_acl(struct inode *inode, struct inode *dir)
+{
+	struct posix_acl *default_acl, *acl;
+	int err;
+
+	/*
+	 * TODO refactoring lock
+	 * ni_lock(dir) ... -> posix_acl_create(dir,...) -> ntfs_get_acl -> ni_lock(dir)
+	 */
+	inode->i_default_acl = NULL;
+
+	default_acl = ntfs_get_acl_ex(dir, ACL_TYPE_DEFAULT, 1);
+
+	if (!default_acl || default_acl == ERR_PTR(-EOPNOTSUPP)) {
+		inode->i_mode &= ~current_umask();
+		err = 0;
+		goto out;
+	}
+
+	if (IS_ERR(default_acl)) {
+		err = PTR_ERR(default_acl);
+		goto out;
+	}
+
+	acl = default_acl;
+	err = __posix_acl_create(&acl, GFP_NOFS, &inode->i_mode);
+	if (err < 0)
+		goto out1;
+	if (!err) {
+		posix_acl_release(acl);
+		acl = NULL;
+	}
+
+	if (!S_ISDIR(inode->i_mode)) {
+		posix_acl_release(default_acl);
+		default_acl = NULL;
+	}
+
+	if (default_acl)
+		err = ntfs_set_acl_ex(inode, default_acl, ACL_TYPE_DEFAULT, 1);
+
+	if (!acl)
+		inode->i_acl = NULL;
+	else if (!err)
+		err = ntfs_set_acl_ex(inode, acl, ACL_TYPE_ACCESS, 1);
+
+	posix_acl_release(acl);
+out1:
+	posix_acl_release(default_acl);
+
+out:
+	return err;
+}
+
+static bool ntfs_xattr_user_list(struct dentry *dentry)
+{
+	return 1;
+}
+
+static const struct xattr_handler ntfs_xattr_handler = {
+	.prefix = "",
+	.get = ntfs_getxattr,
+	.set = ntfs_setxattr,
+	.list = ntfs_xattr_user_list,
+};
+
+const struct xattr_handler *ntfs_xattr_handlers[] = { &ntfs_xattr_handler,
+						      NULL };
-- 
2.25.4


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v5 06/10] fs/ntfs3: Add compression
  2020-09-11 14:10 [PATCH v5 00/10] NTFS read-write driver GPL implementation by Paragon Software Konstantin Komarov
                   ` (2 preceding siblings ...)
  2020-09-11 14:10 ` [PATCH v5 05/10] fs/ntfs3: Add attrib operations Konstantin Komarov
@ 2020-09-11 14:10 ` Konstantin Komarov
  2020-09-11 14:10 ` [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc Konstantin Komarov
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Konstantin Komarov @ 2020-09-11 14:10 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: viro, linux-kernel, pali, dsterba, aaptel, willy, rdunlap, joe,
	mark, nborisov, Konstantin Komarov

This adds compression

Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
---
 fs/ntfs3/lznt.c | 452 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 452 insertions(+)
 create mode 100644 fs/ntfs3/lznt.c

diff --git a/fs/ntfs3/lznt.c b/fs/ntfs3/lznt.c
new file mode 100644
index 000000000000..c9bdecfb1294
--- /dev/null
+++ b/fs/ntfs3/lznt.c
@@ -0,0 +1,452 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  linux/fs/ntfs3/lznt.c
+ *
+ * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved.
+ *
+ */
+#include <linux/blkdev.h>
+#include <linux/buffer_head.h>
+#include <linux/fs.h>
+#include <linux/nls.h>
+#include <linux/sched/signal.h>
+
+#include "debug.h"
+#include "ntfs.h"
+#include "ntfs_fs.h"
+
+/* src buffer is zero */
+#define LZNT_ERROR_ALL_ZEROS 1
+#define LZNT_CHUNK_SIZE 0x1000
+
+struct lznt_hash {
+	const u8 *p1;
+	const u8 *p2;
+};
+
+struct lznt {
+	const u8 *unc;
+	const u8 *unc_end;
+	const u8 *best_match;
+	size_t max_len;
+	bool std;
+
+	struct lznt_hash hash[LZNT_CHUNK_SIZE];
+};
+
+static inline size_t get_match_len(const u8 *ptr, const u8 *end, const u8 *prev,
+				   size_t max_len)
+{
+	size_t len = 0;
+
+	while (ptr + len < end && ptr[len] == prev[len] && ++len < max_len)
+		;
+	return len;
+}
+
+static size_t longest_match_std(const u8 *src, struct lznt *ctx)
+{
+	size_t hash_index;
+	size_t len1 = 0, len2 = 0;
+	const u8 **hash;
+
+	hash_index =
+		((40543U * ((((src[0] << 4) ^ src[1]) << 4) ^ src[2])) >> 4) &
+		(LZNT_CHUNK_SIZE - 1);
+
+	hash = &(ctx->hash[hash_index].p1);
+
+	if (hash[0] >= ctx->unc && hash[0] < src && hash[0][0] == src[0] &&
+	    hash[0][1] == src[1] && hash[0][2] == src[2]) {
+		len1 = 3;
+		if (ctx->max_len > 3)
+			len1 += get_match_len(src + 3, ctx->unc_end,
+					      hash[0] + 3, ctx->max_len - 3);
+	}
+
+	if (hash[1] >= ctx->unc && hash[1] < src && hash[1][0] == src[0] &&
+	    hash[1][1] == src[1] && hash[1][2] == src[2]) {
+		len2 = 3;
+		if (ctx->max_len > 3)
+			len2 += get_match_len(src + 3, ctx->unc_end,
+					      hash[1] + 3, ctx->max_len - 3);
+	}
+
+	/* Compare two matches and select the best one */
+	if (len1 < len2) {
+		ctx->best_match = hash[1];
+		len1 = len2;
+	} else {
+		ctx->best_match = hash[0];
+	}
+
+	hash[1] = hash[0];
+	hash[0] = src;
+	return len1;
+}
+
+static size_t longest_match_best(const u8 *src, struct lznt *ctx)
+{
+	size_t max_len;
+	const u8 *ptr;
+
+	if (ctx->unc >= src || !ctx->max_len)
+		return 0;
+
+	max_len = 0;
+	for (ptr = ctx->unc; ptr < src; ++ptr) {
+		size_t len =
+			get_match_len(src, ctx->unc_end, ptr, ctx->max_len);
+		if (len >= max_len) {
+			max_len = len;
+			ctx->best_match = ptr;
+		}
+	}
+
+	return max_len >= 3 ? max_len : 0;
+}
+
+static const size_t s_max_len[] = {
+	0x1002, 0x802, 0x402, 0x202, 0x102, 0x82, 0x42, 0x22, 0x12,
+};
+
+static const size_t s_max_off[] = {
+	0x10, 0x20, 0x40, 0x80, 0x100, 0x200, 0x400, 0x800, 0x1000,
+};
+
+static inline u16 make_pair(size_t offset, size_t len, size_t index)
+{
+	return ((offset - 1) << (12 - index)) |
+	       ((len - 3) & (((1 << (12 - index)) - 1)));
+}
+
+static inline size_t parse_pair(u16 pair, size_t *offset, size_t index)
+{
+	*offset = 1 + (pair >> (12 - index));
+	return 3 + (pair & ((1 << (12 - index)) - 1));
+}
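+
+/*
+ * Worked example of the pair encoding above (derived from the code, for
+ * clarity): with index == 0 the 16-bit pair holds a 4-bit (offset - 1)
+ * and a 12-bit (length - 3), so make_pair(2, 4, 0) == 0x1001 and
+ * parse_pair(0x1001, &off, 0) gives off == 2 and returns length == 4.
+ */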
+
+/*
+ * compress_chunk
+ *
+ * returns one of three values:
+ * 0 - ok, 'cmpr' contains 'cmpr_chunk_size' bytes of compressed data
+ * 1 - input buffer is all zeros
+ * -2 - the compressed buffer is too small to hold the compressed data
+ */
+static inline int compress_chunk(size_t (*match)(const u8 *, struct lznt *),
+				 const u8 *unc, const u8 *unc_end, u8 *cmpr,
+				 u8 *cmpr_end, size_t *cmpr_chunk_size,
+				 struct lznt *ctx)
+{
+	size_t cnt = 0;
+	size_t idx = 0;
+	const u8 *up = unc;
+	u8 *cp = cmpr + 3;
+	u8 *cp2 = cmpr + 2;
+	u8 not_zero = 0;
+	/* Control byte of 8-bit values: ( 0 - means byte as is, 1 - short pair ) */
+	u8 ohdr = 0;
+	u8 *last;
+	u16 t16;
+
+	if (unc + LZNT_CHUNK_SIZE < unc_end)
+		unc_end = unc + LZNT_CHUNK_SIZE;
+
+	last = min(cmpr + LZNT_CHUNK_SIZE + sizeof(short), cmpr_end);
+
+	ctx->unc = unc;
+	ctx->unc_end = unc_end;
+	ctx->max_len = s_max_len[0];
+
+	while (up < unc_end) {
+		size_t max_len;
+
+		while (unc + s_max_off[idx] < up)
+			ctx->max_len = s_max_len[++idx];
+
+		// Find match
+		max_len = up + 3 <= unc_end ? (*match)(up, ctx) : 0;
+
+		if (!max_len) {
+			if (cp >= last)
+				goto NotCompressed;
+			not_zero |= *cp++ = *up++;
+		} else if (cp + 1 >= last) {
+			goto NotCompressed;
+		} else {
+			t16 = make_pair(up - ctx->best_match, max_len, idx);
+			*cp++ = t16;
+			*cp++ = t16 >> 8;
+
+			ohdr |= 1 << cnt;
+			up += max_len;
+		}
+
+		cnt = (cnt + 1) & 7;
+		if (!cnt) {
+			*cp2 = ohdr;
+			ohdr = 0;
+			cp2 = cp;
+			cp += 1;
+		}
+	}
+
+	if (cp2 < last)
+		*cp2 = ohdr;
+	else
+		cp -= 1;
+
+	*cmpr_chunk_size = cp - cmpr;
+
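+	/*
+	 * Note (LZNT1 chunk header, as used below): bits 0-11 store
+	 * (chunk size - 3), bits 12-14 are the signature (always 3) and
+	 * bit 15 is set when the chunk payload is compressed. Hence the
+	 * 0xB000 below and the 0x3FFF header in the stored-as-is path.
+	 */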
+	t16 = (*cmpr_chunk_size - 3) | 0xB000;
+	cmpr[0] = t16;
+	cmpr[1] = t16 >> 8;
+
+	return not_zero ? 0 : LZNT_ERROR_ALL_ZEROS;
+
+NotCompressed:
+
+	if ((cmpr + LZNT_CHUNK_SIZE + sizeof(short)) > last)
+		return -2;
+
+	/*
+	 * Copy non cmpr data
+	 * 0x3FFF == ((LZNT_CHUNK_SIZE + 2 - 3) | 0x3000)
+	 */
+	cmpr[0] = 0xff;
+	cmpr[1] = 0x3f;
+
+	memcpy(cmpr + sizeof(short), unc, LZNT_CHUNK_SIZE);
+	*cmpr_chunk_size = LZNT_CHUNK_SIZE + sizeof(short);
+
+	return 0;
+}
+
+static inline ssize_t decompress_chunk(u8 *unc, u8 *unc_end, const u8 *cmpr,
+				       const u8 *cmpr_end)
+{
+	u8 *up = unc;
+	u8 ch = *cmpr++;
+	size_t bit = 0;
+	size_t index = 0;
+	u16 pair;
+	size_t offset, length;
+
+	/* Do decompression until pointers are inside range */
+	while (up < unc_end && cmpr < cmpr_end) {
+		/* Correct index */
+		while (unc + s_max_off[index] < up)
+			index += 1;
+
+		/* Check the current flag for zero */
+		if (!(ch & (1 << bit))) {
+			/* Just copy byte */
+			*up++ = *cmpr++;
+			goto next;
+		}
+
+		/* Check for boundary */
+		if (cmpr + 1 >= cmpr_end)
+			return -EINVAL;
+
+		/* Read a short from little endian stream */
+		pair = cmpr[1];
+		pair <<= 8;
+		pair |= cmpr[0];
+
+		cmpr += 2;
+
+		/* Translate packed information into offset and length */
+		length = parse_pair(pair, &offset, index);
+
+		/* Check offset for boundary */
+		if (unc + offset > up)
+			return -EINVAL;
+
+		/* Truncate the length if necessary */
+		if (up + length >= unc_end)
+			length = unc_end - up;
+
+		/* Now we copy bytes. This is the heart of LZ algorithm. */
+		for (; length > 0; length--, up++)
+			*up = *(up - offset);
+
+next:
+		/* Advance flag bit value */
+		bit = (bit + 1) & 7;
+
+		if (!bit) {
+			if (cmpr >= cmpr_end)
+				break;
+
+			ch = *cmpr++;
+		}
+	}
+
+	/* return the size of uncompressed data */
+	return up - unc;
+}
+
+/*
+ * std = true - standard compression
+ * std = false - best compression, requires a lot of cpu
+ */
+struct lznt *get_compression_ctx(bool std)
+{
+	struct lznt *r = ntfs_alloc(
+		std ? sizeof(struct lznt) : offsetof(struct lznt, hash), 1);
+
+	if (r)
+		r->std = std;
+	return r;
+}
+
+/*
+ * compress_lznt
+ *
+ * Compresses "unc" into "cmpr"
+ * +x - ok, 'cmpr' contains 'final_compressed_size' bytes of compressed data
+ * 0 - input buffer is all zeros
+ */
+size_t compress_lznt(const void *unc, size_t unc_size, void *cmpr,
+		     size_t cmpr_size, struct lznt *ctx)
+{
+	int err;
+	size_t (*match)(const u8 *src, struct lznt *ctx);
+	u8 *p = cmpr;
+	u8 *end = p + cmpr_size;
+	const u8 *unc_chunk = unc;
+	const u8 *unc_end = unc_chunk + unc_size;
+	bool is_zero = true;
+
+	if (ctx->std) {
+		match = &longest_match_std;
+		memset(ctx->hash, 0, sizeof(ctx->hash));
+	} else {
+		match = &longest_match_best;
+	}
+
+	/* compression cycle */
+	for (; unc_chunk < unc_end; unc_chunk += LZNT_CHUNK_SIZE) {
+		cmpr_size = 0;
+		err = compress_chunk(match, unc_chunk, unc_end, p, end,
+				     &cmpr_size, ctx);
+		if (err < 0)
+			return unc_size;
+
+		if (is_zero && err != LZNT_ERROR_ALL_ZEROS)
+			is_zero = false;
+
+		p += cmpr_size;
+	}
+
+	if (p <= end - 2)
+		p[0] = p[1] = 0;
+
+	return is_zero ? 0 : PtrOffset(cmpr, p);
+}
+
+/*
+ * decompress_lznt
+ *
+ * decompresses "cmpr" into "unc"
+ */
+ssize_t decompress_lznt(const void *cmpr, size_t cmpr_size, void *unc,
+			size_t unc_size)
+{
+	const u8 *cmpr_chunk = cmpr;
+	const u8 *cmpr_end = cmpr_chunk + cmpr_size;
+	u8 *unc_chunk = unc;
+	u8 *unc_end = unc_chunk + unc_size;
+	u16 chunk_hdr;
+
+	if (cmpr_size < sizeof(short))
+		return -EINVAL;
+
+	/* read chunk header */
+	chunk_hdr = cmpr_chunk[1];
+	chunk_hdr <<= 8;
+	chunk_hdr |= cmpr_chunk[0];
+
+	/* loop through decompressing chunks */
+	for (;;) {
+		size_t chunk_size_saved;
+		size_t unc_use;
+		size_t cmpr_use = 3 + (chunk_hdr & (LZNT_CHUNK_SIZE - 1));
+
+		/* Check that the chunk actually fits the supplied buffer */
+		if (cmpr_chunk + cmpr_use > cmpr_end)
+			return -EINVAL;
+
+		/* First make sure the chunk contains compressed data */
+		if (chunk_hdr & 0x8000) {
+			/* Decompress a chunk and return if we get an error */
+			ssize_t err =
+				decompress_chunk(unc_chunk, unc_end,
+						 cmpr_chunk + sizeof(chunk_hdr),
+						 cmpr_chunk + cmpr_use);
+			if (err < 0)
+				return err;
+			unc_use = err;
+		} else {
+			/* This chunk does not contain compressed data */
+			unc_use = unc_chunk + LZNT_CHUNK_SIZE > unc_end ?
+					  unc_end - unc_chunk :
+					  LZNT_CHUNK_SIZE;
+
+			if (cmpr_chunk + sizeof(chunk_hdr) + unc_use >
+			    cmpr_end) {
+				return -EINVAL;
+			}
+
+			memcpy(unc_chunk, cmpr_chunk + sizeof(chunk_hdr),
+			       unc_use);
+		}
+
+		/* Advance pointers */
+		cmpr_chunk += cmpr_use;
+		unc_chunk += unc_use;
+
+		/* Check for the end of unc buffer */
+		if (unc_chunk >= unc_end)
+			break;
+
+		/* Proceed to the next chunk */
+		if (cmpr_chunk > cmpr_end - 2)
+			break;
+
+		chunk_size_saved = LZNT_CHUNK_SIZE;
+
+		/* read chunk header */
+		chunk_hdr = cmpr_chunk[1];
+		chunk_hdr <<= 8;
+		chunk_hdr |= cmpr_chunk[0];
+
+		if (!chunk_hdr)
+			break;
+
+		/* Check the size of unc buffer */
+		if (unc_use < chunk_size_saved) {
+			size_t t1 = chunk_size_saved - unc_use;
+			u8 *t2 = unc_chunk + t1;
+
+			/* 'Zero' memory */
+			if (t2 >= unc_end)
+				break;
+
+			memset(unc_chunk, 0, t1);
+			unc_chunk = t2;
+		}
+	}
+
+	/* Check compression boundary */
+	if (cmpr_chunk > cmpr_end)
+		return -EINVAL;
+
+	/*
+	 * The unc size is just a difference between current
+	 * pointer and original one
+	 */
+	return PtrOffset(unc, unc_chunk);
+}
-- 
2.25.4


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc
  2020-09-11 14:10 [PATCH v5 00/10] NTFS read-write driver GPL implementation by Paragon Software Konstantin Komarov
                   ` (3 preceding siblings ...)
  2020-09-11 14:10 ` [PATCH v5 06/10] fs/ntfs3: Add compression Konstantin Komarov
@ 2020-09-11 14:10 ` Konstantin Komarov
  2020-09-21 13:26   ` Pali Rohár
  2020-09-11 14:10 ` [PATCH v5 09/10] fs/ntfs3: Add NTFS3 in fs/Kconfig and fs/Makefile Konstantin Komarov
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 23+ messages in thread
From: Konstantin Komarov @ 2020-09-11 14:10 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: viro, linux-kernel, pali, dsterba, aaptel, willy, rdunlap, joe,
	mark, nborisov, Konstantin Komarov

This adds Kconfig, Makefile and doc

Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
---
 Documentation/filesystems/ntfs3.rst | 107 ++++++++++++++++++++++++++++
 fs/ntfs3/Kconfig                    |  23 ++++++
 fs/ntfs3/Makefile                   |  11 +++
 3 files changed, 141 insertions(+)
 create mode 100644 Documentation/filesystems/ntfs3.rst
 create mode 100644 fs/ntfs3/Kconfig
 create mode 100644 fs/ntfs3/Makefile

diff --git a/Documentation/filesystems/ntfs3.rst b/Documentation/filesystems/ntfs3.rst
new file mode 100644
index 000000000000..7b7d71b26c95
--- /dev/null
+++ b/Documentation/filesystems/ntfs3.rst
@@ -0,0 +1,107 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=====
+NTFS3
+=====
+
+
+Summary and Features
+====================
+
+NTFS3 is a fully functional NTFS Read-Write driver. The driver works with
+NTFS versions up to 3.1 and supports normal/compressed/sparse files
+and journal replaying. The file system type to use on mount is 'ntfs3'.
+
+- This driver implements NTFS read/write support for normal, sparse and
+  compressed files.
+  NOTE: Operations with compressed files require increased memory consumption;
+- Supports native journal replaying;
+- Supports extended attributes;
+- Supports NFS export of mounted NTFS volumes.
+
+Mount Options
+=============
+
+The list below describes mount options supported by the NTFS3 driver in
+addition to the generic ones.
+
+===============================================================================
+
+nls=name		This option informs the driver how to interpret path
+			strings and translate them to Unicode and back. If
+			this option is not set, the default codepage will be
+			used (CONFIG_NLS_DEFAULT).
+			Examples:
+				'nls=utf8'
+
+nls_alt=name		This option extends "nls". It will be used to translate
+			path strings to Unicode if the primary nls fails.
+			Examples:
+				'nls_alt=cp1251'
+
+uid=
+gid=
+umask=			Controls the default permissions for files/directories created
+			after the NTFS volume is mounted.
+
+fmask=
+dmask=			Instead of specifying umask which applies both to
+			files and directories, fmask applies only to files and
+			dmask only to directories.
+
+nohidden		Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN)
+			attribute will not be shown under Linux.
+
+sys_immutable		Files with the Windows-specific SYSTEM
+			(FILE_ATTRIBUTE_SYSTEM) attribute will be marked as system
+			immutable files.
+
+discard			Enable support of the TRIM command for improved performance
+			on delete operations, which is recommended for use with the
+			solid-state drives (SSD).
+
+force			Forces the driver to mount partitions even if 'dirty' flag
+			(volume dirty) is set. Not recommended for use.
+
+sparse			Create new files as "sparse".
+
+showmeta		Use this parameter to show all meta-files (System Files) on
+			a mounted NTFS partition.
+			By default, all meta-files are hidden.
+
+prealloc		Preallocate space for files excessively when file size is
+			increasing on writes. Decreases fragmentation in case of
+			parallel write operations to different files.
+
+no_acs_rules		"No access rules" mount option sets access rights for
+			files/folders to 777 and owner/group to root. This mount
+			option overrides all other permission settings:
+			- permission changes for files/folders will be reported
+				as successful, but they will remain 777;
+			- owner/group changes will be reported as successful, but
+				they will stay as root
+
+acl			Support POSIX ACLs (Access Control Lists), if enabled in
+			the kernel. Not to be confused with NTFS ACLs.
+
+noatime			All files and directories will not update their last access
+			time attribute if a partition is mounted with this parameter.
+			This option can speed up file system operation.
+
+===============================================================================
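+
+Example of mounting with several of the options described above (illustrative
+device, mount point and ids):
+
+	'mount -t ntfs3 -o nls=utf8,uid=1000,gid=1000,prealloc /dev/sda1 /mnt/ntfs'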
+
+ToDo list
+=========
+
+- Full journaling support (currently journal replaying is supported) over JBD.
+
+
+References
+==========
+https://www.paragon-software.com/home/ntfs-linux-professional/
+	- Commercial version of the NTFS driver for Linux.
+
+almaz.alexandrovich@paragon-software.com
+	- Direct e-mail address for feedback and requests on the NTFS3 implementation.
+
diff --git a/fs/ntfs3/Kconfig b/fs/ntfs3/Kconfig
new file mode 100644
index 000000000000..92a9c68008c8
--- /dev/null
+++ b/fs/ntfs3/Kconfig
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: GPL-2.0-only
+config NTFS3_FS
+	tristate "NTFS Read-Write file system support"
+	select NLS
+	help
+	  Windows OS native file system (NTFS) support up to NTFS version 3.1.
+
+	  Y or M enables the NTFS3 driver with full features enabled (read,
+	  write, journal replaying, sparse/compressed files support).
+	  File system type to use on mount is "ntfs3". Module name (M option)
+	  is also "ntfs3".
+
+	  Documentation: <file:Documentation/filesystems/ntfs3.rst>
+
+config NTFS3_64BIT_CLUSTER
+	bool "64 bits per NTFS clusters"
+	depends on NTFS3_FS && 64BIT
+	help
+	  The Windows implementation of ntfs.sys uses 32 bits per cluster.
+	  If you activate 64 bits per cluster, you will be able to use 4k
+	  clusters for volumes larger than 16T. Windows will not be able to
+	  mount such volumes.
+
+	  It is recommended to say N here.
diff --git a/fs/ntfs3/Makefile b/fs/ntfs3/Makefile
new file mode 100644
index 000000000000..4d4fe198b8b8
--- /dev/null
+++ b/fs/ntfs3/Makefile
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the ntfs3 filesystem support.
+#
+
+obj-$(CONFIG_NTFS3_FS) += ntfs3.o
+
+ntfs3-objs := bitfunc.o bitmap.o inode.o fsntfs.o frecord.o \
+	    index.o attrlist.o record.o attrib.o run.o xattr.o\
+	    upcase.o super.o file.o dir.o namei.o lznt.o\
+	    fslog.o
-- 
2.25.4


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v5 09/10] fs/ntfs3: Add NTFS3 in fs/Kconfig and fs/Makefile
  2020-09-11 14:10 [PATCH v5 00/10] NTFS read-write driver GPL implementation by Paragon Software Konstantin Komarov
                   ` (4 preceding siblings ...)
  2020-09-11 14:10 ` [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc Konstantin Komarov
@ 2020-09-11 14:10 ` Konstantin Komarov
  2020-09-11 14:10 ` [PATCH v5 10/10] fs/ntfs3: Add MAINTAINERS Konstantin Komarov
       [not found] ` <20200911141018.2457639-3-almaz.alexandrovich@paragon-software.com>
  7 siblings, 0 replies; 23+ messages in thread
From: Konstantin Komarov @ 2020-09-11 14:10 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: viro, linux-kernel, pali, dsterba, aaptel, willy, rdunlap, joe,
	mark, nborisov, Konstantin Komarov

This adds NTFS3 in fs/Kconfig and fs/Makefile

Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
---
 fs/Kconfig  | 1 +
 fs/Makefile | 1 +
 2 files changed, 2 insertions(+)

diff --git a/fs/Kconfig b/fs/Kconfig
index aa4c12282301..eae96d55ab67 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -145,6 +145,7 @@ menu "DOS/FAT/EXFAT/NT Filesystems"
 source "fs/fat/Kconfig"
 source "fs/exfat/Kconfig"
 source "fs/ntfs/Kconfig"
+source "fs/ntfs3/Kconfig"
 
 endmenu
 endif # BLOCK
diff --git a/fs/Makefile b/fs/Makefile
index 1c7b0e3f6daa..b0b4ad8affa0 100644
--- a/fs/Makefile
+++ b/fs/Makefile
@@ -100,6 +100,7 @@ obj-$(CONFIG_SYSV_FS)		+= sysv/
 obj-$(CONFIG_CIFS)		+= cifs/
 obj-$(CONFIG_HPFS_FS)		+= hpfs/
 obj-$(CONFIG_NTFS_FS)		+= ntfs/
+obj-$(CONFIG_NTFS3_FS)		+= ntfs3/
 obj-$(CONFIG_UFS_FS)		+= ufs/
 obj-$(CONFIG_EFS_FS)		+= efs/
 obj-$(CONFIG_JFFS2_FS)		+= jffs2/
-- 
2.25.4


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v5 10/10] fs/ntfs3: Add MAINTAINERS
  2020-09-11 14:10 [PATCH v5 00/10] NTFS read-write driver GPL implementation by Paragon Software Konstantin Komarov
                   ` (5 preceding siblings ...)
  2020-09-11 14:10 ` [PATCH v5 09/10] fs/ntfs3: Add NTFS3 in fs/Kconfig and fs/Makefile Konstantin Komarov
@ 2020-09-11 14:10 ` Konstantin Komarov
       [not found] ` <20200911141018.2457639-3-almaz.alexandrovich@paragon-software.com>
  7 siblings, 0 replies; 23+ messages in thread
From: Konstantin Komarov @ 2020-09-11 14:10 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: viro, linux-kernel, pali, dsterba, aaptel, willy, rdunlap, joe,
	mark, nborisov, Konstantin Komarov

This adds MAINTAINERS

Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
---
 MAINTAINERS | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index b5cfab015bd6..71659f72b83a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12370,6 +12370,13 @@ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/aia21/ntfs.git
 F:	Documentation/filesystems/ntfs.rst
 F:	fs/ntfs/
 
+NTFS3 FILESYSTEM
+M:	Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
+S:	Supported
+W:	http://www.paragon-software.com/
+F:	Documentation/filesystems/ntfs3.rst
+F:	fs/ntfs3/
+
 NUBUS SUBSYSTEM
 M:	Finn Thain <fthain@telegraphics.com.au>
 L:	linux-m68k@lists.linux-m68k.org
-- 
2.25.4


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 01/10] fs/ntfs3: Add headers and misc files
  2020-09-11 14:10 ` [PATCH v5 01/10] fs/ntfs3: Add headers and misc files Konstantin Komarov
@ 2020-09-12  2:28   ` Joe Perches
  0 siblings, 0 replies; 23+ messages in thread
From: Joe Perches @ 2020-09-12  2:28 UTC (permalink / raw)
  To: Konstantin Komarov, linux-fsdevel
  Cc: viro, linux-kernel, pali, dsterba, aaptel, willy, rdunlap, mark,
	nborisov

On Fri, 2020-09-11 at 17:10 +0300, Konstantin Komarov wrote:
> This adds headers and misc files

The code may be ok, but its cosmetics are poor.

> diff --git a/fs/ntfs3/debug.h b/fs/ntfs3/debug.h
[]
> +#define QuadAlign(n) (((n) + 7u) & (~7u))
> +#define IsQuadAligned(n) (!((size_t)(n)&7u))
> +#define Quad2Align(n) (((n) + 15u) & (~15u))
> +#define IsQuad2Aligned(n) (!((size_t)(n)&15u))
> +#define Quad4Align(n) (((n) + 31u) & (~31u))
> +#define IsSizeTAligned(n) (!((size_t)(n) & (sizeof(size_t) - 1)))
> +#define DwordAlign(n) (((n) + 3u) & (~3u))
> +#define IsDwordAligned(n) (!((size_t)(n)&3u))
> +#define WordAlign(n) (((n) + 1u) & (~1u))
> +#define IsWordAligned(n) (!((size_t)(n)&1u))

All of these could use column alignment.
IsSizeTAligned could use the kernel's IS_ALIGNED

#define QuadAlign(n)		(((n) + 7u) & (~7u))
#define IsQuadAligned(n)	(!((size_t)(n) & 7u))
#define Quad2Align(n)		(((n) + 15u) & (~15u))
#define IsQuad2Aligned(n)	(!((size_t)(n) & 15u))

Though all of these could also use two macros with
the same form.  Something like:

#define NTFS3_ALIGN(n, at)	(((n) + ((at) - 1)) & (~((at) - 1)))
#define NTFS3_IS_ALIGNED(n, at)	(!((size_t)(n) & ((at) - 1)))

So all of these could be ordered by size and use actual size

#define WordAlign(n)		NTFS3_ALIGN(n, 2)
#define IsWordAligned(n)	NTFS3_IS_ALIGNED(n, 2)
#define DwordAlign(n)		NTFS3_ALIGN(n, 4)
#define IsDwordAligned(n)	NTFS3_IS_ALIGNED(n, 4)
#define QuadAlign(n)		NTFS3_ALIGN(n, 8)
#define IsQuadAligned(n)	NTFS3_IS_ALIGNED(n, 8)
#define Quad2Align(n)		NTFS3_ALIGN(n, 16)
#define IsQuad2Aligned(n)	NTFS3_IS_ALIGNED(n, 16)

#define IsSizeTAligned(n)	NTFS3_IS_ALIGNED(n, sizeof(size_t))
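
For reference (worth double-checking against your tree), the kernel already
has generic helpers that could replace these entirely, e.g. from
include/linux/kernel.h:

#define ALIGN(x, a)		__ALIGN_KERNEL((x), (a))
#define IS_ALIGNED(x, a)	(((x) & ((typeof(x))(a) - 1)) == 0)

so QuadAlign(n) could simply be ALIGN(n, 8) and IsQuadAligned(n) could be
IS_ALIGNED((size_t)(n), 8).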


> +#ifdef CONFIG_PRINTK
> +__printf(2, 3) void ntfs_printk(const struct super_block *sb, const char *fmt,
> +				...);

Better would be

__printf(2, 3)
void ntfs_printk(const struct super_block *sb, const char *fmt, ...);

etc...

There's a lot of code that could be made more readable for a human.




^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 03/10] fs/ntfs3: Add bitmap
  2020-09-11 14:10 ` [PATCH v5 03/10] fs/ntfs3: Add bitmap Konstantin Komarov
@ 2020-09-13 18:43   ` Joe Perches
  2020-09-14  2:38     ` Matthew Wilcox
  2020-09-18 16:29     ` Konstantin Komarov
  0 siblings, 2 replies; 23+ messages in thread
From: Joe Perches @ 2020-09-13 18:43 UTC (permalink / raw)
  To: Konstantin Komarov, linux-fsdevel
  Cc: viro, linux-kernel, pali, dsterba, aaptel, willy, rdunlap, mark,
	nborisov

On Fri, 2020-09-11 at 17:10 +0300, Konstantin Komarov wrote:
> This adds bitmap

$ make fs/ntfs3/
  SYNC    include/config/auto.conf.cmd
  CALL    scripts/checksyscalls.sh
  CALL    scripts/atomic/check-atomics.sh
  DESCEND  objtool
  CC      fs/ntfs3/bitfunc.o
  CC      fs/ntfs3/bitmap.o
fs/ntfs3/bitmap.c: In function ‘wnd_rescan’:
fs/ntfs3/bitmap.c:556:4: error: implicit declaration of function ‘page_cache_readahead_unbounded’; did you mean ‘page_cache_ra_unbounded’? [-Werror=implicit-function-declaration]
  556 |    page_cache_readahead_unbounded(
      |    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |    page_cache_ra_unbounded
cc1: some warnings being treated as errors
make[2]: *** [scripts/Makefile.build:283: fs/ntfs3/bitmap.o] Error 1
make[1]: *** [scripts/Makefile.build:500: fs/ntfs3] Error 2
make: *** [Makefile:1792: fs] Error 2




^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 03/10] fs/ntfs3: Add bitmap
  2020-09-13 18:43   ` Joe Perches
@ 2020-09-14  2:38     ` Matthew Wilcox
  2020-09-18 16:35       ` Konstantin Komarov
  2020-09-18 16:29     ` Konstantin Komarov
  1 sibling, 1 reply; 23+ messages in thread
From: Matthew Wilcox @ 2020-09-14  2:38 UTC (permalink / raw)
  To: Joe Perches
  Cc: Konstantin Komarov, linux-fsdevel, viro, linux-kernel, pali,
	dsterba, aaptel, rdunlap, mark, nborisov

On Sun, Sep 13, 2020 at 11:43:50AM -0700, Joe Perches wrote:
> On Fri, 2020-09-11 at 17:10 +0300, Konstantin Komarov wrote:
> > This adds bitmap
> 
> $ make fs/ntfs3/
>   SYNC    include/config/auto.conf.cmd
>   CALL    scripts/checksyscalls.sh
>   CALL    scripts/atomic/check-atomics.sh
>   DESCEND  objtool
>   CC      fs/ntfs3/bitfunc.o
>   CC      fs/ntfs3/bitmap.o
> fs/ntfs3/bitmap.c: In function ‘wnd_rescan’:
> fs/ntfs3/bitmap.c:556:4: error: implicit declaration of function ‘page_cache_readahead_unbounded’; did you mean ‘page_cache_ra_unbounded’? [-Werror=implicit-function-declaration]
>   556 |    page_cache_readahead_unbounded(
>       |    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>       |    page_cache_ra_unbounded
> cc1: some warnings being treated as errors
> make[2]: *** [scripts/Makefile.build:283: fs/ntfs3/bitmap.o] Error 1
> make[1]: *** [scripts/Makefile.build:500: fs/ntfs3] Error 2
> make: *** [Makefile:1792: fs] Error 2

That was only just renamed.  More concerningly, the documentation is
quite unambiguous:

 * This function is for filesystems to call when they want to start
 * readahead beyond a file's stated i_size.  This is almost certainly
 * not the function you want to call.  Use page_cache_async_readahead()
 * or page_cache_sync_readahead() instead.


^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH v5 03/10] fs/ntfs3: Add bitmap
  2020-09-13 18:43   ` Joe Perches
  2020-09-14  2:38     ` Matthew Wilcox
@ 2020-09-18 16:29     ` Konstantin Komarov
  2020-09-18 16:38       ` Matthew Wilcox
  1 sibling, 1 reply; 23+ messages in thread
From: Konstantin Komarov @ 2020-09-18 16:29 UTC (permalink / raw)
  To: Joe Perches, linux-fsdevel
  Cc: viro, linux-kernel, pali, dsterba, aaptel, willy, rdunlap, mark,
	nborisov

From: Joe Perches <joe@perches.com>
Sent: Sunday, September 13, 2020 9:44 PM
> 
> On Fri, 2020-09-11 at 17:10 +0300, Konstantin Komarov wrote:
> > This adds bitmap
> 
> $ make fs/ntfs3/
>   SYNC    include/config/auto.conf.cmd
>   CALL    scripts/checksyscalls.sh
>   CALL    scripts/atomic/check-atomics.sh
>   DESCEND  objtool
>   CC      fs/ntfs3/bitfunc.o
>   CC      fs/ntfs3/bitmap.o
> fs/ntfs3/bitmap.c: In function ‘wnd_rescan’:
> fs/ntfs3/bitmap.c:556:4: error: implicit declaration of function ‘page_cache_readahead_unbounded’; did you mean
> ‘page_cache_ra_unbounded’? [-Werror=implicit-function-declaration]
>   556 |    page_cache_readahead_unbounded(
>       |    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>       |    page_cache_ra_unbounded
> cc1: some warnings being treated as errors
> make[2]: *** [scripts/Makefile.build:283: fs/ntfs3/bitmap.o] Error 1
> make[1]: *** [scripts/Makefile.build:500: fs/ntfs3] Error 2
> make: *** [Makefile:1792: fs] Error 2
> 
Hi Joe! Doesn't seem to be an issue for 5.9_rc5. Which repo should've
been used to reproduce?

Thanks.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH v5 03/10] fs/ntfs3: Add bitmap
  2020-09-14  2:38     ` Matthew Wilcox
@ 2020-09-18 16:35       ` Konstantin Komarov
  2020-09-18 16:54         ` Matthew Wilcox
  0 siblings, 1 reply; 23+ messages in thread
From: Konstantin Komarov @ 2020-09-18 16:35 UTC (permalink / raw)
  To: Matthew Wilcox, Joe Perches
  Cc: linux-fsdevel, viro, linux-kernel, pali, dsterba, aaptel,
	rdunlap, mark, nborisov

From: Matthew Wilcox <willy@infradead.org>
Sent: Monday, September 14, 2020 5:39 AM
> 
> On Sun, Sep 13, 2020 at 11:43:50AM -0700, Joe Perches wrote:
> > On Fri, 2020-09-11 at 17:10 +0300, Konstantin Komarov wrote:
> > > This adds bitmap
> >
> > $ make fs/ntfs3/
> >   SYNC    include/config/auto.conf.cmd
> >   CALL    scripts/checksyscalls.sh
> >   CALL    scripts/atomic/check-atomics.sh
> >   DESCEND  objtool
> >   CC      fs/ntfs3/bitfunc.o
> >   CC      fs/ntfs3/bitmap.o
> > fs/ntfs3/bitmap.c: In function ‘wnd_rescan’:
> > fs/ntfs3/bitmap.c:556:4: error: implicit declaration of function ‘page_cache_readahead_unbounded’; did you mean
> ‘page_cache_ra_unbounded’? [-Werror=implicit-function-declaration]
> >   556 |    page_cache_readahead_unbounded(
> >       |    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >       |    page_cache_ra_unbounded
> > cc1: some warnings being treated as errors
> > make[2]: *** [scripts/Makefile.build:283: fs/ntfs3/bitmap.o] Error 1
> > make[1]: *** [scripts/Makefile.build:500: fs/ntfs3] Error 2
> > make: *** [Makefile:1792: fs] Error 2
> 
> That was only just renamed.  More concerningly, the documentation is
> quite unambiguous:
> 
>  * This function is for filesystems to call when they want to start
>  * readahead beyond a file's stated i_size.  This is almost certainly
>  * not the function you want to call.  Use page_cache_async_readahead()
>  * or page_cache_sync_readahead() instead.

Hi Matthew! It's not so clear to us, for several reasons (please correct
us if this is wrong):
page_cache_sync_readahead() seems applicable as a replacement, but it
doesn't seem reasonable, since readahead in this case gives a perf
improvement precisely because of its async nature. The 'async' function is
an incompatible replacement based on its argument list.

Thanks.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 03/10] fs/ntfs3: Add bitmap
  2020-09-18 16:29     ` Konstantin Komarov
@ 2020-09-18 16:38       ` Matthew Wilcox
  0 siblings, 0 replies; 23+ messages in thread
From: Matthew Wilcox @ 2020-09-18 16:38 UTC (permalink / raw)
  To: Konstantin Komarov
  Cc: Joe Perches, linux-fsdevel, viro, linux-kernel, pali, dsterba,
	aaptel, rdunlap, mark, nborisov

On Fri, Sep 18, 2020 at 04:29:34PM +0000, Konstantin Komarov wrote:
> From: Joe Perches <joe@perches.com>
> Sent: Sunday, September 13, 2020 9:44 PM
> > 
> > On Fri, 2020-09-11 at 17:10 +0300, Konstantin Komarov wrote:
> > > This adds bitmap
> > 
> > $ make fs/ntfs3/
> >   SYNC    include/config/auto.conf.cmd
> >   CALL    scripts/checksyscalls.sh
> >   CALL    scripts/atomic/check-atomics.sh
> >   DESCEND  objtool
> >   CC      fs/ntfs3/bitfunc.o
> >   CC      fs/ntfs3/bitmap.o
> > fs/ntfs3/bitmap.c: In function ‘wnd_rescan’:
> > fs/ntfs3/bitmap.c:556:4: error: implicit declaration of function ‘page_cache_readahead_unbounded’; did you mean
> > ‘page_cache_ra_unbounded’? [-Werror=implicit-function-declaration]
> >   556 |    page_cache_readahead_unbounded(
> >       |    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >       |    page_cache_ra_unbounded
> > cc1: some warnings being treated as errors
> > make[2]: *** [scripts/Makefile.build:283: fs/ntfs3/bitmap.o] Error 1
> > make[1]: *** [scripts/Makefile.build:500: fs/ntfs3] Error 2
> > make: *** [Makefile:1792: fs] Error 2
> > 
> Hi Joe! Doesn't seem to be an issue for 5.9_rc5. Which repo should've
> been used to reproduce?

It's in the -mm tree, which is included in linux-next.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH v5 02/10] fs/ntfs3: Add initialization of super block
       [not found]   ` <c819ee72-6bb0-416d-dfc4-0bc2ad6d0ccd@harmstone.com>
@ 2020-09-18 16:39     ` Konstantin Komarov
  0 siblings, 0 replies; 23+ messages in thread
From: Konstantin Komarov @ 2020-09-18 16:39 UTC (permalink / raw)
  To: Mark Harmstone, linux-fsdevel
  Cc: viro, linux-kernel, pali, dsterba, aaptel, willy, rdunlap, joe, nborisov

From: Mark Harmstone <mark.harmstone@gmail.com> On Behalf Of Mark Harmstone
Sent: Friday, September 11, 2020 7:19 PM
> Subject: Re: [PATCH v5 02/10] fs/ntfs3: Add initialization of super block
> 
> Am I right in that inodes will only ever be created with one of two security
> descriptors? This seems like a significant shortcoming - Windows doesn't have
> traverse checking turned on by default, which means a file created by Linux
> will be accessible to any user on Windows, provided they know its name.
> 
> There's documentation on how to compute a SD on MSDN, but it's not trivial:
> https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-dtyp/98267ad6-66db-4a2c-972e-efb7d4603da1
> 

Hi Mark! You are right. Also, in V6 a single default value will be used.
This implementation is not positioned as a full implementation of the spec, however.
Please check out our V6, it has several adjustments on SDs inspired
by your feedback.

> On 11/9/20 3:10 pm, Konstantin Komarov wrote:
> > This adds initialization of super block
> >
> > Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
> > ---
> >  fs/ntfs3/fsntfs.c | 2210 +++++++++++++++++++++++++++++++++++++
> >  fs/ntfs3/index.c  | 2639 +++++++++++++++++++++++++++++++++++++++++++++
> >  fs/ntfs3/inode.c  | 2004 ++++++++++++++++++++++++++++++++++
> >  fs/ntfs3/super.c  | 1430 ++++++++++++++++++++++++
> >  4 files changed, 8283 insertions(+)
> >  create mode 100644 fs/ntfs3/fsntfs.c
> >  create mode 100644 fs/ntfs3/index.c
> >  create mode 100644 fs/ntfs3/inode.c
> >  create mode 100644 fs/ntfs3/super.c
> >
> > diff --git a/fs/ntfs3/fsntfs.c b/fs/ntfs3/fsntfs.c
> > new file mode 100644
> > index 000000000000..3814b62331db
> > --- /dev/null
> > +++ b/fs/ntfs3/fsntfs.c
> > @@ -0,0 +1,2210 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + *  linux/fs/ntfs3/fsntfs.c
> > + *
> > + * Copyright (C) 2019-2020 Paragon Software GmbH, All rights reserved.
> > + *
> > + */
> > +
> > +#include <linux/blkdev.h>
> > +#include <linux/buffer_head.h>
> > +#include <linux/fs.h>
> > +#include <linux/nls.h>
> > +#include <linux/sched/signal.h>
> > +
> > +#include "debug.h"
[]
> > +	return err;
> > +}
> > +
> > +static void __exit exit_ntfs_fs(void)
> > +{
> > +	if (ntfs_inode_cachep) {
> > +		rcu_barrier();
> > +		kmem_cache_destroy(ntfs_inode_cachep);
> > +	}
> > +
> > +	unregister_filesystem(&ntfs_fs_type);
> > +}
> > +
> > +MODULE_LICENSE("GPL");
> > +MODULE_DESCRIPTION("ntfs3 filesystem");
> > +MODULE_AUTHOR("Konstantin Komarov");
> > +MODULE_ALIAS_FS("ntfs3");
> > +
> > +module_init(init_ntfs_fs) module_exit(exit_ntfs_fs)
> 

Thanks.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH v5 02/10] fs/ntfs3: Add initialization of super block
       [not found]   ` <2011cd8c-7dc4-b2e7-114b-d5647336ec92@harmstone.com>
@ 2020-09-18 16:43     ` Konstantin Komarov
  0 siblings, 0 replies; 23+ messages in thread
From: Konstantin Komarov @ 2020-09-18 16:43 UTC (permalink / raw)
  To: Mark Harmstone, linux-fsdevel
  Cc: viro, linux-kernel, pali, dsterba, aaptel, willy, rdunlap, joe, nborisov

From: Mark Harmstone <mark.harmstone@gmail.com> On Behalf Of Mark Harmstone
Sent: Friday, September 11, 2020 7:50 PM
> Subject: Re: [PATCH v5 02/10] fs/ntfs3: Add initialization of super block
> 
> Also, I think it would be a good idea if you *are* going to have static SDs, to
> allocate them dynamically rather than using a blob. The blobs' contents don't
> match your comments,

Hi Mark! Fixed the comment thing. Check out the V6.
V6 also introduces the system.ntfs_security xattr.

> and it'd have been picked up if you'd done that!
> 
> s_dir_security translates to:
> owner S-1-5-32-544 (Administrators)
> group S-1-5-32-544 (Administrators)
> ACE: allow S-1-1-0 (Everyone) with FILE_ALL_ACCESS
> 
> s_file_security translates to:
> owner S-1-1-0 (Everyone)
> group S-1-5-21-1987932187-4155540637-3944497785-513 (your personal SID?!)
> ACE: allow S-1-1-0 (Everyone) with FILE_ALL_ACCESS
> 
> 
> On 11/9/20 3:10 pm, Konstantin Komarov wrote:
> > This adds initialization of super block
> >
[]
> > +}
> > +
> > +MODULE_LICENSE("GPL");
> > +MODULE_DESCRIPTION("ntfs3 filesystem");
> > +MODULE_AUTHOR("Konstantin Komarov");
> > +MODULE_ALIAS_FS("ntfs3");
> > +
> > +module_init(init_ntfs_fs) module_exit(exit_ntfs_fs)
> 

Thanks

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 03/10] fs/ntfs3: Add bitmap
  2020-09-18 16:35       ` Konstantin Komarov
@ 2020-09-18 16:54         ` Matthew Wilcox
  2020-09-25 16:04           ` Konstantin Komarov
  0 siblings, 1 reply; 23+ messages in thread
From: Matthew Wilcox @ 2020-09-18 16:54 UTC (permalink / raw)
  To: Konstantin Komarov
  Cc: Joe Perches, linux-fsdevel, viro, linux-kernel, pali, dsterba,
	aaptel, rdunlap, mark, nborisov

On Fri, Sep 18, 2020 at 04:35:11PM +0000, Konstantin Komarov wrote:
> > That was only just renamed.  More concerningly, the documentation is
> > quite unambiguous:
> > 
> >  * This function is for filesystems to call when they want to start
> >  * readahead beyond a file's stated i_size.  This is almost certainly
> >  * not the function you want to call.  Use page_cache_async_readahead()
> >  * or page_cache_sync_readahead() instead.
> 
> Hi Matthew! it's not so clear for us by several reasons (please correct
> if this is wrong):
> page_cache_sync_readahead() seems applicable as a replacement, but
> it doesn't seem to be reasonable as readahead in this case gives perf
> improvement because of it's async nature. The 'async' function is incompatible
> replacement based on the arguments list.

I think the naming has confused you (so I need to clarify the docs).
The sync function is to be called when you need the page which is being
read, and you might want to take the opportunity to read more pages.
The async version is to be called when the page you need is in cache,
but you've noticed that you're getting towards the end of the readahead
window.  Neither version waits for I/O to complete; you have to wait for
the page to become unlocked and then you can check PageUptodate.

Looking at what you're doing, you don't have a file_ra_state because
you're just trying to readahead fs metadata, right?  I think you want
to call force_page_cache_readahead(mapping, NULL, start, nr_pages);
The prototype for it is in mm/internal.h, but I think moving it to
include/linux/pagemap.h is justifiable.
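
For the metadata case that might end up looking roughly like this (untested
sketch, function name made up, so double-check against your tree):

	static struct page *ntfs_get_meta_page(struct address_space *mapping,
					       pgoff_t index,
					       unsigned long nr_pages)
	{
		/* start readahead of fs metadata; returns without waiting for I/O */
		force_page_cache_readahead(mapping, NULL, index, nr_pages);

		/* read_mapping_page() waits for the unlock and checks uptodate */
		return read_mapping_page(mapping, index, NULL);
	}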

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc
  2020-09-11 14:10 ` [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc Konstantin Komarov
@ 2020-09-21 13:26   ` Pali Rohár
  2020-09-25 16:30     ` Konstantin Komarov
  0 siblings, 1 reply; 23+ messages in thread
From: Pali Rohár @ 2020-09-21 13:26 UTC (permalink / raw)
  To: Konstantin Komarov
  Cc: linux-fsdevel, viro, linux-kernel, dsterba, aaptel, willy,
	rdunlap, joe, mark, nborisov

On Friday 11 September 2020 17:10:16 Konstantin Komarov wrote:
> +Mount Options
> +=============
> +
> +The list below describes mount options supported by NTFS3 driver in addition to
> +generic ones.
> +
> +===============================================================================
> +
> +nls=name		This option informs the driver how to interpret path
> +			strings and translate them to Unicode and back. If
> +			this option is not set, the default codepage will be
> +			used (CONFIG_NLS_DEFAULT).
> +			Examples:
> +				'nls=utf8'
> +
> +nls_alt=name		This option extends "nls". It will be used to translate
> +			path string to Unicode if primary nls failed.
> +			Examples:
> +				'nls_alt=cp1251'

Hello! I'm looking at other filesystem drivers, and no other driver with
Unicode semantics (vfat, udf, isofs) has anything like an nls_alt option.

So do we really need it? And if yes, it should be added to all other
Unicode filesystem drivers for consistency.

But I'm very sceptical whether such a thing is really needed. The nls=
option just says how to convert Unicode code points for userspace. This
option is passed by userspace (when mounting the disk), so userspace
already knows what it wanted, and it should really use the encoding for
filenames (e.g. utf8 or cp1251) that it already told the kernel about.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH v5 03/10] fs/ntfs3: Add bitmap
  2020-09-18 16:54         ` Matthew Wilcox
@ 2020-09-25 16:04           ` Konstantin Komarov
  0 siblings, 0 replies; 23+ messages in thread
From: Konstantin Komarov @ 2020-09-25 16:04 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Joe Perches, linux-fsdevel, viro, linux-kernel, pali, dsterba,
	aaptel, rdunlap, mark, nborisov

From: Matthew Wilcox <willy@infradead.org>
Sent: Friday, September 18, 2020 7:55 PM
> Subject: Re: [PATCH v5 03/10] fs/ntfs3: Add bitmap
> 
> On Fri, Sep 18, 2020 at 04:35:11PM +0000, Konstantin Komarov wrote:
> > > That was only just renamed.  More concerningly, the documentation is
> > > quite unambiguous:
> > >
> > >  * This function is for filesystems to call when they want to start
> > >  * readahead beyond a file's stated i_size.  This is almost certainly
> > >  * not the function you want to call.  Use page_cache_async_readahead()
> > >  * or page_cache_sync_readahead() instead.
> >
> > Hi Matthew! it's not so clear for us by several reasons (please correct
> > if this is wrong):
> > page_cache_sync_readahead() seems applicable as a replacement, but
> > it doesn't seem to be reasonable as readahead in this case gives perf
> > improvement because of it's async nature. The 'async' function is incompatible
> > replacement based on the arguments list.
> 
> I think the naming has confused you (so I need to clarify the docs).
> The sync function is to be called when you need the page which is being
> read, and you might want to take the opportunity to read more pages.
> The async version is to be called when the page you need is in cache,
> but you've noticed that you're getting towards the end of the readahead
> window.  Neither version waits for I/O to complete; you have to wait for
> the page to become unlocked and then you can check PageUptodate.
> 
> Looking at what you're doing, you don't have a file_ra_state because
> you're just trying to readahead fs metadata, right?

Hi Matthew! Yes, correct.

> I think you want
> to call force_page_cache_readahead(mapping, NULL, start, nr_pages);
> The prototype for it is in mm/internal.h, but I think moving it to
> include/linux/pagemap.h is justifiable.

Seems like this. We decided to temporarily remove the readahead usage in V7,
since it's not clear which of the public options fits "our" case. Is there
anything we can assist with regarding the force_page_cache_readahead() move
to include/linux/pagemap.h?

Thanks.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc
  2020-09-21 13:26   ` Pali Rohár
@ 2020-09-25 16:30     ` Konstantin Komarov
  2020-09-26  8:23       ` Pali Rohár
  0 siblings, 1 reply; 23+ messages in thread
From: Konstantin Komarov @ 2020-09-25 16:30 UTC (permalink / raw)
  To: Pali Rohár
  Cc: linux-fsdevel, viro, linux-kernel, dsterba, aaptel, willy,
	rdunlap, joe, mark, nborisov

From: Pali Rohár <pali@kernel.org>
Sent: Monday, September 21, 2020 4:27 PM
> To: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
> Cc: linux-fsdevel@vger.kernel.org; viro@zeniv.linux.org.uk; linux-kernel@vger.kernel.org; dsterba@suse.cz; aaptel@suse.com;
> willy@infradead.org; rdunlap@infradead.org; joe@perches.com; mark@harmstone.com; nborisov@suse.com
> Subject: Re: [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc
> 
> On Friday 11 September 2020 17:10:16 Konstantin Komarov wrote:
> > +Mount Options
> > +=============
> > +
> > +The list below describes mount options supported by NTFS3 driver in addition to
> > +generic ones.
> > +
> > +===============================================================================
> > +
> > +nls=name		This option informs the driver how to interpret path
> > +			strings and translate them to Unicode and back. If
> > +			this option is not set, the default codepage will be
> > +			used (CONFIG_NLS_DEFAULT).
> > +			Examples:
> > +				'nls=utf8'
> > +
> > +nls_alt=name		This option extends "nls". It will be used to translate
> > +			path string to Unicode if primary nls failed.
> > +			Examples:
> > +				'nls_alt=cp1251'
> 
> Hello! I'm looking at other filesystem drivers and no other with UNICODE
> semantic (vfat, udf, isofs) has something like nls_alt option.
> 
> So do we really need it? And if yes, it should be added to all other
> UNICODE filesystem drivers for consistency.
> 
> But I'm very sceptical if such thing is really needed. nls= option just
> said how to convert UNICODE code points for userpace. This option is
> passed by userspace (when mounting disk), so userspace already know what
> it wanted. And it should really use this encoding for filenames (e.g.
> utf8 or cp1251) which already told to kernel.

Hi Pali! Thanks for the feedback. We do not consider the nls_alt option a
must-have, but it is a nice quality-of-life mount option which may help some
dual-booters/Windows users avoid tricky failures with files that originated on
non-English Windows systems. One case where it may be useful is zipping files
with non-English names (e.g. Polish etc.) under Windows and then unzipping the
archive under Linux. In this case unzip will likely fail on those files, as the
archive stores filenames not in UTF-8. Windows has that "Language for
non-Unicode programs" setting, which controls the encoding used for the
described (and similar) cases.
Overall, it's a rather niche mount option, but we suppose it's legitimate for
Windows-originated filesystems. What do you think, Pali?

Best regards!

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc
  2020-09-25 16:30     ` Konstantin Komarov
@ 2020-09-26  8:23       ` Pali Rohár
  2020-10-09 15:31         ` Konstantin Komarov
  0 siblings, 1 reply; 23+ messages in thread
From: Pali Rohár @ 2020-09-26  8:23 UTC (permalink / raw)
  To: Konstantin Komarov
  Cc: linux-fsdevel, viro, linux-kernel, dsterba, aaptel, willy,
	rdunlap, joe, mark, nborisov

On Friday 25 September 2020 16:30:19 Konstantin Komarov wrote:
> From: Pali Rohár <pali@kernel.org>
> Sent: Monday, September 21, 2020 4:27 PM
> > To: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
> > Cc: linux-fsdevel@vger.kernel.org; viro@zeniv.linux.org.uk; linux-kernel@vger.kernel.org; dsterba@suse.cz; aaptel@suse.com;
> > willy@infradead.org; rdunlap@infradead.org; joe@perches.com; mark@harmstone.com; nborisov@suse.com
> > Subject: Re: [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc
> > 
> > On Friday 11 September 2020 17:10:16 Konstantin Komarov wrote:
> > > +Mount Options
> > > +=============
> > > +
> > > +The list below describes mount options supported by NTFS3 driver in addition to
> > > +generic ones.
> > > +
> > > +===============================================================================
> > > +
> > > +nls=name		This option informs the driver how to interpret path
> > > +			strings and translate them to Unicode and back. If
> > > +			this option is not set, the default codepage will be
> > > +			used (CONFIG_NLS_DEFAULT).
> > > +			Examples:
> > > +				'nls=utf8'
> > > +
> > > +nls_alt=name		This option extends "nls". It will be used to translate
> > > +			path string to Unicode if primary nls failed.
> > > +			Examples:
> > > +				'nls_alt=cp1251'
> > 
> > Hello! I'm looking at other filesystem drivers and no other with UNICODE
> > semantic (vfat, udf, isofs) has something like nls_alt option.
> > 
> > So do we really need it? And if yes, it should be added to all other
> > UNICODE filesystem drivers for consistency.
> > 
> > But I'm very sceptical if such thing is really needed. nls= option just
> > said how to convert UNICODE code points for userpace. This option is
> > passed by userspace (when mounting disk), so userspace already know what
> > it wanted. And it should really use this encoding for filenames (e.g.
> > utf8 or cp1251) which already told to kernel.
> 
> Hi Pali! Thanks for the feedback. We do not consider the nls_alt option as the must have
> one. But it is very nice "QOL-type" mount option, which may help some amount of
> dual-booters/Windows users to avoid tricky fails with files originated on non-English
> Windows systems. One of the cases where this one may be useful is the case of zipping
> files with non-English names (e.g. Polish etc) under Windows and then unzipping the archive
> under Linux. In this case unzip will likely to fail on those files, as archive stores filenames not
> in utf.

Hello!

Thank you for providing an example. Now I can imagine the problem that
this option is trying to "work around".

Personally, I think this is an issue in the program that is unzipping the
content of the archive. If the filenames in the archive are stored in a
different encoding, then the user needs to tell the program which encoding
that is. Otherwise it will be broken.

Also, this approach with nls=utf-8 and nls_alt=cp1251 is broken. I can
give you a string encoded in cp1251 which is also a valid UTF-8
sequence.

For example: the byte sequence "d0 93".

In cp1251 it is Р“, but it is also the valid UTF-8 sequence for Г (CYRILLIC
CAPITAL LETTER GHE).

Because cp1251 is only set as nls_alt, you would get the UTF-8 interpretation
here, and only for invalid UTF-8 sequences would you get cp1251.
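
To make the ambiguity concrete, a small self-contained C demo (illustration
only; the cp1251 mapping below is hard-coded just for these two bytes and is
not taken from any driver):

#include <stdio.h>

/* Tiny excerpt of the cp1251 table: 0xd0 is U+0420 and 0x93 is U+201C. */
static unsigned int cp1251_to_unicode(unsigned char c)
{
	switch (c) {
	case 0xd0: return 0x0420;	/* CYRILLIC CAPITAL LETTER ER */
	case 0x93: return 0x201c;	/* LEFT DOUBLE QUOTATION MARK */
	default:   return c;		/* not relevant for this demo */
	}
}

int main(void)
{
	const unsigned char s[] = { 0xd0, 0x93 };

	/* As UTF-8: 0xd0 starts a two-byte sequence and 0x93 is a valid
	 * continuation byte, so the pair decodes to one code point. */
	if ((s[0] & 0xe0) == 0xc0 && (s[1] & 0xc0) == 0x80) {
		unsigned int cp = ((s[0] & 0x1fu) << 6) | (s[1] & 0x3fu);
		printf("valid UTF-8 -> U+%04X\n", cp);	/* U+0413, Г */
	}

	/* As cp1251: the same two bytes are two separate characters. */
	printf("as cp1251   -> U+%04X U+%04X\n",
	       cp1251_to_unicode(s[0]), cp1251_to_unicode(s[1]));
	return 0;
}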

To me it looks like you are trying to implement a heuristic-based workaround
in the kernel for userspace applications that handle encodings incorrectly.
And because all CP???? encodings are defined over the full 8-bit space while
valid UTF-8 sequences are only a subset of it, such a heuristic can never
work correctly.

Also, I do not think that the kernel is the correct place to work around
userspace applications that handle encodings incorrectly.

> Windows have that "Language for non-Unicode programs" setting, which controls the
> encoding used for the described (and similar) cases.

This Windows setting is something different. It is a system-wide option
that affects the -A WINAPI functions and defines one fixed 8-bit encoding
(the ACP) to be used when converting UTF-16 strings (wchar_t*) into
8-bit (char*) strings in the ACP encoding.

It is somewhat similar to the Unix CODESET set via LC_CTYPE in the locale,
but it is not the same.
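
For reference, a minimal sketch of that -A/ACP conversion (Windows-only code
using the documented WideCharToMultiByte() call; the filename is just an
example):

#include <windows.h>
#include <stdio.h>

int main(void)
{
	/* A UTF-16 name as NTFS stores it, here Cyrillic Г plus ".txt". */
	const wchar_t *name = L"\u0413.txt";
	char acp_name[64];

	/* -A WINAPI entry points effectively do this internally: UTF-16 ->
	 * the system "ANSI" codepage (ACP), which is what the "Language for
	 * non-Unicode programs" setting selects. */
	int n = WideCharToMultiByte(CP_ACP, 0, name, -1,
				    acp_name, sizeof(acp_name), NULL, NULL);
	if (n > 0)
		printf("ACP bytes: %s\n", acp_name);
	return 0;
}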

> Overall, it's kinda niche mount option, but we suppose it's legit for Windows-originated filesystems.
> What do you think on this, Pali?

I think this applies not only to Windows-oriented filesystems, but to all
filesystems that store filenames in UNICODE (as opposed to a plain sequence
of bytes).

E.g. ext4 now has an extension for storing (and validating) filenames
in UNICODE (on disk it is UTF-8).

The same holds for the BeOS FS or the UDF fs (on DVD/BD-R). In most cases these
filesystems are mounted with nls=utf-8 to interpret UNICODE as utf-8.

And none of these filesystems has such an nls_alt option because, as I showed
above, it cannot work reliably.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc
  2020-09-26  8:23       ` Pali Rohár
@ 2020-10-09 15:31         ` Konstantin Komarov
  2020-10-09 15:47           ` Pali Rohár
  0 siblings, 1 reply; 23+ messages in thread
From: Konstantin Komarov @ 2020-10-09 15:31 UTC (permalink / raw)
  To: Pali Rohár
  Cc: linux-fsdevel, viro, linux-kernel, dsterba, aaptel, willy,
	rdunlap, joe, mark, nborisov

From: Pali Rohár <pali@kernel.org>
Sent: Saturday, September 26, 2020 11:23 AM
> To: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
> Cc: linux-fsdevel@vger.kernel.org; viro@zeniv.linux.org.uk; linux-kernel@vger.kernel.org; dsterba@suse.cz; aaptel@suse.com;
> willy@infradead.org; rdunlap@infradead.org; joe@perches.com; mark@harmstone.com; nborisov@suse.com
> Subject: Re: [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc
> 
> On Friday 25 September 2020 16:30:19 Konstantin Komarov wrote:
> > From: Pali Rohár <pali@kernel.org>
> > Sent: Monday, September 21, 2020 4:27 PM
> > > To: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
> > > Cc: linux-fsdevel@vger.kernel.org; viro@zeniv.linux.org.uk; linux-kernel@vger.kernel.org; dsterba@suse.cz; aaptel@suse.com;
> > > willy@infradead.org; rdunlap@infradead.org; joe@perches.com; mark@harmstone.com; nborisov@suse.com
> > > Subject: Re: [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc
> > >
> > > On Friday 11 September 2020 17:10:16 Konstantin Komarov wrote:
> > > > +Mount Options
> > > > +=============
> > > > +
> > > > +The list below describes mount options supported by NTFS3 driver in addition to
> > > > +generic ones.
> > > > +
> > > > +===============================================================================
> > > > +
> > > > +nls=name		This option informs the driver how to interpret path
> > > > +			strings and translate them to Unicode and back. If
> > > > +			this option is not set, the default codepage will be
> > > > +			used (CONFIG_NLS_DEFAULT).
> > > > +			Examples:
> > > > +				'nls=utf8'
> > > > +
> > > > +nls_alt=name		This option extends "nls". It will be used to translate
> > > > +			path string to Unicode if primary nls failed.
> > > > +			Examples:
> > > > +				'nls_alt=cp1251'
> > >
> > > Hello! I'm looking at other filesystem drivers and no other with UNICODE
> > > semantic (vfat, udf, isofs) has something like nls_alt option.
> > >
> > > So do we really need it? And if yes, it should be added to all other
> > > UNICODE filesystem drivers for consistency.
> > >
> > > But I'm very sceptical if such thing is really needed. nls= option just
> > > said how to convert UNICODE code points for userpace. This option is
> > > passed by userspace (when mounting disk), so userspace already know what
> > > it wanted. And it should really use this encoding for filenames (e.g.
> > > utf8 or cp1251) which already told to kernel.
> >
> > Hi Pali! Thanks for the feedback. We do not consider the nls_alt option as the must have
> > one. But it is very nice "QOL-type" mount option, which may help some amount of
> > dual-booters/Windows users to avoid tricky fails with files originated on non-English
> > Windows systems. One of the cases where this one may be useful is the case of zipping
> > files with non-English names (e.g. Polish etc) under Windows and then unzipping the archive
> > under Linux. In this case unzip will likely to fail on those files, as archive stores filenames not
> > in utf.
> 
> Hello!
> 
> Thank you for providing example. Now I can imagine the problem which
> this option is trying to "workaround".
> 
> Personally, I think that this is the issue of the program which is
> unzipping content of the archive. If files are in archive are stored in
> different encoding, then user needs to provide information in which it
> is stored. Otherwise it would be broken.
> 

Hi Pali! Partially it is an issue in the program. But such an issue may affect
a lot of programs, especially given that this case is somewhat niche for Linux,
because it originates on Windows. We can assume that few programs try, or will
try, to support this case. The mount option, on the other hand, provides this
ability without relying on the application itself.

> Also this your approach with nls=utf-8 and nls_alt=cp1251 is broken. I
> can provide you string encoded in cp1251 which is also valid UTF-8
> sequence.
> 
> For example: sequence of bytes "d0 93".
> 
> In cp1251 it is Р“, but also it is valid UTF-8 sequence for Г (CYRILLIC
> CAPITAL LETTER GHE).
> 
> Because cp1251 is set as nls_alt, you would get UTF-8 interpretation.
> And for all other invalid UTF-8 sequences you would get cp1251.
> 
> For me it looks like you are trying to implement workaround based on
> some heuristic in kernel for userspace application which handles
> encoding incorrectly. And because all CP???? encodings are defined at
> full 8bit space and UTF-8 is subset of 8bit space, it would never work
> correctly.
> 

In this case, the whole string will be treated as a UTF-8 string, regardless of
the value of the "nls_alt" option. The option's logic is applied only to strings
that contain an "invalid" UTF-8 character, and it works not symbol-by-symbol
but on the whole string. So if a string does not contain invalid UTF-8, "nls_alt"
is not applied, even if set. And if the string does contain invalid UTF-8 AND
"nls_alt" is set, then the fallback is applied, on the assumption that the user
who set "nls_alt" had exactly this situation in mind.
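
As a rough, self-contained sketch of that whole-string decision (illustrative
userspace C only, with a simplified UTF-8 validator; all names here are
hypothetical and not the ntfs3 code):

#include <stdbool.h>
#include <stdio.h>

/* Simplified validity check: lead/continuation byte structure only,
 * no overlong or surrogate checks -- enough for this illustration. */
static bool utf8_valid(const unsigned char *s, size_t len)
{
	size_t i = 0;

	while (i < len) {
		size_t extra;

		if (s[i] < 0x80)
			extra = 0;
		else if ((s[i] & 0xe0) == 0xc0)
			extra = 1;
		else if ((s[i] & 0xf0) == 0xe0)
			extra = 2;
		else if ((s[i] & 0xf8) == 0xf0)
			extra = 3;
		else
			return false;

		if (extra && i + extra >= len)
			return false;
		for (size_t k = 1; k <= extra; k++)
			if ((s[i + k] & 0xc0) != 0x80)
				return false;
		i += extra + 1;
	}
	return true;
}

/* The fallback table is consulted only when the complete name fails
 * UTF-8 validation -- never character by character. */
static const char *pick_table(const unsigned char *name, size_t len,
			      bool have_nls_alt)
{
	if (utf8_valid(name, len))
		return "nls (utf8)";
	return have_nls_alt ? "nls_alt" : "error: invalid name";
}

int main(void)
{
	const unsigned char a[] = { 0xd0, 0x93 };	/* valid UTF-8 */
	const unsigned char b[] = { 0xcf, 0xf0, 0xe8 };	/* cp1251 "При", invalid UTF-8 */

	printf("%s\n", pick_table(a, sizeof(a), true));	/* nls (utf8) */
	printf("%s\n", pick_table(b, sizeof(b), true));	/* nls_alt */
	return 0;
}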

When it comes to valid UTF-8 sequences that are actually meant to represent another
encoding, there is an ambiguity that cannot be resolved either with or without
the "nls_alt". But at least the cases where the sequence is invalid UTF-8 yet valid
e.g. CP-1251 can be handled with the "nls_alt", and cannot be handled without it.

It does not cover all the cases, but it covers at least those that otherwise make
it impossible to operate on the file (e.g. an error during unzip). Also, it is only
set explicitly by the user.

We'll follow the community's opinion in our implementation; we just want to make sure
the solution is understood correctly. Regarding "nls_alt", it does not seem harmful
in any way, but it may help some users overcome interoperability issues.

> Also I do not think that kernel is correct place for workarounding
> userspace applications which handles encoding incorrectly.
> 
> > Windows have that "Language for non-Unicode programs" setting, which controls the
> > encoding used for the described (and similar) cases.
> 
> This windows setting is something different. It is system wide option
> which affects -A WINAPI functions and defines one fixed 8bit encoding
> (ACP) which should be used for converting UTF-16 strings (wchar_t*) into
> 8bit (char*) ACP encoding.
> 
> It is something similar to Unix CODESET set in LC_CTYPE from locale. But
> not the same.
> 
> > Overall, it's kinda niche mount option, but we suppose it's legit for Windows-originated filesystems.
> > What do you think on this, Pali?
> 
> I think this is not only for Windows-orientated FS, but rather for all
> filesystems which store filenames in UNICODE (as opposite of sequence of
> bytes).
> 
> E.g. ext4 now has extension for storing (and validating) that filenames
> are also in UNICODE (on disk it is in UTF-8).
> 
> Same for Beos FS or UDF fs (on DVD/BD-R). In most cases these fs are
> mounted with nls=utf-8 to interpret UNICODE as utf-8.
> 
> And none of these fs have such nls_alt option as I show above, it cannot
> work reliable.

Thanks.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc
  2020-10-09 15:31         ` Konstantin Komarov
@ 2020-10-09 15:47           ` Pali Rohár
  0 siblings, 0 replies; 23+ messages in thread
From: Pali Rohár @ 2020-10-09 15:47 UTC (permalink / raw)
  To: Konstantin Komarov
  Cc: linux-fsdevel, viro, linux-kernel, dsterba, aaptel, willy,
	rdunlap, joe, mark, nborisov

On Friday 09 October 2020 15:31:10 Konstantin Komarov wrote:
> From: Pali Rohár <pali@kernel.org>
> Sent: Saturday, September 26, 2020 11:23 AM
> > To: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
> > Cc: linux-fsdevel@vger.kernel.org; viro@zeniv.linux.org.uk; linux-kernel@vger.kernel.org; dsterba@suse.cz; aaptel@suse.com;
> > willy@infradead.org; rdunlap@infradead.org; joe@perches.com; mark@harmstone.com; nborisov@suse.com
> > Subject: Re: [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc
> > 
> > On Friday 25 September 2020 16:30:19 Konstantin Komarov wrote:
> > > From: Pali Rohár <pali@kernel.org>
> > > Sent: Monday, September 21, 2020 4:27 PM
> > > > To: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
> > > > Cc: linux-fsdevel@vger.kernel.org; viro@zeniv.linux.org.uk; linux-kernel@vger.kernel.org; dsterba@suse.cz; aaptel@suse.com;
> > > > willy@infradead.org; rdunlap@infradead.org; joe@perches.com; mark@harmstone.com; nborisov@suse.com
> > > > Subject: Re: [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc
> > > >
> > > > On Friday 11 September 2020 17:10:16 Konstantin Komarov wrote:
> > > > > +Mount Options
> > > > > +=============
> > > > > +
> > > > > +The list below describes mount options supported by NTFS3 driver in addition to
> > > > > +generic ones.
> > > > > +
> > > > > +===============================================================================
> > > > > +
> > > > > +nls=name		This option informs the driver how to interpret path
> > > > > +			strings and translate them to Unicode and back. If
> > > > > +			this option is not set, the default codepage will be
> > > > > +			used (CONFIG_NLS_DEFAULT).
> > > > > +			Examples:
> > > > > +				'nls=utf8'
> > > > > +
> > > > > +nls_alt=name		This option extends "nls". It will be used to translate
> > > > > +			path string to Unicode if primary nls failed.
> > > > > +			Examples:
> > > > > +				'nls_alt=cp1251'
> > > >
> > > > Hello! I'm looking at other filesystem drivers and no other with UNICODE
> > > > semantic (vfat, udf, isofs) has something like nls_alt option.
> > > >
> > > > So do we really need it? And if yes, it should be added to all other
> > > > UNICODE filesystem drivers for consistency.
> > > >
> > > > But I'm very sceptical if such thing is really needed. nls= option just
> > > > said how to convert UNICODE code points for userpace. This option is
> > > > passed by userspace (when mounting disk), so userspace already know what
> > > > it wanted. And it should really use this encoding for filenames (e.g.
> > > > utf8 or cp1251) which already told to kernel.
> > >
> > > Hi Pali! Thanks for the feedback. We do not consider the nls_alt option as the must have
> > > one. But it is very nice "QOL-type" mount option, which may help some amount of
> > > dual-booters/Windows users to avoid tricky fails with files originated on non-English
> > > Windows systems. One of the cases where this one may be useful is the case of zipping
> > > files with non-English names (e.g. Polish etc) under Windows and then unzipping the archive
> > > under Linux. In this case unzip will likely to fail on those files, as archive stores filenames not
> > > in utf.
> > 
> > Hello!
> > 
> > Thank you for providing example. Now I can imagine the problem which
> > this option is trying to "workaround".
> > 
> > Personally, I think that this is the issue of the program which is
> > unzipping content of the archive. If files are in archive are stored in
> > different encoding, then user needs to provide information in which it
> > is stored. Otherwise it would be broken.
> > 
> 
> Hi Pali! Partially it is the issue of the program. But such issue may affect
> a lot of programs, esp. given that this case is somehwat niche for the Linux,
> because it origins in Windows. We may assume it is unlikely a lot of programs
> will try/are trying to support this case. The mount option, on the other hand,
> gives this ability without relying on the application itself.

Hello! I understand this issue. But what you have described is basically
filesystem independent; it may also happen on ext4 with UNICODE support
and on fat32 with VFAT support.

Therefore I do not think it is a good idea to have e.g. an nls_alt=cp1251
option in just one filesystem driver.

This would just make the ntfs driver inconsistent with the other Linux UNICODE
filesystem drivers, and it would cause more problems: the unzip application
would work with the ntfs driver but not with the vfat or ext4 driver.

I would really suggest not including this nls_alt option in a single
filesystem driver, and instead coming up with some universal solution for
all UNICODE filesystem drivers. There are multiple possibilities, e.g.
implement the option in the VFS layer and propagate it to UNICODE fs drivers, OR
implement an NLS codepage which would handle it...

Inconsistency of one FS driver with all the others would really cause
problems, if not now then in the future.

The issue you described already exists, even without the ntfs
driver. So adding something only to the ntfs driver looks like a
workaround or hack for that issue, not a solution.

> > Also this your approach with nls=utf-8 and nls_alt=cp1251 is broken. I
> > can provide you string encoded in cp1251 which is also valid UTF-8
> > sequence.
> > 
> > For example: sequence of bytes "d0 93".
> > 
> > In cp1251 it is Р“, but also it is valid UTF-8 sequence for Г (CYRILLIC
> > CAPITAL LETTER GHE).
> > 
> > Because cp1251 is set as nls_alt, you would get UTF-8 interpretation.
> > And for all other invalid UTF-8 sequences you would get cp1251.
> > 
> > For me it looks like you are trying to implement workaround based on
> > some heuristic in kernel for userspace application which handles
> > encoding incorrectly. And because all CP???? encodings are defined at
> > full 8bit space and UTF-8 is subset of 8bit space, it would never work
> > correctly.
> > 
> 
> In this case, the whole string will be treated as UTF-8 string, no matter of
> what's the value of "nls_alt" option. The option's related logic starts to be applied
> only for those strings which contain "invalid" UTF-8 character. And it works not in
> symbol-by-symbol way, but applies to the whole string. So if a string does not
> contain invalid UTF-8, the "nls_alt" won't be applied, even if set. And if the string
> contains invalid UTF-8 AND "nls_alt" is set, then the logic will be applied with assumption
> that user, when set the "nls_alt" had in mind that he may be in such situation.
> 
> When it is coming to valid UTF-8 sequences, which are actually meant to represent another
> encoding, there is ambiguity, which seems unable to be resolved both with and without
> the "nls_alt". But at least those cases, when the sequence is incorrect UTF-8, but correct
> e.g. CP-1251, may be solved with the "nls_alt", but not without it. 
> 
> It is not covering all the cases, but covers at least those, which otherwise lead to inability
> operating with the file (e.g. error during unzip). Also it is set only explicitly by the user.
> 
> We'll follow the community opinion in our implementation, just want to make sure the
> solution is understood correctly. Regarding the "nls_alt", it doesn't seem
> to be harmful in any way, but may help some amount of users to overcome interoperability
> issues.
> 
> > Also I do not think that kernel is correct place for workarounding
> > userspace applications which handles encoding incorrectly.
> > 
> > > Windows have that "Language for non-Unicode programs" setting, which controls the
> > > encoding used for the described (and similar) cases.
> > 
> > This windows setting is something different. It is system wide option
> > which affects -A WINAPI functions and defines one fixed 8bit encoding
> > (ACP) which should be used for converting UTF-16 strings (wchar_t*) into
> > 8bit (char*) ACP encoding.
> > 
> > It is something similar to Unix CODESET set in LC_CTYPE from locale. But
> > not the same.
> > 
> > > Overall, it's kinda niche mount option, but we suppose it's legit for Windows-originated filesystems.
> > > What do you think on this, Pali?
> > 
> > I think this is not only for Windows-orientated FS, but rather for all
> > filesystems which store filenames in UNICODE (as opposite of sequence of
> > bytes).
> > 
> > E.g. ext4 now has extension for storing (and validating) that filenames
> > are also in UNICODE (on disk it is in UTF-8).
> > 
> > Same for Beos FS or UDF fs (on DVD/BD-R). In most cases these fs are
> > mounted with nls=utf-8 to interpret UNICODE as utf-8.
> > 
> > And none of these fs have such nls_alt option as I show above, it cannot
> > work reliable.
> 
> Thanks.

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2020-10-09 15:47 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-11 14:10 [PATCH v5 00/10] NTFS read-write driver GPL implementation by Paragon Software Konstantin Komarov
2020-09-11 14:10 ` [PATCH v5 01/10] fs/ntfs3: Add headers and misc files Konstantin Komarov
2020-09-12  2:28   ` Joe Perches
2020-09-11 14:10 ` [PATCH v5 03/10] fs/ntfs3: Add bitmap Konstantin Komarov
2020-09-13 18:43   ` Joe Perches
2020-09-14  2:38     ` Matthew Wilcox
2020-09-18 16:35       ` Konstantin Komarov
2020-09-18 16:54         ` Matthew Wilcox
2020-09-25 16:04           ` Konstantin Komarov
2020-09-18 16:29     ` Konstantin Komarov
2020-09-18 16:38       ` Matthew Wilcox
2020-09-11 14:10 ` [PATCH v5 05/10] fs/ntfs3: Add attrib operations Konstantin Komarov
2020-09-11 14:10 ` [PATCH v5 06/10] fs/ntfs3: Add compression Konstantin Komarov
2020-09-11 14:10 ` [PATCH v5 08/10] fs/ntfs3: Add Kconfig, Makefile and doc Konstantin Komarov
2020-09-21 13:26   ` Pali Rohár
2020-09-25 16:30     ` Konstantin Komarov
2020-09-26  8:23       ` Pali Rohár
2020-10-09 15:31         ` Konstantin Komarov
2020-10-09 15:47           ` Pali Rohár
2020-09-11 14:10 ` [PATCH v5 09/10] fs/ntfs3: Add NTFS3 in fs/Kconfig and fs/Makefile Konstantin Komarov
2020-09-11 14:10 ` [PATCH v5 10/10] fs/ntfs3: Add MAINTAINERS Konstantin Komarov
     [not found] ` <20200911141018.2457639-3-almaz.alexandrovich@paragon-software.com>
     [not found]   ` <c819ee72-6bb0-416d-dfc4-0bc2ad6d0ccd@harmstone.com>
2020-09-18 16:39     ` [PATCH v5 02/10] fs/ntfs3: Add initialization of super block Konstantin Komarov
     [not found]   ` <2011cd8c-7dc4-b2e7-114b-d5647336ec92@harmstone.com>
2020-09-18 16:43     ` Konstantin Komarov

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).