* [SPDK] compression status
@ 2019-07-12 20:06 Luse, Paul E
  0 siblings, 0 replies; 6+ messages in thread
From: Luse, Paul E @ 2019-07-12 20:06 UTC (permalink / raw)
  To: spdk


Quick status and a request for any ideas anyone's got moving forward :)

Over the last 3 weeks, I've addressed several issues in reduce/compress as we've started beating on it much harder.  It's a reasonably complex thing, so this is not unexpected of course.  I'm at the point now where I have multiple systems that can run for days w/o segfaults or the other strange errors that plagued us earlier.

The problem is, data integrity is not good.  I can run overnight with bdevperf but only a few minutes with fio.  Bdevperf simply does write/read/compare sequentially and wraps when it gets to the end.  Fio is random in terms of IOs; that's the main difference I see in the workloads anyway.

I'm working with a very small logical vol to make debug easier and have added an in-memory CRC array, one entry for every logical map entry.  I write the CRC of the uncompressed data into the right slot and, on a subsequent read, calculate a CRC from the uncompressed data and compare.  They all match with bdevperf; I just now hit my assert with fio telling me the read doesn't match the expected data, so I do have something to go on before fio sees a failure.  This is what I'll be plugging away on.
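
For reference, here is a minimal sketch of the kind of per-entry CRC check described above (the array sizing, the names, and the use of zlib's crc32() are my own placeholders, not the actual reduce.c debug code):

/* Debug-only CRC tracking, one slot per logical map entry. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <zlib.h>                       /* crc32() */

#define NUM_LOGICAL_MAP_ENTRIES 256     /* keep the lvol tiny so this stays small */

static uint32_t g_debug_crc[NUM_LOGICAL_MAP_ENTRIES];

/* On write: remember the CRC of the uncompressed data for this entry. */
static void debug_record_crc(uint64_t entry, const void *buf, size_t len)
{
        g_debug_crc[entry] = (uint32_t)crc32(0, (const unsigned char *)buf, (unsigned int)len);
}

/* On read: recompute from the decompressed data and compare. */
static void debug_check_crc(uint64_t entry, const void *buf, size_t len)
{
        assert(g_debug_crc[entry] == (uint32_t)crc32(0, (const unsigned char *)buf, (unsigned int)len));
}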

Any ideas or insights into bdevperf vs fio would be appreciated.  I know we had that one fio setting that caused some sort of data integrity issue when combined with another setting; I doubt that's what I'm hitting, but you never know - anyone remember exactly what that was?

Thanks!
Paul


* Re: [SPDK] compression status
@ 2019-07-20  1:41 Luse, Paul E
  0 siblings, 0 replies; 6+ messages in thread
From: Luse, Paul E @ 2019-07-20  1:41 UTC (permalink / raw)
  To: spdk


FYI, I've made some progress - Ziye or anyone, let me know if you have anything to share.  I can make it fail with bdevperf now too, which is good; it just needs a higher Q depth.  In my test code the only optimization I have is to use iovecs on decompress to send the decompressed data straight to the host buffer instead of copying it from our scratch buffer.  Without that one optimization it works fine.  I looked closely at the mbuf chaining implementation and even had Fiona look at it; without the optimization there's never more than 1 mbuf associated with a decompress, with it there can be up to 3.

Anyway, here's the part that I can't get my head around.  If I keep the iovec optimization in but point the iovec that was aimed at the host buffer at the scratch buffer instead, everything works.  All of the iovec construction is still in there and chained mbufs are still being used.  Literally changing the iovecs to point to the scratch buffer instead of the host buffer (and copying to the req buffer after decompress) is all I have to do to make it work.  That's enough for me to sleep on!!
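
To make the two paths concrete, here is a rough sketch of the difference (the function names are placeholders, not the real bdev_compress/reduce code; the only thing that changes between the two variants is which buffer the iovec points at):

#include <string.h>
#include <sys/uio.h>

/* Variant A (fails): the decompress engine writes straight into the host read buffer. */
static void setup_iov_direct(struct iovec *iov, void *host_buf, size_t len)
{
        iov[0].iov_base = host_buf;     /* decompressed data lands here, no copy */
        iov[0].iov_len = len;
}

/* Variant B (works): decompress into the scratch buffer, then copy to the host buffer. */
static void setup_iov_via_scratch(struct iovec *iov, void *scratch_buf, size_t len)
{
        iov[0].iov_base = scratch_buf;
        iov[0].iov_len = len;
}

static void finish_read_via_scratch(void *host_buf, const void *scratch_buf, size_t len)
{
        memcpy(host_buf, scratch_buf, len);     /* extra copy, but the data is correct */
}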

Have a great weekend everyone :)
Paul

PS: note that fio reuses the same buffer for all reads and writes, at least at Q depth 1.  I've confirmed that the buffer I use in the iovec is the same one I copy into when the iovec is pointing to the scratch buffer.

-----Original Message-----
From: Luse, Paul E 
Sent: Saturday, July 13, 2019 1:45 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: RE: compression status

If you want to mess with it, including the in-mem CRC I added for debug, use this: git fetch "https://peluse(a)review.gerrithub.io/a/spdk/spdk" refs/changes/13/455113/60 && git checkout FETCH_HEAD, plus this hacked-up reduce.c: https://gist.github.com/peluse/1f545204db3c1699218239a76846a54d (make sure your LV is size 1 or the CRC array will be too small).

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Saturday, July 13, 2019 1:36 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] compression status

Thanks guys!  I've tried a bunch of fio configs and they all fail.  My CRC check is catching the error just before fio does, so it's likely our issue.  Bdevperf doesn't see it and runs for hours (fio fails within a minute), but one other big difference I see is again in the IO pattern: reduceLib will queue any request that comes in if someone else is already working on the same vol offset.  When I run fio there are never any queued IOs; there are tons with bdevperf.  I'm on vacation through Wed next week - if you have any ideas please feel free to share, though I likely won't respond as I'll be out of town.
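
Roughly what I mean by the queuing, as a simplified sketch (the names here are made up; the real logic lives in lib/reduce):

#include <stdbool.h>
#include <stdint.h>
#include <sys/queue.h>

struct io_request {
        uint64_t logical_map_index;             /* which vol offset this IO targets */
        TAILQ_ENTRY(io_request) link;
};

TAILQ_HEAD(request_list, io_request);

static struct request_list g_executing = TAILQ_HEAD_INITIALIZER(g_executing);
static struct request_list g_queued = TAILQ_HEAD_INITIALIZER(g_queued);

/* If another request is already working on the same logical map entry,
 * queue this one; otherwise start it now. */
static bool submit_or_queue(struct io_request *req)
{
        struct io_request *other;

        TAILQ_FOREACH(other, &g_executing, link) {
                if (other->logical_map_index == req->logical_map_index) {
                        TAILQ_INSERT_TAIL(&g_queued, req, link);
                        return false;   /* queued; runs when the other IO completes */
                }
        }
        TAILQ_INSERT_TAIL(&g_executing, req, link);
        return true;                    /* started immediately */
}

Here's the fio config I'm using: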

[global]
ioengine=spdk_bdev
#ioengine=libaio
spdk_conf=/home/peluse/fio/b.conf
thread=1
group_reporting=1
direct=1
verify=md5
time_based=1
ramp_time=0
runtime=4000
iodepth=32
rw=write
#rw=randrw
#rwmixread=60
#rwmixwrite=40
bs=16k
#size=1M

[test]
filename=COMP_3414e50f-53c8-4e59-b763-9cab042c4688 #Malloc0 #Nvme0n1
#filename=/dev/nvme0n1 #/dev/nbd0 #Malloc0 #Nvme0n1 #/dev/nvme0n1 #/dev/nbd0 #aio0 #/dev/nvme0n1 #Nvme0n1
numjobs=1

sudo gdb --args ./bdevperf -c ./b.conf -q 1 -o 16384 -w verify -M 50 -t 1200

PS: here's the tiny lvol I'm using:

sudo ~/spdk//scripts/rpc.py construct_lvol_store Nvme0n1 LVS0
sudo ~/spdk//scripts/rpc.py construct_lvol_bdev -t -l LVS0 ldxzl1 1
sudo ~/spdk//scripts/rpc.py construct_compress_bdev -p ~/pm_files -b 249a062a-7d71-4949-aa31-1371eb9cdca6

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Yan, Liang Z
Sent: Friday, July 12, 2019 6:36 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] compression status

Hi Paul,

As Ziye said, we need your fio configuration file.
Fio read verification is a little different: you need to write data sequentially into the disk with one verify_pattern to make sure the whole disk has the expected data, then read the data back with the same verify_pattern.


Thanks.

Liang Yan

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Yang, Ziye
Sent: Saturday, July 13, 2019 9:18 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] compression status

Hi Paul,

Could you paste your fio configuration file?  We need to know how many jobs you used and what your IO pattern is.

Thanks.




Best Regards
Ziye Yang 


-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Saturday, July 13, 2019 4:06 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] compression status

Quick status and a request for any ideas anyone's got moving forward :)

Over the last 3 weeks, I've addressed several issues in reduce/compress as we've started beating on it much harder.  It's a reasonably complex thing, so this is not unexpected of course.  I'm at the point now where I have multiple systems that can run for days w/o segfaults or the other strange errors that plagued us earlier.

The problem is, data integrity is not good.  I can run overnight with bdevperf but only a few minutes with fio.  Bdevperf simply does write/read/compare sequentially and wraps when it gets to the end.  Fio is random in terms of IOs; that's the main difference I see in the workloads anyway.

I'm working with a very small logical vol to make debug easier and have added an in-memory CRC array, one entry for every logical map entry.  I write the CRC of the uncompressed data into the right slot and, on a subsequent read, calculate a CRC from the uncompressed data and compare.  They all match with bdevperf; I just now hit my assert with fio telling me the read doesn't match the expected data, so I do have something to go on before fio sees a failure.  This is what I'll be plugging away on.

Any ideas or insights into bdevperf vs fio would be appreciated.  I know we had that one fio setting that caused some sort of data integrity issue when combined with another setting; I doubt that's what I'm hitting, but you never know - anyone remember exactly what that was?

Thanks!
Paul
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


* Re: [SPDK] compression status
@ 2019-07-13 20:44 Luse, Paul E
  0 siblings, 0 replies; 6+ messages in thread
From: Luse, Paul E @ 2019-07-13 20:44 UTC (permalink / raw)
  To: spdk


If you want to mess with it, including the in-mem CRC I added for debug, use this: git fetch "https://peluse(a)review.gerrithub.io/a/spdk/spdk" refs/changes/13/455113/60 && git checkout FETCH_HEAD, plus this hacked-up reduce.c: https://gist.github.com/peluse/1f545204db3c1699218239a76846a54d (make sure your LV is size 1 or the CRC array will be too small).

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Saturday, July 13, 2019 1:36 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] compression status

Thanks guys!  I've tried a bunch of fio configs and they all fail.  My CRC check is catching the error just before fio does, so it's likely our issue.  Bdevperf doesn't see it and runs for hours (fio fails within a minute), but one other big difference I see is again in the IO pattern: reduceLib will queue any request that comes in if someone else is already working on the same vol offset.  When I run fio there are never any queued IOs; there are tons with bdevperf.  I'm on vacation through Wed next week - if you have any ideas please feel free to share, though I likely won't respond as I'll be out of town.

[global]
ioengine=spdk_bdev
#ioengine=libaio
spdk_conf=/home/peluse/fio/b.conf
thread=1
group_reporting=1
direct=1
verify=md5
time_based=1
ramp_time=0
runtime=4000
iodepth=32
rw=write
#rw=randrw
#rwmixread=60
#rwmixwrite=40
bs=16k
#size=1M

[test]
filename=COMP_3414e50f-53c8-4e59-b763-9cab042c4688 #Malloc0 #Nvme0n1
#filename=/dev/nvme0n1 #/dev/nbd0 #Malloc0 #Nvme0n1 #/dev/nvme0n1 #/dev/nbd0 #aio0 #/dev/nvme0n1 #Nvme0n1
numjobs=1

sudo gdb --args ./bdevperf -c ./b.conf -q 1 -o 16384 -w verify -M 50 -t 1200

PS: here's the tiny lvol I'm using:

sudo ~/spdk//scripts/rpc.py construct_lvol_store Nvme0n1 LVS0
sudo ~/spdk//scripts/rpc.py construct_lvol_bdev -t -l LVS0 ldxzl1 1
sudo ~/spdk//scripts/rpc.py construct_compress_bdev -p ~/pm_files -b 249a062a-7d71-4949-aa31-1371eb9cdca6

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Yan, Liang Z
Sent: Friday, July 12, 2019 6:36 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] compression status

Hi Paul,

As Ziye said, we need your fio configuration file.
Fio read verification is a little different: you need to write data sequentially into the disk with one verify_pattern to make sure the whole disk has the expected data, then read the data back with the same verify_pattern.


Thanks.

Liang Yan

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Yang, Ziye
Sent: Saturday, July 13, 2019 9:18 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] compression status

Hi Paul,

Could you paste your fio configuration file?  We need to know how many jobs you used and what your IO pattern is.

Thanks.




Best Regards
Ziye Yang 


-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Saturday, July 13, 2019 4:06 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] compression status

Quick status and a request for any ideas anyone's got moving forward :)

Over the last 3 weeks, I've addressed several issues in reduce/compress as we've started beating on it much harder.  It's a reasonably complex thing, so this is not unexpected of course.  I'm at the point now where I have multiple systems that can run for days w/o segfaults or the other strange errors that plagued us earlier.

The problem is, data integrity is not good.  I can run overnight with bdevperf but only a few minutes with fio.  Bdevperf simply does write/read/compare sequentially and wraps when it gets to the end.  Fio is random in terms of IOs; that's the main difference I see in the workloads anyway.

I'm working with a very small logical vol to make debug easier and have added an in-memory CRC array, one entry for every logical map entry.  I write the CRC of the uncompressed data into the right slot and, on a subsequent read, calculate a CRC from the uncompressed data and compare.  They all match with bdevperf; I just now hit my assert with fio telling me the read doesn't match the expected data, so I do have something to go on before fio sees a failure.  This is what I'll be plugging away on.

Any ideas or insights into bdevperf vs fio would be appreciated.  I know we had that one fio setting that caused some sort of data integrity issue when combined with another setting; I doubt that's what I'm hitting, but you never know - anyone remember exactly what that was?

Thanks!
Paul
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


* Re: [SPDK] compression status
@ 2019-07-13 20:36 Luse, Paul E
  0 siblings, 0 replies; 6+ messages in thread
From: Luse, Paul E @ 2019-07-13 20:36 UTC (permalink / raw)
  To: spdk


Thanks guys!  I've tried a bunch of fio configs and they all fail.  My CRC check is catching the error just before fio does, so it's likely our issue.  Bdevperf doesn't see it and runs for hours (fio fails within a minute), but one other big difference I see is again in the IO pattern: reduceLib will queue any request that comes in if someone else is already working on the same vol offset.  When I run fio there are never any queued IOs; there are tons with bdevperf.  I'm on vacation through Wed next week - if you have any ideas please feel free to share, though I likely won't respond as I'll be out of town.

[global]
ioengine=spdk_bdev
#ioengine=libaio
spdk_conf=/home/peluse/fio/b.conf
thread=1
group_reporting=1
direct=1
verify=md5
time_based=1
ramp_time=0
runtime=4000
iodepth=32
rw=write
#rw=randrw
#rwmixread=60
#rwmixwrite=40
bs=16k
#size=1M

[test]
filename=COMP_3414e50f-53c8-4e59-b763-9cab042c4688 #Malloc0 #Nvme0n1
#filename=/dev/nvme0n1 #/dev/nbd0 #Malloc0 #Nvme0n1 #/dev/nvme0n1 #/dev/nbd0 #aio0 #/dev/nvme0n1 #Nvme0n1
numjobs=1

sudo gdb --args ./bdevperf -c ./b.conf -q 1 -o 16384 -w verify -M 50 -t 1200

PS: here's the tiny lvol I'm using:

sudo ~/spdk//scripts/rpc.py construct_lvol_store Nvme0n1 LVS0
sudo ~/spdk//scripts/rpc.py construct_lvol_bdev -t -l LVS0 ldxzl1 1
sudo ~/spdk//scripts/rpc.py construct_compress_bdev -p ~/pm_files -b 249a062a-7d71-4949-aa31-1371eb9cdca6

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Yan, Liang Z
Sent: Friday, July 12, 2019 6:36 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] compression status

Hi Paul,

As Ziye said, we need your fio configuration file.
Fio read verification is a little different: you need to write data sequentially into the disk with one verify_pattern to make sure the whole disk has the expected data, then read the data back with the same verify_pattern.


Thanks.

Liang Yan

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Yang, Ziye
Sent: Saturday, July 13, 2019 9:18 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] compression status

Hi Paul,

Could you paste your fio configuration file?  We need to know how many jobs you used and what your IO pattern is.

Thanks.




Best Regards
Ziye Yang 


-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Saturday, July 13, 2019 4:06 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] compression status

Quick status and a request for any ideas anyone's got moving forward :)

Over the last 3 weeks, I've addressed several issues in reduce/compress as we've started beating on it much harder.  It's a reasonably complex thing, so this is not unexpected of course.  I'm at the point now where I have multiple systems that can run for days w/o segfaults or the other strange errors that plagued us earlier.

The problem is, data integrity is not good.  I can run overnight with bdevperf but only a few minutes with fio.  Bdevperf simply does write/read/compare sequentially and wraps when it gets to the end.  Fio is random in terms of IOs; that's the main difference I see in the workloads anyway.

I'm working with a very small logical vol to make debug easier and have added an in-memory CRC array, one entry for every logical map entry.  I write the CRC of the uncompressed data into the right slot and, on a subsequent read, calculate a CRC from the uncompressed data and compare.  They all match with bdevperf; I just now hit my assert with fio telling me the read doesn't match the expected data, so I do have something to go on before fio sees a failure.  This is what I'll be plugging away on.

Any ideas or insights into bdevperf vs fio would be appreciated.  I know we had that one fio setting that caused some sort of data integrity issue when combined with another setting; I doubt that's what I'm hitting, but you never know - anyone remember exactly what that was?

Thanks!
Paul
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


* Re: [SPDK] compression status
@ 2019-07-13  1:35 Yan, Liang Z
  0 siblings, 0 replies; 6+ messages in thread
From: Yan, Liang Z @ 2019-07-13  1:35 UTC (permalink / raw)
  To: spdk


Hi Paul,

As Ziye said, we need your fio configuration file.
Fio read verification is a little different: you need to write data sequentially into the disk with one verify_pattern to make sure the whole disk has the expected data, then read the data back with the same verify_pattern.


Thanks.

Liang Yan

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Yang, Ziye
Sent: Saturday, July 13, 2019 9:18 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] compression status

Hi Paul,

Could you paste your fio configuration file?  We need to know how many jobs you used and what your IO pattern is.

Thanks.




Best Regards
Ziye Yang 


-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Saturday, July 13, 2019 4:06 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] compression status

Quick status and a request for any ideas anyone's got moving forward :)

Over the last 3 weeks, I've addressed several issues in reduce/compress as we've started beating on it much harder.  It's a reasonably complex thing, so this is not unexpected of course.  I'm at the point now where I have multiple systems that can run for days w/o segfaults or the other strange errors that plagued us earlier.

The problem is, data integrity is not good.  I can run overnight with bdevperf but only a few minutes with fio.  Bdevperf simply does write/read/compare sequentially and wraps when it gets to the end.  Fio is random in terms of IOs; that's the main difference I see in the workloads anyway.

I'm working with a very small logical vol to make debug easier and have added an in-memory CRC array, one entry for every logical map entry.  I write the CRC of the uncompressed data into the right slot and, on a subsequent read, calculate a CRC from the uncompressed data and compare.  They all match with bdevperf; I just now hit my assert with fio telling me the read doesn't match the expected data, so I do have something to go on before fio sees a failure.  This is what I'll be plugging away on.

Any ideas or insights into bdevperf vs fio would be appreciated.  I know we had that one fio setting that caused some sort of data integrity issue when combined with another setting; I doubt that's what I'm hitting, but you never know - anyone remember exactly what that was?

Thanks!
Paul
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


* Re: [SPDK] compression status
@ 2019-07-13  1:17 Yang, Ziye
  0 siblings, 0 replies; 6+ messages in thread
From: Yang, Ziye @ 2019-07-13  1:17 UTC (permalink / raw)
  To: spdk


Hi Paul,

Could you paste your fio configuration file?  We need to know how many jobs you used and what your IO pattern is.

Thanks.




Best Regards
Ziye Yang 


-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Saturday, July 13, 2019 4:06 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] compression status

Quick status and a request for any ideas anyone's got moving forward :)

Over the last 3 weeks, I've addressed several issues in reduce/compress as we've started beating on it much harder.  It's a reasonably complex thing, so this is not unexpected of course.  I'm at the point now where I have multiple systems that can run for days w/o segfaults or the other strange errors that plagued us earlier.

The problem is, data integrity is not good.  I can run overnight with bdevperf but only a few minutes with fio.  Bdevperf simply does write/read/compare sequentially and wraps when it gets to the end.  Fio is random in terms of IOs; that's the main difference I see in the workloads anyway.

I'm working with a very small logical vol to make debug easier and have added an in-memory CRC array, one entry for every logical map entry.  I write the CRC of the uncompressed data into the right slot and, on a subsequent read, calculate a CRC from the uncompressed data and compare.  They all match with bdevperf; I just now hit my assert with fio telling me the read doesn't match the expected data, so I do have something to go on before fio sees a failure.  This is what I'll be plugging away on.

Any ideas or insights into bdevperf vs fio would be appreciated.  I know we had that one fio setting that caused some sort of data integrity issue when combined with another setting; I doubt that's what I'm hitting, but you never know - anyone remember exactly what that was?

Thanks!
Paul
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


end of thread, other threads:[~2019-07-20  1:41 UTC | newest]

Thread overview: 6+ messages
2019-07-12 20:06 [SPDK] compression status Luse, Paul E
2019-07-13  1:17 Yang, Ziye
2019-07-13  1:35 Yan, Liang Z
2019-07-13 20:36 Luse, Paul E
2019-07-13 20:44 Luse, Paul E
2019-07-20  1:41 Luse, Paul E
