Display Disk Settings
2020/01/08
Display disk settings and device information on your computer.
[1] Install the required packages. For SCSI or SATA devices, install [hdparm]; for NVMe devices such as M.2 SSDs, install [nvme-cli].
[root@dlp ~]# dnf -y install hdparm nvme-cli
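If you are not sure whether a disk is SATA/SCSI or NVMe, you can check its transport type with [lsblk] first. This is a minimal sketch; the columns are standard [lsblk] output columns and nothing is changed.
# show device name, rotational flag and transport (sata, nvme, ...) for whole disks
[root@dlp ~]# lsblk -d -o NAME,ROTA,TRAN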
[2] This is the basic usage of [hdparm].
# display basic disk settings
[root@dlp ~]# hdparm /dev/sda

/dev/sda:
 multcount     = 16 (on)
 IO_support    =  1 (32-bit)
 readonly      =  0 (off)
 readahead     = 8192 (on)
 geometry      = 364801/255/63, sectors = 5860533168, start = 0

# display detailed disk info
[root@dlp ~]# hdparm -I /dev/sda

/dev/sda:

ATA device, with non-removable media
        Model Number:       ST3000VN007-2E4166
        Serial Number:      Z6A084MY
        Firmware Revision:  SC60
        Transport:          Serial, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
Standards:
        Used: unknown (minor revision code 0x001f)
        Supported: 9 8 7 6 5
        Likely used: 9
Configuration:
        Logical         max     current
        cylinders       16383   16383
        heads           16      16
        sectors/track   63      63
.....
.....

# run cached and buffered reads test
[root@dlp ~]# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   22418 MB in  1.99 seconds = 11270.03 MB/sec
 Timing buffered disk reads: 1380 MB in  3.00 seconds = 459.57 MB/sec
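[hdparm] can also query individual feature flags without changing them. The following is a minimal sketch that only reads the current write-cache state of the example device [/dev/sda]; running [-W] without a value does not modify the setting.
# display the current write-caching setting (read-only when no value is given)
[root@dlp ~]# hdparm -W /dev/sda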
[3] This is the basic usage of [nvme-cli].
# display NVMe devices
[root@dlp ~]# nvme list

Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     BTNH93310Q5T1P0B     INTEL SSDPEKNW010T8                      1           1.02  TB /   1.02  TB    512   B +  0 B   002C
/dev/nvme1n1     P02724117355         PLEXTOR PX-1TM8SeY                       1           1.02  TB /   1.02  TB    512   B +  0 B   1.00

# display device info
[root@dlp ~]# nvme id-ctrl -H /dev/nvme0n1

NVME Identify Controller:
vid     : 0x8086
ssvid   : 0x8086
sn      : BTNH93310Q5T1P0B
mn      : INTEL SSDPEKNW010T8
fr      : 002C
rab     : 6
ieee    : 5cd2e4
cmic    : 0
  [2:2] : 0     PCI
  [1:1] : 0     Single Controller
  [0:0] : 0     Single Port
mdts    : 5
cntlid  : 1
ver     : 10300
rtd3r   : 7a120
rtd3e   : 1e8480
oaes    : 0x200
.....
.....

# display SMART log
[root@dlp ~]# nvme smart-log /dev/nvme0n1

Smart Log for NVME device:nvme0n1 namespace-id:ffffffff
critical_warning                    : 0
temperature                         : 36 C
available_spare                     : 100%
available_spare_threshold           : 10%
percentage_used                     : 0%
data_units_read                     : 7,840
data_units_written                  : 279,605
host_read_commands                  : 36,731
host_write_commands                 : 2,155,655
controller_busy_time                : 7
power_cycles                        : 4
power_on_hours                      : 2,085
unsafe_shutdowns                    : 0
media_errors                        : 0
num_err_log_entries                 : 0
Warning Temperature Time            : 0
Critical Composite Temperature Time : 0
Thermal Management T1 Trans Count   : 7
Thermal Management T2 Trans Count   : 0
Thermal Management T1 Total Time    : 98
Thermal Management T2 Total Time    : 0

# display error log
[root@dlp ~]# nvme error-log /dev/nvme0n1

Error Log Entries for device:nvme0n1 entries:64
.................
 Entry[ 0]
.................
error_count     : 0
sqid            : 0
cmdid           : 0
status_field    : 0(SUCCESS: The command completed successfully)
parm_err_loc    : 0
lba             : 0
nsid            : 0
vs              : 0
cs              : 0
.....
.....
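[nvme-cli] provides further read-only subcommands as well. The following is a minimal sketch using the example device [/dev/nvme0n1]; both commands only display information.
# display namespace details (capacity, LBA formats and so on)
[root@dlp ~]# nvme id-ns -H /dev/nvme0n1
# display firmware slot log
[root@dlp ~]# nvme fw-log /dev/nvme0n1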
[4] To benchmark any disk, you can use the [fio (Flexible I/O Tester)] tool.
[root@dlp ~]# dnf -y install fio
# test sequential reads with 4K block size
[root@dlp ~]# fio --bs=4k --size=1G --direct=1 --rw=read --numjobs=64 --runtime=10 --group_reporting --name=testjob4K1G --filename=/var/m2/testfile4K1G

testjob4K1G: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
...
fio-3.7
Starting 64 processes
testjob4K1G: Laying out IO file (1 file / 1024MiB)
Jobs: 61 (f=59): [E(1),f(1),E(2),R(6),f(1),R(53)][100.0%][r=1165MiB/s,w=0KiB/s][r=298k,w=0 IOPS][eta 00m:00s]
testjob4K1G: (groupid=0, jobs=64): err= 0: pid=8000: Tue Jan  7 14:01:05 2020
   read: IOPS=299k, BW=1169MiB/s (1226MB/s)(11.4GiB/10002msec)
    clat (usec): min=33, max=4488, avg=213.16, stdev=31.87
     lat (usec): min=33, max=4488, avg=213.25, stdev=31.87
    clat percentiles (usec):
     |  1.00th=[  180],  5.00th=[  194], 10.00th=[  198], 20.00th=[  204],
     | 30.00th=[  206], 40.00th=[  208], 50.00th=[  210], 60.00th=[  215],
     | 70.00th=[  219], 80.00th=[  223], 90.00th=[  231], 95.00th=[  239],
     | 99.00th=[  258], 99.50th=[  265], 99.90th=[  302], 99.95th=[  502],
     | 99.99th=[ 1270]
   bw (  KiB/s): min=18123, max=20000, per=1.56%, avg=18709.27, stdev=327.56, samples=1223
   iops        : min= 4530, max= 5000, avg=4677.30, stdev=81.89, samples=1223
  lat (usec)   : 50=0.01%, 100=0.03%, 250=98.02%, 500=1.90%, 750=0.03%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%
  cpu          : usr=0.83%, sys=2.29%, ctx=2992979, majf=0, minf=749
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2992737,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=1169MiB/s (1226MB/s), 1169MiB/s-1169MiB/s (1226MB/s-1226MB/s), io=11.4GiB (12.3GB), run=10002-10002msec

Disk stats (read/write):
  nvme1n1: ios=2924615/7, merge=0/0, ticks=608592/22, in_queue=779, util=98.84%

# test sequential reads with 512K block size
[root@dlp ~]# fio --bs=512k --size=1G --direct=1 --rw=read --numjobs=64 --runtime=10 --group_reporting --name=testjob512K1G --filename=/var/m2/testfile512K1G

testjob512K1G: (g=0): rw=read, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=psync, iodepth=1
...
fio-3.7
Starting 64 processes
testjob512K1G: Laying out IO file (1 file / 1024MiB)
Jobs: 64 (f=64): [R(64)][100.0%][r=2392MiB/s,w=0KiB/s][r=4784,w=0 IOPS][eta 00m:00s]
testjob512K1G: (groupid=0, jobs=64): err= 0: pid=7884: Tue Jan  7 14:00:09 2020
   read: IOPS=4763, BW=2382MiB/s (2498MB/s)(23.3GiB/10014msec)
    clat (usec): min=433, max=27306, avg=13411.41, stdev=1990.06
     lat (usec): min=433, max=27307, avg=13411.67, stdev=1990.07
    clat percentiles (usec):
     |  1.00th=[13042],  5.00th=[13042], 10.00th=[13042], 20.00th=[13042],
     | 30.00th=[13042], 40.00th=[13042], 50.00th=[13042], 60.00th=[13042],
     | 70.00th=[13042], 80.00th=[13042], 90.00th=[13042], 95.00th=[13304],
     | 99.00th=[25822], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132],
     | 99.99th=[27132]
   bw (  KiB/s): min=36790, max=44032, per=1.56%, avg=38108.50, stdev=1125.47, samples=1280
   iops        : min=   71, max=   86, avg=74.38, stdev= 2.21, samples=1280
  lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.02%, 4=0.04%, 10=0.14%, 20=97.12%, 50=2.66%
  cpu          : usr=0.03%, sys=0.24%, ctx=47939, majf=0, minf=8823
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=47705,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=2382MiB/s (2498MB/s), 2382MiB/s-2382MiB/s (2498MB/s-2498MB/s), io=23.3GiB (25.0GB), run=10014-10014msec

Disk stats (read/write):
  nvme1n1: ios=187741/0, merge=0/0, ticks=2498092/0, in_queue=2356724, util=98.64%
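Other access patterns can be tested by changing the [--rw] option. The following is a minimal sketch based on the options above; the job names and file names under [/var/m2] are only examples, and the write test creates its own test file, so do not point [--filename] at data you need.
# test random reads with 4K block size
[root@dlp ~]# fio --bs=4k --size=1G --direct=1 --rw=randread --numjobs=64 --runtime=10 --group_reporting --name=testjobRandRead4K1G --filename=/var/m2/testfileRandRead4K1G
# test random writes with 4K block size (writes only to its own test file)
[root@dlp ~]# fio --bs=4k --size=1G --direct=1 --rw=randwrite --numjobs=64 --runtime=10 --group_reporting --name=testjobRandWrite4K1G --filename=/var/m2/testfileRandWrite4K1G
# remove the test files when finished
[root@dlp ~]# rm /var/m2/testfile*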