Disk Performance

If your disk does not perform as you expect, you can use the tools described on this page to diagnose the cause.

This page was written in response to the long-standing bug report #197762. Please read this page and refer to the Reporting a bug section below before involving others in your problem.

Generating a write test load

To test a specific part of the system in as much isolation from the other parts as possible, it is recommended to generate a write load with the dd command. As an example, the following will write to the file test-file in /media/usb-disk:

dd if=/dev/zero of=/media/usb-disk/test-file bs=32k

You stop the writing with Ctrl-C, which will also make dd print the average throughput.
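
For illustration, the statistics dd prints when interrupted look like this (the figures below are made up and will differ on your system):

131072+0 records in
131072+0 records out
4294967296 bytes (4.3 GB) copied, 150.042 s, 28.6 MB/s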

The destination can also be a device name, which is useful when trying to rule out filesystem-related problems. Note that this will overwrite the data on the device and leave the partition's filesystem unusable, so back up your data first. You can of course reformat the partition for normal use after testing is done.
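
As a sketch, assuming the disk showed up as /dev/sdb (the name here is only an example; check dmesg or 'sudo fdisk -l' first, because writing to the wrong device destroys the data on it), a raw-device write test could look like this:

sudo dd if=/dev/zero of=/dev/sdb bs=32k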

Monitoring performance

To monitor the performance of your disk, use the dstat command. By default it shows a lot of information for the whole system every second. Normally the aggregated read and write throughput for all disks in your system is shown, but it can be limited to monitoring only certain devices.

Syntax for the -D option to limit monitored devices: dstat -D total|disk[,disk2[,...]]

$ dstat -D hda

Example output:

----total-cpu-usage---- --dsk/hda-- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw 
 23   2  75   1   0   0|  12k   25k|   0     0 | 536B 2424B| 258   394 
 16   1  83   0   0   0|   0     0 |   0     0 |   0     0 | 327   608

The two most interesting columns are read and writ under dsk/hda, which show the number of bytes read from and written to the disk hda. In the example above there is almost no disk activity in the first sample and no disk activity at all in the second sample.

Start the monitoring before starting the test load and verify that there is no significant disk activity (and, in general, ensure that there is no other load on the system).

For periodic per-device statistics you can also use iostat (from the sysstat package); for a top-like per-process view of I/O there is iotop.
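
For example, to show extended statistics for a single disk every second (the device name sda is just an example):

iostat -dx sda 1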

Expected performance

So what kind of performance can you actually expect from your device? Perhaps you have multiple operating systems installed on your computer and have seen much better performance in Windows, with another version of Ubuntu, or with another Linux distribution. Or, if your device is pluggable, you might have seen much better performance when it was connected to another computer. These kinds of observations are interesting and should be included in a bug report if you decide that it is appropriate to file one.

Please be sure to use the "safely remove hardware" function before concluding that data has reached your device, as it might otherwise still be sitting in the operating system's memory (this applies to, e.g., USB pluggable devices).
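
From a terminal you can get the same assurance by flushing outstanding writes and unmounting before drawing conclusions (the mount point is just an example, and umount may need sudo depending on how the device was mounted):

sync
umount /media/usb-disk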

As a rule of thumb, conventional platter-based hard disks should produce double-digit megabyte-per-second figures. Flash-based devices show a lot of variance by themselves, so you should work hard to establish that they really are slower in Ubuntu than in other operating systems.

Troubleshooting

Nautilus shows a high transfer rate at the beginning of a transfer but then slows down

The speed that Nautilus indicates does not necessarily reflect how much data is actually being written to disk at that moment. This is true for other tools as well, since the data might initially only go to the operating system's memory and be written to disk later. For this reason you should measure performance with dstat as described above.

Mounting the disk manually gives better performance than auto-mounting or semi-auto-mounting

Your disk has been mounted automatically by GNOME, or semi-automatically in response to accessing the device in the file manager (Nautilus) or running 'mount <mountpoint>' from the command line. The performance as indicated by dstat is bad.

During debugging you used 'mount <dev> <mountpoint>' and experienced much better performance! Perhaps you did other things as well, such as starting up without GNOME and swinging a cat above your head at full moon, but in the interest of producing a usable bug report you must be able to reproduce the performance difference without doing any of that.

Identify exactly which mount option is responsible for the performance difference, and make sure you can manually mount your disk with only that single option changed and still reproduce the difference as indicated by dstat.

You can look at the content of /proc/mounts and /etc/mtab to see the active mount options for your disk.

When mounting your device for these tests, use 'mount -o <options> <dev> <mountpoint>' and not just 'mount <mountpoint>', so that the active options are exactly the ones you specify.
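
A minimal sketch of such a comparison, assuming the device is /dev/sdb1, the automatic mount point is /media/usb-disk, and the suspect option is the vfat 'flush' option (all of these names and options are examples, not a diagnosis):

grep /media/usb-disk /proc/mounts
sudo umount /media/usb-disk
sudo mkdir -p /mnt/test
sudo mount -t vfat -o rw,flush /dev/sdb1 /mnt/test

Run the dstat-monitored write test, unmount, then mount again with 'flush' left out of the option list; if the throughput changes, you have found the responsible option.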

Having followed these instructions, report a new bug. See also the general instructions in the Reporting a bug section below.

Full filesystem

If your filesystem is close to full, you cannot expect full performance for otherwise sequential workloads (e.g. copying files that are measured in megabytes), since they will be turned into lots of small reads and writes.
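
You can check how full a filesystem is with df (the mount point is just an example):

df -h /media/usb-disk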

Aged filesystem

Even if your filesystem is far from full, you might experience bad read performance if the files being read are heavily fragmented, for example because the filesystem was close to full when they were written. If you are trying to establish whether your disk is slow or not, consider moving the files off the partition and reformatting it before moving the files back.
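
You can get an idea of how fragmented a particular file is with filefrag from e2fsprogs (not every filesystem supports the ioctls it needs, and it usually requires root); reusing the test file from above as an example:

sudo filefrag -v /media/usb-disk/test-file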

Check DMA is enabled for disk

For ATA connected devices you should verify that DMA is enabled for your device with hdparm.

Use sudo hdparm -d [disk-device]

and check for

using_dma     =  1 (on)
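
If it shows 0 (off) instead, you can try switching DMA on; note that this does not persist across reboots and is not supported by every driver (the device name matches the example above):

sudo hdparm -d1 /dev/hda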

If USB is slower on Ubuntu than on another OS on the same computer

If the usbmount package is installed, consider removing it.
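
For example, to check whether it is installed and remove it:

dpkg -l usbmount
sudo apt-get remove usbmount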

Check USB device is connected in 2.0 mode

Either check the output of lsusb to see whether your disk is on the same bus as a USB 2.0 root hub.

Or, look at dmesg and check whether it says "new full speed USB device" or "new high speed USB device". The former means it is in 1.1 mode and the latter in 2.0 mode (no, it is not the other way around!).
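
For example, lsusb -t shows the negotiated speed at the end of each line (480M means 2.0 high speed, 12M means 1.1 full speed), and the relevant kernel messages can be filtered from dmesg; the exact wording varies between kernel versions:

lsusb -t
dmesg | grep -i "speed usb"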

USB 1.1 devices are limited to about 1 MB/s, while USB 2.0 devices should only be limited by the disk performance.

Using noop as elevator gives much better performance than the default cfq

You can try noop as the scheduler for your device by running

  • echo noop > /sys/block/<dev>/queue/scheduler

as root. Change back to cfq, or try anticipatory or deadline, with the same command, substituting the scheduler name for noop.

See the currently active scheduler with

  • cat /sys/block/<dev>/queue/scheduler
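
The currently active scheduler is shown in square brackets; on the kernels this page was written for the output looks something like:

noop anticipatory deadline [cfq]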

See Ubuntu bugs #381300 and #131094 and the related upstream kernel reports.

Reporting a bug

A bug report should include the following:

  • Filesystem type, size, and whether it is newly formatted.
  • Transport type, e.g. USB, eSATA, (P)ATA, SATA, FireWire.
  • A description of how you performed your experiments.
  • Performance data for the write test load above, i.e. dstat output for around 60 seconds (see the example below).
  • Performance data for the disk when connected via different transports (for devices that support multiple transports).
  • Performance with other operating systems; for other Linux distributions, include the kernel version. If you know a kernel version that gives better performance, that is very helpful.
  • Whether your problem is reproducible every time or comes and goes.
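
For example, to capture roughly 60 seconds of one-second samples for the disk under test while the dd write load from above is running (the device name sdb is just an example):

dstat -D sdb 1 60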

When reporting a bug you should be prepared to run a number of experiments to diagnose your problem. For a good diagnosis it can be helpful to run tests that destroy the data on your device, so consider moving your data to another place before reporting the bug.

If you are not prepared to spend the time required to diagnose the problem you should not expect that reporting a bug will lead to a solution (i.e. don't bother to report anything).

If you have opened a new bug, add a pointer to it in bug report #197762. Another related bug is #177235.
