User interface performance during I/O (processes going into disk sleep state)

This topic is closed.

    Hi,

    I recently got a new laptop, and I have noticed horrible performance during I/O operations (at least user interface performance, as everything blocks).

    Opening a system monitor, I notice that multiple applications are continuously entering and leaving "disk sleep" state (in the CPU% column). As soon as the I/O load decreases, applications stop entering this disk sleep state and everything is smooth again.

    The three applications often going into disk sleep state are:
    - kio-file
    - kswapd0
    - ktorrent

    Other applications also tend to go into disk sleep state, but less often:
    - jbd2/sda1-8
    - flush-8:0
    - virtuoso-t
    - amarok
    - plasma-desktop
    - chromium-browser

    I've heard that the default Linux I/O scheduler (CFQ) tends to cause user interface responsiveness issues, but the problem also persists with another kernel (Liquorix kernel with BFQ).

    Does anyone know what this "disk sleep" state entails? Has anyone had similar issues? On the internet I have found explanations linking this state to an "I/O bottleneck", but my laptop uses an SSD, which should be faster than the regular HDDs that did not suffer as badly from this issue.

    regards,

    #2
    Disk sleep is when the CPU has things to do, but those things rely on disk activity, so the process sleeps (and blocks) until that disk activity has finished (i.e. the process is waiting on a file being loaded from the disk). Processes go into this state all the time, but normally only for a few milliseconds, so you only really see it when there is a bottleneck with the disk I/O.

    When a process enters the disk sleep state, the CPU tends to get on with other processes, but if they all do it then there is not much the CPU can do.
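    On Linux this shows up as process state "D" (uninterruptible sleep) in tools like ps and top. As a rough illustration (assuming a procps-style ps), you can list the processes currently stuck in that state:
    Code:
    ```shell
    # Print the header plus every process whose state field contains "D"
    # (uninterruptible / disk sleep). Run it during heavy I/O; on an idle
    # system the list below the header is usually empty.
    ps -eo pid,stat,comm | awk 'NR == 1 || $2 ~ /D/'
    ```
    Running that repeatedly while copying a large file will show the same kind of churn you are seeing in the system monitor.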



      #3
      Originally posted by cloudslayer:
      but my laptop uses an SSD drive
      Since you have an SSD, you should change the I/O scheduler. CFQ is designed for rotating media and attempts to order I/O requests to reduce seek time by minimizing movement of the read heads. Since SSDs don't have this physical limitation, CFQ simply adds unnecessary overhead. Noop is a better I/O scheduler for SSD-equipped computers.

      Edit your GRUB configuration and add
      Code:
      elevator=noop
      to the kernel's boot parameters.
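      On Ubuntu-style systems this would look roughly like the sketch below (assuming GRUB 2 with /etc/default/grub, and sda as the SSD; adjust both to your setup). You can also switch the scheduler at runtime to test the effect before touching GRUB:
      Code:
      ```shell
      # /etc/default/grub (assumed GRUB 2 layout) -- append elevator=noop, e.g.:
      #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=noop"
      # then regenerate the GRUB config and reboot:
      sudo update-grub

      # Or switch at runtime (reverts on reboot); sda is an assumption:
      echo noop | sudo tee /sys/block/sda/queue/scheduler

      # Verify -- the active scheduler is shown in [brackets]:
      cat /sys/block/sda/queue/scheduler
      ```
      The runtime switch is handy for a before/after comparison during a big file copy.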

      While you're at it, you should also add the discard mount parameter to instruct the file system to make full use of your drive's TRIM features. For each EXT4 partition, fstab should look like this:
      Code:
      UUID=...   /   ext4   {other_parameters},discard   0 1
      And also run this command once for each partition:
      Code:
      sudo tune2fs -o discard /dev/sdXN
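      As a quick sanity check before enabling any of this (device names are assumptions, substitute your own), you can confirm the drive actually advertises TRIM support and run a one-off trim of a mounted filesystem:
      Code:
      ```shell
      # Show per-device discard (TRIM) capabilities; non-zero DISC-GRAN and
      # DISC-MAX values mean the device supports discard:
      lsblk --discard

      # One-off trim of the root filesystem (requires root); prints the
      # number of bytes trimmed with -v:
      sudo fstrim -v /
      ```
      If lsblk shows all zeros for the SSD, the discard mount option will have no effect.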



        #4
        Hi,

        I've been running my PC with the suggested changes for a week or so, and it appears to slightly improve matters.

        When there is heavy disk I/O (e.g. copying files from the internal SSD to an external HDD), performance still becomes (very) choppy, however.

        Originally posted by james147:
        Disk sleep is when the CPU has things to do, but those things rely on disk activity, so the process sleeps (and blocks) until that disk activity has finished (i.e. the process is waiting on a file being loaded from the disk). Processes go into this state all the time, but normally only for a few milliseconds, so you only really see it when there is a bottleneck with the disk I/O.

        When a process enters the disk sleep state, the CPU tends to get on with other processes, but if they all do it then there is not much the CPU can do.
        What I do not understand is why an SSD would perform worse here than an HDD: the former should be a lot faster than the latter, making it less of a bottleneck, no?
