"No Space Left on Device" on drives that aren't full and no ability to use sudo??

    "No Space Left on Device" on drives that aren't full and no ability to use sudo??

    New install of Ubuntu Server 14.04.3, and I'm getting the oddest behavior.

    After running unattended for hours, it will all of a sudden stop accepting writes to the drives, reporting disk-full errors (they're not full). Trying anything with sudo either fails with an odd "Bus Error" or simply takes 10-20 minutes to execute. There are no errors in any log file except the ones that refer to the disk-full errors (logs that can't write, etc.). Then, after 20-30 minutes or so, everything returns to normal.

    The only thing I can think of is that the boot drive is a rather old SSD. It's not reporting any errors through smartctl, but it has VERY limited SMART support; it appears it doesn't even support self-tests (Patriot Warp v2, circa 2009).
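
    For anyone wanting to check a similar drive, this is roughly what I've been running - assuming the SSD shows up as /dev/sda, adjust for your box:

    # Dump whatever SMART attributes and health info the drive exposes
    sudo smartctl -a /dev/sda
    # Attempt a short self-test; this drive just reports that self-tests are unsupported
    sudo smartctl -t short /dev/sda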

    So I'm going to migrate the install - if I can - to another drive and toss the SSD, unless someone knows another possible cause.

    I'm suspecting the drive mostly because of the lack of logging, but almost the entire system is brand new - mobo, RAM, CPU - so it could be elsewhere. I'll move the install, boot to a different device, and see if this goes away.

    Please Read Me

    #2
    Could you have run out of inodes, if your file system uses them? I had this happen on the drive in my PC that has been upgraded since at least raring, and all those old kernels were using them up for some reason.
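
    Something like this would show it, assuming an ext4 root (or any other inode-based filesystem) - just a generic sketch, not specific to your box:

    # Show inode usage per mounted filesystem; 100% IUse% means no new files can be created
    df -i
    # Old kernels and their headers are the usual inode hogs on a long-upgraded system
    dpkg -l 'linux-image-*' 'linux-headers-*' | grep ^ii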



      #3
      No, btrfs doesn't report or use inodes in the same way as more traditional file systems (df -i always reports 0 inodes). The correct way to determine actual space on a btrfs filesystem is

      sudo btrfs fi df /

      with "/" being the mount point of the FS. Here, that reports:
      # btrfs fi df /
      Data, single: total=21.31GiB, used=14.51GiB
      System, single: total=4.00MiB, used=16.00KiB
      Metadata, single: total=776.00MiB, used=576.02MiB
      unknown, single: total=208.00MiB, used=9.33MiB

      So, although the metadata is more full percentage-wise than the data, there's still space. But your question got me thinking and doing more digging. Running a balance on the data chunks with any usage filter above 18 starts failing with ENOSPC errors, and metadata is worse, with anything above 0 causing errors. More research suggests this is the source of the disk-full errors.
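
      For reference, these are roughly the balance runs I mean (the usage value is a percentage; only chunks used at or below that level get rewritten, so higher values make balance touch more chunks):

      # Rebalance only data chunks that are at most 18% used; anything higher hits ENOSPC here
      sudo btrfs balance start -dusage=18 /
      # Same idea for metadata; on this FS anything above 0 already fails
      sudo btrfs balance start -musage=0 /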

      Still, I think it could be hardware (the SSD): most of SMART is disabled, what it does report is a pre-fail warning on temperature, the temperature reads 0°C, and it has failed to boot once or twice (possibly the new mobo, not the drive - so which is it?).

      Anyway - testing the drive is necessary before I toss it, so I snapshotted and copied all the subvolumes to a different drive. Once I'm sure it's booting from the other drive, I'll wipe the SSD and see if it's usable at all.
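
      Roughly what the copy looked like, per subvolume - the paths here are only illustrative (my snapshot directory and destination mount will differ from yours):

      # btrfs send needs a read-only snapshot of the subvolume
      sudo btrfs subvolume snapshot -r / /snapshots/root-ro
      # Stream it onto the btrfs filesystem on the other drive
      sudo btrfs send /snapshots/root-ro | sudo btrfs receive /mnt/sdc2/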

      Please Read Me



        #4
        This is interesting: A new btrfs filesystem on a much larger device shows this:

        root@server:/mnt/sdc2# btrfs fi df /mnt/sdc2
        Data, single: total=15.01GiB, used=14.20GiB
        System, DUP: total=8.00MiB, used=16.00KiB
        System, single: total=4.00MiB, used=0.00
        Metadata, DUP: total=1.50GiB, used=619.27MiB
        Metadata, single: total=8.00MiB, used=0.00
        unknown, single: total=208.00MiB, used=0.00


        A considerably smaller amount of data space is reserved for the same amount of data, but more metadata is used. Very curious...


        EDIT: Sorry, to be clearer: the above FS holds the copies of the subvolumes from the SSD.
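
        To compare how much of each device has actually been allocated into chunks versus its raw size, btrfs fi show lines them up:

        # "used" in this output is space allocated to data/metadata/system chunks, not space used inside them
        sudo btrfs fi show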

        Please Read Me



          #5
          Update: SSD died totally.

          Luckily, I had already backed up the installs there.

          I'm assuming that was the issue, but I will post back if it re-occurs.

          Please Read Me

