
Dolphin File Folders Maximum Limit?

This topic is closed.

    Dolphin File Folders Maximum Limit?

    Does anyone have a link to official info on the maximum number of folders Dolphin can handle? Or do you personally have experience with problems caused by too many folders in Dolphin? I am also wondering whether there is a limit to the number of files in a folder.
    I don't want to crash my system. Do you think 100K folders with 5K+ files each would cause issues? The files are of various formats and sizes.
    Kubuntu 18.04.3 LTS -- KDE 5.12.9

    #2
    Dolphin uses KIO: https://en.wikipedia.org/wiki/KIO

    When searching ...

    An older topic (2012) from the KDE Forums: Dolphin very slow when opening folders with a lot of files - https://forum.kde.org/viewtopic.php?f=224&t=107910
    Dolphin is very slow and laggy when it comes to loading folders that have a lot of files in them. This is most palpable in gallery folders where I keep more than 40.000 images...
    2015 - Slow copy/move operation compared to other file managers: https://forum.kde.org/viewtopic.php?f=223&t=124677

    I think it will depend on your system: CPU/available memory/hard drive/etc...
    Last edited by Rog131; Jan 08, 2018, 03:43 AM.
    Before you edit, BACKUP !

    Why are there dead links?
    1. Thread: Please explain how to access old kubuntu forum posts
    2. Thread: Lost Information



      #3
      I doubt you would crash your entire system. You might drag it down to the point of being unusable or crash Dolphin.

      Here's one comment: https://askubuntu.com/questions/6312...-100000-images

      Please Read Me



        #4
        Thank you both for that info. I was having a really hard time getting any results on my question, here or from Google in general, and I'm usually quite good at searches.
        That was all very helpful. I am planning a huge revamp of the way I keep my data, and I really wanted a plan for future growth before I started.
        Kubuntu 18.04.3 LTS -- KDE 5.12.9



          #5
          Uh, yeah and 50 million data files is a LOT!

          I suppose you know this already, but with that much stuff the file system you use and fine-tuning its settings will be critical. Also, if Dolphin isn't up to the task, I wouldn't hesitate to use another file manager or even another desktop environment. That much data is a very specific use case, and the use case should always drive the system design.
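          For instance, if you end up on ext4 the i-node budget is fixed when the file system is created, so you would want to build it with lots of small files in mind. A rough sketch only -- the device name and the numbers are placeholders, not a recommendation for your exact setup:

              sudo mkfs.ext4 -i 8192 /dev/sdX1    # one i-node per 8 KiB instead of the default 16 KiB
              sudo mkfs.ext4 -T small /dev/sdX1   # or pick a usage type defined in /etc/mke2fs.conf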

          BTW, out of curiosity, I did a quick test of Dolphin on my install: 16.7 GiB, 378811 files, and 37693 folders, and it counted all of that in just a few seconds. I think if you keep file previews turned off you should be OK.

          EDIT: For the record, that test was done on a two SSD RAID0 fs using btrfs. YMMV
          Last edited by oshunluvr; Jan 08, 2018, 08:35 AM.

          Please Read Me



            #6
            Or was that 500 million?

            Please Read Me



              #7
              Originally posted by oshunluvr View Post
              ...

              BTW, out of curiosity, I did a quick test of Dolphin on my install. 16.7 GiB, 378811 files, and 37693 folders ...
              How does one get that information?
              Kubuntu 20.04



                #8
                Originally posted by chimak111 View Post
                How does one get that information?
                I just opened Dolphin, selected my root subvolume, right-clicked, and selected "Properties".
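                If you'd rather skip the GUI, roughly the same numbers come out of the terminal; the path below is just a placeholder:

                    du -sh /path/to/subvolume                 # total size on disk
                    find /path/to/subvolume -type f | wc -l   # number of files
                    find /path/to/subvolume -type d | wc -l   # number of folders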

                Please Read Me



                  #9
                  You are correct, that would be 500 Million. But that was just an estimate of extreme growth to see what the tolerance level would be. Right now I have approximately 6000 folders and less than 400,000 files total. That does not include any hidden or OS files. That is just pure data, photo and video. And it all resides on a 1TB HD that is almost full.
                  The only problem I have ever encountered with files in Dolphin is the thumbnailer having an issue with some TIFF files I had. I no longer use that format or have any, but it was an ever-spooling mess. It could easily have been a wonky file, who knows. I just converted them to something else and all is good.

                  I probably don't need to say this here at Kubuntu, but I just love Dolphin. It is the best graphical FM I have ever used. It lets me do almost everything I want, it fits my workflow, and I can usually customize it how I need to.
                  Kubuntu 18.04.3 LTS -- KDE 5.12.9



                    #10
                    I love Dolphin myself, but for a special use case like hundreds of millions of files I would find the best tool for the job.

                    Re: Dolphin and previews, if you go into the preferences you can specify which file types to preview and limit the file size for which previews are generated. This would speed things up.

                    Frankly, if it were me, for operations on a target this large I would use the command line for as much as possible.
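                    For example, something along these lines covers most of the heavy lifting without a GUI. The paths are placeholders, and --info=progress2 needs rsync 3.1 or newer:

                        find /data -type f | wc -l                      # count the files before touching anything
                        du -sh /data                                    # total size
                        rsync -a --info=progress2 /data/ /backup/data/  # copy the tree, resumable if interrupted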

                    Please Read Me



                      #11
                      IF you really want to do file management fast then the best option is mc (Midnight Commander) on the CLI.

                      Regardless, recalling experiences from my Windows days of 15-20 years ago, I found that nesting a lot of sub-directories does more to slow a file manager than anything else. When I was comparing state tax returns against IRS tax returns, the IRS supplied their files in sub-directories nested 15 deep (which was the maximum, IIRC). Sllloooowwww. I'd re-arrange the data into a single sub-directory before I ran my comparison program. Even then, on the hardware available around 2000 AD, comparing 1.8 million tax returns required the program to run all night, i.e., more than 12 hours.
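                      On Linux today, a rough sketch of that "flatten everything into one directory" step might look like this. The directory names are made up, and encoding the old path into the new file name is just one way to avoid collisions:

                          mkdir -p flat
                          # move every file out of the nested tree, keeping its old path in the new name
                          find nested -type f -print0 |
                            while IFS= read -r -d '' f; do
                              mv -n "$f" "flat/$(echo "${f#nested/}" | tr / _)"
                            done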

                      Moral: (IF it applies to Linux in this era) keep your file sub-directories depth short.
                      "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                      – John F. Kennedy, February 26, 1962.



                        #12
                        Originally posted by GreyGeek View Post
                        ... files in sub-directories nested 15 deep (which was the maximum, IIRC). Sllloooowwww.
                        Moral: (IF it applies to Linux in this era) keep your file sub-directories depth short.
                        I have had bad experiences with Windows and this exact thing, hence I was hoping it was not an issue on all OSes. I am not planning on going more than 5 deep at this time, since that is about as far as I really want to classify data.
                        Kubuntu 18.04.3 LTS -- KDE 5.12.9



                          #13
                          Moral: (IF it applies to Linux in this era) keep your file sub-directories depth short.
                          IME with Unix it does not. I've not done this with Linux (is that an urge to futz... tempting), but to investigate this point I once ran a command to make a directory, change into it, and repeat indefinitely. I can't remember which Unix it was. The script continued until the file system ran out of i-nodes, many thousands of levels deep. My shell process was quite happy down in this hole after I'd deleted a few directories to free up some i-nodes, but the pwd command crashed. "rm -r" from the top crashed with a stack overflow, but working from the bottom up was fine. (Maybe these days shells will get upset trying to maintain things like the $PWD variable. I'm sure other things would choke too; the database used by locate might get large, for example.)
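                          On Linux the equivalent experiment would be something like the loop below. Do it on a small scratch file system (a loop-mounted image, say), not on one you care about; /mnt/scratch is just a placeholder:

                              cd /mnt/scratch
                              # descend until mkdir or cd fails (out of i-nodes, or some other limit)
                              while mkdir d && cd d; do :; done
                              # clean up from the bottom, which works even when "rm -r" from the top struggles
                              rmdir d 2>/dev/null
                              while [ "${PWD##*/}" = d ]; do cd .. && rmdir d; done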

                          Windows' NTFS inherited its approach from DEC's VMS, which has a limit on path length and which, if I recall correctly, resolves every file access from the root under the hood. Unix processes, on the other hand, hold a current-directory link to the directory i-node, so files can be accessed without knowing the whole structure.

                          So, I don't recommend having folders 100,000 deep, but I imagine a few hundred would be fine. Linux should really be called GNU/Linux (Linux is "just" a kernel), and the GNU folks had a design goal of removing arbitrary limits on everything.

                          However, IMO for any sensible method of organizing the files the depth will grow only logarithmically, roughly with base equal to the average number of entries per directory. For 500,000,000 files at an average of 100 entries per directory that's only 5 or so levels deep; at 10 per directory, only 10 or so.
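                          A quick bc check of those logarithms, for anyone who wants the actual numbers:

                              echo 'l(500000000)/l(100)' | bc -l   # ~4.3, so about 5 levels at 100 entries each
                              echo 'l(500000000)/l(10)' | bc -l    # ~8.7, so 9 or 10 levels at 10 entries each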

                          If that's all on one file system you'd want to check the number of i-nodes free. ext4 would need more than 2 TB to have enough; see man mkfs.ext4. Or use btrfs, which doesn't have such a limit (OK, 2^64). I imagine you'll need at least a few drives, and btrfs would greatly ease the management of them, I think.
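                          Checking is easy enough; the path and device below are placeholders:

                              df -i /data                                  # i-nodes total, used and free per file system
                              sudo tune2fs -l /dev/sdX1 | grep -i inode    # ext4: the i-node count fixed at mkfs time

                          On btrfs, df -i just reports zeros, because i-nodes there are allocated dynamically rather than fixed when the file system is made.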

                          Last edited by jlittle; Jan 13, 2018, 12:07 AM. Reason: Fix quoting
                          Regards, John Little



                            #14
                            Interesting post!
                            "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
                            – John F. Kennedy, February 26, 1962.



                              #15
                              Originally posted by jlittle View Post
                              ... use btrfs, which doesn't have such a limit (OK, 2^64). I imagine you'll need at least a few drives, and btrfs would greatly ease the management of them, I think.
                              Interesting use-case argument for btrfs. I hadn't really thought about that level of capacity, but btrfs with its online expandability would be a benefit.

                              Please Read Me

