    Badly behaving applications - and operating system?

    Hello Everyone,

    I am new here, and slowly moving over from WXP to Kubuntu 18.04, on a dual boot system.

    I needed to edit (split) a PDF file and installed PDF-Shuffler to accomplish that (Discover suggested it).
    The PDF document was about 2000 pages, and I needed to extract two of them.
    PDF-Shuffler was overwhelmed: it simply became unresponsive while trying to render the thumbnail pictures of the pages, after getting through about 1800 of them.

    The most disconcerting thing was that the system itself became unresponsive while the disk activity light stayed on the whole time!
    The cursor barely moved, and nothing happened when I tried to close PDF-Shuffler.
    I could not minimize the applications on screen and switch to KSysGuard to terminate it.
    I eventually had to shut down the computer to get out of this situation.

    Tell me what I am doing wrong; surely such system behavior is not normal!

    EDIT:
    It is a desktop system with an AMD dual-core, 64-bit CPU, about 10 years old.
    2 GB RAM
    KDE Plasma version 5.12.7
    Nvidia G84 graphics card

    Thanks, Peter

    #2
    Off-the-cuff observation:

    10-year-old PC (in computer terms, that is ANCIENT!)
    2 GB RAM (a 2,000-page PDF! I'd bet that rendering the required thumbnails alone taxed your memory.)
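
    Rough numbers, assuming PDF-Shuffler renders each page at full size before scaling it down (I haven't read its source, so treat this as an illustration): an A4 page at 150 DPI is about 1240 x 1754 pixels, roughly 6 MB as 24-bit RGB. A few hundred of those held in memory at once will exhaust 2 GB on their own, and even the finished thumbnails, at a modest 64 KB each, add another ~128 MB for 2,000 pages.
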
    Windows no longer obstructs my view.
    Using Kubuntu Linux since March 23, 2007.
    "It is a capital mistake to theorize before one has data." - Sherlock Holmes



      #3
      I am not familiar with PDF-Shuffler, but what looks to have happened is that it gobbled up all your RAM, and your computer then began using swap, which can definitely churn a hard drive and slow the system to a crawl. 2000 pages is probably far more than the program, let alone most computers, could handle.
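
      If it happens again, you can watch memory and swap from a terminal before things lock up completely (these are standard tools, nothing PDF-specific):

      free -h
      vmstat 1

      free -h shows used RAM and swap in human-readable units; vmstat 1 prints a line every second, and non-zero si/so columns mean the system is actively swapping.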


      Looking at it, the program is very old (2012), so it may have accumulated bugs through 'bit rot'. I can't see any program settings anywhere, so there doesn't seem to be a way to reduce thumbnail RAM usage.


      There are many ways to do what you want, but the GUI ones will all kill your system the same way PDF-Shuffler does. There are command-line ways, which may seem a bit daunting but are not actually difficult, and they won't bog down the computer because they never load 2000 thumbnails. A couple of examples follow below.
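
      For instance, both pdftk and qpdf (package names as found in the Ubuntu repositories) can pull out a page range without rendering anything; the file names and page numbers here are placeholders:

      pdftk big.pdf cat 1800-1801 output twopages.pdf
      qpdf big.pdf --pages big.pdf 1800-1801 -- twopages.pdf

      Both read the PDF structure directly, so memory use stays small even on a 2000-page file.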



      I was going to suggest using LibreOffice for the task, but that was before I saw the system specs.



        #4
        I agree, 2 GB of RAM can't cope with it, especially on KDE.
        One possible solution would be to split the PDF into parts and load only the one(s) most likely to contain what you want.
        Several utilities will do that; pdfseparate from poppler-utils is one (plain GNU split won't help here, as it cuts the file bytewise and produces broken PDFs). Try... pdfseparate --help
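
        A sketch, assuming the file is named big.pdf (the names are placeholders; pdfseparate comes with poppler-utils):

        pdfseparate -f 1 -l 500 big.pdf part-%d.pdf

        The %d in the output name is replaced by the page number, so this writes part-1.pdf through part-500.pdf, one page per file.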



          #5
          I have a 10-year-old desktop (assembled in 2009 from 2008 parts), and I remember extracting many JPEG / TIFF images from large PDF files, though they were large in file size rather than in page count.

          My desktop CPU is an Intel® Core™2 Duo E7300 @ 2.66 GHz (two cores).

          The difference from yours is memory: 3.8 GiB of RAM, that is, 4 GB minus 255 MB reserved for the integrated GPU.

          Sometimes it takes a long time to extract 200+ TIFFs from a PDF, with high CPU usage.

          Most of the time I have used CLI commands, which need fewer machine resources, but unfortunately I cannot remember exactly which commands just now.

          That was around 2014-2015, and I installed many packages to try:

          pdfchain
          pdfcrack
          pdfmod
          pdfshuffler
          pdftk
          poppler-utils
          qpdf

          Googling "linux pdf extract pages" now, I found many other nice tools. Here is just one example:

          pdfjam <input file> <page ranges> -o <output file>

          where <page ranges> could be:

          3,67-70,80

          to extract page 3, pages 67 to 70 and page 80, and put these pages into a single document.
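
          A concrete run, with big.pdf and extract.pdf as placeholder names:

          pdfjam big.pdf 3,67-70,80 -o extract.pdf

          Note that pdfjam rebuilds the pages through LaTeX, so on a low-RAM machine qpdf or pdftk may be the lighter choice for a simple extraction.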

          This example (above) is from a 2014 answer:
          https://tex.stackexchange.com/questi...rom-a-document

          but pdfjam itself is still current:

          https://linux.die.net/man/1/pdfjam

          https://warwick.ac.uk/fac/sci/statis...oftware/pdfjam

          I hope these ideas help you.




              #7
              Hello Everyone,

              Many thanks for the help and explanations; it all makes perfect sense!
              I will see if I can scrounge up some more memory.
              On the other hand, I could do this task on WXP without a hitch...

              Thanks again, Peter

