
More CPUs doesn't equal more speed

Ahh, two really excellent ideas! I'm reading about GNU Parallel right now.
And since I know how to use make, I really should have thought of -j as
well. Thanks for the ideas.

On Fri, May 24, 2019 at 12:02 AM Christian Gollwitzer <auriocus at> wrote:

> Am 23.05.19 um 23:44 schrieb Paul Rubin:
> > Bob van der Poel <bob at> writes:
> >> for i in range(0, len(filelist), CPU_COUNT):
> >>      for z in range(i, i+CPU_COUNT):
> >>          doit( filelist[z])
> >
> > Write your program to just process one file, then use GNU Parallel
> > to run the program on your 1200 files, 6 at a time.
> >
> This is a very sensible suggestion. GNU parallel on a list of files is
> relatively easy, for instance I use it to resize many images in parallel
> like this:
>         parallel convert {} -resize 1600 small_{} ::: *.JPG
> The {} is replaced by each file in turn.
> Another way with an external tool is a Makefile. GNU make can run in
> parallel by setting the flag "-j", so "make -j6" will run 6 processes in
> parallel. It is more work to set up the Makefile, but it might pay off
> if you have a dependency graph or if the process is interrupted.
> "make" can figure out which files need to be processed and therefore
> continue a stopped job.
> Maybe rewriting all of this from scratch in Python is not worth it.
>         Christian
> --
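For the Makefile approach Christian describes, a sketch of what it could look like for his resize example (the convert command and *.JPG pattern come from his parallel one-liner; the variable names and layout here are my assumptions):

```make
# Sketch: run with "make -j6" to resize up to 6 images at once.
# Only missing or out-of-date small_*.JPG files are rebuilt, so an
# interrupted run can simply be restarted and will continue.
SRC := $(wildcard *.JPG)
OUT := $(SRC:%=small_%)

all: $(OUT)

small_%.JPG: %.JPG
	convert $< -resize 1600 $@
```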


**** Listen to my FREE CD at ****
Bob van der Poel ** Wynndel, British Columbia, CANADA **
EMAIL: bob at