
How can we GZip every file separately?

I don't want to have all of the files in a big tar.

8 Answers


You can use:

gzip *

Note:

  • This will gzip each file individually and delete the original.
  • Use -k (--keep) option to keep the original files.
  • This may not work if you have a huge number of files, due to the shell's argument-list limit.
  • To run gzip in parallel see @MarkSetchell's answer below.
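The -k behavior described in the notes above can be sketched in a scratch directory (paths and file names here are invented for the demo):

```shell
# Scratch directory so nothing real is touched
rm -rf /tmp/gzip-demo && mkdir -p /tmp/gzip-demo && cd /tmp/gzip-demo
printf 'hello\n' > a.txt
printf 'world\n' > b.txt

gzip a.txt      # without -k: a.txt is replaced by a.txt.gz
gzip -k b.txt   # with -k: b.txt is kept alongside b.txt.gz
ls
```

After this runs, a.txt is gone and a.txt.gz remains, while both b.txt and b.txt.gz exist.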

1 Comment

If I want to run it under a crontab, which command line should I use? For example /tmp/app/gzip *?

Easy and very fast answer that will use all your CPU cores in parallel:

parallel gzip ::: *

GNU Parallel is a fantastic tool that should be used far more in this world where CPUs are only getting more cores rather than more speed. There are loads of examples that we would all do well to take 10 minutes to read.

3 Comments

Knew about parallel, but keep forgetting to use it! Ran the accepted answer then scrolled down to see your answer... saved lots of hours! Maybe a good idea to make your comment an edit to your answer?
Is there a way to use parallel if the argument list is too long? Have about 60k files that I need individually compressed.
@Californian Sure: find . -name XYZ -print0 | parallel -0 gzip

After seven years, this highly upvoted comment still doesn't have its own full-fledged answer, so I'm promoting it now:

gzip -r .

This has two advantages over the currently accepted answer: it works recursively if there are any subdirectories, and it won't fail with "Argument list too long" if the number of files is very large.
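A minimal sketch of the recursive form (scratch paths invented for illustration), with -k added to keep the originals:

```shell
# Build a small tree in a scratch directory
rm -rf /tmp/gzip-r-demo && mkdir -p /tmp/gzip-r-demo/sub
printf 'top\n'    > /tmp/gzip-r-demo/top.txt
printf 'nested\n' > /tmp/gzip-r-demo/sub/nested.txt

# -r descends into subdirectories; -k keeps the originals
gzip -rk /tmp/gzip-r-demo
```

Both top.txt.gz and sub/nested.txt.gz are created, and the original files remain.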

2 Comments

Does this keep or delete the files? Do you still need to add the -k option mentioned in the other answer?
Add -k if you want to keep the original files.

If you want to gzip every file recursively, you could use find piped to xargs:

$ find . -type f -print0 | xargs -0r gzip
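A quick sketch (scratch paths invented) of why the -print0/-0 pairing matters, using a file name that contains a space:

```shell
rm -rf /tmp/gzip-find-demo && mkdir -p /tmp/gzip-find-demo/logs
printf 'a\n' > '/tmp/gzip-find-demo/with space.txt'
printf 'b\n' > /tmp/gzip-find-demo/logs/app.txt

# NUL-separated names survive spaces; -r avoids running gzip with no arguments
find /tmp/gzip-find-demo -type f -print0 | xargs -0r gzip
```

Both files, including the one with a space in its name, end up compressed; with plain find | xargs the space-containing name would be split into two bogus arguments.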

4 Comments

No need for find+xargs. Gzip can handle recursion itself: gzip -9r .
As always, find . -type f -print0 | xargs -0r gzip is better.
For the equivalent of gzip *, you may also need -maxdepth 1 in find.
@musiphil: good point about protecting for spaces in file names! I just edited the answer to integrate your comment (waiting for peer review).

Try a loop:

$ for file in *; do gzip "$file"; done

Comments


Or, if you have pigz (a gzip utility that parallelizes compression across multiple processors and cores):

pigz *

Comments


The following command can be run multiple times inside a directory (without "already has .gz suffix" warnings) to gzip whatever is not already gzipped:

find . -maxdepth 1 -type f ! -name '*.gz' -exec gzip "{}" \;

A more useful application of find is gzipping rolling logs: e.g., every day or every month you want to gzip the rolled logs but not the current ones.

# Considering that current logs end in .log and 
# rolled logs end in .log.[yyyy-mm-dd] or .log.[number]
find . -maxdepth 1 -type f ! -name '*.gz' ! -name '*.log' -exec gzip "{}" \;
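A sketch of that rolling-log filter on invented file names, showing that only rolled logs get compressed:

```shell
rm -rf /tmp/logrotate-demo && mkdir -p /tmp/logrotate-demo && cd /tmp/logrotate-demo
printf 'current\n' > app.log             # current log: excluded by ! -name '*.log'
printf 'old\n'     > app.log.1           # rolled log: gets gzipped
printf 'older\n'   > app.log.2024-01-31  # rolled log: gets gzipped
printf 'x\n' | gzip > done.log.3.gz      # already compressed: excluded by ! -name '*.gz'

find . -maxdepth 1 -type f ! -name '*.gz' ! -name '*.log' -exec gzip "{}" \;
```

Afterwards app.log is untouched, app.log.1 and app.log.2024-01-31 have been replaced by .gz files, and done.log.3.gz was skipped.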

Comments


FYI, this overwrites any existing .gz files and creates new ones where they are missing:

find . -type f | grep "in case any specific" | grep -v '\.gz$' | xargs -n1 -P8 sh -c 'gzip --force --best "$0"'

Comments
