I am using grep with the -f option, and I run into an issue when the pattern file given with -f is very large. The file that is failing for me has almost 300 thousand lines and is about 40 MB. grep does not refuse to run, but it ends with Killed. Is there a specific maximum size for the pattern file?
Comment: Have you tried the -F switch? Otherwise, is something written to syslog (memory exhausted maybe)?

OP: -F does not help in my case. It could, however, fix some borderline cases. Syslog says:

Out of memory: Killed process 2965427 (grep) total-vm:1897916kB, anon-rss:1879488kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:3736kB oom_score_adj:0

That would suggest that the limit depends on available memory, although 40 MB for the original file does not seem like much (the log shows grep using roughly 1.8 GB of resident memory for it).

Comment: You could try running it with sudo (if you have sufficient privileges) to see how it behaves. I don't really understand your use case from the description given, but 300k patterns seems like a lot (and it is obviously causing you a problem). What if you break it down into multiple files? You would have to process the input multiple times, though.

OP: sudo does not change the behaviour; grep gets killed just the same. I hit this issue by chance. I have a cronjob that makes a list of new files that have appeared in a datastore and then removes some based on a previously stored list. Normally it works, but today I started the script with an empty history, so the list of files to remove grew exceptionally big. I can avoid that condition, and I could certainly split the pattern file. Frankly, I was just curious how large the chunks could be to be safe :-)
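Since splitting the pattern file came up as the practical workaround, here is a minimal sketch of that approach, assuming the patterns are fixed strings (so -F applies). The file names (patterns.txt, input.txt), the chunk size, and the output handling are made up for illustration, not taken from the original script.

```
# Split the big pattern file into chunks small enough to fit in memory;
# 20000 lines per chunk is an arbitrary example value.
split -l 20000 patterns.txt pattern-chunk.

# Run grep once per chunk. A line of input may match patterns from more
# than one chunk, so deduplicate the combined output.
for chunk in pattern-chunk.*; do
    grep -F -f "$chunk" input.txt
done | sort -u > matches.txt

rm -f pattern-chunk.*
```

As noted in the comments, every chunk re-reads the whole input, so for large inputs it is usually better to tune the chunk size to what your RAM allows rather than making the chunks very small.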