
We have a requirement to screen user-uploaded content. However, I've noticed that most of our user-uploaded content actually originated from our own system: for example, someone downloads a PDF from our document library, renames it to suit their needs, and re-uploads it into their "custom content" section, which can be shared with other users.

I'd like to mark these files as trusted without someone having to actually look at them, and I thought I could do this using file size and some kind of checksum, e.g.:

  • for a given new file
    • find all files in our resource library folder with the same file extension and the same file size
    • for each of those candidates, do some kind of checksum comparison
    • if we find a match, declare the new file trusted (rough sketch below)
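
Here's a rough sketch of what I mean in Python (untested, and the library path is invented). It hashes lazily, so the expensive part only runs once a candidate already matches on extension and size:

    import hashlib
    from pathlib import Path

    RESOURCE_LIBRARY = Path("/srv/resource-library")  # hypothetical location

    def sha256_of(path, chunk_size=1 << 20):
        # Stream the file through SHA-256 so big files don't exhaust memory.
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def is_trusted(new_file):
        # True if new_file is byte-identical to any library file with the
        # same extension and the same size.
        new_file = Path(new_file)
        size = new_file.stat().st_size
        new_hash = None  # computed lazily, only if a size match turns up
        for candidate in RESOURCE_LIBRARY.rglob("*" + new_file.suffix):
            if not candidate.is_file() or candidate.stat().st_size != size:
                continue
            if new_hash is None:
                new_hash = sha256_of(new_file)
            if sha256_of(candidate) == new_hash:
                return True
        return False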

Now, our resource library directory is 132 GB - quite large. So, any solution that involves looking at every file in there (even every file with the same extension) is going to be quite slow.

It seems like the sensible thing to do is keep some kind of database (not necessarily a literal DBMS) of file checksums, updated automatically when the library contents change, or perhaps rebuilt by a scheduled job once a day. Then, for any given new file, I can compute its checksum and look it up in the database.
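
To show what I'm picturing, here's a rough sketch of the daily rebuild job in Python with SQLite (untested; both paths are invented):

    import hashlib
    import sqlite3
    from pathlib import Path

    RESOURCE_LIBRARY = Path("/srv/resource-library")  # hypothetical
    INDEX_DB = "/var/lib/uploads/checksums.db"        # hypothetical

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def rebuild_index():
        # Run once a day from cron; only re-hashes files whose size or
        # mtime changed since the last run.
        con = sqlite3.connect(INDEX_DB)
        con.execute("CREATE TABLE IF NOT EXISTS files ("
                    "path TEXT PRIMARY KEY, size INTEGER, mtime REAL, "
                    "checksum TEXT)")
        con.execute("CREATE INDEX IF NOT EXISTS idx_checksum "
                    "ON files (checksum)")
        for path in RESOURCE_LIBRARY.rglob("*"):
            if not path.is_file():
                continue
            st = path.stat()
            row = con.execute("SELECT size, mtime FROM files WHERE path = ?",
                              (str(path),)).fetchone()
            if row and row[0] == st.st_size and row[1] == st.st_mtime:
                continue  # unchanged; skip the expensive hash
            # (files deleted from the library linger in the index; good
            # enough for a trust check, since matching one is harmless)
            con.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?)",
                        (str(path), st.st_size, st.st_mtime, sha256_of(path)))
        con.commit()
        con.close()

With that in place, screening an upload becomes a single indexed lookup instead of a walk over 132 GB.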

This feels like it must be a solved problem. Does anyone have any ideas?

thanks, Max

2 Answers


You could look at file integrity monitoring (FIM) software.

These tools are primarily designed to detect the introduction of rootkits into filesystems, but at their core they keep a database of file metadata (checksums, hashes) and monitor a set of directories for files that have been changed or added, which is exactly what you want.

The oldest one I've heard of is Tripwire, but an open-source alternative called AIDE was later created. A more recent one is OSSEC, recommended in https://serverfault.com/questions/141800/recommend-alternative-to-tripwire.


This may be a solved problem, but it's too specific for any standard tool to exist in the Unix/Linux world. Your question contains a large part of the answer: you need a database, or more precisely, an index of checksums, plus a component that adds, updates, and checks new files against that index. I think you will have to implement it yourself, and the natural place for that implementation is the upload mechanism (e.g. a web page).
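
For illustration only, and assuming an index like the one sketched in the question (a SQLite files table with an indexed checksum column), the check in the upload handler boils down to one query:

    import sqlite3

    INDEX_DB = "/var/lib/uploads/checksums.db"  # the hypothetical index above

    def is_known_library_file(checksum):
        # The upload handler hashes the incoming file, then asks the index
        # whether any library file has the same digest.
        con = sqlite3.connect(INDEX_DB)
        try:
            row = con.execute("SELECT 1 FROM files WHERE checksum = ? "
                              "LIMIT 1", (checksum,)).fetchone()
            return row is not None
        finally:
            con.close()

Whether a match then auto-trusts the upload or merely skips manual review is policy layered on top of that lookup.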

  • Thanks Tomasz. I thought that someone might have made a tool for "is this file somewhere in this massive nested directory under a different name?" Commented Jul 5, 2018 at 12:29
  • Do you have any suggestions for how to store an index of checksums, btw? Commented Jul 5, 2018 at 12:30
  • @MaxWilliams As you wrote in the question, you need a database: file -> checksum. Build an index on the checksums and that's it. You also mentioned size and extension, but I'm not sure those are necessary or helpful. A checksum is enough to represent a file, and all you need is a clever way of making use of it. Commented Jul 5, 2018 at 12:35
