
I am currently working on the modernization of a fairly large application that analyzes large amounts of text data. The application is made of executables launching other executables, and so forth. Some of these executables can be run in parallel.

The results of the different steps of the analysis are written to independent files, which are merged into a coherent set of data at the end of the whole process; no hassle there.

Now, the application can be launched from a web interface that needs to display the logs in real time. And here comes the problem: each process produces its own log file, and I am asked to generate a single file that synchronously collects all these logs on the fly.

What I need is a logging server.

The first option is to find an open-source logger that already has such a feature, but I could not find any.
The other option is to use a lightweight SQL DBMS (SQLite, for instance) and build such a logging server on top of it.

Would any of you have some hints or advice on this matter?

  • You probably need to change the way you log in the first place. I never call log methods from my code directly, but make abstract base classes with report methods (with arguments that need logging, if needed). Then I inject those into the code that needs logging. Your first step could be to create an implementation (a reporter that implements your reporting interface) that then calls your original log functions. After that you can "easily" create implementations that would, for instance, all log to the same file or even an Elasticsearch server. Commented Nov 13 at 6:59

1 Answer


Example of what I mean, this kind of refactoring will decouple your actual logging infrastructure from your business logic completely:



// Current code.

void SomeFunction(int x)
{
    LOG("SomeFunction entered : Value " + std::to_string(x));
}

// Refactored code.

struct ReportingItf
{
    virtual ~ReportingItf() = default;
    virtual void ReportSomeFunctionEntry(int x) = 0;
};

struct LegacyReporting : public ReportingItf
{
    void ReportSomeFunctionEntry(int x) override
    {
        LOG("SomeFunction entered : Value " + std::to_string(x));
    }
};

// Meyers singleton if you can't do dependency injection of ReportingItf into your code
ReportingItf& GetReporting()
{
    static LegacyReporting instance;
    return instance;
}

void SomeFunction(int x)
{
    GetReporting().ReportSomeFunctionEntry(x);
}

// Now later you can create a new class that implements ReportingItf too
// but logs everything to one location
// Or even log to the same location you do now AND to a global file/server

struct NewReporting : public ReportingItf
{
    // NewReporting(const std::string& shardLogFileName)
    // NewReporting(const ServerConnection& serverConnection) or whatever you need

    void ReportSomeFunctionEntry(int x) override
    {
        // New reporting mechanism
        NEW_LOGGER->LogEvent("SomeFunction entered", {{"Value", x}});
    }
};

// And then you can inject that new reporting mechanism where needed
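As a sketch of that fan-out idea, here is what a composite reporter might look like, so one report call goes both to the legacy per-process log and to a single shared file the web interface can tail (the `CompositeReporting` and `FileReporting` names below are my own illustration, not part of any library; the interface is repeated to keep the snippet self-contained):

```cpp
#include <fstream>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Same interface as above, repeated here for self-containment.
struct ReportingItf
{
    virtual ~ReportingItf() = default;
    virtual void ReportSomeFunctionEntry(int x) = 0;
};

// Fans one report call out to several reporters, e.g. the legacy
// per-process log plus a single shared file.
struct CompositeReporting : public ReportingItf
{
    void Add(std::unique_ptr<ReportingItf> reporter)
    {
        reporters_.push_back(std::move(reporter));
    }

    void ReportSomeFunctionEntry(int x) override
    {
        for (auto& r : reporters_)
            r->ReportSomeFunctionEntry(x);
    }

private:
    std::vector<std::unique_ptr<ReportingItf>> reporters_;
};

// Hypothetical reporter appending to one shared log file.
struct FileReporting : public ReportingItf
{
    explicit FileReporting(const std::string& path)
        : out_(path, std::ios::app) {}

    void ReportSomeFunctionEntry(int x) override
    {
        out_ << "SomeFunction entered : Value " << x << '\n';
        out_.flush();  // flush so the web interface sees the line immediately
    }

private:
    std::ofstream out_;
};
```

Note that a plain shared file written by multiple processes still needs interleaving guarantees (e.g. line-buffered atomic appends, or a collector process); this sketch only shows the fan-out on the interface level.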

4 Comments

Thanks for your answer, but I realize my question was not explicit enough. In fact, when I said I was doing some modernization, I already did more or less what you propose here. I changed the old logging system to decouple the processing from the logging. I now have a singleton which is fed by classical static functions (error, info, debug, etc.) called from the code. This singleton then sends all these traces to a list of sinks or appenders.
What interests me now is your last words: "create implementations that would for instance all log to the same file or even an Elasticsearch server". That is what I am looking for: the server. Would you have some suggestions on that part?
No, I don't have any specific recommendations... maybe Fluentd. But check your requirements and what licences you're allowed to use.
It's ok, communication isn't always easy :) What I don't really like about generic "log" methods with error/info/debug is that they assume your target logging system even has those concepts (and sometimes logging systems have things like categories, which you will then not be able to add). So that's why I tend to work with those pure abstract report methods: I can then always add the logging-specific data in the implementation of the report call.
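To make that last point concrete, here is a minimal sketch of attaching a category inside the report implementation rather than in the business code (the `CategoryLogger`, its `Log` method and the "Parsing" category are all hypothetical stand-ins for whatever your target system offers):

```cpp
#include <iostream>
#include <string>

// Hypothetical target logger that supports categories, a concept a
// generic error/info/debug interface could not express.
struct CategoryLogger
{
    std::string last;  // last formatted line, kept only for illustration

    void Log(const std::string& category, const std::string& message)
    {
        last = '[' + category + "] " + message;
        std::cout << last << '\n';
    }
};

struct ReportingItf
{
    virtual ~ReportingItf() = default;
    virtual void ReportParseFailure(const std::string& file) = 0;
};

struct CategorizedReporting : public ReportingItf
{
    CategoryLogger logger;

    void ReportParseFailure(const std::string& file) override
    {
        // The category is chosen here, in the implementation,
        // not hard-coded into the business logic.
        logger.Log("Parsing", "Could not parse " + file);
    }
};
```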
