
I have logs in the following format:

2021-10-12 14:41:23,903716 [{"Name":"A","Dimen":[{"Name":"in","Value":"348"},{"Name":"ses","Value":"asfju"}]},{"Name":"read","A":[{"Name":"ins","Value":"348"},{"Name":"ses","Value":"asf5u"}]}]
2021-10-12 14:41:23,903716 [{"Name":"B","Dimen":[{"Name":"in","Value":"348"},{"Name":"ses","Value":"a7hju"}]},{"Name":"read","B":[{"Name":"ins","Value":"348"},{"Name":"ses","Value":"ashju"}]}]

Each log is on a new line. The problem is that I want each object in the top-level array on a single line to become a separate document and be parsed accordingly.

I need to parse this and send it to Elasticsearch. I have tried a number of filters (grok, JSON, split, etc.) and I cannot get it to work the way I need. I have little experience with these filters, so any help would be much appreciated.

The JSON codec is what I would need, if only I could remove the text/timestamp prefix from each line:

"If the data being sent is a JSON array at its root multiple events will be created (one per element)"

If there is a way to do that, it would also be helpful.
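
For reference, a minimal sketch of that alternative, assuming the timestamp prefix could be stripped before the data reaches Logstash (the sed command and file names here are only hypothetical):

# hypothetical pre-processing outside Logstash: strip everything up to the
# first '[' so only the JSON array is piped in, e.g.
#   sed 's/^[^[]*//' app.log | bin/logstash -f json_codec.conf
input {
  stdin {
    # a JSON array at the root becomes one event per element
    codec => json
  }
}

output {
  stdout { codec => rubydebug }
}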

1 Answer


This is a config example for your use case:

input { stdin {} }

filter {
  # split the line into the timestamp parts and the JSON payload
  grok {
    match => { "message" => "%{DATA:date},%{DATA:some_field} %{GREEDYDATA:json_message}" }
  }

  # use the json filter to parse the raw string into a structured field
  json {
    source => "json_message"
    target => "json"
  }

  # split the resulting array into one event per element
  split {
    field => "json"
  }
}

output {
  stdout {
    codec => rubydebug 
  }
}
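
Since the end goal is to index the events into Elasticsearch, the stdout output can be swapped for an elasticsearch output once the result looks right. This is only a sketch; the host and index name are placeholders:

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]   # placeholder Elasticsearch host
    index => "parsed-logs"               # placeholder index name
  }
}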

If you also need to parse the start of the log line as a date, you can either match it with a date pattern directly in grok, or join the two captured fields and use the result as the source for the date filter.
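
For example, a sketch of that second approach, joining the two fields that grok captured above (the field names follow the grok pattern; the format string is an assumption based on the sample lines, and Joda-style parsing keeps only millisecond precision):

filter {
  # rebuild the full timestamp from the two captured fields
  mutate {
    add_field => { "log_timestamp" => "%{date},%{some_field}" }
  }
  # parse it into @timestamp (adjust the pattern if your format differs)
  date {
    match  => [ "log_timestamp", "yyyy-MM-dd HH:mm:ss,SSSSSS" ]
    target => "@timestamp"
  }
}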
