awk seems purpose-built for tasks like that.

cat input_file | awk '{print int($1/1000000 + 0.5), $2}' > temp_file
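
For example, with hypothetical input where the first field is a timestamp in microseconds and the second is a packet size in bytes, lines like

999499731 60
1000801331 1514

come out as

999 60
1001 1514

(adding 0.5 before int() rounds to the nearest second instead of truncating).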

Now temp_file holds the packet sizes with timestamps rounded to whole seconds, so many consecutive lines share the same second value.

To coalesce the values, you can pipe them through this invocation:

awk 'BEGIN {tstamp = 0; bytes = 0}
     {if ($1 == tstamp) {bytes += $2}
      else {if (tstamp != 0) {print tstamp, bytes};
            tstamp = $1; bytes = $2}}
     END {if (tstamp != 0) {print tstamp, bytes}}'

The END block flushes the totals for the last second; without it, the final group would never be printed.
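
If the temporary file and the second pass feel heavy, the coalescing can also be done in a single invocation with an associative array. This is just a sketch; the trailing sort -n is needed because awk's for (t in sum) iteration order is unspecified:

awk '{sum[int($1/1000000 + 0.5)] += $2} END {for (t in sum) print t, sum[t]}' input_file | sort -n > output_file

The trade-off is that the array keeps every distinct second in memory, so for very long captures the streaming two-step version above may be preferable.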

I'd put these two awk scripts in files and use something like

cat input_file | awk -f divide.awk | awk -f coalesce.awk > output_file 
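
Alternatively, the two steps collapse naturally into one self-contained script. A minimal sketch (the file name combine.awk and the awk path in the shebang are assumptions; adjust for your system):

#!/usr/bin/awk -f
# Round the microsecond timestamp to the nearest second,
# then sum packet sizes over each run of equal seconds.
{
    sec = int($1 / 1000000 + 0.5)
    if (NR == 1)            { tstamp = sec; bytes = $2 }  # first line: open a group
    else if (sec == tstamp) { bytes += $2 }               # same second: accumulate
    else { print tstamp, bytes; tstamp = sec; bytes = $2 }
}
END { if (NR > 0) print tstamp, bytes }                   # flush the final group

Run it with chmod +x combine.awk && ./combine.awk input_file > output_file. Guarding on NR instead of tstamp != 0 also sidesteps the corner case where the first timestamp rounds to second 0.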
