awk seems purpose-built for tasks like that.
cat input_file | awk '{print int($1/1000000 + 0.5), $2}' > temp_file
Now temp_file holds the same per-packet amounts, but with each timestamp rounded to the nearest whole second, so many lines share the same second value.
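Assuming the first field is a timestamp in microseconds (which is what the divide-by-1,000,000 suggests) and the second field is a byte count, a few made-up input lines like

1000000400123456 60
1000000400654321 1500
1000000401200000 40

would come out of that first script as

1000000400 60
1000000401 1500
1000000401 40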
To coalesce the values, you can pipe them through this invocation:
awk 'BEGIN {tstamp = 0; bytes = 0}
# while the second repeats, keep adding; when it changes, print the finished total
{if ($1 == tstamp) {bytes += $2}
 else {if (tstamp != 0) {print tstamp, bytes};
 tstamp = $1; bytes = $2}}
# flush the totals for the final second, which the main rule never prints on its own
END {if (tstamp != 0) {print tstamp, bytes}}'
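Fed the made-up temp_file from above, that prints

1000000400 60
1000000401 1540

The END block matters here: the main rule only prints a second's total once the next second shows up, so the last second has to be flushed explicitly at the end.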
I'd put these two awk scripts in files (say, divide.awk and coalesce.awk) and use something like
cat input_file | awk -f divide.awk | awk -f coalesce.awk > output_file
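If you'd rather run a single awk process, the two steps fold together easily. This is just a sketch along the same lines (the combine.awk name and the field meanings are my assumptions from above):

# combine.awk: round each timestamp to the nearest second and sum field 2 per second
BEGIN {tstamp = 0; bytes = 0}
{
  sec = int($1/1000000 + 0.5)              # same rounding as divide.awk
  if (sec == tstamp) {bytes += $2}         # still in the same second: accumulate
  else {if (tstamp != 0) {print tstamp, bytes};
        tstamp = sec; bytes = $2}          # start a new second
}
END {if (tstamp != 0) {print tstamp, bytes}}   # flush the last second

cat input_file | awk -f combine.awk > output_file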