I've seen a few ways to read a formatted binary file into Pandas. In particular, I'm using the code below, which reads the file with NumPy's fromfile using a layout described by a dtype.
import numpy as np
import pandas as pd

input_file_name = 'test.hst'
input_file = open(input_file_name, 'rb')

# Fixed-size header: 4 + 64 + 12 + 4*4 = 96 bytes
header = input_file.read(96)
dt_header = np.dtype([('version', 'i4'),
                      ('copyright', 'S64'),
                      ('symbol', 'S12'),
                      ('period', 'i4'),
                      ('digits', 'i4'),
                      ('timesign', 'i4'),
                      ('last_sync', 'i4')])
# np.fromstring is deprecated for binary data; np.frombuffer is the replacement
header = np.frombuffer(header, dt_header)

# One fixed-size record per row
dt_records = np.dtype([('ctm', 'i4'),
                       ('open', 'f8'),
                       ('low', 'f8'),
                       ('high', 'f8'),
                       ('close', 'f8'),
                       ('volume', 'f8')])
records = np.fromfile(input_file, dt_records)
input_file.close()

df_records = pd.DataFrame(records)
# Now, do some changes in the individual values of df_records
# and then write it back to a binary file
Now, my issue is how to write this back to a new file. I can't find any function in NumPy (nor in Pandas) that lets me specify exactly how many bytes to use for each field when writing.
You can use the records.tofile method. It writes the data in the same binary format as it is stored in memory.
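A minimal sketch of that round trip, assuming the dt_header and dt_records dtypes above describe the target file exactly; the output name 'new.hst' and the sample value edit are made up for illustration:

# Example edit of a single value in the DataFrame (hypothetical)
df_records.loc[0, 'open'] = 1.2345

# Convert the modified DataFrame back to a structured array with exactly
# the field widths of dt_records, then dump header and records with tofile.
out_records = df_records.to_records(index=False).astype(dt_records)

with open('new.hst', 'wb') as output_file:   # 'new.hst' is just an example name
    header.tofile(output_file)        # 96-byte header, written back unchanged
    out_records.tofile(output_file)   # one fixed-size record per DataFrame row

The astype(dt_records) step matters: Pandas may have promoted columns (for example, int32 to int64) while you edited them, and casting back to the original dtype guarantees each field is written with the same byte width that was read. tofile writes the raw bytes exactly as laid out by the dtype, in native byte order with no separators, so the new file has the same layout as the original.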