I have a file (PSF from Zemax if you want to know) that looks like this:
Listing of FFT PSF Data
File : C:\G_Drive\Projects\MSE\Telescope\AAO_designs\MSE_PF_6u_1300-Shan-Nicolas_2.zmx
Title: MSE Prime Focus WFC with CLADC
Date : 2/9/2018
Configuration 1 of 4
FFT PSF
0.5510 µm at 0.5300, 0.0000 (deg).
Data spacing is 0.300 µm.
Data area is 153.600 µm wide.
Surface: Image (Focal surface)
Reference Coordinates: 2.02066E+02, 0.00000E+00
Pupil grid size: 256 by 256
Image grid size: 512 by 512
Center point is: row 257, column 256
Values are normalized to peak = 1.0
1.7638E-02 1.7079E-02 1.6531E-02 1.5996E-02 1.5475E-02 ...
So, it has a header with text and what I imagine are characters (the µ in µm) that require some ISO-8859-1 encoding. After the header come 512 lines of 512 floats each, which I want to import into a numpy array.
I started with this:
from astropy.io import ascii
import numpy as np

data = ascii.read(path_in + files[0], data_start=19, encoding='iso-8859-1')
n = np.array(data)
n.shape
but the array does not have the right shape:
(508,)
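Note: np.array on an astropy Table gives a 1-D structured array with one element per row, which is why the result is one-dimensional. Assuming the table itself is read correctly, a rough sketch for stacking its columns into a plain 2-D float array could look like this (the data_start value is carried over from above and may need adjusting, since some rows appear to be missing):

from astropy.io import ascii
import numpy as np

data = ascii.read(path_in + files[0], data_start=19, encoding='iso-8859-1')
# An astropy Table maps to a 1-D structured array under np.array();
# stack its columns explicitly to get an ordinary (rows, cols) float array.
im = np.vstack([np.asarray(data[col], dtype=float) for col in data.colnames]).T
im.shape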
I also tried:
im = np.loadtxt(path_in + files[0], skiprows=19)
but got the following error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb5 in position 211: invalid start byte
And np.loadtxt does not accept a different encoding.
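A possible workaround, assuming an older NumPy without an encoding keyword: np.loadtxt also accepts an open file object (or any iterable of lines), so the decoding can be done beforehand. A minimal sketch:

import numpy as np

# Decode the file ourselves and hand the open file handle to loadtxt;
# skiprows is carried over from the attempt above and may need adjusting.
with open(path_in + files[0], encoding='iso-8859-1') as f:
    im = np.loadtxt(f, skiprows=19)

im.shape  # should be (512, 512) according to the header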
I then tried things like:
arr = np.fromiter(codecs.open(path_in + files[0], encoding='iso-8859-1'), np.float)
but this does not like the header:
ValueError: could not convert string to float: 'Listing of FFT PSF Data\r\n'
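Since np.fromiter expects a flat stream of floats, the header lines and the whitespace-separated columns on each data line have to be handled first. A sketch along those lines (the number of header lines to skip is taken from the attempts above and may need adjusting):

import codecs
import itertools
import numpy as np

with codecs.open(path_in + files[0], encoding='iso-8859-1') as f:
    # Skip the header, then split every remaining line into its float tokens.
    data_lines = itertools.islice(f, 19, None)
    values = (float(tok) for line in data_lines for tok in line.split())
    im = np.fromiter(values, dtype=float).reshape(512, 512)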
Finally, I found a similar question here: Reading unicode elements into numpy array, but this:
s = codecs.open(path_in + files[0], encoding='iso-8859-1').read()
im = np.loadtxt(s)
gets me the "IOPub data rate exceeded" error message, even though I bumped the rate a lot.
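np.loadtxt interprets a plain string argument as a filename, not as the file's contents, which is probably why this blows up. Wrapping the decoded text in an in-memory file object might work instead; a sketch, with skiprows carried over from above:

import codecs
import io
import numpy as np

s = codecs.open(path_in + files[0], encoding='iso-8859-1').read()
# Wrap the decoded text so loadtxt sees a file-like object of lines,
# not one giant string that it would try to open as a path.
im = np.loadtxt(io.StringIO(s), skiprows=19)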
Comment: The loadtxt function has an encoding parameter. Maybe you can upgrade your SciPy package?
Reply: loadtxt with an encoding parameter! Thanks a lot @lenz
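For completeness, with a NumPy recent enough to have that keyword (1.14 or later), the whole read reduces to a single call; a sketch using the same path and header length as above:

import numpy as np

# NumPy >= 1.14: loadtxt can decode the file itself.
im = np.loadtxt(path_in + files[0], skiprows=19, encoding='iso-8859-1')
im.shape  # (512, 512) expected from the header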