Writing Fortran unformatted files with Python - python

I have some single-precision little-endian unformatted data files written by Fortran 77. I am reading these files in Python with the following commands:
import numpy as np
original_data = np.dtype('float32')
f = open(file_name,'rb')
original_data = np.fromfile(f,dtype='float32',count=-1)
f.close()
After some data manipulation in Python, I am trying to write the data back in the original format with the following commands:
import struct
out_file = open(output_file, "wb")
s = struct.pack('f'*len(manipulated_data), *manipulated_data)
out_file.write(s)
out_file.close()
But it doesn't seem to work. Any ideas on the right way to write the data back in the original Fortran unformatted format from Python?
Details of the problem:
I am able to read the final file with manipulated data from Fortran. However, I want to visualize these data using software (Paraview). For this I convert the unformatted data files to the .h5 format. I am able to convert both the original and the manipulated data to .h5 format using the h5 utilities. But while Paraview is able to read the .h5 files created from the original data, it is not able to read the .h5 files created from the manipulated data. I am guessing something is being lost in translation.
This is how I am opening the file written by Python in Fortran (single precision data):
open (in_file_id,FILE=in_file,form='unformatted',access='direct',recl=4*n*n*n)
And this is how I am writing the original unformatted data from Fortran:
open(out_file_id,FILE=out_file,form="unformatted")
Is this information sufficient?

Have you tried using the .tofile method of the manipulated data array? It will write the array in C order but is capable of writing plain binary.
The documentation for .tofile also suggests this is the same as:
with open(outfile, 'wb') as fout:
    fout.write(manipulated_data.tostring())
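For example, a minimal sketch (assuming manipulated_data is a float32 NumPy array and outfile is the target path):
import numpy as np

# Force single-precision, little-endian layout, then dump the raw bytes.
# .tofile writes no record markers, so this matches a direct-access file.
manipulated_data.astype('<f4').tofile(outfile)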

This creates an unformatted sequential-access file:
open(out_file_id,FILE=out_file,form="unformatted")
Assuming you are writing a single array real a(n,n,n) with a simple write(out_file_id) a, you should see a file size of 4*n^3+8 bytes. The extra 8 bytes are a 4-byte integer (= 4*n^3) repeated at the start and end of the record.
The second form:
open (in_file_id,FILE=in_file,form='unformatted',access='direct',recl=4*n*n*n)
opens the file for direct access, which does not have those headers. For writing you would now use write(unit, rec=1) a. If you read your sequential-access file using direct access, it will read without error, but you will get that integer header interpreted as a float (garbage) as the (1,1,1) array value, and everything else is shifted by one. You say you can read the file with Fortran, but have you checked that you are really reading what you expect?
The best fix is to change your original Fortran code to use unformatted, direct access for both reading and writing. This gives you an 'ordinary' raw binary file with no headers.
Alternatively, in your Python you need to first read that 4-byte integer, then your data. On output you can put the integer headers back or not, depending on what your Paraview filter expects.
Here is Python to read/modify/write an unformatted sequential Fortran file containing a single record:
import struct
import numpy as np

f = open('infile', 'rb')
recl = struct.unpack('i', f.read(4))[0]             # leading 4-byte record marker
numval = recl // np.dtype('float32').itemsize       # integer division gives the element count
data = np.fromfile(f, dtype='float32', count=numval)
endrec = struct.unpack('i', f.read(4))[0]           # trailing record marker
if endrec != recl:
    print("error: unexpected end-of-record marker")
f.close()

data = data**2                                      # example data modification

f = open('outfile', 'wb')                           # open the output file in binary write mode
f.write(struct.pack('i', recl))                     # leading record marker
data.tofile(f)
f.write(struct.pack('i', recl))                     # trailing record marker
f.close()
Just loop for multiple records. Note that the data here is read as a vector and assumed to be all floats; of course you need to know the actual data type to make use of it.
Also be aware you may need to deal with byte order issues depending on platform.
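For example, a minimal sketch of pinning the byte order on the NumPy side (assuming raw, headerless data; whether '<' little-endian or '>' big-endian is correct depends on how the file was written):
import numpy as np

# '<f4' = little-endian float32, '>f4' = big-endian float32.
# Specifying the byte order explicitly makes the read independent of the host platform.
data = np.fromfile('infile', dtype='<f4')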

Related

How to read/print the header (first 100 lines) of a netCDF file in Python?

I have been trying to read the header (first 100 lines) of a netCDF file in Python, but have been facing some issues. I am familiar with the read_nc function available in the synoptReg package for R and with the ncread function that comes with MATLAB, as well as the read_csv function available in the pandas library. To my knowledge, however, there isn't anything similar for netCDF (.nc) files.
Noting this, and using answers from this question, I've tried the following (with no success):
with open(filepath, 'r') as f:
    for i in range(100):
        line = next(f).strip()
        print(line)
However, I receive this error, even though I've ensured that tabs have not been mixed with spaces and that the for statement is within the with block (as explained by the top answers to this question):
'utf-8' codec can't decode byte 0xbb in position 411: invalid start byte
I've also tried the following:
with open(filepath, 'r') as f:
    for i in range(100):
        line = [next(f) for i in range(100)]
        print(line)
and
from itertools import islice
with open('/Users/toshiro/Desktop/Projects/CCAR/Data/EDGAR/v6.0_CO2_excl_short-cycle_org_C_2010_TOTALS.0.1x0.1.nc', 'r') as f:
    for i in range(100):
        line = list(islice(f, 100))
        print(line)
But receive the same error as above. Are there any workarounds for this?
You can't. netCDFs are binary files and can't be interpreted as text.
If the files are netCDF3 encoded, you can read them in with scipy.io.netcdf_file. But it's much more likely they are netCDF4, in which case you'll need the netCDF4 package.
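For example, a quick sketch with the netCDF4 package (assuming the file is netCDF4-encoded; printing the Dataset gives the header-style metadata, roughly what ncdump -h shows):
from netCDF4 import Dataset

ds = Dataset("path/to/myfile.nc", "r")
print(ds)            # global attributes, dimensions and variable names
print(ds.variables)  # per-variable details
ds.close()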
On top of this, I'd highly recommend the xarray package for reading and working with netCDF data. It supports a labeled N-dimensional array interface - think pandas indexes on each dimension of a numpy array.
Whether you go with netCDF4 or xarray, netCDF files are self-describing and support arbitrary reads, so you don't need to load the whole file to view the metadata. So, similar to viewing the head of a text file, you can simply do:
import xarray as xr
ds = xr.open_dataset("path/to/myfile.nc")
print(ds) # this will give you a preview of your data
Additionally, xarray does have an xr.Dataset.head function which will display the first 5 (or N, if you provide an int) elements along each dimension:
ds.head() # display a 5x5x...x5 preview of your data
See the getting started guide and the User guide section on reading and writing netCDF files for more info.

Prevent New Line Formation in json_normalize Data Frames

I am working to flatten some tweets into a wide data frame. I simply use the pandas.json_normalize function on my tweet data to perform this.
I then save this data frame to a CSV file. When the CSV is loaded elsewhere, some rows are split across lines and get associated with the row above, rather than holding all the data on a single row. I discovered this issue when uploading the CSV into R and into Domo.
When I run the following command in a Jupyter notebook, the CSV loads fine:
sb_2019 = pd.read_csv('flat_tweets.csv',lineterminator='\n',low_memory=False)
Without the lineterminator I see this error:
Error tokenizing data. C error: Buffer overflow caught - possible malformed input file.
Needs:
I am looking for a post-processing step to eliminate the need for the lineterminator. I need to open the CSV in platforms and languages that do not have this option. How might I go about doing this?
Note:
I am working with over 700k tweets. The json_normalize function works great on small pieces of my data; it is only when I run it on the whole dataset that this issue appears.
Try using '\r\n' or '\r' as lineterminator, and not '\n'.
Opening the file in universal newline mode would also help:
sb_2019 = pd.read_csv(open('flat_tweets.csv','rU'), encoding='utf-8', low_memory=False)
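As a post-processing alternative (a sketch, assuming the stray line breaks come from newline characters embedded in the tweet text fields), you could strip embedded newlines from the string columns before writing the CSV, so no special lineterminator is needed when reading it elsewhere:
import pandas as pd

# flat is the flattened frame produced by pd.json_normalize(...)
# Replace embedded CR/LF characters in string cells with a space
# so every record stays on a single CSV line.
flat = flat.replace(to_replace=r'\r\n|\r|\n', value=' ', regex=True)
flat.to_csv('flat_tweets.csv', index=False)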

How can I save a list/matrix in binary format in KDB?

I am trying to save a matrix to file in binary format in KDB as per below:
matrix: (til 10)*/:til 10;
save matrix;
However, I get the error 'type.
I guess save only works with tables? In which case does anyone know of a workaround?
Finally, I would like to read the matrix from the binary file into Python with NumPy, which I presume is just:
import numpy as np
matrix = np.fromfile('C:/q/w32/matrix', dtype='f')
Is that right?
Note: I'm aware of KDB-Python libraries, but have been unable to install them thus far.
save does work, you just have to reference it by name.
save`matrix
You can also save using
`:matrix set matrix;
`:matrix 1: matrix;
But I don't think you'll be able to read this into Python directly using NumPy, as it is stored in kdb format. It could be read into Python using one of the Python-kdb interfaces (e.g. PyQ) or by storing it in a common format such as CSV.
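For the CSV route, a minimal sketch on the Python side (assuming the matrix has already been exported from q to a plain comma-separated file named matrix.csv):
import numpy as np

# Each q row becomes one NumPy row.
matrix = np.loadtxt('matrix.csv', delimiter=',')
print(matrix.shape)  # (10, 10) for the example above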
Another option is to save in KDB+ IPC format and then read it into Python with qPython as a Pandas DataFrame.
On the KDB+ side you can save it with
matrix:(til 10)*/:til 10;
`:matrix.ipc 1: -8!matrix;
On the Python side you do
from pandas import DataFrame
from qpython.qreader import QReader
with open('matrix.ipc', "rb") as f:
    matrix = DataFrame(QReader(f).read().data)
print(matrix)

python struct.pack and write vs matlab fwrite

I am trying to port this bit of MATLAB code to Python.
matlab
function write_file(im,name)
fp = fopen(name,'wb');
M = size(im);
fwrite(fp,[M(1) M(2) M(3)],'int');
fwrite(fp,im(:),'float');
fclose(fp);
where im is a 3D matrix. As far as I understand, the function first writes a binary file with a header row containing the matrix size. The header is made of 3 integers. Then, im is written as a single column of floats. In MATLAB this takes a few seconds for a 150 MB file.
python
import struct
import numpy as np
def write_image(im, file_name):
    with open(file_name, 'wb') as f:
        l = im.shape[0]*im.shape[1]*im.shape[2]
        header = np.array([im.shape[0], im.shape[1], im.shape[2]])
        header_bin = struct.pack("I"*3, *header)
        f.write(header_bin)
        im_bin = struct.pack("f"*l, *np.reshape(im, (l,1), order='F'))
        f.write(im_bin)
    f.close()
where im is a numpy array. This code works correctly: I compared the output with the binary returned by MATLAB and they are the same. However, for the 150 MB file, it takes several seconds and tends to drain all the memory (in the linked image I stopped the execution to avoid it, but you can see how it builds up!).
This does not make sense to me, as I am running the function on a PC with 15 GB of RAM. How can processing a 150 MB file require so much memory?
I'd be happy to use a different method, as long as it is possible to keep the two formats for the header and the data column.
There is no need to use struct to save your array. numpy.ndarray has a convenience method for saving itself in binary mode: ndarray.tofile. The following should be much more efficient than creating a gigantic string with the same number of elements as your array:
def write_image(im, file_name):
    with open(file_name, 'wb') as f:
        # 3 x 4-byte header, matching MATLAB's fwrite(fp, ..., 'int')
        np.array(im.shape, dtype=np.int32).tofile(f)
        # write the transpose so the C-order output matches MATLAB's column-major layout
        # (assumes im already has dtype float32)
        im.T.tofile(f)
tofile always saves in row-major C order, while MATLAB uses column-major Fortran order. The simplest way to get around this is to save the transpose of the array. In general, ndarray.T should create a view (wrapper object pointing to the same underlying data) instead of a copy, so your memory usage should not increase noticeably from this operation.
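To verify the output, here is a read-back sketch under the same assumptions (a 3 x int32 header followed by float32 data in column-major order; 'image.bin' is just a placeholder file name):
import numpy as np

with open('image.bin', 'rb') as f:
    shape = tuple(np.fromfile(f, dtype=np.int32, count=3))              # the 3 dimensions from the header
    im = np.fromfile(f, dtype=np.float32).reshape(shape, order='F')     # column-major data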

Creating a .mat file of v7.3 in python

I need to perform multiplication involving a 60000x70000 matrix, either in Python or MATLAB. I have 16 GB of RAM and am able to load each row of the matrix easily (which is what I require). I am able to create the matrix as a whole in Python, but not in MATLAB.
Is there any way I can save the array as a v7.3 .mat file using h5py or scipy so that I can load each row separately?
For MATLAB v7.3 you can use hdf5storage, which requires h5py: download the package here, extract it, then run python setup.py install from a command prompt.
https://pypi.python.org/pypi/hdf5storage
import h5py
import hdf5storage
import numpy as np
matfiledata = {} # make a dictionary to store the MAT data in
matfiledata[u'variable1'] = np.zeros(100) # u prefix marks the variable name as unicode; no issues through Python 3.5, and keeping the u prefix is advised despite the docs
matfiledata[u'variable2'] = np.ones(300)
hdf5storage.write(matfiledata, '.', 'example.mat', matlab_compatible=True)
If MATLAB can't load the whole thing at once, I think you'll have to save it in different variables: matfiledata[u'chunk1'], matfiledata[u'chunk2'], matfiledata[u'chunk3'], etc.
Then in MATLAB, if you have saved each chunk as a separate variable, you can load them one at a time:
load(filename,'chunk1')
% do stuff...
clear chunk1
load(filename,'chunk2')
% do stuff...
clear chunk2
% etc.
hdf5storage.savemat has a parameter to allow the file to be read into Python correctly in the future, so it is worth checking out; it follows the scipy.io.loadmat format. You can also do something like this when saving data from MATLAB, to make it easy to import back into Python:
MATLAB
save('example.mat','-v7.3')
Python
matdata = hdf5storage.loadmat('example.mat')
That will load back into Python as a dictionary which you can then convert into whatever datatypes you need.
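Since v7.3 MAT-files are HDF5 containers, another way to load individual rows without reading the whole matrix (a sketch, assuming a 2-D array was saved under the name 'variable1' as above) is to slice the dataset directly with h5py:
import h5py

# Note: MATLAB stores arrays transposed (column-major), so a "row" read here
# corresponds to a column of the original MATLAB matrix.
with h5py.File('example.mat', 'r') as f:
    row = f['variable1'][0]  # reads only this slice from disk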
