I have an MPI-based program written in Fortran which produces a 3D array of complex data at each node (sections of a 2D time-series). I would like to use parallel I/O to write these arrays to a single file which can be relatively easily opened in python for further analysis/visualization. Ideally I would like the solution to be memory efficient (i.e. avoid the creation of intermediate temporary arrays).
Using NetCDF, I have managed to adapt a subroutine which achieves this for a 3D array of real numbers. However, I've hit a stumbling block when it comes to complex arrays.
In the following code I have attempted to extend the subroutine from reals to complex numbers by creating a compound datatype consisting of two reals and assuming that the real and imaginary components of the Fortran complex datatype are stored contiguously in the 1st dimension of the 3D array.
module IO
use NetCDF
use, intrinsic :: iso_fortran_env, only: dp => real64
implicit none
contains
subroutine output_3D(dataname, starts, ends, global_data_dims, &
local_data, MPI_communicator)
character(len=*), intent(in) :: dataname
integer, dimension(3), intent(in) :: starts
integer, dimension(3), intent(in) :: ends
integer, dimension(3), intent(in) :: global_data_dims
complex(dp), intent(in) :: local_data( 1:(ends(1) - starts(1)+ 1), &
1:(ends(2) - starts(2) + 1), &
1:(ends(3) - starts(3) + 1))
integer, dimension(3) :: expanded_starts
integer, intent(in) :: MPI_communicator
integer :: ncid, varid, dimid(3)
integer :: counts(3)
integer :: typeid
expanded_starts(1) = (starts(1))* 2 + 1
expanded_starts(2) = starts(2)
expanded_starts(3) = starts(3)
call check(nf90_create( trim(dataname)//'.cdf', &
IOR(NF90_NETCDF4, NF90_MPIIO), &
ncid, &
comm = MPI_communicator, &
info = MPI_INFO_NULL))
call check(nf90_def_dim(ncid, "x", global_data_dims(1), dimid(1)))
call check(nf90_def_dim(ncid, "y", global_data_dims(2) * 2, dimid(2)))
call check(nf90_def_dim(ncid, "z", global_data_dims(3), dimid(3)))
! define a complex data type consisting of two real(8)
call check(nf90_def_compound(ncid, 16, "COMPLEX", typeid))
call check(nf90_insert_compound(ncid, typeid, "REAL", 0, NF90_DOUBLE))
call check(nf90_insert_compound(ncid, typeid, "IMAG", 8, NF90_DOUBLE))
! define a 3D variable of type "complex"
call check(nf90_def_var(ncid, dataname, typeid, dimid, varid))
! exit define mode
call check(nf90_enddef(ncid))
! Now in NETCDF data mode
! set to use MPI/PnetCDF collective I/O
call check(nf90_var_par_access(ncid, varid, NF90_COLLECTIVE))
counts(1) = (ends(1) - starts(1) + 1) * 2
counts(2) = (ends(2) - starts(2) + 1)
counts(3) = (ends(3) - starts(3) + 1)
call check(nf90_put_var(ncid, &
varid, &
local_data, &
start = expanded_starts,&
count = counts))
! close the file
call check(nf90_close(ncid))
return
end subroutine output_3D
subroutine check(status)
integer, intent(in) :: status
if(status /= nf90_noerr) then
print *, trim(nf90_strerror(status))
stop 2
end if
end subroutine check
end module IO
program test_write
use IO
use MPI
complex(dp) :: data(2,2,3)
integer :: flock
integer :: rank
integer :: ierr
integer :: i, j, k
call MPI_init(ierr)
call MPI_comm_size(MPI_comm_world, flock, ierr)
call MPI_comm_rank(MPI_comm_world, rank, ierr)
do k = 1, 3
do j = 1, 2
do i = 1, 2
data(i,j,k) = cmplx(i, j, 8)
enddo
enddo
enddo
if (rank == 0) then
call output_3D('out', [1,1,1], [2,2,3], [2,2,6], &
data, MPI_comm_world)
else
call output_3D('out', [1,1,4], [2,2,6], [2,2,6], &
data, MPI_comm_world)
endif
call MPI_finalize(ierr)
end program test_write
The above code results in a "There is no specific function for nf90_put_var" error on compilation. This indicates that the function is not happy with the data type of the input array, so clearly there is something I'm missing regarding the usage of compound data types.
EDIT: One simple workaround is to assign the complex array to a real pointer as described in this post. The array can then be reshaped/recast using numpy to arrive at the complex array in python. It's a bit clunky, and somewhat dissatisfying - but is probably good enough for my purposes for now.
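For the Python side of that workaround, something like the following should recover the complex array from the doubled real variable. This is a minimal sketch, assuming the file and variable are both named 'out' as in the test program above, and that the doubled, fastest-varying Fortran dimension ends up as the last axis once netCDF4 reads the data back in C order:
import numpy as np
from netCDF4 import Dataset

nc = Dataset('out.cdf', 'r')
# real(8) data with interleaved real/imag pairs along the last (fastest) axis
raw = np.array(nc.variables['out'][:])
nc.close()

# pair up consecutive float64 values into complex128 values
cplx = np.ascontiguousarray(raw).view(np.complex128)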
This is only a partial answer for reasons you will see below - but it is too long for a comment. Hopefully I will be able to find the missing info and "upgrade" it, but this is what I have so far.
If you look in the NetCDF4 documentation under the "Compound type introduction" at https://www.unidata.ucar.edu/software/netcdf/fortran/docs/f90-user-defined-data-types.html#f90-compound-types-introduction you will see:
To write data in a compound type, first use nf90_def_compound to create the type, multiple calls to nf90_insert_compound to add to the compound type, and then write data with the appropriate nf90_put_var1, nf90_put_vara, nf90_put_vars, or nf90_put_varm call.
Note it doesn't mention nf90_put_var at all, but 4 different functions. This makes some kind of sense: nf90_put_var is presumably nicely overloaded to deal with all the intrinsic types NetCDF supports (and it is utterly crap that it doesn't support complex), so for non-intrinsic types presumably there is some C-like interface with something like a void *, and I'm guessing it is this that the four functions mentioned above implement.
So far so good, you should call one of nf90_put_var1, nf90_put_vara, nf90_put_vars, or nf90_put_varm rather than nf90_put_var. Now the bad news - I can't find any documentation for these 4 functions. The equivalent C functions are at https://www.unidata.ucar.edu/software/netcdf/docs/group__variables.html so you might be able to work out what is required from there, but it's not very good - I'd at the very least put in a bug report to Unidata. That said, for me the lack of intrinsic support for complex is enough to make me look elsewhere for my I/O solution ...
While I am here: you really shouldn't use explicit numbers for kind values; I can show you compilers where complex(8) will fail to compile. Instead use selected_real_kind or similar, or use the constants in the intrinsic module iso_fortran_env, or possibly those in iso_c_binding, together with the fact that the kind of a complex number is the same as the kind of the reals that compose it.
Related
I have a Fortran DLL (FortDll.dll) similar to:
integer function FortAddr(x) bind(c, name="FORTADDR") RESULT(res)
!DEC$ ATTRIBUTES DLLEXPORT::FortAddr
use, intrinsic :: iso_c_binding
integer(kind=c_int), intent(in) :: x
integer :: y
y = 2*x
res = loc(y)
end function
I want to import Fortran data to visualize using python.
And from python, I call it as
import ctypes
from ctypes import CDLL, c_int, byref
flib = CDLL('FortDll.dll')
x = c_int(15)
addrx = flib.FORTADDR(byref(x))
val = ctypes.cast(addrx, ctypes.py_object).value
But Python script stops at ctypes.cast. How can I get the value in Fortran dll?
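A sketch of the usual ctypes pattern for dereferencing a raw address that holds a C int is below. This is not necessarily the fix here (loc(y) returns the address of a local Fortran variable, which is not guaranteed to stay valid after the function returns), but casting the address to ctypes.py_object only works for addresses of actual Python objects, which is why the script dies at that line:
import ctypes
from ctypes import CDLL, POINTER, c_int, byref

flib = CDLL('FortDll.dll')
x = c_int(15)
addrx = flib.FORTADDR(byref(x))
# interpret the returned integer as a pointer to a C int and dereference it
val = ctypes.cast(addrx, POINTER(c_int)).contents.value
print(val)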
Below is the Fortran code that I am running and I want to save the Qr values to a file. This subroutine is called and executed in python.
subroutine thrustTorque(n, Np, Tp, r, precurve, presweep, precone, &
Rhub, Rtip, precurveTip, presweepTip, T, Q)
implicit none
integer, parameter :: dp = kind(0.d0)
! in
integer, intent(in) :: n
real(dp), dimension(n), intent(in) :: Np, Tp, r, precurve, presweep
real(dp), intent(in) :: precone, Rhub, Rtip, precurveTip, presweepTip
! out
real(dp), intent(out) :: T, Q
! local
real(dp) :: ds
real(dp), dimension(n+2) :: rfull, curvefull, sweepfull, Npfull, Tpfull
real(dp), dimension(n+2) :: thrust, torque, x_az, y_az, z_az, cone, s
integer :: i
There is a long list of more variables and their definitions here, which I am skipping.
cone =0.0_dp
z_az = 0.0_dp
! integrate Thrust and Torque (trapezoidal)
thrust = Npfull*cos(cone)
torque = Tpfull*z_az
Now, it is the Qr(i) values here that I want saved to a file.
T = 0.0_dp
do i = 1, n+1
ds = s(i+1) - s(i)
T = T + 0.5_dp*(thrust(i) + thrust(i+1))*ds
Q = Q + 0.5_dp*(torque(i) + torque(i+1))*ds
Qr(i) = Q
end do
end subroutine thrustTorque
I tried this:
T = 0.0_dp
open (1, file = 'data1.dat', status ='new')
do i = 1, n+1
ds = s(i+1) - s(i)
T = T + 0.5_dp*(thrust(i) + thrust(i+1))*ds
Q = Q + 0.5_dp*(torque(i) + torque(i+1))*ds
Qr(i) = Q
write(1, *) Qr(i)
end do
close(1)
end subroutine thrustTorque
This subroutine is called in python using:
T, Q = _oxi.thrustTorque(Np, Tp, *args)
I cannot return the values of Qr as this is also linked to other areas of the code and would require many changes. Instead, I would prefer to print the output in the terminal or save it to a file.
Although the program executes, I don't see the results being saved to a file, or even a file being created.
Several issues stand out:
You use file unit 1 -- that's not a good idea. Fortran often uses these low numbers for specific units, i.e. standard out, error out, standard in. Better to use this syntax:
integer :: u ! unit for file i/o
open(newunit=u, file='data1.dat', status='new', action='write')
do
...
end do
That way, you can be sure that the unit number is free.
The write(*, *) <data> always writes to standard out -- you should have seen the values being displayed on screen when you ran it. In order to write to a file, you need to replace the first * of the write statement with the file unit.
write(u, *) Qr(i)
OK, @francescalus was correct in the comments: it was still using the old version. I had to rebuild after the changes so that the interface knows an update has been made. Do this using: f2py -c -m codename codename.f90
Edit: OK, after running some tests on this, I am able to print and create a file, but this has to be in a separate subroutine, which I don't understand. It looks like it has something to do with how the module is imported: import _codename is different from import codename. If anyone can explain this, please let me know.
I have a binary file that was created using a Python code. This code mainly scripts a bunch of tasks to pre-process a set of data files. I would now like to read this binary file in Fortran. The content of the binary file is coordinates of points in a simple format e.g.: number of points, x0, y0, z0, x1, y1, z1, ....
These binary files were created using the 'tofile' function in numpy. I have the following code in Fortran so far:
integer:: intValue
double precision:: dblValue
integer:: counter
integer:: check
open(unit=10, file='file.bin', form='unformatted', status='old', access='stream')
counter = 1
do
if ( counter == 1 ) then
read(unit=10, iostat=check) intValue
if ( check < 0 ) then
print*,"End Of File"
stop
else if ( check > 0 ) then
print*, "Error Detected"
stop
else if ( check == 0 ) then
counter = counter + 1
print*, intValue
end if
else if ( counter > 1 ) then
read(unit=10, iostat=check) dblValue
if ( check < 0 ) then
print*,"End Of File"
stop
else if ( check > 0 ) then
print*, "Error Detected"
stop
else if ( check == 0 ) then
counter = counter + 1
print*,dblValue
end if
end if
end do
close(unit=10)
This unfortunately does not work, and I get garbage numbers (e.g. 6.4731191026611484E+212, 2.2844499004808491E-279, etc.). Could someone give some pointers on how to do this correctly?
Also, what would be a good way of writing and reading binary files interchangeably between Python and Fortran - as it seems like that is going to be one of the requirements of my application.
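As an aside on the likely cause: numpy's default integer is often 8 bytes, while the Fortran code above reads a 4-byte integer, and a size mismatch like that shifts every subsequent read, producing exactly this kind of garbage. One quick check is to read the file back in numpy with explicit dtypes; this is just a sketch, and the dtypes shown are assumptions to be adjusted until they match what the writer actually used:
import numpy as np

with open('file.bin', 'rb') as f:
    # try np.int32 / np.int64 until the point count looks sensible
    npoints = np.fromfile(f, dtype=np.int64, count=1)[0]
    coords = np.fromfile(f, dtype=np.float64, count=3 * npoints)

print(npoints, coords[:6])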
Thanks
Here's a trivial example of how to take data generated with numpy to Fortran the binary way.
I calculated 360 values of sin on [0,2π),
#!/usr/bin/env python3
import numpy as np
with open('sin.dat', 'wb') as outfile:
np.sin(np.arange(0., 2*np.pi, np.pi/180.,
dtype=np.float32)).tofile(outfile)
exported that with tofile to the binary file 'sin.dat', which has a size of 1440 bytes (360 * sizeof(float32)), and read that file with this Fortran 95 (gfortran -O3 -Wall -pedantic) program, which outputs 1. - (val**2 + cos(x)**2) for x in [0,2π),
program numpy_import
integer, parameter :: REAL_KIND = 4
integer, parameter :: UNIT = 10
integer, parameter :: SAMPLE_LENGTH = 360
real(REAL_KIND), parameter :: PI = acos(-1.)
real(REAL_KIND), parameter :: DPHI = PI/180.
real(REAL_KIND), dimension(0:SAMPLE_LENGTH-1) :: arr
real(REAL_KIND) :: r
integer :: i
open(UNIT, file="sin.dat", form='unformatted',&
access='direct', recl=4)
do i = 0,ubound(arr, 1)
read(UNIT, rec=i+1, err=100) arr(i)
end do
do i = 0,ubound(arr, 1)
r = 1. - (arr(i)**2. + cos(real(i*DPHI, REAL_KIND))**2)
write(*, '(F6.4, " ")', advance='no')&
real(int(r*1E6+1)/1E6, REAL_KIND)
end do
100 close(UNIT)
write(*,*)
end program numpy_import
thus, if val == sin(x), the numeric result must, to a good approximation, vanish for float32 types.
And indeed:
output:
360 x 0.0000
So, thanks to this great community, from all the advice I got and a little bit of tinkering around, I think I have figured out a stable solution to this problem, and I wanted to share this answer with you all. I will provide a minimal example here, where I want to write a variable-size array from Python into a binary file and read it using Fortran. I am assuming that the number of rows numRows and number of columns numCols are also written along with the full array dataArray. The following Python script writeBin.py writes the file:
import numpy as np
# Read in the numRows and numCols value
# Read in the array values
numRowArr = np.array([numRows], dtype=np.float32)
numColArr = np.array([numCols], dtype=np.float32)
fileObj = open('pybin.bin', 'wb')
numRowArr.tofile(fileObj)
numColArr.tofile(fileObj)
for i in range(numRows):
lineArr = dataArray[i,:]
lineArr.tofile(fileObj)
fileObj.close()
Following this, the fortran code to read the array from the file can be programmed as follows:
program readBin
use iso_fortran_env
implicit none
integer:: nR, nC, i
real(kind=real32):: numRowVal, numColVal
real(kind=real32), dimension(:), allocatable:: rowData
real(kind=real32), dimension(:,:), allocatable:: fullData
open(unit=10,file='pybin.bin',form='unformatted',status='old',access='stream')
read(unit=10) numRowVal
nR = int(numRowVal)
read(unit=10) numColVal
nC = int(numColVal)
allocate(rowData(nC))
allocate(fullData(nR,nC))
do i = 1, nR
read(unit=10) rowData
fullData(i,:) = rowData(:)
end do
close(unit=10)
end program readBin
The main point that I gathered from the discussion on this thread is to match the read and the write as closely as possible, with precise specifications of the data types to be read, the way they are written, etc. As you may note, this is a made-up example, so there may be some things here and there that are not perfect. However, I have now used this in a finite element program, where the mesh data is handled with this binary read/write - and it worked very well.
P.S.: In case you find a typo, please let me know, and I will edit it right away.
Thanks a lot.
I am reading a file containing single precision data with 512**3 data points. Based on a threshold, I assign each point a flag of 1 or 0. I wrote two programs doing the same thing, one in fortran, the other in python. But the one in fortran takes like 0.1 sec while the one in python takes minutes. Is it normal? Or can you please point out the problem with my python program:
fortran.f
program vorticity_tracking
implicit none
integer, parameter :: length = 512**3
real, parameter :: threshold = 1320.0
character(255) :: filen
real, dimension(length) :: stored_data
integer, dimension(length) :: flag
integer index
filen = "vor.dat"
print *, "Reading the file ", trim(filen)
open(10, file=trim(filen),form="unformatted",
& access="direct", recl = length*4)
read (10, rec=1) stored_data
close(10)
do index = 1, length
if (stored_data(index).ge.threshold) then
flag(index) = 1
else
flag(index) = 0
end if
end do
stop
end program
Python file:
#!/usr/bin/env python
import struct
import numpy as np
f_type = 'float32'
length = 512**3
threshold = 1320.0
file = 'vor_00000_455.float'
f = open(file,'rb')
data = np.fromfile(f, dtype=f_type, count=-1)
f.close()
flag = []
for index in range(length):
if (data[index] >= threshold):
flag.append(1)
else:
flag.append(0)
********* Edit ******
Thanks for your comments. I am not sure then how to do this in Python. I tried the following, but it is still just as slow.
flag = np.ndarray(length, dtype=np.bool)
for index in range(length):
if (data[index] >= threshold):
flag[index] = 1
else:
flag[index] = 0
Can anyone please show me?
Your two programs are totally different. Your Python code repeatedly changes the size of a structure. Your Fortran code does not. You're not comparing two languages, you're comparing two algorithms and one of them is obviously inferior.
In general Python is an interpreted language while Fortran is a compiled one. Therefore you have some overhead in Python. But it shouldn't take that long.
One thing that can be improved in the python version is to replace the for loop by an index operation.
#create flag filled with zeros with same shape as data
flag=numpy.zeros(data.shape)
#get bool array stating where data>=threshold
barray=data>=threshold
#everywhere where barray==True put a 1 in flag
flag[barray]=1
shorter version:
#create flag filled with zeros with same shape as data
flag=numpy.zeros(data.shape)
#combine the two operations without temporary barray
flag[data>=threshold]=1
Try this for python:
flag = data > threshold
It will give you an array of flags as you want.
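If integer 0/1 values are needed instead of booleans (to mirror the Fortran flag array), the comparison result can simply be cast. A small sketch using the data and threshold variables from the question:
import numpy as np

# cast the boolean mask to 0/1 integers; the dtype is just an example
flag = (data >= threshold).astype(np.int8)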
I'm trying to read a very big Fortran unformatted binary file with python. This file contains 2^30 integers.
I find the record markers confusing (the first one is -2147483639). Anyway, I have managed to recover the data structure (the integers I want are all similar, and thus easy to tell apart from the record markers) and wrote the code below (with help from here).
However, we can see that the markers at the beginning and the end of each record are not the same. Why is that?
Is it because the size of the data is too long ( 536870910 = (2^30 - 4) / 2 )?
But ( 2^31 - 1 ) / 4 = 536870911 > 536870910.
Or is it just some mistake made by the author of the data file?
Another question: what's the type of the marker at the beginning of a record, int or unsigned int?
import struct
import numpy as np

fp = open(file_path, "rb")
rec_len1, = struct.unpack( '>i', fp.read(4) )
data1 = np.fromfile( fp, '>i', 536870910)
rec_end1, = struct.unpack( '>i', fp.read(4) )
rec_len2, = struct.unpack( '>i', fp.read(4) )
data2 = np.fromfile( fp, '>i', 536870910)
rec_end2, = struct.unpack( '>i', fp.read(4) )
rec_len3, = struct.unpack( '>i', fp.read(4) )
data3 = np.fromfile( fp, '>i', 4)
rec_end3, = struct.unpack( '>i', fp.read(4) )
data = np.concatenate([data1, data2, data3])
(rec_len1,rec_end1,rec_len2,rec_end2,rec_len3,rec_end3)
Here are the values of the record lengths read as shown above:
(-2147483639, -2176, 2406, 589824, 1227787, -18)
Finally, things seem to be clearer.
Here is the Intel Fortran Compiler User and Reference Guide;
see the section Record Types: Variable-Length Records.
For a record length greater than 2,147,483,639 bytes, the record is
divided into subrecords. The subrecord can be of any length from 1 to
2,147,483,639, inclusive.
The sign bit of the leading length field indicates whether the record
is continued or not. The sign bit of the trailing length field
indicates the presence of a preceding subrecord. The position of the
sign bit is determined by the endian format of the file.
A subrecord that is continued has a leading length field with a sign
bit value of 1. The last subrecord that makes up a record has a
leading length field with a sign bit value of 0. A subrecord that has
a preceding subrecord has a trailing length field with a sign bit
value of 1. The first subrecord that makes up a record has a trailing
length field with a sign bit value of 0. If the value of the sign bit
is 1, the length of the record is stored in twos-complement notation.
After many attempts, I realized that I had been misled by the twos-complement notation: the record marker just changes sign according to the rules above, instead of changing to its twos-complement notation when the sign bit is 1. Anyway, it's also possible that my data was created with a different compiler.
Below is the solution.
The data is larger than 2 GB, so it's divided into several subrecords.
As we can see, the first record's start marker is -2147483639,
so the length of the first subrecord is 2147483639, which is exactly the maximum length of a subrecord - not 2147483640 as I thought, nor 2147483638, the twos-complement notation of -2147483639.
If we skip 2147483639 bytes and read the record end marker, we get 2147483639,
as it's the first subrecord, whose end marker is positive.
Below is the code to check the record markers:
import struct

fp = open(file_path, "rb")
while 1:
prefix, = struct.unpack( '>i', fp.read(4) )
fp.seek(abs(prefix), 1) #or read |prefix| bytes data as you want
suffix, = struct.unpack( '>i', fp.read(4) )
print prefix, suffix
if abs(suffix) - abs(prefix):
print "suffix != prefix!"
break
if prefix > 0: break
And the screen prints:
-2147483639 2147483639
-2147483639 -2147483639
18 -18
We can see that the record begin marker and end marker are always the same except for the sign.
The lengths of the three subrecords are 2147483639, 2147483639, and 18 bytes, which need not be multiples of 4. So the first subrecord ends with the first 3 bytes of a certain integer and the second subrecord begins with the remaining byte.
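Putting that convention together, here is a sketch of a reader that reassembles full records from their subrecords (big-endian markers assumed, as above); the rule is simply to keep appending subrecords while the leading marker is negative:
import struct

def read_records(path):
    """Read every record of a big-endian unformatted sequential file,
    joining subrecords whose leading marker has the sign bit set."""
    records = []
    with open(path, 'rb') as fp:
        while True:
            head = fp.read(4)
            if len(head) < 4:
                break                      # end of file
            prefix, = struct.unpack('>i', head)
            chunks = [fp.read(abs(prefix))]
            fp.read(4)                     # trailing marker of this subrecord
            while prefix < 0:              # negative leading marker: continued
                prefix, = struct.unpack('>i', fp.read(4))
                chunks.append(fp.read(abs(prefix)))
                fp.read(4)
            records.append(b''.join(chunks))
    return records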
Since this question seems to come up often... this is a Python utility to scan a binary file and determine whether it is (or might be) a Fortran unformatted sequential access file. It works by trying several header formats. Of course, since the "unformatted" format isn't standard there could be other variants, but this should hit the most common ones.
def scanfbinary(hformat, f, fsize):
""" scan a file to see if it has the simple structure typical of
an unformatted sequential access fortran binary:
recl1,<data of length recl1 bytes>,recl1,recl2,<data of length recl2 bytes>,recl2 ...
"""
import struct
print 'scan type',hformat,
if 'qQ'.find(hformat[1])>=0: hsize=8
elif 'iIlL'.find(hformat[1])>=0: hsize=4
if hformat[0] == '<': endian='little'
elif hformat[0] == '>': endian='big'
print '(',endian,'endian',hsize,'byte header)',
f.seek(0)
nrec = 0
while fsize > 0:
h0=struct.unpack(hformat,f.read(hsize))[0]
if h0 < 0 : print 'invalid integer ',h0; return 1
if h0 > fsize - 2*hsize:
print 'invalid header size ',h0,' exceeds file size ',fsize
if nrec > 0:print 'odd, perhaps a corrupt file?'
return 2
# to read the data replace the next line with code to read h0 bytes..
# eg
# import numpy
# dtype = numpy.dtype('<i')
# record=numpy.fromfile(f,dtype,h0/dtype.itemsize)
f.seek(h0,1)
h=struct.unpack(hformat,f.read(hsize))[0]
if h0!=h : print 'unmatched header'; return 3
nrec+=1
if nrec == 1:print
if nrec < 10:print 'read record',nrec,'size',h
fsize-=(h+2*hsize)
print 'successfully read ',nrec,' records with unformatted fortran header type',hformat
return 0
f=open('binaryfilename','rb')
f.seek(0,2)
fsize=f.tell()
res=[scanfbinary(hformat,f,fsize) for hformat in ('<q','>q','<i','>i')]
if res.count(0)==0:
print 'no match found, file size ',fsize, 'starts..'
f.seek(0)
for i in range(0,12): print f.read(2).encode('hex_codec'),
print
I found that using f2py is a more convenient way for Python to access Fortran data.
However, the strange behavior of the record markers remains a question. At least we can avoid diving into the (sometimes confusing) Fortran unformatted file structure, and it plays well with numpy.
The F2PY Users Guide and Reference Manual is here.
Here's an example Fortran source file for opening and closing a file, and for reading an integer 1-D array and a float 2-D array. Note the comments beginning with !f2py; they help make f2py 'cleverer'.
To use it, wrap it into a module with f2py and import that into your Python session. Then you can call these functions just like Python functions (a usage sketch follows the module below).
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc
!cc cc
!cc FORTRAN MODULE for PYTHON PROGRAM CALLING cc
!cc cc
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc
!Usage:
! Compile: f2py -c fortio.f90 -m fortio
! Import: from fortio import *
! or import fortio
!Note:
! Big endian: 1; Little endian: 0
!cccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc
SUBROUTINE open_fortran_file(fileUnit, fileName, endian, error)
implicit none
character(len=256) :: fileName
integer*4 :: fileUnit, error, endian
!f2py integer*4 optional, intent(in) :: endian=1
!f2py integer*4 intent(out) :: error
if(endian .NE. 0) then
open(unit=fileUnit, FILE=fileName, form='unformatted', status='old', &
iostat=error, convert='big_endian')
else
open(unit=fileUnit, FILE=fileName, form='unformatted', status='old', &
iostat=error)
endif
END SUBROUTINE
!cccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc
SUBROUTINE read_fortran_integer4(fileUnit, arr, leng)
implicit none
integer*4 :: fileUnit, leng
integer*4 :: arr(leng)
!f2py integer*4 intent(in) :: fileUnit, leng
!f2py integer*4 intent(out), dimension(leng), depend(leng) :: arr(leng)
read(fileUnit) arr
END SUBROUTINE
!cccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc
SUBROUTINE read_fortran_real4(fileUnit, arr, row, col)
implicit none
integer*4 :: fileUnit, row, col
real*4 :: arr(row,col)
!f2py integer*4 intent(in):: fileUnit, row, col
!f2py real*4 intent(out), dimension(row, col), depend(row, col) :: arr(row,col)
read(fileUnit) arr
END SUBROUTINE
!cccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc
SUBROUTINE close_fortran_file(fileUnit, error)
implicit none
integer*4 :: fileUnit, error
!f2py integer*4 intent(in) :: fileUnit
!f2py integer*4 intent(out) :: error
close(fileUnit, iostat=error)
END SUBROUTINE
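For completeness, here is a hypothetical Python session using the module above, built with f2py -c fortio.f90 -m fortio. With f2py, the intent(out) arguments become return values; the unit number, file name, and array sizes below are made up for illustration:
import fortio

err = fortio.open_fortran_file(10, 'data.bin', 1)   # unit 10, big endian
ivec = fortio.read_fortran_integer4(10, 100)         # 1-D array of 100 int32
grid = fortio.read_fortran_real4(10, 64, 64)         # 64 x 64 float32 array
err = fortio.close_fortran_file(10)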