I have a MATLAB file (.m) and I want to run it using Python. I do not have MATLAB on my Ubuntu system. Can I still run the MATLAB file?
ismonotone.m
% Checks vector v for monotonicity, and returns the direction (increasing,
% decreasing, constant, or none).
%
% Meaning of parameter tol:
% - if tol==0, check for non-increasing or non-decreasing sequence (default).
% - if tol>0, allow backward steps of size <= tol
% - if tol<0, require forward steps of size >= tol
%
% Inputs
% v: vector to check for monotonicity
% tol: see above
%
% Outputs
% b: a bitfield indicating monotonicity. Can be tested as follows:
% bitand(b,1)~=0 --> v is increasing (within tolerance)
% bitand(b,2)~=0 --> v is decreasing (within tolerance)
% bitand(b,3)==3 --> v is both increasing and decreasing
%                    (i.e. v is constant, within tolerance).
% --------------------------------------------------------------------------
function b = ismonotone( v, tol )
  if ( nargin < 2 )
    tol = 0;
  end
  b = 0;
  dv = diff(v);
  if ( min(dv) >= -tol ) b = bitor( b, 1 ); end
  if ( max(dv) <= tol )  b = bitor( b, 2 ); end
end
%!test assert(ismonotone(linspace(0,1,20)),1);
%!test assert(ismonotone(linspace(1,0,20)),2);
%!test assert(ismonotone(zeros(1,100)),3);
%!test
%! v=[0 -0.01 0 0 0.01 0.25 1];
%! assert(ismonotone(v,0.011),1);
%! assert(ismonotone(-v,0.011),2);
Can I run this file using Python without having MATLAB on my Ubuntu system?
You can install Octave from the Ubuntu repository (sudo apt install octave), or download and build the latest release - that's more work.
Starting Octave in the directory containing this file lets me run the %! tests:
octave:1> test ismonotone
PASSES 4 out of 4 tests
In fact, the presence of those %! lines suggests that this file was originally written for Octave. Can someone confirm whether MATLAB can handle those doctest-like lines?
Edit: added interactive examples
octave:1> ismonotone(linspace(0,1,20))
ans = 1
octave:2> ismonotone(zeros(1,100))
ans = 3
Or from the Linux shell:
1424:~/myml$ octave -fq --eval 'ismonotone(linspace(0,1,20))'
ans = 1
For someone used to running Python scripts from the shell, Octave is friendlier than MATLAB. The startup overhead is much smaller, and the command-line options are more familiar. In interactive mode, doc opens a familiar UNIX info system.
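And if you specifically want to drive it from Python, here is a minimal sketch using the standard library's subprocess module (assuming octave is on your PATH):

import subprocess

# Run the Octave one-liner from Python and capture its output.
out = subprocess.run(
    ['octave', '-fq', '--eval', 'disp(ismonotone(linspace(0,1,20)))'],
    capture_output=True, text=True, check=True)
print(out.stdout.strip())  # -> 1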
Have you tried:
1. Small Matlab to Python compiler
2. LiberMate
3. Rewriting the code using the SciPy module
Hope this helps
Python cannot run MATLAB programs natively. You would need to rewrite this in Python, likely using the SciPy/NumPy libraries.
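If you do end up rewriting it, a minimal NumPy sketch of ismonotone might look like this (my port, not from the thread; it mirrors the .m file's bitfield convention):

import numpy as np

def ismonotone(v, tol=0.0):
    # b is a bitfield: bit 1 = increasing, bit 2 = decreasing;
    # both bits set means constant within tolerance.
    b = 0
    dv = np.diff(v)
    if dv.min() >= -tol:
        b |= 1
    if dv.max() <= tol:
        b |= 2
    return b

print(ismonotone(np.linspace(0, 1, 20)))  # -> 1
print(ismonotone(np.zeros(100)))          # -> 3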
Related
I am trying to include Python's max function inside a quicksum expression using Gurobi with Python. Doing so obviously raises an error, since it is not accepted under LinExpr limitations.
shutdowncost = quicksum(quicksum(shutdown_cost[i] * max((v[hour -1, i] - v[hour, i]),0) for i in num_gen) for hour in hour_range)
v is a binary variable in the model, while the remainder are fixed parameters. The issue is that shutdowncost becomes negative in the scenario where v[hour - 1, i] is 0 and v[hour, i] is 1.
Is there another command that can be used to replace the max command inside the quicksum?
Here is a paper that talks about startup and shutdown constraints: MIPFormulation. They use the notation:
u[t] 1 for online, 0 for offline, binary (state)
v[t] 1 for turned on that time period, binary (turn_on)
w[t] 1 for turned off that period, binary (turn_off)
These gurobi binary variables are defined with the constraints:
u[t] - u[t-1] == v[t] - w[t]
v[t] + w[t] <= 1
Then your shutdowncost can be defined:
shutdowncost = quicksum([shutdown_cost[i] * w[hour, i] for i in num_gen for hour in hour_range])
(No need for 2 quicksums!)
This shutdowncost can then be used in your objective function or another constraint. And it is easier to see what is happening.
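A minimal gurobipy sketch of that formulation might look like this (the sets and cost data here are illustrative placeholders, not from the question):

import gurobipy as gp
from gurobipy import GRB

hour_range = range(24)                       # illustrative
num_gen = range(3)                           # illustrative
shutdown_cost = {i: 100.0 for i in num_gen}  # illustrative

m = gp.Model("commitment")
u = m.addVars(hour_range, num_gen, vtype=GRB.BINARY, name="u")  # online state
v = m.addVars(hour_range, num_gen, vtype=GRB.BINARY, name="v")  # turned on
w = m.addVars(hour_range, num_gen, vtype=GRB.BINARY, name="w")  # turned off

# Link state changes to the turn-on/turn-off indicators
m.addConstrs(u[t, i] - u[t - 1, i] == v[t, i] - w[t, i]
             for t in hour_range if t > 0 for i in num_gen)
m.addConstrs(v[t, i] + w[t, i] <= 1 for t in hour_range for i in num_gen)

# Linear shutdown cost - no max() needed
shutdowncost = gp.quicksum(shutdown_cost[i] * w[t, i]
                           for i in num_gen for t in hour_range)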
I am currently trying to obscure the contents of files (plain txt files for now) without using any libraries. I know this won't be very secure at all. What I basically want is a program that asks for the password to "encrypt" with, then asks for the file's name, finds that file, and "encrypts" it. A second program is then used to "decrypt" it: it asks for the password and filename, then "decrypts" the file. I don't care about actual security, so if it can be easily broken, that's fine; I just need the file to not open readably if you click on it.
On top of that, I don't want it to use ANY libraries, so no PyCrypto or anything like that.
I am on 64 bit windows.
I am also a complete beginner in the world of code and only know basic things such as how to get user input, print things, and use if statements and while loops.
Thanks in advance!
I don't know if this qualifies as an "external library" in your mind, but if you're on a Linux machine you probably have the gpg command available to you. This is a reasonably* secure encryption tool, which you could invoke from Python - or directly from the command line, if you just want the files protected and don't care about having it done through Python.
Alternatively, you could bang together a trivial mechanism for obscuring a file's contents based on a known password. For example, you could "stretch" the password to the length of the file text (multiply the string by (1 + text length // password length)) and then zip the two together. This gives you a bunch of character pairs, which can be converted to their ordinal values (ord('f') => 102, for example), XORed together (ord('f') ^ ord('b') => 4), and converted back to chars (chr(4) => the unprintable '\x04'). The resulting chars are your ciphertext.
All of this is trivial to break, of course, but it's easy to implement, and decryption is trivial too: XORing with the same stretched password reverses itself.
*intentional understatement :)
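For the trivial mechanism, here is a minimal sketch using nothing outside the built-ins (the function name is mine):

def xor_obscure(text, password):
    # Stretch the password to cover the text, then XOR char by char.
    stretched = password * (1 + len(text) // len(password))
    return "".join(chr(ord(a) ^ ord(b)) for a, b in zip(text, stretched))

scrambled = xor_obscure("hello world", "hunter2")
print(xor_obscure(scrambled, "hunter2"))  # XOR is its own inverse -> hello world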
You can try using the password as a key to encrypt it. Maybe a logical operation on the file at the binary level, such as XOR (OR and AND would not be reversible), will be able to encrypt it very simply - but it won't be secure, like you mentioned.
You can use XTEA (eXtended Tiny Encryption Algorithm) by copying its Python code into your project; it is only 28 lines of Python. It has been subjected to cryptanalysis and shown to be reasonably secure. Note that the recipe below is Python 2 (the L suffix on integer literals and the str-based byte handling won't run under Python 3).
import struct

def crypt(key,data,iv='\00\00\00\00\00\00\00\00',n=32):
    def keygen(key,iv,n):
        while True:
            iv = xtea_encrypt(key,iv,n)
            for k in iv:
                yield ord(k)
    xor = [ chr(x^y) for (x,y) in zip(map(ord,data),keygen(key,iv,n)) ]
    return "".join(xor)

def xtea_encrypt(key,block,n=32,endian="!"):
    v0,v1 = struct.unpack(endian+"2L",block)
    k = struct.unpack(endian+"4L",key)
    sum,delta,mask = 0L,0x9e3779b9L,0xffffffffL
    for round in range(n):
        v0 = (v0 + (((v1<<4 ^ v1>>5) + v1) ^ (sum + k[sum & 3]))) & mask
        sum = (sum + delta) & mask
        v1 = (v1 + (((v0<<4 ^ v0>>5) + v0) ^ (sum + k[sum>>11 & 3]))) & mask
    return struct.pack(endian+"2L",v0,v1)

def xtea_decrypt(key,block,n=32,endian="!"):
    v0,v1 = struct.unpack(endian+"2L",block)
    k = struct.unpack(endian+"4L",key)
    delta,mask = 0x9e3779b9L,0xffffffffL
    sum = (delta * n) & mask
    for round in range(n):
        v1 = (v1 - (((v0<<4 ^ v0>>5) + v0) ^ (sum + k[sum>>11 & 3]))) & mask
        sum = (sum - delta) & mask
        v0 = (v0 - (((v1<<4 ^ v1>>5) + v1) ^ (sum + k[sum & 3]))) & mask
    return struct.pack(endian+"2L",v0,v1)
Attribution: code from ActiveState Code » Recipes.
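A quick usage sketch (my addition, also Python 2; the 16-byte key and 8-byte IV sizes are fixed by the "4L" and "2L" struct formats above):

key = '0123456789abcdef'            # 16 bytes -> four 32-bit words
ciphertext = crypt(key, 'attack at dawn')
plaintext = crypt(key, ciphertext)  # the keystream XOR is its own inverse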
def lagrange(x0, xlist, ylist):
    wynik = float(0)
    if (len(xlist) != len(ylist)):
        raise BufferError("The x and y value lists must be the same size!")
    for i in range(len(xlist)):
        licznik = float(1)
        mianownik = float(1)
        for j in range(len(xlist)):
            if (i != j):
                licznik = licznik*(x0-xlist[j])
                mianownik = mianownik*(xlist[i]-xlist[j])
        wynik = wynik + ((licznik/mianownik)*ylist[i])
    return wynik

x = [2.0, 4.0, 5.0, 6.0]
y = [0.57672, -0.06604, -0.32757, -0.27668]
print ("Lagrange polynomial for point 5.5 is %d" % lagrange(5.5, x, y))
Why do I get the answer 0 after I run it? When rewritten in C# and run with the same data, it outputs -0.3539. It seems to me like a casting/rounding error, but I'm struggling to find it without a debugger.
I am completely new to Python; I'm using basic IdleX on Windows to write it.
The problem is not your function, it’s the printing.
The formatter %d is a signed integer decimal, so a result of -0.354 gets truncated toward zero and printed as 0.
Instead, print using %f:
>>> print ("Lagrange polynomial for point 5.5 is %f" % lagrange(5.5, x, y))
Lagrange polynomial for point 5.5 is -0.353952
I have a binary file that was created using a Python code. This code mainly scripts a bunch of tasks to pre-process a set of data files. I would now like to read this binary file in Fortran. The content of the binary file is coordinates of points in a simple format e.g.: number of points, x0, y0, z0, x1, y1, z1, ....
These binary files were created using the 'tofile' function in numpy. I have the following code in Fortran so far:
integer          :: intValue
double precision :: dblValue
integer          :: counter
integer          :: check

open(unit=10, file='file.bin', form='unformatted', status='old', access='stream')

counter = 1
do
   if ( counter == 1 ) then
      read(unit=10, iostat=check) intValue
      if ( check < 0 ) then
         print*, "End Of File"
         stop
      else if ( check > 0 ) then
         print*, "Error Detected"
         stop
      else if ( check == 0 ) then
         counter = counter + 1
         print*, intValue
      end if
   else if ( counter > 1 ) then
      read(unit=10, iostat=check) dblValue
      if ( check < 0 ) then
         print*, "End Of File"
         stop
      else if ( check > 0 ) then
         print*, "Error Detected"
         stop
      else if ( check == 0 ) then
         counter = counter + 1
         print*, dblValue
      end if
   end if
end do

close(unit=10)
This unfortunately does not work, and I get garbage numbers (e.g. 6.4731191026611484E+212, 2.2844499004808491E-279, etc.). Could someone give me some pointers on how to do this correctly?
Also, what would be a good way of writing and reading binary files interchangeably between Python and Fortran? That seems like it is going to be one of the requirements of my application.
Thanks
Here's a trivial example of how to take data generated with numpy to Fortran the binary way.
I calculated 360 values of sin on [0,2π),
#!/usr/bin/env python3
import numpy as np

with open('sin.dat', 'wb') as outfile:
    np.sin(np.arange(0., 2*np.pi, np.pi/180.,
                     dtype=np.float32)).tofile(outfile)
exported that with tofile to the binary file 'sin.dat', which has a size of 1440 bytes (360 * sizeof(float32)), and read that file with this Fortran 95 program (compiled with gfortran -O3 -Wall -pedantic), which outputs 1. - (val**2 + cos(x)**2) for x in [0,2π),
program numpy_import
   integer, parameter :: REAL_KIND = 4
   integer, parameter :: UNIT = 10
   integer, parameter :: SAMPLE_LENGTH = 360

   real(REAL_KIND), parameter :: PI = acos(-1.)
   real(REAL_KIND), parameter :: DPHI = PI/180.

   real(REAL_KIND), dimension(0:SAMPLE_LENGTH-1) :: arr
   real(REAL_KIND) :: r
   integer :: i

   open(UNIT, file="sin.dat", form='unformatted',&
        access='direct', recl=4)
   do i = 0, ubound(arr, 1)
      read(UNIT, rec=i+1, err=100) arr(i)
   end do

   do i = 0, ubound(arr, 1)
      r = 1. - (arr(i)**2. + cos(real(i*DPHI, REAL_KIND))**2)
      write(*, '(F6.4, " ")', advance='no')&
           real(int(r*1E6+1)/1E6, REAL_KIND)
   end do

100 close(UNIT)
   write(*,*)
end program numpy_import
thus, if val == sin(x), the numeric result must vanish to a good approximation for float32 types.
And indeed:
output:
360 x 0.0000
So, thanks to this great community, all the advice I got, and a little bit of tinkering around, I think I figured out a stable solution to this problem, and I wanted to share this answer with you all. I will provide a minimal example here, where I want to write a variable-size array from Python into a binary file and read it using Fortran. I am assuming that the number of rows numRows and number of columns numCols are written along with the full array dataArray. The following Python script writeBin.py writes the file:
import numpy as np

# Read in the numRows and numCols values
# Read in the array values

numRowArr = np.array([numRows], dtype=np.float32)
numColArr = np.array([numCols], dtype=np.float32)

fileObj = open('pybin.bin', 'wb')
numRowArr.tofile(fileObj)
numColArr.tofile(fileObj)
for i in range(numRows):
    lineArr = dataArray[i,:]
    lineArr.tofile(fileObj)
fileObj.close()
Following this, the Fortran code to read the array from the file can be written as follows:
program readBin
   use iso_fortran_env
   implicit none

   integer :: nR, nC, i
   real(kind=real32) :: numRowVal, numColVal
   real(kind=real32), dimension(:), allocatable :: rowData
   real(kind=real32), dimension(:,:), allocatable :: fullData

   open(unit=10, file='pybin.bin', form='unformatted', status='old', access='stream')

   read(unit=10) numRowVal
   nR = int(numRowVal)
   read(unit=10) numColVal
   nC = int(numColVal)

   allocate(rowData(nC))
   allocate(fullData(nR,nC))

   do i = 1, nR
      read(unit=10) rowData
      fullData(i,:) = rowData(:)
   end do

   close(unit=10)
end program readBin
The main point I gathered from the discussion on this thread is to match the read and the write as closely as possible, with precise specifications of the data types to be read, the order in which they are written, and so on. As you may note, this is a made-up example, so there may be some things here and there that are not perfect. However, I have since used this in a finite element program, where the mesh data used this binary read/write, and it worked very well.
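For the reverse direction (reading such stream files back into Python), numpy.fromfile is the natural counterpart; here is a minimal sketch assuming the same header layout as above:

import numpy as np

with open('pybin.bin', 'rb') as f:
    nR = int(np.fromfile(f, dtype=np.float32, count=1)[0])
    nC = int(np.fromfile(f, dtype=np.float32, count=1)[0])
    fullData = np.fromfile(f, dtype=np.float32).reshape(nR, nC)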
P.S.: In case you find some typo, please let me know, and I will edit it right away.
Thanks a lot.
Afternoon everyone. I'm currently porting IDL code over to Python, and it's been plain sailing up until this point. I'm stuck on this section of IDL code:
nsteps = 266
ind2 = ((lindgen(nsteps+1,nsteps+1)) mod (nsteps+1))
dk2 = (k2arr((ind2+1) < nsteps) - k2arr((ind2-1) > 0)) / 2.
My version of this includes a rewritten lindgen function as follows:
import numpy

def pylindgen(shape):
    nelem = numpy.prod(numpy.array(shape))
    out = numpy.arange(nelem, dtype=int)
    return numpy.reshape(out, shape)
... and the ported code where k2arr is an array of shape (267,):
ind2 = pylindgen((nsteps+1,nsteps+1)) % (nsteps+1)
dk2 = (k2arr[ (ind2+1) < nsteps ] - k2arr[ (ind2-1) > 0. ]) / 2.
Now, the problem is that my code makes ind2 an array where, judging by the IDL code and the errors thrown in the Python script, I'm sure it's meant to be a scalar. Am I missing some feature of these IDL functions?
Any thoughts would be greatly appreciated.
Cheers.
My knowledge of IDL is not what it used to be; I had to research a little. The operators "<" and ">" in IDL are not equivalents of those in Python (or other languages). "<" is the minimum operator: it establishes a ceiling, and anything above it will be set to that value. Likewise, ">" is the maximum operator: it sets a floor.
dk2 = (k2arr((ind2+1) < nsteps) - k2arr((ind2-1) > 0))
where k2arr is (267,) and ind2 is (267,267), is equivalent to saying:
- (ind2+1 < nsteps): take ind2+1 and, in any place that ind2+1 is greater than nsteps, replace it by nsteps.
- (ind2-1 > 0): take ind2-1 and, in any place that ind2-1 is less than zero, put zero instead.
Now the tricky part. k2arr (267,) is evaluated for each of the rows of (ind2+1) and (ind2-1), meaning that if (ind2+1 < nsteps) = [1,2,3,...,nsteps-1, nsteps, nsteps], then k2arr will be evaluated for exactly that row 267 times, one on top of the other, with the result being a (267,267) array.
And NOW I remember why I stopped programming in IDL!
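So a direct NumPy translation of that IDL line might look like this (a sketch; k2arr here is placeholder data, and pylindgen is the question's function, inlined for completeness):

import numpy as np

def pylindgen(shape):
    return np.arange(np.prod(shape), dtype=int).reshape(shape)

nsteps = 266
k2arr = np.linspace(0., 1., nsteps + 1)  # placeholder for the real data

ind2 = pylindgen((nsteps + 1, nsteps + 1)) % (nsteps + 1)

# IDL's "<" and ">" clip rather than compare, so use minimum/maximum:
dk2 = (k2arr[np.minimum(ind2 + 1, nsteps)]
       - k2arr[np.maximum(ind2 - 1, 0)]) / 2.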
The code for pylindgen works perfectly for me. It produces an array of (267,267), though. If k2arr is a (267,) array, you should be getting an error like:
ValueError: boolean index array should have 1 dimension
Is that your problem?
Cheers