I have six arrays named x1, ..., x6 that I read from an .npz file. I need to perform some mathematical operation on each array and store the results in new arrays. I am doing it step by step in a very simple way. To read the file and store the variables:
files = np.load("particle.npz")
x1 = files['x1']
x2 = files['x2']
x3 = files['x3']
x4 = files['x4']
x5 = files['x5']
x6 = files['x6']
Then I create another set of arrays from the previous ones:
pox1= x1[:,0]
pox2= x2[:,0]
pox3= x3[:,0]
pox4= x4[:,0]
pox5= x5[:,0]
pox6= x6[:,0]
Then create some new arrays:
sq_diff_x1 = np.zeros(40002)
sq_diff_x2 = np.zeros(40002)
sq_diff_x3 = np.zeros(40002)
sq_diff_x4 = np.zeros(40002)
sq_diff_x5 = np.zeros(40002)
sq_diff_x6 = np.zeros(40002)
And lastly, perform the calculation using a for loop and store the results in the new arrays:
for i in range(len(x1)-1):
    sq_diff_x1[i] = (pox1[i]-pox1[0])**2
    sq_diff_x2[i] = (pox2[i]-pox2[0])**2
    sq_diff_x3[i] = (pox3[i]-pox3[0])**2
    sq_diff_x4[i] = (pox4[i]-pox4[0])**2
    sq_diff_x5[i] = (pox5[i]-pox5[0])**2
    sq_diff_x6[i] = (pox6[i]-pox6[0])**2
The code works fine, but is there any other way to do this automatically, without assigning everything one by one? My method is simple, but it will be very time consuming when I need to work with 100 arrays, so something more automated is required.
files = np.load("particle.npz")
x_s = [files[key] for key in files.keys()]
Creating a list of arrays rather than individually named ones is the preferred method in Python.
pox_s = [x[:,0] for x in x_s]
Looks like the arrays are all the same size. So we can turn the list into an array:
pox_s = np.array(pox_s)
Or even
x_s = np.array(x_s) # (6, 40002, ?)
pox_s = x_s[:,:,0] # (6, 40002)
sq_diffs = (pox_s - pox_s[:,[0]])**2 # (6, 40002)-(6,1)
Without a small concrete example, I can't test this code. I think I've got the shapes right.
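For example, a quick synthetic check of the shapes (the array sizes here are made up for illustration, not read from the real particle.npz):

import numpy as np

# synthetic stand-in for the .npz contents: 6 arrays of shape (40002, 3)
rng = np.random.default_rng(0)
x_s = np.array([rng.random((40002, 3)) for _ in range(6)])   # (6, 40002, 3)

pox_s = x_s[:, :, 0]                     # (6, 40002)
sq_diffs = (pox_s - pox_s[:, [0]])**2    # (6, 40002) - (6, 1) broadcasts

print(sq_diffs.shape)   # (6, 40002)
print(sq_diffs[:, 0])   # the first column is all zeros, as expected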
You basically need to write an extra loop to iterate over the keys of the dictionary.
files = np.load("particle.npz")
sq_diff_list = []
temp = []
for i in list(files.keys()):
    arr = files[i]
    for x in arr:
        temp.append((x-arr[0])**2)
    sq_diff_list.append(temp)
    temp = []
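If you want to keep this structure but let numpy handle the inner loop, here is a minimal sketch under the same assumption (each array's difference is taken against its own first element, as in the loop above; if you only need column 0 as in the question, index it first with files[key][:, 0]):

import numpy as np

files = np.load("particle.npz")
sq_diff_list = [(files[key] - files[key][0])**2 for key in files.keys()]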
This MATLAB code is from Main_MOHHO.m at https://www.mathworks.com/matlabcentral/fileexchange/80776-multi-objective-harris-hawks-optimization-mohho. I want to write the same code in Python, but I can't create the Rabbits variable.
clc;
clear;
close all;
%% Problem Definition
nVar=3; % Number of Decision Variables
VarSize=[1 nVar]; % Size of Decision Variables Matrix
VarMin=0; % Lower Bound of Variables
VarMax=1; % Upper Bound of Variables
nPop=5; % Population Size
%% Initialization
empty_Rabbit.Location=[];
empty_Rabbit.Cost=[];
empty_Rabbit.Sol=[];
empty_Rabbit.IsDominated=[];
empty_Rabbit.GridIndex=[];
empty_Rabbit.GridSubIndex=[];
Rabbits=repmat(empty_Rabbit,nPop,1);
for i=1:nPop
    Rabbits(i).Location = rand(VarSize).*(VarMax-VarMin)+VarMin;
    X(i,:) = rand(VarSize).*(VarMax-VarMin)+VarMin;
end
I tried to write it on Google Colab like this:
import numpy as np
import math
nVar = 3 # Number of Decision Variables
VarSize = np.array((1, nVar)) # Size of Decision Variables Matrix
VarMin = 0 # Lower Bound of Variables
VarMax = 1 # Upper Bound of Variables
nPop = 5 # Population Size
class empty_Rabbit:
    Location = []
    Cost = []
    IsDominated = []
    GridIndex = []
    GridSubIndex = []
    Sol = []
Rabbits = np.tile(empty_Rabbit, (nPop, 1))
X = np.zeros((nPop, nVar))
Rabbit_Location = np.zeros((VarSize))
Rabbit_Energy = math.inf
for i in range(nPop):
    Rabbits[i, 0].Location = np.multiply(np.random.rand(VarSize[0], VarSize[1]),
                                         (VarMax-VarMin) + VarMin)
    print(Rabbits[i, 0].Location)
But Rabbits[i, 0].Location comes out the same for every row.
[Screenshot of the Google Colab output]
What is the correct way to create the Rabbits variable in Python so that the output looks like output number 1 in the picture? Thank you.
Two issues exist in your code. First, np.tile repeats the same object (nPop, 1) times. So, when you change one of the objects, you actually change the same memory location. Second, you want to initialize a different object each time instead of referring to the same object, so you want to write empty_Rabbit() to create a new instance of that object. Both suggestions can be achieved using a comprehension like [empty_Rabbit() for i in range(nPop)] and reshape to any new dimensions if required.
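Here is a minimal sketch, separate from the MOHHO code, showing the shared-object behaviour of np.tile versus a comprehension of fresh instances:

import numpy as np

class Empty:
    pass

tiled = np.tile(Empty, (3, 1))          # the same class object repeated 3 times
print(tiled[0, 0] is tiled[1, 0])       # True: every cell refers to one object

fresh = np.array([Empty() for _ in range(3)]).reshape(3, 1)
print(fresh[0, 0] is fresh[1, 0])       # False: each cell is a distinct instance

With that in mind, here is the corrected version of your code: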
import numpy as np
nVar = 3 # Number of Decision Variables
VarSize = np.array((1, nVar)) # Size of Decision Variables Matrix
VarMin = 0 # Lower Bound of Variables
VarMax = 1 # Upper Bound of Variables
nPop = 5 # Population Size
class empty_Rabbit:
    Location = []
    Cost = []
    IsDominated = []
    GridIndex = []
    GridSubIndex = []
    Sol = []
Rabbits = np.array([empty_Rabbit() for i in range(nPop)]).reshape(nPop,1)
X = np.zeros((nPop, nVar))
Rabbit_Location = np.zeros((VarSize))
Rabbit_Energy = np.inf
for i in range(nPop):
    Rabbits[i, 0].Location = np.multiply(np.random.rand(VarSize[0], VarSize[1]),
                                         (VarMax-VarMin) + VarMin)
    print(Rabbits[i, 0].Location)

for i in range(nPop):
    print(Rabbits[i, 0].Location)
Now, the output of both print statements will be identical with distinct rows:
[[0.5392264 0.39375339 0.59483626]]
[[0.53959355 0.91049574 0.58115175]]
[[0.46152304 0.43111977 0.06882631]]
[[0.13693784 0.82075653 0.49488394]]
[[0.06901317 0.34133836 0.91453956]]
[[0.5392264 0.39375339 0.59483626]]
[[0.53959355 0.91049574 0.58115175]]
[[0.46152304 0.43111977 0.06882631]]
[[0.13693784 0.82075653 0.49488394]]
[[0.06901317 0.34133836 0.91453956]]
scipy.io.loadmat uses structured arrays when loading structs from MATLAB .mat files, but that's probably more advanced than you need here.
I think you need to create a set of numpy arrays, rather than try for some sort of class or more complicated structure.
empty_Rabbit.Location=[];
empty_Rabbit.Cost=[];
empty_Rabbit.Sol=[];
empty_Rabbit.IsDominated=[];
empty_Rabbit.GridIndex=[];
empty_Rabbit.GridSubIndex=[];
becomes instead
location = np.zeros(nPop)
cost = np.zeros(nPop)
sol = np.zeros(nPop)
isDominated = np.zeros(nPop) # or bool dtype?
gridIndex = np.zeros(nPop)
gridSubIndex = np.zeros(nPop)
np.zeros makes a float array; for some of those you might want np.zeros(nPop, dtype=int) (if used as index).
rabbit= np.zeros(nPop, dtype=[('location',float), ('cost',float),('sol',float), ....])
could be used to make a structured array, but you'll need to read more about those.
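As a rough illustration only (the field names and dtypes are my own choice, not taken from the MOHHO code), such a structured array could look like this:

import numpy as np

nPop, nVar = 5, 3
rabbit_dtype = np.dtype([
    ('location', float, (nVar,)),   # one row of decision variables per rabbit
    ('cost', float),
    ('is_dominated', bool),
    ('grid_index', int),
])
rabbits = np.zeros(nPop, dtype=rabbit_dtype)
rabbits['location'] = np.random.rand(nPop, nVar)  # fill all locations at once
print(rabbits['location'].shape)                  # (5, 3)
print(rabbits[0])                                 # one record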
MATLAB lets you use iteration freely as in
for i=1:nPop
    Rabbits(i).Location = rand(VarSize).*(VarMax-VarMin)+VarMin;
    X(i,:) = rand(VarSize).*(VarMax-VarMin)+VarMin;
end
but that's slow in numpy (as it used to be in MATLAB before JIT compilation). It's better to use whole-array calculations:
location = np.random.rand(nPop, nVar) * (VarMax - VarMin) + VarMin
will make a (nPop, nVar) 2d array, not the 1d array that np.zeros(nPop) created.
Looks like X could be created in the same way (without iteration).
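For example, a small sketch of both arrays created without a loop (using nVar for the per-rabbit vector length, since VarSize in the MATLAB code is just [1 nVar]):

import numpy as np

nPop, nVar = 5, 3
VarMin, VarMax = 0, 1

location = np.random.rand(nPop, nVar) * (VarMax - VarMin) + VarMin  # (5, 3)
X = np.random.rand(nPop, nVar) * (VarMax - VarMin) + VarMin         # (5, 3)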
I'm trying to append items to an array by their name. Here is my code:
def create_filters(_shapes):
    """create masked filters to use on the image."""
    row, col = _shapes
    row, col = int(row/2), int(col/2)
    urow, drow = row+10, row-10
    ucol, dcol = col+10, col-10
    masks = []
    mask1 = np.ones(_shapes, np.uint16)
    mask1[:,dcol:ucol] = 0
    # masks.append(mask1)
    mask2 = np.ones(_shapes, np.uint16)
    mask2[drow:urow,:] = 0
    # masks.append(mask2)
    mask3 = cv.bitwise_not(mask1)
    # masks.append(mask3)
    mask4 = cv.bitwise_not(mask2)
    # masks.append(mask4)
    mask5 = np.zeros(_shapes, np.uint16)
    mask5[drow:urow,dcol:ucol] = 255
    # masks.append(mask5)
    mask6 = cv.bitwise_not(mask5)
    # masks.append(mask6)
    mask7 = np.copy(mask1)
    mask7[drow:urow,:] = 0
    # masks.append(mask7)
    mask8 = cv.bitwise_not(mask7)
    # masks.append(mask8)
    for i in range(1, 9):
        m = globals()[f'mask{i}']
        masks.append(m)
    return masks
So basically I want to append all of mask1-mask8 to masks. Now, since mask1-mask8 are local variables, I can't use globals().
Is there a better way to do it? I know I can initialize the list like this:
masks = [mask1, mask2,...]
but I'm asking specifically about using a string as the variable name in a for loop.
There are several ways to cut this, but to address your specific question:
As per my comment, you can simply use locals() in place of globals().
Another option is to use eval, i.e.:
for i in range(1, 9):
    m = eval(f"mask{i}")
    masks.append(m)
The eval function parses and evaluates any Python expression using the global and local namespaces. See the Python docs for more information on eval.
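That said, here is a minimal sketch of an alternative that avoids name lookups entirely, by appending each mask to the list as soon as it is created (this assumes the same numpy/cv2 setup as the question; only the first few masks are shown):

import numpy as np
import cv2 as cv

def create_filters(shape):
    row, col = shape[0] // 2, shape[1] // 2
    urow, drow = row + 10, row - 10
    ucol, dcol = col + 10, col - 10

    masks = []

    mask1 = np.ones(shape, np.uint16)
    mask1[:, dcol:ucol] = 0
    masks.append(mask1)

    mask2 = np.ones(shape, np.uint16)
    mask2[drow:urow, :] = 0
    masks.append(mask2)

    masks.append(cv.bitwise_not(mask1))   # mask3
    masks.append(cv.bitwise_not(mask2))   # mask4
    # ...and so on: build mask5-mask8 and append each one right after it exists
    return masks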
I've recently been working on some code in Python to simulate a 2-dimensional U(1) gauge theory using Monte Carlo methods. Essentially I have an n by n by 2 array (call it Link) of unitary complex numbers (their magnitude is one). I randomly select an element of my Link array and propose a random change to the number at that site. I then compute the resulting change in the action that would occur due to that change, and accept the change with probability min(1, exp(-dS)), where dS is the change in the action. The code for the iterator is as follows:
def iteration(j1, B0):
    global Link
    Staple = np.zeros((2), dtype=complex)
    for i0 in range(0, j1):
        x1 = np.random.randint(0, n)
        y1 = np.random.randint(0, n)
        u1 = np.random.randint(0, 2)  # pick one of the two link directions (randint's upper bound is exclusive)
        Linkrxp1 = np.roll(Link, -1, axis=0)
        Linkrxn1 = np.roll(Link, 1, axis=0)
        Linkrtp1 = np.roll(Link, -1, axis=1)
        Linkrtn1 = np.roll(Link, 1, axis=1)
        Linkrxp1tn1 = np.roll(np.roll(Link, -1, axis=0), 1, axis=1)
        Linkrxn1tp1 = np.roll(np.roll(Link, 1, axis=0), -1, axis=1)
        Staple[0] = Linkrxp1[x1,y1,1]*Linkrtp1[x1,y1,0].conj()*Link[x1,y1,1].conj() + Linkrxp1tn1[x1,y1,1].conj()*Linkrtn1[x1,y1,0].conj()*Linkrtn1[x1,y1,1]
        Staple[1] = Linkrtp1[x1,y1,0]*Linkrxp1[x1,y1,1].conj()*Link[x1,y1,0].conj() + Linkrxn1tp1[x1,y1,0].conj()*Linkrxn1[x1,y1,1].conj()*Linkrxn1[x1,y1,0]
        uni = unitary()
        Linkprop = uni*Link[x1,y1,u1]
        dE3 = (Linkprop - Link[x1,y1,u1])*Staple[u1]
        dE1 = B0*np.real(dE3)
        d1 = np.random.binomial(1, np.minimum(np.exp(dE1), 1))
        d = np.random.uniform(low=0, high=1)
        if d1 >= d:
            Link[x1,y1,u1] = Linkprop
        else:
            Link[x1,y1,u1] = Link[x1,y1,u1]
At the beginning of the program I call a routine called "randommatrix" to generate K random unitary complex numbers which have small imaginary parts and store them (together with their conjugates) in an array called Cnum of length 2K. In the same routine I also go through my Link array and set each element to a random unitary complex number. The code is listed below.
def randommatrix():
    global Cnum
    global Link
    for i1 in range(0, K):
        C1 = np.random.normal(0, 1)
        Cnum[i1] = np.cos(C1) + 1j*np.sin(C1)
        Cnum[i1+K] = np.cos(C1) - 1j*np.sin(C1)
    for i3, i4 in itertools.product(range(0, n), range(0, n)):
        C2 = np.random.uniform(low=0, high=2*np.pi)
        Link[i3,i4,0] = np.cos(C2) + 1j*np.sin(C2)
        C2 = np.random.uniform(low=0, high=2*np.pi)
        Link[i3,i4,1] = np.cos(C2) + 1j*np.sin(C2)
The following routine is used during the iteration routine to get a random complex number with a small imaginary part (by retrieving a random element of the Cnum array we generated earlier).
def unitary():
    I1 = np.random.randint(0, 2*K-1)
    mat = Cnum[I1]
    return mat
Here is an example of what the iteration routine would be used for. I've written a routine called Plq, which calculates the mean plaquette (the real part of a 1-by-1 closed loop of link variables) for a given B0. The iteration routine is used to generate new field configurations which are independent of previous configurations. After we get a new field configuration we calculate the plaquette for that configuration. We then repeat this process j1 times using a while loop, and at the end we end up with the mean plaquette.
def Plq(j1, B0):
    i5 = 0
    Lboot = np.zeros(j1)
    while i5 < j1:
        iteration(25000, B0)
        Linkrxp1 = np.roll(Link, -1, axis=0)
        Linkrtp1 = np.roll(Link, -1, axis=1)
        c0 = np.real(Link[:,:,0]*Linkrxp1[:,:,1]*Linkrtp1[:,:,0].conj()*Link[:,:,1].conj())
        Lboot[i5] = np.mean(c0)  # mean plaquette of this configuration
        i5 = i5 + 1
    return np.mean(Lboot)  # mean plaquette over all configurations
We need to define some variables before we run anything, so here are the initial variables which I define before defining any routines:
K = 20000
n = 50
a = 1.0
Link = np.zeros((n,n,2),dtype = complex)
Cnum = np.zeros((2*K), dtype = complex)
This code works, but it is painfully slow. Is there a way that I can use multiprocessing or something to speed this up?
You should use Cython and C data types; it's built for fast computation.
You could use multiprocessing, potentially, in one of two cases.
If you have one object that multiple processes would need to share, you would need to use a Manager (see the multiprocessing documentation), Lock, and Array to share the object between processes. However, there is no guarantee this will result in increased speed, since each process needs to lock Link to guarantee a correct update, assuming each update is affected by all elements in Link (if a process modifies an element at the same time another process is computing an update for another element, that update wouldn't be based on the most current information).
If your updates do not take into account the state of the other elements, i.e. each one only cares about its own element, then you could break your Link array into segments, divvy the chunks out to several processes in a process pool, and when done combine the segments back into one array. This would certainly save time, and you wouldn't have to use any additional multiprocessing mechanisms.
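A minimal sketch of that second approach, purely for illustration (update_chunk is a hypothetical stand-in for an update that only needs its own chunk; it is not the Metropolis update from the question, which couples neighbouring sites):

import numpy as np
from multiprocessing import Pool

def update_chunk(chunk):
    # placeholder element-wise update that needs nothing outside the chunk
    return chunk * np.exp(1j * 0.01)

if __name__ == "__main__":
    n = 50
    Link = np.exp(1j * np.random.uniform(0, 2*np.pi, size=(n, n, 2)))
    chunks = np.array_split(Link, 4, axis=0)        # split along the first axis
    with Pool(processes=4) as pool:
        updated = pool.map(update_chunk, chunks)    # update chunks in parallel
    Link = np.concatenate(updated, axis=0)          # reassemble the full array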
I'm trying to load a large number of files saved in the EnSight Gold format into a numpy array. In order to do this read, I've written my own class libvec which reads the geometry file and then preallocates the arrays which Python will use to save the data, as shown in the code below.
N = len(file_list)
# Create the class object and read geometry file
gvec = vec.libvec(os.path.join(current_dir,casefile))
x,y,z = gvec.xyz()
# Preallocate arrays
U_temp = np.zeros((len(y),len(x),N),dtype=np.dtype('f4'))
V_temp = np.zeros((len(y),len(x),N),dtype=np.dtype('f4'))
u_temp = np.zeros((len(x),len(x),N),dtype=np.dtype('f4'))
v_temp = np.zeros((len(x),len(y),N),dtype=np.dtype('f4'))
# Read the individual files into the previously allocated arrays
for idx, current_file in enumerate(file_list):
    U, V = gvec.readvec(os.path.join(current_dir, current_file))
    U_temp[:,:,idx] = U
    V_temp[:,:,idx] = V
    del U, V
However this takes seemingly forever so I was wondering if you have any idea how to speed up this process? The code reading the individual files into the array structure can be seen below:
def readvec(self, filename):
    # We are supposing for the moment that the naming scheme PIV__vxy.case / PIV__vxy.geo
    # does not change; should that not be the case, appropriate changes have to be made
    # to the corresponding file.
    data_temp = np.loadtxt(filename, dtype=np.dtype('f4'), delimiter=None, converters=None, skiprows=4)
    # U values
    for i in range(len(self.__y)):          # y value counter
        for j in range(len(self.__x)):      # x value counter
            self.__U[i, j] = data_temp[i*len(self.__x) + j]
    # V values
    for i in range(len(self.__y)):          # y value counter
        for j in range(len(self.__x)):      # x value counter
            self.__V[i, j] = data_temp[len(self.__x)*len(self.__y) + i*len(self.__x) + j]
    # W values (only present for 3D data)
    if len(self.__z) > 1:
        for i in range(len(self.__y)):          # y value counter
            for j in range(len(self.__x)):      # x value counter
                self.__W[i, j] = data_temp[2*len(self.__x)*len(self.__y) + i*len(self.__x) + j]
        return self.__U, self.__V, self.__W
    else:
        return self.__U, self.__V
Thanks a lot in advance and best regards,
J
It's a bit hard to say without any test input/output to compare against, but I think this would give you the same U/V arrays as your nested for loops in readvec. This method should be considerably faster than the for loops.
U = data[:size_x*size_y].reshape(size_y, size_x)
V = data[size_x*size_y:2*size_x*size_y].reshape(size_y, size_x)
Returning these directly into U_temp and V_temp should also help. Right now you're making roughly three copies of your data to get them into U_temp and V_temp:
From file to data_temp
From data_temp to self.__U / self.__V
From self.__U / self.__V into U_temp / V_temp
Although my guess is that the two nested for loops, accessing one element at a time, are what is causing the slowness.
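For instance, a minimal standalone sketch of that idea (read_vec_file is a hypothetical helper, not part of the libvec class; the slicing assumes the file layout implied by the loops in the question):

import numpy as np

def read_vec_file(filename, nx, ny, nz=1):
    # load once, then slice and reshape instead of copying element by element
    data = np.loadtxt(filename, dtype=np.dtype('f4'), skiprows=4)
    U = data[:nx*ny].reshape(ny, nx)
    V = data[nx*ny:2*nx*ny].reshape(ny, nx)
    if nz > 1:
        W = data[2*nx*ny:3*nx*ny].reshape(ny, nx)
        return U, V, W
    return U, V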
I've only started Python recently but am stuck on a problem.
# function that tells how to read the urls and how to process the data the
# way I need it.
def htmlreader(i):
    # makes variable websites because it is used in a loop.
    pricedata = urllib2.urlopen(
        "http://website.com/" + (",".join(priceids.split(",")[i:i + 200]))).read()
    # here my information processing begins but that is fine.
    pricewebstring = pricedata.split("},{")
    # results in [[1234,2345,3456],[3456,4567,5678]] for example.
    array1 = [re.findall(r"\d+", a) for a in pricewebstring]
    # writes obtained array to my text file
    itemtxt2.write(str(array1) + '\n')

i = 0
while i <= totalitemnumber:
    htmlreader(i)
    i = i + 200
See the comments in the script as well.
This is in a loop and each call gives me an array (defined by array1). Because I print this to a txt file, it results in a txt file with separate arrays. I need one big array, so the results of htmlreader(i) need to be merged. So my output is something like:
[[1234,2345,3456],[3456,4567,5678]]
[[6789,4567,2345],[3565,1234,2345]]
But I want:
[[1234,2345,3456],[3456,4567,5678],[6789,4567,2345],[3565,1234,2345]]
Any ideas how I can approach this?
Since you want to gather all the elements in a single list, you can simply collect them in another list that you pass in and extend, like this:
def htmlreader(i, result):
    ...
    result.extend([re.findall(r"\d+", a) for a in pricewebstring])

i, result = 0, []
while i <= totalitemnumber:
    htmlreader(i, result)
    i = i + 200
itemtxt2.write(str(result) + '\n')
In this case, the result created by re.findall (a list) is added to the result list. Finally, you are writing the entire list as a whole to the file.
If the method shown above is confusing, then change it like this:
def htmlreader(i):
    ...
    return [re.findall(r"\d+", a) for a in pricewebstring]

i, result = 0, []
while i <= totalitemnumber:
    result.extend(htmlreader(i))
    i = i + 200