I am trying to develop code that solves a Dirichlet boundary value problem (Poisson's equation). The main issue is that I have no idea how to write a "grid generator" to generate my domain, from which to extract my matrices. The Python programming course I took never touched on anything like this. I will leave my domain and the discretization of the differential operator below. Any help to get me started would be much appreciated. Hope this explanation was clear!
If it is too small to see, the y axis goes from 0 to 4 and the x axis from 0 to 5.
A grid can be thought of as a list wherein each element in the list is itself a list. This nets you a two-dimensional matrix-like object:
myList = [[None for i in range(10)] for k in range(10)]
will, for instance, make you a 10x10 grid. You can change the expression to create grids of varying sizes and content (i.e., instead of None substitute your computation, and instead of the range(10) expressions substitute whatever conditions the length of the row/column).
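Building on that, here is a minimal NumPy-based sketch of a grid generator for the rectangular domain described in the question (x from 0 to 5, y from 0 to 4). The uniform spacing `h` is an assumption, since the question does not specify one:

```python
import numpy as np

# Assumed uniform grid spacing; adjust to taste.
h = 0.5
x = np.arange(0.0, 5.0 + h, h)   # 11 points on [0, 5]
y = np.arange(0.0, 4.0 + h, h)   # 9 points on [0, 4]
X, Y = np.meshgrid(x, y)         # X, Y have shape (len(y), len(x))

# Interior points are the unknowns of the linear system; the boundary
# carries the Dirichlet data.
interior = np.zeros(X.shape, dtype=bool)
interior[1:-1, 1:-1] = True
```

From `interior` you can number the unknowns (e.g. with `np.flatnonzero`) and assemble the finite-difference matrix row by row.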
I want to create a randomized array that contains another array a few times.
So if:
Big_Array = np.zeros((5,5))
Inner_array = np.array([[1,1,1],
                        [2,1,2]])
And if we want 2 copies of Inner_array, it could look like:
Big_Array = [[1,2,0,0,0],
             [1,1,0,0,0],
             [1,2,0,0,0],
             [0,0,2,1,2],
             [0,0,1,1,1]]
I would like to write a code that will
A. tell whether the bigger array can fit the required number of inner arrays, and
B. randomly place the inner array (in random rotations) x times in the big array without overlap.
Thanks in advance!
If I understood correctly, you'd like to sample valid tilings of a square which contain a specified amount of integral-sided rectangles.
This is a special case of the exact cover problem, which is NP-complete, so in general I'd expect there to be no really efficient solution, but you could solve it using Knuth's Algorithm X. It would take a while to code yourself, though.
There are also a few implementations of DLX (Dancing Links) online, such as this one from Code Review SE (not sure what the copyright on that is, though).
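If an exact algorithm is overkill, a much simpler greedy approach often works for loose packings: try random positions and rotations, and keep only placements that don't overlap. This is only a hedged sketch, not Algorithm X, and it can fail on tight configurations:

```python
import numpy as np

rng = np.random.default_rng(0)

def place_randomly(big_shape, inner, copies, max_tries=1000):
    """Greedy rejection sampling: try random spots/rotations until
    `copies` non-overlapping placements succeed or we give up."""
    big = np.zeros(big_shape, dtype=int)
    occupied = np.zeros(big_shape, dtype=bool)
    placed = 0
    for _ in range(max_tries):
        if placed == copies:
            break
        tile = np.rot90(inner, k=rng.integers(4))  # random rotation
        h, w = tile.shape
        if h > big_shape[0] or w > big_shape[1]:
            continue
        r = rng.integers(big_shape[0] - h + 1)
        c = rng.integers(big_shape[1] - w + 1)
        if not occupied[r:r+h, c:c+w].any():
            big[r:r+h, c:c+w] = tile
            occupied[r:r+h, c:c+w] = True
            placed += 1
    return big if placed == copies else None  # None = couldn't fit them

result = place_randomly((5, 5), np.array([[1, 1, 1], [2, 1, 2]]), 2)
```

Returning `None` on failure answers part A (for this greedy strategy) as a side effect; only an exact method can prove that no placement exists at all.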
I'm trying to build a banded matrix with 9 components to solve a finite difference problem. My idea is to assemble them first in a dense matrix and then convert it to a sparse diagonal format with some scipy.sparse method.
My code works fine with a normal if condition like:
for l, c in enumerate(node_n):
    Pxx[l, 4] = epsi
    if (c[0] + 1, c[1]) in tensor:
        Pxx[l, 5] = psi
but I was wondering whether it is instead possible to write it in a single line like:
Pxx[l,2] = gamma if ((c[0]+1,c[1]+1) in tensor)
I've tried with and without brackets, using square brackets and so on, but I always get an invalid syntax error. I know it's not a big problem, but I would rather have 9 lines of code, one for each component, instead of 9 if statements.
Thank you in advance!
Kind regards
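For what it's worth, a conditional expression in Python always needs an `else` branch, which is why `x = v if cond` alone is a syntax error; the one-liner becomes valid by re-assigning the old value. A minimal self-contained sketch (the values of `tensor`, `gamma`, and the array shapes below are placeholders, not those from the question):

```python
import numpy as np

# Placeholder data standing in for the question's variables.
tensor = {(1, 1), (2, 0)}
gamma = 7.0
Pxx = np.zeros((3, 9))
node_n = [(0, 0), (1, 0), (1, 1)]

for l, c in enumerate(node_n):
    # Valid one-liner: the `else` branch just keeps the current value.
    Pxx[l, 2] = gamma if (c[0] + 1, c[1] + 1) in tensor else Pxx[l, 2]
```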
I'm developing a genetic program, and by now the whole algorithm appears to be fine (albeit slow...).
I'm iterating through lists of real values, one at a time, and then applying a function to each list. The format is something like:
trainingset = [[3.32,55,33,22], [3.322,5,3,223], [23.32,355,33,122], ...]
Where each inner list represents a line in the set and the last item of that list is the result of the regression in that line/individual.
The function I use is something like:
def getfitness(individual, trainingset):
    ...
    for row in trainingset:
        # apply the function `individual` to the inputs of this row
        fitness = fitness + (row[-1] - (result of individual with the parameters of the row))
    fitness = RMS(fitness)
    return fitness
So, what I'd like to know is: is there a way of calculating the function in one go? Are there any libs that can do this? I've been looking at matrices in numpy, but to no avail.
Thanks in advance.
Jorge
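One hedged possibility: if the candidate program is built from array-friendly operations (`+`, `*`, `np.sin`, ...), NumPy broadcasting lets you evaluate it over the whole training set in one vectorized step. The `individual` below is an illustrative placeholder, not an actual evolved program:

```python
import numpy as np

trainingset = np.array([[3.32, 55, 33, 22],
                        [3.322, 5, 3, 223],
                        [23.32, 355, 33, 122]])
# Split inputs from the regression target (last column).
X, y = trainingset[:, :-1], trainingset[:, -1]

def individual(a, b, c):
    # Placeholder candidate; any expression of NumPy ufuncs works here.
    return a + 0.1 * b + c

# One vectorized pass over all rows instead of a Python-level loop.
errors = y - individual(X[:, 0], X[:, 1], X[:, 2])
rms = np.sqrt(np.mean(errors ** 2))
```

The main caveat is that every primitive in the evolved expression must accept arrays; conditionals need `np.where` instead of `if`.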
I have a numpy array points of shape [N,2] which contains the (x,y) coordinates of N points. I'd like to compute the mean distance of every point to all other points using an existing function (which we'll call cmp_dist and which I just use as a black box).
First, a verbose solution in "normal" Python to illustrate what I want to do (written off the top of my head):
mean_dist = []
for i, (x0, y0) in enumerate(points):
    dist = []
    for j, (x1, y1) in enumerate(points):
        if i == j:
            continue
        dist.append(cmp_dist(x0, y0, x1, y1))
    mean_dist.append(np.array(dist).mean())
I already found a "better" solution using a list comprehension (assuming list comprehensions are usually better), which seems to work just fine:
mean_dist = [np.array([cmp_dist(x0, y0, x1, y1)
                       for j, (x1, y1) in enumerate(points) if i != j]).mean()
             for i, (x0, y0) in enumerate(points)]
However, I'm sure there's a much better solution for this in pure numpy, hopefully some function that allows doing an operation for every element using all other elements.
How can I write this code in pure numpy/scipy?
I tried to find something myself, but this is quite hard to google without knowing what such operations are called (my respective math classes were quite a while back).
Edit: Not a duplicate of Fastest pairwise distance metric in python
The author of that question has a 1D array r and is satisfied with what scipy.spatial.distance.pdist(r, 'cityblock') returns (an array containing the distances between all points). However, pdist returns a flat array, that is, it is not clear which distances belong to which point (see my answer).
(Although, as explained in that answer, pdist is what I was ultimately looking for, it doesn't solve the problem as I've specified it in the question.)
Based on @ali_m's comment to the question ("Take a look at scipy.spatial.distance.pdist"), I found a "pure" numpy/scipy solution:
from scipy.spatial.distance import cdist
...
fct = lambda p0,p1: great_circle_distance(p0[0],p0[1],p1[0],p1[1])
mean_dist = np.sort(cdist(points,points,fct))[:,1:].mean(1)
That's definitely an improvement over my list-comprehension "solution".
What I don't really like about this, though, is that I have to sort and slice the array to remove the 0.0 values that result from computing the distance between identical points (basically, that's my way of removing the diagonal entries of the matrix cdist returns).
Note two things about the above solution:
I'm using cdist, not pdist as suggested by @ali_m.
I'm getting back an array of the same size as points, which contains the mean distance from every point to all other points, just as specified in the original question.
pdist unfortunately just returns all these distances in a flat array; that is, the values are detached from the points they refer to, and that link is necessary for the problem as I've described it in the original question.
However, since in the actual problem at hand I only need the mean over the means of all points (which I did not mention in the question), pdist serves me just fine:
from scipy.spatial.distance import pdist
...
fct = lambda p0,p1: great_circle_distance(p0[0],p0[1],p1[0],p1[1])
mean_dist_overall = pdist(points,fct).mean()
This would be the definitive answer if I had asked for the mean of the means, but I purposely asked for the array of per-point means. Because I think there's still room for improvement in the above cdist solution, I won't accept this as THE answer.
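As an aside, the diagonal can also be removed with a boolean mask instead of sorting and slicing. A self-contained sketch using plain Euclidean distance (the `points` below are illustrative, not from the actual problem):

```python
import numpy as np
from scipy.spatial.distance import cdist

points = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])

d = cdist(points, points)                  # (N, N) distance matrix
mask = ~np.eye(len(points), dtype=bool)    # False on the diagonal
# Drop each point's self-distance, then average the remaining N-1 values.
mean_dist = d[mask].reshape(len(points), -1).mean(axis=1)
```

This keeps the per-point ordering without the O(N log N) sort per row.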
Alright, so I am trying to make a program that prompts for a square bin dimension and uses a list of lists to represent the bin, filling it with square blocks. It also prompts for a text file, for example blockList.txt:
3 1 2 1 3
I have a function that splits that up into a list and tries to fill the space of the bin using the First-Fit Decreasing algorithm. The problem is that the function only places the highest-valued item in the list and then stops and prints the grid. Can someone help me figure out why it isn't looping correctly? All help would be much appreciated.
Here is my code:
https://gist.github.com/anonymous/1ac55a8fcb350d0992a4
I'm not 100% sure on Python syntax, but it seems you call your placement() function inside your pack() function before you define placement(). That could be messing you up.
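Since the gist itself isn't visible here, the following is only a hedged sketch of what First-Fit Decreasing placement of square blocks into a square grid can look like: sort blocks largest first, then scan top-left to bottom-right for the first free spot. The bin size 5 and the numeric labels are illustrative assumptions:

```python
def first_fit_decreasing(bin_size, sizes):
    """Place square blocks (side lengths in `sizes`) into a
    bin_size x bin_size grid, largest first; blocks that don't
    fit are silently skipped."""
    grid = [[0] * bin_size for _ in range(bin_size)]

    def fits(r, c, s):
        if r + s > bin_size or c + s > bin_size:
            return False
        return all(grid[r + i][c + j] == 0
                   for i in range(s) for j in range(s))

    def place(r, c, s, label):
        for i in range(s):
            for j in range(s):
                grid[r + i][c + j] = label

    for label, s in enumerate(sorted(sizes, reverse=True), start=1):
        for r in range(bin_size):
            for c in range(bin_size):
                if fits(r, c, s):
                    place(r, c, s, label)
                    break
            else:
                continue  # no spot in this row; try the next one
            break         # placed; move on to the next block

    return grid

grid = first_fit_decreasing(5, [3, 1, 2, 1, 3])
```

Note the key point for the question's looping bug: the placement attempt sits inside the loop over all blocks, so every size gets its own scan of the grid rather than stopping after the first placement.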