calculating mean of several numpy masked arrays (masked_all) - python

First of all, I'm new to Python and programming, but you guys have already helped me a lot, so thanks! Now I've run into a problem I haven't found an answer for yet:
I have data from several plates, where the data represents the pressure on each plate at a large number of different spots. The thing is, these plates aren't perfectly round because of the sensors measuring the pressure, and sometimes these sensors even produce an error, so I don't have any data at some spots within a plate.
When I just have to plot one plate, I'll do it like that:
import numpy.ma as ma

matrix = ma.masked_all((160, 65), float)
for x in range(len(plate.X)):
    matrix[(plate.Y[x], plate.X[x])] = data.index(plate.measurementname[x])
image.pcolormesh(matrix, min, max)
This works fine. Now that I have several plates, I'd like to plot the mean pressure at each spot. Because I don't know of any mean function, I thought of adding all the plates together and dividing by the number of plates... I tried the following:
import numpy.ma as ma

meanmatrix = ma.masked_all((160, 65), float)
for plate in plateslist:
    matrix = ma.masked_all((160, 65), float)
    for x in range(len(plate.X)):
        matrix[(plate.Y[x], plate.X[x])] = data.index(plate.measurementname[x])
    meanmatrix += matrix
meanmatrix = meanmatrix / len(plateslist)
image.pcolormesh(meanmatrix, min, max)
This works pretty well, but there's one problem I can't solve. As I said, sometimes some plates didn't get all the data, so there's a "hole" at some spots in the plot. Now my meanmatrix has a hole wherever any one of the plates had a hole, even if all the others had data at that spot.
How can I make sure I won't get these holes, or is there an even smoother way of getting my "meanmatrix"? (I hope my question is clear enough...)
Edit:
The problem is not that I don't get the mean of the data; that actually works (well, I don't like how I did it, but it works). The problem is that I get these "holes" I described before. That's what bothers me.

EDIT: Sorry, I misinterpreted the question. Try this:
allplates = ma.masked_all((160, 65, numplates))
# fill in allplates
meanplate = allplates.mean(axis=2)
This will compute the mean over the last dimension of the array, i.e., average the plates together. Missing values are ignored.
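As a minimal sketch of the fill-in step, reusing the loop and the names from the question (plateslist, plate.X/plate.Y, data.index and image are the question's own identifiers), it might look like this:

import numpy.ma as ma

numplates = len(plateslist)
allplates = ma.masked_all((160, 65, numplates), float)
for i, plate in enumerate(plateslist):
    for x in range(len(plate.X)):
        allplates[plate.Y[x], plate.X[x], i] = data.index(plate.measurementname[x])

# mean over the plate axis: a spot masked on some plates still averages the rest
meanplate = allplates.mean(axis=2)
image.pcolormesh(meanplate, min, max)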
Earlier answer: You can take the mean of a masked array, and it will ignore the missing values:
>>> X = ma.masked_all((160, 65))
>>> X.mean()
masked
>>> X[0, 0] = 1
>>> X.mean()
1.0
Try to avoid using matrix as a variable name, though, because it also refers to a NumPy data structure.

OK, I got an answer:
import numpy.ma as ma

allplates = ma.masked_all((160, 65), float)
for plate in plateslist:
    for x in range(len(plate.X)):
        allplates[(plate.Y[x], plate.X[x])] += data.index(plate.measurementname[x])
allplates = allplates / len(plateslist)
image.pcolormesh(allplates, min, max)
This actually works! So I guess there was a mistake when adding two masked_all arrays... ("Stupid is as stupid does")
If someone has a better approach for getting the mean of all plates at each single spot, it would be nice to read it.

Related

How many times can you fit one array into a specific-size array (python)

I want to create a randomized array that contains another array a few times. So if:
Big_Array = np.zeros((5, 5))
Inner_array = np.array([[1, 1, 1],
                        [2, 1, 2]])
And if we want 2 Inner_arrays, it could look like:
Big_Array = [[1, 2, 0, 0, 0],
             [1, 1, 0, 0, 0],
             [1, 2, 0, 0, 0],
             [0, 0, 2, 1, 2],
             [0, 0, 1, 1, 1]]
I would like to write code that will
A. tell whether the bigger array can fit the required number of inner arrays, and
B. randomly place the inner array (in random rotations) x times in the big array without overlap.
Thanks in advance!
If I understood correctly, you'd like to sample valid tilings of a square that contain a specified number of integral-sided rectangles.
This is a special case of the exact cover problem, which is NP-complete, so in general I'd expect there to be no really efficient solution, but you could solve it using Knuth's Algorithm X. It would take a while to code yourself, though.
There are also a few implementations of DLX online, such as this one from Code Review SE (not sure what the copyright on that is, though).
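If an exact-cover solver is overkill for your board sizes, a simple rejection-sampling sketch may be enough: try random positions and rotations, and give up after a fixed number of attempts. This is plainly not Algorithm X, and all the names here are illustrative:

import numpy as np

def place_randomly(big_shape, inner, count, max_tries=1000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    big = np.zeros(big_shape, dtype=inner.dtype)
    occupied = np.zeros(big_shape, dtype=bool)
    placed, tries = 0, 0
    while placed < count and tries < max_tries:
        tries += 1
        tile = np.rot90(inner, k=rng.integers(4))  # random rotation
        h, w = tile.shape
        if h > big_shape[0] or w > big_shape[1]:
            continue
        r = rng.integers(big_shape[0] - h + 1)     # random top-left corner
        c = rng.integers(big_shape[1] - w + 1)
        if occupied[r:r+h, c:c+w].any():           # overlap check
            continue
        big[r:r+h, c:c+w] = tile
        occupied[r:r+h, c:c+w] = True
        placed += 1
    return big if placed == count else None        # None means it didn't fit

board = place_randomly((5, 5), np.array([[1, 1, 1], [2, 1, 2]]), 2)

Note this can fail on tight packings where a systematic search would succeed, which is exactly the case where Algorithm X earns its keep.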

How does scipy.misc.toimage change the image domain?

There's some code that I've been working on, and I ran into a part that I don't understand. I'd really appreciate it if someone explained to me how it works.
The first line normalizes the image to [0, 1], but the third line maps it to e.g. [2, 89] (it depends on the input image).
1. My question is about line 3: how does it map the image to the new domain?
2. If I want to take it back to [0, 255], how do I undo line 3 (i.e., normalize it and then recover the original image)?
img = img.astype(np.float32) / 255.0                                # line 1
sc = np.power(np.power(2.0, -3), 0.5)                               # line 2
img = scipy.misc.toimage(sc * np.squeeze(img), cmin=0.0, cmax=1.0)  # line 3
img = np.asarray(img)                                               # line 4 (the original read np.asarray(s), an undefined name)
So finally, after a few days, I figured it out. I'm answering it here in case someone has the same question :)
It works like scipy.misc.bytescale, and the math behind it is as below:
((I - Cmin) / (Cmax - Cmin)) * 255
The I parameter is the value of the pixel. To test it, make a small matrix (e.g. 3 by 3) and change Cmax and Cmin; I'm sure you'll understand it better.
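A small numpy sketch of that mapping and its inverse, assuming Cmin/Cmax are the cmin/cmax bounds passed to toimage:

import numpy as np

img = np.array([[0.1, 0.5], [0.8, 1.0]], dtype=np.float32)
cmin, cmax = 0.0, 1.0

scaled = (img - cmin) / (cmax - cmin) * 255.0     # forward: [cmin, cmax] -> [0, 255]
restored = scaled / 255.0 * (cmax - cmin) + cmin  # inverse: [0, 255] -> [cmin, cmax]

Note that the real conversion also rounds to uint8, so the inverse recovers the original only up to that quantization.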

Numpy mean for a big array dataset using a for loop in Python

I have a big dataset in array form, and it's arranged like this:
[figure: rainfall amounts arranged in array form]
The average (mean) for each latitude and longitude along axis=0 is computed using these declarations:
Lat = data[:,0]
Lon = data[:,1]
rain1 = data[:,2]
rain2 = data[:,3]
...
rain44 = data[:,44]
rainT = [rain1, rain2, rain3, rain4, ..., rain44]
mean = np.mean(rainT)
The result was awesome, but it requires a lot of computation time, and I'd like to use a for loop to ease the calculation. At the moment, the script I use looks like this:
mean = []
lat = data[:,0]
lon = data[:,1]
for x in range(2,46):
    rainT = data[:,x]
    mean = np.mean(rainT, axis=0)
print mean
But a weird result appears. Anyone?
First, you probably meant the for loop to add up the subarrays rather than keep replacing rainT with other slices of the array. Only the last assignment matters, so the code averages just that one subarray, rainT = data[:,45], and it doesn't divide by the correct number of original elements to compute the overall average. Both of these mistakes contribute to the weird result.
Second, numpy should be able to average elements faster than a Python for loop can do it since that's just the kind of thing that numpy is designed to do in optimized native code.
Third, your original code copies a bunch of subarrays into a Python List, then asks numpy to average that. You should get much faster results by asking numpy to sum the relevant subarray without making a copy, something like this:
rainT = data[:,2:] # this gets a view onto data[], not a copy
mean = np.mean(rainT)
That computes an average over all the rainfall values, like your original code.
If you want an average for each latitude or some such, you'll need to do it differently. You can average over an array axis, but latitude and longitude aren't axes in your data[].
Thanks, friends, you're giving me such inspiration. Here is the working script based on the ideas from @Jerry101, though I decided NOT to use a Python loop. The new declarations look like this:
lat1 = data[:,0]
lon1 = data[:,1]
rainT = data[:,2:46]                # THIS IS THE STEP THAT I WAS MISSING EARLIER
mean = np.mean(rainT, axis=1) * 24  # make average daily rainfall for each lat and lon
mean2 = np.array([lat1, lon1, mean])
mean2 = mean2.T
np.savetxt('average-daily-rainfall.dat2', mean2, fmt='%9.3f')
And finally, the result is exactly the same as that of the program written in Fortran.

What simple filter could I use to de-noise my data?

I'm processing some experimental data in Python 3. The data (raw_data in my code) is pretty noisy:
One of my goals is to find the peaks, and for that I'd like to filter out the noise. Based on what I found in the documentation of SciPy's signal module, the theory of filtering seems really complicated, and unfortunately I have zero background in it. Of course I'll have to learn it sooner or later, and I intend to, but right now the payoff isn't worth the time (and learning filter theory isn't the purpose of my work), so I shamefully copied the code from Lyken Syu's answer without a chance of understanding the background:
import numpy as np
from scipy import signal as sg
from matplotlib import pyplot as plt
# [...] code, resulting in this:
raw_data = [arr_of_xvalues, arr_of_yvalues] # xvalues are in decreasing order
# <magic beyond my understanding>
n = 20 # the larger n is, the smoother the curve will be
b = [1.0 / n] * n
a = 2
filt = sg.lfilter(b, a, raw_data)
filtered = sg.lfilter(b, a, filt)
# <\magic>
plt.plot(filtered[0], filtered[1], ".")
plt.show()
It kind of works:
What concerns me is the curve the filter adds from 0 to the beginning of my dataset. I guess it's a property of the IIR filter I used, but I don't know how to prevent it. Also, I couldn't get other filters to work so far. I need to use this code on other, similar experimental results, so I need a somewhat more general solution than e.g. cutting out all points with y < 10.
Is there a better (possibly simpler) way, or a choice of filter that is easy to implement without a serious theoretical background?
How, if at all, could I prevent my filter from adding that curve to my data?
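For what it's worth, here is a hedged sketch of two standard ways to tame that startup transient, assuming the moving-average coefficients from the question (note that a plain moving average is conventionally written with a = 1; the a = 2 above halves the output):

import numpy as np
from scipy import signal as sg

n = 20
b = [1.0 / n] * n
a = [1.0]  # conventional denominator for a pure moving average

y = raw_data[1]  # filter only the y-values; leave the x-axis untouched

# Option 1: initialize the filter state from the first sample instead of zero,
# so the output doesn't ramp up from 0 at the start.
zi = sg.lfilter_zi(b, a) * y[0]
smoothed, _ = sg.lfilter(b, a, y, zi=zi)

# Option 2: zero-phase filtering; runs forward and backward, so there is no
# lag and much milder edge effects (needs more than ~3*n samples).
smoothed2 = sg.filtfilt(b, a, y)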

Image Sharpening Algorithm coded in Python

I was hoping someone could take a look at this sharpening algorithm I devised using Pillow and explain to me why it does not give a desirable sharpening effect on images. It really just looks like crap when applied to my sample images. I've worked on this for several days, but haven't made much progress in improving either the quality of the sharpening effect or the efficiency of the algorithm itself. Ideally, I'm looking for a subtle sharpening effect that can be scaled easily. I really appreciate any help or insight that can be provided. Here are the sources I used to come up with this algorithm:
http://lodev.org/cgtutor/filtering.html#Sharpen
http://www.foundalis.com/res/imgproc.htm
from PIL import Image
import os

os.chdir(r"C:")

filter1 = 9
filter2 = -1

def sharpen2(photo, height, width, filter1, filter2):
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            (r, g, b) = photo.getpixel((x, y))
            r = int(r * filter1)
            g = int(g * filter1)
            b = int(b * filter1)
            (r1, g1, b1) = photo.getpixel((x - 1, y - 1))
            r1 = int(r1 * filter2)
            g1 = int(g1 * filter2)
            b1 = int(b1 * filter2)
            (r2, g2, b2) = photo.getpixel((x, y - 1))
            r2 = int(r2 * filter2)
            g2 = int(g2 * filter2)
            b2 = int(b2 * filter2)
            (r3, g3, b3) = photo.getpixel((x + 1, y - 1))
            r3 = int(r3 * filter2)
            g3 = int(g3 * filter2)
            b3 = int(b3 * filter2)
            (r4, g4, b4) = photo.getpixel((x - 1, y))
            r4 = int(r4 * filter2)
            g4 = int(g4 * filter2)
            b4 = int(b4 * filter2)
            (r5, g5, b5) = photo.getpixel((x + 1, y))
            r5 = int(r5 * filter2)
            g5 = int(g5 * filter2)
            b5 = int(b5 * filter2)
            (r6, g6, b6) = photo.getpixel((x - 1, y + 1))
            r6 = int(r6 * filter2)
            g6 = int(g6 * filter2)
            b6 = int(b6 * filter2)
            (r7, g7, b7) = photo.getpixel((x, y + 1))
            r7 = int(r7 * filter2)
            g7 = int(g7 * filter2)
            b7 = int(b7 * filter2)
            (r8, g8, b8) = photo.getpixel((x + 1, y + 1))
            r8 = int(r8 * filter2)
            g8 = int(g8 * filter2)
            b8 = int(b8 * filter2)
            rfPixel = r + r1 + r2 + r3 + r4 + r5 + r6 + r7 + r8
            if rfPixel > 255:
                rfPixel = 255
            elif rfPixel < 0:
                rfPixel = 0
            gfPixel = g + g1 + g2 + g3 + g4 + g5 + g6 + g7 + g8
            if gfPixel > 255:
                gfPixel = 255
            elif gfPixel < 0:
                gfPixel = 0
            bfPixel = b + b1 + b2 + b3 + b4 + b5 + b6 + b7 + b8
            if bfPixel > 255:
                bfPixel = 255
            elif bfPixel < 0:
                bfPixel = 0
            photo.putpixel((x, y), (rfPixel, gfPixel, bfPixel))
    return photo

photo = Image.open("someImage.jpg").convert("RGB")
photo2 = photo.copy()
height = photo.height
width = photo.width
x = sharpen2(photo, height, width, filter1, filter2)
One problem is likely that you're saving the results to the same image you're reading pixel data from. By the time you get to a pixel, some of its neighbors have already been replaced by filtered data and some haven't. The error is small at first but adds up.
To fix it: save the results to a different image, e.g. filtered_photo.putpixel(...). You'd have to create a blank filtered_photo first.
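A minimal sketch of that fix (filtered_photo is a name introduced here for illustration):

from PIL import Image

filtered_photo = Image.new("RGB", photo.size)  # blank output image, same size
# ...then, inside the loops, write to the copy instead of the source:
# filtered_photo.putpixel((x, y), (rfPixel, gfPixel, bfPixel))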
Another big problem (mentioned by @Mark Ransom) is that you probably want filter1 = 1.1 and filter2 = -0.1, or something along those lines. Using 9 and -1 will make most values come out of range.
A better implementation: don't loop over each pixel in Python code; use numpy to process the whole image at once, which will be much faster (and shorter). The usual implementation of sharpening is unsharp masking: subtract the Gaussian-filtered image from the original image, which is a one-liner using numpy and ndimage (or skimage).
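A sketch of that unsharp-mask idea using scipy.ndimage; the sigma and amount values are illustrative, not tuned:

import numpy as np
from PIL import Image
from scipy import ndimage

photo = Image.open("someImage.jpg").convert("RGB")
img = np.asarray(photo, dtype=np.float32)

sigma, amount = 2.0, 1.0
blurred = ndimage.gaussian_filter(img, sigma=(sigma, sigma, 0))  # don't blur across color channels
sharpened = img + amount * (img - blurred)                       # original plus scaled detail
sharpened = np.clip(sharpened, 0, 255).astype(np.uint8)

Image.fromarray(sharpened).save("sharpened.jpg")

Increasing amount strengthens the effect, which gives the easily scaled "subtle sharpening" the question asks for.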
