Python normal distribution

I have a list of numbers, with the sample mean and SD for these numbers. Right now I am trying to find the numbers that fall outside mean±SD, mean±2SD and mean±3SD.
For example, for the mean±SD part, I wrote the code like this:
ND1 = [np.mean(l)+np.std(l,ddof=1)]
ND2 = [np.mean(l)-np.std(l,ddof=1)]
m=sorted(l)
print(m)
ND68 = []
if ND2 > m and m < ND1:
    ND68.append(m < ND2 and m > ND1)
print(ND68)
Here is my question:
1. Can these numbers be calculated from the list and arranged? If so, which part am I doing wrong? Or is there a package I can use to solve this?

This might help. We will use numpy to grab the values you are looking for. In my example, I create a normally distributed array and then use boolean slicing to return the elements that are outside of +/- 1, 2, or 3 standard deviations.
import numpy as np
# draw normally distributed values (mean 30, SD 10) and cast them to integers
my_array = np.random.normal(loc=30, scale=10, size=100).astype(int)
# find the mean and standard dev
my_mean = my_array.mean()
my_std = my_array.std()
# find numbers outside of 1, 2, and 3 standard dev
# the portion inside the square brackets returns an
# array of True and False values. Slicing my_array
# with the boolean array return only the values that
# are True
out_std_1 = my_array[np.abs(my_array-my_mean) > my_std]
out_std_2 = my_array[np.abs(my_array-my_mean) > 2*my_std]
out_std_3 = my_array[np.abs(my_array-my_mean) > 3*my_std]
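As a quick sanity check (my addition, reusing my_array, my_mean and my_std from the snippet above), you can flip the comparison to count how many values fall within each band; for roughly normal data the fractions should be near the 68-95-99.7 rule:
for k in (1, 2, 3):
    inside = np.abs(my_array - my_mean) <= k * my_std
    print(k, inside.mean())  # fraction inside; expect roughly 0.68, 0.95, 0.997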

You are on the right track there. You know the mean and standard deviation of your list l, though I'm going to call it something a little less ambiguous, say, samplePopulation.
Because you want to do this for several intervals of standard deviation, I recommend crafting a small function. You can call it multiple times without too much extra work. Also, I'm going to use a list comprehension, which is just a for loop in one line.
import numpy as np

def filter_by_n_std_devs(samplePopulation, numStdDevs):
    # you mostly got this part right, no need to put them in lists though
    mean = np.mean(samplePopulation)  # no brackets needed here
    std = np.std(samplePopulation, ddof=1)  # or here; ddof=1 matches the sample SD you used
    band = numStdDevs * std
    # this is the list comprehension
    filteredPop = [x for x in samplePopulation if x < mean - band or x > mean + band]
    return filteredPop

# now call your function with however many std devs you want
filteredPopulation = filter_by_n_std_devs(samplePopulation, 1)
print(filteredPopulation)
Here's a translation of the list comprehension (based on your use of append, it looks like you may not know what these are; otherwise feel free to ignore).
# remember that you provide the variable samplePopulation
# the above list comprehension
filteredPop = [x for x in samplePopulation if x < mean - band or x > mean + band]
# is equivalent to this:
filteredPop = []
for num in samplePopulation:
    if num < mean - band or num > mean + band:
        filteredPop.append(num)
So to recap:
You don't need to make a list object out of your mean and std calculations
The function call lets you plug in your samplePopulation and any number of standard deviations you want without having to go in and manually change the value
List comprehensions are one line for loops, more or less, and you can even do the filtering you want right inside it!
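Putting it together with a small, hypothetical sample list (values invented for illustration):
samplePopulation = [2, 4, 4, 4, 5, 5, 7, 9, 30]  # 30 is an obvious outlier
for n in (1, 2, 3):
    print(n, filter_by_n_std_devs(samplePopulation, n))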


looping through complicated nested dictionary

I have a rather complex list of dictionaries with nested dictionaries and arrays. I am trying to figure out a way to either
make the list of data less complicated and then loop through the raster points, or
find a way to loop through the array of raster points as is.
What I am ultimately trying to do is loop through all raster points within each polygon and perform a simple greater-than/less-than comparison on the value assigned to each raster point (the values are elevations). If greater than a given value, assign 1; if less, assign 0. I would then create a separate array of these 1s and 0s, from which I can get an average value.
I have found all these points (allpoints within pts), but they are in arrays within a dictionary within another dictionary within a list (of all polygons), at least I think; I could be wrong about the organization, as dictionaries are rather new to me.
The following is my code:
import numpy as np
from rasterstats import zonal_stats  # the answer below assumes the rasterstats package

def mystat(x):
    mystat = dict()
    mystat['allpoints'] = x
    return mystat

stats = zonal_stats('acp.shp', 'myGeoTIFF.tif')
pts = zonal_stats('acp.shp', 'myGeoTIFF.tif', add_stats={'mystat': mystat})
Link to my documents. Any help or direction would be greatly appreciated!
I assume you are using the rasterstats package. You could try something like this:
threshold_value = 15  # you may change this threshold value to yours
for o_idx in range(0, len(pts)):
    data = pts[o_idx]['mystat']['allpoints'].data
    for d_idx in range(0, len(data)):
        for p_idx in range(0, len(data[d_idx])):
            # you may change the conditions below as you want
            if data[d_idx][p_idx] > threshold_value:
                data[d_idx][p_idx] = 1
            elif data[d_idx][p_idx] <= threshold_value:
                data[d_idx][p_idx] = 0
It is going to update the data within the pts list in place.
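As an aside (my addition, not part of the original answer), if allpoints is a NumPy masked array you can replace the two inner loops with a single vectorized comparison; this sketch assumes the same pts structure as above:
threshold_value = 15
for entry in pts:
    data = entry['mystat']['allpoints'].data
    # boolean comparison gives True/False; casting gives 1/0 in place
    data[:] = (data > threshold_value).astype(data.dtype)
    print(data.mean())  # the average of the 1s and 0s the question asks for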

Python: Is there a way to get the average of the n newest numbers in an array?

I am trying to build a sort of battery meter: one program collects voltage samples and appends them to an array. My idea is to collect a lot of data while the battery is full, and then build a function that compares that data with the average of the last 100 or so voltage readings, as new readings are added every few seconds for as long as I don't interrupt the process.
I am using matplotlib to show the voltage output and so far it is working fine: I posted an answer here on live changing graphs
The voltage function looks like this:
pullData = open("dynamicgraph.txt", "r").read()  # values are stored here by another function
dataArray = pullData.split('\n')
xar = []
yar = []
averagevoltage = 0
for eachLine in dataArray:
    if len(eachLine) >= 19:
        x, y = eachLine.split(',')
        xar.append(np.int64(x))  # a datetime value
        yar.append(float(y))  # the reading
ax1.clear()
ax1.plot(xar, yar)
ax1.set_ylim(ymin=25, ymax=29)
if len(yar) > 1:
    plt.title("Voltage: " + str(yar[-1]) + " Average voltage: " + str(np.mean(yar)))
I am just wondering what the syntax of getting the average of the last x numbers of the array should look like?
if len(yar) > 100:
    # get average of last 100 values only
It's a rather simple problem, assuming you're using numpy, which provides easy functions for averaging.
array = np.random.rand(200, 1)
last100 = array[-100:] # Retrieve last 100 entries
print(np.average(last100)) # Get the average of them
If you want to convert a plain Python list to a numpy array you can do it with:
np.array(<your-array-goes-here>)
Use slice notation with negative index to get the n last items in a list.
yar[-100:]
If the slice is larger than the list, the entire list will be returned.
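For example (a quick illustration, my addition):
readings = [25.9, 26.1, 26.0]  # hypothetical short list
print(readings[-100:])  # [25.9, 26.1, 26.0] - the whole list, no error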
I don't think you even need to use numpy. You can access the last 100 elements by slicing your array as follows:
l = yar[-100:]
This returns all elements at indices from -100 (the 100th-last element) to -1 (the last element). Then, you can just use native Python functions as follows.
mean = sum(l) / len(l)
sum(l) returns the sum of all values within the list, and len(l) returns the length of the list.
You could use the statistics module from the Python standard library:
import statistics
statistics.mean(your_data_list[-n:])  # n = the n newest numbers
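As an aside (my addition, not from the original answers), if you are appending readings as they arrive, collections.deque with maxlen keeps only the newest n values automatically:
from collections import deque

window = deque(maxlen=100)  # discards the oldest reading once full
for reading in (25.9, 26.1, 26.0):  # hypothetical readings
    window.append(reading)
print(sum(window) / len(window))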

Create multiple numpy arrays of same size at once [duplicate]

I was unable to find anything describing how to do this, which leads me to believe I'm not doing this in the proper idiomatic Python way. Advice on the 'proper' Python way to do this would also be appreciated.
I have a bunch of variables for a datalogger I'm writing (arbitrary logging length, with a known maximum length). In MATLAB, I would initialize them all as 1-D arrays of zeros of length n, n bigger than the number of entries I would ever see, assign each individual element variable(measurement_no) = data_point in the logging loop, and trim off the extraneous zeros when the measurement was over. The initialization would look like this:
[dData gData cTotalEnergy cResFinal etc] = deal(zeros(n,1));
Is there a way to do this in Python/NumPy so I don't either have to put each variable on its own line:
dData = np.zeros(n)
gData = np.zeros(n)
etc.
I would also prefer not just make one big matrix, because keeping track of which column is which variable is unpleasant. Perhaps the solution is to make the (length x numvars) matrix, and assign the column slices out to individual variables?
EDIT: Assume I'm going to have a lot of vectors of the same length by the time this is over; e.g., my post-processing takes each log file, calculates a bunch of separate metrics (>50), stores them, and repeats until the logs are all processed. Then I generate histograms, means/maxes/sigmas/etc. for all the various metrics I computed. Since initializing 50+ vectors is clearly not easy in Python, what's the best (cleanest code and decent performance) way of doing this?
If you're really motivated to do this in a one-liner you could create an (n_vars, ...) array of zeros, then unpack it along the first dimension:
a, b, c = np.zeros((3, 5))
print(a is b)
# False
Another option is to use a list comprehension or a generator expression:
a, b, c = [np.zeros(5) for _ in range(3)] # list comprehension
d, e, f = (np.zeros(5) for _ in range(3)) # generator expression
print(a is b, d is e)
# False False
Be careful, though! You might think that using the * operator on a list or tuple containing your call to np.zeros() would achieve the same thing, but it doesn't:
h, i, j = (np.zeros(5),) * 3
print(h is i)
# True
This is because the expression inside the tuple gets evaluated first. np.zeros(5) therefore only gets called once, and each element in the repeated tuple ends up being a reference to the same array. This is the same reason why you can't just use a = b = c = np.zeros(5).
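A quick demonstration of that aliasing (my example):
a = b = np.zeros(5)
a[0] = 1.0
print(b[0])  # 1.0 - a and b are the same array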
Unless you really need to assign a large number of empty array variables and you really care deeply about making your code compact (!), I would recommend initialising them on separate lines for readability.
Nothing wrong or un-Pythonic with
dData = np.zeros(n)
gData = np.zeros(n)
etc.
You could put them on one line, but there's no particular reason to do so.
dData, gData = np.zeros(n), np.zeros(n)
Don't try dData = gData = np.zeros(n), because a change to dData changes gData (they point to the same object). For the same reason you usually don't want to use x = y = [].
The deal in MATLAB is a convenience, but isn't magical. Here's how Octave implements it:
function [varargout] = deal (varargin)
  if (nargin == 0)
    print_usage ();
  elseif (nargin == 1 || nargin == nargout)
    varargout(1:nargout) = varargin;
  else
    error ("deal: nargin > 1 and nargin != nargout");
  endif
endfunction
In contrast to Python, in Octave (and presumably MATLAB)
one=two=three=zeros(1,3)
assigns different objects to the 3 variables.
Notice also how MATLAB talks about deal as a way of assigning contents of cells and structure arrays. http://www.mathworks.com/company/newsletters/articles/whats-the-big-deal.html
If you put your data in a collections.defaultdict you won't need to do any explicit initialization. Everything will be initialized the first time it is used.
import numpy as np
import collections
n = 100
data = collections.defaultdict(lambda: np.zeros(n))
for i in range(1, n):
    data['g'][i] = data['d'][i - 1]
    # ...
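If you also want the MATLAB-style step of trimming the unused tail afterwards (my addition; n_used is a hypothetical count of entries actually logged):
n_used = 42  # hypothetical: how many entries were actually logged
gData = data['g'][:n_used]  # slicing the array trims the trailing zeros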
How about using map:
import numpy as np
n = 10 # Number of data points per array
m = 3 # Number of arrays being initialised
gData, pData, qData = map(np.zeros, [n] * m)

Find two local minima in python

I have a list of t values. My code for finding the minima values is as follows:
for i in np.arange(0, 499, 1):
    if t[i] < t[i-1] and t[i] < t[i+1]:
        t_min.append(t[i])
My t values change every time, and it may happen that one of the minima occurs at the beginning or end of the list, in which case this code would not work. So I need general code that will work for any range of t values.
You can wrap around the end by taking the index modulo the length of the list with the % operator. This treats your array 'as a circle', which is what you really want.
t_min = []
for i in range(len(t)):
    if t[i] < min(t[i - 1], t[(i + 1) % len(t)]):
        t_min.append(t[i])
Edit: fixed the range of values i takes so that the first element isn't checked twice; thanks to @Jasper for pointing this out.
Instead of looping over the array, I suggest using scipy.signal.argrelmin which finds all local minima. You can pick two you like most from those.
from scipy.signal import argrelmin
import numpy as np
t = np.sin(np.linspace(0, 4*np.pi, 500))
relmin = argrelmin(t)[0]
print(relmin)
This outputs [187 437].
To treat the array as wrapping around, use argrelmin(t, mode='wrap').
Without wrap-around, argrelmin does not recognize the beginning and end of an array as candidates for local minimum. (There are different interpretations of "local minimum": one allows the endpoints, the other does not.) If you want the endpoints to be included when the function achieves minimum there, do it like this:
if t[0] < t[1]:
    relmin = np.append(relmin, 0)
if t[-1] < t[-2]:
    relmin = np.append(relmin, len(t) - 1)
Now the output is [187 437 0].
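Since the question asks for two local minima specifically, one way to pick them (my sketch, reusing t and relmin from above) is to keep the two candidates with the smallest t values:
two_lowest = relmin[np.argsort(t[relmin])[:2]]
print(two_lowest)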

Efficient Array replacement in Python

I'm wondering what the most efficient way is to replace elements in an array with other random elements from the same array, given some criterion. More specifically, I need to replace each element that doesn't meet the criterion with a random value from its row. For example, I want to replace each out-of-range value in a row of data with a random cell from data(row) that is between -.8 and .8. My inefficient solution looks something like this:
import numpy as np
import random as r  # needed for r.randint below

data = np.random.normal(0, 1, (10, 100))
for index, row in enumerate(data):
    row_copy = np.copy(row)
    outliers = np.logical_or(row > .8, row < -.8)
    for prob in np.where(outliers == 1)[0]:
        fixed = 0
        while fixed == 0:
            random_other_value = r.randint(0, 99)
            if random_other_value in np.where(outliers == 1)[0]:
                fixed = 0
            else:
                row_copy[prob] = row[random_other_value]
                fixed = 1
Obviously, this is not efficient.
I think it would be faster to pull out all the good values, then use random.choice() to pick one whenever you need it. Something like this:
import numpy as np
import random

data = np.random.normal(0, 1, (10, 100))
for row in data:
    good_ones = np.logical_and(row >= -0.8, row <= 0.8)
    good = row[good_ones]
    row_copy = np.array([x if f else random.choice(good) for f, x in zip(good_ones, row)])
High-level Python code that you write is slower than the C internals of Python. If you can push work down into the C internals it is usually faster. In other words, try to let Python do the heavy lifting for you rather than writing a lot of code. It's zen... write less code to get faster code.
I added a loop to run your code 1000 times, and to run my code 1000 times, and measured how long they took to execute. According to my test, my code is ten times faster.
Additional explanation of what this code is doing:
row_copy is being set by building a new list, and then calling np.array() on the new list to convert it to a NumPy array object. The new list is being built by a list comprehension.
The new list is made according to the rule: if the number is good, keep it; else, take a random choice from among the good values.
A list comprehension walks over a sequence of values, but to apply this rule we need two values: the number, and the flag saying whether that number is good or not. The easiest and fastest way to make a list comprehension walk along two sequences at once is to use zip() to "zip" the two sequences together. zip() yields tuples, one at a time, where the tuple is (f, x); f in this case is the flag saying good or not, and x is the number. (In Python 2, zip() built a full list of tuples and the lazy iterator version was itertools.izip(); in Python 3, zip() itself is lazy. You can play with zip() at a Python prompt to learn more about how it works.)
In Python we can unpack a tuple into variable names like so:
a, b = (2, 3)
In this example, we set a to 2 and b to 3. In the list comprehension we unpack the tuples from zip() into the variables f and x.
Then the heart of the list comprehension is a "ternary if" statement like so:
a if flag else b
The above will return the value a if the flag value is true, and otherwise return b. The one in this list comprehension is:
x if f else random.choice(good)
This implements our rule.
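For comparison (my addition, not part of the original answer), here is a sketch that pushes the random choice into NumPy as well, drawing all of a row's replacements in one call:
import numpy as np

rng = np.random.default_rng()
data = np.random.normal(0, 1, (10, 100))
result = data.copy()
for i, row in enumerate(data):
    bad = np.abs(row) > 0.8  # outlier mask for this row
    good = row[~bad]         # in-range values in this row
    # draw one random good value per outlier, all at once
    result[i, bad] = rng.choice(good, size=bad.sum())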
