Python merge datasets X1(t), X2(t) -> X2(X1)

I have some datasets (let's stay with two here) which depend on a common variable t, like X1(t) and X2(t). However, X1(t) and X2(t) don't have to share the same t values or even have the same number of data points.
For example they could look like:
t1 = [2,6,7,8,10,13,14,16,17]
X1 = [10,10,10,20,20,20,30,30,30]
t2 = [3,4,5,6,8,10,11,14,15,16]
X2 = [95,100,100,105,158,150,142,196,200,204]
I am trying to create a new dataset YNew(XNew) (=X2(X1)) such that both datasets are linked without the shared variable t.
In this case it should look like:
XNew = [10,20,30]
YNew = [100,150,200]
where each occurring X1-value is assigned a corresponding X2-value (a mean value).
Is there an easy, already-known way to achieve this (maybe with pandas)?
My first guess would be to find all t-values for a certain X1-value (in the example case, the X1-value 10 would lie in the range 2,...,7) and then look for all X2-values in that range and take their mean value. Then you should be able to assign YNew(XNew).
Thanks for any advice!
Update:
I added a graph, so maybe my intentions are a bit clearer. I want to assign the mean X2-value to the corresponding X1-value in the marked regions (where the same X1-values occur).
[graph corresponding to the example lists]

Alright, I just tried to implement what I mentioned, and it works as I wanted. Although I think that some things are still a little clumsy...
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# datasets to treat
t1 = [2,6,7,8,10,13,14,16,17]
X1 = [10,10,10,20,20,20,30,30,30]
t2 = [3,4,5,6,8,10,11,14,15,16]
X2 = [95,100,100,105,158,150,142,196,200,204]
X1Series = pd.Series(X1, index = t1)
X2Series = pd.Series(X2, index = t2)
X1Values = X1Series.drop_duplicates().values # returns all occurring values of X1 without duplicates as an array
# lists for results
XNew = []
YNew = []
#find for every occuring value X1 the mean value of X2 in the range of X1
for value in X1Values:
    indexpos = X1Series[X1Series == value].index.values
    max_t = indexpos[indexpos.argmax()] # get max and min index of the range of X1
    min_t = indexpos[indexpos.argmin()]
    print("X1 = "+str(value)+" occurs in range from "+str(min_t)+" to "+str(max_t))
    slicedX2 = X2Series[(X2Series.index >= min_t) & (X2Series.index <= max_t)] # select range of X2
    print("in this range there are following values of X2:")
    print(slicedX2)
    mean = slicedX2.mean() # calculate mean value of selection and append extracted values
    print("with the mean value of: " + str(mean))
    XNew.append(value)
    YNew.append(mean)
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
ax1.plot(t1, X1,'ro-',label='X1(t)')
ax1.plot(t2, X2,'bo',label='X2(t)')
ax1.legend(loc=2)
ax1.set_xlabel('t')
ax1.set_ylabel('X1/X2')
ax2.plot(XNew,YNew,'ro-',label='YNew(XNew)')
ax2.legend(loc=2)
ax2.set_xlabel('XNew')
ax2.set_ylabel('YNew')
plt.show()
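For reference, here is a more compact pandas route to the same result (a sketch, not a drop-in replacement: it assumes X1 is piecewise constant in t, and any t falling between two X1 plateaus is attributed to the earlier plateau, which differs slightly from the inclusive range slicing above):
import pandas as pd
t1 = [2,6,7,8,10,13,14,16,17]
X1 = [10,10,10,20,20,20,30,30,30]
t2 = [3,4,5,6,8,10,11,14,15,16]
X2 = [95,100,100,105,158,150,142,196,200,204]
s1 = pd.Series(X1, index=t1)
s2 = pd.Series(X2, index=t2)
# forward-fill X1 onto the union of both t-axes, so every X2 sample
# is labelled with the X1 value that was current at its t
labels = s1.reindex(s1.index.union(s2.index)).ffill()
YNewSeries = s2.groupby(labels[s2.index]).mean()
print(YNewSeries) # 10 -> 100.0, 20 -> 150.0, 30 -> 200.0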

Related

Remove outlier from multiple lists in python

I'm very new to Python and am working on plotting a graph with matplotlib from values in a CSV, and I'm trying to figure out the most efficient way to remove outliers from my lists. The CSV has three variables, x, y, z, which I've put into separate lists.
I want to find the standard deviation of each list and remove each point that is more than 2x stdev from the mean (removing the point from each list, x, y, z, not just one list).
I'm having a hard time figuring out how to efficiently remove a point that is represented in three separate lists while making sure that I don't mix up different data points.
Do I use a while loop and delete the value at a certain position for each variable? If so, how would I reference the position in the list where the number is more than 2x stdev from the mean? Thanks!
import matplotlib.pyplot as plt
import csv
import statistics as stat
#making list of each variable
x = []
y = []
z = []
with open('fundata.csv', 'r') as csvfile:
    plots = csv.reader(csvfile, delimiter=',')
    #skip the header line in CSV
    next(plots)
    #import each variable from the CSV file into a list as a float
    for row in plots:
        x.append(float(row[0]))
        y.append(float(row[1]))
        z.append(float(row[2]))
#cleaning up the data
stdev_x = stat.stdev(x)
stdev_y = stat.stdev(y)
stdev_z = stat.stdev(z)
print(stdev_x)
print(stdev_y)
print(stdev_z)
#making the graph
fig, ax = plt.subplots()
#make a scatter plot graphing x by y with z variable as color, size of each point is 3
ax.scatter(x, y, c=z, s=3)
#Set chart title and label the axes
ax.set_title("Heatmap of variables", fontsize = 18)
ax.set_xlabel("Var 1", fontsize = 14)
ax.set_ylabel("Var 2", fontsize = 14)
#open Matplotlib viewer
plt.show()
Data set is as follows but is ~35000 rows long with more variability:
var1     var2     var3
3876514  3875931  3875846
3876515  3875931  3875846
3876516  3875931  3875846
It is nearly always easier to use pandas to deal with data of this kind. Calculate the column-wise means and standard deviations, then select values within the required range. The outliers will be replaced with missing values, and you can then use dropna to drop all the rows that contain missing values.
import pandas as pd
df = pd.read_csv("fundata.csv", header=0, names=["x", "y", "z"]) # header=0 skips the original header row
mean = df.mean(axis=0)
std = df.std(axis=0)
edited = df[(mean - 2 * std <= df) & (df <= mean + 2 * std)].dropna()
Alternatively, use scipy.stats.zscore, which will do the calculation for you:
from scipy.stats import zscore
...
edited = df[(abs(zscore(df)) <= 2).all(axis=1)]
If you want to avoid pandas for some reason, then one way would be to replace all the outliers within each column with None:
import statistics

def replace_outliers(values):
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    for v in values:
        if mean - 2 * stdev <= v <= mean + 2 * stdev:
            yield v
        else:
            yield None
x, y, z = [replace_outliers(column) for column in [x, y, z]]
Then zip the columns together and select rows that do not contain None:
selected_rows = [row for row in zip(x, y, z) if not None in row]
Finally if needed you can zip the rows together to transpose the data back into three column lists:
x, y, z = zip(*selected_rows)
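If numpy is an option, a sketch of an even shorter route, starting again from the original x, y, z lists (ddof=1 is used so the result matches statistics.stdev): stack the three lists into one array, build a single row mask, and filter all columns at once.
import numpy as np

data = np.column_stack([x, y, z])
mean = data.mean(axis=0)
std = data.std(axis=0, ddof=1) # ddof=1 matches statistics.stdev
mask = (np.abs(data - mean) <= 2 * std).all(axis=1) # keep rows within 2 stdev in every column
x, y, z = data[mask].T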

Draw longest possible vertical line between two curves in seaborn

I currently have a plot like this (consider that data is the dataframe I pasted at the very bottom):
import seaborn as sns
sns.relplot(
    data=data,
    x="Threshold",
    y="Value",
    kind="line",
    hue="Metric"
).set(xlabel="Threshold")
Which produces:
Now I want to know how I can annotate a line in this plot such that it is located between the curves, at the x-axis value where the distance between the curves is maximized. I would also need to annotate text to show the distance value.
It should be something like this:
Here is the pandas dataframe:
Threshold,Metric,Value
0.0,Recall,1.0
0.010101010101010102,Recall,0.9802536231884058
0.020202020202020204,Recall,0.9706521739130435
0.030303030303030304,Recall,0.9621376811594203
0.04040404040404041,Recall,0.9541666666666667
0.05050505050505051,Recall,0.9456521739130435
0.06060606060606061,Recall,0.9322463768115942
0.07070707070707072,Recall,0.9173913043478261
0.08080808080808081,Recall,0.908695652173913
0.09090909090909091,Recall,0.8976449275362319
0.10101010101010102,Recall,0.8813405797101449
0.11111111111111112,Recall,0.8644927536231884
0.12121212121212122,Recall,0.8498188405797101
0.13131313131313133,Recall,0.8358695652173913
0.14141414141414144,Recall,0.818659420289855
0.15151515151515152,Recall,0.7967391304347826
0.16161616161616163,Recall,0.7748188405797102
0.17171717171717174,Recall,0.7521739130434782
0.18181818181818182,Recall,0.7269927536231884
0.19191919191919193,Recall,0.6952898550724638
0.20202020202020204,Recall,0.6704710144927536
0.21212121212121213,Recall,0.648731884057971
0.22222222222222224,Recall,0.6097826086956522
0.23232323232323235,Recall,0.5847826086956521
0.24242424242424243,Recall,0.5521739130434783
0.25252525252525254,Recall,0.5023550724637681
0.26262626262626265,Recall,0.4766304347826087
0.27272727272727276,Recall,0.42047101449275365
0.2828282828282829,Recall,0.3958333333333333
0.29292929292929293,Recall,0.3539855072463768
0.30303030303030304,Recall,0.3327898550724638
0.31313131313131315,Recall,0.3036231884057971
0.32323232323232326,Recall,0.2798913043478261
0.33333333333333337,Recall,0.2371376811594203
0.3434343434343435,Recall,0.22119565217391304
0.3535353535353536,Recall,0.17300724637681159
0.36363636363636365,Recall,0.15996376811594204
0.37373737373737376,Recall,0.13568840579710145
0.38383838383838387,Recall,0.11938405797101449
0.393939393939394,Recall,0.10652173913043478
0.4040404040404041,Recall,0.09891304347826087
0.4141414141414142,Recall,0.08894927536231884
0.42424242424242425,Recall,0.07681159420289856
0.43434343434343436,Recall,0.06557971014492754
0.4444444444444445,Recall,0.05253623188405797
0.4545454545454546,Recall,0.04655797101449275
0.4646464646464647,Recall,0.024456521739130436
0.4747474747474748,Recall,0.019384057971014494
0.48484848484848486,Recall,0.009782608695652175
0.494949494949495,Recall,0.0034420289855072463
0.5050505050505051,Recall,0.002173913043478261
0.5151515151515152,Recall,0.0016304347826086956
0.5252525252525253,Recall,0.0007246376811594203
0.5353535353535354,Recall,0.00018115942028985507
0.5454545454545455,Recall,0.0
0.5555555555555556,Recall,0.0
0.5656565656565657,Recall,0.0
0.5757575757575758,Recall,0.0
0.5858585858585859,Recall,0.0
0.595959595959596,Recall,0.0
0.6060606060606061,Recall,0.0
0.6161616161616162,Recall,0.0
0.6262626262626263,Recall,0.0
0.6363636363636365,Recall,0.0
0.6464646464646465,Recall,0.0
0.6565656565656566,Recall,0.0
0.6666666666666667,Recall,0.0
0.6767676767676768,Recall,0.0
0.686868686868687,Recall,0.0
0.696969696969697,Recall,0.0
0.7070707070707072,Recall,0.0
0.7171717171717172,Recall,0.0
0.7272727272727273,Recall,0.0
0.7373737373737375,Recall,0.0
0.7474747474747475,Recall,0.0
0.7575757575757577,Recall,0.0
0.7676767676767677,Recall,0.0
0.7777777777777778,Recall,0.0
0.787878787878788,Recall,0.0
0.797979797979798,Recall,0.0
0.8080808080808082,Recall,0.0
0.8181818181818182,Recall,0.0
0.8282828282828284,Recall,0.0
0.8383838383838385,Recall,0.0
0.8484848484848485,Recall,0.0
0.8585858585858587,Recall,0.0
0.8686868686868687,Recall,0.0
0.8787878787878789,Recall,0.0
0.888888888888889,Recall,0.0
0.8989898989898991,Recall,0.0
0.9090909090909092,Recall,0.0
0.9191919191919192,Recall,0.0
0.9292929292929294,Recall,0.0
0.9393939393939394,Recall,0.0
0.9494949494949496,Recall,0.0
0.9595959595959597,Recall,0.0
0.9696969696969697,Recall,0.0
0.9797979797979799,Recall,0.0
0.98989898989899,Recall,0.0
1.0,Recall,0.0
0.0,Fall-out,1.0
0.010101010101010102,Fall-out,0.6990465720990212
0.020202020202020204,Fall-out,0.58461408367334
0.030303030303030304,Fall-out,0.516647992727734
0.04040404040404041,Fall-out,0.4643680104855929
0.05050505050505051,Fall-out,0.4172674037587468
0.06060606060606061,Fall-out,0.3796376551170116
0.07070707070707072,Fall-out,0.3507811343889394
0.08080808080808081,Fall-out,0.33186055852694335
0.09090909090909091,Fall-out,0.3152231359533222
0.10101010101010102,Fall-out,0.29964272879098575
0.11111111111111112,Fall-out,0.2855844238208993
0.12121212121212122,Fall-out,0.27161068008371564
0.13131313131313133,Fall-out,0.25719298987379235
0.14141414141414144,Fall-out,0.24338836860241422
0.15151515151515152,Fall-out,0.2312538316808659
0.16161616161616163,Fall-out,0.22026087140350506
0.17171717171717174,Fall-out,0.2083377375642137
0.18181818181818182,Fall-out,0.19694311143056467
0.19191919191919193,Fall-out,0.18402638310466565
0.20202020202020204,Fall-out,0.17440754286197493
0.21212121212121213,Fall-out,0.16548633279073208
0.22222222222222224,Fall-out,0.15278100754709004
0.23232323232323235,Fall-out,0.14292962391391667
0.24242424242424243,Fall-out,0.1317252605542989
0.25252525252525254,Fall-out,0.11555292476164303
0.26262626262626265,Fall-out,0.10612434729298353
0.27272727272727276,Fall-out,0.08902183793839714
0.2828282828282829,Fall-out,0.08331395471745978
0.29292929292929293,Fall-out,0.07232099444009894
0.30303030303030304,Fall-out,0.06735302200706086
0.31313131313131315,Fall-out,0.061454876012092256
0.32323232323232326,Fall-out,0.05665602604485973
0.33333333333333337,Fall-out,0.048982094158932836
0.3434343434343435,Fall-out,0.045641925459273196
0.3535353535353536,Fall-out,0.03748176648415534
0.36363636363636365,Fall-out,0.0341415977844957
0.37373737373737376,Fall-out,0.029321607509037482
0.38383838383838387,Fall-out,0.026996173604211148
0.393939393939394,Fall-out,0.024353635075999407
0.4040404040404041,Fall-out,0.022514428260364035
0.4141414141414142,Fall-out,0.01940680295118703
0.42424242424242425,Fall-out,0.017165930279263473
0.43434343434343436,Fall-out,0.014459970826374648
0.4444444444444445,Fall-out,0.011035240893812233
0.4545454545454546,Fall-out,0.009386296852208105
0.4646464646464647,Fall-out,0.004756569350781135
0.4747474747474748,Fall-out,0.003868676405301989
0.48484848484848486,Fall-out,0.002135171130795087
0.494949494949495,Fall-out,0.0008033317125763693
0.5050505050505051,Fall-out,0.0004228061645138786
0.5151515151515152,Fall-out,0.00031710462338540896
0.5252525252525253,Fall-out,4.228061645138786e-05
0.5353535353535354,Fall-out,0.0
0.5454545454545455,Fall-out,0.0
0.5555555555555556,Fall-out,0.0
0.5656565656565657,Fall-out,0.0
0.5757575757575758,Fall-out,0.0
0.5858585858585859,Fall-out,0.0
0.595959595959596,Fall-out,0.0
0.6060606060606061,Fall-out,0.0
0.6161616161616162,Fall-out,0.0
0.6262626262626263,Fall-out,0.0
0.6363636363636365,Fall-out,0.0
0.6464646464646465,Fall-out,0.0
0.6565656565656566,Fall-out,0.0
0.6666666666666667,Fall-out,0.0
0.6767676767676768,Fall-out,0.0
0.686868686868687,Fall-out,0.0
0.696969696969697,Fall-out,0.0
0.7070707070707072,Fall-out,0.0
0.7171717171717172,Fall-out,0.0
0.7272727272727273,Fall-out,0.0
0.7373737373737375,Fall-out,0.0
0.7474747474747475,Fall-out,0.0
0.7575757575757577,Fall-out,0.0
0.7676767676767677,Fall-out,0.0
0.7777777777777778,Fall-out,0.0
0.787878787878788,Fall-out,0.0
0.797979797979798,Fall-out,0.0
0.8080808080808082,Fall-out,0.0
0.8181818181818182,Fall-out,0.0
0.8282828282828284,Fall-out,0.0
0.8383838383838385,Fall-out,0.0
0.8484848484848485,Fall-out,0.0
0.8585858585858587,Fall-out,0.0
0.8686868686868687,Fall-out,0.0
0.8787878787878789,Fall-out,0.0
0.888888888888889,Fall-out,0.0
0.8989898989898991,Fall-out,0.0
0.9090909090909092,Fall-out,0.0
0.9191919191919192,Fall-out,0.0
0.9292929292929294,Fall-out,0.0
0.9393939393939394,Fall-out,0.0
0.9494949494949496,Fall-out,0.0
0.9595959595959597,Fall-out,0.0
0.9696969696969697,Fall-out,0.0
0.9797979797979799,Fall-out,0.0
0.98989898989899,Fall-out,0.0
1.0,Fall-out,0.0
Use pivot to transform the data from long to wide
Use idxmax to find the x (Threshold) of the max difference between y1 and y2 (Fall-out and Recall)
Use vlines to plot the vertical line at x from y1 to y2
Use annotate to plot the label at the midpoint of y1 and y2
g = sns.relplot(data=data, x='Threshold', y='Value', hue='Metric', kind='line')
# pivot to wide form
p = data.pivot(index='Threshold', columns='Metric', values='Value')
# find x, y1, and y2 corresponding to max difference
diff = p['Fall-out'].sub(p['Recall']).abs()
x = diff.idxmax()
y1, y2 = p.loc[x]
# plot line and label
ax = g.axes.flat[0]
ax.vlines(x, y1, y2, ls='--')
ax.annotate(f'Dist = {diff.loc[x]:.2f}', ha='left', va='center',
xy=(x, 0.5*(y1+y2)), xycoords='data',
xytext=(5, 0), textcoords='offset pixels')
The easiest way I can think of is to create two separate lists: one of all values where the metric is Recall, and another of all values where the metric is Fall-out. This can be done easily using pandas operations as follows (assuming the dataframe is named df):
import matplotlib.pyplot as plt

ls_metric = df['Metric'].to_list()
ls_value = df['Value'].to_list()
ls_threshold = df['Threshold'].to_list()
ls_value_recall = []
ls_value_fallout = []
ls_threshold_recall = []
ls_threshold_fallout = []
for i, j, k in zip(ls_metric, ls_value, ls_threshold):
    if i == 'Recall':
        ls_value_recall.append(j)
        ls_threshold_recall.append(k)
    elif i == 'Fall-out':
        ls_value_fallout.append(j)
        ls_threshold_fallout.append(k)
ls_dist = []
for i, j in zip(ls_value_recall, ls_value_fallout):
    ls_dist.append(abs(i - j)) # built-in abs; the math module has no math.abs
max_diff = max(ls_dist)
location_of_max_diff = ls_dist.index(max_diff)
value_of_threshold_at_max_diff = ls_threshold_recall[location_of_max_diff]
value_of_recall_at_max_diff = ls_value_recall[location_of_max_diff]
value_of_fallout_at_max_diff = ls_value_fallout[location_of_max_diff]
x_values = [value_of_threshold_at_max_diff, value_of_threshold_at_max_diff]
y_values = [value_of_recall_at_max_diff, value_of_fallout_at_max_diff]
plt.plot(x_values, y_values)
Certain assumptions: the threshold values are the same and the same number of readings is present for both metrics, which I think is true from a brief glance at the data; if not, I believe it's still pretty easy to modify the code.
You can add this plot to your own figure, for which the syntax is readily available. As for the label for the line, one way is to use matplotlib.pyplot.text to add a textbox, though you'll need to tweak the location to get the desired placement; another way would be to add it as a legend entry only. A short sketch of the textbox route follows.
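A minimal sketch, reusing the variables computed above (the exact placement is left to taste):
plt.text(value_of_threshold_at_max_diff,
         0.5 * (value_of_recall_at_max_diff + value_of_fallout_at_max_diff),
         'Dist = {:.2f}'.format(max_diff),
         ha='left', va='center')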

How to drop data above a certain frequency in a histogram/dataset?

To make things clearer, I don't want to remove the entire bin from the histogram; I just want to get rid of some of the data so that it is brought below a desired frequency. The line in the image shows the max frequency I would like.
For context, I have a dataset containing a number of angles. My question is very similar to the one asked here, Remove data above threshold in histogram, in terms of the data used, but unlike that question I don't wish to get rid of the data entirely, just reduce it.
Can I do this directly from the histogram, or will I need to just delete some of the data in the dataset?
Edit (sorry, I am new to coding and formatting here):
Here is a solution I tried:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.utils import shuffle # assumption: sklearn's shuffle, which returns the shuffled list (random.shuffle returns None)

bns = 30
hist, bins = np.histogram(dataset['Steering'], bins=bns)
removeddata = []
spb = 700
for j in range(bns):
    rdata = []
    for i in range(len(dataset['Steering'])):
        if dataset['Steering'][i] >= bins[j] and dataset['Steering'][i] <= bins[j+1]:
            rdata.append(i)
    rdata = shuffle(rdata)
    rdata = rdata[spb:]
    removeddata.extend(rdata)
print('removed:', len(removeddata))
dataset.drop(dataset.index[removeddata], inplace=True)
print('remaining:', len(dataset))
center = (bins[:-1] + bins[1:])*0.5
plt.bar(center, hist, width=0.05)
plt.show()
This is somebody else's solution, but it seemed to work for them. Even directly copying it, it still throws errors. The error I got was "ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()". I tried changing 'and' to &, and got the error "TypeError: Cannot perform 'rand_' with a dtyped [float64] array and scalar of type [bool]". I am unsure what exactly this refers to, but it points to the line with the if statement. I checked the dtype of everything and they are all float64, so I am unsure of my next step.
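One possible cause, offered as an assumption since the full dataset isn't shown: if the dataframe's index contains duplicate labels, dataset['Steering'][i] selects by label and can return a Series rather than a scalar, and a Series inside a boolean `and` raises exactly this ValueError. Positional access avoids that ambiguity:
# a hedged sketch: .iloc guarantees a scalar, so the chained comparison
# below is scalar vs scalar and plain boolean logic works
val = dataset['Steering'].iloc[i]
if bins[j] <= val <= bins[j+1]:
    rdata.append(i)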
This solution takes into account the clarified requirement that the original input data that exceeds the frequency threshold be dropped. I left my other answer because it is simpler and different enough that it may be useful to another user.
To clarify, this answer produces a new 1D array of data with fewer elements and then plots a histogram from that new data. The data are shuffled before the elements are removed (in case the input data were pre-sorted) in order to prevent bias in dropping data from either the low or high side of each bin.
import numpy as np
import matplotlib.pyplot as plt
from random import shuffle
def remove_gated_val_recursive(idx, to_gate_lst, bins_lst, data_lst):
    if to_gate_lst[idx] == 0:
        return data_lst
    else:
        bin_min, bin_max = bins_lst[idx], bins_lst[idx + 1]
        for i in range(len(data_lst)):
            if bin_min <= data_lst[i] < bin_max:
                del data_lst[i]
                to_gate_lst[idx] -= 1
                break
        return remove_gated_val_recursive(idx, to_gate_lst, bins_lst, data_lst)
threshold = 80
fig, ax1 = plt.subplots()
ax1.set_title("Some data")
np.random.seed(30)
data = np.random.randn(1000)
num_bins = 23
raw_hist, raw_bins = np.histogram(data, num_bins)
to_gate = []
for i in range(len(raw_hist)):
    if raw_hist[i] > threshold:
        to_gate.append(raw_hist[i] - threshold)
    else:
        to_gate.append(0)
data_lst = list(data)
shuffle(data_lst)
for idx in range(len(raw_hist)):
    remove_gated_val_recursive(idx, to_gate, raw_bins, data_lst)
new_data = np.array(data_lst)
hist, bins = np.histogram(new_data, num_bins)
width = 0.7 * (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) * 0.5
ax1.bar(center, hist, align='center', width=width)
plt.show()
gives the following histogram, plotted from the new_data array.
This answer doesn't re-bin or re-center the data, but I believe it generally achieves what you're asking. Working from the example in the chosen answer of the post you linked, I edit the hist array so that the original input data is not changed, as you indicated is your preference:
import numpy as np
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(1,2)
ax1.set_title("Some data")
ax2.set_title("Gated data < threshold")
np.random.seed(10)
data = np.random.randn(1000)
num_bins = 23
avg_samples_per_bin = 200
hist, bins = np.histogram(data, num_bins)
width = 0.7 * (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) * 0.5
ax1.bar(center, hist, align='center', width=width)
threshold = 80
gated = np.empty([len(hist)], dtype=np.int64)
for i in range(len(hist)):
    if hist[i] > threshold:
        gated[i] = threshold
    else:
        gated[i] = hist[i]
ax2.bar(center, gated, align="center", width=width)
plt.show()
which gives

matplotlib argrelmax doesn't find all maxes

I have a project where I'm sampling analog data and attempting to analyze with matplotlib. Currently, my analog data source is a potentiometer hooked up to a microcontroller, but that's not really relevant to the issue. Here's my code
# imports assumed from the bare names used below
from numpy import array, linspace
from matplotlib.pyplot import plot, xlim, ylim, legend, show
import scipy.signal

arrayFront = RunningMean(array(dataFront), 15)
arrayRear = RunningMean(array(dataRear), 15)
x = linspace(0, len(arrayFront), len(arrayFront)) # Generate x-axis for the front data
y = linspace(0, len(arrayRear), len(arrayRear)) # Generate x-axis for the rear data
min_vals_front = scipy.signal.argrelmin(arrayFront, order=2)[0] # Min
min_vals_rear = scipy.signal.argrelmin(arrayRear, order=2)[0] # Min
max_vals_front = scipy.signal.argrelmax(arrayFront, order=2)[0] # Max
max_vals_rear = scipy.signal.argrelmax(arrayRear, order=2)[0] # Max
maxvalfront = max(arrayFront[max_vals_front])
maxvalrear = max(arrayRear[max_vals_rear])
minvalfront = min(arrayFront[min_vals_front])
minvalrear = min(arrayRear[min_vals_rear])
plot(x, arrayFront, label="Front Pressures")
plot(y, arrayRear, label="Rear Pressures")
plot(x[min_vals_front], arrayFront[min_vals_front], "x")
plot(x[max_vals_front], arrayFront[max_vals_front], "o")
plot(y[min_vals_rear], arrayRear[min_vals_rear], "x")
plot(y[max_vals_rear], arrayRear[max_vals_rear], "o")
xlim(-25, len(arrayFront) + 25)
ylim(-1000, 7000)
legend(loc='upper left')
show()
dataFront and dataRear are python lists that hold the sampled data from 2 potentiometers. RunningMean is a function that calls:
convolve(x, ones((N,)) / N, mode='valid')
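For context, a self-contained sketch of that helper as described (assuming plain numpy):
import numpy as np

def RunningMean(x, N):
    # moving average over a window of N samples; mode='valid' trims the edges
    return np.convolve(x, np.ones(N) / N, mode='valid')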
The problem is that the argrelmax (and argrelmin) functions don't always find all the maxima and minima. Sometimes they don't find ANY maxima or minima, and that causes me problems in this block of code
maxvalfront = max(arrayFront[max_vals_front])
maxvalrear = max(arrayRear[max_vals_rear])
minvalfront = min(arrayFront[min_vals_front])
minvalrear = min(arrayRear[min_vals_rear])
because the [min_vals_(blank)] variables are empty. Does anyone have any idea what is happening here, and what I can do to fix the problem? Thanks in advance.
Here's one of the graphs of data where not all the maxima and minima are found:
signal.argrelmin is a thin wrapper around signal.argrelextrema with comparator=np.less. np.less(a, b) returns the truth value of a < b element-wise. Notice that np.less requires a to be strictly less than b for the result to be True.
Your data has the same minimum value at many neighboring locations. At the local minima, the relationship between the minimum and its neighbors is not strictly less than; it is only less than or equal to.
Therefore, to find these extrema, use signal.argrelextrema with comparator=np.less_equal. For example, using a snippet from your data:
import numpy as np
from scipy import signal
arrayRear = np.array([-624.59309896, -624.59309896, -624.59309896,
-625., -625., -625.,])
print(signal.argrelmin(arrayRear, order=2)[0])
# []
print(signal.argrelextrema(arrayRear, np.less_equal)[0])
# [0 1 3 4 5]
print(signal.argrelextrema(arrayRear, np.less_equal, order=2)[0])
# [0 3 4 5]
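As a side note on the crash itself (a sketch, not part of the original answer): since argrelmin/argrelmax can return an empty index array, it is worth guarding before reducing with max or min:
import numpy as np
from scipy import signal

max_vals_front = signal.argrelextrema(arrayFront, np.greater_equal, order=2)[0]
# argrelextrema may find nothing; avoid calling max() on an empty selection
maxvalfront = arrayFront[max_vals_front].max() if max_vals_front.size else None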

Python 2D array -- How to plug in x and retrieve y value?

I have been looking for an answer since yesterday but no luck. So I have a 1D spectrum (.fits) file with a flux value at each wavelength. I have converted them into a 2D array (x, y) = (wavelength, flux) and want to write a program that will return the flux (y) at some assigned wavelengths (x). I have tried this:
#modules
import scipy
import numpy as np
import pyfits as pf
#Target Global Variables
hdulist_tg = pf.open('cutmask1-2.0001.fits')
hdr_tg = hdulist_tg[0].header
flux_tg = hdulist_tg[0].data
crval_tg = hdr_tg['CRVAL1'] #Starting wavelength
cdel_tg = hdr_tg['CDELT1'] #Wavelength axis width
wave_tg = crval_tg + np.arange(3183)*cdel_tg #Create an x-axis
wavelist = [6207,6315,6369,6438,6490,6565,6588]
wave_flux=[]
diff = 10
for wave in wave_tg:
    for flux in flux_tg:
        wave_flux.append((wave, flux))
for item in wave_flux:
    wave = item[0]
    flux = item[1]
    #Where I got my actual wavelength that exists in wave_tg
    diffmatch = np.abs(wave - wavelist[0])
    if diffmatch < diff:
        flux_wave = flux
        diff = diffmatch
        wavematch = wave
print wavelist[0], flux_wave, wavematch
but the program always returns the same flux value even though the wavelength is different. Please help...
I would skip the creation of the two-dimensional table altogether and just use np.interp:
fluxvalues = np.interp(wavelist, wave_tg, flux_tg)
For the file you posted, the code you posted doesn't work due to the hard-coded length of the wave_tg array. I would therefore recommend using
wave_tg = crval_tg + np.arange(len(flux_tg))*cdel_tg
Also, for some reason it seems that the file you posted doesn't actually go up to the wavelengths you are looking up. You might need to check that you are calculating the corresponding wavelengths correctly or check that you are looking up the right wavelengths.
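A quick sketch of both checks with the variables from the question (Python 2 prints, to match the code above):
print wave_tg.min(), wave_tg.max() # confirm the values in wavelist fall inside this range
fluxvalues = np.interp(wavelist, wave_tg, flux_tg) # note: np.interp clamps to the endpoint values outside that range
print fluxvalues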
I've made some changes in your code:
using numpy to create wave_flux as an ndarray with np.vstack(), np.repeat() and np.tile()
using fancy indexing to get the values matching your search
The resulting code is:
#modules
import scipy
import numpy as np
import pyfits as pf
#Target Global Variables
hdulist_tg = pf.open('cutmask1-2.0001.fits')
hdr_tg = hdulist_tg[0].header
flux_tg = hdulist_tg[0].data
crval_tg = hdr_tg['CRVAL1'] #Starting wavelength
cdel_tg = hdr_tg['CDELT1'] #Wavelength axis width
wave_tg = crval_tg + np.arange(3183)*cdel_tg #Create an x-axis
wavelist = [6207,6315,6369,6438,6490,6565,6588]
wave_flux = np.vstack(( np.repeat(wave_tg, len(flux_tg)),
np.tile(flux_tg, len(wave_tg)) )).transpose()
wave_ref = wavelist[0]
diff = 10
print wave_flux[ np.abs(wave_flux[:,0]-wave_ref) < diff ]
Which will return a sub-group of wave_flux with the wave values in column 0 and flux values in column 1:
[[ 6197.10300138 500.21020508]
[ 6197.10300138 523.24102783]
[ 6197.10300138 510.6390686 ]
...,
[ 6216.68436446 674.94732666]
[ 6216.68436446 684.74255371]
[ 6216.68436446 712.20098877]]
