I want my loop to change table cells from 0 to 5 only inside the "walls".
The "walls" are user defined and could be of any shape, based on coordinates.
The plot is only for visualization.
import matplotlib.pyplot as plt
import pandas as pd

wallPointsX = [5,5,30,30,55,55,5]
wallPointsY = [5,30,30,55,55,5,5]

df = pd.DataFrame(0, index=range(60), columns=range(60))

for x in range(0, 60):
    for y in range(0, 60):
        df[x][y] = 5  # Should only apply inside "walls"

plt.plot(wallPointsX, wallPointsY)
plt.pcolor(df)
plt.show()
[Result plot]
OK, it took me some time, but it was fun doing it. The idea is to first create a continuous path out of the coordinates that define the walls, then create a Path object from it. Now you loop through each point in the DataFrame and check whether the created Path contains that (x, y) point using contains_point. I additionally had to use the condition x==55 and (5<y<=55) in the if statement to include the column adjacent to the rightmost wall.
import matplotlib.path as mplPath
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

wallPointsX = [5,5,30,30,55,55,5]
wallPointsY = [5,30,30,55,55,5,5]

# Create a continuous path from the wall coordinates
path = np.array([list(i) for i in zip(wallPointsX, wallPointsY)])
Path = mplPath.Path(path)

df = pd.DataFrame(0, index=range(60), columns=range(60))

for x in range(0, 60):
    for y in range(0, 60):
        if Path.contains_point((x, y)) or (x == 55 and (5 < y <= 55)):
            df[x-1][y-1] = 5  # Applies only inside the "walls"

plt.plot(wallPointsX, wallPointsY)
plt.pcolor(df)
plt.show()
[Output plot]
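As a side note, the double loop can be replaced by a single vectorised call to Path.contains_points; a minimal sketch (the row/column orientation may need flipping to match the pcolor plot):

import numpy as np
import pandas as pd
import matplotlib.path as mplPath

wallPointsX = [5, 5, 30, 30, 55, 55, 5]
wallPointsY = [5, 30, 30, 55, 55, 5, 5]
poly = mplPath.Path(np.column_stack([wallPointsX, wallPointsY]))

# Build every (x, y) grid coordinate and test them all in one call
xs, ys = np.meshgrid(np.arange(60), np.arange(60), indexing='ij')
points = np.column_stack([xs.ravel(), ys.ravel()])
inside = poly.contains_points(points).reshape(60, 60)

# 5 inside the walls, 0 outside
df = pd.DataFrame(np.where(inside, 5, 0))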
Related
I would like to do some meteorological calculations over a sea area. What I'm trying to do is run the calculation for every 1x1 degree grid cell, like this:
Go to a grid cell (to start somewhere with coordinates).
Make the calculations.
Go to another grid cell.
Make the calculations.
And so on...
I made it work for a square region, but that doesn't help me skip land areas. I would like to run it only for the sea: when a coordinate falls on a land area it has to be skipped (I don't know how) and the loop should move on to the next grid cell. I would appreciate any tips from you guys.
edit: Is there any possibility of determining all the coordinates in advance and building a database or some kind of list from them, so that when the calculation begins the for loop runs only over that list? Honestly, I don't know how to do that either.
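That list idea might look roughly like the sketch below; land_cells is a purely hypothetical placeholder for whatever land/sea information is available:

# Hypothetical sketch of the list idea: precompute the sea cells once,
# then loop only over them. land_cells is a made-up placeholder here.
land_cells = {(48, 38), (48, 39)}            # illustrative land coordinates only

sea_cells = [(lat, lon)
             for lat in range(48, 52)        # 1x1 degree grid over latt
             for lon in range(38, 42)        # ... and lonn
             if (lat, lon) not in land_cells]

for lat, lon in sea_cells:
    # make the calculations for this grid cell
    pass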
Sorry, I should mention the data: I've downloaded the ERA5 meteorological dataset from ECMWF, and it looks like this:
[Sample of the data omitted.]
The code below is my first attempt; it actually works for a square region:
latt = [48,49,50,51]
lonn = [38,39,40,41]
For information, these lists represent a 4x4 grid; the numbers inside don't actually mean anything.
import math
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from windrose import WindroseAxes
import matplotlib.cm as cm
import timeit
from matplotlib.gridspec import GridSpec
from matplotlib.ticker import PercentFormatter
import seaborn as sns
import matplotlib as mpl

data = pd.read_csv("/opt/local/var/macports/software/A.xlsx", index_col=0, parse_dates=True, header=None)
df = data.reset_index()
df.rename(columns={0:"Date", 1:"Lon", 2:"Lat", 3:"Value"}, inplace=True)
df["Value"] = (df["Value"]-273.14)

# Coordinate starting points.
n = 48
y = 49
z = 38
t = 39

# Months (Turkish names) and their abbreviations
aylar = ["OCAK","ŞUBAT","MART","NİSAN","MAYIS","HAZİRAN","TEMMUZ","AĞUSTOS","EYLÜL","EKİM","KASIM","ARALIK"]
kaylar = ["oca","sub","mar","nis","may","haz","tem","agu","eyl","eki","kas","ara"]
ay = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]
loop = ["a","b","c","d","e","f","g","h","i","j","k","l"]

mpl.rcParams['agg.path.chunksize'] = 10000

latt = [48,49,50,51]
lonn = [38,39,40,41]

j = ay[0]
for h in range(len(latt)):
    for g in range(len(lonn)):
        # This is where I filter my first grid (A1)
        s = df[(df['Lat'] >= n) & (df['Lat'] <= y)]
        s = s[(s['Lon'] >= z) & (s['Lon'] <= t)]
        m = 0
        x = 1
        r = kaylar[m]
        p = aylar[m]
        for i in range(len(loop)):
            # plots #
            plt.savefig("/Users/muratoksuz/Desktop/URUN/KAR/"+str(r)+"/K{j}/sic/K{j}_sicaklik.jpeg".format(j=j), dpi=500, bbox_inches="tight")
            m += 1
            x += 1
        j += 1
        n += 1
        y += 1
    z += 1
    t += 1
    n = 48
    y = 49
That code starts from the given coordinates, makes the calculation for all months, saves the figures to different folders on my PC, and then goes to the next grid cell to do the same calculation there. K or A is just a name for a grid.
[Images of the folder structure and the desired result omitted.]
Let's denote by refVar a variable of interest that contains experimental data.
For a simulation study, I would like to generate other variables V0.05, V0.10, V0.15, ..., up to V0.95.
Note that in each variable name, the value following V represents the correlation between that variable and refVar (to make them quick to track in the final dataframe).
My reading led me to multivariate_normal() from numpy. However, this function generates two 1-D arrays that are both filled with random numbers. What I want is to always keep refVar and generate other arrays filled with random numbers while meeting the specified correlation.
Please find my code below. To cut it short, I have no clue how to generate the other variables relative to my experimental variable refVar. Ideally, I would like to build a data frame containing the following columns: refVar, V0.05, V0.10, ..., V0.95. I hope you get my point, and thank you in advance for your time.
import numpy as np
import pandas as pd
from numpy.random import multivariate_normal as mvn

refVar = [75.25,77.93,78.2,61.77,80.88,71.95,79.88,65.53,85.03,61.72,60.96,56.36,23.16,73.36,64.18,83.07,63.25,49.3,78.2,30.96]
mean_refVar = np.mean(refVar)

for r in np.arange(0, 1, 0.05):
    var1 = 1
    var2 = 1
    cov = r
    cov_matrix = [[var1, cov],
                  [cov, var2]]
    data = mvn([mean_refVar, mean_refVar], cov_matrix, size=len(refVar))
    output = 'corr_' + str(r.round(2)) + '.txt'
    df = pd.DataFrame(data, columns=['refVar', 'v' + str(r.round(2))])
    # Ideally, instead of creating an output file for each correlation, I would like
    # to generate one DF with refVar and all of these newly created series
    df.to_csv(output, sep='\t', index=False)
Following this answer, we can generate the sequence as follows:
def rand_with_corr(refVar, corr):
    # center and normalize refVar
    X = np.array(refVar) - np.mean(refVar)
    X = X / np.linalg.norm(X)
    # randomly sample Y
    Y = np.random.rand(len(X))
    # center Y
    Y = Y - Y.mean()
    # keep only the component of Y orthogonal to X
    Y = Y - Y.dot(X) * X
    # normalize Y
    Y = Y / np.linalg.norm(Y)
    # combine so the result has the requested correlation with refVar
    return Y + (1 / np.tan(np.arccos(corr))) * X

# test
out = rand_with_corr(refVar, 0.05)
pd.Series(out).corr(pd.Series(refVar))
# out
# 0.050000000000000086
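To build the single DataFrame asked for (refVar plus all the V columns) instead of one file per correlation, a minimal sketch using the function above (the output file name is illustrative):

# Assemble one DataFrame with refVar and all generated columns.
# Column names follow the V0.05 ... V0.95 convention from the question.
cols = {'refVar': refVar}
for r in np.arange(0.05, 1.0, 0.05):
    cols['V{:.2f}'.format(r)] = rand_with_corr(refVar, r)

result = pd.DataFrame(cols)
result.to_csv('all_correlations.txt', sep='\t', index=False)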
In my work I have the task of reading in a CSV file and doing calculations with it. The CSV file consists of 9 columns and about 150 lines of values acquired from sensors. First the horizontal acceleration was determined, from which the distance was derived by double integration. This is the lower of the two plots in the picture. The upper plot represents the so-called force data: the orange graph is plotted from the 9th column of the CSV file and the blue graph from the 7th column.
As you can see, I have drawn two vertical lines in the lower plot. These lines mark the x-value that, in the upper plot, is the global minimum of the orange function and its intersection with the blue function. Now I want to do the following, but I need some help: I want the intersection point of the first vertical line with the graph to become (0, 0), i.e. the function has to be shifted down. How do I achieve this? Furthermore, the piece of the function before this first intersection point (shown in purple) should be omitted, so that the function really starts only at this point. How can I do this?
In the following picture I try to demonstrate how I would like to do that:
If you need my code, here it is:
import numpy as np
import matplotlib.pyplot as plt
import math as m
import loaddataa as ld
import scipy.integrate as inte
from scipy.signal import find_peaks
import pandas as pd
import os

# Loading of the values
print(os.path.realpath(__file__))
a, b = os.path.split(os.path.realpath(__file__))
print(os.chdir(a))
print(os.chdir('..'))
print(os.chdir('..'))
path = os.getcwd()
path = path + "\\Data\\1 Fabienne\\Test1\\left foot\\50cm"
print(path)

dataListStride = ld.loadData(path)
indexStrideData = 0
strideData = dataListStride[indexStrideData]

#%% Calculation of the horizontal acceleration
def horizontal(yAngle, yAcceleration, xAcceleration):
    a = ((m.cos(m.radians(yAngle))) * yAcceleration) - ((m.sin(m.radians(yAngle))) * xAcceleration)
    return a

resultsHorizontal = list()
for i in range(len(strideData)):
    strideData_yAngle = strideData.to_numpy()[i, 2]
    strideData_xAcceleration = strideData.to_numpy()[i, 4]
    strideData_yAcceleration = strideData.to_numpy()[i, 5]
    resultsHorizontal.append(horizontal(strideData_yAngle, strideData_yAcceleration, strideData_xAcceleration))
resultsHorizontal.insert(0, 0)
#plt.plot(x_values, resultsHorizontal)

#%%
# "Convert" the x-axis into time: 100 Hertz means 0.01 seconds per sample
scale_factor = 0.01
x_values = np.arange(len(resultsHorizontal)) * scale_factor

# Calculation of the global high and low points
heel_one = pd.Series(strideData.iloc[:, 7])
plt.scatter(heel_one.idxmax() * scale_factor, heel_one.max(), color='red')
plt.scatter(heel_one.idxmin() * scale_factor, heel_one.min(), color='blue')
heel_two = pd.Series(strideData.iloc[:, 9])
plt.scatter(heel_two.idxmax() * scale_factor, heel_two.max(), color='orange')
plt.scatter(heel_two.idxmin() * scale_factor, heel_two.min(), color='green')

# Plot of the force data
plt.plot(x_values[:-1], strideData.iloc[:, 7])  # force heel
plt.plot(x_values[:-1], strideData.iloc[:, 9])  # force toe

# while loop to calculate the point of intersection with the blue function
i = heel_one.idxmax()
while strideData.iloc[i, 7] > strideData.iloc[i, 9]:
    i = i - 1

# Length between the global minimum of the orange function and the intersection with the blue function
laenge = (i - heel_two.idxmin()) * scale_factor
print(laenge)

#%% Integration of the horizontal acceleration
velocity = inte.cumtrapz(resultsHorizontal, x_values)
plt.plot(x_values[:-1], velocity)

#%% Integration of the velocity
s = inte.cumtrapz(velocity, x_values[:-1])
plt.plot(x_values[:-2], s)
I hope it's clear what I want to do. Thanks for helping me!
I didn't dig all the way through your code, but the following tricks may be useful.
Say you have x and y values:
x = np.linspace(0,3,100)
y = x**2
Now, you only want the values corresponding to, say, .5 < x < 1.5. First, create a boolean mask for the arrays as follows:
mask = np.logical_and(.5 < x, x < 1.5)
(If this seems magical, then run x < 1.5 in your interpreter and observe the results).
Then use this mask to select your desired x and y values:
x_masked = x[mask]
y_masked = y[mask]
Then, you can translate all these values so that the first x,y pair is at the origin:
x_translated = x_masked - x_masked[0]
y_translated = y_masked - y_masked[0]
Is this the type of thing you were looking for?
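Applied to the code in the question, this might look roughly as follows; a sketch assuming i is the intersection index found by the while loop and s the doubly integrated distance, so off-by-one details may need care:

# Assumption: i (intersection index) and s (distance) come from the question's code.
x_s = x_values[:-2]               # x-axis that matches s in length
mask = x_s >= x_s[i]              # keep only the part from the intersection onward
x_new = x_s[mask] - x_s[mask][0]  # first kept point becomes x = 0
s_new = s[mask] - s[mask][0]      # first kept point becomes y = 0
plt.plot(x_new, s_new)
plt.show()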
I've written a small program to do the following:
inspect an image
pick a row at random from the image
plot the pixel values along that row
make a list of the local minima in that row
and I'm trying to make it into a function, so that I can do the same thing to, say, 10 rows and plot the pixel values of all of those rows without having to run the program 10 times.
The code looks like this:
from astropy.io import fits
import matplotlib.pyplot as plt
import numpy as np

hdulist = fits.open('xbulge-w1.fits')  # Open FITS file as image
w1data = hdulist[0].data
height = w1data.shape[0]  # Inspect height of image
width = w1data.shape[1]

def plot_envelope(image, image_height):
    index = np.random.randint(0, height/2)  # Select random row number in upper half
    row = w1data[index]  # Look at row number
    local_minima = []
    # Find local minimum, and add to list of minimum-valued pixels
    for i in range(1, width-1):
        if w1data[index][i-1] > w1data[index][i]:
            if w1data[index][i+1] > w1data[index][i]:
                local_minima.append(w1data[index][i])
        else:
            continue
    return (local_minima, row, index)

plot_envelope(w1data, height)

x1 = range(width)
plt.plot(x1, row, color='r', linewidth=0.5)
plt.title('Local envelope for row ' + str(index))
plt.xlabel('Position')
plt.ylabel('Pixel value')
plt.show()
It works fine if I don't use a function definition (i.e. if the declarations of index, row, and local_minima and the nested for loops are in the main part of the program). With the function definition as shown, it raises NameError: name 'local_minima' is not defined.
Since I'm passing those variables out of the function, shouldn't I be able to use them in the rest of the program?
Am I missing something about local and global variables?
When you call plot_envelope(w1data, height) you are telling the function to assign w1data and height to image and image_height, respectively. Inside the function you should manipulate the data through the dummy variable image (change w1data to image inside the function), whose scope is limited to the function. The next thing is that you should capture the function's return value in a variable: envelope = plot_envelope(w1data, height). Then local_minima = envelope[0], row = envelope[1], index = envelope[2], or unpack all three at once.
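Putting those fixes together, a minimal sketch of the corrected function and call (assuming w1data and height are defined as in the question):

import numpy as np

def plot_envelope(image, image_height):
    index = np.random.randint(0, image_height // 2)  # random row in the upper half
    row = image[index]
    local_minima = []
    # a pixel lower than both of its neighbours is a local minimum
    for i in range(1, image.shape[1] - 1):
        if row[i - 1] > row[i] < row[i + 1]:
            local_minima.append(row[i])
    return local_minima, row, index

# unpack the returned tuple so the names exist in the main program
local_minima, row, index = plot_envelope(w1data, height)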
I am using the random number routines in Python in the following code in order to create a noise signal.

import numpy as np
import matplotlib.pyplot as plt
import random

res = 10
# Add noise to each X bin across the signal
X = np.arange(-600, 600, res)
for i in range(10000):
    noise = [random.uniform(-2, 2) for _ in range(len(X))]
    # custom module to save output of X and noise to a .fits file
    wp.save_fits('test10000', X, noise)

plt.plot(V, I)
plt.show()
In this example I generate 10,000 'noise.fits' files, which I then wish to co-add in order to show the expected 1/sqrt(N) dependence of the stacked noise root-mean-square (rms) as a function of the number of objects co-added.
My problem is that the rms follows this dependence up until ~1000 objects, at which point it deviates upwards, suggesting that the random number generator is repeating itself.
Is there a routine, or a way to structure the code, that will avoid or minimise this repetition (ideally with the numbers as floats between a chosen maximum and minimum, e.g. a maximum above 1 and a minimum below -1)?
Here is the output of the co-adding code, as well as the code itself pasted at the bottom for reference.
If I use the module random.random() the result is worse.
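As one possible route (my suggestion, not part of the original code), NumPy's Generator interface draws from the PCG64 bit generator, whose period is far longer than these 10,000 files would exhaust; a minimal sketch of the same noise draw:

import numpy as np

rng = np.random.default_rng()  # PCG64-backed Generator
res = 10
X = np.arange(-600, 600, res)
noise = rng.uniform(-2.0, 2.0, size=len(X))  # floats in the half-open interval [-2, 2)

rng.uniform accepts any bounds, so the above-1/below-minus-1 requirement is covered by choosing, for example, -2 and 2.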
Here is my code which adds the noise signal files together, averaging over the number of objects.
import os
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt
import glob

rms_arr = []
#vel_w_arr = []

filelist = glob.glob('/Users/thbrown/Documents/HI_stacking/mockcat/testing/test10000/M*.fits')
filelist.sort()

for i in filelist[:]:
    print(i)
    # open an existing FITS file
    hdulist = fits.open(str(i))
    # assuming the first extension is the table, assign the data to a record array
    tbdata = hdulist[1].data
    #index = np.arange(len(filelist))

    # Access the signal column
    noise = tbdata.field(1)
    # access the vel column
    X = tbdata.field(0)

    if i == filelist[0]:
        stack = np.zeros(len(noise))
        tot_rms = 0
        #print len(stack)

    # sum the signal in the loop
    stack = (stack + noise)
    rms = np.std(stack)
    rms_arr = np.append(rms_arr, rms)

numgal = np.arange(1, np.size(filelist)+1)
avg_rms = rms_arr / numgal