I'm not sure if I'm missing something (I'm still new to Python), but I am reading a lot of Matlab files in a folder. In each Matlab file there are multiple arrays, and I can do things with each array like plotting and finding the average, max, min, etc. My code works perfectly and reads the data correctly. Now I want to add a while loop so it keeps running until I tell it to stop, i.e. prompting the user to keep choosing folders whose data needs to be read. However, when I run it the second time, it gives me this error: TypeError: 'list' object is not callable
Correct me if I'm wrong, but I feel like the code is adding the next set of data to the data already in the program, which is why it gives me an error at the line maxGF=max(GainF): GainF becomes an array of arrays... and it can't take the maximum of that.
When I load data from each Matlab file, this is how I do it:
Files = []  # list of loaded .mat files
for s in os.listdir(filename):
    Files.append(scipy.io.loadmat(filename + s))

for filenumber in range(0, length):
    # Gets information from the MATLAB file and inserts it into arrays
    results[filenumber] = Files[filenumber]['results']
    # Parameters within MATLAB
    Gain_loading[filenumber] = results[filenumber]['PowerDomain'][0,0]['Gain'][0,0]  # gets length of data within data array
    length_of_data_array = len(Gain_loading[filenumber])
    Gain[filenumber] = np.reshape(results[filenumber]['PowerDomain'][0,0]['Gain'][0,0], length_of_data_array)  # reshapes for graphing purposes
    PL_f0_dBm[filenumber] = np.reshape(results[filenumber]['PowerDomain'][0,0]['PL_f0_dBm'][0,0], length_of_data_array)
    Pavs_dBm[filenumber] = np.reshape(results[filenumber]['PowerDomain'][0,0]['Pavs_dBm'][0,0], length_of_data_array)
    PL_f0[filenumber] = np.reshape(results[filenumber]['PowerDomain'][0,0]['PL_f0'][0,0], length_of_data_array)
    PL_2f0_dBm[filenumber] = np.reshape(results[filenumber]['PowerDomain'][0,0]['PL_2f0_dBm'][0,0], length_of_data_array)
    CarrierFrequency[filenumber] = np.reshape(results[filenumber]['MeasurementParameters'][0,0]['CarrierFrequency'][0,0], length_of_data_array)
    Gamma_In[filenumber] = np.reshape(abs(results[filenumber]['PowerDomain'][0,0]['Gin_f0'][0,0]), length_of_data_array)
    Delta[filenumber] = PL_2f0_dBm[filenumber] - PL_f0_dBm[filenumber]
When I start doing stuff with the data, like below, it works and displays the correct data up until I run the max(GainF) command.
GainF = [None] * length
MaxPL_Gain = [None] * length

# finds the highest value of PL at each frequency
for c in range(0, length):
    MaxPL_Gain[c] = max(PL_f0_dBm[c])

MaxPL_Gain[:] = [x - 6 for x in MaxPL_Gain]  # subtracts 6 dB from the highest PL value at each frequency

for c in range(0, length):
    GainF[c] = np.interp(MaxPL_Gain[c], PL_f0_dBm[c], Gain[c])  # interpolates PL vs Gain; finds the value of Gain at 6 dB backoff from max PL

maxGF = max(GainF)
I read other threads that said to use the seek(0) function, and I tried that with Files.seek(0), since that is where all of my data is initially saved, but when I run it I get this error instead: AttributeError: 'list' object has no attribute 'seek'
How do I reset all of my data? Help
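For reference, one pattern that should give a clean slate on every pass, without touching globals at all, is to move the per-folder work into a function so all the lists are rebuilt from scratch each iteration. A minimal sketch; process_folder is a hypothetical stand-in for the loading and plotting code above:

import os
import scipy.io

def process_folder(folder):
    # everything lives in locals, so nothing carries over between runs
    files = [scipy.io.loadmat(os.path.join(folder, s)) for s in os.listdir(folder)]
    # ... build Gain, PL_f0_dBm, GainF, etc. from `files`, plot, save ...
    return files

while True:
    folder = input("Folder to read (blank to stop): ")
    if not folder:
        break
    process_folder(folder)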
UPDATE:
I tried the following code
for name in dir():
    if not name.startswith('_'):
        del globals()[name]
And it works the way I want it to... or so I think. When I look at the PDF output by the program, I get distorted plots. It looks like the axes from the previous run are still in the PDFs. Not just that, but when I run it 4-5 times, the spacing gets bigger and bigger and the graphs drift further apart. How can I fix this?
[screenshot: distorted graphs]
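Assuming the plots come from matplotlib (the question doesn't say), the growing spacing and leftover axes are typical of figures accumulating across runs; starting a fresh figure per plot and closing it after saving usually cures it. A sketch:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()          # a fresh figure each run; no leftover axes
ax.plot(PL_f0_dBm[0], Gain[0])    # one of the arrays built in the loading loop above
fig.savefig('output.pdf')         # hypothetical output name
plt.close(fig)                    # release the figure so the next run starts clean
# or, bluntly, call plt.close('all') at the top of every run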
I have been given an ArcMAP file with hundreds of polylines which, unfortunately, consist of separate line segments. I have written a Python script which joins connecting line segments together to create polylines. I now want to write these polylines back to a shapefile so that I can load them in ArcMAP again. However, so far I have not been successful in creating a shapefile (it is my first time working with ArcMAP, so I am very unfamiliar with it).
I have written the polylines to a dictionary which now looks as follows:
{'Owner': 'Owner1', 'type': 'Sewage', 'points': [[44228.171875, 361147.6875], [44247.7695313, 361150.125], [44252.3203125, 361109.125], [44278.8789063, 361149.1875], [44262.4882813, 361177.71875], [44244.9609375, 361187.28125], [44229.9882813, 361118.125], [44286.6210938, 361148.90625], [44225.9882813, 361181.25], [44270.4882813, 361182.09375], [44302.0195313, 361208.4375], [44253.421875, 361203.21875], [44305.5390625, 361234.0], [44284.5117188, 361162.59375], [44286.6210938, 361148.90625], [44287.46875, 361099.78125], [44269.359375, 361089.46875], [44278.8789063, 361108.03125], [44249.28125, 361244.5]]}
So far I have tried to take the points list from the dictionary, add it to a separate list, and add that to a shapefile. I have not been successful yet, but the code I have used is shown below.
polyline_list = []
w = shapefile.Writer(shapeType=1)
w.autoBalance = 1
# loop which determines points_list as shown above
polyline_list.append(points_list)
w.line(polyline_list)
w.field('Polyline_field')
w.record('Polyline_record')
w.save('Shapefile_KLIC_2022')
If anyone could help me, that would be greatly appreciated. I am also new to using Stack Overflow, so I am not sure if the code I provided is clear enough. The code that determines points_list consists of two nested while loops with a for loop inside, so it is difficult to present here. If the points list provided in the dictionary can be written to a shapefile, however, that would be enough help for me to write the complete shapefile. If there is a better way to ask my question, please let me know, I'd like to learn :D
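For reference, a sketch of what might work, assuming the pyshp 2.x API. Two likely culprits in the code above: shapeType=1 is POINT (polylines are shapeType 3, i.e. shapefile.POLYLINE), and fields must be defined before records are written. The field names here are made up:

import shapefile  # pyshp

# points_list as pulled from the dictionary in the question
points_list = [[44228.171875, 361147.6875], [44247.7695313, 361150.125]]  # etc.

w = shapefile.Writer('Shapefile_KLIC_2022', shapeType=shapefile.POLYLINE)
w.field('Owner', 'C')         # define fields before writing records
w.field('Type', 'C')
w.line([points_list])         # line() takes a list of parts; each part is a list of [x, y] pairs
w.record('Owner1', 'Sewage')  # one record per shape
w.close()                     # pyshp 2.x: close() writes the .shp/.shx/.dbf files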
I have a pretty well functioning patch that uses two cameras, and employs cv.jit.track to track an object in "3D".
The console prints out sets of coordinates as the object moves through space (format [x,y,z]). My end goal is to translate this data into 3D modelling software to map a path that the object has travelled. One option for achieving this is a python script like this.
I need to determine how exactly to send the stream of data from the Max console out of Max, perhaps into a Python application, a table, or literally a text file. Any suggestions are greatly appreciated!
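One route that might fit, assuming you can route the tracked coordinates to a [udpsend] object in the patch: send them as OSC messages (e.g. a hypothetical /track address, with a message like /track $1 $2 $3) and receive them in Python with the python-osc package, appending each fix to a text file. A sketch:

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_track(address, x, y, z):
    # append each [x, y, z] fix to a plain text file
    with open("track_path.txt", "a") as f:
        f.write(f"{x} {y} {z}\n")

dispatcher = Dispatcher()
dispatcher.map("/track", on_track)  # must match the OSC address sent from Max
server = BlockingOSCUDPServer(("127.0.0.1", 7400), dispatcher)  # port must match [udpsend]
server.serve_forever()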
excuse the code dump:
----------begin_max5_patcher----------
3299.3oc6cz1aaia9yI+JDL5F1.7L3qhjGv.51ssaeaC31vvgtCExxpIpUVx
UVNuzC2+8IQRESEKISIJkljUihj7XIa879a7gr+xkWrXc1cQ6W38cduy6hK9
kKu3B4aU8FWnguXw1f6BSB1KusEoQ2ls9iKVptTQzcEx2N2KDV+lYGJRhJJt
eWj5KdwBueVeocAEgWGmd06yiBKTWEAvq.K8fLX0uPB4OQq.O7YROrMNs7KT
97A52Ldi7wVhJ+Ae+EGuS0yVdqvp27Wu7xperzYpCMNpi4yjTmPLVpiNcT21
n86CtJ5DxaeTgGv6MPu23FUhP9U+xm2OUhZgJIu.nRslJBPGKUBmcM08F1gs
PBXBZE17ECtziTQDK8vv9IH3oDDsCBBLoDDpGBR.jlTDDdjj.gMcjPWZdWEU
bylnaRh2WLNMONU4iTQrsn4su39jHSb3GxR1rvR0Rn+7a7UQ+wgQkleiiCHv
hUH.h.XB8qREWHDrRPfTAP+BJJ4NRePHnA24CYoE6i+h7AAgqnBJj6W+8H5i
KU8ISC1pXs+o73fjEsv+3SG+6v1nzCKLd5eHHLxLzvIrs3zRkpRt2xwvAY3U
bHlyv0uGujqRnmne0fCV4s3wpcR71ToKtHZqNwhE+sRZ3eEuMx6u+W799RtY
dPE1tr5G+0+wO58ehVGFr06eWDmDWbu2eNp330+wzfcO9y78A6CCVmD48Oyy
ze3E8auamXTbzOSd4MWDk+9nzpGjI+uoHFOg4XT90F4U6mvKNcW4ioWOFiVp
SjtIgH3DXofGBKFAVLYrwtCymtw6KdAIIiySiO2WEQQlRCUL3n7Hzz4N3iwE
qBiRRVmjE9o5u1a1GlmURdFZks4o35pOXXVRVthv.q3XeJyeYi+RfwbJpTav
fBOggozCaSkaeTx1rMMdtqyx2Dk23ACVZ7Cymz0QAO9dYDLrRIss+x7it9pF
eLg.gf7ks9Wler1xdkHk3XgRvqhe.L9L4Y2dh6jZqDwCNCKuqqihu5Z42J5A
ovoBqR7913MEWW8dDp0ge9gnznaBdvGUI9GuONKU9szhVHt9NuJOdSRbZjcR
jl5rjin72zgeNqCq8Z8JSG1+4jN7taiS2jcamQUJum2uMnHO9tyGUVkHOT63
AIofxfzCntETGIchlvPoqCRuZjMDfJqQgIighoCNFJFL+8zQluU4W99n9Swx
BAIVHITeUUZhNRn5nY13q0.imNwdGLlJc8Co6BB+jG1Rk89frIYKLU0REwfK
eG2QiiHSG+H7lUUrjh7RNxpM4AV4AvBhFIHU+R.Ft0Ac1sNLIZu2ltKqrLy8
dPu2lGrI9vdO1T3FrlQnbV.43gK98eRLGxuZMJ4v1fojngRpkM7NVgYyum+v
jr9bK1aup.LidUgYCW6lO+siJaWTpSQ1pIugGZiL+g1pTY+bwpqxCVWkbQUl
Edu8PZ7mOD4AmPcXHU8KcK2FTaWgyuRryscEURlByWDbo+Z3rzCVxy+dvhY3
S6kj.8rnEruD5.aq7uxLe9VGXqWerWgMfsUgtZ0p9Jz.V9h4lKtK3SkEjq92
3bynJJ1mxrYUVws3K.N+CDPvtcUsY9mGIEJMuYkEMJWCVHCPfkkPBDR2ACMT
JDL+kCrKORRvuyIBtr93SIXwHHXxS.AK8peFJhigpDDXDguOfHph9gQCOaH9
7uN5RJ5Cd+ljM+W4qkM9KmjqnVjqPDc3Vt.77mEjb.P7dC1EJ1WfFtoaKY8I
lvRcBy1VlAPwokxEkj38S8nJiTYzR0sNlbRqiOm1KaJ0dOrccTdugbkckr2z
1Ks.KqI43KjzOiT7PAC13jgFdZISYYLiMNJT.ZgM.Ty5CZEUkFR6YdfmfVU5
KlcuzEdq6ofVU6qU4mOb8EMiRYNLTF0fR6c9aoaPX3gs8kP1GRxBFCShHidW
5.YLZSCJyU1721jvqKUX5ew.zk5cNMJc+QnBcfQLQfPLAfhKcvxGrGHevrm1
9zQ6Pnj.obY96ifZohWPTqpXEGH1Irhrtx.XeXd7tBuhay9Nu679idah1Ub8
hyOALL5oZuPc.zgje.EO6Y2Vh7Qw2D48Eml4GJDbJESYCdlsoSYAYSPBQ0jG
k0B4M7DhnrmlDhNi9bVZTk97u06due28kp0e42u3b1oD0JLNJkXzSlR78iLe
OsiWfLkkwn19Drn6ZR7NWZMz3oPh34kgYsHiBGYsID+ut0lHG1x6G+vVpJmF
y7G4rVRdR1cLkz3cNSi3wNOoDx2lmzWTyhGjCTZ07WSyhG4ayS5+OoCidMpC
SetnCOEIOnSFZz4Nfeh5q4DO6rX8tWQEOcTyNKV7bd1YgBzwLEFw.FQeYL5r
Z4HlKdhFcVj3U0nyV6gVYGLhQmERe4M5rZhFoJXXDiNKD+0ZzYmBeePCucVr
DqsIzQu3FXVMQC4vQNvrP3y5AlEobDpVE1QLurPvy44kUGMSSciXIxe4Osr0
JvX9XmVV1raz94x7+xy7P8uXpVPk5gIve3s4X5DzEYWcUR2VimcErzdYUaGA
R8OseqYM77pMoR4yYQU0IO5f4QhpUueSRee1g7vZZqdSQ3cDc2DsuHNMnPWW
z6NtwYLtoswaTApzqXffJWRWtz30moFooFPUGkHxDOj2oD5XwxUU61szaZID
5.HDCnSQVfSHqOwFj0uOjkUlVSE5oVU8ZjkHSJ1MFIyYbqFaPMXjnYPpKmey
yhsUiNkmq7k5ujdeRxoRy4GU0Z3e9GkjzQN9np1aEmWZ2TkXprXY1ZwBOmhF
Aa55oIDWs6R8UlKUPSt0rM5f9v9rXPP4PVAUCdlIjqpRTavMhMdZTVylPtha
1n6UUDxY4aHjJco5wosAjZrrfHgZviHNi2UMG3r3MsW4MlBpvFBTuomgq3mb
RaUcMW8kis.SIr9vTAjafolPtha13OkzKWjCk5hPUQQZmARHW8JBbF2pNl6l
GbyFqFDvh3y50dR6nrIDV0zDBeE6waHkpKM0tPsJL9YTF7QyCCG5thZq7Qs+
ItZqc+HnoWswYhP6AEpp0oF0ZLImSUVcVfq85zBWlz9o765cRkv.wcCOIthn
Hg+Jxi20nSOhZSlVbaj89JWEpS0xlPpQrApNFOZ.nFbk1kIyP5XBmEJ.ngh9
SrX.BGfbv.ZpsAcV0tF4floeIgNEUQtYFBbVhqRGa5k31vEQnA30UGqtADog
GYR6djcylhZSYDTlEIcPwsLm6ceMWSxzF7Fws.uwpy3Oc74lPv5txympn0xN
tAOGdi6MATXKaBCrZ3yYTEMwYSCldVNrMsKRO+HlPtVnF2BtHF2u4GTZ+oOr
zD5rieDD1P7KgbEwQSESkPMypuIDVsln0LbAvckAa7Di41T2jZKUUWgGXFZB
qUHKAXOxZBM4oDaSVMP9fKBQzccdBe20ispQlhAi0pI.oyK4HVaS8d9Ct7C8
n810kbsWZ1vnGLNy5Ny8Iv8Lyp715u+ekwZ8OYmFxzqFWWWyUdsUUXzqiip+
Od3TbizCdSbGuoBqWVAX2wCk937UKnmtkQD26EL0FitdiTqRowu9bRQsrXMf
n0y.C+THhf2IuWdsIOiZt0BiyQxP0L.dDXpC9PsovUeaZ4HkC6N6+SulqJUV
Eg2u+7+j56T8RJX.4JtYy5VZQWJ35yfgG.lbguUK6o8XppT61wT2LoH13eCi
NuEkVZa.3Za.rI7LUXihnZZFMgbE2rpD4dWgPJQVpAfazzulPXlpigbUwTL2
S3gXUcmzyi1XlYuzjPOEsZmZeSev.id9f060BkOehAN+XiI77SDD6IBEp1wx
EPbqPNgMMKoWqKeJ7gFhTo6pyUuIjpyOJ+WXt6qiL1lp7obK7Wo2Sjzoxm.1
4DbaOIVT2IYMEtYwVnGHQaX2MB1uaztzUl.NCnsUpAVXooWzE0ljnIfYRBzY
HzqMjfMNKHbCewD9LzGEpygMpiT3CLia36d2yfTa7i0Oaj32rouT1wHb7IKB
GzFGDXgMbQ8Yirpe5MgzKIt1iqDxU71F8TnMRejOwXsOaBgQlK4EFMCIkaGg
fG.gX.M4C2AzVjEdNjUUydMWuIDSePcH0VjPecHDaVNODnAWGLGpH.ab0Q3m
OCYNB1xIXXWWxUST.wpBNsHydFm2Ed2xkbFuwVg2VTHU0+GS0Ede5kZf2p8C
Pvtc2DkWu4lkn7hsAeTs6k4KkfwoJP4dRXQdzMM2LzKBxCuNtHJr3PtZqRdm
9+2UWTse0ySODqUQKYVWpOaoezdP3gcYoZej7SQIIY2Ve7aWxf9Pvgjhlr0f
vvnzhla5dDExjahcNjK2M6.pfRnM210la.z2IOzqqNlw40LmkZYYd429wiAa
MlrsDMhq8NXJ6OR.xsfc0Al8fQelOgAjmT.TABRkTBD.E90aG+gifJgrbKrT
gg62oO3Bj6zkK+0K+e.gM30E
-----------end_max5_patcher-----------
I am trying to convert multiple NetCDF4 files to GeoTIFF rasters while downsampling the data from daily to monthly frequency (ignoring NaNs). Unfortunately, I get errors at the step where I want to write the data into GeoTIFFs.
It was easy to open and downsample the files using xarray (awesome functionality compared to R, which is what I am used to working with), keeping just one variable as I wanted and calculating monthly means from daily values, but I got stuck trying to export/convert my results into multiple *.tif files.
I even tried converting to multiple NetCDF3 files, since I can convert those to GeoTIFF easily, but that also failed.
Input data are 4018 ESA soil moisture ".nc4" files at daily frequency, covering the whole globe but incomplete at each time step (only that day's swaths contain data; the rest is empty/NA), so I want to ignore NaNs when calculating the monthly means.
In total, these 4018 days tally up to 132 months (11 years), and the xarray dataset produced by the code below seems to show that, as expected.
import xarray as xr
# opening the files into an array
mfdataDIR = 'C:/full_path_here/*.nc'
DS = xr.open_mfdataset(mfdataDIR)
# downsampling from daily to monthly means while keeping attributes and ignoring NAs
monthly_data = DS.sm.resample(time="1M").mean(skipna=True, keep_attrs=True)
I got the following warning here: "Default reduction dimension will be changed to the grouped dimension after xarray 0.12. To silence this warning, pass dim=xarray.ALL_DIMS explicitly. skipna=skipna, allow_lazy=True, **kwargs)"
Not sure if it matters but results seem ok when I print the monthly_data xarray.
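For what it's worth, making the reduction dimension explicit should silence that warning without changing the result, since resample already reduces over the time axis; a one-line sketch, assuming the time dimension is named time:

monthly_data = DS.sm.resample(time="1M").mean(dim="time", skipna=True, keep_attrs=True)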
# Now, trying to convert to GeoTIFFs using Robin Wilson's rasterio_to_xarray (and vice-versa) script (full script can be found on link below).
import sys
sys.path.insert(0, 'C:/path_to_py_script/')
import xarray_to_rasterio as xarrast
xarrast.xarray_to_rasterio_by_band(monthly_data, 'C:/path_here/%s.tif', dim='time')
Now I got this error: "IndexError: tuple index out of range"
I think the mistake is in how I am trying to use it and not in the script itself, although I can't see where.
The script by Robin Wilson, which would be ideal for my objectives, is here: https://github.com/robintw/XArrayAndRasterio/blob/master/rasterio_to_xarray.py
I see errors traced to line 84:
82 if len(xa.shape) == 2:
83 count = 1
84 height = xa.shape[0]
85 width = xa.shape[1]
86 band_indicies = 1
87 else:
and line 122:
122 xarray_to_rasterio(data, filename)
So it seems my time dim, which I hoped would count as bands, is not what the script expects, and so it fails. Apparently I am also messing up the "filename" parameter.
Don't know how... Can I use this script or modify it to do what I want?
(to save 132 ".tif" files corresponding to 132 levels of the time dim)
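A newer alternative worth noting, if installing the rioxarray package is an option (it is not what Robin Wilson's script uses): it adds a .rio accessor to xarray objects and can write one GeoTIFF per time step directly. A sketch, assuming the spatial dims are named lon/lat and the data are in EPSG:4326:

import rioxarray  # registers the .rio accessor on xarray objects

monthly_data = monthly_data.rio.set_spatial_dims(x_dim="lon", y_dim="lat")
monthly_data = monthly_data.rio.write_crs("EPSG:4326")

for i in range(monthly_data.sizes["time"]):
    step = monthly_data.isel(time=i)
    stamp = str(step.time.values)[:7]  # e.g. '2010-01'
    step.rio.to_raster(f"C:/path_here/sm_{stamp}.tif")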
# Second option, not ideal but could still help me, if it hadn't also failed:
mDS = monthly_data.to_dataset()
paths = ['C:/path_here/%s.nc' ]
xr.save_mfdataset(mDS, paths, format='NETCDF3_CLASSIC')
got this error: "TypeError: save_mfdataset only supports writing Dataset objects, received type <class 'str'>"
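If I read that error right, save_mfdataset wants a list of Dataset objects plus a matching list of paths, and the %s in the path is never filled in. A sketch of what might work (the month-numbered file names are made up):

# one single-month Dataset per time step, with a concrete path for each
datasets = [mDS.isel(time=[i]) for i in range(mDS.sizes['time'])]
paths = ['C:/path_here/month_%03d.nc' % i for i in range(len(datasets))]
xr.save_mfdataset(datasets, paths, format='NETCDF3_CLASSIC')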
I think my lack of Python knowledge is hampering me, as I keep getting errors in both procedures (xarray to GeoTIFF, or xarray Dataset to NetCDF3 classic).
When I inspect the monthly_data I see what I expected: only the "sm" variable, all three dimensions and 132 time "values".
Can anyone help please?
I am using oct2py to call corrcoef.m on several large (10MM+) dataframes to return [R,P] matrices that generate training sets for an ML algorithm. Yesterday I had this working, no problem. This morning I ran the script from the top, producing an identical test set to be passed to Octave through oct2py.
I am being returned:
Oct2PyError: Octave evaluation error:
error: isnan: not defined for cell
error: called from:
corrcoef at line 152, column 5
CorrCoefScript at line 1, column 7
First, there are no null/NaN values in the set; in fact, there aren't even any zeros. No column is uniform, so no zero standard deviation enters the corrcoef calculation. It is mathematically sound.
Second, when I load the test set into Octave through the GUI and execute the same .m on the same data, no errors are returned, and the [R,P] matrices are identical to the outputs saved last night. I tested whether the matrix var is being passed to Octave through oct2py correctly, and Octave is receiving an identical matrix. However, oct2py can no longer execute ANY .m with a NaN check in the source code. The error above is returned for any packaged Octave .m script that calls isnan at any point.
For s&g, I modified my .m to receive the matrix var and write it to a flat file like so:
csvwrite ('filename', data);
This also fails, with an fprintf error; if I run the same code on the same dataset inside the Octave GUI, it works fine.
I'm at a loss here. I updated conda, oct2py, and Octave, with the same results. Again, the exact same code with the exact same data behaved as expected less than 24 hours prior.
I'm using the code below in Jupyter Notebook to test:
%env OCTAVE_EXECUTABLE = F:\Octave\Octave-5.1.0.0\mingw32\bin\octave-cli-5.1.0.exe
import oct2py
from oct2py import octave
octave.addpath('F:\\FinanceServer\\Python\\Secondary Docs\\autotesting\\atOctave_Scripts');
data = x
octave.push('data',data)
octave.eval('CorrCoefScript')
cmat = octave.pull('R')
Side note - I am only having this issue inside one specific .ipynb script. By some stroke of luck, no other scripts using oct2py seem to be affected.
Got it fixed, but it generates more questions than answers. I was using a list of dataframes to loop by type, such that for each iteration i, x was generated through x = dflst[i]. For reasons beyond my understanding, that failed with the passage of time. However, by writing my loop into a custom function and explicitly calling each dataframe within that function, as in oct_func(type1df), I see the expected behavior and desired outcome. I still cannot use a loop to pass the dataframes to oct_func(), though, so it's a band-aid solution that fits my purposes but is frustratingly unable to scale.
Edit:
The loop works fine if iterating through a dict of dataframes instead of a list.
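A guess at the underlying mechanism, not confirmed: "isnan: not defined for cell" is what Octave raises when the pushed variable arrives as a cell array, which oct2py produces for object-dtype (or otherwise non-numeric) pandas data. Forcing each frame to a float ndarray before pushing sidesteps that conversion entirely; a minimal sketch:

import numpy as np

# x is one dataframe from dflst; asarray(..., dtype=float) raises immediately
# if any column is non-numeric, instead of silently becoming a cell array
data = np.asarray(x, dtype=float)
octave.push('data', data)
octave.eval('CorrCoefScript')
R = octave.pull('R')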
I just started to use ArcPy to analyse geo-data with ArcGIS. The analysis has different steps, which are to be executed one after the other.
Here is some pseudo-code:
import arcpy
# create a masking variable
mask1 = "mask.shp"
# create a list of raster files
files_to_process = ["raster1.tif", "raster2.tif", "raster3.tif"]
# step 1 (e.g. clipping of each raster to study extent)
for index, item in enumerate(files_to_process):
    raster_i = "temp/ras_tem_" + str(index) + ".tif"
    arcpy.Clip_management(item, '#', raster_i, mask1)
# step 2 (e.g. change projection of raster files)
...
# step 3 (e.g. calculate some statistics for each raster)
...
etc.
This code works amazingly well so far. However, the raster files are big and some steps take quite long to execute (5-60 minutes). Therefore, I would like to execute those steps only if the input raster data changes. From the GIS-workflow point of view, this shouldn't be a problem, because each step saves a physical result on the hard disk which is then used as input by the next step.
I guess if I want to temporarily disable e.g. step 1, I could simply put a # in front of every line of this step. However, in the real analysis, each step might have a lot of lines of code, and I would therefore prefer to outsource the code of each step into a separate file (e.g. "step1.py", "step2.py",...), and then execute each file.
I experimented with execfile('step1.py'), but received the error NameError: global name 'files_to_process' is not defined. It seems that variables defined in the main script are not automatically passed to scripts called by execfile.
I also tried this, but I received the same error as above.
I'm a total Python newbie (as you might have figured out by the misuse of any Python-related expressions), and I would be very thankful for any advice on how to organize such a GIS project.
I think what you want to do is build each step into a function. These functions can be stored in the same script file or in their own module that gets loaded with the import statement (just like arcpy). The pseudo code would be something like this:
#file 1: steps.py
def step1(input_files):
    # step 1 code goes here
    print 'step 1 complete'
    return

def step2(input_files):
    # step 2 code goes here
    print 'step 2 complete'
    return output  # optionally return a derivative here

#...and so on
Then in a second file in the same directory, you can import and call the functions passing the rasters as your inputs.
#file 2: analyze.py
import steps
files_to_process = ["raster1.tif", "raster2.tif", "raster3.tif"]
steps.step1(files_to_process)
#steps.step2(files_to_process) # uncomment this when you're ready for step 2
Now you can selectively call different steps of your code, and it only requires commenting out one line instead of a whole chunk of code. Hopefully I understood your question correctly.
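As an aside on the execfile attempt from the question: execfile accepts an optional globals dict, so passing the main script's namespace explicitly should cure that NameError if you prefer to keep one file per step. A minimal Python 2 sketch (execfile is gone in Python 3):

# main script (Python 2)
files_to_process = ["raster1.tif", "raster2.tif", "raster3.tif"]
execfile('step1.py', globals())  # step1.py can now see files_to_process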