I'm a Python newbie trying to find my way around using it for Dynamo. I've had quite a bit of success using simple while loops and nested ifs to neaten my Dynamo scripts; however, I've been stumped by a recent error.
I'm attempting to pull in lists of data (flow rates from pipe fittings) and then output the maximum flow rate of each fitting by comparing the indices of each list (a cross fitting would have 4 flow rates in Revit; I'm comparing each pipe inlet/outlet flow rate and calculating the maximum for sizing purposes). For some reason, appending to lists in the while loop and iterating the indices gives me the "unexpected token" error, which I presume is related to "i += 1" according to online debuggers.
I've been using this while-loop format for a while now and it has always worked for non-list-related iterations. Can anyone give me some guidance here?
Thank you in advance!
Error in Dynamo:
Warning: IronPythonEvaluator.EvaluateIronPythonScript
operation failed.
unexpected token 'i'
Code used:
import sys
import clr
clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import *
dataEnteringNode = IN
a = IN[0]
b = IN[1]
c = IN[2]
d = IN[3]
start = 0
end = 3
i = start
y=[]
while i < end:
    y.append(max( (a[i], b[i], c[i] ))
    i += 1
OUT = y
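For what it's worth, "unexpected token 'i'" in code like this is usually a parenthesis problem on the previous line: max( (a[i], b[i], c[i] ) opens three parentheses but closes only two, so the parser trips on the i += 1 that follows. A balanced sketch (the lists below are placeholders standing in for the Dynamo inputs IN[0]..IN[3], and d is included since a cross fitting has four flow rates):

```python
# placeholder lists standing in for the Dynamo inputs IN[0]..IN[3]
a = [1.0, 5.0, 2.0]
b = [4.0, 1.0, 3.0]
c = [2.0, 2.0, 9.0]
d = [0.5, 6.0, 1.0]

y = []
i = 0
while i < len(a):
    # balanced parentheses, and d included for the fourth branch of a cross
    y.append(max(a[i], b[i], c[i], d[i]))
    i += 1

OUT = y   # [4.0, 6.0, 9.0]
```

Iterating to len(a) rather than a hard-coded end also keeps the loop in step with however many indices the lists actually have.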
Function I tried to replicate:
I'm doing a project for coursework in which I need to implement the blackbody function and manipulate it in some ways.
I'm trying out alternate equations, and with two of them I keep getting an overflow error.
this is the error message:
alt2_2 = (1/((const_e**(freq/temp))-1))
OverflowError: (34, 'Result too large')
temp is given in kelvin (I'm using 5800 as my test value, as it is approximately the temperature of the sun).
freq is the speed of light divided by whatever wavelength is inputted:
freq = (3*(10**8))/wavelength
In this case I am using 0.00000005 as the test value for wavelength, and const_e is 2.7182.
First time using Stack Overflow, and also my first time doing a project on my own; any help is appreciated.
This does the blackbody computation with your values.
import math
# Planck constant
h = 6.6e-34
# Boltzmann constant
k = 1.38e-23
# Speed of light
c = 3e+8
# Wavelength
wl = 0.00000005
# Temp
T = 5800
# Frequency
f = c/wl
# This is the exponent for e (about 49).
k1 = h*f / (k*T)
# This computes the spectral radiance.
Bvh = 2*f*f*f*h / (math.exp(k1)-1)
print(Bvh)
Output:
9.293819741690355e-08
Since we only used one or two digits on the way in, the resulting value is only good to one or two digits, 9.3E-08.
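As a side note on the original OverflowError: leaving Planck's and Boltzmann's constants out of the exponent makes freq/temp about 1e12, and a float raised to a power that large exceeds the float range (roughly 1.8e308), hence "Result too large". A quick reproduction with the values from the question:

```python
# Reproducing the original overflow: without h and k in the exponent,
# freq/temp is astronomically large.
freq = (3 * (10 ** 8)) / 0.00000005   # 6e15 Hz
temp = 5800
const_e = 2.7182

print(freq / temp)                    # ~1.03e12 — far too big an exponent

try:
    alt2_2 = 1 / ((const_e ** (freq / temp)) - 1)
except OverflowError as err:
    print(err)
```

The corrected exponent h*f/(k*T) is only about 49, which is why the full Planck form above evaluates without trouble.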
I am doing an ordinal logistic regression, and following the guide here for the analysis: R Data Analysis Examples: Ordinal Logistic Regression
My dataframe (consult) looks like:
n raingarden es_score consult_case
garden_id
27436 7 0 3 0
27437 1 0 0 1
27439 1 1 1 1
37253 1 0 3 0
37256 3 0 0 0
I am at the part where I need to create a graph to test the proportional odds assumption, with the command in R as follows:
(s <- with(dat, summary(es_score ~ n + raingarden + consult_case, fun=sf)))
(es_score is an ordinal ranked score with values between 0 and 4; n is an integer; raingarden and consult_case are binary values of 0 or 1)
I have the sf function:
sf <- function(y) {
  c('Y>=1' = qlogis(mean(y >= 1)),
    'Y>=2' = qlogis(mean(y >= 2)),
    'Y>=3' = qlogis(mean(y >= 3)))
}
in a utils.r file that I access as follows:
from rpy2.robjects.packages import STAP
with open('/file_path/utils.r', 'r') as f:
string = f.read()
sf = STAP(string, "sf")
And want to do something along the lines of:
R = ro.r
R.with(work_case_control, R.summary(formula, fun=sf))
The major problem is that the R with operator is also a Python keyword, so even if I access it with ro.r.with it is still parsed as the Python keyword. (As a side note: I tried using R's apply method instead, but got the error TypeError: 'SignatureTranslatedAnonymousPackage' object is not callable ... I assume this is referring to my function sf?)
I also tried using the R assignment methods in rpy2 as follows:
R('sf = function(y) { c(\'Y>=1\' = qlogis(mean(y >= 1)), \'Y>=2\' = qlogis(mean(y >= 2)), \'Y>=3\' = qlogis(mean(y >= 3)))}')
R('s <- with({0}, summary(es_score~raingarden + consult_case, fun=sf)'.format(consult))
but ran into issues where the dataframe column names were somehow causing the error: RRuntimeError: Error in (function (file = "", n = NULL, text = NULL, prompt = "?", keep.source = getOption("keep.source"), :
<text>:1:19: unexpected symbol
1: s <- with( n raingarden
I could of course do this all in R, but I have a very involved ETL script in Python, and would thus prefer to keep everything in Python using rpy2 (I did try running my regression with mord for scikit-learn, but it is pretty primitive).
Any suggestions would be most welcome right now.
EDIT
I tried various combinations of @Parfait's suggestions, and qualifying the fun argument is syntactically incorrect, per the PyCharm interpreter (see image with red highlighting at end): ... it doesn't matter what the qualifier is, either; I always get the error SyntaxError: keyword can't be an expression.
On the other hand, with no qualifier, there is no syntax error: , but I do get the error TypeError: 'SignatureTranslatedAnonymousPackage' object is not callable when using the function sf as obtained:
from rpy2.robjects.packages import STAP
with open('/Users/gregsilverman/development/python/rest_api/rest_api/scripts/utils.r', 'r') as f:
string = f.read()
sf = STAP(string, "sf")
With that in mind, I created a package in R with the function sf, imported it, and tried various combos with the only one producing no error, being: print(base._with(consult_case_control, R.summary(formula, fun=gms.sf))) (gms is a reference to the package in R I made).
The output though makes no sense:
Length Class Mode
3 formula call
I am expecting a table ala the one on the UCLA site. Interesting. I am going to try recreating my analysis in R, just for the heck of it. I still would like to complete it in python though.
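As an aside, if the eventual goal is to keep everything in Python, the sf helper itself is easy to mirror with numpy (this is just a sketch of the same cut-point log-odds computation, not a replacement for the rpy2 summary call, and the score list below is hypothetical):

```python
import numpy as np

def sf(y):
    # qlogis(p) in R is the logit, log(p / (1 - p))
    def qlogis(p):
        return np.log(p / (1 - p))
    y = np.asarray(y)
    # same 'Y>=k' cut points as the R helper
    return {'Y>=%d' % k: qlogis(np.mean(y >= k)) for k in (1, 2, 3)}

# hypothetical es_score values, just to show the shape of the result
print(sf([0, 1, 2, 3, 4]))
```

That at least sidesteps rpy2 entirely for the proportional-odds table, leaving only the model fit itself on the R side.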
Consider bracketing the with call and be sure to qualify all arguments including fun:
ro.r['with'](work_case_control, ro.r.summary(formula, ro.r.summary.fun=sf))
Alternatively, import R's base package. And to avoid conflict with Python's named method with() translate the R name:
from rpy2.robjects.packages import importr
base = importr('base', robject_translations={'with': '_with'})
base._with(work_case_control, ro.r.summary(formula, ro.r.summary.fun=sf))
And be sure to properly create your formula. Consider using R's stats packages' as.formula to build from string. Notice too another translation is made due to naming conflict:
stats = importr('stats', robject_translations={'format_perc': '_format_perc'})
formula = stats.as_formula('es_score ~ n + raingarden + consult_case')
I'm working on a huffman encoder/decoder in Python, and am experiencing some unexpected (at least for me) behavior in my code. Encoding the file is fine, the problem occurs when decoding the file. Below is the associated code:
import codecs
import json

def decode(cfile):
    with open(cfile, "rb") as f:
        enc = f.read()
    len_dkey = int(bin(ord(enc[0]))[2:].zfill(8) + bin(ord(enc[1]))[2:].zfill(8), 2)  # length of dictionary
    pad = ord(enc[2])  # number of padding zeros at end of message
    dkey = {int(k): v for k, v in json.loads(enc[3:len_dkey+3]).items()}  # dictionary
    enc = enc[len_dkey+3:]  # actual message in bytes
    com = []
    for b in enc:
        com.extend([bit == "1" for bit in bin(ord(b))[2:].zfill(8)])  # actual encoded message in bits (True/False)
    cnode = 0  # current node for tree traversal
    dec = ""  # decoded message
    for b in com:
        cnode = 2 * cnode + b + 1  # array implementation of tree
        if cnode in dkey:
            dec += dkey[cnode]
            cnode = 0
    with codecs.open("uncompressed_" + cfile, "w", "ISO-8859-1") as f:
        f.write(dec)
The first with open(cfile,"rb") as f call runs very quickly for all file sizes (tested sizes are 1.2MB, 679KB, and 87KB), but the part that slows down the code significantly is the for b in com loop. I've done some timing and I honestly don't know what's going on.
I've timed the whole decode function on each file, as shown below:
87KB 1.5 sec
679KB 6.0 sec
1.2MB 384.7 sec
First of all, I don't even know how to characterize this complexity. Next, I timed a single run through the problematic loop, and found that the line cnode = 2*cnode + b + 1 takes 2e-6 seconds while the if cnode in dkey line takes 0.0 seconds (according to time.clock() on OS X). So it seems as if the arithmetic is slowing down my program significantly...? Which I feel doesn't make sense.
I actually have no idea what is going on, and any help at all would be super welcome
I found a solution to my problem, but I am still left with confusion afterwards. I solved the problem by changing the dec from "" to [], and then changing the dec += dkey[cnode] line to dec.append(dkey[cnode]). This resulted in the following times:
87KB 0.11 sec
679KB 0.21 sec
1.2MB 1.01 sec
As you can see, this immensely cut down the time, so in that respect it was a success. However, I am still confused as to why Python's string concatenation seems to be the problem here.
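The slowdown is the classic quadratic-concatenation pattern: Python strings are immutable, so dec += dkey[cnode] can copy the entire string each time it runs (CPython sometimes optimizes this in place, but only under narrow conditions), giving roughly O(n^2) total work, whereas list.append plus one final join is O(n). A small illustration of the two patterns:

```python
def concat(n):
    s = ""
    for _ in range(n):
        s += "x"            # strings are immutable; this may copy s each time
    return s

def join(n):
    parts = []
    for _ in range(n):
        parts.append("x")   # amortized O(1) appends
    return "".join(parts)   # one final allocation

# both build the same string; join() scales linearly with n
assert concat(1000) == join(1000)
```

Note that with the list version of dec, the final write needs "".join(dec), since the file expects a string rather than a list.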
I'm really new at python and I'm using a script to do some data analysis.
The analysis part is standard and in order to get results I need to import an .epw file.
nhours = 8760
n12h = 730
klimain = 'weather.epw'
data = readclimafile(klimain)
And I use the data array to do analysis. For example, I do this
import numpy as np

def readradfile(radin):
    rad = np.loadtxt(radin)
    return rad

def do_analysis(t, rad):
    R = 6.667
    for j in range(rad.shape[0]):
        q[j] = (20-t[j])/R
So, when I use an .epw for a whole year it works fine.
But when I import an .epw for 8 days (192 hours), I get an error:
q[j] = (20-t[j])/R
IndexError: index 192 is out of bounds for axis 0 with size 192
I changed nhours to 191 and n12h to 16. I don't understand why it doesn't work. The array length isn't defined anywhere else; it comes only from the imported .epw. And it works fine for the larger file, so why does it error on a much smaller one?
Any ideas?
Thank you!
I am reading a file containing single-precision data with 512**3 data points. Based on a threshold, I assign each point a flag of 1 or 0. I wrote two programs doing the same thing, one in Fortran, the other in Python. But the Fortran one takes about 0.1 sec while the Python one takes minutes. Is that normal? Or can you please point out the problem with my Python program:
fortran.f
      program vorticity_tracking
      implicit none
      integer, parameter :: length = 512**3
      real, parameter :: threshold = 1320.0
      character(255) :: filen
      real, dimension(length) :: stored_data
      integer, dimension(length) :: flag
      integer index
      filen = "vor.dat"
      print *, "Reading the file ", trim(filen)
      open(10, file=trim(filen), form="unformatted",
     &     access="direct", recl=length*4)
      read (10, rec=1) stored_data
      close(10)
      do index = 1, length
         if (stored_data(index).ge.threshold) then
            flag(index) = 1
         else
            flag(index) = 0
         end if
      end do
      stop
      end program
Python file:
#!/usr/bin/env python
import struct
import numpy as np
f_type = 'float32'
length = 512**3
threshold = 1320.0
file = 'vor_00000_455.float'
f = open(file,'rb')
data = np.fromfile(f, dtype=f_type, count=-1)
f.close()
flag = []
for index in range(length):
    if data[index] >= threshold:
        flag.append(1)
    else:
        flag.append(0)
********* Edit ******
Thanks for your comments. I am not sure then how to do it the way the Fortran code does. I tried the following, but it is still just as slow.
flag = np.ndarray(length, dtype=np.bool)
for index in range(length):
    if data[index] >= threshold:
        flag[index] = 1
    else:
        flag[index] = 0
Can anyone please show me?
Your two programs are totally different. Your Python code repeatedly changes the size of a structure. Your Fortran code does not. You're not comparing two languages, you're comparing two algorithms and one of them is obviously inferior.
In general, Python is an interpreted language while Fortran is a compiled one, so there is some overhead in Python. But it shouldn't take that long.
One thing that can be improved in the python version is to replace the for loop by an index operation.
import numpy

# create flag filled with zeros, with the same shape as data
flag = numpy.zeros(data.shape)
# get a bool array stating where data >= threshold
barray = data >= threshold
# everywhere barray == True, put a 1 in flag
flag[barray] = 1
shorter version:
# create flag filled with zeros, with the same shape as data
flag = numpy.zeros(data.shape)
# combine the two operations without the temporary barray
flag[data >= threshold] = 1
Try this for Python:
flag = data >= threshold
It will give you an array of flags as you want.
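To tie the answers together: the entire element-by-element loop collapses to one vectorized numpy comparison, and astype converts the bool mask into the 0/1 integer flags from the question. The array below is random stand-in data, far smaller than the real 512**3 file:

```python
import numpy as np

rng = np.random.default_rng(0)
# random stand-in for the 512**3 single-precision file (much smaller here)
data = rng.uniform(0.0, 2000.0, size=512**2).astype(np.float32)
threshold = 1320.0

# one vectorized comparison replaces the entire Python-level loop;
# astype turns the bool mask into 0/1 integer flags
flag = (data >= threshold).astype(np.int32)

# spot-check against the original element-by-element logic
loop_flag = [1 if x >= threshold else 0 for x in data[:100]]
assert list(flag[:100]) == loop_flag
```

The vectorized version does all the comparisons in compiled numpy code, which is why it runs in milliseconds where the pure-Python loop takes minutes.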