I'm working on my Computer Graphics homework. Since we're allowed to choose the programming language we want, I decided this would be a good occasion to learn some Python, but I eventually ran into some trouble.
In one module I have some functions like this:
def function1(a, b, matrix):
    ...
    function2(matrix)

def function2(matrix):
    ...
    function3(x, y, matrix)

def function3(x, y, matrix):
    ...
    matrix[x][y] = something
Now, from a different module, I call function1. It should then call function2, passing it the matrix, which should in turn call function3, again passing the matrix. However, I get a "list assignment index out of range" error when attempting to access matrix[x][y].
If I call them on a matrix from the same module, it works, so I thought the functions might not realize they are receiving a matrix. I changed the function definitions to something like

def function2(matrix=[[]]):

but I still get the same error. I'm kind of stuck.
Sorry everyone, you were right.
There was this one pixel in a 500x500 matrix that was actually at matrix[249][500].
When I wrote the checks, I tested whether the indices were <= 500 instead of < 500; I don't know why.
Thanks, I was pretty sure I was screwing something else up, especially after I added my (faulty) tests, since this is my first time writing Python.
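For anyone who hits the same off-by-one: the usual idiom is a bounds check with an exclusive upper bound. This is a minimal sketch, not the original homework code; SIZE and set_pixel are made-up names for illustration:

```python
SIZE = 500
matrix = [[0] * SIZE for _ in range(SIZE)]

def set_pixel(matrix, x, y, value):
    # valid indices run 0..SIZE-1, so the upper bound must be exclusive (<, not <=)
    if 0 <= x < SIZE and 0 <= y < SIZE:
        matrix[x][y] = value
    else:
        raise IndexError("pixel (%d, %d) outside %dx%d matrix" % (x, y, SIZE, SIZE))
```

With <= 500 the check would have let matrix[249][500] through, which is exactly one past the last valid column.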
I'm working on converting a code that solves a BVP (Boundary Value Problem) from MATLAB to Python (SciPy). However, I'm having a little bit of trouble. I wanted to pass a few arguments into the function and the boundary conditions; so in MATLAB, it's something like:
solution = bvp4c(@(x,y)fun(x,y,args), @(ya,yb)boundarycond(ya,yb,arg1,args), solinit);
Where arg1 is a value, and args is a structure or a class instance. So I've been trying to do this on scipy, and the closest I get to is something like:
solBVP = solve_bvp(func(x,y,args), boundC(x,y,arg1,args), x, y)
But then it errors out saying that y is not defined (which it isn't, because it's the vector of first order DEs).
Has anyone tried to pass additional arguments into solve_bvp? If so, how did you manage to do it? I have a workaround right now essentially putting args and arg1 as global variables, but that seems incredibly sketchy if I want to make this code somewhat modular.
Thank you!
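One way to pass extra arguments without globals is to close over them with a lambda (or functools.partial), much like the MATLAB anonymous functions above. A sketch under assumed names, not the original code, using a toy problem y'' = -k*y with y(0) = 0, y(pi/2) = 1; fun, bc, and k stand in for the real functions and args:

```python
import numpy as np
from scipy.integrate import solve_bvp

def fun(x, y, k):
    # first-order system for y'' = -k*y
    return np.vstack([y[1], -k * y[0]])

def bc(ya, yb, k):
    # boundary conditions y(0) = 0, y(pi/2) = 1
    return np.array([ya[0], yb[0] - 1.0])

k = 1.0
x = np.linspace(0.0, np.pi / 2, 10)
y_guess = np.zeros((2, x.size))
y_guess[0] = x / x[-1]  # rough linear initial guess

# the lambdas capture k (and could capture a whole args object) without globals
sol = solve_bvp(lambda x, y: fun(x, y, k),
                lambda ya, yb: bc(ya, yb, k),
                x, y_guess)
```

Because the lambdas are created where k is in scope, the same module-level functions stay reusable with different parameter sets, which keeps the code modular.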
I've only recently learned about decorators, and despite reading nearly every search result I can find about this question, I cannot figure this out. All I want to do is define some function "calc(x,y)", and wrap its result with a series of external functions, without changing anything inside of my function, nor its calls in the script, such as:
@tan
@sqrt
def calc(x, y):
    return x + y

### calc(x,y) = tan(sqrt(calc(x,y)))
### Goal is to have every call of calc in the script automatically nest like that.
After reading about decorators for almost 10 hours yesterday, I got the strong impression this is what they were used for. I do understand that there are various ways to modify how the functions are passed to one another, but I can't find any obvious guide on how to achieve this. I read that maybe functools wraps can be used for this purpose, but I cannot figure that out either.
Most of the desire here is to be able to quickly and easily test how different functions modify the results of others, without having to tediously nest functions in parentheses... that is, to avoid having to mess with parentheses at all, having my modifier test functions defined on their own lines.
A decorator is simply a function that takes a function and returns another function.
import math

def tan(f):
    def g(x, y):
        return math.tan(f(x, y))
    return g
I would like to know why these two "programs" produce different output
f(x) = x^2
f(90).mod(7)

and

def f(x):
    return x^2

f(90).mod(7)
Thanks
Great question! Let's take a deeper look at the functions in question.
f(x) = x^2

def g(x):
    return x^2

print type(g(90))
print type(f(90))
This yields
<type 'sage.rings.integer.Integer'>
<type 'sage.symbolic.expression.Expression'>
So what you are seeing is the difference between a symbolic function defined with the f(x) notation and a Python function using the def keyword. In Sage, the former has access to a lot of stuff (e.g. calculus) that plain old Sage integers won't have.
What I would recommend in this case, just for what you need, is
sage: a = f(90)
sage: ZZ(a).mod(7)
1
or actually the possibly more robust
sage: mod(a,7)
1
Longer explanation.
For symbolic stuff, mod isn't what you think. In fact, I'm not sure it will do anything (see the documentation for mod to see how to use it for polynomial modular work over ideals, though). Here's the code (accessible with x.mod??, documentation accessible with x.mod?):
from sage.rings.ideal import is_Ideal
if not is_Ideal(I) or not I.ring() is self._parent:
    I = self._parent.ideal(I)
    # raise TypeError, "I = %s must be an ideal in %s" % (I, self.parent())
return I.reduce(self)
And it turns out that for generic rings (like the symbolic 'ring'), nothing happens in that last step:
return f
This is why we need to, one way or another, ask it to be an integer again. See Trac 27401.
Suppose we have a Python function:
def func():
    # if called in the body of another function, do something
    # if called as an argument to a function, do something different
    pass
func() can be called in the body of another function:
def funcA():
    func()
func() can be also called as an argument to a function:
def funcB(arg1):
    pass

def funcC(**kwargs):
    pass

funcB(func())
funcC(kwarg1=func())
Is there a way to distinguish between those two cases in the body of func()?
EDIT. Here is my use case. I'd like to use Python as a language for rule based 3D model generation. Each rule is a small Python function. Each subsequent rule refines the model and adds additional details. Here is an example rule set. Here is a tutorial describing how the rule set works. My rule language mimics a rule language called CGA shape grammar. If this StackExchange question can be solved, my rule language could be significantly simplified.
EDIT2. Code patching would also suffice for me. For example all cases when func is called in the body of another function are substituted for something like
call_function_on_the_right()>>func()
Others have already pointed out that this might not be a good idea. You could achieve the same thing by requiring, e.g., funct() at top level and funca() when used as an argument; both could call the same func() with a keyword argument specifying which context you are in. But I'm not here to argue whether this is a good idea or not, I'm here to answer the question.
Is it possible? It might be.
How would you do this, then? Well, the first thing to know is that you can use inspect.stack() to get information about the context you were called from.
You could figure out the line you were called from and read the source file to see how the function is called on that line.
There are two problems with this, however.
The source file could have been modified since the module was loaded, giving you wrong information.
What if you do func(func())? It's called in both ways on the same line!
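Subject to those caveats, the inspect.stack() part can be sketched like this; it is a toy illustration, not a full solution, since it only recovers the caller's source line:

```python
import inspect

def func():
    # look one frame up the stack for the source line that called us
    caller = inspect.stack()[1]
    # code_context can be None when no source is available (e.g. in a REPL)
    context = caller.code_context or [""]
    return context[0].strip()

def funcA():
    return func()  # func() will see this line as its calling context
```

Deciding whether that line actually uses the result as an argument still requires the bytecode analysis described next.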
To get accurate information, you should look at the frame objects in the stack. They have a f_code member that contains the bytecode and a f_lasti member that contains the position of the last instruction. Using those you should be able to figure out which function on the line is currently being called and where the return value is going. You have to parse the whole bytecode (have a look at the dis module) for the frame, though, and keep track of the internal stack used by the interpreter to see where the return value goes.
Now, I'm not 100% sure that it'll work, but I can't see why it wouldn't. The only thing I can see that could "go wrong" is if keeping track of the return value proves to be too hard. But since you only have to keep track of it for the duration of one line of code, there really shouldn't be any structures that would be "impossible" to handle, as far as I can see.
I didn't say it would be easy, only that it might be possible. :)
So I have a time-critical section of code within a Python script, and I decided to write a Cython module (with one function -- all I need) to replace it. Unfortunately, the execution speed of the function I'm calling from the Cython module (which I'm calling within my Python script) isn't nearly as fast as I tested it to be in a variety of other scenarios. Note that I CANNOT share the code itself because of contract law! See the following cases, and take them as an initial description of my issue:
(1) Execute Cython function by using the Python interpreter to import the module and run the function. Runs relatively quickly (~0.04 sec on ~100 separate tests, versus original ~0.24 secs).
(2) Call Cython function within Python script at 'global' level (i.e. not inside any function). Same speed as case (1).
(3) Call Cython function within Python script, with Cython function inside my Python script's main function; tested with the Cython function in global and local namespaces, all with the same speed as case (1).
(4) Same as (3), but inside a simple for-loop within said Python function. Same speed as case (1).
(5) problem! Same as (4), but inside yet another for-loop: Cython function's execution time (whether called globally or locally) balloons to ~10 times that of the other cases, and this is where I need the function to get called. Nothing odd to report about this loop, and I tested all of the components of this loop (adjusting/removing what I could). I also tried using a 'while' loop for giggles, to no avail.
"One thing I've yet to try is making this inner-most loop a function and going from there." EDIT: Just tried this- no luck.
Thanks for any suggestions you have- I deeply regret not being able to share my code...it hurts my soul a little, but my client just can't have this code floating around. Let me know if there is any other information that I can provide!
-The Real Problem and an Initial (ugly) Solution-
It turns out that the best hint in this scenario was the obvious one (as usual): it wasn't the for-loop that was causing the problem; why would it? After a few more tests, it became obvious that something about the way I was calling my Cython function was wrong, because I could call it elsewhere (using an input variable different from the one going to the 'real' Cython function) without the performance loss issue.
The underlying issue: data types. I wrote my Cython function to expect a list full of standard floats. Unfortunately, my code did this:
function_input = list(numpy_array_containing_npfloat64_data)  # yuck
# note: type(function_input[0]) is numpy.float64, not float
output = Cython_Function(function_input)
inside the Cython function:
def Cython_Function(list function_input):
    cdef many_vars
    # process lots of vars expecting C floats
    # slowness from converting numpy.float64 --> float???
    # type(output) is list
    return output
I'm aware that I can play around more with types in the Cython function, which I very well may do to prevent having to 'list' an existing numpy array. Anyway, here is my current solution:
function_input = [float(x) for x in function_input]
I welcome any feedback and suggestions for improvement. The function_input numpy array doesn't really need the precision of numpy.float64, but it does get used a few times before getting passed to my Cython function.
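One aside worth noting (not from the original post): NumPy's tolist() converts to plain Python floats in a single step, whereas list() keeps numpy.float64 scalars, which is exactly the trap described above:

```python
import numpy as np

arr = np.linspace(0.0, 1.0, 5)  # stand-in for the real float64 data

as_np_scalars = list(arr)    # elements are still numpy.float64
as_py_floats = arr.tolist()  # elements are plain Python floats

print(type(as_np_scalars[0]).__name__)  # float64
print(type(as_py_floats[0]).__name__)   # float
```

So arr.tolist() would replace both the list(...) call and the [float(x) for x in ...] workaround in one step.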
It could be that, while individually, each function call with the Cython implementation is faster than its corresponding Python function, there is more overhead in the Cython function call because it has to look up the name in the module namespace. You can try assigning the function to a local callable first, for example:
from module import function

def main():
    my_func = function
    for i in sequence:
        my_func()
If possible, you should try to include the loops within the Cython function, which would reduce the overhead of a Python loop to the (very minimal) overhead of a compiled C loop. I understand that it might not be possible (i.e. need references from a global/larger scope), but it's worth some investigation on your part. Good luck!
function_input = list(numpy_array_containing_npfloat64_data)

def Cython_Function(list function_input):
    cdef many_vars
I think the problem is in using the numpy array as a list... can't you use the np.ndarray directly as the input to the Cython function? Something like (after cimport numpy as np):

def Cython_Function(np.ndarray[np.float64_t, ndim=1] input):
    ....