User-defined Vector class for Python

I've been looking for a way to deal with vectors in Python and haven't found a solution here or in the documentation that completely fits me.
This is what I've come up with so far for a vector class:
class vec(tuple):
    def __add__(self, y):
        if len(self) != len(y):
            raise TypeError
        else:
            ret = []
            for i, entry in enumerate(self):
                ret.append(entry + y[i])
            return vec(ret)

    def __mul__(self, y):
        t = y.__class__
        if t == int or t == float:
            # scalar multiplication
            ret = []
            for entry in self:
                ret.append(y * entry)
            return vec(ret)
        elif t == list or t == tuple or t == vec:
            # dot product
            if len(y) != len(self):
                print("vec dimensions don't fit")
                raise TypeError
            else:
                ret = 0
                for i, entry in enumerate(self):
                    ret += entry * y[i]
                return ret
There's a little bit more, left out to keep things short.
So far everything's working fine, but I have lots of tiny, specific questions (and will probably post more as they come up):
Are there base classes for the numeric and sequence types, and how can I address them?
How can I make all of this more Pythonic? I want to learn how to write good Python code, so if you find something that's inefficient or just ugly, please tell me.
What about precision? As Python seems to cast from integers to floats only when necessary, input and output are usually of the same type. So there might be problems with very large or small numbers, but I don't really need those currently. Should I generally worry about precision, or does Python do that for me? Would it be better to convert to the largest possible type automatically? Which one is that? What happens beyond that?
I want to use n-dimensional vectors in a project involving lots of vectorial equations and functions, and I'd like to be able to use the usual notation from math textbooks. As you can see, this inherits from tuple (for easy construction, immutability and indexing), and most built-in operators are overridden in order to use the usual (+, -, *, ...) notation. They only work if the left operand is a vec (can I change that? see the sketch below). Multiplication covers both the dot and the scalar product, and __pow__ is also used for the cross product if both vecs are 3D.
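On the first and last of those points, a minimal sketch (not from the original post): the abstract base class for Python's numeric types is numbers.Number (list/tuple-like types similarly share the Sequence ABC), and defining __rmul__ makes the operator work when the left operand is not a vec:

from numbers import Number   # base class of int, float, complex, ...

class vec(tuple):
    def __mul__(self, y):
        if isinstance(y, Number):
            # scalar multiplication
            return vec(y * entry for entry in self)
        if isinstance(y, (list, tuple)):
            # dot product (vec itself is a tuple subclass)
            if len(y) != len(self):
                raise TypeError("vec dimensions don't fit")
            return sum(a * b for a, b in zip(self, y))
        return NotImplemented

    # Python calls this for `3 * v` after int.__mul__ returns NotImplemented
    __rmul__ = __mul__

With this, both vec((1, 2, 3)) * 2 and 2 * vec((1, 2, 3)) work. As for precision: plain Python ints are arbitrary-precision, while floats are C doubles, so there is no "largest type" to convert to beyond float.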
Test Script:
import random
import cProfile
import numpy as np
import utils   # the module containing the vec class above

def testVec():
    rnd = random.Random()
    for i in range(0, 10000):
        a = utils.vec((rnd.random(), rnd.random(), rnd.random()))
        ### functions to test
        a * (a * a)
        ###

def testNumpy():
    rnd = random.Random()
    for i in range(0, 10000):
        a = np.array((rnd.random(), rnd.random(), rnd.random()))
        ###
        a.dot(a) * a
        ###

cProfile.run('testNumpy()')
# -> 50009 function calls in 0.135 seconds
cProfile.run('testVec()')
# -> 100009 function calls in 0.064 seconds

Related

Attempting to use np.insert in a created class which has subscripts yields "object does not support item assignment"

I have defined my own class which takes in any matrix and converts it into three numpy arrays inside parentheses (which I assume means it's a tuple). Furthermore, I have added a __getitem__ method which makes the output arrays subscriptable just like normal arrays.
My class is called MatrixConverter, and say x is some random matrix, then:
q=MatrixConverter(x)
Where q gives:
q=(array[1,2,3,4],array[5,6,7,8],array[9,10,11,12])
(Note that this is just an example, it does not produce three arrays with consecutive numbers)
Then, for example, by my getitem method, it allows for:
q[0] = array[1,2,3,4]
q[0][1] = 2
Now, I'm attempting to design a method to add an element into one of the arrays using the np.insert function, such as the following:
class MatrixConverter:
    # some code here
    def __change__(self, n, x):
        self[1] = np.insert(self[1], n, x)
        return self
Then, my desired output for the case where n=2 and x=70 is the following:
In:q.__change__(2,70)
Out:(array[1,2,3,4],array[5,6,70,7,8],array[9,10,11,12])
However, this gives me a TypeError: 'MatrixConverter' object does not support item assignment.
Any help/debugging tips? Should I perhaps use np.concatenate instead?
Thank you!
Change your method to:
def __change__(self, n, x):
    temp = np.insert(self[1], n, x)
    self[1] = temp
    return self
This will help you distinguish between a problem with the insert and a problem with the self[1] = ... setting.
I don't think the problem is with the insert call, but you need to write code that doesn't confuse you on such matters. That's a basic part of debugging.
Beyond that you haven't given us enough code to help you. For example, what does your __getitem__ look like?
Expressions like array[1,2,3,4] tell me that you aren't actually copying from your code. That's not a valid Python expression or array display.
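To make the error itself concrete: it means the class defines __getitem__ but not __setitem__. A minimal sketch, assuming the arrays are kept in an internal list (the question doesn't show the constructor, so the storage here is hypothetical):

import numpy as np

class MatrixConverter(object):
    def __init__(self, rows):
        # hypothetical storage: one numpy array per row of the input matrix
        self._rows = [np.asarray(r) for r in rows]

    def __getitem__(self, i):
        return self._rows[i]

    def __setitem__(self, i, value):
        # this is the hook that `self[1] = ...` needs
        self._rows[i] = value

q = MatrixConverter([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
q[1] = np.insert(q[1], 2, 70)   # insert 70 before index 2
print(q[1])                     # [ 5  6 70  7  8]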

Is this an ideal way to automate object instantiation using functions?

I'm trying to design a function in Python which instantiates a number of objects based on a user input. I've derived one which works, and it looks as follows
class Node(): ...

def initialise():
    binary_tree = []
    opt = int(input("Enter the number of nodes you want\n"))
    for i in range(opt):
        a = Node()
        binary_tree.append(a)
    return binary_tree
although I'm not sure that this is the ideal way to do this.
Is there a better way of programming a function such as the one I've described, or is the above method sufficient for efficiency and clarity purposes?
Any responses are appreciated, thank you in advance.
Your code seems to work ok.
There are other ways to write it; for example, you could use a list comprehension, which might be slightly faster than using .append() and also needs slightly less code:
def initialise():
    opt = int(input("Enter the number of nodes you want\n"))
    return [Node() for _ in range(opt)]
Careful: this (as well as your version) might raise a ValueError if the user enters a string that cannot be converted to an int.
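A minimal sketch of one way to guard against that, re-prompting until the input parses as an integer:

def initialise():
    while True:
        try:
            opt = int(input("Enter the number of nodes you want\n"))
            break
        except ValueError:
            print("Please enter a whole number.")
    return [Node() for _ in range(opt)]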

Different ways to define a function in SageMath

I would like to know why these two "programs" produce different output
f(x)=x^2
f(90).mod(7)
and
def f(x):
    return x^2
f(90).mod(7)
Thanks
Great question! Let's take a deeper look at the functions in question.
f(x)=x^2
def g(x):
    return x^2

print(type(g(90)))
print(type(f(90)))
This yields
<type 'sage.rings.integer.Integer'>
<type 'sage.symbolic.expression.Expression'>
So what you are seeing is the difference between a symbolic function defined with the f(x) notation and a Python function using the def keyword. In Sage, the former has access to a lot of stuff (e.g. calculus) that plain old Sage integers won't have.
What I would recommend in this case, just for what you need, is
sage: a = f(90)
sage: ZZ(a).mod(7)
1
or actually the possibly more robust
sage: mod(a,7)
1
Longer explanation.
For symbolic stuff, mod isn't what you think. In fact, I'm not sure it will do anything (see the documentation for mod to see how to use it for polynomial modular work over ideals, though). Here's the code (accessible with x.mod??, documentation accessible with x.mod?):
from sage.rings.ideal import is_Ideal
if not is_Ideal(I) or not I.ring() is self._parent:
    I = self._parent.ideal(I)
    # raise TypeError, "I = %s must be an ideal in %s" % (I, self.parent())
return I.reduce(self)
And it turns out that for generic rings (like the symbolic 'ring'), nothing happens in that last step:
return f
This is why we need to, one way or another, ask it to be an integer again. See Trac 27401.

Inaccurate Large Fibonacci Numbers in Python

I am currently implementing this simple code trying to find the n-th element of the Fibonacci sequence using Python 2.7:
import numpy as np

def fib(n):
    F = np.empty(n + 2)
    F[1] = 1
    F[0] = 0
    for i in range(2, n + 1):
        F[i] = F[i-1] + F[i-2]
    return int(F[n])
This works fine for n < 79, but after that I get wrong numbers. For example, according to Wolfram Alpha, F(79) should be equal to 14472334024676221, but fib(79) gives me 14472334024676220. I think this could be caused by the way Python deals with integers, but I have no idea what exactly the problem is. Any help is greatly appreciated!
np.empty with no dtype argument gives you an array of 64-bit floats, and a float64 can represent integers exactly only up to 2**53; F(79) is the first Fibonacci number beyond that, which is why its last digit comes out wrong. An explicit integer dtype would only postpone the problem, since numpy integers are fixed-width too (64 or 32 bits, depending on architecture).
Pure Python would let you have arbitrarily long integers; numpy does not.
So it's more the way numpy deals with numbers; pure Python would do just fine.
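A quick check (a sketch) of where exactness is lost:

import numpy as np

F = np.empty(80)                    # dtype defaults to float64
F[0], F[1] = 0, 1
for i in range(2, 80):
    F[i] = F[i-1] + F[i-2]

print(int(F[78]))                   # 8944394323791464  -- exact, still below 2**53
print(int(F[79]))                   # 14472334024676220 -- rounded, F(79) > 2**53
print(8944394323791464 + 5527939700884757)   # pure Python ints: 14472334024676221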
Python will deal with integers perfectly fine here. Indeed, that is the beauty of python. numpy, on the other hand, introduces ugliness and just happens to be completely unnecessary, and will likely slow you down. Your implementation will also require much more space. Python allows you to write beautiful, readable code. Here is Raymond Hettinger's canonical implementation of iterative fibonacci in Python:
def fib(n):
    x, y = 0, 1
    for _ in range(n):
        x, y = y, x + y
    return x
That is O(n) time and constant space. It is beautiful, readable, and succinct. It will also give you the correct integer as long as you have memory to store the number on your machine. Learn to use numpy when it is the appropriate tool, and as importantly, learn to not use it when it is inappropriate.
Unless you want to generate a list of all the Fibonacci numbers up to F(n), there is no need for a list, numpy, or anything else like that; a simple loop and two variables are enough, since you only ever need the two previous values:
def fib(n):
    Fk, Fk1 = 0, 1
    for _ in range(n):
        Fk, Fk1 = Fk1, Fk + Fk1
    return Fk
Of course, there are better ways to do it using the mathematical properties of the Fibonacci numbers. From those we know that there is a matrix whose powers give the right result:
import numpy

def fib_matrix(n):
    mat = numpy.matrix([[1, 1], [1, 0]], dtype=object) ** n
    return mat[0, 1]
I assume the exponentiation there is implemented as exponentiation by squaring, making it more efficient than the previous method.
Using the properties of the underlying Lucas sequence, it is possible to do the same without the matrix, just as efficiently as exponentiation by squaring and with as few variables as the simple loop, but that version is a little harder to understand at first glance because it requires more mathematical background (see the fast-doubling sketch below).
The closed form, the one with the golden ratio, will give you the result even faster, but it carries the risk of being inaccurate because it uses floating-point arithmetic.
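For reference, a minimal sketch of that fast-doubling idea (not from the original answer), using the identities F(2k) = F(k)*(2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2, which need only O(log n) arithmetic steps:

def fib_doubling(n):
    # returns the pair (F(n), F(n+1))
    if n == 0:
        return 0, 1
    a, b = fib_doubling(n // 2)   # a = F(k), b = F(k+1) with k = n // 2
    c = a * (2 * b - a)           # F(2k)
    d = a * a + b * b             # F(2k+1)
    if n % 2 == 0:
        return c, d
    return d, c + d

print(fib_doubling(79)[0])        # 14472334024676221, exact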
As an additional word to the previous answer by hiro protagonist: note that if using numpy is a requirement, you can solve your issue very easily by replacing:
F = np.empty(n+2)
with
F = np.empty(n+2, dtype=object)
but it will not do anything more than transfer the computation back to pure Python.

Semantic Type Safety in Python

In my recent project I have the problem that some values are often misinterpreted. For instance, I calculate a wave as a sum of two waves (for which I need two amplitudes and two phase shifts), and then sample it at 4 points. I pass these tuples of four values to different functions, but sometimes I make the mistake of passing wave parameters instead of sample points.
These errors are hard to find, because all the calculations work without any error, but the values are totally meaningless in this context, and so the results are just wrong.
What I want now is some kind of semantic type. I want to state that one function returns sample points and another function expects sample points, and that I can do nothing that conflicts with these declarations without immediately getting an error.
Is there any way to do this in Python?
I would recommend implementing specific data types to be able to distinguish between different kind of information with the same structure.
You can simply subclass list for example and then do some type checking at runtime within your functions:
class WaveParameter(list):
    pass

class Point(list):
    pass

# you can use them just like lists
point = Point([1, 2, 3, 4])
wp = WaveParameter([5, 6])

# of course all methods from list are inherited
wp.append(7)
wp.append(8)

# let's check them
print(point)
print(wp)

# type checking examples
print(isinstance(point, Point))
print(isinstance(wp, Point))
print(isinstance(point, WaveParameter))
print(isinstance(wp, WaveParameter))
So you can include this kind of type checking in your functions, to make sure the correct kind of data was passed to it:
import logging
log = logging.getLogger(__name__)

def example_function_with_waveparameter(data):
    if not isinstance(data, WaveParameter):
        log.error("received wrong parameter type (%s instead of WaveParameter)"
                  % type(data))
    # and then do the stuff
or simply assert:
def example_function_with_waveparameter(data):
    assert isinstance(data, WaveParameter)
Python's notion of a "semantic type" is called a class, but, as mentioned, Python is dynamically typed, so even using custom classes instead of tuples you won't get any compile-time error; at best you'll get runtime errors, if your classes are designed in such a way that trying to use one in place of the other fails.
Now classes are not just about data, they are about behaviour too. If you have functions that do waveform-specific computations, these functions would probably become methods of a Waveform class, and likewise for the Point part, and this alone might be enough to avoid logical errors like passing a "waveform" tuple to a function expecting a "point" tuple (a sketch of this follows below).
To make a long story short: if you want a statically typed functional language, Python is not the right tool (Haskell might be a better choice). If you really want / have to use Python, try using classes and methods instead of tuples and functions. It still won't detect type errors at compile time, but chances are you'll have fewer type errors, and the ones you do have will be detected at runtime instead of producing wrong results.
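A minimal sketch of that class-based approach (all names here are illustrative, not from the question): the wave parameters and the sample points each get their own class, the sampling computation lives on the wave class, and mixing the two up fails immediately:

import math

class SamplePoints(tuple):
    pass

class WaveParameters(object):
    def __init__(self, a1, phi1, a2, phi2):
        self.a1, self.phi1 = a1, phi1
        self.a2, self.phi2 = a2, phi2

    def sample(self, ts):
        # sampling is behaviour of the wave, so it returns SamplePoints
        return SamplePoints(self.a1 * math.sin(t + self.phi1) +
                            self.a2 * math.sin(t + self.phi2)
                            for t in ts)

def analyse(points):
    if not isinstance(points, SamplePoints):
        raise TypeError("expected SamplePoints, got %s" % type(points))
    # ... work with the samples ...

wp = WaveParameters(1.0, 0.0, 0.5, 1.57)
analyse(wp.sample((0.0, 0.5, 1.0, 1.5)))   # ok
analyse(wp)                                # fails immediately with TypeError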
