Python subtracting floats - python

I have the following code embedded in a class. Whenever I run distToPoint, it gives the error 'unsupported operand type(s) for -: 'NoneType' and 'float''. I don't know why it's returning a NoneType or how to get the subtraction to work.
Both self and p are supposed to be pairs.
def __init__(self, x, y):
    self.x = float(x)
    self.y = float(y)

def distToPoint(self, p):
    self.ax = self.x - p.x
    self.ay = self.y - p.y
    self.ac = math.sqrt(pow(self.ax, 2) + pow(self.ay, 2))

For the sake of comparison:
import math

class Point(object):
    def __init__(self, x, y):
        self.x = x + 0.
        self.y = y + 0.

    def distToPoint(self, p):
        dx = self.x - p.x
        dy = self.y - p.y
        return math.sqrt(dx*dx + dy*dy)

a = Point(0, 0)
b = Point(3, 4)
print(a.distToPoint(b))
returns
5.0

You should check what value of p you are sending to the function, and make sure it has x and y attributes that are floats.
Old post (on second thought, I don't think you were trying to use distToPoint this way):
distToPoint doesn't return a value; this is probably the problem.
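If the distance is what the caller expects back, a minimal fix is to return the computed value instead of only storing it on self (a sketch of the corrected method):
def distToPoint(self, p):
    ax = self.x - p.x
    ay = self.y - p.y
    # return the distance rather than leaving it as an attribute
    return math.sqrt(pow(ax, 2) + pow(ay, 2))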

Related

How can I use an element of a python array that is an instance of an object in a function by using the index into the array?

I have a task wherein I am to determine if a Point(x,y) is closer than some amount to any of the Points that are stored in a Python array. Here is the test code:
from point import *

collection = []
p1 = Point(3,4)
collection.append(p1)
print(collection)
p2 = Point(3,0)
collection.append(p2)
print(collection)
p3 = Point(3,1)
radius = 1
print( collection[1] ) # This works, BTW
p = collection[1]
print( p ) # These two work also!
for i in collection:
    p = collection[i] # THIS FAILS
    if distance(p3,p) < 2*radius:
        print("Point "+collection[i]+" is too close to "+p3)
The file point.py contains:
import math

class Point:
    '''Creates a point on a coordinate plane with values x and y.'''
    COUNT = 0

    def __init__(self, x, y):
        '''Defines x and y variables'''
        self.X = x
        self.Y = y

    def move(self, dx, dy):
        '''Determines where x and y move'''
        self.X = self.X + dx
        self.Y = self.Y + dy

    def __str__(self):
        return "Point(%s,%s)"%(self.X, self.Y)

    def __str__(self):
        return "(%s,%s)"%(self.X,self.Y)

def testPoint(x=0,y=0):
    '''Returns a point and distance'''
    p1 = Point(3, 4)
    print (p1)
    p2 = Point(3,0)
    print (p2)
    return math.hypot(p1, p2)

def distance(self, other):
    dx = self.X - other.X
    dy = self.Y - other.Y
    return math.sqrt(dx**2 + dy**2)

#p1 = Point(3,4)
#p2 = Point(3,0)
#print ("p1 = %s"%p1)
#print ("distance = %s"%(distance(p1, p2)))
Now, I have a couple of questions here to help me understand.
1. In the test case, why doesn't printing the list use the __str__ function to print each Point out as '(x,y)'?
2. In 'if distance(p3,collection[i])', why isn't collection[i] recognized as a Point, which the distance function is expecting?
3. In the 'p = collection[i]' statement, why does Python complain that list indices must be integers or slices, not Point?
It appears that the collection list is not recognized as a list of Point instances. I'm confused, as in other OO languages like Objective-C or Java these are simple things to do.
1. Take a look at this question; __repr__() is used when rendering things in lists.
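For example, giving the Point class above a __repr__ (a sketch, reusing the same format as its __str__) makes the list itself print readably:
    def __repr__(self):
        # lists call repr() on each element, so printing collection now shows the points
        return "(%s,%s)" % (self.X, self.Y)
With that added, print(collection) shows [(3,4), (3,0)] instead of the default object addresses.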
2. (and 3.) I'm not sure I follow your questions, but the problem in your code is that Python's for loop hands you the object itself, not the index. So:
for i in collection:
    p = collection[i] # THIS FAILS
    if distance(p3,p) < 2*radius:
        print("Point "+collection[i]+" is too close to "+p3)
should be:
for p in collection:
    if distance(p3,p) < 2*radius:
        print(f"Point {p} is too close to {p3}")

finding distance between two points in python, passing inputs via two different objects

I have to write code to find the distance between two points by passing the values via two objects, as below.
But I am getting TypeError: __init__() missing 3 required positional arguments: 'x', 'y', and 'z'
class Point:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

    def __str__(self):
        return '(point: {},{},{})'.format(self.x, self.y, self.z)

    def distance(self, other):
        return sqrt( (self.x-other.x)**2 + (self.y-other.y)**2 + (self.z-other.z)**2 )

p = Point()
p1 = Point(12, 3, 4)
p2 = Point(4, 5, 6)
p3 = Point(-2, -1, 4)
print(p.distance(p1,p3))
The problem comes from this line:
p = Point()
When you defined your class, you specified that it has to be passed 3 parameters to be initialised (def __init__(self, x, y, z)).
If you still want to be able to create this Point object without having to pass those 3 parameters, you can make them optional, like this:
def __init__(self, x=0, y=0, z=0):
    self.x = x
    self.y = y
    self.z = z
This way, if you don't specify these parameters (as you did), it will create a point with coordinates (0, 0, 0) by default.
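With those defaults, the no-argument construction works, though note that distance takes a single other point, so the final call in the question needs adjusting too (a sketch):
p = Point()             # now fine: creates the point (0, 0, 0)
p1 = Point(12, 3, 4)
p3 = Point(-2, -1, 4)
print(p1.distance(p3))  # distance compares self against one other point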
You are not passing the required 3 arguments for p = Point(). Here is a fixed version:
from math import sqrt

class Point:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

    def __str__(self):
        return '(point: {},{},{})'.format(self.x, self.y, self.z)

    def distance(self, other):
        return sqrt( (self.x-other.x)**2 + (self.y-other.y)**2 + (self.z-other.z)**2 )

# p = Point() # not required
p1 = Point(12, 3, 4)
p2 = Point(4, 5, 6)
p3 = Point(-2, -1, 4)
print(p1.distance(p3)) # use this to find the distance between p1 and any other point
# or use this
print(Point.distance(p1,p3))
import math

class Point:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

    def __str__(self):
        return '(point: {},{},{})'.format(self.x, self.y, self.z)

    def distance(self, other):
        return math.sqrt( (self.x-other.x)**2 + (self.y-other.y)**2 + (self.z-other.z)**2 )

p1 = Point(12, 3, 4)
p2 = Point(4, 5, 6)
p3 = Point(-2, -1, 4)
print(Point.distance(p1,p3))
It works like this. You should not define a point p separate from the other three points; every point is a separate instance. When you want the distance, just call the method through the class (or on one of the instances).

Class point - Python

The question asks to "Write a method add_point that adds the position of the Point object given as an argument to the position of self". So far my code is this:
import math

epsilon = 1e-5

class Point(object):
    """A 2D point in the cartesian plane"""
    def __init__(self, x, y):
        """
        Construct a point object given the x and y coordinates

        Parameters:
            x (float): x coordinate in the 2D cartesian plane
            y (float): y coordinate in the 2D cartesian plane
        """
        self._x = x
        self._y = y

    def __repr__(self):
        return 'Point({}, {})'.format(self._x, self._y)

    def dist_to_point(self, other):
        changex = self._x - other._x
        changey = self._y - other._y
        return math.sqrt(changex**2 + changey**2)

    def is_near(self, other):
        changex = self._x - other._x
        changey = self._y - other._y
        distance = math.sqrt(changex**2 + changey**2)
        if distance < epsilon:
            return True

    def add_point(self, other):
        new_x = self._x + other._x
        new_y = self._y + other._y
        new_point = new_x, new_y
        return new_point
However, I got this error message:
Input: pt1 = Point(1, 2)
--------- Test 10 ---------
Expected Output: pt2 = Point(3, 4)
Test Result: 'Point(1, 2)' != 'Point(4, 6)'
- Point(1, 2)
? ^ ^
+ Point(4, 6)
? ^ ^
So I'm wondering what is the problem with my code?
Your solution returns a new tuple without modifying the attributes of the current object at all.
Instead, you need to actually change the object's attributes as per the instructions, and you don't need to return anything (i.e., this is an "in-place" operation).
def add_point(self, other):
    self._x += other._x
    self._y += other._y
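With that change, a quick check lines up with the expected test output (a sketch):
pt1 = Point(1, 2)
pt1.add_point(Point(3, 4))
print(pt1)  # Point(4, 6), via __repr__, as the test expects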

If given a Python class, how can I run it and see what it does?

import math

class Vector:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def add(self, other):
        new_x = self.x + other.x
        new_y = self.y + other.y
        return Vector(new_x, new_y)

    def subtract(self, other):
        new_x = self.x - other.x
        new_y = self.y - other.y
        return Vector(new_x, new_y)

    def scale(self, factor):
        new_x = self.x * factor
        new_y = self.y * factor
        return Vector(new_x, new_y)

    def length(self, other):
        r_squared = self.x ** 2 + self.y ** 2
        return Vector(r_squared)
I've been trying to test this code that I was given. How can I run it with some numbers so that I can learn what each function actually does? I can see what it should do from reading the code, but I want to confirm that what I predict is what actually happens.
Thank you in advance!
Add a checker for your code at the very end of your file:
if __name__=="__main__":
vec1 = Vector(0, 0)
vec2 = Vector(2,2)
vec3 = vec1.add(vec2)
print(vec1, vec2, vec3)
#add other tests
You could override a built-in method in the Vector class to print instances in a human-readable way:
def __repr__(self):
    return 'Vector: ({}, {})'.format(self.x, self.y)
Then you might want to fix the length function: it should return a number, not another Vector, and that number should be the square root of the sum. For example, the vector (3, 4) should have a length of 5, not 25. Also, the length method does not need a vector supplied as a parameter.
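A corrected length along those lines might look like this (a sketch; math.hypot computes the square root of the sum of squares):
def length(self):
    # length depends only on this vector, so no 'other' parameter
    return math.hypot(self.x, self.y)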
Once these are fixed up, you can add this to the bottom of the file and run the script in the terminal like so: python vec.py
if __name__ == '__main__':
    v1 = Vector(0, 0)
    v2 = Vector(3, 4)
    print('v1', v1)
    print('v2', v2)
    print('v1 + v2', v1.add(v2))
    print('v2.length', v2.length())

Calculate a point along a line segment one unit from an end of the segment

G'day! When I know the slope and y-intercept of a line, I need to calculate an x-value that is 1 unit out along the line from a given point.
For example, if pointA = (4,5) and I set a line going from it with 0 slope (and therefore 5 as the y-intercept), then the x-value I want would be 5. If the slope were undefined (vertical), then the x-value would be 4. And so on.
So far, I calculate x as x = m(point[0]+1)-b. This doesn't work so well for vertical lines, however.
This and this are similar, but I can't read C# for the first, and on the second one, I don't need to eliminate any possible points (yet).
This is kind of hitting a nail with a sledgehammer, but if you're going to be running into geometry problems often, I'd either write or find a Point/Vector class like:
import math

class Vector():
    def __init__(self, x=0.0, y=0.0, z=0.0):
        self.x = x
        self.y = y
        self.z = z

    def __add__(self, other):
        # build a new Vector so the operands are left untouched
        return Vector(self.x + other.x, self.y + other.y, self.z + other.z)

    def __sub__(self, other):
        return Vector(self.x - other.x, self.y - other.y, self.z - other.z)

    def dot(self, other):
        return self.x*other.x + self.y*other.y + self.z*other.z

    def cross(self, other):
        tempX = self.y*other.z - self.z*other.y
        tempY = self.z*other.x - self.x*other.z
        tempZ = self.x*other.y - self.y*other.x
        return Vector(tempX, tempY, tempZ)

    def dist(self, other):
        return math.sqrt((self.x-other.x)**2 + (self.y-other.y)**2 + (self.z-other.z)**2)

    def unitVector(self):
        mag = self.dist(Vector())
        if mag != 0.0:
            return Vector(self.x * 1.0/mag, self.y * 1.0/mag, self.z * 1.0/mag)
        else:
            return Vector()

    def __repr__(self):
        return str([self.x, self.y, self.z])
Then you can do all kinds of stuff, like finding the vector between two points by subtracting them:
>>> a = Vector(4,5,0)
>>> b = Vector(5,6,0)
>>> b - a
[1, 1, 0]
Or adding an arbitrary unit vector to a point to find a new point (which is the answer to your original question):
>>> a = Vector(4,5,0)
>>> direction = Vector(10, 1, 0).unitVector()
>>> a + direction
[4.995037190209989, 5.099503719020999, 0.0]
You can add more utilities, like allowing Vector/Scalar operations for scaling, etc.
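For instance, a scalar-multiplication hook would make scaling read naturally (a sketch, not part of the class above):
def __mul__(self, scalar):
    # scale each component by a plain number: Vector(1, 2, 3) * 2 -> [2, 4, 6]
    return Vector(self.x * scalar, self.y * scalar, self.z * scalar)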
