Indentation error with print function - python

Normally I do all my programming in Java, but I have a bit of Python code I'm converting to Java.
I'm not used to the indentation style, which is fine because that's the way things are done in Python. What I need to do, though, is just add a couple of print() calls to the code to make sure I'm getting the correct result.
For example:
def relImgCoords2ImgPlaneCoords(self, pt, imageWidth, imageHeight):
    ratio = imageWidth / float(imageHeight)
    sw = ratio
    sh = 1
    return [sw * (pt[0] - 0.5), sh * (pt[1] - 0.5)]
It doesn't seem to matter how the print call is indented; it shows an indentation error either way. What's the trick?
or
else:
    if scn.optical_center_type == 'camdata':
        #get the principal point location from camera data
        P = [x for x in activeSpace.clip.tracking.camera.principal]
        #print("camera data optical center", P[:])
        P[0] /= imageWidth
        P[1] /= imageHeight
        #print("normlz. optical center", P[:])
        P = self.relImgCoords2ImgPlaneCoords(P, imageWidth, imageHeight)
    elif scn.optical_center_type == 'compute':
        if len(vpLineSets) < 3:
            self.report({'ERROR'}, "A third grease pencil layer is needed to compute the optical center.")
            return {'CANCELLED'}
        #compute the principal point using a vanishing point from a third gp layer.
        #this computation does not rely on the order of the line sets
        vps = [self.computeIntersectionPointForLineSegments(vpLineSets[i]) for i in range(len(vpLineSets))]
        vps = [self.relImgCoords2ImgPlaneCoords(vps[i], imageWidth, imageHeight) for i in range(len(vps))]
        P = self.computeTriangleOrthocenter(vps)
    else:
        #assume optical center in image midpoint
        pass
I want to see what the return values of the vps variables are in the elif part of the code block. Where do I put the print call?

There doesn't seem to be any issue with where you are placing the commented-out print statements.
The most likely cause is that you are mixing tabs with spaces, which produces indentation errors.
Try running the reindent.py script in the Tools/scripts directory of your Python installation.
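If you want to see the mixing problem directly, here is a small sketch (my own illustration, not the original code) showing that Python refuses a block whose lines mix tabs and spaces, no matter how aligned they look on screen:

```python
# Compile a snippet whose first indented line uses a tab and whose
# second uses spaces; Python 3 rejects the inconsistent indentation.
code = "if True:\n\tx = 1\n        print(x)\n"
try:
    compile(code, "<example>", "exec")
    print("compiled fine")
except (TabError, IndentationError) as err:
    print("indentation problem:", err)
```

Once the file uses spaces consistently, a print(vps) placed at the same indentation level as the surrounding statements (e.g. right after the line that assigns vps) compiles fine.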

Related

PySimpleGUI need help planning an intricate measurements calculator

I'm working on a program that should figure out the dimensions of the individual pieces in kitchen cabinet modules. You only set the height, depth, and width of the material (18 mm), then select modules from a list, and after setting the dimensions of each you are presented with a list of pieces and their dimensions.
Since all of this is somewhat standardized, the individual pieces' dimensions are figured out by simple math, but each consists of its own set of operations, which should be run once and display the results in the interface (eventually I'll figure out how to write it to an Excel-compatible format).
As you can see, it can get complex. I can work it out over time, no problem, but right now I'm not sure PySimpleGUI is what I need.
import PySimpleGUI as sg

layout1 = [[sg.Text('Altura', size=(10,1)), sg.Input('', key='Alt')],           # Height
           [sg.Text('Densidad Placa', size=(10,1)), sg.Input('', key='Placa')], # Material's density
           [sg.Text('Profundidad', size=(10,1)), sg.Input('', key='Prof')]]     # Depth

layout2 = [[sg.Text('Ancho Modulo', size=(10,1)), sg.Input('', key='WM')],      # Module's width
           [sg.Text('lateral', size=(10,1)), sg.Text('', key='Lat'), sg.Text('x'), sg.Text('', key='Prof2')], # side pieces
           [sg.Text('Piso', size=(10,1)), sg.Text('', key='WM2'), sg.Text('x'), sg.Text('', key='Prof2')],    # bottom piece
           [sg.Button('Go')]]

#Define Layout with Tabs
tabgrp = [[sg.TabGroup([[sg.Tab('1', layout1),
                         sg.Tab('2', layout2)]])]]
window = sg.Window("Tabs", tabgrp)

#Read values entered by user
while True:
    event, values = window.read()
    if event in (sg.WINDOW_CLOSED, 'Close'):
        break
    elif event == 'Go':
        anc = values['WM']
        altura = values['Alt']
        placa = values['Placa']
        prof = values['Prof']
        try:
            v = int(anc)     # width
            w = int(prof)    # depth
            x = int(altura)  # height
            y = int(placa)   # Material's density
            altlat = str(x - y)  # height of side pieces
            prof2 = int(w - y)   # depth of pieces (total depth including the door)
            ancm = int(v)        # width
        except ValueError:
            altlat = "error"
            prof2 = "error"
            ancm = "error"
        window['Lat'].update(value=altlat)
        window['Prof2'].update(value=prof2)
        window['WM2'].update(value=ancm)
        window.refresh()
        #access all the values and if selected add them to a string
window.close()
I figured I'd use functions for every set of operations and call them as I need them, but keys can't be reused, and every tutorial I've seen points towards them, and other implementations I tried failed. I've been using Python since last night, so I'm not sure how many options I have, nor how limited my options will be with PySimpleGUI's toolset.
I think what you are asking is: how can I make a function that takes the values and runs an operation on them? This seems to be more of a general Python question than one about PySimpleGUI, but here is a quick answer.
def calc_side_panel_height(altura, placa):
    x = int(altura)  # height
    y = int(placa)   # Material's density
    return x - y     # height of side pieces

try:
    height_of_side = calc_side_panel_height(altura, placa)
    # use height here
    altlat = str(height_of_side)
except ValueError:
    altlat = "error"
Does that start to make sense? You would call the functions, and inside them do the calculations, so you don't have to rewrite the code.
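For instance (hypothetical helper names, not PySimpleGUI API), you could generate per-module keys so the same layout-building code can be reused without duplicate-key errors, and keep the piece math in one reusable function:

```python
# Build a unique element key per module, so two modules can share the
# same layout-building code without clashing keys.
def make_key(mod_id, name):
    return f"{mod_id}-{name}"

# One reusable calculation for a standard module; all sizes in mm.
def piece_dimensions(width, height, depth, thickness):
    return {
        'lateral': (height - thickness, depth - thickness),  # side piece
        'piso': (width, depth - thickness),                  # bottom piece
    }

print(make_key('mod1', 'WM'))                          # → mod1-WM
print(piece_dimensions(600, 720, 560, 18)['lateral'])  # → (702, 542)
```

You would then look elements up with the same helper, e.g. `window[make_key('mod1', 'Lat')].update(...)`.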
more info: https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Building_blocks/Functions
Let me know if I'm confused!

openCV triangulatePoints acting strange in python

I am trying to calculate 3D points using OpenCV to begin a multiple-view reconstruction. I perform the standard sequence of finding matching points using SIFT, then getting the fundamental and essential matrices with a known camera calibration matrix. After recovering the pose of the second camera relative to the first, I go on to try and triangulate the points. All other parts of the code work well and as expected, but this one part gets glitchy. I am using OpenCV 4.3.0. Sometimes triangulatePoints just crashes the IDE (Spyder), sometimes it gives me the points, and sometimes it gives me a bunch of points at [1,1,1]. The IDE also crashes if the number of points is over 200. The more points, the glitchier it seems to get.
This is getting frustrating, any help would be appreciated.
Here is a snippet of the code.
F, mask = cv.findFundamentalMat(pts1, pts2, cv.FM_LMEDS)

# We select only inlier points
pts1 = pts1[mask.ravel() == 1]
pts2 = pts2[mask.ravel() == 1]
print(len(pts1), len(pts2))
print(cv.__version__)

E, mask_2 = cv.findEssentialMat(pts1, pts2, focal=f_x, pp=(O_x, O_y), method=cv.FM_LMEDS, prob=0.999, threshold=3.0)
print("Essential Matrix")
print(E)
print(" ")

points, R_1, t_1, mask_2 = cv.recoverPose(E, pts1, pts2, focal=f_x, pp=(O_x, O_y), mask=mask_2)
print("Rotation Matrix")
print(R_1)
print(" ")

R_M = R.from_matrix(R_1)
R_1_E = R_M.as_euler('zyx', degrees=True)
print("angles (z,y,x) or (alpha, beta, gamma); Z is dir of principal ray, Y is vert and X is horiz")
print(R_1_E)
print("Translation")
print(t_1)

K = np.array([[f_x, 0,   O_x],
              [0,   f_x, O_y],
              [0,   0,   1]])
Pr_1 = np.array([[1,0,0,0], [0,1,0,0], [0,0,1,0]])
Pr_2 = np.hstack((np.dot(K, R_1), np.dot(K, t_1)))
#Pr_2 = np.hstack((R_1, t_1))

pts1_t = pts1[:200].T
pts2_t = pts2[:200].T
#print(pts1_t)
points4D = cv.triangulatePoints(Pr_1, Pr_2, pts1_t, pts2_t)
#print(points4D.T[:3].T)
coordinate_eucl = cv.convertPointsFromHomogeneous(points4D.T)
coordinate_eucl = coordinate_eucl.reshape(-1, 3)
px, py, pz = coordinate_eucl.T
coordP = []
for i in range(len(px)):
    coordP.append([px[i], py[i], pz[i]])
print(coordP[:20])
The triangulatePoints function in OpenCV's Python bindings does not work very well.
I used the functions created by Eliasvan, here:
https://github.com/Eliasvan/Multiple-Quadrotor-SLAM/blob/master/Work/python_libs/triangulation.py
I used the linear_LS_triangulation function; it worked very well for me, and fast.
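For reference, here is my own minimal sketch of linear least-squares (DLT-style) triangulation, not the linked library's exact code: each image point contributes two rows of a homogeneous system A·X = 0, solved per point via SVD.

```python
import numpy as np

def triangulate_linear(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: 2D image points."""
    # Two rows per view: x * (P row 3) - (P row 1), y * (P row 3) - (P row 2)
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # null-space vector = homogeneous 3D point
    return X[:3] / X[3]   # back to Euclidean coordinates

# Toy check: identity camera at the origin, second camera shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate_linear(P1, P2, (0.0, 0.0), (-0.2, 0.0)))  # ≈ [0, 0, 5]
```

Note that both projection matrices must be in the same convention; in the question's code, Pr_1 omits K while Pr_2 includes it, which is worth double-checking.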

Point projection using cross-ratios goes completely wrong after certain threshold

For a computer vision project I'm trying to determine the projective transformation occurring in a football image. I detect the vanishing points, get two point matches, and calculate the projection from model field points to image points based on cross ratios. This works really well for almost all points, but for points which lie behind the camera the projection goes completely wrong. Do you know why, and how I can fix this?
It's based on the article Fast 2D model-to-image registration using vanishing points for sports video analysis, and I use the projection function given on page 3. I tried calculating the result using different methods too (namely based on intersections), but the result is the same:
There should be a bottom field line, but it is projected way out to the right.
I also tried using decimal to see if it was a negative overflow error, but that wouldn't have made much sense to me, since the same result showed up in Wolfram Alpha when testing.
def Projection(vanpointH, vanpointV, pointmatch2, pointmatch1):
    """
    :param vanpointH:
    :param vanpointV:
    :param pointmatch1:
    :param pointmatch2:
    :returns: function that takes a single model point as input
    """
    X1 = pointmatch1[1]
    point1field = pointmatch1[0]
    X2 = pointmatch2[1]
    point2field = pointmatch2[0]

    point1VP = linecalc.calcLineEquation([[point1field[0], point1field[1], vanpointH[0], vanpointH[1], 1]])
    point1VP2 = linecalc.calcLineEquation([[point1field[0], point1field[1], vanpointV[0], vanpointV[1], 1]])
    point2VP = linecalc.calcLineEquation([[point2field[0], point2field[1], vanpointV[0], vanpointV[1], 1]])
    point2VP2 = linecalc.calcLineEquation([[point2field[0], point2field[1], vanpointH[0], vanpointH[1], 1]])

    inters = linecalc.calcIntersections([point1VP, point2VP])[0]
    inters2 = linecalc.calcIntersections([point1VP2, point2VP2])[0]

    def lambdaFcnX(X, inters):
        # This fcn provides the solution of where the point to be projected is, according to the matching,
        # on the line connecting point1 and vanpointH. Based only on that the cross ratio is the same as in the model field
        return (((X[0] - X1[0]) * (inters[1] - point1field[1])) / ((X2[0] - X1[0]) * (inters[1] - vanpointH[1])))

    def lambdaFcnX2(X, inters):
        # This fcn provides the solution of where the point to be projected is, according to the matching,
        # on the line connecting point2 and vanpointH. Based only on that the cross ratio is the same as in the model field
        return (((X[0] - X1[0]) * (point2field[1] - inters[1])) / ((X2[0] - X1[0]) * (point2field[1] - vanpointH[1])))

    def lambdaFcnY(X, v1, v2):
        # return (((X[1] - X1[1]) * (np.subtract(v2, v1))) / ((X2[1] - X1[1]) * (np.subtract(v2, vanpointV))))
        return (((X[1] - X1[1]) * (v2[0] - v1[0])) / ((X2[1] - X1[1]) * (v2[0] - vanpointV[0])))

    def projection(Point):
        lambdaPointx = lambdaFcnX(Point, inters)
        lambdaPointx2 = lambdaFcnX2(Point, inters2)
        v1 = (np.multiply(-(lambdaPointx / (1 - lambdaPointx)), vanpointH)
              + np.multiply((1 / (1 - lambdaPointx)), point1field))
        v2 = (np.multiply(-(lambdaPointx2 / (1 - lambdaPointx2)), vanpointH)
              + np.multiply((1 / (1 - lambdaPointx2)), inters2))
        lambdaPointy = lambdaFcnY(Point, v1, v2)
        point = np.multiply(-(lambdaPointy / (1 - lambdaPointy)), vanpointV) + np.multiply((1 / (1 - lambdaPointy)), v1)
        return point

    return projection


match1 = ((650, 390, 1), (2478, 615, 1))
match2 = ((740, 795, 1), (2114, 1284, 1))
vanpoint1 = [-2.07526585e+03, -5.07454315e+02, 1.00000000e+00]
vanpoint2 = [ 5.53599881e+03, -2.08240612e+02, 1.00000000e+00]
model = Projection(vanpoint2, vanpoint1, match2, match1)
model((110, 1597))
Suppose the vanishing points are
vanpoint1 = [-2.07526585e+03, -5.07454315e+02, 1.00000000e+00]
vanpoint2 = [ 5.53599881e+03, -2.08240612e+02, 1.00000000e+00]
and two matches are:
match1 = ((650,390,1),(2478,615,1))
match2 = ((740,795,1),(2114,1284,1))
These work for almost all points, as seen in the picture. The left bottom point, however, is completely off and gets image coordinates [ 4.36108177e+04, -1.13418258e+04]. This happens going down from (312,1597); for (312,1597) the result is [-2.34989787e+08, 6.87155603e+07], which is where it's supposed to be.
Why does it shift all the way out to around 43,000? It would make sense perhaps if I calculated the camera matrix and the point was behind the camera. But since what I do is actually similar to a homography estimation (a 2D mapping), I cannot make geometric sense of this. However, my knowledge of this is definitely limited.
Edit: does this perhaps have to do with the topology of the projective plane and the fact that it's non-orientable (it wraps around)? My knowledge of topology is not what it should be...
Okay, figured it out. This might not make much sense to others, but it does for me (and in case anyone ever has the same problem...).
Geometrically, I realized the following when using an equivalent approach, where v1 and v2 are calculated from the different vanishing points and I project based on the intersection of the lines connecting points with the vanishing points. At some point these lines become parallel, and after that the intersection actually lies completely on the other side. And that makes sense; it just took me a while to realize it does.
In the code above, the last cross ratio, called lambdaPointy, goes to 1 and then past it. The same thing happens there, but it was easiest to visualize using the intersections.
I also now know how to solve it; this is just in case anyone else tries such code.
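A tiny sketch of the failure mode (my own illustration): the blend weight -λ/(1-λ) used in the code diverges and flips sign as the cross ratio λ crosses 1, which is exactly the jump to "the other side" of the projective plane.

```python
def blend_weight(lam):
    # Coefficient applied to the vanishing point in the projection code.
    return -lam / (1.0 - lam)

# The weight blows up near lam = 1 and changes sign past it.
for lam in (0.9, 0.99, 1.01, 1.1):
    print(lam, blend_weight(lam))
```

So any model point whose cross ratio lands past 1 gets mapped to the opposite side, however valid the algebra is.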

Window-Leveling in python

I would like to change the window level of my DICOM images from a lung window to a chest window. I know the values needed for the window leveling, but how do I implement it in Python? A detailed description of this process would also be highly appreciated.
I have already implemented this in Python. Take a look at the function GetImage in the dicomparser module of the dicompyler-core library.
Essentially it follows what kritzel_sw suggests.
The following open-source code implements a bone window:
import numpy

def get_pixels_hu(slices):
    image = numpy.stack([s.pixel_array for s in slices])
    image = image.astype(numpy.int16)
    image[image == -2000] = 0
    for slice_number in range(len(slices)):
        intercept = slices[slice_number].RescaleIntercept
        slope = slices[slice_number].RescaleSlope
        if slope != 1:
            image[slice_number] = slope * image[slice_number].astype(numpy.float64)
            image[slice_number] = image[slice_number].astype(numpy.int16)
        image[slice_number] += numpy.int16(intercept)
    return numpy.array(image, dtype=numpy.int16)
I then added the following line
image[slice_number] = image[slice_number] * 3.5 + mean2 * 0.1
after
image[slice_number] += numpy.int16(intercept)
to change the bone window to a brain-tissue window.
The point is the choice of the parameters 3.5 and 0.1. I just experimented and found these two values suitable for a brain-tissue window. Maybe you can adjust them for a chest window.
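For a direct window/level implementation, here is a generic sketch (my own, not from the answers above; the center/width values are typical textbook settings): clip the HU values to [center − width/2, center + width/2] and rescale to 0-255 for display.

```python
import numpy as np

def apply_window(hu, center, width):
    """Map HU values through a display window (level=center, width)."""
    lo, hi = center - width / 2.0, center + width / 2.0
    out = np.clip(hu, lo, hi)
    return ((out - lo) / (hi - lo) * 255.0).astype(np.uint8)

hu = np.array([-1000, 50, 400])   # air, soft tissue, bone-ish
print(apply_window(hu, 50, 350))  # e.g. a mediastinal/chest-type window
```

Swapping in the lung or chest center/width values you already have gives the re-leveled image directly, without hand-tuned multipliers.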

App Engine: Calculating the dimensions of thumbnails to be generated by serving thumbnails from the blobstore

I'm currently using the blobstore to generate thumbnails for images; however, I'd like to store the dimensions of the thumbnail in the img tag, as it is good practice, helps speed up rendering, and makes a partially loaded page look a bit nicer.
How would I calculate the dimensions of a thumbnail generated by the blobstore, knowing only the dimensions of the original image?
My previous attempts haven't been very accurate, most of the time being off by a pixel or two (probably due to rounding).
I understand that fetching the thumbnail and then using the Images API to check the dimensions would work, but I think that is inefficient.
Here is the code I use to calculate it at the moment; however, it is occasionally off by one pixel, causing the browser to slightly stretch the image, producing resize artefacts as well as being less performant.
from __future__ import division

def resized_size(original_width, original_height, width, height):
    original_ratio = float(original_width) / float(original_height)
    resize_ratio = float(width) / float(height)
    if original_ratio >= resize_ratio:
        return int(width), int(round(float(width) / float(original_ratio)))
    else:
        return int(round(float(original_ratio) * float(height))), int(height)
Accuracy is very important!
I see the problem. The reason is that C's rint is being used to calculate the dimensions. Python does not have an equivalent rint implementation, as it was taken out by van Rossum in 1.6:
http://markmail.org/message/4di24iqm7zhc4rwc
Your only recourse right now is to implement your own rint in Python.
rint by default does a "round half to even", whereas Python's round does something else.
Here is a simple implementation (no edge case handling for +inf -inf, etc.)
import math
def rint(x):
x_int = int(x)
x_rem = x - x_int # this is problematic
if (x_int % 2) == 1:
return round(x)
else:
if x_rem <= 0.5:
return math.floor(x)
else:
return math.ceil(x)
The above code is how it should be implemented in theory. The problem lies with x_rem: x - x_int should give the fractional component, but instead you can get the fraction plus a small delta. So you can attempt to add thresholding if you want:
import math

def rint(x):
    x_int = int(x)
    x_rem = x - x_int
    if (x_int % 2) == 1:
        return round(x)
    else:
        if x_rem - 0.5 < 0.001:
            return math.floor(x)
        else:
            return math.ceil(x)
Here I hard-coded a 0.001 threshold. Thresholding itself is problematic. I guess you really need to play around with the rint implementation, fit it to your application, and see what works best. Good luck!
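Worth noting: in Python 3 (this answer predates it), the built-in round already rounds halves to even, matching C's rint behavior for these cases:

```python
# Python 3's round uses banker's rounding (round half to even),
# so halfway values go to the nearest even integer.
print(round(0.5), round(1.5), round(2.5))  # → 0 2 2
```

So on Python 3 the hand-rolled rint above is largely unnecessary, though the floating-point delta problem the answer describes still applies to values that are not exactly representable.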
Assuming you have an original image of 1600x1200 and you want to get a size 800 thumbnail,
the expected result is an 800x600 image. To calculate this, you can do the following:
// In Java
double aspectRatio = (double) originalWidth / originalHeight; // cast to avoid integer division
int output_width = 800;
int output_height = (int) Math.round(800.0 / aspectRatio);
Let's see if this works with the degenerate cases. Assume that you have an original
of 145x111 and you want to get a size 32 thumbnail.
aspectRatio == 1.3063063063063063
output_width == 32
output_height == Math.round(32.0 / 1.3063063063063063) == 24
You can do something similar for portrait oriented images.
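The same calculation sketched in Python (the helper name is mine), relying on Python 3's round-half-even behavior:

```python
def thumb_height(orig_w, orig_h, target_w):
    """Height of a thumbnail scaled to target_w, preserving aspect ratio."""
    return int(round(target_w * orig_h / orig_w))

print(thumb_height(1600, 1200, 800))  # → 600
print(thumb_height(145, 111, 32))     # → 24  (32 * 111 / 145 ≈ 24.496)
```

For portrait images, swap the roles of width and height in the same formula.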
