Creating polygons/polylines from origin points using Python in ArcGIS?

I am still very new to Python. I am heading a project to map the building footprints within our county on the tax map.
I have found a previous question that may be very helpful for this project: https://gis.stackexchange.com/questions/6724/creating-line-of-varying-distance-from-origin-point-using-python-in-arcgis-deskt
Our CAMA system generates views/tables with the needed information. Below is an example row:
PARID    LLINE   VECT                X_COORD   Y_COORD
1016649  0       R59D26L39U9L20U17   482547    1710874
VECT column converted to angle,distance pairs: 180,59,270,26,0,39,90,9,0,20,90,17
I have found some Python examples to convert the VECT column, which holds direction-and-distance calls, into angles and distances separated by commas.
My question: is there a way to implement a loop in the script below so that it uses a table rather than static, user-entered numbers? This would be very valuable to the county, as we have several thousand polygons to construct.
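For context, here is a minimal sketch of one way the VECT calls could be parsed into those angle,distance pairs. The letter-to-angle mapping (R=180, D=270, L=0, U=90) is only inferred from the single sample row above, so treat it as an assumption:
import re

def vect_to_pairs(vect):
    # Mapping assumed from the sample row; verify against the CAMA documentation.
    angles = {"R": 180, "D": 270, "L": 0, "U": 90}
    parts = []
    for letter, dist in re.findall(r"([RDLU])(\d+)", vect):
        parts.extend([str(angles[letter]), dist])
    return ",".join(parts)

print(vect_to_pairs("R59D26L39U9L20U17"))  # -> 180,59,270,26,0,39,90,9,0,20,90,17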
Below is the snippet that changes the distances and angles into x, y points to be generated in ArcMap 10.2.
#Using trig to deflect from a starting point
import arcpy
from math import radians, sin, cos
origin_x, origin_y = (400460.99, 135836.7)
distance = 800
angle = 15 # in degrees
# calculate offsets with light trig
(disp_x, disp_y) = (distance * sin(radians(angle)),\
distance * cos(radians(angle)))
(end_x, end_y) = (origin_x + disp_x, origin_y + disp_y)
output = "offset-line.shp"
arcpy.CreateFeatureclass_management(r"c:\workspace", output, "Polyline")
cur = arcpy.InsertCursor(output)
lineArray = arcpy.Array()
# start point
start = arcpy.Point()
(start.ID, start.X, start.Y) = (1, origin_x, origin_y)
lineArray.add(start)
# end point
end = arcpy.Point()
(end.ID, end.X, end.Y) = (2, end_x, end_y)
lineArray.add(end)
# write our fancy feature to the shapefile
feat = cur.newRow()
feat.shape = lineArray
cur.insertRow(feat)
# yes, this shouldn't really be necessary...
lineArray.removeAll()
del cur
Any suggestions would be greatly appreciated.
Thank you for your valuable time and knowledge.

You can create a dictionary of dictionaries from the given table to hold all the different values, such as:
d = {1: {"x": 400460.99, "y": 135836.7, "distance": 800, "angle": 15},
     2: {"x": "etc", "y": "etc", "distance": "etc", "angle": "etc"}}

for k in d.keys():
    origin_x = d[k]["x"]
    origin_y = d[k]["y"]
    distance = d[k]["distance"]
    angle = d[k]["angle"]
    # rest of the code
    # .....
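To drive this from the CAMA table itself rather than a hand-built dictionary, a cursor loop is one option. The sketch below is illustrative only: it assumes a view or table named "parcels" with the PARID, VECT, X_COORD and Y_COORD fields shown in the question, reuses the letter-to-angle mapping inferred from the sample row, and writes one polygon per record; adjust names, workspace and geometry type to suit.
import re
import arcpy
from math import radians, sin, cos

angles = {"R": 180, "D": 270, "L": 0, "U": 90}  # assumed mapping, taken from the sample row

out_fc = r"c:\workspace\footprints.shp"  # placeholder output path
arcpy.CreateFeatureclass_management(r"c:\workspace", "footprints.shp", "Polygon")
cur = arcpy.InsertCursor(out_fc)

rows = arcpy.SearchCursor("parcels")  # "parcels" is a placeholder for the CAMA view/table
for row in rows:
    x = row.getValue("X_COORD")
    y = row.getValue("Y_COORD")
    ring = arcpy.Array()
    ring.add(arcpy.Point(x, y))
    # walk the VECT calls, adding a vertex after each deflection
    for letter, dist in re.findall(r"([RDLU])(\d+)", row.getValue("VECT")):
        x += float(dist) * sin(radians(angles[letter]))
        y += float(dist) * cos(radians(angles[letter]))
        ring.add(arcpy.Point(x, y))
    feat = cur.newRow()
    feat.shape = ring
    cur.insertRow(feat)
del cur, rows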

Related

How to depict small charges on a spherical object using vpython library?

I am working on a project related to charge distribution on a sphere, and I decided to simulate the problem using vpython and Coulomb's law. I ran into an issue when I created the sphere: I am trying to place roughly 1000 points (charges) evenly on it, and despite trying several approaches I can't seem to make the points lie on the sphere.
I defined an arbitrary value SOYDNR as a number to divide the diameter of the sphere into smaller segments. This lets me create smaller rings of charges and fill out the surface of the sphere with charges. I then make a list of 4 values that represent different parts of the radius at which to create the charge rings on the surface. That is where I run into a problem and I am not sure how to deal with it: I have tried calculating the ring radius at those specific heights, but I couldn't implement it. This is how it looks visually: ![Sphere with charges on the surface](https://i.stack.imgur.com/3N4x6.png) If anyone has any suggestions, I would be grateful, thanks!
SOYDNR = 10  # can be changed
SOYD = 2*radi/SOYDNR  # strips in the Y direction; initial height + SOYD until = 2*radi
theta = 0
dtheta1 = 2*pi/NCOS
y_list = np.arange(height - radi + SOYD, height, SOYD).tolist()
print(y_list)
for i in y_list:
    while Nr < NCOS and theta < 2*pi:
        position = radi*vector(cos(theta), i*height/radi, sin(theta))
        points_on_sphere = points_on_sphere + [sphere(pos=position, radius=radi/50, color=vector(1, 0, 0))]
        Nr = Nr + 1
        theta = theta + dtheta1
    Nr = 0
    theta = 0
I found a way to do it: it creates a bunch of small spheres in the region described by an if statement. This is the code I am now using in my simulation to create the sphere with points on it.
def SOSE(radi, number_of_charges, height):
    Charged_Sphere = sphere(pos=vector(0, height, 0), radius=radi, color=vector(3.5, 3.5, 3.5), opacity=0.2)
    points_on_sphere = []
    NCOS = number_of_charges
    theta = 0
    dtheta = 2*pi/NCOS
    dr = radi/60
    direcVector = vector(0, height, 0)
    while theta < 2*pi:
        posvec1 = radi*vector(1-radi*random(), 1-radi*random()/radi, 1-radi*random())
        posvec2 = radi*vector(1-radi*random(), -1+radi*random()/radi, 1-radi*random())
        if mag(posvec1) < radi and mag(posvec1) > (radi-dr):
            posvec1 = posvec1 + direcVector
            points_on_sphere = points_on_sphere + [sphere(pos=posvec1, radius=radi/60, color=vector(1, 0, 0))]
            theta = theta + dtheta
        if mag(posvec2) < radi and mag(posvec2) > (radi-dr):
            posvec2 = posvec2 + direcVector
            points_on_sphere = points_on_sphere + [sphere(pos=posvec2, radius=radi/60, color=vector(1, 0, 0))]
            theta = theta + dtheta
This code can be edited to add more points. I have two if statements because I want to change the height at which the sphere sits, and with just one statement I only see half of the sphere. :)
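For reference, a common alternative for spreading points nearly evenly over a sphere is the Fibonacci (golden-spiral) lattice. This is a minimal sketch, not the code above; it only assumes the vpython sphere and vector names for the optional drawing step, and radi/height are placeholders from the snippet above:
from math import pi, sqrt, sin, cos

def fibonacci_points(n, radius=1.0):
    # n points spread nearly evenly on a sphere of the given radius, centred on the origin
    points = []
    golden_angle = pi * (3 - sqrt(5))
    for k in range(n):
        y = 1 - 2 * (k + 0.5) / n        # height runs from ~1 down to ~-1
        r = sqrt(1 - y * y)              # ring radius at that height
        phi = k * golden_angle
        points.append((radius * r * cos(phi), radius * y, radius * r * sin(phi)))
    return points

# drawing them with vpython would then look roughly like:
# for x, y, z in fibonacci_points(1000, radi):
#     sphere(pos=vector(x, y + height, z), radius=radi/60, color=vector(1, 0, 0))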

The area & center of gravity of a polygon having non-uniform density of vertices? (in Python)

I would like to calculate the COG of a polygon shaped exactly like the contour map of my town. However, using the available database of borderpoints would produce a skewed result, since some places have a much higher density of borderpoints than others, so the centre of gravity would be pulled towards those regions. I tried to equalise the density of vertices by producing this Python code:
import numpy as np

punkty = open("borderpoints.txt", "r", encoding="utf8")
tempp = []
a = []
for line in punkty:
    for c in line:
        if c != " ":
            tempp.append(c)
        else:
            p = "".join(tempp)
            a.append(p)
            tempp = []
i = 0
x = []
y = []
fx = open("outx1.txt", "w")
fy = open("outy1.txt", "w")
while i < len(a)-1:
    x.append(a[i])
    fx.write(a[i])
    fx.write("\n")
    y.append(a[i+1])
    fy.write(a[i+1])
    fy.write("\n")
    i = i+2
j = 0
jump = 20
newxs = []
newys = []
fnx = open("newxs.txt", "w")
fny = open("newys.txt", "w")
while j < len(x):
    L = np.sqrt(pow((float(y[j+1])-float(y[j])), 2) + pow((float(x[j+1])-float(x[j])), 2))
    n = jump*L
    interval = (float(y[j+1])-float(y[j]))/n
    k = 1
    slope = (float(x[j+1])-float(x[j]))/(float(y[j+1])-float(y[j]))
    inters = float(x[j+1])-slope*float(y[j+1])
    while k < n+1:
        g = float(y[j])+k*interval
        newxs.append(g)
        fnx.write(str(g))
        fnx.write("\n")
        g = (slope*(float(y[j])+k*interval)+inters)
        newys.append(g)
        fny.write(str(g))
        fny.write("\n")
        k = k+1
    j = j+2
    k = 1
newxs.append(x)
newys.append(y)
but in the result, the points were denser everywhere except the places that were previously empty and were supposed to get populated by the algorithm.
The graphs of the map before and after applying the algorithm show this (some proportions may vary, but the main problem is the empty spot).
What approach could I use to solve this problem? How can I make the points equally distributed, or is it perhaps possible to calculate the COG with some other method?
My aim is that the number of points should not determine the COG; rather, the positions of the polygon sides should. These are what matter here, but obviously there is no database for them, and it is harder to calculate the COG from a collection of linear functions and their ranges.
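One standard way to make the result independent of vertex density is to treat the border as a closed polygon and use the area-weighted (shoelace) centroid, which depends only on the polygon sides, not on how many points happen to lie along them. A minimal sketch, assuming x and y hold the ordered border coordinates as read above:
import numpy as np

def polygon_area_and_centroid(xs, ys):
    # Shoelace formula: signed area and centroid of a polygon given its vertices in order.
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    x_next = np.roll(xs, -1)
    y_next = np.roll(ys, -1)
    cross = xs * y_next - x_next * ys
    area = cross.sum() / 2.0
    cx = ((xs + x_next) * cross).sum() / (6.0 * area)
    cy = ((ys + y_next) * cross).sum() / (6.0 * area)
    return area, (cx, cy)

# usage with the border coordinates read above (assuming they are stored in order):
# area, cog = polygon_area_and_centroid([float(v) for v in x], [float(v) for v in y])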

NASA spice tipbod spiceypy

Attempting to get the body-fixed orientations of planets (RA, DEC, PM) using the NASA example at:
ftp://naif.jpl.nasa.gov/pub/naif/toolkit_docs/FORTRAN/spicelib/tipbod.html
TIPBOD is used to transform a position in J2000 inertial coordinates to a state in bodyfixed coordinates.
TIPM = TIPBOD ('J2000', BODY, ET)
Then convert the position (the first three elements of STATE) to body-fixed coordinates. What is STATE?
BDPOS = MXVG( TIPM, POSTN)
My code:
Targ = 399  # Earth
et = spice.str2et(indate)
TIPM = spice.tipbod( "J2000", Targ, et )
BDPOS = spice.mxvg(TIPM, POSTN, BDPOS )
but what is POSTN and what is BDPOS?
You can get a bit more detail about the inputs to the spiceypy functions by searching for the relevant function here.
In your particular case TIPM will be a 3x3 matrix that provides the transformation between an object in the inertial frame and the body-fixed frame. The required inputs to the mxvg function are given here. In your case POSTN should be a list (or numpy array) of 3 values giving the x, y, and z positions of the body you're interested in. BDPOS will be the output of mxvg: the matrix TIPM multiplied by the vector POSTN, i.e. a vector of three values containing the transformed x, y, and z positions of the body.
I'm not entirely sure what you require, but an example might be:
from astropy.time import Time
from spiceypy import spiceypy as spice
# create a time
t = Time('2010-03-19 11:09:00', format='iso')
# put in spice format - this may require a leap seconds kernel to be
# downloaded, e.g. download https://naif.jpl.nasa.gov/pub/naif/generic_kernels/lsk/naif0012.tls
# and then load it with spice.furnsh('naif0012.tls')
et = spice.str2et(t.iso)
# get the transformation matrix - this may require a kernel to be
# downloaded, e.g. download https://naif.jpl.nasa.gov/pub/naif/generic_kernels/pck/pck00010.tpc
# and then load it with spice.furnsh('pck00010.tpc')
target = 399 # Earth
TIPM = spice.tipbod( "J2000", target, et )
# get the position that you want to convert
from astropy.coordinates import Angle, ICRS, SkyCoord
ra = Angle('12:32:12.23', unit='hourangle')
dec= Angle('-01:23:52.21', unit='deg')
# make an ICRS object (you can also input a proper motion as a radial velocity or using 'pm_dec' and 'pm_ra_cosdec' keyword arguments)
sc = ICRS(ra=ra, dec=dec)
# get position in xyz
xyz = sc.cartesian.xyz.value
# perform conversion to body centred frame
newpos = spice.mxvg(TIPM, xyz, 3, 3)
# convert to latitude and longitude
scnew = SkyCoord(x=newpos[0], y=newpos[1], z=newpos[2], representation_type='cartesian')
# print out new RA and dec
print(scnew.spherical.lon, scnew.spherical.lat)
There are probably ways of doing this entirely within astropy, either with a predefined frame or by defining your own, and using the transform_to() method of the ICRS object. For example, you could convert from ICRS to GCRS.
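As a rough illustration of that last point (a sketch only, reusing the sc and t objects defined above; it is not a drop-in replacement for the SPICE body-fixed transformation):
from astropy.coordinates import GCRS

# transform the ICRS coordinate to the geocentric GCRS frame at time t
sc_gcrs = sc.transform_to(GCRS(obstime=t))
print(sc_gcrs.ra, sc_gcrs.dec)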
Thanks Matt, it looks like tipbod and reclat work. Tell me if I'm wrong, but the numbers look OK.
#Saturn tilt, negative rotation
Targ = 699
TIPM = spice.tipbod( "J2000", Targ, et )
# r2d is the degrees-per-radian factor (e.g. r2d = spice.dpr()), assumed to be defined elsewhere
#Rotation
Radius, Long, lat = spice.reclat(TIPM[0])
fy = r2d * Long
if fy < 0.0:
    fy = fy + 360.0  # if degrees/radians are negative, add 360
#print('X Longitude = ' + str(fy))
#Tilt
Radius, Long, lat = spice.reclat(TIPM[1])
fy = r2d * Long
if fy < 0.0:
    fy = fy + 360.0
#print('Y Longitude = ' + str(fy))

How to solve...ValueError: cannot convert float NaN to integer

I'm running quite a complex piece of code, so I won't bother with all the details; I've had it working before, but now I'm getting this error.
Particle is a 3D array filled with 0 or 255, and I am using the SciPy centre-of-mass function and then trying to round the result to the closest integers (as I'm indexing arrays). The error is raised on the last line... can anyone explain why this might be?
The 2nd line fills Particle,
and the 3rd line removes any surrounding particles with a different label (this is inside a for loop over all labels).
Particle = []
Particle = big_labelled_stack[x_start+20:x_stop+20, y_start+20:y_stop+20, z_start+20:z_stop+20]
Particle = np.where(Particle == i, 255, 0)
CoM = scipy.ndimage.measurements.center_of_mass(Particle)
CoM = [int(round(x)) for x in CoM]
Thanks in advance. If you need more code just ask, but I don't think it will help you and it's very messy.
################## MORE CODE
border = 30
[labelled_stack, no_of_label] = label(labelled, structure_array, output_type)
# RE-LABEL particles now that the number of seeds has been reduced! LAST LABELLING
# Increase size of stack by adding borders set to 0, so that we can cut particles out into a cube shape even if they would otherwise lie outside the border
h, w, l = labelled.shape
big_labelled_stack = np.zeros(shape=(h+60, w+60, l+60), dtype=np.uint32)
# Creates an empty border around labelled_stack full of zeros of size border
if (no_of_label > 0):  # Small sample may return no particles, so this stage is not necessary
    info = np.zeros(shape=(no_of_label, 19))  # Creates array to store coordinates of particles
    for i in np.arange(1, no_of_label, 1):
        coordinates = find_objects(labelled_stack == i)[0]  # Find coordinates of label i.
        x_start = int(coordinates[0].start)
        x_stop = int(coordinates[0].stop)
        y_start = int(coordinates[1].start)
        y_stop = int(coordinates[1].stop)
        z_start = int(coordinates[2].start)
        z_stop = int(coordinates[2].stop)
        dx = (x_stop - x_start)
        dy = (y_stop - y_start)
        dz = (z_stop - z_start)
        Particle = np.zeros(shape=(dy, dx, dz), dtype=np.uint16)
        Particle = big_labelled_stack[x_start+30:x_start+dx+30, y_start+30:y_start+dy+30, z_start+30:z_start+dz+30]
        Particle = np.where(Particle == i, 255, 0)
        big_labelled_stack[border:h+border, border:w+border, border:l+border] = labelled_stack
        big_labelled_stack = np.where(big_labelled_stack == i, 255, 0)
        CoM_big_stack = scipy.ndimage.measurements.center_of_mass(big_labelled_stack)
        C = np.asarray(CoM_big_stack) - border
        if dx > dy:
            b = dx
        else:  # Finds the largest of delta x,y,z and saves it as b, so that we create 'Cubic_Particle' of size 2b x 2b x 2b (cubic box)
            b = dy
        if dz > b:
            b = dz
        CoM = scipy.ndimage.measurements.center_of_mass(Particle)
        CoM = [int(round(x)) for x in CoM]
        Cubic_Particle = np.zeros(shape=(2*b, 2*b, 2*b))
        Cubic_Particle[(b-CoM[0]):(b+dx-CoM[0]), (b-CoM[1]):(b+dy-CoM[1]), (b-CoM[2]):(b+dz-CoM[2])] = Particle
        volume = Cubic_Particle.size  # Gives volume of the box in voxels
        info[i-1, :] = [C[0], C[1], C[2], i, C[0]-b, C[1]-b, C[2]-b, C[0]+b, C[1]+b, C[2]+b, volume, 0, 0, 0, 0, 0, 0, 0, 0]  # Fills an array with label No., size of box, and coords
else:
    print('No particles found, try increasing the sample size')
    info = []
OK, so I have a stack full of labelled particles and there are two things I am trying to do. First, find the centre of mass of each particle with respect to labelled_stack, which is what CoM_big_stack (and C) does, and store the coordinates in an array called info. Second, create a cubic box around each particle with its centre of mass at the centre (this is what the CoM variable relates to): I use the find_objects function in SciPy to locate a particle, use those coordinates to cut out a non-cubic box around it, and find its centre of mass. I then take the longest dimension of the box, call it b, create a cubic box of size 2b, and place the particle inside it at the right position.
Sorry this code is a mess; I am very new to Python.
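For what it is worth, SciPy's center_of_mass returns NaN when the array it is given sums to zero (for example, when the np.where step leaves no voxels equal to 255 because label i is not present in the slice), and rounding NaN to an int then raises exactly this ValueError. A small illustrative sketch (not the original code) of reproducing and guarding against that case:
import numpy as np
from scipy import ndimage

empty = np.zeros((5, 5, 5))
print(ndimage.center_of_mass(empty))      # (nan, nan, nan): the total mass is zero

particle = np.where(empty == 1, 255, 0)   # stand-in for the labelling step; selects nothing
if particle.sum() == 0:
    print("label not present in this sub-volume, skipping")
else:
    CoM = [int(round(c)) for c in ndimage.center_of_mass(particle)]
    print(CoM)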

Python SciPy RectSphereBivariateSpline interpolation generating wrong data?

I have 3D measurement data on a sphere that is very coarse and that I want to interpolate. With great help from @M4rtini and @HYRY here at Stack Overflow I have now been able to generate working code (based on the original RectSphereBivariateSpline example from SciPy).
The test data can be found here: testdata
""" read csv input file, post process and plot 3D data """
import csv
import numpy as np
from mayavi import mlab
from scipy.interpolate import RectSphereBivariateSpline
# user input
nElevationPoints = 17 # needs to correspond with csv file
nAzimuthPoints = 40 # needs to correspond with csv file
threshold = - 40 # needs to correspond with how measurement data was captured
turnTableStepSize = 72 # needs to correspond with measurement settings
resolution = 0.125 # needs to correspond with measurement settings
# read data from file
patternData = np.empty([nElevationPoints, nAzimuthPoints]) # empty buffer
ifile = open('ttest.csv') # need the 'b' suffix to prevent blank rows being inserted
reader = csv.reader(ifile,delimiter=',')
reader.next() # skip first line in csv file as this is only text
for nElevation in range (0,nElevationPoints):
# azimuth
for nAzimuth in range(0,nAzimuthPoints):
patternData[nElevation,nAzimuth] = reader.next()[2]
ifile.close()
# post process
def r(thetaIndex,phiIndex):
"""r(thetaIndex,phiIndex): function in 3D plotting to return positive vector length from patternData[theta,phi]"""
radius = -threshold + patternData[thetaIndex,phiIndex]
return radius
#phi,theta = np.mgrid[0:nAzimuthPoints,0:nElevationPoints]
theta = np.arange(0,nElevationPoints)
phi = np.arange(0,nAzimuthPoints)
thetaMesh, phiMesh = np.meshgrid(theta,phi)
stepSizeRad = turnTableStepSize * resolution * np.pi / 180
theta = theta * stepSizeRad
phi = phi * stepSizeRad
# create new grid to interpolate on
phiIndex = np.arange(1,361)
phiNew = phiIndex*np.pi/180
thetaIndex = np.arange(1,181)
thetaNew = thetaIndex*np.pi/180
thetaNew,phiNew = np.meshgrid(thetaNew,phiNew)
# create interpolator object and interpolate
data = r(thetaMesh,phiMesh)
theta[0] += 1e-6 # zero values for theta cause program to halt; phi makes no sense at theta=0
lut = RectSphereBivariateSpline(theta,phi,data.T)
data_interp = lut.ev(thetaNew.ravel(),phiNew.ravel()).reshape((360,180)).T
def rInterp(theta,phi):
"""rInterp(theta,phi): function in 3D plotting to return positive vector length from interpolated patternData[theta,phi]"""
thetaIndex = theta/(np.pi/180)
thetaIndex = thetaIndex.astype(int)
phiIndex = phi/(np.pi/180)
phiIndex = phiIndex.astype(int)
radius = data_interp[thetaIndex,phiIndex]
return radius
# recreate mesh minus one, needed otherwise the below gives index error, but why??
phiIndex = np.arange(0,360)
phiNew = phiIndex*np.pi/180
thetaIndex = np.arange(0,180)
thetaNew = thetaIndex*np.pi/180
thetaNew,phiNew = np.meshgrid(thetaNew,phiNew)
x = (rInterp(thetaNew,phiNew)*np.cos(phiNew)*np.sin(thetaNew))
y = (-rInterp(thetaNew,phiNew)*np.sin(phiNew)*np.sin(thetaNew))
z = (rInterp(thetaNew,phiNew)*np.cos(thetaNew))
# plot 3D data
obj = mlab.mesh(x, y, z, colormap='jet')
obj.enable_contours = True
obj.contour.filled_contours = True
obj.contour.number_of_contours = 20
mlab.show()
Although the code runs, the resulting plot is very different from the non-interpolated data; see the referenced picture for comparison.
Also, when running an interactive session, data_interp is much larger in value (>3e5) than the original data (which is around 20 at most).
Does anyone have any idea what I may be doing wrong?
I seem to have solved it!
For one thing, I tried to extrapolate whereas I can only interpolate this scattered data, so the new interpolation grid should only go up to theta = 140 degrees or so.
But the most important change is the addition of the parameter s=900 in the RectSphereBivariateSpline call.
So I now have the following code:
""" read csv input file, post process and plot 3D data """
import csv
import numpy as np
from mayavi import mlab
from scipy.interpolate import RectSphereBivariateSpline
# user input
nElevationPoints = 17 # needs to correspond with csv file
nAzimuthPoints = 40 # needs to correspond with csv file
threshold = - 40 # needs to correspond with how measurement data was captured
turnTableStepSize = 72 # needs to correspond with measurement settings
resolution = 0.125 # needs to correspond with measurement settings
# read data from file
patternData = np.empty([nElevationPoints, nAzimuthPoints]) # empty buffer
ifile = open('ttest.csv') # need the 'b' suffix to prevent blank rows being inserted
reader = csv.reader(ifile,delimiter=',')
reader.next() # skip first line in csv file as this is only text
for nElevation in range (0,nElevationPoints):
# azimuth
for nAzimuth in range(0,nAzimuthPoints):
patternData[nElevation,nAzimuth] = reader.next()[2]
ifile.close()
# post process
def r(thetaIndex,phiIndex):
"""r(thetaIndex,phiIndex): function in 3D plotting to return positive vector length from patternData[theta,phi]"""
radius = -threshold + patternData[thetaIndex,phiIndex]
return radius
#phi,theta = np.mgrid[0:nAzimuthPoints,0:nElevationPoints]
theta = np.arange(0,nElevationPoints)
phi = np.arange(0,nAzimuthPoints)
thetaMesh, phiMesh = np.meshgrid(theta,phi)
stepSizeRad = turnTableStepSize * resolution * np.pi / 180
theta = theta * stepSizeRad
phi = phi * stepSizeRad
# create new grid to interpolate on
phiIndex = np.arange(1,361)
phiNew = phiIndex*np.pi/180
thetaIndex = np.arange(1,141)
thetaNew = thetaIndex*np.pi/180
thetaNew,phiNew = np.meshgrid(thetaNew,phiNew)
# create interpolator object and interpolate
data = r(thetaMesh,phiMesh)
theta[0] += 1e-6 # zero values for theta cause program to halt; phi makes no sense at theta=0
lut = RectSphereBivariateSpline(theta,phi,data.T,s=900)
data_interp = lut.ev(thetaNew.ravel(),phiNew.ravel()).reshape((360,140)).T
def rInterp(theta,phi):
"""rInterp(theta,phi): function in 3D plotting to return positive vector length from interpolated patternData[theta,phi]"""
thetaIndex = theta/(np.pi/180)
thetaIndex = thetaIndex.astype(int)
phiIndex = phi/(np.pi/180)
phiIndex = phiIndex.astype(int)
radius = data_interp[thetaIndex,phiIndex]
return radius
# recreate mesh minus one, needed otherwise the below gives index error, but why??
phiIndex = np.arange(0,360)
phiNew = phiIndex*np.pi/180
thetaIndex = np.arange(0,140)
thetaNew = thetaIndex*np.pi/180
thetaNew,phiNew = np.meshgrid(thetaNew,phiNew)
x = (rInterp(thetaNew,phiNew)*np.cos(phiNew)*np.sin(thetaNew))
y = (-rInterp(thetaNew,phiNew)*np.sin(phiNew)*np.sin(thetaNew))
z = (rInterp(thetaNew,phiNew)*np.cos(thetaNew))
# plot 3D data
intensity = rInterp(thetaNew,phiNew)
obj = mlab.mesh(x, y, z, scalars = intensity, colormap='jet')
obj.enable_contours = True
obj.contour.filled_contours = True
obj.contour.number_of_contours = 20
mlab.show()
The resulting plot compares nicely to the original non-interpolated data:
I don't fully understand why s should be set at 900, since the RectSphereBivariateSpline documentation says to use s=0 for interpolation. However, when reading the documentation a little further I gain some insight:
Choosing the optimal value of s can be a delicate task. Recommended values for s depend on the accuracy of the data values. If the user has an idea of the statistical errors on the data, she can also find a proper estimate for s. By assuming that, if she specifies the right s, the interpolator will use a spline f(u,v) which exactly reproduces the function underlying the data, she can evaluate sum((r(i,j)-s(u(i),v(j)))**2) to find a good estimate for this s. For example, if she knows that the statistical errors on her r(i,j)-values are not greater than 0.1, she may expect that a good s should have a value not larger than u.size * v.size * (0.1)**2.
If nothing is known about the statistical error in r(i,j), s must be determined by trial and error. The best is then to start with a very large value of s (to determine the least-squares polynomial and the corresponding upper bound fp0 for s) and then to progressively decrease the value of s (say by a factor 10 in the beginning, i.e. s = fp0 / 10, fp0 / 100, ... and more carefully as the approximation shows more detail) to obtain closer fits.
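As a concrete illustration of that rule of thumb (an assumption-laden sketch, not part of the original answer): with the 17 x 40 measurement grid used above and a guessed statistical error of about 1 unit per sample, the suggested upper bound for s works out to the same order of magnitude as the s=900 found by trial and error.
# rough estimate of the smoothing factor, assuming ~1 unit of statistical error per sample
nElevationPoints = 17
nAzimuthPoints = 40
estimated_error = 1.0  # assumed; substitute your own error estimate
s_estimate = nElevationPoints * nAzimuthPoints * estimated_error**2
print(s_estimate)  # 680.0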
