I am looking for a piece of code that will help me convert my road centreline feature class to a buffer. I have the following feature classes:
roads = "c:/base/data.gdb/roadcentreline"
roadsoutput = "c:/base/data.gdb/roadcentreline_Buffer"
Now I want to convert this into a buffer and store the result in roadsoutput. Is there any way to achieve this?
UPD: the "Buffer" tool is best for one road or a set of roads, but for a network you'd be better off using the specific tools from the Network Analyst toolbox.
To complete the previous answer, your workflow should have been something like this:
Open the "Search" panel in ArcMap
Type "Buffer"
Explore the results, find the suitable tool and open it. In your case it is "Buffer" from the "Analysis" toolbox
Explore the parameters
Open "Show Help" -> "Tool Help"
Scroll down
Find these code examples there (along with a very useful parameters table):
import arcpy
arcpy.env.workspace = "C:/data"
arcpy.Buffer_analysis("roads", "C:/output/majorrdsBuffered", "100 Feet", "FULL", "ROUND", "LIST", "Distance")
# Name: Buffer.py
# Description: Find areas of suitable vegetation which exclude areas heavily impacted by major roads
# import system modules
import arcpy
from arcpy import env
# Set environment settings
env.workspace = "C:/data/Habitat_Analysis.gdb"
# Select suitable vegetation patches from all vegetation
veg = "vegtype"
suitableVeg = "C:/output/Output.gdb/suitable_vegetation"
whereClause = "HABITAT = 1"
arcpy.Select_analysis(veg, suitableVeg, whereClause)
# Buffer areas of impact around major roads
roads = "majorrds"
roadsBuffer = "C:/output/Output.gdb/buffer_output"
distanceField = "Distance"
sideType = "FULL"
endType = "ROUND"
dissolveType = "LIST"
dissolveField = "Distance"
arcpy.Buffer_analysis(roads, roadsBuffer, distanceField, sideType, endType, dissolveType, dissolveField)
# Erase areas of impact around major roads from the suitable vegetation patches
eraseOutput = "C:/output/Output.gdb/suitable_vegetation_minus_roads"
xyTol = "1 Meters"
arcpy.Erase_analysis(suitableVeg, roadsBuffer, eraseOutput, xyTol)
One way I found on the internet is to run Buffer using the variables set above and pass the remaining parameters in as strings.
Below is the suggested code to convert any polyline into a buffer. For more details, check the Esri documentation.
import arcpy
roads = "c:/base/data.gdb/roadcentreline"
roadsoutput = "c:/base/data.gdb/roadcentreline_Buffer"
arcpy.Buffer_analysis(roads, roadsoutput, "100 Feet", "FULL", "ROUND", "NONE")  # "100 Feet" is an example distance
But I am still doubtful: is there a better way to do this?
I am trying to perform a fit to a tree, but I need to add some cuts on branches that are not the observables of the fit.
The page https://zfit.readthedocs.io/en/latest/getting_started/intro/data.html tells me that I can include cuts in the dataset by specifying root_dir_options, but I don't know how to use it.
For example, I want to open a ROOT file "test.root" with a tree "ntuple". The observable of the fit is x.
I can write
data = zfit.Data.from_root("test.root", "ntuple", "x")
If I need to apply cuts on two other branches in the tree, y > 1 and z > 1, how can I write the code?
There are actually two ways as of today:
Using pandas
The most general way is to first load the data into a pandas DataFrame (using uproot) and then load it into zfit with from_pandas, where you can give an obs. You will first need to create a space with obs = zfit.Space('obsname', (lower, upper)); then you can use that in zfit.Data.from_pandas(...).
Loading with uproot can look like this:
import uproot

branches = ["pt1", "pt2"]
with uproot.open(path_root) as f:
    tree = f["events"]
    true_data = tree.arrays(branches, library="pd")
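The cuts from the question can then be applied to the dataframe before handing it to zfit. A minimal sketch (the column names x, y, z and the cut values come from the question; the data values are made up, and the from_pandas call is left commented because it needs a zfit Space):

```python
import pandas as pd

# stand-in for the dataframe loaded via uproot (hypothetical values)
df = pd.DataFrame({"x": [0.5, 1.5, 2.5, 3.5],
                   "y": [2.0, 0.5, 3.0, 1.5],
                   "z": [1.2, 2.2, 0.2, 3.2]})

# apply the cuts y > 1 and z > 1 before building the zfit dataset
cut = df.query("y > 1 and z > 1")
print(len(cut))  # 2 rows survive the cuts

# obs = zfit.Space("x", (0, 5))
# data = zfit.Data.from_pandas(cut[["x"]], obs=obs)
```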
Cutting edge
The cutting edge way is to give the limits directly in from_root; this is cutting edge development and will be available soon: https://github.com/zfit/zfit/pull/396
I am looking for a Python module that will merge DXF files. I have found dxfgrabber and ezdxf; however, they seem to be intended for different applications than what I am after.
I am using ExpressPCB, which outputs each layer of the PCB, the holes, and the silkscreen separately. For my application I want to combine all of these individual DXF files into a single one (see photo).
As far as I know, the origins, etc. are the same, so it should all line up as it would in real life.
Currently, neither of these modules has a tutorial for this type of application. Some pseudocode to get the idea across in a Pythonic way:
dxf_file1 = read(file1)
dxf_file2 = read(file2)
dxf_file3 = read(file3)
out_file.append(dxf_file1)
out_file.append(dxf_file2)
out_file.append(dxf_file3)
out_file.save()
In my application, the files will all have the same origin point and will never overlap, so it should be possible to merge the files easily. Thanks in advance for the help!
You can use the rewritten Importer add-on in ezdxf v0.10:
import ezdxf
from ezdxf.addons import Importer
def merge(source, target):
    importer = Importer(source, target)
    # import all entities from source modelspace into target modelspace
    importer.import_modelspace()
    # import all required resources and dependencies
    importer.finalize()

base_dxf = ezdxf.readfile('file1.dxf')
for filename in ('file2.dxf', 'file3.dxf'):
    merge_dxf = ezdxf.readfile(filename)
    merge(merge_dxf, base_dxf)

# base_dxf.save()  # to save as file1.dxf
base_dxf.saveas('merged.dxf')
This importer supports just basic graphics like LINE, CIRCLE, ARC and DIMENSION (without dimension style overrides) and so on.
All XDATA and third-party data will be ignored, but your files seem simple enough for this to work.
Documentation of the Importer add-on can be found here.
import sys
import ezdxf
from ezdxf.addons import geo
from shapely.geometry import shape
from shapely.ops import unary_union
def dxf2shapely(filename):
    doc = ezdxf.readfile(filename)
    msp = doc.modelspace()
    entities = msp.query('LINE')
    proxy = geo.proxy(entities)
    shapely_polygon = shape(proxy)
    if not shapely_polygon.is_valid:
        raise Exception('polygon is not valid')
    return shapely_polygon

h0 = dxf2shapely('test-0.dxf')
h1 = dxf2shapely('test-1.dxf')
polygons = [h0, h1]
polyout = unary_union(polygons)

result = geo.dxf_entities(polyout, polygon=2)

doc = ezdxf.new('R2010')
msp = doc.modelspace()
for entity in result:
    msp.add_entity(entity)
doc.saveas('test_merged.dxf')
I have a high resolution triangular mesh with about 2 million triangles. I want to reduce the number of triangles and vertices to about ~10000 each, while preserving its general shape as much as possible.
I know this can be done in Matlab using reducepatch. Another alternative is the qslim package. There is also decimation functionality in VTK, which has a Python interface, so it is technically possible in Python as well. MeshLab may be usable from Python too (?).
How can I do this kind of mesh decimation in python? Examples would be greatly appreciated.
Here is a minimal Python prototype translated from its C++ equivalent VTK example (http://www.vtk.org/Wiki/VTK/Examples/Cxx/Meshes/Decimation), as MrPedru22 suggested.
from vtk import (vtkSphereSource, vtkPolyData, vtkDecimatePro)
def decimation():
    sphereS = vtkSphereSource()
    sphereS.Update()

    inputPoly = vtkPolyData()
    inputPoly.ShallowCopy(sphereS.GetOutput())

    print("Before decimation\n"
          "-----------------\n"
          "There are " + str(inputPoly.GetNumberOfPoints()) + " points.\n"
          "There are " + str(inputPoly.GetNumberOfPolys()) + " polygons.\n")

    decimate = vtkDecimatePro()
    decimate.SetInputData(inputPoly)
    decimate.SetTargetReduction(.10)
    decimate.Update()

    decimatedPoly = vtkPolyData()
    decimatedPoly.ShallowCopy(decimate.GetOutput())

    print("After decimation\n"
          "----------------\n"
          "There are " + str(decimatedPoly.GetNumberOfPoints()) + " points.\n"
          "There are " + str(decimatedPoly.GetNumberOfPolys()) + " polygons.\n")

if __name__ == "__main__":
    decimation()
I would recommend using vtkQuadricDecimation; the quality of the output model is visually better than with vtkDecimatePro (without proper settings).
decimate = vtkQuadricDecimation()
decimate.SetInputData(inputPoly)
decimate.SetTargetReduction(0.9)
One of the most important things is to use Binary representation when saving STL:
stlWriter = vtkSTLWriter()
stlWriter.SetFileName(filename)
stlWriter.SetFileTypeToBinary()
stlWriter.SetInputConnection(decimate.GetOutputPort())
stlWriter.Write()
An elegant Python decimation tool using MeshLab (mainly the MeshlabXML library) can be found in Dr. Hussein Bakri's repository:
https://github.com/HusseinBakri/3DMeshBulkSimplification
I use it all the time; have a look at the code.
Another option is the open-source library MeshLib, which can be called from both C++ and Python (where it is installed via pip).
The decimation code will look like this:
import meshlib.mrmeshpy as mr
# load high-resolution mesh:
mesh = mr.loadMesh(mr.Path("busto.stl"))
# decimate it with max possible deviation 0.5:
settings = mr.DecimateSettings()
settings.maxError = 0.5
result = mr.decimateMesh(mesh, settings)
print(result.facesDeleted)
# 708298
print(result.vertsDeleted)
# 354149
# save low-resolution mesh:
mr.saveMesh(mesh, mr.Path("simplified-busto.stl"))
Visually both meshes look as follows:
I want to sample a radio station that broadcasts in the *.m3u8 format and produce a histogram of the first n seconds (where the user fixes n).
I have tried radiopy, but it doesn't work, and gnuradio seems useless. How can I produce and show this histogram?
EDIT: I now use GStreamer v1.0, so I can play the stream directly, but I still need to live-sample my broadcast. How can I do that using Gst?
gnuradio seems useless
Well, I'd argue that this is what you're looking for, if you're looking for a live spectrogram:
As you can see, it's just a matter of connecting a properly configured audio source to a Qt GUI sink (I wrote an answer about proper configuration, and a GNU Radio wiki page as well).
The point is: you shouldn't be trying to play an internet station by yourself. Leave that to software that knows what it is doing.
In your case, I'd recommend:
Use VLC or mplayer to decode the radio stream to 32-bit float PCM at a fixed sampling rate and write it to a file.
Then use Python with NumPy to open that file (samples = numpy.fromfile(filename, dtype=numpy.float32)), and matplotlib/pyplot to plot a spectrogram to a file, i.e. something like this (untested, because written right here):
#!/usr/bin/python2
import sys
import os
import tempfile
import numpy
from matplotlib import pyplot
stream = sys.argv[1] ## you can pass the stream URL as argument
outfile = sys.argv[2] ## second argument: output file; ending determines type!
num_of_seconds = min(int(sys.argv[3]), 60) # not more than 1min of streaming
(intermediate_file, inter_fname) = tempfile.mkstemp()
# pan = number of output channels (1: mix to mono)
# resample = sampling rate. this must be the same for all files, so that we can actually compare spectrograms
# format = sample format, here: native floats
os.system("mplayer -endpos %d -vo null -af pan=1 -af resample=44100 -af format=floatne -ao pcm:nowaveheader:file=%s" % (num_of_seconds, inter_fname))
samples = numpy.fromfile(inter_fname, dtype=numpy.float32)
pyplot.figure(figsize=(num_of_seconds * 44100, 256), dpi=1)
### Attention: this call to specgram expects of you to understand what the Discrete Fourier Transform does.
### This uses a Hanning window by default; whether that is appropriate for audio data is questionable. Use all your DSP skillz!
### pyplot.specgram has a lot of options, including colormaps, frequency scaling, overlap. Make yourself acquainted with those!
pyplot.specgram(samples, NFFT=256, Fs=44100)
pyplot.savefig(outfile, bbox_inches="tight")
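For reference, the core of what pyplot.specgram computes (Hann-windowed frames followed by a DFT magnitude per frame) can be sketched with plain NumPy; this is only an illustration, not part of the streaming pipeline above:

```python
import numpy as np

def spectrogram(samples, nfft=256, hop=128):
    """Magnitude spectrogram: Hann-windowed frames -> |rFFT| per frame."""
    window = np.hanning(nfft)
    n_frames = 1 + (len(samples) - nfft) // hop
    frames = np.stack([samples[i * hop:i * hop + nfft] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape (n_frames, nfft//2 + 1)

# 1 second of a 1 kHz tone at 44.1 kHz: the energy lands in one frequency bin
fs = 44100
t = np.arange(fs) / fs
spec = spectrogram(np.sin(2 * np.pi * 1000.0 * t))
peak_bin = spec.mean(axis=0).argmax()  # expect round(1000 / (fs / 256)) = 6
```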
I am running a query to select a polygon from a set of polygons, then I input that polygon into a feature dataset in a geodatabase. I then use this polygon (or set of polygons) to dissolve to get the boundary of the polygons, and the centroid of the polygon(s), each entered into a separate feature dataset in the geodatabase.
import arcpy, os
#Specify the drive you have stored the NCT_GIS folder on
drive = arcpy.GetParameterAsText(0)
arcpy.env.workspace = (drive + ":\\NCT_GIS\\DATA\\RF_Properties.gdb")
arcpy.env.overwriteOutput = True
lot_DP = arcpy.GetParameterAsText(1).split(';')
PropertyName = arcpy.GetParameterAsText(2)
queryList= []
for i in range(0, len(lot_DP)):
    if i % 2 == 0:
        lt = lot_DP[i]
        DP = lot_DP[i+1]
        query_line = """( "LOTNUMBER" = '{0}' AND "PLANNUMBER" = {1} )""".format(lt, DP)
        queryList.append(query_line)
        if i < (len(lot_DP)):
            queryList.append(" OR ")
del queryList[len(queryList)-1]
query = ''.join(queryList)
#Feature dataset for lot file
RF_Prop = drive + ":\\NCT_GIS\\DATA\\RF_Properties.gdb\\Lots\\"
#Feature dataset for the property boundary
RF_Bound = drive + ":\\NCT_GIS\\DATA\\RF_Properties.gdb\\Boundary\\"
#Feature dataset for the property centroid
RF_Centroid = drive + ":\\NCT_GIS\\DATA\\RF_Properties.gdb\\Centroid\\"
lotFile = drive + ":\\NCT_GIS\\DATA\\NSWData.gdb\\Admin\\cadastre"
arcpy.MakeFeatureLayer_management(lotFile, "lot_lyr")
arcpy.SelectLayerByAttribute_management("lot_lyr", "NEW_SELECTION", query)
#Create lot polygons in feature dataset
arcpy.CopyFeatures_management("lot_lyr", RF_Prop + PropertyName)
#Create property boundary in feature dataset
arcpy.Dissolve_management(RF_Prop + PropertyName , RF_Bound + PropertyName, "", "", "SINGLE_PART", "DISSOLVE_LINES")
#Create property centroid in feature dataset
arcpy.FeatureToPoint_management(RF_Bound + PropertyName, RF_Centroid + PropertyName, "CENTROID")
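As an aside, the OR-joined where clause assembled by the loop above can be built more compactly with zip and str.join; a pure-Python sketch (the lot and plan numbers here are made-up examples):

```python
# lot_DP alternates lot number and plan number, as in the script above
lot_DP = ["12", "755232", "7", "1089123"]  # hypothetical values

# pair up (lot, plan) and join the per-pair clauses with " OR "
pairs = zip(lot_DP[0::2], lot_DP[1::2])
query = " OR ".join(
    """( "LOTNUMBER" = '{0}' AND "PLANNUMBER" = {1} )""".format(lt, dp)
    for lt, dp in pairs
)
print(query)
```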
Every time I run this I get an error when trying to add anything to the geodatabase, EXCEPT when copying the lot layer into the geodatabase.
I have tried not copying the lots into the geodatabase and copying them into a shapefile instead, and then using that, but the boundary and centroid still will not import into the geodatabase. I tried outputting the boundaries to shapefiles and then using the FeatureClassToGeodatabase tool, but I still get error after error.
If anyone can shed light on this, I would be grateful.
In my experience, if I had recently opened and then closed ArcMap or ArcCatalog, two ArcGIS services would be left running (check Task Manager) even though both applications were closed. If I tried running a script while these two services were running, I would get this error. Finding these services in Windows Task Manager and ending them fixed the error for me. The two services were:
ArcGIS cache manager
ArcGIS online services
I've also heard that your computer's security/anti-virus software may interfere with running scripts, so adding your working directory as an exception to your security software may also help.
On the rare occasions when this didn't work, I just had to restart the computer.