I am running a query to select a polygon (or set of polygons) from a set of polygons, and I copy that selection into a feature dataset in a geodatabase. I then dissolve the copied polygon(s) to get their boundary and compute the centroid, each of which is also written to its own feature dataset in the geodatabase.
import arcpy, os
#Specify the drive you have stored the NCT_GIS folder on
drive = arcpy.GetParameterAsText(0)
arcpy.env.workspace = (drive + ":\\NCT_GIS\\DATA\\RF_Properties.gdb")
arcpy.env.overwriteOutput = True
lot_DP = arcpy.GetParameterAsText(1).split(';')
PropertyName = arcpy.GetParameterAsText(2)
queryList= []
for i in range(0, len(lot_DP)):
    if i % 2 == 0:
        lt = lot_DP[i]
        DP = lot_DP[i+1]
        query_line = """( "LOTNUMBER" = '{0}' AND "PLANNUMBER" = {1} )""".format(lt, DP)
        queryList.append(query_line)
        if i < (len(lot_DP)):
            queryList.append(" OR ")
del queryList[len(queryList)-1]
query = ''.join(queryList)
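# Example with hypothetical input lot_DP = ['1', '12345', '2', '67890'], the loop above builds:
# query == ( "LOTNUMBER" = '1' AND "PLANNUMBER" = 12345 ) OR ( "LOTNUMBER" = '2' AND "PLANNUMBER" = 67890 )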
#Feature dataset for lot file
RF_Prop = drive + ":\\NCT_GIS\\DATA\\RF_Properties.gdb\\Lots\\"
#Feature dataset for the property boundary
RF_Bound = drive + ":\\NCT_GIS\\DATA\\RF_Properties.gdb\\Boundary\\"
#Feature dataset for the property centroid
RF_Centroid = drive + ":\\NCT_GIS\\DATA\\RF_Properties.gdb\\Centroid\\"
lotFile = drive + ":\\NCT_GIS\\DATA\\NSWData.gdb\\Admin\\cadastre"
arcpy.MakeFeatureLayer_management(lotFile, "lot_lyr")
arcpy.SelectLayerByAttribute_management("lot_lyr", "NEW_SELECTION", query)
#Create lot polygons in feature dataset
arcpy.CopyFeatures_management("lot_lyr", RF_Prop + PropertyName)
#Create property boundary in feature dataset
arcpy.Dissolve_management(RF_Prop + PropertyName , RF_Bound + PropertyName, "", "", "SINGLE_PART", "DISSOLVE_LINES")
#Create property centroid in feature dataset
arcpy.FeatureToPoint_management(RF_Bound + PropertyName, RF_Centroid + PropertyName, "CENTROID")
Every time I run this I get an error when trying to add anything to the geodatabase EXCEPT when copying the lot layer into the geodatabase.
I have tried skipping the copy of the lots into the geodatabase, copying them into a shapefile instead and using that, but the boundary and centroid still will not import into the geodatabase. I also tried outputting the boundaries to shapefiles and then using the FeatureClassToGeodatabase tool, but I still get error after error.
If anyone can shed light on this I would be grateful.
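For what it's worth, arcpy's own messages usually pinpoint which path or parameter a tool rejects; a minimal diagnostic sketch, reusing the variable names from the script above, would be to wrap the failing calls and print the tool messages:
import arcpy
try:
    arcpy.Dissolve_management(RF_Prop + PropertyName, RF_Bound + PropertyName,
                              "", "", "SINGLE_PART", "DISSOLVE_LINES")
    arcpy.FeatureToPoint_management(RF_Bound + PropertyName,
                                    RF_Centroid + PropertyName, "CENTROID")
except arcpy.ExecuteError:
    # GetMessages(2) returns only the error messages from the last tool run
    print(arcpy.GetMessages(2))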
In my experience, if I had recently opened and then closed ArcMap or ArcCatalog, two ArcGIS services would be left running (check Task Manager) even though the applications were closed. If I tried running a script while these two services were running I would get this error. Finding the services in Windows Task Manager and ending them fixed the error for me. The two services were:
ArcGIS cache manager
ArcGIS online services
I've also heard that your computer's security/anti-virus software can interfere with running scripts, so adding your working directory as an exception to your security software may also help.
On the rare occasions that this didn't work, I just had to restart the computer.
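If the leftover processes keep reappearing, they can also be ended from the script itself; a rough sketch using taskkill, where the executable names are placeholders that need to be checked against what Task Manager actually shows:
import subprocess
# Placeholder executable names -- confirm the real names in Windows Task Manager
for exe in ("ArcGISCacheMgr.exe", "ArcGISConnection.exe"):
    # /f forces termination, /im selects the process by image (executable) name
    subprocess.call(["taskkill", "/f", "/im", exe])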
I am looking for a piece of code to help me convert my road centreline feature into a buffer. I have the following feature classes:
roads = "c:/base/data.gdb/roadcentreline"
roadsoutput = "c:/base/data.gdb/roadcentreline_Buffer"
Now I want to convert this into a buffer and store it in roadsoutput. Is there any way to achieve this?
UPD: the "Buffer" tool is best for one road or a set of roads, but for a network you are better off using specific tools from the Network Analyst toolbox.
To complete the previous answer:
Your workflow should have been something like this:
Open "Search" panel in arcMap
Type "Buffer"
Explore the results, find the suitable tool and open it. In your case it is "Buffer" from the "Analysis" toolbox
Explore parameters
Open "Show Help" -> "Tool Help"
Scroll down
Find these code examples there (and also a very useful parameters table):
import arcpy
arcpy.env.workspace = "C:/data"
arcpy.Buffer_analysis("roads", "C:/output/majorrdsBuffered", "100 Feet", "FULL", "ROUND", "LIST", "Distance")
# Name: Buffer.py
# Description: Find areas of suitable vegetation which exclude areas heavily impacted by major roads
# import system modules
import arcpy
from arcpy import env
# Set environment settings
env.workspace = "C:/data/Habitat_Analysis.gdb"
# Select suitable vegetation patches from all vegetation
veg = "vegtype"
suitableVeg = "C:/output/Output.gdb/suitable_vegetation"
whereClause = "HABITAT = 1"
arcpy.Select_analysis(veg, suitableVeg, whereClause)
# Buffer areas of impact around major roads
roads = "majorrds"
roadsBuffer = "C:/output/Output.gdb/buffer_output"
distanceField = "Distance"
sideType = "FULL"
endType = "ROUND"
dissolveType = "LIST"
dissolveField = "Distance"
arcpy.Buffer_analysis(roads, roadsBuffer, distanceField, sideType, endType, dissolveType, dissolveField)
# Erase areas of impact around major roads from the suitable vegetation patches
eraseOutput = "C:/output/Output.gdb/suitable_vegetation_minus_roads"
xyTol = "1 Meters"
arcpy.Erase_analysis(suitableVeg, roadsBuffer, eraseOutput, xyTol)
One way I found on the internet is to run Buffer using the variables set above and pass the remaining parameters in as strings.
Below is the suggested code to convert any polyline into a buffer. For more details check the Esri documentation.
import arcpy
roads = "c:/base/data.gdb/roadcentreline"
roadsoutput = "c:/base/data.gdb/roadcentreline_Buffer"
# the buffer distance can be a linear unit string (e.g. "100 Meters") or a field name
arcpy.Buffer_analysis(roads, roadsoutput, "100 Meters", "FULL", "ROUND", "NONE")
But I am still doubtful: is there a better way to do this?
So, I'm playing around with the Spatialite VirtualKNN to try my hand at some routing using OpenStreetMap data and Python.
After downloading the Australian osm data, I've extracted a roads layer and created a separate Spatialite database.
spatialite_osm_net.exe -o australia-latest.osm.pbf -d aus_road.sqlite -T roads
I then ran this query from Spatialite_gui
select ST_AsText(ref_geometry)
from knn where f_table_name = 'roads'
and ref_geometry = ST_POINT(145.61249, -38.333801)
And got these 3 results
POINT(145.61249 -38.333801)
POINT(145.61249 -38.333801)
POINT(145.61249 -38.333801)
So I then created this quick and nasty Python script (Python 3.9.0 but I have tried other versions)
import spatialite
file = "./aus_road.sqlite"
db = spatialite.connect(file)
allme = db.execute("select ST_AsText(ref_geometry) from knn where f_table_name = 'roads' and ref_geometry = ST_POINT(145.61249, -38.333801)")
allme.fetchall()
And all I get back is
[]
Does anybody have any idea why VirtualKNN won't work in Python?
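One thing worth ruling out is the libspatialite build that the Python module loads, since the knn virtual table only works when KNN support was compiled in; a small check, assuming the same connection as above and that the build exposes the HasKNN() function:
import spatialite

db = spatialite.connect("./aus_road.sqlite")
# If HasKNN() returns 0, the knn virtual table will never return any rows
print(db.execute("SELECT spatialite_version(), HasKNN()").fetchone())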
I have a simple loop to download a large number of images (1.5 million). The images themselves are small, but I estimate the total size will be 250 GB, which is too much for my HDD.
I got an external HDD, but even though the code runs without errors, the designated image folder is empty!
I tried the same code with a directory on my internal HDD and it works fine, slowly retrieving the images. Interestingly, the code reads the .csv file from the external HDD, so reading does not seem to be the problem.
Any idea what I could do?
import os
import pandas as pd
import urllib.request
# change paths and dependencies:
file_name = "ID_with_image_links.csv"
file_path = "/Volumes/Extreme SSD/"
path_for_images = "/Volumes/Extreme SSD/images"
os.chdir(file_path)
df = pd.read_csv(file_name)
total_len = len(df)
os.chdir(path_for_images)
df = df.head(10) # this is for try-out
n = 1
for index, row in df.iterrows():
    id = str(row['ID'])
    im_num = str(row["Image Number"])
    link = str(row["Links"])
    urllib.request.urlretrieve(link, (id + "_" + im_num + ".jpg"))
    print("Image", n, "of ", total_len, "downloaded")
    n = n + 1
Try setting the directory writable. I figure you are using macOS?
You can give the directory read/write access by running chmod 777 "/Volumes/Extreme SSD/images/" in the terminal as root (a directory also needs the execute bit set to be usable).
At least on BSD (and macOS is based on that) mounting an external drive is read only by default IIRC.
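A quick way to confirm from Python whether the folder is actually writable (before starting 1.5 million downloads) is a probe like this, using only the standard library:
import os

path_for_images = "/Volumes/Extreme SSD/images"
# os.access checks the effective permissions of the running process
print("W_OK:", os.access(path_for_images, os.W_OK))

# Writing and removing a small probe file is the most direct test of the mount
probe = os.path.join(path_for_images, "write_test.tmp")
with open(probe, "w") as f:
    f.write("test")
os.remove(probe)
print("external volume is writable")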
I made this code in Python 2.7 for downloading a Bing traffic flow map (of a specific area) every x minutes.
from cStringIO import StringIO
from PIL import Image
import urllib
import time
i = 1
end = time.time() + 60*24*60
url = 'https://dev.virtualearth.net/REST/V1/Imagery/Map/AerialWithLabels/45.8077453%2C15.963863/17?mapSize=500,500&mapLayer=TrafficFlow&format=png&key=Lt2cLlR9OcfEnMLv5qyd~YbPpC6zOQdhTMcwsKCwlgQ~Am2YLG00hHI6h7W1IPq31VOzqEXKAhedzHfknCejIrdQF_iVrQS82AUdjBT0YMtt'
while True:
    buffer = StringIO(urllib.urlopen(url).read())
    image = Image.open(buffer)
    image.save('C:\Users\slika' + str(i) + '.png')
    i = i + 1
    if time.time() > end:
        break
    time.sleep(60*10)
This is one of the images I got: traffic flow
Now my question is: can I extract only the traffic flow lines (green, yellow, orange, red), assign them attributes (1, 2, 3, 4) or ('No traffic', 'Light', 'Moderate', 'Heavy'), and save them to a shapefile for use in QGIS? What modules should I look at, and is it even possible? Any idea or sample code would be much appreciated.
This is against the terms of use of Bing Maps.
Also, I notice that you are using a Universal Windows App key. Those keys are to only be used in public facing Windows apps that anyone has access to. These keys cannot be used in GIS/business apps. Use a Dev/Test key or upgrade to an Enterprise account.
I'm trying to use cartopy to plot several maps and I want to use them offline. Cartopy has a data directory,
import cartopy
cartopy.config
{'data_dir': '/home/user/.local/share/cartopy',
'downloaders': {('shapefiles',
'gshhs'): <cartopy.io.shapereader.GSHHSShpDownloader at 0x7f3ee33ee7d0>,
('shapefiles',
'natural_earth'): <cartopy.io.shapereader.NEShpDownloader at 0x7f3ee33ee710>},
'pre_existing_data_dir': '',
'repo_data_dir': '/home/user/bin/virtualenvs/mobi/local/lib/python2.7/site-packages/cartopy/data'}
So I believe that I can download the maps from the Natural Earth site. How can I structure this data in that directory so cartopy will not use the internet to plot? And how can I do the same for OpenStreetMap data?
(Partial answer only)
At Natural Earth web site, http://www.naturalearthdata.com/downloads/, you can find all the downloadable files.
For example, this link provides low resolution data: http://www.naturalearthdata.com/downloads/110m-physical-vectors/
One of the data files on that page has this link address:
http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/physical/ne_110m_coastline.zip
This piece of code will download that file (if it is not already available on the computer):
import cartopy
fname = cartopy.io.shapereader.natural_earth(resolution='110m',
                                             category='physical',
                                             name='coastline')
fname is the full pathname of the downloaded file.
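Once the zip has been fetched and cached, later calls resolve to the same local path and can be read without any network access; a small usage sketch:
import cartopy.io.shapereader as shpreader

fname = shpreader.natural_earth(resolution='110m',
                                category='physical',
                                name='coastline')
# Reader works directly on the cached local shapefile
reader = shpreader.Reader(fname)
print(fname)
print(len(list(reader.geometries())))  # number of coastline geometries read locally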
You don't need to arrange the download location for cartopy. It already has a default location that you can find with:
cartopy.config['data_dir'] # usually 'C:\\Users\\username\\.local\\share\\cartopy'
You can check out the files you downloaded and see how they are structured in that location.
The next time you use the cartopy function cartopy.io.shapereader.natural_earth (with the default config), it will use the local files if they are available.
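To stay fully offline, the same mechanism can also be pointed at a directory you have pre-populated yourself; a sketch, assuming the extracted shapefiles sit under a hypothetical folder in the layout cartopy expects (shapefiles/natural_earth/<category>/...):
import cartopy
import cartopy.crs as ccrs
import matplotlib.pyplot as plt

# pre_existing_data_dir is consulted before any download is attempted;
# '/home/user/offline_cartopy_data' is a placeholder for your own folder
cartopy.config['pre_existing_data_dir'] = '/home/user/offline_cartopy_data'

ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines(resolution='110m')  # served from the local copy
plt.savefig('coastlines.png')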
I had faced a similar issue where, with cartopy, plt.gca().coastlines() was triggering the download of a zip file from an external server, but the download was failing because internet connectivity was absent.
/home/apps/CARTOPY/0.16.0/lib64/python2.7/site-packages/Cartopy-0.16.0-py2.7-linux-x86_64.egg/cartopy/io/__init__.py:260: DownloadWarning: Downloading: http://naciscdn.org/naturalearth/110m/physical/ne_110m_coastline.zip
warnings.warn('Downloading: {}'.format(url), DownloadWarning)
I manually downloaded the zip file and extracted it under ~/.local/share/cartopy/shapefiles/natural_earth/physical:
~/.local/share/cartopy/shapefiles/natural_earth/physical> ls
ne_110m_coastline.README.html ne_110m_coastline.cpg ne_110m_coastline.prj ne_110m_coastline.shx
ne_110m_coastline.VERSION.txt ne_110m_coastline.dbf ne_110m_coastline.shp
Then, after renaming the data files to remove the "ne_" prefix, I was able to solve the issue:
~/PLOT_TEST> ls ~/.local/share/cartopy/shapefiles/natural_earth/physical/
110m_coastline.cpg 110m_coastline.dbf 110m_coastline.prj 110m_coastline.shp 110m_coastline.shx ne_110m_coastline.README.html ne_110m_coastline.VERSION.txt
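The manual steps above (download, unzip, strip the "ne_" prefix) can also be scripted; a rough sketch in Python 3, using the URL from the warning message above:
import os
import zipfile
import urllib.request

# Layout cartopy uses for physical Natural Earth shapefiles
dest = os.path.expanduser("~/.local/share/cartopy/shapefiles/natural_earth/physical")
os.makedirs(dest, exist_ok=True)

url = "http://naciscdn.org/naturalearth/110m/physical/ne_110m_coastline.zip"
zip_path = os.path.join(dest, "ne_110m_coastline.zip")
urllib.request.urlretrieve(url, zip_path)

with zipfile.ZipFile(zip_path) as zf:
    zf.extractall(dest)

# Older cartopy versions expect the data files without the "ne_" prefix,
# hence the renaming step described above
for name in os.listdir(dest):
    if name.startswith("ne_110m_coastline.") and name.split(".")[-1] in ("shp", "shx", "dbf", "prj", "cpg"):
        os.rename(os.path.join(dest, name), os.path.join(dest, name[len("ne_"):]))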
I have prepared code with which you can download the shapefiles from Natural Earth and then convert them into a dataframe. Note that the country coordinates in Natural Earth are in polygon and multi-polygon format; if you are dealing with rivers, which are linestrings, you need to modify the code.
You might need to change the "name" parameter to your desired dataset, such as "coastline". More information can be found at the following link:
https://www.naturalearthdata.com/downloads/
import numpy as np
import pandas as pd
import cartopy.io.shapereader as shpreader

ne_earth_countries = shpreader.natural_earth(resolution='10m',
                                             category='cultural',
                                             name='admin_0_countries')
countries = shpreader.Reader(ne_earth_countries).records()

def extract_geom_meta(country):
    # Collect the exterior coordinates of every polygon in the country geometry
    coords = np.empty(shape=[0, 2])
    for geom in country.geometry:
        coords = np.append(coords, geom.exterior.coords, axis=0)
    country_name = country.attributes["ADMIN"]
    return [country_name, coords]

WorldDF = pd.DataFrame([extract_geom_meta(country) for country in countries],
                       columns=['countries', 'coordinates'])
CountryDF = pd.concat([pd.DataFrame(WorldDF['coordinates'][country_idx])
                       for country_idx in range(len(WorldDF))]).reset_index()
CountryDF['Label'] = CountryDF.apply(lambda row: 1 if row['index'] == 0
                                     else 0, axis=1).cumsum() - 1
CountryDF['Country'] = CountryDF['Label'].apply(lambda row: WorldDF.countries[row])
CountryDF.rename(columns={0: 'Longitude', 1: 'Latitude'}, inplace=True)
print(CountryDF.head())