Geopandas is throwing a driver error when reading a SHP file.
DriverError: '*PATH*/cb_2018_us_zcta510_500k.shp' does not exist in the file system, and is not recognized as a supported dataset name.
All I am doing is this:
import geopandas
geopandas.read_file("*PATH*/cb_2018_us_zcta510_500k.shp")
The directory this pulls from includes all the other needed files downloaded from here:
https://www.census.gov/geographies/mapping-files/time-series/geo/carto-boundary-file.html
and the actual files are here:
https://www2.census.gov/geo/tiger/GENZ2018/shp/cb_2018_us_zcta510_500k.zip
Just to confirm that the file is not corrupt or anything, I opened it up in QGIS and it loaded perfectly.
In case someone else needs similar info: I, too, had a legitimate shapefile path, but GeoPandas read_file threw a DriverError saying the file was not recognized as a supported file format.
What worked for me is the following:
import fiona
import geopandas

# Open the shapefile with fiona, then build a GeoDataFrame from its features
with fiona.open('/path/to/my_shapefile.shp') as shp:
    geo = geopandas.GeoDataFrame.from_features(shp, crs=shp.crs)
ax = geo.plot()
# ...rest of code
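Depending on the GeoPandas and Fiona versions installed, it may also work to read the downloaded Census zip archive directly, without unpacking it first. A minimal sketch, assuming the zip from the question has been saved locally (the local path is an assumption):
import geopandas

# Hypothetical local copy of the Census download; adjust to where the zip actually lives
zcta = geopandas.read_file("zip://./cb_2018_us_zcta510_500k.zip")
print(zcta.head())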
In short, I am trying to convert a shapefile to geojson using gdal. Here is the idea:
from osgeo import gdal

def shapefile2geojson(infile, outfile):
    options = gdal.VectorTranslateOptions(format="GeoJSON", dstSRS="EPSG:4326")
    gdal.VectorTranslate(outfile, infile, options=options)
Okay, then here are my input & output locations:
infile = r"C:\Users\clay\Desktop\Geojson Converter\arizona.shp"
outfile = r"C:\Users\clay\Desktop\Geojson Converter\arizona.geojson"
Then I call the function:
shapefile2geojson(infile, outfile)
It never saves where I can find it, if it is working at all. It would be nice if it would pull from a file and put the newly converted GeoJSON in the same folder. I am not receiving any errors. I am using Windows and Jupyter Notebook and am a noob. I don't know if I am using this right:
r"C:\Users\clay\Desktop\Geojson Converter\arizona.shp"
I had a problem opening .nc files and converting them to .csv files, but I still cannot read them (meaning the first part). I saw this link and also this link, but I could not find out how to open them. I have written a piece of code and I faced an error, which I will post below. To elaborate on the error: the code is able to find the files but is not able to open them.
#from netCDF4 import Dataset # use scipy instead
from scipy.io import netcdf #### <--- This is the library to import.
import os
# Open file in a netCDF reader
directory = './'
#wrf_file_name = directory+'filename'
wrf_file_name = [f for f in sorted(os.listdir('.')) if f.endswith('.nc')]
nc = netcdf.netcdf_file(wrf_file_name,'r')
#Look at the variables available
nc.variables
#Look at the dimensions
nc.dimensions
And the error is:
Error: LAKE00000002-GloboLakes-L3S-LSWT-v4.0-fv01.0.nc is not a valid NetCDF 3 file
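For what it's worth, scipy.io.netcdf only reads the classic NetCDF-3 format, so an error like this usually means the file is NetCDF-4 (HDF5-based). A minimal sketch using the netCDF4 package instead, one file at a time; that the package is installed and that looping over the listed files is what you want are both assumptions:
import os
from netCDF4 import Dataset  # reads NetCDF-4/HDF5 files, unlike scipy.io.netcdf

nc_files = [f for f in sorted(os.listdir('.')) if f.endswith('.nc')]
for fname in nc_files:
    with Dataset(fname, 'r') as nc:
        # Look at the variables and dimensions available in each file
        print(fname, list(nc.variables), list(nc.dimensions))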
When loading a dataset into Jupyter, I know it requires a few lines of code to load it in:
import numpy as np
from tensorflow.contrib.learn.python.learn.datasets import base

# Data files
IRIS_TRAINING = "iris_training.csv"
IRIS_TEST = "iris_test.csv"

# Load datasets.
training_set = base.load_csv_with_header(filename=IRIS_TRAINING,
                                         features_dtype=np.float32,
                                         target_dtype=np.int)
test_set = base.load_csv_with_header(filename=IRIS_TEST,
                                     features_dtype=np.float32,
                                     target_dtype=np.int)
So why is the error NotFoundError: iris_training.csv still thrown? I feel as though there is more to loading data sets into Jupyter, and I would be grateful for any help on this topic.
I'm following a course through AI Adventures, and I don't know how to add the .csv file; the video mentions nothing about how to add it.
Here is the link: https://www.youtube.com/watch?v=G7oolm0jU8I&list=PLIivdWyY5sqJxnwJhe3etaK7utrBiPBQ2&index=3
The issue is that you either need to use the file's absolute path, i.e. C:\path_to_csv\iris_training.csv on Windows or /path_to_csv/iris_training.csv on UNIX/Linux, or you need to place the file in your notebook workspace, i.e. the directory that is listed in your Jupyter UI (the http://localhost:8888/tree web UI). If you are having trouble finding that directory, just execute the Python code below and place the file in the printed location.
import os
cwd = os.getcwd()
print(cwd)
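As a quick usage check, you can build the absolute path from that working directory and confirm the file is visible before loading it; the join below is only an illustration:
import os

csv_path = os.path.join(os.getcwd(), "iris_training.csv")
print(csv_path, os.path.exists(csv_path))  # should print True once the file is in place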
Solution A
If you are working with Python, you can use the pandas library to import your .csv file:
import pandas as pd
IRIS_TRAINING = pd.read_csv("../iris_training.csv")
IRIS_TEST = pd.read_csv("../iris_test.csv")
Solution B
import numpy as np

filename = "iris_training.csv"  # path to your .csv file
mydata = np.genfromtxt(filename, delimiter=",")
Read more about pandas and NumPy.
I'm trying to import the shapefile "Metropolin_31Jul_0921.shp" into Python using the following code:
import shapefile
stat_area_df = shapefile.Reader("Metropolin_31Jul_0921.shp")
but I keep getting this error:
File "C:\Users\maya\Anaconda3\lib\site-packages\shapefile.py", line 291,
in load
raise ShapefileException("Unable to open %s.dbf or %s.shp." %
(shapeName, shapeName) )
shapefile.ShapefileException: Unable to open Metropolin_31Jul_0921.dbf
or Metropolin_31Jul_0921.shp.
Does anyone know what it means?
I tried adding the directory but it didn't help.
Make sure that the directory the shapefile is located in includes all of the supporting files, such as .dbf, .shx, etc. The .shp will not work without these supporting files.
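A quick way to confirm this is to list which sidecar files are actually present next to the .shp. A small diagnostic sketch, assuming the shapefile sits in the current working directory:
from pathlib import Path

shp = Path("Metropolin_31Jul_0921.shp")
for ext in (".shp", ".shx", ".dbf", ".prj"):
    sidecar = shp.with_suffix(ext)
    print(sidecar.name, "found" if sidecar.exists() else "MISSING")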
I am writing a code that creates an HDF5 that can later be used for data analysis. I load the following packages:
import numpy as np
import tables
Then I use the tables module to determine if my file is an HDF5 file with:
tables.isHDF5File(FILENAME)
This would normally return either True or False depending on whether the file is actually an HDF5 file or not. However, I get the error:
AttributeError: module 'tables' has no attribute 'isHDF5File'
So I tried:
from tables import isHDF5File
and got the error:
ImportError: cannot import name 'isHDF5File'
I've tried this code on another computer, and it ran fine. I've tried updating both numpy and tables with pip, but it states that the packages are already up to date. Is there a reason 'tables' isn't recognizing 'isHDF5File' for me? I am running this code on a Mac (not working) but it worked on a PC (if this matters).
Do you have the function name right?
In [21]: import tables
In [22]: tables.is_hdf5_file?
Docstring:
is_hdf5_file(filename)
Determine whether a file is in the HDF5 format.
When successful, it returns a true value if the file is an HDF5
file, false otherwise. If there were problems identifying the file,
an HDF5ExtError is raised.
Type: builtin_function_or_method
In [23]:
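The camelCase name comes from the pre-3.0 PyTables API, so which spelling is available depends on the installed version. A small compatibility sketch; the getattr fallback is my own suggestion rather than part of either snippet above:
import tables

# Prefer the current PEP 8 name; fall back to the old camelCase name if that's what is installed
check_hdf5 = getattr(tables, "is_hdf5_file", None) or getattr(tables, "isHDF5File")

filename = "myfile.h5"  # hypothetical path, standing in for FILENAME from the question
print(check_hdf5(filename))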