I'm trying to import the shapefile "Metropolin_31Jul_0921.shp" to python using the following code:
import shapefile
stat_area_df = shapefile.Reader("Metropolin_31Jul_0921.shp")
but I keep getting this error:
File "C:\Users\maya\Anaconda3\lib\site-packages\shapefile.py", line 291,
in load
raise ShapefileException("Unable to open %s.dbf or %s.shp." %
(shapeName, shapeName) )
shapefile.ShapefileException: Unable to open Metropolin_31Jul_0921.dbf
or Metropolin_31Jul_0921.shp.
Does anyone know what it means?
I tried adding the directory but it didn't help.
Make sure that the directory the shapefile is located in includes all of the supporting files, such as .dbf, .shx, etc. The .shp will not work without these supporting files.
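The check above can be sketched as a quick pre-flight helper (check_shapefile_sidecars is a hypothetical name, not part of pyshp): it reports which of the required sidecar files are missing next to the .shp before you try to open it.

```python
# Hypothetical helper: report which sidecar files pyshp needs are
# missing next to a .shp before trying to open it.
from pathlib import Path

def check_shapefile_sidecars(shp_path):
    base = Path(shp_path)
    required = (".shp", ".shx", ".dbf")
    # with_suffix swaps ".shp" for each required extension in turn
    return [ext for ext in required if not base.with_suffix(ext).exists()]

missing = check_shapefile_sidecars("Metropolin_31Jul_0921.shp")
if missing:
    print("Missing sidecar files:", missing)
```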
In short, I am trying to convert a shapefile to geojson using gdal. Here is the idea:
from osgeo import gdal

def shapefile2geojson(infile, outfile):
    options = gdal.VectorTranslateOptions(format="GeoJSON", dstSRS="EPSG:4326")
    gdal.VectorTranslate(outfile, infile, options=options)
Okay then here is my input & output locations:
infile = r"C:\Users\clay\Desktop\Geojson Converter\arizona.shp"
outfile = r"C:\Users\clay\Desktop\Geojson Converter\arizona.geojson"
Then I call the function:
shapefile2geojson(infile, outfile)
It never saves where I can find it, if it is working at all. It would be nice if it pulled from a file and put the newly converted GeoJSON in the same folder. I am not receiving any errors. I am using Windows and Jupyter Notebook and am a noob. I don't know if I am using this right:
r"C:\Users\clay\Desktop\Geojson Converter\arizona.shp"
I have a problem opening .nc files and converting them to .csv files, but I still cannot read them (meaning the first part). I saw this link and also this link, but I could not find out how to open them. I have written a piece of code, and I get an error which I will post below. To elaborate on the error: the code is able to find the files but is not able to open them.
#from netCDF4 import Dataset # use scipy instead
from scipy.io import netcdf #### <--- This is the library to import.
import os
# Open file in a netCDF reader
directory = './'
#wrf_file_name = directory+'filename'
wrf_file_names = [f for f in sorted(os.listdir(directory)) if f.endswith('.nc')]
nc = netcdf.netcdf_file(wrf_file_names[0], 'r')  # open the first matching file
#Look at the variables available
nc.variables
#Look at the dimensions
nc.dimensions
And the error is:
Error: LAKE00000002-GloboLakes-L3S-LSWT-v4.0-fv01.0.nc is not a valid NetCDF 3 file
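The error itself points at the likely cause: scipy.io.netcdf only reads classic NetCDF 3 files, and the GloboLakes files are presumably NetCDF 4 (HDF5-based), which needs the netCDF4 package instead. A stdlib-only way to tell the two formats apart is to inspect the file's magic bytes (netcdf_format is a hypothetical helper):

```python
# Hypothetical helper: classic NetCDF 3 files start with b"CDF", while
# NetCDF 4 files are HDF5 containers and start with b"\x89HDF".
def netcdf_format(path):
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic[:3] == b"CDF":
        return "NetCDF 3 (classic)"     # readable by scipy.io.netcdf
    if magic == b"\x89HDF":
        return "NetCDF 4 (HDF5-based)"  # needs the netCDF4 package
    return "unknown"
```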
I'm working on a project for which I need to call functions from several Python files to use in one main program. All of the programs in question are notebooks in the same directory in Google Colab. I am having trouble calling the functions I need, and I haven't been able to find a solution that works. I've tried simply from InterpolateData import LoadandInterp, where InterpolateData is the name of the file where the function LoadandInterp is stored. This is what I currently have:
from google.colab import files
import sys
sys.path.append( "/content/drive/My Drive/Colab Notebooks")
import InterpolateData
import numpy as np
import pandas as pd
from scipy.interpolate import griddata
#get, normalize and interpolate data
#SpectralHighData
temperatureList=np.arange(25.0,46.0,1.0)
interpList=np.arange(25.0,45.0,0.1)
pathBefore="/content/drive/My Drive/Colab Notebooks/Original Data/High Temperatures/Spectral_high/CdTe Spectra Interpolated "
pathAfter="C.csv"
interpolated=InterpolateData.LoadandInterp(temperatureList, interpList, pathBefore, pathAfter)
Everything that I've tried returns an error along the lines of ModuleNotFoundError: No module named 'InterpolateData'
Does anyone know a way I can get this to work? Surely, there is a way?
Edit: Before the previous code, I have code to mount my google drive and change the directory to where the files are stored. It looks like this:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
!ls "/content/drive/My Drive/Colab Notebooks"
%cd "/content/drive/My Drive/Colab Notebooks"
For anyone who stumbles across this in the future: I was able to find a solution, eventually.
In order to access another program, the program file must be a .py file, and it must be in a folder that also contains a file called __init__.py. The __init__.py can be completely empty.
Once you have the files set up, change your directory to the folder with your 'module' program(s) using %cd 'filepath'. You can then import your module using import filename. The functions in your other program are now accessible through filename.function.
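The setup described above can be sketched end to end; here the module is built in a temporary folder so the example is self-contained, with mymodule and greet as hypothetical names standing in for InterpolateData and LoadandInterp:

```python
import os
import sys
import tempfile

# Build a folder containing an (empty) __init__.py and a .py module,
# mirroring the Drive folder layout described above.
pkg_dir = tempfile.mkdtemp()
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "mymodule.py"), "w") as f:
    f.write("def greet(name):\n    return 'hello ' + name\n")

# Same role as %cd / sys.path.append on the Colab Notebooks folder.
sys.path.append(pkg_dir)
import mymodule

print(mymodule.greet("colab"))
```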
Geopandas is throwing a driver error when reading a SHP file.
DriverError: '*PATH*/cb_2018_us_zcta510_500k.shp does not exist in the file system, and is not recognized as a supported dataset name.
All I am doing is this:
import geopandas
geopandas.read_file("*PATH*/cb_2018_us_zcta510_500k.shp")
The directory this pulls from includes all the other needed files downloaded from here:
https://www.census.gov/geographies/mapping-files/time-series/geo/carto-boundary-file.html
and the actual files are here:
https://www2.census.gov/geo/tiger/GENZ2018/shp/cb_2018_us_zcta510_500k.zip
Just to confirm that the file is not corrupt or anything I opened it up in QGis and it pulled up perfectly.
In case someone else needs similar info: I, too, had a legit shapefile path, and GeoPandas read_file threw an error: DriverError: not recognized as a supported file format.
What worked for me was opening the file with Fiona and building the GeoDataFrame from its features:
import fiona
import geopandas as gpd

with fiona.open('/path/to/my_shapefile.shp') as src:
    gdf = gpd.GeoDataFrame.from_features(src, crs=src.crs)
ax = gdf.plot()
#...rest of code
I am writing a code that creates an HDF5 that can later be used for data analysis. I load the following packages:
import numpy as np
import tables
Then I use the tables module to determine if my file is an HDF5 file with:
tables.isHDF5File(FILENAME)
This normally would print either TRUE or FALSE depending on if the file type is actually an HDF5 file or not. However, I get the error:
AttributeError: module 'tables' has no attribute 'isHDF5File'
So I tried:
from tables import isHDF5File
and got the error:
ImportError: cannot import name 'isHDF5File'
I've tried this code on another computer, and it ran fine. I've tried updating both numpy and tables with pip, but it says they are already up to date. Is there a reason tables isn't recognizing isHDF5File for me? I am running this code on a Mac (where it does not work), but it worked on a PC, if that matters.
Do you have the function name right?
In [21]: import tables
In [22]: tables.is_hdf5_file?
Docstring:
is_hdf5_file(filename)
Determine whether a file is in the HDF5 format.
When successful, it returns a true value if the file is an HDF5
file, false otherwise. If there were problems identifying the file,
an HDF5ExtError is raised.
Type: builtin_function_or_method
In [23]:
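The rename can also be handled defensively by resolving whichever name the installed PyTables exposes (resolve_hdf5_checker is a hypothetical helper; newer releases use snake_case is_hdf5_file, while older ones exposed the camelCase isHDF5File):

```python
def resolve_hdf5_checker(mod):
    """Return the HDF5-check function from a PyTables-like module."""
    # Prefer the current snake_case name, fall back to the old alias.
    fn = getattr(mod, "is_hdf5_file", None) or getattr(mod, "isHDF5File", None)
    if fn is None:
        raise AttributeError("module exposes no HDF5 check function")
    return fn

# usage, assuming PyTables is installed:
#   import tables
#   is_hdf5 = resolve_hdf5_checker(tables)
#   print(is_hdf5(FILENAME))
```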