I am trying to install the geopandas package for Python. I have tried every installation process outlined by GeoPandas here (literally every option they offer): https://geopandas.org/en/stable/getting_started/install.html. I finally thought I got it working by creating a new environment (the package appears to be installed); however, when I try to import a shapefile I get an error. I have also searched all the Stack Exchange answers on installing GeoPandas, and none of the solutions work for me. See the code and error below. Can anyone help me install geopandas successfully?
I am running Python version 3.9.13, on Windows, using Spyder.
CODE:
import geopandas as gpd
# Set filepath
fp = "filepath.shp"
# Read file using gpd.read_file()
data = gpd.read_file(fp)
ERROR:
fp = "filepath.shp"
data = gpd.read_file(fp)
Traceback (most recent call last):
Input In [3] in <cell line: 1>
data = gpd.read_file(fp)
File ~\Anaconda3\lib\site-packages\geopandas\io\file.py:81 in read_file
if hasattr(features.crs, "to_dict"):
File ~\Anaconda3\lib\site-packages\fiona\collection.py:214 in crs
self._crs = self.session.get_crs()
File fiona/ogrext.pyx:634 in fiona.ogrext.Session.get_crs
File fiona/_err.pyx:259 in fiona._err.exc_wrap_pointer
CPLE_OpenFailedError: Unable to open EPSG support file gcs.csv. Try setting the GDAL_DATA environment variable to point to the directory containing EPSG csv files.
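The last line of the traceback itself suggests a workaround: point the GDAL_DATA environment variable at GDAL's data directory before geopandas/fiona are imported. A minimal sketch, assuming a typical Anaconda layout (the exact path is hypothetical and depends on your install):

```python
import os

# Hypothetical location -- check where your Anaconda install keeps GDAL's
# data files (often under Library\share\gdal in Windows conda installs).
os.environ["GDAL_DATA"] = r"C:\Users\me\Anaconda3\Library\share\gdal"

# Set the variable BEFORE importing geopandas, since fiona reads it at
# import time:
# import geopandas as gpd
# data = gpd.read_file("filepath.shp")
```

If the directory does not exist on your machine, search your Anaconda folder for a `gdal` data directory and use that path instead.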
Environment:
macOS Big Sur v 11.6.1
Python 3.7.7
pyarrow==5.0.0 (from pip freeze)
From the terminal:
>>> import pyarrow
>>> pyarrow
<module 'pyarrow' from '/Users/garyb/Develop/DS/tools-pay-data-pipeline/env/lib/python3.7/site-packages/pyarrow/__init__.py'>
So I confirmed that I have pyarrow installed. But when I try to write a Dask dataframe to parquet I get:
def make_parquet_file(filepath):
    parquet_path = f'{PARQUET_DIR}/{company}_{table}_{batch}.parquet'
    df.to_parquet(parquet_path, engine='pyarrow')
ModuleNotFoundError: No module named pyarrow
The exception detail:
~/Develop/DS/research-dask-parquet/env/lib/python3.7/site-packages/dask/dataframe/io/parquet/core.py in get_engine(engine)
970 elif engine in ("pyarrow", "arrow", "pyarrow-legacy", "pyarrow-dataset"):
971
--> 972 pa = import_required("pyarrow", "`pyarrow` not installed")
973 pa_version = parse_version(pa.__version__)
974
This function, by contrast, works. It reads a much smaller CSV file, just to confirm that df.to_parquet itself works:
def make_parquet_file():
    csv_file = f'{CSV_DATA_DIR}/diabetes.csv'
    parquet_file = f'{PARQUET_DIR}/diabetes.parquet'
    # Just to prove I can read the csv file
    p_df = pd.read_csv(csv_file)
    print(p_df.shape)
    d_df = dd.read_csv(csv_file)
    d_df.to_parquet(parquet_file)
Is it looking in the right place for the package? I'm stuck.
It does seem that dask and pure python are using different environments.
In the first example the path is:
~/Develop/DS/tools-pay-data-pipeline/env/lib/python3.7
In the traceback the path is:
~/Develop/DS/research-dask-parquet/env/lib/python3.7
So a quick fix is to install pyarrow in the second environment. Another fix is to install the packages on workers (this might help).
A more robust fix is to use environment files.
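For example, a conda environment file can pin the dependencies so both projects resolve the same packages; a minimal sketch (the name and version pins are placeholders, adjust to taste):

```yaml
# environment.yml -- hypothetical example
name: research-dask-parquet
dependencies:
  - python=3.7
  - dask
  - pyarrow=5.0.0
```

Recreating the environment with `conda env create -f environment.yml` then gives each project an identical, reproducible set of packages.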
OK, problem solved, maybe. During development and experimentation I use Jupyter to test and debug. Later all the functions get moved into scripts, which can then be imported into any notebook that needs them. At that point the notebook's role in this project is demo and documentation, i.e. a better alternative to the command line for a demo.
In this case I'm still in experiment mode, so the problem was the Jupyter kernel. I had inadvertently recycled the kernel name, and the virtual environments in the two projects also have the same name, "env". See a pattern here? Laziness bit me.
I deleted the kernel that was being used and created a new one with a unique name. PyArrow was then pulled from the correct virtual environment and worked as expected.
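A quick way to confirm which environment a kernel (or script) is actually using is to inspect sys.executable from inside it:

```python
import sys

# The interpreter the current kernel/script is running under; if this shows
# .../tools-pay-data-pipeline/env/... while the traceback shows
# .../research-dask-parquet/env/..., the kernel is tied to the wrong venv.
print(sys.executable)

# sys.path shows which site-packages directories imports will search:
print(sys.path)
```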
I use python-pcl and want to load a pcd file.
My code is:
cloud_blob = pcl.load('./Downloads/table_scene_lms400.pcd')
This code works fine in the shell, but in PyCharm it always fails with this error:
[pcl::PCDReader::readHeader] Could not find file './Downloads/table_scene_lms400.pcd'
I don't know why.
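One likely cause worth checking: a relative path like './Downloads/…' is resolved against the current working directory, and PyCharm typically runs scripts from the project folder rather than your home directory, while a shell started in your home directory resolves it there. A small sketch of the check (the file name is taken from the question):

```python
import os

# Where is the relative path being resolved from?
print(os.getcwd())

# An absolute path removes the ambiguity between shell and PyCharm:
pcd_path = os.path.expanduser("~/Downloads/table_scene_lms400.pcd")
print(os.path.isfile(pcd_path))  # should be True before calling pcl.load

# cloud_blob = pcl.load(pcd_path)
```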
First install pypcd
pip install pypcd
Then run:
import pypcd
pc = pypcd.PointCloud.from_path('table_scene_lms400.pcd')
Reference taken from here
I am trying to run train.py in object_detection from the repository below:
https://github.com/tensorflow/models/tree/master/research/object_detection
However, the following error occurs.
ModuleNotFoundError: No module named 'object_detection'
So I tried to solve the problem by writing the following code.
import sys
sys.path.append('/home/user/Documents/imgmlreport/inception/models/research/object_detection')
from object_detection.builders import dataset_builder
This did not solve the problem.
The directory structure is shown below.
~/object_detection/train.py
~/object_detection/builders/dataset_bulider.py
And here is the full error message:
/home/user/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated.
In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "train.py", line 52, in <module>
import trainer
File "/home/user/Documents/imgmlreport/inception/models/research/object_detection/trainer.py", line 26, in <module>
from object_detection.builders import optimizer_builder
ModuleNotFoundError: No module named 'object_detection'
How can I import the modules?
Try installing the packaged TensorFlow Object Detection API:
pip install tensorflow-object-detection-api
The cause of this error is that the object_detection library is not installed, so one solution that can work is running the command below inside models/research:
sudo python setup.py install
If that solution does not work, please execute the commands below one by one in the directory models/research:
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
sudo python setup.py install
I hope this will work. I also faced the same problem while creating a model from export_inference_graph.py, and this fixed it for me.
You need to export the environmental variables every time you open a new terminal in that environment.
Please note that there are back quotes around each pwd in the command, as these might not show up clearly below. The back quote is on the same key as the tilde (without pressing Shift, on a US keyboard).
From tensorflow/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
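The export above is the shell-level equivalent of extending sys.path inside Python. Note that `from object_detection.builders import …` needs the directory that contains the object_detection package (models/research) on the path, not the package directory itself, which is why appending …/research/object_detection in the question does not fix the import. A sketch using the paths from the question:

```python
import sys

# Append the PARENT of the object_detection package, plus slim, mirroring
# the PYTHONPATH export above:
research = '/home/user/Documents/imgmlreport/inception/models/research'
sys.path.append(research)
sys.path.append(research + '/slim')

# Now 'from object_detection.builders import dataset_builder' can locate
# the package (assuming it lives under the research directory).
```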
Try this:
python setup.py build
python setup.py install
There are a number of modules in the object_detection folder, and I have created a setup.py in the parent directory (the research folder) to make all of them importable.
from setuptools import find_packages
from setuptools import setup

REQUIRED_PACKAGES = ['Pillow>=1.0', 'Matplotlib>=2.1', 'Cython>=0.28.1']

setup(
    name='object_detection',
    version='0.1',
    install_requires=REQUIRED_PACKAGES,
    include_package_data=True,
    packages=[p for p in find_packages() if p.startswith('object_detection')],
    description='Tensorflow Object Detection Library',
)
You did have sys.path.append() before you imported object_detection, so I am surprised that you are facing this error! Please check that the path you used in sys.path.append() is correct.
The only obvious explanation for the error is that the module's path has not been added properly.
Besides the various ways mentioned here, here is a way in which you can add the "object_detection" path permanently to the PYTHONPATH variable.
If you are using a Linux system, here is how you would go about it:
Go to the Home directory. Press Ctrl + H to show hidden files. You will see a file called ".bashrc". Open this file using a code editor (I used Visual Studio).
In the last line of .bashrc file, add the line:
export PYTHONPATH=/your/module/path:/your/other/module/path:your/someother/module/path
Then press "save" in the code editor. If ".bashrc" is read-only for your editor, it will throw a pop-up saying so; the pop-up will also offer a "Try with sudo" option. Hit that button and you are good to go.
All your modules are now permanently added to the PYTHONPATH. This means that you need not run sys.path.append every time you open your terminal and start a session!
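To verify the change took effect, open a new terminal (so .bashrc is re-read) and check both the variable and Python's view of it; a minimal sketch:

```python
import os
import sys

# PYTHONPATH as the shell exported it (empty string if not set):
print(os.environ.get("PYTHONPATH", ""))

# Entries from PYTHONPATH are folded into sys.path at interpreter startup:
print([p for p in sys.path if "object_detection" in p])
```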
Following these steps, the import ran with no error for me. Try this; I hope it helps.
And finally, if you've followed all the steps here and are at your wit's end, make sure the file that you're running (the one with your source code in it) isn't itself named object_detection.py; that name would shadow the package you're trying to import.
Certainly I've never done anything like this that led me to add an embarrassing answer on Stack Overflow...
I had to do:
sudo pip3 install -e .
sudo python3 setup.py install
System:
OS: Ubuntu 16.04, Anaconda (I guess this is why I need to use pip3 and python3 even though I made the virtual environment with Python 3.8)
I have Anaconda and installed Fiona fine and Shapely seemingly fine. I've also tried uninstalling Shapely and reinstalling it with pip.
Simple code:
import fiona
import shapely

dirVar = "C:\\Users\\me\\Desktop\\geocode\\"

with fiona.open(dirVar + "Regions.shp") as fiona_collection:
    shapefile_record = fiona_collection.next()
    shape = shapely.geometry.asShape(shapefile_record['geometry'])  # GET ERROR HERE
    point = shapely.geometry.Point(32.398516, -39.754028)  # longitude, latitude
    if shape.contains(point):
        print "Found shape for point."
AttributeError: 'module' object has no attribute 'geometry'
When I look at the methods of shapely from Wing IDE I see only:
ctypes_declarations
ftools
geos
I would think I should see geometry if it was installed correctly?
Any thoughts?
You could use one of these:
import shapely.geometry
or
from shapely import geometry
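The underlying rule is that importing a package does not automatically import its submodules: `import shapely` loads only the package's top level, so `shapely.geometry` is undefined until it is imported explicitly. The same behaviour can be reproduced with the standard library's xml package:

```python
import xml

# The top-level package does not pull in its submodules:
print(hasattr(xml, "etree"))  # False -- same reason shapely.geometry fails

import xml.etree.ElementTree

# After the explicit import, the submodule is reachable as an attribute:
print(hasattr(xml, "etree"))  # True
```

Either form in the answer above triggers that explicit submodule import, which is why both fix the AttributeError.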