Can't import modules that are there - python

From the command line I can't import appengine; this might be something with my Python path:
$ python
Python 2.7.1+ (r271:86832, Apr 11 2011, 18:13:53)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from google.appengine.ext import db
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "google/appengine/ext/db/__init__.py", line 98, in <module>
from google.appengine.api import datastore
File "google/appengine/api/datastore.py", line 62, in <module>
from google.appengine.datastore import datastore_query
File "google/appengine/datastore/datastore_query.py", line 64, in <module>
from google.appengine.datastore import datastore_index
File "google/appengine/datastore/datastore_index.py", line 60, in <module>
from google.appengine.api import validation
File "google/appengine/api/validation.py", line 51, in <module>
import yaml
ImportError: No module named yaml
>>>
I don't want duplicate installations; I want to point the Python interpreter to where the missing module is. How do I make the interpreter find the App Engine modules from the command prompt? In the application these imports are working.

Appending:
/usr/local/google_appengine/:/usr/local/google_appengine/lib/:/usr/local/google_appengine/lib/yaml/
to your PYTHONPATH environment variable should do the trick (your SDK location may vary).
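If you prefer not to modify the environment, the same directories can be added from inside the interpreter. A minimal sketch, assuming the SDK location from the answer above (adjust the root to wherever your SDK lives):
# Sketch only: extend sys.path at runtime instead of setting PYTHONPATH.
# The SDK root below is an assumption; adjust it to your installation.
import sys
SDK_ROOT = '/usr/local/google_appengine'
sys.path.extend([
    SDK_ROOT,
    SDK_ROOT + '/lib',
    SDK_ROOT + '/lib/yaml',   # bundled PyYAML (newer SDKs ship lib/yaml-3.10 instead)
])
from google.appengine.ext import db   # should now resolve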

For App Engine 1.9.6, Google has created a new directory "yaml-3.10" that contains the yaml module. I added "[appengine install directory]/google_appengine/lib/yaml-3.10" to PYTHONPATH in my .bashrc file and that solved the problem. (I use Ubuntu 14.04 LTS.)

yaml is not installed in your current setup. The yaml package is included with google_appengine in the lib folder, and the setup.py script in that folder will add the yaml package to your current Python:
cd google_appengine/lib/yaml
sudo python setup.py install
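Afterwards, a quick check from a plain interpreter confirms that yaml resolves (a minimal verification sketch):
# Verify that PyYAML is now importable outside the application.
import yaml
print(yaml.__version__)   # e.g. 3.10 for the copy bundled with recent SDKs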

Related

ImportError: cannot import name '_remove_dead_weakref'

Yesterday I updated my Ubuntu 17.04 to Ubuntu 17.10. The error below appears when I try to run the server in PyCharm (Django project). Any comments?
bash -cl "/home/encuentrum/venv-encuentrum3/bin/python /usr/share/pycharm/helpers/pycharm/django_manage.py check /home/encuentrum/GitLab/encuentrum3/ENCUENTRUM/packers_"
Traceback (most recent call last):
File "/usr/share/pycharm/helpers/pycharm/django_manage.py", line 5, in <module>
from pycharm_run_utils import adjust_django_sys_path
File "/usr/share/pycharm/helpers/pycharm/pycharm_run_utils.py", line 4, in <module>
import imp
File "/home/encuentrum/venv-encuentrum3/lib/python3.6/imp.py", line 19, in <module>
from importlib._bootstrap import _ERR_MSG, _exec, _load, _builtin_from_name
File "/home/encuentrum/venv-encuentrum3/lib/python3.6/importlib/__init__.py", line 57, in <module>
import types
File "/home/encuentrum/venv-encuentrum3/lib/python3.6/types.py", line 171, in <module>
import functools as _functools
File "/home/encuentrum/venv-encuentrum3/lib/python3.6/functools.py", line 23, in <module>
from weakref import WeakKeyDictionary
File "/home/encuentrum/venv-encuentrum3/lib/python3.6/weakref.py", line 12, in <module>
from _weakref import (
ImportError: cannot import name '_remove_dead_weakref'
Maybe you have mixed multiple Python installations: the newer version of weakref is not compatible with the older Python binary. Try removing one of the Python installations (the older one is recommended).
Analysis
In my case, I had installed an older version of Python (3.5.1) before, and then upgraded my Debian installation. The newer Debian upgrades its Python 3.5 to 3.5.3, which has _remove_dead_weakref in the _weakref module of its Python binary.
When I type $ where python3.5, I get
/usr/local/bin/python3.5
/usr/local/bin/python3.5
/usr/bin/python3.5
The /usr/local/bin/python3.5 is my own older installation, and /usr/bin/python3.5 is the official Debian Python 3.5.
When I updated my Python 3.5 installation via apt-get, apt-get executed python3.5 -E -S /usr/lib/python3.5/py_compile.py $files (the post-install script in the deb package), which triggered the weakref issue. Here is my log:
Setting up python3.5-minimal (3.5.3-1+deb9u1) ...
Traceback (most recent call last):
File "/usr/lib/python3.5/py_compile.py", line 6, in <module>
import importlib._bootstrap_external
File "/usr/lib/python3.5/importlib/__init__.py", line 57, in <module>
import types
File "/usr/lib/python3.5/types.py", line 166, in <module>
import functools as _functools
File "/usr/lib/python3.5/functools.py", line 23, in <module>
from weakref import WeakKeyDictionary
File "/usr/lib/python3.5/weakref.py", line 12, in <module>
from _weakref import (
ImportError: cannot import name '_remove_dead_weakref'
I tested Python 3.5.1 and Python 3.5.3 with the same import; here is the comparison.
Official Python 3.5.3 from apt-get
Python 3.5.3 (default, Sep 27 2018, 17:25:39)
[GCC 6.3.0 20170516] on linux
>>> from _weakref import _remove_dead_weakref
>>>
My own Python 3.5.1 installation
Python 3.5.1 (default, Apr 23 2016, 16:40:21)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> from _weakref import _remove_dead_weakref
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name '_remove_dead_weakref'
>>>
So I confirmed that the python3.5 in /usr/local/bin/ cannot import _remove_dead_weakref.
But which python did apt-get use in the post-installation script? Let's check:
$ which python3.5
/usr/local/bin/python3.5
So that is why: the post-installation script used my custom Python installation together with the newer Python library (/usr/lib/python3.5/weakref.py).
Fix it!
As I said, disable the older version of Python:
sudo mv /usr/local/bin/python3.5 /usr/local/bin/python3.5.bak
Test
$ which python3.5
/usr/bin/python3.5
Adding to @Comzyh's answer, this is indeed due to mixed Python versions when an upgrade happens for any reason. A quick fix is to remove your venv Python binary, i.e. rm <path-to-your-env>/bin/python, and copy your system Python binary into your venv, e.g. cp /usr/bin/python <path-to-your-env>/bin/python. This will fix the weakref error.
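Before removing or copying anything, a small diagnostic sketch run from the failing interpreter can confirm the binary/stdlib mismatch described above:
# Diagnostic sketch: check whether the running binary matches the standard library.
import sys
print(sys.executable)    # which python binary is actually running
print(sys.version)       # the version that binary reports
import _weakref
# False here means the binary predates the installed weakref.py and will hit the ImportError.
print(hasattr(_weakref, '_remove_dead_weakref'))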

AttributeError: module 'functools' has no attribute 'wraps'

I am trying to test third-party code with Anaconda 4.2 / Python 3.5. When I execute the tests I get the following exception:
Traceback (most recent call last):
File "pyspark/sql/tests.py", line 25, in <module>
import subprocess
File "/home/user/anaconda3/lib/python3.5/subprocess.py", line 364, in <module>
import signal
File "/home/user/anaconda3/lib/python3.5/signal.py", line 3, in <module>
from functools import wraps as _wraps
File "/home/user/anaconda3/lib/python3.5/functools.py", line 22, in <module>
from types import MappingProxyType
File "/home/user/Spark/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/types.py", line 22, in <module>
import calendar
File "/home/user/anaconda3/lib/python3.5/calendar.py", line 10, in <module>
import locale as _locale
File "/home/user/anaconda3/lib/python3.5/locale.py", line 108, in <module>
@functools.wraps(_localeconv)
AttributeError: module 'functools' has no attribute 'wraps'
Normally I would assume some module is shadowing built-in modules but as far as I can tell this is not the issue:
I logged the module path (functools.__file__) from the tests and it yields the expected path. Also, there is nothing strange in the path shown in the exception.
To exclude possible module corruption I tested a completely new Anaconda installation.
When I execute the tests with the same configuration and path from an IPython shell (%run pyspark/sql/tests.py), the problem disappears.
functools.wraps can be imported in a shell started in the same directory and with the same configuration.
When I replace the Python 3 environment with a Python 2 environment, the problem disappears.
The problem cannot be reproduced with an environment created using virtualenv.
With a different version of the same project I get:
Traceback (most recent call last):
File "pyspark/sql/tests.py", line 25, in <module>
import pydoc
File "/home/user/anaconda3/lib/python3.5/pydoc.py", line 55, in <module>
import importlib._bootstrap
File "/home/user/anaconda3/lib/python3.5/importlib/__init__.py", line 57, in <module>
import types
File "/home/user/Spark/spark-1.6.3-bin-hadoop2.6/python/pyspark/sql/types.py", line 22, in <module>
import calendar
File "/home/user/anaconda3/lib/python3.5/calendar.py", line 10, in <module>
import locale as _locale
File "/home/user/anaconda3/lib/python3.5/locale.py", line 19, in <module>
import functools
File "/home/user/anaconda3/lib/python3.5/functools.py", line 22, in <module>
from types import MappingProxyType
ImportError: cannot import name 'MappingProxyType'
Is there something obvious I missed here?
Edit:
Dockerfile which can be used to reproduce the problem:
FROM debian:latest
RUN apt-get update
RUN apt-get install -y wget bzip2
RUN wget https://repo.continuum.io/archive/Anaconda3-4.2.0-Linux-x86_64.sh
RUN bash Anaconda3-4.2.0-Linux-x86_64.sh -b -p /anaconda3
RUN wget ftp://ftp.piotrkosoft.net/pub/mirrors/ftp.apache.org/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.7.tgz
RUN tar xf spark-2.1.0-bin-hadoop2.7.tgz
ENV PATH /anaconda3/bin:$PATH
ENV SPARK_HOME /spark-2.1.0-bin-hadoop2.7
ENV PYTHONPATH $PYTHONPATH:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$SPARK_HOME/python
WORKDIR /spark-2.1.0-bin-hadoop2.7
RUN python python/pyspark/sql/tests.py
I suspect this is happening because the functools module of Python 3 has the following import: from types import MappingProxyType, and instead of picking up this module from ${CONDA_PREFIX}/lib/python3.5/types.py, it tries to import the module from inside the sql directory: ${SPARK_HOME}/python/pyspark/sql/types.py. The functools module of Python 2 does not have this import and hence does not throw the error.
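A quick way to check for this shadowing (a small diagnostic sketch, run the same way the failing tests are run) is to print where the types module actually resolves from:
# Diagnostic sketch: if types.__file__ points inside pyspark/sql rather than the
# standard library, the stdlib module is being shadowed. Note that when a script
# is run directly, its own directory becomes sys.path[0] and is searched first.
import sys
import types
print(types.__file__)
print(sys.path[0])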
A workaround is to import the required types module first and then invoke the script. As a proof of concept:
(root) ~/condaexpts$ PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$SPARK_HOME/python python
Python 3.5.2 |Anaconda 4.2.0 (64-bit)| (default, Jul 2 2016, 17:53:06)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import types
>>> import os
>>> sqltests=os.environ['SPARK_HOME'] + '/python/pyspark/sql/tests.py'
>>> exec(open(sqltests).read())
.....Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/01/30 05:59:43 WARN SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
17/01/30 05:59:44 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
...
----------------------------------------------------------------------
Ran 128 tests in 372.565s
Also note that there is nothing special about conda. One can see the same thing in a normal virtualenv (with python3):
~/condaexpts$ virtualenv -p python3 venv
Running virtualenv with interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in venv/bin/python3
Also creating executable in venv/bin/python
Installing setuptools, pip...done.
~/condaexpts$ source venv/bin/activate
(venv)~/condaexpts$ python --version
Python 3.4.3
(venv)~/condaexpts$ python $WORKDIR/python/pyspark/sql/tests.py
Traceback (most recent call last):
File "/home/ubuntu/condaexpts/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/tests.py", line 26, in <module>
import pydoc
File "/usr/lib/python3.4/pydoc.py", line 59, in <module>
import importlib._bootstrap
File "/home/ubuntu/condaexpts/venv/lib/python3.4/importlib/__init__.py", line 40, in <module>
import types
File "/home/ubuntu/condaexpts/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/types.py", line 22, in <module>
import calendar
File "/usr/lib/python3.4/calendar.py", line 10, in <module>
import locale as _locale
File "/home/ubuntu/condaexpts/venv/lib/python3.4/locale.py", line 20, in <module>
import functools
File "/home/ubuntu/condaexpts/venv/lib/python3.4/functools.py", line 22, in <module>
from types import MappingProxyType
ImportError: cannot import name 'MappingProxyType'

Getting an error when importing mechanize branch for python3

I have installed the mechanize library for Python 3.
https://github.com/adevore/mechanize/tree/python3
But when I import it, I get this error.
Python 3.3.3 (default, Dec 30 2013, 16:15:14)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import mechanize
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/Username/.virtualenvs/python3/lib/python3.3/site-packages/mechanize-0.2.6.dev_20140305-py3.3.egg/mechanize/__init__.py", line 122, in <module>
File "/Users/Username/.virtualenvs/python3/lib/python3.3/site-packages/mechanize-0.2.6.dev_20140305-py3.3.egg/mechanize/_mechanize.py", line 15, in <module>
File "/Users/Username/.virtualenvs/python3/lib/python3.3/site-packages/mechanize-0.2.6.dev_20140305-py3.3.egg/mechanize/_html.py", line 16, in <module>
ImportError: cannot import name _sgmllib_copy
But, I'm sure that mechanize is installed in the same virtualenv directory.
$ pip freeze
## FIXME: could not find svn URL in dependency_links for this package:
mechanize==0.2.6.dev-20140305
pyquery==1.2.8
Warning: cannot find svn location for mechanize==0.2.6.dev-20140305
I'm not used to working in the terminal, so I don't know how to fix this problem.
Could anyone please help me solve this problem?
Thank you in advance!
The git repository you referred to uses imports incorrectly. The mechanize._html module imports _sgmllib_copy expecting to get mechanize._sgmllib_copy, but that style of implicit relative import was deprecated by PEP 328 and removed in Python 3. It should be using explicit relative imports instead, e.g. from . import _sgmllib_copy.
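For reference, a minimal sketch of the two import styles, using a hypothetical module inside the mechanize package purely for illustration:
# Hypothetical module inside the 'mechanize' package, for illustration only.
# Implicit relative import (Python 2 only): on Python 3 this looks for a
# top-level module named _sgmllib_copy and raises ImportError.
# import _sgmllib_copy as sgmllib
# Explicit relative import (PEP 328, works on Python 2.5+ and Python 3):
# pull the sibling module from the containing package.
from . import _sgmllib_copy as sgmllib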
https://github.com/adevore/mechanize/tree/python3
This branch doesn't contain _sgmllib_copy.py at all. I took the file from the master branch (it needs print smth changed to print(smth)). But I still don't get how the import should be used. The _html.py module (located in the mechanize folder) uses
from . import _sgmllib_copy as sgmllib
Is this wrong? But from . import _beautifulsoup seems to be working.

Why can't I run the Redhawk HelloWorld from Python?

I have Redhawk 1.9 loaded on a 32-bit CentOS 5 virtual machine. I am trying to run the Redhawk HelloWorld component described here: http://redhawksdr.github.io/Documentation/mainch3.html. I am able to launch and start the component in the Eclipse sandbox, but I cannot run it from Python. I get the following error:
Python 2.7.2 (default, Feb 27 2012, 16:40:29)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from ossie.utils import sb
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/redhawk/core/lib/python/ossie/utils/sb/__init__.py", line 115, in <module>
from domainless import *
File "/usr/local/redhawk/core/lib/python/ossie/utils/sb/domainless.py", line 102, in <module>
from omniORB import CORBA, any
ImportError: No module named omniORB
>>>
Any ideas why it will not work?
You'll probably have to set your PYTHONPATH manually to include the appropriate directory. It looks like Eclipse is doing that for you.
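If you'd rather not edit the environment, the path can also be added at runtime. A minimal sketch, where the omniORB location is a placeholder you would replace with the actual directory on your system:
# Sketch only: add the directory containing the omniORB bindings to sys.path.
# The path below is a placeholder; locate omniORB on your system first and use
# the directory that contains the omniORB package.
import sys
sys.path.append('/path/to/site-packages-with-omniORB')
from omniORB import CORBA, any   # should now succeed
from ossie.utils import sb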
Try running your example as root. If it runs successfully as root, then you have a permission problem on your directories or files. How do you identify the files that have the incorrect permissions? I ran the following command:
strace -o test.out python -c "from ossie.utils import sb"
This writes the output to test.out. Look for the string "denied", which will identify the file with the incorrect permissions.
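If strace is not available, a rough Python alternative (a sketch; the REDHAWK install path below is an assumption taken from the traceback) is to walk the OSSIE Python tree and report files the current user cannot read:
# Sketch: report unreadable files under the REDHAWK Python library tree.
import os
root = '/usr/local/redhawk/core/lib/python'
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        path = os.path.join(dirpath, name)
        if not os.access(path, os.R_OK):
            print('No read permission: ' + path)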

Error when running dropbox.py

test@SERVER:~/source/dropbox/.dropbox-dist$ ./dropbox.py
Traceback (most recent call last):
File "./dropbox.py", line 39, in <module>
import urllib
File "/usr/lib/python2.6/urllib.py", line 30, in <module>
from urlparse import urljoin as basejoin
File "/usr/lib/python2.6/urlparse.py", line 84, in <module>
from collections import namedtuple
ImportError: cannot import name namedtuple
dropbox.py has 755 permissions. On the system I have two versions of Python, 2 and 2.6. Running python2 dropbox.py or python2.6 dropbox.py gives the same error.
Here is the dropbox.py file from the Dropbox website.
Update per comment:
test@SERVER:~/source/dropbox/.dropbox-dist$ python2.6
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from collections import namedtuple
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name namedtuple
>>>
Looks like there's another module in your Python path named collections (probably collections.py, but it could also be a folder named collections with an __init__.py in it) that's preventing Python 2.6's collections module from being imported. It could be something in the directory that's current when you invoke Python -- check there first. Otherwise try python2.6 -c 'import sys; print sys.path' to see where Python's looking for modules.
@kindall was right. The .dropbox-dist folder contains collections.so, which interferes with the dropbox.py script. Move the script to a different folder. Here is one way:
mkdir cli
mv dropbox.py cli/dropbox.py
cd cli
./dropbox.py --help
Can you try to do
python -c 'import collections; print collections.__file__'
and ensure that the path is /usr/lib/python2.6/collections.pyc. If not, then you have another module in your PYTHONPATH that replaces this one. If it is, then try again with:
python -c 'import collections; print dir(collections)'
and show us the result.
Normally, namedtuple was introduced in 2.6, so...
