I have a script written in Spyder (Python 3.8) on Linux. The script runs fine in the Spyder console, but when I call it from the Linux terminal it seems that it doesn't see the modules I import in the script. In the terminal I run: python3 /zhome/c9/f/144817/Desktop/ChargersDaniel.py but here is the error I get:
Traceback (most recent call last):
  File "/zhome/c9/f/144817/Desktop/ChargersDaniel.py", line 9, in <module>
    import GPyOpt
ModuleNotFoundError: No module named 'GPyOpt'
where GPyOpt is the very first library I import in the first lines of my script.
It looks like, for some reason, python3 doesn't see the installed libraries. I have checked the solution here, but that's not my case because I am already calling python3 in my terminal.
Any suggestions?
Thanks in advance.
Maybe check whether GPyOpt is installed for the python3 you run from your terminal:
python3 -c "import GPyOpt"
if [ $? -eq 0 ]
then
echo "GPyOpt is installed!"
fi
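If that import fails, the terminal's python3 is almost certainly a different interpreter (or environment) than the one Spyder uses. A hedged way to compare the two and, if they differ, install the module for the terminal's interpreter (assuming GPyOpt is installed via pip):
python3 -c "import sys; print(sys.executable)"   # interpreter used by the terminal
# run the same command in the Spyder console and compare the two paths
python3 -m pip install GPyOpt                    # install into the terminal's python3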
Related
I'm trying to run a python script at startup on my Raspberry Pi 3B.
My shell script "atboot.sh" looks like this:
#!/bin/bash
cd /home/pi/pythoncodefolder
date >> logs
sudo python3 test.py >> logs
When I try to run it from the command line with sh atboot.sh, I get an import error:
Traceback (most recent call last):
File "test.py", line 5, in <module>
import cv2
ModuleNotFoundError: No module named 'cv2'
But when I run the program directly from the command line with python3 test.py, without the shell script, I get no errors.
Thanks.
The use of sudo is causing it. When you run python3 program.py, you invoke it in your own $USER environment; under sudo the environment is reset, so Python no longer sees the modules installed for your user.
You can either disable env_reset in sudoers (add Defaults !env_reset) or add Defaults env_keep += "PYTHONPATH".
But I would argue that you can do it without sudo in the first place:
#!/bin/bash
cd /home/pi/pythoncodefolder
date >> logs
python3 test.py >> logs
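If you really do need root privileges, a hedged alternative (untested here) is to let sudo keep your environment, or to point it at the exact interpreter that has cv2 installed:
sudo -E python3 test.py >> logs          # -E preserves the caller's environment variables
sudo /usr/bin/python3 test.py >> logs    # or name a specific interpreter explicitly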
Also, that's probably a (not exact) duplicate.
When installing gcloud on macOS, I get this error when I run the install.sh command according to the docs here:
Traceback (most recent call last):
  File "/path_to_unzipped_file/google-cloud-sdk/bin/bootstrapping/install.py", line 8, in <module>
    from __future__ import absolute_import
ImportError: No module named __future__
I poked through and echoed out some values in the install shell script. It is setting the environment variables correctly (pointing to my default Python installation and to the correct location of the gcloud SDK).
If I just enter the python interpreter (using the same default python that the install script points to when running install.py) I can import the module just fine:
>>> from __future__ import absolute_import
>>>
Only other information worth noting is my default python setup is a virtual environment that I create from python 2.7.15 installed through brew. The virtual environment python bin is first in my PATH so python and python2 and python2.7 all invoke the correct binary. I've had no other issues installing packages on this setup so far.
If I echo the final line of the install.sh script that calls the install.py script it shows /path_to_virtualenv/bin/python -S /path_to_unzipped_file/google-cloud-sdk/bin/bootstrapping/install.py which is the correct python. Or am I missing something?
The script uses the -S command-line switch, which disables loading the site module on start-up.
However, it is a custom, dedicated site module installed in the virtualenv that makes a virtualenv work in the first place. As such, the -S switch and virtualenvs are incompatible: with -S set, fundamental imports such as from __future__ break down entirely.
You can either remove the -S switch from the command in install.sh or use a wrapper script to strip it from the command line as you call your real virtualenv Python.
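For the second option, a minimal sketch of such a wrapper, using the placeholder virtualenv path from the question (you would point CLOUDSDK_PYTHON at this wrapper):
#!/bin/bash
# drop any -S switch, then hand the remaining arguments to the real virtualenv python
args=()
for a in "$@"; do
    [ "$a" = "-S" ] || args+=("$a")
done
exec /path_to_virtualenv/bin/python "${args[@]}"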
I had the error below when trying to run gcloud commands.
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/gcloud.py", line 20, in <module>
from __future__ import absolute_import
ImportError: No module named __future__
If you have your virtualenv sourced automatically, you can set the environment variable CLOUDSDK_PYTHON, e.g. set -x CLOUDSDK_PYTHON /usr/bin/python (fish syntax), so that gcloud does not use the virtualenv Python.
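In bash or zsh the equivalent would be (assuming /usr/bin/python is a system Python outside the virtualenv):
export CLOUDSDK_PYTHON=/usr/bin/python
./google-cloud-sdk/install.sh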
In google-cloud-sdk/install.sh, go to the last line and remove the variable $CLOUDSDK_PYTHON_ARGS, as below.
"$CLOUDSDK_PYTHON" $CLOUDSDK_PYTHON_ARGS "${CLOUDSDK_ROOT_DIR}/bin/bootstrapping/install.py" "$#"
"$CLOUDSDK_PYTHON" "${CLOUDSDK_ROOT_DIR}/bin/bootstrapping/install.py" "$#"
I'm currently working in PyCharm with a remote Python interpreter (miniconda3/bin/python).
When I type echo $PATH on the remote server, it prints:
/home/woosung/bin:/home/woosung/.local/bin:/home/woosung/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
I created a project in PyCharm and set the remote Python interpreter to the miniconda3 Python; it works well when I just run some *.py files.
But when I add some os.system() lines, weird things happen.
For instance, in test.py from the PyCharm project:
import os
os.system('echo $PATH')
os.system('python --version')
Output is
ssh://woosung@xxx.xxx.xxx.xxx:xx/home/woosung/miniconda3/bin/python -u /tmp/pycharm_project_203/test.py
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
Python 2.7.12
Process finished with exit code 0
I tried the same commands on the remote server:
woosung@test-pc:~$ echo $PATH
/home/woosung/bin:/home/woosung/.local/bin:/home/woosung/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
woosung@test-pc:~$ python --version
Python 3.6.6 :: Anaconda, Inc.
The PATH and the Python version are totally different! How can I fix this?
I've already tried adding os.system('export PATH="$PATH:$HOME/miniconda3/bin"') to test.py, but it still prints the same $PATH (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games).
EDIT
Thanks to the comment from @Dietrich Epp, I successfully added the interpreter path to the shell $PATH:
os.environ["PATH"] += ":/home/woosung/miniconda3/bin"
But I am stuck on a more basic problem. When I add the path and then run a *.py file that imports a library available only in miniconda3, the shell still gives an ImportError.
For instance, in test.py:
import os
import matplotlib

os.environ["PATH"] += ":/home/woosung/miniconda3/bin"
os.system("python import_test.py")
and import_test.py
import matplotlib
And when I run test.py,
Traceback (most recent call last):
File "import_test.py", line 1, in <module>
import matplotlib
ImportError: No module named matplotlib
It looks like the shell doesn't pick up the modified $PATH properly.
I found the solution. It is not direct, but it is quite simple.
I changed os.system("python import_test.py") to os.system(sys.executable + ' import_test.py').
This makes the shell use the PyCharm remote interpreter (miniconda3) rather than the system Python.
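For reference, a minimal sketch of the fixed test.py (paths as in the question; note that sys must be imported):
import os
import sys

os.environ["PATH"] += ":/home/woosung/miniconda3/bin"
# sys.executable is the full path of the interpreter PyCharm launched (miniconda3),
# so the child process runs under the same Python and sees its packages
os.system(sys.executable + " import_test.py")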
This is the error I get after entering "python --version":
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'python' is not defined
I have tried the Python 2.7 and 3.6 shells as well as my terminal, but cannot seem to figure out what's wrong.
Eventually, I am trying to get pip.
You have to start a normal command line, not the Python shell, if you want the command python --version to work.
If you want to check the version from inside the Python shell, you have to type:
import sys
sys.version_info
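It should print something like the following (the values depend on your installation):
>>> import sys
>>> sys.version_info
sys.version_info(major=3, minor=6, micro=6, releaselevel='final', serial=0)
Since the eventual goal is pip, a hedged way to bootstrap it from a normal shell (not the Python prompt) is python -m ensurepip --upgrade, then python -m pip --version to confirm.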
Code:
sh 'python ./selenium/xy_python/run_tests.py'
Error:
Traceback (most recent call last):
  File "./selenium/xy_python/run_tests.py", line 6, in <module>
    import nose
ImportError: No module named nose
I recommend explicitly activating a Python environment in your Jenkinsfile before you run your script, to ensure you are in an environment that has nose installed.
Please check out virtualenv, tox, or conda for information on how to do so.
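For example, a minimal sketch of such a step, assuming a virtualenv at ./venv that already has nose installed:
sh '''
    . ./venv/bin/activate
    python ./selenium/xy_python/run_tests.py
'''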
Does it run successfully if you start it manually? If yes, then you might have problems with PYTHONPATH. You can use withEnv to set it.
withEnv(['PYTHONPATH=/your/pythonpath']) {
    sh 'python ./selenium/xy_python/run_tests.py'
}