I'm trying to run the package wal-e, which I have installed as the root user with sudo python3 -m pip install wal-e[aws,azure,google,swift].
I can run this command perfectly as the root user: envdir /etc/wal-e.d/env wal-e backup-fetch /var/lib/postgresql/9.6/main LATEST.
However, when I sudo su - postgres and then run envdir /etc/wal-e.d/env wal-e backup-fetch /var/lib/postgresql/9.6/main LATEST, I get this error:
Traceback (most recent call last):
File "/usr/local/bin/wal-e", line 7, in <module>
from wal_e.cmd import main
ImportError: No module named 'wal_e.cmd'
I gave the postgres user full sudo permissions with usermod -aG sudo postgres. Also, the wal-e package is installed in the same location.
When I run ls -la on /usr/local/bin/wal-e I get:
-rwxr-xr-x 1 root root 211 Sep 20 14:24 /usr/local/bin/wal-e
I'm on Ubuntu 16.04.3.
How can I run the command as the postgres user just like I can as root?
I had to follow a strict setup process for wal-e in order for the package to function properly.
Essentially, it boiled down to installing all the necessary dependencies on the machine I was working with before installing wal-e and creating the postgres user. If the user was created before all the dependencies were installed, I got permission errors.
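If you run into a similar mismatch, a minimal diagnostic sketch (these commands are my own suggestion, not part of the original setup; the paths are the ones from the question) is to compare the interpreter the wrapper script uses with what the postgres user actually resolves:
# show which interpreter the wal-e wrapper script is bound to
head -1 /usr/local/bin/wal-e
# as postgres, check whether that interpreter can import the wal_e package
sudo -u postgres python3 -c "import sys; print(sys.path)"
sudo -u postgres python3 -c "import wal_e; print(wal_e.__file__)"
If the last command fails, the package was installed for a different interpreter or into a directory the postgres user cannot read.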
I got a new Raspberry Pi, installed Ubuntu on it, and wrote a Python script, but when I run the script using python3 script.py it just can't find the libraries that I installed using pip3 and gives missing-library errors.
But if I run the same script using sudo python script.py, it runs.
I have given script.py permissions using sudo chmod 777 script.py, yet the issue remains.
I even gave folder permissions with sudo chown user user /home/someuser/Desktop, yet the problem remains.
The bigger problem is that when I use a basic IDE like Thonny, I can't run with sudo from the IDE itself, so I have to run the script from a terminal separately, which is a pain.
Here are my file permissions:
-rwxrwxrwx 1 someuser someuser 2528 Dec 19 17:57 script.py
Here are my folder permissions:
drwxr-xr-x 3 someuser someuser 4096 Dec 19 17:56 Desktop
There is no other user on the system except the one I created during Ubuntu setup.
I have installed almost all libraries using sudo pip3 install.
One of the errors I am getting while trying to use the GPIO library:
File "/home/someuser/Desktop/beep.py", line 11, in <module>
GPIO.setup(18, GPIO.OUT)
RuntimeError: Not running on a RPi!
Another error:
File "/usr/lib/python3.8/socket.py", line 231, in __init__
_socket.socket.__init__(self, family, type, proto, fileno)
PermissionError: [Errno 1] Operation not permitted
Is there a way to avoid using sudo every time and still easily work with the installed libraries?
Here is some additional info. My sys.path:
/usr/lib/python38.zip
/usr/lib/python3.8
/usr/lib/python3.8/lib-dynload
/home/someuser/.local/lib/python3.8/site-packages
/usr/local/lib/python3.8/dist-packages
/usr/lib/python3/dist-packages
someuser@pi4:~$ which python3
/usr/bin/python3
Did you use sudo to install the libraries? If so, that's why they are not available to your current user.
Install packages with pip install --user <package_name> to install them for the current user,
or
use a virtualenv.
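For example, a minimal virtualenv sketch (the directory name env and the package RPi.GPIO are just examples for illustration; on Ubuntu you may first need sudo apt install python3-venv):
python3 -m venv ~/env            # create a per-user environment
source ~/env/bin/activate        # activate it in the current shell
pip install RPi.GPIO             # installs into ~/env, no sudo needed
python script.py                 # runs with the venv's interpreter and packages
Note that the virtualenv only fixes the import problem; operations that genuinely need elevated privileges, such as the raw-socket PermissionError above, can still require root.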
I am trying to run a Python script in EC2 user-data. The EC2 instance I am launching uses a custom AMI image that I prepared. I installed the two packages that I need, boto3 and pyodbc, by executing this command (notice that I am installing them as root):
sudo yum -y install boto3 pyodbc
My user-data script:
#!/bin/bash
set -e -x
# set AWS region
echo "export AWS_DEFAULT_REGION=${region}" >> /etc/profile
source /etc/profile
# copy python script from the s3 bucket
aws s3 cp s3://${bucket_name}/ /home/ec2-user --recursive
/usr/bin/python3 /home/ec2-user/my_script.py
After launching a new EC2 instance (using my custom AMI) and checking /var/log/cloud-init-output.log, I see this error:
+ python3 /home/ec2-user/main_rigs_stopgap.py
Traceback (most recent call last):
File "/home/ec2-user/my_script.py", line 1, in <module>
import boto3
ModuleNotFoundError: No module named 'boto3'
util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/site-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
Any suggestions, please?
To make sure that you install modules for the correct version of Python, use the built-in pip module of the Python version you are using:
/usr/bin/python3 -m pip install module_name
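Applied to this case, a sketch of the user-data install step might look like this (package names taken from the question; it assumes pip is already available on the AMI, and the s3 copy step from the question is omitted):
#!/bin/bash
# install the modules for the same interpreter that will run the script
/usr/bin/python3 -m pip install boto3 pyodbc
# then run the script with that interpreter
/usr/bin/python3 /home/ec2-user/my_script.py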
Yesterday I finished the tutorial for the Django framework.
In this tutorial I created a simple application, and now I'd like to move my app to a remote internet server. I have such a server; I'm connected over SSH with PuTTY, and when I type python, I see: Python 2.7.12 (default, Nov 19)
But if I try to run this command: python setup.py install
I get:
Traceback (most recent call last):
File "setup.py", line 5, in <module>
from setuptools import find_packages, setup
ImportError: No module named setuptools
and it's strange because after this:
>>> import django
>>> django.VERSION
I see the version: (1, 10, 5, u'final', 0)
And I copied the folder with my app over FTP.
So, can you tell me, step by step, how I can run my application?
Thanks for your answers. I've tried this command:
sudo apt-get install python-setuptools
and I received this message:
sudo: effective uid is not 0, is /usr/bin/sudo on a file system with
the 'nosuid' option set or an NFS file system without root privileges?
Any ideas?
If you're on a Debian-based system, try running:
sudo apt-get install python-setuptools
to install the Python setuptools package.
If you're not on a Debian-based system, look at the package installation guide here.
You're missing setuptools. Install it and the error will be gone.
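If apt itself is unusable (as the nosuid sudo error above suggests), one possible fallback, assuming pip is available for that interpreter, is to install setuptools into your own user site-packages, which needs no root access:
# hypothetical workaround, not from the original answer
python -m pip install --user setuptools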
I'm using the following Travis CI configuration:
language: python
env:
- DJANGO=1.4
- DJANGO=1.5
- DJANGO=1.6
python:
- "2.6"
- "2.7"
install:
- sudo pip install Django==$DJANGO
- sudo pip install .
script:
- cd autotest
- python manage.py test ...
But every time the tests are executed, I run into the following issue:
$ python manage.py test ...
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
The command "python manage.py test ..." exited with 1.
As I said on IRC:
You are running pip install as root. More than that, sudo will reset the environment before finding and running pip. This means your pip install does not go into the virtualenv that Travis provides, but into the global site-packages.
When you run python manage.py test you are using the Python binary provided by the virtualenv. However, the virtualenv does not look in the system site-packages, so it cannot see the Django you installed there.
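Concretely, a sketch of the install section with sudo dropped, so pip installs into the active virtualenv:
install:
  - pip install Django==$DJANGO
  - pip install .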
I'm running Ubuntu Server in VirtualBox. I am trying to install virtualenv to start learning Flask and Bottle.
Some details of my setup:
vks@UbSrVb:~$ cat /etc/os-release
NAME="Ubuntu"
VERSION="12.04.2 LTS, Precise Pangolin"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu precise (12.04.2 LTS)"
VERSION_ID="12.04"
vks@UbSrVb:~$ python --version
Python 2.7.3
vks@UbSrVb:~$ echo $VIRTUALENVWRAPPER_PYTHON
/usr/bin/python
vks@UbSrVb:~$ echo $VIRTUALENV_PYTHON
vks@UbSrVb:~$
When I boot my virtual machine, I get the following error on my console:
/usr/bin/python: No module named virtualenvwrapper
virtualenvwrapper.sh: There was a problem running the initialization hooks.
If Python could not import the module virtualenvwrapper.hook_loader,
check that virtualenv has been installed for
VIRTUALENVWRAPPER_PYTHON=/usr/bin/python and that PATH is
set properly.
When I try to initialize a virtualenv, I get the following errors:
vks@UbSrVb:~/dropbox/venv$ virtualenv try1
New python executable in try1/bin/python3.2
Also creating executable in try1/bin/python
Traceback (most recent call last):
File "/usr/local/bin/virtualenv", line 9, in <module>
load_entry_point('virtualenv==1.9.1', 'console_scripts', 'virtualenv')()
File "/usr/local/lib/python3.2/dist-packages/virtualenv.py", line 979, in main
no_pip=options.no_pip)
File "/usr/local/lib/python3.2/dist-packages/virtualenv.py", line 1081, in create_environment
site_packages=site_packages, clear=clear))
File "/usr/local/lib/python3.2/dist-packages/virtualenv.py", line 1499, in install_python
os.symlink(py_executable_base, full_pth)
OSError: [Errno 30] Read-only file system
vks@UbSrVb:~/dropbox/venv$ ls
try1
vks@UbSrVb:~/dropbox/venv$ ls try1/
bin include lib
vks@UbSrVb:~/dropbox/venv$
My .bashrc entries:
export WORKON_HOME='~/dropbox/venv/'
source '/usr/local/bin/virtualenvwrapper.sh'
Q1 - As per the error at boot, how do I ensure that virtualenv is installed for VIRTUALENVWRAPPER_PYTHON=/usr/bin/python and that PATH is set properly?
Q2 - Why do I get the same "Read-only file system" error even with sudo?
I have tried installing virtualenv using pip and then apt-get, just to see if either works.
Try setting your WORKON_HOME variable to another path (~/.virtualenvs, for example) and see if that works; maybe the problem is with that shared directory. Are you using Windows? If you are, try installing ntfs-3g; see https://askubuntu.com/questions/70281/why-does-my-ntfs-partition-mount-as-read-only
Also, in my profile configuration file I like to detect first whether virtualenvwrapper is installed:
if which virtualenvwrapper.sh &> /dev/null; then
    WORKON_HOME=$HOME/.virtualenvs
    # path to virtualenvwrapper, in my case
    source /usr/local/share/python/virtualenvwrapper.sh
fi
I had the problem that my pip was for a different version of Python than the one I wanted to use.
$ python -V
Python 2.7.5+
$ pip -V
pip 1.5.4 from /usr/local/lib/python3.3/dist-packages (python 3.3)
So when I used pip to install virtualenv and virtualenvwrapper, the new packages were put in Python 3.3's dist-packages, so of course my Python 2.7 couldn't find them!
To fix this, I had to use the appropriate version of pip; in my case it was pip2.
$ pip2 -V
pip 1.5.4 from /usr/local/lib/python2.7/dist-packages (python 2.7)
So make sure you are using the appropriate version of pip.
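A related sketch, assuming the target interpreter has the pip module available: invoking pip through that interpreter removes any ambiguity about which site-packages the install goes to.
# install for the Python 2.7 interpreter specifically
python2.7 -m pip install virtualenv virtualenvwrapper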