I'm trying to deploy an Ansible playbook to spin up some new OpenStack instances and keep getting the error
"shade is required for this module"
Shade is definitely installed, as are all its dependencies.
I've tried adding
localhost ansible_python_interpreter="/usr/bin/env python"
to the ansible hosts file as suggested here, but this did not work.
https://groups.google.com/forum/#!topic/ansible-project/rvqccvDLLcQ
Any advice on solving this would be most appreciated.
In my hosts file I have the following:
[local]
127.0.0.1 ansible_connection=local ansible_python_interpreter="/usr/bin/python"
So far I haven't been using venv and my playbooks work fine.
Adding ansible_connection=local should make your playbook execute on the Ansible control machine itself (I guess that's what you are trying to do).
Then when I launch a playbook, I start with the following:
- hosts: local
connection: local
Not sure if that's the problem. If this does not work, you should give us more information (at least an extract of your playbook).
Good luck!
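A quick way to check for the interpreter mismatch behind this error (the /usr/bin/python path below is only an example; substitute whatever interpreter your hosts file points Ansible at):

```shell
# Show the interpreter Ansible itself runs under (printed in the
# "python version" line of the output):
ansible --version
# Try importing shade with the interpreter from your hosts file; if this
# fails, Ansible will report "shade is required for this module" even
# though pip installed shade under some other interpreter:
/usr/bin/python -c "import shade; print(shade.__file__)"
```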
Try installing Ansible using pip; for whatever reason, the Python environment of the Ansible package provided by my distro isn't the same one that the shade module (installed using pip) went into.
On ArchLinux
sudo pacman -R ansible
sudo pip install ansible
Related
I'm running vscode-server to develop on a remote machine via ssh. This machine has no connection to the internet and runs Python 3.6.5.
I would like to use pylint in vscode for the linting. Problem is that I cannot install it the normal way, since I don't have an internet connection.
What I tried so far:
Use pip download pylint, tar the resulting folder, move it via scp and install it on the remote machine. This didn't work since my local machine has a different Python version from the remote (local: 3.10.x and remote: 3.6.5).
Use the Install on remote: ssh button in the vscode marketplace. This succeeds but when I write code, a message pops up that says: Linter pylint is not installed. When I click on install, it just tries to execute pip install pylint on the remote, which will obviously fail...
Any suggestions on how to proceed here?
This didn't work since my local machine has a different python version from the remote (local: 3.10.x and remote: 3.6.5).
I don't know if it's ultimately going to work, but you can explicitly download the latest pylint compatible with Python 3.6.5; it's pylint 2.13.9 afaik, so pip download "pylint==2.13.9".
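Building on that: pip download can fetch wheels for a target Python other than the one running pip, which sidesteps the version mismatch entirely (the directory name below is just an example):

```shell
# Download pylint 2.13.9 wheels for the remote's Python 3.6, even though
# the local interpreter is 3.10. --only-binary is required whenever you
# override the target environment with --python-version.
pip download "pylint==2.13.9" \
    --python-version 36 \
    --only-binary=:all: \
    -d ./pylint-offline
# Copy the directory over and install without touching the network:
# scp -r pylint-offline user@remote:
# ssh user@remote 'pip install --no-index --find-links pylint-offline pylint'
```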
Problem is that I cannot install it the normal way, since I don't have an internet connection.
I think you can try uploading pylint from your local machine to the server using the SFTP extension.
This extension can sync your local directory with a remote server directory.
I'm trying to work with ansible, winrm, virtualenv and Jenkins...
Currently, I have installed Ansible with yum via epel-release.
Jenkins has only basic configuration for now.
I have then created a virtualenv inside Jenkins home named $HOME/ansible-winrm. Then inside it, I have installed winrm via pip.
What I'm trying to do is:
- create a simple job on Jenkins with only a shell script calling ansible-playbook, and it should use the winrm library installed inside my local virtualenv;
- it should be as transparent as possible.
P.S. It seems that the Python binary is hard-coded inside the ansible-playbook script.
What are your best practices to solve this issue?
The best way to do it is to install pywinrm with pip into your user site-packages (the --user option):
Ex: pip install --user pywinrm
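After installing, it may be worth confirming that the interpreter hard-coded in ansible-playbook's shebang can actually see the module (the /usr/bin/python path below is an example; use whatever the shebang shows):

```shell
# The shebang pins which Python ansible-playbook runs under:
head -1 "$(command -v ansible-playbook)"
# pywinrm must be importable by that same interpreter:
/usr/bin/python -c "import winrm; print(winrm.__file__)"
```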
I have a RHEL host with Docker installed; it has the default Python 2.7. My Python scripts need a few more modules, which
I can't install due to lack of sudo access; moreover, I don't want to break the default Python, which the host needs to function.
Now I am trying to get Python in a Docker container where I can add a few modules and do the needful.
Issue - the Docker-equipped RHEL host is not connected to the internet and can't be connected either.
The laptop I have doesn't have Docker, and I can't install it here (no admin access) to create the Docker image and copy it to the RHEL host.
I was hoping that if a Docker image with Python can be downloaded from the internet, I might be able to use it as is!
Any pointers in an appropriate direction would be appreciated.
What have I done: tried searching for Python images, and been through the Docker documentation on creating images.
Apologies if the above question sounds silly; I am getting better with Docker as time goes on :)
If your environment is restricted enough that you can't use sudo to install packages, you won't be able to use Docker: if you can run any docker run command at all you can trivially get unrestricted root access on the host.
My python scripts needs a bit more modules which I can't install due to lack of sudo access & moreover, I dont want to screw up with the default Py which is needed for host to function.
That sounds like a perfect use for a virtual environment: it gives you an isolated local package tree that you can install into as an unprivileged user and doesn't interfere with the system Python. For Python 2 you need a separate tool for it, with a couple of steps to install:
export PYTHONUSERBASE=$HOME
pip install --user virtualenv
~/bin/virtualenv vpy
. vpy/bin/activate
pip install ... # installs into vpy/lib/python2.7/site-packages
You can create a Docker image on any standalone machine and push the final image to a Docker registry (Docker Hub). Then on your laptop you can pull that image and start working :)
Below are some key commands that will be required for the same.
To create an image, you will need to write a Dockerfile that installs all the packages.
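For reference, a minimal Dockerfile along those lines might look like this (the base image and module names are just examples, not from the question):

```dockerfile
# Start from an image that already ships Python 2.7
FROM python:2.7-slim
# Install the extra modules the scripts need (example names)
RUN pip install requests paramiko
# Copy the scripts into the image and run one of them by default
COPY . /app
WORKDIR /app
CMD ["python", "script.py"]
```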
Or you can also run sudo docker run -it ubuntu:16.04, then install Python and other packages as required.
then sudo docker commit container_id name
sudo docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
sudo docker push IMAGE_NAME
Then you pull this image in your laptop and start working.
You can refer to this link for more docker commands https://github.com/akasranjan005/docker-k8s/blob/master/docker/basic-commands.md
Hope this helps. Thanks
I wish to connect to a Linux machine over SSH (from my code) and run some code that uses Python libraries that are not installed on the remote machine. What would be the best way to do so?
Using a call like this:
cat main.py | ssh user@server python -
will run main.py on the server, but won't help me with the dependencies. Is there a way to somehow 'compile' the relevant libraries and have them sent over just for running my code?
I wish to avoid installing the libraries on the remote machine if possible.
Try virtualenv:
pip install virtualenv
then use
virtualenv venv
to create a separate Python environment in the current path (in the folder venv).
Instead of installing multiple packages into the default Python path, virtualenv itself is the only package you need installed.
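If installing anything on the remote machine really is off the table, one sketch (assuming the dependencies are pure Python and both ends run Python 3.5+) is to bundle the code and its libraries into a single zipapp archive and ship that instead; the app/ layout and module names here are hypothetical:

```shell
# Project layout: app/__main__.py is the entry point.
mkdir -p app
cat > app/__main__.py <<'EOF'
print("hello from zipapp")
EOF
# Vendor pure-Python dependencies into the same tree, e.g.:
# pip install --target app requests
# Pack everything into one runnable archive:
python3 -m zipapp app -o app.pyz
python3 app.pyz    # prints "hello from zipapp"
# Ship it and run remotely without installing anything:
# scp app.pyz user@server: && ssh user@server python3 app.pyz
```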
I've loaded uWSGI v 1.9.20, built from source. I'm getting this error, but how do I tell which plugin is needed?
!!!!!!!!!!!!!! WARNING !!!!!!!!!!!!!!
no request plugin is loaded, you will not be able to manage requests.
you may need to install the package for your language of choice, or simply load
it with --plugin.
!!!!!!!!!!! END OF WARNING !!!!!!!!!!
Which plugin should be loaded?
I had this problem and was stuck for hours.
Python2
My issue is different from the answer listed: make sure you have plugins = python in your uWSGI .ini file, and install the uWSGI Python plugin:
sudo apt-get install uwsgi-plugin-python
Python3
If you're using Python3, use the same approach and do:
sudo apt-get install uwsgi-plugin-python3
then add plugins = python3 inside your uwsgi .ini file.
After I did the above my application worked. Obviously this is for python projects, but a similar approach is required for other projects.
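For context, a minimal .ini using the plugin might look like this (the paths and module names are placeholders, not from the answer):

```ini
[uwsgi]
plugins = python3
; everything below is an example app definition
chdir = /srv/myapp
module = myapp.wsgi:application
virtualenv = /srv/myapp/venv
socket = /run/uwsgi/app.sock
```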
It might be easiest to install uWSGI through pip instead of the package manager of the OS you're using; the pip package is usually more up to date than the ones in OS package managers:
sudo pip install uwsgi
This solved it for me anyway.
For using multiple Python versions on the same server, I would advise taking a look at virtualenv:
https://virtualenv.pypa.io/en/latest/
If you are using Python 3:
Install the plugin:
sudo apt install uwsgi-plugin-python3
Add the uwsgi python3 plugin line to your site config (.ini file):
plugins = python3
And if you want to list your uwsgi Python plugins:
ls -l /usr/lib/uwsgi/plugins/ | grep python
KEEP IN MIND that the python3 plugin is different from the python2 one.
If you do not load a Python plugin, uwsgi says:
!!!!!!!!!!!!!! WARNING !!!!!!!!!!!!!!
no request plugin is loaded, you will not be able to manage requests.
you may need to install the package for your language of choice, or simply load it with --plugin.
!!!!!!!!!!! END OF WARNING !!!!!!!!!!
If you use the python2 plugin while your venv is Python 3, it says:
ImportError: No module named site
Just stumbled upon this error message and wasted a couple of hours, yet in my case the cause was different from everything mentioned in other answers already.
Suppose you just installed a local uWSGI version via pip into your own virtualenv (e.g. as described here).
Suppose you are now trying to run your uWSGI server as root (because you want to serve the app as www-data user, for example). This is how you would do it, right?
. venv/bin/activate
sudo uwsgi --ini your-app.ini
Wrong! Even though your local uwsgi is on your PATH after you activate your environment, that PATH is not passed into the sudo command, so you are launching the system uwsgi rather than your local one. This can be a source of endless confusion, as it was in my case.
So, the solution in my case was to simply specify the full path:
sudo /full/path/to/venv/bin/uwsgi --ini your-app.ini
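An alternative to typing the full path, if you prefer to keep relying on the activated environment, is to forward PATH through sudo explicitly (sudo's env_reset normally strips it):

```shell
. venv/bin/activate
# env re-exports the caller's PATH, so sudo resolves the venv's uwsgi first:
sudo env "PATH=$PATH" uwsgi --ini your-app.ini
```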
I had a similar issue, but this solved it (by the way, I use macOS with both Python 2 and 3 installed, and I wanted to use Python 3):
Open a terminal and find the python3 location by typing:
which python3
Copy the full path and assign it to the plugins option in the .ini file.
I hope it helps!
If you've followed all the Python plugin installation steps and uwsgi --plugin-list still fails to list 0: python as one of the plugins, try restarting your computer. My uwsgi instance ran as a service (from Bash, use service --status-all to see running services), and the updated config settings were probably only loaded on service restart.
In my case, it was because the header of my configuration file inside /etc/uwsgi/apps-available/ was not [uwsgi] but something else (the name of the app).