How to see if redis is installed from within a python shell

From the shell prompt I can easily check whether I have redis on my machine by doing:
PROD WEB ➜ /home redis
~redis
How would I do the same from within the python shell? Do I have to use a subprocess call?

You can install the redis Python module and use it. For example:
import redis
redis.Redis('localhost', .....)
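If all you want is to check from Python whether redis is installed or reachable, here is a minimal sketch; the binary name redis-server and the default port 6379 are assumptions, adjust them to your setup:
import shutil
import redis

# Look for a redis binary on PATH (roughly what typing `redis` at the prompt checks).
print(shutil.which('redis-server'))

# Or check whether a redis server is actually reachable on the default port.
try:
    redis.Redis(host='localhost', port=6379).ping()
    print('redis is up')
except redis.exceptions.ConnectionError:
    print('redis is not reachable')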

Related

How do you debug python code with kubernetes and skaffold?

I am currently running a Django app under Python 3 on Kubernetes via skaffold dev. I have hot reload working for the Python source code. Is it currently possible to do interactive debugging with Python on Kubernetes?
For example,
def index(request):
    import pdb; pdb.set_trace()
    return render(request, 'index.html', {})
Usually, outside a container, hitting the endpoint will drop me in the (pdb) shell.
In the current setup, I have set stdin and tty to true in the Deployment file. The code does stop at the breakpoint but it doesn't give me access to the (pdb) shell.
There is a kubectl command that allows you to attach to a running container in a pod:
kubectl attach <pod-name> -c <container-name> [-n namespace] -i -t
-i (default:false) Pass stdin to the container
-t (default:false) Stdin is a TTY
It should allow you to interact with the debugger in the container.
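For example, with hypothetical pod and container names:
kubectl attach myapp-pod -c django -n default -i -t
With both flags set (and stdin/tty enabled on the container, as you already have), your terminal's input is forwarded into the container, so the (pdb) prompt becomes interactive.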
You may also need to adjust your pod to use a debugger, so the following article might be helpful:
How to use PDB inside a docker container.
There is also the telepresence tool, which offers a different approach to application debugging:
Using telepresence allows you to use custom tools, such as a debugger and IDE, for a local service and provides the service full access to ConfigMap, secrets, and the services running on the remote cluster.
Use the --swap-deployment option to swap an existing deployment with the Telepresence proxy. Swapping allows you to run a service locally and connect to the remote Kubernetes cluster. The services in the remote cluster can now access the locally running instance.
It might be worth looking into Rookout, which allows in-prod live debugging of Python on Kubernetes pods without restarts or redeploys. You lose path-forcing etc., but you gain loads of flexibility for effectively simulating breakpoint-type stack traces on the fly.
This doesn't use Skaffold, but you can attach the VSCode debugger to any running Python pod with an open source project I wrote.
There is some setup involved to install it on your cluster, but after installation you can debug any pod with one command:
robusta playbooks trigger python_debugger name=myapp namespace=default
You can take a look at okteto/okteto. There's a good tutorial which explains how you can develop and debug directly on Kubernetes.

Install python module on remote server

I have an internal scheduler tool which runs a Python script on a remote server. I am using the configparser module within my script. When I run this script through the tool it gives me the error below.
ImportError: No module named configparser
I don't have access to that remote server, so I can't just log in to the server and install the required module.
Is there any way I can install the configparser module by running an installation script on the remote server through the tool? (I can neither download packages on the remote server nor run any commands; all I can do is run scripts through this tool.) Please let me know if you need more clarification.
How about something like this: create a Python script that calls a shell script to do what you want.
install.py:
import subprocess

# Activate the virtualenv and run pip inside the same shell invocation.
# Use "." instead of "source" so this also works when /bin/sh is not bash.
script = """
. /path/to/venv/bin/activate
pip install AnyPackage
"""
subprocess.call(['sh', '-c', script])
I am assuming you are using a virtualenv. If not, I assume the account that the script-runner uses has sudo access.
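If the tool can only run Python scripts, another option is to shell out to pip through the interpreter the tool itself uses; a minimal sketch, assuming pip is available for that interpreter and the account is allowed to install packages (add --user otherwise):
import subprocess
import sys

# Install configparser into whatever environment this interpreter runs in.
# Assumes pip is available for this interpreter; append '--user' if the
# account cannot write to the system site-packages.
subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'configparser'])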

How to make sudo execute in current python virtual environment?

I have a Django website set up and configured in a Python virtual environment (venv) on Ubuntu and all is working fine. Now, in order to run my server on port 80 I need to use "sudo", which does not execute in the context of the virtual environment, raising errors (e.g. no module named django ...).
Is there a way to get "sudo" to execute in the context of the python virtual environment?!
No, you don't need to do this. You shouldn't be trying to run the development server on port 80; if you're setting up a production environment, use a proper server.
As @DanielRoseman said, you should not be using the Django development server in production.
But if you need to run the development server on port 80, you have to reference the virtual environment's Python executable directly.
sudo ../bin/python manage.py runserver localhost:80
This should be the solution, even though I really don't recommend doing this. If you need sudo within Python you are probably on the wrong track.
This did the trick (the activate script has to be sourced with "." for it to take effect in that shell):
$ sudo -- sh -c '. ./venv-bin-path/activate; gunicorn <params> -b 0.0.0.0:80'

Docker from python

Please be gentle, I am new to docker.
I'm trying to run a docker container from within Python but am running into some trouble due to environment variables not being set.
For example I run
import os
os.popen('docker-machine start default').read()
os.popen('eval "$(docker-machine env default)"').read()
which will start the machine but does not set the environment variables, so I cannot then issue a docker run command.
Ideally it would be great if I did not need to run eval "$(docker-machine env default)" at all. I'm not really sure why I can't set these variables to something static every time I start the machine.
So I am trying to set them with the bash command above, but Python just returns an empty string, and I then get an error if I try to do docker run my_container.
Error:
Post http:///var/run/docker.sock/v1.20/containers/create: dial unix /var/run/docker.sock: no such file or directory.
* Are you trying to connect to a TLS-enabled daemon without TLS?
* Is your docker daemon up and running?
Each os.popen call spawns its own shell, so the variables exported by eval "$(docker-machine env default)" are lost as soon as that shell exits. I'd suggest running these two steps to start the machine in a bash script first. Then you can have that same bash script call your Python script, which can access Docker with docker-py:
import docker
import os

# DOCKER_HOST is exported by the bash script that ran `eval "$(docker-machine env default)"`
docker_host = os.environ['DOCKER_HOST']
client = docker.Client(base_url=docker_host)
client.create_container(...)
...
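If you'd rather stay entirely in Python, another approach is to parse the output of docker-machine env yourself and copy it into os.environ before creating the client. A rough sketch, assuming a machine named default, docker-machine on PATH, and a recent docker SDK for Python:
import os
import subprocess
import docker

# Ask docker-machine what it would export and copy it into this process's environment.
env_output = subprocess.check_output(
    ['docker-machine', 'env', 'default'], universal_newlines=True)
for line in env_output.splitlines():
    if line.startswith('export '):
        key, _, value = line[len('export '):].partition('=')
        os.environ[key] = value.strip('"')

# The client now picks up DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH.
client = docker.from_env()
print(client.containers.list())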

Python script cronjob with external Library

I'd like to run a Python script with an external library (beautifulsoup) on a hosted webserver.
What type of webserver do I need? I heard something about CGI.
How do I install the external library on the server and use it in my script?
Cronjob is only for PHP, isn't it? Is there a "cronjob" for Python?
cron is an OS scheduler for Unix-like systems. It allows you to run any command at set intervals.
Do you need a webserver, or do you just want to routinely execute your script? If you just want to routinely execute your script there is no need for a webserver; cron will suffice.
I believe all you need is SSH access and sudo access to a server. (Amazon EC2 offers free micro instances for a year.)
After that you can install pip, virtualenv, and beautifulsoup. You can then register your command (which just executes your script) with cron and you are all set.
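As an illustration, a crontab entry that runs the script with the virtualenv's interpreter might look like this (paths, script name, and schedule are placeholders):
# m h dom mon dow  command
0 * * * * /home/user/venv/bin/python /home/user/scraper.py
You add the entry by running crontab -e on the server.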
