Running Xvfb from Python

I am working on a server with no X server and I'm trying to run a script that uses the spynner module, which requires an X server. For this purpose, I want to run Xvfb.
I can run the script by calling it via xvfb-run, i.e.:
xvfb-run python2.6 try.py
This works with no problem. However, I need to invoke Xvfb from within the script. For this purpose, I tried using subprocess as follows:
xvfb = subprocess.Popen(['Xvfb', ':99'])
After adding this piece of code to the beginning of the script, and trying to run the script as
python2.6 try.py
I get the message:
: cannot connect to X server
Is there something else I need to do? Thanks in advance.

For future visitors, it is worth mentioning that PyVirtualDisplay offers an abstraction over Xvfb to make it easy to use from Python.
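A minimal sketch of that approach, assuming the pyvirtualdisplay package and the Xvfb binary are installed:

from pyvirtualdisplay import Display

display = Display(visible=0, size=(1024, 768))  # wraps Xvfb and sets DISPLAY for you
display.start()
# ... run the spynner code that needs an X server here ...
display.stop()  # shut the virtual display down again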

You'll need to add:
import os
os.environ["DISPLAY"] = ":99"
so that when the script opens its connection to the X server, it will find the Xvfb instance you've started.
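Putting the two pieces together, the start of try.py could look roughly like this (the display number is arbitrary, and the sleep is a crude stand-in for properly waiting until Xvfb accepts connections):

import os
import subprocess
import time

xvfb = subprocess.Popen(['Xvfb', ':99'])  # start the virtual framebuffer
os.environ['DISPLAY'] = ':99'             # point X clients at it
time.sleep(1)                             # give Xvfb a moment to come up
# ... spynner code goes here ...
xvfb.terminate()                          # clean up the Xvfb process when done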

Related

How to make your Python web app always running on an Ubuntu machine

I have a web app that I deployed to a machine running Ubuntu 20.
To run the app, I have to open an SSH connection to the Ubuntu machine and then run these commands:
cd mywebapp
python3 app.py
This works successfully, but once I close the SSH console, reboot the machine, or anything else happens, the app stops and I have to repeat these commands.
I tried adding it as a cron job to be run after a machine reboot, but that does not work.
I posted a question about it at the following link: run python app after server restart does not work using crontab
Nothing has worked for me, and I have to make sure that this web app is always running, because it has to keep sending push notifications to mobile devices.
Can anyone please advise? I have been searching and trying for a long time.
I'm not an expert in this, but two solutions come to mind:
1- Using systemd:
systemd can be responsible for keeping services up.
You can write a custom unit for your app and configure it so that it always stays up (see the sketch after this list).
This tutorial may be useful: writing unit
2- Using Docker:
Once your app is containerized, you can configure it to come back up on failure or after a reboot (also sketched below).
Read about it here
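For illustration, a systemd unit along these lines could keep the app up (the paths, user, and file name are assumptions, not taken from the question); it would go in /etc/systemd/system/mywebapp.service:

[Unit]
Description=My web app
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/mywebapp
ExecStart=/usr/bin/python3 app.py
Restart=always

[Install]
WantedBy=multi-user.target

After sudo systemctl daemon-reload and sudo systemctl enable --now mywebapp, the app starts at boot and is restarted if it ever exits. For the Docker route, the restart policy is passed when the container is started (the image name here is hypothetical):

docker run -d --restart unless-stopped --name mywebapp mywebapp-image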
What if you put the Python invocation inside a bash script and run that as a daemon?
Your bash script could look like this (test.sh):
#!/bin/sh
cd desired/directory
python3 app.py
and you can run the bash script with nohup like this:
nohup ./test.sh 0<&- &>/dev/null &
You could refer to this if you want to store the output of nohup.
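For instance, to keep nohup's output instead of discarding it, redirect it to a file (the log path is just an example):

nohup ./test.sh > /home/user/test.log 2>&1 &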

Error with Python subprocess when running Flask app using nginx + WSGI

I have developed a Python web server using Flask, and some of the endpoints make use of the subprocess module to call different executables. In development, using the Flask debug server, everything works fine. However, when running the server along with nginx + WSGI (on the exact same machine), some subprocess calls fail.
For example, one of the tools I'm using is Microsoft's dotnet, which I installed from my user with sudo apt-get install -y aspnetcore-runtime-5.0 and which is then called from Python with the subprocess module. When I run the server with python3 server.py, it works like a charm. However, when using nginx and WSGI, the subprocess call fails with an exception that says: /bin/sh: 1: dotnet: not found.
I suspect this is due to the command not being accessible to the user and group running the server. I have used this guide as a reference to deploy the app; in the WSGI .ini file I have set uid = javierd and gid = www-data, while in the systemd .service file I have User=javierd, Group=www-data.
I have tried adding the executables' paths to /etc/profile, but it didn't work, and I don't know any other way to fix it. I also find it very surprising that this happens to some executables but not to all, and that it happens to dotnet, for example, which is located at /usr/bin/dotnet and should therefore be accessible to every user. Any idea on how to solve this problem? Furthermore, if somebody could explain to me why this is happening, I would really appreciate it.
Thanks a lot!
OK, finally, after a big headache, I noticed the error, and it was really simple.
In the tutorial I linked, when creating the systemd service file, the following line was included: Environment="PATH=/home/myuser/myfolder/enviroment/bin".
Of course, as this was overriding PATH, there was no way of executing the commands. Once I noticed it, I just removed that line, restarted the service, and it was fixed.
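If you do need an Environment= line (for example to point at a virtualenv), an alternative is to call the executable by its absolute path from Python, so the lookup no longer depends on whatever PATH the service exports; a sketch, assuming dotnet lives at /usr/bin/dotnet as described above:

import subprocess

# Use the absolute path so PATH handling in the service file doesn't matter.
result = subprocess.run(['/usr/bin/dotnet', '--info'],
                        capture_output=True, text=True, check=True)
print(result.stdout)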

How to let a Python script (abc.py) keep executing on AWS even after the connection is lost or the SSH session is terminated?

I was using a Jupyter notebook on an AWS AMI, with port forwarding from Windows using PuTTY. My connection got terminated and all the work of 24 hours is now lost, and I'm unable to recover it. So I used a script instead of a notebook, and the same thing happened. I used to think that the process would keep going even if the shell connection is lost, but the next time I log in, I don't see anything.
I used top and htop to check whether my processes are still running, but they don't show my process. Please help: how can I stop this from happening?
I am using Windows 10 with PuTTY locally and Ubuntu 18 on the AWS AMI.
You can try
nohup python3 myscript.py &
Try tmux:
apt install tmux
I often use it to keep my web app running on the server (not professional, but convenient).
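Typical usage looks something like this (the session name is arbitrary):

tmux new -s myscript     # start a named session
python3 abc.py           # run the script inside it; detach with Ctrl-b then d
tmux attach -t myscript  # reattach later to check on it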
You can use screen.
You can install it with:
sudo apt install screen
You can find more information right here: https://linuxize.com/post/how-to-use-linux-screen/
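Usage is very similar to tmux, for example:

screen -S myscript   # start a named screen session
python3 abc.py       # run the script inside it; detach with Ctrl-a then d
screen -r myscript   # reattach later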

Autostart Python script and run in background with Ubuntu

I'm running Ubuntu server 16.04 and still getting to grips with it. I have a python script that runs in an endless loop, performing a task related to fetching data from an external source.
What I'm trying to do, is make this python script start after (or during) boot and then run in the background.
I've tried editing rc.local but the boot sequence just hangs since the script keeps running.
Any advice would be greatly appreciated.
As one of the comments mentions, you can use cron jobs to start scripts at certain times, such as at startup (as you would like to do). It also would not halt execution like you mentioned happens with rc.local.
The line that you need to add to the crontab is:
@reboot python /home/MyPythonScript.py
Here are a couple of useful tutorials that show you how to do this: http://kvz.io/blog/2007/07/29/schedule-tasks-on-linux-using-crontab/
https://help.ubuntu.com/community/CronHowto
If you would like to do it from Python itself, there is this handy Python library: https://pypi.python.org/pypi/python-crontab/
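For example, with python-crontab the same @reboot entry can be created programmatically (a sketch; the script path is the one from the answer above):

from crontab import CronTab

cron = CronTab(user=True)                                 # current user's crontab
job = cron.new(command='python /home/MyPythonScript.py')  # command to run
job.every_reboot()                                        # run it at @reboot
cron.write()                                              # save the crontab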
tmux is a great utility for keeping sessions running in the background. You can use it for this:
sudo apt-get install tmux
Then add it to your rc.local:
/usr/bin/tmux new-session -d 'python /path/to/your/script'
After boot you can use it as follows:
tmux attach
and your console will be attached to the last session working in the background.

How to debug Python script which is automatically called inside a web application?

I'm developing a Cassandra storage finder for graphite-api.
graphite-api is installed via pip and run via gunicorn, so I can't just call the script with a debugger, but I want to use interactive debugging.
When I import pdb in my storage finder and set a breakpoint, the code will halt there, but how can I then connect to the headless pdb running inside the script?
Or is my approach to this debugging problem the wrong one, and does this have to be done in a completely different way?
pdb gives control over to gunicorn, which is not what you want. Have a look at rpdb or other remote debugging solutions.
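As a sketch, rpdb is used just like pdb, but it listens on a TCP port (127.0.0.1:4444 by default) instead of taking over stdin:

import rpdb
rpdb.set_trace()  # execution blocks here and waits for a client on 127.0.0.1:4444

You can then attach from another shell with, e.g., nc 127.0.0.1 4444 and get a normal pdb prompt while gunicorn keeps serving.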
