I have Flask set up on my Raspberry Pi 4 Model B, following this tutorial.
OS = Ubuntu Server 20.04.2 LTS
Python = 3.8
Please keep in mind that I am using a virtual env for my Flask application, as shown in the tutorial, and the Flask application itself is running absolutely fine.
Now I have installed Adafruit_DHT in the same venv and tried the following code in one of the endpoints:
import Adafruit_DHT
humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT22, 24)
to which I get the following error:
File "/usr/local/lib/python3.8/dist-packages/Adafruit_DHT/common.py", line 81, in read
return platform.read(sensor, pin)
File "/usr/local/lib/python3.8/dist-packages/Adafruit_DHT/Raspberry_Pi_2.py", line 34, in read
raise RuntimeError('Error accessing GPIO.')
RuntimeError: Error accessing GPIO.
So after that, I created a simple Python script, say z.py, and put the above code in it. Then I activated the same Flask venv using
source venv1/bin/activate
and ran the script using
python z.py
Again I got the same error. But if I run the above command with sudo,
sudo python z.py
then the script executed perfectly fine and I got the following response:
87.0999984741211 29.399999618530273
So now the question arises: how do I use the Adafruit_DHT package inside the Flask app with sudo permissions?
I don't think giving 777 permissions to the www-data group would be the right choice, nor would running the Flask app as root be a great idea.
I have tried installing the Adafruit_DHT package globally with sudo, but I still have to execute z.py with sudo.
So what is the correct way to do this?
I believe the package will be trying to access the device /dev/gpiomem (possibly /dev/gpiochip0 or /dev/gpiochip1).
I think the neatest way to address this would be to have those devices owned by a group other than root and to give that group permission to access them, e.g.
sudo su
groupadd gpio
chgrp gpio /dev/gpio*
chmod g+rw /dev/gpio*
Then I'd go ahead and add your user to that group (by default this is ubuntu, but you may have created another user):
usermod -a -G gpio ubuntu
You've now created a group called "gpio" that has permission to access your Pi's GPIO, and added your user to that group.
Please note, I have not tested this.
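As a quick sanity check (also untested, and assuming the library really does go through one of the devices above), you could verify from the unprivileged venv that the device is now accessible before wiring the sensor read back into Flask. Note that the new group membership only takes effect in a fresh login session:
import os

# Hypothetical check: confirm the current (non-root) user can open the GPIO
# devices after the group change, without involving Adafruit_DHT at all.
for dev in ("/dev/gpiomem", "/dev/gpiochip0", "/dev/gpiochip1"):
    if os.path.exists(dev):
        ok = os.access(dev, os.R_OK | os.W_OK)
        print(dev, "accessible" if ok else "still not accessible")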
I'm attempting to write an Azure Function which converts HTML input to PDF and either writes it to a blob and/or returns the PDF to the client. I'm using the pdfkit Python library, which requires the wkhtmltopdf executable to be available.
To test this locally on my Windows machine, I installed the Windows version of wkhtmltopdf, and this works completely fine.
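For reference, a minimal sketch of the kind of pdfkit call involved (the install path here is only an example, and the configuration argument can be dropped entirely if the binary is already on PATH):
import pdfkit

# Point pdfkit at the wkhtmltopdf binary. On Windows this is an explicit install
# path; on Linux it would be wherever the package manager put it (e.g. /usr/bin/wkhtmltopdf).
config = pdfkit.configuration(
    wkhtmltopdf=r"C:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe"
)

html = "<h1>Hello</h1><p>Rendered to PDF.</p>"

# With the output path set to False, from_string returns the PDF as bytes,
# which is handy for returning it to the client or uploading it to a blob.
pdf_bytes = pdfkit.from_string(html, False, configuration=config)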
When I deployed this function to a Linux App Service on Azure, I could only execute the function successfully after running the sudo command through the Kudu tools to install wkhtmltopdf on the App Service:
sudo apt-get install wkhtmltopdf
I'm also aware that I can write this as a startup script on the App Service itself.
My question is: is there something I can do on my local Windows machine so I can deploy the Azure function along with the Linux version of wkhtmltopdf directly from VS Code, without having to execute another script on the App Service itself?
Setting the commands below in the App configuration will work (for example, running the apt-get install from above as a post-build command).
Thanks to @pamelafox for the comments.
Commands
PRE_BUILD_COMMAND or POST_BUILD_COMMAND
The following process is applied for each build.
Run custom command or script if specified by PRE_BUILD_COMMAND or PRE_BUILD_SCRIPT_PATH.
Create python virtual environment if specified by VIRTUALENV_NAME.
Run python -m pip install --cache-dir /usr/local/share/pip-cache --prefer-binary -r requirements.txt if requirements.txt exists in the root of repo or specified by CUSTOM_REQUIREMENTSTXT_PATH.
Run python setup.py install if setup.py exists.
Run python package commands and determine python package wheel.
If manage.py is found in the root of the repo manage.py collectstatic is run. However, if DISABLE_COLLECTSTATIC is set to true this step is skipped.
Compress virtual environment folder if specified by compress_virtualenv property key.
Run custom command or script if specified by POST_BUILD_COMMAND or POST_BUILD_SCRIPT_PATH.
Build Conda environment and Python Jupyter Notebook
The following process is applied for each build.
Run custom command or script if specified by PRE_BUILD_COMMAND or PRE_BUILD_SCRIPT_PATH.
Set up the Conda virtual environment: conda env create --file $envFile.
If requirements.txt exists in the root of the repo or is specified by CUSTOM_REQUIREMENTSTXT_PATH, activate the environment with conda activate $environmentPrefix and run pip install --no-cache-dir -r requirements.txt.
Run custom command or script if specified by POST_BUILD_COMMAND or POST_BUILD_SCRIPT_PATH.
Package manager
The latest version of pip is used to install dependencies.
Run
The following process is applied to determine how to start an app.
If the user has specified a start script, run it.
Otherwise, find a WSGI module and run it with gunicorn.
Look for and run a directory containing a wsgi.py file (for Django).
Look for the following files in the root of the repo and an app class within them (for Flask and other WSGI frameworks); see the sketch after this list.
application.py
app.py
index.py
server.py
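As an illustration of that last detection step, a minimal hypothetical app.py exposing a module-level WSGI app object is all the build needs to find in order to start gunicorn against it:
# app.py -- the build looks for a module-level WSGI object (commonly named "app"
# or "application") in one of the files listed above.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from App Service"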
Gunicorn multiple workers support
To enable running gunicorn with a multiple-worker strategy, fully utilize the cores, improve performance, and prevent potential timeouts/blocking from sync workers, add the environment variable PYTHON_ENABLE_GUNICORN_MULTIWORKERS=true to the app settings.
In Azure Web Apps the version of the Python runtime which runs your app is determined by the value of LinuxFxVersion in your site config. See ../base_images.md for how to modify this.
References taken from
Python runtime on App Service
I need to set up a Jetson Nano device so that a Python script is launched every time an Internet connection is available.
So, referring to this question, I did the following:
I created the 'run_when_connection_available' script:
#!/bin/sh
# create a dummy folder to check script execution
mkdir /home/user_name/dummy_folder_00
# kill previous instances of the system
pkill python3
# move to folder with python script and launch it
cd /home/user_name/projects/folder
/usr/bin/python3 launcher.py --arg01 --arg02 ...
# create another dummy folder to check script execution
mkdir /home/user_name/dummy_folder_01
I made this script executable and I copied it to /etc/network/if-up.d
Now, every time I unplug the Ethernet cable and plug it back in, I can see the dummy folders being created in /home/user_name, but the Python script isn't launched (at least, it doesn't appear in the system monitor). I tried running the command from the script in a terminal, and everything works fine: the Python program starts as expected. Am I doing something wrong?
I'm trying to figure out something similar to you, but not quite the same...
This solution got my Python script running upon internet connection; I can check the logs and everything is working fine:
Raspbian - Running A Script After An Internet Connection Is Established
However, my script uses notify-send to send notifications to my window manager, which I can't seem to get working with systemd; the script works when run in user space, so I assume it's something to do with systemd and Xorg. Hopefully that shouldn't be a problem for you; I hope this solves your issue.
You shouldn't need a bash script in the middle. I got a systemd service to run my Python script by running chmod u+x <file>.py and putting #!/usr/bin/env python3 at the top of the Python file, so that it's executable directly from the .service file, like so:
ExecStart=/path/to/file/file.py
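In other words, the Python file only needs the interpreter line at the top; a hypothetical minimal launcher.py would look like this:
#!/usr/bin/env python3
# launcher.py (hypothetical), made directly executable with `chmod u+x launcher.py`
# so a systemd unit can reference it via ExecStart=/path/to/launcher.py
import sys

def main():
    # the real work goes here; command-line arguments arrive in sys.argv as usual
    print("network is up, starting launcher with args:", sys.argv[1:])

if __name__ == "__main__":
    main()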
OK, I guess it was a matter of permissions. I solved it by running everything as user_name, so I modified the script as follows:
sudo -u user_name /usr/bin/python3 launcher.py --arg01 --arg02 ...
I am running a Python script that collects data inside a virtual environment hosted remotely on a VPS (Debian-based).
My PC crashed and I am trying to get back into the visual logs of the python script.
I know that the script is still running because it saves its data into a CSV file. That CSV is still being written.
If I activate the virtual environment again (with source), I can rerun the script, but it sounds like I would then have two instances of the same script running...
I am not familiar with virtual environments and I cannot find the right way to do this without deactivating and reactivating it. I am running my script on the cheapest OVH VPS I could buy because my computer is clearly not reliable enough to run 24/7.
You might use screen to run your script in a separate terminal session. This will avoid losing the output if the SSH connection gets dropped.
The workflow would be something along these lines (on your host):
# Install screen
$ sudo apt update
$ sudo apt install screen
# Start a screen session
$ screen
# Run your script
$ python myscript.py
If your SSH connection drops, it'll be enough to:
# ssh back into the host from your client
# reattach previous screen session
$ screen -r
For advanced use the official docs are quite comprehensive.
Note: as a more general point, what is explained above is pretty much the basic logic of a terminal multiplexer. You'll be able to achieve the same using tmux.
I have an internal scheduler tool which runs a Python script on a remote server. I am using the configparser module within my script. When I run the script through the tool it gives me the error below.
ImportError: No module named configparser
I don't have access to that remote server so I can't just login to server and install required module.
Is there any way I can install the configparser module by running an installation script on the remote server through the tool? (I can neither download a package onto the remote server nor run any commands; all I can do is run scripts through this tool.) Please let me know if you need more clarification.
How about something like this: create a Python script that runs a small shell snippet to do what you want.
install.py:
import subprocess

# Activate the virtualenv and install the package in a single shell invocation.
# Note: "source" is a bash builtin, so invoke bash rather than sh here.
script = """
source /path/to/venv/bin/activate
pip install AnyPackage
"""
subprocess.call(['bash', '-c', script])
I am assuming you are using a virtualenv. If not, I assume the account that the script-runner uses has sudo access.
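A variation on the same idea, in case the scheduler tool runs your script with the same interpreter that later needs configparser: ask that interpreter to run pip directly instead of going through a shell. This is only a sketch and assumes pip is available on the server and can reach a package index (or a wheel file you have staged somewhere readable):
# install_configparser.py (hypothetical) -- run this once through the scheduler tool.
import subprocess
import sys

# sys.executable is the interpreter the tool used to launch this script, so the
# package lands in the environment that will later import it. Add "--user" if the
# account cannot write to site-packages (and no virtualenv is involved).
subprocess.check_call([sys.executable, "-m", "pip", "install", "configparser"])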
I am working with my Raspberry Pi, writing a CGI Python script that generates a web page to control my GPIO output pins. My script crashes when I try to import RPi.GPIO as GPIO. This is the error I am getting:
File "./coffee.py", line 7, in <module>
import RPi.GPIO as GPIO
RuntimeError: No access to /dev/mem. Try running as root!
My code works perfectly when I run the script with sudo, but when it is run from a URL on my Apache2 server it says that I do not have access to /dev/mem. I have already tried editing the sudoers file with visudo, and that did not work. This is what my sudoers file looks like:
#includedir /etc/sudoers.d
pi ALL=(ALL) NOPASSWD: ALL
www-data ALL=(root) NOPASSWD: /usr/bin/python3 /usr/lib/cgi-bin/coffee.py *
apache2 ALL = (root) NOPASSWD: /usr/lib/cgi-bin/coffee.py
Is there any way that I can run my script as root from a URL call? Can anyone tell me what I am doing wrong?
I found that adding www-data to the gpio user group worked fine:
sudo usermod -aG gpio www-data
You can also add www-data to the memory user group:
sudo usermod -aG kmem www-data
As mentioned, it is a bad idea, but for me it was necessary.
Your problem is that the script is not executed as root. It is executed as the user that apache runs as.
Your Apache process runs as a specific user, probably www-data. You could change the user that Apache runs as; you should be able to find this in /etc/apache2/envvars:
# Since there is no sane way to get the parsed apache2 config in scripts, some
# settings are defined via environment variables and then used in apache2ctl,
# /etc/init.d/apache2, /etc/logrotate.d/apache2, etc.
export APACHE_RUN_USER=www-data
export APACHE_RUN_GROUP=www-data
If you change that to root you should have access. Normally this would be a terrible security hole, but you are doing direct memory access already. Be very careful!
If you are uncomfortable with this, then you need to update your command so it is executed as root (this is a good way, but it requires you to understand what you are doing!). You can do this by altering the way you call it, by wrapping the call in a script which itself changes the user, or by using setuid (this is very similar to the suEXEC approach mentioned earlier). Wrapping it in a script seems the best way to me, as that should allow your entry in sudoers to apply the privileges to only that command, and it doesn't require you to understand the full implications of the setuid approach.
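As an untested sketch of the wrapper idea: the CGI entry point itself can stay unprivileged and sudo only the real script, relying on a sudoers rule like the one you already have for coffee.py (the wrapper filename here is hypothetical):
#!/usr/bin/env python3
# wrapper.py (hypothetical) -- runs as www-data via CGI and escalates only the one
# command that the sudoers entry explicitly allows.
import subprocess

print("Content-Type: text/plain")
print()

result = subprocess.run(
    ["sudo", "/usr/bin/python3", "/usr/lib/cgi-bin/coffee.py"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    print("error:", result.stderr)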