I have created a web app using the Python Flask framework on a Raspberry Pi running Raspbian. I want to control the hardware and trigger some sudo tasks on the Pi through the web.
The Flask-based server runs without sudo, listening on port 8080. When a web client sends a request over HTTP, I want to start a subprocess with sudo privileges (for example, to change GPIO pins, turn on the camera, etc.). What is the best practice for implementing this kind of behavior?
The webserver could ask the client for the sudo password, which could then be used to raise privileges. I would like some pointers on how to achieve this.
Best practice is to never do this kind of thing. If you give sudo access to your Pi from the internet and then execute user input, you are giving everyone on the internet the ability to execute arbitrary commands on your system. I understand that this is probably your pet project, but still, imagine someone getting access to your computer and turning on the camera when you don't expect it.
Create a separate process that does the actual work for you. You can run this process in the background with elevated privileges.
Your web client will just deposit tasks for this process in a queue. This can be as simple as a database to which your Flask app writes a "task request"; the other, privileged process reads this request, performs the action, and updates the database.
This concept isn't new, so there are multiple brokers and task queues that you can use. Celery is popular with Python developers and is easily integrated into your application.
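As a rough illustration of the pattern, here is a minimal Celery-based sketch. The Redis broker URL, the RPi.GPIO calls and the task name are assumptions, and the worker would be started separately (e.g. celery -A tasks worker) under an account that has the required hardware permissions:

# tasks.py - minimal sketch; broker URL, pin handling and names are illustrative
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def set_gpio(pin, value):
    # runs inside the privileged worker process, never in the Flask process
    import RPi.GPIO as GPIO
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, value)

# In the unprivileged Flask view you only enqueue the task:
#     set_gpio.delay(17, 1)

The Flask process never touches the hardware itself; it only records what should happen, and the privileged worker decides whether and how to do it.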
Related
I have a virtual server available which runs Linux, with 8 cores, 32 GB RAM, and 1 TB of storage. It is meant to be a development environment (the same applies to test and prod). This is what I could get from IT. The server can only be accessed via so-called jump servers using PuTTY or direct TCP/IP ports (SSH is a must).
The application I am working on starts several processes via multiprocessing. In every process an asyncio event loop is started, and in some cases an asyncio socket server as well. Basically it is a low-level data streaming and processing application (unfortunately no Kafka or similar technology is available yet). The live application runs forever, with no or limited user interaction (it reads/processes/writes data).
I assume IPython is an option for this, but (and maybe I am wrong) I think it starts a new kernel per client request, whereas I need to start new processes from the main code without user interaction. If so, it could be an option for monitoring the application, gathering data from it, and sending new user commands to the main module, but I am not sure how to run processes and asyncio servers remotely.
I would like to understand how this can be done in the given environment. I do not know where to start or what alternatives there are, and I do not understand IPython properly; its documentation is not obvious to me yet.
Please help me out! Thank you in advance!
After lots of research and learning I arrived at a possible solution in our "sandbox" environment. First, I had to split the problem into several sub-problems:
"remote" development
parallelization
scheduling and executing parallel codes
data sharing between these "engines"
controlling these "engines"
Let's look at these in detail:
Remote development means you write your code on your laptop, but the code must be executed on a remote server. The easy answer is Jupyter Notebook (or an equivalent solution); it has several trade-offs and other solutions are available, but this was the fastest to deploy and use and had the fewest dependencies, maintenance requirements, etc.
parallelization: I had several challenges with the IPython kernel when working with multiprocessing, so every piece of code that must run in parallel will be written in a separate Jupyter Notebook. Within a single notebook I can still use an event loop to get async behaviour.
executing parallel code: there are several options I will use:
ipyparallel - a "workaround" for multiprocessing
papermill - execute notebooks with parameters from the command line (optional)
the %%writefile magic command in Jupyter Notebook - to create importable modules
an OS task scheduler such as cron
async with event loops
Not an option yet: Docker, multiprocessing, multithreading, cloud (AWS, Azure, Google, ...)
data sharing: I selected ZeroMQ; it took time to learn but was simpler and easier than writing everything with pure sockets. There are alternatives, but they come with extra dependencies and some very useful benefits (I will check them later): RabbitMQ, the Redis message broker, etc. The reasons for preferring ZMQ: fast, simple, elegant, and just a library. (Known risk: our IT will prefer RabbitMQ, but that problem comes later :-) )
controlling the engines: now this answer is obvious: a separate Python module (it can be tested as notebook code, but it is easy to turn into a pure .py file and schedule it). This one can communicate with the other modules via ZMQ sockets: healthchecks, sending new parameters, commands, etc. (a minimal sketch follows below).
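To make the last two points concrete, here is a minimal ZeroMQ REQ/REP sketch of the controller polling one engine; the port number and the message format are purely illustrative:

# controller.py - minimal ZeroMQ REQ/REP sketch; port and message format are assumptions
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://127.0.0.1:5555")
socket.send_json({"command": "healthcheck"})
print(socket.recv_json())

# engine side, running in its own notebook/process:
#     import zmq
#     context = zmq.Context()
#     socket = context.socket(zmq.REP)
#     socket.bind("tcp://*:5555")
#     while True:
#         request = socket.recv_json()
#         socket.send_json({"status": "ok", "echo": request})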
I am new to RabbitMQ. All the RabbitMQ tutorials for Python/PHP say that on the receiver side you run
php receiver.php
or
python receiver.py
but how can we do this in production?
If we have to run the above command in production, we either have to append & at the end or use nohup, which does not seem like a good idea.
How do you implement a RabbitMQ receiver on a production server in PHP/Python?
Consumers/receivers tend to be managed by a process controller; either init.d or systemd can work. What I have seen used a lot more is something like http://supervisord.org/, http://godrb.com/ or https://mmonit.com/
In production you ideally want not only something that makes sure the process is running, but also separated and rotated logs, plus some amount of monitoring to make sure the process is not constantly restarting, at boot or otherwise. Those tools are better suited to this than running the command by hand.
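For example, a minimal supervisord program section for the Python receiver could look roughly like this (all paths and names are illustrative):

[program:receiver]
command=/usr/bin/python /opt/myapp/receiver.py
directory=/opt/myapp
autostart=true
autorestart=true
stdout_logfile=/var/log/receiver.out.log
stderr_logfile=/var/log/receiver.err.log

supervisord then starts the consumer at boot, restarts it if it dies, and keeps its output in rotated log files instead of leaving it attached to your terminal.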
I have created a Django application and uploaded it to AWS EC2. I can access the site using the public IP address only while I am running the app with python manage.py from the command line on the instance.
If I close the PuTTY window, I can no longer access the site. How can I make sure the site is always available even after I close the command line / PuTTY?
I tried the WSGI option but it is not working at all. I would appreciate your help in finding a solution to keep the Python application running on AWS.
It happens because you are running the app from within the SSH session, which means that ending the session (SIGHUP) will kill your application.
There are several ways to keep the app running after you disconnect from SSH. The simplest is to run it inside a screen session and keep that session running when you disconnect; the advantage of this method is that when you reconnect to the machine you can still attach to the session, control the state of the app, and potentially see the logs.
Although that might be pretty cool, it is really a stopgap. The more stable and solid way is to create a service that runs the app and lets you start it, stop it, and look at the logs using the nifty wrappers of systemd.
Keep the process running with screen:
First you'll have to make sure screen is installed (via apt-get or yum, whichever suits your distro).
Run screen.
Run the app just like you did outside screen.
Detach from the screen session by pressing Ctrl+A and then d.
Disconnect from the SSH and see how the service is still running.
Creating a systemd service is a bit more complicated so try and read through the following manual.
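For reference, a minimal unit file could look roughly like the following; the user, paths and port are assumptions, and for real production traffic you would normally point ExecStart at a WSGI server such as gunicorn rather than the development server:

# /etc/systemd/system/mydjangoapp.service - paths, user and port are illustrative
[Unit]
Description=My Django application
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/myproject
ExecStart=/usr/bin/python3 manage.py runserver 0.0.0.0:8000
Restart=always

[Install]
WantedBy=multi-user.target

After saving the file, run sudo systemctl daemon-reload, then sudo systemctl enable mydjangoapp and sudo systemctl start mydjangoapp; the app will keep running after you close PuTTY and will come back after a reboot.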
I have a node.js server running on a Raspberry Pi 3 B+. (I'm using node because I need the capabilities of a bluetooth library that works well).
Once the node server picks up a message from a bluetooth device, I want it to fire off an event/command/call to a different python script running on the same device.
What is the best way to do this? I've looked into spawning child processes and running the script in them, but that seems messy... Additionally, should I set up a socket between them and stream data through it? I imagine this is done often, what is the consensus solution?
Running a child process is how you would run a python script. That's how you do it from nodejs or any other program (besides a python program).
There are dozens of options for communicating between the python script and the nodejs program. The simplest would be stdin/stdout which are automatically set up for you when you create the child process, but you could also give the nodejs app a local http server that the python script could communicate with or vice versa.
Or, set up a regular socket between the two.
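If you go with stdin/stdout, the python side of the child process could look roughly like this minimal sketch (the line-delimited JSON message format is an assumption):

# child.py - reads one JSON message per line from stdin, answers on stdout
import sys
import json

for line in sys.stdin:
    message = json.loads(line)
    reply = {"received": message}          # do the actual work here
    print(json.dumps(reply), flush=True)   # flush so the nodejs parent sees it immediately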
If, as you now indicate in a comment, your python script is already running, then you may want to run a local http server inside the nodejs app so that the python script can just send it an http request whenever it has data to pass to the nodejs app. Or, if the data primarily flows in the opposite direction, you can put the http server in the python app and have the nodejs server send data to the python app.
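In that case the already-running python script only needs a few lines to notify the nodejs app; the URL, port, route and payload below are assumptions:

# sketch of the python script posting an event to the nodejs app's local http server
import json
from urllib.request import Request, urlopen

payload = json.dumps({"event": "bluetooth_message", "value": 42}).encode()
request = Request("http://127.0.0.1:3000/events", data=payload,
                  headers={"Content-Type": "application/json"})
urlopen(request)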
If you want good bidirectional capabilities, then you could also set up a socket.io connection between the two and then you can easily send messages either way at any time.
I'm going to be using Python to build a web-based asset management system to manage the production of a short CG film. The app will be intranet-based, running on a CentOS machine on the local network. I'm hoping you'll be able to browse through all the assets and shots and then open any of them in the appropriate program on the client machine (also running CentOS). I'm guessing there will have to be some sort of setup on the client side to allow the app to run commands, which is fine because I have access to all of the clients that will be using it (although I don't have root access). Is this sort of thing possible?
As you already guessed, you will need to have a service running on the client PC listening on a predetermined port.
When the client requests to open an asset, your webapp will send the request to the running service, which downloads the asset and runs it. As long as your port number is above 1024 and the service does not do anything that requires root access, you can run it without root.
But this is a very bad idea, as it exposes the clients to malicious attacks. You will have to ensure all requests to the client service are properly signed and that the client verifies each request as valid before executing it. There may be many other security factors to consider depending on your implementation of the client service. But in general, having a service that can run arbitrary requests from a remote machine is a very dangerous thing to have.
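As a rough sketch of what "properly signed" could mean in practice, the client service could verify an HMAC signature against a pre-shared secret and only execute whitelisted actions; the port, header name, secret and command table below are all assumptions:

# clientservice.py - minimal sketch of a signed-request client service
import hmac
import hashlib
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

SHARED_SECRET = b"change-me"                      # pre-shared key, an assumption
ALLOWED_ACTIONS = {"open_asset": ["xdg-open"]}    # whitelist, never run arbitrary input

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        signature = self.headers.get("X-Signature", "")
        expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            self.send_response(403); self.end_headers(); return
        request = json.loads(body)
        command = ALLOWED_ACTIONS.get(request.get("action"))
        if command is None:
            self.send_response(400); self.end_headers(); return
        subprocess.Popen(command + [request.get("path", "")])
        self.send_response(200); self.end_headers()

HTTPServer(("0.0.0.0", 8765), Handler).serve_forever()   # unprivileged port, no root needed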
You may also not be allowed to run such a service on the client PCs, depending on your company's IT policies.
You are better off having the client download the resource normally and then having the user execute the resource manually.
PS: You can have the client service run on a port below 1024, but it will have to start as root and, after binding to the port, drop all root privileges by switching the running user to a different user using setuid (or the equivalent in your language of choice); a sketch of this follows below.
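A rough Python sketch of that bind-then-drop-privileges pattern (the target user name is an assumption):

# must be started as root; binds the privileged port, then drops privileges
import os
import pwd
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 443))       # ports below 1024 require root at bind time
listener.listen(5)

account = pwd.getpwnam("nobody")      # unprivileged user to switch to
os.setgid(account.pw_gid)             # drop the group first, then the user
os.setuid(account.pw_uid)
# from this point on the process handles connections without root privileges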
Note that this is not a standard approach. Imagine if websites out there had the ability to open Notepad or Minesweeper at will whenever you visited a page or clicked something.
The way it is done is to have a service running on the client machine that exposes certain APIs and trusts requests coming from the web app. This service needs to be running on the client machines all the time, and your web app can send it a request to launch the application you want.
If you have a specific subset of applications that will be run on the client systems (i.e. you are distributing jobs), then you might want to consider Salt, which is written in Python. It is a distributed RPC system which uses a secure protocol and authentication to distribute jobs and deliver results:
http://docs.saltstack.org/en/latest/topics/index.html
If you are looking at automating content generation based on specific updates then you might want to consider Jenkins, which has plugins for various revision control systems and build systems:
https://wiki.jenkins-ci.org/display/JENKINS/Meet+Jenkins
It may not have integration with the particular tools you are using, but if it does then it could be a quicker setup and administration than generic salt automation.
--David