I have a Flask server and I'm trying to run a script in the background to enable a pairing process on a Raspberry Pi. I have a button to enable and disable this, which works fine.
I use process = subprocess.Popen(["python3", "bt.py"]) to start the process, then process.kill() when I need to stop it.
Once the task stops I need to update the page with the new device information, but I'm having trouble detecting from Flask when the pairing script has finished. I know I can call process.poll() to check whether the subprocess is still running, but I can't see how to fit that into Flask, since it would have to run in a loop, which would stop the client from receiving the page.
The only thing I can think of that might work is to have the bt.py script write to a file and have the JavaScript side of my Flask app detect the change and trigger a redirect. However, this seems clunky and feels like bad practice. Any suggestions would be great.
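One way around the loop problem is to expose process.poll() behind its own route and let the page poll that route, rather than having the server block. A minimal sketch, assuming a module-level handle to the Popen object and hypothetical /pair/... routes (not from the original setup):

from flask import Flask, jsonify
import subprocess

app = Flask(__name__)
process = None

@app.route("/pair/start")
def start_pairing():
    global process
    process = subprocess.Popen(["python3", "bt.py"])
    return jsonify(running=True)

@app.route("/pair/status")
def pairing_status():
    # poll() returns None while bt.py is still running,
    # and its exit code once it has stopped
    running = process is not None and process.poll() is None
    return jsonify(running=running)

The page's JavaScript can then request /pair/status every second or two and redirect (or re-render the device list) once running comes back false, so no request ever blocks inside a loop.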
Related
I have a project working with Flask and Jinja2, and the web UI is just a table like in the image.
All of the routes trigger a backend program that scrapes some website, and the web app works fine, but I wonder how to make it more flexible.
I thought of making an auto-run that runs all the routes with one button, with each route having its own status.
That column would be a Status column with values like running, done, stopped, or not running, but I cannot picture the logic for it.
I have already created the auto-run and it works fine; my question is just how to know whether each task's status is running, done, stopped, or not running in the background.
Any ideas are really appreciated. This is my own project, so I'm excited to make it work.
The simplest way of doing this is to log each stage of the scraping process. For example:
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get('https://www.google.com/')
print("Loaded Google.com")
some_task = driver.find_element(By.XPATH, '//button[text()="Some text"]')
print("Got some task")
(Locating elements as per: https://selenium-python.readthedocs.io/locating-elements.html)
However, for real-time reporting of task status and better efficiency, you can use Celery.
Celery works well for web scraping tasks because it lets you asynchronously offload work from your Python app to workers and task queues.
You can then retrieve proper status reports from each worker. See: https://docs.celeryq.dev/en/stable/reference/celery.states.html
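A minimal sketch of that approach, assuming a Redis broker/result backend and illustrative task names: each scrape runs as a Celery task that reports its state, and the Flask side looks the state up by task id.

from celery import Celery
from celery.result import AsyncResult

# Assumed broker/backend URLs; adjust for your setup.
celery_app = Celery("scrapers",
                    broker="redis://localhost:6379/0",
                    backend="redis://localhost:6379/0")

@celery_app.task(bind=True)
def scrape_site(self, url):
    # Report a custom in-progress state while the scrape runs
    self.update_state(state="PROGRESS", meta={"step": "loading page"})
    # ... actual scraping goes here ...
    return {"url": url, "items": 0}

def task_status(task_id):
    # One of PENDING, STARTED, PROGRESS, SUCCESS, FAILURE, ...
    return AsyncResult(task_id, app=celery_app).state

A route can call scrape_site.delay(url), store the returned task id, and map task_status(task_id) onto the table's Status column.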
An easy and efficient approach is to use AJAX to periodically check a log file for each process's status and update the corresponding DOM element.
I would suggest keeping a separate log file where the backend Flask processes record the status they are currently in: initially everything is "SLEEP", once a process is triggered it changes its corresponding entry to "RUNNING", and once it finishes it changes it to "DONE".
Use AJAX from the front end to read the log file every N seconds and update the DOM status element based on the status it finds.
PS: you can also add animation effects, like a spinner on the DOM element of a running process, through AJAX.
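A minimal sketch of the server side of this idea, with an illustrative status file name and route: the scraper calls set_status() as it moves between stages, and the AJAX poll hits /status to read the current values.

import json
from flask import Flask, jsonify

app = Flask(__name__)
STATUS_FILE = "status.json"  # assumed location

def set_status(task_name, status):
    # Called by the background scraper: "SLEEP", "RUNNING" or "DONE"
    try:
        with open(STATUS_FILE) as f:
            statuses = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        statuses = {}
    statuses[task_name] = status
    with open(STATUS_FILE, "w") as f:
        json.dump(statuses, f)

@app.route("/status")
def status():
    try:
        with open(STATUS_FILE) as f:
            return jsonify(json.load(f))
    except FileNotFoundError:
        return jsonify({})

The front-end timer (setInterval plus fetch or jQuery AJAX) then just maps each key in the returned JSON onto its table cell.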
I am currently writing a web app with a Rails backend and a React.js frontend. Meanwhile, my project partner wrote some Python scripts to do some web scraping. I am trying to figure out how to run the Python script. I have a custom action in one of my controllers, but I am not sure how to call that action from my React frontend. I am guessing Axios can help, but I only know how to make POST and GET requests with Axios and am not sure how to wire them up to my own action.
class DeadlinesController < ApplicationController
def import
exec(" python Users/sherrywu1999/Desktop/pythonweb.py")
end
end
This is entirely possible. The safest method here is probably executing the script from a background job (Sidekiq, Resque, etc.).
If you don't want to set anything like that up, I would recommend:
fork { exec("python Users/sherrywu1999/Desktop/pythonweb.py > /some/path/to/output/log/to.log") }
This way you are not blocking the execution of your Rails app. If you just fire off the script with exec(...), the page will keep loading until the process exits, which may result in a timeout or other bad behaviour. Forking lets your main Ruby process continue while the script runs in a separate process, and you can check the log output later if needed.
Wherever you place this code, make sure to configure a GET route to it, then perform a GET request with Axios to that path to execute the script when you want to.
I am writing a simple Mac application designed to run in the background and perform certain actions whenever the user clicks the mouse button. The app is written in Python using PyObjC. I am using addGlobalMonitorForEventsMatchingMask to watch for NSLeftMouseDown events:
NSEvent.addGlobalMonitorForEventsMatchingMask_handler_(NSLeftMouseDownMask, handler)
This code works perfectly when running in the terminal. However, when I bundle it as a standalone app (using py2app) and then launch it, the app doesn't receive any events at first. (Or at least, if it does, it doesn't run the code in my handler method.) Only when I click on the app in the Dock does it start receiving events, and after that, it continues to receive events even after it returns to the background. But it doesn't receive anything until activated once.
My question is: How can I get my app to start receiving events as soon as it is launched, without having to be activated first by clicking the Dock icon? Is this some known quirk of NSEvents, or is there perhaps something wrong with my run loop in PyObjC?
Any help or guidance is greatly appreciated!
Edit: Upon further testing, it seems that, in fact, my app spontaneously starts receiving notifications about ten seconds after launch, regardless of whether I activate it. Which is slightly annoying, but fine.
However, if I run the app with either LSUIElement = true or LSBackgroundOnly = true in my Info.plist (which I ultimately want to do, since this app should only run in the background and never appear in the Dock), I never receive notifications. So I am still stuck.
As you said, "Only when I click on the app in the Dock does it start receiving events", which suggests the handler only gets registered after you click on the app in the Dock.
So it depends on where in your code you call NSEvent.addGlobalMonitorForEventsMatchingMask_handler_(NSLeftMouseDownMask, handler), which is what registers the handler.
You should register the handler in the applicationDidFinishLaunching_ delegate method.
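A minimal sketch of that structure in PyObjC (delegate and handler names are illustrative):

from AppKit import NSApplication, NSEvent, NSLeftMouseDownMask
from Foundation import NSObject, NSLog
from PyObjCTools import AppHelper

class AppDelegate(NSObject):
    def applicationDidFinishLaunching_(self, notification):
        # Register the global monitor only once the app has finished launching
        def handler(event):
            NSLog("Left mouse button pressed")
        self.monitor = NSEvent.addGlobalMonitorForEventsMatchingMask_handler_(
            NSLeftMouseDownMask, handler)

app = NSApplication.sharedApplication()
delegate = AppDelegate.alloc().init()
app.setDelegate_(delegate)
AppHelper.runEventLoop()

Keeping a Python reference to the delegate matters (setDelegate_ does not retain it), and storing the returned monitor object lets you remove it later with NSEvent.removeMonitor_.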
I have an application with the following parts:
client -> nginx -> uwsgi (Python)
Some Python scripts can run for a long time (2-6 minutes). After the script finishes I need to return the content to the client, but the connection breaks with a "504 Gateway Timeout" error. What can I use in my case to avoid this error?
So is your goal to reduce the run time of the scripts, or to not have them time out? Browsers are going to give up on a 6 minute request no matter what you try.
Perhaps try doing the work on the server, and then polling for progress with AJAX requests?
Or, if possible, try optimizing the scripts. For example, if you have some horribly slow SQL stuff going on, try cleaning that up.
Otherwise, without more information, a more specific answer is hard to give.
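For the polling suggestion above, a minimal sketch (assuming the uwsgi app is Flask, and with illustrative route names): start the work in a background thread and let the browser ask for the result every few seconds instead of holding one request open.

import threading
import uuid
from flask import Flask, jsonify

app = Flask(__name__)
jobs = {}  # job_id -> {"done": bool, "result": ...}

def long_task(job_id):
    # ... the 2-6 minute script goes here ...
    jobs[job_id] = {"done": True, "result": "finished"}

@app.route("/start")
def start():
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"done": False, "result": None}
    threading.Thread(target=long_task, args=(job_id,), daemon=True).start()
    return jsonify(job_id=job_id)

@app.route("/status/<job_id>")
def status(job_id):
    return jsonify(jobs.get(job_id, {"done": False, "result": None}))

Note that an in-memory dict only works with a single worker process; with several uwsgi workers you would need a shared store (Redis, a database, a file), and uwsgi must be started with threads enabled for the background thread to run.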
I once set up a system where the "main page" contained an iframe which showed the output of the long-running program as text/plain. I think the handler for the iframe content was a Python CGI script, running under an Apache server, which emitted all the headers and then the program output line by line.
I don't know whether this would work under your configuration.
This heavily depends on your server setup (i.e. how easy it is to push data back to the client), but is it possible, while running your lengthy application, to periodically send some "null" content (e.g. plain newlines, assuming your output is HTML) so that the browser thinks this is just a slow connection and not a stalled one?
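A minimal sketch of that keep-alive trick with a streamed response (assuming Flask under uwsgi; run_long_script here is just a stand-in for the real 2-6 minute job):

import threading
import time
from flask import Flask, Response

app = Flask(__name__)

def run_long_script(result):
    # Stand-in for the real long-running work
    time.sleep(10)
    result["output"] = "done"

@app.route("/long")
def long_report():
    def generate():
        result = {}
        worker = threading.Thread(target=run_long_script, args=(result,))
        worker.start()
        while worker.is_alive():
            yield "\n"        # periodic "null" content so the connection never looks stalled
            time.sleep(5)
        yield result.get("output", "")
    return Response(generate(), mimetype="text/plain")

Note that nginx may buffer the response; you may need to turn proxy buffering off (for example by sending an X-Accel-Buffering: no header) for the keep-alive chunks to actually reach the client before the timeout.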
I'm attempting to start a server app (in Erlang; it opens ports and listens for HTTP requests) from the command line using pexpect (or even directly using subprocess.Popen()).
The app starts fine, logs to the screen fine (via pexpect), and I can interact with it via the command line as well...
The issue is that the server won't listen for incoming requests. The app listens when I start it manually by typing the command in the shell; starting it via subprocess/pexpect somehow stops it from listening...
When I start it manually, "netstat -tlp" shows the app as listening; when I start it via Python (subprocess/pexpect), netstat does not show the app at all...
I have a feeling it has something to do with the environment, the way Python forks things, etc.
Any ideas?
thank you
Basic example:
Note:
"-pz" just adds ./ebin to the module search path for the Erlang VM (the library search path)
"-run" runs moduleName without any parameters
command_str = "erl -pz ./ebin -run moduleName"
child = pexpect.spawn(command_str)
child.interact() # Give control of the child to the user
All of this works correctly, which is strange. I have logging inside my code and all the log messages are output as they should be. The server wouldn't listen even if I started its process via a bash script, so I don't think it's the Python code that's causing it (that's why I have a feeling it's something about the way the new OS process is started).
It could be to do with the way that command line arguments are passed to the subprocess.
Without more specific code, I can't say for sure, but I had this problem working on sshsplit ( https://launchpad.net/sshsplit )
To pass arguments correctly (in this example "ssh -ND 3000"), you should use something like this:
openargs = ["ssh", "-ND", "3000"]
print("Launching %s" % " ".join(openargs))
p = subprocess.Popen(openargs, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
This will not only allow you to see exactly what command you are launching, but should correctly pass the values to the executable. Although I can't say for sure without seeing some code, this seems the most likely cause of failure (could it also be that the program requires a specific working directory, or configuration file?).
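On the working-directory and configuration point, subprocess.Popen lets you set both the working directory and the environment explicitly, which is a quick way to rule them out (the path below is illustrative):

import os
import subprocess

openargs = ["erl", "-pz", "./ebin", "-run", "moduleName"]
p = subprocess.Popen(
    openargs,
    cwd="/path/to/app",        # illustrative: the directory the app expects to run from
    env=dict(os.environ),      # start from the full shell environment
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)

In particular, the -pz ./ebin path in the question is relative, so if the Python process is started from a different directory the Erlang VM may not find the modules it needs to open its listening sockets.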