I'm creating a Python program that auto-runs on an embedded system, and I want to handle program interruptions gracefully: if possible, close resources and signal child processes to exit gracefully before actually exiting. At that point my watchdog should notice the exit and respawn everything.
What signals can/should I expect a non-interactive program to receive from Linux? I'm already using try/except blocks to handle I/O errors.
Is a system shutdown an event that is signaled? In addition to my watchdog, I will also have an independent process monitoring a hardware line that is asserted when my power supply detects a brownout. I have a supercap that provides enough run time to allow a proper shutdown.
Trap SIGINT and SIGTERM, and make sure to clean up anything like sockets, files, locks, etc.
Trap other signals based on what you are doing. For instance, if you have open pipes you might trap SIGPIPE.
Just remember that signal handling opens you up to race conditions. You probably want to use sigprocmask to block signals while you are handling them.
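A minimal sketch of what that might look like in Python (the cleanup body and the children list are placeholders for whatever your program actually holds open):

import signal
import sys

children = []   # hypothetical list of subprocess.Popen objects you have spawned

def shutdown(signum, frame):
    # Ask child processes to exit gracefully, then wait for them.
    for child in children:
        child.terminate()        # sends SIGTERM to the child
    for child in children:
        child.wait()
    # Close your own resources here (sockets, files, locks), then exit.
    sys.exit(0)

signal.signal(signal.SIGINT, shutdown)    # interactive interrupt (Ctrl-C)
signal.signal(signal.SIGTERM, shutdown)   # default signal sent by `kill` and at system shutdown

Note that CPython runs Python-level signal handlers only in the main thread, between bytecode instructions, which sidesteps some (though not all) of the reentrancy issues that sigprocmask guards against in C.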
Is there any way to improve the performance of a Python script by making all threads ready and then releasing them all at once?
For example, "get ready" 100 different threads with HTTP requests, and when they are ready, they will be released at the same time with smallest delay possible.
Is it possible to make all threads ready (for example 500 threads) and then fire them all off without waiting?
Yes.
What you need is a synchronization object. Basically you start all of the threads, but each one immediately blocks trying to acquire a resource that is initially unavailable. Once all 500 threads are waiting, you release that resource and all 500 threads proceed.
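A minimal sketch of that idea using threading.Event as the gate (the URL and the thread count are placeholder assumptions):

import threading
import urllib.request

start_gate = threading.Event()

def worker(url):
    start_gate.wait()                      # block until the gate opens
    urllib.request.urlopen(url).read()     # fire the HTTP request

threads = [threading.Thread(target=worker, args=("http://example.com",))
           for _ in range(500)]
for t in threads:
    t.start()                              # every thread starts and immediately waits

start_gate.set()                           # open the gate: release all threads "at once"
for t in threads:
    t.join()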
Please note that
on a typical computer, only as many threads as the CPU has cores (say, 8) can run truly in parallel. So starting 500 threads that each make one HTTP request will likely end up behaving much like 8 threads that each make about 62 requests in a loop.
specifically for Python, the GIL (global interpreter lock) prevents more than one thread from executing Python code at a time, so for real parallelism you want multiprocessing rather than multithreading.
this looks like it is meant for load testing. There is software built specifically for that purpose; it is reliable and well tested. Don't reinvent the wheel, that's error prone.
Thread scheduling on Windows happens on timer ticks of roughly 15-16 ms by default, AFAIK. A hardware timer causes an interrupt, and that interrupt gives the kernel control over the CPU. So your 10 ms requirement may not be achievable.
Hi, I have a server/client model using the SocketServer module. The server's job is to receive a test name from the clients and launch that test.
The test is launched using the subprocess module.
I would like the server to keep answering clients while any new jobs are stacked in a list or queue and launched one after the other. The only restriction I have is that the server should not launch a test until the currently running one has completed.
Thanks
You can use the multiprocessing module to start new processes. On the server side, you would keep a variable that refers to the currently running process. You can still have your SocketServer running, accepting requests, and storing them in a list. Every second (or whatever interval you want), in another thread, you would check whether the current process is still alive by calling is_alive(). If it is dead, simply run the next test on the list.
A better way to do it is, in that checking thread, to call .join() on the process so that the next line of code only runs once the current process has finished. That way you don't have to keep polling every second, which is more efficient.
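A rough sketch of that join-based approach (run_test and the pending list are placeholders for however your server stores and launches incoming jobs):

import threading
import multiprocessing
import time

pending = []                      # test names appended by the SocketServer handler
pending_lock = threading.Lock()

def run_test(name):
    # placeholder for whatever actually launches the test
    print("running %s" % name)

def runner():
    while True:
        with pending_lock:
            name = pending.pop(0) if pending else None
        if name is None:
            time.sleep(1)         # nothing queued yet; check again shortly
            continue
        proc = multiprocessing.Process(target=run_test, args=(name,))
        proc.start()
        proc.join()               # block until the current test has finished

t = threading.Thread(target=runner)
t.daemon = True
t.start()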
What you might want to do is the following (a rough sketch is shown after the list):
Get test name in server socket, put it in a Queue
In a separate thread, read test names from the Queue one by one
Execute the process and wait for it to end using communicate()
Keep polling Queue for new tests, repeat steps 2, 3 if test names are available
Meanwhile the server continues receiving test names and putting them in the Queue
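A minimal sketch of those steps, assuming the test name maps to a command line (the "run_test" executable is a placeholder):

import subprocess
import threading
import Queue   # named 'queue' on Python 3

test_queue = Queue.Queue()

def worker():
    while True:
        name = test_queue.get()            # blocks until a test name arrives
        proc = subprocess.Popen(["run_test", name],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()      # waits for the test to finish
        test_queue.task_done()

t = threading.Thread(target=worker)
t.daemon = True
t.start()

# In the SocketServer request handler, just enqueue the received name:
#     test_queue.put(test_name)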
I am using Python 2.7.6 and the threading module.
I am fairly new to Python threading. I am trying to write a program that reads files from a filesystem and stores some hashes in my database. There are a lot of files, and I would like to do it in threads: for example, one thread for every folder that starts with "a", one thread for every folder that starts with "b", and so on. Since I want to use a database connection in the threads, I don't want to spawn all 26 threads at once. Instead I would like to have 10 threads running, and whenever one of them finishes, start a new one.
The main program should hold a list of threads with a specified maximum number of threads (e.g. 10)
The main program should start 10 threads
The main program should be notified when a thread has finished
If a thread has finished, start a new one
And so on ... until the job is done and every thread is finished
I am not quite sure what the main program has to look like. How can I manage this list of threads without a lot of overhead?
I'd like to point out that Python does not handle multi-threading very well. As you may know, Python has a Global Interpreter Lock (GIL) that prevents real concurrency: only one thread executes Python code at a time. (You will not perceive the execution as sequential, though, thanks to your machine's scheduler.)
Take a look here for more information: http://www.dabeaz.com/python/UnderstandingGIL.pdf
That said, if you still want to do it this way, take a look at semaphores: every thread has to acquire the semaphore before doing work, and if you initialize it to 10, only 10 threads at a time will be able to hold it.
https://docs.python.org/2/library/threading.html#threading.Semaphore
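A minimal sketch of that pattern, where hash_folder() is a hypothetical stand-in for the per-folder hashing and database work:

import string
import threading

pool = threading.Semaphore(10)       # at most 10 threads doing real work at once

def hash_folder(letter):
    # placeholder: walk folders starting with `letter`, hash files, write to the DB
    pass

def worker(letter):
    with pool:                       # blocks until one of the 10 slots is free
        hash_folder(letter)

threads = [threading.Thread(target=worker, args=(c,)) for c in string.ascii_lowercase]
for t in threads:
    t.start()
for t in threads:
    t.join()

Note that this still creates 26 thread objects up front; only 10 of them hold the semaphore (and should hold a database connection) at any moment. If you truly need at most 10 thread objects alive, a worker pool reading folder names from a queue is the usual alternative.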
Hope it helps
I need to develop a Python script based on reviewing system performance; here is the scenario.
I need to send the logs to ELK (Elasticsearch, Logstash, Kibana) from Yocto Linux, but only when system resources are free enough. So what I need here is a Python script which continuously monitors system performance; when a resource such as CPU usage is below 50%, it starts sending the logs, and if CPU usage goes above 50% again, it PAUSES the logging.
Now I don't know whether we can pause a process with Python or not. I want this for the logs, so that when sending starts again it resumes from where it stopped last time.
Yes, all your requirements are possible in Python.
In fact it's possible in basically any language, because you're not asking for cutting-edge stuff; this is basic scripting.
Sending logs to ES/Kibana
It's possible: Kibana, ES, and Splunk all have public APIs with good documentation on how to do it, so yes, it's possible.
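As a rough illustration (the host, index name, and document fields are placeholders, and the exact endpoint depends on your Elasticsearch version):

import json
import urllib.request

def send_log(entry):
    # Index one log document into a hypothetical "logs" index over the ES HTTP API.
    req = urllib.request.Request(
        "http://localhost:9200/logs/_doc",
        data=json.dumps(entry).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

send_log({"message": "service started", "level": "info"})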
Pausing a process in Linux
Yes, also possible. If it's an external process, simply find the PID of your process and run kill -STOP <PID>, which stops the process; to resume it, run kill -CONT <PID>. If it's your own process that you want to pause, simply enter a sleep loop in your code (a simple example: while PAUSED: time.sleep(0.5)).
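A rough sketch of the monitoring loop, assuming the psutil package is available on the target and that the log shipper runs as a separate process whose PID you know (both are assumptions):

import os
import signal
import time

import psutil  # third-party; assumed available on the target

SHIPPER_PID = 1234        # placeholder PID of the log-shipping process
paused = False

while True:
    cpu = psutil.cpu_percent(interval=1)        # CPU usage over the last second, in %
    if cpu > 50 and not paused:
        os.kill(SHIPPER_PID, signal.SIGSTOP)    # pause log shipping
        paused = True
    elif cpu < 50 and paused:
        os.kill(SHIPPER_PID, signal.SIGCONT)    # resume where it left off
        paused = False

Because SIGSTOP freezes the shipper in place rather than killing it, SIGCONT lets it continue from exactly where it stopped, which matches the "resume from where it left off" requirement.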
I have a Flask application that schedules long-running jobs using python-rq. One of my requirements is that the user can specify the number of jobs running at any given time.
The app doesn't need to kill any jobs if the user sets a value smaller than the number of currently running jobs, but it does need to spawn another one if the user raises the limit.
Starting a job takes the rq worker some time, but it doesn't need to babysit the job; it can safely launch it and move on to the next one.
My issue is that the initial setup phase can sometimes take a while, so using only one worker might not be ideal. Another issue, which bugs me more, is that with this scheme my rq workers have to poll the database to know when they can go ahead and launch another job. Is there a better way to architect this?