Performance-review-based Python script on Yocto Linux [closed]

I need to develop a performance-review-based Python script; here is the scenario.
I need to send logs to ELK (Elasticsearch, Logstash, Kibana) from Yocto Linux, but only when system resources are free enough.
So what I need is a Python script that continuously monitors system performance: when a resource such as CPU usage is below 50%, it starts sending the logs, and if CPU usage goes above 50% again, it PAUSES the logging.
I don't know whether or not a process can be paused with Python. I want this for the logs, so that when it starts again it sends the logs from where it stopped last time.

Yes, all of your requirements are possible in Python.
In fact, they're possible in basically any language, because you're not asking for cutting-edge stuff; this is basic scripting.
Sending logs to ES/Kibana
Kibana, Elasticsearch and Splunk all have public APIs with good documentation on how to do it, so yes, it's possible.
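For example, a minimal sketch of pushing a log document straight into Elasticsearch over its REST API with the requests library (the localhost:9200 endpoint and the device-logs index name are assumptions; in a full ELK setup you would more often ship to a Logstash input instead):

# Minimal sketch: index a single log document in Elasticsearch.
# The endpoint and index name are assumptions; adjust for your cluster.
import requests

ES_URL = "http://localhost:9200/device-logs/_doc"  # assumed Elasticsearch endpoint

def send_log(message, level="info"):
    doc = {"message": message, "level": level}
    resp = requests.post(ES_URL, json=doc, timeout=5)
    resp.raise_for_status()  # fail loudly if Elasticsearch rejected the document
    return resp.json()

send_log("hello from the Yocto box")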
Pausing a process in Linux
Yes, also possible. If it's an external process, simply find the PID of your process and run kill -STOP <PID>, which stops the process; to resume it, run kill -CONT <PID>. If it's your own process that you want to pause, simply enter a sleep loop in your code (a simple example: while PAUSED: time.sleep(0.5)).
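A rough sketch of the whole idea, assuming psutil is available on the image and that the log shipper runs as a separate process whose PID you know (SHIPPER_PID below is a hypothetical placeholder):

# Poll overall CPU usage and pause/resume an external log-shipper process
# via SIGSTOP/SIGCONT, mirroring `kill -STOP` / `kill -CONT`.
import os
import signal
import psutil

SHIPPER_PID = 1234  # hypothetical PID of the log-shipping process

def monitor(threshold=50.0, interval=5.0):
    paused = False
    while True:
        cpu = psutil.cpu_percent(interval=interval)  # blocks for `interval` seconds
        if cpu > threshold and not paused:
            os.kill(SHIPPER_PID, signal.SIGSTOP)  # pause the shipper
            paused = True
        elif cpu <= threshold and paused:
            os.kill(SHIPPER_PID, signal.SIGCONT)  # resume where it left off
            paused = False

Because SIGSTOP freezes the shipper mid-stream and SIGCONT lets it continue, the logs resume from where they stopped, which covers the "send from where it stopped last time" requirement.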

Related

How to get all threads ready and send all of them at once [closed]

Is there any way to improve the performance of a Python script by making all threads ready and then sending all of them at once?
For example, "get ready" 100 different threads with HTTP requests, and when they are ready, release them all at the same time with the smallest delay possible.
Is it possible to make all the threads ready (for example, 500 threads) and send all of them without waiting?
Yes.
What you need is a synchronization object. Basically, you start all the threads, but each one tries to acquire a resource that is initially unavailable. Once all 500 threads are waiting, you release that resource and all 500 threads run.
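For instance, a minimal sketch using threading.Barrier as that synchronization object (the thread count and target URL are just placeholders):

# Every worker blocks at the barrier; they are all released together
# once the last one arrives, so the requests start as close together
# as the scheduler allows.
import threading
import urllib.request

N_THREADS = 100
barrier = threading.Barrier(N_THREADS)

def worker(url):
    barrier.wait()                      # block until all N_THREADS are ready
    urllib.request.urlopen(url).read()  # fire the request as soon as released

threads = [threading.Thread(target=worker, args=("http://example.com/",))
           for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()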
Please note that
on a typical computer you can only run as many threads truly in parallel as you have CPU cores (say 8), so starting 500 threads that make one HTTP request each will likely end up much the same as running 8 threads that each do ~62 requests in a loop.
specifically for Python, there is the GIL (global interpreter lock), so only one thread executes Python bytecode at a time; for real parallelism you want multiprocessing rather than multithreading (though the GIL is released while a thread waits on network I/O).
this looks like load testing. Software built specifically for that purpose already exists, reliable and tested; don't reinvent the wheel, that's error-prone.
thread scheduling on Windows happens at roughly 15-16 ms intervals by default, because a hardware timer interrupt is what hands the kernel control of the CPU, so timing requirements in the 10 ms range may not be achievable.

Python - How to manage a list of threads [closed]

I am using Python 2.7.6 and the threading module.
I am fairly new to Python threading. I am trying to write a program that reads files from a filesystem and stores some hashes in my database. There are a lot of files, and I would like to do it in threads, e.g. one thread for every folder that starts with a, one thread for every folder that starts with b, and so on. Since I want to use a database connection in the threads, I don't want to spawn all 26 threads at once. So I would like to have 10 threads running, and whenever one of them finishes, start a new one.
The main program should hold a list of threads with a specified maximum number of threads (e.g. 10)
The main program should start 10 threads
The main program should be notified when a thread finishes
If a thread has finished, start a new one
And so on ... until the job is done and every thread is finished
I am not quite sure what the main program should look like. How can I manage this list of threads without a lot of overhead?
I'd like to point out that Python doesn't handle multi-threading well: as you might know (or not), Python comes with a Global Interpreter Lock (GIL) that prevents real parallelism; only one thread executes Python bytecode at a time. (However, you will not see the execution as a strictly sequential one, thanks to thread switching and your machine's scheduler.)
Take a look here for more information: http://www.dabeaz.com/python/UnderstandingGIL.pdf
That said, if you still want to do it this way, take a look at semaphores: every thread has to acquire it, and if you initialize the semaphore to 10, only 10 threads at a time will be able to hold it.
https://docs.python.org/2/library/threading.html#threading.Semaphore
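A small sketch of that semaphore approach (it works the same on Python 2.7 and 3; hash_folder is a hypothetical placeholder for your hashing/DB work):

# 26 worker threads are created, but the semaphore only lets 10 of them
# do their work at any one time; the rest wait their turn.
import string
import threading

MAX_RUNNING = 10
pool_sema = threading.Semaphore(MAX_RUNNING)

def hash_folder(letter):
    pass  # placeholder: walk folders starting with `letter`, hash files, store in DB

def worker(letter):
    with pool_sema:          # at most MAX_RUNNING threads get past this point
        hash_folder(letter)

threads = [threading.Thread(target=worker, args=(c,)) for c in string.ascii_lowercase]
for t in threads:
    t.start()
for t in threads:
    t.join()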
Hope it helps

What Linux signals should I trap to make a good application [closed]

I'm creating a program in Python that auto-runs on an embedded system, and I want to handle program interruptions gracefully. That is, if possible, close resources and signal child processes to exit gracefully before actually exiting. At that point my watchdog should notice this and respawn everything.
What signals can/should I expect to receive in a non-interactive program on Linux? I'm using try/except blocks to handle I/O errors.
Is the system shutting down an event that is signaled? In addition to my watchdog, I will also have an independent process monitoring a hardware line that gets set when my power supply detects a brownout. I have a supercap to provide some run time to allow a proper shutdown.
Trap SIGINT and SIGTERM, and make sure to clean up anything like sockets, files, locks, etc.
Trap other signals based on what you are doing. For instance, if you have open pipes you might trap SIGPIPE.
Just remember that signal handling opens you up to race conditions. You probably want to use sigprocmask to block signals while you're handling them.
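A minimal sketch of that pattern (cleanup() is a hypothetical placeholder for your actual teardown: closing sockets and files, releasing locks, telling child processes to exit):

# Trap SIGINT and SIGTERM, clean up, then exit so the watchdog can respawn us.
import signal
import sys

def cleanup():
    pass  # placeholder: close sockets/files, release locks, signal children

def handle_exit(signum, frame):
    cleanup()
    sys.exit(0)  # the watchdog notices the exit and respawns everything

signal.signal(signal.SIGINT, handle_exit)
signal.signal(signal.SIGTERM, handle_exit)

# ... main loop of the program ...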

Avoiding DB polling in python-rq app [closed]

I have a Flask application that schedules long-running jobs using python-rq. One of my requirements is that the user can specify the number of jobs running at any given time.
The app doesn't need to kill any job if the user sets a smaller value than the number of currently running jobs, but it does need to spawn another one if the user raises the limit.
Starting a job takes the rq worker some time, but it doesn't need to babysit the job; it can safely launch it and move on to the next one.
My issue is that the initial setup phase can sometimes take a while, so using only one worker might not be ideal. Another issue, which bugs me more, is that with this scheme my rq workers have to poll the database to know when they can go ahead and launch another job. Is there a better way to architect this?
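For context, the enqueueing side described above looks roughly like this with python-rq (run_job is a hypothetical placeholder for the long-running work; the polling question is about what happens around it):

# The Flask side pushes work onto an rq queue backed by Redis; a separate
# rq worker process picks it up, does the (possibly slow) setup and moves on.
from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())

def run_job(job_id):
    pass  # placeholder: the actual long-running work

def schedule(job_id):
    queue.enqueue(run_job, job_id)  # the worker still has to check the DB for the concurrency limit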

Run Python script in background on remote server with Django view [closed]

What I want to achieve is to run some Python script that collects data and inserts it into a DB in the background.
So basically, a person opens a Django view, clicks a button and then closes the browser; Django launches the script on the server, and the script then collects data in the background while everything else goes on on its own.
What is the best library, framework, module or package to achieve such functionality?
Celery is the most commonly used tool for such tasks.
Celery is a good suggestion, but it's a fairly heavyweight solution, and simpler, more straightforward options exist unless you need the full power of Celery.
So I suggest using rq and its Django integration, django-rq.
RQ is inspired by the good parts of Celery and Resque, and was created as a lightweight alternative to the heaviness of Celery and other AMQP-based queuing implementations.
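A minimal sketch of how a view hands work off with django-rq (collect_data and its import path are assumptions for illustration):

# The view enqueues the job and returns immediately; a separate rq worker
# process executes collect_data later.
import django_rq
from django.http import HttpResponse

from myapp.tasks import collect_data  # hypothetical task function

def start_collection(request):
    django_rq.enqueue(collect_data)   # pushes the job onto the default queue
    return HttpResponse("collection started")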
I'd humbly recommend the standard library module multiprocessing for this. As long as the background process can run on the same server as the one processing the requests, you'll be fine.
Although I consider this to be the simplest solution, it wouldn't scale well at all, since you'd be running extra processes on your server. If you expect these tasks to happen only once in a while and not last too long, it's a good quick solution.
One thing to keep in mind, though: in the newly started process ALWAYS close your database connection before doing anything - this is because the forked process shares the same connection to the SQL server and might enter into data races with your main Django process.
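A rough sketch of that approach, including the connection caveat (collect_data is a hypothetical placeholder; connections.close_all() needs Django 1.8+):

# Fork a background process from the view; the child drops the DB connections
# it inherited from the parent before touching the database, so Django opens
# fresh ones and avoids racing the main process on the shared connection.
from multiprocessing import Process

from django.db import connections
from django.http import HttpResponse

def collect_data():
    connections.close_all()  # drop connections inherited from the parent process
    # placeholder: fetch data and write it to the DB with fresh connections

def start_collection(request):
    Process(target=collect_data).start()  # fire and forget
    return HttpResponse("collection started")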
