I am working on an OpenCV-based Python project, and I am trying to write a program that takes as little time as possible to execute. To test this, I wrote a small "hello world" program in Python and measured how long it takes to run. I ran it many times, and every run gives me a different run time.
Can you explain why a simple program takes a different amount of time to execute on each run?
I need my program's run time to be independent of other system processes.
Python gets different amounts of system resources depending upon what else the CPU is doing at the time. If you're playing Skyrim with the highest graphics levels at the time, then your script will run slower than if no other programs were open. But even if your task bar is empty, there may be invisible background processes confounding things.
If you're not already using it, consider using timeit. It performs multiple runs of your program in order to smooth out bad runs caused by a busy OS.
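For illustration, a minimal sketch of timeit's repeat-and-take-the-minimum pattern (the statement being timed is just a placeholder):

```python
import timeit

# repeat() runs the timer several times; taking the minimum discards
# runs that were slowed down by unrelated OS activity.
times = timeit.repeat(stmt="sorted(range(1000))", repeat=5, number=1000)
print(f"best of 5: {min(times):.4f}s for 1000 executions")
```

From a shell, `python -m timeit "sorted(range(1000))"` does the same thing without any scaffolding.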
If you absolutely insist on requiring your program to run in the same amount of time every time, you'll need to use an OS that doesn't support multitasking. For example, DOS.
Related
I am running a program in Google Colab. I am using the time module to measure execution time, like start = time.time(); ... end = time.time().
Every time I run the code, its execution time varies.
Is there a way I can find the average execution time, or some other function I can use to resolve this?
I am also running the same code from the Linux terminal, so it would be appreciated even if I could just find the average execution time when running the Python program from the terminal.
There are multiple solutions to this problem.
If you want to test a function or piece of code, you can write a script that does the timing yourself, with the help of a decorator.
Alternatively, you can make use of existing libraries.
I think there are enough examples of both options under the following title: How do I get time of a Python program's execution?
If there is something specific you still cannot do beyond these, write it up and I can help.
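For example, a sketch of the decorator option (the name `timed` is my own, not from any library):

```python
import time
from functools import wraps

def timed(func):
    """Print how long each call to the decorated function takes."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.6f}s")
        return result
    return wrapper

@timed
def work():
    # stand-in for whatever you actually want to measure
    return sum(i * i for i in range(100_000))

work()
```

time.perf_counter() is preferable to time.time() here because it is a monotonic clock intended for measuring intervals.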
To begin, several similar questions have previously been asked on this site, notably here and here. The former is 11 years old, and the latter, while being 4 years old, references the 11 year old post as the solution. I am curious to know if there is something more recent that could accomplish the task. In addition, these questions are only interested in the total time spent by the interpreter. I am hoping for something more granular than that, if such a thing exists.
The problem: I have a GTK program written in C that spawns a matplotlib Python process and embeds it into a widget within the GTK program using GtkSocket and GtkPlug. The python process is spawned using g_spawn (GLib) and then the plot is plugged into the socket on the Python side after it has been created. It takes three seconds to do so, during which time the GtkSocket widget is transparent. This is not very pleasant aesthetically, and I would like to see if there is anything I could do to reduce this three second wait time. I looked at using PyPy instead of CPython as the interpreter, but I am not certain that PyPy has matplotlib support, and that route could cause further headaches since I freeze the script to an executable using PyInstaller. I timed the script itself from beginning to end and the time was around 0.25 seconds. I can run the plotting script from the terminal (normal or frozen) and it takes the same amount of time for the plot to appear (~3 seconds), so it can't be the g_spawn(). The time must all be spent within the interpreter.
I created a minimal example that reproduces the issue (although much less extreme, the time before the plot appears in the socket is only one second). I am not going to post it now since it is not necessarily relevant, although if requested, I can add the file contents with an edit later (contains the GUI C code using GTK, an XML Glade file, and the Python script).
The fact that the minimal example takes one second and my actual plot takes three seconds is hardly a surprise (and further confirms that the timing problem is the time spent with the interpreter), since it is more complicated and involves more imports.
The question: Is there any utility that exists that would allow me to profile where the time is being spent within my script by the Python interpreter? Is most of the time spent with the imports? Is it elsewhere? If I could see where the interpreter spends most of its time, that may possibly allow me to reduce this three second wait time to something less egregious.
Any assistance would be appreciated.
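For what it's worth, one built-in way to check whether the imports dominate is CPython's -X importtime flag (available since 3.7); a sketch, with json standing in for the real imports:

```shell
# Log per-module import cost to stderr; each line shows self and
# cumulative microseconds for one import.
python3 -X importtime -c "import json" 2> import_times.log
# The slowest imports have the largest cumulative (second) column.
sort -t'|' -k2 -n import_times.log | tail -n 5
```

For time spent after the imports, the stdlib profiler (`python3 -m cProfile -s cumtime script.py`) breaks the run down by function.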
I am new to Python and I've just created this script:
import os
import os.path
import time
while True:
    if os.path.isfile('myPathTo/shutdown.svg'):
        os.remove('myPathTo/shutdown.svg')
        time.sleep(1)
        os.system('cd C:\Windows\PSTools & psshutdown -d -t 0')
As you can see, this script is very short, and I think there is a way to make it less laggy. On my PC, it is using about 30% of my processor:
Python stats on my pc
I don't really know why it is using so many resources; I need your help :)
A little explanation of the program:
I'm using IFTTT to send a file (shutdown.svg) to my Google Drive, which is synchronized with my PC, when I ask Google Home to shut down my PC.
When the Python script detects the file, it has to remove it and shut down the PC. I've added time between these actions to make sure the script does not check the file too many times, to reduce lag. Maybe 1 second is too short?
I've added time between these actions to make sure the script does not check the file too many times to reduce lag
This loop sleeps for 1 second only just before shutting down, when the file has already been found; i.e., it never sleeps while it is waiting for the file to appear. So, move time.sleep(1) out of the if block.
Maybe 1 second is too short?
If you can, make this sleep time as long as possible.
If your only task is to shut down the PC, there are many other ways to watch for an update, such as cron jobs for regularly scheduled scripts, or setting up a lightweight server.
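A sketch of the corrected loop, with the sleep moved out of the if and the loop pulled into a function so the shutdown command (taken from the question) is swappable:

```python
import os
import os.path
import time

TRIGGER = 'myPathTo/shutdown.svg'  # placeholder path from the question

def shutdown():
    os.system('cd C:\\Windows\\PSTools & psshutdown -d -t 0')

def watch(path=TRIGGER, interval=1.0, on_found=shutdown):
    # Sleep on every iteration, not just when the file is found;
    # this is what keeps the loop from pinning a CPU core.
    while True:
        if os.path.isfile(path):
            os.remove(path)
            on_found()
            return
        time.sleep(interval)
```

With the sleep inside the if, the loop ran os.path.isfile as fast as the CPU allowed, which is where the ~30% usage came from.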
I created a Python script that grabs some info from various websites. Is it possible to analyze how long it takes to download the data and how long it takes to write it to a file?
I am interested in knowing how much it could improve by running it on a better PC (it is currently running on a crappy old laptop).
If you just want to know how long a process takes to run, the time command is pretty handy. Just run time <command> and it will report how long it took, broken down into a few categories, like wall-clock time, system/kernel time, and user-space time. This won't tell you anything about which parts of the program are taking up the time; you can always look at a profiler if you want/need that type of information.
That said, as Barmar said, if you aren't doing much processing of the sites you are grabbing, the laptop is probably not going to be a limiting factor.
You can always store the system time in a variable before a block of code that you want to test, do it again after then compare them.
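A sketch of that variable approach with time.perf_counter(); the "download" here is faked with a byte string, and the filename out.bin is made up:

```python
import time

start = time.perf_counter()
data = b"x" * 10_000_000          # stand-in for the real download
download_time = time.perf_counter() - start

start = time.perf_counter()
with open("out.bin", "wb") as f:  # stand-in for the real output file
    f.write(data)
write_time = time.perf_counter() - start

print(f"download: {download_time:.3f}s  write: {write_time:.3f}s")
```

Comparing the two numbers tells you whether the run is network-bound or disk-bound; only the disk-bound part stands to gain much from a faster machine.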
I had been working on a python and tkinter solution to the code golf here: https://codegolf.stackexchange.com/questions/26824/frogger-ish-game/
My response is the Python 2.7 one. The thing is, when I run this code on my 2008 mac pro, everything works fine. When I run it on Win7 (I have tried this on several different machines, with the same result), the main update loop runs way too slowly. You will notice that I designed my implementation with a 1-ms internal clock:
if self.gameover == False:
    self.root.after(1, self.process_world)
Empirical testing reveals that this runs much, much slower than every 1ms. Is this a well-known Windows 7-specific behavior? I have not been able to find much information about calls to after() lagging behind by this much. I understand that the call is supposed to be executed "at least" after the given amount of time, and not "at most", but I am seeing 1000 update ticks every 20 seconds instead of every 1 second, and a factor of 20 seems excessive. The timer loop that displays the game clock works perfectly well. I thought that maybe the culprit was my thread lock arrangement, but commenting that out makes no difference. This is my first time using tkinter, so I would appreciate any help and/or advice!