How to retrieve the process start time (or uptime) in Python

How can I retrieve the process start time (or uptime) in Python on Linux?
I know I can call "ps -p my_process_id -f" and then parse the output, but that is not an elegant solution.

By using psutil https://github.com/giampaolo/psutil:
>>> import psutil, os, time
>>> p = psutil.Process(os.getpid())
>>> p.create_time()
1293678383.0799999
>>> time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(p.create_time()))
'2010-12-30 04:06:23'
>>>
...plus it's cross platform, not only Linux.
NB: I am one of the authors of this project.

If you are doing it from within the python program you're trying to measure, you could do something like this:
import time
# at the beginning of the script
startTime = time.time()
# ...
def getUptime():
    """
    Returns the number of seconds since the program started.
    """
    # do return startTime if you just want the process start time
    return time.time() - startTime
Otherwise, you have no choice but to parse ps or go into /proc/pid. A nice bashy way of getting the elapsed time is:
ps -eo pid,etime | grep $YOUR_PID | awk '{print $2}'
This will only print the elapsed time in the following format, so it should be quite easy to parse:
days-HH:MM:SS
(if it's been running for less than a day, it's just HH:MM:SS)
The start time is available like this:
ps -eo pid,stime | grep $YOUR_PID | awk '{print $2}'
Unfortunately, if your process didn't start today, this will only give you the date that it started, rather than the time.
The best way of doing this is to get the elapsed time and the current time and just do a bit of math. The following is a python script that takes a PID as an argument and does the above for you, printing out the start date and time of the process:
import sys
import datetime
import time
import subprocess

# call like this: python startTime.py $PID
pid = sys.argv[1]
proc = subprocess.Popen(['ps', '-eo', 'pid,etime'], stdout=subprocess.PIPE)

# get data from stdout
proc.wait()
results = proc.stdout.readlines()

# parse data (should only be one)
for result in results:
    try:
        result.strip()
        if result.split()[0] == pid:
            pidInfo = result.split()[1]
            # stop after the first one we find
            break
    except IndexError:
        pass  # ignore it
else:
    # didn't find one
    print "Process PID", pid, "doesn't seem to exist!"
    sys.exit(0)

pidInfo = [result.split()[1] for result in results
           if result.split()[0] == pid][0]
pidInfo = pidInfo.partition("-")

if pidInfo[1] == '-':
    # there is a day
    days = int(pidInfo[0])
    rest = pidInfo[2].split(":")
    hours = int(rest[0])
    minutes = int(rest[1])
    seconds = int(rest[2])
else:
    days = 0
    rest = pidInfo[0].split(":")
    if len(rest) == 3:
        hours = int(rest[0])
        minutes = int(rest[1])
        seconds = int(rest[2])
    elif len(rest) == 2:
        hours = 0
        minutes = int(rest[0])
        seconds = int(rest[1])
    else:
        hours = 0
        minutes = 0
        seconds = int(rest[0])

# get the start time
secondsSinceStart = days*24*3600 + hours*3600 + minutes*60 + seconds

# unix time (in seconds) of start
startTime = time.time() - secondsSinceStart

# final result
print "Process started on",
print datetime.datetime.fromtimestamp(startTime).strftime("%a %b %d at %I:%M:%S %p")

man proc says that the 22nd item in /proc/my_process_id/stat is:
starttime %lu
The time in jiffies the process started after system boot.
Your problem now is how to determine the length of a jiffy and how to determine when the system booted.
The answer for the latter comes still from man proc: it's in /proc/stat, on a line of its own like this:
btime 1270710844
That's a measurement in seconds since Epoch.
The answer for the former I'm not sure about. man 7 time says:
The Software Clock, HZ, and Jiffies
The accuracy of many system calls and timestamps is limited by the resolution of the software clock, a clock maintained by the kernel which measures time in jiffies. The size of a jiffy is determined by the value of the kernel constant HZ. The value of HZ varies across kernel versions and hardware platforms. On x86 the situation is as follows: on kernels up to and including 2.4.x, HZ was 100, giving a jiffy value of 0.01 seconds; starting with 2.6.0, HZ was raised to 1000, giving a jiffy of 0.001 seconds; since kernel 2.6.13, the HZ value is a kernel configuration parameter and can be 100, 250 (the default) or 1000, yielding a jiffies value of, respectively, 0.01, 0.004, or 0.001 seconds.
We need to find HZ, but I have no idea how I'd go about that from Python except for hoping the value is 250 (as Wikipedia claims is the default).
ps obtains it thus:
/* sysinfo.c init_libproc() */
if (linux_version_code > LINUX_VERSION(2, 4, 0)) {
    Hertz = find_elf_note(AT_CLKTCK);
    //error handling
}
old_Hertz_hack(); //ugh
This sounds like a job well done by a very small C module for Python :)
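In practice, though, a C module may not be needed for this particular value: the counters in /proc are expressed in clock ticks (USER_HZ), and os.sysconf exposes that rate directly. A minimal sketch, assuming a Linux system:
import os

# USER_HZ: the ticks-per-second unit used by the counters in /proc
# (usually 100, independent of the kernel's internal HZ)
clk_tck = os.sysconf(os.sysconf_names['SC_CLK_TCK'])
print(clk_tck)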

Here's code based on badp's answer:
import os
from time import time

HZ = os.sysconf(os.sysconf_names['SC_CLK_TCK'])

def proc_age_secs():
    system_stats = open('/proc/stat').readlines()
    process_stats = open('/proc/self/stat').read().split()
    for line in system_stats:
        if line.startswith('btime'):
            boot_timestamp = int(line.split()[1])
    age_from_boot_jiffies = int(process_stats[21])
    age_from_boot_timestamp = age_from_boot_jiffies / HZ
    age_timestamp = boot_timestamp + age_from_boot_timestamp
    return time() - age_timestamp
I'm not sure if it's right, though. I wrote a test program that calls sleep(5), and the output is wrong and varies by a couple of seconds from run to run. This is in a VMware Workstation VM:
if __name__ == '__main__':
    from time import sleep
    sleep(5)
    print proc_age_secs()
The output is:
$ time python test.py
6.19169998169
real 0m5.063s
user 0m0.020s
sys 0m0.036s

import os
import re
from datetime import datetime

def proc_starttime(pid=os.getpid()):
    # https://gist.github.com/westhood/1073585
    p = re.compile(r"^btime (\d+)$", re.MULTILINE)
    with open("/proc/stat") as f:
        m = p.search(f.read())
    btime = int(m.groups()[0])

    clk_tck = os.sysconf(os.sysconf_names["SC_CLK_TCK"])
    with open("/proc/%d/stat" % pid) as f:
        stime = int(f.read().split()[21]) / clk_tck

    return datetime.fromtimestamp(btime + stime)
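A quick usage sketch, printing the start time of the current process and (assuming its stat file is readable) of PID 1:
print(proc_starttime())       # start time of the current process
print(proc_starttime(pid=1))  # start time of PID 1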

You can parse /proc/uptime:
>>> uptime, idletime = [float(f) for f in open("/proc/uptime").read().split()]
>>> print uptime
29708.1
>>> print idletime
26484.45
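To turn this into a process start time, a minimal sketch (Linux only, and using a naive split of /proc/<pid>/stat that breaks if the process name contains spaces): the boot time is now minus uptime, and the start offset is field 22 of the stat file divided by the clock-tick rate.
import os
import time

def start_time_from_uptime(pid=os.getpid()):
    uptime = float(open("/proc/uptime").read().split()[0])
    boot_time = time.time() - uptime          # seconds since the epoch at boot
    ticks = float(open("/proc/%d/stat" % pid).read().split()[21])
    return boot_time + ticks / os.sysconf(os.sysconf_names["SC_CLK_TCK"])

print(start_time_from_uptime())  # seconds since the epoch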
For Windows machines, you can probably use the wmi module:
import wmi
c = wmi.WMI()
secs_up = int([uptime.SystemUpTime for uptime in c.Win32_PerfFormattedData_PerfOS_System()][0])
hours_up = secs_up / 3600
print hours_up

Related

Using astropy to generate solar eclipse conditions based on my location

This is a question for the astronomy-minded folks on here.
I am an amateur astrophotographer looking to develop a personal script to aid my photography of next year's total solar eclipse. I am developing a Python script to automate my photography, so that I may enjoy the eclipse with my own eyes while my DSLR clicks away. Here's the script I've developed so far. The script uses digicamcontrol to control the camera.
Right now in the script I have just develop automation based on the partial phase of the eclipse (first contact, C1) and the timing of the eclipse in UTC (as well as my own PC). But a thought occurred to me: What if I can't connect to the internet and get the exact timing of the solar eclipse based on my location? I'd like to be able to generate those times. Is there a more efficient method to utilize astropy for this task? Thanks in advance.
import digiCamControlPython as dccp
import time
from datetime import datetime
from astropy.time import Time

local_time = Time.now()
utc_time_now = local_time.utc

def PartialEclipse(start_time:str, end_time:str):
    camera = dccp.Camera()
    camera.setIso(100)
    camera.setShutterspeed("1/50")
    camera.setFolder(r"C:\Users\My_Name\Pictures\digiCamControl")

    # Set the target capture time in astropy time format
    partial_eclipse_start = Time(start_time, format='isot')
    partial_eclipse_end = Time(end_time, format='isot')

    # Wait until the capture time
    while utc_time_now < partial_eclipse_start:
        time.sleep(1)

    # Start capturing images
    while utc_time_now < partial_eclipse_end:
        camera.capture()
        time.sleep(30)  # Capture an image every 30 seconds

PartialEclipse("2024-04-08T17:12:13", "2024-04-08T18:29:24")  # times of partial eclipse start and T-15s before totality
EDIT: In the event anyone ever looks at this question, I did make some progress on this.
import numpy as np
import astropy.units as u
from astropy.coordinates import solar_system_ephemeris, AltAz, EarthLocation, SkyCoord
from astropy.coordinates import get_body, get_moon, get_sun
from astropy.time import Time

myLocation = EarthLocation(lat=26*u.deg, lon=-80*u.deg, height=0*u.m)

# set the time step (how often to check for a solar eclipse in seconds)
time_step = 3600  # 1 hour

# set the number of days to check for a solar eclipse
num_days = 365

# set the start and end times to check for a solar eclipse
start_time = Time.now()
end_time = start_time + num_days * u.day

# initialize a list to store the times of a solar eclipse
eclipse_times = []

# loop over the desired time range, checking for a solar eclipse every time_step seconds
with solar_system_ephemeris.set('jpl'):
    for t in np.arange(start_time.unix, end_time.unix, time_step):
        time = Time(t, format='unix')
        moon = get_body('moon', time, myLocation)
        sun = get_body('sun', time, myLocation)
        sun_coord = SkyCoord(sun.ra, sun.dec, sun.distance, frame='icrs')
        moon_coord = SkyCoord(moon.ra, moon.dec, moon.distance, frame='icrs')
        # check if the angular separation between the moon and sun is close to zero
        angular_separation = moon_coord.separation(sun_coord)
        if angular_separation < 0.6 * u.deg:  # elongation where the partial eclipse begins
            eclipse_times.append(time)

# print the times of the next solar eclipse
if len(eclipse_times) > 0:
    print("The next solar eclipse is at: ", eclipse_times[0].iso)
else:
    print("No solar eclipses found in the specified time range.")
I think your approach is really good! If you want to increase the accuracy of your start-time prediction without using a lot more computational power, you can use scipy.optimize.root_scalar to refine the start time you found.
In my solution below, I've defined a function called distance_contact() whose root represents the start of the eclipse. This function is zero if the Sun and Moon are barely touching, positive if they are separated, and negative if they are overlapping. I then define a grid of times with a timestep of 1 hour, similar to your code, and pass it into this function to search for eclipses. The code then finds the first time at which distance_contact is negative and uses that time and the timestep before it as the search bracket for scipy.optimize.root_scalar.
Also, instead of using 0.6 * u.deg as the separation distance for an eclipse to occur, I've calculated the angular radii of the Sun and Moon for the time argument to distance_contact to make the prediction as accurate as possible.
import numpy as np
import scipy.optimize
import astropy.units as u
import astropy.time
import astropy.constants
import astropy.coordinates

def distance_contact(
        location: astropy.coordinates.EarthLocation,
        time: astropy.time.Time,
        eclipse_type: str,
) -> u.Quantity:

    radius_sun = astropy.constants.R_sun
    radius_moon = 1737.4 * u.km

    coordinate_sun = astropy.coordinates.get_sun(time)
    coordinate_moon = astropy.coordinates.get_moon(time)

    frame_local = astropy.coordinates.AltAz(obstime=time, location=location)

    alt_az_sun = coordinate_sun.transform_to(frame_local)
    alt_az_moon = coordinate_moon.transform_to(frame_local)

    angular_radius_sun = np.arctan2(radius_sun, alt_az_sun.distance).to(u.deg)
    angular_radius_moon = np.arctan2(radius_moon, alt_az_moon.distance).to(u.deg)

    if eclipse_type == 'total':
        separation_max = angular_radius_moon - angular_radius_sun
    elif eclipse_type == 'partial':
        separation_max = angular_radius_moon + angular_radius_sun
    else:
        raise ValueError("Unknown eclipse type")

    return (alt_az_moon.separation(alt_az_sun).deg * u.deg) - separation_max

def calc_time_start(
        location: astropy.coordinates.EarthLocation,
        time_search_start: astropy.time.Time,
        time_search_stop: astropy.time.Time,
        eclipse_type: str = 'partial'
) -> astropy.time.Time:

    astropy.coordinates.solar_system_ephemeris.set("de430")

    # If we're only looking for a partial eclipse, we can accept a coarser search grid
    if eclipse_type == "partial":
        step = 1 * u.hr
    elif eclipse_type == "total":
        step = 1 * u.min
    else:
        raise ValueError("Unknown eclipse type")

    # Define a grid of times to search for eclipses
    time = astropy.time.Time(np.arange(time_search_start, time_search_stop, step=step))

    # Find the times that are during an eclipse
    mask_eclipse = distance_contact(location=location, time=time, eclipse_type=eclipse_type) < 0

    # Find the index of the first time that an eclipse is occurring
    index_start = np.argmax(mask_eclipse)

    # Search around that time to find when the eclipse actually starts
    time_eclipse_start = scipy.optimize.root_scalar(
        f=lambda t: distance_contact(location, astropy.time.Time(t, format="unix"), eclipse_type=eclipse_type).value,
        bracket=[time[index_start - 1].unix, time[index_start].unix],
    ).root
    time_eclipse_start = astropy.time.Time(time_eclipse_start, format="unix")

    return time_eclipse_start

def test_calc_time_start():
    location = astropy.coordinates.EarthLocation(lat=26 * u.deg, lon=-80 * u.deg, height=0 * u.m)
    eclipse_type = 'partial'
    time_start = calc_time_start(
        location=location,
        time_search_start=astropy.time.Time.now(),
        time_search_stop=astropy.time.Time.now() + 0.9 * u.yr,
        eclipse_type=eclipse_type,
    )
    print(time_start.isot)
which outputs:
2023-10-14T15:57:38.068

How globally accurate is Python's time.time function? [duplicate]

Is there a way to measure time with high precision in Python --- more precise than one second? I doubt that there is a cross-platform way of doing that; I'm interested in high-precision time on Unix, particularly Solaris running on a Sun SPARC machine.
timeit seems to be capable of high-precision time measurement, but rather than measure how long a code snippet takes, I'd like to directly access the time values.
The standard time.time() function provides sub-second precision, though that precision varies by platform. On Linux and Mac the precision is about +- 1 microsecond (0.001 milliseconds). Python on Windows sees about +- 16 milliseconds of precision, because the system clock is only updated on timer interrupts. The timeit module can provide higher resolution if you're measuring execution time.
>>> import time
>>> time.time() #return seconds from epoch
1261367718.971009
Python 3.7 introduces new functions to the time module that provide higher resolution:
>>> import time
>>> time.time_ns()
1530228533161016309
>>> time.time_ns() / (10 ** 9) # convert to floating-point seconds
1530228544.0792289
Python tries hard to use the most precise time function for your platform to implement time.time():
/* Implement floattime() for various platforms */
static double
floattime(void)
{
    /* There are three ways to get the time:
       (1) gettimeofday() -- resolution in microseconds
       (2) ftime() -- resolution in milliseconds
       (3) time() -- resolution in seconds
       In all cases the return value is a float in seconds.
       Since on some systems (e.g. SCO ODT 3.0) gettimeofday() may
       fail, so we fall back on ftime() or time().
       Note: clock resolution does not imply clock accuracy! */
#ifdef HAVE_GETTIMEOFDAY
    {
        struct timeval t;
#ifdef GETTIMEOFDAY_NO_TZ
        if (gettimeofday(&t) == 0)
            return (double)t.tv_sec + t.tv_usec*0.000001;
#else /* !GETTIMEOFDAY_NO_TZ */
        if (gettimeofday(&t, (struct timezone *)NULL) == 0)
            return (double)t.tv_sec + t.tv_usec*0.000001;
#endif /* !GETTIMEOFDAY_NO_TZ */
    }
#endif /* !HAVE_GETTIMEOFDAY */
    {
#if defined(HAVE_FTIME)
        struct timeb t;
        ftime(&t);
        return (double)t.time + (double)t.millitm * (double)0.001;
#else /* !HAVE_FTIME */
        time_t secs;
        time(&secs);
        return (double)secs;
#endif /* !HAVE_FTIME */
    }
}
( from http://svn.python.org/view/python/trunk/Modules/timemodule.c?revision=81756&view=markup )
David's post was attempting to show what the clock resolution is on Windows. I was confused by his output, so I wrote some code that shows that time.time() on my Windows 8 x64 laptop has a resolution of 1 msec:
import time

# measure the smallest time delta by spinning until the time changes
def measure():
    t0 = time.time()
    t1 = t0
    while t1 == t0:
        t1 = time.time()
    return (t0, t1, t1-t0)

samples = [measure() for i in range(10)]
for s in samples:
    print s
Which outputs:
(1390455900.085, 1390455900.086, 0.0009999275207519531)
(1390455900.086, 1390455900.087, 0.0009999275207519531)
(1390455900.087, 1390455900.088, 0.0010001659393310547)
(1390455900.088, 1390455900.089, 0.0009999275207519531)
(1390455900.089, 1390455900.09, 0.0009999275207519531)
(1390455900.09, 1390455900.091, 0.0010001659393310547)
(1390455900.091, 1390455900.092, 0.0009999275207519531)
(1390455900.092, 1390455900.093, 0.0009999275207519531)
(1390455900.093, 1390455900.094, 0.0010001659393310547)
(1390455900.094, 1390455900.095, 0.0009999275207519531)
And a way to do a 1000 sample average of the delta:
reduce( lambda a,b:a+b, [measure()[2] for i in range(1000)], 0.0) / 1000.0
Which output on two consecutive runs:
0.001
0.0010009999275207519
So time.time() on my Windows 8 x64 has a resolution of 1 msec.
A similar run on time.clock() returns a resolution of 0.4 microseconds:
def measure_clock():
    t0 = time.clock()
    t1 = time.clock()
    while t1 == t0:
        t1 = time.clock()
    return (t0, t1, t1-t0)

reduce( lambda a,b:a+b, [measure_clock()[2] for i in range(1000000)] )/1000000.0
Returns:
4.3571334791658954e-07
Which is ~0.4e-06
An interesting thing about time.clock() is that it returns the time since the method was first called, so if you wanted microsecond resolution wall time you could do something like this:
class HighPrecisionWallTime():
    def __init__(self,):
        self._wall_time_0 = time.time()
        self._clock_0 = time.clock()

    def sample(self,):
        dc = time.clock()-self._clock_0
        return self._wall_time_0 + dc
(which would probably drift after a while, but you could correct this occasionally, for example by resynchronizing whenever dc > 3600 to correct it every hour)
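A sketch of that periodic correction, keeping the Python 2-era time.clock() of the snippet above (resync_every is my own name; on modern Python you would build this on time.perf_counter() instead):
import time

class HighPrecisionWallTimeResync():
    def __init__(self, resync_every=3600.0):
        self._resync_every = resync_every
        self._resync()

    def _resync(self):
        # re-anchor the wall clock so accumulated clock drift is discarded
        self._wall_time_0 = time.time()
        self._clock_0 = time.clock()

    def sample(self):
        dc = time.clock() - self._clock_0
        if dc > self._resync_every:
            self._resync()
            dc = time.clock() - self._clock_0
        return self._wall_time_0 + dc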
If Python 3 is an option, you have two choices:
time.perf_counter, which always uses the most accurate clock on your platform. It does include time spent outside of the process (e.g., during sleep).
time.process_time, which returns the CPU time. It does NOT include time spent outside of the process.
The difference between the two can be shown with:
from time import (
process_time,
perf_counter,
sleep,
)
print(process_time())
sleep(1)
print(process_time())
print(perf_counter())
sleep(1)
print(perf_counter())
Which outputs:
0.03125
0.03125
2.560001310720671e-07
1.0005455362793145
You can also use time.clock(). It counts the time used by the process on Unix and the time since the first call to it on Windows. It's more precise than time.time() and is the function usually used to measure performance.
Just call
import time
t_ = time.clock()
#Your code here
print 'Time in function', time.clock() - t_
EDIT: Oops, I misread the question: you want to know the exact time, not the time spent...
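For readers on current Python: time.clock() was deprecated in 3.3 and removed in 3.8. A rough equivalent of the snippet above using time.perf_counter() (which later answers also mention) would be:
import time

t_ = time.perf_counter()
# Your code here
print('Time in function', time.perf_counter() - t_)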
Python 3.7 introduces 6 new time functions with nanosecond resolution, for example instead of time.time() you can use time.time_ns() to avoid floating point imprecision issues:
import time
print(time.time())
# 1522915698.3436284
print(time.time_ns())
# 1522915698343660458
These 6 functions are described in PEP 564:
time.clock_gettime_ns(clock_id)
time.clock_settime_ns(clock_id, time:int)
time.monotonic_ns()
time.perf_counter_ns()
time.process_time_ns()
time.time_ns()
These functions are similar to the version without the _ns suffix, but
return a number of nanoseconds as a Python int.
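For interval measurement specifically, a minimal sketch using one of these counters (my own example, not from PEP 564): keeping the values as integer nanoseconds avoids floating-point rounding until the final conversion.
import time

start = time.perf_counter_ns()
time.sleep(0.25)
elapsed_ns = time.perf_counter_ns() - start
print(elapsed_ns / 1e9, "seconds")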
time.clock() has 13 decimal places on Windows but only two on Linux.
time.time() has 17 decimal places on Linux and 16 on Windows, but the actual precision is different.
I don't agree with the documentation that time.clock() should be used for benchmarking on Unix/Linux: it is not precise enough there, so which timer to use depends on the operating system.
On Linux, the time resolution is high in time.time():
>>> time.time(), time.time()
(1281384913.4374139, 1281384913.4374161)
On Windows, however, the time function seems to keep returning the value from the last clock update:
>>> time.time()-int(time.time()), time.time()-int(time.time()), time.time()-time.time()
(0.9570000171661377, 0.9570000171661377, 0.0)
Even if I write the calls on different lines on Windows, it still returns the same value, so the real precision is lower.
So in serious measurements a platform check (import platform, platform.system()) has to be done in order to determine whether to use time.clock() or time.time().
(Tested on Windows 7 and Ubuntu 9.10 with python 2.6 and 3.1)
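A minimal sketch of that platform check (this reflects the Python 2 era the answer was written in; on Python 3.3+ time.perf_counter() makes the check unnecessary):
import platform
import time

if platform.system() == 'Windows':
    precise_timer = time.clock   # higher resolution on Windows at the time
else:
    precise_timer = time.time    # higher resolution on Linux

t0 = precise_timer()
# ... code to measure ...
print(precise_timer() - t0)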
The original question specifically asked about Unix, but multiple answers have touched on Windows, and as a result there is misleading information about Windows here. The default timer resolution on Windows is 15.6 ms, as you can verify here.
Using a slightly modified script from cod3monk3y, I can show that the Windows timer resolution is ~15 milliseconds by default. I'm using a tool available here to modify the resolution.
Script:
import time

# measure the smallest time delta by spinning until the time changes
def measure():
    t0 = time.time()
    t1 = t0
    while t1 == t0:
        t1 = time.time()
    return t1-t0

samples = [measure() for i in range(30)]
for s in samples:
    print(f'time delta: {s:.4f} seconds')
These results were gathered on windows 10 pro 64-bit running python 3.7 64-bit.
The comment left by tiho on Mar 27 '14 at 17:21 deserves to be its own answer:
In order to avoid platform-specific code, use timeit.default_timer()
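For example (timeit.default_timer is an alias for time.perf_counter on Python 3.3+, and picks the best available clock on older versions):
import timeit

start = timeit.default_timer()
# ... code to measure ...
elapsed = timeit.default_timer() - start
print(elapsed)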
I observed that the resolution of time.time() is different between Windows 10 Professional and Education versions.
On a Windows 10 Professional machine, the resolution is 1 ms.
On a Windows 10 Education machine, the resolution is 16 ms.
Fortunately, there's a tool that increases Python's time resolution in Windows:
https://vvvv.org/contribution/windows-system-timer-tool
With this tool, I was able to achieve 1 ms resolution regardless of the Windows version. You will need to keep it running while executing your Python code.
For those stuck on Windows (version >= Server 2012 or Win 8) and Python 2.7:
import ctypes

class FILETIME(ctypes.Structure):
    _fields_ = [("dwLowDateTime", ctypes.c_uint),
                ("dwHighDateTime", ctypes.c_uint)]

def time():
    """Accurate version of time.time() for Windows; returns UTC time in seconds since 01/01/1601.
    """
    file_time = FILETIME()
    ctypes.windll.kernel32.GetSystemTimePreciseAsFileTime(ctypes.byref(file_time))
    return (file_time.dwLowDateTime + (file_time.dwHighDateTime << 32)) / 1.0e7
GetSystemTimePreciseAsFileTime function
On the same Win10 system, using two distinct measurement approaches, there appears to be a difference of roughly 500 microseconds between them (see the means below). If you care about sub-millisecond precision, check my code below.
The modifications of the code are based on code from users cod3monk3y and Kevin S.
OS: python 3.7.3 (default, date, time) [MSC v.1915 64 bit (AMD64)]
import sys
import time

def measure1(mean):
    for i in range(1, my_range+1):
        x = time.time()
        td = x - samples1[i-1][2]
        if i-1 == 0:
            td = 0
        td = f'{td:.6f}'
        samples1.append((i, td, x))
        mean += float(td)
        print(mean)
        sys.stdout.flush()
        time.sleep(0.001)
    mean = mean/my_range
    return mean

def measure2(nr):
    t0 = time.time()
    t1 = t0
    while t1 == t0:
        t1 = time.time()
    td = t1-t0
    td = f'{td:.6f}'
    return (nr, td, t1, t0)

samples1 = [(0, 0, 0)]
my_range = 10
mean1 = 0.0
mean2 = 0.0

mean1 = measure1(mean1)
for i in samples1:
    print(i)
print('...\n\n')

samples2 = [measure2(i) for i in range(11)]
for s in samples2:
    #print(f'time delta: {s:.4f} seconds')
    mean2 += float(s[1])
    print(s)
mean2 = mean2/my_range

print('\nMean1 : ' f'{mean1:.6f}')
print('Mean2 : ' f'{mean2:.6f}')
The measure1 results:
nr, td, t0
(0, 0, 0)
(1, '0.000000', 1562929696.617988)
(2, '0.002000', 1562929696.6199884)
(3, '0.001001', 1562929696.620989)
(4, '0.001001', 1562929696.62199)
(5, '0.001001', 1562929696.6229906)
(6, '0.001001', 1562929696.6239917)
(7, '0.001001', 1562929696.6249924)
(8, '0.001000', 1562929696.6259928)
(9, '0.001001', 1562929696.6269937)
(10, '0.001001', 1562929696.6279945)
...
The measure2 results:
nr, td , t1, t0
(0, '0.000500', 1562929696.6294951, 1562929696.6289947)
(1, '0.000501', 1562929696.6299958, 1562929696.6294951)
(2, '0.000500', 1562929696.6304958, 1562929696.6299958)
(3, '0.000500', 1562929696.6309962, 1562929696.6304958)
(4, '0.000500', 1562929696.6314962, 1562929696.6309962)
(5, '0.000500', 1562929696.6319966, 1562929696.6314962)
(6, '0.000500', 1562929696.632497, 1562929696.6319966)
(7, '0.000500', 1562929696.6329975, 1562929696.632497)
(8, '0.000500', 1562929696.633498, 1562929696.6329975)
(9, '0.000500', 1562929696.6339984, 1562929696.633498)
(10, '0.000500', 1562929696.6344984, 1562929696.6339984)
End result:
Mean1 : 0.001001 # (measure1 function)
Mean2 : 0.000550 # (measure2 function)
Here is a Python 3 solution for Windows building upon the answer posted above by CyberSnoopy (using GetSystemTimePreciseAsFileTime). We borrow some code from jfs's answer to "Python datetime.utcnow() returning incorrect datetime" and get a precise timestamp (Unix time) in microseconds.
#! python3
import ctypes.wintypes

def utcnow_microseconds():
    system_time = ctypes.wintypes.FILETIME()
    # system call used by time.time()
    #ctypes.windll.kernel32.GetSystemTimeAsFileTime(ctypes.byref(system_time))
    # getting high precision:
    ctypes.windll.kernel32.GetSystemTimePreciseAsFileTime(ctypes.byref(system_time))
    large = (system_time.dwHighDateTime << 32) + system_time.dwLowDateTime
    return large // 10 - 11644473600000000

for ii in range(5):
    print(utcnow_microseconds()*1e-6)
References
https://learn.microsoft.com/en-us/windows/win32/sysinfo/time-functions
https://learn.microsoft.com/en-us/windows/win32/api/sysinfoapi/nf-sysinfoapi-getsystemtimepreciseasfiletime
https://support.microsoft.com/en-us/help/167296/how-to-convert-a-unix-time-t-to-a-win32-filetime-or-systemtime
1. Python 3.7 or later
If using Python 3.7 or later, use the modern, cross-platform time module functions such as time.monotonic_ns(), here: https://docs.python.org/3/library/time.html#time.monotonic_ns. It provides nanosecond-resolution timestamps.
import time

time_ns = time.monotonic_ns()
# or on Unix or Linux you can also use (clock_gettime_ns requires a clock id):
time_ns = time.clock_gettime_ns(time.CLOCK_MONOTONIC)
# or on Windows:
time_ns = time.perf_counter_ns()
# etc. etc. There are others. See the link above.
From my other answer from 2016, here: How can I get millisecond and microsecond-resolution timestamps in Python?:
You might also try time.clock_gettime_ns() on Unix or Linux systems. Based on its name, it appears to call the underlying clock_gettime() C function which I use in my nanos() function in C in my answer here and in my C Unix/Linux library here: timinglib.c.
2. Python 3.3 or later
On Windows, in Python 3.3 or later, you can use time.perf_counter(), as shown by #ereOn here. See: https://docs.python.org/3/library/time.html#time.perf_counter. This provides roughly a 0.5us-resolution timestamp, in floating point seconds. Ex:
import time
# For Python 3.3 or later
time_sec = time.perf_counter() # Windows only, I think
# or on Unix or Linux (I think only those)
time_sec = time.monotonic()
3. Pre-Python 3.3 (ex: Python 3.0, 3.1, 3.2), or later
Summary:
See my other answer from 2016 here for 0.5-us-resolution timestamps, or better, on Windows and Linux, and for versions of Python as old as 3.0, 3.1 or 3.2 even! We do this by calling C or C++ shared object libraries (.dll on Windows, or .so on Unix or Linux) using the ctypes module in Python.
I provide these functions:
millis()
micros()
delay()
delayMicroseconds()
Download GS_timing.py from my eRCaGuy_PyTime repo, then do:
import GS_timing
time_ms = GS_timing.millis()
time_us = GS_timing.micros()
GS_timing.delay(10) # delay 10 ms
GS_timing.delayMicroseconds(10000) # delay 10000 us
Details:
In 2016, I was working in Python 3.0 or 3.1, on an embedded project on a Raspberry Pi, which I also tested and ran frequently on Windows. I needed nanosecond resolution for some precise timing I was doing with ultrasonic sensors. The Python language at the time did not provide this resolution, and neither did any answer to this question, so I came up with this separate Q&A here: How can I get millisecond and microsecond-resolution timestamps in Python?. I stated in the question at the time:
I read other answers before asking this question, but they rely on the time module, which prior to Python 3.3 did NOT have any type of guaranteed resolution whatsoever. Its resolution is all over the place. The most upvoted answer here quotes a Windows resolution (using their answer) of 16 ms, which is 32000 times worse than my answer provided here (0.5 us resolution). Again, I needed 1 ms and 1 us (or similar) resolutions, not 16000 us resolution.
Zero, I repeat: zero answers here on 12 July 2016 had any resolution better than 16-ms for Windows in Python 3.1. So, I came up with this answer which has 0.5us or better resolution in pre-Python 3.3 in Windows and Linux. If you need something like that for an older version of Python, or if you just want to learn how to call C or C++ dynamic libraries in Python (.dll "dynamically linked library" files in Windows, or .so "shared object" library files in Unix or Linux) using the ctypes library, see my other answer here.
I created a tiny C-Extension that uses GetSystemTimePreciseAsFileTime to provide an accurate timestamp on Windows:
https://win-precise-time.readthedocs.io/en/latest/api.html#win_precise_time.time
Usage:
>>> import win_precise_time
>>> win_precise_time.time()
1654539449.4548845
def start(self):
    sec_arg = 10.0
    cptr = 0
    time_start = time.time()
    time_init = time.time()
    while True:
        cptr += 1
        time_start = time.time()
        time.sleep(((time_init + (sec_arg * cptr)) - time_start))
        # AND YOUR CODE .......
        t00 = threading.Thread(name='thread_request', target=self.send_request, args=([]))
        t00.start()

How to parallelize a nested for loop in python?

Ok, here is my problem: I have a nested for loop in my program which runs on a single core. Since the program spends over 99% of its run time in this nested for loop, I would like to parallelize it. Right now I have to wait 9 days for the computation to finish. I tried to implement a parallel for loop using the multiprocessing library, but I can only find very basic examples and cannot transfer them to my problem. Here are the nested loops with random data:
import numpy as np

dist_n = 100
nrm = np.linspace(1, 10, dist_n)
data_Y = 11000
data_I = 90000
I = np.random.randn(data_I, 1000)
Y = np.random.randn(data_Y, 1000)
dist = np.zeros((data_I, dist_n))

for t in range(data_Y):
    for i in range(data_I):
        d = np.abs(I[i] - Y[t])
        for p in range(dist_n):
            dist[i,p] = np.sum(d**nrm[p])/nrm[p]
print(dist)
Please give me some advise how to make it parallel.
There's a small overhead to starting a process (50 ms+ depending on data size), so it's generally best to multiprocess the largest block of code possible. From your comment it sounds like each loop over t is independent, so we should be free to parallelize this.
When Python creates a new process you get a copy of the main process, so all your global data is available, but when each process writes data, it writes to its own local copy. This means dist[i,p] won't be available to the main process unless you explicitly pass it back with a return (which has some overhead). In your situation, if each process writes dist[i,p] to a file then you should be fine; just don't try to write to the same file unless you implement some type of mutex access control.
#!/usr/bin/python
import time
import multiprocessing as mp
import numpy as np

data_Y = 11  # 11000
data_I = 90  # 90000
dist_n = 100
nrm = np.linspace(1, 10, dist_n)
I = np.random.randn(data_I, 1000)
Y = np.random.randn(data_Y, 1000)
dist = np.zeros((data_I, dist_n))

def worker(t):
    st = time.time()
    for i in range(data_I):
        d = np.abs(I[i] - Y[t])
        for p in range(dist_n):
            dist[i,p] = np.sum(d**nrm[p])/nrm[p]
    # Here - each worker opens a different file and writes to it
    print 'Worker time %4.3f mS' % (1000.*(time.time()-st))

if 1:  # single threaded
    st = time.time()
    for x in map(worker, range(data_Y)):
        pass
    print 'Single-process total time is %4.3f seconds' % (time.time()-st)
    print

if 1:  # multi-threaded
    pool = mp.Pool(28)  # try 2X num procs and inc/dec until cpu maxed
    st = time.time()
    for x in pool.imap_unordered(worker, range(data_Y)):
        pass
    print 'Multiprocess total time is %4.3f seconds' % (time.time()-st)
    print
If you re-increase the size of data_Y/data_I again, the speed-up should increase up to the theoretical limit.
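If you do need the results back in the parent process rather than written to files, a minimal variant of the worker above (worker_return is my own name; this assumes a fork-based start method such as the Linux default, so workers inherit I, Y and nrm) returns each block and lets the parent collect them:
import multiprocessing as mp
import numpy as np

data_Y = 11    # 11000
data_I = 90    # 90000
dist_n = 100
nrm = np.linspace(1, 10, dist_n)
I = np.random.randn(data_I, 1000)
Y = np.random.randn(data_Y, 1000)

def worker_return(t):
    # compute one (data_I, dist_n) block for a single t and return it
    out = np.zeros((data_I, dist_n))
    for i in range(data_I):
        d = np.abs(I[i] - Y[t])
        for p in range(dist_n):
            out[i, p] = np.sum(d**nrm[p]) / nrm[p]
    return t, out

if __name__ == '__main__':
    with mp.Pool() as pool:
        results = dict(pool.imap_unordered(worker_return, range(data_Y)))
    # results[t] now holds the block computed for that t, in the parent process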

CPU usage per thread

I need to get CPU % for each process thread.
So, I create simple script:
import psutil
from psutil import Process

p = psutil.Process(4499)
treads_list = p.get_threads()
for i in treads_list:
    o = i[0]
    th = psutil.Process(o)
    cpu_perc = th.get_cpu_percent(interval=1)
    print('PID %s use %% CPU = %s' % (o, cpu_perc))
Here is how TOP looks like for this process:
4942 teamcity 20 0 3288m 831m 3124 R 33.3 10.6 10303:37 java
32700 teamcity 20 0 3288m 831m 3124 S 5.9 10.6 18:49.99 java
5824 teamcity 20 0 3288m 831m 3124 S 5.9 10.6 1:57.90 java
4621 teamcity 20 0 3288m 831m 3124 S 3.0 10.6 1834:09 java
4622 teamcity 20 0 3288m 831m 3124 S 2.6 10.6 1844:15 java
The threads use 2.6-5.9% CPU, while the parent PID uses 33.3%.
But - here is script's result:
# ./psutil_threads.py
PID 10231 use % CPU = 60.9
PID 10681 use % CPU = 75.3
PID 11371 use % CPU = 69.9
PID 11860 use % CPU = 85.9
PID 12977 use % CPU = 56.0
PID 14114 use % CPU = 88.8
Looks like each thread 'eats' 56-88% CPU...
What am I missing here?
This should give you what you need and match top (adapt to your use case):
import psutil

def get_threads_cpu_percent(p, interval=0.1):
    total_percent = p.get_cpu_percent(interval)
    total_time = sum(p.cpu_times())
    return [total_percent * ((t.system_time + t.user_time)/total_time) for t in p.get_threads()]

# Example usage for process with process id 8008:
proc = psutil.Process(8008)
print(get_threads_cpu_percent(proc))
get_cpu_percent(interval=0.1)
Return a float representing the process CPU utilization as a percentage.
When interval is > 0.0 compares process times to system CPU times elapsed before and after the interval (blocking).
When interval is 0.0 or None compares process times to system CPU times elapsed since last call, returning immediately. In this case is recommended for accuracy that this function be called with at least 0.1 seconds between calls.
This sounds a lot like it gives you the share of non-idle CPU time (that is, process CPU time relative to system CPU time), while top shows the CPU time of the process in relation to real (wall-clock) time. This seems realistic given your numbers.
To get the values top would show you, multiplying each thread's CPU usage by the CPU usage of the core the thread runs on should work. psutil.cpu_percent should help with that. Note that you need to divide percentages by 100.0 (to get a fraction between 0 and 1) before multiplying them.
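A literal sketch of that suggestion, reusing get_threads_cpu_percent from above (psutil cannot tell us which core a thread last ran on, so this approximates with the average per-core usage; whether it matches top exactly depends on the psutil version and its normalization):
import psutil

def get_threads_cpu_percent_top_like(p, interval=0.1):
    per_thread = get_threads_cpu_percent(p, interval)
    # per-core usage over (roughly) the same interval, as percentages
    per_core = psutil.cpu_percent(interval=interval, percpu=True)
    avg_core_usage = sum(per_core) / len(per_core) / 100.0  # divide by 100.0 first
    return [share * avg_core_usage for share in per_thread]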
While Gabe's answer is great, note that newer psutil version requires the following updated syntax:
import psutil

def get_threads_cpu_percent(p, interval=0.1):
    total_percent = p.cpu_percent(interval)
    total_time = sum(p.cpu_times())
    return [total_percent * ((t.system_time + t.user_time)/total_time) for t in p.threads()]

# Example usage for process with process id 8008:
proc = psutil.Process(8008)
print(get_threads_cpu_percent(proc))
I made improvements to Florent Thiery's and Gabe's solutions, creating a little script you can use to monitor CPU usage (by thread) of any process:
python cpuusage.py <PID>
import psutil, sys, time, os

def clear():
    if os.name == "nt":
        _ = os.system("cls")
    else:
        _ = os.system("clear")

def get_threads_cpu_percent(p, interval=0.1):
    total_percent = p.cpu_percent(interval)
    total_time = sum(p.cpu_times())
    return [('%s %s %s' % (total_percent * ((t.system_time + t.user_time)/total_time), t.id, psutil.Process(t.id).name())) for t in p.threads()]

try:
    sys.argv[1]
except:
    sys.exit('Enter PID')

proc = psutil.Process(int(sys.argv[1]))

while True:
    clear()
    threads = get_threads_cpu_percent(proc)
    threads.sort(reverse=True)
    for line in threads:
        print(line)
    time.sleep(1)

Calculating user, nice, sys, idle, iowait, irq and sirq from /proc/stat

/proc/stat shows ticks for user, nice, sys, idle, iowait, irq and sirq like this:
cpu 6214713 286 1216407 121074379 260283 253506 197368 0 0 0
How can I calculate the individual utilizations (in %) for user, nice, etc. from these values? Like the values shown in 'top' or 'vmstat'.
This code calculates user utilization spread over all cores.
import os
import time
import multiprocessing

def main():
    jiffy = os.sysconf(os.sysconf_names['SC_CLK_TCK'])
    num_cpu = multiprocessing.cpu_count()

    stat_fd = open('/proc/stat')
    stat_buf = stat_fd.readlines()[0].split()
    user, nice, sys, idle, iowait, irq, sirq = ( float(stat_buf[1]), float(stat_buf[2]),
                                                 float(stat_buf[3]), float(stat_buf[4]),
                                                 float(stat_buf[5]), float(stat_buf[6]),
                                                 float(stat_buf[7]) )
    stat_fd.close()

    time.sleep(1)

    stat_fd = open('/proc/stat')
    stat_buf = stat_fd.readlines()[0].split()
    user_n, nice_n, sys_n, idle_n, iowait_n, irq_n, sirq_n = ( float(stat_buf[1]), float(stat_buf[2]),
                                                               float(stat_buf[3]), float(stat_buf[4]),
                                                               float(stat_buf[5]), float(stat_buf[6]),
                                                               float(stat_buf[7]) )
    stat_fd.close()

    print ((user_n - user) * 100 / jiffy) / num_cpu

if __name__ == '__main__':
    main()
From Documentation/filesystems/proc.txt:
(...) These numbers identify the amount of time the CPU has spent performing
different kinds of work. Time units are in USER_HZ (typically hundredths of a second).
So to figure out utilization in terms of percentages you need to:
Find out what USER_HZ is on the machine
Find out how long it's been since the system booted.
The second one is easy: there is a btime line in that same file which you can use for that. For USER_HZ, check out How to get number of mili seconds per jiffy.
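Putting those steps together, here is a minimal sketch (the field names and the one-second sampling interval are my own choices) that turns two /proc/stat samples into percentages the way top does, as the share of each field in the total tick delta. Note that because we take a ratio of deltas over the same interval, USER_HZ cancels out:
import time

FIELDS = ('user', 'nice', 'sys', 'idle', 'iowait', 'irq', 'softirq')

def read_cpu_ticks():
    # first line of /proc/stat: aggregate tick counters for all CPUs
    with open('/proc/stat') as f:
        parts = f.readline().split()
    return [int(v) for v in parts[1:1 + len(FIELDS)]]

def cpu_percentages(interval=1.0):
    before = read_cpu_ticks()
    time.sleep(interval)
    after = read_cpu_ticks()
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas) or 1  # avoid division by zero on very short intervals
    return {name: 100.0 * d / total for name, d in zip(FIELDS, deltas)}

if __name__ == '__main__':
    for name, pct in cpu_percentages().items():
        print('%-8s %5.1f%%' % (name, pct))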
