Delay resolved by parallel functions in Python

I know that similar questions may have been asked before, but I couldn't find a solution that fits my case. Apologies for the dumb question in advance.
I am reading two voltage values from a USB hub ( connected to two sensors ). The problem is that, the way my code does it, there is an approx. 0.8-second delay between them, so I can never have both values at the same point in time ( if I decrease either of the two time.sleep() calls, the value from the second def is not reported ). I was thinking that if both could run at the same time, maybe I could get values that belong to the exact same time point, not shifted in time. If you have any comments that can improve this code, I'd appreciate them.
Thank you for your comments in advance.
import sys
import time
import datetime
import matplotlib.pyplot as plt   # needed for plt.figure() below
from Phidget22.Devices.VoltageRatioInput import *
from Phidget22.PhidgetException import *
from Phidget22.Phidget import *
from Phidget22.Net import *

fig = plt.figure()
ax1 = fig.add_subplot(1, 1, 1)

try:
    ch = VoltageRatioInput()
except RuntimeError as e:
    print("Runtime Exception %s" % e.details)
    print("Press Enter to Exit...\n")
    readin = sys.stdin.read(1)
    exit(1)

a = []
b = []

try:
    start = time.time()
    while True:
        ### first sensor, on hub port 1
        def VoltageRatioChangeHandler(e, voltageRatio):
            a.append(voltageRatio)
        ch.setOnVoltageRatioChangeHandler(VoltageRatioChangeHandler)
        ch.setHubPort(1)
        ch.setIsHubPortDevice(1)
        ch.openWaitForAttachment(5000)
        if ch.getChannelSubclass() == ChannelSubclass.PHIDCHSUBCLASS_VOLTAGERATIOINPUT_BRIDGE:
            ch.setBridgeEnabled(1)
        time.sleep(0.3)
        ch.close()
        end1 = time.time()
        Elt1 = end1 - start
        print(Elt1)
        print(a)
        ### second sensor, on hub port 0
        def VoltageRatioChangeHandler(e, voltageRatio2):
            if voltageRatio2 is None:
                b.append(0)
            else:
                b.append(voltageRatio2)
        ch.setOnVoltageRatioChangeHandler(VoltageRatioChangeHandler)
        ch.setHubPort(0)
        ch.setIsHubPortDevice(0)
        ch.openWaitForAttachment(5000)
        if ch.getChannelSubclass() == ChannelSubclass.PHIDCHSUBCLASS_VOLTAGERATIOINPUT_BRIDGE:
            ch.setBridgeEnabled(1)
        time.sleep(0.4)
        ch.close()
        end = time.time()
        Elt = end - start
        print(Elt)
        print(b)
except KeyboardInterrupt:
    print("gracefully aborted")
    sys.exit()

Going parallel is easier said than done ( ... watch the [us] lost on the overheads ). Better not to move this way, but:
Long story short: the costs of going "parallel" are devastating for your use-case.
Also, reinventing the wheel is quite expensive, so let me offer you a concept which is almost free of charge and works like a charm.
Your measurements are so close to a multi-agent control-system that we can re-use a framework developed for exactly this reason -- the MVC ( yes, that venerable and great idea, originated in the famous nest of smart thinkers at the XEROX Palo Alto Research Centre ).
import Tkinter as tk # YESSSS! re-using a GUI-tool ( i.e. multi-agent by-design )
The key value is the freedom of design "under" the Controller-part of the concept, using all the exceptionally well polished built-in tools, without bothering with low-level details.
High-level idea ( a full sensor-network control-plan may go this way ):
Let the sensors get read as often as you need ( be it driven just by common sense, a smell of reason, or an indeed rigorous Nyquist-boundary from the stability theory of your experiment's control-loop ).
First, we need a way to passively read a value-pair, coherently read at the same time ( well, better within a common window-of-time, right? ):
SENSOR_A_Last_Voltage_Value = tk.DoubleVar()
SENSOR_B_Last_Voltage_Value = tk.DoubleVar()
SCHEDULED_EVENT_READ_A = tk.StringVar()
SCHEDULED_EVENT_READ_B = tk.StringVar()
SIGNAL_2_READ_Voltage_Value = tk.IntVar()
These are the MVC-Model-part smart-"registers", if you wish.
def aSensorREAD_A():
    #--------------------------------------------------
    # handle all the tricks to read a given sensor ONCE
    #--------------------------------------------------
    ...
    ch.setHubPort( 0 )
    ch.setIsHubPortDevice( 0 )
    ch.openWaitForAttachment( 5000 )
    ...
    a_just_read_value = ...
    #--------------------------------------------------
    # Let the MVC-framework store this value into MODEL
    #--------------------------------------------------
    SENSOR_A_Last_Voltage_Value.set( a_just_read_value )
    #--------------------------------------------------
    # schedule a ( self-operated ) "next" read ONCE
    #--------------------------------------------------
    SCHEDULED_EVENT_READ_A.set( root.after( 100, aSensorREAD_A ) )
    #                                repeat after 100 [ms]
    #
    # a cool way to actually command your Boss, isn't it?
    # + may, if at need,
    #        root.after_cancel( SCHEDULED_EVENT_READ_A.get() )
So we may consider each SensorREAD_*() to play the role of one such independent agent, responsible for doing the low-level job with the actual sensor readings.
For a passive value-consumer, there is just a pair of "intelligent" variables, guaranteed to always carry the last-read value:
print( "[A] {0: >16.3f}[mV]".format( SENSOR_A_Last_Voltage_Value.get() )
print( "[B] {0: >16.3f}[mV]".format( SENSOR_B_Last_Voltage_Value.get() )
For a trigger-expecting value-consumer, there is an additional tool that will inform any such trigger-expecting reader:
idTrA1 = SENSOR_A_Last_Voltage_Value.trace_variable( "w", aTriggeredFUN1ToCallOnA )
idTrA2 = SENSOR_A_Last_Voltage_Value.trace_variable( "w", aTriggeredFUN2ToCallOnA )
idTrA3 = SENSOR_A_Last_Voltage_Value.trace_variable( "w", aTriggeredFUN3ToCallOnA )
idTrB1 = SENSOR_B_Last_Voltage_Value.trace_variable( "w", aTriggeredFUN1ToCallOnB )
...
idTrB7 = SENSOR_B_Last_Voltage_Value.trace_variable( "w", aTriggeredFUN7ToCallOnB )
# as one may wish and need
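Each such aTriggeredFUN*() is an ordinary callable with the Tcl variable-trace signature ( name, index, access-mode ). A minimal sketch of one such trigger, its body being just an illustration of mine:
def aTriggeredFUN1ToCallOnA( p1_quasiNAME, p2_indexOrEmptyString, p3_accessMODE ):
    # fires upon each write ( "w" ) into SENSOR_A_Last_Voltage_Value
    print( "[A]-update:", SENSOR_A_Last_Voltage_Value.get() )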
Last, but not least, there might be another coherent-reading strategy:
SIGNAL_2_READ_Voltage_Value = tk.IntVar()                                      # MVC-Model "register"
idTrSIG2R_A = SIGNAL_2_READ_Voltage_Value.trace_variable( "w", aSensorREAD_A ) # MVC-Controller actor
idTrSIG2R_B = SIGNAL_2_READ_Voltage_Value.trace_variable( "w", aSensorREAD_B )
This makes an external trigger-to-read tool, which will actually help to "fire" both readings "at the same moment", just by touching:
SIGNAL_2_READ_Voltage_Value.set( 1 + SIGNAL_2_READ_Voltage_Value.get() )
Also some final steps for graceful termination are clear and honest:
finally:
    #---------------------------------------------------
    SIGNAL_2_READ_Voltage_Value.trace_vdelete( "w", idTrSIG2R_A )
    SIGNAL_2_READ_Voltage_Value.trace_vdelete( "w", idTrSIG2R_B )
    #---------------------------------------------------
    SENSOR_A_Last_Voltage_Value.trace_vdelete( "w", idTrA1 )
    SENSOR_A_Last_Voltage_Value.trace_vdelete( "w", idTrA2 )
    SENSOR_A_Last_Voltage_Value.trace_vdelete( "w", idTrA3 )
    #---------------------------------------------------
    SENSOR_B_Last_Voltage_Value.trace_vdelete( "w", idTrB1 )
    ...
    SENSOR_B_Last_Voltage_Value.trace_vdelete( "w", idTrB7 )
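Putting the pieces together, a minimal end-to-end sketch might look like this ( a mock-up of mine: the Phidget reads are replaced by random numbers, just to keep it self-contained and runnable; wiring in the real ch.openWaitForAttachment() reads goes as in the fragments above ):
import random
import Tkinter as tk

root = tk.Tk()
root.withdraw()                                    # no GUI window is needed here

SENSOR_A_Last_Voltage_Value = tk.DoubleVar()       # MVC-Model smart-"registers"
SENSOR_B_Last_Voltage_Value = tk.DoubleVar()
SIGNAL_2_READ_Voltage_Value = tk.IntVar()

def aSensorREAD_A( *args ):                        # mock read of sensor A
    SENSOR_A_Last_Voltage_Value.set( random.random() )

def aSensorREAD_B( *args ):                        # mock read of sensor B
    SENSOR_B_Last_Voltage_Value.set( random.random() )

idTrSIG2R_A = SIGNAL_2_READ_Voltage_Value.trace_variable( "w", aSensorREAD_A )
idTrSIG2R_B = SIGNAL_2_READ_Voltage_Value.trace_variable( "w", aSensorREAD_B )

def aPAIRED_READ_TICK():                           # "fire" both reads together
    SIGNAL_2_READ_Voltage_Value.set( 1 + SIGNAL_2_READ_Voltage_Value.get() )
    print "[A] %16.6f [B] %16.6f" % ( SENSOR_A_Last_Voltage_Value.get(),
                                      SENSOR_B_Last_Voltage_Value.get() )
    root.after( 500, aPAIRED_READ_TICK )           # repeat after 500 [ms]

root.after( 500, aPAIRED_READ_TICK )
root.mainloop()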
For more details and a mock-up case for inspiration, you may like to read this.

Related

Tkinter using mainloop and another loop

I'm doing a project where I read info from a socket and then intend to display it on a GUI using tkinter. The thing is, reading the info from the socket is a loop, and for the GUI I need another loop.
I'm pretty inexperienced with both Python and Tkinter, which probably explains my mistake here.
fd_dict is a dictionary with the properties and respective values of a car, e.g. gear, power, speed, etc. (the theme of my project).
The main problem is that I either get the values from the socket or I display the GUI, never both, obviously, since execution stays in whichever loop comes first.
while True:
    # UDP server part of the connection
    message, address = server_socket.recvfrom(1024)
    del address
    fdp = ForzaDataPacket(message)
    fdp.wall_clock = dt.datetime.now()
    # Get all properties
    properties = fdp.get_props()
    # Get parameters
    data = fdp.to_list(params)
    assert len(data) == len(properties)
    # Zip into a dictionary
    fd_dict = dict(zip(properties, data))
    # Add timestamp
    fd_dict['timestamp'] = str(fdp.wall_clock)
    # Print various testing values
    print('GEAR: ', fd_dict['gear'])
    print('SPEED (in KMH): ', fd_dict['speed'] * 3.6)             # speed in kph
    print('POWER (in HP): ', fd_dict['power'] * 0.0013596216173)  # power in hp
    #print('PERFORMANCE INDEX: ', fd_dict['car_performance_index'])
    print('\n')
The tkinter code:
window = Tk()
window.title('Forza Horizon 5 Telemetry')
window.geometry("1500x800")
window.configure(bg="#1a1a1a")
frame = Frame(window)
frame.pack()
label_gear = Label(text = '0')
label_gear.configure(bg="darkgrey")
label_gear.pack()
I read about using after() and using classes, but I've never used them, and can't figure out how to apply them here.
Thanks in advance.
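One way to apply after() here ( a sketch of mine, not from the original post: it assumes the server_socket, window and label_gear objects set up above, and makes the socket non-blocking so the GUI never stalls ):
server_socket.setblocking(False)        # recvfrom() must never freeze the GUI

def poll_socket():
    try:
        message, _ = server_socket.recvfrom(1024)
    except BlockingIOError:             # nothing arrived yet -> try again soon
        pass
    else:
        fdp = ForzaDataPacket(message)
        fd_dict = dict(zip(fdp.get_props(), fdp.to_list(params)))
        label_gear.configure(text=str(fd_dict['gear']))
    window.after(10, poll_socket)       # re-schedule: roughly 100 polls/second

poll_socket()
window.mainloop()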

Batch file does not run properly a loop in a python script

I have this script
from ib_insync import *
import pandas as pd
from tqdm import tqdm
util.startLoop()
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=1)
data = {'Name': ['AAPL', 'TSLA', 'GOOG', 'NVDA'], 'buy_amount': [10, 10, 10, 10]}
df = pd.DataFrame(data)
for i in tqdm(df.index):
    contract = Stock(df['Name'].iloc[i], exchange='SMART', currency='USD')
    marketOrder = MarketOrder('BUY', df['buy_amount'].iloc[i])
    marketOrder.tif = 'GTC'
    marketOrder = ib.placeOrder(contract, marketOrder)
I am using ib_insync to automate my trading, and I would like it to do everything automatically, so I tried using a bat file to run my scripts.
First, I can run this script without any problem in VS Code; it places all orders with the loop, without any problem. However, when I use the bat file or run the script outside VS Code (double-clicking the .py script in Windows Explorer), it does not place all orders, only the first one (i.e. AAPL) -- meaning the loop only works when I run it with VS Code. I do not use virtual environments, so that should not be the problem. I could not identify the cause of the problem; any help is very much appreciated.
As commented above:
"@user3666197 no I cannot see any exceptions. I tried using pause at the end of the bat file but the terminal closes after placing the first order. As far as I understand, the terminal should not close when I use pause at the end of a bat file. I have not tried to 'frame' my code, because I do not know how to do it yet. But that's gonna be the first thing that I learn now" – Bogdan Titomir
this provisional code may help trace the root-cause :
###################################### "fused"-section under review
try:
    ...code...
except aKnownAndExpectedExcREASON:
    ... a handled case ...
except:
    ... a not expected, yet happened EXC ...
    print( "EXC: this happened in ...code... Check why ... " )
finally:
    print( "INF: going to next section " )
######################################
This trivial "framing" can help you view, stop and diagnose all the details, when & where necessary.
...
#______________________________________ a risky step, so fused here
try:
    util.startLoop()
    print( "INF: ok... " )
except:
    print( "EXC: ..." )
    # pause
finally:
    pass

#______________________________________ a risky step, so fused here
try:
    ib = IB()
    ib.connect( '127.0.0.1',
                 7497,
                 clientId = 1 )
    print( "INF: ok... " )
except:
    print( "EXC: ..." )
    # pause
finally:
    pass

data = { 'Name':       ['AAPL', 'TSLA', 'GOOG', 'NVDA'],
         'buy_amount': [10, 10, 10, 10]
         }
df = pd.DataFrame(data)

for i in tqdm(df.index):
    contract = Stock( df['Name'].iloc[i],
                      exchange = 'SMART',
                      currency = 'USD'
                      )
    marketOrder = MarketOrder( 'BUY', df['buy_amount'].iloc[i] )
    marketOrder.tif = 'GTC'
    #__________________________________ a risky step, so fused here
    try:
        marketOrder = ib.placeOrder( contract, marketOrder )
        print( "INF: ok... ", i )
    except:
        print( "EXC: ..." )
        # pause
    finally:
        pass
Thanks, when I run your code in VS Code I get this: INF: ok... INF: ok... 100%|██████████| 4/4 [00:00<00:00, 1999.43it/s] INF: ok... 0 INF: ok... 1 INF: ok... 2 INF: ok... 3 -- meaning everything is ok. I can see the same results with my bat file. – Bogdan Titomir
Cool, that means you've sent all -4- ib.placeOrder()-calls without a problem, i.e. the AAPL, TSLA, GOOG and NVDA calls alike -- Q.E.D. Adding pauses and launching the same in an O/S-spawned python-interpreter ( perhaps printing also the version-details ) shall do the rest.
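For instance ( a sketch of mine, not from the original exchange ): a Python-level pause keeps the console window open even when the script is launched by a double-click or from a .bat file, and printing the interpreter version confirms which python the O/S actually spawned:
import sys
print( "INF: running under python", sys.version )   # which interpreter the O/S spawned
# ... the trading-script body goes here ...
input( "Press Enter to close this window ..." )     # a Python-level "pause"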

Python data generation script slows down with time

EDIT 1:
As fizzybear pointed out, it looks as though my memory usage is steadily increasing, but I can't say why; any ideas would be greatly appreciated.
I'm running a script which uses the staticfg library to generate a tonne of control flow graphs from python programs, approximately 150,000 programs. My code simply loops through every program's file location and generates a corresponding control flow graph.
From a frequently updated progress bar I can see that when the script begins running it easily generates around 1000 CFGs in a few seconds, but half an hour into running it can barely generate 100 CFGs within a minute.
In an attempt to speed things up I implemented multi-threading using python's multiprocessing map() function, but this doesn't help enough.
Furthermore, the cpu utilization (for all cores) shoots up to around 80-90% at the beginning of the script but drops to around 30-40% after running for a few minutes.
I've tried running it on Windows 10 and Ubuntu 18.04 and both slow down to an almost unbearable speed.
Code for building control-flow-graph
import os
from staticfg import CFGBuilder

def process_set():
    content = get_file_paths()
    iterate(build_cfg, ERROR_LOG_FILE, content)

def build_cfg(file_path):
    cfg = CFGBuilder().build_from_file(os.path.basename(file_path), os.path.join(DATA_PATH, file_path))
    cfg.build_visual(get_output_data_path(file_path), format='dot', calls=False, show=False)
    os.remove(get_output_data_path(file_path))  # Delete the other weird file created
Code for running the cfg building
from threading import Lock
from multiprocessing.dummy import Pool as ThreadPool
import multiprocessing

def iterate(task, error_file_path, content):
    progress_bar = ProgressBar(0, len(content), prefix='Progress:', suffix='Complete')
    progress_bar.print_progress_bar()
    error_file_lock = Lock()
    increment_work_lock = Lock()
    increment_errors_lock = Lock()

    def an_iteration(file):
        try:
            task(file)
        except Exception as e:
            with increment_errors_lock:
                progress_bar.increment_errors()
            with error_file_lock:
                handle_exception(error_file_path, file, 'Error in doing thing', e)
        finally:
            with increment_work_lock:
                progress_bar.increment_work()
                progress_bar.print_progress_bar()

    pool = ThreadPool(multiprocessing.cpu_count())
    pool.map(an_iteration, content)
Code for error handling
def handle_exception(error_log_file_path, file_path, message, stacktrace):
with open(error_log_file_path, 'a+', encoding='utf8') as f:
f.write('\r{},{},{},{}\n'.format(str(datetime.datetime.now()), message, file_path, stacktrace))
As far as I can tell (?) there is no object ever increasing in size and no lookup time increasing anywhere, so I'm a little lost as to why the script should be slowing down at all. Any help would be greatly appreciated.
I'm also pretty sure that it's not contention for the locks that is slowing the program down, as I was having this problem before I implemented multi-threading, and contention should be low anyway because the CFG building takes far more time than updating the progress bar. Furthermore, errors aren't that frequent, so writing to the error log doesn't happen often enough to cause much contention.
Cheers.
Edit 2:
Code for progress bar in case that affects the memory usage
class ProgressBar:
    def __init__(self, iteration, total, prefix='', suffix='', decimals=1, length=100, fill='█'):
        self.iteration = iteration
        self.total = total
        self.prefix = prefix
        self.suffix = suffix
        self.decimals = decimals
        self.length = length
        self.fill = fill
        self.errors = 0

    def increment_work(self):
        self.iteration += 1

    def increment_errors(self):
        self.errors += 1

    def print_progress_bar(self):
        percent = ("{0:." + str(self.decimals) + "f}").format(100 * (self.iteration / float(self.total)))
        filled_length = int(self.length * self.iteration // self.total)
        bar = self.fill * filled_length + '-' * (self.length - filled_length)
        print('%s |%s| %s%% (%s/%s) %s, %s %s' % (self.prefix, bar, percent, self.iteration, self.total, self.suffix, str(self.errors), 'errors'), end='\r')
        # Print New Line on Complete
        if self.iteration == self.total:
            print()
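One standard remedy for this kind of gradual slowdown, where memory leaks somewhere inside a third-party library, is to replace the thread pool with a real process pool using maxtasksperchild, so every worker process is recycled after a bounded number of tasks and its memory is returned to the OS. A sketch ( my suggestion, reusing build_cfg, get_file_paths, handle_exception and ERROR_LOG_FILE from the snippets above ):
import multiprocessing

def safe_task(file_path):
    # keep per-file failures inside the worker, as an_iteration() did
    try:
        build_cfg(file_path)
        return (file_path, None)
    except Exception as e:
        return (file_path, e)

def iterate_in_processes(content):
    # maxtasksperchild=100: each worker is replaced after 100 files, so any
    # memory leaked per file is reclaimed and throughput stays flat
    with multiprocessing.Pool(processes=multiprocessing.cpu_count(),
                              maxtasksperchild=100) as pool:
        for file_path, error in pool.imap_unordered(safe_task, content):
            if error is not None:
                handle_exception(ERROR_LOG_FILE, file_path, 'Error in doing thing', error)

if __name__ == '__main__':
    iterate_in_processes(get_file_paths())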

How to run a zmq and other while True: at the same time independently?

I have this code for my server:
import time
import zmq
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5000")
while True:
    message = socket.recv()
    socket.send(b"World")
    print "sent"

while True:
    print "done."
I have a separate client script that sends a message through zmq to this one whenever I send a message. On the server (this code), if I only have the first while True:, it prints "sent" every time I send a message, and if I only have the second while True:, it prints "done." continuously. But if I put both, it never prints "done." ( or, if I switch their order and put both, it never prints "sent" when I send a message ).
As output, I want it to continuously print "done.", and also print "sent" whenever I get a message. So something like this:
done.
done.
done.
done.
done.
sent
done.
lots more done....
Basically I want both loops to run continuously and completely independently of each other.
N.B. I have tried using multiprocessing ( such as in the 3rd answer here: How do I run two python loops concurrently? ), but couldn't get that to work either. I tried it as below:
import time
import zmq
from multiprocessing import Process

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5000")
i = time.time()

def zmq_loop():
    while True:
        message = socket.recv()
        socket.send(b"World")
        print "sent"

def done_loop():
    global i
    while True:
        i2 = time.time() - i
        if i2 % 2 == 0:
            print "done."

if __name__ == "__main__":
    Process(target=zmq_loop).start()
    Process(target=done_loop).start()
As was explained yesterday in this, [CONCURRENT]-processing is technically achievable in several different ways in python, each at a vastly different cost.
Today, let's have a look at another approach - using a framework that was developed with the very same motivation - having the natural [CONCURRENT]-scheduling-already-in-DNA - originally intended for easy composing and smooth operating of complex GUI Man-Machine-Interactions ( MMI ).
This framework may and will help you achieve a lot, right due to the fact that it has evolved with a lot of care for exactly these scenarios, where more than one thing has to be monitored at once:
Welcome to the Tkinter GUI framework, which we will use just for its smart, concurrently operated event handlers.
I have been many times positively surprised how easy it is to build a quite complex composition of Finite-State-Automata ( FSA ) that smoothly cooperate together ( a coalition of FSA-s ), using tools for both independent, isolated operations ( the inner-logic of each FSA ), yet being easily able to propagate signals / messages from one FSA towards another(s). Yes, they can actually operate in a 1-event-source-FSA : N-consumer(s)-FSA(s) fashion.
There you can create ( with ZeroMQ always in a non-blocking manner ) handlers:
-- one "sniffer", for regular checking ( best by a timeout-controlled .poll() method, to { NACK | POSACK } anything to read ),
-- another "reader", for the actual reading from the ZeroMQ Socket() instance ( triggered by the POSACK-signal from the "sniffer", as mentioned previously ),
-- another "do-a-work-er", for any other task one may wish to operate.
The Tkinter .mainloop() method is the global controller, which orchestrates the dirty job for you.
The high-level concept of the Tkinter-mediated agents' co-processing starts in main() as simply as:
def main():
    root = Tk()                  # INIT a Tk() instance
    root.lift()                  # + make it visible
    app = myApplication( root )  # SET ( [HERE] are all your app gems )
    root.mainloop()              # START the core event-handling orchestrator
Tkinter might look like a garage full of various GUI-gadgets that have nothing to do with your problem, but don't panic.
Tkinter has incredibly well created tools right for your needs.
using control variables, which serve as a means for storing, signalling and propagating changes of values among otherwise independent and explicitly un-coordinated actors ( ref. the "sniffer", "reader", "worker" and any others ... )
tools for handling events - real, abstract and even virtual
tools for handling timed-operations - in the form of an almost lightweight real-time system, using setups with the preferred timing of what shall happen next, which the .mainloop() is tasked to bear in mind:
either an explicitly specified timing .after( thisAmountOfMILLISECONDS, callThisFUNCTION ), or a liberal .after_idle( callAlwaysThatFUNCTION ).
One indeed does not need anything more to go and solve your task, using these already perfected tools.
So all the rest is just a matter of your creativity in how to re-use these smart tools.
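Applied to your ZeroMQ case, a minimal sketch might look like this ( my sketch, not a verbatim recipe: it re-uses the REP socket from your server code, and the timeout-controlled .poll( 0 ) never blocks the .mainloop() ):
import Tkinter as tk                               # Python 2, as in the demo below
import zmq

context = zmq.Context()
socket  = context.socket( zmq.REP )
socket.bind( "tcp://*:5000" )

root = tk.Tk()

def aSNIFFER():                                    # "sniffer" + "reader" in one
    if socket.poll( 0 ):                           # 0 [ms] timeout -> never blocks
        message = socket.recv()                    # POSACK'd, safe to read now
        socket.send( b"World" )
        print "sent"
    root.after( 10, aSNIFFER )                     # re-schedule itself

def aWORKER():                                     # an independent "do-a-work-er"
    print "done."
    root.after( 500, aWORKER )                     # re-schedule itself

root.after(  10, aSNIFFER )
root.after( 500, aWORKER )
root.mainloop()                                    # orchestrates both handlers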
A small demo, how to make two ( 3 ! ) things happen "at the same time, independently":
Let's set up the case where one wants to process ( here demonstrated just by a printing ) several independent processes, all at the same time.
>>> #-----------------------------------------------FAST MOCK-UP EXAMPLE
>>> import Tkinter as tk # python27
>>> root = tk.Tk()
>>> root.protocol( "WM_DELETE_WINDOW", root.quit() )
'3071841620Ldestroy'
>>> #------VAR-------------------------------------IMPORTANT TOOL:
>>> aStringVAR = tk.StringVar()
>>> aStringVAR.set( "_init_" )
>>> def aKeyPressEventHANDLER( anEvent ): # SIMPLE EventHANDLER,
# # also ignites remote responsive processes
... aTemplate = "[KEY]::{3: >10s}\n<s/n>::{0: >10d}\n(=#=)::{1: > 10d}\n^from::({5:})"
... sString = aTemplate.format( anEvent.serial,
... anEvent.time,
... anEvent.char,
... anEvent.keysym,
... anEvent.keysym_num,
... str(anEvent.widget )
... )
... aStringVAR.set( sString )
... print sString
...
>>> #----VAR_TRACER----------------------------------------[#1]
>>> def aVAR_TRACER_A( p1_quasiNAME, p2_indexOrEmptyString, p3_accessMODE ):
... print "aVAR_TRACER_A()-called::(on){0:} traced_event({1:})".format( str( p1_quasiNAME ), str( p3_accessMODE ) )
... # ###############=[A]#######
... # < do some task =[A] here >
... # ###############=[A]#######
... print "aVAR_TRACER_A() [{0:}]".format( str( root.globalgetvar( p1_quasiNAME ) ).replace( " ", "" ) )
...
>>> #----VAR_TRACER----------------------------------------[#2]
>>> def aVAR_TRACER_B( p1_quasiNAME, p2_indexOrEmptyString, p3_accessMODE ):
... print "aVAR_TRACER_B()-called::(on){0:} traced_event({1:})".format( str( p1_quasiNAME ), str( p3_accessMODE ) )
... # ###############=[B]#######
... # < do some task =[B] here >
... # ###############=[B]######
... print "aVAR_TRACER_B() [{0:}]".format( str( root.globalgetvar( p1_quasiNAME ) ).replace( " ", "" ) )
...
>>> #-----VAR_A_tracer_ID------------------------------"w" EVENT SNIFFER
>>> aTraceVAR_A_tracer_ID = aStringVAR.trace_variable( "w", aVAR_TRACER_A )
>>> #-----VAR_B_tracer_ID------------------------------"w" EVENT SNIFFER
>>> aTraceVAR_B_tracer_ID = aStringVAR.trace_variable( "w", aVAR_TRACER_B )
>>> #-----------tracer_ID values for ev. theirs resp. de-activation:
>>> aTraceVAR_A_tracer_ID
'3071960124LaVAR_TRACER_A'
>>> aTraceVAR_B_tracer_ID
'3071961284LaVAR_TRACER_B'
>>> #---.bind()-----------------------EventHANDLER with a system event <KeyPress>
>>> root.bind( "<KeyPress>", aKeyPressEventHANDLER ) # <-since here LIVE (!)
'3071841740LaKeyPressEventHANDLER'
>>> #------------------------------------------------^^^ since here, it went live
>>> # 1: having put a mouse on tk-window,
>>> # 2: set-focus by click
>>> # 3: started keys:
>>> # ( "a",
>>> # 6-on-<NumKeyPad>,
>>> # *-on-<NumKeyPad>
>>> # this happened "independently, at the same time" ( see time (=#=):: values )
>>>
aVAR_TRACER_B()-called::(on)PY_VAR0 traced_event(w)
aVAR_TRACER_B() [[KEY]::a<s/n>::832(=#=)::88486992^from::(.)]
aVAR_TRACER_A()-called::(on)PY_VAR0 traced_event(w)
aVAR_TRACER_A() [[KEY]::a<s/n>::832(=#=)::88486992^from::(.)]
[KEY]:: a
<s/n>:: 832
(=#=):: 88486992
^from::(.)
aVAR_TRACER_B()-called::(on)PY_VAR0 traced_event(w)
aVAR_TRACER_B() [[KEY]::KP_6<s/n>::832(=#=)::88509107^from::(.)]
aVAR_TRACER_A()-called::(on)PY_VAR0 traced_event(w)
aVAR_TRACER_A() [[KEY]::KP_6<s/n>::832(=#=)::88509107^from::(.)]
[KEY]:: KP_6
<s/n>:: 832
(=#=):: 88509107
^from::(.)
aVAR_TRACER_B()-called::(on)PY_VAR0 traced_event(w)
aVAR_TRACER_B() [[KEY]::KP_Multiply<s/n>::832(=#=)::88541180^from::(.)]
aVAR_TRACER_A()-called::(on)PY_VAR0 traced_event(w)
aVAR_TRACER_A() [[KEY]::KP_Multiply<s/n>::832(=#=)::88541180^from::(.)]
[KEY]::KP_Multiply
<s/n>:: 832
(=#=):: 88541180
^from::(.)

How do I gather performance metrics for GDI and user Objects using python

I think this is the first question I have asked on here; normally I find all the answers I need (so thanks in advance).
OK, my problem: I have written a python program that will, in threads, monitor a process and output the results to a csv file for later. This code is working great. I am using win32pdhutil for the counters, and WMI's Win32_PerfRawData_PerfProc_Process for the CPU %time. I have now been asked to monitor a WPF application and specifically monitor User objects and GDI objects.
This is where I have a problem: I can't seem to find any python support for gathering metrics on these two counters. Both counters are easily available in the task manager, so I find it odd that there is so little information on them. I am specifically gathering these to see if we have a memory leak, and I don't want to install anything on the system other than the python that is already there. Please can you peeps help with finding a solution.
I am using python 3.3.1, and this will be running on a Windows platform (mainly win7 and win8).
This is the code I am using to gather the data:
def gatherIt(self, whoIt, whatIt, type, wiggle, process_info2):
    # this is the data gathering function thing
    data = 0.0
    data1 = "wobble"
    if type == "counter":
        # gather data according to the attributes
        try:
            data = win32pdhutil.FindPerformanceAttributesByName(whoIt, counter=whatIt)
        except:
            # a problem occurred, e.g. the process not being there...
            data1 = "N/A"
    elif type == "cpu":
        try:
            process_info = {}  # used in the gather of CPU based on service
            for x in range(2):
                for procP in wiggle.Win32_PerfRawData_PerfProc_Process(name=whoIt):
                    n1 = int(procP.PercentProcessorTime)
                    d1 = int(procP.Timestamp_Sys100NS)
                    # need to get the process id to change per cpu look...
                    n0, d0 = process_info.get(whoIt, (0, 0))
                    try:
                        percent_processor_time = (float(n1 - n0) / float(d1 - d0)) * 100.0
                        # print whoIt, percent_processor_time
                    except ZeroDivisionError:
                        percent_processor_time = 0.0
                    # pass back the n0 and d0
                    process_info[whoIt] = (n1, d1)
                # end for loop (this should take into account multiple cpu's)
            # end for range, to allow for a current cpu time rather than cpu percent over sampleint
            if percent_processor_time == 0.0:
                data = 0.0
            else:
                data = percent_processor_time
        except:
            data1 = "N/A"
    else:
        # we have done something wrong so data = 0
        data1 = "N/A"
    # endif
    if data == "[]":
        data = 0.0
        data1 = "N/A"
    if data == "":
        data = 0.0
        data1 = "N/A"
    if data == " ":
        data = 0.0
        data1 = "N/A"
    if data1 != "wobble" and data == 0.0:
        # we have not got the result we were expecting, so add an n/a
        data = data1
    return data
cheers
Edited to correct the cpu timings issue, in case anyone tried to run it :D
So after a long search I was able to mash something together that gets me the info needed.
import time
from ctypes import *
from ctypes.wintypes import *
import win32pdh

# with help from here: http://coding.derkeiler.com/Archive/Python/comp.lang.python/2007-10/msg00717.html
# the following has been mashed together to get the info needed

def GetProcessID(name):
    object = "Process"
    items, instances = win32pdh.EnumObjectItems(None, None, object, win32pdh.PERF_DETAIL_WIZARD)
    val = None
    if name in instances:
        tenQuery = win32pdh.OpenQuery()
        tenarray = []
        item = "ID Process"
        path = win32pdh.MakeCounterPath((None, object, name, None, 0, item))
        tenarray.append(win32pdh.AddCounter(tenQuery, path))
        win32pdh.CollectQueryData(tenQuery)
        time.sleep(0.01)
        win32pdh.CollectQueryData(tenQuery)
        for tencounter in tenarray:
            type, val = win32pdh.GetFormattedCounterValue(tencounter, win32pdh.PDH_FMT_LONG)
            win32pdh.RemoveCounter(tencounter)
        win32pdh.CloseQuery(tenQuery)
    return val

processIDs = GetProcessID('OUTLOOK')  # Remember this is case sensitive

PQI = 0x400  # PROCESS_QUERY_INFORMATION
# open a handle onto the process so that we can query it
OpenProcessHandle = windll.kernel32.OpenProcess(PQI, 0, processIDs)

# OK so now we have opened the process, now we want to query it
GR_GDIOBJECTS, GR_USEROBJECTS = 0, 1
print(windll.user32.GetGuiResources(OpenProcessHandle, GR_GDIOBJECTS))
print(windll.user32.GetGuiResources(OpenProcessHandle, GR_USEROBJECTS))

# so we have what we want, we now close the process handle
windll.kernel32.CloseHandle(OpenProcessHandle)
hope that helps
For the GDI count, I think a simpler, cleaner monitoring script is as follows:
import time, psutil
from ctypes import *

def getPID(processName):
    for proc in psutil.process_iter():
        try:
            if processName.lower() in proc.name().lower():
                return proc.pid
        except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
            pass
    return None

def getGDIcount(PID):
    PH = windll.kernel32.OpenProcess(0x400, 0, PID)   # PROCESS_QUERY_INFORMATION
    GDIcount = windll.user32.GetGuiResources(PH, 0)   # 0 == GR_GDIOBJECTS
    windll.kernel32.CloseHandle(PH)
    return GDIcount

PID = getPID('Outlook')

while True:
    GDIcount = getGDIcount(PID)
    print(f"{time.ctime()}, {GDIcount}")
    time.sleep(1)
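Since the question asks about User objects as well, the same helper extends naturally ( my addition, not part of the original answer ): GetGuiResources() takes flag 1 ( GR_USEROBJECTS ) instead of 0 ( GR_GDIOBJECTS ):
def getUSERcount(PID):
    PH = windll.kernel32.OpenProcess(0x400, 0, PID)    # PROCESS_QUERY_INFORMATION
    USERcount = windll.user32.GetGuiResources(PH, 1)   # 1 == GR_USEROBJECTS
    windll.kernel32.CloseHandle(PH)
    return USERcount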
