I have several IO operations that I carry out on class init, but they often fail with IOError. What I would like to do is delay a few hundred ms and try again until success or some defined timeout. How can I make sure each individual command succeeds before continuing/ending the loop? I assume there is a better way than an if statement for each item and a counter to check whether all commands succeeded.
My current code below often fails with IOError and hangs the rest of the application.
def __init__(self):
    print("Pressure init.")
    self.readCoefficients()

def readCoefficients(self):
    global a0_MSB
    global a0_LSB
    global b1_MSB
    global b1_LSB
    global b2_MSB
    global b2_LSB
    global c12_MSB
    global c12_LSB
    a0_MSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_A0_COEFF_MSB+0)
    a0_LSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_A0_COEFF_LSB+0)
    b1_MSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_B1_COEFF_MSB+0)
    b1_LSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_B1_COEFF_LSB+0)
    b2_MSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_B2_COEFF_MSB+0)
    b2_LSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_B2_COEFF_LSB+0)
    c12_MSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_C12_COEFF_MSB+0)
    c12_LSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_C12_COEFF_LSB+0)
Do you want to retry each of those last 8 lines independently, or as a group? If independently, you will want a little helper function:
import time

def retry_function(tries, function, *args, **kwargs):
    last_error = None
    for _ in range(tries):  # note: `try` is a keyword, so it can't be the loop variable
        try:
            return function(*args, **kwargs)
        except IOError as e:
            last_error = e
            time.sleep(0.005)
    # The last error from inside the loop. Be sure tries is at least 1,
    # or last_error will still be None here.
    raise last_error
Then call it like this:
a0_MSB = retry_function(5, Pressure.bus.read_byte_data, Pressure.MPL115A2_ADDRESS,Pressure.MPL115A2_REGISTER_A0_COEFF_MSB+0)
If not independently but as a group, you probably still want this helper function, but you'll have to rewrite it to handle a list of functions/arguments, or pass in another custom function.
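For instance, here is a minimal sketch of such a grouped variant (retry_all and its (function, args) input format are made up for illustration); it retries the whole batch from the start whenever any call fails:

import time

def retry_all(tries, calls, delay=0.005):
    # `calls` is a list of (function, args) tuples; if any call fails,
    # the whole group is retried from the beginning.
    last_error = None
    for _ in range(tries):
        try:
            return [function(*args) for function, args in calls]
        except IOError as e:
            last_error = e
            time.sleep(delay)
    raise last_error  # all attempts failed; requires tries >= 1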
If it's OK for you that all the registers are read one after the other, you can use a simple helper function.
import time

# ...

def readCoefficients(self):
    global a0_MSB
    global a0_LSB
    global b1_MSB
    global b1_LSB
    global b2_MSB
    global b2_LSB
    global c12_MSB
    global c12_LSB
    max_retries = 15
    a0_MSB = self.readretry(Pressure.MPL115A2_REGISTER_A0_COEFF_MSB+0, max_retries)
    a0_LSB = self.readretry(Pressure.MPL115A2_REGISTER_A0_COEFF_LSB+0, max_retries)
    b1_MSB = self.readretry(Pressure.MPL115A2_REGISTER_B1_COEFF_MSB+0, max_retries)
    b1_LSB = self.readretry(Pressure.MPL115A2_REGISTER_B1_COEFF_LSB+0, max_retries)
    b2_MSB = self.readretry(Pressure.MPL115A2_REGISTER_B2_COEFF_MSB+0, max_retries)
    b2_LSB = self.readretry(Pressure.MPL115A2_REGISTER_B2_COEFF_LSB+0, max_retries)
    c12_MSB = self.readretry(Pressure.MPL115A2_REGISTER_C12_COEFF_MSB+0, max_retries)
    c12_LSB = self.readretry(Pressure.MPL115A2_REGISTER_C12_COEFF_LSB+0, max_retries)

def readretry(self, address, max_retries):
    for i in range(max_retries):
        try:
            return Pressure.bus.read_byte_data(
                Pressure.MPL115A2_ADDRESS,
                address
            )
        except IOError as e:
            # print(e)
            time.sleep(0.1)
    else:
        # The loop never breaks, so reaching this for/else clause
        # means every attempt failed.
        raise IOError("Reading failed after multiple tries")
Note: You should not use globals, especially in classes.
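For instance, a sketch of the same reads stored as instance attributes instead of globals (reusing the readretry helper above; the registers dictionary is only an illustration that mirrors the constants already in the class):

def readCoefficients(self):
    max_retries = 15
    registers = {
        "a0_MSB": Pressure.MPL115A2_REGISTER_A0_COEFF_MSB,
        "a0_LSB": Pressure.MPL115A2_REGISTER_A0_COEFF_LSB,
        "b1_MSB": Pressure.MPL115A2_REGISTER_B1_COEFF_MSB,
        "b1_LSB": Pressure.MPL115A2_REGISTER_B1_COEFF_LSB,
        "b2_MSB": Pressure.MPL115A2_REGISTER_B2_COEFF_MSB,
        "b2_LSB": Pressure.MPL115A2_REGISTER_B2_COEFF_LSB,
        "c12_MSB": Pressure.MPL115A2_REGISTER_C12_COEFF_MSB,
        "c12_LSB": Pressure.MPL115A2_REGISTER_C12_COEFF_LSB,
    }
    for name, register in registers.items():
        # Each coefficient becomes self.a0_MSB, self.a0_LSB, and so on.
        setattr(self, name, self.readretry(register, max_retries))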
This is another way of doing it. This code tries to read all the addresses and remembers the ones that failed; it then waits a little and retries the failed addresses until every address has been read properly or the number of allowed retries is exceeded.
def readCoefficients(self):
    (
        a0_MSB, a0_LSB,
        b1_MSB, b1_LSB,
        b2_MSB, b2_LSB,
        c12_MSB, c12_LSB) = self.mio_read(15,
        Pressure.MPL115A2_REGISTER_A0_COEFF_MSB+0,
        Pressure.MPL115A2_REGISTER_A0_COEFF_LSB+0,
        Pressure.MPL115A2_REGISTER_B1_COEFF_MSB+0,
        Pressure.MPL115A2_REGISTER_B1_COEFF_LSB+0,
        Pressure.MPL115A2_REGISTER_B2_COEFF_MSB+0,
        Pressure.MPL115A2_REGISTER_B2_COEFF_LSB+0,
        Pressure.MPL115A2_REGISTER_C12_COEFF_MSB+0,
        Pressure.MPL115A2_REGISTER_C12_COEFF_LSB+0
    )

def mio_read(self, max_retries, *addresses):
    # Create storage for results
    results = [None] * len(addresses)
    # Keep track of the index of a particular address in the list of results
    ios = list(enumerate(addresses))
    for i in range(max_retries):
        failedios = []
        for index, address in ios:
            try:
                results[index] = Pressure.bus.read_byte_data(
                    Pressure.MPL115A2_ADDRESS,
                    address
                )
            except IOError:
                # Place address in the queue for the next round
                failedios.append((index, address))
        # If all succeeded
        if len(failedios) == 0:
            return results
        # Time may be reduced, as some was already spent checking other addresses
        time.sleep(0.1)
        ios = failedios
    else:
        # Addresses are ints, so convert them before joining
        raise IOError(",".join(str(addr) for ind, addr in failedios))
Related
So I made this program that I want to loop forever until closed. At the moment I use this piece of code:
while True:
    a = start()
    for aaa in a:
        check(a[aaa], 0)
But that is pretty slow. How can I multithread this using something like the following (this is my try, it's incorrect of course):
pool = ThreadPool(threads)
results = pool.map(check, a, 0)
I tried that code with threads = 1, and it just gave nothing. Could anyone help me with this?
==== EDIT ====
Start function:
def start():
    global a
    url = "URL_WAS_HERE"  # receives a json like {"a":56564356, "b":654653453} etc.
    r = requests.get(url)
    a = json.loads(r.text)
    return a
Check function:
def check(idd, tries):
    global checked
    global snipe
    global notworking
    if tries < 1:
        checked = checked+1
        url = "URL_WAS_HERE"+str(idd)  # receives json with extra information about the id
        r = requests.get(url)
        try:
            b = json.loads(r.text)
            if b['rap'] > b['best_price']:
                difference = b['rap']-b['best_price']
                print(str(idd)+" has a "+str(difference)+"R$ difference. Price: "+str(b['best_price'])+" //\\ Rap: "+str(b['rap']))
                snipe = snipe+1
        except:
            time.sleep(1)
            tries = tries+1
            notworking = notworking+1
            check(idd, tries)
    settitle("Snipes; "+str(snipe)+" //\\ Checked; "+str(checked)+" //\\ Errors; "+str(notworking))
I hope this helps a bit
Perhaps start by using a documented class, ThreadPoolExecutor. ThreadPool (from multiprocessing.pool) is undocumented, not an official language feature.
The docs offer minimal examples to get you started. For your example try the following construction:
from concurrent.futures import ThreadPoolExecutor, as_completed

values_to_test = start().values()  # the ids, matching check(a[aaa], 0) above
result_container = []
with ThreadPoolExecutor(max_workers=2) as executor:  # set `max_workers` as appropriate
    pool = {executor.submit(check, val, tries=0): val for val in values_to_test}
    for future in as_completed(pool):
        try:
            result_container.append(future.result())
        except Exception:
            pass  # handle exceptions here
If you are set on using the map method, you cannot pass 0 as an argument because it is not an iterable; see the method signature.
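If you do want map, one option is to bind the extra argument first, for example with functools.partial (a sketch, assuming the check(idd, tries) function and values_to_test from above):

from concurrent.futures import ThreadPoolExecutor
from functools import partial

with ThreadPoolExecutor(max_workers=2) as executor:
    # map needs one iterable per positional argument, so fix tries=0
    # up front; results are yielded in input order.
    results = list(executor.map(partial(check, tries=0), values_to_test))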
I get two streams of data from an API, so there are three threads: the main one, stream1, and stream2. stream1 and stream2 need to process this data, and once they're done they store the results in main_value1 and main_value2.
From the main thread I need to read the last value at any given time (so if I need the value while it is still being processed, I get the last processed/stored one). What would be the optimal way? In the code example here, I need help coding the functions get_main_value1() and, of course, get_main_value2():
def stream1():
    while True:
        main_value1 = process()

def stream2():
    while True:
        main_value2 = process2()

def get_main_value1(): ?

def get_main_value2(): ?

def main():
    threading.Thread(target=stream1).start()
    threading.Thread(target=stream2).start()
    while True:
        time.sleep(random.randint(0,10))
        A = get_main_value1()
        B = get_main_value2()
One way would be to make them global:
STREAM1_LAST_VALUE = None

def stream1():
    global STREAM1_LAST_VALUE
    while True:
        main_value1 = process()
        STREAM1_LAST_VALUE = main_value1

STREAM2_LAST_VALUE = None

def stream2():
    global STREAM2_LAST_VALUE
    while True:
        main_value2 = process2()
        STREAM2_LAST_VALUE = main_value2

def get_main_value1():
    return STREAM1_LAST_VALUE

def get_main_value2():
    return STREAM2_LAST_VALUE

def main():
    threading.Thread(target=stream1).start()
    threading.Thread(target=stream2).start()
    while True:
        time.sleep(random.randint(0,10))
        A = get_main_value1()
        B = get_main_value2()
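If you would rather make the synchronization explicit, here is a minimal sketch of the same idea guarded by a lock (all names are illustrative). In CPython a plain assignment to a global is already atomic, so the lock mainly buys clarity and lets you later update several values together:

import threading

_lock = threading.Lock()
_last_values = {"stream1": None, "stream2": None}

def set_last_value(name, value):
    # Writers (the stream threads) publish their latest result here.
    with _lock:
        _last_values[name] = value

def get_last_value(name):
    # The main thread always sees the most recently published value.
    with _lock:
        return _last_values[name]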
I am trying to implement a keep-alive that sends some data every 30 seconds to keep a telnet connection open.
My code calls reinitScore every second. This function will sometimes call calculateWinner, which sends the data through telnet via stelnet.send(data).
The problem is, when I call stelnet.send(data) inside any function, it raises a NameError: global name 'stelnet' is not defined.
My question is: why would stelnet.send(data) work in one place, and not another?
Here is the part of my code that concerns telnet transfer and function calling:
import socket, select, string, sys
import string
import threading

leftKeyCounter = 0
rightKeyCounter = 0
frontKeyCounter = 0
backKeyCounter = 0

# function called by reinitScore
def calculateWinner(d):
    scores = {}
    high_score = 0
    for key, value in d.items():
        try:
            scores[value].append(key)
        except KeyError:
            scores[value] = [key]
        if value > high_score:
            high_score = value
    results = scores[high_score]
    if len(results) == 1:
        print results[0]
        stelnet.send(results[0])
        return results[0]
    else:
        print 'TIE'
        return 'TIE', results

# called once and repeats itself every second
def reinitScore():
    threading.Timer(1, reinitScore).start()
    # declare the counters global so they can be reset below
    global leftKeyCounter
    global rightKeyCounter
    global frontKeyCounter
    global backKeyCounter
    values = {'left': leftKeyCounter, 'right': rightKeyCounter, 'front': frontKeyCounter, 'back': backKeyCounter}
    if (leftKeyCounter != 0 or rightKeyCounter != 0 or frontKeyCounter != 0 or backKeyCounter != 0):
        calculateWinner(values)
    leftKeyCounter = 0
    rightKeyCounter = 0
    frontKeyCounter = 0
    backKeyCounter = 0
    print "back to 0"

reinitScore()

if __name__ == "__main__":
    if (len(sys.argv) < 3):
        print 'Usage : python telnet.py hostname port'
        sys.exit()
    host = sys.argv[1]
    port = int(sys.argv[2])
    stelnet = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    stelnet.settimeout(2)
    # connect to remote host
    try:
        stelnet.connect((host, port))
    except:
        print 'Unable to connect'
        sys.exit()
    print 'Connected to remote host'
    while True:
        # ... some code that has nothing to do with telnet
        while 1:
            socket_list = [sys.stdin, stelnet]
            read_sockets, write_sockets, error_sockets = select.select(socket_list, [], [])
            for sock in read_sockets:
                if sock == stelnet:
                    data = sock.recv(4096)
                    if not data:
                        print 'Connection closed'
                        sys.exit()
                    else:
                        sys.stdout.write(data)
                else:
                    msg = sys.stdin.readline()
                    stelnet.send(msg)
I tried to declare stelnet as a global variable in many places, but it doesn't change anything --- I always get the "not defined" NameError.
In response to your updated code... The error message is still correct, because although you have defined stelnet at the module level, you've defined it too late. Its definition occurs after its use in the calculateWinner function.
Stripping your code down to a ridiculously minimal example, you are doing something like this:
def calculateWinner():
# A leap of faith... There is no `stelnet` defined
# in this function.
stelnet.send(results[0])
def reinitScore():
# Indirectly depends on `stelnet` too.
calculateWinner()
# But we haven't defined `stelnet` yet...
reinitScore() # Kaboom!
# These lines will never run, because the NameError has
# already happened.
if __name__ == '__main__':
stelnet = ... # Too late.
calculateWinner depends on a name that does not exist when the function is defined (which is allowed, because globals are looked up at call time, not at definition time). Whether it works or crashes will depend on whether some other code has defined stelnet 1) where calculateWinner can get at it, and 2) before calculateWinner is executed.
Suggestions
Functions that depend on global mutable state are hard to follow, let alone code correctly. It's not easy to tell what depends on which variables, or what's modifying them, or when. Also, coming up with an MCVE is more trouble than it should be, because functions that appear independent might not be.
Stuff as much of your module-level code as you can into a main function, and call it (and nothing else) from the body of if __name__ == '__main__': (since even that body actually runs at module level).
Consider something like this:
def reinit_score(output_socket, shared_scores):
    # Ensuring safe concurrent access to the `shared_scores`
    # dictionary is left as an exercise for the reader.
    winner = ...  # Determined from `shared_scores`.
    output_socket.send(winner)
    for key in shared_scores:
        shared_scores[key] = 0
    threading.Timer(
        interval=1,
        function=reinit_score,
        args=[output_socket, shared_scores],
    ).start()

def main():
    output_socket = ...  # This was `stelnet`.
    shared_scores = {...}  # A dictionary with 4 keys: L/R/U/D.
    reinit_score(output_socket, shared_scores)
    while True:
        play_game(shared_scores)
        # `play_game` mutates the `shared_scores` dictionary...

if __name__ == '__main__':
    main()
These functions are still connected by the shared dictionary that they pass around, but only functions that are explicitly passed that dictionary can change its contents.
Your code is not working because you are not passing stelnet to your function.
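A minimal sketch of that fix (connection stands in for stelnet, and the scoring is reduced to a simple max for brevity; ties are ignored here, unlike the fuller version above):

def calculateWinner(d, connection):
    # The socket arrives as a parameter instead of a global.
    winner = max(d, key=d.get)
    connection.send(winner)
    return winner

def reinitScore(connection, values):
    # Pass the socket along to whatever needs it.
    return calculateWinner(values, connection)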
My question:
I would like to know if there is a "best practice" pattern in Python for returning values from coroutine endpoints (aka the "sink" or "consumer"). More generally, how would you approach the following scenario?
My scenario:
I have my (producer) > (filter) > (consumer) coroutine pipeline to process a text-based table and to build a list of dictionaries from it. I would like the object that is built in consumer to be returned to the original caller of producer.
My approach:
My approach has been to set up a unique finish-processing signal that each coroutine checks for. If it hears the signal, then it passes on the signal to its child and yields the returned value. The consumer just yields its current value.
Alternative approaches:
I considered:
Using a global to hold the desired object to be "returned" to the caller.
A class-based approach with regular subroutines.
Reasons why I should maybe reconsider these for my scenario would also be welcome.
My implementation:
Here is a simplified version of what I have done, with all key components included.
import uuid

FINISH_PROCESSING_SIGNAL = uuid.uuid4()

def coroutine(func):
    def start(*args, **kwargs):
        cr = func(*args, **kwargs)
        cr.next()
        return cr
    return start

# Sink
@coroutine
def list_builder():
    # accepts objects and adds them to a list
    _list = []
    try:
        while True:
            data = (yield)
            if data is FINISH_PROCESSING_SIGNAL:
                yield _list
                break
            _list.append(data)
    except GeneratorExit:
        pass

# Filter
@coroutine
def user_data_filter(target=None):
    if target is None:
        target = list_builder()
    header = "-+-"
    footer = "Transfer Packets"
    username = "User Name"
    fullname = "Full Name"
    note = "Description"
    try:
        while True:
            user = {}
            data = (yield)
            if data is FINISH_PROCESSING_SIGNAL:
                yield target.send(FINISH_PROCESSING_SIGNAL)
                break
            line = data
            if header in line:
                while True:
                    line = (yield)
                    if footer in line:
                        target.send(user)
                        break
                    elif username in line:
                        user["username"] = line.split('|')[1]
                    elif fullname in line:
                        user["fullname"] = line.split('|')[1]
                    elif note in line:
                        user["note"] = line.split('|')[1]
    except GeneratorExit:
        target.close()

# Producer
def process_users_table(table, target=None):
    if target is None:
        target = user_data_filter()
    lines = table.split('\r\n')
    for line in lines:
        target.send(line)
    processed_data = target.send(FINISH_PROCESSING_SIGNAL)
    return processed_data

if __name__ == '__main__':
    test_users_table = \
"""
Item            |Value\r\n
----------------+-----------------------\r\n
User Name       |alice\r\n
Full Name       |Alice Doe\r\n
Description     |\r\n
Transfer Packets|0\r\n
----------------+-----------------------\r\n
User Name       |bob\r\n
Full Name       |Bob Tables\r\n
Description     |\r\n
Transfer Packets|0\r\n
"""
    users = process_users_table(test_users_table)
    print users
Your method of signaling the consumer to terminate is fine, and is in harmony with what you would do if using a multiprocessing or threaded Queue. However, generators also have a way to throw exceptions (rather than sending values), and the purpose of throw is precisely to signal events or changes in state to the generator. Moreover, when an exception is thrown to a generator, "[i]f the generator catches the exception and yields another value, that is the return value of g.throw()."
That seems perfectly suited for your use case. Instead of sending a FINISH_PROCESSING_SIGNAL value, you could throw a FINISH_PROCESSING_SIGNAL exception, and use try..except to yield the final value.
class FINISH_PROCESSING_SIGNAL(Exception): pass

def coroutine(func):
    def start(*args, **kwargs):
        cr = func(*args, **kwargs)
        cr.next()
        return cr
    return start

# Sink
@coroutine
def list_builder():
    # accepts objects and adds them to a list
    _list = []
    try:
        while True:
            data = (yield)
            _list.append(data)
    except FINISH_PROCESSING_SIGNAL:
        yield _list

# Filter
@coroutine
def user_data_filter(target=list_builder()):
    header = "-+-"
    footer = "Transfer Packets"
    username = "User Name"
    fullname = "Full Name"
    note = "Description"
    try:
        while True:
            user = {}
            data = (yield)
            line = data
            if header in line:
                while True:
                    line = (yield)
                    if footer in line:
                        target.send(user)
                        break
                    elif username in line:
                        user["username"] = line.split('|')[1]
                    elif fullname in line:
                        user["fullname"] = line.split('|')[1]
                    elif note in line:
                        user["note"] = line.split('|')[1]
    except FINISH_PROCESSING_SIGNAL as err:
        # Pass along the Exception to the target, and yield its result back
        # to the caller
        yield target.throw(err)

# Producer
def process_users_table(table, target=user_data_filter()):
    lines = table.split('\r\n')
    for line in lines:
        target.send(line)
    processed_data = target.throw(FINISH_PROCESSING_SIGNAL)
    # processed_data = target.close()
    return processed_data

if __name__ == '__main__':
    test_users_table = \
"""
Item            |Value\r\n
----------------+-----------------------\r\n
User Name       |alice\r\n
Full Name       |Alice Doe\r\n
Description     |\r\n
Transfer Packets|0\r\n
----------------+-----------------------\r\n
User Name       |bob\r\n
Full Name       |Bob Tables\r\n
Description     |\r\n
Transfer Packets|0\r\n
"""
    users = process_users_table(test_users_table)
    print users
How can I stop a thread by clicking a wxButton?
Here is my code:
def startMonitor(self, event):
    selectedInterface = self.interfaces_cblist.GetValue()
    Publisher().sendMessage(("test"), selectedInterface)
    self.Close()
    selectInterfaceStr = str(selectedInterface)
    if len(selectedInterface) == 0:
        noSelect_error = wx.MessageDialog(None, "Please select an interface", "", wx.OK | wx.ICON_ERROR)
        noSelect_error.ShowModal()
    else:
        monitorStarted = wx.MessageDialog(None, "Monitor on %s started" % selectInterfaceStr, "", wx.OK | wx.ICON_ERROR)
        monitorStarted.ShowModal()
        self.monitorInterface_button.Disable()
        threading.Thread(target=self.camtableDetection, args=(selectInterfaceStr,)).start()
        threading.Thread(target=self.dhcpexhaustion, args=(selectInterfaceStr,)).start()

def camtableDetection(self, getInterface):
    global interface
    interface = str(getInterface)
    THRESH = (254/4)
    START = 5

    def monitorPackets(p):
        if p.haslayer(IP):
            hwSrc = p.getlayer(Ether).src
            if hwSrc not in hwList:
                hwList.append(hwSrc)
            delta = datetime.datetime.now() - start
            if ((delta.seconds > START) and ((len(hwList)/delta.seconds) > THRESH)):
                print "[*] - Detected CAM Table Attack."
                #camAttackDetected = wx.MessageDialog(None,"Cam Attack Detected","",wx.ICON_ERROR)
                #camAttackDetected.ShowModal()

    hwList = []
    start = datetime.datetime.now()
    sniff(iface=interface, prn=monitorPackets)

def dhcpexhaustion(self, getInterface):
    interface = str(getInterface)
    global reqCnt
    global ofrCnt
    reqCnt = 0
    ofrCnt = 0

    def monitorPackets(p):
        if p.haslayer(BOOTP):
            global reqCnt
            global ofrCnt
            opCode = p.getlayer(BOOTP).op
            if opCode == 1:
                reqCnt = reqCnt+1
            elif opCode == 2:
                ofrCnt = ofrCnt+1
            print "[*] - "+str(reqCnt)+" Requests, "+str(ofrCnt)+" Offers."

    sniff(iface=interface, prn=monitorPackets)
I am thinking of stopping the threads when I click on a button, but have no idea how it can be done.
There are self.abort techniques, but I'm not sure how to apply them in my code.
As I said in a comment:
If [sniff is] a function that you have no control over (e.g., from a C extension module) and it loops forever, then it must have some way to cancel it. Maybe it's having your callback return a special value, maybe it's calling a control function, maybe it's closing the object it's working on… whatever it is, you have to do that.
So, why not read the documentation for scapy.sniff to see how to cancel it?
Sniff packets
sniff([count=0,] [prn=None,] [store=1,] [offline=None,] [lfilter=None,] + L2ListenSocket args) -> list of packets

count: number of packets to capture. 0 means infinity
store: whether to store sniffed packets or discard them
prn: function to apply to each packet. If something is returned,
     it is displayed. Ex: prn = lambda x: x.summary()
lfilter: python function applied to each packet to determine
         if further action may be done. Ex: lfilter = lambda x: x.haslayer(Padding)
offline: pcap file to read packets from, instead of sniffing them
timeout: stop sniffing after a given time (default: None)
L2socket: use the provided L2socket
opened_socket: provide an object ready to use .recv() on
stop_filter: python function applied to each packet to determine
             if we have to stop the capture after this packet.
             Ex: stop_filter = lambda x: x.haslayer(TCP)
So, the way to stop it sniffing forever is to pass it a stop_filter function, which will return True when you want to stop it. So that function is where you're going to check your stop flag. For example:
def __init__(self, whatever):
    self.stopflag = False
    self.stoplock = threading.Lock()
    # rest of your init

def stop(self):
    with self.stoplock:
        self.stopflag = True

def stop_filter(self, packet):
    # sniff() calls this with each captured packet; returning True stops it
    with self.stoplock:
        return self.stopflag

def dhcpexhaustion(self, getInterface):
    # etc.
    sniff(iface=interface, prn=monitorPackets, stop_filter=self.stop_filter)
You're probably going to want to store the two Thread objects at start time, so you can join them at stop time, rather than just leaking them until your program exits. But otherwise, this should do it.
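A sketch of that bookkeeping (onStopButton is a hypothetical handler name; the rest mirrors the code above):

def startMonitor(self, event):
    # ... existing setup as above ...
    self.threads = [
        threading.Thread(target=self.camtableDetection, args=(selectInterfaceStr,)),
        threading.Thread(target=self.dhcpexhaustion, args=(selectInterfaceStr,)),
    ]
    for t in self.threads:
        t.start()

def onStopButton(self, event):
    self.stop()  # raise the flag that stop_filter checks
    for t in self.threads:
        t.join()  # wait for each sniff() call to notice and return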