In PyQt5, I want to read my serial port after writing (requesting a value) to it. I've got it working using readyRead.connect(self.readingReady), but then I'm limited to outputting to only one text field.
The code for requesting parameters sends a string to the serial port. After that, I'm reading the serial port using the readingReady function and printing the result to a plainTextEdit form.
def read_configuration(self):
    if self.serial.isOpen():
        self.serial.write(f"?request1\n".encode())
        self.label_massGainOutput.setText(f"{self.serial.readAll().data().decode()}"[:-2])

        self.serial.write(f"?request2\n".encode())
        self.serial.readyRead.connect(self.readingReady)

        self.serial.write(f"?request3\n".encode())
        self.serial.readyRead.connect(self.readingReady)

def readingReady(self):
    data = self.serial.readAll()
    if len(data) > 0:
        self.plainTextEdit_commandOutput.appendPlainText(f"{data.data().decode()}"[:-2])
    else:
        self.serial.flush()
The problem I have is that I want every answer from the serial port to go to a different plainTextEdit form. The only solution I see now is to write a separate readingReady function for every request (and I have a lot! Only three are shown here). This must be possible in a better way. Maybe using arguments in the readingReady function? Or returning a value from the function that I can redirect to the correct form?
Without using the readyRead signal, all my values are one behind: the first request prints nothing, the second prints the first, and so on, and the last is never printed.
Does someone have a better way to implement this functionality?
QSerialPort has an asynchronous API (readyRead) and a synchronous API (waitForReadyRead). If you only read the configuration once on start and UI freezing during this process is not critical for you, you can use the synchronous API.
serial.write(f"?request1\n".encode())
serial.waitForReadyRead()
res = serial.read(10)
serial.write(f"?request2\n".encode())
serial.waitForReadyRead()
res = serial.read(10)
This simplification assumes that each response arrives in one chunk and that the message size is at most 10 bytes, which is not guaranteed. Actual code should look something like this:
def isCompleteMessage(res):
    # your code here
    ...

serial.write(f"?request2\n".encode())
res = b''
while not isCompleteMessage(res):
    serial.waitForReadyRead()
    res += serial.read(10)
Alternatively, you can create a worker or thread, open the port and issue the requests synchronously inside it, and deliver the responses to the application using signals: no freezes, clear code, a slightly more complicated system.
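A minimal sketch of that worker approach, assuming the same newline-terminated requests as above; ConfigWorker, the port name, and the one-second timeout are illustrative, not from the original code:

from PyQt5.QtCore import QObject, QThread, pyqtSignal
from PyQt5.QtSerialPort import QSerialPort

class ConfigWorker(QObject):
    # emits (request, response) pairs so the GUI can route each answer
    result = pyqtSignal(str, str)
    finished = pyqtSignal()

    def __init__(self, port_name, requests):
        super().__init__()
        self.port_name = port_name
        self.requests = requests

    def run(self):
        serial = QSerialPort()  # created here so it lives in the worker thread
        serial.setPortName(self.port_name)
        if serial.open(QSerialPort.ReadWrite):
            for request in self.requests:
                serial.write(request.encode())
                response = b''
                while not response.endswith(b'\n'):  # assumes '\n'-terminated replies
                    if not serial.waitForReadyRead(1000):
                        break  # timeout: give up on this request
                    response += serial.readAll().data()
                self.result.emit(request, response.decode().rstrip())
            serial.close()
        self.finished.emit()

# usage sketch:
# thread = QThread()
# worker = ConfigWorker("COM3", ["?request1\n", "?request2\n", "?request3\n"])
# worker.moveToThread(thread)
# thread.started.connect(worker.run)
# worker.finished.connect(thread.quit)
# thread.start()

In the GUI you would connect result to a single slot that looks each request up in a dict mapping request strings to widgets, which also answers the routing problem in the question.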
Hi guys, I'm working on a script that will get data from a host using the Data Communications Standard (developed by the Data Communication Standard Committee, Lens Processing Division of The Vision Council) over a serial port, and pass the data into the ModBus protocol for the device to perform its operations.
Since I don't physically have access to the host machine, I'm developing a secondary script to emulate the host. I am currently at the stage where I need to read a lot of information from the serial port, and I get only part of the data. I was hoping to get the whole string sent by the send_job() function in my host emulator script.
Also, can any of you tell me if this would be a good approach? The only thing the machine is supposed to do is grab 2 values from the host response and assign them to two ModBus holding registers.
NOTE: the initialization function is hard-coded because it will always be the same, and the actual response data will not matter except for the status. The job request is also hard-coded; I only pass the job number that I get from a ModBus holding register. The exact logic of how the host resolves this should not matter; I only need to send the job number scanned from the device in this format.
main script:
def request_job_modbus(job):
    data = f'[06][1c]req=33[0d][0a]job={job}[0d][0a][1e][1d]'.encode('ascii')
    writer(data)

def get_job_from_serial():
    response = serial_client.read_all()
    resp = response.decode()
    return resp
# TODO: SEND INIT SEQUENCE ONCE AND VERIFY IF REQUEST status=0
initiation_request()
init_response_status = get_init_status()
print('init method being active')
print(get_init_status())

while True:
    # TODO: get job request data
    job_serial = get_job_from_serial()
    print(job_serial)
host emulation script:
def send_job():
    job_response = '''[06][1c]ans=33[0d]job=30925[0d]status=0;"ok"[0d]do=l[0d]add=;2.50[0d]ar=1[0d]
bcerin=;3.93[0d]bcerup=;-2.97[0d]crib=;64.00[0d]do=l[0d]ellh=;64.00[0d]engmask=;613l[0d]
erdrin=;0.00[0d]erdrup=;10.00[0d]ernrin=;2.00[0d]ernrup=;-8.00[0d]ersgin=;0.00[0d]
ersgup=;4.00[0d]gax=;0.00[0d]gbasex=;-5.30[0d]gcrosx=;-7.96[0d]kprva=;275[0d]kprvm=;0.55[0d]
ldpath=\\uscqx-tcpmain-at\lds\iot\do\800468.sdf[0d]lmatid=;151[0d]lmatname=;f50[0d]
lnam=;vsp_basic_fh15[0d]sgerin=;0.00[0d]sgerup=;0.00[0d]sval=;5.18[0d]text_11=;[0d]
text_12=;[0d]tind=;1.53[0d][1e][1d]'''.encode('ascii')
    writer(job_response)

def get_init_request():
    req = p.readline()
    print(req)
    request = req.decode()[4:11]
    # print(request)
    if request == 'req=ini':
        print('request == req=ini??? <<<<<<< condition met, sending the response')
        send_init_response()
        send_job()

while True:
    # print(get_init_request())
    get_init_request()
while True:
# print(get_init_request())
get_init_request()
What I get on screen (main script):
init method being active
bce
erd
condition was met init status=0
outside loop
ers
condition was met init status=0
inside while loop
trigger reset <<<--------------------
5782
`:lmatid=;151[0d]lmatname=;f50[0d]
lnam=;vsp_basic_fh15[0d]sgerin=;0.00[0d]sgerup=;0.00[0d]sval=;5.18[0d]text_11=;[0d]
text_12=;[0d]tind=;1.53[0d][1e][1d]
outside loop
condition was met init status=0
outside loop
What I get on screen (host emulation script):
b'[1c]req=ini[0d][0a][1e][1d]'
request == req=ini??? <<<<<<< condition met, sending the response
b''
b'[06][1c]req=33[0d][0a]job=5782[0d][0a][1e][1d]'
b''
b''
b''
b''
b''
b''
I suspect you're trying to write too much at once to a hardware buffer that is fairly small. Especially when dealing with low-power hardware, assuming you can stuff an entire message into a buffer is often not correct. Even modern PCs sometimes have very small buffers for legacy hardware like serial ports. You may find, when you switch from development to the actual hardware, that the RTS and DTR lines need to be used to determine when to send or receive data. Unfortunately, this is up to whoever designed the hardware, as these lines are often ignored as well.
I would try chunking your data transfer into smaller bits as a test to see if the whole message gets through. This is a quick and dirty first attempt that may have bugs, but it should get you down the right path:
def get_job_from_serial():
    response = b''  # buffer for the response
    while True:
        chunk = serial_client.read()  # read any available data or wait for the timeout
        # this technically could be reading only 1 char at a time, but any
        # remotely modern PC should easily keep up with 9600 baud
        if not chunk:  # a timed-out read returns b'', which probably means end of data
            # (pyserial read timeouts return empty/short data rather than raising)
            # you could also presumably check the length of the buffer if it's always
            # a fixed length to determine if the entire message has been sent yet
            break
        response += chunk
    return response

def writer(command):
    written = 0      # how many bytes have we actually written
    chunksize = 128  # the smaller you go, the less likely you are to overflow
                     # a buffer, but the slower you go
    while written < len(command):
        # you presumably might have to wait for p.dtr() == True or similar,
        # though it's just as likely not to have been implemented
        written += p.write(command[written:written + chunksize])
    p.flush()  # probably don't actually need this
P.S. I had to go to the source code for p.read_all (for some reason I couldn't find it online), and it does not do what I think you expect it to do. The exact code for it is:
def read_all(self):
    """\
    Read all bytes currently available in the buffer of the OS.
    """
    return self.read(self.in_waiting)
There is no concept of waiting for a complete message; it is just shorthand for grabbing everything currently available.
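If you need that concept, you have to build it yourself. A minimal sketch, assuming the messages end with the textual [1e][1d] pair seen in your dumps and that a read timeout is set on the port (read_message is an illustrative name, not part of your code):

def read_message(ser, terminator=b'[1e][1d]'):
    # swap in the real control bytes b'\x1e\x1d' if your device sends those
    buf = b''
    while not buf.endswith(terminator):
        chunk = ser.read(ser.in_waiting or 1)  # block for at least one byte
        if not chunk:  # the read timed out: stop waiting
            break
        buf += chunk
    return buf

# usage: job_serial = read_message(serial_client).decode()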
I'm really new to programming in general and very inexperienced, and I'm learning Python as I think it's simpler than other languages. Anyway, I'm trying to use Flask-Ask with ngrok to program an Alexa skill to check data online (which changes a couple of times per hour). The script takes four different numbers (from a different URL) and organizes them into a dictionary, and uses Selenium and PhantomJS to access the data.
Obviously, this exceeds the 8-10 second maximum runtime for an intent before Alexa decides that it's taken too long and returns an error message (I know it's timing out, as ngrok and the Python log would show an actual error if one occurred, and it invariably fails after 8-10 seconds, at which point it should still be in the middle of the script). I've read that I could just reprompt it, but I don't know how, and that would only give me 8-10 more seconds; the script usually takes about 25 seconds just to get the data from the internet (and then maybe a second to turn it into a dictionary).
I tried putting the getData function right after the intent that runs when the Alexa skill is first invoked, but it only runs when I initialize my local server, and it just holds that data for every new Alexa session. Because the data changes frequently, I want it to perform the function every time I start a new session for the skill with Alexa.
So, I decided just to outsource the function that actually gets the data to another script, and make that other script run constantly in a loop. Here's the code I used.
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

def getData():
    username = ''  # username hidden for anonymity
    password = ''  # password hidden for anonymity
    browser = webdriver.PhantomJS(executable_path='/usr/local/bin/phantomjs')
    browser.get("https://gradebook.com")  # actual website name changed
    browser.find_element_by_name("username").clear()
    browser.find_element_by_name("username").send_keys(username)
    browser.find_element_by_name("password").clear()
    browser.find_element_by_name("password").send_keys(password)
    browser.find_element_by_name("password").send_keys(Keys.RETURN)
    global currentgrades
    currentgrades = []
    gradeids = ['2018202', '2018185', '2018223', '2018626', '2018473', '2018871', '2018886']
    for x in range(0, len(gradeids)):
        try:
            gradeurl = "https://www.gradebook.com/grades/"
            browser.get(gradeurl)
            grade = browser.find_element_by_id("currentStudentGrade[]").get_attribute('innerHTML').encode('utf8')[0:3]
            if grade[2] != "%":
                grade = browser.find_element_by_id("currentStudentGrade[]").get_attribute('innerHTML').encode('utf8')[0:4]
            if grade[1] == "%":
                grade = browser.find_element_by_id("currentStudentGrade[]").get_attribute('innerHTML').encode('utf8')[0:1]
            currentgrades.append(grade)
        except Exception:
            currentgrades.append('No assignments found')
            continue
    dictionary = {"class1": currentgrades[0], "class2": currentgrades[1], "class3": currentgrades[2],
                  "class4": currentgrades[3], "class5": currentgrades[4], "class6": currentgrades[5],
                  "class7": currentgrades[6]}
    return dictionary

def run():
    dictionary = getData()
    time.sleep(60)
That script runs constantly and does what I want, but then in my other script, I don't know how to just call the dictionary variable. When I use
from getdata.py import dictionary
in the Flask-ask script it just runs the loop and constantly gets the data. I just want the Flask-ask script to take the variable defined in the "run" function and then use it without running any of the actual scripts defined in the getdata script, which have already run and gotten the correct data. If it matters, both scripts are running in Terminal on a MacBook.
Is there any way to do what I'm asking about, or are there any easier workarounds? Any and all help is appreciated!
It sounds like you want to import the function so you can run it, rather than importing the dictionary.
Try deleting the run function, and then in your other script:
from getdata import getData
Then each time you call getData() it will run your code and return a fresh, up-to-date dictionary.
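For instance (the intent name here is hypothetical; this assumes your existing ask object and Flask-Ask's statement helper):

from flask_ask import statement
from getdata import getData

@ask.intent("get_grades")  # hypothetical intent name
def get_grades():
    dictionary = getData()  # runs the scrape and returns fresh data each time
    return statement("Your grade in class one is {}".format(dictionary["class1"]))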
Is this what you were asking about?
This issue has been resolved.
As for the original question, I never did figure out how to import just the dictionary rather than first running the function that generates it. Furthermore, I realized there had to be a more practical solution than constantly running a script like that, which even then would not deliver brand-new data.
My solution was to make the script that gets the data start running at the same time as the launch function. Here is the final script for the first intent (the rest of it remained the same):
#ask.intent("start_skill")
def start_skill():
welcome_message = 'What is the password?'
thread = threading.Thread(target=getData, args=())
thread.daemon = True
thread.start()
return question(welcome_message)
def getData():
#script to get data here
#other intents and rest of script here
By design, the skill requested a numeric passcode to make sure I was the one using it before it would read the data. (That was probably pointless, but this skill is at least as much for my own educational reasons as for practical ones, so, for the extra practice, I wanted it to have as many features as I could justify.) So, by the time you can actually ask for the data, the script that gets it will have finished running (I have tested this and it seems to work without fail).
I'm building a simple server-client app using sockets. Right now, I am trying to get my client to print to the console only when it receives a specific message (actually, when it doesn't receive a specific message), but for some reason it is really inconsistent: every other time I run it, it goes through the other branch in my code. Sometimes it works as it should, and then it randomly breaks for a couple of uses.
Here is the code on my client side:
def post_checker(client_socket):
    response = client_socket.recv(1024)
    # check if response is "NP" for a new post from another user
    if response == "NP":
        new_response = client_socket.recv(1024)
        print new_response
    else:  # print original message being sent
        print response
where post_checker is called in the main function simply as post_checker(client_socket). Basically, sometimes I get "NPray" printed to my console (when the client only expects to receive the username "ray"), and other times it prints correctly.
Here is the corresponding server code:
for sublist in user_list:
    client_socket.send("NP")
    client_socket.send(sublist[1] + " ")
where user_list is a nested list and sublist[1] is the username I wish to print out on the client side.
What's going on here?
The nature of your problem is that TCP is a streaming protocol. The bufsize in recv(bufsize) is a maximum size. The recv function will return whatever data is available, even if not all of the bytes that were sent have arrived yet.
See the documentation for details.
This causes problems when you've only sent half the bytes, but you've already started processing the data. I suggest you take a look at the "recvall" concept from this site or you can also consider using UDP sockets (which would solve this problem but may create a host of others as UDP is not a guaranteed protocol).
You may also want to let the Python packages handle some of the underlying framework for you; consider using a SocketServer, as documented in the standard library.
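For reference, a minimal sketch of the recvall idea mentioned above: a fixed-length read loop that keeps calling recv until the expected number of bytes has arrived (raising EOFError on a closed connection is just one way to handle that case):

def recvall(sock, n):
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:  # connection closed before the full message arrived
            raise EOFError("socket closed after %d of %d bytes" % (len(data), n))
        data += chunk
    return data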
buffer = []

def recv(sock):
    global buffer
    message = b""
    while True:
        if not (b"\r\n" in b"".join(buffer)):
            chunk = sock.recv(1024)
            if not chunk:
                break
            buffer.append(chunk)
        concat = b"".join(buffer)
        if b"\r\n" in concat:
            message = concat[:concat.index(b"\r\n")]
            concat = concat[concat.index(b"\r\n") + 2:]
            buffer = [concat]
            break
    return message

def send(sock, data):
    sock.send(data + b"\r\n")
I have tested this, and in my opinion, it works perfectly.
My use case: I have two scripts that exchange data quickly, and every now and then one of the buffers receives more than it should. With this script, whatever extra is received stays saved, and it keeps receiving until there is a newline in the data; then it splits at the newline, saves the remainder, and returns the message perfectly separated.
(I translated this, so please excuse me if anything is wrong or misunderstood.)
I'm making a SocketServer that will need to handle a lot of commands. So, to keep my RequestHandler from becoming too long, it will call different functions depending on the command. My dilemma is how to make them send info back to the client.
Currently I'm making the functions "yield" everything they want to send back to the client, but I suspect that's probably not the Pythonic way.
# RequestHandler
func = __commands__.get(command, unkown_command)
for message in func():
    self.send(message)

# example_func
def example():
    yield 'ip: {}'.format(ip)
    yield 'count: {}'.format(count)
    ...
    for ping in pinger(ip, count):
        yield ping
Is this an ugly use of yield? The only alternative I can think of is for the RequestHandler to pass itself as an argument when it calls the function:
func(self)
and then in the function
def example(handler):
    ...
    handler.send('ip: {}'.format(ip))
But this way doesn't feel much better.
def example():
    yield 'ip: {}'.format(ip)
    yield 'count: {}'.format(count)
What strikes me as strange in this solution is not the use of yield itself (which can be perfectly valid) but the fact that you're losing a lot of information by turning your data into strings prematurely.
In particular, for this kind of data, simply returning a dictionary and handling the sending in the caller seems more readable:
def example():
    return {'ip': ip, 'count': count}
This also helps you separate content and presentation, which might be useful if you want, for example, to return data encoded in XML but later switch to JSON.
If you want to yield intermediate data, another possibility is using tuples: yield ('ip', ip). This way you keep the original data and can start processing the values immediately outside the function.
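A small sketch of that tuple variant, with placeholder values and the formatting moved to the caller:

def example(ip='127.0.0.1', count=4):  # placeholder values for illustration
    yield ('ip', ip)
    yield ('count', count)

# the caller keeps the raw values and decides on the presentation
for key, value in example():
    print('{}: {}'.format(key, value))  # or handler.send(...), or JSON encoding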
I do the same as you with yield. The reason for this is simple:
With yield, the main loop can easily handle the case where sending data to one socket would block. Each socket gets a buffer for outgoing data that you fill with the yield. The main loop tries to send as much of that as possible to the socket; when it blocks, it records how far it got in the buffer and waits for the socket to be ready for more. When the buffer is empty, it runs next(func) to get the next chunk of data.
I don't see how you would do that with handler.send('ip: {}'.format(ip)). When that socket blocks, you are stuck. You can't pause that send and easily handle other sockets.
Now for this to be useful there are some assumptions:
the data each yield sends is considerable and you don't want to generate all of it into one massive buffer ahead of time
generating the data for each yield takes time and you want to already send the finished parts
you want to use reply = yield data, waiting for the peer to respond to the data in some way. Yes, you can make this a back-and-forth: next(func) becomes func.send(reply).
Any of these is a good reason to go the yield way, or coroutines in general. The alternative seems to be to use one thread per socket.
Note: func can also call other generators using yield from, which makes it easy to split a large problem into smaller handlers and to share common parts.
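A rough sketch of that buffered main-loop pattern, assuming a nonblocking socket (the surrounding select/selectors loop is omitted, and buffers and pump are illustrative names, not part of the original code):

buffers = {}  # sock -> [generator, bytearray of unsent output]

def pump(sock):
    # send buffered data; refill from the generator when the buffer empties
    gen, out = buffers[sock]
    while True:
        if not out:
            try:
                out.extend(next(gen).encode())  # pull the next chunk of the reply
            except StopIteration:
                del buffers[sock]  # command finished, nothing left to send
                return
        try:
            sent = sock.send(out)
            del out[:sent]  # drop whatever the OS accepted
        except BlockingIOError:
            return  # socket full: call pump(sock) again when select reports writable

# register a command, then pump whenever the socket is writable:
# buffers[sock] = [func(), bytearray()]
# pump(sock)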
I am writing a little Python script that parses the input from a QR reader (which is seen as a keyboard by the system).
At the moment I am using raw_input(), but this function waits for an EOF/end-of-line symbol before submitting the received string to the program.
I am wondering if there is a way to continuously parse the input string and not just in chunks limited by a line end.
In practice:
- Is there a way in Python to asynchronously and continuously parse console input?
- Is there a way to change raw_input() (or an equivalent function) to look for another character as the signal to submit the string to the program?
It seems like you're generally trying to solve two problems:
Read input in chunks
Parse that input asynchronously
For the first part, it will vary greatly based on the specifics of the input function you're calling, but for standard input you could use something like
sys.stdin.read(1)
As for parsing asynchronously, there are a number of approaches you could take. Python is synchronous, so you will necessarily have to involve some subprocess calls. Manually spawning a function using the subprocess library is one option. You could also use something like Redis or some lightweight job queue to push input chunks onto and have them read and processed by a background script. Finally, gevent is a very popular coroutine-based library for spawning asynchronous processes. Using gevent, this whole setup would look something like this:
import sys
import gevent

class QRLoader(object):
    def __init__(self):
        self.data = []

    def add_data(self, data):
        self.data.append(data)
        # if self.data constitutes a full QR code,
        # do something with the data
        gevent.spawn(parse_async)

def parse_async():
    # do something with qr_loader.data
    ...

qr_loader = QRLoader()

while True:
    data = sys.stdin.read(1)
    if data:
        qr_loader.add_data(data)