In PyQt5, I want to read my serial port after writing (requesting a value) to it. I've got it working using readyRead.connect(self.readingReady), but then I'm limited to outputting to only one text field.
The code for requesting parameters sends a string to the serial port. After that, I'm reading the serial port using the readingReady function and printing the result to a plainTextEdit form.
def read_configuration(self):
    if self.serial.isOpen():
        self.serial.write("?request1\n".encode())
        self.label_massGainOutput.setText(f"{self.serial.readAll().data().decode()}"[:-2])
        self.serial.write("?request2\n".encode())
        self.serial.readyRead.connect(self.readingReady)
        self.serial.write("?request3\n".encode())
        self.serial.readyRead.connect(self.readingReady)

def readingReady(self):
    data = self.serial.readAll()
    if len(data) > 0:
        self.plainTextEdit_commandOutput.appendPlainText(f"{data.data().decode()}"[:-2])
    else:
        self.serial.flush()
The problem I have is that I want every answer from the serial port to go to a different plainTextEdit form. The only solution I see now is to write a separate readingReady function for every request (and I have a lot; only three are shown here). There must be a better way to do this. Maybe using arguments in the readingReady function? Or returning a value from the function that I can redirect to the correct form?
Without using the readyRead signal, all my values are one behind: the first request prints nothing, the second prints the first response, and so on, and the last response is never printed.
Does someone have a better way to implement this functionality?
QSerialPort has an asynchronous API (readyRead) and a synchronous API (waitForReadyRead). If you only read the configuration once at startup, and UI freezing during this process is not critical for you, you can use the synchronous API.
serial.write(f"?request1\n".encode())
serial.waitForReadyRead()
res = serial.read(10)
serial.write(f"?request2\n".encode())
serial.waitForReadyRead()
res = serial.read(10)
This simplification assumes that each response arrives in one chunk and that a message is at most 10 bytes, neither of which is guaranteed. Actual code should look something like this:
def isCompleteMessage(res):
    ...  # your code here

serial.write("?request2\n".encode())
res = b''
while not isCompleteMessage(res):
    serial.waitForReadyRead()
    res += serial.read(10)
Alternatively, you can create a worker or thread, open the port and run the queries in it synchronously, and deliver the responses to the application using signals: no freezes, clear code, and only a slightly more complicated system. A sketch of that approach follows.
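Here is a minimal sketch of that worker-thread idea, assuming hypothetical request names and a line-based protocol terminated by \r\n (adjust to your device). The port is queried synchronously inside the thread, and each response is emitted through a signal together with the request that produced it, so the main window can route every answer to a different widget from a single slot:

from PyQt5.QtCore import QThread, pyqtSignal
from PyQt5.QtSerialPort import QSerialPort

class SerialWorker(QThread):
    # emits (request, response) for every completed query
    response_ready = pyqtSignal(str, str)

    def __init__(self, port_name, requests, parent=None):
        super().__init__(parent)
        self.port_name = port_name
        self.requests = requests  # e.g. ["?request1", "?request2", "?request3"]

    def run(self):
        # the QSerialPort is created inside run() so it lives in this thread
        serial = QSerialPort()
        serial.setPortName(self.port_name)
        if not serial.open(QSerialPort.ReadWrite):
            return
        for req in self.requests:
            serial.write(f"{req}\n".encode())
            res = b''
            while not res.endswith(b'\r\n'):  # assumed message terminator
                if not serial.waitForReadyRead(1000):
                    break  # timed out; give up on this request
                res += bytes(serial.readAll())
            self.response_ready.emit(req, res.decode().rstrip('\r\n'))
        serial.close()

In the main window you would then connect response_ready to one slot that dispatches on the request name, for example via a dict mapping "?request1" to self.label_massGainOutput.setText and "?request2" to self.plainTextEdit_commandOutput.appendPlainText.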
I am gathering data from a web API (using Python) and I am using a loop to go through thousands of calls to the API. The function was running fine, but my computer went to sleep and the internet connection was lost. I am storing the data as a list of dictionaries while calling the API. My question is this: when the function failed, since my list was inside the function, I couldn't even get the several hundred successful calls it made before it failed. How can I add error handling, or some other method, so that if it fails at some point, say after 500 calls, I can still get the 499 pieces of data?
If I had run the code without putting it into a function, my list would still have been usable up to the point where the code broke, but I felt like putting it into a function was "more correct".
# this is how the function is set up in pseudo-code:
def api_call(x):
    my_info = []
    for i in x:
        dictionary = {}
        url = f'http://www.api.com/{i}'
        # ... request the url and parse the json response ...
        dictionary['data'] = json['data']
        my_info.append(dictionary)
    return my_info

another_variable = api_call(x)
Just wrap the loop in a try/except/finally block. The finally clause is always executed before leaving the try statement, whether or not an exception occurred, so you can return the partial list from there (see the Python documentation on the try statement for details).
def api_call(x):
    my_info = []
    try:
        for i in x:
            dictionary = {}
            url = f'http://www.api.com/{i}'
            # ... request the url and parse the json response ...
            dictionary['data'] = json['data']
            my_info.append(dictionary)
    except Exception as e:
        print('Oopsie')  # can log the error here if you need to
    finally:
        return my_info

another_variable = api_call(x)
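One caveat: a return inside finally swallows the exception entirely, so you never find out what actually went wrong. If you want both the partial data and the traceback, a variant that saves the results before re-raising might look like this (a sketch; fetch_one and save_partial are hypothetical helpers standing in for your request code and whatever persistence you choose):

def api_call(x):
    my_info = []
    try:
        for i in x:
            my_info.append(fetch_one(i))  # hypothetical: one API request
    except Exception:
        save_partial(my_info)  # hypothetical: write what we have so far to disk
        raise  # re-raise so the failure is still visible
    return my_info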
I am playing around with a small project to get a better understanding of web technologies.
One requirement is that if multiple clients have access to my site and one makes a change, all the others should be notified. From what I have gathered, Server-Sent Events seem to do what I want.
However when I open my site in both Firefox and Chrome and try to send an event, only one of the browsers gets it. If I send an event again only one of the browsers gets the new event, usually the browser that did not get event number one.
Here is the relevant code snippets.
Client:
console.log("setting sse handlers")
viewEventSource = new EventSource("{{ url_for('viewEventRequest') }}");
viewEventSource.onmessage = handleViewEvent;
function handleViewEvent(event){
console.log("called handle view event")
console.log(event);
}
Server:
@app.route('/3-3-3/view-event')
def view_event_request():
    return Response(handle_view_event(), mimetype='text/event-stream')

def handle_view_event():
    while True:
        for message in pubsub_view.listen():
            if message['type'] == 'message':
                data = 'retry: 1\n'
                data += 'data: ' + message['data'] + '\n\n'
                return data

@app.route('/3-3-3/test')
def test():
    red.publish('view-event', "This is a test message")
    return make_response("success", 200)
My question is, how do I get the event send to all connected clients and not just one?
Here are some gists that may help (I've been meaning to release something like 'flask-sse', based on 'django-sse'):
https://gist.github.com/3680055
https://gist.github.com/3687523
also useful - https://github.com/jkbr/chat/blob/master/app.py
The first thing I notice about your code is that 'handle_view_event' is not a generator.
Though it is in a 'while' loop, the use of 'return' will always exit the function the first time it returns data; a function can only return once. I think you want 'yield' instead.
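A minimal sketch of the generator version, using the same pubsub_view object from your snippet; each yield streams one SSE message to the client while the loop keeps running:

def handle_view_event():
    # yield instead of return: the function becomes a generator, and the
    # streaming response keeps emitting events for as long as the client listens
    for message in pubsub_view.listen():
        if message['type'] == 'message':
            yield 'retry: 1\ndata: ' + message['data'] + '\n\n'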
In any case, the above links should give you an example of a working setup.
As Anarov says, WebSockets and socket.io are also an option, but SSE should work anyway. I think socket.io supports using SSE if WebSockets are not needed.
I know this is more of a learning thing than a problem in programming, but I still need to ask it. Please don't downvote it; I wouldn't have asked it here if I knew of any other appropriate place. I have a view as follows:
def takedown(request, aid):
    approveobj = get_object_or_404(approve, pk=aid)
    # fetching mapping
    map = mapping.objects.get(appval=approveobj)
    try:
        # deleting option from main database
        map.optval.delete()
        # changing the status of the appval
        map.appval.status = 'Pending'
        map.appval.save()
        # finally deleting the map
        map.delete()
    except:
        print("Error in taking down the entry")
    redirect_url = "/wars/configure/" + str(map.appval.warval.id) + "/"
    return HttpResponseRedirect(redirect_url)
I want to design some tests for the above view. At present I'm checking whether it redirects to the appropriate URL or not. What else can I test? I need to test it thoroughly.
Looking at your view, I can see three other possible tests:
Test that the view returns status code 404 for an aid that does not exist
Check that the map object exists in the database. Fetch the view in your test, then check that the map object has been deleted as you expected.
Test that your view works as expected when there is an exception in the try/except block. It's not clear what you're expecting to go wrong here. Note that because you only print the error, nothing will be displayed to the user, so it's tricky to test this. A sketch of the first two tests follows.
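Here is a sketch of the first two tests, assuming the view is wired to a URL named 'takedown' and that the approve and mapping model fields shown are guesses you would adjust to your schema:

from django.test import TestCase
from django.urls import reverse
# import your approve and mapping models here

class TakedownViewTests(TestCase):
    def test_missing_aid_returns_404(self):
        # an aid that does not exist should trigger get_object_or_404
        response = self.client.get(reverse('takedown', args=[999999]))
        self.assertEqual(response.status_code, 404)

    def test_map_is_deleted_and_view_redirects(self):
        approveobj = approve.objects.create(status='Approved')  # adjust fields as needed
        map_obj = mapping.objects.create(appval=approveobj)     # adjust fields as needed
        response = self.client.get(reverse('takedown', args=[approveobj.pk]))
        self.assertEqual(response.status_code, 302)
        self.assertFalse(mapping.objects.filter(pk=map_obj.pk).exists())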
I am getting an ldap.SIZELIMIT_EXCEEDED error when I run this code:
import ldap
url = 'ldap://<domain>:389'
binddn = 'cn=<username> readonly,cn=users,dc=tnc,dc=org'
password = '<password>'
conn = ldap.initialize(url)
conn.simple_bind_s(binddn,password)
base_dn = "ou=People,dc=tnc,dc=org"
filter = '(objectClass=*)'
attrs = ['sn']
conn.search_s(base_dn, ldap.SCOPE_SUBTREE, filter, attrs)
Where username is my actual username, password is my actual password, and domain is the actual domain.
I don't understand why this is. Can somebody shed some light?
Manual: http://www.python-ldap.org/doc/html/ldap.html
exception ldap.SIZELIMIT_EXCEEDED
An LDAP size limit was exceeded. This could be due to a sizelimit configuration on the LDAP server.
I think your best bet here is to limit the size of the result set you request from the server. You can do that by setting the attribute LDAPObject.sizelimit (deprecated) or by using the sizelimit parameter of search_ext(); a sketch follows below.
You should also make sure your bind was actually successful...
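A minimal sketch of a client-requested limit with python-ldap, reusing the connection variables from the question. The sizelimit parameter caps how many entries the search may return; note that a lower server-imposed limit still wins, and exceeding either limit raises SIZELIMIT_EXCEEDED again, so paged results (see the other answer) remain the robust fix:

# conn, base_dn, filter and attrs as defined in the question
msgid = conn.search_ext(base_dn, ldap.SCOPE_SUBTREE, filter, attrs, sizelimit=50)
try:
    rtype, rdata = conn.result(msgid)
except ldap.SIZELIMIT_EXCEEDED:
    rdata = []  # more than 50 entries matched; no usable data was returned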
You're most likely encountering that exception because the server you're communicating with has more results than can be returned in a single request. In order to get around this you need to use paged results, which can be done by using SimplePagedResultsControl.
Here's a Python 3 implementation that I came up with after heavily editing what I found here and in the official documentation. At the time of writing this, it works with the pip3 package python-ldap version 3.2.0.
import ldap
from ldap.controls import SimplePagedResultsControl

def get_list_of_ldap_users():
    hostname = "<domain>:389"
    username = "username_here"
    password = "password_here"
    base = "ou=People,dc=tnc,dc=org"

    print(f"Connecting to the LDAP server at '{hostname}'...")
    connect = ldap.initialize(f"ldap://{hostname}")
    connect.set_option(ldap.OPT_REFERRALS, 0)
    connect.simple_bind_s(username, password)

    search_flt = "(objectClass=*)"
    page_size = 500  # how many users to search for in each page; this depends on the server's maximum setting (default highest value is 1000)
    searchreq_attrlist = ["sn"]  # change these to the attributes you care about

    req_ctrl = SimplePagedResultsControl(criticality=True, size=page_size, cookie='')
    msgid = connect.search_ext(base=base, scope=ldap.SCOPE_SUBTREE, filterstr=search_flt, attrlist=searchreq_attrlist, serverctrls=[req_ctrl])

    total_results = []
    pages = 0
    while True:  # loop over all of the pages using the same cookie, otherwise the search will fail
        pages += 1
        rtype, rdata, rmsgid, serverctrls = connect.result3(msgid)
        for user in rdata:
            total_results.append(user)

        pctrls = [c for c in serverctrls if c.controlType == SimplePagedResultsControl.controlType]
        if pctrls:
            if pctrls[0].cookie:  # copy the cookie from the response control to the request control
                req_ctrl.cookie = pctrls[0].cookie
                msgid = connect.search_ext(base=base, scope=ldap.SCOPE_SUBTREE, filterstr=search_flt, attrlist=searchreq_attrlist, serverctrls=[req_ctrl])
            else:
                break
        else:
            break

    return total_results
This will return a list of all users, but you can edit it as required to return what you want without hitting the SIZELIMIT_EXCEEDED issue :)
See here for what to do when you get this error:
How to get more search results than the server's sizelimit with Python LDAP?
The filter you provided, (objectClass=*), is a presence filter. In this case it limits the results of the search request to objects in the directory at and underneath the base object you supplied, which is every object underneath the base object, since every object has at least one objectClass. Restrict your search by using a more restrictive filter (for example, (sn=Smith*)), a tighter scope, a lower base object, or all three. For more information on the topic of the search request, see Using ldapsearch and LDAP: Programming Practices.
Directory server administrators are free to impose a server-wide limit on the number of entries that can be returned to LDAP clients; this is known as a server-imposed size limit. There is also a time limit, which follows the same rules.
LDAP clients should always supply a size limit and a time limit with a search request; these limits, known as client-requested limits, cannot override the server-imposed limits, however.
Active Directory defaults to returning a maximum of 1000 results. What is sort of annoying is that rather than returning 1000 results along with an associated error code, it seems to send the error code without the data.
eDirectory starts with no default and is completely configurable to whatever you like.
Other directories handle it differently. (Edit and add in, if you know).
You must use paged search to achieve this.
The page size depends on your LDAP server; 1000 would work for Active Directory.
Have a look at http://google-apps-for-your-domain-ldap-sync.googlecode.com/svn/trunk/ldap_ctxt.py for an example