I'm using the Nest API to poll the current temperature and related temperature data from two of my Nests (simultaneously).
I was initially polling this data every minute but started getting an error:
nest.nest.APIError: blocked
I don't get the error every minute; it's more intermittent, roughly every 5-10 minutes.
Reading through their documentation, it seems that while pulling data once per minute is permissible, that's also the maximum recommended query frequency.
So I set it to two minutes. I'm still getting the error.
I'm using this Python package, although I'm starting to wonder if there's too much going on under the hood that is making unnecessary requests.
Has anyone had any experience with this type of Nest error, or this Python package before?
Does polling two Nests with the same authenticated call result in multiple requests, as it relates to their data limiting?
Should I just scrap this package and roll my own? (Rolling my own is generally my preference, but I need to learn to stop rewriting everything the moment I hit a snag like this just so I can fully control and thoroughly understand every aspect of an integration, right?)
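To be concrete, the shape of the loop I'm after is a single shared client for both thermostats plus a back-off whenever the APIError shows up, something like the sketch below. The client construction and the structures/thermostats attributes are just my reading of the package docs, so treat them as assumptions.

    import time
    import nest  # the python-nest package mentioned above

    POLL_INTERVAL = 120   # seconds; two minutes, to stay under the documented limit
    BACKOFF = 600         # wait ten minutes after a "blocked" error before retrying

    # Assumption: the client takes OAuth credentials as described in the package
    # docs; one client is shared so both thermostats ride on the same session.
    napi = nest.Nest(client_id="...", client_secret="...",
                     access_token_cache_file="nest_token.json")

    while True:
        try:
            # Assumption: devices are reachable via structures[].thermostats[]
            # and expose a `temperature` attribute, as in the package examples.
            for structure in napi.structures:
                for thermostat in structure.thermostats:
                    print(thermostat.name, thermostat.temperature)
            time.sleep(POLL_INTERVAL)
        except nest.nest.APIError as err:   # the "blocked" error from above
            print("API error: %s -- backing off" % err)
            time.sleep(BACKOFF)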
I have been working with the Coinbase Websocket API recently for data analysis purposes. I am trying to track the order book in at least seconds-frequency.
As far as I'm aware, it's possible to use the REST API for that, but it doesn't include timestamps. The other options are the websocket level2 updates and the full channel.
The problem is that when I process the level2 updates I constantly fall behind in time (I didn't focus on processing speed while programming, since that wasn't my goal, and I have neither the hardware nor the connection speed for it), so, for example, after 30 minutes I've only managed to process 10 minutes of data.
It gets worse if, for whatever reason, I'm disconnected from the exchange: I have to reconnect, and I'm left with a big empty window of data in the middle.
Is there an aggregated feed, or some other way to do this (receive all the updates for a given second in one batch, or something like that), that I'm not aware of? Or should I just resign myself to improving my code and buying better equipment?
P.S.: I'm relatively new here, so sorry if this type of question doesn't fit!
Update, in case anyone is interested: I ended up opening multiple websockets over staggered time windows and reconnecting them periodically in order to miss as few price updates as possible.
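For anyone else fighting the backlog itself, one pattern worth trying is to keep the websocket callback doing nothing but enqueueing raw messages and to do the order-book bookkeeping in a separate consumer thread. A minimal sketch with the websocket-client package; the feed URL and subscribe message are assumptions on my part, so check them against the current Coinbase docs.

    import json
    import queue
    import threading

    import websocket  # pip install websocket-client

    FEED_URL = "wss://ws-feed.exchange.coinbase.com"   # verify against current docs
    SUBSCRIBE = {"type": "subscribe",
                 "product_ids": ["BTC-USD"],
                 "channels": ["level2"]}

    messages = queue.Queue()   # the reader only enqueues; the consumer does the work

    def on_open(ws):
        ws.send(json.dumps(SUBSCRIBE))

    def on_message(ws, raw):
        # Keep this as cheap as possible so the socket never falls behind.
        messages.put(raw)

    def consume():
        while True:
            update = json.loads(messages.get())
            if update.get("type") == "l2update":
                pass  # apply update["changes"] to the local order book here

    threading.Thread(target=consume, daemon=True).start()
    ws = websocket.WebSocketApp(FEED_URL, on_open=on_open, on_message=on_message)
    ws.run_forever()   # wrap this in reconnect logic (or the staggered sockets above)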
I have a few API microservices that use gorilla/mux and gorilla/handlers and that randomly return characters that are not in the string, or only the first character of it. The problem never shows up when tested in Postman, appears about 10% of the time when called from the React.js front end, about 50% of the time when called from an AWS Lambda Python function triggered every minute, and almost never when the Lambda runs once every 3 minutes. To be honest, I'm not sure where in the stack this is going wrong.
The microservices run in Docker containers within an OpenShift cluster and are routed through a Kong API gateway. Currently I'm leaning towards the problem being in gorilla/mux and gorilla/handlers or in the Go code, because we had the same issue before we moved to OpenShift and it seems to be happening across a few different clients and services.
First, we looked into the problem being in the frontend. This is where we noticed the first-character-only bug. We thought it might be an issue with React displaying the data before it had all been pulled, so we logged the object that was returned, and it contained the same data. What is interesting is that most of the strings are fine except one that is usually >100 characters, and when it happens it happens to all of the strings with the same key. Again, this only happens about 5-10% of the time, and refreshing normally fixes it.
Then we noticed that an AWS Lambda Python function was getting garbage data from a similar API service. Sometimes '\000000\0000000' repeated over and over, or random fragments of other strings from the JSON output, would appear in the middle of other strings. This started happening more often (as more data was added to the DB tables) until it occurred about 50% of the time when the Lambda function was called every minute. I changed the schedule to every three minutes and the problem disappeared.
When I go looking for where in the stack this happens using Postman, the problem never shows up, even when I blast calls in rapid succession, which makes this a bit of a pain to track down.
I'm not certain that the Lambda and React front-end problems are the same, but I figured the inconsistency is what is consistent between them. I'm chalking up the difference in how the inconsistency is expressed to how the JSON is read by the clients.
I'm hoping someone can point me in the right direction on this. Google and GitHub aren't turning up any known bugs that look similar, and I'm at a bit of a loss as to where to probe next.
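My next step is to try to reproduce it outside Postman by hammering the endpoint concurrently from a small script and checking every response for the symptoms above (NUL runs, single-character values); something along these lines, with a placeholder URL and field name:

    import json
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "https://example.com/api/endpoint"   # placeholder for the real service
    SUSPECT_KEY = "description"                # placeholder for the long string field

    def probe(i):
        body = requests.get(URL, timeout=10).text
        try:
            data = json.loads(body)
        except ValueError:
            return "request %d: response is not valid JSON" % i
        value = data.get(SUSPECT_KEY, "") if isinstance(data, dict) else ""
        if "\x00" in body or len(str(value)) <= 1:
            return "request %d: suspicious payload: %r" % (i, str(value)[:50])
        return None

    # Serialized calls (as in Postman) never trigger it, so make the requests overlap.
    with ThreadPoolExecutor(max_workers=20) as pool:
        for result in pool.map(probe, range(200)):
            if result:
                print(result)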
I'm using jira-python to automate a bunch of tasks in Jira. One thing that I find weird is that jira-python takes a long time to run. It seems like it's loading or something before sending the requests. I'm new to python, so I'm a little confused as to what's actually going on. Before finding jira-python, I was sending requests to the Jira REST API using the requests library, and it was blazing fast (and still is, if I compare the two). Whenever I run the scripts that use jira-python, there's a good 15 second delay while 'loading' the library, and sometimes also a good 10-15 second delay sending each request.
Is there something I'm missing with Python that could be causing this issue? Is there any way to keep a Python script running as a service so it doesn't need to 'load' the library each time it's run?
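To make that second question concrete, what I have in mind is a single long-running process that builds the client once and then reuses it, rather than re-importing and re-authenticating on every run; roughly like this (the server URL and credentials are placeholders):

    import time

    from jira import JIRA   # jira-python

    # Build the client once: the import and the initial handshake with the server
    # are the slow parts, so they should only happen once per process.
    client = JIRA(server="https://jira.example.com",
                  basic_auth=("user", "api-token"))

    def do_work():
        # placeholder for the actual automation tasks
        for issue in client.search_issues("assignee = currentUser()", maxResults=10):
            print(issue.key, issue.fields.summary)

    while True:
        do_work()
        time.sleep(60)   # keep the process (and the authenticated session) alive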
#ThePavoIC, you seem to be correct. I notice MASSIVE changes in speed if Jira has been restarted and re-indexed recently. Scripts that would take a couple minutes to run would complete in seconds. Basically, you need to make sure Jira is tuned for performance and keep your indexes up to date.
I was wondering if it would be a good idea to use callLater in Twisted to keep track of auction endings. It would be a callLater on the order of hundreds of thousands of seconds; does that matter? It seems like it would be very convenient, but then again it seems like a horrible idea if the server crashes.
Keeping a database of when all the auctions are ending seems like the most secure solution, but checking the whole database each second to see if any auction has ended seems very expensive.
If the server crashes, maybe the server can recreate all the callLater's from database entries of auction end times. Are there other potential concerns for such a model?
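To make the recovery idea concrete, this is roughly the model I'm picturing: on startup, read the end times back from the database and hand each one to callLater, firing anything already past due immediately (load_auction_end_times() below is a stand-in for the real query):

    import time

    from twisted.internet import reactor

    def load_auction_end_times():
        """Stand-in for the real DB query: returns (auction_id, end_timestamp) pairs."""
        return [(1, time.time() + 3600), (2, time.time() + 86400)]

    def close_auction(auction_id):
        # mark the auction closed in the database, notify bidders, etc.
        print("auction %d has ended" % auction_id)

    def schedule_pending_auctions():
        now = time.time()
        for auction_id, end_ts in load_auction_end_times():
            delay = max(0, end_ts - now)   # anything already overdue fires right away
            reactor.callLater(delay, close_auction, auction_id)

    reactor.callWhenRunning(schedule_pending_auctions)
    reactor.run()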
One of the Divmod projects, Axiom, might be applicable here. Axiom is an object database. One of its unexpected, useful features is a persistent scheduling system.
You schedule events using APIs provided by the database. When the events come due, a callback you specified is called. The events persist across process restarts, since they're represented as database objects. Large numbers of scheduled events are supported, by only doing work to keep track when the next event is going to happen.
The canonical Divmod site went down some time ago (sadly the company is no longer an operating concern), but the code is all available at http://launchpad.net/divmod.org and the documentation is being slowly rehosted at http://divmod.readthedocs.org/.
I'm serving requests from several XMLRPC clients over WAN. The thing works great for, let's say, a period of one day (sometimes two), then freezes in socket.py:
data = self._sock.recv(self._rbufsize)
_sock.timeout is -1, _sock.gettimeout is None
There is nothing special I do in the main thread (just receiving XMLRPC calls); there are two other threads talking to the DB. Both of those threads work fine and survive this block (I checked with WinPdb). Clients send requests no longer than 1 KB, and there isn't any special content: just nice, clean strings in a dictionary. Between two freezes I serve tens of thousands of requests without problems.
Firewall is off, no strange software on the same machine, etc...
I use Windows XP and Python 2.6.4. I've checked the differences between 2.6.4 and 2.6.5 and didn't find anything important (or am I mistaken?). Version 2.7 is not an option, as I would be missing binaries for MySQLdb.
The only thing that happens from time to time, caused by clients with a poor internet connection, is that sockets break. This happens every 5-10 minutes (there are just five clients, each accessing the server every 2 seconds).
I've spent a great deal of time on this issue and I'm beginning to run out of ideas. Any hint or thought would be highly appreciated.
What exactly is happening in your OS's TCP/IP stack (possibly in the Python layers on top, but that's less likely) to cause this is a mystery. As a practical workaround, I'd set a timeout longer than the delays you expect between requests (10 seconds should be plenty if you expect a request every 2 seconds) and, if one occurs, close and reopen. (Calibrate the delay needed to work around the freezes without interrupting normal traffic by trial and error.) It's unpleasant to hack a fix without understanding the problem, I know, but being pragmatic about such things is a necessary survival trait in the world of writing, deploying and operating actual server systems. Be sure to comment the workaround accurately for future maintainers!
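The cheapest way to get that behaviour with the stdlib server on 2.6 is a default socket timeout set before the server is created; every socket accepted afterwards inherits it, so a client that goes silent raises socket.timeout instead of blocking in recv() forever. A sketch of the idea (the details of your server setup will differ):

    import socket
    from SimpleXMLRPCServer import SimpleXMLRPCServer  # Python 2.x module name

    # Sockets created after this call inherit the timeout, including the
    # per-request sockets the XML-RPC server accepts; a client that stalls
    # mid-request now raises socket.timeout instead of hanging in recv().
    socket.setdefaulttimeout(10)

    server = SimpleXMLRPCServer(("0.0.0.0", 8000), logRequests=False)
    server.register_function(lambda s: s, "echo")
    server.serve_forever()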
Thanks so much for the fast response. Right after I received it, I increased the timeout to 10 seconds. It is all running without problems now, but of course I'll need to wait another day or two for some sort of confirmation; only after 5 days will I be sure, and then I'll come back with the results. I see that 140K requests have already gone through fine; having had such a hard time with this one, I'd wait for at least another 200K.
What you proposed about auto-adapting the timeouts (without taking the system down) also sounds reasonable. Would the right way to go be to create a small class (e.g. AutoTimeoutCalibrator) and embed it directly into serial.py?
Yes, being pragmatic is the only way to avoid losing another 10 days trying to figure out the real reason behind it.
Thanks again, I'll be back with the results.
(Sorry, but for some reason I wasn't able to post this as a reply to your post.)