SoftLayer API -- exception when calling the SoftLayer API - Python

I got an exception, "TransportError: TransportError(0): ('Connection aborted.', error(110, 'Connection timed out'))", when I called the API method Virtual_Guest::getBandwidthTotal.
It happened in this situation:
- a single SoftLayer API username and key
- thousands of concurrent calls to the function at the same moment
So I do not know whether the exception happened because of the huge number of concurrent API calls, a network problem, or some other reason.
If it is caused by the concurrent calls, here is an additional question:
as I said before, all calls used the same username and key. If I make the concurrent calls with different usernames and keys, will this exception happen as well?

Timeout errors are usually generated when the client is waiting for a response from the API; this situation is documented here. In your case you can try increasing the timeout of your client. If you are using the SoftLayer Python client, please see the documentation on increasing the timeout here, and also verify that your network connection is fine.
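If you are on the SoftLayer Python client, a minimal sketch of raising the timeout might look like this (the 240-second value is only an example, and create_client_from_env is assumed to pick up your credentials from the environment or config file):

import SoftLayer

# timeout is in seconds; pick a value that fits your workload
client = SoftLayer.create_client_from_env(timeout=240)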
Regards

There is a limit on the number of API calls that can be made by an account per second. I believe this limit is per username; however, I would not recommend using a bunch of different users to get around it.
My suggestion would be to use an objectMask to get as much data as possible in one API call, instead of making numerous API calls.
Instead of calling Virtual_Guest::getBandwidthTotal on every virtual guest on your account, you could call
SoftLayer_Account::getVirtualGuests(mask="mask[inboundPrivateBandwidthUsage,inboundPublicBandwidthUsage,outboundPrivateBandwidthUsage,outboundPublicBandwidthUsage]")
You might also need to use result limits so that one big call doesn't time out either.
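With the SoftLayer Python client, a rough sketch of that combination (the mask is the one above; the page size of 50 is an arbitrary choice):

import SoftLayer

client = SoftLayer.create_client_from_env()

mask = ('mask[inboundPrivateBandwidthUsage,inboundPublicBandwidthUsage,'
        'outboundPrivateBandwidthUsage,outboundPublicBandwidthUsage]')

# Page through the account's guests instead of issuing one
# getBandwidthTotal call per guest.
offset = 0
while True:
    guests = client.call('Account', 'getVirtualGuests',
                         mask=mask, limit=50, offset=offset)
    if not guests:
        break
    for guest in guests:
        pass  # read the bandwidth usage fields from each guest here
    offset += 50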

Related

RabbitMQ/pika - Single RPC callback for multiple clients

I have followed the examples at the official RabbitMQ site and I then tried to take it a step further. I tried to apply the same RPC logic with a single server and multiple clients. Following the examples, I am using BlockingConnection() for now. Each client calls the process_data_events() function in a loop and checks for its corresponding correlation_id. All of the clients check for their correlation id on the same callback_queue.
For example, in a setup of 2 clients and 1 server, there are 2 queues: one that both clients publish to, and one that both clients check for the corresponding correlation_id. The code works flawlessly with a single client and a single server (or even multiple servers), but fails to work when more than one client consumes on the callback_queue.
My experiments have shown that when a client receives (via process_data_events()) an id that is not theirs, that id is never processed by the other client. Hence a timeout occurs, or the connection is dropped since no heartbeat is sent for quite some time. The call after which the problem occurs is channel.basic_consume(queue='callback', on_message_callback=on_resp).
Should I use a unique callback queue for each client? The documentation was not as helpful as I would have hoped; is there something you would recommend I study?
I can post minimal code to reproduce the issue if you ask me to.
Thanks in advance
EDIT: This repo contains minimal code to reproduce the issue plus some more details.
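For reference, the official tutorial's client pattern gives each client its own exclusive, server-named callback queue rather than a shared one; a minimal sketch of that pattern (queue names such as rpc_queue are placeholders):

import uuid
import pika

class RpcClient(object):
    def __init__(self):
        self.connection = pika.BlockingConnection(
            pika.ConnectionParameters('localhost'))
        self.channel = self.connection.channel()
        # exclusive=True with an empty name asks the broker for a private,
        # auto-named queue that only this client consumes from
        result = self.channel.queue_declare(queue='', exclusive=True)
        self.callback_queue = result.method.queue
        self.channel.basic_consume(queue=self.callback_queue,
                                   on_message_callback=self.on_response,
                                   auto_ack=True)
        self.response = None
        self.corr_id = None

    def on_response(self, ch, method, props, body):
        if props.correlation_id == self.corr_id:
            self.response = body

    def call(self, payload):
        self.response = None
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(
            exchange='',
            routing_key='rpc_queue',
            properties=pika.BasicProperties(
                reply_to=self.callback_queue,
                correlation_id=self.corr_id),
            body=payload)
        while self.response is None:
            self.connection.process_data_events()
        return self.response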

TooManyRequests Overpass Error

I'm using overpy to query the Overpass API, and the nature of the data is such that I have a lot of queries to execute. I've run into the 429 OverpassTooManyRequests exception and I'm trying to play by the rules. I've tried introducing time.sleep calls to space out the requests, but I have no basis for how long the program should wait before continuing.
I found this link which mentions a "Retry-after" header:
How to avoid HTTP error 429 (Too Many Requests) python
Is there a way to access that header in an overpy response? I've been through the docs and the source code, but nothing stood out that would allow me to access that header so I can pause querying until it's acceptable to do so again.
I'm using Python 3.6 and overpy 0.4.
Maybe this isn't quite the answer you're seeking, but I ran into the same issue and fixed it by simply hosting my own OSM database server using Docker. Just clone the repo and follow the instructions:
https://github.com/mediasuitenz/docker-overpass-api
Also, per http://overpass-api.de/command_line.html, do check that you do not have a single "runaway" request that is taking up all the resources.
After verifying that I don't have runaway queries, I have taken Peter's advice and added a catch for the TooManyRequests exception that waits 30s and tries again. This seems to be working as an immediate solution.
I will also raise an issue with the originators of OverPy to suggest an enhancement to allow evaluating the /api/status, as per mmd's advice.
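Spelled out, that retry-on-429 approach is roughly the following (the 30-second wait and the retry cap are my own arbitrary choices):

import time
import overpy

api = overpy.Overpass()

def query_with_retry(q, wait=30, max_tries=5):
    # Retry when Overpass rate-limits us, sleeping a fixed interval
    # between attempts.
    for attempt in range(max_tries):
        try:
            return api.query(q)
        except overpy.exception.OverpassTooManyRequests:
            time.sleep(wait)
    raise RuntimeError('Overpass still rate-limiting after %d tries' % max_tries)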

Why do I get a 500 internal server error from the Google Drive API when adding users to a Google Sheet in Python?

I am writing a Python script using the Google Sheets API. It reads data and writes it to a new file, shares that file with a specified email address and returns the id of the new file.
import sys
import traceback

import apiclient

def read_sheet(self, spreadsheetId):
    try:
        result = self.service.spreadsheets().get(
            spreadsheetId=spreadsheetId,
            includeGridData=True,
            fields='namedRanges,properties,sheets').execute()
        return result
    except apiclient.errors.HttpError as e:
        traceback.print_exc()
        print(e)
        sys.exit(1)

def create_spreadsheet(self, data, email):
    try:
        newid = self.service.spreadsheets().create(
            body=data, fields='spreadsheetId').execute()
        newid = newid.get('spreadsheetId')
        self.give_permissions(email, newid)
        return newid
    except apiclient.errors.HttpError as e:
        traceback.print_exc()
        print(e)
        sys.exit(1)
This code works most of the time, but not with 100% reliability. Sometimes I get a 500 Internal Server Error, yet the file is still created in my account. I found a similar Stack Overflow question (Getting 500 Error when using Google Drive API to update permissions), but it didn't help. I want to know the exact reason for this. Can anyone help?
EDIT1:
This is the exact error message
https://www.googleapis.com/drive/v3/files/349hsadfhSindfSIins-rasdfisadfOsa3OQmE/permissions?sendNotificationEmail=true&alt=json&transferOwnership=false
returned "Internal Error. User message: "An internal error has
occurred which prevented the sharing of these item(s): Template"">
As hinted at above in DaimTo's comment, the error is due to Google Drive still processing the create request while you're trying to add the permission that shares the (new) file. Remember, when you add a file to Drive, Google's servers are still working on the file creation as well as making it accessible globally. Once that flurry of activity settles down, adding additional users to the document shouldn't be a problem.
You can see on this Drive API documentation page a description of the (500) error you received as well as the recommended course of action, which is to implement exponential backoff. That really just means you should pause a bit before trying again, and extend that delay each time you get the same error. He also pointed to another SO Q&A which you can look at. Another resource is this descriptive blog post. If you don't want to implement it yourself, you can try the retrying or backoff packages.
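A hand-rolled version might look roughly like this (request_fn is a hypothetical zero-argument callable wrapping the API call; which status codes count as transient is my assumption):

import random
import time

import apiclient

def with_backoff(request_fn, max_retries=5):
    # Exponential backoff with jitter: wait 1s, 2s, 4s, ... plus a
    # random fraction, retrying only on server-side errors.
    for n in range(max_retries):
        try:
            return request_fn()
        except apiclient.errors.HttpError as e:
            if e.resp.status not in (500, 502, 503, 504):
                raise
            time.sleep((2 ** n) + random.random())
    raise RuntimeError('still failing after %d retries' % max_retries)

In the code from the question you would wrap the sharing step, e.g. with_backoff(lambda: self.give_permissions(email, newid)).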
NOTE: You didn't show us all your code, but I changed the title of this question to more accurately reflect that you're using the Drive API for adding the permissions. While you've used the Sheets API to create the Sheet, realize that you can do all of this with the Drive API alone (and not use the Sheets API at all unless you're doing spreadsheet-oriented operations; the Drive API is for all file-related operations like sharing, copying, import/export, etc.).
Bottom line is that you can create Sheets using either API, but if you're not doing anything else with the Sheets API, why bother making your app more complex? If you want to see how to create Sheets with both APIs, there's a short segment in my blog post that covers this... you'll see that they're nearly identical, but using the Drive API does require one more thing: the MIME type.
If you want to learn more about both APIs, see this answer I gave to a related question that features additional learning resources I've created for both Drive and Sheets, most of which are Python-based.
I guess I'm late, but just in case: just add a few seconds of delay between the create request and the give-permissions one. For me, making the thread sleep for 10 seconds works. Try this:
import sys
import time
import traceback

def create_spreadsheet(self, data, email):
    try:
        newid = self.service.spreadsheets().create(
            body=data, fields='spreadsheetId').execute()
        newid = newid.get('spreadsheetId')
        time.sleep(10)  # give Drive time to finish creating the file
        self.give_permissions(email, newid)
        return newid
    except apiclient.errors.HttpError as e:
        traceback.print_exc()
        print(e)
        sys.exit(1)

Why does a search in the Gmail API return different results than a search on the Gmail website?

I'm using the gmail API to search emails from users. I've created the following search query:
ticket after:2015/11/04 AND -from:me AND -in:trash
When I run this query in the browser interface of Gmail I get 11 messages (as expected). When I run the same query in the API however, I get only 10 messages. The code I use to query the gmail API is written in Python and looks like this:
searchQuery = 'ticket after:2015/11/04 AND -from:me AND -in:trash'
messagesObj = google.get('/gmail/v1/users/me/messages', data={'q': searchQuery}, token=token).data
print messagesObj.resultSizeEstimate # 10
I sent the same message on to another gmail address and tested it from that email address and (to my surprise) it does show up in an API-search with that other email address, so the trouble is not the email itself.
After endlessly emailing around through various test Gmail accounts, I think (but am not 100% sure) that the browser-interface search function has a different definition of "me". It seems that the API search does not include emails which come from addresses with the same display name, while these results are in fact included in the browser search results. For example: if "Pete Kramer" sends an email from petekramer@icloud.com to pete@gmail.com (which both have their name set to "Pete Kramer"), it will show in the browser search and it will NOT show in the API search.
Can anybody confirm that this is the problem? And if so, is there a way to circumvent it to get the same results as the browser search returns? Or does anybody else know why the results from the Gmail browser search differ from the Gmail API search? All tips are welcome!
I would suspect it is the after query parameter that is giving you trouble. 2015/11/04 is not a valid ES5 ISO 8601 date. You could try the alternative after:<time_in_seconds_since_epoch>
# 2015-11-04 <=> 1446595200
searchQuery = 'ticket AND after:1446595200 AND -from:me AND -in:trash'
messagesObj = google.get('/gmail/v1/users/me/messages', data={'q': searchQuery}, token=token).data
print messagesObj.resultSizeEstimate # 11 hopefully!
The q parameter of the /messages/list works the same as on the web UI for me (tried on https://developers.google.com/gmail/api/v1/reference/users/messages/list#try-it )
I think the problem is that you are calling /messages rather than /messages/list
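For illustration, the same query through the google-api-python-client would look roughly like this (creds is assumed to be an already-authorized credentials object):

from googleapiclient.discovery import build

service = build('gmail', 'v1', credentials=creds)
resp = service.users().messages().list(
    userId='me',
    q='ticket after:2015/11/04 AND -from:me AND -in:trash').execute()
print(resp.get('resultSizeEstimate'))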
The first time your application connects to Gmail, or if partial synchronization is not available, you must perform a full sync. In a full sync operation, your application should retrieve and store as many of the most recent messages or threads as are necessary for your purpose. For example, if your application displays a list of recent messages, you may wish to retrieve and cache enough messages to allow for a responsive interface if the user scrolls beyond the first several messages displayed. The general procedure for performing a full sync operation is as follows:
1. Call messages.list to retrieve the first page of message IDs.
2. Create a batch request of messages.get requests for each of the messages returned by the list request. If your application displays message contents, you should use format=FULL or format=RAW the first time your application retrieves a message and cache the results to avoid additional retrieval operations. If you are retrieving a previously cached message, you should use format=MINIMAL to reduce the size of the response as only the labelIds may change.
3. Merge the updates into your cached results. Your application should store the historyId of the most recent message (the first message in the list response) for future partial synchronization.
Note: You can also perform synchronization using the equivalent Threads resource methods. This may be advantageous if your application primarily works with threads or only requires message metadata.
Partial synchronization
If your application has synchronized recently, you can perform a partial sync using the history.list method to return all history records newer than the startHistoryId you specify in your request. History records provide message IDs and type of change for each message, such as message added, deleted, or labels modified since the time of the startHistoryId. You can obtain and store the historyId of the most recent message from a full or partial sync to provide as a startHistoryId for future partial synchronization operations.
Limitations
History records are typically available for at least one week and often longer. However, the time period for which records are available may be significantly less and records may sometimes be unavailable in rare cases. If the startHistoryId supplied by your client is outside the available range of history records, the API returns an HTTP 404 error response. In this case, your client must perform a full sync as described in the previous section.
From the Gmail API documentation:
https://developers.google.com/gmail/api/guides/sync
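A sketch of the partial-sync-with-fallback logic described above, assuming the google-api-python-client (service is an already-authorized Gmail service object):

from googleapiclient.errors import HttpError

def partial_sync(service, start_history_id):
    # Returns new history records, or None when the history window has
    # expired and a full sync is required (the documented 404 case).
    try:
        resp = service.users().history().list(
            userId='me', startHistoryId=start_history_id).execute()
        return resp.get('history', [])
    except HttpError as e:
        if e.resp.status == 404:
            return None
        raise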

ApplicationError2 and ApplicationError5 when communicating with an external API from App Engine

I have built an application on Google App Engine, in Python 2.7, to connect with another service's API, and in general everything works smoothly. Every now and then I get one of the following two errors:
(<class 'google.appengine.api.remote_socket._remote_socket.error'>, error('An error occured while connecting to the server: ApplicationError: 2 ',), <traceback object at 0x11949c10>)
(<class 'httplib.HTTPException'>, HTTPException('ApplicationError: 5 ',), <traceback object at 0x113a5850>)
The first of these errors (ApplicationError: 2) I interpret as an error on the part of the servers with which I am communicating; however, I've not been able to find any detail on this, nor on whether I am responsible in any way or can fix it.
The second of these errors (ApplicationError: 5) I've found some detail on, and it suggests that the server took too long to respond to my application - however, I've set the timeout to 20s and it fails considerably faster than that.
If anyone could offer links or insight into the errors - specifically what causes the error and what can be done to fix it I'd very much appreciate it.
You get to start using the word "idempotent" in casual conversations and curses :)
The only thing you can do is try the call again, and accept the fact that your initial call may have gone through, only to time out on the response - i.e. if the call actually did something (created a customer order, for example), then after the timeout error you might have to check whether the first request succeeded, so you don't end up with multiple copies of the same order.
Hope that makes sense. FWIW we work with some unfriendly API's and for us, about 80% of our code is dealing with exactly this sort of !##$%.
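To make the "check before retrying" idea concrete, a hypothetical sketch (api, create_order, find_order and client_ref are all stand-ins for whatever the remote service actually exposes; the exception type matches the httplib error from the question):

import httplib

def create_order_idempotent(api, order, client_ref):
    # On a timeout we don't know whether the first call landed, so look
    # the order up by a client-supplied reference before retrying.
    try:
        return api.create_order(order, client_ref=client_ref)
    except httplib.HTTPException:
        existing = api.find_order(client_ref)
        if existing is not None:
            return existing  # the first request went through
        return api.create_order(order, client_ref=client_ref)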
