Which response goes with which request? - python

I'm using Python with OpenERP 7.
I fire requests to partners using urllib3. Some of these requests may be asynchronous.
So I've built a little asyncore server to wait for responses.
But the thing is, the server cannot know which response goes with which request.
In the content of my request, I have a tag named TransactionID.
So far, I've tried to link responses to requests using this TransactionID.
But the responses are not the same from one partner to another.
So, what I've done is create a list of possible TransactionID tag structures.
This method works, but it's so ugly.
I was hoping for a better, cleaner solution, if someone knows how to achieve that.
Thanks!
Edit:
I think I made a mistake by calling it asynchronous.
The partner gives a synchronous response, but that only confirms that my request is OK.
Later, the partner sends me the actual response on a specific url:port on my server. This is the response I'm talking about. Sorry if I didn't give enough details.
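If the partner responses are XML (the question mentions a TransactionID tag), one cleaner option than a list of possible tag structures is to walk the whole document and match on the local tag name, ignoring namespaces and per-partner nesting. A minimal sketch; the pending_requests registry and the function names are made up for illustration:

import xml.etree.ElementTree as ET

# Hypothetical registry of in-flight requests, keyed by TransactionID
pending_requests = {}

def extract_transaction_id(payload):
    """Return the text of the first TransactionID element found anywhere
    in the document, whatever the partner's nesting or namespace is."""
    root = ET.fromstring(payload)
    for elem in root.iter():
        # elem.tag looks like '{namespace}TransactionID' when namespaced;
        # compare only the local part after the closing brace
        if elem.tag.split('}')[-1] == 'TransactionID':
            return elem.text
    return None

def handle_response(payload):
    tid = extract_transaction_id(payload)
    original_request = pending_requests.pop(tid, None)
    if original_request is None:
        raise LookupError("no pending request for TransactionID %r" % tid)
    return original_request

This replaces the list of tag structures with a single namespace-agnostic lookup; the dict keyed by TransactionID then pairs each incoming response with the request that produced it.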

Related

How can I bypass the 429-error from www.instagram.com?

I'm soliciting you today because I have a problem with Selenium.
My goal is to make a fully automated bot that creates an account with parsed details (mail, password, birth date...). So far, I've managed to almost finish the bot (I just need to access Gmail and get the confirmation code).
My problem is that, despite trying a lot of things, I get a Failed to load resource: the server responded with a status of 429 ()
So, I guess Instagram is blocking me.
How could I bypass this?
The answer is in the description of the HTTP error code. You are being blocked because you made too many requests in a short time.
Reduce the rate at which your bot makes requests and see if that helps. As far as I know there's no way to "bypass" this check by the server.
Check if the response header has a Retry-After value to tell you when you can try again.
A status code of 429 means that you've bombarded Instagram's server too many times, and that is why Instagram has blocked your IP.
This is done mainly to protect against DDoS attacks.
The best thing would be to try again after some time (there might be a Retry-After header in the response).
Also, increase the time interval between requests and cap the number of requests made within a specified window (say, one hour).
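For a plain HTTP client, honouring Retry-After looks roughly like this. A sketch using the requests library; the function name and retry policy are illustrative, and it won't help inside a Selenium-driven browser:

import time
import requests

def get_with_backoff(url, max_retries=5):
    """GET a URL, backing off politely whenever the server answers 429."""
    delay = 1
    for _ in range(max_retries):
        resp = requests.get(url)
        if resp.status_code != 429:
            return resp
        # Prefer the server's hint (only the delta-seconds form of
        # Retry-After is handled here); otherwise back off exponentially
        wait = int(resp.headers.get("Retry-After", delay))
        time.sleep(wait)
        delay *= 2
    raise RuntimeError("still rate-limited after %d retries" % max_retries)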
The Retry-After header is the best practice. However, there's no such response header in this scenario.

Is there a way to maintain a session through links in python?

I am creating a project in Python that scrapes websites and sends links to customers. I currently have functioning versions that simply relay information, but it would be much better if I could make things easier for my users by sending POST requests to the server first. For example, I currently send my users a link to a certain product. I want to know whether, instead, I can perform POST requests in my program and then send them a link that is part of the same session in which that POST has already been done (apologies for poor use of terms). Basically, can I complete actions for them and hand over the result through a link?
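For reference, within a single Python program the requests library can carry a session across calls. A minimal sketch, with made-up URLs and form fields, assuming the target site uses cookie-based sessions:

import requests

# Placeholder URLs and form fields, purely illustrative
session = requests.Session()

# The POST stores its cookies (e.g. a session ID) in the Session object
session.post("https://example.com/cart/add", data={"product_id": "1234"})

# Later GETs through the same Session reuse those cookies automatically
page = session.get("https://example.com/cart")

The catch is that this session lives in a cookie jar held by your program, not in the link itself, so a plain URL you send to a customer won't carry it unless the site encodes the session in the URL (which few sites do, for security reasons).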

How Will API's interpret Calls from my Website? Python/Django Website Project

I'm building a website using Django (not sure if this is relevant) and Python, which allows players of a video game to get their game histories. This involves pulling information from the game's API, using a function like the one below, where url is the endpoint URL for the API:
import requests

def API_Call(url):
    # dictionary to hold extra headers
    HEADERS = {"X-API-Key": 'myapikey'}
    # make request
    try:
        r = requests.get(url, headers=HEADERS, timeout=5)
        r.raise_for_status()  # raise HTTPError on 4xx/5xx responses
        return r.json()
    except requests.exceptions.Timeout:
        return {"ErrorCode": "Timeout"}
    except requests.exceptions.HTTPError:
        return {"ErrorCode": "HTTPError"}
    # RequestException is the base class of the two above,
    # so it must be caught last or the others are unreachable
    except requests.exceptions.RequestException:
        return {"ErrorCode": "Generic Request Exception"}
The API I call has rate limits imposed (something like 200 requests per 10 minutes from a single IP address), and I was hoping for some direction or help in understanding how this will work. That is, if I have 10 users log onto the website at the same time and try to pull histories, will the API see all the requests as coming from the same IP address? Or will it see them coming from 10 different IP addresses? If the former, is there a way to have the user's computer make the requests?
Sorry if the above question is too vague; if other specific information would help, please let me know. I'm very new to website-building, and I'm really just hacking it together so far, figuring out piece by piece what I need to accomplish various goals. I'm also still in a very basic testing mode and have not completely figured out where/how to actually launch the website (if that matters).
If your webserver makes calls to the game API, then the server running the API will see all those requests as coming from one IP address: the address of your server. It doesn't matter whether those requests were performed in a cron job or triggered by a dozen different users, since it is always your server that communicates with the game servers.
There is a way to make a user's computer do the request, but it gets a bit hacky. You could use an AJAX request, but that will only work if the API server responds with the right headers to allow cross-domain AJAX calls. Otherwise you'll have to hack something together using iframes or fake images, and things will become ugly quite quickly.
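One common way to live within the limit from a single server IP is to cache API responses so that repeated lookups of the same history don't each cost a call. A sketch using Django's cache framework; API_Call is the function above, and the timeout is an arbitrary choice:

from django.core.cache import cache

def cached_api_call(url, timeout=600):
    """Serve repeated requests for the same endpoint from the cache,
    so ten users asking for the same history cost one API call."""
    result = cache.get(url)
    if result is None:
        result = API_Call(url)
        # only cache successful responses
        if "ErrorCode" not in result:
            cache.set(url, result, timeout)
    return result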

Django Views and Urls

This is a bit long so please bear with me.....
I am in the middle of building an android application. I have built the client app, now I am working on the server.
I have decided to use Django for the server. Though I have already decided on the data structures, I am currently stuck on how I am supposed to send different kinds of requests to the server and how the server is supposed to handle each of them differently.
For example:
- A request could be registering a new user and storing his credentials.
- Another request could be a user liking or disliking a comment.
- ... and there could be a few more.
One way I can think of is to first have a separate Django view for each kind of request and then attach a Django URL to it. From the client app, a particular kind of request is then made at its specific URL; once it is received at the server, Django will automatically direct it to its view, which will take the desired actions.
Please let me know if there are any better ways to do it.
Yes, that is exactly how to do it.
You probably want to look into Django REST framework for this.
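A minimal sketch of that layout; the URL and view names are made up, and in a real project Django REST framework's serializers and routers would replace most of this:

# urls.py -- one URL per kind of request
from django.urls import path  # Django 2.0+; use url() on older versions
from . import views

urlpatterns = [
    path('register/', views.register_user),
    path('comments/<int:comment_id>/vote/', views.vote_comment),
]

# views.py -- one view per kind of request
from django.http import JsonResponse
from django.views.decorators.http import require_POST

@require_POST
def register_user(request):
    # create the user from request.POST and store the credentials
    return JsonResponse({'status': 'registered'})

@require_POST
def vote_comment(request, comment_id):
    # record the like/dislike for the given comment
    return JsonResponse({'status': 'vote recorded', 'comment': comment_id})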

Client Digest Authentication Python with URLLIB2 will not remember Authorization Header Information

I am trying to use Python to write a client that connects to a custom HTTP server that uses digest authentication. I can connect and pull the first request without problem. Using tcpdump (I am on Mac OS X; I am both a Mac and a Python noob) I can see that the first request is actually two HTTP requests, as you would expect if you are familiar with RFC 2617. The first results in the 401 UNAUTHORIZED. The header information sent back from the server is then correctly used to generate the headers for a second request with some custom Authorization header values, which yields a 200 OK response and the payload.
Everything is great. My HTTPDigestAuthHandler opener is working, thanks to urllib2.
In the same program I attempt to request a second, different page from the same server. I expect, per the RFC, that the tcpdump will show only one request this time, using almost all the same Authorization header information (nc should increment).
Instead it starts from scratch: it first gets the 401 and then regenerates the information needed for a 200.
Is it possible with urllib2 to have subsequent requests with digest authentication recycle the known Authorization header values and make only one request?
[Re-read that a couple of times until it makes sense; I am not sure how to put it any more plainly.]
Google has yielded surprisingly little, so I guess not. I looked at the code for urllib2.py and it's really messy (with comments like "This isn't a fabulous effort"), so I wouldn't be shocked if this were a bug. I noticed that my Connection header is set to close, and even if I set it to keep-alive, it gets overwritten. That led me to keepalive.py, but that didn't work for me either.
Pycurl won't work either.
I can hand-code the entire interaction, but I would like to piggyback on existing libraries where possible.
In summary: is it possible, with urllib2 and digest authentication, to get two pages from the same server with only three HTTP requests executed (two for the first page, one for the second)?
If you happen to have tried this before and already know it's not possible, please let me know. If you have an alternative, I am all ears.
Thanks in advance.
Although it's not available out of the box, urllib2 is flexible enough to let you add it yourself. Subclass HTTPDigestAuthHandler, hack it (the retry_http_digest_auth method, I think) to remember the authentication information, and define an http_request(self, request) method that uses it for all subsequent requests (adding the Authorization header).
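A rough sketch of that idea (Python 2 urllib2; the class name is made up, and this is untested scaffolding rather than a drop-in fix):

import urllib2

class PersistentDigestAuthHandler(urllib2.HTTPDigestAuthHandler):
    """Remember the digest challenge from the first 401 and authorize
    subsequent requests pre-emptively, avoiding the extra round trip."""

    def __init__(self, *args, **kwargs):
        urllib2.HTTPDigestAuthHandler.__init__(self, *args, **kwargs)
        self.saved_challenge = None

    def retry_http_digest_auth(self, req, auth):
        # keep the parsed challenge around for later requests
        token, challenge = auth.split(' ', 1)
        self.saved_challenge = urllib2.parse_keqv_list(
            urllib2.parse_http_list(challenge))
        return urllib2.HTTPDigestAuthHandler.retry_http_digest_auth(
            self, req, auth)

    def http_request(self, req):
        # authorize up front once we have a challenge; get_authorization()
        # increments the nonce count (nc) for each reuse of the nonce
        if self.saved_challenge and not req.has_header('Authorization'):
            auth = self.get_authorization(req, self.saved_challenge)
            if auth:
                req.add_unredirected_header('Authorization',
                                            'Digest %s' % auth)
        return req

    https_request = http_request

The server may still reject the remembered nonce once it expires, in which case the normal 401 path runs again and refreshes the stored challenge.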
