Python requests find proxy latency

I am trying to test the latency of a proxy by pinging a site while using a proxy with a login. I know requests easily supports proxies and was wondering if there is a way to ping/test latency to a site through this. I am open to other methods as well, as long as they support a proxy with a login. Here is an example of my proxy integration with requests:
import requests

proxy = {'https': 'https://USER:PASS@IP:PORT'}
requests.get('https://www.google.com/', proxies=proxy)
How can I make a program to test the latency of a proxy with a login to a site?
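Since requests records the time between sending the request and receiving the response headers on `response.elapsed`, one way to approximate proxy latency is a sketch like the following. The helper name and the proxy values are hypothetical; note that credentials are separated from the host with `@` (not `#`), and the keyword argument is `proxies`:

```python
import requests

def build_proxies(user, password, host, port):
    # Credentials go before an "@" in the proxy URL (values are placeholders).
    proxy_url = f"http://{user}:{password}@{host}:{port}"
    return {"http": proxy_url, "https": proxy_url}

def measure_latency(url, proxies, timeout=10):
    # response.elapsed measures from sending the request until the
    # response headers arrive, so it includes the proxy hop.
    response = requests.get(url, proxies=proxies, timeout=timeout)
    return response.elapsed.total_seconds()

# Usage (network call, shown commented out; proxy address is hypothetical):
# proxies = build_proxies("USER", "PASS", "1.2.3.4", 8080)
# print(measure_latency("https://www.google.com/", proxies))
```

Averaging several calls to `measure_latency` would give a steadier estimate than a single request.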

Related

AWS API Gateway 301 when call Instagram

I'm trying to fetch some data from Instagram, and to avoid IP rate limits I want to use AWS API Gateway proxies. I'm using a Python library, requests_ip_rotator, to manage my gateway, and it works well: my requests to a few other websites get a 200. But when I make a request to Instagram, all my requests are redirected with an HTTP 301 response.
Here is my code, pretty simple, you can remove the mount to check that the request works well without the gateway setup.
import requests
from requests_ip_rotator import ApiGateway

gateway = ApiGateway("https://www.instagram.com/", regions=['eu-west-3'],
                     access_key_id=AWS_ID, access_key_secret=AWS_KEY)
gateway.start()

session = requests.Session()
session.mount("https://www.instagram.com/", gateway)

response = session.get("https://www.instagram.com/neymarjr/feed")
print(response)

gateway.shutdown()
Hope someone can help me!
If you need more information, don't hesitate to ask!
And feel free to suggest a solution for mass scraping on Instagram =)
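A first diagnostic step is to stop requests from following the redirect and look at where the 301 points (for example, a `Location` header aimed at Instagram's login page usually means the request was flagged, not misrouted). A minimal sketch, with the helper being hypothetical:

```python
import requests

def redirect_target(response):
    # Return the Location header of a redirect response, or None otherwise.
    if response.is_redirect or response.is_permanent_redirect:
        return response.headers.get("Location")
    return None

# Usage (network call, shown commented out):
# session = requests.Session()
# resp = session.get("https://www.instagram.com/neymarjr/feed",
#                    allow_redirects=False)
# print(resp.status_code, redirect_target(resp))
```

Inspecting the target of the 301 tells you whether the gateway is the problem or whether Instagram is demanding cookies or login.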

Is there a way to download, with Python using requests, a CSV whose URL is of the form 'blob:https'?

I would like to download the state-of-industry CSV from OpenTable with a Python bot,
but the URL of the CSV is of the form "blob:https" and I could not use the requests library.
What can I do to get the correct URL, or to download it using Python and the blob URL? I can do it with Selenium, but I would rather use the requests library.
To get the correct URL, try running a proxy like mitmproxy (https://mitmproxy.org),
or software like Burp Suite, to intercept the traffic between you (the client) and the website's server. You will then see all the HTTP requests, with their methods, URLs,
and body data.
That should help you build your bot. See:
Intercepting HTTP Traffic with mitmproxy
Intercepting HTTP Traffic with Burp Suite
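The underlying reason the download fails is that `blob:` URLs are object URLs created inside the browser with `URL.createObjectURL`; they only exist in that browser session and cannot be fetched over the network. The data behind them arrived via an earlier, ordinary HTTP request, which is what the interception above uncovers. A small sketch (the CSV endpoint shown is hypothetical; the real one comes from mitmproxy or the browser's Network tab):

```python
def is_blob_url(url):
    # blob: URLs exist only inside the browser that created them,
    # so requests cannot download them directly.
    return url.lower().startswith("blob:")

# Once the real endpoint is found via interception, fetch it directly
# (URL below is hypothetical):
# import requests
# resp = requests.get("https://example.com/state-of-industry.csv")
# with open("data.csv", "wb") as f:
#     f.write(resp.content)
```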

How to use a proxy with the Zillow API via Python

I am trying to make requests to the Zillow API. However, I want to use a proxy.
import zillow
api = zillow.ValuationApi()
data = api.GetSearchResults(key, address, postal_code)
Is there any method to have my requests use my predefined proxy?
Thanks
Hi, you can set the proxy using environment variables, as below:
HTTP_PROXY="http://<proxy server>:<port>"
HTTPS_PROXY="http://<proxy server>:<port>"
You can read more on the requests library's proxy settings page.
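Because the zillow package makes its HTTP calls through requests, and requests honours these environment variables by default, the variables can also be set from inside the script before the API is used. The proxy address below is a placeholder:

```python
import os

# Hypothetical proxy address; requests (used internally by the zillow
# package) picks these variables up automatically for each scheme.
os.environ["HTTP_PROXY"] = "http://10.0.0.1:3128"
os.environ["HTTPS_PROXY"] = "http://10.0.0.1:3128"

# Any zillow.ValuationApi() calls made after this point
# will be routed through the proxy.
```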

urllib2: https to target via http proxy

I am using a proxy server to connect to several target servers. Some of the target servers expect http and others expect https. My http requests work swimmingly, but urllib2 ignores the proxy handler on the https requests and sends the requests directly to the target server.
I've tried a number of different things but here is one reasonably concise attempt:
import cookielib
import urllib2

cookie_handler = urllib2.HTTPCookieProcessor(cookielib.LWPCookieJar())
proxies = {'http': 'http://123.456.78.9/',
           'https': 'http://123.45.78.9/'}
proxy_handler = urllib2.ProxyHandler(proxies)
url_opener = urllib2.build_opener(proxy_handler, cookie_handler)
request = urllib2.Request('https://example.com')
response = url_opener.open(request)
I understand that urllib2 has had the ability to send https requests to a proxy server since Python 2.6.3, but I can't seem to get it to work. I'm using 2.7.3.
Thanks for any advice you can offer.
UPDATE: The code above does work. I'm not certain why it wasn't working when I asked this question. Most likely, I had a typo in the https proxy URL.
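For comparison, the same setup is shorter with the requests library, which tunnels HTTPS through the proxy via HTTP CONNECT automatically. A minimal sketch; the proxy address is a placeholder:

```python
import requests

# One entry per scheme, as with urllib2.ProxyHandler above.
# Hypothetical proxy address:
proxies = {"http": "http://123.45.78.9:8080",
           "https": "http://123.45.78.9:8080"}

# A Session persists cookies across requests, playing the role of
# the cookie handler in the urllib2 version.
session = requests.Session()

# Network call, shown commented out:
# response = session.get("https://example.com", proxies=proxies)
```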

Python urllib proxy

I'm trying to fetch some urls via urllib and mechanize through my proxy.
With mechanize I try the following:
from mechanize import Browser

br = Browser()
br.set_proxies({"http": "MYUSERNAME:*******@itmalsproxy.italy.local:8080"})
br.open("http://www.example.com/")
I get the following error:
httperror_seek_wrapper: HTTP Error 407: Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy service is denied.
As the proxy, the username and the password are correct, what could be the problem?
Maybe the proxy is using NTLM authentication?
If that is the case, you can try using the NTLM Authorization Proxy Server (see also this answer).
You might get more info from the response headers:
print br.response().info()
When your web browser uses a proxy server to surf the web from within your local
network, you may be required to authenticate yourself to the proxy. Google ntlmaps.
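One way to confirm the NTLM guess is to look at the `Proxy-Authenticate` header in the 407 response (visible via `br.response().info()` above): an NTLM-protected proxy advertises the scheme there. A small hypothetical helper for that check:

```python
def wants_ntlm(proxy_authenticate):
    # A 407 response from an NTLM proxy carries a header such as
    # "Proxy-Authenticate: NTLM" (possibly alongside other schemes).
    if not proxy_authenticate:
        return False
    return any(scheme.strip().lower().startswith("ntlm")
               for scheme in proxy_authenticate.split(","))
```

If this returns True for your proxy's header, plain user:password credentials will not work and an NTLM-aware intermediary such as ntlmaps is the usual workaround.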
