How to get the thread used by a Python function?

I'm trying out a sample of multithreading code in Python, which I found in this doc page.
The actual code is the following:
import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
I would like the load_url function to print out the id of the thread it is currently using, so I can verify that multithreading is actually working. Of course, if you have better ways to achieve the same goal, please let me know.
Thanks
Edit
I think I just bumped into the answer; this seems to work:
# Retrieve a single page and report the URL and contents
import threading

def load_url(url, timeout):
    print('Using thread {}, looking for url {}'.format(threading.get_ident(), url))
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()
Any feedback on preferred methods is, however, welcome.
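For what it's worth, a minimal variation of the same function (assuming the URLS list and ThreadPoolExecutor setup above) that prints the worker thread's name instead of the numeric id; the pool's workers get names like ThreadPoolExecutor-0_0, which can make the output a bit easier to scan:

import threading
import urllib.request

def load_url(url, timeout):
    # current_thread().name shows which pool worker is handling this call
    print('Thread {} fetching {}'.format(threading.current_thread().name, url))
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

Seeing more than one distinct thread name in the output confirms the work is actually being spread across threads.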

Related

ThreadPoolExecutor not Working Properly with Python Requests

When I request a long list of URLs through ThreadPoolExecutor, I run into a problem where it seems like the requests aren't being sent. I am expecting JSON responses from the URLs.
urls = ["example1.com", "example2.com", "example3.com"] #My list is about 1,500 urls long
def load_url(url):
print("here")
r = requests.get(url)
print(r.url)
print(r.json())
return r
def main():
urls = ["example1.com", "example2.com", "example3.com"] #My list is about 1,500 urls long
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
future_to_url = {executor.submit(
load_url, url): url for url in urls}
for future in concurrent.futures.as_completed(future_to_url):
print("right here")
url = future_to_url[future]
try:
response = future.result()
except (HTTPError, ConnectionError, ConnectTimeout):
print("fail")
else:
print(response.json())
main()
When I execute the above code, only the first URL's request seems to be sent and picked up by as_completed. For the rest, despite printing "here", the correct URLs, and the correct JSON responses (from the load_url function), nothing reaches as_completed. So it keeps printing "here" but not "right here".
My Python version is 3.9.1.
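One possibility worth ruling out (an assumption on my part, not something stated in the question): requests.get has no default timeout, so a single stalled connection can keep its worker thread, and therefore its future, pending indefinitely, which would look exactly like results never reaching as_completed. A minimal sketch of load_url with an explicit timeout:

import requests

def load_url(url):
    print("here")
    # without a timeout, requests.get can block forever on a stalled server
    r = requests.get(url, timeout=10)  # 10 seconds is an arbitrary example value
    print(r.url)
    return r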

How can I use threading with requests? [duplicate]

This question already has answers here:
Asynchronous Requests with Python requests (15 answers)
Closed 3 years ago.
Hello, I am using the requests module and I would like to improve the speed, because I have many URLs, so I suppose I can use threading to get better speed. Here is my code:
import requests

urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]

for url in urls:
    response = requests.get(url)
    value = response.json()
But I don't know how to use requests with threading...
Could you help me please?
Thank you!
Just to add to the answer from bashrc, you can also use it with requests. You don't need to use the urllib.request method.
It would be something like:
import requests
from concurrent import futures

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

with futures.ThreadPoolExecutor(max_workers=5) as executor:  # you can increase the number of workers; it will increase the number of threads created
    res = executor.map(requests.get, URLS)
    responses = list(res)  # executor.map returns a generator; you may want to turn it into a list
What I like to do, however, is to create a function that directly returns the JSON from the response (or the text, if you want to scrape), and use that function in the thread pool:
import requests
from concurrent import futures

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def getData(url):
    res = requests.get(url)
    try:
        return res.json()
    except ValueError:  # res.json() raises ValueError if the body isn't JSON
        return res.text

with futures.ThreadPoolExecutor(max_workers=5) as executor:
    res = executor.map(getData, URLS)
    responses = list(res)  # your list will already be pre-formatted
You can use the concurrent.futures module.
import concurrent.futures
import requests

pool = concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_NUMBER_OF_THREADS)
responses = list(pool.map(lambda x: requests.get(x), urls))
This allows controlled concurrency.
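As a rough usage sketch (with a placeholder urls list), max_workers caps how many requests run at once, and iterating over the map result yields the responses in input order, re-raising any exception from a worker:

import concurrent.futures
import requests

urls = ["http://example.com/", "http://example.org/"]  # placeholder list

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    # results come back in the same order as urls; worker exceptions surface here
    for response in pool.map(requests.get, urls):
        print(response.status_code)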
This is a direct example from the ThreadPoolExecutor documentation:
import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))

Can't get desired results using try/except clause within scrapy

I've written a script in scrapy to make proxied requests using proxies newly generated by the get_proxies() method. I used the requests module to fetch the proxies in order to reuse them in the script. What I'm trying to do is parse all the movie links from its landing page and then fetch the name of each movie from its target page. My script below can rotate proxies.
I know there is an easier way to change proxies, as described in HttpProxyMiddleware, but I would still like to stick to the approach I'm trying here.
website link
This is my current attempt (it keeps using new proxies to fetch a valid response, but every time it gets 503 Service Unavailable):
import scrapy
import random
import requests
from itertools import cycle
from bs4 import BeautifulSoup
from scrapy.crawler import CrawlerProcess

def get_proxies():
    response = requests.get("https://www.us-proxy.org/")
    soup = BeautifulSoup(response.text, "lxml")
    proxy = [':'.join([item.select_one("td").text, item.select_one("td:nth-of-type(2)").text]) for item in soup.select("table.table tbody tr") if "yes" in item.text]
    return proxy

class ProxySpider(scrapy.Spider):
    name = "proxiedscript"
    handle_httpstatus_list = [503]
    proxy_vault = get_proxies()
    check_url = "https://yts.am/browse-movies"

    def start_requests(self):
        random.shuffle(self.proxy_vault)
        proxy_url = next(cycle(self.proxy_vault))
        request = scrapy.Request(self.check_url, callback=self.parse, dont_filter=True)
        request.meta['https_proxy'] = f'http://{proxy_url}'
        yield request

    def parse(self, response):
        print(response.meta)
        if "DDoS protection by Cloudflare" in response.css(".attribution > a::text").get():
            random.shuffle(self.proxy_vault)
            proxy_url = next(cycle(self.proxy_vault))
            request = scrapy.Request(self.check_url, callback=self.parse, dont_filter=True)
            request.meta['https_proxy'] = f'http://{proxy_url}'
            yield request
        else:
            for item in response.css(".browse-movie-wrap a.browse-movie-title::attr(href)").getall():
                nlink = response.urljoin(item)
                yield scrapy.Request(nlink, callback=self.parse_details)

    def parse_details(self, response):
        name = response.css("#movie-info h1::text").get()
        yield {"Name": name}

if __name__ == "__main__":
    c = CrawlerProcess({'USER_AGENT': 'Mozilla/5.0'})
    c.crawl(ProxySpider)
    c.start()
To check whether the request is being proxied, I printed response.meta and got results like this: {'https_proxy': 'http://142.93.127.126:3128', 'download_timeout': 180.0, 'download_slot': 'yts.am', 'download_latency': 0.237013578414917, 'retry_times': 2, 'depth': 0}.
As I've overused the link while checking how proxied requests work within scrapy, I'm getting a 503 Service Unavailable error at the moment, and I can see the phrase DDoS protection by Cloudflare in the response. However, I get a valid response when I try with the requests module, applying the same logic I implemented here.
My earlier question: why can't I get a valid response when (I suppose) I'm using proxies the right way? [solved]
Bounty question: how can I define a try/except clause within my script so that it will try a different proxy once a certain proxy throws a connection error?
According to the scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware docs (and source), the proxy meta key is expected (not https_proxy):
#request.meta['https_proxy'] = f'http://{proxy_url}'
request.meta['proxy'] = f'http://{proxy_url}'
Since scrapy didn't receive a valid meta key, your scrapy application didn't use proxies.
The start_requests() function is just the entry point. On subsequent requests, you would need to resupply this metadata to the Request object.
Also, errors can occur on two levels: the proxy and the target server.
We need to handle bad response codes from both the proxy and the target server. Proxy errors are returned by the middleware to the errback function. The target server's response can be handled during parsing via response.status:
import scrapy
import random
import requests
from itertools import cycle
from bs4 import BeautifulSoup
from scrapy.crawler import CrawlerProcess

def get_proxies():
    response = requests.get("https://www.us-proxy.org/")
    soup = BeautifulSoup(response.text, "lxml")
    proxy = [':'.join([item.select_one("td").text, item.select_one("td:nth-of-type(2)").text]) for item in
             soup.select("table.table tbody tr") if "yes" in item.text]
    # proxy = ['https://52.0.0.1:8090', 'https://52.0.0.2:8090']
    return proxy

def get_random_proxy(proxy_vault):
    random.shuffle(proxy_vault)
    proxy_url = next(cycle(proxy_vault))
    return proxy_url

class ProxySpider(scrapy.Spider):
    name = "proxiedscript"
    handle_httpstatus_list = [503, 502, 401, 403]
    check_url = "https://yts.am/browse-movies"
    proxy_vault = get_proxies()

    def handle_middleware_errors(self, *args, **kwargs):
        # implement middleware error handling here
        print('Middleware Error')
        # retry request with different proxy
        yield self.make_request(url=args[0].request._url, callback=args[0].request._meta['callback'])

    def start_requests(self):
        yield self.make_request(url=self.check_url, callback=self.parse)

    def make_request(self, url, callback, dont_filter=True):
        return scrapy.Request(url,
                              meta={'proxy': f'https://{get_random_proxy(self.proxy_vault)}', 'callback': callback},
                              callback=callback,
                              dont_filter=dont_filter,
                              errback=self.handle_middleware_errors)

    def parse(self, response):
        print(response.meta)
        try:
            if response.status != 200:
                # implement server status code handling here - this loops forever
                print(f'Status code: {response.status}')
                raise
            else:
                for item in response.css(".browse-movie-wrap a.browse-movie-title::attr(href)").getall():
                    nlink = response.urljoin(item)
                    yield self.make_request(url=nlink, callback=self.parse_details)
        except:
            # if anything goes wrong fetching the lister page, try again
            yield self.make_request(url=self.check_url, callback=self.parse)

    def parse_details(self, response):
        print(response.meta)
        try:
            if response.status != 200:
                # implement server status code handling here - this loops forever
                print(f'Status code: {response.status}')
                raise
            name = response.css("#movie-info h1::text").get()
            yield {"Name": name}
        except:
            # if anything goes wrong fetching the detail page, try again
            yield self.make_request(url=response.request._url, callback=self.parse_details)

if __name__ == "__main__":
    c = CrawlerProcess({'USER_AGENT': 'Mozilla/5.0'})
    c.crawl(ProxySpider)
    c.start()

requests.exceptions.MissingSchema: Invalid URL (with bs4)

I am getting this error:
requests.exceptions.MissingSchema: Invalid URL 'http:/1525/bg.png': No schema supplied. Perhaps you meant http://http:/1525/bg.png?
I don't really care why the error happened; I want to be able to capture any Invalid URL errors, issue a message, and proceed with the rest of the code.
Below is my code, where I'm trying to use try/except for that specific error, but it's not working...
# load xkcd page
# save comic image on that page
# follow <previous> comic link
# repeat until last comic is reached
import webbrowser, bs4, os, requests

url = 'http://xkcd.com/1526/'
os.makedirs('xkcd', exist_ok=True)

while not url.endswith('#'):  # - last page
    # download the page
    print('Downloading page %s...' % (url))
    res = requests.get(url)
    res.raise_for_status()
    soup = bs4.BeautifulSoup(res.text, "html.parser")
    # find url of the comic image (<div id="comic"><img src="..."></div>)
    comicElem = soup.select('#comic img')
    if comicElem == []:
        print('Could not find any images')
    else:
        comicUrl = 'http:' + comicElem[0].get('src')
        # download the image
        print('Downloading image... %s' % (comicUrl))
        res = requests.get(comicUrl)
        try:
            res.raise_for_status()
        except requests.exceptions.MissingSchema as err:
            print(err)
            continue
        # save image to folder
        imageFile = open(os.path.join('xkcd', os.path.basename(comicUrl)), 'wb')
        for chunk in res.iter_content(1000000):
            imageFile.write(chunk)
        imageFile.close()
    # get <previous> button url
    prevLink = soup.select('a[rel="prev"]')[0]
    url = 'http://xkcd.com' + prevLink.get('href')

print('Done')
What am I not doing right? (I'm on Python 3.5)
Thanks a lot in advance...
If you don't care about the error (which I see as bad programming), just use a blank except statement that catches all exceptions.
# download the image
print('Downloading image... %s' % (comicUrl))
try:
    res = requests.get(comicUrl)  # moved inside the try block
    res.raise_for_status()
except:
    continue
But on the other hand, if your except block isn't catching the exception, it's because the exception actually happens outside your try block; so move requests.get into the try block and the exception handling should work (if you still need it).
Try this if this type of issue occurs when using a wrong URL.
Solution:
import requests

correct_url = False
url = 'Ankit Gandhi'  # 'https://gmail.com'
try:
    res = requests.get(url)
    correct_url = True
except:
    print("Please enter a valid URL")

if correct_url:
    """
    Do your operation
    """
    print("Correct URL")
Hope this is helpful.
The reason your try/except block isn't catching the exception is that the error is happening at the line
res = requests.get(comicUrl)
which is above the try keyword.
Keeping your code as is, just moving the try block up one line will fix it.
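Concretely, the reordered block would look something like this (a self-contained sketch; the URLs are hypothetical stand-ins for the comicUrl values from the loop in the question):

import requests

comicUrls = ['http:/1525/bg.png', 'https://www.example.com/']  # hypothetical values
for comicUrl in comicUrls:
    try:
        res = requests.get(comicUrl)  # now inside the try block
        res.raise_for_status()
    except requests.exceptions.MissingSchema as err:
        print(err)  # report the malformed URL and keep going
        continue
    print('Fetched %d bytes from %s' % (len(res.content), comicUrl))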

Detect http response encoding with aiohttp

I'm trying to learn how to use asyncio to build an asynchronous web crawler. The following is a crude crawler to test out the framework:
import asyncio, aiohttp
from bs4 import BeautifulSoup

@asyncio.coroutine
def fetch(url):
    with (yield from sem):
        print(url)
        response = yield from aiohttp.request('GET', url)
        response = yield from response.read_and_close()
        return response.decode('utf-8')

@asyncio.coroutine
def get_links(url):
    page = yield from fetch(url)
    soup = BeautifulSoup(page)
    links = soup.find_all('a', href=True)
    return [link['href'] for link in links if link['href'].find('www') != -1]

@asyncio.coroutine
def crawler(seed, depth, max_depth=3):
    while True:
        if depth > max_depth:
            break
        links = yield from get_links(seed)
        depth += 1
        coros = [asyncio.Task(crawler(link, depth)) for link in links]
        yield from asyncio.gather(*coros)

sem = asyncio.Semaphore(5)
loop = asyncio.get_event_loop()
loop.run_until_complete(crawler("http://www.bloomberg.com", 0))
Whilst asyncio seems to be documented quite well, aiohttp seems to have very little documentation, so I'm struggling to work some things out myself.
Firstly, is there a way for us to detect the encoding of the page response?
Secondly, can we request that the connections be kept alive within a session? Or is this True by default, like in requests?
You can look at response.headers['Content-Type'] or use the chardet library for badly formed HTTP responses. The response body is a bytes string.
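A rough sketch of that approach (assuming the server advertises a charset in Content-Type, and falling back to chardet when it doesn't):

import chardet

def decode_body(body, content_type):
    # body is the raw bytes; content_type is e.g. 'text/html; charset=utf-8'
    if 'charset=' in content_type:
        encoding = content_type.split('charset=')[-1].split(';')[0].strip()
    else:
        # guess from the bytes themselves for badly formed responses
        encoding = chardet.detect(body)['encoding'] or 'utf-8'
    return body.decode(encoding, errors='replace')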
For keep-alive connections you should use a connector, like:
connector = aiohttp.TCPConnector(share_cookies=True)

response1 = yield from aiohttp.request('get', url1, connector=connector)
body1 = yield from response1.read_and_close()

response2 = yield from aiohttp.request('get', url2, connector=connector)
body2 = yield from response2.read_and_close()
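For what it's worth, in current aiohttp releases (3.x) the usual way to get connection reuse (keep-alive) is a ClientSession, which owns the connection pool; a minimal async/await sketch of the same two-request pattern, with placeholder URLs:

import asyncio
import aiohttp

async def fetch_two(url1, url2):
    # the session keeps a connection pool, so connections are reused (keep-alive)
    async with aiohttp.ClientSession() as session:
        async with session.get(url1) as response1:
            body1 = await response1.read()
        async with session.get(url2) as response2:
            body2 = await response2.read()
    return body1, body2

asyncio.run(fetch_two('http://example.com/', 'http://example.org/'))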
